1,800
<ASSISTANT_TASK:>
Python Code:
import numpy as np
xs = np.arange(10, 14)
ys = np.arange(20, 25)
print(xs, ys)
n = len(xs)
m = len(ys)
indices = np.arange(n)
array = np.tile(ys, (n, 1))
print(array)
[np.random.shuffle(array[i]) for i in range(n)]
print(array)
counts = np.full_like(xs, m)
print(counts)
weights = np.array(counts, dtype=float)
weights /= np.sum(weights)
print(weights)
i = np.random.choice(indices, p=weights)
print(i)
counts[i] -= 1
pair = xs[i], array[i, counts[i]]
array[i, counts[i]] = -1
print(pair)
print(counts)
print(array)
weights = np.array(counts, dtype=float)
weights[i] = 0
weights /= np.sum(weights)
print(weights)
i = np.random.choice(indices, p=weights)
counts[i] -= 1
pair = xs[i], array[i, counts[i]]
array[i, counts[i]] = -1
print(pair)
print(counts)
print(array)
def generate_pairs(xs, ys):
    n = len(xs)
    m = len(ys)
    indices = np.arange(n)
    array = np.tile(ys, (n, 1))
    [np.random.shuffle(array[i]) for i in range(n)]
    counts = np.full_like(xs, m)
    i = -1
    for _ in range(n * m):
        weights = np.array(counts, dtype=float)
        if i != -1:
            weights[i] = 0
        weights /= np.sum(weights)
        i = np.random.choice(indices, p=weights)
        counts[i] -= 1
        pair = xs[i], array[i, counts[i]]
        array[i, counts[i]] = -1
        yield pair

for pairs in generate_pairs(xs, ys):
    print(pairs)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here are some example values for x and y. I assume that there are no repeated values in x.
Step2: indices is the list of indices I'll choose from at random
Step3: Now I'll make an array to hold the values of y
Step4: And shuffle the rows independently
Step5: I'll keep track of how many unused ys there are in each row
Step6: Now I'll choose a row, using the counts as weights
Step7: i is the row I chose, which corresponds to a value of x.
Step8: Now I decrement the counter associated with i, assemble a pair by choosing a value of x and a value of y.
Step9: We can check that the counts got decremented
Step10: And one of the values in array got used
Step11: The next time through is almost the same, except that when we assemble the weights, we give zero weight to the index we just used.
Step12: Everything else is the same
Step13: Now we can wrap all that up in a function, using a special value for i during the first iteration.
Step14: And here's how it works
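As a cross-check on the generator above, here is a simpler sketch that builds every (x, y) pair up front and shuffles them. It yields the same set of pairs, though unlike the weighted scheme it does not avoid drawing the same x twice in a row. `generate_pairs_flat` is a hypothetical name, not part of the original notebook.

```python
import random

import numpy as np

def generate_pairs_flat(xs, ys, seed=0):
    # build the full cross product, then yield it in random order
    pairs = [(x, y) for x in xs for y in ys]
    random.Random(seed).shuffle(pairs)
    yield from pairs

xs = np.arange(10, 14)
ys = np.arange(20, 25)
pairs = list(generate_pairs_flat(xs, ys))
print(len(pairs))  # n * m pairs in total
```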
1,801
<ASSISTANT_TASK:>
Python Code:
# we'll use the pythonic pyplot interface
import matplotlib.pyplot as plt
# necessary for the notebook to render the plots inline
%matplotlib inline
import numpy as np
np.random.seed(42)
x = np.linspace(0, 40, 1000)
y = np.sin(np.linspace(0, 10*np.pi, 1000))
y += np.random.randn(len(x))
plt.plot(x, y)
# this gives us a style and color palette similar to ggplot2
plt.style.use('ggplot')
plt.plot(x, y)
# import seaborn's helpful functions
# (seaborn.apionly was removed in seaborn 0.9; a plain import no longer restyles plots)
import seaborn as sns
# importing seaborn can sometimes reset matplotlib's style to default
plt.style.use('ggplot')
# this will remove the noisy grids the 'ggplot' style gives
sns.set_style('ticks')
plt.plot(x, y)
plt.plot(x, y)
sns.despine()
plt.plot(x, y)
sns.despine(offset=10)
fig = plt.figure(figsize=(4, 2))
ax = fig.add_subplot(1,1,1)
ax.plot(x, y)
sns.despine(offset=10, ax=ax)
# let's add some axes labels to boot
ax.set_ylabel(r'displacement ($\AA$)')
ax.set_xlabel('time (ns)')
fig = plt.figure(figsize=(4, 2))
ax = fig.add_subplot(1,1,1)
ax.plot(x, y)
sns.despine(offset=10, ax=ax)
# let's add some axes labels to boot
ax.set_ylabel(r'displacement ($\AA$)')
ax.set_xlabel('time (ns)')
fig.savefig('testfigure.pdf')
fig.savefig('testfigure.png', dpi=300)
from IPython.display import Image
Image(filename='testfigure.png')
fig = plt.figure(figsize=(4, 2))
ax = fig.add_subplot(1,1,1)
ax.plot(x, y)
sns.despine(offset=10, ax=ax)
# let's add some axes labels to boot
ax.set_ylabel(r'displacement ($\AA$)')
ax.set_xlabel('time (ns)')
# and now we'll also refine the y-axis ticks a bit
ax.set_ylim(-4.5, 4.5)
ax.set_yticks(np.linspace(-4, 4, 5))
ax.set_yticks(np.linspace(-3, 3, 4), minor=True)
plt.tight_layout()
fig.savefig('testfigure.pdf')
fig.savefig('testfigure.png', dpi=300)
Image(filename='testfigure.png')
# we can override matplotlib's settings; for example, changing the font size
plt.rcParams['font.size'] = 8
fig = plt.figure(figsize=(4, 2))
ax = fig.add_subplot(1,1,1)
ax.plot(x, y)
sns.despine(offset=10, ax=ax)
# let's add some axes labels to boot
ax.set_ylabel(r'displacement ($\AA$)')
ax.set_xlabel('time (ns)')
# and now we'll also refine the y-axis ticks a bit
ax.set_ylim(-4.5, 4.5)
ax.set_yticks(np.linspace(-4, 4, 5))
ax.set_yticks(np.linspace(-3, 3, 4), minor=True)
plt.tight_layout()
fig.savefig('testfigure.pdf')
fig.savefig('testfigure.png', dpi=300)
Image(filename='testfigure.png')
from matplotlib import gridspec
fig = plt.figure(figsize=(7, 3))
gs = gridspec.GridSpec(1, 2, width_ratios=[3,1] )
# plot the timeseries
ax0 = plt.subplot(gs[0])
ax0.plot(x, y)
# let's add some axes labels to boot
ax0.set_ylabel(r'displacement ($\AA$)')
ax0.set_xlabel('time (ns)')
# and now we'll also refine the y-axis ticks a bit
ax0.set_ylim(-4.5, 4.5)
ax0.set_yticks(np.linspace(-4, 4, 5))
ax0.set_yticks(np.linspace(-3, 3, 4), minor=True)
# plot the distribution
ax1 = plt.subplot(gs[1])
ax1.hist(y, histtype='step', bins=40, density=True, orientation='horizontal')
# this will remove the grid and the ticks on the x-axis
ax1.set_xticks([])
ax1.grid(False)
ax1.set_xlim((0,.5))
ax1.set_ylim(-4.5, 4.5)
ax1.set_yticks(np.linspace(-3, 3, 4), minor=True)
sns.despine(ax=fig.axes[0], offset=10)
sns.despine(ax=fig.axes[1], bottom=True)
plt.tight_layout()
sns.palplot(sns.color_palette())
from matplotlib import gridspec
fig = plt.figure(figsize=(7, 3))
gs = gridspec.GridSpec(1, 2, width_ratios=[3,1] )
# plot the timeseries
ax0 = plt.subplot(gs[0])
ax0.plot(x, y, color=sns.color_palette()[1])
# let's add some axes labels to boot
ax0.set_ylabel(r'displacement ($\AA$)')
ax0.set_xlabel('time (ns)')
# and now we'll also refine the y-axis ticks a bit
ax0.set_ylim(-4.5, 4.5)
ax0.set_yticks(np.linspace(-4, 4, 5))
ax0.set_yticks(np.linspace(-3, 3, 4), minor=True)
# plot the distribution
ax1 = plt.subplot(gs[1])
ax1.hist(y, histtype='step', bins=40, density=True, orientation='horizontal', color=sns.color_palette()[1])
# this will remove the grid and the ticks on the x-axis
ax1.set_xticks([])
ax1.grid(False)
ax1.set_xlim((0,.5))
ax1.set_ylim(-4.5, 4.5)
ax1.set_yticks(np.linspace(-3, 3, 4), minor=True)
sns.despine(ax=fig.axes[0], offset=10)
sns.despine(ax=fig.axes[1], bottom=True)
plt.tight_layout()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The default plots created with matplotlib aren't bad, but they do have elements that are, at best, unnecessary. At worst, these elements detract from the display of quantitative information. We want to change this. First, let's change matplotlib's style with a built-in style sheet.
Step2: This produces something a bit more pleasing to the eye, with what probably amounts to better coloration. It replaced the box with a grid, however. Although this is useful for some plots, in particular panels of plots in which one needs to compare across different plots, it is often just unnecessary noise.
Step3: It almost looks like we took two steps forward and one step back
Step4: We can also go further, moving the axes a bit so they distract even less from the data, which should be front-and-center.
Step5: Now let's do some refining. Figures for exploratory work can be any size that's convenient, but when making figures for a publication, you must consider the real size of the figure in the final printed form. Considering a page in the U.S. is typically 8.5 x 11 inches, a typical figure should be no more than 4 inches wide to fit in a single column of the page. We can adjust figure sizes by giving matplotlib a bit more detail
Step6: We added some axes labels, too. Because this is a timeseries, we deliberately made the height of the figure less than the width. This is because timeseries are difficult to interpret when the variations with time are smashed together. Tufte's general rule is that no line in the timeseries be greater than 45$^\circ$; we would have a hard time doing that here with such noisy data, but going wider than tall is a step in the right direction.
Step7: We can view the resulting PNG directly
Step8: Woah...something's wrong. The figure doesn't fit in the frame! This is because the figure elements were adjusted after the figure object was created, and so some of these elements, including the axis labels, are beyond the figure's edges. We can usually fix this with a call to plt.tight_layout to ensure everything fits in the plots we write out.
Step9: Okay...better. But it looks like the labels are a bit too big to make these dimensions work well. We can adjust these directly by changing matplotlib's settings. These are the same settings you might have set defaults for in your matplotlibrc file.
Step10: Much better!
Step11: Don't like the color? We can use seaborn to get at different colors in the color palette
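Step9's rcParams tweak is global for the rest of the session; when you only want it for one figure, matplotlib's `rc_context` scopes the change. A minimal sketch, with the `Agg` backend chosen only so the snippet runs headless:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs outside a notebook
import matplotlib.pyplot as plt

default_size = plt.rcParams['font.size']
# the override applies only inside the context manager
with plt.rc_context({'font.size': 8}):
    size_inside = plt.rcParams['font.size']
size_after = plt.rcParams['font.size']
print(size_inside, size_after)
```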
1,802
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import decomposition
from sklearn import datasets
tabela = pd.read_csv("exemplo_7/iris.data",header=None,sep=',')
tabela
tabela.columns=['sepal_len', 'sepal_wid', 'petal_len', 'petal_wid', 'class']
tabela
tabela.tail()
X = tabela.iloc[:, 0:4].values
y = tabela.iloc[:, 4].values
X
y
nomes = list(set(y))
tabela.columns
colors = ['navy', 'turquoise', 'darkorange']
fig,ax = plt.subplots(2,2)
#n, bins, patches = P.hist(x, 10, normed=1, histtype='bar',
# color=['crimson', 'burlywood', 'chartreuse'],
# label=['Crimson', 'Burlywood', 'Chartreuse'])
# Coluna 0
dados_sepal_len = [X[y==nomes[0],0], X[y==nomes[1],0], X[y==nomes[2],0]]
n, bins, patches = ax[0,0].hist(dados_sepal_len,color=colors, label=list(set(y)))
ax[0,0].set_title('Sepal Length (cm)')
# Coluna 1
dados_sepal_wid = [X[y==nomes[0],1], X[y==nomes[1],1], X[y==nomes[2],1]]
ax[0,1].hist(dados_sepal_wid,color=colors, label=list(set(y)))
#ax[0,1].legend()
ax[0,1].set_title('Sepal Width (cm)')
# Coluna 2
dados_sepal_wid = [X[y==nomes[0],2], X[y==nomes[1],2], X[y==nomes[2],2]]
ax[1,0].hist(dados_sepal_wid,color=colors, label=list(set(y)))
#ax[1,0].legend()
ax[1,0].set_title('Petal Length (cm)')
# Coluna 3
dados_sepal_wid = [X[y==nomes[0],3], X[y==nomes[1],3], X[y==nomes[2],3]]
ax[1,1].hist(dados_sepal_wid,color=colors, label=list(set(y)))
#ax[1,1].legend()
ax[1,1].set_title('Petal Width (cm)')
fig.legend(patches, list(set(y)))
pca = decomposition.PCA()
print(pca)
pca.fit(X)
print(pca.explained_variance_ratio_)
Xnew = pca.transform(X)
print(X)
print(Xnew)
fig,ax = plt.subplots()
plt.cla()
ax.scatter(Xnew[:, 0], Xnew[:, 1], cmap=plt.cm.Spectral)
plt.show()
y[0]
list(set(y))
Xnew[y=='Iris-setosa']
fig2, ax2 = plt.subplots()
for color, i, name in zip(colors, [0, 1, 2], list(set(y))):
    ax2.scatter(Xnew[y == name, 0], Xnew[y == name, 1], color=color, label=name)
ax2.legend(loc='best', shadow=False, scatterpoints=1)
ax2.set_title('PCA of IRIS dataset')
colors
from mpl_toolkits.mplot3d import Axes3D
fig3d = plt.figure(3)
ax = Axes3D(fig3d)
for color, i, name in zip(colors, [0, 1, 2], list(set(y))):
    ax.scatter(Xnew[y == name, 0], Xnew[y == name, 1], Xnew[y == name, 2], color=color, label=name)
ax.legend(loc='best', shadow=False, scatterpoints=1)
plt.show()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's separate the data into the measurements and the species.
Step2: Now let's compute the principal component decomposition
Step3: pca is now a reference to the function that computes the PCA of X. To actually compute the principal components, we do
Step4: From here on, the pca object is where our information is stored. For example, to check the eigenvalues (variances) of our dataset, we can do
Step5: We can then see that the first principal component explains 92% of the data.
Step6: Now we want to visualize these data, so we need to choose how many components to represent. To show two components, we do
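The explained-variance ratios that `pca.explained_variance_ratio_` reports can be reproduced with a plain numpy SVD, which makes the computation concrete. `pca_explained_variance_ratio` is a hypothetical helper for illustration, not sklearn API:

```python
import numpy as np

def pca_explained_variance_ratio(X):
    # center the data, take singular values, normalize their squares
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s ** 2
    return var / var.sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 4))
ratios = pca_explained_variance_ratio(X)
print(ratios)
```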
1,803
<ASSISTANT_TASK:>
Python Code:
def hex_key(num):
    primes = ('2', '3', '5', '7', 'B', 'D')
    total = 0
    for i in range(0, len(num)):
        if num[i] in primes:
            total += 1
    return total
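A quick usage sketch (the function is repeated here so the snippet is self-contained): the hex digits 2, 3, 5, 7, B, and D are the primes being counted.

```python
def hex_key(num):
    # count how many characters of the hex string are prime digits
    primes = ('2', '3', '5', '7', 'B', 'D')
    total = 0
    for digit in num:
        if digit in primes:
            total += 1
    return total

print(hex_key('AB6C7D'))  # B, 7 and D are the prime digits here
```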
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
1,804
<ASSISTANT_TASK:>
Python Code:
from CesiumWidget import CesiumWidget
from IPython import display
import numpy as np
cesium = CesiumWidget()
cesium
cesium.kml_url = '/nbextensions/CesiumWidget/cesium/Apps/SampleData/kml/gdpPerCapita2008.kmz'
for lon in np.arange(0, 360, 0.5):
    cesium.zoom_to(lon, 0, 36000000, 0, -90, 0)
cesium._zoomto
cesium.fly_to(14, 90, 20000001)
cesium._flyto
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create widget object
Step2: Display the widget
Step3: Cesium is packed with example data. Let's look at some GDP per captia data from 2008.
Step4: Example zoomto
Step5: Example flyto
1,805
<ASSISTANT_TASK:>
Python Code:
import subprocess
import os
import sys
from dask_jobqueue import PBSCluster
from distributed import Client, progress
from datetime import datetime, timedelta
from pkg_resources import load_entry_point
from distributed import progress
def exec_adi(info_dict):
    """This function will call adi_cmac2 from within Python. It takes in a dictionary where the inputs to adi_cmac2 are
    stored.

    Parameters
    ----------
    info_dict: dict
        A dictionary with the following keywords:
        'facility' = The facility marker (i.e. 'sgp', 'nsa', etc.)
        'site' = The site marker (i.e. i4, i5, i6)
        'start_date' = The start date as a string formatted YYYYMMDD
        'end_date' = The end date as a string formatted YYYYMMDD
    """
    facility = info_dict['facility']
    site = info_dict['site']
    start_date = info_dict['start_date']
    end_date = info_dict['end_date']
    # Change this directory to where you want your adi logs stored
    logs_dir = "/home/rjackson/adi_logs"
    # Set the path to your datastream here!
    os.environ["DATASTREAM_DATA"] = "/lustre/or-hydra/cades-arm/rjackson/"
    logs_dir += "/" + site + start_date + "_" + end_date
    if not os.path.isdir(logs_dir):
        os.makedirs(logs_dir)
    os.environ["LOGS_DATA"] = logs_dir
    os.environ["PROJ_LIB"] = "/home/rjackson/anaconda3/envs/adi_env3/share/proj/"
    # Set the path to the clutter file here!
    os.environ["CMAC_CLUTTER_FILE"] = "/home/rjackson/cmac2.0/scripts/clutter201901.nc"
    subprocess.call(("/home/rjackson/anaconda3/envs/adi_env3/bin/adi_cmac2 -D 1 -f " +
                     facility + " -s " + site + " -b " + start_date + " -e " + end_date), shell=True)
the_cluster = PBSCluster(processes=6, cores=36, queue="arm_high_mem",
                         walltime="3:00:00", resource_spec="qos=std",
                         job_extra=["-A arm", "-W group_list=cades-arm"],
                         env_extra=[". /home/rjackson/anaconda3/etc/profile.d/conda.sh", "conda activate adi_env3"])
the_cluster.scale(36)
client = Client(the_cluster)
client
def make_date_list_dict_list(start_day, end_day):
    """This automatically generates a list of day inputs for the exec_adi function.

    Parameters
    ----------
    start_day: datetime
        The start date
    end_day: datetime
        The end date

    Returns
    -------
    the_list: A list of dictionary inputs for exec_adi
    """
    cur_day = start_day
    the_list = []
    while cur_day < end_day:
        next_day = cur_day + timedelta(days=1)
        temp_dict = {}
        # Change these next two lines to fit your facility
        temp_dict['facility'] = "I5"
        temp_dict['site'] = "sgp"
        temp_dict['start_date'] = cur_day.strftime("%Y%m%d")
        temp_dict['end_date'] = next_day.strftime("%Y%m%d")
        the_list.append(temp_dict)
        cur_day = cur_day + timedelta(days=1)
    return the_list
# Here we specify the dates that we want to process
date_list = make_date_list_dict_list(datetime(2019, 1, 1), datetime(2019, 2, 6))
# Run the cluster
futures = client.map(exec_adi, date_list)
# Put up a little progress bar!
progress(futures)
# This will make the tasks quit
del futures
the_cluster.stop_all_jobs()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Installation instructions
Step2: This will start a distributed cluster on the arm_high_mem queue. I have set it to have 6 adi_cmac2 processes per node,
Step3: Run the above code to start the distributed client, and then use the output of this cell to determine whether your client got started. You should have nonzero resources available if the cluster has started.
Step5: This creates the list of dictionaries mapped onto exec_adi when adi_cmac2 is run on the cluster.
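The date-list logic can be exercised without a cluster; this standalone sketch reproduces make_date_list_dict_list so the output shape is easy to inspect (the facility/site values are the same placeholders used above):

```python
from datetime import datetime, timedelta

def make_date_list_dict_list(start_day, end_day):
    # one input dict per day in the half-open range [start_day, end_day)
    the_list = []
    cur_day = start_day
    while cur_day < end_day:
        next_day = cur_day + timedelta(days=1)
        the_list.append({
            'facility': 'I5',
            'site': 'sgp',
            'start_date': cur_day.strftime('%Y%m%d'),
            'end_date': next_day.strftime('%Y%m%d'),
        })
        cur_day = next_day
    return the_list

date_list = make_date_list_dict_list(datetime(2019, 1, 1), datetime(2019, 1, 4))
print(len(date_list), date_list[0]['start_date'], date_list[-1]['end_date'])
```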
1,806
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from netCDF4 import Dataset
import holoviews as hv
from postladim import ParticleFile
hv.extension('bokeh')
# Read bathymetry and land mask
with Dataset('../data/ocean_avg_0014.nc') as ncid:
    H = ncid.variables['h'][:, :]
    M = ncid.variables['mask_rho'][:, :]
jmax, imax = M.shape
# Select sea and land features
H = np.where(M > 0, H, np.nan) # valid at sea
M = np.where(M < 1, M, np.nan) # valid on land
# Make land image
ds_land = hv.Dataset((np.arange(imax), np.arange(jmax), M), ['x', 'y'], 'Land mask')
im_land = ds_land.to(hv.Image, kdims=['x', 'y'], group='land')
# Make bathymetry image
ds_bathy = hv.Dataset((np.arange(imax), np.arange(jmax), -np.log10(H)),
['x', 'y'], 'Bathymetry')
im_bathy = ds_bathy.to(hv.Image, kdims=['x', 'y'])
background = im_bathy * im_land
pf = ParticleFile('line.nc')
def pplot(timestep):
    """Scatter plot of particle distribution at a given time step"""
    X, Y = pf.position(timestep)
    return background * hv.Scatter((X, Y))
%%opts Image (cmap='blues_r' alpha=0.7)
%%opts Image.land (cmap=['#AABBAA'])
%%opts Scatter (color='red')
pplot(0) + pplot(pf.num_times-1) # Final particle distribution
%%output size=150
%%opts Scatter (color='red')
dmap = hv.DynamicMap(pplot, kdims=['timestep'])
dmap.redim.range(timestep=(0, pf.num_times-1))
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Background map
Step3: Particle plot function
Step4: Still images
Step5: Dynamic map
1,807
<ASSISTANT_TASK:>
Python Code:
import re

import numpy as np
import requests
from bs4 import BeautifulSoup

# helper assumed by the parsing functions below: drop all HTML tags from a fragment
def html_stripper(text):
    return re.sub('<[^<]+?>', '', str(text))

district = 'http://www.cian.ru/cat.php?deal_type=sale&district%5B0%5D=13&district%5B1%5D=14&district%5B2%5D=15&district%5B3%5D=16&district%5B4%5D=17&district%5B5%5D=18&district%5B6%5D=19&district%5B7%5D=20&district%5B8%5D=21&district%5B9%5D=22&engine_version=2&offer_type=flat&p={}&room1=1&room2=1&room3=1&room4=1&room5=1&room6=1'
links = []
for page in range(1, 30):
    page_url = district.format(page)
    search_page = requests.get(page_url)
    search_page = search_page.content
    search_page = BeautifulSoup(search_page, 'lxml')
    flat_urls = search_page.findAll('div', attrs={'ng-class':"{'serp-item_removed': offer.remove.state, 'serp-item_popup-opened': isPopupOpen}"})
    flat_urls = re.split('http://www.cian.ru/sale/flat/|/" ng-class="', str(flat_urls))
    for link in flat_urls:
        if link.isdigit():
            links.append(link)
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[0]) + '/'
#flat_url = 'http://www.cian.ru/sale/flat/150531912/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
def getPrice(flat_page):
    price = flat_page.find('div', attrs={'class':'object_descr_price'})
    price = re.split('<div>|руб|\W', str(price))
    price = "".join([i for i in price if i.isdigit()][-4:])
    return int(price)

def getAllPrices(l, r):
    prices = []
    for i in range(l, r):
        flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
        flat_page = requests.get(flat_url)
        flat_page = flat_page.content
        flat_page = BeautifulSoup(flat_page, 'lxml')
        prices.append(getPrice(flat_page))
    return prices
prices = getAllPrices(0, len(links))
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[0]) + '/'
#flat_url = 'http://www.cian.ru/sale/flat/150531912/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
coords = flat_page.find('div', attrs={'class':'map_info_button_extend'}).contents[1]
coords = re.split('&|center=|%2C', str(coords))
coords
coords_list = []
for item in coords:
    if item[0].isdigit():
        coords_list.append(item)
lat = float(coords_list[0])
lon = float(coords_list[1])
lat
lon
def getCoords_at(flat_page):
    coords = flat_page.find('div', attrs={'class':'map_info_button_extend'}).contents[1]
    coords = re.split('&|center=|%2C', str(coords))
    coords_list = []
    for item in coords:
        if item[0].isdigit():
            coords_list.append(item)
    lat = float(coords_list[0])
    lon = float(coords_list[1])
    return lat, lon

def getAllCoordinates(l, r):
    coordinates = []
    for i in range(l, r):
        flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
        flat_page = requests.get(flat_url)
        flat_page = flat_page.content
        flat_page = BeautifulSoup(flat_page, 'lxml')
        coordinates.append(getCoords_at(flat_page))
    return coordinates
coordinates = getAllCoordinates(0, len(links))
from math import radians, cos, sin, asin, sqrt
AVG_EARTH_RADIUS = 6371
def haversine(point1, point2):
    # unpack latitude and longitude
    lat1, lng1 = point1
    lat2, lng2 = point2
    # convert all values to radians
    lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
    # compute the distance with the haversine formula
    lat = lat2 - lat1
    lng = lng2 - lng1
    d = sin(lat * 0.5) ** 2 + cos(lat1) * cos(lat2) * sin(lng * 0.5) ** 2
    h = 2 * AVG_EARTH_RADIUS * asin(sqrt(d))
    return h

MSC_POINT_ZERO = (55.755831, 37.617673)
distance = []
for i in range(0, len(coordinates)):
    distance.append(haversine(MSC_POINT_ZERO, coordinates[i]))
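The haversine helper can be sanity-checked against a known distance; Saint Petersburg's coordinates below are an illustrative addition, and the great-circle distance from Moscow is roughly 635 km (the function is repeated so the snippet is self-contained):

```python
from math import radians, cos, sin, asin, sqrt

AVG_EARTH_RADIUS = 6371  # mean Earth radius, km

def haversine(point1, point2):
    lat1, lng1 = point1
    lat2, lng2 = point2
    lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
    lat = lat2 - lat1
    lng = lng2 - lng1
    d = sin(lat * 0.5) ** 2 + cos(lat1) * cos(lat2) * sin(lng * 0.5) ** 2
    return 2 * AVG_EARTH_RADIUS * asin(sqrt(d))

MSC_POINT_ZERO = (55.755831, 37.617673)
SPB = (59.939095, 30.315868)  # Saint Petersburg, for a rough check
dist = haversine(MSC_POINT_ZERO, SPB)
print(round(dist))
```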
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[2]) + '/'
#flat_url = 'http://www.cian.ru/sale/flat/150844464/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
rooms_n = flat_page.find('div', attrs={'class':'object_descr_title'})
rooms_n = html_stripper(rooms_n)
rooms_n
re.split('-|\n', rooms_n)
def getRoom(flat_page):
    rooms_n = flat_page.find('div', attrs={'class':'object_descr_title'})
    rooms_n = html_stripper(rooms_n)
    room_number = ''
    flag = 0
    for i in re.split('-|\n', rooms_n):
        if 'много' in i:
            flag = 1
            break
        elif 'комн' in i:
            break
        else:
            room_number += i
    if flag:
        room_number = 'mult'
    room_number = "".join(room_number.split())
    return room_number

def getAllRooms(l, r):
    rooms = []
    for i in range(l, r):
        flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
        flat_page = requests.get(flat_url)
        flat_page = flat_page.content
        flat_page = BeautifulSoup(flat_page, 'lxml')
        rooms.append(getRoom(flat_page))
    return rooms
rooms = getAllRooms(0, len(links))
#flat_url = 'http://www.cian.ru/sale/flat/' + str(links[2]) + '/'
flat_url = 'http://www.cian.ru/sale/flat/150387502/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
metro = flat_page.find('div', attrs={'class':'object_descr_metro'})
metro = re.split('metro_name|мин', str(metro))
metro
re.split('metro_name|мин', str(metro))
def getMetroDistance(flat_page):
    metro = flat_page.find('div', attrs={'class':'object_descr_metro'})
    metro = re.split('metro_name|мин', str(metro))
    if len(metro) > 2:  # if both metro fields were empty, the split above yields only 2 pieces
        metro_dist = 0
        power = 0
        # after the split, metro[1] ends with assorted markup, preceded by the number of minutes (if present)
        flag = 0
        for i in range(0, len(metro[1])):
            if metro[1][-i-1].isdigit():
                flag = 1
                metro_dist += int(metro[1][-i-1]) * 10 ** power
                power += 1
            elif flag == 1:
                break
    else:
        metro_dist = np.nan
    return metro_dist

def getAllMetroDistances(l, r):
    metro_distance = []
    for i in range(l, r):
        flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
        flat_page = requests.get(flat_url)
        flat_page = flat_page.content
        flat_page = BeautifulSoup(flat_page, 'lxml')
        metro_distance.append(getMetroDistance(flat_page))
    return metro_distance
metro_distances = getAllMetroDistances(0, len(links))
def getMetroWalking(flat_page):
    metro = flat_page.find('div', attrs={'class':'object_descr_metro'})
    metro = re.split('metro_name|мин', str(metro))
    if len(metro) > 2:  # if both metro fields were empty, the split above yields only 2 pieces
        if 'пешк' in metro[2]:
            walking = 1
        elif 'машин' in metro[2]:
            walking = 0
        else:
            # the both-fields-missing case is already handled above; I have not seen listings where only
            # the walk/drive field is missing, but guard for it just in case
            walking = np.nan
    else:
        walking = np.nan
    return walking

def getAllMetroWalking(l, r):
    metro_walking = []
    for i in range(l, r):
        flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
        flat_page = requests.get(flat_url)
        flat_page = flat_page.content
        flat_page = BeautifulSoup(flat_page, 'lxml')
        metro_walking.append(getMetroWalking(flat_page))
    return metro_walking
walking = getAllMetroWalking(0, len(links))
#flat_url = 'http://www.cian.ru/sale/flat/' + str(links[2]) + '/'
flat_url = 'http://www.cian.ru/sale/flat/150387502/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
table
building_block = re.split('Этаж|Тип продажи', table)[1]
building_block
def getBrick(flat_page):
    table = flat_page.find('table', attrs={'class':'object_descr_props'})
    table = html_stripper(table)
    brick = np.nan
    building_block = re.split('Этаж|Тип продажи', table)[1]
    if 'Тип дом' in building_block:
        if ('кирпич' in building_block) | ('монолит' in building_block):
            brick = 1
        elif (('панельн' in building_block) | ('деревян' in building_block) |
              ('сталин' in building_block) | ('блочн' in building_block)):
            brick = 0
    return brick

def getAllBricks(l, r):
    bricks = []
    for i in range(l, r):
        flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
        flat_page = requests.get(flat_url)
        flat_page = flat_page.content
        flat_page = BeautifulSoup(flat_page, 'lxml')
        bricks.append(getBrick(flat_page))
    return bricks

def getNew(flat_page):
    table = flat_page.find('table', attrs={'class':'object_descr_props'})
    table = html_stripper(table)
    new = np.nan
    building_block = re.split('Этаж|Тип продажи', table)[1]
    if 'Тип дом' in building_block:
        if 'новостр' in building_block:
            new = 1
        elif 'втор' in building_block:
            new = 0
    return new

def getAllNew(l, r):
    new = []
    for i in range(l, r):
        flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
        flat_page = requests.get(flat_url)
        flat_page = flat_page.content
        flat_page = BeautifulSoup(flat_page, 'lxml')
        new.append(getNew(flat_page))
    return new
new = getAllNew(0, len(links))
#flat_url = 'http://www.cian.ru/sale/flat/' + str(links[2]) + '/'
flat_url = 'http://www.cian.ru/sale/flat/150387502/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
table
building_block = re.split('Этаж|Тип продажи', table)[1]
building_block
floor_block = re.split('\xa0/\xa0|\n|\xa0', building_block)
floor_block
def getFloor(flat_page):
    table = flat_page.find('table', attrs={'class':'object_descr_props'})
    table = html_stripper(table)
    floor_is = 0
    building_block = re.split('Этаж|Тип продажи', table)[1]
    floor_block = re.split('\xa0/\xa0|\n|\xa0', building_block)
    for i in range(1, len(floor_block[2]) + 1):
        if floor_block[2][-i].isdigit():
            floor_is += int(floor_block[2][-i]) * 10**(i - 1)
    return floor_is

def getAllFloors(l, r):
    floors = []
    for i in range(l, r):
        flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
        flat_page = requests.get(flat_url)
        flat_page = flat_page.content
        flat_page = BeautifulSoup(flat_page, 'lxml')
        floors.append(getFloor(flat_page))
    return floors

def getNFloor(flat_page):
    table = flat_page.find('table', attrs={'class':'object_descr_props'})
    table = html_stripper(table)
    floors_count = np.nan
    building_block = re.split('Этаж|Тип продажи', table)[1]
    floor_block = re.split('\xa0/\xa0|\n|\xa0', building_block)
    if floor_block[3].isdigit():
        floors_count = int(floor_block[3])
    return floors_count

def getAllNFloors(l, r):
    nfloors = []
    for i in range(l, r):
        flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
        flat_page = requests.get(flat_url)
        flat_page = flat_page.content
        flat_page = BeautifulSoup(flat_page, 'lxml')
        nfloors.append(getNFloor(flat_page))
    return nfloors
nfloors = getAllNFloors(0, 20)
#flat_url = 'http://www.cian.ru/sale/flat/' + str(links[2]) + '/'
flat_url = 'http://www.cian.ru/sale/flat/150387502/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
table
space_block = re.split('Общая площадь', table)[1]
space_block
def myStrToFloat(string):
    # parse a comma-decimal string like '45,5' into a float
    delimiter = 0
    value = 0
    for i in range(0, len(string)):
        if string[i] == ',':
            delimiter = i
    for i in range(0, delimiter):
        value += int(string[delimiter - i - 1]) * 10 ** i
    for i in range(1, len(string) - delimiter):
        # weight the i-th fractional digit by 10**(-i)
        value += int(string[delimiter + i]) * 10 ** (-i)
    return value
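myStrToFloat is equivalent to a one-line idiom for comma-decimal strings; `comma_to_float` is a hypothetical name, shown only for comparison:

```python
def comma_to_float(s):
    # '45,5' -> 45.5: swap the decimal comma for a dot and parse
    return float(s.replace(',', '.'))

print(comma_to_float('45,5'))
```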
def getTotsp(flat_page):
    table = flat_page.find('table', attrs={'class':'object_descr_props'})
    table = html_stripper(table)
    space_block = re.split('Общая площадь', table)[1]
    total = re.split('Площадь комнат', space_block)[0]
    total_space = re.split('\n|\xa0', total)[2]
    if total_space.isdigit():
        total_space = int(total_space)
    else:
        total_space = myStrToFloat(total_space)
    return total_space

def getAllTotsp(l, r):
    totsp = []
    for i in range(l, r):
        flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
        flat_page = requests.get(flat_url)
        flat_page = flat_page.content
        flat_page = BeautifulSoup(flat_page, 'lxml')
        totsp.append(getTotsp(flat_page))
    return totsp
totsp = getAllTotsp(0, len(links))
def getLivesp(flat_page):
    table = flat_page.find('table', attrs={'class':'object_descr_props'})
    table = html_stripper(table)
    space_block = re.split('Общая площадь', table)[1]
    living = re.split('Жилая площадь', space_block)[1]
    living_space = re.split('\n|\xa0', living)[2]
    if living_space.isdigit():
        living_space = int(living_space)
    else:
        living_space = myStrToFloat(living_space)
    return living_space

def getAllLivesp(l, r):
    livesp = []
    for i in range(l, r):
        flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
        flat_page = requests.get(flat_url)
        flat_page = flat_page.content
        flat_page = BeautifulSoup(flat_page, 'lxml')
        livesp.append(getLivesp(flat_page))
    return livesp
livesp = getAllLivesp(0, len(links))
#flat_url = 'http://www.cian.ru/sale/flat/' + str(links[2]) + '/'
flat_url = 'http://www.cian.ru/sale/flat/150387502/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
table
space_block = re.split('Общая площадь', table)[1]
space_block
optional_block = re.split('Жилая площадь', space_block)[1]
optional_block
def getKitsp(flat_page):
    table = flat_page.find('table', attrs={'class':'object_descr_props'})
    table = html_stripper(table)
    space_block = re.split('Общая площадь', table)[1]
    optional_block = re.split('Жилая площадь', space_block)[1]
    kitchen_space = np.nan
    if 'Площадь кухни' in optional_block:
        kitchen_block = re.split('Площадь кухни', optional_block)[1]
        kitchen_value = re.split('\n|\xa0', kitchen_block)[2]
        if kitchen_value != '–':
            if kitchen_value.isdigit():
                kitchen_space = int(kitchen_value)
            else:
                kitchen_space = myStrToFloat(kitchen_value)
    return kitchen_space

def getAllKitsp(l, r):
    kitsp = []
    for i in range(l, r):
        flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
        flat_page = requests.get(flat_url)
        flat_page = flat_page.content
        flat_page = BeautifulSoup(flat_page, 'lxml')
        kitsp.append(getKitsp(flat_page))
    return kitsp

def getBal(flat_page):
    table = flat_page.find('table', attrs={'class':'object_descr_props'})
    table = html_stripper(table)
    space_block = re.split('Общая площадь', table)[1]
    optional_block = re.split('Жилая площадь', space_block)[1]
    balcony = np.nan
    if 'Балкон' in optional_block:
        balcony_block = re.split('Балкон', optional_block)[1]
        balcony_value = re.split('\n', balcony_block)[1]
        if balcony_value != 'нет':
            if balcony_value != '–':
                balcony = int(balcony_value[0])
        else:
            balcony = 0
    return balcony

def getAllBal(l, r):
    bal = []
    for i in range(l, r):
        flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
        flat_page = requests.get(flat_url)
        flat_page = flat_page.content
        flat_page = BeautifulSoup(flat_page, 'lxml')
        bal.append(getBal(flat_page))
    return bal
bal = getAllBal(0, len(links))
def getTel(flat_page):
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
space_block = re.split('Общая площадь', table)[1]
optional_block = re.split('Жилая площадь', space_block)[1]
telephone = np.nan
if 'Телефон' in optional_block:
telephone_block = re.split('Телефон', optional_block)[1]
if re.split('\n', telephone_block)[1] == 'да':
telephone = 1
elif re.split('\n', telephone_block)[1] == 'нет':
telephone = 0
return telephone
def getAllTel(l, r):
tel = []
for i in range(l, r):
flat_url = 'http://www.cian.ru/sale/flat/' + str(links[i]) + '/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
tel.append(getTel(flat_page))
return tel
tel = getAllTel(0, len(links))
N = []
for i in range(0, len(links)):
N.append(i)
district = []
for i in range(0, len(links)):
district.append('CAD')
data = dict([('New', new), ('Bal', bal), ('Tel', tel), ('Walk', walk), ('Metrdist', metrdist), ('Nfloors', nfloors), ('Floor', floor), ('Totsp', totsp), ('Livesp', livesp), ('Kitsp', kitsp), ('N', N), ('Price', prices), ('Rooms', rooms), ('Distance', distance), ('Brick', bricks), ('District', district)])
df = pd.DataFrame(data)
df.T
df.to_csv('cian.csv', index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Collect links to all apartments from the first thirty pages of search results
Step2: A standard block in which we fetch a page's text by its link in a convenient format
Step3: Prices
Step4: A function for collecting all prices from the site within a chosen range of links ( l < r )
Step5: Now we use these functions to build the column of all prices
Step6: Dist
Step7: Define a function that extracts the coordinates from the page text
Step8: And a function that collects all coordinates within a chosen range of links (this two-function scheme is repeated everywhere)
Step9: Now we will use the obtained coordinates to compute the distance to the city centre. For this we use the formula for the distance between two points on a sphere. We write a separate function
Step10: Rooms
Step11: Pages for apartments with many rooms ( > 5 ) do not state the number of rooms explicitly; such apartments appear as "multi-room". So the corresponding objects get the value 'mult' in the Rooms column. This is more reliable information than, say, 6.
Step12: Metrdist and Walking
Step13: Here things differ from what we had before
Step14: Walking
Step15: Now let's move on to the data we will extract from the table. These are the future columns
Step16: Information on Brick and New sits between the following two mandatory fields
Step17: The situation is as follows
Step18: New
Step19: Now we move on
Step20: Information on Floor and Nfloors also sits between the mandatory fields "Этаж" (floor) and "Тип продажи" (sale type).
Step21: Floor is always given and sits in the second cell; Nfloors may be missing, but when present it sits in the third.
Step22: Nfloors
Step23: Moving on
Step24: Any of the fields to the right of "Общая площадь" (total area) may be missing, so we bound our search region only on the left.
Step25: Totsp and Livesp are always present. Here we run into a problem
Step26: Totsp
Step27: Livesp
Step28: Moving on to the optional fields
Step29: The sub-block of information we need can again be bounded only on the left by the field name "Общая площадь"
Step30: Kitsp
Step31: Bal
Step32: Tel
Step33: Final touches before assembling the DataFrame
Step34: In this notebook the apartments are only from the Central Administrative District (CAD); I record this
Step35: Putting it all together
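Step 9 above computes each apartment's distance to the city centre from its coordinates using the formula for the distance between two points on a sphere. A minimal haversine sketch of that computation (the function name and the sample coordinates are illustrative, not taken from the notebook):

```python
from math import radians, sin, cos, asin, sqrt

def sphere_distance(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance in km between two (lat, lon) points (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * radius_km * asin(sqrt(a))

# One degree of longitude at Moscow's latitude is roughly 63 km
print(sphere_distance(55.752, 37.618, 55.752, 38.618))
```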
|
1,808
|
<ASSISTANT_TASK:>
Python Code:
from scipy import misc
from scipy.ndimage import rotate
import numpy as np
data_orig = misc.face()
x0,y0 = 580,300 # left eye; (xrot,yrot) should point there
angle = np.random.randint(1, 360)
def rot_ans(image, xy, angle):
im_rot = rotate(image,angle)
org_center = (np.array(image.shape[:2][::-1])-1)/2.
rot_center = (np.array(im_rot.shape[:2][::-1])-1)/2.
org = xy-org_center
a = np.deg2rad(angle)
new = np.array([org[0]*np.cos(a) + org[1]*np.sin(a),
-org[0]*np.sin(a) + org[1]*np.cos(a) ])
return im_rot, new+rot_center
data_rot, (xrot, yrot) =rot_ans(data_orig, np.array([x0, y0]), angle)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
1,809
|
<ASSISTANT_TASK:>
Python Code:
import scipy as sp
import scipy.stats as stats
import matplotlib.pyplot as plt
import pandas as pd
%pylab inline
def h(x, w): return sp.dot(x, w)
def plot_decision_boundary(h, boundary=0, margins=None):
x = linspace(-10, 10)
y = linspace(-10, 10)
X1, X2 = np.meshgrid(x, y)
XX = sp.dstack((sp.ones((50, 50)), X1, X2))
plt.contour(X1, X2, h(XX), linecolor='red', levels=[boundary])
    if margins is not None:
CS = plt.contour(X1, X2, h(XX), colors=['gray', 'gray'], levels=[margins[0],margins[1]])
plt.clabel(CS, fontsize=9, inline=1)
w = [-2, 1, -0.5]
plot_decision_boundary(lambda x : h(x, w), margins=(-1,1))
w = [2, -1, 0.5]
plot_decision_boundary(lambda x : h(x, w), margins=(-1,1))
w = [2, -1.5, 0.3]
plot_decision_boundary(lambda x : h(x, w), margins=(-1,1))
xs = sp.linspace(-5,5, 100)
plt.plot(xs, xs);
def sigm(x): return 1 / (1 + sp.exp(-x))
plt.plot(xs, sigm(xs));
plt.plot(xs, sign(xs));
w = [-4, 2, -1]
plot_decision_boundary(lambda x : h(x, w), margins=(-1,1))
plot_decision_boundary(lambda x : sigm(sp.dot(x, w)), boundary=0.5, margins=(0.1,0.9))
def h2(x, w):
x2 = sp.dstack((x, x[:,:,1]*x[:,:,2], x[:,:,1]**2, x[:,:,2]**2))
return sp.dot(x2, w)
w = [-0.05 , -0.15 , -0.5 , 0.15 , -0.08 , 0.05]
plot_decision_boundary(lambda x : h2(x, w), margins=[-1, 1])
w = sp.array([-4, 2, -1])
X = sp.array([[1, 5, -1],
[1, -5, 5]])
plot_decision_boundary(lambda x : h(x, w), margins=(-1, 1))
plt.scatter(X[:,1],X[:,2]);
h(X[0], w)
h(X[1], w)
sp.linalg.norm(w[1:])
def distance(x,w): return sp.dot(x, w) / sp.linalg.norm(w[1:])
distance(X[0], w)
distance(X[1], w)
w2 = w/10.0
plot_decision_boundary(lambda x : h(x, w2), margins=(-1,1))
plt.scatter(X[:,1],X[:,2]);
h(X[0], w2)
h(X[1], w2)
sp.linalg.norm(w2[1:])
distance(X[0], w2)
distance(X[1], w2)
h1 = lambda x: h(x, [0, 2, 1])
plot_decision_boundary(h1)
h2 = lambda x: h(x, [-0.2, 0.7, -0.8])
plot_decision_boundary(h2)
h3 = lambda x: h(x, [-1.5, 0.1, 0.5])
plot_decision_boundary(h3)
plt.scatter(X[:,1],X[:,2]);
print h1(X[0]), h2(X[0]), h3(X[0])
print h1(X[1]), h2(X[1]), h3(X[1])
def ovr(x): return sp.argmax([h1(x), h2(x), h3(x)])
ovr(X[0])
ovr(X[1])
x = linspace(-10, 10)
y = linspace(-10, 10)
X1, X2 = np.meshgrid(x, y)
XX = sp.dstack((sp.ones((50, 50)), X1, X2))
n, m, _ = shape(XX)
YY = sp.zeros((n,m))
for i in range(0,n):
for j in range(0,m):
YY[i,j] = ovr(XX[i, j])
plt.contourf(X1, X2, YY);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Contents
Step2: Generalized linear model
Step3: The choice of the function $f$ has no effect on the linearity of the boundary, since, obviously, for the same input values $\mathbf{w}^\intercal\mathbf{x}$ the function $f$ yields the same values $f(\mathbf{w}^\intercal\mathbf{x})$
Step4: As with regression, we can use a feature mapping
Step5: Geometry of the linear model
Step6: Multiclass classification ($K>2$)
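Step 5 above concerns the geometry of the linear model: scaling the weight vector $\mathbf{w}$ rescales the score $h(\mathbf{x})=\mathbf{w}^\intercal\mathbf{x}$ but not the geometric distance to the decision boundary, because the norm in the denominator scales by the same factor. A small standalone check (the point and weights mirror the notebook's example):

```python
import numpy as np

def signed_distance(x, w):
    """Signed distance of an augmented point x = [1, x1, x2] to the plane w.x = 0."""
    return np.dot(x, w) / np.linalg.norm(w[1:])

w = np.array([-4.0, 2.0, -1.0])
x = np.array([1.0, 5.0, -1.0])

# Dividing w by 10 shrinks the score h(x) = w.x but leaves the distance unchanged
print(np.dot(x, w), signed_distance(x, w), signed_distance(x, w / 10))
```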
|
1,810
|
<ASSISTANT_TASK:>
Python Code:
%%file mr_s3_log_parser.py
import time
from mrjob.job import MRJob
from mrjob.protocol import RawValueProtocol, ReprProtocol
import re
class MrS3LogParser(MRJob):
    """Parses the logs from S3 based on the S3 logging format:
    http://docs.aws.amazon.com/AmazonS3/latest/dev/LogFormat.html

    Aggregates a user's daily requests by user agent and operation.
    Outputs date_time, requester, user_agent, operation, count.
    """
LOGPATS = r'(\S+) (\S+) \[(.*?)\] (\S+) (\S+) ' \
r'(\S+) (\S+) (\S+) ("([^"]+)"|-) ' \
r'(\S+) (\S+) (\S+) (\S+) (\S+) (\S+) ' \
r'("([^"]+)"|-) ("([^"]+)"|-)'
NUM_ENTRIES_PER_LINE = 17
logpat = re.compile(LOGPATS)
(S3_LOG_BUCKET_OWNER,
S3_LOG_BUCKET,
S3_LOG_DATE_TIME,
S3_LOG_IP,
S3_LOG_REQUESTER_ID,
S3_LOG_REQUEST_ID,
S3_LOG_OPERATION,
S3_LOG_KEY,
S3_LOG_HTTP_METHOD,
S3_LOG_HTTP_STATUS,
S3_LOG_S3_ERROR,
S3_LOG_BYTES_SENT,
S3_LOG_OBJECT_SIZE,
S3_LOG_TOTAL_TIME,
S3_LOG_TURN_AROUND_TIME,
S3_LOG_REFERER,
S3_LOG_USER_AGENT) = range(NUM_ENTRIES_PER_LINE)
DELIMITER = '\t'
# We use RawValueProtocol for input to be format agnostic
# and avoid any type of parsing errors
INPUT_PROTOCOL = RawValueProtocol
# We use RawValueProtocol for output so we can output raw lines
# instead of (k, v) pairs
OUTPUT_PROTOCOL = RawValueProtocol
# Encode the intermediate records using repr() instead of JSON, so the
# record doesn't get Unicode-encoded
INTERNAL_PROTOCOL = ReprProtocol
def clean_date_time_zone(self, raw_date_time_zone):
        """Converts an entry like 22/Jul/2013:21:04:17 +0000 to the format
        'YYYY-MM-DD HH:MM:SS', which is more suitable for loading into
        a database such as Redshift or RDS.

        Note: requires the chars "[ ]" to be stripped prior to input.
        Returns the converted datetime and timezone,
        or None for both values if parsing failed.

        TODO: Needs to combine timezone with date as one field
        """
date_time = None
time_zone_parsed = None
# TODO: Probably cleaner to parse this with a regex
date_parsed = raw_date_time_zone[:raw_date_time_zone.find(":")]
time_parsed = raw_date_time_zone[raw_date_time_zone.find(":") + 1:
raw_date_time_zone.find("+") - 1]
time_zone_parsed = raw_date_time_zone[raw_date_time_zone.find("+"):]
try:
date_struct = time.strptime(date_parsed, "%d/%b/%Y")
converted_date = time.strftime("%Y-%m-%d", date_struct)
date_time = converted_date + " " + time_parsed
# Throws a ValueError exception if the operation fails that is
# caught by the calling function and is handled appropriately
except ValueError as error:
raise ValueError(error)
else:
return converted_date, date_time, time_zone_parsed
def mapper(self, _, line):
line = line.strip()
match = self.logpat.search(line)
date_time = None
requester = None
user_agent = None
operation = None
try:
for n in range(self.NUM_ENTRIES_PER_LINE):
group = match.group(1 + n)
if n == self.S3_LOG_DATE_TIME:
date, date_time, time_zone_parsed = \
self.clean_date_time_zone(group)
# Leave the following line of code if
# you want to aggregate by date
date_time = date + " 00:00:00"
elif n == self.S3_LOG_REQUESTER_ID:
requester = group
elif n == self.S3_LOG_USER_AGENT:
user_agent = group
elif n == self.S3_LOG_OPERATION:
operation = group
else:
pass
except Exception:
yield (("Error while parsing line: %s", line), 1)
else:
yield ((date_time, requester, user_agent, operation), 1)
def reducer(self, key, values):
output = list(key)
output = self.DELIMITER.join(output) + \
self.DELIMITER + \
str(sum(values))
yield None, output
def steps(self):
return [
self.mr(mapper=self.mapper,
reducer=self.reducer)
]
if __name__ == '__main__':
MrS3LogParser.run()
!python mr_s3_log_parser.py -r emr s3://bucket-source/ --output-dir=s3://bucket-dest/
!python mr_s3_log_parser.py input_data.txt > output_data.txt
%%file test_mr_s3_log_parser.py
from StringIO import StringIO
import unittest2 as unittest
from mr_s3_log_parser import MrS3LogParser
class MrTestsUtil:
def run_mr_sandbox(self, mr_job, stdin):
# inline runs the job in the same process so small jobs tend to
# run faster and stack traces are simpler
# --no-conf prevents options from local mrjob.conf from polluting
# the testing environment
# "-" reads from standard in
mr_job.sandbox(stdin=stdin)
# make_runner ensures job cleanup is performed regardless of
# success or failure
with mr_job.make_runner() as runner:
runner.run()
for line in runner.stream_output():
key, value = mr_job.parse_output_line(line)
yield value
class TestMrS3LogParser(unittest.TestCase):
mr_job = None
mr_tests_util = None
RAW_LOG_LINE_INVALID = \
'00000fe9688b6e57f75bd2b7f7c1610689e8f01000000' \
'00000388225bcc00000 ' \
's3-storage [22/Jul/2013:21:03:27 +0000] ' \
        '00.111.222.33 '
RAW_LOG_LINE_VALID = \
'00000fe9688b6e57f75bd2b7f7c1610689e8f01000000' \
'00000388225bcc00000 ' \
's3-storage [22/Jul/2013:21:03:27 +0000] ' \
'00.111.222.33 ' \
'arn:aws:sts::000005646931:federated-user/user 00000AB825500000 ' \
'REST.HEAD.OBJECT user/file.pdf ' \
'"HEAD /user/file.pdf?versionId=00000XMHZJp6DjM9x500000' \
'00000SDZk ' \
'HTTP/1.1" 200 - - 4000272 18 - "-" ' \
'"Boto/2.5.1 (darwin) USER-AGENT/1.0.14.0" ' \
'00000XMHZJp6DjM9x5JVEAMo8MG00000'
DATE_TIME_ZONE_INVALID = "AB/Jul/2013:21:04:17 +0000"
DATE_TIME_ZONE_VALID = "22/Jul/2013:21:04:17 +0000"
DATE_VALID = "2013-07-22"
DATE_TIME_VALID = "2013-07-22 21:04:17"
TIME_ZONE_VALID = "+0000"
def __init__(self, *args, **kwargs):
super(TestMrS3LogParser, self).__init__(*args, **kwargs)
self.mr_job = MrS3LogParser(['-r', 'inline', '--no-conf', '-'])
self.mr_tests_util = MrTestsUtil()
def test_invalid_log_lines(self):
stdin = StringIO(self.RAW_LOG_LINE_INVALID)
for result in self.mr_tests_util.run_mr_sandbox(self.mr_job, stdin):
self.assertEqual(result.find("Error"), 0)
def test_valid_log_lines(self):
stdin = StringIO(self.RAW_LOG_LINE_VALID)
for result in self.mr_tests_util.run_mr_sandbox(self.mr_job, stdin):
self.assertEqual(result.find("Error"), -1)
def test_clean_date_time_zone(self):
date, date_time, time_zone_parsed = \
self.mr_job.clean_date_time_zone(self.DATE_TIME_ZONE_VALID)
self.assertEqual(date, self.DATE_VALID)
self.assertEqual(date_time, self.DATE_TIME_VALID)
self.assertEqual(time_zone_parsed, self.TIME_ZONE_VALID)
# Use a lambda to delay the calling of clean_date_time_zone so that
# assertRaises has enough time to handle it properly
self.assertRaises(ValueError,
lambda: self.mr_job.clean_date_time_zone(
self.DATE_TIME_ZONE_INVALID))
if __name__ == '__main__':
unittest.main()
!python test_mr_s3_log_parser.py -v
runners:
emr:
aws_access_key_id: __ACCESS_KEY__
aws_secret_access_key: __SECRET_ACCESS_KEY__
aws_region: us-east-1
ec2_key_pair: EMR
ec2_key_pair_file: ~/.ssh/EMR.pem
ssh_tunnel_to_job_tracker: true
ec2_master_instance_type: m3.xlarge
ec2_instance_type: m3.xlarge
num_ec2_instances: 5
s3_scratch_uri: s3://bucket/tmp/
s3_log_uri: s3://bucket/tmp/logs/
enable_emr_debugging: True
bootstrap:
- sudo apt-get install -y python-pip
- sudo pip install --upgrade simplejson
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: This notebook was prepared by Donne Martin. Source and license info is on GitHub.
Step3: Running Amazon Elastic MapReduce Jobs
Step4: Run a MapReduce job locally on the specified input file, sending the results to the specified output file
Step5: Unit Testing S3 Logs
Step6: Running S3 Logs Unit Test
Step7: Sample Config File
|
1,811
|
<ASSISTANT_TASK:>
Python Code:
#untar and compile ms and sample_stats
!tar zxf ms.tar.gz; cd msdir; gcc -o ms ms.c streec.c rand1.c -lm; gcc -o sample_stats sample_stats.c tajd.c -lm
#I get three compiler warnings from ms, but everything should be fine
#now I'll just move the programs into the current working dir
!mv msdir/ms . ; mv msdir/sample_stats .;
!conda install scikit-learn --yes
!pip install -U scikit-learn
#simulate under the equilibrium model
!./ms 20 2000 -t 100 -r 100 10000 | ./sample_stats > equilibrium.msOut.stats
#simulate under the contraction model
!./ms 20 2000 -t 100 -r 100 10000 -en 0 1 0.5 -en 0.2 1 1 | ./sample_stats > contraction.msOut.stats
#simulate under the growth model
!./ms 20 2000 -t 100 -r 100 10000 -en 0.2 1 0.5 | ./sample_stats > growth.msOut.stats
#now lets suck up the data columns we want for each of these files, and create one big training set; we will use numpy for this
# note that we are only using two columns of the data- these correspond to segSites and Fay & Wu's H
import numpy as np
X1 = np.loadtxt("equilibrium.msOut.stats",usecols=(3,9))
X2 = np.loadtxt("contraction.msOut.stats",usecols=(3,9))
X3 = np.loadtxt("growth.msOut.stats",usecols=(3,9))
X = np.concatenate((X1,X2,X3))
#create associated 'labels' -- these will be the targets for training
y = [0]*len(X1) + [1]*len(X2) + [2]*len(X3)
Y = np.array(y)
#the last step in this process will be to shuffle the data, and then split it into a training set and a testing set
#the testing set will NOT be used during training, and will allow us to check how well the classifier is doing
#scikit-learn has a very convenient function for doing this shuffle and split operation
#
# will will keep out 10% of the data for testing
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X,Y,test_size=0.1)
from sklearn.ensemble import RandomForestClassifier
rfClf = RandomForestClassifier(n_estimators=100,n_jobs=10)
clf = rfClf.fit(X_train, Y_train)
from sklearn.preprocessing import normalize
#These two functions (taken from scikit-learn.org) plot the decision boundaries for a classifier.
def plot_contours(ax, clf, xx, yy, **params):
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
out = ax.contourf(xx, yy, Z, **params)
return out
def make_meshgrid(x, y, h=.05):
x_min, x_max = x.min() - 1, x.max() + 1
y_min, y_max = y.min() - 1, y.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return xx, yy
#Let's do the plotting
import matplotlib.pyplot as plt
fig,ax= plt.subplots(1,1)
X0, X1 = X[:, 0], X[:, 1]
xx, yy = make_meshgrid(X0, X1, h=0.2)
plot_contours(ax, clf, xx, yy, cmap=plt.cm.coolwarm, alpha=0.8)
# plotting only a subset of our data to keep things from getting too cluttered
ax.scatter(X_test[:200, 0], X_test[:200, 1], c=Y_test[:200], cmap=plt.cm.coolwarm, edgecolors='k')
ax.set_xlabel(r"$\theta_{w}$", fontsize=14)
ax.set_ylabel(r"Fay and Wu's $H$", fontsize=14)
ax.set_xticks(())
ax.set_yticks(())
ax.set_title("Classifier decision surface", fontsize=14)
plt.show()
#here's the confusion matrix function
def makeConfusionMatrixHeatmap(data, title, trueClassOrderLs, predictedClassOrderLs, ax):
data = np.array(data)
data = normalize(data, axis=1, norm='l1')
heatmap = ax.pcolor(data, cmap=plt.cm.Blues, vmin=0.0, vmax=1.0)
for i in range(len(predictedClassOrderLs)):
for j in reversed(range(len(trueClassOrderLs))):
val = 100*data[j, i]
if val > 50:
c = '0.9'
else:
c = 'black'
ax.text(i + 0.5, j + 0.5, '%.2f%%' % val, horizontalalignment='center', verticalalignment='center', color=c, fontsize=9)
cbar = plt.colorbar(heatmap, cmap=plt.cm.Blues, ax=ax)
cbar.set_label("Fraction of simulations assigned to class", rotation=270, labelpad=20, fontsize=11)
# put the major ticks at the middle of each cell
ax.set_xticks(np.arange(data.shape[1]) + 0.5, minor=False)
ax.set_yticks(np.arange(data.shape[0]) + 0.5, minor=False)
ax.axis('tight')
ax.set_title(title)
#labels
ax.set_xticklabels(predictedClassOrderLs, minor=False, fontsize=9, rotation=45)
ax.set_yticklabels(reversed(trueClassOrderLs), minor=False, fontsize=9)
ax.set_xlabel("Predicted class")
ax.set_ylabel("True class")
#now the actual work
#first get the predictions
preds=clf.predict(X_test)
counts=[[0.,0.,0.],[0.,0.,0.],[0.,0.,0.]]
for i in range(len(Y_test)):
counts[Y_test[i]][preds[i]] += 1
counts.reverse()
classOrderLs=['equil','contraction','growth']
#now do the plotting
fig,ax= plt.subplots(1,1)
makeConfusionMatrixHeatmap(counts, "Confusion matrix", classOrderLs, classOrderLs, ax)
plt.show()
X1 = np.loadtxt("equilibrium.msOut.stats",usecols=(1,3,5,7,9))
X2 = np.loadtxt("contraction.msOut.stats",usecols=(1,3,5,7,9))
X3 = np.loadtxt("growth.msOut.stats",usecols=(1,3,5,7,9))
X = np.concatenate((X1,X2,X3))
#create associated 'labels' -- these will be the targets for training
y = [0]*len(X1) + [1]*len(X2) + [2]*len(X3)
Y = np.array(y)
X_train, X_test, Y_train, Y_test = train_test_split(X,Y,test_size=0.1)
rfClf = RandomForestClassifier(n_estimators=100,n_jobs=10)
clf = rfClf.fit(X_train, Y_train)
preds=clf.predict(X_test)
counts=[[0.,0.,0.],[0.,0.,0.],[0.,0.,0.]]
for i in range(len(Y_test)):
counts[Y_test[i]][preds[i]] += 1
counts.reverse()
fig,ax= plt.subplots(1,1)
makeConfusionMatrixHeatmap(counts, "Confusion matrix", classOrderLs, classOrderLs, ax)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install scikit-learn
Step2: or if you don't use conda, you can use pip to install scikit-learn with
Step3: Step 1
Step4: Step 2
Step5: That's it! The classifier is trained. This Random Forest classifer used 100 decision trees in its ensemble, a pretty large number considering that we are only using two summary stats to represent our data. Nevertheless it trains on the data very, very quickly.
Step6: Above we can see which regions of our feature space are assigned to each class
Step7: Looks pretty good. But can we make it better? Well a simple way might be to increase the number of features (i.e. summary statistics) we use as input. Let's give that a whirl using all of the output from Hudson's sample_stats
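The confusion-matrix heatmap in Step 6 row-normalizes raw counts so each row shows the fraction of true-class simulations assigned to each predicted class. The underlying computation, without the plotting, looks roughly like this (the labels below are made up for illustration):

```python
import numpy as np

def confusion(y_true, y_pred, n_classes):
    """cm[i, j] = number of class-i examples predicted as class j."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

cm = confusion([0, 0, 1, 1, 2, 2, 2], [0, 1, 1, 1, 2, 2, 0], 3)
row_norm = cm / cm.sum(axis=1, keepdims=True)  # each row now sums to 1
print(row_norm)
```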
|
1,812
|
<ASSISTANT_TASK:>
Python Code:
try:
%load_ext watermark
watermark = True
except ImportError:
watermark = False
pass
import sys
sys.path.append("../") # Add parent dir in the Path
from hyperstream import HyperStream
from hyperstream import TimeInterval
from hyperstream.utils import UTC
from hyperstream import Workflow
import hyperstream
from datetime import datetime
from utils import plot_high_chart
from utils import plot_multiple_stock
from dateutil.parser import parse
if watermark:
%watermark -v -m -p hyperstream -g
hs = HyperStream(loglevel=30)
M = hs.channel_manager.memory
print(hs)
print([p.channel_id_prefix for p in hs.config.plugins])
def dateparser(dt):
return parse(dt.replace('M', '-')).replace(tzinfo=UTC)
def split_temperatures(d):
    """Splits per-city Min/Max temperature keys into nested per-city dictionaries.

    Parameters
    ----------
    d: dictionary of the following form:
        {'BangkokMin': 24.0, 'BangkokMax': 32.8, 'TokyoMin': 4.2, 'TokyoMax': 11.2}

    Returns
    -------
    dictionary of the following form:
        {'Bangkok': {'min': 24.0, 'max': 32.8}, 'Tokyo': {'min': 4.2, 'max': 11.2}}
    """
new_d = {}
for name, value in d.iteritems():
key = name[-3:].lower()
name = name[:-3]
if name not in new_d:
new_d[name] = {}
new_d[name][key] = value
return new_d
def dict_mean(d):
x = d.values()
x = [value for value in x if value is not None]
return float(sum(x)) / max(len(x), 1)
countries_dict = {
'Asia': ['Bangkok', 'HongKong', 'KualaLumpur', 'NewDelhi', 'Tokyo'],
'Australia': ['Brisbane', 'Canberra', 'GoldCoast', 'Melbourne', 'Sydney'],
'NZ': ['Auckland', 'Christchurch', 'Dunedin', 'Hamilton','Wellington'],
'USA': ['Chicago', 'Houston', 'LosAngeles', 'NY', 'Seattle']
}
# delete_plate requires the deletion to be first childs and then parents
for plate_id in ['C.C', 'C']:
if plate_id in [plate[0] for plate in hs.plate_manager.plates.items()]:
hs.plate_manager.delete_plate(plate_id=plate_id, delete_meta_data=True)
for country in countries_dict:
id_country = 'country_' + country
if not hs.plate_manager.meta_data_manager.contains(identifier=id_country):
hs.plate_manager.meta_data_manager.insert(
parent='root', data=country, tag='country', identifier=id_country)
for city in countries_dict[country]:
id_city = id_country + '.' + 'city_' + city
if not hs.plate_manager.meta_data_manager.contains(identifier=id_city):
hs.plate_manager.meta_data_manager.insert(
parent=id_country, data=city, tag='city', identifier=id_city)
C = hs.plate_manager.create_plate(plate_id="C", description="Countries", values=[], complement=True,
parent_plate=None, meta_data_id="country")
CC = hs.plate_manager.create_plate(plate_id="C.C", description="Cities", values=[], complement=True,
parent_plate="C", meta_data_id="city")
print hs.plate_manager.meta_data_manager.global_plate_definitions
ti_all = TimeInterval(datetime(1999, 1, 1).replace(tzinfo=UTC),
datetime(2013, 1, 1).replace(tzinfo=UTC))
# parameters for the csv_mutli_reader tool
csv_temp_params = dict(
filename_template='data/TimeSeriesDatasets_130207/Temp{}.csv',
datetime_parser=dateparser, skip_rows=0, header=True)
csv_rain_params = dict(
filename_template='data/TimeSeriesDatasets_130207/{}Rainfall.csv',
datetime_parser=dateparser, skip_rows=0, header=True)
def mean(x):
    """Computes the mean of the values in x, discarding the None values."""
x = [value for value in x if value is not None]
return float(sum(x)) / max(len(x), 1)
with Workflow(workflow_id='tutorial_05',
name='tutorial_05',
owner='tutorials',
description='Tutorial 5 workflow',
online=False) as w:
country_node_raw_temp = w.create_node(stream_name='raw_temp_data', channel=M, plates=[C])
country_node_temp = w.create_node(stream_name='temp_data', channel=M, plates=[C])
city_node_temp = w.create_node(stream_name='city_temp', channel=M, plates=[CC])
city_node_avg_temp = w.create_node(stream_name='city_avg_temp', channel=M, plates=[CC])
country_node_avg_temp = w.create_node(stream_name='country_avg_temp', channel=M, plates=[C])
country_node_raw_rain = w.create_node(stream_name='raw_rain_data', channel=M, plates=[C])
city_node_rain = w.create_node(stream_name='city_rain', channel=M, plates=[CC])
country_node_avg_rain = w.create_node(stream_name='country_avg_rain', channel=M, plates=[C])
city_node_temp_rain = w.create_node(stream_name='city_temp_rain', channel=M, plates=[CC])
country_node_avg_temp_rain = w.create_node(stream_name='country_avg_temp_rain', channel=M, plates=[C])
world_node_avg_temp = w.create_node(stream_name='world_avg_temp', channel=M, plates=[])
for c in C:
country_node_raw_temp[c] = hs.plugins.data_importers.factors.csv_multi_reader(
source=None, **csv_temp_params)
country_node_temp[c] = hs.factors.apply(
sources=[country_node_raw_temp[c]],
func=split_temperatures)
country_node_raw_rain[c] = hs.plugins.data_importers.factors.csv_multi_reader(
source=None, **csv_rain_params)
for cc in CC[c]:
city_node_temp[cc] = hs.factors.splitter_from_stream(
source=country_node_temp[c],
splitting_node=country_node_temp[c],
use_mapping_keys_only=True)
city_node_avg_temp[cc] = hs.factors.apply(
sources=[city_node_temp[c]],
func=dict_mean)
city_node_rain[cc] = hs.factors.splitter_from_stream(
source=country_node_raw_rain[c],
splitting_node=country_node_raw_rain[c],
use_mapping_keys_only=True)
city_node_temp_rain[cc] = hs.plugins.example.factors.aligned_correlation(
sources=[city_node_avg_temp[cc],
city_node_rain[cc]],
use_mapping_keys_only=True)
country_node_avg_temp[c] = hs.factors.aggregate(
sources=[city_node_avg_temp],
alignment_node=None,
aggregation_meta_data='city', func=mean)
country_node_avg_rain[c] = hs.factors.aggregate(
sources=[city_node_rain],
alignment_node=None,
aggregation_meta_data='city', func=mean)
country_node_avg_temp_rain[c] = hs.factors.aggregate(
sources=[city_node_temp_rain],
alignment_node=None,
aggregation_meta_data='city', func=mean)
world_node_avg_temp[None] = hs.factors.aggregate(sources=[country_node_avg_temp],
alignment_node=None,
aggregation_meta_data='country',
func=mean)
w.execute(ti_all)
ti_sample = TimeInterval(datetime(2007, 1, 1).replace(tzinfo=UTC),
datetime(2007, 2, 1).replace(tzinfo=UTC))
for stream_id, stream in M.find_streams(name='temp_data').iteritems():
print(stream_id)
print(stream.window(ti_sample).items())
for stream_id, stream in M.find_streams(name='city_avg_temp').iteritems():
print('[{}]'.format(stream_id))
print(stream.window(ti_sample).items())
for stream_id, stream in M.find_streams(name='city_temp_rain').iteritems():
print('[{}]'.format(stream_id))
print(stream.window(ti_sample).items())
def get_x_y_names_from_streams(streams, tag=None):
names = []
y = []
x = []
for stream_id, stream in streams.iteritems():
if len(stream.window().items()) == 0:
continue
if tag is not None:
meta_data = dict(stream_id.meta_data)
name = meta_data[tag]
else:
name = ''
names.append(name)
y.append([instance.value for instance in stream.window().items()])
x.append([str(instance.timestamp) for instance in stream.window().items()])
return y, x, names
data, time, names = get_x_y_names_from_streams(M.find_streams(country='Australia', name='city_avg_temp'), 'city')
plot_multiple_stock(data, time=time, names=names,
title='Temperatures in Australia', ylabel='ºC')
data, time, names = get_x_y_names_from_streams(M.find_streams(country='NZ', name='city_avg_temp'), 'city')
plot_multiple_stock(data, time=time, names=names,
title='Temperatures in New Zealand', ylabel='ºC')
data, time, names = get_x_y_names_from_streams(M.find_streams(country='NZ', name='city_rain'), 'city')
plot_multiple_stock(data, time=time, names=names,
title='Rain in New Zealand', ylabel='some precipitation unit')
data, time, names = get_x_y_names_from_streams(M.find_streams(name='city_temp_rain'), 'city')
plot_multiple_stock(data, time=time, names=names,
title='Temperatures in New Zealand', ylabel='Cº/rain units')
data, time, names = get_x_y_names_from_streams(M.find_streams(name='country_avg_temp'), 'country')
plot_multiple_stock(data, time=time, names=names,
title='Temperatures in countries', ylabel='ºC')
data, time, names = get_x_y_names_from_streams(M.find_streams(name='country_avg_rain'), 'country')
plot_multiple_stock(data, time=time, names=names,
title='Average rain in countries', ylabel='some precipitation unit')
data, time, names = get_x_y_names_from_streams(M.find_streams(name='world_avg_temp'))
plot_multiple_stock(data, time=time, names=names,
title='Average temperature in all countries', ylabel='Cº')
from pprint import pprint
pprint(w.to_dict(tool_long_names=False))
print(w.to_json(w.factorgraph_viz, tool_long_names=False, indent=4))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading the data
Step3: Once the csv_reader has created the instances in the country plate, we will modify the dictionaries by applying the function split_temperatures to each instance and storing the results in a new stream temp_data.
Step4: Then, we will use a splitter_from_stream tool that will be applied to every country and store the values of the temp_dat stream into the corresponding city nodes. The new city nodes will contain a dictionary with minimum and maximum values, in the form
Step5: Create the plates and meta_data instances
Step7: Create the workflow and execute it
Step8: See the temperature and rain in all the cities
Step9: We can see the ratio between the temperature and the rain for every month. In this case, we do not have the rain for most of the cities. For that reason, some of the nodes are empty.
Step10: Visualisations
Step11: Here we visualize the average temperatures in some cities of New Zealand.
Step12: The rainfall in New Zealand.
Step13: And the correlation between temperature and rain of all the cities. In this case, we only have this ratio for some of the cities of New Zealand.
Step14: We can see the streams at a country level with the averages of each of its cities.
|
1,813
|
<ASSISTANT_TASK:>
Python Code:
ls
pwd
cd 2017oct04
ls
pwd
ls M52*fit
ls M52-001*fit
ls *V*
cd ..
# Make a new directory, "temporary"
# Move into temporary
# Move the test_file.txt into this current location
# Create a copy of the test_file.txt, name the copy however you like
# Delete the original test_file.txt
# Change directories to original location of notebook.
ls
ls ./temporary/
2 < 5
3 > 7
x = 11
x > 10
2 * x < x
3.14 <= 3.14 # <= means less than or equal to; >= means greater than or equal to
42 == 42
3e8 != 3e9 # != means "not equal to"
type(True)
temperature = float(input('What is the temperature in Fahrenheit? '))
if temperature > 70:
print('Wear shorts.')
else:
print('Wear long pants.')
names = ['Henrietta', 'Annie', 'Jocelyn', 'Vera']
for n in names:
print('There are ' + str(len(n)) + ' letters in ' + n)
for i in range(5):
print(i)
i = 0 # This starts the initial value off at zero
while i < 11:
print(i)
i = i + 3 # This adds three to the value of i, then goes back to the line #3 to check if the condition is met
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We're in a new folder now, so issue commands in the next two cells to look at the folder content and list your current path
Step2: Now test out a few more things. In the blank cells below, try the following and discuss in your group what each does
Step3: What does the asterisk symbol * do?
Step4: A1.2) A few more helpful commands
Step5: If all went according to plan, the following command should show three directories, a zip file, a .png file, this notebook, and the Lab6 notebook
Step6: And the following command should show the contents of the temporary folder, so only your new text file (a copy of test_file.txt, which is now gone forever) within it
Step7: Appendix 2
Step8: You see that conditions are either True or False (with no quotes!). These are the only possible Boolean values (named after the 19th-century mathematician George Boole). In Python, the name Boolean is shortened to the type bool. It is the type of the results of true-false conditions or tests.
Step9: The four lines in the previous cell are an if-else statement. There are two indented blocks
Step10: This is an example of a for loop. The way a for loop works is a follows. We start with a list of objects -- in this example a list of strings, but it could be anything -- and then we say for variable in list
Step11: There are also other ways of iterating, which may be more convenient depending on what you're trying to do. A very common one is the while loop, which does exactly what it sounds like it should
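The control-flow constructs described in the steps above (if/else branching, for loops over lists, and while loops) can be sketched together in one small, self-contained example. The temperature threshold, the name list, and the step of three are illustrative values, not taken from the original notebook.

```python
# Minimal sketch of the three control-flow constructs discussed above.
# All concrete values here are illustrative.

def clothing_advice(temperature):
    # if/else: pick one of two branches based on a boolean condition
    if temperature > 70:
        return 'Wear shorts.'
    return 'Wear long pants.'

def letter_counts(names):
    # for loop: visit each element of a list in order
    return [len(n) for n in names]

def count_by_threes(limit):
    # while loop: repeat the body until the condition becomes False
    values = []
    i = 0
    while i < limit:
        values.append(i)
        i = i + 3
    return values

print(clothing_advice(65))                    # Wear long pants.
print(letter_counts(['Henrietta', 'Vera']))   # [9, 4]
print(count_by_threes(11))                    # [0, 3, 6, 9]
```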
|
1,814
|
<ASSISTANT_TASK:>
Python Code:
import pubchempy as pcp
c = pcp.Compound.from_cid(5090)
c
print(c.molecular_formula)
print(c.molecular_weight)
print(c.isomeric_smiles)
print(c.xlogp)
print(c.iupac_name)
print(c.synonyms)
results = pcp.get_compounds('Glucose', 'name')
results
for compound in results:
    print(compound.isomeric_smiles)
pcp.get_compounds('C1=CC2=C(C3=C(C=CC=N3)C=C2)N=C1', 'smiles')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let’s get the Compound with CID 5090
Step2: Now we have a Compound object called c. We can get all the information we need from this object
Step3: Searching
Step4: The first argument is the identifier, and the second argument is the identifier type, which must be one of name, smiles, sdf, inchi, inchikey or formula. It looks like there are 4 compounds in the PubChem Database that have the name Glucose associated with them. Let’s take a look at them in more detail
Step5: It looks like they all have different stereochemistry information.
|
1,815
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import json
from pandas.io.json import json_normalize
# define json string
data = [{'state': 'Florida',
'shortname': 'FL',
'info': {'governor': 'Rick Scott'},
'counties': [{'name': 'Dade', 'population': 12345},
{'name': 'Broward', 'population': 40000},
{'name': 'Palm Beach', 'population': 60000}]},
{'state': 'Ohio',
'shortname': 'OH',
'info': {'governor': 'John Kasich'},
'counties': [{'name': 'Summit', 'population': 1234},
{'name': 'Cuyahoga', 'population': 1337}]}]
# use normalization to create tables from nested element
json_normalize(data, 'counties')
# further populate tables created from nested element
json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']])
# load json as string
json.load((open('data/world_bank_projects_less.json')))
# load as Pandas dataframe
sample_json_df = pd.read_json('data/world_bank_projects_less.json')
sample_json_df
# load json data frame
dataFrame = pd.read_json('data/world_bank_projects.json')
dataFrame
dataFrame.info()
dataFrame.columns
dataFrame.groupby(dataFrame.countryshortname).count().sort_values('_id', ascending=False).head(10)
themeNameCode = []
for codes in dataFrame.mjtheme_namecode:
themeNameCode += codes
themeNameCode = json_normalize(themeNameCode)
themeNameCode['count']=themeNameCode.groupby('code').transform('count')
themeNameCode.sort_values('count', ascending=False).drop_duplicates().head(10)
#Create dictionary Code:Name to replace empty names.
codeNameDict = {}
for codes in dataFrame.mjtheme_namecode:
for code in codes:
if code['name']!='':
codeNameDict[code['code']]=code['name']
index=0
for codes in dataFrame.mjtheme_namecode:
innerIndex=0
for code in codes:
if code['name']=='':
dataFrame.mjtheme_namecode[index][innerIndex]['name']=codeNameDict[code['code']]
innerIndex += 1
index += 1
themeNameCode = []
for codes in dataFrame.mjtheme_namecode:
themeNameCode += codes
themeNameCode = json_normalize(themeNameCode)
themeNameCode['count']=themeNameCode.groupby('code').transform('count')
themeNameCode.sort_values('count', ascending=False).drop_duplicates().head(10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Imports for Python and Pandas
Step2: JSON example, with string
Step3: JSON example, with file
Step4: JSON exercise
Step5: Top 10 Countries with most projects
Step6: Top 10 major project themes
Step7: Dataframe with the missing names filled in
|
1,816
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image('images/12_adversarial_noise_flowchart.png')
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
# We also need PrettyTensor.
import prettytensor as pt
tf.__version__
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
data.test.cls = np.argmax(data.test.labels, axis=1)
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
def plot_images(images, cls_true, cls_pred=None, noise=0.0):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Get the i'th image and reshape the array.
image = images[i].reshape(img_shape)
# Add the adversarial noise to the image.
image += noise
# Ensure the noisy pixel-values are between 0 and 1.
image = np.clip(image, 0.0, 1.0)
# Plot image.
ax.imshow(image,
cmap='binary', interpolation='nearest')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
y_true_cls = tf.argmax(y_true, dimension=1)
noise_limit = 0.35
noise_l2_weight = 0.02
ADVERSARY_VARIABLES = 'adversary_variables'
collections = [tf.GraphKeys.VARIABLES, ADVERSARY_VARIABLES]
x_noise = tf.Variable(tf.zeros([img_size, img_size, num_channels]),
name='x_noise', trainable=False,
collections=collections)
x_noise_clip = tf.assign(x_noise, tf.clip_by_value(x_noise,
-noise_limit,
noise_limit))
x_noisy_image = x_image + x_noise
x_noisy_image = tf.clip_by_value(x_noisy_image, 0.0, 1.0)
x_pretty = pt.wrap(x_noisy_image)
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(class_count=num_classes, labels=y_true)
[var.name for var in tf.trainable_variables()]
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
adversary_variables = tf.get_collection(ADVERSARY_VARIABLES)
[var.name for var in adversary_variables]
l2_loss_noise = noise_l2_weight * tf.nn.l2_loss(x_noise)
loss_adversary = loss + l2_loss_noise
optimizer_adversary = tf.train.AdamOptimizer(learning_rate=1e-2).minimize(loss_adversary, var_list=adversary_variables)
y_pred_cls = tf.argmax(y_pred, dimension=1)
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
session = tf.Session()
session.run(tf.initialize_all_variables())
def init_noise():
session.run(tf.initialize_variables([x_noise]))
init_noise()
train_batch_size = 64
def optimize(num_iterations, adversary_target_cls=None):
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# If we are searching for the adversarial noise, then
# use the adversarial target-class instead.
if adversary_target_cls is not None:
# The class-labels are One-Hot encoded.
# Set all the class-labels to zero.
y_true_batch = np.zeros_like(y_true_batch)
# Set the element for the adversarial target-class to 1.
y_true_batch[:, adversary_target_cls] = 1.0
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# If doing normal optimization of the neural network.
if adversary_target_cls is None:
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
else:
# Run the adversarial optimizer instead.
# Note that we have 'faked' the class above to be
# the adversarial target-class instead of the true class.
session.run(optimizer_adversary, feed_dict=feed_dict_train)
# Clip / limit the adversarial noise. This executes
# another TensorFlow operation. It cannot be executed
# in the same session.run() as the optimizer, because
# it may run in parallel so the execution order is not
# guaranteed. We need the clip to run after the optimizer.
session.run(x_noise_clip)
# Print status every 100 iterations.
if (i % 100 == 0) or (i == num_iterations - 1):
# Calculate the accuracy on the training-set.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Message for printing.
msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i, acc))
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
def get_noise():
# Run the TensorFlow session to retrieve the contents of
# the x_noise variable inside the graph.
noise = session.run(x_noise)
return np.squeeze(noise)
def plot_noise():
# Get the adversarial noise from inside the TensorFlow graph.
noise = get_noise()
# Print statistics.
print("Noise:")
print("- Min:", noise.min())
print("- Max:", noise.max())
print("- Std:", noise.std())
# Plot the noise.
plt.imshow(noise, interpolation='nearest', cmap='seismic',
vmin=-1.0, vmax=1.0)
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Get the adversarial noise from inside the TensorFlow graph.
noise = get_noise()
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9],
noise=noise)
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Split the test-set into smaller batches of this size.
test_batch_size = 256
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# Number of images in the test-set.
num_test = len(data.test.images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_test, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_test:
# The ending index for the next batch is denoted j.
j = min(i + test_batch_size, num_test)
# Get the images from the test-set between index i and j.
images = data.test.images[i:j, :]
# Get the associated labels.
labels = data.test.labels[i:j, :]
# Create a feed-dict with these images and labels.
feed_dict = {x: images,
y_true: labels}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Convenience variable for the true class-numbers of the test-set.
cls_true = data.test.cls
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / num_test
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, correct_sum, num_test))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
optimize(num_iterations=1000)
print_test_accuracy(show_example_errors=True)
init_noise()
optimize(num_iterations=1000, adversary_target_cls=3)
plot_noise()
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
def find_all_noise(num_iterations=1000):
# Adversarial noise for all target-classes.
all_noise = []
# For each target-class.
for i in range(num_classes):
print("Finding adversarial noise for target-class:", i)
# Reset the adversarial noise to zero.
init_noise()
# Optimize the adversarial noise.
optimize(num_iterations=num_iterations,
adversary_target_cls=i)
# Get the adversarial noise from inside the TensorFlow graph.
noise = get_noise()
# Append the noise to the array.
all_noise.append(noise)
# Print newline.
print()
return all_noise
all_noise = find_all_noise(num_iterations=300)
def plot_all_noise(all_noise):
# Create figure with 10 sub-plots.
fig, axes = plt.subplots(2, 5)
fig.subplots_adjust(hspace=0.2, wspace=0.1)
# For each sub-plot.
for i, ax in enumerate(axes.flat):
# Get the adversarial noise for the i'th target-class.
noise = all_noise[i]
# Plot the noise.
ax.imshow(noise,
cmap='seismic', interpolation='nearest',
vmin=-1.0, vmax=1.0)
# Show the classes as the label on the x-axis.
ax.set_xlabel(i)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
plot_all_noise(all_noise)
def make_immune(target_cls, num_iterations_adversary=500,
num_iterations_immune=200):
print("Target-class:", target_cls)
print("Finding adversarial noise ...")
# Find the adversarial noise.
optimize(num_iterations=num_iterations_adversary,
adversary_target_cls=target_cls)
# Newline.
print()
# Print classification accuracy.
print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False)
# Newline.
print()
print("Making the neural network immune to the noise ...")
# Try and make the neural network immune to this noise.
# Note that the adversarial noise has not been reset to zero
# so the x_noise variable still holds the noise.
# So we are training the neural network to ignore the noise.
optimize(num_iterations=num_iterations_immune)
# Newline.
print()
# Print classification accuracy.
print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False)
make_immune(target_cls=3)
make_immune(target_cls=3)
for i in range(10):
make_immune(target_cls=i)
# Print newline.
print()
for i in range(10):
make_immune(target_cls=i)
# Print newline.
print()
make_immune(target_cls=i)
# Print newline.
print()
plot_noise()
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
init_noise()
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Imports
Step2: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version
Step3: Load Data
Step4: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
Step5: The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.
Step6: Data Dimensions
Step7: Helper-function for plotting images
Step8: Plot a few images to see if data is correct
Step9: TensorFlow Graph
Step10: The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is
Step11: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
Step12: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
Step13: Adversarial Noise
Step14: The optimizer for the adversarial noise will try and minimize two loss-measures
Step15: When we create the new variable for the noise, we must inform TensorFlow which variable-collections that it belongs to, so we can later inform the two optimizers which variables to update.
Step16: Then we create a list of the collections that we want the new noise-variable to belong to. If we add the noise-variable to the collection tf.GraphKeys.VARIABLES then it will also get initialized with all the other variables in the TensorFlow graph, but it will not get optimized. This is a bit confusing.
Step17: Now we can create the new variable for the adversarial noise. It will be initialized to zero. It will not be trainable, so it will not be optimized along with the other variables of the neural network. This allows us to create two separate optimization procedures.
Step18: The adversarial noise will be limited / clipped to the given
Step19: The noisy image is just the sum of the input image and the adversarial noise.
Step20: When adding the noise to the input image, it may overflow the boundaries for a valid image, so we clip / limit the noisy image to ensure its pixel-values are between 0 and 1.
Step21: Convolutional Neural Network
Step22: Now that we have wrapped the input image in a PrettyTensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.
Step23: Note that pt.defaults_scope(activation_fn=tf.nn.relu) makes activation_fn=tf.nn.relu an argument for each of the layers constructed inside the with-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The defaults_scope makes it easy to change arguments for all of the layers.
Step24: Optimization of these variables in the neural network is done with the Adam-optimizer using the loss-measure that was returned from PrettyTensor when we constructed the neural network above.
Step25: Optimizer for Adversarial Noise
Step26: Show the list of variable-names. There is only one, which is the adversarial noise variable that we created above.
Step27: We will combine the loss-function for the normal optimization with a so-called L2-loss for the noise-variable. This should result in the minimum values for the adversarial noise along with the best classification accuracy.
Step28: Combine the normal loss-function with the L2-loss for the adversarial noise.
Step29: We can now create the optimizer for the adversarial noise. Because this optimizer is not supposed to update all the variables of the neural network, we must give it a list of the variables that we want updated, which is the variable for the adversarial noise. Also note the learning-rate is much greater than for the normal optimizer above.
Step30: We have now created two optimizers for the neural network, one for the variables of the neural network and another for the single variable with the adversarial noise.
Step31: Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
Step32: The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
Step33: TensorFlow Run
Step34: Initialize variables
Step35: This is a helper-function for initializing / resetting the adversarial noise to zero.
Step36: Call the function to initialize the adversarial noise.
Step37: Helper-function to perform optimization iterations
Step38: Below is the function for performing a number of optimization iterations so as to gradually improve the variables of the neural network. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
Step39: Helper-functions for getting and plotting the noise
Step40: This function plots the adversarial noise and prints some statistics.
Step41: Helper-function to plot example errors
Step42: Helper-function to plot confusion matrix
Step43: Helper-function for showing the performance
Step44: Normal optimization of neural network
Step45: The classification accuracy is now about 96-97% on the test-set. (This will vary each time you run this Python Notebook).
Step46: Find the adversarial noise
Step47: Now perform optimization of the adversarial noise. This uses the adversarial optimizer instead of the normal optimizer, which means that it only optimizes the variable for the adversarial noise, while ignoring all the other variables of the neural network.
Step48: The adversarial noise has now been optimized and it can be shown in a plot. The red pixels show positive noise-values and the blue pixels show negative noise-values. This noise-pattern is added to every input image. The positive (red) noise-values makes the pixels darker and the negative (blue) noise-values makes the pixels brighter. Examples of this are shown below.
Step49: When this noise is added to all the images in the test-set, the result is typically a classification accuracy of 10-15% depending on the target-class that was chosen. We can also see from the confusion matrix that most images in the test-set are now classified as the desired target-class - although some of the target-classes require more adversarial noise than others.
Step50: Adversarial noise for all target-classes
Step51: Plot the adversarial noise for all target-classes
Step52: Red pixels show positive noise values, and blue pixels show negative noise values.
Step53: Make immune to noise for target-class 3
Step54: Now try and run it again. It is now more difficult to find adversarial noise for the target-class 3. The neural network seems to have become somewhat immune to adversarial noise.
Step55: Make immune to noise for all target-classes
Step56: Make immune to all target-classes (double runs)
Step57: Plot the adversarial noise
Step58: Interestingly, the neural network now has a higher classification accuracy on noisy images than we had on clean images before all these optimizations.
Step59: Performance on clean images
Step60: The neural network now performs worse on clean images compared to noisy images.
Step61: Close TensorFlow Session
|
1,817
|
<ASSISTANT_TASK:>
Python Code:
import Lib.subdivision as sub
Method_subdivision=sub.WellGrid(Rect0=(0,0),Rect1=(50,50),Qw=1000,Qe=(200,400,300,100),h=26.25,phi=0.2)
Method_subdivision.Subdivision()
SL,TOF,SL_end,TOF_end=Method_subdivision.SLTrace(NSL=80)
Method_subdivision=sub.WellGrid(Rect0=(0,0),Rect1=(50,50),Qw=1000,Qe=(0,0,500,500),h=26.25,phi=0.2)
Method_subdivision.Subdivision(debug=0)
SL,TOF,SL_end,TOF_end=Method_subdivision.SLTrace(NSL=80)
import Lib.embedded as emb
Boundary_vert=[(0, 0), (0, 50), (50, 50), (50, 0)]
Well_vert=[(25,25)]
Qwell=[1000]
Qedge=[400,300,100,200]
Method_embedded=emb.WellGrid(Pts_e=Boundary_vert,Pts_w=Well_vert,Qe=Qedge,Qw=Qwell,Nbd=10,rw=0.25,h=26.25,phi=0.2,miu=1,kxy=(200,200))
Method_embedded.Meshing()
Method_embedded.FlowSol()
SL,TOF,SL_end,TOF_end=Method_embedded.SLtrace(NSL=80,deltaT=0.1,method='Adaptive')
P,V_x,V_y=Method_embedded.FieldPlot(vmax=50)
Boundary_vert=[(0, 0), (0, 50), (50, 50), (50, 0)]
Well_vert=[(25,25)]
Qwell=[1000]
Qedge=[0,500,500,0]
Method_embedded=emb.WellGrid(Pts_e=Boundary_vert,Pts_w=Well_vert,Qe=Qedge,Qw=Qwell,Nbd=10,rw=0.25,h=26.25,phi=0.2,miu=1,kxy=(200,200))
Method_embedded.Meshing()
Method_embedded.FlowSol()
SL,TOF,SL_end,TOF_end=Method_embedded.SLtrace(NSL=81,deltaT=0.1,method='Adaptive')
P,V_x,V_y=Method_embedded.FieldPlot(vmax=50)
Boundary_vert=[(2.88675,0), (0,5), (2.88675,10), (8.66025,10),(11.54701,5),(8.66025,0)]
Well_vert=[(4,5),(7.547,5)]
Qwell=[500,500]
Qedge=[0,250,250,250,250,0]
Method_embedded=emb.WellGrid(Pts_e=Boundary_vert,Pts_w=Well_vert,Qe=Qedge,Qw=Qwell,Nbd=7,rw=0.25,h=26.25,phi=0.2,miu=1,kxy=(200,200))
Method_embedded.Meshing()
Method_embedded.FlowSol()
SL,TOF,SL_end,TOF_end=Method_embedded.SLtrace(NSL=82,deltaT=0.00005,method='Adaptive',tol=0.01)
P,V_x,V_y=Method_embedded.FieldPlot(vmax=250)
import Lib.fillgrid as flg
Boundary_vert=[(2.88675,0), (0,5), (2.88675,10), (8.66025,10),(11.54701,5),(8.66025,0)]
Well_vert=[(4,5),(7.547,5)]
Qwell=[500,500]
flg.Fillgrid(Pts_e=Boundary_vert,Pts_w=Well_vert,Qw=Qwell,h=26.25,phi=0.2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Quick Start-Embedded Method
Step7: Fill-Grid Method
|
1,818
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
#%matplotlib notebook
import matplotlib
matplotlib.rcParams['figure.figsize'] = (9, 9)
import pandas as pd
def conv_func(s):
s = s.replace('<', '')
if s == 'ND':
return np.nan
elif s.strip() == '':
return np.nan
else:
return float(s)
url = "https://data.iledefrance.fr/explore/dataset/qualite-de-lair-mesuree-dans-la-station-chatelet/download/?format=csv&timezone=Europe/Berlin&use_labels_for_header=true"
#dtype_dict = {'NO': np.float64,
# 'NO2': np.float64,
# 'PM10': np.float64,
# 'CO2': np.float64,
# 'TEMP': np.float64,
# 'HUMI': np.float64}
converter_dict = {'NO': conv_func,
'NO2': conv_func,
'PM10': conv_func,
'CO2': conv_func,
'TEMP': conv_func,
'HUMI': conv_func}
df = pd.read_csv(url,
#encoding='iso-8859-1',
index_col=0,
sep=';',
decimal=',',
parse_dates=["DATE/HEURE"],
#dtype=dtype_dict,
#na_values='ND',
converters=converter_dict)
df = df.sort_index()
df.head()
df.columns
df.dtypes
df.index
df.PM10.plot(figsize=(18,6));
df.PM10.resample('7D').mean().plot(figsize=(18,6));
df.PM10.rolling('7D').mean().plot(figsize=(18,6));
df.PM10.resample('1M').mean().plot(figsize=(18,6));
ts = df.PM10
# https://jakevdp.github.io/PythonDataScienceHandbook/03.11-working-with-time-series.html#Digging-into-the-data
ts_mean = ts.groupby(ts.index.time).mean()
ts_median = ts.groupby(ts.index.time).median()
ts_quartile_1 = ts.groupby(ts.index.time).quantile(0.25)
ts_quartile_3 = ts.groupby(ts.index.time).quantile(0.75)
ts_percentile_5 = ts.groupby(ts.index.time).quantile(0.05)
ts_percentile_95 = ts.groupby(ts.index.time).quantile(0.95)
ts_min = ts.groupby(ts.index.time).min()
ts_max = ts.groupby(ts.index.time).max()
color = "blue"
ax = ts_mean.plot(y='duration', figsize=(18, 12), color=color, label="mean", alpha=0.75)
ts_median.plot(ax=ax, color=color, label="median", style="--", alpha=0.75)
ts_quartile_1.plot(ax=ax, color=color, alpha=0.5, style="-.", label="1st quartile")
ts_quartile_3.plot(ax=ax, color=color, alpha=0.5, style="-.", label="3rd quartile")
ts_percentile_5.plot(ax=ax, color=color, alpha=0.25, style=":", label="5th percentile")
ts_percentile_95.plot(ax=ax, color=color, alpha=0.25, style=":", label="95th percentile")
ts_min.plot(ax=ax, color=color, alpha=0.2, style=":", label="min")
ts_max.plot(ax=ax, color=color, alpha=0.2, style=":", label="max")
plt.fill_between(ts_percentile_5.index, ts_percentile_5.values, ts_percentile_95.values, facecolor=color, alpha=0.1)
plt.fill_between(ts_quartile_1.index, ts_quartile_1.values, ts_quartile_3.values, facecolor=color, alpha=0.1)
ts = df.TEMP
ax2 = ax.twinx()
# https://jakevdp.github.io/PythonDataScienceHandbook/03.11-working-with-time-series.html#Digging-into-the-data
ts_mean = ts.groupby(ts.index.time).mean()
ts_median = ts.groupby(ts.index.time).median()
ts_quartile_1 = ts.groupby(ts.index.time).quantile(0.25)
ts_quartile_3 = ts.groupby(ts.index.time).quantile(0.75)
ts_percentile_5 = ts.groupby(ts.index.time).quantile(0.05)
ts_percentile_95 = ts.groupby(ts.index.time).quantile(0.95)
ts_min = ts.groupby(ts.index.time).min()
ts_max = ts.groupby(ts.index.time).max()
color = "red"
ax2 = ts_mean.plot(y='duration', figsize=(18, 12), color=color, label="mean", alpha=0.75)
ts_median.plot(ax=ax2, color=color, label="median", style="--", alpha=0.75)
ts_quartile_1.plot(ax=ax2, color=color, alpha=0.5, style="-.", label="1st quartile")
ts_quartile_3.plot(ax=ax2, color=color, alpha=0.5, style="-.", label="3rd quartile")
ts_percentile_5.plot(ax=ax2, color=color, alpha=0.25, style=":", label="5th percentile")
ts_percentile_95.plot(ax=ax2, color=color, alpha=0.25, style=":", label="95th percentile")
ts_min.plot(ax=ax2, color=color, alpha=0.2, style=":", label="min")
ts_max.plot(ax=ax2, color=color, alpha=0.2, style=":", label="max")
plt.fill_between(ts_percentile_5.index, ts_percentile_5.values, ts_percentile_95.values, facecolor=color, alpha=0.1)
plt.fill_between(ts_quartile_1.index, ts_quartile_1.values, ts_quartile_3.values, facecolor=color, alpha=0.1)
ax.legend(loc='upper left')
ax2.legend(loc='upper right');
ax.set_xlabel('Time')
ax.set_ylabel('PM10');
ax2.set_ylabel('Temperature');
ax = df.PM10.groupby(df.index.time).mean().plot(figsize=(18,6), color="blue")
ax.set_xlabel("Time")
ax2 = ax.twinx()
df.TEMP.groupby(df.index.time).mean().plot(ax=ax2, color="red")
ax.legend(loc='upper left')
ax2.legend(loc='upper right');
ax = df.PM10.groupby(df.index.weekday).mean().plot(figsize=(18,6), color="blue")
ax.set_xlabel("Weekday")
ax2 = ax.twinx()
df.TEMP.groupby(df.index.weekday).mean().plot(ax=ax2, color="red")
ax.legend(loc='upper left')
ax2.legend(loc='upper right');
ax = df.PM10.groupby(df.index.month).mean().plot(figsize=(18,6), color="blue")
ax.set_xlabel("Month")
ax2 = ax.twinx()
df.TEMP.groupby(df.index.month).mean().plot(ax=ax2, color="red")
ax.legend(loc='upper left')
ax2.legend(loc='upper right');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Miscellaneous information about the DataFrame
Step2: Analysis of the fine-particle (PM10) concentration
|
1,819
|
<ASSISTANT_TASK:>
Python Code:
# Magic command so that plots are displayed inline in the notebook
%matplotlib inline
from sklearn import datasets
digits = datasets.load_digits()
print(digits.data.shape)
import matplotlib.pyplot as plt
plt.figure(1, figsize=(3, 3))
plt.imshow(digits.images[0], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
from sklearn import svm
create_model = lambda : svm.SVC(C=1, gamma=0.0001)
classifier = create_model()
classifier.fit(digits.data, digits.target)
from sklearn import metrics
predicted = classifier.predict(digits.data)
score = metrics.accuracy_score(digits.target, predicted)
print(score)
import joblib  # sklearn.externals.joblib was removed in modern scikit-learn; use the standalone package
joblib.dump(classifier, "./machine.pkl")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the Data
Step2: 1797 is the number of samples and 64 is the number of dimensions. Since each handwritten-digit image is 8×8 pixels, it carries 64 pixel values.
Step3: Create the Model
Step4: Training the Model
Step5: Evaluate the Model
Step6: Store the Model
|
1,820
|
<ASSISTANT_TASK:>
Python Code:
# Python language version
from platform import python_version
print('Python Language Version Used in This Jupyter Notebook:', python_version())
# Exercise 1 - Create an object from the class below, called roc1, passing 2 parameters, and then make a call
# to its attributes and methods
from math import sqrt
class Rocket():
def __init__(self, x=0, y=0):
self.x = x
self.y = y
def move_rocket(self, x_increment=0, y_increment=1):
self.x += x_increment
self.y += y_increment
def print_rocket(self):
print(self.x, self.y)
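A possible solution sketch for Exercise 1, assuming the two constructor parameters are the starting coordinates (the name roc1 comes from the exercise statement):

```python
from math import sqrt

class Rocket():
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def move_rocket(self, x_increment=0, y_increment=1):
        self.x += x_increment
        self.y += y_increment

    def print_rocket(self):
        print(self.x, self.y)

# create the object with 2 parameters, then access its attributes and methods
roc1 = Rocket(2, 3)
assert (roc1.x, roc1.y) == (2, 3)
roc1.move_rocket(1, 4)
roc1.print_rocket()   # prints: 3 7
```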
# Exercise 2 - Create a class called Pessoa() with the attributes: nome, cidade, telefone and e-mail. Use at least 2
# special methods in your class. Create an object of your class and make a call to at least one of its special
# methods.
# Exercise 3 - Create the Smartphone class with 2 attributes, tamanho and interface, and create the MP3Player class
# with the attribute capacidade. The MP3Player class must inherit the attributes of the Smartphone class.
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercises
|
1,821
|
<ASSISTANT_TASK:>
Python Code:
# utility imports
from __future__ import print_function
from pprint import pprint
from matplotlib import pyplot as plt
# main imports
import numpy as np
import distarray.globalapi as da
from distarray.plotting import plot_array_distribution
# output goodness
np.set_printoptions(precision=2)
# display figures inline
%matplotlib inline
distributions = [('n','b'), ('b','n'), ('b','b')]
sizes = [8, 16, 32, 64]
print(distributions)
context = da.Context()
def synthetic_data_generator(contextobj, datashape=(16, 16), distscheme=('b', 'n')):
    """Return objective matrix with specified size and distribution."""
distribution = da.Distribution(contextobj, shape=datashape, dist=distscheme)
_syndata = np.random.random(datashape)
syndata = contextobj.fromarray(_syndata, distribution=distribution)
return syndata
def parallel_gauss_elim(darray, pivot_row, k, m):
    """Perform in-place gaussian elimination locally on all engines.

    Parameters
    ----------
    darray : DistArray
        Handle for the array to be manipulated (global)
    pivot_row : numpy.ndarray
        Array containing pivot row (global)
    k : integer
        Pivot row index (global)
    m : numpy.ndarray
        Vector containing pivoting factors (global)
    """
import numpy as np
# retrieve local indices for submatrix that needs to be operated on
n_rows, n_cols = darray.distribution.global_shape
i_slice, j_slice = darray.distribution.local_from_global((slice(k+1, n_rows),
slice(k, n_cols)))
# limit the slices using actual size of local array
n_rows_local, n_cols_local = darray.ndarray.shape
i_indices, j_indices = (i_slice.indices(n_rows_local),
j_slice.indices(n_cols_local))
# determine which elements of global pivot row correspond to local entries
_, piv_slice = darray.distribution.global_from_local((slice(0, n_rows_local),
slice(*j_indices)))
# limit the slice to the size of the global pivot row
piv_indices = piv_slice.indices(n_cols)
# determine which elements of global pivot factor vector corresponds to local
mul_slice, _ = darray.distribution.global_from_local((slice(*i_indices),
slice(0, n_cols_local)))
# limit the slice to the size of the global pivot factor vector
mul_indices = mul_slice.indices(n_rows)
# perform the elimination to create zeros below pivot
if (i_indices[0] == i_indices[1] or j_indices[0] == j_indices[1]):
# computation for the local block is done
return
    else:
        for i, mul in zip(range(*i_indices), range(*mul_indices)):
            np.subtract(darray.ndarray[i, slice(*j_indices)],
                        np.multiply(m[mul], pivot_row[slice(*piv_indices)]),
                        out=darray.ndarray[i, slice(*j_indices)])
        return
context.register(parallel_gauss_elim)
def execute_ge(contextobj, darray):
N = min(darray.shape)
for k in range(N-1):
        pivot_factors = (darray[:, k]/darray[k, k]).toarray()
        contextobj.parallel_gauss_elim(darray, darray[k, :].toarray(), k, pivot_factors)
N = sizes[0]
for scheme in distributions:
d_array = 1000 * synthetic_data_generator(context, datashape=(N,N), distscheme=scheme)
execute_ge(context, d_array)
process_coords = [(0, 0), (1, 0), (2, 0), (3, 0)]
plot_array_distribution(d_array, process_coords, legend=True,
title=str("Distribution Scheme = " + str(scheme)))
#create a dictionary of lists
from collections import defaultdict
performance_data = defaultdict(list)
for scheme in distributions:
for N in sizes:
d_array = 1000 * synthetic_data_generator(context, datashape=(N,N), distscheme=scheme)
_time = %timeit -o -q execute_ge(context, d_array)
performance_data[scheme].append(_time.best)
plt.plot(sizes, performance_data[scheme], label=str(scheme), linewidth=2)
plt.legend(loc=4)
plt.xlabel('Problem Size N')
plt.ylabel('Execution Time [seconds]')
plt.title('Gaussian Elimination N vs t')
plt.grid(True)
def execute_lu(contextobj, darray):
# create placeholders for lower and upper triangular matrices
uarray = contextobj.fromarray(darray.toarray(), distribution=darray.distribution)
larray = contextobj.fromarray(np.zeros(darray.shape), distribution=darray.distribution)
N = min(darray.shape)
for k in range(N-1):
pivot_factors = (uarray[:, k]/uarray[k, k]).toarray()
pivot_factors[0:k] = 0.0
# populate lower triangular matrix
larray[:, k] = pivot_factors
contextobj.parallel_gauss_elim(uarray, uarray[k, :].toarray(), k, pivot_factors)
larray[-1, -1] = 1.0
return larray, uarray
N = sizes[0]
for scheme in distributions:
d_array = 10 * synthetic_data_generator(context, datashape=(N,N), distscheme=scheme)
L, U = execute_lu(context, d_array)
if (np.allclose(np.dot(L.toarray(), U.toarray()), d_array.toarray())):
print("Success: LU == A for distribution scheme = {}".format(scheme))
else:
print("Failure: LU != A for distribution scheme = {}".format(scheme))
process_coords = [(0, 0), (1, 0), (2, 0), (3, 0)]
plot_array_distribution(L, process_coords, legend=True,
title=str("Lower Triangular Piece L for Distribution Scheme = " + str(scheme)))
plot_array_distribution(U, process_coords, legend=True,
title=str("Upper Triangular Piece U for Distribution Scheme = " + str(scheme)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We now define the parameter space for our study. We will perform GE on matrices that are block distributed in any one or both dimensions, while simultaneously varying the size
Step3: Next, we create a context and devise a scheme for generating some synthetic data (in this case a matrix) on which to operate
Step5: In order for the Gaussian Elimination operation to be truly parallel, we need to define a uFunc to perform the desired computation
Step6: We want to use a nice syntax for calling out uFunc hence we register it with our context (alternatively, we could have just used Context.apply which has a more obscure call format)
Step7: All that is left now is to define the high level function that runs on the client and manages the GE operation. Using this function is a way of ensuring synchronicity between the many engines performing this operation. After a pivot row is determined, it is broadcast along with a vector of pivoting factors to the worker engines via the parallel_gauss_elim uFunc. Note how we have actually subverted the need to use canonical MPI constructs (in this case, MPI_Bcast()) by making use of the fact that our uFunc can accept arbitrary arguments.
Step8: In order to enable the reader to better visualize what is happening in this example, we will make the first set of runs with the size fixed at 8, while cycling through the distribution types. We also print out a graphical representation of the distribution of the resulting upper triangular matrices.
Step9: Now we write a quick routine that runs through all sizes and distributions and records the runtimes. The resulting information is best represented as a plot the data for which is collected in a Dictionary called performance_data. Depending on the contents of your sizes vector, the runtimes may very a great deal on this section. To see the progress of execution, the user may choose to disable the -q (quiet) option on the %timeit magic function.
Step10: We see that we observe similar performance from all three distributions with a block-block map marginally most efficient.
Step11: Let us first test our implementation by checking if we can reproduce our objective matrix $\mathbf{A}$ by multiplying $\mathbf{L}$ and $\mathbf{U}$. For multiplication, we will convert the DistArrays back to NumPy arrays and for comparison of floating point entries we use numpy.allclose() which returns True if all entries are equal within a tolerance.
Step12: Hence we have validated our implementation. Just to confirm that our matrices are actually upper and lower triangular, lets generate a schematic for one $\mathbf{LU}$ pair
|
1,822
|
<ASSISTANT_TASK:>
Python Code:
def find_tile(tile, State):
n = len(State)
for row in range(n):
for col in range(n):
if State[row][col] == tile:
return row, col
to_list = lambda State: [list(row) for row in State]
to_tuple = lambda State: tuple(tuple(row) for row in State)
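The two helpers are inverses of each other; a minimal round-trip check:

```python
to_list = lambda State: [list(row) for row in State]
to_tuple = lambda State: tuple(tuple(row) for row in State)

state = ((1, 2), (3, 0))
assert to_tuple(to_list(state)) == state    # round trip preserves the state
assert isinstance(to_list(state)[0], list)  # rows become mutable lists
```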
def move_dir(State, row, col, dx, dy):
State = to_list(State)
State[row ][col ] = State[row + dx][col + dy]
State[row + dx][col + dy] = 0
return to_tuple(State)
def next_states(State):
n = len(State)
row, col = find_tile(0, State)
New_States = set()
Directions = [ (1, 0), (-1, 0), (0, 1), (0, -1) ]
for dx, dy in Directions:
if row + dx in range(n) and col + dy in range(n):
New_States.add(move_dir(State, row, col, dx, dy))
return New_States
start = ( (8, 0, 6),
(5, 4, 7),
(2, 3, 1)
)
next_states(start)
goal = ( (0, 1, 2),
(3, 4, 5),
(6, 7, 8)
)
start2 = ( ( 0, 1, 2, 3 ),
( 4, 5, 6, 8 ),
( 14, 7, 11, 10 ),
( 9, 15, 12, 13 )
)
goal2 = ( ( 0, 1, 2, 3 ),
( 4, 5, 6, 7 ),
( 8, 9, 10, 11 ),
( 12, 13, 14, 15 )
)
def manhattan(stateA, stateB):
n = len(stateA)
PositionsB = {}
for row in range(n):
for col in range(n):
tile = stateB[row][col]
PositionsB[tile] = (row, col)
result = 0
for rowA in range(n):
for colA in range(n):
tile = stateA[rowA][colA]
if tile != 0:
rowB, colB = PositionsB[tile]
result += abs(rowA - rowB) + abs(colA - colB)
return result
manhattan(start, goal)
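A worked example on a hypothetical 2x2 configuration helps verify the heuristic: tile 1 is one step from its target, tile 2 is two steps away, tile 3 is one step away, and the blank is ignored, giving a total of 4.

```python
def manhattan(stateA, stateB):
    # sum over all non-blank tiles of |row difference| + |column difference|
    n = len(stateA)
    pos_b = {stateB[r][c]: (r, c) for r in range(n) for c in range(n)}
    total = 0
    for r in range(n):
        for c in range(n):
            tile = stateA[r][c]
            if tile != 0:
                rb, cb = pos_b[tile]
                total += abs(r - rb) + abs(c - cb)
    return total

a = ((0, 2),
     (3, 1))
b = ((0, 1),
     (2, 3))
assert manhattan(a, b) == 4   # tile 1: 1 step, tile 2: 2 steps, tile 3: 1 step
assert manhattan(a, a) == 0   # a state has distance zero to itself
```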
import ipycanvas as cnv
import time
Colors = ['white', 'lightblue', 'pink', 'magenta', 'orange', 'red', 'yellow', 'lightgreen', 'gold',
'CornFlowerBlue', 'Coral', 'Cyan', 'orchid', 'DarkSalmon', 'DeepPink', 'green'
]
size = 100
def draw(State, canvas, dx, dy, tile, offset):
canvas.text_align = 'center'
canvas.text_baseline = 'middle'
with cnv.hold_canvas(canvas):
canvas.clear()
n = len(State)
for row in range(n):
for col in range(n):
tile_to_draw = State[row][col]
color = Colors[tile_to_draw]
canvas.fill_style = color
if tile_to_draw not in (0, tile):
x = col * size
y = row * size
canvas.fill_rect(x, y, size, size)
canvas.line_width = 3.0
x += size // 2
y += size // 2
canvas.stroke_text(str(tile_to_draw), x, y)
elif tile_to_draw == tile:
x = col * size + offset * dx
y = row * size + offset * dy
canvas.fill_rect(x, y, size, size)
canvas.line_width = 3.0
x += size // 2
y += size // 2
if tile_to_draw != 0:
canvas.stroke_text(str(tile_to_draw), x, y)
def create_canvas(n):
canvas = cnv.Canvas(size=(size * n, size * n))
canvas.font = '100px serif'
return canvas
delay = 0.002
def tile_and_direction(state, next_state):
row0, col0 = find_tile(0, state)
row1, col1 = find_tile(0, next_state)
return state[row1][col1], col0-col1, row0-row1
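A small sanity check on a hypothetical 2x2 pair of states: the moved tile is the one that now occupies the blank's old position, and (dx, dy) is its direction of travel in grid units.

```python
def find_tile(tile, State):
    n = len(State)
    for row in range(n):
        for col in range(n):
            if State[row][col] == tile:
                return row, col

def tile_and_direction(state, next_state):
    # the moved tile now sits where the blank used to be
    row0, col0 = find_tile(0, state)
    row1, col1 = find_tile(0, next_state)
    return state[row1][col1], col0 - col1, row0 - row1

s0 = ((1, 0), (2, 3))
s1 = ((0, 1), (2, 3))   # tile 1 slid one cell to the right into the blank
assert tile_and_direction(s0, s1) == (1, 1, 0)
```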
def animation(Solution):
start = Solution[0]
n = len(start)
canvas = create_canvas(n)
draw(start, canvas, 0, 0, 0, 0)
m = len(Solution)
display(canvas)
for i in range(m-1):
state = Solution[i]
tile, dx, dy = tile_and_direction(state, Solution[i+1])
for offset in range(size+1):
draw(state, canvas, dx, dy, tile, offset)
time.sleep(delay)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Since breadth first search stores the set of states that have been visited, we have to represent states by immutable objects and hence we represent the states as tuples of tuples. In order to be able to change these states, we have to transform these tuples of tuples into lists of lists.
Step2: The function to_tuple transforms a list of lists into a tuple of tuples.
Step3: Given a State that satisfies
Step4: Given a State of the sliding puzzle, the function next_states(State) computes all those states that can be reached from State in one step.
Step5: Below, we have defined the start state, which is the state shown in the figure above on the left.
Step6: Below is an instance of the $4 \times 4$ puzzle that can be solved in 36 steps.
Step7: For informed search we need to implement a
Step8: Animation
Step9: The module time is part of the standard library, so it is preinstalled. We have imported it because we need the function time.sleep(secs) to pause the animation for a specified time.
Step10: The global variable Colors specifies the colors of the tiles.
Step11: The global variable size specifies the size of one tile in pixels.
Step12: The function draw(State, canvas, dx, dy, tile, offset) draws a given State of the sliding puzzle, where tile has been moved by offset pixels into the direction (dx, dy).
Step13: The global variable delay controls the speed of the animation.
Step14: The function call tile_and_direction(state, next_state) takes a state and the state that follows this state and returns a triple (tile, dx, dy) where tile is the tile that is moved to transform state into next_state and (dx, dy) is the direction in which this tile is moved.
Step15: Given a list of states representing a solution to the sliding puzzle, the function call
|
1,823
|
<ASSISTANT_TASK:>
Python Code:
from random import shuffle

import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import train_test_split

plt.hist(values.map(len))
def pad_smiles(smiles_string, smile_max_length):
if len(smiles_string) < smile_max_length:
return smiles_string + " " * (smile_max_length - len(smiles_string))
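Note one quirk worth checking: pad_smiles has no explicit return for strings already at smile_max_length, so it returns None for them, which is why the list comprehension filters on its truthiness. A minimal check, using made-up SMILES strings:

```python
def pad_smiles(smiles_string, smile_max_length):
    # right-pad with spaces up to the fixed length
    if len(smiles_string) < smile_max_length:
        return smiles_string + " " * (smile_max_length - len(smiles_string))

assert pad_smiles("CCO", 5) == "CCO  "   # padded to length 5
assert pad_smiles("CCCCC", 5) is None    # no explicit return on the long branch
```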
padded_smiles = [pad_smiles(i, smile_max_length) for i in values if pad_smiles(i, smile_max_length)]
shuffle(padded_smiles)
def create_char_list(char_set, smile_series):
for smile in smile_series:
char_set.update(set(smile))
return char_set
char_set = set()
char_set = create_char_list(char_set, padded_smiles)
print(len(char_set))
char_set
char_list = list(char_set)
chars_in_dict = len(char_list)
char_to_index = dict((c, i) for i, c in enumerate(char_list))
index_to_char = dict((i, c) for i, c in enumerate(char_list))
char_to_index
X_train = np.zeros((len(padded_smiles), smile_max_length, chars_in_dict), dtype=np.float32)
X_train.shape
for i, smile in enumerate(padded_smiles):
for j, char in enumerate(smile):
X_train[i, j, char_to_index[char]] = 1
X_train, X_test = train_test_split(X_train, test_size=0.33, random_state=42)
X_train.shape
# need to build RNN to encode. some issues include what the 'embedded dimension' is (vector length of embedded sequence)
from keras import backend as K
from keras.objectives import binary_crossentropy #objs or losses
from keras.models import Model
from keras.layers import Input, Dense, Lambda
from keras.layers.core import Dense, Activation, Flatten, RepeatVector
from keras.layers.wrappers import TimeDistributed
from keras.layers.recurrent import GRU
from keras.layers.convolutional import Convolution1D
def Encoder(x, latent_rep_size, smile_max_length, epsilon_std = 0.01):
h = Convolution1D(9, 9, activation = 'relu', name='conv_1')(x)
h = Convolution1D(9, 9, activation = 'relu', name='conv_2')(h)
h = Convolution1D(10, 11, activation = 'relu', name='conv_3')(h)
h = Flatten(name = 'flatten_1')(h)
h = Dense(435, activation = 'relu', name = 'dense_1')(h)
def sampling(args):
z_mean_, z_log_var_ = args
batch_size = K.shape(z_mean_)[0]
epsilon = K.random_normal(shape=(batch_size, latent_rep_size),
mean=0., stddev = epsilon_std)
return z_mean_ + K.exp(z_log_var_ / 2) * epsilon
z_mean = Dense(latent_rep_size, name='z_mean', activation = 'linear')(h)
z_log_var = Dense(latent_rep_size, name='z_log_var', activation = 'linear')(h)
def vae_loss(x, x_decoded_mean):
x = K.flatten(x)
x_decoded_mean = K.flatten(x_decoded_mean)
xent_loss = smile_max_length * binary_crossentropy(x, x_decoded_mean)
kl_loss = - 0.5 * K.mean(1 + z_log_var - K.square(z_mean) - \
K.exp(z_log_var), axis = -1)
return xent_loss + kl_loss
return (vae_loss, Lambda(sampling, output_shape=(latent_rep_size,),
name='lambda')([z_mean, z_log_var]))
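In math form, the value returned by vae_loss above is the length-scaled reconstruction cross-entropy plus the KL divergence of the approximate posterior from a standard normal (here L is smile_max_length and the mean is taken over latent dimensions):

```latex
\mathcal{L}(x,\hat{x}) \;=\; L\,\mathrm{BCE}(x,\hat{x})
\;-\;\tfrac{1}{2}\,\mathrm{mean}\!\left(1+\log\sigma^{2}-\mu^{2}-\sigma^{2}\right)
```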
def Decoder(z, latent_rep_size, smile_max_length, charset_length):
h = Dense(latent_rep_size, name='latent_input', activation = 'relu')(z)
h = RepeatVector(smile_max_length, name='repeat_vector')(h)
h = GRU(501, return_sequences = True, name='gru_1')(h)
h = GRU(501, return_sequences = True, name='gru_2')(h)
h = GRU(501, return_sequences = True, name='gru_3')(h)
return TimeDistributed(Dense(charset_length, activation='softmax'),
name='decoded_mean')(h)
x = Input(shape=(smile_max_length, len(char_set)))
_, z = Encoder(x, latent_rep_size=292, smile_max_length=smile_max_length)
encoder = Model(x, z)
encoded_input = Input(shape=(292,))
decoder = Model(encoded_input, Decoder(encoded_input, latent_rep_size=292,
smile_max_length=smile_max_length,
charset_length=len(char_set)))
x1 = Input(shape=(smile_max_length, len(char_set)), name='input_1')
vae_loss, z1 = Encoder(x1, latent_rep_size=292, smile_max_length=smile_max_length)
autoencoder = Model(x1, Decoder(z1, latent_rep_size=292,
smile_max_length=smile_max_length,
charset_length=len(char_set)))
autoencoder.compile(optimizer='Adam', loss=vae_loss, metrics =['accuracy'])
autoencoder.fit(X_train, X_train, shuffle = True, validation_data=(X_test, X_test))
def sample(a, temperature=1.0):
# helper function to sample an index from a probability array
a = np.log(a) / temperature
a = np.exp(a) / np.sum(np.exp(a))
return np.argmax(np.random.multinomial(1, a, 1))
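The temperature parameter controls how peaked the sampling distribution is; a quick check that a very low temperature makes the draw effectively deterministic at the argmax:

```python
import numpy as np

def sample(a, temperature=1.0):
    # sharpen (T < 1) or flatten (T > 1) the distribution, then draw one index
    a = np.log(a) / temperature
    a = np.exp(a) / np.sum(np.exp(a))
    return np.argmax(np.random.multinomial(1, a, 1))

probs = np.array([0.1, 0.8, 0.1])
# at T = 0.01 the middle entry gets essentially all of the probability mass
assert sample(probs, temperature=0.01) == 1
```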
test_smi = values[0]
test_smi = pad_smiles(test_smi, smile_max_length)
Z = np.zeros((1, smile_max_length, len(char_list)), dtype=np.bool)
for t, char in enumerate(test_smi):
Z[0, t, char_to_index[char]] = 1
# autoencoder.
string = ""
for i in autoencoder.predict(Z):
for j in i:
index = sample(j)
string += index_to_char[index]
print("\n callback guess: " + string)
values[0]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some Keras version housekeeping: Keras 1.x keeps its loss functions in keras.objectives, while Keras 2.x renamed that module to keras.losses. We just have to be consistent with the installed version.
Step2: Here I've adapted the exact architecture used in the paper
Step3: encoded_input looks like a dummy layer here
Step4: create a separate autoencoder model that combines the encoder and decoder (I guess the former cells are for accessing those separate parts of the model)
Step5: we compile and fit
|
1,824
|
<ASSISTANT_TASK:>
Python Code:
from pprint import pprint
from IPython.display import Image
from sklearn.pipeline import Pipeline
from sklearn.pipeline import FeatureUnion
from sklearn.preprocessing import FunctionTransformer
from sklearn.preprocessing import StandardScaler
from dstoolbox.utils import get_nodes_edges
from dstoolbox.visualization import make_graph
my_pipe = Pipeline([
('step0', FunctionTransformer()),
('step1', FeatureUnion([
('feat0', FunctionTransformer()),
('feat1', FunctionTransformer()),
])),
('step2', FunctionTransformer()),
])
nodes, edges = get_nodes_edges('my_pipe', my_pipe)
pprint(nodes)
pprint(edges)
my_pipe = Pipeline([
('step1', FunctionTransformer()),
('step2', FunctionTransformer()),
('step3', FeatureUnion([
('feat3_1', FunctionTransformer()),
('feat3_2', Pipeline([
('step10', FunctionTransformer()),
('step20', FeatureUnion([
('p', FeatureUnion([
('p0', FunctionTransformer()),
('p1', FunctionTransformer()),
])),
('q', FeatureUnion([
('q0', FunctionTransformer()),
('q1', FunctionTransformer()),
])),
])),
('step30', StandardScaler()),
])),
('feat3_3', FeatureUnion([
('feat10', FunctionTransformer()),
('feat11', FunctionTransformer()),
])),
])),
('step4', StandardScaler()),
('step5', FeatureUnion([
('feat5_1', FunctionTransformer()),
('feat5_2', FunctionTransformer()),
('feat5_3', FunctionTransformer()),
])),
('step6', StandardScaler()),
])
graph = make_graph('my pipe', my_pipe)
Image(graph.create_png())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Nodes and edges of a pipeline
Step2: Visualizing a pipeline
|
1,825
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.plot(range(20))
N = 50
x = np.random.rand(N)
y = np.random.rand(N)
plt.scatter(x, y)
plt.show()
y = np.random.rand(5)
x = np.arange(5)
plt.bar(x,y)
plt.show()
N = 50
x = np.random.rand(N)
y = np.random.rand(N)
colors = np.random.rand(N)
area = np.pi * (15 * np.random.rand(N))**2 # 0 to 15 point radiuses
plt.scatter(x, y, s=area, c=colors, alpha=0.5)
plt.colorbar()
plt.show()
X = np.linspace(-np.pi, np.pi, 256,endpoint=True)
C,S = np.cos(X), np.sin(X)
plt.plot(X,C)
plt.plot(X,S)
plt.show()
# Create a new subplot from a grid of 1x1
plt.subplot(111)
# Plot cosine using blue color with a continuous line of width 1 (pixels)
plt.plot(X, C, color="blue", linewidth=1.0, linestyle="-")
# Plot sine using green color with a continuous line of width 1 (pixels)
plt.plot(X, S, color="green", linewidth=1.0, linestyle="-")
# Set x limits
plt.xlim(-4.0,4.0)
# Set x ticks
plt.xticks(np.linspace(-4,4,9,endpoint=True))
# Set y limits
plt.ylim(-1.0,1.0)
# Set y ticks
plt.yticks(np.linspace(-1,1,5,endpoint=True))
# Save figure using 72 dots per inch
# savefig("../exercice_2.png",dpi=72)
# Show result on screen
plt.show()
plt.figure(figsize=(10,6), dpi=80)
plt.xlim(X.min()*1.1, X.max()*1.1)
plt.ylim(C.min()*1.1, C.max()*1.1)
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-")
plt.show()
plt.figure(figsize=(10,6), dpi=80)
plt.xlim(X.min()*1.1, X.max()*1.1)
plt.ylim(C.min()*1.1, C.max()*1.1)
plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi],
[r'$-\pi$', r'$-\pi/2$', r'$0$', r'$+\pi/2$', r'$+\pi$'])
plt.yticks([-1, 0, +1],
[r'$-1$', r'$0$', r'$+1$'])
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-")
plt.show()
plt.figure(figsize=(10,6), dpi=80)
plt.xlim(X.min()*1.1, X.max()*1.1)
plt.ylim(C.min()*1.1, C.max()*1.1)
plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi],
[r'$-\pi$', r'$-\pi/2$', r'$0$', r'$+\pi/2$', r'$+\pi$'])
plt.yticks([-1, 0, +1],
[r'$-1$', r'$0$', r'$+1$'])
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-")
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.title('sin/cos')
plt.show()
ax = plt.gca()
plt.xlim(X.min()*1.1, X.max()*1.1)
plt.ylim(C.min()*1.1, C.max()*1.1)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0))
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-")
plt.show()
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0))
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-", label="cosine")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-", label="sine")
plt.xlim(X.min()*1.1, X.max()*1.1)
plt.ylim(C.min()*1.1, C.max()*1.1)
plt.legend(loc='upper left', frameon=False)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Libraries we will be using
Step2: A basic matplotlib using Python's range function for data
Step3: A scatter plot with using NumPy's random function to create data
Step4: A bar chart
Step5: A scatter plot with colours, area and alpha blending
Step6: Changing matplotlib default output
Step7: Setting the defaults explictily
Step8: Increasing size, x y limits and change colours
Step9: Set tick values and Latex
Step10: Label Plot and Axis
Step11: Moving spines
Step12: Adding a Legend
|
1,826
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
%matplotlib inline
import xarray as xr
import datetime
import numpy as np
from dask.distributed import LocalCluster, Client
import s3fs
import cartopy.crs as ccrs
import cartopy.io.shapereader as shpreader
import cartopy
import boto3
import matplotlib.pyplot as plt
import os
import pandas as pd
import imageio
import shutil
from matplotlib.dates import MonthLocator, DayLocator, YearLocator
from matplotlib.ticker import MultipleLocator
from matplotlib import colors as c
import salem
import warnings
import multiprocessing.popen_spawn_posix
from distributed import Client
warnings.filterwarnings("ignore")
bucket = 'era5-pds'
client = boto3.client('s3')
var = 'air_temperature_at_2_metres'
client = Client()
client
fs = s3fs.S3FileSystem(anon=False)
def inc_mon(indate):
if indate.month < 12:
return datetime.datetime(indate.year, indate.month+1, 1)
else:
return datetime.datetime(indate.year+1, 1, 1)
def gen_d_range(start, end):
rr = []
while start <= end:
rr.append(start)
start = inc_mon(start)
return rr
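A quick check of the month-range helpers, including the year rollover handled by inc_mon:

```python
import datetime

def inc_mon(indate):
    # first day of the following month, rolling over at December
    if indate.month < 12:
        return datetime.datetime(indate.year, indate.month + 1, 1)
    return datetime.datetime(indate.year + 1, 1, 1)

def gen_d_range(start, end):
    rr = []
    while start <= end:
        rr.append(start)
        start = inc_mon(start)
    return rr

months = gen_d_range(datetime.datetime(2020, 11, 1), datetime.datetime(2021, 2, 1))
assert [(d.year, d.month) for d in months] == [(2020, 11), (2020, 12), (2021, 1), (2021, 2)]
```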
def get_z(dtime,var):
f_zarr = 'era5-pds/zarr/{year}/{month:02d}/data/{var}.zarr/'.format(year=dtime.year, month=dtime.month,var=var)
return xr.open_zarr(s3fs.S3Map(f_zarr, s3=fs),consolidated=True)
def gen_zarr_range(start, end,var):
return [get_z(tt,var) for tt in gen_d_range(start, end)]
def get_zarr_data(time_start, time_end, var, longitude_west, longitude_east, latitude_north, latitude_south):
tmp_a = gen_zarr_range(time_start, time_end,var)
tmp_all = xr.concat(tmp_a, dim='time0')
temperature = tmp_all[var].sel(lon=slice(longitude_east, longitude_west),lat=slice(latitude_north,latitude_south))
if 'temp' in var:
        temperature = temperature - 273.15  # convert Kelvin to Celsius (0 °C = 273.15 K)
lon = temperature.lon.values; lat = temperature.lat.values
return lon, lat, temperature
%%time
lon,lat,temperature = get_zarr_data(datetime.datetime(2021,2,10), datetime.datetime(2021,2,18),var,360, 0, 90, -26)
def make_globe_imgs(lon, lat, data, img_out,timevar,colorbar_name):
us_shapes = list(shpreader.Reader('shapefiles/gadm36_USA_shp/gadm36_USA_1.shp').geometries())
plt.ioff()
for i in range(0,len(data[timevar])):
order_nr = str(i).rjust(3,'0')
img_name = f'{img_out}/{order_nr}_temp.png'
if os.path.exists(img_name):
continue
fig = plt.figure(figsize=(10,8))
ax = fig.add_subplot(111,projection=ccrs.Orthographic(-103,60))
ax.set_global()
ax.gridlines()
ax.add_geometries(us_shapes,ccrs.PlateCarree(), edgecolor='#222933',linewidth=0.5,facecolor='none')
ax.add_feature(cartopy.feature.BORDERS,color='#222933',linewidth=0.5)
ax.add_feature(cartopy.feature.COASTLINE,color='#222933',linewidth=0.5)
tmp = data[i].compute()
t = pd.to_datetime(str(data[i][timevar].values))
timestring = t.strftime('%b %d %Y %H:%M')
print ('tmp',timestring)
pcm = ax.pcolormesh(lon,lat,tmp,vmin=-35,vmax=35,cmap='RdYlBu_r',transform=ccrs.PlateCarree())
S1 = ax.contour(lon,lat,tmp,[0],colors='black',linewidths=0.4,alpha = 0.6,transform=ccrs.PlateCarree()) #0 line
cbar = plt.colorbar(pcm,fraction=0.019, pad=0.03)
cbar.set_label(colorbar_name)
ttl = plt.title(timestring,fontsize=20,fontweight = 'bold',y=1.05)
plt.savefig(img_name,bbox_inches = 'tight',dpi=200)
plt.close()
img_out = 'texas_imgs'
if not os.path.exists(img_out):
os.mkdir(img_out)
%%time
make_globe_imgs(lon, lat, temperature,img_out, 'time0','Temperature [°C]')
def make_animation(img_out_folder):
files = [f for f in sorted(os.listdir(img_out_folder))]
fileList = []
for file in files:
if not file.startswith('.'):
complete_path = f"{img_out}/{file}"
fileList.append(complete_path)
writer = imageio.get_writer('era5_us_cold_wave.mp4', fps=8)
for im in fileList:
writer.append_data(imageio.imread(im))
writer.close()
make_animation(img_out)
def make_plot(data,dataset_key1,title,unit,**kwargs):
plt.ion()
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
if 'daily' in kwargs:
if kwargs['daily'] == True:
data_time = data['time0']
else:
data_time = data['time0.year']
if 'trend' in kwargs:
z = np.polyfit(data_time, data, 1)
p = np.poly1d(z)
plt.plot(data_time,p(data_time),"r--",c='#1B9AA0')
plt.plot(data_time,data, linestyle = '-',marker='*',linewidth = 1,c='#EC5840',label = dataset_key1)
plt.grid(color='#C3C8CE',alpha=1)
if 'locator' in kwargs:
ml = MultipleLocator(kwargs['locator'][1])
bl = MultipleLocator(kwargs['locator'][0])
else:
ml = MultipleLocator(1)
bl = MultipleLocator(5)
ax.xaxis.set_minor_locator(ml)
ax.xaxis.set_major_locator(bl)
#plt.minorticks_on()
if len(kwargs) > 0:
if 'ylabel' in kwargs:
plt.ylabel(kwargs['ylabel'],fontsize=15)
if 'xlabel' in kwargs:
plt.ylabel(kwargs['xlabel'],fontsize=15)
if 'compare_line' in kwargs:
props = dict(boxstyle='round', facecolor='#1B9AA0',edgecolor='#1B9AA0')
ax.text(max(data_time) + 1,kwargs['compare_line'],str("%.1f" % kwargs['compare_line']) + unit,verticalalignment='center',bbox = props,color='white')
ax.plot([min(data_time) - 1,max(data_time) + 1], [kwargs['compare_line'],kwargs['compare_line']], '-',linewidth = 2, c='#1B9AA0', label = 'w0 year average temperature')
if 'ylim' in kwargs:
plt.ylim(np.min(kwargs['ylim']),np.max(kwargs['ylim']))
plt.xticks(rotation = 0)
try:
plt.xlim(np.min(data_time)-0.5,np.max(data_time)+0.5)
except:
pass
ttl = plt.title(title,fontsize=20,fontweight = 'bold',y = 1.05)
plt.savefig('plot_out' + title + '.png', dpi=300,transparent=False)
plt.show()
plt.close()
longitude_east = -107.9; longitude_west = -93.5
latitude_north = 37; latitude_south = 24.7
shp = salem.read_shapefile('shapefiles/gadm36_USA_shp/gadm36_USA_1.shp')
shp_texas = shp[shp['NAME_1']=='Texas']
%%time
lon,lat,temp_texas_appr = get_zarr_data(datetime.datetime(1980,1,1), datetime.datetime(2021,2,18),var,longitude_west+360, longitude_east+360, latitude_north, latitude_south)
temp_texas_appr['lon'] = temp_texas_appr.lon.values - 360
monhtly_mins = temp_texas_appr.resample(time0="1MS").min(dim=['time0']).salem.roi(shape=shp_texas).min(dim=['lon','lat'])
feb_mins = monhtly_mins.sel(time0=monhtly_mins['time0.month']==2).compute()
monhtly_means = temp_texas_appr.resample(time0="1MS").mean(dim=['time0']).salem.roi(shape=shp_texas).mean(dim=['lon','lat'])
feb_means = monhtly_means.sel(time0=monhtly_means['time0.month']==2).compute()
make_plot(feb_mins,'ECMWF ERA5','Minimum Temperature in Texas in February','°C',compare_line=np.mean(feb_mins),ylabel='Temperature °C')
make_plot(feb_means,'ECMWF ERA5','Mean Temperature in Texas in February','°C',trend=True,compare_line=np.mean(feb_means),ylabel='Temperature °C')
jan_mins = monhtly_mins.sel(time0=monhtly_mins['time0.month']==1).compute()
jan_means = monhtly_means.sel(time0=monhtly_means['time0.month']==1).compute()
make_plot(jan_mins,'ECMWF ERA5','Min Temperature in Texas January','°C',compare_line=np.mean(jan_mins),
ylabel='Temperature °C')
make_plot(jan_means,'ECMWF ERA5','Mean Temperature in Texas January','°C',compare_line=np.mean(jan_means),
ylabel='Temperature °C')
current_year_data = temp_texas_appr.sel(time0=temp_texas_appr['time0.year']==2021).resample(time0="1D").mean(dim=['time0']).salem.roi(shape=shp_texas).mean(dim=['lon','lat'])
make_plot(current_year_data,'ECMWF ERA5','Daily Temperature in Texas in 2021','°C',daily = True)
daily_winter_mins = temp_texas_appr.sel(time0=temp_texas_appr['time0.season']=='DJF').resample(time0="1D").min(dim=['time0']).salem.roi(shape=shp_texas).min(dim=['lon','lat']).compute()
df = daily_winter_mins.to_dataframe()
df.sort_values('air_temperature_at_2_metres').head(10)
janfeb_2011 = temp_texas_appr.sel(time0=slice('2011-01-01','2011-02-28')).resample(time0="1D").mean(dim=['time0']).salem.roi(shape=shp_texas).mean(dim=['lon','lat']).compute()
make_plot(janfeb_2011,'ECMWF ERA5','Daily Mean Temperature in Texas in 2011 January and February','°C',daily = True,locator = [10,1])
if os.path.exists(img_out):
shutil.rmtree(img_out)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: First we define some variables for reading zarr
Step3: Here we define some functions to read in zarr data.
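The month-stepping logic in those helpers can be verified standalone; this sketch repeats the two date helpers from the cell above and walks a four-month range:

```python
import datetime

def inc_mon(indate):
    # step to the first day of the next month, rolling over the year boundary
    if indate.month < 12:
        return datetime.datetime(indate.year, indate.month + 1, 1)
    return datetime.datetime(indate.year + 1, 1, 1)

def gen_d_range(start, end):
    # inclusive list of month starts between start and end
    rr = []
    while start <= end:
        rr.append(start)
        start = inc_mon(start)
    return rr

months = gen_d_range(datetime.datetime(2020, 11, 1), datetime.datetime(2021, 2, 1))
# → four month starts: 2020-11, 2020-12, 2021-01, 2021-02
```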
Step4: This is where we read in the data. We need to define the time range and the variable name. First we select an area covering the Northern Hemisphere and part of the Southern Hemisphere to make a globe animation for February.
Step5: Here we read in the data for the animation. You can change the time range, but note that the data has hourly resolution, so reading it will take some time.
Step6: This is where we start making images for the animation.
Step7: Now we will make animation of all the images.
Step8: Here we first define approximate Texas borders, and also read in the Texas polygon from the shapefile. We read data from zarr using the approximate borders, but for calculating averages we use the exact polygon.
Step9: We read in ERA5 data for the approximate Texas area from 1980 until 18 February 2021.
Step10: Here we find the minimum and average values for February in Texas. We look into February first because the 2021 cold wave happened in February.
Step11: The mean temperature was also the lowest in the 41-year record. 2010 was relatively cold as well; however, as we saw from the last image, temperatures did not drop nearly as low.
Step12: As January can be really cold too, we look into the January data as well. We can see that 1984 had very cold temperatures in January, almost as cold as 2021.
Step13: The mean temperature shows how minimum and mean values can change the overall picture. 1984 had cold days, but overall it wasn't that cold. On the other hand, 1985 had a relatively cold January.
Step14: Let's see what happened during the 2021 cold wave at the daily level.
Step15: Below we see the coldest daily temperatures in Texas history. 15 Feb 2021 had the lowest one: -28.5 °C.
Step16: As the images above show, there were very low temperatures in February 2011 as well, and news reports from that time mention issues with the power grids. This is why we decided to look into the 2011 data too. Temperatures really did drop quite low, yet this is barely visible in the mean temperatures. The reason seems to be that after the cold spell at the end of January and the beginning of February, temperatures rose quite high, which pulls the average up; on many days it was even above 15 °C.
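A toy numpy illustration (made-up temperatures, not ERA5 data) of how a few warm days mask a cold snap in the monthly mean while the minimum still records it:

```python
import numpy as np

# hypothetical daily mean temperatures (°C): a short cold snap, then warm days
daily_temps = np.array([-10.0, -12.0, -8.0, 5.0, 12.0, 15.0, 16.0, 14.0])

monthly_min = daily_temps.min()    # the cold snap survives in the minimum
monthly_mean = daily_temps.mean()  # the warm tail pulls the mean well above zero
```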
|
1,827
|
<ASSISTANT_TASK:>
Python Code:
from pysismo.pspreprocess import Preprocess
from obspy.signal.cross_correlation import xcorr
from obspy import read
from obspy.core import Stream
import matplotlib.pyplot as plt
import numpy as np
import os
%matplotlib inline
# list of example variables for Preprocess class
FREQMAX = 1./1 # bandpass parameters
FREQMIN = 1/20.0
CORNERS = 2
ZEROPHASE = True
ONEBIT_NORM = False # one-bit normalization
PERIOD_RESAMPLE = 0.02 # resample period to decimate traces, after band-pass
FREQMIN_EARTHQUAKE = 1/75.0 # earthquakes periods band
FREQMAX_EARTHQUAKE = 1/25.0
WINDOW_TIME = 0.5 * 1./FREQMAX_EARTHQUAKE # time window to calculate time-normalisation weights
WINDOW_FREQ = 0.0002 # freq window (Hz) to smooth ampl spectrum
CROSSCORR_TMAX = 300 # set maximun cross-correlation time window both positive and negative!
# initialise the Preprocess class
PREPROCESS = Preprocess(FREQMIN, FREQMAX, FREQMIN_EARTHQUAKE,
FREQMAX_EARTHQUAKE, CORNERS, ZEROPHASE,
PERIOD_RESAMPLE, WINDOW_TIME, WINDOW_FREQ,
ONEBIT_NORM)
# specify the path to the desired waveforms, the example folder xcorr is provided.
example_folder = 'tools/examples/xcorr'
def paths(folder_path, extension, sort=False):
"""Function that returns a list of desired absolute paths called abs_paths
of files that contain a given extension, e.g. .txt should be entered as
folder_path, txt. This function will run recursively through and find
any and all files within this folder with that extension!
"""
abs_paths = []
for root, dirs, files in os.walk(folder_path):
for f in files:
fullpath = os.path.join(root, f)
if os.path.splitext(fullpath)[1] == '.{}'.format(extension):
abs_paths.append(fullpath)
if sort:
abs_paths = sorted(abs_paths, key=paths_sort)
return abs_paths
# create a list of possible MSEED files inside this folder to iterate over.
abs_paths = paths(example_folder, 'mseed')
preprocessed_traces = []
for abs_path in abs_paths:
st_headers = read(abs_path, headonly=True)
starttime = st_headers[0].stats.starttime
endtime = starttime + 10*60 #only input 10min waveforms for this example
# import a trace from the example waveform
st = read(abs_path, starttime=starttime, endtime=endtime)
# merge stream
st.merge(fill_value='interpolate')
# select trace from merged stream
tr = st[0]
#preprocess the trace
tr = PREPROCESS.bandpass_filt(tr)
tr = PREPROCESS.trace_downsample(tr)
# copy the trace for time normalisation
tr_copy = tr
# process for time normalisation
tr = PREPROCESS.time_norm(tr, tr_copy)
tr = PREPROCESS.spectral_whitening(tr)
preprocessed_traces.append(tr)
# set the two traces tr1 and tr2 to be cross-correlated
tr1, tr2 = preprocessed_traces
# initializing time and data arrays of cross-correlation; because xcorrs can be symmetric, the final waveform will be
# symmetric about 0, where the maximum extents are CROSSCORR_TMAX in seconds, both +ve and -ve.
nmax = int(CROSSCORR_TMAX / PERIOD_RESAMPLE)
timearray = np.arange(-nmax * PERIOD_RESAMPLE, (nmax + 1)*PERIOD_RESAMPLE, PERIOD_RESAMPLE)
# the shift length provides the ability to choose the overlap between the traces to be correlated.
SHIFT_LEN = (len(timearray) - 1) * 0.5
crosscorr = xcorr(tr1, tr2, shift_len=len(tr1.data)/2, full_xcorr=True)[2]
plt.figure()
plt.plot(crosscorr)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: The Preprocess class requires many input parameters to function. Below is a list of examples.
Step3: Now perform a cross-correlation using obspy's c-wrapped xcorr function.
|
1,828
|
<ASSISTANT_TASK:>
Python Code:
tmax = .2
t = np.linspace(0., tmax, 1000)
x0, y0 = 0., 0.
vx0, vy0 = 1., 1.
g = 10.
x = vx0 * t
y = -g * t**2/2. + vy0 * t
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_aspect("equal")
plt.plot(x, y, label = "Exact solution")
plt.grid()
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
dt = 0.02 # Pas de temps
X0 = np.array([0., 0., vx0, vy0])
nt = int(tmax/dt) # Nombre de pas
ti = np.linspace(0., nt * dt, nt)
def derivate(X, t):
return np.array([X[2], X[3], 0., -g])
def Euler(func, X0, t):
dt = t[1] - t[0]
nt = len(t)
X = np.zeros([nt, len(X0)])
X[0] = X0
for i in range(nt-1):
X[i+1] = X[i] + func(X[i], t[i]) * dt
return X
%time X_euler = Euler(derivate, X0, ti)
x_euler, y_euler = X_euler[:,0], X_euler[:,1]
plt.figure()
plt.plot(x, y, label = "Exact solution")
plt.plot(x_euler, y_euler, "or", label = "Euler")
plt.grid()
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
def RK4(func, X0, t):
dt = t[1] - t[0]
nt = len(t)
X = np.zeros([nt, len(X0)])
X[0] = X0
for i in range(nt-1):
k1 = func(X[i], t[i])
k2 = func(X[i] + dt/2. * k1, t[i] + dt/2.)
k3 = func(X[i] + dt/2. * k2, t[i] + dt/2.)
k4 = func(X[i] + dt * k3, t[i] + dt)
X[i+1] = X[i] + dt / 6. * (k1 + 2. * k2 + 2. * k3 + k4)
return X
%time X_rk4 = RK4(derivate, X0, ti)
x_rk4, y_rk4 = X_rk4[:,0], X_rk4[:,1]
plt.figure()
plt.plot(x, y, label = "Exact solution")
plt.plot(x_euler, y_euler, "or", label = "Euler")
plt.plot(x_rk4, y_rk4, "gs", label = "RK4")
plt.grid()
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
from scipy import integrate
X_odeint = integrate.odeint(derivate, X0, ti)
%time x_odeint, y_odeint = X_odeint[:,0], X_odeint[:,1]
plt.figure()
plt.plot(x, y, label = "Exact solution")
plt.plot(x_euler, y_euler, "or", label = "Euler")
plt.plot(x_rk4, y_rk4, "gs", label = "RK4")
plt.plot(x_odeint, y_odeint, "mv", label = "ODEint")
plt.grid()
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reformulation
Step2: Runge Kutta 4
Step3: Using ODEint
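To see why RK4 is preferred over the Euler scheme, here is a quick standalone accuracy check on dy/dt = -y (exact solution e^{-t}), using single-variable versions of the same steppers:

```python
import numpy as np

def euler_step(f, y, t, dt):
    return y + dt * f(y, t)

def rk4_step(f, y, t, dt):
    k1 = f(y, t)
    k2 = f(y + dt / 2 * k1, t + dt / 2)
    k3 = f(y + dt / 2 * k2, t + dt / 2)
    k4 = f(y + dt * k3, t + dt)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda y, t: -y            # dy/dt = -y, exact solution exp(-t)
dt, n = 0.1, 10
y_e, y_r, t = 1.0, 1.0, 0.0
for _ in range(n):
    y_e = euler_step(f, y_e, t, dt)
    y_r = rk4_step(f, y_r, t, dt)
    t += dt

err_euler = abs(y_e - np.exp(-1.0))  # ~2e-2 for Euler
err_rk4 = abs(y_r - np.exp(-1.0))    # several orders of magnitude smaller
```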
|
1,829
|
<ASSISTANT_TASK:>
Python Code:
3 # a rank 0 tensor; this is a scalar with shape []
[1. ,2., 3.] # a rank 1 tensor; this is a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]] # a rank 2 tensor; a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]] # a rank 3 tensor with shape [2, 1, 3]
import tensorflow as tf
node1 = tf.constant(3.0, dtype=tf.float32)
node2 = tf.constant(4.0, dtype=tf.float32)
print(node1, node2)
sess = tf.Session()
print(sess.run([node1, node2]))
node3 = tf.add(node1, node2)
print("node3: ", node3)
print("sess.run(node3): ", sess.run(node3))
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b # + provides a shortcut for tf.add(a, b)
print(sess.run(adder_node, {a:3, b:4.5}))
print(sess.run(adder_node, {a: [1,3], b: [2,4]}))
add_and_triple = adder_node * 3
print(sess.run(add_and_triple, {a:3, b:4.5}))
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W * x + b
init = tf.global_variables_initializer()
sess.run(init)
print(sess.run(linear_model, {x:[1,2,3,4]}))
import tensorflow as tf
# Create variables and placeholder
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
# Create the model
linear_model = W * x + b
# Create a session object
sess = tf.Session()
# Initialize all the variables in a TensorFlow program
init = tf.global_variables_initializer()
sess.run(init)
# Evaluate linear_model for several values of x simultaneously
print(sess.run(linear_model, {x:[1,2,3,4]}))
y = tf.placeholder(tf.float32)
squared_deltas = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_deltas)
print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]}))
fixW = tf.assign(W, [-1.])
fixb = tf.assign(b, [1.])
sess.run([fixW, fixb])
print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]}))
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
sess.run(init) # reset values to incorrect defaults.
for i in range(1000):
sess.run(train, {x:[1,2,3,4], y:[0,-1,-2,-3]})
print(sess.run([W, b]))
import numpy as np
import tensorflow as tf
# Model parameters
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
# Model input and output
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)
# loss
loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
# training data
x_train = [1,2,3,4]
y_train = [0,-1,-2,-3]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init) # reset values to wrong
for i in range(1000):
sess.run(train, {x:x_train, y:y_train})
# evaluate training accuracy
curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x:x_train, y:y_train})
print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TensorFlow Core tutorial
Step2: This gives Python access to all of TensorFlow's classes, methods, and symbols. Most of the documentation assumes you have already done this.
Step3: Notice that printing the nodes does not output the values 3.0 and 4.0 as you might expect. Instead, they are nodes that, when evaluated, would produce 3.0 and 4.0, respectively. To actually evaluate the nodes, we must run the computational graph within a session. A session encapsulates the control and state of the TensorFlow runtime.
Step4: We can build more complicated computations by combining Tensor nodes with operations (Operations are also nodes.). For example, we can add our two constant nodes and produce a new graph as follows
Step5: TensorFlow provides a utility called TensorBoard that can display a picture of the computational graph. Here is a screenshot showing how TensorBoard visualizes the graph
Step6: The preceding three lines are a bit like a function or a lambda in which we define two input parameters (a and b) and then an operation on them. We can evaluate this graph with multiple inputs by using the feed_dict parameter to specify Tensors that provide concrete values to these placeholders
Step7: In TensorBoard, the graph looks like this
Step8: The preceding computational graph would look as follows in TensorBoard
Step9: Constants are initialized when you call tf.constant, and their value can never change. By contrast, variables are not initialized when you call tf.Variable. To initialize all the variables in a TensorFlow program, you must explicitly call a special operation as follows
Step10: It is important to realize init is a handle to the TensorFlow sub-graph that initializes all the global variables. Until we call sess.run, the variables are uninitialized.
Step11: Summary of the code for the model so far
Step12: We've created a model, but we don't know how good it is yet. To evaluate the model on training data, we need a y placeholder to provide the desired values, and we need to write a loss function.
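The same loss can be evaluated outside the graph; a plain-numpy sketch of the sum-of-squares loss for the initial parameters, which evaluates to 23.66:

```python
import numpy as np

# the loss the TensorFlow graph computes, written directly in numpy
W, b = 0.3, -0.3
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, -1.0, -2.0, -3.0])

linear_model = W * x + b
loss = np.sum((linear_model - y) ** 2)   # → 23.66
```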
Step13: We could improve this manually by reassigning the values of W and b to the perfect values of -1 and 1. A variable is initialized to the value provided to tf.Variable but can be changed using operations like tf.assign. For example, W=-1 and b=1 are the optimal parameters for our model. We can change W and b accordingly
Step14: We guessed the "perfect" values of W and b, but the whole point of machine learning is to find the correct model parameters automatically. We will show how to accomplish this in the next section.
Step15: Now we have done actual machine learning! Although doing this simple linear regression doesn't require much TensorFlow core code, more complicated models and methods to feed data into your model necessitate more code. Thus TensorFlow provides higher level abstractions for common patterns, structures, and functionality. We will learn how to use some of these abstractions in the next section.
|
1,830
|
<ASSISTANT_TASK:>
Python Code:
# Import packages here:
import math as m
import numpy as np
from IPython.display import Image
import matplotlib.pyplot as plt
# Properties of Materials (engineeringtoolbox.com, Cengel, Tian, DuPont, http://www.dtic.mil/dtic/tr/fulltext/u2/438718.pdf)
# Coefficient of Thermal Expansion
alphaAluminum = 0.0000131 # in/in/*F
alphaPTFE = 0.0000478 # in/in/*F (over the range in question)
# Elastic Moduli
EAluminum = 10000000 # psi
EAluminumCryo = 11000000 # psi
EPTFE = 40000 # psi
EPTFECryo = 500000 # psi
# Yield Strength
sigmaY_PTFE = 1300 # psi
sigmaY_PTFECryo = 19000 # psi
# Poisson's Ratio
nuAluminum = 0.33 # in/in
nuPTFE = 0.46 # in/in
# Temperature Change Between Ambient and LN2
DeltaT = 389 # *F
# Geometry of Parts
# Main Ring Outer Radius
roMain = 2.0000 # in
# End Cap Inner Radius
riCap = 1.3750 # in
# Interfacial Radius
r = 1.5000 # in
# Liner Thickness
t = 0.125 # in
m = 2.00  # gasket factor (note: this shadows the earlier "import math as m", which is unused)
P = 45 # psi
yAmbient = 1200 # psi
sigmaPTFEAmbient1 = yAmbient
sigmaPTFEAmbient2 = m*P
sigmaPTFEAmbient = sigmaPTFEAmbient1  # the larger of the two requirements governs
deltaLinerAmbient = (sigmaPTFEAmbient/EPTFE)*t
print('The change in liner thickness due to compression must be', "%.4f" % deltaLinerAmbient, 'in, in order to achieve a proper seal.')
rCryo = r - r*alphaAluminum*DeltaT
DeltaR = r - rCryo
print('The maximum change in end cap radius equals: ', "%.4f" % DeltaR, 'in')
print('This means that the maximum theoretical interference for the shrink fit is ', "%.4f" % DeltaR, 'in')
deltaLinerAmbientMax = DeltaR - 0.00125
print('The achievable ambient temperature change in liner thickness due to shrink fitting is', "%.4f" % deltaLinerAmbientMax, 'in')
tCryo = t - t*alphaPTFE*DeltaT
print ('The liner thickness at cryogenic temperature is', "%.4f" % tCryo,'in')
deltat = t*alphaPTFE*DeltaT
print ('The change in liner thickness due to thermal contraction is', "%.4f" % deltat, 'in')
tGap = t - deltaLinerAmbient
print ('The ambient temperature liner gap width is', "%.4f" % tGap, 'in')
deltaGap = tGap*alphaAluminum*DeltaT
print ('The change in gap width is', "%.4f" % deltaGap, 'in')
deltaLinerCryo = deltaLinerAmbient + deltaGap - deltat
print ('The total change in liner thickness at cryogenic temperature is', "%.4f" % deltaLinerCryo, 'in')
sigmaPTFECryo = (deltaLinerCryo/tCryo)*EPTFECryo
print('Thus, the maximum achievable pressure exerted on the PTFE at cryogenic temperature is', "%.2f" % sigmaPTFECryo, 'psi')
h = 0.125
mu = 1.2
deltaInterference = ((2*P*r**4)/(mu*h*EAluminum))*((roMain**2 - riCap**2)/((roMain**2 - r**2)*(r**2 - riCap**2)))
print('The intereference thickness needed to overcome the pressure force on the end caps is', "%.4f" % deltaInterference, 'in')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ASME Pressure Vessel Code Equations
Step2: Change in Liner Thickness Necessary to Achieve Seating Stress
Step3: To know if this can be achieved, we must examine how much we can actually shrink the end cap, and whether or not that will allow enough clearance to fit the end cap into place before expansion.
Step4: Clearance for the End Cap
Step5: The necessary change in liner thickness is less than the achievable change in liner thickness.
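The two numbers compared in that statement can be checked with a quick standalone calculation (property and geometry values copied from the cells above):

```python
# quick numeric check of the seating-stress and shrink-fit numbers
alphaAluminum = 0.0000131   # in/in/°F
EPTFE = 40000.0             # psi, ambient PTFE modulus
sigma_seat = 1200.0         # psi, ambient seating stress
t_liner = 0.125             # in, liner thickness
r = 1.5                     # in, interfacial radius
DeltaT = 389.0              # °F, ambient-to-LN2 temperature change
clearance = 0.00125         # in, assembly clearance assumed in the notebook

delta_needed = (sigma_seat / EPTFE) * t_liner   # compression required for the seal
DeltaR = r * alphaAluminum * DeltaT             # shrink available from cooling the cap
delta_achievable = DeltaR - clearance           # usable interference after clearance
```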
Step6: Although the load on the PTFE at cryogenic temperature is greater, the yield strength of the PTFE is much greater there, at 19000 psi, so the ratio of load to yield strength is much higher at ambient temperature than at cryogenic temperature. We do not have a value to use for the seating stress of cryogenic PTFE, so we must either trust the ASME code for our extreme temperature conditions, or empirically test for the seating stress, which we do not have time to do before this project is terminated.
|
1,831
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
def sigmoid(x):
return ((1/(1+np.exp(-x))))
self.activation_function = sigmoid
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = final_inputs
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
output_errors = targets-final_outputs
# TODO: Backpropagated error
hidden_errors = np.dot(output_errors,self.weights_hidden_to_output)*(hidden_outputs*(1-hidden_outputs)).T
# TODO: Update the weights
self.weights_hidden_to_output += output_errors*hidden_outputs.T*self.lr
self.weights_input_to_hidden += (inputs*hidden_errors*self.lr).T
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = final_inputs
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
import sys
### Set the hyperparameters here ###
epochs = 1000
learning_rate = 0.05
hidden_nodes = 3
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
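Before wiring the forward pass into the class, it can be sanity-checked in isolation. This sketch reuses the small test weights and inputs from the unit tests at the end of this notebook (an illustration, not the project's `NeuralNetwork` class):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# toy forward pass: 3 inputs -> 2 hidden units (sigmoid) -> 1 linear output
w_ih = np.array([[0.1, 0.4, -0.3],
                 [-0.2, 0.5, 0.2]])
w_ho = np.array([[0.3, -0.1]])

x = np.array([[0.5], [-0.2], [0.1]])
hidden = sigmoid(w_ih @ x)
output = w_ho @ hidden          # regression output: identity activation
# → approximately 0.09998924, matching the test_run unit test
```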
Step8: Training the network
Step9: Check out your predictions
Step10: Thinking about your results
|
1,832
|
<ASSISTANT_TASK:>
Python Code:
! pip install google-cloud
! pip install google-cloud-storage
! pip install requests
! pip install tensorflow_datasets
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth application-default login
! gcloud auth login
! pip install tensorflow-enterprise-addons
# Restart the kernel after pip installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
from tensorflow_enterprise_addons import cloudtuner
import kerastuner
REGION = 'us-central1'
PROJECT_ID = '[your-project-id]' #@param {type:"string"}
! gcloud config set project $PROJECT_ID
from tensorflow.keras.datasets import mnist
(x, y), (val_x, val_y) = mnist.load_data()
x = x.astype('float32') / 255.
val_x = val_x.astype('float32') / 255.
x = x[:10000]
y = y[:10000]
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.optimizers import Adam
def build_model(hp):
model = Sequential()
model.add(Flatten(input_shape=(28, 28)))
# the number of layers is tunable
for _ in range(hp.get('num_layers')):
model.add(Dense(units=64, activation='relu'))
model.add(Dense(10, activation='softmax'))
# the learning rate is tunable
model.compile(
optimizer=Adam(lr=hp.get('learning_rate')),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model
# Configure the search space
HPS = kerastuner.engine.hyperparameters.HyperParameters()
HPS.Float('learning_rate', min_value=1e-4, max_value=1e-2, sampling='log')
HPS.Int('num_layers', 2, 10)
tuner = cloudtuner.CloudTuner(
build_model,
project_id=PROJECT_ID,
region=REGION,
objective='accuracy',
hyperparameters=HPS,
max_trials=5,
directory='tmp_dir/1')
tuner.search_space_summary()
tuner.search(x=x, y=y, epochs=10, validation_data=(val_x, val_y))
tuner.results_summary()
model = tuner.get_best_models(num_models=1)[0]
print(model)
print(model.weights)
import tensorflow as tf
import tensorflow_datasets as tfds
(ds_train, ds_test), ds_info = tfds.load(
'mnist',
split=['train', 'test'],
shuffle_files=True,
as_supervised=True,
with_info=True,
)
# tfds.load introduces a new logger which results in duplicate log messages.
# To mitigate this issue following removes Jupyter notebook root logger handler. More details @
# https://stackoverflow.com/questions/6729268/log-messages-appearing-twice-with-python-logging
import logging
logger = logging.getLogger()
logger.handlers = []
def normalize_img(image, label):
Normalizes images: `uint8` -> `float32`.
return tf.cast(image, tf.float32) / 255., label
ds_train = ds_train.map(
normalize_img, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds_train = ds_train.cache()
ds_train = ds_train.shuffle(ds_info.splits['train'].num_examples)
ds_train = ds_train.batch(128)
ds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE)
ds_test = ds_test.map(
normalize_img, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds_test = ds_test.batch(128)
ds_test = ds_test.cache()
ds_test = ds_test.prefetch(tf.data.experimental.AUTOTUNE)
def build_pipeline_model(hp):
model = Sequential()
model.add(Flatten(input_shape=(28, 28, 1)))
# the number of layers is tunable
for _ in range(hp.get('num_layers')):
model.add(Dense(units=64, activation='relu'))
model.add(Dense(10, activation='softmax'))
# the learning rate is tunable
model.compile(
optimizer=Adam(lr=hp.get('learning_rate')),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model
# Configure the search space
pipeline_HPS = kerastuner.engine.hyperparameters.HyperParameters()
pipeline_HPS.Float('learning_rate', min_value=1e-4, max_value=1e-2, sampling='log')
pipeline_HPS.Int('num_layers', 2, 10)
pipeline_tuner = cloudtuner.CloudTuner(
build_pipeline_model,
project_id=PROJECT_ID,
region=REGION,
objective='accuracy',
hyperparameters=pipeline_HPS,
max_trials=5,
directory='tmp_dir/2')
pipeline_tuner.search(x=ds_train, epochs=10, validation_data=ds_test)
pipeline_tuner.results_summary()
pipeline_model = pipeline_tuner.get_best_models(num_models=1)[0]
print(pipeline_model)
print(pipeline_model.weights)
# Configure the search space
STUDY_CONFIG = {
'algorithm': 'ALGORITHM_UNSPECIFIED',
'metrics': [{
'goal': 'MAXIMIZE',
'metric': 'accuracy'
}],
'parameters': [{
'discrete_value_spec': {
'values': [0.0001, 0.001, 0.01]
},
'parameter': 'learning_rate',
'type': 'DISCRETE'
}, {
'integer_value_spec': {
'max_value': 10,
'min_value': 2
},
'parameter': 'num_layers',
'type': 'INTEGER'
}, {
'discrete_value_spec': {
'values': [32, 64, 96, 128]
},
'parameter': 'units',
'type': 'DISCRETE'
}],
'automatedStoppingConfig': {
'decayCurveStoppingConfig': {
'useElapsedTime': True
}
}
}
tuner = cloudtuner.CloudTuner(
build_model,
project_id=PROJECT_ID,
region=REGION,
study_config=STUDY_CONFIG,
max_trials=10,
directory='tmp_dir/3')
tuner.search_space_summary()
tuner.search(x=x, y=y, epochs=5, steps_per_epoch=2000, validation_steps=1000, validation_data=(val_x, val_y))
tuner.results_summary()
from multiprocessing.dummy import Pool
# If you are running this tutorial in a notebook locally, you may run multiple
# tuning loops concurrently using multi-processes instead of multi-threads.
# from multiprocessing import Pool
import time
import datetime
STUDY_ID = 'CloudTuner_study_{}'.format(
datetime.datetime.now().strftime('%Y%m%d_%H%M%S'))
def single_tuner(tuner_id):
Instantiate a `CloudTuner` and set up its `tuner_id`.
Args:
tuner_id: Integer.
Returns:
A CloudTuner.
tuner = cloudtuner.CloudTuner(
build_model,
project_id=PROJECT_ID,
region=REGION,
objective='accuracy',
hyperparameters=HPS,
max_trials=18,
study_id=STUDY_ID,
directory=('tmp_dir/cloud/%s' % (STUDY_ID)))
tuner.tuner_id = str(tuner_id)
return tuner
def search_fn(tuner):
# Start searching from different time points for each worker to avoid `model.build` collision.
time.sleep(int(tuner.tuner_id)*2)
tuner.search(x=x, y=y, epochs=5, validation_data=(val_x, val_y), verbose=0)
return tuner
# Number of search loops we would like to run in parallel
num_parallel_trials = 4
tuners = [single_tuner(i) for i in range(num_parallel_trials)]
p = Pool(processes=num_parallel_trials)
result = p.map(search_fn, tuners)
p.close()
p.join()
result[0].results_summary()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up your Google Cloud project
Step2: Install CloudTuner
Step3: Restart the Kernel
Step4: Import libraries and define constants
Step5: Tutorial
Step6: Define model building function
Step7: Instantiate CloudTuner
Step8: Let's use the search_space_summary() method to display what the search space for this optimization study looks like.
Step9: Search
Step10: Results
Step11: Get the Best Model
Step12: Tutorial
Step13: Load MNIST Data
Step15: Build training pipeline
Step16: Create and train the model
Step17: Tutorial
Step18: Instantiate CloudTuner
Step19: Let's use the search_space_summary() method to display what the search space for this optimization study looks like.
Step20: Search
Step21: Results
Step23: Tutorial
Step24: Search
Step25: Results
|
1,833
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
from scipy.stats import norm
import statsmodels.api as sm
import matplotlib.pyplot as plt
from datetime import datetime
import requests
from io import BytesIO
# Dataset
wpi1 = requests.get('http://www.stata-press.com/data/r12/wpi1.dta').content
data = pd.read_stata(BytesIO(wpi1))
data.index = data.t
# Fit the model
mod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,1))
res = mod.fit(disp=False)
print(res.summary())
# Dataset
data = pd.read_stata(BytesIO(wpi1))
data.index = data.t
data['ln_wpi'] = np.log(data['wpi'])
data['D.ln_wpi'] = data['ln_wpi'].diff()
# Graph data
fig, axes = plt.subplots(1, 2, figsize=(15,4))
# Levels
axes[0].plot(data.index._mpl_repr(), data['wpi'], '-')
axes[0].set(title='US Wholesale Price Index')
# Log difference
axes[1].plot(data.index._mpl_repr(), data['D.ln_wpi'], '-')
axes[1].hlines(0, data.index[0], data.index[-1], 'r')
axes[1].set(title='US Wholesale Price Index - difference of logs');
# Graph data
fig, axes = plt.subplots(1, 2, figsize=(15,4))
fig = sm.graphics.tsa.plot_acf(data.iloc[1:]['D.ln_wpi'], lags=40, ax=axes[0])
fig = sm.graphics.tsa.plot_pacf(data.iloc[1:]['D.ln_wpi'], lags=40, ax=axes[1])
# Fit the model
mod = sm.tsa.statespace.SARIMAX(data['ln_wpi'], trend='c', order=(1,1,1))
res = mod.fit(disp=False)
print(res.summary())
# Dataset
air2 = requests.get('http://www.stata-press.com/data/r12/air2.dta').content
data = pd.read_stata(BytesIO(air2))
data.index = pd.date_range(start=datetime(int(data.time[0]), 1, 1), periods=len(data), freq='MS')
data['lnair'] = np.log(data['air'])
# Fit the model
mod = sm.tsa.statespace.SARIMAX(data['lnair'], order=(2,1,0), seasonal_order=(1,1,0,12), simple_differencing=True)
res = mod.fit(disp=False)
print(res.summary())
# Dataset
friedman2 = requests.get('http://www.stata-press.com/data/r12/friedman2.dta').content
data = pd.read_stata(BytesIO(friedman2))
data.index = data.time
# Variables
endog = data.loc['1959':'1981', 'consump']
exog = sm.add_constant(data.loc['1959':'1981', 'm2'])
# Fit the model
mod = sm.tsa.statespace.SARIMAX(endog, exog, order=(1,0,1))
res = mod.fit(disp=False)
print(res.summary())
# Dataset
raw = pd.read_stata(BytesIO(friedman2))
raw.index = raw.time
data = raw.loc[:'1981']
# Variables
endog = data.loc['1959':, 'consump']
exog = sm.add_constant(data.loc['1959':, 'm2'])
nobs = endog.shape[0]
# Fit the model
mod = sm.tsa.statespace.SARIMAX(endog.loc[:'1978-01-01'], exog=exog.loc[:'1978-01-01'], order=(1,0,1))
fit_res = mod.fit(disp=False)
print(fit_res.summary())
mod = sm.tsa.statespace.SARIMAX(endog, exog=exog, order=(1,0,1))
res = mod.filter(fit_res.params)
# In-sample one-step-ahead predictions
predict = res.get_prediction()
predict_ci = predict.conf_int()
# Dynamic predictions
predict_dy = res.get_prediction(dynamic='1978-01-01')
predict_dy_ci = predict_dy.conf_int()
# Graph
fig, ax = plt.subplots(figsize=(9,4))
npre = 4
ax.set(title='Personal consumption', xlabel='Date', ylabel='Billions of dollars')
# Plot data points
data.loc['1977-07-01':, 'consump'].plot(ax=ax, style='o', label='Observed')
# Plot predictions
predict.predicted_mean.loc['1977-07-01':].plot(ax=ax, style='r--', label='One-step-ahead forecast')
ci = predict_ci.loc['1977-07-01':]
ax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color='r', alpha=0.1)
predict_dy.predicted_mean.loc['1977-07-01':].plot(ax=ax, style='g', label='Dynamic forecast (1978)')
ci = predict_dy_ci.loc['1977-07-01':]
ax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color='g', alpha=0.1)
legend = ax.legend(loc='lower right')
# Prediction error
# Graph
fig, ax = plt.subplots(figsize=(9,4))
npre = 4
ax.set(title='Forecast error', xlabel='Date', ylabel='Forecast - Actual')
# In-sample one-step-ahead predictions and 95% confidence intervals
predict_error = predict.predicted_mean - endog
predict_error.loc['1977-10-01':].plot(ax=ax, label='One-step-ahead forecast')
ci = predict_ci.loc['1977-10-01':].copy()
ci.iloc[:,0] -= endog.loc['1977-10-01':]
ci.iloc[:,1] -= endog.loc['1977-10-01':]
ax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], alpha=0.1)
# Dynamic predictions and 95% confidence intervals
predict_dy_error = predict_dy.predicted_mean - endog
predict_dy_error.loc['1977-10-01':].plot(ax=ax, style='r', label='Dynamic forecast (1978)')
ci = predict_dy_ci.loc['1977-10-01':].copy()
ci.iloc[:,0] -= endog.loc['1977-10-01':]
ci.iloc[:,1] -= endog.loc['1977-10-01':]
ax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color='r', alpha=0.1)
legend = ax.legend(loc='lower left');
legend.get_frame().set_facecolor('w')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ARIMA Example 1
Step2: Thus the maximum likelihood estimates imply that for the process above, we have
Step3: To understand how to specify this model in Statsmodels, first recall that from example 1 we used the following code to specify the ARIMA(1,1,1) model
Step4: ARIMA Example 3
Step5: Notice that here we used an additional argument, simple_differencing=True. This controls how the order of integration is handled in ARIMA models. If simple_differencing=True, then the time series provided as endog is literally differenced and an ARMA model is fit to the resulting new time series. This implies that a number of initial periods are lost to the differencing process; however, it may be necessary either to compare results to other packages (e.g. Stata's arima always uses simple differencing) or if the seasonal periodicity is large.
Step6: ARIMA Postestimation
Step7: Next, we want to get results for the full dataset but using the estimated parameters (on a subset of the data).
Step8: The predict command is first applied here to get in-sample predictions. We use the full_results=True argument to allow us to calculate confidence intervals (the default output of predict is just the predicted values).
Step9: We can also get dynamic predictions. One-step-ahead prediction uses the true values of the endogenous values at each step to predict the next in-sample value. Dynamic predictions use one-step-ahead prediction up to some point in the dataset (specified by the dynamic argument); after that, the previous predicted endogenous values are used in place of the true endogenous values for each new predicted element.
Step10: We can graph the one-step-ahead and dynamic predictions (and the corresponding confidence intervals) to see their relative performance. Notice that up to the point where dynamic prediction begins (1978
Step11: Finally, graph the prediction error. It is obvious that, as one would suspect, one-step-ahead prediction is considerably better.
|
1,834
|
<ASSISTANT_TASK:>
Python Code:
def double(number):
bigger = number * 2
return bigger
double(5)
lst = list(range(1,5))
for n in lst:
print(double(n))
elem = input('Wie heisst Du?')
length = len(elem)
print('Hallo '+ elem+ ','+ ' Dein Name hat '+ str(length)+ ' Zeichen.')
def km_rechner(m):
m = m * 1.60934
return round(m, 3)
km_rechner(5)
km_rechner(123)
km_rechner(53)
# Our measurement formats
var_first = { 'measurement': 3.4, 'scale': 'kilometer' }
var_second = { 'measurement': 9.1, 'scale': 'mile' }
var_third = { 'measurement': 2.0, 'scale': 'meter' }
var_fourth = { 'measurement': 9.0, 'scale': 'inches' }
def m_converter(measurement):
if measurement['scale'] == 'kilometer':
return measurement['measurement'] * 1000
if measurement['scale'] == 'mile':
return measurement['measurement'] * 1600
if measurement['scale'] == 'inches':
return measurement['measurement'] * 0.0254
else:
return measurement['measurement']
print(m_converter(var_first))
print(m_converter(var_second))
print(m_converter(var_third))
print(m_converter(var_fourth))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Build a for loop that iterates over a predefined list of numbers and, using the custom function just created, prints each result doubled.
Step2: 3. Develop code that asks the user for their name and then tells them how many characters their name has.
Step3: 4. Develop a function named km_rechner that automatically converts the miles values listed below to kilometers and displays the result rounded to one decimal place.
Step4: 5. We have a dictionary of measurements that come in very different formats. Develop a function that takes these formats into account and converts them to meters.
|
1,835
|
<ASSISTANT_TASK:>
Python Code:
# loading packages
%pylab inline
import numpy as np
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
def mle_gauss_mu(samples):
Calculates the Maximum Likelihood Estimate for a mean vector
from a multivariate Gaussian distribution.
Keyword arguments:
samples (numpy array): Training samples for the MLE.
Every sample point represents a row; dimensions by column.
Returns a d x 1 column vector (numpy.array) as the MLE mean estimate.
dimensions = samples.shape[1]
mu_est = np.zeros((dimensions,1))
for dim in range(dimensions):
col_mean = sum(samples[:,dim])/len(samples[:,dim])
mu_est[dim] = col_mean
return mu_est
def mle_gausscov(samples, mu_est):
Calculates the Maximum Likelihood Estimate for the covariance matrix.
Keyword Arguments:
x_samples: np.array of the samples for 1 class, n x d dimensional
mu_est: np.array of the mean MLE, d x 1 dimensional
Returns the MLE for the covariance matrix as d x d numpy array.
dimensions = samples.shape[1]
assert (dimensions == mu_est.shape[0]), "columns of sample set and rows of'\
'mu vector (i.e., dimensions) must be equal."
cov_est = np.zeros((dimensions,dimensions))
for x_vec in samples:
x_vec = x_vec.reshape(dimensions,1)
cov_est += (x_vec - mu_est).dot((x_vec - mu_est).T)
return cov_est / len(samples)
# true parameters and 100 2D training data points
mu_vec = np.array([[0],[0]])
cov_mat = np.eye(2)
multi_gauss = np.random.multivariate_normal(mu_vec.ravel(), cov_mat, 100)
print('Dimensions: {}x{}'.format(multi_gauss.shape[0], multi_gauss.shape[1]))
import prettytable
# mean estimate
mu_mle = mle_gauss_mu(multi_gauss)
mu_mle_comp = prettytable.PrettyTable(["mu", "true_param", "MLE_param"])
mu_mle_comp.add_row(["",mu_vec, mu_mle])
print(mu_mle_comp)
# covariance estimate
cov_mle = mle_gausscov(multi_gauss, mu_mle)
mle_gausscov_comp = prettytable.PrettyTable(["covariance", "true_param", "MLE_param"])
mle_gausscov_comp.add_row(["",cov_mat, cov_mle])
print(mle_gausscov_comp)
### Implementing the Multivariate Gaussian Density Function
def pdf_multivariate_gauss(x, mu, cov):
Caculate the multivariate normal density (pdf)
Keyword arguments:
x = numpy array of a "d x 1" sample vector
mu = numpy array of a "d x 1" mean vector
cov = "numpy array of a d x d" covariance matrix
assert(mu.shape[0] > mu.shape[1]), 'mu must be a column vector'
assert(x.shape[0] > x.shape[1]), 'x must be a column vector'
assert(cov.shape[0] == cov.shape[1]), 'covariance matrix must be square'
assert(mu.shape[0] == cov.shape[0]), 'cov_mat and mu_vec must have the same dimensions'
assert(mu.shape[0] == x.shape[0]), 'mu and x must have the same dimensions'
part1 = 1 / ( ((2* np.pi)**(len(mu)/2)) * (np.linalg.det(cov)**(1/2)) )
part2 = (-1/2) * ((x-mu).T.dot(np.linalg.inv(cov))).dot((x-mu))
return float(part1 * np.exp(part2))
# Plot Probability Density Function
from matplotlib import pyplot as plt
fig = plt.figure(figsize=(9, 9))
ax = fig.gca(projection='3d')
X = np.linspace(-5, 5, 100)
Y = np.linspace(-5, 5, 100)
X,Y = np.meshgrid(X,Y)
Z_mle = []
for i,j in zip(X.ravel(),Y.ravel()):
Z_mle.append(pdf_multivariate_gauss(np.array([[i],[j]]), mu_mle, cov_mle))
Z_mle = np.asarray(Z_mle).reshape(int(len(Z_mle)**0.5), int(len(Z_mle)**0.5))
surf = ax.plot_wireframe(X, Y, Z_mle, color='red', rstride=2, cstride=2, alpha=0.3, label='MLE')
Z_true = []
for i,j in zip(X.ravel(),Y.ravel()):
Z_true.append(pdf_multivariate_gauss(np.array([[i],[j]]), mu_vec, cov_mat))
Z_true = np.asarray(Z_true).reshape(int(len(Z_true)**0.5), int(len(Z_true)**0.5))
surf = ax.plot_wireframe(X, Y, Z_true, color='green', rstride=2, cstride=2, alpha=0.3, label='true param.')
ax.set_zlim(0, 0.2)
ax.zaxis.set_major_locator(plt.LinearLocator(10))
ax.zaxis.set_major_formatter(plt.FormatStrFormatter('%.02f'))
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('p(x)')
ax.legend()
plt.title('True vs. Predicted Gaussian densities')
plt.show()
# loading packages
import numpy as np
from matplotlib import pyplot as plt
%pylab inline
def comp_theta_mle(d):
Computes the Maximum Likelihood Estimate for a given 1D training
dataset for a Rayleigh distribution.
theta = len(d) / sum([x**2 for x in d])
return theta
def likelihood_ray(x, theta):
Computes the class-conditional probability for an univariate
Rayleigh distribution
return 2*theta*x*np.exp(-theta*(x**2))
training_data = [12, 17, 20, 24, 25, 30, 32, 50]
theta = comp_theta_mle(training_data)
print("Theta MLE:", theta)
# Plot Probability Density Function
from matplotlib import pyplot as plt
x_range = np.arange(0, 150, 0.1)
y_range = [likelihood_ray(x, theta) for x in x_range]
plt.figure(figsize=(10,8))
plt.plot(x_range, y_range, lw=2)
plt.title('Probability density function for the Rayleigh distribution')
plt.ylabel('p(x|theta)')
ftext = 'theta = {:.5f}'.format(theta)
plt.figtext(.15,.8, ftext, fontsize=11, ha='left')
plt.ylim([0,0.04])
plt.xlim([0,120])
plt.xlabel('random variable x')
plt.show()
def poisson_theta_mle(d):
Computes the Maximum Likelihood Estimate for a given 1D training
dataset from a Poisson distribution.
return sum(d) / len(d)
import math
def likelihood_poisson(x, lam):
Computes the class-conditional probability for an univariate
Poisson distribution
if x // 1 != x:
likelihood = 0
else:
likelihood = math.e**(-lam) * lam**(x) / math.factorial(x)
return likelihood
# Drawing training data
import numpy as np
true_param = 1.0
poisson_data = np.random.poisson(lam=true_param, size=100)
mle_poiss = poisson_theta_mle(poisson_data)
print('MLE:', mle_poiss)
# Plot Probability Density Function
from matplotlib import pyplot as plt
x_range = np.arange(0, 5, 0.1)
y_true = [likelihood_poisson(x, true_param) for x in x_range]
y_mle = [likelihood_poisson(x, mle_poiss) for x in x_range]
plt.figure(figsize=(10,8))
plt.plot(x_range, y_true, lw=2, alpha=0.5, linestyle='--', label='true parameter ($\lambda={}$)'.format(true_param))
plt.plot(x_range, y_mle, lw=2, alpha=0.5, label='MLE ($\lambda={}$)'.format(mle_poiss))
plt.title('Poisson probability density function for the true and estimated parameters')
plt.ylabel('p(x|theta)')
plt.xlim([-1,5])
plt.xlabel('random variable x')
plt.legend()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Sebastian Raschka
Step3: Sample training data for MLE
Step5: Estimate parameters via MLE
Step8: <a name='uni_rayleigh'></a>
Step11: <a name='uni_poisson'></a>
|
1,836
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
x = np.arange(10.)
y = 5*x+3
np.random.seed(3)
y+= np.random.normal(scale=10,size=x.size)
plt.scatter(x,y);
def lin_reg(x,y):
Perform a linear regression of x vs y.
x, y are 1 dimensional numpy arrays
returns alpha and beta for the model y = alpha + beta*x
beta = np.mean(x*y)-np.mean(x)*np.mean(y)
beta /= np.mean(x**2)-np.mean(x)**2
alpha = np.mean(y)-beta*np.mean(x)
return alpha, beta
lin_reg(x,y)
def lin_reg2(x,y):
Perform a linear regression of x vs y. Uses covariances.
x, y are 1 dimensional numpy arrays
returns alpha and beta for the model y = alpha + beta*x
c = np.cov(x,y)
beta = c[0,1]/c[0,0]
alpha = np.mean(y)-beta*np.mean(x)
return alpha, beta
lin_reg2(x,y)
def lin_reg3(x,y):
Perform a linear regression of x vs y. Uses least squares.
x, y are 1 dimensional numpy arrays
returns alpha and beta for the model y = alpha + beta*x
A = np.vstack([x,np.ones_like(x)])
return np.linalg.lstsq(A.T, y, rcond=None)[0]
lin_reg3(x,y)
np.polyfit(x,y, 1)
import scipy.stats as stats
beta, alpha, r, *__ = stats.linregress(x,y)
alpha, beta, r
from sklearn import linear_model
clf = linear_model.LinearRegression()
clf.fit(x[:, np.newaxis], y)
clf.coef_, clf.intercept_
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linear regression
Step3: We could also implement it with the numpy covariance function. The diagonal terms represent the variance.
Step5: Coding as a least square problem
Step6: The simple ways
Step7: scipy
Step8: scikit-learn
|
1,837
|
<ASSISTANT_TASK:>
Python Code:
# Author: Ivana Kojcic <ivana.kojcic@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# Kostiantyn Maksymenko <kostiantyn.maksymenko@gmail.com>
# Samuel Deslauriers-Gauthier <sam.deslauriers@gmail.com>
# License: BSD (3-clause)
import os.path as op
import numpy as np
import mne
from mne.datasets import sample
print(__doc__)
# To simulate the sample dataset, information of the sample subject needs to be
# loaded. This step will download the data if it not already on your machine.
# Subjects directory is also set so it doesn't need to be given to functions.
data_path = sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
subject = 'sample'
meg_path = op.join(data_path, 'MEG', subject)
# First, we get an info structure from the sample subject.
fname_info = op.join(meg_path, 'sample_audvis_raw.fif')
info = mne.io.read_info(fname_info)
tstep = 1 / info['sfreq']
# To simulate sources, we also need a source space. It can be obtained from the
# forward solution of the sample subject.
fwd_fname = op.join(meg_path, 'sample_audvis-meg-eeg-oct-6-fwd.fif')
fwd = mne.read_forward_solution(fwd_fname)
src = fwd['src']
# To simulate raw data, we need to define when the activity occurs using events
# matrix and specify the IDs of each event.
# Noise covariance matrix also needs to be defined.
# Here, both are loaded from the sample dataset, but they can also be specified
# by the user.
fname_event = op.join(meg_path, 'sample_audvis_raw-eve.fif')
fname_cov = op.join(meg_path, 'sample_audvis-cov.fif')
events = mne.read_events(fname_event)
noise_cov = mne.read_cov(fname_cov)
# Standard sample event IDs. These values will correspond to the third column
# in the events matrix.
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
activations = {
'auditory/left':
[('G_temp_sup-G_T_transv-lh', 100), # label, activation (nAm)
('G_temp_sup-G_T_transv-rh', 200)],
'auditory/right':
[('G_temp_sup-G_T_transv-lh', 200),
('G_temp_sup-G_T_transv-rh', 100)],
'visual/left':
[('S_calcarine-lh', 100),
('S_calcarine-rh', 200)],
'visual/right':
[('S_calcarine-lh', 200),
('S_calcarine-rh', 100)],
}
annot = 'aparc.a2009s'
# Load the 4 necessary label names.
label_names = sorted(set(activation[0]
for activation_list in activations.values()
for activation in activation_list))
region_names = list(activations.keys())
# Define the time course of the activity for each region to activate. We use a
# sine wave and it will be the same for all 4 regions.
source_time_series = np.sin(np.linspace(0, 4 * np.pi, 100)) * 10e-9
source_simulator = mne.simulation.SourceSimulator(src, tstep=tstep)
for region_id, region_name in enumerate(region_names, 1):
events_tmp = events[np.where(events[:, 2] == region_id)[0], :]
for i in range(2):
label_name = activations[region_name][i][0]
label_tmp = mne.read_labels_from_annot(subject, annot,
subjects_dir=subjects_dir,
regexp=label_name,
verbose=False)
label_tmp = label_tmp[0]
amplitude_tmp = activations[region_name][i][1]
source_simulator.add_data(label_tmp,
amplitude_tmp * source_time_series,
events_tmp)
# To obtain a SourceEstimate object, we need to use `get_stc()` method of
# SourceSimulator class.
stc_data = source_simulator.get_stc()
raw_sim = mne.simulation.simulate_raw(info, source_simulator, forward=fwd,
cov=None)
raw_sim.set_eeg_reference(projection=True).crop(0, 60) # for speed
mne.simulation.add_noise(raw_sim, cov=noise_cov, random_state=0)
mne.simulation.add_eog(raw_sim, random_state=0)
mne.simulation.add_ecg(raw_sim, random_state=0)
# Plot original and simulated raw data.
raw_sim.plot(title='Simulated raw data')
method, lambda2 = 'dSPM', 1. / 9.
epochs = mne.Epochs(raw_sim, events, event_id)
inv = mne.minimum_norm.make_inverse_operator(epochs.info, fwd, noise_cov)
stc_aud = mne.minimum_norm.apply_inverse(
epochs['auditory/left'].average(), inv, lambda2, method)
stc_vis = mne.minimum_norm.apply_inverse(
epochs['visual/right'].average(), inv, lambda2, method)
stc_diff = stc_aud - stc_vis
brain = stc_diff.plot(subjects_dir=subjects_dir, initial_time=0.1,
hemi='split', views=['lat', 'med'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In order to simulate source time courses, labels of desired active regions
Step2: Create simulated source activity
Step3: Simulate raw data
Step4: Reconstruct simulated source time courses using dSPM inverse operator
|
1,838
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
matches = pd.read_csv('/Users/mtetkosk/Google Drive/Data Science Projects/data/processed/EPL_matches.csv')
print len(matches)
print matches.head()
matches.columns[:11] # The first 11 columns identify the match and the number of goals scored by each team
matches.columns[85:] # Columns 85 - 115 are betting odds from different websites
matches.columns[11:55] # Columns 11-55 are (X,Y) coordinates for players on the pitch - Describing formation
matches.columns[55:85] # Columns 55 - 77 give the player names. Columns 77-84 give some statistics based on the match.
matches_reduced = matches.copy()
removecols = matches.columns[11:85]
removecols_other = ['country_id','league_id']
for col in matches_reduced.columns:
if col in removecols or col in removecols_other:
del matches_reduced[col]
print matches_reduced.shape #Reduced from 115 columns to 106 columns
matches_reduced.season.value_counts() #Equal numer of matches per-season
# What does the 'stage' variable mean?
matches_reduced[matches_reduced.season=='2008/2009'].stage.value_counts()
matches_reduced.head()
null_dict = {}
for col in matches_reduced.columns[4:]:
nulls = matches_reduced[col].isnull().sum()
if nulls > 0:
null_dict[col] = nulls
null_dict
for key in null_dict.keys():
if null_dict[key] > 10:
del matches_reduced[key]
matches_reduced.shape
matches_reduced.to_csv('/Users/mtetkosk/Google Drive/Data Science Projects/data/processed/EPL_Matches_Reduced.csv',index= False)
team_attributes = pd.read_csv('/Users/mtetkosk/Google Drive/Data Science Projects/data/processed/EPL_team_attributes.csv')
print len(team_attributes)
print team_attributes.head()
team_attributes['date']
team_attributes.columns
null_dict = {}
for col in team_attributes.columns[4:]:
nulls = team_attributes[col].isnull().sum()
if nulls > 0:
null_dict[col] = nulls
if team_attributes[col].dtype == 'int64' or team_attributes[col].dtype == 'float64':
team_attributes[col].plot(kind = 'hist')
plt.xlabel(col)
plt.title(col + 'Histogram')
plt.show()
elif team_attributes[col].dtype == 'object':
team_attributes[col].value_counts().plot(kind ='bar') #Build up play passing class value counts totals to 204, no nulls
plt.title(col + 'Bar Chart')
plt.show()
null_dict
teams = pd.read_csv('/Users/mtetkosk/Google Drive/Data Science Projects/data/processed/EPL_teams.csv')
print len(teams)
print teams.head()
teams.head()
player_attributes = pd.read_csv('/Users/mtetkosk/Google Drive/Data Science Projects/data/processed/Player_Attributes.csv')
print len(player_attributes)
print player_attributes.head()
players = pd.read_csv('/Users/mtetkosk/Google Drive/Data Science Projects/data/processed/Players.csv')
print len(players)
print players.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First step is to read in the csv files created by the extraction notebook
Step2: Lets remove any variables from matches df that we won't need for this analysis
Step3: 'Stage' variable must mean 'week' of the season. Each 'stage' consists of 10 matches. This is a way to group matches by date.
Step4: Now let's check for missing values
Step5: Many of the betting odds have null values. Let's remove the columns that have excessive nulls.
Step6: From 'null_dict' object, only the attribute 'buildUpPlayDribbling' numeric attribute has null values.
Step7: Because players will not be used in this version of the model, we will not explore these attributes.
|
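The missing-value scan described in Steps 4 and 5 of the row above can be sketched standalone on a toy frame (the column names here are made up for illustration):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the team_attributes frame (hypothetical columns).
df = pd.DataFrame({
    "buildUpPlaySpeed": [50, 60, np.nan, 70],
    "buildUpPlayPassingClass": ["Short", "Mixed", "Mixed", "Long"],
})

# Collect only the columns that actually contain nulls, as in the notebook loop.
null_dict = {}
for col in df.columns:
    nulls = df[col].isnull().sum()
    if nulls > 0:
        null_dict[col] = int(nulls)

print(null_dict)  # {'buildUpPlaySpeed': 1}
```

Columns that end up in `null_dict` are the candidates for imputation or removal before modelling.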
1,839
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inpe', 'sandbox-1', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
|
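The repeated DOC.set_id / DOC.set_value pattern in the row above can be mimicked with a tiny stand-in class; `NotebookOutputStub` below is purely hypothetical and only illustrates the call sequence, not the real pyesdoc API:

```python
class NotebookOutputStub:
    """Hypothetical stand-in for pyesdoc's NotebookOutput, for illustration only."""
    def __init__(self):
        self.values = {}
        self._current = None

    def set_id(self, prop_id):
        # Select which documentation property subsequent set_value calls fill.
        self._current = prop_id

    def set_value(self, value):
        # Properties may accept multiple values, so append to a list.
        self.values.setdefault(self._current, []).append(value)


DOC = NotebookOutputStub()
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
DOC.set_value('MySeaIceModel')
print(DOC.values)  # {'cmip6.seaice.key_properties.model.model_name': ['MySeaIceModel']}
```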
1,840
|
<ASSISTANT_TASK:>
Python Code:
import warnings
warnings.warn("This is a deprecation warning", DeprecationWarning)
warnings.warn("This is a syntax warning", SyntaxWarning)
x = 5
warnings.warn("This is a unicode warning", UnicodeWarning)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A header
Step2: A subheader
|
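To test code like the warnings cell above without cluttering stderr, the standard library can record warnings instead of printing them (a minimal sketch using only `warnings` itself):

```python
import warnings

# Record warnings instead of emitting them to stderr.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # ensure no warning is suppressed as a duplicate
    warnings.warn("This is a deprecation warning", DeprecationWarning)
    warnings.warn("This is a unicode warning", UnicodeWarning)

print([w.category.__name__ for w in caught])  # ['DeprecationWarning', 'UnicodeWarning']
```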
1,841
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import pandas
import seaborn
from openfisca_france_indirect_taxation.examples.utils_example import graph_builder_bar
from openfisca_france_indirect_taxation.surveys import SurveyScenario
seaborn.set_palette(seaborn.color_palette("Set2", 12))
%matplotlib inline
simulated_variables = ['coicop12_{}'.format(coicop12_index) for coicop12_index in range(1, 13)]
for year in [2000, 2005, 2011]:
survey_scenario = SurveyScenario.create(year = year)
pivot_table = pandas.DataFrame()
for values in simulated_variables:
pivot_table = pandas.concat([
pivot_table,
survey_scenario.compute_pivot_table(values = [values], columns = ['niveau_vie_decile'])
])
df = pivot_table.T
df['depenses_tot'] = df[['coicop12_{}'.format(i) for i in range(1, 13)]].sum(axis = 1)
for i in range(1, 13):
df['part_coicop12_{}'.format(i)] = \
df['coicop12_{}'.format(i)] / df['depenses_tot']
print 'Profil de la consommation des ménages en {}'.format(year)
graph_builder_bar(df[['part_coicop12_{}'.format(i) for i in range(1, 13)]])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import modules specific to Openfisca
Step2: Import a new color palette
Step3: Build the dataframe and produce the graphs
|
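The budget-share computation in the loop above can be checked on toy numbers (a minimal sketch with only three COICOP posts and made-up spending for two decile groups):

```python
import pandas as pd

# Hypothetical spending of two decile groups over three COICOP posts.
df = pd.DataFrame({"coicop12_1": [100.0, 200.0],
                   "coicop12_2": [50.0, 100.0],
                   "coicop12_3": [50.0, 100.0]})

# Total spending per row, then each post's share of the total.
df["depenses_tot"] = df[["coicop12_{}".format(i) for i in range(1, 4)]].sum(axis=1)
for i in range(1, 4):
    df["part_coicop12_{}".format(i)] = df["coicop12_{}".format(i)] / df["depenses_tot"]

# Budget shares sum to 1 in every row.
print(df[["part_coicop12_{}".format(i) for i in range(1, 4)]].sum(axis=1).tolist())  # [1.0, 1.0]
```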
1,842
|
<ASSISTANT_TASK:>
Python Code:
def create_examples(N, batch_size):
A = np.random.binomial(n=1, p=0.5, size=(batch_size, N))
B = np.random.binomial(n=1, p=0.5, size=(batch_size, N,))
X = np.zeros((batch_size, 2 *N,), dtype=np.float32)
X[:,:N], X[:,N:] = A, B
Y = (A ^ B).astype(np.float32)
return X,Y
X, Y = create_examples(3, 2)
print(X[0,:3], "xor", X[0,3:],"equals", Y[0])
print(X[1,:3], "xor", X[1,3:],"equals", Y[1])
import math
class Layer(object):
def __init__(self, input_size, output_size):
tensor_b = tf.zeros((output_size,))
self.b = tf.Variable(tensor_b)
tensor_W = tf.random_uniform((input_size, output_size),
-1.0 / math.sqrt(input_size),
1.0 / math.sqrt(input_size))
self.W = tf.Variable(tensor_W)
def __call__(self, x):
return tf.matmul(x, self.W) + self.b
tf.reset_default_graph()
sess = tf.InteractiveSession()
N = 5
# x represents input data
x = tf.placeholder(tf.float32, (None, 2 * N), name="x")
# y_golden is a reference output data.
y_golden = tf.placeholder(tf.float32, (None, N), name="y")
layer1 = Layer(2 * N, N)
# y is a linear projection of x with nonlinearity applied to the result.
y = tf.nn.sigmoid(layer1(x))
# mean squared error over all examples and all N output dimensions.
cost = tf.reduce_mean(tf.square(y - y_golden))
# create a function that will optimize the neural network
optimizer = tf.train.AdagradOptimizer(learning_rate=0.3)
train_op = optimizer.minimize(cost)
# initialize the variables
sess.run(tf.initialize_all_variables())
for t in range(5000):
example_x, example_y = create_examples(N, 10)
cost_t, _ = sess.run([cost, train_op], {x: example_x, y_golden: example_y})
if t % 500 == 0:
print(cost_t.mean())
X, _ = create_examples(N, 3)
prediction = sess.run([y], {x: X})
print(X)
print(prediction)
N_EXAMPLES = 1000
example_x, example_y = create_examples(N, N_EXAMPLES)
# one day I need to write a wrapper which will turn the expression
# below to:
# tf.abs(y - y_golden) < 0.5
is_correct = tf.less_equal(tf.abs(y - y_golden), tf.constant(0.5))
accuracy = tf.reduce_mean(tf.cast(is_correct, "float"))
acc_result = sess.run(accuracy, {x: example_x, y_golden: example_y})
print("Accuracy over %d examples: %.0f %%" % (N_EXAMPLES, 100.0 * acc_result))
tf.reset_default_graph()
sess = tf.InteractiveSession()
N = 5
# we add a single hidden layer of size 12
# otherwise code is similar to above
HIDDEN_SIZE = 12
x = tf.placeholder(tf.float32, (None, 2 * N), name="x")
y_golden = tf.placeholder(tf.float32, (None, N), name="y")
layer1 = Layer(2 * N, HIDDEN_SIZE)
layer2 = Layer(HIDDEN_SIZE, N) # <------- HERE IT IS!
hidden_repr = tf.nn.tanh(layer1(x))
y = tf.nn.sigmoid(layer2(hidden_repr))
cost = tf.reduce_mean(tf.square(y - y_golden))
optimizer = tf.train.AdagradOptimizer(learning_rate=0.3)
train_op = optimizer.minimize(cost)
sess.run(tf.initialize_all_variables())
for t in range(5000):
example_x, example_y = create_examples(N, 10)
cost_t, _ = sess.run([cost, train_op], {x: example_x, y_golden: example_y})
if t % 500 == 0:
print(cost_t.mean())
X, Y = create_examples(N, 3)
prediction = sess.run([y], {x: X})
print(X)
print(Y)
print(prediction)
N_EXAMPLES = 1000
example_x, example_y = create_examples(N, N_EXAMPLES)
is_correct = tf.less_equal(tf.abs(y - y_golden), tf.constant(0.5))
accuracy = tf.reduce_mean(tf.cast(is_correct, "float"))
acc_result = sess.run(accuracy, {x: example_x, y_golden: example_y})
print("Accuracy over %d examples: %.0f %%" % (N_EXAMPLES, 100.0 * acc_result))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: XOR cannot be solved with a single-layer neural network
Step2: Notice that the error is far from zero.
Step3: Accuracy is not that hard to predict...
Step4: Xor Network with 2 layers
Step5: This time the network works a tad better
|
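The claim in Step 1 of the row above can be verified without TensorFlow: the best affine fit to the four XOR points collapses to a constant 0.5, so no single linear layer, with any threshold, can separate them. A minimal numpy sketch:

```python
import numpy as np

# The four XOR examples: inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

# Best affine fit w.x + b via least squares (bias column appended).
A = np.hstack([X, np.ones((4, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

# Every prediction collapses to 0.5: no linear separator exists.
print(pred)  # ~[0.5, 0.5, 0.5, 0.5]
```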
1,843
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
import scipy.io
import scipy.misc
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
from PIL import Image
from nst_utils import *
import numpy as np
import tensorflow as tf
%matplotlib inline
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
print(model)
content_image = scipy.misc.imread("images/louvre.jpg")
imshow(content_image)
# GRADED FUNCTION: compute_content_cost
def compute_content_cost(a_C, a_G):
    """
    Computes the content cost

    Arguments:
    a_C -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image C
    a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image G

    Returns:
    J_content -- scalar that you compute using equation 1 above.
    """
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = a_G.get_shape().as_list()
# Reshape a_C and a_G (≈2 lines)
a_C_unrolled = tf.reshape(tf.transpose(a_C), [n_C, n_H * n_W])
a_G_unrolled = tf.reshape(tf.transpose(a_G), [n_C, n_H * n_W])
# compute the cost with tensorflow (≈1 line)
    J_content = tf.reduce_sum(tf.square(tf.subtract(a_C_unrolled, a_G_unrolled))) / (4 * n_C * n_W * n_H)
### END CODE HERE ###
return J_content
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_C = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_content = compute_content_cost(a_C, a_G)
print("J_content = " + str(J_content.eval()))
style_image = scipy.misc.imread("images/monet_800600.jpg")
imshow(style_image)
# GRADED FUNCTION: gram_matrix
def gram_matrix(A):
    """
    Argument:
    A -- matrix of shape (n_C, n_H*n_W)

    Returns:
    GA -- Gram matrix of A, of shape (n_C, n_C)
    """
### START CODE HERE ### (≈1 line)
GA = tf.matmul(a=A, b=A, transpose_b=True)
### END CODE HERE ###
return GA
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
A = tf.random_normal([3, 2*1], mean=1, stddev=4)
GA = gram_matrix(A)
print("GA = " + str(GA.eval()))
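Independently of TensorFlow, the Gram matrix GA = A Aᵀ can be sanity-checked by hand with numpy (a minimal sketch):

```python
import numpy as np

# A holds n_C = 2 unrolled filter activations of length n_H*n_W = 3.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 0.0]])

# Gram matrix: entry (i, j) is the dot product (correlation) of filters i and j.
G = A @ A.T
print(G)  # [[14.  2.]
          #  [ 2.  1.]]
```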
# GRADED FUNCTION: compute_layer_style_cost
def compute_layer_style_cost(a_S, a_G):
    """
    Arguments:
    a_S -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image S
    a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image G

    Returns:
    J_style_layer -- tensor representing a scalar value, style cost defined above by equation (2)
    """
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = a_G.get_shape().as_list()
# Reshape the images to have them of shape (n_C, n_H*n_W) (≈2 lines)
a_S = tf.reshape(tf.transpose(a_S), [n_C, n_H * n_W])
a_G = tf.reshape(tf.transpose(a_G), [n_C, n_H * n_W])
# Computing gram_matrices for both images S and G (≈2 lines)
GS = gram_matrix(a_S)
GG = gram_matrix(a_G)
# Computing the loss (≈1 line)
J_style_layer = tf.reduce_sum(tf.reduce_sum(tf.square(tf.subtract(GS, GG)), axis=1), axis=0) / (4 * n_C**2 * (n_W * n_H)**2)
### END CODE HERE ###
return J_style_layer
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_S = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_style_layer = compute_layer_style_cost(a_S, a_G)
print("J_style_layer = " + str(J_style_layer.eval()))
STYLE_LAYERS = [
('conv1_1', 0.2),
('conv2_1', 0.2),
('conv3_1', 0.2),
('conv4_1', 0.2),
('conv5_1', 0.2)]
def compute_style_cost(model, STYLE_LAYERS):
    """
    Computes the overall style cost from several chosen layers

    Arguments:
    model -- our tensorflow model
    STYLE_LAYERS -- A python list containing:
                        - the names of the layers we would like to extract style from
                        - a coefficient for each of them

    Returns:
    J_style -- tensor representing a scalar value, style cost defined above by equation (2)
    """
# initialize the overall style cost
J_style = 0
for layer_name, coeff in STYLE_LAYERS:
# Select the output tensor of the currently selected layer
out = model[layer_name]
# Set a_S to be the hidden layer activation from the layer we have selected, by running the session on out
a_S = sess.run(out)
# Set a_G to be the hidden layer activation from same layer. Here, a_G references model[layer_name]
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out
# Compute style_cost for the current layer
J_style_layer = compute_layer_style_cost(a_S, a_G)
# Add coeff * J_style_layer of this layer to overall style cost
J_style += coeff * J_style_layer
return J_style
# GRADED FUNCTION: total_cost
def total_cost(J_content, J_style, alpha = 10, beta = 40):
    """
    Computes the total cost function
    Arguments:
    J_content -- content cost coded above
    J_style -- style cost coded above
    alpha -- hyperparameter weighting the importance of the content cost
    beta -- hyperparameter weighting the importance of the style cost
    Returns:
    J -- total cost as defined by the formula above.
    """
### START CODE HERE ### (≈1 line)
J = alpha * J_content + beta * J_style
### END CODE HERE ###
return J
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(3)
J_content = np.random.randn()
J_style = np.random.randn()
J = total_cost(J_content, J_style)
print("J = " + str(J))
# Reset the graph
tf.reset_default_graph()
# Start interactive session
sess = tf.InteractiveSession()
content_image = scipy.misc.imread("images/louvre_small.jpg")
content_image = reshape_and_normalize_image(content_image)
style_image = scipy.misc.imread("images/monet.jpg")
style_image = reshape_and_normalize_image(style_image)
generated_image = generate_noise_image(content_image)
imshow(generated_image[0])
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
# Assign the content image to be the input of the VGG model.
sess.run(model['input'].assign(content_image))
# Select the output tensor of layer conv4_2
out = model['conv4_2']
# Set a_C to be the hidden layer activation from the layer we have selected
a_C = sess.run(out)
# Set a_G to be the hidden layer activation from same layer. Here, a_G references model['conv4_2']
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out
# Compute the content cost
J_content = compute_content_cost(a_C, a_G)
# Assign the input of the model to be the "style" image
sess.run(model['input'].assign(style_image))
# Compute the style cost
J_style = compute_style_cost(model, STYLE_LAYERS)
### START CODE HERE ### (1 line)
J = total_cost(J_content, J_style, alpha = 10, beta = 40)
### END CODE HERE ###
# define optimizer (1 line)
optimizer = tf.train.AdamOptimizer(2.0)
# define train_step (1 line)
train_step = optimizer.minimize(J)
def model_nn(sess, input_image, num_iterations = 200):
# Initialize global variables (you need to run the session on the initializer)
### START CODE HERE ### (1 line)
sess.run(tf.global_variables_initializer())
### END CODE HERE ###
# Run the noisy input image (initial generated image) through the model. Use assign().
### START CODE HERE ### (1 line)
sess.run(model['input'].assign(input_image))
### END CODE HERE ###
for i in range(num_iterations):
# Run the session on the train_step to minimize the total cost
### START CODE HERE ### (1 line)
sess.run(train_step)
### END CODE HERE ###
# Compute the generated image by running the session on the current model['input']
### START CODE HERE ### (1 line)
generated_image = sess.run(model['input'])
### END CODE HERE ###
# Print every 20 iteration.
if i%20 == 0:
Jt, Jc, Js = sess.run([J, J_content, J_style])
print("Iteration " + str(i) + " :")
print("total cost = " + str(Jt))
print("content cost = " + str(Jc))
print("style cost = " + str(Js))
# save current generated image in the "/output" directory
save_image("output/" + str(i) + ".png", generated_image)
# save last generated image
save_image('output/generated_image.jpg', generated_image)
return generated_image
model_nn(sess, generated_image)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1 - Problem Statement
Step2: The model is stored in a python dictionary where each variable name is the key and the corresponding value is a tensor containing that variable's value. To run an image through this network, you just have to feed the image to the model. In TensorFlow, you can do so using the tf.assign function. In particular, you will use the assign function like this
Step4: The content image (C) shows the Louvre museum's pyramid surrounded by old Paris buildings, against a sunny sky with a few clouds.
Step5: Expected Output
Step7: This painting was painted in the style of impressionism.
Step9: Expected Output
Step10: Expected Output
Step12: You can combine the style costs for different layers as follows
Step14: Note
Step15: Expected Output
Step16: Let's load, reshape, and normalize our "content" image (the Louvre museum picture)
Step17: Let's load, reshape and normalize our "style" image (Claude Monet's painting)
Step18: Now, we initialize the "generated" image as a noisy image created from the content_image. By initializing the pixels of the generated image to be mostly noise but still slightly correlated with the content image, this will help the content of the "generated" image more rapidly match the content of the "content" image. (Feel free to look in nst_utils.py to see the details of generate_noise_image(...); to do so, click "File-->Open..." at the upper-left corner of this Jupyter notebook.)
Step19: Next, as explained in part (2), let's load the VGG16 model.
Step20: To get the program to compute the content cost, we will now assign a_C and a_G to be the appropriate hidden layer activations. We will use layer conv4_2 to compute the content cost. The code below does the following
Step21: Note
Step22: Exercise
Step23: You'd previously learned how to set up the Adam optimizer in TensorFlow. Let's do that here, using a learning rate of 2.0. See reference
Step24: Exercise
Step25: Run the following cell to generate an artistic image. It should take about 3min on CPU for every 20 iterations but you start observing attractive results after ≈140 iterations. Neural Style Transfer is generally trained using GPUs.
|
1,844
|
<ASSISTANT_TASK:>
Python Code:
import feets
class MaxMagMinTime(feets.Extractor): # must inherit from Extractor
data = ['magnitude', 'time'] # Which data is needed
# to calculate this feature
features = ["magmax", "mintime"] # The names of the expected
# feature
# The values of data are the params
def fit(self, magnitude, time):
# The return value must be a dict with the same values
# defined in features
return {"magmax": magnitude.max(), "mintime": time.min()}
feets.register_extractor(MaxMagMinTime)
# let's create the feature-space
fs = feets.FeatureSpace(only=["magmax", "mintime"])
fs
# extract the features
fs.extract(time=[1,2,3], magnitude=[100, 200, 300])
class WeightedMean(feets.Extractor):
data = ['magnitude', 'error']
optional = ['error']
features = ["weighted_mean"]
# if error is not provided to the FeatureSpace,
    # the value of error will be None.
def fit(self, magnitude, error):
if error is None:
weighted_mean = np.average(magnitude)
else:
weighted_mean = np.average(magnitude, weights=error)
return {"weighted_mean": weighted_mean}
feets.register_extractor(WeightedMean)
fs = feets.FeatureSpace(data=["magnitude", "error"])
fs.features_extractors_
fs = feets.FeatureSpace(data=["magnitude"])
fs.features_extractors_
import numpy as np
magnitude = np.random.uniform(12, 14, size=100)
error = np.random.normal(size=100)
np.average(magnitude)
np.average(magnitude, weights=error)
fs = feets.FeatureSpace(only=["weighted_mean"])
fs.extract(magnitude=magnitude, error=error).as_dict()
fs.extract(magnitude=magnitude).as_dict()
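What the extractor computes can be checked in isolation with `np.average` (a sketch with made-up numbers; note that, mirroring the extractor above, the error values are used directly as weights):

```python
import numpy as np

mag = np.array([12.0, 13.0, 14.0])
err = np.array([1.0, 1.0, 2.0])

plain = np.average(mag)                   # simple mean: 13.0
weighted = np.average(mag, weights=err)   # (12 + 13 + 2*14) / 4 = 13.25
print(plain, weighted)
```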
class RobustMean(feets.Extractor):
data = ["magnitude"]
features = ["robust_mean"]
def fit(self, magnitude):
# extract the percentiles
llimit, ulimit = np.percentile(magnitude, (5, 95))
# remove the two limits
crop = magnitude[
(magnitude > llimit) & (magnitude < ulimit)]
# calculate the mean
robust_mean = np.mean(crop)
return{"robust_mean": robust_mean}
feets.register_extractor(RobustMean)
fs = feets.FeatureSpace(only=["robust_mean"])
fs
magnitude = np.random.uniform(12, 14, size=100)
fs.extract(magnitude=magnitude)["robust_mean"]
class RobustMean(feets.Extractor):
data = ["magnitude"]
features = ["robust_mean"]
# by default the percentile to crop is still 5.
params = {"percentile": 5}
# now magnitude (from data), and percentile (from params)
# are are given as keyword argument.
def fit(self, magnitude, percentile):
# first calculate the lower and upper percentile
lp, up = percentile, 100 - percentile
# extract the percentiles
llimit, ulimit = np.percentile(magnitude, (lp, up))
# remove the two limits
crop = magnitude[
(magnitude > llimit) & (magnitude < ulimit)]
# calculate the mean
robust_mean = np.mean(crop)
return{"robust_mean": robust_mean}
feets.register_extractor(RobustMean)
fs = feets.FeatureSpace(only=["robust_mean"], RobustMean={"percentile": 6})
fs
fs.extract(magnitude=magnitude).as_dict()
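The trimming logic itself can be exercised with plain NumPy, outside feets (a sketch; the data and percentile are made up):

```python
import numpy as np

def robust_mean(values, percentile=5):
    # Drop the lowest and highest `percentile` percent, then average --
    # the same idea the RobustMean extractor implements.
    lo, hi = np.percentile(values, (percentile, 100 - percentile))
    kept = values[(values > lo) & (values < hi)]
    return kept.mean()

data = np.array([1.0, 2.0, 2.0, 2.0, 3.0, 100.0])  # one gross outlier
print(robust_mean(data, percentile=10))  # 2.25 -- the outlier is ignored
print(data.mean())                       # ~18.33 -- the plain mean is not robust
```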
@feets.register_extractor
class MagErrHistogram(feets.Extractor):
data = ["magnitude", "error"]
features = ["Mag_Err_Histogram"]
def fit(self, magnitude, error):
histogram2d = np.histogram2d(magnitude, error)[0]
return {"Mag_Err_Histogram": histogram2d}
fs = feets.FeatureSpace(only=["Mag_Err_Histogram"])
magnitude = np.random.uniform(12, 14, size=100)
error = np.random.normal(size=100)
rs = fs.extract(magnitude=magnitude, error=error)
rs.as_dict()
rs.as_dataframe()
@feets.register_extractor
class MagErrHistogram(feets.Extractor):
data = ["magnitude", "error"]
features = ["Mag_Err_Histogram"]
def flatten_feature(self, feature, value, extractor_features, **kwargs):
return {"Mag_Err_Histogram": np.mean(value)}
def fit(self, magnitude, error):
histogram2d = np.histogram2d(magnitude, error)[0]
return {"Mag_Err_Histogram": histogram2d}
fs = feets.FeatureSpace(only=["Mag_Err_Histogram"])
rs = fs.extract(magnitude=magnitude, error=error)
rs.as_dataframe()
time = np.arange(100) + np.random.normal(size=100, loc=10)
magnitude = np.random.uniform(12, 14, size=100)
error = np.random.normal(size=100)
fs = feets.FeatureSpace(data=["magnitude", "time", "error"])
rs = fs.extract(time=time, magnitude=magnitude, error=error)
rs.plot("SignaturePhMag");
fig, ax = plt.subplots(figsize=(5, 5))
rs.plot("SignaturePhMag", ax=ax, cmap="viridis_r");
rs.plot("magmax")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Finally, to make the extractor available to the FeatureSpace class, you need to register it with the command
Step2: Now the extractor is available like any other provided in feets
Step3: Example 2
Step4: The interesting thing about this extractor is that it is selected by the FeatureSpace regardless of whether we request the error or not (in both cases the new extractor will be at the end of the set)
Step5: Now let's try to use this extractor to calculate the average value of 100 random quantities between 12 and 14
Step6: We can see that the average magnitude is
Step7: And using the error as weight is
Step8: Now we create the FeatureSpace only with the extractor of interest
Step9: And we can verify the difference of providing the error
Step10: Or use only the magnitude
Step11: Example 3
Step12: now we can create the FeatureSpace instance
Step13: And finally, we can extract robust_mean from random uniform magnitudes.
Step14: Now let's assume that we want a configurable extractor that allows the user to determine which percentiles to remove before calculating the average. This is possible thanks to the params attribute.
Step15: The percentile parameter of our extractor is configurable at the time of
Step16: Example 4
Step17: The interesting thing is that if we call the method as_arrays() or as_dataframe(), each of the matrix's values is split into independent scalar elements.
Step18: The conversion of a 2D matrix to scalars is executed in the flatten_feature() method of the extractor, whose default behavior works for arrays of arbitrary dimension by adding suffixes for each dimension.
Step19: The flatten_feature method receives several parameters
Step20: We can plot the SignaturePhMag with the next command.
Step21: Not only that, but we also provide full integration with Matplotlib
Step22: The catch is that, by default, extractors can't plot their features.
|
1,845
|
<ASSISTANT_TASK:>
Python Code:
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.stem import LancasterStemmer
stemmer = LancasterStemmer()
lemmer = WordNetLemmatizer()
print(stemmer.stem('dictionaries'))
print(lemmer.lemmatize('dictionaries'))
from gensim import models
import numpy as np
from pandas import DataFrame, Series
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
from gensim import models
import matplotlib.pyplot as plt
import seaborn
from nltk.corpus import stopwords
from nltk.tokenize import wordpunct_tokenize, RegexpTokenizer
stop = stopwords.words('english')
alpha_tokenizer = RegexpTokenizer('[A-Za-z]\w+')
from subprocess import check_output
print(check_output(["ls", "../input"]).decode("utf8"))
df_train = DataFrame.from_csv('../input/train.csv').dropna()
texts = np.concatenate([df_train.question1.values, df_train.question2.values])
def process_sent(words, lemmatize=False, stem=False):
words = words.lower()
tokens = alpha_tokenizer.tokenize(words)
for index, word in enumerate(tokens):
if lemmatize:
tokens[index] = lemmer.lemmatize(word)
elif stem:
tokens[index] = stemmer.stem(word)
else:
tokens[index] = word
return tokens
corpus_lemmatized = [process_sent(sent, lemmatize=True, stem=False) for sent in texts]
corpus_stemmed = [process_sent(sent, lemmatize=False, stem=True) for sent in texts]
corpus = [process_sent(sent) for sent in texts]
VECTOR_SIZE = 100
min_count = 10
size = VECTOR_SIZE
window = 10
model_lemmatized = models.Word2Vec(corpus_lemmatized, min_count=min_count,
size=size, window=window)
model_stemmed = models.Word2Vec(corpus_stemmed, min_count=min_count,
size=size, window=window)
model = models.Word2Vec(corpus, min_count=min_count,
size=size, window=window)
model_lemmatized.most_similar('playstation')
q1 = df_train.question1.values[200000:]
q2 = df_train.question2.values[200000:]
Y = np.array(df_train.is_duplicate.values)[200000:]
def preprocess_check(words, lemmatize=False, stem=False):
words = words.lower()
tokens = alpha_tokenizer.tokenize(words)
model_tokens = []
for index, word in enumerate(tokens):
if lemmatize:
lem_word = lemmer.lemmatize(word)
if lem_word in model_lemmatized.wv.vocab:
model_tokens.append(lem_word)
elif stem:
stem_word = stemmer.stem(word)
if stem_word in model_stemmed.wv.vocab:
model_tokens.append(stem_word)
else:
if word in model.wv.vocab:
model_tokens.append(word)
return model_tokens
old_err_state = np.seterr(all='raise')
def vectorize(words, words_2, model, num_features, lemmatize=False, stem=False):
features = np.zeros((num_features), dtype='float32')
words_amount = 0
words = preprocess_check(words, lemmatize, stem)
words_2 = preprocess_check(words_2, lemmatize, stem)
for word in words:
words_amount = words_amount + 1
features = np.add(features, model[word])
for word in words_2:
words_amount = words_amount + 1
features = np.add(features, model[word])
try:
features = np.divide(features, words_amount)
except FloatingPointError:
features = np.zeros(num_features, dtype='float32')
return features
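The bag-of-means idea behind vectorize() is easy to see on a toy example: average the vectors of every in-vocabulary word from both questions, falling back to a zero vector when nothing is in vocabulary. The 2-d vectors below are made up for illustration:

```python
import numpy as np

toy_vectors = {"cat": np.array([1.0, 0.0]),
               "dog": np.array([0.0, 1.0]),
               "pet": np.array([1.0, 1.0])}

def bag_of_means(words_a, words_b, vectors, dim=2):
    # Collect vectors for in-vocabulary words from both questions
    known = [vectors[w] for w in words_a + words_b if w in vectors]
    if not known:                       # nothing in vocabulary: zero vector
        return np.zeros(dim, dtype='float32')
    return np.mean(known, axis=0)

print(bag_of_means(["cat", "dog"], ["pet", "unknown"], toy_vectors))
# "unknown" is skipped; the remaining three vectors average to [2/3, 2/3]
```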
X_lem = []
for index, sentence in enumerate(q1):
X_lem.append(vectorize(sentence, q2[index], model_lemmatized, VECTOR_SIZE, True, False))
X_lem = np.array(X_lem)
X_stem = []
for index, sentence in enumerate(q1):
X_stem.append(vectorize(sentence, q2[index], model_stemmed, VECTOR_SIZE, False, True))
X_stem = np.array(X_stem)
X = []
for index, sentence in enumerate(q1):
X.append(vectorize(sentence, q2[index], model, VECTOR_SIZE))
X = np.array(X)
results = []
title_font = {'size':'10', 'color':'black', 'weight':'normal',
'verticalalignment':'bottom'}
axis_font = {'size':'10'}
plt.figure(figsize=(10, 5))
plt.xlabel('Training examples', **axis_font)
plt.ylabel('Accuracy', **axis_font)
plt.tick_params(labelsize=10)
for X_set, name, lstyle in [(X_lem, 'Lemmatizaton', 'dotted'),
(X_stem, 'Stemming', 'dashed'),
(X, 'Default', 'dashdot'),
]:
estimator = LogisticRegression(C = 1)
cv = ShuffleSplit(n_splits=6, test_size=0.01, random_state=0)
train_sizes=np.linspace(0.01, 0.99, 6)
train_sizes, train_scores, test_scores = learning_curve(estimator, X_set, Y, cv=cv, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
results.append({'preprocessing' : name, 'score' : train_scores_mean[-1]})
plt.plot(train_sizes, train_scores_mean, label=name, linewidth=5, linestyle=lstyle)
plt.legend(loc='best')
clf = LogisticRegression(C = 1)
clf.fit(X, Y)
#df_test = DataFrame.from_csv('../input/test.csv').fillna('None')
q1 = df_train.question1.values[:100]
q2 = df_train.question2.values[:100]
#q1 = df_test.question1.values
#q2 = df_test.question2.values
X_test = []
for index, sentence in enumerate(q1):
X_test.append(vectorize(sentence, q2[index], model, VECTOR_SIZE))
X_test = np.array(X_test)
result = clf.predict(X_test)
sub = DataFrame()
sub['is_duplicate'] = result
sub.to_csv('submission.csv', index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A visible example of how they work
Step2: So, what approach will be better for the given task? Let's see.
Step3: And a little bit more of the linguistic tools! We will use a tokenization( breaking a stream of text up into meaningful elements called tokens, for instance, words) and a stop-word dictionary for English.
Step4: And check if the .csv-files with the data are okay.
Step5: So let's write some code. First of all, let's train a Word2Vec model. We will use the training set as a training corpus (Previously I used the test set, but it uses much more memory while the model trained on it has the same efficiency; thanks to @Gian12 for the notion). This set contains some NaN values, but we can just drop them since in our task their lack is not meaningful.
Step6: Let's make a list of sentences by merging the questions.
Step7: Okay, now we are up to the key method of preprocessing comparation. It provides lemmatization or stemming depending on the given flag.
Step8: And then we can make two different corpora to train the model
Step9: Now let's train the models. I've pre-defined these hyperparameters since models on them have the best performance. You can also try to play with them yourself.
Step10: Let's check the result of one of the models.
Step11: Great! The most similar words seem to be pretty meaningful. So, we have three trained models, we can encode the text data with the vectors - let's make some experiments! Let's make data sets from the loaded data frame. I take a chunk of the traning data because the run of the script on the full data takes too much time.
Step12: A little bit modified preprocess. Now it returns only words which model's vocabulary contains.
Step13: This method will help to obtaining a bag of means by vectorising the messages.
Step14: And now we can obtain the features matrices.
Step15: That's almost all! Now we can train the classifier and evaluate it's performance. It's better to use a metric classifier because we are performing operations in the vector space, so I choose a Logistic Regression. But of course you can try a something different and see what can change.
Step16: So, the lemmatized model outperformed the "clear" model! And the stemmed model showed the worst result. Why does it happen?
|
1,846
|
<ASSISTANT_TASK:>
Python Code:
from datetime import datetime as dt
import time as tm
import pytz as tz
import calendar as cal
dto = dt.strptime('2014-09-06 07:16 +0000', "%Y-%m-%d %H:%M %z")
dto
tto = tm.strptime('2014-09-06 07:16 +0000', "%Y-%m-%d %H:%M %z")
tto
dto.timetuple() == tto
dt.fromtimestamp(tm.mktime(tto))
dto = dt.strptime('2014:09:13 21:07:15', '%Y:%m:%d %H:%M:%S')
timezone = tz.timezone('Europe/London')
dto = timezone.localize(dto)
dto
epoch_time = 0
tm.gmtime(epoch_time)
epoch_time = tm.time()
tm.gmtime(epoch_time)
tm.gmtime(tm.mktime(tto))
tm.time()
dt.now()
dt.now().strftime('%Y-%m-%d %H:%M:%S %Z%z')
tm.strftime("%Y-%m-%d %H:%M",tto)
dto.strftime("%Y-%m-%d %H:%M")
dto.hour, dto.minute, dto.second,
dto.tzname()
from datetime import datetime as dt
import pytz as tz
def change_tz(datetime_obj, tz_str):
    """
    Change the timezone.
    datetime_obj - a datetime.datetime object representing the time
    tz_str - time zone string, eg 'Europe/London'
    return - a datetime.datetime object
    """
the_tz = tz.timezone(tz_str)
the_dt = the_tz.normalize(datetime_obj.astimezone(the_tz))
return the_dt
ams = tz.timezone('Europe/Amsterdam')
dto_ams = ams.normalize(dto.astimezone(ams))
dto_ams.strftime('%Y-%m-%d %H:%M:%S %Z%z')
dto_ams2 = change_tz(dto, "Europe/Amsterdam")
dto_ams2
dto_ams2.timetuple()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generating times from a string
Step2: Missing timezone information in the string
Step3: Epoch related functions
Step4: Current-time related functions
Step5: Time output
Step7: Time Zones
|
1,847
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
x = np.array([ 1.00201077, 1.58251956, 0.94515919, 6.48778002, 1.47764604,
5.18847071, 4.21988095, 2.85971522, 3.40044437, 3.74907745,
1.18065796, 3.74748775, 3.27328568, 3.19374927, 8.0726155 ,
0.90326139, 2.34460034, 2.14199217, 3.27446744, 3.58872357,
1.20611533, 2.16594393, 5.56610242, 4.66479977, 2.3573932 ])
_ = plt.hist(x, bins=8)
precip = pd.read_table("../data/nashville_precip.txt", index_col=0, na_values='NA', delim_whitespace=True)
precip.head()
_ = precip.hist(sharex=True, sharey=True, grid=False)
plt.tight_layout()
precip.fillna(value={'Oct': precip.Oct.mean()}, inplace=True)
precip_mean = precip.mean()
precip_mean
precip_var = precip.var()
precip_var
alpha_mom = precip_mean ** 2 / precip_var
beta_mom = precip_var / precip_mean
alpha_mom, beta_mom
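As a sanity check of the moment estimators, we can apply the same two formulas to synthetic data drawn from a gamma distribution with known parameters (values chosen arbitrarily) and confirm the estimates land near the truth:

```python
import numpy as np

rng = np.random.RandomState(42)
true_alpha, true_beta = 2.0, 1.5          # shape and scale
sample = rng.gamma(shape=true_alpha, scale=true_beta, size=100000)

# Method of moments: alpha = mean^2 / var, beta = var / mean
m, v = sample.mean(), sample.var(ddof=1)
alpha_hat, beta_hat = m ** 2 / v, v / m
print(alpha_hat, beta_hat)                # both close to 2.0 and 1.5
```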
from scipy.stats.distributions import gamma
precip.Jan.hist(normed=True, bins=20)
# Pass beta as `scale` -- gamma.pdf's second positional argument is `loc`, not the scale.
plt.plot(np.linspace(0, 10), gamma.pdf(np.linspace(0, 10), alpha_mom[0], scale=beta_mom[0]))
axs = precip.hist(normed=True, figsize=(12, 8), sharex=True, sharey=True, bins=15, grid=False)
for ax in axs.ravel():
# Get month
m = ax.get_title()
# Plot fitted distribution
x = np.linspace(*ax.get_xlim())
    ax.plot(x, gamma.pdf(x, alpha_mom[m], scale=beta_mom[m]))
# Annotate with parameter estimates
label = 'alpha = {0:.2f}\nbeta = {1:.2f}'.format(alpha_mom[m], beta_mom[m])
ax.annotate(label, xy=(10, 0.2))
plt.tight_layout()
y = np.random.poisson(5, size=100)
plt.hist(y, bins=12, normed=True)
plt.xlabel('y'); plt.ylabel('Pr(y)')
poisson_like = lambda x, lam: np.exp(-lam) * (lam**x) / (np.arange(x)+1).prod()
lam = 6
value = 10
poisson_like(value, lam)
np.sum([poisson_like(yi, lam) for yi in y])
lam = 8
np.sum([poisson_like(yi, lam) for yi in y])
lambdas = np.linspace(0,15)
x = 5
plt.plot(lambdas, [poisson_like(x, l) for l in lambdas])
plt.xlabel('$\lambda$')
plt.ylabel('L($\lambda$|x={0})'.format(x))
lam = 5
xvals = np.arange(15)
plt.bar(xvals, [poisson_like(x, lam) for x in xvals])
plt.xlabel('x')
plt.ylabel('Pr(X|$\lambda$=5)')
from scipy.optimize import newton
# some function
func = lambda x: 3./(1 + 400*np.exp(-2*x)) - 1
xvals = np.linspace(0, 6)
plt.plot(xvals, func(xvals))
plt.text(5.3, 2.1, '$f(x)$', fontsize=16)
# zero line
plt.plot([0,6], [0,0], 'k-')
# value at step n
plt.plot([4,4], [0,func(4)], 'k:')
plt.text(4, -.2, '$x_n$', fontsize=16)
# tangent line
tanline = lambda x: -0.858 + 0.626*x
plt.plot(xvals, tanline(xvals), 'r--')
# point at step n+1
xprime = 0.858/0.626
plt.plot([xprime, xprime], [tanline(xprime), func(xprime)], 'k:')
plt.text(xprime+.1, -.2, '$x_{n+1}$', fontsize=16)
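The update rule in the picture is easy to implement directly. A bare-bones sketch, demonstrated on the toy problem of finding sqrt(2) (the gamma MLE below uses SciPy's ready-made `newton` instead):

```python
import math

def newton_raphson(f, fprime, x0, tol=1e-12, max_iter=50):
    # Follow the tangent line to its root: x_{n+1} = x_n - f(x_n) / f'(x_n),
    # stopping once the update is tiny.
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Solve x^2 - 2 = 0, starting from x0 = 1
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root, math.sqrt(2))
```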
from scipy.special import psi, polygamma
dlgamma = lambda m, log_mean, mean_log: np.log(m) - psi(m) - log_mean + mean_log
dl2gamma = lambda m, *args: 1./m - polygamma(1, m)
# Calculate statistics
log_mean = precip.mean().apply(np.log)
mean_log = precip.apply(np.log).mean()
# Alpha MLE for December
alpha_mle = newton(dlgamma, 2, dl2gamma, args=(log_mean[-1], mean_log[-1]))
alpha_mle
beta_mle = alpha_mle/precip.mean()[-1]
beta_mle
dec = precip.Dec
dec.hist(normed=True, bins=10, grid=False)
x = np.linspace(0, dec.max())
# Note: pass the scale explicitly (gamma.pdf's second positional argument is `loc`).
# beta_mom estimates the scale directly, while beta_mle is a rate (its reciprocal).
plt.plot(x, gamma.pdf(x, alpha_mom[-1], scale=beta_mom[-1]), 'm-', label='Moment estimator')
plt.plot(x, gamma.pdf(x, alpha_mle, scale=1./beta_mle), 'r--', label='ML estimator')
plt.legend()
from scipy.stats import gamma
gamma.fit(precip.Dec)
x = np.random.normal(size=10000)
# Truncation point
a = -1
# Resample until all points meet criterion
x_small = x < a
while x_small.sum():
x[x_small] = np.random.normal(size=x_small.sum())
x_small = x < a
_ = plt.hist(x, bins=100)
from scipy.stats.distributions import norm
trunc_norm = lambda theta, a, x: -(np.log(norm.pdf(x, theta[0], theta[1])) -
np.log(1 - norm.cdf(a, theta[0], theta[1]))).sum()
from scipy.optimize import fmin
fmin(trunc_norm, np.array([1,2]), args=(-1, x))
# Some random data
y = np.random.random(15) * 10
y
x = np.linspace(0, 10, 100)
# Smoothing parameter
s = 0.4
# Calculate the kernels
kernels = np.transpose([norm.pdf(x, yi, s) for yi in y])
plt.plot(x, kernels, 'k:')
plt.plot(x, kernels.sum(1))
plt.plot(y, np.zeros(len(y)), 'ro', ms=10)
# Create a bi-modal distribution with a mixture of Normals.
x1 = np.random.normal(0, 3, 50)
x2 = np.random.normal(4, 1, 50)
# Append by row
x = np.r_[x1, x2]
plt.hist(x, bins=8, normed=True)
from scipy.stats import kde
density = kde.gaussian_kde(x)
xgrid = np.linspace(x.min(), x.max(), 100)
plt.hist(x, bins=8, normed=True)
plt.plot(xgrid, density(xgrid), 'r-')
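What gaussian_kde does under the hood is essentially the kernel-averaging from the previous cell with the normalization made explicit. A minimal NumPy sketch (fixed bandwidth, made-up data) whose result integrates to 1 as a density should:

```python
import numpy as np

def gaussian_kde_1d(data, grid, bandwidth):
    # Average of Gaussian bumps centred on the data points; taking the mean
    # over points (rather than the sum) makes the result a proper density.
    z = (grid[:, None] - data[None, :]) / bandwidth
    bumps = np.exp(-0.5 * z ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    return bumps.mean(axis=1)

rng = np.random.RandomState(1)
data = rng.normal(size=200)
grid = np.linspace(-6, 6, 1201)
dens = gaussian_kde_1d(data, grid, bandwidth=0.5)
dx = grid[1] - grid[0]
print((dens * dx).sum())   # ~1.0
```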
cdystonia = pd.read_csv("../data/cdystonia.csv")
cdystonia[cdystonia.obs==6].hist(column='twstrs', by=cdystonia.treat, bins=8);
# Write your answer here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Estimation
Step2: Fitting data to probability distributions
Step3: The first step is recognizing what sort of distribution to fit our data to. A couple of observations
Step4: Now, let's calculate the sample moments of interest, the means and variances by month
Step5: We then use these moments to estimate $\alpha$ and $\beta$ for each month
Step6: We can use the gamma.pdf function in scipy.stats.distributions to plot the distributions implied by the calculated alphas and betas. For example, here is January
Step7: Looping over all months, we can create a grid of plots for the distribution of rainfall, using the gamma distribution
Step8: Maximum Likelihood
Step9: The product $\prod_{i=1}^n Pr(y_i | \theta)$ gives us a measure of how likely it is to observe values $y_1,\ldots,y_n$ given the parameters $\theta$. Maximum likelihood fitting consists of choosing the appropriate function $l= Pr(Y|\theta)$ to maximize for a given set of observations. We call this function the likelihood function, because it is a measure of how likely the observations are if the model is true.
Step10: We can plot the likelihood function for any value of the parameter(s)
Step11: How is the likelihood function different than the probability distribution function (PDF)? The likelihood is a function of the parameter(s) given the data, whereas the PDF returns the probability of data given a particular parameter value. Here is the PDF of the Poisson for $\lambda=5$.
Step12: Why are we interested in the likelihood function?
Step13: Here is a graphical example of how Newton-Raphson converges on a solution, using an arbitrary function
Step14: To apply the Newton-Raphson algorithm, we need a function that returns a vector containing the first and second derivatives of the function with respect to the variable of interest. In our case, this is
Step15: where log_mean and mean_log are $\log{\bar{x}}$ and $\overline{\log(x)}$, respectively. psi and polygamma are complex functions of the Gamma function that result when you take first and second derivatives of that function.
Step16: Time to optimize!
Step17: And now plug this back into the solution for beta
Step18: We can compare the fit of the estimates derived from MLE to those from the method of moments
Step19: For some common distributions, SciPy includes methods for fitting via MLE
Step20: This fit is not directly comparable to our estimates, however, because SciPy's gamma.fit method fits an odd 3-parameter version of the gamma distribution.
Step21: We can construct a log likelihood for this function using the conditional form
Step22: For this example, we will use another optimization algorithm, the Nelder-Mead simplex algorithm. It has a couple of advantages
Step23: In general, simulating data is a terrific way of testing your model before using it with real data.
Step24: SciPy implements a Gaussian KDE that automatically chooses an appropriate bandwidth. Let's create a bi-modal distribution of data that is not easily summarized by a parametric distribution
Step25: Exercise
|
1,848
|
<ASSISTANT_TASK:>
Python Code:
# We'll be doing some examples, so let's import the libraries we'll need
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Set a seed so we can play with the data without generating new random numbers every time
np.random.seed(123)
normal = np.random.randn(500)
print np.mean(normal[:10])
print np.mean(normal[:100])
print np.mean(normal[:250])
print np.mean(normal)
# Plot a stacked histogram of the data
plt.hist([normal[:10], normal[10:100], normal[100:250], normal], normed=1, histtype='bar', stacked=True);
plt.ylabel('Frequency')
plt.xlabel('Value');
print np.std(normal[:10])
print np.std(normal[:100])
print np.std(normal[:250])
print np.std(normal)
#Generate some data from a bi-modal distribution
def bimodal(n):
X = np.zeros((n))
for i in range(n):
if np.random.binomial(1, 0.5) == 0:
X[i] = np.random.normal(-5, 1)
else:
X[i] = np.random.normal(5, 1)
return X
X = bimodal(1000)
#Let's see how it looks
plt.hist(X, bins=50)
plt.ylabel('Frequency')
plt.xlabel('Value')
print 'mean:', np.mean(X)
print 'standard deviation:', np.std(X)
mu = np.mean(X)
sigma = np.std(X)
N = np.random.normal(mu, sigma, 1000)
plt.hist(N, bins=50)
plt.ylabel('Frequency')
plt.xlabel('Value');
from statsmodels.stats.stattools import jarque_bera
jarque_bera(X)
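The statistic itself is simple to compute by hand: it combines sample skewness and excess kurtosis, the two moments that are (0, 0) for a normal. A NumPy sketch of just the test statistic (not the p-value machinery statsmodels provides), on made-up data:

```python
import numpy as np

def jarque_bera_stat(x):
    # JB = n/6 * (S^2 + K^2/4), with sample skewness S and excess kurtosis K
    n = len(x)
    z = (x - x.mean()) / x.std()
    S = (z ** 3).mean()
    K = (z ** 4).mean() - 3
    return n / 6.0 * (S ** 2 + K ** 2 / 4)

rng = np.random.RandomState(0)
print(jarque_bera_stat(rng.normal(size=5000)))       # small: looks normal
print(jarque_bera_stat(rng.exponential(size=5000)))  # huge: clearly not normal
```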
def sharpe_ratio(asset, riskfree):
return np.mean(asset - riskfree)/np.std(asset - riskfree)
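As a quick illustration of the formula (a Python 3 sketch on made-up excess returns, not real market data):

```python
import numpy as np

# Sharpe ratio: mean of excess returns divided by their standard deviation
asset = np.array([0.02, 0.01, -0.01, 0.03, 0.00])
riskfree = np.full(5, 0.001)
excess = asset - riskfree
print(excess.mean() / excess.std())   # ~0.64 for these numbers
```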
start = '2012-01-01'
end = '2015-01-01'
# Use an ETF that tracks 3-month T-bills as our risk-free rate of return
treasury_ret = get_pricing('BIL', fields='price', start_date=start, end_date=end).pct_change()[1:]
pricing = get_pricing('AMZN', fields='price', start_date=start, end_date=end)
returns = pricing.pct_change()[1:] # Get the returns on the asset
# Compute the running Sharpe ratio
running_sharpe = [sharpe_ratio(returns[i-90:i], treasury_ret[i-90:i]) for i in range(90, len(returns))]
# Plot running Sharpe ratio up to 100 days before the end of the data set
_, ax1 = plt.subplots()
ax1.plot(range(90, len(returns)-100), running_sharpe[:-100]);
ticks = ax1.get_xticks()
ax1.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates
plt.xlabel('Date')
plt.ylabel('Sharpe Ratio');
# Compute the mean and std of the running Sharpe ratios up to 100 days before the end
mean_rs = np.mean(running_sharpe[:-100])
std_rs = np.std(running_sharpe[:-100])
# Plot running Sharpe ratio
_, ax2 = plt.subplots()
ax2.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates
ax2.plot(range(90, len(returns)), running_sharpe)
# Plot its mean and the +/- 1 standard deviation lines
ax2.axhline(mean_rs)
ax2.axhline(mean_rs + std_rs, linestyle='--')
ax2.axhline(mean_rs - std_rs, linestyle='--')
# Indicate where we computed the mean and standard deviations
# Everything after this is 'out of sample' which we are comparing with the estimated mean and std
ax2.axvline(len(returns) - 100, color='pink');
plt.xlabel('Date')
plt.ylabel('Sharpe Ratio')
plt.legend(['Sharpe Ratio', 'Mean', '+/- 1 Standard Deviation'])
print 'Mean of running Sharpe ratio:', mean_rs
print 'std of running Sharpe ratio:', std_rs
# Load time series of prices
start = '2012-01-01'
end = '2015-01-01'
pricing = get_pricing('AMZN', fields='price', start_date=start, end_date=end)
# Compute the rolling mean for each day
mu = pd.rolling_mean(pricing, window=90)
# Plot pricing data
_, ax1 = plt.subplots()
ax1.plot(pricing)
ticks = ax1.get_xticks()
ax1.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates
plt.ylabel('Price')
plt.xlabel('Date')
# Plot rolling mean
ax1.plot(mu);
plt.legend(['Price','Rolling Average']);
print 'Mean of rolling mean:', np.mean(mu)
print 'std of rolling mean:', np.std(mu)
# Compute rolling standard deviation
std = pd.rolling_std(pricing, window=90)
# Plot rolling std
_, ax2 = plt.subplots()
ax2.plot(std)
ax2.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates
plt.ylabel('Standard Deviation of Moving Average')
plt.xlabel('Date')
print 'Mean of rolling std:', np.mean(std)
print 'std of rolling std:', np.std(std)
# Plot original data
_, ax3 = plt.subplots()
ax3.plot(pricing)
ax3.set_xticklabels([pricing.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates
# Plot Bollinger bands
ax3.plot(mu)
ax3.plot(mu + std)
ax3.plot(mu - std);
plt.ylabel('Price')
plt.xlabel('Date')
plt.legend(['Price', 'Moving Average', 'Moving Average +1 Std', 'Moving Average -1 Std'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example
Step2: Notice that, although the probability of getting closer to 0 and 1 for the mean and standard deviation, respectively, increases with the number of samples, we do not always get better estimates by taking more data points. Whatever our expectation is, we can always get a different result, and our goal is often to compute the probability that the result is significantly different than expected.
Step3: Sure enough, the mean is incredibly non-informative about what is going on in the data. We have collapsed all of our data into a single estimate, and lost a lot of information doing so. This is what the distribution should look like if our hypothesis that it is normally distributed is correct.
Step4: We'll test our data using the Jarque-Bera test to see if it's normal. A significant p-value indicates non-normality.
Step5: Sure enough the value is < 0.05 and we say that X is not normal. This saves us from accidentally making horrible predictions.
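As a hedged sketch (not part of the notebook itself), the same normality check can be reproduced with `scipy.stats.jarque_bera`, which returns the test statistic and a p-value; the sample data here is illustrative:

```python
import numpy as np
from scipy import stats

np.random.seed(42)
normal_sample = np.random.normal(0, 1, 1000)

# jarque_bera returns (statistic, p-value); a small p-value suggests non-normality
jb_stat, p_value = stats.jarque_bera(normal_sample)
print(jb_stat, p_value)
```

For genuinely normal data the p-value is usually large, so the test fails to reject normality.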
Step6: The Sharpe ratio looks rather volatile, and it's clear that just reporting it as a single value will not be very helpful for predicting future values. Instead, we can compute the mean and standard deviation of the data above, and then see if it helps us predict the Sharpe ratio for the next 100 days.
Step7: The standard deviation in this case is about a quarter of the range, so this data is extremely volatile. Taking this into account when looking ahead gave a better prediction than just using the mean, although we still observed data more than one standard deviation away. We could also compute the rolling mean of the Sharpe ratio to try and follow trends; but in that case, too, we should keep in mind the standard deviation.
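A minimal self-contained check of the `sharpe_ratio` helper, using made-up returns rather than the notebook's pricing data:

```python
import numpy as np

def sharpe_ratio(asset, riskfree):
    # Sharpe ratio: mean excess return divided by its standard deviation
    excess = np.asarray(asset) - np.asarray(riskfree)
    return np.mean(excess) / np.std(excess)

returns = np.array([0.01, 0.02, -0.005, 0.015])
riskfree = np.full(4, 0.001)
print(sharpe_ratio(returns, riskfree))
```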
Step8: This lets us see the instability/standard error of the mean, and helps anticipate future variability in the data. We can quantify this variability by computing the mean and standard deviation of the rolling mean.
Step9: In fact, the standard deviation, which we use to quantify variability, is itself variable. Below we plot the rolling standard deviation (for a 90-day window), and compute <i>its</i> mean and standard deviation.
Step10: To see what this changing standard deviation means for our data set, let's plot the data again along with the Bollinger bands
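The rolling-mean / rolling-std pattern behind the Bollinger bands can be sketched on toy data; note that modern pandas uses `Series.rolling` rather than the deprecated `pd.rolling_mean` used in the notebook:

```python
import numpy as np
import pandas as pd

prices = pd.Series(np.arange(10, dtype=float))
mu = prices.rolling(window=3).mean()
std = prices.rolling(window=3).std()
upper, lower = mu + std, mu - std  # +/- 1 standard deviation bands
print(upper.iloc[2], lower.iloc[2])
```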
|
1,849
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries: NumPy, pandas, matplotlib
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib import rc
# Tell iPython to include plots inline in the notebook
%matplotlib inline
# Set styles for seaborn
%config InlineBackend.figure_formats = {'png', 'retina'}
rc_sns = {'lines.linewidth': 2,
'axes.labelsize': 14,
'axes.titlesize': 14,
'axes.facecolor': 'DFDFE5'}
sns.set_context('notebook', font_scale=1.2, rc=rc_sns, )
sns.set_style ('darkgrid', rc=rc_sns)
# Read dataset
data = pd.read_csv("wholesale-customers.csv")
print "Dataset has {} rows, {} columns".format(*data.shape)
print data.head() # print the first 5 rows
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler, MinMaxScaler
sc = StandardScaler()
X_std = sc.fit_transform(data)
cols = list(data.columns)
sns.heatmap(np.corrcoef(X_std.T),
cbar=True, square=True, annot=True, fmt='.2f',
xticklabels=cols, yticklabels=cols)
data.describe()
# TODO: Apply PCA with the same number of dimensions as variables in the dataset
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
# Standardize input data
sc = StandardScaler()
X = data.values.astype(np.float64)
X_std = sc.fit_transform(X)
# Perform PCA analysis
pca = PCA(n_components=None)
X_std_pca = pca.fit_transform(X_std)
# Print the components and the amount of variance in the data contained in each dimension
print('Principal components (pc) :')
print(pca.components_)
print('\nExplained variance ratios (EVR):')
print(pca.explained_variance_ratio_)
print('\nEVR of the 1st and 2nd pc: %.3f' % sum(pca.explained_variance_ratio_[:2]))
print('EVR of the 3rd and 4th pc: %.3f' % sum(pca.explained_variance_ratio_[2:4]))
print('EVR of the first four pc: %.3f' % sum(pca.explained_variance_ratio_[:4]))
# visualization of indivudual and cumulative explained variance ratio
# code adapted from p132 of S. Raschka "Python Machine Learning" 2015
n_pca = pca.n_components_
evr = pca.explained_variance_ratio_
cum_evr = np.cumsum(evr)
plt.bar (range(1, n_pca+1), evr, alpha=0.75,
align='center', label='explained variance ratio (individual)')
plt.step(range(1, n_pca+1), cum_evr,
where='mid', label='explained variance ratio (cumulative)')
plt.legend(loc='best')
plt.xlabel('Number of principal components')
rc('font', weight='bold')
pd.DataFrame(pca.components_, columns=cols, index=['PC %d' %idx for idx in range(1, len(cols)+1)])
# TODO: Fit an ICA model to the data
# Note: Adjust the data to have center at the origin first!
from sklearn.decomposition import FastICA
ica = FastICA(n_components=None, whiten=True, random_state=0)
X_ica = ica.fit_transform(X_std)
print('The unmixing matrix:')
pd.set_option('display.precision', 4)
pd.DataFrame(ica.components_, columns=cols, index=['IC %d' %idx for idx in range(1, len(cols)+1)])
# Generate a plot for silhouette scores vs number of specified clusters
# in order to aid selection of optimal cluster size
# Import clustering modules
import itertools
import matplotlib.gridspec as gridspec
from mlxtend.evaluate import plot_decision_regions
from sklearn.cluster import KMeans
from sklearn.mixture import GMM
from sklearn.metrics import silhouette_score
# Use silhouette score to select the number of clusters for K-means and GMM
n_cluster_min = 2
n_cluster_max = 11
cluster_range = range(n_cluster_min, n_cluster_max+1)
# Compute silhouette scores
km_silhouette = []
cov_types = ['spherical', 'tied', 'diag', 'full']
empty_list = [[] for i in cov_types]
gmm_silhouette = dict(zip(cov_types, empty_list))
for i in cluster_range:
y_km = KMeans(n_clusters=i, random_state=0).fit_predict(X_std)
km_silhouette.append (silhouette_score(X_std, y_km, metric='euclidean'))
for cov_type in cov_types:
y_gmm = GMM(n_components=i, random_state=0, covariance_type= cov_type).fit_predict(X_std)
gmm_silhouette[cov_type].append(silhouette_score(X_std, y_gmm, metric='euclidean'))
# Plot silhouette score vs number of clusters chosen
plt.figure(figsize=(8, 5), dpi=300)
plt.plot(cluster_range, km_silhouette, marker='o', label = 'K-Means')
for cov_type in cov_types:
plt.plot(cluster_range, gmm_silhouette[cov_type], marker='s', label = 'GMM (%s)' %cov_type )
plt.xlabel('Number of clusters specified')
plt.ylabel('Silhouette score')
plt.xlim(n_cluster_min-1, n_cluster_max+1)
plt.ylim(0, 0.805)
plt.yticks(np.arange(0, 0.805, 0.2))
plt.legend(loc='upper right')
rc('font', weight='bold')
# TODO: First we reduce the data to two dimensions using PCA to capture variation
pca_reduced = PCA(n_components=2)
X_std_pca_reduced = pca_reduced.fit_transform(X_std)
print('==============================================================================')
print('First 5 elements after transformation with the first two principle components:')
print(X_std_pca_reduced[:5])
print('==============================================================================')
# TODO: Implement your clustering algorithm here,
# and fit it to the reduced data for visualization
# initializing clustering algorithms KMeans/GMM and centroids
clus = [KMeans(n_clusters=2), GMM(n_components=2, covariance_type='tied')]
centroids = {}
# plotting decision regions for KMeans and GMM
gs = gridspec.GridSpec(1, 2)
grds = itertools.product([0, 1], repeat=1)
fig = plt.figure(figsize=(12, 5))
for clu, grd in zip(clus, grds):
# fit to the reduced reduced data
y_clu = clu.fit_predict(X_std_pca_reduced)
clu_name = clu.__class__.__name__
if clu_name=='KMeans': centroids[clu_name] = clu.cluster_centers_
elif clu_name=='GMM': centroids[clu_name] = clu.means_
print('%s:\n%s' % (clu_name, clu))
print('==============================================================================')
# plotting the decision boundaries with mlxtend.evaluate.plot_decision_regions
# http://rasbt.github.io/mlxtend/user_guide/evaluate/plot_decision_regions/
ax = plt.subplot(gs[grd[0]])
fig = plot_decision_regions(X=X_std_pca_reduced, y=y_clu, clf=clu)
fig.legend(loc='lower left', frameon=True)
# plotting the centroids of the clusters
plt.scatter(centroids[clu_name][:, 0], centroids[clu_name][:, 1],
marker='x', s=150, linewidths=2, color='w', zorder=2)
plt.title (clu_name)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
print('Centroids of KMeans:')
print(centroids['KMeans'])
print('\nMeans of GMM:')
print(centroids['GMM'])
pd.set_option('precision', 4)
print('\nCustomer types based on KMeans centroids:')
pd.DataFrame(sc.inverse_transform(pca_reduced.inverse_transform(centroids['KMeans'])),
columns=cols, index=range(1,3)).T.plot(kind='barh')
print('Customer types based on GMM means:')
pd.DataFrame(sc.inverse_transform(pca_reduced.inverse_transform(centroids['GMM'])),
columns=cols, index=range(1,3)).T.plot(kind='barh')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Exploration
Step2: Feature Transformation
Step3: 2) How quickly does the variance drop off by dimension? If you were to use PCA on this dataset, how many dimensions would you choose for your analysis? Why?
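A hedged toy example of inspecting how variance drops off by component with scikit-learn's `PCA` (random data, so the exact ratios are illustrative only):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(200, 4)
pca = PCA(n_components=None).fit(X)

# Ratios come sorted in decreasing order and sum to 1 when all components are kept
print(pca.explained_variance_ratio_)
print(np.cumsum(pca.explained_variance_ratio_))
```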
Step4: Answer
Step5: 4) For each vector in the ICA decomposition, write a sentence or two explaining what sort of object or property it corresponds to. What could these components be used for?
Step6: From the plot shown above, the GMM clustering algorithm with a covariance type of tied and n_components=2 provides the best silhouette score of 0.733, which is around 19.3% better than K-means (n_clusters=2).
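The silhouette-score comparison can be sketched on synthetic data (illustrative blobs, not the wholesale-customers dataset):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Three well-separated synthetic clusters
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
score = silhouette_score(X, labels, metric='euclidean')
print(score)  # values close to 1 indicate well-separated clusters
```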
Step7: 7) What are the central objects in each cluster? Describe them as customers.
|
1,850
|
<ASSISTANT_TASK:>
Python Code:
# Read in the list of 250 movies, making sure to remove commas from their names
# (actually, if it has commas, it will be read in as different fields)
import csv
movies = []
with open('movies.csv','r') as csvfile:
myreader = csv.reader(csvfile)
for index, row in enumerate(myreader):
movies.append( ' '.join(row) ) # the join() call merges all fields
# We might like to split this into two tasks, one for movies pre-1980 and one for post-1980,
import re # used for "regular-expressions", a method of searching strings
cutoffYear = 1980
oldMovies = []
newMovies = []
for mv in movies:
sp = re.split(r'[()]',mv)
#print sp # output looks like: ['Kill Bill: Vol. 2 ', '2004', '']
year = int(sp[1])
if year < cutoffYear:
oldMovies.append( mv )
else:
newMovies.append( mv )
print("Found", len(newMovies), "new movies (after 1980) and", len(oldMovies), "old movies")
# and for simplicity, let's just rename "newMovies" to "movies"
movies = newMovies
# Make a dictionary that will help us convert movie titles to numbers
Movie2index = {}
for ind, mv in enumerate(movies):
Movie2index[mv] = ind
# sample usage:
print('The movie ', movies[3],' has index', Movie2index[movies[3]])
# Read in the list of 60 questions
AllQuestions = []
with open('questions60.csv', 'r') as csvfile:
myreader = csv.reader(csvfile)
for row in myreader:
# the rstrip() removes blanks
AllQuestions.append( row[0].rstrip() )
print('Found', len(AllQuestions), 'questions')
questions = list(set(AllQuestions))
print('Found', len(questions), 'unique questions')
# As we did for movies, make a dictionary to convert questions to numbers
Question2index = {}
for index,quest in enumerate( questions ):
Question2index[quest] = index
# sample usage:
print('The question ', questions[40],' has index', Question2index[questions[40]])
YesNoDict = { "Yes": 1, "No": -1, "Unsure": 0, "": 0 }
# load from csv files
X = []
y = []
with open('MechanicalTurkResults_149movies_X.csv','r') as csvfile:
myreader = csv.reader(csvfile)
for row in myreader:
X.append( list(map(int,row)) )
with open('MechanicalTurkResults_149movies_y.csv','r') as csvfile:
myreader = csv.reader(csvfile)
for row in myreader:
y = list(map(int,row))
from sklearn import tree
# the rest is up to you
# up to you
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in the list of questions
Step2: Read in the training data
Step3: Your turn
Step4: Use the trained classifier to play a 20 questions game
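One possible sketch of the missing training step, using a tiny hand-made yes(1)/no(-1) matrix; the real `X`/`y` come from the Mechanical Turk files loaded above, and the toy data here is hypothetical:

```python
from sklearn import tree

# Hypothetical toy data: rows are answer vectors, labels are movie indices
X_toy = [[1, -1], [-1, 1], [1, 1], [-1, -1]]
y_toy = [0, 1, 0, 1]

clf = tree.DecisionTreeClassifier(max_depth=3)
clf.fit(X_toy, y_toy)
print(clf.predict([[1, -1]]))
```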
|
1,851
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
import tensorflow as tf
import numpy as np
assert float(tf.__version__[:3]) >= 2.3
# Load MNIST dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images.astype(np.float32) / 255.0
test_images = test_images.astype(np.float32) / 255.0
# Define the model architecture
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=5,
validation_data=(test_images, test_labels)
)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model_quant = converter.convert()
def representative_data_gen():
for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
# Model has only one input so each data point has one element.
yield [input_value]
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
tflite_model_quant = converter.convert()
interpreter = tf.lite.Interpreter(model_content=tflite_model_quant)
input_type = interpreter.get_input_details()[0]['dtype']
print('input: ', input_type)
output_type = interpreter.get_output_details()[0]['dtype']
print('output: ', output_type)
def representative_data_gen():
for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
yield [input_value]
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Ensure that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Set the input and output tensors to uint8 (APIs added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model_quant = converter.convert()
interpreter = tf.lite.Interpreter(model_content=tflite_model_quant)
input_type = interpreter.get_input_details()[0]['dtype']
print('input: ', input_type)
output_type = interpreter.get_output_details()[0]['dtype']
print('output: ', output_type)
import pathlib
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
# Save the unquantized/float model:
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
# Save the quantized model:
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_model_quant)
# Helper function to run inference on a TFLite model
def run_tflite_model(tflite_file, test_image_indices):
global test_images
# Initialize the interpreter
interpreter = tf.lite.Interpreter(model_path=str(tflite_file))
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
predictions = np.zeros((len(test_image_indices),), dtype=int)
for i, test_image_index in enumerate(test_image_indices):
test_image = test_images[test_image_index]
test_label = test_labels[test_image_index]
# Check if the input type is quantized, then rescale input data to uint8
if input_details['dtype'] == np.uint8:
input_scale, input_zero_point = input_details["quantization"]
test_image = test_image / input_scale + input_zero_point
test_image = np.expand_dims(test_image, axis=0).astype(input_details["dtype"])
interpreter.set_tensor(input_details["index"], test_image)
interpreter.invoke()
output = interpreter.get_tensor(output_details["index"])[0]
predictions[i] = output.argmax()
return predictions
import matplotlib.pylab as plt
# Change this to test a different image
test_image_index = 1
## Helper function to test the models on one image
def test_model(tflite_file, test_image_index, model_type):
global test_labels
predictions = run_tflite_model(tflite_file, [test_image_index])
plt.imshow(test_images[test_image_index])
template = model_type + " Model \n True:{true}, Predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[test_image_index]), predict=str(predictions[0])))
plt.grid(False)
test_model(tflite_model_file, test_image_index, model_type="Float")
test_model(tflite_model_quant_file, test_image_index, model_type="Quantized")
# Helper function to evaluate a TFLite model on all images
def evaluate_model(tflite_file, model_type):
global test_images
global test_labels
test_image_indices = range(test_images.shape[0])
predictions = run_tflite_model(tflite_file, test_image_indices)
accuracy = (np.sum(test_labels== predictions) * 100) / len(test_images)
print('%s model accuracy is %.4f%% (Number of test samples=%d)' % (
model_type, accuracy, len(test_images)))
evaluate_model(tflite_model_file, model_type="Float")
evaluate_model(tflite_model_quant_file, model_type="Quantized")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Post-training integer quantization
Step2: Generate a TensorFlow Model
Step3: Convert to a TensorFlow Lite model
Step4: It's now a TensorFlow Lite model, but it's still using 32-bit float values for all parameter data.
Step5: The model is now a bit smaller with quantized weights, but other variable data is still in float format.
Step6: Now all weights and variable data are quantized, and the model is significantly smaller compared to the original TensorFlow Lite model.
Step7: That's usually good for compatibility, but it won't be compatible with devices that perform only integer-based operations, such as the Edge TPU.
Step8: The internal quantization remains the same as above, but you can see the input and output tensors are now integer format
Step9: Now you have an integer quantized model that uses integer data for the model's input and output tensors, so it's compatible with integer-only hardware such as the Edge TPU.
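The uint8 rescaling performed in `run_tflite_model` follows the standard affine quantization scheme; here is a framework-free sketch with assumed scale/zero-point values for a [-1, 1] range:

```python
import numpy as np

def quantize(x, scale, zero_point):
    # Map floats to uint8 with an affine transform, then clamp to the uint8 range
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(q, scale, zero_point):
    # Invert the affine transform back to floats
    return (q.astype(np.float32) - zero_point) * scale

scale, zero_point = 2.0 / 255.0, 128  # assumed values, not taken from a real model
x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
x_hat = dequantize(quantize(x, scale, zero_point), scale, zero_point)
print(np.max(np.abs(x - x_hat)))  # round-trip error stays below one scale step
```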
Step10: Run the TensorFlow Lite models
Step11: Test the models on one image
Step12: Now test the float model
Step13: And test the quantized model
Step14: Evaluate the models on all images
Step15: Evaluate the float model
Step16: Evaluate the quantized model
|
1,852
|
<ASSISTANT_TASK:>
Python Code:
# Add MOSQITO to the Python path
import sys
sys.path.append('..')
# To get inline plots (specific to Jupyter notebook)
%matplotlib notebook
# Import numpy
import numpy as np
# Import plot function
import matplotlib.pyplot as plt
# Import load function
from mosqito.utils import load
# Import MOSQITO color sheme [Optional]
from mosqito import COLORS
# define the path to the wav file (to be replaced by your own path)
file_path = "../validations/sq_metrics/loudness_zwtv/input/ISO_532-1/Annex B.5/Test signal 24 (woodpecker).wav"
# load signal
sig_wav, fs_wav = load(file_path, wav_calib=2 * 2 **0.5)
# plot signal
t_wav = np.linspace(0, (len(sig_wav) - 1) / fs_wav, len(sig_wav))
plt.figure(1)
plt.plot(t_wav, sig_wav, color=COLORS[0])
plt.xlabel('Time [s]')
plt.ylabel('Acoustic pressure [Pa]')
# define the path to the mat file (to be replaced by your own path)
file_path = "../tests/input/noise_1Pa_RMS.mat"
# load signal
sig_mat, fs_mat = load(file_path, mat_signal="signal", mat_fs="fs")
# plot signal
t_mat = np.linspace(0, (len(sig_mat) - 1) / fs_mat, len(sig_mat))
plt.figure(2)
plt.plot(t_mat, sig_mat, color=COLORS[0])
plt.xlabel('Time [s]')
plt.ylabel('Acoustic pressure [Pa]')
# define the path to the uff file (to be replaced by your own path)
file_path = "../tests/input/noise_1Pa_RMS.uff"
# load signal
sig_uff, fs_uff = load(file_path)
# plot signal
t_uff = np.linspace(0, (len(sig_uff) - 1) / fs_uff, len(sig_uff))
plt.figure(3)
plt.plot(t_uff, sig_uff, color=COLORS[0])
plt.xlabel('Time [s]')
plt.ylabel('Acoustic pressure [Pa]')
# Load function
from mosqito.sound_level_meter import noct_spectrum
# Compute third octave band spectrum
spec_3, freq_3 = noct_spectrum(sig_wav, fs_wav, fmin=24, fmax=12600, n=3)
# Compute 24th octave band spectrum
spec_24, freq_24 = noct_spectrum(sig_wav, fs_wav, fmin=24, fmax=12600, n=24)
# plot
plt.figure(4)
plt.semilogx(freq_3, 20*np.log10(spec_3/2e-5), label="1/3 oct.", color=COLORS[0])
plt.semilogx(freq_24, 20*np.log10(spec_24/2e-5), label="1/24 oct.", color=COLORS[1])
plt.legend()
plt.xlabel('Freq [Hz]')
plt.ylabel('Acoustic pressure [dB re. 2e-5 Pa]')
from datetime import date
print("Tutorial generation date:", date.today().strftime("%B %d, %Y"))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load signal from .wav file
Step2: Load signal from a .mat file
Step3: Load signal from a .uff file
Step4: Compute nth octave band spectrum
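The 1/n-octave band center frequencies themselves can be sketched with plain NumPy, assuming the base-2 convention referenced to 1 kHz (standards also define a base-10 convention, so this is one choice among two):

```python
import numpy as np

n = 3            # 1/3-octave bands
f_ref = 1000.0   # reference center frequency in Hz
k = np.arange(-16, 14)
centers = f_ref * 2.0 ** (k / n)  # consecutive centers differ by a factor 2**(1/n)
print(centers[[0, 16, -1]])
```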
Step5:
|
1,853
|
<ASSISTANT_TASK:>
Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
def download(url, file):
"""Download file from <url>
:param url: URL to file
:param file: Local file path
"""
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
"""Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
"""
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
"""Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
"""
# TODO: Implement Min-Max scaling for grayscale image data
Xmin = np.min(image_data)
Xmax = np.max(image_data)
return 0.1 + (image_data - Xmin)*(0.9 - 0.1) / (Xmax - Xmin)
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 5
learning_rate = 0.2
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean_Variance_Image.png" style="height
Step6: Checkpoint
Step7: Problem 2
Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height
Step9: Test
|
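Step5 above refers to a Min-Max scaling figure for the greyscale pixel values. A minimal sketch of that normalization — assuming the common target range of [0.1, 0.9], which is a convention from similar exercises and not shown in the code above:

```python
def normalize_greyscale(image_data, a=0.1, b=0.9):
    """Scale raw greyscale pixel values from [0, 255] into [a, b].

    NOTE: the [0.1, 0.9] target range is an assumed convention,
    not taken from the notebook code above.
    """
    x_min, x_max = 0.0, 255.0
    return [a + (x - x_min) * (b - a) / (x_max - x_min) for x in image_data]

scaled = normalize_greyscale([0, 127.5, 255])
```

Applied to every pixel before training, this keeps the inputs in a small, well-conditioned range for gradient descent.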
1,854
|
<ASSISTANT_TASK:>
Python Code:
try:
count_datasets
except:
assert False
else:
assert True
c = count_datasets("submission_partial.json")
assert c == 4
c = count_datasets("submission_full.json")
assert c == 9
try:
c = count_datasets("submission_nonexistent.json")
except:
assert False
else:
assert c == -1
try:
get_dataset_by_index
except:
assert False
else:
assert True
import json
d = json.loads(open("partial_1.json", "r").read())
assert d == get_dataset_by_index("submission_partial.json", 1)
d = json.loads(open("full_8.json", "r").read())
assert d == get_dataset_by_index("submission_full.json", 8)
try:
c = get_dataset_by_index("submission_partial.json", 5)
except:
assert False
else:
assert c is None
try:
c = get_dataset_by_index("submission_nonexistent.json", 4983)
except:
assert False
else:
assert c is None
try:
get_dataset_by_name
except:
assert False
else:
assert True
import json
d = json.loads(open("partial_1.json", "r").read())
assert d == get_dataset_by_name("submission_partial.json", "01.01.test")
d = json.loads(open("full_8.json", "r").read())
assert d == get_dataset_by_name("submission_full.json", "04.01.test")
try:
c = get_dataset_by_name("submission_partial.json", "nonexistent")
except:
assert False
else:
assert c is None
try:
c = get_dataset_by_name("submission_nonexistent.json", "02.00.test")
except:
assert False
else:
assert c is None
try:
count_pixels_in_dataset
except:
assert False
else:
assert True
assert 29476 == count_pixels_in_dataset("submission_full.json", "01.01.test")
assert 30231 == count_pixels_in_dataset("submission_full.json", "04.01.test")
try:
c = count_pixels_in_dataset("submission_partial.json", "02.00.test")
except:
assert False
else:
assert c == -1
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: B
Step2: C
Step3: D
|
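The asserts in this record exercise functions whose implementations are not shown. A minimal sketch of what `count_datasets` might look like — assuming the submission file is a JSON object whose `"datasets"` key (an assumed key name) maps to a list:

```python
import json
import os
import tempfile

def count_datasets(path):
    """Return the number of datasets in a submission file, or -1 when
    the file does not exist. The top-level "datasets" key is an
    assumed layout, not confirmed by the tests above."""
    if not os.path.isfile(path):
        return -1
    with open(path, "r") as f:
        submission = json.load(f)
    return len(submission["datasets"])

# demo on a throwaway file
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    json.dump({"datasets": [{"name": "01.00.test"}, {"name": "01.01.test"}]}, tmp)
    tmp_path = tmp.name

n_found = count_datasets(tmp_path)
n_missing = count_datasets("submission_nonexistent.json")
os.remove(tmp_path)
```

Returning `-1` instead of raising matches the behavior the nonexistent-file test cases above expect.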
1,855
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from math import pi, sin, cos
import numpy as np
import openmc
fuel = openmc.Material(name='fuel')
fuel.add_element('U', 1.0)
fuel.add_element('O', 2.0)
fuel.set_density('g/cm3', 10.0)
clad = openmc.Material(name='zircaloy')
clad.add_element('Zr', 1.0)
clad.set_density('g/cm3', 6.0)
heavy_water = openmc.Material(name='heavy water')
heavy_water.add_nuclide('H2', 2.0)
heavy_water.add_nuclide('O16', 1.0)
heavy_water.add_s_alpha_beta('c_D_in_D2O')
heavy_water.set_density('g/cm3', 1.1)
# Outer radius of fuel and clad
r_fuel = 0.6122
r_clad = 0.6540
# Pressure tube and calendria radii
pressure_tube_ir = 5.16890
pressure_tube_or = 5.60320
calendria_ir = 6.44780
calendria_or = 6.58750
# Radius to center of each ring of fuel pins
ring_radii = np.array([0.0, 1.4885, 2.8755, 4.3305])
# These are the surfaces that will divide each of the rings
radial_surf = [openmc.ZCylinder(r=r) for r in
(ring_radii[:-1] + ring_radii[1:])/2]
water_cells = []
for i in range(ring_radii.size):
# Create annular region
if i == 0:
water_region = -radial_surf[i]
elif i == ring_radii.size - 1:
water_region = +radial_surf[i-1]
else:
water_region = +radial_surf[i-1] & -radial_surf[i]
water_cells.append(openmc.Cell(fill=heavy_water, region=water_region))
plot_args = {'width': (2*calendria_or, 2*calendria_or)}
bundle_universe = openmc.Universe(cells=water_cells)
bundle_universe.plot(**plot_args)
surf_fuel = openmc.ZCylinder(r=r_fuel)
fuel_cell = openmc.Cell(fill=fuel, region=-surf_fuel)
clad_cell = openmc.Cell(fill=clad, region=+surf_fuel)
pin_universe = openmc.Universe(cells=(fuel_cell, clad_cell))
pin_universe.plot(**plot_args)
num_pins = [1, 6, 12, 18]
angles = [0, 0, 15, 0]
for i, (r, n, a) in enumerate(zip(ring_radii, num_pins, angles)):
for j in range(n):
# Determine location of center of pin
theta = (a + j/n*360.) * pi/180.
x = r*cos(theta)
y = r*sin(theta)
pin_boundary = openmc.ZCylinder(x0=x, y0=y, r=r_clad)
water_cells[i].region &= +pin_boundary
# Create each fuel pin -- note that we explicitly assign an ID so
# that we can identify the pin later when looking at tallies
pin = openmc.Cell(fill=pin_universe, region=-pin_boundary)
pin.translation = (x, y, 0)
pin.id = (i + 1)*100 + j
bundle_universe.add_cell(pin)
bundle_universe.plot(**plot_args)
pt_inner = openmc.ZCylinder(r=pressure_tube_ir)
pt_outer = openmc.ZCylinder(r=pressure_tube_or)
calendria_inner = openmc.ZCylinder(r=calendria_ir)
calendria_outer = openmc.ZCylinder(r=calendria_or, boundary_type='vacuum')
bundle = openmc.Cell(fill=bundle_universe, region=-pt_inner)
pressure_tube = openmc.Cell(fill=clad, region=+pt_inner & -pt_outer)
v1 = openmc.Cell(region=+pt_outer & -calendria_inner)
calendria = openmc.Cell(fill=clad, region=+calendria_inner & -calendria_outer)
root_universe = openmc.Universe(cells=[bundle, pressure_tube, v1, calendria])
geometry = openmc.Geometry(root_universe)
geometry.export_to_xml()
materials = openmc.Materials(geometry.get_all_materials().values())
materials.export_to_xml()
plot = openmc.Plot.from_geometry(geometry)
plot.color_by = 'material'
plot.colors = {
fuel: 'black',
clad: 'silver',
heavy_water: 'blue'
}
plot.to_ipython_image()
settings = openmc.Settings()
settings.particles = 1000
settings.batches = 20
settings.inactive = 10
settings.source = openmc.Source(space=openmc.stats.Point())
settings.export_to_xml()
fuel_tally = openmc.Tally()
fuel_tally.filters = [openmc.DistribcellFilter(fuel_cell)]
fuel_tally.scores = ['flux']
tallies = openmc.Tallies([fuel_tally])
tallies.export_to_xml()
openmc.run(output=False)
sp = openmc.StatePoint('statepoint.{}.h5'.format(settings.batches))
output_tally = sp.get_tally()
output_tally.get_pandas_dataframe()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's begin by creating the materials that will be used in our model.
Step2: With our materials created, we'll now define key dimensions in our model. These dimensions are taken from the example in section 11.1.3 of the Serpent manual.
Step3: To begin creating the bundle, we'll first create annular regions completely filled with heavy water and add in the fuel pins later. The radii that we've specified above correspond to the center of each ring. We actually need to create cylindrical surfaces at radii that are half-way between the centers.
Step4: Let's see what our geometry looks like so far. In order to plot the geometry, we create a universe that contains the annular water cells and then use the Universe.plot() method. While we're at it, we'll set some keyword arguments that can be reused for later plots.
Step5: Now we need to create a universe that contains a fuel pin. Note that we don't actually need to put water outside of the cladding in this universe because it will be truncated by a higher universe.
Step6: The code below works through each ring to create a cell containing the fuel pin universe. As each fuel pin is created, we modify the region of the water cell to include everything outside the fuel pin.
Step7: Looking pretty good! Finally, we create cells for the pressure tube and calendria and then put our bundle in the middle of the pressure tube.
Step8: Let's look at the final product. We'll export our geometry and materials and then use plot_inline() to get a nice-looking plot.
Step9: Interpreting Results
Step10: The return code of 0 indicates that OpenMC ran successfully. Now let's load the statepoint into a openmc.StatePoint object and use the Tally.get_pandas_dataframe(...) method to see our results.
|
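Step7 places each fuel pin at an angle around its ring. The polar-to-Cartesian step in isolation, using the same formula as the loop in the record above:

```python
from math import pi, sin, cos

def pin_center(r, n, a, j):
    """Center of pin j in a ring of n pins at radius r, with the
    ring rotated by a degrees -- mirrors the loop in the notebook."""
    theta = (a + j / n * 360.0) * pi / 180.0
    return r * cos(theta), r * sin(theta)

# first pin of the 6-pin ring sits on the +x axis...
x0, y0 = pin_center(1.4885, 6, 0, 0)
# ...and the fourth pin sits diametrically opposite
x3, y3 = pin_center(1.4885, 6, 0, 3)
```

Each `(x, y)` pair is then used both as the cylinder center and as the cell translation, so the pin universe's contents line up with its boundary.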
1,856
|
<ASSISTANT_TASK:>
Python Code:
try:
import scipy.optimize
except ImportError:
print("Vous avez lu le paragraphe précédent...?")
print("Je t'envoie sur https://scipy.org/install.html et tu auras plus d'informations...")
import webbrowser
webbrowser.open_new_tab("https://scipy.org/install.html")
# Objective Function: 50x_1 + 80x_2
# Constraint 1: 5x_1 + 2x_2 <= 20
# Constraint 2: -10x_1 + -12x_2 <= -90
result = scipy.optimize.linprog(
[50, 80], # Cost function: 50x_1 + 80x_2
A_ub=[[5, 2], [-10, -12]], # Coefficients for inequalities
b_ub=[20, -90], # Constraints for inequalities: 20 and -90
bounds=(0, None), # Bounds on x, 0 <= x_i <= +oo by default
)
if result.success:
print(f"X1: {round(result.x[0], 2)} hours")
print(f"X2: {round(result.x[1], 2)} hours")
else:
print("No solution")
print(result)
# Objective Function: x_1 + 6*x_2 + 13*x_3
# Constraint 1: x_1 <= 200
# Constraint 2: x_2 <= 300
# Constraint 3: x_1 + x_2 + x_3 <= 400
# Constraint 4: x_2 + 3*x_3 <= 600
# variables are assumed non-negative by default
# x_1 >= 0
# x_2 >= 0
# x_3 >= 0
result = scipy.optimize.linprog(
[-1, -6, -13], # Cost function: -x_1 + -6*x_2 + -13*x_3 to MINIMIZE
A_ub=[ # Coefficients for inequalities
[1, 0, 0], # for C1: 1*x_1 + 0*x_2 + 0*x_3 <= 200
[0, 1, 0], # for C2: 0*x_1 + 1*x_2 + 0*x_3 <= 300
[1, 1, 1], # for C3: 1*x_1 + 1*x_2 + 1*x_3 <= 400
[0, 1, 3], # for C4: 0*x_1 + 1*x_2 + 3*x_3 <= 600
],
b_ub=[200, 300, 400, 600], # Constraints for inequalities: 200, 300, 400, 600
bounds=(0, None), # Bounds on x, 0 <= x_i <= +oo by default
method="simplex",
)
print(result)
if result.success:
print(f"X1: {round(result.x[0], 2)} chocolats simples")
print(f"X2: {round(result.x[1], 2)} pyramides")
print(f"X2: {round(result.x[2], 2)} pyramides de luxe")
else:
print("No solution")
# TODO
print(result)
print("Variables slack:")
print(result.slack)
methods = [
"simplex",
"interior-point",
#"revised-simplex",
#"highs-ipm",
#"highs-ds",
#"highs",
]
def solve_problem_1(method):
return scipy.optimize.linprog(
[50, 80], # Cost function: 50x_1 + 80x_2
A_ub=[[5, 2], [-10, -12]], # Coefficients for inequalities
b_ub=[20, -90], # Constraints for inequalities: 20 and -90
method=method
)
for i, method in enumerate(methods):
# solve problem with this method
print(f"\n- Pour la méthode #{i}, {method}...")
solution = solve_problem_1(method)
print(f"La solution trouvée est {solution}")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A few small linear problems
Step2: We use scipy's features for linear problems (doc), starting with just the function scipy.optimize.linprog
Step3: And there you go, no more complicated than that!
Step4: Example problem from the simplex lecture
Step5: We thus find the optimal commercial solution for Charlie the chocolatier
Step6: Exercise 2
|
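A useful habit after any `linprog` call is to verify the returned point by hand. A stdlib sketch that checks feasibility and the objective for a candidate solution of the chocolate problem — the candidate (0, 300, 100) with profit 3100 is my hand-computed vertex, an assumption to compare against the solver output rather than an authoritative answer:

```python
def is_feasible(x, A_ub, b_ub):
    """True when A_ub @ x <= b_ub componentwise and x >= 0."""
    if any(xi < 0 for xi in x):
        return False
    return all(
        sum(a * xi for a, xi in zip(row, x)) <= b
        for row, b in zip(A_ub, b_ub)
    )

# same constraint data as the chocolate problem above
A_ub = [[1, 0, 0], [0, 1, 0], [1, 1, 1], [0, 1, 3]]
b_ub = [200, 300, 400, 600]
candidate = (0, 300, 100)

feasible = is_feasible(candidate, A_ub, b_ub)
profit = sum(c * xi for c, xi in zip([1, 6, 13], candidate))
```

Feasibility and the objective value are plain arithmetic, so this check catches sign or transcription errors in `A_ub`/`b_ub` even when the solver reports success.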
1,857
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%pylab inline --no-import-all
pylab.rcParams['figure.figsize'] = (18, 10)
from ntfdl import Dl
stl = Dl('STL', exchange='OSE', download=False)
history = stl.get_history()
history.tail()
fig, ax = plt.subplots()
ax.tick_params(labeltop=False, labelright=True)
history.close.plot()
plt.grid()
# Annotate last quote
xmin, xmax = ax.get_xlim()
plt.annotate(history.iloc[-1].close, xy=(1.005, history.iloc[-1].close), xytext=(0, 0), \
xycoords=('axes fraction', 'data'), textcoords='offset points', backgroundcolor='k', color='w')
history_ma = stl.get_history(mas=[10,20,50,100,200])
history_ma.tail(5)
fig, ax = plt.subplots()
ax.tick_params(labeltop=False, labelright=True)
history_ma[['close','ma10','ma20','ma50','ma100','ma200']].plot(ax=ax)
plt.grid()
fig, ax = plt.subplots()
ax.tick_params(labeltop=False, labelright=True)
history_ma['2008-01-01':'2010-01-01'][['close','ma10','ma20','ma50','ma100','ma200']].plot(ax=ax)
plt.grid()
history.turnover.plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Including moving averages
Step2: Busy chart, let's instead slice the pandas with the [from
|
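The `mas=[10, 20, 50, 100, 200]` argument asks the downloader for moving-average columns. Assuming these are plain simple moving averages (the exact variant is not documented in the snippet above), what one of those columns contains is just a rolling mean, sketched in pure Python with a window of 3 for brevity:

```python
def simple_moving_average(values, window):
    """Rolling mean; positions with fewer than `window` samples get None,
    matching how such columns usually start with missing values."""
    out = []
    for i in range(len(values)):
        if i + 1 < window:
            out.append(None)
        else:
            chunk = values[i + 1 - window : i + 1]
            out.append(sum(chunk) / window)
    return out

closes = [10.0, 11.0, 12.0, 13.0, 14.0]
ma3 = simple_moving_average(closes, 3)
```

In pandas the same thing is a one-liner (`series.rolling(window).mean()`), which is presumably what backs the `ma10`/`ma20`/... columns shown above.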
1,858
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from pylab import *
%matplotlib inline
from scipy.stats.stats import spearmanr
from scipy.stats.mstats import normaltest
import warnings
warnings.filterwarnings('ignore')
import sys
sys.path.append("/Users/rfinn/Dropbox/pythonCode/")
sys.path.append("/anaconda/lib/python2.7/site-packages")
sys.path.append("/Users/rfinn/Ureka/variants/common/lib/python2.7/site-packages")
def spearman_with_errors(x,y,yerr,Nmc=1000,plotflag=False):
ysim=np.zeros(Nmc,'f')
rhosim=np.zeros(Nmc,'f')
psim=np.zeros(Nmc,'f')
for i in range(Nmc):
ysim=np.random.normal(y,scale=yerr,size=len(y))
rhosim[i],psim[i] = spearmanr(x,ysim)
cave=np.mean(rhosim)
cstd=np.std(rhosim)
q1=50-34 # mean minus one std
lower=np.percentile(rhosim,q1)
q2=50+34 # mean plus one std
upper=np.percentile(rhosim,q2)
print('mean = %5.2f, std = %5.2f' % (cave, cstd))
print('confidence interval from sorted list of MC fit values:')
print('lower = %5.2f (%5.2f), upper = %5.2f (%5.2f)' % (lower, cave-cstd, upper, cave+cstd))
k,pnorm=normaltest(rhosim)
print('probability that distribution is normal = %5.2f' % pnorm)
if plotflag:
plt.figure(figsize=(10,4))
plt.subplot(1,2,1)
plt.hist(rhosim,bins=10,normed=True)
plt.xlabel(r'$Spearman \ \rho $')
plt.axvline(x=cave,ls='-',color='k')
plt.axvline(x=lower,ls='--',color='k')
plt.axvline(x=upper,ls='--',color='k')
plt.subplot(1,2,2)
plt.hist(np.log10(psim),bins=10,normed=True)
plt.xlabel(r'$\log_{10}(p \ value)$')
plt.figure()
plt.hexbin(rhosim,np.log10(psim))
plt.xlabel(r'$Spearman \ \rho $')
plt.ylabel(r'$\log_{10}(p \ value)$')
return rhosim,psim
%run ~/Dropbox/pythonCode/LCSanalyzeblue.py
s.plotsizeBTblue()
flag = s.sampleflag & ~s.agnflag & s.gim2dflag
membflag = flag & s.membflag
fieldflag = flag & ~s.membflag
t=spearman_with_errors(s.s.B_T_r[membflag], s.s.SIZE_RATIO[membflag],s.s.SIZE_RATIOERR[membflag],Nmc=1000,plotflag=True)
t=spearman_with_errors(s.s.B_T_r[fieldflag], s.s.SIZE_RATIO[fieldflag],s.s.SIZE_RATIOERR[fieldflag],plotflag=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Plot Dennis is referring to
Step2: Fixing the spearman rank correlation
|
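`spearman_with_errors` above reports a confidence interval from the 16th/84th percentiles of the Monte-Carlo draws. That percentile step in isolation, stdlib only — the synthetic normal draws here stand in for the `rhosim` array produced above:

```python
import random
import statistics

def mc_percentile_interval(draws, lower_pct=16, upper_pct=84):
    """68% interval from sorted Monte-Carlo draws -- the same idea as
    np.percentile(rhosim, 50 - 34) and np.percentile(rhosim, 50 + 34)."""
    s = sorted(draws)
    lo = s[int(lower_pct / 100 * (len(s) - 1))]
    hi = s[int(upper_pct / 100 * (len(s) - 1))]
    return lo, hi

random.seed(42)
draws = [random.gauss(0.5, 0.1) for _ in range(1000)]
lo, hi = mc_percentile_interval(draws)
mean = statistics.mean(draws)
```

For a normal distribution the 16th/84th percentiles land roughly one standard deviation either side of the mean, which is why the function above compares them against `cave ± cstd`.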
1,859
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import re
from os import path
from scipy.ndimage import imread
from nltk.util import ngrams
from collections import Counter
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
from IPython.display import display, HTML
# importing plotly-related modules
import cufflinks as cf
import plotly.plotly as py
import plotly.graph_objs as go
from plotly import tools
from plotly.tools import FigureFactory as FF
plt.style.use('ggplot')
%matplotlib inline
# Reading in data
holyr_df = pd.read_csv('../tmp/clean/holyokecon_confessional_reports.csv')
holys_df = pd.read_csv('../tmp/clean/holyokecon_confessional_secrets.csv')
holyraw_df = pd.read_csv('../tmp/raw/holyokecon_confessional_secrets.csv')
holyrawr_df = pd.read_csv('../tmp/raw/holyokecon_confessional_reports.csv')
# defining some global variables
SECRET_COL = 'clean_tokens_secret'
REPORT_COL = 'clean_tokens_report'
# merge the clean secrets, clean reports, raw reports, and raw secrets data
holysr_df = holys_df.merge(holyr_df, left_on='id', right_on="secret_id",
how='left', suffixes=('_secret', '_report'))
holysr_df = holysr_df.merge(holyraw_df[['id', 'create_date', 'confession']],
left_on='id_secret', right_on='id', how='left')
holysr_df = holysr_df.merge(holyrawr_df[['id', 'reason']],
left_on='id_report', right_on='id', how='left')
holysr_df.rename(columns={'reason': 'report_reason'}, inplace=True)
#preprocess: remove rows with null clean_tokens_secret value
holysr_df = holysr_df[holysr_df[SECRET_COL].notnull()]
holysr_df[['id_secret', 'confession', 'clean_tokens_secret', ]].head()
# Removing the offensive word by matching it to a pattern.
def preprocess_pattern(text, replace="n_word", p=r'nigger|niger|niggar|nigar'):
return " ".join([replace if re.search(p, t) else t
for t in text.lower().split()])
# these are the columns we want to process
text_columns = [
"confession",
"clean_tokens_secret",
"clean_tokens_report",
"report_reason"
]
# apply the preprocess_pattern function to
# each column that contains text
for c in text_columns:
holysr_df[c] = holysr_df[c].apply(
lambda x: x if isinstance(x, float) and np.isnan(x) else preprocess_pattern(x)
)
holysr_df[[SECRET_COL]][holysr_df[SECRET_COL].str.contains("n_word")].head()
# detecting secrets containing a specific word
pattern = r''
selector = holysr_df[SECRET_COL].str.contains(pattern)
match_df = holysr_df[selector]
# Drop duplicate secrets
match_secrets = match_df.drop_duplicates('clean_tokens_secret')
# Match secrets that were not reported
match_not_reported = match_secrets[match_secrets['id_report'].isnull()]
# Match secrets that were reported
match_reported = match_secrets[match_secrets['id_report'].notnull()]
# Select report text
report_text = match_df[match_df[REPORT_COL].notnull()]
word_cloud_options = {
'width': 1000,
'height': 1000,
'background_color': "white",
'max_words': 300,
'stopwords': STOPWORDS,
'random_state': 42
}
def create_word_cloud(text_iterable, image_color_fp=None,
title='', **kwargs):
confesh_coloring = imread(image_color_fp)
# generating the word cloud plot
kwargs.update({'mask': confesh_coloring})
wc = WordCloud(**kwargs)
text = " ".join(text_iterable)
wc.generate(text)
# prettifying the plot
image_colors = ImageColorGenerator(confesh_coloring)
plt.figure(figsize=(12,12))
plt.title(title)
plt.imshow(wc.recolor(color_func=image_colors))
plt.axis("off")
plt.show();
logo_fp = '../assets/logo-purple.png'
create_word_cloud(match_secrets[SECRET_COL].astype(str),
logo_fp, **word_cloud_options);
# Defining functions to compute ngram frequency
def word_counter(text, n=1, length_thres=50):
t = text.split()
t = [tk for tk in t if len(tk) < length_thres]
for i in range(n):
t_ngrams = [" ".join(b) for b in list(ngrams(t, i + 1))]
t.extend(t_ngrams)
return Counter(t)
def word_aggregater(corpus_list, n=1):
c = Counter()
for doc in corpus_list:
c.update(word_counter(doc, n=n))
return c
def count_token_frequency(token_series, filter_thres, **kwargs):
freq_df = pd.DataFrame(word_aggregater(token_series, **kwargs).items())
freq_df.rename(columns={0: 'word', 1: 'frequency'}, inplace=True)
freq_df = freq_df[freq_df['frequency'] > filter_thres] \
.sort_values('frequency', ascending=False)
freq_df['ngrams'] = freq_df['word'].apply(lambda x: len(x.split()))
return freq_df.reset_index(drop=True)
# create frequency count dataframes
secrets_corpus = count_token_frequency(match_secrets['clean_tokens_secret'], 0, n=3)
secrets_not_reported_corpus = count_token_frequency(match_not_reported['clean_tokens_secret'], 0, n=3)
secrets_reported_corpus = count_token_frequency(match_reported['clean_tokens_secret'], 0, n=3)
report_text_corpus = count_token_frequency(report_text['clean_tokens_secret'], 0, n=3)
# merge frequencies for all secrets, reported, and not reported
merge_cols = ['word', 'frequency']
all_corpus = secrets_corpus.merge(secrets_not_reported_corpus[merge_cols], on="word",
how="left", suffixes=("_all", "_not_reported"))
all_corpus = all_corpus.merge(secrets_reported_corpus[merge_cols], on="word", how="left")
all_corpus = all_corpus.rename(columns={'frequency': 'frequency_reported'})
all_corpus.head()
secret_sum = all_corpus[['frequency_not_reported', 'frequency_reported']].sum(axis=1)
not_equal = all_corpus[~(secret_sum == all_corpus['frequency_all'])]
print "We should expect this to be zero!: %d" % not_equal.shape[0]
# creating custom annotations for the plot
# when you hover over a specific bar on the plot,
# you should be able to see the top 4 posts
# containing that word, sorted by number of comments
def format_text_annotation(text_list, n=35):
text_list = [t.decode('utf-8').encode('ascii', 'ignore') for t in text_list]
text_list = [" ".join(t.split()) for t in text_list]
text_list = "<br>".join([t if len(t) < n else t[:n] + "..." for t in text_list])
return text_list
def token_top_secrets(token, comment_col='comments', n=5):
secrets = holysr_df[holysr_df['id_report'].isnull()].copy()
top_secrets = secrets[secrets[SECRET_COL].str.contains(token)] \
.sort_values(comment_col, ascending=False)['confession']
top_secrets = top_secrets.drop_duplicates().tolist()
if len(top_secrets) < n:
n = len(top_secrets)
return format_text_annotation(top_secrets[:n])
def token_reports_text(token, comment_col='comments', n=5):
top_reports = report_text[report_text[SECRET_COL].str.contains(token)] \
.sort_values(comment_col, ascending=False)['confession']
top_reports = top_reports.drop_duplicates().tolist()
if len(top_reports) < n:
n = len(top_reports)
return format_text_annotation(top_reports[:n])
# filter all_corpus to pick top n tokens for each ngram
n = 20
all_corpus = pd.concat([
all_corpus[all_corpus['ngrams'] == 1].sort_values('frequency_all', ascending=False)[:n],
all_corpus[all_corpus['ngrams'] == 2].sort_values('frequency_all', ascending=False)[:n],
all_corpus[all_corpus['ngrams'] == 3].sort_values('frequency_all', ascending=False)[:n]
])
all_corpus['top_secrets'] = all_corpus['word'].apply(token_top_secrets)
all_corpus['top_reports'] = all_corpus['word'].apply(token_reports_text)
all_corpus[['word', 'top_secrets', 'top_reports']].head()
def create_bar_trace(dataframe, graph_obj, x_col, y_col, text_col, **go_kwargs):
return graph_obj(
y=dataframe[x_col],
x=dataframe[y_col],
text=dataframe[text_col],
**go_kwargs)
def create_word_freq_subplot(dataframe, ngrams=1, colorlist=[]):
dataframe = dataframe[dataframe['ngrams'] == ngrams].copy()
dataframe.sort_values('frequency_all', inplace=True, ascending=False)
dataframe.fillna(0, inplace=True)
if ngrams == 1:
gram_text = "Unigrams"
if ngrams == 2:
gram_text = "Bigrams"
if ngrams == 3:
gram_text = "Trigrams"
trace1 = create_bar_trace(dataframe, go.Bar, 'frequency_not_reported', 'word',
'top_secrets',name='<b>%s Not Reported</b>' % gram_text,
marker={'color': colorlist[0]})
trace2 = create_bar_trace(dataframe, go.Bar,'frequency_reported', 'word',
'top_reports', name='<b>%s Reported</b>' % gram_text,
marker={'color': colorlist[1]})
data = [trace1, trace2]
return data
def add_subplot_fig(fig, row, col, traces):
for t in traces:
fig.append_trace(t, row, col)
return fig
subplot1 = create_word_freq_subplot(all_corpus, ngrams=1, colorlist=['#bc94d3', '#8551a3'])
subplot2 = create_word_freq_subplot(all_corpus, ngrams=2, colorlist=['#82ddbc', '#459b7c'])
subplot3 = create_word_freq_subplot(all_corpus, ngrams=3, colorlist=['#f2d37d', '#c9a654'])
fig = tools.make_subplots(rows=3, cols=1,
subplot_titles=('Unigrams', 'Bigrams', 'Trigrams'),
vertical_spacing = 0.14);
add_subplot_fig(fig, 1, 1, subplot1)
add_subplot_fig(fig, 2, 1, subplot2)
add_subplot_fig(fig, 3, 1, subplot3)
title = 'Frequency of Words/Phrases in Mount Holyoke Confessions'
xaxis_domain = fig['layout']['xaxis1']['domain']
fig['layout'].update(
{
'title': title,
'titlefont': {'size': 16},
'height': 1200,
'width': 750,
'barmode': 'stack',
'margin': {'l': 100, 'r': 100, 'b': 155, 't': 100, 'pad': 10},
'xaxis1': {
'tickangle': -45
},
'xaxis2': {
'tickangle': -45
},
'xaxis3': {
'tickangle': -45
},
'legend': {
'traceorder': 'normal'
}
}
)
url = py.iplot(fig, filename="confesh-exploration")
HTML("{::nomarkdown}" + url.embed_code + "{:/nomarkdown}")
report_text[['report_reason']].drop_duplicates().head(10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Data
Step2: The confession column is the original raw text, and the clean_tokens_secret is the result of some preprocessing that I did. For this initial preprocessing step, I did the following
Step3: Let's find a pattern... not!
Step4: Makin' a Word Cloud, 'cause we can...
Step5: As you can see, the n-word is one of the most frequently used words in the dataset, along with a smattering of expletives and some other pretty mundane verbiage. I don't know about you, but when I first saw this word cloud I thought to myself
Step6: Just looking at the first 5 rows in the ngram frequency table, we can pose an interesting hypothesis
Step7: Sanity check passed!
Step8: Compare and Contrast
Step9: Takeaways
|
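The `word_counter` helper above leans on `nltk.util.ngrams`. The same n-gram extraction can be sketched with `zip` alone, stdlib only:

```python
from collections import Counter

def ngram_counts(text, n):
    """Count n-grams of whitespace tokens -- equivalent in spirit to
    Counter(" ".join(g) for g in nltk.util.ngrams(tokens, n))."""
    tokens = text.lower().split()
    grams = zip(*(tokens[i:] for i in range(n)))
    return Counter(" ".join(g) for g in grams)

bigrams = ngram_counts("the cat sat on the cat", 2)
```

Counting unigrams, bigrams, and trigrams this way, then comparing reported vs. not-reported subsets, is the core of the frequency analysis in this record.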
1,860
|
<ASSISTANT_TASK:>
Python Code:
import numpy
import numpy as np
m=6
n=4
k=5
a = np.array(range(11,41)).reshape((k,m)).T
print(a)
b = np.array(range(11,31)).reshape((n,k)).T
print(b)
c = np.array(range(11,35)).reshape((n,m)).T
print(c)
np.matmul(a,b)
np.diag(range(11,15))
np.ones(m*n)
# bias_broadcasted
np.matmul( np.ones(m*n).reshape((m,n)) , np.diag(range(11,15)) )
# a*b + bias_broadcasted
np.matmul(a,b) + np.matmul( np.ones(m*n).reshape((m,n)) , np.diag(range(11,15)) )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: cf. Matrix computations
|
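The ones-matrix-times-diagonal trick above just replicates the bias row across every row of the product. The same thing in pure Python, to make the broadcast explicit — in numpy, `np.matmul(a, b) + bias_row` with plain broadcasting gives the identical result:

```python
def matmul(A, B):
    """Plain (m x k) @ (k x n) product on nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def add_bias_row(M, bias):
    """Add the same bias vector to every row -- what ones @ diag(bias) builds."""
    return [[x + b for x, b in zip(row, bias)] for row in M]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
bias = [10, 20]

AB = matmul(A, B)
out = add_bias_row(AB, bias)
```

This is the affine layer `XW + b` written out by hand: every output row gets the same bias vector.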
1,861
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
#commands that start with "%" are called "magic commands" and are used in Jupyter
%config InlineBackend.figure_format = 'retina'
import numpy as np #is a library that helps to manage arrays www.numpy.org/
import pandas as pd #a library to analyze and show data. http://pandas.pydata.org/pandas-docs/stable/10min.html
import matplotlib.pyplot as plt #a library so we can create graphs easily.
data_path = 'Bike-Sharing-Dataset/hour.csv' #this is the data in a csv file (just data separated with commas)
rides = pd.read_csv(data_path) #here we open the data and name it "rides" instead
rides.head() #we ask the computer to show a little bit of the initial data to check it out
rides[:24*10].plot(x='dteday', y='cnt')
#we slice the data from the beginning to row 24*10 to make a graph, labeling X and Y
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] #we create a list
for each in dummy_fields: #then we go through each element in that list
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) #we create a variable called "dummies" to change these columns into 0s and 1s. here is a video about how to use pd.get_dummies to create these variables https://www.youtube.com/watch?v=0s_1IsROgDc
rides = pd.concat([rides, dummies], axis=1) #then we reassign "rides" with the new columns added. you have to concat (add) the columns (axis=1) because versions of pandas below 0.15.0 can't handle the whole DataFrame at once; newer versions can. for more info: https://stackoverflow.com/questions/24109779/running-get-dummies-on-several-dataframe-columns
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr'] #we create a list of fields we want to drop
data = rides.drop(fields_to_drop, axis=1) # we drop the columns in this list
data.head() #we ask for an example of the initial info of the dataframe
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
( self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
# sigmoid,Activation function is the sigmoid function
self.activation_function = (lambda x: 1/(1 + np.exp(-x)))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T # shape [feature_dimension, 1]
targets = np.array(targets_list, ndmin=2).T
# Forward pass
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# y = x
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
### Backward pass###
# Output layer error is the difference between desired target and actual output.
output_errors = (targets - final_outputs)  # use the 2-D targets array prepared above
# Backpropagated error
# errors propagated to the hidden layer
hidden_errors = np.dot(output_errors, self.weights_hidden_to_output)*(hidden_outputs*(1-hidden_outputs)).T
# Update the weights
# update hidden-to-output weights with gradient descent step
self.weights_hidden_to_output += output_errors * hidden_outputs.T * self.lr
# update input-to-hidden weights with gradient descent step
self.weights_input_to_hidden += (inputs * hidden_errors * self.lr).T
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
import sys
### Set the hyperparameters here ###
epochs = 100
learning_rate = 0.1
hidden_nodes = 20
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.loc[batch].values,
train_targets.loc[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Step8: Training the network
Step9: Check out your predictions
Step10: Thinking about your results
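Step 6's temporal split (train on history, validate and test on the most recent windows) can be sketched with plain slicing, assuming hourly records:

```python
import numpy as np

data = np.arange(100 * 24)        # stand-in for 100 days of hourly records
test = data[-21 * 24:]            # hold out the last 21 days for testing
rest = data[:-21 * 24]
train = rest[:-60 * 24]           # everything before the validation window
val = rest[-60 * 24:]             # last 60 remaining days for validation
assert len(test) == 21 * 24 and len(val) == 60 * 24 and len(train) == 19 * 24
```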
|
1,862
|
<ASSISTANT_TASK:>
Python Code:
#@title
# Copyright 2020 Google LLC.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install -q -U trax
import trax # Our Main Library
from trax import layers as tl
import os # For os dependent functionalities
import numpy as np # For scientific computing
import pandas as pd # For basic data analysis
import random as rnd # For using random functions
data = pd.read_csv("/kaggle/input/entity-annotated-corpus/ner_dataset.csv",encoding = 'ISO-8859-1')
data = data.fillna(method = 'ffill')
data.head()
## Extract the 'Word' column from the dataframe
words = data.loc[:, "Word"]
## Convert into a text file using the .savetxt() function
np.savetxt(r'words.txt', words.values, fmt="%s")
vocab = {}
with open('words.txt') as f:
for i, l in enumerate(f.read().splitlines()):
vocab[l] = i
print("Number of words:", len(vocab))
vocab['<PAD>'] = len(vocab)
class Get_sentence(object):
def __init__(self,data):
self.n_sent=1
self.data = data
agg_func = lambda s:[(w,p,t) for w,p,t in zip(s["Word"].values.tolist(),
s["POS"].values.tolist(),
s["Tag"].values.tolist())]
self.grouped = self.data.groupby("Sentence #").apply(agg_func)
self.sentences = [s for s in self.grouped]
getter = Get_sentence(data)
sentence = getter.sentences
words = list(set(data["Word"].values))
words_tag = list(set(data["Tag"].values))
word_idx = {w : i+1 for i ,w in enumerate(words)}
tag_idx = {t : i for i ,t in enumerate(words_tag)}
X = [[word_idx[w[0]] for w in s] for s in sentence]
y = [[tag_idx[w[2]] for w in s] for s in sentence]
def data_generator(batch_size, x, y,pad, shuffle=False, verbose=False):
num_lines = len(x)
lines_index = [*range(num_lines)]
if shuffle:
rnd.shuffle(lines_index)
index = 0
while True:
buffer_x = [0] * batch_size
buffer_y = [0] * batch_size
max_len = 0
for i in range(batch_size):
if index >= num_lines:
index = 0
if shuffle:
rnd.shuffle(lines_index)
buffer_x[i] = x[lines_index[index]]
buffer_y[i] = y[lines_index[index]]
lenx = len(x[lines_index[index]])
if lenx > max_len:
max_len = lenx
index += 1
X = np.full((batch_size, max_len), pad)
Y = np.full((batch_size, max_len), pad)
for i in range(batch_size):
x_i = buffer_x[i]
y_i = buffer_y[i]
for j in range(len(x_i)):
X[i, j] = x_i[j]
Y[i, j] = y_i[j]
if verbose: print("index=", index)
yield((X,Y))
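The padding step inside data_generator can be illustrated on its own: each sequence in a batch is written into an array pre-filled with the pad id, so shorter sequences end in padding (a minimal sketch with made-up token ids):

```python
import numpy as np

pad = 0
batch = [[5, 2, 7], [3, 1]]               # hypothetical token-id sequences
max_len = max(len(seq) for seq in batch)
X = np.full((len(batch), max_len), pad)   # pre-fill with the pad id
for i, seq in enumerate(batch):
    X[i, :len(seq)] = seq
print(X.tolist())  # [[5, 2, 7], [3, 1, 0]]
```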
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(X,y,test_size = 0.1,random_state=1)
def NERmodel(tags, vocab_size=35181, d_model = 50):
model = tl.Serial(
# tl.Embedding(vocab_size, d_model),
trax.models.reformer.Reformer(vocab_size, d_model, ff_activation=tl.LogSoftmax),
tl.Dense(tags),
tl.LogSoftmax()
)
return model
model = NERmodel(tags = 17)
print(model)
from trax.supervised import training
rnd.seed(33)
batch_size = 64
train_generator = trax.data.inputs.add_loss_weights(
data_generator(batch_size, x_train, y_train,vocab['<PAD>'], True),
id_to_mask=vocab['<PAD>'])
eval_generator = trax.data.inputs.add_loss_weights(
data_generator(batch_size, x_test, y_test,vocab['<PAD>'] ,True),
id_to_mask=vocab['<PAD>'])
def train_model(model, train_generator, eval_generator, train_steps=1, output_dir='model'):
train_task = training.TrainTask(
train_generator,
loss_layer = tl.CrossEntropyLoss(),
optimizer = trax.optimizers.Adam(0.01),
n_steps_per_checkpoint=10
)
eval_task = training.EvalTask(
labeled_data = eval_generator,
metrics = [tl.CrossEntropyLoss(), tl.Accuracy()],
n_eval_batches = 10
)
training_loop = training.Loop(
model,
train_task,
eval_tasks = eval_task,
output_dir = output_dir)
training_loop.run(n_steps = train_steps)
return training_loop
train_steps = 100
training_loop = train_model(model, train_generator, eval_generator, train_steps)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Author - @SauravMaheshkar
Step2: Introduction
Step3: Pre-Processing
Step4: Creating a Vocabulary File
Step5: Creating a Dictionary for Vocabulary
Step6: Extracting Sentences from the Dataset
Step7: Making a Batch Generator
Step8: Splitting into Test and Train
Step9: Building the Model
Step10: Train the Model
|
1,863
|
<ASSISTANT_TASK:>
Python Code:
np.random.seed(8675309)
sim_pa = np.arange(20,175)
sim_c = 890*np.sin(np.deg2rad(sim_pa))**0.8
# at each point draw a poisson variable with that mean
sim_c_n = np.asarray([np.random.poisson(v) for v in sim_c ])
prob=0.1
sim_c_n2 = np.asarray([np.random.negative_binomial((v*prob)/(1-prob), prob) for v in sim_c ])
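The negative-binomial draw above uses NumPy's (n, p) parameterization, whose mean is n*(1-p)/p, so setting n = mu*p/(1-p) yields mean mu; a quick sanity check with an arbitrary mean (a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, prob = 500.0, 0.1
draws = rng.negative_binomial((mu * prob) / (1 - prob), prob, size=200_000)
# the sample mean should land close to the requested mean mu
assert abs(draws.mean() - mu) / mu < 0.02
```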
# Two subplots, the axes array is 1-d
f, axarr = plt.subplots(3, sharex=True, sharey=True, figsize=(7,9))
axarr[0].plot(sim_pa, sim_c, lw=2)
axarr[0].set_ylabel('Truth')
axarr[0].set_xlim((0,180))
axarr[0].set_yscale('log')
axarr[1].plot(sim_pa, sim_c_n, lw=2)
axarr[1].set_ylabel('Measured\nPoisson')
axarr[1].set_xlim((0,180))
axarr[1].set_yscale('log')
axarr[2].plot(sim_pa, sim_c_n2, lw=2)
axarr[2].set_ylabel('Measured\nNegBin')
axarr[2].set_xlim((0,180))
axarr[2].set_yscale('log')
axarr[2].set_ylim(axarr[1].get_ylim())
# generate some data
with pm.Model() as model:
bkg = pm.NegativeBinomial('bkg', mu=pm.Uniform('m_bkg', 0, 1e5,shape=len(sim_pa), testval=1e3),
alpha=0.1,
observed=sim_c_n2, shape=len(sim_pa))
# truth_mc = pm.Uniform('truth', 0, 100, shape=dat_len)
# noisemean_mc = pm.Uniform('noisemean', 0, 100)
# noise_mc = pm.Poisson('noise', noisemean_mc, observed=obs[1:20])
# real_n_mc = pm.Poisson('real_n', truth_mc+noisemean_mc, shape=dat_len)
# psf = pm.Uniform('psf', 0, 5, observed=det)
# obs_mc = pm.Normal('obs', (truth_mc+noisemean_mc)*psf.max(), 1/5**2, observed=obs, shape=dat_len)
trace = pm.sample(50000)
pm.traceplot(trace)
pm.summary(trace)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This will then serve as the background
|
1,864
|
<ASSISTANT_TASK:>
Python Code:
df = pd.read_excel('Data.xlsx', sheetname=None)
df['Run 1']
keys = ['Run 1', 'Run 2', 'Run 3', 'Run 4']
# Fall time in seconds when V = 0
tf = np.array([df[key]['t_f (s)'] for key in keys])
tf
# Average Fall times for each run (seconds)
avg_tf = np.array([np.mean(tf[i]) for i in np.arange(4)])
avg_tf
# Standard Deviation Fall times for each run (seconds)
std_tf = np.array([np.std(tf[i]) for i in np.arange(4)])
std_tf
tf = np.array([ma.masked_outside(entry, np.mean(entry) - 2.5*np.std(entry), np.mean(entry) + 2.5*np.std(entry))
for entry in tf])
# Standard deviation of the mean (seconds)
tf_u = np.array([std_tf[i]/np.sqrt(len(tf[i])) for i in np.arange(4)])
tf_u
# Units of m/s
speeds = np.array([(.5*1e-3)/entry for entry in avg_tf ])
speeds
#Units of m/s
speeds_u = np.array([((.5*1e-3)/avg_tf[i]**2)*tf_u[i] for i in np.arange(4)])
speeds_u
d = 7.63*1e-3 #Electrode spacer thickness in meters
V = 301.8 #Volts. Constant through all measurements
E = V/d #V/m
E
Resistances = np.array([2.065, 2.011, 1.957, 1.959])*1e6 #From thermistor. Used to find temperature. Ohms
temperatures = np.array([24, 25, 26, 26]) #From Thermistor table. Celcius
rho = 885 #kg/m^3. Density of oil drop.
g = 9.80 #gravity
def viscosity(T):
#Returns the viscosity of air for a given temp in celsius
# Viscosity units [Ns/m^2]
return (1.8 + (T-15)/209)*1e-5
viscosities = np.array([viscosity(entry) for entry in temperatures])
viscosities
def radius(speed,visc):
return np.sqrt(9*visc*speed/(2*g*rho))
radii = np.array([radius(speeds[i],viscosities[i]) for i in np.arange(4)]) #Blob radius in meters
radii
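The Stokes-law radius r = sqrt(9*eta*v / (2*g*rho)) computed above should land in the sub-micron range for oil drops; a rough order-of-magnitude check using assumed values for the viscosity and fall speed:

```python
import numpy as np

g, rho = 9.80, 885.0      # gravity (m/s^2) and oil density (kg/m^3), as above
eta = 1.8e-5              # Pa*s, air viscosity near room temperature (assumed)
v = 5e-5                  # m/s, a typical terminal fall speed (assumed)
r = np.sqrt(9 * eta * v / (2 * g * rho))
assert 1e-8 < r < 1e-6    # sub-micron droplet radius
```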
def mass(r):
return (4/3)*np.pi*rho*r**3
masses = np.array([mass(entry) for entry in radii])
masses
masses_u = np.array([np.pi*(radii[i]**2)*np.sqrt(18*viscosities[i]*rho/g)*np.sqrt(1/speeds[i])*speeds_u[i] for i in np.arange(4)])
masses_u
def gamma(r):
b = .082*1e-7
return (1+b/r)**(-3/2)
gammas = np.array([gamma(entry) for entry in radii])
gammas
# Rise time in seconds when V = +
tu = np.array([df[key]['t_u (s)'] for key in keys])
tu
#### tu0: Time Up 0'th drop.
#### Average Time
#### Std of Times
#### Resulting Speed
#### Speed Uncertainty
tu0 = ma.masked_array(tu[0],mask=[0,0,0,0,0,0,0,0,0,1])
avg_tu0 = np.mean(tu0)
std_tu0 = np.std(tu0)
tu0_speed = .5*1e-3/avg_tu0
tu0_speed_u = .5*1e-3/(avg_tu0**2)*std_tu0/np.sqrt(len(tu0))
tu1 = ma.masked_array(tu[1],mask=[0,0,1,1,0,0,0,0,0,0])
avg_tu1 = np.mean(tu1)
std_tu1 = np.std(tu1)
tu1_speed = .5*1e-3/avg_tu1
tu1_speed_u = .5*1e-3/(avg_tu1**2)*std_tu1/np.sqrt(len(tu1))
tu2_1 = ma.masked_array(tu[2],mask=[0,1,1,0,1,0,0,0,0,0])
avg_tu2_1 = np.mean(tu2_1)
std_tu2_1 = np.std(tu2_1)
tu2_1_speed = .5*1e-3/avg_tu2_1
tu2_1_speed_u = .5*1e-3/(avg_tu2_1**2)*std_tu2_1/np.sqrt(len(tu2_1))
tu2_2 = ma.masked_array(tu[2],mask=[1,0,0,1,0,1,1,1,1,1])
avg_tu2_2 = np.mean(tu2_2)
std_tu2_2 = np.std(tu2_2)
tu2_2_speed = .5*1e-3/avg_tu2_2
tu2_2_speed_u = .5*1e-3/(avg_tu2_2**2)*std_tu2_2/np.sqrt(len(tu2_2))
tu3 = tu[3]
avg_tu3 = np.mean(tu3)
std_tu3 = np.std(tu3)
tu3_speed = .5*1e-3/avg_tu3
tu3_speed_u = .5*1e-3/(avg_tu3**2)*std_tu3/np.sqrt(len(tu3))
#This data can't be used unless the free fall time data is analyzed
tu4 = np.array([4.20, 3.51, 3.44, 3.21, 3.56, 3.27, 3.25]) #This data from an extra column not entered into excel
avg_tu4 = np.mean(tu4)
std_tu4 = np.std(tu4)
tu4_speed = .5*1e-3/avg_tu4
tu4_speed_u = .5*1e-3/(avg_tu4**2)*std_tu4/np.sqrt(len(tu4))
avg_tu0
std_tu0/np.sqrt(len(tu0))
def up_charge(vu,vf,gamma,mass):
return gamma*mass*g*((vu/vf) + 1)/E
qu0 = up_charge(tu0_speed, speeds[0], gammas[0], masses[0])
qu1 = up_charge(tu1_speed, speeds[1], gammas[1], masses[1])
qu2_1 = up_charge(tu2_1_speed, speeds[2], gammas[2], masses[2])
qu2_2 = up_charge(tu2_2_speed, speeds[2], gammas[2], masses[2])
qu3 = up_charge(tu3_speed, speeds[3], gammas[3], masses[3])
qu = np.array([qu0,qu1,qu2_1,qu2_2,qu3])
qu*1e19
def delta_qu(qu,dm, m, dvu, vu, dvf, vf):
return np.sqrt((dm/m)**2 + (dvu/(vu+vf))**2 + ((vu/vf)*dvf/(vf+vu))**2)*qu
qu0_u = delta_qu(qu0,masses_u[0],masses[0],tu0_speed_u, tu0_speed, speeds_u[0], speeds[0])
qu1_u = delta_qu(qu1,masses_u[1],masses[1],tu1_speed_u, tu1_speed, speeds_u[1], speeds[1])
qu2_1_u = delta_qu(qu2_1,masses_u[2],masses[2],tu2_1_speed_u, tu2_1_speed, speeds_u[2], speeds[2])
qu2_2_u = delta_qu(qu2_2,masses_u[2],masses[2],tu2_2_speed_u, tu2_2_speed, speeds_u[2], speeds[2])
qu3_u = delta_qu(qu3,masses_u[3],masses[3],tu3_speed_u, tu3_speed, speeds_u[3], speeds[3])
qu_u = np.array([qu0_u,qu1_u,qu2_1_u,qu2_2_u,qu3_u])
qu_u*1e19
df['Run 1']
# Fall time in seconds when V = -
td = np.array([df[key]['t_d (s)'] for key in keys])
td
#### tu0: Time Up 0'th drop.
#### Average Time
#### Std of Times
#### Resulting Speed
#### Speed Uncertainty
td0 = td[0] #All values here are about the same
avg_td0 = np.mean(td0)
std_td0 = np.std(td0)
td0_speed = .5*1e-3/avg_td0
td0_speed_u = .5*1e-3/(avg_td0**2)*std_td0/np.sqrt(len(td0))
td1 = np.array([td[1]]) #All values here are about the same
avg_td1 = np.mean(td1)
std_td1 = np.std(td1)
td1_speed = .5*1e-3/avg_td1
td1_speed_u = .5*1e-3/(avg_td1**2)*std_td1/np.sqrt(len(td1))
td2 = np.array([td[2]]) #All values here are about the same
avg_td2 = np.mean(td2)
std_td2 = np.std(td2)
td2_speed = .5*1e-3/avg_td2
td2_speed_u = .5*1e-3/(avg_td2**2)*std_td2/np.sqrt(len(td2))
td3 = td[3] #All values here are about the same
avg_td3 = np.mean(td3)
std_td3 = np.std(td3)
td3_speed = .5*1e-3/avg_td3
td3_speed_u = .5*1e-3/(avg_td3**2)*std_td3/np.sqrt(len(td3))
avg_td0
std_td0/np.sqrt(len(td0))
def down_charge(vd,vf,gamma,mass):
return gamma*mass*g*((vd/vf) - 1)/E
def delta_qd(qu,dm, m, dvu, vu, dvf, vf):
return np.sqrt((dm/m)**2 + (dvu/(vu-vf))**2 + ((vu/vf)*dvf/(vf-vu))**2)*qu
qd0 = down_charge(td0_speed, speeds[0], gammas[0], masses[0])
qd1 = down_charge(td1_speed, speeds[1], gammas[1], masses[1])
qd2 = down_charge(td2_speed, speeds[2], gammas[2], masses[2])
qd3 = down_charge(td3_speed, speeds[3], gammas[3], masses[3])
qd = np.array([qd0,qd1,qd2,qd3])
qd*1e19
qd0_u = delta_qd(qd0,masses_u[0],masses[0],td0_speed_u, td0_speed, speeds_u[0], speeds[0])
qd1_u = delta_qd(qd1,masses_u[1],masses[1],td1_speed_u, td1_speed, speeds_u[1], speeds[1])
qd2_u = delta_qd(qd2,masses_u[2],masses[2],td2_speed_u, td2_speed, speeds_u[2], speeds[2])
qd3_u = delta_qd(qd3,masses_u[3],masses[3],td3_speed_u, td3_speed, speeds_u[3], speeds[3])
qd_u = np.array([qd0_u,qd1_u,qd2_u,qd3_u])
qd_u*1e19
qu*1e19
#Check with known e
np.array([qu[0]/1.6, qu[1]/1.6, qu[2]/1.6, qu[3]/1.6,qu[4]/1.6])*1e19
up_e = np.array([qu[0]/5, qu[1]/7, qu[2]/4, qu[3]/3,qu[4]/5])
up_e*1e19
qu_u*1e19
up_e_u = np.array([qu_u[0]/5, qu_u[1]/7, qu_u[2]/4, qu_u[3]/3,qu_u[4]/5])
up_e_u*1e19
qd*1e19
#Check with known e
np.array([qd[0]/1.6, qd[1]/1.6, qd[2]/1.6, qd[3]/1.6])*1e19
down_e = np.array([qd[0]/5, qd[1]/7, qd[2]/5, qd[3]/5])
down_e*1e19
qd_u*1e19
down_e_u = np.array([qd_u[0]/5, qd_u[1]/7, qd_u[2]/5, qd_u[3]/5])
down_e_u*1e19
Charges = np.hstack((up_e,down_e))
Charges*1e19
Charge_Best = np.mean(Charges)
Charge_Best*1e19
Charges_u = np.hstack((up_e_u,down_e_u))
Charges_u*1e19
delta_e = np.std(Charges)/np.sqrt(len(Charges))
#delta_e_meas = np.sqrt(np.sum(np.array([(Charges_u[i]/Charges[i])**2 for i in np.arange(len(Charges))])))*Charge_Best
#delta_e = np.sqrt(delta_e_ran**2 + delta_e_meas**2)
#delta_e = delta_e_meas
delta_e*1e19
print('e = (%.3f +/- %.3f)x10^(-19) C' % (Charge_Best*1e19,delta_e*1e19))
# From a previous lab
eperm = 1.648*1e11 #C/kg
eperm_u = .287*1e11
m = Charge_Best/eperm
m
dm = np.sqrt( (delta_e/Charge_Best)**2 + (eperm_u/eperm)**2 )*m
dm
print('m = (%.3f +/- %.3f)x10^(-31) Kg' % (m*1e31,dm*1e31))
eo = 8.854*1e-12
E_r = 13.77 #eV
E_r = E_r*1.602*1e-19
dE_r = .01 #Ev
h = np.sqrt(m * Charge_Best**4/(8*eo**2 * E_r))
h
dh = np.sqrt((dm/m)**2 + (.01/13.77)**2 + (delta_e/Charge_Best)**2)*h
dh
print('h = (%.3f +/- %.3f)x10^(-34) Js' % (h*1e34,dh*1e34))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Mask points that lie outside 2.5$\sigma$
Step2: Uncertainty
Step3: Fall Speed
Step4: \begin{equation}
Step5: E Field
Step6: Air Viscosity
Step7: Drop Radius
Step8: Droplet Mass
Step9: \begin{equation}
Step10: Correction Factor
Step11: Upward Movement
Step12: \begin{equation}
Step13: Drop Charge
Step14: Drop Charge
Step15: Search for Charge Unit
Step16: Upwards Uncer
Step17: Down Charge
Step18: Down Uncer
Step19: Determine Electron Charge
Step20: Charge to Mass Ratio
Step21: Plank Constant
|
1,865
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
iris = pd.read_csv('../datasets/iris.csv')
# Print some info about the dataset
iris.info()
iris['Class'].unique()
iris.describe()
# Create a scatterplot for sepal length and sepal width
import matplotlib.pyplot as plt
%matplotlib inline
sl = iris['Sepal_length']
sw = iris['Sepal_width']
# Create a scatterplot of these two properties using plt.scatter()
# Assign different colors to each data point according to the class it belongs to
plt.scatter(sl[iris['Class'] == 'Iris-setosa'], sw[iris['Class'] == 'Iris-setosa'], color='red')
plt.scatter(sl[iris['Class'] == 'Iris-versicolor'], sw[iris['Class'] == 'Iris-versicolor'], color='green')
plt.scatter(sl[iris['Class'] == 'Iris-virginica'], sw[iris['Class'] == 'Iris-virginica'], color='blue')
# Specify labels for the X and Y axis
plt.xlabel('Sepal Length')
plt.ylabel('Sepal Width')
# Show graph
plt.show()
# Create a scatterplot for petal length and petal width
pl = iris['Petal_length']
pw = iris['Petal_width']
# Create a scatterplot of these two properties using plt.scatter()
# Assign different colors to each data point according to the class it belongs to
plt.scatter(pl[iris['Class'] == 'Iris-setosa'], pw[iris['Class'] == 'Iris-setosa'], color='red')
plt.scatter(pl[iris['Class'] == 'Iris-versicolor'], pw[iris['Class'] == 'Iris-versicolor'], color='green')
plt.scatter(pl[iris['Class'] == 'Iris-virginica'], pw[iris['Class'] == 'Iris-virginica'], color='blue')
# Specify labels for the X and Y axis
plt.xlabel('Petal Length')
plt.ylabel('Petal Width')
# Show graph
plt.show()
X = iris.drop('Class', axis=1)
t = iris['Class'].values
RANDOM_STATE = 4321
# Use sklean's train_test_plit() method to split our data into two sets.
from sklearn.model_selection import train_test_split
Xtr, Xts, ytr, yts = train_test_split(X, t, random_state=RANDOM_STATE)
# Use the training set to build a LogisticRegression model
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression().fit(Xtr, ytr) # Fit a logistic regression model
# Use the LogisticRegression's score() method to assess the model accuracy in the training set
lr.score(Xtr, ytr)
# Use the LogisticRegression's score() method to assess the model accuracy in the test set
lr.score(Xts, yts)
# scikit-learn provides a function called "classification_report" that summarizes the three metrics above
# for a given classification model on a dataset.
from sklearn.metrics import classification_report
# Use this function to print a classification metrics report for the trained classifier.
# See http://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html
print(classification_report(yts, lr.predict(Xts)))
from sklearn.metrics import confusion_matrix
# Use scikit-learn's confusion_matrix to understand which classes were misclassified.
# See http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html
confusion_matrix(yts, lr.predict(Xts))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Visualizing data
Step2: Classifying species
Step3: Inspecting classification results
Step4: Another useful technique to inspect the results given by a classification model is to take a look at its confusion matrix. This is an K x K matrix (where K is the number of distinct classes identified by the classifier) that gives us, in the position (i, j), how many examples belonging to class i were classified as belonging to class j.
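The (i, j) convention described above can be checked with a tiny hand-built matrix (a sketch, independent of the iris data):

```python
import numpy as np

y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0]
k = 2
cm = np.zeros((k, k), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1          # row = true class, column = predicted class
print(cm.tolist())  # [[1, 1], [1, 2]]
```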
|
1,866
|
<ASSISTANT_TASK:>
Python Code:
import py2neo
import pandas as pd
graph = py2neo.Graph()
query = "MATCH (a:Method) RETURN a"
result = graph.data(query)
result[0:3]
df = pd.DataFrame.from_dict([data['a'] for data in result]).dropna(subset=['name'])
df.head()
# filter out all the constructor "methods"
df = df[df['name'] != "<init>"]
# assumption 1: getter start with "get"
df.loc[df['name'].str.startswith("get"), "method_type"] = "Getter"
# assumption 2: "is" is just the same as a getter, just for boolean values
df.loc[df['name'].str.startswith("is"), "method_type"] = "Getter"
# assumption 3: setter start with "set"
df.loc[df['name'].str.startswith("set"), "method_type"] = "Setter"
# assumption 4: all other methods are "Business Methods"
df['method_type'] = df['method_type'].fillna('Business Methods')
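The boolean-mask labeling pattern above (df.loc[mask, col] = value, then fillna for the default) works on any string column; a small self-contained sketch with made-up method names:

```python
import pandas as pd

df = pd.DataFrame({"name": ["getId", "setId", "isOpen", "run"]})
df.loc[df["name"].str.startswith("get"), "kind"] = "Getter"
df.loc[df["name"].str.startswith("is"), "kind"] = "Getter"
df.loc[df["name"].str.startswith("set"), "kind"] = "Setter"
df["kind"] = df["kind"].fillna("Business Method")
print(df["kind"].tolist())  # ['Getter', 'Setter', 'Getter', 'Business Method']
```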
df[['name', 'signature', 'visibility', 'method_type']][20:30]
grouped_data = df.groupby('method_type').count()['name']
grouped_data
import matplotlib.pyplot as plt
# some configuration for displaying nice diagrams directly in the notebook
%matplotlib inline
plt.style.use('fivethirtyeight')
# apply additional style for getting a blank background
plt.style.use('seaborn-white')
# plot a nice business people compatible pie chart
ax = grouped_data.plot(kind='pie', figsize=(5,5), title="Business methods or just Getters or Setters?")
# get rid of the distracting label for the y-axis
ax.set_ylabel("")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 2
Step2: Step 3
Step3: Step 4
Step4: Step 5
Step5: Step 6
Step6: Step 7
|
1,867
|
<ASSISTANT_TASK:>
Python Code:
# Load the digits dataset
digits = load_digits()
X = digits.images.reshape((len(digits.images), -1))
y = digits.target
# Create the RFE object and rank each pixel
svc = SVC(kernel="linear", C=1)
rfe = RFE(estimator=svc, n_features_to_select=1, step=1)
rfe.fit(X, y)
ranking = rfe.ranking_.reshape(digits.images[0].shape)
# Plot pixel ranking
plt.matshow(ranking, cmap=plt.cm.Blues)
plt.colorbar()
plt.title("Ranking of pixels with RFE")
plt.show()
print(__doc__)
from sklearn.svm import SVC
from sklearn.datasets import load_digits
from sklearn.feature_selection import RFE
import matplotlib.pyplot as plt
# Load the digits dataset
digits = load_digits()
X = digits.images.reshape((len(digits.images), -1))
y = digits.target
# Create the RFE object and rank each pixel
svc = SVC(kernel="linear", C=1)
rfe = RFE(estimator=svc, n_features_to_select=1, step=1)
rfe.fit(X, y)
ranking = rfe.ranking_.reshape(digits.images[0].shape)
# Plot pixel ranking
plt.matshow(ranking, cmap=plt.cm.Blues)
plt.colorbar()
plt.title("Ranking of pixels with RFE")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The digits dataset consists of 8*8-resolution images of handwritten digits, 1797 samples in total. By default it covers the ten digit classes 0~9; n_class can also be set to choose how many digit classes to load.
Step2: The ranking_ attribute shows the weight ranking of the input features, while estimator_ gives the state of the trained classifier. Notably, when a linear kernel is used for classification, the coef_ attribute under estimator_ is the feature weight matrix. Its size depends on n_features_to_select and on the number of classes in the data: in this example, classifying ten digits while selecting a single feature yields a 45*1 coefficient matrix, where 45 comes from the number of pairwise decision functions the classes require (proportional to the third diagonal of Pascal's triangle).
Step3: (4) Source code
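The 45 rows of coef_ mentioned above come from the one-vs-one scheme: a k-class linear SVC trains k*(k-1)/2 pairwise classifiers, so ten digit classes give 45:

```python
# one-vs-one pairwise classifiers for k classes: k*(k-1)/2
k = 10
n_pairs = k * (k - 1) // 2
print(n_pairs)  # 45
```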
|
1,868
|
<ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.2,<2.3"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
times = np.linspace(0,1,51)
b.add_dataset('lc', compute_times=times, dataset='lc01')
b.add_dataset('orb', compute_times=times, dataset='orb01')
b.add_dataset('mesh', compute_times=times, dataset='mesh01', columns=['teffs'])
b.run_compute(irrad_method='none')
afig, mplanim = b.plot(y={'orb': 'ws'},
animate=True, save='animations_1.gif', save_kwargs={'writer': 'imagemagick'})
afig, mplanim = b.plot(y={'orb': 'ws'},
times=times[:-1:2], animate=True, save='animations_2.gif', save_kwargs={'writer': 'imagemagick'})
afig, mplanim = b['lc01@model'].plot(times=times[:-1], uncover=True,\
c='r', linestyle=':',\
highlight_marker='s', highlight_color='g',
animate=True, save='animations_3.gif', save_kwargs={'writer': 'imagemagick'})
afig, mplanim = b['mesh01@model'].plot(times=times[:-1], fc='teffs', ec='None',
animate=True, save='animations_4.gif', save_kwargs={'writer': 'imagemagick'})
afig, mplanim = b['lc01@model'].plot(times=times[:-1], uncover=True, xlim='frame',
animate=True, save='animations_5.gif', save_kwargs={'writer': 'imagemagick'})
afig, mplanim = b['orb01@model'].plot(times=times[:-1], projection='3d', azim=[0, 360], elev=[-20,20],
animate=True, save='animations_6.gif', save_kwargs={'writer': 'imagemagick'})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Default Animations
Step3: Note that like the rest of the examples below, this is simply the animated version of the exact same call to plot
Step4: Plotting Options
Step5:
Step6: Disabling Fixed Limits
Step7: 3D axes
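For reference, PHOEBE's `animate=True` / `save=...` options sit on top of matplotlib's animation machinery; a minimal standalone sketch of the same uncover-and-save pattern (assumes matplotlib with Pillow is available; the light-curve data here are synthetic, not from the bundle):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display required
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, PillowWriter
import numpy as np

times = np.linspace(0, 1, 21)
fig, ax = plt.subplots()
line, = ax.plot([], [], "r:")
ax.set_xlim(0, 1)
ax.set_ylim(-1.1, 1.1)

def update(i):
    # uncover the curve up to frame i, analogous to uncover=True above
    line.set_data(times[:i + 1], np.sin(2 * np.pi * times[:i + 1]))
    return (line,)

anim = FuncAnimation(fig, update, frames=len(times))
anim.save("animations_sketch.gif", writer=PillowWriter(fps=10))
```

Swapping `PillowWriter` for the `imagemagick` writer reproduces the `save_kwargs={'writer': 'imagemagick'}` calls used above.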
|
1,869
|
<ASSISTANT_TASK:>
Python Code:
import os
from datetime import datetime
import numpy as np
import pandas as pd
import bokeh.charts as bk
import bokeh.plotting as bk_plt
import bokeh.models as bk_md
bk.output_notebook()
import urllib.request
# Download the file from `url` and save it locally under `file_name`:
zip_url = "https://github.com/dataventureutc/Kaggle-HandsOnLab/blob/master/data/data.zip?raw=true"
urllib.request.urlretrieve(zip_url, "data.zip")
import zipfile
zip_ref = zipfile.ZipFile("data.zip", 'r')
zip_ref.extractall("data/")
zip_ref.close()
df_sample_submission = pd.read_csv("data/sample_submission_NDF.csv")
df_sample_submission.head(n=5) # Only display a few lines and not the whole dataframe
df_test_users = pd.read_csv("data/test_users.csv")
df_test_users.head(n=5) # Only display a few lines and not the whole dataframe
df_countries = pd.read_csv("data/countries.csv")
df_countries.head(n=5) # Only display a few lines and not the whole dataframe
df_age_gender_bkts = pd.read_csv("data/age_gender_bkts.csv")
df_age_gender_bkts.head(n=5) # Only display a few lines and not the whole dataframe
tmp_sorted = df_age_gender_bkts.sort_values(by=['age_bucket', 'gender'])
tmp_sorted.head(n=12) # Only display a few lines and not the whole dataframe
tmp_sorted.year.unique() # We query the distinct values over the last column "year"
df_train_users = pd.read_csv("data/train_users_2.csv")
df_train_users.head(n=5) # Only display a few lines and not the whole dataframe
df_sessions = pd.read_csv("data/sessions.csv")
print("There are " + str(df_sessions.shape[0])+ " rows in the dataset")
df_sessions.head(n=5) # Only display a few lines and not the whole dataframe
training_min_date = df_train_users.date_account_created.min()
training_max_date = df_train_users.date_account_created.max()
print ("Training Date Range : [" + training_min_date +", " + training_max_date +"]")
# ===============
testing_min_date = df_test_users.date_account_created.min()
testing_max_date = df_test_users.date_account_created.max()
print ("Testing Date Range : [" + testing_min_date +", " + testing_max_date +"]")
df_train_users.transpose().ix[:,:5]
# How many rows in the DataFrame
row_count = len(df_train_users.index)
print ("Row Count = " + str(row_count))
print("\n")
# ===============================
# Print what's the size of the ID
field_length = df_train_users.id.astype(str).map(len)
id_maxlength = field_length.max()
id_minlength = field_length.min()
if (id_maxlength != id_minlength):
print ("ID Length = [" + str(id_minlength) + ", " + str(id_maxlength) + "]")
else:
print ("ID Length = " + str(id_maxlength))
print("\n")
# ===============================
# Count NaN Values for date_first_booking
NaN_Count_date_first_booking = df_train_users.date_first_booking.isnull().sum()
print ("date_first_booking NaN Count = " + str(NaN_Count_date_first_booking) \
+ " && " + str("%.2f" % (float(NaN_Count_date_first_booking)/row_count*100)) +"%")
print("\n")
# ===============================
# Possible Values for gender
gender_repartition = df_train_users.gender.value_counts()
for gender, count in gender_repartition.iteritems():
print ("Gender: " + gender + " && Count: " + str(count) + " && " \
+ str("%.2f" % (float(count)/row_count*100)) +"%")
print("\n")
# ===============================
# Count NaN Values for age
NaN_Count_age = df_train_users.age.isnull().sum()
print ("age NaN Count = " + str(NaN_Count_age) + " && " \
+ str("%.2f" % (float(NaN_Count_age)/row_count*100)) +"%")
print("\n")
# ===============================
# Possible Values for signup_method
signup_method_repartition = df_train_users.signup_method.value_counts()
for method, count in signup_method_repartition.iteritems():
print ("Method: " + method + " && Count: " + str(count) + " && " \
+ str("%.2f" % (float(count)/row_count*100)) +"%")
print("\n")
# ===============================
# Possible Values for language
language_repartition = df_train_users.language.value_counts()
for language, count in language_repartition.iteritems():
print ("language: " + language + " && Count: " + str(count) \
+ " && " + str("%.2f" % (float(count)/row_count*100)) +"%")
break
print("\n")
# ===============================
# Possible Values for affiliate_channel
affiliate_channel_repartition = df_train_users.affiliate_channel.value_counts()
i = 0
for channel, count in affiliate_channel_repartition.iteritems():
print ("affiliate_channel: " + channel + " && Count: " + str(count) + " && " \
+ str("%.2f" % (float(count)/row_count*100)) +"%")
i += 1
if (i == 3):
break
print("\n")
# ===============================
# Possible Values for first_affiliate_tracked
first_affiliate_tracked_repartition = df_train_users.first_affiliate_tracked.value_counts()
i = 0
for affiliate, count in first_affiliate_tracked_repartition.iteritems():
print ("first_affiliate_tracked: " + affiliate + " && Count: " + str(count) + \
" && " + str("%.2f" % (float(count)/row_count*100)) +"%")
i += 1
if (i == 3):
break
print("\n")
# ===============================
# Possible Values for signup_app
signup_app_repartition = df_train_users.signup_app.value_counts()
for signup_app, count in signup_app_repartition.iteritems():
print ("signup_app: " + signup_app + " && Count: " + str(count) + " && " \
+ str("%.2f" % (float(count)/row_count*100)) +"%")
print("\n")
# ===============================
# Possible Values for first_browser
first_browser_repartition = df_train_users.first_browser.value_counts()
i = 0
for first_browser, count in first_browser_repartition.iteritems():
print ("first_browser: " + first_browser + " && Count: " + str(count) + \
" && " + str("%.2f" % (float(count)/row_count*100)) +"%")
i += 1
if (i == 6):
break
print("\n")
dict_output = dict()
country_destination_repartition = df_train_users.country_destination.value_counts()
for country_destination, count in country_destination_repartition.iteritems():
dict_output[country_destination] = float(count)/row_count*100
df_output = pd.DataFrame(list(dict_output.items()),
columns=['Country', 'Repartition'])
df_output.sort_values(by=['Repartition'], ascending=False).head(n=5)
p = bk.Bar(df_output, label=['Country'], values='Repartition', ylabel='Booking Proportion in %')
bk.show(p)
df_output_by_year_and_month = df_train_users.loc[:,['country_destination','date_account_created']]
df_output_by_year_and_month.loc[:,'year_account_created'] = df_output_by_year_and_month['date_account_created'].apply(
lambda x: datetime.strptime(x, "%Y-%m-%d").strftime("%Y")
)
df_output_by_year_and_month.loc[:,'month_account_created'] = df_output_by_year_and_month['date_account_created'].apply(
lambda x: datetime.strptime(x, "%Y-%m-%d").strftime("%m")
)
df_output_by_year_and_month.loc[:,'year_month_account_created'] = df_output_by_year_and_month['date_account_created'].apply(
lambda x: datetime.strptime(x, "%Y-%m-%d").strftime("%Y - %m")
)
df_output_by_year_and_month.head(n=3)
tmp_df = df_output_by_year_and_month.loc[:,['country_destination','year_account_created']]
rslt = tmp_df.groupby("year_account_created")['country_destination'].value_counts()
list_year = [x[0] for x in rslt.index.values]
list_dest = [x[1] for x in rslt.index.values]
list_count = rslt.values
df_tmp = pd.DataFrame(
{'Year': list_year,
'Destination': list_dest,
'Count': list_count
}
)
df_tmp.sort_values(by=["Destination", "Year"]).head(n=5)
df_pivot = pd.pivot_table(df_tmp, values='Count', index=['Destination'], columns=['Year'],
aggfunc=np.sum, margins=True).astype(int)
total_by_years = df_pivot.loc[['All']]
for year, col in total_by_years.iteritems():
subtotal_by_year = col[0]
df_pivot[year] = df_pivot[year]/subtotal_by_year*100
df_pivot = df_pivot.sort_values(by="All", ascending=False).drop("All") # We sort by the column "All" and remove the line "All"
df_pivot
df_pivot2 = df_pivot.copy()
df_pivot2["Destination"] = df_pivot.index
df_pivot2 = pd.melt(df_pivot2, id_vars="Destination")
df_pivot2 = df_pivot2.drop(df_pivot2[df_pivot2.Year == "All"].index) # We remove the column 'All' from the plotting
df_pivot2.head(n=3)
p = bk.Bar(df_pivot2, label=['Destination'], values='value', group="Year", legend='top_right', ylabel='Booking Proportion in %')
bk.show(p)
tmp_df = df_output_by_year_and_month[['country_destination','year_month_account_created']].sort_values(
by="year_month_account_created"
)
tmp_df = tmp_df.groupby(["year_month_account_created"]).agg("count")
tmp_df.head(n=4)
# Plot Data
x_data = [str(x) for x in tmp_df["country_destination"].index.values]
y_data = tmp_df["country_destination"].values
p = bk_plt.figure(plot_width=980, plot_height=400, y_range=[0, y_data.max()+1000], x_range=x_data)
p.line(x_data, y_data, line_width=2)
p.circle(x_data, y_data, fill_color="white", size=8)
# Draw a line when X% of the users of the dataset have sign-up
split_1 = 20 # 80% left
split_2 = 30 # 70% left
split_3 = 50 # 50% left
split_4 = 70 # 30% left
thrs_value_1 = 213450 * split_1 / 100
thrs_value_2 = 213450 * split_2 / 100
thrs_value_3 = 213450 * split_3 / 100
thrs_value_4 = 213450 * split_4 / 100
index_1 = 0
index_2 = 0
index_3 = 0
index_4 = 0
tmp_sum = 0
for val in y_data:
tmp_sum += val
if tmp_sum <= thrs_value_1:
index_1 += 1
index_2 += 1
index_3 += 1
index_4 += 1
elif tmp_sum <= thrs_value_2:
index_2 += 1
index_3 += 1
index_4 += 1
elif tmp_sum <= thrs_value_3:
index_3 += 1
index_4 += 1
elif tmp_sum <= thrs_value_4:
index_4 += 1
else:
break
vline_1 = bk_md.Span(location=index_1, dimension='height', line_color='blue', line_width=3) # 80% Data Left
vline_2 = bk_md.Span(location=index_2, dimension='height', line_color='green', line_width=3) # 70% Data Left
vline_3 = bk_md.Span(location=index_3, dimension='height', line_color='orange', line_width=3) # 50% Data Left
vline_4 = bk_md.Span(location=index_4, dimension='height', line_color='red', line_width=3) # 30% Data Left
p.renderers.extend([vline_1, vline_2, vline_3, vline_4])
# We set label orientation
from math import pi
p.xaxis.major_label_orientation = pi/2
# We display the graph
bk_plt.show(p)
df_age = df_train_users[["age"]]
na_count = df_age.isnull().sum()[0]
row_count = df_age.shape[0]
na_percent = "%.2f" % (float(na_count) / row_count * 100)
print ("Count of NaN Values = " + str(na_count) + " => " + str(na_percent) + "%")
print ("Total Row Count = " + str(row_count))
df_age = df_age.dropna().reset_index(drop=True)
df_age.head(n=3)
df_age_bucket = df_age.groupby(pd.cut(df_age['age'], np.arange(0,999999,5))).count()
idx_tmp = df_age_bucket.index.values
bckt_count_tmp = df_age_bucket.age.values
data = dict()
for bck, cnt in zip(idx_tmp, bckt_count_tmp):
start_point = int(bck[1:].partition(",")[0])
end_point = int(bck[:-1].partition(",")[2])
range_name = "("+ "%03d" % (start_point) + "," + "%03d" % (end_point) + "]"
if start_point < 100:
data[range_name] = cnt
elif start_point >= 100 and start_point < 1000:
if not "(100, 999]" in data:
data["(100, 999]"] = cnt
else:
data["(100, 999]"] += cnt
else:
if not "+1000" in data:
data["+1000"] = cnt
else:
data["+1000"] += cnt
df_age_stats = pd.DataFrame({
'Bucket': list(data.keys()),
'Count': list(data.values())
}).sort_values(by="Bucket").reset_index(drop=True)
df_age_stats.head(n=5)
x_range = df_age_stats["Bucket"].values
bar2 = bk.Bar(
df_age_stats,
values = 'Count',
label = ['Bucket'],
title = "Reported Ages of Users",
width = 980,
ylabel = "Number of Users by Bucket"
)
bk.show(bar2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Dataset Loading & Visualisation
Step2: As we can see, it is a fairly simple file with only two columns
Step3: Here is the information available on our test users
Step4: We have a file containing a list of countries with the following features
Step5: Displayed like this, it is fairly hard to see what is in this file.
Step6: Now it appears that we have a file containing demographic information by country, by age and by gender.
Step7: => We can conclude that we only have data about the year 2015 !
Step8: Here is the information available on our test users
Step9: 1.3 Summary - Important Information
Step10: 1.3.2 - Summary
Step11: 1.4.1 Feature Analysis
Step12: 1.4.2 Output Analysis - Is the dataset balanced ?
Step13: 1.4.2.2 Same visualisation, with data grouped by Booking-year
Step14: 1.4.2.3 Conclusion
Step15: If we ony consider of 2013 and 2014, we still have more than 70% of the data for training.
|
1,870
|
<ASSISTANT_TASK:>
Python Code:
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = './traffic-signs-data/train.p'
validation_file = './traffic-signs-data/valid.p'
testing_file = './traffic-signs-data/test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
assert(len(X_train) == len(y_train))
assert(len(X_valid) == len(y_valid))
assert(len(X_test) == len(y_test))
print("Loading done!")
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np
# Number of training examples
n_train = len(X_train)
# Number of testing examples.
n_test = len(X_test)
# Number of validation examples
n_valid = len(X_valid)
# TODO: What's the shape of an traffic sign image?
image_shape = X_train[0].shape
# TODO: How many unique classes/labels there are in the dataset.
n_classes = np.unique(y_train).size
print("Number of training examples =", n_train)
print("Number of validation examples =", n_valid)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
import matplotlib.pyplot as plt
import random
import numpy as np
import csv
import pandas as pd
# Visualizations will be shown in the notebook.
%matplotlib inline
def show_sample(features, labels, histogram = 1, sample_num = 1, sample_index = -1, color_map ='brg'):
if histogram == 1 :
col_num = 2
#Create training sample + histogram plot
f, axarr = plt.subplots(sample_num+1, col_num, figsize=(col_num*4,(sample_num+1)*3))
else:
if sample_num <= 4:
col_num = sample_num
else:
col_num = 4
if sample_num%col_num == 0:
row_num = int(sample_num/col_num)
else:
row_num = int(sample_num/col_num)+1
if sample_num == 1:
#Create training sample plot
f, ax = plt.subplots(row_num, col_num)
else:
#Create training sample plot
f, axarr = plt.subplots(row_num, col_num, figsize=(col_num*4,(row_num+1)*2))
signnames = pd.read_csv('signnames.csv')
index = sample_index - 1
for i in range(0, sample_num, 1):
if sample_index < -1:
index = random.randint(0, len(features) - 1)
else:
index = index + 1
if histogram == 1 :
image = features[index].squeeze()
axarr[i,0].set_title('%s' % signnames.iloc[labels[index], 1])
axarr[i,0].imshow(image,color_map)
hist,bins = np.histogram(image.flatten(),256, normed =1 )
cdf = hist.cumsum()
cdf_normalized = cdf * hist.max()/ cdf.max()
axarr[i,1].plot(cdf_normalized, color = 'b')
axarr[i,1].hist(image.flatten(),256, normed =1, color = 'r')
axarr[i,1].legend(('cdf','histogram'), loc = 'upper left')
axarr[i,0].axis('off')
axarr[sample_num,0].axis('off')
axarr[sample_num,1].axis('off')
else:
image = features[index].squeeze()
if row_num > 1:
axarr[int(i/col_num),i%col_num].set_title('%s' % signnames.iloc[labels[index], 1])
axarr[int(i/col_num),i%col_num].imshow(image,color_map)
axarr[int(i/col_num),i%col_num].axis('off')
axarr[int(i/col_num),i%col_num].axis('off')
axarr[int(i/col_num),i%col_num].axis('off')
elif sample_num == 1:
ax.set_title('%s' % signnames.iloc[labels[index], 1])
ax.imshow(image,color_map)
ax.axis('off')
ax.axis('off')
ax.axis('off')
else:
axarr[i%col_num].set_title('%s' % signnames.iloc[labels[index], 1])
axarr[i%col_num].imshow(image,color_map)
axarr[i%col_num].axis('off')
axarr[i%col_num].axis('off')
axarr[i%col_num].axis('off')
# Tweak spacing to prevent clipping of title labels
f.tight_layout()
plt.show()
def show_training_dataset_histogram(labels_train,labels_valid,labels_test):
fig, ax = plt.subplots(figsize=(15,5))
temp = [labels_train,labels_valid,labels_test]
n_classes = np.unique(y_train).size
# the histogram of the training data
n, bins, patches = ax.hist(temp, n_classes, label=["Train","Valid","Test"])
ax.set_xlabel('Classes')
ax.set_ylabel('Number of occurence')
ax.set_title(r'Histogram of the data sets')
ax.legend(bbox_to_anchor=(1.01, 1), loc="upper left")
plt.show()
show_training_dataset_histogram(y_train,y_valid,y_test)
show_sample(X_train, y_train, sample_num = 6)
import cv2
from tqdm import tqdm
from sklearn.utils import shuffle
def random_transform_image(dataset, index):
# Hyperparameters
# Values inspired from Pierre Sermanet and Yann LeCun Paper : Traffic Sign Recognition with Multi-Scale Convolutional Networks
Scale_change_max = 0.1
Translation_max = 2 #pixels
Rotation_max = 15 #degrees
Brightness_max = 0.1
# Generate random transformation values
trans_x = np.random.uniform(-Translation_max,Translation_max)
trans_y = np.random.uniform(-Translation_max,Translation_max)
angle = np.random.uniform(-Rotation_max,Rotation_max)
scale = np.random.uniform(1-Scale_change_max,1+Scale_change_max)
bright = np.random.uniform(-Brightness_max,Brightness_max)
#Brightness
#create white image
white_img = 255*np.ones((32,32,3), np.uint8)
black_img = np.zeros((32,32,3), np.uint8)
if bright >= 0:
img = cv2.addWeighted(dataset[index].squeeze(),1-bright,white_img,bright,0)
else:
img = cv2.addWeighted(dataset[index].squeeze(),bright+1,black_img,bright*-1,0)
# Scale
img = cv2.resize(img,None,fx=scale, fy=scale, interpolation = cv2.INTER_CUBIC)
# Get image shape afeter scaling
rows,cols,chan = img.shape
# Pad with zeroes before rotation if image shape is less than 32*32*3
if rows < 32:
offset = int((32-img.shape[0])/2)
# If shape is an even number
if img.shape[0] %2 == 0:
img = cv2.copyMakeBorder(img,offset,offset,offset,offset,cv2.BORDER_CONSTANT,value=[0,0,0])
else:
img = cv2.copyMakeBorder(img,offset,offset+1,offset+1,offset,cv2.BORDER_CONSTANT,value=[0,0,0])
# Update image shape after padding
rows,cols,chan = img.shape
# Rotate
M = cv2.getRotationMatrix2D((cols/2,rows/2),angle,1)
img = cv2.warpAffine(img,M,(cols,rows))
# Translation
M = np.float32([[1,0,trans_x],[0,1,trans_y]])
img = cv2.warpAffine(img,M,(cols,rows))
# Crop centered if image shape is greater than 32*32*3
if rows > 32:
offset = int((img.shape[0]-32)/2)
img = img[offset: 32 + offset, offset: 32 + offset]
return img
# Parameters
# Max example number per class
num_example_per_class = np.bincount(y_train)
min_example_num = max(num_example_per_class)
for i in range(len(num_example_per_class)):
# Update number of examples by class
num_example_per_class = np.bincount(y_train)
# If the class lacks examples...
if num_example_per_class[i] < min_example_num:
# Locate where pictures of this class are located in the training set..
pictures = np.array(np.where(y_train == i)).T
# Compute the number of pictures to be generated
num_example_to_generate = min_example_num - num_example_per_class[i]
# Compute the number of iteration necessary on the real data
num_iter = int( num_example_to_generate/len(pictures) ) + 1
# Compute the pool of real data necessary to fill the classes
if num_iter == 1 :
num_pictures = num_example_to_generate
else:
num_pictures = len(pictures)
# # Limit the number of iteration to 10
# num_iter = min(num_iter, 10)
# Create empty list
more_X = []
more_y = []
for k in range(num_iter):
# if we are in the last iteration, num_pictures is adjusted to fit the min_example_num
if (k == num_iter - 1) and (num_iter > 1):
num_pictures = min_example_num - num_iter * len(pictures)
# For each pictures of this class, generate 1 more synthetic image
pbar = tqdm(range(num_pictures), desc='Iter {:>2}/{}'.format(i+1, len(num_example_per_class)), unit='examples')
for j in pbar:
# Append the transformed picture
more_X.append(random_transform_image(X_train,pictures[j]))
# Append the class number
more_y.append(i)
# Append the synthetic images to the training set
X_train = np.append(X_train, np.array(more_X), axis=0)
y_train = np.append(y_train, np.array(more_y), axis=0)
print("New training feature shape",X_train.shape)
print("New training label shape",y_train.shape)
print("Data augmentation done!")
# Visualization
show_training_dataset_histogram(y_train,y_valid,y_test)
show_sample(X_train, y_train, histogram = 0, sample_num = 8, sample_index = 35000)
import cv2
from numpy import newaxis
def equalize_Y_histogram(features):
images = []
for image in features:
# Convert RGB to YUV
temp = cv2.cvtColor(image, cv2.COLOR_BGR2YUV);
# Equalize Y histogram in order to get better contrast accross the dataset
temp[:,:,0] = cv2.equalizeHist(temp[:,:,0])
# Convert back YUV to RGB
temp = cv2.cvtColor(temp, cv2.COLOR_YUV2BGR)
images.append(temp)
return np.array(images)
def CLAHE_contrast_normalization(features):
images = []
for image in features:
# create a CLAHE object
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(4,4))
temp = clahe.apply(image)
images.append(temp)
return np.array(images)
def convert_to_grayscale(features):
gray_images = []
for image in features:
# Convert RGB to grayscale
temp = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray_images.append(temp)
return np.array(gray_images)
def normalize_grayscale(image_data):
"""Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
"""
a = 0.1
b = 0.9
image_data_norm = a + ((image_data - np.amin(image_data))*(b-a))/(np.amax(image_data) - np.amin(image_data))
return image_data_norm
index = 255
X_temp1 = convert_to_grayscale(X_train)
X_temp2 = CLAHE_contrast_normalization(X_temp1)
X_temp3 = normalize_grayscale(X_temp2)
show_sample(X_train, y_train, histogram = 1, sample_num = 1, sample_index = index)
show_sample(X_temp1, y_train, histogram = 1, sample_num = 1, sample_index = index, color_map ='gray')
show_sample(X_temp2, y_train, histogram = 1, sample_num = 1, sample_index = index, color_map ='gray')
print(X_temp2[index])
print(X_temp3[index])
#Preprocessing pipeline
print('Preprocessing training features...')
X_train = convert_to_grayscale(X_train)
X_train = CLAHE_contrast_normalization(X_train)
X_train = normalize_grayscale(X_train)
X_train = X_train[..., newaxis]
print("Processed shape =", X_train.shape)
print('Preprocessing validation features...')
X_valid = convert_to_grayscale(X_valid)
X_valid = CLAHE_contrast_normalization(X_valid)
X_valid = normalize_grayscale(X_valid)
X_valid = X_valid[..., newaxis]
print("Processed shape =", X_valid.shape)
print('Preprocessing test features...')
X_test = convert_to_grayscale(X_test)
X_test = CLAHE_contrast_normalization(X_test)
X_test = normalize_grayscale(X_test)
X_test = X_test[..., newaxis]
print("Processed shape =", X_test.shape)
# Shuffle the training dataset
X_train, y_train = shuffle(X_train, y_train)
print("Pre-processing done!")
import tensorflow as tf
from tensorflow.contrib.layers import flatten
def model(x, keep_prob):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# Network Parameters
n_classes = 43 # MNIST total classes (0-9 digits)
filter_size = 5
# Store layers weight & bias
weights = {
'wc1' : tf.Variable(tf.truncated_normal([filter_size, filter_size, 1, 100], mean = mu, stddev = sigma)),
'wc2' : tf.Variable(tf.truncated_normal([filter_size, filter_size, 100, 200], mean = mu, stddev = sigma)),
'wfc1': tf.Variable(tf.truncated_normal([9900, 100], mean = mu, stddev = sigma)),
'out' : tf.Variable(tf.truncated_normal([100, n_classes], mean = mu, stddev = sigma))}
biases = {
'bc1' : tf.Variable(tf.zeros([100])),
'bc2' : tf.Variable(tf.zeros([200])),
'bfc1': tf.Variable(tf.zeros([100])),
'out' : tf.Variable(tf.zeros([n_classes]))}
def conv2d(x, W, b, strides=1, padding='SAME'):
x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding=padding)
x = tf.nn.bias_add(x, b)
return tf.nn.relu(x)
def maxpool2d(x, k=2, padding='SAME'):
return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1], padding=padding)
# Layer 1: Convolution 1 - 32*32*1 to 28*28*100
conv1 = conv2d(x, weights['wc1'], biases['bc1'], padding='VALID')
# Max Pool - 28*28*100 to 14*14*100
conv1 = maxpool2d(conv1, k=2)
# Layer 2: Convolution 2 - 14*14*100 to 10*10*200
conv2 = conv2d(conv1, weights['wc2'], biases['bc2'], padding='VALID')
# Max Pool - 10*10*200 to 5*5*200
conv2 = maxpool2d(conv2, k=2)
#Fork second max pool - 14*14*100 to 7*7*100
conv1 = maxpool2d(conv1, k=2)
#Flatten conv1. Input = 7*7*100, Output = 4900
conv1 = tf.contrib.layers.flatten(conv1)
# Flatten conv2. Input = 5x5x200. Output = 5000.
conv2 = tf.contrib.layers.flatten(conv2)
# Concatenate
flat = tf.concat(1,[conv1,conv2])
# Layer 3 : Fully Connected. Input = 9900. Output = 100.
fc1 = tf.add(tf.matmul(flat, weights['wfc1']), biases['bfc1'])
fc1 = tf.nn.relu(fc1)
fc1 = tf.nn.dropout(fc1, keep_prob)
# Layer 4: Fully Connected. Input = 100. Output = 43.
logits = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
return logits
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
#Hyperparameters
EPOCHS = 100 #Max EPOCH number, if ever early stopping doesn't kick in
BATCH_SIZE = 256 #Max batch size
rate = 0.001 #Base learning rate
keep_probability = 0.75 #Keep probability for dropout..
max_iter_wo_improvmnt = 3000 #For early stopping
#Declare placeholder tensors
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
keep_prob = tf.placeholder(tf.float32)
one_hot_y = tf.one_hot(y, 43)
logits = model(x, keep_prob)
probabilities = tf.nn.softmax(logits)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits, one_hot_y)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
from sklearn.utils import shuffle
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
iteration = 0
best_valid_accuracy = 0
best_accuracy_iter = 0
stop = 0
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
iteration = iteration + 1
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: keep_probability})
# After 10 Epochs, for every 200 iterations validation accuracy is checked
if (iteration % 200 == 0 and i > 10):
validation_accuracy = evaluate(X_valid, y_valid)
if validation_accuracy > best_valid_accuracy:
best_valid_accuracy = validation_accuracy
best_accuracy_iter = iteration
saver = tf.train.Saver()
saver.save(sess, './best_model')
print("Improvement found, model saved!")
stop = 0
# Stopping criterion: stop training if no improvement within max_iter_wo_improvmnt iterations
if (iteration - best_accuracy_iter) > max_iter_wo_improvmnt:
print("Stopping criteria met..")
stop = 1
validation_accuracy = evaluate(X_valid, y_valid)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
if stop == 1:
break
# saver.save(sess, './lenet')
# print("Model saved")
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
print("Evaluating..")
train_accuracy = evaluate(X_train, y_train)
print("Train Accuracy = {:.3f}".format(train_accuracy))
valid_accuracy = evaluate(X_valid, y_valid)
print("Valid Accuracy = {:.3f}".format(valid_accuracy))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import os
test_images = os.listdir('traffic-signs-data/web_found_signs/')
X_web = []
for file in test_images:
image = mpimg.imread('traffic-signs-data/web_found_signs/' + file)
plt.imshow(image)
plt.show()
print("Loaded ", file)
X_web.append(image)
X_web = np.array(X_web)
# Preprocess images
print('Preprocessing features...')
# Apply the same pipeline as the training set (grayscale -> CLAHE -> min-max)
X_web = convert_to_grayscale(X_web)
X_web = CLAHE_contrast_normalization(X_web)
X_web = normalize_grayscale(X_web)
X_web = X_web[..., newaxis]
print("Processed shape =", X_web.shape)
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
import tensorflow as tf
# hardcoded..
y_web = [9,22,2,18,1,17,4,10,38,4,4,23]
#We have to set the keep probability to 1.0 in the model..
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
logits_web = sess.run(tf.argmax(logits,1), feed_dict={x: X_web, keep_prob: 1.0})
print("Prediction =", logits_web)
# show_sample(X_web, logits_web, histogram = 0, sample_num = len(test_images), sample_index = 0, color_map = 'gray')
#Number of column to show
sample_num = len(test_images)
col_num = 4
if sample_num%col_num == 0:
row_num = int(sample_num/col_num)
else:
row_num = int(sample_num/col_num)+1
#Create training sample plot
f, axarr = plt.subplots(row_num, col_num, figsize=(col_num*4,(row_num+1)*2))
signnames = pd.read_csv('signnames.csv')
for i in range(0, sample_num, 1):
image = X_web[i].squeeze()
if logits_web[i] != y_web[i]:
color_str = 'red'
else:
color_str = 'green'
title_str = 'Predicted : %s \n Real: %s' % (signnames.iloc[logits_web[i], 1],signnames.iloc[y_web[i], 1])
axarr[int(i/col_num),i%col_num].set_title(title_str, color = color_str)
axarr[int(i/col_num),i%col_num].imshow(image,'gray')
axarr[int(i/col_num),i%col_num].axis('off')
axarr[int(i/col_num),i%col_num].axis('off')
axarr[int(i/col_num),i%col_num].axis('off')
f.tight_layout()
plt.show()
### Calculate the accuracy for these 5 new images.
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_web, y_web)
print("Web images Accuracy = {:.3f}".format(test_accuracy))
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
import matplotlib.gridspec as gridspec
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
softmax_prob = sess.run(tf.nn.top_k(probabilities,k = 5), feed_dict={x: X_web, keep_prob: 1.0})
signnames = pd.read_csv('signnames.csv')
for i in range(len(test_images)):
plt.figure(figsize = (6,2))
gs = gridspec.GridSpec(1, 2,width_ratios=[2,3])
plt.subplot(gs[0])
plt.imshow(X_web[i].squeeze(),cmap="gray")
plt.axis('off')
plt.subplot(gs[1])
plt.barh(6-np.arange(5),softmax_prob[0][i], align='center')
if logits_web[i] != y_web[i]:
color_str = 'red'
else:
color_str = 'green'
for i_label in range(5):
temp_string = "%.1f %% : %s" % (softmax_prob[0][i][i_label]*100, str(signnames.iloc[softmax_prob[1][i][i_label], 1]))
plt.text(softmax_prob[0][i][0]*1.1,6-i_label-.15, temp_string, color = color_str)
plt.show()
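The `tf.nn.top_k` call above returns the k largest softmax values together with their class indices. A small numpy stand-in (illustrative only, not the TensorFlow API) shows the same behavior:

```python
import numpy as np

def top_k(probs, k):
    # Indices of the k largest entries, largest first (numpy stand-in for tf.nn.top_k)
    idx = np.argsort(probs)[::-1][:k]
    return probs[idx], idx

values, indices = top_k(np.array([0.1, 0.6, 0.05, 0.25]), k=2)
# values == [0.6, 0.25], indices == [1, 3]
```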
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it maybe having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 and activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: 3. Include an exploratory visualization of the dataset
Step3: Step 2
Step4: 5. Show a sample of the augmented dataset
Step6: 6. Pre-process functions
Step7: 7. Show a sample of the preprocess functions outputs
Step8: 8. Preprocess the Dataset
Step9: 9. Model Architecture
Step10: 10. Train, Validate and Test the Model
Step11: 11. Features and Labels
Step12: 12. Training Pipeline
Step13: 13. Model Evaluation
Step14: 14. Train the Model
Step15: 15. Evaluate accuracy of the different data sets
Step16: Step 3
Step17: 17. Predict the Sign Type for Each Image
Step18: 18. Analyze Performance
Step19: 19. Output Top 5 Softmax Probabilities For Each Image Found on the Web
Step20: Step 4
|
1,871
|
<ASSISTANT_TASK:>
Python Code:
%%capture
# Install tweepy
!pip install tweepy
import tweepy
import math
import os.path
import pandas as pd
import json
from random import shuffle
import string
# Twitter authentication data:
# Put the twitter account handle here: @username
# read the file in JSON format
with open('auth.pass') as fp:
data = json.load(fp)
# Configure the library. Do not modify
auth = tweepy.OAuthHandler(data['consumer_key'], data['consumer_secret'])
auth.set_access_token(data['access_token'], data['access_token_secret'])
# Chosen product:
produto = 'itau'
# Minimum number of captured messages:
n = 500
# Minimum number of messages for the training set:
t = 300
# Language filter; pick one from the ISO 639-1 table.
lang = 'pt'
# Create an object for the capture
api = tweepy.API(auth)
# Start the capture; for more details see the tweepy documentation
i = 1
msgs = []
for msg in tweepy.Cursor(api.search, q=produto, lang=lang).items():
msgs.append(msg.text.lower())
i += 1
if i > n:
break
# Shuffle the messages to reduce possible bias
shuffle(msgs)
# Check that the file does not exist, so an existing set is not overwritten
if not os.path.isfile('./{0}.xlsx'.format(produto)):
# Open the file for writing
writer = pd.ExcelWriter('{0}.xlsx'.format(produto))
# split the message set into two sheets
dft = pd.DataFrame({'Treinamento' : pd.Series(msgs[:t])})
dft.to_excel(excel_writer = writer, sheet_name = 'Treinamento', index = False)
dfc = pd.DataFrame({'Teste' : pd.Series(msgs[t:])})
dfc.to_excel(excel_writer = writer, sheet_name = 'Teste', index = False)
# close the file
writer.save()
exceltr=pd.read_excel("itau.xlsx")
excelte = pd.read_excel("itauteste.xlsx")
SIMs = []
NAOs = []
listaprobs = []
for i in range(len(exceltr.Treinamento)):
coluna1 = exceltr.Treinamento[i].lower().split()
exceltr.Treinamento[i] = coluna1
for k in range(len(coluna1)):
for punctuation in string.punctuation:
coluna1[k] = coluna1[k].replace(punctuation, '')
coluna1[k] = coluna1[k].replace('—', '')
if exceltr.Relevancia[i] == 'sim':
SIMs.append(coluna1[k])
elif exceltr.Relevancia[i] == 'não':
NAOs.append(coluna1[k])
while '' in coluna1:
coluna1.remove('')
while '' in SIMs:
SIMs.remove('')
while '' in NAOs:
NAOs.remove('')
for i in exceltr.Relevancia:
if i == 'sim':
listaprobs.append(i)
if i == 'não':
listaprobs.append(i)
QY = 0
QN = 0
for a in listaprobs:
if a == 'sim':
QY += 1
if a == 'não':
QN += 1
# Count each word in the list
LS = [[x,SIMs.count(x)] for x in set(SIMs)]
LN = [[y,NAOs.count(y)] for y in set(NAOs)]
# Compute how many words exist in the sample space
palav = 0
sins = 0
naos = 0
for a in range(len(LS)):
palav = palav + LS[a][1]
sins = sins + LS[a][1]
for a in range(len(LN)):
palav = palav + LN[a][1]
naos = naos + LN[a][1]
print("Count of 'sim'", QY)
print("Count of 'não'", QN)
print('Total words', len(LS)+len(LN))
print('Total relevant', len(LS))
print('Total not relevant', len(LN))
# Clean the new sheet
for a in range(len(excelte.Teste)):
coluna11 = excelte.Teste[a].lower().split()
for b in range(len(coluna11)):
for punctuation in string.punctuation:
coluna11[b] = coluna11[b].replace(punctuation, '')
coluna11[b] = coluna11[b].replace('—', '')
coluna11[b] = coluna11[b].replace('rt', '')
while '' in coluna11:
coluna11.remove('')
excelte.Teste[a] = coluna11
probSIM = []
l = 1
for i in range(len(excelte.Teste)):
clinha = []
for k in range(len(excelte.Teste[i])):
chance = 0
for j in range(len(LS)):
if LS[j][0] == excelte.Teste[i][k]:
chance = ((LS[j][1]+1)/(len(LS)+(len(LS)+len(LN))))
break
if chance > 0:
clinha.append(chance)
elif chance == 0:
clinha.append(1/(len(LS)+(len(LS)+len(LN))))
l = 1
for x in clinha:
l *= x
probSIM.append(l)
probNAO = []
l = 1
for i in range(len(excelte.Teste)):
clinha = []
for k in range(len(excelte.Teste[i])):
chance = 0
for j in range(len(LN)):
if LN[j][0] == excelte.Teste[i][k]:
chance = ((LN[j][1]+1)/(len(LN)+(len(LS)+len(LN))))
break
if chance > 0:
clinha.append(chance)
elif chance == 0:
clinha.append(1/(len(LN)+(len(LS)+len(LN))))
l = 1
for x in clinha:
l *= x
probNAO.append(l)
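The per-word probabilities above use add-one (Laplace) smoothing so that unseen words never zero out the product. A minimal self-contained sketch of the textbook form of that smoothing (the denominator convention above differs slightly; `count`, `class_total`, and `vocab_size` are illustrative names, not from the notebook):

```python
def laplace_prob(count, class_total, vocab_size):
    # Add-one smoothing: a word seen `count` times in a class containing
    # `class_total` words, over a vocabulary of `vocab_size` distinct words.
    return (count + 1) / (class_total + vocab_size)

# An unseen word (count 0) still gets a small nonzero probability:
laplace_prob(0, 100, 50)   # 1/150
laplace_prob(3, 100, 50)   # 4/150
```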
L2 = []
for a in range(len(probSIM)):
if probSIM[a]>probNAO[a]:
L2.append('sim')
else:
L2.append('não')
print("False positives", (20/161))
print("True positives", (16/39))
print("True irrelevant", (141/161))
print("False irrelevant", (23/39))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Importing the libraries that will be used. Feel free to add others.
Step2: Authenticating with Twitter
Step3: Collecting data
Step4: Capturing the data from twitter
Step5: Saving the data to an Excel spreadsheet
Step6: Classifying the messages
Step7: Calculations and word counting
Step8: Computing the probability of tweet relevance
Step9: Finally, we can compare the probabilities
Step10: Checking the performance
|
1,872
|
<ASSISTANT_TASK:>
Python Code:
import os
imgList = ['../training/%s'%x for x in os.listdir('../training/')]; imgList.sort()
imgList
import seaborn as sns
import matplotlib.pylab as plt
%matplotlib inline
from nilearn import image, datasets, input_data, plotting
plotting.plot_stat_map(imgList[-1],title=imgList[-1],threshold=0.8);
masker = input_data.NiftiMasker(mask_img='../masks/MNI152_T1_2mm_brain_mask.nii.gz',
smoothing_fwhm=8).fit()
masker
plotting.plot_stat_map( masker.inverse_transform(masker.transform(imgList[-1])),title=imgList[-1],threshold=0.5);
import pandas as pd
def makeBigDf(imgList,masker):
bigDf = pd.DataFrame()
for img in imgList:
thisName = img.split('/')[-1].split('.')[0]
cond,num,content = thisName.split('_')
cont = '%s_%s' % (num,content)
thisDf = pd.DataFrame(masker.transform(img))
thisDf.index = [[cond],[cont]]
bigDf = pd.concat([bigDf,thisDf])
bigDf.sort_index(inplace=True)
return bigDf
blockDf = makeBigDf(imgList,masker)
blockDf
# function to split our table
def makeHalfDf(bigDf,start,stop):
# empty table into which we write half of our blocks
halfDf = pd.DataFrame()
# we loop over the conditions (faces, words, etc.)
for cond in bigDf.index.levels[0]:
# we take one condition
thisDf = bigDf.ix[cond]
# and select only rows (blocks) that lie between 'start' and 'stop'
thisHalf = thisDf.ix[start:stop]
# we build a new index for this selection
thisHalf.index = [[cond]*thisHalf.shape[0],thisHalf.index]
# we append this selection to our big table
halfDf = pd.concat([halfDf,thisHalf])
# once we are done, we return the table
return halfDf
thisHalfDf = makeHalfDf(blockDf,0,5)
thisHalfDf
otherHalfDf = makeHalfDf(blockDf,5,10)
otherHalfDf
thisHalfMeanDf = thisHalfDf.groupby(level=0).mean()
thisHalfMeanDf
otherHalfMeanDf = otherHalfDf.groupby(level=0).mean()
otherHalfMeanDf
# we loop over the conditions (faces, words, etc.)
for cond in thisHalfMeanDf.index:
# figure with two panels (left and right)
fig,(ax1,ax2) = plt.subplots(1,2,figsize=(16,4))
# select the condition from the first table
thisA = masker.inverse_transform(thisHalfMeanDf.ix[cond])
# select the condition from the second table
thisB = masker.inverse_transform(otherHalfMeanDf.ix[cond])
# left panel, with data from the first table
display = plotting.plot_stat_map(thisA,title='%s, 1st half'%cond,threshold=0.2,axes=ax1)
# right panel, with data from the second table
plotting.plot_stat_map(thisB,title='%s, 2nd half'%cond,threshold=0.2,axes=ax2,cut_coords=display.cut_coords)
# show the figure
plt.show()
import numpy as np
# we take the means of the two halves and concatenate them
myMeanDf = pd.concat([thisHalfMeanDf,otherHalfMeanDf])
# we build an index that tells us which rows belong to which half
myMeanDf.index = [ np.concatenate([ ['1st half']*5,['2nd half']*5 ]),myMeanDf.index ]
myMeanDf
# we correlate every condition with every other one (where each
# condition comes in two variants: one from the first half
# and one from the second half)
meanCorrDf = myMeanDf.T.corr()
meanCorrDf
plt.figure(figsize=(16,16))
sns.heatmap(meanCorrDf,square=True,vmin=-1,vmax=1,annot=True)
plt.yticks(rotation=90)
plt.xticks(rotation=90)
plt.show()
plt.figure(figsize=(8,8))
sns.heatmap(meanCorrDf.ix['1st half']['2nd half'],square=True,vmin=-1,vmax=1,annot=True)
plt.show()
myCorrDf = pd.DataFrame(np.corrcoef(thisHalfMeanDf,otherHalfDf)[5:,:5],
index=thisHalfDf.index,
columns=otherHalfMeanDf.index)
plt.figure(figsize=(12,10))
sns.heatmap(myCorrDf,annot=True)
plt.show()
def makeCorrPred(myCorrDf):
d = {}
# we loop over every row
for cond,num in myCorrDf.index:
# we select this row
thisDf = myCorrDf.ix[cond].ix[num]
# we pick the column with the highest value
winner = thisDf.idxmax()
# we write an entry with the following info:
# real: the actual condition (from the row)
# winner: the column with the highest correlation
# hit: whether real and winner are identical (can be true or false)
d[num] = {'real':cond, 'winner':winner,'hit':cond==winner }
# we put everything into a table and format it nicely
predDf = pd.DataFrame(d).T
# we compute the percentage of cases where we were right
percentCorrect = np.mean( [int(x) for x in predDf['hit']] )*100
return predDf,percentCorrect
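The winner-take-all rule in `makeCorrPred` boils down to an argmax over a correlation matrix. A minimal numpy illustration with synthetic patterns (not the fMRI data used above):

```python
import numpy as np

rng = np.random.default_rng(0)
templates = rng.normal(size=(5, 100))                 # one mean pattern per condition
noisy = templates + 0.1 * rng.normal(size=(5, 100))   # noisy held-out blocks

# correlate every noisy pattern (rows) with every template (columns)
corr = np.corrcoef(noisy, templates)[:5, 5:]
predicted = corr.argmax(axis=1)                       # winner takes all
accuracy = (predicted == np.arange(5)).mean()         # should be high at this noise level
```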
corrPredDf,corrPcCorrect = makeCorrPred(myCorrDf)
corrPredDf
print "%i%% correct predictions!" % corrPcCorrect
myCorrDf = pd.DataFrame(np.corrcoef(thisHalfMeanDf,otherHalfDf)[5:,:5],
index=thisHalfDf.index,
columns=thisHalfMeanDf.index)
corrPredDf,corrPcCorrect = makeCorrPred(myCorrDf)
print "%i%% correct predictions!" % corrPcCorrect
myCorrDf = pd.DataFrame(np.corrcoef(thisHalfMeanDf,otherHalfDf)[5:,:5],
index=thisHalfDf.index,
columns=otherHalfMeanDf.index)
corrPredDf,corrPcCorrect = makeCorrPred(myCorrDf)
print "%i%% correct predictions!" % corrPcCorrect
myCorrDf = pd.DataFrame(np.corrcoef(thisHalfMeanDf,thisHalfDf)[5:,:5],
index=thisHalfDf.index,
columns=thisHalfMeanDf.index)
corrPredDf,corrPcCorrect = makeCorrPred(myCorrDf)
print "%i%% correct predictions!" % corrPcCorrect
myCorrDf = pd.DataFrame(np.corrcoef(otherHalfMeanDf,otherHalfDf)[5:,:5],
index=otherHalfDf.index,
columns=otherHalfMeanDf.index)
corrPredDf,corrPcCorrect = makeCorrPred(myCorrDf)
print "%i%% correct predictions!" % corrPcCorrect
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As usual, we can look at the images
Step2: As an example, we look at the last image
Step3: We now extract the data from these images, smoothing with 8mm
Step4: This is what the same image looks like after smoothing
Step5: We put all data into one table
Step6: Similarity between template and individual block
Step7: Comparison of the two halves via their averaged data
Step8: only the correlation between first and second half
Step9: Prediction with independent data
Step10: Decision rule (winner takes all)
Step11: It is noticeable
Step12: Blocks from second half, means from first half
Step13: Blocks from first half, means from first half
Step14: Blocks from second half, means from second half
|
1,873
|
<ASSISTANT_TASK:>
Python Code:
from datetime import datetime
import requests
def requests_get(index=None):
response = requests.get("https://httpbin.org/delay/1")
response.raise_for_status()
print(f"{index} - {response.status_code} - {response.elapsed}")
requests_get()
before = datetime.now()
for index in range(0, 5):
requests_get(index)
after = datetime.now()
print(f"total time: {after - before}")
import httpx
def httpx_get(index=None):
response = httpx.get("https://httpbin.org/delay/1")
response.raise_for_status()
print(f"{index} - {response.status_code} - {response.elapsed}")
httpx_get()
before = datetime.now()
for index in range(0, 5):
httpx_get(index)
after = datetime.now()
print(f"total time: {after - before}")
async with httpx.AsyncClient() as client:
response = await client.get('https://httpbin.org/delay/1')
print(response)
async def async_httpx_get(index=None):
async with httpx.AsyncClient() as client:
response = await client.get("https://httpbin.org/delay/1")
response.raise_for_status()
print(f"{index} - {response.status_code} - {response.elapsed}")
await async_httpx_get()
before = datetime.now()
for index in range(0, 5):
await async_httpx_get(index)
after = datetime.now()
print(f"total time: {after - before}")
many_gets = tuple(async_httpx_get(index) for index in range(0,5))
import asyncio
before = datetime.now()
await asyncio.gather(*many_gets)
after = datetime.now()
print(f"total time: {after - before}")
semaphore = asyncio.Semaphore(3)
async def async_semaphore_httpx_get(index=None):
async with semaphore:
async with httpx.AsyncClient() as client:
response = await client.get("https://httpbin.org/delay/1")
response.raise_for_status()
print(f"{index} - {response.status_code} - {response.elapsed}")
semaphore_many_gets = tuple(
async_semaphore_httpx_get(index) for index in range(0,10))
before = datetime.now()
await asyncio.gather(*semaphore_many_gets)
after = datetime.now()
print(f"total time: {after - before}")
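The same throttling pattern can be verified offline by replacing the HTTP request with `asyncio.sleep` (a self-contained sketch; names like `limited_task` are illustrative):

```python
import asyncio

async def limited_task(sem, results, index):
    async with sem:                 # at most `limit` tasks inside at once
        await asyncio.sleep(0.01)   # stand-in for the network call
        results.append(index)

async def run_limited(n_tasks, limit):
    sem = asyncio.Semaphore(limit)
    results = []
    await asyncio.gather(*(limited_task(sem, results, i) for i in range(n_tasks)))
    return results

# In a notebook: `await run_limited(10, 3)`; in a script: `asyncio.run(run_limited(10, 3))`
```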
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using httpbin.org
|
1,874
|
<ASSISTANT_TASK:>
Python Code:
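The assertions below exercise four functions that are not defined in this cell. A minimal sketch of implementations consistent with the test cases (the bodies are inferred from the asserts, not the original solutions):

```python
def add_index(lst, offset=0):
    # Add each element's index (plus an optional offset) to the element
    return [x + i + offset for i, x in enumerate(lst)]

def remove_odds(lst):
    # Keep only the even elements
    return [x for x in lst if x % 2 == 0]

def is_palindrome(lst):
    # True if the list reads the same forwards and backwards
    return lst == lst[::-1]

def strip_n(lst, n):
    # Drop n elements from each end; error if fewer than 2*n elements
    if 2 * n > len(lst):
        raise ValueError("list too short to strip %d from each end" % n)
    return lst[n:len(lst) - n]
```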
i1 = [1, 2, 3]
a1 = [1, 3, 5]
assert set(a1) == set(add_index(i1))
i2 = [0, 0, 0, 9, 10, 11]
o2 = 12
a2 = [12, 13, 14, 24, 26, 28]
assert set(a2) == set(add_index(i2, o2))
i1 = [1, 2, 3]
a1 = [2]
assert set(a1) == set(remove_odds(i1))
i2 = [0, 0, 0, 9, 10, 11]
a2 = [0, 0, 0, 10]
assert set(a2) == set(remove_odds(i2))
i1 = [1, 2, 1]
assert is_palindrome(i1)
i2 = [1, 1, 2, 1]
assert not is_palindrome(i2)
i3 = [1, 2, 3, 4, 5, 4, 3, 2, 1]
assert is_palindrome(i3)
i4 = [1, 2, 3, 4, 5, 5, 4, 3, 2, 1]
assert is_palindrome(i4)
i5 = [1, 2, 3, 4, 5, 5, 4, 3, 2]
assert not is_palindrome(i5)
i1 = [1, 2, 3]
n1 = 1
a1 = [2]
assert set(a1) == set(strip_n(i1, n1))
i2 = [1, 2, 3, 4]
n2 = 2
a2 = []
assert set(a2) == set(strip_n(i2, n2))
i3 = [1, 2, 3, 4, 5]
n3 = 3
try:
strip_n(i3, n3)
except ValueError:
assert True
else:
assert False
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: B
Step2: C
Step3: D
|
1,875
|
<ASSISTANT_TASK:>
Python Code:
import itertools
import random
from collections import deque
from copy import deepcopy
import numpy
from nupic.bindings.math import SparseBinaryMatrix, GetNTAReal
def makeSparseBinaryMatrix(numRows, numCols):
Construct a SparseBinaryMatrix.
There is a C++ constructor that does this, but it's currently not available
to Python callers.
matrix = SparseBinaryMatrix(numCols)
matrix.resize(numRows, numCols)
return matrix
def rightVecSumAtNZ_sparse(sparseMatrix, sparseBinaryArray):
Like rightVecSumAtNZ, but it supports sparse binary arrays.
@param sparseBinaryArray (sequence)
A sorted list of indices.
Note: this Python implementation doesn't require the list to be sorted, but
an eventual C implementation would.
denseArray = numpy.zeros(sparseMatrix.nCols(), dtype=GetNTAReal())
denseArray[sparseBinaryArray] = 1
return sparseMatrix.rightVecSumAtNZ(denseArray)
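With a dense 0/1 connection matrix, the operation in `rightVecSumAtNZ_sparse` reduces to a single matrix-vector product counting, per row, how many connected synapses fall on active inputs (a numpy sketch, not the nupic `SparseBinaryMatrix` API):

```python
import numpy as np

connections = np.array([[1, 0, 1, 1],    # rows: cells, cols: presynaptic inputs
                        [0, 1, 0, 0]])
active_inputs = [0, 2]                   # sorted sparse indices of active inputs

dense = np.zeros(connections.shape[1])
dense[active_inputs] = 1
overlaps = connections @ dense           # per-cell count of active connected inputs
# overlaps == array([2., 0.])
```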
def setOuterToOne(sparseMatrix, rows, cols):
Equivalent to:
SparseMatrix.setOuter(rows, cols,
numpy.ones((len(rows),len(cols)))
But it works with the SparseBinaryMatrix. If this functionality is added to
the SparseBinaryMatrix, it will have the added benefit of not having to
construct a big array of ones.
for rowNumber in rows:
sparseRow = sorted(set(sparseMatrix.getRowSparse(rowNumber)).union(cols))
sparseMatrix.replaceSparseRow(rowNumber, sparseRow)
class SetMemory(object):
Uses proximal synapses, distal dendrites, and inhibition to implement "set
memory" with neurons. Set Memory can recognize a set via a series of
inputs. It associates an SDR with each set, growing proximal synapses from
each cell in the SDR to each proximal input. When the SetMemory receives an
ambiguous input, it activates a union of these SDRs. As it receives other
inputs, each SDR stays active only if it has both feedforward and lateral
support. Each SDR has lateral connections to itself, so an SDR has lateral
support if it was active in the previous time step. Over time, the union is
narrowed down to a single SDR.
Requiring feedforward and lateral support is functionally similar to computing
the intersection of the feedforward support and the previous active cells.
The advantages of this approach are:
1. Better noise robustness. If cell is randomly inactive, it's not excluded in
the next time step.
2. It doesn't require any new neural phenomena. It accomplishes all this
through distal dendrites and inhibition.
3. It combines well with other parallel layers. A cell can grow one distal
dendrite segment for each layer and connect each to an object SDR, and use
the number of active dendrite segments to drive inhibition.
This doesn't model:
- Synapse permanences. When it grows a synapse, it's immediately connected.
- Subsampling. When growing synapses to active cells, it simply grows
synapses to every one.
These aren't needed for this experiment.
def __init__(self,
layerID,
feedforwardID,
lateralIDs,
layerSizes,
sdrSize,
minThresholdProximal,
minThresholdDistal):
@param layerID
The layer whose activity this SetMemory should update.
@param feedforwardID
The layer that this layer might form feedforward connections to.
@param lateralIDs (iter)
The layers that this layer might form lateral connections to.
If this layer will form internal lateral connections, this list must include
this layer's layerID.
@param layerSizes (dict)
A dictionary from layerID to number of cells. It must contain a size for
layerID, feedforwardID, and each of the lateralIDs.
@param sdrSize (int)
The number of cells in an SDR.
@param minThresholdProximal (int)
The number of active feedforward synapses required for a cell to have
"feedforward support".
@param minThresholdDistal (int)
The number of active distal synapses required for a segment to be active.
self.layerID = layerID
self.feedforwardID = feedforwardID
self.sdrSize = sdrSize
self.minThresholdProximal = minThresholdProximal
self.minThresholdDistal = minThresholdDistal
# Matrix of connected synapses. Permanences aren't modelled.
self.proximalConnections = makeSparseBinaryMatrix(layerSizes[layerID],
layerSizes[feedforwardID])
# Synapses to lateral layers. Each matrix represents one segment per cell.
# A cell won't grow more than one segment to another layer. If the cell
# appears in multiple object SDRs, it will connect its segments to a union
# of object SDRs.
self.lateralConnections = dict(
(lateralID, makeSparseBinaryMatrix(layerSizes[layerID],
layerSizes[lateralID]))
for lateralID in lateralIDs)
self.numCells = layerSizes[layerID]
self.isReset = True
def learningCompute(self, activity):
Chooses active cells using the previous active cells and the reset signal.
Grows proximal synapses to the feedforward layer's current active cells, and
grows lateral synapses to the each lateral layer's previous active cells.
Reads:
- activity[0][feedforwardID]["activeCells"]
- activity[1][lateralID]["activeCells"] for each lateralID
Writes to:
- activity[0][layerID]["activeCells"]
- The feedforward connections matrix
- The lateral connections matrices
# Select active cells
if self.isReset:
activeCells = sorted(random.sample(xrange(self.numCells), self.sdrSize))
self.isReset = False
else:
activeCells = activity[1][self.layerID]["activeCells"]
# Lateral learning
if len(activity) > 1:
for lateralID, connections in self.lateralConnections.iteritems():
setOuterToOne(connections, activeCells,
activity[1][lateralID]["activeCells"])
# Proximal learning
setOuterToOne(self.proximalConnections, activeCells,
activity[0][self.feedforwardID]["activeCells"])
# Write the activity
activity[0][self.layerID]["activeCells"] = activeCells
def inferenceCompute(self, activity):
Chooses active cells using feedforward and lateral input.
Reads:
- activity[0][feedforwardID]["activeCells"]
- activity[1][lateralID]["activeCells"] for each lateralID
Writes to:
- activity[0][layerID]["activeCells"]
# Calculate feedforward support
overlaps = rightVecSumAtNZ_sparse(self.proximalConnections,
activity[0][self.feedforwardID]["activeCells"])
feedforwardSupportedCells = set(
numpy.where(overlaps >= self.minThresholdProximal)[0])
# Calculate lateral support
numActiveSegmentsByCell = numpy.zeros(self.numCells)
if self.isReset:
# Don't activate any segments
self.isReset = False
elif len(activity) >= 2:
for lateralID, connections in self.lateralConnections.iteritems():
overlaps = rightVecSumAtNZ_sparse(connections,
activity[1][lateralID]["activeCells"])
numActiveSegmentsByCell[overlaps >= self.minThresholdDistal] += 1
# Inference
activeCells = []
# First, activate cells that have feedforward support
orderedCandidates = sorted((cell for cell in feedforwardSupportedCells),
key=lambda x: numActiveSegmentsByCell[x],
reverse=True)
for _, cells in itertools.groupby(orderedCandidates,
lambda x: numActiveSegmentsByCell[x]):
activeCells.extend(cells)
if len(activeCells) >= self.sdrSize:
break
# If necessary, activate cells that were previously active and have lateral
# support
if len(activeCells) < self.sdrSize and len(activity) >= 2:
prevActiveCells = activity[1][self.layerID]["activeCells"]
orderedCandidates = sorted((cell for cell in prevActiveCells
if cell not in feedforwardSupportedCells
and numActiveSegmentsByCell[cell] > 0),
key=lambda x: numActiveSegmentsByCell[x],
reverse=True)
for _, cells in itertools.groupby(orderedCandidates,
lambda x: numActiveSegmentsByCell[x]):
activeCells.extend(cells)
if len(activeCells) >= self.sdrSize:
break
# Write the activity
activity[0][self.layerID]["activeCells"] = sorted(activeCells)
def reset(self):
Signal that we're now going to observe a different set.
With learning, this signals that we're going to observe a never-before-seen
set.
With inference, this signals to start inferring a new object, ignoring
recent inputs.
self.isReset = True
LAYER_4_SIZE = 2048 * 8
def createFeatureLocationPool(size=10):
duplicateFound = False
for _ in xrange(5):
candidateFeatureLocations = [frozenset(random.sample(xrange(LAYER_4_SIZE), 40))
for featureNumber in xrange(size)]
# Sanity check that they're pretty unique.
duplicateFound = False
for pattern1, pattern2 in itertools.combinations(candidateFeatureLocations, 2):
if len(pattern1 & pattern2) >= 5:
duplicateFound = True
break
if not duplicateFound:
break
if duplicateFound:
raise ValueError("Failed to generate unique feature-locations")
featureLocationPool = {}
for i, featureLocation in enumerate(candidateFeatureLocations):
if i < 26:
name = chr(ord('A') + i)
else:
name = "Feature-location %d" % i
featureLocationPool[name] = featureLocation
return featureLocationPool
def experiment(objects, numColumns, selectRandom=True):
#
# Initialize
#
layer2IDs = ["Column %d Layer 2" % i for i in xrange(numColumns)]
layer4IDs = ["Column %d Layer 4" % i for i in xrange(numColumns)]
layerSizes = dict((layerID, 4096) for layerID in layer2IDs)
layerSizes.update((layerID, LAYER_4_SIZE) for layerID in layer4IDs)
layer2s = dict((l2, SetMemory(layerID=l2,
feedforwardID=l4,
lateralIDs=layer2IDs,
layerSizes=layerSizes,
sdrSize=40,
minThresholdProximal=20,
minThresholdDistal=20))
for l2, l4 in zip(layer2IDs, layer4IDs))
#
# Learn
#
layer2ObjectSDRs = dict((layerID, {}) for layerID in layer2IDs)
activity = deque(maxlen=2)
step = dict((layerID, {})
for layerID in itertools.chain(layer2IDs, layer4IDs))
for objectName, objectFeatureLocations in objects.iteritems():
for featureLocationName in objectFeatureLocations:
l4ActiveCells = sorted(featureLocationPool[featureLocationName])
for _ in xrange(2):
activity.appendleft(deepcopy(step))
# Compute Layer 4
for layerID in layer4IDs:
activity[0][layerID]["activeCells"] = l4ActiveCells
activity[0][layerID]["featureLocationName"] = featureLocationName
# Compute Layer 2
for setMemory in layer2s.itervalues():
setMemory.learningCompute(activity)
for layerID, setMemory in layer2s.iteritems():
layer2ObjectSDRs[layerID][objectName] = activity[0][layerID]["activeCells"]
setMemory.reset()
#
# Infer
#
objectName = "Object 1"
objectFeatureLocations = objects[objectName]
# Start fresh for inference. No max length because we're also using it as a log.
activity = deque()
success = False
for attempt in xrange(60):
if selectRandom:
featureLocationNames = random.sample(objectFeatureLocations, numColumns)
else:
# Naively move the sensors to touch every point as soon as possible.
start = (attempt * numColumns) % len(objectFeatureLocations)
end = start + numColumns
featureLocationNames = list(objectFeatureLocations)[start:end]
overflow = end - len(objectFeatureLocations)
if overflow > 0:
featureLocationNames += list(objectFeatureLocations)[0:overflow]
# Give the feedforward input 3 times so that the lateral inputs have time to spread.
for _ in xrange(3):
activity.appendleft(deepcopy(step))
# Compute Layer 4
for layerID, name in zip(layer4IDs, featureLocationNames):
activity[0][layerID]["activeCells"] = sorted(featureLocationPool[name])
activity[0][layerID]["featureLocationName"] = name
# Compute Layer 2
for setMemory in layer2s.itervalues():
setMemory.inferenceCompute(activity)
if all(activity[0][layer2]["activeCells"] == layer2ObjectSDRs[layer2][objectName]
for layer2 in layer2IDs):
success = True
print "Converged after %d touches" % (attempt + 1)
break
if not success:
print "Failed to converge after %d touches" % (attempt + 1)
return (objectName, activity, layer2ObjectSDRs)
featureLocationPool = createFeatureLocationPool(size=8)
objects = {"Object 1": set(["A", "B", "C", "D", "E", "F", "G"]),
"Object 2": set(["A", "B", "C", "D", "E", "F", "H"]),
"Object 3": set(["A", "B", "C", "D", "E", "G", "H"]),
"Object 4": set(["A", "B", "C", "D", "F", "G", "H"]),
"Object 5": set(["A", "B", "C", "E", "F", "G", "H"]),
"Object 6": set(["A", "B", "D", "E", "F", "G", "H"]),
"Object 7": set(["A", "C", "D", "E", "F", "G", "H"]),
"Object 8": set(["B", "C", "D", "E", "F", "G", "H"])}
results = experiment(objects, numColumns=1)
results = experiment(objects, numColumns=1, selectRandom=False)
results = experiment(objects, numColumns=7)
for numColumns in xrange(1, 8):
print "With %d columns:" % numColumns
results = experiment(objects, numColumns)
print
for numColumns in xrange(1, 8):
print "With %d columns:" % numColumns
results = experiment(objects, numColumns, selectRandom=False)
print
(testObject,
activity,
layer2ObjectSDRs) = results
for t, step in enumerate(reversed(activity)):
print "Step %d" % t
for column in xrange(len(step) / 2):
layer2ID = "Column %d Layer 2" % column
layer4ID = "Column %d Layer 4" % column
featureLocationName = step[layer4ID]["featureLocationName"]
activeCells = set(step[layer2ID]["activeCells"])
layer2Contents = {}
for objectName, objectCells in layer2ObjectSDRs[layer2ID].iteritems():
containsRatio = len(activeCells & set(objectCells)) / float(len(objectCells))
if containsRatio >= 0.20:
layer2Contents[objectName] = containsRatio
print "Column %d: Input: %s, Active cells: %d %s" % (column,
featureLocationName,
len(activeCells),
layer2Contents)
print
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Functionality that could be implemented in SparseBinaryMatrix
Step10: This SetMemory docstring is worth reading
Step11: Experiment code
Step12: Initialize some feature-locations and objects
Step13: We're testing L2 in isolation, so these "A", "B", etc. patterns are L4 representations, i.e. "feature-locations".
Step14: Move sensors deterministically, trying to touch every point with some sensor as quickly as possible.
Step15: Test
Step16: Test
Step17: Move sensors deterministically, trying to touch every point with some sensor as quickly as possible.
Step18: Can I watch?
|
1,876
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pylab as plt
%matplotlib inline
import numpy as np
import os
import pandas as pd
import seaborn as sns
sns.set_style('white')
sns.set_context('notebook')
from scipy.stats import kurtosis
import sys
%load_ext autoreload
%autoreload 2
sys.path.append('../SCRIPTS/')
import kidsmotion_stats as kms
import kidsmotion_datamanagement as kmdm
import kidsmotion_plotting as kmp
behav_data_f = '../Phenotypic_V1_0b_preprocessed1.csv'
behav_df = kmdm.read_in_behavdata(behav_data_f)
fig, ax_list = kmp.histogram_motion(behav_df)
# Note that there is a warning here but don't worry about it :P
for var in ['func_mean_fd', 'func_perc_fd']:
print(var)
print(' kurtosis = {:2.1f}'.format(kurtosis(behav_df[var])))
print(' corr with age:')
kms.report_correlation(behav_df, 'AGE_AT_SCAN', var, covar_name=None, r_dp=2)
fig, ax_list = kmp.corr_motion_age(behav_df, fit_reg=False)
fig, ax_list = kmp.corr_motion_age(behav_df)
age_l = 6
age_u = 18
motion_measure='func_perc_fd'
n_perms = 100
motion_thresh = 50
corr_age_df = pd.DataFrame()
for n in [ 25, 50, 75, 100, 125, 150 ]:
filtered_df = kmdm.filter_data(behav_df, motion_thresh, age_l, age_u, motion_measure=motion_measure)
r_list = []
for i in range(n_perms):
sample_df = kmdm.select_random_sample(filtered_df, n=n)
r, p = kms.calculate_correlation(sample_df, 'AGE_AT_SCAN', motion_measure, covar_name=None)
r_list+=[r]
corr_age_df['N{:2.0f}'.format(n)] = r_list
fig, ax = kmp.compare_groups_boxplots(corr_age_df, title='Thr: {:1.0f}%'.format(motion_thresh))
age_l = 6
age_u = 18
motion_measure='func_perc_fd'
n = 100
n_perms = 100
corr_age_df = pd.DataFrame()
for motion_thresh in [ 5, 10, 25, 50 ]:
filtered_df = kmdm.filter_data(behav_df, motion_thresh, age_l, age_u, motion_measure=motion_measure)
r_list = []
for i in range(n_perms):
sample_df = kmdm.select_random_sample(filtered_df, n=n)
r, p = kms.calculate_correlation(sample_df, 'AGE_AT_SCAN', motion_measure, covar_name=None)
r_list+=[r]
corr_age_df['Thr{:1.0f}'.format(motion_thresh)] = r_list
fig, ax = kmp.compare_groups_boxplots(corr_age_df, title='N: {:1.0f}'.format(n))
motion_measure='func_perc_fd'
n = 100
n_perms = 100
motion_thresh = 25
corr_age_df = pd.DataFrame()
for age_l in [ 6, 8, 10, 12, 14 ]:
age_u = age_l + 4
filtered_df = kmdm.filter_data(behav_df, motion_thresh, age_l, age_u, motion_measure=motion_measure)
r_list = []
for i in range(n_perms):
sample_df = kmdm.select_random_sample(filtered_df, n=n)
r, p = kms.calculate_correlation(sample_df, 'AGE_AT_SCAN', motion_measure, covar_name=None)
r_list+=[r]
corr_age_df['{:1.0f} to {:1.0f}'.format(age_l, age_u)] = r_list
fig, ax = kmp.compare_groups_boxplots(corr_age_df, title='N: {:1.0f}; Thr: {:1.0f}%'.format(n, motion_thresh))
motion_measure='func_perc_fd'
n = 100
n_perms = 100
motion_thresh = 25
for motion_thresh in [ 5, 10, 25, 50 ]:
corr_age_df = pd.DataFrame()
for age_l in [ 6, 8, 10, 12, 14 ]:
age_u = age_l + 4
filtered_df = kmdm.filter_data(behav_df, motion_thresh, age_l, age_u, motion_measure=motion_measure)
r_list = []
for i in range(n_perms):
sample_df = kmdm.select_random_sample(filtered_df, n=n)
r, p = kms.calculate_correlation(sample_df, 'AGE_AT_SCAN', motion_measure, covar_name=None)
r_list+=[r]
corr_age_df['{:1.0f} to {:1.0f}'.format(age_l, age_u)] = r_list
fig, ax = kmp.compare_groups_boxplots(corr_age_df, title='N: {:1.0f}; Thr: {:1.0f}%'.format(n, motion_thresh))
motion_measure='func_perc_fd'
n = 30
n_perms = 100
motion_thresh = 25
for motion_thresh in [ 5, 10, 25, 50 ]:
corr_age_df = pd.DataFrame()
for age_l in [ 6, 8, 10, 12, 14 ]:
age_u = age_l + 4
filtered_df = kmdm.filter_data(behav_df, motion_thresh, age_l, age_u, motion_measure=motion_measure)
r_list = []
for i in range(n_perms):
sample_df = kmdm.select_random_sample(filtered_df, n=n)
r, p = kms.calculate_correlation(sample_df, 'AGE_AT_SCAN', motion_measure, covar_name=None)
r_list+=[r]
corr_age_df['{:1.0f} to {:1.0f}'.format(age_l, age_u)] = r_list
fig, ax = kmp.compare_groups_boxplots(corr_age_df, title='N: {:1.0f}; Thr: {:1.0f}%'.format(n, motion_thresh))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get the data
Step2: Motion measures
Step3: We can see from the plot above that we have a data set of people who do not move all that much and that these two measures correlate well for low motion scans but start to diverge for the scans that have higher motion.
Step4: Yes! It does, and you can see that this correlation is stronger for func_perc_fd. I don't think this is really particularly important and I suspect it is driven by the kurtosis of the distribution. The func_mean_fd distribution is more non-normal (less normal?) than the func_perc_fd and I wonder if this is causing the correlation to look messier. To be honest, I don't know and I don't really care. If this is what makes a big difference to our results I'll start to care more ;)
Step5: Well. That's uninspiring. Does that really count as a significant correlation? Gun to your head would you put that line there?
Step6: What I take from this plot is that there is a negative correlation between age and head motion (the older you are the less you move) and that the more participants we have in a sample the more consistent the measure (the narrower the box)
Step7: What I take from this plot is that the correlation with age is less strong when you are more stringent in your exclusion criteria. Which makes sense
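kmdm.filter_data isn't shown in this notebook; a plausible pandas sketch of the filtering described here (inclusive age range, motion measure below the exclusion threshold) would be:

```python
import pandas as pd

def filter_data(df, motion_thresh, age_l, age_u, motion_measure="func_perc_fd"):
    # Keep participants inside the (inclusive) age range whose motion
    # measure falls below the exclusion threshold.
    mask = ((df["AGE_AT_SCAN"] >= age_l) & (df["AGE_AT_SCAN"] <= age_u)
            & (df[motion_measure] < motion_thresh))
    return df[mask]
```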
Step8: Woah - that's interesting. In this sample we seem to only be able to detect a movement relationship for a 5 year age range (remember that the upper and lower limits are inclusive) when the participants are either 10-14 or 12-16 years old!
Step9: So, this to me is the crazy bit that I need to get my head around
|
1,877
|
<ASSISTANT_TASK:>
Python Code:
import gmql as gl
import matplotlib.pyplot as plt
genes = gl.load_from_path("../data/genes/")
promoters = genes.reg_project(new_field_dict={
'start':genes.start-2000,
'stop':genes.start + 2000})
gl.set_remote_address("http://gmql.eu/gmql-rest/")
gl.login()
gl.set_mode("remote")
hms = gl.load_from_remote("HG19_ENCODE_BROAD_AUG_2017",
owner="public")
hms_ac = hms[hms["experiment_target"] == "H3K9ac-human"]
mapping = promoters.map(
hms_ac,
refName='prom',
expName='hm',
new_reg_fields={
'avg_signal': gl.AVG('signal')})
mapping = mapping.materialize()
import seaborn as sns
heatmap=mapping.to_matrix(
columns_meta=['hm.biosample_term_name'],
index_regs=['gene_symbol'],
values_regs=['avg_signal'],
fill_value=0)
plt.figure(figsize=(10, 10))
sns.heatmap(heatmap,vmax = 20)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The code begins by loading a local dataset of gene annotations and extracting their promotorial regions (here defined as regions at $\left[gene_{start}-2000;gene_{start}+2000\right])$.
Step2: The genes and promoters variables are GMQLDataset; the former is loaded directly, the latter results from a projection operation. Region feature names can be accessed directly from variables to build expressions and predicates (e.g., gene.start + 2000).
Step3: In the following snippet we show how to load the Chip-Seq data of the ENCODE dataset from the remote GMQL repository and select only the experiments of interest.
Step4: Next, the PyGMQL map operation is used to compute the average of the signal of hms_ac intersecting each promoter; iteration over all samples is implicit. Finally, the materialize method triggers the execution of the query.
Step5: At this point, Python libraries for data manipulation, visualization or analysis can be applied to the GDataframe. The following portion of code provides an example of data manipulation of a query result. The to_matrix method transforms the GDataframe into a Pandas matrix, where each row corresponds to a gene and each column to a cell line; values are the average signal on the promoter of the given gene in the given cell line. Finally, the matrix is visualized as a heatmap.
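to_matrix is essentially a pandas pivot. A toy stand-in with made-up gene/cell-line triples shows the reshaping (the column names here are illustrative, not PyGMQL's):

```python
import pandas as pd

# Toy (gene, cell line, signal) triples standing in for the mapping result.
records = pd.DataFrame({
    "gene_symbol": ["TP53", "TP53", "MYC"],
    "biosample":   ["K562", "HepG2", "K562"],
    "avg_signal":  [3.2, 1.1, 7.5],
})
heatmap = records.pivot_table(index="gene_symbol", columns="biosample",
                              values="avg_signal", fill_value=0)
```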
|
1,878
|
<ASSISTANT_TASK:>
Python Code:
from pyspark.sql import SparkSession
spark = SparkSession.builder \
.appName("HELK Reader") \
.master("spark://helk-spark-master:7077") \
.enableHiveSupport() \
.getOrCreate()
es_reader = (spark.read
.format("org.elasticsearch.spark.sql")
.option("inferSchema", "true")
.option("es.read.field.as.array.include", "tags")
.option("es.nodes","helk-elasticsearch:9200")
.option("es.net.http.auth.user","elastic")
)
#PLEASE REMEMBER!!!!
#If you are using elastic TRIAL license, then you need the es.net.http.auth.pass config option set
#Example: .option("es.net.http.auth.pass","elasticpassword")
%%time
sysmon_df = es_reader.load("logs-endpoint-winevent-sysmon-*/")
sysmon_df.createOrReplaceTempView("sysmon_events")
## Run SQL Queries
sysmon_ps_execution = spark.sql(
'''
SELECT event_id,process_parent_name,process_name
FROM sysmon_events
WHERE event_id = 1
AND process_name = "powershell.exe"
AND NOT process_parent_name = "explorer.exe"
'''
)
sysmon_ps_execution.show(10)
sysmon_ps_module = spark.sql(
'''
SELECT event_id,process_name
FROM sysmon_events
WHERE event_id = 7
AND (
lower(file_description) = "system.management.automation"
OR lower(module_loaded) LIKE "%\\\\system.management.automation%"
)
'''
)
sysmon_ps_module.show(10)
sysmon_ps_pipe = spark.sql(
'''
SELECT event_id,process_name
FROM sysmon_events
WHERE event_id = 17
AND lower(pipe_name) LIKE "\\\\pshost%"
'''
)
sysmon_ps_pipe.show(10)
%%time
powershell_df = es_reader.load("logs-endpoint-winevent-powershell-*/")
powershell_df.createOrReplaceTempView("powershell_events")
ps_named_pipe = spark.sql(
'''
SELECT event_id
FROM powershell_events
WHERE event_id = 53504
'''
)
ps_named_pipe.show(10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a SparkSession instance
Step2: Read data from the HELK Elasticsearch via Spark SQL
Step3: Read Sysmon Events
Step4: Register Sysmon SQL temporary View
Step5: Read PowerShell Events
Step6: Register PowerShell SQL temporary View
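The predicate of the first Sysmon query, mirrored in plain Python over event dictionaries (field names taken from the query), behaves like this:

```python
def suspicious_powershell(events):
    # Sysmon event 1 (process creation): powershell.exe launched by
    # anything other than explorer.exe.
    return [e for e in events
            if e["event_id"] == 1
            and e["process_name"] == "powershell.exe"
            and e["process_parent_name"] != "explorer.exe"]
```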
|
1,879
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential()
# Add an Embedding layer expecting input vocab of size 1000, and
# output embedding dimension of size 64.
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# Add a LSTM layer with 128 internal units.
model.add(layers.LSTM(128))
# Add a Dense layer with 10 units.
model.add(layers.Dense(10))
model.summary()
model = keras.Sequential()
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# The output of GRU will be a 3D tensor of shape (batch_size, timesteps, 256)
model.add(layers.GRU(256, return_sequences=True))
# The output of SimpleRNN will be a 2D tensor of shape (batch_size, 128)
model.add(layers.SimpleRNN(128))
model.add(layers.Dense(10))
model.summary()
encoder_vocab = 1000
decoder_vocab = 2000
encoder_input = layers.Input(shape=(None,))
encoder_embedded = layers.Embedding(input_dim=encoder_vocab, output_dim=64)(
encoder_input
)
# Return states in addition to output
output, state_h, state_c = layers.LSTM(64, return_state=True, name="encoder")(
encoder_embedded
)
encoder_state = [state_h, state_c]
decoder_input = layers.Input(shape=(None,))
decoder_embedded = layers.Embedding(input_dim=decoder_vocab, output_dim=64)(
decoder_input
)
# Pass the 2 states to a new LSTM layer, as initial state
decoder_output = layers.LSTM(64, name="decoder")(
decoder_embedded, initial_state=encoder_state
)
output = layers.Dense(10)(decoder_output)
model = keras.Model([encoder_input, decoder_input], output)
model.summary()
paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)
lstm_layer = layers.LSTM(64, stateful=True)
output = lstm_layer(paragraph1)
output = lstm_layer(paragraph2)
output = lstm_layer(paragraph3)
# reset_states() will reset the cached state to the original initial_state.
# If no initial_state was provided, zero-states will be used by default.
lstm_layer.reset_states()
paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)
lstm_layer = layers.LSTM(64, stateful=True)
output = lstm_layer(paragraph1)
output = lstm_layer(paragraph2)
existing_state = lstm_layer.states
new_lstm_layer = layers.LSTM(64)
new_output = new_lstm_layer(paragraph3, initial_state=existing_state)
model = keras.Sequential()
model.add(
layers.Bidirectional(layers.LSTM(64, return_sequences=True), input_shape=(5, 10))
)
model.add(layers.Bidirectional(layers.LSTM(32)))
model.add(layers.Dense(10))
model.summary()
batch_size = 64
# Each MNIST image batch is a tensor of shape (batch_size, 28, 28).
# Each input sequence will be of size (28, 28) (height is treated like time).
input_dim = 28
units = 64
output_size = 10 # labels are from 0 to 9
# Build the RNN model
def build_model(allow_cudnn_kernel=True):
# CuDNN is only available at the layer level, and not at the cell level.
# This means `LSTM(units)` will use the CuDNN kernel,
# while RNN(LSTMCell(units)) will run on non-CuDNN kernel.
if allow_cudnn_kernel:
# The LSTM layer with default options uses CuDNN.
lstm_layer = keras.layers.LSTM(units, input_shape=(None, input_dim))
else:
# Wrapping a LSTMCell in a RNN layer will not use CuDNN.
lstm_layer = keras.layers.RNN(
keras.layers.LSTMCell(units), input_shape=(None, input_dim)
)
model = keras.models.Sequential(
[
lstm_layer,
keras.layers.BatchNormalization(),
keras.layers.Dense(output_size),
]
)
return model
mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
sample, sample_label = x_train[0], y_train[0]
model = build_model(allow_cudnn_kernel=True)
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer="sgd",
metrics=["accuracy"],
)
model.fit(
x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1
)
noncudnn_model = build_model(allow_cudnn_kernel=False)
noncudnn_model.set_weights(model.get_weights())
noncudnn_model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer="sgd",
metrics=["accuracy"],
)
noncudnn_model.fit(
x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1
)
import matplotlib.pyplot as plt
with tf.device("CPU:0"):
cpu_model = build_model(allow_cudnn_kernel=True)
cpu_model.set_weights(model.get_weights())
result = tf.argmax(cpu_model.predict_on_batch(tf.expand_dims(sample, 0)), axis=1)
print(
"Predicted result is: %s, target result is: %s" % (result.numpy(), sample_label)
)
plt.imshow(sample, cmap=plt.get_cmap("gray"))
class NestedCell(keras.layers.Layer):
def __init__(self, unit_1, unit_2, unit_3, **kwargs):
self.unit_1 = unit_1
self.unit_2 = unit_2
self.unit_3 = unit_3
self.state_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]
self.output_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]
super(NestedCell, self).__init__(**kwargs)
def build(self, input_shapes):
# expect input_shape to contain 2 items, [(batch, i1), (batch, i2, i3)]
i1 = input_shapes[0][1]
i2 = input_shapes[1][1]
i3 = input_shapes[1][2]
self.kernel_1 = self.add_weight(
shape=(i1, self.unit_1), initializer="uniform", name="kernel_1"
)
self.kernel_2_3 = self.add_weight(
shape=(i2, i3, self.unit_2, self.unit_3),
initializer="uniform",
name="kernel_2_3",
)
def call(self, inputs, states):
# inputs should be in [(batch, input_1), (batch, input_2, input_3)]
# state should be in shape [(batch, unit_1), (batch, unit_2, unit_3)]
input_1, input_2 = tf.nest.flatten(inputs)
s1, s2 = states
output_1 = tf.matmul(input_1, self.kernel_1)
output_2_3 = tf.einsum("bij,ijkl->bkl", input_2, self.kernel_2_3)
state_1 = s1 + output_1
state_2_3 = s2 + output_2_3
output = (output_1, output_2_3)
new_states = (state_1, state_2_3)
return output, new_states
def get_config(self):
return {"unit_1": self.unit_1, "unit_2": self.unit_2, "unit_3": self.unit_3}
unit_1 = 10
unit_2 = 20
unit_3 = 30
i1 = 32
i2 = 64
i3 = 32
batch_size = 64
num_batches = 10
timestep = 50
cell = NestedCell(unit_1, unit_2, unit_3)
rnn = keras.layers.RNN(cell)
input_1 = keras.Input((None, i1))
input_2 = keras.Input((None, i2, i3))
outputs = rnn((input_1, input_2))
model = keras.models.Model([input_1, input_2], outputs)
model.compile(optimizer="adam", loss="mse", metrics=["accuracy"])
input_1_data = np.random.random((batch_size * num_batches, timestep, i1))
input_2_data = np.random.random((batch_size * num_batches, timestep, i2, i3))
target_1_data = np.random.random((batch_size * num_batches, unit_1))
target_2_data = np.random.random((batch_size * num_batches, unit_2, unit_3))
input_data = [input_1_data, input_2_data]
target_data = [target_1_data, target_2_data]
model.fit(input_data, target_data, batch_size=batch_size)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Recurrent Neural Networks (RNN) with Keras
Step2: Built-in RNN layers
Step3: Built-in RNNs support a number of useful features.
Step4: In addition, an RNN layer can return its final internal state(s). The returned states can be used to resume the RNN execution later, or to initialize another RNN. This setting is commonly used in encoder-decoder sequence-to-sequence models, where the encoder's final state is used as the initial state of the decoder.
Step5: RNN layers and RNN cells
Step6: Reusing RNN states
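State reuse simply means carrying the final hidden state forward as the next initial state. A numpy sketch of a SimpleRNN cell (arbitrary weights, not Keras internals) makes the semantics concrete:

```python
import numpy as np

def rnn_step(x_t, h, Wx, Wh, b):
    # One SimpleRNN step: h_t = tanh(x_t @ Wx + h_{t-1} @ Wh + b)
    return np.tanh(x_t @ Wx + h @ Wh + b)

def run_sequence(xs, h0, Wx, Wh, b):
    h = h0
    for x_t in xs:
        h = rnn_step(x_t, h, Wx, Wh, b)
    return h  # a stateful layer caches this as the next initial state
```

Splitting a sequence and carrying the intermediate state across the split gives the same final state as processing the whole sequence at once.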
Step7: Bidirectional RNNs
Step8: Under the hood, Bidirectional copies the RNN layer passed in, and flips the go_backwards field of the newly copied layer so that it processes the inputs in reverse order.
Step9: Load the MNIST dataset.
Step10: Let's create a model instance and train it.
Step11: Now, let's compare to a model that does not use the CuDNN kernel.
Step12: When running on a machine with an NVIDIA GPU and CuDNN installed, the model built with CuDNN is much faster to train than the model that uses the regular TensorFlow kernel.
Step13: RNNs with list/dict inputs, or nested inputs
Step14: Build an RNN model with nested input/output
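NestedCell.call relies on tf.nest.flatten to unpack the nested input; a minimal pure-Python analogue restricted to lists and tuples:

```python
def flatten(structure):
    # Depth-first flatten of nested lists/tuples, like tf.nest.flatten
    # restricted to sequence types.
    if isinstance(structure, (list, tuple)):
        out = []
        for item in structure:
            out.extend(flatten(item))
        return out
    return [structure]
```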
Step15: Train the model with randomly generated data
|
1,880
|
<ASSISTANT_TASK:>
Python Code:
from functions import connect, touch, light, sound, ultrasonic, disconnect, next_notebook
connect()
touch() # To run repeatedly, use Ctrl + Enter
light() # To run repeatedly, use Ctrl + Enter
sound() # To run repeatedly, use Ctrl + Enter
ultrasonic() # To run repeatedly, use Ctrl + Enter
from functions import test_sensors
test_sensors()
disconnect()
next_notebook('touch')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Touch sensor
Step2: Light sensor
Step3: Sound sensor (microphone)
Step4: Ultrasonic sensor
Step5: <img src="img/interrupt.png" align="right">
Step6: Now it's time to write new programs with the sensors, but first the robot must be disconnected from this page.
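The sensor functions above print their readings directly; a hypothetical helper (not part of functions.py) that collects repeated readings instead might look like:

```python
import time

def poll(sensor_fn, times=3, interval=0.0):
    # Call a sensor-reading function repeatedly, collecting each reading
    # instead of printing it. sensor_fn is any zero-argument callable.
    readings = []
    for _ in range(times):
        readings.append(sensor_fn())
        time.sleep(interval)
    return readings
```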
|
1,881
|
<ASSISTANT_TASK:>
Python Code:
import sys
import os
import numpy as np
import scipy.io
import time
import theano
import theano.tensor as T
import theano.sparse as Tsp
import lasagne as L
import lasagne.layers as LL
import lasagne.objectives as LO
from lasagne.layers.normalization import batch_norm
sys.path.append('..')
from icnn import aniso_utils_lasagne, dataset, snapshotter
base_path = '/home/shubham/Desktop/IndependentStudy/EG16_tutorial/dataset/FAUST_registrations/data/diam=200/'
# train_txt, test_txt, descs_path, patches_path, geods_path, labels_path, ...
# desc_field='desc', patch_field='M', geod_field='geods', label_field='labels', epoch_size=100
ds = dataset.ClassificationDatasetPatchesMinimal(
'FAUST_registrations_train.txt', 'FAUST_registrations_test.txt',
os.path.join(base_path, 'descs', 'shot'),
os.path.join(base_path, 'patch_aniso', 'alpha=100_nangles=016_ntvals=005_tmin=6.000_tmax=24.000_thresh=99.900_norm=L1'),
None,
os.path.join(base_path, 'labels'),
epoch_size=50)
# inp = LL.InputLayer(shape=(None, 544))
# print(inp.input_var)
# patch_op = LL.InputLayer(input_var=Tsp.csc_fmatrix('patch_op'), shape=(None, None))
# print(patch_op.shape)
# print(patch_op.input_var)
# icnn = LL.DenseLayer(inp, 16)
# print(icnn.output_shape)
# print(icnn.output_shape)
# desc_net = theano.dot(patch_op, icnn)
nin = 544
nclasses = 6890
l2_weight = 1e-5
def get_model(inp, patch_op):
icnn = LL.DenseLayer(inp, 16)
icnn = batch_norm(aniso_utils_lasagne.ACNNLayer([icnn, patch_op], 16, nscale=5, nangl=16))
icnn = batch_norm(aniso_utils_lasagne.ACNNLayer([icnn, patch_op], 32, nscale=5, nangl=16))
icnn = batch_norm(aniso_utils_lasagne.ACNNLayer([icnn, patch_op], 64, nscale=5, nangl=16))
ffn = batch_norm(LL.DenseLayer(icnn, 512))
ffn = LL.DenseLayer(icnn, nclasses, nonlinearity=aniso_utils_lasagne.log_softmax)
return ffn
inp = LL.InputLayer(shape=(None, nin))
patch_op = LL.InputLayer(input_var=Tsp.csc_fmatrix('patch_op'), shape=(None, None))
ffn = get_model(inp, patch_op)
# L.layers.get_output -> theano variable representing network
output = LL.get_output(ffn)
pred = LL.get_output(ffn, deterministic=True) # in case we use dropout
# target theano variable indicatind the index a vertex should be mapped to wrt the latent space
target = T.ivector('idxs')
# to work with logit predictions, better behaved numerically
cla = aniso_utils_lasagne.categorical_crossentropy_logdomain(output, target, nclasses).mean()
acc = LO.categorical_accuracy(pred, target).mean()
# a bit of regularization is commonly used
regL2 = L.regularization.regularize_network_params(ffn, L.regularization.l2)
cost = cla + l2_weight * regL2
params = LL.get_all_params(ffn, trainable=True)
grads = T.grad(cost, params)
# computes the L2 norm of the gradient to better inspect training
grads_norm = T.nlinalg.norm(T.concatenate([g.flatten() for g in grads]), 2)
# Adam turned out to be a very good choice for correspondence
updates = L.updates.adam(grads, params, learning_rate=0.001)
funcs = dict()
funcs['train'] = theano.function([inp.input_var, patch_op.input_var, target],
[cost, cla, l2_weight * regL2, grads_norm, acc], updates=updates,
on_unused_input='warn')
funcs['acc_loss'] = theano.function([inp.input_var, patch_op.input_var, target],
[acc, cost], on_unused_input='warn')
funcs['predict'] = theano.function([inp.input_var, patch_op.input_var],
[pred], on_unused_input='warn')
n_epochs = 50
eval_freq = 1
start_time = time.time()
best_trn = 1e5
best_tst = 1e5
kvs = snapshotter.Snapshotter('demo_training.snap')
for it_count in xrange(n_epochs):
tic = time.time()
b_l, b_c, b_s, b_r, b_g, b_a = [], [], [], [], [], []
for x_ in ds.train_iter():
tmp = funcs['train'](*x_)
# do some book keeping (store stuff for training curves etc)
b_l.append(tmp[0])
b_c.append(tmp[1])
b_r.append(tmp[2])
b_g.append(tmp[3])
b_a.append(tmp[4])
epoch_cost = np.asarray([np.mean(b_l), np.mean(b_c), np.mean(b_r), np.mean(b_g), np.mean(b_a)])
print(('[Epoch %03i][trn] cost %9.6f (cla %6.4f, reg %6.4f), |grad| = %.06f, acc = %7.5f %% (%.2fsec)') %
(it_count, epoch_cost[0], epoch_cost[1], epoch_cost[2], epoch_cost[3], epoch_cost[4] * 100,
time.time() - tic))
if np.isnan(epoch_cost[0]):
print("NaN in the loss function...let's stop here")
break
if (it_count % eval_freq) == 0:
v_c, v_a = [], []
for x_ in ds.test_iter():
tmp = funcs['acc_loss'](*x_)
v_a.append(tmp[0])
v_c.append(tmp[1])
test_cost = [np.mean(v_c), np.mean(v_a)]
print((' [tst] cost %9.6f, acc = %7.5f %%') % (test_cost[0], test_cost[1] * 100))
if epoch_cost[0] < best_trn:
kvs.store('best_train_params', [it_count, LL.get_all_param_values(ffn)])
best_trn = epoch_cost[0]
if test_cost[0] < best_tst:
kvs.store('best_test_params', [it_count, LL.get_all_param_values(ffn)])
best_tst = test_cost[0]
print("...done training %f" % (time.time() - start_time))
rewrite = True
out_path = '/tmp/EG16_tutorial/dumps/'
print "Saving output to: %s" % out_path
if not os.path.isdir(out_path) or rewrite==True:
try:
os.makedirs(out_path)
except:
pass
a = []
for i,d in enumerate(ds.test_iter()):
fname = os.path.join(out_path, "%s" % ds.test_fnames[i])
print fname,
tmp = funcs['predict'](d[0], d[1])[0]
a.append(np.mean(np.argmax(tmp, axis=1).flatten() == d[2].flatten()))
scipy.io.savemat(fname, {'desc': tmp})
print ", Acc: %7.5f %%" % (a[-1] * 100.0)
print "\nAverage accuracy across all shapes: %7.5f %%" % (np.mean(a) * 100.0)
else:
print "Model predictions already produced."
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data loading
Step2: Network definition
Step3: Define the update rule, how to train
Step4: Compile
Step5: Training (a bit simplified)
Step6: Test phase
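log_softmax combined with categorical_crossentropy_logdomain amounts to a numerically stable negative log-likelihood; a numpy sketch under that assumption:

```python
import numpy as np

def log_softmax(x):
    # Numerically stable log-softmax along the last axis.
    z = x - x.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def nll_from_logits(logits, targets):
    # Mean negative log-likelihood of integer class targets under the logits.
    logp = log_softmax(logits)
    return -logp[np.arange(len(targets)), targets].mean()
```

With uniform logits over k classes the loss is exactly log(k), a useful sanity check at the start of training.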
|
1,882
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import math
import sklearn
import sklearn.datasets
from opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
from opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# GRADED FUNCTION: update_parameters_with_gd
def update_parameters_with_gd(parameters, grads, learning_rate):
"""
Update parameters using one step of gradient descent

Arguments:
parameters -- python dictionary containing your parameters to be updated:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients to update each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
learning_rate -- the learning rate, scalar.

Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Update rule for each parameter
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = None
parameters["b" + str(l+1)] = None
### END CODE HERE ###
return parameters
parameters, grads, learning_rate = update_parameters_with_gd_test_case()
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
# GRADED FUNCTION: random_mini_batches
def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
"""
Creates a list of random minibatches from (X, Y)

Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
mini_batch_size -- size of the mini-batches, integer

Returns:
mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
"""
np.random.seed(seed) # To make your "random" minibatches the same as ours
m = X.shape[1] # number of training examples
mini_batches = []
# Step 1: Shuffle (X, Y)
permutation = list(np.random.permutation(m))
shuffled_X = X[:, permutation]
shuffled_Y = Y[:, permutation].reshape((1,m))
# Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning
for k in range(0, num_complete_minibatches):
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = None
mini_batch_Y = None
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
# Handling the end case (last mini-batch < mini_batch_size)
if m % mini_batch_size != 0:
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = None
mini_batch_Y = None
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
return mini_batches
X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()
mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)
print ("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape))
print ("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape))
print ("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape))
print ("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape))
print ("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape))
print ("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape))
print ("mini batch sanity check: " + str(mini_batches[0][0][0][0:3]))
# GRADED FUNCTION: initialize_velocity
def initialize_velocity(parameters):
"""
Initializes the velocity as a python dictionary with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.

Arguments:
parameters -- python dictionary containing your parameters.
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl

Returns:
v -- python dictionary containing the current velocity.
v['dW' + str(l)] = velocity of dWl
v['db' + str(l)] = velocity of dbl
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
# Initialize velocity
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = None
v["db" + str(l+1)] = None
### END CODE HERE ###
return v
parameters = initialize_velocity_test_case()
v = initialize_velocity(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
# GRADED FUNCTION: update_parameters_with_momentum
def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
"""
Update parameters using Momentum

Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- python dictionary containing the current velocity:
v['dW' + str(l)] = ...
v['db' + str(l)] = ...
beta -- the momentum hyperparameter, scalar
learning_rate -- the learning rate, scalar

Returns:
parameters -- python dictionary containing your updated parameters
v -- python dictionary containing your updated velocities
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Momentum update for each parameter
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
# compute velocities
v["dW" + str(l+1)] = None
v["db" + str(l+1)] = None
# update parameters
parameters["W" + str(l+1)] = None
parameters["b" + str(l+1)] = None
### END CODE HERE ###
return parameters, v
parameters, grads, v = update_parameters_with_momentum_test_case()
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
# GRADED FUNCTION: initialize_adam
def initialize_adam(parameters) :
"""
Initializes v and s as two python dictionaries with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.

Arguments:
parameters -- python dictionary containing your parameters.
parameters["W" + str(l)] = Wl
parameters["b" + str(l)] = bl

Returns:
v -- python dictionary that will contain the exponentially weighted average of the gradient.
v["dW" + str(l)] = ...
v["db" + str(l)] = ...
s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
s["dW" + str(l)] = ...
s["db" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
s = {}
# Initialize v, s. Input: "parameters". Outputs: "v, s".
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
v["dW" + str(l+1)] = None
v["db" + str(l+1)] = None
s["dW" + str(l+1)] = None
s["db" + str(l+1)] = None
### END CODE HERE ###
return v, s
parameters = initialize_adam_test_case()
v, s = initialize_adam(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
# GRADED FUNCTION: update_parameters_with_adam
def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):
Update parameters using Adam
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
learning_rate -- the learning rate, scalar.
beta1 -- Exponential decay hyperparameter for the first moment estimates
beta2 -- Exponential decay hyperparameter for the second moment estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
Returns:
parameters -- python dictionary containing your updated parameters
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
L = len(parameters) // 2 # number of layers in the neural networks
v_corrected = {} # Initializing first moment estimate, python dictionary
s_corrected = {} # Initializing second moment estimate, python dictionary
# Perform Adam update on all parameters
for l in range(L):
# Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = None
v["db" + str(l+1)] = None
### END CODE HERE ###
# Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
### START CODE HERE ### (approx. 2 lines)
v_corrected["dW" + str(l+1)] = None
v_corrected["db" + str(l+1)] = None
### END CODE HERE ###
# Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
### START CODE HERE ### (approx. 2 lines)
s["dW" + str(l+1)] = None
s["db" + str(l+1)] = None
### END CODE HERE ###
# Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
### START CODE HERE ### (approx. 2 lines)
s_corrected["dW" + str(l+1)] = None
s_corrected["db" + str(l+1)] = None
### END CODE HERE ###
# Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = None
parameters["b" + str(l+1)] = None
### END CODE HERE ###
return parameters, v, s
parameters, grads, v, s = update_parameters_with_adam_test_case()
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
train_X, train_Y = load_dataset()
def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):
3-layer neural network model which can be run in different optimizer modes.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
layers_dims -- python list, containing the size of each layer
learning_rate -- the learning rate, scalar.
mini_batch_size -- the size of a mini batch
beta -- Momentum hyperparameter
beta1 -- Exponential decay hyperparameter for the past gradients estimates
beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
num_epochs -- number of epochs
print_cost -- True to print the cost every 1000 epochs
Returns:
parameters -- python dictionary containing your updated parameters
L = len(layers_dims) # number of layers in the neural networks
costs = [] # to keep track of the cost
t = 0 # initializing the counter required for Adam update
seed = 10 # For grading purposes, so that your "random" minibatches are the same as ours
# Initialize parameters
parameters = initialize_parameters(layers_dims)
# Initialize the optimizer
if optimizer == "gd":
pass # no initialization required for gradient descent
elif optimizer == "momentum":
v = initialize_velocity(parameters)
elif optimizer == "adam":
v, s = initialize_adam(parameters)
# Optimization loop
for i in range(num_epochs):
# Define the random minibatches. We increment the seed to reshuffle differently the dataset after each epoch
seed = seed + 1
minibatches = random_mini_batches(X, Y, mini_batch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# Forward propagation
a3, caches = forward_propagation(minibatch_X, parameters)
# Compute cost
cost = compute_cost(a3, minibatch_Y)
# Backward propagation
grads = backward_propagation(minibatch_X, minibatch_Y, caches)
# Update parameters
if optimizer == "gd":
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
elif optimizer == "momentum":
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
elif optimizer == "adam":
t = t + 1 # Adam counter
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
t, learning_rate, beta1, beta2, epsilon)
# Print the cost every 1000 epoch
if print_cost and i % 1000 == 0:
print ("Cost after epoch %i: %f" %(i, cost))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('epochs (per 100)')
plt.title("Learning rate = " + str(learning_rate))
plt.show()
return parameters
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Gradient Descent optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 1 - Gradient Descent
Step4: Expected Output
Step6: Expected Output
Step8: Expected Output
Step10: Expected Output
Step12: Expected Output
Step13: Expected Output
Step15: We have already implemented a 3-layer neural network. You will train it with
Step16: You will now run this 3 layer neural network with each of the 3 optimization methods.
Step17: 5.2 - Mini-batch gradient descent with momentum
Step18: 5.3 - Mini-batch with Adam mode
|
1,883
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.colors as colors
%matplotlib inline
# Get the data
### Subdivide the data into a feature table
local_path = '/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/'
data_path = local_path + '/revo_healthcare/data/processed/MTBLS315/'\
'uhplc_pos/xcms_camera_results.csv'
## Import the data and remove extraneous columns
df = pd.read_csv(data_path, index_col=0)
df.shape
df.head()
# Show me a distribution of retention time widths
rt_width = df['rtmax']-df['rtmin']
#sns.violinplot(rt_width, inner='box')
#sns.rugplot(rt_width)
rt_width.shape
sns.kdeplot(rt_width, kernel='gau')
plt.ylabel('Probability')
plt.xlabel('retention-width')
plt.title('Distribution of retention-time widths')
# Show me a distribution of intensity values
intensities = df['X1001_P']
normalized_intensities = df['X1001_P'].div(df['X1001_P'].max())
#sns.kdeplot(df['X1001_P'])
sns.kdeplot(intensities)
plt.xlabel("Normalized Intensity")
plt.ylabel('probability (not correct, but w/e)')
plt.title('Distribution of intensities')
# Show me a scatterplot of m/z rt dots
# distribution along mass-axis and rt axist
plt.scatter(df['mz'], df['rt'], s=normalized_intensities*100,)
plt.xlabel('mz')
plt.ylabel('rt')
plt.title('mz vs. rt')
plt.show()
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
def plot_mz_rt(df, path, rt_bounds):
# the random data
x = df['rt']
y = df['mz']
print np.max(x)
print np.max(y)
nullfmt = NullFormatter() # no labels
# definitions for the axes
left, width = 0.1, 0.65
bottom, height = 0.1, 0.65
bottom_h = left_h = left + width + 0.02
rect_scatter = [left, bottom, width, height]
rect_histx = [left, bottom_h, width, 0.2]
rect_histy = [left_h, bottom, 0.2, height]
# start with a rectangular Figure
#fig = plt.figure(1, figsize=(8, 8))
fig = plt.figure(1, figsize=(10,10))
axScatter = plt.axes(rect_scatter)
axHistx = plt.axes(rect_histx)
axHisty = plt.axes(rect_histy)
# no labels
axHistx.xaxis.set_major_formatter(nullfmt)
axHisty.yaxis.set_major_formatter(nullfmt)
# the scatter plot:
axScatter.scatter(x, y, s=1)
# now determine nice limits by hand:
binwidth = 0.25
#xymax = np.max([np.max(np.fabs(x)), np.max(np.fabs(y))])
#lim = (int(xymax/binwidth) + 1) * binwidth
x_min = np.min(x)-50
x_max = np.max(x)+50
axScatter.set_xlim(x_min, x_max )
y_min = np.min(y)-50
y_max = np.max(y)+50
axScatter.set_ylim(y_min, y_max)
# Add vertical red line between 750-1050 retention time
'''
plt.plot([0,1], [0,1], linestyle = '--', lw=2, color='r',
label='Luck', alpha=0.5)
'''
print 'ymin: ', y_min
# Add vertical/horizontal lines to scatter and histograms
axScatter.axvline(x=rt_bounds[0], lw=2, color='r', alpha=0.5)
axScatter.axvline(x=rt_bounds[1], lw=2, color='r', alpha=0.5)
#axHistx.axvline(x=rt_bounds[0], lw=2, color='r', alpha=0.5)
#axHistx.axvline(x=rt_bounds[1], lw=2, color='r', alpha=0.5)
#bins = np.arange(-lim, lim + binwidth, binwidth)
bins = 100
axHistx.hist(x, bins=bins)
axHisty.hist(y, bins=bins, orientation='horizontal')
axHistx.set_xlim(axScatter.get_xlim())
axHisty.set_ylim(axScatter.get_ylim())
axScatter.set_ylabel('m/z', fontsize=30)
axScatter.set_xlabel('Retention Time', fontsize=30)
axHistx.set_ylabel('# of Features', fontsize=20)
axHisty.set_xlabel('# of Features', fontsize=20)
plt.savefig(path,
format='pdf')
plt.show()
my_path = '/home/irockafe/Dropbox (MIT)/'\
'Alm_Lab/projects/revo_healthcare/'\
'presentations/eric_bose/poop.pdf'
plot_mz_rt(df, my_path, (750, 1050))
print 'Maximum retention time', np.max(df['rt'])
# divide feature table into slices of retention time
def get_rt_slice(df, rt_bounds):
'''
PURPOSE:
Given a tidy feature table with 'mz' and 'rt' column headers,
retain only the features whose rt is between rt_left
and rt_right
INPUT:
df - a tidy pandas dataframe with 'mz' and 'rt' column
headers
rt_left, rt_right: the boundaries of your rt_slice, in seconds
'''
out_df = df.loc[ (df['rt'] > rt_bounds[0]) &
(df['rt'] < rt_bounds[1])]
return out_df
def sliding_window_rt(df, rt_width, step=None):
    # step defaults to a quarter of the window width
    # (a default expression can't reference another parameter)
    if step is None:
        step = rt_width * 0.25
    rt_min = np.min(df['rt'])
    rt_max = np.max(df['rt'])
    # define the window boundaries
    left_bound = np.arange(rt_min, rt_max, step)
    right_bound = left_bound + rt_width
    rt_bounds = zip(left_bound, right_bound)
    for rt_slice in rt_bounds:
        rt_window = get_rt_slice(df, rt_slice)
        print 'shape', rt_window.shape
    # TODO Send to ml pipeline here? Or separate function?
print type(np.float64(3.5137499999999999))
a = get_rt_slice(df, (750, 1050))
print 'Original dataframe shape: ', df.shape
print '\n Shape:', a.shape, '\n\n\n\n'
print df
sliding_window_rt(df, 100)
# Convert selected slice to feature table, X, get labels y
### Subdivide the data into a feature table
local_path = '/home/irockafe/Dropbox (MIT)/Alm_Lab/'\
'projects'
data_path = local_path + '/revo_healthcare/data/processed/MTBLS72/positive_mode/'\
'mtbls_no_retcor_bw2.csv'
## Import the data and remove extraneous columns
df = pd.read_csv(data_path, index_col=0)
df.shape
df.head()
# Make a new index of mz:rt
mz = df.loc[:,"mz"].astype('str')
rt = df.loc[:,"rt"].astype('str')
idx = mz+':'+rt
df.index = idx
df.head()
a = get_rt_slice(df, (0,100))
print df.shape
print a.shape
my_path = '/home/irockafe/Dropbox (MIT)/'\
'Alm_Lab/projects/revo_healthcare/'\
'presentations/eric_bose/alzheimers_mz_rt_scatter.pdf'
print np.max(df['rt'])
plot_mz_rt(df, my_path, (550, 670))
# Make chromatography gradient images
fig = plt.figure(1, figsize=(8,8))
#axScatter = plt.axes()
plt.plot([0,1500], [0, 1], lw=4, label='Normal',
alpha=0.4)
#plt.xlim([0, 30])
#plt.ylim([0,1])
plt.ylabel("% Hydrophobic Solvent", fontsize=20)
plt.xlabel('Time', fontsize=20)
plt.plot([650, 750, 1050], [750.0/1500, 750.0/1500, 1050.0/1500],
lw=4, color='r', label='Faster')
plt.xlim([0, 1400])
plt.ylim([0,1])
plt.ylabel("% Hydrophobic Solvent", fontsize=20)
plt.xlabel('Retention Time', fontsize=20)
plt.legend(fontsize=12)
plt.axvline(x=750, lw=2, color='r', alpha=0.3)
plt.axvline(x=1060, lw=2, color='r',alpha=0.3)
plt.savefig('/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revo_healthcare/presentations/eric_bose/gradient_example.pdf')
plt.show()
# Make chromatography gradient images
fig = plt.figure()
#axScatter = plt.axes()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Start with MTBLS315, the malaria vs fever dataset. Could get ~0.85 AUC for whole dataset.
Step2: Almost everything is below 30sec rt-window
Step3: <h2> Show me the distribution of features from alzheimers dataset </h2>
|
1,884
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import csv
import io
import urllib.request
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import datetime
url = 'https://radwatch.berkeley.edu/sites/default/files/dosenet/etch.csv'
response = urllib.request.urlopen(url)
reader = csv.reader(io.TextIOWrapper(response))
timedata = []
cpm = []
line = 0
for row in reader:
if line != 0:
timedata.append(datetime.fromtimestamp(float(row[2],)))
cpm.append(float(row[6]))
line += 1
mean_cpm1 = sum(cpm)/len(cpm)
print('mean CPM from its definition is: %s' %mean_cpm1)
mean_cpm2 = np.mean(cpm)
print('mean CPM from built-in function is: %s' %mean_cpm2)
if len(cpm)%2 == 1:
    median_cpm1 = sorted(cpm)[len(cpm)//2]
else:
    median_cpm1 = (sorted(cpm)[len(cpm)//2 - 1] + sorted(cpm)[len(cpm)//2]) / 2.0
print('median CPM from its definition is: %s' %median_cpm1)
median_cpm2 = np.median(cpm)
print('median CPM from built-in function is: %s' %median_cpm2)
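As a standalone sanity check of the textbook even/odd median definition (toy lists, illustrative only):

```python
def median(values):
    s = sorted(values)
    n = len(s)
    if n % 2 == 1:
        return s[n // 2]                      # odd length: single middle element
    return (s[n // 2 - 1] + s[n // 2]) / 2.0  # even length: mean of the two middles

m_odd = median([3, 1, 2])
m_even = median([4, 1, 3, 2])
```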
from collections import Counter
counter = Counter(cpm)
_,val = counter.most_common(1)[0]
mode_cpm1 = [i for i, target in counter.items() if target == val]
print('mode(s) CPM from its definition is: %s' %mode_cpm1)
import statistics # note: this function fails if there are two statistical modes
mode_cpm2 = statistics.mode(cpm)
print('mode(s) CPM from built-in function is: %s' %mode_cpm2)
fig, ax = plt.subplots()
ax.plot(timedata,cpm,alpha=0.3)
# alpha modifier adds transparency, I add this so the CPM plot doesn't overpower the mean, median, and mode
ax.plot([timedata[0],timedata[-1]], [mean_cpm1,mean_cpm1], label='mean CPM')
ax.plot([timedata[0],timedata[-1]], [median_cpm1,median_cpm1], 'r:', label='median CPM')
ax.plot([timedata[0],timedata[-1]], [mode_cpm1,mode_cpm1], 'c--', label='mode CPM',alpha=0.5)
plt.legend(loc='best')
plt.ylim(ymax = 5, ymin = .5)
ax.xaxis.set_major_locator(mdates.MonthLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%b-%Y'))
ax.xaxis.set_minor_locator(mdates.DayLocator())
plt.xticks(rotation=15)
plt.title('DoseNet Data: Etcheverry Roof\nCPM vs. Time with mean, mode, and median')
plt.ylabel('CPM')
plt.xlabel('Date')
fig, ax = plt.subplots()
y,x, _ = plt.hist(cpm,bins=30, alpha=0.3, label='CPM distribution')
ax.plot([mean_cpm1,mean_cpm1], [0,y.max()],label='mean CPM')
ax.plot([median_cpm1, median_cpm1], [0,y.max()], 'r:', label='median CPM')
ax.plot([mode_cpm1,mode_cpm1], [0,y.max()], 'c--', label='mode CPM')
plt.legend(loc='best')
plt.title('DoseNet Data: Etcheverry Roof\nCPM Histogram with mean, mode, and median')
plt.ylabel('Frequency')
plt.xlabel('CPM')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Measures of central tendency identify values that lie at the center of a sample and help statisticians summarize their data. The most common measures of central tendency are the mean, median, and mode. Although you should be familiar with these values, they are defined as
|
1,885
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
from datetime import datetime
import tensorflow as tf
print "TensorFlow : {}".format(tf.__version__)
(train_data, train_labels), (eval_data, eval_labels) = tf.keras.datasets.mnist.load_data()
NUM_CLASSES = 10
print "Train data shape: {}".format(train_data.shape)
print "Eval data shape: {}".format(eval_data.shape)
def keras_model_fn(params):
inputs = tf.keras.layers.Input(shape=(28, 28), name='input_image')
input_layer = tf.keras.layers.Reshape(target_shape=(28, 28, 1), name='reshape')(inputs)
# convolutional layers
conv_inputs = input_layer
for i in range(params.num_conv_layers):
filters = params.init_filters * (2**i)
conv = tf.keras.layers.Conv2D(kernel_size=3, filters=filters, strides=1, padding='SAME', activation='relu')(conv_inputs)
max_pool = tf.keras.layers.MaxPool2D(pool_size=2, strides=2, padding='SAME')(conv)
batch_norm = tf.keras.layers.BatchNormalization()(max_pool)
conv_inputs = batch_norm
flatten = tf.keras.layers.Flatten(name='flatten')(conv_inputs)
# fully-connected layers
dense_inputs = flatten
for i in range(len(params.hidden_units)):
dense = tf.keras.layers.Dense(units=params.hidden_units[i], activation='relu')(dense_inputs)
dropout = tf.keras.layers.Dropout(params.dropout)(dense)
dense_inputs = dropout
# softmax classifier
logits = tf.keras.layers.Dense(units=NUM_CLASSES, name='logits')(dense_inputs)
softmax = tf.keras.layers.Activation('softmax', name='softmax')(logits)
# keras model
model = tf.keras.models.Model(inputs, softmax)
return model
def create_estimator(params, run_config):
keras_model = keras_model_fn(params)
print keras_model.summary()
    optimizer = tf.keras.optimizers.Adam(lr=params.learning_rate)
    keras_model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
mnist_classifier = tf.keras.estimator.model_to_estimator(
keras_model=keras_model,
config=run_config
)
return mnist_classifier
def run_experiment(params, run_config):
train_spec = tf.estimator.TrainSpec(
input_fn = tf.estimator.inputs.numpy_input_fn(
x={"input_image": train_data},
y=train_labels,
batch_size=params.batch_size,
num_epochs=None,
shuffle=True),
max_steps=params.max_traning_steps
)
eval_spec = tf.estimator.EvalSpec(
input_fn = tf.estimator.inputs.numpy_input_fn(
x={"input_image": eval_data},
y=eval_labels,
batch_size=params.batch_size,
num_epochs=1,
shuffle=False),
steps=None,
throttle_secs=params.eval_throttle_secs
)
tf.logging.set_verbosity(tf.logging.INFO)
time_start = datetime.utcnow()
print("Experiment started at {}".format(time_start.strftime("%H:%M:%S")))
print(".......................................")
estimator = create_estimator(params, run_config)
tf.estimator.train_and_evaluate(
estimator=estimator,
train_spec=train_spec,
eval_spec=eval_spec
)
time_end = datetime.utcnow()
print(".......................................")
print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S")))
print("")
time_elapsed = time_end - time_start
print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds()))
return estimator
MODELS_LOCATION = 'models/mnist'
MODEL_NAME = 'keras_classifier'
model_dir = os.path.join(MODELS_LOCATION, MODEL_NAME)
print model_dir
params = tf.contrib.training.HParams(
batch_size=100,
hidden_units=[512, 512],
num_conv_layers=3,
init_filters=64,
dropout=0.2,
max_traning_steps=50,
eval_throttle_secs=10,
learning_rate=1e-3,
debug=True
)
run_config = tf.estimator.RunConfig(
tf_random_seed=19830610,
save_checkpoints_steps=1000,
keep_checkpoint_max=3,
model_dir=model_dir
)
if tf.gfile.Exists(model_dir):
print("Removing previous artifacts...")
tf.gfile.DeleteRecursively(model_dir)
os.makedirs(model_dir)
estimator = run_experiment(params, run_config)
def make_serving_input_receiver_fn():
inputs = {'input_image': tf.placeholder(shape=[None,28,28], dtype=tf.float32, name='serving_input_image')}
return tf.estimator.export.build_raw_serving_input_receiver_fn(inputs)
export_dir = os.path.join(model_dir, 'export')
if tf.gfile.Exists(export_dir):
tf.gfile.DeleteRecursively(export_dir)
estimator.export_savedmodel(
export_dir_base=export_dir,
serving_input_receiver_fn=make_serving_input_receiver_fn()
)
%%bash
saved_models_base=models/mnist/keras_classifier/export/
saved_model_dir=${saved_models_base}$(ls ${saved_models_base} | tail -n 1)
echo ${saved_model_dir}
ls ${saved_model_dir}
saved_model_cli show --dir=${saved_model_dir} --all
def inference_test(saved_model_dir, signature="serving_default", input_name='input_image', batch=300, repeat=100):
tf.logging.set_verbosity(tf.logging.ERROR)
time_start = datetime.utcnow()
predictor = tf.contrib.predictor.from_saved_model(
export_dir = saved_model_dir,
signature_def_key=signature
)
time_end = datetime.utcnow()
time_elapsed = time_end - time_start
print ""
print("Model loading time: {} seconds".format(time_elapsed.total_seconds()))
print ""
time_start = datetime.utcnow()
output = None
for i in range(repeat):
predictions = predictor(
{
input_name: eval_data[:batch]
}
)
output=[np.argmax(prediction) for prediction in predictions['softmax']]
time_end = datetime.utcnow()
time_elapsed_sec = (time_end - time_start).total_seconds()
print "Inference elapsed time: {} seconds".format(time_elapsed_sec)
print ""
print "Prediction produced for {} instances batch, repeated {} times".format(len(output), repeat)
print "Average latency per batch: {} seconds".format(time_elapsed_sec/repeat)
print ""
saved_model_dir = os.path.join(
export_dir, [f for f in os.listdir(export_dir) if f.isdigit()][0])
print(saved_model_dir)
inference_test(saved_model_dir)
def describe_graph(graph_def, show_nodes=False):
print 'Input Feature Nodes: {}'.format([node.name for node in graph_def.node if node.op=='Placeholder'])
print ""
print 'Unused Nodes: {}'.format([node.name for node in graph_def.node if 'unused' in node.name])
print ""
print 'Output Nodes: {}'.format( [node.name for node in graph_def.node if 'softmax' in node.name])
print ""
print 'Quanitization Nodes: {}'.format( [node.name for node in graph_def.node if 'quant' in node.name])
print ""
print 'Constant Count: {}'.format( len([node for node in graph_def.node if node.op=='Const']))
print ""
print 'Variable Count: {}'.format( len([node for node in graph_def.node if 'Variable' in node.op]))
print ""
print 'Identity Count: {}'.format( len([node for node in graph_def.node if node.op=='Identity']))
print ""
print 'Total nodes: {}'.format( len(graph_def.node))
print ''
if show_nodes==True:
for node in graph_def.node:
print 'Op:{} - Name: {}'.format(node.op, node.name)
def get_graph_def_from_saved_model(saved_model_dir):
print saved_model_dir
print ""
from tensorflow.python.saved_model import tag_constants
with tf.Session() as session:
meta_graph_def = tf.saved_model.loader.load(
session,
tags=[tag_constants.SERVING],
export_dir=saved_model_dir
)
return meta_graph_def.graph_def
describe_graph(get_graph_def_from_saved_model(saved_model_dir))
def get_size(model_dir):
print model_dir
print ""
pb_size = os.path.getsize(os.path.join(model_dir,'saved_model.pb'))
variables_size = 0
if os.path.exists(os.path.join(model_dir,'variables/variables.data-00000-of-00001')):
variables_size = os.path.getsize(os.path.join(model_dir,'variables/variables.data-00000-of-00001'))
variables_size += os.path.getsize(os.path.join(model_dir,'variables/variables.index'))
print "Model size: {} KB".format(round(pb_size/(1024.0),3))
print "Variables size: {} KB".format(round( variables_size/(1024.0),3))
print "Total Size: {} KB".format(round((pb_size + variables_size)/(1024.0),3))
get_size(saved_model_dir)
def freeze_graph(saved_model_dir):
from tensorflow.python.tools import freeze_graph
from tensorflow.python.saved_model import tag_constants
output_graph_filename = os.path.join(saved_model_dir, "freezed_model.pb")
output_node_names = "softmax/Softmax"
initializer_nodes = ""
freeze_graph.freeze_graph(
input_saved_model_dir=saved_model_dir,
output_graph=output_graph_filename,
saved_model_tags = tag_constants.SERVING,
output_node_names=output_node_names,
initializer_nodes=initializer_nodes,
input_graph=None,
input_saver=False,
input_binary=False,
input_checkpoint=None,
restore_op_name=None,
filename_tensor_name=None,
clear_devices=False,
input_meta_graph=False,
)
print "SavedModel graph freezed!"
freeze_graph(saved_model_dir)
%%bash
saved_models_base=models/mnist/keras_classifier/export/
saved_model_dir=${saved_models_base}$(ls ${saved_models_base} | tail -n 1)
echo ${saved_model_dir}
ls ${saved_model_dir}
def get_graph_def_from_file(graph_filepath):
print graph_filepath
print ""
from tensorflow.python import ops
with ops.Graph().as_default():
with tf.gfile.GFile(graph_filepath, "rb") as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
return graph_def
freezed_filepath=os.path.join(saved_model_dir,'freezed_model.pb')
describe_graph(get_graph_def_from_file(freezed_filepath))
def optimize_graph(model_dir, graph_filename, transforms):
from tensorflow.tools.graph_transforms import TransformGraph
input_names = []
output_names = ['softmax/Softmax']
graph_def = get_graph_def_from_file(os.path.join(model_dir, graph_filename))
optimised_graph_def = TransformGraph(graph_def,
input_names,
output_names,
transforms
)
tf.train.write_graph(optimised_graph_def,
logdir=model_dir,
as_text=False,
name='optimised_model.pb')
print "Freezed graph optimised!"
transforms = [
'remove_nodes(op=Identity)',
'fold_constants(ignore_errors=true)',
'fold_batch_norms',
# 'fuse_resize_pad_and_conv',
# 'quantize_weights',
# 'quantize_nodes',
'merge_duplicate_nodes',
'strip_unused_nodes',
'sort_by_execution_order'
]
optimize_graph(saved_model_dir, 'freezed_model.pb', transforms)
%%bash
saved_models_base=models/mnist/keras_classifier/export/
saved_model_dir=${saved_models_base}$(ls ${saved_models_base} | tail -n 1)
echo ${saved_model_dir}
ls ${saved_model_dir}
optimised_filepath=os.path.join(saved_model_dir,'optimised_model.pb')
describe_graph(get_graph_def_from_file(optimised_filepath))
def convert_graph_def_to_saved_model(graph_filepath):
from tensorflow.python import ops
export_dir=os.path.join(saved_model_dir,'optimised')
if tf.gfile.Exists(export_dir):
tf.gfile.DeleteRecursively(export_dir)
graph_def = get_graph_def_from_file(graph_filepath)
with tf.Session(graph=tf.Graph()) as session:
tf.import_graph_def(graph_def, name="")
tf.saved_model.simple_save(session,
export_dir,
inputs={
node.name: session.graph.get_tensor_by_name("{}:0".format(node.name))
for node in graph_def.node if node.op=='Placeholder'},
outputs={
"softmax": session.graph.get_tensor_by_name("softmax/Softmax:0"),
}
)
print "Optimised graph converted to SavedModel!"
optimised_filepath=os.path.join(saved_model_dir,'optimised_model.pb')
convert_graph_def_to_saved_model(optimised_filepath)
optimised_saved_model_dir = os.path.join(saved_model_dir,'optimised')
get_size(optimised_saved_model_dir)
%%bash
saved_models_base=models/mnist/keras_classifier/export/
saved_model_dir=${saved_models_base}$(ls ${saved_models_base} | tail -n 1)/optimised
ls ${saved_model_dir}
saved_model_cli show --dir ${saved_model_dir} --all
optimized_saved_model_dir = os.path.join(saved_model_dir,'optimised')
print(optimized_saved_model_dir)
inference_test(saved_model_dir=optimized_saved_model_dir, signature='serving_default', input_name='serving_input_image')
PROJECT = 'ksalama-gcp-playground'
BUCKET = 'ksalama-gcs-cloudml'
REGION = 'europe-west1'
MODEL_NAME = 'mnist_classifier'
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['MODEL_NAME'] = MODEL_NAME
%%bash
gsutil -m rm -r gs://${BUCKET}/tf-model-optimisation
%%bash
saved_models_base=models/mnist/keras_classifier/export/
saved_model_dir=${saved_models_base}$(ls ${saved_models_base} | tail -n 1)
echo ${saved_model_dir}
gsutil -m cp -r ${saved_model_dir} gs://${BUCKET}/tf-model-optimisation/original
%%bash
saved_models_base=models/mnist/keras_classifier/export/
saved_model_dir=${saved_models_base}$(ls ${saved_models_base} | tail -n 1)/optimised
echo ${saved_model_dir}
gsutil -m cp -r ${saved_model_dir} gs://${BUCKET}/tf-model-optimisation
%%bash
echo ${MODEL_NAME}
gcloud ml-engine models create ${MODEL_NAME} --regions=${REGION}
%%bash
MODEL_VERSION='v_org'
MODEL_ORIGIN=gs://${BUCKET}/tf-model-optimisation/original
gcloud ml-engine versions create ${MODEL_VERSION}\
--model=${MODEL_NAME} \
--origin=${MODEL_ORIGIN} \
--runtime-version=1.10
%%bash
MODEL_VERSION='v_opt'
MODEL_ORIGIN=gs://${BUCKET}/tf-model-optimisation/optimised
gcloud ml-engine versions create ${MODEL_VERSION}\
--model=${MODEL_NAME} \
--origin=${MODEL_ORIGIN} \
--runtime-version=1.10
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
api = discovery.build(
'ml', 'v1',
credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json'
)
def predict(version, instances):
request_data = {'instances': instances}
model_url = 'projects/{}/models/{}/versions/{}'.format(PROJECT, MODEL_NAME, version)
response = api.projects().predict(body=request_data, name=model_url).execute()
class_ids = None
try:
class_ids = [item["class_ids"] for item in response["predictions"]]
except:
print response
return class_ids
def inference_cmle(version, batch=100, repeat=10):
instances = [
{'input_image_3': [float(i) for i in list(eval_data[img])] }
for img in range(batch)
]
#warmup request
predict(version, instances[0])
print 'Warm up request performed!'
print 'Timer started...'
print ''
time_start = datetime.utcnow()
output = None
for i in range(repeat):
output = predict(version, instances)
time_end = datetime.utcnow()
time_elapsed_sec = (time_end - time_start).total_seconds()
print "Inference elapsed time: {} seconds".format(time_elapsed_sec)
print ""
print "Prediction produced for {} instances batch, repeated {} times".format(len(output), repeat)
print "Average latency per batch: {} seconds".format(time_elapsed_sec/repeat)
print ""
print "Prediction output for the last instance: {}".format(output[0])
version='v_org'
inference_cmle(version)
version='v_opt'
inference_cmle(version)
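The averages printed by inference_cmle can mask tail latency. A minimal, service-agnostic sketch of summarising repeated batch timings with percentiles (the timing values below are made up for illustration, not measured against CMLE):

```python
def latency_summary(timings_sec):
    """Return (mean, p50, p95) for a list of per-batch timings in seconds."""
    ordered = sorted(timings_sec)
    n = len(ordered)

    def pct(p):
        # nearest-rank percentile: smallest value covering p percent of samples
        idx = min(n - 1, max(0, int(round(p / 100.0 * n)) - 1))
        return ordered[idx]

    return sum(ordered) / n, pct(50), pct(95)

# hypothetical per-batch timings (seconds) from 10 repeated predict() calls
timings = [0.42, 0.40, 0.45, 0.39, 0.41, 0.44, 0.90, 0.43, 0.40, 0.42]
mean, p50, p95 = latency_summary(timings)
```

`statistics.quantiles` (Python 3.8+) or `numpy.percentile` give interpolated variants of the same summary.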
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Train and Export a Keras Model
Step2: 1.2 Estimator
Step3: 1.2.2 Convert Keras model to Estimator
Step4: 1.3 Train and Evaluate
Step5: 1.3.2 Experiment Parameters
Step6: TensorFlow Graph
Step7: 1.4 Export the model
Step8: 2. Inspect the Exported SavedModel
Step9: Prediction with SavedModel
Step10: 3. Test Prediction with SavedModel
Step11: Describe GraphDef
Step12: 4. Describe the SavedModel Graph (before optimisation)
Step13: Get model size
Step14: 5. Freeze SavedModel
Step15: 6. Describe the freezed_model.pb Graph (after freezing)
Step16: 8. Optimise the freezed_model.pb
Step17: 8. Describe the Optimised Graph
Step18: 9. Convert Optimised graph (GraphDef) to SavedModel
Step19: Optimised SavedModel Size
Step20: 10. Prediction with the Optimised SavedModel
Step21: Cloud ML Engine Deployment and Prediction
Step22: 1. Upload the model artefacts to Google Cloud Storage bucket
Step23: 2. Deploy models to Cloud ML Engine
Step24: Version
Step25: Version
Step26: 3. Cloud ML Engine online predictions
|
1,886
|
<ASSISTANT_TASK:>
Python Code:
!pip install --user apache-beam[gcp]==2.16.0
!pip install --user tensorflow-transform==0.15.0
!pip download tensorflow-transform==0.15.0 --no-deps
%%bash
pip freeze | grep -e 'flow\|beam'
import tensorflow as tf
import tensorflow_transform as tft
import shutil
print(tf.__version__)
# change these to try this notebook out
BUCKET = 'bucket-name'
PROJECT = 'project-id'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
from google.cloud import bigquery
def create_query(phase, EVERY_N):
    """Creates a query with the proper splits.

    Args:
        phase: int, 1=train, 2=valid.
        EVERY_N: int, take an example EVERY_N rows.

    Returns:
        Query string with the proper splits.
    """
    base_query = """
    WITH daynames AS
    (SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek)
    SELECT
    (tolls_amount + fare_amount) AS fare_amount,
    daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek,
    EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
    pickup_longitude AS pickuplon,
    pickup_latitude AS pickuplat,
    dropoff_longitude AS dropofflon,
    dropoff_latitude AS dropofflat,
    passenger_count AS passengers,
    'notneeded' AS key
    FROM
    `nyc-tlc.yellow.trips`, daynames
    WHERE
    trip_distance > 0 AND fare_amount > 0
    """
    if EVERY_N is None:
        if phase < 2:
            # training
            query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(
                pickup_datetime AS STRING)), 4)) < 2""".format(base_query)
        else:
            query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(
                pickup_datetime AS STRING)), 4)) = {1}""".format(base_query, phase)
    else:
        query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(
            pickup_datetime AS STRING)), {1})) = {2}""".format(
                base_query, EVERY_N, phase)
    return query
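The `FARM_FINGERPRINT(...) % N` trick above makes the train/valid split repeatable: the same timestamp always hashes to the same bucket, so reruns see identical splits. A pure-Python sketch of the same idea (using MD5 rather than FarmHash, so the exact bucket assignments differ from BigQuery's):

```python
import hashlib

def split_bucket(key, n_buckets=4):
    # deterministic hash of the key -> bucket in [0, n_buckets)
    digest = hashlib.md5(str(key).encode("utf-8")).hexdigest()
    return int(digest, 16) % n_buckets

timestamps = ["2015-01-01 10:00:00", "2015-01-01 10:05:00", "2015-02-03 09:30:00"]
train = [t for t in timestamps if split_bucket(t) < 2]   # buckets 0-1 -> train
valid = [t for t in timestamps if split_bucket(t) == 2]  # bucket 2 -> valid
```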
query = create_query(2, 100000)
df_valid = bigquery.Client().query(query).to_dataframe()
display(df_valid.head())
df_valid.describe()
import datetime
import tensorflow as tf
import apache_beam as beam
import tensorflow_transform as tft
import tensorflow_metadata as tfmd
from tensorflow_transform.beam import impl as beam_impl
def is_valid(inputs):
    """Check to make sure the inputs are valid.

    Args:
        inputs: dict, dictionary of TableRow data from BigQuery.

    Returns:
        True if the inputs are valid and False if they are not.
    """
try:
pickup_longitude = inputs['pickuplon']
dropoff_longitude = inputs['dropofflon']
pickup_latitude = inputs['pickuplat']
dropoff_latitude = inputs['dropofflat']
hourofday = inputs['hourofday']
dayofweek = inputs['dayofweek']
passenger_count = inputs['passengers']
fare_amount = inputs['fare_amount']
return fare_amount >= 2.5 and pickup_longitude > -78 \
and pickup_longitude < -70 and dropoff_longitude > -78 \
and dropoff_longitude < -70 and pickup_latitude > 37 \
and pickup_latitude < 45 and dropoff_latitude > 37 \
and dropoff_latitude < 45 and passenger_count > 0
except:
return False
def preprocess_tft(inputs):
    """Preprocess the features and add engineered features with tf transform.

    Args:
        inputs: dict, dictionary of TableRow data from BigQuery.

    Returns:
        Dictionary of preprocessed data after scaling and feature engineering.
    """
import datetime
print(inputs)
result = {}
result['fare_amount'] = tf.identity(inputs['fare_amount'])
# Build a vocabulary
# TODO: convert day of week from string->int with tft.string_to_int
result['hourofday'] = tf.identity(inputs['hourofday']) # pass through
# TODO: scale pickup/dropoff lat/lon between 0 and 1 with tft.scale_to_0_1
result['passengers'] = tf.cast(inputs['passengers'], tf.float32) # a cast
# Arbitrary TF func
result['key'] = tf.as_string(tf.ones_like(inputs['passengers']))
# Engineered features
latdiff = inputs['pickuplat'] - inputs['dropofflat']
londiff = inputs['pickuplon'] - inputs['dropofflon']
# TODO: Scale our engineered features latdiff and londiff between 0 and 1
dist = tf.sqrt(latdiff * latdiff + londiff * londiff)
result['euclidean'] = tft.scale_to_0_1(dist)
return result
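`tft.scale_to_0_1` needs the column-wide min and max, which is exactly what the Analyze pass computes before the Transform pass applies it. A numpy sketch of the transform itself (assuming the whole column fits in memory, which tf.transform deliberately does not assume):

```python
import numpy as np

def scale_to_0_1(x):
    # min-max scaling: min(x) -> 0.0, max(x) -> 1.0
    x = np.asarray(x, dtype=np.float64)
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo)

dist = np.array([0.0, 2.0, 5.0, 10.0])
scaled = scale_to_0_1(dist)
```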
def preprocess(in_test_mode):
    """Sets up preprocess pipeline.

    Args:
        in_test_mode: bool, False to launch DataFlow job, True to run locally.
    """
import os
import os.path
import tempfile
from apache_beam.io import tfrecordio
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.beam import tft_beam_io
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
job_name = 'preprocess-taxi-features' + '-'
job_name += datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
import shutil
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc_tft'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EVERY_N = 100000
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/taxifare/preproc_tft/'.format(BUCKET)
import subprocess
subprocess.call('gsutil rm -r {}'.format(OUTPUT_DIR).split())
EVERY_N = 10000
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'num_workers': 1,
'max_num_workers': 1,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'direct_num_workers': 1,
'extra_packages': ['tensorflow-transform-0.15.0.tar.gz']
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
# Set up raw data metadata
raw_data_schema = {
colname: dataset_schema.ColumnSchema(
tf.string, [], dataset_schema.FixedColumnRepresentation())
for colname in 'dayofweek,key'.split(',')
}
raw_data_schema.update({
colname: dataset_schema.ColumnSchema(
tf.float32, [], dataset_schema.FixedColumnRepresentation())
for colname in
'fare_amount,pickuplon,pickuplat,dropofflon,dropofflat'.split(',')
})
raw_data_schema.update({
colname: dataset_schema.ColumnSchema(
tf.int64, [], dataset_schema.FixedColumnRepresentation())
for colname in 'hourofday,passengers'.split(',')
})
raw_data_metadata = dataset_metadata.DatasetMetadata(
dataset_schema.Schema(raw_data_schema))
# Run Beam
with beam.Pipeline(RUNNER, options=opts) as p:
with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, 'tmp')):
# Save the raw data metadata
(raw_data_metadata |
'WriteInputMetadata' >> tft_beam_io.WriteMetadata(
os.path.join(
OUTPUT_DIR, 'metadata/rawdata_metadata'), pipeline=p))
# TODO: Analyze and transform our training data
# using beam_impl.AnalyzeAndTransformDataset()
raw_dataset = (raw_data, raw_data_metadata)
# Analyze and transform training data
transformed_dataset, transform_fn = (
raw_dataset | beam_impl.AnalyzeAndTransformDataset(
preprocess_tft))
transformed_data, transformed_metadata = transformed_dataset
# Save transformed train data to disk in efficient tfrecord format
transformed_data | 'WriteTrainData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'train'), file_name_suffix='.gz',
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
# TODO: Read eval data from BigQuery using beam.io.BigQuerySource
# and filter rows using our is_valid function
raw_test_dataset = (raw_test_data, raw_data_metadata)
# Transform eval data
transformed_test_dataset = (
(raw_test_dataset, transform_fn) | beam_impl.TransformDataset()
)
transformed_test_data, _ = transformed_test_dataset
# Save transformed train data to disk in efficient tfrecord format
(transformed_test_data |
'WriteTestData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'eval'), file_name_suffix='.gz',
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema)))
# Save transformation function to disk for use at serving time
(transform_fn |
'WriteTransformFn' >> transform_fn_io.WriteTransformFn(
os.path.join(OUTPUT_DIR, 'metadata')))
# Change to True to run locally
preprocess(in_test_mode=False)
%%bash
# ls preproc_tft
gsutil ls gs://${BUCKET}/taxifare/preproc_tft/
%%bash
rm -r ./taxi_trained
export PYTHONPATH=${PYTHONPATH}:$PWD
python3 -m tft_trainer.task \
--train_data_path="gs://${BUCKET}/taxifare/preproc_tft/train*" \
--eval_data_path="gs://${BUCKET}/taxifare/preproc_tft/eval*" \
--output_dir=./taxi_trained \
!ls $PWD/taxi_trained/export/exporter
%%writefile /tmp/test.json
{"dayofweek":0, "hourofday":17, "pickuplon": -73.885262, "pickuplat": 40.773008, "dropofflon": -73.987232, "dropofflat": 40.732403, "passengers": 2.0}
%%bash
sudo find "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/ml_engine" -name '*.pyc' -delete
%%bash
model_dir=$(ls $PWD/taxi_trained/export/exporter/)
gcloud ai-platform local predict \
--model-dir=./taxi_trained/export/exporter/${model_dir} \
--json-instances=/tmp/test.json
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NOTE
Step2: <b>Restart the kernel</b> (click on the reload button above).
Step8: Input source
Step12: Create ML dataset using tf.transform and Dataflow
Step13: This will take 10-15 minutes. You cannot go on in this lab until your DataFlow job has successfully completed.
Step14: Train off preprocessed data
Step15: Now let's create fake data in JSON format and use it to serve a prediction with gcloud ai-platform local predict
|
1,887
|
<ASSISTANT_TASK:>
Python Code:
import mne
from mne.preprocessing import maxwell_filter
data_path = mne.datasets.sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
ctc_fname = data_path + '/SSS/ct_sparse_mgh.fif'
fine_cal_fname = data_path + '/SSS/sss_cal_mgh.dat'
raw = mne.io.read_raw_fif(raw_fname)
raw.info['bads'] = ['MEG 2443', 'EEG 053', 'MEG 1032', 'MEG 2313'] # set bads
# Here we don't use tSSS (set st_duration) because MGH data is very clean
raw_sss = maxwell_filter(raw, cross_talk=ctc_fname, calibration=fine_cal_fname)
tmin, tmax = -0.2, 0.5
event_id = {'Auditory/Left': 1}
events = mne.find_events(raw, 'STI 014')
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
include=[], exclude='bads')
for r, kind in zip((raw, raw_sss), ('Raw data', 'Maxwell filtered data')):
epochs = mne.Epochs(r, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(eog=150e-6))
evoked = epochs.average()
evoked.plot(window_title=kind, ylim=dict(grad=(-200, 250),
mag=(-600, 700)), time_unit='s')
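The `reject=dict(eog=150e-6)` argument drops any epoch whose EOG channel swings more than 150 µV peak-to-peak. A numpy sketch of that criterion (the idea only, not MNE's actual implementation):

```python
import numpy as np

def reject_by_ptp(epochs, threshold):
    """epochs: (n_epochs, n_times) array for a single channel.

    Keep epochs whose peak-to-peak amplitude stays within threshold."""
    ptp = epochs.max(axis=1) - epochs.min(axis=1)
    return epochs[ptp <= threshold], ptp

data = np.array([[0.0, 50e-6, -40e-6],     # ptp =  90 uV -> kept
                 [0.0, 120e-6, -80e-6]])   # ptp = 200 uV -> dropped
kept, ptp = reject_by_ptp(data, 150e-6)
```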
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Preprocess with Maxwell filtering
Step3: Select events to extract epochs from, pick M/EEG channels, and plot evoked
|
1,888
|
<ASSISTANT_TASK:>
Python Code:
! gunzip -d ../data/result.vot_01.gz
#! head -n 200 ../data/result.vot_01.gz
import pandas as pd
import numpy as np
import seaborn as sns
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
import matplotlib.pyplot as plt
from astropy.io import votable
votable.is_votable('../data/result.vot_01.gz')
out = votable.parse_single_table('../data/result.vot_01.gz')
out2 = out.to_table()
out3 = out2.to_pandas()
out2[0:5]
plt.plot(out3.ra, out3.dec, '.', alpha=0.1)
out3.columns
out3.shape
vec = out3.parallax == out3.parallax
vec.sum(), len(vec)
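The `out3.parallax == out3.parallax` mask works because IEEE-754 NaN is never equal to itself, so the comparison is False exactly at the null entries (`out3.parallax.notna()` is the more readable pandas spelling). A pure-Python demonstration of the rule:

```python
import math

values = [1.5, math.nan, -0.3, math.nan]
mask = [v == v for v in values]  # NaN != NaN, so null entries give False
n_valid = sum(mask)              # count of non-NaN values
```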
sns.distplot(out3.parallax[vec])
sns.distplot(out3.phot_g_mean_mag, kde=False)#, bins=np.arange(0,2E7, 10), kde=False)
plt.yscale('log')
out3.head()
! du -hs ../data/result.vot_02.gz
tab2_raw = votable.parse_single_table('../data/result.vot_02.gz')
tab2_apy = tab2_raw.to_table()
tab2_apy
tab2 = tab2_apy.to_pandas()
tab2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Apparently not in gzip format, despite the file extension
Step2: Attempt 2
|
1,889
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function
from statsmodels.compat import urlopen
import numpy as np
np.set_printoptions(precision=4, suppress=True)
import statsmodels.api as sm
import pandas as pd
pd.set_option("display.width", 100)
import matplotlib.pyplot as plt
from statsmodels.formula.api import ols
from statsmodels.graphics.api import interaction_plot, abline_plot
from statsmodels.stats.anova import anova_lm
try:
salary_table = pd.read_csv('salary.table')
except: # recent pandas can read URL without urlopen
url = 'http://stats191.stanford.edu/data/salary.table'
fh = urlopen(url)
salary_table = pd.read_table(fh)
salary_table.to_csv('salary.table')
E = salary_table.E
M = salary_table.M
X = salary_table.X
S = salary_table.S
plt.figure(figsize=(6,6))
symbols = ['D', '^']
colors = ['r', 'g', 'blue']
factor_groups = salary_table.groupby(['E','M'])
for values, group in factor_groups:
i,j = values
plt.scatter(group['X'], group['S'], marker=symbols[j], color=colors[i-1],
s=144)
plt.xlabel('Experience');
plt.ylabel('Salary');
formula = 'S ~ C(E) + C(M) + X'
lm = ols(formula, salary_table).fit()
print(lm.summary())
lm.model.exog[:5]
lm.model.data.orig_exog[:5]
lm.model.data.frame[:5]
infl = lm.get_influence()
print(infl.summary_table())
df_infl = infl.summary_frame()
df_infl[:5]
resid = lm.resid
plt.figure(figsize=(6,6));
for values, group in factor_groups:
i,j = values
group_num = i*2 + j - 1 # for plotting purposes
x = [group_num] * len(group)
plt.scatter(x, resid[group.index], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
plt.xlabel('Group');
plt.ylabel('Residuals');
interX_lm = ols("S ~ C(E) * X + C(M)", salary_table).fit()
print(interX_lm.summary())
from statsmodels.stats.api import anova_lm
table1 = anova_lm(lm, interX_lm)
print(table1)
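Under the hood, `anova_lm` compares the two nested fits with an F-test on their residual sums of squares. A small sketch of the statistic it reports (the RSS and degree-of-freedom numbers below are hypothetical, not taken from the salary fit):

```python
def f_statistic(rss_restricted, df_restricted, rss_full, df_full):
    # F = ((RSS_r - RSS_f) / (df_r - df_f)) / (RSS_f / df_f)
    numerator = (rss_restricted - rss_full) / (df_restricted - df_full)
    denominator = rss_full / df_full
    return numerator / denominator

F = f_statistic(rss_restricted=120.0, df_restricted=40,
                rss_full=100.0, df_full=38)
```

`scipy.stats.f.sf(F, dfn, dfd)` would give the corresponding p-value.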
interM_lm = ols("S ~ X + C(E)*C(M)", data=salary_table).fit()
print(interM_lm.summary())
table2 = anova_lm(lm, interM_lm)
print(table2)
interM_lm.model.data.orig_exog[:5]
interM_lm.model.exog
interM_lm.model.exog_names
infl = interM_lm.get_influence()
resid = infl.resid_studentized_internal
plt.figure(figsize=(6,6))
for values, group in factor_groups:
i,j = values
idx = group.index
plt.scatter(X[idx], resid[idx], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
plt.xlabel('X');
plt.ylabel('standardized resids');
drop_idx = abs(resid).argmax()
print(drop_idx) # zero-based index
idx = salary_table.index.drop(drop_idx)
lm32 = ols('S ~ C(E) + X + C(M)', data=salary_table, subset=idx).fit()
print(lm32.summary())
print('\n')
interX_lm32 = ols('S ~ C(E) * X + C(M)', data=salary_table, subset=idx).fit()
print(interX_lm32.summary())
print('\n')
table3 = anova_lm(lm32, interX_lm32)
print(table3)
print('\n')
interM_lm32 = ols('S ~ X + C(E) * C(M)', data=salary_table, subset=idx).fit()
table4 = anova_lm(lm32, interM_lm32)
print(table4)
print('\n')
resid = interM_lm32.get_influence().summary_frame()['standard_resid']
plt.figure(figsize=(6,6))
for values, group in factor_groups:
i,j = values
idx = group.index
plt.scatter(X[idx], resid[idx], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
plt.xlabel('X[~[32]]');
plt.ylabel('standardized resids');
lm_final = ols('S ~ X + C(E)*C(M)', data = salary_table.drop([drop_idx])).fit()
mf = lm_final.model.data.orig_exog
lstyle = ['-','--']
plt.figure(figsize=(6,6))
for values, group in factor_groups:
i,j = values
idx = group.index
plt.scatter(X[idx], S[idx], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
# drop NA because there is no idx 32 in the final model
plt.plot(mf.X[idx].dropna(), lm_final.fittedvalues[idx].dropna(),
ls=lstyle[j], color=colors[i-1])
plt.xlabel('Experience');
plt.ylabel('Salary');
U = S - X * interX_lm32.params['X']
plt.figure(figsize=(6,6))
interaction_plot(E, M, U, colors=['red','blue'], markers=['^','D'],
markersize=10, ax=plt.gca())
try:
jobtest_table = pd.read_table('jobtest.table')
except: # don't have data already
url = 'http://stats191.stanford.edu/data/jobtest.table'
jobtest_table = pd.read_table(url)
factor_group = jobtest_table.groupby(['ETHN'])
fig, ax = plt.subplots(figsize=(6,6))
colors = ['purple', 'green']
markers = ['o', 'v']
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
ax.set_xlabel('TEST');
ax.set_ylabel('JPERF');
min_lm = ols('JPERF ~ TEST', data=jobtest_table).fit()
print(min_lm.summary())
fig, ax = plt.subplots(figsize=(6,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
ax.set_xlabel('TEST')
ax.set_ylabel('JPERF')
fig = abline_plot(model_results = min_lm, ax=ax)
min_lm2 = ols('JPERF ~ TEST + TEST:ETHN',
data=jobtest_table).fit()
print(min_lm2.summary())
fig, ax = plt.subplots(figsize=(6,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
fig = abline_plot(intercept = min_lm2.params['Intercept'],
slope = min_lm2.params['TEST'], ax=ax, color='purple');
fig = abline_plot(intercept = min_lm2.params['Intercept'],
slope = min_lm2.params['TEST'] + min_lm2.params['TEST:ETHN'],
ax=ax, color='green');
min_lm3 = ols('JPERF ~ TEST + ETHN', data = jobtest_table).fit()
print(min_lm3.summary())
fig, ax = plt.subplots(figsize=(6,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
fig = abline_plot(intercept = min_lm3.params['Intercept'],
slope = min_lm3.params['TEST'], ax=ax, color='purple');
fig = abline_plot(intercept = min_lm3.params['Intercept'] + min_lm3.params['ETHN'],
slope = min_lm3.params['TEST'], ax=ax, color='green');
min_lm4 = ols('JPERF ~ TEST * ETHN', data = jobtest_table).fit()
print(min_lm4.summary())
fig, ax = plt.subplots(figsize=(8,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
fig = abline_plot(intercept = min_lm4.params['Intercept'],
slope = min_lm4.params['TEST'], ax=ax, color='purple');
fig = abline_plot(intercept = min_lm4.params['Intercept'] + min_lm4.params['ETHN'],
slope = min_lm4.params['TEST'] + min_lm4.params['TEST:ETHN'],
ax=ax, color='green');
# is there any effect of ETHN on slope or intercept?
table5 = anova_lm(min_lm, min_lm4)
print(table5)
# is there any effect of ETHN on intercept
table6 = anova_lm(min_lm, min_lm3)
print(table6)
# is there any effect of ETHN on slope
table7 = anova_lm(min_lm, min_lm2)
print(table7)
# is it just the slope or both?
table8 = anova_lm(min_lm2, min_lm4)
print(table8)
try:
rehab_table = pd.read_csv('rehab.table')
except:
url = 'http://stats191.stanford.edu/data/rehab.csv'
rehab_table = pd.read_table(url, delimiter=",")
rehab_table.to_csv('rehab.table')
fig, ax = plt.subplots(figsize=(8,6))
fig = rehab_table.boxplot('Time', 'Fitness', ax=ax, grid=False)
rehab_lm = ols('Time ~ C(Fitness)', data=rehab_table).fit()
table9 = anova_lm(rehab_lm)
print(table9)
print(rehab_lm.model.data.orig_exog)
print(rehab_lm.summary())
try:
kidney_table = pd.read_table('./kidney.table')
except:
url = 'http://stats191.stanford.edu/data/kidney.table'
kidney_table = pd.read_table(url, delimiter=" *")
kidney_table.groupby(['Weight', 'Duration']).size()
kt = kidney_table
plt.figure(figsize=(8,6))
fig = interaction_plot(kt['Weight'], kt['Duration'], np.log(kt['Days']+1),
colors=['red', 'blue'], markers=['D','^'], ms=10, ax=plt.gca())
kidney_lm = ols('np.log(Days+1) ~ C(Duration) * C(Weight)', data=kt).fit()
table10 = anova_lm(kidney_lm)
print(anova_lm(ols('np.log(Days+1) ~ C(Duration) + C(Weight)',
data=kt).fit(), kidney_lm))
print(anova_lm(ols('np.log(Days+1) ~ C(Duration)', data=kt).fit(),
ols('np.log(Days+1) ~ C(Duration) + C(Weight, Sum)',
data=kt).fit()))
print(anova_lm(ols('np.log(Days+1) ~ C(Weight)', data=kt).fit(),
ols('np.log(Days+1) ~ C(Duration) + C(Weight, Sum)',
data=kt).fit()))
sum_lm = ols('np.log(Days+1) ~ C(Duration, Sum) * C(Weight, Sum)',
data=kt).fit()
print(anova_lm(sum_lm))
print(anova_lm(sum_lm, typ=2))
print(anova_lm(sum_lm, typ=3))
nosum_lm = ols('np.log(Days+1) ~ C(Duration, Treatment) * C(Weight, Treatment)',
data=kt).fit()
print(anova_lm(nosum_lm))
print(anova_lm(nosum_lm, typ=2))
print(anova_lm(nosum_lm, typ=3))
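Treatment and Sum contrasts build different design-matrix columns for the same factor, which is why the Type I/II/III tables above can disagree. A numpy sketch of the two codings for a 3-level factor (column ordering as patsy uses by default — treatment drops the first level, sum drops the last — treated here as an assumption):

```python
import numpy as np

levels = np.array([0, 1, 2, 1, 0])  # a factor with 3 levels

def treatment_coding(levels, n_levels):
    # one 0/1 dummy per non-reference level; level 0 is the reference
    return np.stack([(levels == k).astype(float)
                     for k in range(1, n_levels)], axis=1)

def sum_coding(levels, n_levels):
    # dummies for all but the last level; the last level is -1 everywhere
    X = np.stack([(levels == k).astype(float)
                  for k in range(n_levels - 1)], axis=1)
    X[levels == n_levels - 1] = -1.0
    return X

T = treatment_coding(levels, 3)
S = sum_coding(levels, 3)
```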
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Take a look at the data
Step2: Fit a linear model
Step3: Have a look at the created design matrix
Step4: Or since we initially passed in a DataFrame, we have a DataFrame available in
Step5: We keep a reference to the original untouched data in
Step6: Influence statistics
Step7: or get a dataframe
Step8: Now plot the residuals within the groups separately
Step9: Now we will test some interactions using anova or f_test
Step10: Do an ANOVA check
Step11: The design matrix as a DataFrame
Step12: The design matrix as an ndarray
Step13: Looks like one observation is an outlier.
Step14: Replot the residuals
Step15: Plot the fitted values
Step16: From our first look at the data, the difference between Master's and PhD in the management group is different than in the non-management group. This is an interaction between the two qualitative variables management,M and education,E. We can visualize this by first removing the effect of experience, then plotting the means within each of the 6 groups using interaction.plot.
Step17: Minority Employment Data
Step18: One-way ANOVA
Step19: Two-way ANOVA
Step20: Explore the dataset
Step21: Balanced panel
Step22: You have things available in the calling namespace available in the formula evaluation namespace
Step23: Sum of squares
|
1,890
|
<ASSISTANT_TASK:>
Python Code:
issubclass(bool, int)
isinstance(False, int)
the_list = list()
nada_dict = dict(the_list) # converting
empty_set = set() # can't use {} as that means empty dict
empty_tuple = tuple()
print(the_list, nada_dict, empty_set, empty_tuple)
import decimal # needs to be imported
# lets create more "empty stuff" by calling types with no arguments
empty_string = str()
logical = bool()
float_zero = float()
int_zero = int()
d = decimal.Decimal() # Decimal is a class name inside the decimal module
# a triple-quoted string may go on for many lines...
# and a format string -- note the f prefix -- enables
# substitution of surrounding objects into curly brace
# placeholders -- new as of 3.6 (or use the format method)
print(f"""
empty_string: {empty_string}
logical: {logical}
float_zero: {float_zero}
int_zero: {int_zero}
d: {d}""")
print(len(str()), len(" "))
str() == " " # empty string equals space? (no way)
def get_types():
output = []
for name in dir(__builtins__):
the_type = eval(name) # turn string into original object
if type(the_type) == type:
if not issubclass(the_type, BaseException):
output.append(name)
return output
types = get_types()
types
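get_types() leans on eval() to turn each name from dir(__builtins__) back into an object; `getattr` on the builtins module does the same lookup without eval. A sketch of that alternative (assuming CPython's standard set of builtins):

```python
import builtins

def get_types_no_eval():
    out = []
    for name in dir(builtins):
        obj = getattr(builtins, name)  # look the name up directly
        if isinstance(obj, type) and not issubclass(obj, BaseException):
            out.append(name)
    return out

names = get_types_no_eval()
```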
def cyclic(the_dict):
accepts a permutation as a dict, returns cyclic notation,
a compact view of a permutation
could say (annotated version):
def cyclic(the_dict : dict) -> tuple:
without annoying, or troubling, the Python interpreter (try it!).
output = [] # we'll make this a tuple before returning
while the_dict:
start = tuple(the_dict.keys())[0]
the_cycle = [start]
the_next = the_dict.pop(start) # take it out of the_dict
while the_next != start: # keep going until we cycle
the_cycle.append(the_next) # following the bread crumb trail...
the_next = the_dict.pop(the_next) # look up what we're paired with
output.append(tuple(the_cycle)) # we've got a cycle, append to output
return tuple(output) # giving back a tuple object, job complete
zoo = ["bear", "tiger", "lion"]
zoo.append("scorpion")
print(zoo[0], zoo[3]) # print first and last
zoo.insert(2, "monkey") # after item 2
zoo
ages = {"monkey":3, "bear":2.5, "otter":1.5} # remember me?
ages
bear_age = ages.pop("bear") # take a key, return a value
bear_age
ages # the bear item is gone
del ages['otter'] # delete this element
ages
pairings = {"monkey":"bear", "tiger":"scorpion", "scorpion":"monkey", "bear":"tiger"}
print("Keys:", pairings.keys())
print("Values:", pairings.values())
result = cyclic(pairings)
result
from string import ascii_lowercase # a string
from random import shuffle
the_letters = list(ascii_lowercase) + [" "] # lowercase plus space, as a list
shuffled = the_letters.copy() # copy me
shuffle(shuffled) # works "in place" i.e. None returned
permutation = dict(zip(the_letters, shuffled)) # make pairs, each letter w/ random other
print("ASCII", ascii_lowercase)
print("Coding Key:", permutation)
# feel free to keep re-running this, a different permutation every time!
seq1 = list(range(21))
seq2 = seq1.copy()
shuffle(seq2)
the_zip = tuple(zip(seq1, seq2)) # actually a tuple already
the_zip
perm_ints = dict(the_zip) # the_zip is by now a tuple of tuples, which dict will digest
perm_ints
cyclic(permutation)
cyclic(perm_ints)
cipher = dict(zip(seq1, seq2)) # will a dict eat a zip?
cipher
def encrypt(plain : str, secret_key : dict) -> str:
    """turn plaintext into cyphertext using the secret key"""
output = "" # empty string
for c in plain:
output = output + secret_key.get(c, c)
return output
c = encrypt("able was i ere i saw elba", permutation) # something Napoleon might have said
c
permutation
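Note the `secret_key.get(c, c)` fallback inside encrypt( ): any character that is not a key passes through unchanged. That keeps spaces intact, but it also means an int-keyed cipher (like `cipher` above) leaves letter text completely unscrambled. A self-contained demonstration (encrypt restated here so the snippet runs on its own):

```python
def encrypt(plain, secret_key):
    output = ""
    for c in plain:
        output = output + secret_key.get(c, c)  # unknown chars pass through
    return output

int_cipher = {0: 7, 1: 3}                 # keys are ints, not characters
unscrambled = encrypt("abc", int_cipher)  # no key matches -> text unchanged
scrambled = encrypt("ab", {"a": "x", "b": "y"})
```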
def make_perm(incoming : set) -> dict:
seq1 = list(incoming)
seq2 = seq1.copy()
shuffle(seq2)
return dict(zip(seq1, seq2))
make_perm(set(range(10)))
make_perm(the_letters) # defined above as lowercase ascii letters plus space
def cyclic(the_perm : dict) -> tuple:
    """cyclic notation, a compact view of a permutation"""
output = [] # we'll make this a tuple before returning
the_dict = the_perm.copy() # protect the original from exhaustion
while the_dict:
start = tuple(the_dict.keys())[0]
the_cycle = [start]
the_next = the_dict.pop(start) # take it out of the_dict
while the_next != start: # keep going until we cycle
the_cycle.append(the_next) # following the bread crumb trail...
the_next = the_dict.pop(the_next) # look up what we're paired with
output.append(tuple(the_cycle)) # we've got a cycle, append to output
return tuple(output) # giving back a tuple object, job complete
original = make_perm(the_letters) # the_letters is a string
cyclic_view = cyclic(original)
cyclic_view
original
cipher = make_perm(the_letters)
c = encrypt("hello alice this is bob lets hope eve does not decrypt this", cipher)
def decrypt(cyphertext : str, secret_key : dict) -> str:
    """Turn cyphertext into plaintext using ~self"""
reverse_P = {v: k for k, v in secret_key.items()} # invert me!
output = ""
for c in cyphertext:
output = output + reverse_P.get(c, c)
return output
c
p = decrypt(c, cipher)
p
def inv_cyclic(cycles : tuple) -> dict:
    """rebuild a permutation dict from its cyclic notation"""
    output = {}  # empty dict
    for cycle in cycles:
        for i, element in enumerate(cycle):
            # each element maps to the next one, wrapping around to the start
            output[element] = cycle[(i + 1) % len(cycle)]
    return output
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Calling types
Step3: print( ) is an example of a built-in Python function. It sends strings to the console, converting objects to strings first, as a part of its job. We fed it not-string type objects, but it didn't complain.
Step4: The empty string doesn't show up, because it's empty. Spaces wouldn't show up either, when printed. However an empty string and single space are different.
Step5: If we check for equality (==), we find the answer is False.
Step6: What have we been looking at so far?
Step8: A First Function
Step9: Don't let the above code intimidate you.
Step10: We may also insert into a list, at any point
Step11: To pop, in the case of a dict, is to remove an item by key, while returning the value. Lets look at that
Step12: Or just use the keyword del, for delete, if you don't have a need for the object in future
Step13: Now lets get back to our function, wherein these data structures, each with a bag of tricks (methods) play a starring role.
Step14: Notice the keys and values are the same elements. Think of the values as simply the keys but in a different order. We could say "monkey maps to bear" and "tiger maps to scorpion".
Step15: The resulting tuple is read left to right and says "monkey maps to bear" and "bear to tiger" and "tiger to scorpion". The last element wraps around to the first i.e. "scorpion maps to monkey". That all corresponds to what we said in the original dict.
Step16: ASCII is the precursor to Unicode, a vast lookup table containing a huge number of important symbols, such as world alphabets and ideograms, special symbols (chess pieces, playing cards), emoji.
Step17: Although the above data structure is a tuple of tuples, it's not in cyclic notation. Rather, it's a step on the way to becoming a dict type object.
Step18: Our function will work equally well on any permutation dict, as we're not depending on the type of our keys and/or values for this recipe to work.
Step19: You may be wondering, considering the above pipeline, which went from zip type, to tuple, then to dict, whether a dict might eat a zip type directly. Lets try that...
Step21: So yes, that works great.
Step22: Excuse me? That doesn't look scrambled at all? The reason is subtle
Step23: Let's do two things to improve the situation
Step25: That's all working great, and now the second change, a new cyclic
Step27: Life is good. As a final touch, we'd like to decipher, or decrypt, what was encrypted, using the very same secret_key. All decrypt( ) needs to do is reverse the secret key, giving what's called the inverse permutation, and apply it just as encrypt( ) would.
Step28: Final thing
|
1,891
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
tf.__version__
%%sql -d standard
SELECT
timestamp,
borough,
latitude,
longitude
FROM
`bigquery-public-data.new_york.nypd_mv_collisions`
ORDER BY
timestamp DESC
LIMIT
15
%%sql --module nyc_collisions
SELECT
IF(borough = 'MANHATTAN', 1, 0) AS is_mt,
latitude,
longitude
FROM
`bigquery-public-data.new_york.nypd_mv_collisions`
WHERE
LENGTH(borough) > 0
AND latitude IS NOT NULL AND latitude != 0.0
AND longitude IS NOT NULL AND longitude != 0.0
AND borough != 'BRONX'
ORDER BY
RAND()
LIMIT
10000
import datalab.bigquery as bq
nyc_cols = bq.Query(nyc_collisions).to_dataframe(dialect='standard').as_matrix()
print(nyc_cols)
print("\nLoaded " + str(len(nyc_cols)) + " rows.")
import numpy as np
is_mt = nyc_cols[:,0].astype(np.int32) # read the 0th column (is_mt) as int32
latlng = nyc_cols[:,1:3].astype(np.float32) # read the 1st and 2nd column (latitude and longitude) as float32
print("Is Manhattan: " + str(is_mt))
print("\nLat/Lng: \n\n" + str(latlng))
# create an numpy array with numbers from 0 to 14
A = np.arange(15)
print(A)
# reshape the array A into an array with shape in 3 rows and 5 columns,
# set it to variable A, and print it.
# *** ADD YOUR CODE HERE ***
print(A)
# expected result:
# [[ 0 1 2 3 4]
# [ 5 6 7 8 9]
# [10 11 12 13 14]]
# print() the shape, data type name, size (total number of elements) of the array A
# *** ADD YOUR CODE HERE ***
# expected result:
# (3, 5)
# int64
# 15
# multiply the array A by the number 2 and print the result
# *** ADD YOUR CODE HERE ***
# expected result:
# [[ 0 2 4 6 8]
# [10 12 14 16 18]
# [20 22 24 26 28]]
# create a new array that has the same shape as the array A filled with zeros, and print it
# *** ADD YOUR CODE HERE ***
# expected result:
# [[ 0. 0. 0. 0. 0.]
# [ 0. 0. 0. 0. 0.]
# [ 0. 0. 0. 0. 0.]]
# create a new array that has the elements in the right-most column of the array A
# *** ADD YOUR CODE HERE ***
# expected result:
# [ 4 9 14]
# Collect elements in array B with an index "I % 2 == 0" and print it
B = np.arange(10)
I = np.arange(10)
# *** ADD YOUR CODE HERE ***
# expected result:
# [0 2 4 6 8]
from sklearn.preprocessing import StandardScaler
latlng_std = StandardScaler().fit_transform(latlng)
print(latlng_std)
# *** ADD YOUR CODE HERE ***
import matplotlib.pyplot as plt
lat = latlng_std[:,0]
lng = latlng_std[:,1]
plt.scatter(lng[is_mt == 1], lat[is_mt == 1], c='b') # plot points in Manhattan in blue
plt.scatter(lng[is_mt == 0], lat[is_mt == 0], c='y') # plot points outside Manhattan in yellow
plt.show()
# 8,000 pairs for training
latlng_train = latlng_std[0:8000]
is_mt_train = is_mt[0:8000]
# 2,000 pairs for test
latlng_test = latlng_std[8000:10000]
is_mt_test = is_mt[8000:10000]
print("Split finished.")
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR) # supress warning messages
# define two feature columns consisting of real values
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=2)]
# create a neural network
dnnc = tf.contrib.learn.DNNClassifier(
feature_columns=feature_columns,
hidden_units=[],
n_classes=2)
dnnc
# plot a predicted map of Manhattan
def plot_predicted_map():
is_mt_pred = dnnc.predict(latlng_std, as_iterable=False) # an array of prediction results
plt.scatter(lng[is_mt_pred == 1], lat[is_mt_pred == 1], c='b')
plt.scatter(lng[is_mt_pred == 0], lat[is_mt_pred == 0], c='y')
plt.show()
# print the accuracy of the neural network
def print_accuracy():
accuracy = dnnc.evaluate(x=latlng_test, y=is_mt_test)["accuracy"]
print('Accuracy: {:.2%}'.format(accuracy))
# train the model for just 1 step and print the accuracy
dnnc.fit(x=latlng_train, y=is_mt_train, steps=1)
plot_predicted_map()
print_accuracy()
steps = 100
for i in range (1, 6):
dnnc.fit(x=latlng_train, y=is_mt_train, steps=steps)
plot_predicted_map()
print('Steps: ' + str(i * steps))
print_accuracy()
print('\nTraining Finished.')
dnnc = tf.contrib.learn.DNNClassifier(
feature_columns=feature_columns,
hidden_units=[20, 20, 20, 20],
n_classes=2)
dnnc
steps = 30
for i in range (1, 6):
dnnc.fit(x=latlng_train, y=is_mt_train, steps=steps)
plot_predicted_map()
    print('Steps: ' + str(i * steps))
print_accuracy()
print('\nTraining Finished.')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This codelab requires TensorFlow 1.0 or above. If you see older versions such as 0.11.0rc0, please follow the instruction below to update your local Datalab.
Step2: Preprocess the training data on BigQuery
Step3: Import the BigQuery SQL result as NumPy array
Step4: Let's take a look at what's inside the result. Run the cell below and check the variable is_mt has an array of 1s and 0s representing each geolocation is in Manhattan or not, and the variable latlng has an array of pairs of latitude and longitude.
Step5: Lab
Step6: Now, add the necessary new code in the following cells, and run them, to get the result described in the comments with NumPy. You should refer to the NumPy Quickstart to learn how to get the results required.
Step7: 2-2. Feature scaling and splitting data
Step8: Lab
Step9: Plot the training data with Matplotlib
Step10: You can see that the geolocations in Manhattan are plotted as blue dots, and others are yellow dots. Also, latitudes and longitudes are scaled to have 0 as the center.
Step11: Lab
Step12: The code above does the following
Step13: In the first method plot_predicted_map() at line 3, we call the predict() method of DNNClassifier class to get an array of prediction results (10,000 rows) like [1 0 0 1 ... 0 0 1 0] where 1 means that the neural network believes the geolocation is in Manhattan, and 0 means it's not. By using this array as an indexer for selecting lat and lng pairs in each class, the method plots geolocations predicted as Manhattan in blue dots and others in yellow dots.
Step14: Lab
Step15: The hidden layers give the power
|
1,892
|
<ASSISTANT_TASK:>
Python Code:
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.machine_learning.ex2 import *
print("Setup Complete")
import pandas as pd
# Path of the file to read
iowa_file_path = '../input/home-data-for-ml-course/train.csv'
# Fill in the line below to read the file into a variable home_data
home_data = ____
# Check your answer
step_1.check()
#%%RM_IF(PROD)%%
# Path of the file to read
iowa_file_path = '../input/home-data-for-ml-course/train.csv'
# Fill in the line below to read the file into a variable home_data
home_data = 0
# Call line below with no argument to check that you've loaded the data correctly
step_1.assert_check_failed()
#%%RM_IF(PROD)%%
# Fill in the line below to read the file into a variable home_data
home_data = pd.DataFrame()
# Call line below with no argument to check that you've loaded the data correctly
step_1.assert_check_failed()
home_data = pd.read_csv(iowa_file_path)
step_1.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_1.hint()
#_COMMENT_IF(PROD)_
step_1.solution()
# Print summary statistics in next line
____
# What is the average lot size (rounded to nearest integer)?
avg_lot_size = ____
# As of today, how old is the newest home (current year - the date in which it was built)
newest_home_age = ____
# Check your answers
step_2.check()
#step_2.hint()
#step_2.solution()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 1
Step2: Step 2
|
1,893
|
<ASSISTANT_TASK:>
Python Code:
a = "string"
b = "string1"
print a, b
print "The return value is", a
print(a, b)
print("The return value is", a)
print(a+' '+b)
print("The return value is" + " " + a)
print(a),; print(b)
print(a+b)
print("{}{}".format(a,b))
print("%s%s%d" % (a, b, 10))
from math import pi
print("원주율값은 대략 %f이다." % pi)
print("원주율값은 대략 %e이다." % pi)
print("원주율값은 대략 %g이다." % pi)
print("원주율값은 대략 %.10f이다." % pi)
print("원주율값은 대략 %.10e이다." % pi)
print("원주율값은 대략 %.10g이다." % pi)
print("지금 사용하는 컴퓨터가 계산할 수 있는 원주율값은 대략 '%.50f'이다." % pi)
print("%f"% pi)
print("%f"% pi**3)
print("%f"% pi**10)
print("%12f"% pi)
print("%12f"% pi**3)
print("%12f"% pi**10)
print("%12e"% pi)
print("%12e"% pi**3)
print("%12e"% pi**10)
print("%12g"% pi)
print("%12g"% pi**3)
print("%12g"% pi**10)
print("%16.10f"% pi)
print("%16.10f"% pi**3)
print("%16.10f"% pi**10)
print("%16.10e"% pi)
print("%16.10e"% pi**3)
print("%16.10e"% pi**10)
print("%16.10g"% pi)
print("%16.10g"% pi**3)
print("%16.10g"% pi**10)
print("%012f"% pi)
print("%012f"% pi**3)
print("%012f"% pi**10)
print("%012e"% pi)
print("%012e"% pi**3)
print("%012e"% pi**10)
print("%012g"% pi)
print("%012g"% pi**3)
print("%012g"% pi**10)
print("%016.10f"% pi)
print("%016.10f"% pi**3)
print("%016.10f"% pi**10)
print("%016.10e"% pi)
print("%016.10e"% pi**3)
print("%016.10e"% pi**10)
print("%016.10g"% pi)
print("%016.10g"% pi**3)
print("%016.10g"% pi**10)
print("%12.20f" % pi**19)
print("{}{}{}".format(a, b, 10))
print("{:s}{:s}{:d}".format(a, b, 10))
print("{:f}".format(pi))
print("{:f}".format(pi**3))
print("{:f}".format(pi**10))
print("{:12f}".format(pi))
print("{:12f}".format(pi**3))
print("{:12f}".format(pi**10))
print("{:012f}".format(pi))
print("{:012f}".format(pi**3))
print("{:012f}".format(pi**10))
print("{2}{1}{0}".format(a, b, 10))
print("{s1}{s2}{s1}".format(s1=a, s2=b, i1=10))
print("{i1}{s2}{s1}".format(s1=a, s2=b, i1=10))
print("{1:12f}, {0:12f}".format(pi, pi**3))
print("{p1:12f}, {p0:12f}".format(p0=pi, p1=pi**3))
a = 3.141592
print(a)
a.__str__()
str(a)
b = [2, 3.5, ['school', 'bus'], (1,2)]
str(b)
type(str(b))
print(b)
"%s" % b
"{}".format(b)
c = str(pi)
pi1 = eval(c)
pi1
pi1 - pi
pi2 = repr(pi)
type(pi2)
eval(pi2) - pi
"%s" % pi
"%r" % pi
"%d" % pi
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Caution
Step2: However, the parenthesis-free style above is not supported in Python 3.x.
Step3: It can also be done as follows
Step4: In the case above, however, a space is automatically inserted when printing a and b.
Step5: Formatted printing can be used to print data in a variety of forms. There are two main ways to use formatting.
Step6: For floating-point numbers, several format specifiers are available.
Step7: The number of digits after the decimal point can be chosen freely.
Step8: You can see that the value of pi is computed to roughly 50 digits after the decimal point. This is a limit of the computer being used, and the computational capability varies with the computer's performance.
Step9: When several values are shown, they line up on the left by default, as in the example below.
Step10: To line them up on the right, use the following approach.
Step11: Empty positions can also be filled with the digit 0.
Step12: The field width should be chosen by anticipating the result of the computation.
Step13: Printing strings with the format function
Step14: The format function also supports indexing.
Step15: Keywords can also be used for indexing.
Step16: Indexing and format specifiers can be used together
Step17: Whether to use the % operator or the format function depends on the situation.
Step18: The str function returns a string value.
Step19: As mentioned earlier, the __str__ method is used by default not only by the print function but also by formatting.
Step20: The repr function
Step21: However, pi1 cannot be used to recover the original value of pi.
Step22: With the repr function, eval can recover the value of pi.
Step23: The format specifier that makes use of the repr function is %r.
|
1,894
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
tf.enable_eager_execution()
# Using python state
x = tf.zeros([10, 10])
x += 2 # This is equivalent to x = x + 2, which does not mutate the original
# value of x
print(x)
v = tf.Variable(1.0)
assert v.numpy() == 1.0
# Re-assign the value
v.assign(3.0)
assert v.numpy() == 3.0
# Use `v` in a TensorFlow operation like tf.square() and reassign
v.assign(tf.square(v))
assert v.numpy() == 9.0
class Model(object):
def __init__(self):
# Initialize variable to (5.0, 0.0)
# In practice, these should be initialized to random values.
self.W = tf.Variable(5.0)
self.b = tf.Variable(0.0)
def __call__(self, x):
return self.W * x + self.b
model = Model()
assert model(3.0).numpy() == 15.0
def loss(predicted_y, desired_y):
return tf.reduce_mean(tf.square(predicted_y - desired_y))
TRUE_W = 3.0
TRUE_b = 2.0
NUM_EXAMPLES = 1000
inputs = tf.random_normal(shape=[NUM_EXAMPLES])
noise = tf.random_normal(shape=[NUM_EXAMPLES])
outputs = inputs * TRUE_W + TRUE_b + noise
import matplotlib.pyplot as plt
plt.scatter(inputs, outputs, c='b')
plt.scatter(inputs, model(inputs), c='r')
plt.show()
print('Current loss: '),
print(loss(model(inputs), outputs).numpy())
def train(model, inputs, outputs, learning_rate):
with tf.GradientTape() as t:
current_loss = loss(model(inputs), outputs)
dW, db = t.gradient(current_loss, [model.W, model.b])
model.W.assign_sub(learning_rate * dW)
model.b.assign_sub(learning_rate * db)
model = Model()
# Collect the history of W-values and b-values to plot later
Ws, bs = [], []
epochs = range(10)
for epoch in epochs:
Ws.append(model.W.numpy())
bs.append(model.b.numpy())
current_loss = loss(model(inputs), outputs)
train(model, inputs, outputs, learning_rate=0.1)
print('Epoch %2d: W=%1.2f b=%1.2f, loss=%2.5f' %
(epoch, Ws[-1], bs[-1], current_loss))
# Let's plot it all
plt.plot(epochs, Ws, 'r',
epochs, bs, 'b')
plt.plot([TRUE_W] * len(epochs), 'r--',
[TRUE_b] * len(epochs), 'b--')
plt.legend(['W', 'b', 'true W', 'true_b'])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Custom training
Step2: Variables
Step3: TensorFlow, however, has stateful operations built in, and these are often more pleasant to use than low-level Python representations of your state. To represent weights in a model, for example, it's often convenient and efficient to use TensorFlow variables.
Step4: Computations using Variables are automatically traced when computing gradients. For Variables representing embeddings TensorFlow will do sparse updates by default, which are more computation and memory efficient.
Step5: Define a loss function
Step6: Obtain training data
Step7: Before we train the model let's visualize where the model stands right now. We'll plot the model's predictions in red and the training data in blue.
Step8: Define a training loop
Step9: Finally, let's repeatedly run through the training data and see how W and b evolve.
|
1,895
|
<ASSISTANT_TASK:>
Python Code:
desired_contigs = ['Contig' + str(x) for x in [1131, 3182, 39106, 110, 5958]]
desired_contigs
grab = [c for c in contigs if c.name in desired_contigs]
len(grab)
import os
print(os.getcwd())
write_contigs_to_file('data2/sequences_desired.fa', grab)
[c.name for c in grab[:100]]
import os
os.path.realpath('')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you have a genuinely big file then I would do the following
Step2: Ya! There are two contigs.
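The "genuinely big file" approach above can be sketched as a streaming filter that never holds the whole FASTA file in memory (a plain-Python sketch; `stream_desired` is a made-up helper, not part of the notebook):

```python
def stream_desired(in_path, out_path, desired):
    """Copy only FASTA records whose header name is in `desired`,
    reading one line at a time so memory use stays constant."""
    keep = False
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            if line.startswith(">"):
                name = line[1:].split()[0]  # header name up to first whitespace
                keep = name in desired
            if keep:
                dst.write(line)
```

Usage might look like `stream_desired('data2/sequences.fa', 'data2/sequences_desired.fa', set(desired_contigs))`.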
|
1,896
|
<ASSISTANT_TASK:>
Python Code:
import sympy as sp
from matplotlib import pyplot as plt
%matplotlib inline
# Customize figure size
plt.rcParams['figure.figsize'] = 25, 15
#plt.rcParams['lines.linewidth'] = 1
#plt.rcParams['lines.color'] = 'g'
plt.rcParams['font.family'] = 'monospace'
plt.rcParams['font.size'] = '16.0'
plt.rcParams['text.hinting'] = 'either'
f = lambda x: 1/(x + 7**x)
sp.mpmath.plot([f], xlim=[-5,25], ylim=[0,25], points=500)
# To check your work, use Sympy.mpmath.nsum()
# This gives the sum of the infinite series (if the series converges)
infty = sp.mpmath.inf
sum = sp.mpmath.nsum(f, [1, infty])
print('The sum of the series = {}'.format(sum))
f = lambda x: 1/x
g = lambda x: 1/7**x
sp.mpmath.plot([f,g], xlim=[-5,25], ylim=[0,25], points=500)
# Check that sum of f_n plus g_n = a_n
sum = sp.mpmath.nsum(f, [1, infty]) + sp.mpmath.nsum(g, [1, infty])
print('The sum of the series = {}'.format(sum))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let $a_n = \frac{1}{n + 7^n}$ and consider the series $\sum_{n=1}^{\infty} a_n$.
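The convergence is easy to sanity-check numerically — partial sums of $1/(n + 7^n)$ stabilize almost immediately, because the geometric $7^n$ term dominates the tail:

```python
def partial_sum(N):
    # sum of the first N terms of 1/(n + 7**n)
    return sum(1 / (n + 7**n) for n in range(1, N + 1))

print(partial_sum(5))
print(partial_sum(50))  # adding 45 more terms barely moves the total
```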
|
1,897
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from scipy import linalg
from scipy import optimize
import functools
import tensorly
from tensorly.decomposition import partial_tucker
from tensorly.decomposition import tucker
tensorly.set_backend('numpy')
tensor_steam_length = 300
factors_tensor_list = []
for i in np.arange(tensor_steam_length):
a = np.random.normal(size=[69], scale=0.5)
b = np.random.normal(size=[16], scale=0.5)
c = np.random.normal(size=[32], scale=0.5)
x = np.zeros([1, 69, 16, 32])
x[0,:,0,0] = a
x[0,1,:,1] = b
x[0,2,2,:] = c
factors_tensor_list.append(x)
factors_tensor = np.concatenate(factors_tensor_list)
targets = np.random.normal(scale=0.01, size=[300,1])
def get_weighting_of_geometric_structure (targets):
    W = (targets - targets.T) / targets.T  # broadcasting
    W = np.abs(W) - 0.05  # take absolute values and threshold for similarity
    W[W>0.0]=0
    W[W<0.0]=1
    upper_traingular_matrix = np.eye(W.shape[0]).cumsum(1)  # upper-triangular mask
    return W * upper_traingular_matrix
def get_var_of_adjusting_geometric_structure(targets):
W = get_weighting_of_geometric_structure(targets)
return np.expand_dims(W.sum(0),axis=0)
D = get_var_of_adjusting_geometric_structure(targets)
W = get_weighting_of_geometric_structure(targets)
factors_tensor = tensorly.tensor(factors_tensor)
core_list = []
mode_factors_list = []
for i in range(factors_tensor.shape[0]):
print (i)
core, mode_factors= tucker(factors_tensor[i])
core = np.expand_dims(core, axis=0)
core_list.append(core)
mode_factors_list.append(mode_factors)
batch_length = tensor_steam_length
a_list = []
b_list = []
c_list = []
for i in range(batch_length):
a = np.expand_dims(mode_factors_list[i][0], axis=0)
b = np.expand_dims(mode_factors_list[i][1], axis=0)
c = np.expand_dims(mode_factors_list[i][2], axis=0)
a_list.append(a)
b_list.append(b)
c_list.append(c)
U1 = np.concatenate(a_list)
U2 = np.concatenate(b_list)
U3 = np.concatenate(c_list)
M1 = U1 * np.transpose(U1, axes=[0,2,1])
M2 = U2 * np.transpose(U2, axes=[0,2,1])
M3 = U3 * np.transpose(U3, axes=[0,2,1])
D_U1_core = np.matmul(D, tensorly.base.unfold(M1, mode=0))
I_k = np.int(np.sqrt(D_U1_core.shape[1]))
D_U1 = tensorly.base.fold(D_U1_core, mode=0, shape=[I_k, I_k])
D_U2_core = np.matmul(D, tensorly.base.unfold(M2, mode=0))
I_k = np.int(np.sqrt(D_U2_core.shape[1]))
D_U2 = tensorly.base.fold(D_U2_core, mode=0, shape=[I_k, I_k])
D_U3_core = np.matmul(D, tensorly.base.unfold(M3, mode=0))
I_k = np.int(np.sqrt(D_U3_core.shape[1]))
D_U3 = tensorly.base.fold(D_U3_core, mode=0, shape=[I_k, I_k])
vec_W = np.expand_dims(W.sum(axis=0), axis=0)
W_U1_core = np.matmul(vec_W, tensorly.base.unfold(M1, mode=0))
I_k = np.int(np.sqrt(W_U1_core.shape[1]))
W_U1 = tensorly.base.fold(W_U1_core, mode=0, shape=[I_k, I_k])
W_U2_core = np.matmul(vec_W, tensorly.base.unfold(M2, mode=0))
I_k = np.int(np.sqrt(W_U2_core.shape[1]))
W_U2 = tensorly.base.fold(W_U2_core, mode=0, shape=[I_k, I_k])
W_U3_core = np.matmul(vec_W, tensorly.base.unfold(M3, mode=0))
I_k = np.int(np.sqrt(W_U3_core.shape[1]))
W_U3 = tensorly.base.fold(W_U3_core, mode=0, shape=[I_k, I_k])
def objective_function(V, D_U, W_U, J_K):
newshape = [D_U.shape[0], J_K]
V = np.reshape(V,newshape=newshape)
left = np.matmul(np.matmul(V.T, D_U),V)
right = np.matmul(np.matmul(V.T, W_U),V)
return np.trace(left + right)
def constraints(V, D_U, J_K=10):  # J_K must match the column count used in the objective
    newshape = [D_U.shape[0], J_K]
    V = np.reshape(V,newshape=newshape)
    left = np.matmul(np.matmul(V.T, D_U),V)
    return np.trace(left)- 1.0
objective_function_1 = functools.partial(
objective_function, D_U = D_U1, W_U = W_U1, J_K = 10)
constraints_1 = functools.partial(constraints, D_U = D_U1)
cons_1 = ({'type':'ineq', 'fun':constraints_1})
initial_1 = np.random.normal(scale=0.1, size=D_U1.shape[0]*10).reshape(D_U1.shape[0],10)
V1 = optimize.minimize(objective_function_1, initial_1).x.reshape(D_U1.shape[0],10)
objective_function_2 = functools.partial(
objective_function, D_U = D_U2, W_U = W_U2, J_K = 10)
constraints_2 = functools.partial(constraints, D_U = D_U2)
cons_2 = ({'type':'ineq', 'fun':constraints_2})
initial_2 = np.random.normal(scale=0.1, size=D_U2.shape[0]*10).reshape(D_U2.shape[0],10)
V2 = optimize.minimize(objective_function_2, initial_2).x.reshape(D_U2.shape[0],10)
objective_function_3 = functools.partial(
objective_function, D_U = D_U3, W_U = W_U3, J_K = 10)
constraints_3 = functools.partial(constraints, D_U = D_U3)
cons_3 = ({'type':'ineq', 'fun':constraints_3})
initial_3 = np.random.normal(scale=0.1, size=D_U3.shape[0]*10).reshape(D_U3.shape[0],10)
V3 = optimize.minimize(objective_function_3, initial_3).x.reshape(D_U3.shape[0],10)
new_U1 = np.matmul(V1.T, U1)
new_U2 = np.matmul(V2.T, U2)
new_U3 = np.matmul(V3.T, U3)
unfold_mode1 = tensorly.base.partial_unfold(factors_tensor, mode=0, skip_begin=1)
times_mode1 = np.matmul(new_U1, unfold_mode1)
times1_shape = (factors_tensor.shape[0] ,new_U1.shape[1], factors_tensor.shape[2], factors_tensor.shape[3])
times1 = tensorly.base.partial_fold(times_mode1, 0, times1_shape, skip_begin=1, skip_end=0)
unfold_mode2 = tensorly.base.partial_unfold(times1, mode=1, skip_begin=1)
times_mode2 = np.matmul(new_U2, unfold_mode2)
times2_shape = (factors_tensor.shape[0] ,new_U1.shape[1] ,new_U2.shape[1], factors_tensor.shape[3])
times2 = tensorly.base.partial_fold(times_mode2, 1, times2_shape, skip_begin=1, skip_end=0)
unfold_mode3 = tensorly.base.partial_unfold(times2, mode=2, skip_begin=1)
times_mode3 = np.matmul(new_U3, unfold_mode3)
times3_shape = (factors_tensor.shape[0] ,new_U1.shape[1], new_U2.shape[1], new_U3.shape[1])
times3 = tensorly.base.partial_fold(times_mode3, 2, times3_shape, skip_begin=1, skip_end=0)
new_factors_tensor = times3
factors_tensor.shape
new_factors_tensor.shape
new_factors_tensor
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generating the dataset
Step2: 3.3 Vectorizing the $\sum$ loop
Step3: Tucker decomposition in a loop
Step4: Concatenating the tensor stream
Step5: For now, no good way to batch-process the Tucker decompositions has been found, so they are special-cased here
Step6: Estimating the correction matrices $V_k$ via quadratic-programming optimization
Step7: $V_1$
Step8: $V_2$
Step9: $V_3$
Step10: $\bar{\mathcal{X}_i} = \mathcal{C}_i \times_1 (V^T_1U^i_1) \times_2 (V^T_2U^i_2) \times_3 (V^T_3U^i_3)$
Step11: Dynamic relation capture and dimensionality reduction $\mathcal{X}\in\mathbb{R}^{I_1 \times I_2 \times I_3} \to \bar{\mathcal{X}} \in \mathbb{R}^{J_1 \times J_2 \times J_3}$
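The $\times_n$ mode products in Step10 follow the usual unfold–multiply–fold recipe; here is a small NumPy sketch (independent of tensorly) of a single mode-$n$ product:

```python
import numpy as np

def mode_n_product(T, M, mode):
    """Multiply tensor T by matrix M along axis `mode`:
    move that axis to the front, flatten the rest, matmul, fold back."""
    T = np.moveaxis(T, mode, 0)
    front, rest = T.shape[0], T.shape[1:]
    out = (M @ T.reshape(front, -1)).reshape((M.shape[0],) + rest)
    return np.moveaxis(out, 0, mode)

C = np.random.rand(2, 3, 4)  # a small stand-in core tensor
U = np.random.rand(5, 2)     # factor applied along mode 0
print(mode_n_product(C, U, 0).shape)  # (5, 3, 4)
```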
|
1,898
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline
import tensorflow as tf  # needed below for tf.reset_default_graph() and tf.Session()
import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)
# GRADED FUNCTION: identity_block
def identity_block(X, f, filters, stage, block):
    """
    Implementation of the identity block as defined in Figure 3

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network

    Returns:
    X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
    """
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value. You'll need this later to add back to the main path.
X_shortcut = X
# First component of main path
X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = identity_block(A_prev, f=2, filters=[2, 4, 6], stage=1, block='a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
# GRADED FUNCTION: convolutional_block
def convolutional_block(X, f, filters, stage, block, s=2):
    """
    Implementation of the convolutional block as defined in Figure 4

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network
    s -- Integer, specifying the stride to be used

    Returns:
    X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
    """
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value
X_shortcut = X
##### MAIN PATH #####
# First component of main path
X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(s, s), padding='valid', name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = convolutional_block(A_prev, f=2, filters=[2, 4, 6], stage=1, block='a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
# GRADED FUNCTION: ResNet50
def ResNet50(input_shape=(64, 64, 3), classes=6):
    """
    Implementation of the popular ResNet50 with the following architecture:
    CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
    -> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER

    Arguments:
    input_shape -- shape of the images of the dataset
    classes -- integer, number of classes

    Returns:
    model -- a Model() instance in Keras
    """
# Define the input as a tensor with shape input_shape
X_input = Input(input_shape)
# Zero-Padding
X = ZeroPadding2D((3, 3))(X_input)
# Stage 1
X = Conv2D(64, (7, 7), strides=(2, 2), name='conv1', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((3, 3), strides=(2, 2))(X)
# Stage 2
X = convolutional_block(X, f=3, filters=[64, 64, 256], stage=2, block='a', s=1)
X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')
### START CODE HERE ###
### END CODE HERE ###
# output layer
X = Flatten()(X)
X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer=glorot_uniform(seed=0))(X)
# Create model
model = Model(inputs=X_input, outputs=X, name='ResNet50')
return model
model = ResNet50(input_shape=(64, 64, 3), classes=6)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig / 255.
X_test = X_test_orig / 255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print("number of training examples = " + str(X_train.shape[0]))
print("number of test examples = " + str(X_test.shape[0]))
print("X_train shape: " + str(X_train.shape))
print("Y_train shape: " + str(Y_train.shape))
print("X_test shape: " + str(X_test.shape))
print("Y_test shape: " + str(Y_test.shape))
model.fit(X_train, Y_train, epochs = 2, batch_size = 32)
preds = model.evaluate(X_test, Y_test)
print("Loss = " + str(preds[0]))
print("Test Accuracy = " + str(preds[1]))
model = load_model('ResNet50.h5')
preds = model.evaluate(X_test, Y_test)
print("Loss = " + str(preds[0]))
print("Test Accuracy = " + str(preds[1]))
img_path = 'images/my_image.jpg'
img = image.load_img(img_path, target_size=(64, 64))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
my_image = scipy.misc.imread(img_path)
imshow(my_image)
print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ")
print(model.predict(x))
model.summary()
plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 1 - The problem of very deep neural networks
Step4: Expected Output
Step6: Expected Output
Step7: Run the following code to build the model's graph. If your implementation is not correct you will know it by checking your accuracy when running model.fit(...) below.
Step8: As seen in the Keras Tutorial Notebook, prior to training a model, you need to configure the learning process by compiling the model.
Step9: The model is now ready to be trained. The only thing you need is a dataset.
Step10: Run the following cell to train your model on 2 epochs with a batch size of 32. On a CPU it should take you around 5min per epoch.
Step11: Expected Output
Step12: Expected Output
Step13: ResNet50 is a powerful model for image classification when it is trained for an adequate number of iterations. We hope you can use what you've learnt and apply it to your own classification problem to perform state-of-the-art accuracy.
Step14: You can also print a summary of your model by running the following code.
Step15: Finally, run the code below to visualize your ResNet50. You can also download a .png picture of your model by going to "File -> Open...-> model.png".
|
1,899
|
<ASSISTANT_TASK:>
Python Code:
# useful additional packages
import random
import math
from sympy.ntheory import isprime
# importing QISKit
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import Aer, IBMQ, execute
from qiskit.wrapper.jupyter import *
from qiskit.backends.ibmq import least_busy
from qiskit.tools.visualization import matplotlib_circuit_drawer as circuit_drawer
from qiskit.tools.visualization import plot_histogram, qx_color_scheme
IBMQ.load_accounts()
sim_backend = Aer.get_backend('qasm_simulator')
device_backend = least_busy(IBMQ.backends(operational=True, simulator=False))
device_coupling = device_backend.configuration()['coupling_map']
#Function that takes in a prime number and a string of letters and returns a quantum circuit
def qfa_algorithm(string, prime):
if isprime(prime) == False:
raise ValueError("This number is not a prime") #Raises a ValueError if the input prime number is not prime
else:
n = math.ceil((math.log(prime))) #Rounds up to the next integer of the log(prime)
qr = QuantumRegister(n) #Creates a quantum register of length log(prime) for log(prime) qubits
cr = ClassicalRegister(n) #Creates a classical register for measurement
qfaCircuit = QuantumCircuit(qr, cr) #Defining the circuit to take in the values of qr and cr
for x in range(n): #For each qubit, we want to apply a series of unitary operations with a random int
random_value = random.randint(1,prime - 1) #Generates the random int for each qubit from {1, prime -1}
for letter in string: #For each letter in the string, we want to apply the same unitary operation to each qubit
qfaCircuit.ry((2*math.pi*random_value) / prime, qr[x]) #Applies the Y-Rotation to each qubit
qfaCircuit.measure(qr[x], cr[x]) #Measures each qubit
return qfaCircuit #Returns the created quantum circuit
#A function that returns a string saying if the string is accepted into the language or rejected
def accept(parameter):
states = list(result.get_counts(parameter)) #Relies on the global `result` from the most recent execution
for s in states:
for integer in s:
if integer == "1":
return "Reject: the string is not accepted into the language"
return "Accept: the string is accepted into the language"
range_lower = 0
range_higher = 36
prime_number = 11
for length in range(range_lower,range_higher):
params = qfa_algorithm("a"* length, prime_number)
job = execute(params, sim_backend, shots=1000)
result = job.result()
print(accept(params), "\n", "Length:",length," " ,result.get_counts(params))
circuit_drawer(qfa_algorithm("a"* 3, prime_number), style=qx_color_scheme())
#Function that takes in a prime number and a string of letters and returns a quantum circuit
def qfa_controlled_algorithm(string, prime):
if not isprime(prime):
raise ValueError("This number is not a prime") #Raises a ValueError if the input prime number is not prime
else:
n = math.ceil(math.log(math.log(prime, 2), 2)) #Represents log(log(p)) control qubits
states = 2 ** n #Number of states that the qubits can represent/Number of QFA's to be performed
qr = QuantumRegister(n+1) #Creates a quantum register of log(log(prime)) control qubits + 1 target qubit
cr = ClassicalRegister(1) #Creates a classical register with a single bit to measure the target qubit
control_qfaCircuit = QuantumCircuit(qr, cr) #Defining the circuit to take in the values of qr and cr
for q in range(n): #We want to take each control qubit and put them in a superposition by applying a Hadamard Gate
control_qfaCircuit.h(qr[q])
for letter in string: #For each letter in the string, we want to apply a series of Controlled Y-rotations
for q in range(n):
control_qfaCircuit.cu3(2*math.pi*(2**q)/prime, 0, 0, qr[q], qr[n]) #Controlled Y on Target qubit
control_qfaCircuit.measure(qr[n], cr[0]) #Measure the target qubit
return control_qfaCircuit #Returns the created quantum circuit
for length in range(range_lower,range_higher):
params = qfa_controlled_algorithm("a"* length, prime_number)
job = execute(params, sim_backend, shots=1000)
result = job.result()
print(accept(params), "\n", "Length:",length," " ,result.get_counts(params))
circuit_drawer(qfa_controlled_algorithm("a"* 3, prime_number), style=qx_color_scheme())
prime_number = 3
length = 2 # set the length so that it is not divisible by the prime_number
print("The length of a is", length, "while the prime number is", prime_number)
qfa1 = qfa_controlled_algorithm("a"* length, prime_number)
%%qiskit_job_status
HTMLProgressBar()
job = execute(qfa1, backend=device_backend, coupling_map=device_coupling, shots=100)
result = job.result()
plot_histogram(result.get_counts())
circuit_drawer(qfa1, style=qx_color_scheme())
length = 3 # set the length so that it is divisible by the prime_number
print("The length of a is", length, "while the prime number is", prime_number)
qfa2 = qfa_controlled_algorithm("a"* length, prime_number)
%%qiskit_job_status
HTMLProgressBar()
job = execute(qfa2, backend=device_backend, coupling_map=device_coupling, shots=100)
result = job.result()
plot_histogram(result.get_counts())
circuit_drawer(qfa2, style=qx_color_scheme())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We then use QISKit to program the algorithm.
Step2: The qfa_algorithm function returns the Quantum Circuit qfaCircuit.
Step3: Insert your own parameters and try even larger prime numbers.
Step4: Drawing the circuit of the QFA
Step5: The Algorithm for Log(Log(p)) Qubits
Step6: The qfa_algorithm function returns the Quantum Circuit control_qfaCircuit.
Step7: Drawing the circuit of the QFA
Step8: Experimenting with Real Devices
Step9: In the above, we can see that the probability of observing "1" is quite significant. Let us see what the circuit looks like.
Step10: Now, let us see what happens when the QFAs should accept the input string.
Step11: The error of rejecting the bitstring equals the probability of observing "1", which can be checked in the histogram above. We can see that the noise of real-device backends prevents us from obtaining a correct answer; how to mitigate backend errors in the QFA models is left as future work.
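To make the accept/reject behaviour in these steps concrete, here is a small classical sketch (our addition, not part of the record's notebook) of the ideal, noise-free acceptance probability of the log(p)-qubit QFA. After reading a string of the given length, a qubit with random coefficient k has accumulated the rotation Ry(2*pi*k*length/prime), so it measures |1> with probability sin^2(pi*k*length/prime): lengths divisible by the prime are always accepted, other lengths only rarely. The helper name `accept_probability` is hypothetical.

```python
import math
import random

def accept_probability(length, prime, n_qubits, trials=2000, seed=1):
    # Monte Carlo estimate (ideal, noise-free) of the probability that every
    # qubit of the QFA measures |0>, i.e. that the string "a" * length is
    # accepted. Each qubit draws a random k in {1, ..., prime - 1} and ends
    # in |0> with probability cos(pi * k * length / prime) ** 2.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        p_all_zero = 1.0
        for _ in range(n_qubits):
            k = rng.randint(1, prime - 1)
            p_all_zero *= math.cos(math.pi * k * length / prime) ** 2
        total += p_all_zero
    return total / trials

# Length divisible by the prime: every rotation is a multiple of 2*pi,
# so the string is accepted with certainty.
assert abs(accept_probability(22, 11, 3) - 1.0) < 1e-9
# Length not divisible by the prime: the false-acceptance probability is small.
assert accept_probability(23, 11, 3) < 0.5
```

This also shows why more qubits help: each extra qubit multiplies in another cos^2 factor, driving the false-acceptance probability down exponentially while leaving accepted lengths untouched.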
|