| text_prompt (string, 168–30.3k chars) | code_prompt (string, 67–124k chars) |
|---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Boston Housing Dataset
Step2: Standardize Features
Step3: Create Ridge Regression With Candidate Alpha Values
Step4: Fit Ridge Regression
Step5: View Best Model's Alpha Value
|
<ASSISTANT_TASK:>
Python Code:
# Load libraries
from sklearn.linear_model import RidgeCV
from sklearn.datasets import load_boston
from sklearn.preprocessing import StandardScaler
# Load data
boston = load_boston()
X = boston.data
y = boston.target
# Standardize features
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
# Create ridge regression with three possible alpha values
regr_cv = RidgeCV(alphas=[0.1, 1.0, 10.0])
# Fit the ridge regression
model_cv = regr_cv.fit(X_std, y)
# View alpha
model_cv.alpha_
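# A hedged alternative sketch (not part of the original solution): the scaler and
# RidgeCV can be chained in a single Pipeline so new data is standardized
# automatically before prediction. Note that load_boston was removed in
# scikit-learn 1.2, so this assumes an older version of the library.
from sklearn.pipeline import make_pipeline

pipe = make_pipeline(StandardScaler(), RidgeCV(alphas=[0.1, 1.0, 10.0]))
pipe.fit(X, y)
# Best alpha found by the RidgeCV step inside the pipeline
pipe.named_steps['ridgecv'].alpha_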
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reducing Extra Dataset Loads
Step2: Then we define a grid of parameters whose nodes will be used to form separate experiments
Step3: These parameters can be passed to the model's config using named expressions.
Step4: After that we define a pipeline to run during our experiments. We initialise a pipeline variable 'loss' to store loss on each iteration
Step5: Each research is assigned a name and writes its results to a folder with that name. Names must be unique, so attempting to run a research under a name that already exists raises an error. In the cell below we clear the results of previous runs so that the research can be run multiple times; this is done solely for the purposes of this tutorial and should not be done in real work.
Step6: Finally we define a Research that runs the pipeline substituting its parameters using different nodes of the grid, and saves values of the 'loss' named expressions to results.
Step7: 16 experiments are run (4 grid nodes × 4 repetitions), each consisting of 10 iterations.
Step8: Branches
Step9: Since every root is now assigned 8 branches, there are only 2 jobs.
Step10: Functions on Root
Step11: Improving Performance
Step12: Cross-validation
Step13: Next, we define our train and test pipelines. To perform cross-validation, you can define the train and test datasets as mnist_train.CV(C('fold')).train and mnist_test.CV(C('fold')).test, respectively.
Step14: Then multiply your Domain by Option('fold', [0, 1, 2]).
Step15: We can now load results, specifying which folds to get if needed
|
<ASSISTANT_TASK:>
Python Code:
import sys
import os
import shutil
import warnings
warnings.filterwarnings('ignore')
from tensorflow import logging
logging.set_verbosity(logging.ERROR)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import matplotlib
%matplotlib inline
sys.path.append('../../..')
from batchflow import Pipeline, B, C, V, D, L
from batchflow.opensets import MNIST
from batchflow.models.tf import VGG7, VGG16
from batchflow.research import Research, Option, Results, RP
BATCH_SIZE=64
ITERATIONS=1000
TEST_EXECUTE_FREQ=100
def clear_previous_results(res_name):
if os.path.exists(res_name):
shutil.rmtree(res_name)
mnist = MNIST()
train_root = mnist.train.p.run_later(BATCH_SIZE, shuffle=True, n_epochs=None)
domain = Option('layout', ['cna', 'can']) * Option('bias', [True, False])
model_config={
'inputs/images/shape': B('image_shape'),
'inputs/labels/classes': D('num_classes'),
'inputs/labels/name': 'targets',
'initial_block/inputs': 'images',
'body/block/layout': C('layout'),
'common/conv/use_bias': C('bias'),
}
train_template = (Pipeline()
.init_variable('loss')
.init_model('dynamic', VGG7, 'conv', config=model_config)
.to_array()
.train_model('conv',
images=B('images'), labels=B('labels'),
fetches='loss', save_to=V('loss', mode='w'))
)
res_name = 'simple_research'
clear_previous_results(res_name)
research = (Research()
.add_pipeline(train_root + train_template, variables='loss')
.init_domain(domain, n_reps=4))
research.run(n_iters=10, name=res_name, bar=True)
research.load_results().df.info()
model_config={
'inputs/images/shape': B('image_shape'),
'inputs/labels/classes': 10,
'inputs/labels/name': 'targets',
'initial_block/inputs': 'images',
'body/block/layout': C('layout'),
'common/conv/use_bias': C('bias'),
}
train_template = (Pipeline()
.init_variable('loss')
.init_model('dynamic', VGG7, 'conv', config=model_config)
.to_array()
.train_model('conv',
images=B('images'), labels=B('labels'),
fetches='loss', save_to=V('loss', mode='w'))
)
res_name = 'no_extra_dataload_research'
clear_previous_results(res_name)
research = (Research()
.add_pipeline(root=train_root, branch=train_template, variables='loss')
.init_domain(domain, n_reps=4))
research.run(n_iters=10, branches=8, name=res_name, bar=True)
research.load_results().df.info()
res_name = 'on_root_research'
clear_previous_results(res_name)
def function_on_root():
print('on root')
research = (Research()
.add_callable(function_on_root, execute="#0", on_root=True)
.add_pipeline(root=train_root, branch=train_template, variables='loss')
.init_domain(domain, n_reps=4)
)
research.run(branches=8, n_iters=10, name=res_name, bar=True)
model_config={
**model_config,
'device': C('device'), # a technical parameter for TFModel
}
test_root = mnist.test.p.run_later(BATCH_SIZE, shuffle=True, n_epochs=1) #Note n_epochs=1
test_template = (Pipeline()
.init_variable('predictions')
.init_variable('metrics')
.import_model('conv', C('import_from'))
.to_array()
.predict_model('conv',
images=B('images'), labels=B('labels'),
fetches='predictions', save_to=V('predictions'))
.gather_metrics('class', targets=B('labels'), predictions=V('predictions'),
fmt='logits', axis=-1, save_to=V('metrics', mode='a')))
research = (Research()
.add_pipeline(root=train_root, branch=train_template, variables='loss', name='train_ppl',
dump=TEST_EXECUTE_FREQ)
.add_pipeline(root=test_root, branch=test_template, name='test_ppl',
execute=TEST_EXECUTE_FREQ, run=True, import_from=RP('train_ppl'))
.get_metrics(pipeline='test_ppl', metrics_var='metrics', metrics_name='accuracy',
returns='accuracy',
execute=TEST_EXECUTE_FREQ,
dump=TEST_EXECUTE_FREQ)
.init_domain(domain, n_reps=4))
res_name = 'faster_research'
clear_previous_results(res_name)
research.run(n_iters=ITERATIONS, name=res_name, bar=True,
branches=2, workers=2, devices=[0, 1],
timeout=2, trials=1)
results = research.load_results().df
results.info()
mnist_train = MNIST().train
mnist_train.cv_split(n_splits=3)
model_config={
'inputs/images/shape': B('image_shape'),
'inputs/labels/classes': D('num_classes'),
'inputs/labels/name': 'targets',
'initial_block/inputs': 'images',
'body/block/layout': C('layout'),
}
train_template = (Pipeline()
.init_variable('train_loss')
.init_model('dynamic', VGG7, 'conv', config=model_config)
.to_array()
.train_model('conv',
images=B('images'), labels=B('labels'),
fetches='loss', save_to=V('train_loss', mode='w'))
.run_later(BATCH_SIZE, shuffle=True, n_epochs=None)) << mnist_train.CV(C('fold')).train
test_template = (Pipeline()
.init_variable('predictions')
.init_variable('metrics')
.import_model('conv', C('import_from'))
.to_array()
.predict_model('conv',
images=B('images'), labels=B('labels'),
fetches='predictions', save_to=V('predictions'))
.gather_metrics('class', targets=B('labels'), predictions=V('predictions'),
fmt='logits', axis=-1, save_to=V('metrics', mode='a'))
.run_later(BATCH_SIZE, shuffle=True, n_epochs=1)) << mnist_train.CV(C('fold')).test
domain = Option('layout', ['cna', 'can']) * Option('fold', [0, 1, 2])
research = (Research()
.add_pipeline(train_template, dataset=mnist_train, variables='train_loss', name='train_ppl')
.add_pipeline(test_template, dataset=mnist_train, name='test_ppl',
execute=TEST_EXECUTE_FREQ, run=True, import_from=RP('train_ppl'))
.get_metrics(pipeline='test_ppl', metrics_var='metrics', metrics_name='accuracy', returns='accuracy',
execute=TEST_EXECUTE_FREQ)
.init_domain(domain))
res_name = 'cv_research'
clear_previous_results(res_name)
research.run(n_iters=ITERATIONS, name=res_name, bar=True, workers=1, devices=[0])
results = research.load_results(fold=0).df
results.sample(5)
from matplotlib import pyplot as plt
test_results = Results(path='cv_research', names= 'test_ppl_metrics',
concat_config=True, drop_columns=False).df
fig, ax = plt.subplots(1, 2, figsize=(15, 5))
for i, (config, df) in enumerate(test_results.groupby('config')):
x, y = i//2, i%2
df.pivot(index='iteration', columns='fold', values='accuracy').plot(ax=ax[y])
ax[y].set_title(config)
ax[y].set_xlabel('iteration')
ax[y].set_ylabel('accuracy')
ax[y].grid(True)
ax[y].legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: python
Step2: scipy
Step3: numba
Step4: serial version
Step5: parallel version
Step6: check results
Step7: check if the different implementations produce the same result
Step8: plot results
Step9: profile for different data sizes
Step10: plot profile results
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy.ndimage import generic_filter
from numba import jit, guvectorize, float64
import pyprind
import matplotlib.pyplot as plt
%matplotlib inline
def denoise(a, b):
for channel in range(2):
for f_band in range(4, a.shape[1] - 4):
for t_step in range(1, a.shape[2] - 1):
neighborhood = a[channel, f_band - 4:f_band + 5, t_step - 1:t_step + 2]
if neighborhood.mean() < 10:
b[channel, f_band, t_step] = neighborhood.min()
else:
b[channel, f_band, t_step] = neighborhood[4, 1]
return b
@jit(nopython=True)
def filter_denoise(neighborhood):
if neighborhood.mean() < 10:
return neighborhood.min()
else:
return neighborhood[13]
def denoise_scipy(a, b):
for channel in range(2):
b[channel] = generic_filter(input=a[channel], function=filter_denoise,
size=(9, 3), mode='constant')
return b
# just removed return statement
def denoise_guvectorize(a, b):
for channel in range(2):
for f_band in range(4, a.shape[1] - 4):
for t_step in range(1, a.shape[2] - 1):
neighborhood = a[channel, f_band - 4:f_band + 5, t_step - 1:t_step + 2]
if neighborhood.mean() < 10:
b[channel, f_band, t_step] = neighborhood.min()
else:
b[channel, f_band, t_step] = neighborhood[4, 1]
denoise_numba = guvectorize('float64[:,:,:], float64[:,:,:]', '(c,f,t)->(c,f,t)',
nopython=True)(denoise_guvectorize)
denoise_parallel = guvectorize('float64[:,:,:], float64[:,:,:]', '(c,f,t)->(c,f,t)',
nopython=True, target='parallel')(denoise_guvectorize)
size = 100
data = np.random.rand(2, size, int(size*1.5))
data[:, int(size/4):int(size/2), int(size/4):int(size/2)] = 27
result_python = denoise(data, np.zeros_like(data))
result_scipy = denoise_scipy(data, np.zeros_like(data))
result_numba = denoise_numba(data, np.zeros_like(data))
result_parallel = denoise_parallel(data, np.zeros_like(data))
assert np.allclose(result_python, result_scipy) and np.allclose(result_python, result_numba)
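# A small additional check (not in the original): the parallel guvectorize target
# should also match the pure-python reference.
assert np.allclose(result_python, result_parallel)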
fig, ax = plt.subplots(2,2)
fig.set_figheight(8)
fig.set_figwidth(12)
im1 = ax[0, 0].imshow(data[0], cmap='viridis', interpolation='none', vmax=1)
t1 = ax[0, 0].set_title('data')
im2 = ax[0, 1].imshow(result_python[0], cmap='viridis', interpolation='none', vmax=1)
t1 = ax[0, 1].set_title('pure python')
im3 = ax[1, 0].imshow(result_scipy[0], cmap='viridis', interpolation='none', vmax=1)
t1 = ax[1, 0].set_title('scipy')
im4 = ax[1, 1].imshow(result_numba[0], cmap='viridis', interpolation='none', vmax=1)
t1 = ax[1, 1].set_title('numba')
sizes = [30, 50, 100, 200, 400, 800, 1600]
progress_bar = pyprind.ProgBar(iterations=len(sizes), track_time=True, stream=1, monitor=True)
t_python = np.empty_like(sizes, dtype=np.float64)
t_scipy = np.empty_like(sizes, dtype=np.float64)
t_numba = np.empty_like(sizes, dtype=np.float64)
t_parallel = np.empty_like(sizes, dtype=np.float64)
for size in range(len(sizes)):
progress_bar.update(item_id=sizes[size])
data = np.random.rand(2, sizes[size], sizes[size])*0.75
t_1 = %timeit -oq denoise(data, np.zeros_like(data))
t_2 = %timeit -oq denoise_scipy(data, np.zeros_like(data))
t_3 = %timeit -oq denoise_numba(data, np.zeros_like(data))
t_4 = %timeit -oq denoise_parallel(data, np.zeros_like(data))
t_python[size] = t_1.best
t_scipy[size] = t_2.best
t_numba[size] = t_3.best
t_parallel[size] = t_4.best
fig, ax = plt.subplots(figsize=(15,5))
p1 = ax.loglog(sizes, t_python, color='black', marker='.', label='python')
p2 = ax.loglog(sizes, t_scipy, color='blue', marker='.', label='scipy')
p3 = ax.loglog(sizes, t_numba, color='green', marker='.', label='numba')
p4 = ax.loglog(sizes, t_parallel, color='red', marker='.', label='parallel')
lx = ax.set_xlabel("data array size (2 x n x n elements)")
ly = ax.set_ylabel("time (seconds)")
t1 = ax.set_title("running times of the 'denoise' algorithm")
ax.grid(True, which='major')
l = ax.legend()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The idea is to keep shrinking the year by the largest possible Roman numeral; however, we realized we had problems with the "9"s, so we added 900, 400, 90, 40, 9 and 4.
Step2: We saw that the if chain is very long; we could use a list instead, or better yet, tuples.
|
<ASSISTANT_TASK:>
Python Code:
# we assume a real year is entered, so we don't add any input validation
año = int(input("Enter your year: "))
añooriginal = año
resultado = ""
while año != 0:
if año >= 1000:
veces = año // 1000
resultado += "M" * veces
año %= 1000
elif año >= 900:
año -= 900
resultado += "CM"
elif año >= 500:
año -= 500
resultado += "D"
elif año >= 400:
año -= 400
resultado += "CD"
elif año >= 100:
veces = año // 100
resultado += "C" * veces
año %= 100
elif año >= 90:
año -= 90
resultado += "XC"
elif año >= 50:
año -= 50
resultado += "L"
elif año >= 40:
año -= 40
resultado += "XL"
elif año >= 10:
        veces = año // 10
        resultado += "X" * veces
        año %= 10
elif año >= 9:
año -= 9
resultado += "IX"
elif año >= 5:
año -= 5
resultado += "V"
elif año >= 4:
año -= 4
resultado += "IV"
else:
resultado += "I" * año
año = 0
print(resultado)
valores = (1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1)
letras = ("M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I")
valores[3] = 123
# we realize that, unlike a list, a tuple cannot be modified (this line raises a TypeError)...
año = añooriginal
res = ""
while año != 0:
for i in range(len(valores)):
if valores[i] <= año:
res += letras[i]
año -= valores[i]
break
print(res)
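# A hypothetical helper (not in the original): the same tuple-based loop wrapped
# in a reusable function, with a quick spot check on a known year.
def a_romano(n):
    res = ""
    while n != 0:
        for i in range(len(valores)):
            if valores[i] <= n:
                res += letras[i]
                n -= valores[i]
                break
    return res

print(a_romano(1994))  # expected: MCMXCIV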
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Choose a representative species for a case study
Step2: 2. Rasterize the species, to get a matrix of pixels
Step3: 2.1 Plot to get an idea
Step4: 3. Load the biogeographical regions raster layer
Step5: 3.1 Plot to get an idea
Step6: 3.2 Load the continents vector layer (for further clipping of pseudo-absence area), rasterize
Step7: As agreed, we will merge Europe and Asia to be a bit closer to the biogeographical regions. We do this user-specific patching for now, until a better solution is found.
Step8: Set all values >1 to 1. (we only care about presence/absence)
Step9: Delete band 2 (Europe, previously merged with layer 0==Asia)
Step10: This is how the band 0 looks like now.
Step11: 4. Sample pseudo-absence pixels, taking into account all the distinct biomes that fall in the species region.
Step12: 4.1 Plot the biomes taken into account for sampling pseudo-absences, to get an idea
Step13: 4.2 Plot the sampled pseudo-absences, to get an idea
Step14: 5. Construct a convenient dataframe for testing with different SDM models
Step15: 5.2 Get arrays of coordinates (latitude/longitude) for each cell (middle point) in a presences pixel map
Step16: 5.3 Get arrays of coordinates (latitude/longitude) for each cell (middle point) in a pseudo_absences pixel map
Step17: 5.4 Get arrays of coordinates (latitude/longitude) for each cell (middle point) in a minimum temperature pixel map
Step18: 5.5 Get arrays of coordinates (latitude/longitude) for each cell (middle point) in a maximum temperature pixel map
Step19: 5.6 Get arrays of coordinates (latitude/longitude) for each cell (middle point) in a mean temperature pixel map
Step20: http
Step21: 6. Repeat with other species
Step22: Third species.... (largest IUCN area, also plenty of occurrences in GBIF)
Step23: Hmm, why does it take South Africa into account?
Step24: The code above adds two bands with ones and zeroes
Step25: Add continents column
|
<ASSISTANT_TASK:>
Python Code:
import logging
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
root = logging.getLogger()
root.addHandler(logging.StreamHandler())
%matplotlib inline
# download from Google Drive: https://drive.google.com/open?id=0B9cazFzBtPuCOFNiUHYwcVFVODQ
# Representative example with multiple polygons in the shapefile, and a lot of point-records (also outside rangemaps)
from iSDM.species import IUCNSpecies
salmo_trutta = IUCNSpecies(name_species='Salmo trutta')
salmo_trutta.load_shapefile("../data/fish/selection/salmo_trutta")
rasterized = salmo_trutta.rasterize(raster_file="./salmo_trutta_full.tif", pixel_size=0.5)
plt.figure(figsize=(25,20))
plt.imshow(rasterized, cmap="hot", interpolation="none")
from iSDM.environment import RasterEnvironmentalLayer
biomes_adf = RasterEnvironmentalLayer(file_path="../data/rebioms/w001001.adf", name_layer="Biomes")
biomes_adf.load_data()
biomes_adf.plot()
from iSDM.environment import ContinentsLayer
from iSDM.environment import Source
continents = ContinentsLayer(file_path="../data/continents/continent.shp", source=Source.ARCGIS)
continents.load_data()
fig, ax = plt.subplots(1,1, figsize=(30,20))
continents.data_full.plot(column="continent", colormap="hsv")
continents_rasters = continents.rasterize(raster_file="../data/continents/continents_raster.tif", pixel_size=0.5)
continents_rasters.shape # stacked raster with 8 bands, one for each continent.
continents_rasters[0] = continents_rasters[0] + continents_rasters[2] # combine Europe and Asia
continents_rasters[0].max() # where the continents touch, we have overlap! that's why max is not 1, but 2.
continents_rasters[0][continents_rasters[0] > 1] = 1
continents_rasters = np.delete(continents_rasters, 2, 0)
plt.figure(figsize=(25,20))
plt.imshow(continents_rasters[0], cmap="hot", interpolation="none")
continents_rasters.shape # now a total of 7 bands rather than 8
selected_layers, pseudo_absences = biomes_adf.sample_pseudo_absences(species_raster_data=rasterized,continents_raster_data=continents_rasters, number_of_pseudopoints=1000)
plt.figure(figsize=(25,20))
plt.imshow(selected_layers, cmap="hot", interpolation="none")
plt.figure(figsize=(25,20))
plt.imshow(pseudo_absences, cmap="hot", interpolation="none")
all_coordinates = biomes_adf.pixel_to_world_coordinates(raster_data=np.zeros_like(rasterized), filter_no_data_value=False)
all_coordinates
base_dataframe = pd.DataFrame([all_coordinates[0], all_coordinates[1]]).T
base_dataframe.columns=['decimallatitude', 'decimallongitude']
base_dataframe.set_index(['decimallatitude', 'decimallongitude'], inplace=True, drop=True)
base_dataframe.head()
base_dataframe.tail()
presence_coordinates = salmo_trutta.pixel_to_world_coordinates()
presence_coordinates
presences_dataframe = pd.DataFrame([presence_coordinates[0], presence_coordinates[1]]).T
presences_dataframe.columns=['decimallatitude', 'decimallongitude']
presences_dataframe[salmo_trutta.name_species] = 1 # fill presences with 1's
presences_dataframe.set_index(['decimallatitude', 'decimallongitude'], inplace=True, drop=True)
presences_dataframe.head()
presences_dataframe.tail()
pseudo_absence_coordinates = biomes_adf.pixel_to_world_coordinates(raster_data=pseudo_absences)
pseudo_absences_dataframe = pd.DataFrame([pseudo_absence_coordinates[0], pseudo_absence_coordinates[1]]).T
pseudo_absences_dataframe.columns=['decimallatitude', 'decimallongitude']
pseudo_absences_dataframe[salmo_trutta.name_species] = 0 # fill pseudo-absences with 0
pseudo_absences_dataframe.set_index(['decimallatitude', 'decimallongitude'], inplace=True, drop=True)
pseudo_absences_dataframe.head()
pseudo_absences_dataframe.tail()
from iSDM.environment import ClimateLayer
water_min_layer = ClimateLayer(file_path="../data/watertemp/min_wt_2000.tif")
water_min_reader = water_min_layer.load_data()
# HERE: should we ignore cells with no-data values for temperature? They are set to a really big negative number
# for now we keep them, otherwise could be NaN
water_min_coordinates = water_min_layer.pixel_to_world_coordinates(filter_no_data_value=False)
water_min_coordinates
mintemp_dataframe = pd.DataFrame([water_min_coordinates[0], water_min_coordinates[1]]).T
mintemp_dataframe.columns=['decimallatitude', 'decimallongitude']
water_min_matrix = water_min_reader.read(1)
mintemp_dataframe['MinT'] = water_min_matrix.reshape(np.product(water_min_matrix.shape))
mintemp_dataframe.set_index(['decimallatitude', 'decimallongitude'], inplace=True, drop=True)
mintemp_dataframe.head()
mintemp_dataframe.tail()
water_max_layer = ClimateLayer(file_path="../data/watertemp/max_wt_2000.tif")
water_max_reader = water_max_layer.load_data()
# HERE: should we ignore cells with no-data values for temperature? They are set to a really big negative number
# for now we keep them, otherwise could be NaN
water_max_coordinates = water_max_layer.pixel_to_world_coordinates(filter_no_data_value=False)
maxtemp_dataframe = pd.DataFrame([water_max_coordinates[0], water_max_coordinates[1]]).T
maxtemp_dataframe.columns=['decimallatitude', 'decimallongitude']
water_max_matrix = water_max_reader.read(1)
maxtemp_dataframe['MaxT'] = water_max_matrix.reshape(np.product(water_max_matrix.shape))
maxtemp_dataframe.set_index(['decimallatitude', 'decimallongitude'], inplace=True, drop=True)
maxtemp_dataframe.head()
maxtemp_dataframe.tail()
water_mean_layer = ClimateLayer(file_path="../data/watertemp/mean_wt_2000.tif")
water_mean_reader = water_mean_layer.load_data()
# HERE: should we ignore cells with no-data values for temperature? They are set to a really big negative number
# for now we keep them, otherwise could be NaN
water_mean_coordinates = water_mean_layer.pixel_to_world_coordinates(filter_no_data_value=False)
meantemp_dataframe = pd.DataFrame([water_mean_coordinates[0], water_mean_coordinates[1]]).T
meantemp_dataframe.columns=['decimallatitude', 'decimallongitude']
water_mean_matrix = water_mean_reader.read(1)
meantemp_dataframe['MeanT'] = water_mean_matrix.reshape(np.product(water_mean_matrix.shape))
meantemp_dataframe.set_index(['decimallatitude', 'decimallongitude'], inplace=True, drop=True)
meantemp_dataframe.head()
meantemp_dataframe.tail()
# merge base with presences
merged = base_dataframe.combine_first(presences_dataframe)
merged.head()
merged.tail()
# merge based+presences with pseudo-absences
# merged2 = pd.merge(merged1, pseudo_absences_dataframe, on=["decimallatitude", "decimallongitude", salmo_trutta.name_species], how="outer")
merged = merged.combine_first(pseudo_absences_dataframe)
merged.head()
merged.tail()
# merge base+presences+pseudo-absences with min temperature
#merged3 = pd.merge(merged2, mintemp_dataframe, on=["decimallatitude", "decimallongitude"], how="outer")
merged = merged.combine_first(mintemp_dataframe)
merged.head()
merged.tail()
# merged4 = pd.merge(merged3, maxtemp_dataframe, on=["decimallatitude", "decimallongitude"], how="outer")
merged = merged.combine_first(maxtemp_dataframe)
merged.head()
merged.tail()
# merged5 = pd.merge(merged4, meantemp_dataframe, on=["decimallatitude", "decimallongitude"], how="outer")
merged = merged.combine_first(meantemp_dataframe)
merged.tail()
merged.to_csv("../data/fish/selection/salmo_trutta.csv")
merged[merged['Salmo trutta']==0].shape[0] # should be equal to number of pseudo absences below
pseudo_absence_coordinates[0].shape[0]
merged[merged['Salmo trutta']==1].shape[0] # should be equal to number of presences below
presence_coordinates[0].shape[0]
merged[merged['Salmo trutta'].isnull()].shape[0] # all that's left
360 * 720 == merged[merged['Salmo trutta']==0].shape[0] + merged[merged['Salmo trutta']==1].shape[0] + merged[merged['Salmo trutta'].isnull()].shape[0]
# == all pixels in 360 x 720 matrix
# Download from Google Drive: https://drive.google.com/open?id=0B9cazFzBtPuCaW0wRkk2N0g5d1k
lepidomeda_mollispinis = IUCNSpecies(name_species='Lepidomeda mollispinis')
lepidomeda_mollispinis.load_shapefile("../data/fish/selection/lepidomeda_mollispinis")
rasterized_lm = lepidomeda_mollispinis.rasterize(raster_file="./lepidomeda_mollispinis_full.tif", pixel_size=0.5)
plt.figure(figsize=(25,20))
plt.imshow(rasterized_lm, cmap="hot", interpolation="none")
selected_layers_lm, pseudo_absences_lm = biomes_adf.sample_pseudo_absences(species_raster_data=rasterized_lm, continents_raster_data=continents_rasters, number_of_pseudopoints=1000)
plt.figure(figsize=(25,20))
plt.imshow(selected_layers_lm, cmap="hot", interpolation="none")
plt.figure(figsize=(25,20))
plt.imshow(pseudo_absences_lm, cmap="hot", interpolation="none")
presence_coordinates_lm = lepidomeda_mollispinis.pixel_to_world_coordinates()
presences_dataframe = pd.DataFrame([presence_coordinates_lm[0], presence_coordinates_lm[1]]).T
presences_dataframe.columns=['decimallatitude', 'decimallongitude']
presences_dataframe[lepidomeda_mollispinis.name_species] = 1 # fill presences with 1's
presences_dataframe.set_index(['decimallatitude', 'decimallongitude'], inplace=True, drop=True)
presences_dataframe.head()
presences_dataframe.tail()
pseudo_absence_coordinates_lm = biomes_adf.pixel_to_world_coordinates(raster_data=pseudo_absences_lm)
pseudo_absences_dataframe = pd.DataFrame([pseudo_absence_coordinates_lm[0], pseudo_absence_coordinates_lm[1]]).T
pseudo_absences_dataframe.columns=['decimallatitude', 'decimallongitude']
pseudo_absences_dataframe[lepidomeda_mollispinis.name_species] = 0
pseudo_absences_dataframe.set_index(['decimallatitude', 'decimallongitude'], inplace=True, drop=True)
pseudo_absences_dataframe.head()
pseudo_absences_dataframe.tail()
merged1 = merged.combine_first(presences_dataframe)
merged1.tail()
merged1 = merged1.combine_first(pseudo_absences_dataframe)
merged1.tail()
merged1['Lepidomeda mollispinis'].unique()
merged1[merged1['Lepidomeda mollispinis']==0].shape # pseudo-absences
merged1[merged1['Lepidomeda mollispinis']==1].shape # presences
merged1[merged1['Lepidomeda mollispinis'].isnull()].shape
merged1[merged1['Lepidomeda mollispinis'].isnull()].shape[0] + merged1[merged1['Lepidomeda mollispinis']==1].shape[0] + merged1[merged1['Lepidomeda mollispinis']==0].shape[0]
salmo_trutta.get_data().shape_area.sum()
lepidomeda_mollispinis.get_data().shape_area.sum()
# Download from Google drive: https://drive.google.com/open?id=0B9cazFzBtPuCamEwWlZxV3lBZmc
esox_lucius = IUCNSpecies(name_species='Esox lucius')
esox_lucius.load_shapefile("../data/fish/selection/esox_lucius/")
rasterized_el = esox_lucius.rasterize(raster_file="./esox_lucius_full.tif", pixel_size=0.5)
plt.figure(figsize=(25,20))
plt.imshow(rasterized_el, cmap="hot", interpolation="none")
selected_layers_el, pseudo_absences_el = biomes_adf.sample_pseudo_absences(species_raster_data=rasterized_el, continents_raster_data=continents_rasters, number_of_pseudopoints=1000)
plt.figure(figsize=(25,20))
plt.imshow(selected_layers_el, cmap="hot", interpolation="none")
np.where((continents_rasters[2]+rasterized_el)>1)
(continents_rasters[2]+rasterized_el)[108,348]
plt.figure(figsize=(25,20))
plt.imshow(pseudo_absences_el, cmap="hot", interpolation="none")
presence_coordinates_el = esox_lucius.pixel_to_world_coordinates()
presences_dataframe = pd.DataFrame([presence_coordinates_el[0], presence_coordinates_el[1]]).T
presences_dataframe.columns=['decimallatitude', 'decimallongitude']
presences_dataframe[esox_lucius.name_species] = 1 # fill presences with 1's
presences_dataframe.set_index(['decimallatitude', 'decimallongitude'], inplace=True, drop=True)
presences_dataframe.head()
pseudo_absence_coordinates_el = biomes_adf.pixel_to_world_coordinates(raster_data=pseudo_absences_el)
pseudo_absences_dataframe = pd.DataFrame([pseudo_absence_coordinates_el[0], pseudo_absence_coordinates_el[1]]).T
pseudo_absences_dataframe.columns=['decimallatitude', 'decimallongitude']
pseudo_absences_dataframe[esox_lucius.name_species] = 0 # fill pseudo-absences with 0
pseudo_absences_dataframe.set_index(['decimallatitude', 'decimallongitude'], inplace=True, drop=True)
pseudo_absences_dataframe.head()
merged2 = merged1.combine_first(presences_dataframe)
merged2.head()
merged2 = merged2.combine_first(pseudo_absences_dataframe)
merged2.head()
merged2['Esox lucius'].unique()
merged2[merged2['Esox lucius']==0].shape # pseudo-absences
merged2[merged2['Esox lucius']==1].shape # presences
merged2[merged2['Esox lucius'].isnull()].shape
merged2[merged2['Esox lucius'].isnull()].shape[0] + merged2[merged2['Esox lucius']==1].shape[0] + merged2[merged2['Esox lucius']==0].shape[0]
merged2.tail()
# rearrange columns (nothing critical)
cols = merged2.columns.values
cols1 = [cols[4], cols[2], cols[3], cols[0], cols[1], cols[5]]
merged2 = merged2[cols1]
merged2.to_csv("../data/fish/selection/dataframe_merged_all_touching_false.csv")
merged2.columns.values
for idx, band in enumerate(continents_rasters):
continents_coordinates = biomes_adf.pixel_to_world_coordinates(raster_data=band)
continent_dataframe = pd.DataFrame([continents_coordinates[0], continents_coordinates[1]]).T
continent_dataframe.columns=['decimallatitude', 'decimallongitude']
continent_dataframe['Continent'] = idx
continent_dataframe.set_index(['decimallatitude', 'decimallongitude'], inplace=True, drop=True)
continent_dataframe.head()
merged2 = merged2.combine_first(continent_dataframe)
merged2[merged2.Continent==0].shape
np.count_nonzero(continents_rasters[0]) # good!
merged2[merged2.Continent==2].shape
np.count_nonzero(continents_rasters[2]) # good!
merged2.columns.values
# rearrange columns again
cols = merged2.columns.values
cols1 = [cols[5], cols[3], cols[4], cols[1], cols[2], cols[6], cols[0]]
merged2 = merged2[cols1]
merged2.head()
merged2.to_csv("../data/fish/selection/dataframe_merged_with_continents_rasterized_2.csv")
np.where((continents_rasters[2]+rasterized_el)>1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we extract just the feature variables we need to perform the classification. The predictor variables are the five log values and two geologic constraining variables, and we are also using depth. We also get a vector of the facies labels that correspond to each feature vector.
Step2: Preprocessing data with standard scaler
Step3: Make F1 performance scorers
Step4: Parameter tuning ( maximum number of features and number of estimators)
Step5: Random forest classifier
Step6: Average test F1 score with leave one well out
Step7: Predicting and saving facies for blind wells
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import scipy as sp
from scipy.stats import randint as sp_randint
from scipy.signal import argrelextrema
import matplotlib as mpl
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import preprocessing
from sklearn.metrics import f1_score, make_scorer
from sklearn.model_selection import LeaveOneGroupOut, validation_curve
filename = 'SFS_top70_selected_engineered_features.csv'
training_data = pd.read_csv(filename)
training_data.describe()
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data['Well Name'].unique()
y = training_data['Facies'].values
print(y[25:40])
print(np.shape(y))
X = training_data.drop(['Formation', 'Well Name','Facies'], axis=1)
print(np.shape(X))
X.describe(percentiles=[.05, .25, .50, .75, .95])
scaler = preprocessing.StandardScaler().fit(X)
X = scaler.transform(X)
Fscorer = make_scorer(f1_score, average = 'micro')
wells = training_data["Well Name"].values
logo = LeaveOneGroupOut()
from sklearn.ensemble import RandomForestClassifier
RF_clf100 = RandomForestClassifier (n_estimators=100, n_jobs=-1, random_state = 49)
RF_clf200 = RandomForestClassifier (n_estimators=200, n_jobs=-1, random_state = 49)
RF_clf300 = RandomForestClassifier (n_estimators=300, n_jobs=-1, random_state = 49)
RF_clf400 = RandomForestClassifier (n_estimators=400, n_jobs=-1, random_state = 49)
RF_clf500 = RandomForestClassifier (n_estimators=500, n_jobs=-1, random_state = 49)
RF_clf600 = RandomForestClassifier (n_estimators=600, n_jobs=-1, random_state = 49)
param_name = "max_features"
#param_range = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60]
param_range = [9, 12, 15, 18, 21, 24, 27, 30, 33, 36, 39, 42, 45, 48, 51]
plt.figure()
plt.suptitle('n_estimators = 100', fontsize=14, fontweight='bold')
_, test_scores = validation_curve(RF_clf100, X, y, cv=logo.split(X, y, groups=wells),
param_name=param_name, param_range=param_range,
scoring=Fscorer, n_jobs=-1)
test_scores_mean = np.mean(test_scores, axis=1)
plt.plot(param_range, test_scores_mean)
plt.xlabel(param_name)
plt.xlim(min(param_range), max(param_range))
plt.ylabel("F1")
plt.ylim(0.47, 0.57)
plt.show()
#print(max(test_scores_mean[argrelextrema(test_scores_mean, np.greater)]))
print(np.amax(test_scores_mean))
print(np.array(param_range)[test_scores_mean.argmax(axis=0)])
plt.figure()
plt.suptitle('n_estimators = 200', fontsize=14, fontweight='bold')
_, test_scores = validation_curve(RF_clf200, X, y, cv=logo.split(X, y, groups=wells),
param_name=param_name, param_range=param_range,
scoring=Fscorer, n_jobs=-1)
test_scores_mean = np.mean(test_scores, axis=1)
plt.plot(param_range, test_scores_mean)
plt.xlabel(param_name)
plt.xlim(min(param_range), max(param_range))
plt.ylabel("F1")
plt.ylim(0.47, 0.57)
plt.show()
#print(max(test_scores_mean[argrelextrema(test_scores_mean, np.greater)]))
print(np.amax(test_scores_mean))
print(np.array(param_range)[test_scores_mean.argmax(axis=0)])
plt.figure()
plt.suptitle('n_estimators = 300', fontsize=14, fontweight='bold')
_, test_scores = validation_curve(RF_clf300, X, y, cv=logo.split(X, y, groups=wells),
param_name=param_name, param_range=param_range,
scoring=Fscorer, n_jobs=-1)
test_scores_mean = np.mean(test_scores, axis=1)
plt.plot(param_range, test_scores_mean)
plt.xlabel(param_name)
plt.xlim(min(param_range), max(param_range))
plt.ylabel("F1")
plt.ylim(0.47, 0.57)
plt.show()
#print(max(test_scores_mean[argrelextrema(test_scores_mean, np.greater)]))
print(np.amax(test_scores_mean))
print(np.array(param_range)[test_scores_mean.argmax(axis=0)])
plt.figure()
plt.suptitle('n_estimators = 400', fontsize=14, fontweight='bold')
_, test_scores = validation_curve(RF_clf400, X, y, cv=logo.split(X, y, groups=wells),
param_name=param_name, param_range=param_range,
scoring=Fscorer, n_jobs=-1)
test_scores_mean = np.mean(test_scores, axis=1)
plt.plot(param_range, test_scores_mean)
plt.xlabel(param_name)
plt.xlim(min(param_range), max(param_range))
plt.ylabel("F1")
plt.ylim(0.47, 0.57)
plt.show()
#print(max(test_scores_mean[argrelextrema(test_scores_mean, np.greater)]))
print(np.amax(test_scores_mean))
print(np.array(param_range)[test_scores_mean.argmax(axis=0)])
plt.figure()
plt.suptitle('n_estimators = 500', fontsize=14, fontweight='bold')
_, test_scores = validation_curve(RF_clf500, X, y, cv=logo.split(X, y, groups=wells),
param_name=param_name, param_range=param_range,
scoring=Fscorer, n_jobs=-1)
test_scores_mean = np.mean(test_scores, axis=1)
plt.plot(param_range, test_scores_mean)
plt.xlabel(param_name)
plt.xlim(min(param_range), max(param_range))
plt.ylabel("F1")
plt.ylim(0.47, 0.57)
plt.show()
#print(max(test_scores_mean[argrelextrema(test_scores_mean, np.greater)]))
print(np.amax(test_scores_mean))
print(np.array(param_range)[test_scores_mean.argmax(axis=0)])
plt.figure()
plt.suptitle('n_estimators = 600', fontsize=14, fontweight='bold')
_, test_scores = validation_curve(RF_clf600, X, y, cv=logo.split(X, y, groups=wells),
param_name=param_name, param_range=param_range,
scoring=Fscorer, n_jobs=-1)
test_scores_mean = np.mean(test_scores, axis=1)
plt.plot(param_range, test_scores_mean)
plt.xlabel(param_name)
plt.xlim(min(param_range), max(param_range))
plt.ylabel("F1")
plt.ylim(0.47, 0.57)
plt.show()
#print(max(test_scores_mean[argrelextrema(test_scores_mean, np.greater)]))
print(np.amax(test_scores_mean))
print(np.array(param_range)[test_scores_mean.argmax(axis=0)])
RF_clf_f1 = RandomForestClassifier (n_estimators=600, max_features = 21,
n_jobs=-1, random_state = 49)
f1_RF = []
for train, test in logo.split(X, y, groups=wells):
well_name = wells[test[0]]
RF_clf_f1.fit(X[train], y[train])
pred = RF_clf_f1.predict(X[test])
sc = f1_score(y[test], pred, labels = np.arange(10), average = 'micro')
print("{:>20s} {:.3f}".format(well_name, sc))
f1_RF.append(sc)
print "-Average leave-one-well-out F1 Score: %6f" % (sum(f1_RF)/(1.0*(len(f1_RF))))
RF_clf_b = RandomForestClassifier (n_estimators=600, max_features = 21,
n_jobs=-1, random_state = 49)
blind = pd.read_csv('engineered_features_validation_set_top70.csv')
X_blind = np.array(blind.drop(['Formation', 'Well Name'], axis=1))
scaler1 = preprocessing.StandardScaler().fit(X_blind)
X_blind = scaler1.transform(X_blind)
y_pred = RF_clf_b.fit(X, y).predict(X_blind)
#blind['Facies'] = y_pred
np.save('ypred_RF_SFS_VC.npy', y_pred)
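# Hedged sketch (not in the original): the predictions could also be attached to the
# blind-well DataFrame and written out for inspection; the file name below is hypothetical.
# blind['Facies'] = y_pred
# blind.to_csv('blind_wells_with_predicted_facies.csv', index=False)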
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 1 - Gradient Descent
Step4: Expected Output
Step6: Expected Output
Step8: Expected Output
Step10: Expected Output
Step12: Expected Output
Step13: Expected Output
Step15: We have already implemented a 3-layer neural network. You will train it with
Step16: You will now run this 3 layer neural network with each of the 3 optimization methods.
Step17: 5.2 - Mini-batch gradient descent with momentum
Step18: 5.3 - Mini-batch with Adam mode
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import math
import sklearn
import sklearn.datasets
from opt_utils_v1a import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
from opt_utils_v1a import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# GRADED FUNCTION: update_parameters_with_gd
def update_parameters_with_gd(parameters, grads, learning_rate):
    """
    Update parameters using one step of gradient descent
Arguments:
parameters -- python dictionary containing your parameters to be updated:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients to update each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
learning_rate -- the learning rate, scalar.
Returns:
    parameters -- python dictionary containing your updated parameters
    """
L = len(parameters) // 2 # number of layers in the neural networks
# Update rule for each parameter
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads['dW' + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads['db' + str(l+1)]
### END CODE HERE ###
return parameters
parameters, grads, learning_rate = update_parameters_with_gd_test_case()
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
print("W1 =\n" + str(parameters["W1"]))
print("b1 =\n" + str(parameters["b1"]))
print("W2 =\n" + str(parameters["W2"]))
print("b2 =\n" + str(parameters["b2"]))
# GRADED FUNCTION: random_mini_batches
def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
    """
    Creates a list of random minibatches from (X, Y)
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
mini_batch_size -- size of the mini-batches, integer
Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """
np.random.seed(seed) # To make your "random" minibatches the same as ours
m = X.shape[1] # number of training examples
mini_batches = []
# Step 1: Shuffle (X, Y)
permutation = list(np.random.permutation(m))
shuffled_X = X[:, permutation]
shuffled_Y = Y[:, permutation].reshape((1,m))
# Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning
for k in range(0, num_complete_minibatches):
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:, mini_batch_size * k : mini_batch_size * (k + 1)]
mini_batch_Y = shuffled_Y[:, mini_batch_size * k : mini_batch_size * (k + 1)]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
# Handling the end case (last mini-batch < mini_batch_size)
if m % mini_batch_size != 0:
### START CODE HERE ### (approx. 2 lines)
        mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size : m]
        mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size : m]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
return mini_batches
X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()
mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)
print ("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape))
print ("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape))
print ("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape))
print ("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape))
print ("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape))
print ("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape))
print ("mini batch sanity check: " + str(mini_batches[0][0][0][0:3]))
# GRADED FUNCTION: initialize_velocity
def initialize_velocity(parameters):
    """
    Initializes the velocity as a python dictionary with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
Returns:
v -- python dictionary containing the current velocity.
v['dW' + str(l)] = velocity of dWl
    v['db' + str(l)] = velocity of dbl
    """
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
# Initialize velocity
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
### END CODE HERE ###
return v
parameters = initialize_velocity_test_case()
v = initialize_velocity(parameters)
print("v[\"dW1\"] =\n" + str(v["dW1"]))
print("v[\"db1\"] =\n" + str(v["db1"]))
print("v[\"dW2\"] =\n" + str(v["dW2"]))
print("v[\"db2\"] =\n" + str(v["db2"]))
# GRADED FUNCTION: update_parameters_with_momentum
def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
    """
    Update parameters using Momentum
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- python dictionary containing the current velocity:
v['dW' + str(l)] = ...
v['db' + str(l)] = ...
beta -- the momentum hyperparameter, scalar
learning_rate -- the learning rate, scalar
Returns:
parameters -- python dictionary containing your updated parameters
    v -- python dictionary containing your updated velocities
    """
L = len(parameters) // 2 # number of layers in the neural networks
# Momentum update for each parameter
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
# compute velocities
v["dW" + str(l+1)] = beta * v["dW" + str(l+1)] + (1 - beta)* grads["dW" + str(l+1)]
v["db" + str(l+1)] = beta * v["db" + str(l+1)] + (1 - beta)* grads["db" + str(l+1)]
# update parameters
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * v["dW" + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * v["db" + str(l+1)]
### END CODE HERE ###
return parameters, v
parameters, grads, v = update_parameters_with_momentum_test_case()
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01)
print("W1 = \n" + str(parameters["W1"]))
print("b1 = \n" + str(parameters["b1"]))
print("W2 = \n" + str(parameters["W2"]))
print("b2 = \n" + str(parameters["b2"]))
print("v[\"dW1\"] = \n" + str(v["dW1"]))
print("v[\"db1\"] = \n" + str(v["db1"]))
print("v[\"dW2\"] = \n" + str(v["dW2"]))
print("v[\"db2\"] = v" + str(v["db2"]))
# GRADED FUNCTION: initialize_adam
def initialize_adam(parameters) :
    """
    Initializes v and s as two python dictionaries with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters["W" + str(l)] = Wl
parameters["b" + str(l)] = bl
Returns:
v -- python dictionary that will contain the exponentially weighted average of the gradient.
v["dW" + str(l)] = ...
v["db" + str(l)] = ...
s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
s["dW" + str(l)] = ...
s["db" + str(l)] = ...
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
s = {}
# Initialize v, s. Input: "parameters". Outputs: "v, s".
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
s["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
s["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
### END CODE HERE ###
return v, s
parameters = initialize_adam_test_case()
v, s = initialize_adam(parameters)
print("v[\"dW1\"] = \n" + str(v["dW1"]))
print("v[\"db1\"] = \n" + str(v["db1"]))
print("v[\"dW2\"] = \n" + str(v["dW2"]))
print("v[\"db2\"] = \n" + str(v["db2"]))
print("s[\"dW1\"] = \n" + str(s["dW1"]))
print("s[\"db1\"] = \n" + str(s["db1"]))
print("s[\"dW2\"] = \n" + str(s["dW2"]))
print("s[\"db2\"] = \n" + str(s["db2"]))
# GRADED FUNCTION: update_parameters_with_adam
def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):
    """
    Update parameters using Adam
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
learning_rate -- the learning rate, scalar.
beta1 -- Exponential decay hyperparameter for the first moment estimates
beta2 -- Exponential decay hyperparameter for the second moment estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
Returns:
parameters -- python dictionary containing your updated parameters
v -- Adam variable, moving average of the first gradient, python dictionary
    s -- Adam variable, moving average of the squared gradient, python dictionary
    """
L = len(parameters) // 2 # number of layers in the neural networks
v_corrected = {} # Initializing first moment estimate, python dictionary
s_corrected = {} # Initializing second moment estimate, python dictionary
# Perform Adam update on all parameters
for l in range(L):
# Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = beta1 * v["dW" + str(l+1)] + (1 - beta1)* grads["dW" + str(l+1)]
v["db" + str(l+1)] = beta1 * v["db" + str(l+1)] + (1 - beta1)* grads["db" + str(l+1)]
### END CODE HERE ###
# Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
### START CODE HERE ### (approx. 2 lines)
v_corrected["dW" + str(l+1)] = v["dW" + str(l+1)] / (1 - beta1 ** t)
v_corrected["db" + str(l+1)] = v["db" + str(l+1)] / (1 - beta1 ** t)
### END CODE HERE ###
# Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
### START CODE HERE ### (approx. 2 lines)
s["dW" + str(l+1)] = beta2 * s["dW" + str(l+1)] + (1 - beta2) * (grads["dW" + str(l+1)]**2)
s["db" + str(l+1)] = beta2 * s["db" + str(l+1)] + (1 - beta2) * (grads["db" + str(l+1)]**2)
### END CODE HERE ###
# Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
### START CODE HERE ### (approx. 2 lines)
s_corrected["dW" + str(l+1)] = s["dW" + str(l+1)] / (1 - beta2 ** t)
s_corrected["db" + str(l+1)] = s["db" + str(l+1)] / (1 - beta2 ** t)
### END CODE HERE ###
# Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * (v_corrected["dW" + str(l+1)] / (np.sqrt(s_corrected["dW" + str(l+1)]) + epsilon))
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * (v_corrected["db" + str(l+1)] / (np.sqrt(s_corrected["db" + str(l+1)]) + epsilon))
### END CODE HERE ###
return parameters, v, s
parameters, grads, v, s = update_parameters_with_adam_test_case()
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2)
print("W1 = \n" + str(parameters["W1"]))
print("b1 = \n" + str(parameters["b1"]))
print("W2 = \n" + str(parameters["W2"]))
print("b2 = \n" + str(parameters["b2"]))
print("v[\"dW1\"] = \n" + str(v["dW1"]))
print("v[\"db1\"] = \n" + str(v["db1"]))
print("v[\"dW2\"] = \n" + str(v["dW2"]))
print("v[\"db2\"] = \n" + str(v["db2"]))
print("s[\"dW1\"] = \n" + str(s["dW1"]))
print("s[\"db1\"] = \n" + str(s["db1"]))
print("s[\"dW2\"] = \n" + str(s["dW2"]))
print("s[\"db2\"] = \n" + str(s["db2"]))
train_X, train_Y = load_dataset()
def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):
    """
    3-layer neural network model which can be run in different optimizer modes.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
layers_dims -- python list, containing the size of each layer
learning_rate -- the learning rate, scalar.
mini_batch_size -- the size of a mini batch
beta -- Momentum hyperparameter
beta1 -- Exponential decay hyperparameter for the past gradients estimates
beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
num_epochs -- number of epochs
print_cost -- True to print the cost every 1000 epochs
Returns:
    parameters -- python dictionary containing your updated parameters
    """
L = len(layers_dims) # number of layers in the neural networks
costs = [] # to keep track of the cost
t = 0 # initializing the counter required for Adam update
seed = 10 # For grading purposes, so that your "random" minibatches are the same as ours
m = X.shape[1] # number of training examples
# Initialize parameters
parameters = initialize_parameters(layers_dims)
# Initialize the optimizer
if optimizer == "gd":
pass # no initialization required for gradient descent
elif optimizer == "momentum":
v = initialize_velocity(parameters)
elif optimizer == "adam":
v, s = initialize_adam(parameters)
# Optimization loop
for i in range(num_epochs):
# Define the random minibatches. We increment the seed to reshuffle differently the dataset after each epoch
seed = seed + 1
minibatches = random_mini_batches(X, Y, mini_batch_size, seed)
cost_total = 0
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# Forward propagation
a3, caches = forward_propagation(minibatch_X, parameters)
# Compute cost and add to the cost total
cost_total += compute_cost(a3, minibatch_Y)
# Backward propagation
grads = backward_propagation(minibatch_X, minibatch_Y, caches)
# Update parameters
if optimizer == "gd":
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
elif optimizer == "momentum":
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
elif optimizer == "adam":
t = t + 1 # Adam counter
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
t, learning_rate, beta1, beta2, epsilon)
cost_avg = cost_total / m
# Print the cost every 1000 epoch
if print_cost and i % 1000 == 0:
print ("Cost after epoch %i: %f" %(i, cost_avg))
if print_cost and i % 100 == 0:
costs.append(cost_avg)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('epochs (per 100)')
plt.title("Learning rate = " + str(learning_rate))
plt.show()
return parameters
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Gradient Descent optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import libraries
Step2: Configure GCP environment settings
Step3: Authenticate your GCP account
Step4: Process the item embeddings data
Step5: Get a count of the records in the item_embeddings table
Step6: See a sample of the data in the item_embeddings table
Step7: Export the item embedding vector data
Step8: Run the pipeline
Step9: List the CSV files that were written to Cloud Storage
|
<ASSISTANT_TASK:>
Python Code:
!pip install -U -q apache-beam[gcp]
import os
from datetime import datetime
import apache_beam as beam
import numpy as np
import tensorflow.io as tf_io
PROJECT_ID = "yourProject" # Change to your project.
BUCKET = "yourBucketName" # Change to the bucket you created.
REGION = "yourDataflowRegion" # Change to your Dataflow region.
BQ_DATASET_NAME = "recommendations"
!gcloud config set project $PROJECT_ID
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except:
pass
%%bigquery --project $PROJECT_ID
CALL recommendations.sp_ExractEmbeddings()
%%bigquery --project $PROJECT_ID
SELECT COUNT(*) embedding_count
FROM recommendations.item_embeddings;
%%bigquery --project $PROJECT_ID
SELECT *
FROM recommendations.item_embeddings
LIMIT 5;
runner = "DataflowRunner"
timestamp = datetime.utcnow().strftime("%y%m%d%H%M%S")
job_name = f"ks-bqml-export-embeddings-{timestamp}"
bq_dataset_name = BQ_DATASET_NAME
embeddings_table_name = "item_embeddings"
output_dir = f"gs://{BUCKET}/bqml/item_embeddings"
project = PROJECT_ID
temp_location = os.path.join(output_dir, "tmp")
region = REGION
print(f"runner: {runner}")
print(f"job_name: {job_name}")
print(f"bq_dataset_name: {bq_dataset_name}")
print(f"embeddings_table_name: {embeddings_table_name}")
print(f"output_dir: {output_dir}")
print(f"project: {project}")
print(f"temp_location: {temp_location}")
print(f"region: {region}")
try:
os.chdir(os.path.join(os.getcwd(), "embeddings_exporter"))
except:
pass
if tf_io.gfile.exists(output_dir):
print("Removing {} contents...".format(output_dir))
tf_io.gfile.rmtree(output_dir)
print("Creating output: {}".format(output_dir))
tf_io.gfile.makedirs(output_dir)
!python runner.py \
--runner={runner} \
--job_name={job_name} \
--bq_dataset_name={bq_dataset_name} \
--embeddings_table_name={embeddings_table_name} \
--output_dir={output_dir} \
--project={project} \
--temp_location={temp_location} \
--region={region}
!gsutil ls {output_dir}/embeddings-*.csv
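# Added sketch (not part of the original notebook): peek at the first few rows of one
# exported file to confirm what was written. Assumes the notebook environment can read
# gs:// paths through TensorFlow's gfile (tf_io was imported above).
csv_files = tf_io.gfile.glob(f"{output_dir}/embeddings-*.csv")
with tf_io.gfile.GFile(csv_files[0], "r") as f:
    for _ in range(3):
        print(f.readline().strip())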
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Floats
Step2: This can give us surprises
Step3: For "infinite" precision float arithmetic you can use decimal or mpmath
Step4: Getting 80 digits of 1/7
Step5: We can see how many digits are true of 1/7 using doubles
Step6: Decimal arithmetic produces decimal objects
Step7: Decimal objects can be printed with format
Step8: A more complex example
|
<ASSISTANT_TASK:>
Python Code:
x = 7**273
print(x)
print(type(x))
format(0.1, '.80f')
.1 + .1 + .1 == .3
.1 + .1 == .2
from decimal import Decimal, getcontext
getcontext().prec=80
format(Decimal(1)/Decimal(7), '.80f')
format(1/7, '.80f')
#12345678901234567 (17 digits)
Decimal(1)/Decimal(7)
print('{:.50f}'.format(Decimal(1)/Decimal(7)))
# https://stackoverflow.com/questions/28284996/python-pi-calculation
from decimal import Decimal, getcontext
getcontext().prec=1000
my_pi= sum(1/Decimal(16)**k *
(Decimal(4)/(8*k+1) -
Decimal(2)/(8*k+4) -
Decimal(1)/(8*k+5) -
Decimal(1)/(8*k+6)) for k in range(1000))
'{:.1000f}'.format(my_pi)
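# Added sanity check (not in the original notebook): when rounded back to a double,
# the 1000-digit Decimal value should coincide with math.pi.
import math
print(abs(float(my_pi) - math.pi) < 1e-15)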
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: LOAD data from Mongo
Step2: Preprocessing
Step3: Save/load the model
Step4: Efforts with pyLDAvis (visualize the LDA topics)
Step5: Interactive version available here
|
<ASSISTANT_TASK:>
Python Code:
from pymongo import MongoClient
import json
client = MongoClient()
db = client.Twitter
import pandas as pd
import time
import re
from nltk.tokenize import RegexpTokenizer
import HTMLParser # In Python 3.4+ import html
import nltk
from nltk.corpus import stopwords
start_time = time.time()
#we are filtering out tweets of different languages and outside of the US
filter_query = {
"$and":[ {"place.country_code":"US"}, { "lang": "en" } ]
}
#we are keeping only our fields of interest
columns_query = {
'text':1,
'entities.hashtags':1,
'entities.user_mentions':1,
'place.full_name':1,
'place.bounding_box':1
}
tweets = pd.DataFrame(list(db.tweets.find(
filter_query,
columns_query
)#.limit()
)
)
elapsed_time = time.time() - start_time
print elapsed_time
#parse state variable
tweets['state'] = map(lambda place_dict: place_dict['full_name'][-2:] ,tweets['place'])
tweets['state'].value_counts().head()
# #for one state only
# state = 'CA'
# tweets = tweets[tweets['state']==state]
len(tweets)
def Clean(unescaped_tweet):
'''This function takes a tweet as input and returns a tokenizing list.'''
tokenizer = RegexpTokenizer(r'\w+')
cleaned_tweet_tokens = tokenizer.tokenize(unescaped_tweet.lower())
return cleaned_tweet_tokens
start_time = time.time() #Starts time
tweets['text'] = tweets['text'].apply(lambda tweet: re.sub(r"http\S+", "", tweet))
#########################################################
def trump_mention(tweet):
trump_count = 0
if ('trump' in tweet.lower()) or ('donald' in tweet.lower()):
return True
return False
tweets['Trump'] = tweets['text'].apply(lambda tweet: trump_mention(tweet))
##############################################################
#tweet mentions --->@
#tweet hashtags --->#
#create two column with the the hashtags and the mentions
tweets['mentions'] = tweets['text'].apply(lambda tweet: re.findall(r'\@\w+',tweet))
tweets['hashtags'] = tweets['text'].apply(lambda tweet: re.findall(r'\#\w+',tweet))
#remove hashtags and mentions
tweets['text'] = tweets['text'].apply(lambda tweet: re.sub(r"\@\w+" , "", tweet))
tweets['text'] = tweets['text'].apply(lambda tweet: re.sub(r"\#\w+" , "", tweet))
#remove the numbers from the text
tweets['text'] =tweets['text'].apply(lambda tweet: ''.join([i for i in tweet if not i.isdigit()]))
trump_count = 0
clinton_count =0
#remove the names and surnames of the two candidates
tweets['text'] =tweets['text'].apply(lambda tweet: re.sub(r"Trump" , "", tweet))
tweets['text'] =tweets['text'].apply(lambda tweet: re.sub(r"Clinton" , "", tweet))
tweets['text'] =tweets['text'].apply(lambda tweet: re.sub(r"Donald" , "", tweet))
tweets['text'] =tweets['text'].apply(lambda tweet: re.sub(r"Hillary" , "", tweet))
tweets['text'] =tweets['text'].apply(lambda tweet: re.sub(r"USA" , "", tweet))
tweets['text'] =tweets['text'].apply(lambda tweet: re.sub(r"amp" , "", tweet))
#tokenize the text and add an extra column
tweets['token'] = tweets['text'].apply(lambda tweet: Clean(tweet))
tweets['token'] = tweets['token'].apply(lambda x: list(set(x)-set(stopwords.words('english'))))
elapsed_time = time.time() - start_time #time ends
print elapsed_time
tweets.head()
tweets.head()
#test['tags'] = map(lambda tweet: map(lambda tweet: tweet['text'] , tweet['entities']['hashtags']) if tweet['entities']['hashtags'] != None else None, raw_tweet[:100])
#tweets['text'][9]
doc_complete = tweets['token'].tolist()
doc_complete[:2]
import gensim
import pickle
import gensim
from gensim import corpora
# Creating the term dictionary of our courpus, where every unique term is assigned an index
dictionary = corpora.Dictionary(doc_complete)
pickle.dump(dictionary, open( 'dictionary2.pickle', "wb" ) )
# Converting list of documents (corpus) into Document Term Matrix using dictionary prepared above.
doc_term_matrix = [dictionary.doc2bow(doc) for doc in doc_complete]
pickle.dump(doc_term_matrix, open( 'doc_term_matrix.pickle', "wb" ) )
Lda = gensim.models.ldamulticore.LdaMulticore
nr_topics = 10
nr_passes = 100
start_time = time.time()
# Creating the object for LDA model using gensim library
# Running and Trainign LDA model on the document term matrix.
ldamodel = Lda(doc_term_matrix, num_topics=nr_topics, id2word = dictionary, passes=nr_passes)
elapsed_time = time.time() - start_time
print 'Topic modelling for', nr_topics,'topics,', nr_passes,'passes,',len(tweets),'tweets:','\ncomplete in',elapsed_time/60.,'minutes'
# Runtimes:
# Florida (~4K) ~ 16 min on 10 topics, 300 passes
# CA (57K) - 48 min on 10 topics 300 passes
# can we do it on the whole data -> take the topics and classify each tweet within them.
# then we have discrete sets with topics and words weights in each topic.
# so then isn't a tweet represented by the appropriate values?
# Print 2 topics and describe then with 4 words.
topics = ldamodel.print_topics(num_topics=nr_topics, num_words=50)
i=0
for topic in topics:
print topic
print ""
i+=1
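# Added sketch (not in the original notebook): tag each tweet with its dominant topic,
# as suggested in the comments above. get_document_topics returns (topic_id, probability)
# pairs for a bag-of-words vector, so the argmax over probabilities picks the topic.
tweets['topic'] = [max(ldamodel.get_document_topics(bow), key=lambda tp: tp[1])[0]
                   for bow in doc_term_matrix]
print tweets['topic'].value_counts().head()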
import pickle
nr_topics = 10
nr_passes = 100
state = 'allstates'
name = "trained models/lda/%s_%itopics_%ipasses.pickle"%(state,nr_topics,nr_passes)
print "Procceed to save model in:", name
pickle.dump(ldamodel, open( name, "wb" ) )
#load
ldamodel = pickle.load(open(name,'rb'))
import time
import pyLDAvis.gensim
pyLDAvis.enable_notebook()
#load the LDA results (model, dictionary and corpus)
start_time = time.time()
ldamodel = pickle.load(open('trained models/lda/allstates_10topics_100passes.pickle'))
dictandcorpus = pickle.load(open('trained models/lda/Dictionary.pickle'))
c = dictandcorpus[1]
d = dictandcorpus[0]
del dictandcorpus
elapsed_time = time.time() - start_time
print elapsed_time
data = pyLDAvis.gensim.prepare(ldamodel, c, d)
data
#save results as an html file
pyLDAvis.save_html(data, open('LDA topics.html','wb'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
#print(type(text))
vocab=set(text)
vocab_to_int= {word:integer for integer, word in enumerate(vocab)}
int_to_vocab= {integer:word for integer, word in enumerate(vocab)}
return (vocab_to_int, int_to_vocab)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
return {
'.': '||Period||',
',': '||Comma||',
'"': '||Quotation_Mark||',
';': '||Semicolon||',
'!': '||Exclamation_mark||',
'?': '||Question_mark||',
'(': '||Left_Parentheses||',
')': '||Right_Parentheses||',
'--': '||Dash||',
"\n": '||Return||'
}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
input = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return (input, targets, learning_rate)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
lstm_cell_1 = tf.contrib.rnn.BasicLSTMCell(rnn_size)
lstm_cell_2 = tf.contrib.rnn.BasicLSTMCell(rnn_size)
rnn_cell = tf.contrib.rnn.MultiRNNCell([lstm_cell_1, lstm_cell_2])
init_state = rnn_cell.zero_state(batch_size, tf.float32)
init_state = tf.identity(init_state, name= "initial_state")
return (rnn_cell, init_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
return tf.contrib.layers.embed_sequence(input_data, vocab_size, embed_dim)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name="final_state")
return (outputs, final_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
inputs = get_embed(input_data, vocab_size, embed_dim)
rnn_output, rnn_state = build_rnn(cell, inputs)
logits = tf.contrib.layers.fully_connected(rnn_output, vocab_size, activation_fn=None)
return (logits, rnn_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
n_batches = int(len(int_text) / (batch_size * seq_length))
#print(n_batches)
# Drop the last few characters to make only full batches
x_data = np.array(int_text[: n_batches * batch_size * seq_length])
y_data = np.array(int_text[1: n_batches * batch_size * seq_length + 1])
# The last target of the last batch should be the first input of the first batch.
y_data = np.concatenate([ y_data[:-1], [x_data[0]] ])
#print(x_data)
#print(y_data)
x_batches = np.split(x_data.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(y_data.reshape(batch_size, -1), n_batches, 1)
return np.array(list(zip(x_batches, y_batches)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
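# Added sketch (not part of the project template): a quick look at the batch layout on a
# toy sequence. The returned array has shape [n_batches, 2, batch_size, seq_length],
# where the 2 holds inputs and targets.
demo_batches = get_batches(list(range(20)), 2, 3)
print(demo_batches.shape)
print(demo_batches[0])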
# Number of Epochs
num_epochs = 300
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 256
# Sequence Length
seq_length = 12
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 150
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
inputs = loaded_graph.get_tensor_by_name('input:0');
initial_state = loaded_graph.get_tensor_by_name('initial_state:0')
final_state = loaded_graph.get_tensor_by_name('final_state:0')
probabilities = loaded_graph.get_tensor_by_name('probs:0')
return inputs, initial_state, final_state, probabilities
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
int_word = np.random.choice(len(int_to_vocab), p=probabilities)
return int_to_vocab[int_word]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
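# Added sketch (not part of the project template): persist the generated script
# so it can be inspected outside the notebook.
with open('generated_script.txt', 'w') as f:
    f.write(tv_script)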
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Unsupervised Learning
Step2: Applying agglomerative clustering via scikit-learn
Step3:
Step4: K Means
Step5: Affinity Propagation
Step6: MeanShift
Step7: Mixture of Gaussian Models
Step8:
Step9: Classification
Step10: Decision Tree accuracy and time elapsed calculation
Step11: Random Forest accuracy and time elapsed calculation
Step12: Naive Bayes accuracy and time elapsed calculation
Step13: KNN accuracy and time elapsed calculation
Step14: SVM accuracy and time elapsed calculation
Step20: Ensemble Learning
Step21: Bagging -- Building an ensemble of classifiers from bootstrap samples
Step22: Leveraging weak learners via adaptive boosting
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
from sklearn.cross_validation import train_test_split
from sklearn import cross_validation, metrics
from sklearn import preprocessing
import matplotlib
import matplotlib.pyplot as plt
# read .csv from provided dataset
csv_filename1="a1_raw.csv"
csv_filename2="a1_va3.csv"
# df=pd.read_csv(csv_filename,index_col=0)
df1=pd.read_csv(csv_filename1 , skiprows=[1,2,3,4])
df2=pd.read_csv(csv_filename2)
df1.head()
df1.shape
df1['phase'].unique()
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df1['phase'] = le.fit_transform(df1['phase'])
df1['phase'].unique()
df2.head()
df2.shape
df2['Phase'].unique()
df1.columns
df2.columns
df2.rename(columns={'Phase': 'phase'}, inplace=True)
df1.phase.unique()
df2.phase.unique()
a = df2.phase == 'D'
b = df2.phase == 'P'
c = df2.phase == 'S'
d = df2.phase == 'H'
e = df2.phase == 'R'
df2.loc[a,'phase'] = 'Rest'
df2.loc[b,'phase'] = 'Preparation'
df2.loc[c,'phase'] = 'Stroke'
df2.loc[d,'phase'] = 'Hold'
df2.loc[e,'phase'] = 'Retraction'
df2.head(3)
df2.phase.unique()
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df2['phase'] = le.fit_transform(df2['phase'])
df2.phase.unique()
df1.groupby('phase').count()
df2.groupby('phase').count()
df1.sort('phase',inplace=True)
df2.sort('phase',inplace=True)
df2.tail()
left = pd.DataFrame({
    'key2': ['0', '2', '1', '3', '0', '1'],
    'A': ['A0', 'A1', 'A2', 'A3', 'A4', 'A5'],
    'B': ['B0', 'B1', 'B2', 'B3', 'B4', 'B5']})
right = pd.DataFrame({
    'key2': ['0', '1', '2', '0', '1', '3'],
    'C': ['C0', 'C1', 'C2', 'C3', 'C4', 'C5'],
    'D': ['D0', 'D1', 'D2', 'D3', 'D4', 'D5']})
left
right
left.sort('key2',inplace=True)
left
right.sort('key2',inplace=True)
right
result = pd.merge(left, right, on=['key2'])
result
result2 = pd.merge(left, right, on=['key2'], how='right')
result2
df = pd.merge(df1, df2, on='phase')
df.head()
df.columns
df[:1]
df1.shape,df2.shape,df.shape
df.drop('timestamp', axis=1, inplace=True)
cols = list(df.columns)
features = cols
features.remove('phase')
len(features)
df1.shape,df2.shape,df.shape
df1.drop('phase',axis=1,inplace=True)
df_1 = pd.concat([df1,df2],axis=1)
df_1.drop('timestamp' , axis=1, inplace=True )
df_1.shape
cols = list(df_1.columns)
features = cols
features.remove('phase')
X = df_1[features]
y = df_1['phase']
# split dataset to 60% training and 40% testing
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0)
print (X_train.shape, y_train.shape)
len(features)
# Apply PCA with the same number of dimensions as variables in the dataset
from sklearn.decomposition import PCA
pca = PCA(n_components=50)
pca.fit(X)
# Print the components and the amount of variance in the data contained in each dimension
print(pca.components_)
print(pca.explained_variance_ratio_)
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(list(pca.explained_variance_ratio_),'-o')
plt.title('Explained variance ratio as function of PCA components')
plt.ylabel('Explained variance ratio')
plt.xlabel('Component')
plt.show()
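# Added sketch (not in the original notebook): cumulative explained variance is a common
# way to decide how many components to keep, e.g. enough to explain ~95% of the variance.
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_components_95 = int(np.argmax(cum_var >= 0.95)) + 1
print("Components needed to explain 95% of the variance: {}".format(n_components_95))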
# First we reduce the data to two dimensions using PCA to capture variation
pca = PCA(n_components=2)
reduced_data = pca.fit_transform(X)
print(reduced_data[:10]) # print upto 10 elements
# Import clustering modules
from sklearn.cluster import KMeans
from sklearn.mixture import GMM
kmeans = KMeans(n_clusters=5)
clusters = kmeans.fit(reduced_data)
print(clusters)
# Plot the decision boundary by building a mesh grid to populate a graph.
x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
hx = (x_max-x_min)/1000.
hy = (y_max-y_min)/1000.
xx, yy = np.meshgrid(np.arange(x_min, x_max, hx), np.arange(y_min, y_max, hy))
# Obtain labels for each point in mesh. Use last trained model.
Z = clusters.predict(np.c_[xx.ravel(), yy.ravel()])
# Find the centroids for KMeans or the cluster means for GMM
centroids = kmeans.cluster_centers_
print('*** K MEANS CENTROIDS ***')
print(centroids)
# TRANSFORM DATA BACK TO ORIGINAL SPACE FOR ANSWERING 7
print('*** CENTROIDS TRANSFERED TO ORIGINAL SPACE ***')
print(pca.inverse_transform(centroids))
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1)
plt.clf()
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect='auto', origin='lower')
plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='x', s=169, linewidths=3,
color='w', zorder=10)
plt.title('Clustering on the seeds dataset (PCA-reduced data)\n'
'Centroids are marked with white cross')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
from sklearn.cluster import AgglomerativeClustering
ac = AgglomerativeClustering(n_clusters=5, affinity='euclidean', linkage='complete')
labels = ac.fit_predict(X)
print('Cluster labels: %s' % labels)
X = df_1[features]
y = df_1['phase']
# split dataset to 60% training and 40% testing
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0)
from sklearn import cluster
clf = cluster.KMeans(init='k-means++', n_clusters=5, random_state=5)
clf.fit(X_train)
print clf.labels_.shape
print clf.labels_
# Predict clusters on testing data
y_pred = clf.predict(X_test)
from sklearn import metrics
print "Addjusted rand score:{:.2}".format(metrics.adjusted_rand_score(y_test, y_pred))
print "Homogeneity score:{:.2} ".format(metrics.homogeneity_score(y_test, y_pred))
print "Completeness score: {:.2} ".format(metrics.completeness_score(y_test, y_pred))
print "Confusion matrix"
print metrics.confusion_matrix(y_test, y_pred)
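# Added sketch (not in the original notebook): the silhouette coefficient is an internal
# clustering quality measure that does not need the true phase labels.
print "Silhouette score: {:.2}".format(metrics.silhouette_score(X_test, y_pred))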
# Affinity propagation
aff = cluster.AffinityPropagation()
aff.fit(X_train)
print aff.cluster_centers_indices_.shape
y_pred = aff.predict(X_test)
from sklearn import metrics
print "Addjusted rand score:{:.2}".format(metrics.adjusted_rand_score(y_test, y_pred))
print "Homogeneity score:{:.2} ".format(metrics.homogeneity_score(y_test, y_pred))
print "Completeness score: {:.2} ".format(metrics.completeness_score(y_test, y_pred))
print "Confusion matrix"
print metrics.confusion_matrix(y_test, y_pred)
ms = cluster.MeanShift()
ms.fit(X_train)
y_pred = ms.predict(X_test)
from sklearn import metrics
print "Addjusted rand score:{:.2}".format(metrics.adjusted_rand_score(y_test, y_pred))
print "Homogeneity score:{:.2} ".format(metrics.homogeneity_score(y_test, y_pred))
print "Completeness score: {:.2} ".format(metrics.completeness_score(y_test, y_pred))
print "Confusion matrix"
print metrics.confusion_matrix(y_test, y_pred)
from sklearn import mixture
# Define a heldout dataset to estimate covariance type
X_train_heldout, X_test_heldout, y_train_heldout, y_test_heldout = train_test_split(
X_train, y_train,test_size=0.25, random_state=42)
for covariance_type in ['spherical','tied','diag','full']:
gm=mixture.GMM(n_components=100, covariance_type=covariance_type, random_state=42, n_init=5)
gm.fit(X_train_heldout)
y_pred=gm.predict(X_test_heldout)
print "Adjusted rand score for covariance={}:{:.2}".format(covariance_type,
metrics.adjusted_rand_score(y_test_heldout, y_pred))
pca = PCA(n_components=2)
X = pca.fit_transform(X)
c = []
from matplotlib.pyplot import cm
n=6
color=iter(cm.rainbow(np.linspace(0,1,n)))
for i in range(n):
c.append(next(color))
n = 5
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(8,6))
km = KMeans(n_clusters= n , random_state=0)
y_km = km.fit_predict(X)
for i in range(n):
ax1.scatter(X[y_km==i,0], X[y_km==i,1], c=c[i], marker='o', s=40, label='cluster{}'.format(i))
ax1.set_title('K-means clustering')
ac = AgglomerativeClustering(n_clusters=n, affinity='euclidean', linkage='complete')
y_ac = ac.fit_predict(X)
for i in range(n):
ax2.scatter(X[y_ac==i,0], X[y_ac==i,1], c=c[i], marker='o', s=40, label='cluster{}'.format(i))
ax2.set_title('Agglomerative clustering')
# Put a legend below current axis
plt.legend()
plt.tight_layout()
#plt.savefig('./figures/kmeans_and_ac.png', dpi=300)
plt.show()
import os
from sklearn.tree import DecisionTreeClassifier, export_graphviz
import pandas as pd
import numpy as np
from sklearn.cross_validation import train_test_split
from sklearn import cross_validation, metrics
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from time import time
from sklearn.pipeline import Pipeline
from sklearn.metrics import roc_auc_score , classification_report
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.metrics import precision_score, recall_score, accuracy_score, classification_report
X = df_1[features]
y = df_1['phase']
# split dataset to 60% training and 40% testing
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0)
print (X_train.shape, y_train.shape,X_test.shape, y_test.shape)
t0=time()
print ("DecisionTree")
dt = DecisionTreeClassifier(min_samples_split=20,random_state=99)
# dt = DecisionTreeClassifier(min_samples_split=20,max_depth=5,random_state=99)
clf_dt=dt.fit(X_train,y_train)
print ("Acurracy: ", clf_dt.score(X_test,y_test))
t1=time()
print ("time elapsed: ", t1-t0)
tt0=time()
print ("cross result========")
scores = cross_validation.cross_val_score(dt, X,y, cv=5)
print (scores)
print (scores.mean())
tt1=time()
print ("time elapsed: ", tt1-tt0)
t2=time()
print ("RandomForest")
rf = RandomForestClassifier(n_estimators=100,n_jobs=-1)
clf_rf = rf.fit(X_train,y_train)
print ("Acurracy: ", clf_rf.score(X_test,y_test))
t3=time()
print ("time elapsed: ", t3-t2)
tt2=time()
print ("cross result========")
scores = cross_validation.cross_val_score(rf, X,y, cv=5)
print (scores)
print (scores.mean())
tt3=time()
print ("time elapsed: ", tt3-tt2)
t4=time()
print ("NaiveBayes")
nb = BernoulliNB()
clf_nb=nb.fit(X_train,y_train)
print ("Acurracy: ", clf_nb.score(X_test,y_test))
t5=time()
print ("time elapsed: ", t5-t4)
tt4=time()
print ("cross result========")
scores = cross_validation.cross_val_score(nb, X,y, cv=5)
print (scores)
print (scores.mean())
tt5=time()
print ("time elapsed: ", tt5-tt4)
t6=time()
print ("KNN")
# knn = KNeighborsClassifier(n_neighbors=3)
knn = KNeighborsClassifier()
clf_knn=knn.fit(X_train, y_train)
print ("Acurracy: ", clf_knn.score(X_test,y_test) )
t7=time()
print ("time elapsed: ", t7-t6)
tt6=time()
print ("cross result========")
scores = cross_validation.cross_val_score(knn, X,y, cv=5)
print (scores)
print (scores.mean())
tt7=time()
print ("time elapsed: ", tt7-tt6)
t7=time()
print ("SVM")
svc = SVC()
clf_svc=svc.fit(X_train, y_train)
print ("Acurracy: ", clf_svc.score(X_test,y_test) )
t8=time()
print ("time elapsed: ", t8-t7)
tt7=time()
print ("cross result========")
scores = cross_validation.cross_val_score(svc, X,y, cv=5)
print (scores)
print (scores.mean())
tt8=time()
print ("time elapsed: ", tt7-tt6)
from sklearn.svm import SVC
from sklearn.cross_validation import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn import grid_search
svc = SVC()
parameters = {'kernel':('linear', 'rbf'), 'C':[1, 10]}
grid = grid_search.GridSearchCV(svc, parameters, n_jobs=-1, verbose=1, scoring='accuracy')
grid.fit(X_train, y_train)
print ('Best score: %0.3f' % grid.best_score_)
print ('Best parameters set:')
best_parameters = grid.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print ('\t%s: %r' % (param_name, best_parameters[param_name]))
predictions = grid.predict(X_test)
print (classification_report(y_test, predictions))
pipeline = Pipeline([
('clf', SVC(kernel='rbf', gamma=0.01, C=100))
])
parameters = {
'clf__gamma': (0.01, 0.03, 0.1, 0.3, 1),
'clf__C': (0.1, 0.3, 1, 3, 10, 30),
}
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='accuracy')
grid_search.fit(X_train, y_train)
print ('Best score: %0.3f' % grid_search.best_score_)
print ('Best parameters set:')
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print ('\t%s: %r' % (param_name, best_parameters[param_name]))
predictions = grid_search.predict(X_test)
print (classification_report(y_test, predictions))
from sklearn.base import BaseEstimator
from sklearn.base import ClassifierMixin
from sklearn.preprocessing import LabelEncoder
from sklearn.externals import six
from sklearn.base import clone
from sklearn.pipeline import _name_estimators
import numpy as np
import operator
class MajorityVoteClassifier(BaseEstimator,
ClassifierMixin):
A majority vote ensemble classifier
Parameters
----------
classifiers : array-like, shape = [n_classifiers]
Different classifiers for the ensemble
vote : str, {'classlabel', 'probability'} (default='label')
If 'classlabel' the prediction is based on the argmax of
class labels. Else if 'probability', the argmax of
the sum of probabilities is used to predict the class label
(recommended for calibrated classifiers).
weights : array-like, shape = [n_classifiers], optional (default=None)
If a list of `int` or `float` values are provided, the classifiers
are weighted by importance; Uses uniform weights if `weights=None`.
def __init__(self, classifiers, vote='classlabel', weights=None):
self.classifiers = classifiers
self.named_classifiers = {key: value for key, value
in _name_estimators(classifiers)}
self.vote = vote
self.weights = weights
def fit(self, X, y):
Fit classifiers.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Matrix of training samples.
y : array-like, shape = [n_samples]
Vector of target class labels.
Returns
-------
self : object
if self.vote not in ('probability', 'classlabel'):
raise ValueError("vote must be 'probability' or 'classlabel'"
"; got (vote=%r)"
% self.vote)
if self.weights and len(self.weights) != len(self.classifiers):
raise ValueError('Number of classifiers and weights must be equal'
'; got %d weights, %d classifiers'
% (len(self.weights), len(self.classifiers)))
# Use LabelEncoder to ensure class labels start with 0, which
# is important for np.argmax call in self.predict
self.lablenc_ = LabelEncoder()
self.lablenc_.fit(y)
self.classes_ = self.lablenc_.classes_
self.classifiers_ = []
for clf in self.classifiers:
fitted_clf = clone(clf).fit(X, self.lablenc_.transform(y))
self.classifiers_.append(fitted_clf)
return self
def predict(self, X):
Predict class labels for X.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Matrix of training samples.
Returns
----------
maj_vote : array-like, shape = [n_samples]
Predicted class labels.
if self.vote == 'probability':
maj_vote = np.argmax(self.predict_proba(X), axis=1)
else: # 'classlabel' vote
# Collect results from clf.predict calls
predictions = np.asarray([clf.predict(X)
for clf in self.classifiers_]).T
maj_vote = np.apply_along_axis(
lambda x:
np.argmax(np.bincount(x,
weights=self.weights)),
axis=1,
arr=predictions)
maj_vote = self.lablenc_.inverse_transform(maj_vote)
return maj_vote
def predict_proba(self, X):
Predict class probabilities for X.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
Returns
----------
avg_proba : array-like, shape = [n_samples, n_classes]
Weighted average probability for each class per sample.
probas = np.asarray([clf.predict_proba(X)
for clf in self.classifiers_])
avg_proba = np.average(probas, axis=0, weights=self.weights)
return avg_proba
def get_params(self, deep=True):
Get classifier parameter names for GridSearch
if not deep:
return super(MajorityVoteClassifier, self).get_params(deep=False)
else:
out = self.named_classifiers.copy()
for name, step in six.iteritems(self.named_classifiers):
for key, value in six.iteritems(step.get_params(deep=True)):
out['%s__%s' % (name, key)] = value
return out
from sklearn.cross_validation import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
import numpy as np
from sklearn.preprocessing import StandardScaler
clf1 = LogisticRegression(penalty='l2',
C=0.001,
random_state=0)
clf2 = DecisionTreeClassifier(max_depth=1,
criterion='entropy',
random_state=0)
clf3 = KNeighborsClassifier(n_neighbors=1,
p=2,
metric='minkowski')
pipe1 = Pipeline([['sc', StandardScaler()],
['clf', clf1]])
pipe3 = Pipeline([['sc', StandardScaler()],
['clf', clf3]])
clf_labels = ['Logistic Regression', 'Decision Tree', 'KNN']
print('10-fold cross validation:\n')
for clf, label in zip([pipe1, clf2, pipe3], clf_labels):
scores = cross_val_score(estimator=clf,
X=X_train,
y=y_train,
cv=10,
scoring='accuracy')
print("Accuracy: %0.2f (+/- %0.2f) [%s]"
% (scores.mean(), scores.std(), label))
# Majority Rule (hard) Voting
mv_clf = MajorityVoteClassifier(
classifiers=[pipe1, clf2, pipe3])
clf_labels += ['Majority Voting']
all_clf = [pipe1, clf2, pipe3, mv_clf]
for clf, label in zip(all_clf, clf_labels):
scores = cross_val_score(estimator=clf,
X=X_train,
y=y_train,
cv=10,
scoring='accuracy')
print("Accuracy: %0.2f (+/- %0.2f) [%s]"
% (scores.mean(), scores.std(), label))
mv_clf.get_params()
from sklearn.grid_search import GridSearchCV
params = {'decisiontreeclassifier__max_depth': [1, 2],
'pipeline-1__clf__C': [0.001, 0.1, 100.0]}
grid = GridSearchCV(estimator=mv_clf,
param_grid=params,
cv=10,
scoring='accuracy')
grid.fit(X_train, y_train)
for params, mean_score, scores in grid.grid_scores_:
print("%0.3f+/-%0.2f %r"
% (mean_score, scores.std() / 2, params))
print('Best parameters: %s' % grid.best_params_)
print('Accuracy: %.2f' % grid.best_score_)
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(criterion='entropy',
max_depth=None)
bag = BaggingClassifier(base_estimator=tree,
n_estimators=500,
max_samples=1.0,
max_features=1.0,
bootstrap=True,
bootstrap_features=False,
n_jobs=1,
random_state=1)
from sklearn.metrics import accuracy_score
tree = tree.fit(X_train, y_train)
y_train_pred = tree.predict(X_train)
y_test_pred = tree.predict(X_test)
tree_train = accuracy_score(y_train, y_train_pred)
tree_test = accuracy_score(y_test, y_test_pred)
print('Decision tree train/test accuracies %.3f/%.3f'
% (tree_train, tree_test))
bag = bag.fit(X_train, y_train)
y_train_pred = bag.predict(X_train)
y_test_pred = bag.predict(X_test)
bag_train = accuracy_score(y_train, y_train_pred)
bag_test = accuracy_score(y_test, y_test_pred)
print('Bagging train/test accuracies %.3f/%.3f'
% (bag_train, bag_test))
from sklearn.ensemble import AdaBoostClassifier
tree = DecisionTreeClassifier(criterion='entropy',
max_depth=1)
ada = AdaBoostClassifier(base_estimator=tree,
n_estimators=500,
learning_rate=0.1,
random_state=0)
tree = tree.fit(X_train, y_train)
y_train_pred = tree.predict(X_train)
y_test_pred = tree.predict(X_test)
tree_train = accuracy_score(y_train, y_train_pred)
tree_test = accuracy_score(y_test, y_test_pred)
print('Decision tree train/test accuracies %.3f/%.3f'
% (tree_train, tree_test))
ada = ada.fit(X_train, y_train)
y_train_pred = ada.predict(X_train)
y_test_pred = ada.predict(X_test)
ada_train = accuracy_score(y_train, y_train_pred)
ada_test = accuracy_score(y_test, y_test_pred)
print('AdaBoost train/test accuracies %.3f/%.3f'
% (ada_train, ada_test))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementation
Step2: Preparing the Data
Step3: For highly-skewed feature distributions such as 'capital-gain' and 'capital-loss', it is common practice to apply a logarithmic transformation so that very large and very small values do not negatively affect the performance of a learning algorithm
Step4: Normalizing Numerical Features
Step5: Implementation
Step6: Shuffle and Split Data
Step7: Evaluating Model Performance
Step8: Supervised Learning Models
Step9: Implementation
Step10: Improving Results
Step11: Question 5 - Final Model Evaluation
Step12: Question 7 - Extracting Feature Importance
Step13: Answer
Step14: Extra Work
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualization code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Census dataset
data = pd.read_csv("census.csv")
# Success - Display the first record
display(data.head(n=1))
# TODO: Total number of records
n_records = len(data)
# TODO: Number of records where individual's income is more than $50,000
n_greater_50k = len(data[data['income']=='>50K'])
# TODO: Number of records where individual's income is at most $50,000
n_at_most_50k = len(data[data['income']=='<=50K'])
# TODO: Percentage of individuals whose income is more than $50,000
greater_percent = float(n_greater_50k) / n_records
# Print the results
print "Total number of records: {}".format(n_records)
print "Individuals making more than $50,000: {}".format(n_greater_50k)
print "Individuals making at most $50,000: {}".format(n_at_most_50k)
print "Percentage of individuals making more than $50,000: {:.2f}%".format(greater_percent)
# Split the data into features and target label
income_raw = data['income']
features_raw = data.drop('income', axis = 1)
# Visualize skewed continuous features of original data
vs.distribution(data)
# Log-transform the skewed features
skewed = ['capital-gain', 'capital-loss']
features_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1))
# Visualize the new log distributions
vs.distribution(features_raw, transformed = True)
# Import sklearn.preprocessing.StandardScaler
from sklearn.preprocessing import MinMaxScaler
# Initialize a scaler, then apply it to the features
scaler = MinMaxScaler()
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
features_raw[numerical] = scaler.fit_transform(data[numerical])
# Show an example of a record with scaling applied
display(features_raw.head(n = 1))
features = pd.get_dummies(features_raw)
# TODO: Encode the 'income_raw' data to numerical values
income = income_raw.apply(lambda value: 0 if value=='<=50K' else 1)
# Print the number of features after one-hot encoding
encoded = list(features.columns)
print "{} total features after one-hot encoding.".format(len(encoded))
# Uncomment the following line to see the encoded feature names
#print encoded
# Import train_test_split
from sklearn.cross_validation import train_test_split
# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features, income, test_size = 0.2, random_state = 0)
# Show the results of the split
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
# TODO: Calculate accuracy
true_positive = 11208.0
true_negative = 0.0
false_positive = n_records - true_positive
false_negative = 0.0
accuracy = true_positive / n_records
# TODO: Calculate F-score using the formula above for beta = 0.5
precision = true_positive / n_records
recall = true_positive / (true_positive + false_negative)
fscore = (1 + 0.5**2) * precision * recall / (0.5**2 * precision + recall)
# Print the results
print "Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]".format(accuracy, fscore)
# TODO: Import two metrics from sklearn - fbeta_score and accuracy_score
from sklearn.metrics import fbeta_score, accuracy_score
def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_test: features testing set
- y_test: income testing set
'''
results = {}
# TODO: Fit the learner to the training data using slicing with 'sample_size'
# Get start time
start = time()
learner = learner.fit(X_train[:sample_size], y_train[:sample_size])
# Get end time
end = time()
# TODO: Calculate the training time
results['train_time'] = end - start
# TODO: Get the predictions on the test set,
# then get predictions on the first 300 training samples
start = time() # Get start time
predictions_test = learner.predict(X_test)
predictions_train = learner.predict(X_train[:300])
end = time() # Get end time
# TODO: Calculate the total prediction time
results['pred_time'] = end - start
# TODO: Compute accuracy on the first 300 training samples
y_train_array = y_train.as_matrix()
y_test_array = y_test.as_matrix()
results['acc_train'] = accuracy_score(y_train_array[:300], predictions_train)
# TODO: Compute accuracy on test set
results['acc_test'] = accuracy_score(y_test_array, predictions_test)
# TODO: Compute F-score on the the first 300 training samples
results['f_train'] = fbeta_score(y_train_array[:300], predictions_train, 0.5)
# TODO: Compute F-score on the test set
results['f_test'] = fbeta_score(y_test_array, predictions_test, 0.5)
# Success
print "{} trained on {} samples.".format(learner.__class__.__name__, sample_size)
# Return the results
return results
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.ensemble import RandomForestClassifier
# TODO: Initialize the three models
clf_A = SGDClassifier()
clf_B = RandomForestClassifier()
clf_C = LogisticRegression()
# TODO: Calculate the number of samples for 1%, 10%, and 100% of the training data
n_train = len(X_train)
samples_1 = int(.01 * n_train)
samples_10 = int(.1 * n_train)
samples_100 = n_train
# Collect results on the learners
results = {}
for clf in [clf_A, clf_B, clf_C]:
clf_name = clf.__class__.__name__
results[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
results[clf_name][i] = \
train_predict(clf, samples, X_train, y_train, X_test, y_test)
# Run metrics visualization for the three supervised learning models chosen
vs.evaluate(results, accuracy, fscore)
# TODO: Import 'GridSearchCV', 'make_scorer', and any other necessary libraries
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import make_scorer
# TODO: Initialize the classifier
clf = LogisticRegression()
# TODO: Create the parameters list you wish to tune
parameters = {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000]}
# TODO: Make an fbeta_score scoring object
scorer = make_scorer(fbeta_score, beta=0.5)
# TODO: Perform grid search on the classifier using 'scorer' as the scoring method
grid_obj = GridSearchCV(estimator=clf, param_grid=parameters, scoring=scorer)
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_fit = grid_obj.fit(X_train, y_train)
# Get the estimator
best_clf = grid_fit.best_estimator_
# Make predictions using the unoptimized and model
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)
# Report the before-and-after scores
print "Unoptimized model\n------"
print "Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5))
print "\nOptimized Model\n------"
print "Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))
print "Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))
from sklearn.ensemble import RandomForestClassifier
# TODO: Import a supervised learning model that has 'feature_importances_'
clf = RandomForestClassifier()
# TODO: Train the supervised model on the training set
model = clf.fit(X_train, y_train)
# TODO: Extract the feature importances
importances = clf.feature_importances_
# Plot
vs.feature_plot(importances, X_train, y_train)
indices = np.argsort(importances)[::-1]
columns = X_train.columns.values[indices[:5]]
print(columns)
def print_result(best_pred, reduced_pred):
# Report scores from the final model using both versions of data
print "Final Model trained on full data\n------"
print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, best_pred))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, best_pred, beta = 0.5))
print "\nFinal Model trained on reduced data\n------"
print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, reduced_pred))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, reduced_pred, beta = 0.5))
# Import functionality for cloning a model
from sklearn.base import clone
# Reduce the feature space
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]]
# Train on the "best" model found from grid search earlier
clf = (clone(best_clf)).fit(X_train_reduced, y_train)
# Make new predictions
reduced_predictions = clf.predict(X_test_reduced)
print_result(best_predictions, reduced_predictions)
clf_RF = RandomForestClassifier()
# Search parameters
parameters_RF = {'max_features': ['auto', 'sqrt', 'log2'],
'n_estimators': [10, 20, 30],
'min_samples_leaf': [30, 60, 90],
'max_depth': [30, 60, 90]}
grid_obj_RF = GridSearchCV(estimator=clf_RF, param_grid=parameters_RF, scoring=scorer)
grid_fit_RF = grid_obj_RF.fit(X_train, y_train)
best_clf_RF = grid_fit_RF.best_estimator_
print grid_fit_RF.best_params_
# Make predictions using the unoptimized and model
predictions_RF = (clf_RF.fit(X_train, y_train)).predict(X_test)
best_predictions_RF = best_clf_RF.predict(X_test)
print "\nUnoptimized model\n------"
print "Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions_RF))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions_RF, beta = 0.5))
print "\n"
# Train on the "best" model found from grid search earlier and use it to predict reduced features
clf_RF = (clone(best_clf_RF)).fit(X_train_reduced, y_train)
reduced_predictions_RF = clf_RF.predict(X_test_reduced)
print_result(best_predictions_RF, reduced_predictions_RF)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-mh', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a variable
Step2: Print out the value of the variable
Step3: or even easier
Step4: Datatypes
Step5: NumPy (Numerical Python) is the fundamental package for scientific computing with Python.
Step6: pi and e are built-in constants
Step7: Here is a link to all Numpy math functions.
Step8: Indexing
Step9: Slices
Step10: There are lots of different methods that can be applied to a NumPy array
Step11: Help about a function
Step12: NumPy math works over an entire array
Step13: Masking - The key to fast programs
Step14: Fancy masking
Step15: Sorting
Step16: Control Flow
Step17: For loops are different in Python.
Step18: Loops are slow in Python. Do not use them if you do not have to!
Step19: Functions
Step20: The results of one function can be used as the input to another function
Step21: Creating Arrays
|
<ASSISTANT_TASK:>
Python Code:
print("Hello World!")
# lines that begin with a # are treated as comment lines and not executed
# print("This line is not printed")
print("This line is printed")
g = 3.0 * 2.0
print(g)
g
a = 1
b = 2.3
c = 2.3e4
d = True
e = "Spam"
type(a), type(b), type(c), type(d), type(e)
a + b, type(a + b)
c + d, type(c + d) # True = 1
a + e    # raises TypeError: an int and a str cannot be added directly
str(a) + e
import numpy as np
np.pi, np.e
np.random.seed(42) # set the seed - everyone gets the same random numbers
x = np.random.randint(1,10,20) # 20 random ints between 1 and 10
x
x[0] # The Value at Index = 0
x[-1] # The last Value in the array x
x
x[0:4] # first 4 items
x[:4] # same
x[0:4:2] # first four items, step = 2
x[3::-1] # first four items backwards, step = -1
x[::-1] # Reverse the array x
print(x[-5:]) # last 5 elements of the array x
x.size # Number of elements in x
x.mean() # Average of the elements in x
x.sum() # Total of the elements in x
x[-5:].sum() # Total of last 5 elements in x
x.cumsum() # Cumulative sum
x.cumsum()/x.sum() # Cumulative percentage
# x.<TAB>   # typing "x." and pressing TAB in a notebook lists the array's methods
?x.min
y = x * 2
y
sin(x) # NameError: need to use NumPy's math functions, e.g. np.sin
np.sin(x)
mask1 = np.where(x>5)
x, mask1
x[mask1], y[mask1]
mask2 = np.where((x>3) & (x<7))
x[mask2]
mask3 = np.where(x >= 8)
x[mask3]
# Set all values of x that match mask3 to 0
x[mask3] = 0
x
mask4 = np.where(x != 0)
mask4
# Add 100 to every value of x that matches mask4:
x[mask4] += 100
x
np.random.seed(13) # set the seed - everyone gets the same random numbers
z = np.random.randint(1,10,20) # 20 random ints between 1 and 10
z
np.sort(z)
np.sort(z)[0:4]
# Returns the indices that would sort an array
np.argsort(z)
z, z[np.argsort(z)]
maskS = np.argsort(z)
z, z[maskS]
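# A common use of argsort (illustrative): reorder one array by the values of another.
ages = np.array([50, 20, 40])
names = np.array(['Alice', 'Bob', 'Carol'])
names[np.argsort(ages)]  # -> ['Bob', 'Carol', 'Alice']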
xx = -1
if xx > 0:
print("This number is positive")
else:
print("This number is NOT positive")
xx = 0
if xx > 0:
print("This number is positive")
elif xx == 0:
print("This number is zero")
else:
print("This number is negative")
z
for value in z:
print(value)
for idx,val in enumerate(z):
print(idx,val)
for idx,val in enumerate(z):
if (val > 5):
z[idx] = 0
for idx,val in enumerate(z):
print(idx,val)
np.random.seed(42)
BigZ = np.random.random(10000) # 10,000 value array
BigZ[:10]
# This is slow!
for Idx,Val in enumerate(BigZ):
if (Val > 0.5):
BigZ[Idx] = 0
BigZ[:10]
%%timeit
for Idx,Val in enumerate(BigZ):
if (Val > 0.5):
BigZ[Idx] = 0
# Masks are MUCH faster
mask = np.where(BigZ>0.5)
BigZ[mask] = 0
BigZ[:10]
%%timeit -o
mask = np.where(BigZ>0.5)
BigZ[mask] = 0
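# np.where also has a vectorized three-argument form, np.where(condition, if_true, if_false),
# which does the same job in a single call (illustrative):
W = np.random.random(10)
np.where(W > 0.5, 0.0, W)  # zero out the large values without an explicit index mask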
def find_f(x,y):
result = (x ** 2) * np.sin(y) # assign the variable result the value of the function
return result # return the value of the function to the main program
np.random.seed(42)
array_x = np.random.rand(10) * 10
array_y = np.random.rand(10) * 2.0 * np.pi
array_x, array_y
value_f = find_f(array_x,array_y)
value_f
def find_g(z):
result = z / np.e
return result
find_g(value_f)
find_g(find_f(array_x,array_y))
# a new array filled with zeros
array_0 = np.zeros(10)
array_0
# a new array filled with ones
array_1 = np.ones(10)
array_1
# a new array filled with evenly spaced values within a given interval
array_2 = np.arange(10,20)
array_2
# a new array filled with evenly spaced numbers over a specified interval (start, stop, num)
array_3 = np.linspace(10,20,5)
array_3
# a new array filled with evenly spaced numbers over a log scale. (start, stop, num, base)
array_4 = np.logspace(1, 2, 5, base=10)  # base must be passed by keyword; the 4th positional argument is 'endpoint'
array_4
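# Two more common constructors (illustrative):
array_5 = np.full(5, 7.0)              # a new array filled with a constant value
array_6 = np.arange(12).reshape(3, 4)  # evenly spaced values reshaped into a 3x4 grid
array_5, array_6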
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step-by-Step Instruction
Step1: 2. Use loss scaling to prevent underflow
Step2: 3. Use dynamic loss scaling to prevent overflow/underflow
Step4: Note that currently the procedures of 2nd (Use loss scaling to prevent underflow) and 3rd (Use loss scaling to prevent overflow) are experimental, and we are now trying to speed up the mixed precision training, so API might change for future use, especially 3rd.
Step5: Then, call the update method in a training loop
|
<ASSISTANT_TASK:>
Python Code:
ctx = get_extension_context("cudnn", type_config="half")
loss_scale = 8
loss.backward(loss_scale)
solver.scale_grad(1. / loss_scale) # do some gradient clipping, etc. after this
solver.update()
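# Why scaling helps -- a minimal illustration with NumPy, assuming the "half"
# type is IEEE fp16 (whose smallest positive subnormal is about 6e-8):
import numpy as np
print(np.float16(1e-8))      # -> 0.0: the unscaled tiny gradient underflows
print(np.float16(1e-8 * 8))  # -> ~6e-8: representable once scaled by loss_scale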
loss_scale = 8
scaling_factor = 2
counter = 0
interval = 2000
...
loss.backward(loss_scale, ...)
...
if solver.check_inf_or_nan_grad():
loss_scale /= scaling_factor
counter = 0
else:
solver.scale_grad(1. / loss_scale) # do some gradient clipping, etc. after this
solver.update()
if counter > interval:
loss_scale *= scaling_factor
counter = 0
counter += 1
class DynamicLossScalingUpdater(object):
'''Dynamic Loss Scaling Updater for the mixed precision training.
Args:
solver (:obj:`nnabla.solvers.Solver`): Solver object. E.g., Momentum or Adam.
loss (:obj:`nnabla.Variable`): Loss variable from which the forward and the backward is called.
data_feeder (callable :obj:`object`, function, or lambda): Data feeder
scale (:obj:`float`): Loss scale constant. This is dynamically changing during training.
scaling_factor (:obj:`float`): Scaling factor for the dynamic loss scaling.
N (:obj:`int`): Interval, the number of iterations in training for increasing `loss scale` by `scaling_factor`.
clear_buffer (:obj:`bool`): Clears the no longer referenced variables during backpropagation to save memory.
accum_grad (:obj:`int`): Number of accumulation of gradients. Update method of the `solver` is called after the `accum_grad` number of the forward and backward is called.
weight_decay (:obj:`float`): Decay constant. Default is `None`, not applying the weight decay.
comm (:obj:`nnabla.communicators.Communicator`): Communicator when to do distributed training. Default is :obj:`None`.
grads (:obj:`list` of :obj:`nnabla._nd_array.NdArray`): The list of gradients to be exchanged when to do distributed training. Default is the empty :obj:`list`.
Attributes:
solver (:obj:`nnabla.solvers.Solver`): Solver object. E.g., Momentum or Adam.
loss (:obj:`nnabla.Variable`): Loss variable from which the forward and the backward is called.
data_feeder (callable :obj:`object`, function, lambda): Data feeder
scale (:obj:`float`): Loss scale constant. This is dynamically changing during training.
scaling_factor (:obj:`float`): Scaling factor for the dynamic loss scaling.
N (:obj:`int`): Interval, the number of iterations in training for increasing `loss scale` by `scaling_factor`.
clear_buffer (:obj:`bool`): Clears the no longer referenced variables during backpropagation to save memory.
accum_grad (:obj:`int`): Number of accumulation of gradients. Update method of the `solver` is called after the `accum_grad` number of the forward and backward is called.
weight_decay (:obj:`float`): Decay constant. Default is `None`, not applying the weight decay.
comm (:obj:`nnabla.communicators.Communicator`): Communicator when to do distributed training.
grads (:obj:`list` of :obj:`nnabla._nd_array.NdArray`): The list of gradients to be exchanged when to do distributed training.
Example:
.. code-block:: python
solver = <Solver>
loss = <Loss Variable of Network>
data_feeder = <DataFeeder>
updater = DynamicLossScalingUpdater(solver, loss, data_feeder)
# Training iteration
for itr in range(max_iter):
# Call solver.zero_grad, data_feeder, loss.forward, loss.backward
# and solver.update with the dynamic loss scaling.
updater.update()
Reference:
https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html#scalefactor
'''
def __init__(self, solver, loss, data_feeder=lambda x: x,
scale=8.0, scaling_factor=2.0, N=2000, clear_buffer=True,
accum_grad=1, weight_decay=None,
comm=None,
grads=[]):
self.solver = solver
self.loss = loss
self.data_feeder = data_feeder
self.scale = scale
self.scaling_factor = scaling_factor
self.N = N
self.clear_buffer = clear_buffer
self.accum_grad = accum_grad
self.weight_decay = weight_decay
self.comm = comm
self.grads = grads
self._counter = 0
self._recursive_count = 0
self._max_recursive_count = 100
def update(self):
        """Monolithic update method.

        This method calls the following methods with the dynamic loss scaling.

        1. solver.zero_grad
        2. feed data
        3. loss.forward
        4. loss.backward
        5. comm.all_reduce (if it is specified)
        6. solver.update
        """
# Initialize gradients.
self.solver.zero_grad()
# Forward and backward
for _ in range(self.accum_grad):
# feed data
self.data_feeder()
# forward
self.loss.forward(clear_no_need_grad=self.clear_buffer)
# backward with scale
self.loss.backward(self.scale, clear_buffer=self.clear_buffer)
# AllReduce
if self.comm and len(self.grads) != 0:
self.comm.all_reduce(self.grads, division=False, inplace=False)
# Check Inf/NaN in grads
if self.solver.check_inf_or_nan_grad():
self.scale /= self.scaling_factor
self._counter = 0
# Recursively call update until the gradients contain no inf or nan.
self._recursive_count += 1
if self._recursive_count > self._max_recursive_count:
self._recursive_count = 0
return # skip
return self.update()
self._recursive_count = 0
# Rescale grads
self.solver.scale_grad(1. / self.scale)
# Do some gradient clipping, etc.
if self.weight_decay is not None:
self.solver.weight_decay(self.weight_decay)
# Update
self.solver.update()
if self._counter > self.N:
self.scale *= self.scaling_factor
self._counter = 0
self._counter += 1
from nnabla.experimental.mixed_precision_training import DynamicLossScalingUpdater
solver = <Solver>
loss = <Loss Variable of Network>
data_feeder = <DataFeeder>
updater = DynamicLossScalingUpdater(solver, loss, data_feeder)
# Training iteration
for itr in range(max_iter):
# Call solver.zero_grad, data_feeder, loss.forward, loss.backward
# and solver.update with the dynamic loss scaling.
updater.update()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Aod Plus Ccn
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 13.3. External Mixture
Step59: 14. Optical Radiative Properties --> Radiative Scheme
Step60: 14.2. Shortwave Bands
Step61: 14.3. Longwave Bands
Step62: 15. Optical Radiative Properties --> Cloud Interactions
Step63: 15.2. Twomey
Step64: 15.3. Twomey Minimum Ccn
Step65: 15.4. Drizzle
Step66: 15.5. Cloud Lifetime
Step67: 15.6. Longwave Bands
Step68: 16. Model
Step69: 16.2. Processes
Step70: 16.3. Coupling
Step71: 16.4. Gas Phase Precursors
Step72: 16.5. Scheme Type
Step73: 16.6. Bulk Scheme Species
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-esm4', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading and processing the data
Step2: The return value of the read_csv function is of a data type called DataFrame.
Step3: The DataFrame data type
Step4: Passing an argument shows as many rows as requested.
Step5: If the file contains a very large number of records and you want to inspect its last part
Step6: Checking whether missing values exist
Step7: Sorting and filling in missing values
Step8: Filling in missing values
Step9: The first part of the sorted data shows only Alabama records, in date order, as below.
Step10: The last part of the sorted data shows only Wyoming records, in date order, as below.
Step11: Analyzing the data
Step12: Let's look at the first 5 records traded in California.
Step13: Let's compute the sum of the values in the HighQ column.
Step14: Let's check the number of values in the HighQ column.
Step15: Now we can compute the mean price of HighQ cannabis traded in California.
Step16: Median
Step17: Therefore, the median is the value located at position $\frac{\text{ca_count}-1}{2}$.
Step18: Use iloc, the index-location accessor.
Step19: Mode
Step20: Exercise
Step21: Practice
Step22: Sample answer 2
Step23: The numbers stored in year_starts mean the following.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from datetime import datetime as dt
from scipy import stats
prices_pd = pd.read_csv("data/Weed_Price.csv", parse_dates=[-1])
type(prices_pd)
prices_pd.head()
prices_pd.head(10)
prices_pd.tail()
prices_pd.dtypes
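# A direct check for missing values (sketch; complements dtypes above):
prices_pd.isnull().sum()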
prices_pd.sort_values(['State', 'date'], inplace=True)
prices_pd.fillna(method='ffill', inplace=True)
prices_pd.head()
prices_pd.tail()
california_pd = prices_pd[prices_pd.State == "California"].copy(True)
california_pd.head(20)
ca_sum = california_pd['HighQ'].sum()
ca_sum
ca_count = california_pd['HighQ'].count()
ca_count
# Mean wholesale price of high-quality (HighQ) cannabis traded in California
ca_mean = ca_sum / ca_count
ca_mean
ca_count
ca_highq_pd = california_pd.sort_values(['HighQ'])
ca_highq_pd.head()
# Median wholesale price of high-quality (HighQ) cannabis traded in California
ca_median = ca_highq_pd.HighQ.iloc[int((ca_count-1)/ 2)]
ca_median
# The most frequently observed (modal) wholesale price of HighQ cannabis in California
ca_mode = ca_highq_pd.HighQ.value_counts().index[0]
ca_mode
california_pd.mean()
california_pd.mean().HighQ
california_pd.median()
california_pd.mode()
california_pd.mode().HighQ
california_pd.HighQ.mean()
california_pd.HighQ.median()
california_pd.HighQ.mode()
total = 0  # avoid shadowing the built-in sum()
count = 0
for index in np.arange(len(california_pd)):
    if california_pd.iloc[index]['date'].year == 2014:
        total += california_pd.iloc[index]['HighQ']
        count += 1
total/count
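# A vectorized equivalent of the loop above (sketch; assumes 'date' was parsed
# into datetime64 by parse_dates, so the .dt accessor is available):
california_pd.loc[california_pd['date'].dt.year == 2014, 'HighQ'].mean()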
years = np.arange(2013, 2016)
year_starts = [0]
for yr in years:
for index in np.arange(year_starts[-1], len(california_pd)):
if california_pd.iloc[index]['date'].year == yr:
continue
else:
year_starts.append(index)
break
year_starts
california_pd.iloc[4]
california_pd.iloc[5]
california_pd.iloc[368]
california_pd.iloc[369]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Just for fun, we can — using the original formula in the reference — calculate the population size from an observation
Step2: It's a linear relationship, let's plot it.
Step3: <hr />
Step4: Orthogonal grid of 2D lines
Step5: We're going to need a binomial distribution, scipy.stats.binom.
Step6: We can use the distribution to estimate the probability of seeing no features. Then we can use the survival function (or, equivalently, 1 - the cumulative distribution function), sf(x, n, p), to tell us the probability of drawing more than x in n trials, given a success probability p
Step7: <hr />
Step8: We can use a pandas DataFrame to show a quick table
Step9: We can compute the probability of a given feature being correctly interpreted
|
<ASSISTANT_TASK:>
Python Code:
area = 120000.0 # km^2, area covered by transects
population = 120 # Total number of features (guess)
no_lines = 250 # Total number of transects
line_length = 150 # km, mean length of a transect
feature_width = 0.5 # km, width of features
density = population / area
length = no_lines * line_length
observed = 2 * density * length * feature_width
print "Expected number of features intersected:", observed
observed = 37.5
population = (observed * area) / (2. * length * feature_width)
print "Population:", population
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Make that last expression into a quick function
def pop(obs, area, length, width):
return (obs * area) / (2. * length * width)
# Pass in an array of values
obs = np.arange(50)
pops = pop(obs, area, length, feature_width)  # avoid rebinding the function name 'pop'
plt.plot(obs, pops)
plt.xlabel('observed')
plt.ylabel('population')
plt.show()
line_spacing = 3.0 # km, the width of the gap
# 'Invisible' means 'not intersected'
width_invisible = line_spacing - feature_width
prob_invisible = width_invisible / line_spacing
prob_visible = 1 - prob_invisible
print "Probability of intersecting a given feature:", prob_visible
x_spacing = 3.0 # km
y_spacing = 3.0 # km
# Think of the quadrilaterals between lines as 'units'
area_of_unit = x_spacing * y_spacing
area_invisible = (x_spacing - feature_width) * (y_spacing - feature_width)
area_visible = area_of_unit - area_invisible
prob_visible = area_visible / area_of_unit
print "Probability of intersecting a given feature:", prob_visible
import scipy.stats
p = "Probability of intersecting"
print p, "no features:", scipy.stats.binom.pmf(0, population, prob_visible)
print p, "at least one:", scipy.stats.binom.sf(0, population, prob_visible)
print p, "at least two:", scipy.stats.binom.sf(1, population, prob_visible)
print p, "all features:", scipy.stats.binom.sf(population-1, population, prob_visible)
reliability = 0.75
trials = 120
intersect_interpret = prob_visible * reliability * trials
intersect_xinterpret = prob_visible * (1 - reliability) * trials
xintersect_interpret = (1 - prob_visible) * (1 - reliability) * trials
xintersect_xinterpret = (1 - prob_visible) * reliability * trials
t = [[intersect_interpret, intersect_xinterpret], [xintersect_interpret, xintersect_xinterpret]]
from pandas import DataFrame
df = DataFrame(t, index=['Intersected', 'Not intersected'], columns=['Interpreted','Not interpreted'])
df
prob_correct = intersect_interpret / (intersect_interpret + xintersect_interpret)
print "Probability of a feature existing if interpreted:", prob_correct
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 01. Work_Flow_Management(using_Airflow)
Step2: Save the source above as test.py under [Airflow Home]/dags/!
|
<ASSISTANT_TASK:>
Python Code:
from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from datetime import datetime, timedelta
# If start_date is set earlier than the current date, backfill (filling in past runs) will be triggered
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'start_date': datetime(2017, 10, 1),
'email': ['airflow@airflow.com'],
'email_on_failure': False,
'email_on_retry': False,
'retries': 1,
'retry_delay': timedelta(minutes=5),
# 'queue': 'bash_queue',
# 'pool': 'backfill', # Only celery option
# 'priority_weight': 10,
# 'end_date': datetime(2016, 1, 1),
}
# Create the DAG object
dag = DAG('test', description='First DAG',
schedule_interval = '55 14 * * *',
default_args=default_args)
t1 = BashOperator(
task_id='print_date',
bash_command='date',
dag=dag)
# Uses BashOperator
# task_id must be a unique name
# bash_command runs `date` in bash
t2 = BashOperator(
task_id='sleep',
bash_command='sleep 5',
retries=3,
dag=dag)
templated_command = """
{% for i in range(5) %}
    echo "{{ ds }}"
    echo "{{ macros.ds_add(ds, 7) }}"
    echo "{{ params.my_param }}"
{% endfor %}
"""
t3 = BashOperator(
task_id='templated',
bash_command=templated_command,
params={'my_param': 'Parameter I passed in'},
dag=dag)
# set_upstream means t2 runs only after task t1 has finished
t2.set_upstream(t1)
# this is equivalent to t1.set_downstream(t2)
# and also equivalent to t1 >> t2
t3.set_upstream(t1)
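# Quick sanity checks on the DAG object (illustrative; both attributes exist
# on airflow.DAG in the 1.x series used here):
print(dag.dag_id)                      # 'test'
print([t.task_id for t in dag.tasks])  # ['print_date', 'sleep', 'templated']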
from datetime import timedelta, datetime
from airflow import DAG
from airflow.contrib.operators.bigquery_operator import BigQueryOperator
from airflow.operators.bash_operator import BashOperator
from airflow.operators.python_operator import PythonOperator
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'start_date': datetime(2017, 10, 1),
'email': ['yourmail@airflow.com'],
'email_on_failure': False,
'email_on_retry': False,
'retries': False,
'retry_delay': timedelta(minutes=2),
}
dag = DAG('airflow_bigquery', default_args=default_args)
t1 = BigQueryOperator(
task_id='bigquery_test',
bql='SELECT COUNT(vendor_id) FROM [nyc-tlc:yellow.trips]',
destination_dataset_table='temp.airflow_test1',
bigquery_conn_id='bigquery_default',
delegate_to=True,
udf_config=False,
dag=dag,
)
def print_hello():
return 'Hello Airflow'
t3 = PythonOperator(
task_id='python_operator',
python_callable = print_hello,
dag = dag)
t4 = BigQueryOperator(
task_id='bigquery_test2',
bql='SELECT COUNT(vendor_id) as user FROM [nyc-tlc:yellow.trips]',
destination_dataset_table='temp.airflow_test2',
bigquery_conn_id='bigquery_default',
delegate_to=True,
udf_config=False,
dag=dag,
)
t1 >> t3 >> t4
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-3', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Eager execution
Step2: In TensorFlow 2.0, eager execution is enabled by default.
Step3: Now let's run some TensorFlow operations. The results are returned immediately.
Step4: Enabling eager execution changes how TensorFlow behaves: it evaluates expressions immediately and returns the results to Python. tf.Tensor objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there is no computational graph to build and run in a session, results are easy to inspect with print() or a debugger. Evaluating, printing, and checking tensor values does not interrupt gradient computation.
Step5: Dynamic control flow
Step6: This function has conditionals that depend on tensor values, and it prints those values at runtime.
Step7: Train a model
Step8: Even without training, eager execution lets you call the model and inspect its output.
Step9: Keras models have a built-in training-loop method, fit, but sometimes you need more customization. Here is an example of a training loop implemented with eager execution:
Step10: Note
Step11: Variables and optimizers
Step12: Next, do the following.
Step13: Note
Step14: tf.train.Checkpoint gives you full control over this process.
Step15: To save and load models, tf.train.Checkpoint stores the internal state of objects without requiring hidden variables. To record the state of a model, an optimizer, and a global step, pass them to tf.train.Checkpoint.
Step16: Note
Step17: Summaries and TensorBoard
Step18: Advanced automatic differentiation topics
Step19: Custom gradients
Step20: Custom gradients are commonly used to provide numerically stable gradients for a sequence of operations.
Step21: Here, the log1pexp function can be analytically simplified with a custom gradient. The implementation below reuses the value of tf.exp(x) computed during the forward pass, making it more efficient by eliminating redundant computation:
Step22: Performance
Step23: A tf.Tensor object can be copied to a different device to run operations on it:
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import cProfile
tf.executing_eagerly()
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting is supported
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Using NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
    # Add an assert to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train(epochs):
  for epoch in range(epochs):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train(epochs = 3)
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
class Linear(tf.keras.Model):
def __init__(self):
super(Linear, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset approximating 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to optimize
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model
# 2. Derivatives of the loss function with respect to the model parameters
# 3. A strategy for updating the variables based on the derivatives
model = Model()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
model = Linear()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
steps = 300
for i in range(steps):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
model.save_weights('weights')
status = model.load_weights('weights')
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save(checkpoint_path)
x.assign(11.)  # Change the variable's value after saving.
# Restore the variable from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
import os
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
logdir = "./tb/"
writer = tf.summary.create_file_writer(logdir)
steps = 1000
with writer.as_default(): # or call writer.set_as_default() before the loop.
for i in range(steps):
step = i + 1
# Calculate loss with your real train function.
loss = 1 - 0.001 * step
if step % 100 == 0:
tf.summary.scalar('loss', loss, step=step)
!ls tb/
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically tracked.
# But to calculate a gradient from a tensor, you must `watch` it.
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
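# Illustrative usage of the clipping identity above (the variable and values
# here are our own demo assumptions, not from the original tutorial):
v = tf.Variable(2.0)
with tf.GradientTape() as tape:
  y = clip_gradient_by_norm(v * v, 0.5)
# The incoming gradient 1.0 is clipped to norm 0.5 before the chain rule,
# so dy/dv = 0.5 * 2 * v = 2.0 instead of the unclipped 4.0.
print(tape.gradient(y, v))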
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# The gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
import time
def measure(x, steps):
  # TensorFlow initializes the GPU the first time it is used, so exclude that from the timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
  # tf.matmul can return before the matrix multiplication has completed
  # (e.g., it can return after enqueuing the operation on a CUDA stream).
  # The x.numpy() call below ensures that all enqueued operations have
  # finished (and also copies the result to host memory, so the measured
  # time includes a bit more than just the matmul operation).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# When running on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# When running on GPU (if one is available):
if tf.test.is_gpu_available():
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
if tf.config.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Available Features
Step2: The API docs for each of these can be found in phoebe.parameters.feature. Each entry will list the allowable component and/or dataset-types that that kind of feature can be attached to. For example
Step3: Adding a Feature
|
<ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
from phoebe import u # units
logger = phoebe.logger()
b = phoebe.default_binary()
phoebe.list_available_features()
help(phoebe.parameters.feature.spot)
b.add_feature('spot', component='primary', feature='spot01')
b.get_feature('spot01')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: ">" vs ">>"
Step2: lower|sort|uniq or sort|lower|uniq
Step3: Among the set of three functions
Step4: More about grep
Step5: There are many, many options, such as case-insensitivity
Step6: Another useful one is to print line numbers for matching lines
Step7: We can also negate certain terms - show non-matches.
Step8: And one more useful tip is to match more than one thing
Step9: Wildcards with "*"
Step10: The ? (question mark) is a wildcard that matches exactly one character.
Step11: The difference is subtle - these two would have worked interchangeably on the above. But note
Step12: See the difference? The * can match more than one character; ? only matches one.
Step13: Working with GNU Parallel
Step14: That's 25,875 lines and 218,062 words in the texts of Romeo and Juliet and Little Women.
Step15: Note that I've wrapped lines around by using the \ character. To me, this looks easier to read - you can see each step of the pipeline one at a time. The \ only means "this shell line continues on the next line". The | still acts as the pipe.
Step16: It looks like Little Women is much longer, which makes sense - it's a novel, not a play. More text!
Step17: We can run through both files at once by giving both file names to grep
Step18: Do those numbers look right?
Step19: Aha! grep is not-so-helpfully including the second filename on the lines matched from the second file, but not on the first. That's why the counts are off.
Step20: See what we did there? We parallelized the data, then brought it back together for the rest of the pipeline.
|
<ASSISTANT_TASK:>
Python Code:
!mkdir mydirectory
!ls > mydirectory/myfiles.txt
!rm myfiles.txt
!rm mydirectory/myfiles.txt
!ls mydirectory
!date > datefile.txt
!cat datefile.txt
!date > datefile.txt
!cat datefile.txt
!date >> datefile.txt
!date >> datefile.txt
!cat datefile.txt
!wget https://github.com/gwsb-istm-6212-fall-2016/syllabus-and-schedule/raw/master/exercises/pg2500.txt
!grep -oE '\w{{2,}}' pg2500.txt | grep -v '^[0-9]' | uniq -c | head
!grep -oE '\w{{2,}}' pg2500.txt | grep -v '^[0-9]' | uniq -c | tr '[:upper:]' '[:lower:]' | sort | head
!grep -oE '\w{{2,}}' pg2500.txt | grep -v '^[0-9]' | uniq -c | sort | tr '[:upper:]' '[:lower:]' | head
!grep -oE '\w{{2,}}' pg2500.txt | grep -v '^[0-9]' | sort | tr '[:upper:]' '[:lower:]' | uniq -c | head
!grep -oE '\w{{2,}}' pg2500.txt | grep -v '^[0-9]' | sort | uniq -c | tr '[:upper:]' '[:lower:]' | head
!grep -oE '\w{{2,}}' pg2500.txt | grep -v '^[0-9]' | tr '[:upper:]' '[:lower:]' | sort | uniq -c | head
!grep -oE '\w{{2,}}' pg2500.txt | grep -v '^[0-9]' | tr '[:upper:]' '[:lower:]' | uniq -c | sort | head
!grep Romeo romeo.txt | head
!grep -i what romeo.txt | head
!grep -n Juliet romeo.txt | head
!grep -n Juliet romeo.txt | grep -v Romeo | head
!grep "Romeo\|Juliet" romeo.txt | head
!ls *.txt
!cp romeo.txt womeo.txt
!ls ?omeo.txt
!ls wome?.txt
!ls wo*.txt
!ls wo?.txt
!chmod +x simplefilter.py
!head pg2500.txt | ./simplefilter.py
!cp simplefilter.py lower.py
!head pg2500.txt | ./lower.py
!wc *.txt
!grep -oE '\w{{2,}}' romeo.txt \
| tr '[:upper:]' '[:lower:]' \
| sort \
| uniq -c \
| sort -rn \
| head -10
!time grep -oE '\w{{2,}}' women.txt \
| tr '[:upper:]' '[:lower:]' \
| sort \
| uniq -c \
| sort -rn \
| head -10
!wc *.txt
!time grep -oE '\w{{2,}}' romeo.txt women.txt \
| tr '[:upper:]' '[:lower:]' \
| sort \
| uniq -c \
| sort -rn \
| head -10
!time grep -oE '\w{{2,}}' romeo.txt women.txt \
| tr '[:upper:]' '[:lower:]' \
| sort \
| uniq -c \
| grep "and" \
| tail -10
!time ls *.txt \
| parallel -j+0 "grep -oE '\w{2,}' {} | tr '[:upper:]' '[:lower:]' >> all-words.txt"
!time sort all-words.txt \
| uniq -c \
| sort -rn \
| head -10
!unzip -d many-texts texts.zip
!ls -l many-texts | wc -l
!wc many-texts/*.txt
!time ls many-texts/*.txt \
| parallel --eta -j+0 "grep -oE '\w{2,}' {} | tr '[:upper:]' '[:lower:]' >> many-texts/all-words.txt"
!time sort many-texts/all-words.txt \
| uniq -c \
| sort -rn \
| head -10
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1
Step2: Exercise 2
Step3: Exercise 3
|
<ASSISTANT_TASK:>
Python Code:
# Import all necessary libraries, this is a configuration step for the exercise.
# Please run it before the simulation code!
import numpy as np
import matplotlib.pyplot as plt
# Show the plots in the Notebook.
plt.switch_backend("nbagg")
#################################################################
# IMPLEMENT THE FOURIER DERIVATIVE METHOD HERE!
#################################################################
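# One possible sketch of the spectral derivative (our own implementation,
# assuming f is periodic on [0, 2*pi]; the exercise may expect a different
# formulation):
def fourier_derivative(f, dx):
    nx = f.size
    k = 2 * np.pi * np.fft.fftfreq(nx, d=dx)          # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))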
# Basic parameters
# ---------------------------------------------------------------
nx = 128
x, dx = np.linspace(2*np.pi/nx, 2*np.pi, nx, retstep=True)
sigma = 0.5
xo = np.pi
#################################################################
# IMPLEMENT YOUR SOLUTION HERE!
#################################################################
#################################################################
# PLOT YOUR SOLUTION HERE!
#################################################################
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Evolving Spiking Neural Networks with EvoJAX!
Step2: Inputs - Spike Encoding
Step3: Single Spiking Neuron - Leaky Integrate and Fire
Step5: Simulating a Spiking Feed Forward Network
Step6: Build SNN Policy
Step7: Training
Step8: Final Analysis
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import clear_output
!pip install evojax
!pip install torchvision # We use torchvision.datasets.MNIST in this tutorial.
!pip install mediapy
clear_output()
import jax
import jax.numpy as jnp
from evojax.task.mnist import MNIST
import mediapy as media
import matplotlib.pyplot as plt
import os
if 'COLAB_TPU_ADDR' in os.environ:
from jax.tools import colab_tpu
colab_tpu.setup_tpu()
print('jax.devices():')
jax.devices()
# Here we scale the max probability to .8 so that we don't have inputs that are continually spiking.
def rate_code(key, data, max=0.8):
return jax.random.bernoulli(key, data*max).astype(jnp.float32)
task = MNIST(batch_size=5, test=False)
clear_output()
data = task.reset(key=jax.random.split(jax.random.PRNGKey(42)))
digit = data.obs[0][1]
media.show_image(digit, height=28*8)
# Simulate a few spikes and visualise
encoded = []
key = jax.random.PRNGKey(42)
for _ in range(100):
rnd, key = jax.random.split(key)
encoded.append(rate_code(rnd, digit[:,:,0]))
media.show_video(encoded, height=28*8, codec='gif')
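# Sanity check (illustrative, not part of the original flow): averaging many
# Bernoulli samples should recover the underlying rate, i.e. pixel * 0.8.
check_keys = jax.random.split(jax.random.PRNGKey(0), 500)
rates = jnp.stack([rate_code(k, digit[:, :, 0]) for k in check_keys]).mean(axis=0)
print(jnp.abs(rates - 0.8 * digit[:, :, 0]).max())  # should be close to 0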
#@title JAX friendly version of the spike visualisation code
def raster(data, ax, **kwargs):
if len(data.shape) == 1:
        return ax.scatter(*jnp.where(jnp.expand_dims(data, 1)), **kwargs)  # jnp arrays have no .unsqueeze()
return ax.scatter(*jnp.where(data), **kwargs)
def plot_cur_mem_spk(cur, mem, spk, thr_line=False, vline=False, title=False, ylim_max1=1.25, ylim_max2=1.25):
# Generate Plots
fig, ax = plt.subplots(3, figsize=(8,6), sharex=True,
gridspec_kw = {'height_ratios': [1, 1, 0.4]})
# Plot input current
ax[0].plot(cur, c="tab:orange")
ax[0].set_ylim([0, ylim_max1])
ax[0].set_xlim([0, 200])
ax[0].set_ylabel("Input Current ($I_{in}$)")
if title:
ax[0].set_title(title)
# Plot membrane potential
ax[1].plot(mem)
ax[1].set_ylim([0, ylim_max2])
ax[1].set_ylabel("Membrane Potential ($U_{mem}$)")
if thr_line:
ax[1].axhline(y=thr_line, alpha=0.25, linestyle="dashed", c="black", linewidth=2)
plt.xlabel("Time step")
# Plot output spike using spikeplot
raster(spk, ax[2], s=400, c="black", marker="|")
if vline:
ax[2].axvline(x=vline, ymin=0, ymax=6.75, alpha = 0.15, linestyle="dashed", c="black", linewidth=2, zorder=0, clip_on=False)
plt.ylabel("Output spikes")
plt.yticks([])
plt.show()
def plot_snn_spikes(spk_in, spk1_rec, spk2_rec, title):
# Generate Plots
fig, ax = plt.subplots(3, figsize=(8,6), sharex=True,
gridspec_kw = {'height_ratios': [1, 1, 0.4]})
# Plot input spikes
raster(spk_in, ax[0], s=0.03, c="black")
ax[0].set_ylabel("Input Spikes")
ax[0].set_title(title)
# Plot hidden layer spikes
raster(spk1_rec, ax[1], s = 0.05, c="black")
ax[1].set_ylabel("Hidden Layer")
# Plot output spikes
raster(spk2_rec, ax[2], c="black", marker="|")
ax[2].set_ylabel("Output Spikes")
ax[2].set_ylim([0, 10])
plt.show()
def lif(mem, x, beta, threshold=1):
spk = (mem > threshold).astype(jnp.float32) # if membrane exceeds threshold, spk=1, else, 0
mem = beta * mem + x - spk*threshold
return spk, mem
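# Note (illustrative): beta is the per-step membrane decay. For a continuous
# membrane time constant tau and step size dt, beta = exp(-dt / tau);
# e.g. dt = 1, tau = 5 gives exp(-0.2) ~= 0.819, the value used below.
print(jnp.exp(-1 / 5))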
num_steps = 200
# initialize inputs/outputs + small step current input
x = jnp.concatenate([jnp.zeros(10), jnp.ones(190)*0.5])
mem = jnp.zeros(1)
mem_rec = []
spk_rec = []
# neuron parameters
w = 0.4
beta = 0.819
# neuron simulation
for step in range(num_steps):
spk, mem = lif(mem, x[step]*w, beta=beta)
mem_rec.append(mem)
spk_rec.append(spk)
# convert lists to tensors
mem_rec = jnp.stack(mem_rec)
spk_rec = jnp.stack(spk_rec)
plot_cur_mem_spk(x*w, mem_rec, spk_rec, thr_line=1,ylim_max1=0.5,
title="LIF Neuron Model With Weighted Step Voltage")
hidden_sizes = [28*28, 1000, 10]
def init(hidden_sizes):
Create network initial weights and the initial state of the membrain
potentials.
ws = []
mem = []
key = jax.random.PRNGKey(0)
for i in range(len(hidden_sizes) - 1):
rnd, key = jax.random.split(key)
ws.append(jax.random.normal(rnd, (hidden_sizes[i], hidden_sizes[i+1]))*0.1)
mem.append(jnp.zeros(hidden_sizes[i+1]))
return ws, mem
# Super simple linear layer, SNNs don't use bias.
def linear(x, w):
return jnp.dot(x, w)
# This represents one step of the simulation of the spikes running though the
# network. We need to simulate hundreds steps just to get a single output and
# most of this computation involves zeros. This is why SNNs are not efficient
# on hardware designed for ANNs.
def model(key, x, ws, mems, beta=0.95):
spk = rate_code(key, x)
new_mem = []
new_spk = [spk]
for w, mem in zip(ws, mems):
spk = linear(spk, w)
spk, mem = lif(mem, spk, beta=beta)
new_mem.append(mem)
new_spk.append(spk)
return new_spk, new_mem
%%time
ws, mem = init(hidden_sizes)
all_spk = []
all_mem = []
# Run the simulation for a few time steps and record the spiking activity
for _ in range(num_steps):
rnd, key = jax.random.split(key)
    spk, mem = model(rnd, digit.flatten(), ws, mem)  # use the fresh subkey from the split
all_spk.append(spk)
all_mem.append(mem)
all_spk = [jnp.stack(s) for s in zip(*all_spk)]
plot_snn_spikes(all_spk[0], all_spk[1], all_spk[2], 'Some Lovely Spikes')
# Now let's use jax.scan and jitting to make the simulation much faster
ws, mem = init(hidden_sizes)
@jax.jit
def run_sim(key, x, ws, mem):
def step(carry, _):
key, mem = carry
rnd, key = jax.random.split(key)
spk, mem = model(rnd, x, ws, mem)
return (key, mem), (spk, mem)
_, (spk, mem) = jax.lax.scan(step, (key, mem), None, length=num_steps)
return spk, mem
#warm it up!!
key = jax.random.PRNGKey(0)
ws, mem = init(hidden_sizes)
run_sim(key, digit.flatten(), ws, mem);
%%time
ws, mem = init(hidden_sizes)
spk, mem = run_sim(key, digit.flatten(), ws, mem)
plot_snn_spikes(spk[0], spk[1], spk[2], 'Spikey Boys')
# We can also visualise the membrane potentials
media.show_image(mem[1].T, width=num_steps*4, cmap='hot')
from typing import Sequence, Tuple
from flax import linen as nn
from evojax.policy.base import PolicyNetwork
from evojax.policy.base import PolicyState
from evojax.task.base import TaskState
from evojax.util import get_params_format_fn
class SNNPolicy(PolicyNetwork):
def __init__(self,
input_dim: int,
hidden_dims: Sequence[int],
output_dim: int,
time_steps: int):
params, init_mem = init([input_dim] + hidden_dims + [output_dim])
def run_sim(key, x, ws, mem):
def step(carry, _):
key, mem = carry
rnd, key = jax.random.split(key)
spk, mem = model(rnd, x, ws, mem)
return (key, mem), spk[-1]
# NOTE: This version of run_sim only records the final layers's spikes
# to save on memory. Don't need all the other stuff if we are not
# visualising them.
_, spk = jax.lax.scan(step, (key, mem), None, length=time_steps)
return spk
def apply(params, x, p_states):
x = x.flatten()
keys, rnd = jax.random.split(p_states.keys)
spk = run_sim(rnd, x, params, init_mem)
            # The tutorial uses the final layer's membrane potentials as outputs,
            # but let's be more pure and use the mean of the output spikes instead.
x = spk.mean(axis=0)
x = nn.log_softmax(x)
return x, PolicyState(keys=keys)
# Careful! In other examples we use flax to build the neural network
# based policies which automatically build neural networks that expects
# a batch of inputs. Here the other hand, `apply` was built for one input
# only, so we have to allow the function to process batches by using vmap.
# But we only want to batch the inputs and outputs of the network so we
# have to make careful use of in_axes and out_axes parameters.
# The function is then passed though vmap again below to allow it to
# process multiple batches at the same time to evaluate individuals in
# the population in parallel.
apply = jax.vmap(apply, in_axes=(None, 0, None), out_axes=(0, None))
self.num_params, format_params_fn = get_params_format_fn(params)
self._format_params_fn = jax.vmap(format_params_fn)
self._forward_fn = jax.vmap(apply)
def get_actions(self,
t_states: TaskState,
params: jnp.ndarray,
p_states: PolicyState) -> Tuple[jnp.ndarray, PolicyState]:
params = self._format_params_fn(params)
return self._forward_fn(params, t_states.obs, p_states)
def reset(self, states: TaskState) -> PolicyState:
dims = states.obs.shape[0]
return PolicyState(
# Use same random seed so that the inputs spikes are the same!
keys=jnp.tile(jax.random.PRNGKey(12345)[None,:], (dims,1,)),
)
# @title Set hyper-parameters
# @markdown PLEASE NOTE: `pop_size` and `num_tests` should be multiples of `jax.local_device_count()`.
n_devices = jax.local_device_count()
pop_size = 128 # @param
assert pop_size % n_devices == 0
time_steps = 50 # @param
batch_size = 256 # @param
max_iters = 1000 # @param
center_lr = 0.006 # @param
init_std = 0.04 # @param
std_lr = 0.09 # @param
seed = 42 # @param
from evojax import SimManager
from evojax.algo import PGPE
train_task = MNIST(batch_size=batch_size, test=False)
test_task = MNIST(batch_size=batch_size, test=True)
policy = SNNPolicy(
input_dim=28*28,
hidden_dims=[1000],
output_dim=10,
time_steps=time_steps)
num_tests=1
solver = PGPE(
pop_size=pop_size,
param_size=policy.num_params,
optimizer='adam',
center_learning_rate=center_lr,
stdev_learning_rate=std_lr,
init_stdev=init_std,
seed=seed,
)
sim_mgr = SimManager(
n_repeats=1,
test_n_repeats=1,
pop_size=pop_size,
n_evaluations=num_tests,
policy_net=policy,
train_vec_task=train_task,
valid_vec_task=test_task,
seed=seed,
)
import time
import numpy as np
start_time = time.perf_counter()
for train_iters in range(max_iters):
# Training.
params = solver.ask()
scores, _ = sim_mgr.eval_params(params=params, test=False)
solver.tell(fitness=scores)
# Test periodically.
if train_iters > 0 and train_iters % 10 == 0:
best_params = solver.best_params
scores = np.array(sim_mgr.eval_params(params=best_params, test=True)[0])
score_best = scores.max()
score_avg = np.mean(scores)
print('Iter={0}, #tests={1}, score.best={2:.3f}, score.avg={3:.3f}'.format(
train_iters, num_tests, score_best, score_avg))
# Final test.
best_params = solver.best_params
scores = np.array(sim_mgr.eval_params(params=best_params, test=True)[0])
score_best = scores.max()
score_avg = np.mean(scores)
print('Iter={0}, #tests={1}, score.best={2:.3f}, score.avg={3:.3f}'.format(
train_iters, num_tests, score_best, score_avg))
print('time cost: {}s'.format(time.perf_counter() - start_time))
@jax.jit
def run_sim(key, x, ws, mem):
def step(carry, _):
key, mem = carry
rnd, key = jax.random.split(key)
spk, mem = model(rnd, x, ws, mem)
return (key, mem), (spk, mem)
_, (spk, mem) = jax.lax.scan(step, (key, mem), None, length=time_steps)
return spk, mem
# _format_params_fn is vmapped so expects a batch dimension, so we need to add
# an extra dimension at the start and then squeeze the results to get rid of the
# batch dimension again.
params = [jnp.squeeze(w) for w in policy._format_params_fn(best_params[None, :])]
data = test_task.reset(key=jax.random.split(jax.random.PRNGKey(42)))
_, init_mem = init([28*28, 1000, 10])
for i in range(20):
digit = data.obs[0][i]
label = data.labels[0][i]
media.show_image(jnp.reshape(digit, (28, 28)), height=28*8)
spk, mem = run_sim(jax.random.PRNGKey(seed=42), digit.flatten(), params, init_mem)
pred = spk[2].mean(axis=0).argmax()
print('Label:', label)
print('Pred: ', pred)
if pred == label:
print('*** Correct :D ***')
else:
print('*** Wrong :( ***')
plt.bar(jnp.arange(10), spk[2].mean(axis=0))
plt.show()
print('='*50)
plot_snn_spikes(spk[0], spk[1], spk[2], 'Trained Spikes')
# Interestingly, visualising the weights doesn't show any obvious structure...
media.show_images(jnp.reshape(params[0][:, :10], (28, 28, -1)).transpose((2, 0, 1)), height=28*4, cmap='RdBu')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Embedding layer is best understood as a dictionary mapping integer indices (which stand for specific words) to dense vectors. It takes
Step2: We get to a validation accuracy of ~76%, which is pretty good considering that we only look at the first 20 words in every review. But
Step3: Tokenize the data
Step4: Download the GloVe word embeddings
Step5: Now let's build an embedding matrix that we will be able to load into an Embedding layer. It must be a matrix of shape (max_words,
Step6: Define a model
Step7: Load the GloVe embeddings in the model
Step8: Additionally, we freeze the embedding layer (we set its trainable attribute to False), following the same rationale as what you are
Step9: Let's plot its performance over time
Step10: The model quickly starts overfitting, unsurprisingly given the small number of training samples. Validation accuracy has high variance for
Step11: Validation accuracy stalls in the low 50s. So in our case, pre-trained word embeddings does outperform jointly learned embeddings. If you
Step12: And let's load and evaluate the first model
|
<ASSISTANT_TASK:>
Python Code:
from keras.layers import Embedding
# The Embedding layer takes at least two arguments:
# the number of possible tokens, here 1000 (1 + maximum word index),
# and the dimensionality of the embeddings, here 64.
embedding_layer = Embedding(1000, 64)
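# Illustrative sketch (assumes an eager TF backend where a layer can be
# called directly on an array; the indices below are arbitrary demo values):
# a (batch, sequence) tensor of integer indices comes back as a
# (batch, sequence, 64) tensor of dense vectors.
import numpy as np
demo_indices = np.array([[4, 20, 7]])
print(embedding_layer(demo_indices).shape)  # (1, 3, 64)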
from keras.datasets import imdb
from keras import preprocessing
# Number of words to consider as features
max_features = 10000
# Cut texts after this number of words
# (among top max_features most common words)
maxlen = 20
# Load the data as lists of integers.
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
# This turns our lists of integers
# into a 2D integer tensor of shape `(samples, maxlen)`
x_train = preprocessing.sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = preprocessing.sequence.pad_sequences(x_test, maxlen=maxlen)
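# Illustrative check: after padding, every review is a fixed-length vector of
# word indices, so x_train has shape (25000, 20).
print(x_train.shape)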
from keras.models import Sequential
from keras.layers import Flatten, Dense
model = Sequential()
# We specify the maximum input length to our Embedding layer
# so we can later flatten the embedded inputs
model.add(Embedding(10000, 8, input_length=maxlen))
# After the Embedding layer,
# our activations have shape `(samples, maxlen, 8)`.
# We flatten the 3D tensor of embeddings
# into a 2D tensor of shape `(samples, maxlen * 8)`
model.add(Flatten())
# We add the classifier on top
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
model.summary()
history = model.fit(x_train, y_train,
epochs=10,
batch_size=32,
validation_split=0.2)
import os
imdb_dir = '/home/ubuntu/data/aclImdb'
train_dir = os.path.join(imdb_dir, 'train')
labels = []
texts = []
for label_type in ['neg', 'pos']:
dir_name = os.path.join(train_dir, label_type)
for fname in os.listdir(dir_name):
if fname[-4:] == '.txt':
f = open(os.path.join(dir_name, fname))
texts.append(f.read())
f.close()
if label_type == 'neg':
labels.append(0)
else:
labels.append(1)
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import numpy as np
maxlen = 100 # We will cut reviews after 100 words
training_samples = 200 # We will be training on 200 samples
validation_samples = 10000 # We will be validating on 10000 samples
max_words = 10000 # We will only consider the top 10,000 words in the dataset
tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
data = pad_sequences(sequences, maxlen=maxlen)
labels = np.asarray(labels)
print('Shape of data tensor:', data.shape)
print('Shape of label tensor:', labels.shape)
# Split the data into a training set and a validation set
# But first, shuffle the data, since we started from data
# where sample are ordered (all negative first, then all positive).
indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]
x_train = data[:training_samples]
y_train = labels[:training_samples]
x_val = data[training_samples: training_samples + validation_samples]
y_val = labels[training_samples: training_samples + validation_samples]
glove_dir = '/home/ubuntu/data/'
embeddings_index = {}
f = open(os.path.join(glove_dir, 'glove.6B.100d.txt'))
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print('Found %s word vectors.' % len(embeddings_index))
embedding_dim = 100
embedding_matrix = np.zeros((max_words, embedding_dim))
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if i < max_words:
if embedding_vector is not None:
# Words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
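# Optional check (illustrative): how much of the 10,000-word vocabulary is
# covered by the pre-trained GloVe vectors. Rows left all-zero are misses.
hits = sum(1 for word, i in word_index.items()
           if i < max_words and word in embeddings_index)
print('GloVe coverage: {:.1f}% of the top {} words'.format(100. * hits / max_words, max_words))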
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense
model = Sequential()
model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.layers[0].set_weights([embedding_matrix])
model.layers[0].trainable = False
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=32,
validation_data=(x_val, y_val))
model.save_weights('pre_trained_glove_model.h5')
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense
model = Sequential()
model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(x_train, y_train,
epochs=10,
batch_size=32,
validation_data=(x_val, y_val))
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
test_dir = os.path.join(imdb_dir, 'test')
labels = []
texts = []
for label_type in ['neg', 'pos']:
dir_name = os.path.join(test_dir, label_type)
for fname in sorted(os.listdir(dir_name)):
if fname[-4:] == '.txt':
f = open(os.path.join(dir_name, fname))
texts.append(f.read())
f.close()
if label_type == 'neg':
labels.append(0)
else:
labels.append(1)
sequences = tokenizer.texts_to_sequences(texts)
x_test = pad_sequences(sequences, maxlen=maxlen)
y_test = np.asarray(labels)
model.load_weights('pre_trained_glove_model.h5')
model.evaluate(x_test, y_test)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data pre-processing
Step2: The missing values to be imputed before fit the model
Step3: Exploratory analysis
Step4: Model establishement
Step5: A series functions are defined to fit and evaluate the model
Step6: The optimum model is first analyzed using self-defined method
Step7: The selected features are extracted
Step8: find the name of selected features
Step9: The features selected is translated as shown below
Step10: We map the feature to the files
Step11: All the features are also mapped
Step12: To improve the inpterpretability and understand what is the most important features, feature selection is performed
Step13: Significance test
|
<ASSISTANT_TASK:>
Python Code:
from sklearn import linear_model
import csv
import numpy as np
from matplotlib import pyplot as plt
from sklearn.preprocessing import Imputer
from sklearn.linear_model import lasso_path
from sklearn.linear_model import LassoCV
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error
%matplotlib inline
data = np.genfromtxt('ALL_ATRIB_DATA.csv',delimiter=',',skip_header=1)
Predictor = data[:,0:159]
Target = data[:, 159:162]
all_predictor = list(range(159))
numeric_index = [0, 18, 20, 38, 40, 45, 47, 58, 70, 79, 103, 104, 106, 110, 113, 136, 138, 147, 156, 157, 158]
bi_no_index = [item for item in all_predictor if item not in numeric_index]
print(len(numeric_index))
print(len(bi_no_index))
len(numeric_index) + len(bi_no_index) == len(all_predictor)
# Binary and categorical values are imputed as most frequendt
imp_bi = Imputer(missing_values='NaN', strategy='most_frequent', axis = 0)
imp_bi.fit(Predictor[:,bi_no_index])
Predictor[:,bi_no_index] = imp_bi.transform(Predictor[:,bi_no_index])
#Numeric values are imputed as median (to elimanates influence of extreme values)
imp_num = Imputer(missing_values='NaN', strategy='median', axis = 0)
imp_num.fit(Predictor[:,numeric_index])
Predictor[:,numeric_index] = imp_num.transform(Predictor[:,numeric_index])
imp_tar = Imputer(missing_values = 'NaN', strategy = 'median', axis = 0)
imp_tar.fit(Target)
Target = imp_tar.transform(Target)
plt.figure(figsize=(10, 8))
plt.hist(Target[:,0], bins = 20)
plt.ylabel("Number of buildings")
plt.xlabel("Total EUI (MBTU/sqft)")
plt.title("Total EUI distribution")
plt.figure(figsize=(10, 8))
plt.boxplot(Target[:,0], notch=True, sym='bd', vert=False)
plt.xlabel("Building EUI")
plt.ylabel("Building")
plt.title("Building EUI")
trainPredictor = Predictor[0:len(Predictor)//2]
testPredictor = Predictor[len(Predictor)//2: len(Predictor)]
trainTarget = Target[0:len(Target)//2]
testTarget = Target[len(Target)//2:len(Target)]
print(len(trainPredictor) + len(testPredictor) == len(Predictor))
print(len(trainTarget) + len(testTarget) == len(Target))
print(len(trainPredictor) == len(trainTarget))
print(len(testPredictor) == len(testTarget))
print(trainPredictor.shape)
print(trainTarget.shape)
#This function gives back the Lasso regression coefficient after the model is fitted
def lasso_fit(alpha, predictor, Target):
clf = linear_model.Lasso(alpha=alpha)
clf.fit(predictor,Target)
coefficient = clf.coef_
return coefficient
# the function returns the predicted y matrix of test dataset
def lasso_results(alpha_input, train_X, train_y, test_X, test_y):
clf = linear_model.Lasso(alpha=alpha_input)
clf.fit(train_X,train_y)
# a column of ones is added to the design matrix to fit the intercept
oneMatrix = np.ones((len(test_X),1))
DesignMatrix = np.concatenate((test_X, oneMatrix),axis = 1)
coefficients = np.concatenate((clf.coef_ , [clf.intercept_]), axis = 0)
testResults = np.dot(DesignMatrix, coefficients)
return testResults
# the function returns the evaluator of the lasso fit (r_square, mse)
def lasso_test(alpha_input, train_X, train_y, test_X, test_y, testWeight):
r_square = []
mse = []
for a in alpha_input:
testResults = lasso_results(a, train_X, train_y, test_X, test_y)
r_square.append(r2_score(test_y, testResults, sample_weight = testWeight))
mse.append(mean_squared_error(test_y, testResults, sample_weight =testWeight))
index = mse.index(min(mse))
evaluator = np.stack((r_square, mse), axis = 1)
return {"evaluator": evaluator, "r_square": r_square[index], "MSE": mse[index], "alpha":alpha_input[index]}
#find best fit using LassoCV
def lasso_cros(alpha_input, train_X, train_Y, test_X, test_Y):
    clf2 = LassoCV(alphas=alpha_input, cv=5)
clf2.fit(train_X, train_Y)
plt.figure(figsize=(15,10))
plt.plot(np.log10(clf2.alphas_), clf2.mse_path_[:], '--')
plt.plot(np.log10(clf2.alphas_), clf2.mse_path_.mean(axis=-1), 'k-')
plt.show()
return {'alpha':clf2.alpha_, 'r_square':clf2.score(train_X, train_Y), 'intercept':clf2.intercept_, 'Minimum MSE': min(clf2.mse_path_.mean(axis=-1))}
#this function give the number of features selected under each alpha
def num_feature(alpha_input, train_X, train_y):
num_feature = []
for alpha_input in alpha_input:
clf = linear_model.Lasso(alpha=alpha_input)
clf.fit(train_X,train_y)
num = find_features(clf.coef_)["count"]
num_feature.append(num)
return num_feature
#This function give the features selected (non-zero coefficient)
def find_features(coeff):
index = []
count = 0
for i in range(len(coeff)):
if coeff[i] != 0:
index.append(i)
count += 1
return {'index':index, 'count': count}
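# Illustrative: find_features on a toy coefficient vector reports the
# non-zero positions 1 and 3 as selected features.
print(find_features(np.array([0., 1.3, 0., -0.2])))  # {'index': [1, 3], 'count': 2}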
#define alpha range
alpha_range = np.logspace(-2, 2, num = 1000, base = 10)
#extract MSE and number of features selected from given alpha range
evaluators = lasso_test(alpha_range, trainPredictor, trainTarget[:,0], testPredictor, testTarget[:,0], testTarget[:,2])["evaluator"]
#and number of features selected from given alpha range
num_feature = num_feature(alpha_range, trainPredictor, trainTarget[:,0])
#Plot the results
plt.figure(figsize=(8,10))
plt.subplot(211)
plt.plot(np.log10(np.logspace(-2, 2, num = 1000, base =10)), evaluators[:,1], 'k-')
plt.ylabel("Mean Square Error")
plt.xlabel("log10(alpha)")
plt.title("Change of Mearn Square Error with Tuning Parameter")
plt.subplot(212)
plt.plot(np.log10(np.logspace(-2, 2, num = 1000, base =10)), num_feature, 'k-')
plt.ylabel("Number of features selected")
plt.xlabel("log10(alpha)")
plt.title("Change of Number of features selected with Tuning Parameter")
plt.show()
# the Model is auto selected by the LassoCV function
lasso_cros(alpha_range, trainPredictor, trainTarget[:,0], testPredictor, testTarget[:,0])
# the predicted value and real value are compared
testResults = lasso_results(0.30302710828663965, trainPredictor, trainTarget[:,0], testPredictor, testTarget[:,0])
fig2= plt.figure(figsize=(10, 8))
plt.plot(list(range(2608)), testResults, 'ko', label = 'Predicted EUI')
plt.plot(list(range(2608)), testTarget[:,0], 'ro', label = 'Actual EUI')
plt.ylim(-100, 800)
plt.xlim(0, len(testResults))
plt.xlabel("Buildings")
plt.ylabel("Total EUI (MBTU/sqft)")
plt.title("Predicted EUI vs. Actual EUI")
plt.legend()
#Attributes selected
alpha = lasso_cros(alpha_range, trainPredictor, trainTarget[:,0], testPredictor, testTarget[:,0])["alpha"]
coefficient = lasso_fit(alpha, trainPredictor, trainTarget[:,0])
features = find_features(coefficient)["index"]
import csv
File = open('ALL_ATRIB_DATA_code.csv','r')
reader = csv.reader(File, delimiter = ',')
code = list(reader)
code = np.array(code)
code[features,1]
#This function maps the feature to the file
def file_map(feature_index, code):
basic = []
file1 = []
file2 = []
file3 = []
file4 = []
file5 = []
file6 = []
file7 = []
for i in feature_index:
if code[i, 3] == "Basic info.":
basic.append(code[i,1])
elif code[i, 3] == "File (1)":
file1.append(code[i, 1])
elif code[i, 3] == "File (2)":
file2.append(code[i, 1])
elif code[i, 3] == "File (3)":
file3.append(code[i, 1])
elif code[i, 3] == "File (4)":
file4.append(code[i, 1])
elif code[i, 3] == "File (5)":
file5.append(code[i, 1])
elif code[i, 3] == "File (6)":
file6.append(code[i, 1])
elif code[i, 3] == "File (7)":
file7.append(code[i, 1])
print "The total number of features selected is \t{}.".format(len(feature_index))
print "The number of features in Basic info is \t{}.".format(len(basic))
print "The number of features in File 1 is \t{}.".format(len(file1))
print "The number of features in File 2 is \t{}.".format(len(file2))
print "The number of features in File 3 is \t{}.".format(len(file3))
print "The number of features in File 4 is \t{}.".format(len(file4))
print "The number of features in File 5 is \t{}.".format(len(file5))
print "The number of features in File 6 is \t{}.".format(len(file6))
print "The number of features in File 7 is \t{}.".format(len(file7))
return None
file_map(features, code)
file_map(list(range(159)), code)
# this function returns the targeted number of features selected
def feature_reduc(alpha_input, train_X, train_y, threshold):
feature_num = len(train_X[0])
while feature_num > threshold:
clf = linear_model.Lasso(alpha=alpha_input)
clf.fit(train_X,train_y)
feature_index = find_features(clf.coef_)["index"]
feature_num = len(feature_index)
alpha_input = alpha_input * 1.2
return {'alpha':alpha_input, 'feature_index': feature_index}
# Target 1 most important feature
code[feature_reduc(10, Predictor, Target[:,0],1)["feature_index"],1]
# Target 3 most important feature
code[feature_reduc(10, Predictor, Target[:,0],3)["feature_index"],1]
# Target 5 most important feature
code[feature_reduc(10, Predictor, Target[:,0],5)["feature_index"],1]
# Target 10 most important feature
code[feature_reduc(10, Predictor, Target[:,0],10)["feature_index"],1]
# the MSE of the optimum model is used
MSE = 10595.655356917619
#alpha obtained from previous model
coef = coefficient = lasso_fit(0.30302710828663965, trainPredictor, trainTarget[:,0])
Y_pred=lasso_results(0.30372635797, trainPredictor, trainTarget[:,0], testPredictor, testTarget[:,0])
from scipy.stats import t
s2=MSE*np.linalg.inv(np.dot(np.transpose(testPredictor),testPredictor))
ss2=np.diag(s2)
ss=np.sqrt(ss2)
stu=t.isf((1-0.95)/2,2608-159)
Max=np.matrix(coef).T+np.matrix(stu*ss).T
Min=np.matrix(coef).T-np.matrix(stu*ss).T
beta_min_max=np.concatenate((Min,Max),axis=1)
print(beta_min_max)
from scipy.stats import t
stu=t.isf((1-0.95)/2,2608-159)
T=coef/ss
print('T-test: the two-sided 95% critical value (df = 2608 - 159 = 2449) is about 1.96')
num=np.where((T<-stu)|(T>stu))
Stu_p=stu*np.ones(159)
x=np.linspace(1,159,num=159)
plt.plot(x,T,x,Stu_p,x,-Stu_p)
num_reject=np.matrix(num).shape
print num_reject
#print(code[num,1])
print(numeric_index)
print(num)
print(code[num,1])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic usage
Step2: Plotting circuits with sparse output
Step3: Sparse plots
Step4: Histogram for processed results.
Step5: Modifying plot properties
|
<ASSISTANT_TASK:>
Python Code:
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
import cirq
import matplotlib.pyplot as plt
q = cirq.LineQubit.range(4)
circuit = cirq.Circuit([cirq.H.on_each(*q), cirq.measure(*q)])
result = cirq.Simulator().run(circuit, repetitions=100)
_ = cirq.plot_state_histogram(result, plt.subplot())
plt.show()
q = cirq.LineQubit.range(6)
circuit = cirq.Circuit([cirq.H(q[0]), [cirq.CNOT(q[0], q[i]) for i in range(1, 6, 1)], cirq.measure(*q, key='measure_all')])
result = cirq.Simulator().run(circuit, repetitions=100)
_ = cirq.plot_state_histogram(result, plt.subplot())
plt.show()
histogram = result.histogram(key = 'measure_all')
_ = cirq.plot_state_histogram(histogram, plt.subplot())
plt.show()
def count_ones(bits):
return 'even' if sum(bits) % 2 == 0 else 'odd'
q = cirq.LineQubit.range(6)
# Uniform superposition over all 2 ** 6 states.
circuit = cirq.Circuit([cirq.H.on_each(*q), cirq.measure(*q, key='measure_all')])
result = cirq.Simulator().run(circuit, repetitions=100)
# Create a customized histogram by processing the measurement results using `fold_func`.
custom_histogram = result.histogram(key = 'measure_all', fold_func=count_ones)
# Plot the custom histogram using cirq.plot_state_histogram
_ = cirq.plot_state_histogram(custom_histogram, plt.subplot())
plt.show()
def binary_labels(num_qubits):
return [bin(x)[2:].zfill(num_qubits) for x in range(2 ** num_qubits)]
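# Illustrative: binary_labels(2) enumerates all 2-qubit basis states in order.
print(binary_labels(2))  # ['00', '01', '10', '11']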
q = cirq.LineQubit.range(3)
circuit = cirq.Circuit([cirq.H.on_each(*q), cirq.measure(*q)])
result = cirq.Simulator().run(circuit, repetitions=100)
_ = cirq.plot_state_histogram(result, plt.subplot(), title = 'Custom Plot Title', xlabel = 'Custom X-Axis Label', ylabel = 'Custom Y-Axis Label', tick_label=binary_labels(3))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Transform data with PCA computed on the average ie evoked response
Step2: Transform data with ICA computed on the raw epochs (no averaging)
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Jean-Remi King <jeanremi.king@gmail.com>
# Asish Panda <asishrocks95@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.decoding import UnsupervisedSpatialFilter
from sklearn.decomposition import PCA, FastICA
print(__doc__)
# Preprocess data
data_path = sample.data_path()
# Load and filter data, set up epochs
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.3
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 20)
events = mne.read_events(event_fname)
picks = mne.pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
exclude='bads')
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=False,
picks=picks, baseline=None, preload=True,
verbose=False)
X = epochs.get_data()
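# Illustrative: get_data() returns an array of shape
# (n_epochs, n_channels, n_times), which is what UnsupervisedSpatialFilter
# expects below.
print(X.shape)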
pca = UnsupervisedSpatialFilter(PCA(30), average=False)
pca_data = pca.fit_transform(X)
ev = mne.EvokedArray(np.mean(pca_data, axis=0),
mne.create_info(30, epochs.info['sfreq'],
ch_types='eeg'), tmin=tmin)
ev.plot(show=False, window_title="PCA")
ica = UnsupervisedSpatialFilter(FastICA(30), average=False)
ica_data = ica.fit_transform(X)
ev1 = mne.EvokedArray(np.mean(ica_data, axis=0),
mne.create_info(30, epochs.info['sfreq'],
ch_types='eeg'), tmin=tmin)
ev1.plot(show=False, window_title='ICA')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define path to data
Step2: A few basic libraries that we'll need for the initial exercises
Step3: We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
Step4: Use a pretrained VGG model with our Vgg16 class
Step5: The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Step6: Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
Step7: (BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
Step8: As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array containing just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding.
Step9: We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
Step10: The category indexes are based on the ordering of categories used in the VGG model - e.g here are the first four
Step11: (Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
Step12: Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
Step13: Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
#path = "data/dogscats/"
path = "data/dogscats/sample/"
from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
import utils
import importlib
importlib.reload(utils)
from utils import plots
# As large as you can, but no larger than 64 is recommended.
# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.
# batch_size=64
batch_size=2
# Import our class, and instantiate
import vgg16
from vgg16 import Vgg16
# %%capture x # ping bug: disconnect -> reconnect kernel workaround
vgg = Vgg16()
vgg.classes
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+ 'train', batch_size=batch_size)
batches.nb_class
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1, verbose=1)
#x.show()
vgg = Vgg16()
batches = vgg.get_batches(path+'train', batch_size=4)
imgs,labels = next(batches)
imgs[0].shape
labels
plots(imgs, titles=labels)
vgg.predict(imgs, True)
vgg.classes[:4]
batch_size=64
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)
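# Hedged aside (illustrative, not part of the original lesson): one-hot labels can be
# decoded back to class names via argmax. This assumes the generator returned by
# vgg.get_batches exposes a class_indices dict, as Keras flow_from_directory iterators do.
label_indices = np.argmax(labels, axis=1)
class_names = sorted(batches.class_indices, key=batches.class_indices.get)
print([class_names[i] for i in label_indices])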
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.2 Data preparation
Step2: 1.3 Basic description
Step3: Q What can be said about the distribution of the age variable, and of income?
Step4: Q What can be said about the relationships between variables?
Step5: Q The fnlwgt variable (Final sampling weight
Step6: A few additional changes are made to the dataset. Some variables are kept in both quantitative and qualitative versions, such as hours per week, age, or education level. Some variables are removed so as to keep only a single occurrence of each sensitive piece of information
Step7: 1.3 Multivariate description
Step8: 2 Modeling methods
Step9: 2.2 Logistic regression
Step10: Q The confusion matrix is not symmetric. What could be the reason for this?
Step11: 2.2 Binary classification tree
Step12: Q What is the shortcoming of the decision tree implementation in Scikit-learn compared with R's rpart? What can be said about the tree?
Step13: Q Which optimization strategy is adopted? Which other one could be? Which other network could also be considered? What are the neurons' activation functions?
Step14: 2.4 Random forests
Step15: Q How is the plot obtained? Which importance measure is used? How should these results be interpreted?
Step16: 3 Comparison of the methods
Step17: 3.2 Monte Carlo cross-validation
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# Imports
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
adult=pd.read_csv('adultTrainTest.csv')
adult.head()
def create_categorical_data(df, column_name):
cat_columns = pd.Categorical(df[column_name], ordered=False)
return cat_columns
print(np.sort(adult["mariStat"].unique()))
# mariStat
cat_name_dic = {" Never-married": "Never-Married", " Married-AF-spouse": "Married",
" Married-civ-spouse": "Married", " Married-spouse-absent": "Not-Married",
" Separated": "Not-Married", " Divorced": "Not-Married", " Widowed": "Widowed"}
adult['mariStat'] = adult.mariStat.map(cat_name_dic)
print(np.sort(adult["nativCountry"].unique()))
# nativCountry
cat_country = {" Cambodia": "SE-Asia", " Canada": "British-Commonwealth", " China": "China", " Columbia": "South-America",
" Cuba": "Other", " Dominican-Republic": "Latin-America", " Ecuador": "South-America",
" El-Salvador": "South-America", " England": "British-Commonwealth", " France": "Euro_1",
" Germany": "Euro_1", " Greece": "Euro_2", " Guatemala": "Latin-America", " Haiti": "Latin-America",
" Holand-Netherlands": "Euro_1", " Honduras": "Latin-America", " Hong": "China", " Hungary": "Euro_2",
" India": "British-Commonwealth", " Iran": "Other", " Ireland": "British-Commonwealth", " Italy": "Euro_1",
" Jamaica": "Latin-America", " Japan": "Other", " Laos": "SE-Asia", " Mexico": "Latin-America",
" Nicaragua": "Latin-America", " Outlying-US(Guam-USVI-etc)": "Latin-America", " Peru": "South-America",
" Philippines": "SE-Asia", " Poland": "Euro_2", " Portugal": "Euro_2", " Puerto-Rico": "Latin-America",
" Scotland": "British-Commonwealth", " South": "Euro_2", " Taiwan": "China", " Thailand": "SE-Asia",
" Trinadad&Tobago": "Latin-America", " Vietnam": "SE-Asia", " United-States": "United-States",
" Yugoslavia": "Euro_2"}
adult["nativCountry"] = adult.nativCountry.map(cat_country)
print(np.sort(adult["education"].unique()))
# education
cat_educ = {" 10th": "Dropout", " 11th": "Dropout", " 12th": "Dropout", " 1st-4th": "Dropout", " 5th-6th": "Dropout",
" 7th-8th": "Dropout", " 9th": "Dropout", " Assoc-acdm": "Associates", " Assoc-voc": "Associates",
" Bachelors": "Bachelors", " Doctorate": "Doctorate", " HS-grad": "HS-grad", " Masters": "Masters",
" Preschool": "Dropout", " Prof-school": "Prof-School", " Some-college": "HS-Graduate"}
adult["education"] = adult.education.map(cat_educ)
print(np.sort(adult["workClass"].unique()))
# workClass
cat_work = {" Federal-gov": "Federal-Govt", " Local-gov": "Other-Govt", " State-gov": "Other-Govt", " Private": "Private",
" Self-emp-inc": "Self-Employed", " Self-emp-not-inc": "Self-Employed", " Without-pay": "Not-Working",
" Never-worked": "Not-Working"}
adult["workClass"] = adult.workClass.map(cat_work)
print(np.sort(adult["occup"].unique()))
# occup
cat_occup = {" Adm-clerical": "Admin", " Craft-repair": "Blue-Collar", " Exec-managerial": "White-Collar",
" Farming-fishing": "Blue-Collar", " Handlers-cleaners": "Blue-Collar", " Machine-op-inspct": "Blue-Collar",
" Other-service": "Service", " Priv-house-serv": "Service", " Prof-specialty": "Professional",
" Protective-serv": "Other-occups", " Sales": "Sales", " Tech-support": "Other-occups",
" Transport-moving": "Blue-Collar"}
adult["occup"] = adult.occup.map(cat_occup)
print(np.sort(adult["origEthn"].unique()))
# origEthn
cat_orig = {" White": "CaucYes", " Black": "CaucNo", " Amer-Indian-Eskimo": "CaucNo", " Asian-Pac-Islander": "CaucNo",
" Other": "CaucNo"}
adult["origEthn"] = adult.origEthn.map(cat_orig)
adult["LcapitalGain"] = np.log(1 + adult["capitalGain"])
adult["LcapitalLoss"] = np.log(1 + adult["capitalLoss"])
# capital
def quantileCapitalGain(capital):
if type(capital) != int:
result = np.nan
elif capital <= 0:
result = "None"
elif capital <= np.median(adult[adult["capitalGain"] > 0]["capitalGain"]):
result = "cgLow"
else:
result = "cgHigh"
return result
adult["capitalGain"] = list(map(quantileCapitalGain, adult.capitalGain))
def quantileCapitalLoss(capital):
if type(capital) != int:
result = np.nan
elif capital <= 0:
result = "None"
elif capital <= np.median(adult[adult["capitalLoss"] > 0]["capitalLoss"]):
result = "clLow"
else:
result = "clHigh"
return result
adult["capitalLoss"] = list(map(quantileCapitalLoss, adult.capitalLoss))
adult["ageQ"] = pd.qcut(adult.age, 5, labels=["Ag1", "Ag2", "Ag3", "Ag4", "Ag5"])
adult["hoursWeekQ"] = pd.cut(adult.hoursWeek, bins=np.array([0, 39, 41, 100]), labels=["HW1", "HW2", "HW3"])
def create_categorical_data_rename(df, column_name, cat_name_dic):
cat_columns = pd.Categorical(df[column_name], ordered=False)
new_categorie = [cat_name_dic[old_name] for old_name in cat_columns.categories]
return cat_columns.rename_categories(new_categorie)
print(np.sort(adult["income"].unique()))
adult["income"] = create_categorical_data_rename(adult, "income", {" <=50K": "incLow", " >50K": "incHigh"})
for name in ["workClass", "education", "mariStat", "occup", "relationship", "origEthn", "sex", "capitalGain",
"capitalLoss", "nativCountry"]:
adult[name] = create_categorical_data(adult, name)
adult = adult[np.logical_not(adult.isnull().any(axis=1))]
adult.head()
adult = adult[(adult["sex"] != "Female") | (adult["relationship"] != "Husband")]
adult = adult[(adult["sex"] != "Male") | (adult["relationship"] != "Wife")]
adult.describe()
adult["age"].hist()
plt.show()
adult["fnlwgt"].hist()
plt.show()
adult["income"].value_counts()
adult["relationship"].value_counts()
adult.plot(kind="scatter",x="age",y="educNum")
plt.show()
adult.plot(kind="scatter",x="hoursWeek",y="age")
plt.show()
adult.boxplot(column="age",by="income")
plt.show()
adult.boxplot(column="fnlwgt",by="income")
plt.show()
# Mosaic plots
from statsmodels.graphics.mosaicplot import mosaic
mosaic(adult,["income","sex"])
plt.show()
from statsmodels.graphics.mosaicplot import mosaic
mosaic(adult,["income","origEthn"])
plt.show()
print(np.sort(adult["relationship"].unique()))
cat_orig = {' Husband':"ChildNo",' Not-in-family':"ChildNo",' Other-relative':"ChildNo",' Own-child':"ChildYes",' Unmarried':"ChildNo",' Wife':"ChildNo"}
adult["child"] = adult.relationship.map(cat_orig)
adult=adult.drop(["fnlwgt","nativCountry","relationship"],axis=1)
adult.head()
adultDum=pd.get_dummies(adult[["workClass","education","mariStat",
"occup","origEthn","sex","capitalGain","capitalLoss","ageQ","hoursWeekQ","child"]])
adultDum.head()
adultJoin = adult[["age","educNum","hoursWeek","LcapitalGain","LcapitalLoss","income"]].join(adultDum)
ind_ech = np.random.choice(adultJoin.index.values, 20000, replace=False)
adultEch=adultJoin.loc[ind_ech]
# Target variable
Y=adultEch["income"]
# Predictor variables
X=adultEch.drop(["income"],axis=1)
from sklearn.model_selection import train_test_split
X_train,X_test,Y_train,Y_test=train_test_split(X,Y,test_size=5000,random_state=11)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
import time
tps0=time.perf_counter()
# Tuning the penalty parameter
# grid of values
param=[{"C":[0.8,0.9,1,1.1,1.2]}]
logit = GridSearchCV(LogisticRegression(penalty="l1",solver="liblinear"), param,cv=10,n_jobs=-1)
logitOpt=logit.fit(X_train, Y_train) # GridSearchCV is itself an estimator
# optimal parameter
logitOpt.best_params_["C"]
tps1=(time.perf_counter()-tps0)
print("Logit time = %f, best-model CV error = %f, best parameter = %s" % (tps1,
1.-logitOpt.best_score_,logitOpt.best_params_))
# error on the test sample
1-logitOpt.score(X_test, Y_test)
# Prediction
y_chap = logitOpt.predict(X_test)
# confusion matrix
table=pd.crosstab(y_chap,Y_test)
print(table)
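# Hedged aside (not from the original notebook): the confusion matrix is asymmetric
# largely because the classes are imbalanced ("incLow" dominates "incHigh").
# Per-class recall, computed from `table`, makes this explicit; it assumes both
# classes appear among the predictions so the crosstab diagonal lines up.
recall_per_class = np.diag(table.values) / table.sum(axis=0).values
print(dict(zip(table.columns, recall_per_class)))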
# Coefficients
LogisticRegression(penalty="l1",C=logitOpt.best_params_['C'],
solver='liblinear').fit(X_train, Y_train).coef_
from sklearn.tree import DecisionTreeClassifier
tps0=time.perf_counter()
# Tuning the tree depth
param=[{"max_depth":list(range(2,10))}]
tree= GridSearchCV(DecisionTreeClassifier(),param,cv=10,n_jobs=-1)
treeOpt=tree.fit(X_train, Y_train)
# optimal parameter
tps1=(time.perf_counter()-tps0)
print("Tree time = %f, best-model CV error = %f, best parameter = %s" % (tps1,
1. - treeOpt.best_score_,treeOpt.best_params_))
# Estimate the prediction error
1-treeOpt.score(X_test,Y_test)
# prediction on the test sample
y_chap = treeOpt.predict(X_test)
# confusion matrix
table=pd.crosstab(y_chap,Y_test)
print(table)
treeOpt.best_params_['max_depth']
from sklearn.tree import export_graphviz
from io import StringIO  # sklearn.externals.six was removed from recent scikit-learn releases
import pydotplus
treeG=DecisionTreeClassifier(max_depth=treeOpt.best_params_['max_depth'])
treeG.fit(X_train,Y_train)
dot_data = StringIO()
export_graphviz(treeG, out_file=dot_data)
graph=pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png("treeOpt.png")
from IPython.display import Image
Image(filename='treeOpt.png')
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
# The neural network algorithm may require the explanatory variables to be
# standardized; this is done with the commands below
scaler = StandardScaler()
scaler.fit(X_train.astype(float))
Xnet_train = scaler.transform(X_train.astype(float))
# Same transformation applied to the test set
Xnet_test = scaler.transform(X_test.astype(float))
tps0=time.perf_counter()
param_grid=[{"hidden_layer_sizes":list([(4,),(5,),(6,)])}]
nnet= GridSearchCV(MLPClassifier(max_iter=500),param_grid,cv=10,n_jobs=-1)
nnetOpt=nnet.fit(Xnet_train, Y_train)
# optimal parameter
tps1=(time.perf_counter()-tps0)
print("Perceptron time = %f, best-model CV error = %f, best parameter = %s" % (tps1,
1. - nnetOpt.best_score_,nnetOpt.best_params_))
# Estimate the prediction error on the test set
1-nnetOpt.score(Xnet_test,Y_test)
# prediction on the test sample
y_chap = nnetOpt.predict(Xnet_test)
# confusion matrix
table=pd.crosstab(y_chap,Y_test)
print(table)
from sklearn.ensemble import RandomForestClassifier
# parameter definition
forest = RandomForestClassifier(n_estimators=500,
criterion='gini', max_depth=None,
min_samples_split=2, min_samples_leaf=1,
max_features='auto', max_leaf_nodes=None,
bootstrap=True, oob_score=True)
# training
rfFit = forest.fit(X_train,Y_train)
print(1-rfFit.oob_score_)
# prediction error on the test set
1-rfFit.score(X_test,Y_test)
# tuning the max_features parameter
tps0=time.perf_counter()
param=[{"max_features":list(range(2,10,1))}]
rf= GridSearchCV(RandomForestClassifier(n_estimators=100),param,cv=10,n_jobs=-1)
rfOpt=rf.fit(X_train, Y_train)
# optimal parameter
tps1=(time.perf_counter()-tps0)
print("Random forest time = %f, best-model CV error = %f, best parameter = %s" % (tps1,
1. - rfOpt.best_score_,rfOpt.best_params_))
# prediction error on the test set
1-rfOpt.score(X_test,Y_test)
# prediction
y_chap = rfFit.predict(X_test)
# confusion matrix
table=pd.crosstab(y_chap,Y_test)
print(table)
rf= RandomForestClassifier(n_estimators=100,max_features=6)
rfFit=rf.fit(X_train, Y_train)
# Variables in decreasing order of importance
importances = rfFit.feature_importances_
indices = np.argsort(importances)[::-1]
for f in range(20):
print(X_train.columns[indices[f]], importances[indices[f]])
# Importance plot
plt.figure()
plt.title("Variable importances")
plt.bar(range(X_train.shape[1]), importances[indices])
plt.xticks(range(X_train.shape[1]), indices)
plt.xlim([-1, X_train.shape[1]])
plt.show()
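# Hedged follow-up (illustrative, not from the original notebook): random forest
# importances sum to 1, so their cumulative sum shows how concentrated the signal is.
cum_imp = np.cumsum(importances[indices])
print("variables covering 90% of total importance:", np.searchsorted(cum_imp, 0.9) + 1)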
from sklearn.ensemble import GradientBoostingClassifier
tps0=time.perf_counter()
param=[{"n_estimators":[200, 250, 300]}]
gbm= GridSearchCV(GradientBoostingClassifier(),param,cv=10)
gbmOpt=gbm.fit(X_train, Y_train)
# optimal parameter
tps1=(time.perf_counter()-tps0)
print("Boosting time = %f, best-model CV error = %f, best parameter = %s" % (tps1,
1. - gbmOpt.best_score_,gbmOpt.best_params_))
# prediction error on the test set
1-gbmOpt.score(X_test,Y_test)
# prediction on the test sample
y_chap = gbmOpt.predict(X_test)
# confusion matrix
table=pd.crosstab(y_chap,Y_test)
print(table)
from sklearn.metrics import roc_curve
listMethod=[["GBM",gbmOpt],["RF",rfOpt],["NN",nnetOpt],["Tree",treeOpt],["Logit",logitOpt]]
# ROC curves of the optimal models above
for name, model in listMethod:
    probas_ = model.fit(Xnet_train, Y_train).predict_proba(Xnet_test)
    fpr, tpr, thresholds = roc_curve(Y_test, probas_[:,1], pos_label="incLow")
    plt.plot(fpr, tpr, lw=1, label=name)
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.legend(loc="best")
plt.show()
from sklearn.utils import check_random_state
tps0=time.perf_counter()
check_random_state(11)
# estimator definitions
logit= LogisticRegression(penalty="l1",solver="liblinear")
tree = DecisionTreeClassifier()
nnet = MLPClassifier(max_iter=400)
rf = RandomForestClassifier(n_estimators=200)
gbm = GradientBoostingClassifier()
# Number of iterations
B=10 # keep B=10 for a quick run; increase it for a more stable comparison
# parameter grids
listMethGrid=[[gbm,{"n_estimators":[200, 250, 300]}],
[rf,{"max_features":list(range(5,10))}],
[nnet,{"hidden_layer_sizes":list([(3,),(4,),(5,)])}],
[tree,{"max_depth":list(range(5,10))}],
[logit,{"C":[0.8,0.9,1,1.1,1.2]}]]
# Initialize to 0
arrayErreur=np.empty((B,5))
for i in range(B): # iterate over B random test samples
    # train / test split
X_train,X_test,Y_train,Y_test=train_test_split(X,Y,test_size=1000)
scaler = StandardScaler()
scaler.fit(X_train.astype(float))
Xnet_train = scaler.transform(X_train.astype(float))
    # Same transformation applied to the test set
Xnet_test = scaler.transform(X_test.astype(float))
    # tune each method and compute its error on the test set
for j,(method, grid_list) in enumerate(listMethGrid):
methodGrid=GridSearchCV(method,grid_list,cv=10,n_jobs=-1).fit(X_train, Y_train)
methodOpt = methodGrid.best_estimator_
methFit=methodOpt.fit(Xnet_train, Y_train)
arrayErreur[i,j]=1-methFit.score(Xnet_test,Y_test)
tps1=time.perf_counter()
dataframeErreur=pd.DataFrame(arrayErreur,columns=["GBM","RF","NN","Tree","Logit"])
print("Temps execution :",(tps1 - tps0))
dataframeErreur[["GBM","RF","NN","Tree","Logit"]].boxplot(return_type='dict')
plt.show()
# Means
dataframeErreur.mean()
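# Hedged follow-up (not in the original notebook): a paired t-test on the
# Monte Carlo errors indicates whether two methods differ significantly.
from scipy import stats
print(stats.ttest_rel(dataframeErreur["GBM"], dataframeErreur["RF"]))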
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Actions
Step2: To understand flatMap, I needed to use an action to convert the RDD to a list. The solution
Step3: parallelized collections
Step4: Learning Spark
|
<ASSISTANT_TASK:>
Python Code:
conf = (SparkConf().
setMaster("mesos://zk://10.132.126.37:2181/mesos").
setAppName("RY from jupyter").
set("spark.executor.uri", "http://apache.petsads.us/spark/spark-1.2.0/spark-1.2.0-bin-hadoop2.4.tgz").
set("spark.mesos.coarse", "true").
set("spark.mesos.executor.home", "/spark-1.2.0-bin-hadoop2.4").
set("spark.executor.extraLibraryPath", "/usr/lib/hadoop/lib/native").
set("spark.executor.extraJavaOptions","-XX:-UseConcMarkSweepGC").
set("spark.mesos.native.library", "/usr/local/lib/libmesos.so").
set("spark.local.ip", "10.9.8.6").
set("spark.driver.host","10.9.8.6"))
# for spark local
# https://spark.apache.org/docs/latest/quick-start.html
sc = SparkContext()
sc.version
# load the README.md
textFile = sc.textFile("/spark/README.md")
# action
textFile.count()
# confirm the answer using the standard library
len(open("/spark/README.md").read().strip().split("\n"))
# first line: action
textFile.first()
# transformation
textFile.filter(lambda x: "Spark" in x)
# how many lines with "Spark"
textFile.filter(lambda x: "Spark" in x).count()
# number of words in the line with most words
textFile.map(lambda line: len(line.split())).reduce(lambda a, b: a if (a > b) else b)
textFile.map(lambda line: len(line.split())).reduce(max)
# compute the number of words in the line with most words in another way
max(map(lambda line: len(line.split()), open("/spark/README.md").read().strip().split("\n")))
# flatmap
textFile.flatMap(lambda line: line.split()).take(5)
# a map/reduce workflow
wordCounts = textFile.flatMap(lambda line: line.split()).map(lambda word: (word, 1)).reduceByKey(lambda a, b: a+b)
wordCounts.takeSample(False,5)
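# Hedged follow-up (assumes the same wordCounts RDD as above): the classic next step
# is sorting by count; swap (word, count) -> (count, word) so sortByKey orders on counts.
wordCounts.map(lambda wc: (wc[1], wc[0])).sortByKey(False).take(5)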
nums = sc.parallelize([1, 2, 3, 4])
squared = nums.map(lambda x: x * x).collect()
for num in squared:
print ("%i " % (num))
s3file = sc.textFile("s3n://AKIAI3ZHCGO3UMYFXWFA:w0ALUVQ3p6bqmMYytMn1w93fL5JlSLNK5IDKjHRv@aws-publicdatasets/common-crawl/crawl-data/CC-MAIN-2014-52/segment.paths.gz")
s3file.count()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'sandbox-3', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 0.0 Basic parameters
Step4: 1.0 Load in vertex-wise betas across all miniblocks for all brain regions
Step5: 2.0 - Estimate information estimates for all regions for all 3 rule domains
Step6: Save CSVs for baseline leave-4-out CV on information transfer estimate
Step7: Run statistics (t-tests and multiple comparisons correction with FDR)
Step8: Map statistics and results to surface using workbench
Step9: Run FWE Correction using permutation testing
Step10: Write out significant ROIs
Step11: Map out FWE-corrected statistics/results to surface using workbench
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('utils/')
import numpy as np
import loadGlasser as lg
import scripts3_functions as func
import scipy.stats as stats
from IPython.display import display, HTML
import matplotlib.pyplot as plt
import statsmodels.sandbox.stats.multicomp as mc
import statsmodels.api as sm
import sys
import multiprocessing as mp
import pandas as pd
import multregressionconnectivity as mreg
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import nibabel as nib
import os
os.environ['OMP_NUM_THREADS'] = str(1)
import permutationTesting as pt
# Set basic parameters
basedir = '/projects2/ModalityControl2/'
datadir = basedir + 'data/'
resultsdir = datadir + 'resultsMaster/'
runLength = 4648
subjNums = ['032', '033', '037', '038', '039', '045',
'013', '014', '016', '017', '018', '021',
'023', '024', '025', '026', '027', '031',
'035', '046', '042', '028', '048', '053',
'040', '049', '057', '062', '050', '030', '047', '034']
# Organized as a 64k vector
glasserparcels = lg.loadGlasserParcels()
nParcels = 360
# Load in Glasser parcels in their native format
# Note that this parcel file is actually flipped (across hemispheres), but it doesn't matter since we're using the same exact file to reconstruct the data
glasser2 = nib.load('/projects/AnalysisTools/ParcelsGlasser2016/archive/Q1-Q6_RelatedParcellation210.LR.CorticalAreas_dil_Colors.32k_fs_LR.dlabel.nii')
glasser2 = glasser2.get_data()
glasser2 = np.squeeze(glasser2)
def loadBetas(subj):
datadir = '/projects2/ModalityControl2/data/resultsMaster/glmMiniblockBetaSeries/'
filename = subj + '_miniblock_taskBetas_Surface64k.csv'
betas = np.loadtxt(datadir + filename, delimiter=',')
betas = betas[:,17:].T
return betas
def setUpRSAMatrix(subj,ruledim):
    '''Sets up basic SVM matrix for a classification of a particular rule dimension and network.'''
betas = loadBetas(subj)
rules, rulesmb = func.importRuleTimingsV3(subj,ruledim)
svm_mat = np.zeros((betas.shape))
samplecount = 0
labels = []
for rule in rulesmb:
rule_ind = rulesmb[rule].keys()
sampleend = samplecount + len(rule_ind)
svm_mat[samplecount:sampleend,:] = betas[rule_ind,:]
labels.extend(np.ones(len(rule_ind),)*rule)
samplecount += len(rule_ind)
labels = np.asarray(labels)
svm_dict = {}
nParcels = 360
for roi in range(1,nParcels+1):
roi_ind = np.where(glasserparcels==roi)[0]
svm_dict[roi] = svm_mat[:,roi_ind]
return svm_dict, labels
def rsaCV(svm_mat,labels, subj):
    '''Runs a leave-4-out CV for a 4-way classification.'''
cvfolds = []
# 32 folds, if we do a leave 4 out for 128 total miniblocks
# Want to leave a single block from each rule from each CV
for rule in np.unique(labels):
cvfolds.append(np.where(labels==rule)[0])
cvfolds = np.asarray(cvfolds)
# Number of CVs is columns
ncvs = cvfolds.shape[1]
nrules = cvfolds.shape[0]
# For each CV fold, make sure the fold is constructed randomly
for i in range(nrules): np.random.shuffle(cvfolds[i,:])
corr_rho_cvs = []
err_rho_cvs = []
acc_ind = []
infoEstimate = []
for cv in range(ncvs):
# Select a test set from the CV Fold matrix
test_ind = cvfolds[:,cv].copy()
        # The accuracy array should be the same as test_ind
acc_ind.extend(cvfolds[:,cv].copy())
# Delete the CV included from the train set
train_ind = np.delete(cvfolds,cv,axis=1)
# Identify the train and test sets
svm_train = svm_mat[np.reshape(train_ind,-1),:]
svm_test = svm_mat[test_ind,:]
prototype = {}
# Construct RSA prototypes
for rule in range(nrules):
prototype_ind = np.reshape(train_ind[rule,:],-1)
prototype[rule] = np.mean(svm_mat[prototype_ind],axis=0)
corr_rho = []
err_rho = []
for rule1 in range(nrules):
tmp = []
for rule2 in range(nrules):
r = np.arctanh(stats.spearmanr(prototype[rule1],svm_test[rule2,:])[0])
if rule1==rule2:
corr_rho.append(r)
else:
tmp.append(r)
err_rho.append(np.mean(tmp))
corr_rho_cvs.append(np.mean(corr_rho))
err_rho_cvs.append(np.mean(err_rho))
# Compute miniblock-wise information estimate
for i in range(len(corr_rho)):
infoEstimate.append(corr_rho[i] - err_rho[i])
    # independent var (constant term + information estimate); kept from an
    # earlier regression step and unused by the return statement below
    infoEstimate = np.asarray(infoEstimate)
    ind_var = np.vstack((np.ones((len(infoEstimate),)),infoEstimate))
    ind_var = ind_var.T
return np.mean(corr_rho_cvs), np.mean(err_rho_cvs), np.mean(infoEstimate)
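# Minimal sanity check (illustrative sketch, not part of the analysis): with
# synthetic rule patterns plus noise, the information estimate returned by rsaCV
# (correct-rule minus wrong-rule similarity) should come out clearly positive.
np.random.seed(0)
fake_patterns = np.random.rand(4, 50)                 # one 50-feature pattern per rule
fake_labels = np.repeat(np.arange(4), 8)              # 32 miniblocks, 8 per rule
fake_mat = fake_patterns[fake_labels] + np.random.normal(scale=0.1, size=(32, 50))
print(rsaCV(fake_mat, fake_labels, 'fake'))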
def subjRSACV((subj,ruledim,behav)):
svm_dict, labels = setUpRSAMatrix(subj,ruledim)
corr_rhos = {}
err_rhos = {}
infoEstimate = {}
for roi in svm_dict:
svm_mat = svm_dict[roi].copy()
# Demean each sample
svmmean = np.mean(svm_mat,axis=1)
svmmean.shape = (len(svmmean),1)
svm_mat = svm_mat - svmmean
# svm_mat = preprocessing.scale(svm_mat,axis=0)
corr_rhos[roi], err_rhos[roi], infoEstimate[roi] = rsaCV(svm_mat, labels, subj)
return corr_rhos, err_rhos, infoEstimate
ruledims = ['logic','sensory','motor']
behav='acc'
corr_rhos = {}
err_rhos = {}
diff_rhos = {}
for ruledim in ruledims:
corr_rhos[ruledim] = {}
err_rhos[ruledim] = {}
diff_rhos[ruledim] = {}
print 'Running', ruledim
inputs = []
for subj in subjNums: inputs.append((subj,ruledim,behav))
# pool = mp.Pool(processes=8)
pool = mp.Pool(processes=16)
results = pool.map_async(subjRSACV,inputs).get()
pool.close()
pool.join()
# Reorganize results
corr_rhos[ruledim] = np.zeros((nParcels,len(subjNums)))
err_rhos[ruledim] = np.zeros((nParcels,len(subjNums)))
diff_rhos[ruledim] = np.zeros((nParcels,len(subjNums)))
scount = 0
for result in results:
for roi in range(nParcels):
corr_rhos[ruledim][roi,scount] = result[0][roi+1]
err_rhos[ruledim][roi,scount] = result[1][roi+1]
diff_rhos[ruledim][roi,scount] = result[2][roi+1]
scount += 1
outdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript5_BaselineRegionIE/'
for ruledim in ruledims:
filename = 'Regionwise_InformationEstimate_LeaveOneOut_' + ruledim + '.csv'
np.savetxt(outdir + filename, diff_rhos[ruledim], delimiter=',')
df_stats = {}
# Output to CSV matrix
sig_t = np.zeros((nParcels,len(ruledims)))
sig_effect = np.zeros((nParcels,len(ruledims)))
effectsize = {}
rulecount = 0
for ruledim in ruledims:
df_stats[ruledim] = {}
df_stats[ruledim]['t'] = np.zeros((nParcels,))
df_stats[ruledim]['p'] = np.zeros((nParcels,))
effectsize[ruledim] = np.zeros((nParcels,))
for roi in range(nParcels):
t, p = stats.ttest_1samp(diff_rhos[ruledim][roi,:], 0)
# t, p = stats.ttest_rel(corr_rhos[ruledim][roi,:], err_rhos[ruledim][roi,:])
effectsize[ruledim][roi] = np.mean(diff_rhos[ruledim][roi,:])
if t > 0:
p = p/2.0
else:
p = 1.0 - p/2.0
df_stats[ruledim]['t'][roi] = t
df_stats[ruledim]['p'][roi] = p
arr = df_stats[ruledim]['p']
df_stats[ruledim]['q'] = mc.fdrcorrection0(arr)[1]
qbin = df_stats[ruledim]['q'] < 0.05
sig_t[:,rulecount] = np.multiply(df_stats[ruledim]['t'],qbin)
sig_effect[:,rulecount] = np.multiply(effectsize[ruledim],qbin)
rulecount += 1
sig_t_vertex = np.zeros((len(glasser2),len(ruledims)))
effects_vertex = np.zeros((len(glasser2),len(ruledims)))
effects_vertex_sig = np.zeros((len(glasser2),len(ruledims)))
col = 0
for cols in range(sig_t_vertex.shape[1]):
for roi in range(nParcels):
parcel_ind = np.where(glasser2==(roi+1))[0]
sig_t_vertex[parcel_ind,col] = sig_t[roi,col]
effects_vertex[parcel_ind,col] = effectsize[ruledims[col]][roi]
effects_vertex_sig[parcel_ind,col] = sig_effect[roi,col]
col += 1
# Write file to csv and run wb_command
outdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript5_BaselineRegionIE/'
filename = 'RegionwiseIE_FDRThresholded_Tstat.csv'
np.savetxt(outdir + filename, sig_t_vertex,fmt='%s')
wb_file = 'RegionwiseIE_FDRThresholded_Tstat.dscalar.nii'
glasserfilename = '/projects/AnalysisTools/ParcelsGlasser2016/archive/Q1-Q6_RelatedParcellation210.LR.CorticalAreas_dil_Colors.32k_fs_LR.dlabel.nii'
wb_command = 'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'
os.system(wb_command)
# Compute effect size baseline (information content)
outdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript5_BaselineRegionIE/'
filename = 'RegionwiseIE_InformationEstimate.csv'
np.savetxt(outdir + filename, effects_vertex,fmt='%s')
wb_file = 'RegionwiseIE_InformationEstimate.dscalar.nii'
wb_command = 'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'
os.system(wb_command)
# Compute Thresholded effect size baseline (information content)
outdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript5_BaselineRegionIE/'
filename = 'RegionwiseIE_FDRThresholded_InformationEstimate.csv'
np.savetxt(outdir + filename, effects_vertex_sig,fmt='%s')
wb_file = 'RegionwiseIE_FDRThresholded_InformationEstimate.dscalar.nii'
wb_command = 'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'
os.system(wb_command)
pt = reload(pt)
fwe_Ts = np.zeros((nParcels,len(ruledims)))
fwe_Ps = np.zeros((nParcels,len(ruledims)))
rulecount = 0
for ruledim in ruledims:
t, p = pt.permutationFWE(diff_rhos[ruledim], nullmean=0, permutations=10000, nproc=15)
# t, p = pt.permutationFWE(corr_rhos[ruledim] - err_rhos[ruledim],
# nullmean=0, permutations=1000, nproc=15)
fwe_Ts[:,rulecount] = t
fwe_Ps[:,rulecount] = p
rulecount += 1
# Compare t-values from permutation function and above
rulecount = 0
for ruledim in ruledims:
if np.sum(df_stats[ruledim]['t']==fwe_Ts[:,rulecount])==360:
print 'Correct t-values match up'
else:
print 'Error! Likely a bug in the code'
rulecount += 1
fwe_Ps2 = (1.0000 - fwe_Ps) # One-tailed test on upper tail
sig_mat = fwe_Ps2 < 0.0500 # One-sided t-test (Only interested in values greater than 95% interval)
outdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript5_BaselineRegionIE/'
filename = 'FWE_corrected_pvals_allruledims.csv'
np.savetxt(outdir + filename, fwe_Ps, delimiter=',')
sig_t = np.zeros((nParcels,len(ruledims)))
sig_effect = np.zeros((nParcels,len(ruledims)))
rulecount = 0
for ruledim in ruledims:
sig_t[:,rulecount] = np.multiply(fwe_Ts[:,rulecount],sig_mat[:,rulecount])
sig_effect[:,rulecount] = np.multiply(effectsize[ruledim],sig_mat[:,rulecount])
# Read out statistics for manuscript
# Identify significant regions
sig_ind = sig_mat[:,rulecount] == True
nonsig_ind = sig_mat[:,rulecount] == False
print 'Average significant IE for', ruledim, ':', np.mean(effectsize[ruledim][sig_ind])
print 'Average significant T-stats for', ruledim, ':', np.mean(fwe_Ts[:,rulecount][sig_ind])
print 'Maximum significant p-value for', ruledim, ':', np.max(fwe_Ps2[:,rulecount][sig_ind])
print '----'
print 'Average nonsignificant IE for', ruledim, ':', np.mean(effectsize[ruledim][nonsig_ind])
print 'Average nonsignificant T-stats for', ruledim, ':', np.mean(fwe_Ts[:,rulecount][nonsig_ind])
print 'Minimum nonsignificant p-value for', ruledim, ':', np.min(fwe_Ps2[:,rulecount][nonsig_ind])
print '\n'
print '*****************'
rulecount += 1
sig_t_vertex = np.zeros((len(glasser2),len(ruledims)))
effects_vertex = np.zeros((len(glasser2),len(ruledims)))
effects_vertex_sig = np.zeros((len(glasser2),len(ruledims)))
col = 0
for cols in range(sig_t_vertex.shape[1]):
for roi in range(nParcels):
parcel_ind = np.where(glasser2==(roi+1))[0]
sig_t_vertex[parcel_ind,col] = sig_t[roi,col]
effects_vertex_sig[parcel_ind,col] = sig_effect[roi,col]
col += 1
# Write file to csv and run wb_command
outdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript5_BaselineRegionIE/'
filename = 'RegionwiseIE_FWERThresholded_Tstat.csv'
np.savetxt(outdir + filename, sig_t_vertex,fmt='%s')
wb_file = 'RegionwiseIE_FWERThresholded_Tstat.dscalar.nii'
glasserfilename = '/projects/AnalysisTools/ParcelsGlasser2016/archive/Q1-Q6_RelatedParcellation210.LR.CorticalAreas_dil_Colors.32k_fs_LR.dlabel.nii'
wb_command = 'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'
os.system(wb_command)
# # Compute effect size baseline (information content)
# outdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript5_BaselineRegionIE/'
# filename = 'RegionwiseIE_InformationEstimate.csv'
# np.savetxt(outdir + filename, effects_vertex,fmt='%s')
# wb_file = 'RegionwiseIE_InformationEstimate.dscalar.nii'
# wb_command = 'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'
# os.system(wb_command)
# Compute Thresholded effect size baseline (information content)
outdir = '/projects2/ModalityControl2/data/resultsMaster/Manuscript5_BaselineRegionIE/'
filename = 'RegionwiseIE_FWERThresholded_InformationEstimate.csv'
np.savetxt(outdir + filename, effects_vertex_sig,fmt='%s')
wb_file = 'RegionwiseIE_FWERThresholded_InformationEstimate.dscalar.nii'
wb_command = 'wb_command -cifti-convert -from-text ' + outdir + filename + ' ' + glasserfilename + ' ' + outdir + wb_file + ' -reset-scalars'
os.system(wb_command)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array(
[[[ 0, 1, 2, 3],
[ 2, 3, 4, 5],
[ 4, 5, 6, 7]],
[[ 6, 7, 8, 9],
[ 8, 9, 10, 11],
[10, 11, 12, 13]],
[[12, 13, 14, 15],
[14, 15, 16, 17],
[16, 17, 18, 19]]]
)
b = np.array(
[[0, 1, 2],
[2, 1, 3],
[1, 0, 3]]
)
result = np.take_along_axis(a, b[..., np.newaxis], axis=-1)[..., 0]
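# Equivalent fancy-indexing form, shown for comparison: result[i, j] == a[i, j, b[i, j]].
i, j = np.indices(b.shape)
assert np.array_equal(result, a[i, j, b])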
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reset TensorFlow Graph
Step2: Create TensorFlow Session
Step3: Load Model Training and Test/Validation Data
Step4: Randomly Initialize Variables (Weights and Bias)
Step5: View Accuracy of Pre-Training, Initial Random Variables
Step6: Setup Loss Summary Operations for Tensorboard
Step7: Train Model
Step8: View loss in Tensorboard
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
from tensorflow.python.client import timeline
import pylab
import numpy as np
import os
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
tf.logging.set_verbosity(tf.logging.INFO)
tf.reset_default_graph()
sess = tf.Session()
print(sess)
from datetime import datetime
version = int(datetime.now().strftime("%s"))
num_samples = 100000
x_train = np.random.rand(num_samples).astype(np.float32)
print(x_train)
noise = np.random.normal(scale=0.01, size=len(x_train))
y_train = x_train * 0.1 + 0.3 + noise
print(y_train)
pylab.plot(x_train, y_train, '.')
x_test = np.random.rand(len(x_train)).astype(np.float32)
print(x_test)
noise = np.random.normal(scale=.01, size=len(x_train))
y_test = x_test * 0.1 + 0.3 + noise
print(y_test)
pylab.plot(x_test, y_test, '.')
with tf.device("/cpu:0"):
W = tf.get_variable(shape=[], name='weights')
print(W)
b = tf.get_variable(shape=[], name='bias')
print(b)
x_observed = tf.placeholder(shape=[None],
dtype=tf.float32,
name='x_observed')
print(x_observed)
y_pred = W * x_observed + b
print(y_pred)
learning_rate = 0.025
with tf.device("/cpu:0"):
y_observed = tf.placeholder(shape=[None], dtype=tf.float32, name='y_observed')
print(y_observed)
loss_op = tf.reduce_mean(tf.square(y_pred - y_observed))
optimizer_op = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer_op.minimize(loss_op)
print("Loss Scalar: ", loss_op)
print("Optimizer Op: ", optimizer_op)
print("Train Op: ", train_op)
with tf.device("/cpu:0"):
init_op = tf.global_variables_initializer()
print(init_op)
sess.run(init_op)
print("Initial random W: %f" % sess.run(W))
print("Initial random b: %f" % sess.run(b))
def test(x, y):
return sess.run(loss_op, feed_dict={x_observed: x, y_observed: y})
test(x_train, y_train)
loss_summary_scalar_op = tf.summary.scalar('loss', loss_op)
loss_summary_merge_all_op = tf.summary.merge_all()
train_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/cpu/%s/train' % version,
graph=tf.get_default_graph())
test_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/cpu/%s/test' % version,
graph=tf.get_default_graph())
%%time
with tf.device("/cpu:0"):
run_metadata = tf.RunMetadata()
max_steps = 401
for step in range(max_steps):
if (step < max_steps - 1):
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train})
else:
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train},
options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE),
run_metadata=run_metadata)
trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline-cpu.json', 'w') as trace_file:
trace_file.write(trace.generate_chrome_trace_format(show_memory=True))
if step % 10 == 0:
print(step, sess.run([W, b]))
train_summary_writer.add_summary(train_summary_log, step)
train_summary_writer.flush()
test_summary_writer.add_summary(test_summary_log, step)
test_summary_writer.flush()
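# To view the loss curves written above, TensorBoard is typically launched against
# the summary directories used by the FileWriters (paths assumed from the cells above):
#   tensorboard --logdir=/root/tensorboard/linear/cpu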
pylab.plot(x_train, y_train, '.', label="target")
pylab.plot(x_train, sess.run(y_pred,
feed_dict={x_observed: x_train,
y_observed: y_train}),
".",
label="predicted")
pylab.legend()
pylab.ylim(0, 1.0)
import os
optimize_me_parent_path = '/root/models/optimize_me/linear/cpu'
saver = tf.train.Saver()
os.system('rm -rf %s' % optimize_me_parent_path)
os.makedirs(optimize_me_parent_path)
unoptimized_model_graph_path = '%s/unoptimized_cpu.pb' % optimize_me_parent_path
print(unoptimized_model_graph_path)
tf.train.write_graph(sess.graph_def,
'.',
unoptimized_model_graph_path,
as_text=False)
model_checkpoint_path = '%s/model.ckpt' % optimize_me_parent_path
saver.save(sess,
save_path=model_checkpoint_path)
# saver.export_meta_graph(
# filename=model_checkpoint_path,
# clear_devices=True
# )
print(model_checkpoint_path)
print(optimize_me_parent_path)
os.listdir(optimize_me_parent_path)
sess.close()
%%bash
summarize_graph --in_graph=/root/models/optimize_me/linear/cpu/unoptimized_cpu.pb
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import re
from google.protobuf import text_format
from tensorflow.core.framework import graph_pb2
def convert_graph_to_dot(input_graph, output_dot, is_input_graph_binary):
graph = graph_pb2.GraphDef()
with open(input_graph, "rb") as fh:
if is_input_graph_binary:
graph.ParseFromString(fh.read())
else:
text_format.Merge(fh.read(), graph)
with open(output_dot, "wt") as fh:
print("digraph graphname {", file=fh)
for node in graph.node:
output_name = node.name
print(" \"" + output_name + "\" [label=\"" + node.op + "\"];", file=fh)
for input_full_name in node.input:
parts = input_full_name.split(":")
input_name = re.sub(r"^\^", "", parts[0])
print(" \"" + input_name + "\" -> \"" + output_name + "\";", file=fh)
print("}", file=fh)
print("Created dot file '%s' for graph '%s'." % (output_dot, input_graph))
input_graph='/root/models/optimize_me/linear/cpu/unoptimized_cpu.pb'
output_dot='/root/notebooks/unoptimized_cpu.dot'
convert_graph_to_dot(input_graph=input_graph, output_dot=output_dot, is_input_graph_binary=True)
%%bash
dot -T png /root/notebooks/unoptimized_cpu.dot \
-o /root/notebooks/unoptimized_cpu.png > /tmp/a.out
from IPython.display import Image
Image('/root/notebooks/unoptimized_cpu.png', width=1024, height=768)
class PioBundle(object):
def __init__(self,
model):
self.model = model
def predict(self,
request):
return self.model.predict(request)
def transform_request(self,
request):
import tensorflow as tf
import json
import numpy as np
request_str = request.decode('utf-8')
request_json = json.loads(request_str)
request_np = np.asarray([request_json['x_observed']])
return request_np
def transform_response(self,
response):
import json
return json.dumps({"y_pred": response.tolist()[0]})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NumPy
Step2: Requests
Step3: Pandas (optional)
Step4: SciPy (optional)
Step5: 2) Import scidbpy
Step6: Connect to the database server
Step7: 3) Read the file listing each of the waveforms
Step8: Strip special characters from it
Step9: 4) Import WFDB to connect to PhysioNet
Step10: Find the position of the type II signal
Step11: Normalize the signal and remove its null values
Step12: Replace the hyphens "-" with underscores "_" because, for some reason, SciDB has trouble with those characters
Step13: List the arrays currently stored in SciDB
Step14: Fetch the array s00515_3148_06_10_10_44 from SciDB
Step15: Print the array's contents
Step16: Read the contents of the file holding the waveforms already processed
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.version
sys.version_info
import numpy as np
np.__version__
import requests
requests.__version__
import pandas as pd
pd.__version__
import scipy
scipy.__version__
import scidbpy
scidbpy.__version__
from scidbpy import connect
sdb = connect('http://localhost:8080')
import urllib.request # the Python 3 counterpart of urllib2; handles URL requests
target_url = "https://www.physionet.org/physiobank/database/mimic2wdb/matched/RECORDS-waveforms"
data = urllib.request.urlopen(target_url) # a file-like object that works just like a file
lines = data.readlines()
# decode the bytes record directly instead of string-stripping its repr
line = lines[0].decode('utf-8').strip()
carpeta, onda = line.split("/")
onda
import wfdb
import numpy  # used below for isnan and trim_zeros
sig, fields = wfdb.srdsamp(onda,pbdir='mimic3wdb/matched/'+carpeta) #, sampfrom=11000
print(sig)
print("signame: " + str(fields['signame']))
print("units: " + str(fields['units']))
print("fs: " + str(fields['fs']))
print("comments: " + str(fields['comments']))
print("fields: " + str(fields))
signalII = None
try:
signalII = fields['signame'].index("II")
except ValueError:
print("List does not contain value")
if signalII is not None:
    print("List contains value")
array = wfdb.processing.normalize(x=sig[:, signalII], lb=-2, ub=2)
arrayNun = array[~numpy.isnan(array)]
arrayNun = numpy.trim_zeros(arrayNun)
arrayNun
ondaName = onda.replace("-", "_")
if arrayNun.size>0 :
sdb.input(upload_data=array).store(ondaName,gc=False)
# sdb.iquery("store(input(<x:int64>[i], '{fn}', 0, '{fmt}'), "+ondaName+")", upload_data=array)
dir(sdb.arrays)
myArray = sdb.iquery('scan(s00600_3152_10_19_18_13)', fetch=True, as_dataframe=True)
import matplotlib.pyplot as plt
simpleArray = myArray["x"];
simpleArray.plot()
plt.show()
simpleArray
file = open("readedWaves.txt","r")
arrays = dir(sdb.arrays)
filelines = file.readlines()
arrays.extend([w.replace('\n','') for w in filelines])
#type(arrays)
arrays
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What We Expect Our Simulation Data Will Look Like
Step2: Why Our Simulation is Correct
Step3: Simulation Analysis
Step4: Algorithm Code
Step5: Easy Simulation Analysis
Step6: Easy Simulation Results
Step7: Performance Metrics
Step8: Quantify Performance for Easy Simulation
Step9: As shown above, our connectLib pipeline worked extremely well on the easy simulation. The small differences between the actual and expected values come from the generated synapse point sets. Foreground synapses can potentially be adjacent to each other in the test volume. Connected Components will label the multiple, connected synapses as one cluster, which explains the cluster volumes at roughly 56 (2 synapses) and 81 (3 synapses) [See Histogram in Easy Simulation Results].
Step10: Difficult Simulation Results
Step11: Performance Metrics
Step12: As predicted, the foreground and background were combined into one cluster through the connectLib Pipeline (see Results). This large cluster does not coregister with any of the original foreground clusters. Clearly, our pipeline performed very poorly on the difficult simulation, as zero clusters were actually detected. This ultimately supports our earlier thesis that the connectLib pipeline is dependent on the foreground and background voxels having significantly different intensities.
Step13: Plotting Expected and Average Cluster Volumes for each easy simulation.
Step14: Plotting Expected and Average Cluster Volumes for each difficult simulation.
Step15: Summary of Simulation Analysis
Step16: Predicting Performance
Step17: Results
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.insert(0,'../code/functions/')
from random import randrange as rand
from skimage.measure import label
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import pickle
def generatePointSet():
center = (rand(0, 99), rand(0, 99), rand(0, 99))
toPopulate = []
for z in range(-1, 2):
for y in range(-1, 2):
for x in range(-1, 2):
curPoint = (center[0]+z, center[1]+y, center[2]+x)
#only populate valid points
valid = True
for dim in range(3):
if curPoint[dim] < 0 or curPoint[dim] >= 100:
valid = False
if valid:
toPopulate.append(curPoint)
return set(toPopulate)
def generateTestVolume():
#create a test volume
volume = np.zeros((100, 100, 100))
myPointSet = set()
for _ in range(rand(500, 800)):
potentialPointSet = generatePointSet()
#be sure there is no overlap
while len(myPointSet.intersection(potentialPointSet)) > 0:
potentialPointSet = generatePointSet()
for elem in potentialPointSet:
myPointSet.add(elem)
#populate the true volume
for elem in myPointSet:
volume[elem[0], elem[1], elem[2]] = 60000
#introduce noise
noiseVolume = np.copy(volume)
for z in range(noiseVolume.shape[0]):
for y in range(noiseVolume.shape[1]):
for x in range(noiseVolume.shape[2]):
if not (z, y, x) in myPointSet:
noiseVolume[z][y][x] = rand(0, 10000)
return volume, noiseVolume
randIm = generateTestVolume()
foreground = randIm[0]
combinedIm = randIm[1]
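# Illustrative sanity check (not in the original analysis): each synapse is at most
# a 3x3x3 block, so the total foreground volume should fall roughly within
# [500*27, 800*27]; edge clipping can only shrink it.
print(np.count_nonzero(foreground))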
#displaying the random clusters
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
z, y, x = foreground.nonzero()
ax.scatter(x, y, z, zdir='z', c='r')
plt.title('Random Foreground')
plt.show()
#displaying the noise
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
z, y, x = combinedIm.nonzero()
ax.scatter(x, y, z, zdir='z', c='r')
plt.title('Random Noise + Foreground')
plt.show()
def generateDifficultTestVolume():
#create a test volume
volume = np.zeros((100, 100, 100))
myPointSet = set()
for _ in range(rand(500, 800)):
potentialPointSet = generatePointSet()
#be sure there is no overlap
while len(myPointSet.intersection(potentialPointSet)) > 0:
potentialPointSet = generatePointSet()
for elem in potentialPointSet:
myPointSet.add(elem)
#populate the true volume
for elem in myPointSet:
volume[elem[0], elem[1], elem[2]] = 60000
#introduce noise
noiseVolume = np.copy(volume)
for z in range(noiseVolume.shape[0]):
for y in range(noiseVolume.shape[1]):
for x in range(noiseVolume.shape[2]):
if not (z, y, x) in myPointSet:
noiseVolume[z][y][x] = 60000
return volume, noiseVolume
randImHard = generateDifficultTestVolume()
foregroundHard = randImHard[0]
combinedImHard = randImHard[1]
#displaying the random clusters
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
z, y, x = foregroundHard.nonzero()
ax.scatter(x, y, z, zdir='z', c='r')
plt.title('Random Foreground')
plt.show()
#displaying the noise
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
z, y, x = combinedImHard.nonzero()
ax.scatter(x, y, z, zdir='z', c='r')
plt.title('Random Noise + Foreground')
plt.show()
####Pseudocode: Will not run!####
#Step 1 Otsu's Binarization to threshold out background noise intensity to 0.
for(each 2D image slice in 3D plos_image):
threshold_otsu on slice #uses Otsu's Binarization to threshold background noise to 0.
return thresholded_image
#Step 2 Cluster foreground using connected components
connected_components on thresholded_image #labels and clusters 'connected' regions in foreground
for(each labeled region):
MAKE Cluster object #instance that contains voxel members that made up labeled region
plos_ClusterList.append(Cluster) #list of synapse/foreground clusters
return plos_ClusterList
#Step 3 Use Naive Fencing (IQR Range Rule) to remove large background cluster that formed
IQR = getIQR(plos_ClusterList.getVolumes()) #calculate IQR of Cluster volumes
UpperOutlierFence = 75thpercentile(plos_ClusterList.getVolumes()) + 1.5*IQR #get upper volume threshold (third quartile + 1.5*IQR)
for (Cluster in plos_ClusterList):
if (Cluster.getVolume() > UpperOutlierFence) #if volume is considered an upper outlier, remove it
plos_ClusterList.remove(Cluster)
#Step 4 Coregister Degraded clusters found above with Raw clusters
threshold_otsu on raw_image #Thresholds raw image background
rawClusterList = connected_components on thresholded_raw_image #Clusters raw image
for raw_cluster in rawClusterList:
for plos_cluster in plos_ClusterList:
if plos_cluster in raw_cluster: #if degraded cluster is contained in the raw cluster
actualClusterList.append(raw_cluster) #add raw cluster to actual Cluster list.
return actualClusterList
from skimage.filters import threshold_otsu
from skimage.measure import label
from cluster import Cluster
import numpy as np
import cv2
import plosLib as pLib
### Step 1: Threshold the image using Otsu Binarization
def otsuVox(argVox):
probVox = np.nan_to_num(argVox)
bianVox = np.zeros_like(probVox)
for zIndex, curSlice in enumerate(probVox):
#if the array contains all the same values
if np.max(curSlice) == np.min(curSlice):
#otsu thresh will fail here, leave bianVox as all 0's
continue
thresh = threshold_otsu(curSlice)
bianVox[zIndex] = curSlice > thresh
return bianVox
### Step 2: Cluster foreground using Connected Components
def connectedComponents(voxel):
labelMap = label(voxel)
clusterList = []
#plus 1 since max label should be included
for uniqueLabel in range(0, np.max(labelMap)+1):
memberList = [list(elem) for elem in zip(*np.where(labelMap == uniqueLabel))]
if not len(memberList) == 0:
clusterList.append(Cluster(memberList))
return clusterList
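# Illustrative behaviour of skimage's label() on a toy volume (assumption: 3-D input, default connectivity):
# two isolated foreground voxels receive labels 1 and 2, while the background keeps label 0,
# which is why the background later shows up as one very large cluster that Step 3 removes.
_toy = np.zeros((3, 3, 3), dtype=int)
_toy[0, 0, 0] = 1
_toy[2, 2, 2] = 1
print(np.unique(label(_toy))) # expected: [0 1 2]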
### Step 3: Remove outlier clusters using IQR Rule
def thresholdByVolumePercentile(clusterList):
#putting the plosPipeline clusters volumes in a list
plosClusterVolList =[]
for cluster in (range(len(clusterList))):
plosClusterVolList.append(clusterList[cluster].getVolume())
#finding the upper outlier fence
Q3 = np.percentile(plosClusterVolList, 75)
Q1 = np.percentile(plosClusterVolList, 25)
IQR = Q3 - Q1
upperThreshFence = Q3 + 1.5*IQR
#filtering out the background cluster
upperThreshClusterList = []
for cluster in (range(len(clusterList))):
if clusterList[cluster].getVolume() < upperThreshFence:
upperThreshClusterList.append(clusterList[cluster])
return upperThreshClusterList
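# Worked example of the fence (illustrative): for volumes [1, 2, 3, 4, 100],
# Q1 = 2, Q3 = 4, IQR = 2, so the upper fence is 4 + 1.5*2 = 7 and the
# 100-voxel "background" cluster would be discarded while the small clusters survive.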
### Step 4: Coregister clusters with raw data.
def clusterCoregister(plosClusterList, rawClusterList):
#creating a list of all the member indices of the plos cluster list
plosClusterMemberList = []
for cluster in range(len(plosClusterList)):
plosClusterMemberList.extend(plosClusterList[cluster].members)
#creating a list of all the clusters without any decay
finalClusterList =[]
for rawCluster in range(len(rawClusterList)):
for index in range(len(plosClusterMemberList)):
if ((plosClusterMemberList[index] in rawClusterList[rawCluster].members) and (not(rawClusterList[rawCluster] in finalClusterList))):
finalClusterList.append(rawClusterList[rawCluster])
return finalClusterList
########## Complete Pipeline ##########
def completePipeline(image):
#Plos Pipeline Results
plosOut = pLib.pipeline(image)
#Otsu's Binarization Thresholding
bianOut = otsuVox(plosOut)
#Connected Components
connectList = connectedComponents(bianOut)
#Remove outlier clusters
threshClusterList = thresholdByVolumePercentile(connectList)
#finding the clusters without plosPipeline - lists the entire clusters
bianRawOut = otsuVox(image)
clusterRawList = connectedComponents(bianRawOut)
#coregistering with raw data
clusters = clusterCoregister(threshClusterList, clusterRawList)
return clusters
completeClusterMemberList = completePipeline(combinedIm)
### Get Cluster Volumes
def getClusterVolumes(clusterList):
completeClusterVolumes = []
for cluster in clusterList:
completeClusterVolumes.append(cluster.getVolume())
return completeClusterVolumes
import mouseVis as mv
#plotting results
completeClusterVolumes = getClusterVolumes(completeClusterMemberList)
mv.generateHist(completeClusterVolumes, title = 'Cluster Volumes for Easy Simulation', bins = 25, xaxis = 'Volumes', yaxis = 'Relative Frequency')
#test stats
# get actual cluster volumes from foreground (for 'Expected' values)
def getForegroundClusterVols(foreground):
foregroundClusterList = connectedComponents(foreground)
del foregroundClusterList[0] #background cluster
foregroundClusterVols = []
for cluster in foregroundClusterList:
foregroundClusterVols.append(cluster.getVolume())
return foregroundClusterVols
def getAverageMetric(coClusterVols, foreClusterVols):
#no clusters found
if (len(coClusterVols)==0):
avgClusterVol = 0
else:
#average volume of detected clusters
avgClusterVol = np.mean(coClusterVols)
#average volume of total foreground clusters
avgExpectedVol = np.mean(foreClusterVols)
print('Average Volume')
print("\tExpected: " + str(avgExpectedVol) + '\tActual: ' + str(avgClusterVol))
return avgExpectedVol, avgClusterVol
def getDensityMetric(coClusterVols, foreClusterVols):
#no clusters found
if (len(coClusterVols)==0):
coClusterVols.append(0)
print('Cluster Density of Data By Volume')
print("\tExpected: " + str(np.sum(foreClusterVols)/(100*100*100.0)) + '\tActual: ' + str(np.sum(coClusterVols)/(100*100*100.0)))
foregroundClusterVols = getForegroundClusterVols(foreground)
getAverageMetric(completeClusterVolumes, foregroundClusterVols)
getDensityMetric(completeClusterVolumes, foregroundClusterVols)
completeClusterMemberListHard = completePipeline(combinedImHard)
print(len(completeClusterMemberListHard))
#Plos Pipeline Results
plosOut = pLib.pipeline(combinedImHard)
#Otsu's Binarization Thresholding
bianOut = otsuVox(plosOut)
#Connected Components
connectList = connectedComponents(bianOut)
#get total volume for hard simulation clusters
totalClusterHard = []
for cluster in connectList:
totalClusterHard.append(cluster.getVolume())
#get coregistered (complete) cluster volumes
completeClusterVolumesHard = getClusterVolumes(completeClusterMemberListHard)
print('Number of Clusters: ' + str(len(totalClusterHard)))
print('Cluster Volume: ' + str(totalClusterHard[0]))
print('Coregistered Clusters: ' + str(len(completeClusterMemberListHard)))
foregroundClusterVolsHard = getForegroundClusterVols(foregroundHard)
getAverageMetric(completeClusterVolumesHard, foregroundClusterVolsHard)
getDensityMetric(completeClusterVolumesHard, foregroundClusterVolsHard)
easySimulationVolumes = []
hardSimulationVolumes = []
for i in range(10):
#Easy Simulation
randIm = generateTestVolume()
foreground = randIm[0]
combinedIm = randIm[1]
completeClusterMemberList = completePipeline(combinedIm)
completeClusterVolumes = getClusterVolumes(completeClusterMemberList)
foregroundClusterVols = getForegroundClusterVols(foreground)
easySimulationVolumes.append(getAverageMetric(completeClusterVolumes, foregroundClusterVols))
getDensityMetric(completeClusterVolumes, foregroundClusterVols)
#Hard Simulation
randImHard = generateDifficultTestVolume()
foregroundHard = randImHard[0]
combinedImHard = randImHard[1]
completeClusterMemberListHard = completePipeline(combinedImHard)
completeClusterVolumesHard = getClusterVolumes(completeClusterMemberListHard)
foregroundClusterVolsHard = getForegroundClusterVols(foregroundHard)
hardSimulationVolumes.append(getAverageMetric(completeClusterVolumesHard, foregroundClusterVolsHard))
getDensityMetric(completeClusterVolumesHard, foregroundClusterVolsHard)
#separate expected and actual values into separate indices
esv = [list(t) for t in zip(*easySimulationVolumes)]
#outlier
del esv[0][6]
del esv[1][6]
fig = plt.figure()
plt.title('Easy Simulation: Average and Expected Cluster Volumes (10 Trials)')
plt.xlabel('Simulation #')
plt.ylabel('Volume (voxels)')
x = np.arange(9)
plt.scatter(x, esv[0], c='r')
plt.scatter(x, esv[1], c='b')
plt.show()
hsv = [list(t) for t in zip(*hardSimulationVolumes)]
fig = plt.figure()
plt.title('Difficult Simulation: Average and Expected Cluster Volumes (10 Trials)')
plt.xlabel('Simulation #')
plt.ylabel('Volume (voxels)')
x = np.arange(10)
plt.scatter(x, hsv[0], c='r')
plt.scatter(x, hsv[1], c='b')
plt.show()
import pickle
realData = pickle.load(open('../data/realDataRaw_t0.synth', 'rb'))
realDataSection = realData[5: 10]
plosDataSection = pLib.pipeline(realDataSection)
mv.generateHist(plosDataSection, bins = 50, title = "Voxel Intensity Distribution after PLOS", xaxis = 'Relative Voxel Intensity', yaxis = 'Frequency')
print('Running')
realClusterList = completePipeline(plosDataSection)
realClusterVols = getClusterVolumes(realClusterList)
mv.generateHist(realClusterVols, title = 'Cluster Volumes for Easy Simulation', bins = 50, xaxis = 'Volumes', yaxis = 'Relative Frequency')
print(realClusterVols)
del realClusterVols[0]
mv.generateHist(realClusterVols, title = 'Cluster Volumes for Easy Simulation', axisStart = 0, axisEnd = 200, bins = 25, xaxis = 'Volumes', yaxis = 'Relative Frequency')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we can look for data to do some exploratory stuff.
Step2: Remove the rows with NA for loan status
|
<ASSISTANT_TASK:>
Python Code:
df3a.dropna(axis=1, how='all',inplace=True)
df3a.columns.tolist()
df3a.application_type.unique() ##Only one type, may not be that useful so not keeping it.
cols_to_keep=['id','loan_amnt','funded_amnt','funded_amnt_inv','term','int_rate','installment','grade','sub_grade','emp_title','emp_length','home_ownership','annual_inc','verification_status','issue_d','pymnt_plan','purpose','title','addr_state','dti','delinq_2yrs','earliest_cr_line','inq_last_6mths','mths_since_last_delinq','mths_since_last_record','open_acc','pub_rec','revol_bal','revol_util','total_acc','initial_list_status','out_prncp','out_prncp_inv','total_pymnt','total_pymnt_inv','total_rec_prncp','total_rec_int','total_rec_late_fee','recoveries','collection_recovery_fee','last_pymnt_d','last_pymnt_amnt','next_pymnt_d','last_credit_pull_d','collections_12_mths_ex_med','mths_since_last_major_derog','policy_code','loan_status']
df3a.home_ownership.unique()
df3a=df3a[cols_to_keep]
df3a.loan_status.unique()
df3a.dropna(subset=['loan_status'],how='any',inplace=True)
df3a.loan_status.value_counts().plot(kind='bar')
default_status = {'Fully Paid':1,'Charged Off':0,'Current':1,'In Grace Period':0,'Late (31-120 days)':0,'Default':0,'Does not meet the credit policy. Status:Fully Paid':1,'Does not meet the credit policy. Status:Charged Off':0,'Late (16-30 days)':0}
df3a['defaultYN']=df3a.loan_status.map(default_status)
df3a.defaultYN.value_counts()
#sns.countplot(x="defaultYN",data=df3a)
df3a.grade.value_counts()
sns.distplot(df3a.loan_amnt)
sns.distplot(df3a.annual_inc)
sns.countplot(df3a.defaultYN)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: With Pandas we can load the aforementioned CSV data.
Step2: With the data loaded we can plot it as a scatter plot using matplotlib.
Step4: Modelling
Step6: Task I
Step8: Task II
Step9: Task III
Step10: <div class="alert alert-info">
Step11: Critical Analysis
Step12: The input to our model is a single scalar value (Education). The output is also a single scalar value (Income).
Step13: Notice in the print above how the fully-connected layer Dense() has two trainable parameters. One is the weight (slope), while the second is the bias (intercept). Keras adds bias units by default, but it can be turned off by setting use_bias=False.
Step14: Now that both the model definition and the optimiser are set up, we can start training. Training using the Keras model application programming interface (API) is done by calling the fit() method.
|
<ASSISTANT_TASK:>
Python Code:
# Plots will be show inside the notebook
%matplotlib notebook
import matplotlib.pyplot as plt
# NumPy is a package for manipulating N-dimensional array objects
import numpy as np
# Pandas is a data analysis package
import pandas as pd
import problem_unittests as tests
# Load data and print the first n = 5 rows
# URL: http://www-bcf.usc.edu/~gareth/ISL/Income1.csv
DATA_URL = './resources/Income1.csv'
data = pd.read_csv(DATA_URL, index_col=0)
print(data.head(n=5))
# Put the second (education index) and third (income index) row in a NumPy array
X_data = data['Education'].values
y_data = data['Income'].values
plt.figure()
plt.scatter(X_data, y_data, label='Training data')
plt.title('Education vs. Income')
plt.xlabel('Education index')
plt.ylabel('Income index')
plt.grid(linestyle='dotted')
plt.legend()
plt.show()
def build_X(x_data):
Return design matrix given an array of N samples with d dimensions.
# Create matrix Ax1 if d = 1
if x_data.ndim == 1:
x_data = np.expand_dims(x_data, axis=1)
# Find the number of samples and dimensions
nb_samples = x_data.shape[0]
nb_dimensions = x_data.shape[1]
# Create Nxd+1 matrix filled with ones
_X = np.ones((nb_samples, nb_dimensions + 1))
# Paste in the data we have in the new matrix
_X[:nb_samples, 1:nb_dimensions + 1] = x_data
return _X
# Test and see that the design matrix was built correctly
tests.test_build_x(build_X)
def build_y(y_data):
Return a column vector containing the target values y.
# Make a copy of the argument that we can work on
_y = y_data.copy()
# Create y matrix Nx1
_y = _y.reshape(-1, 1)
# Return result
return _y
### Do *not* modify the following line ###
# Test and see that the y vector was built correctly
tests.test_build_y(build_y)
def compute_weights(X, y):
Return a vector of weights found by the derived closed-form solution.
# Closed-form (normal equation) least-squares solution: weights = (X^T X)^{-1} X^T y
weights = np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y))
return weights
### Do *not* modify the following line ###
# Test and see that the weights are calculated correctly
tests.test_compute_theta(compute_weights)
# Build design matrix (TASK)
X = build_X(X_data)
# Build y vector (TASK)
y = build_y(y_data)
# Learn linear model (TASK)
W = compute_weights(X, y)
# Print weights
print('The learned linear model looks like this:')
print('Y = {:.3f} x + {:.3f}'.format(W[1, 0], W[0, 0]))
# Plot hyperplane and training data
xs = np.linspace(X_data.min(), X_data.max(), num=50)
ys = np.dot(build_X(xs), W)
plt.figure()
plt.scatter(X_data, y_data, label='Training data')
plt.plot(xs, ys, color='Red', linewidth=1, label='Fit')
plt.title('Education vs. Income')
plt.xlabel('Education index')
plt.ylabel('Income index')
plt.grid(linestyle='dotted')
plt.legend()
plt.show()
import time
# A library for easily displaying progress meters
import tqdm
# Contains all built-in optimisation tools in Keras, such as stochastic gradient descent
from keras import optimizers
# An input "layer" and a densely-connected neural network layer
from keras.layers import Input, Dense
# Model is an API that wraps our linear regression model
from keras.models import Model
# There is only a *single* feature
input_X = Input(shape=(1,))
# The output of the model is a single value
output_y = Dense(units=1, use_bias=True)(input_X)
# We give the input and output to our Model API
model = Model(inputs=input_X, outputs=output_y)
# Print a summary of the model
model.summary()
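# The summary above should report 2 trainable parameters for the Dense layer:
# one weight (the slope) and one bias (the intercept), i.e. y = w*x + b.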
#
# Start by setting some user options
#
# Learning rate (set very small so we can clearly see the training progress)
lr = 0.0001
# Number of times to apply the update rule
nb_iterations = 100
# Number of samples to include each iteration (used to compute gradients)
nb_samples = 30
# Create optimiser using Keras
sgd = optimizers.SGD(lr=lr)
# Add the optimiser to our model, make it optimise mean squared error
model.compile(optimizer=sgd, loss='mean_squared_error')
fig, ax = plt.subplots(1,1)
# Perform `nb_iterations` update rule applications
for i in tqdm.tqdm(np.arange(nb_iterations)):
# Learn by calling the `fit` method
model.fit(X_data, y_data,
batch_size=nb_samples,
epochs=1,
verbose=0)
# Make a plot of the data and the current fit
xs = np.linspace(X_data.min(), X_data.max(), num=50)
ys = model.predict(xs)
ax.clear()
ax.scatter(X_data, y_data, label='Training data')
ax.plot(xs, ys, color='Red', linewidth=1, label='Fit')
ax.set_xlabel('Education index')
ax.set_ylabel('Income index')
ax.grid(linestyle='dotted')
ax.legend()
fig.canvas.draw()
time.sleep(0.05)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Primitive Stochastic Functions
Step2: Here, torch.distributions.Normal is an instance of the Distribution class that takes parameters and provides sample and score methods. Pyro's distribution library pyro.distributions is a thin wrapper around torch.distributions because we want to make use of PyTorch's fast tensor math and autograd capabilities during inference.
Step3: Let's go through this line-by-line. First, in lines 2 we define a binary random variable 'cloudy', which is given by a draw from the Bernoulli distribution with a parameter of 0.3. Since the Bernoulli distributions return 0s or 1s, in line 3 we convert the value cloudy to a string so that return values of weather are easier to parse. So according to this model 30% of the time it's cloudy and 70% of the time it's sunny.
Step4: Just like a direct call to torch.distributions.Normal().rsample(), this returns a sample from the unit normal distribution. The crucial difference is that this sample is named. Pyro's backend uses these names to uniquely identify sample statements and change their behavior at runtime depending on how the enclosing stochastic function is being used. As we will see, this is how Pyro can implement the various manipulations that underlie inference algorithms.
Step5: Procedurally, weather() is still a non-deterministic Python callable that returns two random samples. Because the randomness is now invoked with pyro.sample, however, it is much more than that. In particular weather() specifies a joint probability distribution over two named random variables
Step6: This kind of modularity, familiar to any programmer, is obviously very powerful. But is it powerful enough to encompass all the different kinds of models we'd like to express?
Step7: Note that the names x_0, x_1, etc., in geometric() are generated dynamically and that different executions can have different numbers of named random variables.
|
<ASSISTANT_TASK:>
Python Code:
import torch
import pyro
pyro.set_rng_seed(101)
loc = 0. # mean zero
scale = 1. # unit variance
normal = torch.distributions.Normal(loc, scale) # create a normal distribution object
x = normal.rsample() # draw a sample from N(0,1)
print("sample", x)
print("log prob", normal.log_prob(x)) # score the sample from N(0,1)
def weather():
cloudy = torch.distributions.Bernoulli(0.3).sample()
cloudy = 'cloudy' if cloudy.item() == 1.0 else 'sunny'
mean_temp = {'cloudy': 55.0, 'sunny': 75.0}[cloudy]
scale_temp = {'cloudy': 10.0, 'sunny': 15.0}[cloudy]
temp = torch.distributions.Normal(mean_temp, scale_temp).rsample()
return cloudy, temp.item()
x = pyro.sample("my_sample", pyro.distributions.Normal(loc, scale))
print(x)
def weather():
cloudy = pyro.sample('cloudy', pyro.distributions.Bernoulli(0.3))
cloudy = 'cloudy' if cloudy.item() == 1.0 else 'sunny'
mean_temp = {'cloudy': 55.0, 'sunny': 75.0}[cloudy]
scale_temp = {'cloudy': 10.0, 'sunny': 15.0}[cloudy]
temp = pyro.sample('temp', pyro.distributions.Normal(mean_temp, scale_temp))
return cloudy, temp.item()
for _ in range(3):
print(weather())
def ice_cream_sales():
cloudy, temp = weather()
expected_sales = 200. if cloudy == 'sunny' and temp > 80.0 else 50.
ice_cream = pyro.sample('ice_cream', pyro.distributions.Normal(expected_sales, 10.0))
return ice_cream
def geometric(p, t=None):
if t is None:
t = 0
x = pyro.sample("x_{}".format(t), pyro.distributions.Bernoulli(p))
if x.item() == 1:
return 0
else:
return 1 + geometric(p, t + 1)
print(geometric(0.5))
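# Illustrative follow-up (not part of the original): the value returned is the number of
# failures before the first success, so for p = 0.5 its long-run average is (1 - p) / p = 1.
print(sum(geometric(0.5) for _ in range(1000)) / 1000.)  # roughly 1.0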
def normal_product(loc, scale):
z1 = pyro.sample("z1", pyro.distributions.Normal(loc, scale))
z2 = pyro.sample("z2", pyro.distributions.Normal(loc, scale))
y = z1 * z2
return y
def make_normal_normal():
mu_latent = pyro.sample("mu_latent", pyro.distributions.Normal(0, 1))
fn = lambda scale: normal_product(mu_latent, scale)
return fn
print(make_normal_normal()(1.))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import scipy
import scipy.optimize
import numpy as np
def test_func(x):
return (x[0])**2+(x[1])**2
def test_grad(x):
return [2*x[0],2*x[1]]
starting_point = [1.8, 1.7]
direction = [-1, -1]
result = scipy.optimize.line_search(test_func, test_grad, np.array(starting_point), np.array(direction))[0]
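# `result` is the step size alpha found by line_search along `direction`; the corresponding
# new iterate would be starting_point + alpha * direction (illustrative, not part of the task).
print(result)
print(np.array(starting_point) + result * np.array(direction))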
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load and prepare the data
Step2: Checking out the data
Step3: Dummy variables
Step4: Scaling target variables
Step5: Splitting the data into training, testing, and validation sets
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Step8: Unit tests
Step9: Training the network
Step10: Check out your predictions
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
rides[:24*10].plot(x='dteday', y='cnt')
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1/(1 + np.exp(-x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
#final_outputs = self.activation_function(final_inputs) # signals from final output layer
final_outputs = final_inputs
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output, error)
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = error # the output unit is linear (f(x) = x), so its derivative is 1
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:,None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # the output layer is linear for regression, so no activation is applied here
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
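# Quick worked example (illustrative): MSE([1, 2], [1, 4]) = mean([0, 4]) = 2.0
print(MSE(np.array([1, 2]), np.array([1, 4])))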
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
import sys
### Set the hyperparameters here ###
iterations = 100
learning_rate = 0.1
hidden_nodes = 2
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Boolean Automata
Step2: Calling a.shortest() is equivalent to a.shortest(1) which is equivalent to a.shortest(1, 0)
Step3: To get the first four words, use a.shortest(4) (or a.shortest(len = 4))
Step4: The words with at most four letters
Step5: At most 10 words of at most 4 letters
Step6: At most 10 words of at most 3 letters
Step7: Weighted Automata
Step8: Transducers
|
<ASSISTANT_TASK:>
Python Code:
import vcsn
a = vcsn.context('lal_char(ab), b').de_bruijn(1)
a
a.shortest()
a.shortest(4)
a.shortest(len = 4)
a.shortest(num = 10, len = 4)
a.shortest(num = 10, len = 3)
%%automaton -s bin
context = "lal_char(01), z"
$ -> 0
0 -> 0 0, 1
0 -> 1 1
1 -> $
1 -> 1 <2>0, <2>1
bin.shortest(len = 3)
bin.shortest(num = 10)
%%automaton -s t
context = "lat<law<char>, law<char>>, q"
$ -> 0
0 -> 1 <1>one|un
0 -> 2 <2>two|deux
0 -> 3 <3>three|trois
1 -> 0 \e|\e
2 -> 0 \e|\e
3 -> 0 \e|\e
1 -> $
2 -> $
3 -> $
t.shortest(num = 10, len = 9)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The true power of functions is being able to call them as many times as we would like. In the previous example, we called the square function, xsq, in a loop 10 times. Let's check out some more complicated examples.
Step2: So clearly we can do some powerful things. Now let's see why these functions have significant power over loops.
Step3: Here, using loops within functions allows us to generate the Fibonacci sequence. We then write a function to print out the first n numbers.
Step4: Lambda
|
<ASSISTANT_TASK:>
Python Code:
def add1(x):
return x+1
print(add1(1))
def xsq(x):
return x**2
print(xsq(5))
for i in range(0,10):
print(xsq(i))
def removefs(data):
newdata=''
for d in data:
if(d=="f" or d=="F"):
pass
else:
newdata+=(d)
return newdata
print(removefs('ffffffFFFFFg'))
intro='''##Functions
Let's say that we have some code that does some task, but the code is 25 lines long, we need to run it over 1000 items and it doesn't work in a loop. How in the world will we handle this situation? That is where functions come in really handy. Functions are a generalized block of code that allow you to run code over and over while changing its parameters if you so choose. Functions may take **(arguments)** that you are allowed to change when you call the function. It may also **return** a value.
A function must be defined before you can call it. To define a function, we use the following syntax:
def <function name>(arg0, arg1, arg3,...):
#code here must be indented.
#you can use arg0,...,argn within the function
#you can also return things
return 1
#This code returns 1 no matter what you tell the function
Functions can take as many arguments as you wish, but they may only return 1 thing. A simple example of a familiar function is any mathematical function. Take sin(x), it is a function that takes one argument x and returns one value based on the input. Let's get familiar with functions."'''
print(removefs(intro))
def removevowels(data):
newdata = ''
for d in data:
if(d=='a' or d=='e' or d=='i' or d=='o' or d=='u' or d=='y'):
pass
else:
newdata+=d
return newdata
print(removevowels(intro))
def fib(n):
a,b = 1,1
for i in range(n-1):
a,b = b,a+b
return a
def printfib(n):
for i in range(0,n):
print(fib(i))
printfib(15)
thoudigits = 7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
#^^^Some junk we will learn later^^^
func = lambda x:np.exp(np.sin(x**np.cos(x)))
#^^^The important part^^^
plt.plot(np.linspace(0,10,1000), func(np.linspace(0,10,1000)))
#^^^We will learn this next^^^
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Let's visualize what we're working with and get the pixel count for our images. They should be square for this to work, but luckily we padded them with black pixels where needed back in Step 2.
Step5: Step 5 - Get tooling for training convolutional neural networks
Step7: Neural network architecture
Step10: Model function
Step11: TF Estimator
Step13: TF Experiment
Step14: Step 6 - Train a model!
Step17: Get predictions and performance metrics
Step19: Get training accuracy
Step20: Step 7 - Debugging and Tuning
Step21: Tuning
Step22: Step 8 - Validation
|
<ASSISTANT_TASK:>
Python Code:
# Enter your username:
YOUR_GMAIL_ACCOUNT = '******' # Whatever is before @gmail.com in your email address
# Libraries for this section:
import os
import datetime
import numpy as np
import pandas as pd
import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import tensorflow as tf
from tensorflow.contrib.learn import RunConfig, Experiment
from tensorflow.contrib.learn.python.learn import learn_runner
# Directory settings:
TRAIN_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/training_small/') # Where the subset training dataset lives.
DEBUG_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/debugging_small/') # Where the debugging dataset lives.
VALID_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/validation_images/') # Where the full validation dataset lives.
OUTPUT_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/output_cnn_small/') # Where we store our logging and models.
# TensorFlow setup:
NUM_CLASSES = 2 # This code can be generalized beyond 2 classes (binary classification).
QUEUE_CAP = 5000 # Number of images the TensorFlow queue can store during training.
# For debugging, QUEUE_CAP is ignored in favor of using all images available.
TRAIN_BATCH_SIZE = 128 # Number of images processed every training iteration.
DEBUG_BATCH_SIZE = 64 # Number of images processed every debugging iteration.
TRAIN_STEPS = 100 # Number of batches to use for training.
DEBUG_STEPS = 2 # Number of batches to use for debugging.
# Example: If dataset is 5 batches ABCDE, train_steps = 2 uses AB, train_steps = 7 uses ABCDEAB).
# Monitoring setup:
TRAINING_LOG_PERIOD_SECS = 60 # How often we want to log training metrics (from training hook in our model_fn).
CHECKPOINT_PERIOD_SECS = 60 # How often we want to save a checkpoint.
# Hyperparameters we'll tune in the tutorial:
DROPOUT = 0.8 # Regularization parameter for neural networks - must be between 0 and 1.
# Additional hyperparameters:
LEARNING_RATE = 0.001 # Rate at which weights update.
CNN_KERNEL_SIZE = 3 # Receptive field will be square window with this many pixels per side.
CNN_STRIDES = 2 # Distance between consecutive receptive fields.
CNN_FILTERS = 16 # Number of filters (new receptive fields to train, i.e. new channels) in first convolutional layer.
FC_HIDDEN_UNITS = 512 # Number of hidden units in the fully connected layer of the network.
def show_inputs(dir, filelist=None, img_rows=1, img_cols=3, figsize=(20, 10)):
Display the first few images.
Args:
dir: directory where the files are stored
filelist: list of filenames to pull from, if left as default, all files will be used
img_rows: number of rows of images to display
img_cols: number of columns of images to display
figsize: sizing for inline plots
Returns:
pixel_dims: pixel dimensions (height and width) of the image
if filelist is None:
filelist = os.listdir(dir) # Grab all the files in the directory
filelist = np.array(filelist)
plt.close('all')
fig = plt.figure(figsize=figsize)
print('File names:')
for i in range(img_rows * img_cols):
print(str(filelist[i]))
a=fig.add_subplot(img_rows, img_cols,i + 1)
img = mpimg.imread(os.path.join(dir, str(filelist[i])))
plt.imshow(img)
plt.show()
return np.shape(img)
pixel_dim = show_inputs(TRAIN_DIR)
print('Images have ' + str(pixel_dim[0]) + 'x' + str(pixel_dim[1]) + ' pixels.')
pixels = pixel_dim[0] * pixel_dim[1]
# Input function:
def generate_input_fn(dir, batch_size, queue_capacity):
Return _input_fn for use with TF Experiment.
Will be called in the Experiment section below (see _experiment_fn).
Args:
dir: directory we're taking our files from, code is written to collect all files in this dir.
batch_size: number of rows ingested in each training iteration.
queue_capacity: number of images the TF queue can store.
Returns:
_input_fn: a function that returns a batch of images and labels.
file_pattern = os.path.join(dir, '*') # We're pulling in all files in the directory.
def _input_fn():
A function that returns a batch of images and labels.
Args:
None
Returns:
image_batch: 4-d tensor collection of images.
label_batch: 1-d tensor of corresponding labels.
height, width, channels = [pixel_dim[0], pixel_dim[1], 3] # [height, width, 3] because there are 3 channels per image.
filenames_tensor = tf.train.match_filenames_once(file_pattern) # Collect the filenames
# Queue that periodically reads in images from disk:
# When ready to run iteration, TF will take batch_size number of images out of filename_queue.
filename_queue = tf.train.string_input_producer(
filenames_tensor,
shuffle=False) # Do not shuffle order of the images ingested.
# Convert filenames from queue into contents (png images pulled into memory):
reader = tf.WholeFileReader()
filename, contents = reader.read(filename_queue)
# Decodes contents pulled in into 3-d tensor per image:
image = tf.image.decode_png(contents, channels=channels)
# If dimensions mismatch, pad with zeros (black pixels) or crop to make it fit:
image = tf.image.resize_image_with_crop_or_pad(image, height, width)
# Parse out label from filename:
label = tf.string_to_number(tf.string_split([tf.string_split([filename], '_').values[-1]], '.').values[0])
# All your filenames should be in this format number_number_label.extension where label is 0 or 1.
# Execute above in a batch of batch_size to create a 4-d tensor of collection of images:
image_batch, label_batch = tf.train.batch(
[image, label],
batch_size,
num_threads=1, # We'll decline the multithreading option so that everything stays in filename order.
capacity=queue_capacity)
# Normalization for better training:
# Change scale from pixel uint8 values between 0 and 255 into normalized float32 values between 0 and 1:
image_batch = tf.to_float(image_batch) / 255
# Rescale from (0,1) to (-1,1) so that the "center" of the image range is 0:
image_batch = (image_batch * 2) - 1
return image_batch, label_batch
return _input_fn
# CNN architecture:
def cnn(features, dropout, reuse, is_training):
Defines the architecture of the neural network.
Will be called within generate_model_fn() below.
Args:
features: feature data as 4-d tensor (of batch_size) pulled in when_input_fn() is executed.
dropout: regularization parameter in last layer (between 0 and 1, exclusive).
reuse: a scoping safeguard. First time training: set to False, after that, set to True.
is_training: if True then fits model and uses dropout, if False then doesn't consider the dropout
Returns:
2-d tensor: each image's "logit", [logit(1-p), logit(p)]
where p=Pr(1) i.e. probability that class is 1 (cat in our case).
In CNN terminology, "logit" doesn't always mean the logit function you might
have encountered studying statistics: logit(p) = logodds(p) = log(p / (1-p))
Instead of converting using the inverse of the logodds function, use softmax.
# Next, we define a scope for reusing our variables, choosing our network architecture and naming our layers.
with tf.variable_scope('cnn', reuse=reuse):
layer_1 = tf.layers.conv2d( # 2-d convolutional layer; size of output image is (pixels/stride) a side with channels = filters.
inputs=features, # previous layer (inputs) is features argument to the main function
kernel_size=CNN_KERNEL_SIZE, # 3x3(x3 because we have 3 channels) receptive field (only square ones allowed)
strides=CNN_STRIDES, # distance between consecutive receptive fields
filters=CNN_FILTERS, # number of receptive fields to train; think of this as a CNN_FILTERS-channel image which is input to next layer)
padding='SAME', # SAME uses zero padding if not all CNN_KERNEL_SIZE x CNN_KERNEL_SIZE positions are filled, VALID will ignore missing
activation=tf.nn.relu) # activation function is ReLU which is f(x) = max(x, 0)
# For simplicity, this neural network doubles the number of receptive fields (filters) with each layer.
# By using more filters, we are able to preserve the spatial dimensions better by storing more information.
#
# To determine how much information is preserved by each layer, consider that with each layer,
# the output width and height is divided by the `strides` value.
# When strides=2 for example, the input width W and height H is reduced by 2x, resulting in
# an "image" (formally, an activation field) for each filter output with dimensions W/2 x H/2.
# By doubling the number of filters compared to the input number of filters, the total output
# dimension becomes W/2 x H/2 x CNN_FILTERS*2, essentially compressing the input of the layer
# (W x H x CNN_FILTERS) to half as many total "pixels" (hidden units) at the output.
#
# On the other hand, increasing the number of filters will also increase the training time proportionally,
# as there are more weights and biases to train and convolutions to perform.
#
# As an exercise, you can play around with different numbers of filters, strides, and kernel_sizes.
# To avoid very long training time, make sure to keep kernel sizes small (under 5),
# strides at least 2 but no larger than kernel sizes (or you will skip pixels),
# and cap the number of filters at each level (no more than 512).
#
# When modifying these values, it is VERY important to keep track of the size of your layer outputs,
# i.e. the number of hidden units, since the final layer will need to be flattened into a 1D vector with size
# equal to the total number of hidden units. For this reason, using strides that are divisible by the width
# and height of the input may be the easiest way to avoid miscalculations from rounding.
layer_2 = tf.layers.conv2d(
inputs=layer_1,
kernel_size=CNN_KERNEL_SIZE,
strides=CNN_STRIDES,
filters=CNN_FILTERS * (2 ** 1), # Double the number of filters from previous layer
padding='SAME',
activation=tf.nn.relu)
layer_3 = tf.layers.conv2d(
inputs=layer_2,
kernel_size=CNN_KERNEL_SIZE,
strides=CNN_STRIDES,
filters=CNN_FILTERS * (2 ** 2), # Double the number of filters from previous layer
padding='SAME',
activation=tf.nn.relu)
layer_4 = tf.layers.conv2d(
inputs=layer_3,
kernel_size=CNN_KERNEL_SIZE,
strides=CNN_STRIDES,
filters=CNN_FILTERS * (2 ** 3), # Double the number of filters from previous layer
padding='SAME',
activation=tf.nn.relu)
layer_5 = tf.layers.conv2d(
inputs=layer_4,
kernel_size=CNN_KERNEL_SIZE,
strides=CNN_STRIDES,
filters=CNN_FILTERS * (2 ** 4), # Double the number of filters from previous layer
padding='SAME',
activation=tf.nn.relu)
layer_5_flat = tf.reshape( # Flattening to 2-d tensor (1-d per image row for feedforward fully-connected layer)
layer_5,
shape=[-1, # Reshape final layer to 1-d tensor per image.
CNN_FILTERS * (2 ** 4) * # Number of filters (depth), times...
pixels / (CNN_STRIDES ** 5) / (CNN_STRIDES ** 5)]) # ... number of hidden units per filter (input pixels / width decimation / height decimation)
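# Worked size check (assuming, purely for illustration, 128x128 padded inputs): after five
# stride-2 convolutions each activation map is 128/2**5 = 4 pixels a side, and with
# CNN_FILTERS * 2**4 = 256 filters the flattened vector has 4 * 4 * 256 = 4096 units,
# matching the shape computed above from `pixels`.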
dense_layer= tf.layers.dense( # fully connected layer
inputs=layer_5_flat,
units=FC_HIDDEN_UNITS, # number of hidden units
activation=tf.nn.relu)
dropout_layer = tf.layers.dropout( # Dropout layer randomly drops rate*100% of the dense layer's hidden units during training (the kept units are scaled up to compensate) and passes inputs through unchanged when is_training is False.
inputs=dense_layer,
rate=dropout,
training=is_training)
return tf.layers.dense(inputs=dropout_layer, units=NUM_CLASSES) # 2-d tensor: of "logits" for each image in batch.
# Model function:
def generate_model_fn(dropout):
Return a function that determines how TF estimator operates.
The estimator has 3 modes of operation:
* train (fitting and updating the model)
* eval (collecting and returning validation metrics)
* predict (using the model to label unlabeled images)
The returned function _cnn_model_fn below determines what to do depending
on the mode of operation, and returns specs telling the estimator what to
execute for that mode.
Args:
dropout: regularization parameter in last layer (between 0 and 1, exclusive)
Returns:
_cnn_model_fn: a function that returns specs for use with TF estimator
def _cnn_model_fn(features, labels, mode):
A function that determines specs for the TF estimator based on mode of operation.
Args:
features: actual data (which goes into scope within estimator function) as 4-d tensor (of batch_size),
pulled in via tf executing _input_fn(), which is the output to generate_input_fn() and is in memory
labels: 1-d tensor of 0s and 1s
mode: TF object indicating whether we're in train, eval, or predict mode.
Returns:
estim_specs: collections of metrics and tensors that are required for training (e.g. prediction values, loss value, train_op tells model weights how to update)
# Use the cnn() to compute logits:
logits_train = cnn(features, dropout, reuse=False, is_training=True)
logits_eval = cnn(features, dropout, reuse=True, is_training=False)
# We'll be evaluating these later.
# Transform logits into predictions:
pred_classes = tf.argmax(logits_eval, axis=1) # Returns 0 or 1, whichever has larger logit.
pred_prob = tf.nn.softmax(logits=logits_eval)[:, 1] # Applies softmax function to return 2-d probability vector.
# Note: we're not outputting pred_prob in this tutorial, that line just shows you
# how to get it if you want it. Softmax[i] = exp(logit[i]) / sum(exp((logit[:]))
# If we're in prediction mode, early return predicted class (0 or 1):
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(mode, predictions=pred_classes)
# If we're not in prediction mode, define loss function and optimizer.
# Loss function:
# This is what the algorithm minimizes to learn the weights.
# tf.reduce_mean() just takes the mean over a batch, giving back a scalar.
# Inside tf.reduce_mean() we'll select any valid binary loss function we want to use.
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logits_train, labels=tf.cast(labels, dtype=tf.int32)))
# Optimizer:
# This is the scheme the algorithm uses to update the weights.
# AdamOptimizer is adaptive moving average, feel free to replace with one you prefer.
optimizer = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE)
# The minimize method below doesn't minimize anything, it just takes a step.
train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
# Performance metric:
# Should be whatever we chose as we defined in Step 1. This is what you said you care about!
# This output is for reporting only, it is not optimized directly.
acc = tf.metrics.accuracy(labels=labels, predictions=pred_classes)
# Hooks - pick what to log and show:
# Hooks are designed for monitoring; every time TF writes a summary, it'll append these.
logging_hook = tf.train.LoggingTensorHook({
'x-entropy loss': loss,
'training accuracy': acc[0],
}, every_n_secs=TRAINING_LOG_PERIOD_SECS)
# Stitch everything together into the estimator specs, which we'll output here so it can
# later be passed to tf.estimator.Estimator()
estim_specs = tf.estimator.EstimatorSpec(
mode=mode,
predictions=pred_classes,
loss=loss,
train_op=train_op,
training_hooks=[logging_hook],
eval_metric_ops={ # This bit is Step 7!
'accuracy': acc,
}
)
# TF estim_specs defines a huge dict that stores different metrics and operations for useby TF Estimator.
# This gives you the interaction between your architecture in cnn() and the weights, etc. in the current iteration which
# will be used as input in the next iteration.
return estim_specs
return _cnn_model_fn
# TF Estimator:
# WARNING: Don't run this block of code more than once without first changing OUTPUT_DIR.
estimator = tf.estimator.Estimator(
model_fn=generate_model_fn(DROPOUT), # Call our generate_model_fn to create model function
model_dir=OUTPUT_DIR, # Where to look for data and also to paste output.
config=RunConfig(
save_checkpoints_secs=CHECKPOINT_PERIOD_SECS,
keep_checkpoint_max=20,
save_summary_steps=100,
log_step_count_steps=100)
)
# TF Experiment:
def experiment_fn(output_dir):
Create _experiment_fn which returns a TF experiment
To be used with learn_runner, which we imported from tf.
Args:
output_dir: which is where we write our models to.
Returns:
a TF Experiment
return Experiment(
estimator=estimator,
train_input_fn=generate_input_fn(TRAIN_DIR, TRAIN_BATCH_SIZE, QUEUE_CAP), # Generate input function above.
eval_input_fn=generate_input_fn(DEBUG_DIR, DEBUG_BATCH_SIZE, QUEUE_CAP),
train_steps=TRAIN_STEPS, # Number of batches to use for training.
eval_steps=DEBUG_STEPS, # Number of batches to use for eval.
min_eval_frequency=1 # Run eval once every min_eval_frequency number of checkpoints.
)
# Enable TF verbose output:
tf.logging.set_verbosity(tf.logging.INFO)
start_time = datetime.datetime.now()
print('It\'s {:%H:%M} in London'.format(start_time) + ' --- Let\'s get started!')
# Let the learning commence! Run the TF Experiment here.
learn_runner.run(experiment_fn, OUTPUT_DIR)
# Output lines using the word "Validation" are giving our metric on the non-training dataset (from DEBUG_DIR).
end_time = datetime.datetime.now()
print('\nIt was {:%H:%M} in London when we started.'.format(start_time))
print('\nWe\'re finished and it\'s {:%H:%M} in London'.format(end_time))
print('\nCongratulations! Training is complete!')
# Observed labels from filenames:
def get_labels(dir):
Get labels from filenames.
Filenames must be in the following format: number_number_label.png
Args:
dir: directory containing image files
Returns:
labels: 1-d np.array of binary labels
filelist = os.listdir(dir) # Use all the files in the directory
labels = np.array([])
for f in filelist:
split_filename = f.split('_')
label = int(split_filename[-1].split('.')[0])
labels = np.append(labels, label)
return labels
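# Example of the expected filename convention (illustrative): '0042_0007_1.png' -> label 1.0,
# because the text after the last underscore and before the extension is parsed as the label.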
# Cat_finder function for getting predictions:
def cat_finder(dir, model_version):
Get labels from model.
Args:
dir: directory containing image files
Returns:
predictions: 1-d np array of binary labels
num_predictions = len(os.listdir(dir))
predictions = [] # Initialize array.
# Estimator.predict() returns a generator g. Call next(g) to retrieve the next value.
prediction_gen = estimator.predict(
input_fn=generate_input_fn(dir=dir,
batch_size=TRAIN_STEPS,
queue_capacity=QUEUE_CAP
),
checkpoint_path=model_version
)
# Use generator to ensure ordering is preserved and predictions match order of validation_labels:
i = 1
for pred in range(0, num_predictions):
predictions.append(next(prediction_gen)) #Append the next value of the generator to the prediction array
i += 1
if i % 1000 == 0:
print('{:d} predictions completed (out of {:d})...'.format(i, len(os.listdir(dir))))
print('{:d} predictions completed (out of {:d})...'.format(len(os.listdir(dir)), len(os.listdir(dir))))
return np.array(predictions)
def get_accuracy(truth, predictions, threshold=0.5, roundoff = 2):
Compares labels with model predictions and returns accuracy.
Args:
truth: can be bool (False, True), int (0, 1), or float (0, 1)
predictions: number between 0 and 1, inclusive
threshold: we convert the predictions to 1s if they're above this value
roundoff: report accuracy to how many decimal places?
Returns:
accuracy: number correct divided by total predictions
truth = np.array(truth) == (1|True)
predicted = np.array(predictions) >= threshold
matches = sum(predicted == truth)
accuracy = float(matches) / len(truth)
return round(accuracy, roundoff)
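# Tiny illustrative check: three predictions, all on the correct side of the 0.5 threshold.
print(get_accuracy([1, 0, 1], [0.9, 0.2, 0.7]))  # expected: 1.0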
files = os.listdir(TRAIN_DIR)
model_version = os.path.join(OUTPUT_DIR, 'model.ckpt-' + str(TRAIN_STEPS))
predicted = cat_finder(TRAIN_DIR, model_version)
observed = get_labels(TRAIN_DIR)
print('Accuracy is ' + str(get_accuracy(observed, predicted)))
files = os.listdir(DEBUG_DIR)
model_version = os.path.join(OUTPUT_DIR, 'model.ckpt-' + str(TRAIN_STEPS))
predicted = cat_finder(DEBUG_DIR, model_version)
observed = get_labels(DEBUG_DIR)
print('Debugging accuracy is ' + str(get_accuracy(observed, predicted)))
df = pd.DataFrame({'files': files, 'predicted': predicted, 'observed': observed})
hit = df.files[df.observed == df.predicted]
miss = df.files[df.observed != df.predicted]
# Show successful classifications:
show_inputs(DEBUG_DIR, hit, 3)
# Show unsuccessful classifications:
show_inputs(DEBUG_DIR, miss, 3)
# Disable TF verbose output:
tf.logging.set_verbosity(tf.logging.FATAL)
# Get output:
dropouts = np.array([])
accuracies = np.array([])
for i in range(9):
tune_output_dir = os.path.join(OUTPUT_DIR, 'dropout0.' + str(i + 1) + '/')
tune_dropout = (float(i) + 1) / 10
print('It\'s {:%H:%M} in London'.format(datetime.datetime.now()) + ' --- Dropout setting is ' + str(tune_dropout))
# Try a new dropout setting for TF Estimator:
estimator = tf.estimator.Estimator(
model_fn=generate_model_fn(tune_dropout),
model_dir=tune_output_dir,
config=RunConfig(
save_checkpoints_secs=CHECKPOINT_PERIOD_SECS,
keep_checkpoint_max=20,
save_summary_steps=100,
log_step_count_steps=100
)
)
# Train it!
learn_runner.run(experiment_fn, tune_output_dir)
# Identify the model version:
tuned_model = os.path.join(tune_output_dir, 'model.ckpt-' + str(TRAIN_STEPS))
# Output predicted and observed labels:
predicted = cat_finder(DEBUG_DIR, model_version=tuned_model)
observed = get_labels(DEBUG_DIR)
# Compute performance metric:
accuracy = get_accuracy(truth=observed, predictions=predicted)
print('Accuracy is: ' + str(accuracy))
# Append to array:
dropouts = np.append(dropouts, tune_dropout)
accuracies = np.append(accuracies, accuracy)
best_dropout = dropouts[np.argmax(accuracies)]
print("Dropout tuning complete! Set hyperparameter to " + str(best_dropout) + ".")
files = os.listdir(VALID_DIR)
predicted = cat_finder(VALID_DIR, model_version)
observed = get_labels(VALID_DIR)
print('\nValidation accuracy is ' + str(get_accuracy(observed, predicted)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2 - Overview of the Problem set
Step2: We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Step3: Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
Step4: Expected Output for m_train, m_test and num_px
Step5: Expected Output
Step7:
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step14: Expected Output
Step16: Expected Output
Step17: Run the following cell to train your model.
Step18: Expected Output
Step19: Let's also plot the cost function and the gradients.
Step20: Interpretation
Step21: Interpretation
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
# Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_y.shape[1]
m_test = test_set_y.shape[1]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
# GRADED FUNCTION: sigmoid
def sigmoid(z):
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
### START CODE HERE ### (≈ 1 line of code)
s = 1 / (np.exp(-z) + 1)
### END CODE HERE ###
return s
print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
### START CODE HERE ### (≈ 1 line of code)
w = np.zeros((dim, 1))
b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = sigmoid(np.dot(w.T, X) + b) # compute activation
cost = -1 / m * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = np.dot(X, (A - Y).T) / m
db = np.sum(A - Y) / m
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
grads, cost = propagate(w, b, X, Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
w = w - learning_rate * dw
b = b - learning_rate * db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
# Print the cost every 100 training iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
A = sigmoid(np.dot(w.T, X) + b)
### END CODE HERE ###
for i in range(A.shape[1]):
# Convert probabilities A[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
if A[0, i] <= 0.5:
Y_prediction[0, i] = 0
else:
Y_prediction[0, i] = 1
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
w = np.array([[0.1124579],[0.23106775]])
b = -0.3
X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]])
print ("predictions = " + str(predict(w, b, X)))
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
### START CODE HERE ###
# initialize parameters with zeros (≈ 1 line of code)
w, b = initialize_with_zeros(X_train.shape[0])
# Gradient descent (≈ 1 line of code)
parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations= num_iterations, learning_rate = learning_rate, print_cost = print_cost)
# Retrieve parameters w and b from dictionary "parameters"
w = parameters["w"]
b = parameters["b"]
# Predict test/train set examples (≈ 2 lines of code)
Y_prediction_test = predict(w, b, X_test)
Y_prediction_train = predict(w, b, X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
# Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.")
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations (hundreds)')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "my_image.jpg" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using the Monte-Carlo experiment class
Step2: We now generate a specified amount of samples (8 in this example) and run them through Noddy. Note that Noddy can be spawned in separate threads
Step3: Now that the model instances have been generated, we need to load them into memory (from the output directory we specified earlier). The easiest way to do this is to use a static method in the MonteCarlo class
Step4: Now, lets draw some of them
Step5: Finally, we should really (but don't need to) delete all of the files we've generated.
|
<ASSISTANT_TASK:>
Python Code:
from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
%matplotlib inline
# here the usual imports. If any of the imports fails, make sure that pynoddy is installed
# properly, ideally with 'python setup.py develop' or 'python setup.py install'
import sys, os
import matplotlib.pyplot as plt
import numpy as np
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths corretly below
repo_path = os.path.realpath('../..')
import pynoddy.history
import pynoddy.experiment
rcParams.update({'font.size': 20})
print "Success!"
reload(pynoddy.history)
reload(pynoddy.output)
reload(pynoddy.experiment)
reload(pynoddy)
import pynoddy.experiment.monte_carlo
reload(pynoddy.experiment.monte_carlo)
from pynoddy.experiment.monte_carlo import MonteCarlo
# the model itself is now part of the repository, in the examples directory:
history_file = os.path.join(repo_path, "examples/fold_dyke_fault.his")
#this file defines the statistical distributions to sample from
params = os.path.join(repo_path, "examples/fold_dyke_fault.csv")
#now we initialise a MonteCarlo experiment
gipps_mc = MonteCarlo(history_file,params)
n_draws = 8 #the number of models to produce
output_path = os.path.join(repo_path,'sandbox/out') #the location to write models too
gipps_mc.change_cube_size(100) #change block size
gipps_mc.generate_model_instances(output_path,n_draws,threads=4, verbose=False, seed=10)
print "%d random model variations produced." % n_draws
models = MonteCarlo.load_noddy_realisations(output_path,verbose=False)
print "%d models loaded" % len(models)
for i in range(3):
models[i].plot_section(direction = 'y', ve=.5)
gipps_mc.cleanup()
print "Finito!"
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create plot drawing function
Step2: ReLU
Step3: Leaky ReLU
Step4: tanh
Step5: sigmoid
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
z = np.linspace(-5,5,num=1000)
def draw_activation_plot(a,quadrants=2,y_ticks=[0],y_lim=[0,5]):
#Create figure and axis
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
#Move left axis
ax.spines['left'].set_position('center')
# Remove top and right axes
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
#Set x and y labels
plt.xlabel('z')
plt.ylabel('a')
#Set ticks
plt.xticks([])
plt.yticks(y_ticks)
#Set ylim
plt.ylim(y_lim)
#4 Quadrant conditions
if quadrants==4:
#Move bottom axis
ax.spines['bottom'].set_position('center')
#Move x and y labels
ax.yaxis.set_label_coords(.48,.75)
ax.xaxis.set_label_coords(.75,.48)
plt.plot(z,a);
relu = np.maximum(z,0)
draw_activation_plot(relu)
def draw_2_quad_plot(a):
#Create figure and axis
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
#Move bottom and left axes
ax.spines['left'].set_position('center')
#ax.spines['bottom'].set_position('center')
# Remove top and right axes
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
#Set x and y labels
plt.xlabel('z')
plt.ylabel('a')
#Set ticks
plt.xticks([])
plt.yticks([0])
plt.ylim([0,5])
plt.plot(z,a);
relu = np.maximum(z,0)
draw_2_quad_plot(relu)
leaky_ReLU = np.maximum(0.01*z,z)
draw_4_quad_plot(leaky_ReLU)
def draw_4_quad_plot(a):
#Create figure and axis
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
#Move bottom and left axes
ax.spines['left'].set_position('center')
ax.spines['bottom'].set_position('center')
# Remove top and right axes
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
#Set x and y labels
plt.xlabel('z')
plt.ylabel('a')
#Move x and y labels
ax.yaxis.set_label_coords(.48,.75)
ax.xaxis.set_label_coords(.75,.48)
#Set ticks
plt.xticks([])
plt.yticks([-1,0,1])
plt.plot(z,a);
tanh = (np.exp(z)-np.exp(-z))/(np.exp(z)+np.exp(-z))
draw_4_quad_plot(tanh)
sigmoid = 1/(1+np.exp(-z))
draw_2_quad_plot(sigmoid)
#Create z and sigma
sigma = 1/(1+np.exp(-z))
#Draw prediction cut-off line
plt.axhline(0.5, color='black',ls='--')
#Label axis
plt.xlabel('z')
plt.ylabel(r'$\hat{y}$')
#Plot graph
plt.tick_params(axis='x',bottom='off',labelbottom='off')
plt.plot(z,sigma,'-',lw=3);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make up some open interval tops and bottoms and some heads
Step2: Make a flopy modflow model
Step3: Get transmissivities along the diagonal cells
Step4: Plot the model top and layer bottoms (colors)
Step5: example of transmissivites without sctop and scbot
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import sys
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
sctop = [-.25, .5, 1.7, 1.5, 3., 2.5] # screen tops
scbot = [-1., -.5, 1.2, 0.5, 1.5, -.2] # screen bottoms
# head in each layer, for 6 head target locations
heads = np.array([[1., 2.0, 2.05, 3., 4., 2.5],
[1.1, 2.1, 2.2, 2., 3.5, 3.],
[1.2, 2.3, 2.4, 0.6, 3.4, 3.2]
])
nl, nr = heads.shape
nc = nr
botm = np.ones((nl, nr, nc), dtype=float)
top = np.ones((nr, nc), dtype=float) * 2.1
hk = np.ones((nl, nr, nc), dtype=float) * 2.
for i in range(nl):
botm[nl-i-1, :, :] = i
botm
m = flopy.modflow.Modflow('junk', version='mfnwt', model_ws='temp')
dis = flopy.modflow.ModflowDis(m, nlay=nl, nrow=nr, ncol=nc, botm=botm, top=top)
upw = flopy.modflow.ModflowUpw(m, hk=hk)
r, c = np.arange(nr), np.arange(nc)
T = flopy.utils.get_transmissivities(heads, m, r=r, c=c, sctop=sctop, scbot=scbot)
np.round(T, 2)
m.dis.botm.array[:, r, c]
fig, ax = plt.subplots()
plt.plot(m.dis.top.array[r, c], label='model top')
for i, l in enumerate(m.dis.botm.array[:, r, c]):
label = 'layer {} bot'.format(i+1)
if i == m.nlay -1:
label = 'model bot'
plt.plot(l, label=label)
plt.plot(heads[0], label='piezometric surface', color='b', linestyle=':')
for iw in range(len(sctop)):
ax.fill_between([iw-.25, iw+.25], scbot[iw], sctop[iw],
facecolor='None', edgecolor='k')
ax.legend(loc=2)
T = flopy.utils.get_transmissivities(heads, m, r=r, c=c)
np.round(T, 2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Convection
Step2: Natural convection
Step3: Using the tile size as the characteristic length
Step4: Roof insulation
Step5: Heat flux through the roof windows and windows
Step6: Thermal mass of the floor and walls
Step7: Thermal mass of the air
Step8: Air change rate
Step9: Flow rate of a table fan
Step10: Natural ventilation
|
<ASSISTANT_TASK:>
Python Code:
S = 25.8e-2 * 43.7e-2 # m3
m = 3.2 # kg
# Surface density (mass per unit area):
rhoS = m/S
print( rhoS )
k = 0.0262 # W/m/K, thermal conductivity
nu = 1.57e-5 # m2/s, kinematic viscosity of air
alpha = 2.22e-5 # m2/s, thermal diffusivity
Pr = 0.708 # Prandtl number
L = 4 # m, characteristic length
U = 20e3/60/60 # m/s, characteristic velocity
print( U )
Re = U*L/nu # Reynolds number, turbulent if > 3e5
print( Re )
Reynolds = lambda u: u*L/nu
# Turbulence limit
v_lim = 3e5/L*nu
print( 'turbulence-limit wind speed: %f m/s '% v_lim )
# Forced convection, turbulent, Pr < 50 (cf. page 15)
h_FT = lambda u : k/L * 0.036 * Reynolds(u)**(4/5) * Pr**(1/3)
# Forced convection, laminar
h_FL = lambda u : k/L * 0.664 * Reynolds(u)**(1/2) * Pr**(1/3)
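# Quick sanity check of the correlations at single (assumed) wind speeds of 5 m/s and 1 m/s
print( 'h_FT(5 m/s) = %.1f W/m2/K, h_FL(1 m/s) = %.1f W/m2/K' % ( h_FT(5.0), h_FL(1.0) ) )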
u_span_T = np.linspace( 1.0, 15, 20 ) # m/s
u_span_L = np.linspace( 0, 1.7, 20 )
plt.figure( figsize=(10, 6) )
plt.plot( u_span_T, h_FT( u_span_T ), label='Forced, turbulent' )
plt.plot( u_span_L, h_FL( u_span_L ), label='Forced, laminar' )
plt.plot( [0, 1.7], [5, 5], label='Natural, laminar & turbulent' )
# plt.plot( u_span_T, 6.5*u_span_T, label='Forcée, laMétéo-dec' )
plt.xlabel( 'wind speed m/s' )
plt.ylabel( 'h W/m2/K' );
plt.legend();
theta = 37*np.pi/180
deltaT = 30
L = 4 # m
g = 9.81 #* np.cos(theta) #m/s-2
beta = 3e-3 # air, 20°C
Gr = g*beta*deltaT*L**3/nu**2
print('Grashof: %e'% Gr )
print('Rayleight: %e'% (Gr*Pr) )
# Plaque horizontale chauffant vers le haut
h_NTh = k/L * 0.14*(Gr*Pr)**(1/3)
print( h_NTh )
# Vertical, turbulent (Mac Adams)
h_NTv = k/L * 0.13 * (Gr*Pr)**(1/3)
print( h_NTv )
L = 20e-2 # m
Gr = g*beta*deltaT*L**3/nu**2
print('Grashof: %e'% Gr )
print('Rayleigh: %e'% (Gr*Pr) )
# turbulence limit ...
# vertical :
h_NLv = k/L * 0.677* (Pr)**(1/2)/(0.95+Pr)**(1/4)*Gr**(1/4)
print( h_NLv )
# horizontal:
h_NLh = k/L * 0.54* (Pr*Gr)**(1/4)
print( h_NLh )
ep = 20e-2 # m
k = 0.035 # W/m/K
S_plafond = 2*4.59*7.94 # m2, roof surface area
print( 'U_toit = %.3f W/K'%(k/ep*S_plafond) )
h_vitre = 2.8 # W/ (m2.K)
h_vitre = 2.8 # W/ (m2.K)
S_vitre = 0.6*0.8*2 + 1.2*0.8 + 0.3*0.72*4 + 0.25**2 # m2
print('glazed area: %.2f m^2' % S_vitre )
print( '> U_vitre : %.3f W/K'%( h_vitre*S_vitre ) )
longueur_cadres = (0.6+0.8)*4 + (1.2+0.8)*2 + 2*(0.3+0.72)*4 + 4*0.25
print( 'frame length: %.1f m' % longueur_cadres )
psi = 0.016
k_bois = 0.15 + psi #
U_cadres = k_bois*longueur_cadres # W/K
print( '> U_cadres : %.3f W/K'%( U_cadres ) )
print(' ')
print( '> U_tot : %.3f W/K'%( U_cadres + h_vitre*S_vitre ) )
# pour du béton :
rhoCp = 1400e3 # J/m3/K
k = 1.75 # W/m/K
# Penetration depth into the walls:
temps_carac = 60*60*12 # 12h
D = k/rhoCp # diffusivity
distance = np.sqrt( D*temps_carac )
print( 'Characteristic diffusion length in wall: %f cm' % (distance*100) )
surface_murs = 58 + 2*22.7
ep_utile = 0.08 # 8cm
M_mur = surface_murs*ep_utile*rhoCp
print( 'thermal mass of the walls: %.2e J/K'% M_mur )
h = 10
print( M_mur/h/60/60/24 )
Volume_appart = 22.7 * 7.94 # m3
# http://www.engineeringtoolbox.com/air-properties-d_156.html
rho_air = 1.2051 # kg/m3, 20°C
Cp_air = 1.005e3 # J/(kg K), 20°C (about 1005 J/kg/K per the Engineering Toolbox link above)
rhoCp_air = Cp_air*rho_air
print( 'mass of the air in the apartment: %f kg'% (Volume_appart*rho_air) )
M_air = Volume_appart*rho_air*Cp_air
print( 'thermal mass of the apartment air: %f J/K'% M_air )
A = 1.2*0.3 # m2
v = 0.5 # m/s
taux = ( A*v ) / Volume_appart
print( taux*60*60 )
q_min = 34 # m3/min +- 10%
q_min = q_min - 0.1*q_min
taux = (q_min*60)/ Volume_appart
print( taux )
taux = 10 # volumes per hour
U_aeration = q_min / 60 * rhoCp_air # m3/min converted to m3/s, times the volumetric heat capacity
print( 'U_fan: %f W/K' % U_aeration )
# http://www.ecro.fr/fr/principes-solutions/principes/taux-de-brassage-renouvellement.html
taux = 2 # V/h
Volume_appart = 22.7 * 7.94 # m3
U_aeration = taux *Volume_appart /60/60 * rhoCp_air
print( 'U natural ventilation: %f W/K' % U_aeration )
12/2.3
3.5*5
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fermi-Hubbard experiment example
Step2: To track the progress of simulating experiments, we use the tqdm package.
Step3: We can now import Cirq and the fermi_hubbard module from ReCirq.
Step5: Experiment parameters
Step7: We can see an example layout by printing out its text diagram.
Step9: The layout indicates the site index $j$ and spin state $\nu$, as well as which cirq.GridQubit on the Rainbow processor this combination of $(j, \nu)$ is encoded into. One can choose a different layout in the previous cell to see how the configurations vary.
Step11: Other configurations which support site-dependent $U$ and $J$ coefficients can be prepared by creating instances of the fermi_hubbard.FermiHubbardParameters data class explicitly.
Step12: We can also see the initial state
Step13: And the time step
Step15: Circuits
Step17: Below, we display the complete circuit to execute which is a sum of the three component circuits above.
Step19: Note
Step21: We will consider both ideal and noisy decompositions when executing the experiment below.
Step23: Ideal
Step25: We can now run experiments using the run_experiment function. This function takes the parameters of a problem, a sampler, and a list of Trotter steps to simulate. Below, we provide the problem parameters defined on each $16$ qubit layout of the Rainbow processor and simulate the experiments using ten Trotter steps and the ideal_sampler.
Step27: The output of run_experiment is an instance of the ExperimentResult data class. A series of experiments for the same problem instance on different qubit layouts can be post-processed with the help of the InstanceBundle class. This class takes care of averaging results over qubits layouts, re-scaling the data by comparing against a reference run (perfect simulation in this case), and extracting various quantities.
Step29: A number of quantities of interest can be accessed from an InstanceBundle, as shown below.
Step31: Each quantity can be converted to a pandas DataFrame using the quantity_data_frame function. Our main goal in simulating the Fermi-Hubbard model was to compute the charge and spin densities
Step33: This data frame contains the value, standard error, and standard deviation of the "charge_spin_density" quantity at each site for each time (Trotter step). For convenience, this quantity (and others) can be plotted with the fermi_hubbard.plot_quantity helper function.
Step35: This plotting function automatically adjusts the appearance of plots according to the data being plotted. We illustrate this by plotting the "charge_spin_spreading" below.
Step37: One can compare these plots to Figure 2 of the Fermi-Hubbard experiment paper.
Step39: As above, we can post-process the data using an InstanceBundle and plot quantities of interest using the plot_quantity helper function.
Step40: We first plot the "charge_spin_density"
Step42: And plot the "charge_spin_spreading" as well.
Step44: One can compare these to the simulation with exact decompositions above to see the effect of the parasitic controlled phase.
Step46: Now that we are running on a quantum computer, we follow good experimental practice and save the results on disk as soon as each experiment finishes using the fermi_hubbard.save_experiment function. Although rare, remote operation may fail for various reasons. More advanced execution workflow might include error handling, experiment pause and continuation, etc., which we omit here for simplicity.
Step48: We can now load the results using fermi_hubbard.load_experiment.
Step50: When post-processing experimental data from hardware, we include effects due the the parasitic controlled phase as shown below. The value $\phi = 0.138$ was the approximate value of the parasitic controlled phase at the time when the experimental results in the paper were collected.
Step53: We can now visualize these results using the same plotting functions from above. Here we show the standard deviation of results in the plots.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
try:
import recirq
except ImportError:
print("Installing ReCirq...")
!pip install git+https://github.com/quantumlib/recirq --quiet
print("Installed ReCirq!")
try:
import ipywidgets
except ImportError:
!pip install ipywidgets --quiet
!jupyter nbextension enable --py widgetsnbextension --sys-prefix
import cirq
from recirq import fermi_hubbard
from recirq.fermi_hubbard import publication
# Hide numpy warnings
import warnings
warnings.filterwarnings("ignore")
Get all layouts for 8 sites on a 23-qubit subgrid of the Google Rainbow processor.
layouts = publication.rainbow23_layouts(sites_count=8)
print(f"There are {len(layouts)} total qubit layouts.")
Display an example layout.
print(layouts[0].text_diagram())
Get FermiHubbardParameters (problem descriptions) for each qubit layout with the above parameters.
parameters = [
publication.trapping_instance(
layout, u=2, dt=0.3, up_particles=2, down_particles=2
)
for layout in layouts
]
Display the Hamiltonian for an example problem description.
parameters_example = parameters[0]
print(parameters_example.hamiltonian)
parameters_example.initial_state
parameters_example.dt
Create circuits from a problem description.
initial, trotter, measurement = fermi_hubbard.create_circuits(parameters_example, trotter_steps=1)
Display the total circuit to execute.
circuit = initial + trotter + measurement
circuit
Convert the circuit to native hardware gates perfectly (without noise).
publication.ideal_sqrt_iswap_converter().convert(circuit)
Set the number of Trotter steps to simulate.
trotter_steps = range(10 + 1)
Get an ideal sampler to simulate experiments.
ideal_sampler = fermi_hubbard.ConvertingSampler(
cirq.Simulator(), publication.ideal_sqrt_iswap_converter().convert
)
Run the experiments on a perfect simulator for each qubit layout.
from tqdm.notebook import tqdm
with tqdm(range(len(parameters) * len(trotter_steps))) as progress:
experiments = [
fermi_hubbard.run_experiment(
params,
trotter_steps,
ideal_sampler,
post_run_func=lambda *_: progress.update()
)
for params in parameters
]
Post-process the experimental data for all qubit layouts.
bundle = fermi_hubbard.InstanceBundle(experiments)
bundle.cache_exact_numerics()
Show quantities which can be accessed from an InstanceBundle.
for quantity_name in bundle.quantities:
print(quantity_name)
Example of getting a DataFrame from a quantity.
charge_spin_density, _, _ = fermi_hubbard.quantity_data_frame(bundle, "charge_spin_density")
charge_spin_density.head()
Plot the charge spin density.
fermi_hubbard.plot_quantity(bundle, "charge_spin_density");
Plot the charge spin spreading.
fermi_hubbard.plot_quantity(bundle, "charge_spin_spreading");
Run the experiments on a noisy simulator for each qubit layout.
parasitic_sampler = fermi_hubbard.ConvertingSampler(
cirq.Simulator(), publication.google_sqrt_iswap_converter().convert
)
with tqdm(range(len(parameters) * len(trotter_steps))) as progress:
experiments = [
fermi_hubbard.run_experiment(
params,
trotter_steps,
parasitic_sampler,
post_run_func=lambda *_: progress.update()
)
for params in parameters
]
Post-process the experimental data for all qubit layouts.
bundle = fermi_hubbard.InstanceBundle(experiments)
bundle.cache_exact_numerics()
fermi_hubbard.plot_quantity(bundle, "charge_spin_density");
Plot the charge spin spreading.
fermi_hubbard.plot_quantity(bundle, "charge_spin_spreading", show_std_error=True);
Get an engine sampler.
import os
import cirq_google
if "GOOGLE_CLOUD_PROJECT" in os.environ:
engine_sampler = cirq_google.get_engine_sampler(
processor_id="rainbow", gate_set_name="sqrt_iswap"
)
else:
# Use the simulator as a backup.
engine_sampler = cirq.Simulator()
# Get a sampler for the Fermi-Hubbard experiment.
google_sampler = fermi_hubbard.ConvertingSampler(
engine_sampler, publication.google_sqrt_iswap_converter().convert
)
Run the experiments on Google's QCS and save the results.
# Directory to save results in.
results_dir = "trapping"
with tqdm(range(len(layouts) * len(trotter_steps))) as progress:
for index, params in enumerate(parameters):
experiment = fermi_hubbard.run_experiment(
params,
trotter_steps,
google_sampler,
post_run_func=lambda *_: progress.update()
)
fermi_hubbard.save_experiment(
experiment, f"{results_dir}/trapping_{index + 1}.json"
)
Load experimental results.
experiments = [
fermi_hubbard.load_experiment(f"{results_dir}/trapping_{index + 1}.json")
for index in range(len(parameters))
]
Post-process the experimental data for all qubit layouts.
bundle = fermi_hubbard.InstanceBundle(
experiments,numerics_transform=publication.parasitic_cphase_compensation(0.138)
)
bundle.cache_exact_numerics()
Plot the charge spin density.
fermi_hubbard.plot_quantity(bundle, "charge_spin_density", show_std_error=True);
Plot the charge spin spreading.
fermi_hubbard.plot_quantity(bundle, "charge_spin_spreading", show_std_error=True);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Setup paths and prepare raw data
Step2: 1) Fit ICA model using the FastICA algorithm
Step3: 2) identify bad components by analyzing latent sources.
Step4: 3) Assess component selection and unmixing quality
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Denis Engemann <denis.engemann@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.preprocessing import ICA
from mne.preprocessing import create_ecg_epochs, create_eog_epochs
from mne.datasets import sample
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 45, n_jobs=1)
# Other available choices are `infomax` or `extended-infomax`
# We pass a float value between 0 and 1 to select n_components based on the
# percentage of variance explained by the PCA components.
ica = ICA(n_components=0.95, method='fastica')
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
stim=False, exclude='bads')
ica.fit(raw, picks=picks, decim=3, reject=dict(mag=4e-12, grad=4000e-13))
# maximum number of components to reject
n_max_ecg, n_max_eog = 3, 1 # here we don't expect horizontal EOG components
title = 'Sources related to %s artifacts (red)'
# generate ECG epochs use detection via phase statistics
ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5, picks=picks)
ecg_inds, scores = ica.find_bads_ecg(ecg_epochs, method='ctps')
ica.plot_scores(scores, exclude=ecg_inds, title=title % 'ecg', labels='ecg')
show_picks = np.abs(scores).argsort()[::-1][:5]
ica.plot_sources(raw, show_picks, exclude=ecg_inds, title=title % 'ecg')
ica.plot_components(ecg_inds, title=title % 'ecg', colorbar=True)
ecg_inds = ecg_inds[:n_max_ecg]
ica.exclude += ecg_inds
# detect EOG by correlation
eog_inds, scores = ica.find_bads_eog(raw)
ica.plot_scores(scores, exclude=eog_inds, title=title % 'eog', labels='eog')
show_picks = np.abs(scores).argsort()[::-1][:5]
ica.plot_sources(raw, show_picks, exclude=eog_inds, title=title % 'eog')
ica.plot_components(eog_inds, title=title % 'eog', colorbar=True)
eog_inds = eog_inds[:n_max_eog]
ica.exclude += eog_inds
# estimate average artifact
ecg_evoked = ecg_epochs.average()
ica.plot_sources(ecg_evoked, exclude=ecg_inds) # plot ECG sources + selection
ica.plot_overlay(ecg_evoked, exclude=ecg_inds) # plot ECG cleaning
eog_evoked = create_eog_epochs(raw, tmin=-.5, tmax=.5, picks=picks).average()
ica.plot_sources(eog_evoked, exclude=eog_inds) # plot EOG sources + selection
ica.plot_overlay(eog_evoked, exclude=eog_inds) # plot EOG cleaning
# check the amplitudes do not change
ica.plot_overlay(raw) # EOG artifacts remain
# To save an ICA solution you can say:
# ica.save('my_ica.fif')
# You can later load the solution by saying:
# from mne.preprocessing import read_ica
# read_ica('my_ica.fif')
# Apply the solution to Raw, Epochs or Evoked like this:
# ica.apply(epochs)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the python bitcoin module created by Peter Todd.
Step2: Connect nodes to bitcoin-rpc module
Step3: Network attributes
Step4: Block Chain RPCs
Step5: Bootstrap
Step6: Raw Transaction RPCs
Step7: Now that we have a raw transaction, let's look at the details. As noted, the first 100 blocks in regtest are blank, necessary for mining our first coinbase reward of 50BTC. This shows in block 101 with a single transaction, denoted rawTX below. rawTX is a JSON object with
Step8: Let's create an address and send some bitcoins there. Some things to notice
Step9: Bob mines 6 more blocks, after the fifth, the 25BTC sent to Mary is confirmed and shows in her balance. Note that the transaction created in block 101, when Bob sent Mary 25BTC shows up in the first block mined, but the balance isn't updated until some number of blocks have been processed.
Step10: Wallet RPCs
Step11: Wallet Programs
Step12: This code snippet illustrates a simplified proof-of-work algorithm not used by miners; by incementing the nonce and trying difficulties from 1-31 bits (2 - 2^32). Note that this runs for a long time, more than 20 minutes on a 4GB-RAM Ubuntu box.
|
<ASSISTANT_TASK:>
Python Code:
#!/bin/bash
#regtest_start_network.sh
import os
import shutil
#os.system("killall --regex bitcoin.*")
idir = os.environ['HOME']+'/regtest'
if os.path.isdir(idir):
shutil.rmtree(idir)
os.mkdir(idir)
connects = {'17591' : '17592', '17592' : '17591'}
for port in connects.keys():
adir = idir+'/'+port
os.mkdir(adir)
args = " -server -txindex=1 -listen -port=" + port + \
" -rpcuser=bitcoinrpc -rpcpassword=P0 -rpcport="+\
str(int(port)-1000) +\
" -datadir=" + adir + " -connect=localhost:" + connects[port] +\
" -regtest -pid="+port+".pid -daemon -debug"
os.system("bitcoind" + args)
import bitcoin
import bitcoin.rpc
import bitcoin.core
import bitcoin.core.script
bitcoin.SelectParams('regtest')
Bob = bitcoin.rpc.RawProxy("http://bitcoinrpc:P0@127.0.0.1:16591")
Mary = bitcoin.rpc.RawProxy("http://bitcoinrpc:P0@127.0.0.1:16592")
info = Bob.getinfo()
for key in info.keys():
print key + ' : ' + str(info[key])
getnetworkinfo = Bob.getnetworkinfo()
print '\ngetnetworkinfo\n'
print getnetworkinfo
getpeerinfo = Bob.getpeerinfo()
print '\ngetpeerinfo\n'
print getpeerinfo
getconnectioncount = Bob.getconnectioncount()
print '\ngetconnectioncount\n'
print getconnectioncount
getnettotals = Bob.getnettotals()
print '\ngetnettotals\n'
print getnettotals
blockchaininfo = Bob.getblockchaininfo()
getblockcount = Bob.getblockcount()
getbestblockhash = Bob.getbestblockhash()
getdifficulty = Bob.getdifficulty()
getchaintips = Bob.getchaintips()
getmempoolinfo = Bob.getmempoolinfo()
print '\nblockchaininfo\n'
print blockchaininfo
print '\ngetblockcount ' + str(getblockcount)
print '\ngetbestblockhash\n'
print getbestblockhash
print '\ngetdifficulty ' + str(getdifficulty)
print '\ngetchaintips\n'
print getchaintips
print '\ngetmempoolinfo\n'
print getmempoolinfo
print '\n\n'
bestblockhash = blockchaininfo['bestblockhash']
blocks = blockchaininfo['blocks']
print '\nblocks = ' + str(blocks)
print '\nbestblockhash = ' + str(bestblockhash)
## N.B. our balance is zero in the genesis block
print 'Initial balance, before any mining ' + str(Bob.getbalance())
Bob.generate(101)
print 'Balance after mining 101 blocks ' + str(Bob.getbalance())
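# Coinbase rewards only become spendable after 100 confirmations (coinbase maturity),
# which is why 101 blocks are generated before the first 50 BTC shows up in the balance.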
getblockcount = Bob.getblockcount()
print '\ngetblockcount = ' + str(getblockcount)
getblockhash = Bob.getblockhash(getblockcount)
print '\ngetblockhash = ' + str(getblockhash)
print '\ngetblock\n'
getblock = Bob.getblock(getblockhash)
print getblock
tx = getblock['tx']
print '\n' + str(len(tx)) + ' Transactions\n'
print tx
for i in range(len(tx)):
print '\nSerialized Transaction #' + str(i) +'\n'
serializedTX = Bob.getrawtransaction(tx[i],0)
print serializedTX
print '\nRaw Transaction\n'
rawTX = Bob.getrawtransaction(tx[i],1)
print rawTX
print '\nDecoded Transaction\n'
decodedTX = Bob.decoderawtransaction(serializedTX)
print decodedTX
from datetime import datetime as dt
#import pytz
print rawTX
print '\n\n'
print 'blockhash = ' + str(rawTX['blockhash']) + '\n'
for i in range(len(rawTX['vout'])):
spk = rawTX['vout'][i]['scriptPubKey']
print 'vout ' + str(i) + ' : ' + str(spk) + '\n'
for field in spk.keys():
#['reqSigs','hex','addresses','asm','type']:
print 'vout ' + str(i) + ' ' + field + ' : ' + str(spk[field])
print 'vout ' + str(i) + ' value : ' + str(rawTX['vout'][i]['value'])
print 'vout ' + str(i) + ' n : ' + str(rawTX['vout'][i]['n'])
print '\nserialized hex = ' + str(rawTX['hex'])
print 'Is serialized hex == rawTX["hex"]? ' + str(rawTX['hex']==serializedTX) + '\n'
for i in range(len(rawTX['vin'])):
spk = rawTX['vin'][i]
print 'vin ' + str(i) + ' : ' + str(spk) + '\n'
for field in spk.keys():
#['reqSigs','hex','addresses','asm','type']:
print 'vin ' + str(i) + ' ' + field + ' : ' + str(spk[field])
print '\n'
for field in ['txid','blocktime','version','confirmations','time','locktime']:
if field in ['blocktime','time','locktime']:
print field + ' = ' + str(rawTX[field]) +\
' ' + dt.fromtimestamp(rawTX[field]).strftime('%Y-%m-%d:%H%M%S')
else:
print field + ' = ' + str(rawTX[field])
print 'Mary\'s balance = ' + str(Mary.getbalance())
#print 'Mary\'s peers: '
#print Mary.getpeerinfo()
getnewaddress = Mary.getnewaddress()
print '\nNew address ' + str(getnewaddress)
print '\nMary\'s address has received how many BTC? ' +\
str(Mary.getreceivedbyaddress(getnewaddress,0))
##have Bob (proxy) send 25 bitcoins to Mary
txid = Bob.sendtoaddress(getnewaddress,25)
getmempoolinfo = Bob.getmempoolinfo()
getrawmempool = Bob.getrawmempool(True)
print '\ngetmempoolinfo ' + str(getmempoolinfo)
print '\ngetrawmempool'
print getrawmempool
print '\n'
for key in getrawmempool.keys():
for field in getrawmempool[key].keys():
print str(field) + ' : ' + str(getrawmempool[key][field])
#print '\ntxid from sendtoaddress output ' + str(txid)
print '\nIs the send txid the same as memory pool txid? ****' +\
str(txid == getrawmempool.keys()[0]) + '****'
print '\nMary\'s balance before mining = ' + str(Mary.getbalance())
print 'Bob\'s balance before mining = ' + str(Bob.getbalance())
##how can I see transaction details before mining?
print '\nMemory Pool Raw Transaction Data\n'
import pprint
pprint.pprint(Bob.getrawtransaction(txid,1))
##N.B. no transaction on the blockchain yet!!!
for i in range(7):
Bob.generate(1)
getblockcount = Bob.getblockcount()
getblockhash = Bob.getblockhash(getblockcount)
getblock = Bob.getblock(getblockhash)
print 'Block #' + str(getblockcount) + ' Mary\'s balance ' + str(Mary.getbalance())
print 'txids ' + str(getblock['tx'])
print ' Mary\'s balance ' + str(Mary.getbalance())
print '\nBob\'s Wallet\n'
wallet= Bob.getwalletinfo()
for key in wallet.keys():
print key + '\t' + str(wallet[key])
print '\nMary\'s Wallet\n'
wallet= Mary.getwalletinfo()
for key in wallet.keys():
print key + '\t' + str(wallet[key])
import time
print '\nMary has ' + str(len(Mary.listtransactions())) + ' transactions from Bob\'s largesse'
print '\nMary\'s first address has received how many BTC? ' +\
str(Mary.getreceivedbyaddress( Mary.listtransactions()[0]['address'],0))
#str(Mary.getreceivedbyaddress('mmT3ER6w98jZAKwtTZrr3DSxrchS7fGxKW',0))
print '\nBob has ' + str(len(Bob.listtransactions())) + ' transactions from all that mining'
#print Bob.listtransactions()
##let's send Mary some more of Bob's bitcoins so we can see her unconfirmed balance
getnewaddress = Mary.getnewaddress()
print '\nNew address ' + str(getnewaddress)
##have Bob (proxy) send 0.5 bitcoins to Mary
txid = Bob.sendtoaddress(getnewaddress,0.5)
time.sleep(2)
print '\nMary\'s unconfirmed balance ' + str(Mary.getunconfirmedbalance())
print '\nMary\'s confirmed balance ' + str(Mary.getbalance())
##let's mine 6 blocks
Bob.generate(6)
time.sleep(2)
print 'After Bob\'s mining'
print '\nMary\'s unconfirmed balance ' + str(Mary.getunconfirmedbalance())
print '\nMary\'s confirmed balance ' + str(Mary.getbalance())
# Eloipool - Python Bitcoin pool server
# Copyright (C) 2011-2012 Luke Dashjr <luke-jr+eloipool@utopios.org>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# original code https://github.com/Crypto-Expert/stratum-mining/blob/master/lib/merkletree.py
# http://code.runnable.com/U3jqtyYUmAUxtsSS/bitcoin-block-merkle-root-python
from hashlib import sha256
from util import doublesha
class MerkleTree:
def __init__(self, data, detailed=False):
self.data = data
self.recalculate(detailed)
self._hash_steps = None
def recalculate(self, detailed=False):
L = self.data
steps = []
if detailed:
detail = []
PreL = []
StartL = 0
else:
detail = None
PreL = [None]
StartL = 2
Ll = len(L)
if detailed or Ll > 1:
while True:
if detailed:
detail += L
if Ll == 1:
break
steps.append(L[1])
if Ll % 2:
L += [L[-1]]
L = PreL + [doublesha(L[i] + L[i + 1]) for i in range(StartL, Ll, 2)]
Ll = len(L)
self._steps = steps
self.detail = detail
def hash_steps(self):
if self._hash_steps == None:
self._hash_steps = doublesha(''.join(self._steps))
return self._hash_steps
def withFirst(self, f):
steps = self._steps
for s in steps:
f = doublesha(f + s)
return f
def merkleRoot(self):
return self.withFirst(self.data[0])
# MerkleTree tests
def _test():
import binascii
import time
mt = MerkleTree([None] + [binascii.unhexlify(a) for a in [
'999d2c8bb6bda0bf784d9ebeb631d711dbbbfe1bc006ea13d6ad0d6a2649a971',
'3f92594d5a3d7b4df29d7dd7c46a0dac39a96e751ba0fc9bab5435ea5e22a19d',
'a5633f03855f541d8e60a6340fc491d49709dc821f3acb571956a856637adcb6',
'28d97c850eaf917a4c76c02474b05b70a197eaefb468d21c22ed110afe8ec9e0',
]])
assert(
b'82293f182d5db07d08acf334a5a907012bbb9990851557ac0ec028116081bd5a' ==
binascii.b2a_hex(mt.withFirst(binascii.unhexlify('d43b669fb42cfa84695b844c0402d410213faa4f3e66cb7248f688ff19d5e5f7')))
)
print '82293f182d5db07d08acf334a5a907012bbb9990851557ac0ec028116081bd5a'
txes = [binascii.unhexlify(a) for a in [
'd43b669fb42cfa84695b844c0402d410213faa4f3e66cb7248f688ff19d5e5f7',
'999d2c8bb6bda0bf784d9ebeb631d711dbbbfe1bc006ea13d6ad0d6a2649a971',
'3f92594d5a3d7b4df29d7dd7c46a0dac39a96e751ba0fc9bab5435ea5e22a19d',
'a5633f03855f541d8e60a6340fc491d49709dc821f3acb571956a856637adcb6',
'28d97c850eaf917a4c76c02474b05b70a197eaefb468d21c22ed110afe8ec9e0',
]]
s = time.time()
mt = MerkleTree(txes)
for x in range(100):
y = int('d43b669fb42cfa84695b844c0402d410213faa4f3e66cb7248f688ff19d5e5f7', 16)
#y += x
coinbasehash = binascii.unhexlify("%x" % y)
x = binascii.b2a_hex(mt.withFirst(coinbasehash))
print x
print time.time() - s
if __name__ == '__main__':
_test()
#!/usr/bin/env python
# example of proof-of-work algorithm
import hashlib
import time
max_nonce = 2 ** 32 # 4 billion
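# With difficulty_bits = n, a valid hash must be numerically below 2**(256-n), i.e. it must
# start with roughly n zero bits; every additional bit doubles the expected work.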
def proof_of_work(header, difficulty_bits):
# calculate the difficulty target
target = 2 ** (256-difficulty_bits)
for nonce in xrange(max_nonce):
hash_result = hashlib.sha256(str(header)+str(nonce)).hexdigest()
# check if this is a valid result, below the target
if long(hash_result, 16) < target:
print "Success with nonce %d" % nonce
print "Hash is %s" % hash_result
return (hash_result,nonce)
print "Failed after %d (max_nonce) tries" % nonce
return nonce
if __name__ == '__main__':
nonce = 0
hash_result = ''
# difficulty from 0 to 31 bits
for difficulty_bits in xrange(32):
difficulty = 2 ** difficulty_bits
print "Difficulty: %ld (%d bits)" % (difficulty, difficulty_bits)
print "Starting search..."
# checkpoint the current time
start_time = time.time()
# make a new block which includes the hash from the previous block
# we fake a block of transactions - just a string
new_block = 'test block with transactions' + hash_result
# find a valid nonce for the new block
(hash_result, nonce) = proof_of_work(new_block, difficulty_bits)
# checkpoint how long it took to find a result
end_time = time.time()
elapsed_time = end_time - start_time
print "Elapsed Time: %.4f seconds" % elapsed_time
if elapsed_time > 0:
# estimate the hashes per second
hash_power = float(long(nonce)/elapsed_time)
print "Hashing Power: %ld hashes per second" % hash_power
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Common os.path functions
Step2: File and directory operations, part two
Step3: Writing files
Step4: Handling differences between operating systems and file systems
Step6: In the fish_file package of fishbase we also implemented a file-search feature, which likewise uses Python's built-in pathlib module.
|
<ASSISTANT_TASK:>
Python Code:
import os
# OS path separator
print(os.sep)
# OS platform name
print(os.name)
# Get the current working directory
os.getcwd()
# For the record, this was the result when zhang yimeng ran it: 'C:\\Users\\yimeng.zhang\\Desktop\\Class\\python基础\\python_basic'
# This is the result on my current Windows machine: 'C:\\dev_python\\python_study\\python_study_basic_notebook'
# Change the working directory
# os.chdir('/Users/david.yi')
# When changing directories, refer to the current path obtained above and adjust it for your own machine; replace yimeng.zhang below with your own user name
os.chdir('C:\\Users\\yimeng.zhang\\Desktop\\Class')
os.getcwd()
# Return a list of the names of the files and folders contained in the given directory, in alphabetical order.
# The list does not include '.' and '..' even though they exist in the folder.
import os
# os.chdir('C:\\Users\\yimeng.zhang\\Desktop\\Class\\python基础\\python_basic')
os.listdir()
# Note the data type of the return value: it is a list
print(type(os.listdir()))
# Count how many files are in the directory; since the result is a list, all kinds of calculations are easy
a = os.listdir()
print(len(a))
# A path argument can be given to list all files in that directory
# list_a = os.listdir('/Users/david.yi')
list_a = os.listdir('C:\\dev_python\\python_study\\python_study_basic_notebook')
# Various conditions can be tested, e.g. whether the first character is the letter 'P'
for i in list_a:
if i[0] == 'P':
print(i)
# OS line separator
# Useful when processing some text files
os.linesep
# Create a directory
# Switch to the current path first
os.getcwd()
os.mkdir('test')
print('ok')
# Check whether the given path is an existing directory (this one exists)
# Make sure to set the working directory to this notebook's path (or some chosen path) before running these test snippets
# Be a bit careful whenever doing path and file operations
os.chdir('C:\\dev_python\\python_study\\python_study_basic_notebook')
s_dir = os.getcwd()
print(s_dir)
print(os.path.isdir(s_dir))
print(os.path.isdir('C:\\Users'))
print(os.path.isdir('C:\\Users222'))
# Check whether the given path is an existing directory
# A letter is appended to the current path below, so of course it does not exist
os.path.isdir(s_dir + 's')
# A file is not a directory; even if the file exists, this returns False
os.path.isdir(s_dir + 'test.txt')
# Check whether the given path is a file (this one exists)
import os
s_dir = os.getcwd()
s_file = os.path.join(s_dir, 'files/test.txt')
print(s_file)
os.path.isfile(s_file)
# Check whether the given path is a file (this one does not exist)
s_dir = os.getcwd()
s_file = os.path.join(s_dir, 'test222.txt')
os.path.isfile(s_file)
# A directory is not a file, so this returns False
s_dir = os.getcwd()
os.path.isfile(s_dir)
# A check that works for both directories and files
s_dir = os.getcwd()
s_file = os.path.join(s_dir, 'test.txt')
print(os.path.exists(s_dir))
print(os.path.exists(s_file))
# Get the size of a path or file
s_dir = os.getcwd()
s_file = os.path.join(s_dir, 'test.txt')
os.path.getsize(s_file)
# Get the size of a path or file
os.path.getsize(s_dir)
# Return the last access time of the file or directory
# The returned time format may not be what you expect
s_dir = os.getcwd()
s_file = os.path.join(s_dir, 'files/test.txt')
os.path.getatime(s_file)
# Return the last access time of the file or directory
import os
import time
# Format the date
dt = time.localtime(os.path.getatime(s_dir))
# print(dt)
print(time.strftime('%Y-%m-%d %H:%M:%S', dt))
# Return the last modification time of the file or directory
s_file = os.path.join(s_dir, 'files/test.txt')
os.path.getmtime(s_file)
# Return the last modification time of the file or directory
# Use time.ctime() to format the date
import time, os
s_file = os.path.join(s_dir, 'files/test.txt')
time.ctime(os.path.getmtime(s_file))
# Return the normalized absolute path
# The full path is filled in automatically, whether or not the file exists
os.path.abspath('tt1211.txt')
# Returns True if the input is an absolute path
print(os.path.isabs('test.txt'))
print(os.path.isabs('/Users/yijun/test.txt'))
# Return the directory name and file name of a path
# os.chdir('/Users/david.yi/Documents/dev/python_study/python_basic')
# os.chdir('C:\\Users\\yimeng.zhang\\Desktop\\Class\\python基础\\python_basic')
s_dir = os.getcwd()
s_file = os.path.join(s_dir, 'test.txt')
print(s_file)
# Split into the path and the file name
os.path.split(s_file)
# Return the directory part of a path; this is just the first element of os.path.split(path)
os.path.dirname('/Users/yijun/test.txt')
# Return the final file name of a path; this is just the second element of os.path.split(path)
os.path.basename(s_file)
# Split the file name and the extension, returning a (fname, fextension) tuple
os.path.splitext(s_file)
# Join multiple path components into one path; arguments before the last absolute path are ignored
# os.path.join('/Users/yijun', 'test.txt')
os.path.join('C:\\dev_python\\python_study\\python_study_basic_notebook\\files', 'test.txt')
# Return the longest path prefix shared by all paths in the list
l = ['/Users/yijun/test.txt', '/Users/yijun/test/aaa.txt', '/Users/yijun/bbb.txt']
os.path.commonprefix(l)
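# Note that commonprefix compares character by character, so it can stop in the middle of a
# path component; os.path.commonpath (Python 3.4+) compares whole path components instead.
os.path.commonpath(l)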
# Walk through all files under a directory
import os
def list_dir(root_dir):
for lists in os.listdir(root_dir):
path = os.path.join(root_dir, lists)
print(path)
if os.path.isdir(path):
list_dir(path)
# Be careful not to pick a directory with too many files, or this will eat up system resources
list_dir(os.getcwd())
# Walk through all files under a directory
# Show the size of each file in bytes, using getsize()
import os
def list_dir(root_dir):
for lists in os.listdir(root_dir):
path = os.path.join(root_dir, lists)
if lists[0:1] != '.':
filesize = os.path.getsize(path)
print(path, ' ', filesize)
if os.path.isdir(path):
list_dir(path)
# Be careful not to pick a directory with too many files, or this will eat up system resources
#list_dir('/Users/david.yi/Documents/dev/dig/doc')
list_dir(os.getcwd())
# Walk through all files under a directory
# Filter out files starting with '.', which are usually system files
# Show the size of each file in bytes
# Show only files with the ipynb extension, introducing the endswith() method
import os
def list_dir(root_dir):
for lists in os.listdir(root_dir):
path = os.path.join(root_dir, lists)
if lists[0:1] != '.' and lists.endswith('.ipynb'):
filesize = os.path.getsize(path)
print(path, ' ', filesize)
if os.path.isdir(path):
list_dir(path)
# Be careful not to pick a directory with too many files, or this will eat up system resources
# list_dir('/Users/david.yi/Documents/dev/dig/n_query')
list_dir(os.getcwd())
# Write a routine that searches a given path on disk for files of the specified types
# os.walk() yields 3-tuples of (root, dirnames, filenames)
# The first element is the current directory, a string
# The second is the folders under that directory, a list
# The third is the files under that directory, a list
# fnmatch is Python's built-in filename pattern-matching module
# https://docs.python.org/zh-tw/3/library/fnmatch.html
import fnmatch
import os
images = ['*.jpg', '*.jpeg', '*.png', '*.tif', '*.tiff']
matches = []
# for root, dirnames, filenames in os.walk('/Users/david.yi/Documents/dev/'):
for root, dirnames, filenames in os.walk(os.getcwd()):
for extensions in images:
for filename in fnmatch.filter(filenames, extensions):
matches.append(os.path.join(root, filename))
print(matches)
# import os
# for root, dirnames, filenames in os.walk('C:\\Users\\yimeng.zhang\\Desktop\\Class\\python基础\\python_basic'):
# print(filenames)
import os
# Get the current working directory
s_dir = os.getcwd()
print(s_dir)
# Build the full file name
filename = os.path.join(s_dir, 'files/test.txt')
print(filename)
# Reading and writing files needs care; try-finally is used here so the file is closed even if an error occurs
try:
# Open the file
f = open(filename, 'r')
print(f.read())
finally:
if f:
f.close()
# A simpler way to do the same thing
# The try...finally is no longer needed: with manages the file automatically and guarantees it gets closed
with open(filename, 'r') as f:
print(f.read())
# Read the entire contents of the file
# This is fine for ordinary files, but files that are too large cannot be read this way, as there is not enough memory
with open(filename, 'r') as f:
lines = f.readlines()
print(type(lines))
print(lines)
# Print out the file contents that were read
for i in lines:
print(i)
# An even simpler way to read the file line by line
with open(filename, 'r') as f:
for eachline in f:
print(eachline)
# Write a file
import os
# Get the current working directory
s_dir = os.getcwd()
# Build the full file name
filename= os.path.join(s_dir, 'files/test2.txt')
print(filename)
# Line separator
br = os.linesep
# Write the file
with open(filename, 'w') as f:
f.write('Hello, World!' + br)
f.write('Hello, Shanghai!' + br)
f.write('Hello, CHINA!' + br)
f.close()
with open(filename, 'r') as f:
print(f.read())
import pathlib
list(pathlib.Path('.').glob('**/*.ipynb'))
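# The recursive '**' pattern can also be written with rglob, which adds the '**/' prefix automatically
list(pathlib.Path('.').rglob('*.ipynb'))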
# v1.0.14 edit by Hu Jun, edit by Jia Chunying, #38
# v1.0.17 edit by Hu Jun, #212
# v1.3 edit by David Yi, #272
def find_files(path, exts=None):
Find the files under a path and return a list of files of the specified types
:param:
* path: (string) path to search
* exts: (list) list of file extensions, empty by default
:return:
* files_list: (list) list of matching files
Example usage::
print('--- find_files demo ---')
path1 = '/root/fishbase_issue'
all_files = find_files(path1)
print(all_files)
exts_files = find_files(path1, exts=['.png', '.py'])
print(exts_files)
print('---')
Result::
--- find_files demo ---
['/root/fishbase_issue/test.png', '/root/fishbase_issue/head.jpg', '/root/fishbase_issue/py/man.png']
['/root/fishbase_issue/test.png', '/root/fishbase_issue/py/man.png']
---
files_list = []
for root, dirs, files in os.walk(path):
for name in files:
files_list.append(os.path.join(root, name))
if exts is not None:
return [file for file in files_list if pathlib.Path(file).suffix in exts]
return files_list
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's look at what set() does!
Step2: Let's create a 2nd list and set.
Step3: ...and look at the differences!
Step4: See https
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
nums1 = np.random.randint(1,11, 15)
nums1
set1 = set(nums1)
set1
nums2 = np.random.randint(1,11, 12)
nums2
set2 = set(nums2)
set2
set2.difference(set1)
set1.difference(set2)
# Intersection
set1 & set2
# Union
set1 | set2
# Difference
(set1 - set2) | (set2 - set1)
# Difference method 2
(set1 | set2) - (set1 & set2)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Decimals can also be created from tuples containing a sign flag
Step2: Formatting
Step3: Arithmetic
Step4: Special Value
Step5: Context
Step6: Precision
Step7: Local Context
|
<ASSISTANT_TASK:>
Python Code:
import decimal
fmt = '{0:<25}{1:<25}'
print(fmt.format('Input', 'Output'))
print(fmt.format('-'*25, '-'*25))
#Integer
print(fmt.format(5, decimal.Decimal(5)))
#String
print(fmt.format('3.14', decimal.Decimal('3.14')))
#Float
f = 0.1
print(fmt.format(repr(f), decimal.Decimal(str(f))))
print('{:0.23g}{:<25}'.format(f, str(decimal.Decimal.from_float(f))[:25]))
import decimal
# Tuple
t = (1, (1, 1), -2)
print('Input :', t)
print('Decimal:', decimal.Decimal(t))
import decimal
d = decimal.Decimal(1.1)
print('Precision:')
print('{:.1}'.format(d))
print('{:.2}'.format(d))
print('{:.3}'.format(d))
print('{:.18}'.format(d))
print('\nWidth and precision combined:')
print('{:5.1f} {:5.1g}'.format(d, d))
print('{:5.2f} {:5.2g}'.format(d, d))
print('{:5.3f} {:5.3g}'.format(d, d))
print('\nZero padding:')
print('{:05.1}'.format(d))
print('{:05.2}'.format(d))
print('{:05.3}'.format(d))
import decimal
a = decimal.Decimal('5.1')
b = decimal.Decimal('3.14')
c = 4
d = 3.14
print('a =', repr(a))
print('b =', repr(b))
print('c =', repr(c))
print('d =', repr(d))
print()
print('a + b =', a + b)
print('a - b =', a - b)
print('a * b =', a * b)
print('a / b =', a / b)
print()
print('a + c =', a + c)
print('a - c =', a - c)
print('a * c =', a * c)
print('a / c =', a / c)
print()
print('a + d =', end=' ')
try:
print(a + d)
except TypeError as e:
print(e)
import decimal
for value in ['Infinity', 'NaN', '0']:
print(decimal.Decimal(value), decimal.Decimal('-' + value))
print()
# Math with infinity
print('Infinity + 1:', (decimal.Decimal('Infinity') + 1))
print('-Infinity + 1:', (decimal.Decimal('-Infinity') + 1))
# Print comparing NaN
print(decimal.Decimal('NaN') == decimal.Decimal('Infinity'))
print(decimal.Decimal('NaN') != decimal.Decimal(1))
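# NaN never compares equal, not even to itself:
print(decimal.Decimal('NaN') == decimal.Decimal('NaN'))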
import decimal
context = decimal.getcontext()
print('Emax =', context.Emax)
print('Emin =', context.Emin)
print('capitals =', context.capitals)
print('prec =', context.prec)
print('rounding =', context.rounding)
print('flags =')
for f, v in context.flags.items():
print(' {}: {}'.format(f, v))
print('traps =')
for t, v in context.traps.items():
print(' {}: {}'.format(t, v))
import decimal
d = decimal.Decimal('0.123456')
for i in range(1, 5):
decimal.getcontext().prec = i
print(i, ':', d, d * 1)
import decimal
with decimal.localcontext() as c:
c.prec = 2
print('Local precision:', c.prec)
print('3.14 / 3 =', (decimal.Decimal('3.14') / 3))
print()
print('Default precision:', decimal.getcontext().prec)
print('3.14 / 3 =', (decimal.Decimal('3.14') / 3))
import decimal
# Set up a context with limited precision
c = decimal.getcontext().copy()
c.prec = 3
# Create our constant
pi = c.create_decimal('3.1415')
# The constant value is rounded off
print('PI :', pi)
# The result of using the constant uses the global context
print('RESULT:', decimal.Decimal('2.01') * pi)
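# A related sketch: decimal.localcontext() also accepts an existing context,
# so the limited-precision context above can be activated temporarily.
with decimal.localcontext(c):
    # inside this block the result is rounded to 3 digits
    print('RESULT (local):', decimal.Decimal('2.01') * pi)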
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Correlation metrics
Step2: Let's use FacetGrid from seaborn to plot the data onto four axes, and plot the regression line
Step3: Below is a widget that calculates different summary statistics or distance metrics using Anscombe's quartet. It shows both a table and a barplot of the values. Play around with the different settings and discuss the questions below with your partner.
Step4: Discussion
Step5: Here is a plot of the colors associated with each group
Step6: Below is another widget for you to play with. It lets you set different gene dropout thresholds (starts at 0 dropout), which will randomly remove (additional!) genes from the data.
|
<ASSISTANT_TASK:>
Python Code:
# Alphabetical order for nonstandard python modules is conventional
# We're doing "import superlongname as abbrev" for our laziness --
# -- this way we don't have to type out the whole thing each time.
# Python plotting library
import matplotlib.pyplot as plt
# Dataframes in Python
import pandas as pd
# Statistical plotting library we'll use
import seaborn as sns
# Use the visual style of plots that I prefer and use the
# "notebook" context, which sets the default font and figure sizes
sns.set(style='whitegrid')
# This is necessary to show the plotted figures inside the notebook -- "inline" with the notebook cells
%matplotlib inline
# Import figure code for interactive widgets
import fig_code
# Read the file - notice it is a URL. pandas can read either URLs or files on your computer
anscombe = pd.read_csv("https://github.com/mwaskom/seaborn-data/raw/master/anscombe.csv")
# Say the variable name with no arguments to look at the data
anscombe
# Make a "grid" of plots based on the column name "dataset"
g = sns.FacetGrid(anscombe, col='dataset')
# Make a regression plot (regplot) using 'x' for the x-axis and 'y' for the y-axis
g.map(sns.regplot, 'x', 'y')
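# A quick numeric check of why the quartet is famous: each dataset has nearly
# identical means, variances, and x-y correlation (standard pandas calls).
print(anscombe.groupby('dataset').agg(['mean', 'var']))
print(anscombe.groupby('dataset')[['x', 'y']].corr())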
fig_code.interact_anscombe()
for cluster_id, name in fig_code.cluster_id_to_name.items():
# The 'f' before the string means it's a "format string,"
# which means it will read the variable names that exist
# in your workspace. This is a very helpful and convient thing that was
# just released in Python 3.6! (not available in Python 3.5)
print('---')
print(f'{cluster_id}: {name}')
fig_code.plot_color_legend()
fig_code.plot_dropout_interactive()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Relationship of crystal momentum eigen-vectors with the Hamiltonian
Step2: So, eigenvectors of the crystal momentum are eigenvectors of the Hamiltonian as
Step3: Relationship of Expansion in momentum eigenkets to fft
Step4: Inner Product of crystal momentum eigenket and a state
Step5: fft spectrum
Step6: Comparison
Step7: Therefore, expanding a state in momentum eigenkets is equivalent to a DFT for a periodic lattice.
|
<ASSISTANT_TASK:>
Python Code:
from qutip import *
import matplotlib.pyplot as plt
import numpy as np
periodic_atom_chain8 = Lattice1d(num_cell=8, boundary = "periodic")
k8 = periodic_atom_chain8.k()
[ks8, pw8] = k8.eigenstates()
ks8 # In units of 2*pi/(L*a), if ks[1] = 1, the wavevector/crystal-momentum of the
# corresponding eigen-vector of k is a plane wave with wavelength of L*a, the length
# of the crystal. ks[2] = 2 has a corresponding plane wave eigenvector with
# wavelength of L*a/2, half the length of the crystal.
pw8[0] # ks[0] = 0 has an eigenvector that has a frequency of 0 and wavelength of
# infinity; the eigen-vector is normalized to unity, meaning the sum of the
# squared magnitudes of its elements is 1
# The eigenvectors are better demonstrated with a latice of longer length
# Here we plot the second and third eigen-vectors(real and imaginary parts) of the first
# 3 eigenvectors of the crystal momentum operator for a 1d lattice with 64 cells
periodic_atom_chain64 = Lattice1d(num_cell=64, boundary = "periodic")
k64 = periodic_atom_chain64.k()
[ks64, pw64] = k64.eigenstates()
plt.plot(np.real(pw64[1]),'r')
plt.plot(np.imag(pw64[1]),'b')
plt.plot(np.abs(pw64[1]),'g')
plt.xlabel('position')
plt.ylabel('crystal momentum eigen vector')
plt.show()
plt.close()
k64 = periodic_atom_chain64.k()
[ks64, pw64] = k64.eigenstates()
plt.plot(np.real(pw64[2]),'r')
plt.plot(np.imag(pw64[2]),'b')
plt.plot(np.abs(pw64[2]),'g')
plt.xlabel('position')
plt.ylabel('crystal momentum eigen vector')
plt.show()
plt.close()
Hamt8 = periodic_atom_chain8.Hamiltonian()
pw8_M = np.array([pw8[0].full(),pw8[1].full(),pw8[2].full(),pw8[3].full(),
pw8[4].full(),pw8[5].full(),pw8[6].full(),pw8[7].full()])
pw8_M = np.squeeze(pw8_M, axis=2) # changing shape from (8,8,1) to (8,8)
# pw8_M is a matrix of 8 columns, each of which are the eigenvectors
# of the crystal momentum operator
pw8_M = Qobj(pw8_M)
pw8_M * Hamt8 * pw8_M.dag()
# A check that the Hamiltonian and the crystal momentum operator do indeed commute
Hamt8 * k8 - k8 * Hamt8
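# Equivalently, QuTiP's built-in commutator helper returns H*k - k*H directly;
# it should be the zero operator here.
from qutip import commutator
commutator(Hamt8, k8)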
xs8 = np.arange(0,8)
Gaussian8_sum_to_1 = 1/np.sqrt(2*np.pi* 0.5 **2) * np.exp(-(xs8 - 3)**2/2/0.5**2)
Gaussian_state8 = np.sqrt(Gaussian8_sum_to_1) # A normalized Gaussian state
plt.plot(Gaussian_state8,'r')
plt.xlabel('position')
plt.ylabel('Probability Amplitude')
plt.show()
plt.close()
k_exps = (pw8_M.dag() * Qobj(Gaussian_state8)).full().flatten()  # ket vector of inner products with crystal
# momentum eigenkets (the numpy vector is wrapped in a Qobj so the product is
# well defined, then flattened back to a numpy array for the plots below)
k_exps = k_exps / np.sqrt(np.sum(np.multiply(k_exps, np.conj(k_exps))))  # normalization
dft_Gaussian_state = np.fft.fft(Gaussian_state8)
shf_dft_Gaussian_state = np.roll(dft_Gaussian_state,4)
sq_dft_Gaussian_state = np.multiply(shf_dft_Gaussian_state, np.conj(shf_dft_Gaussian_state))
norml_dft_Gaussian_state = shf_dft_Gaussian_state / np.sqrt(np.sum(sq_dft_Gaussian_state))
plt.plot(np.abs(norml_dft_Gaussian_state),'b')
plt.plot(np.abs(k_exps),'ro')
plt.xlabel('frequency number')
plt.ylabel('frequency Amplitude')
plt.show()
plt.close()
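# Numerical comparison of the two spectra (a sketch; this prints True only if
# the eigenvalue ordering of k lines up with the rolled DFT bins):
print(np.allclose(np.abs(k_exps), np.abs(norml_dft_Gaussian_state)))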
import qutip
qutip.about()
qutip.cite()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, we set some parameters.
Step2: Load the MEG data
Step3: Estimate the background noise covariance from the baseline period
Step4: Generate sinusoids in two spatially distant labels
Step5: Find the center vertices in source space of each label
Step6: Create source-space data with known signals
Step7: Plot original signals
Step8: Simulate sensor-space signals
Step9: Plot the point-spread of corrupted signal
|
<ASSISTANT_TASK:>
Python Code:
import os.path as op
import numpy as np
import mne
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, apply_inverse
from mne.simulation import simulate_stc, simulate_evoked
seed = 42
# parameters for inverse method
method = 'sLORETA'
snr = 3.
lambda2 = 1.0 / snr ** 2
# signal simulation parameters
# do not add extra noise to the known signals
nave = np.inf
T = 100
times = np.linspace(0, 1, T)
dt = times[1] - times[0]
# Paths to MEG data
data_path = sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
fname_fwd = op.join(data_path, 'MEG', 'sample',
'sample_audvis-meg-oct-6-fwd.fif')
fname_inv = op.join(data_path, 'MEG', 'sample',
'sample_audvis-meg-oct-6-meg-fixed-inv.fif')
fname_evoked = op.join(data_path, 'MEG', 'sample',
'sample_audvis-ave.fif')
fwd = mne.read_forward_solution(fname_fwd)
fwd = mne.convert_forward_solution(fwd, force_fixed=True, surf_ori=True,
use_cps=False)
fwd['info']['bads'] = []
inv_op = read_inverse_operator(fname_inv)
raw = mne.io.read_raw_fif(op.join(data_path, 'MEG', 'sample',
'sample_audvis_raw.fif'))
raw.set_eeg_reference(projection=True)
events = mne.find_events(raw)
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2}
epochs = mne.Epochs(raw, events, event_id, baseline=(None, 0), preload=True)
epochs.info['bads'] = []
evoked = epochs.average()
labels = mne.read_labels_from_annot('sample', subjects_dir=subjects_dir)
label_names = [l.name for l in labels]
n_labels = len(labels)
cov = mne.compute_covariance(epochs, tmin=None, tmax=0.)
# The known signal is all zero-s off of the two labels of interest
signal = np.zeros((n_labels, T))
idx = label_names.index('inferiorparietal-lh')
signal[idx, :] = 1e-7 * np.sin(5 * 2 * np.pi * times)
idx = label_names.index('rostralmiddlefrontal-rh')
signal[idx, :] = 1e-7 * np.sin(7 * 2 * np.pi * times)
hemi_to_ind = {'lh': 0, 'rh': 1}
for i, label in enumerate(labels):
# The `center_of_mass` function needs labels to have values.
labels[i].values.fill(1.)
# Restrict the eligible vertices to be those on the surface under
# consideration and within the label.
surf_vertices = fwd['src'][hemi_to_ind[label.hemi]]['vertno']
restrict_verts = np.intersect1d(surf_vertices, label.vertices)
com = labels[i].center_of_mass(subject='sample',
subjects_dir=subjects_dir,
restrict_vertices=restrict_verts,
surf='white')
# Convert the center of vertex index from surface vertex list to Label's
# vertex list.
cent_idx = np.where(label.vertices == com)[0][0]
# Create a mask with 1 at center vertex and zeros elsewhere.
labels[i].values.fill(0.)
labels[i].values[cent_idx] = 1.
stc_gen = simulate_stc(fwd['src'], labels, signal, times[0], dt,
value_fun=lambda x: x)
kwargs = dict(subjects_dir=subjects_dir, hemi='split', smoothing_steps=4,
time_unit='s', initial_time=0.05, size=1200,
views=['lat', 'med'])
clim = dict(kind='value', pos_lims=[1e-9, 1e-8, 1e-7])
brain_gen = stc_gen.plot(clim=clim, **kwargs)
evoked_gen = simulate_evoked(fwd, stc_gen, evoked.info, cov, nave,
random_state=seed)
# Map the simulated sensor-space data to source-space using the inverse
# operator.
stc_inv = apply_inverse(evoked_gen, inv_op, lambda2, method=method)
brain_inv = stc_inv.plot(**kwargs)
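# A rough way to quantify the point spread (a sketch -- extract_label_time_course
# is a standard MNE function, but check its arguments against your MNE version):
# extract the recovered time course of every label and compare the two seeded
# labels against the known sinusoids in `signal`.
label_tc = mne.extract_label_time_course(stc_inv, labels, inv_op['src'],
                                         mode='mean_flip')
print(label_tc.shape)  # (n_labels, n_times)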
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: By passing an iterator object via the input option at instance creation, you can feed data to the shell script's standard input. The shellOneLiner instance, as well as the object given to the input option, is by default an iterator over lists [*2].
Step2: Data
Step3: Here we use the Tukubai command dmerge.
Step4: Here we use the Tukubai command selr.
Step5: Here we use the standard Unix command grep.
Step6: Here we use the Tukubai command self.
Step7: Here we use the Tukubai command delf.
Step8: Here we use the standard Unix command awk.
Step9: The first selr command extracts only the records that arrived at point A. The second through fourth commands then pick out the records that arrived at midnight (00:00). Finally, the fifth command extracts the records whose arrival value is 40 or less.
Step10: Here we use a new Tukubai command, cjoin2.
Step11: Here we use a new Tukubai command, count.
Step12: Here we use the new Tukubai commands psort and getlast.
Step13: Here we use a new Tukubai command, sm2.
Step14: Here we use a new Tukubai command, lcalc.
Step15: The shellOneLiner module accepts iterator-type Python data as input.
|
<ASSISTANT_TASK:>
Python Code:
ol = shellOneLiner.ShellOneLiner('echo Hello; LANG=C date; cat datafile')
head(ol,5)
l = map((lambda n: ['%s' % str(n)]),range(80,100))
print l
di = list2iter(l)
ol = shellOneLiner.ShellOneLiner('echo Hello; LANG=C date; head', input=di)
head(ol,5)
ol = shellOneLiner.ShellOneLiner('dmerge key=1 set_A set_B set_C')
head(ol, 3)
ol = shellOneLiner.ShellOneLiner('selr 2 A set')
head(ol, 3)
ol = shellOneLiner.ShellOneLiner('grep \'^20150102\' set')
head(ol, 3)
ol = shellOneLiner.ShellOneLiner('self 0 1.9.2 set')
head(ol, 3)
ol = shellOneLiner.ShellOneLiner('self 0 1.9.2 set | selr 4 00 | delf 4')
head(ol, 3)
ol = shellOneLiner.ShellOneLiner('awk \'$3>80\' set')
head(ol, 3)
ol = shellOneLiner.ShellOneLiner('selr 2 A set | self 0 1.9.2 | selr 4 00 | delf 4 | awk \'$3<40\' ')
head(ol, 3)
ol = shellOneLiner.ShellOneLiner('self 0 1.9.2 set_A | cjoin2 key=4 weight')
head(ol, 3)
ol = shellOneLiner.ShellOneLiner('self 1.1.8 2 set_A | count key=1@2')
head(ol, 3)
ol = shellOneLiner.ShellOneLiner('self 1.1.8 2 3 set_A | psort ref=1@2 key=3n | getlast key=1@2')
head(ol, 3)
ol = shellOneLiner.ShellOneLiner('self 1.1.8 2 3 set_A | sm2 key=1/2 val=3')
head(ol, 3)
ol = shellOneLiner.ShellOneLiner(
'self 1.1.8 2 3 1.9.2 set_A | cjoin2 key=4 weight | lcalc \'$1, $2, $3 * $6\' | sm2 key=1/2 val=3')
head(ol, 3)
l = map((lambda n: ['%s' % str(n)]),range(80,100))
print l
di = list2iter(l)
ol = shellOneLiner.ShellOneLiner(
'cjoin0 key=3 - set_A',
input=di )
head(ol,3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, we load the first input image and show its origin, spacing, etc.
Step2: We get the numpy array for the image and visualize it
Step3: Let's do the same for the second image
Step4: Let's check if the spacing and direction are compatible
Step5: We see that im0 and im1 are in fact compatible, but the origin of im0 is at (200, 200).
Step6: Let's execute that command (via the subprocess module)
Step7: ... and check if the output image was actually written to disk
Step8: We are now ready to visualize the result
Step9: We see that the output image is larger by 212 pixels in both dimensions, since the second image's origin was at (200, 200) and the
Step10: Merging - Example 3
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
import numpy
# Path for TubeTK libs
#Values takend from TubeTK launcher
sys.path.append("C:/src/TubeTK_Python_ITK/TubeTK-build/lib/")
sys.path.append("C:/src/TubeTK_Python_ITK/TubeTK-build/lib/Release")
# Setting TubeTK Build Directory
TubeTK_BUILD_DIR=None
if 'TubeTK_BUILD_DIR' in os.environ:
TubeTK_BUILD_DIR = os.environ['TubeTK_BUILD_DIR']
else:
print('TubeTK_BUILD_DIR not found!')
print(' Set environment variable')
os.environ["TubeTK_BUILD_DIR"] = "C:/src/TubeTK_Python_ITK/TubeTK-build"
TubeTK_BUILD_DIR = os.environ["TubeTK_BUILD_DIR"]
#sys.exit( 1 )
if not os.path.exists(TubeTK_BUILD_DIR):
print('TubeTK_BUILD_DIR set by directory not found!')
print(' TubeTK_BUILD_DIR = ' + TubeTK_BUILD_DIR )
sys.exit(1)
try:
import itk
except:
ITK_BUILD_DIR = None
if 'ITK_BUILD_DIR' in os.environ:
ITK_BUILD_DIR = os.environ['ITK_BUILD_DIR']
else:
print('ITK_BUILD_DIR not found!')
print(' Set environment variable')
os.environ["ITK_BUILD_DIR"] = "C:/src/TubeTK_Python_R/ITK-build"
ITK_BUILD_DIR = os.environ["ITK_BUILD_DIR"]
#sys.exit( 1 )
if not os.path.exists(ITK_BUILD_DIR):
print('ITK_BUILD_DIR set by directory not found!')
print(' ITK_BUIDL_DIR = ' + ITK_BUILD_DIR )
sys.exit(1)
# Append ITK libs
sys.path.append("C:/src/TubeTK_Python_ITK/ITK-build/Wrapping/Generators/Python/Release")
sys.path.append("C:/src/TubeTK_Python_ITK/ITK-build/lib/Release")
sys.path.append("C:/src/TubeTK_Python_ITK/ITK-build/lib")
# Append TubeTK libs
sys.path.append("C:/src/TubeTK_Python_ITK/TubeTK-build/ITKModules/TubeTKITK-build/Wrapping/Generators/Python/Release")
import itk
from itk import TubeTKITK as itktube
import matplotlib.pyplot as plt
from matplotlib import cm
%matplotlib inline
input_image1 = os.path.join(TubeTK_BUILD_DIR, 'MIDAS_Data/ES0015_Large.mha')
reader0 = itk.ImageFileReader.New(FileName=input_image1)
reader0.Update()
im0=reader0.GetOutput()
print("Origin:")
print im0.GetOrigin()
print("Spacing:")
print im0.GetSpacing()
print("Direction:")
print im0.GetDirection()
ImageType=itk.Image[itk.F,2]
im_np0 = itk.PyBuffer[ImageType].GetArrayFromImage(im0)
plt.imshow(im_np0, cm.gray)
input_image0 = os.path.join(TubeTK_BUILD_DIR, 'MIDAS_Data/ES0015_Large_Wo_offset.mha')
print("Image file path:%s"%input_image0)
reader1 = itk.ImageFileReader.New(FileName=input_image0)
reader1.Update()
im1=reader1.GetOutput()
print im1.GetOrigin()
print im1.GetSpacing()
print im1.GetDirection()
im_np1 = itk.PyBuffer[ImageType].GetArrayFromImage(im1)
plt.imshow(im_np1, cm.gray)
im0.GetSpacing() == im1.GetSpacing() and im0.GetDirection() == im1.GetDirection()
output_image = os.path.join(TubeTK_BUILD_DIR, 'Temporary/Python.MergeAdjacentImages-Ex1.mha')
cmd = [os.path.join(TubeTK_BUILD_DIR, 'bin/Release/MergeAdjacentImages'),
'-i', '0', # Number of iterations ... here i=0, which means no registration,
input_image0, # First image to merge
input_image1, # Second image to merge
output_image
]
import subprocess
subprocess.call(cmd)
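# For stricter error handling, subprocess.check_call raises CalledProcessError
# on a non-zero exit code instead of silently returning it:
# subprocess.check_call(cmd)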
print os.path.exists(output_image)
output_reader = itk.ImageFileReader.New(FileName=output_image)
output_reader.Update()
out_im=output_reader.GetOutput()
print out_im.GetOrigin()
print out_im.GetSpacing()
print out_im.GetLargestPossibleRegion().GetSize()
plt.imshow(itk.PyBuffer[ImageType].GetArrayFromImage(out_im), cm.gray)
output_image = os.path.join(TubeTK_BUILD_DIR, 'Temporary/Python.MergeAdjacentImages-Ex2.mha')
cmd1 = [os.path.join(TubeTK_BUILD_DIR, 'bin/Release/MergeAdjacentImages'),
'-i','0', # Number of iterations ... here i=0, which means no registration,
'-b','50,50', # This adds a white border around the second image (50 pixel each side)
input_image0,
input_image1,
output_image
]
subprocess.call(cmd1)
reader=itk.ImageFileReader.New(FileName=output_image)
reader.Update()
plt.imshow(itk.PyBuffer[ImageType].GetArrayFromImage(reader.GetOutput()), cm.gray)
output_image = os.path.join(TubeTK_BUILD_DIR, 'Temporary/Python.MergeAdjacentImages-Ex3.mha')
cmd = [os.path.join(TubeTK_BUILD_DIR, 'bin/Release/MergeAdjacentImages'),
input_image1,
input_image0,
output_image
]
subprocess.call(cmd)
reader=itk.ImageFileReader.New(FileName=output_image)
reader.Update()
plt.imshow(itk.PyBuffer[ImageType].GetArrayFromImage(reader.GetOutput()), cm.gray)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Combining Streams with Binary Operators
Step2: Examples of zip_stream and zip_map
Step3: Defining Aggregating Functions on Streams
Step4: Merging Windows
Step5: Asynchronous Merges
Step6: blend
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append("../")
from IoTPy.core.stream import Stream, run
from IoTPy.agent_types.op import map_element
from IoTPy.helper_functions.recent_values import recent_values
w = Stream('w')
x = Stream('x')
y = Stream('y')
z = (x+y)*w
# z[n] = (x[n] + y[n])*w[n]
w.extend([1, 10, 100])
x.extend(list(range(10, 20, 1)))
y.extend(list(range(5)))
run()
print ('recent_values of z are:')
print(recent_values(z))
from IoTPy.agent_types.basics import fmap_e
# Decorate terminating function to specify non-terminating agent.
@fmap_e
def f(v): return v+10
@fmap_e
def g(w): return w * 2
w = Stream('w')
x = Stream('x')
y = Stream('y')
z = f(x+y)*g(w)
# z[n] = f(x[n]+y[n])*g(w[n])
w.extend([1, 10, 100])
x.extend(list(range(10, 20, 1)))
y.extend(list(range(5)))
run()
print ('recent_values of z are:')
print(recent_values(z))
from IoTPy.agent_types.merge import zip_stream
def example_of_zip_stream():
x = Stream('x')
y = Stream('y')
z = Stream('z')
zip_stream(in_streams=[x,y], out_stream=z)
x.extend(['A', 'B', 'C'])
y.extend(list(range(100, 1000, 100)))
run()
print ('recent values of x are')
print (recent_values(x))
print ('recent values of y are')
print (recent_values(y))
print ('recent values of z are')
print (recent_values(z))
example_of_zip_stream()
from IoTPy.agent_types.basics import zip_map
def example_of_zip_map():
x = Stream('x')
y = Stream('y')
z = Stream('z')
zip_map(func=sum, in_streams=[x,y], out_stream=z)
x.extend(list(range(5)))
y.extend(list(range(100, 1000, 100)))
run()
print ('recent values of x are')
print (recent_values(x))
print ('recent values of y are')
print (recent_values(y))
print ('recent values of z are')
print (recent_values(z))
example_of_zip_map()
import numpy as np
def merge_function(func, streams):
out_stream = Stream()
zip_map(func, streams, out_stream)
return out_stream
def sum_streams(streams): return merge_function(sum, streams)
def median_streams(streams): return merge_function(np.median, streams)
w = Stream('w')
x = Stream('x')
y = Stream('y')
sums = sum_streams([w,x,y])
medians = median_streams([w,x,y])
w.extend([4, 8, 12, 16])
x.extend([0, 16, -16])
y.extend([2, 9, 28, 81, 243])
run()
print ('recent values of sum of streams are')
print (recent_values(sums))
print ('recent values of medians of streams are')
print (recent_values(medians))
from IoTPy.agent_types.merge import merge_window
def f(two_windows):
first_window, second_window = two_windows
return max(first_window) - min(second_window)
x = Stream('x')
y = Stream('y')
z = Stream('z')
merge_window(func=f, in_streams=[x,y], out_stream=z, window_size=2, step_size=2)
x.extend(list(range(4, 10, 1)))
y.extend(list(range(0, 40, 4)))
run()
print ('recent values of z are')
print (recent_values(z))
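# A variation using the same merge_window API as above: overlapping windows,
# with window_size=3 and step_size=1, give a sliding comparison rather than
# the non-overlapping windows shown above.
a = Stream('a')
b = Stream('b')
c = Stream('c')
merge_window(func=f, in_streams=[a, b], out_stream=c, window_size=3, step_size=1)
a.extend(list(range(4, 10, 1)))
b.extend(list(range(0, 40, 4)))
run()
print ('recent values of c are')
print (recent_values(c))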
from IoTPy.agent_types.merge import merge_asynch
Fahrenheit = Stream('Fahrenheit')
Celsius = Stream('Celsius')
Kelvin = Stream('Kelvin')
def convert_to_Kelvin(index_and_temperature):
index, temperature = index_and_temperature
result = 273 + (temperature if index == 1
else (temperature - 32.0)/1.8)
return result
merge_asynch(func=convert_to_Kelvin,
in_streams=[Fahrenheit, Celsius], out_stream=Kelvin)
Fahrenheit.extend([32, 50])
Celsius.extend([0.0, 10.0])
run()
Fahrenheit.extend([14.0])
Celsius.extend([-273.0, 100.0])
run()
print ('Temperatures in Kelvin are')
print (recent_values(Kelvin))
from IoTPy.agent_types.merge import blend
def test_blend():
x = Stream('x')
y = Stream('y')
z = Stream('z')
blend(func=lambda v: 2*v, in_streams=[x,y], out_stream=z)
x.extend(list(range(3)))
run()
print (recent_values(z))
x.extend(list(range(3, 5, 1)))
run()
print (recent_values(z))
y.extend(list(range(100, 130, 10)))
run()
print (recent_values(z))
x.extend(list(range(5, 10, 1)))
run()
print (recent_values(z))
test_blend()
from IoTPy.core.stream import StreamArray
from IoTPy.agent_types.merge import merge_list
def test_merge_list_with_stream_array():
x = StreamArray()
y = StreamArray()
z = StreamArray(dtype='bool')
# Function that is encapsulated
def f(two_arrays):
x_array, y_array = two_arrays
return x_array > y_array
# Create agent
merge_list(f, [x,y], z)
x.extend(np.array([3.0, 5.0, 7.0, 11.0, 30.0]))
y.extend(np.array([4.0, 3.0, 10.0, 20.0, 25.0, 40.0]))
run()
print('recent values of z are:')
print (recent_values(z))
test_merge_list_with_stream_array()
from IoTPy.agent_types.merge import timed_zip
def test_timed_zip():
x = Stream('x')
y = Stream('y')
z = Stream('z')
# timed_zip_agent(in_streams=[x,y], out_stream=z, name='a')
timed_zip(in_streams=[x, y], out_stream=z)
x.extend([(1, "A"), (5, "B"), (9, "C"), (12, "D"), (13, "E")])
y.extend([(5, "a"), (7, "b"), (9, "c"), (12, "d"), (14, 'e'), (16, 'f')])
run()
print ('recent values of z are')
print (recent_values(z))
test_timed_zip()
from IoTPy.agent_types.merge import timed_mix
def test_timed_mix_agents():
x = Stream('x')
y = Stream('y')
z = Stream('z')
timed_mix([x,y], z)
x.append((0, 'a'))
run()
# time=0, value='a', in_stream index is 0
assert recent_values(z) == [(0, (0, 'a'))]
x.append((1, 'b'))
run()
assert recent_values(z) == [(0, (0, 'a')), (1, (0, 'b'))]
y.append((2, 'A'))
run()
assert recent_values(z) == \
[(0, (0, 'a')), (1, (0, 'b')), (2, (1, 'A'))]
y.append((5, 'B'))
run()
assert recent_values(z) == \
[(0, (0, 'a')), (1, (0, 'b')), (2, (1, 'A')), (5, (1, 'B'))]
x.append((3, 'c'))
run()
assert recent_values(z) == \
[(0, (0, 'a')), (1, (0, 'b')), (2, (1, 'A')), (5, (1, 'B'))]
x.append((4, 'd'))
run()
assert recent_values(z) == \
[(0, (0, 'a')), (1, (0, 'b')), (2, (1, 'A')), (5, (1, 'B'))]
x.append((8, 'e'))
run()
assert recent_values(z) == \
[(0, (0, 'a')), (1, (0, 'b')), (2, (1, 'A')), (5, (1, 'B')), (8, (0, 'e'))]
print (recent_values(z))
test_timed_mix_agents()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean_Variance_Image.png" style="height
Step6: Checkpoint
Step7: Problem 2
Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height
Step9: Test
|
<ASSISTANT_TASK:>
Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
def download(url, file):
    """
    Download file from <url>
    :param url: URL to file
    :param file: Local file path
    """
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
    """
    Uncompress features and labels from a zip file
    :param file: The zip file to extract the data from
    """
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
# TODO: Implement Min-Max scaling for grayscale image data
a = 0.1
b = 0.9
grayscale_min = 0
grayscale_max = 255
return a + ( ( (image_data - grayscale_min)*(b - a) )/( grayscale_max - grayscale_min ) )
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
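# A quick peek at the encoding: each label is now a one-hot vector of length
# 10, one slot per letter A-J.
print(train_labels[0])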
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 5
learning_rate = 0.2
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Working with Pmfs
Step2: A Pmf is a map from possible outcomes to their probabilities.
Step3: Initially the probabilities don't add up to 1.
Step4: Normalize adds up the probabilities and divides through. The return value is the total probability before normalizing.
Step5: Now the Pmf is normalized.
Step6: And we can compute its mean (which only works if it's normalized).
Step7: Random chooses a random value from the Pmf.
Step8: thinkplot provides methods for plotting Pmfs in a few different styles.
Step9: Exercise 1
Step10: Exercise 2
Step11: The cookie problem
Step12: Update each hypothesis with the likelihood of the data (a vanilla cookie).
Step13: Print the posterior probabilities.
Step14: Exercise 3
Step15: Exercise 4
Step16: The dice problem
Step17: Exercise 5
Step18: Exercise 6
Step19: Now we can create a Dice object and update it.
Step20: If we get more data, we can perform more updates.
Step21: Here are the results.
Step22: The German tank problem
Step23: Here are the posterior probabilities after seeing Tank #37.
Step24: Exercise 7
Step26: The Euro problem
Step27: We'll start with a uniform distribution from 0 to 100.
Step28: Now we can update with a single heads
Step29: Another heads
Step30: And a tails
Step31: Starting over, here's what it looks like after 7 heads and 3 tails.
Step32: The maximum posterior probability is 70%, which is the observed proportion.
Step33: The posterior mean is about 56%
Step34: So is the value with Maximum a Posteriori Probability (MAP).
Step35: The posterior credible interval has a 90% chance of containing the true value (provided that the prior distribution truly represents our background knowledge).
Step37: Swamping the prior
Step38: And here's what it looks like
Step39: Exercise 9
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
from thinkbayes2 import Pmf, Suite
import thinkplot
d6 = Pmf()
for x in [1,2,3,4,5,6]:
d6[x] = 1
d6.Print()
d6.Normalize()
d6.Print()
d6.Mean()
d6.Random()
thinkplot.Hist(d6)
# Solution goes here
# Solution goes here
cookie = Pmf(['Bowl 1', 'Bowl 2'])
cookie.Print()
cookie['Bowl 1'] *= 0.75
cookie['Bowl 2'] *= 0.5
cookie.Normalize()
cookie.Print()
# Solution goes here
# Solution goes here
pmf = Pmf([4, 6, 8, 12])
pmf.Print()
# Solution goes here
class Dice(Suite):
# hypo is the number of sides on the die
# data is the outcome
def Likelihood(self, data, hypo):
return 1
# Solution goes here
dice = Dice([4, 6, 8, 12])
dice.Update(6)
dice.Print()
for roll in [8, 7, 7, 5, 4]:
dice.Update(roll)
dice.Print()
class Tank(Suite):
# hypo is the number of tanks
# data is an observed serial number
def Likelihood(self, data, hypo):
if data > hypo:
return 0
else:
return 1 / hypo
tank = Tank(range(100))
tank.Update(37)
thinkplot.Pdf(tank)
tank.Mean()
# Solution goes here
class Euro(Suite):
def Likelihood(self, data, hypo):
        """
        hypo is the prob of heads (0-100)
        data is a string, either 'H' or 'T'
        """
return 1
# Solution goes here
euro = Euro(range(101))
thinkplot.Pdf(euro)
euro.Update('H')
thinkplot.Pdf(euro)
euro.Update('H')
thinkplot.Pdf(euro)
euro.Update('T')
thinkplot.Pdf(euro)
euro = Euro(range(101))
for outcome in 'HHHHHHHTTT':
euro.Update(outcome)
thinkplot.Pdf(euro)
euro.MaximumLikelihood()
euro = Euro(range(101))
evidence = 'H' * 140 + 'T' * 110
for outcome in evidence:
euro.Update(outcome)
thinkplot.Pdf(euro)
euro.Mean()
euro.MAP()
euro.CredibleInterval(90)
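# The interval endpoints can also be read off directly as posterior percentiles
# (thinkbayes2's Pmf provides a Percentile method):
euro.Percentile(5), euro.Percentile(95)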
def TrianglePrior():
    """Makes a Suite with a triangular prior."""
suite = Euro(label='triangle')
for x in range(0, 51):
suite[x] = x
for x in range(51, 101):
suite[x] = 100-x
suite.Normalize()
return suite
euro1 = Euro(range(101), label='uniform')
euro2 = TrianglePrior()
thinkplot.Pdfs([euro1, euro2])
thinkplot.Config(title='Priors')
# Solution goes here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Session 3
Step2: <a name="assignment-synopsis"></a>
Step3: We'll now make use of something I've written to help us store this data. It provides some interfaces for generating "batches" of data, as well as splitting the data into training, validation, and testing sets. To use it, we pass in the data and optionally its labels. If we don't have labels, we just pass in the data. In the second half of this notebook, we'll explore using a dataset's labels as well.
Step4: It allows us to easily find the mean
Step5: Or the deviation
Step6: Recall we can calculate the mean of the standard deviation across each color channel
Step7: All the input data we gave as input to our Datasets object, previously stored in Xs is now stored in a variable as part of our ds Datasets object, X
Step8: It takes a parameter, split at the time of creation, which allows us to create train/valid/test sets. By default, this is set to [1.0, 0.0, 0.0], which means to take all the data in the train set, and nothing in the validation and testing sets. We can access "batch generators" of each of these sets by saying
Step9: This returns X and y as a tuple. Since we're not using labels, we'll just ignore this. The next_batch method takes a parameter, batch_size, which we'll set appropriately to our batch size. Notice it runs for exactly 10 iterations to iterate over our 100 examples, then the loop exits. The order in which it iterates over the 100 examples is randomized each time you iterate.
Step10: We're going to now work on creating an autoencoder. To start, we'll only use linear connections, like in the last assignment. This means, we need a 2-dimensional input
Step11: Let's create a list of how many neurons we want in each layer. This should be for just one half of the network, the encoder only. It should start large, then get smaller and smaller. We're also going to try an encode our dataset to an inner layer of just 2 values. So from our number of features, we'll go all the way down to expressing that image by just 2 values. Try a small network to begin with, then explore deeper networks
Step12: Now create a placeholder just like in the last session in the tensorflow graph that will be able to get any number (None) of n_features inputs.
Step13: Now complete the function encode below. This takes as input our input placeholder, X, our list of dimensions, and an activation function, e.g. tf.nn.relu or tf.nn.tanh, to apply to each layer's output, and creates a series of fully connected layers. This works just like in the last session! We multiply our input, add a bias, then apply a non-linearity. Instead of having 20 neurons in each layer, we're going to use our dimensions list to tell us how many neurons we want in each layer.
Step14: We now have a function for encoding an input X. Take note of which activation function you use as this will be important for the behavior of the latent encoding, z, later on.
Step15: Let's take a look at the graph
Step16: So we've created a few layers, encoding our input X all the way down to 2 values in the tensor z. We do this by multiplying our input X by a set of matrices shaped as
Step17: Resulting in a layer which is shaped as
Step18: Building the Decoder
Step19: Now we'll build the decoder. I've shown you how to do this. Read through the code to fully understand what it is doing
Step20: Let's take a look at the new operations we've just added. They will all be prefixed by "decoder" so we can use list comprehension to help us with this
Step21: And let's take a look at the output of the autoencoder
Step22: Great! So we should have a synthesized version of our input placeholder, X, inside of Y. This Y is the result of many matrix multiplications, first a series of multiplications in our encoder all the way down to 2 dimensions, and then back to the original dimensions through our decoder. Let's now create a pixel-to-pixel measure of error. This should measure the difference in our synthesized output, Y, and our input, X. You can use the $l_1$ or $l_2$ norm, just like in assignment 2. If you don't remember, go back to homework 2 where we calculated the cost function and try the same idea here.
Step23: Now for the standard training code. We'll pass our cost to an optimizer, and then use mini batch gradient descent to optimize our network's parameters. We just have to be careful to make sure we're preprocessing our input and feed it in the right shape, a 2-dimensional matrix of [batch_size, n_features] in dimensions.
Step24: Below is the training code for our autoencoder. Please go through each line of code to make sure you understand what is happening, and fill in the missing pieces. This will take a while. On my machine, it takes about 15 minutes. If you're impatient, you can "Interrupt" the kernel by going to the Kernel menu above, and continue with the notebook. Though, the longer you leave this to train, the better the result will be.
Step25: Note that if you run into "InternalError" or "ResourceExhaustedError", it is likely that you have run out of memory! Try a smaller network! For instance, restart the notebook's kernel, and then go back to defining encoder_dimensions = [256, 2] instead. If you run into memory problems below, you can also try changing the batch_size to 50.
Step26: Let's take a look at the final reconstruction
Step27: <a name="visualize-the-embedding"></a>
Step28: Recall that this layer has 2 neurons
Step29: Let's see what the activations look like for our 100 images as a scatter plot.
Step30: If you view this plot over time, and let the process train longer, you will see something similar to the visualization here on the right
Step31: To do this, we can use scipy and an algorithm for solving this assignment problem known as the Hungarian algorithm. With a few points, this algorithm runs pretty fast. But be careful if you have many more points, e.g. > 1000, as it is not a very efficient algorithm!
Step32: The result tells us the matching indexes from our autoencoder embedding of 2 dimensions, to our idealized grid
Step33: In other words, this algorithm has just found the best arrangement of our previous zs as a grid. We can now plot our images using the order of our assignment problem to see what it looks like
Step34: <a name="2d-latent-manifold"></a>
Step35: Now calculate the reconstructed images using our new zs. You'll want to start from the beginning of the decoder! That is the z variable! Then calculate the Y given our synthetic values for z stored in zs.
Step36: And now we can plot the reconstructed montage representing our latent space
Step37: <a name="part-two---general-autoencoder-framework"></a>
Step38: Included in the vae.py module is the train_vae function. This will take a list of file paths, and train an autoencoder with the provided options. This will spit out a bunch of images of the reconstruction and latent manifold created by the encoder/variational encoder. Feel free to read through the code, as it is documented.
Step39: I've also included three examples of how to use the VAE(...) and train_vae(...) functions. First look at the one using MNIST. Then look at the other two
Step40: <a name="part-three---deep-audio-classification-network"></a>
Step41: Inside the dst directory, we now have folders for music and speech. Let's get the list of all the wav files for music and speech
Step42: We now need to load each file. We can use the scipy.io.wavfile module to load the audio as a signal.
Step43: Now, instead of using the raw audio signal, we're going to use the Discrete Fourier Transform to represent our audio as matched filters of different sinuoids. Unfortunately, this is a class on Tensorflow and I can't get into Digital Signal Processing basics. If you want to know more about this topic, I highly encourage you to take this course taught by the legendary Perry Cook and Julius Smith
Step44: What we're seeing are the features of the audio (in columns) over time (in rows). We can see this a bit better by taking the logarithm of the magnitudes, converting it to a pseudo-decibel scale. This is more similar to the logarithmic perception of loudness we have. Let's visualize this below, and I'll transpose the matrix just for display purposes
Step45: We could take just a single row (or column in the second plot of the magnitudes just above, as we transposed it in that plot) as an input to a neural network. However, that represents only about an 80th of a second of audio data, and is not nearly enough data to say whether something is music or speech. We'll need to use more than a single row to get a decent length of time. One way to do this is to use a sliding 2D window from the top of the image down to the bottom of the image (or left to right). Let's start by specifying how large our sliding window is.
Step46: Now we can collect all the sliding windows into a list of Xs, and label each one as music (0) or speech (1) in a collection of ys.
Step47: The code below will perform this for us, as well as create the inputs and outputs to our classification network by specifying 0s for the music dataset and 1s for the speech dataset. Let's just take a look at the first sliding window, and see its label
Step48: Since this was the first audio file of the music dataset, we've set it to a label of 0. And now the second one, which should have 50% overlap with the previous one, and still a label of 0
Step49: So hopefully you can see that the window is sliding down 250 milliseconds at a time, and since our window is 500 ms long, or half a second, it has 50% new content at the bottom. Let's do this for every audio file now
Step50: Just to confirm it's doing the same as above, let's plot the first magnitude matrix
Step51: Let's describe the shape of our input to the network
Step52: We'll now use the Dataset object I've provided for you under libs/datasets.py. This will accept the Xs, ys, a list defining our dataset split into training, validation, and testing proportions, and a parameter one_hot stating whether we want our ys to be converted to a one hot vector or not.
Step53: Let's take a look at the batch generator this object provides. We can call any of the splits, the train, valid, or test splits as properties of the object. And each split provides a next_batch method which gives us a batch generator. We should have specified that we wanted one_hot=True to have our batch generator return our ys with 2 features, one for each possible class.
Step54: Let's take a look at the first element of the randomized batch
Step55: And the second one
Step56: So we have a randomized order in minibatches generated for us, and the ys are represented as a one-hot vector with each class, music and speech, encoded as a 0 or 1. Since the next_batch method is a generator, we can use it in a loop until it is exhausted to run through our entire dataset in mini-batches.
Step57: Let's now create our deep convolutional network. Start by first creating the convolutional layers. Try different numbers of layers, different numbers of filters per layer, different activation functions, and varying the parameters to get the best training/validation score when training below. Try first using a kernel size of 3 and a stride of 1. You can use the utils.conv2d function to help you create the convolution.
Step58: We'll now connect our last convolutional layer to a fully connected layer of 100 neurons. This is essentially combining the spatial information, thus losing the spatial information. You can use the utils.linear function to do this, which will internally also reshape the 4-d tensor to a 2-d tensor so that it can be connected to a fully-connected layer (i.e. perform a matrix multiplication).
Step59: We'll now create our cost. Unlike the MNIST network, we're going to use a binary cross entropy as we only have 2 possible classes. You can use the utils.binary_cross_entropy function to help you with this. Remember, the final cost measures the average loss of your batches.
Step60: Just like in MNIST, we'll now also create a measure of accuracy by finding the prediction of our network. This is just for us to monitor the training and is not used to optimize the weights of the network! Look back to the MNIST network in the lecture if you are unsure of how this works (it is exactly the same)
Step61: We'll now create an optimizer and train our network
Step62: Now we're ready to train. This is a pretty simple dataset for a deep convolutional network. As a result, I've included code which demonstrates how to monitor validation performance. A validation set is data that the network has never seen, and is not used for optimizing the weights of the network. We use validation to better understand how well the performance of a network "generalizes" to unseen data.
Step63: Let's try to inspect how the network is accomplishing this task, just like we did with the MNIST network. First, let's see what the names of our operations in our network are.
Step64: Now let's visualize the W tensor's weights for the first layer using the utils function montage_filters, just like we did for the MNIST dataset during the lecture. Recall from the lecture that this is another great way to inspect the performance of your network. If many of the filters look uniform, then you know the network is either under or overperforming. What you want to see are filters that look like they are responding to information such as edges or corners.
Step65: We can also look at every layer's filters using a loop
Step66: In the next session, we'll learn some much more powerful methods of inspecting such networks.
|
<ASSISTANT_TASK:>
Python Code:
# First check the Python version
import sys
if sys.version_info < (3,4):
print('You are running an older version of Python!\n\n' \
'You should consider updating to Python 3.4.0 or ' \
'higher as the libraries built for this course ' \
'have only been tested in Python 3.4 and higher.\n')
print('Try installing the Python 3.5 version of anaconda '
'and then restart `jupyter notebook`:\n' \
'https://www.continuum.io/downloads\n\n')
# Now get necessary libraries
try:
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
import IPython.display as ipyd
except ImportError:
print('You are missing some packages! ' \
'We will try installing them before continuing!')
!pip install "numpy>=1.11.0" "matplotlib>=1.5.1" "scikit-image>=0.11.3" "scikit-learn>=0.17" "scipy>=0.17.0"
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
import IPython.display as ipyd
print('Done!')
# Import Tensorflow
try:
import tensorflow as tf
except ImportError:
print("You do not have tensorflow installed!")
print("Follow the instructions on the following link")
print("to install tensorflow before continuing:")
print("")
print("https://github.com/pkmital/CADL#installation-preliminaries")
# This cell includes the provided libraries from the zip file
# and a library for displaying images from ipython, which
# we will use to display the gif
try:
from libs import utils, gif, datasets, dataset_utils, vae, dft
except ImportError:
print("Make sure you have started notebook in the same directory" +
" as the provided zip file which includes the 'libs' folder" +
" and the file 'utils.py' inside of it. You will NOT be able"
" to complete this assignment unless you restart jupyter"
" notebook inside the directory created by extracting"
" the zip file or cloning the github repo.")
# We'll tell matplotlib to inline any drawn figures like so:
%matplotlib inline
plt.style.use('ggplot')
# Bit of formatting because I don't like the default inline code style:
from IPython.core.display import HTML
HTML("""<style> .rendered_html code {
    padding: 2px 4px;
    color: #c7254e;
    background-color: #f9f2f4;
    border-radius: 4px;
} </style>""")
# See how this works w/ Celeb Images or try your own dataset instead:
imgs = ...
# Then convert the list of images to a 4d array (e.g. use np.array to convert a list to a 4d array):
Xs = ...
print(Xs.shape)
assert(Xs.ndim == 4 and Xs.shape[1] <= 100 and Xs.shape[2] <= 100)
ds = datasets.Dataset(Xs)
# ds = datasets.CIFAR10(flatten=False)
mean_img = ds.mean().astype(np.uint8)
plt.imshow(mean_img)
# If your image comes out entirely black, try w/o the `astype(np.uint8)`
# that means your images are read in as 0-255, rather than 0-1 and
# this simply depends on the version of matplotlib you are using.
std_img = ds.std()
plt.imshow(std_img)
print(std_img.shape)
std_img = np.mean(std_img, axis=2)
plt.imshow(std_img)
plt.imshow(ds.X[0])
print(ds.X.shape)
for (X, y) in ds.train.next_batch(batch_size=10):
print(X.shape)
# Write a function to preprocess/normalize an image, given its dataset object
# (which stores the mean and standard deviation!)
def preprocess(img, ds):
norm_img = (img - ...) / ...
return norm_img
# Write a function to undo the normalization of an image, given its dataset object
# (which stores the mean and standard deviation!)
def deprocess(norm_img, ds):
img = norm_img * ... + ...
return img
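# Sanity check for the two functions above (run after filling in the TODOs):
# a correct preprocess/deprocess pair are inverses of each other.
img0 = ds.X[0].astype(np.float32)
assert np.allclose(deprocess(preprocess(img0, ds), ds), img0, atol=1e-4), \
    'deprocess(preprocess(img)) should recover the original image'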
# Calculate the number of features in your image.
# This is the total number of pixels, or (height x width x channels).
n_features = ...
print(n_features)
encoder_dimensions = [128, 2]
X = tf.placeholder(...
assert(X.get_shape().as_list() == [None, n_features])
def encode(X, dimensions, activation=tf.nn.tanh):
# We're going to keep every matrix we create so let's create a list to hold them all
Ws = []
# We'll create a for loop to create each layer:
for layer_i, n_output in enumerate(dimensions):
# TODO: just like in the last session,
# we'll use a variable scope to help encapsulate our variables
# This will simply prefix all the variables made in this scope
# with the name we give it. Make sure it is a unique name
# for each layer, e.g., 'encoder/layer1', 'encoder/layer2', or
# 'encoder/1', 'encoder/2',...
with tf.variable_scope(...):
# TODO: Create a weight matrix which will increasingly reduce
# down the amount of information in the input by performing
# a matrix multiplication. You can use the utils.linear function.
h, W = ...
# TODO: Apply an activation function (unless you used the parameter
# for activation function in the utils.linear call)
# Finally we'll store the weight matrix.
# We need to keep track of all
# the weight matrices we've used in our encoder
# so that we can build the decoder using the
# same weight matrices.
Ws.append(W)
# Replace X with the current layer's output, so we can
# use it in the next layer.
X = h
z = X
return Ws, z
# Then call the function
Ws, z = encode(X, encoder_dimensions)
# And just some checks to make sure you've done it right.
assert(z.get_shape().as_list() == [None, 2])
assert(len(Ws) == len(encoder_dimensions))
[op.name for op in tf.get_default_graph().get_operations()]
[W_i.get_shape().as_list() for W_i in Ws]
z.get_shape().as_list()
# We'll first reverse the order of our weight matrices
decoder_Ws = Ws[::-1]
# then reverse the order of our dimensions
# appending the last layers number of inputs.
decoder_dimensions = encoder_dimensions[::-1][1:] + [n_features]
print(decoder_dimensions)
assert(decoder_dimensions[-1] == n_features)
def decode(z, dimensions, Ws, activation=tf.nn.tanh):
current_input = z
for layer_i, n_output in enumerate(dimensions):
# we'll use a variable scope again to help encapsulate our variables
# This will simply prefix all the variables made in this scope
# with the name we give it.
with tf.variable_scope("decoder/layer/{}".format(layer_i)):
# Now we'll grab the weight matrix we created before and transpose it
# So a 3072 x 784 matrix would become 784 x 3072
# or a 256 x 64 matrix, would become 64 x 256
W = tf.transpose(Ws[layer_i])
# Now we'll multiply our input by our transposed W matrix
h = tf.matmul(current_input, W)
# And then apply the activation function (tanh by default) on its output
current_input = activation(h)
# We'll also replace n_input with the current n_output, so that on the
# next iteration, our new number inputs will be correct.
n_input = n_output
Y = current_input
return Y
Y = decode(z, decoder_dimensions, decoder_Ws)
[op.name for op in tf.get_default_graph().get_operations()
if op.name.startswith('decoder')]
Y.get_shape().as_list()
# Calculate some measure of loss, e.g. the pixel to pixel absolute difference or squared difference
loss = ...
# Now sum over every pixel and then calculate the mean over the batch dimension (just like session 2!)
# hint, use tf.reduce_mean and tf.reduce_sum
cost = ...
learning_rate = ...
optimizer = tf.train.AdamOptimizer(...).minimize(...)
# (TODO) Create a tensorflow session and initialize all of our weights:
sess = ...
sess.run(tf.global_variables_initializer())
# Some parameters for training
batch_size = 100
n_epochs = 31
step = 10
# We'll try to reconstruct the same first 100 images and show how
# The network does over the course of training.
examples = ds.X[:100]
# We have to preprocess the images before feeding them to the network.
# I'll do this once here, so we don't have to do it every iteration.
test_examples = preprocess(examples, ds).reshape(-1, n_features)
# If we want to just visualize them, we can create a montage.
test_images = utils.montage(examples).astype(np.uint8)
# Store images so we can make a gif
gifs = []
# Now for our training:
for epoch_i in range(n_epochs):
# Keep track of the cost
this_cost = 0
# Iterate over the entire dataset in batches
for batch_X, _ in ds.train.next_batch(batch_size=batch_size):
# (TODO) Preprocess and reshape our current batch, batch_X:
this_batch = preprocess(..., ds).reshape(-1, n_features)
# Compute the cost, and run the optimizer.
this_cost += sess.run([cost, optimizer], feed_dict={X: this_batch})[0]
# Average cost of this epoch: this_cost holds one mean-cost per batch,
# and there are ds.X.shape[0] / batch_size batches in an epoch.
avg_cost = this_cost / ds.X.shape[0] * batch_size
print(epoch_i, avg_cost)
# Let's also try to see how the network currently reconstructs the input.
# We'll draw the reconstruction every `step` iterations.
if epoch_i % step == 0:
# (TODO) Ask for the output of the network, Y, and give it our test examples
recon = sess.run(...
# Resize the 2d to the 4d representation:
rsz = recon.reshape(examples.shape)
# We have to unprocess the image now, removing the normalization
unnorm_img = deprocess(rsz, ds)
# Clip to avoid saturation
# TODO: Make sure this image is the correct range, e.g.
# for float32 0-1, you should clip between 0 and 1
# for uint8 0-255, you should clip between 0 and 255!
clipped = np.clip(unnorm_img, 0, 255)
# And we can create a montage of the reconstruction
recon = utils.montage(clipped)
# Store for gif
gifs.append(recon)
fig, axs = plt.subplots(1, 2, figsize=(10, 10))
axs[0].imshow(test_images)
axs[0].set_title('Original')
axs[1].imshow(recon)
axs[1].set_title('Synthesis')
fig.canvas.draw()
plt.show()
fig, axs = plt.subplots(1, 2, figsize=(10, 10))
axs[0].imshow(test_images)
axs[0].set_title('Original')
axs[1].imshow(recon)
axs[1].set_title('Synthesis')
fig.canvas.draw()
plt.show()
plt.imsave(arr=test_images, fname='test.png')
plt.imsave(arr=recon, fname='recon.png')
zs = sess.run(z, feed_dict={X:test_examples})
zs.shape
plt.scatter(zs[:, 0], zs[:, 1])
n_images = 100
idxs = np.linspace(np.min(zs) * 2.0, np.max(zs) * 2.0,
int(np.ceil(np.sqrt(n_images))))
xs, ys = np.meshgrid(idxs, idxs)
grid = np.dstack((ys, xs)).reshape(-1, 2)[:n_images,:]
fig, axs = plt.subplots(1,2,figsize=(8,3))
axs[0].scatter(zs[:, 0], zs[:, 1],
edgecolors='none', marker='o', s=2)
axs[0].set_title('Autoencoder Embedding')
axs[1].scatter(grid[:,0], grid[:,1],
edgecolors='none', marker='o', s=2)
axs[1].set_title('Ideal Grid')
from scipy.spatial.distance import cdist
cost = cdist(grid[:, :], zs[:, :], 'sqeuclidean')
from scipy.optimize import linear_sum_assignment
indexes = linear_sum_assignment(cost)
indexes
plt.figure(figsize=(5, 5))
for i in range(len(zs)):
plt.plot([zs[indexes[1][i], 0], grid[i, 0]],
[zs[indexes[1][i], 1], grid[i, 1]], 'r')
plt.xlim([-3, 3])
plt.ylim([-3, 3])
examples_sorted = []
for i in indexes[1]:
examples_sorted.append(examples[i])
plt.figure(figsize=(15, 15))
img = utils.montage(np.array(examples_sorted)).astype(np.uint8)
plt.imshow(img,
interpolation='nearest')
plt.imsave(arr=img, fname='sorted.png')
# This is a quick way to do what we could have done as
# a nested for loop:
zs = np.meshgrid(np.linspace(-1, 1, 10),
np.linspace(-1, 1, 10))
# Now we have 100 x 2 values of every possible position
# in a 2D grid from -1 to 1:
zs = np.c_[zs[0].ravel(), zs[1].ravel()]
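# For clarity, the same grid written as the explicit nested loop the comment
# above alludes to (a sketch; both constructions produce the identical array):
zs_loop = np.array([[zx, zy]
                    for zy in np.linspace(-1, 1, 10)
                    for zx in np.linspace(-1, 1, 10)])
assert np.allclose(zs, zs_loop)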
recon = sess.run(Y, feed_dict={...})
# reshape the result to an image:
rsz = recon.reshape(examples.shape)
# Deprocess the result, unnormalizing it
unnorm_img = deprocess(rsz, ds)
# clip to avoid saturation
clipped = np.clip(unnorm_img, 0, 255)
# Create a montage
img_i = utils.montage(clipped).astype(np.uint8)
plt.figure(figsize=(15, 15))
plt.imshow(img_i)
plt.imsave(arr=img_i, fname='manifold.png')
help(vae.VAE)
help(vae.train_vae)
# Get a list of jpg files (Only JPG works!)
files = [os.path.join(some_dir, file_i) for file_i in os.listdir(some_dir) if file_i.endswith('.jpg')]
# Ensure that you have the latest TensorFlow version installed, otherwise you may have encountered
# 'rsz_shape' error because of the backward incompatible API.
# Train it! Change these parameters!
vae.train_vae(files,
input_shape,
learning_rate=0.0001,
batch_size=100,
n_epochs=50,
n_examples=10,
crop_shape=[64, 64, 3],
crop_factor=0.8,
n_filters=[100, 100, 100, 100],
n_hidden=256,
n_code=50,
convolutional=True,
variational=True,
filter_sizes=[3, 3, 3, 3],
dropout=True,
keep_prob=0.8,
activation=tf.nn.relu,
img_step=100,
save_step=100,
ckpt_name="vae.ckpt")
dst = 'gtzan_music_speech'
if not os.path.exists(dst):
dataset_utils.gtzan_music_speech_download(dst)
# Get the full path to the directory
music_dir = os.path.join(os.path.join(dst, 'music_speech'), 'music_wav')
# Now use list comprehension to combine the path of the directory with any wave files
music = [os.path.join(music_dir, file_i)
for file_i in os.listdir(music_dir)
if file_i.endswith('.wav')]
# Similarly, for the speech folder:
speech_dir = os.path.join(os.path.join(dst, 'music_speech'), 'speech_wav')
speech = [os.path.join(speech_dir, file_i)
for file_i in os.listdir(speech_dir)
if file_i.endswith('.wav')]
# Let's see all the file names
print(music, speech)
file_i = music[0]
s = utils.load_audio(file_i)
plt.plot(s)
# Parameters for our dft transform. Sorry we can't go into the
# details of this in this course. Please look into DSP texts or the
# course by Perry Cook linked above if you are unfamiliar with this.
fft_size = 512
hop_size = 256
re, im = dft.dft_np(s, hop_size=256, fft_size=512)
mag, phs = dft.ztoc(re, im)
print(mag.shape)
plt.imshow(mag)
plt.figure(figsize=(10, 4))
plt.imshow(np.log(mag.T))
plt.xlabel('Time')
plt.ylabel('Frequency Bin')
# The sample rate from our audio is 22050 Hz.
sr = 22050
# We can calculate how many hops there are in a second
# which will tell us how many frames of magnitudes
# we have per second
n_frames_per_second = sr // hop_size
# We want 500 milliseconds of audio in our window
n_frames = n_frames_per_second // 2
# And we'll move our window by 250 ms at a time
frame_hops = n_frames_per_second // 4
# We'll therefore have this many sliding windows:
n_hops = (len(mag) - n_frames) // frame_hops
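# Worked numbers for the window arithmetic above (with sr=22050, hop_size=256):
# n_frames_per_second = 22050 // 256 = 86 magnitude frames per second,
# n_frames = 86 // 2 = 43 frames (~500 ms) per window,
# frame_hops = 86 // 4 = 21 frames (~250 ms) between window starts.
print(n_frames_per_second, n_frames, frame_hops, n_hops)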
Xs = []
ys = []
for hop_i in range(n_hops):
# Creating our sliding window
frames = mag[(hop_i * frame_hops):(hop_i * frame_hops + n_frames)]
# Store them with a new 3rd axis and as a logarithmic scale
# We'll ensure that we aren't taking a log of 0 just by adding
# a small value, also known as epsilon.
Xs.append(np.log(np.abs(frames[..., np.newaxis]) + 1e-10))
# And then store the label
ys.append(0)
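# Quick shape check of the extraction above: each window should be
# (n_frames, fft_size // 2, 1), i.e. (43, 256, 1) with the parameters used here.
print(len(Xs), Xs[0].shape)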
plt.imshow(Xs[0][..., 0])
plt.title('label:{}'.format(ys[0]))
plt.imshow(Xs[1][..., 0])
plt.title('label:{}'.format(ys[1]))
# Store every magnitude frame and its label of being music: 0 or speech: 1
Xs, ys = [], []
# Let's start with the music files
for i in music:
# Load the ith file:
s = utils.load_audio(i)
# Now take the dft of it (take a DSP course!):
re, im = dft.dft_np(s, fft_size=fft_size, hop_size=hop_size)
# And convert the complex representation to magnitudes/phases (take a DSP course!):
mag, phs = dft.ztoc(re, im)
# This is how many sliding windows we have:
n_hops = (len(mag) - n_frames) // frame_hops
# Let's extract them all:
for hop_i in range(n_hops):
# Get the current sliding window
frames = mag[(hop_i * frame_hops):(hop_i * frame_hops + n_frames)]
# We'll take the log magnitudes, as this is a nicer representation:
this_X = np.log(np.abs(frames[..., np.newaxis]) + 1e-10)
# And store it:
Xs.append(this_X)
# And be sure that we store the correct label of this observation:
ys.append(0)
# Now do the same thing with speech (TODO)!
for i in speech:
# Load the ith file:
s = ...
# Now take the dft of it (take a DSP course!):
re, im = ...
# And convert the complex representation to magnitudes/phases (take a DSP course!):
mag, phs = ...
# This is how many sliding windows we have:
n_hops = (len(mag) - n_frames) // frame_hops
# Let's extract them all:
for hop_i in range(n_hops):
# Get the current sliding window
frames = mag[(hop_i * frame_hops):(hop_i * frame_hops + n_frames)]
# We'll take the log magnitudes, as this is a nicer representation:
this_X = np.log(np.abs(frames[..., np.newaxis]) + 1e-10)
# And store it:
Xs.append(this_X)
# Make sure we use the right label (TODO!)!
ys.append...
# Convert them to an array:
Xs = np.array(Xs)
ys = np.array(ys)
print(Xs.shape, ys.shape)
# Just to make sure you've done it right. If you've changed any of the
# parameters of the dft/hop size, then this will fail. If that's what you
# wanted to do, then don't worry about this assertion.
assert(Xs.shape == (15360, 43, 256, 1) and ys.shape == (15360,))
plt.imshow(Xs[0][..., 0])
plt.title('label:{}'.format(ys[0]))
n_observations, n_height, n_width, n_channels = Xs.shape
ds = datasets.Dataset(Xs=..., ys=..., split=[0.8, 0.1, 0.1], one_hot=True)
Xs_i, ys_i = next(ds.train.next_batch())
# Notice the shape this returns. This will become the shape of our input and output of the network:
print(Xs_i.shape, ys_i.shape)
assert(ys_i.shape == (100, 2))
plt.imshow(Xs_i[0, :, :, 0])
plt.title('label:{}'.format(ys_i[0]))
plt.imshow(Xs_i[1, :, :, 0])
plt.title('label:{}'.format(ys_i[1]))
tf.reset_default_graph()
# Create the input to the network. This is a 4-dimensional tensor!
# Don't forget that we should use None as a shape for the first dimension
# Recall that we are using sliding windows of our magnitudes (TODO):
X = tf.placeholder(name='X', shape=..., dtype=tf.float32)
# Create the output to the network. This is our one hot encoding of 2 possible values (TODO)!
Y = tf.placeholder(name='Y', shape=..., dtype=tf.float32)
# TODO: Explore different numbers of layers, and sizes of the network
n_filters = [9, 9, 9, 9]
# Now let's loop over our n_filters and create the deep convolutional neural network
H = X
for layer_i, n_filters_i in enumerate(n_filters):
# Let's use the helper function to create our connection to the next layer:
# TODO: explore changing the parameters here:
H, W = utils.conv2d(
H, n_filters_i, k_h=3, k_w=3, d_h=2, d_w=2,
name=str(layer_i))
# And use a nonlinearity
# TODO: explore changing the activation here:
H = tf.nn.relu(H)
# Just to check what's happening:
print(H.get_shape().as_list())
# Connect the last convolutional layer to a fully connected network (TODO)!
fc, W = utils.linear(H, ...
# And another fully connected layer, now with just 2 outputs, the number of outputs that our
# one hot encoding has (TODO)!
Y_pred, W = utils.linear(fc, ...
loss = utils.binary_cross_entropy(Y_pred, Y)
cost = tf.reduce_mean(tf.reduce_sum(loss, 1))
predicted_y = tf.argmax(...
actual_y = tf.argmax(...
correct_prediction = tf.equal(...
accuracy = tf.reduce_mean(...
learning_rate = ...
optimizer = tf.train.AdamOptimizer(...).minimize(...)
# Explore these parameters: (TODO)
n_epochs = 10
batch_size = 200
# Create a session and init!
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# Now iterate over our dataset n_epoch times
for epoch_i in range(n_epochs):
print('Epoch: ', epoch_i)
# Train
this_accuracy = 0
its = 0
# Do our mini batches:
for Xs_i, ys_i in ds.train.next_batch(batch_size):
# Note here: we are running the optimizer so
# that the network parameters train!
this_accuracy += sess.run([accuracy, optimizer], feed_dict={
X:Xs_i, Y:ys_i})[0]
its += 1
print(this_accuracy / its)
print('Training accuracy: ', this_accuracy / its)
# Validation (see how the network does on unseen data).
this_accuracy = 0
its = 0
# Do our mini batches:
for Xs_i, ys_i in ds.valid.next_batch(batch_size):
# Note here: we are NOT running the optimizer!
# we only measure the accuracy!
this_accuracy += sess.run(accuracy, feed_dict={
X:Xs_i, Y:ys_i})
its += 1
print('Validation accuracy: ', this_accuracy / its)
g = tf.get_default_graph()
[op.name for op in g.get_operations()]
g = tf.get_default_graph()
W = ...
assert(W.dtype == np.float32)
m = utils.montage_filters(W)
plt.figure(figsize=(5, 5))
plt.imshow(m)
plt.imsave(arr=m, fname='audio.png')
g = tf.get_default_graph()
for layer_i in range(len(n_filters)):
W = sess.run(g.get_tensor_by_name('{}/W:0'.format(layer_i)))
plt.figure(figsize=(5, 5))
plt.imshow(utils.montage_filters(W))
plt.title('Layer {}\'s Learned Convolution Kernels'.format(layer_i))
utils.build_submission('session-3.zip',
('test.png',
'recon.png',
'sorted.png',
'manifold.png',
'test_xs.png',
'audio.png',
'session-3.ipynb'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Function libraries
Step2: data_reader
Step3: model
Step4: build neural network model
Step5: model training
Step6: TPU training
Step7: Training loss plot
Step8: Inference test
Step9: prepare data for plotting
Step10: GPU data preparation
Step11: interactive plot
|
<ASSISTANT_TASK:>
Python Code:
!pip install gdown
!mkdir ./data
import gdown
def data_import():
ids = {
"tables_of_fgm.h5":"1XHPF7hUqT-zp__qkGwHg8noRazRnPqb0"
}
url = 'https://drive.google.com/uc?id='
for title, g_id in ids.items():
try:
output_file = open("/content/data/" + title, 'wb')
gdown.download(url + g_id, output_file, quiet=False)
except IOError as e:
print(e)
finally:
output_file.close()
data_import()
import tensorflow as tf
import keras
from keras.layers import Dense, Activation, Input, BatchNormalization, Dropout, concatenate
from keras import layers
def res_branch(bi, conv_name_base, bn_name_base, scale, input_tensor, n_neuron, stage, block, bn=False):
x_1 = Dense(scale * n_neuron, name=conv_name_base + '2a_'+str(bi))(input_tensor)
if bn:
x_1 = BatchNormalization(axis=-1, name=bn_name_base + '2a_'+str(bi))(x_1)
x_1 = Activation('relu')(x_1)
# x_1 = Dropout(0.)(x_1)
return x_1
def res_block(input_tensor,scale, n_neuron, stage, block, bn=False,branches=0):
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# scale = 2
x = Dense(scale * n_neuron, name=conv_name_base + '2a')(input_tensor)
if bn:
x = BatchNormalization(axis=-1, name=bn_name_base + '2a')(x)
x = Activation('relu')(x)
dp1=0.0
if dp1 > 0:
    x = Dropout(dp1)(x)
branch_list=[x]
for i in range(branches-1):
branch_list.append(res_branch(i,conv_name_base, bn_name_base, scale,input_tensor,n_neuron,stage,block,bn))
if branches-1 > 0:
x = Dense(n_neuron, name=conv_name_base + '2b')(concatenate(branch_list,axis=-1))
# x = Dense(n_neuron, name=conv_name_base + '2b')(layers.add(branch_list))
else:
x = Dense(n_neuron, name=conv_name_base + '2b')(x)
if bn:
x = BatchNormalization(axis=-1, name=bn_name_base + '2b')(x)
x = layers.add([x, input_tensor])
x = Activation('relu')(x)
if dp1 > 0:
    x = Dropout(dp1)(x)
return x
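# Minimal usage sketch of res_block (illustrative only -- not part of the model
# trained below): a tiny residual MLP mapping 3 inputs to 2 outputs.
from keras.models import Model
_inp = Input(shape=(3,))
_h = Dense(10, activation='relu')(_inp)
_h = res_block(_h, scale=2, n_neuron=10, stage=0, block='demo', branches=2)
_out = Dense(2, activation='linear')(_h)
Model(inputs=_inp, outputs=_out).summary()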
import numpy as np
import pandas as pd
# MaxAbsScaler is needed by the 'tan' case in data_scaler below
from sklearn.preprocessing import MinMaxScaler, StandardScaler, MaxAbsScaler
class data_scaler(object):
def __init__(self):
self.norm = None
self.norm_1 = None
self.std = None
self.case = None
self.scale = 1
self.bias = 1e-20
# self.bias = 1
self.switcher = {
'min_std': 'min_std',
'std2': 'std2',
'std_min':'std_min',
'min': 'min',
'no':'no',
'log': 'log',
'log_min':'log_min',
'log2': 'log2',
'tan': 'tan'
}
def fit_transform(self, input_data, case):
self.case = case
if self.switcher.get(self.case) == 'min_std':
self.norm = MinMaxScaler()
self.std = StandardScaler()
out = self.norm.fit_transform(input_data)
out = self.std.fit_transform(out)
if self.switcher.get(self.case) == 'std2':
self.std = StandardScaler()
out = self.std.fit_transform(input_data)
if self.switcher.get(self.case) == 'std_min':
self.norm = MinMaxScaler()
self.std = StandardScaler()
out = self.std.fit_transform(input_data)
out = self.norm.fit_transform(out)
if self.switcher.get(self.case) == 'min':
self.norm = MinMaxScaler()
out = self.norm.fit_transform(input_data)
if self.switcher.get(self.case) == 'no':
self.norm = MinMaxScaler()
self.std = StandardScaler()
out = input_data
if self.switcher.get(self.case) == 'log':
out = - np.log(np.asarray(input_data / self.scale) + self.bias)
self.std = StandardScaler()
out = self.std.fit_transform(out)
if self.switcher.get(self.case) == 'log_min':
out = - np.log(np.asarray(input_data / self.scale) + self.bias)
self.norm = MinMaxScaler()
out = self.norm.fit_transform(out)
if self.switcher.get(self.case) == 'log2':
self.norm = MinMaxScaler()
self.norm_1 = MinMaxScaler()
out = self.norm.fit_transform(input_data)
out = np.log(np.asarray(out) + self.bias)
out = self.norm_1.fit_transform(out)
if self.switcher.get(self.case) == 'tan':
self.norm = MaxAbsScaler()
self.std = StandardScaler()
out = self.std.fit_transform(input_data)
out = self.norm.fit_transform(out)
out = np.tan(out / (2 * np.pi + self.bias))
return out
def transform(self, input_data):
if self.switcher.get(self.case) == 'min_std':
out = self.norm.transform(input_data)
out = self.std.transform(out)
if self.switcher.get(self.case) == 'std2':
out = self.std.transform(input_data)
if self.switcher.get(self.case) == 'std_min':
out = self.std.transform(input_data)
out = self.norm.transform(out)
if self.switcher.get(self.case) == 'min':
out = self.norm.transform(input_data)
if self.switcher.get(self.case) == 'no':
out = input_data
if self.switcher.get(self.case) == 'log':
out = - np.log(np.asarray(input_data / self.scale) + self.bias)
out = self.std.transform(out)
if self.switcher.get(self.case) == 'log_min':
out = - np.log(np.asarray(input_data / self.scale) + self.bias)
out = self.norm.transform(out)
if self.switcher.get(self.case) == 'log2':
out = self.norm.transform(input_data)
out = np.log(np.asarray(out) + self.bias)
out = self.norm_1.transform(out)
if self.switcher.get(self.case) == 'tan':
out = self.std.transform(input_data)
out = self.norm.transform(out)
out = np.tan(out / (2 * np.pi + self.bias))
return out
def inverse_transform(self, input_data):
if self.switcher.get(self.case) == 'min_std':
out = self.std.inverse_transform(input_data)
out = self.norm.inverse_transform(out)
if self.switcher.get(self.case) == 'std2':
out = self.std.inverse_transform(input_data)
if self.switcher.get(self.case) == 'std_min':
out = self.norm.inverse_transform(input_data)
out = self.std.inverse_transform(out)
if self.switcher.get(self.case) == 'min':
out = self.norm.inverse_transform(input_data)
if self.switcher.get(self.case) == 'no':
out = input_data
if self.switcher.get(self.case) == 'log':
out = self.std.inverse_transform(input_data)
out = (np.exp(-out) - self.bias) * self.scale
if self.switcher.get(self.case) == 'log_min':
out = self.norm.inverse_transform(input_data)
out = (np.exp(-out) - self.bias) * self.scale
if self.switcher.get(self.case) == 'log2':
out = self.norm_1.inverse_transform(input_data)
out = np.exp(out) - self.bias
out = self.norm.inverse_transform(out)
if self.switcher.get(self.case) == 'tan':
out = (2 * np.pi + self.bias) * np.arctan(input_data)
out = self.norm.inverse_transform(out)
out = self.std.inverse_transform(out)
return out
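# Round-trip sanity check for data_scaler (a small sketch): for each scaling
# case, inverse_transform(fit_transform(x)) should recover x up to float error.
_x = np.abs(np.random.rand(50, 3)) + 1e-3 # keep values positive for the log cases
for _case in ['min_std', 'std2', 'std_min', 'min', 'no', 'log', 'log_min', 'log2']:
    _sc = data_scaler()
    _err = np.max(np.abs(_sc.inverse_transform(_sc.fit_transform(_x, _case)) - _x))
    print(_case, 'max round-trip error:', _err)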
def read_h5_data(fileName, input_features, labels):
df = pd.read_hdf(fileName)
df = df[df['f']<0.45]
input_df=df[input_features]
in_scaler = data_scaler()
input_np = in_scaler.fit_transform(input_df.values,'no')
label_df=df[labels].clip(0)
# if 'PVs' in labels:
# label_df['PVs']=np.log(label_df['PVs']+1)
out_scaler = data_scaler()
label_np = out_scaler.fit_transform(label_df.values,'std2')
return input_np, label_np, df, in_scaler, out_scaler
%matplotlib inline
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# define the labels
col_labels=['C2H3', 'C2H6', 'CH2', 'H2CN', 'C2H4', 'H2O2', 'C2H', 'CN',
'heatRelease', 'NCO', 'NNH', 'N2', 'AR', 'psi', 'CO', 'CH4', 'HNCO',
'CH2OH', 'HCCO', 'CH2CO', 'CH', 'mu', 'C2H2', 'C2H5', 'H2', 'T', 'PVs',
'O', 'O2', 'N2O', 'C', 'C3H7', 'CH2(S)', 'NH3', 'HO2', 'NO', 'HCO',
'NO2', 'OH', 'HCNO', 'CH3CHO', 'CH3', 'NH', 'alpha', 'CH3O', 'CO2',
'CH3OH', 'CH2CHO', 'CH2O', 'C3H8', 'HNO', 'NH2', 'HCN', 'H', 'N', 'H2O',
'HCCOH', 'HCNN']
# labels = ['T','PVs']
# labels = ['T','CH4','O2','CO2','CO','H2O','H2','OH','psi']
# labels = ['CH2OH','HNCO','CH3OH', 'CH2CHO', 'CH2O', 'C3H8', 'HNO', 'NH2', 'HCN']
labels = np.random.choice(col_labels,20,replace=False).tolist()
# labels.append('PVs')
# labels = col_labels
print(labels)
input_features=['f','pv','zeta']
# read in the data
x_input, y_label, df, in_scaler, out_scaler = read_h5_data('./data/tables_of_fgm.h5',input_features=input_features, labels = labels)
from sklearn.model_selection import train_test_split
import tensorflow as tf
from keras.models import Model
from keras.layers import Dense, Input
from keras.callbacks import ModelCheckpoint
# split into train and test data
x_train, x_test, y_train, y_test = train_test_split(x_input,y_label, test_size=0.01)
n_neuron = 10
scale=3
branches=3
# %%
print('set up ANN')
# ANN parameters
dim_input = x_train.shape[1]
dim_label = y_train.shape[1]
batch_norm = False
# This returns a tensor
inputs = Input(shape=(dim_input,),name='input_1')
# a layer instance is callable on a tensor, and returns a tensor
x = Dense(n_neuron, activation='relu')(inputs)
# less then 2 res_block, there will be variance
x = res_block(x, scale, n_neuron, stage=1, block='a', bn=batch_norm,branches=branches)
x = res_block(x, scale, n_neuron, stage=1, block='b', bn=batch_norm,branches=branches)
# x = res_block(x, scale, n_neuron, stage=1, block='c', bn=batch_norm,branches=branches)
# note: these Dense(100) layers are dead code in this notebook -- their output
# (x_1) is never used, since the output head below connects to x directly
# x_1 = Dense(100, activation='relu')(x)
predictions = Dense(dim_label, activation='linear', name='output_1')(x)
model = Model(inputs=inputs, outputs=predictions)
model.summary()
import keras.backend as K
def cubic_loss(y_true, y_pred):
return K.mean(K.square(y_true - y_pred)*K.abs(y_true - y_pred), axis=-1)
def coeff_r2(y_true, y_pred):
from keras import backend as K
SS_res = K.sum(K.square( y_true-y_pred ))
SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )
return ( 1 - SS_res/(SS_tot + K.epsilon()) )
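# Quick cross-check of coeff_r2 against a direct NumPy computation (a sketch):
import keras.backend as K
_yt = np.random.rand(100, 1).astype(np.float32)
_yp = _yt + 0.1 * np.random.rand(100, 1).astype(np.float32)
_r2_np = 1 - np.sum((_yt - _yp)**2) / np.sum((_yt - _yt.mean())**2)
print('numpy r2: ', _r2_np)
print('coeff_r2:', K.eval(coeff_r2(K.variable(_yt), K.variable(_yp))))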
from keras import optimizers
batch_size = 1024*32
epochs = 5000
vsplit = 0.1
loss_type='mse'
adam_op = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999,epsilon=1e-8, decay=0.0, amsgrad=True)
model.compile(loss=loss_type, optimizer=adam_op, metrics=[coeff_r2])
# model.compile(loss=cubic_loss, optimizer=adam_op, metrics=['accuracy'])
# checkpoint (save the best model based validate loss)
!mkdir ./tmp
filepath = "./tmp/weights.best.cntk.hdf5"
checkpoint = ModelCheckpoint(filepath,
monitor='val_loss',
verbose=1,
save_best_only=True,
mode='min',
period=20)
callbacks_list = [checkpoint]
# fit the model
history = model.fit(
x_train, y_train,
epochs=epochs,
batch_size=batch_size,
validation_split=vsplit,
verbose=2,
# callbacks=callbacks_list,
shuffle=True)
model.save('trained_fgm_nn.h5')
import os
batch_size = 1024*128
epochs = 100
vsplit = 0.2
tpu_model = tf.contrib.tpu.keras_to_tpu_model(
model,
strategy=tf.contrib.tpu.TPUDistributionStrategy(
tf.contrib.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
)
)
tpu_model.compile(
optimizer=tf.train.AdamOptimizer(learning_rate=1e-4),
loss=tf.keras.losses.mae,
metrics=['accuracy']
)
tpu_model.fit(
x_train, y_train,
epochs=epochs,
batch_size=batch_size,
validation_split=vsplit
)
fig = plt.figure()
plt.semilogy(history.history['loss'])
if vsplit:
plt.semilogy(history.history['val_loss'])
plt.title(loss_type)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.show()
#@title import plotly
import plotly.plotly as py
import numpy as np
from plotly.offline import init_notebook_mode, iplot
# from plotly.graph_objs import Contours, Histogram2dContour, Marker, Scatter
import plotly.graph_objs as go
def configure_plotly_browser_state():
import IPython
display(IPython.core.display.HTML('''
<script src="/static/components/requirejs/require.js"></script>
<script>
requirejs.config({
paths: {
base: '/static/base',
plotly: 'https://cdn.plot.ly/plotly-1.5.1.min.js?noext',
},
});
</script>
'''))
cpu_model = tpu_model.sync_to_cpu()
predict_val = cpu_model.predict(x_test)
X_test_df = pd.DataFrame(in_scaler.inverse_transform(x_test), columns=input_features)
y_test_df = pd.DataFrame(out_scaler.inverse_transform(y_test), columns=labels)
predict_val = model.predict(x_test)
predict_df = pd.DataFrame(out_scaler.inverse_transform(predict_val), columns=labels)
test_data=pd.concat([X_test_df,y_test_df],axis=1)
pred_data=pd.concat([X_test_df,predict_df],axis=1)
test_data.to_hdf('sim_check.H5',key='test')
pred_data.to_hdf('sim_check.H5',key='pred')
df_test=pd.read_hdf('sim_check.H5',key='test')
df_pred=pd.read_hdf('sim_check.H5',key='pred')
zeta_level=list(set(df_test['zeta']))
zeta_level.sort()
model.load_weights("./tmp/weights.best.cntk.hdf5")
x_test_df = pd.DataFrame(in_scaler.inverse_transform(x_test),columns=input_features)
y_test_df = pd.DataFrame(out_scaler.inverse_transform(y_test),columns=labels)
predict_val = model.predict(x_test,batch_size=1024*8)
predict_df = pd.DataFrame(out_scaler.inverse_transform(predict_val), columns=labels)
test_data=pd.concat([x_test_df,y_test_df],axis=1)
pred_data=pd.concat([x_test_df,predict_df],axis=1)
!rm sim_check.h5
test_data.to_hdf('sim_check.h5',key='test')
pred_data.to_hdf('sim_check.h5',key='pred')
df_test=pd.read_hdf('sim_check.h5',key='test')
df_pred=pd.read_hdf('sim_check.h5',key='pred')
zeta_level=list(set(df_test['zeta']))
zeta_level.sort()
#@title Default title text
# species = 'PVs' #@param {type:"string"}
species = np.random.choice(labels)
# configure_plotly_browser_state()
# init_notebook_mode(connected=False)
from sklearn.metrics import r2_score
df_t=df_test.loc[df_test['zeta']==zeta_level[0]].sample(frac=0.5)
# df_p=df_pred.loc[df_pred['zeta']==zeta_level[1]].sample(frac=0.1)
df_p=df_pred.loc[df_t.index]
error=df_p[species]-df_t[species]
r2=round(r2_score(df_p[species],df_t[species]),4)
fig_db = {
'data': [
{'name':'test data from table',
'x': df_t['f'],
'y': df_t['pv'],
'z': df_t[species],
'type':'scatter3d',
'mode': 'markers',
'marker':{
'size':1
}
},
{'name':'prediction from neural networks',
'x': df_p['f'],
'y': df_p['pv'],
'z': df_p[species],
'type':'scatter3d',
'mode': 'markers',
'marker':{
'size':1
},
},
{'name':'error in difference',
'x': df_p['f'],
'y': df_p['pv'],
'z': error,
'type':'scatter3d',
'mode': 'markers',
'marker':{
'size':1
},
}
],
'layout': {
'scene':{
'xaxis': {'title':'mixture fraction'},
'yaxis': {'title':'progress variable'},
'zaxis': {'title': species+'_r2:'+str(r2)}
}
}
}
# iplot(fig_db, filename='multiple-scatter')
iplot(fig_db)
print(species,r2)
df_p['HNO']
%run -i k2tf.py --input_model='trained_fgm_nn.h5' --output_model='exported/fgm.pb'
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Periodic Yield
Step2: CPU Hog
Step3: Changing Reservations
|
<ASSISTANT_TASK:>
Python Code:
from trappy.stats.Topology import Topology
from bart.sched.SchedMultiAssert import SchedMultiAssert
from bart.sched.SchedAssert import SchedAssert
import trappy
import os
import operator
import json
#Define a CPU Topology (for multi-cluster systems)
BIG = [1, 2]
LITTLE = [0, 3, 4, 5]
CLUSTERS = [BIG, LITTLE]
topology = Topology(clusters=CLUSTERS)
BASE_PATH = "/Users/kapileshwarsingh/AnalysisRawData/LPC/sched_deadline/"
THRESHOLD = 10.0
def between_threshold(a, b):
return abs(((a - b) * 100.0) / b) < THRESHOLD
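# Example: with THRESHOLD = 10.0, a measured 28 against an expected 30 differs
# by ~6.7% and passes, while 26 differs by ~13.3% and fails.
print(between_threshold(28, 30)) # True
print(between_threshold(26, 30)) # False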
TRACE_FILE = os.path.join(BASE_PATH, "yield")
run = trappy.Run(TRACE_FILE, "cpuhog")
# Assert Period
s = SchedMultiAssert(run, topology, execnames="periodic_yield")
if s.assertPeriod(30, between_threshold, rank=1):
print "PASS: Period"
print json.dumps(s.getPeriod(), indent=3)
print ""
# Assert DutyCycle
if s.assertDutyCycle(1, operator.lt, window=(0,4), rank=2):
print "PASS: DutyCycle"
print json.dumps(s.getDutyCycle(window=(0,4)), indent=3)
TRACE_FILE = os.path.join(BASE_PATH, "cpuhog")
run = trappy.Run(TRACE_FILE, "cpuhog")
s = SchedMultiAssert(run, topology, execnames="cpuhog")
s.plot().view()
# Assert DutyCycle
if s.assertDutyCycle(10, between_threshold, window=(0, 5), rank=1):
print "PASS: DutyCycle"
print json.dumps(s.getDutyCycle(window=(0, 5)), indent=3)
TRACE_FILE = os.path.join(BASE_PATH, "cancel_dl_timer")
run = trappy.Run(TRACE_FILE, "cpuhog")
s = SchedAssert(run, topology, execname="cpuhog")
s.plot().view()
NUM_PHASES = 10
PHASE_DURATION = 2
start = s.getStartTime()
DUTY_CYCLE_FACTOR = 10
for phase in range(NUM_PHASES + 1):
window = (start + (phase * PHASE_DURATION),
start + ((phase + 1) * PHASE_DURATION))
if phase % 2 == 0:
DUTY_CYCLE = (phase + 2) * DUTY_CYCLE_FACTOR / 2
else:
DUTY_CYCLE = 100
print "WINDOW -> [{:.2f}, {:.2f}]".format(window[0],
window[1])
if s.assertDutyCycle(DUTY_CYCLE, between_threshold, window=window):
print "PASS: Expected={} Actual={:.2f} THRESHOLD={}".format(DUTY_CYCLE,
s.getDutyCycle(window=window),
THRESHOLD)
else:
print "FAIL: Expected={} Actual={:.2f} THRESHOLD={}".format(DUTY_CYCLE,
s.getDutyCycle(window=window),
THRESHOLD)
print ""
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Building the PDB Query (Search API)
Step2: Now we can combine these three filters together using AND
Step3: And add the return information. Note that we are specifying the polymer_instance ID's as that is where the disulfide bonds are noted.
Step4: And finally submit the request to the PDB. The response should be 200 if the query was successful.
Step5: Next we will extract just the PDB codes from our results and build a list.
Step6: PDB Data API
Step7: Now we can loop through each PDB entry and request the polymer_entity_instance information. We will only care about disulfide bridges of adjacent residues
Step8: Our list is small (intentionally) but we can now use it to fetch chemical shifts from the BMRB.
Step9: Alternate Approach
|
<ASSISTANT_TASK:>
Python Code:
%%capture
!pip install requests;
import requests
pdbAPI = "https://search.rcsb.org/rcsbsearch/v1/query?json="
disulfide_filter = '{"type": "terminal", "service": "text", "parameters": {"operator": "greater_or_equal", "value": 1, "attribute": "rcsb_entry_info.disulfide_bond_count"}}'
NMR_filter = '{"type": "terminal", "service": "text", "parameters": {"operator": "exact_match", "value": "NMR", "attribute": "rcsb_entry_info.experimental_method"}}'
GK_filter = '{"type": "terminal", "service": "text", "parameters": {"operator": "exact_match", "value": "King, G.F.", "attribute": "audit_author.name"}}'
filters = '{"type": "group", "logical_operator": "and", "nodes": [' + disulfide_filter + ',' + NMR_filter + ',' + GK_filter + ']}'
full_query = '{"query": ' + filters + ', "request_options": {"return_all_hits": true}, "return_type": "polymer_instance"}'
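# Sanity-check that the hand-assembled query string is well-formed JSON before
# sending it (json is from the standard library; purely a local validation):
import json
json.loads(full_query)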
response = requests.get(pdbAPI + full_query)
print(response) # should return 200
print(type(response.json()))
#print(response.json()) #uncomment this line if you want to see the results
pdb_results = response.json()
pdb_list = []
for x in pdb_results['result_set']:
pdb_list.append (x['identifier'])
print (pdb_list)
data_query_base = "https://data.rcsb.org/rest/v1/core/polymer_entity_instance/"
def swapSymbols(iter):
return iter.replace(".","/")
pdb_list2 = list(map(swapSymbols,pdb_list))
print(pdb_list2)
data_response = requests.get(data_query_base + "1DL0/A")
print(data_response) # should return 200
vds_list = []
for instance in pdb_list2:
data_response = requests.get(data_query_base + instance)
if data_response.status_code == 200:
data_result = data_response.json()
for x in data_result['rcsb_polymer_struct_conn']:
if (x['connect_type'] == 'disulfide bridge' and x['connect_partner']['label_seq_id']-x['connect_target']['label_seq_id']==1):
vds_list.append (data_result['rcsb_polymer_entity_instance_container_identifiers']['entry_id'])
print(vds_list)
BMRB_LookupString = 'http://api.bmrb.io/v2/search/get_bmrb_ids_from_pdb_id/'
BMRB_ID_List = []
for PDB_ID in vds_list:
BMRB_response = requests.get(BMRB_LookupString + PDB_ID)
if BMRB_response.status_code == 200:
BMRB_result = BMRB_response.json()
for x in BMRB_result:
for y in x['match_types']:
if y == 'Author Provided':
BMRB_ID_List.append (x['bmrb_id'])
print(BMRB_ID_List)
chemical_shifts_list = []
for ID in BMRB_ID_List:
x = requests.get("http://api.bmrb.io/v2/entry/" + ID + "?saveframe_category=assigned_chemical_shifts")
chemical_shifts_list.append (x.json())
#print(chemical_shifts_list)
bmrb_link = "https://bmrb.io/ftp/pub/bmrb/nmr_pdb_integrated_data/adit_nmr_matched_pdb_bmrb_entry_ids.csv"
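# A sketch of this alternate approach: read BMRB's matched-IDs CSV with pandas
# and build a PDB -> BMRB lookup from it. The column names below are an
# assumption about the file layout and may need adjusting to the real header.
import pandas as pd
matched = pd.read_csv(bmrb_link, names=['bmrb_id', 'pdb_id'])
pdb_to_bmrb = matched.groupby('pdb_id')['bmrb_id'].apply(list).to_dict()
print({k: pdb_to_bmrb[k] for k in list(pdb_to_bmrb)[:3]})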
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: \title{myHDL Combinational Logic Elements
Step2: Demultiplexers
Step4: myHDL Module
Step6: myHDL Testing
Step7: Verilog Conversion
Step9: \begin{figure}
Step10: PYNQ-Z1 Deployment
Step11: Video of Deployment
Step13: myHDL Module
Step15: myHDL Testing
Step16: Verilog Conversion
Step18: \begin{figure}
Step19: PYNQ-Z1 Deployment
Step21: Video of Deployment
Step23: myHDL Testing
Step24: Verilog Conversion
Step26: \begin{figure}
Step28: PYNQ-Z1 Deployment
Step30: myHDL Testing
Step31: Verilog Conversion
Step33: \begin{figure}
Step35: PYNQ-Z1 Deployment
Step37: myHDL Testing
Step38: Verilog Conversion
Step40: \begin{figure}
Step42: PYNQ-Z1 Deployment
Step43: myHDL Testing
Step44: Verilog Conversion
Step45: \begin{figure}
|
<ASSISTANT_TASK:>
Python Code:
#This notebook also uses the `(some) LaTeX environments for Jupyter`
#https://github.com/ProfFan/latex_envs wich is part of the
#jupyter_contrib_nbextensions package
from myhdl import *
from myhdlpeek import Peeker
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sympy import *
init_printing()
import itertools
#EE drawing tools in python from https://cdelker.bitbucket.io/SchemDraw/
import SchemDraw as schem
import SchemDraw.elements as e
import SchemDraw.logic as l
#https://github.com/jrjohansson/version_information
%load_ext version_information
%version_information myhdl, myhdlpeek, numpy, pandas, matplotlib, sympy, itertools, SchemDraw
#helper functions to read in the .v and .vhd generated files into python
def VerilogTextReader(loc, printresult=True):
with open(f'{loc}.v', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***Verilog module from {loc}.v***\n\n', VerilogText)
return VerilogText
def VHDLTextReader(loc, printresult=True):
with open(f'{loc}.vhd', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***VHDL module from {loc}.vhd***\n\n', VerilogText)
return VerilogText
def ConstraintXDCTextReader(loc, printresult=True):
with open(f'{loc}.xdc', 'r') as xdcText:
ConstraintText=xdcText.read()
if printresult:
print(f'***Constraint file from {loc}.xdc***\n\n', ConstraintText)
return ConstraintText
def TruthTabelGenrator(BoolSymFunc):
"""
Function to generate a truth table from a sympy boolean expression
BoolSymFunc: sympy boolean expression
return TT: a truth table stored in a pandas dataframe
"""
colsL=sorted([i for i in list(BoolSymFunc.rhs.atoms())], key=lambda x:x.sort_key())
colsR=sorted([i for i in list(BoolSymFunc.lhs.atoms())], key=lambda x:x.sort_key())
bitwidth=len(colsL)
cols=colsL+colsR; cols
TT=pd.DataFrame(columns=cols, index=range(2**bitwidth))
for i in range(2**bitwidth):
inputs=[int(j) for j in list(np.binary_repr(i, bitwidth))]
outputs=BoolSymFunc.rhs.subs({j:v for j, v in zip(colsL, inputs)})
inputs.append(int(bool(outputs)))
TT.iloc[i]=inputs
return TT
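# Quick illustration of TruthTabelGenrator on a two-input AND gate (a sketch;
# the DEMUX equations below are fed to it in exactly the same way):
_a, _b, _q = symbols('a, b, q')
TruthTabelGenrator(Eq(_q, _a & _b))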
x, s, y0, y1=symbols('x, s, y_0, y_1')
y12_0Eq=Eq(y0, ~s&x)
y12_1Eq=Eq(y1, s&x)
y12_0Eq, y12_1Eq
T0=TruthTabelGenrator(y12_0Eq)
T1=TruthTabelGenrator(y12_1Eq)
T10=pd.merge(T1, T0, how='left')
T10
y12_0EqN=lambdify([s, x], y12_0Eq.rhs, dummify=False)
y12_1EqN=lambdify([s, x], y12_1Eq.rhs, dummify=False)
SystmaticVals=np.array(list(itertools.product([0,1], repeat=2)))
print(SystmaticVals)
print(y12_0EqN(SystmaticVals[:, 0], SystmaticVals[:, 1]).astype(int))
print(y12_1EqN(SystmaticVals[:, 0], SystmaticVals[:, 1]).astype(int))
@block
def DEMUX1_2_Combo(x, s, y0, y1):
"""
1:2 DEMUX written in full combo
Inputs:
x(bool): input feed
s(bool): channel select
Outputs:
y0(bool): output channel 0
y1(bool): output channel 1
"""
@always_comb
def logic():
y0.next= not s and x
y1.next= s and x
return instances()
TestLen=10
SystmaticVals=list(itertools.product([0,1], repeat=2))
xTVs=np.array([i[1] for i in SystmaticVals]).astype(int)
np.random.seed(15)
xTVs=np.append(xTVs, np.random.randint(0,2, TestLen)).astype(int)
sTVs=np.array([i[0] for i in SystmaticVals]).astype(int)
#the random generator must have a different seed between each generation
#call in order to produce different values for each call
np.random.seed(16)
sTVs=np.append(sTVs, np.random.randint(0,2, TestLen)).astype(int)
TestLen=len(xTVs)
SystmaticVals, sTVs, xTVs
Peeker.clear()
x=Signal(bool(0)); Peeker(x, 'x')
s=Signal(bool(0)); Peeker(s, 's')
y0=Signal(bool(0)); Peeker(y0, 'y0')
y1=Signal(bool(0)); Peeker(y1, 'y1')
DUT=DEMUX1_2_Combo(x, s, y0, y1)
def DEMUX1_2_Combo_TB():
"""myHDL only testbench for module `DEMUX1_2_Combo`"""
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTVs[i])
s.next=int(sTVs[i])
yield delay(1)
raise StopSimulation()
return instances()
sim=Simulation(DUT, DEMUX1_2_Combo_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x', 's', 'y0','y1')
DEMUX1_2_ComboData=Peeker.to_dataframe()
DEMUX1_2_ComboData=DEMUX1_2_ComboData[['x', 's', 'y0','y1']]
DEMUX1_2_ComboData
DEMUX1_2_ComboData['y0Ref']=DEMUX1_2_ComboData.apply(lambda row:y12_0EqN(row['s'], row['x']), axis=1).astype(int)
DEMUX1_2_ComboData['y1Ref']=DEMUX1_2_ComboData.apply(lambda row:y12_1EqN(row['s'], row['x']), axis=1).astype(int)
DEMUX1_2_ComboData
Test0=(DEMUX1_2_ComboData['y0']==DEMUX1_2_ComboData['y0Ref']).all()
Test1=(DEMUX1_2_ComboData['y1']==DEMUX1_2_ComboData['y1Ref']).all()
Test=Test0&Test1
print(f'Module `DEMUX1_2_Combo` works as expected: {Test}')
DUT.convert()
VerilogTextReader('DEMUX1_2_Combo');
#create BitVectors
xTVs=intbv(int(''.join(xTVs.astype(str)), 2))[TestLen:]
sTVs=intbv(int(''.join(sTVs.astype(str)), 2))[TestLen:]
xTVs, bin(xTVs), sTVs, bin(sTVs)
@block
def DEMUX1_2_Combo_TBV():
"""myHDL -> Verilog testbench for module `DEMUX1_2_Combo`"""
x=Signal(bool(0))
s=Signal(bool(0))
y0=Signal(bool(0))
y1=Signal(bool(0))
@always_comb
def print_data():
print(x, s, y0, y1)
#Test Signal Bit Vectors
xTV=Signal(xTVs)
sTV=Signal(sTVs)
DUT=DEMUX1_2_Combo(x, s, y0, y1)
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTV[i])
s.next=int(sTV[i])
yield delay(1)
raise StopSimulation()
return instances()
TB=DEMUX1_2_Combo_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('DEMUX1_2_Combo_TBV');
ConstraintXDCTextReader('DEMUX1_2');
x, s0, s1, y0, y1, y2, y3=symbols('x, s0, s1, y0, y1, y2, y3')
y14_0Eq=Eq(y0, ~s0&~s1&x)
y14_1Eq=Eq(y1, s0&~s1&x)
y14_2Eq=Eq(y2, ~s0&s1&x)
y14_3Eq=Eq(y3, s0&s1&x)
y14_0Eq, y14_1Eq, y14_2Eq, y14_3Eq
T0=TruthTabelGenrator(y14_0Eq)
T1=TruthTabelGenrator(y14_1Eq)
T2=TruthTabelGenrator(y14_2Eq)
T3=TruthTabelGenrator(y14_3Eq)
T10=pd.merge(T1, T0, how='left')
T20=pd.merge(T2, T10, how='left')
T30=pd.merge(T3, T20, how='left')
T30
y14_0EqN=lambdify([x, s0, s1], y14_0Eq.rhs, dummify=False)
y14_1EqN=lambdify([x, s0, s1], y14_1Eq.rhs, dummify=False)
y14_2EqN=lambdify([x, s0, s1], y14_2Eq.rhs, dummify=False)
y14_3EqN=lambdify([x, s0, s1], y14_3Eq.rhs, dummify=False)
SystmaticVals=np.array(list(itertools.product([0,1], repeat=3)))
print(SystmaticVals)
print(y14_0EqN(SystmaticVals[:, 2], SystmaticVals[:, 1], SystmaticVals[:, 0]).astype(int))
print(y14_1EqN(SystmaticVals[:, 2], SystmaticVals[:, 1], SystmaticVals[:, 0]).astype(int))
print(y14_2EqN(SystmaticVals[:, 2], SystmaticVals[:, 1], SystmaticVals[:, 0]).astype(int))
print(y14_3EqN(SystmaticVals[:, 2], SystmaticVals[:, 1], SystmaticVals[:, 0]).astype(int))
@block
def DEMUX1_4_Combo(x, s0, s1, y0, y1, y2, y3):
"""
1:4 DEMUX written in full combo
Inputs:
x(bool): input feed
s0(bool): channel select 0
s1(bool): channel select 1
Outputs:
y0(bool): output channel 0
y1(bool): output channel 1
y2(bool): output channel 2
y3(bool): output channel 3
"""
@always_comb
def logic():
y0.next= (not s0) and (not s1) and x
y1.next= s0 and (not s1) and x
y2.next= (not s0) and s1 and x
y3.next= s0 and s1 and x
return instances()
TestLen=10
SystmaticVals=list(itertools.product([0,1], repeat=3))
xTVs=np.array([i[2] for i in SystmaticVals]).astype(int)
np.random.seed(15)
xTVs=np.append(xTVs, np.random.randint(0,2, TestLen)).astype(int)
s0TVs=np.array([i[1] for i in SystmaticVals]).astype(int)
#the random generator must have a different seed between each generation
#call in order to produce different values for each call
np.random.seed(16)
s0TVs=np.append(s0TVs, np.random.randint(0,2, TestLen)).astype(int)
s1TVs=np.array([i[0] for i in SystmaticVals]).astype(int)
#the random generator must have a different seed between each generation
#call in order to produce different values for each call
np.random.seed(17)
s1TVs=np.append(s1TVs, np.random.randint(0,2, TestLen)).astype(int)
TestLen=len(xTVs)
SystmaticVals, xTVs, s0TVs, s1TVs
Peeker.clear()
x=Signal(bool(0)); Peeker(x, 'x')
s0=Signal(bool(0)); Peeker(s0, 's0')
s1=Signal(bool(0)); Peeker(s1, 's1')
y0=Signal(bool(0)); Peeker(y0, 'y0')
y1=Signal(bool(0)); Peeker(y1, 'y1')
y2=Signal(bool(0)); Peeker(y2, 'y2')
y3=Signal(bool(0)); Peeker(y3, 'y3')
DUT=DEMUX1_4_Combo(x, s0, s1, y0, y1, y2, y3)
def DEMUX1_4_Combo_TB():
"""myHDL only testbench for module `DEMUX1_4_Combo`"""
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTVs[i])
s0.next=int(s0TVs[i])
s1.next=int(s1TVs[i])
yield delay(1)
raise StopSimulation()
return instances()
sim=Simulation(DUT, DEMUX1_4_Combo_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x', 's1', 's0', 'y0', 'y1', 'y2', 'y3')
DEMUX1_4_ComboData=Peeker.to_dataframe()
DEMUX1_4_ComboData=DEMUX1_4_ComboData[['x', 's1', 's0', 'y0', 'y1', 'y2', 'y3']]
DEMUX1_4_ComboData
DEMUX1_4_ComboData['y0Ref']=DEMUX1_4_ComboData.apply(lambda row:y14_0EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_ComboData['y1Ref']=DEMUX1_4_ComboData.apply(lambda row:y14_1EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_ComboData['y2Ref']=DEMUX1_4_ComboData.apply(lambda row:y14_2EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_ComboData['y3Ref']=DEMUX1_4_ComboData.apply(lambda row:y14_3EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_ComboData
Test0=(DEMUX1_4_ComboData['y0']==DEMUX1_4_ComboData['y0Ref']).all()
Test1=(DEMUX1_4_ComboData['y1']==DEMUX1_4_ComboData['y1Ref']).all()
Test2=(DEMUX1_4_ComboData['y2']==DEMUX1_4_ComboData['y2Ref']).all()
Test3=(DEMUX1_4_ComboData['y3']==DEMUX1_4_ComboData['y3Ref']).all()
Test=Test0&Test1&Test2&Test3
print(f'Module `DEMUX1_4_Combo` works as expected: {Test}')
DUT.convert()
VerilogTextReader('DEMUX1_4_Combo');
#create BitVectors
xTVs=intbv(int(''.join(xTVs.astype(str)), 2))[TestLen:]
s0TVs=intbv(int(''.join(s0TVs.astype(str)), 2))[TestLen:]
s1TVs=intbv(int(''.join(s1TVs.astype(str)), 2))[TestLen:]
xTVs, bin(xTVs), s0TVs, bin(s0TVs), s1TVs, bin(s1TVs)
@block
def DEMUX1_4_Combo_TBV():
"""myHDL -> Verilog testbench for module `DEMUX1_4_Combo`"""
x=Signal(bool(0))
s0=Signal(bool(0))
s1=Signal(bool(0))
y0=Signal(bool(0))
y1=Signal(bool(0))
y2=Signal(bool(0))
y3=Signal(bool(0))
@always_comb
def print_data():
print(x, s0, s1, y0, y1, y2, y3)
#Test Signal Bit Vectors
xTV=Signal(xTVs)
s0TV=Signal(s0TVs)
s1TV=Signal(s1TVs)
DUT=DEMUX1_4_Combo(x, s0, s1, y0, y1, y2, y3)
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTV[i])
s0.next=int(s0TV[i])
s1.next=int(s1TV[i])
yield delay(1)
raise StopSimulation()
return instances()
TB=DEMUX1_4_Combo_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('DEMUX1_4_Combo_TBV');
ConstraintXDCTextReader('DEMUX1_4');
@block
def DEMUX1_4_DMS(x, s0, s1, y0, y1, y2, y3):
"""
1:4 DEMUX via DEMUX stacking
Inputs:
x(bool): input feed
s0(bool): channel select 0
s1(bool): channel select 1
Outputs:
y0(bool): output channel 0
y1(bool): output channel 1
y2(bool): output channel 2
y3(bool): output channel 3
"""
s0_y0y1_WIRE=Signal(bool(0))
s0_y2y3_WIRE=Signal(bool(0))
x_s1_DEMUX=DEMUX1_2_Combo(x, s1, s0_y0y1_WIRE, s0_y2y3_WIRE)
s1_y0y1_DEMUX=DEMUX1_2_Combo(s0_y0y1_WIRE, s0, y0, y1)
s1_y2y3_DEMUX=DEMUX1_2_Combo(s0_y2y3_WIRE, s0, y2, y3)
return instances()
TestLen=10
SystmaticVals=list(itertools.product([0,1], repeat=3))
xTVs=np.array([i[2] for i in SystmaticVals]).astype(int)
np.random.seed(15)
xTVs=np.append(xTVs, np.random.randint(0,2, TestLen)).astype(int)
s0TVs=np.array([i[1] for i in SystmaticVals]).astype(int)
#the random generator must have a different seed between each generation
#call in order to produce different values for each call
np.random.seed(16)
s0TVs=np.append(s0TVs, np.random.randint(0,2, TestLen)).astype(int)
s1TVs=np.array([i[0] for i in SystmaticVals]).astype(int)
#the random generator must have a different seed between each generation
#call in order to produce different values for each call
np.random.seed(17)
s1TVs=np.append(s1TVs, np.random.randint(0,2, TestLen)).astype(int)
TestLen=len(xTVs)
SystmaticVals, xTVs, s0TVs, s1TVs
Peeker.clear()
x=Signal(bool(0)); Peeker(x, 'x')
s0=Signal(bool(0)); Peeker(s0, 's0')
s1=Signal(bool(0)); Peeker(s1, 's1')
y0=Signal(bool(0)); Peeker(y0, 'y0')
y1=Signal(bool(0)); Peeker(y1, 'y1')
y2=Signal(bool(0)); Peeker(y2, 'y2')
y3=Signal(bool(0)); Peeker(y3, 'y3')
DUT=DEMUX1_4_DMS(x, s0, s1, y0, y1, y2, y3)
def DEMUX1_4_DMS_TB():
"""myHDL only testbench for module `DEMUX1_4_DMS`"""
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTVs[i])
s0.next=int(s0TVs[i])
s1.next=int(s1TVs[i])
yield delay(1)
raise StopSimulation()
return instances()
sim=Simulation(DUT, DEMUX1_4_DMS_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x', 's1', 's0', 'y0', 'y1', 'y2', 'y3')
DEMUX1_4_DMSData=Peeker.to_dataframe()
DEMUX1_4_DMSData=DEMUX1_4_DMSData[['x', 's1', 's0', 'y0', 'y1', 'y2', 'y3']]
DEMUX1_4_DMSData
Test=DEMUX1_4_DMSData==DEMUX1_4_ComboData[['x', 's1', 's0', 'y0', 'y1', 'y2', 'y3']]
Test=Test.all().all()
print(f'DEMUX1_4_DMS is equivalent to DEMUX1_4_Combo: {Test}')
DUT.convert()
VerilogTextReader('DEMUX1_4_DMS');
#create BitVectors
xTVs=intbv(int(''.join(xTVs.astype(str)), 2))[TestLen:]
s0TVs=intbv(int(''.join(s0TVs.astype(str)), 2))[TestLen:]
s1TVs=intbv(int(''.join(s1TVs.astype(str)), 2))[TestLen:]
xTVs, bin(xTVs), s0TVs, bin(s0TVs), s1TVs, bin(s1TVs)
@block
def DEMUX1_4_DMS_TBV():
"""myHDL -> Verilog testbench for module `DEMUX1_4_DMS`"""
x=Signal(bool(0))
s0=Signal(bool(0))
s1=Signal(bool(0))
y0=Signal(bool(0))
y1=Signal(bool(0))
y2=Signal(bool(0))
y3=Signal(bool(0))
@always_comb
def print_data():
print(x, s0, s1, y0, y1, y2, y3)
#Test Signal Bit Vectors
xTV=Signal(xTVs)
s0TV=Signal(s0TVs)
s1TV=Signal(s1TVs)
DUT=DEMUX1_4_DMS(x, s0, s1, y0, y1, y2, y3)
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTV[i])
s0.next=int(s0TV[i])
s1.next=int(s1TV[i])
yield delay(1)
raise StopSimulation()
return instances()
TB=DEMUX1_4_DMS_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('DEMUX1_4_DMS_TBV');
@block
def DEMUX1_2_B(x, s, y0, y1):
"""
1:2 DEMUX in behavioral modeling
Inputs:
x(bool): input feed
s(bool): channel select
Outputs:
y0(bool): output channel 0
y1(bool): output channel 1
"""
@always_comb
def logic():
if s==0:
#take note that since we have
#two outputs, their next-state values
#must both be set, else the last
#value will persist until it changes
y0.next=x
y1.next=0
else:
y0.next=0
y1.next=x
return instances()
TestLen=10
SystmaticVals=list(itertools.product([0,1], repeat=2))
xTVs=np.array([i[1] for i in SystmaticVals]).astype(int)
np.random.seed(15)
xTVs=np.append(xTVs, np.random.randint(0,2, TestLen)).astype(int)
sTVs=np.array([i[0] for i in SystmaticVals]).astype(int)
#the random generator must have a different seed between each generation
#call in order to produce different values for each call
np.random.seed(16)
sTVs=np.append(sTVs, np.random.randint(0,2, TestLen)).astype(int)
TestLen=len(xTVs)
SystmaticVals, sTVs, xTVs
Peeker.clear()
x=Signal(bool(0)); Peeker(x, 'x')
s=Signal(bool(0)); Peeker(s, 's')
y0=Signal(bool(0)); Peeker(y0, 'y0')
y1=Signal(bool(0)); Peeker(y1, 'y1')
DUT=DEMUX1_2_B(x, s, y0, y1)
def DEMUX1_2_B_TB():
"""myHDL only testbench for module `DEMUX1_2_B`"""
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTVs[i])
s.next=int(sTVs[i])
yield delay(1)
raise StopSimulation()
return instances()
sim=Simulation(DUT, DEMUX1_2_B_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x', 's', 'y0','y1')
DEMUX1_2_BData=Peeker.to_dataframe()
DEMUX1_2_BData=DEMUX1_2_BData[['x', 's', 'y0','y1']]
DEMUX1_2_BData
Test=DEMUX1_2_BData==DEMUX1_2_ComboData[['x', 's', 'y0','y1']]
Test=Test.all().all()
print(f'DEMUX1_2_B is equivalent to DEMUX1_2_Combo: {Test}')
DUT.convert()
VerilogTextReader('DEMUX1_2_B');
#create BitVectors
xTVs=intbv(int(''.join(xTVs.astype(str)), 2))[TestLen:]
sTVs=intbv(int(''.join(sTVs.astype(str)), 2))[TestLen:]
xTVs, bin(xTVs), sTVs, bin(sTVs)
@block
def DEMUX1_2_B_TBV():
"""myHDL -> Verilog testbench for module `DEMUX1_2_B`"""
x=Signal(bool(0))
s=Signal(bool(0))
y0=Signal(bool(0))
y1=Signal(bool(0))
@always_comb
def print_data():
print(x, s, y0, y1)
#Test Signal Bit Vectors
xTV=Signal(xTVs)
sTV=Signal(sTVs)
DUT=DEMUX1_2_B(x, s, y0, y1)
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTV[i])
s.next=int(sTV[i])
yield delay(1)
raise StopSimulation()
return instances()
TB=DEMUX1_2_B_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('DEMUX1_2_B_TBV');
@block
def DEMUX1_4_B(x, s0, s1, y0, y1, y2, y3):
"""
1:4 DEMUX written behaviorally
Inputs:
x(bool): input feed
s0(bool): channel select 0
s1(bool): channel select 1
Outputs:
y0(bool): output channel 0
y1(bool): output channel 1
y2(bool): output channel 2
y3(bool): output channel 3
"""
@always_comb
def logic():
if s0==0 and s1==0:
y0.next=x; y1.next=0
y2.next=0; y3.next=0
elif s0==1 and s1==0:
y0.next=0; y1.next=x
y2.next=0; y3.next=0
elif s0==0 and s1==1:
y0.next=0; y1.next=0
y2.next=x; y3.next=0
else:
y0.next=0; y1.next=0
y2.next=0; y3.next=x
return instances()
TestLen=10
SystmaticVals=list(itertools.product([0,1], repeat=3))
xTVs=np.array([i[2] for i in SystmaticVals]).astype(int)
np.random.seed(15)
xTVs=np.append(xTVs, np.random.randint(0,2, TestLen)).astype(int)
s0TVs=np.array([i[1] for i in SystmaticVals]).astype(int)
#the random generator must have a different seed between each generation
#call in order to produce different values for each call
np.random.seed(16)
s0TVs=np.append(s0TVs, np.random.randint(0,2, TestLen)).astype(int)
s1TVs=np.array([i[0] for i in SystmaticVals]).astype(int)
#the random generator must have a different seed between each generation
#call in order to produce different values for each call
np.random.seed(17)
s1TVs=np.append(s1TVs, np.random.randint(0,2, TestLen)).astype(int)
TestLen=len(xTVs)
SystmaticVals, xTVs, s0TVs, s1TVs
Peeker.clear()
x=Signal(bool(0)); Peeker(x, 'x')
s0=Signal(bool(0)); Peeker(s0, 's0')
s1=Signal(bool(0)); Peeker(s1, 's1')
y0=Signal(bool(0)); Peeker(y0, 'y0')
y1=Signal(bool(0)); Peeker(y1, 'y1')
y2=Signal(bool(0)); Peeker(y2, 'y2')
y3=Signal(bool(0)); Peeker(y3, 'y3')
DUT=DEMUX1_4_B(x, s0, s1, y0, y1, y2, y3)
def DEMUX1_4_B_TB():
myHDL only testbench for module `DEMUX1_4_B`
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTVs[i])
s0.next=int(s0TVs[i])
s1.next=int(s1TVs[i])
yield delay(1)
raise StopSimulation()
return instances()
sim=Simulation(DUT, DEMUX1_4_B_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x', 's1', 's0', 'y0', 'y1', 'y2', 'y3')
DEMUX1_4_BData=Peeker.to_dataframe()
DEMUX1_4_BData=DEMUX1_4_BData[['x', 's1', 's0', 'y0', 'y1', 'y2', 'y3']]
DEMUX1_4_BData
Test=DEMUX1_4_BData==DEMUX1_4_ComboData[['x', 's1', 's0', 'y0', 'y1', 'y2', 'y3']]
Test=Test.all().all()
print(f'DEMUX1_4_B is equivalent to DEMUX1_4_Combo: {Test}')
DUT.convert()
VerilogTextReader('DEMUX1_4_B');
#create BitVectors
xTVs=intbv(int(''.join(xTVs.astype(str)), 2))[TestLen:]
s0TVs=intbv(int(''.join(s0TVs.astype(str)), 2))[TestLen:]
s1TVs=intbv(int(''.join(s1TVs.astype(str)), 2))[TestLen:]
xTVs, bin(xTVs), s0TVs, bin(s0TVs), s1TVs, bin(s1TVs)
@block
def DEMUX1_4_B_TBV():
myHDL -> testbench for module `DEMUX1_4_B`
x=Signal(bool(0))
s0=Signal(bool(0))
s1=Signal(bool(0))
y0=Signal(bool(0))
y1=Signal(bool(0))
y2=Signal(bool(0))
y3=Signal(bool(0))
@always_comb
def print_data():
print(x, s0, s1, y0, y1, y2, y3)
#Test Signal Bit Vectors
xTV=Signal(xTVs)
s0TV=Signal(s0TVs)
s1TV=Signal(s1TVs)
DUT=DEMUX1_4_B(x, s0, s1, y0, y1, y2, y3)
@instance
def stimules():
for i in range(TestLen):
x.next=int(xTV[i])
s0.next=int(s0TV[i])
s1.next=int(s1TV[i])
yield delay(1)
raise StopSimulation()
return instances()
TB=DEMUX1_4_B_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('DEMUX1_4_B_TBV');
@block
def DEMUX1_4_BV(x, S, Y):
1:4 DEMUX written via behavioral with
bit vectors
Inputs:
x(bool): input feed
S(2bit vector): channel select bitvector;
min=0, max=3
Outputs:
Y(4bit vector): output channel bitvector;
values min=0, max=15; allowed values are 0,1,2,4,8
in this application
@always_comb
def logic():
#here concat is used to build up the word
#from the x input
if S==0:
Y.next=concat(intbv(0)[3:], x)  # '0001'
elif S==1:
Y.next=concat(intbv(0)[2:], x, intbv(0)[1:])  # '0010'
elif S==2:
Y.next=concat(intbv(0)[1:], x, intbv(0)[2:])  # '0100'
else:
Y.next=concat(x, intbv(0)[3:])  # '1000'
return instances()
xTVs=np.array([0,1])
xTVs=np.append(xTVs, np.random.randint(0,2,6)).astype(int)
TestLen=len(xTVs)
np.random.seed(12)
STVs=np.arange(0,4)
STVs=np.append(STVs, np.random.randint(0,4, 5))
TestLen, xTVs, STVs
Peeker.clear()
x=Signal(bool(0)); Peeker(x, 'x')
S=Signal(intbv(0)[2:]); Peeker(S, 'S')
Y=Signal(intbv(0)[4:]); Peeker(Y, 'Y')
DUT=DEMUX1_4_BV(x, S, Y)
def DEMUX1_4_BV_TB():
@instance
def stimules():
for i in STVs:
for j in xTVs:
S.next=int(i)
x.next=int(j)
yield delay(1)
raise StopSimulation()
return instances()
sim=Simulation(DUT, DEMUX1_4_BV_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x', 'S', 'Y', start_time=0, stop_time=2*TestLen+2)
DEMUX1_4_BVData=Peeker.to_dataframe()
DEMUX1_4_BVData=DEMUX1_4_BVData[['x', 'S', 'Y']]
DEMUX1_4_BVData
DEMUX1_4_BVData['y0']=None; DEMUX1_4_BVData['y1']=None; DEMUX1_4_BVData['y2']=None; DEMUX1_4_BVData['y3']=None
DEMUX1_4_BVData[['y3', 'y2', 'y1', 'y0']]=DEMUX1_4_BVData[['Y']].apply(lambda bv: [int(i) for i in bin(bv, 4)], axis=1, result_type='expand')
DEMUX1_4_BVData['s0']=None; DEMUX1_4_BVData['s1']=None
DEMUX1_4_BVData[['s1', 's0']]=DEMUX1_4_BVData[['S']].apply(lambda bv: [int(i) for i in bin(bv, 2)], axis=1, result_type='expand')
DEMUX1_4_BVData=DEMUX1_4_BVData[['x', 'S', 's0', 's1', 'Y', 'y3', 'y2', 'y1', 'y0']]
DEMUX1_4_BVData
DEMUX1_4_BVData['y0Ref']=DEMUX1_4_BVData.apply(lambda row:y14_0EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_BVData['y1Ref']=DEMUX1_4_BVData.apply(lambda row:y14_1EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_BVData['y2Ref']=DEMUX1_4_BVData.apply(lambda row:y14_2EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_BVData['y3Ref']=DEMUX1_4_BVData.apply(lambda row:y14_3EqN(row['x'], row['s0'], row['s1']), axis=1).astype(int)
DEMUX1_4_BVData
Test=(DEMUX1_4_BVData[['y0', 'y1', 'y2', 'y3']].values==DEMUX1_4_BVData[['y0Ref', 'y1Ref', 'y2Ref', 'y3Ref']].values).all()
print(f'Module `DEMUX1_4_BV` works as expected: {Test}')
DUT.convert()
VerilogTextReader('DEMUX1_4_BV');
ConstraintXDCTextReader('DEMUX1_4_BV');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise
Step2: Exercise
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from bokeh.charts import TimeSeries, output_notebook, show
# Get data
# Process data
# Output option
# Create timeseries chart
# Show chart
# Style your timeseries chart
# Show new chart
# Compute moving average
# Create chart with moving average
# Show chart with moving average
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part One
Step2: We'll look at a couple of variables, including pregnancy length and birth weight. The effect size we'll consider is the difference in the means.
Step3: For the first example, I extract the pregnancy length for first babies and others. The results are pandas Series objects.
Step4: The actual difference in the means is 0.078 weeks, which is only 13 hours.
Step5: The null hypothesis is that there is no difference between the groups. We can model that by forming a pooled sample that includes first babies and others.
Step6: Then we can simulate the null hypothesis by shuffling the pool and dividing it into two groups, using the same sizes as the actual sample.
Step7: The result of running the model is two NumPy arrays with the shuffled pregnancy lengths
Step8: Then we compute the same test statistic using the simulated data
Step9: If we run the model 1000 times and compute the test statistic, we can see how much the test statistic varies under the null hypothesis.
Step10: Here's the sampling distribution of the test statistic under the null hypothesis, with the actual difference in means indicated by a gray line.
Step11: The p-value is the probability that the test statistic under the null hypothesis exceeds the actual value.
Step20: In this case the result is about 15%, which means that even if there is no difference between the groups, it is plausible that we could see a sample difference as big as 0.078 weeks.
Step25: HypothesisTest is an abstract parent class that encodes the template. Child classes fill in the missing methods. For example, here's the test from the previous section.
Step26: Now we can run the test by instantiating a DiffMeansPermute object
Step27: And we can plot the sampling distribution of the test statistic under the null hypothesis.
Step28: Difference in standard deviation
Step29: Here's the code to test your solution to the previous exercise.
Step30: Difference in birth weights
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function, division
import numpy
import scipy.stats
import matplotlib.pyplot as pyplot
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
import first
# seed the random number generator so we all get the same results
numpy.random.seed(19)
# some nicer colors from http://colorbrewer2.org/
COLOR1 = '#7fc97f'
COLOR2 = '#beaed4'
COLOR3 = '#fdc086'
COLOR4 = '#ffff99'
COLOR5 = '#386cb0'
live, firsts, others = first.MakeFrames()
def TestStatistic(data):
group1, group2 = data
test_stat = abs(group1.mean() - group2.mean())
return test_stat
group1 = firsts.prglngth
group2 = others.prglngth
actual = TestStatistic((group1, group2))
actual
n, m = len(group1), len(group2)
pool = numpy.hstack((group1, group2))
def RunModel():
numpy.random.shuffle(pool)
data = pool[:n], pool[n:]
return data
RunModel()
TestStatistic(RunModel())
test_stats = numpy.array([TestStatistic(RunModel()) for i in range(1000)])
test_stats.shape
pyplot.vlines(actual, 0, 300, linewidth=3, color='0.8')
pyplot.hist(test_stats, color=COLOR5)
pyplot.xlabel('difference in means')
pyplot.ylabel('count')
None # It's interesting that these None's are significant to the notebook.
pvalue = sum(test_stats >= actual) / len(test_stats)
pvalue
class HypothesisTest(object):
Represents a hypothesis test.
def __init__(self, data):
Initializes.
data: data in whatever form is relevant
self.data = data
self.MakeModel()
self.actual = self.TestStatistic(data)
self.test_stats = None
def PValue(self, iters=1000):
Computes the distribution of the test statistic and p-value.
iters: number of iterations
returns: float p-value
self.test_stats = numpy.array([self.TestStatistic(self.RunModel())
for _ in range(iters)])
count = sum(self.test_stats >= self.actual)
return count / iters
def MaxTestStat(self):
Returns the largest test statistic seen during simulations.
return max(self.test_stats)
def PlotHist(self, label=None):
Draws a Cdf with vertical lines at the observed test stat.
ys, xs, patches = pyplot.hist(self.test_stats, color=COLOR4)
pyplot.vlines(self.actual, 0, max(ys), linewidth=3, color='0.8')
pyplot.xlabel('test statistic')
pyplot.ylabel('count')
def TestStatistic(self, data):
Computes the test statistic.
data: data in whatever form is relevant
raise UnimplementedMethodException()
def MakeModel(self):
Build a model of the null hypothesis.
pass
def RunModel(self):
Run the model of the null hypothesis.
returns: simulated data
raise UnimplementedMethodException()
class DiffMeansPermute(HypothesisTest):
Tests a difference in means by permutation.
def TestStatistic(self, data):
Computes the test statistic.
data: data in whatever form is relevant
group1, group2 = data
test_stat = abs(group1.mean() - group2.mean())
return test_stat
def MakeModel(self):
Build a model of the null hypothesis.
group1, group2 = self.data
self.n, self.m = len(group1), len(group2)
self.pool = numpy.hstack((group1, group2))
def RunModel(self):
Run the model of the null hypothesis.
returns: simulated data
numpy.random.shuffle(self.pool)
data = self.pool[:self.n], self.pool[self.n:]
return data
data = (firsts.prglngth, others.prglngth)
ht = DiffMeansPermute(data)
p_value = ht.PValue(iters=1000)
print('\nmeans permute pregnancy length')
print('p-value =', p_value)
print('actual =', ht.actual)
print('ts max =', ht.MaxTestStat())
ht.PlotHist()
# Solution goes here
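# One possible solution to the exercise (an assumption about the intended answer): reuse the
# permutation machinery of DiffMeansPermute, but make the test statistic the absolute
# difference in standard deviations.
class DiffStdPermute(DiffMeansPermute):
    def TestStatistic(self, data):
        group1, group2 = data
        return abs(group1.std() - group2.std())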
data = (firsts.prglngth, others.prglngth)
ht = DiffStdPermute(data)
p_value = ht.PValue(iters=1000)
print('\nstd permute pregnancy length')
print('p-value =', p_value)
print('actual =', ht.actual)
print('ts max =', ht.MaxTestStat())
data = (firsts.totalwgt_lb.dropna(), others.totalwgt_lb.dropna())
ht = DiffMeansPermute(data)
p_value = ht.PValue(iters=1000)
print('\nmeans permute birthweight')
print('p-value =', p_value)
print('actual =', ht.actual)
print('ts max =', ht.MaxTestStat())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Simple trial wavefunction
|
<ASSISTANT_TASK:>
Python Code:
# Imports assumed by the cells below (the original notebook's setup cell is not shown here):
from sympy import Symbol, symbols, exp, sqrt, diff, simplify, cse, S
import sympy.printing.llvmjitcode as jit
from cubature import cubature
from timeit import default_timer as timer
import matplotlib.pyplot as plt
%matplotlib inline
r1 = Symbol('r1') # magnitude of vector r1 (electron 1 to nucleus distance)
r2 = Symbol('r2') # magnitude of vector r2 (electron 2 to nucleus distance)
r12 = Symbol('r12') # |r2-r1| (magnitude of vector r2-r1)
beta = Symbol('beta')
R1 = exp(-2*r1)
R2 = exp(-2*r2)
G = exp(r12/2/(1 + beta*r12))
R1*R2*G
def del_spherical(e, r):
Compute Laplacian for expression e with respect to symbol r.
Currently works only with radial dependence
t1 = r*r*diff(e, r)
t2 = diff(t1, r)/(r*r)
return simplify(t2)
def grad_spherical(e, r, rhat):
Compute gradient for expression e with respect to symbol r.
The symbol rhat is the unit vector in the radial direction.
t1 = diff(e, r)
t2 = simplify(t1)
return t2*rhat
# Wavefunction has the form F1 * F2 * G
# grad (FG) = G grad(F) + F grad(G)
def grad_combo(F, gradF, G, gradG):
t = F*gradG + G*gradF
return simplify(t)
# del^2 (FG) = G del^2(F) + F del^2(G) + 2 grad(F) * grad(G)
def del_combo(F, gradF, delsqF, G, gradG, delsqG):
t = G * delsqF + F * delsqG + 2*gradF*gradG
t2 = t.subs(rhat1*rhat1, 1)
t3 = t2.subs(rhat2*rhat2, 1)
t4 = t3.subs(rhat12*rhat12, 1)
t5 = t4.subs(rhat1*rhat12, rhat1_12)
t6 = t5.subs(rhat2*rhat12, rhat2_12)
return simplify(t6)
# Unit vectors
rhat1 = Symbol('rhat1')
rhat2 = Symbol('rhat2')
rhat12 = Symbol('rhat12')
rhat1_12 = Symbol('rhat1_12')
rhat2_12 = Symbol('rhat2_12')
# TODO: Explain this derivation
grad_1 = grad_spherical(R1, r1, rhat1)
del2_1 = del_spherical(R1, r1)
grad_2 = grad_spherical(R2, r2, rhat2)
del2_2 = del_spherical(R2, r2)
gradG_1 = grad_spherical(G, r12, rhat12)
del2G_1 = del_spherical(G, r12)
grad_R1G = grad_combo(R1, grad_1, G, -gradG_1)
delsq_R1G = del_combo(R1, grad_1, del2_1, G, -gradG_1, del2G_1)
grad_R2G = grad_combo(R2, grad_2, G, gradG_1)
delsq_R2G = del_combo(R2, grad_2, del2_2, G, gradG_1, del2G_1)
# This is the integrand for the denominator
psi2 = R1*R1*R2*R2*G*G
h1 = -1/S(2) * (R2*R2) * R1 * G * delsq_R1G - 2*psi2/r1
h2 = -1/S(2) * (R1*R1) * R2 * G * delsq_R2G - 2*psi2/r2
h = h1 + h2 + psi2/r12
h = simplify(h)
# This is the integrand for the numerator
h
# Symbols for performing the integration
x1,y1,z1 = symbols('x1 y1 z1')
x2,y2,z2 = symbols('x2 y2 z2')
# Add a very tiny softening to keep the evaluation of the integrand from blowing up at certain points.
r1_c = sqrt(x1*x1 + y1*y1 + z1*z1 + 1e-20)
r2_c = sqrt(x2*x2 + y2*y2 + z2*z2 + 1e-20)
r12_c = sqrt((x2-x1)*(x2-x1) + (y2-y1)*(y2-y1) + (z2-z1)*(z2-z1) + 1e-20)
rhat1_12_c = (x1*(x2-x1) + y1*(y2-y1) + z1*(z2-z1))/r1_c/r12_c
rhat2_12_c = (x2*(x2-x1) + y2*(y2-y1) + z2*(z2-z1))/r2_c/r12_c
# Perform 6D integral. Using scipy.integrate.nquad will take too long.
# Use Cubature and Sympy LLVM JIT
# TODO: insert link to explain different integrators
beta_val = 1.0
# The next two lines referenced `tmp`/`tmp2`, which are not defined anywhere in this notebook,
# so they are left commented out; the numerator and denominator are rebuilt below via cse.
# num = tmp.subs(beta, beta_val)
# denom = simplify(tmp2.subs(beta, beta_val))
ndim = 6
lim_min = [-30.0]*ndim
lim_max = [30.0]*ndim
repls = [(r1, r1_c), (r2, r2_c), (r12, r12_c), (rhat1_12, rhat1_12_c), (rhat2_12, rhat2_12_c)]
repls_n = [a for a in repls]
tmp_h = h.subs(beta, beta_val)
tmp_d = psi2.subs(beta, beta_val)
# Use CSE (Common Subexpression Elimination) to speed up the integrand evaluation
cse_h = cse([tmp_h, tmp_d])
repls.extend(cse_h[0])
expr_h = [repls, cse_h[1]]
jit_expr = jit.llvm_callable( [x1, y1, z1, x2, y2, z2], expr_h, callback_type='cubature')
epsrel = 1e-4
start = timer()
res = cubature(jit_expr, ndim=ndim, fdim=2, xmin=lim_min, xmax=lim_max, relerr=epsrel)
end = timer()
E = res[0][0]/res[0][1]
print("For beta = ",beta_val," Energy = ",E)
print("Time: %.2f seconds"%(end-start))
# Collect computation of energy into a single function
def compute_energy(beta_val):
repls = [(r1, r1_c), (r2, r2_c), (r12, r12_c), (rhat1_12, rhat1_12_c), (rhat2_12, rhat2_12_c)]
repls_n = [a for a in repls]
tmp_h = h.subs(beta, beta_val)
tmp_d = psi2.subs(beta, beta_val)
cse_h = cse([tmp_h,tmp_d])
repls.extend(cse_h[0])
expr_h = [repls, cse_h[1]]
jit_expr = jit.llvm_callable( [x1, y1, z1, x2, y2, z2], expr_h, callback_type='cubature')
epsrel = 1e-4
res = cubature(jit_expr, ndim=ndim, fdim=2, xmin=lim_min, xmax=lim_max, relerr=epsrel)
E = res[0][0]/res[0][1]
return E
# Compute energies over a range of variational parameters to find the minimum.
energies = []
betas = []
for i in range(10):
beta_val = i*.02 + 0.01
e = compute_energy(beta_val)
print(beta_val, e)
betas.append(beta_val)
energies.append(e)
fig = plt.plot(betas, energies)
fig[0].axes.set_xlabel(r"$\beta$")
fig[0].axes.set_ylabel("Energy")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Iteration with For Loops
Step2: List Comprehension in Python
Step3: Combining Control Flow Loops
Step4: File I/O
Step5: File I/O
Step6: There is one wrinkle to using things like np.genfromtxt, np.savetxt, and np.loadtxt
Step7: Writing Functions and Modules in Python
Step8: An important note
Step10: Doc Strings
Step11: Modules
Step13: This kind of syntax should look familiar to you! Whenever you do something like
Step14: Now that I've written this module, we want to run it. In IPython you can simply do "run module.py" and the module will run. In this case, the calculate_distance module should write a file called "distance_parallax.dat," so with the bash
|
<ASSISTANT_TASK:>
Python Code:
x = 5. # assign value of 5 to variable x
if x < 10: # the statement we are testing. IF this statement is true...
print(x) # this block of text will execute. the next line after if, while, or for should always be indented.
if x == 5.: # recall the "==" is used to TEST whether two values are equal to one another
print("True")
else: # if, elif, else statements can be used to test multiple cases
print("False")
y = 5. # assign value of 5 to variable y
while y < 10: # this is the statement we are testing. WHILE this statement is true...
print(y) # print the value of y
y += 1 #then add one to y. so this will only execute WHILE x < 10, as we can see from the output
# here we iterate over the items in a list
z = range(5) # make a list of integers using function range
for i in z: # for each value "i" in my array z...
print(i) # print that value i
# this will take the value i and set it equal to the first value in the list. then will will
# execute the indentation block and set i equal to the NEXT value in the list, and so on.
# we could just as easily do the following, which is equivalent to the above:
for i in range(5):
print(i)
# we can also combine range() and len() to iterate over the indices of a list or array
solar_system = ['sun', 'moon', 'earth', 'pluto']
for i in range(len(solar_system)):
print(i, solar_system[i])
# what is this for loop doing?
for i in solar_system:
print(i, len(i))
# perhaps a more useful example: what if i want to make everything in my list a float?
magnitudes = ['-1.46', '-0.04', '0.12', '0.5', '1.25']
for i in range(len(magnitudes)):
magnitudes[i] = float(magnitudes[i])
print(magnitudes)
magnitudes_lc = [float(i) for i in magnitudes] # for each value i in magnitudes, make that value i into a float.
print(magnitudes_lc)
# Try it yourself. Turn the following for loop into a list comprehension:
empty_list = [] # create an empty list
for num in range(50):
empty_list.append(num)
# Hint: You do not actually need to do append! A list comprehension will build up the list without using that function.
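# One possible answer to the exercise above, written as a list comprehension:
empty_list_lc = [num for num in range(50)]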
num_array = range(20)
for i in num_array:
if i < 1: # for multiple if statements use if, elif, else.
print("Zero")
elif (i <= 1) & (x > 0): # you can combine comparison operators!
print("One")
elif i == 2:
print("Two")
elif (i > 2) & (i <= 3):
pass # pass does nothing
elif i > 3:
break # break quits the loop
# use a for loop to pick out even numbers!
for num in range(2,10):
if num % 2 == 0: # note the "==" this is for checking whether two values are equal, i.e. it is doing "?="
print("Even number {}".format(num))
continue # this tells the loop to continue, although it is not *required* that you use this
elif num % 2 != 0:
print("Odd number {}".format(num))
break # try commenting out the break to see what happens!
# try it for yourself! write a loop that takes the following list and prints only the
# words that are longer than four letters.
# helpful hint: you will need a for loop and an if statement.
solar_system_objects = ['sun', 'moon', 'earth', 'mars', 'pluto', 'neptune', 'venus', 'mercury']
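# One possible answer to the exercise above:
for obj in solar_system_objects:
    if len(obj) > 4:
        print(obj)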
import numpy as np
# numpy has a function called "loadtxt" which will read in text files. you just need to give it the path to the file!
# note that this will actually give you an error. why? because loadtxt doesn't like that there are different
# variable types in different columns, i.e. a mix of strings and floats; it wants to assume everything is float.
# there are a couple of ways around this
stars_table = np.loadtxt('data/BrightStars.dat')
# one option is to simply skip the first column, the star names, since these are the string values that are throwing
# loadtxt off. for this we use the loadtxt option "usecols" where we can feed it a list of column numbers we want to
# use. remember that python is zero indexing, so to have it skip the first column we omit the number zero.
# just like with the bash "sort" command loadtxt sees spaces as separating one column from a new column.
stars_table = np.loadtxt('data/BrightStars.dat', usecols=[1,2,3,4,5,6,7,8,9,10,11])
# the output of this is an array that contains 11 columns and 5016 lines.
type(stars_table), stars_table.shape
# another option is to explicitly tell it all the column types
# we will do this with dictionaries and lists! we make the "dtype" argument of
# loadtxt into a dictionary which has the keys "names" and "formats," each of which
# have tuples that make up their values.
stars_table = np.loadtxt('data/BrightStars.dat', dtype={'names': ('starname', 'ra_hr', 'ra_min', 'ra_s', 'dec_deg',
'dec_arcmin', 'deg_arcsec', 'Vmag', 'B-V',
'Parallax', 'uRA', 'uDec'),
'formats': ('|S15', np.float, np.float, np.float, np.float,
np.float, np.float, np.float,np.float,np.float,
np.float,np.float)})
# the neat thing about doing this (although it is tedious) is that we get all our columns, and now we can refer
# to them by name!
stars_table['starname'], stars_table['Vmag']
# let's look at one last simple option. numpy has another built-in function called "genfromtxt," which can more
# easily handle different data types. If I set the "dtype" argument here to be "None" it will try to guess
# all the types of the columns. This is pretty powerful!
stars_table = np.genfromtxt('data/BrightStars.dat', dtype=None)
# try doing np.genfromtxt? to see all the other arguments that the function genfromtxt can take
# find the argument that will help us define column names, and rerun the above command using it.
stars_table = np.genfromtxt('data/BrightStars.dat', dtype=None, names=('starname', 'ra_hr', 'ra_min', 'ra_s', 'dec_deg',
'dec_arcmin', 'deg_arcsec', 'Vmag', 'B-V',
'Parallax', 'uRA', 'uDec'))
stars_table['Parallax'] / 1000
# now we can work with the contents of this file! let's say we want to find the brightest star.
brightest_star_mag = stars_table['Vmag'].min()
print(brightest_star_mag)
# how would you find all the stars that have Vmag < 3? we did this in bash with a series of commands, but
# np.where makes it much easier!
Vlessthan_three = np.where(stars_table['Vmag'] < 3.) # recall that where only gives you the INDICES
len(stars_table[Vlessthan_three]) # we can easily find that it is 172 lines, just like we found on our bash homework
# how would you find the parallax of the
# most distant star (smallest parallax) with Vmag < 3? hint: half of the work is already done in the previous cell!
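# one way to get it: take the minimum parallax among the stars with Vmag < 3
stars_table['Parallax'][Vlessthan_three].min()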
# to get the name of this star we can do the following:
stars_table[np.where((stars_table['Vmag'] < 3) & (stars_table['Parallax'] == 1.01))]['starname']
# if we want to do something like in our first assignment and make a list of the five brightest stars' magnitudes
# it is much easier now that we've read in the data!
sort_index = np.argsort(stars_table['Vmag']) # this is a built in function of numpy that returns the INDICES that sort an array
five_brightest_mag = stars_table['Vmag'][sort_index[:5]]
five_brightest_mag
# the function that we will be using to write output to a file is numpy's "savetxt"
# savetxt takes as arguments the filename (in quotes) and then the data (in array form) that you write to that filename
# the "fmt" argument is optional but specifying it allows us to choose the specific data type
# in this case we are choosing float values with two decimal places. try it without this to see what you get!
np.savetxt('five_brightest_table.dat', five_brightest_mag, fmt='%.2f')
%%bash
cat five_brightest_table.dat
# here we have only written one line, but what if we want to write a multi-column file?
# let's get the parallax values of the five brightest stars. remember that argsort gave us INDICES. let's reuse them.
five_brightest_dist = stars_table['Parallax'][sort_index[:5]]
five_brightest_dist
# if we want to save both these arrays, we need to feed both of them to "savetxt" in the following way.
# when we look at the output of this we'll notice that savetxt defaults to saving arrays as ROWS.
np.savetxt('five_brightest_table.dat', (five_brightest_mag,five_brightest_dist), fmt='%.2f')
%%bash
cat five_brightest_table.dat
# if we want each array to be a column instead we should use numpy's built-in transpose function
# let's look at what transpose does
print(five_brightest_mag, five_brightest_dist)
print(np.transpose([five_brightest_mag, five_brightest_dist]))
# to get a two-column data table we do the following:
np.savetxt('five_brightest_table.dat', np.transpose([five_brightest_mag, five_brightest_dist]), fmt=['%.2f', '%.2f'])
%%bash
cat five_brightest_table.dat
from astropy.io import ascii
# we use the read function with the path to the file name. notice how we don't define data types
stars_table_astropy = ascii.read('data/BrightStars.dat')
print(stars_table_astropy)
print(stars_table_astropy['col1']) # we don't have names for our columns so astropy's read just names them in this way
print(type(stars_table_astropy['col1'][0])) # it knows that elements of the first column are strings!
# now let's see how easy it is to write a table using this module
ascii.write([stars_table_astropy['col1'], stars_table_astropy['col9']],'five_brightest_table_astropy.dat')
%%bash
cat five_brightest_table_astropy.dat | head -5
# that is a rather simple example of how powerful ascii.write() can be. let's see what else it can do
# here i can pass it a LIST of column names, and a DICTIONARY of data types for those columns
ascii.write([stars_table_astropy['col1'], stars_table_astropy['col9']], 'five_brightest_table_astropy.dat',
names=['StarName', 'Vmag'], formats={'StarName': '%s', 'Vmag': '%.2f'})
%%bash
cat five_brightest_table_astropy.dat | head -5
def add_four(x): # on this first line we name our function "add" and designate that it has one required argument, x
return x + 4 # for a function to output values we use this "return" statement
# now if i call my function with a value, i should get the output x + 4
add_four(4)
# let's try an example with two arguments
def multiply(x,y):
return x*y
# in this case we need to feed multiply two arguments
multiply(2,3)
z = 5
print(add_four(z)) # we can pass z by reference to our add function
print(z) # but its value won't be changed globally
def append_element(x):
return x.append(4)
list = [1,2,3]
append_element(list) # pass the list by reference to our append_element function
print(list) # the values in the list are changed globally
def multiply(x,y):
Returns the product of x and y
Parameters
----------
x, y: int
the values for which the product is calculated
Returns
-------
result: int
the product of x and y
# now if we do multiply? or help(multiply) we get info from the doc string!
multiply?
help(multiply)
%%file add_subtract.py
# simple addition and subtraction module
def add_nums(x,y):
return x + y
def subtract_nums(x,y):
return x - y
# If you recall from the first lecture, we can now import this module and use the
# functions that we have defined within it.
import add_subtract
print(add_subtract.add_nums(2,3))
print(add_subtract.subtract_nums(5,3))
# another way to import this module is as follows:
from add_subtract import add_nums
add_nums(2,3) # and now we can recall it directly
%%file calculate_distance.py
import numpy as np
# module to read in data file, find distance in lyr from parallax, and write output to a file
stars_table = np.genfromtxt('data/BrightStars.dat', dtype=None, names=('starname', 'ra_hr', 'ra_min', 'ra_s', 'dec_deg',
'dec_arcmin', 'deg_arcsec', 'Vmag', 'B-V',
'Parallax', 'uRA', 'uDec'))
parallax_mas = stars_table['Parallax']
parallax_as = stars_table['Parallax'] / 1000.
def distance_in_lyr(x):
Finds distance in lyr given parallax in arcseconds
Parameters
----------
x: float
parallax in arcseconds
Returns
-------
result: float
distance in lyr
return (1. / x)*3.26
if __name__ == "__main__":
print("Running calculate_distance as main")
all_distances_lyr = distance_in_lyr(parallax_as)
np.savetxt('distance_parallax.dat', np.transpose([parallax_as, all_distances_lyr]), fmt=['%.2e', '%.2f'])
%run calculate_distance.py
%%bash
cat distance_parallax.dat
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: We need a first order system, so convert the second order system
Step3: To solve the ODE you define an ode object, specify the solver to use, here cvode, and pass the right hand side function. You request the solution at specific timepoints by passing an array of times to the solve member.
Step4: You can continue the solver by passing further times. Calling the solve routine reinits the solver, so you can restart at whatever time. To continue from the last computed solution, pass the last obtained time and solution.
Step5: The solution fails at a time around 24 seconds. Errors can be due to many things. Here, however, the reason is simple
Step6: To plot the simple oscillator, we show a (t,x) plot of the solution. Doing this over 60 seconds can be done as follows
Step7: Simple Oscillator Example
Step8: The solver interpolates solutions to return the solution at the required output times
Step9: Simple Oscillator Example
Step10: By inspection of the returned times you can see how efficient the solver can solve this problem
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
from scikits.odes import ode
#data of the oscillator
k = 4.0
m = 1.0
#initial position and speed data on t=0, x[0] = u, x[1] = \dot{u}, xp = \dot{x}
initx = [1, 0.1]
def rhseqn(t, x, xdot):
we create rhs equations for the problem
xdot[0] = x[1]
xdot[1] = - k/m * x[0]
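# note: 'cvode' below selects the SUNDIALS CVODE integrator, and rhseqn writes the
# derivatives into xdot in place rather than returning them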
solver = ode('cvode', rhseqn, old_api=False)
solution = solver.solve([0., 1., 2.], initx)
print('\n t Solution Exact')
print('------------------------------------')
for t, u in zip(solution.values.t, solution.values.y):
print('{0:>4.0f} {1:15.6g} {2:15.6g}'.format(t, u[0],
initx[0]*np.cos(np.sqrt(k/m)*t)+initx[1]*np.sin(np.sqrt(k/m)*t)/np.sqrt(k/m)))
#Solve over the next hour by continuation
times = np.linspace(0, 3600, 61)
times[0] = solution.values.t[-1]
solution = solver.solve(times, solution.values.y[-1])
if solution.errors.t:
print ('Error: ', solution.message, 'Error at time', solution.errors.t)
print ('Computed Solutions:')
print('\n t Solution Exact')
print('------------------------------------')
for t, u in zip(solution.values.t, solution.values.y):
print('{0:>4.0f} {1:15.6g} {2:15.6g}'.format(t, u[0],
initx[0]*np.cos(np.sqrt(k/m)*t)+initx[1]*np.sin(np.sqrt(k/m)*t)/np.sqrt(k/m)))
solver = ode('cvode', rhseqn, old_api=False, max_steps=5000)
solution = solver.solve(times, solution.values.y[-1])
if solution.errors.t:
print ('Error: ', solution.message, 'Error at time', solution.errors.t)
print ('Computed Solutions:')
print('\n t Solution Exact')
print('------------------------------------')
for t, u in zip(solution.values.t, solution.values.y):
print('{0:>4.0f} {1:15.6g} {2:15.6g}'.format(t, u[0],
initx[0]*np.cos(np.sqrt(k/m)*t)+initx[1]*np.sin(np.sqrt(k/m)*t)/np.sqrt(k/m)))
#plot of the oscilator
solver = ode('cvode', rhseqn, old_api=False)
times = np.linspace(0,60,600)
solution = solver.solve(times, initx)
plt.plot(solution.values.t,[x[0] for x in solution.values.y])
plt.xlabel('Time [s]')
plt.ylabel('Position [m]')
plt.show()
solver = ode('cvode', rhseqn, old_api=False)
time = 0.
solver.init_step(time, initx)
plott = []
plotx = []
while True:
time += 0.1
# fix roundoff error at end
if time > 60: time = 60
solution = solver.step(time)
if solution.errors.t:
print ('Error: ', solution.message, 'Error at time', solution.errors.t)
break
#we store output for plotting
plott.append(solution.values.t)
plotx.append(solution.values.y[0])
if time >= 60:
break
plt.plot(plott,plotx)
plt.xlabel('Time [s]')
plt.ylabel('Position [m]')
plt.show()
print ('plott length:', len(plott), ', last computation times:', plott[-15:]);
solver = ode('cvode', rhseqn, old_api=False, one_step_compute=True)
time = 0.
solver.init_step(time, initx)
plott = []
plotx = []
while True:
solution = solver.step(60)
if solution.errors.t:
print ('Error: ', solution.message, 'Error at time', solution.errors.t)
break
#we store output for plotting
plott.append(solution.values.t)
plotx.append(solution.values.y[0])
if solution.values.t >= 60:
#back up to 60
solver.set_options(one_step_compute=False)
solution = solver.step(60)
plott[-1] = solution.values.t
plotx[-1] = solution.values.y[0]
break
plt.plot(plott,plotx)
plt.xlabel('Time [s]')
plt.ylabel('Position [m]')
plt.show()
print ('plott length:', len(plott), ', last computation times:', plott[-15:]);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: Create a data variable that points to the big 33 GB file and set the chunkbytes to 100 MB, map each item to json.loads since I know that each row is a JSON doc.
Step3: Similar to Spark, you can take one or more items to peek at the dataset.
Step4: Now some simple helper functions that we are going to use as filters.
Step5: Now comes the dask part, which will look really familiar to anyone who has used Spark.
Step6: Call compute and let dask work. At this moment, if you take a look at your process manager, you should see a lot of Python processes.
Step7: We can see how long the computation took, in this case
Step8: With the word count done we can just sort the python list and see the most common words.
Step9: Save a file with the results
|
<ASSISTANT_TASK:>
Python Code:
import re
import json
import time
import nltk
import dask
import dask.bag as db
import nltk
from nltk.corpus import stopwords
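# word_tokenize and the stopword list below rely on NLTK data packages; if they are not
# already installed locally, they can be downloaded once with:
# nltk.download('punkt')
# nltk.download('stopwords')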
dask.__version__
nltk.__version__
data = db.from_filenames("RC_2015-05", chunkbytes=100000).map(json.loads)
data.take(1)
no_stopwords = lambda x: x not in stopwords.words('english')
is_word = lambda x: re.search("^[0-9a-zA-Z]+$", x) is not None
subreddit = data.filter(lambda x: x['subreddit'] == 'movies')
bodies = subreddit.pluck('body')
words = bodies.map(nltk.word_tokenize).concat()
words2 = words.map(lambda x: x.lower())
words3 = words2.filter(no_stopwords)
words4 = words3.filter(is_word)
counts = words4.frequencies()
start_time = time.time()
values = counts.compute()
elapsed_time = time.time() - start_time
elapsed_time # seconds
len(values)
sort = sorted(values, key=lambda x: x[1], reverse=True)
sort[:100]
with open('r_movies.txt', 'w') as f:
for item in sort:
f.write("%s %i\n" % item)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Inspection Reports and DocumentCloud
Step2: Here I convert the relatively human-readable dict to the right format for querying DocumentCloud
Step3: The Data
Step4: Each row in df_data corresponds to one armory. Below I have selected the row with data on a Portland, OR armory about two miles from the Portland International Airport.
Step5: Below I change "Unknown" to "NaN" so that, when I try to calculate counts for different variables, it is evident that this information is missing
Step6: If you aren't already familiar with programming
Step7: Python's groupby behaves much like Excel's pivot tables, and is also a process in SQL.
Step8: Below I've made a new dataframe to examine what proportion of armories in a given state had a firing range, and what proportion had lead.
Step9: I counted the number of unique IDs that The Oregonian had (Oregonian ID) for armories within each state, so I'll rename that column to Armory Count for the sake of clarity.
Step10: Here I recombine the two dataframes because one had to be a calculation that counted the number of armories, while df_state_values allowed me to sum the 1s that I had generated for Yes for a given variable, e.g. Inspection conducted?
Step11: I'm going to repurpose df_state_count to more easily show the results of some calculations
Step12: States ordered by what proportion of armories were inspected
Step13: It's verbose and it isn't pretty, but plotting is also built into pandas!
|
<ASSISTANT_TASK:>
Python Code:
import json
import urllib.parse
import numpy as np
import pandas as pd
import requests
%matplotlib inline
# It might be overkill, but I figured it best
# for legibility to separate query arguments as a dict
params_dict = {
"q":"projectid:30072",
"per_page":"1000"
}
params_encoded = urllib.parse.urlencode(params_dict)
r = requests.get('https://www.documentcloud.org/api/search.json', params=params_encoded)
r.json()['documents'][0]
documentcloud_df = pd.read_json(json.dumps(r.json()['documents']))
documentcloud_df.head()
df = pd.read_csv('armories_data - 20161201.csv',dtype={'Oregonian ID':'str','Inspection year':'str'})
df.ix[975]
df.head()
df_modified = df.replace(to_replace=['Unknown','Yes','No'],value=[np.nan,1,0])
df_modified.head()
df_states = df_modified[['State','Oregonian ID','Lead present?','Had firing range?']].groupby('State').sum().reset_index()
df_state_count = df_modified[['State','Oregonian ID']].groupby('State').count().reset_index()
df_state_count.rename(columns={'Oregonian ID':'Armory Count'},inplace=True)
df_state_values = df_modified[['State','Inspection conducted?','Inspection report available?','Lead present?','Lead present outside range?','Had firing range?']].groupby('State').sum().reset_index()
df_states = pd.merge(df_state_count,df_state_values)
df_state_values['Rate of lead in state'] = round(df_states['Lead present?']/df_states['Armory Count'],2)
df_state_values['Rate of inspection'] = round(df_states['Inspection conducted?']/df_states['Armory Count'],2)
df_state_values.sort_values(by=['Rate of inspection'],ascending=False)
df_state_values[['Rate of inspection']].sort_values(
by='Rate of inspection', ascending=False).plot(
kind='bar',
title='Rate of armory inspection by state',
legend=False,
x=df_state_values['State'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <span id="slip_plat_prod">Choose Platform and Product ▴</span>
Step2: <span id="slip_define_extents">Define the Extents of the Analysis ▴</span>
Step3: <span id="slip_load_data">Load Data from the Data Cube ▴</span>
Step4: Visualization
Step5: <span id="slip_change_detect">Change Detection ▴</span>
Step6: <br>
Step7: It is important to note that compositing will shorten the length of baseline's time domain by the window size since ranges less than the composite size are not computed. For a composite size of 5, new's first 5 time values will not have composite values.
Step8: What this composite looks like
Step9: The baseline composite is featured in the figure above (left). It represents what was typical for the past five acquisitions 'leading-up-to' time_to_show. Displayed next to it (right) is the true-color visualization of the acquisition 'at' time_to_show. The new object contains unaltered Landsat scenes that are index-able using a date like time_to_show. The baseline object contains a block of composites of those Landsat scenes that is index-able the same way.
Step10: Filtering NDWI
Step11: How far NDWI filtering gets you
Step12: Highlighted in the center picture are values that meet our NDWI change expectations. Featured in the right-most image is what remains of our original image after NDWI filtering.
Step13: Filtering for RED reflectance increase
Step14: How much further RED reflectance filtering gets you
Step15: <span id="slip_aster">ASTER Global Elevation Models ▴</span>
Step16: Calculating Angle of elevation
Step17: Filtering out pixels that don't meet requirements for steepness
Step18: Visualising our final SLIP product
Step19: <span id="slip_evo">Reviewing the Evolution of the SLIP Product ▴</span>
Step20: <span id="slip_compare_output_baseline">Visual Comparison of SLIP Output and Baseline Composited Scene ▴</span>
|
<ASSISTANT_TASK:>
Python Code:
import sys
import os
sys.path.append(os.environ.get('NOTEBOOK_ROOT'))
import numpy as np
import xarray as xr
import pandas as pd
import matplotlib.pyplot as plt
from utils.data_cube_utilities.dc_display_map import display_map
from utils.data_cube_utilities.clean_mask import landsat_clean_mask_full
# landsat_qa_clean_mask, landsat_clean_mask_invalid
from utils.data_cube_utilities.dc_baseline import generate_baseline
from utils.data_cube_utilities.dc_displayutil import display_at_time
from utils.data_cube_utilities.dc_slip import create_slope_mask
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
import datacube
dc = datacube.Datacube()
platform = 'LANDSAT_8'
product = 'ls8_usgs_sr_scene'
collection = 'c1'
level = 'l2'
# Freetown, Sierra Leone
# (https://www.reuters.com/article/us-leone-mudslide-africa/cities-across-africa-face-threat-of-landslides-like-sierra-leone-idUSKCN1AY115)
# define geographic boundaries in (min, max) format
lon = (-13.3196, -12.9366)
lat = (8.1121, 8.5194)
# define date range boundaries in (min,max) format
# There should be a landslide by Freetown during August 2017.
date_range =("2016-01-01", "2017-12-31")
display_map(lat, lon)
# Define desired bands. For SLIP, only red, nir, swir and pixel_qa will be necessary.
desired_bands = ['red','nir','swir1','pixel_qa']
# Add blue and green bands since they are needed for visualizing results (RGB).
desired_bands = desired_bands + ['green', 'blue']
# Load area.
landsat_ds = dc.load(product = product,\
platform = platform,\
lat = lat,\
lon = lon,\
time = date_range,\
measurements = desired_bands,
group_by='solar_day',
dask_chunks={'time':1, 'longitude': 1000, 'latitude': 1000}).persist()
# clean_mask = landsat_qa_clean_mask(landsat_ds, platform) & \
# (landsat_ds != -9999).to_array().all('variable') & \
# landsat_clean_mask_invalid(landsat_ds)
clean_mask = landsat_clean_mask_full(dc, landsat_ds, product=product, platform=platform,
collection=collection, level=level).persist()
# Determine the times with data.
data_time_mask = (clean_mask.sum(['latitude', 'longitude']) > 0).persist()
clean_mask = clean_mask.sel(time=data_time_mask)
landsat_ds = landsat_ds.sel(time=data_time_mask)
landsat_ds = landsat_ds.where(clean_mask).persist()
time_to_show = '2017-08-04'
acq_to_show = landsat_ds.sel(time=time_to_show, method='nearest')
rgb_da = acq_to_show[['red', 'green', 'blue']].squeeze().to_array().compute()
vmin = rgb_da.quantile(0.05).values
vmax = rgb_da.quantile(0.95).values
rgb_da.plot.imshow(vmin=vmin, vmax=vmax)
plt.show()
new = acq_to_show
# Generate a moving average of n values leading up to current time.
baseline = generate_baseline(landsat_ds, composite_size = 3, mode = 'average')
(len(new.time), len(baseline.time))
display_at_time([baseline, new], time = time_to_show, width = 2, w = 12)
ndwi_new = (new.nir- new.swir1)/(new.nir + new.swir1)
ndwi_baseline = (baseline.nir - baseline.swir1)/ (baseline.nir + baseline.swir1)
ndwi_change = ndwi_new - ndwi_baseline
new_ndwi_filtered = new.where(abs(ndwi_change) > 0.2)
display_at_time([new, (new, new_ndwi_filtered),new_ndwi_filtered],
time = time_to_show,
width = 3, w =14)
red_change = (new.red - baseline.red)/(baseline.red)
new_red_and_ndwi_filtered = new_ndwi_filtered.where(red_change > 0.4)
display_at_time([new, (new, new_red_and_ndwi_filtered),new_red_and_ndwi_filtered],
time = time_to_show,
width = 3, w = 14)
aster = dc.load(product="terra_aster_gdm",\
lat=lat,\
lon=lon,\
measurements=['dem'],
group_by='solar_day')
# Create a slope-mask. False: if pixel <15 degees; True: if pixel > 15 degrees;
is_above_slope_threshold = create_slope_mask(aster, degree_threshold = 15,resolution = 30)
slip_product = new_red_and_ndwi_filtered.where(is_above_slope_threshold)
display_at_time([new, (new, slip_product),slip_product],
time = time_to_show,
width = 3, w = 14)
display_at_time([new, (new,new_ndwi_filtered),new_ndwi_filtered,new, (new, new_red_and_ndwi_filtered),new_red_and_ndwi_filtered, new, (new, slip_product),slip_product],
time = time_to_show,
width = 3, w = 14, h = 12)
display_at_time([baseline, (new,slip_product)],
time = time_to_show,
width = 2, mode = 'blend', color = [210,7,7] , w = 14)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If we're trying to build a network we need two things
Step2: That's a lot of information! Let's grab out all of the speakers. All the speaker elements will have a text attribute that has their actual name, or abbreviation of their name.
Step3: To get a unique list we'll use set
Step4: Great start! In Network Analysis there are two fundamental principles. A node is an entity, it can have relationships with other entities. In literature, this is often a character, but it could be a Twitter user, organization, geographic location, or even words!
Step5: That's all we need to make a basic network and do some analysis! We have all the character names and the scenes in which they appear. We can collate some of this information to find out in which scenes certain characters appear together. This will happen in our make_graph function.
Step6: We can graph this using matplotlib
Step7: Our graph, G, is a powerful object. We can calculate many of the standard network analysis statistics. There are various measures of centrality, many of which were referenced in the reading.
Step8: Wikipedia defines "degree centrality"
Step9: Wikipedia defines "betweeness centrality"
Step10: Wikipedia defines "eigenvector centrality"
Step11: Challenge
Step12: We can then add this to a D3 template
Step13: We'll then IFrame in the HTML file
Step15: Gini Coefficient
Step16: Just to demonstrate, let's make a very unequal array
Step17: The gini coefficient should be close to 1
Step18: What if we have half zeroes and half ones?
Step19: All ones?
Step20: Now we can use the gini function on Othello to see how evenly distributed centrality is
Step21: Great, but that's not terribly interesting itself, we want to see how it relates to other plays. We'll do that for homework.
Step22: Homework
|
<ASSISTANT_TASK:>
Python Code:
with open("shakespeare_data/plays_xml/othello_ps_v3.xml") as f:
othello_xml = etree.fromstring(f.read().encode())
all_elements = list(othello_xml.iter())
all_elements
[e.text for e in all_elements if e.tag == "speaker"]
set([e.text for e in all_elements if e.tag == "speaker"])
cast_dict = {}
for c in set([e.text for e in all_elements if e.tag == "speaker"]):
cast_dict[c] = {"num_lines": 0,
"scenes": []}
cast_dict
# extract all scene elements from the xml
scenes = [e for e in all_elements if e.tag == "scene"]
scenes
elements = [e.find("acttitle").text for e in all_elements if e.tag == "act"]
def get_cast_dict(all_elements):
'''
returns a dictionary with the total number of lines and scenes a character appears in
'''
cast_dict = {}
# first get a unique set of all characters appearing in the play
for c in set([e.text for e in all_elements if e.tag == "speaker"]):
cast_dict[c] = {"num_lines": 0,
"scenes": []}
# extract all scene elements from the xml
scenes = [e for e in all_elements if e.tag == "scene"]
acts = [e for e in all_elements if e.tag == "act"]
# acts = [e.find("acttitle").text for e in all_elements if e.tag == "act"]
for a in acts:
# get title of acts
act_title = a.find("acttitle").text
# get scene elements
scenes = [e for e in a if e.tag == "scene"]
# iterate through each scene
for sc in scenes:
# grab all the speeches in the scene
speeches = [s for s in sc.getchildren() if s.tag == "speech"]
# iterate through speeches
for s in speeches:
# increment number of lines for the speaker
cast_dict[s.find("speaker").text]["num_lines"] += len(s.findall("line"))
# find all the speaker for each speech
speakers = [s.find("speaker").text for s in speeches]
# add the title of the scene for each speaker appearing in the scene
for s in set(speakers):
cast_dict[s]["scenes"].append(act_title + " " + sc.find("scenetitle").text)
# reassign scenes to only a unique set
for c in cast_dict.keys():
cast_dict[c]["scenes"] = list(set(cast_dict[c]["scenes"]))
return cast_dict
cast_dict = get_cast_dict(all_elements)
cast_dict
def make_graph(c_dict):
'''
This function accepts a dictionary with number of lines and scenes to create a
NetworkX graph object
'''
# setup graph object
G = nx.Graph()
# add nodes with attributes of number of lines and scenes
for c in c_dict.keys():
if c_dict[c]["num_lines"] > 0:
G.add_node(
c,
number_of_lines=c_dict[c]["num_lines"],
scenes=c_dict[c]["scenes"]
)
# make edges by iterating over all combinations of nodes
for (node1, data1), (node2, data2) in itertools.combinations(G.nodes(data=True), 2):
# count scenes together by getting union of their sets
scenes_together = len(set(data1['scenes']) & set(data2['scenes']))
if scenes_together:
# add more weight for more scenes together
G.add_edge(node1, node2, weight=scenes_together)
return G
G = make_graph(cast_dict)
# nodes should be sized by number of lines
node_size = [data['number_of_lines'] for __, data in G.nodes(data=True)]
node_color = 'blue'
plt.figure(figsize=(13,8)) # make the figure size a little larger
plt.axis('off') # remove the axis, which isn't meaningful in this case
plt.title("Othello's Social Network", fontsize=20)
# The 'k' argument determines how spaced out the nodes will be from
# one another on the graph.
pos = nx.spring_layout(G, k=0.5)
nx.draw_networkx(
G,
pos=pos,
node_size=node_size,
node_color=node_color,
edge_color='gray', # change edge color
alpha=0.3, # make nodes more transparent to make labels clearer
font_size=14,
)
network_tab = Table()
network_tab.append_column(label="Characters", values=[c for c in sorted(cast_dict.keys())])
network_tab.show()
dc = [x[1] for x in sorted(nx.degree_centrality(G).items(), key=lambda x: x[0])]
network_tab.append_column(label="Degree Centrality", values=dc)
network_tab.show()
bc = [x[1] for x in sorted(nx.betweenness_centrality(G).items(), key=lambda x: x[0])]
network_tab.append_column(label="Betweenness Centrality", values=bc)
network_tab.show()
ec = [x[1] for x in sorted(nx.eigenvector_centrality(G).items(), key=lambda x: x[0])]
network_tab.append_column(label="Eigenvector Centrality", values=ec)
network_tab.show()
from networkx.readwrite import json_graph
import json
d3_data = json_graph.node_link_data(G)
d3_data
import re
with open('network.html', 'r') as f:
net_html = f.read()
pattern = re.compile(r'(<script type="application/json" id="net">)(\s*.*)')
net_html = net_html.replace(re.findall(pattern, net_html)[-1][-1].strip(), json.dumps(d3_data).strip())
with open('network.html', 'w') as f:
f.write(net_html)
from IPython.display import IFrame
IFrame('network.html', width=700, height=900)
def gini(array):
Calculate the Gini coefficient of a numpy array.
# https://github.com/oliviaguest/gini
array = np.sort(array) # values must be sorted
index = np.arange(1, array.shape[0] + 1) # index per array element
n = array.shape[0] # number of array elements
return ((np.sum((2 * index - n - 1) * array)) / (n * np.sum(array))) #Gini coefficient
np.concatenate((np.zeros(99), np.ones(1)))
gini(np.concatenate((np.zeros(99), np.ones(1))))
gini(np.concatenate((np.zeros(50), np.ones(50))))
gini(np.ones(50))
import numpy as np
gini(network_tab['Eigenvector Centrality'])
def percentage_top_quartile(character_table):
# YOUR CODE HERE
return percentage
percentage_top_quartile(network_tab['Eigenvector Centrality'])
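# A possible sketch for the homework function above. The notebook does not spell out the
# definition, so this assumes "the share of total centrality held by the top quartile of
# characters" (an inequality-style summary in the spirit of the Gini coefficient).
def percentage_top_quartile_sketch(character_table):
    values = np.sort(np.asarray(character_table, dtype=float))
    top_quartile = values[int(np.ceil(0.75 * len(values))):]
    return top_quartile.sum() / values.sum()
percentage_top_quartile_sketch(network_tab['Eigenvector Centrality'])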
!ls shakespeare_data/plays_xml/
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now, let's create and test a pipeline
Step2: Let's first create a simple CUDA kernel within Bifrost.
Step6: Now, let's generate a full pipeline
|
<ASSISTANT_TASK:>
Python Code:
# @title Install C++ deps
%%shell
sudo apt-get -qq install exuberant-ctags libopenblas-dev software-properties-common build-essential
# @title Install python deps
%%shell
pip install -q contextlib2 pint simplejson ctypesgen==1.0.2
# @title Build and Install Bifrost
%%shell
cd "${HOME}"
if [ -d "${HOME}/bifrost_repo" ]; then
echo "Already cloned."
else
git clone https://github.com/ledatelescope/bifrost bifrost_repo
fi
cd "${HOME}/bifrost_repo"
git pull --all
./configure
# Build and install:
make -j all
sudo make install
export LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH}
import os
# Environment path doesn't propagate, so add it manually:
if "/usr/local/lib" not in os.environ['LD_LIBRARY_PATH']:
os.environ['LD_LIBRARY_PATH'] += ":/usr/local/lib"
import bifrost as bf
import numpy as np
x = np.random.randint(256, size=1000)
x_orig = x
x = bf.asarray(x, 'cuda')
y = bf.empty_like(x)
x.flags['WRITEABLE'] = False
x.bf.immutable = True
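# x and y now live in the 'cuda' memory space; the bf.map call below JIT-compiles the
# expression string into a GPU kernel and runs it element-wise over the arrays.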
for _ in range(3):
bf.map("y = x * 3", {'x': x, 'y': y})
x = x.copy('system')
y = y.copy('system')
if isinstance(x_orig, bf.ndarray):
x_orig = x
np.testing.assert_equal(y, x_orig * 3)
from bifrost.block import Pipeline, NumpyBlock, NumpySourceBlock
def generate_different_arrays():
Yield four different groups of two arrays
dtypes = ['float32', 'float64', 'complex64', 'int8']
shapes = [(4,), (4, 5), (4, 5, 6), (2,) * 8]
for array_index in range(4):
yield np.ones(
shape=shapes[array_index],
dtype=dtypes[array_index])
yield 2 * np.ones(
shape=shapes[array_index],
dtype=dtypes[array_index])
def switch_types(array):
Return two copies of the array, one with a different type
return np.copy(array), np.copy(array).astype(np.complex128)
occurences = 0
def compare_arrays(array1, array2):
Make sure that all arrays coming in are equal
global occurences
occurences += 1
np.testing.assert_almost_equal(array1, array2)
blocks = [
(NumpySourceBlock(generate_different_arrays), {'out_1': 0}),
(NumpyBlock(switch_types, outputs=2), {'in_1': 0, 'out_1': 1, 'out_2': 2}),
(NumpyBlock(np.fft.fft), {'in_1': 2, 'out_1': 3}),
(NumpyBlock(np.fft.ifft), {'in_1': 3, 'out_1': 4}),
(NumpyBlock(compare_arrays, inputs=2, outputs=0), {'in_1': 1, 'in_2': 4})]
Pipeline(blocks).main()
# The function `compare_arrays` should be hit 8 times:
assert occurences == 8
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A dataframe can be thought of as a relational database table or an Excel sheet. It has rows and columns. The rows correspond with an individual item or entity. The columns correspond with various features of the entity. In the above, the columns and the row index are in navy. The actual data falls in the middle of the table.
Step2: What Can Pandas Do?
|
<ASSISTANT_TASK:>
Python Code:
# Import pandas under the name pd
import pandas as pd
import numpy as np
import matplotlib
%matplotlib inline
matplotlib.style.use('fivethirtyeight')
# Create a dataframe from a CSV file
df = pd.read_csv('data/cfpb_complaints_with_fictitious_data.csv')
# Any dataframe at end of cell gets HTML representation
# Head limits the number of output rows
df
# For clarity, add color.
from IPython.display import HTML
with open('static/notebook_style.css', 'r', encoding='utf8') as f:
data = f.read()
HTML('<style>' + data + '</style>')
# You can select a column series of a dataframe with this notation:
# dataframe[column_name]
# OR if there are no spaces in the name
# df.Product
# Head limits it output to the first 5 elements
df['Product'].head()
# We can cut down the dataframe as needed
# You can also pass a list of columns to get back a subframe
# dataframe[list_of_column_names]
# http://pandas.pydata.org/pandas-docs/stable/indexing.html
tdf = df[['Product', 'Company', 'State']]
tdf.head()
# We can also cut down columns by filtering by number
# Loc is for indexing
tdf.iloc[:5]
# Or by content
# This works because of boolean indexing, which we will get to.
tdf[tdf['Company'] == 'Experian'].head(5)
# We can filter and then analyze columns.
filtered_df = tdf[tdf['Company'] == 'Experian']
# And we can use Series methods if we want to examine columns
filtered_df['State'].value_counts()[:5]
# Or get unique values
filtered_df['Product'].unique()[:5]
# We can get column datatypes
filtered_df.dtypes
# We can group the data and view it in aggregate
# http://pandas.pydata.org/pandas-docs/stable/groupby.html
gb = df.groupby(['Product', 'Sub-product'])
# Add count size gives basic numbers.
size = gb.mean().head(15)
size
# Pandas also has simplified plotting
size['Consumer Claim'].sort_values().plot.barh()
# We can reshape data as necessary.
size.unstack().fillna(' ')
# Or get numerical data or transform based on common groups.
output = gb['Consumer Claim'].agg([np.mean, np.median, np.max, np.min]).head(10)
output
# We can use apply to apply functions in a vectorized manner
def transmogrify(row):
id = row['Complaint ID']
product = row['Product']
state = row['State']
return 'Complaint {} is a {} complaint from the state of {}.'.format(id, product, state)
output = df.apply(transmogrify, axis=1)[:5]
output[0]
# We can do database style merges, joins, and concatenations
# http://pandas.pydata.org/pandas-docs/stable/merging.html
df2 = pd.read_csv('data/simple.csv')
df2.head(5)
# Here we non-sensically merge arbitrary numbers from simple to the CFPB dataset
tdf = df.merge(df2, how='inner', left_on='Date received', right_on='Date')
tdf[['Date received', 'Product', 'Count']].head(5)
# We can write the data to disk in a single line
output.to_csv('data/custom_function_output.csv')
# Pandas has built in support for datetime objects, too.
df['Date received'] = pd.to_datetime(df['Date received'])
gb = df.groupby([df['Date received'].dt.year,
df['Date received'].dt.month])
gb.size().plot()
# And support for string methods
contains_lawyer = df['Consumer complaint narrative'].str.contains('lawyer')
# Get all items containing lawyer
data = df[contains_lawyer]['Consumer complaint narrative']
# Get text of first item
data
# This includes regexes for text mining ... https://en.wikipedia.org/wiki/Regular_expression
regex_string = r'([Ll]awyer[\S\s]*?\.|[Aa]ttorney[\S\s]*?\.)'
# Look for each and every instance
lawyer_to_sentence_end = df['Consumer complaint narrative'].str.extract(regex_string,
expand=True)
lawyer_to_sentence_end.dropna().head(5)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's write functions that take a list of numbers and return a list of the cubes and a list of the fourth powers of each element.
Step2: To write functions that return lists of 5th, 6th or 7th powers in the same way, we would have to copy the whole code each time and modify one specific part. That process is very inefficient and makes the code complicated.
Step3: In other words, by pulling out the varying exponent and using it as a parameter, we obtain a function that returns a list of arbitrary powers depending on the argument passed in.
Step4: A function that raises to the 10th power
Step5: Now let's check whether the xn_list function can also handle the following case.
Step6: The function above returns a list of sine values and, unfortunately, cannot be implemented with xn_list, because the varying part on the fourth line is not of the form x ** n.
Step7: Now we can confirm that sin_list(xs) is identical to fun_list(sin, xs).
Step8: Therefore the sin_list function can be redefined as follows.
Step9: The x2_list and x3_list functions covered earlier can be implemented using the fun_list function.
Step10: The xn_list function covered earlier can also be implemented using the fun_list function.
Step11: How to use the lambda keyword
Step12: The function above is identical to the function below; note, however, that they are used differently.
Step13: Lambda usage example 2
Step14: The function above is identical to the function below.
Step15: Lambda usage example 3
Step16: Exercise 1
Step17: Example using the sample solution
Step18: Using a function as a return value
Step19: The return value of linear_fun(a, b) is used in exactly the same way as calling a function.
Step20: In the code above, g stands for the function g(x) = 2*x + 3.
Step21: A list of functions is still of type list.
Step22: Now we can show the tables of the Square and Cubic functions one after another.
Step23: If we want to display the function names more nicely, the code above has to be refined a little.
Step24: In the code above, line 2 looks a bit complicated. In particular
Step25: That is, it sorts the given sequence in ascending order. To sort in descending order, it has to be used as follows.
Step26: The sorted function has gained one more argument. In fact, sorted can take up to four arguments; using the help command shows the information below.
Step27: According to that information, sorted takes four arguments, but the second through fourth arguments already have default values, so even if they are not supplied the Python interpreter uses the specified defaults. Therefore the following holds.
Step28: The other two keyword arguments of sorted, cmp and key, are used in the same way. Their roles are as follows.
Step29: Therefore, when sorting strings, words starting with an uppercase letter are sorted first.
Step30: To sort without distinguishing between upper and lower case, the key argument has to be modified.
Step31: key argument usage example 2
Step32: Applying sorted with its default values sorts by name.
Step33: To sort the list above by grade, the key argument has to be changed as follows.
Step34: The expression passed to the key argument above, lambda student
Step35: To sort the list above by score, the key argument has to be changed as follows.
Step36: Creating functions with keyword arguments
Step37: To fix the height at 3, declare a keyword argument as follows. Then, when only one argument is given, the height is automatically filled in as 3; to change the height, simply change that keyword argument.
Step38: When there are two or more keyword arguments, pay attention to the order of the arguments.
Step39: Global and local variables
Step40: It is also worth adding error handling for cases like the one above.
Step41: The following is a simple example of a variable defined and used inside a function, i.e. a local variable.
Step42: On the other hand, a variable defined outside a function is a global variable.
Step43: Even though x has already been declared as a global variable, calling f1 uses the x declared inside that function. When a function is called, a variable's value is looked up inside the function first, and only if it is not found there is it looked up outside the function.
Step44: When a function is called, variable values are looked up inside the function first and outside the function only if they are not found inside.
Step45: Asking for the value of d outside the function above raises an error, because d is unknown there.
Step46: To use a variable declared inside a function as a global variable, use the global keyword.
|
<ASSISTANT_TASK:>
Python Code:
def x2_list(xs):
L = []
for x in xs:
L.append(x**2)
return L
x2_list([0, 0.5, 1, 1.5, 2, 2.5])
def x3_list(xs):
L = []
for x in xs:
L.append(x**3)
return L
def x4_list(xs):
L = []
for x in xs:
L.append(x**4)
return L
def xn_list(n, xs):
L = []
for x in xs:
L.append(x**n)
return L
xn_list(5, [0, 0.5, 1, 1.5, 2, 2.5])
xn_list(10, [0, 0.5, 1, 1.5, 2, 2.5])
from math import *
def sin_list(xs):
L = []
for x in xs:
L.append(sin(x))
return L
def fun_list(f, xs):
L = []
for x in xs:
L.append(f(x))
return L
sin_list([0, 0.5, 1, 1.5, 2, 2.5])
fun_list(sin, [0, 0.5, 1, 1.5, 2, 2.5])
fun_list(sin, [0, 0.5, 1, 1.5, 2, 2.5]) == sin_list([0, 0.5, 1, 1.5, 2, 2.5])
def sin_list_1(xs):
return fun_list(sin, xs)
def x2_list_1(xs):
def x_2(x):
return x ** 2
return fun_list(x_2, xs)
x2_list_1([0, 0.5, 1, 1.5, 2, 2.5]) == x2_list([0, 0.5, 1, 1.5, 2, 2.5])
def x3_list_1(xs):
def x_3(x):
return x ** 3
return fun_list(x_3, xs)
x3_list_1([0, 0.5, 1, 1.5, 2, 2.5]) == x3_list([0, 0.5, 1, 1.5, 2, 2.5])
def xn_list_1(n, xs):
def x_n(x):
return x ** n
return fun_list(x_n, xs)
xn_list_1(5, [0, 0.5, 1, 1.5, 2, 2.5]) == xn_list(5, [0, 0.5, 1, 1.5, 2, 2.5])
(lambda x: x*3)(5)
def fx_3(x):
return x * 3
fx_3(5)
(lambda x, y: (x + y) * 2)(3, 4)
def plus_2times(x, y):
return (x + y) * 2
plus_2times(3, 4)
def x2_list_2(xs):
return fun_list(lambda x : x ** 2, xs)
x2_list_2([0, 0.5, 1, 1.5, 2, 2.5]) == x2_list_1([0, 0.5, 1, 1.5, 2, 2.5])
def x3_list_2(xs):
return fun_list(lambda x : x ** 3, xs)
x3_list_2([0, 0.5, 1, 1.5, 2, 2.5]) == x3_list_1([0, 0.5, 1, 1.5, 2, 2.5])
def xn_list_2(n, xs):
return fun_list(lambda x : x ** n, xs)
xn_list_2(5, [0, 0.5, 1, 1.5, 2, 2.5]) == xn_list_1(5, [0, 0.5, 1, 1.5, 2, 2.5])
def fun_table(f):
for i in range(6):
x = i * 0.5
        print("{} {}".format(x, f(x)))
# square and cubic functions
def square(x): return x ** 2
def cubic(x): return x ** 3
print("Square")
fun_table(square)
print("Cubic"); fun_table(cubic)
def linear_fun(a, b):
def linear(x):
return a * x + b
return linear
g = linear_fun(2, 3)
g(5)
funs = [sin, cos]
type(funs)
for f in funs:
print(str(f))
fun_table(f)
str(sin)
for f in funs:
print("<" + str(f).split()[-1][:-1] + " 함수 테이블>")
fun_table(f)
print("\n")
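# An alternative, shown only for illustration (not in the original): every Python function
# exposes its name through the __name__ attribute, which is simpler than parsing str(f).
for f in funs:
    print("<" + f.__name__ + " function table>")
    fun_table(f)
    print("\n")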
sorted([2, 1, 3])
sorted([2, 1, 3], reverse=True)
help(sorted)
sorted([2, 1, 3]) == sorted([2, 1, 3], reverse=False)
'A' < 'a'
sorted("This is a test string from Andrew".split())
sorted("This is a test string from Andrew".split(), key=str.lower)
students_grades = [
('john', 'B', 12),
('jane', 'C', 10),
('dave', 'A', 15),
]
sorted(students_grades)
sorted(students_grades, key=lambda student: student[1])
sorted(students_grades, key=lambda student: student[1], reverse=True)
sorted(students_grades, key=lambda student: student[2])
def area(a, b):
return a * b
print("가로, 세로 길이가 각각 {}, {}인 직사각형의 넓이는 {}이다.".format(3, 3, area(3, 3)))
print("가로, 세로 길이가 각각 {}, {}인 직사각형의 넓이는 {}이다.".format(3, 5, area(3, 5)))
def area(a, breadth=3):
return a * breadth
print("가로, 세로 길이가 각각 {}, {}인 직사각형의 넓이는 {}이다.".format(3, 3, area(3)))
print("가로, 세로 길이가 각각 {}, {}인 직사각형의 넓이는 {}이다.".format(5, 3, area(5)))
print("가로, 세로 길이가 각각 {}, {}인 직사각형의 넓이는 {}이다.".format(5, 7, area(5, 7)))
def strange_volume(x, y_axis = 3, z_axis=4):
return x * y_axis * (z_axis + 1)
print(strange_volume(3, y_axis=5, z_axis=7) == strange_volume(3, z_axis=7, y_axis=5))
print(strange_volume(3, 5, 7) == strange_volume(3, 7, 5))
def f():
print(y)
f()
def f():
try:
print(y)
except:
raise NameError("the varible y is not defined yet")
f()
def f1():
x = '지역변수이다'
print("x는 {}.".format(x))
f1()
x = '전역변수이다'
def f2():
print("x는 {}.".format(x))
f2()
f1()
def d_is_only_defined_here():
d = 10
print("d 값은 여기서만 {}으로 정의되었습니다".format(d))
d_is_only_defined_here()
d
def e_is_made_global():
global e
e = 10
print("e 값은 여기서도 보이며 현재 {} 값이 할당되어 있습니다".format(e))
e_is_made_global()
e
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You may see a warning message from Kubeflow Pipeline logs saying "Insufficient nvidia.com/gpu". If so, this probably means that your GPU-enabled node is still spinning up; please wait for few minutes. You can check the current nodes in your cluster like this
Step2: You should see different "nvidia-smi" logs from the two pipeline steps.
|
<ASSISTANT_TASK:>
Python Code:
import kfp
from kfp import dsl
def gpu_smoking_check_op():
return dsl.ContainerOp(
name='check',
image='tensorflow/tensorflow:latest-gpu',
command=['sh', '-c'],
arguments=['nvidia-smi']
).set_gpu_limit(1)
@dsl.pipeline(
name='GPU smoke check',
description='smoke check as to whether GPU env is ready.'
)
def gpu_pipeline():
gpu_smoking_check = gpu_smoking_check_op()
if __name__ == '__main__':
kfp.compiler.Compiler().compile(gpu_pipeline, 'gpu_smoking_check.yaml')
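# To check the current nodes in the cluster (mentioned in the description above), shell
# commands can be run from the notebook. This assumes kubectl is installed and configured
# to point at the cluster that serves Kubeflow Pipelines.
!kubectl get nodes
!kubectl describe nodes | grep -i "nvidia.com/gpu"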
import kfp
from kfp import dsl
def gpu_p100_op():
return dsl.ContainerOp(
name='check_p100',
image='tensorflow/tensorflow:latest-gpu',
command=['sh', '-c'],
arguments=['nvidia-smi']
).add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-p100').container.set_gpu_limit(1)
def gpu_v100_op():
return dsl.ContainerOp(
name='check_v100',
image='tensorflow/tensorflow:latest-gpu',
command=['sh', '-c'],
arguments=['nvidia-smi']
).add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-v100').container.set_gpu_limit(1)
@dsl.pipeline(
name='GPU smoke check',
description='Smoke check as to whether GPU env is ready.'
)
def gpu_pipeline():
gpu_p100 = gpu_p100_op()
gpu_v100 = gpu_v100_op()
if __name__ == '__main__':
kfp.compiler.Compiler().compile(gpu_pipeline, 'gpu_smoking_check.yaml')
import kfp
import kfp.gcp as gcp
from kfp import dsl
def gpu_p100_op():
return dsl.ContainerOp(
name='check_p100',
image='tensorflow/tensorflow:latest-gpu',
command=['sh', '-c'],
arguments=['nvidia-smi']
).add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-p100').container.set_gpu_limit(1)
def gpu_v100_op():
return dsl.ContainerOp(
name='check_v100',
image='tensorflow/tensorflow:latest-gpu',
command=['sh', '-c'],
arguments=['nvidia-smi']
).add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-v100').container.set_gpu_limit(1)
def gpu_v100_preemptible_op():
v100_op = dsl.ContainerOp(
name='check_v100_preemptible',
image='tensorflow/tensorflow:latest-gpu',
command=['sh', '-c'],
arguments=['nvidia-smi'])
v100_op.container.set_gpu_limit(1)
v100_op.add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-v100')
v100_op.apply(gcp.use_preemptible_nodepool(hard_constraint=True))
return v100_op
@dsl.pipeline(
name='GPU smoking check',
description='Smoking check whether GPU env is ready.'
)
def gpu_pipeline():
gpu_p100 = gpu_p100_op()
gpu_v100 = gpu_v100_op()
gpu_v100_preemptible = gpu_v100_preemptible_op()
if __name__ == '__main__':
kfp.compiler.Compiler().compile(gpu_pipeline, 'gpu_smoking_check.yaml')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Conversion and cleaning
Step2: Descriptive statistics of ratings
Step3: Recommendation Engines
Step4: Memory-based CF
Step5: Item-based CF
Step6: Evaluation
Step7: Memory-based
Step8: Predictions
Step9: Nope, there isn't much improvement. But maybe fine-tuning the two most promising models helps.
Step10: Final model and predictions
Step11: Predict some document ratings
|
<ASSISTANT_TASK:>
Python Code:
# Import data
path = "../data/petdata_binary_1000_100.csv"
raw_data = pd.read_csv(path, index_col="doc_uri")
assert raw_data.shape == (1000,100), "Import error, df has false shape"
# Convert df
data = raw_data.unstack().to_frame().reset_index()
data.columns = ["user", "doc_uri", "rating"]
# Missing value handling
data.fillna(0, inplace=True)
assert data.shape == (raw_data.shape[0] * raw_data.shape[1], 3), "Conversion error, df has false shape"
assert data.rating.max() <= 1., "Value error, max rating over upper bound"
assert data.rating.min() >= -1., "Value error, min rating under lower bound"
data.head()
data.rating.describe().to_frame().T
data.rating.value_counts(normalize=True).to_frame().T
# Plot distribution of (random) ratings
hist = data.rating.plot(kind="hist", grid=True,
bins=[-1.1,-0.9,-0.1,0.1,0.9,1.1])
hist.set(xlabel= "rating")
plt.tight_layout()
plt.savefig("plots/ratings_binary.png", orientation="landscape", dpi=120)
from surprise import KNNWithMeans, SVD, NMF, Dataset, Reader, accuracy
from surprise.prediction_algorithms.random_pred import NormalPredictor
from surprise.model_selection import cross_validate, GridSearchCV
reader = Reader(rating_scale=(-1, 1))
ds = Dataset.load_from_df(data[["user", "doc_uri", "rating"]], reader)
baseline_model = NormalPredictor() # Baseline model, predicts labels based on distribution of ratings
sim_options = {"name": "cosine", # cosine similarity
"user_based": True, # user-based
"min_support": 10 # min number of common items, else pred 0
}
user_knn = KNNWithMeans(sim_options=sim_options)
sim_options = {"name": "cosine", # cosine similarity
"user_based": False, # item-based
"min_support": 5 # min number of common users, else pred 0
}
item_knn = KNNWithMeans(sim_options=sim_options)
for algo_name, algo in zip(["Baseline", "User-based CF", "Item-based CF"],
[baseline_model, user_knn, item_knn]):
history = cross_validate(algo, ds, measures=["RMSE", "MAE"], cv=5, verbose=False)
print("***", algo_name, "***")
print("RMSE: {:0.3f} (std {:0.4f}) <- {}".format(history["test_rmse"].mean(),
history["test_rmse"].std(),
history["test_rmse"]))
print("MAE: {:0.3f} (std {:0.4f}) <- {}".format(history["test_mae"].mean(),
history["test_mae"].std(),
history["test_mae"]))
print("Avg fit time: {:0.5f}s".format(np.array(history["fit_time"]).mean()))
# Models - tune parameters, if you'd like ;)
svd = SVD() # Singular value decomposition
pmf = SVD(biased=False) # Probabilistic matrix factorization
nmf = NMF() # Non-negative matrix factorization
for algo_name, algo in zip(["SVD", "Probabilistic MF", "Non-negative MF"],
[svd, pmf, nmf]):
history = cross_validate(algo, ds, measures=["RMSE", "MAE"], cv=5, verbose=False)
print("***", algo_name, "***")
print("RMSE: {:0.3f} (std {:0.4f}) <- {}".format(history["test_rmse"].mean(),
history["test_rmse"].std(),
history["test_rmse"]))
print("MAE: {:0.3f} (std {:0.4f}) <- {}".format(history["test_mae"].mean(),
history["test_mae"].std(),
history["test_mae"]))
print("Avg fit time: {:0.5f}s".format(np.array(history["fit_time"]).mean()))
# SVD
param_svd = {"n_factors": [1, 100],
"n_epochs": [5, 20],
"reg_all": [0.02, 0.08], # regularization term for all param
"lr_all": [0.001, 0.005]} # learning rate for all param
gs_svd = GridSearchCV(SVD, param_svd, measures=["rmse", "mae"], cv=5)
gs_svd.fit(ds)
print("Best RMSE:", gs_svd.best_score["rmse"])
best_params_svd = gs_svd.best_params["rmse"]
for param in best_params_svd:
print(param, ":", best_params_svd[param])
# NMF
param_nmf = {"n_factors": [15, 100],
"n_epochs": [50, 60],
#"biased": [True, False],
#"reg_pu": [0.04, 0.06, 0.08], # regularization term for users
#"reg_qi": [0.04, 0.06, 0.08], # regularization term for items
"lr_bu": [0.001, 0.005], # learning rate for user bias term
"lr_bi": [0.001, 0.005]} # learning rate for item bias term
gs_nmf = GridSearchCV(NMF, param_nmf, measures=["rmse"], cv=5)
gs_nmf.fit(ds)
print("Best RMSE:", gs_nmf.best_score["rmse"])
best_params_nmf = gs_nmf.best_params["rmse"]
for param in best_params_nmf:
print(param, ":", best_params_nmf[param])
# Train final model
trainset = ds.build_full_trainset()
model = gs_svd.best_estimator["rmse"]
model.fit(trainset)
# RMSE of final model
testset = trainset.build_testset()
test_pred = model.test(testset)
accuracy.rmse(test_pred, verbose=True) # should be very bad ;)
combinations_to_predict = [("Aaron Keith III", "http://www.bell.com/main.php"),
("Linda Torres", "http://www.martin-harris.org/main/"),
("Veronica Jackson", "https://www.carter.com/"),
("Cindy Jones", "https://www.garcia.com/homepage/")]
# Predictions
for combination in combinations_to_predict:
user = combination[0]
doc = combination[1]
pred = model.predict(user, doc)
pred_string = "like" if pred[3] > 0 else "dislike" # if estimated rating >0 => "like", else "dislike"
print(pred[0], "should **>", pred_string, "<**", pred[1])
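# A small illustrative extension (not part of the original analysis): rank all documents for
# one of the users above by the model's estimated rating and show the top five.
user = "Linda Torres"
ranked = sorted(data["doc_uri"].unique(), key=lambda d: model.predict(user, d).est, reverse=True)
ranked[:5]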
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Table 1- Composite photometry
|
<ASSISTANT_TASK:>
Python Code:
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
names = ["component", "RA", "Dec", "Spectral Type", "Teff", "AJ", "Lbol", "R-I","I", "J-H","H-Ks", "Ks", "Mass"]
tbl1 = pd.read_csv("http://iopscience.iop.org/0004-637X/614/1/398/fulltext/60660.tb1.txt", sep='\t', names=names)
tbl1
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What happens if we remove farm2 from the ProcessGroup?
Step2: The flow is still there! But it is labelled with a little arrow to show that it is coming "from elsewhere". This is important because we are still showing Susan and Fred in the diagram, and they get fruit from farm2. If we didn't show those flows, Susan's and Fred's inputs and outputs would not balance.
Step3: Now they're gone, we no longer see the incoming flows from farm2. But we see some outgoing flows "to elsewhere" from farm3 and the other group. This is because farm3 is within the system boundary -- it is shown in the diagram -- so its output flow has to go somewhere.
Step4: This is pretty similar to what we had already, but now the waypoint is explicitly listed as part of the SankeyDefinition, we have more control over it.
Step5: Or we can partition the exports Waypoint to show how much of it is apples and bananas
Step6: Horizontal bands
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
flows = pd.read_csv('simple_fruit_sales.csv')
from floweaver import *
# Set the default size to fit the documentation better.
size = dict(width=570, height=300)
# Same partitions as the Quickstart tutorial
farms_with_other = Partition.Simple('process', [
'farm1',
'farm2',
'farm3',
('other', ['farm4', 'farm5', 'farm6']),
])
customers_by_name = Partition.Simple('process', [
'James', 'Mary', 'Fred', 'Susan'
])
# Define the nodes, this time setting the partition from the start
nodes = {
'farms': ProcessGroup(['farm1', 'farm2', 'farm3',
'farm4', 'farm5', 'farm6'],
partition=farms_with_other),
'customers': ProcessGroup(['James', 'Mary', 'Fred', 'Susan'],
partition=customers_by_name),
}
# Ordering and bundles as before
ordering = [
['farms'], # put "farms" on the left...
['customers'], # ... and "customers" on the right.
]
bundles = [
Bundle('farms', 'customers'),
]
sdd = SankeyDefinition(nodes, bundles, ordering)
weave(sdd, flows).to_widget(**size)
nodes['farms'].selection = [
'farm1', 'farm3', 'farm4', 'farm5', 'farm6'
]
weave(sdd, flows).to_widget(**size)
nodes['customers'].selection = ['James', 'Mary']
weave(sdd, flows).to_widget(**size)
# Define a new Waypoint
nodes['exports'] = Waypoint(title='exports here')
# Update the ordering to include the waypoint
ordering = [
['farms'], # put "farms" on the left...
['customers', 'exports'], # ... and "exports" below "customers"
] # on the right.
# Add a new bundle from "farms" to Elsewhere, via the waypoint
bundles = [
Bundle('farms', 'customers'),
Bundle('farms', Elsewhere, waypoints=['exports']),
]
sdd = SankeyDefinition(nodes, bundles, ordering)
weave(sdd, flows).to_widget(**size)
ordering = [
['farms'],
['exports', 'customers'],
]
sdd = SankeyDefinition(nodes, bundles, ordering)
weave(sdd, flows).to_widget(**size)
fruits_by_type = Partition.Simple('type', ['apples', 'bananas'])
nodes['exports'].partition = fruits_by_type
weave(sdd, flows).to_widget(**size)
ordering = [
[[], ['farms' ]],
[['exports'], ['customers']],
]
sdd = SankeyDefinition(nodes, bundles, ordering)
weave(sdd, flows).to_widget(**size)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: trying for several targets
Step2: Plots
Step3: Scatterplot
Step4: 3. bar plot
|
<ASSISTANT_TASK:>
Python Code:
# Dependencies
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import json
import tweepy
import time
import seaborn as sns
%pylab notebook
# Initialize Sentiment Analyzer
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyzer = SentimentIntensityAnalyzer()
# Twitter API Keys
consumer_key = "8kRQQdDT8zGpyOBTUbXqGF2nc"
consumer_secret = "1D2oGovKe15PwLTKEAlI7ao4nAnHqSYfkGH5mQAdx1T7BUdEmX"
access_token = "68786821-99qoOdGVdmeskFhyanhuj5G1UgTjLXy3zsHtmBTB4"
access_token_secret = "WrIEOcbavzeNmeEoCH2ZxfqIMlB1KbMYCVNnihgQTUC0c"
# Setup Tweepy API Authentication
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, parser=tweepy.parsers.JSONParser())
# Target Account
target_terms = ["@CNN", "@BBC", "@CBS",
"@Fox", "@nytimes"]
# Counter
counter = 1
# Variables for holding sentiments
sentiments = []
# Loop through the list of targets
for target in target_terms:
# Get all tweets from home feed
public_tweets = api.user_timeline(target, count = 100) #(total 100 tweets)
tweetnumber = 1
# Loop through all tweets
for tweet in public_tweets:
# Print Tweets
print("Tweet %s: %s" % (counter, tweet["text"]))
# Run Vader Analysis on each tweet
compound = analyzer.polarity_scores(tweet["text"])["compound"]
pos = analyzer.polarity_scores(tweet["text"])["pos"]
neu = analyzer.polarity_scores(tweet["text"])["neu"]
neg = analyzer.polarity_scores(tweet["text"])["neg"]
tweets_ago = tweetnumber
# Add sentiments for each tweet into an array
sentiments.append({"User": target,
"Date": tweet["created_at"],
"Compound": compound,
"Positive": pos,
"Negative": neu,
"Neutral": neg,
"Tweets Ago": tweetnumber})
# Add to counter
tweetnumber += 1
counter = counter + 1
# sentiments to DataFrame
sentiments_pd = pd.DataFrame.from_dict(sentiments)
sentiments_pd.head(25)
#Checking that it works
#sentiments_pd.describe()
#sentiments_pd.tail()
sentiments_pd["User"].describe()
# Convert sentiments to DataFrame
sentiments_pd = pd.DataFrame.from_dict(sentiments)
sentiments_pd.head()
sentiments_pd.columns
sentiments_pd.to_csv("newsTweets.csv")
# Firt : Create plot - general view
plt.plot(np.arange(len(sentiments_pd["Compound"])),
sentiments_pd["Compound"], marker="o", linewidth=0.5,
alpha=0.8)
# # Incorporate the other graph properties
plt.title("Sentiment Analysis of Tweets (%s) for %s" % (time.strftime("%x"), target_terms))
plt.ylabel("Tweet Polarity")
plt.xlabel("Tweets Ago")
plt.show()
plt.savefig('Overall.png')
news_colors = {"@CNN": "red",
"@BBC": "blue",
"@CBS": "yellow",
"@Fox": "lightblue",
"@nytimes" : "green"}
# Create a scatterplot of,
sns.set()
plt.figure(figsize = (10,6))
plt.xlabel ("tweet Ago", fontweight = 'bold')
plt.ylabel ("Tweet Polarity", fontweight ='bold')
plt.title ("Sentiment Analysis")
plt.xlim (102, -2, -1)
plt.ylim(-1,1)
for target_terms in news_colors.keys():
df = sentiments_pd[sentiments_pd["User"] == target_terms]
sentiment_analysis = plt.scatter(df["Tweets Ago"],df["Compound"], label = target_terms, color = news_colors[target_terms], edgecolor = "black", s=125)
plt.legend(bbox_to_anchor = (1,1), title = 'Media Sources')
plt.show()
sentiment_analysis.figure.savefig('SentimentAnalysis.png')
# finding average with group by
#taking user and finding the average of compound
groupbyagency = sentiments_pd.groupby("User")["Compound"].mean()
# Count tweets with a compound score of exactly 0, per news outlet
sentiments_pd[sentiments_pd["Compound"] == 0].groupby("User").count()
groupbyagency_df = pd.DataFrame(groupbyagency)
groupbyagency_df
# Bar plot of the mean compound score per news outlet
sns.barplot(x=groupbyagency_df.index, y=groupbyagency_df["Compound"])
plt.title("Sentiment Analysis")
plt.ylabel("Mean compound score")
plt.show()
plt.savefig('SentimentAnalysisII.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Understanding Lines
Step2: Figures
|
<ASSISTANT_TASK:>
Python Code:
from geomath.point import Point
A = Point(0,0)
B = Point(4,4)
A.distance(B)
A.midpoint(B)
B.quadrant()
from geomath.line import Line
Linha = Line()
Linha.create_via_equation("1x+2y+3=0")
Linha.equation()
Linha.create(Point(0,0),Point(4,4))
Linha.equation()
from geomath.figure import Figure
FiguraEstranha = Figure()
FiguraEstranha.add_points([Point(2,10),Point(0,4),Point(0,0),Point(10,5),Point(3,9)])
FiguraEstranha.area()
FiguraEstranha.perimeter()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First Step
Step2: Second Step
Step3: Look how great it is doing!
Step4: But really?
Step5: Cross Validation is a way to make the score more telling
|
<ASSISTANT_TASK:>
Python Code:
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%pylab inline
import pandas as pd
print(pd.__version__)
df = pd.read_csv('./insurance-customers-300.csv', sep=';')
y=df['group']
df.drop('group', axis='columns', inplace=True)
X = df.values  # as_matrix() was removed in newer pandas; .values returns the same ndarray
df.describe()
# ignore this, it is just technical code
# should come from a lib, consider it to appear magically
# http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
cmap_print = ListedColormap(['#AA8888', '#004000', '#FFFFDD'])
cmap_bold = ListedColormap(['#AA4444', '#006000', '#AAAA00'])
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#FFFFDD'])
font_size=25
def meshGrid(x_data, y_data):
h = 1 # step size in the mesh
x_min, x_max = x_data.min() - 1, x_data.max() + 1
y_min, y_max = y_data.min() - 1, y_data.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return (xx,yy)
def plotPrediction(clf, x_data, y_data, x_label, y_label, colors, title="", mesh=True, fname=None, print=False):
xx,yy = meshGrid(x_data, y_data)
plt.figure(figsize=(20,10))
if clf and mesh:
Z = clf.predict(np.c_[yy.ravel(), xx.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
if print:
plt.scatter(x_data, y_data, c=colors, cmap=cmap_print, s=200, marker='o', edgecolors='k')
else:
plt.scatter(x_data, y_data, c=colors, cmap=cmap_bold, s=80, marker='o', edgecolors='k')
plt.xlabel(x_label, fontsize=font_size)
plt.ylabel(y_label, fontsize=font_size)
plt.title(title, fontsize=font_size)
if fname:
plt.savefig(fname)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42, stratify=y)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
X_train_kmh_age = X_train[:, :2]
X_test_kmh_age = X_test[:, :2]
X_train_2_dim = X_train_kmh_age
X_test_2_dim = X_test_kmh_age
from sklearn import neighbors
clf = neighbors.KNeighborsClassifier(1)
%time clf.fit(X_train_2_dim, y_train)
plotPrediction(clf, X_train_2_dim[:, 1], X_train_2_dim[:, 0],
'Age', 'Max Speed', y_train,
title="Train Data Max Speed vs Age with Classification")
clf.score(X_train_2_dim, y_train)
plotPrediction(clf, X_test_2_dim[:, 1], X_test_2_dim[:, 0],
'Age', 'Max Speed', y_test,
title="Test Data Max Speed vs Age with Prediction")
clf.score(X_test_2_dim, y_test)
# http://scikit-learn.org/stable/modules/cross_validation.html
from sklearn.model_selection import cross_val_score
scores = cross_val_score(clf, X[:, :2], y, cv=10)
scores
# https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule
print("Accuracy: %0.2f (+/- %0.2f for 95 percent of runs)" % (scores.mean(), scores.std() * 2))
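# Illustrative follow-up (not in the original): repeat the same cross-validation for a few
# values of k to see how the neighbour count affects the estimate.
for k in [1, 5, 11, 21]:
    scores_k = cross_val_score(neighbors.KNeighborsClassifier(k), X[:, :2], y, cv=10)
    print("k=%2d: %0.2f (+/- %0.2f)" % (k, scores_k.mean(), scores_k.std() * 2))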
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We start by making sure the computation is performed on GPU if available. prefer_gpu should be called right after importing Thinc, and it returns a boolean indicating whether the GPU has been activated.
Step2: We also define the following helper functions for loading the data, and training and evaluating a given model. Don't forget to call model.initialize with a batch of input and output data to initialize the model and fill in any missing shapes.
Step3: 1. Composing the model in code
Step5: Composing the model via a config file
Step6: When the config is loaded, it's first parsed as a dictionary and all references to values from other sections, e.g. ${hyper_params
Step7: registry.resolve then creates the objects and calls the functions bottom-up.
Step8: We now have a model, optimizer and training settings, built from the config, and can use them to train the model.
Step9: Composing the model with code and config
Step11: The config would then only need to define one model block with @layers = "cnn_tagger.v1" and the function arguments. Whether you move them out to a section like [hyper_params] or just hard-code them into the block is up to you. The advantage of a separate section is that the values are preserved in the parsed config object (and not just passed into the function), so you can always print and view them.
|
<ASSISTANT_TASK:>
Python Code:
!pip install "thinc>=8.0.0a0" "ml_datasets>=0.2.0a0" "tqdm>=4.41"
from thinc.api import prefer_gpu
prefer_gpu()
import ml_datasets
from tqdm.notebook import tqdm
from thinc.api import fix_random_seed
fix_random_seed(0)
def train_model(model, optimizer, n_iter, batch_size):
(train_X, train_y), (dev_X, dev_y) = ml_datasets.ud_ancora_pos_tags()
model.initialize(X=train_X[:5], Y=train_y[:5])
for n in range(n_iter):
loss = 0.0
batches = model.ops.multibatch(batch_size, train_X, train_y, shuffle=True)
for X, Y in tqdm(batches, leave=False):
Yh, backprop = model.begin_update(X)
d_loss = []
for i in range(len(Yh)):
d_loss.append(Yh[i] - Y[i])
loss += ((Yh[i] - Y[i]) ** 2).sum()
backprop(d_loss)
model.finish_update(optimizer)
score = evaluate(model, dev_X, dev_y, batch_size)
print(f"{n}\t{loss:.2f}\t{score:.3f}")
def evaluate(model, dev_X, dev_Y, batch_size):
correct = 0
total = 0
for X, Y in model.ops.multibatch(batch_size, dev_X, dev_Y):
Yh = model.predict(X)
for yh, y in zip(Yh, Y):
correct += (y.argmax(axis=1) == yh.argmax(axis=1)).sum()
total += y.shape[0]
return float(correct / total)
from thinc.api import Model, chain, strings2arrays, with_array, HashEmbed, expand_window, Relu, Softmax, Adam, warmup_linear
width = 32
vector_width = 16
nr_classes = 17
learn_rate = 0.001
n_iter = 10
batch_size = 128
with Model.define_operators({">>": chain}):
model = strings2arrays() >> with_array(
HashEmbed(nO=width, nV=vector_width, column=0)
>> expand_window(window_size=1)
>> Relu(nO=width, nI=width * 3)
>> Relu(nO=width, nI=width)
>> Softmax(nO=nr_classes, nI=width)
)
optimizer = Adam(learn_rate)
train_model(model, optimizer, n_iter, batch_size)
CONFIG = """
[hyper_params]
width = 32
vector_width = 16
learn_rate = 0.001
[training]
n_iter = 10
batch_size = 128
[model]
@layers = "chain.v1"
[model.*.strings2arrays]
@layers = "strings2arrays.v1"
[model.*.with_array]
@layers = "with_array.v1"
[model.*.with_array.layer]
@layers = "chain.v1"
[model.*.with_array.layer.*.hashembed]
@layers = "HashEmbed.v1"
nO = ${hyper_params:width}
nV = ${hyper_params:vector_width}
column = 0
[model.*.with_array.layer.*.expand_window]
@layers = "expand_window.v1"
window_size = 1
[model.*.with_array.layer.*.relu1]
@layers = "Relu.v1"
nO = ${hyper_params:width}
nI = 96
[model.*.with_array.layer.*.relu2]
@layers = "Relu.v1"
nO = ${hyper_params:width}
nI = ${hyper_params:width}
[model.*.with_array.layer.*.softmax]
@layers = "Softmax.v1"
nO = 17
nI = ${hyper_params:width}
[optimizer]
@optimizers = "Adam.v1"
learn_rate = ${hyper_params:learn_rate}
"""
from thinc.api import registry, Config
config = Config().from_str(CONFIG)
config
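# Quick illustrative check: the loaded config behaves like a nested dict, so sections and
# values can be inspected directly before registry.resolve builds any objects.
dict(config["hyper_params"])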
C = registry.resolve(config)
C
model = C["model"]
optimizer = C["optimizer"]
n_iter = C["training"]["n_iter"]
batch_size = C["training"]["batch_size"]
train_model(model, optimizer, n_iter, batch_size)
import thinc
from thinc.api import Model, chain, strings2arrays, with_array, HashEmbed, expand_window, Relu, Softmax, Adam, warmup_linear
@thinc.registry.layers("cnn_tagger.v1")
def create_cnn_tagger(width: int, vector_width: int, nr_classes: int = 17):
with Model.define_operators({">>": chain}):
model = strings2arrays() >> with_array(
HashEmbed(nO=width, nV=vector_width, column=0)
>> expand_window(window_size=1)
>> Relu(nO=width, nI=width * 3)
>> Relu(nO=width, nI=width)
>> Softmax(nO=nr_classes, nI=width)
)
return model
CONFIG = """
[hyper_params]
width = 32
vector_width = 16
learn_rate = 0.001
[training]
n_iter = 10
batch_size = 128
[model]
@layers = "cnn_tagger.v1"
width = ${hyper_params:width}
vector_width = ${hyper_params:vector_width}
nr_classes = 17
[optimizer]
@optimizers = "Adam.v1"
learn_rate = ${hyper_params:learn_rate}
"""
C = registry.resolve(Config().from_str(CONFIG))
C
model = C["model"]
optimizer = C["optimizer"]
n_iter = C["training"]["n_iter"]
batch_size = C["training"]["batch_size"]
train_model(model, optimizer, n_iter, batch_size)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Processing Data
Step2: Preprocess Messages
Step3: Generate Vocab to Index Mapping
Step4: Check Message Lengths
Step5: Encode Messages and Labels
Step6: Pad Messages
Step7: Train, Test, Validation Split
Step9: Building and Training our LSTM Network
Step11: Embedding Layer
Step13: LSTM Layers
Step15: Loss Function and Optimizer
Step17: Accuracy
Step18: Training
Step19: Next we define our model hyper parameters. We will build a 2 Layer LSTM Newtork with hidden layer sizes of 128 and 64 respectively. We will use an embedding size of 300 and train over 50 epochs with mini-batches of size 256. We will use an initial learning rate of 0.1, though our Adadelta Optimizer will adapt this over time, and a keep probability of 0.5.
Step20: and now we train!
Step21: Testing our Network
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import tensorflow as tf
import utils as utl
from collections import Counter
# read data from csv file
data = pd.read_csv("data/StockTwits_SPY_Sentiment_2017.gz",
encoding="utf-8",
compression="gzip",
index_col=0)
# get messages and sentiment labels
messages = data.message.values
labels = data.sentiment.values
# View sample of messages with sentiment
for i in range(10):
print("Messages: {}...".format(messages[i]),
"Sentiment: {}".format(labels[i]))
messages = np.array([utl.preprocess_ST_message(message) for message in messages])
messages[0]
full_lexicon = " ".join(messages).split()
vocab_to_int, int_to_vocab = utl.create_lookup_tables(full_lexicon)
messages_lens = Counter([len(x) for x in messages])
print("Zero-length messages: {}".format(messages_lens[0]))
print("Maximum message length: {}".format(max(messages_lens)))
print("Average message length: {}".format(np.mean([len(x) for x in messages])))
messages, labels = utl.drop_empty_messages(messages, labels)
messages = utl.encode_ST_messages(messages, vocab_to_int)
labels = utl.encode_ST_labels(labels)
messages2 = utl.zero_pad_messages(messages, seq_len=244)
mess = [i[-6:-1] for i in messages2]
labels = [i[-1] for i in messages2]
BIG_N = 1600
X = [[i for i in zip(mess[j], np.sqrt(mess[j]))] for j in range(0, BIG_N)]
labels = [labels[j] for j in range(0, BIG_N)]
some_2d_sequences = np.array([*X]).astype(float)
some_2d_labels = np.array(labels).astype(int)
# X
print('shape: n_sequences, len_sequence, dim_input', some_2d_sequences.shape)
print('shape labels: n_sequences, len_labels, dim_input', some_2d_labels.shape)
some_2d_labels[100]
train_x, val_x, test_x, train_y, val_y, test_y = utl.train_val_test_split(some_2d_sequences, some_2d_labels, split_frac=0.80)
print("Data Set Size")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
###### JUST SOME CHALKBOARD STUFF
# first, create a TensorFlow constant
# const = tf.constant(2.0, name="const")
foo = np.array([[1,2,3], [4,4,5]])
inputs_ = tf.constant([], name="train_x")
embedding = tf.Variable(tf.random_uniform((7, 4), -1, 1))
embed = tf.nn.embedding_lookup(embedding, foo)
# create TensorFlow variables
# b = tf.Variable(2.0, name='b')
# c = tf.Variable(1.0, name='c')
# d = tf.add(b, c, name='d')
# e = tf.add(c, const, name='e')
a = tf.multiply(inputs_, 1, name='a')
# setup the variable initialisation
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
# initialise the variables
sess.run(init_op)
# compute the output of the graph
a_out = sess.run(embed)
print("train_x is: ", train_x[0:2], train_x[0:2].shape)
print("foo is: ", foo, foo.shape)
print("SHAPE a is {}".format(a_out.shape))
print("Variable a is {}".format(a_out))
# this proves that embedding_lookup DOES IN FACT TURN your sequence of words into a n-d series, with n = embedding_size
# so this should work for n-d time series
n_dims = 2 # here, we test for 2 dimensions!
def model_inputs():
    """Create the model inputs"""
inputs_ = tf.placeholder(tf.float32, [None, None, n_dims], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob_ = tf.placeholder(tf.float32, name='keep_prob')
return inputs_, labels_, keep_prob_
def build_embedding_layer(inputs_):
    """Create the embedding layer"""
# embedding = tf.Variable(tf.random_uniform((vocab_size, embed_size), -1, 1))
# embed = tf.nn.embedding_lookup(embedding, inputs_)
# foo = inputs_.astype(float)
return inputs_
def build_lstm_layers(lstm_sizes, embed, keep_prob_, batch_size):
    """Create the LSTM layers"""
lstms = [tf.contrib.rnn.BasicLSTMCell(size) for size in lstm_sizes]
# Add dropout to the cell
drops = [tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob_) for lstm in lstms]
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell(drops)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
lstm_outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
return initial_state, lstm_outputs, cell, final_state
def build_cost_fn_and_opt(lstm_outputs, labels_, learning_rate):
    """Create the Loss function and Optimizer"""
predictions = tf.contrib.layers.fully_connected(lstm_outputs[:, -1], 1, activation_fn=tf.sigmoid)
loss = tf.losses.mean_squared_error(labels_, predictions)
optimzer = tf.train.AdadeltaOptimizer(learning_rate).minimize(loss)
return predictions, loss, optimzer
def build_accuracy(predictions, labels_):
    """Create accuracy"""
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
return accuracy
def build_and_train_network(lstm_sizes, epochs, batch_size,
learning_rate, keep_prob, train_x, val_x, train_y, val_y):
inputs_, labels_, keep_prob_ = model_inputs()
embed = build_embedding_layer(inputs_)
initial_state, lstm_outputs, lstm_cell, final_state = build_lstm_layers(lstm_sizes, embed, keep_prob_, batch_size)
predictions, loss, optimizer = build_cost_fn_and_opt(lstm_outputs, labels_, learning_rate)
accuracy = build_accuracy(predictions, labels_)
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
n_batches = len(train_x)//batch_size
for e in range(epochs):
state = sess.run(initial_state)
train_acc = []
for ii, (x, y) in enumerate(utl.get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob_: keep_prob,
initial_state: state}
loss_, state, _, batch_acc = sess.run([loss, final_state, optimizer, accuracy], feed_dict=feed)
train_acc.append(batch_acc)
if (ii + 1) % n_batches == 0:
val_acc = []
val_state = sess.run(lstm_cell.zero_state(batch_size, tf.float32))
for xx, yy in utl.get_batches(val_x, val_y, batch_size):
feed = {inputs_: xx,
labels_: yy[:, None],
keep_prob_: 1,
initial_state: val_state}
val_batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(val_batch_acc)
print("Epoch: {}/{}...".format(e+1, epochs),
"Batch: {}/{}...".format(ii+1, n_batches),
"Train Loss: {:.3f}...".format(loss_),
"Train Accruacy: {:.3f}...".format(np.mean(train_acc)),
"Val Accuracy: {:.3f}".format(np.mean(val_acc)))
saver.save(sess, "checkpoints/sentiment.ckpt")
# Define Inputs and Hyperparameters
lstm_sizes = [8, 4]
# vocab_size = len(vocab_to_int) + 1 #add one for padding
# embed_size = 30
epochs = 4
batch_size = 16
learning_rate = 0.1
keep_prob = 0.5
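# Optional sanity check before training (not in the original notebook): the label balance
# should be roughly the same across the three splits.
print("Label means -> train: {:.3f}, val: {:.3f}, test: {:.3f}".format(
    train_y.mean(), val_y.mean(), test_y.mean()))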
with tf.Graph().as_default():
build_and_train_network(lstm_sizes, epochs, batch_size,
learning_rate, keep_prob, train_x, val_x, train_y, val_y)
def test_network(model_dir, batch_size, test_x, test_y):
inputs_, labels_, keep_prob_ = model_inputs()
    embed = build_embedding_layer(inputs_)
initial_state, lstm_outputs, lstm_cell, final_state = build_lstm_layers(lstm_sizes, embed, keep_prob_, batch_size)
predictions, loss, optimizer = build_cost_fn_and_opt(lstm_outputs, labels_, learning_rate)
accuracy = build_accuracy(predictions, labels_)
saver = tf.train.Saver()
test_acc = []
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint(model_dir))
test_state = sess.run(lstm_cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(utl.get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob_: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test Accuracy: {:.3f}".format(np.mean(test_acc)))
with tf.Graph().as_default():
test_network('checkpoints', batch_size, test_x, test_y)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The function $\texttt{arb}(S)$ returns an arbitrary element from the set $S$,
Step2: Given a set $M$ and a binary relation $R \subseteq M \times M$, the function $\texttt{union_find}$ returns a partition $\mathcal{P}$ of $M$ such that we have
Step3: Given a partition $\mathcal{P}$ of a set $M$ and an element $x$ from $M$, the function $\texttt{find}(x, \mathcal{P})$
|
<ASSISTANT_TASK:>
Python Code:
def toStr(FS):
result = '{ '
for S in FS:
result += str(set(S)) + ', '
result = result[:-2]
result += ' }'
return result
def arb(S):
for x in S:
return x
def union_find(M, R):
print(f'R = {R}')
P = { frozenset({x}) for x in M } # the trivial partition of M
print(f'P = {toStr(P)}')
for x, y in R:
A = find(x, P) # find A
B = find(y, P) # find B
if A != B:
print(f'{x} ≅ {y}: combining {set(A)} and {set(B)}')
P -= { A, B }
P |= { A | B }
print(f'P = {toStr(P)}')
return P
def find(x, P):
return arb({ S for S in P if x in S })
def demo():
M = set(range(1, 9+1))
R = { (1, 4), (7, 9), (3, 5), (2, 6), (5, 8), (1, 9), (4, 7) }
P = union_find(M, R)
demo()
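# Optional check (illustrative): after union_find, both elements of every pair in R
# must end up in the same block of the returned partition.
M = set(range(1, 9+1))
R = { (1, 4), (7, 9), (3, 5), (2, 6), (5, 8), (1, 9), (4, 7) }
P = union_find(M, R)
all(find(x, P) == find(y, P) for x, y in R)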
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: - dry stuff - The matplotlib Figure, Axes and Axis
Step2: On its own, drawing the figure artist is uninteresting and will result in an empty piece of paper (that's why we didn't see anything above).
Step3: Matplotlib's pyplot module makes the process of creating graphics easier by allowing us to skip some of the tedious Artist construction. For example, we did not need to manually create the Figure artist with plt.figure because it was implicit that we needed a figure when we created the Axes artist.
Step4: Observe the following difference
Step5: 2. object oriented
Step6: Although a little bit more code is involved, the advantage is that we now have full control of where the plot axes are placed, and we can easily add more than one axis to the figure
Step7: And also Matplotlib advises the object oriented style
Step8: A small cheat-sheet reference for some common elements
Step9: Adjusting specific parts of a plot is a matter of accessing the correct element of the plot
Step10: <div class="alert alert-success">
Step11: <div class="alert alert-success">
Step12: <div class="alert alert-success">
Step13: I do not like the style...
Step14: We should not start discussing about colors and styles, just pick your favorite style!
Step15: or go all the way and define your own custom style, see the official documentation or this tutorial.
Step16: A typical issue when plotting multiple elements in the same Figure is the overlap of the subplots. A straight-forward approach is using a larger Figure size, but this is not always possible and does not make the content independent from the Figure size. Matplotlib provides the usage of a constrained-layout to fit plots within your Figure cleanly.
Step18: When more advanced layout configurations are required, the usage of the gridspec module is a good reference. See gridspec demo for more information. A useful shortcut to know about is the string-shorthand to setup subplot layouts in a more intuitive way, e.g.
Step19: Interaction with Pandas
Step20: Under the hood, it creates a Matplotlib Figure with an Axes object.
Step21: Making this with matplotlib...
Step22: is still ok!
Step23: Mimicking this in matplotlib (just as a reference, it is basically what Pandas is doing under the hood)
Step24: Is already a bit harder ;-). Pandas provides a set of default configurations on top of Matplotlib.
Step25: <div class="alert alert-info">
Step26: <div class="alert alert-success">
Step27: <div class="alert alert-success">
Step28: <div class="alert alert-success">
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
fig = plt.figure()
plt.show()
ax = plt.axes()
x = np.linspace(0, 5, 10)
y = x ** 2
ax = plt.plot(x, y, '-')
from matplotlib import ticker
x = np.linspace(0, 5, 10)
y = x ** 10
fig, ax = plt.subplots()
ax.plot(x, y, '-')
ax.set_title("My data")
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter("%.1f"))
fig, ax1 = plt.subplots()
ax1.plot(x, y, '-')
ax1.set_ylabel('y')
ax2 = fig.add_axes([0.2, 0.5, 0.4, 0.3]) # inset axes
ax2.set_xlabel('x')
ax2.plot(x, y*2, 'r-')
fig, ax = plt.subplots()
ax.plot(x, y, '-')
# ...
x = np.linspace(-1, 0, 100)
fig, ax = plt.subplots(figsize=(10, 7))
# Adjust the created axes so that their topmost extent is 0.9 of the figure.
fig.subplots_adjust(top=0.9)
ax.plot(x, x**2, color='0.4', label='power 2')
ax.plot(x, x**3, color='0.8', linestyle='--', label='power 3')
ax.vlines(x=-0.75, ymin=0., ymax=0.8, color='0.4', linestyle='-.')
ax.fill_between(x=x, y1=x**2, y2=1.1*x**2, color='0.85')
ax.axhline(y=0.1, color='0.4', linestyle='-.')
ax.axhspan(ymin=0.65, ymax=0.75, color='0.95')
fig.suptitle('Figure title', fontsize=18,
fontweight='bold')
ax.set_title('Axes title', fontsize=16)
ax.set_xlabel('The X axis')
ax.set_ylabel('The Y axis $y=f(x)$', fontsize=16)
ax.set_xlim(-1.0, 1.1)
ax.set_ylim(-0.1, 1.)
ax.text(0.5, 0.2, 'Text centered at (0.5, 0.2)\nin data coordinates.',
horizontalalignment='center', fontsize=14)
ax.text(0.5, 0.5, 'Text centered at (0.5, 0.5)\nin relative Axes coordinates.',
horizontalalignment='center', fontsize=14,
transform=ax.transAxes, color='grey')
ax.annotate('Text pointing at (0.0, 0.75)', xy=(0.0, 0.75), xycoords="data",
xytext=(20, 40), textcoords="offset points",
horizontalalignment='left', fontsize=14,
arrowprops=dict(facecolor='black', shrink=0.05, width=1))
ax.legend(loc='lower right', frameon=True, ncol=2, fontsize=14)
data = np.random.randint(-2, 3, 100).cumsum()
data
fig, ax = plt.subplots(figsize=(12, 4))
ax.plot(data, color='darkgrey')
ax.set_xlabel('days since start');
ax.set_ylabel('measured value');
dates = pd.date_range("2021-01-01", periods=100, freq="D")
fig, ax = plt.subplots(figsize=(12, 4))
ax.plot(dates, data, color='darkgrey')
ax.axhspan(ymin=-5, ymax=5, color='green', alpha=0.2)
ax.set_xlabel('days since start');
ax.set_ylabel('measured value');
fig, ax = plt.subplots(figsize=(12, 4))
ax.bar(dates[-10:], data[-10:], color='darkgrey')
ax.bar(dates[-6], data[-6], color='orange')
plt.style.available
x = np.linspace(0, 10)
with plt.style.context('seaborn-whitegrid'): # 'seaborn', ggplot', 'bmh', 'grayscale', 'seaborn-whitegrid', 'seaborn-muted'
fig, ax = plt.subplots()
ax.plot(x, np.sin(x) + x + np.random.randn(50))
ax.plot(x, np.sin(x) + 0.5 * x + np.random.randn(50))
ax.plot(x, np.sin(x) + 2 * x + np.random.randn(50))
plt.style.use('seaborn')
fig, ax = plt.subplots(2, 3, figsize=(5, 5))
fig, ax = plt.subplots(2, 3, figsize=(5, 5), constrained_layout=True)
axd = plt.figure(constrained_layout=True).subplot_mosaic(
    """
    ABD
    CCD
    """
)
axd;
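# The mosaic call returns a dict mapping the letters of the layout string to Axes objects,
# so individual panels can be addressed by name (illustrative usage only):
axd["A"].plot([0, 1], [0, 1])
axd["D"].set_title("panel D spans both rows")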
import pandas as pd
flowdata = pd.read_csv('data/vmm_flowdata.csv',
index_col='Time',
parse_dates=True)
flowdata.plot.line() # remark default plot() is a line plot
flowdata.plot(figsize=(16, 6), ylabel="Discharge m3/s") # SHIFT + TAB this!
fig, ax = plt.subplots(figsize=(16, 6))
ax.plot(flowdata)
ax.legend(["L06_347", "LS06_347", "LS06_348"])
axs = flowdata.plot(subplots=True, sharex=True,
figsize=(16, 8), colormap='viridis', # Dark2
fontsize=15, rot=0)
axs[0].set_title("EXAMPLE");
from matplotlib import cm
import matplotlib.dates as mdates
colors = [cm.viridis(x) for x in np.linspace(0.0, 1.0, len(flowdata.columns))] # list comprehension to set up the colors
fig, axs = plt.subplots(3, 1, figsize=(16, 8))
for ax, col, station in zip(axs, colors, flowdata.columns):
ax.plot(flowdata.index, flowdata[station], label=station, color=col)
ax.legend()
if not ax.get_subplotspec().is_last_row():
ax.xaxis.set_ticklabels([])
ax.xaxis.set_major_locator(mdates.YearLocator())
else:
ax.xaxis.set_major_locator(mdates.YearLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y'))
ax.set_xlabel('Time')
ax.tick_params(labelsize=15)
fig, (ax0, ax1) = plt.subplots(2, 1) #prepare a Matplotlib figure
flowdata.plot(ax=ax0) # use Pandas for the plotting
fig, ax = plt.subplots(figsize=(15, 5)) #prepare a matplotlib figure
flowdata.plot(ax=ax) # use pandas for the plotting
# Provide further adaptations with matplotlib:
ax.set_xlabel("")
ax.grid(which="major", linewidth='0.5', color='0.8')
fig.suptitle('Flow station time series', fontsize=15)
fig, (ax0, ax1) = plt.subplots(2, 1, figsize=(16, 6)) #provide with matplotlib 2 axis
flowdata[["L06_347", "LS06_347"]].plot(ax=ax0) # plot the two timeseries of the same location on the first plot
flowdata["LS06_348"].plot(ax=ax1, color='0.7') # plot the other station on the second plot
# further adapt with matplotlib
ax0.set_ylabel("L06_347")
ax1.set_ylabel("LS06_348")
ax1.legend()
flowdata = pd.read_csv('data/vmm_flowdata.csv',
index_col='Time',
parse_dates=True)
flowdata.head()
fig, ax = plt.subplots()
flowdata.mean().plot.bar(ylabel="mean discharge", ax=ax)
fig, (ax0, ax1) = plt.subplots(1, 2, constrained_layout=True)
flowdata.min().plot.bar(ylabel="min discharge", ax=ax0)
flowdata.max().plot.bar(ylabel="max discharge", ax=ax1)
fig.suptitle(f"Minimal and maximal discharge from {flowdata.index[0]:%Y-%m-%d} till {flowdata.index[-1]:%Y-%m-%d}");
alarm_level = 20
max_datetime, max_value = flowdata["LS06_347"].idxmax(), flowdata["LS06_347"].max()
fig, ax = plt.subplots(figsize=(18, 4))
flowdata["LS06_347"].plot(ax=ax)
ax.axhline(y=alarm_level, color='red', linestyle='-', alpha=0.8)
ax.annotate('Alarm level', xy=(flowdata.index[0], alarm_level),
xycoords="data", xytext=(10, 10), textcoords="offset points",
color="red", fontsize=12)
ax.annotate(f"Flood event on {max_datetime:%Y-%m-%d}",
xy=(max_datetime, max_value), xycoords='data',
xytext=(-30, -30), textcoords='offset points',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='right', verticalalignment='bottom',
fontsize=12)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can explore the files if you'd like, but we're going to get the ones from convote_v1.1/data_stage_one/development_set/. It's a bunch of text files.
Step2: So great, we have 702 of them. Now let's import them.
Step3: In class we had the texts variable. For the homework you can just do speeches_df['content'] to get the same sort of list of stuff.
Step4: Doing our analysis
Step5: Okay, it's far too big to even look at. Let's try to get a list of features from a new CountVectorizer that only takes the top 100 words.
Step6: Now let's push all of that into a dataframe with nicely named columns.
Step7: Everyone seems to start their speeches with "mr chairman" - how many speeches are there total, and how many don't mention "chairman" and how many mention neither "mr" nor "chairman"?
Step8: What is the index of the speech that is the most thankful, a.k.a. includes the word 'thank' the most times?
Step9: If I'm searching for China and trade, what are the top 3 speeches to read according to the CountVectorizer?
Step10: Now what if I'm using a TfidfVectorizer?
Step11: What's the content of the speeches? Here's a way to get them
Step12: Now search for something else! Another two terms that might show up. elections and chaos? Whatever you think might be interesting.
Step13: Enough of this garbage, let's cluster
Step14: Which one do you think works the best?
|
<ASSISTANT_TASK:>
Python Code:
# If you'd like to download it through the command line...
!curl -O http://www.cs.cornell.edu/home/llee/data/convote/convote_v1.1.tar.gz
# And then extract it through the command line...
!tar -zxf convote_v1.1.tar.gz
# glob finds files matching a certain filename pattern
import glob
# Give me all the text files
paths = glob.glob('convote_v1.1/data_stage_one/development_set/*')
paths[:5]
len(paths)
speeches = []
for path in paths:
with open(path) as speech_file:
speech = {
'pathname': path,
'filename': path.split('/')[-1],
'content': speech_file.read()
}
speeches.append(speech)
speeches_df = pd.DataFrame(speeches)
speeches_df.head()
for item in speeches_df['content'].head(5):
print("++++++++++++++++++++NEW SPEECH+++++++++++++++++++++")
print(item)
print(" ")
c_vectorizer = CountVectorizer(stop_words='english')
x = c_vectorizer.fit_transform(speeches_df['content'])
x
df = pd.DataFrame(x.toarray(), columns=c_vectorizer.get_feature_names())
df
#http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html
c2_vectorizer = CountVectorizer(stop_words='english', max_features=100)
y = c2_vectorizer.fit_transform(speeches_df['content'])
y
new_df = pd.DataFrame(y.toarray(), columns=c2_vectorizer.get_feature_names())
#new_df
#http://stackoverflow.com/questions/15943769/how-to-get-row-count-of-pandas-dataframe
total_speeches = len(new_df.index)
print("In total there are", total_speeches, "speeches.")
wo_chairman = new_df[new_df['chairman']==0]['chairman'].count()
print(wo_chairman, "speeches don't mention 'chairman'")
wo_mr_chairman = new_df[(new_df['chairman']==0) & (new_df['mr']==0)]['chairman'].count()
print(wo_mr_chairman, "speeches don't mention neither 'chairman' nor 'mr'")
# idxmax returns the index of the row with the largest value directly
print("The speech with the most 'thank's has the index", new_df['thank'].idxmax())
china_trade_speeches = (new_df['china'] + new_df['trade']).sort_values(ascending = False).head(3)
china_trade_speeches
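# A hedged convenience step: map the top indices back to their file paths so the
# speeches found above can be opened directly (reuses variables defined earlier).
for idx in china_trade_speeches.index:
    print(idx, speeches_df['pathname'][idx])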
porter_stemmer = PorterStemmer()
def stem_tokenizer(str_input):
words = re.sub(r"[^A-Za-z0-9\-]", " ", str_input).lower().split()
words = [porter_stemmer.stem(word) for word in words]
return words
tfidf_vectorizer = TfidfVectorizer(stop_words='english', tokenizer=stem_tokenizer, use_idf=False, norm='l1', max_features=100)
X = tfidf_vectorizer.fit_transform(speeches_df['content'])
t_df = pd.DataFrame(X.toarray(), columns=tfidf_vectorizer.get_feature_names())
china_trade_speeches_v2 = (t_df['china'] + t_df['trade']).sort_values(ascending = False).head(3)
china_trade_speeches_v2
# index 0 is the first speech, which was the first one imported.
paths[0]
# Pass that into 'cat' using { } which lets you put variables in shell commands
# that way you can pass the path to cat
print("++++++++++NEW SPEECH+++++++++")
!cat {paths[345]}
print("++++++++++NEW SPEECH+++++++++")
!cat {paths[336]}
print("++++++++++NEW SPEECH+++++++++")
!cat {paths[402]}
new_df.columns
election_speeches = (new_df['discrimination'] + new_df['rights']).sort_values(ascending = False).head(3)
election_speeches
def new_stem_tokenizer(str_input):
words = re.sub(r"[^A-Za-z0-9\-]", " ", str_input).lower().split()
#With PorterStemmer implemented as above, the text was pretty crippled and hard to judge which made more sense.
#that's why I have commented that line out for now
#words = [porter_stemmer.stem(word) for word in words]
return words
vectorizer_types = [
{'name': 'CVectorizer', 'definition': CountVectorizer(stop_words='english', tokenizer=new_stem_tokenizer, max_features=100)},
{'name': 'TFVectorizer', 'definition': TfidfVectorizer(stop_words='english', tokenizer=new_stem_tokenizer, max_features=100, use_idf=False)},
{'name': 'TFVIDFVectorizer', 'definition': TfidfVectorizer(stop_words='english', tokenizer=new_stem_tokenizer, max_features=100, use_idf=True)}
]
for vectorizer in vectorizer_types:
X = vectorizer['definition'].fit_transform(speeches_df['content'])
number_of_clusters = 8
km = KMeans(n_clusters=number_of_clusters)
km.fit(X)
print("++++++++ Top terms per cluster -- using a", vectorizer['name'])
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer['definition'].get_feature_names()
for i in range(number_of_clusters):
top_ten_words = [terms[ind] for ind in order_centroids[i, :7]]
print("Cluster {}: {}".format(i, ' '.join(top_ten_words)))
# -L follows GitHub's redirect to the raw file
!curl -L -O https://github.com/ledeprogram/courses/raw/master/algorithms/data/hp.zip
import zipfile
import glob
# Extract the archive so the hp/ directory exists before globbing it
zipfile.ZipFile('hp.zip').extractall()
potter_paths = glob.glob('hp/*')
potter_paths[:5]
potter = []
for path in potter_paths:
with open(path) as potter_file:
potter_text = {
'pathname': path,
'filename': path.split('/')[-1],
'content': potter_file.read()
}
potter.append(potter_text)
potter_df = pd.DataFrame(potter)
potter_df.head()
vectorizer = TfidfVectorizer(stop_words='english', tokenizer=new_stem_tokenizer, use_idf=True)
X = vectorizer.fit_transform(potter_df['content'])
number_of_clusters = 2
km = KMeans(n_clusters=number_of_clusters)
km.fit(X)
print("Top terms per cluster")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names()
for i in range(number_of_clusters):
top_ten_words = [terms[ind] for ind in order_centroids[i, :7]]
print("Cluster {}: {}".format(i, ' '.join(top_ten_words)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plot a cfradial formatted sweep file from the NOAA P-3. This should work on data from 2012 onward.
Step2: Now let's plot the same data, but in an earth relative coordinate system. Notice that depending on aircraft orientation, the plot may change.
Step3: By using Py-ART as the reader for the sweep files, a Sigmet file can also be used.
|
<ASSISTANT_TASK:>
Python Code:
# Load the needed packages
from glob import glob
import matplotlib.pyplot as plt
import awot
from awot.graph import RadarSweepPlot
%matplotlib inline
fnc = '/Users/guy/data/p3radar/HRD_test/20120828/cfradial/cfrad.20120828_120541.113_to_20120828_120547.113_N42R_v1_s00_az-19.48_AIR.nc'
RadName = 'NOAA P-3 Tail Radar'
cbTx = 'NOAA P-3 Reflectivity '
tTx = "Track relative sweep display"
Field1, Field2 = 'DBZ', 'VEL'
FLim1, FLim2 = (0., 60.), (-40., 40.)
RngRings = [15., 30., 46., 60.]
RngLim = (-70., 70.)
HtLim = (-10.,20.)
dMax = 45.
# Creating axes outside seems to screw up basemap
fig, ax = plt.subplots(1, 1, figsize=(8, 4))
# Get the tail radar data and map it to an AWOT radar instance
# If false it retains the Py-ART radar instance
r = awot.io.read_tdr_sweep(fname=fnc, map_to_awot=True, platform='noaap3')
# Create a RadarSweepPlot instance and add track-relative plot
rgp = RadarSweepPlot(r)
rgp.plot_track_relative(Field1, mask_procedure=None, mask_tuple=None,
cb_pad=.02, cb_orient='vertical', cb_label=cbTx+"(dBZ)",
ylims=(-10.,15.), title=tTx, ax=ax, fig=fig)
fnc = '/Users/guy/data/p3radar/HRD_test/20120828/cfradial/cfrad.20120828_120541.113_to_20120828_120547.113_N42R_v1_s00_az-19.48_AIR.nc'
RadName = 'NOAA P-3 Tail Radar'
tTx = "Earth relative sweep display"
# Creating axes outside seems to screw up basemap
fig2, ax2 = plt.subplots(1, 1, figsize=(8, 4))
# Plot the earth-relative sweep using the same RadarSweepPlot instance
rgp.plot_earth_relative(Field1, mask_procedure=None, mask_tuple=None,
cb_pad=.02, cb_orient='vertical', cb_label=cbTx+"(dBZ)",
ylims=(-10.,15.), title=tTx, ax=ax2, fig=fig2)
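# The Doppler velocity field defined above (Field2) can be drawn the same way;
# this is an illustrative sketch reusing the same plotting call with assumed labels.
fig2v, ax2v = plt.subplots(1, 1, figsize=(8, 4))
rgp.plot_track_relative(Field2, mask_procedure=None, mask_tuple=None,
                        cb_pad=.02, cb_orient='vertical', cb_label='NOAA P-3 Velocity (m/s)',
                        ylims=(-10., 15.), title='Track relative velocity sweep', ax=ax2v, fig=fig2v)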
fSig = '/Users/guy/data/p3radar/arthur/data/raw/2TA140705120336.RAWTR67'
Field = 'reflectivity'
tTx = "Using a Sigmet format sweep"
# Creating axes outside seems to screw up basemap
fig3, ax3 = plt.subplots(1, 1, figsize=(8, 4))
# Get the tail radar data
rsig = awot.io.read_tdr_sweep(fname=fSig, map_to_awot=True, platform='noaap3')
# Create a RadarSweepPlot instance for the Sigmet data
rgp = RadarSweepPlot(rsig)
rgp.plot_track_relative(Field, mask_procedure=None, mask_tuple=None,
cb_pad=.02, cb_orient='vertical', cb_label=cbTx+"(dBZ)",
ylims=(-10.,15.), title=tTx, ax=ax3, fig=fig3)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Move to (or stay in) the '/content' directory
Step2: Ensure you are in the correct directory --> Expected output
Step3: Install NEURON and NetPyNE, and import matplotlib
Step4: This next line will detect if the directory already exists (i.e. you are re-running this code), and will delete it to prevent future errors.
Step5: Clone repository with the necessary cell and mod files
Step6: Move into the repository with all the necessary files
Step7: Ensure you are in the repository with the 'pwd' command --> Expected output
Step8: Compile the mod files --> Expected output
Step9: (2) Importing cells from different file formats
Step10: 2a. Import cell from .json format
Step11: 2b. Import a detailed morphology from a .swc file
Step12: 2c. Import a cell from a .hoc (NEURON) file
Step13: 2d. Import a cell from a .py (python) file
Step14: EXERCISE
Step15: (3) Explore and manipulate cell parameters
Step16: EXERCISE
Step17: EXERCISE
Step18: Now we want to explore (and change) the values of a channel parameter in a given cell model
Step19: EXERCISE
Step20: EXERCISE
Step21: Now let's see how these changes affect the cell behavior by plotting the cell's response to current input before and after param changes!
Step22: EXERCISE
Step23: Add cfg params
Step24: Create network and run simulation
Step25: EXERCISE
Step26: Run the sim
Step27: (4) Plotting Morphology
Step28: EXERCISE
Step29: Now let's set the propagation velocity and length constant
Step30: EXERCISE
Step31: Add some network stimulation parameters
Step32: EXERCISE
Step33: Add cell connectivity rules
Step34: EXERCISE
|
<ASSISTANT_TASK:>
Python Code:
!pwd
%cd /content/
!pwd
!pip install neuron
!pip install netpyne
import matplotlib
import os
import json
%matplotlib inline
if os.path.isdir('/content/cells_netpyne2021'):
!rm -r /content/cells_netpyne2021
!git clone https://github.com/ericaygriffith/cells_netpyne2021.git
%cd cells_netpyne2021/
!pwd
!nrnivmodl
from netpyne import specs, sim
# Network parameters
netParams = specs.NetParams() # object of class NetParams to store the network parameters
netParams.loadCellParamsRule(label='TC_reduced', fileName = 'TC_reduced_cellParams.json')
netParams.cellParams['TC_reduced']
netParams.importCellParams(
label='PYR_HH3D_swc',
conds={'cellType': 'PYR', 'cellModel': 'HH3D_swc'},
fileName='BS0284.swc',
cellName='swc_cell')
netParams.cellParams.keys()
netParams.importCellParams(
label='PYR_HH3D_hoc',
conds={'cellType': 'PYR', 'cellModel': 'HH3D_hoc'},
fileName='geom.hoc',
cellName='E21',
importSynMechs=False)
netParams.cellParams.keys()
netParams.importCellParams(
label='sRE_py',
conds={'cellType': 'sRE', 'cellModel': 'HH'},
fileName='sRE.py',
cellName='sRE',
importSynMechs=False)
netParams.cellParams.keys()
netParams.importCellParams(
label='mouse_hipp_swc',
conds={'cellType': 'hipp','cellModel': 'HH3D'},
fileName='mouseGABA_hipp.swc',
cellName='swc_hippCell'
)
netParams.cellParams.keys()
netParams.cellParams.keys()
netParams.cellParams['TC_reduced']['secs']['soma']['geom']['L']
geom_TC = netParams.cellParams['TC_reduced']['secs']['soma']['geom']
geom_TC['L']
netParams.cellParams['TC_reduced']['secs']['soma']['mechs'].keys()
netParams.cellParams['TC_reduced']['secs']['soma']['mechs'].keys()
netParams.cellParams['TC_reduced']['secs']['soma']['mechs']['pas'].keys()
netParams.cellParams['TC_reduced']['secs']['soma']['mechs']['pas']['g'] = 5.0e-05  # leak conductance in S/cm2
netParams.cellParams['TC_reduced']['secs']['soma']['mechs']['pas']['g']
netParams.cellParams['mouse_hipp_swc']['secs']['soma_0']['mechs']['pas'] = {'g': 0.0000357, 'e': -70}
for sec in netParams.cellParams['PYR_HH3D_swc']['secs'].keys():
netParams.cellParams['PYR_HH3D_swc']['secs'][sec]['geom']['cm'] = 1
netParams.popParams['TC_pop'] = {'cellType': 'TC', 'numCells': 1, 'cellModel': 'HH_reduced'}
# The 'Input' source is assumed to be a current clamp; delay/duration/amplitude here are illustrative
netParams.stimSourceParams['Input'] = {'type': 'IClamp', 'del': 300, 'dur': 500, 'amp': 0.1}
netParams.stimTargetParams['Input->TC_pop'] = {'source': 'Input', 'sec':'soma', 'loc': 0.5, 'conds': {'pop':'TC_pop'}}
## cfg
cfg = specs.SimConfig() # object of class SimConfig to store simulation configuration
cfg.duration = 2*1e3 # Duration of the simulation, in ms
cfg.dt = 0.01 # Internal integration timestep to use
cfg.verbose = 1 # Show detailed messages
cfg.recordTraces = {'V_soma':{'sec':'soma','loc':0.5,'var':'v'}} # Dict with traces to record
cfg.recordStep = 0.01
cfg.filename = 'model_output' # Set file output name
cfg.saveJson = False
cfg.analysis['plotTraces'] = {'include': [0], 'saveFig': True} # Plot recorded traces for this list of cells
cfg.hParams['celsius'] = 36
sim.createSimulateAnalyze(netParams = netParams, simConfig = cfg)
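## Re-run the same configuration after the exercise's parameter changes above,
## so the traces before and after can be compared.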
## cfg
cfg = specs.SimConfig() # object of class SimConfig to store simulation configuration
cfg.duration = 2*1e3 # Duration of the simulation, in ms
cfg.dt = 0.01 # Internal integration timestep to use
cfg.verbose = 1 # Show detailed messages
cfg.recordTraces = {'V_soma':{'sec':'soma','loc':0.5,'var':'v'}} # Dict with traces to record
cfg.recordStep = 0.01
cfg.filename = 'model_output' # Set file output name
cfg.saveJson = False
cfg.analysis['plotTraces'] = {'include': [0], 'saveFig': True} # Plot recorded traces for this list of cells
cfg.hParams['celsius'] = 36
sim.createSimulateAnalyze(netParams = netParams, simConfig = cfg)
netParams.popParams['HH3D_pop_hoc'] = {'cellType': 'PYR', 'numCells': 1, 'cellModel': 'HH3D_hoc'}
sim.createSimulateAnalyze(netParams = netParams, simConfig = cfg)
%matplotlib inline
sim.analysis.plotShape(includePre = [], includePost=['HH3D_pop_hoc'], showSyns=False, figSize=(4,9), dist=0.8, saveFig=True)
netParams.sizeX = 200
netParams.propVelocity = 100.0 # propagation velocity (um/ms)
netParams.probLengthConst = 150.0 # length constant for conn probability (um)
netParams.synMechParams['exc'] = {'mod': 'Exp2Syn', 'tau1': 0.8, 'tau2': 5.3, 'e': 0} # NMDA synaptic mechanism
netParams.synMechParams['inh'] = {'mod': 'Exp2Syn', 'tau1': 0.6, 'tau2': 8.5, 'e': -75} # GABA synaptic mechanism
netParams.stimSourceParams['bkg'] = {'type': 'NetStim', 'rate': 40, 'noise': 0.3}
netParams.stimTargetParams['bkg->all'] = {'source': 'bkg',
'conds': {'cellType': ['E','I']},
'weight': 10.0, 'sec': 'soma',
'delay': 'max(1, normal(5,2))',
'synMech': 'exc'}
netParams.connParams['E->all'] = {
'preConds': {'cellType': 'E'}, 'postConds': {'y': [100,1000]}, # E -> all (100-1000 um)
'probability': 0.1 , # probability of connection
'weight': '5.0*post_ynorm', # synaptic weight
'delay': 'dist_3D/propVelocity', # transmission delay (ms)
'synMech': 'exc'} # synaptic mechanism
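# A hedged companion rule (illustrative values): inhibitory feedback whose connection
# probability decays with distance via the length constant defined above.
netParams.connParams['I->E'] = {
    'preConds': {'cellType': 'I'}, 'postConds': {'cellType': 'E'},
    'probability': '0.4*exp(-dist_3D/probLengthConst)',  # distance-dependent probability
    'weight': 1.0,                                       # synaptic weight
    'delay': 'dist_3D/propVelocity',                     # transmission delay (ms)
    'synMech': 'inh'}                                    # synaptic mechanism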
cfg.analysis['plot2Dnet'] = {'saveFig': True} # plot 2D cell positions and connections
cfg.analysis['plotConn'] = {'saveFig': True} # plot connectivity matrix
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: WSS dependent parameters
Step10: Parameters
Step11: Assembling and solving the discrete system
Step12: Class that performs simulation
Step13: Examples
Step14: High WSS=2.2
Step15: Concentration in intima as a function of WSS
Step16: Two layer model
Step17: Low WSS=0.02
Step18: Concentration in intima as a function of WSS
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import numpy as np
from scipy.sparse import dia_matrix
import scipy as sp
import scipy.sparse
import scipy.sparse.linalg
import matplotlib
import matplotlib.pyplot as plt
newparams = { 'savefig.dpi': 100, 'figure.figsize': (12/2., 5/2.) }
plt.rcParams.update(newparams)
params = {'legend.fontsize': 8,
'legend.linewidth': 0.2}
plt.rcParams.update(params)
def phi(WSS = 1.79):
Rcell = 15e-3 # mm
area=.64 # mm^2
SI = 0.38*np.exp(-0.79*WSS) + 0.225*np.exp(-0.043*WSS)
MC = 0.003797* np.exp(14.75*SI)
LC = 0.307 + 0.805 * MC
phi = (LC*np.pi*Rcell**2)/(area)
return( phi)
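# Quick hedged sanity check: the leaky-junction fraction phi should fall as the
# wall shear stress rises (illustrative WSS values in Pa).
for wss in (0.1, 0.5, 1.79, 2.5):
    print("WSS = %.2f Pa -> phi = %.3e" % (wss, phi(wss)))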
def Klj(w=14.3e-6,phi=5e-4):
"permability in m^2"
Rcell = 15e-3 # mm
return ( (w**2/3.)*(4.*w*phi)/Rcell * (1e-6) )
def Kend(w=14.3e-6,phi=5e-4):
"permability w m^2"
Kend_70mmHg =3.22e-21
Knj = Kend_70mmHg - Klj() # at 70mmHg
return Knj + Klj(w,phi)
def sigma_end(phi=5e-4,w=14.3*1e-6,r_m = 11e-6):
a = r_m/w
Kend_70mmHg =3.22e-21
Knj = Kend_70mmHg - Klj() # at 70mmHg
sigma_lj = 1-(1-3/2.*a**2+0.5*a**3)*(1-1/3.*a**2)
return 1 - ((1-sigma_lj)*Klj(phi=phi))/(Knj+Klj(phi=phi))
def Diffusivity(w=14.3e-6,phi=5e-4,r_m = 11e-6):
"Diffusivity w um^2/s"
R_cell = 15e-6 # m
a=r_m/w
D_lumen=2.71e-11
return D_lumen*(1-a)*(1.-1.004*a+0.418*a**3-0.16*a**5)*4*w/R_cell*phi*1e-3*1e12
class LDL_Parameters_Vafai2012(object):
    """S. Chung, K. Vafai, International Journal of Biomechanics 45 (2012)"""
names = [ 'endothel' , 'intima', 'IEL' ,'media' ]
D = [ 5.7e-12 , 5.4e-6 , 3.18e-9 , 5e-8 ]
V = [ 2.3e-2 ]*4
sigma = [ 0.9888 , 0.8272 , 0.9827 , 0.8836 ]
L = [ 2. , 10. , 2. , 200. ]
k_react = [ 0. , 0. , 0. , 3.197e-4 ]
K = [ 3.22e-15 , 2e-10 ,4.392e-13, 2e-12 ]
mu = [ 0.72e-3 , 0.72e-3 , 0.72e-3 , 0.72e-3 ]
def calculate_filration(self,dPmmHg):
mmHg2Pa = 133.3
dP = mmHg2Pa*dPmmHg
Rw = [L_*mu_/K_ for L_,K_,mu_ in zip(self.L,self.K,self.mu)]
self.Vfiltr = dP/sum(Rw)
def __init__(self,WSS=1.79):
        """Change units to micrometers.
        The class can be initialized with a value of WSS in Pa."""
dPmmHg=70
self.phi = phi(WSS=WSS)
self.sigma_end = sigma_end(phi=self.phi,w=14.3*1e-6,r_m = 11e-6)
self.Kend = Kend(w=14.3e-6,phi=self.phi)
self.K[0] = self.Kend*1e6
self.calculate_filration(dPmmHg=dPmmHg)
self.D = [D_*1e6 for D_ in self.D]
self.V = [ self.Vfiltr*1e6]*4
self.sigma[0] = self.sigma_end
self.D[0]=Diffusivity(w=14.3e-6,phi=self.phi,r_m = 11e-6)
def get_params(self):
return (self.D,self.V,self.sigma,self.L,self.k_react)
class LDL_Parameters_Vafai2006_Ai(object):
    """L. Ai, K. Vafai, International Journal of Heat and Mass Transfer 49 (2006),
    Table 2: physiological parameters used in the numerical simulation,
    values given in millimeters.
    With D of the endothelium depending on WSS."""
names = [ 'endothel' , 'intima', 'IEL' ,'media' ]
D = [ 6e-11 , 5.0e-6 , 3.18e-9 , 5e-8 ]
V = [ 2.3e-2 ]*4
sigma = [ 0.9886 , 0.8292 , 0.8295 , 0.8660 ]
L = [ 2. , 10. , 2. , 200. ]
k_react = [ 0. , 0. , 0. , 1.4e-4 ]
K = [ 3.2172e-15 , 2.2e-10 ,3.18e-13, 2e-12 ]
mu = [ 0.72e-3 , 0.72e-3 , 0.72e-3 , 0.72e-3 ]
def calculate_filration(self,dPmmHg):
mmHg2Pa = 133.3
dP = mmHg2Pa*dPmmHg
Rw = [L_*mu_/K_ for L_,K_,mu_ in zip(self.L,self.K,self.mu)]
self.Vfiltr = dP/sum(Rw)
def __init__(self,WSS=1.79):
        """Change units to micrometers.
        The class can be initialized with a value of WSS in Pa."""
dPmmHg=70
self.phi = phi(WSS=WSS)
self.sigma_end = sigma_end(phi=self.phi,w=14.3*1e-6,r_m = 11e-6)
self.Kend = Kend(w=14.3e-6,phi=self.phi)
self.K[0] = self.Kend*1e6
self.calculate_filration(dPmmHg=dPmmHg)
self.D = [D_*1e6 for D_ in self.D]
self.V = [ self.Vfiltr*1e6]*4
self.sigma[0] = self.sigma_end
self.D[0]=Diffusivity(w=14.3e-6,phi=self.phi,r_m = 11e-6)
def get_params(self):
return (self.D,self.V,self.sigma,self.L,self.k_react)
class LDL_Parameters_Vafai2006_Ai_without_D(object):
    """L. Ai, K. Vafai, International Journal of Heat and Mass Transfer 49 (2006),
    Table 2: physiological parameters used in the numerical simulation,
    values given in millimeters.
    With D of the endothelium from that work."""
names = [ 'endothel' , 'intima', 'IEL' ,'media' ]
D = [ 8.15e-11 , 5.0e-6 , 3.18e-9 , 5e-8 ]
V = [ 2.3e-2 ]*4
sigma = [ 0.9886 , 0.8292 , 0.8295 , 0.8660 ]
L = [ 2. , 10. , 2. , 200. ]
k_react = [ 0. , 0. , 0. , 1.4e-4 ]
K = [ 3.2172e-15 , 2.2e-10 ,3.18e-13, 2e-12 ]
mu = [ 0.72e-3 , 0.72e-3 , 0.72e-3 , 0.72e-3 ]
def calculate_filration(self,dPmmHg):
mmHg2Pa = 133.3
dP = mmHg2Pa*dPmmHg
Rw = [L_*mu_/K_ for L_,K_,mu_ in zip(self.L,self.K,self.mu)]
self.Vfiltr = dP/sum(Rw)
def __init__(self,WSS=1.79):
        """Change units to micrometers.
        The class can be initialized with a value of WSS in Pa."""
dPmmHg=70
self.phi = phi(WSS=WSS)
self.sigma_end = sigma_end(phi=self.phi,w=14.3*1e-6,r_m = 11e-6)
self.Kend = Kend(w=14.3e-6,phi=self.phi)
self.K[0] = self.Kend*1e6
self.calculate_filration(dPmmHg=dPmmHg)
self.D = [D_*1e6 for D_ in self.D]
self.V = [ self.Vfiltr*1e6]*4
self.sigma[0] = self.sigma_end
def get_params(self):
return (self.D,self.V,self.sigma,self.L,self.k_react)
class LDL_Parameters_Olgac_WSS(object):
    """U. Olgac, V. Kurtcuoglu, D. Poulikakos, American Journal of
    Physiology - Heart and Circulatory Physiology 294.
    The dependency on WSS is implemented."""
name = [ 'endothel' , 'wall' ]
D = [ 6e-11 , 8.0e-7 ]
V = [ 2.3e-5 ]*2
sigma = [ 0.988 , 0.8514 ]
L = [ 2. , 338. ]
k_react = [ 0. , 3.0e-4 ]
K = [ 3.32e-15 , 1.2e-12]
mu = [ 0.72e-3 ,0.001]
def calculate_filration(self,dPmmHg):
mmHg2Pa = 133.3
dP = mmHg2Pa*dPmmHg
Rw = [L_*mu_/K_ for L_,K_,mu_ in zip(self.L,self.K,self.mu)]
self.Vfiltr = dP/sum(Rw)
def __init__(self,WSS=1.79):
        """Change units to micrometers.
        The class can be initialized with a value of WSS in Pa."""
dPmmHg=70
self.phi = phi(WSS=WSS)
self.sigma_end = sigma_end(phi=self.phi,w=14.3*1e-6,r_m = 11e-6)
self.Kend = Kend(w=14.3e-6,phi=self.phi)
self.K[0] = self.Kend*1e6
self.calculate_filration(dPmmHg=dPmmHg)
self.D = [D_*1e6 for D_ in self.D]
self.V = [ self.Vfiltr*1e6]*4
self.sigma[0] = self.sigma_end
self.D[0]=Diffusivity(w=14.3e-6,phi=self.phi,r_m = 11e-6)
def get_params(self):
return (self.D,self.V,self.sigma,self.L,self.k_react)
class LDL_Sim(object):
def __init__(self, pars):
self.pars = pars
self.c_st = None
def discretize(self,N=2000):
self.N = N
k = np.ones(N)
v = np.ones(N)
Dyf = np.ones(N)
D,V,sigma,L,k_react = self.pars.get_params()
l = np.sum(L)
self.l = l
self.x=np.linspace(0,l,N)
layers=[0]+list( np.ceil( (N*(np.cumsum(L)/sum(L)))).astype(np.int32) )
for i,(l1,l2) in enumerate(zip(layers[:],layers[1:])):
k[l1:l2] = k_react[i]
v[l1:l2] = (1.0-sigma[i])*V[i]
Dyf[l1:l2] = D[i]
dx2_1 = (N-1)**2/l**2
dx_1 = (N-1)/l
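        # Assemble the three diagonals of the finite-difference operator for the
        # steady convection-diffusion-reaction balance D c'' - v c' - k c = 0
        # (boundary rows are overwritten below to impose Dirichlet conditions).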
diag_l = np.ones(N)*(np.roll(Dyf,-1)*dx2_1)
diag = np.ones(N)*(-2.*Dyf*dx2_1 - k + v*dx_1)
diag_u = np.ones(N)*(np.roll(Dyf,1)*dx2_1 - np.roll(v,1)*dx_1)
# Layer's junctions
for j in layers[1:-1]:
diag[j] = v[j-1]-v[j+1]-(Dyf[j-1]+Dyf[j+1])*dx_1
diag_l[j-1] = Dyf[j-1]*dx_1
diag_u[j+1] = Dyf[j+1]*dx_1
#Boundary Conditions
diag[0] = 1
diag[-1] = 1
diag_u[0+1] = 0
diag_l[0-2] = 0
self.L = dia_matrix((np.array([diag_l,diag,diag_u]),np.array([-1,0,1])), shape=(N,N))
def solve_stationary(self,bc=[1,0]):
b = np.zeros(self.N)
b[0],b[-1] = bc
L = self.L.tocsr()
self.c_st = sp.sparse.linalg.linsolve.spsolve(L,b)
def plot_c(self,yrange=(0,0.2),xrange=(0,214),filename=None, color='red', alpha=0.2, style='-'):
i1,i2 = int(xrange[0]/self.l*self.N),int(xrange[1]/self.l*self.N)
plt.plot(self.x[i1:i2],self.c_st[i1:i2],color=color,linewidth=2, ls=style)
plt.ylim( *yrange)
plt.xlim( *xrange)
L=self.pars.L
d=[0]+np.cumsum(self.pars.L).tolist()
colors=['m','g','b','w']
for i,(l1,l2) in enumerate(zip(d[:],d[1:])):
plt.bar([l1,],yrange[1],l2-l1, color=colors[i], linewidth=0.3, alpha=alpha)
plt.grid(True,axis='y', which='major')
plt.xlabel(r"$x \left[\mu m\right]$")
plt.ylabel(r"$c(x)$")
if filename!=None:
plt.savefig(filename)
def LDL_simulation(wss=1.79, parameters="2012", bc=[1,0.0047],verbose=True):
if (parameters=="4L_2012"):
pars = LDL_Parameters_Vafai2012(WSS=wss)
elif (parameters=="4L_2006_Ai"):
pars = LDL_Parameters_Vafai2006_Ai(WSS=wss)
elif (parameters=="4L_2006_Ai_without_D"):
pars = LDL_Parameters_Vafai2006_Ai_without_D(WSS=wss)
elif (parameters=="2L"):
pars = LDL_Parameters_Olgac_WSS(WSS=wss)
else:
print "Parameters error"
return
sim = LDL_Sim(pars)
sim.discretize(130*214)
sim.solve_stationary(bc=bc)
if verbose:
print "The total surfaced LDL concentration:",np.sum(sim.c_st)*(sim.l/(sim.N-1))
return sim
LDL_simulation(wss=0.02, parameters="4L_2012").plot_c(yrange=(0,5.0),xrange=(0,214), color='green', alpha=0.1)
LDL_simulation(wss=0.02, parameters="4L_2006_Ai").plot_c(yrange=(0,5.0),xrange=(0,214), color='blue', alpha=0.1, style='--')
legend(('4LC parameters','4LA parameters'));
LDL_simulation(wss=0.02, parameters="4L_2012").plot_c(yrange=(0,5.0),xrange=(0,25), color='green', alpha=0.1)
LDL_simulation(wss=0.02, parameters="4L_2006_Ai").plot_c(yrange=(0,5.0),xrange=(0,25), color='blue', alpha=0.1, style='--')
legend(('4LC parameters','4LA parameters'));
sim = LDL_simulation(wss=2.2, parameters="4L_2012")
sim.plot_c(yrange=(0,1.1),xrange=(0,214), color='green', alpha=0.1)
sim = LDL_simulation(wss=2.2, parameters="4L_2006_Ai")
sim.plot_c(yrange=(0,1.1),xrange=(0,214), color='blue', alpha=0.1, style='--')
legend(('4LC parameters','4LA parameters'));
# use e.g. xlim(0,10) to zoom
sim = LDL_simulation(wss=2.2, parameters="4L_2012")
sim.plot_c(yrange=(0,1.1),xrange=(0,25), color='green', alpha=0.1)
sim = LDL_simulation(wss=2.2, parameters="4L_2006_Ai_without_D")
sim.plot_c(yrange=(0,1.1),xrange=(0,25), color='orange', alpha=0.1)
sim = LDL_simulation(wss=2.2, parameters="4L_2006_Ai")
sim.plot_c(yrange=(0,1.1),xrange=(0,25), color='blue', alpha=0.1, style='--')
legend(('4LC parameters','4LA parameters without D(WSS)','4LA parameters with D(WSS)'));
WSSs = np.arange(0.0,3.0,0.1)
print(WSSs)
c_endo_wss=[LDL_simulation(wss=x, parameters="4L_2012",verbose=False).c_st[2*130] for x in WSSs]
c_endo_wss2=[LDL_simulation(wss=x, parameters="4L_2006_Ai",verbose=False).c_st[2*130] for x in WSSs]
plot (WSSs,c_endo_wss2)
plot (WSSs,c_endo_wss, ls='--')
title("WSS dependence of intima side LDL concentration at the endothelium", fontsize=10)
xlim([0.0,2.5])
xlabel('WSS [Pa]')
ylabel('$c_{w}end^{*}$')
grid(True)
legend(('4LA parameters','4LC parameters'))
sim = LDL_simulation(wss=2.2, parameters="2L")
sim.plot_c(yrange=(0,0.012),xrange=(0,340));
sim = LDL_simulation(wss=0.02, parameters="2L")
sim.plot_c(yrange=(0,0.25),xrange=(0,340))
c_endo_wss=[LDL_simulation(wss=x, parameters="2L",verbose=False).c_st[2*130] for x in WSSs]
plot (WSSs,c_endo_wss)
xlim([0.0,1.6])
ylim([0,0.3])
xlabel('WSS [Pa]')
ylabel('$c_{w}end^{*}$')
grid(True)
plt.title("WSS dependence of intima side LDL concentration at the endothelium", fontsize=10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Processing
Step2: Take a look at the first five rows
Step3: There are 5234 posts in total
Step4: Drop the rows where status_message is null
Step5: Then drop the rows where the reactions equal 0
Step6: 5061 posts remain
Step7: Feature Engineering
Step8: Process the features (status_type, weekday)
Step9: Process the label
Step10: Process the feature (status_message)
Step11: This means there are 5061 posts and 27518 words
Step12: Look at the data columns; we can only use status_message, status_type, weekday, and hour
Step13: Combine the word-vector features with the other features
Step14: Dimensions and count of the processed features
Step15: Count of the processed labels
Step16: Next, the model we will use is logistic regression
Step17: After running it you'll find the accuracy is only about 50%, which is no different from random guessing
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import operator
import datetime
import numpy as np
page_id = "appledaily.tw"
path = 'post/'+page_id+'_post.csv'
df = pd.read_csv(path, encoding='utf8')
df.head()
len(df)
df.apply(lambda x: sum(x.isnull()))
df = df[df['status_message'].notnull()].reindex()
df = df[df['num_reactions']!=0].reindex()
len(df)
df['datetime'] = df['status_published'].apply(lambda x: datetime.datetime.strptime(x,'%Y-%m-%d %H:%M:%S'))
df['weekday'] = df['datetime'].apply(lambda x: x.day_name())  # day_name() replaces the removed weekday_name attribute
df['hour'] = df['datetime'].apply(lambda x: x.hour)
pd.Series(data = ["EuropeS","Asia","Europe","US","Asia"], name = 'location')
# Numerical Coding
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit_transform(pd.Series(data = ["Europe","Asia","Europe","US","Asia"], name = 'location'))
# One-Hot Coding
pd.get_dummies(le.fit_transform(pd.Series(data = ["Europe","Asia","Europe","US","Asia"], name = 'location')))
var_to_encode = ['status_type','weekday']
for col in var_to_encode:
df[col] = le.fit_transform(df[col])
df = pd.get_dummies(df, columns=var_to_encode+['hour'])
y = df['num_reactions']
y.describe()
y = df['num_reactions'].apply(lambda x: 1 if x > 1243 else 0)
# Load the traditional Chinese dictionary
import jieba.analyse
# The traditional Chinese dictionary ships with the jieba package installation
jieba.set_dictionary('/home/wy/anaconda3/envs/python3/lib/python3.6/site-packages/jieba/extra_dict/dict.txt.big')
def jieba_extract(message_list):
word_count = {}
for message in message_list:
seg_list = jieba.analyse.extract_tags(message, topK=120)
for seg in seg_list:
if not seg in word_count:
word_count[seg] = 1
else:
word_count[seg] += 1
sorted_word_count = sorted(word_count.items(), key=operator.itemgetter(1))
sorted_word_count.reverse()
return sorted_word_count
sorted_word_count = jieba_extract(list(df['status_message']))
words = [one[0] for one in sorted_word_count]
def get_word_vector(message):
    # Binary bag-of-words vector over the extracted vocabulary
    v = np.zeros(len(words))
    v_word = jieba.analyse.extract_tags(message, topK=120)
    for w in v_word:
        v[words.index(w)] = 1
    return v
sorted_word_count = jieba_extract(list(df['status_message']))
words = [one[0] for one in sorted_word_count]
word_vector = [get_word_vector(one) for one in df['status_message']]
word_vector = np.array(word_vector)
word_vector.shape
word_vector
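# Illustrative check (hedged): list which vocabulary words fired for the first post.
active = [words[j] for j in np.where(word_vector[0] == 1)[0]]
print(active[:20])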
df.columns
features = df[['status_type_0', 'status_type_1',
               'status_type_2', 'status_type_3', 'weekday_0', 'weekday_1', 'weekday_2',
               'weekday_3', 'weekday_4', 'weekday_5', 'weekday_6', 'hour_0', 'hour_1',
               'hour_2', 'hour_3', 'hour_4', 'hour_5', 'hour_6', 'hour_7', 'hour_8',
               'hour_9', 'hour_10', 'hour_11', 'hour_12', 'hour_13', 'hour_14',
               'hour_15', 'hour_16', 'hour_17', 'hour_18', 'hour_19', 'hour_20',
               'hour_21', 'hour_22', 'hour_23']].to_numpy()  # .as_matrix() was removed from pandas
features
X = np.concatenate((features, word_vector), axis=1)
X
X.shape
y.shape
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
clf = LogisticRegression()
clf.fit(X, y)
score = cross_val_score(clf, X, y, cv=5)
print (score.mean())
clf = LogisticRegression(C=0.001)
clf.fit(X, y)
score = cross_val_score(clf, X, y, cv=5)
print (score.mean())
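# Hedged baseline check: the label was thresholded near the median of num_reactions,
# so a roughly 50/50 class split means ~0.5 accuracy is just the chance-level baseline.
print(y.value_counts(normalize=True))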
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: PCA via PyTorch
Step2: We note that the components look rotated compared to the PCs. Do they capture the same information, however? We can verify that they are indeed equivalent to the PCs using CCA.
Step3: Indeed, our gradient-based PCA finds equivalent components to conventional PCA. How could we find the projection of another matrix $\hat{X}$ onto the components $V$? We will learn an amortized inference model to do so.
Step4: And check that the amortizer indeed works.
Step5: Thus, we've shown how we can learn a generative model that finds latent factors, and how to invert this generative model using amortized inference.
Step6: Very cool! We find a parts-based decomposition - eyebrows and eyes are easy to see. This is consistent with what we know about NMF on face datasets - see e.g. Lee & Seung (1999).
|
<ASSISTANT_TASK:>
Python Code:
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_olivetti_faces
from sklearn.cluster import MiniBatchKMeans
from sklearn import decomposition
import sklearn
import sklearn.cross_decomposition
import numpy as np
import seaborn as sns
rng = 10
faces, _ = fetch_olivetti_faces(return_X_y=True, shuffle=True,
random_state=rng)
# Downsample.
faces = faces.reshape((-1, 32, 2, 32, 2)).mean(axis=4).mean(axis=2).reshape((400, -1))
n_samples, n_features = faces.shape
n_components = 10
plt.imshow(faces[0, :].reshape((32, 32)))
estimator = decomposition.PCA(n_components=n_components, svd_solver='randomized', whiten=True)
transformed = estimator.fit_transform(faces)
plt.figure(figsize=(10, 4))
for i in range(10):
plt.subplot(2, 5, 1 + i)
plt.imshow(estimator.components_[i, :].reshape((32, 32)))
plt.axis('off')
plt.suptitle('10 PCs of face dataset (eigenfaces)')
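# A quick hedged check of how much variance the 10 PCs capture together.
print('Total explained variance ratio:', estimator.explained_variance_ratio_.sum())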
import torch
from torch import optim
from torch import nn
import torch.nn.functional as F
from torch.utils.data import Dataset
class LinearReconstructionModel(nn.Module):
def __init__(self, n_components, n_samples, n_features):
super().__init__()
self.n_components = n_components
self.n_samples = n_samples
self.n_features = n_features
self.U = nn.Parameter(torch.randn(n_samples, n_components) / n_components)
self.V = nn.Parameter(torch.randn(n_features, n_components) / n_components)
def forward(self, idx):
return self.U[idx, :] @ self.V.T
class ImageSet(Dataset):
def __init__(self, X):
self.X = X
def __getitem__(self, i):
return i, self.X[i, :]
def __len__(self):
return self.X.shape[0]
device = 'cuda'
model = LinearReconstructionModel(n_components, n_samples, n_features)
model.to(device=device)
faceset = ImageSet((faces - faces.mean(axis=0, keepdims=True)) / faces.std())
faceloader = torch.utils.data.DataLoader(faceset, batch_size=100, shuffle=True, num_workers=2)
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=100.0, momentum=0.9)
for epoch in range(50): # loop over the dataset multiple times
running_loss = 0.0
U_before = model.U.detach().cpu().numpy().copy()
for i, data in enumerate(faceloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = model(inputs)
        loss = criterion(outputs, labels.to(device=device))
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
#print(((model.U.detach().cpu().numpy() - U_before) ** 2).mean())
if epoch % 10 == 0:
print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 10))
running_loss = 0.0
print('Finished Training')
comps = model.V.detach().cpu().numpy().T
plt.figure(figsize=(10, 4))
for i in range(10):
plt.subplot(2, 5, 1 + i)
plt.imshow(comps[i, :].reshape((32, 32)))
plt.axis('off')
plt.suptitle('10 components of face dataset estimated through gradient descent')
dec = sklearn.cross_decomposition.CCA(n_components=10)
dec.fit(estimator.components_.T, comps.T)
X_c, Y_c = dec.transform(estimator.components_.T, comps.T)
C = np.corrcoef(X_c.T, Y_c.T)
C = C[:, :10][10:, :]
plt.figure(figsize=(10, 6))
sns.heatmap(C, annot=True, square=True, ax=plt.gca())
plt.xlabel('Conventional PCA-based components')
plt.ylabel('Gradient descent-based components')
class Amortizer(nn.Module):
def __init__(self, n_components, n_samples, n_features):
super().__init__()
self.n_components = n_components
self.n_samples = n_samples
self.n_features = n_features
self.W = nn.Parameter(torch.randn(n_features, n_components) / n_components)
def forward(self, X):
return X @ self.W
amortizer = Amortizer(n_components, n_samples, n_features)
amortizer.to(device=device)
faceset = ImageSet((faces - faces.mean(axis=0, keepdims=True)) / faces.std())
faceloader = torch.utils.data.DataLoader(faceset, batch_size=100, shuffle=True, num_workers=2)
criterion = nn.MSELoss()
optimizer = optim.SGD(amortizer.parameters(), lr=0.03, momentum=0.9)
for epoch in range(100): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(faceloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
        outputs = amortizer(labels.to(device=device))
        # Detach the target so gradients flow only into the amortizer's weights
        loss = criterion(outputs, model.U[inputs, :].detach())
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if epoch % 25 == 0:
print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 10))
running_loss = 0.0
print('Finished Training')
_, data = faceset[:]
data = torch.tensor(data).to(device=device)
U_estimated = amortizer(data)
U_direct = data @ model.V @ torch.linalg.inv(model.V.T @ model.V)
R2 = 1 - ((U_estimated - model.U) ** 2).mean() / ((model.U - model.U.mean()) ** 2).mean()
print(R2)
R2_direct = 1 - ((U_direct - model.U) ** 2).mean() / ((model.U - model.U.mean()) ** 2).mean()
print(R2_direct)
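# Hedged extra check: the amortizer should approximate the same linear map as the
# direct pseudo-inverse projection, so the two estimates ought to agree closely.
R2_amortizer_vs_direct = 1 - ((U_estimated - U_direct) ** 2).mean() / ((U_direct - U_direct.mean()) ** 2).mean()
print(R2_amortizer_vs_direct)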
class NNLinearReconstructionModel(nn.Module):
def __init__(self, n_components, n_samples, n_features):
super().__init__()
self.n_components = n_components
self.n_samples = n_samples
self.n_features = n_features
self.U = nn.Parameter(torch.randn(n_samples, n_components) / n_components)
self.V = nn.Parameter(torch.randn(n_features, n_components) / n_components)
def forward(self, idx):
return self.U[idx, :] @ F.leaky_relu(self.V.T)
model = NNLinearReconstructionModel(n_components, n_samples, n_features)
model.to(device=device)
faceset = ImageSet((faces - faces.mean(axis=0, keepdims=True)) / faces.std())
faceloader = torch.utils.data.DataLoader(faceset, batch_size=100, shuffle=True, num_workers=2)
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=50.0, momentum=0.9)
for epoch in range(100): # loop over the dataset multiple times
running_loss = 0.0
U_before = model.U.detach().cpu().numpy().copy()
for i, data in enumerate(faceloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = model(inputs)
        loss = criterion(outputs, labels.to(device=device))
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
#print(((model.U.detach().cpu().numpy() - U_before) ** 2).mean())
if epoch % 10 == 0:
print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 10))
running_loss = 0.0
print('Finished Training')
comps = F.leaky_relu(model.V).detach().cpu().numpy().T
plt.figure(figsize=(10, 4))
for i in range(10):
plt.subplot(2, 5, 1 + i)
plt.imshow(comps[i, :].reshape((32, 32)))
plt.axis('off')
plt.suptitle('10 components of face dataset estimated through gradient descent')
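# Hedged sanity check: reconstruction error of the nonnegative-component model on the
# standardized faces (torch.no_grad avoids building a graph for this evaluation).
with torch.no_grad():
    recon = model(torch.arange(n_samples, device=device)).cpu().numpy()
print('Mean squared reconstruction error:', ((faceset.X - recon) ** 2).mean())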
class NNLinearReconstructionModel(nn.Module):
def __init__(self, n_components, n_samples, n_features):
super().__init__()
self.n_components = n_components
self.n_samples = n_samples
self.n_features = n_features
self.U = nn.Parameter(torch.randn(n_samples, n_components) / n_components)
self.V = nn.Parameter(torch.randn(n_features, n_components) / n_components)
def forward(self, idx):
return self.U[idx, :] @ F.leaky_relu(self.V.T)
xi, yi = torch.meshgrid(torch.arange(32), torch.arange(32))
xi = xi.to(device=device).reshape(-1, 1)
yi = yi.to(device=device).reshape(-1, 1)
def penalty(model):
V = F.relu(model.V)
x0 = (V * xi).sum(axis=0) / V.sum(axis=0)
y0 = (V * yi).sum(axis=0) / V.sum(axis=0)
E = ((V * (xi - x0) ** 2).sum(axis=0) +
(V * (yi - y0) ** 2).sum(axis=0)).sum()
return E * 1E-6
model = NNLinearReconstructionModel(n_components, n_samples, n_features)
model.to(device=device)
faceset = ImageSet((faces - faces.mean(axis=0, keepdims=True)) / faces.std())
faceloader = torch.utils.data.DataLoader(faceset, batch_size=100, shuffle=True, num_workers=2)
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=50.0, momentum=0.9)
for epoch in range(300): # loop over the dataset multiple times
running_loss = 0.0
U_before = model.U.detach().cpu().numpy().copy()
for i, data in enumerate(faceloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = model(inputs)
        loss = criterion(outputs, labels.to(device=device)) + penalty(model)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
#print(((model.U.detach().cpu().numpy() - U_before) ** 2).mean())
if epoch % 30 == 0:
print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 10))
running_loss = 0.0
print('Finished Training')
comps = F.leaky_relu(model.V).detach().cpu().numpy().T
plt.figure(figsize=(10, 4))
for i in range(10):
plt.subplot(2, 5, 1 + i)
plt.imshow(comps[i, :].reshape((32, 32)))
plt.axis('off')
plt.suptitle('10 components of face dataset estimated through gradient descent')
<END_TASK>
|