markdown | code | output | license | path | repo_name
|---|---|---|---|---|---|
Add a column indicating week/weekend | # %load snippets/01-pandas_introduction103.py | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
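The loaded snippet itself isn't shown in this dump; a minimal sketch of what it likely does, assuming the hourly measurements live in a DataFrame named `data` with a `DatetimeIndex` (both names are assumptions):

```python
# add a boolean column marking weekend rows (assumed DataFrame name: data)
data['weekday'] = data.index.weekday            # Monday=0 ... Sunday=6
data['weekend'] = data['weekday'].isin([5, 6])
```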
Now we can groupby the hour of the day and the weekend (or use `pivot_table`): | # %load snippets/01-pandas_introduction104.py
# %load snippets/01-pandas_introduction105.py
# %load snippets/01-pandas_introduction106.py
# %load snippets/01-pandas_introduction107.py | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
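These snippets are also hidden; a hedged sketch of the grouping step, under the same `data` assumption as above:

```python
# average concentration per hour of day, split into week vs. weekend (assumption)
data['hour'] = data.index.hour
data.groupby(['weekend', 'hour']).mean()

# an equivalent pivot_table formulation
data.pivot_table(index='hour', columns='weekend', aggfunc='mean')
```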
EXERCISE: What is the number of exceedances of hourly values above the European limit of 200 µg/m³? Count the number of exceedances of hourly values above the European limit of 200 µg/m³ for each year and station after 2005. Make a barplot of the counts. Add a horizontal line indicating the maximum number of exceedances (18) allowed per year. Hints: Create a new DataFrame, called `exceedances` (with boolean values), indicating whether the threshold is exceeded or not. Remember that the sum of True values can be used to count elements. Do this using groupby for each year. Adding a horizontal line can be done with the matplotlib function `ax.axhline`. (A sketch of one possible solution follows the cells below.) | # re-reading the data to have a clean version
no2 = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)
# %load snippets/01-pandas_introduction109.py
# %load snippets/01-pandas_introduction110.py
# %load snippets/01-pandas_introduction111.py | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
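A minimal sketch of one possible solution to the exercise, following the hints above (every name except `no2` is an assumption):

```python
exceedances = no2 > 200                                       # boolean DataFrame
per_year = exceedances.groupby(exceedances.index.year).sum()  # True values count as 1
per_year = per_year[per_year.index > 2005]

ax = per_year.plot(kind='bar', figsize=(10, 6))
ax.axhline(18, color='red', linestyle='--')                   # allowed maximum per year
```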
Test whether a module has been imported | 'my_keras_utilities' in sys.modules
| _____no_output_____ | MIT | CIFAR_10/Other_numpy/my_cifar_transferlearning-Copy1.ipynb | kaelgabriel/Neural_Networks_Pytorch |
try: train_network(model_week05, model_name, train_generator, validation_generator, **fit_params); except AttributeError: print('nope') | import keras.backend as K
K.set_image_data_format('channels_first')
K.set_floatx('float32')
print('Backend: {}'.format(K.backend()))
print('Data format: {}'.format(K.image_data_format()))
!nvidia-smi
!ls ../Task\ 5 | ls: cannot access '../Task 5': No such file or directory
| MIT | CIFAR_10/Other_numpy/my_cifar_transferlearning-Copy1.ipynb | kaelgabriel/Neural_Networks_Pytorch |
Helper function | class MyCb(TrainingPlotter):
def on_epoch_end(self, epoch, logs={}):
super().on_epoch_end(epoch, logs)
def train_network(model, model_name, train_generator, validation_generator,
train_steps=10, valid_steps=10, opt='rmsprop', nepochs=50,
patience=50, reset=False, ploss=1.0):
do_plot = (ploss > 0.0)
model_fn = model_name + '.model'
if reset and os.path.isfile(model_fn):
os.unlink(model_name + '.model')
if not os.path.isfile(model_fn):
# initialize the optimizer and model
print("[INFO] compiling model...")
model.compile(loss="binary_crossentropy", optimizer=opt, metrics=["accuracy"])
# History, checkpoint, earlystop, plot losses:
        cb = [ModelCheckpoint(model_fn, monitor='val_acc', verbose=0, save_best_only=True, mode='auto', period=1),
MyCb(n=1, filepath=model_name, patience=patience, plot_losses=do_plot),
ReduceLROnPlateau(monitor='val_loss', factor=0.9, patience=7, verbose=0, mode='auto', epsilon=0.00001, cooldown=0, min_lr=0)
]
    else:
        print("[INFO] loading model...")
        model, histo = load_model_and_history(model_name)
        histo.patience = patience
        cb = [None, histo, None]  # pad to match the list layout of the fresh-model branch above
past_epochs = cb[1].get_nepochs()
tr_epochs = nepochs - past_epochs
if do_plot:
vv = 0
fig = plot.figure(figsize=(15,6))
plot.ylim(0.0, ploss)
plot.xlim(0, nepochs)
plot.grid(True)
else:
vv = 2
print("[INFO] training for {} epochs ...".format(tr_epochs))
try:
        model.fit_generator(train_generator, steps_per_epoch=train_steps,
                            validation_data=validation_generator, validation_steps=valid_steps,
                            epochs=tr_epochs, verbose=vv,  # train only the remaining epochs when resuming
                            callbacks=[c for c in cb if c is not None])
except KeyboardInterrupt:
pass
model, histo = load_model_and_history(model_name)
return model, cb
def test_network(model_name, validation_generator, nb_validation_samples):
model, histo = load_model_and_history(model_name)
print('Model from epoch {}'.format(histo.best_epoch))
print("[INFO] evaluating in the test data set ...")
loss, accuracy = model.evaluate_generator(validation_generator, nb_validation_samples)
print("\n[INFO] accuracy on the test data set: {:.2f}% [{:.5f}]".format(accuracy * 100, loss))
| _____no_output_____ | MIT | CIFAR_10/Other_numpy/my_cifar_transferlearning-Copy1.ipynb | kaelgabriel/Neural_Networks_Pytorch |
Loading the dataset | # toggle the comment depending on whether you run on the client or the remote machine
data = np.load('/etc/jupyterhub/ia368z_2s2017/datasets/cifar10-redux.npz')
#data = np.load('../Task 5/cifar10-redux.npz')
X_train = data['X_train']
y_train = data['y_train']
X_test = data['X_test']
y_test = data['y_test']
X_train.dtype, y_train.dtype, X_test.dtype, y_test.dtype | _____no_output_____ | MIT | CIFAR_10/Other_numpy/my_cifar_transferlearning-Copy1.ipynb | kaelgabriel/Neural_Networks_Pytorch |
Splitting the original training set into training and validation subsets (note: `percent_factor = 0.85` below actually gives an 85/15 split, not 80/20) | p = np.random.permutation(len(X_train))
percent_factor=0.85
new_train_x = X_train[p]
new_train_y = y_train[p]
split = int(np.floor(len(new_train_x) * percent_factor))  # slice indices must be ints
new_X_train = new_train_x[:split]
new_y_train = new_train_y[:split]
new_X_val = new_train_x[split:]
new_y_val = new_train_y[split:]
print('X_train.shape',new_X_train.shape)
print('y_train.shape',new_y_train.shape)
print('X_val.shape',new_X_val.shape)
print('y_val.shape',new_y_val.shape)
print('y_test shape ',y_test.shape)
print('X_test.shape:',X_test.shape)
print('Number of distinct classes', len(np.unique(y_test)))
 | Number of distinct classes 3
| MIT | CIFAR_10/Other_numpy/my_cifar_transferlearning-Copy1.ipynb | kaelgabriel/Neural_Networks_Pytorch |
Normalizing the data | a=0
print(np.mean(X_train)) | 113.781868652
| MIT | CIFAR_10/Other_numpy/my_cifar_transferlearning-Copy1.ipynb | kaelgabriel/Neural_Networks_Pytorch |
Guaranteeing that it only runs once:
if (a == 0):
    X_test = X_test.astype('float32')
    new_X_train = new_X_train.astype('float32')
    new_X_val = new_X_val.astype('float32')
    new_X_val /= 255.
    new_X_train /= 255.
    X_test /= 255.
    a = 1
print(np.mean(new_X_train))
print(np.mean(new_X_val))
print(np.mean(X_test)) | from keras.utils import np_utils
## Transform the label vector into one-hot encoding format.
n_classes = 3
y_train_oh = np_utils.to_categorical(new_y_train-3, n_classes)
y_val_oh = np_utils.to_categorical(new_y_val-3, n_classes)
y_test_oh = np_utils.to_categorical(y_test-3, n_classes)
print(y_train_oh.shape)
print(y_val_oh.shape)
print(y_test_oh.shape)
| (1700, 3)
(300, 3)
(500, 3)
| MIT | CIFAR_10/Other_numpy/my_cifar_transferlearning-Copy1.ipynb | kaelgabriel/Neural_Networks_Pytorch |
Performing data augmentation | print(X_train.shape)
print(X_test.shape)
print('new x train shape', new_X_train.shape)
print('y train oh shape', y_train_oh.shape)
print('new x val shape', new_X_val.shape)
print('y val oh shape', y_val_oh.shape)
from keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array
nb_train_samples = new_train_x.shape[0]
nb_val_samples = new_X_val.shape[0]
print('nb val samples',nb_val_samples)
nb_test_samples = X_test.shape[0]
# dimensions of our images.
img_width, img_height = 32, 32
batch_size=100
# this is the augmentation configuration we will use for training
aug_datagen = ImageDataGenerator(
    rescale=1./255,        # always rescale
    shear_range=0.2,       # drawn uniformly between 0 and 0.2
    zoom_range=0.2,        # drawn between 0 and 0.2
    horizontal_flip=True)  # flipped with 50% probability
non_aug_datagen = ImageDataGenerator( rescale=1./255)
train_generator = aug_datagen.flow(
    x = new_X_train, y = y_train_oh,     # the training samples
    batch_size=batch_size, shuffle=False # SGD batch size
)
validation_generator = non_aug_datagen.flow(
    x = new_X_val, y = y_val_oh,         # the validation samples
    batch_size=batch_size, shuffle = False)
test_generator = non_aug_datagen.flow(
    x = X_test, y = y_test_oh,           # the test samples (the original comment mistakenly said "validation")
    batch_size=batch_size, shuffle = False) | _____no_output_____ | MIT | CIFAR_10/Other_numpy/my_cifar_transferlearning-Copy1.ipynb | kaelgabriel/Neural_Networks_Pytorch |
Training set:
samples_train = train_datagen.flow(new_X_train)
n_samples_train = nb_train_samples/batch_size
Test set:
samples_test = train_datagen.flow(X_test)
n_samples_test = nb_test_samples/batch_size
Validation set:
samples_val = train_datagen.flow(new_X_val)
n_samples_val = nb_val_samples/batch_size | n_classes = len(np.unique(y_test))
print(n_classes) | 3
| MIT | CIFAR_10/Other_numpy/my_cifar_transferlearning-Copy1.ipynb | kaelgabriel/Neural_Networks_Pytorch |
Transfer Learning Training: Loading VGG-16 | print(y_train_oh.shape)
from keras.applications.vgg16 import VGG16
modelvgg = VGG16(include_top=False, weights='imagenet',classes=y_train_oh.shape[1])
train_feature = modelvgg.predict_generator(generator=train_generator, steps=int(np.round(train_generator.n / batch_size)))
print(train_feature.shape, train_feature.dtype)
validation_features = modelvgg.predict_generator(generator = validation_generator, steps=int(np.round(validation_generator.n / batch_size)))
print(validation_features.shape, validation_features.dtype)
train_feature.shape | _____no_output_____ | MIT | CIFAR_10/Other_numpy/my_cifar_transferlearning-Copy1.ipynb | kaelgabriel/Neural_Networks_Pytorch |
topmodel.summary() modelvgg.summary() | train_feature.shape[1:]
train_feat = train_feature.reshape(1700,512)
print(train_feat.shape)
modelvgg.output
def model_build():
img_rows, img_cols = 32, 32 # Dimensões das imagens
#imagens com 3 canais e 32x32
input_shape = (3, img_rows, img_cols)
# Definindo a rede
model = Sequential()
#primeira conv
model.add(Conv2D(32, (3, 3),
input_shape=input_shape))
model.add(Activation('relu'))
#segunda conv
model.add(Conv2D(32,(3,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
# Aqui os features deixam de ser imagens
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(n_classes))
model.add(Activation('softmax'))
return model
model_week05 = model_build()
model_week05.load_weights('../my_cifar_dataplus_model_weights.h5')
print("done")
model_week05.summary()
print(model_week05.layers)
print(len(model_week05.layers))
weights10 = model_week05.layers[10].get_weights()
print(weights10[0].shape,weights10[1].shape)
weights7 = model_week05.layers[7].get_weights()
print(weights7[0].shape,weights7[1].shape)
w2, b2 = weights10
w1, b1 = weights7
topmodel = Sequential()
topmodel.add(layer=keras.layers.Flatten(input_shape=(1,1,512)))
# topmodel.add(layer=keras.layers.Dense(units=256, activation='relu', name='d256'))
topmodel.add(layer=keras.layers.Dense(units=128, name='d256',))
topmodel.add(Activation('relu'))
topmodel.add(layer=keras.layers.Dropout(rate=.5))
topmodel.add(layer=keras.layers.Dense(units=3, name='d3'))
topmodel.add(Activation('softmax'))
# topmodel.compile(optimizer=keras.optimizers.SGD(lr=.05, momentum=.9, nesterov=True),
topmodel.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
topmodel.summary()
print(topmodel.layers)
print(len(topmodel.layers))
# copy the pretrained dense weights into the matching layers by name
# (the original layers[20] index and the undefined `wei` were leftovers;
# this assumes the flattened feature sizes line up)
topmodel.get_layer('d256').set_weights([w1, b1])
topmodel.get_layer('d3').set_weights([w2, b2])
!ls ../ | cifar_redux_augmented_vgg.history utils week05
cifar_redux_augmented_vgg.model week02 week06
models week03
my_cifar_dataplus_model_weights.h5 week04
| MIT | CIFAR_10/Other_numpy/my_cifar_transferlearning-Copy1.ipynb | kaelgabriel/Neural_Networks_Pytorch |
w1, b1, w2, b2 = load_model('../my_cifar_dataplus_model_weights.h5').get_weights()
w1, b1, w2, b2 = load_model('../my_cifar_dataplus_model_weights').get_weights() | model2.load_weights('../my_cifar_dataplus_model_weights.h5')
| _____no_output_____ | MIT | CIFAR_10/Other_numpy/my_cifar_transferlearning-Copy1.ipynb | kaelgabriel/Neural_Networks_Pytorch |
Here the features stop being images:
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(n_classes))
model.add(Activation('softmax'))

topmodel = Sequential()
topmodel.add(layer=keras.layers.Flatten(input_shape=feat_train.shape[1:]))
topmodel.add(layer=keras.layers.Dense(units=256, activation='relu', name='d256'))
topmodel.add(layer=keras.layers.Dense(units=256, activation='relu', name='d256', input_shape=(1,1,512)))
topmodel.add(layer=keras.layers.Dropout(rate=.5))
topmodel.add(layer=keras.layers.Dense(units=3, activation='softmax', name='d3'))
topmodel.compile(optimizer=keras.optimizers.SGD(lr=.05, momentum=.9, nesterov=True),
topmodel.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

train_features = modelvgg.predict(new_X_train)
print('train_features shape and type', train_features.shape, train_features.dtype)
validation_features = modelvgg.predict(new_X_val)
print('validation_features shape and type', validation_features.shape, train_features.dtype)
test_features = modelvgg.predict(X_test)
print('test_features shape and type', test_features.shape, train_features.dtype) | # note: popping entries from model.layers does not rebuild the Keras graph,
# so predictions below still run through the full network
modelvgg.layers.pop(18)
modelvgg.layers.pop(17)
modelvgg.layers.pop(16)
modelvgg.layers.pop(15)
#modelvgg.summary()
train_features = modelvgg.predict(new_X_train)
print('train_features shape and type',train_features.shape,train_features.dtype)
validation_features = modelvgg.predict(new_X_val)
print('validation_features shape and type',validation_features.shape,train_features.dtype)
test_features = modelvgg.predict(X_test)
print('test_features shape and type',test_features.shape,train_features.dtype)
train_features.shape[1:]
model_name = '../cifar_redux_augmented_vgg'
modelVGG = Sequential()
modelVGG.add(Flatten(input_shape= train_features.shape[1:]))
modelVGG.add(Dense(120))
modelVGG.add(Activation('relu'))
modelVGG.add(Dropout(0.5))
modelVGG.add(Dense(3))
modelVGG.add(Activation('softmax'))
modelVGG.summary() | _____no_output_____ | MIT | CIFAR_10/Other_numpy/my_cifar_transferlearning-Copy1.ipynb | kaelgabriel/Neural_Networks_Pytorch |
Training | class MyCb(TrainingPlotter):
def on_epoch_end(self, epoch, logs={}):
super().on_epoch_end(epoch, logs)
def train_network(model, model_name, Xtra, ytra, Xval, yval,
opt='rmsprop', batch_size=100, nepochs=50, patience=50, reset=False, ploss=1.0):
do_plot = (ploss > 0.0)
model_fn = model_name + '.model'
if reset and os.path.isfile(model_fn):
os.unlink(model_name + '.model')
if not os.path.isfile(model_fn):
# initialize the optimizer and model
print("[INFO] compiling model...")
model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"])
# History, checkpoint, earlystop, plot losses:
cb = MyCb(n=1, filepath=model_name, patience=patience, plot_losses=do_plot)
else:
print("[INFO] loading model...")
model, cb = load_model_and_history(model_name)
cb.patience = patience
past_epochs = cb.get_nepochs()
tr_epochs = nepochs - past_epochs
if do_plot:
vv = 0
fig = plot.figure(figsize=(15,6))
plot.ylim(0.0, ploss)
plot.xlim(0, nepochs)
plot.grid(True)
else:
vv = 2
print("[INFO] training for {} epochs ...".format(tr_epochs))
try:
model.fit(Xtra, ytra, batch_size=batch_size, epochs=tr_epochs, verbose=vv,
validation_data=(Xval,yval), callbacks=[cb])
except KeyboardInterrupt:
pass
model, histo = load_model_and_history(model_name)
return model, cb
def test_network(model_name, Xtest, ytest, batch_size=40):
model, histo = load_model_and_history(model_name)
print('Model from epoch {}'.format(histo.best_epoch))
print("[INFO] evaluating in the test data set ...")
loss, accuracy = model.evaluate(Xtest, ytest, batch_size=batch_size, verbose=1)
print("\n[INFO] accuracy on the test data set: {:.2f}% [{:.5f}]".format(accuracy * 100, loss))
print('train_features.shape',train_features.shape)
print('validation_features.shape',validation_features.shape)
print('test_features.shape',test_features.shape)
fit_params = {
'opt': 'adam', # SGD(lr=0.01, momentum=0.9, nesterov=True),
'nepochs': 100,
'patience': 30,
'ploss': 1.5,
'reset': True,
}
train_network(modelVGG, model_name, train_features, y_train_oh, validation_features, y_val_oh, **fit_params);
test_network(model_name, test_features,y_test_oh,X_test.shape[0])
from keras.applications.vgg16 import VGG16
print("[INFO] creating model...")
#vgg = VGG16(include_top=False, weights='imagenet', input_shape=(img_height, img_width, 3))
vgg = VGG16(include_top=False, weights='imagenet')
vgg.summary() | _____no_output_____ | MIT | CIFAR_10/Other_numpy/my_cifar_transferlearning-Copy1.ipynb | kaelgabriel/Neural_Networks_Pytorch |
Building the neural network | print(train_features.shape)
print(new_X_train.shape)
img_height, img_width = new_X_train.shape[2],new_X_train.shape[3]
print(img_height,img_width)
!ls ..
from keras.models import Model
from keras.models import load_model
model_name = '../cifar10_vgg_finetune' # modelo da rede atual
top_model_name = '../cifar_redux_augmented_vgg'
nb_classes=3
def build_net(top_model_name):
from keras.applications.vgg16 import VGG16
print("[INFO] creating model...")
#vgg = VGG16(include_top=False, weights='imagenet', input_shape=(img_height, img_width, 3))
#vgg = VGG16(include_top=False, weights='imagenet', input_shape=(3,img_height, img_width))
vgg = VGG16(include_top=False, weights='imagenet', classes=nb_classes, pooling='max')
print(vgg.output)
# build a classifier model and put on top of the convolutional model
#x = Flatten()(vgg.output)
    x = Dense(120, activation='relu', name='dense1')(vgg.output)
    x = Dropout(0.5)(x)
    # 3-way softmax head to match the 3-class one-hot labels and the
    # categorical_crossentropy loss used below (the original relu Dense(3)
    # followed by a sigmoid Dense(1) head would break the 3-class training)
    x = Dense(3, activation='softmax', name='d1')(x)
#x = Dense(40, activation='relu', name='dense1')(vgg.output)
# x = Dropout(0.5)(x)
#x = Dense(120, activation='relu', name='dense2')(x)
#x = Dropout(0.2)(x)
#x = Dense(nb_classes, activation='softmax', name='dense3')(x)
#model = Model(inputs=vgg.input, outputs=x
model = Model(inputs=vgg.input, outputs=x)
print(model.layers)
print(len(model.layers)) # print('Model layers:')
# for i, layer in enumerate(model.layers):
# print(' {:2d} {:15s} {}'.format(i, layer.name, layer))
# modelo da rede densa treinada no notebook anterior
top_model_name = top_model_name
# Carrego os pesos treinados anteriormente
#w1, b1, w2, b2 = load_model(top_model_name).get_weights()
w1, b1, w2, b2 = modelVGG.get_weights()
print(w1.shape,b1.shape,w2.shape,b2.shape)
# Coloco nas camadas densas finais da rede
model.layers[20].set_weights([w1, b1])
model.layers[22].set_weights([w2, b2])
# Torno não-treináveis as primeiras 15 camadas
# da rede (os pesos não serão alterados)
for layer in model.layers[:15]:
layer.trainable = False
return model
model = build_net(top_model_name)
#model.summary()
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
batch_size = 40
print(new_X_train.shape, y_train_oh.shape,new_X_val.shape, y_val_oh.shape) | (1700, 3, 32, 32) (1700, 3) (300, 3, 32, 32) (300, 3)
| MIT | CIFAR_10/Other_numpy/my_cifar_transferlearning-Copy1.ipynb | kaelgabriel/Neural_Networks_Pytorch |
(1600, 32, 32, 3) (1600, 3) (400, 32, 32, 3) (400, 3) | h = model.fit(new_X_train.reshape(1700,3,32,32), y_train_oh,
validation_data=(new_X_val.reshape(300,3,32,32), y_val_oh),
batch_size=batch_size,
epochs=100,
)
# leftover from another experiment -- train_i/val_i and the callbacks below
# are not defined in this notebook, so this call is kept commented out:
# h = model.fit(X_train[train_i], y_train_oh[train_i],
#               validation_data=(X_train[val_i], y_train_oh[val_i]),
#               batch_size=batch_size,
#               epochs=400,
#               callbacks=[early_stopping, checkpointer, reduce_lr], verbose=1)
model_name = '../cifar10_vgg_finetune'
fit_params = {
'opt': 'adam', # SGD(lr=0.01, momentum=0.9, nesterov=True),
'nepochs': 100,
'patience': 30,
'ploss': 1.5,
'reset': True,
}
# the fine-tuning model takes raw images as input, not pre-extracted features
train_network(model, model_name, new_X_train.reshape(1700,3,32,32), y_train_oh,
              new_X_val.reshape(300,3,32,32), y_val_oh, **fit_params); | [INFO] compiling model...
[INFO] training for 100 epochs ...
| MIT | CIFAR_10/Other_numpy/my_cifar_transferlearning-Copy1.ipynb | kaelgabriel/Neural_Networks_Pytorch |
Data Distribution vs. Sampling Distribution: What You Need to Know This notebook accompanies the article [Data Distribution vs. Sampling Distribution: What You Need to Know](https://www.ealizadeh.com/blog/statistics-data-vs-sampling-distribution/). Subscribe to **[my mailing list](https://www.ealizadeh.com/subscribe/)** to receive my posts on statistics, machine learning, and interesting Python libraries and tips & tricks. You can also follow me on **[Medium](https://medium.com/@ealizadeh)**, **[LinkedIn](https://www.linkedin.com/in/alizadehesmaeil/)**, and **[Twitter](https://twitter.com/es_alizadeh)**. Copyright © 2021 [Esmaeil Alizadeh](https://ealizadeh.com) | from IPython.display import Image
Image("https://www.ealizadeh.com/wp-content/uploads/2021/01/data_dist_sampling_dist_featured_image.png", width=1200) | _____no_output_____ | MIT | notebooks/data_vs_sampling_distributions.ipynb | e-alizadeh/medium |
---

It is important to distinguish between the data distribution (aka population distribution) and the sampling distribution. The distinction is critical when working with the central limit theorem or other concepts like the standard deviation and standard error.

In this post we will go over the above concepts as well as bootstrapping to estimate the sampling distribution. In particular, we will cover the following:
- Data distribution (aka population distribution)
- Sampling distribution
- Central limit theorem (CLT)
- Standard error and its relation with the standard deviation
- Bootstrapping

---

Data Distribution

Much of statistics deals with inferring from samples drawn from a larger population. Hence, we need to distinguish between the analysis done on the original data as opposed to analyzing its samples. First, let's go over the definition of the data distribution:

💡 **Data distribution:** *The frequency distribution of individual data points in the original dataset.*

Generate Data

Let's first generate random skewed data that will result in a non-normal (non-Gaussian) data distribution. The reason behind generating non-normal data is to better illustrate the relation between the data distribution and the sampling distribution. So, let's import the Python plotting packages and generate right-skewed data. | # Plotting packages and initial setup
import seaborn as sns
sns.set_theme(palette="pastel")
sns.set_style("white")
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams["figure.dpi"] = 150
savefig_options = dict(format="png", dpi=150, bbox_inches="tight")
from scipy.stats import skewnorm
from sklearn.preprocessing import MinMaxScaler
num_data_points = 10000
max_value = 100
skewness = 15 # Positive values are right-skewed
skewed_random_data = skewnorm.rvs(a=skewness, loc=max_value, size=num_data_points, random_state=1)
skewed_data_scaled = MinMaxScaler().fit_transform(skewed_random_data.reshape(-1, 1)) | _____no_output_____ | MIT | notebooks/data_vs_sampling_distributions.ipynb | e-alizadeh/medium |
Plotting the data distribution | fig, ax = plt.subplots(figsize=(10, 6))
ax.set_title("Data Distribution", fontsize=24, fontweight="bold")
sns.histplot(skewed_data_scaled, bins=30, stat="density", kde=True, legend=False, ax=ax)
# fig.savefig("original_skewed_data_distribution.png", **savefig_options) | _____no_output_____ | MIT | notebooks/data_vs_sampling_distributions.ipynb | e-alizadeh/medium |
Sampling Distribution

In the sampling distribution, you draw samples from the dataset and compute a statistic like the mean. It's very important to differentiate between the data distribution and the sampling distribution, as most confusion comes from the operation done on either the original dataset or its (re)samples.

💡 **Sampling distribution:** *The frequency distribution of a sample statistic (aka metric) over many samples drawn from the dataset$^{[1]}$. Or, to put it simply, the distribution of sample statistics is called the sampling distribution.*

The algorithm to obtain the sampling distribution is as follows:
1. Draw a sample from the dataset.
2. Compute a statistic/metric of the drawn sample in Step 1 and save it.
3. Repeat Steps 1 and 2 many times.
4. Plot the distribution (histogram) of the computed statistic. | import numpy as np
import random
sample_size = 50
sample_means = []
random.seed(1) # Setting the seed for reproducibility of the result
for _ in range(2000):
sample = random.sample(skewed_data_scaled.tolist(), sample_size)
sample_means.append(np.mean(sample))
print(
f"Mean: {np.mean(sample_means).round(5)}"
)
fig, ax = plt.subplots(figsize=(10, 6))
ax.set_title("Sampling Distribution", fontsize=24, fontweight="bold")
sns.histplot(sample_means, bins=30, stat="density", kde=True, legend=False)
# fig.savefig("sampling_distribution.png", **savefig_options) | _____no_output_____ | MIT | notebooks/data_vs_sampling_distributions.ipynb | e-alizadeh/medium |
The above sampling distribution is basically the histogram of the mean of each drawn sample (above, we draw samples of 50 elements over 2000 iterations). The mean of the above sampling distribution is around 0.23, as can be noted from computing the mean of all sample means.

⚠️ *Do not confuse the sampling distribution with the sample distribution. The sampling distribution considers the distribution of sample statistics (e.g. mean), whereas the sample distribution is basically the distribution of the sample taken from the population.*

Central Limit Theorem (CLT)

💡 **Central Limit Theorem:** *As the sample size gets larger, the sampling distribution tends to be more like a normal distribution (bell-curve shape).*

*In CLT, we analyze the sampling distribution and not a data distribution, an important distinction to be made.* CLT is popular in hypothesis testing and confidence interval analysis, and it's important to be aware of this concept, even though with the use of the bootstrap in data science this theorem is less talked about or considered in practice$^{[1]}$. More on bootstrapping is provided later in the post.

Standard Error (SE)

The [standard error](https://en.wikipedia.org/wiki/Standard_error) is a metric to describe *the variability of a statistic in the sampling distribution*. We can compute the standard error as follows:

$$ \text{Standard Error} = SE = \frac{s}{\sqrt{n}} $$

where $s$ denotes the standard deviation of the sample values and $n$ denotes the sample size. It can be seen from the formula that *as the sample size increases, the SE decreases*. We can estimate the standard error using the following approach$^{[1]}$:
1. Draw a new sample from a dataset.
2. Compute a statistic/metric (e.g., mean) of the drawn sample in Step 1 and save it.
3. Repeat Steps 1 and 2 several times.
4. An estimate of the standard error is obtained by computing the standard deviation of the previous steps' statistics.

(A sketch of this procedure follows the bootstrap cell below.) While the above approach can be used to estimate the standard error, we can use bootstrapping instead, which is preferable. I will go over that in the next section.

⚠️ *Do not confuse the standard error with the standard deviation. The standard deviation captures the variability of the individual data points (how spread the data is), unlike the standard error that captures a sample statistic's variability.*

Bootstrapping

Bootstrapping is an easy way of estimating the sampling distribution by randomly drawing samples from the population (with replacement) and computing each resample's statistic. Bootstrapping does not depend on the CLT or other assumptions about the distribution, and it is the standard way of estimating the SE$^{[1]}$.

Luckily, we can use the [`bootstrap()`](https://rasbt.github.io/mlxtend/user_guide/evaluate/bootstrap/) functionality from the [MLxtend library](https://rasbt.github.io/mlxtend/) (you can read [my post](https://www.ealizadeh.com/blog/mlxtend-library-for-data-science/) on the MLxtend library covering other interesting functionalities). This function also provides the flexibility to pass a custom sample statistic. | from mlxtend.evaluate import bootstrap
avg, std_err, ci_bounds = bootstrap(
skewed_data_scaled,
num_rounds=1000,
func=np.mean, # A function to compute a sample statistic can be passed here
ci=0.95,
seed=123 # Setting the seed for reproducibility of the result
)
print(
f"Mean: {avg.round(5)} \n"
f"Standard Error: +/- {std_err.round(5)} \n"
f"CI95: [{ci_bounds[0].round(5)}, {ci_bounds[1].round(5)}]"
) | Mean: 0.23293
Standard Error: +/- 0.00144
CI95: [0.23023, 0.23601]
| MIT | notebooks/data_vs_sampling_distributions.ipynb | e-alizadeh/medium |
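To make the four-step standard error procedure from above concrete, a minimal sketch reusing `skewed_data_scaled` and the sample size of 50 from the earlier cells (all other names are assumptions):

```python
import random
import numpy as np

random.seed(1)
sample_stats = []
for _ in range(1000):
    resample = random.sample(skewed_data_scaled.tolist(), 50)  # Step 1: draw a sample
    sample_stats.append(np.mean(resample))                     # Step 2: compute and save the statistic
# Steps 3-4: after repeating, the SE estimate is the standard deviation of the saved statistics
manual_se = np.std(sample_stats, ddof=1)

# compare with the closed-form SE = s / sqrt(n), computed from a single sample
one_sample = random.sample(skewed_data_scaled.tolist(), 50)
formula_se = np.std(one_sample, ddof=1) / np.sqrt(50)
print(f"Resampling estimate: {manual_se:.5f}, formula estimate: {formula_se:.5f}")
```

Both numbers should land close to the bootstrap standard error reported above.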
Test "best of two" classifier This notebook test a classifier that operates in two layers:- First we use a SVM classifier to label utterances with high degree of certainty.- Afterwards we use heuristics to complete the labeling | import os
import sys
import pandas as pd
import numpy as np
import random
import pickle
import matplotlib.pyplot as plt
root_path = os.path.dirname(os.path.abspath(os.getcwd()))
sys.path.append(root_path)
from sklearn.svm import SVC
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from src import phase_classification as pc
data_path = os.path.join(root_path,'data')
tables_path = os.path.join(data_path,'tables')
results_path = os.path.join(root_path,'results')
output_path =os.path.join(results_path,'tables')
import importlib
importlib.reload(pc)
WITH_STEMMING = True
#REMOVE_STOPWORDS = True
SEED = 10
NUM_TOPICS = 60
random.seed(SEED)
t = 0
CLASS_W = False
test_i = '[test1]'
file_name = test_i+'IBL_topic_distribution_by_utterance_before_after_{}_{}.xlsx'.format(WITH_STEMMING,NUM_TOPICS)
df_data = pd.read_excel(os.path.join(tables_path,'test','before_after',file_name))
the_keys = list(set(df_data['phase']))
total_samples = 0
class_samples = {}
for key in the_keys:
n = list(df_data.phase.values).count(key)
#print("key {}, total {}".format(key,n))
total_samples += n
class_samples[key] = n
print(total_samples)
for key in the_keys:
print("key {}, samples: {}, prop: {}".format(key,class_samples[key],round(class_samples[key]*1.0/total_samples,2)))
filter_rows = list(range(180))+[187,188]
row_label = 180
df_data.head(2)
dfs_all,_ = pc.split_df_discussions(df_data,.0,SEED)
X_all,y_all_1 = pc.get_joined_data_from_df(dfs_all,filter_rows,row_label)
CLASS_W
name_classifier = 'classifier_svm_linear_combination_svc_ba_cw_{}.pickle'.format(CLASS_W)
with open(os.path.join(data_path,name_classifier),'rb') as f:
svc = pickle.load(f)
coeff = pickle.load(f)
t = pickle.load(f)
#t = 0.59
output_first_layer_1 = pc.first_layer_classifier(X_all,t,svc)
comparison = list(zip(output_first_layer_1,y_all_1))
df_data['first_layer'] = output_first_layer_1
second_layer_1 = pc.second_layer_combination_test(X_all,coeff,svc)
second_layer_1.count(-1)
df_data['second_layer'] = second_layer_1
df_data.to_excel(os.path.join(output_path,'[second_layer]'+file_name))
labels = ["Phase {}".format(i) for i in range(1,6)]
df = pd.DataFrame(confusion_matrix(y_all_1, second_layer_1),columns=["Predicted {}".format(i) for i in labels])
df.index = labels
print(classification_report(y_all_1, second_layer_1))
df
print('Accuracy of SVM classifier on training set: {:.2f}'
.format(svc.score(X_all, y_all_1))) | Accuracy of SVM classifier on training set: 0.30
| MIT | notebooks/3-. Check combination 2 layers classifier with test set [utterance level][180t].ipynb | cinai/classification_ibl |
Test 2 | test_i = '[test2]'
file_name = test_i+'IBL_topic_distribution_by_utterance_before_after_{}_{}.xlsx'.format(WITH_STEMMING,NUM_TOPICS)
df_data = pd.read_excel(os.path.join(tables_path,'test','before_after',file_name))
the_keys = list(set(df_data['phase']))
total_samples = 0
class_samples = {}
for key in the_keys:
n = list(df_data.phase.values).count(key)
#print("key {}, total {}".format(key,n))
total_samples += n
class_samples[key] = n
print(total_samples)
for key in the_keys:
print("key {}, samples: {}, prop: {}".format(key,class_samples[key],round(class_samples[key]*1.0/total_samples,2)))
dfs_all,_ = pc.split_df_discussions(df_data,.0,SEED)
X_all,y_all_2 = pc.get_joined_data_from_df(dfs_all,filter_rows,row_label)
output_first_layer_2 = pc.first_layer_classifier(X_all,t,svc)  # pass the loaded classifier, as in test 1, not its file name
comparison = list(zip(output_first_layer_2,y_all_2))
df_data['first_layer'] = output_first_layer_2
second_layer_2 = pc.second_layer_combination_test(X_all,coeff,svc)
df_data['second_layer'] = second_layer_2
df_data.to_excel(os.path.join(output_path,'[second_layer]'+file_name))
second_layer_2.count(-1)
labels = ["Phase {}".format(i) for i in range(1,6)]
df = pd.DataFrame(confusion_matrix(y_all_2, second_layer_2),columns=["Predicted {}".format(i) for i in labels])
df.index = labels
print(classification_report(y_all_2, second_layer_2))
df
print('Accuracy of SVM classifier on training set: {:.2f}'
.format(svc.score(X_all, y_all_2)))
y_all = y_all_1+y_all_2
pred = second_layer_1 + second_layer_2
df = pd.DataFrame(confusion_matrix(y_all, pred),columns=["Predicted {}".format(i) for i in labels])
df.index = labels
print(classification_report(y_all, pred))
df
print("Accuracy {0:.3f}".format(np.sum(confusion_matrix(y_all, pred).diagonal())/len(y_all)))
bs = [pc.unit_vector(x) for x in y_all]
y_pred = [pc.unit_vector(x) for x in pred]
np.sqrt(np.sum([np.square(y_pred[i]-bs[i]) for i in range(len(y_all))])/(len(y_all)*2)) | Accuracy 0.466
| MIT | notebooks/3-. Check combination 2 layers classifier with test set [utterance level][180t].ipynb | cinai/classification_ibl |
Classifying Digits with K-Nearest-Neighbors (KNN) This is a very simple implementation of classifying images using the k-nearest-neighbors algorithm. The accuracy is pretty good for how simple the algorithm is. The parameters can be tinkered with but at the time of writing I am using k = 5, training data size = 10000, testing data size = 1000. Let's set these parameters, read in the data, then view one of the images and the label associated with it. Afterwards I'll explain the algorithm. | k = 5
# imports used throughout this notebook (missing from the dump above)
import torch
import torchvision
import numpy as np
import matplotlib.pyplot as plt

batch_size_train = 10000
batch_size_test = 1000
train_mnist = torchvision.datasets.MNIST('C:/projects/summer2020/vision/digits/', train=True, download=True,
transform=torchvision.transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(train_mnist, batch_size=batch_size_train, shuffle=True)
test_mnist = torchvision.datasets.MNIST('C:/projects/summer2020/vision/digits/', train=False, download=True,
transform=torchvision.transforms.ToTensor())
test_loader = torch.utils.data.DataLoader(test_mnist,batch_size=batch_size_test, shuffle=True)
train_set = enumerate(train_loader)
_, (train_imgs, train_targets) = next(train_set)
test_set = enumerate(test_loader)
_, (test_imgs, test_targets) = next(test_set)
plt.imshow(train_imgs[0][0], cmap='gray', interpolation='none')
plt.title("Ground Truth: {}".format(train_targets[0]))
plt.xticks([])
plt.yticks([]) | _____no_output_____ | MIT | digits_knn.ipynb | sasiegel/mnist-classification |
The k-nearest-neighbors algorithm is not very efficient, and my implementation is even less efficient: I was aiming for simplicity over efficiency. We loop through each test image and find the distance to every training image. Distance is measured as Euclidean (p=2). We take the k nearest images and record the ground truth digit corresponding to each. The predicted label is based on the majority of labels from the k nearest images. The "majority" I chose to use is the median, which is very basic in that it simply takes the central value/label; its effectiveness depends on our value of k (a vectorized, mode-based variant is sketched after the results below). We compare the predictions with the ground truth of the test set, which produces our prediction accuracy. | n_test = test_imgs.shape[0]
n_train = train_imgs.shape[0]
pred_test_targets = torch.zeros_like(test_targets)
for i in range(n_test):
test_img = test_imgs[i]
distances = [torch.dist(test_img, train_imgs[j], p=2) for j in range(n_train)]
nearest_indices = np.array(distances).argsort()[:5]
pred_test_targets[i] = train_targets[nearest_indices].median()
accuracy = np.divide(sum(pred_test_targets == test_targets), len(test_targets))
print('Prediction accuracy: {}'.format(accuracy)) | Prediction accuracy: 0.959
| MIT | digits_knn.ipynb | sasiegel/mnist-classification |
Distributed data parallel BERT training with TensorFlow2 and SMDataParallel

SMDataParallel is a new capability in Amazon SageMaker to train deep learning models faster and cheaper. SMDataParallel is a distributed data parallel training framework for TensorFlow, PyTorch, and MXNet.

This notebook example shows how to use SMDataParallel with TensorFlow (version 2.3.1) on [Amazon SageMaker](https://aws.amazon.com/sagemaker/) to train a BERT model using an [Amazon FSx for Lustre file-system](https://aws.amazon.com/fsx/lustre/) as the data source.

The outline of steps is as follows:
1. Stage the dataset in [Amazon S3](https://aws.amazon.com/s3/). The original dataset for BERT pretraining consists of text passages from BooksCorpus (800M words) (Zhu et al. 2015) and English Wikipedia (2,500M words). Please follow the original guidelines by NVIDIA to prepare the training data in hdf5 format: https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/BERT/README.md#getting-the-data (a hedged staging sketch follows the initialization cell below)
2. Create an Amazon FSx Lustre file-system and import data into the file-system from S3
3. Build the Docker training image and push it to [Amazon ECR](https://aws.amazon.com/ecr/)
4. Configure data input channels for SageMaker
5. Configure hyperparameters
6. Define training metrics
7. Define the training job, set the distribution strategy to SMDataParallel, and start training

**NOTE:** With a large training dataset, we recommend using [Amazon FSx](https://aws.amazon.com/fsx/) as the input filesystem for the SageMaker training job. FSx file input to SageMaker significantly cuts down training start-up time because it avoids downloading the training data each time you start the training job (as done with S3 input) and provides good data read throughput.

**NOTE:** This example requires SageMaker Python SDK v2.X.

Amazon SageMaker Initialization

Initialize the notebook instance. Get the AWS region and the SageMaker execution role: the IAM role ARN used to give training and hosting access to your data. See [Amazon SageMaker Roles](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) for how to create these. Note: if more than one role is required for notebook instances, training, and/or hosting, please replace sagemaker.get_execution_role() with the appropriate full IAM role ARN string(s). As described above, since we will be using FSx, please make sure to attach the `FSx Access` permission to this IAM role. | %%time
! python3 -m pip install --upgrade sagemaker
import sagemaker
from sagemaker import get_execution_role
from sagemaker.estimator import Estimator
import boto3
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
role = get_execution_role() # provide a pre-existing role ARN as an alternative to creating a new role
print(f'SageMaker Execution Role:{role}')
client = boto3.client('sts')
account = client.get_caller_identity()['Account']
print(f'AWS account:{account}')
session = boto3.session.Session()
region = session.region_name
print(f'AWS region:{region}') | SageMaker Execution Role:arn:aws:iam::835319576252:role/service-role/AmazonSageMaker-ExecutionRole-20191006T135881
AWS account:835319576252
AWS region:us-east-1
| Apache-2.0 | 08_optimize/distributed-training/tensorflow/data_parallel/bert/tensorflow2_smdataparallel_bert_demo.ipynb | MarcusFra/workshop |
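For step 1 of the outline (staging the prepared hdf5 dataset in S3), a hedged sketch; the local directory and S3 prefix are placeholders, and this assumes the NVIDIA preprocessing above was run locally:

```python
# copy the locally prepared BERT training data to the default SageMaker bucket (placeholder paths)
!aws s3 cp --recursive ./bert_data s3://{bucket}/bert_data/
```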
Prepare SageMaker Training Images

1. SageMaker by default uses the latest [Amazon Deep Learning Container Images (DLC)](https://github.com/aws/deep-learning-containers/blob/master/available_images.md) TensorFlow training image. In this step, we use it as a base image and install additional dependencies required for training the BERT model.
2. In the GitHub repository https://github.com/HerringForks/DeepLearningExamples.git we have made the TensorFlow2-SMDataParallel BERT training script available for your use. This repository will be cloned into the training image for running the model training.

Build and Push Docker Image to ECR

Run the command below to build the docker image and push it to ECR. | image = "tf2-smdataparallel-bert-sagemaker" # Example: tf2-smdataparallel-bert-sagemaker
tag = "latest" # Example: latest
!pygmentize ./Dockerfile
!pygmentize ./build_and_push.sh
%%time
! chmod +x build_and_push.sh; bash build_and_push.sh {region} {image} {tag} | WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Sending build context to Docker daemon 12.35MB
Step 1/3 : ARG region
Step 2/3 : FROM 763104351884.dkr.ecr.us-west-2.amazonaws.com/tensorflow-training:2.3.1-gpu-py37-cu110-ubuntu18.04
---> 73f448953d3a
Step 3/3 : RUN pip --no-cache-dir --no-cache install scikit-learn==0.23.1 wandb==0.9.1 tensorflow-addons colorama==0.4.3 pandas apache_beam pyarrow==0.16 git+https://github.com/HerringForks/transformers.git@master git+https://github.com/huggingface/nlp.git@703b761
---> Using cache
---> 24901ecc9de0
Successfully built 24901ecc9de0
Successfully tagged tf2-smdataparallel-bert-sagemaker:latest
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
The push refers to repository [835319576252.dkr.ecr.us-east-1.amazonaws.com/tf2-smdataparallel-bert-sagemaker]
latest: digest: sha256:a0e36b294f1909845a48d8e14725b922a137669bcb13c19e2f1029381f3c216d size: 12499
Amazon ECR URI: 835319576252.dkr.ecr.us-east-1.amazonaws.com/tf2-smdataparallel-bert-sagemaker:latest
CPU times: user 88.4 ms, sys: 18.1 ms, total: 106 ms
Wall time: 8.34 s
| Apache-2.0 | 08_optimize/distributed-training/tensorflow/data_parallel/bert/tensorflow2_smdataparallel_bert_demo.ipynb | MarcusFra/workshop |
Preparing FSx Input for SageMaker

1. Download and prepare your training dataset on S3.
2. Follow the steps listed here to create an FSx file-system linked with the S3 bucket holding your training data: https://docs.aws.amazon.com/fsx/latest/LustreGuide/create-fs-linked-data-repo.html. Make sure to add an endpoint to your VPC allowing S3 access (a hedged boto3 sketch of this step is shown after the estimator cell below).
3. Follow the steps listed here to configure your SageMaker training job to use FSx: https://aws.amazon.com/blogs/machine-learning/speed-up-training-on-amazon-sagemaker-using-amazon-efs-or-amazon-fsx-for-lustre-file-systems/

Important Caveats

1. You need to use the same `subnet`, `vpc`, and `security group` used with FSx when launching the SageMaker notebook instance. The same configurations will be used by your SageMaker training job.
2. Make sure you set appropriate inbound/outbound rules in the `security group`. Specifically, opening up these ports is necessary for SageMaker to access the FSx filesystem in the training job: https://docs.aws.amazon.com/fsx/latest/LustreGuide/limit-access-security-groups.html
3. Make sure the `SageMaker IAM Role` used to launch this SageMaker training job has access to `AmazonFSx`.

SageMaker TensorFlow Estimator function options

In the following code block, you can update the estimator function to use a different instance type, instance count, and distribution strategy. You're also passing in the training script you reviewed in the previous cell.

**Instance types**

SMDataParallel supports model training on SageMaker with the following instance types only:
1. ml.p3.16xlarge
1. ml.p3dn.24xlarge [Recommended]
1. ml.p4d.24xlarge [Recommended]

**Instance count**

To get the best performance and the most out of SMDataParallel, you should use at least 2 instances, but you can also use 1 for testing this example.

**Distribution strategy**

Note that to use DDP mode, you update the `distribution` strategy and set it to use `smdistributed dataparallel`.

Training script

In the GitHub repository https://github.com/HerringForks/deep-learning-models.git we have made a reference TensorFlow-SMDataParallel BERT training script available for your use. Clone the repository. | # Clone herring forks repository for reference implementation BERT with TensorFlow2-SMDataParallel
!rm -rf deep-learning-models
!git clone --recursive https://github.com/HerringForks/deep-learning-models.git
import boto3
import sagemaker
sm = boto3.client('sagemaker')
notebook_instance_name = sm.list_notebook_instances()['NotebookInstances'][3]['NotebookInstanceName']
print(notebook_instance_name)
if notebook_instance_name != 'dsoaws':
print('****** ERROR: MUST FIND THE CORRECT NOTEBOOK ******')
exit()
notebook_instance = sm.describe_notebook_instance(NotebookInstanceName=notebook_instance_name)
notebook_instance
security_group_id = notebook_instance['SecurityGroups'][0]
print(security_group_id)
subnet_id = notebook_instance['SubnetId']
print(subnet_id)
from sagemaker.tensorflow import TensorFlow
print(account)
print(region)
print(image)
print(tag)
instance_type = "ml.p3dn.24xlarge" # Other supported instance type: ml.p3.16xlarge, ml.p4d.24xlarge
instance_count = 2 # You can use 2, 4, 8 etc.
docker_image = f"{account}.dkr.ecr.{region}.amazonaws.com/{image}:{tag}" # YOUR_ECR_IMAGE_BUILT_WITH_ABOVE_DOCKER_FILE
username = 'AWS'
subnets = [subnet_id] # Should be same as Subnet used for FSx. Example: subnet-0f9XXXX
security_group_ids = [security_group_id] # Should be same as Security group used for FSx. sg-03ZZZZZZ
job_name = 'smdataparallel-bert-tf2-fsx-2p3dn' # This job name is used as prefix to the sagemaker training job. Makes it easy for your look for your training job in SageMaker Training job console.
# TODO: Copy data to FSx/S3
!pip install datasets
# For loading datasets
from datasets import list_datasets, load_dataset
# To see all available dataset names
print(list_datasets())
# To load a dataset
wiki = load_dataset("wikipedia", "20200501.en", split='train')
file_system_id = '<FSX_ID>' # FSx file system ID with your training dataset. Example: 'fs-0bYYYYYY'
SM_DATA_ROOT = '/opt/ml/input/data/train'
hyperparameters={
"train_dir": '/'.join([SM_DATA_ROOT, 'tfrecords/train/max_seq_len_128_max_predictions_per_seq_20_masked_lm_prob_15']),
"val_dir": '/'.join([SM_DATA_ROOT, 'tfrecords/validation/max_seq_len_128_max_predictions_per_seq_20_masked_lm_prob_15']),
"log_dir": '/'.join([SM_DATA_ROOT, 'checkpoints/bert/logs']),
"checkpoint_dir": '/'.join([SM_DATA_ROOT, 'checkpoints/bert']),
"load_from": "scratch",
"model_type": "bert",
"model_size": "large",
"per_gpu_batch_size": 64,
"max_seq_length": 128,
"max_predictions_per_seq": 20,
"optimizer": "lamb",
"learning_rate": 0.005,
"end_learning_rate": 0.0003,
"hidden_dropout_prob": 0.1,
"attention_probs_dropout_prob": 0.1,
"gradient_accumulation_steps": 1,
"learning_rate_decay_power": 0.5,
"warmup_steps": 2812,
"total_steps": 2000,
"log_frequency": 10,
"run_name" : job_name,
"squad_frequency": 0
}
estimator = TensorFlow(entry_point='albert/run_pretraining.py',
role=role,
image_uri=docker_image,
source_dir='deep-learning-models/models/nlp',
framework_version='2.3.1',
py_version='py3',
instance_count=instance_count,
instance_type=instance_type,
sagemaker_session=sagemaker_session,
subnets=subnets,
hyperparameters=hyperparameters,
security_group_ids=security_group_ids,
debugger_hook_config=False,
# Training using SMDataParallel Distributed Training Framework
distribution={'smdistributed':{
'dataparallel':{
'enabled': True
}
}
}
) | _____no_output_____ | Apache-2.0 | 08_optimize/distributed-training/tensorflow/data_parallel/bert/tensorflow2_smdataparallel_bert_demo.ipynb | MarcusFra/workshop |
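As referenced in step 2 of the FSx preparation above, a hedged boto3 sketch of creating an S3-linked Lustre file-system. The bucket path and storage capacity are placeholder assumptions, and the console walkthrough linked earlier remains the documented path:

```python
import boto3

fsx_client = boto3.client('fsx')
# create a Lustre file-system that imports training data from S3 (placeholder bucket/prefix)
response = fsx_client.create_file_system(
    FileSystemType='LUSTRE',
    StorageCapacity=1200,                  # GiB; adjust to your dataset size
    SubnetIds=[subnet_id],                 # same subnet as this notebook and the training job
    SecurityGroupIds=[security_group_id],  # same security group as above
    LustreConfiguration={'ImportPath': 's3://your-bucket/bert-training-data'},
)
print(response['FileSystem']['FileSystemId'])  # use this value as file_system_id below
```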
Configure FSx Input for the SageMaker Training Job | from sagemaker.inputs import FileSystemInput
# YOUR_MOUNT_PATH_FOR_TRAINING_DATA -- NOTE: '/fsx/' will be the root mount path. Example: '/fsx/albert'
file_system_directory_path='/fsx/'
file_system_access_mode='rw'
file_system_type='FSxLustre'
train_fs = FileSystemInput(file_system_id=file_system_id,
file_system_type=file_system_type,
directory_path=file_system_directory_path,
file_system_access_mode=file_system_access_mode)
data_channels = {'train': train_fs}
# Submit SageMaker training job
estimator.fit(inputs=data_channels, job_name=job_name) | _____no_output_____ | Apache-2.0 | 08_optimize/distributed-training/tensorflow/data_parallel/bert/tensorflow2_smdataparallel_bert_demo.ipynb | MarcusFra/workshop |
Continuous Control

---

In this notebook I will implement the distributed distributional deep deterministic policy gradients (D4PG) algorithm. Different algorithms can also be used to solve this problem, such as:
1 - Deep deterministic policy gradients (DDPG)
2 - Proximal policy optimization (PPO)
3 - Asynchronous Advantage Actor-Critic (A3C)
4 - Trust Region Policy Optimization (TRPO)
and many more.

1. Start the Environment

The environments corresponding to both versions of the environment are already saved in the working directory and can be accessed at the file paths provided below. Please select one of the two options below for loading the environment. Note: This implementation applies to the second option, where the environment consists of 20 agents. | import random
import time
import torch
import torch.nn as nn
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
%matplotlib inline
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
from unityagents import UnityEnvironment
from d4pg_agent import Agent
# select this option to load version 1 (with a single agent) of the environment
#env = UnityEnvironment(file_name='Reacher_v1_Windows_x86_64/Reacher.exe')
# select this option to load version 2 (with 20 agents) of the environment
env = UnityEnvironment(file_name='Reacher_v2_Windows_x86_64/Reacher.exe') | INFO:unityagents:
'Academy' started successfully!
Unity Academy name: Academy
Number of Brains: 1
Number of External Brains : 1
Lesson number : 0
Reset Parameters :
goal_speed -> 1.0
goal_size -> 5.0
Unity brain name: ReacherBrain
Number of Visual Observations (per agent): 0
Vector Observation space type: continuous
Vector Observation space size (per agent): 33
Number of stacked Vector Observation: 1
Vector Action space type: continuous
Vector Action space size (per agent): 4
Vector Action descriptions: , , ,
| MIT | Project-2_Continuous-Control/Continuous-Control/Continuous_Control.ipynb | Mohammedabdalqader/DRL |
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python. | # get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name] | _____no_output_____ | MIT | Project-2_Continuous-Control/Continuous-Control/Continuous_Control.ipynb | Mohammedabdalqader/DRL |
2. Examine the State and Action SpacesRun the code cell below to print some information about the environment. | # reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0]) | Number of agents: 20
Size of each action: 4
There are 20 agents. Each observes a state with length: 33
The state for the first agent looks like: [ 0.00000000e+00 -4.00000000e+00 0.00000000e+00 1.00000000e+00
-0.00000000e+00 -0.00000000e+00 -4.37113883e-08 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 -1.00000000e+01 0.00000000e+00
1.00000000e+00 -0.00000000e+00 -0.00000000e+00 -4.37113883e-08
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 5.75471878e+00 -1.00000000e+00
5.55726624e+00 0.00000000e+00 1.00000000e+00 0.00000000e+00
-1.68164849e-01]
| MIT | Project-2_Continuous-Control/Continuous-Control/Continuous_Control.ipynb | Mohammedabdalqader/DRL |
3. Take Random Actions in the Environment

In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment. Note that **in this coding environment, you will not be able to watch the agents while they are training**, and you should set `train_mode=True` to restart the environment. | for i in range(10):
env_info = env.reset(train_mode=True)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
env_info = env.step(actions)[brain_name] # send all actions to tne environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores))) | _____no_output_____ | MIT | Project-2_Continuous-Control/Continuous-Control/Continuous_Control.ipynb | Mohammedabdalqader/DRL |
4. Implementation

Now I will implement the D4PG algorithm. | agent = Agent(state_size=state_size, action_size=action_size, num_agents=num_agents, seed=7)
# Training the agent over a number of episodes until we reach the desired average reward, which is > 30
def d4pg(n_episodes=2000):
scores = []
scores_deque = deque(maxlen=100)
rolling_average_score = []
for i_episode in range(1, n_episodes+1):
env_info = env.reset(train_mode=True)[brain_name] # reset the environment
state = env_info.vector_observations # get the current state (for each agent)
agent.reset()
score = np.zeros(num_agents)
for timestep in range(1000):
action = agent.act(state)
env_info = env.step(action)[brain_name] # send all actions to the environment
next_state = env_info.vector_observations # get next state (for each agent)
reward = env_info.rewards # get reward (for each agent)
done = env_info.local_done # to see if episode finished
score += reward
agent.step(state, action, reward, next_state, done)
state = next_state
if np.any(done): # see if any episode finished
break
score = np.mean(score)
scores_deque.append(score)
scores.append(score)
rolling_average_score.append(np.mean(scores_deque))
print('\rEpisode {}\tAverage Score: {:.2f}\tScore: {:.2f}'.format(i_episode, np.mean(scores_deque), score), end="")
if i_episode % 10 == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
        if np.mean(scores_deque) > 30 and len(scores_deque) >= 100:  # solved: 100-episode average above 30
print('Target average reward achieved!')
torch.save(agent.actor_local.state_dict(), 'checkpoint_actor_local.pth') # save local actor
torch.save(agent.critic_local.state_dict(), 'checkpoint_critic_local.pth') # save local critic
break
return scores, rolling_average_score
scores, rolling_average_score = d4pg()
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(1, len(scores)+1), scores)
plt.plot(np.arange(1, len(rolling_average_score)+1), rolling_average_score)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
# Here you can test the performance of the agents
# load the actor critic models
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
agent.actor_local.load_state_dict(torch.load("checkpoints/checkpoint_actor_local.pth", map_location=device))
agent.critic_local.load_state_dict(torch.load("checkpoints/checkpoint_critic_local.pth", map_location=device))
for i in range(10):
env_info = env.reset(train_mode=True)[brain_name] # reset the environment
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
actions = agent.act(states)
        env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
# close the environment
env.close() | _____no_output_____ | MIT | Project-2_Continuous-Control/Continuous-Control/Continuous_Control.ipynb | Mohammedabdalqader/DRL |
NewEgg.Com WebScraping Program For Laptops - Beta v1.0 - April 2020--- | # Import dependencies.
import os
import re
import time
import glob
import random
import datetime
import requests
import pandas as pd
from re import search
from splinter import Browser
from playsound import playsound
from bs4 import BeautifulSoup as soup | _____no_output_____ | MIT | archive/newer_notebooks_wip_drafts/drafts/new_egg_webscraper_app_NOTFINAL_review_too_much.ipynb | jhustles/new_egg_webscraper |
Functions & Classes Setup--- | # Build a function to return date throughout the program.
def return_dt():
global current_date
current_date = str(datetime.datetime.now()).replace(':','.').replace(' ','_')[:-7]
return current_date
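# A minimal sketch of the filename-safe timestamp return_dt() produces (the value shown is illustrative):
return_dt()  # -> e.g. '2020-04-15_13.45.09' (colons become dots, the space an underscore, microseconds trimmed)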
"""
NewEgg WebScraper function that scrapes data, saves it into a csv file, and creates Laptop objects.
"""
def newegg_page_scraper(containers, turn_page):
page_nums = []
general_category = []
product_categories = []
images = []
product_brands = []
product_models = []
product_links = []
item_numbers = []
promotions = []
prices = []
shipping_terms = []
    # Declared global so the general category remains accessible after this function returns.
global gen_category
"""
Loop through all the containers on the HTML, and scrap the following content into the following lists
"""
for con in containers:
try:
page_counter = turn_page
page_nums.append(int(turn_page))
gen_category = target_page_soup.find_all('div', class_="nav-x-body-top-bar fix")[0].text.split('\n')[5]
general_category.append(gen_category)
prod_category = target_page_soup.find_all('h1', class_="page-title-text")[0].text
product_categories.append(prod_category)
image = con.a.img["src"]
images.append(image)
prd_title = con.find_all('a', class_="item-title")[0].text
product_models.append(prd_title)
product_link = con.find_all('a', class_="item-title")[0]['href']
product_links.append(product_link)
shipping = con.find_all('li', class_='price-ship')[0].text.strip().split()[0]
if shipping != "Free":
shipping = shipping.replace('$', '')
shipping_terms.append(shipping)
else:
shipping = 0.00
shipping_terms.append(shipping)
brand_name = con.find_all('a', class_="item-brand")[0].img["title"]
product_brands.append(brand_name)
except (IndexError, ValueError) as e:
# If there are no item_brand container, take the Brand from product details.
product_brands.append(con.find_all('a', class_="item-title")[0].text.split()[0])
try:
current_promo = con.find_all("p", class_="item-promo")[0].text
promotions.append(current_promo)
except:
promotions.append('null')
try:
price = con.find_all('li', class_="price-current")[0].text.split()[0].replace('$','').replace(',', '')
prices.append(price)
except:
price = 'null / out of stock'
prices.append(price)
try:
item_num = con.find_all('a', class_="item-title")[0]['href'].split('p/')[1].split('?')[0]
item_numbers.append(item_num)
except (IndexError) as e:
item_num = con.find_all('a', class_="item-title")[0]['href'].split('p/')[1]
item_numbers.append(item_num)
# Convert all of the lists into a dataframe
df = pd.DataFrame({
'item_number': item_numbers,
'general_category': general_category,
'product_category': product_categories,
'brand': product_brands,
'model_specifications': product_models,
'price': prices,
'current_promotions': promotions,
'shipping': shipping_terms,
'page_number': page_nums,
'product_links': product_links,
'image_link': images
})
# Rearrange the dataframe columns into the following order.
df = df[['item_number', 'general_category','product_category', 'page_number' ,'brand','model_specifications' ,'current_promotions' ,'price' ,'shipping' ,'product_links','image_link']]
# Convert the dataframe into a dictionary.
global scraped_dict
scraped_dict = df.to_dict('records')
# Grab the subcategory "Laptop/Notebooks" and eliminate any special characters that may cause errors.
global pdt_category
pdt_category = df['product_category'].unique()[0]
# Eliminate special characters in a string if it exists.
pdt_category = ''.join(e for e in pdt_category if e.isalnum())
""" Count the number of items scraped by getting the length of a all the models for sale.
This parameter is always available for each item-container in the HTML
"""
global items_scraped
items_scraped = len(df['model_specifications'])
"""
Save the results into a csv file using Pandas
"""
df.to_csv(f'./processing/{current_date}_{pdt_category}_{items_scraped}_scraped_page{turn_page}.csv')
# Return these variables as they will be used.
return scraped_dict, items_scraped, pdt_category
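# For reference, a minimal usage sketch mirroring how the main loop below calls this function
# (assumes `containers` already holds the page's item-container divs and that this is page 1):
# scraped_dict, items_scraped, pdt_category = newegg_page_scraper(containers, 1)
# print(f"Scraped {items_scraped} items from page 1 of the {pdt_category} listing")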
# Function to return the total results pages.
def results_pages(target_page_soup):
# Use BeautifulSoup to extract the total results page number
results_pages = target_page_soup.find_all('span', class_="list-tool-pagination-text")[0].text.strip()
    # Extract the total page count; the main loop below iterates range(1, total_results_pages + 1).
global total_results_pages
total_results_pages = int(re.split("/", results_pages)[1])
return total_results_pages
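# A minimal sketch of the parsing step above, assuming the pagination text on the page reads like '1/96':
# int(re.split("/", "1/96")[1])  -> 96, i.e. the total number of results pages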
"""
Build a function to concatenate all pages that were scraped and saved in the processing folder.
Save the final output (1 csv file) all the results
"""
def concatenate(total_results_pages):
    path = './processing'
scraped_pages = glob.glob(path + "/*.csv")
concatenate_pages = []
counter = 0
for page in scraped_pages:
df = pd.read_csv(page, index_col=0, header=0)
concatenate_pages.append(df)
compiled_data = pd.concat(concatenate_pages, axis=0, ignore_index=True)
total_items_scraped = len(compiled_data['brand'])
concatenated_output = compiled_data.to_csv(f"./finished_outputs/{current_date}_{total_items_scraped}_scraped_{total_results_pages}_pages_.csv")
return
"""
Built a function to clear out the entire processing files folder to avoid clutter.
Or the user can keep the processing files (page by page) for their own analysis.
"""
def clean_processing_fldr():
    path = './processing'
scraped_pages = glob.glob(path + "/*.csv")
if len(scraped_pages) < 1:
print("There are no files in the folder to clear. \n")
else:
print(f"Clearing out a total of {len(scraped_pages)} scraped pages in the processing folder... \n")
clear_processing_files = []
for page in scraped_pages:
os.remove(page)
print('Clearing of "Processing" folder complete. \n')
return
def random_a_tag_mouse_over3():
x = random.randint(6, 10)
def rdm_slp_5_9(x):
time.sleep(x)
print(f"Mimic Humans - Sleeping for {x} seconds. ")
return x
working_try_atags = []
finally_atags = []
working_atags = []
not_working_atags = []
try_counter = 0
finally_counter = 0
time.sleep(1)
# Mouse over to header of the page "Laptops"
browser.find_by_tag("h1").mouse_over()
number_of_a_tags = len(browser.find_by_tag("a"))
    # From observation, most of the actual clickable laptop links on the grid fall in the <a> index range 2000-2100.
if number_of_a_tags > 1900:
print(f"Found {number_of_a_tags} <a> tags when parsing html... ")
random_90_percent_plug = (random.randint(90, 94)/100.00)
start_a_tag = int(round((number_of_a_tags * random_90_percent_plug)))
end_a_tag = int(round((number_of_a_tags * .96)))
else:
        # After proving you're human, the number of clickable <a> tags drops to around 300, so adjust the mouse_over range for that scenario.
print(f"Found {number_of_a_tags} <a> tags when parsing html... ")
random_40_percent_plug = (random.randint(40, 44)/100.00)
start_a_tag = int(round((number_of_a_tags * random_40_percent_plug)))
end_a_tag = int(round((number_of_a_tags * .46)))
step = random.randint(13, 23)
for i in range(start_a_tag, end_a_tag, step):
try: # try this as normal part of the program - SHORT
rdm_slp_5_9(x)
browser.find_by_tag("a")[i+2].mouse_over()
time.sleep(3)
except: # Execute this when there is an exception
print("EXCEPTION raised during mouse over. Going to break loop and proceed with moving to the next page. \n")
break
else: # execute this only if no exceptions are raised
working_try_atags.append(i+2)
working_atags.append(i+2)
try_counter += 1
print(f"<a> number = {i+2} | Current Attempts (Try Count): {try_counter} \n")
return
def g_recaptcha_check():
if browser.is_element_present_by_id('g-recaptcha') == True:
for sound in range(0, 2):
playsound('./sounds/user_alert.wav')
print("recaptcha - Check Alert! \n")
continue_scrape = input("Newegg system suspects you are a bot. \n Complete the recaptcha test to prove you're not a bot. After, enter in any key and press ENTER to continue the scrape. \n")
print("Continuing with scrape... \n")
return
def are_you_human_backend(target_page_soup):
if target_page_soup.find_all("title")[0].text == 'Are you a human?':
playsound('./sounds/user_alert.wav')
        continue_scrape = input("Newegg flagged the request as a bot on the backend. REFRESH THE PAGE; you may have to complete a test to prove you're human. After you refresh, enter any key and press ENTER to continue the webscrape. \n")
        print("The page will now be refreshed automatically two times, then the new URL will be targeted. \n")
        print("Refreshing twice over roughly 12 seconds. Please wait... \n")
for i in range(0, 2):
browser.reload()
time.sleep(2)
browser.back()
time.sleep(4)
browser.forward()
time.sleep(3)
print("Targeting new url... ")
# After user passes test, target the new url, and return updated target_page_soup
target_url = browser.url
response_target = requests.get(target_url)
target_page_soup = soup(response_target.text, 'html.parser')
print("#"* 60)
print(target_page_soup)
print("#"* 60)
#target_page_soup
        break_pedal = input("Does the soup say 'Are you a human?' in the text? Enter 'y' or 'n'. ")
if break_pedal == 'y':
# recursion
are_you_human_backend(target_page_soup)
else:
#print("#"* 60)
target_url = browser.url
response_target = requests.get(target_url)
target_page_soup = soup(response_target.text, 'html.parser')
return target_page_soup
else:
print("Passed the 'Are you human?' check when requesting and parsing the html. Continuing with scrape ... \n")
# Otherwise, return the target_page_soup that was passed in.
return target_page_soup
def random_xpath_top_bottom():
x = random.randint(3, 8)
def rdm_slp_5_9(x):
time.sleep(x)
print(f"Slept for {x} seconds. \n")
return x
# Check if there are working links on the screen, otherwise alert the user.
if (browser.is_element_present_by_tag('h1')) == True:
print("(Check 1 - Random Xpath Top Bottom) Header is present and hoverable on page. \n")
else:
print("(Check 1 - ERROR - Random Xpath Top Bottom) Header is NOT present on page. \n")
for s in range(0, 1):
playsound('./sounds/user_alert.wav')
red_light = input("Program could not detect a clickable links to hover over, and click. Please use your mouse to refresh the page, and enter 'y' to continue the scrape. \n")
if (browser.is_element_present_by_tag("a")) == True:
print("(Check 2- Random Xpath Top Bottom) <a> tags are present on page. Will begin mouse-over thru the page, and click a link. \n")
else:
# If there isn't, pause the program. Have user click somewhere on the screen.
for s in range(0, 1):
playsound('./sounds/user_alert.wav')
red_light = input("Program could not detect a clickable links to hover over, and click. Please use your mouse to refresh the page, and enter 'y' to continue the scrape. \n")
# There are clickable links, then 'flip the coin' to choose top or bottom button
coin_toss_top_bottom = random.randint(0,1)
next_page_button_results = []
# If the coin toss is even, mouse_over and click the top page link.
if (coin_toss_top_bottom == 0):
print('Heads - Clicking "Next Page" Top Button. \n')
x = random.randint(3, 8)
print(f"Mimic human behavior by randomly sleeping for {x}. \n")
rdm_slp_5_9(x)
browser.find_by_xpath('/html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[1]/div[2]/div/div[2]/button').mouse_over()
time.sleep(1)
browser.find_by_xpath('/html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[1]/div[2]/div/div[2]/button').click()
next_page_button_results.append(coin_toss_top_bottom)
print('Heads - SUCCESSFUL "Next Page" Top Button. \n')
return
else:
next_page_button_results.append(coin_toss_top_bottom)
        # Note: after adding an item to the cart and navigating back, the bottom "Next Page"
        # button xpath changes, hence the fallback xpaths attempted below.
try:
print('Tails - Clicking "Next Page" Xpath Bottom Button. \n')
x = random.randint(3, 8)
print(f"Mimic human behavior by randomly sleeping for {x}. \n")
rdm_slp_5_9(x)
browser.find_by_xpath('/html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[4]/div/div/div[11]/button').mouse_over()
time.sleep(4)
browser.find_by_xpath('/html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[4]/div/div/div[11]/button').click()
print('Tails - 1st Bottom Xpath - SUCCESSFUL "Next Page" Bottom Button. \n')
except:
print("EXCEPTION - 1st Bottom Xpath Failed. Sleep for 1 second then will try with 2nd Xpath bottom link. \n")
try:
time.sleep(4)
browser.find_by_xpath('/html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[3]/div/div/div[11]/button').mouse_over()
time.sleep(4)
browser.find_by_xpath('/html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[3]/div/div/div[11]/button').click()
print('(Exception Attempt) Tails - 2nd Bottom Xpath - SUCCESSFUL "Next Page" Bottom Button. \n')
except:
print("EXCEPTION - 2nd Bottom Xpath Failed. Trying with 3rd Xpath bottom link. \n")
try:
time.sleep(4)
browser.find_by_xpath('/html/body/div[5]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[4]/div/div/div[11]/button').mouse_over()
time.sleep(4)
browser.find_by_xpath('/html/body/div[5]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[2]/div[4]/div/div/div[11]/button').click()
print('(Exception Attempt) Tails - 3rd Bottom Xpath - SUCCESSFUL "Next Page" Bottom Button. \n')
except:
print("3rd Bottom Link - Didn't work - INSPECT AND GRAB THE XPATH... \n")
                break_pedal = input("Pause. Enter anything to continue... ")
return
"""
This class takes in the dictionary from the webscraper function, and will be used in a list comprehension
to produce class "objects"
"""
class Laptops:
counter = 0
def __init__(self, **entries):
self.__dict__.update(entries)
Laptops.counter += 1
def count(self):
print(f"Total Laptops scraped: {Laptops.counter}")
"""
Originally modeled out parent/child inheritance object structure.
After careful research, I found it much easier to export the Pandas Dataframe of the results to a dictionary,
and then into a class object, which I will elaborate more down below.
"""
# class Product_catalog:
# all_prod_count = 0
# def __init__(self, general_category): # computer systems
# self.general_category = general_category
# Product_catalog.all_prod_count += 1
# def count_prod(self):
# return int(self.all_prod_count)
# #return '{}'.format(self.general_category)
# Sub_category was later changed to Laptops due to the scope of this project.
# class Sub_category(Product_catalog): # laptops/notebooks, gaming
# sub_category_ct = 0
# def __init__(self, general_category, sub_categ, item_num, brand, price, img_link, prod_link, model_specifications, current_promotions):
# super().__init__(general_category)
# Sub_category.sub_category_ct += 1
# self.sub_categ = sub_categ
# self.item_num = item_num
# self.brand = brand
# self.price = price
# self.img_link = img_link
# self.prod_link = prod_link
# self.model_specifications = model_specifications
# self.current_promotions = current_promotions | _____no_output_____ | MIT | archive/newer_notebooks_wip_drafts/drafts/new_egg_webscraper_app_NOTFINAL_review_too_much.ipynb | jhustles/new_egg_webscraper |
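# A minimal sketch of the dictionary-to-object pattern described above (toy record; field names illustrative):
demo_records = pd.DataFrame({'brand': ['Lenovo'], 'price': ['999.99']}).to_dict('records')
demo_laptops = [Laptops(**rec) for rec in demo_records]
print(demo_laptops[0].brand)  # -> 'Lenovo'; __dict__.update turns every dict key into an attribute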
Main Program Logic--- | """ Welcome to the program message!
"""
print("=== NewEgg.Com Laptop WebScraper Beta v1.0 ===")
print("=="*30)
print('Scope: This project is a beta and is only built to scrape the laptop section of NewEgg.com due to limited time. \n')
print("Instructions: \n")
return_dt()
print(f'Current Date And Time: {current_date} \n')
print("(1) Go to www.newegg.com, go to the laptop section, select your requirements (e.g. brand, screensize, and specifications - SSD size, processor brand and etc...) ")
print("(2) Copy and paste the url from your exact search when prompted ")
print('(3) After the webscraping is successful, you will have an option to concatenate all of the pages you scraped together into one csv file')
print('(4) Lastly, you will have an option to clear out the processing folder (data scraped by each page)')
print('(5) If you have any issues or errors, "PRESS CTRL + C" to quit the program in the terminal ')
print('(6) You may run the program in the background as the program will make an alert noise to flag when Newegg suspects there is a bot, and will pause the scrape until you finish proving you are human. ')
print('(7) Disclaimer: Newegg may ban you for 24-48 hours for webscraping their data, then you may resume. \n Also, please consider executing during the day, when there is heavy web traffic to their site in your respective area. \n')
print('Happy Scraping!')
# Set up Splinter requirements.
executable_path = {'executable_path': './chromedriver.exe'}
# Add an item to the cart first, then go to the user URL and scrape.
# Ask user to input in the laptop query link they would like to scrape.
url = input("Please copy and paste your laptop query that you want to webscrape, and press enter: \n")
browser = Browser('chrome', **executable_path, headless=False, incognito=True)
########################
# Throw a headfake first.
laptops_home_url = 'https://www.newegg.com/'
browser.visit(laptops_home_url)
# Load Time.
time.sleep(4)
#current_url = browser.url
browser.find_by_xpath('/html/body/header/div[1]/div[3]/div[1]/form/div/div[1]/input').mouse_over()
time.sleep(1)
browser.find_by_xpath('/html/body/header/div[1]/div[3]/div[1]/form/div/div[1]/input').click()
time.sleep(1)
# Type in laptops
initial_search = browser.find_by_xpath('/html/body/header/div[1]/div[3]/div[1]/form/div/div[1]/input').type('Lenovo Laptops intel', slowly=True)
for k in initial_search:
    time.sleep(0.5)
time.sleep(3)
# Click the search button
browser.find_by_xpath('/html/body/header/div[1]/div[3]/div[1]/form/div/div[3]/button').click()
print("Sleeping for 5 seconds. \n")
time.sleep(5)
# try to click on the first workable link
for i in range(2,4):
try:
browser.find_by_xpath(f'/html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[3]/div[{i}]/div[1]/div[1]/a').mouse_over()
time.sleep(1)
browser.find_by_xpath(f'/html/body/div[4]/section/div/div/div[2]/div/div/div/div[2]/div[1]/div[3]/div[{i}]/div[1]/div[1]/a').click()
except:
print(f"i {i} - Exception occurred. Trying next link. ")
time.sleep(5)
browser.back()
time.sleep(4)
g_recaptcha_check()
#####################
print("Sleeping for 5 seconds. \n")
time.sleep(3)
# Go to the user intended url
browser.visit(url)
time.sleep(3)
g_recaptcha_check()
current_url = browser.url
# Allocating loading time.
time.sleep(4)
#current_url = browser.url
response = requests.get(current_url)
print(f"{response} \n")
target_page_soup = soup(response.text, 'html.parser')
are_you_human_backend(target_page_soup)
# Run the results_pages function to gather the total pages to be scraped.
results_pages(target_page_soup)
"""
This is the loop that performs the page by page scraping of data / results
of the user's query.
"""
# List set up for where class Laptop objects will be stored.
print("Beginning webscraping and activity log below... ")
print("="*60)
product_catalog = []
# "Stop" in range below is "total_results_pages+1" because we started at 1.
for turn_page in range(1, total_results_pages+1):
"""
If "reCAPTCHA" pops up, pause the program using an input. This allows the user to continue
to scrape after they're done completing the quiz by inputting any value.
"""
# Allocating loading time.
time.sleep(4)
g_recaptcha_check()
print(f"Beginning mouse over activity... \n")
# Set up "containers" to be passed into main scraping function.
if turn_page == 1:
containers = target_page_soup.find_all("div", class_="item-container")
else:
target_url = browser.url
# Use Request.get() - throw the boomerang at the target, retrieve the info, & return back to requestor
response_target = requests.get(target_url)
        # Use BeautifulSoup to parse the returned HTML with the built-in html parser
target_page_soup = soup(response_target.text, 'html.parser')
# Pass in target_page_soup to scan on the background (usually 10 pages in) if the html has text "Are you human?"
# If yes, the browser will refresh twice, and return a new target_page_soup that should have the scrapable items we want
are_you_human_backend(target_page_soup)
containers = target_page_soup.find_all("div", class_="item-container")
print(f"Scraping Current Page: {turn_page} \n")
# Execute webscraper function. Output is a csv file in the processing folder and dictionary.
newegg_page_scraper(containers, turn_page)
print("Creating laptop objects for this page... \n")
# Create instances of class objects of the laptops/notebooks using a list comprehension.
objects = [Laptops(**prod_obj) for prod_obj in scraped_dict]
print(f"Finished creating Laptop objects for page {turn_page} ... \n")
# Append all of the objects to the main product_catalog list (List of List of Objects).
print(f"Adding {len(objects)} to laptop catalog... \n")
product_catalog.append(objects)
random_a_tag_mouse_over3()
if turn_page == total_results_pages:
print(f"Completed scraping {turn_page} / {total_results_pages} pages. \n ")
# Exit the broswer once complete webscraping.
browser.quit()
else:
try:
y = random.randint(3, 5)
print(f"Current Page: {turn_page}) | SLEEPING FOR {y} SECONDS THEN will click next page. \n")
time.sleep(y)
random_xpath_top_bottom()
except:
z = random.randint(3, 5)
print(f" (EXCEPTION) Current Page: {turn_page}) | SLEEPING FOR {z} SECONDS - Will click next page, if applicable. \n")
time.sleep(z)
random_xpath_top_bottom()
time.sleep(1)
print("")
print("="*60)
print("")
# Prompt the user if they would like to concatenate all of the pages into one csv file
concat_y_n = input(f'All {total_results_pages} pages have been saved in the "processing" folder (1 page = 1 csv file). Would you like us to concatenate all the files into one? Enter "y" if so. Otherwise, enter any key to exit the program. \n')
if concat_y_n == 'y':
concatenate(total_results_pages)
    print(f'WebScraping Complete! All {total_results_pages} pages have been scraped and concatenated into a single csv file in the "finished_outputs" folder \n')
# Prompt the user to if they would like to clear out processing folder function here - as delete everything to prevent clutter
clear_processing_y_n = input(f'The "processing" folder has {total_results_pages} csv files (one per scraped page). Would you like to clear the files? Enter "y" if so. Otherwise, enter any key to exit the program. \n')
if clear_processing_y_n == 'y':
clean_processing_fldr()
print('Thank you for checking out my project, and I hope you found this useful! \n')
# 20 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601286795%20601346405%20600004341%20600004343&recaptcha=pass&LeftPriceRange=1000%201500
# 22 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601286795%20601346405%20600004341%20600004343%20600440394%20601183480%20601307583&LeftPriceRange=1000%201500
# 35 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601286795%20601346405%20600004341%20600004343%20601183480%20601307583%20601286800%204814&LeftPriceRange=1000%201500
# 25 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601286795%20601346405%20600004341%20600004343%20601183480%20601307583%20601286800%204814%20601296065%20601296059%20601296066&LeftPriceRange=1000%201500
# 15 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601346405%20600004341%20600004343%20601183480%20601307583%20601286800%204814%20601296065%20601296059%20601296066&LeftPriceRange=1000%201500
# 26 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601346405%20600004341%20600004343%20601183480%20601307583%20601286800%204814%20601296065%20601296059%20601296066%20601286795%20600440394&LeftPriceRange=1000%201500
# 28 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601346405%20600004341%20600004343%20601183480%20601307583%20601286800%204814%20601296065%20601296059%20601296066%20601286795%20600440394%20600337010%20601107729%20601331008&LeftPriceRange=1000%201500
# 48 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20600004343%20601183480%20601307583%204814%20601296065%20601296059%20601296066%20601286795%20600440394%20600004344&LeftPriceRange=1000%201500
# 29 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20600004343%20601183480%20601307583%204814%20601296066%20601286795%20600440394%20600004344%20601286800&LeftPriceRange=1000%201500
# 33 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20600004343%20601183480%20601307583%204814%20601296066%20601286795%20600440394%20600004344%20601286800%20600337010&LeftPriceRange=1000%201500
# 26 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601183480%20601307583%204814%20601296066%20601286795%20600440394%20600004344%20601286800%20600337010%20601107729%20601331008&LeftPriceRange=1000%201500
# 11 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601183480%204814%20601296066%20600440394%20600004344%20601286800%20600337010%20601107729%20601331008&LeftPriceRange=1000%201500
# 22 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%20601183480%204814%20601296066%20600440394%20600004344%20601286800%20600337010%20601107729%20601331008%204023%204022%204084
# 33 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%204814%20601296066%20600004344%204023%204022%204084
# 33 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600136700%20600165638%204814%20601296066%204023%204022%2050001186%2050010418%2050010772
# 24 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600165638%204814%20601296066%204023%204022%2050001186%2050010418%2050010772
# 15 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600165638%204814%20601296066%204022%2050001186%2050010418%2050010772
# 17 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600165638%204814%20601296066%204022%2050001186%2050010418%2050010772%2050001315%2050001312
# 18 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600165638%204814%20601296066%204022%2050001186%2050010418%2050010772%2050001315%2050001312%2050001146
# 19 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600165638%204814%20601296066%204022%2050001186%2050010418%2050010772%2050001315%2050001312%2050001146%2050001759%2050001149
# 25 pages
# https://www.newegg.com/p/pl?N=100006740%20600004804%20600165638%204814%20601296066%204022%2050001186%2050010418%2050010772%2050001315%2050001312%2050001146%2050001759%2050001149%2050001077%20600136700
# 32 pages
# https://www.newegg.com/p/pl?N=100006740%20601346405%20601307583%20601107729%20600337010%20601313977%20601274231%20601331008%20600440394%20601183480%20600136700
# 29 pages
# https://www.newegg.com/p/pl?N=100006740%20601346405%20601307583%20601107729%20600337010%20601313977%20601274231%20601331008%20600136700
# 18 pages
# https://www.newegg.com/p/pl?N=100006740%20601346405%20601307583%20601107729%20601313977%20601274231%20601331008%20600136700
# 30 pages
#https://www.newegg.com/p/pl?N=100006740%20601346405%20601307583%20601107729%20601274231%20601331008%20600136700%20601346404%20600337010
# 28 pages
# https://www.newegg.com/p/pl?N=100006740%20601346405%20601307583%20601107729%20601274231%20601331008%20600136700%20600337010
# 21 Pages
# https://www.newegg.com/p/pl?N=100006740%20601307583%20601107729%20601274231%20601331008%20600136700%20600337010
# 13 pages
# https://www.newegg.com/p/pl?N=100006740%20601307583%20601107729%20601274231%20601331008%20600136700
# 23 pages
# https://www.newegg.com/p/pl?N=100006740%20601307583%20601107729%20601274231%20600136700%20601313977%20600337010%20600440394 | _____no_output_____ | MIT | archive/newer_notebooks_wip_drafts/drafts/new_egg_webscraper_app_NOTFINAL_review_too_much.ipynb | jhustles/new_egg_webscraper |
Saliency visualization | from chainer_chemistry.saliency.calculator.gradient_calculator import GradientCalculator
from chainer_chemistry.saliency.calculator.integrated_gradients_calculator import IntegratedGradientsCalculator
from chainer_chemistry.link_hooks.variable_monitor_link_hook import VariableMonitorLinkHook
# 1. instantiation
gradient_calculator = GradientCalculator(classifier)
#gradient_calculator = IntegratedGradientsCalculator(classifier, steps=3,
from chainer_chemistry.saliency.calculator.calculator_utils import GaussianNoiseSampler
# --- VanillaGrad ---
M = 30
# 2. compute
saliency_samples_vanilla = gradient_calculator.compute(
train, M=1,)
saliency_samples_smooth = gradient_calculator.compute(
train, M=M, noise_sampler=GaussianNoiseSampler())
saliency_samples_bayes = gradient_calculator.compute(
train, M=M, train=True)
# 3. aggregate
method = 'square'
saliency_vanilla = gradient_calculator.aggregate(
saliency_samples_vanilla, ch_axis=None, method=method)
saliency_smooth = gradient_calculator.aggregate(
saliency_samples_smooth, ch_axis=None, method=method)
saliency_bayes = gradient_calculator.aggregate(
saliency_samples_bayes, ch_axis=None, method=method)
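For intuition, with `method='square'` the aggregation over the M gradient samples is roughly a mean of squared per-feature gradients (a sketch, not the library's exact code; for SmoothGrad the samples are gradients at Gaussian-noised inputs, for BayesGrad gradients under dropout):

$$s_i \approx \frac{1}{M}\sum_{m=1}^{M}\left(\frac{\partial y}{\partial x_i}\bigg|_{x^{(m)}}\right)^{2}$$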
from chainer_chemistry.saliency.visualizer.table_visualizer import TableVisualizer
from chainer_chemistry.saliency.visualizer.visualizer_utils import normalize_scaler
visualizer = TableVisualizer()
# Visualize saliency of `i`-th data
i = 0
visualizer.visualize(saliency_vanilla[i], feature_names=iris.feature_names,
scaler=normalize_scaler) | _____no_output_____ | MIT | examples/table/visualize-saliency-table.ipynb | corochann/chainer-saliency |
Visualize the saliency averaged over all data; this can be considered "feature importance" | saliency_mean = np.mean(saliency_vanilla, axis=0)
visualizer.visualize(saliency_mean, feature_names=iris.feature_names, num_visualize=-1,
scaler=normalize_scaler)
visualizer.visualize(saliency_mean, feature_names=iris.feature_names, num_visualize=-1,
scaler=normalize_scaler, save_filepath='results/iris_vanilla_{}.png'.format(method))
saliency_mean = np.mean(saliency_smooth, axis=0)
visualizer.visualize(saliency_mean, feature_names=iris.feature_names, num_visualize=-1,
scaler=normalize_scaler)
visualizer.visualize(saliency_mean, feature_names=iris.feature_names, num_visualize=-1,
scaler=normalize_scaler, save_filepath='results/iris_smooth_{}.png'.format(method))
saliency_mean = np.mean(saliency_bayes, axis=0)
visualizer.visualize(saliency_mean, feature_names=iris.feature_names, num_visualize=-1,
scaler=normalize_scaler)
visualizer.visualize(saliency_mean, feature_names=iris.feature_names, num_visualize=-1,
scaler=normalize_scaler, save_filepath='results/iris_bayes_{}.png'.format(method)) | _____no_output_____ | MIT | examples/table/visualize-saliency-table.ipynb | corochann/chainer-saliency |
sklearn random forest feature importanceRef: - https://qiita.com/TomokIshii/items/290adc16e2ca5032ca07 - https://stackoverflow.com/questions/44101458/random-forest-feature-importance-chart-using-python | import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.ensemble import RandomForestClassifier
iris = load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=0)
clf_rf = RandomForestClassifier()
clf_rf.fit(X_train, y_train)
y_pred = clf_rf.predict(X_test)
accu = accuracy_score(y_test, y_pred)
print('accuracy = {:>.4f}'.format(accu))
# Feature Importance
fti = clf_rf.feature_importances_
print('Feature Importances:')
for i, feat in enumerate(iris['feature_names']):
print('\t{0:20s} : {1:>.6f}'.format(feat, fti[i]))
import matplotlib.pyplot as plt
features = iris['feature_names']
importances = clf_rf.feature_importances_
indices = np.argsort(importances)
plt.title('Random forest feature importance')
plt.barh(range(len(indices)), importances[indices], color='b', align='center')
plt.yticks(range(len(indices)), [features[i] for i in indices])
plt.xlabel('Relative Importance')
plt.show() | _____no_output_____ | MIT | examples/table/visualize-saliency-table.ipynb | corochann/chainer-saliency |
Twin-Delayed DDPGComplete credit goes to this [awesome Deep Reinforcement Learning 2.0 Course on Udemy](https://www.udemy.com/course/deep-reinforcement-learning/) for the code. Installing the packages | !pip install pybullet | Requirement already satisfied: pybullet in /usr/local/lib/python3.6/dist-packages (2.7.1)
| Apache-2.0 | P2S10.ipynb | aks1981/ML |
Importing the libraries | import os
import time
import random
import numpy as np
import matplotlib.pyplot as plt
import pybullet_envs
import gym
import torch
import torch.nn as nn
import torch.nn.functional as F
from gym import wrappers
from torch.autograd import Variable
from collections import deque | _____no_output_____ | Apache-2.0 | P2S10.ipynb | aks1981/ML |
Step 1: We initialize the Experience Replay memory | class ReplayBuffer(object):
def __init__(self, max_size=1e6):
self.storage = []
self.max_size = max_size
self.ptr = 0
def add(self, transition):
if len(self.storage) == self.max_size:
self.storage[int(self.ptr)] = transition
self.ptr = (self.ptr + 1) % self.max_size
else:
self.storage.append(transition)
def sample(self, batch_size):
ind = np.random.randint(0, len(self.storage), size=batch_size)
batch_states, batch_next_states, batch_actions, batch_rewards, batch_dones = [], [], [], [], []
for i in ind:
state, next_state, action, reward, done = self.storage[i]
batch_states.append(np.array(state, copy=False))
batch_next_states.append(np.array(next_state, copy=False))
batch_actions.append(np.array(action, copy=False))
batch_rewards.append(np.array(reward, copy=False))
batch_dones.append(np.array(done, copy=False))
return np.array(batch_states), np.array(batch_next_states), np.array(batch_actions), np.array(batch_rewards).reshape(-1, 1), np.array(batch_dones).reshape(-1, 1) | _____no_output_____ | Apache-2.0 | P2S10.ipynb | aks1981/ML |
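# A minimal usage sketch of the buffer (transition shapes illustrative, not tied to any particular environment):
demo_buf = ReplayBuffer(max_size=10)
demo_buf.add((np.zeros(4), np.ones(4), np.zeros(2), 1.0, 0.0))  # (s, s', a, r, done)
s, s2, a, r, d = demo_buf.sample(batch_size=1)
print(s.shape, r.shape)  # (1, 4) (1, 1)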
Step 2: We build one neural network for the Actor model and one neural network for the Actor target | class Actor(nn.Module):
def __init__(self, state_dim, action_dim, max_action):
super(Actor, self).__init__()
self.layer_1 = nn.Linear(state_dim, 400)
self.layer_2 = nn.Linear(400, 300)
self.layer_3 = nn.Linear(300, action_dim)
self.max_action = max_action
def forward(self, x):
x = F.relu(self.layer_1(x))
x = F.relu(self.layer_2(x))
x = self.max_action * torch.tanh(self.layer_3(x))
return x | _____no_output_____ | Apache-2.0 | P2S10.ipynb | aks1981/ML |
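# A quick shape check of the Actor (dimensions illustrative):
demo_actor = Actor(state_dim=4, action_dim=2, max_action=1.0)
print(demo_actor(torch.randn(3, 4)).shape)  # torch.Size([3, 2]); tanh bounds outputs to [-max_action, max_action]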
Step 3: We build two neural networks for the two Critic models and two neural networks for the two Critic targets | class Critic(nn.Module):
def __init__(self, state_dim, action_dim):
super(Critic, self).__init__()
# Defining the first Critic neural network
self.layer_1 = nn.Linear(state_dim + action_dim, 400)
self.layer_2 = nn.Linear(400, 300)
self.layer_3 = nn.Linear(300, 1)
# Defining the second Critic neural network
self.layer_4 = nn.Linear(state_dim + action_dim, 400)
self.layer_5 = nn.Linear(400, 300)
self.layer_6 = nn.Linear(300, 1)
def forward(self, x, u):
xu = torch.cat([x, u], 1)
# Forward-Propagation on the first Critic Neural Network
x1 = F.relu(self.layer_1(xu))
x1 = F.relu(self.layer_2(x1))
x1 = self.layer_3(x1)
# Forward-Propagation on the second Critic Neural Network
x2 = F.relu(self.layer_4(xu))
x2 = F.relu(self.layer_5(x2))
x2 = self.layer_6(x2)
return x1, x2
def Q1(self, x, u):
xu = torch.cat([x, u], 1)
x1 = F.relu(self.layer_1(xu))
x1 = F.relu(self.layer_2(x1))
x1 = self.layer_3(x1)
return x1 | _____no_output_____ | Apache-2.0 | P2S10.ipynb | aks1981/ML |
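# Similarly, a quick check that the twin critics return two separate Q-value columns (dimensions illustrative):
demo_critic = Critic(state_dim=4, action_dim=2)
q1, q2 = demo_critic(torch.randn(3, 4), torch.randn(3, 2))
print(q1.shape, q2.shape)  # torch.Size([3, 1]) torch.Size([3, 1])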
Steps 4 to 15: Training Process | # Selecting the device (CPU or GPU)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Building the whole Training Process into a class
class TD3(object):
def __init__(self, state_dim, action_dim, max_action):
self.actor = Actor(state_dim, action_dim, max_action).to(device)
self.actor_target = Actor(state_dim, action_dim, max_action).to(device)
self.actor_target.load_state_dict(self.actor.state_dict())
self.actor_optimizer = torch.optim.Adam(self.actor.parameters())
self.critic = Critic(state_dim, action_dim).to(device)
self.critic_target = Critic(state_dim, action_dim).to(device)
self.critic_target.load_state_dict(self.critic.state_dict())
self.critic_optimizer = torch.optim.Adam(self.critic.parameters())
self.max_action = max_action
def select_action(self, state):
state = torch.Tensor(state.reshape(1, -1)).to(device)
return self.actor(state).cpu().data.numpy().flatten()
def train(self, replay_buffer, iterations, batch_size=100, discount=0.99, tau=0.005, policy_noise=0.2, noise_clip=0.5, policy_freq=2):
for it in range(iterations):
# Step 4: We sample a batch of transitions (s, s’, a, r) from the memory
batch_states, batch_next_states, batch_actions, batch_rewards, batch_dones = replay_buffer.sample(batch_size)
state = torch.Tensor(batch_states).to(device)
next_state = torch.Tensor(batch_next_states).to(device)
action = torch.Tensor(batch_actions).to(device)
reward = torch.Tensor(batch_rewards).to(device)
done = torch.Tensor(batch_dones).to(device)
# Step 5: From the next state s’, the Actor target plays the next action a’
next_action = self.actor_target(next_state)
# Step 6: We add Gaussian noise to this next action a’ and we clamp it in a range of values supported by the environment
noise = torch.Tensor(batch_actions).data.normal_(0, policy_noise).to(device)
noise = noise.clamp(-noise_clip, noise_clip)
next_action = (next_action + noise).clamp(-self.max_action, self.max_action)
# Step 7: The two Critic targets take each the couple (s’, a’) as input and return two Q-values Qt1(s’,a’) and Qt2(s’,a’) as outputs
target_Q1, target_Q2 = self.critic_target(next_state, next_action)
# Step 8: We keep the minimum of these two Q-values: min(Qt1, Qt2)
target_Q = torch.min(target_Q1, target_Q2)
# Step 9: We get the final target of the two Critic models, which is: Qt = r + γ * min(Qt1, Qt2), where γ is the discount factor
target_Q = reward + ((1 - done) * discount * target_Q).detach()
# Step 10: The two Critic models take each the couple (s, a) as input and return two Q-values Q1(s,a) and Q2(s,a) as outputs
current_Q1, current_Q2 = self.critic(state, action)
# Step 11: We compute the loss coming from the two Critic models: Critic Loss = MSE_Loss(Q1(s,a), Qt) + MSE_Loss(Q2(s,a), Qt)
critic_loss = F.mse_loss(current_Q1, target_Q) + F.mse_loss(current_Q2, target_Q)
# Step 12: We backpropagate this Critic loss and update the parameters of the two Critic models with a SGD optimizer
self.critic_optimizer.zero_grad()
critic_loss.backward()
self.critic_optimizer.step()
# Step 13: Once every two iterations, we update our Actor model by performing gradient ascent on the output of the first Critic model
if it % policy_freq == 0:
actor_loss = -self.critic.Q1(state, self.actor(state)).mean()
self.actor_optimizer.zero_grad()
actor_loss.backward()
self.actor_optimizer.step()
# Step 14: Still once every two iterations, we update the weights of the Actor target by polyak averaging
for param, target_param in zip(self.actor.parameters(), self.actor_target.parameters()):
target_param.data.copy_(tau * param.data + (1 - tau) * target_param.data)
# Step 15: Still once every two iterations, we update the weights of the Critic target by polyak averaging
for param, target_param in zip(self.critic.parameters(), self.critic_target.parameters()):
target_param.data.copy_(tau * param.data + (1 - tau) * target_param.data)
# Making a save method to save a trained model
def save(self, filename, directory):
torch.save(self.actor.state_dict(), '%s/%s_actor.pth' % (directory, filename))
torch.save(self.critic.state_dict(), '%s/%s_critic.pth' % (directory, filename))
# Making a load method to load a pre-trained model
def load(self, filename, directory):
self.actor.load_state_dict(torch.load('%s/%s_actor.pth' % (directory, filename)))
self.critic.load_state_dict(torch.load('%s/%s_critic.pth' % (directory, filename))) | _____no_output_____ | Apache-2.0 | P2S10.ipynb | aks1981/ML |
We make a function that evaluates the policy by calculating its average reward over 10 episodes | def evaluate_policy(policy, eval_episodes=10):
avg_reward = 0.
for _ in range(eval_episodes):
obs = env.reset()
done = False
while not done:
action = policy.select_action(np.array(obs))
obs, reward, done, _ = env.step(action)
avg_reward += reward
avg_reward /= eval_episodes
print ("---------------------------------------")
print ("Average Reward over the Evaluation Step: %f" % (avg_reward))
print ("---------------------------------------")
return avg_reward | _____no_output_____ | Apache-2.0 | P2S10.ipynb | aks1981/ML |
We set the parameters | env_name = "AntBulletEnv-v0" # Name of a environment (set it to any Continous environment you want)
seed = 0 # Random seed number
start_timesteps = 1e4 # Number of iterations/timesteps before which the model randomly chooses an action, and after which it starts to use the policy network
eval_freq = 5e3 # How often the evaluation step is performed (after how many timesteps)
max_timesteps = 5e5 # Total number of iterations/timesteps
save_models = True # Boolean checker whether or not to save the pre-trained model
expl_noise = 0.1 # Exploration noise - STD value of exploration Gaussian noise
batch_size = 100 # Size of the batch
discount = 0.99 # Discount factor gamma, used in the calculation of the total discounted reward
tau = 0.005 # Target network update rate
policy_noise = 0.2 # STD of Gaussian noise added to the actions for the exploration purposes
noise_clip = 0.5 # Maximum value of the Gaussian noise added to the actions (policy)
policy_freq = 2 # Number of iterations to wait before the policy network (Actor model) is updated | _____no_output_____ | Apache-2.0 | P2S10.ipynb | aks1981/ML |
We create a file name for the two saved models: the Actor and Critic models | file_name = "%s_%s_%s" % ("TD3", env_name, str(seed))
print ("---------------------------------------")
print ("Settings: %s" % (file_name))
print ("---------------------------------------") | ---------------------------------------
Settings: TD3_AntBulletEnv-v0_0
---------------------------------------
| Apache-2.0 | P2S10.ipynb | aks1981/ML |
We create a folder inside which will be saved the trained models | if not os.path.exists("./results"):
os.makedirs("./results")
if save_models and not os.path.exists("./pytorch_models"):
os.makedirs("./pytorch_models") | _____no_output_____ | Apache-2.0 | P2S10.ipynb | aks1981/ML |
We create the PyBullet environment | env = gym.make(env_name) | /usr/local/lib/python3.6/dist-packages/gym/logger.py:30: UserWarning: [33mWARN: Box bound precision lowered by casting to float32[0m
warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
| Apache-2.0 | P2S10.ipynb | aks1981/ML |
We set seeds and we get the necessary information on the states and actions in the chosen environment | env.seed(seed)
torch.manual_seed(seed)
np.random.seed(seed)
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.shape[0]
max_action = float(env.action_space.high[0]) | _____no_output_____ | Apache-2.0 | P2S10.ipynb | aks1981/ML |
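# Quick sanity check; the expected values below are what AntBulletEnv-v0 should report (verify locally):
print(state_dim, action_dim, max_action)  # expected: 28 8 1.0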
We create the policy network (the Actor model) | policy = TD3(state_dim, action_dim, max_action) | _____no_output_____ | Apache-2.0 | P2S10.ipynb | aks1981/ML |
We create the Experience Replay memory | replay_buffer = ReplayBuffer()
We define a list where all the evaluation results over 10 episodes are stored | evaluations = [evaluate_policy(policy)] | ---------------------------------------
Average Reward over the Evaluation Step: 9.804960
---------------------------------------
| Apache-2.0 | P2S10.ipynb | aks1981/ML |
We create a new folder directory in which the final results (videos of the agent) will be populated | def mkdir(base, name):
path = os.path.join(base, name)
if not os.path.exists(path):
os.makedirs(path)
return path
work_dir = mkdir('exp', 'brs')
monitor_dir = mkdir(work_dir, 'monitor')
max_episode_steps = env._max_episode_steps
save_env_vid = False
if save_env_vid:
env = wrappers.Monitor(env, monitor_dir, force = True)
env.reset() | _____no_output_____ | Apache-2.0 | P2S10.ipynb | aks1981/ML |
We initialize the variables | total_timesteps = 0
timesteps_since_eval = 0
episode_num = 0
done = True
t0 = time.time() | _____no_output_____ | Apache-2.0 | P2S10.ipynb | aks1981/ML |
Training | max_timesteps = 500000
# We start the main loop over 500,000 timesteps
while total_timesteps < max_timesteps:
# If the episode is done
if done:
# If we are not at the very beginning, we start the training process of the model
if total_timesteps != 0:
print("Total Timesteps: {} Episode Num: {} Reward: {}".format(total_timesteps, episode_num, episode_reward))
policy.train(replay_buffer, episode_timesteps, batch_size, discount, tau, policy_noise, noise_clip, policy_freq)
# We evaluate the episode and we save the policy
if timesteps_since_eval >= eval_freq:
timesteps_since_eval %= eval_freq
evaluations.append(evaluate_policy(policy))
policy.save(file_name, directory="./pytorch_models")
np.save("./results/%s" % (file_name), evaluations)
# When the training step is done, we reset the state of the environment
obs = env.reset()
# Set the Done to False
done = False
# Set rewards and episode timesteps to zero
episode_reward = 0
episode_timesteps = 0
episode_num += 1
# Before 10000 timesteps, we play random actions
if total_timesteps < start_timesteps:
action = env.action_space.sample()
else: # After 10000 timesteps, we switch to the model
action = policy.select_action(np.array(obs))
# If the explore_noise parameter is not 0, we add noise to the action and we clip it
if expl_noise != 0:
action = (action + np.random.normal(0, expl_noise, size=env.action_space.shape[0])).clip(env.action_space.low, env.action_space.high)
# The agent performs the action in the environment, then reaches the next state and receives the reward
new_obs, reward, done, _ = env.step(action)
  # We check if the episode is done; at the env time limit we mask done so the TD target still bootstraps from s'
done_bool = 0 if episode_timesteps + 1 == env._max_episode_steps else float(done)
# We increase the total reward
episode_reward += reward
# We store the new transition into the Experience Replay memory (ReplayBuffer)
replay_buffer.add((obs, new_obs, action, reward, done_bool))
# We update the state, the episode timestep, the total timesteps, and the timesteps since the evaluation of the policy
obs = new_obs
episode_timesteps += 1
total_timesteps += 1
timesteps_since_eval += 1
# We add the last policy evaluation to our list of evaluations and we save our model
evaluations.append(evaluate_policy(policy))
if save_models: policy.save("%s" % (file_name), directory="./pytorch_models")
np.save("./results/%s" % (file_name), evaluations) | Total Timesteps: 1000 Episode Num: 1 Reward: 512.6232988347093
Total Timesteps: 2000 Episode Num: 2 Reward: 493.70176430884203
Total Timesteps: 3000 Episode Num: 3 Reward: 492.40008391554187
Total Timesteps: 4000 Episode Num: 4 Reward: 476.3080060617612
Total Timesteps: 5000 Episode Num: 5 Reward: 494.0312655821267
---------------------------------------
Average Reward over the Evaluation Step: 154.195280
---------------------------------------
Total Timesteps: 5133 Episode Num: 6 Reward: 62.03055212295001
Total Timesteps: 6133 Episode Num: 7 Reward: 514.9270628334779
Total Timesteps: 6485 Episode Num: 8 Reward: 167.59288140345458
Total Timesteps: 7485 Episode Num: 9 Reward: 236.29198633567387
Total Timesteps: 8485 Episode Num: 10 Reward: 509.53382938773007
Total Timesteps: 8602 Episode Num: 11 Reward: 50.5568507268108
Total Timesteps: 9602 Episode Num: 12 Reward: 520.9793005272942
Total Timesteps: 10602 Episode Num: 13 Reward: 497.8470933409838
---------------------------------------
Average Reward over the Evaluation Step: 128.452366
---------------------------------------
Total Timesteps: 11602 Episode Num: 14 Reward: 75.12851600337565
Total Timesteps: 12602 Episode Num: 15 Reward: 185.85454988972566
Total Timesteps: 13602 Episode Num: 16 Reward: 255.98572779187978
Total Timesteps: 14602 Episode Num: 17 Reward: 130.0382692472513
Total Timesteps: 15602 Episode Num: 18 Reward: 115.21073238051312
---------------------------------------
Average Reward over the Evaluation Step: 229.912339
---------------------------------------
Total Timesteps: 16602 Episode Num: 19 Reward: 286.9407247295469
Total Timesteps: 17602 Episode Num: 20 Reward: 227.17287004160997
Total Timesteps: 18602 Episode Num: 21 Reward: 80.32986466490651
Total Timesteps: 19602 Episode Num: 22 Reward: 283.3712042492783
Total Timesteps: 20602 Episode Num: 23 Reward: 127.41396182269945
---------------------------------------
Average Reward over the Evaluation Step: 218.761478
---------------------------------------
Total Timesteps: 21602 Episode Num: 24 Reward: 278.36781644990464
Total Timesteps: 22602 Episode Num: 25 Reward: 309.2025085988498
Total Timesteps: 23602 Episode Num: 26 Reward: 306.87063698855405
Total Timesteps: 24602 Episode Num: 27 Reward: 412.03125738607673
Total Timesteps: 25602 Episode Num: 28 Reward: 198.6635895184017
---------------------------------------
Average Reward over the Evaluation Step: 350.084413
---------------------------------------
Total Timesteps: 26602 Episode Num: 29 Reward: 322.19013162263946
Total Timesteps: 27602 Episode Num: 30 Reward: 204.21178137830844
Total Timesteps: 28602 Episode Num: 31 Reward: 103.61375401208718
Total Timesteps: 29602 Episode Num: 32 Reward: 292.5332001401748
Total Timesteps: 29631 Episode Num: 33 Reward: 6.9196293063098695
Total Timesteps: 29660 Episode Num: 34 Reward: 8.769085282658294
Total Timesteps: 29737 Episode Num: 35 Reward: 28.08327579198238
Total Timesteps: 29774 Episode Num: 36 Reward: 7.967136280319689
Total Timesteps: 30390 Episode Num: 37 Reward: 312.0899631284384
---------------------------------------
Average Reward over the Evaluation Step: 202.433665
---------------------------------------
Total Timesteps: 31390 Episode Num: 38 Reward: 164.12255778235527
Total Timesteps: 31447 Episode Num: 39 Reward: 5.823704818050642
Total Timesteps: 31548 Episode Num: 40 Reward: 14.783543810558175
Total Timesteps: 31695 Episode Num: 41 Reward: 50.982756144854214
Total Timesteps: 32695 Episode Num: 42 Reward: 459.96194083486074
Total Timesteps: 33695 Episode Num: 43 Reward: 437.38196663630356
Total Timesteps: 34398 Episode Num: 44 Reward: 300.1568646720385
Total Timesteps: 35398 Episode Num: 45 Reward: 210.21459800639192
---------------------------------------
Average Reward over the Evaluation Step: 326.801282
---------------------------------------
Total Timesteps: 36398 Episode Num: 46 Reward: 316.66459383449904
Total Timesteps: 37398 Episode Num: 47 Reward: 339.16414336388686
Total Timesteps: 38398 Episode Num: 48 Reward: 106.89385488193517
Total Timesteps: 39398 Episode Num: 49 Reward: 379.6430817247807
Total Timesteps: 40398 Episode Num: 50 Reward: 334.8962617321453
---------------------------------------
Average Reward over the Evaluation Step: 117.635710
---------------------------------------
Total Timesteps: 41272 Episode Num: 51 Reward: 111.15273857740404
Total Timesteps: 41302 Episode Num: 52 Reward: 12.68026840063823
Total Timesteps: 41634 Episode Num: 53 Reward: 83.37937865997512
Total Timesteps: 41667 Episode Num: 54 Reward: 15.29215813636454
Total Timesteps: 41708 Episode Num: 55 Reward: 22.910606202918174
Total Timesteps: 42708 Episode Num: 56 Reward: 300.95375435808313
Total Timesteps: 43708 Episode Num: 57 Reward: 307.7032024847098
Total Timesteps: 44064 Episode Num: 58 Reward: 39.01389233571982
Total Timesteps: 44140 Episode Num: 59 Reward: 11.810420664766298
Total Timesteps: 45140 Episode Num: 60 Reward: 195.05080454058643
---------------------------------------
Average Reward over the Evaluation Step: 232.565050
---------------------------------------
Total Timesteps: 46140 Episode Num: 61 Reward: 235.29411016742816
Total Timesteps: 47140 Episode Num: 62 Reward: 315.8326540369658
Total Timesteps: 48140 Episode Num: 63 Reward: 425.615676205701
Total Timesteps: 49140 Episode Num: 64 Reward: 628.341676038075
Total Timesteps: 50140 Episode Num: 65 Reward: 417.72993234590905
---------------------------------------
Average Reward over the Evaluation Step: 483.382864
---------------------------------------
Total Timesteps: 51140 Episode Num: 66 Reward: 270.6667814230197
Total Timesteps: 52140 Episode Num: 67 Reward: 358.8033832631072
Total Timesteps: 53140 Episode Num: 68 Reward: 516.792398534641
Total Timesteps: 54140 Episode Num: 69 Reward: 403.2419608632971
Total Timesteps: 55140 Episode Num: 70 Reward: 440.99007770491835
---------------------------------------
Average Reward over the Evaluation Step: 394.578688
---------------------------------------
Total Timesteps: 55242 Episode Num: 71 Reward: 39.70154329117188
Total Timesteps: 56242 Episode Num: 72 Reward: 374.1900584893031
Total Timesteps: 57242 Episode Num: 73 Reward: 307.6071611539539
Total Timesteps: 58242 Episode Num: 74 Reward: 311.1345148830395
Total Timesteps: 59242 Episode Num: 75 Reward: 127.5429629120776
Total Timesteps: 60242 Episode Num: 76 Reward: 367.0588217185416
---------------------------------------
Average Reward over the Evaluation Step: 108.518749
---------------------------------------
Total Timesteps: 60419 Episode Num: 77 Reward: 1.2919983145482905
Total Timesteps: 61419 Episode Num: 78 Reward: 414.1641433832518
Total Timesteps: 62419 Episode Num: 79 Reward: 369.97711832414734
Total Timesteps: 63419 Episode Num: 80 Reward: 313.3218709906793
Total Timesteps: 64419 Episode Num: 81 Reward: 305.6061976643445
Total Timesteps: 65419 Episode Num: 82 Reward: 351.9500421236098
---------------------------------------
Average Reward over the Evaluation Step: 462.640597
---------------------------------------
Total Timesteps: 66419 Episode Num: 83 Reward: 483.3780508358247
Total Timesteps: 67419 Episode Num: 84 Reward: 307.4506990402266
Total Timesteps: 68419 Episode Num: 85 Reward: 682.4619278143392
Total Timesteps: 69419 Episode Num: 86 Reward: 374.5233031917104
Total Timesteps: 70419 Episode Num: 87 Reward: 492.08397064197857
---------------------------------------
Average Reward over the Evaluation Step: 374.841564
---------------------------------------
Total Timesteps: 71419 Episode Num: 88 Reward: 442.93137085887406
Total Timesteps: 72419 Episode Num: 89 Reward: 586.8272792098666
Total Timesteps: 73419 Episode Num: 90 Reward: 358.33666422091284
Total Timesteps: 74419 Episode Num: 91 Reward: 621.0003030741502
Total Timesteps: 75419 Episode Num: 92 Reward: 674.2112092758323
---------------------------------------
Average Reward over the Evaluation Step: 555.177452
---------------------------------------
Total Timesteps: 76419 Episode Num: 93 Reward: 627.5050249241532
Total Timesteps: 77419 Episode Num: 94 Reward: 838.4823478856684
Total Timesteps: 78419 Episode Num: 95 Reward: 541.1595708152734
Total Timesteps: 79419 Episode Num: 96 Reward: 553.6311493618038
Total Timesteps: 80419 Episode Num: 97 Reward: 641.6735821734253
---------------------------------------
Average Reward over the Evaluation Step: 504.145333
---------------------------------------
Total Timesteps: 81419 Episode Num: 98 Reward: 522.8365489351993
Total Timesteps: 82419 Episode Num: 99 Reward: 477.0298818993572
Total Timesteps: 83419 Episode Num: 100 Reward: 791.6863211157099
Total Timesteps: 84419 Episode Num: 101 Reward: 573.3475449740887
Total Timesteps: 85419 Episode Num: 102 Reward: 648.3139759060236
---------------------------------------
Average Reward over the Evaluation Step: 582.012942
---------------------------------------
Total Timesteps: 86419 Episode Num: 103 Reward: 467.233995134909
Total Timesteps: 87419 Episode Num: 104 Reward: 532.355075793272
Total Timesteps: 88419 Episode Num: 105 Reward: 448.2984699517601
Total Timesteps: 89419 Episode Num: 106 Reward: 733.0891408655976
Total Timesteps: 90419 Episode Num: 107 Reward: 520.7315198264828
---------------------------------------
Average Reward over the Evaluation Step: 572.394565
---------------------------------------
Total Timesteps: 91419 Episode Num: 108 Reward: 247.1330145679398
Total Timesteps: 92419 Episode Num: 109 Reward: 539.4933900200043
Total Timesteps: 93419 Episode Num: 110 Reward: 482.3074099367543
Total Timesteps: 94419 Episode Num: 111 Reward: 608.9362342547292
Total Timesteps: 95419 Episode Num: 112 Reward: 533.214044848681
---------------------------------------
Average Reward over the Evaluation Step: 432.162784
---------------------------------------
Total Timesteps: 96419 Episode Num: 113 Reward: 626.7356321953255
Total Timesteps: 97419 Episode Num: 114 Reward: 165.0771181649909
Total Timesteps: 98419 Episode Num: 115 Reward: 246.41934316667087
Total Timesteps: 99419 Episode Num: 116 Reward: 687.2147335807146
Total Timesteps: 99552 Episode Num: 117 Reward: 106.12056982992601
Total Timesteps: 100552 Episode Num: 118 Reward: 584.3008095812155
---------------------------------------
Average Reward over the Evaluation Step: 279.468899
---------------------------------------
Total Timesteps: 101552 Episode Num: 119 Reward: 251.2864884588421
Total Timesteps: 102552 Episode Num: 120 Reward: 474.39117038965736
Total Timesteps: 103552 Episode Num: 121 Reward: 359.31576138266746
Total Timesteps: 104552 Episode Num: 122 Reward: 314.56947871004485
Total Timesteps: 105552 Episode Num: 123 Reward: 591.1184137822272
---------------------------------------
Average Reward over the Evaluation Step: 307.921247
---------------------------------------
Total Timesteps: 106552 Episode Num: 124 Reward: 557.791907911734
Total Timesteps: 107552 Episode Num: 125 Reward: 563.0735414812734
Total Timesteps: 108552 Episode Num: 126 Reward: 590.9480336821701
Total Timesteps: 109552 Episode Num: 127 Reward: 486.69656816088326
Total Timesteps: 110552 Episode Num: 128 Reward: 468.3500699716701
---------------------------------------
Average Reward over the Evaluation Step: 429.304043
---------------------------------------
Total Timesteps: 111552 Episode Num: 129 Reward: 411.6801388847237
Total Timesteps: 112552 Episode Num: 130 Reward: 411.70937509706556
Total Timesteps: 113552 Episode Num: 131 Reward: 372.00569251127206
Total Timesteps: 114552 Episode Num: 132 Reward: 666.1430800691087
Total Timesteps: 115552 Episode Num: 133 Reward: 460.9904416173104
---------------------------------------
Average Reward over the Evaluation Step: 523.570190
---------------------------------------
Total Timesteps: 116552 Episode Num: 134 Reward: 352.1573019072249
Total Timesteps: 117552 Episode Num: 135 Reward: 488.9605787803935
Total Timesteps: 118552 Episode Num: 136 Reward: 296.84492677040714
Total Timesteps: 119552 Episode Num: 137 Reward: 434.0537955059363
Total Timesteps: 120552 Episode Num: 138 Reward: 379.08378272070706
---------------------------------------
Average Reward over the Evaluation Step: 657.452964
---------------------------------------
Total Timesteps: 121552 Episode Num: 139 Reward: 371.07870015375863
Total Timesteps: 122552 Episode Num: 140 Reward: 664.9377383113746
Total Timesteps: 123552 Episode Num: 141 Reward: 308.3757851862608
Total Timesteps: 124552 Episode Num: 142 Reward: 600.4104389421784
Total Timesteps: 125552 Episode Num: 143 Reward: 439.525359030352
---------------------------------------
Average Reward over the Evaluation Step: 512.187949
---------------------------------------
Total Timesteps: 126552 Episode Num: 144 Reward: 342.8940278454713
Total Timesteps: 127552 Episode Num: 145 Reward: 580.1339780093208
Total Timesteps: 128552 Episode Num: 146 Reward: 378.0806987666161
Total Timesteps: 129552 Episode Num: 147 Reward: 594.8097622539781
Total Timesteps: 130552 Episode Num: 148 Reward: 311.5215678900163
---------------------------------------
Average Reward over the Evaluation Step: 529.079285
---------------------------------------
Total Timesteps: 131552 Episode Num: 149 Reward: 675.7493771261866
Total Timesteps: 132552 Episode Num: 150 Reward: 524.108399768893
Total Timesteps: 133552 Episode Num: 151 Reward: 592.0693750792075
Total Timesteps: 134552 Episode Num: 152 Reward: 292.74064823213683
Total Timesteps: 135552 Episode Num: 153 Reward: 330.2225634953474
---------------------------------------
Average Reward over the Evaluation Step: 532.268559
---------------------------------------
Total Timesteps: 136552 Episode Num: 154 Reward: 436.40275488198887
Total Timesteps: 137552 Episode Num: 155 Reward: 539.7247665876872
Total Timesteps: 138552 Episode Num: 156 Reward: 469.81523890069843
Total Timesteps: 139552 Episode Num: 157 Reward: 601.9912032541115
Total Timesteps: 140552 Episode Num: 158 Reward: 425.4515105314638
---------------------------------------
Average Reward over the Evaluation Step: 314.491185
---------------------------------------
Total Timesteps: 141468 Episode Num: 159 Reward: 425.93157392282257
Total Timesteps: 142468 Episode Num: 160 Reward: 427.26013116609346
Total Timesteps: 142833 Episode Num: 161 Reward: 183.91652865733244
Total Timesteps: 142947 Episode Num: 162 Reward: 50.069775269034324
Total Timesteps: 143947 Episode Num: 163 Reward: 646.364585534303
Total Timesteps: 144947 Episode Num: 164 Reward: 477.41824037888335
Total Timesteps: 145947 Episode Num: 165 Reward: 552.3042803396013
---------------------------------------
Average Reward over the Evaluation Step: 531.458138
---------------------------------------
Total Timesteps: 146947 Episode Num: 166 Reward: 585.5431257819682
Total Timesteps: 147947 Episode Num: 167 Reward: 324.94786041445747
Total Timesteps: 148947 Episode Num: 168 Reward: 546.6546656982192
Total Timesteps: 149947 Episode Num: 169 Reward: 592.4284350006956
Total Timesteps: 150947 Episode Num: 170 Reward: 412.1744292730023
---------------------------------------
Average Reward over the Evaluation Step: 536.302828
---------------------------------------
Total Timesteps: 151947 Episode Num: 171 Reward: 533.0888396635238
Total Timesteps: 152947 Episode Num: 172 Reward: 527.3217282008873
Total Timesteps: 153947 Episode Num: 173 Reward: 280.43220642595793
Total Timesteps: 154947 Episode Num: 174 Reward: 193.33424720378716
Total Timesteps: 155947 Episode Num: 175 Reward: 325.8002629464136
---------------------------------------
Average Reward over the Evaluation Step: 338.946457
---------------------------------------
Total Timesteps: 156947 Episode Num: 176 Reward: 290.7160933823586
Total Timesteps: 157947 Episode Num: 177 Reward: 379.10298330586903
Total Timesteps: 158947 Episode Num: 178 Reward: 292.6396511093699
Total Timesteps: 159947 Episode Num: 179 Reward: 461.62048504640205
Total Timesteps: 160947 Episode Num: 180 Reward: 422.45777610848126
---------------------------------------
Average Reward over the Evaluation Step: 511.189483
---------------------------------------
Total Timesteps: 161947 Episode Num: 181 Reward: 439.4691276679888
Total Timesteps: 162947 Episode Num: 182 Reward: 648.9593381207025
Total Timesteps: 163947 Episode Num: 183 Reward: 620.6338059293364
Total Timesteps: 164947 Episode Num: 184 Reward: 453.458217124958
Total Timesteps: 165947 Episode Num: 185 Reward: 414.36031279219804
---------------------------------------
Average Reward over the Evaluation Step: 440.335195
---------------------------------------
Total Timesteps: 166947 Episode Num: 186 Reward: 522.2063175282304
Total Timesteps: 167947 Episode Num: 187 Reward: 259.9825712171355
Total Timesteps: 168947 Episode Num: 188 Reward: 520.8221978165838
Total Timesteps: 169382 Episode Num: 189 Reward: 185.83971584414329
Total Timesteps: 170382 Episode Num: 190 Reward: 674.3617387582459
---------------------------------------
Average Reward over the Evaluation Step: 705.455231
---------------------------------------
Total Timesteps: 171382 Episode Num: 191 Reward: 564.1567976163952
Total Timesteps: 172382 Episode Num: 192 Reward: 740.9578833141447
Total Timesteps: 173382 Episode Num: 193 Reward: 340.24270992484423
| Apache-2.0 | P2S10.ipynb | aks1981/ML |
Inference | import os
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import gym
from gym import wrappers
import pybullet_envs  # registers AntBulletEnv-v0 with gym

class Actor(nn.Module):
def __init__(self, state_dim, action_dim, max_action):
super(Actor, self).__init__()
self.layer_1 = nn.Linear(state_dim, 400)
self.layer_2 = nn.Linear(400, 300)
self.layer_3 = nn.Linear(300, action_dim)
self.max_action = max_action
def forward(self, x):
x = F.relu(self.layer_1(x))
x = F.relu(self.layer_2(x))
x = self.max_action * torch.tanh(self.layer_3(x))
return x
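# Quick sanity sketch (illustrative, made-up sizes): tanh keeps the raw output
# in [-1, 1], so multiplying by max_action maps it onto the environment's action range.
# toy_actor = Actor(state_dim=5, action_dim=2, max_action=1.0)
# print(toy_actor(torch.zeros(1, 5)).shape)  # -> torch.Size([1, 2])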
class Critic(nn.Module):
def __init__(self, state_dim, action_dim):
super(Critic, self).__init__()
# Defining the first Critic neural network
self.layer_1 = nn.Linear(state_dim + action_dim, 400)
self.layer_2 = nn.Linear(400, 300)
self.layer_3 = nn.Linear(300, 1)
# Defining the second Critic neural network
self.layer_4 = nn.Linear(state_dim + action_dim, 400)
self.layer_5 = nn.Linear(400, 300)
self.layer_6 = nn.Linear(300, 1)
def forward(self, x, u):
xu = torch.cat([x, u], 1)
# Forward-Propagation on the first Critic Neural Network
x1 = F.relu(self.layer_1(xu))
x1 = F.relu(self.layer_2(x1))
x1 = self.layer_3(x1)
# Forward-Propagation on the second Critic Neural Network
x2 = F.relu(self.layer_4(xu))
x2 = F.relu(self.layer_5(x2))
x2 = self.layer_6(x2)
return x1, x2
def Q1(self, x, u):
xu = torch.cat([x, u], 1)
x1 = F.relu(self.layer_1(xu))
x1 = F.relu(self.layer_2(x1))
x1 = self.layer_3(x1)
return x1
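# Sketch: the twin critic heads enable the "clipped double-Q" trick used in Step 8
# of train() below; torch.min(q1, q2) gives a conservative value target.
# toy_critic = Critic(state_dim=5, action_dim=2)
# q1, q2 = toy_critic(torch.zeros(1, 5), torch.zeros(1, 2))
# print(torch.min(q1, q2))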
# Selecting the device (CPU or GPU)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Building the whole Training Process into a class
class TD3(object):
def __init__(self, state_dim, action_dim, max_action):
self.actor = Actor(state_dim, action_dim, max_action).to(device)
self.actor_target = Actor(state_dim, action_dim, max_action).to(device)
self.actor_target.load_state_dict(self.actor.state_dict())
self.actor_optimizer = torch.optim.Adam(self.actor.parameters())
self.critic = Critic(state_dim, action_dim).to(device)
self.critic_target = Critic(state_dim, action_dim).to(device)
self.critic_target.load_state_dict(self.critic.state_dict())
self.critic_optimizer = torch.optim.Adam(self.critic.parameters())
self.max_action = max_action
def select_action(self, state):
state = torch.Tensor(state.reshape(1, -1)).to(device)
return self.actor(state).cpu().data.numpy().flatten()
def train(self, replay_buffer, iterations, batch_size=100, discount=0.99, tau=0.005, policy_noise=0.2, noise_clip=0.5, policy_freq=2):
for it in range(iterations):
# Step 4: We sample a batch of transitions (s, s’, a, r) from the memory
batch_states, batch_next_states, batch_actions, batch_rewards, batch_dones = replay_buffer.sample(batch_size)
state = torch.Tensor(batch_states).to(device)
next_state = torch.Tensor(batch_next_states).to(device)
action = torch.Tensor(batch_actions).to(device)
reward = torch.Tensor(batch_rewards).to(device)
done = torch.Tensor(batch_dones).to(device)
# Step 5: From the next state s’, the Actor target plays the next action a’
next_action = self.actor_target(next_state)
# Step 6: We add Gaussian noise to this next action a’ and we clamp it in a range of values supported by the environment
noise = torch.Tensor(batch_actions).data.normal_(0, policy_noise).to(device)
noise = noise.clamp(-noise_clip, noise_clip)
next_action = (next_action + noise).clamp(-self.max_action, self.max_action)
# Step 7: The two Critic targets take each the couple (s’, a’) as input and return two Q-values Qt1(s’,a’) and Qt2(s’,a’) as outputs
target_Q1, target_Q2 = self.critic_target(next_state, next_action)
# Step 8: We keep the minimum of these two Q-values: min(Qt1, Qt2)
target_Q = torch.min(target_Q1, target_Q2)
# Step 9: We get the final target of the two Critic models, which is: Qt = r + γ * min(Qt1, Qt2), where γ is the discount factor
target_Q = reward + ((1 - done) * discount * target_Q).detach()
# Step 10: The two Critic models take each the couple (s, a) as input and return two Q-values Q1(s,a) and Q2(s,a) as outputs
current_Q1, current_Q2 = self.critic(state, action)
# Step 11: We compute the loss coming from the two Critic models: Critic Loss = MSE_Loss(Q1(s,a), Qt) + MSE_Loss(Q2(s,a), Qt)
critic_loss = F.mse_loss(current_Q1, target_Q) + F.mse_loss(current_Q2, target_Q)
# Step 12: We backpropagate this Critic loss and update the parameters of the two Critic models with a SGD optimizer
self.critic_optimizer.zero_grad()
critic_loss.backward()
self.critic_optimizer.step()
# Step 13: Once every two iterations, we update our Actor model by performing gradient ascent on the output of the first Critic model
if it % policy_freq == 0:
actor_loss = -self.critic.Q1(state, self.actor(state)).mean()
self.actor_optimizer.zero_grad()
actor_loss.backward()
self.actor_optimizer.step()
# Step 14: Still once every two iterations, we update the weights of the Critic target by Polyak averaging
for param, target_param in zip(self.critic.parameters(), self.critic_target.parameters()):
target_param.data.copy_(tau * param.data + (1 - tau) * target_param.data)
# Step 15: Still once every two iterations, we update the weights of the Actor target by Polyak averaging
for param, target_param in zip(self.actor.parameters(), self.actor_target.parameters()):
target_param.data.copy_(tau * param.data + (1 - tau) * target_param.data)
# Making a save method to save a trained model
def save(self, filename, directory):
torch.save(self.actor.state_dict(), '%s/%s_actor.pth' % (directory, filename))
torch.save(self.critic.state_dict(), '%s/%s_critic.pth' % (directory, filename))
# Making a load method to load a pre-trained model
def load(self, filename, directory):
self.actor.load_state_dict(torch.load('%s/%s_actor.pth' % (directory, filename)))
self.critic.load_state_dict(torch.load('%s/%s_critic.pth' % (directory, filename)))
def evaluate_policy(policy, eval_episodes=10):
avg_reward = 0.
for _ in range(eval_episodes):
obs = env.reset()
done = False
while not done:
action = policy.select_action(np.array(obs))
obs, reward, done, _ = env.step(action)
avg_reward += reward
avg_reward /= eval_episodes
print ("---------------------------------------")
print ("Average Reward over the Evaluation Step: %f" % (avg_reward))
print ("---------------------------------------")
return avg_reward
env_name = "AntBulletEnv-v0"
seed = 0
file_name = "%s_%s_%s" % ("TD3", env_name, str(seed))
print ("---------------------------------------")
print ("Settings: %s" % (file_name))
print ("---------------------------------------")
eval_episodes = 10
save_env_vid = True
env = gym.make(env_name)
max_episode_steps = env._max_episode_steps
if save_env_vid:
    monitor_dir = './videos'  # assumed output path; defined earlier in the full notebook
    env = wrappers.Monitor(env, monitor_dir, force = True)
env.reset()
env.seed(seed)
torch.manual_seed(seed)
np.random.seed(seed)
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.shape[0]
max_action = float(env.action_space.high[0])
policy = TD3(state_dim, action_dim, max_action)
policy.load(file_name, './pytorch_models/')
_ = evaluate_policy(policy, eval_episodes=eval_episodes)
| _____no_output_____ | Apache-2.0 | P2S10.ipynb | aks1981/ML |
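As a quick follow-up (a minimal sketch, assuming the `env` and `policy` objects above are still in scope), one more greedy rollout can be run to eyeball the loaded policy:
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    action = policy.select_action(np.array(obs))   # deterministic actor output
    obs, reward, done, _ = env.step(action)
    total_reward += reward
print("episode reward:", total_reward)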
Web predictions
The purpose of this notebook is to experiment with making predictions from "raw" accumulated user values that could, for instance, be user input from a web form. | import findspark
findspark.init()
findspark.find()
import pyspark
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
conf = pyspark.SparkConf().setAppName('sparkify-capstone-web').setMaster('local')
sc = pyspark.SparkContext(conf=conf)
spark = SparkSession(sc)
from pyspark.ml.classification import GBTClassifier
from pyspark.ml.classification import GBTClassificationModel
from pyspark.ml.feature import VectorAssembler
transformedPath = "out/transformed.parquet"
predictionsPath = "out/predictions.parquet"
df_transformed = spark.read.parquet(transformedPath)
df_predictions = spark.read.parquet(predictionsPath)
model = GBTClassificationModel.load("out/model")
zeros = df_predictions.filter(df_predictions["prediction"] == 0)
ones = df_predictions.filter(df_predictions["prediction"] == 1)
zerosCount = zeros.count()
onesCount = ones.count()
print("Ones: {}, Zeros: {}".format(onesCount, zerosCount))
print(onesCount / zerosCount * 100)
usersPredictedToChurn = df_predictions.filter(df_predictions["prediction"] == 1).take(5)
for row in usersPredictedToChurn:
print(int(row["userId"]))
df_transformed.show()
df_predictions.show()
# 1 300044
# 0 251
# Select the prediction of a user as value
pred = df_predictions[df_predictions["userId"] == 78].select("prediction").collect()[0][0]
pred
# From a query that could be entered in a web form, create a prediction
# Query from web
query = "1.0,0.0,10,4,307,0,76200,10"
# Split to values
values = query.split(",")
# Prepare dictionary for feature dataframe from web form values
features_dict = [{
"level_index": float(values[0]),
"gender_index": float(values[1]),
"thumbs_up_sum": int(values[2]),
"thumbs_down_sum": int(values[3]),
"nextsong_sum": int(values[4]),
"downgrade_sum": int(values[5]),
"length_sum": float(values[6]),
"sessionId_count": int(values[7]),
}]
# Create a user row to use in VectorAssembler
df_user_row = spark.createDataFrame(features_dict)
# Create feature dataframe with VectorAssembler
df_features = VectorAssembler(inputCols = \
["level_index", "gender_index", "thumbs_up_sum", "thumbs_down_sum", \
"nextsong_sum", "downgrade_sum", "length_sum", "sessionId_count"], \
outputCol = "features").transform(df_user_row)
# Select features
df_features = df_features.select("features")
# Predict on model
prediction = model.transform(df_features)
# Show result
prediction.show()
prediction.select("prediction").collect()[0][0]
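# For reuse from a web handler, the steps above can be wrapped in one helper.
# (A sketch: predict_from_query and FEATURE_COLS are names introduced here.)
FEATURE_COLS = ["level_index", "gender_index", "thumbs_up_sum", "thumbs_down_sum",
                "nextsong_sum", "downgrade_sum", "length_sum", "sessionId_count"]
def predict_from_query(query):
    """Parse a comma-separated form string and return the model's prediction."""
    v = query.split(",")
    row = [{"level_index": float(v[0]), "gender_index": float(v[1]),
            "thumbs_up_sum": int(v[2]), "thumbs_down_sum": int(v[3]),
            "nextsong_sum": int(v[4]), "downgrade_sum": int(v[5]),
            "length_sum": float(v[6]), "sessionId_count": int(v[7])}]
    feats = VectorAssembler(inputCols=FEATURE_COLS, outputCol="features") \
        .transform(spark.createDataFrame(row)).select("features")
    return model.transform(feats).select("prediction").collect()[0][0]
predict_from_query("1.0,0.0,10,4,307,0,76200,10")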
# Output the notebook to an html file
from subprocess import call
call(['python', '-m', 'nbconvert', 'web_pred.ipynb']) | _____no_output_____ | MIT | udacity/data-scientist-nanodegree/sparkify/.ipynb_checkpoints/web_pred-checkpoint.ipynb | thomasrobertz/mooc |
Lecture 9 - Motor Control
Introduction to modeling and simulation of human movement
https://github.com/BMClab/bmc/blob/master/courses/ModSim2018.md
* In class: | import numpy as np
#import pandas as pd
#import pylab as pl
import matplotlib.pyplot as plt
import math
%matplotlib notebook | _____no_output_____ | MIT | courses/modsim2018/matheuspiquini/Lecture11.ipynb | MatheusKP/bmc |
Muscle properties | Lslack = .223
Umax = .04
Lce_o = .093 # optimal fiber length
width = .63
Fmax = 3000
a = .25 # Hill force-velocity constant (note: `a` is reused below as the activation state)
b = .25*10 # Hill force-velocity constant
Initial conditions | LceNorm = .087/Lce_o
t0 = 0
tf = 2.99
h = 1e-3
u = 1 # neural excitation
a = 0 # initial muscle activation (overwrites the Hill constant defined above)
t = np.arange(t0,tf,h)
F = np.empty(t.shape)
Fkpe = np.empty(t.shape)
FiberLength = np.empty(t.shape)
TendonLength = np.empty(t.shape)
U = np.arange(t0,1,h)
## Functions
def computeTendonForce(LseeNorm, Lce_o, Lslack):
'''
Compute tendon force
Input:
LseeNorm - normalized tendon length
Lslack - slack length of the tendon (non-normalized)
Lce_o - optimal length of the fiber
Output:
FTendonNorm - normalized tendon force
'''
Umax = 0.04
if LseeNorm<(Lslack/Lce_o):
FTendonNorm = 0
else:
FTendonNorm = ((LseeNorm-Lslack/Lce_o)/(Umax*Lslack/Lce_o))**2
return FTendonNorm
def computeParallelElementForce (LceNorm):
Umax = 1
if LceNorm<1:
FkpeNorm = 0
else:
FkpeNorm = ((LceNorm-1)/(Umax))**2
# lce_o/Lce_o = 1 (normalized)
return FkpeNorm
def computeForceLengthCurve(LceNorm):
width = 0.63
FLNorm = max([0, (1-((LceNorm-1)/width)**2)])
return FLNorm
def computeActivation(a, u, h):
act = 0.015
deact = 0.05
if u>a:
T = act*(0.4+(1.5*a))
else:
T = deact/(0.5+(1.5*a))
a += h*((u-a)/T)
return a
def computeContractileElementDerivative(FLNorm, FCENorm, a):
#calculate CE velocity from Hill's equation
a1 = .25
b = .25*10
Fmlen = 1.8
Vmax = 8
if FCENorm > a*FLNorm:
B = ((2+2/a1)*(FLNorm*Fmlen-FCENorm))/(Fmlen-1)
LceNormdot = (0.75+0.75*a)*Vmax*((FCENorm-FLNorm)/B)
else:
B = FLNorm + (FCENorm/a1)
LceNormdot = (0.75+0.75*a)*Vmax*((FCENorm-FLNorm)/B)
return LceNormdot
def computeContractileElementForce(FTendonNorm, FkpeNorm):
FCENorm = FTendonNorm - FkpeNorm
return FCENorm
def ComputeTendonLength(Lm, Lce_o, LceNorm):
LseeNorm = Lm/Lce_o - LceNorm
return LseeNorm | _____no_output_____ | MIT | courses/modsim2018/matheuspiquini/Lecture11.ipynb | MatheusKP/bmc |
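A quick spot check of the helpers (illustrative calls): the force-length factor should equal 1 at the optimal fiber length, and a tendon exactly at its slack length should carry no force.
print(computeForceLengthCurve(1.0))                      # expect 1.0
print(computeTendonForce(Lslack/Lce_o, Lce_o, Lslack))   # expect 0.0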
Simulation - Parallel | for i in range(len(t)):
# ramp input: hold Lm = 0.31 m for 1 s, then shorten by 0.04 m over the next second;
# for t >= 2 s, Lm keeps its last ramp value
if t[i]<=1:
Lm = 0.31
elif t[i]>1 and t[i]<2:
Lm = .31 - .04*(t[i]-1)
#####################################################################
LseeNorm = (Lm/Lce_o) - LceNorm
FTendonNorm = computeTendonForce(LseeNorm, Lce_o, Lslack)
FkpeNorm = computeParallelElementForce(LceNorm)
FLNorm = computeForceLengthCurve(LceNorm)
FCENorm = computeContractileElementForce(FTendonNorm, FkpeNorm)
LceNormdot = computeContractileElementDerivative(FLNorm,FCENorm, a)
a = computeActivation(a, u, h)
LceNorm += h*LceNormdot
#####################################################################
F[i] = FTendonNorm*Fmax
FiberLength[i] = LceNorm*Lce_o
TendonLength[i] = LseeNorm*Lce_o
| _____no_output_____ | MIT | courses/modsim2018/matheuspiquini/Lecture11.ipynb | MatheusKP/bmc |
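Since LseeNorm is computed as Lm/Lce_o - LceNorm, the stored fiber and tendon lengths sum back to the imposed muscle-tendon length, which gives a cheap consistency check:
print(FiberLength[0] + TendonLength[0])   # ~ 0.31 m during the initial hold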
Plot | fig, ax = plt.subplots(1, 1, figsize=(6,6), sharex=True)
ax.plot(t,F,c='red')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Force [N]')
#ax.legend()
fig, ax = plt.subplots(1, 1, figsize=(6,6), sharex=True)
ax.plot(t,FiberLength, label = 'fibra')
ax.plot(t, TendonLength, label = 'tendao')
ax.plot(t,FiberLength + TendonLength, label = 'fibra + tendao')
plt.grid()
plt.legend(loc = 'best')
plt.xlabel('time (s)')
plt.ylabel('Length [m]')
plt.tight_layout()
#ax.legend() | _____no_output_____ | MIT | courses/modsim2018/matheuspiquini/Lecture11.ipynb | MatheusKP/bmc |
Load a scope from a yaml file. | with open('gbnrtc_scope.yaml') as yf:
for _ in range(100):
print(yf.readline(), end="")
scope = emat.Scope('gbnrtc_scope.yaml')
scope | _____no_output_____ | BSD-3-Clause | docs/source/emat.examples/GBNRTC/gbnrtc_fresh_boxes.ipynb | tlumip/tmip-emat |
Create a brand new set of `Boxes`. | boxes = emat.Boxes(scope=scope) | _____no_output_____ | BSD-3-Clause | docs/source/emat.examples/GBNRTC/gbnrtc_fresh_boxes.ipynb | tlumip/tmip-emat |
Define a new top-level box (i.e. a box with no parents to inherit from). | box_1 = emat.Box(name='High Population Growth', scope=scope) | _____no_output_____ | BSD-3-Clause | docs/source/emat.examples/GBNRTC/gbnrtc_fresh_boxes.ipynb | tlumip/tmip-emat |
Add a lower bound for population growth, to ensure the include values are all "high". | box_1.set_lower_bound('Land Use - CBD Focus', 1.0) | _____no_output_____ | BSD-3-Clause | docs/source/emat.examples/GBNRTC/gbnrtc_fresh_boxes.ipynb | tlumip/tmip-emat |
Add some things as "relevant features" that we are interested in analyzing,even though we do not set bounds on them (yet). | box_1.relevant_features.add('Total LRT Boardings')
box_1.relevant_features.add('Downtown to Airport Travel Time')
box_1.relevant_features.add('Peak Transit Share')
box_1.relevant_features.add('AM Trip Time (minutes)')
box_1.relevant_features.add('AM Trip Length (miles)')
box_1.relevant_features.add('Region-wide VMT')
box_1.relevant_features.add('Total Transit Boardings')
box_1.relevant_features.add('Corridor Kensington Daily VMT')
box_1 | _____no_output_____ | BSD-3-Clause | docs/source/emat.examples/GBNRTC/gbnrtc_fresh_boxes.ipynb | tlumip/tmip-emat |
Define a new lower level box, which will inherit from the top level box we just created. | box_2 = emat.Box(name='Automated Vehicles', scope=scope, parent='High Population Growth') | _____no_output_____ | BSD-3-Clause | docs/source/emat.examples/GBNRTC/gbnrtc_fresh_boxes.ipynb | tlumip/tmip-emat |
Set some thresholds on this lower level box. | box_2.set_lower_bound('Freeway Capacity', 1.25)
box_2.set_upper_bound('Auto IVTT Sensitivity', 0.9)
box_2
box_2.parent_box_name | _____no_output_____ | BSD-3-Clause | docs/source/emat.examples/GBNRTC/gbnrtc_fresh_boxes.ipynb | tlumip/tmip-emat |
So far, the individual boxes we created are just loose boxes. To connect them, we need to add them to the master `Boxes` object. | boxes.add(box_1)
boxes.add(box_2) | _____no_output_____ | BSD-3-Clause | docs/source/emat.examples/GBNRTC/gbnrtc_fresh_boxes.ipynb | tlumip/tmip-emat |
We can check on what named boxes are in a `Boxes` object with the `plain_names` method,which just gives the names, or the `fancy_names` method, which adds some iconsto help indicate the hierarchy. | boxes.plain_names()
boxes.fancy_names() | _____no_output_____ | BSD-3-Clause | docs/source/emat.examples/GBNRTC/gbnrtc_fresh_boxes.ipynb | tlumip/tmip-emat |
Now that the boxes are linked together in a `Boxes` object, we can use the `get_chain` method to aggregatethe attributes of any box along with all parents in the chain. | boxes.get_chain('Automated Vehicles') | _____no_output_____ | BSD-3-Clause | docs/source/emat.examples/GBNRTC/gbnrtc_fresh_boxes.ipynb | tlumip/tmip-emat |
Time series analysis of O'Hare taxi ride data | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_validate, GridSearchCV
pd.set_option('display.max_rows', 6)
plt.style.use('ggplot')
plt.rcParams.update({'font.size': 16,
'axes.labelweight': 'bold',
'figure.figsize': (8,6)})
from mealprep.mealprep import find_missing_ingredients
# pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_rows', None)
import pickle
ORD_df = pd.read_csv('../data/ORD_train.csv').drop(columns=['Unnamed: 0', 'Unnamed: 0.1'])
ORD_df | _____no_output_____ | MIT | src/time_series_modelling.ipynb | jsleslie/Ohare_taxi_demand |
Tom's functions | # Custom functions
def lag_df(df, lag, cols):
return df.assign(**{f"{col}-{n}": df[col].shift(n) for n in range(1, lag + 1) for col in cols})
def ts_predict(input_data, model, n=20, responses=1):
predictions = []
n_features = input_data.size
for _ in range(n):
predictions = np.append(predictions,
model.predict(input_data.reshape(1, -1))) # make prediction
input_data = np.append(predictions[-responses:],
input_data[:n_features-responses]) # new input data
return predictions.reshape((-1, responses))
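# Usage sketch for ts_predict (hypothetical; assumes a fitted model and features
# ordered newest-lag-first, matching the recursion above):
# last_row = df_lag.drop(columns=response_col).iloc[-1].values
# forecast = ts_predict(last_row, model, n=10)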
def plot_ts(ax, df_train, df_test, predictions, xlim, response_cols):
col_cycle = plt.rcParams['axes.prop_cycle'].by_key()['color']
for i, col in enumerate(response_cols):
ax.plot(df_train[col], '-', c=col_cycle[i], label = f'Train {col}')
ax.plot(df_test[col], '--', c=col_cycle[i], label = f'Validation {col}')
ax.plot(np.arange(df_train.index[-1] + 1,
df_train.index[-1] + 1 + len(predictions)),
predictions[:,i], c=col_cycle[-i-2], label = f'Prediction {col}')
ax.set_xlim(0, xlim+1)
ax.set_title(f"Train Shape = {len(df_train)}, Validation Shape = {len(df_test)}",
fontsize=16)
ax.set_ylabel(df_train.columns[0])
def plot_forecast(ax, df_train, predictions, xlim, response_cols):
col_cycle = plt.rcParams['axes.prop_cycle'].by_key()['color']
for i, col in enumerate(response_cols):
ax.plot(df_train[col], '-', c=col_cycle[i], label = f'Train {col}')
ax.plot(np.arange(df_train.index[-1] + 1,
df_train.index[-1] + 1 + len(predictions)),
predictions[:,i], '-', c=col_cycle[-i-2], label = f'Prediction {col}')
ax.set_xlim(0, xlim+len(predictions))
ax.set_title(f"{len(predictions)}-step forecast",
fontsize=16)
ax.set_ylabel(response_cols)
def create_rolling_features(df, columns, windows=[6, 12]):
for window in windows:
df["rolling_mean_" + str(window)] = df[columns].rolling(window=window).mean()
df["rolling_std_" + str(window)] = df[columns].rolling(window=window).std()
df["rolling_var_" + str(window)] = df[columns].rolling(window=window).var()
df["rolling_min_" + str(window)] = df[columns].rolling(window=window).min()
df["rolling_max_" + str(window)] = df[columns].rolling(window=window).max()
df["rolling_min_max_ratio_" + str(window)] = df["rolling_min_" + str(window)] / df["rolling_max_" + str(window)]
df["rolling_min_max_diff_" + str(window)] = df["rolling_max_" + str(window)] - df["rolling_min_" + str(window)]
df = df.replace([np.inf, -np.inf], np.nan)
df.fillna(0, inplace=True)
return df
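# Example usage (a sketch; create_rolling_features is not applied in the cells below):
# ORD_rolled = create_rolling_features(ORD_df.copy(), 'rides', windows=[6, 12])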
lag = 3
ORD_train_lag = lag_df(ORD_df, lag=lag, cols=['seats']).dropna()
ORD_train_lag
find_missing_ingredients(ORD_train_lag)
lag = 3 # you can vary the number of lagged features in the model
n_splits = 5 # you can vary the number of train/validation splits
response_col = ['rides']
# df_lag = lag_df(df, lag, response_col).dropna()
tscv = TimeSeriesSplit(n_splits=n_splits) # define the splitter
model = RandomForestRegressor() # define the model
cv = cross_validate(model,
X = ORD_train_lag.drop(columns=response_col),
y = ORD_train_lag[response_col[0]],
scoring =('r2', 'neg_mean_squared_error'),
cv=tscv,
return_train_score=True)
# pd.DataFrame({'split': range(n_splits),
# 'train_r2': cv['train_score'],
# 'train_negrmse': cv['train_']
# 'validation_r2': cv['test_score']}).set_index('split')
pd.DataFrame(cv)
fig, ax = plt.subplots(n_splits, 1, figsize=(8,4*n_splits))
for i, (train_index, test_index) in enumerate(tscv.split(ORD_train_lag)):
df_train, df_test = ORD_train_lag.iloc[train_index], ORD_train_lag.iloc[test_index]
model = RandomForestRegressor().fit(df_train.drop(columns=response_col),
df_train[response_col[0]]) # train model
# Prediction loop
predictions = model.predict(df_test.drop(columns=response_col))[:,None]
# Plot
plot_ts(ax[i], df_train, df_test, predictions, xlim=ORD_train_lag.index[-1], response_cols=response_col)
ax[0].legend(facecolor='w')
ax[i].set_xlabel('time')
fig.tight_layout()
lag = 3 # you can vary the number of lagged features in the model
n_splits = 3 # you can vary the number of train/validation splits
response_col = ['rides']
# df_lag = lag_df(df, lag, response_col).dropna()
tscv = TimeSeriesSplit(n_splits=n_splits) # define the splitter
model = RandomForestRegressor() # define the model
param_grid = {'n_estimators': [50, 100, 150, 200],
'max_depth': [10,25,50,100, None]}
X = ORD_train_lag.drop(columns=response_col)
y = ORD_train_lag[response_col[0]]
gcv = GridSearchCV(model,
param_grid = param_grid,
# X = ORD_train_lag.drop(columns=response_col),
# y = ORD_train_lag[response_col[0]],
scoring ='neg_mean_squared_error',
cv=tscv,
return_train_score=True)
gcv.fit(X,y)
# pd.DataFrame({'split': range(n_splits),
# 'train_r2': cv['train_score'],
# 'train_negrmse': cv['train_']
# 'validation_r2': cv['test_score']}).set_index('split')
gcv.score(X,y)
filename = 'grid_search_model_1.sav'
pickle.dump(gcv, open(filename, 'wb'))
A = list(ORD_train_lag.columns)
A.remove('rides')
pd.DataFrame({'columns' : A, 'importance' : gcv.best_estimator_.feature_importances_}).sort_values('importance', ascending=False)
gcv.best_params_
pd.DataFrame(gcv.cv_results_)
gcv.best_estimator_ | _____no_output_____ | MIT | src/time_series_modelling.ipynb | jsleslie/Ohare_taxi_demand |
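With the tuned model selected, held-out predictions follow the same pattern as the cross-validation cells above (a sketch using the frames already in scope):
best_model = gcv.best_estimator_
preds = best_model.predict(ORD_train_lag.drop(columns=response_col))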
Lesson 04: Numpy
- Used for working with tensors
- Provides vectors, matrices, and tensors
- Provides mathematical functions that operate on vectors, matrices, and tensors
- Implemented in Fortran and C in the backend | import numpy as np
Making Arrays | arr = np.array([1, 2, 3])
print(arr, type(arr), arr.shape, arr.dtype, arr.ndim)
matrix = np.array(
[[1, 2, 3],
[4, 5, 6.2]]
)
print(matrix, type(matrix), matrix.shape, matrix.dtype, matrix.ndim)
a = np.zeros((10, 2))
print(a)
a = np.ones((4, 5))
print(a)
a = np.full((2, 3, 5), 6)
print(a)
a = np.eye(4)
print(a)
a = np.random.random((5, 5))
print(a) | [[0.94118745 0.22994581 0.75183424 0.3433619 0.53614551]
[0.4701853 0.68700713 0.3685086 0.19023418 0.17094098]
[0.96813951 0.00628098 0.02295652 0.9007116 0.03263926]
[0.56018717 0.13823581 0.71362452 0.57653406 0.9263221 ]
[0.22776242 0.92652569 0.04206205 0.13036483 0.10911229]]
| MIT | Labs/Lab 4/Lesson04-numpy.ipynb | cvlpieas/CIS428-ICV |
Indexing | arr = np.array([
[1, 2, 3, 4, 5],
[6, 7, 8, 9, 10],
[11, 12, 13, 14, 15]
])
print(arr) | [[ 1 2 3 4 5]
[ 6 7 8 9 10]
[11 12 13 14 15]]
| MIT | Labs/Lab 4/Lesson04-numpy.ipynb | cvlpieas/CIS428-ICV |
The indexing format is `[rows, columns]`. You can then slice each dimension as `[start:end, start:end]`. | print(arr[1:, 2:4])
a = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
print(a[ [0, 1, 2, 3], [1, 0, 2, 0] ])
print(a[0, 1], a[1, 0], a[2, 2], a[3, 0])
print(np.array([a[0, 1], a[1, 0], a[2, 2], a[3, 0]]))
b = np.array([1, 0, 2, 0])
print(a[np.arange(4), b])
a[np.arange(4), b] += 7
print(a)
a = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
bool_a = (a > 5)
print(bool_a)
print(a[bool_a])
print(a[a>7]) | [ 8 9 10 11 12]
| MIT | Labs/Lab 4/Lesson04-numpy.ipynb | cvlpieas/CIS428-ICV |
Data Types | b = np.array([1, 2, 3], dtype=np.float64)
print(b.dtype) | float64
| MIT | Labs/Lab 4/Lesson04-numpy.ipynb | cvlpieas/CIS428-ICV |
https://numpy.org/doc/stable/reference/arrays.dtypes.html
Operations | x = np.array([
[1, 2],
[3, 4]
])
y = np.array([
[5, 6],
[7, 8]
])
print(x, x.shape)
print(y, y.shape)
print(x + y)
print(np.add(x, y))
print(x - y)
print(np.subtract(x, y))
print(x * y)
print(np.multiply(x, y))
print(x / y)
print(np.divide(x, y)) | [[0.2 0.33333333]
[0.42857143 0.5 ]]
[[0.2 0.33333333]
[0.42857143 0.5 ]]
| MIT | Labs/Lab 4/Lesson04-numpy.ipynb | cvlpieas/CIS428-ICV |
Matrix Multiplication | w = np.array([2, 4])
v = np.array([4, 6])
print(x)
print(y)
print(w)
print(v) | [[1 2]
[3 4]]
[[5 6]
[7 8]]
[2 4]
[4 6]
| MIT | Labs/Lab 4/Lesson04-numpy.ipynb | cvlpieas/CIS428-ICV |
Vector-vector multiplication | print(v.dot(w))
print(np.dot(v, w)) | 32
32
| MIT | Labs/Lab 4/Lesson04-numpy.ipynb | cvlpieas/CIS428-ICV |
Matrix-vector multiplication | print(x.dot(w)) | [10 22]
| MIT | Labs/Lab 4/Lesson04-numpy.ipynb | cvlpieas/CIS428-ICV |
Matrix multiplication | print(x.dot(y))
print(np.dot(x, y)) | [[19 22]
[43 50]]
[[19 22]
[43 50]]
| MIT | Labs/Lab 4/Lesson04-numpy.ipynb | cvlpieas/CIS428-ICV |
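In modern NumPy, the `@` operator is the usual spelling for the same matrix product:
print(x @ y)   # identical to x.dot(y) and np.dot(x, y)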
Transpose | print(x)
print(x.T) | [[1 2]
[3 4]]
[[1 3]
[2 4]]
| MIT | Labs/Lab 4/Lesson04-numpy.ipynb | cvlpieas/CIS428-ICV |
http://docs.scipy.org/doc/numpy/reference/routines.array-manipulation.html Other Operations | print(x)
print(np.sum(x))
print(np.sum(x, axis=0))
print(np.sum(x, axis=1)) | [[1 2]
[3 4]]
10
[4 6]
[3 7]
| MIT | Labs/Lab 4/Lesson04-numpy.ipynb | cvlpieas/CIS428-ICV |
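Other reductions follow the same axis convention, for example:
print(np.mean(x, axis=0))   # column means: [2. 3.]
print(np.max(x, axis=1))    # row maxima: [2 4]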
More array operations are listed here:http://docs.scipy.org/doc/numpy/reference/routines.math.html Broadcasting Broadcasting allows Numpy to work with arrays of different shapes. Operations which would have required loops can now be done without them hence speeding up your program. | x = np.array([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[10, 11, 12],
[13, 14, 15],
[16, 17, 18],
])
print(x, x.shape)
y = np.array([1, 2, 3])
print(y, y.shape) | [1 2 3] (3,)
| MIT | Labs/Lab 4/Lesson04-numpy.ipynb | cvlpieas/CIS428-ICV |
Loop Approach | z = np.empty_like(x)
print(z, z.shape)
for i in range(x.shape[0]):
z[i, :] = x[i, :] + y
print(z) | [[ 2 4 6]
[ 5 7 9]
[ 8 10 12]
[11 13 15]
[14 16 18]
[17 19 21]]
| MIT | Labs/Lab 4/Lesson04-numpy.ipynb | cvlpieas/CIS428-ICV |
Tile Approach | yy = np.tile(y, (6, 1))
print(yy, yy.shape)
print(x + yy) | [[ 2 4 6]
[ 5 7 9]
[ 8 10 12]
[11 13 15]
[14 16 18]
[17 19 21]]
| MIT | Labs/Lab 4/Lesson04-numpy.ipynb | cvlpieas/CIS428-ICV |
Broadcasting Approach | print(x, x.shape)
print(y, y.shape)
print(x + y) | [[ 2 4 6]
[ 5 7 9]
[ 8 10 12]
[11 13 15]
[14 16 18]
[17 19 21]]
| MIT | Labs/Lab 4/Lesson04-numpy.ipynb | cvlpieas/CIS428-ICV |
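For contrast, shapes that do not line up on the trailing axis raise an error, which is a quick way to probe the broadcasting rule:
z = np.array([1, 2])   # shape (2,) does not align with x's trailing axis of length 3
try:
    x + z
except ValueError as e:
    print(e)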
- https://numpy.org/doc/stable/user/basics.broadcasting.html- http://scipy.github.io/old-wiki/pages/EricsBroadcastingDoc- http://docs.scipy.org/doc/numpy/reference/ufuncs.htmlavailable-ufuncs Reshape | x = np.array([
[1, 2, 3],
[4, 5, 6]
])
y = np.array([2, 2])
print(x, x.shape)
print(y, y.shape) | [[1 2 3]
[4 5 6]] (2, 3)
[2 2] (2,)
| MIT | Labs/Lab 4/Lesson04-numpy.ipynb | cvlpieas/CIS428-ICV |
Transpose Approach | xT = x.T
print(xT)
xTw = xT + y
print(xTw)
x = xTw.T
print(x) | [[3 4 5]
[6 7 8]]
| MIT | Labs/Lab 4/Lesson04-numpy.ipynb | cvlpieas/CIS428-ICV |
Transpose approach in one line | print( (x.T + y).T ) | [[3 4 5]
[6 7 8]]
| MIT | Labs/Lab 4/Lesson04-numpy.ipynb | cvlpieas/CIS428-ICV |
Reshape Approach | print(y, y.shape, y.ndim)
y = np.reshape(y, (2, 1))
print(y, y.shape, y.ndim)
print(x + y) | [[3 4 5]
[6 7 8]]
| MIT | Labs/Lab 4/Lesson04-numpy.ipynb | cvlpieas/CIS428-ICV |
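An equivalent idiom uses `np.newaxis` to add the length-1 axis instead of `np.reshape`:
y = np.array([2, 2])
print(y[:, np.newaxis].shape)   # (2, 1), same as np.reshape(y, (2, 1))
print(x + y[:, np.newaxis])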