| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def build_fcn(input_shape,
backbone,
n_classes=4):
"""Helper function to build an FCN model.
Arguments:
backbone (Model): A backbone network
such as ResNetv2 or v1
n_classes (int): Number of object classes
including background.
"""... | Helper function to build an FCN model.
Arguments:
backbone (Model): A backbone network
such as ResNetv2 or v1
n_classes (int): Number of object classes
including background.
| build_fcn | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter12-segmentation/model.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter12-segmentation/model.py | MIT |
def lr_scheduler(epoch):
"""Learning rate scheduler - called every epoch"""
lr = 1e-3
if epoch > 80:
lr *= 5e-2
elif epoch > 60:
lr *= 1e-1
elif epoch > 40:
lr *= 5e-1
print('Learning rate: ', lr)
return lr | Learning rate scheduler - called every epoch | lr_scheduler | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter12-segmentation/model_utils.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter12-segmentation/model_utils.py | MIT |
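The scheduler above is pure Python, so its breakpoints can be checked directly. A sketch restating the function without the print side effect:

```python
def lr_scheduler(epoch):
    """Piecewise-constant schedule: 1e-3, then 5e-4 after epoch 40,
    1e-4 after epoch 60, and 5e-5 after epoch 80."""
    lr = 1e-3
    if epoch > 80:
        lr *= 5e-2
    elif epoch > 60:
        lr *= 1e-1
    elif epoch > 40:
        lr *= 5e-1
    return lr

# The schedule is monotonically non-increasing across the breakpoints
rates = [lr_scheduler(e) for e in (0, 41, 61, 81)]
```

Keras calls a function with this signature once per epoch when it is wrapped in a `LearningRateScheduler` callback.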
def parser():
"""Instatiate a command line parser for ssd network model
building, training, and testing
"""
parser = argparse.ArgumentParser(description='FCN for object segmentation')
# arguments for model building and training
help_ = "Number of feature extraction layers of FCN head after backb... | Instatiate a command line parser for ssd network model
building, training, and testing
| parser | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter12-segmentation/model_utils.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter12-segmentation/model_utils.py | MIT |
def resnet_layer(inputs,
num_filters=16,
kernel_size=3,
strides=1,
activation='relu',
batch_normalization=True,
conv_first=True):
"""2D Convolution-Batch Normalization-Activation stack builder
Arguments:
... | 2D Convolution-Batch Normalization-Activation stack builder
Arguments:
inputs (tensor): Input tensor from input image or previous layer
num_filters (int): Conv2D number of filters
kernel_size (int): Conv2D square kernel dimensions
strides (int): Conv2D square stride dimensions
... | resnet_layer | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter12-segmentation/resnet.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter12-segmentation/resnet.py | MIT |
def resnet_v1(input_shape, depth, num_classes=10):
"""ResNet Version 1 Model builder [a]
Stacks of 2 x (3 x 3) Conv2D-BN-ReLU
Last ReLU is after the shortcut connection.
At the beginning of each stage, the feature map size is halved (downsampled)
by a convolutional layer with strides=2, while the n... | ResNet Version 1 Model builder [a]
Stacks of 2 x (3 x 3) Conv2D-BN-ReLU
Last ReLU is after the shortcut connection.
At the beginning of each stage, the feature map size is halved (downsampled)
by a convolutional layer with strides=2, while the number of filters is
doubled. Within each stage, the la... | resnet_v1 | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter12-segmentation/resnet.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter12-segmentation/resnet.py | MIT |
def resnet_v2(input_shape, depth, n_layers=4):
"""ResNet Version 2 Model builder [b]
    Stacks of (1 x 1)-(3 x 3)-(1 x 1) BN-ReLU-Conv2D, also known
    as the bottleneck layer
First shortcut connection per layer is 1 x 1 Conv2D.
Second and onwards shortcut connection is identity.
At the beginning of ea... | ResNet Version 2 Model builder [b]
    Stacks of (1 x 1)-(3 x 3)-(1 x 1) BN-ReLU-Conv2D, also known
    as the bottleneck layer
First shortcut connection per layer is 1 x 1 Conv2D.
Second and onwards shortcut connection is identity.
At the beginning of each stage, the feature map size is halved (downsampled)... | resnet_v2 | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter12-segmentation/resnet.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter12-segmentation/resnet.py | MIT |
def features_pyramid(x, n_layers):
"""Generate features pyramid from the output of the
last layer of a backbone network (e.g. ResNetv1 or v2)
Arguments:
x (tensor): Output feature maps of a backbone network
n_layers (int): Number of additional pyramid layers
Return:
outputs (l... | Generate features pyramid from the output of the
last layer of a backbone network (e.g. ResNetv1 or v2)
Arguments:
x (tensor): Output feature maps of a backbone network
n_layers (int): Number of additional pyramid layers
Return:
outputs (list): Features pyramid
| features_pyramid | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter12-segmentation/resnet.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter12-segmentation/resnet.py | MIT |
def build_resnet(input_shape,
n_layers=4,
version=2,
n=6):
"""Build a resnet as backbone
# Arguments:
input_shape (list): Input image size and channels
n_layers (int): Number of feature layers
version (int): Supports ResNetv1 and v2 bu... | Build a resnet as backbone
# Arguments:
input_shape (list): Input image size and channels
n_layers (int): Number of feature layers
version (int): Supports ResNetv1 and v2 but v2 by default
n (int): Determines number of ResNet layers
(Default is ResNet50)
# Ret... | build_resnet | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter12-segmentation/resnet.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter12-segmentation/resnet.py | MIT |
def __init__(self,
args,
shuffle=True,
siamese=False,
mine=False,
crop_size=4):
"""Multi-threaded data generator. Each thread reads
a batch of images and performs image transformation
such that the image... | Multi-threaded data generator. Each thread reads
a batch of images and performs image transformation
such that the image class is unaffected
Arguments:
args (argparse): User-defined options such as
batch_size, etc
shuffle (Bool): Whether to shuffl... | __init__ | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/data_generator.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/data_generator.py | MIT |
def __getitem__(self, index):
"""Image sample Indexes for the current batch
"""
start_index = index * self.args.batch_size
end_index = (index+1) * self.args.batch_size
        return self.__data_generation(start_index, end_index) | Fetch a batch of samples given the batch index
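The index arithmetic in `__getitem__` maps a batch index to the half-open slice `[index*batch_size, (index+1)*batch_size)`. A minimal stand-in (illustrative names, not the book's class) makes this concrete:

```python
import numpy as np

class BatchSlicer:
    """Minimal stand-in for a Keras Sequence-style generator:
    batch index -> slice [index*batch_size, (index+1)*batch_size)."""
    def __init__(self, data, batch_size):
        self.data = data
        self.batch_size = batch_size

    def __len__(self):
        # number of full batches per epoch
        return len(self.data) // self.batch_size

    def __getitem__(self, index):
        start_index = index * self.batch_size
        end_index = (index + 1) * self.batch_size
        return self.data[start_index:end_index]

gen = BatchSlicer(np.arange(10), batch_size=3)
```

With 10 samples and a batch size of 3, the trailing partial batch is dropped, exactly as integer division in `__len__` implies.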
| __getitem__ | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/data_generator.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/data_generator.py | MIT |
def random_crop(self, image, target_shape, crop_sizes):
"""Perform random crop, resize back to its target shape
Arguments:
image (tensor): Image to crop and resize
target_shape (tensor): Output shape
crop_sizes (list): A list of sizes the image
can b... | Perform random crop, resize back to its target shape
Arguments:
image (tensor): Image to crop and resize
target_shape (tensor): Output shape
crop_sizes (list): A list of sizes the image
can be cropped
| random_crop | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/data_generator.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/data_generator.py | MIT |
def random_rotate(self,
image,
deg=20,
target_shape=(24, 24, 1)):
"""Random image rotation
Arguments:
image (tensor): Image to crop and resize
deg (int): Degrees of rotation
target_shape (tensor): Ou... | Random image rotation
Arguments:
image (tensor): Image to crop and resize
deg (int): Degrees of rotation
target_shape (tensor): Output shape
| random_rotate | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/data_generator.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/data_generator.py | MIT |
def __data_generation(self, start_index, end_index):
"""Data generation algorithm. The method generates
        a batch of image pairs (original image X and
        transformed image Xbar). The batch of Siamese
        images is used to train MI-based algorithms:
1) IIC and 2) MINE... | Data generation algorithm. The method generates
        a batch of image pairs (original image X and
        transformed image Xbar). The batch of Siamese
        images is used to train MI-based algorithms:
1) IIC and 2) MINE (Section 7)
Arguments:
start_index (int): ... | __data_generation | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/data_generator.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/data_generator.py | MIT |
def __init__(self,
args,
backbone):
"""Contains the encoder model, the loss function,
loading of datasets, train and evaluation routines
to implement IIC unsupervised clustering via mutual
information maximization
Arguments:
... | Contains the encoder model, the loss function,
loading of datasets, train and evaluation routines
to implement IIC unsupervised clustering via mutual
information maximization
Arguments:
args : Command line arguments to indicate choice
of batch siz... | __init__ | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/iic-13.5.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/iic-13.5.1.py | MIT |
def build_model(self):
"""Build the n_heads of the IIC model
"""
inputs = Input(shape=self.train_gen.input_shape, name='x')
x = self.backbone(inputs)
x = Flatten()(x)
# number of output heads
outputs = []
for i in range(self.args.heads):
name =... | Build the n_heads of the IIC model
| build_model | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/iic-13.5.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/iic-13.5.1.py | MIT |
def mi_loss(self, y_true, y_pred):
"""Mutual information loss computed from the joint
distribution matrix and the marginals
Arguments:
y_true (tensor): Not used since this is
unsupervised learning
y_pred (tensor): stack of softmax predictions for
... | Mutual information loss computed from the joint
distribution matrix and the marginals
Arguments:
y_true (tensor): Not used since this is
unsupervised learning
y_pred (tensor): stack of softmax predictions for
the Siamese latent vectors (Z and Z... | mi_loss | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/iic-13.5.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/iic-13.5.1.py | MIT |
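The IIC objective referenced above can be sketched in numpy: build the joint distribution matrix from the paired softmax outputs, symmetrize it, and compute the mutual information of that matrix against its marginals. This is a sketch of the idea, not the book's exact TensorFlow loss; `eps` is an illustrative clamp to keep the logarithms finite.

```python
import numpy as np

def iic_mi(z, zbar, eps=1e-9):
    """IIC-style mutual information between two clustering heads.
    z, zbar: (batch, n_clusters) softmax outputs for an image and
    its transformed version. Returns MI of the joint matrix in nats."""
    p = z.T @ zbar / len(z)              # joint distribution over cluster pairs
    p = (p + p.T) / 2                    # enforce symmetry
    p = np.clip(p, eps, None)
    p /= p.sum()
    pi = p.sum(axis=1, keepdims=True)    # marginal of the first head
    pj = p.sum(axis=0, keepdims=True)    # marginal of the second head
    return float((p * (np.log(p) - np.log(pi) - np.log(pj))).sum())
```

Maximizing this quantity (the loss is its negative) pushes the two heads to agree on cluster assignments while keeping the marginals spread out.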
def train(self):
"""Train function uses the data generator,
accuracy computation, and learning rate
scheduler callbacks
"""
accuracy = AccuracyCallback(self)
lr_scheduler = LearningRateScheduler(lr_schedule,
verbose=1)
... | Train function uses the data generator,
accuracy computation, and learning rate
scheduler callbacks
| train | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/iic-13.5.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/iic-13.5.1.py | MIT |
def load_eval_dataset(self):
"""Pre-load test data for evaluation
"""
(_, _), (x_test, self.y_test) = self.args.dataset.load_data()
image_size = x_test.shape[1]
x_test = np.reshape(x_test,[-1, image_size, image_size, 1])
x_test = x_test.astype('float32') / 255
x_e... | Pre-load test data for evaluation
| load_eval_dataset | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/iic-13.5.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/iic-13.5.1.py | MIT |
def eval(self):
"""Evaluate the accuracy of the current model weights
"""
y_pred = self._model.predict(self.x_test)
print("")
# accuracy per head
for head in range(self.args.heads):
if self.args.heads == 1:
y_head = y_pred
else:
... | Evaluate the accuracy of the current model weights
| eval | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/iic-13.5.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/iic-13.5.1.py | MIT |
def sample(joint=True,
mean=[0, 0],
cov=[[1, 0.5], [0.5, 1]],
n_data=1000000):
"""Helper function to obtain samples
    from a bivariate Gaussian distribution
Arguments:
joint (Bool): If joint distribution is desired
mean (list): The mean values of the 2D Gau... | Helper function to obtain samples
    from a bivariate Gaussian distribution
Arguments:
joint (Bool): If joint distribution is desired
mean (list): The mean values of the 2D Gaussian
cov (list): The covariance matrix of the 2D Gaussian
        n_data (int): Number of samples from 2D Gaussi... | sample | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/mine-13.8.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/mine-13.8.1.py | MIT |
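A numpy sketch of such a sampler follows. Permuting one coordinate to obtain product-of-marginals samples is a common trick in MINE implementations; the book's exact code may differ.

```python
import numpy as np

def sample(joint=True, mean=(0, 0), cov=((1, 0.5), (0.5, 1)), n_data=1000):
    """Draw n_data samples from a bivariate Gaussian. For the marginal
    case, break the x-y dependence by permuting the second coordinate."""
    xy = np.random.multivariate_normal(mean, cov, size=n_data)
    if not joint:
        xy[:, 1] = np.random.permutation(xy[:, 1])  # destroys correlation
    return xy
```

Samples from the joint keep the off-diagonal correlation of `cov`; the permuted samples have (approximately) independent coordinates with the same marginals.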
def compute_mi(cov_xy=0.5, n_bins=100):
"""Analytic computation of MI using binned
2D Gaussian
Arguments:
cov_xy (list): Off-diagonal elements of covariance
matrix
n_bins (int): Number of bins to "quantize" the
continuous 2D Gaussian
"""
cov=[[1, cov_xy]... | Analytic computation of MI using binned
2D Gaussian
Arguments:
cov_xy (list): Off-diagonal elements of covariance
matrix
n_bins (int): Number of bins to "quantize" the
continuous 2D Gaussian
| compute_mi | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/mine-13.8.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/mine-13.8.1.py | MIT |
def __init__(self,
args,
input_dim=1,
hidden_units=16,
output_dim=1):
"""Learn to compute MI using MINE (Algorithm 13.7.1)
Arguments:
args : User-defined arguments such as off-diagonal
elements of covariance... | Learn to compute MI using MINE (Algorithm 13.7.1)
Arguments:
args : User-defined arguments such as off-diagonal
elements of covariance matrix, batch size,
epochs, etc
input_dim (int): Input size dimension
hidden_units (int): Number of hidden ... | __init__ | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/mine-13.8.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/mine-13.8.1.py | MIT |
def build_model(self,
input_dim,
hidden_units,
output_dim):
"""Build a simple MINE model
Arguments:
See class arguments.
"""
inputs1 = Input(shape=(input_dim), name="x")
inputs2 = Input(shape=(input_... | Build a simple MINE model
Arguments:
See class arguments.
| build_model | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/mine-13.8.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/mine-13.8.1.py | MIT |
def mi_loss(self, y_true, y_pred):
""" MINE loss function
Arguments:
y_true (tensor): Not used since this is
unsupervised learning
y_pred (tensor): stack of predictions for
joint T(x,y) and marginal T(x,y)
"""
size = self.args.batc... | MINE loss function
Arguments:
y_true (tensor): Not used since this is
unsupervised learning
y_pred (tensor): stack of predictions for
joint T(x,y) and marginal T(x,y)
| mi_loss | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/mine-13.8.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/mine-13.8.1.py | MIT |
def train(self):
"""Train MINE to estimate MI between
X and Y of a 2D Gaussian
"""
optimizer = Adam(lr=0.01)
self._model.compile(optimizer=optimizer,
loss=self.mi_loss)
plot_loss = []
cov=[[1, self.args.cov_xy], [self.args.cov_xy, ... | Train MINE to estimate MI between
X and Y of a 2D Gaussian
| train | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/mine-13.8.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/mine-13.8.1.py | MIT |
def __init__(self,
latent_dim=10,
n_classes=10):
"""A simple MLP-based linear classifier.
A linear classifier is an MLP network
without non-linear activation like ReLU.
        This can be used as a substitute for the linear
assignment algori... | A simple MLP-based linear classifier.
A linear classifier is an MLP network
without non-linear activation like ReLU.
        This can be used as a substitute for the linear
assignment algorithm.
Arguments:
latent_dim (int): Latent vector dimensionality
... | __init__ | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/mine-13.8.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/mine-13.8.1.py | MIT |
def build_model(self, latent_dim, n_classes):
"""Linear classifier model builder.
Arguments: (see class arguments)
"""
inputs = Input(shape=(latent_dim,), name="cluster")
x = Dense(256)(inputs)
outputs = Dense(n_classes,
activation='softmax',
... | Linear classifier model builder.
Arguments: (see class arguments)
| build_model | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/mine-13.8.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/mine-13.8.1.py | MIT |
def train(self, x_test, y_test):
"""Linear classifier training.
Arguments:
            x_test (tensor): Images from the test dataset
            y_test (tensor): Corresponding image labels
                from the test dataset
"""
self._model.fit(x_test,
y_test,
... | Linear classifier training.
Arguments:
            x_test (tensor): Images from the test dataset
            y_test (tensor): Corresponding image labels
                from the test dataset
| train | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/mine-13.8.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/mine-13.8.1.py | MIT |
def eval(self, x_test, y_test):
"""Linear classifier evaluation.
Arguments:
            x_test (tensor): Images from the test dataset
            y_test (tensor): Corresponding image labels
                from the test dataset
"""
        self._model.evaluate(x_test,
y_test,
... | Linear classifier evaluation.
Arguments:
            x_test (tensor): Images from the test dataset
            y_test (tensor): Corresponding image labels
                from the test dataset
| eval | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/mine-13.8.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/mine-13.8.1.py | MIT |
def __init__(self,
args,
backbone):
"""Contains the encoder, SimpleMINE, and linear
classifier models, the loss function,
loading of datasets, train and evaluation routines
to implement MINE unsupervised clustering via mutual
inf... | Contains the encoder, SimpleMINE, and linear
classifier models, the loss function,
loading of datasets, train and evaluation routines
to implement MINE unsupervised clustering via mutual
information maximization
Arguments:
args : Command line argumen... | __init__ | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/mine-13.8.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/mine-13.8.1.py | MIT |
def build_model(self):
"""Build the MINE model unsupervised classifier
"""
inputs = Input(shape=self.train_gen.input_shape,
name="x")
x = self.backbone(inputs)
x = Flatten()(x)
y = Dense(self.latent_dim,
activation='linear',
... | Build the MINE model unsupervised classifier
| build_model | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/mine-13.8.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/mine-13.8.1.py | MIT |
def mi_loss(self, y_true, y_pred):
""" MINE loss function
Arguments:
y_true (tensor): Not used since this is
unsupervised learning
y_pred (tensor): stack of predictions for
joint T(x,y) and marginal T(x,y)
"""
size = self.args.batc... | MINE loss function
Arguments:
y_true (tensor): Not used since this is
unsupervised learning
y_pred (tensor): stack of predictions for
joint T(x,y) and marginal T(x,y)
| mi_loss | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/mine-13.8.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/mine-13.8.1.py | MIT |
def train(self):
"""Train MINE to estimate MI between
        X and Y (e.g., an MNIST image and its transformed
version)
"""
accuracy = AccuracyCallback(self)
lr_scheduler = LearningRateScheduler(lr_schedule,
verbose=1)
call... | Train MINE to estimate MI between
        X and Y (e.g., an MNIST image and its transformed
version)
| train | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/mine-13.8.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/mine-13.8.1.py | MIT |
def load_eval_dataset(self):
"""Pre-load test data for evaluation
"""
(_, _), (x_test, self.y_test) = \
self.args.dataset.load_data()
image_size = x_test.shape[1]
x_test = np.reshape(x_test,
[-1, image_size, image_size, 1])
x_te... | Pre-load test data for evaluation
| load_eval_dataset | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/mine-13.8.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/mine-13.8.1.py | MIT |
def eval(self):
"""Evaluate the accuracy of the current model weights
"""
        # generate clustering predictions from test data
y_pred = self._encoder.predict(self.x_test)
# train a linear classifier
# input: clustered data
# output: ground truth labels
self._cla... | Evaluate the accuracy of the current model weights
| eval | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/mine-13.8.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/mine-13.8.1.py | MIT |
def unsupervised_labels(y, yp, n_classes, n_clusters):
"""Linear assignment algorithm
Arguments:
y (tensor): Ground truth labels
yp (tensor): Predicted clusters
n_classes (int): Number of classes
n_clusters (int): Number of clusters
"""
assert n_classes == n_clusters... | Linear assignment algorithm
Arguments:
y (tensor): Ground truth labels
yp (tensor): Predicted clusters
n_classes (int): Number of classes
n_clusters (int): Number of clusters
| unsupervised_labels | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/utils.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/utils.py | MIT |
def center_crop(image, crop_size=4):
"""Crop the image from the center
Argument:
        crop_size (int): Total number of pixels to crop
            per dimension (crop_size//2 from each side)
"""
height, width = image.shape[0], image.shape[1]
x = height - crop_size
y = width - crop_size
dx = dy = crop_size // 2
image = i... | Crop the image from the center
Argument:
    crop_size (int): Total number of pixels to crop
        per dimension (crop_size//2 from each side)
| center_crop | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/utils.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/utils.py | MIT |
def lr_schedule(epoch):
"""Simple learning rate scheduler
Argument:
epoch (int): Which epoch
"""
lr = 1e-3
power = epoch // 400
lr *= 0.8**power
return lr | Simple learning rate scheduler
Argument:
epoch (int): Which epoch
| lr_schedule | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/utils.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/utils.py | MIT |
def __init__(self, cfg, input_shape=(24, 24, 1)):
"""VGG network model creator to be used as backbone
feature extractor
Arguments:
cfg (dict): Summarizes the network configuration
input_shape (list): Input image dims
"""
self.cfg = cfg
self.in... | VGG network model creator to be used as backbone
feature extractor
Arguments:
cfg (dict): Summarizes the network configuration
input_shape (list): Input image dims
| __init__ | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/vgg.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/vgg.py | MIT |
def build_model(self):
"""Model builder uses a helper function
make_layers to read the config dict and
create a VGG network model
"""
inputs = Input(shape=self.input_shape, name='x')
x = VGG.make_layers(self.cfg, inputs)
self._model = Model(inputs, x, name... | Model builder uses a helper function
make_layers to read the config dict and
create a VGG network model
| build_model | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/vgg.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/vgg.py | MIT |
def make_layers(cfg,
inputs,
batch_norm=True,
in_channels=1):
"""Helper function to ease the creation of VGG
network model
Arguments:
cfg (dict): Summarizes the network layer
configuration
... | Helper function to ease the creation of VGG
network model
Arguments:
cfg (dict): Summarizes the network layer
configuration
inputs (tensor): Input from previous layer
batch_norm (Bool): Whether to use batch norm
between Conv2D and... | make_layers | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter13-mi-unsupervised/vgg.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter13-mi-unsupervised/vgg.py | MIT |
def lr_schedule(epoch):
"""Learning Rate Schedule
Learning rate is scheduled to be reduced after 80, 120, 160, 180 epochs.
Called automatically every epoch as part of callbacks during training.
# Arguments
epoch (int): The number of epochs
# Returns
lr (float32): learning rate
... | Learning Rate Schedule
Learning rate is scheduled to be reduced after 80, 120, 160, 180 epochs.
Called automatically every epoch as part of callbacks during training.
# Arguments
epoch (int): The number of epochs
# Returns
lr (float32): learning rate
| lr_schedule | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter2-deep-networks/densenet-cifar10-2.4.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter2-deep-networks/densenet-cifar10-2.4.1.py | MIT |
def lr_schedule(epoch):
"""Learning Rate Schedule
Learning rate is scheduled to be reduced after 80, 120, 160, 180 epochs.
Called automatically every epoch as part of callbacks during training.
# Arguments
epoch (int): The number of epochs
# Returns
lr (float32): learning rate
... | Learning Rate Schedule
Learning rate is scheduled to be reduced after 80, 120, 160, 180 epochs.
Called automatically every epoch as part of callbacks during training.
# Arguments
epoch (int): The number of epochs
# Returns
lr (float32): learning rate
| lr_schedule | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter2-deep-networks/resnet-cifar10-2.2.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter2-deep-networks/resnet-cifar10-2.2.1.py | MIT |
def resnet_layer(inputs,
num_filters=16,
kernel_size=3,
strides=1,
activation='relu',
batch_normalization=True,
conv_first=True):
"""2D Convolution-Batch Normalization-Activation stack builder
Arguments:
... | 2D Convolution-Batch Normalization-Activation stack builder
Arguments:
inputs (tensor): input tensor from input image or previous layer
num_filters (int): Conv2D number of filters
kernel_size (int): Conv2D square kernel dimensions
strides (int): Conv2D square stride dimensions
... | resnet_layer | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter2-deep-networks/resnet-cifar10-2.2.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter2-deep-networks/resnet-cifar10-2.2.1.py | MIT |
def resnet_v1(input_shape, depth, num_classes=10):
"""ResNet Version 1 Model builder [a]
Stacks of 2 x (3 x 3) Conv2D-BN-ReLU
Last ReLU is after the shortcut connection.
At the beginning of each stage, the feature map size is halved
(downsampled) by a convolutional layer with strides=2, while
... | ResNet Version 1 Model builder [a]
Stacks of 2 x (3 x 3) Conv2D-BN-ReLU
Last ReLU is after the shortcut connection.
At the beginning of each stage, the feature map size is halved
(downsampled) by a convolutional layer with strides=2, while
the number of filters is doubled. Within each stage,
... | resnet_v1 | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter2-deep-networks/resnet-cifar10-2.2.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter2-deep-networks/resnet-cifar10-2.2.1.py | MIT |
def resnet_v2(input_shape, depth, num_classes=10):
"""ResNet Version 2 Model builder [b]
    Stacks of (1 x 1)-(3 x 3)-(1 x 1) BN-ReLU-Conv2D,
    also known as the bottleneck layer.
First shortcut connection per layer is 1 x 1 Conv2D.
Second and onwards shortcut connection is identity.
At the beginning... | ResNet Version 2 Model builder [b]
    Stacks of (1 x 1)-(3 x 3)-(1 x 1) BN-ReLU-Conv2D,
    also known as the bottleneck layer.
First shortcut connection per layer is 1 x 1 Conv2D.
Second and onwards shortcut connection is identity.
At the beginning of each stage,
the feature map size is halved (downs... | resnet_v2 | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter2-deep-networks/resnet-cifar10-2.2.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter2-deep-networks/resnet-cifar10-2.2.1.py | MIT |
def plot_results(models,
data,
batch_size=32,
model_name="autoencoder_2dim"):
"""Plots 2-dim latent values as scatter plot of digits
then, plot MNIST digits as function of 2-dim latent vector
Arguments:
models (list): encoder and decoder models... | Plots 2-dim latent values as scatter plot of digits
then, plot MNIST digits as function of 2-dim latent vector
Arguments:
models (list): encoder and decoder models
data (list): test data and label
batch_size (int): prediction batch size
model_name (string): which model is us... | plot_results | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter3-autoencoders/autoencoder-2dim-mnist-3.2.2.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter3-autoencoders/autoencoder-2dim-mnist-3.2.2.py | MIT |
def build_generator(inputs, labels, image_size):
"""Build a Generator Model
Inputs are concatenated before Dense layer.
    Stack of BN-ReLU-Conv2DTranspose to generate fake images.
    Output activation is sigmoid, instead of the tanh used in the
    original DCGAN; sigmoid converges more easily.
Arguments:
inputs (Laye... | Build a Generator Model
Inputs are concatenated before Dense layer.
    Stack of BN-ReLU-Conv2DTranspose to generate fake images.
    Output activation is sigmoid, instead of the tanh used in the
    original DCGAN; sigmoid converges more easily.
Arguments:
inputs (Layer): Input layer of the generator (the z-vector)
... | build_generator | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter4-gan/cgan-mnist-4.3.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter4-gan/cgan-mnist-4.3.1.py | MIT |
def build_discriminator(inputs, labels, image_size):
"""Build a Discriminator Model
Inputs are concatenated after Dense layer.
Stack of LeakyReLU-Conv2D to discriminate real from fake.
The network does not converge with BN so it is not used here
unlike in DCGAN paper.
Arguments:
inputs... | Build a Discriminator Model
Inputs are concatenated after Dense layer.
Stack of LeakyReLU-Conv2D to discriminate real from fake.
The network does not converge with BN so it is not used here
unlike in DCGAN paper.
Arguments:
inputs (Layer): Input layer of the discriminator (the image)
... | build_discriminator | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter4-gan/cgan-mnist-4.3.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter4-gan/cgan-mnist-4.3.1.py | MIT |
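The CGAN docstrings above describe conditioning by concatenation: the label vector is joined to the generator input before the Dense layer, and to the discriminator features after it. A toy sketch of the concatenation itself (names and data are illustrative only):

```python
def concat_condition(features, one_hot_label):
    """Condition a flat feature vector on a class label by
    concatenation, as the CGAN docstrings describe."""
    return list(features) + list(one_hot_label)

x = [0.2, 0.7, 0.1]   # flattened features (toy values)
y = [0, 1, 0]         # one-hot label for class 1
conditioned = concat_condition(x, y)
```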
def train(models, data, params):
"""Train the Discriminator and Adversarial Networks
Alternately train Discriminator and Adversarial networks by batch.
Discriminator is trained first with properly labelled real and fake images.
Adversarial is trained next with fake images pretending to be real.
Dis... | Train the Discriminator and Adversarial Networks
Alternately train Discriminator and Adversarial networks by batch.
Discriminator is trained first with properly labelled real and fake images.
Adversarial is trained next with fake images pretending to be real.
Discriminator inputs are conditioned by tra... | train | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter4-gan/cgan-mnist-4.3.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter4-gan/cgan-mnist-4.3.1.py | MIT |
def plot_images(generator,
noise_input,
noise_class,
show=False,
step=0,
model_name="gan"):
"""Generate fake images and plot them
For visualization purposes, generate fake images
then plot them in a square grid
Arguments:
... | Generate fake images and plot them
For visualization purposes, generate fake images
then plot them in a square grid
Arguments:
generator (Model): The Generator Model for fake images generation
noise_input (ndarray): Array of z-vectors
show (bool): Whether to show plot or not
... | plot_images | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter4-gan/cgan-mnist-4.3.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter4-gan/cgan-mnist-4.3.1.py | MIT |
def build_generator(inputs, image_size):
"""Build a Generator Model
Stack of BN-ReLU-Conv2DTranspose to generate fake images
Output activation is sigmoid instead of tanh in [1].
Sigmoid converges easily.
Arguments:
inputs (Layer): Input layer of the generator
the z-vector)
... | Build a Generator Model
Stack of BN-ReLU-Conv2DTranspose to generate fake images
Output activation is sigmoid instead of tanh in [1].
Sigmoid converges easily.
Arguments:
inputs (Layer): Input layer of the generator
the z-vector)
image_size (tensor): Target size of one side... | build_generator | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter4-gan/dcgan-mnist-4.2.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter4-gan/dcgan-mnist-4.2.1.py | MIT |
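The docstring notes that the generator output uses sigmoid instead of the tanh of the original DCGAN. The practical difference is the output range: sigmoid maps to (0, 1), matching images normalized to [0, 1], while tanh maps to (-1, 1). A quick sketch:

```python
import math

def sigmoid(x):
    """Logistic function: squashes any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# sigmoid(0) sits at the midpoint 0.5; tanh(0) would sit at 0
mid = sigmoid(0)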
def build_discriminator(inputs):
"""Build a Discriminator Model
Stack of LeakyReLU-Conv2D to discriminate real from fake.
The network does not converge with BN so it is not used here
unlike in [1] or original paper.
Arguments:
inputs (Layer): Input layer of the discriminator (the image)
... | Build a Discriminator Model
Stack of LeakyReLU-Conv2D to discriminate real from fake.
The network does not converge with BN so it is not used here
unlike in [1] or original paper.
Arguments:
inputs (Layer): Input layer of the discriminator (the image)
Returns:
discriminator (Model... | build_discriminator | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter4-gan/dcgan-mnist-4.2.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter4-gan/dcgan-mnist-4.2.1.py | MIT |
def train(models, x_train, params):
"""Train the Discriminator and Adversarial Networks
Alternately train Discriminator and Adversarial networks by batch.
Discriminator is trained first with properly labelled real and fake images.
Adversarial is trained next with fake images pretending to be real
Generate s... | Train the Discriminator and Adversarial Networks
Alternately train Discriminator and Adversarial networks by batch.
Discriminator is trained first with properly labelled real and fake images.
Adversarial is trained next with fake images pretending to be real
Generate sample images per save_interval.
Argume... | train | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter4-gan/dcgan-mnist-4.2.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter4-gan/dcgan-mnist-4.2.1.py | MIT |
def plot_images(generator,
noise_input,
show=False,
step=0,
model_name="gan"):
"""Generate fake images and plot them
For visualization purposes, generate fake images
then plot them in a square grid
Arguments:
generator (Model): Th... | Generate fake images and plot them
For visualization purposes, generate fake images
then plot them in a square grid
Arguments:
generator (Model): The Generator Model for
fake images generation
noise_input (ndarray): Array of z-vectors
show (bool): Whether to show plot ... | plot_images | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter4-gan/dcgan-mnist-4.2.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter4-gan/dcgan-mnist-4.2.1.py | MIT |
def train(models, data, params):
"""Train the discriminator and adversarial Networks
Alternately train discriminator and adversarial
networks by batch.
Discriminator is trained first with real and fake
images and corresponding one-hot labels.
Adversarial is trained next with fake images prete... | Train the discriminator and adversarial Networks
Alternately train discriminator and adversarial
networks by batch.
Discriminator is trained first with real and fake
images and corresponding one-hot labels.
Adversarial is trained next with fake images pretending
to be real and corresponding ... | train | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter5-improved-gan/acgan-mnist-5.3.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter5-improved-gan/acgan-mnist-5.3.1.py | MIT |
def build_and_train_models():
"""Load the dataset, build ACGAN discriminator,
generator, and adversarial models.
Call the ACGAN train routine.
"""
# load MNIST dataset
(x_train, y_train), (_, _) = mnist.load_data()
# reshape data for CNN as (28, 28, 1) and normalize
image_size = x_train... | Load the dataset, build ACGAN discriminator,
generator, and adversarial models.
Call the ACGAN train routine.
| build_and_train_models | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter5-improved-gan/acgan-mnist-5.3.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter5-improved-gan/acgan-mnist-5.3.1.py | MIT |
def build_and_train_models():
"""Load the dataset, build LSGAN discriminator,
generator, and adversarial models.
Call the LSGAN train routine.
"""
# load MNIST dataset
(x_train, _), (_, _) = mnist.load_data()
# reshape data for CNN as (28, 28, 1) and normalize
image_size = x_train.shape... | Load the dataset, build LSGAN discriminator,
generator, and adversarial models.
Call the LSGAN train routine.
| build_and_train_models | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter5-improved-gan/lsgan-mnist-5.2.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter5-improved-gan/lsgan-mnist-5.2.1.py | MIT |
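The LSGAN record above is truncated, but the defining change of LSGAN (per the LSGAN paper, not shown in this snippet) is replacing binary cross-entropy with a least-squares loss. A pure-Python sketch of that loss (function name is illustrative):

```python
def ls_loss(d_outputs, target):
    """Least-squares GAN loss: mean squared distance of the
    discriminator outputs from the target value (0 or 1)."""
    return sum((d - target) ** 2 for d in d_outputs) / len(d_outputs)

# discriminator outputs on real samples are pushed toward 1,
# on fake samples toward 0
real_loss = ls_loss([0.9, 0.8], 1.0)
fake_loss = ls_loss([0.1, 0.2], 0.0)
```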
def train(models, x_train, params):
"""Train the Discriminator and Adversarial Networks
Alternately train Discriminator and Adversarial
networks by batch.
Discriminator is trained first with properly labelled
real and fake images for n_critic times.
Discriminator weights are clipped as a requir... | Train the Discriminator and Adversarial Networks
Alternately train Discriminator and Adversarial
networks by batch.
Discriminator is trained first with properly labelled
real and fake images for n_critic times.
Discriminator weights are clipped as a requirement
of Lipschitz constraint.
Gen... | train | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter5-improved-gan/wgan-mnist-5.1.2.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter5-improved-gan/wgan-mnist-5.1.2.py | MIT |
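The WGAN docstring mentions clipping discriminator weights as a (crude) way to enforce the Lipschitz constraint. The operation itself is just an elementwise clamp, sketched here in pure Python (names and clip value are illustrative):

```python
def clip_weights(weights, clip_value=0.01):
    """Clamp each weight into [-clip_value, clip_value], the
    Lipschitz-constraint enforcement the WGAN docstring mentions."""
    return [max(-clip_value, min(clip_value, w)) for w in weights]

clipped = clip_weights([-0.5, 0.004, 0.3])
```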
def build_and_train_models():
"""Load the dataset, build WGAN discriminator,
generator, and adversarial models.
Call the WGAN train routine.
"""
# load MNIST dataset
(x_train, _), (_, _) = mnist.load_data()
# reshape data for CNN as (28, 28, 1) and normalize
image_size = x_train.shape[1... | Load the dataset, build WGAN discriminator,
generator, and adversarial models.
Call the WGAN train routine.
| build_and_train_models | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter5-improved-gan/wgan-mnist-5.1.2.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter5-improved-gan/wgan-mnist-5.1.2.py | MIT |
def train(models, data, params):
"""Train the Discriminator and Adversarial networks
Alternately train discriminator and adversarial networks by batch.
Discriminator is trained first with real and fake images,
corresponding one-hot labels and continuous codes.
Adversarial is trained next with fake ... | Train the Discriminator and Adversarial networks
Alternately train discriminator and adversarial networks by batch.
Discriminator is trained first with real and fake images,
corresponding one-hot labels and continuous codes.
Adversarial is trained next with fake images pretending
to be real, corre... | train | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter6-disentangled-gan/infogan-mnist-6.1.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter6-disentangled-gan/infogan-mnist-6.1.1.py | MIT |
def mi_loss(c, q_of_c_given_x):
""" Mutual information, Equation 5 in [2],
assuming H(c) is constant
"""
# mi_loss = -c * log(Q(c|x))
return -K.mean(K.sum(c * K.log(q_of_c_given_x + K.epsilon()),
axis=1)) | Mutual information, Equation 5 in [2],
assuming H(c) is constant
| mi_loss | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter6-disentangled-gan/infogan-mnist-6.1.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter6-disentangled-gan/infogan-mnist-6.1.1.py | MIT |
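The Keras expression above computes `-mean(sum(c * log(Q(c|x))))`, the mutual-information term with H(c) treated as constant. The same quantity for a single sample, in pure Python for clarity (the epsilon guards against log(0), mirroring `K.epsilon()`):

```python
import math

def mi_loss(c, q_of_c_given_x, eps=1e-7):
    """-sum(c * log(Q(c|x))) for one sample: low when Q puts its
    mass on the slot selected by the one-hot code c."""
    return -sum(ci * math.log(qi + eps)
                for ci, qi in zip(c, q_of_c_given_x))

good = mi_loss([0, 1, 0], [0.05, 0.9, 0.05])  # Q agrees with c
bad = mi_loss([0, 1, 0], [0.9, 0.05, 0.05])   # Q disagrees
```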
def build_and_train_models(latent_size=100):
"""Load the dataset, build InfoGAN discriminator,
generator, and adversarial models.
Call the InfoGAN train routine.
"""
# load MNIST dataset
(x_train, y_train), (_, _) = mnist.load_data()
# reshape data for CNN as (28, 28, 1) and normalize
i... | Load the dataset, build InfoGAN discriminator,
generator, and adversarial models.
Call the InfoGAN train routine.
| build_and_train_models | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter6-disentangled-gan/infogan-mnist-6.1.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter6-disentangled-gan/infogan-mnist-6.1.1.py | MIT |
def build_encoder(inputs, num_labels=10, feature1_dim=256):
""" Build the Classifier (Encoder) Model sub networks
Two sub networks:
1) Encoder0: Image to feature1 (intermediate latent feature)
2) Encoder1: feature1 to labels
# Arguments
inputs (Layers): x - images, feature1 -
... | Build the Classifier (Encoder) Model sub networks
Two sub networks:
1) Encoder0: Image to feature1 (intermediate latent feature)
2) Encoder1: feature1 to labels
# Arguments
inputs (Layers): x - images, feature1 -
feature1 layer output
num_labels (int): number of class la... | build_encoder | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter6-disentangled-gan/stackedgan-mnist-6.2.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter6-disentangled-gan/stackedgan-mnist-6.2.1.py | MIT |
def build_generator(latent_codes, image_size, feature1_dim=256):
"""Build Generator Model sub networks
Two sub networks: 1) Class and noise to feature1
(intermediate feature)
2) feature1 to image
# Arguments
latent_codes (Layers): discrete code (labels),
noise and featu... | Build Generator Model sub networks
Two sub networks: 1) Class and noise to feature1
(intermediate feature)
2) feature1 to image
# Arguments
latent_codes (Layers): discrete code (labels),
noise and feature1 features
image_size (int): Target size of one side
... | build_generator | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter6-disentangled-gan/stackedgan-mnist-6.2.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter6-disentangled-gan/stackedgan-mnist-6.2.1.py | MIT |
def build_discriminator(inputs, z_dim=50):
"""Build Discriminator 1 Model
Classifies feature1 (features) as real/fake image and recovers
the input noise or latent code (by minimizing entropy loss)
# Arguments
inputs (Layer): feature1
z_dim (int): noise dimensionality
# Returns
... | Build Discriminator 1 Model
Classifies feature1 (features) as real/fake image and recovers
the input noise or latent code (by minimizing entropy loss)
# Arguments
inputs (Layer): feature1
z_dim (int): noise dimensionality
# Returns
dis1 (Model): feature1 as real/fake and recov... | build_discriminator | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter6-disentangled-gan/stackedgan-mnist-6.2.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter6-disentangled-gan/stackedgan-mnist-6.2.1.py | MIT |
def train(models, data, params):
"""Train the discriminator and adversarial Networks
Alternately train discriminator and adversarial networks by batch.
Discriminator is trained first with real and fake images,
corresponding one-hot labels and latent codes.
Adversarial is trained next with fake imag... | Train the discriminator and adversarial Networks
Alternately train discriminator and adversarial networks by batch.
Discriminator is trained first with real and fake images,
corresponding one-hot labels and latent codes.
Adversarial is trained next with fake images pretending
to be real, correspond... | train | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter6-disentangled-gan/stackedgan-mnist-6.2.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter6-disentangled-gan/stackedgan-mnist-6.2.1.py | MIT |
def plot_images(generators,
noise_params,
show=False,
step=0,
model_name="gan"):
"""Generate fake images and plot them
For visualization purposes, generate fake images
then plot them in a square grid
# Arguments
generators (Models... | Generate fake images and plot them
For visualization purposes, generate fake images
then plot them in a square grid
# Arguments
generators (Models): gen0 and gen1 models for
fake images generation
noise_params (list): noise parameters
(label, z0 and z1 codes)
... | plot_images | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter6-disentangled-gan/stackedgan-mnist-6.2.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter6-disentangled-gan/stackedgan-mnist-6.2.1.py | MIT |
def train_encoder(model,
data,
model_name="stackedgan_mnist",
batch_size=64):
""" Train the Encoder Model (enc0 and enc1)
# Arguments
model (Model): Encoder
data (tensor): Train and test data
model_name (string): model name
... | Train the Encoder Model (enc0 and enc1)
# Arguments
model (Model): Encoder
data (tensor): Train and test data
model_name (string): model name
batch_size (int): Train batch size
| train_encoder | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter6-disentangled-gan/stackedgan-mnist-6.2.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter6-disentangled-gan/stackedgan-mnist-6.2.1.py | MIT |
def build_and_train_models():
"""Load the dataset, build StackedGAN discriminator,
generator, and adversarial models.
Call the StackedGAN train routine.
"""
# load MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# reshape and normalize images
image_size = x_train.... | Load the dataset, build StackedGAN discriminator,
generator, and adversarial models.
Call the StackedGAN train routine.
| build_and_train_models | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter6-disentangled-gan/stackedgan-mnist-6.2.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter6-disentangled-gan/stackedgan-mnist-6.2.1.py | MIT |
def encoder_layer(inputs,
filters=16,
kernel_size=3,
strides=2,
activation='relu',
instance_norm=True):
"""Builds a generic encoder layer made of Conv2D-IN-LeakyReLU
IN is optional, LeakyReLU may be replaced by ReLU
"... | Builds a generic encoder layer made of Conv2D-IN-LeakyReLU
IN is optional, LeakyReLU may be replaced by ReLU
| encoder_layer | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter7-cross-domain-gan/cyclegan-7.1.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter7-cross-domain-gan/cyclegan-7.1.1.py | MIT |
def decoder_layer(inputs,
paired_inputs,
filters=16,
kernel_size=3,
strides=2,
activation='relu',
instance_norm=True):
"""Builds a generic decoder layer made of Conv2D-IN-LeakyReLU
IN is optional, LeakyRe... | Builds a generic decoder layer made of Conv2D-IN-LeakyReLU
IN is optional, LeakyReLU may be replaced by ReLU
Arguments: (partial)
inputs (tensor): the decoder layer input
paired_inputs (tensor): the encoder layer output
provided by U-Net skip connection &
concatenated to inputs.
... | decoder_layer | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter7-cross-domain-gan/cyclegan-7.1.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter7-cross-domain-gan/cyclegan-7.1.1.py | MIT |
def build_generator(input_shape,
output_shape=None,
kernel_size=3,
name=None):
"""The generator is a U-Network made of a 4-layer encoder
and a 4-layer decoder. Layer n-i is connected to layer i.
Arguments:
input_shape (tuple): input shape
... | The generator is a U-Network made of a 4-layer encoder
and a 4-layer decoder. Layer n-i is connected to layer i.
Arguments:
input_shape (tuple): input shape
output_shape (tuple): output shape
kernel_size (int): kernel size of encoder & decoder layers
name (string): name assigned to generator mo... | build_generator | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter7-cross-domain-gan/cyclegan-7.1.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter7-cross-domain-gan/cyclegan-7.1.1.py | MIT |
def build_discriminator(input_shape,
kernel_size=3,
patchgan=True,
name=None):
"""The discriminator is a 4-layer encoder that outputs either
a 1-dim or a n x n-dim patch of probability that input is real
Arguments:
input_shape (tu... | The discriminator is a 4-layer encoder that outputs either
a 1-dim or a n x n-dim patch of probability that input is real
Arguments:
input_shape (tuple): input shape
kernel_size (int): kernel size of decoder layers
patchgan (bool): whether the output is a patch
or just a 1-dim
name (s... | build_discriminator | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter7-cross-domain-gan/cyclegan-7.1.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter7-cross-domain-gan/cyclegan-7.1.1.py | MIT |
def train_cyclegan(models,
data,
params,
test_params,
test_generator):
""" Trains the CycleGAN.
1) Train the target discriminator
2) Train the source discriminator
3) Train the forward and backward cycles of
adver... | Trains the CycleGAN.
1) Train the target discriminator
2) Train the source discriminator
3) Train the forward and backward cycles of
adversarial networks
Arguments:
models (Models): Source/Target Discriminator/Generator,
Adversarial Model
data (tuple): source and target t... | train_cyclegan | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter7-cross-domain-gan/cyclegan-7.1.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter7-cross-domain-gan/cyclegan-7.1.1.py | MIT |
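The forward and backward cycles mentioned in the docstring are held together by the cycle-consistency loss: translating to the other domain and back should reproduce the input. A scalar toy sketch with perfectly inverse "generators" (all names are illustrative):

```python
def cycle_consistency_loss(x, g_forward, g_backward):
    """Mean absolute error between x and g_backward(g_forward(x)),
    the forward-cycle term of the CycleGAN objective."""
    reconstructed = [g_backward(g_forward(v)) for v in x]
    return sum(abs(a - b) for a, b in zip(x, reconstructed)) / len(x)

# with exactly inverse toy generators the cycle loss is zero
forward = lambda v: v + 1.0
backward = lambda v: v - 1.0
loss = cycle_consistency_loss([0.0, 0.5, 1.0], forward, backward)
```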
def build_cyclegan(shapes,
source_name='source',
target_name='target',
kernel_size=3,
patchgan=False,
identity=False
):
"""Build the CycleGAN
1) Build target and source discriminators
2) Build ... | Build the CycleGAN
1) Build target and source discriminators
2) Build target and source generators
3) Build the adversarial network
Arguments:
shapes (tuple): source and target shapes
source_name (string): string to be appended on dis/gen models
target_name (string): string to be appended ... | build_cyclegan | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter7-cross-domain-gan/cyclegan-7.1.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter7-cross-domain-gan/cyclegan-7.1.1.py | MIT |
def graycifar10_cross_colorcifar10(g_models=None):
"""Build and train a CycleGAN that can do
grayscale <--> color cifar10 images
"""
model_name = 'cyclegan_cifar10'
batch_size = 32
train_steps = 100000
patchgan = True
kernel_size = 3
postfix = ('%dp' % kernel_size) \
... | Build and train a CycleGAN that can do
grayscale <--> color cifar10 images
| graycifar10_cross_colorcifar10 | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter7-cross-domain-gan/cyclegan-7.1.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter7-cross-domain-gan/cyclegan-7.1.1.py | MIT |
def mnist_cross_svhn(g_models=None):
"""Build and train a CycleGAN that can do mnist <--> svhn
"""
model_name = 'cyclegan_mnist_svhn'
batch_size = 32
train_steps = 100000
patchgan = True
kernel_size = 5
postfix = ('%dp' % kernel_size) \
if patchgan else ('%d' % kernel_size)
... | Build and train a CycleGAN that can do mnist <--> svhn
| mnist_cross_svhn | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter7-cross-domain-gan/cyclegan-7.1.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter7-cross-domain-gan/cyclegan-7.1.1.py | MIT |
def display_images(imgs,
filename,
title='',
imgs_dir=None,
show=False):
"""Display images in an nxn grid
Arguments:
imgs (tensor): array of images
filename (string): filename to save the displayed image
title (string): tit... | Display images in an nxn grid
Arguments:
imgs (tensor): array of images
filename (string): filename to save the displayed image
title (string): title on the displayed image
imgs_dir (string): directory where to save the files
show (bool): whether to display the image or not
(False du... | display_images | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter7-cross-domain-gan/other_utils.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter7-cross-domain-gan/other_utils.py | MIT |
def test_generator(generators,
test_data,
step,
titles,
dirs,
todisplay=100,
show=False):
"""Test the generator models
Arguments:
generators (tuple): source and target generators
test_data ... | Test the generator models
Arguments:
generators (tuple): source and target generators
test_data (tuple): source and target test data
step (int): step number during training (0 during testing)
titles (tuple): titles on the displayed image
dirs (tuple): folders to save the outputs of testings
... | test_generator | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter7-cross-domain-gan/other_utils.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter7-cross-domain-gan/other_utils.py | MIT |
def load_data(data, titles, filenames, todisplay=100):
"""Generic loaded data transformation
Arguments:
data (tuple): source, target, test source, test target data
titles (tuple): titles of the test and source images to display
filenames (tuple): filenames of the test and source images to
di... | Generic loaded data transformation
Arguments:
data (tuple): source, target, test source, test target data
titles (tuple): titles of the test and source images to display
filenames (tuple): filenames of the test and source images to
display
todisplay (int): number of images to display (must b... | load_data | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter7-cross-domain-gan/other_utils.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter7-cross-domain-gan/other_utils.py | MIT |
def sampling(args):
"""Implements reparameterization trick by sampling
from a gaussian with zero mean and std=1.
Arguments:
args (tensor): mean and log of variance of Q(z|X)
Returns:
sampled latent vector (tensor)
"""
z_mean, z_log_var = args
batch = K.shape(z_mean)[0]
... | Implements reparameterization trick by sampling
from a gaussian with zero mean and std=1.
Arguments:
args (tensor): mean and log of variance of Q(z|X)
Returns:
sampled latent vector (tensor)
| sampling | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter8-vae/cvae-cnn-mnist-8.2.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter8-vae/cvae-cnn-mnist-8.2.1.py | MIT |
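The `sampling` function above implements the reparameterization trick: instead of sampling z directly, draw eps ~ N(0, 1) and compute z = mu + exp(0.5 * log_var) * eps, which keeps the path to mu and log_var differentiable. The same computation for a single scalar latent, in pure Python (function name is illustrative):

```python
import math
import random

def sample_z(z_mean, z_log_var, epsilon=None):
    """z = mu + sigma * eps with sigma = exp(0.5 * log_var)."""
    if epsilon is None:
        epsilon = random.gauss(0.0, 1.0)
    return z_mean + math.exp(0.5 * z_log_var) * epsilon

# with log_var = 0 the std is 1, so eps = 1 shifts the mean by 1
z = sample_z(2.0, 0.0, epsilon=1.0)
```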
def plot_results(models,
data,
y_label,
batch_size=128,
model_name="cvae_mnist"):
"""Plots 2-dim mean values of Q(z|X) using labels
as color gradient then, plot MNIST digits as
function of 2-dim latent vector
Arguments:
... | Plots 2-dim mean values of Q(z|X) using labels
as color gradient then, plot MNIST digits as
function of 2-dim latent vector
Arguments:
models (list): encoder and decoder models
data (list): test data and label
y_label (array): one-hot vector of which digit to plot
... | plot_results | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter8-vae/cvae-cnn-mnist-8.2.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter8-vae/cvae-cnn-mnist-8.2.1.py | MIT |
def sampling(args):
"""Reparameterization trick by sampling
from an isotropic unit Gaussian.
# Arguments:
args (tensor): mean and log of variance of Q(z|X)
# Returns:
z (tensor): sampled latent vector
"""
z_mean, z_log_var = args
batch = K.shape(z_mean)[0]
dim = K.i... | Reparameterization trick by sampling
from an isotropic unit Gaussian.
# Arguments:
args (tensor): mean and log of variance of Q(z|X)
# Returns:
z (tensor): sampled latent vector
| sampling | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter8-vae/vae-cnn-mnist-8.1.2.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter8-vae/vae-cnn-mnist-8.1.2.py | MIT |
def plot_results(models,
data,
batch_size=128,
model_name="vae_mnist"):
"""Plots labels and MNIST digits as function
of 2-dim latent vector
# Arguments:
models (tuple): encoder and decoder models
data (tuple): test data and label
... | Plots labels and MNIST digits as function
of 2-dim latent vector
# Arguments:
models (tuple): encoder and decoder models
data (tuple): test data and label
batch_size (int): prediction batch size
model_name (string): which model is using this function
| plot_results | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter8-vae/vae-cnn-mnist-8.1.2.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter8-vae/vae-cnn-mnist-8.1.2.py | MIT |
def sampling(args):
"""Reparameterization trick by sampling
from an isotropic unit Gaussian.
# Arguments:
args (tensor): mean and log of variance of Q(z|X)
# Returns:
z (tensor): sampled latent vector
"""
z_mean, z_log_var = args
# K is the keras backend
batch = K.s... | Reparameterization trick by sampling
from an isotropic unit Gaussian.
# Arguments:
args (tensor): mean and log of variance of Q(z|X)
# Returns:
z (tensor): sampled latent vector
| sampling | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter8-vae/vae-mlp-mnist-8.1.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter8-vae/vae-mlp-mnist-8.1.1.py | MIT |
def plot_results(models,
data,
batch_size=128,
model_name="vae_mnist"):
"""Plots labels and MNIST digits as function
of 2-dim latent vector
# Arguments:
models (tuple): encoder and decoder models
data (tuple): test data and label
... | Plots labels and MNIST digits as function
of 2-dim latent vector
# Arguments:
models (tuple): encoder and decoder models
data (tuple): test data and label
batch_size (int): prediction batch size
model_name (string): which model is using this function
| plot_results | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter8-vae/vae-mlp-mnist-8.1.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter8-vae/vae-mlp-mnist-8.1.1.py | MIT |
def __init__(self,
state_space,
action_space,
episodes=500):
"""DQN Agent on CartPole-v0 environment
Arguments:
state_space (tensor): state space
action_space (tensor): action space
episodes (int): number of episod... | DQN Agent on CartPole-v0 environment
Arguments:
state_space (tensor): state space
action_space (tensor): action space
episodes (int): number of episodes to train
| __init__ | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter9-drl/dqn-cartpole-9.6.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter9-drl/dqn-cartpole-9.6.1.py | MIT |
def build_model(self, n_inputs, n_outputs):
"""Q Network is 256-256-256 MLP
Arguments:
n_inputs (int): input dim
n_outputs (int): output dim
Return:
q_model (Model): DQN
"""
inputs = Input(shape=(n_inputs, ), name='state')
x = Dense(2... | Q Network is 256-256-256 MLP
Arguments:
n_inputs (int): input dim
n_outputs (int): output dim
Return:
q_model (Model): DQN
| build_model | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter9-drl/dqn-cartpole-9.6.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter9-drl/dqn-cartpole-9.6.1.py | MIT |
def act(self, state):
"""eps-greedy policy
Return:
action (tensor): action to execute
"""
if np.random.rand() < self.epsilon:
# explore - do random action
return self.action_space.sample()
# exploit
q_values = self.q_model.predict(stat... | eps-greedy policy
Return:
action (tensor): action to execute
| act | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter9-drl/dqn-cartpole-9.6.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter9-drl/dqn-cartpole-9.6.1.py | MIT |
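The `act` method above is an eps-greedy policy: with probability epsilon take a random action (explore), otherwise take the argmax of the Q-values (exploit). A self-contained sketch without the gym/Keras dependencies (names are illustrative):

```python
import random

def eps_greedy(q_values, epsilon, rng=random):
    """Random action with probability epsilon, else the greedy one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))       # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit

action = eps_greedy([0.1, 0.9, 0.3], epsilon=0.0)  # pure exploitation
```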
def get_target_q_value(self, next_state, reward):
"""compute Q_max
Use of target Q Network solves the
non-stationarity problem
Arguments:
reward (float): reward received after executing
action on state
next_state (tensor): next state
... | compute Q_max
Use of target Q Network solves the
non-stationarity problem
Arguments:
reward (float): reward received after executing
action on state
next_state (tensor): next state
Return:
q_value (float): max Q-value computed
... | get_target_q_value | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter9-drl/dqn-cartpole-9.6.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter9-drl/dqn-cartpole-9.6.1.py | MIT |
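The Q_max target the docstring describes is the standard one-step bootstrapped value: the reward alone on terminal steps, otherwise reward plus the discounted maximum Q-value of the next state (taken from the target network to avoid non-stationarity). A pure-Python sketch (names and defaults are illustrative):

```python
def target_q_value(reward, next_q_values, gamma=0.99, done=False):
    """Q_max target: r if terminal, else r + gamma * max_a' Q(s', a')."""
    if done:
        return reward
    return reward + gamma * max(next_q_values)

target = target_q_value(1.0, [0.0, 2.0], gamma=0.5)
```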
def replay(self, batch_size):
"""experience replay addresses the correlation issue
between samples
Arguments:
batch_size (int): replay buffer batch
sample size
"""
# sars = state, action, reward, state' (next_state)
sars_batch = random.sa... | experience replay addresses the correlation issue
between samples
Arguments:
batch_size (int): replay buffer batch
sample size
| replay | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter9-drl/dqn-cartpole-9.6.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter9-drl/dqn-cartpole-9.6.1.py | MIT |
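The `replay` entry's truncated `random.sa...` is presumably drawing a minibatch from a buffer of `(state, action, reward, next_state, done)` tuples; uniform sampling breaks the temporal correlation between consecutive transitions. A minimal sketch of such a buffer, with the capacity and helper names (`remember`, `sample_batch`) chosen for illustration:

```python
import random
from collections import deque

# Bounded replay buffer: old transitions fall off the left end.
memory = deque(maxlen=10_000)

def remember(state, action, reward, next_state, done):
    """Store one sars' transition."""
    memory.append((state, action, reward, next_state, done))

def sample_batch(batch_size):
    """Uniformly sample a decorrelated minibatch of transitions."""
    return random.sample(memory, batch_size)

# Fill with dummy transitions, then draw a minibatch.
for t in range(100):
    remember(t, t % 2, 1.0, t + 1, False)
batch = sample_batch(8)
```

`deque(maxlen=...)` gives the fixed-size FIFO behavior for free, and `random.sample` draws without replacement within one minibatch.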
def get_target_q_value(self, next_state, reward):
"""compute Q_max
Use of target Q Network solves the
non-stationarity problem
Arguments:
reward (float): reward received after executing
action on state
next_state (tensor): next state
... | compute Q_max
Use of target Q Network solves the
non-stationarity problem
Arguments:
reward (float): reward received after executing
action on state
next_state (tensor): next state
Returns:
q_value (float): max Q-value computed
... | get_target_q_value | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter9-drl/dqn-cartpole-9.6.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter9-drl/dqn-cartpole-9.6.1.py | MIT |
def __init__(self,
observation_space,
action_space,
demo=False,
slippery=False,
episodes=40000):
"""Q-Learning agent on FrozenLake-v0 environment
Arguments:
observation_space (tensor): state space
... | Q-Learning agent on FrozenLake-v0 environment
Arguments:
observation_space (tensor): state space
action_space (tensor): action space
demo (Bool): whether for demo or training
slippery (Bool): selects one of the 2 versions of the FrozenLake-v0 env

episodes (int): number of episodes t... | __init__ | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter9-drl/q-frozenlake-9.5.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter9-drl/q-frozenlake-9.5.1.py | MIT |
def act(self, state, is_explore=False):
"""determine the next action
if random, choose from random action space
else use the Q Table
Arguments:
state (tensor): agent's current state
is_explore (Bool): exploration mode or not
Return:
act... | determine the next action
if random, choose from random action space
else use the Q Table
Arguments:
state (tensor): agent's current state
is_explore (Bool): exploration mode or not
Return:
action (tensor): action that the agent
... | act | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter9-drl/q-frozenlake-9.5.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter9-drl/q-frozenlake-9.5.1.py | MIT |

def update_q_table(self, state, action, reward, next_state):
"""TD(0) learning (generalized Q-Learning) with learning rate
Arguments:
state (tensor): environment state
action (tensor): action executed by the agent for
the given state
reward (float): re... | TD(0) learning (generalized Q-Learning) with learning rate
Arguments:
state (tensor): environment state
action (tensor): action executed by the agent for
the given state
reward (float): reward received by the agent for
executing the action
... | update_q_table | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter9-drl/q-frozenlake-9.5.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter9-drl/q-frozenlake-9.5.1.py | MIT |
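The TD(0) rule that this `update_q_table` docstring describes moves Q(s, a) a fraction of the way toward the bootstrapped target: Q(s, a) += lr * (r + gamma * max_a' Q(s', a') - Q(s, a)). A minimal tabular sketch; the table shape matches FrozenLake-v0 (16 states, 4 actions), while the learning rate and discount values are assumptions:

```python
import numpy as np

n_states, n_actions = 16, 4       # FrozenLake-v0 sizes
learning_rate, gamma = 0.1, 0.9   # assumed hyperparameters
q_table = np.zeros((n_states, n_actions))

def update_q_table(state, action, reward, next_state):
    """TD(0): Q(s,a) <- Q(s,a) + lr * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + gamma * np.max(q_table[next_state])
    td_error = td_target - q_table[state, action]
    q_table[state, action] += learning_rate * td_error

# One update after reaching the goal (state 15) from state 14.
update_q_table(state=14, action=2, reward=1.0, next_state=15)
```

With lr < 1 each update blends the old estimate with the new target, which is what makes tabular Q-Learning converge on the stochastic ("slippery") version of the environment.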
def __init__(self):
"""Simulated deterministic world made of 6 states.
Q-Learning by Bellman Equation.
"""
# 4 actions
# 0 - Left, 1 - Down, 2 - Right, 3 - Up
self.col = 4
# 6 states
self.row = 6
# setup the environment
self.q_table = np... | Simulated deterministic world made of 6 states.
Q-Learning by Bellman Equation.
| __init__ | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter9-drl/q-learning-9.3.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter9-drl/q-learning-9.3.1.py | MIT |
def step(self, action):
"""execute the action on the environment
Argument:
action (tensor): An action in Action space
Returns:
next_state (tensor): next env state
reward (float): reward received by the agent
done (Bool): whether the terminal state ... | execute the action on the environment
Argument:
action (tensor): An action in Action space
Returns:
next_state (tensor): next env state
reward (float): reward received by the agent
done (Bool): whether the terminal state
is reached
... | step | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter9-drl/q-learning-9.3.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter9-drl/q-learning-9.3.1.py | MIT |
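The `step` contract documented here, action in, `(next_state, reward, done)` out, mirrors the Gym API. The book's simulated world is a small grid of 6 states; the sketch below substitutes a hypothetical 1-D chain of 6 states with a goal at the end, so the transitions and the 100-point terminal reward are illustrative assumptions, while the return signature matches the docstring:

```python
# Hypothetical deterministic chain of 6 states; the goal is the last state.
N_STATES = 6
GOAL = N_STATES - 1

def step(state, action):
    """Execute an action (0=Left, 2=Right); return (next_state, reward, done)."""
    if action == 2:                        # Right
        next_state = min(state + 1, GOAL)
    elif action == 0:                      # Left
        next_state = max(state - 1, 0)
    else:                                  # Up/Down: no-ops in this 1-D sketch
        next_state = state
    reward = 100.0 if next_state == GOAL else 0.0
    done = next_state == GOAL
    return next_state, reward, done

next_state, reward, done = step(4, 2)      # one step away from the goal
```

Because the world is deterministic, the same `(state, action)` pair always yields the same transition, which is what lets the plain Bellman update (below in the repo's `q-learning-9.3.1.py`) work without a learning rate.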
def act(self):
"""determine the next action
        either from the Q Table (exploitation) or
        at random (exploration)
Return:
action (tensor): action that the agent
must execute
"""
# 0 - Left, 1 - Down, 2 - Right, 3 - Up
# action is from explorat... | determine the next action
 either from the Q Table (exploitation) or
 random (exploration)
Return:
action (tensor): action that the agent
must execute
| act | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter9-drl/q-learning-9.3.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter9-drl/q-learning-9.3.1.py | MIT |
def update_q_table(self, state, action, reward, next_state):
"""Q-Learning - update the Q Table using Q(s, a)
Arguments:
state (tensor) : agent state
action (tensor): action executed by the agent
reward (float): reward after executing action
for a giv... | Q-Learning - update the Q Table using Q(s, a)
Arguments:
state (tensor) : agent state
action (tensor): action executed by the agent
reward (float): reward after executing action
for a given state
next_state (tensor): next state after executing
... | update_q_table | python | PacktPublishing/Advanced-Deep-Learning-with-Keras | chapter9-drl/q-learning-9.3.1.py | https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras/blob/master/chapter9-drl/q-learning-9.3.1.py | MIT |
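This second `update_q_table` variant uses the plain Bellman equation with an implicit learning rate of 1, appropriate because the 6-state world is deterministic: Q(s, a) = r + gamma * max_a' Q(s', a'). A minimal sketch with the 6x4 table from the docstring; gamma = 0.9 and the reward values are assumptions for illustration:

```python
import numpy as np

gamma = 0.9
q_table = np.zeros((6, 4))  # 6 states x 4 actions, as in the simulated world

def update_q_table(state, action, reward, next_state):
    """Plain Bellman update (lr = 1): Q(s,a) = r + gamma * max_a' Q(s',a')."""
    q_table[state, action] = reward + gamma * np.max(q_table[next_state])

# Reaching the goal from state 4 writes the terminal reward directly...
update_q_table(state=4, action=2, reward=100.0, next_state=5)
# ...and the value then propagates one step back, discounted by gamma.
update_q_table(state=3, action=2, reward=0.0, next_state=4)
```

Repeated episodes propagate the discounted goal reward backward through the table, so states closer to the goal end up with higher Q-values, exactly the gradient the greedy policy follows.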