markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
Custom constructor | class OtroSaludo(m:String,nombre:String){ //Every attribute that will be used must be declared here
def this()={
this("Hola","Pepe") //Siempre se debe llamar al constructor por defecto
}
def this(mensaje:String){
this("Hola","Jose")
}
def saludar()={
println(... | _____no_output_____ | MIT | Scala-basics.ipynb | FranciscoJavierMartin/Notebooks |
Inheritance | class Punto(var x:Int,var y:Int){
def mover(dx:Int,dy:Int):Unit={ //Moves the point by the given offsets
this.x+=dx
this.y+=dy
}
}
class Particula(x:Int,y:Int,masa:Int) extends Punto(x,y){
override def toString():String={ //Add override to redefine a method from the parent class
return s"X:${this.x} ... | _____no_output_____ | MIT | Scala-basics.ipynb | FranciscoJavierMartin/Notebooks |
Abstract classes | abstract class Figura(lado:Int){
def getPerimetro():Double; //Method without an implementation
def printLado():Unit= println("El lado mide "+this.lado) //Implemented method
}
class Cuadrado(lado:Int,n:Int) extends Figura(lado){
override def getPerimetro():Double={
return 4*lado; //The perimeter of a square is four times its side (lado*lado would be the area)
... | _____no_output_____ | MIT | Scala-basics.ipynb | FranciscoJavierMartin/Notebooks |
Traits

Traits are similar to interfaces in other programming languages. However, they differ from interfaces in two main ways:
- They can be partially implemented, as happens in abstract classes.
- They cannot take constructor parameters. | trait Correo{
def enviar():Unit;
def recibir(mensaje:String):Unit={
println(s"Mensaje recibido: ${mensaje}")
}
}
class CorreoPostal() extends Correo{
override def enviar()={
println("Enviado desde correo postal")
}
}
class CorreoElectronico(usuario:String) extends Correo{
... | _____no_output_____ | MIT | Scala-basics.ipynb | FranciscoJavierMartin/Notebooks |
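For readers coming from Python, a roughly analogous construct (an analogy only, not Scala) is an abstract base class with one unimplemented and one implemented method:

```python
from abc import ABC, abstractmethod

class Correo(ABC):
    @abstractmethod
    def enviar(self):
        # Unimplemented, like the abstract method of the trait
        ...

    def recibir(self, mensaje):
        # Partially implemented, like the trait's recibir
        return f"Mensaje recibido: {mensaje}"

class CorreoPostal(Correo):
    def enviar(self):
        return "Enviado desde correo postal"
```

Unlike Scala traits, a Python class can only extend one base class per slot in its MRO, so this is an approximation of the pattern, not an equivalent.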
Collections

The collections included by default are immutable: elements cannot be added or removed in place. Operations such as *add* instead return a new collection containing the new elements. The new collection only stores references to the existing objects, so creating it carries almost no... | val lista=List(1,2,3) //Immutable list
0::lista //Returns a list with the new element prepended
lista.head //Returns the first element of the list
lista.tail //Returns the whole list except the first element
lista:::lista //Concatenates two lists and returns the result | _____no_output_____ | MIT | Scala-basics.ipynb | FranciscoJavierMartin/Notebooks |
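The same prepend/head/tail/concatenate operations can be mimicked with Python's immutable tuples (an analogy for readers unfamiliar with Scala lists):

```python
lista = (1, 2, 3)                   # immutable, like List(1,2,3)
prepended = (0,) + lista            # like 0 :: lista
head, tail = lista[0], lista[1:]    # like lista.head / lista.tail
doubled = lista + lista             # like lista ::: lista
```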
Operations and functions on sets (and similar collections) | val conjunto=Set(1,2,3)
val conjunto2=conjunto.map(x => x+3) //Applies the given function to every member of the collection
val conjunto3=List(conjunto,conjunto2).flatten //Creates a new collection with the elements of the sub-collections
Set(1,4,9).flatMap { x => Set(x,x+1) } //FlatMap
val lista=(List(1,2,3)++... | _____no_output_____ | MIT | Scala-basics.ipynb | FranciscoJavierMartin/Notebooks |
Maps

Maps are key/value structures similar to Java's Maps or Python's dictionaries. | val mapa=Map(1->"Uno",2->"Dos",3->"Tres") | _____no_output_____ | MIT | Scala-basics.ipynb | FranciscoJavierMartin/Notebooks |
Colab FAQ

For some basic overview and features offered in Colab notebooks, check out: [Overview of Colaboratory Features](https://colab.research.google.com/notebooks/basic_features_overview.ipynb)

You need to use the Colab GPU for this assignment by selecting:

> **Runtime** → **Change runtime type** → **Hardware A... | ######################################################################
# Setup python environment and change the current working directory
######################################################################
!pip install torch torchvision
!pip install imageio
!pip install matplotlib
%mkdir -p /content/csc413/a4/
%c... | _____no_output_____ | MIT | assets/assignments/a4_dcgan.ipynb | uoft-csc413/2022 |
Helper code

Utility functions | import os
import numpy as np
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch.nn import Parameter
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision import... | _____no_output_____ | MIT | assets/assignments/a4_dcgan.ipynb | uoft-csc413/2022 |
Data loader | def get_emoji_loader(emoji_type, opts):
"""Creates training and test data loaders.
"""
transform = transforms.Compose([
transforms.Resize(opts.image_size),  # transforms.Scale was renamed to Resize in newer torchvision
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
... | _____no_output_____ | MIT | assets/assignments/a4_dcgan.ipynb | uoft-csc413/2022 |
Training and evaluation code | def print_models(G_XtoY, G_YtoX, D_X, D_Y):
"""Prints model information for the generators and discriminators.
"""
print(" G ")
print("---------------------------------------")
print(G_XtoY)
print("---------------------------------------")
print(" ... | _____no_output_____ | MIT | assets/assignments/a4_dcgan.ipynb | uoft-csc413/2022 |
Your code for generators and discriminators

Helper modules | def sample_noise(batch_size, dim):
"""
Generate a PyTorch Tensor of uniform random noise.
Input:
- batch_size: Integer giving the batch size of noise to generate.
- dim: Integer giving the dimension of noise to generate.
Output:
- A PyTorch Tensor of shape (batch_size, dim, 1, 1) containin... | _____no_output_____ | MIT | assets/assignments/a4_dcgan.ipynb | uoft-csc413/2022 |
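A NumPy sketch of the sampling step the docstring describes (the assignment itself expects a torch tensor; `sample_noise_np` is a hypothetical stand-in used only for illustration):

```python
import numpy as np

def sample_noise_np(batch_size, dim, low=-1.0, high=1.0):
    # Uniform random noise, reshaped to (batch_size, dim, 1, 1) so it can
    # feed the first transposed-convolution layer of a DCGAN generator.
    noise = np.random.uniform(low, high, size=(batch_size, dim))
    return noise.reshape(batch_size, dim, 1, 1)

z = sample_noise_np(4, 100)
```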
DCGAN Spectral Norm class | def l2normalize(v, eps=1e-12):
return v / (v.norm() + eps)
class SpectralNorm(nn.Module):
def __init__(self, module, name='weight', power_iterations=1):
super(SpectralNorm, self).__init__()
self.module = module
self.name = name
self.power_iterations = power_iterations
i... | _____no_output_____ | MIT | assets/assignments/a4_dcgan.ipynb | uoft-csc413/2022 |
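The `SpectralNorm` wrapper above relies on power iteration to estimate the largest singular value of a weight matrix. A standalone NumPy sketch of that estimate (not the module's exact code):

```python
import numpy as np

def spectral_norm_estimate(W, power_iterations=20, eps=1e-12):
    # Power iteration: alternately multiply by W^T and W; the vectors u, v
    # converge to the top singular pair, and u^T W v to the spectral norm.
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(power_iterations):
        v = W.T @ u
        v /= np.linalg.norm(v) + eps
        u = W @ v
        u /= np.linalg.norm(u) + eps
    return u @ W @ v

W = np.array([[3.0, 0.0], [0.0, 1.0]])
sigma = spectral_norm_estimate(W)
```

Dividing the weights by this estimate is what keeps the discriminator's Lipschitz constant close to 1.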
**[Your Task]** GAN generator | class DCGenerator(nn.Module):
def __init__(self, noise_size, conv_dim, spectral_norm=False):
super(DCGenerator, self).__init__()
self.conv_dim = conv_dim
###########################################
## FILL THIS IN: CREATE ARCHITECTURE ##
#################################... | _____no_output_____ | MIT | assets/assignments/a4_dcgan.ipynb | uoft-csc413/2022 |
GAN discriminator | class DCDiscriminator(nn.Module):
"""Defines the architecture of the discriminator network.
Note: Both discriminators D_X and D_Y have the same architecture in this assignment.
"""
def __init__(self, conv_dim=64, spectral_norm=False):
super(DCDiscriminator, self).__init__()
self.con... | _____no_output_____ | MIT | assets/assignments/a4_dcgan.ipynb | uoft-csc413/2022 |
**[Your Task]** GAN training loop

* Regular GAN
* Least Squares GAN | def gan_training_loop_regular(dataloader, test_dataloader, opts):
"""Runs the training loop.
* Saves checkpoint every opts.checkpoint_every iterations
* Saves generated samples every opts.sample_every iterations
"""
# Create generators and discriminators
G, D = create_model(opts)
g... | _____no_output_____ | MIT | assets/assignments/a4_dcgan.ipynb | uoft-csc413/2022 |
**[Your Task]** Training

Download dataset | ######################################################################
# Download Translation datasets
######################################################################
data_fpath = get_file(fname='emojis',
origin='http://www.cs.toronto.edu/~jba/emojis.tar.gz',
u... | _____no_output_____ | MIT | assets/assignments/a4_dcgan.ipynb | uoft-csc413/2022 |
Train DCGAN | SEED = 11
# Set the random seed manually for reproducibility.
np.random.seed(SEED)
torch.manual_seed(SEED)
if torch.cuda.is_available():
torch.cuda.manual_seed(SEED)
args = AttrDict()
args_dict = {
'image_size':32,
'g_conv_dim':32,
'd_conv_dim':64,
'noise... | _____no_output_____ | MIT | assets/assignments/a4_dcgan.ipynb | uoft-csc413/2022 |
Download your output | !zip -r /content/csc413/a4/results/samples.zip /content/csc413/a4/results/samples_gan_gp1_lr3e-5
from google.colab import files
files.download("/content/csc413/a4/results/samples.zip") | _____no_output_____ | MIT | assets/assignments/a4_dcgan.ipynb | uoft-csc413/2022 |
Do some cleaning and reformatting: | df.drop(df.columns[df.columns.str.contains('unnamed',case = False)],
axis = 1, inplace = True)
df = df[['arrival', 'choice']]
df['arrival'].replace({9.0: 8.6, 9.1: 8.7}, inplace=True)
df.head()
fig, ax = plt.subplots()
fig.set_size_inches(6.7, 1.2)
fig = sns.regplot(x='arrival', y='choice', data=df,
... | _____no_output_____ | MIT | python/fig5_logit_all.ipynb | thomasnicolet/Paper_canteen_dilemma |
Initial Setup | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import itertools
import os
import math
import string
import re
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
import helper
import pickle
import keras
from kera... | Using TensorFlow backend.
| MIT | train_result/ml_ee_xxl_data_training_step7.ipynb | cufezhusy/mlXVA |
Training Parameters

We'll set the hyperparameters for training our model. If you understand what they mean, feel free to play around - otherwise, we recommend keeping the defaults for your first run 🙂 | # Hyperparams if GPU is available
if tf.test.is_gpu_available():
print('---- We are using GPU now ----')
# GPU
BATCH_SIZE = 512 # Number of examples used in each iteration
EPOCHS = 80 # Number of passes through entire dataset
# Hyperparams for CPU training
else:
print('---- We are using CPU n... | ---- We are using CPU now ----
| MIT | train_result/ml_ee_xxl_data_training_step7.ipynb | cufezhusy/mlXVA |
Data

The wine reviews dataset is already attached to your workspace (if you want to attach your own data, [check out our docs](https://docs.floydhub.com/guides/workspace/attaching-floydhub-datasets)).

Let's take a look at the data. | data_path = '/floyd/input/gengduoshuju/' # ADD path/to/dataset
Y= pickle.load( open(os.path.join(data_path,'Y.pks'), "rb" ) )
X= pickle.load( open(os.path.join(data_path,'X.pks'), "rb" ) )
X = X.reshape((X.shape[0],X.shape[1],1))
print("Size of X :" + str(X.shape))
print("Size of Y :" + str(Y.shape))
X = X.astype(np.f... | Size of X :(412038, 240, 1)
Size of Y :(412038,)
| MIT | train_result/ml_ee_xxl_data_training_step7.ipynb | cufezhusy/mlXVA |
Data Preprocessing | X_train, X_test, Y_train_orig,Y_test_orig= helper.divide_data(X,Y)
print(Y.min())
print(Y.max())
num_classes = 332
Y_train = keras.utils.to_categorical(Y_train_orig, num_classes)
Y_test = keras.utils.to_categorical(Y_test_orig, num_classes)
print("number of training examples = " + str(X_train.shape[0]))
print("number ... | (240, 1)
| MIT | train_result/ml_ee_xxl_data_training_step7.ipynb | cufezhusy/mlXVA |
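`keras.utils.to_categorical` one-hot encodes the integer class labels; a minimal NumPy equivalent, shown only to make the transformation concrete:

```python
import numpy as np

def to_categorical_np(y, num_classes):
    # One-hot encode integer class labels, mirroring keras.utils.to_categorical.
    y = np.asarray(y, dtype=int)
    out = np.zeros((y.shape[0], num_classes))
    out[np.arange(y.shape[0]), y] = 1.0
    return out

onehot = to_categorical_np([0, 2, 1], num_classes=3)
```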
Model definition

The *Tokens per sentence* plot (see above) is useful for setting the `MAX_LEN` training hyperparameter. | # ===================================================================================
# Load the model what has already ben trained
# ===================================================================================
model = load_model(r"floyd_model_xxl_data_ver8.h5") | _____no_output_____ | MIT | train_result/ml_ee_xxl_data_training_step7.ipynb | cufezhusy/mlXVA |
Model Training | opt = keras.optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
model.compile(loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
model.summary()
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
model.fit(X_tra... | _________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1d_1 (Conv1D) (None, 240, 16) 80
________________________________________________________... | MIT | train_result/ml_ee_xxl_data_training_step7.ipynb | cufezhusy/mlXVA |
$$\newcommand\bs[1]{\boldsymbol{#1}}$$ This content is part of a series following the chapter 2 on linear algebra from the [Deep Learning Book](http://www.deeplearningbook.org/) by Goodfellow, I., Bengio, Y., and Courville, A. (2016). It aims to provide intuitions/drawings/python code on mathematical theories and is... | x = np.array([1, 2, 3, 4])
x | _____no_output_____ | MIT | 2.1 Scalars, Vectors, Matrices and Tensors/2.1 Scalars Vectors Matrices and Tensors.ipynb | PeterFogh/deepLearningBook-Notes |
Example 2. Create a (3x2) matrix with nested brackets

The `array()` function can also create $2$-dimensional arrays with nested brackets: | A = np.array([[1, 2], [3, 4], [5, 6]])
A | _____no_output_____ | MIT | 2.1 Scalars, Vectors, Matrices and Tensors/2.1 Scalars Vectors Matrices and Tensors.ipynb | PeterFogh/deepLearningBook-Notes |
Shape

The shape of an array (that is to say its dimensions) tells you the number of values for each dimension. For a $2$-dimensional array it will give you the number of rows and the number of columns. Let's find the shape of our preceding $2$-dimensional array `A`. Since `A` is a Numpy array (it was created with the `... | A.shape
We can see that $\bs{A}$ has 3 rows and 2 columns. Let's check the shape of our first vector: | x.shape
As expected, you can see that $\bs{x}$ has only one dimension. The number corresponds to the length of the array: | len(x) | _____no_output_____ | MIT | 2.1 Scalars, Vectors, Matrices and Tensors/2.1 Scalars Vectors Matrices and Tensors.ipynb | PeterFogh/deepLearningBook-Notes |
Transposition

With transposition you can convert a row vector to a column vector and vice versa. The transpose $\bs{A}^{\text{T}}$ of the matrix $\bs{A}$ corresponds to the mirrored axes. If the matrix is a square matrix (same number of columns and rows), the rows simply become the columns; if the matrix is not square the idea is the same. The superscript $... | A = np.array([[1, 2], [3, 4], [5, 6]])
A
A_t = A.T
A_t | _____no_output_____ | MIT | 2.1 Scalars, Vectors, Matrices and Tensors/2.1 Scalars Vectors Matrices and Tensors.ipynb | PeterFogh/deepLearningBook-Notes |
We can check the dimensions of the matrices: | A.shape
A_t.shape | _____no_output_____ | MIT | 2.1 Scalars, Vectors, Matrices and Tensors/2.1 Scalars Vectors Matrices and Tensors.ipynb | PeterFogh/deepLearningBook-Notes |
We can see that the number of columns becomes the number of rows with transposition and vice versa.

Addition

Matrices can be added if they have the same shape:

$$\bs{A} + \bs{B} = \bs{C}$$

Each cell of $\bs{A}$ is added to the corresponding cell of $\bs{B}$:

$$\bs{A}_{i,j} + \bs{B}_{i,j} = \bs{C}_{i,j}$$

$i$ is the row ind... | A = np.array([[1, 2], [3, 4], [5, 6]])
A
B = np.array([[2, 5], [7, 4], [4, 3]])
B
# Add matrices A and B
C = A + B
C | _____no_output_____ | MIT | 2.1 Scalars, Vectors, Matrices and Tensors/2.1 Scalars Vectors Matrices and Tensors.ipynb | PeterFogh/deepLearningBook-Notes |
It is also possible to add a scalar to a matrix. This means adding this scalar to each cell of the matrix.$$\alpha+ \begin{bmatrix} A_{1,1} & A_{1,2} \\\\ A_{2,1} & A_{2,2} \\\\ A_{3,1} & A_{3,2}\end{bmatrix}=\begin{bmatrix} \alpha + A_{1,1} & \alpha + A_{1,2} \\\\ \alpha + A_{2,1} & \alpha + A_{2,2} \\\... | A
# Example: Add 4 to the matrix A
C = A+4
C | _____no_output_____ | MIT | 2.1 Scalars, Vectors, Matrices and Tensors/2.1 Scalars Vectors Matrices and Tensors.ipynb | PeterFogh/deepLearningBook-Notes |
Broadcasting

Numpy can handle operations on arrays of different shapes. The smaller array will be extended to match the shape of the bigger one. The advantage is that this is done in `C` under the hood (like any vectorized operations in Numpy). Actually, we used broadcasting in the example 5. The scalar was converted i... | A = np.array([[1, 2], [3, 4], [5, 6]])
A
B = np.array([[2], [4], [6]])
B
# Broadcasting
C=A+B
C | _____no_output_____ | MIT | 2.1 Scalars, Vectors, Matrices and Tensors/2.1 Scalars Vectors Matrices and Tensors.ipynb | PeterFogh/deepLearningBook-Notes |
`distance_transform_lin`

A variant of the standard distance transform where the distances are computed along a given axis rather than radially. | import numpy as np
import porespy as ps
import scipy.ndimage as spim
import matplotlib.pyplot as plt | _____no_output_____ | MIT | examples/filters/reference/distance_transform_lin.ipynb | xu-kai-xu/porespy |
The arguments and their defaults are: | import inspect
inspect.signature(ps.filters.distance_transform_lin) | _____no_output_____ | MIT | examples/filters/reference/distance_transform_lin.ipynb | xu-kai-xu/porespy |
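To build intuition, here is a simplified pure-NumPy sketch of the forward mode, counting consecutive pore voxels from the start of the axis (porespy's actual implementation covers all modes and may use a different edge convention):

```python
import numpy as np

def dt_lin_forward(im, axis=0):
    # Count consecutive True (pore) voxels along `axis`, scanning from the
    # start of the axis and resetting to 0 at every False (solid) voxel.
    im = np.swapaxes(np.asarray(im, dtype=bool), 0, axis)
    dt = np.zeros(im.shape, dtype=int)
    dt[0] = im[0].astype(int)
    for i in range(1, im.shape[0]):
        dt[i] = np.where(im[i], dt[i - 1] + 1, 0)
    return np.swapaxes(dt, 0, axis)

col = np.array([[True], [True], [False], [True]])
dist = dt_lin_forward(col, axis=0)
```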
`axis`

The axis along which the distances should be computed | fig, ax = plt.subplots(1, 2, figsize=[12, 6])
im = ps.generators.blobs(shape=[500, 500], porosity=0.7)
axis = 0
dt = ps.filters.distance_transform_lin(im, axis=axis)
ax[0].imshow(dt/im)
ax[0].axis(False)
ax[0].set_title(f'axis = {axis}')
axis = 1
dt = ps.filters.distance_transform_lin(im, axis=axis)
ax[1].imshow(d... | _____no_output_____ | MIT | examples/filters/reference/distance_transform_lin.ipynb | xu-kai-xu/porespy |
`mode`

Whether the distances are computed from the start to end, end to start, or both. | fig, ax = plt.subplots(1, 3, figsize=[15, 5])
im = ps.generators.blobs(shape=[500, 500], porosity=0.7)
mode = 'forward'
dt = ps.filters.distance_transform_lin(im, mode=mode)
ax[0].imshow(dt/im)
ax[0].axis(False)
ax[0].set_title(f'mode = {mode}')
mode = 'reverse'
dt = ps.filters.distance_transform_lin(im, mode=mode)... | _____no_output_____ | MIT | examples/filters/reference/distance_transform_lin.ipynb | xu-kai-xu/porespy |
Develop and Register Model

In this notebook, we will go through the steps to load the MaskRCNN model and call the model to find the top predictions. We will then register the model in ACR using AzureML. Note: Always make sure you don't have any lingering notebooks running (Shutdown previous notebooks). Otherwise it m... | %reload_ext autoreload
%autoreload 2
%matplotlib inline
import torch
import torchvision
import numpy as np
from pathlib import *
from PIL import Image
from azureml.core.workspace import Workspace
from azureml.core.model import Model
from dotenv import set_key, find_dotenv
from testing_utilities import get_auth
import u... | _____no_output_____ | MIT | object-detection-azureml/031_DevAndRegisterModel.ipynb | Bhaskers-Blu-Org2/deploy-MLmodels-on-iotedge |
Model

We load a pretrained [**Mask R-CNN ResNet-50 FPN** object detection model](https://pytorch.org/blog/torchvision03/). This model is trained on a subset of COCO train2017, which contains the same 20 categories as those from Pascal VOC. | # use pretrained model: https://pytorch.org/blog/torchvision03/
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
#device = torch.device("cpu")
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model.to(device)
url = "https://download.pytorch.org/models/maskr... | _____no_output_____ | MIT | object-detection-azureml/031_DevAndRegisterModel.ipynb | Bhaskers-Blu-Org2/deploy-MLmodels-on-iotedge |
Register Model | # Get workspace
# Load existing workspace from the config file info.
ws = Workspace.from_config(auth=get_auth())
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep="\n")
model = Model.register(
model_path="maskrcnn_resnet50.pth", # this points to a local file
model_name="maskrcnn_resnet50... | _____no_output_____ | MIT | object-detection-azureml/031_DevAndRegisterModel.ipynb | Bhaskers-Blu-Org2/deploy-MLmodels-on-iotedge |
!nvidia-smi
| Sun Dec 6 06:17:07 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.45.01 Driver Version: 418.67 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id ... | MIT | TensorFI_Capsnet.ipynb | MahdiSajedi/TensorFI | |
Import `tensorflow` version 1 (for Colab) and `os` | # set tensorflow version to 1
%tensorflow_version 1.x
# if need to install some spesfic version
# !pip install tensorflow-gpu==1.10.0
import os
| _____no_output_____ | MIT | TensorFI_Capsnet.ipynb | MahdiSajedi/TensorFI |
**Download Modified git repo and change dir to `TensorFI`** | !git clone https://github.com/MahdiSajedi/TensorFI.git
os.chdir('TensorFI')
!pwd
%ls | fatal: destination path 'TensorFI' already exists and is not an empty directory.
/content/TensorFI/TensorFI
faultTypes.py fiLog.py __init__.py modifyGraph.py tensorFI.py
fiConfig.py fiStats.py injectFault.py printGraph.py
| MIT | TensorFI_Capsnet.ipynb | MahdiSajedi/TensorFI |
Install the `TensorFI` pip package

Run the `capsnet.py` file | !pip install tensorfi
!python ./Tests/capsnet.py
!pwd | /content/TensorFI
| MIT | TensorFI_Capsnet.ipynb | MahdiSajedi/TensorFI |
Artificial Intelligence Nanodegree Voice User Interfaces Project: Speech Recognition with Neural Networks---In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included cod... | from data_generator import vis_train_features
# extract label and audio features for a single training example
vis_text, vis_raw_audio, vis_mfcc_feature, vis_spectrogram_feature, vis_audio_path = vis_train_features() | There are 2136 total training examples.
| Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
The following code cell visualizes the audio waveform for your chosen example, along with the corresponding transcript. You also have the option to play the audio in the notebook! | from IPython.display import Markdown, display
from data_generator import vis_train_features, plot_raw_audio
from IPython.display import Audio
%matplotlib inline
# plot audio signal
plot_raw_audio(vis_raw_audio)
# print length of audio signal
display(Markdown('**Shape of Audio Signal** : ' + str(vis_raw_audio.shape)))
... | _____no_output_____ | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
STEP 1: Acoustic Features for Speech Recognition

For this project, you won't use the raw audio waveform as input to your model. Instead, we provide code that first performs a pre-processing step to convert the raw audio to a feature representation that has historically proven successful for ASR models. Your acoustic ... | from data_generator import plot_spectrogram_feature
# plot normalized spectrogram
plot_spectrogram_feature(vis_spectrogram_feature)
# print shape of spectrogram
display(Markdown('**Shape of Spectrogram** : ' + str(vis_spectrogram_feature.shape))) | _____no_output_____ | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
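The normalized spectrogram above is produced by the provided `data_generator` code; conceptually, a spectrogram is just log-compressed FFT magnitudes over sliding windows. A toy NumPy sketch of the idea (the window/step values here are illustrative, not the project's settings):

```python
import numpy as np

def log_spectrogram(signal, window=256, step=128, eps=1e-10):
    # Slice the signal into overlapping frames, apply a Hann window,
    # then log-compress the squared FFT magnitudes.
    frames = np.array([signal[i:i + window]
                       for i in range(0, len(signal) - window + 1, step)])
    spec = np.abs(np.fft.rfft(frames * np.hanning(window), axis=1)) ** 2
    return np.log(spec + eps)

sig = np.sin(2 * np.pi * 50 * np.linspace(0, 1, 2048, endpoint=False))
S = log_spectrogram(sig)
```

Each row of `S` is one time step of the feature sequence that the acoustic models below consume.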
Mel-Frequency Cepstral Coefficients (MFCCs)

The second option for an audio feature representation is [MFCCs](https://en.wikipedia.org/wiki/Mel-frequency_cepstrum). You do **not** need to dig deeply into the details of how MFCCs are calculated, but if you would like more information, you are welcome to peruse the [docu... | from data_generator import plot_mfcc_feature
# plot normalized MFCC
plot_mfcc_feature(vis_mfcc_feature)
# print shape of MFCC
display(Markdown('**Shape of MFCC** : ' + str(vis_mfcc_feature.shape))) | _____no_output_____ | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
When you construct your pipeline, you will be able to choose to use either spectrogram or MFCC features. If you would like to see different implementations that make use of MFCCs and/or spectrograms, please check out the links below:

- This [repository](https://github.com/baidu-research/ba-dls-deepspeech) uses spectrog... | #####################################################################
# RUN THIS CODE CELL IF YOU ARE RESUMING THE NOTEBOOK AFTER A BREAK #
#####################################################################
# allocate 50% of GPU memory (if you like, feel free to change this)
from keras.backend.tensorflow_backend im... | Using TensorFlow backend.
/home/pjordan/anaconda3/envs/dnn-speech-recognizer/lib/python3.5/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv i... | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
Model 0: RNN

Given their effectiveness in modeling sequential data, the first acoustic model you will use is an RNN. As shown in the figure below, the RNN we supply to you will take the time sequence of audio features as input.

At each time step, the speaker pronounces one of 28 possible characters, including each of t... | model_0 = simple_rnn_model(input_dim=161) # change to 13 if you would like to use MFCC features
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
________________________________________________________... | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
As explored in the lesson, you will train the acoustic model with the [CTC loss](http://www.cs.toronto.edu/~graves/icml_2006.pdf) criterion. Custom loss functions take a bit of hacking in Keras, and so we have implemented the CTC loss function for you, so that you can focus on trying out as many deep learning architec... | train_model(input_to_softmax=model_0,
pickle_path='model_0.pickle',
save_model_path='model_0.h5',
optimizer=SGD(lr=0.02, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=1),
spectrogram=True) # change to False if you would like to use MFCC features | Epoch 1/20
106/106 [==============================] - 116s - loss: 962.2045 - val_loss: 746.4123
Epoch 2/20
106/106 [==============================] - 111s - loss: 757.1928 - val_loss: 729.0466
Epoch 3/20
106/106 [==============================] - 116s - loss: 753.0298 - val_loss: 730.4964
Epoch 4/20
106/106 [=========... | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
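CTC training lets the network emit a per-timestep distribution over characters plus a blank symbol; the simplest way to read a transcription back out is greedy decoding, which collapses repeated symbols and drops blanks. A small illustrative sketch (the 3-symbol vocabulary and blank index here are assumptions, not the project's exact character map):

```python
import numpy as np

def ctc_greedy_decode(probs, blank):
    # probs: (timesteps, vocab) softmax outputs. Take the argmax path,
    # merge consecutive duplicates, then drop the blank symbol.
    best = probs.argmax(axis=1)
    out, prev = [], None
    for b in best:
        if b != prev and b != blank:
            out.append(int(b))
        prev = b
    return out

# Toy path over a 3-symbol vocab where index 2 is the blank: a a _ b b -> a b
probs = np.eye(3)[[0, 0, 2, 1, 1]]
decoded = ctc_greedy_decode(probs, blank=2)
```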
(IMPLEMENTATION) Model 1: RNN + TimeDistributed Dense

Read about the [TimeDistributed](https://keras.io/layers/wrappers/) wrapper and the [BatchNormalization](https://keras.io/layers/normalization/) layer in the Keras documentation. For your next architecture, you will add [batch normalization](https://arxiv.org/pdf/1... | model_1 = rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features
units=246,
activation='relu',
dropout_rate=0.0) | _________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
________________________________________________________... | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_1.h5`. The loss history is [saved](https://wiki.python.org/moin/Usi... | from keras.optimizers import SGD
train_model(input_to_softmax=model_1,
pickle_path='model_1.pickle',
save_model_path='model_1.h5',
optimizer=SGD(lr=0.07693823225442271, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=1),
spectrogram=True) # change to False if you wou... | Epoch 1/20
106/106 [==============================] - 125s - loss: 301.3889 - val_loss: 255.1117
Epoch 2/20
106/106 [==============================] - 126s - loss: 208.7791 - val_loss: 195.5662
Epoch 3/20
106/106 [==============================] - 126s - loss: 188.6020 - val_loss: 184.3830
Epoch 4/20
106/106 [=========... | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
(IMPLEMENTATION) Model 2: CNN + RNN + TimeDistributed Dense

The architecture in `cnn_rnn_model` adds an additional level of complexity, by introducing a [1D convolution layer](https://keras.io/layers/convolutional/conv1d). This layer incorporates many arguments that can be (optionally) tuned when calling the `cnn_rnn_... | model_2 = cnn_rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features
filters=185,
kernel_size=5,
conv_stride=3,
conv_border_mode='valid',
units=350,
dr... | _________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
________________________________________________________... | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_2.h5`. The loss history is [saved](https://wiki.python.org/moin/Usi... | from keras.optimizers import SGD
train_model(input_to_softmax=model_2,
pickle_path='model_2.pickle',
save_model_path='model_2.h5',
optimizer=SGD(lr=0.05, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=1),
spectrogram=True) # change to False if you would like to use ... | Epoch 1/20
106/106 [==============================] - 47s - loss: 258.7976 - val_loss: 215.1476
Epoch 2/20
106/106 [==============================] - 44s - loss: 210.2469 - val_loss: 195.7121
Epoch 3/20
106/106 [==============================] - 44s - loss: 194.4411 - val_loss: 176.9136
Epoch 4/20
106/106 [============... | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
(IMPLEMENTATION) Model 3: Deeper RNN + TimeDistributed Dense

Review the code in `rnn_model`, which makes use of a single recurrent layer. Now, specify an architecture in `deep_rnn_model` that utilizes a variable number `recur_layers` of recurrent layers. The figure below shows the architecture that should be returned... | model_3 = deep_rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features
units=290,
recur_layers=3,
dropout_rate=0.3035064397585259) | WARNING:tensorflow:From /home/pjordan/anaconda3/envs/dnn-speech-recognizer/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py:1190: calling reduce_sum (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecat... | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_3.h5`. The loss history is [saved](https://wiki.python.org/moin/Usi... | from keras.optimizers import SGD
train_model(input_to_softmax=model_3,
pickle_path='model_3.pickle',
save_model_path='model_3.h5',
optimizer=SGD(lr=0.0635459438114008, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=1),
spectrogram=True) # change to False if you wou... | WARNING:tensorflow:From /home/pjordan/anaconda3/envs/dnn-speech-recognizer/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py:1297: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is depreca... | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
(IMPLEMENTATION) Model 4: Bidirectional RNN + TimeDistributed Dense Read about the [Bidirectional](https://keras.io/layers/wrappers/) wrapper in the Keras documentation. For your next architecture, you will specify an architecture that uses a single bidirectional RNN layer, before a (`TimeDistributed`) dense layer. T... | model_4 = bidirectional_rnn_model(
input_dim=161, # change to 13 if you would like to use MFCC features
units=250,
dropout_rate=0.1) | _________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
________________________________________________________... | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
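Conceptually, the `Bidirectional` wrapper runs one copy of the layer over the sequence left-to-right and a second copy over the reversed sequence, then merges the two outputs per timestep (concatenation by default). A pure-Python sketch with a hypothetical single-unit recurrence:

```python
import math

def rnn_layer(seq, w_x=0.5, w_h=0.3):
    # Single-unit simple RNN: h_t = tanh(w_x * x_t + w_h * h_{t-1})
    h, hidden = 0.0, []
    for x in seq:
        h = math.tanh(w_x * x + w_h * h)
        hidden.append(h)
    return hidden

def bidirectional(seq):
    fwd = rnn_layer(seq)              # left-to-right pass
    bwd = rnn_layer(seq[::-1])[::-1]  # right-to-left pass, re-aligned to time order
    return [(f, b) for f, b in zip(fwd, bwd)]  # "concat" per timestep

out = bidirectional([1.0, -1.0, 0.5])
```

Because each timestep carries both a past-looking and a future-looking state, the merged output has twice the feature width of a single directional layer.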
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_4.h5`. The loss history is [saved](https://wiki.python.org/moin/Usi... | train_model(input_to_softmax=model_4,
pickle_path='model_4.pickle',
save_model_path='model_4.h5',
optimizer=SGD(lr=0.06, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=1),
spectrogram=True) # change to False if you would like to use MFCC features | Epoch 1/20
106/106 [==============================] - 205s - loss: 275.6266 - val_loss: 226.8717
Epoch 2/20
106/106 [==============================] - 205s - loss: 213.2997 - val_loss: 201.3109
Epoch 3/20
106/106 [==============================] - 204s - loss: 200.7651 - val_loss: 186.7573
Epoch 4/20
106/106 [=========... | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
(OPTIONAL IMPLEMENTATION) Models 5+ If you would like to try out more architectures than the ones above, please use the code cell below. Please continue to follow the same convention for saving the models; for the $i$-th sample model, please save the loss at **`model_i.pickle`** and save the trained model at **`mode... | model_5 = cnn2d_rnn_model(
input_dim=161, # change to 13 if you would like to use MFCC features
filters=50,
kernel_size=(11,11),
conv_stride=1,
conv_border_mode='same',
pool_size=(1,5),
units=200,
dropout_rate=0.1)
from keras.optimizers import SGD
train_model(input_to_softmax=model_5, ... | Epoch 1/20
106/106 [==============================] - 137s - loss: 285.0588 - val_loss: 228.7582
Epoch 2/20
106/106 [==============================] - 129s - loss: 230.2834 - val_loss: 213.1584
Epoch 3/20
106/106 [==============================] - 126s - loss: 213.9887 - val_loss: 194.7103
Epoch 4/20
106/106 [=========... | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
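The time dimension that reaches the recurrent layer after a convolution depends on the padding mode and stride. A hedged sketch of the usual length arithmetic (the project's `sample_models.py` contains a similar helper; the function name here is illustrative):

```python
import math

def conv_output_length(input_len, kernel_size, stride, border_mode):
    # "same" pads so only the stride shrinks the axis;
    # "valid" additionally loses kernel_size - 1 positions
    if border_mode == "same":
        return math.ceil(input_len / stride)
    if border_mode == "valid":
        return math.ceil((input_len - kernel_size + 1) / stride)
    raise ValueError(border_mode)

# With model_5's settings (kernel 11, stride 1, "same"), the time axis is preserved
print(conv_output_length(100, 11, 1, "same"))   # 100
print(conv_output_length(100, 11, 1, "valid"))  # 90
```

Keeping the time axis intact (here via "same" padding and `pool_size=(1,5)`, which pools only the frequency axis) matters because CTC needs at least one output frame per label.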
Compare the Models Execute the code cell below to evaluate the performance of the drafted deep learning models. The training and validation loss are plotted for each model. | from glob import glob
import numpy as np
import _pickle as pickle
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.set_style(style='white')
# obtain the paths for the saved model history
all_pickles = sorted(glob("results/*.pickle"))
# extract the name of each model
model_names = [item[8:-7... | _____no_output_____ | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
__Question 1:__ Use the plot above to analyze the performance of each of the attempted architectures. Which performs best? Provide an explanation regarding why you think some models perform better than others. __Answer:__ The following table gives the model performance in ascending order of (best) validation loss.| R... | # specify the model
model_end = final_model(
input_dim=161,
filters=50,
kernel_size=(11,11),
conv_stride=1,
conv_border_mode='same',
pool_size=(1,5),
units=200,
recur_layers=1,
dropout_rate=0.5) | WARNING:tensorflow:From /home/pjordan/anaconda3/envs/dnn-speech-recognizer/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py:1208: calling reduce_prod (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is depreca... | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_end.h5`. The loss history is [saved](https://wiki.python.org/moin/U... | from keras.optimizers import SGD
train_model(input_to_softmax=model_end,
pickle_path='model_end.pickle',
save_model_path='model_end.h5',
optimizer=SGD(lr=0.05, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=1),
spectrogram=True) # change to False if you would like t... | Epoch 1/20
106/106 [==============================] - 248s - loss: 335.9858 - val_loss: 255.5860
Epoch 2/20
106/106 [==============================] - 240s - loss: 242.4996 - val_loss: 238.2656
Epoch 3/20
106/106 [==============================] - 239s - loss: 222.3218 - val_loss: 197.3325
Epoch 4/20
106/106 [=========... | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
__Question 2:__ Describe your final model architecture and your reasoning at each step. __Answer:__ The final architecture included a two-dimensional convolutional layer followed by a max-pooling layer. The output of the max-pooling layer fed into a bi-directional GRU layer, which in turn fed a time-distributed dense... | import numpy as np
from data_generator import AudioGenerator
from keras import backend as K
from utils import int_sequence_to_text
from IPython.display import Audio
def get_predictions(index, partition, input_to_softmax, model_path):
""" Print a model's decoded predictions
Params:
index (int): The exam... | _____no_output_____ | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
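`get_predictions` leans on Keras's `K.ctc_decode`; in greedy mode the core of CTC decoding is just collapsing consecutive repeats and then dropping the blank symbol. A minimal sketch (the blank index and the integer labels are assumptions; the real decoder works on per-frame softmax output):

```python
def ctc_greedy_decode(frame_ids, blank=0):
    # Collapse runs of identical labels, then drop blanks
    decoded, prev = [], None
    for label in frame_ids:
        if label != prev and label != blank:
            decoded.append(label)
        prev = label
    return decoded

# A repeated label separated by a blank survives; an uninterrupted run collapses
print(ctc_greedy_decode([1, 1, 0, 1, 2, 2, 0]))  # [1, 1, 2]
```

This is why the predicted transcriptions below can drop or merge characters: a single mistimed frame changes how runs collapse.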
Use the code cell below to obtain the transcription predicted by your final model for the first example in the training dataset. | get_predictions(index=0,
partition='train',
input_to_softmax=model_end,
model_path='results/model_end.h5') | --------------------------------------------------------------------------------
True transcription:
he was young no spear had touched him no poison lurked in his wine
--------------------------------------------------------------------------------
Predicted transcription:
he was o no sperhd thtm no pis on mork din ... | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
Use the next code cell to visualize the model's prediction for the first example in the validation dataset. | get_predictions(index=0,
partition='validation',
input_to_softmax=model_end,
model_path='results/model_end.h5') | --------------------------------------------------------------------------------
True transcription:
o life of this our spring
--------------------------------------------------------------------------------
Predicted transcription:
bo f an dhes rbrn
------------------------------------------------------------------... | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
One standard way to improve the results of the decoder is to incorporate a language model. We won't pursue this in the notebook, but you are welcome to do so as an _optional extension_. If you are interested in creating models that provide improved transcriptions, you are encouraged to download [more data](http://www.... | !!python -m nbconvert *.ipynb
!!zip submission.zip vui_notebook.ipynb report.html sample_models.py results/* | _____no_output_____ | Apache-2.0 | vui_notebook.ipynb | shubhank-saxena/dnn-speech-recognizer |
QA Inference on BERT using TensorRT 1. Overview Bidirectional Encoder Representations from Transformers (BERT) is a method of pre-training language representations that obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks. The original paper can be found here: https://arxiv.o... | paragraph_text = "The Apollo program, also known as Project Apollo, was the third United States human spaceflight program carried out by the National Aeronautics and Space Administration (NASA), which accomplished landing the first humans on the Moon from 1969 to 1972. First conceived during Dwight D. Eisenhower's admi... | _____no_output_____ | Apache-2.0 | demo/BERT/inference.ipynb | malithj/TensorRT
Question: | question_text = "What project put the first Americans into space?"
#question_text = "What year did the first manned Apollo flight occur?"
#question_text = "What President is credited with the original notion of putting Americans in space?"
#question_text = "Who did the U.S. collaborate with on an Earth orbit mission... | _____no_output_____ | Apache-2.0 | demo/BERT/inference.ipynb | malithj/TensorRT |
In this example we ask our BERT model questions related to the following paragraph:**The Apollo Program**_"The Apollo program, also known as Project Apollo, was the third United States human spaceflight program carried out by the National Aeronautics and Space Administration (NASA), which accomplished landing the first... | import helpers.data_processing as dp
import helpers.tokenization as tokenization
tokenizer = tokenization.FullTokenizer(vocab_file="/workspace/TensorRT/demo/BERT/models/fine-tuned/bert_tf_ckpt_large_qa_squad2_amp_128_v19.03.1/vocab.txt", do_lower_case=True)
# The maximum number of tokens for the question. Questions l... | _____no_output_____ | Apache-2.0 | demo/BERT/inference.ipynb | malithj/TensorRT |
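When a paragraph is longer than the model's sequence length, BERT preprocessing typically slides a window over the document tokens with a fixed stride so every token appears in at least one span. A hedged sketch of that bookkeeping (the real logic lives in `helpers/data_processing.py`; the function name here is illustrative):

```python
def doc_spans(num_doc_tokens, max_tokens_for_doc, doc_stride):
    # Each span is (start, length); advance by doc_stride until the tail is covered
    spans, start = [], 0
    while start < num_doc_tokens:
        length = min(max_tokens_for_doc, num_doc_tokens - start)
        spans.append((start, length))
        if start + length >= num_doc_tokens:
            break
        start += doc_stride
    return spans

print(doc_spans(10, 4, 2))  # [(0, 4), (2, 4), (4, 4), (6, 4)]
```

Overlapping spans mean most document tokens get scored in more than one window, and post-processing later picks the window where the token has the most context.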
TensorRT Inference | import tensorrt as trt
TRT_LOGGER = trt.Logger(trt.Logger.INFO)
import ctypes
import os
ctypes.CDLL("libnvinfer_plugin.so", mode=ctypes.RTLD_GLOBAL)
import pycuda.driver as cuda
import pycuda.autoinit
import collections
import numpy as np
import time
# Load the BERT-Large Engine
with open("/workspace/TensorRT/demo/BE... | _____no_output_____ | Apache-2.0 | demo/BERT/inference.ipynb | malithj/TensorRT |
Data Post-Processing Now that we have the inference results let's extract the actual answer to our question | # The total number of n-best predictions to generate in the nbest_predictions.json output file
n_best_size = 20
# The maximum length of an answer that can be generated. This is needed
# because the start and end predictions are not conditioned on one another
max_answer_length = 30
prediction... | _____no_output_____ | Apache-2.0 | demo/BERT/inference.ipynb | malithj/TensorRT |
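The reason `max_answer_length` is needed becomes clear in the span search: start and end logits are scored independently, so post-processing pairs the top start and end candidates and keeps only valid, short-enough spans. A simplified sketch (the real pipeline also maps token spans back to the original text):

```python
def best_span(start_logits, end_logits, n_best_size=20, max_answer_length=30):
    # Indices of the n_best_size highest logits, best first
    top = lambda logits: sorted(range(len(logits)),
                                key=lambda i: logits[i], reverse=True)[:n_best_size]
    best, best_score = None, float("-inf")
    for s in top(start_logits):
        for e in top(end_logits):
            if e < s or e - s + 1 > max_answer_length:
                continue  # end before start, or span too long
            if start_logits[s] + end_logits[e] > best_score:
                best, best_score = (s, e), start_logits[s] + end_logits[e]
    return best

print(best_span([0.1, 2.0, 0.3], [0.2, 0.1, 3.0]))  # (1, 2)
```

Without the length cap, a confident start token at one end of the paragraph could pair with a confident end token hundreds of tokens away.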
Outliers Impact | import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
import pandas as pd | _____no_output_____ | MIT | bonston_housing_project/Regularized Regression.ipynb | taareek/machine_learning |
Linear Regression | from sklearn.linear_model import LinearRegression
np.random.seed(42)
n_samples = 100
rng = np.random.randn(n_samples) * 10
print("Feature shape: ", rng.shape)
y_gen = 0.5 * rng + 2 * np.random.randn(n_samples)
print("\nTarget shape: ", y_gen.shape)
lr = LinearRegression()
lr.fit(rng.reshape(-1, 1), y_gen)
model_pred... | Coefficient Estimate: [0.92796845]
| MIT | bonston_housing_project/Regularized Regression.ipynb | taareek/machine_learning |
Ridge Regression | from sklearn.linear_model import Ridge
ridge_mod = Ridge(alpha= 1, normalize= True)
ridge_mod.fit(rng.reshape(-1, 1), y_gen)
ridge_mod_pred = ridge_mod.predict(rng.reshape(-1,1))
plt.figure(figsize=(10,8))
plt.scatter(rng, y_gen);
plt.plot(rng, ridge_mod_pred);
print("Coefficient of Estimation: ", ridge_mod.coef_)
# r... | _____no_output_____ | MIT | bonston_housing_project/Regularized Regression.ipynb | taareek/machine_learning |
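The shrinkage that `alpha` applies can be seen in the one-dimensional, no-intercept closed form: the ridge penalty just adds `alpha` to the denominator of the least-squares slope, pulling the coefficient toward zero. A small sketch with illustrative numbers (scikit-learn's `Ridge` additionally fits an intercept and, with `normalize=True`, rescales the feature first):

```python
def ridge_slope_1d(x, y, alpha=0.0):
    # argmin_w sum((y - w*x)^2) + alpha*w^2  ->  w = sum(x*y) / (sum(x*x) + alpha)
    return sum(a * b for a, b in zip(x, y)) / (sum(a * a for a in x) + alpha)

x, y = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
print(ridge_slope_1d(x, y, alpha=0.0))   # 2.0  (plain least squares)
print(ridge_slope_1d(x, y, alpha=14.0))  # 1.0  (coefficient shrunk toward 0)
```

With `alpha=0` the formula recovers ordinary least squares, which is why the ridge coefficient above (about 0.55) sits below the unregularized 0.93.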
Lasso Regression | from sklearn.linear_model import Lasso
# define model
lasso_mod = Lasso(alpha= 0.4, normalize= True)
lasso_mod.fit(rng.reshape(-1, 1), y_gen) # (features, target)
lasso_mod_pred = lasso_mod.predict(rng.reshape(-1,1)) # (features)
# plotting
plt.figure(figsize=(10, 8));
plt.scatter(rng, y_gen); # (features, target)... | Coefficient Estimation: [0.48530263]
| MIT | bonston_housing_project/Regularized Regression.ipynb | taareek/machine_learning |
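What lets Lasso drive coefficients exactly to zero (unlike ridge, which only shrinks them) is the soft-thresholding operator at the heart of its coordinate-descent solver. A minimal sketch:

```python
def soft_threshold(rho, lam):
    # Shrink by lam; anything inside [-lam, lam] snaps to exactly zero
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

print(soft_threshold(2.5, 1.0))   # 1.5
print(soft_threshold(-2.5, 1.0))  # -1.5
print(soft_threshold(0.4, 1.0))   # 0.0  <- exact zero: the feature is dropped
```

That dead zone around zero is why Lasso doubles as a feature-selection method.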
Elastic Net Regression | from sklearn.linear_model import ElasticNet
# defining model and prediction
elnet_mod = ElasticNet(alpha= 0.02, normalize= True)
elnet_mod.fit(rng.reshape(-1, 1), y_gen)
elnet_pred = elnet_mod.predict(rng.reshape(-1,1))
# plotting
plt.figure(figsize=(10, 8));
plt.scatter(rng, y_gen);
plt.plot(rng, elnet_pred);
print... | Coefficent Estimation: [0.4584509]
| MIT | bonston_housing_project/Regularized Regression.ipynb | taareek/machine_learning |
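Elastic net blends the two penalties. In scikit-learn's parameterization the objective adds `alpha * l1_ratio * |w| + 0.5 * alpha * (1 - l1_ratio) * w**2` per coefficient; a sketch of that mix:

```python
def elastic_net_penalty(w, alpha, l1_ratio):
    # l1_ratio=1 -> pure Lasso penalty, l1_ratio=0 -> pure ridge penalty
    return alpha * (l1_ratio * abs(w) + 0.5 * (1.0 - l1_ratio) * w * w)

print(elastic_net_penalty(3.0, 1.0, 1.0))  # 3.0   (L1 only)
print(elastic_net_penalty(3.0, 1.0, 0.0))  # 4.5   (L2 only, 0.5 * w^2)
print(elastic_net_penalty(3.0, 1.0, 0.5))  # 3.75  (even mix)
```

The notebook's `ElasticNet(alpha=0.02)` call leaves `l1_ratio` at its default of 0.5, i.e. an even mix of the two penalties.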
 Add Column using Expression With Azure ML Data Prep you can add a new column to data with `Dataflow.add_column` by using a Data Prep express... | import azureml.dataprep as dprep
# loading data
dflow = dprep.auto_read_file('../data/crime-spring.csv')
dflow.head(5) | _____no_output_____ | MIT | how-to-guides/add-column-using-expression.ipynb | Bhaskers-Blu-Org2/AMLDataPrepDocs |
`substring(start, length)` Add a new column "Case Category" using the `substring(start, length)` expression to extract the prefix from the "Case Number" column. | case_category = dflow.add_column(new_column_name='Case Category',
prior_column='Case Number',
expression=dflow['Case Number'].substring(0, 2))
case_category.head(5) | _____no_output_____ | MIT | how-to-guides/add-column-using-expression.ipynb | Bhaskers-Blu-Org2/AMLDataPrepDocs |
`substring(start)` Add a new column "Case Id" using the `substring(start)` expression to extract just the number from the "Case Number" column and then convert it to numeric. | case_id = dflow.add_column(new_column_name='Case Id',
prior_column='Case Number',
expression=dflow['Case Number'].substring(2))
case_id = case_id.to_number('Case Id')
case_id.head(5) | _____no_output_____ | MIT | how-to-guides/add-column-using-expression.ipynb | Bhaskers-Blu-Org2/AMLDataPrepDocs |
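The two `substring` overloads map directly onto Python slicing, which makes the intended split easy to check locally. The case number below is a hypothetical value in the Chicago-crime format, not taken from the dataset:

```python
case_number = "HY329907"          # hypothetical "Case Number" value
case_category = case_number[0:2]  # substring(0, 2) -> the two-letter prefix
case_id = int(case_number[2:])    # substring(2), then to_number
print(case_category, case_id)     # HY 329907
```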
`length()` Using the length() expression, add a new numeric column "Length", which contains the length of the string in "Primary Type". | dflow_length = dflow.add_column(new_column_name='Length',
prior_column='Primary Type',
expression=dflow['Primary Type'].length())
dflow_length.head(5) | _____no_output_____ | MIT | how-to-guides/add-column-using-expression.ipynb | Bhaskers-Blu-Org2/AMLDataPrepDocs |
`to_upper()` Using the to_upper() expression, add a new column "Upper Case", which contains the string in "Primary Type" in upper case. | dflow_to_upper = dflow.add_column(new_column_name='Upper Case',
prior_column='Primary Type',
expression=dflow['Primary Type'].to_upper())
dflow_to_upper.head(5) | _____no_output_____ | MIT | how-to-guides/add-column-using-expression.ipynb | Bhaskers-Blu-Org2/AMLDataPrepDocs |
`to_lower()` Using the to_lower() expression, add a new column "Lower Case", which contains the string in "Primary Type" in lower case. | dflow_to_lower = dflow.add_column(new_column_name='Lower Case',
prior_column='Primary Type',
expression=dflow['Primary Type'].to_lower())
dflow_to_lower.head(5) | _____no_output_____ | MIT | how-to-guides/add-column-using-expression.ipynb | Bhaskers-Blu-Org2/AMLDataPrepDocs |
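These three string expressions behave like their Python counterparts; the value below is a hypothetical "Primary Type" entry used only to illustrate the mapping:

```python
primary_type = "Theft"       # hypothetical column value
print(len(primary_type))     # length()   -> 5
print(primary_type.upper())  # to_upper() -> 'THEFT'
print(primary_type.lower())  # to_lower() -> 'theft'
```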
`col(column1) + col(column2)` Add a new column "Total" to show the result of adding the values in the "FBI Code" column to the "Community Area" column. | dflow_total = dflow.add_column(new_column_name='Total',
prior_column='FBI Code',
expression=dflow['Community Area']+dflow['FBI Code'])
dflow_total.head(5) | _____no_output_____ | MIT | how-to-guides/add-column-using-expression.ipynb | Bhaskers-Blu-Org2/AMLDataPrepDocs |
`col(column1) - col(column2)` Add a new column "Difference" to show the result of subtracting the values in the "FBI Code" column from the "Community Area" column. | dflow_diff = dflow.add_column(new_column_name='Difference',
prior_column='FBI Code',
expression=dflow['Community Area']-dflow['FBI Code'])
dflow_diff.head(5) | _____no_output_____ | MIT | how-to-guides/add-column-using-expression.ipynb | Bhaskers-Blu-Org2/AMLDataPrepDocs |
`col(column1) * col(column2)` Add a new column "Product" to show the result of multiplying the values in the "Community Area" column by the "FBI Code" column. | dflow_prod = dflow.add_column(new_column_name='Product',
prior_column='FBI Code',
expression=dflow['Community Area']*dflow['FBI Code'])
dflow_prod.head(5) | _____no_output_____ | MIT | how-to-guides/add-column-using-expression.ipynb | Bhaskers-Blu-Org2/AMLDataPrepDocs |
`col(column1) / col(column2)` Add a new column "True Quotient" to show the result of true (decimal) division of the values in the "Community Area" column by the "FBI Code" column. | dflow_true_div = dflow.add_column(new_column_name='True Quotient',
prior_column='FBI Code',
expression=dflow['Community Area']/dflow['FBI Code'])
dflow_true_div.head(5) | _____no_output_____ | MIT | how-to-guides/add-column-using-expression.ipynb | Bhaskers-Blu-Org2/AMLDataPrepDocs |
`col(column1) // col(column2)` Add a new column "Floor Quotient" to show the result of floor (integer) division of the values in the "Community Area" column by the "FBI Code" column. | dflow_floor_div = dflow.add_column(new_column_name='Floor Quotient',
prior_column='FBI Code',
expression=dflow['Community Area']//dflow['FBI Code'])
dflow_floor_div.head(5) | _____no_output_____ | MIT | how-to-guides/add-column-using-expression.ipynb | Bhaskers-Blu-Org2/AMLDataPrepDocs |
`col(column1) % col(column2)` Add a new column "Mod" to show the result of the "Community Area" column modulo the "FBI Code" column. | dflow_mod = dflow.add_column(new_column_name='Mod',
prior_column='FBI Code',
expression=dflow['Community Area']%dflow['FBI Code'])
dflow_mod.head(5) | _____no_output_____ | MIT | how-to-guides/add-column-using-expression.ipynb | Bhaskers-Blu-Org2/AMLDataPrepDocs |
`col(column1) ** col(column2)` Add a new column "Power" to show the result of exponentiation, where the base is the "Community Area" column and the exponent is the "FBI Code" column. | dflow_pow = dflow.add_column(new_column_name='Power',
prior_column='FBI Code',
expression=dflow['Community Area']**dflow['FBI Code'])
dflow_pow.head(5) | _____no_output_____ | MIT | how-to-guides/add-column-using-expression.ipynb | Bhaskers-Blu-Org2/AMLDataPrepDocs |
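The column-arithmetic expressions above follow the same semantics as Python's operators, so a pair of sample values makes the whole family easy to verify. The values are hypothetical, not taken from the dataset:

```python
community_area, fbi_code = 29, 6   # hypothetical values from one row
print(community_area + fbi_code)   # Total:          35
print(community_area - fbi_code)   # Difference:     23
print(community_area * fbi_code)   # Product:        174
print(community_area / fbi_code)   # True Quotient:  4.833...
print(community_area // fbi_code)  # Floor Quotient: 4
print(community_area % fbi_code)   # Mod:            5
print(community_area ** fbi_code)  # Power:          594823321
```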
Purpose: A basic object identification package for the lab to use. *Step 1: Import packages* | import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
#Sci-kit Image Imports
from skimage import io
from skimage import filters
from skimage.feature import canny
from skimage import measure
from scipy import ndimage as ndi
%matplotlib inline
import warnings
warnings.filterwarni... | _____no_output_____ | MIT | scripts/object_identification_basic.ipynb | hhelmbre/qdbvcella |
*Step 2: User Inputs* | file_location = '../../31.2_DG_quant.tif'
plot_name = 'practice2.png'
channel_1_color = 'Blue'
channel_2_color = 'Green' | _____no_output_____ | MIT | scripts/object_identification_basic.ipynb | hhelmbre/qdbvcella |
*Step 3: Read the image into the notebook* | #Read in the file
im = io.imread(file_location)
#Convert image to numpy array
imarray = np.array(im)
#Checking the image shape
imarray.shape | _____no_output_____ | MIT | scripts/object_identification_basic.ipynb | hhelmbre/qdbvcella |
*Step 4: Color Split* | channel_1 = im[0, :, :]
channel_2 = im[1, :, :] | _____no_output_____ | MIT | scripts/object_identification_basic.ipynb | hhelmbre/qdbvcella |
*Step 5: Visualization Check* | fig = plt.figure()
ax1 = fig.add_subplot(2,2,1)
ax1.set_title(channel_1_color)
ax1.imshow(channel_1, cmap='gray')
ax2 = fig.add_subplot(2,2,2)
ax2.set_title(channel_2_color)
ax2.imshow(channel_2, cmap='gray')
fig.set_size_inches(10.5, 10.5, forward=True) | _____no_output_____ | MIT | scripts/object_identification_basic.ipynb | hhelmbre/qdbvcella |
*Step 6: Apply a Threshold* | threshold_local = filters.threshold_otsu(channel_1)
binary_c1 = channel_1 > threshold_local
threshold_local = filters.threshold_otsu(channel_2)
binary_c2 = channel_2 > threshold_local
fig = plt.figure()
ax1 = fig.add_subplot(2,2,1)
ax1.set_title(str(channel_1_color + ' Threshold'))
ax1.imshow(binary_c1, cmap='gray')
... | _____no_output_____ | MIT | scripts/object_identification_basic.ipynb | hhelmbre/qdbvcella |
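`threshold_otsu` picks the cut that best separates foreground from background by maximizing the between-class variance over all candidate thresholds. A brute-force sketch on a toy list of pixel intensities (scikit-image's version works on a histogram and is far faster):

```python
def otsu_threshold(pixels):
    # Exhaustively test each candidate t; keep the one maximizing
    # the between-class variance w0 * w1 * (mu0 - mu1)^2
    best_t, best_var = None, -1.0
    n = len(pixels)
    for t in range(min(pixels), max(pixels)):
        lo = [p for p in pixels if p <= t]
        hi = [p for p in pixels if p > t]
        if not lo or not hi:
            continue
        w0, w1 = len(lo) / n, len(hi) / n
        mu0, mu1 = sum(lo) / len(lo), sum(hi) / len(hi)
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

pixels = [10, 10, 12, 199, 200, 201]  # two well-separated intensity clusters
t = otsu_threshold(pixels)
binary = [p > t for p in pixels]      # same binarization as channel > threshold
print(t, binary)
```

The `p > t` comparison mirrors the notebook's `channel_1 > threshold_local`, which is what turns the grayscale channel into the binary mask shown above.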
*Step 7: Fill in Objects* | filled_c1 = ndi.binary_fill_holes(binary_c1)
filled_c2 = ndi.binary_fill_holes(binary_c2) | _____no_output_____ | MIT | scripts/object_identification_basic.ipynb | hhelmbre/qdbvcella |
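`binary_fill_holes` treats any background region that cannot reach the image border as a hole and fills it. A small pure-Python sketch using breadth-first search from the border (4-connectivity; scipy's implementation is n-dimensional and much faster):

```python
from collections import deque

def fill_holes(mask):
    h, w = len(mask), len(mask[0])
    reachable = [[False] * w for _ in range(h)]
    # Seed the flood fill with every background pixel on the border
    queue = deque((r, c) for r in range(h) for c in range(w)
                  if (r in (0, h - 1) or c in (0, w - 1)) and not mask[r][c])
    for r, c in queue:
        reachable[r][c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr][nc] and not reachable[nr][nc]:
                reachable[nr][nc] = True
                queue.append((nr, nc))
    # Keep foreground, plus any background the border flood never reached (the holes)
    return [[1 if mask[r][c] or not reachable[r][c] else 0
             for c in range(w)] for r in range(h)]

ring = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
print(fill_holes(ring))  # the centre hole is filled: all ones
```

Filling holes before labeling keeps each cell body counted as one solid object rather than an annulus plus a spurious inner region.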