Dataset schema (column : string length range):
markdown : 0 – 1.02M
code : 0 – 832k
output : 0 – 1.02M
license : 3 – 36
path : 6 – 265
repo_name : 6 – 127
Direct Associations Model
def learning_function(stimuli_shown, Λ, λ, training_or_test, prev_V, prev_Vbar, stimulus_type, α):
    Λbar = T.zeros_like(Λ)
    Λbar = T.inc_subtensor(Λbar[0,:], (prev_V[2,:] > 0) * (1 - Λ[0, :]))  #Dcs
    Λbar = T.inc_subtensor(Λbar[1,:], (prev_V[1,:] > 0) * (1 - Λ[1, :]))  #Ecs
    Λbar = T.inc_subtensor(Λbar[2...
_____no_output_____
Apache-2.0
modeling/modeling code/Experiment_2_Direct_Associations.ipynb
tzbozinek/2nd-order-occasion-setting
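The truncated cell above updates associative strengths trial by trial with Theano tensors. As a rough, framework-free illustration of the kind of delta-rule update such a learning function performs, here is a plain Rescorla-Wagner sketch with a single learning rate α — an assumption for illustration, not the notebook's full excitatory/inhibitory model:

```python
import numpy as np

def rescorla_wagner_update(V, present, lam, alpha):
    """One trial of a Rescorla-Wagner-style delta rule (illustrative sketch).

    V       : (n_stim,) current associative strengths
    present : (n_stim,) 0/1 indicator of which stimuli were shown
    lam     : scalar outcome (US) on this trial
    alpha   : scalar learning rate
    """
    prediction = np.dot(V, present)     # summed prediction of shown stimuli
    delta = lam - prediction            # prediction error
    return V + alpha * delta * present  # only shown stimuli are updated

# A single stimulus repeatedly paired with the US converges toward λ = 1
V = np.zeros(3)
for _ in range(100):
    V = rescorla_wagner_update(V, np.array([1, 0, 0]), 1.0, 0.3)
```

With α = 0.3 the shown stimulus's strength approaches 1 geometrically, while unshown stimuli stay at zero.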
Generate Simulated Data with Model
n_stim = 9
n_subjects = len(data['ID'].unique())

#Initial values
R = np.zeros((n_stim, n_subjects))
overall_R = np.zeros((1, n_subjects))
v_excitatory = np.zeros((n_stim, n_subjects))
v_inhibitory = np.zeros((n_stim, n_subjects))

#Randomized parameter values - use this if you want to compare simulated vs recovered pa...
_____no_output_____
Apache-2.0
modeling/modeling code/Experiment_2_Direct_Associations.ipynb
tzbozinek/2nd-order-occasion-setting
Run Fake Data Simulation
#Run the loop
output, updates = scan(fn=learning_function,
                       sequences=[{'input': stimuli_shown_sim[:-1, ...]},
                                  {'input': big_lambda_sim},
                                  {'input': small_lambda_sim},
                                  {'input': training_or_test}],
                       ...
_____no_output_____
Apache-2.0
modeling/modeling code/Experiment_2_Direct_Associations.ipynb
tzbozinek/2nd-order-occasion-setting
Check parameter recovery
n_subjects = len(data['ID'].unique())

#Initial values
R = np.zeros((n_stim, n_subjects))

#US values
small_lambda = data.pivot(index='trialseq', values='US', columns='ID').values[:, np.newaxis, :].repeat(n_stim, axis=1).astype(float)

stim_data = []
for sub in data['ID'].unique():
    stim_data.append(data.loc[data['I...
_____no_output_____
Apache-2.0
modeling/modeling code/Experiment_2_Direct_Associations.ipynb
tzbozinek/2nd-order-occasion-setting
Fit the Model: Variational Inference
from pymc3.variational.callbacks import CheckParametersConvergence

with model:
    approx = pm.fit(method='advi', n=40000, callbacks=[CheckParametersConvergence()])

trace = approx.sample(1000)
alpha_output = pm.summary(trace, kind='stats', varnames=[i for i in model.named_vars if 'α' in i and not i in model.determinist...
_____no_output_____
Apache-2.0
modeling/modeling code/Experiment_2_Direct_Associations.ipynb
tzbozinek/2nd-order-occasion-setting
Fit the Model to Real Data
n_subjects = len(data['ID'].unique())

# Initial values
R = np.zeros((n_stim, n_subjects))  # Value estimate
overall_R = np.zeros((1, n_subjects))
v_excitatory = np.zeros((n_stim, n_subjects))
v_inhibitory = np.zeros((n_stim, n_subjects))

# US values
small_lambda = data.pivot(index='trialseq', values='US', columns='I...
_____no_output_____
Apache-2.0
modeling/modeling code/Experiment_2_Direct_Associations.ipynb
tzbozinek/2nd-order-occasion-setting
Variational Inference
from pymc3.variational.callbacks import CheckParametersConvergence

with model:
    approx = pm.fit(method='advi', n=40000, callbacks=[CheckParametersConvergence()])

trace = approx.sample(1000)
alpha_output = pm.summary(trace, kind='stats', varnames=[i for i in model.named_vars if 'α' in i and not i in model.determinist...
_____no_output_____
Apache-2.0
modeling/modeling code/Experiment_2_Direct_Associations.ipynb
tzbozinek/2nd-order-occasion-setting
Model Output
overall_R_mean = trace['estimated_overall_R'].mean(axis=0)
overall_R_sd = trace['estimated_overall_R'].std(axis=0)

sub_ids = data['ID'].unique()
subs = [np.where(data['ID'].unique() == sub)[0][0] for sub in sub_ids]

waic_output = pm.waic(trace)
waic_output

alpha_output.to_csv(os.path.join('../output/', r'2nd POS - Direc...
_____no_output_____
Apache-2.0
modeling/modeling code/Experiment_2_Direct_Associations.ipynb
tzbozinek/2nd-order-occasion-setting
Experiment 02: Deformations Experiments ETH-05

In this notebook, we are using the CLUST Dataset. The sequence used for this notebook is ETH-05.zip.
import sys
import random
import os
sys.path.append('../src')
import warnings
warnings.filterwarnings("ignore")
from PIL import Image
from utils.compute_metrics import get_metrics, get_majority_vote, log_test_metrics
from utils.split import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.d...
_____no_output_____
RSA-MD
notebooks/breathing_notebooks/1.1_deformation_experiment_scattering_ETH-05.ipynb
sgaut023/Chronic-Liver-Classification
1. Visualize Sequence of US

We are visualizing the first images from the sequence ETH-01-1, which contains 3652 US images.
directory = os.listdir('../data/02_interim/Data5')
directory.sort()

# settings
h, w = 15, 10        # for raster image
nrows, ncols = 3, 4  # array of sub-plots
figsize = [15, 8]    # figure size, inches

# prep (x,y) for extra plotting on selected sub-plots
xs = np.linspace(0, 2*np.pi, 60)  # from 0 to 2pi
ys = np.abs(...
_____no_output_____
RSA-MD
notebooks/breathing_notebooks/1.1_deformation_experiment_scattering_ETH-05.ipynb
sgaut023/Chronic-Liver-Classification
2. Create Dataset
%%time
ll_imgstemp = [plt.imread("../data/02_interim/Data5/" + dir) for dir in directory[:5]]

%%time
ll_imgs = [np.array(Image.open("../data/02_interim/Data5/" + dir).resize(size=(98, 114)), dtype='float32') for dir in directory]

%%time
ll_imgs2 = [img.reshape(1, img.shape[0], img.shape[1]) for img in ll_imgs]

# dataset ...
_____no_output_____
RSA-MD
notebooks/breathing_notebooks/1.1_deformation_experiment_scattering_ETH-05.ipynb
sgaut023/Chronic-Liver-Classification
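The resize in the cell above passes `size=(98, 114)` to PIL, which interprets it as (width, height), so the resulting array is 114 rows by 98 columns. A minimal self-contained sketch, with an in-memory dummy frame standing in for a file from `Data5`:

```python
import numpy as np
from PIL import Image

# Build a dummy grayscale "US frame" in memory (stand-in for a real file)
frame = Image.fromarray((np.random.rand(600, 800) * 255).astype("uint8"))

# PIL's resize takes (width, height), so size=(98, 114) yields a 114x98 array
resized = np.array(frame.resize(size=(98, 114)), dtype="float32")

# Add a leading channel axis, as the notebook does before stacking into a dataset
resized = resized.reshape(1, resized.shape[0], resized.shape[1])
```

The (width, height) vs (rows, cols) swap is easy to miss; checking the array shape after conversion catches it.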
3. Extract Scattering Features
M, N = dataset['img'].iloc[0].shape[1], dataset['img'].iloc[0].shape[2]
print(M, N)

# Set the parameters of the scattering transform.
J = 3

# Generate a sample signal.
scattering = Scattering2D(J, (M, N))

data = np.concatenate(dataset['img'], axis=0)
data = torch.from_numpy(data)

use_cuda = torch.cuda.is_available()
devic...
_____no_output_____
RSA-MD
notebooks/breathing_notebooks/1.1_deformation_experiment_scattering_ETH-05.ipynb
sgaut023/Chronic-Liver-Classification
4. Extract PCA Components
with open('../data/03_features/scattering_features_deformation5.pickle', 'rb') as handle:
    scattering_features = pickle.load(handle)

with open('../data/03_features/dataset_deformation5.pickle', 'rb') as handle:
    dataset = pickle.load(handle)

sc_features = scattering_features.view(scattering_features.shape[0], sca...
_____no_output_____
RSA-MD
notebooks/breathing_notebooks/1.1_deformation_experiment_scattering_ETH-05.ipynb
sgaut023/Chronic-Liver-Classification
5. Isometric Mapping Correlation with Order
with open('../data/03_features/scattering_features_deformation5.pickle', 'rb') as handle:
    scattering_features = pickle.load(handle)

with open('../data/03_features/dataset_deformation5.pickle', 'rb') as handle:
    dataset = pickle.load(handle)

sc_features = scattering_features.view(scattering_features.shape[0], sca...
(tqdm progress output: nested loops of 3 iterations each, ~11.7 s/it)
RSA-MD
notebooks/breathing_notebooks/1.1_deformation_experiment_scattering_ETH-05.ipynb
sgaut023/Chronic-Liver-Classification
Regression
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from collections import OrderedDict
import time
from sklearn.metrics import mean_squared_error, roc_auc_score, mean_absolute_error, log_loss
import sys
from gammli import GAMMLI
from gammli.dataReader import data_initialize
from ga...
_____no_output_____
MIT
examples/simulation_demo.ipynb
SelfExplainML/GAMMLI
Image Captioning with RNNs

In this exercise you will implement a vanilla recurrent neural network and use it to train a model that can generate novel captions for images.

Install h5py

The COCO dataset we will be using is stored in HDF5 format. To load HDF5 files, we will need to install the `h5py` Python package....
!pip install h5py

# As usual, a bit of setup
import time, os, json
import numpy as np
import matplotlib.pyplot as plt

from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.rnn_layers import *
from cs231n.captioning_solver import CaptioningSolver
from cs231n.classifiers.rn...
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Microsoft COCO

For this exercise we will use the 2014 release of the [Microsoft COCO dataset](http://mscoco.org/), which has become the standard testbed for image captioning. The dataset consists of 80,000 training images and 40,000 validation images, each annotated with 5 captions written by workers on Amazon Mechanica...
# Load COCO data from disk; this returns a dictionary
# We'll work with dimensionality-reduced features for this notebook, but feel
# free to experiment with the original features by changing the flag below.
data = load_coco_data(pca_features=True)

# Print out all the keys and values from the data dictionary
for k, v ...
base dir /home/purewhite/workspace/CS231n-2020-Assignment/assignment3/cs231n/datasets/coco_captioning
train_captions <class 'numpy.ndarray'> (400135, 17) int32
train_image_idxs <class 'numpy.ndarray'> (400135,) int32
val_captions <class 'numpy.ndarray'> (195954, 17) int32
val_image_idxs <class 'numpy.ndarray'> (195954...
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Look at the data

It is always a good idea to look at examples from the dataset before working with it. You can use the `sample_coco_minibatch` function from the file `cs231n/coco_utils.py` to sample minibatches of data from the data structure returned from `load_coco_data`. Run the following to sample a small minibatch ...
# Sample a minibatch and show the images and captions
batch_size = 3

captions, features, urls = sample_coco_minibatch(data, batch_size=batch_size)
for i, (caption, url) in enumerate(zip(captions, urls)):
    plt.imshow(image_from_url(url))
    plt.axis('off')
    caption_str = decode_captions(caption, data['idx_to_wor...
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Recurrent Neural Networks

As discussed in lecture, we will use recurrent neural network (RNN) language models for image captioning. The file `cs231n/rnn_layers.py` contains implementations of different layer types that are needed for recurrent neural networks, and the file `cs231n/classifiers/rnn.py` uses these layers ...
N, D, H = 3, 10, 4
x = np.linspace(-0.4, 0.7, num=N*D).reshape(N, D)
prev_h = np.linspace(-0.2, 0.5, num=N*H).reshape(N, H)
Wx = np.linspace(-0.1, 0.9, num=D*H).reshape(D, H)
Wh = np.linspace(-0.3, 0.7, num=H*H).reshape(H, H)
b = np.linspace(-0.2, 0.4, num=H)

next_h, _ = rnn_step_forward(x, prev_h, Wx, Wh, b)
expecte...
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
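The assignment leaves `rnn_step_forward` to the reader; below is a reference sketch of the standard vanilla-RNN recurrence, next_h = tanh(x·Wx + prev_h·Wh + b), reusing the check cell's inputs. This is one common implementation, not necessarily the assignment's exact solution:

```python
import numpy as np

def rnn_step_forward(x, prev_h, Wx, Wh, b):
    """Single vanilla-RNN step: next_h = tanh(x @ Wx + prev_h @ Wh + b)."""
    next_h = np.tanh(x @ Wx + prev_h @ Wh + b)
    cache = (x, prev_h, Wx, Wh, next_h)  # saved for the backward pass
    return next_h, cache

N, D, H = 3, 10, 4
x = np.linspace(-0.4, 0.7, num=N * D).reshape(N, D)
prev_h = np.linspace(-0.2, 0.5, num=N * H).reshape(N, H)
Wx = np.linspace(-0.1, 0.9, num=D * H).reshape(D, H)
Wh = np.linspace(-0.3, 0.7, num=H * H).reshape(H, H)
b = np.linspace(-0.2, 0.4, num=H)
next_h, _ = rnn_step_forward(x, prev_h, Wx, Wh, b)
```

Because tanh saturates, every entry of `next_h` lies strictly inside (-1, 1).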
Vanilla RNN: step backward

In the file `cs231n/rnn_layers.py` implement the `rnn_step_backward` function. After doing so, run the following to numerically gradient check your implementation. You should see errors on the order of `e-8` or less.
from cs231n.rnn_layers import rnn_step_forward, rnn_step_backward

np.random.seed(231)
N, D, H = 4, 5, 6
x = np.random.randn(N, D)
h = np.random.randn(N, H)
Wx = np.random.randn(D, H)
Wh = np.random.randn(H, H)
b = np.random.randn(H)

out, cache = rnn_step_forward(x, h, Wx, Wh, b)
dnext_h = np.random.randn(*out.shape)
...
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
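The numeric gradient check used above can be sketched with central differences; `num_gradient` below is a simplified stand-in for the course's `eval_numerical_gradient`, verified here against an analytic gradient:

```python
import numpy as np

def num_gradient(f, x, h=1e-5):
    """Central-difference numerical gradient of scalar function f at array x."""
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=["multi_index"])
    while not it.finished:
        ix = it.multi_index
        old = x[ix]
        x[ix] = old + h; fp = f(x)   # f(x + h) in one coordinate
        x[ix] = old - h; fm = f(x)   # f(x - h) in that coordinate
        x[ix] = old                  # restore
        grad[ix] = (fp - fm) / (2 * h)
        it.iternext()
    return grad

# f(x) = sum(x**2) has analytic gradient 2x; the two should agree closely
x = np.random.randn(4, 5)
err = np.max(np.abs(num_gradient(lambda x: np.sum(x ** 2), x) - 2 * x))
```

For a quadratic, the central-difference truncation error vanishes, so the residual is pure floating-point noise.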
Vanilla RNN: forward

Now that you have implemented the forward and backward passes for a single timestep of a vanilla RNN, you will combine these pieces to implement an RNN that processes an entire sequence of data.

In the file `cs231n/rnn_layers.py`, implement the function `rnn_forward`. This should be implemented using...
N, T, D, H = 2, 3, 4, 5
x = np.linspace(-0.1, 0.3, num=N*T*D).reshape(N, T, D)
h0 = np.linspace(-0.3, 0.1, num=N*H).reshape(N, H)
Wx = np.linspace(-0.2, 0.4, num=D*H).reshape(D, H)
Wh = np.linspace(-0.4, 0.1, num=H*H).reshape(H, H)
b = np.linspace(-0.7, 0.1, num=H)

h, _ = rnn_forward(x, h0, Wx, Wh, b)
expected_h = np...
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
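A sketch of what `rnn_forward` is asked to do: loop the single-step tanh recurrence over the time axis. The recurrence is inlined here so the snippet is self-contained; the assignment's version would call `rnn_step_forward` instead:

```python
import numpy as np

def rnn_forward(x, h0, Wx, Wh, b):
    """Run a vanilla RNN over a full (N, T, D) sequence, one timestep at a time."""
    N, T, D = x.shape
    H = h0.shape[1]
    h = np.zeros((N, T, H))
    prev_h = h0
    for t in range(T):  # each step feeds the previous hidden state forward
        prev_h = np.tanh(x[:, t, :] @ Wx + prev_h @ Wh + b)
        h[:, t, :] = prev_h
    return h

N, T, D, H = 2, 3, 4, 5
x = np.linspace(-0.1, 0.3, num=N * T * D).reshape(N, T, D)
h0 = np.linspace(-0.3, 0.1, num=N * H).reshape(N, H)
Wx = np.linspace(-0.2, 0.4, num=D * H).reshape(D, H)
Wh = np.linspace(-0.4, 0.1, num=H * H).reshape(H, H)
b = np.linspace(-0.7, 0.1, num=H)
h = rnn_forward(x, h0, Wx, Wh, b)
```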
Vanilla RNN: backward

In the file `cs231n/rnn_layers.py`, implement the backward pass for a vanilla RNN in the function `rnn_backward`. This should run back-propagation over the entire sequence, making calls to the `rnn_step_backward` function that you defined earlier. You should see errors on the order of `e-6` or less.
np.random.seed(231)

N, D, T, H = 2, 3, 10, 5
x = np.random.randn(N, T, D)
h0 = np.random.randn(N, H)
Wx = np.random.randn(D, H)
Wh = np.random.randn(H, H)
b = np.random.randn(H)

out, cache = rnn_forward(x, h0, Wx, Wh, b)
dout = np.random.randn(*out.shape)
dx, dh0, dWx, dWh, db = rnn_backward(dout, cache)

fx = lam...
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Word embedding: forward

In deep learning systems, we commonly represent words using vectors. Each word of the vocabulary will be associated with a vector, and these vectors will be learned jointly with the rest of the system.

In the file `cs231n/rnn_layers.py`, implement the function `word_embedding_forward` to convert ...
N, T, V, D = 2, 4, 5, 3
x = np.asarray([[0, 3, 1, 2], [2, 1, 0, 3]])
W = np.linspace(0, 1, num=V*D).reshape(V, D)

out, _ = word_embedding_forward(x, W)
expected_out = np.asarray([
    [[ 0.,          0.07142857,  0.14285714],
     [ 0.64285714,  0.71428571,  0.78571429],
     [ 0.21428571,  0.28571429,  0.35714286],
     [ 0.428...
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
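The forward pass of a word embedding reduces to integer fancy indexing: each word index selects a row of the embedding matrix. A sketch with the same shapes as the check cell above:

```python
import numpy as np

# Word-embedding lookup is just fancy indexing: out[n, t] = W[x[n, t]]
N, T, V, D = 2, 4, 5, 3
x = np.asarray([[0, 3, 1, 2], [2, 1, 0, 3]])  # (N, T) word indices
W = np.linspace(0, 1, num=V * D).reshape(V, D)  # (V, D) embedding matrix
out = W[x]  # broadcasts the lookup to shape (N, T, D)
```

No matrix multiply is needed; the gradient therefore flows only into the rows that were actually looked up.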
Word embedding: backward

Implement the backward pass for the word embedding function in the function `word_embedding_backward`. After doing so, run the following to numerically gradient check your implementation. You should see an error on the order of `e-11` or less.
np.random.seed(231)

N, T, V, D = 50, 3, 5, 6
x = np.random.randint(V, size=(N, T))
W = np.random.randn(V, D)

out, cache = word_embedding_forward(x, W)
dout = np.random.randn(*out.shape)
dW = word_embedding_backward(dout, cache)

f = lambda W: word_embedding_forward(x, W)[0]
dW_num = eval_numerical_gradient_array(f, W...
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
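The backward pass scatter-adds each upstream gradient row into the embedding row of the word that produced it. `np.add.at` accumulates correctly for repeated word indices, where plain fancy-indexed assignment would silently drop duplicates. A small sketch:

```python
import numpy as np

N, T, V, D = 2, 3, 5, 4
x = np.array([[0, 1, 1], [2, 4, 0]])  # word 0 and word 1 each appear twice
dout = np.ones((N, T, D))             # upstream gradient, all ones for clarity
dW = np.zeros((V, D))
np.add.at(dW, x, dout)                # dW[v] accumulates over every (n, t) with x[n, t] == v
```

Each row of `dW` ends up equal to the number of times that word occurred, times the (unit) upstream gradient.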
Temporal Affine layer

At every timestep we use an affine function to transform the RNN hidden vector at that timestep into scores for each word in the vocabulary. Because this is very similar to the affine layer that you implemented in assignment 2, we have provided this function for you in the `temporal_affine_forward...
np.random.seed(231)

# Gradient check for temporal affine layer
N, T, D, M = 2, 3, 4, 5
x = np.random.randn(N, T, D)
w = np.random.randn(D, M)
b = np.random.randn(M)

out, cache = temporal_affine_forward(x, w, b)
dout = np.random.randn(*out.shape)

fx = lambda x: temporal_affine_forward(x, w, b)[0]
fw = lambda w: temp...
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Temporal Softmax loss

In an RNN language model, at every timestep we produce a score for each word in the vocabulary. We know the ground-truth word at each timestep, so we use a softmax loss function to compute loss and gradient at each timestep. We sum the losses over time and average them over the minibatch. However t...
# Sanity check for temporal softmax loss
from cs231n.rnn_layers import temporal_softmax_loss

N, T, V = 100, 1, 10

def check_loss(N, T, V, p):
    x = 0.001 * np.random.randn(N, T, V)
    y = np.random.randint(V, size=(N, T))
    mask = np.random.rand(N, T) <= p
    print(temporal_softmax_loss(x, y, mask)[0])

check...
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
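A compact sketch of a masked temporal softmax loss (a simplified, loss-only version of the function being sanity-checked above): with near-zero scores, every word is roughly equally likely, so the loss per timestep should sit near log(V):

```python
import numpy as np

def temporal_softmax_loss(x, y, mask):
    """Masked softmax cross-entropy, summed over time, averaged over the minibatch.

    x    : (N, T, V) vocabulary scores
    y    : (N, T) ground-truth word indices
    mask : (N, T) boolean, False where the caption is padding (<NULL>)
    """
    N, T, V = x.shape
    flat = x.reshape(N * T, V)
    flat = flat - flat.max(axis=1, keepdims=True)  # numerical stability
    log_p = flat - np.log(np.exp(flat).sum(axis=1, keepdims=True))
    losses = -log_p[np.arange(N * T), y.reshape(N * T)]
    return np.sum(mask.reshape(N * T) * losses) / N

np.random.seed(0)
N, T, V = 100, 1, 10
x = 0.001 * np.random.randn(N, T, V)          # tiny scores: near-uniform softmax
y = np.random.randint(V, size=(N, T))
loss = temporal_softmax_loss(x, y, np.ones((N, T), dtype=bool))
```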
RNN for image captioning

Now that you have implemented the necessary layers, you can combine them to build an image captioning model. Open the file `cs231n/classifiers/rnn.py` and look at the `CaptioningRNN` class. Implement the forward and backward pass of the model in the `loss` function. For now you only need to impl...
N, D, W, H = 10, 20, 30, 40
word_to_idx = {'<NULL>': 0, 'cat': 2, 'dog': 3}
V = len(word_to_idx)
T = 13

model = CaptioningRNN(word_to_idx,
                      input_dim=D,
                      wordvec_dim=W,
                      hidden_dim=H,
                      cell_type='rnn',
                      dtype=np.float64)

# Set all model parameters to fixed values
for k, v ...
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Run the following cell to perform numeric gradient checking on the `CaptioningRNN` class; you should see errors around the order of `e-6` or less.
np.random.seed(231)

batch_size = 2
timesteps = 3
input_dim = 4
wordvec_dim = 5
hidden_dim = 6
word_to_idx = {'<NULL>': 0, 'cat': 2, 'dog': 3}
vocab_size = len(word_to_idx)

captions = np.random.randint(vocab_size, size=(batch_size, timesteps))
features = np.random.randn(batch_size, input_dim)

model = CaptioningRNN(wo...
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Overfit small data

Similar to the `Solver` class that we used to train image classification models on the previous assignment, on this assignment we use a `CaptioningSolver` class to train image captioning models. Open the file `cs231n/captioning_solver.py` and read through the `CaptioningSolver` class; it should look ...
np.random.seed(231)

small_data = load_coco_data(max_train=50)

small_rnn_model = CaptioningRNN(
    cell_type='rnn',
    word_to_idx=data['word_to_idx'],
    input_dim=data['train_features'].shape[1],
    hidden_dim=512,
    wordvec_dim=256,
)

small_rnn_solver = CaptioningSolver(...
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Print final training loss. You should see a final loss of less than 0.1.
print('Final loss: ', small_rnn_solver.loss_history[-1])
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
Test-time sampling

Unlike classification models, image captioning models behave very differently at training time and at test time. At training time, we have access to the ground-truth caption, so we feed ground-truth words as input to the RNN at each timestep. At test time, we sample from the distribution over the voc...
for split in ['train', 'val']:
    minibatch = sample_coco_minibatch(small_data, split=split, batch_size=2)
    gt_captions, features, urls = minibatch
    gt_captions = decode_captions(gt_captions, data['idx_to_word'])

    sample_captions = small_rnn_model.sample(features)
    sample_captions = decode_captions(sample...
_____no_output_____
MIT
assignment3/RNN_Captioning.ipynb
Purewhite2019/CS231n-2020-Assignment
The noise scattering at a compressor inlet and outlet

In this example we extract the scattering of noise at a compressor inlet and outlet. In addition to measuring the pressure with flush-mounted microphones, we will use the temperature and flow velocity that was acqui...
import numpy
import matplotlib.pyplot as plt
import acdecom
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
The compressor intake and outlet have circular cross sections with radii of 0.026 m and 0.028 m, respectively. The highest frequency of interest is 3200 Hz.
section = "circular"
radius_intake = 0.026  # m
radius_outlet = 0.028  # m
f_max = 3200  # Hz
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
During the test, test ducts were mounted to the intake and outlet. Those ducts were each equipped with three microphones. The first microphone was located 0.073 m from the intake and 1.17 m from the outlet.
distance_intake = 0.073  # m
distance_outlet = 1.17  # m
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
To analyze the measurement data, we create objects for the intake and the outlet test pipes.
td_intake = acdecom.WaveGuide(dimensions=(radius_intake,), cross_section=section, f_max=f_max,
                              damping="kirchoff", distance=distance_intake, flip_flow=True)
td_outlet = acdecom.WaveGuide(dimensions=(radius_outlet,), cross_section=section, f_max=f_max,
                              damping="kirchoff", ...
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
Note: The standard flow direction is in the $P_+$ direction. Therefore, on the intake side, the Mach number must either be set negative or the argument *flip_flow* must be set to *True*.

2. Sensor Positions

We define lists with microphone positions at the intake and outlet and assign them to the *WaveGuides*...
z_intake = [0, 0.043, 0.324]  # m
r_intake = [radius_intake, radius_intake, radius_intake]  # m
phi_intake = [0, 180, 0]  # deg

z_outlet = [0, 0.054, 0.284]  # m
r_outlet = [radius_outlet, radius_outlet, radius_outlet]  # m
phi_outlet = [0, 180, 0]  # deg

td_intake.set_microphone_positions(z_intake, r_intake, phi_int...
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
3. Decomposition

Next, we read the measurement data. The measurement must be pre-processed in a format that is understood by the *WaveGuide* object. This is generally a numpy.ndarray, wherein the columns contain the measurement data, such as the measured frequency, the pressure values for that frequency, t...
pressure = numpy.loadtxt("data/turbo.txt",dtype=complex, delimiter=",", skiprows=1)
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
We review the file's header to understand how the data is stored in our input file.
with open("data/turbo.txt") as pressure_file:
    print(pressure_file.readline().split(","))
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
The Mach numbers at the intake and outlet are stored in columns 0 and 1, the temperatures in columns 2 and 3, and the frequency in column 4. The intake microphones (1, 2, and 3) are in columns 5, 6, and 7. The outlet microphones (4, 5, and 6) are in columns 8, 9, and 10. The case number is in the last column.
Machnumber_intake = 0
Machnumber_outlet = 1
temperature_intake = 2
temperature_outlet = 3
f = 4
mics_intake = [5, 6, 7]
mics_outlet = [8, 9, 10]
case = -1
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
Next, we decompose the sound fields into the propagating modes. We decompose the sound fields on the intake and outlet sides of the duct, using the two *WaveGuide* objects defined earlier.
decomp_intake, headers_intake = td_intake.decompose(pressure, f, mics_intake,
                                                    temperature_col=temperature_intake,
                                                    case_col=case,
                                                    Mach_col=Machnumber_intake)
decomp_outlet, headers_outlet = td_outlet.decompose(pressure, f, mics_outlet,
                                                    temperature_col=temperature_ou...
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
.. note:: The decomposition may show warnings for ill-conditioned modal matrices. This typically happens for frequencies close to the cut-on of a mode. However, it can also indicate that the microphone array is unable to separate the modes. The condition number of the wave decomposition is stored in the data return...
print(headers_intake)
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
We use that information to extract the modal data.
minusmodes = [1]  # from headers_intake
plusmodes = [0]
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
Furthermore, we acquire the unique decomposed frequency points.
frequs = numpy.abs(numpy.unique(decomp_intake[:, headers_intake.index("f")]))
nof = frequs.shape[0]
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
For each of the frequencies, we can compute the scattering matrix by solving a linear system of equations $S = p_+ p_-^{-1}$, where $S$ is the scattering matrix and $p_{\pm}$ are matrices containing the acoustic modes placed in rows and the different test cases placed in columns.

Note: Details for the computation of the S...
S = numpy.zeros((2, 2, nof), dtype=complex)
for fIndx, f in enumerate(frequs):
    frequ_rows = numpy.where(decomp_intake[:, headers_intake.index("f")] == f)
    ppm_intake = decomp_intake[frequ_rows]
    ppm_outlet = decomp_outlet[frequ_rows]
    pp = numpy.concatenate((ppm_intake[:, plusmodes].T, ppm_outlet[:, plusmode...
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
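A relation of the form $S = p_+ p_-^{-1}$ can be solved robustly as a least-squares problem whenever there are at least as many independent test cases as modes. A self-contained numpy sketch with synthetic modal data (`S_true`, `p_in`, and `p_out` are made up for illustration and are not the compressor measurements):

```python
import numpy as np

# Hypothetical setup: 2 modes, 4 independent test cases. Rows of p_in/p_out
# hold modal amplitudes; the scattering matrix satisfies p_out = S @ p_in.
rng = np.random.default_rng(0)
S_true = np.array([[0.3 + 0.1j, 0.8], [0.7, 0.2 - 0.2j]])
p_in = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
p_out = S_true @ p_in

# Solve p_in.T @ S.T = p_out.T in the least-squares sense; with a consistent,
# overdetermined system this recovers S exactly and is robust to extra cases.
S_est = np.linalg.lstsq(p_in.T, p_out.T, rcond=None)[0].T
```

With noisy measurements, the least-squares solution averages over the redundant test cases instead of inverting a possibly ill-conditioned square matrix.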
5. Plot

Finally, we can plot the transmission and reflection coefficients at the intake and outlet.
plt.plot(frequs, numpy.abs(S[0, 0, :]), ls="-", color="#67A3C1", label="Reflection Intake")
plt.plot(frequs, numpy.abs(S[0, 1, :]), ls="--", color="#67A3C1", label="Transmission Intake")
plt.plot(frequs, numpy.abs(S[1, 1, :]), ls="-", color="#D38D7B", label="Reflection Outlet")
plt.plot(frequs, numpy.abs(S[1, 0, :]), l...
_____no_output_____
MIT
docs/build/html/_downloads/717b2ab272afe0e7360766f751fcd5b0/plot_turbo.ipynb
YinLiu-91/acdecom
PCA with MaxAbsScaler

This code template is for simple Principal Component Analysis (PCA) along with feature scaling via MaxAbsScaler in Python, used as a dimensionality-reduction technique. PCA decomposes a multivariate dataset into a set of successive orthogonal components that explain a maximum amount of the variance. ...
import warnings
import itertools
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
from sklearn.decomposition import PCA
from sklearn.preprocessing import LabelEncoder, MaxAbsScaler

warnings.filterwarnings('ignore')
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Initialization

Filepath of the CSV file.
#filepath
file_path = ''
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
List of features required for model training.
#x_values
features = []
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Target feature for prediction.
#y_value
target = ''
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Data Fetching

Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file from its storage path, and the head function to display the first rows.
df = pd.read_csv(file_path)
df.head()
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Feature Selection

Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to...
X = df[features]
Y = df[target]
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Data Preprocessing

Since the majority of the machine learning models in the sklearn library don't handle string category data or null values, we have to explicitly remove or replace them. The snippet below has functions which remove null values, if any exist, and convert the string class data in the da...
def NullClearner(df):
    if(isinstance(df, pd.Series) and (df.dtype in ["float64", "int64"])):
        df.fillna(df.mean(), inplace=True)
        return df
    elif(isinstance(df, pd.Series)):
        df.fillna(df.mode()[0], inplace=True)
        return df
    else:
        return df

def EncodeX(df):
    return pd.get_dummies(df)...
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
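A compact, runnable sketch of the preprocessing described above — mean-fill numeric nulls, mode-fill categorical nulls, then one-hot encode strings. The `null_clearner` helper and the tiny DataFrame are illustrative stand-ins for the notebook's `NullClearner`/`EncodeX` pair and its real data:

```python
import pandas as pd

def null_clearner(col):
    """Fill numeric nulls with the mean and categorical nulls with the mode."""
    if pd.api.types.is_numeric_dtype(col):
        return col.fillna(col.mean())
    return col.fillna(col.mode()[0])

# Toy frame: one numeric column with a gap, one string column with a gap
df = pd.DataFrame({"age": [20.0, None, 40.0], "city": ["a", "b", None]})
df = df.apply(null_clearner)   # column-wise null handling
df = pd.get_dummies(df)        # one-hot encode the remaining string column
```

After this, the frame has no nulls and only numeric columns, which is what most sklearn estimators require.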
Correlation Map

In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
f, ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt='.1f', ax=ax, mask=matrix)
plt.show()
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Data Rescaling

We use sklearn.preprocessing.MaxAbsScaler to scale each feature by its maximum absolute value. This estimator scales and translates each feature individually such that the maximal absolute value of each feature in the training set will be 1.0. It does not shift/center the data, and thus does not destroy any spa...
X_Scaled = MaxAbsScaler().fit_transform(X)
X = pd.DataFrame(X_Scaled, columns=X.columns)
X.head()
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Choosing the number of components

A vital part of using PCA in practice is the ability to estimate how many components are needed to describe the data. This can be determined by looking at the cumulative explained variance ratio as a function of the number of components. This curve quantifies how much of the total, dime...
pcaComponents = PCA().fit(X_Scaled)
plt.plot(np.cumsum(pcaComponents.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
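Beyond eyeballing the cumulative-variance curve, the threshold can be applied directly. A sketch that picks the smallest component count reaching 95% cumulative explained variance; the synthetic data and the 95% cutoff are illustrative assumptions, not part of the template:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic data with a few dominant directions (stand-in for the scaled features)
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10)) @ np.diag([5, 4, 3, 1, 1, 0.5, 0.5, 0.1, 0.1, 0.1])

cum_var = np.cumsum(PCA().fit(X).explained_variance_ratio_)

# Smallest number of components whose cumulative explained variance reaches 95%
n_components_95 = int(np.searchsorted(cum_var, 0.95) + 1)
```

The resulting count can be passed straight to `PCA(n_components=...)` in place of a hand-picked value.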
Scree plot

The scree plot helps you to determine the optimal number of components. The eigenvalue of each component in the initial solution is plotted. Generally, you want to extract the components on the steep slope; the components on the shallow slope contribute little to the solution.
PC_values = np.arange(pcaComponents.n_components_) + 1
plt.plot(PC_values, pcaComponents.explained_variance_ratio_, 'ro-', linewidth=2)
plt.title('Scree Plot')
plt.xlabel('Principal Component')
plt.ylabel('Proportion of Variance Explained')
plt.show()
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Model

PCA is used to decompose a multivariate dataset into a set of successive orthogonal components that explain a maximum amount of the variance. In scikit-learn, PCA is implemented as a transformer object that learns components in its fit method and can be used on new data to project it onto these components.

Tuning ...
pca = PCA(n_components=8)
pcaX = pd.DataFrame(data=pca.fit_transform(X_Scaled))
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Output DataFrame
finalDf = pd.concat([pcaX, Y], axis=1)
finalDf.head()
_____no_output_____
Apache-2.0
Dimensionality Reduction/PCA/PCA_MaxAbsScaler.ipynb
mohityogesh44/ds-seed
Parallel, Multi-Objective BO in BoTorch with qEHVI and qParEGO

In this tutorial, we illustrate how to implement a simple multi-objective (MO) Bayesian Optimization (BO) closed loop in BoTorch. We use the parallel ParEGO ($q$ParEGO) [1] and parallel Expected Hypervolume Improvement ($q$EHVI) [1] acquisition functions to...
import os
import torch

tkwargs = {
    "dtype": torch.double,
    "device": torch.device("cuda" if torch.cuda.is_available() else "cpu"),
}
SMOKE_TEST = os.environ.get("SMOKE_TEST")
_____no_output_____
MIT
BO_trials/multi_objective_bo.ipynb
michelleliu1027/Bayesian_PV
Problem setup
from botorch.test_functions.multi_objective import BraninCurrin

problem = BraninCurrin(negate=True).to(**tkwargs)
_____no_output_____
MIT
BO_trials/multi_objective_bo.ipynb
michelleliu1027/Bayesian_PV
Model initialization

We use a multi-output `SingleTaskGP` to model the two objectives with a homoskedastic Gaussian likelihood and an inferred noise level. The models are initialized with $2(d+1)=6$ points drawn randomly from $[0,1]^2$.
from botorch.models.gp_regression import SingleTaskGP
from botorch.models.transforms.outcome import Standardize
from gpytorch.mlls.exact_marginal_log_likelihood import ExactMarginalLogLikelihood
from botorch.utils.transforms import unnormalize
from botorch.utils.sampling import draw_sobol_samples

def generate_initial...
_____no_output_____
MIT
BO_trials/multi_objective_bo.ipynb
michelleliu1027/Bayesian_PV
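The $2(d+1)$ sizing rule above is easy to check without BoTorch; a hedged sketch with plain uniform draws standing in for the scaled Sobol samples (`d`, `n_init`, and the generator seed are illustrative):

```python
import numpy as np

d = 2                      # input dimension of BraninCurrin
n_init = 2 * (d + 1)      # 6 initial points, matching the tutorial
rng = np.random.default_rng(0)
train_x = rng.uniform(size=(n_init, d))  # uniform stand-in for Sobol draws in [0,1]^2
```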
Define a helper function that performs the essential BO step for $q$EHVI

The helper function below initializes the $q$EHVI acquisition function, optimizes it, and returns the batch $\{x_1, x_2, \ldots, x_q\}$ along with the observed function values. For this example, we'll use a small batch of $q=4$. Passing the keyword...
from botorch.optim.optimize import optimize_acqf, optimize_acqf_list
from botorch.acquisition.objective import GenericMCObjective
from botorch.utils.multi_objective.scalarization import get_chebyshev_scalarization
from botorch.utils.multi_objective.box_decompositions.non_dominated import NondominatedPartitioning
from b...
_____no_output_____
MIT
BO_trials/multi_objective_bo.ipynb
michelleliu1027/Bayesian_PV
Define a helper function that performs the essential BO step for $q$ParEGO

The helper function below similarly initializes $q$ParEGO, optimizes it, and returns the batch $\{x_1, x_2, \ldots, x_q\}$ along with the observed function values. $q$ParEGO uses random augmented Chebyshev scalarization with the `qExpectedImprove...
def optimize_qparego_and_get_observation(model, train_obj, sampler):
    """Samples a set of random weights for each candidate in the batch,
    performs sequential greedy optimization of the qParEGO acquisition function,
    and returns a new candidate and observation."""
    acq_func_list = []
    for _ in range(BATCH_S...
_____no_output_____
MIT
BO_trials/multi_objective_bo.ipynb
michelleliu1027/Bayesian_PV
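For intuition, the bare augmented Chebyshev scalarization can be sketched in a few lines of numpy. Note this is a simplification: BoTorch's `get_chebyshev_scalarization` also normalizes the objectives before scalarizing, and the weight vector and `alpha` value below are illustrative.

```python
import numpy as np

def augmented_chebyshev(y, weights, alpha=0.05):
    """Scalarize objectives for maximization: weighted min-term plus a small sum-term."""
    wy = y * weights                      # (n, m) weighted objective values
    return wy.min(axis=1) + alpha * wy.sum(axis=1)

w = np.array([0.7, 0.3])                  # random convex weights in the real algorithm
scores = augmented_chebyshev(np.array([[1.0, 2.0], [2.0, 1.0]]), w)
print(scores)
```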
Perform Bayesian Optimization loop with $q$EHVI and $q$ParEGO

The Bayesian optimization "loop" for a batch size of $q$ simply iterates the following steps:

1. given a surrogate model, choose a batch of points $\{x_1, x_2, \ldots, x_q\}$
2. observe $f(x)$ for each $x$ in the batch
3. update the surrogate model

Just for il...
from botorch import fit_gpytorch_model
from botorch.acquisition.monte_carlo import qExpectedImprovement, qNoisyExpectedImprovement
from botorch.sampling.samplers import SobolQMCNormalSampler
from botorch.exceptions import BadInitialCandidatesWarning
from botorch.utils.multi_objective.pareto import is_non_dominated
from...
Trial 1 of 3 .........................
Trial 2 of 3 .........................
Trial 3 of 3 .........................
MIT
BO_trials/multi_objective_bo.ipynb
michelleliu1027/Bayesian_PV
Plot the results

The plot below shows a common metric of multi-objective optimization performance, the log hypervolume difference: the log difference between the hypervolume of the true Pareto front and the hypervolume of the approximate Pareto front identified by each algorithm. The log hypervolume difference is p...
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline

def ci(y):
    return 1.96 * y.std(axis=0) / np.sqrt(N_TRIALS)

iters = np.arange(N_BATCH + 1) * BATCH_SIZE
log_hv_difference_qparego = np.log10(problem.max_hv - np.asarray(hvs_qparego_all))
log_hv_difference_qehvi = np.log10(problem.max_hv ...
_____no_output_____
MIT
BO_trials/multi_objective_bo.ipynb
michelleliu1027/Bayesian_PV
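For the two-objective case in this tutorial, the hypervolume behind this metric is just the area jointly dominated by the Pareto front above a reference point. A minimal numpy sketch for 2-D maximization (BoTorch's `Hypervolume` utility handles the general case; the function name and test points here are illustrative):

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Area dominated by `front` (maximization) above the reference point `ref`."""
    pts = front[(front > ref).all(axis=1)]   # keep points that dominate the reference
    pts = pts[np.argsort(-pts[:, 0])]        # sweep by first objective, descending
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y > prev_y:                       # each non-dominated point adds a strip
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

front = np.array([[3.0, 1.0], [2.0, 2.0], [1.0, 3.0]])
print(hypervolume_2d(front, np.array([0.0, 0.0])))  # 6.0
```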
Plot the observations colored by iteration

To examine the optimization process from another perspective, we plot the collected observations under each algorithm, where the color corresponds to the BO iteration at which the point was collected. The plot on the right for $q$EHVI shows that $q$EHVI quickly identifies the p...
from matplotlib.cm import ScalarMappable

fig, axes = plt.subplots(1, 3, figsize=(17, 5))
algos = ["Sobol", "qParEGO", "qEHVI"]
cm = plt.cm.get_cmap('viridis')
batch_number = torch.cat(
    [torch.zeros(6), torch.arange(1, N_BATCH+1).repeat(BATCH_SIZE, 1).t().reshape(-1)]
).numpy()
for i, train_obj in enumerate((trai...
_____no_output_____
MIT
BO_trials/multi_objective_bo.ipynb
michelleliu1027/Bayesian_PV
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/data-science-ipython-notebooks).

Amazon Web Services (AWS)

* SSH to EC2
* S3cmd
* s3-parallel-put
* S3DistCp
* Redshift
* Kinesis
* Lambda

SSH to EC2

Connect to an Ubuntu EC2 instance th...
!ssh -i key.pem ubuntu@ipaddress
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Connect to an Amazon Linux EC2 instance through SSH with the given key:
!ssh -i key.pem ec2-user@ipaddress
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
S3cmd

Before I discovered [S3cmd](http://s3tools.org/s3cmd), I had been using the [S3 console](http://aws.amazon.com/console/) to do basic operations and [boto](https://boto.readthedocs.org/en/latest/) to do more of the heavy lifting. However, sometimes I just want to hack away at a command line to do my work. I've foun...
!sudo apt-get install s3cmd
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Running the following command will prompt you to enter your AWS access key and AWS secret key. To follow security best practices, make sure you are using an IAM account as opposed to the root account. I also suggest enabling GPG encryption, which will encrypt your data at rest, and enabling HTTPS to encrypt your data ...
!s3cmd --configure
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Frequently used S3cmds:
# List all buckets
!s3cmd ls

# List the contents of the bucket
!s3cmd ls s3://my-bucket-name

# Upload a file into the bucket (private)
!s3cmd put myfile.txt s3://my-bucket-name/myfile.txt

# Upload a file into the bucket (public)
!s3cmd put --acl-public --guess-mime-type myfile.txt s3://my-bucket-name/myfile.txt

# R...
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
s3-parallel-put

[s3-parallel-put](https://github.com/twpayne/s3-parallel-put.git) is a great tool for uploading multiple files to S3 in parallel.

Install package dependencies:
!sudo apt-get install boto
!sudo apt-get install git
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Clone the s3-parallel-put repo:
!git clone https://github.com/twpayne/s3-parallel-put.git
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Setup AWS keys for s3-parallel-put:
!export AWS_ACCESS_KEY_ID=XXX
!export AWS_SECRET_ACCESS_KEY=XXX
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Sample usage:
!s3-parallel-put --bucket=bucket --prefix=PREFIX SOURCE
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Dry run of putting files in the current directory on S3 with the given S3 prefix, do not check first if they exist:
!s3-parallel-put --bucket=bucket --host=s3.amazonaws.com --put=stupid --dry-run --prefix=prefix/ ./
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
S3DistCp

[S3DistCp](http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/UsingEMR_s3distcp.html) is an extension of DistCp that is optimized to work with Amazon S3. S3DistCp is useful for combining smaller files and aggregating them together, taking in a pattern and target file to combine smaller input files...
!rvm --default ruby-1.8.7-p374
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
The EMR command line below executes the following:

* Creates a master node and slave nodes of type m1.small
* Runs S3DistCp on the source bucket location and concatenates files that match the date regular expression, resulting in files that are roughly 1024 MB or 1 GB
* Places the results in the destination bucket
!./elastic-mapreduce --create --instance-group master --instance-count 1 \
--instance-type m1.small --instance-group core --instance-count 4 \
--instance-type m1.small --jar /home/hadoop/lib/emr-s3distcp-1.0.jar \
--args "--src,s3://my-bucket-source/,--groupBy,.*([0-9]{4}-01).*,\
--dest,s3://my-bucket-dest/,--targetSiz...
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
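The `--groupBy` pattern concatenates files whose names yield the same captured group; a quick sanity check of that regular expression in Python (the file names are made up for illustration):

```python
import re

pattern = re.compile(r".*([0-9]{4}-01).*")   # the --groupBy pattern from the command above
names = ["logs/2014-01-15.log", "logs/2014-02-03.log", "logs/2015-01-07.log"]
groups = {m.group(1) for n in names if (m := pattern.match(n))}
print(groups)  # {'2014-01', '2015-01'}
```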
For further optimization, compression can be helpful to save on AWS storage and bandwidth costs, to speed up the S3 to/from EMR transfer, and to reduce disk I/O. Note that compressed files are not easy to split for Hadoop. For example, Hadoop uses a single mapper per GZIP file, as it does not know about file boundaries...
--outputCodec,lzo
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Redshift

Copy values from the given S3 location containing CSV files to a Redshift cluster:
copy table_name from 's3://source/part' credentials 'aws_access_key_id=XXX;aws_secret_access_key=XXX' csv;
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Copy values from the given location containing TSV files to a Redshift cluster:
copy table_name from 's3://source/part' credentials 'aws_access_key_id=XXX;aws_secret_access_key=XXX' csv delimiter '\t';
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
View Redshift errors:
select * from stl_load_errors;
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Vacuum Redshift in full:
VACUUM FULL;
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Analyze the compression of a table:
analyze compression table_name;
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Cancel the query with the specified id:
cancel 18764;
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
The CANCEL command will not abort a transaction. To abort or roll back a transaction, you must use the ABORT or ROLLBACK command. To cancel a query associated with a transaction, first cancel the query then abort the transaction.If the query that you canceled is associated with a transaction, use the ABORT or ROLLBACK....
abort;
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Reference table creation and setup: ![alt text](http://docs.aws.amazon.com/redshift/latest/dg/images/tutorial-optimize-tables-ssb-data-model.png)
CREATE TABLE part (
  p_partkey     integer      not null sortkey distkey,
  p_name        varchar(22)  not null,
  p_mfgr        varchar(6)   not null,
  p_category    varchar(7)   not null,
  p_brand1      varchar(9)   not null,
  p_color       var...
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
| Table name | Sort Key     | Distribution Style |
|------------|--------------|--------------------|
| LINEORDER  | lo_orderdate | lo_partkey         |
| PART       | p_partkey    | p_partkey          |
| CUSTOMER   | c_custkey    | ALL                |
| SUPPLIER   | s_suppkey    | ALL                |
| DWDATE     | d_dat...
!aws kinesis create-stream --stream-name Foo --shard-count 1 --profile adminuser
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
List all streams:
!aws kinesis list-streams --profile adminuser
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Get info about the stream:
!aws kinesis describe-stream --stream-name Foo --profile adminuser
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Put a record to the stream:
!aws kinesis put-record --stream-name Foo --data "SGVsbG8sIHRoaXMgaXMgYSB0ZXN0IDEyMy4=" --partition-key shardId-000000000000 --region us-east-1 --profile adminuser
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
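The `--data` argument above is Base64-encoded; decoding it in Python shows the actual payload (stdlib only):

```python
import base64

encoded = "SGVsbG8sIHRoaXMgaXMgYSB0ZXN0IDEyMy4="   # the --data value from the command above
payload = base64.b64decode(encoded)
print(payload.decode("utf-8"))   # Hello, this is a test 123.
assert base64.b64encode(payload).decode("ascii") == encoded  # round-trips
```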
Get records from a given shard:
!SHARD_ITERATOR=$(aws kinesis get-shard-iterator --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON --stream-name Foo --query 'ShardIterator' --profile adminuser)
aws kinesis get-records --shard-iterator $SHARD_ITERATOR
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Delete a stream:
!aws kinesis delete-stream --stream-name Foo --profile adminuser
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Lambda

List lambda functions:
!aws lambda list-functions \
    --region us-east-1 \
    --max-items 10
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Upload a lambda function:
!aws lambda upload-function \
    --region us-east-1 \
    --function-name foo \
    --function-zip file-path/foo.zip \
    --role IAM-role-ARN \
    --mode event \
    --handler foo.handler \
    --runtime nodejs \
    --debug
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Invoke a lambda function:
!aws lambda invoke-async \
    --function-name foo \
    --region us-east-1 \
    --invoke-args foo.txt \
    --debug
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Return metadata for a specific function:
!aws lambda get-function-configuration \
    --function-name helloworld \
    --region us-east-1 \
    --debug
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Return metadata for a specific function along with a presigned URL that you can use to download the function's .zip file that you uploaded:
!aws lambda get-function \
    --function-name helloworld \
    --region us-east-1 \
    --debug
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Add an event source:
!aws lambda add-event-source \
    --region us-east-1 \
    --function-name ProcessKinesisRecords \
    --role invocation-role-arn \
    --event-source kinesis-stream-arn \
    --batch-size 100 \
    --profile adminuser
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks
Delete a lambda function:
!aws lambda delete-function \
    --function-name helloworld \
    --region us-east-1 \
    --debug
_____no_output_____
Apache-2.0
aws/aws.ipynb
datascienceandml/data-science-ipython-notebooks