2.2 Bernoulli MLP as decoder

In this case let $p_\theta(x|z)$ be a multivariate Bernoulli whose probabilities are computed from $z$ with a feedforward neural network with a single hidden layer:\begin{align}\log p(x|z) &= \sum_{i=1}^D x_i\log y_i + (1-x_i)\log (1-y_i) \\\textit{ where } y &= f_\sigma(W_5\tanh (W_4z+b_...
# define fully connected and tanh activation layers for the decoder decoder_z = mx.sym.FullyConnected(data=z, name="decoder_z",num_hidden=400) act_z = mx.sym.Activation(data=decoder_z, act_type="tanh",name="activation_z") # define the output layer with sigmoid activation function, where the dimension is equal to the i...
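The Bernoulli log-likelihood above can also be sketched in plain numpy, outside the MXNet symbol graph, to make the formula concrete (an illustration only; `bernoulli_log_likelihood` is a hypothetical helper, not part of the example):

```python
import numpy as np

def bernoulli_log_likelihood(x, y, eps=1e-10):
    """log p(x|z) = sum_i x_i*log(y_i) + (1 - x_i)*log(1 - y_i),
    where y holds the decoder's Bernoulli probabilities."""
    y = np.clip(y, eps, 1 - eps)  # guard against log(0)
    return np.sum(x * np.log(y) + (1 - x) * np.log(1 - y), axis=-1)

x = np.array([1.0, 0.0, 1.0])   # a tiny binary "image"
y = np.array([0.9, 0.2, 0.8])   # decoder output probabilities
ll = bernoulli_log_likelihood(x, y)
```

The log-likelihood is highest when the decoder assigns high probability to the pixels that are actually on and low probability to those that are off.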
_____no_output_____
Apache-2.0
example/vae/VAE_example.ipynb
dkuspawono/incubator-mxnet
2.3 Joint Loss Function for the Encoder and the Decoder

The variational lower bound, also called the evidence lower bound (ELBO), can be estimated as:\begin{align}\mathcal{L}(\theta,\phi;x^{(i)}) \approx \frac{1}{2}\sum_{j=1}^{J}\left(1+\log ((\sigma_j^{(i)})^2)-(\mu_j^{(i)})^2-(\sigma_j^{(i)})^2\right) + \log p_\theta(x^{(i)}|z^{(i)})\en...
# define the objective loss function that needs to be minimized KL = 0.5*mx.symbol.sum(1+logvar-pow( mu,2)-mx.symbol.exp(logvar),axis=1) loss = -mx.symbol.sum(mx.symbol.broadcast_mul(loss_label,mx.symbol.log(y)) + mx.symbol.broadcast_mul(1-loss_label,mx.symbol.log(1-y)),axis=1)-KL output = mx.sym...
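The Gaussian KL term in the loss above has a closed form; here is a minimal numpy sketch of it (a standalone illustration with a hypothetical helper name, not the MXNet code):

```python
import numpy as np

def kl_term(mu, logvar):
    """0.5 * sum_j (1 + log(sigma_j^2) - mu_j^2 - sigma_j^2): the negated
    KL divergence from N(mu, diag(sigma^2)) to the standard normal prior."""
    return 0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=-1)

# The term vanishes exactly when the posterior equals the prior N(0, I).
zero = kl_term(np.zeros(3), np.zeros(3))
```

Because the loss subtracts this term, any posterior that drifts away from the prior is penalized.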
_____no_output_____
Apache-2.0
example/vae/VAE_example.ipynb
dkuspawono/incubator-mxnet
3. Training the model

Now we can define the model and train it. First we initialize the weights and biases to be Gaussian(0, 0.01), and then use stochastic gradient descent for optimization. To warm-start the training, one may also initialize with pre-trained parameters `arg_params` using `init=mx.initializer....
# set up the log nd_iter.reset() logging.getLogger().setLevel(logging.DEBUG) # define function to trace back training loss def log_to_list(period, lst): def _callback(param): """The checkpoint function.""" if param.nbatch % period == 0: name, value = param.eval_metric.get() ...
_____no_output_____
Apache-2.0
example/vae/VAE_example.ipynb
dkuspawono/incubator-mxnet
As expected, the ELBO increases monotonically with each epoch, reproducing the results given in the paper [Auto-Encoding Variational Bayes](https://arxiv.org/abs/1312.6114/). Now we can extract/load the parameters and then feed the network forward to calculate $y$, the reconstructed image, and we can also ...
arg_params = model.get_params()[0] # if saved the parameters, can load them using `load_checkpoint` method at e.g. 100th epoch # sym, arg_params, aux_params = mx.model.load_checkpoint(model_prefix, 100) # assert sym.tojson() == output.tojson() e = y.bind(mx.cpu(), {'data': nd_iter_test.data[0][1], ...
_____no_output_____
Apache-2.0
example/vae/VAE_example.ipynb
dkuspawono/incubator-mxnet
4. All together: MXNet-based class VAE
from VAE import VAE
_____no_output_____
Apache-2.0
example/vae/VAE_example.ipynb
dkuspawono/incubator-mxnet
One can directly call the class `VAE` to do the training:

```VAE(n_latent=5,num_hidden_ecoder=400,num_hidden_decoder=400,x_train=None,x_valid=None,batch_size=100,learning_rate=0.001,weight_decay=0.01,num_epoch=100,optimizer='sgd',model_prefix=None,initializer = mx.init.Normal(0.01),likelihood=Bernoulli)```

The outputs ar...
# can initialize weights and biases with the learned parameters as follows: # init = mx.initializer.Load(params) # call the VAE; the output model contains the learned model and training loss out = VAE(n_latent=2, x_train=image, x_valid=None, num_epoch=200) # encode test images to obtain mu and logvar which are used for s...
_____no_output_____
Apache-2.0
example/vae/VAE_example.ipynb
dkuspawono/incubator-mxnet
Quick Exercises 1

1. Verify that |+⟩ and |−⟩ are in fact eigenstates of the X-gate.

First we need to define the |+⟩ and |−⟩ states in 2 different qubits. I will initialize the first qubit to 1 and the second to 0.
qCirc1 = QuantumCircuit(2, 2) oneInit = [0, 1] qCirc1.initialize(oneInit, 0) zeroInit = [1, 0] qCirc1.initialize(zeroInit, 1) qCirc1.draw('mpl') qCirc1.h(0) qCirc1.h(1) qCirc1.draw('mpl')
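The same eigenstate property can also be checked with a two-line linear-algebra computation, independent of the simulator (a standalone numpy check, not part of the circuit):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
plus = np.array([1, 1]) / np.sqrt(2)    # |+> = H|0>
minus = np.array([1, -1]) / np.sqrt(2)  # |-> = H|1>

# X|+> = +|+> and X|-> = -|->, so both are eigenstates of the X-gate
assert np.allclose(X @ plus, plus)
assert np.allclose(X @ minus, -minus)
```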
_____no_output_____
MIT
Qiskit Textbook Solutions/Chapter 1/1.4 - Single Qubit Gates.ipynb
kj3moraes/MyQiskitProgramming
Now the first qubit is in the |−⟩ state and the second qubit is in the |+⟩ state. We can now apply the X gates. If |+⟩ and |−⟩ really are eigenstates, then reapplying the Hadamard gates and measuring should return the original states |1⟩ and |0⟩.
qCirc1.x([0,1]) qCirc1.h([0,1]) qCirc1.draw('mpl') qCirc1.measure(0, 0) qCirc1.measure(1, 1) qCirc1.draw('mpl')
_____no_output_____
MIT
Qiskit Textbook Solutions/Chapter 1/1.4 - Single Qubit Gates.ipynb
kj3moraes/MyQiskitProgramming
Simulating this circuit on QASM
nativeSim = Aer.get_backend('qasm_simulator') result = execute(qCirc1, backend = nativeSim).result() plot_histogram(result.get_counts(qCirc1))
_____no_output_____
MIT
Qiskit Textbook Solutions/Chapter 1/1.4 - Single Qubit Gates.ipynb
kj3moraes/MyQiskitProgramming
This proves the property we were looking for.

4. Find the eigenstates of the Y-gate, and their co-ordinates on the Bloch sphere.

The eigenstates of the Y gate will simply be the y-basis vectors because they are unaffected by a rotation of $\pi$ about the y-axis. These are $$ \frac{1}{\sqrt{2}}(|0⟩ + i|1⟩) \text{ and } ...
qCircTest = QuantumCircuit(2, 2); qCircTest.initialize(zeroInit, 0); qCircTest.initialize(oneInit, 1) qCircTest.draw('mpl') qCircTest.y([0,1]) qCircTest.measure([0, 1], [0, 1]) qCircTest.draw('mpl') result = execute(qCircTest, backend = nativeSim, shots = 1024).result() plot_histogram(result.get_counts(qCircTest))
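The claimed eigenstates can also be confirmed numerically (a standalone numpy check, independent of the Qiskit circuit):

```python
import numpy as np

Y = np.array([[0, -1j], [1j, 0]])
y_plus = np.array([1, 1j]) / np.sqrt(2)    # (|0> + i|1>)/sqrt(2)
y_minus = np.array([1, -1j]) / np.sqrt(2)  # (|0> - i|1>)/sqrt(2)

assert np.allclose(Y @ y_plus, y_plus)     # eigenvalue +1
assert np.allclose(Y @ y_minus, -y_minus)  # eigenvalue -1
```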
_____no_output_____
MIT
Qiskit Textbook Solutions/Chapter 1/1.4 - Single Qubit Gates.ipynb
kj3moraes/MyQiskitProgramming
Quick Exercises 2

2. Show that applying the sequence of gates HZH to any qubit state is equivalent to applying an X-gate.

Here we will define another circuit with 1 qubit and show that HZH transforms |0⟩ to |1⟩ and |1⟩ to |0⟩.
qCirc2 = QuantumCircuit(1, 1) qCirc2.initialize(zeroInit, 0) qCirc2.draw('mpl') qCirc2.h(0) qCirc2.z(0) qCirc2.h(0) qCirc2.measure(0, 0) qCirc2.draw('mpl')
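Beyond the circuit demonstration, the identity HZH = X can be verified directly on the gate matrices (a standalone numpy check):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])

# HZH equals X exactly, so the sequence acts as an X-gate on any state
assert np.allclose(H @ Z @ H, X)
```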
_____no_output_____
MIT
Qiskit Textbook Solutions/Chapter 1/1.4 - Single Qubit Gates.ipynb
kj3moraes/MyQiskitProgramming
Simulating this circuit on QASM
result2 = execute(qCirc2, backend = nativeSim, shots = 1024).result() plot_histogram(result2.get_counts(qCirc2)) qCirc2 = QuantumCircuit(1, 1) qCirc2.initialize(oneInit, 0) qCirc2.draw('mpl') qCirc2.h(0) qCirc2.z(0) qCirc2.h(0) qCirc2.measure(0, 0) qCirc2.draw('mpl') result2 = execute(qCirc2, backend = nativeSim, shots...
_____no_output_____
MIT
Qiskit Textbook Solutions/Chapter 1/1.4 - Single Qubit Gates.ipynb
kj3moraes/MyQiskitProgramming
3. Find a combination of X, Z and H-gates that is equivalent to a Y-gate (ignoring global phase). As we saw before, the Y gate maps the |0⟩ and |1⟩ basis vectors to each other, up to a phase. Since we can ignore the global phase, we can take $-i$ outside. This yields the matrix$$ \sigma_Y = -i \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatr...
qCirc23 = QuantumCircuit(1, 1) qCirc23.initialize([1/sqrt(2), 1j/sqrt(2)], 0) qCirc23.draw('mpl') svSim = Aer.get_backend('statevector_simulator') svResult = execute(qCirc23, backend = svSim, shots = 1024).result() stateVectorBefore = svResult.get_statevector() plot_bloch_multivector(stateVectorBefore) qCirc23.z(0) qCi...
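The matrix identity behind this exercise can be checked numerically: applying X and then Z reproduces Y up to the global phase $i$ (a standalone numpy check):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
Y = np.array([[0, -1j], [1j, 0]])

# Z @ X (X applied first, then Z) equals i * Y: a Y-gate up to global phase
assert np.allclose(Z @ X, 1j * Y)
```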
_____no_output_____
MIT
Qiskit Textbook Solutions/Chapter 1/1.4 - Single Qubit Gates.ipynb
kj3moraes/MyQiskitProgramming
Quick Exercises 3

1. If we initialise our qubit in the state |+⟩, what is the probability of measuring it in state |−⟩?

The probability would be 0, since |+⟩ and |−⟩ are orthogonal.

2. Use Qiskit to display the probability of measuring a |0⟩ qubit in the states |+⟩ and |−⟩.
qCirc32 = QuantumCircuit(1, 1) qCirc32.initialize([1/sqrt(2), 1/sqrt(2)], 0) qCirc32.measure(0, 0) qCirc32.draw('mpl') result = execute(qCirc32, backend = nativeSim, shots = 1024).result() plot_histogram(result.get_counts(qCirc32))
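The zero probability claimed in exercise 1 follows from the Born rule and the orthogonality of |+⟩ and |−⟩; a quick standalone numpy check:

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

# Born rule: P(measure |-> given |+>) = |<-|+>|^2 = 0
prob = abs(np.vdot(minus, plus)) ** 2
assert np.isclose(prob, 0.0)
```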
_____no_output_____
MIT
Qiskit Textbook Solutions/Chapter 1/1.4 - Single Qubit Gates.ipynb
kj3moraes/MyQiskitProgramming
3. Try to create a function that measures in the Y-basis. To do this we would need to:
1. Convert the Y-basis into the Z-basis (this can simply be done by rotating about the x-axis by $\pi/2$).
2. Use a Qiskit measurement to measure in the Z-basis.
3. Convert back into the Y-basis (rotate about the x-axis by $-\pi/2$).
def measure_y(qc, qubit, cbit): # STEP 1 qc.sdg(qubit) qc.h(qubit) # STEP 2 qc.measure(qubit, cbit) # STEP 3 qc.h(qubit) qc.s(qubit) circuit = QuantumCircuit(1, 1) outwardInit = [1/sqrt(2), -1j/sqrt(2)] circuit.initialize(outwardInit, 0) measure_y(circuit, 0 ,0) circuit.draw('m...
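The basis change used in `measure_y` can be sanity-checked with matrices: with the usual gate conventions, applying S† then H sends the Y-basis states to |0⟩ and |1⟩, so an ordinary Z-basis measurement afterwards acts as a Y-basis measurement (a standalone numpy check):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Sdg = np.array([[1, 0], [0, -1j]])          # S-dagger

y_plus = np.array([1, 1j]) / np.sqrt(2)     # (|0> + i|1>)/sqrt(2)
y_minus = np.array([1, -1j]) / np.sqrt(2)   # (|0> - i|1>)/sqrt(2)

# After Sdg then H, |y+> lands on |0> and |y-> on |1> (up to phase),
# so measuring in the Z-basis realizes a Y-basis measurement.
assert np.isclose(abs((H @ Sdg @ y_plus)[0]) ** 2, 1.0)
assert np.isclose(abs((H @ Sdg @ y_minus)[1]) ** 2, 1.0)
```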
_____no_output_____
MIT
Qiskit Textbook Solutions/Chapter 1/1.4 - Single Qubit Gates.ipynb
kj3moraes/MyQiskitProgramming
TensorFlow Tutorial

Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras...
import math import numpy as np import h5py import matplotlib.pyplot as plt import tensorflow as tf from tensorflow.python.framework import ops from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict %matplotlib inline np.random.seed(1)
_____no_output_____
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example. $$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36. y = tf.constant(39, name='y') # Define y. Set to 39 loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss init = tf.global_variables_initializer() # When init is run later (sessi...
9
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
Writing and running programs in TensorFlow has the following steps:
1. Create Tensors (variables) that are not yet executed/evaluated.
2. Write operations between those Tensors.
3. Initialize your Tensors.
4. Create a Session.
5. Run the Session. This will run the operations you'd written above.

Therefore, when we create...
a = tf.constant(2) b = tf.constant(10) c = tf.multiply(a,b) print(c)
Tensor("Mul:0", shape=(), dtype=int32)
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a sessio...
sess = tf.Session() print(sess.run(c))
20
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
Great! To summarize, **remember to initialize your variables, create a session and run the operations inside the session**. Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later. To specify values for a placeholder, you can pass in values by using a "feed d...
# Change the value of x in the feed_dict x = tf.placeholder(tf.int64, name = 'x') print(sess.run(2 * x, feed_dict = {x: 3})) sess.close()
6
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
When you first defined `x` you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you **feed data** to these placeholders when running the session. Here's what's happening: When you specify the operations needed for a...
# GRADED FUNCTION: linear_function def linear_function(): """ Implements a linear function: Initializes X to be a random tensor of shape (3,1) Initializes W to be a random tensor of shape (4,3) Initializes b to be a random tensor of shape (4,1) Returns: result -- r...
result = [[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]]
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
*** Expected Output ***: ```result = [[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]]```

1.2 - Computing the sigmoid

Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise let's compute the sigmo...
# GRADED FUNCTION: sigmoid def sigmoid(z): """ Computes the sigmoid of z Arguments: z -- input value, scalar or vector Returns: results -- the sigmoid of z """ ### START CODE HERE ### ( approx. 4 lines of code) # Create a placeholder for x. Name it 'x'. x = tf.pl...
sigmoid(0) = 0.5 sigmoid(12) = 0.999994
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
*** Expected Output ***: **sigmoid(0)** 0.5 **sigmoid(12)** 0.999994

To summarize, you now know how to:
1. Create placeholders
2. Specify the computation graph corresponding to operations you want to compute
3. Create the session
4. Run the session, using a feed dictionary if necessary to specify placeholder variable...
# GRADED FUNCTION: cost def cost(logits, labels): """ Computes the cost using the sigmoid cross entropy Arguments: logits -- vector containing z, output of the last linear unit (before the final sigmoid activation) labels -- vector of labels y (1 or 0) Note: What we've been calling "...
cost = [ 0.79813886 0.91301525 0.40318605 0.34115386]
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
** Expected Output** : ```cost = [ 0.79813886 0.91301525 0.40318605 0.34115386]```

1.4 - Using One Hot encodings

Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will ...
# GRADED FUNCTION: one_hot_matrix def one_hot_matrix(labels, C): """ Creates a matrix where the i-th row corresponds to the ith class number and the jth column corresponds to the jth training example. So if example j had a label i. Then entry (i,j) will be 1. ...
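For intuition, the same one-hot construction can be reproduced in plain numpy (an illustration only; `one_hot_np` is a hypothetical helper, not the graded solution):

```python
import numpy as np

def one_hot_np(labels, C):
    """Entry (i, j) is 1 exactly when example j has label i."""
    labels = np.asarray(labels)
    return (np.arange(C)[:, None] == labels[None, :]).astype(float)

one_hot = one_hot_np([1, 2, 3, 0, 2, 1], C=4)  # shape (4, 6), one 1 per column
```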
one_hot = [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]]
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
**Expected Output**: ```one_hot = [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]]```

1.5 - Initialize with zeros and ones

Now you will learn how to initialize a vector of zeros and ones. The function you will be calling is `tf.ones()`. To initialize with zeros y...
# GRADED FUNCTION: ones def ones(shape): """ Creates an array of ones of dimension shape Arguments: shape -- shape of the array you want to create Returns: ones -- array containing only ones """ ### START CODE HERE ### # Create "ones" tensor using tf.ones(.....
ones = [ 1. 1. 1.]
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
**Expected Output:** **ones** [ 1. 1. 1.]

2 - Building your first neural network in tensorflow

In this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:- Create the com...
# Loading the dataset X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
_____no_output_____
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
Change the index below and run the cell to visualize some examples in the dataset.
# Example of a picture index = 0 plt.imshow(X_train_orig[index]) print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
y = 5
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
# Flatten the training and test images X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T # Normalize image vectors X_train = X_train_flatten/255. X_test = X_test_flatten/255. # Convert training and test labels to one hot matrices Y_train...
number of training examples = 1080 number of test examples = 120 X_train shape: (12288, 1080) Y_train shape: (6, 1080) X_test shape: (12288, 120) Y_test shape: (6, 120)
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
**Note** that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing. **Your goal** is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a t...
# GRADED FUNCTION: create_placeholders def create_placeholders(n_x, n_y): """ Creates the placeholders for the tensorflow session. Arguments: n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288) n_y -- scalar, number of classes (from 0 to 5, so -> 6) Returns:...
X = Tensor("X_2:0", shape=(12288, ?), dtype=float32) Y = Tensor("Y:0", shape=(6, ?), dtype=float32)
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
**Expected Output**: **X** Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1) **Y** Tensor("Placeholder_2:0", shape=(6, ?), dtype=float32) (not necessarily Placeholder_2) ...
# GRADED FUNCTION: initialize_parameters def initialize_parameters(): """ Initializes parameters to build a neural network with tensorflow. The shapes are: W1 : [25, 12288] b1 : [25, 1] W2 : [12, 25] b2 : [12, 1] ...
W1 = <tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref> b1 = <tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref> W2 = <tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref> b2 = <tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref>
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
**Expected Output**: **W1** **b1** **W2** **b2** As expected, the parameters ...
# GRADED FUNCTION: forward_propagation def forward_propagation(X, parameters): """ Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX Arguments: X -- input dataset placeholder, of shape (input size, number of examples) parameters -- python d...
Z3 = Tensor("Add_2:0", shape=(6, ?), dtype=float32)
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
**Expected Output**: **Z3** Tensor("Add_2:0", shape=(6, ?), dtype=float32)

You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation.

2.4 Compute cost

As seen before, it is very ...
# GRADED FUNCTION: compute_cost def compute_cost(Z3, Y): """ Computes the cost Arguments: Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples) Y -- "true" labels vector placeholder, same shape as Z3 Returns: cost - Tensor of the c...
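For intuition about what the cost computes, here is a numpy sketch of softmax cross-entropy over the (classes, examples) layout used above (an illustration with made-up numbers and a hypothetical helper name, not the graded code):

```python
import numpy as np

def softmax_xent(Z3, Y):
    """Per-example cross-entropy between softmax(Z3) and one-hot labels Y.
    Both arrays have shape (classes, examples)."""
    z = Z3 - Z3.max(axis=0, keepdims=True)           # stabilize exp
    log_softmax = z - np.log(np.exp(z).sum(axis=0, keepdims=True))
    return -(Y * log_softmax).sum(axis=0)

Z3 = np.array([[2.0, 0.5], [1.0, 1.5], [0.1, 0.3]])  # 3 classes, 2 examples
Y = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
cost = softmax_xent(Z3, Y).mean()                    # reduce_mean over examples
```

Averaging the per-example losses mirrors the `tf.reduce_mean` reduction applied in the graded function.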
cost = Tensor("Mean:0", shape=(), dtype=float32)
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
**Expected Output**: **cost** Tensor("Mean:0", shape=(), dtype=float32)

2.5 - Backward propagation & parameter updates

This is where you become grateful to programming frameworks. All the backpropagation and the parameter updates are taken care of in 1 line of...
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001, num_epochs = 1500, minibatch_size = 32, print_cost = True): """ Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX. Arguments: X_train -- training set, of shape (input size = 1...
_____no_output_____
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.048222. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break...
parameters = model(X_train, Y_train, X_test, Y_test)
Cost after epoch 0: 1.913693 Cost after epoch 100: 1.048222 Cost after epoch 200: 0.756012 Cost after epoch 300: 0.590844 Cost after epoch 400: 0.483423 Cost after epoch 500: 0.392928 Cost after epoch 600: 0.323629 Cost after epoch 700: 0.262100 Cost after epoch 800: 0.210199 Cost after epoch 900: 0.171622 Cost after e...
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
**Expected Output**: **Train Accuracy** 0.999074 **Test Accuracy** 0.716667 Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.**Insights**:- Your mod...
import scipy from PIL import Image from scipy import ndimage ## START CODE HERE ## (PUT YOUR IMAGE NAME) my_image = "thumbs_up.jpg" ## END CODE HERE ## # We preprocess your image to fit your algorithm. fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) image = image/255. my_image = s...
Your algorithm predicts: y = 3
MIT
2.Improving Deep Neural Networks/Week 3/TensorFlow_Tutorial_v3b.ipynb
thesauravkarmakar/deeplearning.ai
!pip install pyupbit import pyupbit # load the last 200 hours of BTC data df = pyupbit.get_ohlcv("KRW-ETH", interval="minute60") df # keep only the timestamp (ds) and closing price (y) columns df = df.reset_index() df['ds'] = df['index'] df['y'] = df['close'] data = df[['ds','y']] data # import prophet from fbprophet import Prophet # train model = Prophet() model.fit(data) # forecast the next 24 h...
_____no_output_____
MIT
AI.ipynb
wonjongchurl/github_test
How to watch changes to an object

In this notebook, we learn how the Kubernetes API's Watch endpoint is used to observe resource changes. It can be used to get information about changes to any Kubernetes object.
from kubernetes import client, config, watch
_____no_output_____
Apache-2.0
examples/notebooks/watch_notebook.ipynb
dix000p/kubernetes-client-python
Load config from default location.
config.load_kube_config()
_____no_output_____
Apache-2.0
examples/notebooks/watch_notebook.ipynb
dix000p/kubernetes-client-python
Create API instance
api_instance = client.CoreV1Api()
_____no_output_____
Apache-2.0
examples/notebooks/watch_notebook.ipynb
dix000p/kubernetes-client-python
Run a Watch on the Pods endpoint. The watch will execute and produce output about changes to any Pod. After running the cell below, you can test this by running the Pod notebook [create_pod.ipynb](create_pod.ipynb) and observing the additional output here. You can stop the cell from running by restarting the kernel.
w = watch.Watch() for event in w.stream(api_instance.list_pod_for_all_namespaces): print("Event: %s %s %s" % (event['type'],event['object'].kind, event['object'].metadata.name))
_____no_output_____
Apache-2.0
examples/notebooks/watch_notebook.ipynb
dix000p/kubernetes-client-python
TensorFlow Transfer Learning

This notebook shows how to use pre-trained models from [TensorFlow Hub](https://www.tensorflow.org/hub). Sometimes, there is not enough data, computational resources, or time to train a model from scratch to solve a particular problem. We'll use a pre-trained model to classify flowers with b...
import os import pathlib import IPython.display as display import matplotlib.pylab as plt import numpy as np import tensorflow as tf import tensorflow_hub as hub from PIL import Image from tensorflow.keras import Sequential from tensorflow.keras.layers import ( Conv2D, Dense, Dropout, Flatten, MaxP...
_____no_output_____
Apache-2.0
notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb
henrypurbreadcom/asl-ml-immersion
Exploring the data

As usual, let's take a look at the data before we start building our model. We'll be using a creative-commons licensed flower photo dataset of 3670 images falling into 5 categories: 'daisy', 'roses', 'dandelion', 'sunflowers', and 'tulips'. The below [tf.keras.utils.get_file](https://www.tensorflow.or...
data_dir = tf.keras.utils.get_file( "flower_photos", "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz", untar=True, ) # Print data path print("cd", data_dir)
_____no_output_____
Apache-2.0
notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb
henrypurbreadcom/asl-ml-immersion
We can use python's built in [pathlib](https://docs.python.org/3/library/pathlib.html) tool to get a sense of this unstructured data.
data_dir = pathlib.Path(data_dir) image_count = len(list(data_dir.glob("*/*.jpg"))) print("There are", image_count, "images.") CLASS_NAMES = np.array( [item.name for item in data_dir.glob("*") if item.name != "LICENSE.txt"] ) print("These are the available classes:", CLASS_NAMES)
_____no_output_____
Apache-2.0
notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb
henrypurbreadcom/asl-ml-immersion
Let's display the images so we can see what our model will be trying to learn.
roses = list(data_dir.glob("roses/*")) for image_path in roses[:3]: display.display(Image.open(str(image_path)))
_____no_output_____
Apache-2.0
notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb
henrypurbreadcom/asl-ml-immersion
Building the dataset

Keras has some convenient methods to read in image data. For instance [tf.keras.preprocessing.image.ImageDataGenerator](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator) is great for small local datasets. A tutorial on how to use it can be found [here](http...
!gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | head -5 > /tmp/input.csv !cat /tmp/input.csv !gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | sed 's/,/ /g' | awk '{print $2}' | sort | uniq > /tmp/labels.txt !cat /tmp/labels.txt
_____no_output_____
Apache-2.0
notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb
henrypurbreadcom/asl-ml-immersion
Let's figure out how to read one of these images from the cloud. TensorFlow's [tf.io.read_file](https://www.tensorflow.org/api_docs/python/tf/io/read_file) can help us read the file contents, but the result will be a [Base64 image string](https://en.wikipedia.org/wiki/Base64). Hmm... not very readable for humans or Ten...
IMG_HEIGHT = 224 IMG_WIDTH = 224 IMG_CHANNELS = 3 BATCH_SIZE = 32 # 10 is a magic number tuned for local training of this dataset. SHUFFLE_BUFFER = 10 * BATCH_SIZE AUTOTUNE = tf.data.experimental.AUTOTUNE VALIDATION_IMAGES = 370 VALIDATION_STEPS = VALIDATION_IMAGES // BATCH_SIZE def decode_img(img, reshape_dims): ...
_____no_output_____
Apache-2.0
notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb
henrypurbreadcom/asl-ml-immersion
Is it working? Let's see!**TODO 1.a:** Run the `decode_img` function and plot it to see a happy looking daisy.
img = tf.io.read_file( "gs://cloud-ml-data/img/flower_photos/daisy/754296579_30a9ae018c_n.jpg" ) # Uncomment to see the image string. # print(img) # TODO: decode image and plot it
_____no_output_____
Apache-2.0
notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb
henrypurbreadcom/asl-ml-immersion
One flower down, 3669 more of them to go. Rather than load all the photos in directly, we'll use the file paths given to us in the csv and load the images when we batch. [tf.io.decode_csv](https://www.tensorflow.org/api_docs/python/tf/io/decode_csv) reads in csv rows (or each line in a csv file), while [tf.math.equal](...
def decode_csv(csv_row): record_defaults = ["path", "flower"] filename, label_string = tf.io.decode_csv(csv_row, record_defaults) image_bytes = tf.io.read_file(filename=filename) label = tf.math.equal(CLASS_NAMES, label_string) return image_bytes, label
_____no_output_____
Apache-2.0
notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb
henrypurbreadcom/asl-ml-immersion
Next, we'll transform the images to give our network more variety to train on. There are a number of [image manipulation functions](https://www.tensorflow.org/api_docs/python/tf/image). We'll cover just a few:* [tf.image.random_crop](https://www.tensorflow.org/api_docs/python/tf/image/random_crop) - Randomly deletes th...
MAX_DELTA = 63.0 / 255.0 # Change brightness by at most ~24.7% CONTRAST_LOWER = 0.2 CONTRAST_UPPER = 1.8 def read_and_preprocess(image_bytes, label, random_augment=False): if random_augment: img = decode_img(image_bytes, [IMG_HEIGHT + 10, IMG_WIDTH + 10]) # TODO: augment the image. else: ...
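To see what these augmentations do, here is a rough plain-numpy analogue of `tf.image.random_crop` and `tf.image.random_brightness` (a sketch with hypothetical helper names, not the lab's TODO solution, which should use the tf.image ops):

```python
import numpy as np

rng = np.random.default_rng(42)

def random_crop_np(img, out_h, out_w):
    """Keep a random (out_h, out_w) window and discard the border pixels."""
    h, w, _ = img.shape
    top = rng.integers(0, h - out_h + 1)
    left = rng.integers(0, w - out_w + 1)
    return img[top:top + out_h, left:left + out_w]

def random_brightness_np(img, max_delta):
    """Shift every pixel by one random delta in [-max_delta, max_delta]."""
    delta = rng.uniform(-max_delta, max_delta)
    return np.clip(img + delta, 0.0, 1.0)

img = rng.random((234, 234, 3))          # decoded at IMG_HEIGHT + 10
cropped = random_crop_np(img, 224, 224)  # back to the model's input size
augmented = random_brightness_np(cropped, 63.0 / 255.0)
```

Decoding slightly larger than the target size gives the random crop some room to move, so each epoch sees a slightly different view of the same photo.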
_____no_output_____
Apache-2.0
notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb
henrypurbreadcom/asl-ml-immersion
Finally, we'll make a function to craft our full dataset using [tf.data.dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset). The [tf.data.TextLineDataset](https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset) will read in each line in our train/eval csv files to our `decode_csv` function....
def load_dataset(csv_of_filenames, batch_size, training=True): dataset = ( tf.data.TextLineDataset(filenames=csv_of_filenames) .map(decode_csv) .cache() ) if training: dataset = ( dataset.map(read_and_preprocess_with_augment) .shuffle(SHUFFLE_BUFFER) ...
_____no_output_____
Apache-2.0
notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb
henrypurbreadcom/asl-ml-immersion
We'll test it out with our training set. A batch size of one will allow us to easily look at each augmented image.
train_path = "gs://cloud-ml-data/img/flower_photos/train_set.csv" train_data = load_dataset(train_path, 1) itr = iter(train_data)
_____no_output_____
Apache-2.0
notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb
henrypurbreadcom/asl-ml-immersion
**TODO 1.c:** Run the below cell repeatedly to see the results of different batches. The images have been un-normalized for human eyes. Can you tell what type of flowers they are? Is it fair for the AI to learn on?
image_batch, label_batch = next(itr) img = image_batch[0] plt.imshow(img) print(label_batch[0])
_____no_output_____
Apache-2.0
notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb
henrypurbreadcom/asl-ml-immersion
**Note:** It may take 4-5 minutes to see the results of different batches.

MobileNetV2

These flower photos are much larger than the handwriting recognition images in MNIST. They have about 10 times as many pixels per axis **and** three color channels, making the information here over 200 times larger! How do our cur...
eval_path = "gs://cloud-ml-data/img/flower_photos/eval_set.csv" nclasses = len(CLASS_NAMES) hidden_layer_1_neurons = 400 hidden_layer_2_neurons = 100 dropout_rate = 0.25 num_filters_1 = 64 kernel_size_1 = 3 pooling_size_1 = 2 num_filters_2 = 32 kernel_size_2 = 3 pooling_size_2 = 2 layers = [ # TODO: Add your image...
_____no_output_____
Apache-2.0
notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb
henrypurbreadcom/asl-ml-immersion
If your model is like mine, it learns a little bit, slightly better than random, but *ugh*, it's too slow! With a batch size of 32, 5 epochs of 5 steps is only getting through about a quarter of our images. Not to mention, this is a much larger problem than MNIST, so wouldn't we need a larger model? But how big do we n...
module_selection = "mobilenet_v2_100_224" module_handle = "https://tfhub.dev/google/imagenet/{}/feature_vector/4".format( module_selection ) transfer_model = tf.keras.Sequential( [ # TODO tf.keras.layers.Dropout(rate=0.2), tf.keras.layers.Dense( nclasses, activat...
_____no_output_____
Apache-2.0
notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb
henrypurbreadcom/asl-ml-immersion
Even though we're only adding one more `Dense` layer in order to get the probabilities for each of the 5 flower types, we end up with over six thousand parameters to train ourselves. Wow! Moment of truth. Let's compile this new model and see how it compares to our MNIST architecture.
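The "over six thousand" figure can be checked by hand, assuming the TF Hub module emits MobileNetV2's 1280-dimensional feature vector:

```python
feature_dim = 1280  # width of MobileNetV2's feature vector (assumed)
nclasses = 5        # flower types

# Dense layer: one weight per (feature, class) pair, plus one bias per class
params = feature_dim * nclasses + nclasses
print(params)  # 6405
```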
transfer_model.compile( optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"] ) train_ds = load_dataset(train_path, BATCH_SIZE) eval_ds = load_dataset(eval_path, BATCH_SIZE, training=False) transfer_model.fit( train_ds, epochs=5, steps_per_epoch=5, validation_data=eval_ds, val...
_____no_output_____
Apache-2.0
notebooks/image_models/labs/3_tf_hub_transfer_learning.ipynb
henrypurbreadcom/asl-ml-immersion
Saldías et al. Figure 02: Waves - SSH anomaly (canyon minus no-canyon), allowed and scattered waves
from brokenaxes import brokenaxes import cmocean as cmo import matplotlib.pyplot as plt from matplotlib.gridspec import GridSpec import matplotlib.gridspec as gspec import matplotlib.patches as patches from netCDF4 import Dataset import numpy as np import pandas as pd import scipy as sc import scipy.io as sio import xa...
_____no_output_____
Apache-2.0
figures/figure02.ipynb
UBC-MOAD/Saldias_et_al_2021
Rating and Review Analysis of Car Brands Project Author: Sabriye Ela Esme This notebook includes the code for analyzing the rating and review scores of different car brands, based on the 'Edmunds-Consumer Car Ratings and Reviews' data retrieved from https://www.kaggle.com/ankkur13/edmundsconsumer-car-ratings-and-r...
import pandas as pd import numpy as np lst=[] data = pd.read_csv('C:\\Users\\Ela\\Desktop\\Cars_review_project\\Scraped_Car_Review_porsche.csv', lineterminator='\n') brand0= 'Porsche'
_____no_output_____
MIT
Car Data Analysis.ipynb
elaesme/Car-Data-Analysis
Here's a glimpse of the Porsche dataframe.
data.head() score0=sum(data['Rating\r'])/data.shape[0] lst.append([brand0, score0])
_____no_output_____
MIT
Car Data Analysis.ipynb
elaesme/Car-Data-Analysis
Calculation of the score for other brands
#Calculation of the score for other brands data1 = pd.read_csv('C://Users//Ela//Desktop//Cars_review_project//Scrapped_Car_Reviews_Audi.csv', lineterminator='\n') brand1= 'Audi' score1=sum(data1['Rating\r'])/data1.shape[0] lst.append([brand1, score1]) data2 = pd.read_csv('C://Users//Ela//Desktop//Cars_review_project//...
_____no_output_____
MIT
Car Data Analysis.ipynb
elaesme/Car-Data-Analysis
Scores for Reviews In this part of the code, there are two different functions for computing a brand's total review score. First, the review function takes every review as a single sentence and scores it based on how positive or negative the words in the review are. The second function c...
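A minimal sketch of the keyword-count scoring described above. The word lists and the scoring rule here are simplified stand-ins for the notebook's actual `review_score`:

```python
# Simplified keyword lists (stand-ins for the notebook's own lists)
POS_WORDS = {"love", "amazing", "happy", "great", "best", "win", "powerful", "beautiful"}
NEG_WORDS = {"hate", "regret", "bad", "weak", "disappointed", "sad"}

def review_score(words):
    """Score a tokenized review: +1 per positive word, -1 per negative word."""
    pos = sum(1 for w in words if w in POS_WORDS)
    neg = sum(1 for w in words if w in NEG_WORDS)
    return pos - neg

print(review_score("i love this amazing car but regret the cost".split()))  # 1
```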
def review_score(liste): words=["love", "amazing", "happy", "great", "best", "win", "powerful", "beautiful"] negwords=["hate", "regret", "bad", "weak", "disappointed", "sad"] scorepos=0 scoreneg=0 for i in liste: if i in words: scorepos+=1 elif i in negwords: s...
_____no_output_____
MIT
Car Data Analysis.ipynb
elaesme/Car-Data-Analysis
From this point on, the code builds a new dataframe showing the average rating score, the average review score, and the total score (the sum of the two averages) for every brand, sorted in descending order of total score.
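The ranking step can be sketched with plain lists. The scores below are hypothetical; the notebook fills `lst` from the real data:

```python
# [brand, average rating, average review score] -- hypothetical numbers
lst = [["Porsche", 4.8, 0.9], ["Audi", 4.5, 1.4], ["Toyota", 4.6, 0.7]]

for row in lst:
    row.append(row[1] + row[2])             # total score = rating + review

lst.sort(key=lambda x: x[3], reverse=True)  # descending by total score
print([row[0] for row in lst])              # ['Audi', 'Porsche', 'Toyota']
```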
totlist=[total_score(data), total_score(data1), total_score(data2), total_score(data3),total_score(data4)] for i in range(len(lst)): lst[i].append(totlist[i]) lst[i].append(totlist[i]+lst[i][1]) lst.sort(key=lambda x:x[3], reverse= 1) df = pd.DataFrame(lst[0:],columns=['Brand', 'Average Rating Score', 'Average...
_____no_output_____
MIT
Car Data Analysis.ipynb
elaesme/Car-Data-Analysis
Get_Histogram_key
YY = QubitOperator('X0 X1 Y3', 0.25j) Get_Histogram_key(YY)
_____no_output_____
MIT
quchem_examples/Simulating Quantum Circuit.ipynb
AlexisRalli/VQE-code
Simulate_Quantum_Circuit
num_shots = 1000 YY = QubitOperator('X0 X1 Y3', 0.25j) histogram_string= Get_Histogram_key(YY) Simulate_Quantum_Circuit(full_circuit, num_shots, histogram_string)
_____no_output_____
MIT
quchem_examples/Simulating Quantum Circuit.ipynb
AlexisRalli/VQE-code
Get_wavefunction
YY = QubitOperator('X0 X1 Y3', 0.25j) cirq_NO_M = cirq.Circuit([*HF_circ, *UCCSD_circ.all_operations()]) histogram_string= Get_Histogram_key(YY) Get_wavefunction(cirq_NO_M, sig_figs=3)
_____no_output_____
MIT
quchem_examples/Simulating Quantum Circuit.ipynb
AlexisRalli/VQE-code
Return_as_binary
num_shots = 1000 YY = QubitOperator('X0 X1 Y3', 0.25j) histogram_string= Get_Histogram_key(YY) c_result = Simulate_Quantum_Circuit(full_circuit, num_shots, histogram_string) Return_as_binary(c_result, histogram_string)
_____no_output_____
MIT
quchem_examples/Simulating Quantum Circuit.ipynb
AlexisRalli/VQE-code
expectation_value_by_parity
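`expectation_value_by_parity` relies on the rule that a Z-basis measurement outcome contributes +1 when its bitstring has even parity (an even number of 1s) and -1 when odd, weighted by its shot count. A simplified stand-in, not the quchem implementation:

```python
def expectation_by_parity(counts):
    """counts: dict mapping measured bitstrings to shot counts."""
    total = sum(counts.values())
    signed = sum(n * (1 if bits.count("1") % 2 == 0 else -1)
                 for bits, n in counts.items())
    return signed / total

# Hypothetical 1000-shot histogram: (600 + 300 - 100) / 1000
print(expectation_by_parity({"00": 600, "11": 300, "01": 100}))  # 0.8
```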
num_shots = 1000 YY = QubitOperator('X0 X1 Y3', 0.25j) histogram_string= Get_Histogram_key(YY) c_result = Simulate_Quantum_Circuit(full_circuit, num_shots, histogram_string) b_result=Return_as_binary(c_result, histogram_string) expectation_value_by_parity(b_result) from quchem.Hamiltonian_Generator_Functions import *...
_____no_output_____
MIT
quchem_examples/Simulating Quantum Circuit.ipynb
AlexisRalli/VQE-code
$$\begin{aligned} H &=h_{0} I+h_{1} Z_{0}+h_{2} Z_{1}+h_{3} Z_{2}+h_{4} Z_{3} \\ &+h_{5} Z_{0} Z_{1}+h_{6} Z_{0} Z_{2}+h_{7} Z_{1} Z_{2}+h_{8} Z_{0} Z_{3}+h_{9} Z_{1} Z_{3} \\ &+h_{10} Z_{2} Z_{3}+h_{11} Y_{0} Y_{1} X_{2} X_{3}+h_{12} X_{0} Y_{1} Y_{2} X_{3} \\ &+h_{13} Y_{0} X_{1} X_{2} Y_{3}+h_{14} X_{0} X_{1} Y_{2} ...
n_shots=1000 def GIVE_ENERGY(theta_ia_theta_jab_list): theta_ia = theta_ia_theta_jab_list[:len(theta_parameters_ia)] theta_ijab = theta_ia_theta_jab_list[len(theta_parameters_ia):] ansatz_cirq_circuit = full_ansatz_Q_Circ.Get_Full_HF_UCCSD_QC(theta_parameters_ia, theta_parameters_ijab) VQE_exp = ...
0: Input_to_Funct: [ 0.00025 -0.0005 0.00025]: Output: -1.1188432276915568 1: Input_to_Funct: [ 0.00025 -0.0005 0.00025]: Output: -1.11630628122307 2: Input_to_Funct: [ 0.00025 -0.0005 0.00025]: Output: -1.1183902015364695 3: Input_to_Funct: [ 0.00025 -0.0005 0.00025]: Output: -1.1133163085994964 4: Input_to_Fu...
MIT
quchem_examples/Simulating Quantum Circuit.ipynb
AlexisRalli/VQE-code
REDUCED H2 ansatz:
from quchem.Simulating_Quantum_Circuit import * from quchem.Ansatz_Generator_Functions import * from openfermion.ops import QubitOperator def H2_ansatz(theta): HF_circ = [cirq.X.on(cirq.LineQubit(0)), cirq.X.on(cirq.LineQubit(1))] full_exp_circ_obj = full_exponentiated_PauliWord_circuit(QubitOperator('Y0 ...
Energy = -1.1349360253505665 state:
MIT
quchem_examples/Simulating Quantum Circuit.ipynb
AlexisRalli/VQE-code
FAERS AE Multilabel Outcomes ML pipeline - Dask Distributed + Joblib + Dask DataFrames Methodology Objective **Use FAERS data on drug safety to identify possible risk factors associated with patient mortality and other serious adverse events arising from approved use of a drug or drug class** Data **_Outcome table...
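The multilabel targets described above can be sketched as a multi-hot encoding over the seven FAERS outcome codes. The case records here are hypothetical; the notebook derives these columns from the real outcome table:

```python
# FAERS outcome codes, matching the outc_cod__* target columns
OUTCOME_CODES = ["CA", "DE", "DS", "HO", "LT", "OT", "RI"]

def multi_hot(case_outcomes):
    """Turn the set of outcome codes for one case into a 0/1 target vector."""
    return [1 if code in case_outcomes else 0 for code in OUTCOME_CODES]

# Hypothetical case: hospitalization (HO) plus other outcome (OT)
print(multi_hot({"HO", "OT"}))  # [0, 0, 0, 1, 0, 1, 0]
```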
# scale sklearn dask example setup - compare to multi thread below from dask.distributed import Client, progress client = Client(n_workers=4, threads_per_worker=1, memory_limit='2GB') client #import libraries import numpy as np print('The numpy version is {}.'.format(np.__version__)) import pandas as pd print('The pand...
_____no_output_____
MIT
faers_multilabel_outcome_ml_pipeline_dask_joblib_dd_3_5_2021.ipynb
briangriner/OSTF-FAERS
ML Pipeline - Preprocessing step
# dask: data prep + preprocessor + pipeline funcs def df_prep(ddf): """df_prep func used in the pipeline to select features and prep multilabel targets for the clf; assumes the dask DataFrame is named 'ddf' """ # compute ddf #ddf2 = ddf.compute() # drop fields from df when defining model targets and...
_____no_output_____
MIT
faers_multilabel_outcome_ml_pipeline_dask_joblib_dd_3_5_2021.ipynb
briangriner/OSTF-FAERS
Decision Tree Classifier
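The pipeline relies on scikit-learn tree classifiers accepting a 2-D multilabel target directly. A tiny standalone sketch on toy data (not the FAERS features):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy multilabel problem: each label simply mirrors one input feature
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
Y = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # shape (n_samples, n_labels)

clf = DecisionTreeClassifier(class_weight="balanced", random_state=0)
clf.fit(X, Y)                 # multilabel fit: one prediction per label column
print(clf.predict([[1, 1]]))  # [[1 1]]
```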
# multilabel clfs classifiers = [ #RidgeClassifierCV(class_weight='balanced'), DecisionTreeClassifier(class_weight='balanced'), #ExtraTreesClassifier(class_weight='balanced'), #RandomForestClassifier(class_weight='balanced'), #MLPClassifier(solver='sgd', learning_rate='adaptive', early_stopping=T...
Predicted AEs: DecisionTree Classifier (260715, 7) [[0 0 0 ... 0 1 0] [0 1 0 ... 1 1 0] [0 0 0 ... 0 1 0] ... [0 0 0 ... 0 1 0] [0 0 0 ... 0 1 0] [0 0 0 ... 0 1 0]]
MIT
faers_multilabel_outcome_ml_pipeline_dask_joblib_dd_3_5_2021.ipynb
briangriner/OSTF-FAERS
Random Forest Classifier
# multilabel clfs classifiers = [ #RidgeClassifierCV(class_weight='balanced'), #DecisionTreeClassifier(class_weight='balanced'), #ExtraTreesClassifier(class_weight='balanced'), RandomForestClassifier(class_weight='balanced'), #MLPClassifier(solver='sgd', learning_rate='adaptive', early_stopping=T...
Step 0: Create ndarray for multilabel targets + select model features y_arr (260715, 7) int64 [[0 0 0 0 0 1 0] [0 0 0 1 0 1 0]] Index(['outc_cod__CA', 'outc_cod__DE', 'outc_cod__DS', 'outc_cod__HO', 'outc_cod__LT', 'outc_cod__OT', 'outc_cod__RI'], dtype='object') X (260715, 7) i_f_code obj...
MIT
faers_multilabel_outcome_ml_pipeline_dask_joblib_dd_3_5_2021.ipynb
briangriner/OSTF-FAERS
RidgeClassifierCV
# multilabel clfs classifiers = [ RidgeClassifierCV(class_weight='balanced'), #DecisionTreeClassifier(class_weight='balanced'), #ExtraTreesClassifier(class_weight='balanced'), #RandomForestClassifier(class_weight='balanced'), #MLPClassifier(solver='sgd', learning_rate='adaptive', early_stopping=T...
Step 0: Create ndarray for multilabel targets + select model features y_arr (260715, 7) int64 [[0 0 0 0 0 1 0] [0 0 0 1 0 1 0]] Index(['outc_cod__CA', 'outc_cod__DE', 'outc_cod__DS', 'outc_cod__HO', 'outc_cod__LT', 'outc_cod__OT', 'outc_cod__RI'], dtype='object') X (260715, 7) i_f_code obj...
MIT
faers_multilabel_outcome_ml_pipeline_dask_joblib_dd_3_5_2021.ipynb
briangriner/OSTF-FAERS
Preprocessing
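`MinMaxScaler` rescales each column to [0, 1] via (x - min) / (max - min); a pure-Python sketch of that formula:

```python
def min_max_scale(column):
    """Map a numeric column to [0, 1]: (x - min) / (max - min)."""
    lo, hi = min(column), max(column)
    return [(x - lo) / (hi - lo) for x in column]

print(min_max_scale([2, 4, 6, 10]))  # [0.0, 0.25, 0.5, 1.0]
```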
X = data.drop(['animal_name']+['class_type'],axis=1) ### Drop the animal names (a categorical variable) and the membership class, which is what ### we want to recover with the clustering algorithms from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() Xs = pd.DataFrame(scaler.fit...
_____no_output_____
MIT
.ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb
GiovanniDiMasi/Cluster_animali
Clustering
### How does each of these algorithms work?
_____no_output_____
MIT
.ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb
GiovanniDiMasi/Cluster_animali
Kmeans
from sklearn.cluster import KMeans,AgglomerativeClustering,SpectralClustering,DBSCAN,Birch kmeans = KMeans(n_clusters= 7,random_state=0) ### random_state is a parameter we use so that the starting centroids are determined from ### a number rather than generated randomly, so that repeat...
_____no_output_____
MIT
.ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb
GiovanniDiMasi/Cluster_animali
Agglomerative clustering
aggc = AgglomerativeClustering(n_clusters = 7, affinity = 'euclidean', linkage = 'ward' ) y_pred_aggc =aggc.fit_predict(Xs)
_____no_output_____
MIT
.ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb
GiovanniDiMasi/Cluster_animali
SpectralClustering
spc = SpectralClustering(n_clusters=7, assign_labels="discretize", random_state=0) y_pred_spc = spc.fit_predict(Xs)
_____no_output_____
MIT
.ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb
GiovanniDiMasi/Cluster_animali
DBSCAN
dbscan =DBSCAN(eps=0.3,min_samples=4) y_pred_dbscan = dbscan.fit_predict(Xv) #### I verified that DBSCAN gives a much better result if we use the 2-dimensional dataset Xv, #### which is therefore worth computing before the results-visualization part
_____no_output_____
MIT
.ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb
GiovanniDiMasi/Cluster_animali
Birch
brc = Birch(n_clusters=7, threshold = 0.1) y_pred_brc = brc.fit_predict(Xs)
_____no_output_____
MIT
.ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb
GiovanniDiMasi/Cluster_animali
Results visualization
import matplotlib.pyplot as plt from sklearn.decomposition import PCA pca = PCA(2) Xv = pca.fit_transform(Xs) ### Apply principal component analysis to compress the data to two dimensions so we can visualize it. fig, ax = plt.subplots(figsize=(16,9),ncols=3, nrows=2) ax[0][0].scatter(Xv[:,0],Xv[:,1], s=110, c...
_____no_output_____
MIT
.ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb
GiovanniDiMasi/Cluster_animali
Benchmark e interpretazione
from sklearn.metrics import adjusted_rand_score, completeness_score ### We should add a short description of what these two metrics measure and why they were chosen
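Both metrics compare a predicted clustering to ground-truth labels and are invariant to permutations of the cluster labels, which is exactly what a clustering benchmark needs. A quick check, assuming scikit-learn is available:

```python
from sklearn.metrics import adjusted_rand_score, completeness_score

truth = [0, 0, 1, 1]
pred = [1, 1, 0, 0]  # same partition as truth, labels swapped

# Both scores treat a relabeled-but-identical partition as perfect
print(adjusted_rand_score(truth, pred))  # 1.0
print(completeness_score(truth, pred))   # 1.0
```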
_____no_output_____
MIT
.ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb
GiovanniDiMasi/Cluster_animali
We use two different metrics to check how accurate the clustering results are with respect to the ground truth. Kmeans
risultati = {} k_c= completeness_score(y_verita,y_pred_k) k_a = adjusted_rand_score(y_verita,y_pred_k) risultati['Kmeans'] =[k_c,k_a]
_____no_output_____
MIT
.ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb
GiovanniDiMasi/Cluster_animali
AgglomerativeClustering
aggc_c= completeness_score(y_verita,y_pred_aggc) aggc_a = adjusted_rand_score(y_verita,y_pred_aggc) risultati['Agglomerative clustering']=[aggc_c,aggc_a]
_____no_output_____
MIT
.ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb
GiovanniDiMasi/Cluster_animali
SpectralClustering
spc_c= completeness_score(y_verita,y_pred_spc) spc_a = adjusted_rand_score(y_verita,y_pred_spc) risultati['Spectral clustering']=[spc_c,spc_a]
_____no_output_____
MIT
.ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb
GiovanniDiMasi/Cluster_animali
DBSCAN
dbscan_c= completeness_score(y_verita,y_pred_dbscan) dbscan_a = adjusted_rand_score(y_verita,y_pred_dbscan) risultati['Dbscan']=[dbscan_c,dbscan_a]
_____no_output_____
MIT
.ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb
GiovanniDiMasi/Cluster_animali
Birch
brc_c= completeness_score(y_verita,y_pred_brc) brc_a = adjusted_rand_score(y_verita,y_pred_brc) risultati['Birch']=[brc_c,brc_a] risultati ### why is it the best?
_____no_output_____
MIT
.ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb
GiovanniDiMasi/Cluster_animali
Spectral clustering turns out to be the best algorithm.
## function to find the position of each cluster member in the original dataset def select_points(X, y_pred, cluster_label): pos = [i for i, x in enumerate(y_pred) if x == cluster_label] return X.iloc[pos] select_points(data,y_pred_spc,3) ### All animals of the same class_type except the turtle ...
_____no_output_____
MIT
.ipynb_checkpoints/Zoo dataset-Giovanni commit-checkpoint.ipynb
GiovanniDiMasi/Cluster_animali
New section Installation and imports
_____no_output_____
MIT
Test1_Mask_RCNN.ipynb
Art-phys/Mask_RCNN
Importing Libraries
import os import gc import numpy as np import pandas as pd data_path = r'../input/h-and-m-personalized-fashion-recommendations/transactions_train.csv' customer_data_path = r'../input/h-and-m-personalized-fashion-recommendations/customers.csv' article_data_path = r'../input/h-and-m-personalized-fashion-recommendations/a...
_____no_output_____
MIT
hm-recbole-ifs.ipynb
ManashJKonwar/Kaggle-HM-Recommender
Reading Transaction data
%%time # Load all sales data (for 3 years starting from 2018 to 2020) # Also, article_id is treated as a string column otherwise it # would drop the leading zeros while reading the specific column values transactions_data=create_data(data_path, data_type='transaction') print(transactions_data.shape) # # Unique Attri...
(31788324, 5) 734-total No of unique transactions dates in data sheet 1362281-total No of unique customers ids in data sheet 104547-total No of unique article ids courses names in data sheet 2-total No of unique sales channels in data sheet CPU times: user 55.3 s, sys: 4.24 s, total: 59.5 s Wall time: 1min 20s
MIT
hm-recbole-ifs.ipynb
ManashJKonwar/Kaggle-HM-Recommender
Postprocessing Transaction data 1. A timestamp column is created from the transaction date column 2. Columns are renamed for easier reading
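The date-to-timestamp step can be sketched on a toy frame, assuming pandas; the notebook applies the same idea to the `t_dat` column, with the same cutoff of 1585620000 seconds (around 2020-03-31):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"t_dat": pd.to_datetime(["2020-03-30", "2020-04-01"])})
# datetime64[ns] -> nanoseconds since epoch -> integer seconds since epoch
df["timestamp"] = df["t_dat"].values.astype(np.int64) // 10**9
df = df[df["timestamp"] > 1585620000]  # keep only recent transactions
print(df["timestamp"].tolist())        # [1585699200]
```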
transactions_data['timestamp'] = transactions_data.t_dat.values.astype(np.int64)// 10 ** 9 transactions_data = transactions_data[transactions_data['timestamp'] > 1585620000] transactions_data = transactions_data[['customer_id','article_id','timestamp']].rename(columns={'customer_id': 'user_id:token', ...
_____no_output_____
MIT
hm-recbole-ifs.ipynb
ManashJKonwar/Kaggle-HM-Recommender
Saving transaction data to kaggle based recbole output directory
transactions_data.to_csv(os.path.join(recbole_data_path, 'recbole_data.inter'), index=False, sep='\t') del [[transactions_data]] gc.collect()
_____no_output_____
MIT
hm-recbole-ifs.ipynb
ManashJKonwar/Kaggle-HM-Recommender
Reading Article data
%%time # Load all Customers article_data=create_data(article_data_path, data_type='article') print(article_data.shape) print(str(len(article_data['article_id'].drop_duplicates())) + "-total No of unique article ids in article data sheet") article_data.head()
(105542, 25) 105542-total No of unique article ids in article data sheet CPU times: user 716 ms, sys: 43.6 ms, total: 760 ms Wall time: 1.07 s
MIT
hm-recbole-ifs.ipynb
ManashJKonwar/Kaggle-HM-Recommender
Postprocessing Article data 1. Duplicate columns are dropped to avoid multicollinearity 2. Columns are renamed for easier reading
article_data = article_data.drop(columns = ['product_type_name', 'graphical_appearance_name', 'colour_group_name', 'perceived_colour_value_name', 'perceived_colour_master_name', 'index_name', 'index_group_name', 'section_name', 'g...
_____no_output_____
MIT
hm-recbole-ifs.ipynb
ManashJKonwar/Kaggle-HM-Recommender
Saving article data to kaggle based recbole output directory
article_data.to_csv(os.path.join(recbole_data_path, 'recbole_data.item'), index=False, sep='\t') del [[article_data]] gc.collect()
_____no_output_____
MIT
hm-recbole-ifs.ipynb
ManashJKonwar/Kaggle-HM-Recommender
Setting up Recbole based dataset and configurations
import logging from logging import getLogger from recbole.config import Config from recbole.data import create_dataset, data_preparation from recbole.model.sequential_recommender import GRU4RecF from recbole.trainer import Trainer from recbole.utils import init_seed, init_logger parameter_dict = { 'data_path': '/ka...
GRU4RecF( (item_embedding): Embedding(7330, 64, padding_idx=0) (feature_embed_layer): FeatureSeqEmbLayer( (token_embedding_table): ModuleDict( (item): FMEmbedding( (embedding): Embedding(3935, 64) ) ) (float_embedding_table): ModuleDict( (item): Embedding(1, 64) ) (toke...
MIT
hm-recbole-ifs.ipynb
ManashJKonwar/Kaggle-HM-Recommender
Generate trained recommender based predictions
from recbole.utils.case_study import full_sort_topk external_user_ids = dataset.id2token( dataset.uid_field, list(range(dataset.user_num)))[1:]  # first element in array is 'PAD' (default of RecBole) -> remove it import torch from recbole.data.interaction import Interaction def add_last_item(old_interaction, last_item_...
_____no_output_____
MIT
hm-recbole-ifs.ipynb
ManashJKonwar/Kaggle-HM-Recommender
Reading Submission data
submission_data = pd.read_csv(submission_data_path) submission_data.shape
_____no_output_____
MIT
hm-recbole-ifs.ipynb
ManashJKonwar/Kaggle-HM-Recommender
Postprocessing submission data 1. Replacing predictions for trained customer ids with the RecBole-based predictions by performing a merge 2. Filling NaN values for customer ids that were not part of the RecBole training session 3. Generating the final prediction column 4. Dropping redundant columns
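The merge-with-fallback logic can be sketched on toy frames. The ids and predictions below are hypothetical; after the merge, `prediction_y` holds the model output and `prediction_x` the default:

```python
import pandas as pd

# Hypothetical data: every customer has a default prediction; the model
# produced a prediction only for customer "a"
submission = pd.DataFrame({"customer_id": ["a", "b"], "prediction": ["popular", "popular"]})
result = pd.DataFrame({"customer_id": ["a"], "prediction": ["0706016001"]})

merged = pd.merge(submission, result, on="customer_id", how="outer")
merged = merged.fillna(-1)  # mark customers the model never saw
# prefer the model's prediction (prediction_y); fall back to the default otherwise
merged["prediction"] = merged.apply(
    lambda r: r["prediction_y"] if r["prediction_y"] != -1 else r["prediction_x"], axis=1)
print(merged["prediction"].tolist())  # ['0706016001', 'popular']
```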
submission_data = pd.merge(submission_data, result, on='customer_id', how='outer') submission_data submission_data = submission_data.fillna(-1) submission_data['prediction'] = submission_data.apply( lambda x: x['prediction_y'] if x['prediction_y'] != -1 else x['prediction_x'], axis=1) submission_data submission_dat...
_____no_output_____
MIT
hm-recbole-ifs.ipynb
ManashJKonwar/Kaggle-HM-Recommender