Read the data and get a row count. Data source: U.S. Department of Transportation, TranStats database. Air Carrier Statistics Table T-100 Domestic Market (All Carriers): "This table contains domestic market data reported by both U.S. and foreign air carriers, including carrier, origin, destination, and service class for enplaned passengers, freight and mail when both origin and destination airports are located within the boundaries of the United States and its territories." -- 2015
import pandas as pd
import numpy as np

file_path = r'data\T100_2015.csv.gz'
df = pd.read_csv(file_path, header=0)
df.count()
df.head(n=10)

# Keep only the columns we need.
df = pd.read_csv(file_path, header=0, usecols=["PASSENGERS", "ORIGIN", "DEST"])
df.head(n=10)

print('Min: ', df['PASSENGERS'].min())
print('Max: ', df['PASSENGERS'].max())
print('Mean: ', df['PASSENGERS'].mean())

# Filter to busy routes only.
df = df.query('PASSENGERS > 10000')
print('Min: ', df['PASSENGERS'].min())
print('Max: ', df['PASSENGERS'].max())
print('Mean: ', df['PASSENGERS'].mean())

# Total passengers per origin-destination pair.
OriginToDestination = df.groupby(['ORIGIN', 'DEST'], as_index=False).agg({'PASSENGERS': sum})
OriginToDestination.head(n=10)

# Pivot into an origin x destination matrix.
OriginToDestination = pd.pivot_table(OriginToDestination, values='PASSENGERS',
                                     index=['ORIGIN'], columns=['DEST'], aggfunc=np.sum)
OriginToDestination.head()
OriginToDestination.fillna(0)
scipy/demos/DevSummit 2016.ipynb
EsriOceans/oceans-workshop-2016
apache-2.0
SymPy SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible.
import sympy
from sympy import *
from sympy.stats import *
from sympy import symbols
from sympy.plotting import plot
from sympy.interactive import printing

printing.init_printing(use_latex=True)
print('Sympy version ' + sympy.__version__)
scipy/demos/DevSummit 2016.ipynb
EsriOceans/oceans-workshop-2016
apache-2.0
This example was gleaned from: Rocklin, Matthew, and Andy R. Terrel. "Symbolic Statistics with SymPy." Computing in Science & Engineering 14.3 (2012): 88-93. Problem: Data assimilation -- we want to assimilate new measurements into a set of old measurements. Both sets of measurements have uncertainty. For example, ACS estimates updated with local data. Assume we've estimated that the temperature outside is 30 degrees. However, there is certainly uncertainty in our estimate. Let's say +- 3 degrees. In SymPy, we can model this with a normal random variable.
T = Normal('T', 30, 3)
scipy/demos/DevSummit 2016.ipynb
EsriOceans/oceans-workshop-2016
apache-2.0
What is the probability that the temperature is actually greater than 33 degrees? <img src="eq1.png"> We can use Sympy's integration engine to calculate a precise answer.
P(T > 33)
N(P(T > 33))
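As a numerical cross-check (a sketch assuming SciPy is also available in the environment), the same tail probability can be computed with `scipy.stats.norm`, since $T \sim N(30, 3^2)$:

```python
from scipy.stats import norm

# P(T > 33) for T ~ Normal(mean=30, sd=3) is the survival function at 33.
p = norm.sf(33, loc=30, scale=3)
print(p)  # about 0.1587, i.e. roughly a 16% chance
```

This should agree with the value SymPy's integration engine returns via `N(P(T > 33))`.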
scipy/demos/DevSummit 2016.ipynb
EsriOceans/oceans-workshop-2016
apache-2.0
Assume we now have a thermometer and can measure the temperature. However, there is still uncertainty involved.
noise = Normal('noise', 0, 1.5)
observation = T + noise
scipy/demos/DevSummit 2016.ipynb
EsriOceans/oceans-workshop-2016
apache-2.0
We now have two measurements -- 30 +- 3 degrees and 26 +- 1.5 degrees. How do we combine them? 30 +- 3 was our prior measurement. We want to calculate a better estimate of the temperature (posterior) given an observation of 26 degrees. <img src="eq2.png">
T_posterior = given(T, Eq(observation, 26))
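For a normal prior and normal measurement noise, the posterior SymPy derives has a well-known closed form: precisions (inverse variances) add, and the posterior mean is the precision-weighted average of the prior mean and the observation. A minimal numeric sketch using the values above (prior $N(30, 3^2)$, noise sd 1.5, observation 26):

```python
# Precision-weighted combination of prior N(30, 3^2) and observation 26 with noise sd 1.5.
prior_mean, prior_var = 30.0, 3.0 ** 2
obs, obs_var = 26.0, 1.5 ** 2

post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
print(post_mean, post_var ** 0.5)  # mean 26.8, sd about 1.342
```

Note how the posterior mean lands much closer to the thermometer reading than to the prior: the measurement has the smaller variance, so it carries more weight.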
scipy/demos/DevSummit 2016.ipynb
EsriOceans/oceans-workshop-2016
apache-2.0
And also functions giving the position of the pivot and the load at an arbitrary moment of time: $x_1(t) = x(t)$, $x_2(t) = x_1(t)+l\sin(\alpha(t))$, $y_2(t) = -l\cos(\alpha(t))$
def x1():
    return xr()

def x2():
    return x1() + l * sin(ar())

def y2():
    return -l * cos(ar())
pendulum_model.ipynb
nikita-mayorov/math_modelling
gpl-3.0
Define the initial conditions:
g = 9.8                               # Acceleration of free fall.
m1 = 3.0                              # Mass of the pivot.
m2 = 2.0                              # Mass of the load.
l = 6.0                               # Length of the suspension cable.
a0 = pi / 4.0                         # Initial angle of deviation from equilibrium.
v0 = 2.0                              # Initial velocity.
x0 = 0.0                              # Initial coordinate along the Ox axis.
global_length, delta = 15.0, 0.01     # global_length - total time of motion, delta - step.
t = arange(x0, global_length, delta)  # t - time interval from 'x0' to 'global_length' with step 'delta'.
pendulum_model.ipynb
nikita-mayorov/math_modelling
gpl-3.0
Plot the graphs:
figure1 = plt.figure()
plot1 = figure1.add_subplot(3, 2, 1)
plot2 = figure1.add_subplot(3, 2, 2)
plot3 = figure1.add_subplot(3, 1, 2)
for ax in figure1.axes:
    ax.grid(True)
plt.subplots_adjust(top=2.0, right=2.0, wspace=0.10, hspace=0.25)
plot1.plot(t, xr(), 'g')
plot2.plot(t, ar(), 'g')
plot3.plot(x1(), [0.01 for i in range(len(t))], 'b')
plot3.plot(x2(), y2(), 'r')
plot1.set_title(u'xr(t)')
plot2.set_title(u'ar(t)')
plot3.set_title(u'Pendulum trajectory')
plt.show()
pendulum_model.ipynb
nikita-mayorov/math_modelling
gpl-3.0
Approximating the model with Euler's method. Reduction to a system of first-order equations: \begin{equation} \left\{ \begin{matrix} l\ddot{\alpha}+\ddot{x}+g\alpha=0\\ (m_1+m_2)\ddot{x}+m_1l\ddot{\alpha}=0\\ \end{matrix} \right. \end{equation} The resulting system contains second-order equations. Therefore, to build an approximate solution with Euler's method, these equations must be reduced to first order by introducing new variables: \begin{equation} \begin{matrix} y=\dot{x}\\ \dot{y}=\ddot{x}\\ \beta=\dot{\alpha}\\ \dot{\beta}=\ddot{\alpha}\\ \end{matrix} \Rightarrow \begin{matrix} l\dot{\beta}+\dot{y}+g\alpha=0\\ (m_1+m_2)\dot{y}+m_1l\dot{\beta}=0\\ \end{matrix} \end{equation} \begin{equation} \begin{matrix} \dot{x}=y&(4)\\ \dot{\alpha}=\beta&(5)\\ (m_1+m_2)\dot{y}+m_1l\dot{\beta}=0\\ \left(\frac{m_1+m_2}{m_1l}\right)\dot{y}+\left(\frac{m_1l\dot{\beta}}{m_1l}\right)=0\\ \dot{\beta}=-\frac{m_1+m_2}{m_1l}\dot{y}&(6)\\ l\dot{\beta}+\dot{y}+g\alpha=0\\ l\left(-\frac{m_1+m_2}{m_1l}\dot{y}\right)+\dot{y}+g\alpha=0\\ -\frac{m_1}{m_2}\dot{y}=-g\alpha\\ \dot{y}=\frac{m_2}{m_1}g\alpha&(7)\\ \end{matrix} \end{equation} We combine equations $(4)$, $(5)$, $(6)$, $(7)$ into a system and specify the initial conditions: \begin{equation} \left\{ \begin{matrix} \dot{x}=y\\ \dot{\alpha}=\beta\\ \dot{y}=\frac{m_2}{m_1}g\alpha\\ \dot{\beta}=-\frac{m_1+m_2}{m_1}\frac{g}{l}\alpha\\ \end{matrix} \right. \begin{matrix} x(0)=x_0\\ \alpha(0)=\alpha_0\\ y(0)=v_0\\ \beta(0)=u_0\\ \end{matrix} \end{equation} Building the iterative process of Euler's method: the task is to find the solution of an equation $\dot{y}(t)=f(t,y(t))$, where $y(t_0)=y_0$, on the interval from $x_0$ to $n$. To do this, we build a loop in which each iteration depends on the result of the previous one: $y_{i+1}=y_i+\Delta t f(t_i, y_i)$, where $\Delta t$ is the iteration step.
\begin{equation} \left\{ \begin{matrix} x_{i+1}=x_i+\Delta t y_i\\ \alpha_{i+1}=\alpha_i+\Delta t\beta_i\\ y_{i+1}=y_i+\Delta t\frac{m_2}{m_1}g\alpha_i\\ \beta_{i+1}=\beta_i-\Delta t\frac{m_1+m_2}{m_1}\frac{g}{l}\alpha_i\\ \end{matrix} \right. \end{equation}
x = arange(x0, global_length, delta)
a = arange(x0, global_length, delta)
y = arange(x0, global_length, delta)
b = arange(x0, global_length, delta)
x[0], a[0], y[0], b[0] = x0, a0, v0, sqrt(g / l)

for i in range(0, len(t) - 1):
    x[i+1] = x[i] + delta * y[i]
    a[i+1] = a[i] + delta * b[i]
    y[i+1] = y[i] + delta * (m2 / m1) * g * a[i]
    b[i+1] = b[i] - delta * ((m1 + m2) / m1) * (g / l) * a[i]

def aproxy_x1():
    return x

def aproxy_x2():
    return x + l * sin(a)

def aproxy_y2():
    return -l * cos(a)
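The same Euler update can be sanity-checked on a scalar ODE with a known exact solution (a minimal sketch, independent of the pendulum model): for $\dot{y}=-y$, $y(0)=1$, the iterates should approach $e^{-t}$ as the step shrinks.

```python
import math

def euler(f, y0, t0, t_end, dt):
    """Advance y' = f(t, y) from t0 to t_end with a fixed step dt."""
    t, y = t0, y0
    while t < t_end - 1e-12:
        y = y + dt * f(t, y)
        t += dt
    return y

# With dt = 0.001 the result is within about 2e-4 of exp(-1).
y_approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 0.001)
print(y_approx, math.exp(-1))
```

The global error of the explicit Euler method is first order in the step, which is why halving `delta` roughly halves the gap to the exact solution.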
pendulum_model.ipynb
nikita-mayorov/math_modelling
gpl-3.0
Plot the graphs:
figure2 = plt.figure()
plot4 = figure2.add_subplot(3, 2, 1)
plot5 = figure2.add_subplot(3, 2, 2)
plot6 = figure2.add_subplot(3, 1, 2)
for ax in figure2.axes:
    ax.grid(True)
plt.subplots_adjust(top=2.0, right=2.0, wspace=0.10, hspace=0.25)
plot4.plot(t, xr(), 'r')
plot4.plot(t, x, 'b')
plot5.plot(t, ar(), 'r')
plot5.plot(t, a, 'b')
plot6.plot(x1(), [0.01 for i in range(len(t))], 'r')
plot6.plot(aproxy_x1(), [0.01 for i in range(len(t))], 'b')
plot6.plot(x2(), y2(), 'r', label=u'Exact solution')
plot6.plot(aproxy_x2(), aproxy_y2(), 'b', label=u'Euler method')
plot4.set_title(u'xr(t)')
plot5.set_title(u'ar(t)')
plot6.set_title(u'Comparison of the exact and approximate pendulum trajectories')
plot6.legend(loc='upper right')
plt.show()
pendulum_model.ipynb
nikita-mayorov/math_modelling
gpl-3.0
Specifying the model in pymc3 mirrors its statistical specification.
model = pm.Model()
with model:
    sigma = pm.Exponential('sigma', 1. / .02, testval=.1)
    nu = pm.Exponential('nu', 1. / 10)
    s = GaussianRandomWalk('s', sigma ** -2, shape=n)
    r = pm.T('r', nu, lam=pm.exp(-2 * s), observed=returns)
pymc3/examples/stochastic_volatility.ipynb
jameshensman/pymc3
apache-2.0
2 - Outline of the Assignment You will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed: Convolution functions, including: Zero Padding Convolve window Convolution forward Convolution backward (optional) Pooling functions, including: Pooling forward Create mask Distribute value Pooling backward (optional) This notebook will ask you to implement these functions from scratch in numpy. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model: <img src="images/model.png" style="width:800px;height:300px;"> Note that for every forward function, there is its corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation. 3 - Convolutional Neural Networks Although programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below. <img src="images/conv_nn.png" style="width:350px;height:200px;"> In this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself. 3.1 - Zero-Padding Zero-padding adds zeros around the border of an image: <img src="images/PAD.png" style="width:600px;height:400px;"> <caption><center> <u> <font color='purple'> Figure 1 </u><font color='purple'> : Zero-Padding<br> Image (3 channels, RGB) with a padding of 2. </center></caption> The main benefits of padding are the following: It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. 
This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer. It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels at the edges of an image. Exercise: Implement the following function, which pads all the images of a batch of examples X with zeros. Use np.pad. Note: if you want to pad the array "a" of shape $(5,5,5,5,5)$ with pad = 1 for the 2nd dimension, pad = 3 for the 4th dimension and pad = 0 for the rest, you would do: python a = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..))
# GRADED FUNCTION: zero_pad

def zero_pad(X, pad):
    """
    Pad with zeros all images of the dataset X. The padding is applied to the height
    and width of an image, as illustrated in Figure 1.

    Argument:
    X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
    pad -- integer, amount of padding around each image on vertical and horizontal dimensions

    Returns:
    X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
    """
    ### START CODE HERE ### (≈ 1 line)
    X_pad = np.pad(X, ((0,0), (pad,pad), (pad,pad), (0,0)), mode='constant')
    ### END CODE HERE ###
    return X_pad

np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)
print("x.shape =", x.shape)
print("x_pad.shape =", x_pad.shape)
print("x[1,1] =", x[1,1])
print("x_pad[1,1] =", x_pad[1,1])

fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])
Convolutional Neural Networks/Convolution+model+-+Step+by+Step+-+v2.ipynb
AhmetHamzaEmra/Deep-Learning-Specialization-Coursera
mit
Expected Output: <table> <tr> <td> **x.shape**: </td> <td> (4, 3, 3, 2) </td> </tr> <tr> <td> **x_pad.shape**: </td> <td> (4, 7, 7, 2) </td> </tr> <tr> <td> **x[1,1]**: </td> <td> [[ 0.90085595 -0.68372786] [-0.12289023 -0.93576943] [-0.26788808 0.53035547]] </td> </tr> <tr> <td> **x_pad[1,1]**: </td> <td> [[ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.]] </td> </tr> </table> 3.2 - Single step of convolution In this part, implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which: Takes an input volume Applies a filter at every position of the input Outputs another volume (usually of different size) <img src="images/Convolution_schematic.gif" style="width:500px;height:300px;"> <caption><center> <u> <font color='purple'> Figure 2 </u><font color='purple'> : Convolution operation<br> with a filter of 2x2 and a stride of 1 (stride = amount you move the window each time you slide) </center></caption> In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output. Later in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation. Exercise: Implement conv_single_step(). Hint.
# GRADED FUNCTION: conv_single_step

def conv_single_step(a_slice_prev, W, b):
    """
    Apply one filter defined by parameters W on a single slice (a_slice_prev) of the
    output activation of the previous layer.

    Arguments:
    a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
    W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
    b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)

    Returns:
    Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
    """
    ### START CODE HERE ### (≈ 2 lines of code)
    # Element-wise product between a_slice and W. Do not add the bias yet.
    s = a_slice_prev * W
    # Sum over all entries of the volume s.
    Z = np.sum(s)
    # Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
    Z = float(Z + b)
    ### END CODE HERE ###
    return Z

np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)
Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)
Convolutional Neural Networks/Convolution+model+-+Step+by+Step+-+v2.ipynb
AhmetHamzaEmra/Deep-Learning-Specialization-Coursera
mit
Expected Output: <table> <tr> <td> **Z** </td> <td> -6.99908945068 </td> </tr> </table> 3.3 - Convolutional Neural Networks - Forward pass In the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume: <center> <video width="620" height="440" src="images/conv_kiank.mp4" type="video/mp4" controls> </video> </center> Exercise: Implement the function below to convolve the filters W on an input activation A_prev. This function takes as input A_prev, the activations output by the previous layer (for a batch of m inputs), F filters/weights denoted by W, and a bias vector denoted by b, where each filter has its own (single) bias. Finally you also have access to the hyperparameters dictionary which contains the stride and the padding. Hint: 1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do: python a_slice_prev = a_prev[0:2,0:2,:] This will be useful when you will define a_slice_prev below, using the start/end indexes you will define. 2. To define a_slice you will need to first define its corners vert_start, vert_end, horiz_start and horiz_end. This figure may be helpful for you to find how each of the corners can be defined using h, w, f and s in the code below. <img src="images/vert_horiz_kiank.png" style="width:400px;height:300px;"> <caption><center> <u> <font color='purple'> Figure 3 </u><font color='purple'> : Definition of a slice using vertical and horizontal start/end (with a 2x2 filter) <br> This figure shows only a single channel. 
</center></caption> Reminder: The formulas relating the output shape of the convolution to the input shape are: $$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$ $$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$ $$ n_C = \text{number of filters used in the convolution}$$ For this exercise, we won't worry about vectorization, and will just implement everything with for-loops.
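As a quick numeric check of these shape formulas (a sketch with arbitrary example values, not part of the graded exercise):

```python
import math

def conv_output_size(n_prev, f, pad, stride):
    # n = floor((n_prev - f + 2*pad) / stride) + 1
    return math.floor((n_prev - f + 2 * pad) / stride) + 1

# A 4x4 input with a 2x2 filter, pad 2 and stride 2 yields a 4x4 output,
# while a 3x3 filter with pad 1 and stride 1 is a "same" convolution on a 5x5 input.
print(conv_output_size(4, f=2, pad=2, stride=2))  # 4
print(conv_output_size(5, f=3, pad=1, stride=1))  # 5
```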
# GRADED FUNCTION: conv_forward

def conv_forward(A_prev, W, b, hparameters):
    """
    Implements the forward propagation for a convolution function

    Arguments:
    A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
    b -- Biases, numpy array of shape (1, 1, 1, n_C)
    hparameters -- python dictionary containing "stride" and "pad"

    Returns:
    Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache of values needed for the conv_backward() function
    """
    ### START CODE HERE ###
    # Retrieve dimensions from A_prev's shape (≈1 line)
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
    # Retrieve dimensions from W's shape (≈1 line)
    (f, f, n_C_prev, n_C) = W.shape
    # Retrieve information from "hparameters" (≈2 lines)
    stride = hparameters['stride']
    pad = hparameters['pad']
    # Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines)
    n_H = int(np.floor((n_H_prev - f + 2 * pad) / stride) + 1)
    n_W = int(np.floor((n_W_prev - f + 2 * pad) / stride) + 1)
    # Initialize the output volume Z with zeros. (≈1 line)
    Z = np.zeros([m, n_H, n_W, n_C])
    # Create A_prev_pad by padding A_prev
    A_prev_pad = zero_pad(A_prev, pad)

    for i in range(m):                   # loop over the batch of training examples
        a_prev_pad = A_prev_pad[i]       # Select ith training example's padded activation
        for h in range(n_H):             # loop over vertical axis of the output volume
            for w in range(n_W):         # loop over horizontal axis of the output volume
                for c in range(n_C):     # loop over channels (= #filters) of the output volume
                    # Find the corners of the current "slice"; the window is f wide, not pad wide (≈4 lines)
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f
                    # Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)
                    a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
                    # Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)
                    Z[i, h, w, c] = conv_single_step(a_slice_prev, W[..., c], b[..., c])
    ### END CODE HERE ###

    # Making sure your output shape is correct
    assert(Z.shape == (m, n_H, n_W, n_C))
    # Save information in "cache" for the backprop
    cache = (A_prev, W, b, hparameters)
    return Z, cache

np.random.seed(1)
A_prev = np.random.randn(10, 4, 4, 3)
W = np.random.randn(2, 2, 3, 8)
b = np.random.randn(1, 1, 1, 8)
hparameters = {"pad": 2, "stride": 2}
Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
print("Z's mean =", np.mean(Z))
print("Z[3,2,1] =", Z[3,2,1])
print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3])
Convolutional Neural Networks/Convolution+model+-+Step+by+Step+-+v2.ipynb
AhmetHamzaEmra/Deep-Learning-Specialization-Coursera
mit
Expected Output: <table> <tr> <td> A = </td> <td> [[[[ 1.74481176 0.86540763 1.13376944]]] [[[ 1.13162939 1.51981682 2.18557541]]]] </td> </tr> <tr> <td> A = </td> <td> [[[[ 0.02105773 -0.20328806 -0.40389855]]] [[[-0.22154621 0.51716526 0.48155844]]]] </td> </tr> </table> Congratulations! You have now implemented the forward passes of all the layers of a convolutional network. The remainder of this notebook is optional, and will not be graded. 5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED) In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish however, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like. When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you need to calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and we did not derive them in lecture, but we briefly present them below. 5.1 - Convolutional layer backward pass Let's start by implementing the backward pass for a CONV layer. 5.1.1 - Computing dA: This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example: $$ dA += \sum_{h=0}^{n_H} \sum_{w=0}^{n_W} W_c \times dZ_{hw} \tag{1}$$ Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). 
Note that at each time, we multiply the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed by a different a_slice. Therefore when computing the backprop for dA, we are just adding the gradients of all the a_slices. In code, inside the appropriate for-loops, this formula translates into: python da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c] 5.1.2 - Computing dW: This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss: $$ dW_c += \sum_{h=0}^{n_H} \sum_{w=0}^{n_W} a_{slice} \times dZ_{hw} \tag{2}$$ Where $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{ij}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$. In code, inside the appropriate for-loops, this formula translates into: python dW[:,:,:,c] += a_slice * dZ[i, h, w, c] 5.1.3 - Computing db: This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$: $$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$ As you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost. In code, inside the appropriate for-loops, this formula translates into: python db[:,:,:,c] += dZ[i, h, w, c] Exercise: Implement the conv_backward function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above.
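A small check of formula (3) (a NumPy sketch with arbitrary shapes): accumulating dZ into db with explicit loops, exactly as in the code formula above, matches a single vectorized sum over the example, height and width axes.

```python
import numpy as np

rng = np.random.RandomState(0)
dZ = rng.randn(2, 3, 3, 4)  # (m, n_H, n_W, n_C)

# Loop version, mirroring db[:,:,:,c] += dZ[i, h, w, c].
db = np.zeros((1, 1, 1, 4))
for i in range(dZ.shape[0]):
    for h in range(dZ.shape[1]):
        for w in range(dZ.shape[2]):
            for c in range(dZ.shape[3]):
                db[:, :, :, c] += dZ[i, h, w, c]

# Same result in one vectorized call.
assert np.allclose(db.ravel(), dZ.sum(axis=(0, 1, 2)))
```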
def conv_backward(dZ, cache):
    """
    Implement the backward propagation for a convolution function

    Arguments:
    dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache of values needed for the conv_backward(), output of conv_forward()

    Returns:
    dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),
               numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    dW -- gradient of the cost with respect to the weights of the conv layer (W),
          numpy array of shape (f, f, n_C_prev, n_C)
    db -- gradient of the cost with respect to the biases of the conv layer (b),
          numpy array of shape (1, 1, 1, n_C)
    """
    ### START CODE HERE ###
    # Retrieve information from "cache"
    (A_prev, W, b, hparameters) = cache
    # Retrieve dimensions from A_prev's shape
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
    # Retrieve dimensions from W's shape
    (f, f, n_C_prev, n_C) = W.shape
    # Retrieve information from "hparameters"
    stride = hparameters['stride']
    pad = hparameters['pad']
    # Retrieve dimensions from dZ's shape
    (m, n_H, n_W, n_C) = dZ.shape

    # Initialize dA_prev, dW, db with the correct shapes
    dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev))
    dW = np.zeros((f, f, n_C_prev, n_C))
    db = np.zeros((1, 1, 1, n_C))

    # Pad A_prev and dA_prev
    A_prev_pad = zero_pad(A_prev, pad)
    dA_prev_pad = zero_pad(dA_prev, pad)

    for i in range(m):                   # loop over the training examples
        # select ith training example from A_prev_pad and dA_prev_pad
        a_prev_pad = A_prev_pad[i]
        da_prev_pad = dA_prev_pad[i]
        for h in range(n_H):             # loop over vertical axis of the output volume
            for w in range(n_W):         # loop over horizontal axis of the output volume
                for c in range(n_C):     # loop over the channels of the output volume
                    # Find the corners of the current "slice"; h indexes the vertical axis,
                    # w the horizontal one, and the window is f wide
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f
                    # Use the corners to define the slice from a_prev_pad
                    a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
                    # Update gradients for the window and the filter's parameters using the code formulas given above
                    da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:, :, :, c] * dZ[i, h, w, c]
                    dW[:, :, :, c] += a_slice * dZ[i, h, w, c]
                    db[:, :, :, c] += dZ[i, h, w, c]
        # Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])
        dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :]
    ### END CODE HERE ###

    # Making sure your output shape is correct
    assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))
    return dA_prev, dW, db

np.random.seed(1)
dA, dW, db = conv_backward(Z, cache_conv)
print("dA_mean =", np.mean(dA))
print("dW_mean =", np.mean(dW))
print("db_mean =", np.mean(db))
Convolutional Neural Networks/Convolution+model+-+Step+by+Step+-+v2.ipynb
AhmetHamzaEmra/Deep-Learning-Specialization-Coursera
mit
Expected Output: <table> <tr> <td> **x =** </td> <td> [[ 1.62434536 -0.61175641 -0.52817175] <br> [-1.07296862 0.86540763 -2.3015387 ]] </td> </tr> <tr> <td> **mask =** </td> <td> [[ True False False] <br> [False False False]] </td> </tr> </table> Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will "propagate" the gradient back to this particular input value that had influenced the cost. 5.2.2 - Average pooling - backward pass In max pooling, for each input window, all the "influence" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this. For example if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like: $$ dZ = 1 \quad \rightarrow \quad dZ = \begin{bmatrix} 1/4 & 1/4 \\ 1/4 & 1/4 \end{bmatrix}\tag{5}$$ This implies that each position in the $dZ$ matrix contributes equally to the output because in the forward pass, we took an average. Exercise: Implement the function below to equally distribute a value dz through a matrix of dimension shape. Hint
def distribute_value(dz, shape):
    """
    Distributes the input value in the matrix of dimension shape

    Arguments:
    dz -- input scalar
    shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz

    Returns:
    a -- Array of size (n_H, n_W) for which we distributed the value of dz
    """
    ### START CODE HERE ###
    # Retrieve dimensions from shape (≈1 line)
    (n_H, n_W) = shape
    # Compute the value to distribute on the matrix (≈1 line)
    average = dz / (n_H * n_W)
    # Create a matrix where every entry is the "average" value (≈1 line)
    a = np.ones((n_H, n_W)) * average
    ### END CODE HERE ###
    return a

a = distribute_value(2, (2, 2))
print('distributed value =', a)
Convolutional Neural Networks/Convolution+model+-+Step+by+Step+-+v2.ipynb
AhmetHamzaEmra/Deep-Learning-Specialization-Coursera
mit
Expected Output: <table> <tr> <td> distributed_value = </td> <td> [[ 0.5 0.5] <br> [ 0.5 0.5]] </td> </tr> </table> 5.2.3 Putting it together: Pooling backward You now have everything you need to compute backward propagation on a pooling layer. Exercise: Implement the pool_backward function in both modes ("max" and "average"). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an if/elif statement to see if the mode is equal to 'max' or 'average'. If it is equal to 'average' you should use the distribute_value() function you implemented above to create a matrix of the same shape as a_slice. Otherwise, the mode is equal to 'max', and you will create a mask with create_mask_from_window() and multiply it by the corresponding value of dZ.
def pool_backward(dA, cache, mode="max"):
    """
    Implements the backward pass of the pooling layer

    Arguments:
    dA -- gradient of cost with respect to the output of the pooling layer, same shape as A
    cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters
    mode -- the pooling mode you would like to use, defined as a string ("max" or "average")

    Returns:
    dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev
    """
    ### START CODE HERE ###
    # Retrieve information from cache (≈1 line)
    (A_prev, hparameters) = cache
    # Retrieve hyperparameters from "hparameters" (≈2 lines)
    stride = hparameters["stride"]
    f = hparameters["f"]
    # Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)
    m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape
    m, n_H, n_W, n_C = dA.shape
    # Initialize dA_prev with zeros (≈1 line)
    dA_prev = np.zeros(A_prev.shape)

    for i in range(m):                   # loop over the training examples
        # select training example from A_prev (≈1 line)
        a_prev = A_prev[i]
        for h in range(n_H):             # loop on the vertical axis
            for w in range(n_W):         # loop on the horizontal axis
                for c in range(n_C):     # loop over the channels (depth)
                    # Find the corners of the current "slice" (≈4 lines)
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f
                    # Compute the backward propagation in both modes.
                    if mode == "max":
                        # Use the corners and "c" to define the current slice from a_prev (≈1 line)
                        a_prev_slice = a_prev[vert_start:vert_end, horiz_start:horiz_end, c]
                        # Create the mask from a_prev_slice (≈1 line)
                        mask = create_mask_from_window(a_prev_slice)
                        # Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)
                        dA_prev[i, vert_start:vert_end, horiz_start:horiz_end, c] += np.multiply(mask, dA[i, h, w, c])
                    elif mode == "average":
                        # Get the value a from dA (≈1 line)
                        da = dA[i, h, w, c]
                        # Define the shape of the filter as fxf (≈1 line)
                        shape = (f, f)
                        # Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line)
                        dA_prev[i, vert_start:vert_end, horiz_start:horiz_end, c] += distribute_value(da, shape)
    ### END CODE ###

    # Making sure your output shape is correct
    assert(dA_prev.shape == A_prev.shape)
    return dA_prev

np.random.seed(1)
A_prev = np.random.randn(5, 5, 3, 2)
hparameters = {"stride": 1, "f": 2}
A, cache = pool_forward(A_prev, hparameters)
dA = np.random.randn(5, 4, 2, 2)

dA_prev = pool_backward(dA, cache, mode="max")
print("mode = max")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
print()
dA_prev = pool_backward(dA, cache, mode="average")
print("mode = average")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
Convolutional Neural Networks/Convolution+model+-+Step+by+Step+-+v2.ipynb
AhmetHamzaEmra/Deep-Learning-Specialization-Coursera
mit
Compute DICS beamformer on evoked data Compute a Dynamic Imaging of Coherent Sources (DICS) beamformer from single trial activity in a time-frequency window to estimate source time courses based on evoked data. The original reference for DICS is: Gross et al. Dynamic imaging of coherent sources: Studying neural interactions in the human brain. PNAS (2001) vol. 98 (2) pp. 694-699
# Author: Roman Goj <roman.goj@gmail.com> # # License: BSD (3-clause) import mne import matplotlib.pyplot as plt import numpy as np from mne.datasets import sample from mne.time_frequency import compute_epochs_csd from mne.beamformer import dics print(__doc__) data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif' fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif' label_name = 'Aud-lh' fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name subjects_dir = data_path + '/subjects'
0.12/_downloads/plot_dics_beamformer.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Read raw data
raw = mne.io.read_raw_fif(raw_fname)
raw.info['bads'] = ['MEG 2443', 'EEG 053']  # 2 bad channels

# Set picks
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
                       stim=False, exclude='bads')

# Read epochs
event_id, tmin, tmax = 1, -0.2, 0.5
events = mne.read_events(event_fname)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
                    picks=picks, baseline=(None, 0), preload=True,
                    reject=dict(grad=4000e-13, mag=4e-12))
evoked = epochs.average()

# Read forward operator
forward = mne.read_forward_solution(fname_fwd, surf_ori=True)

# Computing the data and noise cross-spectral density matrices
# The time-frequency window was chosen on the basis of spectrograms from
# example time_frequency/plot_time_frequency.py
data_csd = compute_epochs_csd(epochs, mode='multitaper', tmin=0.04, tmax=0.15,
                              fmin=6, fmax=10)
noise_csd = compute_epochs_csd(epochs, mode='multitaper', tmin=-0.11, tmax=0.0,
                               fmin=6, fmax=10)

# Compute DICS spatial filter and estimate source time courses on evoked data
stc = dics(evoked, forward, noise_csd, data_csd)

plt.figure()
ts_show = -30  # show the 30 largest responses
plt.plot(1e3 * stc.times,
         stc.data[np.argsort(stc.data.max(axis=1))[ts_show:]].T)
plt.xlabel('Time (ms)')
plt.ylabel('DICS value')
plt.title('DICS time course of the 30 largest sources.')
plt.show()

# Plot brain in 3D with PySurfer if available
brain = stc.plot(hemi='rh', subjects_dir=subjects_dir)
brain.set_data_time_index(180)
brain.show_view('lateral')

# Uncomment to save image
# brain.save_image('DICS_map.png')
0.12/_downloads/plot_dics_beamformer.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
As some of you were asking about differences between Python 2 and Python 3, here is an example:
# This is an inline comment. Python 3:
print('hello world')

# Python 2: print 'hello world'
lectures/python_basics/lecture.ipynb
softEcon/bootcamp
mit
More broadly speaking, some function interfaces (here the print function) change. We will encounter other examples as we move along. More information regarding this issue is available at https://wiki.python.org/moin/Python2orPython3. Basic Types We now look at different types of objects that Python offers: floats, integers, lists, dictionaries, etc.
1 * 1.0

a = 3.0
type(a)

b = 3 > 5
type(b)

a = int(a)
type(a)
lectures/python_basics/lecture.ipynb
softEcon/bootcamp
mit
What about integer division?
# Difference between Python 2 and Python 3
3 / 2
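To make the Python 3 behavior explicit, here is a small self-contained sketch of the two division operators (in Python 2, `3 / 2` would instead return `1`):

```python
# Python 3: "/" is true division (always a float), "//" is floor division.
print(3 / 2)     # 1.5
print(3 // 2)    # 1
print(-3 // 2)   # -2  (floors toward negative infinity, does not truncate)
```

Note that `//` floors rather than truncates, which matters for negative operands.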
lectures/python_basics/lecture.ipynb
softEcon/bootcamp
mit
Let us now turn to containers: lists, strings, etc.
L = ['red', 'blue', 'green', 'black', 'white']
L[3], L[-2], L[3:], L[3:4]
lectures/python_basics/lecture.ipynb
softEcon/bootcamp
mit
Lists are mutable objects, i.e. they can be changed.
L[1] = 'yellow'
lectures/python_basics/lecture.ipynb
softEcon/bootcamp
mit
There is an important distinction between independent copies and references to objects.
# G is a reference to the same list object as L.
G = L
L[1] = 'blue'
L, G

# G is now an independent (shallow) copy of L.
G = L[:]
L[1] = 'yellow'
L, G
lectures/python_basics/lecture.ipynb
softEcon/bootcamp
mit
How to work with lists, or objects more generally?
L.append('pink')
print(L)

L.pop()
print(L)
lectures/python_basics/lecture.ipynb
softEcon/bootcamp
mit
Now we turn to tuples, which are immutable objects.
T = 'white', 'black', 'yellow'
T

# Tuples are immutable, so this assignment raises a TypeError.
T[1] = 'brown'
lectures/python_basics/lecture.ipynb
softEcon/bootcamp
mit
As you asked, here is one way to delete objects.
print(T)
del T

# T no longer exists, so this raises a NameError.
print(T)
lectures/python_basics/lecture.ipynb
softEcon/bootcamp
mit
Now, let us turn to dictionaries, which are tables that map keys to values.
tel = {'Yike': 4546456, 'Philipp': 773456454}
tel

# Dictionaries are keyed, not positional: tel[1] raises a KeyError.
tel[1]

# What is Yike's telephone number?
tel['Yike']

# How do we add Adam?
tel['Adam'] = 7745464
tel

# What keys are defined in our dictionary so far?
tel.keys()

# Yike has a new telephone number.
tel['Yike'] = 77378797
lectures/python_basics/lecture.ipynb
softEcon/bootcamp
mit
Control Flow
a, b = 1, 5

if a == 1:
    print(1)
elif a == 2:
    print(2)
else:
    if b == 5:
        print('A lot')

# Note the indentation.
for key_ in tel.keys():
    print(key_, tel[key_])
lectures/python_basics/lecture.ipynb
softEcon/bootcamp
mit
Functions It is important to distinguish between required, optional, and default arguments.
def return_phone_number(book, name='Philipp'):
    ''' This function returns the telephone number for the requested name.
    '''
    # Check inputs.
    assert (isinstance(name, str)), 'The requested name needs to be a string object.'
    assert (name in ['Yike', 'Philipp', 'Adam'])

    return book[name]

return_phone_number(tel)

return_phone_number(tel, 'Philipp')

return_phone_number(tel, 'Yike')

# 'Peter' is not an allowed name, so this call fails the assertion.
return_phone_number(tel, 'Peter')
lectures/python_basics/lecture.ipynb
softEcon/bootcamp
mit
Next Steps Python Environment Python Science Stacks Quantitative Economics Virtual Machines Data Science Toolbox VM Depot VagrantCloud Dual Boot Mac Windows Miscellaneous Keyboard Shortcuts Markdown Tutorial Formatting
import urllib.request
from IPython.core.display import HTML

HTML(urllib.request.urlopen('http://bit.ly/1Ki3iXw').read().decode('utf-8'))
lectures/python_basics/lecture.ipynb
softEcon/bootcamp
mit
Mean and variance PDF of a Gaussian using ABC The mean As an illustration of Approximate Bayesian Computation (ABC), we will infer first the mean and then the variance of a Gaussian. First we generate the mock data
import numpy

data = numpy.random.normal(size=100)
inference/Gaussian-ABC-Inference.ipynb
jobovy/misc-notebooks
bsd-3-clause
First we assume that we know the variance and constrain the PDF for the mean. Let's write a simple function that samples the PDF using ABC. The sample mean is a sufficient statistic for the mean, so we will use that together with the absolute value of the difference between that and that of the simulated data as the distance function. We use a prior on the mean that is flat between -2 and 2:
def Mean_ABC(n=1000,threshold=0.05): out= [] for ii in range(n): d= threshold+1. while d > threshold: m= numpy.random.uniform()*4-2. sim= numpy.random.normal(size=len(data))+m d= numpy.fabs(numpy.mean(sim)-numpy.mean(data)) out.append(m) return out
inference/Gaussian-ABC-Inference.ipynb
jobovy/misc-notebooks
bsd-3-clause
Now we sample the PDF using ABC:
mean_pdfsamples_abc= Mean_ABC()
inference/Gaussian-ABC-Inference.ipynb
jobovy/misc-notebooks
bsd-3-clause
Let's plot this, as well as the analytical PDF
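With known unit variance and a flat prior on the mean, the analytical posterior being plotted is the standard Gaussian result (stated here without derivation):

```latex
p(\mu \mid x_1,\dots,x_n)
  = \sqrt{\frac{n}{2\pi}}\,
    \exp\!\Big(-\frac{n\,(\mu-\bar{x})^2}{2}\Big)
```

that is, $\mu \sim \mathcal{N}(\bar{x}, 1/n)$, which is exactly the expression passed to `plot`.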
h= hist(mean_pdfsamples_abc,range=[-1.,1.],bins=51,normed=True)
xs= numpy.linspace(-1.,1.,1001)
plot(xs,numpy.sqrt(len(data)/2./numpy.pi)*numpy.exp(-(xs-numpy.mean(data))**2./2.*len(data)),lw=2.)
inference/Gaussian-ABC-Inference.ipynb
jobovy/misc-notebooks
bsd-3-clause
What happens when we make the threshold larger?
mean_pdfsamples_abc= Mean_ABC(threshold=1.)
h= hist(mean_pdfsamples_abc,range=[-1.,1.],bins=51,normed=True)
plot(xs,numpy.sqrt(len(data)/2./numpy.pi)*numpy.exp(-(xs-numpy.mean(data))**2./2.*len(data)),lw=2.)
inference/Gaussian-ABC-Inference.ipynb
jobovy/misc-notebooks
bsd-3-clause
That's not good! What if we make it smaller?
mean_pdfsamples_abc= Mean_ABC(threshold=0.001)
h= hist(mean_pdfsamples_abc,range=[-1.,1.],bins=51,normed=True)
plot(xs,numpy.sqrt(len(data)/2./numpy.pi)*numpy.exp(-(xs-numpy.mean(data))**2./2.*len(data)),lw=2.)
inference/Gaussian-ABC-Inference.ipynb
jobovy/misc-notebooks
bsd-3-clause
This runs very long, because it's difficult to find simulated data sets that are close enough to the true one. The variance Let's now look at the variance, assuming that we know that the mean is zero. The sample variance is a sufficient statistic in this case and we can again write an ABC function, similar to that above:
def Var_ABC(n=1000,threshold=0.05): out= [] for ii in range(n): d= threshold+1. while d > threshold: v= numpy.random.uniform()*4 sim= numpy.random.normal(size=len(data))*numpy.sqrt(v) d= numpy.fabs(numpy.var(sim)-numpy.var(data)) out.append(v) return out
inference/Gaussian-ABC-Inference.ipynb
jobovy/misc-notebooks
bsd-3-clause
We again run this to get the PDF and compare to the analytical one
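For reference, the analytical posterior compared against below follows from a flat prior on the variance $v$ with known zero mean (stated here, not derived):

```latex
p(v \mid x_1,\dots,x_n)
  \propto v^{-n/2}
    \exp\!\Big(-\frac{n\,\overline{x^2}}{2v}\Big),
\qquad
\overline{x^2} = \frac{1}{n}\sum_{i=1}^{n} x_i^2
              = \operatorname{var}(x) + \bar{x}^2
```

This matches the unnormalized `ys` expression in the code, which is then normalized numerically on the grid.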
var_pdfsamples_abc= Var_ABC()
h= hist(var_pdfsamples_abc,range=[0.,2.],bins=51,normed=True)
xs= numpy.linspace(0.001,2.,1001)
ys= xs**(-len(data)/2.)*numpy.exp(-1./xs/2.*len(data)*(numpy.var(data)+numpy.mean(data)**2.))
ys/= numpy.sum(ys)*(xs[1]-xs[0])
plot(xs,ys,lw=2.)
inference/Gaussian-ABC-Inference.ipynb
jobovy/misc-notebooks
bsd-3-clause
What if we make the threshold larger?
var_pdfsamples_abc= Var_ABC(threshold=1.)
h= hist(var_pdfsamples_abc,range=[0.,2.],bins=51,normed=True)
plot(xs,ys,lw=2.)
inference/Gaussian-ABC-Inference.ipynb
jobovy/misc-notebooks
bsd-3-clause
That's not good! What if we make the threshold smaller?
var_pdfsamples_abc= Var_ABC(threshold=0.005)
h= hist(var_pdfsamples_abc,range=[0.,2.],bins=51,normed=True)
plot(xs,ys,lw=2.)
inference/Gaussian-ABC-Inference.ipynb
jobovy/misc-notebooks
bsd-3-clause
Set up your Google Cloud Platform project The following steps are required, regardless of your notebook environment. Select or create a project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the AI Platform APIs and Compute Engine APIs. Enter your project ID and region in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands. Set project ID and authenticate Update your Project ID below. The rest of the notebook will run using these credentials.
PROJECT_ID = "UPDATE TO YOUR PROJECT ID" REGION = "US" DATA_SET_ID = "bqml_kmeans" # Ensure you first create a data set in BigQuery !gcloud config set project $PROJECT_ID # If you have not built the Data Set, the following command will build it for you # !bq mk --location=$REGION --dataset $PROJECT_ID:$DATA_SET_ID
notebooks/community/analytics-componetized-patterns/retail/clustering/bqml/bqml_scaled_clustering.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Import libraries and define constants
import matplotlib.pyplot as plt import numpy as np import pandas as pd import pandas_gbq from google.cloud import bigquery pd.set_option( "display.float_format", lambda x: "%.3f" % x ) # used to display float format client = bigquery.Client(project=PROJECT_ID)
notebooks/community/analytics-componetized-patterns/retail/clustering/bqml/bqml_scaled_clustering.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Data exploration and preparation Prior to building your models, you are typically expected to invest a significant amount of time cleaning, exploring, and aggregating your dataset in a meaningful way for modeling. For the purposes of this demo, we skip this step in order to prioritize showcasing clustering with k-means in BigQuery ML. Building synthetic data Our goal is to use both online (GA360) and offline (CRM) data. You can use your own CRM data; however, since we don't have CRM data to showcase here, we will instead generate synthetic data: estimated household income and gender. To do so, we will hash fullVisitorID and build simple rules based on the last digit of the hash. When you run this process with your own data, you can join CRM data with several dimensions, but this is just an example of what is possible.
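Before the SQL version, here is a minimal Python sketch of the same idea: hash an ID to a stable integer, then derive synthetic attributes from it with modulo rules. (`farm_fingerprint` is BigQuery-specific; `hashlib.sha256` is used as a stand-in here, so the exact assignments will differ from the SQL output.)

```python
import hashlib

def synthetic_attrs(visitor_id: str) -> dict:
    """Derive deterministic pseudo-random CRM attributes from an ID."""
    h = int(hashlib.sha256(visitor_id.encode()).hexdigest(), 16)
    last = h % 10
    if last == 0:
        hhi = 55000
    elif last < 3:
        hhi = 65000
    elif last < 7:
        hhi = 75000
    elif last < 9:
        hhi = 85000
    else:
        hhi = 95000
    return {"gender": "M" if h % 2 == 0 else "F", "hhi": hhi}
```

Because the hash is deterministic, the same visitor always receives the same synthetic attributes, which is what makes the SQL view below reproducible across runs.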
# We start with GA360 data, and will eventually build synthetic CRM as an example. # This block is the first step, just working with GA360 ga360_only_view = "GA360_View" shared_dataset_ref = client.dataset(DATA_SET_ID) ga360_view_ref = shared_dataset_ref.table(ga360_only_view) ga360_view = bigquery.Table(ga360_view_ref) ga360_query = """ SELECT fullVisitorID, ABS(farm_fingerprint(fullVisitorID)) AS Hashed_fullVisitorID, # This will be used to generate random data. MAX(device.operatingSystem) AS OS, # We can aggregate this because an OS is tied to a fullVisitorID. SUM (CASE WHEN REGEXP_EXTRACT (v2ProductCategory, r'^(?:(?:.*?)Home/)(.*?)/') = 'Apparel' THEN 1 ELSE 0 END) AS Apparel, SUM (CASE WHEN REGEXP_EXTRACT (v2ProductCategory, r'^(?:(?:.*?)Home/)(.*?)/') = 'Office' THEN 1 ELSE 0 END) AS Office, SUM (CASE WHEN REGEXP_EXTRACT (v2ProductCategory, r'^(?:(?:.*?)Home/)(.*?)/') = 'Electronics' THEN 1 ELSE 0 END) AS Electronics, SUM (CASE WHEN REGEXP_EXTRACT (v2ProductCategory, r'^(?:(?:.*?)Home/)(.*?)/') = 'Limited Supply' THEN 1 ELSE 0 END) AS LimitedSupply, SUM (CASE WHEN REGEXP_EXTRACT (v2ProductCategory, r'^(?:(?:.*?)Home/)(.*?)/') = 'Accessories' THEN 1 ELSE 0 END) AS Accessories, SUM (CASE WHEN REGEXP_EXTRACT (v2ProductCategory, r'^(?:(?:.*?)Home/)(.*?)/') = 'Shop by Brand' THEN 1 ELSE 0 END) AS ShopByBrand, SUM (CASE WHEN REGEXP_EXTRACT (v2ProductCategory, r'^(?:(?:.*?)Home/)(.*?)/') = 'Bags' THEN 1 ELSE 0 END) AS Bags, ROUND (SUM (productPrice/1000000),2) AS productPrice_USD FROM `bigquery-public-data.google_analytics_sample.ga_sessions_*`, UNNEST(hits) AS hits, UNNEST(hits.product) AS hits_product WHERE _TABLE_SUFFIX BETWEEN '20160801' AND '20160831' AND geoNetwork.country = 'United States' AND type = 'EVENT' GROUP BY 1, 2 """ ga360_view.view_query = ga360_query.format(PROJECT_ID) ga360_view = client.create_table(ga360_view) # API request print(f"Successfully created view at {ga360_view.full_table_id}") # Show a sample of GA360 data ga360_query_df = f""" 
SELECT * FROM {ga360_view.full_table_id.replace(":", ".")} LIMIT 5 """ job_config = bigquery.QueryJobConfig() # Start the query query_job = client.query(ga360_query_df, job_config=job_config) # API Request df_ga360 = query_job.result() df_ga360 = df_ga360.to_dataframe() df_ga360 # Create synthetic CRM data in SQL CRM_only_view = "CRM_View" shared_dataset_ref = client.dataset(DATA_SET_ID) CRM_view_ref = shared_dataset_ref.table(CRM_only_view) CRM_view = bigquery.Table(CRM_view_ref) # Query below works by hashing the fullVisitorID, which creates a random distribution. # We use modulo to artificially split gender and hhi distribution. CRM_query = """ SELECT fullVisitorID, IF (MOD(Hashed_fullVisitorID,2) = 0, "M", "F") AS gender, CASE WHEN MOD(Hashed_fullVisitorID,10) = 0 THEN 55000 WHEN MOD(Hashed_fullVisitorID,10) < 3 THEN 65000 WHEN MOD(Hashed_fullVisitorID,10) < 7 THEN 75000 WHEN MOD(Hashed_fullVisitorID,10) < 9 THEN 85000 WHEN MOD(Hashed_fullVisitorID,10) = 9 THEN 95000 ELSE Hashed_fullVisitorID END AS hhi FROM ( SELECT fullVisitorID, ABS(farm_fingerprint(fullVisitorID)) AS Hashed_fullVisitorID, FROM `bigquery-public-data.google_analytics_sample.ga_sessions_*`, UNNEST(hits) AS hits, UNNEST(hits.product) AS hits_product WHERE _TABLE_SUFFIX BETWEEN '20160801' AND '20160831' AND geoNetwork.country = 'United States' AND type = 'EVENT' GROUP BY 1, 2) """ CRM_view.view_query = CRM_query.format(PROJECT_ID) CRM_view = client.create_table(CRM_view) # API request print(f"Successfully created view at {CRM_view.full_table_id}") # See an output of the synthetic CRM data CRM_query_df = f""" SELECT * FROM {CRM_view.full_table_id.replace(":", ".")} LIMIT 5 """ job_config = bigquery.QueryJobConfig() # Start the query query_job = client.query(CRM_query_df, job_config=job_config) # API Request df_CRM = query_job.result() df_CRM = df_CRM.to_dataframe() df_CRM
notebooks/community/analytics-componetized-patterns/retail/clustering/bqml/bqml_scaled_clustering.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Build a final view to use as training data for clustering You may decide to change the view below based on your specific dataset. This is fine, and is exactly why we're creating a view. All steps subsequent to this will reference this view. If you change the SQL below, you won't need to modify other parts of the notebook.
# Build a final view, which joins GA360 data with CRM data final_data_view = "Final_View" shared_dataset_ref = client.dataset(DATA_SET_ID) final_view_ref = shared_dataset_ref.table(final_data_view) final_view = bigquery.Table(final_view_ref) final_data_query = f""" SELECT g.*, c.* EXCEPT(fullVisitorId) FROM {ga360_view.full_table_id.replace(":", ".")} g JOIN {CRM_view.full_table_id.replace(":", ".")} c ON g.fullVisitorId = c.fullVisitorId """ final_view.view_query = final_data_query.format(PROJECT_ID) final_view = client.create_table(final_view) # API request print(f"Successfully created view at {final_view.full_table_id}") # Show final data used prior to modeling sql_demo = f""" SELECT * FROM {final_view.full_table_id.replace(":", ".")} LIMIT 5 """ job_config = bigquery.QueryJobConfig() # Start the query query_job = client.query(sql_demo, job_config=job_config) # API Request df_demo = query_job.result() df_demo = df_demo.to_dataframe() df_demo
notebooks/community/analytics-componetized-patterns/retail/clustering/bqml/bqml_scaled_clustering.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Create our initial model In this section, we will build our initial k-means model. We won't focus on the optimal k or other hyperparameters just yet. Some additional points: We remove fullVisitorID as an input, even though the data is grouped at that level, because we don't need fullVisitorID as a feature for clustering. fullVisitorID should never be used as a feature. We have both categorical and numerical features. We do not have to normalize any numerical features, as BigQuery ML will automatically do this for us. Build a function to build our model We will build a simple Python function to build our model, rather than doing everything in SQL. This approach means we can asynchronously start several models and let BigQuery train them in parallel.
def makeModel(n_Clusters, Model_Name): sql = f""" CREATE OR REPLACE MODEL `{PROJECT_ID}.{DATA_SET_ID}.{Model_Name}` OPTIONS(model_type='kmeans', kmeans_init_method = 'KMEANS++', num_clusters={n_Clusters}) AS SELECT * except(fullVisitorID, Hashed_fullVisitorID) FROM `{final_view.full_table_id.replace(":", ".")}` """ job_config = bigquery.QueryJobConfig() client.query(sql, job_config=job_config) # Make an API request. # Let's start with a simple test to ensure everything works. # After running makeModel(), allow a few minutes for training to complete. model_test_name = "test" makeModel(3, model_test_name) # After training is completed, you can either check in the UI, or you can interact with it using list_models(). for model in client.list_models(DATA_SET_ID): print(model)
notebooks/community/analytics-componetized-patterns/retail/clustering/bqml/bqml_scaled_clustering.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Work towards creating a better model In this section, we want to determine the proper k value. Determining the right value of k depends completely on the use case. There are straightforward examples that will simply tell you how many clusters are needed. Suppose you are pre-processing hand-written digits - this tells us k should be 10. Or perhaps your business stakeholder only wants to deliver three different marketing campaigns and needs you to identify three clusters of customers; then setting k=3 might be meaningful. However, the use case is sometimes more open-ended and you may want to explore different numbers of clusters to see how your data points group together with the minimal error within each cluster. To accomplish this process, we start by performing the 'Elbow Method', which simply charts loss vs. k. Then, we'll also use the Davies-Bouldin score. (https://en.wikipedia.org/wiki/Davies%E2%80%93Bouldin_index) Below we are going to create several models to perform both the Elbow Method and get the Davies-Bouldin score. You may change parameters like low_k and high_k. Our process will create models between these two values. There is an additional parameter called model_prefix_name. We recommend you leave this as its current value. It is used to generate a naming convention for our models.
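As a purely local illustration of the Davies-Bouldin score (using scikit-learn on toy data, not the BigQuery ML models built below), the sketch clusters three synthetic blobs and scores each candidate k; lower is better:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.RandomState(0)
# Three well-separated synthetic blobs in 2D.
X = np.vstack([rng.randn(50, 2) + c for c in ([0, 0], [5, 5], [0, 5])])

scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = davies_bouldin_score(X, labels)  # lower = better separated
```

On this toy data the score should bottom out at k = 3, the true number of blobs; on real data, as discussed below, the curve is rarely this clean.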
# Define upper and lower bound for k, then build individual models for each. # After running this loop, look at the UI to see several model objects that exist. low_k = 3 high_k = 15 model_prefix_name = "kmeans_clusters_" lst = list(range(low_k, high_k + 1)) # build list to iterate through k values for k in lst: model_name = model_prefix_name + str(k) makeModel(k, model_name) print(f"Model started: {model_name}")
notebooks/community/analytics-componetized-patterns/retail/clustering/bqml/bqml_scaled_clustering.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Select optimal k
# list all current models models = client.list_models(DATA_SET_ID) # Make an API request. print("Listing current models:") for model in models: full_model_id = f"{model.dataset_id}.{model.model_id}" print(full_model_id) # Remove our sample model from BigQuery, so we only have remaining models from our previous loop model_id = DATA_SET_ID + "." + model_test_name client.delete_model(model_id) # Make an API request. print(f"Deleted model '{model_id}'") # This will create a dataframe with each model name, the Davies Bouldin Index, and Loss. # It will be used for the elbow method and to help determine optimal K df = pd.DataFrame(columns=["davies_bouldin_index", "mean_squared_distance"]) models = client.list_models(DATA_SET_ID) # Make an API request. for model in models: full_model_id = f"{model.dataset_id}.{model.model_id}" sql = f""" SELECT davies_bouldin_index, mean_squared_distance FROM ML.EVALUATE(MODEL `{full_model_id}`) """ job_config = bigquery.QueryJobConfig() # Start the query, passing in the extra configuration. query_job = client.query(sql, job_config=job_config) # Make an API request. df_temp = query_job.to_dataframe() # Wait for the job to complete. df_temp["model_name"] = model.model_id df = pd.concat([df, df_temp], axis=0)
notebooks/community/analytics-componetized-patterns/retail/clustering/bqml/bqml_scaled_clustering.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
The code below assumes we've used the naming convention originally created in this notebook, and the k value occurs after the 2nd underscore. If you've changed the model_prefix_name variable, then this code might break.
# This will modify the dataframe above, produce a new field with 'n_clusters', and will sort for graphing df["n_clusters"] = df["model_name"].str.split("_").map(lambda x: x[2]) df["n_clusters"] = df["n_clusters"].apply(pd.to_numeric) df = df.sort_values(by="n_clusters", ascending=True) df df.plot.line(x="n_clusters", y=["davies_bouldin_index", "mean_squared_distance"])
notebooks/community/analytics-componetized-patterns/retail/clustering/bqml/bqml_scaled_clustering.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Note - when you run this notebook, you will get different results, due to random cluster initialization. If you'd like to consistently return the same clusters for each run, you may explicitly select your initialization through hyperparameter selection (https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-create#kmeans_init_method). Making our k selection: There is no perfect approach or process when determining the optimal k value. It can often be determined by business rules or requirements. In this example, there isn't a simple requirement, so these considerations can also be followed: We start with the 'elbow method', which is effectively charting loss vs. k. Sometimes, though not always, there's a natural 'elbow' where incremental clusters do not drastically reduce loss. In this specific example, and as you often might find, unfortunately there isn't a natural 'elbow', so we must continue our process. Next, we chart Davies-Bouldin vs. k. This score tells us how 'different' each cluster is, with the optimal score at zero. With 5 clusters, we see a score of ~1.4, and only with k>9 do we see better values. Finally, we begin to try to interpret the differences between the models. You can review the evaluation module for various models to understand distributions of our features. With our data, we can look for patterns by gender, household income, and shopping habits. Analyze our final cluster There are 2 options to understand the characteristics of your model. You can either 1) look in the BigQuery UI, or you can 2) programmatically interact with your model object. Below you’ll find a simple example for the latter option.
model_to_use = "kmeans_clusters_5" # User can edit this final_model = DATA_SET_ID + "." + model_to_use sql_get_attributes = f""" SELECT centroid_id, feature, categorical_value FROM ML.CENTROIDS(MODEL {final_model}) WHERE feature IN ('OS','gender') """ job_config = bigquery.QueryJobConfig() # Start the query query_job = client.query(sql_get_attributes, job_config=job_config) # API Request df_attributes = query_job.result() df_attributes = df_attributes.to_dataframe() df_attributes.head() # get numerical information about clusters sql_get_numerical_attributes = f""" WITH T AS ( SELECT centroid_id, ARRAY_AGG(STRUCT(feature AS name, ROUND(numerical_value,1) AS value) ORDER BY centroid_id) AS cluster FROM ML.CENTROIDS(MODEL {final_model}) GROUP BY centroid_id ), Users AS( SELECT centroid_id, COUNT(*) AS Total_Users FROM( SELECT * EXCEPT(nearest_centroids_distance) FROM ML.PREDICT(MODEL {final_model}, ( SELECT * FROM {final_view.full_table_id.replace(":", ".")} ))) GROUP BY centroid_id ) SELECT centroid_id, Total_Users, (SELECT value from unnest(cluster) WHERE name = 'Apparel') AS Apparel, (SELECT value from unnest(cluster) WHERE name = 'Office') AS Office, (SELECT value from unnest(cluster) WHERE name = 'Electronics') AS Electronics, (SELECT value from unnest(cluster) WHERE name = 'LimitedSupply') AS LimitedSupply, (SELECT value from unnest(cluster) WHERE name = 'Accessories') AS Accessories, (SELECT value from unnest(cluster) WHERE name = 'ShopByBrand') AS ShopByBrand, (SELECT value from unnest(cluster) WHERE name = 'Bags') AS Bags, (SELECT value from unnest(cluster) WHERE name = 'productPrice_USD') AS productPrice_USD, (SELECT value from unnest(cluster) WHERE name = 'hhi') AS hhi FROM T LEFT JOIN Users USING(centroid_id) ORDER BY centroid_id ASC """ job_config = bigquery.QueryJobConfig() # Start the query query_job = client.query( sql_get_numerical_attributes, job_config=job_config ) # API Request df_numerical_attributes = query_job.result() df_numerical_attributes = 
df_numerical_attributes.to_dataframe() df_numerical_attributes.head()
notebooks/community/analytics-componetized-patterns/retail/clustering/bqml/bqml_scaled_clustering.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
In addition to the output above, I'll note a few insights we get from our clusters. Cluster 1 - The apparel shopper, which also purchases more often than normal. This (although synthetic data) segment skews female. Cluster 2 - Most likely to shop by brand, and interested in bags. This segment has fewer purchases on average than the first cluster; however, this is the highest value customer. Cluster 3 - The most populated cluster, this one has a small number of purchases and spends less on average. This segment is the one-time buyer, rather than the brand loyalist. Cluster 4 - Most interested in accessories, does not buy as often as clusters 1 and 2, however buys more than cluster 3. Cluster 5 - This is an outlier, as only 1 person belongs to this group. Use the model to group new website behavior, and then push results to GA360 for marketing activation After we have a finalized model, we want to use it for inference. The code below outlines how to score or assign users into clusters. These are labeled as the CENTROID_ID. Although this by itself is helpful, we also recommend a process to ingest these scores back into GA360. The easiest way to export your BigQuery ML predictions from a BigQuery table to Google Analytics 360 is to use the MoDeM (Model Deployment for Marketing, https://github.com/google/modem) reference implementation. MoDeM helps you load data into Google Analytics for eventual activation in Google Ads, Display & Video 360 and Search Ads 360.
sql_score = f""" SELECT * EXCEPT(nearest_centroids_distance) FROM ML.PREDICT(MODEL {final_model}, ( SELECT * FROM {final_view.full_table_id.replace(":", ".")} LIMIT 1)) """ job_config = bigquery.QueryJobConfig() # Start the query query_job = client.query(sql_score, job_config=job_config) # API Request df_score = query_job.result() df_score = df_score.to_dataframe() df_score
notebooks/community/analytics-componetized-patterns/retail/clustering/bqml/bqml_scaled_clustering.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Load the census data
X,y = shap.datasets.adult() X["Occupation"] *= 1000 # to show the impact of feature scale on KNN predictions X_display,y_display = shap.datasets.adult(display=True) X_train, X_valid, y_train, y_valid = sklearn.model_selection.train_test_split(X, y, test_size=0.2, random_state=7)
notebooks/tabular_examples/model_agnostic/Census income classification with scikit-learn.ipynb
slundberg/shap
mit
Train a k-nearest neighbors classifier Here we just train directly on the data, without any normalizations.
knn = sklearn.neighbors.KNeighborsClassifier()
knn.fit(X_train, y_train)
notebooks/tabular_examples/model_agnostic/Census income classification with scikit-learn.ipynb
slundberg/shap
mit
Explain predictions Normally we would use a logit link function to allow the additive feature inputs to better map to the model's probabilistic output space, but KNNs can produce infinite log-odds ratios, so we don't for this example. It is important to note that Occupation is the dominant feature in the 1000 predictions we explain. This is because it has larger variations in value than the other features and so it impacts the k-nearest-neighbor calculations more.
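A hypothetical two-feature sketch of why an unscaled, large-magnitude feature dominates Euclidean k-NN distances (the numbers are made up, not drawn from the census data):

```python
import numpy as np

a = np.array([1000.0, 0.0])    # [occupation_code, education_years] (toy values)
b = np.array([2000.0, 10.0])

# Raw distance: the first feature contributes almost everything.
raw = np.linalg.norm(a - b)               # sqrt(1000^2 + 10^2) ~ 1000.05

# After dividing by an assumed per-feature spread, both features
# contribute equally to the distance.
stds = np.array([1000.0, 10.0])
scaled = np.linalg.norm((a - b) / stds)   # sqrt(1 + 1)
```

This is the same effect that standardizing the data removes in the retrained model later in the notebook.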
f = lambda x: knn.predict_proba(x)[:, 1]
med = X_train.median().values.reshape((1, X_train.shape[1]))
explainer = shap.Explainer(f, med)
shap_values = explainer(X_valid.iloc[0:1000, :])

shap.plots.waterfall(shap_values[0])
notebooks/tabular_examples/model_agnostic/Census income classification with scikit-learn.ipynb
slundberg/shap
mit
A summary beeswarm plot is an even better way to see the relative impact of all features over the entire dataset. Features are sorted by the sum of their SHAP value magnitudes across all samples.
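The feature ordering used by the beeswarm can be reproduced by hand on a toy SHAP-value matrix (rows = samples, columns = features; illustrative numbers, not real output from the model above):

```python
import numpy as np

toy_shap = np.array([[ 0.5, -0.1,  0.0],
                     [-0.4,  0.2,  0.1]])

importance = np.abs(toy_shap).sum(axis=0)  # sum of |SHAP| per feature
order = np.argsort(importance)[::-1]       # most important feature first
```

Here `importance` is `[0.9, 0.3, 0.1]`, so the first column would be plotted at the top.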
shap.plots.beeswarm(shap_values)
notebooks/tabular_examples/model_agnostic/Census income classification with scikit-learn.ipynb
slundberg/shap
mit
A heatmap plot provides another global view of the model's behavior, this time with a focus on population subgroups.
shap.plots.heatmap(shap_values)
notebooks/tabular_examples/model_agnostic/Census income classification with scikit-learn.ipynb
slundberg/shap
mit
Normalize the data before training the model Here we retrain a KNN model on standardized data.
# normalize data dtypes = list(zip(X.dtypes.index, map(str, X.dtypes))) X_train_norm = X_train.copy() X_valid_norm = X_valid.copy() for k,dtype in dtypes: m = X_train[k].mean() s = X_train[k].std() X_train_norm[k] -= m X_train_norm[k] /= s X_valid_norm[k] -= m X_valid_norm[k] /= s knn_norm = sklearn.neighbors.KNeighborsClassifier() knn_norm.fit(X_train_norm, y_train)
notebooks/tabular_examples/model_agnostic/Census income classification with scikit-learn.ipynb
slundberg/shap
mit
Explain predictions When we explain predictions from the new KNN model, we find that Occupation is no longer the dominant feature; instead, more predictive features, such as marital status, drive most predictions. This is a simple example of how explaining why your model makes its predictions can uncover problems in the training process.
f = lambda x: knn_norm.predict_proba(x)[:,1] med = X_train_norm.median().values.reshape((1,X_train_norm.shape[1])) explainer = shap.Explainer(f, med) shap_values_norm = explainer(X_valid_norm.iloc[0:1000,:])
notebooks/tabular_examples/model_agnostic/Census income classification with scikit-learn.ipynb
slundberg/shap
mit
With a summary plot we see that marital status is the most important feature on average, but other features (such as capital gain) can have more impact on a particular individual.
shap.summary_plot(shap_values_norm, X_valid.iloc[0:1000,:])
notebooks/tabular_examples/model_agnostic/Census income classification with scikit-learn.ipynb
slundberg/shap
mit
A dependence scatter plot shows how the number of years of education increases the chance of making over 50K annually.
shap.plots.scatter(shap_values_norm[:,"Education-Num"])
notebooks/tabular_examples/model_agnostic/Census income classification with scikit-learn.ipynb
slundberg/shap
mit
Negative Neurons - Feature Visualization This notebook uses Lucid to reproduce the results in Feature Visualization. This notebook doesn't introduce the abstractions behind lucid; you may wish to also read the Lucid tutorial. Note: The easiest way to use this tutorial is as a colab notebook, which allows you to dive in with no setup. We recommend you enable a free GPU by going: Runtime   →   Change runtime type   →   Hardware Accelerator: GPU Install, Import, Load Model
!pip install --quiet lucid==0.0.5 import numpy as np import scipy.ndimage as nd import tensorflow as tf import lucid.modelzoo.vision_models as models from lucid.misc.io import show import lucid.optvis.objectives as objectives import lucid.optvis.param as param import lucid.optvis.render as render import lucid.optvis.transform as transform # Let's import a model from the Lucid modelzoo! model = models.InceptionV1() model.load_graphdef()
notebooks/feature-visualization/negative_neurons.ipynb
tensorflow/lucid
apache-2.0
Negative Channel Visualizations <img src="https://storage.googleapis.com/lucid-static/feature-visualization/4.png" width="800"></img> Unfortunately, constraints on ImageNet mean we can't provide an easy way for you to reproduce the dataset examples. However, we can reproduce the positive / negative optimized visualizations:
param_f = lambda: param.image(128, batch=2) obj = objectives.channel("mixed4a_pre_relu", 492, batch=1) - objectives.channel("mixed4a_pre_relu", 492, batch=0) _ = render.render_vis(model, obj, param_f)
notebooks/feature-visualization/negative_neurons.ipynb
tensorflow/lucid
apache-2.0
Here, each component of the values tensor has one more sample point in the direction it is facing. If the extrapolation was extrapolation.ZERO, it would be one less (see above image). Creating Staggered Grids The StaggeredGrid constructor supports two modes: Direct construction StaggeredGrid(values: Tensor, extrapolation, bounds). All required fields are passed as arguments and stored as-is. The values tensor must have the correct shape considering the extrapolation. Construction by resampling StaggeredGrid(values: Any, extrapolation, bounds, resolution, **resolution). When specifying the resolution as a Shape or via keyword arguments, non-Tensor values can be passed for values, such as geometries, other fields, constants or functions (see the documentation). Examples:
domain = dict(x=10, y=10, bounds=Box(x=1, y=1), extrapolation=extrapolation.ZERO) grid = StaggeredGrid((1, -1), **domain) # from constant vector grid = StaggeredGrid(Noise(), **domain) # sample analytic field grid = StaggeredGrid(grid, **domain) # resample existing field grid = StaggeredGrid(lambda x: math.exp(-x), **domain) # function value(location) grid = StaggeredGrid(Sphere([0, 0], radius=1), **domain) # no anti-aliasing grid = StaggeredGrid(SoftGeometryMask(Sphere([0, 0], radius=1)), **domain) # with anti-aliasing
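The shape bookkeeping can be illustrated without PhiFlow, using a plain NumPy sketch of the classic MAC/staggered layout (a generic illustration, not PhiFlow's internal representation): for a resolution of (nx, ny) cells, each velocity component carries one extra sample along its own axis, and averaging adjacent faces recovers cell-centered values, which is the idea behind at_centers().

```python
import numpy as np

def staggered_shapes(nx, ny):
    """Face-centered array shapes for an (nx, ny) cell grid (MAC layout)."""
    u_shape = (nx + 1, ny)  # x-component: one extra sample along x
    v_shape = (nx, ny + 1)  # y-component: one extra sample along y
    return u_shape, v_shape

u_shape, v_shape = staggered_shapes(10, 10)
u = np.zeros(u_shape)
v = np.zeros(v_shape)

# averaging adjacent x-faces gives cell-centered x-velocity values
u_centered = 0.5 * (u[:-1, :] + u[1:, :])

print(u.shape, v.shape, u_centered.shape)  # (11, 10) (10, 11) (10, 10)
```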
docs/Staggered_Grids.ipynb
tum-pbs/PhiFlow
mit
Staggered grids can also be created from other fields using field.at() or @ by passing an existing StaggeredGrid. Some field functions also return StaggeredGrids: spatial_gradient() with type=StaggeredGrid stagger() Values Tensor For non-periodic staggered grids, the values tensor is non-uniform to reflect the different number of sample points for each component. Functions to get a uniform tensor: at_centers() interpolates the staggered values to the cell centers and returns a CenteredGrid staggered_tensor() pads the internal tensor to an invariant shape with n+1 entries along all dimensions. Slicing Like tensors, grids can be sliced using the standard syntax. When selecting a vector component, such as x or y, the result is represented as a CenteredGrid with shifted locations.
grid.vector['x'] # select component
docs/Staggered_Grids.ipynb
tum-pbs/PhiFlow
mit
Grids do not support slicing along spatial dimensions because the result would be ambiguous with StaggeredGrids. Instead, slice the values directly.
grid.values.x[3:4] # spatial slice grid.values.x[0] # spatial slice
docs/Staggered_Grids.ipynb
tum-pbs/PhiFlow
mit
Slicing along batch dimensions has no special effect; it just slices the values.
grid.batch[0] # batch slice
docs/Staggered_Grids.ipynb
tum-pbs/PhiFlow
mit
H &rarr; ZZ* &rarr; 4$\mu$ - cuts and plot, using Monte Carlo signal data (this is a step of the broader analysis)
# Start the Spark Session # This uses local mode for simplicity # the use of findspark is optional import findspark findspark.init("/home/luca/Spark/spark-3.3.0-bin-hadoop3") from pyspark.sql import SparkSession spark = (SparkSession.builder .appName("H_ZZ_4Lep") .master("local[*]") .config("spark.driver.memory", "8g") .config("spark.sql.parquet.enableNestedColumnVectorizedReader", "true") .getOrCreate() ) # Read data with the candidate events # Only Muon events for this reduced-scope notebook path = "./" df_MC_events_signal = spark.read.parquet(path + "SMHiggsToZZTo4L.parquet") df_MC_events_signal.printSchema() # Count the number of events before cuts (filter) print(f"Number of events, MC signal: {df_MC_events_signal.count()}")
Spark_Physics/CMS_Higgs_opendata/H_ZZ_4l_analysis_basic_monte_carlo_signal.ipynb
LucaCanali/Miscellaneous
apache-2.0
Apply cuts More details on the cuts (filters applied to the event data) can be found in the reference CMS paper on the discovery of the Higgs boson
df_events = df_MC_events_signal.selectExpr("""arrays_zip(Muon_charge, Muon_mass, Muon_pt, Muon_phi, Muon_eta, Muon_dxy, Muon_dz, Muon_dxyErr, Muon_dzErr, Muon_pfRelIso04_all) Muon""", "nMuon")
df_events.printSchema()

# Apply filters to the input data

# Keep only events with at least 4 muons
df_events = df_events.filter("nMuon >= 4")

# Filter Muon arrays
# Filters are detailed in the CMS Higgs boson paper
# See notebook with RDataFrame implementation at https://root.cern/doc/master/df103__NanoAODHiggsAnalysis_8py.html
# Article on the CMS Higgs boson discovery: https://inspirehep.net/record/1124338
df_events_filtered = df_events.selectExpr("""filter(Muon, m ->
             abs(m.Muon_pfRelIso04_all) < 0.40 -- Require good isolation
             and m.Muon_pt > 5 -- Good muon kinematics
             and abs(m.Muon_eta) < 2.4
             -- Track close to primary vertex with small uncertainty
             and (m.Muon_dxy * m.Muon_dxy + m.Muon_dz * m.Muon_dz) / (m.Muon_dxyErr * m.Muon_dxyErr + m.Muon_dzErr*m.Muon_dzErr) < 16
             and abs(m.Muon_dxy) < 0.5
             and abs(m.Muon_dz) < 1.0
             ) as Muon""")

# only events with exactly 4 Muons left after the previous cuts
df_events_filtered = df_events_filtered.filter("size(Muon) == 4")

# cut on lepton charge
# paper: "selecting two pairs of isolated leptons, each of which is comprised of two leptons with the same flavour and opposite charge"
df_events_4muons = df_events_filtered.filter("Muon.Muon_charge[0] + Muon.Muon_charge[1] + Muon.Muon_charge[2] + Muon.Muon_charge[3] == 0")

print(f"Number of events after applying cuts: {df_events_4muons.count()}")
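Before running such cuts at scale in Spark, the same per-muon filters can be prototyped with pandas; the sketch below applies the isolation, kinematic and vertex-significance cuts to a hand-made DataFrame (all values and the short column names are made up for illustration):

```python
import pandas as pd

# one row per muon, synthetic values (not from the dataset)
muons = pd.DataFrame({
    "pfRelIso04_all": [0.1, 0.6, 0.2],
    "pt":             [20.0, 30.0, 3.0],
    "eta":            [1.0, 2.0, 0.5],
    "dxy":            [0.01, 0.02, 0.01],
    "dz":             [0.05, 0.10, 0.02],
    "dxyErr":         [0.01, 0.01, 0.01],
    "dzErr":          [0.02, 0.02, 0.02],
})

# vertex significance, as in the Spark SQL expression above
sig2 = (muons.dxy**2 + muons.dz**2) / (muons.dxyErr**2 + muons.dzErr**2)

good = muons[
    (muons.pfRelIso04_all.abs() < 0.40)   # good isolation
    & (muons.pt > 5)                      # good muon kinematics
    & (muons.eta.abs() < 2.4)
    & (sig2 < 16)                         # track close to primary vertex
    & (muons.dxy.abs() < 0.5)
    & (muons.dz.abs() < 1.0)
]
print(len(good))  # only the first muon passes all cuts
```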
Spark_Physics/CMS_Higgs_opendata/H_ZZ_4l_analysis_basic_monte_carlo_signal.ipynb
LucaCanali/Miscellaneous
apache-2.0
Compute the invariant mass This computes the 4-vectors sum for the 4-lepton system using formulas from special relativity. See also http://edu.itp.phys.ethz.ch/hs10/ppp1/2010_11_02.pdf and https://en.wikipedia.org/wiki/Invariant_mass
# This computes the 4-vectors sum for the 4-muon system
# convert to cartesian coordinates
df_4lep = df_events_4muons.selectExpr(
    "Muon.Muon_pt[0] * cos(Muon.Muon_phi[0]) P0x",
    "Muon.Muon_pt[1] * cos(Muon.Muon_phi[1]) P1x",
    "Muon.Muon_pt[2] * cos(Muon.Muon_phi[2]) P2x",
    "Muon.Muon_pt[3] * cos(Muon.Muon_phi[3]) P3x",
    "Muon.Muon_pt[0] * sin(Muon.Muon_phi[0]) P0y",
    "Muon.Muon_pt[1] * sin(Muon.Muon_phi[1]) P1y",
    "Muon.Muon_pt[2] * sin(Muon.Muon_phi[2]) P2y",
    "Muon.Muon_pt[3] * sin(Muon.Muon_phi[3]) P3y",
    "Muon.Muon_pt[0] * sinh(Muon.Muon_eta[0]) P0z",
    "Muon.Muon_pt[1] * sinh(Muon.Muon_eta[1]) P1z",
    "Muon.Muon_pt[2] * sinh(Muon.Muon_eta[2]) P2z",
    "Muon.Muon_pt[3] * sinh(Muon.Muon_eta[3]) P3z",
    "Muon.Muon_mass[0] as Mass"
)

# compute energy for each muon
df_4lep = df_4lep.selectExpr(
    "P0x", "P0y", "P0z", "sqrt(Mass* Mass + P0x*P0x + P0y*P0y + P0z*P0z) as E0",
    "P1x", "P1y", "P1z", "sqrt(Mass* Mass + P1x*P1x + P1y*P1y + P1z*P1z) as E1",
    "P2x", "P2y", "P2z", "sqrt(Mass* Mass + P2x*P2x + P2y*P2y + P2z*P2z) as E2",
    "P3x", "P3y", "P3z", "sqrt(Mass* Mass + P3x*P3x + P3y*P3y + P3z*P3z) as E3"
)

# sum energy and momenta over the 4 muons
df_4lep = df_4lep.selectExpr(
    "P0x + P1x + P2x + P3x as Px",
    "P0y + P1y + P2y + P3y as Py",
    "P0z + P1z + P2z + P3z as Pz",
    "E0 + E1 + E2 + E3 as E"
)

df_4lep.show(5)

# This computes the invariant mass for the 4-muon system
df_4lep_invmass = df_4lep.selectExpr("sqrt(E * E - ( Px * Px + Py * Py + Pz * Pz)) as invmass_GeV")

df_4lep_invmass.show(5)

# This defines the DataFrame transformation to compute the histogram of the invariant mass
# The result is a histogram with (energy) bin values and event counts for each bin
# Requires sparkhistogram
# See https://github.com/LucaCanali/Miscellaneous/blob/master/Spark_Notes/Spark_DataFrame_Histograms.md
from sparkhistogram import computeHistogram

# histogram parameters
min_val = 80
max_val = 250
step = 3.0
num_bins = (max_val - min_val) / step

# use the helper function computeHistogram from the sparkhistogram package
histogram_data = computeHistogram(df_4lep_invmass, "invmass_GeV", min_val, max_val, num_bins)

# The action toPandas() here triggers the computation.
# Histogram data is fetched into the driver as a Pandas DataFrame.
%time histogram_data_pandas=histogram_data.toPandas()

# This plots the data histogram with error bars
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})

f, ax = plt.subplots()
x = histogram_data_pandas["value"]
y = histogram_data_pandas["count"]

# scatter plot
#ax.plot(x, y, marker='o', color='red', linewidth=0)
#ax.errorbar(x, y, err, fmt = 'ro')

# histogram with error bars
ax.bar(x, y, width = 5.0, capsize = 5, linewidth = 0.5, ecolor='blue', fill=True)

ax.set_xlim(min_val-2, max_val)
ax.set_xlabel("$m_{4\mu}$ (GeV)")
ax.set_ylabel(f"Number of Events / bucket_size = {step} GeV")
ax.set_title("Distribution of the 4-Muon Invariant Mass")

# Label for the Z and Higgs spectrum peaks
txt_opts = {'horizontalalignment': 'left',
            'verticalalignment': 'center',
            'transform': ax.transAxes}
plt.text(0.48, 0.71, "Higgs boson, mass = 125 GeV", **txt_opts)

# Add energy and luminosity
plt.text(0.60, 0.92, "CMS open data, for education", **txt_opts)
plt.text(0.60, 0.87, '$\sqrt{s}$=13 TeV, Monte Carlo data', **txt_opts)

plt.show()

spark.stop()
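The four-vector arithmetic can be cross-checked with a small NumPy version of the same formula, $M = \sqrt{E^2 - |\vec p|^2}$ with $E_i = \sqrt{m_i^2 + |\vec p_i|^2}$ per muon (the kinematics below are synthetic, not taken from the dataset):

```python
import numpy as np

def invariant_mass(pt, eta, phi, mass):
    """Invariant mass of a system of particles given (pt, eta, phi, m) arrays."""
    pt, eta, phi, mass = map(np.asarray, (pt, eta, phi, mass))
    px, py, pz = pt * np.cos(phi), pt * np.sin(phi), pt * np.sinh(eta)
    e = np.sqrt(mass**2 + px**2 + py**2 + pz**2)
    return np.sqrt(e.sum()**2 - (px.sum()**2 + py.sum()**2 + pz.sum()**2))

# two back-to-back muons with |p| = 50 GeV each: total momentum is zero,
# so the pair mass equals the total energy, just above 100 GeV
m_mu = 0.10566  # muon mass in GeV
m = invariant_mass([50, 50], [0, 0], [0, np.pi], [m_mu, m_mu])
print(round(float(m), 3))  # ~100.0
```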
Spark_Physics/CMS_Higgs_opendata/H_ZZ_4l_analysis_basic_monte_carlo_signal.ipynb
LucaCanali/Miscellaneous
apache-2.0
Motor imagery decoding from EEG data using the Common Spatial Pattern (CSP) Decoding of motor imagery applied to EEG data decomposed using CSP. A classifier is then applied to features extracted on CSP-filtered signals. See https://en.wikipedia.org/wiki/Common_spatial_pattern and [1]. The EEGBCI dataset is documented in [2]. The data set is available at PhysioNet [3]_. References .. [1] Zoltan J. Koles. The quantitative extraction and topographic mapping of the abnormal components in the clinical EEG. Electroencephalography and Clinical Neurophysiology, 79(6):440--447, December 1991. .. [2] Schalk, G., McFarland, D.J., Hinterberger, T., Birbaumer, N., Wolpaw, J.R. (2004) BCI2000: A General-Purpose Brain-Computer Interface (BCI) System. IEEE TBME 51(6):1034-1043. .. [3] Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PCh, Mark RG, Mietus JE, Moody GB, Peng C-K, Stanley HE. (2000) PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. Circulation 101(23):e215-e220.
# Authors: Martin Billinger <martin.billinger@tugraz.at> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt from sklearn.pipeline import Pipeline from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.model_selection import ShuffleSplit, cross_val_score from mne import Epochs, pick_types, events_from_annotations from mne.channels import make_standard_montage from mne.io import concatenate_raws, read_raw_edf from mne.datasets import eegbci from mne.decoding import CSP print(__doc__) # ############################################################################# # # Set parameters and read data # avoid classification of evoked responses by using epochs that start 1s after # cue onset. tmin, tmax = -1., 4. event_id = dict(hands=2, feet=3) subject = 1 runs = [6, 10, 14] # motor imagery: hands vs feet raw_fnames = eegbci.load_data(subject, runs) raw = concatenate_raws([read_raw_edf(f, preload=True) for f in raw_fnames]) eegbci.standardize(raw) # set channel names montage = make_standard_montage('standard_1005') raw.set_montage(montage) # strip channel names of "." characters raw.rename_channels(lambda x: x.strip('.')) # Apply band-pass filter raw.filter(7., 30., fir_design='firwin', skip_by_annotation='edge') events, _ = events_from_annotations(raw, event_id=dict(T1=2, T2=3)) picks = pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False, exclude='bads') # Read epochs (train will be done only between 1 and 2s) # Testing will be done with a running classifier epochs = Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks, baseline=None, preload=True) epochs_train = epochs.copy().crop(tmin=1., tmax=2.) labels = epochs.events[:, -1] - 2
0.21/_downloads/a4d4c1a667c2374c09eed24ac047d840/plot_decoding_csp_eeg.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Suppress wikipedia package warnings.
import warnings warnings.filterwarnings('ignore')
Task 3 - Text Mining/task3.ipynb
ggljzr/mi-ddw
mit
Helper functions to process the output of nltk.ne_chunk and to count the frequency of named entities in a given text.
def count_entites(entity, text): s = entity if type(entity) is tuple: s = entity[0] return len(re.findall(s, text)) def get_top_n(entities, text, n): a = [ (e, count_entites(e, text)) for e in entities] a.sort(key=lambda x: x[1], reverse=True) return a[0:n] # For a list of entities found by nltk.ne_chunks: # returns (entity, label) if it is a single word or # concatenates multiple word named entities into single string def get_entity(entity): if isinstance(entity, tuple) and entity[1][:2] == 'NE': return entity if isinstance(entity, nltk.tree.Tree): text = ' '.join([word for word, tag in entity.leaves()]) return (text, entity.label()) return None
Task 3 - Text Mining/task3.ipynb
ggljzr/mi-ddw
mit
Since nltk.ne_chunk tends to put the same named entities into multiple classes (like 'American' : 'ORGANIZATION' and 'American' : 'GPE'), we want to filter out these duplicates.
# returns list of named entities in a form [(entity_text, entity_label), ...] def extract_entities(chunk): data = [] for entity in chunk: d = get_entity(entity) if d is not None and d[0] not in [e[0] for e in data]: data.append(d) return data
Task 3 - Text Mining/task3.ipynb
ggljzr/mi-ddw
mit
Our custom NER function, based on the example here.
def custom_NER(tagged): entities = [] entity = [] for word in tagged: if word[1][:2] == 'NN' or (entity and word[1][:2] == 'IN'): entity.append(word) else: if entity and entity[-1][1].startswith('IN'): entity.pop() if entity: s = ' '.join(e[0] for e in entity) if s not in entities and s[0].isupper() and len(s) > 1: entities.append(s) entity = [] return entities
Task 3 - Text Mining/task3.ipynb
ggljzr/mi-ddw
mit
Loading the processed article (approximately 500 sentences). The regex substitution removes reference links (e.g. [12]).
text = None with open('text', 'r') as f: text = f.read() text = re.sub(r'\[[0-9]*\]', '', text)
Task 3 - Text Mining/task3.ipynb
ggljzr/mi-ddw
mit
Now we try to recognize entities with both nltk.ne_chunk and our custom_NER function and print the 20 most frequent entities. The results seem fairly similar. The nltk.ne_chunk function also adds basic classification tags.
tokens = nltk.word_tokenize(text) tagged = nltk.pos_tag(tokens) ne_chunked = nltk.ne_chunk(tagged, binary=False) ex = extract_entities(ne_chunked) ex_custom = custom_NER(tagged) top_ex = get_top_n(ex, text, 20) top_ex_custom = get_top_n(ex_custom, text, 20) print('ne_chunked:') for e in top_ex: print('{} count: {}'.format(e[0], e[1])) print() print('custom NER:') for e in top_ex_custom: print('{} count: {}'.format(e[0], e[1]))
Task 3 - Text Mining/task3.ipynb
ggljzr/mi-ddw
mit
Next we want to do our own classification, using Wikipedia articles for each named entity. The idea is to find an article matching the entity string (for example 'America') and then create a noun phrase from its first sentence. When no suitable article or description is found, the entity is classified as 'Thing'.
def get_noun_phrase(entity, sentence): t = nltk.pos_tag([word for word in nltk.word_tokenize(sentence)]) phrase = [] stage = 0 for word in t: if word[0] in ('is', 'was', 'were', 'are', 'refers') and stage == 0: stage = 1 continue elif stage == 1: if word[1] in ('NN', 'JJ', 'VBD', 'CD', 'NNP', 'NNPS', 'RBS', 'IN', 'NNS'): phrase.append(word) elif word[1] in ('DT', ',', 'CC', 'TO', 'POS'): continue else: break if len(phrase) > 1 and phrase[-1][1] == 'IN': phrase.pop() phrase = ' '.join([ word[0] for word in phrase ]) if phrase == '': phrase = 'Thing' return {entity : phrase} def get_wiki_desc(entity, wiki='en'): wikipedia.set_lang(wiki) try: fs = wikipedia.summary(entity, sentences=1) except wikipedia.DisambiguationError as e: fs = wikipedia.summary(e.options[0], sentences=1) except wikipedia.PageError: return {entity : 'Thing'} #fs = nltk.sent_tokenize(page.summary)[0] return get_noun_phrase(entity, fs)
Task 3 - Text Mining/task3.ipynb
ggljzr/mi-ddw
mit
Obviously this classification is much more specific than the tags used by nltk.ne_chunk. We can also see that both NER methods mistook common words for entities unrelated to the article (for example 'New'). Since the custom_NER function relies on uppercase letters to recognize entities, this is commonly caused by the first words of sentences. The lack of a description for the entity 'America' is caused by the simple way the get_noun_phrase function constructs descriptions. It looks for basic words like 'is', so more advanced language can throw it off. This could be fixed by searching the Simple English Wikipedia, or by using it as a fallback when no suitable phrase is found on the normal English Wikipedia (for example, compare the article about the Americas on the simple and normal wikis). I also tried to search for a more general verb (present-tense verb, tag 'VBZ'), but this yielded worse results. Another improvement could be simply expanding the verb list in get_noun_phrase with other suitable verbs. When no exact match for the pair (entity, article) is found, the wikipedia module raises a DisambiguationError, which (like a disambiguation page on Wikipedia) offers possible matching pages. When this happens, the first suggested page is picked. This, however, does not have to be the best page for the given entity.
for entity in top_ex: print(get_wiki_desc(entity[0][0])) for entity in top_ex_custom: print(get_wiki_desc(entity[0]))
Task 3 - Text Mining/task3.ipynb
ggljzr/mi-ddw
mit
When searching the simple wiki, the entity 'Americas' gets a fairly reasonable description. However, there seems to be an issue with handling DisambiguationError: in some cases, looking up the first page from DisambiguationError.options raises another DisambiguationError (even though pages from .options should be guaranteed hits).
get_wiki_desc('Americas', wiki='simple')
Task 3 - Text Mining/task3.ipynb
ggljzr/mi-ddw
mit
Creating training sets Each class of tissue in our pandas framework has a pre-assigned label (Module 1). These labels were: - ClassTissuePost - ClassTissuePre - ClassTissueFlair - ClassTumorPost - ClassTumorPre - ClassTumorFlair - ClassEdemaPost - ClassEdemaPre - ClassEdemaFlair For demonstration purposes we will create a feature vector that contains the intensities for the tumor and white matter areas from the T1w pre- and post-contrast images.
ClassBrainTissuepost=(Data['ClassTissuePost'].values) ClassBrainTissuepost= (np.asarray(ClassBrainTissuepost)) ClassBrainTissuepost=ClassBrainTissuepost[~np.isnan(ClassBrainTissuepost)] ClassBrainTissuepre=(Data[['ClassTissuePre']].values) ClassBrainTissuepre= (np.asarray(ClassBrainTissuepre)) ClassBrainTissuepre=ClassBrainTissuepre[~np.isnan(ClassBrainTissuepre)] ClassTUMORpost=(Data[['ClassTumorPost']].values) ClassTUMORpost= (np.asarray(ClassTUMORpost)) ClassTUMORpost=ClassTUMORpost[~np.isnan(ClassTUMORpost)] ClassTUMORpre=(Data[['ClassTumorPre']].values) ClassTUMORpre= (np.asarray(ClassTUMORpre)) ClassTUMORpre=ClassTUMORpre[~np.isnan(ClassTUMORpre)] X_1 = np.stack((ClassBrainTissuepost,ClassBrainTissuepre)) # we only take the first two features. X_2 = np.stack((ClassTUMORpost,ClassTUMORpre)) X=np.concatenate((X_1.transpose(), X_2.transpose()),axis=0) y =np.zeros((np.shape(X))[0]) y[np.shape(X_1)[1]:]=1 X= preprocessing.scale(X)
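The drop-NaNs / stack / label pattern above can be written more compactly as a helper; a minimal sketch on synthetic intensity values (the arrays below are randomly generated stand-ins, not the actual MRI data):

```python
import numpy as np

def build_dataset(class0, class1):
    """Stack two per-class feature matrices (n_samples, n_features) into X, y."""
    X = np.vstack([class0, class1])
    y = np.concatenate([np.zeros(len(class0)), np.ones(len(class1))])
    return X, y

rng = np.random.default_rng(0)
# columns: post-contrast and pre-contrast intensity (synthetic)
tissue = np.column_stack([rng.normal(1.0, 1, 100), rng.normal(1.2, 1, 100)])
tumor = np.column_stack([rng.normal(3.0, 1, 80), rng.normal(1.5, 1, 80)])

X_demo, y_demo = build_dataset(tissue, tumor)
print(X_demo.shape, y_demo.shape)  # (180, 2) (180,)
```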
notebooks/Module 3.ipynb
slowvak/MachineLearningForMedicalImages
mit
X is the feature vector y are the labels Split Training/Validation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
notebooks/Module 3.ipynb
slowvak/MachineLearningForMedicalImages
mit
Create the classifier For the following example we will consider a SVM classifier. The classifier is provided by the Scikit-Learn library
h = .02 # step size in the mesh # we create an instance of SVM and fit out data. We do not scale our # data since we want to plot the support vectors C = 1.0 # SVM regularization parameter svc = svm.SVC(kernel='linear', C=C).fit(X, y) rbf_svc = svm.SVC(kernel='rbf', gamma=0.1, C=10).fit(X, y) poly_svc = svm.SVC(kernel='poly', degree=3, C=C).fit(X, y) # create a mesh to plot in x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # title for the plots titles = ['SVC with linear kernel', 'SVC with RBF kernel', 'SVC with polynomial (degree 3) kernel'] for i, clf in enumerate((svc, rbf_svc, poly_svc)): # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, m_max]x[y_min, y_max]. plt.subplot(2, 2, i + 1) plt.subplots_adjust(wspace=0.4, hspace=0.4) Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8) # Plot also the training points plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired) plt.xlabel('Intensity post contrast') plt.ylabel('Intensity pre contrast') plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.xticks(()) plt.yticks(()) plt.title(titles[i]) plt.show() # understanding margins for C in [0.001,1000]: fig = plt.subplot() clf = svm.SVC(C,kernel='linear') clf.fit(X, y) # create a mesh to plot in x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx = np.linspace(x_min,x_max) # print (xx) xx=np.asarray(xx) # get the separating hyperplane w = clf.coef_[0] # print(w) a = -w[0] / w[1] # print (a) yy = a * xx - (clf.intercept_[0]) / w[1] # print(yy) # plot the parallels to the separating hyperplane that pass through the # support vectors b = clf.support_vectors_[0] yy_down = a * xx + (b[1] - a * b[0]) b = clf.support_vectors_[-1] 
yy_up = a * xx + (b[1] - a * b[0]) # plot the line, the points, and the nearest vectors to the plane plt.plot(xx, yy, 'k-') plt.plot(xx, yy_down, 'k--') plt.plot(xx, yy_up, 'k--') plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1], s=80, facecolors='none') plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired) plt.axis('tight') plt.show()
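The gamma parameter used for the RBF models above controls the kernel width, $K(x, x') = \exp(-\gamma \lVert x - x' \rVert^2)$. A quick NumPy check that this hand-written formula matches scikit-learn's pairwise implementation (on random data, as an illustration):

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 2))
gamma = 0.1

# hand-written RBF kernel: squared pairwise distances, then exponentiate
d2 = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
K_manual = np.exp(-gamma * d2)

K_sklearn = rbf_kernel(A, A, gamma=gamma)
print(np.allclose(K_manual, K_sklearn))  # True
```

Larger gamma makes the kernel more local (each support vector influences a smaller neighborhood), which is why it interacts so strongly with C in the heatmap search further below.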
notebooks/Module 3.ipynb
slowvak/MachineLearningForMedicalImages
mit
Run some basic analytics Calculate some basic metrics.
print ('C=100') model=svm.SVC(C=100,kernel='linear') model.fit(X_train, y_train) # make predictions expected = y_test predicted = model.predict(X_test) # summarize the fit of the model print(metrics.classification_report(expected, predicted)) print(metrics.confusion_matrix(expected, predicted)) print (20*'---') print ('C=0.0001') model=svm.SVC(C=0.0001,kernel='linear') model.fit(X_train, y_train) # make predictions expected = y_test predicted = model.predict(X_test) # summarize the fit of the model print(metrics.classification_report(expected, predicted)) print(metrics.confusion_matrix(expected, predicted))
notebooks/Module 3.ipynb
slowvak/MachineLearningForMedicalImages
mit
Correct way Fine tune hyperparameters
gamma_val =[0.01, .2,.3,.4,.9] classifier = svm.SVC(kernel='rbf', C=10).fit(X, y) classifier = GridSearchCV(estimator=classifier, cv=5, param_grid=dict(gamma=gamma_val)) classifier.fit(X_train, y_train)
notebooks/Module 3.ipynb
slowvak/MachineLearningForMedicalImages
mit
Debug algorithm with learning curve X_train is randomly split into a training and a test set 3 times (n_iter=3). Each point on the training-score curve is the average of 3 scores where the model was trained and evaluated on the first i training examples. Each point on the cross-validation score curve is the average of 3 scores where the model was trained on the first i training examples and evaluated on all examples of the test set.
title = 'Learning Curves (SVM, gamma=%.6f)' %classifier.best_estimator_.gamma estimator = svm.SVC(kernel='rbf', C=10, gamma=classifier.best_estimator_.gamma) plot_learning_curve(estimator, title, X_train, y_train, cv=4) plt.show() ### Final evaluation on the test set classifier.score(X_test, y_test)
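The plot_learning_curve helper used above is defined outside this snippet; the raw curve data comes from scikit-learn's learning_curve function, which can be called directly. A minimal sketch on synthetic data (the dataset and parameter values here are made up for illustration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve
from sklearn.svm import SVC

# synthetic two-feature binary classification problem
X_demo, y_demo = make_classification(n_samples=300, n_features=2, n_informative=2,
                                     n_redundant=0, random_state=0)

sizes, train_scores, valid_scores = learning_curve(
    SVC(kernel="rbf", C=10, gamma=0.1), X_demo, y_demo,
    train_sizes=np.linspace(0.2, 1.0, 4), cv=4)

# one row per training-set size, one column per CV fold
print(train_scores.shape, valid_scores.shape)  # (4, 4) (4, 4)
```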
notebooks/Module 3.ipynb
slowvak/MachineLearningForMedicalImages
mit
Heatmap This will take some time...
C_range = np.logspace(-2, 10, 13) gamma_range = np.logspace(-9, 3, 13) param_grid = dict(gamma=gamma_range, C=C_range) cv = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=42) grid_clf = GridSearchCV(SVC(), param_grid=param_grid, cv=cv) grid_clf.fit(X, y) print("The best parameters are %s with a score of %0.2f" % (grid_clf.best_params_, grid_clf.best_score_)) plt.figure(figsize=(8, 6)) scores = grid_clf.cv_results_['mean_test_score'].reshape(len(C_range), len(gamma_range)) plt.figure(figsize=(8, 6)) plt.subplots_adjust(left=.2, right=0.95, bottom=0.15, top=0.95) plt.imshow(scores, interpolation='nearest', cmap=plt.cm.jet, norm=MidpointNormalize(vmin=0.2, midpoint=0.92)) plt.xlabel('gamma') plt.ylabel('C') plt.colorbar() plt.xticks(np.arange(len(gamma_range)), gamma_range, rotation=45) plt.yticks(np.arange(len(C_range)), C_range) plt.title('Validation accuracy') plt.show()
notebooks/Module 3.ipynb
slowvak/MachineLearningForMedicalImages
mit
<font color='blue'> What you need to remember: Common steps for pre-processing a new dataset are: - Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...) - Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1) - "Standardize" the data 3 - General Architecture of the learning algorithm It's time to design a simple algorithm to distinguish cat images from non-cat images. You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why Logistic Regression is actually a very simple Neural Network! <img src="images/LogReg_kiank.png" style="width:650px;height:400px;"> Mathematical expression of the algorithm: For one example $x^{(i)}$: $$z^{(i)} = w^T x^{(i)} + b \tag{1}$$ $$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$ $$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$ The cost is then computed by summing over all training examples: $$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$ Key steps: In this exercise, you will carry out the following steps: - Initialize the parameters of the model - Learn the parameters for the model by minimizing the cost - Use the learned parameters to make predictions (on the test set) - Analyse the results and conclude 4 - Building the parts of our algorithm ## The main steps for building a Neural Network are: 1. Define the model structure (such as number of input features) 2. Initialize the model's parameters 3. Loop: - Calculate current loss (forward propagation) - Calculate current gradient (backward propagation) - Update parameters (gradient descent) You often build 1-3 separately and integrate them into one function we call model(). 4.1 - Helper functions Exercise: Using your code from "Python Basics", implement sigmoid(). As you've seen in the figure above, you need to compute $sigmoid( w^T x + b)$ to make predictions.
# GRADED FUNCTION: sigmoid def sigmoid(z): """ Compute the sigmoid of z Arguments: x -- A scalar or numpy array of any size. Return: s -- sigmoid(z) """ ### START CODE HERE ### (≈ 1 line of code) s = 1.0 / (1 + np.exp(-z)) ### END CODE HERE ### return s print ("sigmoid(0) = " + str(sigmoid(0))) print ("sigmoid(9.2) = " + str(sigmoid(9.2)))
coursera/deep-learning/1.neural-networks-deep-learning/week2/pa.2.Logistic Regression with a Neural Network mindset.ipynb
huajianmao/learning
mit
Expected Output: <table style="width:20%"> <tr> <td>**sigmoid(0)**</td> <td> 0.5</td> </tr> <tr> <td>**sigmoid(9.2)**</td> <td> 0.999898970806 </td> </tr> </table> 4.2 - Initializing parameters Exercise: Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
# GRADED FUNCTION: initialize_with_zeros def initialize_with_zeros(dim): """ This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0. Argument: dim -- size of the w vector we want (or number of parameters in this case) Returns: w -- initialized vector of shape (dim, 1) b -- initialized scalar (corresponds to the bias) """ ### START CODE HERE ### (≈ 1 line of code) w, b = np.zeros((dim, 1)), 0 ### END CODE HERE ### assert(w.shape == (dim, 1)) assert(isinstance(b, float) or isinstance(b, int)) return w, b dim = 2 w, b = initialize_with_zeros(dim) print ("w = " + str(w)) print ("b = " + str(b))
coursera/deep-learning/1.neural-networks-deep-learning/week2/pa.2.Logistic Regression with a Neural Network mindset.ipynb
huajianmao/learning
mit
Expected Output: <table style="width:15%"> <tr> <td> ** w ** </td> <td> [[ 0.] [ 0.]] </td> </tr> <tr> <td> ** b ** </td> <td> 0 </td> </tr> </table> For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1). 4.3 - Forward and Backward propagation Now that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters. Exercise: Implement a function propagate() that computes the cost function and its gradient. Hints: Forward Propagation: - You get X - You compute $A = \sigma(w^T X + b) = (a^{(0)}, a^{(1)}, ..., a^{(m-1)}, a^{(m)})$ - You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$ Here are the two formulas you will be using: $$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$ $$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
# GRADED FUNCTION: propagate def propagate(w, b, X, Y): """ Implement the cost function and its gradient for the propagation explained above Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples) Return: cost -- negative log-likelihood cost for logistic regression dw -- gradient of the loss with respect to w, thus same shape as w db -- gradient of the loss with respect to b, thus same shape as b Tips: - Write your code step by step for the propagation """ m = X.shape[1] # FORWARD PROPAGATION (FROM X TO COST) ### START CODE HERE ### (≈ 2 lines of code) A = sigmoid(np.dot(w.T, X) + b) cost = -1.0 / m * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) ### END CODE HERE ### # BACKWARD PROPAGATION (TO FIND GRAD) ### START CODE HERE ### (≈ 2 lines of code) dw = 1.0 / m * np.dot(X, (A - Y).T) db = 1.0 / m * np.sum(A - Y) ### END CODE HERE ### assert(dw.shape == w.shape) assert(db.dtype == float) cost = np.squeeze(cost) assert(cost.shape == ()) grads = {"dw": dw, "db": db} return grads, cost w, b, X, Y = np.array([[1], [2]]), 2, np.array([[1,2], [3,4]]), np.array([[1, 0]]) grads, cost = propagate(w, b, X, Y) print ("dw = " + str(grads["dw"])) print ("db = " + str(grads["db"])) print ("cost = " + str(cost))
coursera/deep-learning/1.neural-networks-deep-learning/week2/pa.2.Logistic Regression with a Neural Network mindset.ipynb
huajianmao/learning
mit
Expected Output: <table style="width:40%"> <tr> <td> **w** </td> <td>[[ 0.1124579 ] [ 0.23106775]] </td> </tr> <tr> <td> **b** </td> <td> 1.55930492484 </td> </tr> <tr> <td> **dw** </td> <td> [[ 0.90158428] [ 1.76250842]] </td> </tr> <tr> <td> **db** </td> <td> 0.430462071679 </td> </tr> </table> Exercise: The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the predict() function. There are two steps to computing predictions: Calculate $\hat{Y} = A = \sigma(w^T X + b)$ Convert the entries of a into 0 (if activation <= 0.5) or 1 (if activation > 0.5), storing the predictions in a vector Y_prediction. If you wish, you can use an if/else statement in a for loop (though there is also a way to vectorize this).
# GRADED FUNCTION: predict def predict(w, b, X): ''' Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b) Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Returns: Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X ''' m = X.shape[1] Y_prediction = np.zeros((1, m)) w = w.reshape(X.shape[0], 1) # Compute vector "A" predicting the probabilities of a cat being present in the picture ### START CODE HERE ### (≈ 1 line of code) A = sigmoid(np.dot(w.T, X) + b) ### END CODE HERE ### # Y_prediction[A >= 0.5] = int(1) # Y_prediction[A < 0.5] = int(0) for i in range(A.shape[1]): # Convert probabilities a[0,i] to actual predictions p[0,i] ### START CODE HERE ### (≈ 4 lines of code) if A[0][i] > 0.5: Y_prediction[0][i] = 1 else: Y_prediction[0][i] = 0 ### END CODE HERE ### assert(Y_prediction.shape == (1, m)) return Y_prediction.astype(int) print("predictions = " + str(predict(w, b, X)))
coursera/deep-learning/1.neural-networks-deep-learning/week2/pa.2.Logistic Regression with a Neural Network mindset.ipynb
huajianmao/learning
mit
Expected Output: <table style="width:30%"> <tr> <td> **predictions** </td> <td> [[ 1. 1.]] </td> </tr> </table> <font color='blue'> What to remember: You've implemented several functions that: - Initialize (w,b) - Optimize the loss iteratively to learn parameters (w,b): - computing the cost and its gradient - updating the parameters using gradient descent - Use the learned (w,b) to predict the labels for a given set of examples 5 - Merge all functions into a model You will now see how the overall model is structured by putting all the building blocks (functions implemented in the previous parts) together, in the right order. Exercise: Implement the model function. Use the following notation: - Y_prediction for your predictions on the test set - Y_prediction_train for your predictions on the train set - w, costs, grads for the outputs of optimize()
# GRADED FUNCTION: model def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False): """ Builds the logistic regression model by calling the function you've implemented previously Arguments: X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train) Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train) X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test) Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test) num_iterations -- hyperparameter representing the number of iterations to optimize the parameters learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize() print_cost -- Set to true to print the cost every 100 iterations Returns: d -- dictionary containing information about the model. """ ### START CODE HERE ### dim = X_train.shape[0] w, b = initialize_with_zeros(dim) params, grads, costs = optimize(w, b, X_train, Y_train, num_iterations = num_iterations, learning_rate = learning_rate, print_cost = print_cost) w = params["w"] b = params["b"] Y_prediction_test = predict(w, b, X_test) Y_prediction_train = predict(w, b, X_train) ### END CODE HERE ### # Print train/test Errors print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100)) print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100)) d = {"costs": costs, "Y_prediction_test": Y_prediction_test, "Y_prediction_train" : Y_prediction_train, "w" : w, "b" : b, "learning_rate" : learning_rate, "num_iterations": num_iterations} return d
coursera/deep-learning/1.neural-networks-deep-learning/week2/pa.2.Logistic Regression with a Neural Network mindset.ipynb
huajianmao/learning
mit
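One detail worth noting in model(): the printed accuracy works because with 0/1 labels, $|\hat{y}^{(i)} - y^{(i)}|$ is exactly 1 on misclassified examples and 0 otherwise, so its mean is the error rate. A tiny illustration with made-up predictions:

```python
import numpy as np

# Hypothetical labels and predictions: 3 of 4 examples are correct
Y     = np.array([[1, 0, 1, 1]])
Y_hat = np.array([[1, 0, 0, 1]])

accuracy = 100 - np.mean(np.abs(Y_hat - Y)) * 100
print(accuracy)  # 75.0
```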
Expected Output: <table style="width:40%"> <tr> <td> **Train Accuracy** </td> <td> 99.04306220095694 % </td> </tr> <tr> <td>**Test Accuracy** </td> <td> 70.0 % </td> </tr> </table> Comment: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. That is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week! Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the index variable) you can look at predictions on pictures of the test set.
# Example of a picture that was wrongly classified. index = 5 plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3))) print(d["Y_prediction_test"][0, index]) print ("y = " + str(test_set_y[0, index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0, index]].decode("utf-8") + "\" picture.")
coursera/deep-learning/1.neural-networks-deep-learning/week2/pa.2.Logistic Regression with a Neural Network mindset.ipynb
huajianmao/learning
mit
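The interpretation below refers to the learning-rate experiment of Section 6, whose loop is not reproduced in this excerpt. A self-contained sketch of the same idea on synthetic data — the toy dataset and the inline train() helper here are stand-ins, not part of the assignment:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, Y, num_iterations, learning_rate):
    # Plain gradient descent on the logistic-regression cost,
    # recording the cost every 100 iterations
    m = X.shape[1]
    w, b = np.zeros((X.shape[0], 1)), 0.0
    costs = []
    for i in range(num_iterations):
        A = sigmoid(np.dot(w.T, X) + b)
        A_c = np.clip(A, 1e-12, 1 - 1e-12)   # guard against log(0)
        cost = -1.0 / m * np.sum(Y * np.log(A_c) + (1 - Y) * np.log(1 - A_c))
        w -= learning_rate * (1.0 / m) * np.dot(X, (A - Y).T)
        b -= learning_rate * (1.0 / m) * np.sum(A - Y)
        if i % 100 == 0:
            costs.append(cost)
    return costs

# Synthetic, linearly separable toy data (a stand-in for the cat dataset)
rng = np.random.RandomState(0)
X = rng.randn(4, 200)
Y = (X[0:1, :] - 0.5 * X[1:2, :] > 0).astype(float)

curves = {lr: train(X, Y, 1000, lr) for lr in (0.5, 0.05, 0.005)}
for lr in (0.5, 0.05, 0.005):
    print(lr, "final cost:", curves[lr][-1])
```

On a well-conditioned convex problem like this toy one, a larger (but still stable) learning rate drives the cost down faster; the oscillation and divergence failure modes discussed below only appear once the rate exceeds the stability threshold for the problem at hand.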
Interpretation: - Different learning rates give different costs and thus different prediction results. - If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost). - A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy. - In deep learning, we usually recommend that you: - Choose the learning rate that best minimizes the cost function. - If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.) 7 - Test with your own image (optional/ungraded exercise) Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Change your image's name in the following code 4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
## START CODE HERE ## (PUT YOUR IMAGE NAME) ## END CODE HERE ## # We preprocess the image to fit your algorithm. fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(image, size=(num_px, num_px)).reshape((1, num_px * num_px * 3)).T my_predicted_image = predict(d["w"], d["b"], my_image) plt.imshow(image) print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
coursera/deep-learning/1.neural-networks-deep-learning/week2/pa.2.Logistic Regression with a Neural Network mindset.ipynb
huajianmao/learning
mit
<p class="normal"> &raquo; <b>Continuous-valued</b> vs. <b>Discrete-valued</b>: <i>based on values assumed by the dependent variable.</i></p> <div class="formula"> $$ \begin{cases} x(t) \in [a, b] & \text{Continuous-valued} \\ x(t) \in \{a_1, a_2, \cdots\} & \text{Discrete-valued} \\ \end{cases} $$ </div>
def continuous_discrete_valued_signals(): t = np.arange(-10, 10.01, 0.01) n_steps = 10. x_c = np.exp(-0.1 * (t ** 2)) x_d = (1/n_steps) * np.round(n_steps * x_c) fig = figure(figsize=(17,5)) plot(t, x_c, label="Continuous-valued") plot(t, x_d, label="Discrete-valued") ylim(-0.1, 1.1) xticks(fontsize=25) yticks(fontsize=25) xlabel('Time', fontsize=25) legend(prop={'size':25}); savefig("img/cont_disc_val.svg", format="svg") continuous_discrete_valued_signals()
lectures/lecture_1/.ipynb_checkpoints/lecture_1-checkpoint.ipynb
siva82kb/intro_to_signal_processing
mit
<p class="normal">Last two classifications can be combined to have four possible combinations of signals:</p> <ul class="content"> <li><i>Continuous-time continuous-valued signals</i></li> <li><i>Continuous-time discrete-valued signals</i></li> <li><i>Discrete-time continuous-valued signals</i></li> <li><i>Discrete-time discrete-valued signals</i></li> </ul>
def continuous_discrete_combos(): t = np.arange(-10, 10.01, 0.01) n = np.arange(-10, 11, 0.5) n_steps = 5. # continuous-time continuous-valued signal x_t_c = np.exp(-0.1 * (t ** 2)) # continuous-time discrete-valued signal x_t_d = (1/n_steps) * np.round(n_steps * x_t_c) # discrete-time continuous-valued signal x_n_c = np.exp(-0.1 * (n ** 2)) # discrete-time discrete-valued signal x_n_d = (1/n_steps) * np.round(n_steps * x_n_c) figure(figsize=(17,8)) subplot2grid((2,2), (0,0), rowspan=1, colspan=1) plot(t, x_t_c,) ylim(-0.1, 1.1) xticks(fontsize=25) yticks(fontsize=25) title("Continuous-time Continuous-valued", fontsize=25) subplot2grid((2,2), (0,1), rowspan=1, colspan=1) plot(t, x_t_d,) ylim(-0.1, 1.1) xticks(fontsize=25) yticks(fontsize=25) title("Continuous-time Discrete-valued", fontsize=25) subplot2grid((2,2), (1,0), rowspan=1, colspan=1) stem(n, x_n_c, basefmt='.') ylim(-0.1, 1.1) xlim(-10, 10) xticks(fontsize=25) yticks(fontsize=25) title("Discrete-time Continuous-valued", fontsize=25) subplot2grid((2,2), (1,1), rowspan=1, colspan=1) stem(n, x_n_d, basefmt='.') ylim(-0.1, 1.1) xlim(-10, 10) xticks(fontsize=25) yticks(fontsize=25) title("Discrete-time Discrete-valued", fontsize=25); tight_layout(); savefig("img/signal_types.svg", format="svg"); continuous_discrete_combos()
lectures/lecture_1/.ipynb_checkpoints/lecture_1-checkpoint.ipynb
siva82kb/intro_to_signal_processing
mit
<p class="normal">EMG recorded from a linear electrode array.</p> <p class="normal"> &raquo; <b>Deterministic</b> vs. <b>Stochastic</b>: <i>e.g. EMG is an example of a stochastic signal.</i></p>
def deterministic_stochastic():
    t = np.arange(0., 10., 0.005)
    x = np.exp(-0.5 * t) * np.sin(2 * np.pi * 2 * t)
    y1 = np.random.normal(0, 1., size=len(t))
    y2 = np.random.uniform(0, 1., size=len(t))
    figure(figsize=(17, 10))

    # deterministic signal
    subplot2grid((3,3), (0,0), rowspan=1, colspan=3)
    plot(t, x, label=r"$e^{-0.5t}\sin 4\pi t$")
    title('Deterministic signal', fontsize=25)
    xticks(fontsize=25)
    yticks(fontsize=25)
    legend(prop={'size':25})

    # stochastic signal - normal distribution
    subplot2grid((3,3), (1,0), rowspan=1, colspan=2)
    plot(t, y1, label="Normal distribution")
    ylim(-4, 6)
    title('Stochastic signal (Normal)', fontsize=25)
    xticks(fontsize=25)
    yticks(fontsize=25)
    legend(prop={'size':25})

    # histogram
    subplot2grid((3,3), (1,2), rowspan=1, colspan=1)
    hist(y1)
    xlim(-6, 6)
    title("Histogram", fontsize=25)
    xticks(fontsize=25)
    yticks([0, 200, 400, 600], fontsize=25)

    # stochastic signal - uniform distribution
    subplot2grid((3,3), (2,0), rowspan=1, colspan=2)
    plot(t, y2, label="Uniform distribution")
    ylim(-0.3, 1.5)
    title('Stochastic signal (Uniform)', fontsize=25)
    xticks(fontsize=25)
    yticks(fontsize=25)
    legend(prop={'size':25})

    # histogram
    subplot2grid((3,3), (2,2), rowspan=1, colspan=1)
    hist(y2)
    xlim(-0.2, 1.2)
    title("Histogram", fontsize=25)
    xticks(fontsize=25)
    yticks([0, 100, 200], fontsize=25)

    tight_layout()
    savefig("img/det_stoch.svg", format="svg")

deterministic_stochastic()
lectures/lecture_1/.ipynb_checkpoints/lecture_1-checkpoint.ipynb
siva82kb/intro_to_signal_processing
mit
<p class="normal"> &raquo; <b>Even</b> vs. <b>Odd</b>: <i>based on symmetry about the $t=0$ axis.</i></p> <div class="formula"> $$ \begin{cases} x(t) = x(-t), & \text{Even signal} \\ x(t) = -x(-t), & \text{Odd signal} \\ \end{cases} $$ </div> <p class="normal"><i>Can there be signals that are neither even nor odd?</i></p> <div class="theorem"><b>Theorem</b>: Any arbitrary function can be represented as a sum of an odd and even function. <div class="formula">$$ x(t) = x_{even}(t) + x_{odd}(t) $$</div> where,<div class="formula-inline">$ x_{even}(t) = \frac{x(t) + x(-t)}{2} $</div> and <div class="formula-inline">$ x_{odd}(t) = \frac{x(t) - x(-t)}{2} $</div>. </div>
def even_odd_decomposition(): t = np.arange(-5, 5, 0.01) x = (0.5 * np.exp(-(t-2.1)**2) * np.cos(2*np.pi*t) + np.exp(-t**2) * np.sin(2*np.pi*3*t)) figure(figsize=(17,4)) # Original function plot(t, x, label="$x(t)$") # Even component plot(t, 0.5 * (x + x[::-1]) - 2, label="$x_{even}(t)$") # Odd component plot(t, 0.5 * (x - x[::-1]) + 2, label="$x_{odd}(t)$") xlim(-5, 8) title('Decomposition of a signal into even and odd components', fontsize=25) xlabel('Time', fontsize=25) xticks(fontsize=25) yticks([]) legend(prop={'size':25}) savefig("img/even_odd.svg", format="svg"); even_odd_decomposition()
lectures/lecture_1/.ipynb_checkpoints/lecture_1-checkpoint.ipynb
siva82kb/intro_to_signal_processing
mit
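Beyond the plot, the theorem can be checked numerically: on a grid that is symmetric about $t=0$, reversing the sample array implements the substitution $t \mapsto -t$. A short sketch on an arbitrary (neither even nor odd) test signal:

```python
import numpy as np

# Sample a signal on a grid symmetric about t = 0
t = np.linspace(-5, 5, 1001)
x = np.exp(-t**2) * np.sin(2 * np.pi * t) + t**3 + np.cos(t)

x_rev = x[::-1]               # x(-t): reversing the array flips the time axis
x_even = 0.5 * (x + x_rev)
x_odd  = 0.5 * (x - x_rev)

print(np.allclose(x_even + x_odd, x))        # True: decomposition reconstructs x
print(np.allclose(x_even, x_even[::-1]))     # True: x_even(t) ==  x_even(-t)
print(np.allclose(x_odd, -x_odd[::-1]))      # True: x_odd(t)  == -x_odd(-t)
```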
<p class="normal"> &raquo; <b>Periodic</b> vs. <b>Non-periodic</b>: <i>a signal is periodic, if and only if</i></p> <div class="formula"> $$ x(t) = x(t + T), \,\, \forall t, \,\,\, T \text{ is the fundamental period.}$$ </div> <p class="normal"> &raquo; <b>Energy</b> vs. <b>Power</b>: <i>indicates if a signal is short-lived.</i></p> <div class="formula"> $$ E = \int_{-\infty}^{\infty}\left|x(t)\right|^{2}dt \,\,\,\,\,\,\,\,\,\, P = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2}\left|x(t)\right|^{2}dt $$ </div> <p class="normal"><i> A signal is an energy signal, if</i></p> <div class="formula"> $$ 0 < E < \infty $$ </div> <p class="normal"><i> and a signal is a power signal, if</i></p> <div class="formula"> $$ 0 < P < \infty $$ </div> <h6 class="header">Introduction to Signal Processing (Lecture 1)</h6> <h1>What is a system?</h1> <p class="normal">A system is any physical device or algorithm that performs some operation on a signal to transform it into another signal.</p> <h6 class="header">Introduction to Signal Processing (Lecture 1)</h6> <h1>Classification of systems</h1> <p class="normal"><i>Based on the properties of a system:</i></p> <p class="normal"> &raquo; <b>Linearity:</b> $\implies$ <i><b>scaling</b> and <b>superposition</b> </i></p> <p class="normal"> Lets assume, </p> <div class="formula"> $$ f: x_i(t) \mapsto y_i(t) $$ </div> <p class="normal"> The system is linear, if and only if,</p> <div class="formula"> $$ f: \sum_{i}a_ix_i(t) \mapsto \sum_{i}a_iy_i(t) $$ </div> <p class="normal"><i>Which of the following systems are linear?</i></p> <div class="formula">(a) $y(t) = k_1x(t) + k_2x(t-2)$</div> <div class="formula">(b) $y(t) = \int_{t-T}^{t}x(\tau)d\tau$</div> <div class="formula">(c) $y(t) = 0.5x(t) + 1.5$</div> <p class="normal"> &raquo; <b>Memory:</b> <i>a system whose output depends on past, present or future values of input is a <b>system with memory</b>, else the system is <b>memoryless</b></i></p> <p class="normal">Memoryless system: </p> <div 
class="formula"> $$ y(t) = 0.5x(t) $$ </div> <p class="normal">System with memory: </p> <div class="formula"> $$ y(t) = \int_{t-0.5}^{t}x(\tau)d\tau$$ </div>
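As a numerical answer to the linearity question above, superposition can be tested directly: feed $a_1x_1 + a_2x_2$ through the system and compare with $a_1y_1 + a_2y_2$. A sketch for systems (a) and (c) — the time shift in (a) is approximated here by a circular sample delay, an implementation choice, not part of the original definition:

```python
import numpy as np

t = np.linspace(0, 1, 101)
x1 = np.sin(2 * np.pi * t)
x2 = np.cos(2 * np.pi * t)
a1, a2 = 2.0, -3.0

def sys_a(x, k1=1.0, k2=0.5, delay=10):
    # (a) y(t) = k1*x(t) + k2*x(t - T): the shift is modeled as a
    # circular delay of `delay` samples (an approximation)
    return k1 * x + k2 * np.roll(x, delay)

def sys_c(x):
    # (c) y(t) = 0.5*x(t) + 1.5 -- affine, not linear
    return 0.5 * x + 1.5

lin_a = np.allclose(sys_a(a1 * x1 + a2 * x2), a1 * sys_a(x1) + a2 * sys_a(x2))
lin_c = np.allclose(sys_c(a1 * x1 + a2 * x2), a1 * sys_c(x1) + a2 * sys_c(x2))
print("system (a) satisfies superposition:", lin_a)   # True
print("system (c) satisfies superposition:", lin_c)   # False
```

System (c) fails because the constant offset 1.5 appears once on the left-hand side but scaled by $a_1 + a_2$ on the right.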
def memory():
    dt = 0.01
    N = int(round(0.5 / dt))  # number of samples in the 0.5 s window
    t = np.arange(-1.0, 5.0, dt)
    x = 1.0 * np.array([t >= 1.0, t < 3.0]).all(0)
    # memoryless system
    y1 = 0.5 * x
    # system with memory
    y2 = np.zeros(len(x))
    for i in range(len(y2)):
        y2[i] = np.sum(x[max(0, i - N):i]) * dt
    figure(figsize=(17,4))
    plot(t, x, lw=2, label="$x(t)$")
    plot(t, y1, lw=2, label="$0.5x(t)$")
    plot(t, y2, lw=2, label=r"$\int_{t-0.5}^{t}x(\tau)d\tau$")
    xlim(-1, 5)
    ylim(-0.1, 1.1)
    xticks(fontsize=25)
    yticks(fontsize=25)
    xlabel('Time', fontsize=25)
    legend(prop={'size':25})
    savefig("img/memory.svg", format="svg")

memory()
lectures/lecture_1/.ipynb_checkpoints/lecture_1-checkpoint.ipynb
siva82kb/intro_to_signal_processing
mit
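To supplement the energy vs. power classification introduced earlier, both integrals can be approximated numerically on a long finite window. The signals chosen here ($e^{-|t|}$ as an energy signal, $\cos t$ as a power signal) are illustrative examples, not from the lecture:

```python
import numpy as np

dt = 0.001
t = np.arange(-50.0, 50.0, dt)

# Energy signal: x(t) = e^{-|t|}; analytically E = integral of e^{-2|t|} = 1 (finite)
x = np.exp(-np.abs(t))
E = np.sum(x**2) * dt
print("E:", E)   # close to 1.0

# Power signal: x(t) = cos(t); E grows with the window, but P = 1/2 (finite)
y = np.cos(t)
P = np.sum(y**2) * dt / (t[-1] - t[0])
print("P:", P)   # close to 0.5
```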