We then use the restored model to predict the class-numbers for those images.
y_pred = model3.predict(x=images)
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Get the class-numbers as integers.
cls_pred = np.argmax(y_pred, axis=1)
Plot the images with their true and predicted class-numbers.
plot_images(images=images, cls_pred=cls_pred, cls_true=cls_true)
Visualization of Layer Weights and Outputs

Helper-function for plotting convolutional weights
def plot_conv_weights(weights, input_channel=0):
    # Get the lowest and highest values for the weights.
    # This is used to correct the colour intensity across
    # the images so they can be compared with each other.
    w_min = np.min(weights)
    w_max = np.max(weights)

    # Number of filters used in the conv. layer.
    num_filters = weights.shape[3]

    # Number of grids to plot.
    # Rounded-up, square-root of the number of filters.
    num_grids = math.ceil(math.sqrt(num_filters))

    # Create figure with a grid of sub-plots.
    fig, axes = plt.subplots(num_grids, num_grids)

    # Plot all the filter-weights.
    for i, ax in enumerate(axes.flat):
        # Only plot the valid filter-weights.
        if i < num_filters:
            # Get the weights for the i'th filter of the input channel.
            # See new_conv_layer() for details on the format
            # of this 4-dim tensor.
            img = weights[:, :, input_channel, i]

            # Plot image.
            ax.imshow(img, vmin=w_min, vmax=w_max,
                      interpolation='nearest', cmap='seismic')

        # Remove ticks from the plot.
        ax.set_xticks([])
        ax.set_yticks([])

    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()
Get Layers

Keras has a simple way of listing the layers in the model.
model3.summary()
We count the indices to get the layers we want. The input-layer has index 0.
layer_input = model3.layers[0]
The first convolutional layer has index 2.
layer_conv1 = model3.layers[2]
layer_conv1
The second convolutional layer has index 4.
layer_conv2 = model3.layers[4]
Convolutional Weights

Now that we have the layers we can easily get their weights.
weights_conv1 = layer_conv1.get_weights()[0]
This gives us a 4-rank tensor.
weights_conv1.shape
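The four dimensions follow the Keras Conv2D kernel convention (with the default channels-last data format): (kernel-height, kernel-width, input-channels, number-of-filters). A small stand-in sketch with made-up sizes, not the actual model weights, showing how plot_conv_weights indexes such a tensor:

```python
import numpy as np

# Stand-in for a Conv2D kernel tensor with 5x5 kernels, 1 input channel
# and 16 filters: shape = (kernel_h, kernel_w, in_channels, num_filters).
weights = np.zeros((5, 5, 1, 16))

num_filters = weights.shape[3]    # as used by plot_conv_weights
img = weights[:, :, 0, 3]         # the 5x5 weights of filter 3, input channel 0

print(num_filters, img.shape)
```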
Plot the weights using the helper-function from above.
plot_conv_weights(weights=weights_conv1, input_channel=0)
We can also get the weights for the second convolutional layer and plot them.
weights_conv2 = layer_conv2.get_weights()[0]
plot_conv_weights(weights=weights_conv2, input_channel=0)
Helper-function for plotting the output of a convolutional layer
def plot_conv_output(values):
    # Number of filters used in the conv. layer.
    num_filters = values.shape[3]

    # Number of grids to plot.
    # Rounded-up, square-root of the number of filters.
    num_grids = math.ceil(math.sqrt(num_filters))

    # Create figure with a grid of sub-plots.
    fig, axes = plt.subplots(num_grids, num_grids)

    # Plot the output images of all the filters.
    for i, ax in enumerate(axes.flat):
        # Only plot the images for valid filters.
        if i < num_filters:
            # Get the output image of using the i'th filter.
            img = values[0, :, :, i]

            # Plot image.
            ax.imshow(img, interpolation='nearest', cmap='binary')

        # Remove ticks from the plot.
        ax.set_xticks([])
        ax.set_yticks([])

    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()
Input Image

Helper-function for plotting a single image.
def plot_image(image):
    plt.imshow(image.reshape(img_shape),
               interpolation='nearest',
               cmap='binary')
    plt.show()
Output of Convolutional Layer

In order to show the output of a convolutional layer, we can create another Functional Model using the same input as the original model, but the output is now taken from the convolutional layer that we are interested in.
output_conv2 = Model(inputs=layer_input.input, outputs=layer_conv2.output)
This creates a new model-object where we can call the typical Keras functions. To get the output of the convolutional layer, we call the predict() function with the input image.
layer_output2 = output_conv2.predict(np.array([image1]))
layer_output2.shape
We can then plot the images for all 36 channels.
plot_conv_output(values=layer_output2)
Layered violin plot

Without color, this plot can simply display the distribution of importance for each variable as a standard violin plot.
shap.summary_plot(shap_values[:1000,:], X.iloc[:1000,:], plot_type="layered_violin", color='#cccccc')
notebooks/tabular_examples/tree_based_models/Scatter Density vs. Violin Plot Comparison.ipynb
slundberg/shap
mit
For example, in the above, we can see that s5 is the most important variable, and generally it causes either a large positive or negative change in the prediction. However, is it large values of s5 that cause a positive change and small ones that cause a negative change - or vice versa, or something more complicated? If we use color to represent the largeness/smallness of the feature, then this becomes apparent:
shap.summary_plot(shap_values[:1000,:], X.iloc[:1000,:], plot_type="layered_violin", color='coolwarm')
Here, red represents large values of a variable, and blue represents small ones. So it becomes clear that large values of s5 do indeed increase the prediction, and vice versa. You can also see that others (like s6) are pretty evenly split, which indicates that while overall they're still important, their interaction is dependent on other variables. (After all, the whole point of a tree model like xgboost is to capture these interactions, so we can't expect to see everything in a single dimension!)

Note that the order of the color isn't important: each violin is actually a number (layered_violin_max_num_bins) of individual smoothed shapes stacked on top of each other, where each shape corresponds to a certain percentile of the feature (e.g. the 5-10% percentile of s5 values). These are always drawn with small values first (and hence closest to the x-axis) and large values last (hence on the 'edge'), and that's why in this case you always see the red on the edge and the blue in the middle. (You could, of course, switch this round by using a different color map, but the point is that the order of red inside/outside blue has no inherent meaning.)

There are other options you can play with, if you wish. Most notable is the layered_violin_max_num_bins mentioned above. This has an additional effect: if the feature has fewer than layered_violin_max_num_bins unique values, then instead of partitioning each section as a percentile (the 5-10% above), we make each section represent a specific value. For example, since sex has only two values, here blue will mean male (or female?) and red means female (or male?). Not sure with the diabetes data if a higher value of sex means male or female.

<!-- commenting this out for the public repo since there is a fair amount of opinion here.
#### Pros
- look great
- easily interpretable (with color): people can generally get the idea without having to explain in detail
- both of these mean they're good to show laymen/clients in presentations etc.
#### Cons
- take longer to draw (only relevant if you're doing heaps)
- can be hard to get the smoothing just right
- the code isn't as well supported, so if you want to tweak it, you might have to hack the code yourself -->

Dot plot

This combines a scatter plot with density estimation by letting dots pile up when they don't fit. The advantage of this approach is that it does not hide anything behind kernel smoothing, so what-you-see-is-what-is-there.
shap.summary_plot(shap_values[:1000,:], X.iloc[:1000,:])
<!-- #### Pros
- if you're looking for really fine features, and your data doesn't have the problems below, then this might be better. However, you probably shouldn't be using a graph to discover such fine features.
#### Cons
- generally doesn't look as nice for most data sets
- can be quite noisy - no smoothing etc. This generally makes it harder to interpret the 'obvious' results.
- the plot will depend on the order the dots are drawn (since they'll overlap etc.). In other words, it's possible that you could get very different looking plots with the same data. You can get round this somewhat by using a very low opacity - but this then makes the non-overlapping parts of the graph hard to read.
- [Note: this issue could be fixed somewhat if the y-value of the dots is given specific meaning (as with the layered violin plot) to avoid plots of different color overlapping. Though then it'd just be the layered violin plot.]
- doesn't support categorical data (see the comment for the layered violin plot). -->

Violin plot

These are a standard violin plot but with outliers drawn as points. This gives a more accurate representation of the density of the outliers than a kernel density estimated from so few points. The color represents the average feature value at that position, so red regions have mostly high feature values while blue regions have mostly low feature values.
shap.summary_plot(shap_values[:1000,:], X.iloc[:1000,:], plot_type="violin")
The function plot_taylor_approximations included here was written by Fernando Perez and was part of work on the original IPython project. Although attribution seems to have been lost over time, we gratefully acknowledge FP and thank him for this code!
def plot_taylor_approximations(func, x0=None, orders=(2, 4), xrange=(0,1), yrange=None, npts=200):
    """Plot the Taylor series approximations to a function at various orders.

    Parameters
    ----------
    func : a sympy function
    x0 : float
      Origin of the Taylor series expansion. If not given, x0=xrange[0].
    orders : list
      List of integers with the orders of Taylor series to show. Default is (2, 4).
    xrange : 2-tuple or array.
      Either an (xmin, xmax) tuple indicating the x range for the plot
      (default is (0, 1)), or the actual array of values to use.
    yrange : 2-tuple
      (ymin, ymax) tuple indicating the y range for the plot. If not given,
      the full range of values will be automatically used.
    npts : int
      Number of points to sample the x range with. Default is 200.
    """
    if not callable(func):
        raise ValueError('func must be callable')
    if isinstance(xrange, (list, tuple)):
        x = np.linspace(float(xrange[0]), float(xrange[1]), npts)
    else:
        x = xrange
    if x0 is None:
        x0 = x[0]
    xs = sp.Symbol('x')

    # Make a numpy-callable form of the original function for plotting
    fx = func(xs)
    f = sp.lambdify(xs, fx, modules=['numpy'])

    # We could use latex(fx) instead of str(), but matplotlib gets confused
    # with some of the (valid) latex constructs sympy emits. So we play it safe.
    plt.plot(x, f(x), label=str(fx), lw=2)

    # Build the Taylor approximations, plotting as we go
    apps = {}
    for order in orders:
        app = fx.series(xs, x0, n=order).removeO()
        apps[order] = app
        # Must be careful here: if the approximation is a constant, we can't
        # blindly use lambdify as it won't do the right thing. In that case,
        # evaluate the number as a float and fill the y array with that value.
        if isinstance(app, sp.numbers.Number):
            y = np.zeros_like(x)
            y.fill(app.evalf())
        else:
            fa = sp.lambdify(xs, app, modules=['numpy'])
            y = fa(x)
        tex = sp.latex(app).replace('$', '')
        plt.plot(x, y, label=r'$n=%s:\, %s$' % (order, tex))

    # Plot refinements
    if yrange is not None:
        plt.ylim(*yrange)
    plt.grid()
    plt.legend(loc='best').get_frame().set_alpha(0.8)

# For an expression made from elementary functions, we must first make it into
# a callable function, the simplest way is to use the Python lambda construct.
# plot_taylor_approximations(lambda x: 1/sp.cos(x), 0, [2,4,6,8], (0, 2*sp.pi), (-5,5))
plot_taylor_approximations(sp.sin, 0, [2, 4, 6, 8], (0, 2*sp.pi), (-2,2))
Lecture-10A-Taylors-Series.ipynb
mathinmse/mathinmse.github.io
mit
Lecture 10: Taylor's Series and Discrete Calculus

Background

It is common in physics and engineering to represent transcendental functions and other nonlinear expressions using a few terms from a Taylor series. This series provides a fast and efficient way to compute quantities such as $\sin(x)$ or $e^x$ to a prescribed error. Learning how to calculate the series representation of these functions will provide practical experience with the Taylor series and help the student understand the results of Python methods designed to accelerate and simplify computations. The series can be written generally as:

$$f(x) = f{\left(0\right)} + x \left. \frac{d}{dx} f{\left(x\right)} \right|_{x=0} + \frac{x^{2}}{2} \left. \frac{d^{2}}{dx^{2}} f{\left(x\right)} \right|_{x=0} + \frac{x^{3}}{6} \left. \frac{d^{3}}{dx^{3}} f{\left(x\right)} \right|_{x=0} + \frac{x^{4}}{24} \left. \frac{d^{4}}{dx^{4}} f{\left(x\right)} \right|_{x=0} + \frac{x^{5}}{120} \left. \frac{d^{5}}{dx^{5}} f{\left(x\right)} \right|_{x=0} + \mathcal{O}\left(x^{6}\right)$$

Of equal importance, the Taylor series permits discrete representation of derivatives and is a common way to perform numerical integration of partial and ordinary differential equations. Expansion of a general function $f(x)$ about a point, coupled with algebraic manipulation, will produce expressions that can be used to approximate derivative quantities. Although any order of derivative can be computed, this lesson will focus on the first and second derivatives that will be encountered in the diffusion equation.

What Skills Will I Learn?

You will practice the following skills:

- Defining and determining the limits of infinite sequences, series, and power series.
- Defining the Taylor series and writing the general form about any point and to any order.
- Deriving the central and forward difference formulae for numerical derivatives using the Taylor series.
What Steps Should I Take?

1. Learn to use SymPy to define and find the limits of sequences and series.
2. Learn how to approximate transcendental functions about center points of your choosing.
3. Differentiate an explicit series representation of a function to see that the coefficients of such a series can be determined algebraically.
4. Use SymPy to compute a power series symbolically.
5. Derive the finite difference expressions for the first and second derivatives.
6. Read the relevant pages from Hornbeck's text on numerical methods.
7. Generate a list of values that approximate the function $f(x)=x^8$ on the domain $\{x \mid 0 \leq x \leq 1\}$. Using these values, numerically compute the derivative at your selected grid points and compare it to the analytical solution. Using this technique, examine how the observed error changes as the number of grid points is varied. Visualize and explain the results.
8. Prepare a new notebook (not just modifications to this one) that describes your approach.

Optional challenge: A list is one of the fundamental data structures within Python. NumPy (a Python library) and other parts of Python libraries use vectorized computations. From Wikipedia, vectorization is "a style of computer programming where operations are applied to whole arrays instead of individual elements." With this in mind, we certainly can iterate over our list of points and apply the function that you will soon write in an element-by-element fashion; however, it is more common practice in Python and other modern languages to write vectorized code. If this is your first exposure to vectorized computation, I recommend two initial strategies: write out your algorithms and use "classic" flow control and iteration to compute the results. From that point you will more easily see the strategy you should use to write vectorized code.
Using the discrete forms of the first and second derivatives (based on central differences), can you devise a vectorized operation that computes the derivative without looping in Python?

A Successful Jupyter Notebook Will:

- Present a description of the essential elements of Taylor's series and how to compute numerical derivatives;
- Identify the audience for which the work is intended;
- Run the code necessary to compute and visualize the error associated with the second order approximation and the changes in grid point spacing;
- Provide a narrative and equations to explain why your approach is relevant to solving the problem;
- Provide references and citations to any others' work you use to complete the assignment;
- Be checked into your GitHub repository by the due date (one week from assignment).

A high quality communication provides an organized, logically progressing blend of narrative, equations, and code that teaches the reader a particular topic or idea. You will be assessed on:
* The functionality of the code (i.e. it should perform the task assigned).
* The narrative you present. I should be able to read and learn from it. Choose your audience wisely.
* The supporting equations and figures you choose to include.

If your notebook is just computer code, your assignment will be marked incomplete.

Reading and Reference

- Essential Mathematical Methods for Physicists, H. Weber and G. Arfken, Academic Press, 2003
- Advanced Engineering Mathematics, E. Kreyszig, John Wiley and Sons, 2010
- Numerical Recipes, W. Press, Cambridge University Press, 1986
- Numerical Methods, R. Hornbeck, Quantum Publishers, 1975

Infinite Sequences

Ideas relating to sequences, series, and power series are used in the formulation of integral calculus and in the construction of polynomial representations of functions. The limit of functions will also be investigated as boundary conditions for differential equations.
For this reason, understanding concepts related to sequences and series is important to review. A sequence is an ordered list of numbers. A list such as the following represents a sequence: $$a_1, a_2, a_3, a_4, \dots, a_n, \dots$$ The sequence maps one value $a_n$ to every integer $n$. It is typical to provide a formula for construction of the nth term in the sequence. While ad-hoc strategies could be used to develop sequences using SymPy and lists in Python, SymPy has a sequence class that can be used. A short demonstration is provided next:
import sympy as sp

x, y, z, t, a = sp.symbols('x y z t a')
k, m, n = sp.symbols('k m n', integer=True)
f, g, h = sp.symbols('f g h', cls=sp.Function)
sp.var('a1:6')
sp.init_printing()
It is important to read about SymPy symbols at this time. We can generate a sequence using SeqFormula.
a1 = sp.SeqFormula(n**2, (n,0,5))
list(a1)
If we want the limit of the sequence $$[0, 1, 4, 9, \ldots]$$ at infinity, we can use limit_seq:
sp.limit_seq(a1.formula, n)
DIY: Determine if the following sequences are convergent or divergent. If convergent, what is the limit? $$\begin{aligned} a_n &= \frac{1}{n} \\ a_n &= 1 - (0.2)^n \\ a_n &= \frac{1}{2n+1} \end{aligned}$$
# Your code here.
Infinite Series

A series is the sum of a sequence. An infinite series will converge if the partial sums of the series have a finite limit. For example, examine the partial sums of the series: $$\sum^{\infty}_{n=1} \frac{1}{2^n}$$
a2 = sp.Sum(1/2**n, (n,0,1))
a2
a2.doit()

a4 = sp.Sum(n**2, (n,0,5))
a4
a4.doit()

a5 = sp.Sum(k**2, (k, 1, m))
a5
a5.doit()
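Beyond the small partial sums above, SymPy can also evaluate the partial sum to an arbitrary symbolic upper limit and the infinite sum in closed form. A short sketch I have added to illustrate the convergence (the symbols mirror those defined earlier):

```python
import sympy as sp

n, m = sp.symbols('n m', integer=True, positive=True)

# Closed form of the partial sum up to m; it approaches 1 as m grows.
partial = sp.Sum(sp.Rational(1, 2)**n, (n, 1, m)).doit()

# The infinite geometric series converges to 1.
total = sp.Sum(sp.Rational(1, 2)**n, (n, 1, sp.oo)).doit()

print(partial, total)
```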
A power series is of the form: $$ \sum_{n=0}^{\infty} M_{n} x^{n} = M_0 + M_1 x + M_2 x^2 + \cdots $$
M = sp.IndexedBase('M') sp.Sum(M[n]*x**n, (n,0,m))
We can define the series about the point $a$ as follows:
sp.Sum(M[n]*(x-a)**n, (n,0,m))
SymPy has a function that can take SymPy expressions and represent them as power series:
sp.series(f(x), x, x0=0)
DIY: Use SymPy to determine series approximations to $e^x$, $\sin(x)$, and $\cos(x)$ about the point $x=0$.
# Your code here.
Taylor's Series

Below we present a derivation of Taylor's series and a small algebraic argument for series representations of functions. In contrast to the ability to use SymPy functions without any deeper understanding, these presentations are intended to give you insight into the origin of the series representation and the factors present within each term. While the algebraic presentation isn't a general case, the essential elements of a general polynomial representation are visible.

The function $f(x)$ can be expanded into an infinite series or a finite series plus an error term. Assume that the function has a continuous nth derivative over the interval $a \le x \le b$. Integrate the nth derivative n times:

$$\int_a^x f^{(n)}(x) \, dx = f^{(n-1)}(x) - f^{(n-1)}(a)$$

The parenthesized superscript on the function $f$ in the equation above indicates the order of the derivative. Do this n times and then solve for $f(x)$ to recover Taylor's series. One of the key features in this derivation is that the integral is definite. This derivation is outlined on Wolfram's MathWorld.

As a second exercise, assume that we wish to expand $\sin x$ about $x=0$. First, assume that the series exists and can be written as a power series with unknown coefficients. As a first step, differentiate the series and the function we are expanding. Next, let the value of $x$ go to the value of the expansion point and it will be possible to evaluate the coefficients in turn:

$$\sin x = A + Bx + Cx^2 + Dx^3 + Ex^4$$

We can choose an expansion point (e.g. $x = 0$) and differentiate to get a set of simultaneous equations permitting determination of the coefficients. The computer algebra system can help us with this activity:
import sympy as sp
sp.init_printing()

x, A, B, C, D, E = sp.symbols('x, A, B, C, D, E')
To help us get our work done we can use sympy's diff function. Testing this function with a known result, we can write:
sp.diff(sp.sin(x),x)
A list comprehension is used to organize the results. In each iteration the exact function and the power series are differentiated and stored as an element of a list. The list can be inspected, and a set of simultaneous equations can be written down and solved to determine the values of the coefficients. Casting the list as a sympy Matrix object clarifies the correspondence between entries in the list.
orderOfDifferentiation = 1
powerSeries = A + B*x + C*x**2 + D*x**3 + E*x**4

# Differentiate, element by element, the list [sp.sin(x), powerSeries]
[sp.diff(a, x, orderOfDifferentiation) for a in [sp.sin(x), powerSeries]]
A list comprehension can be used to organize and extend the results further. We can wrap the list above into another list that changes the order of differentiation each iteration.
maximumOrder = 5
funcAndSeries = [[sp.diff(a, x, order) for a in [sp.sin(x), powerSeries]]
                 for order in range(maximumOrder)]
funcAndSeries
Casting the results as a sympy Matrix object the list is more easily viewed in the Jupyter notebook:
sp.Matrix(funcAndSeries)
DIY: Determine the coefficients in the above power series. You don't necessarily need to write code to complete this DIY problem.
# Your code here if you feel you need it.
Your markdown here.

Computing a Taylor's Series Symbolically

Using sympy, the Taylor's series can be computed symbolically.
from sympy import init_printing, symbols, Function

init_printing()
x, c = symbols("x,c")
f = Function("f")

f(x).series(x, x0=c, n=3)
One of the major uses of Taylor's series in computation is the evaluation of derivatives. Take note of the fact that the derivatives of a function appear in the evaluation of the series.

Computing Derivatives of Discrete Data

It may be straightforward to compute the derivative of some functions. For example: $$f(x) = x^2$$ $$f'(x) = 2x$$ In numerical computing situations there is no analytical solution to the problem being solved and therefore no function to integrate or differentiate. The approximate solution is available as a list of discrete points in the domain of the problem's independent variables (e.g. space, time). The values could be represented as a list of numbers: $$\{f(x_0), f(x_1), f(x_2), \ldots\}$$ The neighboring points $f(x_0)$ and $f(x_1)$ are separated by a distance $\Delta x = x_1 - x_0$ in the independent variable. Although this will not be apparent from the values, it is implicit in the structure of the data. Taylor's series can be used to compute approximate derivatives of the unknown function directly from the list of points in this situation.

We are going to compute a series expansion for an unknown function $f(x)$ in the vicinity of the point $c$ and then examine the relationship between that function and its derivative quantities at a point $c \pm h$. The goal of the activity is to see if we can find expressions for the derivatives using the data point of interest ($c$) and its neighbors ($c \pm h$). We are going to use the idea of forward and backward differences. Forward differences are computed by expanding an unknown function in a Taylor series about a point $x=c$ and then letting $x$ go to $c+h$. Then, for backward differences, let $x$ go to $c-h$.
Symbolically Computing Forward and Backward Differences

In the figure below we indicate the following:

- the unknown function $f(x)$ as a dashed line
- the point about which the unknown function is expanded, at $x=c$
- the distance between successive points, shown as $h$
- the approximate values of the function, given at the filled squares

Imagine that we take the above series expansion and use it to compute the value of the function near the point $c$. Let us evaluate this series by adding to and subtracting from the independent variable the quantity $h$. To accomplish this we write down the series expansion for our function about the point $c$, then we let the independent variable $x \rightarrow c+h$ and $c-h$.
x, h, c = sp.symbols("x,h,c")
f = sp.Function("f")

# the .subs() method replaces occurrences of 'x' with something else
taylorExpansionPlus = f(x).series(x, x0=c, n=3).removeO().subs(x, c+h)
taylorExpansionMinus = f(x).series(x, x0=c, n=3).removeO().subs(x, c-h)

taylorExpansionPlus
Meaning that: $$f(c+h) = \frac{h^{2}}{2} \left. \frac{d^{2}}{d \xi_{1}^{2}} f{\left(\xi_{1}\right)} \right|_{\xi_{1}=c} + h \left. \frac{d}{d \xi_{1}} f{\left(\xi_{1}\right)} \right|_{\xi_{1}=c} + f{\left(c\right)}$$
taylorExpansionMinus
Meaning that: $$f(c-h) = \frac{h^{2}}{2} \left. \frac{d^{2}}{d \xi_{1}^{2}} f{\left(\xi_{1}\right)} \right|_{\xi_{1}=c} - h \left. \frac{d}{d \xi_{1}} f{\left(\xi_{1}\right)} \right|_{\xi_{1}=c} + f{\left(c\right)}$$

Solving for First and Second Derivatives

Inspection of the results shows that the signs on the terms containing the first derivative are different between the two expressions. We can use this to our advantage in solving for the derivative terms explicitly. Note that each grouped expression is equal to zero, as is the default in sympy. Find the first derivative in this expression:
(taylorExpansionMinus-f(c-h))-(taylorExpansionPlus-f(c+h))
Remember that sympy expressions are zero by default. So this is true: $$- 2 h \left. \frac{d}{d \xi_{1}} f{\left(\xi_{1}\right)} \right|_{\xi_{1}=c} - f{\left(c - h\right)} + f{\left(c + h\right)} = 0$$ Find the second derivative in this expression:
(taylorExpansionMinus-f(c-h))+(taylorExpansionPlus-f(c+h))
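The two grouped expressions rearrange into the familiar central-difference formulas, $f'(c) \approx \frac{f(c+h)-f(c-h)}{2h}$ and $f''(c) \approx \frac{f(c+h) - 2f(c) + f(c-h)}{h^2}$. A minimal numerical sketch I have added (not part of the original lecture) applying them, vectorized with NumPy slicing, to the assignment function $f(x)=x^8$ on a uniform grid:

```python
import numpy as np

# Central-difference formulas derived above, applied on a uniform grid:
#   f'(c)  ≈ (f(c+h) - f(c-h)) / (2h)
#   f''(c) ≈ (f(c+h) - 2 f(c) + f(c-h)) / h**2
x = np.linspace(0.0, 1.0, 101)
h = x[1] - x[0]
f = x**8

# Vectorized over all interior grid points via slicing; no Python loop.
d1 = (f[2:] - f[:-2]) / (2.0 * h)
d2 = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / h**2

err1 = np.max(np.abs(d1 - 8.0 * x[1:-1]**7))    # analytic f'  = 8 x**7
err2 = np.max(np.abs(d2 - 56.0 * x[1:-1]**6))   # analytic f'' = 56 x**6
print(err1, err2)
```

The errors shrink as $O(h^2)$ when the grid is refined, which is the behavior the assignment asks you to examine; the slicing also answers the "no looping" challenge posed earlier.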
Mapping all fatalities and getting unique geohashes for each
# mapping all traffic fatals in WV to a geohash
data = bl.map_table(data, 6, list=True)

# getting matching uniques
matched = []
for datarow in bl.df2list(data):
    for geohash in totaluniques:
        if datarow[-1] == geohash:
            matched.append(geohash)

# getting a point layer for each matched geohash
count = 0        # assumed initialized earlier in the notebook
filenames = []
for geohash in matched:
    count += 1
    temp = data[data.GEOHASH == geohash]
    temp['color'] = 'red'
    a = bl.make_points(temp, list=True)
    bl.parselist(a, str(count) + '.geojson')
    filenames.append(str(count) + '.geojson')

newurl = bl.loadparsehtml(filenames, apikey, colorkey='color', frame=True)
example_jupyter/example.ipynb
murphy214/berrl
apache-2.0
Showing the new url made with fatalities along certain routes
bl.show(newurl)
First, set the filename to what we want to examine and read the PhSF header.
C = 25

phsfname = "PHSF" + "." + str(C)
phsfname = "../" + phsfname

print("We're reading the {1}mm phase space file = {0}".format(phsfname, C))
C25_GP3.ipynb
Iwan-Zotow/VV
mit
The result is as expected. Only a few percent of the values in the 1.33 and 1.17 MeV bins are due to scattered radiation. Most values come from the primary source and are δ-peaks in energy.

Spatial Distribution tests

Here we will plot the spatial distribution of the particles, projected from the collimator exit position to the isocenter location at 38cm.
Znow = 197.5   # we are at 200mm at the collimator exit
Zshot = 380.0  # shot isocenter is at 380mm

# radial, X and Y, all units in mm
hr = H1Du.H1Du(120, 0.0, 40.0)
hx = H1Du.H1Du(128, -32.0, 32.0)
hy = H1Du.H1Du(128, -32.0, 32.0)

for e in events:
    WT = e[0]
    xx, yy, zz = BEAMphsf.move_event(e, Znow, Zshot)
    #xx = e[2]
    #yy = e[3]
    #zz = e[4]
    r = math.sqrt(xx*xx + yy*yy)

    hr.fill(r, WT)
    hx.fill(xx, WT)
    hy.fill(yy, WT)

print("Number of events in R histogram: {0}".format(hr.nof_events()))
print("Integral in R histogram: {0}".format(hr.integral()))
print("Underflow bin: {0}".format(hr.underflow()))
print("Overflow bin: {0}\n".format(hr.overflow()))

print("Number of events in X histogram: {0}".format(hx.nof_events()))
print("Integral in X histogram: {0}".format(hx.integral()))
print("Underflow bin: {0}".format(hx.underflow()))
print("Overflow bin: {0}\n".format(hx.overflow()))

print("Number of events in Y histogram: {0}".format(hy.nof_events()))
print("Integral in Y histogram: {0}".format(hy.integral()))
print("Underflow bin: {0}".format(hy.underflow()))
print("Overflow bin: {0}".format(hy.overflow()))

X = []
Y = []
W = []

norm = 1.0/hr.integral()
sum = 0.0
st = hr.step()
for k in range(0, hr.size()+1):
    r_lo = hr.lo() + float(k) * st
    r_hi = r_lo + st
    r = 0.5*(r_lo + r_hi)
    ba = math.pi * (r_hi*r_hi - r_lo*r_lo)  # bin area

    d = hr[k]       # data from bin with index k
    y = d[0] / ba   # first part of bin is collected weights
    y = y * norm

    X.append(r)
    Y.append(y)
    W.append(st)
    sum += y * ba

print("PDF normalization: {0}".format(sum))

p1 = plt.bar(X, Y, W, 0.0, color='b')

plt.xlabel('Radius(mm)')
plt.ylabel('PDF of the photons')
plt.title('Radial distribution')
plt.grid(True)
plt.tick_params(axis='x', direction='out')
plt.tick_params(axis='y', direction='out')
plt.show()
C25_GP3.ipynb
Iwan-Zotow/VV
mit
We find the spatial distribution to be consistent with the collimation setup Angular Distribution tests Here we plot the particles' angular distribution for all three directional cosines at the collimator exit. We expect the angular distribution to fill the collimation angle, which is close to 0.033 radians (0.5x25/380).
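The quoted collimation half-angle can be checked directly. This is a sketch, assuming a 25 mm collimator opening seen from the 380 mm source-to-isocenter distance, as stated in the text:

```python
import math

# half-angle subtended by a 25 mm opening at 380 mm distance
half_angle = math.atan((25.0 / 2.0) / 380.0)  # radians
print(round(half_angle, 4))  # → 0.0329
```

For such a small ratio the arctangent is essentially equal to its argument, so the 0.5x25/380 shortcut in the text is accurate.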
# angular, WZ, WX and WY, all units in radians h_wz = H1Du.H1Du(100, 1.0 - 0.05, 1.0) h_wx = H1Du.H1Du(110, -0.055, 0.055) h_wy = H1Du.H1Du(110, -0.055, 0.055) for e in events: WT = e[0] wx = e[5] wy = e[6] wz = e[7] h_wz.fill(wz, WT) h_wx.fill(wx, WT) h_wy.fill(wy, WT) print("Number of events in WZ histogram: {0}".format(h_wz.nof_events())) print("Integral in WZ histogram: {0}".format(h_wz.integral())) print("Underflow bin: {0}".format(h_wz.underflow())) print("Overflow bin: {0}\n".format(h_wz.overflow())) print("Number of events in WX histogram: {0}".format(h_wx.nof_events())) print("Integral in WX histogram: {0}".format(h_wx.integral())) print("Underflow bin: {0}".format(h_wx.underflow())) print("Overflow bin: {0}\n".format(h_wx.overflow())) print("Number of events in WY histogram: {0}".format(h_wy.nof_events())) print("Integral in WY histogram: {0}".format(h_wy.integral())) print("Underflow bin: {0}".format(h_wy.underflow())) print("Overflow bin: {0}".format(h_wy.overflow())) X = [] Y = [] W = [] norm = 1.0/h_wz.integral() sum = 0.0 st = h_wz.step() for k in range (0, h_wz.size()+1): x_lo = h_wz.lo() + float(k)*st x_hi = x_lo + st x = 0.5*(x_lo + x_hi) d = h_wz[k] # data from bin with index k y = d[0] / st # first part of bin is collected weights y = y * norm X.append(x) Y.append(y) W.append(st) sum += y*st print("PDF normalization: {0}".format(sum)) p1 = plt.bar(X, Y, W, color='g') plt.xlabel('WZ') plt.ylabel('PDF of the photons') plt.title('Angular Z distribution') plt.grid(True); plt.tick_params(axis='x', direction='out') plt.tick_params(axis='y', direction='out') plt.show() X = [] Y = [] W = [] norm = 1.0/h_wx.integral() sum = 0.0 st = h_wx.step() for k in range (0, h_wx.size()): x_lo = h_wx.lo() + float(k)*st x_hi = x_lo + st x = 0.5*(x_lo + x_hi) d = h_wx[k] # data from bin with index k y = d[0] / st # first part of bin is collected weights y = y * norm X.append(x) Y.append(y) W.append(st) sum += y*st print("PDF normalization: {0}".format(sum)) p1 = 
plt.bar(X, Y, W, color='g') plt.xlabel('WX') plt.ylabel('PDF of the photons') plt.title('Angular X distribution') plt.grid(True); plt.tick_params(axis='x', direction='out') plt.tick_params(axis='y', direction='out') plt.show() X = [] Y = [] W = [] norm = 1.0/h_wy.integral() sum = 0.0 st = h_wy.step() for k in range (0, h_wy.size()): x_lo = h_wy.lo() + float(k)*st x_hi = x_lo + st x = 0.5*(x_lo + x_hi) d = h_wy[k] # data from bin with index k y = d[0] / st # first part of bin is collected weights y = y * norm X.append(x) Y.append(y) W.append(st) sum += y*st print("PDF normalization: {0}".format(sum)) p1 = plt.bar(X, Y, W, color='g') plt.xlabel('WY') plt.ylabel('PDF of the photons') plt.title('Angular Y distribution') plt.grid(True); plt.tick_params(axis='x', direction='out') plt.tick_params(axis='y', direction='out') plt.show()
C25_GP3.ipynb
Iwan-Zotow/VV
mit
Integral 1 $$ I_1 = \int_0^\frac{\pi}{2} \sin^2{x} {dx} = \frac{\pi}{4} $$
def integrand(x, a): return np.sin(x)**2 def integral_approx(a): # Use the args keyword argument to feed extra arguments to your integrand I, e = integrate.quad(integrand, 0, np.pi/2, args=(a,)) return I def integral_exact(a): return np.pi/4 print("Numerical: ", integral_approx(1.0)) print("Exact : ", integral_exact(1.0)) assert True # leave this cell to grade the above integral
assignments/assignment09/IntegrationEx02.ipynb
LimeeZ/phys292-2015-work
mit
Integral 2 $$ I_2 = \int_0^\infty \frac{x\sin{mx}}{x^2 + a^2} {dx} = \frac{\pi}{2} e^{-ma} $$
def integrand(x, m, a): return (x*np.sin((m*x)))/(x**2 + a**2) def integral_approx(m, a): # Use the args keyword argument to feed extra arguments to your integrand I, e = integrate.quad(integrand, 0, np.inf, args=(m,a,)) return I def integral_exact(m, a): return ((np.pi)*.5) *(np.exp(-1*m*a)) print("Numerical: ", integral_approx(1.0,1.0)) print("Exact : ", integral_exact(1.0,1.0)) assert True # leave this cell to grade the above integral
assignments/assignment09/IntegrationEx02.ipynb
LimeeZ/phys292-2015-work
mit
Integral 3 $$ I_3 = \int_0^\infty \sin{ax^2} {dx} = \frac{1}{2} \sqrt{\frac{\pi}{2a}} $$
def integrand(x, a): return np.sin(a*(x**2)) def integral_approx(a): # Use the args keyword argument to feed extra arguments to your integrand # (the integration variable must be the first argument of the integrand; # the oscillatory integrand over an infinite range converges slowly, so quad may warn) I, e = integrate.quad(integrand, 0, np.inf, args=(a,)) return I def integral_exact(a): return 0.5*np.sqrt(np.pi/(2*a)) print("Numerical: ", integral_approx(1.0)) print("Exact    : ", integral_exact(1.0)) assert True # leave this cell to grade the above integral
assignments/assignment09/IntegrationEx02.ipynb
LimeeZ/phys292-2015-work
mit
Integral 4 $$ I_4 = \int_0^\infty e^{-ax} \cos{bx}{dx}= \frac{a}{a^2+b^2}$$
def integrand(x, a, b): # the integration variable must be the first argument of the integrand return np.exp(-a*x) * np.cos(b*x) def integral_approx(a, b): # Use the args keyword argument to feed extra arguments to your integrand I, e = integrate.quad(integrand, 0, np.inf, args=(a, b)) return I def integral_exact(a, b): return a / (a**2 + b**2) print("Numerical: ", integral_approx(1.0,1.0)) print("Exact    : ", integral_exact(1.0,1.0)) assert True # leave this cell to grade the above integral
assignments/assignment09/IntegrationEx02.ipynb
LimeeZ/phys292-2015-work
mit
Integral 5 $$ I_5 = \int_0^\infty e^{-ax^{2}} {dx}= \frac{1}{2} \sqrt{\frac{\pi}{a}}$$
def integrand(x, a): return np.exp((-a)*(x**2)) def integral_approx(a): # Use the args keyword argument to feed extra arguments to your integrand I, e= integrate.quad(integrand, 0, np.inf, args=(a,)) return I def integral_exact(a): return 0.5*(np.sqrt((np.pi)/a)) print("Numerical: ", integral_approx(1.0)) print("Exact : ", integral_exact(1.0)) assert True # leave this cell to grade the above integral
assignments/assignment09/IntegrationEx02.ipynb
LimeeZ/phys292-2015-work
mit
Naive implementation
def convolution_naive(image, kernel): output = np.zeros((image.shape[0] - kernel.shape[0] + 1, image.shape[1] - kernel.shape[1] + 1)) # walk over output rows for i in range(output.shape[0]): # walk over output columns for j in range(output.shape[1]): # walk over filter/kernel rows for k in range(kernel.shape[0]): # walk over filter/kernel columns for l in range(kernel.shape[1]): image_x = i + k image_y = j + l output[i, j] += image[image_x, image_y] * kernel[k, l] return output convolution_naive(image, kernel) convolution_naive(image_example, kernel)
deep-learning/Convolution.ipynb
amirziai/learning
mit
Tests the implementation
for _ in range(1000): shape = np.random.randint(5, 20, (1, 2)) image_test = np.random.randint(1, 10, shape[0]) actual = convolution_naive(image_test, kernel) expected = convolve(image_test, kernel_flipped, mode='valid') np.testing.assert_equal(actual, expected) print('All tests passed')
deep-learning/Convolution.ipynb
amirziai/learning
mit
Vertical edge detection
image = np.zeros((6, 6)) image[:, 0:3] = 10 image convolution_naive(image, kernel)
deep-learning/Convolution.ipynb
amirziai/learning
mit
Horizontal edge detection
kernel_horizontal = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]]) kernel_horizontal image = np.array([ [10, 10, 10, 0, 0, 0], [10, 10, 10, 0, 0, 0], [10, 10, 10, 0, 0, 0], [0, 0, 0, 10, 10, 10], [0, 0, 0, 10, 10, 10], [0, 0, 0, 10, 10, 10], ]) convolution_naive(image, kernel_horizontal) convolution_naive(image, kernel)
deep-learning/Convolution.ipynb
amirziai/learning
mit
Sobel filter
kernel_sobel = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]]) kernel_sobel
deep-learning/Convolution.ipynb
amirziai/learning
mit
Padding With padding p, the output size is n + 2 * p - f + 1. For 'same' padding the output size equals the input size n, which gives p = (f - 1) / 2.
def padding_for_same(filter_size): return (filter_size - 1) / 2 padding_for_same(3) # even-shaped filters are problematic # odd-shaped is the convention padding_for_same(4)
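The output-size formula can be wrapped in a small helper for quick checks (a sketch; `conv_output_size` is not part of the notebook):

```python
def conv_output_size(n, f, p=0, s=1):
    """Output size of a convolution: floor((n + 2p - f) / s) + 1."""
    return (n + 2 * p - f) // s + 1

print(conv_output_size(6, 3))        # valid convolution: 4
print(conv_output_size(6, 3, p=1))   # 'same' padding with f=3: 6
print(conv_output_size(7, 3, s=2))   # stride 2: 3
```

The `s` parameter anticipates the strided convolutions below; with `s=1` it reduces to the formula above.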
deep-learning/Convolution.ipynb
amirziai/learning
mit
Strided convolutions
def convolution_strided_naive(image, kernel, stride): output_x = int((image.shape[0] - kernel.shape[0]) / stride + 1) output_y = int((image.shape[1] - kernel.shape[1]) / stride + 1) output = np.zeros((output_x, output_y)) # walk over output rows for i in range(output.shape[0]): # walk over output columns for j in range(output.shape[1]): # walk over filter/kernel rows for k in range(kernel.shape[0]): # walk over filter/kernel columns for l in range(kernel.shape[1]): image_x = i * stride + k image_y = j * stride + l output[i, j] += image[image_x, image_y] * kernel[k, l] return output image = np.array([ [2, 3, 7, 4, 6, 2, 9], [6, 6, 9, 8, 7, 4, 3], [3, 4, 8, 3, 8, 9, 7], [7, 8, 3, 6, 6, 3, 4], [4, 2, 1, 8, 3, 4, 6], [3, 2, 4, 1, 9, 8, 3], [0, 1, 3, 9, 2, 1, 4] ]) kernel = np.array([[3, 4, 4], [1, 0, 2], [-1, 0, 3]]) convolution_strided_naive(image, kernel, stride=2)
deep-learning/Convolution.ipynb
amirziai/learning
mit
Flipping the kernel ⚠ What we've done so far is technically called cross-correlation, not convolution. So what is called convolution in most of the deep learning literature is actually cross-correlation. https://www.coursera.org/learn/convolutional-neural-networks/lecture/wfUhx/strided-convolutions
def mirror(x): # flip both axes (a 180-degree rotation), as true convolution requires return np.flipud(np.fliplr(x)) kernel = np.array([ [3, 4, 5], [1, 0, 2], [-1, 9, 7] ]) mirror(kernel)
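A quick numerical check of the relationship, assuming `scipy` is available: cross-correlating with a kernel gives the same result as convolving with that kernel rotated 180°.

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

img = np.arange(25, dtype=float).reshape(5, 5)
k = np.array([[3., 4., 5.],
              [1., 0., 2.],
              [-1., 9., 7.]])

# convolve2d flips the kernel internally, so pre-flipping it
# recovers plain cross-correlation
cc = correlate2d(img, k, mode='valid')
cv = convolve2d(img, np.flipud(np.fliplr(k)), mode='valid')
assert np.allclose(cc, cv)
```

For a symmetric kernel the flip is a no-op, which is why the distinction is easy to miss with filters like the Sobel kernel's vertical/horizontal variants.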
deep-learning/Convolution.ipynb
amirziai/learning
mit
The reason this is done in signal processing and some branches of math is that we get associativity properties, which are lost with our previous notation. However, deep learning practitioners do not care about associativity (skipping the flip keeps code simple and achieves the same result). Convolution over volumes https://www.coursera.org/learn/convolutional-neural-networks/lecture/ctQZz/convolutions-over-volume
def convolution_3d_naive(image, kernel): assert image.ndim == 3, "we're 3d now bro" assert kernel.ndim == 3, "we're 3d now bro" output = np.zeros((image.shape[0] - kernel.shape[0] + 1, image.shape[1] - kernel.shape[1] + 1)) # walk over output rows for i in range(output.shape[0]): # walk over output columns for j in range(output.shape[1]): # walk over filter/kernel rows for k in range(kernel.shape[0]): # walk over filter/kernel columns for l in range(kernel.shape[1]): # walk over filter channels/depth for m in range(kernel.shape[2]): image_x = i + k image_y = j + l value_from_image = image[image_x, image_y, m] output[i, j] += value_from_image * kernel[k, l, m] return output image = np.random.randint(1, 10, (6, 6, 3)) kernel = np.random.randint(1, 10, (3, 3, 3)) convolution_3d_naive(image, kernel)
deep-learning/Convolution.ipynb
amirziai/learning
mit
The subchains are calculated over and over again, so we use caching to avoid recalculating them. collatz is changed to be recursive to take advantage of lru_cache. Using a cache was inspired by Kristian.
@lru_cache(maxsize=None) def collatz(n): x = [n] if n > 1: if n % 2 == 0: n //= 2 else: n = 3 * n + 1 x.extend(collatz(n)) return x list(gen_collatz(13)) collatz(13) def foo(n): max_chain_len = 0 max_chain_n = 0 for i in range(1, n): if len(collatz(i)) > max_chain_len: max_chain_len = len(collatz(i)) max_chain_n = i return max_chain_n, max_chain_len n = 10**6 %timeit foo(n) assert known_good_output == foo(n)
euler-014-longest-collatz-sequence-20161214.ipynb
james-prior/euler
mit
Only the lengths of the Collatz chains are wanted, not the numbers in the Collatz chains, so len_collatz is created to return only the lengths of the Collatz chains. len_collatz is also recursive to take advantage of lru_cache. len_collatz is faster than collatz, but not as general as collatz.
@lru_cache(maxsize=None) def len_collatz(n): len_chain = 1 if n > 1: if n % 2 == 0: n //= 2 else: n = 3 * n + 1 len_chain += len_collatz(n) return len_chain len_collatz(13) def foo(n): max_chain_len = 0 max_chain_n = 0 for i in range(1, n): if len_collatz(i) > max_chain_len: max_chain_len = len_collatz(i) max_chain_n = i return max_chain_n, max_chain_len n = 10**6 %timeit foo(n) assert known_good_output == foo(n)
euler-014-longest-collatz-sequence-20161214.ipynb
james-prior/euler
mit
We often use $\boldsymbol{X}$ to represent a dataset of input vectors. The $i^{th}$ input vector in $X$ is notated $X_i$, though oftentimes when iterating through our dataset (like in a summation) we will call our datapoints $x \in X$ and write the $i^{th}$ input vector as $x^{(i)}$. The $j^{th}$ component of the $i^{th}$ input vector is written $x^{(i)}_j$. The number of input vectors, samples, data points, instances, etc., in $X$ is $\boldsymbol{m}$. The dimensionality (number of features) of each data point is $\boldsymbol{n}$. We use this notation when talking about datasets in general (like in proofs). This should make some sense if you've taken linear algebra - a matrix is said to be $m \times n$ if it has $m$ rows and $n$ columns. $X$ is a matrix that has $m$ samples (rows) and $n$ features (columns). $\boldsymbol{y}$ is the vector containing the labels (or classes) of the $x \in X$.
zeroes = [X[i] for i in range(len(y)) if y[i] == 0] # all 64-dim lists with label '0' ones = [X[i] for i in range(len(y)) if y[i] == 1] # all 64-dim lists with label '1' both = zeroes + ones labels = [0] * len(zeroes) + [1] * len(ones)
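The notation above can be made concrete with a toy array (not the digits data):

```python
import numpy as np

# m = 3 samples, n = 2 features
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
y = np.array([0, 1, 0])

m, n = X.shape       # m rows (samples), n columns (features)
x_i = X[1]           # the i-th input vector, here i = 1
x_i_j = X[1, 0]      # its j-th component, here j = 0
print(m, n, x_i_j)   # → 3 2 3.0
```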
2016-2017.Meetings/fall/02.hello_machine2.ipynb
nelson-liu/lectures
mit
Supervised learning is an area of study within machine learning that entails passing an input vector into a model and outputting a label. Supervised learning is further broken down into classification tasks, in which the label $y$ is taken from some finite set of objects like {red, green, blue} or {0, 1, 2, ..., 9} and regression tasks, in which the label $y$ is taken from an infinite set, usually the set of real numbers $\mathbb{R}$. We do this by training our model on $X$, given the correct labels $y$. When we train our model, our model is learning a function that maps from input vectors $x$ to output labels $y$ - hence the name machine learning. Let's train a binary classifier that is able to correctly predict the label of the vectors in our two-label dataset both, using the class labels in labels. A binary classifier is to be contrasted with a multiclass classifier, which predicts a label within a set of more than two classes.
from sklearn.linear_model import LogisticRegression clf = LogisticRegression() # clf is code speak for 'classifier' clf.fit(X=both, y=labels)
2016-2017.Meetings/fall/02.hello_machine2.ipynb
nelson-liu/lectures
mit
A lot just happened in those three short lines. Let's step through it line by line: from sklearn.linear_model import LogisticRegression From the sklearn (scikit-learn) linear_model module, we import a classifier called Logistic Regression. Linear models are models (or predictors) that attempt to separate vector inputs of different classes using a linear function. Geometrically, this means our model tries to draw a separating hyperplane between classes, as opposed to a curved (non-linear) manifold. Logistic regression is a classifier, meaning it can only predict categorical labels, but it is not limited to binary classification, as we'll see later. clf = LogisticRegression() LogisticRegression is just a Python object, so we instantiate it and assign it to the variable clf. clf.fit(both, labels) fit in sklearn is the name of the method call that trains a model on a dataset $X$ given the correct labels $y$. Both LogisticRegression and fit have additional parameters for fine-tuning the training process, but the above calls demonstrate model training at its simplest. So now what? We have a classifier that we can pass an unlabeled input vector to, and have it predict whether that input represents a one or a zero - but in doing so we have run into a big problem. A Really Big Problem A natural question to ask about our predictor is "how accurate is it?". We could pass in each $x \in X$ to our predictor, have it predict the label, and compare its prediction to the answer in labels. But this would give us a false sense of confidence in how accurate our predictor actually is. Because we trained our predictor on $X$, we have effectively already given it the answer key to the test. This is not a good way to test how well our predictor can predict never before seen data. To get around this problem, we split our dataset into a training set and a test set before training our model.
This way we can train the model on the training set, then test how well it extrapolates to never before seen data on the test set. This is such a common task when training models that scikit-learn has a built-in function for sampling a test/training set.
from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(both, labels, test_size=0.3) clf = LogisticRegression() clf.fit(X_train, y_train) clf.score(X_test, y_test)
2016-2017.Meetings/fall/02.hello_machine2.ipynb
nelson-liu/lectures
mit
Amazing! Our predictor was able to predict the labels of the test set with 100% accuracy! Okay, maybe not that amazing. Remember when we projected the ones and zeroes into $\mathbb{R}^2$ in our PCA notebook? They looked like they might be linearly separable. And that was only in two dimensions. Our classifier can take advantage of the full 64 dimensions of our data to make its predictions. Before we move on to training a classifier on the entire digits dataset, here are a few more ways to get a sense for how well our predictor is doing its job.
clf.predict(X_test)
2016-2017.Meetings/fall/02.hello_machine2.ipynb
nelson-liu/lectures
mit
clf.predict tells us the actual predictions made on the test set.
def print_proba_table(prob_list, stride=1): mnist_classes = [i for i in range(len(prob_list[0]))] print("Class:", *mnist_classes, sep="\t") print("index", *["---" for i in range(len(mnist_classes))], sep="\t") counter = 0 for prob in prob_list[::stride]: print(counter*stride, *[round(prob[i], 3) for i in range(len(mnist_classes))], sep="\t") counter += 1 print_proba_table(clf.predict_proba(X_test), stride=4)
2016-2017.Meetings/fall/02.hello_machine2.ipynb
nelson-liu/lectures
mit
clf.predict_proba tells us how confident our predictor is for each label that that is the correct label for the input. The above table, along with the score, tells us that this was a very easy classification task for our predictor. How effective do you think logistic regression will be on the entire digits dataset?
from sklearn.decomposition import PCA pca = PCA(2) Xproj = pca.fit_transform(X) plt.scatter(Xproj.T[0], Xproj.T[1], c=y, alpha=0.5)
2016-2017.Meetings/fall/02.hello_machine2.ipynb
nelson-liu/lectures
mit
Here's a 2D projection of the entire digits dataset using PCA, yikes! By the way, PCA is a linear dimensionality reduction technique, so it gives us a rough idea of what a linear classifier like logistic regression has to deal with. There also exist non-linear dimensionality reduction techniques, which let you project on non-linear manifolds like spheres, instead of linear manifolds like hyperplanes.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3) clf = LogisticRegression() clf.fit(X_train, y_train) clf.score(X_test, y_test)
2016-2017.Meetings/fall/02.hello_machine2.ipynb
nelson-liu/lectures
mit
Not so easy now, is it? But is 94.8% accuracy good "enough"? Depends on your application.
print_proba_table(clf.predict_proba(X_test), stride=10)
2016-2017.Meetings/fall/02.hello_machine2.ipynb
nelson-liu/lectures
mit
From this table we can tell that for a good portion of our digits our classifier had very high confidence in their class label, even with 10 different classes to choose from. But some digits were able to steal at least a tenth of a percent of confidence from our predictor across four different digits. And from clf.score we know that our predictor got roughly one digit wrong for every 20 digits predicted. We can look at some of the digits where our predictor had high uncertainty. $\boldsymbol{\hat{y}}$ is the prediction our model made and $y$ is the actual label. Would you (a human) have done better than logistic regression?
uncertain_indices = [] prob = clf.predict_proba(X_test) for i in range(len(prob)): # number of classes with > 0.45 confidence contender_count = sum([1 if p > 0.45 else 0 for p in prob[i]]) if contender_count == 2: uncertain_indices.append(i) f, ax = plt.subplots(5, 3, sharex=False, sharey=True) f.set_size_inches(9, 15) predictions = clf.predict(X_test) for i in range(5): for j in range(3): ax[i, j].set_xlabel(r"$\^y = $"+str(predictions[uncertain_indices[3*i + j]]) + r", $y = $"+str(y_test[uncertain_indices[3*i+j]]), size='large') ax[i, j].imshow(X_test[uncertain_indices[3*i + j]].reshape(8, 8), cmap='gray', interpolation='none') f.tight_layout()
2016-2017.Meetings/fall/02.hello_machine2.ipynb
nelson-liu/lectures
mit
Generative Model Hyperparameters Below we use RandomSearch to search over a random set (with n=5 below) of hyperparameter configurations, validating on the development set and saving the best model. Note we can also use the GridSearch class to search over all combinations of hyperparameters. We start by defining the model, then the parameter ranges to search over, then the searcher:
from snorkel.learning import GenerativeModel from snorkel.learning import RandomSearch param_ranges = { 'step_size' : [1e-2, 1e-3, 1e-4, 1e-5, 1e-6], 'decay' : [1.0, 0.95, 0.9], 'epochs' : [20, 50, 100] } searcher = RandomSearch(GenerativeModel, param_ranges, L_train, n=5)
tutorials/advanced/Hyperparameter_Search.ipynb
HazyResearch/snorkel
apache-2.0
Finally we fit to the dev set--i.e. we execute the search, scoring and selecting the best configuration according to the dev set score:
%%time gen_model, run_stats = searcher.fit(L_dev, L_gold_dev) run_stats
tutorials/advanced/Hyperparameter_Search.ipynb
HazyResearch/snorkel
apache-2.0
Now we can apply the best learned model. Note that this is just a toy example: the above search may not be that interesting (i.e. all models may get perfect scores). But in practice, hyperparameter tuning is extremely important, especially for the end discriminative model (see below).
# Note that in other tutorials, this is often named `train_marginals`! Y_train = gen_model.marginals(L_train)
tutorials/advanced/Hyperparameter_Search.ipynb
HazyResearch/snorkel
apache-2.0
Making new Layers and Models via subclassing <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/guide/keras/custom_layers_and_models"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/snapshot-keras/site/en/guide/keras/custom_layers_and_models.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/keras-team/keras-io/blob/master/guides/making_new_layers_and_models_via_subclassing.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/keras/custom_layers_and_models.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Setup
import tensorflow as tf from tensorflow import keras
site/en-snapshot/guide/keras/custom_layers_and_models.ipynb
tensorflow/docs-l10n
apache-2.0
We will train the model with the following parameters
optimal_params = { 'iteration': [300] * 4, 'learning_rate': [0.04, 0.06, 10, 14], 'degree': [3, 1, 3, 1], 'scaled': [False] * 2 + [True] * 2 } optimal_params = OrderedDict(sorted(optimal_params.items(), key=lambda x: x[0]))
examples/experiments/freezeout/FreezeOut.ipynb
analysiscenter/dataset
apache-2.0
About parameters: Iteration - number of the last iteration for the model with FreezeOut; after this iteration, the learning rates in all layers will be zero. learning rate - initial learning rate for all layers. In the scaled method, it is divided by the number of the last iteration for that layer. degree - linear, square, or cube dependency for disabling layers. scaled - scaled (True) or unscaled (False) method of disabling layers. The pictures below show how the learning rate will change.
plt.style.use('seaborn-poster') plt.style.use('ggplot') iteration = 300 _, axarr = plt.subplots(2, 2) axarr = axarr.reshape(-1) for params in range(4): graph = [] for i in range(1, 6): default_learning = optimal_params['learning_rate'][params] last = int(iteration*(i/10 + 0.5) ** optimal_params['degree'][params]) if optimal_params['scaled'][params] == True: graph.append([0.5 * default_learning/last * (1 + np.cos(np.pi * i / last)) for i in range(2, last+1)]) else: graph.append([0.5 * default_learning * (1 + np.cos(np.pi * i / last)) for i in range(2, last+1)]) for i in range(len(graph)): axarr[params].set_title('Changing the value of learning rate with params: \n \ lr={} degree={} it={} scaled={}'.format(default_learning, optimal_params['degree'][params], \ 300, optimal_params['scaled'][params] )) axarr[params].plot(graph[i], label='{} layer'.format(i)) axarr[params].set_xlabel('Iteration', fontsize=15) axarr[params].set_ylabel('Learning rate', fontsize=15) axarr[params].legend(fontsize=12) plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=1.0)
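The schedule plotted above can be written as a standalone function. This is a sketch mirroring the plotting code; `freezeout_lr` and `layer_last_iteration` are hypothetical helpers, not part of the library:

```python
import math

def layer_last_iteration(layer_frac, total_iterations, degree):
    """Iteration at which a layer freezes: t_i = T * (0.5 + i/10)**degree."""
    return int(total_iterations * (0.5 + layer_frac) ** degree)

def freezeout_lr(t, t_last, lr0, scaled=False):
    """Cosine-annealed learning rate for one layer; 0 once the layer freezes."""
    if t >= t_last:
        return 0.0
    base = lr0 / t_last if scaled else lr0
    return 0.5 * base * (1.0 + math.cos(math.pi * t / t_last))

t_last = layer_last_iteration(0.1, 300, 3)  # first layer (i=1), degree 3
print(t_last, freezeout_lr(t_last, t_last, 0.04))  # → 64 0.0
```

Earlier layers get a smaller `t_last`, so they anneal to zero sooner; the scaled variant additionally shrinks the starting rate by `1/t_last`, which matches the smaller peaks in the scaled plots.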
examples/experiments/freezeout/FreezeOut.ipynb
analysiscenter/dataset
apache-2.0
We'll compare a ResNet model with FreezeOut against a classic ResNet model. First we load the MNIST data
src = './../MNIST_data' with open(os.path.join(src, 'mnist_pics.blk'), 'rb') as file: images = blosc.unpack_array(file.read()).reshape(-1, 28, 28) with open(os.path.join(src, 'mnist_labels.blk'), 'rb') as file: labels = blosc.unpack_array(file.read()) global_freeze_loss = [] pipelines = []
examples/experiments/freezeout/FreezeOut.ipynb
analysiscenter/dataset
apache-2.0
Then we create the dataset and pipeline
res_loss=[] ix = DatasetIndex(range(50000)) sess = tf.Session() sess.run(tf.global_variables_initializer()) test_dset = Dataset(ix, ResBatch) test_pipeline = (test_dset.p .train_res(res_loss, images[:50000], labels[:50000])) for i in tqn(range(500)): test_pipeline.next_batch(300,n_epochs=None, shuffle=2)
examples/experiments/freezeout/FreezeOut.ipynb
analysiscenter/dataset
apache-2.0
The config allows us to easily change the configuration of the model
params_list = pd.DataFrame(optimal_params).values for params in tqn(params_list): freeze_loss = [] config = { 'freeznet':{'iteration': params[1], 'degree': params[0], 'learning_rate': params[2], 'scaled': params[3]} } dataset = Dataset(ix, batch_class=ResBatch) train_pipeline = (dataset .pipeline(config=config) .train_freez(freeze_loss, images[:50000], labels[:50000])) for i in tqn(range(1, 501)): train_pipeline.next_batch(300, n_epochs=None, shuffle=2) global_freeze_loss.append(freeze_loss)
examples/experiments/freezeout/FreezeOut.ipynb
analysiscenter/dataset
apache-2.0
Plots below show the losses of the models with different learning rate parameters
_, ax = plt.subplots(2,2) ax = ax.reshape(-1) for i in range(4): utils.ax_draw(global_freeze_loss[i][:300], res_loss[:300], params_list[i], ax[i]) plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=1.0)
examples/experiments/freezeout/FreezeOut.ipynb
analysiscenter/dataset
apache-2.0
You can see that models using the scaled method are more unstable. The spikes in the plots can be explained by large momentum. It is easy to see that the loss of the model with FreezeOut decreases faster than the loss of the classic model. If we continue to train the models, you can see the effect of the new learning rate (1e-2, the same for all layers) on the loss value.
_, ax = plt.subplots(2,2) ax = ax.reshape(-1) for i in range(4): utils.ax_draw(global_freeze_loss[i], res_loss, params_list[i], ax[i]) plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=1.0)
examples/experiments/freezeout/FreezeOut.ipynb
analysiscenter/dataset
apache-2.0
It looks like the score stabilizes after about 6 features, reaches a max at 16, then begins to taper off after about 70 features. We will save the top 45 and the top 75.
# save results to dataframe selected_summary = pd.DataFrame.from_dict(read_dictionary).T selected_summary['index'] = selected_summary.index selected_summary.sort_values(by='avg_score', ascending=0) # save dataframe selected_summary.to_csv('SFS_RF_selected_features_summary.csv', sep=',', header=True, index = False) # re load saved dataframe and sort by score filename = 'SFS_RF_selected_features_summary.csv' selected_summary = pd.read_csv(filename) selected_summary = selected_summary.set_index(['index']) selected_summary.sort_values(by='avg_score', ascending=0).head() # feature selection with highest score selected_summary.iloc[44]['feature_idx'] slct = np.array([257, 3, 4, 6, 7, 8, 10, 12, 16, 273, 146, 19, 26, 27, 284, 285, 30, 34, 163, 1, 42, 179, 155, 181, 184, 58, 315, 190, 320, 193, 194, 203, 290, 80, 210, 35, 84, 90, 97, 18, 241, 372, 119, 120, 126]) slct # isolate and save selected features filename = 'engineered_features_validation_set2.csv' training_data = pd.read_csv(filename) X = training_data.drop(['Formation', 'Well Name'], axis=1) Xs = X.iloc[:, slct] Xs = pd.concat([training_data[['Depth', 'Well Name', 'Formation']], Xs], axis = 1) print np.shape(Xs), list(Xs) Xs.to_csv('SFS_top45_selected_engineered_features_validation_set.csv', sep=',', index=False) # feature selection with highest score selected_summary.iloc[74]['feature_idx'] slct = np.array([257, 3, 4, 5, 6, 7, 8, 265, 10, 12, 13, 16, 273, 18, 19, 26, 27, 284, 285, 30, 34, 35, 1, 42, 304, 309, 313, 58, 315, 319, 320, 75, 80, 338, 84, 341, 89, 90, 92, 97, 101, 102, 110, 372, 119, 120, 122, 124, 126, 127, 138, 139, 146, 155, 163, 165, 167, 171, 177, 179, 180, 181, 184, 190, 193, 194, 198, 203, 290, 210, 211, 225, 241, 249, 253]) slct # isolate and save selected features filename = 'engineered_features_validation_set2.csv' training_data = pd.read_csv(filename) X = training_data.drop(['Formation', 'Well Name'], axis=1) Xs = X.iloc[:, slct] Xs = pd.concat([training_data[['Depth', 'Well Name', 
'Formation']], Xs], axis = 1) print np.shape(Xs), list(Xs) Xs.to_csv('SFS_top75_selected_engineered_features_validation_set.csv', sep=',', index=False)
MandMs/03_Facies_classification-MandMs_SFS_v2-validation_set.ipynb
seg/2016-ml-contest
apache-2.0
The DualMap class accepts the same arguments as the normal Map class. Except for these: 'width', 'height', 'left', 'top', 'position'. In the following example we create a DualMap, add layer controls and then show the map. Try panning and zooming to check that both maps are synchronized.
m = folium.plugins.DualMap(location=(52.1, 5.1), zoom_start=8) m
examples/plugin-DualMap.ipynb
python-visualization/folium
mit
Let's look at the response from a classic web page
HTML('<iframe src="https://developer.github.com/v3/" width="700" height="400"></iframe>')
notebooks/070-AEMET_APIREST.ipynb
CAChemE/curso-python-datos
bsd-3-clause
Let's look at the response from a web API
HTML(url="https://api.github.com/")
notebooks/070-AEMET_APIREST.ipynb
CAChemE/curso-python-datos
bsd-3-clause
In short, when we use a WEB API, the following are involved: Client: makes the request through the URL Server: returns a response to the received URL Protocol: established in the API specification How to use an API? Using WEB APIs allows us to obtain information that would be costly to obtain and process in any other way (even in real time). Sometimes the API is public and anyone can make a request, but in other cases you need an api key that identifies you. Therefore, the process to obtain data is usually: Check whether a REST API exists for those data Obtain an api key if one is needed Read the API specification to learn how to compose the URL and how to interpret the response Use the API, in our case from Python. Example: downloading data from AEMET This is the main page of its API: https://opendata.aemet.es/centrodedescargas/inicio Here we can find: general information, the place to obtain our API key, an interface for accessing the data aimed at the general public
HTML('<iframe src="https://opendata.aemet.es/centrodedescargas/inicio" width="1000" height="400"></iframe>')
notebooks/070-AEMET_APIREST.ipynb
CAChemE/curso-python-datos
bsd-3-clause
The requests module. Although other methods and libraries exist for working with HTTP in Python, the requests module is specifically designed for working with web APIs. As always, the first step is to import it:
import requests
notebooks/070-AEMET_APIREST.ipynb
CAChemE/curso-python-datos
bsd-3-clause
We will need to load our API key. The most convenient approach is to store it in a file and load it from there, so let's create a function to read it:
# preserve
def load_api_key(file):
    """Returns the contents in the file
    without the final line break
    """
    with open(file, 'r') as f:
        api_key = f.read().rstrip()
    return api_key

# load the api key
api_key = load_api_key("../../apikey-aemet.txt")
notebooks/070-AEMET_APIREST.ipynb
CAChemE/curso-python-datos
bsd-3-clause
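As an alternative to a file on disk, an API key can also be kept in an environment variable, a common pattern for keeping secrets out of source control. A minimal sketch, where the variable name `AEMET_API_KEY` and the helper `load_api_key_from_env` are only illustrations, not part of the notebook above:

```python
import os

def load_api_key_from_env(var_name="AEMET_API_KEY"):
    """Return the API key stored in an environment variable,
    or None if the variable is not set."""
    return os.environ.get(var_name)

# Quick demonstration with a fake value (not a real key):
os.environ["AEMET_API_KEY"] = "not-a-real-key"
print(load_api_key_from_env())
```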
We need to know the URL to which we are going to make the request:
# Set the url and the parameters
# For the coastal forecast for Asturias, Cantabria and the Basque Country we must use:
# https://opendata.aemet.es/opendata/api/prediccion/maritima/costera/costa/41
url = "https://opendata.aemet.es/opendata/api/prediccion/maritima/costera/costa/41"

querystring = {"api_key": api_key}
notebooks/070-AEMET_APIREST.ipynb
CAChemE/curso-python-datos
bsd-3-clause
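Under the hood, `requests` appends the `params` dictionary to the URL as a query string. A sketch of the equivalent with the standard library, shown only to illustrate what the final URL looks like (using a dummy key instead of a real one):

```python
from urllib.parse import urlencode

url = "https://opendata.aemet.es/opendata/api/prediccion/maritima/costera/costa/41"
querystring = {"api_key": "dummy-key"}

# Build the same URL that requests would send
full_url = url + "?" + urlencode(querystring)
print(full_url)
```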
Finally, we send the request:
# Send the request
response = requests.get(url, params=querystring, verify=False)
notebooks/070-AEMET_APIREST.ipynb
CAChemE/curso-python-datos
bsd-3-clause
Checking the response:
# Look at the response
response
notebooks/070-AEMET_APIREST.ipynb
CAChemE/curso-python-datos
bsd-3-clause
We can see that we got a response with code 200, which in the usual REST API convention means that everything went well. In fact, it is a good idea to check that everything went well before doing anything else:
# status code
response.status_code == requests.codes.ok
notebooks/070-AEMET_APIREST.ipynb
CAChemE/curso-python-datos
bsd-3-clause
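The numeric values behind `requests.codes` are the standard HTTP status codes. The standard library lists the same values in `http.HTTPStatus`, which can be a handy reference when reading responses:

```python
from http import HTTPStatus

# 200 means the request succeeded
print(int(HTTPStatus.OK), HTTPStatus.OK.phrase)
# 404 means the resource was not found
print(int(HTTPStatus.NOT_FOUND), HTTPStatus.NOT_FOUND.phrase)
```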
But... where is our data?
response.content
notebooks/070-AEMET_APIREST.ipynb
CAChemE/curso-python-datos
bsd-3-clause
This looks like JSON!
content = response.json()
content
notebooks/070-AEMET_APIREST.ipynb
CAChemE/curso-python-datos
bsd-3-clause
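`response.json()` is essentially a shortcut for decoding the response body with the standard `json` module. A sketch with a hard-coded body shaped like the AEMET reply (the field values here are made up for illustration):

```python
import json

# A body with the same shape as the AEMET reply (made-up values)
body = b'{"descripcion": "exito", "estado": 200, "datos": "https://example.invalid/datos", "metadatos": "https://example.invalid/metadatos"}'

content = json.loads(body)
print(content["estado"])
print(sorted(content.keys()))
```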
Indeed, most web APIs return JSON or XML. But, once again... where is our data?
r_meta = requests.get(content['metadatos'], verify=False)
r_data = requests.get(content['datos'], verify=False)

if r_meta.status_code == requests.codes.ok:
    metadata = r_meta.json()

if r_data.status_code == requests.codes.ok:
    data = r_data.json()

print(metadata[0].keys())
print(data[0].keys())

metadata[0]['periodicidad']
data[0]['prediccion']
notebooks/070-AEMET_APIREST.ipynb
CAChemE/curso-python-datos
bsd-3-clause
In short, we could bring all of the above together as:
# preserve
from warnings import warn

def get_prediction_for_cantabria(api_key):
    url = "https://opendata.aemet.es/opendata/api/prediccion/maritima/costera/costa/41"
    querystring = {"api_key": api_key}
    response = requests.get(url, params=querystring, verify=False)
    prediction = None
    if response.status_code == requests.codes.ok:
        content = response.json()
        r_data = requests.get(content['datos'], verify=False)
        if r_data.status_code == requests.codes.ok:
            data = r_data.json()
            prediction = data[0]['prediccion']
    elif response.status_code == requests.codes.TOO_MANY_REQUESTS:
        warn('Too many requests')
    elif response.status_code == requests.codes.NOT_FOUND:
        warn('No data for the response')
    else:
        warn('Code error {}'.format(response.status_code))
    return prediction
notebooks/070-AEMET_APIREST.ipynb
CAChemE/curso-python-datos
bsd-3-clause
You can also write a program into a file and then execute it like so: MacBook-Pro-2:~ tla$ cat Documents/test.py print("Hello, this is a test and it works!") MacBook-Pro-2:~ tla$ python Documents/test.py Hello, this is a test and it works! As we will learn later, it is possible to give more arguments at the command line, which your Python program can use to do other things. Usually if you are given a Python program by someone else, this is how you will run it. You can also run Python in a notebook like this one. PS C:\Users\Tara L Andrews&gt; jupyter notebook [I 08:33:23.342 NotebookApp] Serving notebooks from local directory: C:\Users\Tara L Andrews [I 08:33:23.342 NotebookApp] 0 active kernels [I 08:33:23.342 NotebookApp] The Jupyter Notebook is running at: http://localhost:8888/ [I 08:33:23.342 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation). Your command prompt won't come back, but instead a web browser window will open and you will be able to make a new notebook document much like this one.
year = 2015
print("Let's party like it's", year)
unit1/Running_Python.ipynb
DiXiT-eu/collatex-tutorial
gpl-3.0
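The command-line arguments mentioned above reach a Python program through `sys.argv`, a list whose first element is the script name. A minimal sketch; here we set the list by hand so it also runs inside a notebook, simulating `python test.py Hello world`:

```python
import sys

# Simulate running: python test.py Hello world
sys.argv = ["test.py", "Hello", "world"]

script_name = sys.argv[0]
arguments = sys.argv[1:]
print("Script:", script_name)
print("Arguments:", arguments)
```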
Just like the Python interactive shell (and unlike running a Python program from a file), the IPython notebook remembers everything you define until you restart it. This is why we call them "interactive" - you can define things, look at them, and then do things with them without having to start over every time.
year += 1
print("Now we are partying like it's", year)
unit1/Running_Python.ipynb
DiXiT-eu/collatex-tutorial
gpl-3.0