# 1D Reaction-Diffusion problem

## General formulation

This is a well-known problem for FEM with a Continuous Galerkin approximation: spurious oscillations arise when certain (not rare) conditions are met. The general problem consists of: find $u \in \mathcal{C}^2(\overline{\Omega})$ in a closed domain $\overline{\Omega}\subset \mathbb{R}^n\,(n=1,2,3)$ such that:

\begin{equation}
\left\{
\begin{aligned}
- &k \Delta u + \sigma u = f(x), \quad \forall x \in \Omega \\
&\left. u \right|_{\partial \Omega} = g(x)
\end{aligned}
\right.
\end{equation}

where $\overline{\Omega} := \Omega \cup \partial \Omega$ denotes the closed domain (the closure of $\Omega$), $\Omega$ is the open domain and $\partial \Omega$ is the domain's boundary. Note that $\Omega \cap \partial \Omega = \emptyset$.

The solution is a scalar field $u: \overline{\Omega} \to \mathbb{R}$. We also have the "source term", a scalar field of the same form, $f: \overline{\Omega} \to \mathbb{R}$, but this function is given data for the (direct) problem. The function $g(x)$ prescribes the values on the boundary and is also given data.

The parameter $k$ (here a scalar field like $u$) is sometimes named the "diffusivity coefficient" related to the quantity $u$ in a physical context (transport of a quantity; look up Reynolds' Transport Theorem for further clarification). In some contexts, $k$ must be a rank-2 tensor, as in anisotropic porous media flows. Analogously, we have the parameter $\sigma: \overline{\Omega} \to \mathbb{R}$, denoted (in some contexts) the "reaction coefficient", related to the proportional rate at which $u$ changes due to generation/consumption. The parameters $k$ and $\sigma$ are given data for the (direct) problem too.

P.S.: I will not discuss ill-posed formulations, nor the other mathematical requirements needed to assert well-posedness, uniqueness and so on.
## 1D simplification

In some scenarios, the mathematical modeling can be simplified to the 1D case, which reads as follows: find $u \in \mathcal{C}^2$ such that:

\begin{equation}
\left\{
\begin{aligned}
- &k u''(x) + \sigma u(x) = f(x), \quad \forall x \in ((x_1, x_2) \subset \mathbb{R}) \\
&u(x_1) = u_1 \\
&u(x_2) = u_2
\end{aligned}
\right.
\end{equation}

where $u_1, u_2 \in \mathbb{R}$ are the prescribed boundary values, $u'(x) \equiv \dfrac{d u}{d x}$ and thus $u''(x) \equiv \dfrac{d^2 u}{d x^2}$. For the sake of simplicity, I will omit the argument of the function $u$. Such problems are known as Two-Point Boundary Value problems.

### Variational formulation and Galerkin approximation

One can find the derivation of the weak form elsewhere; I will just give its result below. Given the space of admissible solutions

\begin{equation}
\mathcal{U} (\overline{\Omega}) := \left\{ u \in H^1 (\overline{\Omega}) \left| \,u(x_1) = u_1, u(x_2) = u_2 \right. \right\}
\end{equation}

and the space of suitable variations

\begin{equation}
\mathcal{V} (\overline{\Omega}) := \left\{ v \in H^1 (\overline{\Omega}) \left| \,v(x_1) = v(x_2) = 0 \right. \right\}
\end{equation}

the discretization of the above spaces is performed by a Galerkin approximation with the Lagrangian function space $\mathbb{P}_p$ of order $p$ defined over $\overline{\Omega}$ (I use $p$ for the polynomial order to avoid clashing with the diffusivity $k$). Thus we define here

\begin{equation}
\mathcal{S}_h^p(\overline{\Omega}) := \left\{ \varphi_h \in \mathcal{C}(\overline{\Omega}): \left.\varphi_h\right|_{\Omega^e} \in \mathbb{P}_p(\Omega^e), \forall \Omega^e \in \mathcal{T}_h \right\}
\end{equation}

the space of piecewise Lagrange polynomials subject to a domain partition $\mathcal{T}_h := \cup_e \Omega^e \approx \overline{\Omega}$ with $\Omega^e \cap \Omega^{e'} = \emptyset$ for $e \neq e'$ (non-overlapping elements). Thus, with

\begin{equation}
\mathcal{U}_h := \mathcal{U} \cap \mathcal{S}_h^p \quad \text{and} \quad \mathcal{V}_h := \mathcal{V} \cap \mathcal{S}_h^p
\end{equation}

we have the discretized spaces.
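As a minimal illustration of the discrete space, the $p = 1$ (hat function) Lagrange basis on a reference element can be sketched in plain NumPy (the function names here are mine, for illustration only, not part of any FEM library):

```python
import numpy as np

# P1 (linear Lagrange) shape functions on the reference element [0, 1]:
# phi_0(xi) = 1 - xi, phi_1(xi) = xi.
def p1_shape(xi):
    return np.array([1.0 - xi, xi])

# Kronecker-delta property at the element nodes: phi_i(node_j) = delta_ij
assert np.allclose(p1_shape(0.0), [1.0, 0.0])
assert np.allclose(p1_shape(1.0), [0.0, 1.0])

# Partition of unity anywhere in the element: sum_i phi_i(xi) = 1
xi = np.linspace(0.0, 1.0, 11)
assert np.allclose(p1_shape(xi).sum(axis=0), 1.0)
```

A continuous piecewise-linear function in $\mathcal{S}_h^1$ is fully determined by its nodal values, which is why the discrete problem reduces to a linear system on the mesh nodes.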
Now we can state our "discretized" variational formulation as: find $u_h \in \mathcal{U}_h$ such that

\begin{equation}
a(u_h, v_h) = F(v_h), \quad \forall v_h \in \mathcal{V}_h
\end{equation}

where

\begin{align}
a(u, v) &:= \int_{\Omega} k u' v' dx + \int_{\Omega} \sigma u v dx\\
F(v) &:= \int_{\Omega} f v dx
\end{align}

and the domain is $\overline{\Omega} \equiv [x_1, x_2]$.

## A practical FEniCS example

Solving this in FEniCS is very straightforward compared to classical FEM codes. The FEniCS framework provides a high-level interface which eases most of the pain. Just for learning purposes, let the given data be set as:

\begin{align*}
&f(x) \equiv f = 1 & \\
&u_1 = u_2 = 0 & \\
&x_1 = 0, &x_2 = 1 \\
&k = 10^{-8}, &\sigma = 1
\end{align*}

With the source term $f(x) = 1$ and small values of $k$, say $k = 10^{-8}$, the exact solution is easily characterized:

\begin{equation}
u(x) \approx \left\{
\begin{aligned}
&1, &&\text{if } x \in (x_1, x_2) \\
&0, &&\text{if } x = x_1 \text{ or } x = x_2
\end{aligned}
\right.
\end{equation}

that is, $u \approx f/\sigma$ in the interior, with thin boundary layers bringing it to zero at the endpoints. So, how do we solve the problem with FEniCS? We will construct the procedure step by step.

* Importing all the libs we'll need.

```
from fenics import *  # all the FEniCS namespace (not a recommended Python practice)
import matplotlib.pyplot as plt
from matplotlib import rc
import numpy as np
```

* Defining the domain and related mesh

```
x_left = 0.0
x_right = 1.0
numel = 15
# IntervalMesh(num_of_elements, left_endpoint, right_endpoint)
mesh = IntervalMesh(numel, x_left, x_right)
mesh_ref = IntervalMesh(100, x_left, x_right)
```

* Setting the degree of the functions in the Continuous Galerkin method and the variation space.
Additionally, we define here a space to be employed in the projection of the reference analytical solution.

```
p = 1
V = FunctionSpace(mesh, "CG", p)  # "CG" stands for Continuous Galerkin, p is the degree
Vref = FunctionSpace(mesh_ref, "CG", p)
```

* Defining Python functions which mark the boundaries

```
def left(x, on_boundary):
    return on_boundary and near(x[0], x_left)

def right(x, on_boundary):
    return on_boundary and near(x[0], x_right)
```

* Setting the prescribed boundary values

```
u1, u2 = 0.0, 0.0
g_left = Constant(u1)
g_right = Constant(u2)
```

* Now we constrain the spaces with the Dirichlet conditions, as expected

```
bc_left = DirichletBC(V, g_left, left)
bc_right = DirichletBC(V, g_right, right)
dirichlet_condition = [bc_left, bc_right]
```

* Here we define the source function over the domain

```
f = Expression("1.0", domain=mesh, degree=p)
```

* Now comes the good part. We set up the Trial and Test functions from the admissible spaces

```
u_h = TrialFunction(V)
v_h = TestFunction(V)
```

* The parameters of the problem

```
k = Constant(1e-8)
sigma = Constant(1.0)
```

* Then we write the bilinear form, very much like it is written in mathematical form

```
a = k*inner(grad(v_h), grad(u_h))*dx + sigma*inner(u_h, v_h)*dx
```

* Also we define the associated linear form

```
L = f*v_h*dx
```

* Now we declare the solution variable

```
u_sol = Function(V)
```

* Thus we set the discretized variational problem to be solved

```
problem = LinearVariationalProblem(a, L, u_sol, dirichlet_condition)
```

* And we go further and solve it with no difficulties!

```
solver = LinearVariationalSolver(problem)
solver.solve()
```

* But we will need to check if the solution is "good enough". So we compare with the available exact solution

```
sol_exact = Expression(
    'x[0] <= 0 + tol || x[0] >= 1 - tol ? 0 : 1',
    degree=p+1,
    tol=DOLFIN_EPS
)
u_e = interpolate(sol_exact, Vref)
```

* Now, let's plot our results!
```
plot(u_sol, marker='x', label='Approx')
plot(u_e, label='Exact')

# Setting the font
plt.rc('text', usetex=True)
plt.rc('font', size=14)

# Plotting
plt.xlim(x_left, x_right)  # x axis limits
plt.ylim(np.min(u_sol.vector().get_local()), 1.02*np.max(u_sol.vector().get_local()))  # y axis limits
plt.grid(True, linestyle='--')  # enable the grid
plt.xlabel(r'$x$')  # x axis label
plt.ylabel(r'$u(x)$')  # y axis label
plt.legend(loc='best', borderpad=0.5)  # enable the legend at the best detected location
plt.show()  # show the plot on screen
```

## Something seems not good... (Batman?)

Terrible, isn't it? This is due to the high Damköhler number of the problem $(Da \gg 1)$. The Damköhler number is conceptually defined as

\begin{equation}
Da := \frac{\text{reaction rate of a physical quantity}}{\text{diffusion rate of a physical quantity}}
\end{equation}

In the Finite Element context, we evaluate the Damköhler number in an element-wise manner, because this is how we (normally) stabilize the method and suppress the spurious oscillations. The local (within each element; valid for 1D and 2D... I need to confirm for 3D) Damköhler number is defined as

\begin{equation}
Da_K \equiv \frac{\sigma h_K^2}{6k}
\end{equation}

where $h_K$ is the mesh parameter, related to some characteristic length of the element. For further details about what I stated above and about the method I will summarize below, check the paper "The Galerkin Gradient Least-Squares Method" (Franca and Do Carmo, 1989). Highly recommended reading for anyone who wants to dive into FEM stabilization methods. So, where will all this be applied?

## GGLS stabilization

As commented above, this method was proposed in "The Galerkin Gradient Least-Squares Method" (Franca and Do Carmo, 1989), aiming to solve singular diffusion problems in reaction-dominated cases ($Da \gg 1$).
The principle is similar to the GLS method, but control is gained in the $H^1$-seminorm, while classical Galerkin (and GLS) only control the $L^2$-norm. With the modification, the variational problem is rewritten as follows:

\begin{equation}
B(u,v) = F(v)
\end{equation}

with

\begin{align}
&B(u,v) := a(u,v) + R(u,v) \\
&F(v) := L(v) + S(v)
\end{align}

and

\begin{align}
&R(u,v) := \sum_{K}\tau_K \left(\nabla\left(\sigma u -k \Delta u\right), \nabla\left(\sigma v -k \Delta v\right)\right)_K\\
&S(v) := \sum_{K}\tau_K \left(\nabla f, \nabla\left(\sigma v -k \Delta v\right) \right)_K
\end{align}

where

\begin{equation}
(u, v)_{K} := \int_{\Omega^K} u v dx
\end{equation}

denotes the $L^2$ inner product restricted to the element $K$. Note the presence of a stabilizing parameter $\tau_K$. Through this parameter, we control how much is added to the local element contribution in order to stabilize the formulation. It is computed locally as follows:

\begin{equation}
\tau_K := \frac{h_K^2}{6 \sigma} \widetilde{\xi}(Da_K)
\end{equation}

where

\begin{equation}
\xi(Da_K) := \left\{
\begin{aligned}
&1, &&Da_K \geq 8 \\
&0.064 Da_K + 0.49, &&1 \leq Da_K < 8 \\
&0, &&Da_K < 1
\end{aligned}
\right.
\end{equation}

The expression above is an asymptotic approximation $\xi \approx \widetilde{\xi}$. See the aforementioned paper for further details.

### 1D GGLS

To simplify to 1D, just remember that the gradient reduces to the derivative, $\nabla (\bullet) \equiv \dfrac{d (\bullet)}{d x}$. Thus,

\begin{align}
&R(u,v) := \sum_{K}\tau_K \left(\sigma u' -k u''', \sigma v' -k v''' \right)_K \\
&S(v) := \sum_{K}\tau_K \left(f', \sigma v' -k v''' \right)_K
\end{align}

## Revisiting the FEniCS implementation

Finally, we can stabilize the previous Continuous Galerkin FEM approach with GGLS.
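Before wiring this into FEniCS, the stabilization parameter can be sanity-checked in plain Python (the function names below are mine, for illustration only):

```python
def xi(Da_K):
    """Asymptotic approximation of the GGLS switching function."""
    if Da_K >= 8.0:
        return 1.0
    if Da_K >= 1.0:
        return 0.064 * Da_K + 0.49
    return 0.0

def tau_K(h_K, sigma, k):
    """Local GGLS stabilization parameter: tau_K = h_K^2/(6 sigma) * xi(Da_K)."""
    Da_K = sigma * h_K**2 / (6.0 * k)
    return h_K**2 / (6.0 * sigma) * xi(Da_K)

# With the data of this post (sigma = 1, k = 1e-8, h = 1/15) the problem is
# strongly reaction-dominated, so xi saturates at 1 and tau_K = h^2/6:
h = 1.0 / 15.0
print(xi(1.0 * h**2 / (6.0 * 1e-8)))  # -> 1.0
print(tau_K(h, 1.0, 1e-8))            # -> h^2/6 ≈ 7.4e-4
```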
* First, we need to define the stabilizing parameter and its related quantities

```
h = CellDiameter(mesh)
Da_k = (sigma*pow(h, 2))/(6.0*k)
xi = conditional(ge(Da_k, 8), 1, conditional(ge(Da_k, 1), 0.064*Da_k + 0.49, 0))
tau = (xi*(pow(h, 2)))/(6.0*sigma)
```

* Now we add the stabilizing terms to the classical Galerkin formulation

```
a += inner(grad(sigma*u_h - k*div(grad(u_h))), tau*grad(sigma*v_h - k*div(grad(v_h))))*dx
L += inner(grad(f), tau*grad(sigma*v_h - k*div(grad(v_h))))*dx
```

* Redefining the problem

```
u_sol = Function(V)
problem = LinearVariationalProblem(a, L, u_sol, dirichlet_condition)
```

* Solving the stabilized problem

```
solver = LinearVariationalSolver(problem)
solver.solve()
```

* Now we plot the results to check

```
plot(u_sol, marker='x', label='Approx')
plot(u_e, label='Exact')

# Font settings
plt.rc('text', usetex=True)
plt.rc('font', size=14)

# Plotting
plt.xlim(x_left, x_right)  # x axis limits
plt.ylim(np.min(u_sol.vector().get_local()), 1.05*np.max(u_e.vector().get_local()))  # y axis limits
plt.grid(True, linestyle='--')  # enable the grid
plt.xlabel(r'$x$')  # x axis label
plt.ylabel(r'$u(x)$')  # y axis label
plt.legend(loc='best', borderpad=0.5)  # enable the legend at the best detected location
plt.show()  # show the plot on screen
```

Now things look right.

## Another stabilization: Galerkin Least-Squares (GLS)

An alternative stabilization is the very well-known method named Galerkin Least-Squares, or GLS for short. This method was introduced by Hughes, Franca and Hulbert (see the paper "A new finite element formulation for computational fluid dynamics: VIII. The Galerkin/least-squares method for advective–diffusive equations" for further details). The formulation for the reaction-diffusion problem is based on a remark provided in the Franca and Do Carmo GGLS paper.
It is a very similar approach to the GGLS method, in the sense of how it modifies the classical Galerkin formulation. Now, the terms to add are:

\begin{align}
&R_{GLS}(u,v) := \sum_{K}\tau_K \left(\sigma u -k \Delta u, \sigma v -k \Delta v \right)_K \\
&S_{GLS}(v) := \sum_{K}\tau_K \left(f, \sigma v -k \Delta v \right)_K
\end{align}

which trivially reduce in 1D to

\begin{align}
&R_{GLS}(u,v) := \sum_{K}\tau_K \left(\sigma u -k u'', \sigma v -k v''\right)_K \\
&S_{GLS}(v) := \sum_{K}\tau_K \left(f, \sigma v -k v'' \right)_K
\end{align}

### 1D GLS implementation in FEniCS

For simplicity, the stabilization parameter is kept exactly the same as in the GGLS case, just for testing purposes. Minor changes are necessary, and we solve a separate variational problem just to compare the methods.

* The classical Continuous Galerkin terms

```
a_GLS = k*inner(grad(v_h), grad(u_h))*dx + sigma*inner(u_h, v_h)*dx
L_GLS = f*v_h*dx
```

* Adding the GLS terms

```
a_GLS += tau*inner((sigma*u_h - k*div(grad(u_h))), sigma*v_h - k*div(grad(v_h)))*dx
L_GLS += tau*inner(f, sigma*v_h - k*div(grad(v_h)))*dx
```

* Now we solve the problem with GLS

```
u_sol_gls = Function(V)
problem = LinearVariationalProblem(a_GLS, L_GLS, u_sol_gls, dirichlet_condition)
solver = LinearVariationalSolver(problem)
solver.solve()
```

* Then we plot:

```
plot(u_sol, marker='.', label='GGLS')
plot(u_sol_gls, marker='x', label='GLS')
plot(u_e, label='Exact')

# Font settings
plt.rc('text', usetex=True)
plt.rc('font', size=14)

# Plotting
plt.xlim(x_left, x_right)  # x axis limits
plt.ylim(np.min(u_sol.vector().get_local()), 1.02*np.max(u_sol_gls.vector().get_local()))  # y axis limits
plt.grid(True, linestyle='--')  # enable the grid
plt.xlabel(r'$x$')  # x axis label
plt.ylabel(r'$u(x)$')  # y axis label
plt.legend(loc='best', borderpad=0.5)  # enable the legend at the best detected location
plt.show()  # show the plot on screen
```

The above result confirms what we expected: GLS cannot control the spurious oscillations when $Da \gg 1$ and $p = 1$, while GGLS succeeds by controlling the $H^1$-seminorm.
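If FEniCS is not at hand, the whole comparison can be reproduced with a small hand-assembled P1 finite element code in NumPy. This is a sketch of mine, not the FEniCS code above; it relies on the fact that for P1 elements the second derivatives in the stabilization terms vanish element-wise, and that $f' = 0$ for the constant source:

```python
import numpy as np

def solve_1d(numel=15, k=1e-8, sigma=1.0, f=1.0, method="galerkin"):
    """P1 FEM for -k u'' + sigma u = f on (0, 1) with u(0) = u(1) = 0."""
    h = 1.0 / numel
    n = numel + 1
    Ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h      # element stiffness matrix
    Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])  # element (consistent) mass matrix
    Da = sigma * h**2 / (6.0 * k)
    xi = 1.0 if Da >= 8 else (0.064 * Da + 0.49 if Da >= 1 else 0.0)
    tau = xi * h**2 / (6.0 * sigma)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for e in range(numel):
        idx = [e, e + 1]
        Ae = k * Ke + sigma * Me
        be = f * h / 2.0 * np.ones(2)
        if method == "ggls":
            # P1: u'' = 0 element-wise, so R reduces to tau*(sigma u', sigma v')_K
            # and S vanishes (f is constant, hence f' = 0).
            Ae += tau * sigma**2 * Ke
        elif method == "gls":
            # P1: R reduces to tau*(sigma u, sigma v)_K, S to tau*(f, sigma v)_K.
            Ae += tau * sigma**2 * Me
            be += tau * sigma * f * h / 2.0 * np.ones(2)
        A[np.ix_(idx, idx)] += Ae
        b[idx] += be
    # Homogeneous Dirichlet BCs: u(0) = u(1) = 0
    for i in (0, n - 1):
        A[i, :] = 0.0
        A[i, i] = 1.0
        b[i] = 0.0
    return np.linalg.solve(A, b)

u_gal = solve_1d(method="galerkin")
u_gls = solve_1d(method="gls")
u_ggls = solve_1d(method="ggls")
print(u_gal.max())   # > 1: overshoot near the boundaries (spurious oscillations)
print(u_gls.max())   # > 1: GLS does not help for this reaction-dominated case
print(u_ggls.max())  # ~1: GGLS interior nodal values are (nearly) exact
```

For P1 elements the GGLS term acts as an extra diffusion $\tau \sigma^2$ on each element, which is exactly what damps the boundary-layer oscillations.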
<a href="https://colab.research.google.com/github/chemaar/python-programming-course/blob/master/5_Datastructures_Lists_and_Strings.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Chapter 5: Data structures: Lists and Strings

## 5.1 Introduction

In this chapter, we introduce the use of data structures. More specifically, a detailed explanation of the selection, use and implementation of arrays and strings is presented.

So far, we have designed and implemented programs that take some simple input, perform some operations and return some value. To do so, we declared a few variables (2-3) and worked with them. However, we should ask ourselves what happens when we have a larger input. Let's take a simple example:

>Write a program that prints the name and grade of all students.

How can we proceed?

* Shall we create 72 variables for storing names and grades?
* How can we perform operations on the variables, like calculating the average grade or counting the number of grades $>k$?
* How can we scale the program to support the whole set of university students?

To properly manage the data used by a program (as input, as intermediary results or as output), we have to organize and structure data so that we can easily process all data items used by the program according to a set of requirements (the selection of a proper data structure depends on many factors that we will summarize later in this chapter).

>**Selection and use of a proper data structure.**

## 5.2 Data structures: context and definitions

First, it is important to recall the notion of a program. When we develop a system, we are going to implement some algorithms that will take some input (data), perform some operations and produce some output.

![alt text](https://github.com/chemaar/python-programming-course/raw/master/imgs/IF-ELSE-Program.png)
>General structure of a program.
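As a taste of what is coming, the motivating problem above can be handled with a single data structure instead of dozens of variables. A minimal sketch using a Python list (the grades below are made up for illustration):

```python
# One list instead of 72 separate variables
grades = [7.5, 9.0, 4.2, 8.8, 6.1]

average = sum(grades) / len(grades)        # aggregate over all items at once
k = 7.0
above_k = sum(1 for g in grades if g > k)  # count grades > k

print(round(average, 2))  # -> 7.12
print(above_k)            # -> 3
```

Adding one more student means appending one more item to the list; none of the processing code changes, which is exactly the scalability the questions above ask for.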
In order to organize, manage and store the input data, the intermediary results and the final results, we first have to think about the needs related to the management of data. To do so, a proper identification of the type of data we have and the type of operations we need to perform is critical to provide a good implementation of the problem we are solving.

**A data structure is the way we organize, structure and store data within a program.**

The next figure shows the main relationships between data, data structure, data type and variable. In general, we have **data** (e.g. a list of grades) that, depending on the problem, can be conceptualized in some **data structure** (e.g. a vector of numbers); then, we can define a **specific data type** to implement that data structure (e.g. a list of numbers). Finally, we can create **variables** of that new data type.

![alt text](https://github.com/chemaar/python-programming-course/raw/master/imgs/IF-ELSE-Data-structures-concepts.png)
>Data, Data structure, Data type and variable relationships.

Conceptually speaking, we should apply the next steps to identify a proper data structure:

1. Identify the type of data. E.g. a sequence of numbers, records, etc.
2. Identify the structure to organize data. E.g. a vector, a set, etc.
3. Identify the target operations to perform. E.g. search, access element by element, etc.
4. Study the cost (and scalability) of the target operations in the different data structures. Usually, this evaluation requires knowledge about the different data structures and their spatial and temporal complexity for each of the target operations.
5. Select the most efficient data structure.

In our case, we will focus on the first three steps, trying to directly map our needs to a set of predefined data structures.

### Common data structures

There is a set of well-known data structures that every programmer should know:

* Vector (array). It is used to represent a finite collection of elements of the same type.
A vector can have more than one dimension; a 2-dimensional vector is a matrix or a table.

* List. It is used to represent a collection of elements of unbounded (dynamic) size.
* Set. It is used to represent a collection of unique elements.
* Dictionary. It is used to represent elements indexed by some field (a key).
* Tree. It is used to represent a hierarchy of elements in which there is one root node and each other node has exactly one parent node.
* Graph. It is used to represent relationships between elements. It is a generalization of a tree.

Then, these data structures can be implemented by a combination of specific data types. For instance, a dictionary can be implemented through a hash table that internally uses a linked list.

## 5.3 Objects and references

In the first chapter of this course, we saw that programming languages can be classified depending on the type of programming paradigm (e.g. functional, object-oriented, etc.). Object-Oriented Programming (OOP) is a paradigm in which we represent the data and operations of our problem and solution, making an exercise of abstraction by defining classes that include attributes and operations (methods). Complex data types and user-defined data types are usually defined using objects. This means we define a class with the required attributes and operations to manage the entities of our domain. In the following, the main definitions of the OOP paradigm are presented:

* What is a **class**?

>A class is an abstraction of a (real/virtual) entity defined by a set of attributes (data/features) and a set of capabilities/functionalities/operations (methods).

As an example, let us suppose we have to store the information about a person (id and name) and provide some capabilities (speaking and running). We can define a class Person with these attributes and capabilities.

```
class Person:
    def __init__(self):
        self.id = ""
        self.name = ""

    def speaking(self):
        pass  # do speaking

    def running(self, velocity):
        pass  # do running
```

A class defines a category of objects and contains all common attributes and operations.
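The sketch above can be made fully runnable; the concrete behavior of the methods (and the values "John" and id 1) are just for illustration:

```python
class Person:
    def __init__(self, id, name):
        self.id = id      # attribute
        self.name = name  # attribute

    def speaking(self, words):    # method
        return self.name + " says: " + words

    def running(self, velocity):  # method
        return self.name + " runs at " + str(velocity) + " km/h"

# An object is an instance (a realization) of the class
john = Person(1, "John")
print(john.name)               # accessing an attribute -> John
print(john.speaking("hello"))  # invoking a method -> John says: hello
print(john.running(10))        # -> John runs at 10 km/h
```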
* What is an **object**?

>An object is an instance, a realization, of a class. It is the realization of a class with specific values for each attribute and a shared behavior.

Following the example, we can now define an instance of Person, John, with some id (1).

* What is an **attribute**?

>An attribute is a feature/characteristic/property shared by a set of objects.

In the Python programming language, an attribute can be accessed using the next syntax:

```
instance_name.attribute
```

* What is a **method**?

>A method is a capability/operation/functionality/behavior shared by a set of objects.

In the Python programming language, a method can be invoked using the next syntax:

```
instance_name.method_name(parameters)
```

In our context, we only need to know these basic definitions, since our data structures in Python will be implemented through classes (like the class list) and, by extension, we need to know how to invoke a method, how to access an attribute, etc. Finally, there are 4 principles of OOP that are relevant and are enumerated below:

* Abstraction
* Encapsulation
* Inheritance
* Polymorphism

More information about OOP in Python can be found in the following link:

* https://docs.python.org/3/tutorial/classes.html

## 5.4 The array (vector) data structure

#### **Problem statement**

There are many problems in which we have to manage a collection of elements of the same type; for instance, if we have to store and perform some operations on a set of grades. How can we do it? Declaring $n$ variables? How can we ensure a unified management of all data items? In general, the issues we have to tackle are:

* Organize and structure a finite collection of data items
* Same type of data items
* Provide operations that must work with all the data items

#### **Concept**

According to these needs, we have a conceptual data structure that is the **vector** (or array). A vector is a data structure to organize, store and exploit a collection of data items.
A vector has some characteristics:

* Finite collection of elements
* Same type of elements
* Fixed size

#### **Application**

When we have a collection of data items of the same type, we may want to perform some operations:

* Access an element (and all of them)
* Iterate over the collection of elements
* Search for an element
* Filter by a condition
* Sort the elements by some criteria
* Implement some aggregation operators: sum, min, max, count, size, etc.

#### **Array data structure implementation in the Python programming language**

In the Python programming language, there is no vector or array data structure as in other programming languages. So, to implement a conceptual vector, we can use the data type `list`. The [class list](https://docs.python.org/3/library/stdtypes.html#list) in Python, `class list([iterable])`:

* Lists may be constructed in several ways:
    * Using a pair of square brackets to denote the empty list: `[]`
    * Using square brackets, separating items with commas: `[a]`, `[a, b, c]`
    * Using a list comprehension: `[x for x in iterable]`
    * Using the type constructor: `list()` or `list(iterable)`

  The constructor builds a list whose items are the same and in the same order as iterable's items. iterable may be either a sequence, a container that supports iteration, or an iterator object. If iterable is already a list, a copy is made and returned, similar to `iterable[:]`.
* Mutability. A list in Python is mutable.
* Size. A list in Python is dynamic. Although this is not what we expect from a vector, it is a feature we must be aware of when using the class list in Python.
* Types of the elements. A list in Python can contain elements of different types. As before, this is a feature that does not correspond to the conceptual view of a vector.
* Indexing and slicing. A list in Python can be accessed by position (index) or by slicing the list into a chunk.
    * An index is an integer expression.
      To access an element by an index, we must use the brackets: `mylist[position]`
    * A slice follows the same notation and has the same meaning as a `range`.
* Nested lists. A list in Python can contain elements that are lists. In this manner, we can implement $n$-dimensional vectors.
* Operators. Some operators can be applied to lists:
    * "+", which has the meaning of concatenation.

      ```
      [1, 3, 4] + [2, 5]
      [1, 3, 4, 2, 5]
      ```
    * "*", which has the meaning of concatenating the list elements $n$ times.

      ```
      [1,3,4]*4
      [1, 3, 4, 1, 3, 4, 1, 3, 4, 1, 3, 4]
      ```

#### **Examples of the main functions and methods**

Given a list $L$, some of the main functions and methods to work with a list are presented below (-> return value):

* len: `len(L) -> number of elements of the list.`
* append: `L.append(object) -> None -- append object to end.`
* insert: `L.insert(index, object) -- insert object before index. Index is a position.`
* index: `L.index(value, [start, [stop]]) -> integer -- return first index of value. Raises ValueError if the value is not present.`
* count: `L.count(value) -> integer -- return number of occurrences of value.`
* copy: `L.copy() -> list -- a shallow copy of L`. A new object is created.
* reverse: `L.reverse() -- reverse *IN PLACE*`. "In place" means that the list itself is modified.
* sort: `L.sort(key=None, reverse=False) -> None -- stable sort *IN PLACE*`
* remove: `L.remove(value) -> None -- remove first occurrence of value. Raises ValueError if the value is not present.`
* clear: `L.clear() -> None -- remove all items from L.`

Other interesting methods are `extend` and `pop` (which, together with `append`, support stack-like input/output strategies; note that Python lists have no `push` method, `append` plays that role).
```
#Creating a list
values = [1, 2, 3]

#Accessing elements
#Prints 1
print(values[0])

#Length
#Prints 3
print(len(values))

#Iterating
#By value
for v in values:
    print(v)

#By index
for i in range(len(values)):
    print(values[i])

#Adding an element
values.append(100)

#Creating a copy
other_values = values.copy()

#Counting elements
print(values.count(3))

#Slicing: [start:stop:step]
print(values[0:2])

#Reversing the list
values.reverse()

#Removing an element
values.remove(1)

#Removing all elements
values.clear()

#Listing list methods and getting a method definition
dir([])
help([].append)
```

## 5.5 The array of characters (string) data structure

#### **Problem statement**

Any exchange of information between the program and any other entity uses strings (text sequences) as units of information. When a user inputs something, before it is interpreted (as a number, for instance), the program and libraries receive a string. When a program reads some input data from a file, service or database, the information is encoded as strings. When something is displayed on the screen, strings are used. So, in general, everything is a string that can then be interpreted as anything else (like a number, a set of elements, etc.). That is why we need means to easily manage sequences of characters.

#### **Concept**

According to these needs, we have a conceptual data structure that is the **string**. A string is a data structure to organize, store and exploit a collection of characters. Conceptually speaking, a string is a vector of characters, so it shares most of the vector characteristics:

* Finite collection of elements (characters)
* Same type of elements (character)
* Fixed size

#### **Application**

Since a string is a kind of vector, most of the applications are similar but focus on the use of characters.
* Access an element (and all of them)
* Iterate over the collection of elements
* Search for a character or string
* Filter by a condition
* Sort the strings by some criteria
* Implement some aggregation operators: sum, min, max, count, size, etc.

#### **String data structure implementation in the Python programming language**

Textual data in Python is handled with str objects, or strings. Strings are immutable sequences of Unicode code points. String literals are written in a variety of ways:

* Single quotes: 'allows embedded "double" quotes'
* Double quotes: "allows embedded 'single' quotes"
* Triple quoted: '''Three single quotes''', """Three double quotes"""

Triple-quoted strings may span multiple lines - all associated whitespace will be included in the string literal. String literals that are part of a single expression and have only whitespace between them will be implicitly converted to a single string literal. That is, `("spam " "eggs") == "spam eggs"`.

The [class string](https://docs.python.org/3/library/stdtypes.html#str) in Python is initialized through the constructor `class str(object=b'', encoding='utf-8', errors='strict')`.

* Mutability. A string in Python is **immutable**.
* Size. A string in Python has a fixed size; since strings are immutable, any operation that appears to modify a string actually creates a new one.
* Types of the elements. A string in Python can only contain characters.
* Indexing and slicing. A string in Python can be accessed by position (index) or by slicing the string into a chunk.
    * An index is an integer expression. To access an element by an index, we must use the brackets: `string[position]`
    * A slice follows the same notation and has the same meaning as a `range`.
* Operators. Some operators can be applied to strings:
    * "+", which has the meaning of concatenation.

      ```
      "Hello" + "World"
      "HelloWorld"
      ```
    * "*", which has the meaning of concatenating the string characters $n$ times.
      ```
      "Hello"*2
      "HelloHello"
      ```

#### **Examples of the main functions and methods**

Given a string $S$, some of the main methods to work with a string are presented below (-> return value):

* len: `len(S) -> number of characters of the string.`
* capitalize: `S.capitalize() -> str`. Return a capitalized version of S, i.e. make the first character have upper case and the rest lower case.
* count: `S.count(sub[, start[, end]]) -> int`. Return the number of non-overlapping occurrences of substring sub in string `S[start:end]`. Optional arguments start and end are interpreted as in slice notation.
* endswith: `S.endswith(suffix[, start[, end]]) -> bool`. Return True if S ends with the specified suffix, False otherwise. With optional start, test S beginning at that position. With optional end, stop comparing S at that position. suffix can also be a tuple of strings to try.
* find: `S.find(sub[, start[, end]]) -> int`. Return the lowest index in S where substring sub is found, such that sub is contained within `S[start:end]`. Optional arguments start and end are interpreted as in slice notation. Return -1 on failure.
* format: `S.format(*args, **kwargs) -> str`. Return a formatted version of S, using substitutions from args and kwargs. The substitutions are identified by braces ('{' and '}').
* index: `S.index(sub[, start[, end]]) -> int`. Return the lowest index in S where substring sub is found, such that sub is contained within `S[start:end]`. Optional arguments start and end are interpreted as in slice notation. Raises ValueError when the substring is not found.
* isalnum: `S.isalnum() -> bool`. Return True if all characters in S are alphanumeric and there is at least one character in S, False otherwise.
* isalpha: `S.isalpha() -> bool`. Return True if all characters in S are alphabetic and there is at least one character in S, False otherwise.
* isdecimal: `S.isdecimal() -> bool`. Return True if there are only decimal characters in S, False otherwise.
* isdigit: `S.isdigit() -> bool`. Return True if all characters in S are digits and there is at least one character in S, False otherwise.
* isidentifier: `S.isidentifier() -> bool`. Return True if S is a valid identifier according to the language definition. Use `keyword.iskeyword()` to test for reserved identifiers such as "def" and "class".
* islower: `S.islower() -> bool`. Return True if all cased characters in S are lowercase and there is at least one cased character in S, False otherwise.
* isnumeric: `S.isnumeric() -> bool`. Return True if there are only numeric characters in S, False otherwise.
* isspace: `S.isspace() -> bool`. Return True if all characters in S are whitespace and there is at least one character in S, False otherwise.
* isupper: `S.isupper() -> bool`. Return True if all cased characters in S are uppercase and there is at least one cased character in S, False otherwise.
* join: `S.join(iterable) -> str`. Return a string which is the concatenation of the strings in the iterable. The separator between elements is S.
* lower: `S.lower() -> str`. Return a copy of the string S converted to lowercase.
* replace: `S.replace(old, new[, count]) -> str`. Return a copy of S with all occurrences of substring old replaced by new. If the optional argument count is given, only the first count occurrences are replaced.
* split: `S.split(sep=None, maxsplit=-1) -> list of strings`. Return a list of the words in S, using sep as the delimiter string. If maxsplit is given, at most maxsplit splits are done. If sep is not specified or is None, any whitespace string is a separator and empty strings are removed from the result.
* splitlines: `S.splitlines([keepends]) -> list of strings`. Return a list of the lines in S, breaking at line boundaries. Line breaks are not included in the resulting list unless keepends is given and true.
* startswith: `S.startswith(prefix[, start[, end]]) -> bool`. Return True if S starts with the specified prefix, False otherwise.
With optional start, test S beginning at that position. With optional end, stop comparing S at that position. prefix can also be a tuple of strings to try.

* strip: `S.strip([chars]) -> str`. Return a copy of the string S with leading and trailing whitespace removed. If chars is given and not None, remove characters in chars instead.
* title: `S.title() -> str`. Return a titlecased version of S, i.e. words start with title case characters, all remaining cased characters have lower case.

There are other methods that are specific implementations of replace, index, find, etc.

```
#Some examples of string method invocation
name = "Mary"
print(len(name))
print(name.count("a"))

#Formatting
print("mary".capitalize())

#Is methods
print("123".isalnum())
print("A".isalpha())
print("1".isdigit())
print("def".isidentifier())
print(name.islower())
print(name.isupper())
print(" ".isspace())
print(" ".join(["Mary", "has", "20", "years"]))

#Checking values
print(name.startswith("M"))
print(name.endswith("ry"))

#Finding
print(name.find("r"))
print(name.index("r"))

#Replace
print(name.replace("M","T"))

#Splitting
print("Mary has 20 years".split(" "))
print(" Mary ".strip())

#Listing string methods and get the method definition
dir("")
help("".strip)
```

## Relevant resources

* https://docs.python.org/3/reference/compound_stmts.html
* https://docs.python.org/3/library/stdtypes.html#str
* https://docs.python.org/3/library/stdtypes.html#textseq
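To make the indexing and slicing rules described earlier concrete, here is a short sketch (the sample string is purely illustrative):

```python
s = "Monty Python"

# Indexing is 0-based from the left and -1-based from the right
print(s[0])     # M
print(s[-1])    # n

# Slicing follows range semantics: s[start:stop:step], stop excluded
print(s[0:5])   # Monty
print(s[6:])    # Python
print(s[::-1])  # nohtyP ytnoM

# Strings are immutable: item assignment raises TypeError
try:
    s[0] = "m"
except TypeError:
    print("strings are immutable")
```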
# Receiver operator modeling and characterization

In this notebook we will construct a genetic network to model a Receiver operator, a signal receiver device, upload the simulated data to Flapjack, and then show how to characterize the operator based on this data.

## Import required packages

```
from loica import *
import matplotlib.pyplot as plt
import numpy as np
import getpass
```

## Make a connection to Flapjack

Note here you should specify which instance of Flapjack you will use, whether it is local or the public instance for example.

```
from flapjack import *
#fj = Flapjack(url_base='flapjack.rudge-lab.org:8000')
fj = Flapjack(url_base='localhost:8000')
fj.log_in(username=input('Flapjack username: '), password=getpass.getpass('Password: '))
```

## Get or create Flapjack objects

To associate the components of the genetic network and the simulated data with Flapjack, we need the Ids of the appropriate objects. Note that if the objects already exist you will be prompted and can simply hit return to use the existing objects.

```
study = fj.create('study', name='Loica testing', description='Test study for demonstrating Loica')
sfp = fj.create('signal', name='SFP', color='green', description='Simulated fluorescent protein')
dna = fj.create('dna', name='receiver1')
vector = fj.create('vector', name='receiver1', dnas=dna.id)
```

## Create the network with measurable reporter

First we create a GeneticNetwork object and associate it with a Flapjack Vector (collection of DNA). The connection to Flapjack is optional, but we will use it here to upload data and characterize our components.
```
network = GeneticNetwork(vector=vector.id[0])
reporter = Reporter(name='SFP', color='green', degradation_rate=0, init_concentration=0, signal_id=sfp.id[0])
network.add_reporter(reporter)
```

## Create the Receiver operator

The receiver operator responds to a signal $s$ to produce an output expression rate $\phi(s)$ modeled as follows:

\begin{equation}
\phi(s) = \frac{\alpha_0 + \alpha_1 (\frac{s}{K})^n}{1 + (\frac{s}{K})^n}
\end{equation}

Here we must create a Supplement object to represent the signal, in this case modeling an acyl-homoserine lactone (AHL).

```
ahl = Supplement(name='AHL1')
rec = Receiver(input=ahl, output=reporter, alpha=[0,100], K=1, n=2)
network.add_operator(rec)
```

## Draw the GeneticNetwork as a graph

We can now make a visual representation of our GeneticNetwork to check it is wired up correctly.

```
plt.figure(figsize=(3,3), dpi=150)
network.draw()
```

## Simulate the GeneticNetwork

In order to simulate the GeneticNetwork behaviour we need to specify the growth conditions in which it will operate. To do this we create a SimulatedMetabolism object which specifies growth functions.

```
def growth_rate(t):
    return gompertz_growth_rate(t, 0.05, 1, 1, 1)

def biomass(t):
    return gompertz(t, 0.05, 1, 1, 1)

metab = SimulatedMetabolism(biomass, growth_rate)

media = fj.create('media', name='loica', description='Simulated loica media')
strain = fj.create('strain', name='loica', description='Loica test strain')
```

Now we can create Samples that contain our GeneticNetwork driven by the SimulatedMetabolism. We also need to specify the Media and Strain, in order to link to the Flapjack data model. To test the signal receiving behaviour we must also add the signal (ahl) at a range of concentrations.
```
# Create list of samples
samples = []
concs = np.append(0, np.logspace(-3, 3, 12))
for conc in concs:
    for _ in range(1):
        sample = Sample(genetic_network=network,
                        metabolism=metab,
                        media=media.id[0],
                        strain=strain.id[0])
        # Add AHL to samples at given concentration
        sample.set_supplement(ahl, conc)
        samples.append(sample)
```

Given our Samples, we can now create an Assay which will simulate an experiment containing them. We need to specify the biomass signal in order to link to the Flapjack data model for later upload. Running the assay will simulate the behaviour of the GeneticNetwork.

```
biomass_signal = fj.create('signal', name='SOD', description='Simulated OD', color='black')
assay = Assay(samples,
              n_measurements=100,
              interval=0.24,
              name='Loica receiver1',
              description='Simulated receiver generated by loica',
              biomass_signal_id=biomass_signal.id[0]
             )
assay.run()
```

## Upload simulated data to Flapjack

```
assay.upload(fj, study.id[0])
```

Now we can check that the simulation worked by plotting an induction curve using the PyFlapjack package to connect to the Flapjack API. This also allows us to see if we have covered the dynamic range of the Receiver, in order to correctly characterize it.

```
ahl1_id = fj.get('chemical', name='AHL1').id[0]
fig = fj.plot(study=study.id,
              vector=vector.id,
              signal=sfp.id,
              type='Induction Curve',
              analyte=ahl1_id,
              function='Mean Expression',
              biomass_signal=biomass_signal.id[0],
              normalize='None',
              subplots='Signal',
              markers='Vector',
              plot='All data points'
             )
fig
```

## Characterize the Receiver operator from the uploaded data

```
rec.characterize(
    fj,
    vector=vector.id,
    media=media.id,
    strain=strain.id,
    signal=sfp.id,
    biomass_signal=biomass_signal.id
)
rec.alpha, rec.K, rec.n
```
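As a sanity check independent of Loica, the Hill-type response $\phi(s)$ defined above can be evaluated directly in NumPy. The sketch below is not part of the Loica API; the parameter values mirror the Receiver created earlier ($\alpha_0 = 0$, $\alpha_1 = 100$, $K = 1$, $n = 2$):

```python
import numpy as np

def hill_response(s, alpha0=0.0, alpha1=100.0, K=1.0, n=2.0):
    """Hill-type response phi(s) = (alpha0 + alpha1*(s/K)^n) / (1 + (s/K)^n)."""
    x = (s / K) ** n
    return (alpha0 + alpha1 * x) / (1 + x)

s = np.logspace(-3, 3, 13)
phi = hill_response(s)

# At s = K the response is exactly halfway between alpha0 and alpha1
print(hill_response(1.0))  # 50.0
# Far below K the response approaches alpha0; far above K it approaches alpha1
print(phi[0], phi[-1])
```

This also makes it clear why the samples above sweep concentrations from $10^{-3}$ to $10^{3}$: that range covers the full transition of the curve around $K = 1$.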
#### A

Your solution: $\exists x [P(k, y) \Leftarrow N(y)] \Leftrightarrow [P(d, y) \Leftarrow N(y)]$

Feedback:
- At the beginning, did you mean to write $y$ instead of $x$?
- Everything that refers to a given bound parameter must be inside the brackets that belong to it, so in the correct form the whole expression is inside $\exists y [...]$.
- The equivalence is not right, because it allows both statements to be false; $\land$ should be used instead.
- The implication should also be replaced with $\land$, because the implication allows that if $y$ is not a woman, then neither $P(k, y)$ nor $P(d, y)$ has to hold. (So what you wrote is "*there exists someone who, if she is a woman, is a child of k (or d)*", which is true even if there exists a man who is not their child.)

#### B

Your solution: $\forall x [P(x, y) \Leftarrow N(x)] \land \nexists [P(x, y) \Leftarrow F(x)]$

Feedback:
- I am not sure I understand your reasoning correctly; I have two alternative interpretations, which I first evaluate by content:
    - (1) $\forall x [N(x) \Rightarrow \exists y(P(x, y)) \land (\nexists y (P(x, y)) \Rightarrow F(x))]$. The parts before and after the $\land$ are fully equivalent (so one of them is redundant), and it means "*every woman has a child*", which is correct but not the full statement.
    - (2) $\forall x [N(x) \Rightarrow \exists y(P(x, y)) \land F(x) \Rightarrow \nexists y (P(x, y))]$. This, however, means "*every woman has a child and no man has a child*", which does not match statement B.
- Syntactic errors:
    - In the first half of the expression, $y$ also needs an $\exists$ quantifier, otherwise the formula remains open and its truth value depends on what we substitute for $y$.
    - In the second half, you left out the $y$ parameter after the $\nexists$ quantifier.
    - The parentheses are in the wrong places.
#### C

Your solution: $\exists x [P(x, b) \Leftarrow F(b)] \land [P(x, c) \Leftarrow F(c)] \lor \exists x [P(x, d) \Leftarrow N(d)] \land [P(x, e) \Leftarrow N(e)]$

Feedback:
- $b, c, d, e$ also need $\exists$ quantifiers, for the same reason I described in B.
- $\land$ is needed instead of the implication, for the same reason I described in A.
- Using parameters denoted by different letters does not by itself imply that they denote two different people, so you must also require that $b \neq c$ and $d \neq e$.

#### D

Your solution: $\forall x [P(x, y) \Leftarrow N(x)]$

Feedback:
- $y$ needs the $\exists$ quantifier, for the same reason I described in B.

#### E

Your solution: $\exists x [N(x) \Rightarrow P(j, x) \Leftarrow P(k, j)] \land \exists x [F(x) \Rightarrow P(j, x) \Leftarrow P(k, j)]$

Feedback:
- $j$ also needs the $\exists$ quantifier, for the same reason I described in B.
- $\land$ is needed instead of the implication, for the same reason I described in A.
```
import tensorflow as tf
import numpy as np
import utils

operator = 'add'
(input_train, input_dev, input_test, target_train, target_dev, target_test) = utils.import_data(operator)

# If the training dataset takes all examples, then the dev and test datasets are the same as the training one.
if input_dev.shape[0] == 0:
    input_dev = input_train
    target_dev = target_train
    input_test = input_train
    target_test = target_train

print(input_train.shape)
print(input_dev.shape)
print(input_test.shape)
print(target_train.shape)
print(target_dev.shape)
print(target_test.shape)

run_id = '20181001180511'
operation = 'add'
parameters = utils.import_parameters(run_id, operation)

w1_value = parameters['h1/kernel']
b1_value = parameters['h1/bias']
b1_value = b1_value.reshape((-1, b1_value.shape[0]))
w2_value = parameters['last_logits/kernel']
b2_value = parameters['last_logits/bias']
b2_value = b2_value.reshape((-1, b2_value.shape[0]))

def test_graph(str_activation_function):
    '''
    str_activation_function: 'sigmoid', 'tlu'
    '''
    w1 = tf.Variable(w1_value)
    b1 = tf.Variable(b1_value)
    w2 = tf.Variable(w2_value)
    b2 = tf.Variable(b2_value)

    inputs = tf.placeholder(tf.float32, shape=(None, input_train.shape[1]), name='inputs')  # None for mini-batch size
    targets = tf.placeholder(tf.float32, shape=(None, target_train.shape[1]), name='targets')

    if str_activation_function == 'tlu':
        h1 = utils.tf_tlu(tf.matmul(inputs, w1) + b1)
        predictions = utils.tf_tlu(tf.matmul(h1, w2) + b2)
    if str_activation_function == 'sigmoid':
        h1 = tf.sigmoid(tf.matmul(inputs, w1) + b1)
        predictions = utils.tf_tlu(tf.sigmoid(tf.matmul(h1, w2) + b2))

    # Accuracy
    (accuracy, n_wrong, n_correct) = utils.get_measures(targets, predictions)

    # Run area
    #########################
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    init = tf.global_variables_initializer()

    with tf.Session(config=config) as sess:
        sess.run(init)
        # Run computing test loss, accuracy
        accuracy_value, n_wrong_value, predictions_value = sess.run(
            [accuracy, n_wrong, predictions],
            feed_dict={inputs: input_test, targets: target_test})

    return (accuracy_value, n_wrong_value)

(accuracy_value, n_wrong_value) = test_graph('sigmoid')
print('accuracy: {}, n_wrong_value: {}'.format(accuracy_value, n_wrong_value))

(accuracy_value, n_wrong_value) = test_graph('tlu')
print('accuracy: {}, n_wrong_value: {}'.format(accuracy_value, n_wrong_value))
```
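`utils.tf_tlu` and `utils.get_measures` are project-specific helpers whose source is not shown here. Assuming the TLU is a hard-threshold (step) activation (an assumption, including the cutoff value), the forward pass of `test_graph` can be sketched in plain NumPy with toy parameters:

```python
import numpy as np

def tlu(z, threshold=0.5):
    # Hard-threshold (step) activation; the actual cutoff used by
    # utils.tf_tlu is an assumption here, since its source is not shown.
    return (z > threshold).astype(np.float32)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(inputs, w1, b1, w2, b2, activation="sigmoid"):
    """Two-layer forward pass mirroring test_graph(): hidden layer + binarized output."""
    if activation == "sigmoid":
        h1 = sigmoid(inputs @ w1 + b1)
        return tlu(sigmoid(h1 @ w2 + b2))
    h1 = tlu(inputs @ w1 + b1)
    return tlu(h1 @ w2 + b2)

# Toy example with random parameters (the real ones are loaded from disk above)
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8)).astype(np.float32)
w1, b1 = rng.standard_normal((8, 16)), rng.standard_normal((1, 16))
w2, b2 = rng.standard_normal((16, 2)), rng.standard_normal((1, 2))
preds = forward(x, w1, b1, w2, b2)
print(preds.shape)  # (4, 2), entries in {0.0, 1.0}
```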
# Spark on Tour

## Example of using the structured API to generate user-profile information

In this notebook we explore a very representative example of a typical big data application: generating a summarized profile of users' tastes/behaviour from events captured from their interaction with a system.

Concretely, we start from two initial datasets:

* *movies.csv*: each row represents a movie
* *ratings.csv*: each row represents an event in which a user gives a rating to a movie

We will join the datasets and finally obtain a single dataset in which each row represents a user and contains information summarized per genre, such as:

* Number of movies rated in each genre
* Average rating of the movies in each genre

This is a typical use case, for example to later:

* Export the information to an operational database and display statistics per profile in a web dashboard
* Train prediction/segmentation models from the user profiles.

### Import libraries, define schemas and initialize the Spark session
```
import findspark
findspark.init()

import pyspark
from pyspark.sql.types import *
from pyspark.sql import SparkSession
from pyspark.sql.functions import *

ratingSchema = StructType([
    StructField("user", IntegerType()),
    StructField("movie", IntegerType()),
    StructField("rating", FloatType())
])

movieSchema = StructType([
    StructField("movie", IntegerType()),
    StructField("title", StringType()),
    StructField("genres", StringType())
])

# Set up the Spark session
sparkSession = (SparkSession.builder
    .appName("Introducción API estructurada")
    .master("local[*]")
    .config("spark.scheduler.mode", "FAIR")
    .getOrCreate())

sparkSession.sparkContext.setLogLevel("ERROR")
```

### Read the user/movie ratings dataset

```
ratings = sparkSession.read.csv("/tmp/movielens/ratings.csv", schema=ratingSchema, header=True)
#ratings.show(10)
```

### Read the movies dataset

```
movies = sparkSession.read.csv("/tmp/movielens/movies.csv", schema=movieSchema, header=True)
#movies.show(10, truncate=False)
```

### Transform the movies dataset to associate each movie with each of its genres

The result is a dataset with N rows per movie, one per genre.

```
movies = movies.select("movie", "title", split("genres", "\|").alias("genres"))
#movies.show(truncate=False)

movies = movies.select("movie", "title", explode("genres").alias("genre"))
#movies.show(10)
```

### Join movies and ratings

We enrich the ratings information with the genres of each movie, and keep a dataframe in which each movie appears in several rows, one per genre.

```
movieRatings = ratings.join(movies, "movie", "left_outer")
#movieRatings.show(10)
```

### Aggregate by genre and user

We compute indicators of a user's interest in each genre, such as the total number of movies rated, the average rating, the maximum rating and the minimum rating.
Our goal is to compute a per-genre user profile, so we are not interested in individual movies but in the aggregation of ratings by genre for each user.

```
userRatingsGenres = movieRatings.groupBy("user", "genre") \
    .agg( \
        count("rating").alias("num_ratings"), \
        avg("rating").alias("avg_rating"), \
        min("rating").alias("min_rating"), \
        max("rating").alias("max_rating")) \
    .sort(asc("user"))

#userRatingsGenres.toPandas()
```

### Generate the final user-profile dataset

We now have the information we wanted, but not in the shape we need in order to, for example, train an ML model and segment users or predict how much a user will like a movie based on the genres it belongs to. We need to generate a single row per user that represents the user's profile, and therefore their "tastes" with respect to the various movie genres.

```
userRatingProfile = userRatingsGenres.groupBy("user") \
    .pivot("genre") \
    .agg(sum("avg_rating").alias("rating"), sum("num_ratings").alias("num"))

userRatingProfile.toPandas()
```
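The same explode / groupBy / pivot pipeline can be mirrored in pandas on a toy dataset, which is handy for reasoning about the shape of the result without a Spark session (column names follow the notebook; the data is made up):

```python
import pandas as pd

movies = pd.DataFrame({
    "movie": [1, 2],
    "title": ["Toy Story", "Heat"],
    "genres": ["Animation|Comedy", "Action"],
})
ratings = pd.DataFrame({
    "user": [10, 10, 20],
    "movie": [1, 2, 1],
    "rating": [4.0, 3.0, 5.0],
})

# split + explode: one row per (movie, genre) pair
movies = movies.assign(genre=movies["genres"].str.split("|")).explode("genre")

# join ratings with genres, then aggregate per (user, genre)
movie_ratings = ratings.merge(movies[["movie", "genre"]], on="movie", how="left")
profile = (movie_ratings
           .groupby(["user", "genre"])["rating"]
           .agg(num_ratings="count", avg_rating="mean")
           .reset_index())

# pivot: one row per user, one column group per genre
user_profile = profile.pivot(index="user", columns="genre",
                             values=["num_ratings", "avg_rating"])
print(user_profile)
```

The result has one row per user and a two-level column index (indicator, genre), which is the same "wide" shape the Spark pivot produces.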
# Gradient Checking

Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.

You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.

But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".

Let's do it!

```
# Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
```

## 1) How does gradient checking work?

Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.

Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.

Let's look back at the definition of a derivative (or gradient):
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$

If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."

We know the following:

- $\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly.
- You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct.

Let's use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!

## 2) 1-dimensional gradient checking

Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.

You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct.

<img src="images/1Dgrad_kiank.png" style="width:600px;height:250px;">
<caption><center> <u> **Figure 1** </u>: **1D linear model**<br> </center></caption>

The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation").

**Exercise**: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.

```
# GRADED FUNCTION: forward_propagation

def forward_propagation(x, theta):
    """
    Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)

    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well

    Returns:
    J -- the value of function J, computed using the formula J(theta) = theta * x
    """

    ### START CODE HERE ### (approx. 1 line)
    J = theta * x
    ### END CODE HERE ###

    return J

x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J))
```

**Expected Output**:

<table style=>
    <tr>
        <td>  ** J **  </td>
        <td> 8</td>
    </tr>
</table>

**Exercise**: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J }{ \partial \theta} = x$.

```
# GRADED FUNCTION: backward_propagation

def backward_propagation(x, theta):
    """
    Computes the derivative of J with respect to theta (see Figure 1).

    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well

    Returns:
    dtheta -- the gradient of the cost with respect to theta
    """

    ### START CODE HERE ### (approx. 1 line)
    dtheta = x
    ### END CODE HERE ###

    return dtheta

x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print ("dtheta = " + str(dtheta))
```

**Expected Output**:

<table>
    <tr>
        <td>  ** dtheta **  </td>
        <td> 2 </td>
    </tr>
</table>

**Exercise**: To show that the `backward_propagation()` function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.

**Instructions**:
- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow:
    1. $\theta^{+} = \theta + \varepsilon$
    2. $\theta^{-} = \theta - \varepsilon$
    3. $J^{+} = J(\theta^{+})$
    4. $J^{-} = J(\theta^{-})$
    5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$
- Then compute the gradient using backward propagation, and store the result in a variable "grad"
- Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula:
$$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$
You will need 3 Steps to compute this formula:
    - 1'. compute the numerator using np.linalg.norm(...)
    - 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.
    - 3'. divide them.
- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.

```
# GRADED FUNCTION: gradient_check

def gradient_check(x, theta, epsilon = 1e-7):
    """
    Implement the gradient checking presented in Figure 1.

    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well
    epsilon -- tiny shift to the input to compute approximated gradient with formula(1)

    Returns:
    difference -- difference (2) between the approximated gradient and the backward propagation gradient
    """

    # Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.
    ### START CODE HERE ### (approx. 5 lines)
    thetaplus = theta + epsilon                      # Step 1
    thetaminus = theta - epsilon                     # Step 2
    J_plus = forward_propagation(x, thetaplus)       # Step 3
    J_minus = forward_propagation(x, thetaminus)     # Step 4
    gradapprox = (J_plus - J_minus) / (2 * epsilon)  # Step 5
    ### END CODE HERE ###

    # Check if gradapprox is close enough to the output of backward_propagation()
    ### START CODE HERE ### (approx. 1 line)
    grad = backward_propagation(x, theta)
    ### END CODE HERE ###

    ### START CODE HERE ### (approx. 1 line)
    numerator = np.linalg.norm(grad - gradapprox)                    # Step 1'
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)  # Step 2'
    difference = numerator / denominator                             # Step 3'
    ### END CODE HERE ###

    if difference < 1e-7:
        print ("The gradient is correct!")
    else:
        print ("The gradient is wrong!")

    return difference

x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))
```

**Expected Output**:
The gradient is correct!
<table>
    <tr>
        <td>  ** difference **  </td>
        <td> 2.9193358103083e-10 </td>
    </tr>
</table>

Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in `backward_propagation()`.

Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it!

## 3) N-dimensional gradient checking

The following figure describes the forward and backward propagation of your fraud detection model.

<img src="images/NDgrad_kiank.png" style="width:600px;height:400px;">
<caption><center> <u> **Figure 2** </u>: **deep neural network**<br>*LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID*</center></caption>

Let's look at your implementations for forward propagation and backward propagation.

```
def forward_propagation_n(X, Y, parameters):
    """
    Implements the forward propagation (and computes the cost) presented in Figure 2.
    Arguments:
    X -- training set for m examples
    Y -- labels for m examples
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
                    W1 -- weight matrix of shape (5, 4)
                    b1 -- bias vector of shape (5, 1)
                    W2 -- weight matrix of shape (3, 5)
                    b2 -- bias vector of shape (3, 1)
                    W3 -- weight matrix of shape (1, 3)
                    b3 -- bias vector of shape (1, 1)

    Returns:
    cost -- the cost function (logistic cost for one example)
    """

    # retrieve parameters
    m = X.shape[1]
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]

    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)
    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)

    # Cost
    logprobs = np.multiply(-np.log(A3), Y) + np.multiply(-np.log(1 - A3), 1 - Y)
    cost = 1./m * np.sum(logprobs)

    cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)

    return cost, cache
```

Now, run backward propagation.

```
def backward_propagation_n(X, Y, cache):
    """
    Implement the backward propagation presented in figure 2.

    Arguments:
    X -- input datapoint, of shape (input size, 1)
    Y -- true "label"
    cache -- cache output from forward_propagation_n()

    Returns:
    gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
""" m = X.shape[1] (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache dZ3 = A3 - Y dW3 = 1./m * np.dot(dZ3, A2.T) db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True) dA2 = np.dot(W3.T, dZ3) dZ2 = np.multiply(dA2, np.int64(A2 > 0)) dW2 = 1./m * np.dot(dZ2, A1.T) * 2 db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True) dA1 = np.dot(W2.T, dZ2) dZ1 = np.multiply(dA1, np.int64(A1 > 0)) dW1 = 1./m * np.dot(dZ1, X.T) db1 = 4./m * np.sum(dZ1, axis=1, keepdims = True) gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3, "dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1} return gradients ``` You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct. **How does gradient checking work?**. As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still: $$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$ However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "`dictionary_to_vector()`" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them. The inverse function is "`vector_to_dictionary`" which outputs back the "parameters" dictionary. <img src="images/dictionary_to_vector.png" style="width:600px;height:400px;"> <caption><center> <u> **Figure 2** </u>: **dictionary_to_vector() and vector_to_dictionary()**<br> You will need these functions in gradient_check_n()</center></caption> We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that. **Exercise**: Implement gradient_check_n(). 
**Instructions**: Here is pseudo-code that will help you implement the gradient check. For each i in num_parameters: - To compute `J_plus[i]`: 1. Set $\theta^{+}$ to `np.copy(parameters_values)` 2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$ 3. Calculate $J^{+}_i$ using `forward_propagation_n(X, Y, vector_to_dictionary(`$\theta^{+}$ `))`. - To compute `J_minus[i]`: do the same thing with $\theta^{-}$ - Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$ Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to `parameters_values[i]`. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute: $$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$ ``` # GRADED FUNCTION: gradient_check_n def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7): """ Checks whether backward_propagation_n correctly computes the gradient of the cost output by forward_propagation_n Arguments: parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3": gradients -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters. X -- input datapoint, of shape (input size, 1) Y -- true "label" epsilon -- tiny shift to the input to compute approximated gradient with formula (1) Returns: difference -- difference (3) between the approximated gradient and the backward propagation gradient """ # Set-up variables parameters_values, _ = dictionary_to_vector(parameters) grad = gradients_to_vector(gradients) num_parameters = parameters_values.shape[0] J_plus = np.zeros((num_parameters, 1)) J_minus = np.zeros((num_parameters, 1)) gradapprox = np.zeros((num_parameters, 1)) # Compute gradapprox for i in range(num_parameters): # Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
# "_" is used because the function outputs two values, but we only care about the first one ### START CODE HERE ### (approx. 3 lines) thetaplus = np.copy(parameters_values) # Step 1 thetaplus[i][0] = thetaplus[i][0] + epsilon # Step 2 J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus)) # Step 3 ### END CODE HERE ### # Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]". ### START CODE HERE ### (approx. 3 lines) thetaminus = np.copy(parameters_values) # Step 1 thetaminus[i][0] = thetaminus[i][0] - epsilon # Step 2 J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus)) # Step 3 ### END CODE HERE ### # Compute gradapprox[i] ### START CODE HERE ### (approx. 1 line) gradapprox[i] = (J_plus[i] - J_minus[i])/(2*epsilon) ### END CODE HERE ### # Compare gradapprox to backward propagation gradients by computing difference. ### START CODE HERE ### (approx. 1 line) numerator = np.linalg.norm(grad-gradapprox) # Step 1' denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2' difference = numerator/denominator # Step 3' ### END CODE HERE ### if difference > 2e-7: print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m") else: print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m") return difference X, Y, parameters = gradient_check_n_test_case() cost, cache = forward_propagation_n(X, Y, parameters) gradients = backward_propagation_n(X, Y, cache) difference = gradient_check_n(parameters, gradients, X, Y) ``` **Expected output**: <table> <tr> <td> ** There is a mistake in the backward propagation!** </td> <td> difference = 0.285093156781 </td> </tr> </table> It seems that there were errors in the `backward_propagation_n` code we gave you! Good that you've implemented the gradient check.
Go back to `backward_propagation_n` and try to find/correct the errors *(Hint: check dW2 and db1)*. Rerun the gradient check when you think you've fixed it. Remember you'll need to re-execute the cell defining `backward_propagation_n()` if you modify the code. Can you get gradient check to declare your derivative computation correct? Even though this part of the assignment isn't graded, we strongly urge you to try to find the bug and re-run gradient check until you're convinced backprop is now correctly implemented. **Note** - Gradient Checking is slow! Approximating the gradient with $\frac{\partial J}{\partial \theta} \approx \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon}$ is computationally costly. For this reason, we don't run gradient checking at every iteration during training. Just a few times to check if the gradient is correct. - Gradient Checking, at least as we've presented it, doesn't work with dropout. You would usually run the gradient check algorithm without dropout to make sure your backprop is correct, then add dropout. Congrats, you can be confident that your deep learning model for fraud detection is working correctly! You can even use this to convince your CEO. :) <font color='blue'> **What you should remember from this notebook**: - Gradient checking verifies closeness between the gradients from backpropagation and the numerical approximation of the gradient (computed using forward propagation). - Gradient checking is slow, so we don't run it in every iteration of training. You would usually run it only to make sure your code is correct, then turn it off and use backprop for the actual learning process.
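The same two-sided check can be seen end to end on a tiny hand-coded example. This is my own illustration, not part of the assignment: the toy cost function `cost` and its `analytic_grad` are made up, but the checking logic mirrors `gradient_check_n` in pure Python.

```python
# Toy gradient check on J(theta) = theta0^2 + 3*theta0*theta1 (hypothetical example)

def cost(theta):
    return theta[0] ** 2 + 3 * theta[0] * theta[1]

def analytic_grad(theta):
    # dJ/dtheta0 = 2*theta0 + 3*theta1, dJ/dtheta1 = 3*theta0
    return [2 * theta[0] + 3 * theta[1], 3 * theta[0]]

def gradient_check(theta, epsilon=1e-7):
    grad = analytic_grad(theta)
    gradapprox = []
    for i in range(len(theta)):
        plus, minus = list(theta), list(theta)
        plus[i] += epsilon    # theta_plus
        minus[i] -= epsilon   # theta_minus
        gradapprox.append((cost(plus) - cost(minus)) / (2 * epsilon))
    # relative difference ||grad - gradapprox|| / (||grad|| + ||gradapprox||)
    num = sum((g - a) ** 2 for g, a in zip(grad, gradapprox)) ** 0.5
    den = sum(g * g for g in grad) ** 0.5 + sum(a * a for a in gradapprox) ** 0.5
    return num / den

difference = gradient_check([1.5, -2.0])  # tiny when the analytic gradient is right
```

If you deliberately corrupt one component of `analytic_grad` (say, drop the `3 * theta[1]` term), the returned difference jumps by many orders of magnitude, which is exactly how the assignment's buggy `dW2` and `db1` were caught.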
# Tours through the Book While the chapters of this book can be read one after the other, there are many possible paths through the book. In this graph, an arrow _A_ → _B_ means that chapter _A_ is a prerequisite for chapter _B_. You can pick arbitrary paths in this graph to get to the topics that interest you most: ``` # ignore from bookutils import rich_output # ignore if rich_output(): from IPython.display import SVG sitemap = SVG(filename='PICS/Sitemap.svg') else: sitemap = None sitemap ``` But since even this map can be overwhelming, here are a few _tours_ to get you started. Each of these tours allows you to focus on a particular view, depending on whether you are a programmer, student, or researcher. ## The Pragmatic Programmer Tour You have a program to debug. You want to learn about techniques that help you for debugging. You want to get to the point. 1. __Start with [Introduction to Debugging](Intro_Debugging.ipynb) to get the basic concepts.__ (You would know most of these anyway, but it can't hurt to get quick reminders). 2. __Experiment with [interactive debuggers](Debugger.ipynb)__ to observe executions interactively. 3. __Use [assertions](Assertions.ipynb)__ to have the computer systematically run checks during execution. 4. __Explore [delta debugging](DeltaDebugger.ipynb) for your program__ and see how it simplifies failure-inducing inputs [and changes](ChangeDebugger.ipynb). 5. __Visualize [dependencies](Slicer.ipynb) in your program__ and track the flow of data and control in the execution. 6. __Check out [statistical debugging](StatisticalDebugger.ipynb)__ for a simple way to determine code locations associated with failures. 7. __See [automatic repairs](Repairer.ipynb) in action__. In each of these chapters, start with the "Synopsis" parts; these will give you quick introductions on how to use things, as well as point you to relevant usage examples. With this, enough said. Get back to work and enjoy! 
## The Young Researcher Tour Your school wants you to publish a great paper every few months. You are looking for infrastructures that you can toy with, helping you to build and evaluate cool ideas. 1. __Toy with [tracer](Tracer.ipynb)__ to collect data on program executions - features you can then [correlate with program failures](StatisticalDebugger.ipynb). 2. __Extract [dependencies](Slicer.ipynb) from your program__ and use them for runtime verification, fault localization, and better repair. 3. __Use [automatic repairs](Repairer.ipynb)__ as a starting point to build your own, better repairs. The Python implementation is well documented and extremely easy to expand. 4. Bring in __[test generation and structured inputs](DDSetDebugger.ipynb)__ to collect input features that relate to failures. 5. Mine __[version and bug repositories](ChangeCounter.ipynb)__ to explore how bugs and all these features evolve over time. For each of these chapters, experiment with the techniques, and then build your own on top. Read the relevant papers to learn more about their motivation and background. And now go and improve the world of automated debugging! ## Lessons Learned * You can go through the book from beginning to end... * ...but it may be preferable to follow a specific tour, based on your needs and resources. * Now [go and explore automated debugging](index.ipynb)!
# Hyperexponential Static Schedule ``` import numpy as np import scipy, math from scipy.stats import binom, erlang, poisson from scipy.optimize import minimize def SCV_to_params(SCV): # weighted Erlang case if SCV <= 1: K = math.floor(1/SCV) p = ((K + 1) * SCV - math.sqrt((K + 1) * (1 - K * SCV))) / (SCV + 1) mu = K + (1 - p) return K, p, mu # hyperexponential case else: p = 0.5 * (1 + np.sqrt((SCV - 1) / (SCV + 1))) mu = 1 # 1 / mean mu1 = 2 * p * mu mu2 = 2 * (1 - p) * mu return p, mu1, mu2 n = 3 omega = 0.5 SCV = 1.3 p, mu1, mu2 = SCV_to_params(SCV) def trans_p(k,l,y,z,t,p,mu1,mu2): # 1. No client has been served before time t. if l == k+1 and z == y: if y == 1: return np.exp(-mu1 * t) elif y == 2: return np.exp(-mu2 * t) # 2. All clients have been served before time t. elif l == 1: if y == 1: prob = sum([binom.pmf(m, k-1, p) * psi(t, m+1, k-1-m, mu1, mu2) for m in range(k)]) if z == 1: return p * prob elif z == 2: return (1-p) * prob elif y == 2: prob = sum([binom.pmf(m, k-1, p) * psi(t, m, k-m, mu1, mu2) for m in range(k)]) if z == 1: return p * prob elif z == 2: return (1-p) * prob # 3. Some (but not all) clients have been served before time t. 
elif 2 <= l <= k: if y == 1: prob_diff = sum([binom.pmf(m, k-l, p) * psi(t, m+1, k-l-m, mu1, mu2) for m in range(k-l+1)]) \ - sum([binom.pmf(m, k-l+1, p) * psi(t, m+1, k-l+1-m, mu1, mu2) for m in range(k-l+2)]) if z == 1: return p * prob_diff elif z == 2: return (1-p) * prob_diff elif y == 2: prob_diff = sum([binom.pmf(m, k-l, p) * psi(t, m, k-l+1-m, mu1, mu2) for m in range(k-l+1)]) \ - sum([binom.pmf(m, k-l+1, p) * psi(t, m, k-l+2-m, mu1, mu2) for m in range(k-l+2)]) if z == 1: return p * prob_diff elif z == 2: return (1-p) * prob_diff # any other case is invalid return 0 def zeta(alpha, t, k): if not k: return (np.exp(alpha * t) - 1) / alpha else: return ((t ** k) * np.exp(alpha * t) - k * zeta(alpha, t, k-1)) / alpha def rho(t,m,k,mu1,mu2): if not k: return np.exp(-mu2 * t) * (mu1 ** m) / ((mu1 - mu2) ** (m + 1)) * erlang.cdf(t, m+1, scale=1/(mu1 - mu2)) elif not m: return np.exp(-mu1 * t) * (mu2 ** k) / math.factorial(k) * zeta(mu1-mu2, t, k) else: return (mu1 * rho(t, m-1, k, mu1, mu2) - mu2 * rho(t, m, k-1, mu1, mu2)) / (mu1 - mu2) def psi(t,m,k,mu1,mu2): if not m: return erlang.cdf(t, k, scale=1/mu2) else: return erlang.cdf(t, m, scale=1/mu1) - mu1 * sum([rho(t, m-1, i, mu1, mu2) for i in range(k)]) def sigma(t,m,k,mu1,mu2): if not k: return t * erlang.cdf(t, m, scale=1/mu1) - (m / mu1) * erlang.cdf(t, m+1, scale=1/mu1) elif not m: return t * erlang.cdf(t, k, scale=1/mu2) - (k / mu2) * erlang.cdf(t, k+1, scale=1/mu2) else: return (t - k / mu2) * erlang.cdf(t, m, scale=1/mu1) - (m / mu1) * erlang.cdf(t, m+1, scale=1/mu1) \ + (mu1 / mu2) * sum([(k - i) * rho(t, m-1, i, mu1, mu2) for i in range(k)]) def f_bar(t,k,y,p,mu1,mu2): if y == 1: return sum([binom.pmf(m, k-1, p) * sigma(t, m+1, k-1-m, mu1, mu2) for m in range(k)]) elif y == 2: return sum([binom.pmf(m, k-1, p) * sigma(t, m, k-m, mu1, mu2) for m in range(k)]) def h_bar(k,y,mu1,mu2): if k == 1: return 0 else: if y == 1: return (k-2) + (1/mu1) elif y == 2: return (k-2) + (1/mu2) def 
compute_probs_hyp(t,p,mu1,mu2): """ Computes P(N_ti = j, Z_ti = z) for i=1,...,n, j=1,...,i and z=1,2. """ n = len(t) probs = [[[None for z in range(2)] for j in range(i+1)] for i in range(n)] probs[0][0][0] = p probs[0][0][1] = 1 - p for i in range(2,n+1): x_i = t[i-1] - t[i-2] for j in range(1,i+1): for z in range(1,3): probs[i-1][j-1][z-1] = 0 for k in range(max(1,j-1),i): for y in range(1,3): probs[i-1][j-1][z-1] += trans_p(k,j,y,z,x_i,p,mu1,mu2) * probs[i-2][k-1][y-1] return probs def static_cost_hyp(t,p,mu1,mu2,omega): """ Computes the cost of a static schedule in the hyperexponential case. """ n = len(t) # total expected waiting/idle time sum_EW, sum_EI = 0, 0 probs = compute_probs_hyp(t, p, mu1, mu2) for i in range(2,n+1): # waiting time for k in range(2,i+1): sum_EW += h_bar(k, 1, mu1, mu2) * probs[i-1][k-1][0] + h_bar(k, 2, mu1, mu2) * probs[i-1][k-1][1] # idle time for k in range(1,i): x_i = t[i-1] - t[i-2] sum_EI += f_bar(x_i, k, 1, p, mu1, mu2) * probs[i-2][k-1][0] + f_bar(x_i, k, 2, p, mu1, mu2) * probs[i-2][k-1][1] return omega * sum_EI + (1 - omega) * sum_EW static_cost_hyp(range(n),p,mu1,mu2,omega) optimization = minimize(static_cost_hyp, range(n), args=(p,mu1,mu2,omega)) optimization.x -= optimization.x[0] # let the schedule start at time 0 print(optimization) ``` ## Web Scraping ``` from urllib.request import urlopen from bs4 import BeautifulSoup as soup import pandas as pd url = f'http://www.appointmentscheduling.info/index.php?SCV={SCV}&N={n}&omega={omega}&objFun=1' # opening up connection, grabbing the page uClient = urlopen(url) page_html = uClient.read() uClient.close() # html parsing page_soup = soup(page_html, "html.parser") table = page_soup.findAll("table", {"class": "bordered"})[1] # get appointment schedule df = pd.read_html(str(table))[0] schedule = df[df.columns[2]].values[:-2] schedule static_cost_hyp(schedule,p,mu1,mu2,omega) ```
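As a sanity check on the hyperexponential branch of `SCV_to_params` (my own addition, not from the notebook): with $p = \tfrac{1}{2}(1 + \sqrt{(SCV-1)/(SCV+1)})$, $\mu_1 = 2p$, $\mu_2 = 2(1-p)$, the mixture has mean 1 and squared coefficient of variation exactly equal to the requested SCV. The sketch below recomputes both moments from first principles.

```python
import math

def scv_to_hyperexp(SCV):
    # hyperexponential branch of SCV_to_params (valid for SCV > 1)
    p = 0.5 * (1 + math.sqrt((SCV - 1) / (SCV + 1)))
    return p, 2 * p, 2 * (1 - p)

def hyperexp_moments(p, mu1, mu2):
    # exponential with rate mu1 w.p. p, rate mu2 w.p. 1-p
    mean = p / mu1 + (1 - p) / mu2
    second = 2 * p / mu1 ** 2 + 2 * (1 - p) / mu2 ** 2
    return mean, (second - mean ** 2) / mean ** 2  # mean and SCV

p, mu1, mu2 = scv_to_hyperexp(1.3)
mean, scv = hyperexp_moments(p, mu1, mu2)  # mean == 1, scv == 1.3 (up to rounding)
```

The algebra behind it: $1/(2p) + 1/(2(1-p)) = 2/(1-q^2)$ with $q^2 = (SCV-1)/(SCV+1)$, which simplifies to $SCV + 1$, so the variance over the squared mean is exactly SCV.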
<a href="https://colab.research.google.com/github/https-deeplearning-ai/tensorflow-1-public/blob/adding_C3/C3/W4/assignment/C3_W4_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` **Note:** This notebook can run using TensorFlow 2.5.0 ``` #!pip install tensorflow==2.5.0 from tensorflow.keras.preprocessing.sequence import pad_sequences from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout, Bidirectional from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.models import Sequential from tensorflow.keras.optimizers import Adam ### YOUR CODE HERE # Figure out how to import regularizers ### import tensorflow.keras.utils as ku import numpy as np tokenizer = Tokenizer() # sonnets.txt !gdown --id 108jAePKK4R3BVYBbYJZ32JWUwxeMg20K data = open('./sonnets.txt').read() corpus = data.lower().split("\n") tokenizer.fit_on_texts(corpus) total_words = len(tokenizer.word_index) + 1 # create input sequences using list of tokens input_sequences = [] for line in corpus: token_list = tokenizer.texts_to_sequences([line])[0] for i in range(1, len(token_list)): n_gram_sequence = token_list[:i+1] input_sequences.append(n_gram_sequence) # pad sequences max_sequence_len = max([len(x) for x in input_sequences]) input_sequences = np.array(pad_sequences(input_sequences, maxlen=max_sequence_len, padding='pre')) # create predictors and 
label predictors, label = input_sequences[:,:-1],input_sequences[:,-1] label = ku.to_categorical(label, num_classes=total_words) ### START CODE HERE model = Sequential() model.add(# Your Embedding Layer) model.add(# An LSTM Layer) model.add(# A dropout layer) model.add(# Another LSTM Layer) model.add(# A Dense Layer including regularizers) model.add(# A Dense Layer) # Pick an optimizer model.compile(# Pick a loss function and an optimizer) ### END CODE HERE print(model.summary()) history = model.fit(predictors, label, epochs=100, verbose=1) import matplotlib.pyplot as plt acc = history.history['accuracy'] loss = history.history['loss'] epochs = range(len(acc)) plt.plot(epochs, acc, 'b', label='Training accuracy') plt.title('Training accuracy') plt.figure() plt.plot(epochs, loss, 'b', label='Training Loss') plt.title('Training loss') plt.legend() plt.show() seed_text = "Help me Obi Wan Kenobi, you're my only hope" next_words = 100 for _ in range(next_words): token_list = tokenizer.texts_to_sequences([seed_text])[0] token_list = pad_sequences([token_list], maxlen=max_sequence_len-1, padding='pre') predicted = model.predict_classes(token_list, verbose=0) output_word = "" for word, index in tokenizer.word_index.items(): if index == predicted: output_word = word break seed_text += " " + output_word print(seed_text) ```
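For reference, here is one plausible way to fill in the blanks above. This is a sketch, not the official solution: the embedding dimension, layer widths, dropout rate, and L2 strength are my own choices, and it assumes `total_words` and `max_sequence_len` from the earlier cells.

```python
from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout, Bidirectional
from tensorflow.keras.models import Sequential
from tensorflow.keras import regularizers  # one way to answer the "import regularizers" hint

# total_words and max_sequence_len come from the tokenizer/padding cells above
model = Sequential()
model.add(Embedding(total_words, 100, input_length=max_sequence_len - 1))
model.add(Bidirectional(LSTM(150, return_sequences=True)))
model.add(Dropout(0.2))
model.add(LSTM(100))
model.add(Dense(total_words // 2, activation='relu',
                kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense(total_words, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```

Categorical cross-entropy matches the one-hot `label` produced by `ku.to_categorical`, and the final `Dense(total_words, activation='softmax')` outputs one probability per vocabulary word.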
<a href="https://colab.research.google.com/github/MainakRepositor/ML-Algorithms/blob/master/Simple_Linear_Regression.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # SIMPLE LINEAR REGRESSION ## Simple Linear Regression is a machine learning model falling under the regression category. It predicts results based on the slope of the best-fit line. In this process, the machine (here, our computer) automatically decides the best gradient, or slope, that represents the relation between the corresponding x and y values. ## Mathematical Explanation : <img src="https://www.dr-asims-anatomy-cafe.com/wp-content/uploads/2015/12/simple-linear-regression-equation-624x468.jpg" width=50%> ## The purpose of the slope is to set a standard along which these points can be aligned. The graph whose slope covers the maximum number of points is called a best-fit graph. But finding it by hand means trying out many slopes, which is tedious. Linear regression makes the task easy: it finds the best-fit graph for us so that we can use it for further work. <img src="https://cdn-images-1.medium.com/max/640/1*eeIvlwkMNG1wSmj3FR6M2g.gif" width=60%> ## Working principle : The working principle can be listed in these steps:- <p>i. Accept the x and y values from the dataset.</p> <p>ii. Break them into train and test sets for verification of the model.</p> <p>iii. Fit the x and y in the regressor. This provides us with the slope.</p> <p>iv. Now, using this slope, predict the values of y from the x inputs.</p> <p>We will see the same in our model as we proceed through the steps.</p> ## 1. Importing necessary libraries ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline print("All packages are included successfully!") ``` ## 2.
Importing the dataset ``` url = 'https://raw.githubusercontent.com/MainakRepositor/Datasets-/master/Salary_Data.csv' df = pd.read_csv(url,error_bad_lines=False) print("Displaying the top 5 rows of the dataset :\n") df.head() ``` ### Check dataset dimensions, in order to get a good idea for train_test splitting. ``` df.shape ``` ## 3.Checking for missing values ``` print("Null values present : ",df.isnull().values.any()) ``` ## 4.Building the Simple Linear Regression model ### Obtaining the x and y arrays ``` x = df.iloc[:,:-1].values.reshape(-1,1) y = df.iloc[:,-1].values print("Show x and y") print("--------X---------\n") print(x) print("--------Y---------\n") print(y) print("\n\n") print("type of data of x :",type(x)) print("type of data of y :",type(y)) ``` ### Importing linear regression packages. ``` from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression reg = LinearRegression() ``` ### Setting the training and testing dataset sizes and randomly shuffling the values ``` x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=1/3,random_state=0) reg.fit(x_train,y_train) ``` ### Obtaining the slope ``` y_pred = reg.predict(x_test) print(y_pred) ``` ### Finding the coefficient ``` reg.coef_ ``` ### Finding the intercept ``` reg.intercept_ ``` ### Making a prediction and scoring the result ### Predict the salary for 8 years of experience ``` reg.predict([[8]]) ``` ### Mathematical justification : ### Checking the equation in terms of y = coefficient * x + intercept ``` print("coefficient = ",reg.coef_) print("intercept = ",reg.intercept_) print("value of x = ",8) print("Result (y) = ",(reg.coef_*8 + reg.intercept_)) ``` ### Hence, y is equal to the predicted result. ``` y = reg.coef_*8 + reg.intercept_ yp = reg.predict([[8]]) r = int(100.00 - (y - yp)) print("Predicted result accuracy percentage : ",r,"%") ``` ### Displaying test values ``` print(x_test) print(y_train) ``` ## 5. 
Visualizing the results ``` plt.figure(figsize=(20,7)) plt.scatter(x_train,y_train,color='red') plt.plot(x_train,reg.predict(x_train),color='blue') plt.xlabel('Years of Experience',size=18,color='indigo') plt.ylabel('Salary',size=18,color='indigo') plt.title('Salary vs Experience regression graph (Training set)\n',size=24,color='orange',fontweight='bold') plt.show() plt.figure(figsize=(20,7)) plt.scatter(x_test,y_test,color='green') plt.plot(x_train,reg.predict(x_train),color='black') plt.xlabel('Years of Experience',size=18,color='indigo') plt.ylabel('Salary',size=18,color='indigo') plt.title('Salary vs Experience regression graph (Test set)\n',size=24,color='green',fontweight='bold') plt.show() ``` ## Pros and Cons of Linear Regression : Everything has a good side and a bad side, or a limitation within which it works best. Well, we have them for Linear Regression also. Pros or Advantages : <p>i. Automatically chooses the best-fit graph for any dataset with a single x and y.</p> <p>ii. Works very fast compared to other regression models, due to its simplicity.</p> <p>iii. Useful for two-variable datasets (x and y only) of any size.</p> Cons or Disadvantages : <p>i. Not good for multivariate datasets (datasets with more than 1 independent parameter).</p> <p>ii. Cannot recognize mathematical patterns like a sine series or quadratic trends.</p> <p>iii. Cannot produce a good result if the data is largely scattered.</p> ``` ```
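Under the hood, simple linear regression has a closed-form solution: slope = cov(x, y)/var(x) and intercept = mean(y) - slope*mean(x), which is what `reg.coef_` and `reg.intercept_` report. A dependency-free sketch on made-up points (not the salary dataset):

```python
def fit_simple_ols(x, y):
    # slope = cov(x, y) / var(x); intercept = mean(y) - slope * mean(x)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return slope, intercept

# points generated from y = 2x + 1 are recovered exactly
slope, intercept = fit_simple_ols([1, 2, 3, 4], [3, 5, 7, 9])
```

Predicting then reduces to `slope * x_new + intercept`, the same `y = coefficient * x + intercept` justification shown above.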
``` import numpy as np import pandas as pd ``` ## Data ``` df = pd.read_csv("00-1-shipman-confirmed-victims-x.csv") df.head() ``` ## With plotly ``` import plotly.graph_objects as go import plotly.express as px fig = px.scatter( df, x="fractionalDeathYear", y="Age", color="gender2", color_discrete_sequence=px.colors.qualitative.Set1, marginal_x="histogram", marginal_y="histogram", ) fig.update_layout( autosize=False, width=800, height=600, yaxis_title="Age of victim", xaxis_title="Year of Death", bargap=0.2, # gap between bars of adjacent location coordinates bargroupgap=0., # gap between bars of the same location coordinates, legend=dict( yanchor="top", y=1, xanchor="left", x=0, bordercolor="Black", borderwidth=1, title_text="" ) ) fig.update_xaxes( showgrid=True, ticks="outside", tickson="boundaries", ticklen=5 ) fig.update_yaxes( showgrid=True, ticks="outside", tickson="boundaries", ticklen=5, ) fig px.pie(df, names=df.gender2.unique(), values=df.gender2.value_counts(), color_discrete_sequence=px.colors.qualitative.Set1) ``` ## With plotnine ``` from plotnine import * s = ggplot(df, aes(x="fractionalDeathYear", y="Age", colour="reorder(gender2, gender)")) # initialise plot for the scatter-chart s += geom_point(size=2) # assign scatter chart-type with size 2 points s += labs(x="Year of Death", y="Age of victim") # Adds axis labels s += scale_x_continuous(breaks=range(1975, 1996, 5), limits=[1974,1998]) #x-axis labels every 5 years and between 74 and 98 s += scale_y_continuous(breaks=range(40, 91, 10), limits=[39,95]) # y-axis every 10 years and between 39 and 95 #s += scale_size_continuous(name="Size", guide=False) # turns off size legend s += scale_color_brewer(type="qual", palette="Set1") # sets the colour palette s += theme(legend_position=(0.2,1.12), legend_background=element_rect(color="grey"), legend_title=element_blank()) # positions. 
borders, and un-titles the legend s s += geom_rug(sides="tr", show_legend=False) s ``` ## With seaborn ``` import seaborn as sns import matplotlib.pyplot as plt sns.set(style="ticks", context="notebook") sns.set_palette("Set1") plt.figure(figsize=(8, 6)) #g = sns.scatterplot(data=df, x="fractionalDeathYear", y="Age", hue="gender2") g = sns.jointplot(data=df, x="fractionalDeathYear", y="Age", marginal_kws=dict(bins=25), color="black") g.set_axis_labels("Year of Death", "Age of victim") sns.despine(left=False, bottom=False) ```
AWS/GCP/Azure — These providers expose the official cloud service assets that you would be using for any diagram that leverages one of the main cloud providers. ``` !su root apt-get update !apt-get -y install python-pydot !apt-get -y install python-pydot-ng !apt-get -y install graphviz import matplotlib %matplotlib inline import matplotlib.pyplot as plt %load_ext autoreload %autoreload 2 packages = !conda list len(packages),packages from graphviz import Digraph dot = Digraph(comment='The Round Table') print(dot) dot.node('A', 'King Arthur') dot.node('B', 'Sir Bedevere the Wise') dot.node('L', 'Sir Lancelot the Brave') dot.edges(['AB', 'AL']) dot.edge('B', 'L', constraint='false') print(dot.source) dot.render('test-output/round-table.jpg', view=True) dot import diagrams from diagrams import Diagram, Cluster ``` ## Exercise 3-8 Architecture ``` from diagrams import Diagram, Cluster from diagrams.aws.compute import EC2 from diagrams.aws.network import ELB from diagrams.aws.network import Route53 from diagrams.onprem.database import PostgreSQL # Would typically use RDS from aws.database from diagrams.onprem.inmemory import Redis # Would typically use ElastiCache from aws.database from diagrams.aws.ml import Rekognition with Diagram("Week-3-8 Exercise", direction='LR') as diag: # It's LR by default, but you have a few options with the orientation dns = Route53("dns") load_balancer = ELB("Elastic Load Balancer") database = PostgreSQL("RDS mySQL") # cache = Redis("Cache") with Cluster("Webserver Cluster"): svc_group = [EC2("EC2 Webserver 1"), EC2("EC2 Webserver 2")] dns >> load_balancer >> svc_group # svc_group >> cache svc_group >> database diag # This will illustrate the diagram if you are using a Google Colab or Jupyter notebook.
``` ## Big AWS Architecture ``` from diagrams import Diagram, Cluster from diagrams.aws.compute import EC2 from diagrams.aws.network import ELB from diagrams.aws.network import Route53 from diagrams.onprem.database import Mysql from diagrams.aws.compute import Lambda from diagrams.aws.storage import SimpleStorageServiceS3 from diagrams.aws.network import VPCRouter from diagrams.aws.ml import Rekognition with Diagram("AWS Architecture", direction='LR') as diag: dns = Route53("dns") vpc = VPCRouter("VPC") load_balancer = ELB("Elastic Load Balancer") database = Mysql("RDS mySQL") lambda1 = Lambda("microservice 1") lambda2 = Lambda("microservice 2") rek = Rekognition("Rekognition") s3 = SimpleStorageServiceS3("S3") with Cluster("Webserver Cluster"): svc_group = [EC2("EC2 Webserver 1"), EC2("EC2 Webserver 2")] dns >> load_balancer >> vpc vpc >> svc_group >> database vpc >> svc_group >> lambda1 lambda1 >> s3 vpc >> svc_group >> lambda2 lambda2 >> s3 vpc >> svc_group >> rek rek >> s3 diag from diagrams import Diagram, Cluster from diagrams.azure.network import Connections from diagrams.onprem.database import Mssql from diagrams.azure.compute import ContainerInstances from diagrams.azure.ml import CognitiveServices from diagrams.azure.analytics import Databricks from diagrams.azure.storage import TableStorage from diagrams.azure.web import AppServices with Diagram("Azure Architecture", direction='LR') as diag: dns = Connections("http") vpc = Databricks("Databricks") database = Mssql("Mssql") lambda1 = ContainerInstances("microservice 1") lambda2 = ContainerInstances("microservice 2") ml = CognitiveServices("CognitiveServices") storage = TableStorage("TableStorage") with Cluster("Webserver Cluster"): svc_group = [AppServices("Appserver 1"), AppServices("Appserver 2")] dns >> vpc vpc >> svc_group >> database vpc >> svc_group >> lambda1 lambda1 >> storage vpc >> svc_group >> lambda2 lambda2 >> storage vpc >> svc_group >> ml ml >> storage diag ``` ## GCP ``` from diagrams import
Diagram, Cluster from diagrams.onprem.database import Oracle from diagrams.gcp.network import DedicatedInterconnect from diagrams.gcp.network import LoadBalancing from diagrams.gcp.compute import ContainerOptimizedOS from diagrams.gcp.ml import AutomlVision from diagrams.gcp.network import VirtualPrivateCloud from diagrams.gcp.storage import Storage from diagrams.gcp.compute import KubernetesEngine with Diagram("GCP Architecture", direction='LR') as diag: dns = DedicatedInterconnect("http") load_balancer = LoadBalancing("Load Balancer") vpc = VirtualPrivateCloud("VPC") database = Oracle("Oracle") lambda1 = ContainerOptimizedOS("microservice 1") lambda2 = ContainerOptimizedOS("microservice 2") ml = AutomlVision("AutomlVision") storage = Storage("Storage") with Cluster("Kubernetes Cluster"): svc_group = [KubernetesEngine("KubernetesEngine 1"), KubernetesEngine("KubernetesEngine 2")] dns >> load_balancer >> vpc vpc >> svc_group >> database vpc >> svc_group >> lambda1 lambda1 >> storage vpc >> svc_group >> lambda2 lambda2 >> storage vpc >> svc_group >> ml ml >> storage diag ```
<a href="https://colab.research.google.com/github/yukinaga/brain_ai_book/blob/master/neural_network_on_torus_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # **Introducing Homeostasis** In this network, we introduce homeostasis by deciding each neuron's pass/fail, that is, its excitation/inhibition, with an "entrance exam" scheme: * Sort the neurons in descending order of their input to the activation function, select the top ones at a fixed ratio, and set their output to 1 (excited) * Set the output of the remaining neurons to 0 (inhibited) This way, a fixed ratio of neurons is always firing, so the network's homeostasis is maintained. ``` # import numpy as np # CPU import cupy as np # GPU import matplotlib import matplotlib.pyplot as plt from matplotlib import animation, rc # @markdown ### **Neural network on a torus** # @markdown Enter the ratio of projection neurons. proj_ratio = 0.5 # @param {type:"number"} # @markdown Enter the ratio of inhibitory neurons. inhib_ratio = 0.2 # @param {type:"number"} # @markdown Enter the number of time steps. steps = 240 # @param {type:"number"} # @markdown ### **Introducing homeostasis** # @markdown Enter the ratio of neurons to excite. excite_ratio = 0.5 # @param {type:"number"} # @markdown ### **Saving the animation** # @markdown Save as a GIF animation? save_gif = False #@param {type:"boolean"} # @markdown Save as an MP4 video?
save_mp4 = False #@param {type:"boolean"} n_h = 256 # height of the region n_w = 256 # width of the region n_connect = 64 # number of inputs per neuron sigma_inter = 4 # std. dev. of interneuron axon length w_mu = 0.25 # mean of the weights w_sigma = 0.08 # std. dev. of the weights matplotlib.rcParams["animation.embed_limit"] = 2**128 # upper limit on animation size # class for the neural network on a torus class TorusNetwork(): def __init__(self, n_h, n_w, n_connect): n_neuron = n_h * n_w # total number of neurons self.params = (n_h, n_w, n_neuron, n_connect) self.connect_ids = None # indices of presynaptic cells self.w = None # weights self.b = None # biases self.y = None # neuron outputs self.proj = None # whether each neuron is a projection neuron self.proj_ids = None # indices of projection neurons self.proj_to_ids = None # indices of projection targets self.inhib = None # whether each neuron is inhibitory def connect(self, proj_ratio, sigma_inter): n_h, n_w, n_neuron, n_connect = self.params # randomly select the projection neurons n_proj = int(proj_ratio * n_neuron) rand_ids = np.random.permutation(np.arange(n_neuron)) self.proj_ids = rand_ids[:n_proj] self.proj_to_ids = np.random.permutation(self.proj_ids) self.proj = np.zeros(n_neuron, dtype=np.bool) self.proj[self.proj_ids] = True # x coordinates of the interneurons inter_dist_x = np.random.randn(n_neuron, n_connect) * sigma_inter inter_dist_x = np.where(inter_dist_x<0, inter_dist_x-0.5, inter_dist_x+0.5).astype(np.int32) x_connect = np.zeros((n_neuron, n_connect), dtype=np.int32) x_connect += np.arange(n_neuron).reshape(-1, 1) x_connect %= n_w x_connect += inter_dist_x x_connect = np.where(x_connect<0, x_connect+n_w, x_connect) x_connect = np.where(x_connect>=n_w, x_connect-n_w, x_connect) # y coordinates of the interneurons inter_dist_y = np.random.randn(n_neuron, n_connect) * sigma_inter inter_dist_y = np.where(inter_dist_y<0, inter_dist_y-0.5, inter_dist_y+0.5).astype(np.int32) y_connect = np.zeros((n_neuron, n_connect), dtype=np.int32) y_connect += np.arange(n_neuron).reshape(-1, 1) y_connect //= n_w y_connect += inter_dist_y y_connect = np.where(y_connect<0, y_connect+n_h, y_connect) y_connect = np.where(y_connect>=n_h, y_connect-n_h, y_connect) # indices of presynaptic cells self.connect_ids = x_connect + n_w * y_connect def 
initialize_network(self, inhib_ratio, w_mu, w_sigma): n_h, n_w, n_neuron, n_connect = self.params # 抑制性ニューロンをランダムに選択 n_inhib = int(inhib_ratio * n_neuron) rand_ids = np.random.permutation(np.arange(n_neuron)) inhib_ids = rand_ids[:n_inhib] self.inhib = np.zeros(n_neuron, dtype=np.bool) self.inhib[inhib_ids] = True # 重みとバイアスの初期化 self.w = np.random.randn(n_neuron, n_connect) * w_sigma + w_mu self.w = np.where(self.inhib[self.connect_ids], -self.w, self.w) self.w /= np.sum(self.w, axis=1, keepdims=True) self.b = np.zeros(n_neuron) # 出力を初期化 self.y = np.random.randint(0, 2, n_neuron) def forward(self, excite_ratio): n_h, n_w, n_neuron, n_connect = self.params n_excite = int(excite_ratio * n_neuron) # 興奮したニューロン数 # ニューラルネットワークの計算 self.y[self.proj_to_ids] = self.y[self.proj_ids] # 投射 x = self.y[self.connect_ids] u = np.sum(self.w*x, axis=1) - self.b larger_ids = np.argpartition(-u, n_excite)[:n_excite] # 恒常性 self.y[:] = 0 self.y[larger_ids] = 1 # ネットワークの設定 tnet = TorusNetwork(n_h, n_w, n_connect) tnet.connect(proj_ratio, sigma_inter) tnet.initialize_network(inhib_ratio, w_mu, w_sigma) # 結果表示のための設定 figure = plt.figure() plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False) plt.tick_params(bottom=False, left=False, right=False, top=False) # 興奮したニューロンのカラー c_excite = np.array([30, 144, 255]).reshape(1, -1) c_map = np.zeros((n_h*n_w, 3)) + c_excite # 画像を格納するリスト images = [] for i in range(steps): tnet.forward(excite_ratio) y = tnet.y.reshape(-1, 1) image = np.zeros((n_h*n_w, 3)) image = np.where(y, c_map, image) image = image.reshape(n_h, n_w, -1).astype(np.uint8) image = plt.imshow(image.tolist()) images.append([image]) anim = animation.ArtistAnimation(figure, images, interval=100) rc("animation", html="jshtml") if save_gif: anim.save("torus_network2.gif", writer="pillow", fps=10) if save_mp4: anim.save("torus_network2.mp4", writer="ffmpeg") plt.close() # 通常のグラフを非表示に anim ``` 結果がうまく表示されない場合は、「ランタイム」→「ランタイムを出荷時設定にリセット」を選択し、ランタイムをリセットしましょう。 
Note that the computation uses a GPU; with continuous use, the runtime may be disconnected partway through. Reduce the number of time steps, or wait a while before trying again.
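The "entrance exam" selection described above boils down to a top-k operation on the neurons' activation inputs. As a minimal CPU-only sketch (plain NumPy rather than CuPy, with a hypothetical `homeostatic_step` helper that is not part of the notebook's code), the homeostasis rule in `forward()` works like this:

```python
import numpy as np

def homeostatic_step(u, excite_ratio):
    """Excite only the top fraction of neurons, ranked by activation input u."""
    n_excite = int(excite_ratio * u.size)
    y = np.zeros(u.size, dtype=np.int32)
    # argpartition on -u puts the indices of the n_excite largest inputs first.
    top_ids = np.argpartition(-u, n_excite)[:n_excite]
    y[top_ids] = 1  # these neurons "pass the exam" and fire
    return y

u = np.array([0.1, 0.9, 0.4, 0.7, 0.2, 0.8])
y = homeostatic_step(u, 0.5)
# y.tolist() -> [0, 1, 0, 1, 0, 1]: exactly half of the neurons fire,
# namely those with the three largest inputs (0.9, 0.8, 0.7).
```

Because `argpartition` only partially sorts, this stays cheap even for the 256×256 = 65,536 neurons used above.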
# Deploy a BigQuery ML user churn propensity model to Vertex AI for online predictions

## Learning objectives

* Explore and preprocess a [Google Analytics 4](https://support.google.com/analytics/answer/7029846) data sample in [BigQuery](https://cloud.google.com/bigquery) for machine learning.
* Train a [BigQuery ML (BQML)](https://cloud.google.com/bigquery-ml) [XGBoost](https://xgboost.readthedocs.io/en/latest/) classifier to predict user churn on a mobile gaming application.
* Tune a BQML XGBoost classifier using [BQML hyperparameter tuning features](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-create-boosted-tree).
* Evaluate the performance of a BQML XGBoost classifier.
* Explain your XGBoost model with [BQML Explainable AI](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-xai-overview) global feature attributions.
* Generate batch predictions with your BQML XGBoost model.
* Export a BQML XGBoost model to a [Google Cloud Storage](https://cloud.google.com/storage) bucket.
* Upload and deploy a BQML XGBoost model to a [Vertex AI Prediction](https://cloud.google.com/vertex-ai/docs/predictions/getting-predictions) Endpoint for online predictions.

## Introduction

In this lab, you will train, evaluate, explain, and generate batch and online predictions with a BigQuery ML (BQML) XGBoost model. You will use a Google Analytics 4 dataset from a real mobile application, Flood it! ([Android app](https://play.google.com/store/apps/details?id=com.labpixies.flood), [iOS app](https://itunes.apple.com/us/app/flood-it!/id476943146?mt=8)), to determine the likelihood of users returning to the application. You will generate batch predictions with your BigQuery ML model as well as export and deploy it to **Vertex AI** for online predictions.
[BigQuery ML](https://cloud.google.com/bigquery-ml/docs/introduction) lets you train machine learning models and run batch inference in BigQuery using standard SQL queries, eliminating the need to move data and reducing the amount of code you write. [Vertex AI](https://cloud.google.com/vertex-ai) is Google Cloud's complementary next-generation, unified platform for machine learning development. By developing and deploying BQML machine learning solutions on Vertex AI, you can leverage a scalable online prediction service and MLOps tools for model retraining and monitoring to significantly enhance your development productivity, scale your workflow and data-driven decision making, and accelerate time to value.

![BQML Vertex AI](./images/vertex-bqml-lab-architecture-diagram.png "Vertex BQML Lab Architecture Diagram")

Note: this lab is inspired by and extends [Churn prediction for game developers using Google Analytics 4 (GA4) and BigQuery ML](https://cloud.google.com/blog/topics/developers-practitioners/churn-prediction-game-developers-using-google-analytics-4-ga4-and-bigquery-ml). See that blog post and accompanying tutorial for additional depth on this use case and BigQuery ML. In this lab, you will go one step further and focus on how Vertex AI extends BQML's capabilities through online prediction, so you can incorporate customer churn predictions both into decision-making UIs such as [Looker dashboards](https://looker.com/google-cloud) and directly into customer applications to power targeted interventions such as targeted incentives.

### Use case: user churn propensity modeling in the mobile gaming industry

According to a [2019 study](https://gameanalytics.com/reports/mobile-gaming-industry-analysis-h1-2019) of 100K mobile games by the Mobile Gaming Industry Analysis, most mobile games only see a 25% retention rate for users after the first 24 hours, and any game "below 30% retention generally needs improvement".
For mobile game developers, improving user retention is critical to revenue stability and increasing profitability. In fact, [Bain & Company research](https://hbr.org/2014/10/the-value-of-keeping-the-right-customers) found that 5% growth in retention rate can result in a 25-95% increase in profits. With lower costs to retain existing customers, the business objective for game developers is clear: reduce churn and improve customer loyalty to drive long-term profitability. Your task in this lab: use machine learning to predict user churn propensity after day 1, a crucial user onboarding window, and serve these online predictions to inform interventions such as targeted in-game rewards and notifications. ## Setup ### Define constants ``` # Retrieve and set PROJECT_ID and REGION environment variables. PROJECT_ID = !(gcloud config get-value core/project) PROJECT_ID = PROJECT_ID[0] BQ_LOCATION = 'US' REGION = 'us-central1' ``` ### Import libraries ``` from google.cloud import bigquery from google.cloud import aiplatform as vertexai import numpy as np import pandas as pd ``` ### Create a GCS bucket for artifact storage Create a globally unique Google Cloud Storage bucket for artifact storage. You will use this bucket to export your BQML model later in the lab and upload it to Vertex AI. ``` GCS_BUCKET = f"{PROJECT_ID}-bqmlga4" !gsutil mb -l $REGION gs://$GCS_BUCKET ``` ### Create a BigQuery dataset Next, create a BigQuery dataset from this notebook using the Python-based [`bq` command line utility](https://cloud.google.com/bigquery/docs/bq-command-line-tool). This dataset will group your feature views, model, and predictions table together. You can view it in the [BigQuery](https://pantheon.corp.google.com/bigquery) console. ``` BQ_DATASET = f"{PROJECT_ID}:bqmlga4" !bq mk --location={BQ_LOCATION} --dataset {BQ_DATASET} ``` ### Initialize the Vertex Python SDK client Import the Vertex SDK for Python into your Python environment and initialize it. 
```
vertexai.init(project=PROJECT_ID, location=REGION, staging_bucket=f"gs://{GCS_BUCKET}")
```

## Exploratory Data Analysis (EDA) in BigQuery

This lab uses a [public BigQuery dataset]() that contains raw event data from a real mobile gaming app called **Flood it!** ([Android app](https://play.google.com/store/apps/details?id=com.labpixies.flood), [iOS app](https://itunes.apple.com/us/app/flood-it!/id476943146?mt=8)). The data schema originates from Google Analytics for Firebase but is the same schema as [Google Analytics 4](https://support.google.com/analytics/answer/9358801).

Take a look at a sample of the raw event dataset using the query below:

```
%%bigquery --project $PROJECT_ID

SELECT *
FROM `firebase-public-project.analytics_153293282.events_*` TABLESAMPLE SYSTEM (1 PERCENT)
```

Note: in the cell above, JupyterLab runs cells starting with `%%bigquery` as SQL queries.

Google Analytics 4 uses an event-based measurement model, and each row in this dataset is an event. View the [complete schema](https://support.google.com/analytics/answer/7029846) and details about each column. As you can see above, certain columns are nested records and contain detailed information such as:

* app_info
* device
* ecommerce
* event_params
* geo
* traffic_source
* user_properties
* items*
* web_info*

This dataset contains 5.7M events from 15K+ users.

```
%%bigquery --project $PROJECT_ID

SELECT
    COUNT(DISTINCT user_pseudo_id) as count_distinct_users,
    COUNT(event_timestamp) as count_events
FROM `firebase-public-project.analytics_153293282.events_*`
```

## Dataset preparation in BigQuery

Now that you have a better sense of the dataset you will be working with, you will walk through transforming the raw event data into a dataset suitable for machine learning using SQL commands in BigQuery. Specifically, you will:

* Aggregate events so that each row represents a separate unique user ID.
* Define the **user churn label** your model will be trained to predict (e.g.
1 = churned, 0 = returned).
* Create **user demographic** features.
* Create **user behavioral** features from aggregated application events.

### Defining churn for each user

There are many ways to define user churn, but for the purposes of this lab, you will predict 1-day churn: users who do not come back and use the app again after the first 24 hours following their first engagement. This is meant to capture churn after a user's "first impression" of the application or onboarding experience. In other words, 24 hours after a user's first engagement with the app:

* if the user shows no event data thereafter, the user is considered **churned**.
* if the user does have at least one event datapoint thereafter, then the user is considered **returned**.

You may also want to remove users who were unlikely to have ever returned anyway after spending just a few minutes with the app, which is sometimes referred to as "bouncing". For example, you will build your model only on users who spent at least 10 minutes with the app (users who didn't bounce).
The query below defines a churned user with the following definition:

**Churned = "any user who spent at least 10 minutes on the app, but, after 24 hours from when they first engaged with the app, never used the app again"**

You will use the raw event data, from their first touch (app installation) to their last touch, to identify churned and bounced users in the `user_churn` view query below:

```
%%bigquery --project $PROJECT_ID

CREATE OR REPLACE VIEW bqmlga4.user_churn AS (
  WITH firstlasttouch AS (
    SELECT
      user_pseudo_id,
      MIN(event_timestamp) AS user_first_engagement,
      MAX(event_timestamp) AS user_last_engagement
    FROM `firebase-public-project.analytics_153293282.events_*`
    WHERE event_name="user_engagement"
    GROUP BY user_pseudo_id
  )
  SELECT
    user_pseudo_id,
    user_first_engagement,
    user_last_engagement,
    EXTRACT(MONTH from TIMESTAMP_MICROS(user_first_engagement)) as month,
    EXTRACT(DAYOFYEAR from TIMESTAMP_MICROS(user_first_engagement)) as julianday,
    EXTRACT(DAYOFWEEK from TIMESTAMP_MICROS(user_first_engagement)) as dayofweek,

    # Add 24 hr to the user's first touch.
    (user_first_engagement + 86400000000) AS ts_24hr_after_first_engagement,

    # churned = 1 if last_touch within 24 hr of app installation, else 0.
    IF (user_last_engagement < (user_first_engagement + 86400000000), 1, 0) AS churned,

    # bounced = 1 if last_touch within 10 min, else 0.
    IF (user_last_engagement <= (user_first_engagement + 600000000), 1, 0) AS bounced,
  FROM firstlasttouch
  GROUP BY user_pseudo_id, user_first_engagement, user_last_engagement
);

SELECT * FROM bqmlga4.user_churn LIMIT 100;
```

Review how many of the 15k users bounced and returned below:

```
%%bigquery --project $PROJECT_ID

SELECT bounced, churned, COUNT(churned) as count_users
FROM bqmlga4.user_churn
GROUP BY bounced, churned
ORDER BY bounced
```

For the training data, you will only end up using data where bounced = 0.
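Outside of BigQuery, the same labeling logic can be sanity-checked in pandas. This is a hypothetical three-user sketch (not lab data), using the same microsecond thresholds and the same `<` / `<=` comparisons as the SQL above:

```python
import pandas as pd

# Hypothetical per-user events, timestamps in microseconds as in the GA4 export.
events = pd.DataFrame({
    "user_pseudo_id": ["a", "a", "b", "c", "c"],
    "event_timestamp": [0, 90_000_000_000, 0, 0, 300_000_000],
})

DAY_US = 24 * 60 * 60 * 1_000_000  # 86_400_000_000: 24 hours in microseconds
TEN_MIN_US = 10 * 60 * 1_000_000   # 600_000_000: 10 minutes in microseconds

# First and last touch per user, mirroring MIN/MAX(event_timestamp).
touch = events.groupby("user_pseudo_id")["event_timestamp"].agg(
    user_first_engagement="min", user_last_engagement="max")

touch["churned"] = (touch.user_last_engagement <
                    touch.user_first_engagement + DAY_US).astype(int)
touch["bounced"] = (touch.user_last_engagement <=
                    touch.user_first_engagement + TEN_MIN_US).astype(int)
# User "a" came back 25 hours later, so churned=0 and bounced=0;
# users "b" and "c" never made it past 10 minutes, so both flags are 1.
```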
Based on the 15k users, you can see that 5,557 (about 41%) users bounced within the first ten minutes of their first engagement with the app. Of the remaining 8,031 users, 1,883 users (about 23%) churned after 24 hours, which you can validate with the query below:

```
%%bigquery --project $PROJECT_ID

SELECT
    COUNTIF(churned=1)/COUNT(churned) as churn_rate
FROM bqmlga4.user_churn
WHERE bounced = 0
```

### Extract user demographic features

This dataset includes various user demographic information, including `app_info`, `device`, `ecommerce`, `event_params`, and `geo`. Demographic features can help the model predict whether users on certain devices or in certain countries are more likely to churn.

Note that a user's demographics may occasionally change (e.g. moving countries). For simplicity, you will use the demographic information that Google Analytics 4 provides when the user first engaged with the app, as indicated by MIN(event_timestamp) in the query below. This enables every unique user to be represented by a single row.

```
%%bigquery --project $PROJECT_ID

CREATE OR REPLACE VIEW bqmlga4.user_demographics AS (
  WITH first_values AS (
    SELECT
      user_pseudo_id,
      geo.country as country,
      device.operating_system as operating_system,
      device.language as language,
      ROW_NUMBER() OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp DESC) AS row_num
    FROM `firebase-public-project.analytics_153293282.events_*`
    WHERE event_name="user_engagement"
  )
  SELECT * EXCEPT (row_num)
  FROM first_values
  WHERE row_num = 1
);

SELECT * FROM bqmlga4.user_demographics LIMIT 10
```

### Aggregate user behavioral features

Behavioral data in the raw event data spans multiple events, and thus multiple rows, per user. The goal of this section is to aggregate and extract behavioral data for each user, resulting in one row of behavioral data per unique user.
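Before writing the SQL, the target shape of this aggregation can be previewed in pandas. This is a toy sketch with made-up events (not lab data); it produces the same one-row-per-user, one-count-column-per-event-type layout that the `SUM(IF(event_name = ..., 1, 0))` pattern yields in BigQuery:

```python
import pandas as pd

# Hypothetical raw events: one row per event, as in the GA4 export.
events = pd.DataFrame({
    "user_pseudo_id": ["a", "a", "a", "b", "b"],
    "event_name": ["user_engagement", "post_score", "user_engagement",
                   "user_engagement", "ad_reward"],
})

# Pivot to one row per user and one count column per event type;
# fill_value=0 plays the role of IFNULL(..., 0) for events a user never fired.
behavior = (events.pivot_table(index="user_pseudo_id", columns="event_name",
                               aggfunc="size", fill_value=0)
                  .add_prefix("cnt_"))
# behavior now has columns cnt_ad_reward, cnt_post_score, cnt_user_engagement,
# e.g. user "a" has cnt_user_engagement == 2 and cnt_ad_reward == 0.
```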
As a first step, you can explore all the unique events that exist in this dataset, based on event_name: ``` %%bigquery --project $PROJECT_ID SELECT event_name, COUNT(event_name) as event_count FROM `firebase-public-project.analytics_153293282.events_*` GROUP BY event_name ORDER BY event_count DESC ``` For this lab, to predict whether a user will churn or return, you can start by counting the number of times a user engages in the following event types: * user_engagement * level_start_quickplay * level_end_quickplay * level_complete_quickplay * level_reset_quickplay * post_score * spend_virtual_currency * ad_reward * challenge_a_friend * completed_5_levels * use_extra_steps In the SQL query below, you will aggregate the behavioral data by calculating the total number of times when each of the above event_names occurred in the data set per user. ``` %%bigquery --project $PROJECT_ID CREATE OR REPLACE VIEW bqmlga4.user_behavior AS ( WITH events_first24hr AS ( # Select user data only from first 24 hr of using the app. 
  SELECT e.*
  FROM `firebase-public-project.analytics_153293282.events_*` e
  JOIN bqmlga4.user_churn c ON e.user_pseudo_id = c.user_pseudo_id
  WHERE e.event_timestamp <= c.ts_24hr_after_first_engagement
  )
  SELECT
    user_pseudo_id,
    SUM(IF(event_name = 'user_engagement', 1, 0)) AS cnt_user_engagement,
    SUM(IF(event_name = 'level_start_quickplay', 1, 0)) AS cnt_level_start_quickplay,
    SUM(IF(event_name = 'level_end_quickplay', 1, 0)) AS cnt_level_end_quickplay,
    SUM(IF(event_name = 'level_complete_quickplay', 1, 0)) AS cnt_level_complete_quickplay,
    SUM(IF(event_name = 'level_reset_quickplay', 1, 0)) AS cnt_level_reset_quickplay,
    SUM(IF(event_name = 'post_score', 1, 0)) AS cnt_post_score,
    SUM(IF(event_name = 'spend_virtual_currency', 1, 0)) AS cnt_spend_virtual_currency,
    SUM(IF(event_name = 'ad_reward', 1, 0)) AS cnt_ad_reward,
    SUM(IF(event_name = 'challenge_a_friend', 1, 0)) AS cnt_challenge_a_friend,
    SUM(IF(event_name = 'completed_5_levels', 1, 0)) AS cnt_completed_5_levels,
    SUM(IF(event_name = 'use_extra_steps', 1, 0)) AS cnt_use_extra_steps,
  FROM events_first24hr
  GROUP BY user_pseudo_id
);

SELECT * FROM bqmlga4.user_behavior LIMIT 10
```

### Prepare your train/eval/test datasets for machine learning

In this section, you will combine the three intermediary views (`user_churn`, `user_demographics`, and `user_behavior`) into the final training data view called `ml_features`. Here you also specify bounced = 0, to limit the training data only to users who did not "bounce" within the first 10 minutes of using the app.

Note in the query below that a manual `data_split` column is created in your BigQuery ML table using [BigQuery's hashing functions](https://towardsdatascience.com/ml-design-pattern-5-repeatable-sampling-c0ccb2889f39) for repeatable sampling. It specifies an 80% train | 10% eval | 10% test split to evaluate your model's performance and generalization.
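The repeatable-sampling idea behind the `data_split` column can be illustrated in plain Python. BigQuery's FARM_FINGERPRINT is not available in the standard library, so this sketch substitutes MD5 (the bucket assignments will therefore differ from the SQL's, but the property that matters is preserved: the same user ID always lands in the same partition, on every run):

```python
import hashlib

def data_split(user_pseudo_id: str) -> str:
    """Deterministic 80/10/10 split keyed on the user ID (MD5 stand-in
    for BigQuery's FARM_FINGERPRINT)."""
    bucket = int(hashlib.md5(user_pseudo_id.encode()).hexdigest(), 16) % 10
    if bucket <= 7:          # buckets 0-7 -> 80%
        return "TRAIN"
    elif bucket == 8:        # bucket 8 -> 10%
        return "EVAL"
    return "TEST"            # bucket 9 -> 10%

# The same ID always maps to the same partition, unlike RAND()-based sampling.
assert data_split("user_123") == data_split("user_123")
```

This is why hashing is preferable to random sampling here: re-running the view definition never shuffles users between partitions, so evaluations across model runs stay comparable.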
```
%%bigquery --project $PROJECT_ID

CREATE OR REPLACE VIEW bqmlga4.ml_features AS (
  SELECT
    dem.user_pseudo_id,
    IFNULL(dem.country, "Unknown") AS country,
    IFNULL(dem.operating_system, "Unknown") AS operating_system,
    IFNULL(REPLACE(dem.language, "-", "X"), "Unknown") AS language,
    IFNULL(beh.cnt_user_engagement, 0) AS cnt_user_engagement,
    IFNULL(beh.cnt_level_start_quickplay, 0) AS cnt_level_start_quickplay,
    IFNULL(beh.cnt_level_end_quickplay, 0) AS cnt_level_end_quickplay,
    IFNULL(beh.cnt_level_complete_quickplay, 0) AS cnt_level_complete_quickplay,
    IFNULL(beh.cnt_level_reset_quickplay, 0) AS cnt_level_reset_quickplay,
    IFNULL(beh.cnt_post_score, 0) AS cnt_post_score,
    IFNULL(beh.cnt_spend_virtual_currency, 0) AS cnt_spend_virtual_currency,
    IFNULL(beh.cnt_ad_reward, 0) AS cnt_ad_reward,
    IFNULL(beh.cnt_challenge_a_friend, 0) AS cnt_challenge_a_friend,
    IFNULL(beh.cnt_completed_5_levels, 0) AS cnt_completed_5_levels,
    IFNULL(beh.cnt_use_extra_steps, 0) AS cnt_use_extra_steps,
    chu.user_first_engagement,
    chu.month,
    chu.julianday,
    chu.dayofweek,
    chu.churned,

    # https://towardsdatascience.com/ml-design-pattern-5-repeatable-sampling-c0ccb2889f39
    # BQML hyperparameter tuning requires a STRING data_split column with 3 partitions:
    # 80% 'TRAIN' | 10% 'EVAL' | 10% 'TEST'.
    CASE
      WHEN ABS(MOD(FARM_FINGERPRINT(dem.user_pseudo_id), 10)) <= 7 THEN 'TRAIN'
      WHEN ABS(MOD(FARM_FINGERPRINT(dem.user_pseudo_id), 10)) = 8 THEN 'EVAL'
      WHEN ABS(MOD(FARM_FINGERPRINT(dem.user_pseudo_id), 10)) = 9 THEN 'TEST'
      ELSE ''
    END AS data_split
  FROM bqmlga4.user_churn chu
  LEFT OUTER JOIN bqmlga4.user_demographics dem
    ON chu.user_pseudo_id = dem.user_pseudo_id
  LEFT OUTER JOIN bqmlga4.user_behavior beh
    ON chu.user_pseudo_id = beh.user_pseudo_id
  WHERE chu.bounced = 0
);

SELECT * FROM bqmlga4.ml_features LIMIT 10
```

### Validate feature splits

Run the query below to validate the number of examples in each data partition for the 80% train | 10% eval | 10% test split.
```
%%bigquery --project $PROJECT_ID

SELECT data_split, COUNT(*) AS n_examples
FROM bqmlga4.ml_features
GROUP BY data_split
```

## Train and tune a BQML XGBoost propensity model to predict customer churn

The following code trains and tunes the hyperparameters for an XGBoost model. To provide a minimal demonstration of BQML hyperparameter tuning in this lab, this model will take about 18 min to train and tune with its restricted search space and low number of trials. In practice, you would generally want at [least 10 trials per hyperparameter](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-hyperparameter-tuning#how_many_trials_do_i_need_to_tune_a_model) to achieve improved results.

For more information on the default hyperparameters used, you can read the documentation: [CREATE MODEL statement for Boosted Tree models using XGBoost](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-create-boosted-tree)

|Model | BQML model_type | Advantages | Disadvantages|
|:-------|:----------:|:----------:|-------------:|
|XGBoost | BOOSTED_TREE_CLASSIFIER [(documentation)](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-create-boosted-tree) | High model performance with feature importances and explainability | Slower to train than BQML LOGISTIC_REG |

Note: When you run the CREATE MODEL statement, BigQuery ML can automatically split your data into training and test sets so you can immediately evaluate your model's performance after training. This is a great option for fast model prototyping. In this lab, however, you split your data manually above using hashing, for reproducible data splits that can be used to compare model evaluations across different runs.

```
MODEL_NAME="churn_xgb"

%%bigquery --project $PROJECT_ID

CREATE OR REPLACE MODEL bqmlga4.churn_xgb
OPTIONS(
  MODEL_TYPE="BOOSTED_TREE_CLASSIFIER",
  # Declare label column.
INPUT_LABEL_COLS=["churned"], # Specify custom data splitting using the `data_split` column. DATA_SPLIT_METHOD="CUSTOM", DATA_SPLIT_COL="data_split", # Enable Vertex Explainable AI aggregated feature attributions. ENABLE_GLOBAL_EXPLAIN=True, # Hyperparameter tuning arguments. num_trials=8, max_parallel_trials=4, HPARAM_TUNING_OBJECTIVES=["roc_auc"], EARLY_STOP=True, # Hyperpameter search space. LEARN_RATE=HPARAM_RANGE(0.01, 0.1), MAX_TREE_DEPTH=HPARAM_CANDIDATES([5,6]) ) AS SELECT * EXCEPT(user_pseudo_id) FROM bqmlga4.ml_features %%bigquery --project $PROJECT_ID SELECT * FROM ML.TRIAL_INFO(MODEL `bqmlga4.churn_xgb`); ``` ## Evaluate BQML XGBoost model performance Once training is finished, you can run [ML.EVALUATE](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-evaluate) to return model evaluation metrics. By default, all model trials will be returned so the below query just returns the model performance for optimal first trial. ``` %%bigquery --project $PROJECT_ID SELECT * FROM ML.EVALUATE(MODEL bqmlga4.churn_xgb) WHERE trial_id=1; ``` ML.EVALUATE generates the [precision, recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall), [accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy), [log_loss](https://en.wikipedia.org/wiki/Loss_functions_for_classification#Logistic_loss), [f1_score](https://en.wikipedia.org/wiki/F-score) and [roc_auc](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc) using the default classification threshold of 0.5, which can be modified by using the optional `THRESHOLD` parameter. Next, use the [ML.CONFUSION_MATRIX](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-confusion) function to return a confusion matrix for the input classification model and input data. 
For more information on confusion matrices, you can read through a detailed explanation [here](https://developers.google.com/machine-learning/crash-course/classification/true-false-positive-negative). ``` %%bigquery --project $PROJECT_ID SELECT expected_label, _0 AS predicted_0, _1 AS predicted_1 FROM ML.CONFUSION_MATRIX(MODEL bqmlga4.churn_xgb) WHERE trial_id=1; ``` You can also plot the AUC-ROC curve by using [ML.ROC_CURVE](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-roc) to return the metrics for different threshold values for the model. ``` %%bigquery df_roc --project $PROJECT_ID SELECT * FROM ML.ROC_CURVE(MODEL bqmlga4.churn_xgb) WHERE trial_id=1; df_roc.plot(x="false_positive_rate", y="recall", title="AUC-ROC curve") ``` ## Inspect global feature attributions To provide further context to your model performance, you can use the [ML.GLOBAL_EXPLAIN](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-global-explain#get_global_feature_importance_for_each_class_of_a_boosted_tree_classifier_model) function which leverages Vertex Explainable AI as a back-end. [Vertex Explainable AI](https://cloud.google.com/vertex-ai/docs/explainable-ai) helps you understand your model's outputs for classification and regression tasks. Specifically, Vertex AI tells you how much each feature in the data contributed to your model's predicted result. You can then use this information to verify that the model is behaving as expected, identify and mitigate biases in your models, and get ideas for ways to improve your model and your training data. ``` %%bigquery --project $PROJECT_ID SELECT * FROM ML.GLOBAL_EXPLAIN(MODEL bqmlga4.churn_xgb) ORDER BY attribution DESC; ``` ## Generate batch predictions You can generate batch predictions for your BQML XGBoost model using [ML.PREDICT](https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-predict). 
```
%%bigquery --project $PROJECT_ID

SELECT * FROM ML.PREDICT(MODEL bqmlga4.churn_xgb,
  (SELECT * FROM bqmlga4.ml_features WHERE data_split = "TEST"))
```

The following query returns the probability that the user will churn after 24 hrs. The closer the probability is to 1, the more likely the user is predicted to churn; the closer it is to 0, the more likely the user is predicted to return.

```
%%bigquery --project $PROJECT_ID

CREATE OR REPLACE TABLE bqmlga4.churn_predictions AS (
  SELECT
    user_pseudo_id,
    churned,
    predicted_churned,
    predicted_churned_probs[OFFSET(0)].prob as probability_churned
  FROM ML.PREDICT(MODEL bqmlga4.churn_xgb,
    (SELECT * FROM bqmlga4.ml_features))
);
```

## Export a BQML model to Vertex AI for online predictions

See the official BigQuery ML guide, [Exporting a BigQuery ML model for online prediction](https://cloud.google.com/bigquery-ml/docs/export-model-tutorial), for additional details.

### Export BQML model to GCS

You will use the `bq extract` command in the `bq` command-line tool to export your BQML XGBoost model assets to Google Cloud Storage for persistence. See the [documentation](https://cloud.google.com/bigquery-ml/docs/exporting-models) for additional model export options.

```
BQ_MODEL = f"{BQ_DATASET}.{MODEL_NAME}"
BQ_MODEL_EXPORT_DIR = f"gs://{GCS_BUCKET}/{MODEL_NAME}"

!bq --location=$BQ_LOCATION extract \
--destination_format ML_XGBOOST_BOOSTER \
--model $BQ_MODEL \
$BQ_MODEL_EXPORT_DIR
```

Navigate to [Google Cloud Storage](https://pantheon.corp.google.com/storage) in the Google Cloud Console to `"gs://{GCS_BUCKET}/{MODEL_NAME}"`. Validate that you see your exported model assets in the below format:

```
|--/{GCS_BUCKET}/{MODEL_NAME}/
  |--/assets/                      # Contains preprocessing code.
    |--0_categorical_label.txt     # Contains country vocabulary.
    |--1_categorical_label.txt     # Contains operating_system vocabulary.
    |--2_categorical_label.txt     # Contains language vocabulary.
  |--model_metadata.json           # Contains model feature and label mappings.
  |--main.py                       # Can be called for local training runs.
  |--model.bst                     # XGBoost saved model format.
  |--xgboost_predictor-0.1.tar.gz  # Compressed XGBoost model with prediction function.
```

### Upload BQML model to Vertex AI from GCS

Vertex AI contains optimized pre-built training and prediction containers for popular ML frameworks such as TensorFlow, PyTorch, and XGBoost. In the cells below, you will upload your XGBoost model from GCS to Vertex AI and provide the [latest pre-built Vertex XGBoost prediction container](https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers) to execute your model code and generate predictions.

```
IMAGE_URI='us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-4:latest'

model = vertexai.Model.upload(
    display_name=MODEL_NAME,
    artifact_uri=BQ_MODEL_EXPORT_DIR,
    serving_container_image_uri=IMAGE_URI,
)
```

### Deploy a Vertex `Endpoint` for online predictions

Before you use your model to make predictions, you need to deploy it to an `Endpoint` object. When you deploy a model to an `Endpoint`, you associate physical (machine) resources with that model to enable it to serve online predictions. Online predictions have low latency requirements; providing resources to the model in advance reduces latency.

You can do this by calling the deploy function on the `Model` resource. This will do two things:

1. Create an `Endpoint` resource for deploying the `Model` resource to.
2. Deploy the `Model` resource to the `Endpoint` resource.

The `deploy()` function takes the following parameters:

* `deployed_model_display_name`: A human readable name for the deployed model.
* `traffic_split`: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs. If there is only one model, specify { "0": 100 }, where "0" refers to the model being uploaded and 100 means 100% of the traffic.
* `machine_type`: The type of machine to use for serving predictions.
* `accelerator_type`: The hardware accelerator type.
* `accelerator_count`: The number of accelerators to attach to a worker replica.
* `starting_replica_count`: The number of compute instances to initially provision.
* `max_replica_count`: The maximum number of compute instances to scale to. In this lab, only one instance is provisioned.
* `explanation_parameters`: Metadata to configure the Explainable AI learning method.
* `explanation_metadata`: Metadata that describes your TensorFlow model for Explainable AI such as features, input and output tensors.

Note: this can take about 3-5 minutes to provision prediction resources for your model.

```
endpoint = model.deploy(
    traffic_split={"0": 100},
    machine_type="n1-standard-2",
)
```

### Query model for online predictions

XGBoost only takes numerical feature inputs. When you trained your BQML model above with the CREATE MODEL statement, it automatically handled encoding of categorical features such as user `country`, `operating_system`, and `language` into numeric representations. In order for the exported model to generate online predictions, you will use the categorical feature vocabulary files exported under the `assets/` folder of your model directory, together with the scikit-learn preprocessing code below, to map your test instances to numeric values.

```
CATEGORICAL_FEATURES = ['country', 'operating_system', 'language']

from sklearn.preprocessing import OrdinalEncoder


def _build_cat_feature_encoders(cat_feature_list, gcs_bucket, model_name, na_value='Unknown'):
    """Build categorical feature encoders for mapping text to integers for XGBoost inference.
    Args:
      cat_feature_list (list): List of string feature names.
      gcs_bucket (str): A string path to your Google Cloud Storage bucket.
      model_name (str): A string model directory in GCS where your BQML model was exported to.
      na_value (str): default is 'Unknown'. String value to replace any vocab NaN values prior to encoding.
    Returns:
      feature_encoders (dict): A dictionary of OrdinalEncoder objects for integerizing
        categorical features, with the format feature_encoders[feature] = feature encoder.
    """
    feature_encoders = {}
    for idx, feature in enumerate(cat_feature_list):
        feature_encoder = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1)
        feature_vocab_file = f"gs://{gcs_bucket}/{model_name}/assets/{idx}_categorical_label.txt"
        feature_vocab_df = pd.read_csv(feature_vocab_file, delimiter="\t", header=None).fillna(na_value)
        feature_encoder.fit(feature_vocab_df.values)
        feature_encoders[feature] = feature_encoder
    return feature_encoders


def preprocess_xgboost(instances, cat_feature_list, feature_encoders):
    """Transform instances to numerical values for inference.
    Args:
      instances (list[dict]): A list of feature dictionaries with the format feature: value.
      cat_feature_list (list): A list of string feature names.
      feature_encoders (dict): A dictionary with the format feature: feature_encoder.
    Returns:
      transformed_instances (list[list]): A list of lists containing numerical feature values
        needed for Vertex XGBoost inference.
    """
    transformed_instances = []
    for instance in instances:
        for feature in cat_feature_list:
            feature_int = feature_encoders[feature].transform([[instance[feature]]]).item()
            instance[feature] = feature_int
        instance_list = list(instance.values())
        transformed_instances.append(instance_list)
    return transformed_instances


# Build a dictionary of ordinal categorical feature encoders.
feature_encoders = _build_cat_feature_encoders(CATEGORICAL_FEATURES, GCS_BUCKET, MODEL_NAME)

%%bigquery test_df --project $PROJECT_ID

SELECT * EXCEPT (user_pseudo_id, churned, data_split)
FROM bqmlga4.ml_features
WHERE data_split="TEST"
LIMIT 3;

# Convert dataframe records to feature dictionaries for preprocessing by feature name.
test_instances = test_df.astype(str).to_dict(orient='records') # Apply preprocessing to transform categorical features and return numerical instances for prediction. transformed_test_instances = preprocess_xgboost(test_instances, CATEGORICAL_FEATURES, feature_encoders) # Generate predictions from model deployed to Vertex AI Endpoint. predictions = endpoint.predict(instances=transformed_test_instances) for idx, prediction in enumerate(predictions.predictions): # Class labels [1,0] retrieved from model_metadata.json in GCS model dir. # BQML binary classification default threshold is 0.5: "Churn" above and "Not Churn" below. is_churned = "Churn" if prediction[0] >= 0.5 else "Not Churn" print(f"Prediction: Customer {idx} - {is_churned} {prediction}") print(test_df.iloc[idx].astype(str).to_json() + "\n") ``` ## Next steps Congratulations! In this lab, you trained, tuned, explained, and deployed a BigQuery ML user churn model to generate high-impact batch and online churn predictions for targeting customers likely to churn with interventions such as in-game rewards and reminder notifications. In this lab, you used `user_pseudo_id` as a user identifier. As a next step, you can extend this code further by having your application return a `user_id` to Google Analytics so you can join your model's predictions with additional first-party data such as purchase history and marketing engagement data. This enables you to integrate batch predictions into Looker dashboards to help product teams prioritize user experience improvements and marketing teams create targeted user interventions such as reminder emails to improve retention. By deploying your model to Vertex AI Prediction, you also have a scalable prediction service to call from your application to directly integrate online predictions, tailor personalized user game experiences, and deliver targeted habit-building notifications. 
As you collect more data from your users, you may want to regularly evaluate your model on fresh data and re-train the model if you notice that its quality is decaying. [Vertex Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines/introduction) can help you automate, monitor, and govern your ML solutions by orchestrating your BQML workflow in a serverless manner, and storing your workflow's artifacts using [Vertex ML Metadata](https://cloud.google.com/vertex-ai/docs/ml-metadata/introduction). For another approach to continuous evaluation of BQML models, check out the blog post [Continuous model evaluation with BigQuery ML, Stored Procedures, and Cloud Scheduler](https://cloud.google.com/blog/topics/developers-practitioners/continuous-model-evaluation-bigquery-ml-stored-procedures-and-cloud-scheduler). ## License ``` # Copyright 2021 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ```
## Optimizing a constant window parameter throughout ``` import torch import matplotlib.pyplot as plt import math import IPython.display as ipd from jupyterthemes import jtplot jtplot.style() %config InlineBackend.figure_format = 'retina' s = torch.sin(torch.cumsum(2 * math.pi * torch.linspace(16, 2000, 4096) / 4096, 0)) s *= torch.hann_window(len(s)) ** .5 ipd.Audio(s.numpy(), rate=4096) from tqdm import trange def dithering_int(n): if n == int(n): return int(n) return int(torch.bernoulli((n - int(n))) + int(n)) class NumericalError(Exception): def __init__(self, message, grad_hist=None, window_times_signal_grads=None, f_grad=None): self.message = message self.grad_hist = grad_hist self.window_times_signal_grads = window_times_signal_grads self.f_grad = f_grad def __str__(self): if self.message: return 'NumericalError, {0} '.format(self.message) else: return 'NumericalError' def optimize_entire_window_length(s, initial_sigma_over_sr=12.0/4096, lr=1e-1, sr=4096): import torch sigma = torch.tensor(initial_sigma_over_sr * sr, requires_grad=True) sigma_grad_hist = [] import torch.optim optimizer = torch.optim.AdamW([sigma], lr=lr, amsgrad=True, weight_decay=0) #optimizer = torch.optim.RMSprop([sigma], lr=lr) sigma_hist = [] kur_hist = [] plt.figure(figsize=(22, 15)) for ep in trange(901): optimizer.zero_grad() sz = sigma * 6 sz = torch.clamp(sz, min=1, max=len(s) - 1) # keep the window size within [1, len(s) - 1] hp = sz / 2 if torch.isnan(sz): raise NumericalError( f'size became NaN at iteration {ep}, sigma: {sigma}') intsz = dithering_int(sz) inthp = dithering_int(hp) m = torch.arange(0, intsz, dtype=torch.float32) window = torch.exp(-0.5 * torch.pow((m - sz / 2) / (sigma + 1e-7), 2)) # window_norm = window / torch.sum(window) window_norm = window if torch.isnan(window_norm).any(): raise NumericalError( f'window became NaN at iteration {ep}, sigma: {sigma}') window_times_signal = [window_norm * s[i:i+intsz] for i in range(0, len(s) - intsz, inthp)] f = torch.stack([torch.rfft(x, 
signal_ndim=1, normalized=True) for x in window_times_signal], dim=1) # pytorch would produce nan if this is # a = (f[..., 0] ** 2 + f[..., 1] ** 2).sqrt(), # and pow(a, 4), pow(a, 2), ... below a = (f[..., 0] ** 2 + f[..., 1] ** 2) kur = torch.sum(torch.pow(a, 2)) / torch.pow(torch.sum(a), 2) kur /= (s.size(-1) / 32768) n_wnd = a.size(0) assert n_wnd > 0 if (torch.isnan(kur).any() or torch.isinf(kur).any()): raise NumericalError( f'kur became NaN at iteration {ep}, sigma: {sigma}') (-kur).backward() torch.nn.utils.clip_grad_norm_([sigma], max_norm=1) if torch.isnan(sigma).any(): raise NumericalError( f'sigma became NaN at iteration {ep}, sigma: {sigma}') # .grad is only populated for leaf tensors, so the message below reports sigma and kur only if torch.isnan(sigma.grad).any(): raise NumericalError( f'sigma.grad became NaN at iteration {ep}, sigma: {sigma}, kur: {kur}') optimizer.step() if torch.isnan(sigma).any(): raise NumericalError( f'after optimization, sigma became NaN at iteration {ep}, sigma: {sigma}') sigma_hist.append(sigma.item()) kur_hist.append(kur.item()) if ep % 180 == 0: plt.gcf().add_subplot(2, 3, ep // 180 + 1), plt.gca().pcolorfast(a.detach().numpy()) plt.gca().set_title(f'Epoch {ep}, FFT size: {sz:.3f}\nconcentration: {kur.item():.5f}') plt.gcf().tight_layout() return sigma_hist, kur_hist import tqdm if hasattr(tqdm.tqdm, '_instances'): tqdm.tqdm._instances.clear() sigma_grad_hist = [] window_times_signal_grads = [] try: torch.manual_seed(0) sigma_hist, kur_hist = optimize_entire_window_length(s) except NumericalError as e: print(e) sigma_grad_hist = e.grad_hist window_times_signal_grads = e.window_times_signal_grads f_grad = e.f_grad raise e plt.figure(figsize=(20, 4)) _ = plt.gcf().add_subplot(1, 2, 1), plt.gca().plot(sigma_hist), plt.gca().set_title("σ across epochs") _ = plt.gcf().add_subplot(1, 2, 2), plt.gca().plot(kur_hist), plt.gca().set_title("concentration measure across 
epochs") ```
# Introduction to pandas The Python package pandas is very useful to read csv files, but also many text files that are more or less formatted with one observation per row and one column for each feature. As an example, we are going to look at the list of seismic stations from the Northern California seismic network, available here: http://ncedc.org/ftp/pub/doc/NC.info/NC.channel.summary.day ``` url = 'http://ncedc.org/ftp/pub/doc/NC.info/NC.channel.summary.day' ``` First we import useful packages. The `requests` package is useful to read data from a web page. ``` import numpy as np import pandas as pd import io import pickle import requests from datetime import datetime, timedelta from math import cos, sin, pi, sqrt ``` The function read_csv is used to open and read your text file. In the case of a well formatted csv file, only the name of the file needs to be entered: data = pd.read_csv('my_file.csv') However, many options are available if the file is not well formatted. See more at: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html ``` s = requests.get(url).content data = pd.read_csv(io.StringIO(s.decode('utf-8')), header=None, skiprows=2, sep='\s+', usecols=list(range(0, 13))) data.columns = ['station', 'network', 'channel', 'location', 'rate', 'start_time', 'end_time', 'latitude', 'longitude', 'elevation', 'depth', 'dip', 'azimuth'] ``` Let us look at the data. They are now stored in a pandas dataframe. 
``` data.head() ``` There are two ways of looking at a particular column: ``` data.station data['station'] ``` If we want to look at a given row or column, and we know its index, we can do: ``` data.iloc[0] data.iloc[:, 0] ``` If we know the name of the column, we can do: ``` data.loc[:, 'station'] ``` We can also access a single value within a column: ``` data.loc[0, 'station'] ``` We can filter the data with the value taken by a given column: ``` data.loc[data.station=='KCPB'] data.loc[(data.station=='KCPB') | (data.station=='KHBB')] data.loc[data.station.isin(['KCPB', 'KHBB'])] ``` We can access a brief summary of the data: ``` data.station.describe() data.elevation.describe() ``` We can perform standard operations on the whole data set: ``` data.mean() ``` In the case of a categorical variable, we can get the list of possible values that this variable can take: ``` data.channel.unique() ``` and get the number of times that each value is taken: ``` data.station.value_counts() ``` There are several ways of doing an operation on all rows of a column. The first option is to use the map function. If you are not familiar with lambda functions in Python, look at: https://realpython.com/python-lambda/ ``` data_elevation_mean = data.elevation.mean() data.elevation.map(lambda p: p - data_elevation_mean) ``` The second option is to use the apply function: ``` def remean_elevation(row): row.elevation = row.elevation - data_elevation_mean return row data.apply(remean_elevation, axis='columns') ``` We can also carry out simple operations on columns, provided they make sense. ``` data.network + ' - ' + data.station ``` A useful feature is to group the rows depending on the value of a categorical variable, and then apply the same operation to all the groups. 
For instance, I want to know how many times each station appears in the file: ``` data.groupby('station').station.count() ``` Or I want to know the lowest and the highest elevation for each station: ``` data.groupby('station').elevation.min() data.groupby('station').elevation.max() ``` We can access the data type of each column: ``` data.dtypes ``` Here, pandas does not recognize the start_time and end_time columns as a datetime format, so we cannot use datetime operations on them. We first need to convert these columns into a datetime format: ``` # Transform column into datetime format startdate = pd.to_datetime(data['start_time'], format='%Y/%m/%d,%H:%M:%S') data['start_time'] = startdate # Avoid 'OutOfBoundsDatetime' error with year 3000 enddate = data['end_time'].str.replace('3000', '2025') enddate = pd.to_datetime(enddate, format='%Y/%m/%d,%H:%M:%S') data['end_time'] = enddate ``` We can now look at when each seismic station was installed: ``` data.groupby('station').apply(lambda df: df.start_time.min()) ``` The agg function allows us to carry out several operations on each group of rows: ``` data.groupby(['station']).elevation.agg(['min', 'max']) data.groupby(['station']).agg({'start_time':lambda x: min(x), 'end_time':lambda x: max(x)}) ``` We can also make groups by selecting the values of two categorical variables: ``` data.groupby(['station', 'channel']).agg({'start_time':lambda x: min(x), 'end_time':lambda x: max(x)}) ``` Previously, we just printed the output, but we can also store it in a new variable: ``` data_grouped = data.groupby(['station', 'channel']).agg({'start_time':lambda x: min(x), 'end_time':lambda x: max(x)}) data_grouped.head() ``` When we select only some rows, the index is not automatically reset to start at 0. We can do it manually. Many pandas functions also have an option to reset the index, and an option to transform the dataframe in place, instead of saving the results in another variable. 
``` data_grouped.reset_index() ``` It is also possible to sort the dataset by value. ``` data_grouped.sort_values(by='start_time') ``` We can apply the sorting to several columns: ``` data_grouped.sort_values(by=['start_time', 'end_time']) ``` A useful pandas function is merge, which allows you to merge two dataframes that have some columns in common, but also have different columns that you may want to compare with each other. For example, I have two earthquake catalogs. The 2007-2009 catalog was established using data from a temporary experiment, and the 2004-2011 catalog was established using data from a permanent seismic network. I would like to know if some earthquakes are detected by one network, but not by the other. I will compare the catalogs between September 2007 and May 2009. There is a 10 s delay between the detection times of one catalog compared to the other. I will also filter the catalogs to eliminate false detections. ``` tbegin = datetime(2007, 9, 25, 0, 0, 0) tend = datetime(2009, 5, 14, 0, 0, 0) dt = 10.0 thresh1 = 1.4 thresh2 = 1.9 ``` I first read the two catalogs, and apply the filtering: ``` namefile = 'catalog_2007_2009.pkl' df1 = pickle.load(open(namefile, 'rb')) df1 = df1[['year', 'month', 'day', 'hour', 'minute', 'second', 'cc', 'nchannel']] df1 = df1.astype({'year': int, 'month': int, 'day': int, 'hour': int, 'minute': int, 'second': float, 'cc': float, 'nchannel': int}) date = pd.to_datetime(df1.drop(columns=['cc', 'nchannel'])) df1['date'] = date df1 = df1[(df1['date'] >= tbegin) & (df1['date'] <= tend)] df1_filter = df1.loc[df1['cc'] * df1['nchannel'] >= thresh1] namefile = 'catalog_2004_2011.pkl' df2 = pickle.load(open(namefile, 'rb')) df2 = df2[['year', 'month', 'day', 'hour', 'minute', 'second', 'cc', 'nchannel']] df2 = df2.astype({'year': int, 'month': int, 'day': int, 'hour': int, 'minute': int, 'second': float, 'cc': float, 'nchannel': int}) date = pd.to_datetime(df2.drop(columns=['cc', 'nchannel'])) df2['date'] = date 
df2['date'] = df2['date'] - timedelta(seconds=dt) df2 = df2[(df2['date'] >= tbegin) & (df2['date'] <= tend)] df2_filter = df2.loc[df2['cc'] * df2['nchannel'] >= thresh2] ``` To make the comparison, I first concatenate the two dataframes into a single dataframe. Then I merge the concatenated dataframe with one of the initial dataframes. I apply the merge operation on the date column, that is, if an earthquake in dataset 1 has the same date as an earthquake in dataset 2, I assume it is the same earthquake. You could also check if several columns have the same value, instead of doing the merge operation on only one column. The process adds a `_merge` column to the dataset, which indicates whether a row was found only in dataset 1, only in dataset 2, or in both datasets. ``` # Earthquakes in filtered 2007-2009 catalog but not in (unfiltered) 2004-2011 catalog df_all = pd.concat([df2, df1_filter], ignore_index=True) df_merge = df_all.merge(df2.drop_duplicates(), on=['date'], how='left', indicator=True) df_added_1 = df_merge[df_merge['_merge'] == 'left_only'] # Earthquakes in filtered 2004-2011 catalog but not in (unfiltered) 2007-2009 catalog df_all = pd.concat([df1, df2_filter], ignore_index=True) df_merge = df_all.merge(df1.drop_duplicates(), on=['date'], how='left', indicator=True) df_added_2 = df_merge[df_merge['_merge'] == 'left_only'] df_added_1 df_added_2 ```
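The indicator-based comparison above is essentially an anti-join. A minimal, self-contained illustration with toy dates (not the earthquake catalogs) looks like this:

```python
import pandas as pd

# Two toy "catalogs" sharing a date column.
df_a = pd.DataFrame({"date": pd.to_datetime(["2007-10-01", "2007-10-02", "2007-10-03"])})
df_b = pd.DataFrame({"date": pd.to_datetime(["2007-10-02"])})

# indicator=True adds a _merge column: 'both', 'left_only', or 'right_only'.
merged = df_a.merge(df_b.drop_duplicates(), on="date", how="left", indicator=True)

# Rows found only in df_a, i.e. events the second catalog missed.
only_in_a = merged[merged["_merge"] == "left_only"]
```

Filtering on `_merge == 'left_only'` is what isolates the events detected by one network but not the other.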
Grid Search Analysis === Compares the results of the grid search per dataset and spits out the best one. ``` import numpy as np import pandas as pd import matplotlib import matplotlib.pyplot as plt matplotlib.style.use('ggplot') from matplotlib import cm import json import codecs import os # Use a raw string so the backslashes in the Windows path are not treated as escape sequences. basepath = os.path.normpath(r"C:\Users\hatieke\.ukpsummarizer\scores_grid") dirs = [f for f in os.listdir(basepath) if os.path.isdir(os.path.join(basepath, f))] dirs selected = dirs[1] selected def parse_dir(dir): p = os.path.join(basepath, dir) result_jsons = [] result_files = [f for f in os.listdir(p) if f.startswith("result-") and f.endswith(".json")] for f in result_files: fn = os.path.join(p, f) fsize = os.path.getsize(fn) if fsize > 0: with open(fn) as fp: result_jsons.append(json.load(fp)) return result_jsons f = os.path.join(basepath, selected, "result-af54bffb220ca4e9f459b174b296b3a755f7f1fb1935f3151bc97cd8.json") f with open(f) as fp: o = json.load(fp) o res = [i for i in o["result_rougescores"] if i["iteration"] <= 11] [i["ROUGE-2 R score"] for i in res] [i["ROUGE-2 R score"] for i in o["result_rougescores"] if i["iteration"] <= 11][-1] def parse_single_result_into_dataframe(obj, iteration=11): config = obj[u'config_feedbackstore'] try: res = [i for i in obj["result_rougescores"] if i["iteration"] <= iteration][-1] except IndexError: raise RuntimeError("unknown iteration %s" % (obj["config_run_id"])) total_accept = sum([1 for i in obj[u'log_feedbacks'] if i["value"] == 'accept' and i["iteration"] < iteration]) total_reject = sum([1 for i in obj[u'log_feedbacks'] if i["value"] != 'accept' and i["iteration"] < iteration]) total_feedback = total_accept + total_reject num_iterations = res["iteration"] r1 = res[u'ROUGE-1 R score'] r2 = res[u'ROUGE-2 R score'] r4 = res[u'ROUGE-SU* R score'] classtype = config.get(u'type') cut_off_threshold = config.get(u'cut_off_threshold') iterations_accept = config.get(u'iterations_accept') iterations_reject = config.get(u'iterations_reject') 
propagation_abort_threshold = config.get(u'propagation_abort_threshold') mass_accept = config.get(u'mass_accept') mass_reject = config.get(u'mass_reject') window_size = config.get(u'N') factor_reject = config.get(u"multiplier_reject") factor_accept = config.get(u"multiplier_accept") cutoff = config.get(u"cut_off_threshold") runid = obj.get("config_run_id") return { "accept": total_accept, "reject": total_reject, "total_feedback": total_feedback, "ref_summary": str([item["name"] for item in obj[u'models']]), "cfg": config, "num_iterations": num_iterations, "r1": r1, "r2": r2, "r4": r4, "classtype": classtype, "iterations_accept": iterations_accept, "iterations_reject": iterations_reject, "propagation_abort_threshold": propagation_abort_threshold, "mass_accept": mass_accept, "mass_reject": mass_reject, "window_size": window_size, "multiplier_reject": factor_reject, "multiplier_accept": factor_accept, "cutoff_threshold": cutoff, "id": runid } #parse_single_result_into_dataframe(first, iteration=10) items = [parse_single_result_into_dataframe(f, iteration=11) for d in dirs for f in parse_dir(d)] #items = [parse_single_result_into_dataframe(f, iteration=11) for f in parse_dir(selected)] len(items) #items[0] df = pd.DataFrame(items) df.to_csv("C:\\Users\\hatieke\\.ukpsummarizer\\tmp\\grid.csv") df.iloc[0] # Define top_score before using it: the row with the best ROUGE-2 score. top_score = df.loc[df["r2"].idxmax()] top_score v = top_score["cfg"] v # cfg is a plain dict, so use .get() rather than pandas' .apply() v.get("mass_reject") df.iloc[64] df.groupby("ref_summary").max() df.groupby(["ref_summary","classtype"])["r2"].describe().to_csv("C:\\Users\\hatieke\\.ukpsummarizer\\tmp\\grid_groups.csv") ```
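Picking "the best one" out of the results dataframe can be done with `idxmax`. A small sketch on a hypothetical miniature of the dataframe above:

```python
import pandas as pd

# Hypothetical miniature of the grid-search results dataframe.
df = pd.DataFrame({
    "classtype": ["baseline", "propagation", "propagation"],
    "r2": [0.12, 0.19, 0.15],
})

# idxmax returns the row label of the maximum; loc fetches that row.
top_score = df.loc[df["r2"].idxmax()]
```

The same pattern works per group, e.g. `df.groupby("classtype")["r2"].idxmax()` to find the best run of each configuration type.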
Lambda School Data Science *Unit 4, Sprint 3, Module 1* --- # Recurrent Neural Networks (RNNs) and Long Short Term Memory (LSTM) (Prepare) ## Learning Objectives - <a href="#p1">Part 1: </a>Describe Neural Networks used for modeling sequences - <a href="#p2">Part 2: </a>Apply a LSTM to a text generation problem using Keras ## Overview > "Yesterday's just a memory - tomorrow is never what it's supposed to be." -- Bob Dylan Wish you could save [Time In A Bottle](https://www.youtube.com/watch?v=AnWWj6xOleY)? With statistics you can do the next best thing - understand how data varies over time (or any sequential order), and use the order/time dimension predictively. A sequence is just any enumerated collection - order counts, and repetition is allowed. Python lists are a good elemental example - `[1, 2, 2, -1]` is a valid list, and is different from `[1, 2, -1, 2]`. The data structures we tend to use (e.g. NumPy arrays) are often built on this fundamental structure. A time series is data where you have not just the order but some actual continuous marker for where they lie "in time" - this could be a date, a timestamp, [Unix time](https://en.wikipedia.org/wiki/Unix_time), or something else. All time series are also sequences, and for some techniques you may just consider their order and not "how far apart" the entries are (if you have particularly consistent data collected at regular intervals it may not matter). # Neural Networks for Sequences (Learn) ## Overview There's plenty more to "traditional" time series, but the latest and greatest technique for sequence data is recurrent neural networks. A recurrence relation in math is an equation that uses recursion to define a sequence - a famous example is the Fibonacci numbers: $F_n = F_{n-1} + F_{n-2}$ For formal math you also need a base case $F_0=1, F_1=1$, and then the rest builds from there. 
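As a quick concrete check, the recurrence above (with base case $F_0 = F_1 = 1$) can be computed iteratively in a few lines:

```python
# F_n = F_{n-1} + F_{n-2}, with base case F_0 = F_1 = 1.
def fibonacci(n):
    a, b = 1, 1  # F_0, F_1
    for _ in range(n):
        a, b = b, a + b
    return a

first_eight = [fibonacci(i) for i in range(8)]  # [1, 1, 2, 3, 5, 8, 13, 21]
```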
But for neural networks what we're really talking about are loops: ![Recurrent neural network](https://upload.wikimedia.org/wikipedia/commons/b/b5/Recurrent_neural_network_unfold.svg) The hidden layers have edges (output) going back to their own input - this loop means that for any time `t` the training is at least partly based on the output from time `t-1`. The entire network is being represented on the left, and you can unfold the network explicitly to see how it behaves at any given `t`. Different units can have this "loop", but a particularly successful one is the long short-term memory unit (LSTM): ![Long short-term memory unit](https://upload.wikimedia.org/wikipedia/commons/thumb/6/63/Long_Short-Term_Memory.svg/1024px-Long_Short-Term_Memory.svg.png) There's a lot going on here - in a nutshell, the calculus still works out and backpropagation can still be implemented. The advantage (and namesake) of LSTM is that it can generally put more weight on recent (short-term) events while not completely losing older (long-term) information. After enough iterations, a typical neural network will start calculating prior gradients that are so small they effectively become zero - this is the [vanishing gradient problem](https://en.wikipedia.org/wiki/Vanishing_gradient_problem), and is what an RNN with LSTM addresses. Pay special attention to the $c_t$ parameters and how they pass through the unit to get an intuition for how this problem is solved. So why are these cool? One particularly compelling application is actually not time series but language modeling - language is inherently ordered data (letters/words go one after another, and the order *matters*). [The Unreasonable Effectiveness of Recurrent Neural Networks](https://karpathy.github.io/2015/05/21/rnn-effectiveness/) is a famous blog post on this topic that is worth reading. For our purposes, let's use TensorFlow and Keras to train RNNs with natural language. 
Resources: - https://github.com/keras-team/keras/blob/master/examples/imdb_lstm.py - https://keras.io/layers/recurrent/#lstm - http://adventuresinmachinelearning.com/keras-lstm-tutorial/ Note that `tensorflow.contrib` [also has an implementation of RNN/LSTM](https://www.tensorflow.org/tutorials/sequences/recurrent). ## Follow Along Sequences come in many shapes and forms from stock prices to text. We'll focus on text, because modeling text as a sequence is a strength of Neural Networks. Let's start with a simple classification task using a TensorFlow tutorial. ### RNN/LSTM Sentiment Classification with Keras ``` ''' #Trains an LSTM model on the IMDB sentiment classification task. The dataset is actually too small for LSTM to be of any advantage compared to simpler, much faster methods such as TF-IDF + LogReg. **Notes** - RNNs are tricky. Choice of batch size is important, choice of loss and optimizer is critical, etc. Some configurations won't converge. - LSTM loss decrease patterns during training can be quite different from what you see with CNNs/MLPs/etc. 
''' from __future__ import print_function from tensorflow.keras.preprocessing import sequence from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Embedding from tensorflow.keras.layers import LSTM from tensorflow.keras.datasets import imdb max_features = 20000 # cut texts after this number of words (among top max_features most common words) maxlen = 80 batch_size = 32 print('Loading data...') (x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features) print(len(x_train), 'train sequences') print(len(x_test), 'test sequences') x_train[0] def print_text_from_seq(x): INDEX_FROM=3 # word index offset word_to_id = imdb.get_word_index() word_to_id = {k:(v+INDEX_FROM) for k,v in word_to_id.items()} word_to_id["<PAD>"] = 0 word_to_id["<START>"] = 1 word_to_id["<UNK>"] = 2 word_to_id["<UNUSED>"] = 3 id_to_word = {value:key for key,value in word_to_id.items()} print('=================================================') print(f'Length = {len(x)}') print('=================================================') print(' '.join(id_to_word[id] for id in x )) print_text_from_seq(x_train[0]) for i in range(0, 6): print(x_train[i]) print_text_from_seq(x_train[i]) print('Pad Sequences (samples x time)') x_train = sequence.pad_sequences(x_train, maxlen=maxlen) x_test = sequence.pad_sequences(x_test, maxlen=maxlen) print('x_train shape: ', x_train.shape) print('x_test shape: ', x_test.shape) x_train[0] print_text_from_seq(x_train[0]) # the review labels are balanced: 50% are positive and 50% are negative y_train.sum()/len(y_train) from tensorflow.keras.layers import Dropout model_fc = Sequential() model_fc.add(Embedding(max_features, 128)) model_fc.add(Dropout(0.1)) model_fc.add(Dense(50, activation='relu')) model_fc.add(Dropout(0.1)) model_fc.add(Dense(50, activation='relu')) model_fc.add(Dropout(0.1)) model_fc.add(Dense(1, activation='sigmoid')) model_fc.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) 
model_fc.summary() output_fc = model_fc.fit(x_train, y_train, batch_size=batch_size, epochs=2, validation_data=(x_test, y_test)) import matplotlib.pyplot as plt # Plot training & validation loss values plt.plot(output_fc.history['loss']) plt.plot(output_fc.history['val_loss']) plt.title('Model (fully-connected) loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Valid'], loc='upper left') plt.show(); from tensorflow.keras.layers import SpatialDropout1D model = Sequential() model.add(Embedding(max_features, 128)) model.add(SpatialDropout1D(0.3)) model.add(LSTM(128, return_sequences=True)) model.add(SpatialDropout1D(0.3)) model.add(LSTM(128)) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.summary() output_lstm = model.fit(x_train, y_train, batch_size=batch_size, epochs=5, validation_data=(x_test, y_test)) import matplotlib.pyplot as plt # Plot training & validation loss values plt.plot(output_lstm.history['loss']) plt.plot(output_lstm.history['val_loss']) plt.title('Model loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Valid'], loc='upper left') plt.show(); plt.plot(output_lstm.history['accuracy']) plt.plot(output_lstm.history['val_accuracy']) plt.title('Model accuracy') plt.ylabel('Acc') plt.xlabel('Epoch') plt.legend(['Train', 'Valid'], loc='upper left') plt.show(); from tensorflow import keras layer_name = 'embedding_9' test_seq = x_train[0] intermediate_layer_model = keras.Model(inputs=model.input, outputs=model.get_layer(layer_name).output) intermediate_output = intermediate_layer_model(test_seq) intermediate_output[0] intermediate_output[3] print_text_from_seq(x_train[0]) !pip install transformers from transformers import * model_bert = TFBertForSequenceClassification.from_pretrained('bert-base-cased') model_bert.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model_bert.summary() output_bert = 
model_bert.fit(x_train, y_train, batch_size=batch_size, epochs=5, validation_data=(x_test, y_test)) ``` ## Challenge You will be expected to use a Keras LSTM for a classification task on the *Sprint Challenge*. # LSTM Text generation with Keras (Learn) ## Overview What else can we do with LSTMs? Since we're analyzing the *sequence*, we can do more than classify - we can *generate* text. I've pulled some news stories using [newspaper](https://github.com/codelucas/newspaper/). This example is drawn from the Keras [documentation](https://keras.io/examples/lstm_text_generation/). ``` from tensorflow.keras.callbacks import LambdaCallback from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, LSTM from tensorflow.keras.optimizers import RMSprop import numpy as np import random import sys import os ! git clone https://github.com/LambdaSchool/DS-Unit-4-Sprint-3-Deep-Learning.git ! mv DS-Unit-4-Sprint-3-Deep-Learning/module1-rnn-and-lstm/articles/ . data_files = os.listdir('./articles') # Read in Data data = [] for file in data_files: if file[-3:] == 'txt': with open(f'./articles/{file}', 'r', encoding='utf-8') as f: data.append(f.read()) len(data) data[-1] # Encode Data as Chars # Gather all text # Why? 1. See all possible characters 2. 
For training / splitting later text = " ".join(data) # Unique Characters chars = list(set(text)) # Lookup Tables char_int = {c:i for i, c in enumerate(chars)} int_char = {i:c for i, c in enumerate(chars)} len(chars) int_char # Create the sequence data maxlen = 40 step = 5 encoded = [char_int[c] for c in text] sequences = [] # Each element is 40 chars long next_char = [] # One element for each sequence for i in range(0, len(encoded) - maxlen, step): sequences.append(encoded[i : i + maxlen]) next_char.append(encoded[i + maxlen]) print('sequences: ', len(sequences)) sequences[0] # for i in sequences[0]: # print(int_char[i]) next_char[0], int_char[next_char[0]] # Create x & y (np.bool was removed from NumPy; use the builtin bool) x = np.zeros((len(sequences), maxlen, len(chars)), dtype=bool) y = np.zeros((len(sequences), len(chars)), dtype=bool) for i, sequence in enumerate(sequences): for t, char in enumerate(sequence): x[i,t,char] = 1 y[i, next_char[i]] = 1 x.shape x[0] chars y.shape (maxlen, len(chars)) # build the model: a single LSTM model = Sequential() model.add(LSTM(128, input_shape=(maxlen, len(chars)))) model.add(Dense(len(chars), activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam') def sample(preds): # helper function to sample an index from a probability array preds = np.asarray(preds).astype('float64') preds = np.log(preds) / 1 exp_preds = np.exp(preds) preds = exp_preds / np.sum(exp_preds) probas = np.random.multinomial(1, preds, 1) return np.argmax(probas) def on_epoch_end(epoch, _): # Function invoked at end of each epoch. Prints generated text. 
print() print('----- Generating text after Epoch: %d' % epoch) start_index = random.randint(0, len(text) - maxlen - 1) generated = '' sentence = text[start_index: start_index + maxlen] generated += sentence print('----- Generating with seed: "' + sentence + '"') sys.stdout.write(generated) for i in range(400): x_pred = np.zeros((1, maxlen, len(chars))) for t, char in enumerate(sentence): x_pred[0, t, char_int[char]] = 1 preds = model.predict(x_pred, verbose=0)[0] next_index = sample(preds) next_char = int_char[next_index] sentence = sentence[1:] + next_char sys.stdout.write(next_char) sys.stdout.flush() print() print_callback = LambdaCallback(on_epoch_end=on_epoch_end) # fit the model model.fit(x, y, batch_size=32, epochs=10, callbacks=[print_callback]) ``` ## Challenge You will be expected to use a Keras LSTM to generate text on today's assignment. # Review - <a href="#p1">Part 1: </a>Describe Neural Networks used for modeling sequences * Sequence Problems: - Time Series (like Stock Prices, Weather, etc.) - Text Classification - Text Generation - And many more! :D * LSTMs are generally preferred over RNNs for most problems * LSTMs are typically a single hidden layer of LSTM type; although, other architectures are possible. * Keras has LSTMs/RNN layer types implemented nicely - <a href="#p2">Part 2: </a>Apply a LSTM to a text generation problem using Keras * Shape of input data is very important * Can take a while to train * You can use it to write movie scripts. :P
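To make the Review point about input shape concrete, here is the character one-hot encoding from the lesson reduced to a toy corpus (a minimal, self-contained sketch, not the news-article data):

```python
import numpy as np

# Toy corpus instead of the news articles.
text = "hello world"
chars = sorted(set(text))  # 8 unique characters
char_int = {c: i for i, c in enumerate(chars)}

maxlen, step = 4, 1
sequences, next_chars = [], []
for i in range(0, len(text) - maxlen, step):
    sequences.append([char_int[c] for c in text[i:i + maxlen]])
    next_chars.append(char_int[text[i + maxlen]])

# One boolean plane per character: x is (samples, timesteps, vocab size),
# y is (samples, vocab size), which is exactly what the LSTM layer expects.
x = np.zeros((len(sequences), maxlen, len(chars)), dtype=bool)
y = np.zeros((len(sequences), len(chars)), dtype=bool)
for i, seq in enumerate(sequences):
    for t, idx in enumerate(seq):
        x[i, t, idx] = True
    y[i, next_chars[i]] = True
```

Each of the 7 samples has exactly one `True` per timestep in `x` and one `True` in `y`, which is the one-hot invariant the model relies on.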
github_jupyter
## 1.1 Access SMAP data with Python We will use the popular Python `requests` library to bulk download SMAP data. ### Bulk data downloads from NSIDC * We are going to do this in Python, but you can do this in other languages also. * Click [here](https://nsidc.org/support/faq/what-options-are-available-bulk-downloading-data-https-earthdata-login-enabled) to get additional instructions on bulk data downloads from NSIDC. * SMAP data are available through the following URL: * https://n5eil02u.ecs.nsidc.org/opendap/SMAP/ * For these examples, we will use **SPL3SMP version 004**, but the same methodology to access and subset SMAP data will work for any of the products available there Import packages ``` import calendar import os import requests ``` Create a directory to store our data downloads. ``` this_dir = os.getcwd() DATA_DIR = os.path.join(this_dir, 'data/L3_SM_P') if not os.path.exists(DATA_DIR): os.makedirs(DATA_DIR) ``` #### Now we will create a function to return the URL and filename of the data we want to download. For SMAP we will do this by year, month, and day. The rest of the filename is predictable. ``` def SMAP_L3_P_36km_Path(year, month, day): host = 'https://n5eil01u.ecs.nsidc.org/' version = '.004' url_path = '{host}/SMAP/SPL3SMP{version}/{year}.{month:02}.{day:02}/'.format(host=host, version=version, year=year, month=month, day=day) filename = 'SMAP_L3_SM_P_{year}{month:02}{day:02}_R14010_001.h5'.format(year=year, month=month, day=day) smap_data_path = url_path + filename return smap_data_path, filename ``` Add variables for username and password (edit to match your Earthdata login credentials). *Note: do not share your username and password with anyone!* ``` # Add Earthdata username and password here. username = '' password = '' assert username and password, 'You must supply your Earthdata username and password!'
``` Download March 2017 data ``` # Download data for March 2017 year = 2017 month = 3 _, days_in_month = calendar.monthrange(year, month) # 31 days in March 2017 # Use a requests session to keep track of authentication credentials with requests.Session() as session: session.auth = (username, password) for day in range(1, days_in_month + 1): print('Downloading SMAP data for: '+str(year)+'-'+str(month).zfill(2)+'-'+str(day).zfill(2)) full_path, file_name = SMAP_L3_P_36km_Path(year, month, day) filepath = os.path.join(DATA_DIR, file_name) response = session.get(full_path) # If the response code is 401, we still need to authorize with Earthdata. if response.status_code == 401: response = session.get(response.url) assert response.ok, 'Problem downloading data! Reason: {}'.format(response.reason) with open(filepath, 'wb') as f: f.write(response.content) print(file_name + ' downloaded') print('*** SM data saved to: '+ filepath +' *** ') ``` ### Now you have the tools to download in bulk! **FYI: these scripts can take a long time, especially if you are downloading larger files.** * SMAP 36 km data are ~30 MB * SMAP 9 km data are ~300 MB * SMAP 3 km data are ~2 GB **This is why OPeNDAP makes more sense in some cases.**
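One caveat with the loop above: `response.content` buffers the entire file in memory before it is written, which is painful for the ~2 GB 3 km granules. Below is a hedged sketch of a chunked alternative — `stream_to_file` is a made-up helper name, while `stream=True` and `Response.iter_content` are standard `requests` features:

```python
def stream_to_file(response, filepath, chunk_size=1024 * 1024):
    """Write an HTTP response body to disk in 1 MB chunks, so large
    granules never need to fit in memory all at once."""
    with open(filepath, 'wb') as f:
        for chunk in response.iter_content(chunk_size=chunk_size):
            if chunk:  # skip keep-alive chunks
                f.write(chunk)

# Inside the download loop this would replace the response.content write:
#     response = session.get(full_path, stream=True)
#     stream_to_file(response, filepath)
```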
# Installation - Run these commands - git clone https://github.com/Tessellate-Imaging/Monk_Object_Detection.git - cd Monk_Object_Detection/5_pytorch_retinanet/installation - Select the right requirements file and run - cat requirements_cuda9.0.txt | xargs -n 1 -L 1 pip install # Monk Format ## Dataset Directory Structure ../sample_dataset/kangaroo (root) | |-----------Images (img_dir) | | | |------------------img1.jpg | |------------------img2.jpg | |------------------.........(and so on) | | |-----------train_labels.csv (anno_file) ## Annotation file format | Id | Labels | | img1.jpg | x1 y1 x2 y2 label1 x1 y1 x2 y2 label2 | - Labels: xmin ymin xmax ymax label - xmin, ymin - top left corner of bounding box - xmax, ymax - bottom right corner of bounding box # COCO Format ## Dataset Directory Structure ../sample_dataset (root_dir) | |------kangaroo (coco_dir) | | | |---Images (img_dir) | |----| | |-------------------img1.jpg | |-------------------img2.jpg | |-------------------.........(and so on) | | | |---annotations (anno_dir) | |----| | |--------------------instances_Train.json | |--------------------classes.txt - instances_Train.json -> In proper COCO format - classes.txt -> A list of classes in alphabetical order ``` import os import numpy as np import cv2 import dicttoxml import xml.etree.ElementTree as ET from xml.dom.minidom import parseString from tqdm import tqdm import shutil import json import pandas as pd ``` # Sample Dataset Credits - credits: https://github.com/experiencor/kangaroo ``` # Provide details on directory in Monk Format root = "../sample_dataset/kangaroo/"; img_dir = "Images/"; anno_file = "train_labels.csv"; # Need not change anything below dataset_path = root; images_folder = root + "/" + img_dir; annotations_path = root + "/annotations/"; if not os.path.isdir(annotations_path): os.mkdir(annotations_path) input_images_folder = images_folder; input_annotations_path = root + "/" + anno_file; output_dataset_path = root; output_image_folder 
= input_images_folder; output_annotation_folder = annotations_path; tmp = img_dir.replace("/", ""); output_annotation_file = output_annotation_folder + "/instances_" + tmp + ".json"; output_classes_file = output_annotation_folder + "/classes.txt"; if not os.path.isdir(output_annotation_folder): os.mkdir(output_annotation_folder); df = pd.read_csv(input_annotations_path); columns = df.columns delimiter = " "; list_dict = []; anno = []; for i in range(len(df)): img_name = df[columns[0]][i]; labels = df[columns[1]][i]; tmp = labels.split(delimiter); for j in range(len(tmp)//5): label = tmp[j*5+4]; if(label not in anno): anno.append(label); anno = sorted(anno) for i in tqdm(range(len(anno))): tmp = {}; tmp["supercategory"] = "master"; tmp["id"] = i; tmp["name"] = anno[i]; list_dict.append(tmp); anno_f = open(output_classes_file, 'w'); for i in range(len(anno)): anno_f.write(anno[i] + "\n"); anno_f.close(); coco_data = {}; coco_data["type"] = "instances"; coco_data["images"] = []; coco_data["annotations"] = []; coco_data["categories"] = list_dict; image_id = 0; annotation_id = 0; for i in tqdm(range(len(df))): img_name = df[columns[0]][i]; labels = df[columns[1]][i]; tmp = labels.split(delimiter); image_in_path = input_images_folder + "/" + img_name; img = cv2.imread(image_in_path, 1); h, w, c = img.shape; images_tmp = {}; images_tmp["file_name"] = img_name; images_tmp["height"] = h; images_tmp["width"] = w; images_tmp["id"] = image_id; coco_data["images"].append(images_tmp); for j in range(len(tmp)//5): x1 = int(tmp[j*5+0]); y1 = int(tmp[j*5+1]); x2 = int(tmp[j*5+2]); y2 = int(tmp[j*5+3]); label = tmp[j*5+4]; annotations_tmp = {}; annotations_tmp["id"] = annotation_id; annotation_id += 1; annotations_tmp["image_id"] = image_id; annotations_tmp["segmentation"] = []; annotations_tmp["ignore"] = 0; annotations_tmp["area"] = (x2-x1)*(y2-y1); annotations_tmp["iscrowd"] = 0; annotations_tmp["bbox"] = [x1, y1, x2-x1, y2-y1]; annotations_tmp["category_id"] = anno.index(label); 
coco_data["annotations"].append(annotations_tmp) image_id += 1; outfile = open(output_annotation_file, 'w'); json_str = json.dumps(coco_data, indent=4); outfile.write(json_str); outfile.close(); ```
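Before training on the converted annotations, it can help to sanity-check the generated JSON. The sketch below is not part of Monk; `check_coco` and the tiny `sample` dict are illustrative, mirroring the fields the conversion loop above writes:

```python
import json

def check_coco(coco_data):
    """Minimal consistency checks for a COCO-style detection dict."""
    image_ids = {img['id'] for img in coco_data['images']}
    category_ids = {cat['id'] for cat in coco_data['categories']}
    for ann in coco_data['annotations']:
        assert ann['image_id'] in image_ids, 'annotation points at unknown image'
        assert ann['category_id'] in category_ids, 'annotation points at unknown class'
        x, y, w, h = ann['bbox']
        assert w > 0 and h > 0, 'empty bounding box'
        assert abs(ann['area'] - w * h) < 1e-6, 'area does not match bbox'
    return True

# A tiny hand-made dict in the same shape the conversion emits:
sample = {
    'type': 'instances',
    'images': [{'file_name': 'img1.jpg', 'height': 480, 'width': 640, 'id': 0}],
    'categories': [{'supercategory': 'master', 'id': 0, 'name': 'kangaroo'}],
    'annotations': [{'id': 0, 'image_id': 0, 'segmentation': [], 'ignore': 0,
                     'area': 100 * 50, 'iscrowd': 0, 'bbox': [10, 20, 100, 50],
                     'category_id': 0}],
}
ok = check_coco(json.loads(json.dumps(sample)))  # also round-trips through JSON
```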
# APPLICATIONS OF MARKOV DECISION PROCESSES --- In this notebook we will take a look at some indicative applications of Markov decision processes. We will cover content from [`mdp.py`](https://github.com/aimacode/aima-python/blob/master/mdp.py), for chapter 17 of Stuart Russell's and Peter Norvig's book [*Artificial Intelligence: A Modern Approach*](http://aima.cs.berkeley.edu/). ``` from mdp import * from notebook import psource, pseudocode ``` ## CONTENTS - Simple MDP - State dependent reward function - State and action dependent reward function - State, action and next state dependent reward function - Grid MDP - Pathfinding problem ## SIMPLE MDP --- ### State dependent reward function Markov Decision Processes are formally described as processes that follow the Markov property, which states that "The future is independent of the past given the present". MDPs formally describe environments for reinforcement learning and we assume that the environment is *fully observable*. Let us take a toy example MDP and solve it using the functions in `mdp.py`. This is a simple example adapted from a [similar problem](http://www0.cs.ucl.ac.uk/staff/D.Silver/web/Teaching_files/MDP.pdf) by Dr. David Silver, tweaked to fit the limitations of the current functions. ![title](images/mdp-b.png) Let's say you're a student attending lectures in a university. There are three lectures you need to attend on a given day. <br> Attending the first lecture gives you 4 points of reward. After the first lecture, you have a 0.6 probability to continue into the second one, yielding 6 more points of reward. <br> But, with a probability of 0.4, you get distracted and start using Facebook instead and get a reward of -1. From then onwards, you really can't let go of Facebook and there's just a 0.1 probability that you will concentrate back on the lecture. <br> After the second lecture, you have an equal chance of attending the next lecture or just falling asleep.
Falling asleep is the terminal state and yields you no reward, but continuing on to the final lecture gives you a big reward of 10 points. <br> From there on, you have a 40% chance of going to study and reach the terminal state, but a 60% chance of going to the pub with your friends instead. You end up drunk and don't know which lecture to attend, so you go to one of the lectures according to the probabilities given above. <br> We now have an outline of our stochastic environment and we need to maximize our reward by solving this MDP. <br> <br> We first have to define our Transition Matrix as a nested dictionary to fit the requirements of the MDP class. ``` t = { 'leisure': { 'facebook': {'leisure':0.9, 'class1':0.1}, 'quit': {'leisure':0.1, 'class1':0.9}, 'study': {}, 'sleep': {}, 'pub': {} }, 'class1': { 'study': {'class2':0.6, 'leisure':0.4}, 'facebook': {'class2':0.4, 'leisure':0.6}, 'quit': {}, 'sleep': {}, 'pub': {} }, 'class2': { 'study': {'class3':0.5, 'end':0.5}, 'sleep': {'end':0.5, 'class3':0.5}, 'facebook': {}, 'quit': {}, 'pub': {}, }, 'class3': { 'study': {'end':0.6, 'class1':0.08, 'class2':0.16, 'class3':0.16}, 'pub': {'end':0.4, 'class1':0.12, 'class2':0.24, 'class3':0.24}, 'facebook': {}, 'quit': {}, 'sleep': {} }, 'end': {} } ``` We now need to define the reward for each state. ``` rewards = { 'class1': 4, 'class2': 6, 'class3': 10, 'leisure': -1, 'end': 0 } ``` This MDP has only one terminal state. ``` terminals = ['end'] ``` Let's now set the initial state to Class 1. ``` init = 'class1' ``` We will write a CustomMDP class to extend the MDP class for the problem at hand. This class will implement the `T` method to implement the transition model. This is the exact same class as given in [`mdp.ipynb`](https://github.com/aimacode/aima-python/blob/master/mdp.ipynb#MDP). ``` class CustomMDP(MDP): def __init__(self, transition_matrix, rewards, terminals, init, gamma=.9): # All possible actions. 
actlist = [] for state in transition_matrix.keys(): actlist.extend(transition_matrix[state]) actlist = list(set(actlist)) print(actlist) MDP.__init__(self, init, actlist, terminals=terminals, gamma=gamma) self.t = transition_matrix self.reward = rewards for state in self.t: self.states.add(state) def T(self, state, action): if action is None: return [(0.0, state)] else: return [(prob, new_state) for new_state, prob in self.t[state][action].items()] ``` We now need an instance of this class. ``` mdp = CustomMDP(t, rewards, terminals, init, gamma=.9) ``` The utility of each state can be found by `value_iteration`. ``` value_iteration(mdp) ``` Now that we can compute the utility values, we can find the best policy. ``` pi = best_policy(mdp, value_iteration(mdp, .01)) ``` `pi` stores the best action for each state. ``` print(pi) ``` We can confirm that this is the best policy by verifying this result against `policy_iteration`. ``` policy_iteration(mdp) ``` Everything looks perfect, but let us look at another possibility for an MDP. <br> Till now we have only dealt with rewards that the agent gets while it is **on** a particular state. What if we want to have different rewards for a state depending on the action that the agent takes next. The agent gets the reward _during its transition_ to the next state. <br> For the sake of clarity, we will call this the _transition reward_ and we will call this kind of MDP a _dynamic_ MDP. This is not a conventional term, we just use it to minimize confusion between the two. <br> This next section deals with how to create and solve a dynamic MDP. ### State and action dependent reward function Let us consider a very similar problem, but this time, we do not have rewards _on_ states, instead, we have rewards on the transitions between states. This state diagram will make it clearer. 
![title](images/mdp-c.png) A very similar scenario as the previous problem, but we have different rewards for the same state depending on the action taken. <br> To deal with this, we just need to change the `R` method of the `MDP` class, but to prevent confusion, we will write a new similar class `DMDP`. ``` class DMDP: """A Markov Decision Process, defined by an initial state, transition model, and reward model. We also keep track of a gamma value, for use by algorithms. The transition model is represented somewhat differently from the text. Instead of P(s' | s, a) being a probability number for each state/state/action triplet, we instead have T(s, a) return a list of (p, s') pairs. The reward function is very similar. We also keep track of the possible states, terminal states, and actions for each state.""" def __init__(self, init, actlist, terminals, transitions={}, rewards={}, states=None, gamma=.9): if not (0 < gamma <= 1): raise ValueError("An MDP must have 0 < gamma <= 1") if states: self.states = states else: self.states = set() self.init = init self.actlist = actlist self.terminals = terminals self.transitions = transitions self.rewards = rewards self.gamma = gamma def R(self, state, action): """Return a numeric reward for this state and this action.""" if (self.rewards == {}): raise ValueError('Reward model is missing') else: return self.rewards[state][action] def T(self, state, action): """Transition model. From a state and an action, return a list of (probability, result-state) pairs.""" if(self.transitions == {}): raise ValueError("Transition model is missing") else: return self.transitions[state][action] def actions(self, state): """Set of actions that can be performed in this state. By default, a fixed list of actions, except for terminal states. 
Override this method if you need to specialize by state.""" if state in self.terminals: return [None] else: return self.actlist ``` The transition model will be the same ``` t = { 'leisure': { 'facebook': {'leisure':0.9, 'class1':0.1}, 'quit': {'leisure':0.1, 'class1':0.9}, 'study': {}, 'sleep': {}, 'pub': {} }, 'class1': { 'study': {'class2':0.6, 'leisure':0.4}, 'facebook': {'class2':0.4, 'leisure':0.6}, 'quit': {}, 'sleep': {}, 'pub': {} }, 'class2': { 'study': {'class3':0.5, 'end':0.5}, 'sleep': {'end':0.5, 'class3':0.5}, 'facebook': {}, 'quit': {}, 'pub': {}, }, 'class3': { 'study': {'end':0.6, 'class1':0.08, 'class2':0.16, 'class3':0.16}, 'pub': {'end':0.4, 'class1':0.12, 'class2':0.24, 'class3':0.24}, 'facebook': {}, 'quit': {}, 'sleep': {} }, 'end': {} } ``` The reward model will be a dictionary very similar to the transition dictionary with a reward for every action for every state. ``` r = { 'leisure': { 'facebook':-1, 'quit':0, 'study':0, 'sleep':0, 'pub':0 }, 'class1': { 'study':-2, 'facebook':-1, 'quit':0, 'sleep':0, 'pub':0 }, 'class2': { 'study':-2, 'sleep':0, 'facebook':0, 'quit':0, 'pub':0 }, 'class3': { 'study':10, 'pub':1, 'facebook':0, 'quit':0, 'sleep':0 }, 'end': { 'study':0, 'pub':0, 'facebook':0, 'quit':0, 'sleep':0 } } ``` The MDP has only one terminal state ``` terminals = ['end'] ``` Let's now set the initial state to Class 1. ``` init = 'class1' ``` We will write a CustomDMDP class to extend the DMDP class for the problem at hand. This class will implement everything that the previous CustomMDP class implements along with a new reward model. 
``` class CustomDMDP(DMDP): def __init__(self, transition_matrix, rewards, terminals, init, gamma=.9): actlist = [] for state in transition_matrix.keys(): actlist.extend(transition_matrix[state]) actlist = list(set(actlist)) print(actlist) DMDP.__init__(self, init, actlist, terminals=terminals, gamma=gamma) self.t = transition_matrix self.rewards = rewards for state in self.t: self.states.add(state) def T(self, state, action): if action is None: return [(0.0, state)] else: return [(prob, new_state) for new_state, prob in self.t[state][action].items()] def R(self, state, action): if action is None: return 0 else: return self.rewards[state][action] ``` One thing we haven't thought about yet is that the `value_iteration` algorithm won't work now that the reward model is changed. It will be quite similar to the one we currently have nonetheless. The Bellman update equation now is defined as follows $$U(s)=\max_{a\epsilon A(s)}\bigg[R(s, a) + \gamma\sum_{s'}P(s'\ |\ s,a)U(s')\bigg]$$ It is not difficult to see that the update equation we have been using till now is just a special case of this more generalized equation. We also need to max over the reward function now as the reward function is action dependent as well. <br> We will use this to write a function to carry out value iteration, very similar to the one we are familiar with. ``` def value_iteration_dmdp(dmdp, epsilon=0.001): U1 = {s: 0 for s in dmdp.states} R, T, gamma = dmdp.R, dmdp.T, dmdp.gamma while True: U = U1.copy() delta = 0 for s in dmdp.states: U1[s] = max([(R(s, a) + gamma*sum([(p*U[s1]) for (p, s1) in T(s, a)])) for a in dmdp.actions(s)]) delta = max(delta, abs(U1[s] - U[s])) if delta < epsilon * (1 - gamma) / gamma: return U ``` We're all set. Let's instantiate our class. ``` dmdp = CustomDMDP(t, r, terminals, init, gamma=.9) ``` Calculate utility values by calling `value_iteration_dmdp`. ``` value_iteration_dmdp(dmdp) ``` These are the expected utility values for our new MDP. 
<br> As you might have guessed, we cannot use the old `best_policy` function to find the best policy. So we will write our own. But, before that we need a helper function to calculate the expected utility value given a state and an action. ``` def expected_utility_dmdp(a, s, U, dmdp): return dmdp.R(s, a) + dmdp.gamma*sum([(p*U[s1]) for (p, s1) in dmdp.T(s, a)]) ``` Now we write our modified `best_policy` function. ``` from utils import argmax def best_policy_dmdp(dmdp, U): pi = {} for s in dmdp.states: pi[s] = argmax(dmdp.actions(s), key=lambda a: expected_utility_dmdp(a, s, U, dmdp)) return pi ``` Find the best policy. ``` pi = best_policy_dmdp(dmdp, value_iteration_dmdp(dmdp, .01)) print(pi) ``` From this, we can infer that `value_iteration_dmdp` tries to minimize the negative reward. Since we don't have rewards for states now, the algorithm takes the action that would try to avoid getting negative rewards and take the lesser of two evils if all rewards are negative. You might also want to have state rewards alongside transition rewards. Perhaps you can do that yourself now that the difficult part has been done. <br> ### State, action and next-state dependent reward function For truly stochastic environments, we have noticed that taking an action from a particular state doesn't always do what we want it to. Instead, for every action taken from a particular state, it might be possible to reach a different state each time depending on the transition probabilities. What if we want different rewards for each state, action and next-state triplet? Mathematically, we now want a reward function of the form R(s, a, s') for our MDP. This section shows how we can tweak the MDP class to achieve this. <br> Let's now take a different problem statement. The one we are working with is a bit too simple. Consider a taxi that serves three adjacent towns A, B, and C. Each time the taxi discharges a passenger, the driver must choose from three possible actions: 1. 
Cruise the streets looking for a passenger. 2. Go to the nearest taxi stand. 3. Wait for a radio call from the dispatcher with instructions. <br> Subject to the constraint that the taxi driver cannot do the third action in town B because of distance and poor reception. Let's model our MDP. <br> The MDP has three states, namely A, B and C. <br> It has three actions, namely 1, 2 and 3. <br> Action sets: <br> $K_{a}$ = {1, 2, 3} <br> $K_{b}$ = {1, 2} <br> $K_{c}$ = {1, 2, 3} <br> We have the following transition probability matrices: <br> <br> Action 1: Cruising streets <br> $\\ P^{1} = \left[ {\begin{array}{ccc} \frac{1}{2} & \frac{1}{4} & \frac{1}{4} \\ \frac{1}{2} & 0 & \frac{1}{2} \\ \frac{1}{4} & \frac{1}{4} & \frac{1}{2} \\ \end{array}}\right] \\ \\ $ <br> <br> Action 2: Waiting at the taxi stand <br> $\\ P^{2} = \left[ {\begin{array}{ccc} \frac{1}{16} & \frac{3}{4} & \frac{3}{16} \\ \frac{1}{16} & \frac{7}{8} & \frac{1}{16} \\ \frac{1}{8} & \frac{3}{4} & \frac{1}{8} \\ \end{array}}\right] \\ \\ $ <br> <br> Action 3: Waiting for dispatch <br> $\\ P^{3} = \left[ {\begin{array}{ccc} \frac{1}{4} & \frac{1}{8} & \frac{5}{8} \\ 0 & 1 & 0 \\ \frac{3}{4} & \frac{1}{16} & \frac{3}{16} \\ \end{array}}\right] \\ \\ $ <br> <br> For the sake of readability, we will call the states A, B and C and the actions 'cruise', 'stand' and 'dispatch'. We will now build the transition model as a dictionary using these matrices. 
``` t = { 'A': { 'cruise': {'A':0.5, 'B':0.25, 'C':0.25}, 'stand': {'A':0.0625, 'B':0.75, 'C':0.1875}, 'dispatch': {'A':0.25, 'B':0.125, 'C':0.625} }, 'B': { 'cruise': {'A':0.5, 'B':0, 'C':0.5}, 'stand': {'A':0.0625, 'B':0.875, 'C':0.0625}, 'dispatch': {'A':0, 'B':1, 'C':0} }, 'C': { 'cruise': {'A':0.25, 'B':0.25, 'C':0.5}, 'stand': {'A':0.125, 'B':0.75, 'C':0.125}, 'dispatch': {'A':0.75, 'B':0.0625, 'C':0.1875} } } ``` The reward matrices for the problem are as follows: <br> <br> Action 1: Cruising streets <br> $\\ R^{1} = \left[ {\begin{array}{ccc} 10 & 4 & 8 \\ 14 & 0 & 18 \\ 10 & 2 & 8 \\ \end{array}}\right] \\ \\ $ <br> <br> Action 2: Waiting at the taxi stand <br> $\\ R^{2} = \left[ {\begin{array}{ccc} 8 & 2 & 4 \\ 8 & 16 & 8 \\ 6 & 4 & 2\\ \end{array}}\right] \\ \\ $ <br> <br> Action 3: Waiting for dispatch <br> $\\ R^{3} = \left[ {\begin{array}{ccc} 4 & 6 & 4 \\ 0 & 0 & 0 \\ 4 & 0 & 8\\ \end{array}}\right] \\ \\ $ <br> <br> We now build the reward model as a dictionary using these matrices. ``` r = { 'A': { 'cruise': {'A':10, 'B':4, 'C':8}, 'stand': {'A':8, 'B':2, 'C':4}, 'dispatch': {'A':4, 'B':6, 'C':4} }, 'B': { 'cruise': {'A':14, 'B':0, 'C':18}, 'stand': {'A':8, 'B':16, 'C':8}, 'dispatch': {'A':0, 'B':0, 'C':0} }, 'C': { 'cruise': {'A':10, 'B':2, 'C':8}, 'stand': {'A':6, 'B':4, 'C':2}, 'dispatch': {'A':4, 'B':0, 'C':8} } } ``` The Bellman update equation now is defined as follows $$U(s)=\max_{a\epsilon A(s)}\sum_{s'}P(s'\ |\ s,a)(R(s'\ |\ s,a) + \gamma U(s'))$$ It is not difficult to see that all the update equations we have used till now are just special cases of this more generalized equation. If we did not have next-state-dependent rewards, the first term inside the summation exactly sums up to R(s, a) or the state-reward for a particular action and we would get the update equation used in the previous problem.
If we did not have action dependent rewards, the first term inside the summation sums up to R(s) or the state-reward and we would get the first update equation used in `mdp.ipynb`. <br> For example, as we have the same reward regardless of the action, let's consider a reward of **r** units for a particular state and let's assume the transition probabilities to be 0.1, 0.2, 0.3 and 0.4 for 4 possible actions for that state. We will further assume that a particular action in a state leads to the same state every time we take that action. The first term inside the summation for this case will evaluate to (0.1 + 0.2 + 0.3 + 0.4)r = r which is equal to R(s) in the first update equation. <br> There are many ways to write value iteration for this situation, but we will go with the most intuitive method. One that can be implemented with minor alterations to the existing `value_iteration` algorithm. <br> Our `DMDP` class will be slightly different. More specifically, the `R` method will have one more index to go through now that we have three levels of nesting in the reward model. We will call the new class `DMDP2` as I have run out of creative names. ``` class DMDP2: """A Markov Decision Process, defined by an initial state, transition model, and reward model. We also keep track of a gamma value, for use by algorithms. The transition model is represented somewhat differently from the text. Instead of P(s' | s, a) being a probability number for each state/state/action triplet, we instead have T(s, a) return a list of (p, s') pairs. The reward function is very similar. 
We also keep track of the possible states, terminal states, and actions for each state.""" def __init__(self, init, actlist, terminals, transitions={}, rewards={}, states=None, gamma=.9): if not (0 < gamma <= 1): raise ValueError("An MDP must have 0 < gamma <= 1") if states: self.states = states else: self.states = set() self.init = init self.actlist = actlist self.terminals = terminals self.transitions = transitions self.rewards = rewards self.gamma = gamma def R(self, state, action, state_): """Return a numeric reward for this state, this action and the next state_""" if (self.rewards == {}): raise ValueError('Reward model is missing') else: return self.rewards[state][action][state_] def T(self, state, action): """Transition model. From a state and an action, return a list of (probability, result-state) pairs.""" if(self.transitions == {}): raise ValueError("Transition model is missing") else: return self.transitions[state][action] def actions(self, state): """Set of actions that can be performed in this state. By default, a fixed list of actions, except for terminal states. Override this method if you need to specialize by state.""" if state in self.terminals: return [None] else: return self.actlist ``` Only the `R` method is different from the previous `DMDP` class. <br> As before, we write a custom class that implements the transition model and the reward model. <br> We call this class `CustomDMDP2`.
``` class CustomDMDP2(DMDP2): def __init__(self, transition_matrix, rewards, terminals, init, gamma=.9): actlist = [] for state in transition_matrix.keys(): actlist.extend(transition_matrix[state]) actlist = list(set(actlist)) print(actlist) DMDP2.__init__(self, init, actlist, terminals=terminals, gamma=gamma) self.t = transition_matrix self.rewards = rewards for state in self.t: self.states.add(state) def T(self, state, action): if action is None: return [(0.0, state)] else: return [(prob, new_state) for new_state, prob in self.t[state][action].items()] def R(self, state, action, state_): if action is None: return 0 else: return self.rewards[state][action][state_] ``` We can finally write value iteration for this problem. The latest update equation will be used. ``` def value_iteration_taxi_mdp(dmdp2, epsilon=0.001): U1 = {s: 0 for s in dmdp2.states} R, T, gamma = dmdp2.R, dmdp2.T, dmdp2.gamma while True: U = U1.copy() delta = 0 for s in dmdp2.states: U1[s] = max([sum([(p*(R(s, a, s1) + gamma*U[s1])) for (p, s1) in T(s, a)]) for a in dmdp2.actions(s)]) delta = max(delta, abs(U1[s] - U[s])) if delta < epsilon * (1 - gamma) / gamma: return U ``` These algorithms can be made more pythonic by using cleverer list comprehensions. We can also write the variants of value iteration in such a way that all problems are solved using the same base class, regardless of the reward function and the number of arguments it takes. Quite a few things can be done to refactor the code and reduce repetition, but we have done it this way for the sake of clarity. Perhaps you can try this as an exercise. <br> We now need to define terminals and initial state. ``` terminals = ['end'] init = 'A' ``` Let's instantiate our class. ``` dmdp2 = CustomDMDP2(t, r, terminals, init, gamma=.9) value_iteration_taxi_mdp(dmdp2) ``` These are the expected utility values for the states of our MDP. Let's proceed to write a helper function to find the expected utility and another to find the best policy. 
``` def expected_utility_dmdp2(a, s, U, dmdp2): return sum([(p*(dmdp2.R(s, a, s1) + dmdp2.gamma*U[s1])) for (p, s1) in dmdp2.T(s, a)]) from utils import argmax def best_policy_dmdp2(dmdp2, U): pi = {} for s in dmdp2.states: pi[s] = argmax(dmdp2.actions(s), key=lambda a: expected_utility_dmdp2(a, s, U, dmdp2)) return pi ``` Find the best policy. ``` pi = best_policy_dmdp2(dmdp2, value_iteration_taxi_mdp(dmdp2, .01)) print(pi) ``` We have successfully adapted the existing code to a different scenario yet again. The takeaway from this section is that you can convert the vast majority of reinforcement learning problems into MDPs and solve for the best policy using simple yet efficient tools. ## GRID MDP --- ### Pathfinding Problem Markov Decision Processes can be used to find the best path through a maze. Let us consider this simple maze. ![title](images/maze.png) This environment can be formulated as a GridMDP. <br> To make the grid matrix, we will consider the state-reward to be -0.1 for every state. <br> State (1, 1) will have a reward of -5 to signify that this state is to be prohibited. <br> State (9, 9) will have a reward of +5. This will be the terminal state. <br> The matrix can be generated using the GridMDP editor or we can write it ourselves. 
``` grid = [ [None, None, None, None, None, None, None, None, None, None, None], [None, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, None, +5.0, None], [None, -0.1, None, None, None, None, None, None, None, -0.1, None], [None, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, None], [None, -0.1, None, None, None, None, None, None, None, None, None], [None, -0.1, None, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, None], [None, -0.1, None, None, None, None, None, -0.1, None, -0.1, None], [None, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, -0.1, None, -0.1, None], [None, None, None, None, None, -0.1, None, -0.1, None, -0.1, None], [None, -5.0, -0.1, -0.1, -0.1, -0.1, None, -0.1, None, -0.1, None], [None, None, None, None, None, None, None, None, None, None, None] ] ``` We have only one terminal state, (9, 9). ``` terminals = [(9, 9)] ``` We define our maze environment below. ``` maze = GridMDP(grid, terminals) ``` To solve the maze, we can use the `best_policy` function along with `value_iteration`. ``` pi = best_policy(maze, value_iteration(maze)) ``` This is the heatmap generated by the GridMDP editor using `value_iteration` on this environment. <br> ![title](images/mdp-d.png) <br> Let's print out the best policy. ``` from utils import print_table print_table(maze.to_arrows(pi)) ``` As you can infer, we can find the path to the terminal state starting from any given state using this policy. All maze problems can be solved by formulating them as MDPs.
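The same pathfinding idea can be sketched without the aima-python helpers at all. Everything below is a self-contained toy (all names hypothetical): deterministic value iteration on a 3×3 grid with one blocked cell, followed by a greedy policy read-off:

```python
def solve_grid(rows, cols, terminal, blocked=frozenset(), step_reward=-0.1,
               goal_reward=5.0, gamma=0.9, epsilon=1e-4):
    """Deterministic grid pathfinding as an MDP: every move costs a small
    negative reward and the terminal cell pays a positive one."""
    moves = {'N': (-1, 0), 'S': (1, 0), 'E': (0, 1), 'W': (0, -1)}
    cells = [(r, c) for r in range(rows) for c in range(cols)
             if (r, c) not in blocked]
    U = {s: 0.0 for s in cells}

    def step(s, a):
        nxt = (s[0] + moves[a][0], s[1] + moves[a][1])
        return nxt if nxt in U else s  # bumping into a wall leaves you in place

    while True:
        delta = 0.0
        for s in cells:
            new = goal_reward if s == terminal else max(
                step_reward + gamma * U[step(s, a)] for a in moves)
            delta = max(delta, abs(new - U[s]))
            U[s] = new
        if delta < epsilon:
            break

    policy = {s: max(moves, key=lambda a: U[step(s, a)])
              for s in cells if s != terminal}
    return U, policy

U, policy = solve_grid(3, 3, terminal=(2, 2), blocked={(1, 1)})
```

The greedy policy follows the utilities, so from any free cell it routes around the blocked centre toward the goal, just as the arrow table above does for the larger maze.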
# Import packages

```
import numpy as np
import numpy.random as npr
from matplotlib import pyplot as plt
%matplotlib inline
from scipy.linalg import orth
import time

from ssa.models import fit_ssa, weighted_pca, weighted_rrr
from ssa.util import get_sample_weights

# %load_ext autoreload
# %autoreload 2
```

# Simulate data

```
# Function that creates a sine wave for a given amount of time (T),
# where the number of cycles (c) occurs during that time
def create_sine(T, c):
    tau = T / (2 * np.pi) / c
    return np.sin(np.arange(0, T) / tau)
```

#### Generate the simulated data X0 (size [Time x Num_neurons1]) and Y0 (size [Time x Num_neurons2])

```
np.random.seed(0)  # To get the same simulated data

T = 1200  # Time
N1 = 50   # Number of neurons in population 1
N2 = 60   # Number of neurons in population 2

R_shared = 3  # Number of shared dimensions in lowD representations
R1 = 3        # Number of unshared dimensions in lowD representation of pop 1
R2 = 3        # Number of unshared dimensions in lowD representation of pop 2

# Orthogonal matrix that projects low dimensional space to full neural space of pop1
U_tmp = orth(npr.randn(R_shared + R1, N1).T).T
# Orthogonal matrix that projects low dimensional space to full neural space of pop2
V_tmp = orth(npr.randn(R_shared + R2, N2).T).T

# Create shared low dimensional space
Z_shared = np.zeros([T, R_shared])
for i in range(R_shared):
    Z_shared[300 * i:300 * i + 500, i] = create_sine(500, i + 1)

# Create low dimensional spaces unique to each population
Zx = npr.randn(T, R1)
Zy = npr.randn(T, R2)

# Create high-dimensional neural activity
b1 = npr.randn(N1)  # Offset of neurons
b2 = npr.randn(N2)  # Offset of neurons
# Project into high-dimensional space and add offsets
X0 = Z_shared @ U_tmp[:R_shared] + Zx @ U_tmp[R_shared:] + b1
Y0 = Z_shared @ V_tmp[:R_shared] + Zy @ V_tmp[R_shared:] + b2

# X0 = X0 + .1 * npr.randn(X0.shape[0], X0.shape[1])  # Add noise
# Y0 = Y0 + .1 * npr.randn(Y0.shape[0], Y0.shape[1])  # Add noise
```

# Preprocess data (optional)

I have found that the method usually works better when zero-centering the data. In this specific example, if you don't zero-center the data, it will take ~5000 iterations to converge to the ground truth, rather than ~2000.

```
# X = np.copy(X0)
# Y = np.copy(Y0)
X = np.copy(X0 - np.mean(X0, axis=0)[None, :])
Y = np.copy(Y0 - np.mean(Y0, axis=0)[None, :])
```

# Set required model parameters

```
# Number of shared dimensions in the low-D model you're fitting
R_est = 4
```

# Set optional model parameters

All of these have default values, so it's not essential to set them. The values listed below are the default values.

```
# Strength of the sparsity penalty
lam = .01

# Number of epochs of model fitting
n_epochs = 3000

# Learning rate of model fitting
lr = .001

# Whether to print the model error while fitting
verbose = True

# How much to weight each data point in time
# (this can be helpful for making sure dimensions still aim to explain time points with low activity)
sample_weights = np.ones([X.shape[0], 1])  # Weight equally
# sample_weights = get_sample_weights(Y)  # Weight inversely to norm of activity at each time point
```

# Fit Reduced Rank Regression Model (for comparison)

```
# Fit weighted RRR
# Note that this function does not automatically subtract the mean from the data
__, U_est_rrr, V_est_rrr, b_rrr = weighted_rrr(X, Y, R_est, sample_weights)

# Get the low dimensional representation (the principal components)
rrr_latent = X @ U_est_rrr
```

# Fit SSA Model

```
# Fit SSA
model, latent, y_pred = fit_ssa(X=X, Y=Y, R=R_est, sample_weight=sample_weights,
                                lam=lam, lr=lr, n_epochs=n_epochs, verbose=verbose)

# Fit SSA without all the optional parameters
# model, latent, y_pred = fit_ssa(X=X, Y=Y, R=R_est)

# Get the low dimensional representation
ssa_latent = latent.detach().numpy()
```

# Plot results

### Plot unordered lowD representations

```
# Ground truth
Z_extra = np.zeros([T, R_est])
Z_extra[:, :R_shared] = Z_shared

plt.figure(figsize=(15, 5))
for i in range(R_est):
    # Plot ground truth
    plt.subplot(R_est, 3, 3 * i + 1)
    plt.plot(Z_extra[:, i])
    plt.ylim([-1.1, 1.1])
    plt.yticks([])
    if i < R_est - 1:
        plt.xticks([])
    else:
        plt.xlabel('Time')

    # Plot SSA results
    plt.subplot(R_est, 3, 3 * i + 2)
    plt.plot(ssa_latent[:, i])
    plt.ylim([-1.1, 1.1])
    plt.yticks([])
    if i < R_est - 1:
        plt.xticks([])
    else:
        plt.xlabel('Time')

    # Plot RRR results
    plt.subplot(R_est, 3, 3 * i + 3)
    plt.plot(rrr_latent[:, i])
    plt.ylim([-1.1, 1.1])
    plt.yticks([])
    if i < R_est - 1:
        plt.xticks([])
    else:
        plt.xlabel('Time')

# Titles
plt.subplot(R_est, 3, 1)
plt.title('True LowD Projections')
plt.subplot(R_est, 3, 2)
plt.title('SSA LowD Projections')
plt.subplot(R_est, 3, 3)
plt.title('RRR LowD Projections')
```
# Module 2: HTML: Requests and BeautifulSoup

## Parsing Pagina12

<img src='https://www.pagina12.com.ar/assets/media/logos/logo_pagina_12_n.svg?v=1.0.178' width=300></img>

In this module we will see how to use the `requests` and `bs4` libraries to program scrapers for HTML sites. We will build a news scraper for the newspaper <a href='www.pagina12.com.ar'>Página 12</a>.

Suppose we want to read the newspaper online. The first thing we do is open the browser, type the newspaper's URL, and press Enter so the newspaper's page appears. What happens the moment we press Enter is the following:

1. The browser sends a request to the URL asking for information.
2. The server receives the request and processes the response.
3. The server sends the response to the IP address from which it received the request.
4. Our browser receives the response and displays it **formatted** on screen.

To build a scraper we must write a program that replicates this flow automatically and then extracts the desired information from the response. We will use `requests` to make requests and receive responses, and `bs4` to *parse* the response and extract the information.<br>
Here are some links that may be useful:
- [HTTP status codes](https://developer.mozilla.org/es/docs/Web/HTTP/Status)
- [requests documentation](https://requests.kennethreitz.org/en/master/)
- [bs4 documentation](https://www.crummy.com/software/BeautifulSoup/bs4/doc/)

```
import requests

url = 'https://www.pagina12.com.ar/'
p12 = requests.get(url)
p12.status_code
p12.content
```

Often the response to a request may be something other than text: an image, an audio file, a video, etc.

```
p12.text
```

Let's look at other elements of the response

```
p12.headers
p12.request.headers
```

The headers of the request we just made announce that we are using the requests library for Python and not a conventional browser. This can be modified.

```
p12.cookies

from bs4 import BeautifulSoup

s = BeautifulSoup(p12.text, 'lxml')
type(s)
print(s.prettify())
```

First exercise: get a list of links to the different sections of the newspaper.<br>
Use the element inspector to see where the information is located.

```
secciones = s.find('ul', attrs={'class': 'hot-sections'}).find_all('li')
secciones
[seccion.text for seccion in secciones]
seccion = secciones[0]
seccion.a.get('href')
```

We are interested in the links, not the text

```
links_secciones = [seccion.a.get('href') for seccion in secciones]
links_secciones
```

Let's load a section's page to see how it is composed

```
sec = requests.get(links_secciones[0])
sec
sec.request.url
soup_seccion = BeautifulSoup(sec.text, 'lxml')
print(soup_seccion.prettify())
```

The page is divided into a featured article and a `<ul>` list with the rest of the articles

```
featured_article = soup_seccion.find('div', attrs={'class': 'featured-article__container'})
featured_article
featured_article.a.get('href')
article_list = soup_seccion.find('ul', attrs={'class': 'article-list'})

def obtener_notas(soup):
    '''
    Receives a BeautifulSoup object for a section page and returns
    a list of URLs to the articles in that section
    '''
    lista_notas = []

    # Get the featured article
    featured_article = soup.find('div', attrs={'class': 'featured-article__container'})
    if featured_article:
        lista_notas.append(featured_article.a.get('href'))

    # Get the article list
    article_list = soup.find('ul', attrs={'class': 'article-list'})
    for article in article_list.find_all('li'):
        if article.a:
            lista_notas.append(article.a.get('href'))

    return lista_notas
```

Let's test the function

```
lista_notas = obtener_notas(soup_seccion)
lista_notas
```

## Class 4

In this class I will talk a bit about error handling. As an example, we will take one of the links obtained with the function you had to build in the previous class.

Status code != 200

```
r = requests.get(lista_notas[0])
if r.status_code == 200:
    # Process the response
    print('processing..')
else:
    # Report the error
    print('reporting...')

url_nota = lista_notas[0]
print(url_nota)
```

Suppose the link to the article is malformed, or the article was removed from the site, or the Página 12 website is simply down.

```
url_mala = url_nota.replace('2', '3')
print(url_mala)
```

We do this only to simulate a badly formed URL or a downed server

```
r = requests.get(url_mala)
if r.status_code == 200:
    # Process the response
    print('processing..')
else:
    # Report the error
    print('reporting status code != 200')
```

We got an error that interrupted the execution of the code. We never got to print the status code. Often these errors are unavoidable and do not depend on us. What does depend on us is how we handle them, writing code that is robust and resilient to errors.

```
try:
    nota = requests.get(url_mala)
except:
    print('Error in the request!\n')

print('The rest of the program continues...')
```

Good programming practice includes error handling to make the code robust

```
try:
    nota = requests.get(url_mala)
except Exception as e:
    print('Error in the request:')
    print(e)
    print('\n')

print('The rest of the program continues...')
```
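Beyond catching the exception, a common follow-up is to retry a flaky request a few times before giving up. Below is a minimal sketch of such a pattern; `fetch_with_retries` and its parameters are illustrative helpers, not part of `requests` — in practice you would pass `requests.get` as the `get` argument. Here a stub stands in for the network call so the behavior is visible without hitting a server.

```python
import time

def fetch_with_retries(get, url, retries=3, delay=1.0):
    """Call `get(url)` (e.g. requests.get), retrying on any exception.

    Returns the response on success; re-raises the last error after
    `retries` failed attempts.
    """
    last_error = None
    for attempt in range(retries):
        try:
            return get(url)
        except Exception as e:
            last_error = e
            time.sleep(delay * (attempt + 1))  # simple linear backoff
    raise last_error

# Demonstrate with a stub instead of requests.get:
calls = []
def flaky_get(url):
    calls.append(url)
    if len(calls) < 3:
        raise ConnectionError('server down')
    return 'response for ' + url

resp = fetch_with_retries(flaky_get, 'https://example.com', retries=5, delay=0)
print(resp)  # the stub fails twice, then succeeds on the third call
```

The same wrapper works unchanged with `requests.get`, since any exception (connection errors, timeouts) triggers a retry.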
# Create your first deep learning neural network

## Introduction

This is the first of our [beginner tutorial series](https://github.com/awslabs/djl/tree/master/jupyter/tutorial) that will take you through creating, training, and running inference on a neural network. In this tutorial, you will learn how to use the built-in `Block` to create your first neural network - a Multilayer Perceptron.

## Neural Network

A neural network is a black box function. Instead of coding this function yourself, you provide many sample input/output pairs for this function. Then, we try to train the network to learn how to match the behavior of the function given only these input/output pairs. A better model with more data can more accurately match the function.

## Multilayer Perceptron

A Multilayer Perceptron (MLP) is one of the simplest deep learning networks. The MLP has an input layer which contains your input data, an output layer which is produced by the network and contains the data the network is supposed to be learning, and some number of hidden layers. The example below contains an input of size 3, a single hidden layer of size 3, and an output of size 2. The number and sizes of the hidden layers are determined through experimentation, but more layers enable the network to represent more complicated functions.

Between each pair of layers is a linear operation (sometimes called a FullyConnected operation because each number in the input is connected to each number in the output by a matrix multiplication). Not pictured, there is also a non-linear activation function after each linear operation. For more information, see [Multilayer Perceptron](https://en.wikipedia.org/wiki/Multilayer_perceptron).

![MLP Image](https://upload.wikimedia.org/wikipedia/commons/c/c2/MultiLayerNeuralNetworkBigger_english.png)

## Step 1: Setup development environment

### Installation

This tutorial requires the installation of the Java Jupyter Kernel. To install the kernel, see the [Jupyter README](https://github.com/awslabs/djl/blob/master/jupyter/README.md).

```
// Add the snapshot repository to get the DJL snapshot artifacts
// %mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/

// Add the maven dependencies
%maven ai.djl:api:0.6.0
%maven org.slf4j:slf4j-api:1.7.26
%maven org.slf4j:slf4j-simple:1.7.26

// See https://github.com/awslabs/djl/blob/master/mxnet/mxnet-engine/README.md
// for more MXNet library selection options
%maven ai.djl.mxnet:mxnet-native-auto:1.7.0-b

import ai.djl.*;
import ai.djl.nn.*;
import ai.djl.nn.core.*;
import ai.djl.training.*;
```

## Step 2: Determine your input and output size

The MLP model uses a one dimensional vector as the input and the output. You should determine the appropriate size of this vector based on your input data and what you will use the output of the model for.

In a later tutorial, we will use this model for Mnist image classification. Our input vector will have size `28x28` because the input images have a height and width of 28 and it takes only a single number to represent each pixel. For a color image, you would need to further multiply this by `3` for the RGB channels. Our output vector has size `10` because there are `10` possible classes for each image.

```
long inputSize = 28*28;
long outputSize = 10;
```

## Step 3: Create a **SequentialBlock**

### NDArray

The core data type used for working with Deep Learning is the [NDArray](https://javadoc.io/static/ai.djl/api/0.6.0/index.html?ai/djl/ndarray/NDArray.html). An NDArray represents a multidimensional, fixed-size homogeneous array. It has very similar behavior to the Numpy python package with the addition of efficient computing. We also have a helper class, the [NDList](https://javadoc.io/static/ai.djl/api/0.6.0/index.html?ai/djl/ndarray/NDList.html) which is a list of NDArrays which can have different sizes and data types.

### Block API

In DJL, [Blocks](https://javadoc.io/static/ai.djl/api/0.6.0/index.html?ai/djl/nn/Block.html) serve a purpose similar to functions that convert an input `NDList` to an output `NDList`. They can represent single operations, parts of a neural network, and even the whole neural network. What makes blocks special is that they contain a number of parameters that are used in their function and are trained during deep learning. As these parameters are trained, the function represented by the blocks gets more and more accurate.

When building these block functions, the easiest way is to use composition. Similar to how functions are built by calling other functions, blocks can be built by combining other blocks. We refer to the containing block as the parent and the sub-blocks as the children.

We provide several helpers to make it easy to build common block composition structures. For the MLP we will use the [SequentialBlock](https://javadoc.io/static/ai.djl/api/0.6.0/index.html?ai/djl/nn/SequentialBlock.html), a container block whose children form a chain of blocks where each child block feeds its output to the next child block in a sequence.

```
SequentialBlock block = new SequentialBlock();
```

## Step 4: Add blocks to SequentialBlock

An MLP is organized into several layers. Each layer is composed of a [Linear Block](https://javadoc.io/static/ai.djl/api/0.6.0/index.html?ai/djl/nn/core/Linear.html) and a non-linear activation function. If we just had two linear blocks in a row, it would be the same as a combined linear block ($f(x) = W_2(W_1x) = (W_2W_1)x = W_{combined}x$). An activation is used to intersperse between the linear blocks to allow them to represent non-linear functions. We will use the popular [ReLU](https://javadoc.io/static/ai.djl/api/0.6.0/ai/djl/nn/Activation.html#reluBlock--) as our activation function.

The first layer and last layers have fixed sizes depending on your desired input and output size. However, you are free to choose the number and sizes of the middle layers in the network. We will create a smaller MLP with two middle layers that gradually decrease the size. Typically, you would experiment with different values to see what works the best on your data set.

```
block.add(Blocks.batchFlattenBlock(inputSize));
block.add(Linear.builder().setOutChannels(128).build());
block.add(Activation::relu);
block.add(Linear.builder().setOutChannels(64).build());
block.add(Activation::relu);
block.add(Linear.builder().setOutChannels(outputSize).build());

block
```

## Summary

Now that you've successfully created your first neural network, you can use this network to train your model.

Next chapter: [Train your first model](02_train_your_first_model.ipynb)

You can find the complete source code for this tutorial in the [model zoo](https://github.com/awslabs/djl/blob/master/model-zoo/src/main/java/ai/djl/basicmodelzoo/basic/Mlp.java).
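The claim above — that two stacked linear layers with no activation collapse into a single linear layer — can be checked numerically. The following is a small illustrative sketch in Python (not DJL code), using plain lists so it needs no libraries:

```python
# Tiny matrix helpers (row-major lists of lists)
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

W1 = [[1.0, 2.0], [3.0, 4.0]]   # first linear layer
W2 = [[0.5, -1.0], [2.0, 0.0]]  # second linear layer
x = [1.0, -2.0]

# Applying the layers one after the other...
stacked = matvec(W2, matvec(W1, x))
# ...gives exactly the single combined layer (W2 @ W1) applied once
combined = matvec(matmul(W2, W1), x)

print(stacked, combined)  # identical: without an activation, depth adds no expressiveness
```

This is precisely why the ReLU blocks are interspersed between the `Linear` blocks in the MLP above.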
```
import pandas as pd
import numpy as np

df = pd.read_csv("./data/illumina/tumour_microenvironment.csv")
df.head()

pd.notnull(np.nan)
0.0 == 0

overexpressed = {"1": pd.DataFrame([2], columns=["a"]), "3": pd.DataFrame()}
underexpressed = {"1": pd.DataFrame([5], columns=["b"]), "2": pd.DataFrame()}

# Merge the two dicts into one nested dict keyed by id
result = {}
for key, value in overexpressed.items():
    result[key] = {"overexpressed": value}
for key, value in underexpressed.items():
    if key in result:
        result[key]["underexpressed"] = value
    else:
        result[key] = {"underexpressed": value}

result
result["1"]

from collections import defaultdict

d1 = {'boy': 1, 'girl': 2}
d2 = {'boy': 'tall', 'girl': 'short'}

d3 = defaultdict(list)
for d in (d1, d2):
    print(d.items())
    for key, value in d.items():
        d3[key].append(value)

pd.DataFrame(
    [[1, 1, 3.0, 1, 1]],
    columns=[
        "category",
        "index",
        "log2_fold_change",
        "id",
        "original_index",
    ],
)

d = [
    {'variant_type': 'clinvar', 'variant_id': 'RCV000120989.2', 'variant_review': 1, 'variant_significance': 'benign', 'variant_phenotype': '', 'variant_color': 'clinvar clinvar-green'},
    {'variant_type': 'clinvar', 'variant_id': 'RCV000372215.1', 'variant_review': 3, 'variant_significance': 'benign', 'variant_phenotype': ['Fanconi anemia'], 'variant_color': ''},
    {'variant_type': 'clinvar', 'variant_id': 'RCV000372215.1', 'variant_review': 2, 'variant_significance': 'likely benign', 'variant_phenotype': ['Fanconi anemia'], 'variant_color': ''},
    {'variant_type': 'clinvar', 'variant_id': 'RCV000372215.1', 'variant_review': 2, 'variant_significance': 'uncertain significance', 'variant_phenotype': ['Fanconi anemia'], 'variant_color': ''},
    {'variant_type': 'clinvar', 'variant_id': 'RCV000372215.1', 'variant_review': 2, 'variant_significance': 'benign', 'variant_phenotype': ['Fanconi anemia'], 'variant_color': ''},
    {'variant_type': 'clinvar', 'variant_id': 'RCV000372215.1', 'variant_review': 2, 'variant_significance': '-', 'variant_phenotype': ['Fanconi anemia'], 'variant_color': ''},
    {'variant_type': 'clinvar', 'variant_id': 'RCV000372215.1', 'variant_review': 2, 'variant_significance': 'pathogenic', 'variant_phenotype': ['Fanconi anemia'], 'variant_color': ''},
    {'variant_type': 'clinvar', 'variant_id': 'RCV000372215.1', 'variant_review': 2, 'variant_significance': 'likely pathogenic', 'variant_phenotype': ['Fanconi anemia'], 'variant_color': ''},
]
d

index_map = {
    'pathogenic': 4,
    'likely pathogenic': 3,
    'uncertain significance': 2,
    'likely benign': 1,
    'benign': 0,
    '-': 0,
}

# Sort by review status, then by significance rank, descending
d = sorted(d, key=lambda k: (k['variant_review'], index_map[k['variant_significance']]), reverse=True)
d
```
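The final `sorted` call above sorts by a tuple key — first by review status, then by the mapped significance rank. A minimal self-contained sketch of the same pattern (the records here are made up for illustration):

```python
# Map a categorical label to a sortable rank
rank = {'pathogenic': 2, 'uncertain significance': 1, 'benign': 0}

records = [
    {'review': 2, 'significance': 'benign'},
    {'review': 3, 'significance': 'benign'},
    {'review': 2, 'significance': 'pathogenic'},
]

# Sort descending: higher review first, ties broken by significance rank
ordered = sorted(
    records,
    key=lambda r: (r['review'], rank[r['significance']]),
    reverse=True,
)

print([(r['review'], r['significance']) for r in ordered])
# → [(3, 'benign'), (2, 'pathogenic'), (2, 'benign')]
```

Because tuples compare element by element, the second key only matters when the first is tied.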
## Errors and Exceptions

###### Continue...

| # | Exception Name | Description |
|---|----------------|-------------|
| 1 | **Exception** | Base class for all exceptions |
| 2 | **StopIteration** | Raised when the next() method of an iterator does not point to any object. |
| 3 | **SystemExit** | Raised by the sys.exit() function. |
| 4 | **StandardError** | Base class for all built-in exceptions except StopIteration and SystemExit. |
| 5 | **ArithmeticError** | Base class for all errors that occur for numeric calculation. |
| 6 | **OverflowError** | Raised when a calculation exceeds the maximum limit for a numeric type. |
| 7 | **FloatingPointError** | Raised when a floating point calculation fails. |
| 8 | **ZeroDivisionError** | Raised when division or modulo by zero takes place for all numeric types. |
| 9 | **AssertionError** | Raised in case of failure of the assert statement. |
| 10 | **AttributeError** | Raised in case of failure of attribute reference or assignment. |
| 11 | **EOFError** | Raised when there is no input from either the raw_input() or input() function and the end of file is reached. |
| 12 | **ImportError** | Raised when an import statement fails. |
| 13 | **KeyboardInterrupt** | Raised when the user interrupts program execution, usually by pressing Ctrl+C. |
| 14 | **LookupError** | Base class for all lookup errors. |
| 15 | **IndexError** | Raised when an index is not found in a sequence. |
| 16 | **KeyError** | Raised when the specified key is not found in the dictionary. |
| 17 | **NameError** | Raised when an identifier is not found in the local or global namespace. |
| 18 | **UnboundLocalError** | Raised when trying to access a local variable in a function or method but no value has been assigned to it. |
| 19 | **EnvironmentError** | Base class for all exceptions that occur outside the Python environment. |
| 20 | **IOError** | Raised when an input/output operation fails, such as the print statement or the open() function when trying to open a file that does not exist. |
| 21 | **OSError** | Raised for operating system-related errors. |
| 22 | **SyntaxError** | Raised when there is an error in Python syntax. |
| 23 | **IndentationError** | Raised when indentation is not specified properly. |
| 24 | **SystemError** | Raised when the interpreter finds an internal problem, but when this error is encountered the Python interpreter does not exit. |
| 25 | **SystemExit** | Raised when the Python interpreter is quit by using the sys.exit() function. If not handled in the code, causes the interpreter to exit. |
| 26 | **TypeError** | Raised when an operation or function is attempted that is invalid for the specified data type. |
| 27 | **ValueError** | Raised when the built-in function for a data type has the valid type of arguments, but the arguments have invalid values specified. |
| 28 | **RuntimeError** | Raised when a generated error does not fall into any category. |
| 29 | **NotImplementedError** | Raised when an abstract method that needs to be implemented in an inherited class is not actually implemented. |

### Exercise...

```
try:
    your code
except (ErrClass1, ErrClass2, ...):
    execute code if error generated
else:
    execute code if error not generated
finally:
    always run this block
```

```
print("line1")
print("line2")
try:
    print(7/0)
except ZeroDivisionError:
    print("Can't divide any value by zero")
print("line3")
print("line4")
print("line5")

try:
    open("abc.txt")
except:
    print("File not available")

try:
    open("abc.txt")
except FileNotFoundError:
    print("File not available")

try:
    a = ['a', 'b', 'c']
    print(a[4])
except IndexError:
    print("This index is not available")

try:
    a = ['a', 'b']
    print(a[5])
    open('abc.txt')
    print(7/0)
except (IndexError, FileNotFoundError, ZeroDivisionError):
    print("Try to solve this problem")

try:
    a = ['a', 'b']
    print(a[5])
except IndexError:
    print("Index out of range")

try:
    open('abc.txt')
    print(7/0)
except FileNotFoundError:
    print("File not found")

try:
    print(7/0)
except ZeroDivisionError:
    print("Can't divide any value by zero")

try:
    fh = open("testfile", "r")
    fh.write("This is my test file for exception handling!!")
except IOError:
    print("Error: can't find file or read data")
else:
    print("Written content in the file successfully")

try:
    fh = open("testfile", "w")
    fh.write("This is my test file for exception handling!!")
finally:
    print("Error: can't find file or read data")

try:
    fh = open("testfile", "w")
    try:
        fh.write("This is my test file for exception handling!!")
    finally:
        print("Going to close the file")
        fh.close()
except IOError:
    print("Error: can't find file or read data")

# Define a function here.
def temp_convert(var):
    try:
        return int(var)
    except ValueError as argument:
        print("The argument does not contain numbers\n", argument)

# Call above function here.
temp_convert("xyz")

class Student():
    def __init__(self, age, name):
        if age < 18 or age > 60:
            raise Exception("Age should be in 18 to 60")
        self.age = age
        self.name = name

s1 = Student(20, "Ali")
print(s1.age, s1.name)

s1 = Student(89, "Hamza")  # raises Exception
print(s1.age, s1.name)

# A custom exception makes the failure easier to catch specifically
class StudentAgeError(Exception):
    pass

class Student():
    def __init__(self, age, name):
        if age < 18 or age > 60:
            raise StudentAgeError("Age should be in 18 to 60")
        self.age = age
        self.name = name

s2 = Student(89, "Hamza")  # raises StudentAgeError
```

An interactive calculator

Exercise

You're going to write an interactive calculator! User input is assumed to be a formula that consists of a number, an operator (at least + and -), and another number, separated by white space (e.g. 1 + 1).

Split user input using str.split(), and check whether the resulting list is valid:

- If the input does not consist of 3 elements, raise a FormulaError, which is a custom Exception.
- Try to convert the first and third input to a float (like so: float_value = float(str_value)). Catch any ValueError that occurs, and instead raise a FormulaError.
- If the second input is not '+' or '-', again raise a FormulaError.

If the input is valid, perform the calculation and print out the result. The user is then prompted to provide new input, and so on, until the user enters quit. An interaction could look like this:

```
class FormulaError(Exception):
    pass

def parse_input(user_input):
    input_list = user_input.split()
    if len(input_list) != 3:
        raise FormulaError('Input does not consist of three elements')
    n1, op, n2 = input_list
    try:
        n1 = float(n1)
        n2 = float(n2)
    except ValueError:
        raise FormulaError('The first and third input value must be numbers')
    return n1, op, n2

def calculate(n1, op, n2):
    if op == '+':
        return n1 + n2
    if op == '-':
        return n1 - n2
    if op == '*':
        return n1 * n2
    if op == '/':
        try:
            return n1 / n2
        except ZeroDivisionError:
            return "Can't divide any value by zero"
    # No branch matched: the operator is invalid. (The original wrapped the
    # if-chain in try/except, which never fires here, since an unknown
    # operator raises nothing and the function silently returned None.)
    raise FormulaError('{0} is not a valid operator'.format(op))

while True:
    user_input = input('>>> ')
    if user_input == 'quit':
        break
    n1, op, n2 = parse_input(user_input)
    result = calculate(n1, op, n2)
    print(result)
```
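Rows 14–16 of the table above note that `LookupError` is the base class of both `IndexError` and `KeyError`, so catching the base class covers failed lookups in sequences and dictionaries alike. A small sketch:

```python
def safe_lookup(container, key):
    """Return container[key], or None if the index/key is missing.

    Catching LookupError handles both IndexError (sequences)
    and KeyError (dictionaries), since both inherit from it.
    """
    try:
        return container[key]
    except LookupError:
        return None

print(safe_lookup([10, 20, 30], 5))   # IndexError caught → None
print(safe_lookup({'a': 1}, 'b'))     # KeyError caught → None
print(safe_lookup({'a': 1}, 'a'))     # → 1
```

The same idea explains why `except Exception` in the earlier cells catches nearly everything: most built-in errors ultimately inherit from `Exception`.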
# How to simulate multiple virus strains

In this tutorial, we are going to simulate the spread of Covid-19 with two virus strains.

```
%matplotlib inline

import warnings

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

import sid
from sid.config import INDEX_NAMES

warnings.filterwarnings("ignore")
```

## Preparation

For the simulation we need to prepare several objects which are identical to the ones from the general tutorial on the [simulation](how_to_simulate.ipynb).

```
available_ages = [
    "0-9",
    "10-19",
    "20-29",
    "30-39",
    "40-49",
    "50-59",
    "60-69",
    "70-79",
    "80-100",
]

ages = np.random.choice(available_ages, size=10_000)
regions = np.random.choice(["North", "South"], size=10_000)
initial_states = pd.DataFrame({"age_group": ages, "region": regions}).astype("category")
initial_states.head(5)

def meet_distant(states, params, seed):
    possible_nr_contacts = np.arange(10)
    contacts = np.random.choice(possible_nr_contacts, size=len(states))
    return pd.Series(contacts, index=states.index)

def meet_close(states, params, seed):
    possible_nr_contacts = np.arange(5)
    contacts = np.random.choice(possible_nr_contacts, size=len(states))
    return pd.Series(contacts, index=states.index)

assort_by = ["age_group", "region"]

contact_models = {
    "distant": {"model": meet_distant, "assort_by": assort_by, "is_recurrent": False},
    "close": {"model": meet_close, "assort_by": assort_by, "is_recurrent": False},
}

epidemiological_parameters = pd.read_csv("infection_probs.csv", index_col=INDEX_NAMES)
epidemiological_parameters

assort_probs = pd.read_csv("assort_by_params.csv", index_col=INDEX_NAMES)
assort_probs

disease_params = sid.load_epidemiological_parameters()
disease_params.head(6).round(2)

immunity_params = pd.read_csv("immunity_params.csv", index_col=INDEX_NAMES)
immunity_params

params = pd.concat(
    [disease_params, epidemiological_parameters, assort_probs, immunity_params]
)
```

## Additional objects to simulate multiple virus strains

To implement multiple virus strains, we have to make the following extensions to the model.

1. Add a multiplier for the contagiousness of each virus to the parameters.
2. Add a multiplier for the immunity resistance factor for each virus to the parameters.
3. Prepare a DataFrame for the initial conditions.

> Here we set the immunity resistance factor to 0, which (de facto) removes its influence on the
> simulation. A detailed discussion on how to use this parameter is given in the tutorial on
> [immunity](how_to_model_immunity.ipynb).

```
for virus, cf in [("base", 1), ("b117", 1.3)]:
    params.loc[("virus_strain", virus, "contagiousness_factor"), "value"] = cf
    params.loc[("virus_strain", virus, "immunity_resistance_factor"), "value"] = 0
```

For the initial conditions, we assume a two-day burn-in period. On the first day, 50 people are infected with the base virus; on the second day, of another 50 people, one half gets the old variant and the other half the new variant. Each column in the DataFrame is a categorical. Infected individuals have a code for the variant, all others have NaNs.

```
infected_first_day = set(np.random.choice(10_000, size=50, replace=False))

first_day = pd.Series([pd.NA] * 10_000)
first_day.iloc[list(infected_first_day)] = "base"

infected_second_day_old_variant = set(
    np.random.choice(
        list(set(range(10_000)) - infected_first_day), size=25, replace=False
    )
)
infected_second_day_new_variant = set(
    np.random.choice(
        list(set(range(10_000)) - infected_first_day - infected_second_day_old_variant),
        size=25,
        replace=False,
    )
)

second_day = pd.Series([pd.NA] * 10_000)
second_day.iloc[list(infected_second_day_old_variant)] = "base"
second_day.iloc[list(infected_second_day_new_variant)] = "b117"

initial_infections = pd.DataFrame(
    {
        pd.Timestamp("2020-02-25"): pd.Categorical(
            first_day, categories=["base", "b117"]
        ),
        pd.Timestamp("2020-02-26"): pd.Categorical(
            second_day, categories=["base", "b117"]
        ),
    }
)

initial_conditions = {"initial_infections": initial_infections, "initial_immunity": 50}
```

## Run the simulation

We are going to simulate this population for 365 periods.

```
simulate = sid.get_simulate_func(
    initial_states=initial_states,
    contact_models=contact_models,
    params=params,
    initial_conditions=initial_conditions,
    duration={"start": "2020-02-27", "periods": 365},
    virus_strains=["base", "b117"],
    seed=0,
)

result = simulate(params=params)

result["time_series"].head()
result["last_states"].head()
```

The return of `simulate` is a dictionary containing the time series data and the last states as a [Dask DataFrame](https://docs.dask.org/en/latest/dataframe.html). This allows the data to be loaded lazily. The ``last_states`` can be used to resume the simulation. We will inspect the ``time_series`` data. If the data fits into your working memory, do the following to convert it to a pandas DataFrame.

```
df = result["time_series"].compute()
```

Let us take a look at various statistics of the sample.

```
fig, axs = plt.subplots(3, 2, figsize=(12, 8))
fig.subplots_adjust(bottom=0.15, wspace=0.2, hspace=0.4)
axs = axs.flatten()

df.resample("D", on="date")["ever_infected"].mean().plot(ax=axs[0])
df.resample("D", on="date")["infectious"].mean().plot(ax=axs[1])
df.resample("D", on="date")["dead"].sum().plot(ax=axs[2])

r_zero = sid.statistics.calculate_r_zero(df, window_length=7)
r_zero.plot(ax=axs[3])

r_effective = sid.statistics.calculate_r_effective(df, window_length=7)
r_effective.plot(ax=axs[4])

df.query("newly_infected").groupby([pd.Grouper(key="date", freq="D"), "virus_strain"])[
    "newly_infected"
].count().unstack().plot(ax=axs[5])

for ax in axs:
    ax.set_xlabel("")
    ax.spines["right"].set_visible(False)
    ax.spines["top"].set_visible(False)

axs[0].set_title("Share of Infected People")
axs[1].set_title("Share of Infectious People in the Population")
axs[2].set_title("Total Number of Deaths")
axs[3].set_title("$R_0$ (Basic Reproduction Number)")
axs[4].set_title("$R_t$ (Effective Reproduction Number)")
axs[5].set_title("Distribution of virus strains among newly infected")

plt.show()
```
<a href="https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/get-started/try-apache-beam-py.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Try Apache Beam - Python In this notebook, we set up your development environment and work through a simple example using the [DirectRunner](https://beam.apache.org/documentation/runners/direct/). You can explore other runners with the [Beam Capatibility Matrix](https://beam.apache.org/documentation/runners/capability-matrix/). To navigate through different sections, use the table of contents. From **View** drop-down list, select **Table of contents**. To run a code cell, you can click the **Run cell** button at the top left of the cell, or by select it and press **`Shift+Enter`**. Try modifying a code cell and re-running it to see what happens. To learn more about Colab, see [Welcome to Colaboratory!](https://colab.sandbox.google.com/notebooks/welcome.ipynb). # Setup First, you need to set up your environment, which includes installing `apache-beam` and downloading a text file from Cloud Storage to your local file system. We are using this file to test your pipeline. ``` # Run and print a shell command. def run(cmd): print('>> {}'.format(cmd)) !{cmd} print('') # Install apache-beam. run('pip install --quiet apache-beam') # Copy the input file into the local file system. run('mkdir -p data') run('gsutil cp gs://dataflow-samples/shakespeare/kinglear.txt data/') ``` # Minimal word count The following example is the "Hello, World!" of data processing, a basic implementation of word count. We're creating a simple data processing pipeline that reads a text file and counts the number of occurrences of every word. There are many scenarios where all the data does not fit in memory. Notice that the outputs of the pipeline go to the file system, which allows for large processing jobs in distributed environments. 
``` import apache_beam as beam import re inputs_pattern = 'data/*' outputs_prefix = 'outputs/part' # Running locally in the DirectRunner. with beam.Pipeline() as pipeline: ( pipeline | 'Read lines' >> beam.io.ReadFromText(inputs_pattern) | 'Find words' >> beam.FlatMap(lambda line: re.findall(r"[a-zA-Z']+", line)) | 'Pair words with 1' >> beam.Map(lambda word: (word, 1)) | 'Group and sum' >> beam.CombinePerKey(sum) | 'Format results' >> beam.Map(lambda word_count: str(word_count)) | 'Write results' >> beam.io.WriteToText(outputs_prefix) ) # Sample the first 20 results, remember there are no ordering guarantees. run('head -n 20 {}-00000-of-*'.format(outputs_prefix)) ``` # Word count with comments Below is mostly the same code as above, but with comments explaining every line in more detail. ``` import apache_beam as beam import re inputs_pattern = 'data/*' outputs_prefix = 'outputs/part' # Running locally in the DirectRunner. with beam.Pipeline() as pipeline: # Store the word counts in a PCollection. # Each element is a tuple of (word, count) of types (str, int). word_counts = ( # The input PCollection is an empty pipeline. pipeline # Read lines from a text file. | 'Read lines' >> beam.io.ReadFromText(inputs_pattern) # Element type: str - text line # Use a regular expression to iterate over all words in the line. # FlatMap will yield an element for every element in an iterable. | 'Find words' >> beam.FlatMap(lambda line: re.findall(r"[a-zA-Z']+", line)) # Element type: str - word # Create key-value pairs where the value is 1, this way we can group by # the same word while adding those 1s and get the counts for every word. | 'Pair words with 1' >> beam.Map(lambda word: (word, 1)) # Element type: (str, int) - key: word, value: 1 # Group by key while combining the value using the sum() function. | 'Group and sum' >> beam.CombinePerKey(sum) # Element type: (str, int) - key: word, value: counts ) # We can process a PCollection through other pipelines too. 
( # The input PCollection is the word_counts created from the previous step. word_counts # Format the results into a string so we can write them to a file. | 'Format results' >> beam.Map(lambda word_count: str(word_count)) # Element type: str - text line # Finally, write the results to a file. | 'Write results' >> beam.io.WriteToText(outputs_prefix) ) # Sample the first 20 results, remember there are no ordering guarantees. run('head -n 20 {}-00000-of-*'.format(outputs_prefix)) ```
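For intuition, the transform chain above maps onto ordinary Python operations; here is a rough stand-in without Beam (illustrative only: no PCollections, no parallelism, no distributed execution):

```python
import re
from collections import Counter

lines = ["the quick brown fox", "the lazy dog", "the fox"]

# 'Find words' ~ FlatMap: one output element per regex match in each line.
words = [w for line in lines for w in re.findall(r"[a-zA-Z']+", line)]

# 'Pair words with 1' + 'Group and sum' ~ Map + CombinePerKey(sum).
word_counts = Counter(words)

print(word_counts["the"])  # 3
```

Beam's value is that the same logical chain runs unchanged on a distributed runner when the data no longer fits on one machine.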
github_jupyter
Lambda School Data Science *Unit 2, Sprint 3, Module 4* --- # Model Interpretation 2 You will use your portfolio project dataset for all assignments this sprint. ## Assignment Complete these tasks for your project, and document your work. - [ ] Continue to iterate on your project: data cleaning, exploratory visualization, feature engineering, modeling. - [ ] Make a Shapley force plot to explain at least 1 individual prediction. - [ ] Share at least 1 visualization (of any type) on Slack. But, if you aren't ready to make a Shapley force plot with your own dataset today, that's okay. You can practice this objective with another dataset instead. You may choose any dataset you've worked with previously. ## Stretch Goals - [ ] Make Shapley force plots to explain at least 4 individual predictions. - If your project is Binary Classification, you can do a True Positive, True Negative, False Positive, False Negative. - If your project is Regression, you can do a high prediction with low error, a low prediction with low error, a high prediction with high error, and a low prediction with high error. - [ ] Use Shapley values to display verbal explanations of individual predictions. - [ ] Use the SHAP library for other visualization types. The [SHAP repo](https://github.com/slundberg/shap) has examples for many visualization types, including: - Force Plot, individual predictions - Force Plot, multiple predictions - Dependence Plot - Summary Plot - Summary Plot, Bar - Interaction Values - Decision Plots We just did the first type during the lesson. The [Kaggle microcourse](https://www.kaggle.com/dansbecker/advanced-uses-of-shap-values) shows two more. Experiment and see what you can learn! 
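For the binary-classification stretch goal, one simple way to locate a True Positive, True Negative, False Positive and False Negative to explain is an index scan over the predictions; a sketch assuming arrays `y_true` and `y_pred` (the names and values here are made up for illustration):

```python
import numpy as np

y_true = np.array([1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0])

# First index of each confusion-matrix cell; use these rows for force plots.
tp = np.flatnonzero((y_true == 1) & (y_pred == 1))[0]
tn = np.flatnonzero((y_true == 0) & (y_pred == 0))[0]
fp = np.flatnonzero((y_true == 0) & (y_pred == 1))[0]
fn = np.flatnonzero((y_true == 1) & (y_pred == 0))[0]
print(tp, tn, fp, fn)  # 0 1 3 2
```

With your own model, replace the toy arrays with `y_test` and `model.predict(X_test)` and pass the selected rows to the SHAP explainer.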
## Links - [Kaggle / Dan Becker: Machine Learning Explainability — SHAP Values](https://www.kaggle.com/learn/machine-learning-explainability) - [Christoph Molnar: Interpretable Machine Learning — Shapley Values](https://christophm.github.io/interpretable-ml-book/shapley.html) - [SHAP repo](https://github.com/slundberg/shap) & [docs](https://shap.readthedocs.io/en/latest/) ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns sns.set_style('whitegrid') from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestRegressor from sklearn.ensemble import RandomForestClassifier from sklearn.pipeline import make_pipeline from sklearn.impute import SimpleImputer from sklearn.preprocessing import StandardScaler import category_encoders as ce from xgboost import XGBClassifier from xgboost import XGBRegressor from sklearn.metrics import accuracy_score import eli5 from eli5.sklearn import PermutationImportance from pdpbox.pdp import pdp_isolate, pdp_plot from sklearn.metrics import roc_auc_score %%capture import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/' !pip install category_encoders==2.* !pip install eli5 !pip install pdpbox !pip install shap # If you're working locally: else: DATA_PATH = '../data/' df = sns.load_dataset('titanic').drop(columns=['alive']) target = 'survived' features = df.drop(['survived', 'age', 'deck', 'embarked', 'embark_town'], axis = 1).columns X_train, X_test, y_train, y_test = train_test_split(df[features], df[target]) X_train.shape, X_test.shape, y_train.shape, y_test.shape transformers = make_pipeline(ce.OrdinalEncoder(), SimpleImputer()) X_train = transformers.fit_transform(X_train) X_test = transformers.transform(X_test) eval_set = [(X_train, y_train), (X_test, y_test)] # model = XGBRegressor( # n_estimators=1000, # max_depth=10, # 
objective='reg:squarederror', # n_jobs=-1, # ) # model.fit(X_train, y_train, eval_set=eval_set, # eval_metric='mae', early_stopping_rounds=20) model = XGBClassifier(n_estimators=1000, n_jobs=-1) model.fit(X_train, y_train, eval_set=eval_set, eval_metric='auc', early_stopping_rounds=10) permuter = PermutationImportance(model, scoring='neg_mean_absolute_error', n_iter=3) permuter.fit(X_test, y_test) feature_names = df[features].columns.tolist() eli5.show_weights(permuter, top=None, feature_names = feature_names) #X_test_processed = processor.transform(X_test) class_index = 1 y_pred_proba = model.predict_proba(X_test)[:, class_index] print(f'Test ROC AUC for class {class_index}:') print(roc_auc_score(y_test, y_pred_proba)) # Ranges from 0-1, higher is better X_train, X_test, y_train, y_test = train_test_split(df[features], df[target]) X_train.shape, X_test.shape, y_train.shape, y_test.shape transformers = make_pipeline( ce.OrdinalEncoder(), SimpleImputer() ) X_train = transformers.fit_transform(X_train) X_test = transformers.transform(X_test) X_test = pd.DataFrame(X_test, columns = df[features].columns) rf = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1) rf.fit(X_train, y_train) print('Validation Accuracy', rf.score(X_test, y_test)) X_test.columns feature = 'pclass' isolated = pdp_isolate( model=rf, dataset=X_test, model_features=X_test.columns, feature=feature ) pdp_plot(isolated, feature_name=feature); X_test['pclass'].value_counts() feature = 'who' isolated = pdp_isolate( model=rf, dataset=X_test, model_features=X_test.columns, feature=feature ) pdp_plot(isolated, feature_name=feature); feature = 'pclass' encoder = transformers.named_steps['ordinalencoder'] for item in encoder.mapping: if item['col'] == feature: feature_mapping = item['mapping'] feature_mapping = feature_mapping[feature_mapping.index.dropna()] category_names = feature_mapping.index.tolist() category_codes = feature_mapping.values.tolist() isolated = pdp_isolate( model=rf, 
dataset=X_test, model_features=X_test.columns, feature=feature, cust_grid_points=category_codes ) fig, axes = pdp_plot(isolated, feature_name=feature, plot_lines=True, frac_to_plot=0.01) for ax in axes['pdp_ax']: ax.set_xticks(category_codes) ax.set_xticklabels(category_names) ```
github_jupyter
``` import sys sys.path.append('../../') import django,os,glob os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dva.settings") django.setup() import tensorflow as tf from dvalib import detector class_names = {1: {'id': 1, 'name': u'person'}, 2: {'id': 2, 'name': u'bicycle'}, 3: {'id': 3, 'name': u'car'}, 4: {'id': 4, 'name': u'motorcycle'}, 5: {'id': 5, 'name': u'airplane'}, 6: {'id': 6, 'name': u'bus'}, 7: {'id': 7, 'name': u'train'}, 8: {'id': 8, 'name': u'truck'}, 9: {'id': 9, 'name': u'boat'}, 10: {'id': 10, 'name': u'traffic light'}, 11: {'id': 11, 'name': u'fire hydrant'}, 13: {'id': 13, 'name': u'stop sign'}, 14: {'id': 14, 'name': u'parking meter'}, 15: {'id': 15, 'name': u'bench'}, 16: {'id': 16, 'name': u'bird'}, 17: {'id': 17, 'name': u'cat'}, 18: {'id': 18, 'name': u'dog'}, 19: {'id': 19, 'name': u'horse'}, 20: {'id': 20, 'name': u'sheep'}, 21: {'id': 21, 'name': u'cow'}, 22: {'id': 22, 'name': u'elephant'}, 23: {'id': 23, 'name': u'bear'}, 24: {'id': 24, 'name': u'zebra'}, 25: {'id': 25, 'name': u'giraffe'}, 27: {'id': 27, 'name': u'backpack'}, 28: {'id': 28, 'name': u'umbrella'}, 31: {'id': 31, 'name': u'handbag'}, 32: {'id': 32, 'name': u'tie'}, 33: {'id': 33, 'name': u'suitcase'}, 34: {'id': 34, 'name': u'frisbee'}, 35: {'id': 35, 'name': u'skis'}, 36: {'id': 36, 'name': u'snowboard'}, 37: {'id': 37, 'name': u'sports ball'}, 38: {'id': 38, 'name': u'kite'}, 39: {'id': 39, 'name': u'baseball bat'}, 40: {'id': 40, 'name': u'baseball glove'}, 41: {'id': 41, 'name': u'skateboard'}, 42: {'id': 42, 'name': u'surfboard'}, 43: {'id': 43, 'name': u'tennis racket'}, 44: {'id': 44, 'name': u'bottle'}, 46: {'id': 46, 'name': u'wine glass'}, 47: {'id': 47, 'name': u'cup'}, 48: {'id': 48, 'name': u'fork'}, 49: {'id': 49, 'name': u'knife'}, 50: {'id': 50, 'name': u'spoon'}, 51: {'id': 51, 'name': u'bowl'}, 52: {'id': 52, 'name': u'banana'}, 53: {'id': 53, 'name': u'apple'}, 54: {'id': 54, 'name': u'sandwich'}, 55: {'id': 55, 'name': u'orange'}, 56: {'id': 56, 
'name': u'broccoli'}, 57: {'id': 57, 'name': u'carrot'}, 58: {'id': 58, 'name': u'hot dog'}, 59: {'id': 59, 'name': u'pizza'}, 60: {'id': 60, 'name': u'donut'}, 61: {'id': 61, 'name': u'cake'}, 62: {'id': 62, 'name': u'chair'}, 63: {'id': 63, 'name': u'couch'}, 64: {'id': 64, 'name': u'potted plant'}, 65: {'id': 65, 'name': u'bed'}, 67: {'id': 67, 'name': u'dining table'}, 70: {'id': 70, 'name': u'toilet'}, 72: {'id': 72, 'name': u'tv'}, 73: {'id': 73, 'name': u'laptop'}, 74: {'id': 74, 'name': u'mouse'}, 75: {'id': 75, 'name': u'remote'}, 76: {'id': 76, 'name': u'keyboard'}, 77: {'id': 77, 'name': u'cell phone'}, 78: {'id': 78, 'name': u'microwave'}, 79: {'id': 79, 'name': u'oven'}, 80: {'id': 80, 'name': u'toaster'}, 81: {'id': 81, 'name': u'sink'}, 82: {'id': 82, 'name': u'refrigerator'}, 84: {'id': 84, 'name': u'book'}, 85: {'id': 85, 'name': u'clock'}, 86: {'id': 86, 'name': u'vase'}, 87: {'id': 87, 'name': u'scissors'}, 88: {'id': 88, 'name': u'teddy bear'}, 89: {'id': 89, 'name': u'hair drier'}, 90: {'id': 90, 'name': u'toothbrush'}} class_index_to_string = {k: v['name'] for k, v in class_names.items()} print(class_index_to_string) model_path = "/Users/aub3/Dropbox/DeepVideoAnalytics/dvalib/object_detection/ssd_mobilenet_v1_coco_11_06_2017/frozen_inference_graph.pb" d = detector.TFDetector(model_path=model_path,class_index_to_string=class_index_to_string) d.load() detections = d.detect("/Users/aub3/Dropbox/DeepVideoAnalytics/notebooks/images/person.jpg") for det in detections: print(det) ``` ### Trying with queue ``` input_path = "/Users/aub3/media/1/frames/*.jpg" model_path = "/Users/aub3/Dropbox/DeepVideoAnalytics/dvalib/object_detection/ssd_mobilenet_v1_coco_11_06_2017/frozen_inference_graph.pb" detection_graph = tf.Graph() with detection_graph.as_default(): filename_queue = tf.train.string_input_producer(tf.train.match_filenames_once(input_path)) image_reader = tf.WholeFileReader() imname, image_file = image_reader.read(filename_queue) image =
tf.expand_dims(tf.image.decode_image(image_file),0) od_graph_def = tf.GraphDef() with tf.gfile.GFile(model_path, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='',input_map={"image_tensor:0": image}) sess = tf.InteractiveSession(graph=detection_graph) image_tensor = detection_graph.get_tensor_by_name('image_tensor:0') boxes = detection_graph.get_tensor_by_name('detection_boxes:0') scores = detection_graph.get_tensor_by_name('detection_scores:0') classes = detection_graph.get_tensor_by_name('detection_classes:0') num_detections = detection_graph.get_tensor_by_name('num_detections:0') imname ```
github_jupyter
# Ensemble Spatial Interpolation with Python This is a tutorial demonstrating ensemble spatial interpolation in Python with the `pyESI` module. This exercise demonstrates the ESI estimation methods. The steps are: 1. Import a dataset with coordinates and a target variable 2. Create a grid tailored to the dataset as a study area 3. Apply regular sampling to the 2D realization 4. Compute an estimation using ESI with the base interpolation function defined as Inverse Distance Weighting (IDW) 5. Visualize the resulting estimation over the grid #### Load the required libraries The following code imports the required libraries. ``` import sys sys.path.append('..') import numpy as np import pandas as pd import matplotlib.pyplot as plt from matplotlib.pyplot import figure import seaborn as sns cmap = plt.cm.jet from pyESI.ensemble import EnsembleIDW, create_bounds, idw_interpolation ``` #### Prepare the data You can download our dataset via [Google Drive](https://drive.google.com/file/d/1VhXKo776gxPJiPZNziCIKCrxkD5PoN_S/view?usp=sharing). The file is in GSLIB format.
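Since IDW is the base interpolation function used throughout this notebook, a self-contained sketch of its weighting rule may help; this is a toy version for intuition, not pyESI's `idw_interpolation`:

```python
import numpy as np

def idw(point, xy, values, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: weights proportional to 1 / distance**power."""
    d = np.linalg.norm(xy - point, axis=1)
    w = 1.0 / (d**power + eps)  # eps guards against division by zero at a sample
    return float(np.sum(w * values) / np.sum(w))

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
values = np.array([1.0, 2.0, 3.0])
print(idw(np.array([0.0, 0.0]), xy, values))  # ~1.0, the value at that sample
```

Because the weights are positive and normalized, an IDW estimate is always a convex combination of the sample values, so it never leaves their range.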
``` samples = pd.read_csv("https://drive.google.com/uc?id=1VhXKo776gxPJiPZNziCIKCrxkD5PoN_S", sep='\t', header=None, names=['x','y','grade'], index_col=False, usecols=[0,1,3], dtype=np.float64, skiprows=6) samples.head() nx = 200; ny = 300; cell_size = 2 # grid number of cells and cell size xmin = 1.0; ymin = 1.0; # grid origin xmax = 400; ymax = 600 # calculate the extent of model vmin = samples["grade"].min(); vmax = samples["grade"].max(); plt.figure(figsize=(8,8)) im = plt.scatter(samples["x"],samples["y"],s=None, c=samples["grade"], marker=None, cmap=plt.cm.jet, norm=None, alpha=0.8, linewidths=0.8, edgecolors="black") plt.title("Samples pyESI") plt.xlim(xmin,xmax) plt.ylim(ymin,ymax) plt.xlabel("x (m)") plt.ylabel("y (m)") plt.axis('equal') cbar = plt.colorbar(im, orientation = 'vertical', ticks=np.linspace(vmin,vmax,10)) cbar.set_label("grade", rotation=270, labelpad=20) im ``` #### Create a Grid Create a grid of points as an area of study for estimation with ESI methods. ``` def create_grid_2d(mn, n, siz): """ Creates x, y variables for a 2d grid from gslib style parameters, ie, a table in the form X Y ------ x0 y0 x1 y0 ... x0 y1 ... xn yn :param mn: Starting values vector (xmn, ymn) :param n: Number of points vector (xn, yn) :param siz: Vector with block sizes (xsiz, ysiz) :return: x and y vectors for the whole grid in GSLIB format (ij, F order) """ # Create range arrays x_range = np.linspace(mn[0], int(n[0])*siz[0]+mn[0], num=int(n[0]), endpoint=False) y_range = np.linspace(mn[1], int(n[1])*siz[1]+mn[1], num=int(n[1]), endpoint=False) # Create a mesh grid (Gslib standard) x, y = np.meshgrid(x_range, y_range, indexing="ij") # Now create X, Y flatten variables (Fortran order) x = x.ravel(order="F") y = y.ravel(order="F") return x, y x, y = create_grid_2d([xmin, ymin], [nx, ny], [cell_size, cell_size]) grid = pd.DataFrame({'x':x, 'y':y}) ``` #### Use imported data on a simple example The following are the basic parameters for this example. 
This includes the creation of a bounding box in the area covered by the samples and the previously created 2D regular grid. ``` origin = create_bounds(samples).union(create_bounds(grid)) origin ``` We create an *Ensemble Spatial Interpolation* (ESI) estimator with the base interpolation function $\textbf{S}_{\mathcal{L}_k}$ defined as *Inverse Distance Weighting* (IDW). Here, along with the bounding box just configured, we use the samples and the parameters defined as $\alpha = 0.7$ and $m = 20$. ``` esi = EnsembleIDW(20, 0.7, origin, samples) ``` The `esi` object is configured and ready to perform an estimation over the grid. ``` result = esi.predict(grid) ``` The `result` object has two properties: **estimates** and **variances**. ``` array_results = result.estimates.reshape(ny, nx) array_variance = result.variances.reshape(ny, nx) ``` Let's visualize the resulting ESI<sub>IDW</sub> estimates together with the conditional variance $\mathbb{V}_{\mathcal{Z}_{(\mathcal{P}, \mathcal{M})}}$ for ESI<sub>IDW</sub>. ``` plt.figure(figsize=(16,8)) vmin = 0; vmax = 3; plt.subplot(121) cs = plt.contourf(grid.x.values.reshape(ny,nx), grid.y.values.reshape(ny,nx), array_results, cmap=cmap,vmin=vmin, vmax=vmax,levels = np.linspace(vmin,vmax,100)) plt.title("ESI-IDW Estimation") plt.xlabel("x (m)") plt.ylabel("y (m)") plt.axis('equal') plt.xlim(xmin, xmax) plt.ylim(ymin, ymax) cbar = plt.colorbar(orientation = 'vertical') cbar.set_label("grade", rotation=270, labelpad=20) plt.subplot(122) vmin = 0; vmax = 0.5; cs = plt.contourf(grid.x.values.reshape(ny,nx), grid.y.values.reshape(ny,nx), array_variance, cmap=cmap,vmin=vmin, vmax=vmax,levels = np.linspace(vmin,vmax,100)) plt.title("ESI-IDW Conditional Variance") plt.xlabel("x (m)") plt.ylabel("y (m)") plt.axis('equal') plt.xlim(xmin, xmax) plt.ylim(ymin, ymax) cbar = plt.colorbar(orientation = 'vertical') cbar.set_label("var", rotation=270, labelpad=20) plt.show() ``` Let's compare the result with a traditional IDW.
``` idw = np.zeros_like(result.estimates) for idp, point in grid.iterrows(): idw[idp] = idw_interpolation(point.values, samples) ``` Let's visualize the resulting estimation using IDW. ``` idw_results = idw.reshape(ny, nx) plt.figure(figsize=(8,8)) vmin = 0; vmax = 3; cs = plt.contourf(grid.x.values.reshape(ny,nx), grid.y.values.reshape(ny,nx), idw_results, cmap=cmap,vmin=vmin, vmax=vmax,levels = np.linspace(vmin,vmax,100)) plt.title("IDW") plt.xlabel("x (m)") plt.ylabel("y (m)") plt.axis('equal') plt.xlim(xmin, xmax) plt.ylim(ymin, ymax) cbar = plt.colorbar(orientation = 'vertical') cbar.set_label("grade", rotation=270, labelpad=20) plt.show() ```
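Conceptually, an ensemble estimate differs from the single IDW pass above by aggregating many base interpolators fit on random subsets of the samples, which also yields a per-point variance. The toy sketch below illustrates that idea with parameters mirroring the $m = 20$ and $\alpha = 0.7$ used earlier; it is an assumption-laden simplification, not pyESI's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def idw(point, xy, values, power=2.0, eps=1e-12):
    # Plain inverse-distance weighting over the given samples.
    d = np.linalg.norm(xy - point, axis=1)
    w = 1.0 / (d**power + eps)
    return float(np.sum(w * values) / np.sum(w))

def ensemble_idw(point, xy, values, m=20, alpha=0.7):
    """Average m IDW estimates, each fit on a random fraction alpha of samples."""
    n = len(values)
    k = max(2, int(alpha * n))
    ests = []
    for _ in range(m):
        idx = rng.choice(n, size=k, replace=False)
        ests.append(idw(point, xy[idx], values[idx]))
    return np.mean(ests), np.var(ests)  # estimate and its conditional variance

xy = rng.uniform(0, 10, size=(50, 2))
values = np.sin(xy[:, 0]) + xy[:, 1] * 0.1
est, var = ensemble_idw(np.array([5.0, 5.0]), xy, values)
print(est, var)
```

The spread of the ensemble members is what produces the "Conditional Variance" panel plotted above: regions where subsets disagree get high variance.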
github_jupyter
## Libraries ``` import pandas as pd pd.options.display.max_columns = 50 import numpy as np import matplotlib.pyplot as plt import seaborn as sns #loop progress from tqdm.notebook import tqdm tqdm.pandas(desc="apply progress") #own timeseries helper functions from timeseries_functions import post_applier ``` ## Load data a) load raw csv ``` df = pd.read_csv('preprocessed_data.csv', index_col = 0) df_original = df.copy() df.publish_date = pd.to_datetime(df.publish_date) df.head() df.info(null_counts=False) ``` b) load presaved dataset ``` df_revs = pd.read_csv('df_revs.csv', index_col=0) df_revs['post_date'] = pd.to_datetime(df_revs['post_date']) ``` ## Constructing df_revs #### clone observations based on num_reviews ``` df_revs = df.loc[df.index.repeat(df.num_reviews)].reset_index(drop = True) rev_chars = ['url', 'avg_rating'] book_chars = [\ 'title', 'author', 'num_ratings', 'num_reviews', 'num_pages', 'publish_date', 'age', 'has_award', 'is_series', 'Fiction', 'Contemporary', 'Romance', 'Mystery', 'Young Adult', 'Fantasy', 'Audiobook', 'Thriller', 'Adult', 'Historical', 'Historical Fiction', 'Nonfiction', 'Mystery Thriller', 'Contemporary Romance', 'Suspense', 'Adult Fiction', 'Science Fiction', 'Crime', 'Womens Fiction', 'LGBT', 'Chick Lit', 'Cultural', 'Autobiography', 'Paranormal', 'Memoir', 'Literary Fiction', 'New Adult', 'War', 'Biography', 'Magic'] df_revs = df_revs[~ df_revs.age.isnull()] df_revs = df_revs[rev_chars + book_chars] df_revs df_revs.info() df.num_reviews.sum() df_revs.isnull().sum() ``` #### assign post_date ``` df_revs['post_date'] = df_revs.progress_apply(post_applier, axis = 1) df_revs df_revs.to_csv('df_revs.csv') ``` ## EDA ### number of reviews ``` df_revs.set_index('post_date', inplace = True) df_revs.head() genres_all = ['Fiction', 'Contemporary', 'Romance', 'Mystery', 'Young Adult', 'Fantasy', 'Audiobook', 'Thriller', 'Adult', 'Historical', 'Historical Fiction', 'Nonfiction', 'Mystery Thriller', 'Contemporary Romance', 'Suspense', 
'Adult Fiction', 'Science Fiction', 'Crime', 'Womens Fiction', 'LGBT', 'Chick Lit', 'Cultural', 'Autobiography', 'Paranormal', 'Memoir', 'Literary Fiction', 'New Adult', 'War', 'Biography', 'Magic'] import random n = 15 genres = random.sample(genres_all, n) def logsum(x): tvar = np.sum(x) tvar = np.log(tvar+.01) return tvar def genre_statter(data, genres, aggfunc): if type(genres) == list: idx = data.groupby([data.index.year, data.index.week]).agg({genres[0] : aggfunc}).index idx.names = ['year', 'week'] concat_df= pd.DataFrame([], index = idx) for genre in tqdm(genres): tdf = data.groupby(by = [data.index.year, data.index.week]).agg({genre : aggfunc}) concat_df = pd.concat([concat_df, tdf], axis = 1) else: genre = genres concat_df = data.groupby(by = [data.index.year, data.index.week]).agg({genre : aggfunc}) return concat_df def vdf_plotter(viz_dataframe, form = 'wide', **kwargs): fgs = kwargs.get('figsize', (8, 6)) fig, ax = plt.subplots(figsize = fgs) if form == 'wide': viz_dataframe.plot(figsize = fgs, ax = ax) elif form == 'long': #get the plot variables x = kwargs.get('x', None) y = kwargs.get('y', None) hue = kwargs.get('hue', None) #plot the data sns.lineplot(x=x, y=y, hue=hue, ax=ax) else: return 'specify the format to be either "wide" or "long"' #set options for the legend box: ## Shrink current axis's height by 10% on the bottom box = ax.get_position() ax.set_position([box.x0, box.y0 + box.height * 0.1, box.width, box.height * 0.9]) ## Put a legend below current axis ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05), fancybox=True, shadow=True, ncol=5) return ax vdf_numrevs = genre_statter(df_revs, genres_all, np.sum) vdf_numrevs vdf_plotter(vdf_numrevs, form = 'wide', figsize = (10, 7)) ``` on a log scale: ``` vdf_numrevs = genre_statter(df_revs, genres_all, logsum) vdf_numrevs vdf_plotter(vdf_numrevs, form = 'wide', figsize = (10, 7)) ``` ## average rating ### genres ``` #shorten data frame down to relevant variables: dv + genres dv = ['avg_rating'] tdf =
df_revs[dv + genres] tdf.reset_index(inplace = True) tdf.head() #"melt" the data frame such that you have: post_date + genre + avg_rating in that post_date #1) Initial melting ##initial melting of genre columns into one "genre" vdf = pd.melt(tdf.iloc[:100_000], id_vars = ['post_date', 'avg_rating'], value_vars = genres, var_name = "genre") ##filter out the genre values where genre == 'False' vdf.query("value == True", inplace = True) ##drop the boolean "value" column, it was solely for filtering purposes vdf.drop('value', axis = 1, inplace = True) #sort values ascending vdf.sort_values('post_date', ascending = True, inplace = True) #summarise duplicate rows, featuring an average average vdf = pd.pivot_table(\ data = vdf, index = ['post_date', 'genre'], values = 'avg_rating', aggfunc = np.mean) vdf.reset_index(inplace = True) vdf #set an appropriate time filter, since the early months feature a lot of NA's time_filter = "2019.10.01" mask = vdf['post_date'] > pd.Timestamp(time_filter) data = vdf[mask] #set the plot variables x = data.post_date y = data.avg_rating hue = data.genre vdf_plotter(data, form = 'long', x = x, y = y, hue = hue) ``` ### authors ``` #shorten data frame down to relevant variables: dv + genres vdf = df_revs[['avg_rating', 'author']] vdf.reset_index(inplace = True) vdf.head() n_top = 20 authors = vdf.author.value_counts().iloc[:n_top].index mask = vdf.author.isin(authors) vdf = vdf[mask] #"melt" the data frame such that you have: post_date + genre + avg_rating in that post_date #sort values ascending vdf.sort_values('post_date', ascending = True, inplace = True) #summarise duplicate rows, featuring an average average vdf = pd.pivot_table(\ data = vdf, index = ['post_date','author'], values = 'avg_rating', aggfunc = np.mean) vdf.reset_index(inplace = True) vdf x = vdf.post_date y = vdf.avg_rating hue = vdf.author vdf_plotter(vdf, form = 'long', x = x, y = y, hue = hue) ```
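The `df.loc[df.index.repeat(...)]` trick used earlier to build `df_revs` (one row per review) is worth seeing on a tiny frame; a standalone example with made-up columns:

```python
import pandas as pd

books = pd.DataFrame({"title": ["A", "B"], "num_reviews": [3, 1]})

# One row per review: each book row is cloned num_reviews times,
# then the index is reset so the clones get fresh row labels.
revs = books.loc[books.index.repeat(books.num_reviews)].reset_index(drop=True)
print(len(revs))             # 4
print(revs.title.tolist())   # ['A', 'A', 'A', 'B']
```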
github_jupyter
The challenge of managing a computer cluster to run some experiments (from ideas, to the systems that build and control datasets, the distributed programming, the building of the actual cluster, handling the GPUs, to the building of a system that uses that cluster and actually manages to do "X") ..is actually quite fun! ``` from IPython.display import Image Image("luna.jpeg") ``` Systems that learn by exploring the environment in a competitive setting against other entities similar in skill to them, competing against each other and incrementally improving this way. We use human data to explore the science of what learning can help us achieve; human data is used as a stepping stone to help us, for pragmatic reasons, go faster towards our goals than we might be able to starting solely from self-play. Components based on human data can help understand the system, so that you can build the more principled version later that does it for itself. In our constructivistic adventure we are *NOT* exploring a reinforcement learning path, ... Our approach is more around the telephone switchboard and the brain analogy found in very old cybernetic and reductionist books and visions. This is all quite vague and still about the missing torch-up results! Not the place nor the time, let's focus on what we actually have.. ## Optional challenge maps updates Eight new maps have been added to the optional challenge pool; previously only Sparkle was available as an Island challenge. Focus is hard, even harder if you focus on the wrong things.. we hope to present you a competitive modern benchmark for Brood War AI; the [current map pool](https://torchup.org/2020/03/10/torch-up-maps-and-optional-challenge/) is composed of eighteen 1v1 maps from old KeSPA days, the current ladder and modern tournaments.
### Some words on available team maps New versions of Iron Curtain and Fastest Possible are also available for 1.16.1 in the [download](https://torchup.org/files/maps.zip); we plan to support team games starting on an eventual season 3 or in the form of an upcoming online ladder. | (4) Sparkle | &nbsp; | (3) Reap the Storm | &nbsp; | (2) Hitchhiker | :---:|:---:|:---:|:---:|:---:| | <img src="https://spacebeam.org/images/maps/Sparkle.jpg" width="200"> | &nbsp; | <img src="https://spacebeam.org/images/maps/Reap_the_Storm.jpg" width="200"> | &nbsp; | <img src="https://spacebeam.org/images/maps/Hitchhiker.jpg" width="200"> | | (3) Neo Sylphid | &nbsp; | (4) Colosseum II | &nbsp; | (2) Match Point | :---:|:---:|:---:|:---:|:---:| | <img src="https://spacebeam.org/images/maps/NeoSylphid.jpg" width="200"> | &nbsp; | <img src="https://spacebeam.org/images/maps/ColosseumII.jpg" width="200"> | &nbsp; | <img src="https://spacebeam.org/images/maps/MatchPoint.jpg" width="200"> | | (4) Empire of the Sun | &nbsp; | (3) Core Breach | &nbsp; | (2) New Bloody Ridge | :---:|:---:|:---:|:---:|:---:| | <img src="https://spacebeam.org/images/maps/NewEmpire.jpg" width="200"> | &nbsp; | <img src="https://spacebeam.org/images/maps/CoreBreach.jpg" width="200"> | &nbsp; | <img src="https://spacebeam.org/images/maps/New_Bloody_Ridge.jpg" width="200"> | ## Optional 2v2 and 3v3 challenge Team league maps and the future.. bwheadless.exe only supports 1v1 games without diving and hacking into its source code.. hopefully season 3? | (4) Iron Curtain | &nbsp; | (8) Fastest Possible | | :---: | :---: | :---: | | <img src="https://spacebeam.org/images/maps/Iron_Curtain.jpg" width="300"> | &nbsp; | <img src="https://spacebeam.org/images/maps/Fastest.jpeg" width="300"> | ## Luerl updates We got a brand new logo for Luerl, the language of our favorite game simulator; [watch](https://github.com/rvirding/luerl/issues/102) the history unfold!
## API updates Starting our favorite distributed database to store everything from our API calls to game replays. ``` # luna -u riak start 7b76fab9-d6bf-413f-c1d3-3b8d654b8933 Starting unit riak INFO: instance started successfully Done... In the pipe, five by five. ``` ``` import requests as req import ujson as json ``` ### OPTIONS ``` r = req.options('http://127.0.0.1:58008/games') json.loads(r.content) ``` ### GET games ``` r = req.get('http://127.0.0.1:58008/games/') r.status_code ``` ### Pagination ``` r = req.get('http://127.0.0.1:58008/games/page/2') r.status_code json.loads(r.content) ``` ### GET game ``` r = req.get('http://127.0.0.1:58008/games/4c489b26-4c20-4764-9e1a-4bc8e69185c3') a = json.loads(r.content) a ``` ### PATCH ``` data = {'home_crashed': True} r = req.patch('http://127.0.0.1:58008/games/4c489b26-4c20-4764-9e1a-4bc8e69185c3', json.dumps(data)) r.status_code r.content ``` ### DELETE ``` r = req.delete('http://127.0.0.1:58008/games/4c489b26-4c20-4764-9e1a-4bc8e69185c3') r.status_code ``` ### Creating a complete tournament test round If you want to know more about how we are using this API to generate a complete round of test games, pairing the bots, scheduling the maps or anything in between, there is an [available notebook](https://github.com/spacebeam/research/blob/master/notebook/Unsequenced%20Chaos.ipynb) with all the details; in general we are using Python 3, Erlang and Lua to hack our way around things. While we wait together for this to end as a meme and actually get to a good port with some published results, hopefully this serves to update any curious reader on our progress, goals and current state.
github_jupyter
# Time series classification with Mr-SEQL Mr-SEQL\[1\] is a univariate time series classifier which trains linear classification models (logistic regression) with features extracted from multiple symbolic representations of time series (SAX, SFA). The features are extracted by using SEQL\[2\]. \[1\] T. L. Nguyen, S. Gsponer, I. Ilie, M. O'reilly and G. Ifrim Interpretable Time Series Classification using Linear Models and Multi-resolution Multi-domain Symbolic Representations in Data Mining and Knowledge Discovery (DAMI), May 2019, https://link.springer.com/article/10.1007/s10618-019-00633-3 \[2\] G. Ifrim, C. Wiuf “Bounded Coordinate-Descent for Biological Sequence Classification in High Dimensional Predictor Space” (KDD 2011) In this notebook, we will demonstrate how to use Mr-SEQL for univariate time series classification with the ArrowHead dataset. ## Imports ``` from sklearn import metrics from sklearn.model_selection import train_test_split from sktime.classification.shapelet_based import MrSEQLClassifier from sktime.datasets import load_arrow_head, load_basic_motions ``` ## Load data For more details on the data set, see the [univariate time series classification notebook](https://github.com/alan-turing-institute/sktime/blob/main/examples/02_classification_univariate.ipynb). ``` X, y = load_arrow_head(return_X_y=True) X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) print(X_train.shape, y_train.shape, X_test.shape, y_test.shape) ``` ## Train and Test Mr-SEQL can be configured to run in different modes with different symbolic representations. seql_mode can be either 'clf' (SEQL as classifier) or 'fs' (SEQL as feature selection). If 'fs' mode is chosen, a logistic regression classifier will be trained with the features extracted by SEQL. 'fs' mode is more accurate in general. symrep can include either 'sax' or 'sfa' or both. Using both usually produces a better result.
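To build intuition for the SAX representation Mr-SEQL consumes, here is a toy discretization: z-normalize the series, take piecewise-aggregate segment means, then map each mean to a letter via fixed breakpoints. This is illustrative only, not sktime's implementation (which, among other things, derives breakpoints from the Gaussian quantiles for the chosen alphabet size):

```python
import numpy as np

def toy_sax(ts, n_segments=4, breakpoints=(-0.67, 0.0, 0.67)):
    """Map a numeric series to a short symbolic word over the alphabet a..d."""
    ts = (ts - ts.mean()) / ts.std()              # z-normalize
    segments = np.array_split(ts, n_segments)     # PAA: split into segments
    means = np.array([s.mean() for s in segments])
    letters = "abcd"
    return "".join(letters[np.searchsorted(breakpoints, m)] for m in means)

ts = np.array([1, 1, 1, 1, 5, 5, 5, 5], dtype=float)
print(toy_sax(ts))  # 'aadd': low half -> low letters, high half -> high letters
```

Mr-SEQL then mines discriminative subsequences of such symbolic words (across multiple window sizes and representations) as features for the linear model.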
```
# Create mrseql object; uses sax by default
ms = MrSEQLClassifier(seql_mode="clf")

# use sfa representations
# ms = MrSEQLClassifier(seql_mode='fs', symrep=['sfa'])

# use sax and sfa representations
# ms = MrSEQLClassifier(seql_mode='fs', symrep=['sax', 'sfa'])

# fit training data
ms.fit(X_train, y_train)

# prediction
predicted = ms.predict(X_test)

# Classification accuracy
print("Accuracy with mr-seql: %2.3f" % metrics.accuracy_score(y_test, predicted))
```

## Multivariate time series

Mr-SEQL also supports multivariate time series. Mr-SEQL extracts features from each dimension of the data independently.

```
X, y = load_basic_motions(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)

ms = MrSEQLClassifier()

# fit training data
ms.fit(X_train, y_train)

predicted = ms.predict(X_test)

# Classification accuracy
print("Accuracy with mr-seql: %2.3f" % metrics.accuracy_score(y_test, predicted))
```
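A side note on the splits above: `train_test_split` is called without an explicit `test_size`, so it falls back to scikit-learn's documented default of holding out 25% of the samples for testing. A quick standalone check of that behavior:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(100, 1)
y = np.arange(100)

# No test_size given: sklearn's default reserves 25% of the data for testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
print(len(X_train), len(X_test))  # 75 25
```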
# Topic Categorization with PySS3

<br>
<div style="text-align:right"><i>To open and run this notebook <b>online</b>, click here: <a href="https://mybinder.org/v2/gh/sergioburdisso/pyss3/master?filepath=examples/topic_categorization.ipynb" target="_blank"><img src="https://mybinder.org/badge_logo.svg" style="display: inline"></a></i></div>

This is the notebook for the ["Topic Categorization"](https://pyss3.readthedocs.io/en/latest/tutorials/topic-categorization.html) tutorial. In this notebook, we will see how we can use the [PySS3](https://github.com/sergioburdisso/pyss3) Python package to deploy models for topic categorization. Let us begin!

---

Before we begin, let's import the needed modules...

```
%matplotlib inline
from pyss3 import SS3
from pyss3.util import Dataset, Evaluation, span
from pyss3.server import Live_Test

from sklearn.metrics import accuracy_score
```

... and unzip the "topic.zip" dataset inside the `datasets` folder.

```
!unzip -u datasets/topic.zip -d datasets/
```

Ok, now we are ready to begin. Let's create a new SS3 instance:

```
clf = SS3()
```

What are the default [hyperparameter](https://pyss3.readthedocs.io/en/latest/user_guide/ss3-classifier.html#hyperparameters) values? Let's see:

```
s, l, p, _ = clf.get_hyperparameters()

print("Smoothness(s):", s)
print("Significance(l):", l)
print("Sanction(p):", p)
```

Ok, now let's load the training and the test set using the `load_from_files` function from `pyss3.util`. Since, in this dataset, there's a single file for each category, we will use the argument ``folder_label=False`` to tell PySS3 to use each file as a different category and each line inside of it as a different document:

```
x_train, y_train = Dataset.load_from_files("datasets/topic/train", folder_label=False)
x_test, y_test = Dataset.load_from_files("datasets/topic/test", folder_label=False)
```

Let's train our model...
```
# we could've used clf.fit(x_train, y_train) here, they're equivalent!
# We decided to use `clf.train` only because it is more "user-friendly".

clf.train(x_train, y_train)  # clf.fit(x_train, y_train)
```

Note that we don't have to create any document-term matrix! We are using just the plain `x_train` documents :D cool uh? SS3 learns a (special kind of) language model for each category and therefore it doesn't need to create any document-term matrices :) in fact, the very concept of "document" becomes irrelevant...

Now that the model has been trained, let's test it using the documents in `x_test`. First, we will do it "in sklearn's own way" with:

```
y_pred = clf.predict(x_test)
accuracy = accuracy_score(y_pred, y_test)

print("Accuracy was:", accuracy)
```

Alternatively, we could've done it "in PySS3's own way" by using the built-in ``test`` function provided by the ``Evaluation`` class that we have imported from ``pyss3.util`` at the beginning of this notebook, as follows:

```
Evaluation.test(clf, x_test, y_test)
```

The advantage of using this built-in function is that, with just a single line of code, we get:

* The performance measured in terms of all the well-known metrics ('accuracy', 'precision', 'recall', and 'f1-score').
* A plot showing the obtained confusion matrix, and...
* Since all the evaluations performed using the ``Evaluation`` class are permanently cached, if we ever perform this test again, the evaluation will be skipped and values will be retrieved from the cache storage (saving us a lot of time! especially when performing long evaluations).

As we can see, the performance doesn't look bad using the default [hyperparameter](https://pyss3.readthedocs.io/en/latest/user_guide/ss3-classifier.html#hyperparameters) values; however, let's now manually analyze what our model has actually learned by using the interactive "live test".
Note that since we are not going to use the `x_test` for this live test**(\*)** but instead the documents in "datasets/topic/live_test", we must use the `set_testset_from_files` method to tell the Live Test to load documents from there instead.

**(\*)** *try it if you want, but since `x_test` contains (preprocessed) tweets, they don't look really good and clean.*

```
# Live_Test.run(clf, x_test, y_test)  # <- this visualization doesn't look really clean and good so, instead,
#                                          we will use the documents in the "live_test" folder:

Live_Test.set_testset_from_files("datasets/topic/live_test")

Live_Test.run(clf)  # <- Unfortunately, if you're running the notebook online with
                    #    Binder, this won't work, sorry :(
```

<p style="color:red">(!) To <b>STOP</b> the server, press <b>Esc</b> once and then the <b>i</b> key twice</p>

<p style="color:red"><u>NOTE</u>: Unfortunately, the Live Test will ONLY WORK if you run this notebook, locally, on your computer. Therefore, if you're using the online Binder version, it won't work, sorry :( ... a Live Test like <a href="http://tworld.io/ss3/live_test_online/#30303" target="_blank">this one</a> would have been opened up, locally, in your browser but using the model you just trained above.</p>

Makes sense to you? (remember you can use the options to select "words" as the Description Level if you want to know based on what words, and to what degree, the model is making classification decisions)

The live test doesn't look bad; however, there are a couple of documents that are clearly misclassified. We will try creating a "more intelligent" version of this model, a version that can recognize variable-length word n-grams "on the fly".
Thus, when calling `train`, we will pass an extra argument `n_grams=3` to indicate we want SS3 to learn to recognize important words, bigrams, and 3-grams _(if you're curious and want to know how this is actually done by SS3, read the paper "t-SS3: a text classifier with dynamic n-grams for early risk detection over text streams", preprint available [here](https://arxiv.org/abs/1911.06147))_.

```
clf = SS3()

clf.train(x_train, y_train, n_grams=3)  # <-- note the n_grams=3 argument here
```

Now let's see if the performance has improved...

```
y_pred = clf.predict(x_test)
print("Accuracy:", accuracy_score(y_pred, y_test))
```

Yeah, the accuracy slightly improved, but more importantly, we should now see that the model has learned "more intelligent patterns" involving sequences of words when using the interactive "live test" (like "machine learning", "artificial intelligence", "self-driving cars", etc. for the "science&technology" category). Let's see...

```
Live_Test.run(clf)  # <- remember it won't work online using Binder :(

# (!) Remember: to STOP the server, press `Esc` once and then the `I` key twice
```

Fortunately, our model has learned to recognize these important sequences (such as "artificial intelligence" and "machine learning" in doc_2.txt, "self-driving cars" in doc_6.txt, etc.). However, again, some documents aren't perfectly classified; for instance, doc_10.txt was classified as "science&technology" (as the second topic), which is clearly wrong...

So, one last thing we are going to do is to try to find better [hyperparameter](https://pyss3.readthedocs.io/en/latest/user_guide/ss3-classifier.html#hyperparameters) values to improve our model's performance. For example, the following values will improve our classification performance:

```
clf.set_hyperparameters(s=0.32, l=1.62, p=2.35)
```

Let's see if it's true...
```
Evaluation.test(clf, x_test, y_test)
```

The accuracy has improved as expected and the confusion matrix looks much better now :)

Finally, we could take a look at what our final model looks like using the Live Test tool one last time.

```
Live_Test.run(clf)

# (!) Remember: to stop the server, press `Esc` and then the `I` key twice
```

**Want to know how we found out those hyperparameter values** were going to improve our classifier accuracy? Just read the next section! ;)

---

## Hyperparameter Optimization

In this section we will see how we can use PySS3's ``Evaluation`` class to perform [Hyperparameter optimization](https://en.wikipedia.org/wiki/Hyperparameter_optimization), which allows us to find better hyperparameter values for our models. To do this, we will perform [grid searches](https://en.wikipedia.org/wiki/Hyperparameter_optimization#Grid_search) using the [Evaluation.grid_search()](https://pyss3.rtfd.io/en/latest/api/index.html#pyss3.util.Evaluation.grid_search) function.

Let's create a new (standard) instance of the SS3 classifier. This will speed things up, because the model we currently have in ``clf`` recognizes variable-length word n-grams; the grid search won't run as fast as with a (standard) model that recognizes only words (and the same "best" hyperparameter values usually work for both of them).

Note: just ignore the (optional) ``name`` argument below; we're giving our model the name "topics" only to make things clearer when we create the interactive 3D evaluation plot.

```
clf = SS3(name="topics")

clf.train(x_train, y_train)
```

[Evaluation.grid_search()](https://pyss3.rtfd.io/en/latest/api/index.html#pyss3.util.Evaluation.grid_search) takes, for each hyperparameter, the list of values to use in the search; for instance, ``s=[0.25, 0.5, 0.75, 1]`` indicates you want ``grid_search`` to try out evaluating the classifier using those 4 values for the sigma (``s``) hyperparameter.
However, for simplicity, instead of using a manually crafted long list of values, we will use the ``span`` function we have imported from ``pyss3.util`` at the beginning of this notebook. This function will create a list of values for us, given a lower and upper bound and the number of elements to be generated. For instance, if we want a list of 6 numbers between 0 and 1, we could use:

```
span(0, 1, 6)
```

Thus, we will use the following values for each of the three hyperparameters:

```
s_vals = span(0.2, 0.8, 6)    # [0.2 , 0.32, 0.44, 0.56, 0.68, 0.8]
l_vals = span(0.1, 2, 6)      # [0.1 , 0.48, 0.86, 1.24, 1.62, 2]
p_vals = span(1.75, 2.75, 6)  # [1.75, 1.95, 2.15, 2.35, 2.55, 2.75]
```

To speed things up, unlike in the ["Movie Reviews (Sentiment Analysis)"](https://pyss3.readthedocs.io/en/latest/tutorials/movie-review.html) tutorial, we will perform the grid search using only the test set (we won't use k-fold cross-validation). Once the search is over, ``Evaluation.grid_search`` will return the hyperparameter values that obtained the best accuracy for us.

```
# the search should take about 15 minutes
best_s, best_l, best_p, _ = Evaluation.grid_search(
    clf, x_test, y_test,
    s=s_vals, l=l_vals, p=p_vals
)

print("The hyperparameter values that obtained the best Accuracy are:")
print("Smoothness(s):", best_s)
print("Significance(l):", best_l)
print("Sanction(p):", best_p)
```

And that's how we found out that these hyperparameter values (``s=0.32, l=1.62, p=2.35``) were going to improve our classifier accuracy.

---

**NOTE:** what if we want to find the hyperparameter values that performed best using a different metric than accuracy? For example, what if we wanted to find the hyperparameter values that will improve the precision for the ``sports`` topic?
We can use the ``Evaluation.get_best_hyperparameters()`` function as follows:

```
s, l, p, _ = Evaluation.get_best_hyperparameters(metric="precision", metric_target="sports")

print("s=%.2f, l=%.2f, and p=%.2f" % (s, l, p))
```

Or the macro-averaged f1 score?

```
s, l, p, _ = Evaluation.get_best_hyperparameters(metric="f1-score", metric_target="macro avg")

print("s=%.2f, l=%.2f, and p=%.2f" % (s, l, p))
```

Alternatively, we could have also added these 2 arguments, metric and target, to the grid search in the first place :) (e.g. ``Evaluation.grid_search(..., metric="f1-score", metric_target="macro avg")``).

Note that this ``get_best_hyperparameters`` function gave us the values right away! This is because, instead of performing the grid search again, this function uses the evaluation cache to retrieve the best values from disk, which saves us a lot of time!

---

### Interactive 3D Evaluation Plot

The ``Evaluation`` class comes with a really useful function, ``Evaluation.plot()``, that we can use to create an interactive 3D evaluation plot (we highly recommend reading this [brief section](https://pyss3.rtfd.io/en/latest/user_guide/visualizations.html#evaluation-plot) of the documentation, in which it is briefly described). Instead of using the single value returned from ``Evaluation.grid_search()``, we could use this plot to have a broader view of the relationship between the different hyperparameter values and the performance of our model in the task being addressed.

The ``Evaluation.plot()`` function creates a portable HTML file containing the interactive plot for us, and then opens it up in your browser. Let's give it a shot:

```
Evaluation.plot()
```

---

**NOTE:** If you're running this notebook online using Binder, the plot won't open. Fortunately, this time there's a workaround!
If we list the files in the current directory:

```
!ls
```

We can see the plot file has actually been created, with the name we gave our model in it: ``ss3_model_evaluation[topics].html``. However, since the Jupyter kernel is not running on your computer, PySS3 was not able to open the plot for you in your browser.

**Workaround:** Go to the "File" menu (upper-left corner) and then select the "Open..." option, then click on this file to manually open it up in your browser (or... just click <a href="/view/ss3_model_evaluation%5Btopics%5D.html" target="_blank">here</a>!)

---

You should see a plot like this one:

![](imgs/topic_evaluations.png)

Rotate the camera and move the cursor over the pink point; that is the point that obtained the best performance (in terms of accuracy), i.e. the point that ``grid_search`` gave us. Among the information that is displayed when moving the cursor over the points, a compact version of the obtained confusion matrix is shown.

Feel free to play a little bit with this interactive 3D evaluation plot, for instance try changing the metric and target from the options panel :D
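As an aside, the `span` helper used in the grid search earlier behaves like an inclusive, evenly spaced range. A hypothetical equivalent (this is not PySS3's actual implementation, just a sketch that reproduces the values shown in the comments above):

```python
def span(lo, hi, n):
    """Return n evenly spaced values from lo to hi, inclusive (sketch, not PySS3's code)."""
    step = (hi - lo) / (n - 1)
    return [round(lo + i * step, 10) for i in range(n)]

print(span(0, 1, 6))      # [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
print(span(0.2, 0.8, 6))  # [0.2, 0.32, 0.44, 0.56, 0.68, 0.8]
```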
```
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2

image = mpimg.imread("images/test.jpg")
plt.imshow(image)
print("img type: ", type(image), "\n img size: ", image.shape)

ysize = image.shape[0]
xsize = image.shape[1]
color_select = np.copy(image)

# if there is an alpha channel, this drops it
color_select = color_select[:,:,:3]

red_threshold = 200
green_threshold = 200
blue_threshold = 200
rgb_threshold = [red_threshold, green_threshold, blue_threshold]

thresholds = (image[:,:,0] < rgb_threshold[0]) \
           | (image[:,:,1] < rgb_threshold[1]) \
           | (image[:,:,2] < rgb_threshold[2])
color_select[thresholds] = [0,0,0]

plt.imshow(color_select)
plt.show()

color_select[thresholds].shape
color_select[thresholds]

mpimg.imsave("images/test-after.png", color_select)

left_bottom = [0, 539]
right_bottom = [900, 539]
apex = [455, 320]

# Fit lines (y=Ax+B) to identify the 3-sided region of interest
# np.polyfit() returns the coefficients [A, B] of the fit
fit_left = np.polyfit((left_bottom[0], apex[0]), (left_bottom[1], apex[1]), 1)
fit_right = np.polyfit((right_bottom[0], apex[0]), (right_bottom[1], apex[1]), 1)
fit_bottom = np.polyfit((left_bottom[0], right_bottom[0]), (left_bottom[1], right_bottom[1]), 1)

# Find the region inside the lines
XX, YY = np.meshgrid(np.arange(0, xsize), np.arange(0, ysize))
region_thresholds = (YY > (XX*fit_left[0] + fit_left[1])) & \
                    (YY > (XX*fit_right[0] + fit_right[1])) & \
                    (YY < (XX*fit_bottom[0] + fit_bottom[1]))

# Color pixels red which are inside the region of interest
color_select[region_thresholds] = [255, 0, 0]

# Display the image
plt.imshow(color_select)
```

## Canny Edge detection on image:

To find lane lines we first have to detect sudden changes in pixel values; these changes occur mostly at the edges of objects in images, due to illumination. So, first let's convert our image to grayscale and then run Canny edge detection on it.
```
another_image = mpimg.imread("images/exit-ramp.jpg")
plt.imshow(another_image)

gray = cv2.cvtColor(another_image, cv2.COLOR_RGB2GRAY)
plt.imshow(gray, cmap='gray')

# criteria for edge linking (the smallest value between low_threshold and
# high_threshold is used for edge linking)
low_threshold = 50
high_threshold = 150

# a larger kernel_size implies averaging, or smoothing, over a larger area
kernel_size = 5
blur_gray = cv2.GaussianBlur(gray, (kernel_size, kernel_size), 0)

edges = cv2.Canny(blur_gray, low_threshold, high_threshold)
plt.imshow(edges, cmap='Greys_r')
plt.imsave("images/canny.jpg", edges)

# Next we'll create a masked edges image using cv2.fillPoly()
mask = np.zeros_like(edges)
ignore_mask_color = 255

# This time we are defining a four-sided polygon to mask
imshape = image.shape
vertices = np.array([[(50,imshape[0]),(420, 300), (520,300), (900, imshape[0])]], dtype=np.int32)
cv2.fillPoly(mask, vertices, ignore_mask_color)
masked_edges = cv2.bitwise_and(edges, mask)

mpimg.imsave("images/area_of_interest.jpg", masked_edges)
plt.title("shape:" + str(imshape))
plt.imshow(masked_edges)
```

## Hough transform to find lane lines

The Hough transform is a way to represent points, lines, or any other shapes from image space (x, y) in Hough space ($\rho,\theta$), where $\rho$ is the perpendicular distance of a line from the origin and $\theta$ is the angle of that perpendicular w.r.t. the horizontal axis.

To accomplish the task of finding lane lines, we need to specify some parameters to say what kind of lines we want to detect (i.e., long lines, short lines, bendy lines, dashed lines, etc.). To do this, we'll be using an OpenCV function called HoughLinesP that takes several parameters. Let's code it up and find the lane lines in the image we detected edges in with the Canny function (for a look at coding up a Hough transform from scratch, check [this](https://alyssaq.github.io/2014/understanding-hough-transform/) out).
```
# Define the Hough transform parameters
# Make a blank the same size as our image to draw on
rho = 2
theta = np.pi/180
threshold = 17
min_line_length = 40
max_line_gap = 20
line_image = np.copy(image)*0  # creating a blank to draw lines on

# Run Hough on the edge-detected image; lines will contain a list of all
# line coordinates [(x1,y1),(x2,y2)]
lines = cv2.HoughLinesP(masked_edges, rho, theta, threshold, np.array([]),
                        min_line_length, max_line_gap)

# Iterate over the output "lines" and draw lines on the blank
for line in lines:
    for x1,y1,x2,y2 in line:
        cv2.line(line_image, (x1,y1), (x2,y2), (255,0,0), 10)

# Create a "color" binary image to combine with the line image
color_edges = np.dstack((edges, edges, edges))

# Draw the lines on the edge image
combo = cv2.addWeighted(color_edges, 0.8, line_image, 1, 0)
plt.imshow(combo)
mpimg.imsave("images/hough_transformed_line_image.jpg", combo)

colored_combo = cv2.addWeighted(another_image, 0.8, line_image, 1, 0)
plt.imshow(colored_combo)
mpimg.imsave("images/hough_transformed_on_colored_image.jpg", colored_combo)
```
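The $(\rho,\theta)$ parameterization described above can be sanity-checked numerically: a point $(x, y)$ lies on the line with parameters $(\rho, \theta)$ exactly when $\rho = x\cos\theta + y\sin\theta$, so points on the same line trace sinusoids in Hough space that all cross at that line's $(\rho, \theta)$ cell, which is what the accumulator's voting exploits. A small standalone NumPy check (not part of the pipeline above):

```python
import numpy as np

def rho_curve(x, y, thetas):
    """Hough-space sinusoid of an image point: rho(theta) = x*cos(theta) + y*sin(theta)."""
    return x * np.cos(thetas) + y * np.sin(thetas)

thetas = np.linspace(0, np.pi, 181)  # 1-degree resolution, like theta = np.pi/180 above

# Three collinear points on the line y = x (normal form: theta = 3*pi/4, rho = 0)
curves = [rho_curve(x, x, thetas) for x in (1, 2, 3)]

# At theta = 3*pi/4 every curve should agree on rho = 0: a common Hough cell.
i = int(np.argmin(np.abs(thetas - 3 * np.pi / 4)))
print(all(abs(float(c[i])) < 1e-9 for c in curves))  # True
```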
# LAB 3c: BigQuery ML Model Deep Neural Network.

**Learning Objectives**

1. Create and evaluate DNN model with BigQuery ML
1. Create and evaluate DNN model with feature engineering with ML.TRANSFORM.
1. Calculate predictions with BigQuery's ML.PREDICT

## Introduction

In this notebook, we will create multiple deep neural network models to predict the weight of a baby before it is born, using first no feature engineering and then the feature engineering from the previous lab, using BigQuery ML.

We will create and evaluate a DNN model using BigQuery ML, with and without feature engineering using BigQuery's ML.TRANSFORM, and calculate predictions with BigQuery's ML.PREDICT. If you need a refresher, you can go back and look at how we made a baseline model in the notebook [BQML Baseline Model](../solutions/3a_bqml_baseline_babyweight.ipynb) or how we combined linear models with feature engineering in the notebook [BQML Linear Models with Feature Engineering](../solutions/3b_bqml_linear_transform_babyweight.ipynb).

## Verify tables exist

Run the following cells to verify that we have previously created the dataset and data tables. If not, go back to lab [1b_prepare_data_babyweight](../solutions/1b_prepare_data_babyweight.ipynb) to create them.

```
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0

%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
```

## Model 4: Increase complexity of model using DNN_REGRESSOR

DNN_REGRESSOR is a new regression model_type vs. the LINEAR_REG that we have been using in previous labs.

* MODEL_TYPE="DNN_REGRESSOR"
* hidden_units: List of hidden units per layer; all layers are fully connected. The number of elements in the array will be the number of hidden layers.
  The default value for hidden_units is [Min(128, N / (𝜶(Ni+No)))] (1 hidden layer), with N the training data size and Ni, No the number of input and output units, respectively; 𝜶 is a constant with value 10. The upper bound of the rule makes sure the model won't overfit. Note that we currently have a model size limitation of 256 MB.
* dropout: Probability to drop a given coordinate during training; dropout is a very common technique to avoid overfitting in DNNs. The default value is zero, which means we will not drop out any coordinate during training.
* batch_size: Number of samples that will be served to train the network for each sub-iteration. The default value is Min(1024, num_examples), to balance training speed and convergence. Serving all training data in each sub-iteration may lead to convergence issues, and is not advised.

### Create DNN_REGRESSOR model

Let's train a DNN regressor model in BQ using `MODEL_TYPE=DNN_REGRESSOR` with 2 hidden layers of 64 and 32 neurons respectively (`HIDDEN_UNITS=[64, 32]`) and a batch size of 32 (`BATCH_SIZE=32`):

```
%%bigquery
CREATE OR REPLACE MODEL
    babyweight.model_4
OPTIONS (
    MODEL_TYPE="DNN_REGRESSOR",
    HIDDEN_UNITS=[64, 32],
    BATCH_SIZE=32,
    INPUT_LABEL_COLS=["weight_pounds"],
    DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
    weight_pounds,
    is_male,
    mother_age,
    plurality,
    gestation_weeks
FROM
    babyweight.babyweight_data_train
```

### Get training information and evaluate

Let's first look at our training statistics.

```
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_4)
```

Now let's evaluate our trained model on our eval dataset.

```
%%bigquery
SELECT
    *
FROM
    ML.EVALUATE(MODEL babyweight.model_4,
    (
    SELECT
        weight_pounds,
        is_male,
        mother_age,
        plurality,
        gestation_weeks
    FROM
        babyweight.babyweight_data_eval
    ))
```

Let's use our evaluation's `mean_squared_error` to calculate our model's RMSE.
```
%%bigquery
SELECT
    SQRT(mean_squared_error) AS rmse
FROM
    ML.EVALUATE(MODEL babyweight.model_4,
    (
    SELECT
        weight_pounds,
        is_male,
        mother_age,
        plurality,
        gestation_weeks
    FROM
        babyweight.babyweight_data_eval
    ))
```

## Final Model: Apply the TRANSFORM clause

Before we perform our prediction, we should encapsulate the entire feature set in a TRANSFORM clause, as we did in the last notebook. This way we can have the same transformations applied for training and prediction without modifying the queries.

Let's apply the TRANSFORM clause to the final model and run the query.

```
%%bigquery
CREATE OR REPLACE MODEL
    babyweight.final_model
TRANSFORM(
    weight_pounds,
    is_male,
    mother_age,
    plurality,
    gestation_weeks,
    ML.FEATURE_CROSS(
        STRUCT(
            is_male,
            ML.BUCKETIZE(
                mother_age,
                GENERATE_ARRAY(15, 45, 1)
            ) AS bucketed_mothers_age,
            plurality,
            ML.BUCKETIZE(
                gestation_weeks,
                GENERATE_ARRAY(17, 47, 1)
            ) AS bucketed_gestation_weeks)
    ) AS crossed)
OPTIONS (
    MODEL_TYPE="DNN_REGRESSOR",
    HIDDEN_UNITS=[64, 32],
    BATCH_SIZE=32,
    INPUT_LABEL_COLS=["weight_pounds"],
    DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
    *
FROM
    babyweight.babyweight_data_train
```

Let's first look at our training statistics.

```
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.final_model)
```

Now let's evaluate our trained model on our eval dataset.

```
%%bigquery
SELECT
    *
FROM
    ML.EVALUATE(MODEL babyweight.final_model,
    (
    SELECT
        *
    FROM
        babyweight.babyweight_data_eval
    ))
```

Let's use our evaluation's `mean_squared_error` to calculate our model's RMSE.

```
%%bigquery
SELECT
    SQRT(mean_squared_error) AS rmse
FROM
    ML.EVALUATE(MODEL babyweight.final_model,
    (
    SELECT
        *
    FROM
        babyweight.babyweight_data_eval
    ))
```

## Predict with final model

Now that you have evaluated your model, the next step is to use it to predict the weight of a baby before it is born, using BigQuery's `ML.PREDICT` function.
### Predict from final model using an example from the original dataset

```
%%bigquery
SELECT
    *
FROM
    ML.PREDICT(MODEL babyweight.final_model,
    (
    SELECT
        "true" AS is_male,
        32 AS mother_age,
        "Twins(2)" AS plurality,
        30 AS gestation_weeks
    ))
```

### Modify the above prediction query using an example from the simulated dataset

Use the feature values you made up above, but set is_male to "Unknown" and plurality to "Multiple(2+)". This simulates us not knowing the gender or the exact plurality.

```
%%bigquery
SELECT
    *
FROM
    ML.PREDICT(MODEL babyweight.final_model,
    (
    SELECT
        "Unknown" AS is_male,
        32 AS mother_age,
        "Multiple(2+)" AS plurality,
        30 AS gestation_weeks
    ))
```

## Lab Summary:

In this lab, we created and evaluated a DNN model using BigQuery ML, with and without feature engineering using BigQuery's ML.TRANSFORM, and calculated predictions with BigQuery's ML.PREDICT.

Copyright 2020 Google LLC

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# Athletes - Web Scraping

>- A premium login is necessary to extract segment information; a free account doesn't have full access to all the data.
>- For this script you can use either a list of logins, or set up a single login through the **Function** `change_user()`, with `number_logins=1` and any login of your choice
>- For a web scraping process it is important to handle the exception raised when the internet connection is lost. For this, the **Function** `connect()` was created, which tests the connection before each action; there is also a while loop around each action to guarantee that all the information is scraped

The last update was on February 18, 2021.

<h1>Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Setup" data-toc-modified-id="Setup-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Setup</a></span><ul class="toc-item"><li><span><a href="#Environment" data-toc-modified-id="Environment-1.1"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Environment</a></span></li></ul></li><li><span><a href="#Steps-functions" data-toc-modified-id="Steps-functions-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Steps functions</a></span></li><li><span><a href="#Informations-to-scrap" data-toc-modified-id="Informations-to-scrap-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Informations to scrap</a></span></li><li><span><a href="#Create-session,-open-page-and-change-language-to-English" data-toc-modified-id="Create-session,-open-page-and-change-language-to-English-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Create session, open page and change language to English</a></span></li><li><span><a href="#Scraping" data-toc-modified-id="Scraping-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>Scraping</a></span></li></ul></div>

## Setup

```
import time
import datetime
from selenium import webdriver
from selenium.webdriver.support.ui import Select
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np
import pyversions
# https://pypi.org/project/pyversions/
import sys, os
import urllib.request
import re
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException
from selenium.common.exceptions import NoSuchElementException

pyversions.versions();
```

### Environment

```
# parameters
export = "NEW YORK.csv"

# A group of logins is used to access the Strava platform and scrape the information.
# id_login is the first index used to rotate logins.
path2 = r'../Datasets'
global logins
logins = pd.read_excel(os.path.join(path2, 'logins.xlsx'))

id_login = 0       # choose the first index in the logins list to rotate the accesses
number_logins = 1  # choose the number of logins to use
count_logins = 1
aux_id = id_login  # variable used to compare the current position of the index with the initial one

club = 1           # 1 - without attending Strava clubs, 2 - attending Strava clubs

segments = 'https://www.strava.com/segments/8386468'
last_page = 850    # a huge number to paginate and support the scraping process

logins.columns
```

## Steps functions

It's important to evaluate the page behavior through the DOM tree. Most of the script locates elements by their own XPath. For example, to select an age_group, the script finds the element

```sh
element=driver.find_element_by_xpath('//*[@id="premium-enhanced"]/ul/ul[1]/li['+str(ag)+']/a')
```

and selects each one with the **ag variable**. Below is the list of functions with their descriptions:

- **open_section** - creates a new instance of the Selenium webdriver and loads the login page
- **login** - fills in the form and clicks
- **user_access** - kills the session and starts a new one with other logins; works together with the **change_user** function
- **change_user** - this function changes the logged-in user if you have a list of logins.
  It's important to avoid the 429 error (too many requests)
- **select_english_language** - to scrape all the information in English, this function changes the language view to English
- **go_link** - used to load the segments page
- **select_gender** - for each gender selection, the script fills in M for male and F for female
- **select_age_group** - the script selects each age group through its XPath
- **next_page** - the page is selected in the footer; the **li[ ]** term needs 2 added to it to get the next page. For example

```sh
...ul/li['+str(page+2)+']/a')
```

  if you are on page 2, page 3 will be loaded through

```
...ul/li[4]/a')
```

  There is an exception for this part: ```NoSuchElementException``` when the element isn't found, in this case when the page is the last one in the table. In the HTML code there is a specific area for the page selection, and each page number has an XPath that repeats after page 5. For this reason there is a condition that repeats page 6 for values greater than 5.

  > a consequence of this: to go to page 8, you need to pass through each previous page

```sh
if page>=6:
    page=6
element=driver.find_element_by_xpath('//*[@id="results"]/nav/ul/li['+str(page+2)+']/a')
```

- **page_loaded** - checks if the page was loaded; if not, the process restarts from the current age_group selection. During this reloading process, the script doesn't store data until it reaches the page where it stopped before.
- **scraping** - activity information is stored in this part. Personal information such as **gender** and **age group** is obtained through the for loops and selections.
- **create_dataset** - organizes the information and saves a **.csv** file - **connect** - checks the Internet connection; a while loop waits until the connection is back ``` def open_section(): while True: try: connect() driver = webdriver.Chrome() driver.get('https://www.strava.com/login') time.sleep(5) return driver except: continue def login(driver,email,key): while True: try: connect() username = driver.find_element_by_id("email") password = driver.find_element_by_id("password") username.send_keys(email) password.send_keys(key) driver.find_element_by_id("login-button").click() time.sleep(2) break except: driver.get('https://www.strava.com/login') time.sleep(5) continue def user_access(driver,ge,ag,id_login): global count_logins global aux_id global segments if count_logins%3==0 and number_logins>1: #check if there is more than 1 login and whether the request count reached the limit to rotate to another connect() driver.close() driver.quit() aux_id = aux_id + 1 if (aux_id - id_login)>=number_logins: aux_id = id_login driver = open_section() email, key = change_user(aux_id,logins) login(driver,email, key) select_english_language(driver,aux_id) go_link(driver,url=segments) select_gender(driver,ge) select_age_group(driver,ag,ge) count_logins=count_logins+1 return driver def change_user(id_login,logins): email = logins['email'][id_login] key=logins['key'][id_login] return email, key def select_english_language(driver,id_login): while True: try: connect() element=driver.find_element_by_xpath('//*[@id="language-picker"]/ul/li[3]/div') # select English language driver.execute_script("arguments[0].click();", element) #click break except: email, key = change_user(id_login,logins) login(driver,email,key) continue def go_link(driver,url): while True: try: connect() driver.get(url) time.sleep(2) break except: continue def select_gender(driver,gender): while True: try: connect()
element=driver.find_element_by_xpath('//*[@id="segment-results"]/div[2]/table/tbody/tr/td[4]/div/ul/li['+str(gender)+']/a') driver.execute_script("arguments[0].click();", element) time.sleep(1) break except: driver.refresh() continue def select_age_group(driver,ag,ge): global club while True: try: connect() #if the user attends to Strava clubs, the code below should be changed to "ul[2]" element=driver.find_element_by_xpath('//*[@id="premium-enhanced"]/ul/ul['+str(club)+']/li['+str(ag)+']/a') driver.execute_script("arguments[0].click();", element) time.sleep(5) loading_page = driver.find_element_by_xpath('//*[@id="segment-results"]/div[2]/h4') loading_page_html=loading_page.get_attribute('innerHTML') loading_page_soup=BeautifulSoup(loading_page_html, "html.parser") if re.search(age_group[str(ag)],str(loading_page_soup),re.IGNORECASE): break except: select_gender(driver,ge) continue def next_page(driver,page,ge,ag): #driver.find_element_by_link_text("→").click() while True: try: connect() if page>=6: page=6 element=driver.find_element_by_xpath('//*[@id="results"]/nav/ul/li['+str(page+2)+']/a') element_html=element.get_attribute('innerHTML') element_soup=BeautifulSoup(element_html, "html.parser") driver.execute_script("arguments[0].click();", element) time.sleep(3) end = False break except NoSuchElementException: if connect() == True: print('End page') end = True break return end def page_loaded(driver,page,ge,ag,end): print('Page loaded?') if end == False: aux_page = page while True: connect() if page>=6: page=5 try: connect() loading_page = driver.find_element_by_xpath('//*[@id="results"]/nav/ul/li['+str(page+2)+']/span') loading_page_html=loading_page.get_attribute('innerHTML') loading_page_soup=BeautifulSoup(loading_page_html, "html.parser") if str(aux_page+1) == str(loading_page_soup): break else: select_gender(driver,ge) select_age_group(driver,ag,ge) reload_page = 1 #next_page(driver,aux_page,ge,ag) aux_reload_page = reload_page while (aux_reload_page <= 
aux_page ): try: connect() if reload_page>=6: reload_page=6 element=driver.find_element_by_xpath('//*[@id="results"]/nav/ul/li['+str(reload_page+2)+']/a') driver.execute_script("arguments[0].click();", element) time.sleep(3) reload_page = reload_page + 1 aux_reload_page = aux_reload_page + 1 except: continue except: try: connect() loading_page = driver.find_element_by_xpath('//*[@id="results"]/nav/ul/li['+str(page+3)+']/span') loading_page_html=loading_page.get_attribute('innerHTML') loading_page_soup=BeautifulSoup(loading_page_html, "html.parser") if str(aux_page+1) == str(loading_page_soup): break else: select_gender(driver,ge) select_age_group(driver,ag,ge) reload_page = 1 aux_reload_page = reload_page while (aux_reload_page <= aux_page ): try: connect() if reload_page>=6: reload_page=6 element=driver.find_element_by_xpath('//*[@id="results"]/nav/ul/li['+str(reload_page+2)+']/a') driver.execute_script("arguments[0].click();", element) time.sleep(3) reload_page = reload_page + 1 aux_reload_page = aux_reload_page + 1 except: continue except: select_gender(driver,ge) select_age_group(driver,ag,ge) reload_page = 1 aux_reload_page = reload_page while (aux_reload_page <= aux_page ): try: connect() if reload_page>=6: reload_page=6 element=driver.find_element_by_xpath('//*[@id="results"]/nav/ul/li['+str(reload_page+2)+']/a') driver.execute_script("arguments[0].click();", element) time.sleep(3) reload_page = reload_page + 1 aux_reload_page = aux_reload_page + 1 except: continue continue CRED = '\033[92m' CEND = '\033[0m' print(CRED+'OKAY !!!!'+CEND) def scraping(driver,ge,ag): global results global result_age_group global result_gender global url_activity global url_athlete while True: try: connect() information = driver.find_element_by_id("segment-leaderboard") html=information.get_attribute('innerHTML') soup = BeautifulSoup(html, "html.parser") table = soup.find("table", attrs={"class": "table table-striped table-padded table-leaderboard"}) rows = table.findAll("tr") 
break except: select_gender(driver,ge) select_age_group(driver,ag,ge) continue print('Scraping...') for row in rows: try: a = [t.text.strip() for t in row.findAll("td")][0:] b = [c['href'] for c in row.find_all('a', href=True) if c.text] #hyperlinks if len(a) == 6: results.append(a) result_age_group.append(age_group[str(ag)]) result_gender.append(gender[str(ge)]) url_activity.append('https://strava.com' + b[1]) #b[0] athlete url | b[1] event url url_athlete.append('https://strava.com' + b[0]) except: continue def create_dataset(): df_result_age_group=pd.DataFrame(result_age_group,columns=['age_group']) df_result_gender=pd.DataFrame(result_gender,columns=['gender']) df_url_activity=pd.DataFrame(url_activity,columns=['url_activity']) df_url_athlete = pd.DataFrame(url_athlete,columns=['url_athlete']) columns = ['classification','name', 'date','pace', 'heart_rate','time'] df=pd.DataFrame(results,columns=columns) df=pd.concat([df,df_result_age_group,df_result_gender,df_url_activity,df_url_athlete],axis=1) df['major'] = df['date'].apply(lambda x: export[:-4] +' '+ str(datetime.datetime.strptime(x, '%b %d, %Y').year)) df.to_csv(export,index=False,sep=';') print('Dataset created') def connect(): CRED = '\033[91m' CEND = '\033[0m' while True: try: urllib.request.urlopen('https://outlook.live.com/owa/') #Python 3.x return True except: time.sleep(5) print(CRED+'Lost connection'+CEND) continue ``` ## Information to scrape ``` results = [] result_age_group = [] result_gender=[] url_activity = [] url_athlete = [] age_group={'2':'19 and under','3':'20 to 24','4':'25 to 34','5':'35 to 44','6':'45 to 54','7':'55 to 64','8':'65 to 69', '9':'70 to 74','10':'75+'} gender={'2':'M','3':'F'} ``` ## Create session, open page and change language to English ``` driver = open_section() email, key = change_user(id_login,logins) login(driver,email, key) select_english_language(driver,id_login) go_link(driver,url=segments) ``` ## Scraping ``` for ge in range(2,4): #'2':'M','3':'F' for ag in
range(2,11): # '2':'19 and under','3':'20 to 24','4':'25 to 34','5':'35 to 44','6':'45 to 54','7':'55 to 64','8':'65 to 69', #'9':'70 to 74','10':'75+' select_gender(driver,ge) select_age_group(driver,ag,ge) for page in range(1,last_page): CRED = '\033[1m' CEND = '\033[0m' print(CRED + gender[str(ge)] + " - " + age_group[str(ag)] + " - page - " + str(page) + CEND) scraping(driver,ge,ag) driver = user_access(driver,ge,ag,id_login) end = next_page(driver,page,ge,ag) page_loaded(driver,page,ge,ag,end) if end == True: break create_dataset() ```
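The footer-index arithmetic used by **next_page** above can be isolated into a pure helper, which makes the clamping rule easy to test. A sketch; `footer_xpath` is a hypothetical name, and the clamp value assumes the DOM layout described in this notebook:

```python
def footer_xpath(page):
    """xpath of the footer link that loads the page after `page`.

    The li index is the page number plus 2, and because the footer
    xpaths repeat after page 5, any larger page reuses index 6 + 2 = 8.
    """
    li = min(page, 6) + 2
    return '//*[@id="results"]/nav/ul/li[' + str(li) + ']/a'
```

With this helper, `driver.find_element_by_xpath(footer_xpath(page))` could replace the inline clamping repeated in `next_page` and `page_loaded`.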
``` import pyxtf import numpy as np import matplotlib.pyplot as plt %matplotlib inline ``` ### Read the test file ``` # Read the test file # Note that at this point, the header_7125 and header_7125_snippet is not implemented, which is why a warning is shown # The bathymetry and sonar headers are implemented, however - which can be read while ignoring the unimplemented packets test_file = r'/media/data/Reson7125.XTF' (fh, p) = pyxtf.xtf_read(test_file) ``` ### Print XTFFileHeader ``` # This prints all the ctypes fields present print(fh) ``` ### Print XTFChanInfo belonging to the first channel ``` # The ChanInfo field is an array of XTFChanInfo objects # Note that the ChanInfo field always has a size of 6, even if the number of channels is less. # Use the fh.NumXChannels fields to calculate the number (the function xtf_channel_count does this) n_channels = fh.channel_count(verbose=True) actual_chan_info = [fh.ChanInfo[i] for i in range(0, n_channels)] print('Number of data channels: {}\n'.format(n_channels)) # Print the first channel print(actual_chan_info[0]) ``` ### Inspect the type of packets returned in the packets dictionary ``` # Print the keys in the packets-dictionary print([key for key in p]) # The returned packets is a Dict[XTFHeaderType, List[XTFClass]] # The values in the dict are lists of pings, of the class in question sonar_ch = p[pyxtf.XTFHeaderType.sonar] # type: List[pyxtf.XTFPingHeader] # Each element in the list is a ping (XTFPingHeader) # This retrieves the first ping in the file of the sonar type sonar_ch_ping1 = sonar_ch[0] # The properties in the header defines the attributes common for all subchannels # (e.g sonar often has port/stbd subchannels) print(sonar_ch_ping1) ``` ### Inspect the subchannels in the first sonar package (in this case there are two) ``` # The data and header for each subchannel is contained in the data and ping_chan_headers respectively. 
# The data is a list of numpy arrays (one for each subchannel) sonar_subchan0 = sonar_ch_ping1.data[0] # type: np.ndarray sonar_subchan1 = sonar_ch_ping1.data[1] # type: np.ndarray print(sonar_subchan0.shape) print(sonar_subchan1.shape) ``` ### Plot a signal-view of both subchannels of the first ping ``` fig, (ax1, ax2) = plt.subplots(2,1, figsize=(12,8)) ax1.semilogy(np.arange(0, sonar_subchan0.shape[0]), sonar_subchan0) ax2.semilogy(np.arange(0, sonar_subchan1.shape[0]), sonar_subchan1) # Each subchannel has an XTFPingChanHeader, # which contains information that can change from ping to ping in each of the subchannels sonar_ping1_ch_header0 = sonar_ch_ping1.ping_chan_headers[0] print(sonar_ping1_ch_header0) ``` ### Concatenate the sonar data to produce a dense array/image ``` # The function concatenate_channel concatenates all the individual pings for a channel, and returns them as a dense numpy array np_chan1 = pyxtf.concatenate_channel(p[pyxtf.XTFHeaderType.sonar], file_header=fh, channel=0, weighted=True) np_chan2 = pyxtf.concatenate_channel(p[pyxtf.XTFHeaderType.sonar], file_header=fh, channel=1, weighted=True) # Clip to range (max cannot be used due to outliers) # More robust methods are possible (through histograms / statistical outlier removal) upper_limit = 2 ** 14 np_chan1.clip(0, upper_limit-1, out=np_chan1) np_chan2.clip(0, upper_limit-1, out=np_chan2) # The sonar data is logarithmic (dB), add small value to avoid log10(0) np_chan1 = np.log10(np_chan1 + 0.0001) np_chan2 = np.log10(np_chan2 + 0.0001) # Transpose so that the largest axis is horizontal np_chan1 = np_chan1 if np_chan1.shape[0] < np_chan1.shape[1] else np_chan1.T np_chan2 = np_chan2 if np_chan2.shape[0] < np_chan2.shape[1] else np_chan2.T # The following plots the waterfall-view in separate subplots fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 12)) ax1.imshow(np_chan1, cmap='gray', vmin=0, vmax=np.log10(upper_limit)) ax2.imshow(np_chan2, cmap='gray', vmin=0, vmax=np.log10(upper_limit))
```
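The clip-then-log normalization applied to the channels above can be exercised on synthetic data, without an XTF file. A minimal numpy sketch — `normalize_sonar` is a hypothetical helper and the array values are made up:

```python
import numpy as np

def normalize_sonar(chan, upper_limit=2 ** 14):
    """Clip outliers, move to log scale, and orient the long axis horizontally."""
    chan = chan.astype(np.float64).clip(0, upper_limit - 1)
    chan = np.log10(chan + 0.0001)  # small offset avoids log10(0)
    return chan if chan.shape[0] < chan.shape[1] else chan.T

# A made-up 3x2 channel with one large outlier
out = normalize_sonar(np.array([[0.0, 1e6], [10.0, 100.0], [1.0, 2.0]]))
```

The outlier is clipped to `upper_limit - 1` before the log, so every value stays below `np.log10(upper_limit)`, matching the `vmax` used in the waterfall plots.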
## Tensorflow 2.0 RC Serving with Keras Load a pretrained Keras MobileNet model and use TensorFlow SavedModel to export it with proper signatures that can be loaded by a standard tensorflow_model_server. <br> https://www.tensorflow.org/beta/guide/saved_model#exporting_custom_models ``` from matplotlib import pyplot as plt import tensorflow as tf import numpy as np file = tf.keras.utils.get_file( "grace_hopper.jpg", "https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg") img = tf.keras.preprocessing.image.load_img(file, target_size=[224, 224]) plt.imshow(img) plt.axis('off') x = tf.keras.preprocessing.image.img_to_array(img) x = tf.keras.applications.mobilenet.preprocess_input(x[tf.newaxis,...]) print(x.shape) #tf.keras.applications.vgg19.decode_predictions labels_path = tf.keras.utils.get_file('ImageNetLabels.txt','https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt') imagenet_labels = np.array(open(labels_path).read().splitlines()) print(labels_path) print(imagenet_labels, imagenet_labels.shape) pretrained_model = tf.keras.applications.MobileNet() result_before_save = pretrained_model(x) print(result_before_save.shape) decoded = imagenet_labels[np.argsort(result_before_save)[0,::-1][:5]+1] print("Result before saving:\n", decoded) tf.saved_model.save(pretrained_model, "/tmp/mobilenet/1/") !saved_model_cli show --dir /tmp/mobilenet/1 --tag_set serve --signature_def serving_default loaded = tf.saved_model.load("/tmp/mobilenet/1/") print(list(loaded.signatures.keys())) # ["serving_default"] infer = loaded.signatures["serving_default"] print(infer.structured_outputs) labeling = infer(tf.constant(x))[pretrained_model.output_names[0]] decoded =
imagenet_labels[np.argsort(labeling)[0,::-1][:5]+1] print("Result after saving and loading:\n", decoded, decoded.shape) !ls /tmp/mobilenet/1 !saved_model_cli show --dir /tmp/mobilenet/1 --tag_set serve ``` Exporting custom models <br> In the first section, tf.saved_model.save automatically determined a signature for the tf.keras.Model object. This worked because Keras Model objects have an unambiguous method to export and known input shapes. tf.saved_model.save works just as well with low-level model building APIs, but you will need to indicate which function to use as a signature if you're planning to serve a model. ``` class CustomModule(tf.Module): def __init__(self): super(CustomModule, self).__init__() self.v = tf.Variable(1.) @tf.function def __call__(self, x): return x * self.v @tf.function(input_signature=[tf.TensorSpec([], tf.float32)]) def mutate(self, new_v): self.v.assign(new_v) module = CustomModule() module(tf.constant(0.)) tf.saved_model.save(module, "/tmp/module_no_signatures") imported = tf.saved_model.load("/tmp/module_no_signatures") assert 3. == imported(tf.constant(3.)).numpy() imported.mutate(tf.constant(2.)) assert 6. == imported(tf.constant(3.)).numpy() # will not work! #imported(tf.constant([3.])) ``` ValueError: Could not find matching function to call loaded from the SavedModel. Got: Positional arguments (1 total): * Tensor("x:0", shape=(1,), dtype=float32) Keyword arguments: {} Expected these arguments to match one of the following 1 option(s): Option 1: Positional arguments (1 total): * TensorSpec(shape=(), dtype=tf.float32, name='x') Keyword arguments: {} ``` module.__call__.get_concrete_function(x=tf.TensorSpec([None], tf.float32)) tf.saved_model.save(module, "/tmp/module_no_signatures") imported = tf.saved_model.load("/tmp/module_no_signatures") assert [3.] 
== imported(tf.constant([3.])).numpy() !saved_model_cli show --dir /tmp/module_no_signatures --tag_set serve call = module.__call__.get_concrete_function(tf.TensorSpec(None, tf.float32)) tf.saved_model.save(module, "/tmp/module_with_signature", signatures=call) !saved_model_cli show --dir /tmp/module_with_signature --tag_set serve --signature_def serving_default imported = tf.saved_model.load("/tmp/module_with_signature") signature = imported.signatures["serving_default"] assert [3.] == signature(x=tf.constant([3.]))["output_0"].numpy() imported.mutate(tf.constant(2.)) assert [6.] == signature(x=tf.constant([3.]))["output_0"].numpy() assert 2. == imported.v.numpy() @tf.function(input_signature=[tf.TensorSpec([], tf.string)]) def parse_string(string_input): return imported(tf.strings.to_number(string_input)) signatures = {"serving_default": parse_string, "from_float": imported.signatures["serving_default"]} tf.saved_model.save(imported, "/tmp/module_with_multiple_signatures", signatures) !saved_model_cli show --dir /tmp/module_with_multiple_signatures --tag_set serve !saved_model_cli run --dir /tmp/module_with_multiple_signatures --tag_set serve --signature_def serving_default --input_exprs="string_input='3.'" !saved_model_cli run --dir /tmp/module_with_multiple_signatures --tag_set serve --signature_def from_float --input_exprs="x=3." optimizer = tf.optimizers.SGD(0.05) def train_step(): with tf.GradientTape() as tape: loss = (10. 
- imported(tf.constant(2.))) ** 2 variables = tape.watched_variables() grads = tape.gradient(loss, variables) optimizer.apply_gradients(zip(grads, variables)) return loss for _ in range(10): # "v" approaches 5, "loss" approaches 0 print("loss={:.2f} v={:.2f}".format(train_step(), imported.v.numpy())) @tf.function(input_signature=[tf.TensorSpec([], tf.int32)]) def control_flow(x): if x < 0: tf.print("Invalid!") else: tf.print(x % 3) to_export = tf.Module() to_export.control_flow = control_flow tf.saved_model.save(to_export, "/tmp/control_flow") imported = tf.saved_model.load("/tmp/control_flow") imported.control_flow(tf.constant(-1)) # Invalid! imported.control_flow(tf.constant(2)) # 2 imported.control_flow(tf.constant(3)) # 0 input_column = tf.feature_column.numeric_column("x") estimator = tf.estimator.LinearClassifier(feature_columns=[input_column]) def input_fn(): return tf.data.Dataset.from_tensor_slices( ({"x": [1., 2., 3., 4.]}, [1, 1, 0, 0])).repeat(200).shuffle(64).batch(16) estimator.train(input_fn) serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn( tf.feature_column.make_parse_example_spec([input_column])) export_path = estimator.export_saved_model( "/tmp/from_estimator/", serving_input_fn) imported = tf.saved_model.load(export_path) def predict(x): example = tf.train.Example() example.features.feature["x"].float_list.value.extend([x]) return imported.signatures["predict"](examples=tf.constant([example.SerializeToString()])) print(predict(1.5)) print(predict(3.5)) ```
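The top-5 decoding expression used twice above, `imagenet_labels[np.argsort(...)[0, ::-1][:5] + 1]`, can be checked without TensorFlow. A numpy sketch with made-up labels and scores — the `+ 1` offset assumes the label file starts with a leading 'background' entry, as the ImageNet label list does:

```python
import numpy as np

# Made-up label list with a leading 'background' entry, and one row of class scores
labels = np.array(['background', 'cat', 'dog', 'bird', 'fish', 'frog', 'newt'])
scores = np.array([[0.02, 0.40, 0.30, 0.12, 0.08, 0.05]])

# argsort ascending, reverse to descending, keep the top 5, shift by 1 to skip 'background'
top5 = labels[np.argsort(scores)[0, ::-1][:5] + 1]
```

Here the highest score (0.40, column 1) maps to label index 2, 'dog', which is exactly why the offset matters.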
## RIHAD VARIAWA, Data Scientist - Who has fun LEARNING, EXPLORING & GROWING <h1><center>Simple Linear Regression</center></h1> <h4>About this Notebook</h4> In this notebook, we learn how to use scikit-learn to implement simple linear regression. We download a dataset that is related to fuel consumption and carbon dioxide emission of cars. Then, we split our data into training and test sets, create a model using the training set, evaluate the model using the test set, and finally use the model to predict an unknown value. <h1>Table of contents</h1> <div class="alert alert-block alert-info" style="margin-top: 20px"> <ol> <li><a href="#understanding_data">Understanding the Data</a></li> <li><a href="#reading_data">Reading the data in</a></li> <li><a href="#data_exploration">Data Exploration</a></li> <li><a href="#simple_regression">Simple Regression Model</a></li> </ol> </div> <br> <hr> ### Importing Needed packages ``` import matplotlib.pyplot as plt import pandas as pd import pylab as pl import numpy as np %matplotlib inline ``` ### Downloading Data To download the data, we will use !wget to download it from IBM Object Storage. ``` #!wget -O _datasets/FuelConsumption.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/FuelConsumptionCo2.csv ``` __Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC) <h2 id="understanding_data">Understanding the Data</h2> ### `FuelConsumption.csv`: We have downloaded a fuel consumption dataset, **`FuelConsumption.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada.
[Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64) - **MODELYEAR** e.g. 2014 - **MAKE** e.g. Acura - **MODEL** e.g. ILX - **VEHICLE CLASS** e.g. SUV - **ENGINE SIZE** e.g. 4.7 - **CYLINDERS** e.g. 6 - **TRANSMISSION** e.g. A6 - **FUEL CONSUMPTION in CITY(L/100 km)** e.g. 9.9 - **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9 - **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2 - **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0 <h2 id="reading_data">Reading the data in</h2> ``` df = pd.read_csv("_datasets/FuelConsumption.csv") # take a look at the dataset df.head() ``` <h2 id="data_exploration">Data Exploration</h2> Let's first have a descriptive exploration of our data. ``` # summarize the data df.describe() ``` Let's select some features to explore further. ``` cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']] cdf.head(9) ``` We can plot each of these features: ``` viz = cdf[['CYLINDERS','ENGINESIZE','CO2EMISSIONS','FUELCONSUMPTION_COMB']] viz.hist() plt.show() ``` Now, let's plot each of these features against the Emission, to see how linear their relation is: ``` plt.scatter(cdf.FUELCONSUMPTION_COMB, cdf.CO2EMISSIONS, color='blue') plt.xlabel("FUELCONSUMPTION_COMB") plt.ylabel("Emission") plt.show() plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue') plt.xlabel("Engine size") plt.ylabel("Emission") plt.show() ``` ## Practice Plot __CYLINDERS__ vs the Emission, to see how linear their relation is: ``` # write your code here plt.scatter(cdf.CYLINDERS, cdf.CO2EMISSIONS, color='blue') plt.xlabel('Cylinders') plt.ylabel('Emission') plt.show() ``` Double-click __here__ for the solution. <!-- Your answer is below: plt.scatter(cdf.CYLINDERS, cdf.CO2EMISSIONS, color='blue') plt.xlabel("Cylinders") plt.ylabel("Emission") plt.show() --> #### Creating train and test dataset Train/Test Split involves splitting the dataset into training and testing sets respectively, which are mutually exclusive.
After which, you train with the training set and test with the testing set. This will provide a more accurate evaluation of out-of-sample accuracy because the testing dataset is not part of the dataset that has been used to train the model. It is more realistic for real world problems. This means that we know the outcome of each data point in this dataset, making it great to test with! And since this data has not been used to train the model, the model has no knowledge of the outcome of these data points. So, in essence, it is truly out-of-sample testing. Let's split our dataset into train and test sets: 80% of the entire data for training, and 20% for testing. We create a mask to select random rows using the __np.random.rand()__ function: ``` msk = np.random.rand(len(df)) < 0.8 train = cdf[msk] test = cdf[~msk] ``` <h2 id="simple_regression">Simple Regression Model</h2> Linear Regression fits a linear model with coefficients $\theta = (\theta_1, ..., \theta_n)$ to minimize the 'residual sum of squares' between the independent x in the dataset and the dependent y predicted by the linear approximation. #### Train data distribution ``` plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue') plt.xlabel("Engine size") plt.ylabel("Emission") plt.show() ``` #### Modeling Using the sklearn package to model data. ``` from sklearn import linear_model regr = linear_model.LinearRegression() train_x = np.asanyarray(train[['ENGINESIZE']]) train_y = np.asanyarray(train[['CO2EMISSIONS']]) regr.fit(train_x, train_y) # The coefficients print('Coefficients: ', regr.coef_) print('Intercept: ', regr.intercept_) ``` As mentioned before, __Coefficient__ and __Intercept__ in the simple linear regression are the parameters of the fit line. Given that it is a simple linear regression, with only 2 parameters, and knowing that the parameters are the intercept and slope of the line, sklearn can estimate them directly from our data.
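For the one-feature case, the least-squares parameters have a closed form, which is effectively what sklearn computes here. A sketch with made-up points on a known line, just to illustrate the formula:

```python
import numpy as np

def fit_line(x, y):
    """Closed-form least squares for y ≈ slope * x + intercept."""
    x_mean, y_mean = x.mean(), y.mean()
    slope = ((x - x_mean) * (y - y_mean)).sum() / ((x - x_mean) ** 2).sum()
    intercept = y_mean - slope * x_mean
    return slope, intercept

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0  # noiseless points on y = 2x + 1
slope, intercept = fit_line(x, y)
```

With noiseless data the recovered slope and intercept match the generating line exactly, up to floating-point error.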
Notice that all of the data must be available to traverse and calculate the parameters. #### Plot outputs We can plot the fit line over the data: ``` plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue') plt.plot(train_x, regr.coef_[0][0]*train_x + regr.intercept_[0], '-r') plt.xlabel("Engine size") plt.ylabel("Emission") ``` #### Evaluation We compare the actual values and predicted values to calculate the accuracy of a regression model. Evaluation metrics play a key role in the development of a model, as they provide insight into areas that require improvement. There are different model evaluation metrics; let's use MSE here to calculate the accuracy of our model based on the test set: <ul> <li> Mean absolute error: It is the mean of the absolute value of the errors. This is the easiest of the metrics to understand since it’s just the average error.</li> <li> Mean Squared Error (MSE): Mean Squared Error (MSE) is the mean of the squared error. It’s more popular than Mean absolute error because the focus is geared more towards large errors. This is due to the squared term amplifying larger errors much more than smaller ones.</li> <li> Root Mean Squared Error (RMSE): This is the square root of the Mean Squared Error. </li> <li> R-squared is not an error metric, but is a popular metric for accuracy of your model. It represents how close the data are to the fitted regression line. The higher the R-squared, the better the model fits your data.
The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse).</li> </ul> ``` from sklearn.metrics import r2_score test_x = np.asanyarray(test[['ENGINESIZE']]) test_y = np.asanyarray(test[['CO2EMISSIONS']]) test_y_hat = regr.predict(test_x) print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_hat - test_y))) print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_hat - test_y) ** 2)) print("R2-score: %.2f" % r2_score(test_y, test_y_hat)) # r2_score expects (y_true, y_pred) ``` <h2>Want to learn more?</h2> IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: <a href="http://cocl.us/ML0101EN-SPSSModeler">SPSS Modeler</a> Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at <a href="https://cocl.us/ML0101EN_DSX">Watson Studio</a> <h3>Thanks for completing this lesson!</h3> <h4>Author: <a href="https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a></h4> <p><a href="https://ca.linkedin.com/in/saeedaghabozorgi">Saeed Aghabozorgi</a>, PhD is a Data Scientist in IBM with a track record of developing enterprise-level applications that substantially increase clients’ ability to turn data into actionable knowledge.
He is a researcher in the data mining field and an expert in developing advanced analytic methods like machine learning and statistical modelling on large datasets.</p> <hr> <p>Copyright &copy; 2018 <a href="https://cocl.us/DX0108EN_CC">Cognitive Class</a>. This notebook and its source code are released under the terms of the <a href="https://bigdatauniversity.com/mit-license/">MIT License</a>.</p>
## Data Loading Tutorial Loading data is one crucial step in the deep learning pipeline. PyTorch makes it easy to write custom data loaders for your particular dataset. In this notebook, we're going to download and load the CIFAR-10 dataset and return a torch "tensor" as the image and the label. ## Getting the dataset Head over to the [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset homepage and download the "python" version mentioned on the page. Extract the `tar.gz` archive in your home directory. ## Exploring the dataset The CIFAR-10 dataset is divided into 5 training batches and one test batch. In order to train effectively we need to use all the training data. The datasets themselves are stored in a "pickle" format so we have to "unpickle" them first: ``` import sys, os import pickle def unpickle(fname): with open(fname, 'rb') as f: Dict = pickle.load(f, encoding='bytes') return Dict ``` Let's test if we actually can load the data after the unpickling. For that we're going to load one of the data batches and see if we can find the data and labels from it: ``` Dic = unpickle(os.path.join('/home/akulshr','cifar-10-batches-py', 'data_batch_1')) print(Dic[b'data'].shape) print(len(Dic[b'labels'])) ``` We notice that the data is a numpy array of 10000x3072, which means that the data batch contains 10000 images of size 3072 (32x32x3). The labels are a 10000x1 list in which the *i*th element corresponds to the correct label for the *i*th image. The other training batches have the same layout. Whenever you've got a new dataset always try to load one or two images (or a data batch), as in this case, to get a feel for the data. Loading data like this can be tedious and time consuming.
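The batch layout described above — a dict with `b'data'` and `b'labels'` keys — can be reproduced with a tiny synthetic batch to see the pickle round trip in isolation. A sketch; the shapes are made up and far smaller than the real 10000x3072:

```python
import io
import pickle
import numpy as np

# A fake batch in the CIFAR-10 layout: N flattened images plus N labels
fake_batch = {b'data': np.zeros((4, 3072), dtype=np.uint8), b'labels': [0, 1, 2, 1]}

# Round-trip through an in-memory buffer, as unpickle() does with a real batch file
buf = io.BytesIO()
pickle.dump(fake_batch, buf)
buf.seek(0)
loaded = pickle.load(buf, encoding='bytes')
```

The `encoding='bytes'` argument matters for the real files, which were pickled under Python 2: it keeps the dict keys as byte strings like `b'data'`.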
Let's write a helper function which will return the data and the labels: ``` def load_data(batch): print ("Loading batch:{}".format(batch)) return unpickle(batch) ``` This little helper function will call our "unpickling" function to unpickle the data and return the unpickled data to us. In datasets that contain only images, you can use `PIL.Image`. Armed with this, let's write a dataloader for PyTorch. ## DataLoading in PyTorch Dataloading in PyTorch is a two-step process. First, you need to define a custom class which is subclassed from the `data.Dataset` class in `torch.utils.data`. This class takes in arguments which tell PyTorch about the location of the dataset and any "transforms" that you need to make before the dataset is loaded. First let's import the relevant modules. The `data` module which contains the useful functions for dataloading is contained in `torch.utils.data`. We also need a utility called `glob` to get all our batches in a list. The dataset is split into 5 different parts and we need to have them all together so we can use the entire training set: ``` import os import torch import torch.utils.data as data import glob from PIL import Image import numpy as np class CIFARLoader(data.Dataset): """ CIFAR-10 Loader: Loads the CIFAR-10 data according to an index value and returns the data and the labels. args: root: Root of the data directory. Optional args: transforms: The transforms you wish to apply to the data. target_transforms: The transforms you wish to apply to the labels. """ def __init__(self, root,transform=None, target_transform=None): self.root = root self.transform = transform self.target_transform = target_transform patt = os.path.join(self.root, 'data_batch_*') # create the pattern we want to search for.
        self.batches = sorted(glob.glob(patt))
        self.train_data = []
        self.train_labels = []
        for batch in self.batches:
            entry = load_data(batch)
            self.train_data.append(entry[b'data'])
            self.train_labels += entry[b'labels']

        ##############################################
        # We need to "concatenate" all the different #
        # training samples into one big array. For   #
        # doing that we're going to use a numpy      #
        # function called "concatenate".             #
        ##############################################
        self.train_data = np.concatenate(self.train_data)
        self.train_data = self.train_data.reshape((50000, 3, 32, 32))
        self.train_data = self.train_data.transpose((0, 2, 3, 1))  # pay attention to this step!

    def __getitem__(self, index):
        image = self.train_data[index]
        label = self.train_labels[index]
        if self.transform is not None:
            image = self.transform(image)
        if self.target_transform is not None:
            label = self.target_transform(label)
        print(image.size())
        return image, label

    def __len__(self):
        return len(self.train_data)
```

Every dataloader in PyTorch needs this form. The class you write for a DataLoader contains two methods, `__init__` and `__getitem__`. The `__init__` method is where you define arguments telling the location of your dataset and transforms (if any). It is also a good idea to have a variable which holds a list of all images/batches in your dataset. There are quite a number of things going on in the function we wrote. Let's break it down:

- We create two lists, `train_data` and `train_labels`, to contain the data and their associated labels. This is helpful since we have 5 training batches of 10,000 images each and it would be time consuming to do the same operation over and over again for all 5 batches.
- We read in all batches at once in our `for` loop. This is a better approach almost every time and should be applied to any dataset that you're trying to read.
- We use a function called `np.concatenate` to "concatenate" the training data, which is in the form of numpy arrays.
The labels are in the form of lists, so we simply use the Python concat operator `+` to concatenate them. The `__getitem__` method should accept only one argument: the index of the image/batch you want to access. We get to enjoy our hard work, as we can simply load a data point and its label by one index and return them. When writing your custom dataloaders it is good practice to keep your `__getitem__` as simple as possible.

Having written a dataloader, we can now test it. For the time being, don't worry about the test code; we'll get to it in later talks. The code below can be used for testing the dataloader class above:

```
import torchvision.transforms as transforms
import matplotlib.pyplot as plt

tfs = transforms.Compose([transforms.ToTensor(),
                          transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])  # convert any data into a torch tensor

root = '/home/akulshr/cifar-10-batches-py/'

cifar_train = CIFARLoader(root, transform=tfs)  # create a "CIFARLoader" instance.
cifar_loader = data.DataLoader(cifar_train, batch_size=4, shuffle=True, num_workers=2)

data_iter = iter(cifar_loader)
data, label = next(data_iter)
```

In our test function we create an "instance" of the dataloader. Notice how we don't pass anything to the `target_transform` parameter. This is because our labels in this case are in the form of lists. If they were in a different form we would have to employ different strategies, but this is rare in practice. The output should show you `torch.Size([3, 32, 32])`, which is PyTorch's way of saying that your data is now a "tensor" with dimensions `3x32x32`. This is correct, since we know that CIFAR-10 images have dimensions `3x32x32`.
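A quick aside on the `Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))` transform used above: it computes `(x - mean) / std` per channel, so `ToTensor` output in `[0, 1]` lands in `[-1, 1]`. A minimal numpy sketch of that arithmetic:

```python
import numpy as np

# Normalize with mean=0.5 and std=0.5 maps the [0, 1] range onto [-1, 1].
x = np.array([0.0, 0.25, 0.5, 1.0])
normalized = (x - 0.5) / 0.5
print(normalized)  # maps to -1.0, -0.5, 0.0, 1.0
```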
# Notes on Music Information Retrieval [Registration for the Stanford CCRMA MIR 2018 Workshop is open!](https://ccrma.stanford.edu/workshops/mir-2018) This year's esteemed instructors include: - [Meinard Müller](https://www.audiolabs-erlangen.de/fau/professor/mueller) (Int'l Audio Laboratories Erlangen; author, [*Fundamentals of Music Processing*](https://www.audiolabs-erlangen.de/fau/professor/mueller/bookFMP)) - [Brian McFee](https://bmcfee.github.io/) (NYU; creator, [librosa](https://github.com/librosa/librosa)). ## Introduction 1. [**About This Site**](about.html) ([ipynb](about.ipynb)) Start here! 1. [What is MIR?](why_mir.html) ([ipynb](why_mir.ipynb)) 1. [Python Basics and Dependencies](python_basics.html) ([ipynb](python_basics.ipynb)) 1. [Jupyter Basics](get_good_at_ipython.html) ([ipynb](get_good_at_ipython.ipynb)) 1. [Jupyter Audio Basics](ipython_audio.html) ([ipynb](ipython_audio.ipynb)) 1. [SoX and ffmpeg](sox_and_ffmpeg.html) ([ipynb](sox_and_ffmpeg.ipynb)) 1. [NumPy and SciPy Basics](numpy_basics.html) ([ipynb](numpy_basics.ipynb)) 1. [Alphabetical Index of Terms](alphabetical_index.html) ([ipynb](alphabetical_index.ipynb)) ## Music Representations 1. [Sheet Music Representations](sheet_music_representations.html) ([ipynb](sheet_music_representations.ipynb)) 1. [Symbolic Representations](symbolic_representations.html) ([ipynb](symbolic_representations.ipynb)) 1. [Audio Representation](audio_representation.html) ([ipynb](audio_representation.ipynb)) 1.
[Tuning Systems](tuning_systems.html) ([ipynb](tuning_systems.ipynb)) 1. [Understanding Audio Features through Sonification](feature_sonification.html) ([ipynb](feature_sonification.ipynb)) ## Signal Analysis and Feature Extraction 1. [Basic Feature Extraction](basic_feature_extraction.html) ([ipynb](basic_feature_extraction.ipynb)) 1. [Segmentation](segmentation.html) ([ipynb](segmentation.ipynb)) 1. [Energy and RMSE](energy.html) ([ipynb](energy.ipynb)) 1. [Zero Crossing Rate](zcr.html) ([ipynb](zcr.ipynb)) 1. [Fourier Transform](fourier_transform.html) ([ipynb](fourier_transform.ipynb)) 1. [Short-time Fourier Transform and Spectrogram](stft.html) ([ipynb](stft.ipynb)) 1. [Constant-Q Transform and Chroma](chroma.html) ([ipynb](chroma.ipynb)) 1. [Video: Chroma Features](video_chroma.html) ([ipynb](video_chroma.ipynb)) 1. [Magnitude Scaling](magnitude_scaling.html) ([ipynb](magnitude_scaling.ipynb)) 1. [Spectral Features](spectral_features.html) ([ipynb](spectral_features.ipynb)) 1. [Autocorrelation](autocorrelation.html) ([ipynb](autocorrelation.ipynb)) 1. [Pitch Transcription Exercise](pitch_transcription_exercise.html) ([ipynb](pitch_transcription_exercise.ipynb)) ## Rhythm, Tempo, and Beat Tracking 1. [Novelty Functions](novelty_functions.html) ([ipynb](novelty_functions.ipynb)) 1. [Peak Picking](peak_picking.html) ([ipynb](peak_picking.ipynb)) 1. [Onset Detection](onset_detection.html) ([ipynb](onset_detection.ipynb)) 1. [Onset-based Segmentation with Backtracking](onset_segmentation.html) ([ipynb](onset_segmentation.ipynb)) 1. [Tempo Estimation](tempo_estimation.html) ([ipynb](tempo_estimation.ipynb)) 1. [Beat Tracking](beat_tracking.html) ([ipynb](beat_tracking.ipynb)) 1. [Video: Tempo and Beat Tracking](video_tempo_beat_tracking.html) ([ipynb](video_tempo_beat_tracking.ipynb)) 1. [Drum Transcription using ADTLib](adtlib.html) ([ipynb](adtlib.ipynb)) ## Machine Learning 1. [K-Nearest Neighbor Classification](knn.html) ([ipynb](knn.ipynb)) 1. 
[Cross Validation](cross_validation.html) ([ipynb](cross_validation.ipynb)) 1. [Exercise: K-Nearest Neighbor Instrument Classification](knn_instrument_classification.html) ([ipynb](knn_instrument_classification.ipynb)) 1. [K-Means Clustering](kmeans.html) ([ipynb](kmeans.ipynb)) 1. [Exercise: Unsupervised Instrument Classification using K-Means](kmeans_instrument_classification.html) ([ipynb](kmeans_instrument_classification.ipynb)) 1. [Neural Networks](neural_networks.html) ([ipynb](neural_networks.ipynb)) 1. [Evaluation](evaluation.html) ([ipynb](evaluation.ipynb)) 1. [Genre Recognition](genre_recognition.html) ([ipynb](genre_recognition.ipynb)) 1. [Exercise: Genre Recognition](exercise_genre_recognition.html) ([ipynb](exercise_genre_recognition.ipynb)) ## Music Synchronization 1. [Dynamic Programming](dp.html) ([ipynb](dp.ipynb)) 1. [Longest Common Subsequence](lcs.html) ([ipynb](lcs.ipynb)) 1. [Dynamic Time Warping](dtw.html) ([ipynb](dtw.ipynb)) 1. [Dynamic Time Warping Example](dtw_example.html) ([ipynb](dtw_example.ipynb)) ## Music Structure Analysis 1. [Mel-Frequency Cepstral Coefficients](mfcc.html) ([ipynb](mfcc.ipynb)) ## Content-Based Audio Retrieval 1. [Locality Sensitive Hashing](lsh_fingerprinting.html) ([ipynb](lsh_fingerprinting.ipynb)) ## Musically Informed Audio Decomposition 1. [Principal Component Analysis](pca.html) ([ipynb](pca.ipynb)) 1. [Nonnegative Matrix Factorization](nmf.html) ([ipynb](nmf.ipynb)) 1. [Harmonic-Percussive Source Separation](hpss.html) ([ipynb](hpss.ipynb)) 1. [Exercise: Source Separation using NMF](nmf_source_separation.html) ([ipynb](nmf_source_separation.ipynb)) ## Just For Fun 1. [THX Logo Theme](thx_logo_theme.html) ([ipynb](thx_logo_theme.ipynb))
# 0 TorchText ## Dataset Preview Your first step to deep learning in NLP. We will be mostly using PyTorch. Just like torchvision, PyTorch provides an official library, torchtext, for handling text-processing pipelines. We will be using previous session tweet dataset. Let's just preview the dataset. ``` from google.colab import drive drive.mount('/content/gdrive/') !cp '/content/gdrive/My Drive/EVA/stanfordSentimentTreebank.zip' stanfordSentimentTreebank.zip !unzip -q -o stanfordSentimentTreebank.zip -d stanfordSentimentTreebank import pandas as pd df = pd.read_csv('/content/stanfordSentimentTreebank/stanfordSentimentTreebank/datasetSentences.txt',sep='\t') df.tail() df.shape df.info() ``` ## Defining Fields ``` from torch.utils.data import DataLoader from torch.optim.lr_scheduler import ReduceLROnPlateau class StanfordDatasetReader(): def __init__(self, sst_dir, split_idx): merged_dataset = self.get_merged_dataset(sst_dir) merged_dataset['sentiment values'] = merged_dataset['sentiment values'].astype(float) self.dataset = merged_dataset[merged_dataset["splitset_label"] == split_idx] # self.dataset["Revised_Sentiment"] = self.discretize_label(self.dataset.iloc[5]) self.dataset['Revised_sentiment values'] = self.dataset.apply(lambda x: labelfunc(x["sentiment values"]), axis=1) # train_st_data['Revised_sentiment values'] = train_st_data.apply(lambda x: myfunc(x["sentiment values"]), axis=1) # https://github.com/iamsimha/conv-sentiment-analysis/blob/master/code/dataset_reader.py def get_merged_dataset(self, sst_dir): sentiment_labels = pd.read_csv(os.path.join(sst_dir, "sentiment_labels.txt"), sep="|") sentence_ids = pd.read_csv(os.path.join(sst_dir, "datasetSentences.txt"), sep="\t") dictionary = pd.read_csv(os.path.join(sst_dir, "dictionary.txt"), sep="|", names=['phrase', 'phrase ids']) train_test_split = pd.read_csv(os.path.join(sst_dir, "datasetSplit.txt")) sentence_phrase_merge = pd.merge(sentence_ids, dictionary, left_on='sentence', right_on='phrase') 
sentence_phrase_split = pd.merge(sentence_phrase_merge, train_test_split, on='sentence_index') return pd.merge(sentence_phrase_split, sentiment_labels, on='phrase ids').sample(frac=1) def discretize_label(self, label): print(type(label)) if label <= 0.2: return 0 if label <= 0.4: return 1 if label <= 0.6: return 2 if label <= 0.8: return 3 return 4 def word_to_index(self, word): if word in self.w2i: return self.w2i[word] else: return self.w2i["<OOV>"] def __len__(self): return self.dataset.shape[0] # def __getitem__(self, idx): # return {"sentence": [self.word_to_index(x) for x in self.dataset.iloc[idx, 1].split()], # "label": self.discretize_label(self.dataset.iloc[idx, 5])} def labelfunc(label): if label <= 0.2: return 0 if label <= 0.4: return 1 if label <= 0.6: return 2 if label <= 0.8: return 3 return 4 def get_data(self): return self.dataset def __getitem__(self, idx): return {"sentence": [x for x in self.dataset.iloc[idx, 1].split()], "label": self.discretize_label(self.dataset.iloc[idx, 5])} def labelfunc(label): if label <= 0.2: return 0 if label <= 0.4: return 1 if label <= 0.6: return 2 if label <= 0.8: return 3 return 4 import os def load_data(sst_dir="/content/stanfordSentimentTreebank/stanfordSentimentTreebank/"): train_st_data_cl = StanfordDatasetReader(sst_dir, 1).get_data() # train_st_data_cl['Revised_sentiment values'] = train_st_data.apply(lambda x: labelfunc(x["sentiment values"]), axis=1) test_st_data_cl = StanfordDatasetReader(sst_dir, 2).get_data() # test_st_data_cl['Revised_sentiment values'] = test_st_data_cl.apply(lambda x: labelfunc(x["sentiment values"]), axis=1) validation_st_data_cl = StanfordDatasetReader(sst_dir, 3).get_data() # validation_st_data_cl['Revised_sentiment values'] = validation_st_data_cl.apply(lambda x: labelfunc(x["sentiment values"]), axis=1) return train_st_data_cl,test_st_data_cl,validation_st_data_cl train_st_data,test_st_data,validation_st_data = load_data() train_st_data.head() test_st_data.tail() 
validation_st_data.head() ``` ### Further NLP Augmentation ``` !pip install nlpaug # !pip install transformers ## Let's do the NLP data augmentation import nlpaug.augmenter.char as nac import nlpaug.augmenter.word as naw import nlpaug.augmenter.sentence as nas import nlpaug.flow as nafc from nlpaug.util import Action ``` ##### Some basic examples for understanding, followed by further data augmentation using these techniques: - Substitute a word with a WordNet synonym - Swap words randomly - Delete a contiguous span of words randomly - Delete words randomly ``` # validation_st_data.head() train_st_data['sentence'].iloc[0] aug = naw.SynonymAug(aug_src='wordnet') ## Substitute word by WordNet's synonym augmented_text = aug.augment(train_st_data['sentence'].iloc[0]) print("Original:") print(train_st_data['sentence'].iloc[0]) print("Augmented Text:") print(augmented_text) train_st_data_SynonymAug_aug = train_st_data train_st_data_SynonymAug_aug['sentence_aug'] = train_st_data_SynonymAug_aug.apply(lambda x: aug.augment(x['sentence']),axis=1) ## Swap word randomly aug = naw.RandomWordAug(action="swap") # Swap word randomly augmented_text = aug.augment(train_st_data['sentence'].iloc[0]) print("Original:") print(train_st_data['sentence'].iloc[0]) print("Augmented Text:") print(augmented_text) train_st_data_swap_aug = train_st_data train_st_data_swap_aug['sentence_aug'] = train_st_data_swap_aug.apply(lambda x: aug.augment(x['sentence']),axis=1) ## Swap word randomly # aug = naw.RandomWordAug(action='crop',aug_p=0.5, aug_min=0) # augmented_text = aug.augment(train_st_data['sentence'].iloc[0]) ## Delete a contiguous span of words randomly # print("Original:") # print(train_st_data['sentence'].iloc[0]) # print("Augmented Text:") # print(augmented_text) # train_st_data_crop_aug = train_st_data # train_st_data_crop_aug['sentence_aug'] = train_st_data_crop_aug.apply(lambda x: aug.augment(x['sentence']),axis=1) ## Delete a set of continuous words will be removed
randomly text = 'The quick brown fox jumps over the lazy dog .' # Augmenter that applies a random word operation to textual input. aug = naw.RandomWordAug() augmented_data = aug.augment(text) augmented_data train_st_data_delete_aug = train_st_data # train_st_data_aug[sentence_aug] = aug.augment(train_st_data_aug.loc["sentence"] ) train_st_data_delete_aug['sentence_aug'] = train_st_data_delete_aug.apply(lambda x: aug.augment(x['sentence']),axis=1) ## Delete words randomly print("Original:") print(train_st_data_delete_aug['sentence'].iloc[0]) print("Augmented Text:") print(train_st_data_delete_aug['sentence_aug'].iloc[0]) train_st_data_delete_aug.head() ## Now we need to combine all these data frames combined_data_aug = pd.concat([train_st_data_delete_aug, train_st_data_swap_aug, train_st_data_SynonymAug_aug], axis=0) ## after this, drop the sentence column and rename sentence_aug to sentence combined_data_aug.drop('sentence', axis=1, inplace=True) combined_data_aug.rename(columns = {'sentence_aug':'sentence'}, inplace = True) combined_data_aug.head() ``` ### Final Data Preparation ``` def get_final_data(train_st_data,test_st_data,validation_st_data,combined_data_aug): train_st_data_final = train_st_data.drop(['sentence_index','phrase','phrase ids','splitset_label','sentiment values'],axis=1) train_st_data_final.rename(columns = {'Revised_sentiment values': 'sentiment'}, inplace = True) combined_data_aug.drop(['sentence_index','phrase','phrase ids','splitset_label','sentiment values'],axis=1,inplace=True) combined_data_aug.rename(columns = {'Revised_sentiment values': 'sentiment'}, inplace = True) train_st_data_final_mixed = pd.concat([combined_data_aug, train_st_data_final], axis=0) train_st_data_final_mixed = train_st_data_final_mixed.reset_index(drop=True) ## This is being done because data.Example.fromlist was failing
test_st_data_final = test_st_data.drop(['sentence_index','phrase','phrase ids','splitset_label','sentiment values'],axis=1) test_st_data_final.rename(columns = {'Revised_sentiment values': 'sentiment'}, inplace = True) test_st_data_final = test_st_data_final.reset_index(drop=True) ## This is being done because data.Example.fromlist was failing validation_st_data_final = validation_st_data.drop(['sentence_index','phrase','phrase ids','splitset_label','sentiment values'],axis=1) validation_st_data_final.rename(columns = {'Revised_sentiment values': 'sentiment'} , inplace = True) validation_st_data_final = validation_st_data_final.reset_index(drop=True) ## This is being done because data.Example.fromlist was failing return train_st_data_final_mixed, test_st_data_final, validation_st_data_final train_st_data_final, test_st_data_final, validation_st_data_final = get_final_data(train_st_data,test_st_data,validation_st_data,combined_data_aug) train_st_data_final.drop(['sentence_aug'],axis=1,inplace=True) train_st_data_final.head() # train_st_data_final.to_csv(r'train_st_data_final.csv', index = False) train_st_data_final.shape train_st_data_final.sentiment.value_counts() validation_st_data_final.sentiment.value_counts() ``` Now we shall define `Sentiment` as a `LabelField`, which is a subclass of `Field` that sets `sequential` to `False` (as it's our numerical category class). `Sentence` is a standard `Field` object, where we have decided to use the spaCy tokenizer and convert all the text to lowercase.
``` # Import Library import random import torch, torchtext from torchtext import data import pandas as pd # Manual Seed SEED = 43 torch.manual_seed(SEED) Sentence = data.Field(sequential = True, tokenize = 'spacy', batch_first =True, include_lengths=True) Sentiment = data.LabelField(tokenize ='spacy',is_target=True, batch_first =True, sequential =False) ``` Having defined those fields, we now need to produce a list that maps them onto the list of rows that are in the CSV: ``` fields = [('sentence', Sentence),('sentiment',Sentiment)] # saving the dataframe train_st_data_final.to_csv('train_st_data_final.csv', index=False) test_st_data_final.to_csv('test_st_data_final.csv', index=False) validation_st_data_final.to_csv('validation_st_data_final.csv', index=False) !cp '/content/gdrive/My Drive/EVA/train_st_data_final_fine_grained.csv' train_st_data_final_fine_grained.csv !cp '/content/gdrive/My Drive/EVA/validation_st_data_final_fine_grained.csv' validation_st_data_final_fine_grained.csv !cp '/content/gdrive/My Drive/EVA/test_st_data_final_fine_grained.csv' test_st_data_final_fine_grained.csv train_st_data_final = pd.read_csv("train_st_data_final_fine_grained.csv") test_st_data_final = pd.read_csv("test_st_data_final_fine_grained.csv") validation_st_data_final = pd.read_csv("validation_st_data_final_fine_grained.csv") train_st_data_final.head() # train_st_data_final.drop(['sentence_aug'],axis=1,inplace=True) test_st_data_final.head() import matplotlib.pyplot as plt ax = train_st_data_final['sentiment'].value_counts(sort=False).plot(kind='barh') ax.set_xlabel("Number of Samples in training Set") ax.set_ylabel("sentiment") ``` It is clear that most of the training samples belong to classes 0 and 3 (the weakly negative/positive classes). A sizeable number of samples belong to the neutral class. Barely 12% of the samples are from the strongly negative class 1, which is something to keep in mind as we evaluate our classifier accuracy. 
``` # # saving the dataframe # train_st_data_final.to_csv('train_st_data_final.csv', index=False) # test_st_data_final.to_csv('test_st_data_final.csv', index=False) # validation_st_data_final.to_csv('validation_st_data_final.csv', index=False) ``` Armed with our declared fields, let's convert the pandas DataFrames into lists of torchtext Examples. We could also use TabularDataset to apply that definition to the CSV directly, but we show an alternative approach here. ``` example_trng = [data.Example.fromlist([train_st_data_final.sentence[i],train_st_data_final.sentiment[i]], fields) for i in range(train_st_data_final.shape[0])] example_val = [data.Example.fromlist([validation_st_data_final.sentence[i],validation_st_data_final.sentiment[i]], fields) for i in range(validation_st_data_final.shape[0])] # Creating dataset #twitterDataset = data.TabularDataset(path="tweets.csv", format="CSV", fields=fields, skip_header=True) # twitterDataset = data.Dataset(example, fields) train = data.Dataset(example_trng, fields) valid = data.Dataset(example_val, fields) ``` Alternatively, we could have split a single dataset into training and validation sets using the split() method (left commented out below): ``` # (train, valid) = twitterDataset.split(split_ratio=[0.85, 0.15], random_state=random.seed(SEED)) (len(train), len(valid)) ``` An example from the dataset: ``` vars(train.examples[10]) ``` ## Building Vocabulary At this point we would have built a one-hot encoding of each word that is present in the dataset—a rather tedious process. Thankfully, torchtext will do this for us, and will also allow a max_size parameter to be passed in to limit the vocabulary to the most common words. This is normally done to prevent the construction of a huge, memory-hungry model. We don't want our GPUs too overwhelmed, after all.
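A rough pure-Python sketch of what `build_vocab` with `max_size` does under the hood (the names here are illustrative, not torchtext internals): count the tokens, keep the most common ones, and reserve slots for the special tokens:

```python
from collections import Counter

# Count tokens, keep the max_size most common, and reserve special tokens.
tokens = "the dog slept on the mat the dog".split()
max_size = 3
counts = Counter(tokens)
itos = ['<unk>', '<pad>'] + [w for w, _ in counts.most_common(max_size)]
stoi = {w: i for i, w in enumerate(itos)}
print(itos)  # ['<unk>', '<pad>', 'the', 'dog', 'slept']
```

Words outside the kept vocabulary are simply mapped to the `<unk>` index at numericalization time.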
Let’s limit the vocabulary to a maximum of 5000 words in our training set: ``` MAX_VOCAB_SIZE = 25_000 Sentence.build_vocab(train, max_size = MAX_VOCAB_SIZE, vectors = "glove.6B.200d", # vectors = "glove.6B.300d", unk_init = torch.Tensor.normal_) Sentiment.build_vocab(train) # https://github.com/shayneobrien/sentiment-classification/blob/master/notebooks/11-cnn-conv2d-cbow-glove.ipynb # https://github.com/shayneobrien/sentiment-classification # Sentence.build_vocab(train) # Sentiment.build_vocab(train) ``` By default, torchtext will add two more special tokens, <unk> for unknown words and <pad>, a padding token that will be used to pad all our text to roughly the same size to help with efficient batching on the GPU. ``` print('Size of input vocab : ', len(Sentence.vocab)) print('Size of label vocab : ', len(Sentiment.vocab)) print('Top 10 words appreared repeatedly :', list(Sentence.vocab.freqs.most_common(10))) print('Labels : ', Sentiment.vocab.stoi) print('Size of input vocab : ', len(Sentence.vocab)) print('Size of label vocab : ', len(Sentiment.vocab)) print('Top 10 words appreared repeatedly :', list(Sentence.vocab.freqs.most_common(10))) print('Labels : ', Sentiment.vocab.stoi) ``` **Lots of stopwords!!** Now we need to create a data loader to feed into our training loop. Torchtext provides the BucketIterator method that will produce what it calls a Batch, which is almost, but not quite, like the data loader we used on images. But at first declare the device we are using. 
``` device = torch.device("cuda" if torch.cuda.is_available() else "cpu") train_iterator, valid_iterator = data.BucketIterator.splits((train, valid), batch_size = 32, sort_key = lambda x: len(x.sentence), sort_within_batch=True, device = device) ``` Save the vocabulary for later use ``` import os, pickle with open('tokenizer.pkl', 'wb') as tokens: pickle.dump(Sentence.vocab.stoi, tokens) ``` ## Defining Our Model We use the Embedding and LSTM modules in PyTorch to build a simple model for classifying tweets. In this model we create three layers. 1. First, the words in our tweets are pushed into an Embedding layer, which we have established as a 200-dimensional vector embedding. 2. That's then fed into a stacked bidirectional LSTM with 5 layers and 512 hidden features; stacking the layers lets us apply dropout between them. 3. Finally, the output of the LSTM (the final hidden state after processing the incoming tweet) is pushed through a standard fully connected layer with five outputs to correspond to our five sentiment classes.
``` import torch.nn as nn import torch.nn.functional as F class classifier(nn.Module): # Define all the layers used in model def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim, n_layers, dropout, bidirectional,pad_idx): super().__init__() # Embedding layer self.embedding = nn.Embedding(vocab_size, embedding_dim,padding_idx = pad_idx) # LSTM layer self.encoder = nn.LSTM(embedding_dim, hidden_dim, num_layers=n_layers, dropout=dropout, bidirectional = bidirectional, batch_first=True) # try using nn.GRU or nn.RNN here and compare their performances # try bidirectional and compare their performances self.dropout = nn.Dropout(dropout) # Dense layer self.fc = nn.Linear(hidden_dim, output_dim) def forward(self, text, text_lengths): # text = [batch size, sent_length] embedded = self.embedding(text) # embedded = [batch size, sent_len, emb dim] # packed sequence packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, text_lengths.cpu(), batch_first=True) packed_output, (hidden, cell) = self.encoder(packed_embedded) #hidden = [batch size, num layers * num directions,hid dim] #cell = [batch size, num layers * num directions,hid dim] # Hidden = [batch size, hid dim * num directions] dense_outputs = self.fc(hidden) # Final activation function softmax output = F.softmax(dense_outputs[0], dim=1) # output = F.softmax(dense_outputs, dim=1) return output # import torch.nn as nn # import torch.nn.functional as F # class classifier(nn.Module): # # Define all the layers used in model # def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim, n_layers, dropout, bidirectional,pad_idx): # super().__init__() # # Embedding layer # self.embedding = nn.Embedding(vocab_size, embedding_dim,padding_idx = pad_idx) # # LSTM layer # self.encoder = nn.LSTM(embedding_dim, # hidden_dim, # num_layers=n_layers, # dropout=dropout, # bidirectional = bidirectional, # batch_first=True) # # try using nn.GRU or nn.RNN here and compare their performances # # try bidirectional 
and compare their performances # self.dropout = nn.Dropout(dropout) # # Dense layer # self.fc = nn.Linear(hidden_dim, output_dim) # def forward(self, text, text_lengths): # # text = [batch size, sent_length] # embedded = self.embedding(text) # # embedded = [batch size, sent_len, emb dim] # # packed sequence # packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, text_lengths.cpu(), batch_first=True) # packed_output, (hidden, cell) = self.encoder(packed_embedded) # #hidden = [batch size, num layers * num directions,hid dim] # #cell = [batch size, num layers * num directions,hid dim] # # Hidden = [batch size, hid dim * num directions] # dense_outputs = self.dropout(self.fc(hidden)) # # Final activation function softmax # output = F.softmax(dense_outputs[0], dim=1) # # output = F.softmax(dense_outputs, dim=1) # return output # Define hyperparameters size_of_vocab = len(Sentence.vocab) embedding_dim = 200 num_hidden_nodes = 512 num_output_nodes = 5 num_layers = 5 dropout = 0.4 bidirectional = True PAD_IDX = Sentence.vocab.stoi[Sentence.pad_token] # Instantiate the model model = classifier(size_of_vocab, embedding_dim, num_hidden_nodes, num_output_nodes, num_layers, dropout, bidirectional,PAD_IDX) print(model) #No. 
of trainable parameters def count_parameters(model): return sum(p.numel() for p in model.parameters() if p.requires_grad) print(f'The model has {count_parameters(model):,} trainable parameters') ``` ## Model Training and Evaluation First define the optimizer and loss functions ``` import torch.optim as optim # define optimizer and loss optimizer = optim.Adam(model.parameters(), lr=2e-3) criterion = nn.CrossEntropyLoss() # define metric def binary_accuracy(preds, y): # take the class with the highest predicted score _, predictions = torch.max(preds, 1) correct = (predictions == y).float() acc = correct.sum() / len(correct) return acc pretrained_embeddings = Sentence.vocab.vectors print(pretrained_embeddings.shape) model.embedding.weight.data.copy_(pretrained_embeddings) UNK_IDX = Sentence.vocab.stoi[Sentence.unk_token] model.embedding.weight.data[UNK_IDX] = torch.zeros(embedding_dim) model.embedding.weight.data[PAD_IDX] = torch.zeros(embedding_dim) print(model.embedding.weight.data) # push to cuda if available model = model.to(device) criterion = criterion.to(device) ``` The main thing to be aware of in this new training loop is that we have to reference `batch.sentence` and `batch.sentiment` to get the particular fields we're interested in; they don't fall out quite as nicely from the enumerator as they do in torchvision. **Training Loop** ``` def train(model, iterator, optimizer, criterion): # initialize every epoch epoch_loss = 0 epoch_acc = 0 # set the model in training phase model.train() for batch in iterator: # resets the gradients after every batch optimizer.zero_grad() # retrieve text and no.
of words tweet, tweet_lengths = batch.sentence # convert to 1D tensor predictions = model(tweet, tweet_lengths).squeeze() # compute the loss loss = criterion(predictions, batch.sentiment) # compute the binary accuracy acc = binary_accuracy(predictions, batch.sentiment) # backpropagate the loss and compute the gradients loss.backward() # update the weights optimizer.step() # loss and accuracy epoch_loss += loss.item() epoch_acc += acc.item() return epoch_loss / len(iterator), epoch_acc / len(iterator) ``` **Evaluation Loop** ``` def evaluate(model, iterator, criterion): # initialize every epoch epoch_loss = 0 epoch_acc = 0 # deactivating dropout layers model.eval() # deactivates autograd with torch.no_grad(): for batch in iterator: # retrieve text and no. of words tweet, tweet_lengths = batch.sentence # convert to 1d tensor predictions = model(tweet, tweet_lengths).squeeze() # compute loss and accuracy loss = criterion(predictions, batch.sentiment) acc = binary_accuracy(predictions, batch.sentiment) # keep track of loss and accuracy epoch_loss += loss.item() epoch_acc += acc.item() return epoch_loss / len(iterator), epoch_acc / len(iterator) ``` **Let's Train and Evaluate** ``` N_EPOCHS = 20 best_valid_loss = float('inf') #freeze embeddings model.embedding.weight.requires_grad = unfrozen = False for epoch in range(N_EPOCHS): # train the model train_loss, train_acc = train(model, train_iterator, optimizer, criterion) # evaluate the model valid_loss, valid_acc = evaluate(model, valid_iterator, criterion) # save the best model if valid_loss < best_valid_loss: best_valid_loss = valid_loss torch.save(model.state_dict(), 'saved_weights.pt') print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%') print(f'\t Val. Loss: {valid_loss:.3f} | Val.
Acc: {valid_acc*100:.2f}% \n') path='./saved_weights.pt' model.load_state_dict(torch.load(path)); N_EPOCHS = 15 best_valid_loss = float('inf') #unfreeze embeddings model.embedding.weight.requires_grad = unfrozen = True for epoch in range(N_EPOCHS): # train the model train_loss, train_acc = train(model, train_iterator, optimizer, criterion) # evaluate the model valid_loss, valid_acc = evaluate(model, valid_iterator, criterion) # save the best model if valid_loss < best_valid_loss: best_valid_loss = valid_loss torch.save(model.state_dict(), 'saved_weights.pt') print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%') print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}% \n') ``` ## Model Testing ``` #load weights and tokenizer path='./saved_weights.pt' model.load_state_dict(torch.load(path)); model.eval(); tokenizer_file = open('./tokenizer.pkl', 'rb') tokenizer = pickle.load(tokenizer_file) #inference import spacy nlp = spacy.load('en') def classify_tweet(tweet): # five categories, matching the labelfunc thresholds used above categories = {0: "Very Negative", 1: "Negative", 2: "Neutral", 3: "Positive", 4: "Very Positive"} # tokenize the tweet tokenized = [tok.text for tok in nlp.tokenizer(tweet)] # convert to integer sequence using predefined tokenizer dictionary indexed = [tokenizer[t] for t in tokenized] # compute no. of words length = [len(indexed)] # convert to tensor tensor = torch.LongTensor(indexed).to(device) # reshape in form of batch, no. of words tensor = tensor.unsqueeze(1).T # convert to tensor length_tensor = torch.LongTensor(length) # Get the model prediction prediction = model(tensor, length_tensor) _, pred = torch.max(prediction, 1) return categories[pred.item()] classify_tweet("A valid explanation for why Trump won't let women on the golf course.") ``` ## Discussion on Data Augmentation Techniques You might wonder exactly how you can augment text data. After all, you can't really flip it horizontally as you can an image!
:D In contrast to image augmentation, text augmentation is very specific to the final product you are building. Its generic use on arbitrary textual data does not provide a significant performance boost, which is why, unlike torchvision, torchtext doesn’t offer an augmentation pipeline. With powerful models such as transformers, augmentation techniques are less favoured nowadays, but it is still worth knowing a few text techniques that can provide your model with a little more information for training. ### Synonym Replacement First, you could replace words in the sentence with synonyms, like so: The dog slept on the mat could become The dog slept on the rug Aside from the dog's insistence that a rug is much softer than a mat, the meaning of the sentence hasn’t changed. But mat and rug will be mapped to different indices in the vocabulary, so the model will learn that the two sentences map to the same label, and hopefully that there’s a connection between those two words, as everything else in the sentences is the same. ### Random Insertion A random insertion technique looks at a sentence and then randomly inserts synonyms of existing non-stopwords into the sentence n times. Assuming you have a way of getting a synonym of a word and a way of eliminating stopwords (common words such as and, it, the, etc.), shown, but not implemented, in this function via get_synonyms() and remove_stopwords(), an implementation of this would be as follows: ``` def random_insertion(sentence, n): words = remove_stopwords(sentence) for _ in range(n): # pick one random synonym of a random non-stopword new_synonym = random.choice(get_synonyms(random.choice(words))) sentence.insert(randrange(len(sentence)+1), new_synonym) return sentence ``` ### Random Deletion As the name suggests, random deletion deletes words from a sentence. Given a probability parameter p, it goes through the sentence and decides whether to delete each word based on that random probability. Think of it as the text analogue of pixel dropout for images. 
``` def random_deletion(words, p=0.5): if len(words) == 1: # return if single word return words remaining = list(filter(lambda x: random.uniform(0,1) > p,words)) if len(remaining) == 0: # if not left, sample a random word return [random.choice(words)] else: return remaining ``` ### Random Swap The random swap augmentation takes a sentence and then swaps words within it n times, with each iteration working on the previously swapped sentence. Here we sample two random numbers based on the length of the sentence, and then just keep swapping until we hit n. ``` def random_swap(sentence, n=5): length = range(len(sentence)) for _ in range(n): idx1, idx2 = random.sample(length, 2) sentence[idx1], sentence[idx2] = sentence[idx2], sentence[idx1] return sentence ``` For more on this please go through this [paper](https://arxiv.org/pdf/1901.11196.pdf). ### Back Translation Another popular approach for augmenting text datasets is back translation. This involves translating a sentence from our target language into one or more other languages and then translating all of them back to the original language. We can use the Python library googletrans for this purpose. ``` !pip install googletrans==3.1.0a0 import random import googletrans from googletrans import Translator # import googletrans.Translator translator = Translator() sentence = ['The dog slept on the rug'] available_langs = list(googletrans.LANGUAGES.keys()) trans_lang = random.choice(available_langs) print(f"Translating to {googletrans.LANGUAGES[trans_lang]}") translations = translator.translate(sentence, dest=trans_lang) t_text = [t.text for t in translations] print(t_text) translations_en_random = translator.translate(t_text, src=trans_lang, dest='en') en_text = [t.text for t in translations_en_random] print(en_text) ```
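For completeness, the synonym-replacement technique described at the start of this section can be sketched in the same style as the other helpers. The `SYNONYMS` map below is a hypothetical stand-in; a real implementation might look synonyms up in WordNet (e.g. via nltk):

```python
import random

# hypothetical synonym map; a real implementation would query a thesaurus
SYNONYMS = {
    "dog": ["hound", "pup"],
    "slept": ["dozed", "napped"],
    "mat": ["rug", "carpet"],
}

def synonym_replacement(words, n=1):
    # replace up to n words that have a known synonym with a random synonym
    words = list(words)
    candidates = [i for i, w in enumerate(words) if w in SYNONYMS]
    random.shuffle(candidates)
    for i in candidates[:n]:
        words[i] = random.choice(SYNONYMS[words[i]])
    return words

print(synonym_replacement("the dog slept on the mat".split(), n=2))
```

Each call produces a slightly different sentence with the same meaning and, hopefully, the same label.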
# Description This notebook runs some pre-analyses using spectral clustering to explore the best set of parameters to cluster the `z_score_std` data version. # Environment variables ``` from IPython.display import display import conf N_JOBS = conf.GENERAL["N_JOBS"] display(N_JOBS) %env MKL_NUM_THREADS=$N_JOBS %env OPENBLAS_NUM_THREADS=$N_JOBS %env NUMEXPR_NUM_THREADS=$N_JOBS %env OMP_NUM_THREADS=$N_JOBS ``` # Modules loading ``` %load_ext autoreload %autoreload 2 from pathlib import Path import warnings import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from utils import generate_result_set_name ``` # Settings ``` INITIAL_RANDOM_STATE = 30000 ``` # Z-score standardized data ``` INPUT_SUBSET = "z_score_std" INPUT_STEM = "projection-smultixcan-efo_partial-mashr-zscores" input_filepath = Path( conf.RESULTS["DATA_TRANSFORMATIONS_DIR"], INPUT_SUBSET, f"{INPUT_SUBSET}-{INPUT_STEM}.pkl", ).resolve() display(input_filepath) assert input_filepath.exists(), "Input file does not exist" input_filepath_stem = input_filepath.stem display(input_filepath_stem) data = pd.read_pickle(input_filepath) data.shape data.head() ``` # Clustering ``` from sklearn.cluster import SpectralClustering from clustering.utils import compute_performance ``` ## `gamma` parameter ### Using default value (`gamma=1.0`) ``` with warnings.catch_warnings(): warnings.filterwarnings("always") clus = SpectralClustering( eigen_solver="arpack", # eigen_tol=1e-2, n_clusters=2, n_init=1, affinity="rbf", gamma=1.00, random_state=INITIAL_RANDOM_STATE, ) part = clus.fit_predict(data) # show number of clusters and their size _tmp = pd.Series(part).value_counts() display(_tmp) assert _tmp.shape[0] == 1 ``` The algorithm does not work with the default `gamma=1.0`: all points are assigned to a single cluster. Other values for this parameter should be explored. 
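To see why the default fails here, recall that with `affinity="rbf"` the affinity between two samples is `exp(-gamma * ||x_i - x_j||^2)`. For high-dimensional data the pairwise squared distances are large, so with `gamma=1.0` every off-diagonal affinity underflows to zero and the similarity graph carries no structure. A minimal sketch on synthetic data (the shape is made up, not the real dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 100))  # toy stand-in: 20 samples, 100 dimensions

# squared Euclidean distances between all pairs of rows
sq_d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)

off_diag = ~np.eye(len(X), dtype=bool)
for gamma in (1.0, 0.01, 0.001):
    A = np.exp(-gamma * sq_d)  # RBF affinity used by SpectralClustering
    print(f"gamma={gamma}: mean off-diagonal affinity = {A[off_diag].mean():.3g}")
```

With 100 dimensions the squared distances concentrate around 200, so `gamma=1.0` yields affinities on the order of `exp(-200)` — numerically zero — while much smaller values keep the affinity matrix informative.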
### Using `gamma=5.00` ``` with warnings.catch_warnings(): warnings.filterwarnings("always") clus = SpectralClustering( eigen_solver="arpack", # eigen_tol=1e-2, n_clusters=2, n_init=1, affinity="rbf", gamma=5.00, random_state=INITIAL_RANDOM_STATE, ) part = clus.fit_predict(data) # show number of clusters and their size _tmp = pd.Series(part).value_counts() display(_tmp) assert _tmp.shape[0] == 1 ``` The algorithm does not work either with `gamma>1.0`. ### Using `gamma=0.01` ``` with warnings.catch_warnings(): warnings.filterwarnings("always") clus = SpectralClustering( eigen_solver="arpack", eigen_tol=1e-3, n_clusters=2, n_init=1, affinity="rbf", gamma=0.01, random_state=INITIAL_RANDOM_STATE, ) part = clus.fit_predict(data) # show number of clusters and their size _tmp = pd.Series(part).value_counts() display(_tmp) assert _tmp.shape[0] == 2 assert 3 < _tmp.loc[1] < 10 # show some clustering performance measures to assess the quality of the partition _tmp = compute_performance(data, part) assert 0.30 < _tmp["si"] < 0.31 assert 10.0 < _tmp["ch"] < 11.00 assert 1.30 < _tmp["db"] < 1.50 ``` For values around `gamma=0.01` the algorithm takes a lot of time to converge (here I used `eigen_tol=1e-03` to force convergence). 
### Using `gamma=0.001` ``` with warnings.catch_warnings(): warnings.filterwarnings("always") clus = SpectralClustering( eigen_solver="arpack", # eigen_tol=1e-3, n_clusters=2, n_init=1, affinity="rbf", gamma=0.001, random_state=INITIAL_RANDOM_STATE, ) part = clus.fit_predict(data) # show number of clusters and their size _tmp = pd.Series(part).value_counts() display(_tmp) assert _tmp.shape[0] == 2 assert 80 < _tmp.loc[1] < 90 # show some clustering performance measures to assess the quality of the partition _tmp = compute_performance(data, part) assert 0.10 < _tmp["si"] < 0.16 assert 40.0 < _tmp["ch"] < 42.00 assert 3.00 < _tmp["db"] < 4.00 ``` For values around `gamma=0.001` now the algorithm converges, although most of the performance measures are worse. This suggests smaller values should be explored for this parameter. ## Extended test Here I run some test across several `k` and `gamma` values; then I check how results perform with different clustering quality measures. ``` CLUSTERING_OPTIONS = {} CLUSTERING_OPTIONS["K_RANGE"] = [2, 4, 6, 8, 10, 20, 30, 40, 50, 60] CLUSTERING_OPTIONS["N_REPS_PER_K"] = 5 CLUSTERING_OPTIONS["KMEANS_N_INIT"] = 10 CLUSTERING_OPTIONS["GAMMAS"] = [ 1e-03, # 1e-04, # 1e-05, 1e-05, # 1e-06, # 1e-07, # 1e-08, # 1e-09, 1e-10, # 1e-11, # 1e-12, # 1e-13, # 1e-14, 1e-15, 1e-17, 1e-20, 1e-30, 1e-40, 1e-50, ] CLUSTERING_OPTIONS["AFFINITY"] = "rbf" display(CLUSTERING_OPTIONS) CLUSTERERS = {} idx = 0 random_state = INITIAL_RANDOM_STATE for k in CLUSTERING_OPTIONS["K_RANGE"]: for gamma_value in CLUSTERING_OPTIONS["GAMMAS"]: for i in range(CLUSTERING_OPTIONS["N_REPS_PER_K"]): clus = SpectralClustering( eigen_solver="arpack", n_clusters=k, n_init=CLUSTERING_OPTIONS["KMEANS_N_INIT"], affinity=CLUSTERING_OPTIONS["AFFINITY"], gamma=gamma_value, random_state=random_state, ) method_name = type(clus).__name__ CLUSTERERS[f"{method_name} #{idx}"] = clus random_state = random_state + 1 idx = idx + 1 display(len(CLUSTERERS)) _iter = iter(CLUSTERERS.items()) 
display(next(_iter)) display(next(_iter)) clustering_method_name = method_name display(clustering_method_name) ``` ## Generate ensemble ``` import tempfile from clustering.ensembles.utils import generate_ensemble ensemble = generate_ensemble( data, CLUSTERERS, attributes=["n_clusters", "gamma"], ) ensemble.shape ensemble.head() ensemble["gamma"] = ensemble["gamma"].apply(lambda x: f"{x:.1e}") ensemble["n_clusters"].value_counts() _tmp = ensemble["n_clusters"].value_counts().unique() assert _tmp.shape[0] == 1 assert _tmp[0] == int( CLUSTERING_OPTIONS["N_REPS_PER_K"] * len(CLUSTERING_OPTIONS["GAMMAS"]) ) ensemble_stats = ensemble["n_clusters"].describe() display(ensemble_stats) ``` ## Testing ``` assert ensemble_stats["min"] > 1 assert not ensemble["n_clusters"].isna().any() assert ensemble.shape[0] == len(CLUSTERERS) # all partitions have the right size assert np.all( [part["partition"].shape[0] == data.shape[0] for idx, part in ensemble.iterrows()] ) # no partition has negative clusters (noisy points) assert not np.any([(part["partition"] < 0).any() for idx, part in ensemble.iterrows()]) assert not np.any( [pd.Series(part["partition"]).isna().any() for idx, part in ensemble.iterrows()] ) # check that the number of clusters in the partitions are the expected ones _real_k_values = ensemble["partition"].apply(lambda x: np.unique(x).shape[0]) display(_real_k_values) assert np.all(ensemble["n_clusters"].values == _real_k_values.values) ``` ## Add clustering quality measures ``` from sklearn.metrics import ( silhouette_score, calinski_harabasz_score, davies_bouldin_score, ) ensemble = ensemble.assign( si_score=ensemble["partition"].apply(lambda x: silhouette_score(data, x)), ch_score=ensemble["partition"].apply(lambda x: calinski_harabasz_score(data, x)), db_score=ensemble["partition"].apply(lambda x: davies_bouldin_score(data, x)), ) ensemble.shape ensemble.head() ``` # Cluster quality ``` with pd.option_context("display.max_rows", None, "display.max_columns", None): 
_df = ensemble.groupby(["n_clusters", "gamma"]).mean() display(_df) with sns.plotting_context("talk", font_scale=0.75), sns.axes_style( "whitegrid", {"grid.linestyle": "--"} ): fig = plt.figure(figsize=(14, 6)) ax = sns.pointplot(data=ensemble, x="n_clusters", y="si_score", hue="gamma") ax.set_ylabel("Silhouette index\n(higher is better)") ax.set_xlabel("Number of clusters ($k$)") ax.set_xticklabels(ax.get_xticklabels(), rotation=45) plt.grid(True) plt.tight_layout() with sns.plotting_context("talk", font_scale=0.75), sns.axes_style( "whitegrid", {"grid.linestyle": "--"} ): fig = plt.figure(figsize=(14, 6)) ax = sns.pointplot(data=ensemble, x="n_clusters", y="ch_score", hue="gamma") ax.set_ylabel("Calinski-Harabasz index\n(higher is better)") ax.set_xlabel("Number of clusters ($k$)") ax.set_xticklabels(ax.get_xticklabels(), rotation=45) plt.grid(True) plt.tight_layout() with sns.plotting_context("talk", font_scale=0.75), sns.axes_style( "whitegrid", {"grid.linestyle": "--"} ): fig = plt.figure(figsize=(14, 6)) ax = sns.pointplot(data=ensemble, x="n_clusters", y="db_score", hue="gamma") ax.set_ylabel("Davies-Bouldin index\n(lower is better)") ax.set_xlabel("Number of clusters ($k$)") ax.set_xticklabels(ax.get_xticklabels(), rotation=45) plt.grid(True) plt.tight_layout() ``` # Stability ## Group ensemble by n_clusters ``` parts = ensemble.groupby(["gamma", "n_clusters"]).apply( lambda x: np.concatenate(x["partition"].apply(lambda x: x.reshape(1, -1)), axis=0) ) parts.shape parts.head() parts.iloc[0].shape assert np.all( [ parts.loc[k].shape == (int(CLUSTERING_OPTIONS["N_REPS_PER_K"]), data.shape[0]) for k in parts.index ] ) ``` ## Compute stability ``` from sklearn.metrics import adjusted_rand_score as ari from scipy.spatial.distance import pdist parts_ari = pd.Series( {k: pdist(parts.loc[k], metric=ari) for k in parts.index}, name="n_clusters" ) parts_ari_stability = parts_ari.apply(lambda x: x.mean()) 
display(parts_ari_stability.sort_values(ascending=False).head(15)) parts_ari_df = pd.DataFrame.from_records(parts_ari.tolist()).set_index( parts_ari.index.copy() ) parts_ari_df.index.rename(["gamma", "n_clusters"], inplace=True) parts_ari_df.shape _n_total_parts = int( CLUSTERING_OPTIONS["N_REPS_PER_K"] ) # * len(CLUSTERING_OPTIONS["GAMMAS"])) assert int(_n_total_parts * (_n_total_parts - 1) / 2) == parts_ari_df.shape[1] parts_ari_df.head() ``` ## Stability plot ``` parts_ari_df_plot = ( parts_ari_df.stack().reset_index().rename(columns={"level_2": "idx", 0: "ari"}) ) parts_ari_df_plot.dtypes parts_ari_df_plot.head() with pd.option_context("display.max_rows", None, "display.max_columns", None): _df = parts_ari_df_plot.groupby(["n_clusters", "gamma"]).mean() display(_df) with sns.plotting_context("talk", font_scale=0.75), sns.axes_style( "whitegrid", {"grid.linestyle": "--"} ): fig = plt.figure(figsize=(14, 6)) ax = sns.pointplot(data=parts_ari_df_plot, x="n_clusters", y="ari", hue="gamma") ax.set_ylabel("Average ARI") ax.set_xlabel("Number of clusters ($k$)") ax.set_xticklabels(ax.get_xticklabels(), rotation=45) plt.grid(True) plt.tight_layout() ``` # Conclusions We choose `1e-03` as the `gamma` parameter for this data version. 
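As a quick sanity check of the `pdist(..., metric=ari)` pattern used for the stability measure: identical partitions (even with permuted cluster labels) give an ARI of exactly 1, while conflicting partitions score at or below 0. A toy example:

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.metrics import adjusted_rand_score as ari

# three repeated "runs" over six samples: two identical up to relabelling, one different
parts = np.array([
    [0, 0, 1, 1, 2, 2],
    [1, 1, 2, 2, 0, 0],  # same grouping, labels permuted
    [0, 1, 0, 1, 0, 1],  # conflicting grouping
])

pairwise = pdist(parts, metric=ari)  # ARI for each pair of rows: (0,1), (0,2), (1,2)
print(pairwise)
print(pairwise.mean())  # the per-(gamma, k) stability value computed above
```

The first pairwise value is 1.0 despite the relabelling, which is exactly why ARI (rather than raw label agreement) is the right metric for comparing repeated clustering runs.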
# Detecting the Ingroup and Outgroup of a Text --- In this first experiment we use Bush’s (2001) and bin Laden’s (19XX) narratives as a dataset to detect the ingroup and outgroup of each. Shown in Figure 1, a manual annotation of the two texts reveals the ingroups and outgroups for each orator. Bush’s text is his “Address to Joint Session of Congress Following 9/11 Attacks” made on 20 September 2001; bin Laden’s text is his “Declaration of Jihad Against the Americans Occupying the Land of the Two Holy Places” published on 23 August 1996. These texts were chosen as they are the first in the dataset in which each declares war on the other. Bush identifies his ingroup as ‘fellow Americans’, and after identifying al Qaeda as a terrorist organisation, he declares, “Our war on terror begins with al Qaeda”. Bin Laden similarly identifies his ingroup as “Muslim brethren” and declares Jihad against his outgroup of the Americans by stating, “driving back the American occupier enemy is the most essential duty after faith”. The ingroup and outgroup of each text are clearly defined by each orator, as is their intention to legitimise warfare. 
## Setup NLP Pipeline and data ``` %%time import os import pandas as pd import spacy from cndlib.visuals import display_side_by_side from cndlib.pipeline import add_hard_coded_entities, merge_compounds, custom_tokenizer nlp = spacy.load("en_core_web_md") nlp.tokenizer = custom_tokenizer(nlp) dirname = "C:\\Users\\spa1e17\\OneDrive - University of Southampton\\Hostile-Narrative-Analysis\\dataset" filename = "named_entity_corrections.json" filepath = os.path.join(dirname, filename) add_hard_coded_entities(nlp, filepath) merge_ents = nlp.create_pipe("merge_entities") nlp.add_pipe(merge_ents, after = "entity_ruler") nlp.add_pipe(merge_compounds, last = True) print([pipe for pipe in nlp.pipe_names]) ``` ### Create the Dataset The dataset is created by extracting the named entities relating to people or groups, which in turn are annotated in relation to each orator as either ingroup or outgroup. The annotations were made by words signifying group membership, such as "terrorist organisation known as al Qaeda", or by inference using annotator judgement. The annotations are saved in a .csv file and can be reviewed as required. 
``` import os docs = None docs = {"bush" : {"name" : "George Bush", "text" : dict()}, "binladen" : {"name" : "Osama bin Laden", "text" : dict()}} docs['bush']['dirpath'] = u"C:\\Users\\spa1e17\\OneDrive - University of Southampton\\Hostile-Narrative-Analysis\\dataset\\George Bush" docs['bush']['text']['filename'] = "20010920-Address to Joint Session of Congress Following 911 Attacks.txt" docs['binladen']['dirpath'] = u"C:\\Users\\spa1e17\\OneDrive - University of Southampton\\Hostile-Narrative-Analysis\\dataset\\Osama bin Laden" docs['binladen']['text']['filename'] = "19960823-Declaration of Jihad Against the Americans Occupying the Land of the Two Holiest Sites.txt" # lambda function to capture the named entities of a text which are GEP, NORP, ORG or PERSON entity_list = lambda ents: [ent for ent in ents.noun_chunks if all((ent.root.pos_ != "ADJ", ent.root.ent_type_ in ["PERSON", "ORG", "NORP","GPE"])) ] # lambda function to process doc and extract entities get_entities = lambda orator : entity_list(nlp(orator["text"]["rawtext"])) for orator in docs.values(): with open(os.path.join(orator['dirpath'], orator['text']['filename']), 'r') as fp: # get bush entities orator["text"]["rawtext"] = fp.read() orator["text"]["entities"] = get_entities(orator) orator['text']['analytics'] = dict() n = 20 captions = [f"First {n} of {len(orator['text']['entities'])} Entities for {orator['name']}" for orator in docs.values()] display_side_by_side([pd.DataFrame([(ent.text, ent.root.text, ent.root.ent_type_, ent.root.pos_) for ent in docs[orator]["text"]["entities"]]).head(20) for orator in docs], captions) ``` ### Assigning Ingroup to Outgroup of a Text Export the entities to an output csv file for manual annotation. Annotation was based on three methods. Firstly, if the named entity had an associated seed term of either elevation or othering it was annotated as ingroup or outgroup respectively. 
For example, any seed term associated with the term “enemy” would be considered to be an outgroup, while any entity preceded by the phrase “my fellow” would be annotated as an ingroup. Secondly, any named entity whose grouping was identified elsewhere would be annotated as “linked”. For example, where a single instance of an entity had been annotated as an outgroup through the term “enemy”, all other instances of the same entity would be annotated with the same label. The final method is inference, which draws upon real-world knowledge to make the annotation. For review, the annotated dataset is available online and the first 12 annotation results are in Figure 1. ``` %time import csv import pandas as pd # Field headers for the csv fields = ["Orator", "Entity Type", "Part of Speech", "Entity Phrase", "Entity Root", "Grouping", "Seed Term", "Sentence"] entities = [] for orator in docs: for entity in docs[orator]["text"]["entities"]: entities.append([orator, entity.root.ent_type_, entity.root.pos_, entity.text, entity.root, '', '', str(entity.sent).replace('\n', ' ').strip()]) dirpath = os.getcwd() filename = "entity_list.csv" filepath = os.path.join(dirpath, filename) df = pd.DataFrame(entities, columns = fields) df.to_csv(filepath, sep=',',index=False) df ``` ### Import the Annotations as a Tab Delimited File The annotation file is saved as a .txt file and imported using tab delimiters to avoid any clashes with commas inside sentences. 
``` import os import csv import pandas as pd from cndlib.visuals import display_side_by_side filename = "entity_list_gold.txt" filename = os.path.join(os.getcwd(), filename) for orator in docs: docs[orator]['text']['groups'] = {"ingroup" : set(), "outgroup" : set()} with open(filename, newline = "") as fp: data = csv.DictReader(fp, delimiter = '\t') for row in data: if row["Grouping"].lower().strip() == "ingroup": docs[row["Orator"]]['text']['groups']["ingroup"].add(row["Entity Root"].lower().strip()) if row["Grouping"].lower().strip() == "outgroup": docs[row["Orator"]]['text']['groups']["outgroup"].add(row["Entity Root"].lower().strip()) dfs = [] captions = [] for orator in docs.values(): data = orator['text']['groups'] for grouping, group_list in data.items(): # convert set() to list() data[grouping] = list(data[grouping]) dfs.append(pd.DataFrame(group_list, columns = [f"{orator['name']}'s {grouping.title()}s"])) captions.append(f"{orator['name']}'s text has {len(group_list)} annotated {grouping} terms") display_side_by_side(dfs, captions) ``` ## Test 1: Testing IBM Watson Sentiment Analysis IBM's sentiment analyser has a feature to "analyse target phrases in context of the surrounding text for focused sentiment and emotion results" . For the first test, therefore, the named entities shown in figure 1 for each orator were passed to the API to get the sentiment scores for each. A positive score towards a named entity should infer ingroup membership, while a negative score should infer outgroup membership. The annotated entity phrases were passed to the API as target phrases and results were assessed against the annotations from figure 1. If positive sentiment scores correlated with an ingroup annotation or negative scores correlated with outgroup, the test was a pass; the test was a fail for positive score correlating with outgroup, or negative scores correlating with ingroup. 
API Documentation: https://cloud.ibm.com/docs/natural-language-understanding?topic=natural-language-understanding-getting-started ### Initiate Watson API ``` %%time import json from ibm_watson import NaturalLanguageUnderstandingV1 from ibm_cloud_sdk_core.authenticators import IAMAuthenticator from ibm_watson.natural_language_understanding_v1 import EmotionOptions, Features, EntitiesOptions, KeywordsOptions, SentimentOptions apikey = '' url = '' authenticator = IAMAuthenticator(apikey) service = NaturalLanguageUnderstandingV1(version='2019-07-12', authenticator=authenticator) service.set_service_url(url) # print(json.dumps(response, indent=2)) ``` ### Get Responses from Watson API ``` %%time import json for orator in docs.values(): print(f"getting results for {orator['name']}") text = orator['text']['rawtext'] targets = orator['text']['groups']['ingroup'] + orator['text']['groups']['outgroup'] orator['text']['analytics'].update(service.analyze(text=text, features = Features( emotion = EmotionOptions(targets = targets), entities = EntitiesOptions(sentiment = True, emotion = True), sentiment = SentimentOptions(targets = targets, document = True), keywords = KeywordsOptions(sentiment = True, emotion = True), )).get_result()) #empty the list of entity Spans to enable saving file as a json object orator['text']['entities'] = None print(f"{orator['name']} complete with {len(orator['text']['analytics']['sentiment']['targets'])} target entities scored for sentiment") with open(os.path.join(os.getcwd(), f"entity_sentiment_scores.json"), "wb") as f: f.write(json.dumps(docs).encode("utf-8")) ``` ### Overall Document Sentiment Score ``` import os import json docs = None with open(os.path.join(os.getcwd(), f"entity_sentiment_scores.json"), "r") as f: docs = json.load(f) for orator in docs.values(): response = orator['text']['analytics'] print(f"document sentiment {orator['name']}: {response['sentiment']['document']['label']}") print(f"document sentiment score for 
{orator['name']}: {response['sentiment']['document']['score']}") print() ``` ### Scores for Annotated Entities ``` import pandas as pd from cndlib.visuals import display_side_by_side def get_group(orator, entity): """ function to get the grouping of an entity from the orator's groupings """ if entity in docs[orator]['text']['groups']['ingroup']: return "ingroup" if entity in docs[orator]['text']['groups']['outgroup']: return "outgroup" return "not found" def assessment_test(col1, col2): """ function to test whether a sentiment score matches ingroup/outgroup """ if col1 in ("positive", "neutral") and col2 == "ingroup": return "pass" if col1 == "negative" and col2 == "ingroup": return "fail" if col1 == "negative" and col2 == "outgroup": return "pass" if col1 in ("positive", "neutral") and col2 == "outgroup": return "fail" # create new dataframe based on filtered columns scores = lambda table, labels: table[table.label.isin(labels)].sort_values("score", ascending = 'negative' in labels, ignore_index = True) ## iterate through the docs for orator in docs: # capture results results = pd.DataFrame(docs[orator]["text"]['analytics']["sentiment"]["targets"]) ## create a dataframe for positive and negative results dfs = dict() dfs = {"ingroup" : {"result" : None, "df" : scores(results, ['neutral', 'positive'])}, "outgroup" : {"result" : None, "df" : scores(results, ['negative'])}} for obj in dfs.values(): df = obj["df"] # get the grouping for each entity df["grouping"] = df.apply(lambda x: get_group(orator, x["text"]), axis = 1) # test whether sentiment score matches ingroup/outgroup df["test result"] = df.apply(lambda x: assessment_test(x["label"], x["grouping"]), axis=1) # get the success scores for ingroup and outgroup obj["result"] = format(df["test result"].value_counts(normalize = True)["pass"], '.0%') # format dataframe df.drop('mixed', axis = 1, inplace = True) df['text'] = df['text'].str.title() df.rename(columns = {"score" : "sentiment score", 
"text" : "entity text"}, inplace = True) df.columns = df.columns.str.title() docs[orator]['text']['analytics']['sentiment']['dfs'] = dfs # display the outputs display_side_by_side([output["df"] for output in dfs.values()], [f"{key.title()} scores for {docs[orator]['name']} has a True Positive Score of {obj['result']} from a total of {len(obj['df'])} Entities" for key, obj in dfs.items()]) print() # dfs = [] # captions = [] # for orator in docs.values(): # for group, df in orator['text']['analytics']['sentiment']['dfs'].items(): # dfs.append(df['df'].head(13) # captions.append(f"{group.title()} scores for {orator['name']} has a Success of {df['result']} from a total of {len(df['df'])} Entities") # display_side_by_side(dfs, captions) ``` In relation to the annotation methods, these are somewhat counter-intuitive results. For Bush’s outgroups the phrases ‘al Qaeda’, ‘Taliban’, ‘the Taliban Regime’, ‘al Qaeda’ and ‘Islamic Movement of Uzbekistan’ are annotated as outgroups and generate negative scores between -0.56 and -0.81 as expected. The phrases, ‘the United States’, ‘America’, ‘Americans’, ‘the United States of America’ and ‘United States Authorities’, however, are annotated as ingroups, but generate negative scores between -0.31 and -0.65. This overlapping range of scores does not correlate with how a President would refer to his country in a time of national mourning. The phrases, ‘Christians’, ‘Jews’, ‘Muslims’ and ‘Arlene’ generate the most negative results for Bush despite being annotated as ingroups. Of these ‘Arlene’ is the most negative with a score of -0.87 and occurs in the sentence, ‘It was given to me by his mom, Arlene, as a proud memorial to her son’. This mention is in reference to a Police Shield given to George Bush by Arlene Howard in memorial to her son George who was killed in the attacks. The context in which ‘Arlene’ is mentioned is entirely positive. There are 37 annotations for entities associated with elevation or othering seed terms. 
Any phrase containing the word ‘enemy’ would reasonably be scored as negative. The phrase, ‘US Enemy’, nevertheless, generates a score of +0.42, which is higher than ‘Gabriel’ at +0.38, a reference to the Angel Gabriel whom bin Laden repeatedly reveres. Equally, bin Laden refers to the ‘mujahidin’ as ‘our brothers the people’, yet the term generates a score of -0.47, which is more negative than both ‘Americans’ and ‘Marines’ at -0.28 and -0.36 respectively. Bush’s phrase, “The enemy of America is not our many Muslim friends” establishes Muslims as an ingroup, whereas the term ‘Muslims’ generates the second most negative score of -0.83. There is also a problem with how different mentions of the same entity are linked across a narrative. Despite being an outgroup of bin Laden, ‘Israel’ receives the third highest score of +0.44 behind a reference to President Clinton at +0.84 and ‘brother Muslims’ at +0.64. “Israel” is mentioned once in bin Laden’s text, but there are two mentions of the “Israeli-American alliance”, one mention of the “Israeli-American enemy alliance” and one mention of “Israelis” in the phrase, “their Jihad against their enemies and yours, the Israelis and Americans”. Where bin Laden’s text counter-intuitively generates positive scores for Israel, the phrases ‘Jewish-Crusade Alliance’, ‘Jew’ and ‘Jews’ each generate negative scores as expected. Against expectations, however, the phrases ‘Muslim’, ‘Muslims’ and ‘Ulema’ generate negative scores of -0.60, -0.62 and -0.64 respectively despite the phrase ‘brother Muslims’ receiving the second highest score for positivity. There are 43 annotations that rely upon linking entities to specific clauses of elevation or othering; resolving these different mentions of the same entity might produce more intuitive results. In addition to linking entities, there is also a problem with linking them to specific noun phrases. 
In Bush’s text, “Osama bin Laden” and “Egyptian Islamic Jihad” generate neutral scores, whereas they are annotated as his outgroups. They occur in the phrase, “This group and its leader -- a person named Usama bin Laden -- are linked to many other organizations in different countries, including the Egyptian Islamic Jihad and the Islamic Movement of Uzbekistan.” The noun phrase “this group” refers to a mention of al Qaeda in the previous paragraph, which Bush variously others as “terrorists” and “murderers”. Nevertheless, there is no obvious way to resolve ‘this group’ to ‘al Qaeda’ to establish the group status of bin Laden or the Egyptian Islamic Jihad. Their annotation relies upon real-world knowledge, for which there are 57 annotations in the dataset. Given these entities are only mentioned once, real-world knowledge is the only way to identify their group status. Beyond linking seed terms to entities, there is also a problem with linking entities to functional narrative clauses. For bin Laden, the phrase ‘Prophet’ – a reference to Muhammed – generates a negative score of -0.53 despite being a revered religious figure of bin Laden’s ingroup; even ‘America’ generates a less negative score. 12 out of 28 mentions of ‘Prophet’ are followed by the phrase, ‘may God's prayers and blessings be upon him’, which is used as a religious narrative clause to elevate the entity associated with the pronoun, ‘him’. This clause is functionally similar to ‘God bless America’, which sanctifies the object of the clause, in this case ‘America’. Such a use of religion should attract highly positive sentiment scores, which this algorithm appears unable to capture. 
### Scores for Watson Defined Entities ``` # get emotion scores # entry["emotion"]["sadness"], entry["emotion"]["joy"], entry["emotion"]["fear"], entry["emotion"]["disgust"], entry["emotion"]["anger"]) columns = ["Entity", "Sentiment Score", "Sentiment Label"] scores = lambda labels, table: pd.DataFrame( # get sentiment scores [(entry["text"], entry["sentiment"]["score"], entry["sentiment"]["label"]) # iterate through table if positive/negative for entry in table if entry["sentiment"]["label"] in labels], # set column names columns = columns) \ \ .sort_values("Sentiment Score", ascending = label not in labels, ignore_index = True) for orator in docs.values(): results = orator['text']['analytics']['entities'] n = 10 display_side_by_side([scores(['positive', 'neutral'], results), scores(['negative'], results)], [f"Top 10 Positive Scores for API Defined Entities in {orator['name']}'s Dataset'", f"Top 10 Negative Scores for API Defined Entities in {orator['name']}'s Dataset'"]) print() ``` ## Test 2: Proximity of Named Entities to Seed Terms ``` import tqdm import pickle group = ["the United States of America", "Americans", "America", "The United States"] term = None results = dict() for sentence in doc.sents: terms = set(group).intersection(set([token.text.strip() for token in sentence])) if terms: term = list(terms)[0] terms = [token.text for token in sentence if token.pos_ in ["NOUN", "VERB"]] if term and term in results.keys(): results[term].extend(terms) elif term: results[term] = terms ibm_df = dict() n = 1 for entity, terms in tqdm.tqdm(results.items()): ibm_df[entity] = {"positive" : list(), "negative" : list(), "neutral" : list()} for term in terms: analytics = service.analyze(text=term, features=Features( sentiment=SentimentOptions()), language = "en").get_result() sentiment = analytics['sentiment']['document'] score = {term : round(sentiment['score'], 2)} if sentiment['label'] == "positive": # print(f"appending {score} to {entity}['positive']") 
            ibm_df[entity]['positive'].append(score)
        elif sentiment['label'] == "negative":
            # print(f"appending {score} to {entity}['negative']")
            ibm_df[entity]['negative'].append(score)
        elif sentiment['label'] == "neutral":
            # print(f"appending {score} to {entity}['neutral']")
            ibm_df[entity]['neutral'].append(score)

with open(os.path.join(os.getcwd(), "manual_cooccurring_scores.json"), "wb") as f:
    f.write(json.dumps(ibm_df).encode("utf-8"))

filepath = os.getcwd()
pickle_filename = "ibm_cooccuring_scores.pkl"
with open(os.path.join(filepath, pickle_filename), 'wb') as file:
    pickle.dump(ibm_df, file)
```

## Load and Display the Results

```
import os
import pickle
import pandas as pd
from statistics import mean
from cndlib.visuals import display_side_by_side

# load file from disc
filepath = os.getcwd()
pickle_filename = "ibm_cooccuring_scores.pkl"
with open(os.path.join(filepath, pickle_filename), 'rb') as file:
    ibm_df = pickle.load(file)

def get_sentiment(entity):
    """get the sentiment score for the entity being assessed"""
    for target in docs['bush']["text"]['analytics']["sentiment"]["targets"]:
        if target['text'] == entity.lower():
            return target['score']

def get_averages_row(df):
    """get the average score for each sentiment polarity"""
    averages = list()
    for result in df.columns:
        averages.append(f"Average ({round(mean([score.get(list(score.keys())[0]) for score in df[result].tolist() if isinstance(score, dict)]), 2)})")
    return pd.DataFrame(dict(zip(df.columns, averages)), index=[0])

## create DataFrames from the results
dfs = [pd.DataFrame(dict([(k, pd.Series(v, dtype='object'))
                          for k, v in ibm_df[d].items()
                          if k in ['positive', 'negative', 'neutral']]))
       for d in ibm_df]

## get the greater of the number of positive or negative results
get_table_size = lambda df: max([value.count() for key, value in df.items()
                                 if key in ['positive', 'negative']])

## cell formatting function to convert the dictionary results to a string
format_cell = lambda x: f"{list(x.items())[0][0]} ({list(x.items())[0][1]})"

## get DataFrame captions
captions = [f"Entity: '{d}' - sentiment {round(get_sentiment(d), 2)}" for d in ibm_df]

## display DataFrames
display_side_by_side([df
                      .head(get_table_size(df))  # shrink table to the longer of positive or negative
                      .applymap(format_cell, na_action='ignore')  # reformat dictionary results to strings
                      .fillna('').append(get_averages_row(df), ignore_index=True)  # append the average score for each column
                      .rename(columns={key: f"{key.title()}, ({value.count()} Terms)"
                                       for key, value in df.items()})  # rename columns to include the number of entities
                      for df in dfs], captions)
```
github_jupyter
<a href="https://colab.research.google.com/github/RohanOpenSource/Deep-Learning-And-Beyond/blob/main/ExplodingAndVanishingGradients.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

Training neural networks can run into many problems that we can combat. The first of these is the exploding and vanishing gradient problem. During backpropagation, gradients can either become smaller and smaller, leaving the lower layers of the neural network essentially unchanged, or they can become larger and larger, making the neural network not work at all. This is often caused by the sigmoid activation function or by naive normal-distribution weight initialization, both of which make the variance of the signal change from layer to layer. To prevent this, we need the inputs and outputs of each layer to have similar variance. To achieve this, we can randomly initialize the weights with a variance scaled to the layer's size. This is called Glorot (or, for ReLU-style activations, He) initialization and is what is used by default in Keras.
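A minimal NumPy sketch of why this scaling matters (not part of the original notebook): push a signal through a stack of random linear layers and compare a fixed-variance initialization with Glorot-style `1/fan_in` scaling. The layer count and width below are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in = 256
x = rng.standard_normal((1, fan_in))

def signal_scale(weight_scale, n_layers=50):
    """Propagate x through n_layers random linear layers; return the mean |h|."""
    h = x.copy()
    for _ in range(n_layers):
        W = rng.standard_normal((fan_in, fan_in)) * weight_scale
        h = h @ W
    return float(np.abs(h).mean())

naive = signal_scale(weight_scale=1.0)                     # magnitude explodes
glorot = signal_scale(weight_scale=np.sqrt(1.0 / fan_in))  # stays around 1
```

With unit-variance weights each layer multiplies the signal's scale by roughly `sqrt(fan_in)`, so fifty layers blow it up astronomically; scaling the weights by `sqrt(1/fan_in)` keeps that per-layer factor near 1.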
```
import tensorflow as tf
import numpy as np
from sklearn.datasets import load_iris

iris = load_iris()  # you know the drill, we're using iris
X = iris.data[:, 2:]
y = iris.target

model = tf.keras.Sequential([
    # sigmoid can itself contribute to exploding/vanishing gradients
    tf.keras.layers.Dense(128, activation="sigmoid",
                          kernel_initializer=tf.keras.initializers.GlorotUniform()),
    # relu isn't perfect either, as some neurons can get stuck always outputting 0, thus being useless
    tf.keras.layers.Dense(64, activation="relu"),
    # rather than zeroing negative inputs, leaky relu passes them through with a small slope,
    # so no neuron becomes permanently useless
    tf.keras.layers.Dense(32, activation="leaky_relu"),
    tf.keras.layers.Dense(3, activation="softmax")
])
model.build([1, 2])
model.summary()
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X, y, epochs=30)
```

Sometimes, using variants of relu and Glorot initialization is not enough to prevent the gradients from vanishing or exploding. In this case, batch normalization is required. Batch Normalization normalizes and centers its inputs (the outputs of the layer before it). This reduces the variance between layers and, if used as the first layer, can remove the need to normalize the data beforehand.
```
fashion_mnist = tf.keras.datasets.fashion_mnist
(X_train_full, y_train_full), (X_test, y_test) = fashion_mnist.load_data()
X_valid, X_train = X_train_full[:5000] / 255.0, X_train_full[5000:] / 255.0
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
X_test = X_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=[28, 28]),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10, activation="softmax")
])
model.summary()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid))
```

Another way to avoid exploding/vanishing gradients is to clip the gradients before they become too big or too small during backpropagation. This is called gradient clipping.

```
optimizer = tf.keras.optimizers.SGD(clipvalue=1.0)
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics=["accuracy"])
model.fit(X_train, y_train, epochs=5, validation_data=(X_valid, y_valid))
```
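To make the clipping semantics concrete, here is a small NumPy sketch (not Keras internals) of what `clipvalue=1.0` does, contrasted with `clipnorm`, which rescales the whole gradient vector instead of clamping each component:

```python
import numpy as np

# clipvalue: clamp every component of the gradient elementwise into [-1, 1]
grads = np.array([0.3, -4.0, 2.5, 0.9])
clipped_by_value = np.clip(grads, -1.0, 1.0)

# clipnorm: rescale the whole gradient only if its L2 norm exceeds the threshold,
# preserving the gradient's direction
clipnorm = 1.0
norm = np.linalg.norm(grads)
clipped_by_norm = grads * (clipnorm / norm) if norm > clipnorm else grads
```

`clipvalue` can change the gradient's direction (components saturate independently), while `clipnorm` only shrinks its length, which is why the two behave differently in practice.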
# pandas From Scratch

Data Structures, Series and Dataframe
______________________
**Source** [Python for Data Analysis Ch05, by Wes McKinney](https://github.com/wesm/pydata-book/blob/2nd-edition/ch05.ipynb)
______________________

__**pandas**__ is a Python toolkit optimized for building data structures and data cleaning operations. It works with tabular or heterogeneous data sets, unlike NumPy, which prefers homogeneous numerical data. Similar to NumPy's array-based functions, it processes data without FOR loops. Many applications can begin with either Series or DataFrames.

```
import pandas as pd
from pandas import Series, DataFrame
import numpy as np
np.random.seed(12345)
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(10, 6))
PREVIOUS_MAX_ROWS = pd.options.display.max_rows
pd.options.display.max_rows = 20
np.set_printoptions(precision=4, suppress=True)
```

## pandas Series

**Series** A one-dimensional array-like object: a sequence of values matched to its index. An *index* is an array of data labels; the default index starts at 0. You can optionally use strings as index labels, and you can select one or more values by quoting the labels in the selection. Series concept: a fixed-length, ordered dictionary.
```
obj = pd.Series([4, 7, -5, 3])
obj
obj.values
obj.index  # like range(4)

obj2 = pd.Series([4, 7, -5, 3], index=['d', 'b', 'a', 'c'])
obj2
obj2.index

# quote the index value if not numeric
obj2['a']
obj2['d'] = 6
obj2[['c', 'a', 'd']]

# operations on the object do not change the index-value relationship
obj2[obj2 > 0]
obj2 * 2
np.exp(obj2)

# similar to dict usage
'b' in obj2
'e' in obj2

# sdata starts as a dictionary; pass it to pandas to create a Series
sdata = {'Camelot': 33350100, 'Castle Anthrax': 7105400, 'Oregon': 1643000, 'Spamtowne': 555000}
obj3 = pd.Series(sdata)
obj3

# you can specify the returned key order, which defaults to sorted
# Utopia does not exist in the series
states = ['Utopia', 'Spamtowne', 'Oregon', 'Camelot']
obj4 = pd.Series(sdata, index=states)
obj4

# detect if the data object has nulls
print('$----test null----$\n', pd.isnull(obj4))
print('$----test not null----$\n', pd.notnull(obj4))

# optional to use instance methods
print(obj3.isnull())
print('$$-----------more---------$$')
print(obj4.isnull())

# Series auto-align by index labels in math operations, similar to database joins
print(obj3 + obj4)

# the series object and its index can be named
obj4.name = 'population'
obj4.index.name = 'state'
obj4

obj.index = ['Lancelot', 'Thripshaw', 'Boubacar', 'Camara']
print(obj)
```

### DataFrame
_________________
*Dataframes* are ordered collections of columns, each potentially of a different type. They are stored as 2-dimensional arrays, indexed by columns and rows: basically a dictionary that contains Series sharing the same index. Although stored as 2D blocks, they can represent higher-dimensional data through hierarchical indexing. The lists have to have the same length; an index is assigned if you do not provide one.
```
data = {'country': ['Senegal', 'Mali', 'Gambia', 'Guinee', 'Burkina Faso', 'Ghana'],
        'year': [2000, 2001, 2002, 2001, 2002, 2003],
        'pop': [1.3, 1.6, 3.6, 2.4, 22.9, 35.2]}
frame = pd.DataFrame(data)
frame.head()  # defaults to the first five rows

# specify the column order
pd.DataFrame(data, columns=['year', 'country', 'pop'])

# create a dataframe; a requested column like 'crocodiles' that is not in the data fills with NaN
frame2 = pd.DataFrame(data, columns=['year', 'country', 'pop', 'crocodiles'],
                      index=['one', 'two', 'three', 'four', 'five', 'six'])
print(frame2)
print(frame2.columns)

# return a column by dictionary-like notation or attribute
print(frame2['country'])
print(frame2.year)

# retrieve a row by position or name
frame2.loc['four']

# you can assign a value for each row
# frame2['crocodiles'] = 17.56
frame2['crocodiles'] = np.arange(6.)
print(frame2)

# example of the length not matching: missing values are filled in with NaN
val = pd.Series([-1.2, -1.5, -1.7], index=['two', 'four', 'five'])
frame2['crocodiles'] = val
frame2

# create a new column, assigning a boolean where country equals Senegal
# you have to use square brackets when creating; later you can retrieve without them
frame2['western'] = frame2.country == 'Senegal'
frame2

# remove the column created above; should return a list of what is still there
del frame2['western']
frame2.columns

# create a nested dictionary of dictionaries
# pandas defaults to inner keys as row indices and outer dict keys as columns
pop = {'Coleoptera': {3001: 2.48, 4002: 2.79},
       'Lepidoptera': {3000: 1.15, 5001: 12.7, 7003: 33.6}}
frame3 = pd.DataFrame(pop)
frame3

# transpose: swap the columns and rows, similar syntax to NumPy arrays
# the inner dict keys mix together and sort to create the index
frame3.T

# sort the keys in the order you specify
# inner dict keys are mixed and sorted if you are not explicit
pd.DataFrame(pop, index=[3001, 5001, 4002])

pdata = {'Coleoptera': frame3['Coleoptera'][:-1],
         'Lepidoptera': frame3['Lepidoptera'][:2]}
pd.DataFrame(pdata)

frame3.index.name = 'year'; frame3.columns.name = 'state'
frame3
frame3.values
frame2.values
```

### Index Objects

Index objects hold axis labels and metadata such as axis names. If you use an array of labels when building a Series or DataFrame, it will be converted into an Index internally.

```
obj = pd.Series(range(3), index=['a', 'b', 'c'])
index = obj.index
index
index[1:]
# index[1] = 'd'  # TypeError, because Index objects are immutable

# immutability makes it safe to share Index objects between dataframes
labels = pd.Index(np.arange(3))
labels
obj2 = pd.Series([1.5, -2.5, 0], index=labels)
print(obj2)
obj2.index is labels

frame3
frame3.columns
'Coleoptera' in frame3.columns
2003 in frame3.index

# pd indexes can contain duplicate labels; an Index is not a set
dup_labels = pd.Index(['foo', 'foo', 'bar', 'bar'])
dup_labels
```

## Essential Functionality

### Reindexing

```
obj = pd.Series([4.5, 7.2, -5.3, 3.6], index=['d', 'b', 'a', 'c'])
obj

# reindex the above object: rearranges to the new labels, NaN where there is no match
obj2 = obj.reindex(['a', 'b', 'c', 'd', 'e'])
obj2

# sometimes it is needed to fill in values when reindexing; the ffill method option forward-fills
obj3 = pd.Series(['Hymenoptera', 'Coleoptera', 'Hemiptera'], index=[0, 2, 4])
obj3
obj3.reindex(range(6), method='ffill')

frame = pd.DataFrame(np.arange(9).reshape((3, 3)),
                     index=['a', 'c', 'd'],
                     columns=['Psocoptera', 'Neuroptera', 'Trichoptera'])
frame
frame2 = frame.reindex(['a', 'b', 'c', 'd'])
frame2

# use the columns keyword to reindex the columns
orders = ['Neuroptera', 'Mecoptera', 'Trichoptera']
frame.reindex(columns=orders)
# reindex both axes at once (loc with missing labels is deprecated)
frame.reindex(index=['a', 'b', 'c', 'd'], columns=orders)
```

### Dropping Entries from an Axis

```
obj = pd.Series(np.arange(5.), index=['Cnidaria', 'Annelida', 'Echinodermata', 'Urochordata', 'Nematoda'])
obj
new_obj = obj.drop('Urochordata')
new_obj
obj.drop(['Cnidaria', 'Annelida'])

data = pd.DataFrame(np.arange(16).reshape((4, 4)),
                    index=['Asteroidea', 'Ophiuroidea', 'Holothuroidea', 'Echinoidea'],
                    columns=["go'o", 'didi', 'tatti', 'nye'])
data
data.drop(['Holothuroidea', 'Ophiuroidea'])
data.drop('didi', axis=1)
data.drop(["go'o", 'nye'], axis='columns')
obj.drop('Cnidaria', inplace=True)
obj
```

### Indexing, Selection, and Filtering

```
obj = pd.Series(np.arange(6, 11, 1.), index=['Khaya', 'Mangifera', 'Balanites', 'Azadirachta', 'Delonix'])
obj
obj['Mangifera']
obj[3]
obj[2:4]
obj[[1, 3]]
obj[obj < 7]
obj[['Mangifera', 'Khaya', 'Azadirachta']]
obj
obj[3:5]
obj['Mangifera':'Azadirachta']
obj['Khaya':'Azadirachta'] = 8
obj

data = pd.DataFrame(np.arange(21, 41, 1).reshape((4, 5)),
                    index=['Gibberellin', 'Oligosaccharins', 'Cytokinins', 'Auxin'],
                    columns=['one', 'two', 'three', 'four', 'five'])
data
data['two']
data[['three', 'one']]
data[:2]
data[data['three'] > 28]
data < 25
data[data < 25] = 0
data
```

#### Selection with loc and iloc

```
data.loc['Cytokinins', ['two', 'three']]
data.iloc[2, [3, 0, 1]]
data.iloc[2]
data.iloc[[1, 2], [3, 0, 1]]
data.loc[:'Auxin', 'two']
data.iloc[:, :3][data.three > 25]
```

### Integer Indexes

```
ser = pd.Series(np.arange(3.))
ser
# ser[-1]  # raises: with a default integer index, -1 is ambiguous

ser2 = pd.Series(np.arange(3.), index=['a', 'b', 'c'])
ser2[-1]
ser[:1]
ser.loc[:1]
ser.iloc[:1]
```

### Arithmetic and Data Alignment

```
s1 = pd.Series([7.3, -2.5, 3.4, 1.5], index=['a', 'c', 'd', 'e'])
s2 = pd.Series([-2.1, 3.6, -1.5, 4, 3.1], index=['a', 'c', 'e', 'f', 'g'])
s1
s2
s1 + s2

df1 = pd.DataFrame(np.arange(9.).reshape((3, 3)), columns=list('bcd'),
                   index=['Ohio', 'Texas', 'Colorado'])
df2 = pd.DataFrame(np.arange(12.).reshape((4, 3)), columns=list('bde'),
                   index=['Utah', 'Ohio', 'Texas', 'Oregon'])
df1
df2
df1 + df2

df1 = pd.DataFrame({'A': [1, 2]})
df2 = pd.DataFrame({'B': [3, 4]})
df1
df2
df1 - df2
```

#### Arithmetic methods with fill values

```
df1 = pd.DataFrame(np.arange(12.).reshape((3, 4)), columns=list('abcd'))
df2 = pd.DataFrame(np.arange(20.).reshape((4, 5)), columns=list('abcde'))
print(df1)
print(df2)
df2.loc[1, 'b'] = np.nan
df1
df2
df1 + df2
df1.add(df2, fill_value=0)
1 / df1
df1.rdiv(1)
df1.reindex(columns=df2.columns, fill_value=0)
```

#### Operations between DataFrame and Series

```
arr = np.arange(12.).reshape((3, 4))
arr
arr[0]
arr - arr[0]

frame = pd.DataFrame(np.arange(12.).reshape((4, 3)), columns=list('txw'),
                     index=['Bozicactus', 'Cotyledon', 'Hatioria', 'Lobivia'])
series = frame.iloc[0]
frame
series
frame - series

series2 = pd.Series(range(3), index=['b', 'e', 'f'])
frame + series2

series3 = frame['w']
frame
series3
frame.sub(series3, axis='index')
```

### Function Application and Mapping

```
frame = pd.DataFrame(np.random.randn(4, 3), columns=list('bde'),
                     index=['denudatum', 'netrelianum', 'platense', 'saglionis'])
frame
np.abs(frame)

f = lambda x: x.max() - x.min()
frame.apply(f)
frame.apply(f, axis='columns')

def f(x):
    return pd.Series([x.min(), x.max()], index=['min', 'max'])
frame.apply(f)

format = lambda x: '%.2f' % x
frame.applymap(format)
frame['e'].map(format)
```

### Sorting and Ranking

```
obj = pd.Series(range(4), index=['d', 'a', 'b', 'c'])
obj.sort_index()

frame = pd.DataFrame(np.arange(8).reshape((2, 4)),
                     index=['three', 'one'],
                     columns=['d', 'a', 'b', 'c'])
frame.sort_index()
frame.sort_index(axis=1)
frame.sort_index(axis=1, ascending=False)

obj = pd.Series([4, 7, -3, 2])
obj.sort_values()
obj = pd.Series([4, np.nan, 7, np.nan, -3, 2])
obj.sort_values()

frame = pd.DataFrame({'b': [4, 7, -3, 2], 'a': [0, 1, 0, 1]})
frame
frame.sort_values(by='b')
frame.sort_values(by=['a', 'b'])

obj = pd.Series([7, -5, 7, 4, 2, 0, 4])
obj.rank()
obj.rank(method='first')
# assign tied values the maximum rank in the group
obj.rank(ascending=False, method='max')

frame = pd.DataFrame({'b': [4.3, 7, -3, 2], 'a': [0, 1, 0, 1], 'c': [-2, 5, 8, -2.5]})
frame
frame.rank(axis='columns')
```

### Axis Indexes with Duplicate Labels

```
obj = pd.Series(range(5), index=['a', 'a', 'b', 'b', 'c'])
obj
obj.index.is_unique
obj['a']
obj['c']

df = pd.DataFrame(np.random.randn(4, 3), index=['a', 'a', 'b', 'b'])
df
df.loc['b']
```

## Summarizing and Computing Descriptive Statistics

```
df = pd.DataFrame([[1.4, np.nan], [7.1, -4.5], [np.nan, np.nan], [0.75, -1.3]],
                  index=['a', 'b', 'c', 'd'],
                  columns=['one', 'two'])
df
df.sum()
df.sum(axis='columns')
df.mean(axis='columns', skipna=False)
df.idxmax()
df.cumsum()
df.describe()

obj = pd.Series(['a', 'a', 'b', 'c'] * 4)
obj.describe()
```

### Correlation and Covariance

Requires `conda install pandas-datareader`.

```
price = pd.read_pickle('examples/yahoo_price.pkl')
volume = pd.read_pickle('examples/yahoo_volume.pkl')
```

To download the data instead of loading the pickles (requires network access):

```
import pandas_datareader.data as web
all_data = {ticker: web.get_data_yahoo(ticker)
            for ticker in ['AAPL', 'IBM', 'MSFT', 'GOOG']}
price = pd.DataFrame({ticker: data['Adj Close'] for ticker, data in all_data.items()})
volume = pd.DataFrame({ticker: data['Volume'] for ticker, data in all_data.items()})
```

```
returns = price.pct_change()
returns.tail()
returns['MSFT'].corr(returns['IBM'])
returns['MSFT'].cov(returns['IBM'])
returns.MSFT.corr(returns.IBM)
returns.corr()
returns.cov()
returns.corrwith(returns.IBM)
returns.corrwith(volume)
```

### Unique Values, Value Counts, and Membership

```
obj = pd.Series(['c', 'a', 'd', 'a', 'a', 'b', 'b', 'c', 'c'])
uniques = obj.unique()
uniques
obj.value_counts()
pd.value_counts(obj.values, sort=False)
obj
mask = obj.isin(['b', 'c'])
mask
obj[mask]

to_match = pd.Series(['c', 'a', 'b', 'b', 'c', 'a'])
unique_vals = pd.Series(['c', 'b', 'a'])
pd.Index(unique_vals).get_indexer(to_match)

data = pd.DataFrame({'Qu1': [1, 3, 4, 3, 4],
                     'Qu2': [2, 3, 1, 2, 3],
                     'Qu3': [1, 5, 2, 4, 4]})
data
result = data.apply(pd.value_counts).fillna(0)
result
```

## Conclusion

```
pd.options.display.max_rows = PREVIOUS_MAX_ROWS
```
# Gini Impurity

![GA7a5j.png](https://s1.ax1x.com/2020/03/28/GA7a5j.png)

$$G = 1 - \sum_{i=1}^k p_i^2$$

- For a binary classification problem:

$$G = 1 - x^2 - (1-x)^2$$
$$\Downarrow$$
$$= -2x^2 + 2x$$

- So for binary classification, the Gini impurity reaches its maximum at $x = \frac{1}{2}$
- That is, this is where the system's uncertainty is greatest

### 1. Gini impurity

```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets

iris = datasets.load_iris()
X = iris.data[:, 2:]
y = iris.target

from sklearn.tree import DecisionTreeClassifier

dt_clf = DecisionTreeClassifier(max_depth=2, criterion='gini')
dt_clf.fit(X, y)

def plot_decision_boundary(model, axis):
    x0, x1 = np.meshgrid(
        np.linspace(axis[0], axis[1], int((axis[1] - axis[0])*100)).reshape(1, -1),
        np.linspace(axis[2], axis[3], int((axis[3] - axis[2])*100)).reshape(-1, 1)
    )
    X_new = np.c_[x0.ravel(), x1.ravel()]
    y_predict = model.predict(X_new)
    zz = y_predict.reshape(x0.shape)
    from matplotlib.colors import ListedColormap
    custom_cmap = ListedColormap(['#EF9A9A', '#FFF590', '#90CAF9'])
    plt.contourf(x0, x1, zz, cmap=custom_cmap)

plot_decision_boundary(dt_clf, axis=(0.5, 7.5, 0, 3))
plt.scatter(X[y==0, 0], X[y==0, 1])
plt.scatter(X[y==1, 0], X[y==1, 1])
plt.scatter(X[y==2, 0], X[y==2, 1])

def gini(p):
    return 1 - p**2 - (1-p)**2

x = np.linspace(0.01, 0.99)
plt.plot(x, gini(x))
```

### 2. Simulating splits with Gini impurity

```
from collections import Counter
from math import log

# split on `value` along dimension d
def split(X, y, d, value):
    index_a = (X[:, d] <= value)
    index_b = (X[:, d] > value)
    return X[index_a], X[index_b], y[index_a], y[index_b]

# compute the Gini impurity of a set of labels
def gini(y):
    counter = Counter(y)
    res = 1.0
    for num in counter.values():
        p = num / len(y)
        res -= p**2
    return res

# search for the best value to split on
def try_split(X, y):
    best_g = float('inf')    # smallest Gini impurity found so far
    best_d, best_v = -1, -1  # split dimension, split position
    # loop over each dimension
    for d in range(X.shape[1]):
        # candidate values are the midpoints between consecutive samples along d,
        # so first sort all samples by dimension d
        sorted_index = np.argsort(X[:, d])
        for i in range(1, len(X)):
            if X[sorted_index[i-1], d] != X[sorted_index[i], d]:
                v = (X[sorted_index[i-1], d] + X[sorted_index[i], d]) / 2
                x_l, x_r, y_l, y_r = split(X, y, d, v)
                # impurity of the two resulting partitions
                g = gini(y_l) + gini(y_r)
                if g < best_g:
                    best_g, best_d, best_v = g, d, v
    return best_g, best_d, best_v

best_g, best_d, best_v = try_split(X, y)
print("best_g = ", best_g)
print("best_d = ", best_d)
print("best_v = ", best_v)
```

**We can see that splitting at 2.45 along dimension 0 (the x-axis) gives the smallest Gini impurity, 0.5**

```
X1_l, X1_r, y1_l, y1_r = split(X, y, best_d, best_v)
# as the plot above shows, after one split the pink region contains only one class,
# so its Gini impurity is 0
gini(y1_l)
gini(y1_r)

best_g2, best_d2, best_v2 = try_split(X1_r, y1_r)
print("best_g = ", best_g2)
print("best_d", best_d2)
print("best_v", best_v2)
```

**We can see that splitting at 1.75 along dimension 1 (the y-axis) gives the smallest Gini impurity, 0.21**

- scikit-learn uses the Gini criterion by default
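As a quick sanity check of the label-based Gini helper used above (redefined here so the snippet is self-contained), a pure node should score 0, a balanced binary node 0.5, and an evenly split three-class node 2/3:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum_i p_i^2."""
    counter = Counter(labels)
    res = 1.0
    for num in counter.values():
        res -= (num / len(labels)) ** 2
    return res

pure = gini([1, 1, 1, 1])       # single class -> 0.0
balanced = gini([0, 0, 1, 1])   # 50/50 binary -> 0.5
three_way = gini([0, 1, 2])     # three equal classes -> 2/3
```

These match the formula $G = 1 - \sum_i p_i^2$ evaluated by hand, which is a useful check before trusting the split search.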
## Dependencies ``` import json, glob from tweet_utility_scripts import * from tweet_utility_preprocess_roberta_scripts_aux import * from transformers import TFRobertaModel, RobertaConfig from tokenizers import ByteLevelBPETokenizer from tensorflow.keras import layers from tensorflow.keras.models import Model ``` # Load data ``` test = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/test.csv') print('Test samples: %s' % len(test)) display(test.head()) ``` # Model parameters ``` input_base_path = '/kaggle/input/244-robertabase/' with open(input_base_path + 'config.json') as json_file: config = json.load(json_file) config # vocab_path = input_base_path + 'vocab.json' # merges_path = input_base_path + 'merges.txt' base_path = '/kaggle/input/qa-transformers/roberta/' vocab_path = base_path + 'roberta-base-vocab.json' merges_path = base_path + 'roberta-base-merges.txt' config['base_model_path'] = base_path + 'roberta-base-tf_model.h5' config['config_path'] = base_path + 'roberta-base-config.json' model_path_list = glob.glob(input_base_path + '*.h5') model_path_list.sort() print('Models to predict:') print(*model_path_list, sep='\n') ``` # Tokenizer ``` tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path, lowercase=True, add_prefix_space=True) ``` # Pre process ``` test['text'].fillna('', inplace=True) test['text'] = test['text'].apply(lambda x: x.lower()) test['text'] = test['text'].apply(lambda x: x.strip()) x_test, x_test_aux, x_test_aux_2 = get_data_test(test, tokenizer, config['MAX_LEN'], preprocess_fn=preprocess_roberta_test) ``` # Model ``` module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False) def model_fn(MAX_LEN): input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids') attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask') base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model") 
last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask}) logits = layers.Dense(2, name="qa_outputs", use_bias=False)(last_hidden_state) start_logits, end_logits = tf.split(logits, 2, axis=-1) start_logits = tf.squeeze(start_logits, axis=-1) end_logits = tf.squeeze(end_logits, axis=-1) model = Model(inputs=[input_ids, attention_mask], outputs=[start_logits, end_logits]) return model ``` # Make predictions ``` NUM_TEST_IMAGES = len(test) test_start_preds = np.zeros((NUM_TEST_IMAGES, config['MAX_LEN'])) test_end_preds = np.zeros((NUM_TEST_IMAGES, config['MAX_LEN'])) for model_path in model_path_list: print(model_path) model = model_fn(config['MAX_LEN']) model.load_weights(model_path) test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE'])) test_start_preds += test_preds[0] test_end_preds += test_preds[1] ``` # Post process ``` test['start'] = test_start_preds.argmax(axis=-1) test['end'] = test_end_preds.argmax(axis=-1) test['selected_text'] = test.apply(lambda x: decode(x['start'], x['end'], x['text'], config['question_size'], tokenizer), axis=1) # Post-process test.loc[test['sentiment'] == 'neutral', 'selected_text'] = test["text"] test["selected_text"] = test.apply(lambda x: ' '.join([word for word in x['selected_text'].split() if word in x['text'].split()]), axis=1) test['selected_text'] = test.apply(lambda x: x['text'] if (x['selected_text'] == '') else x['selected_text'], axis=1) test['selected_text'].fillna(test['text'], inplace=True) ``` # Visualize predictions ``` test['text_len'] = test['text'].apply(lambda x : len(x)) test['label_len'] = test['selected_text'].apply(lambda x : len(x)) test['text_wordCnt'] = test['text'].apply(lambda x : len(x.split(' '))) test['label_wordCnt'] = test['selected_text'].apply(lambda x : len(x.split(' '))) test['text_tokenCnt'] = test['text'].apply(lambda x : len(tokenizer.encode(x).ids)) test['label_tokenCnt'] = test['selected_text'].apply(lambda x : 
len(tokenizer.encode(x).ids)) test['jaccard'] = test.apply(lambda x: jaccard(x['text'], x['selected_text']), axis=1) display(test.head(10)) display(test.describe()) ``` # Test set predictions ``` submission = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/sample_submission.csv') submission['selected_text'] = test['selected_text'] submission.to_csv('submission.csv', index=False) submission.head(10) ```
# Tensorflow Extended (TFX)

- In this kernel we try to use the TensorFlow Data Validation library to easily visualize statistics and the inferred schema; the other main purpose of this tool is to compare train and test statistics (train-test skew)
- TensorFlow Data Validation uses Facets internally to render interactive visualizations
- This package is used in production systems to validate the data for anomalies before passing it into our model (an ML/DL model expects data in a particular schema)
- For further information kindly go through the **TFX documentation**: [TFX](https://www.tensorflow.org/tfx), [Tensorflow Data validation](https://www.tensorflow.org/tfx/data_validation/get_started)

*Disclaimer: we can already analyse the descriptive nature of the data with the rich set of tools in the Python ecosystem; this kernel's purpose is to introduce a new tool (from the TensorFlow ecosystem) which is useful both for descriptive analysis and in production systems.*

```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python

import numpy as np  # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)

# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
print(os.listdir("../input"))

# Any results you write to the current directory are saved as output.
```

# Installing Tensorflow data validation

- TFDV uses Apache Beam for its full pass over the dataset (including big datasets) and for aggregations
- Apache Beam has some nice characteristics, such as:
  - a **unified programming model** for batch and stream processing
  - **portability**: execute pipelines on multiple execution environments
- Apache Beam supports different language SDKs
- Apache Beam runs with different backend runners, such as:
  - Apache Flink
  - Apache Spark
  - Apache Gearpump
  - Apache Nemo
  - Apache Samza
  - Google Cloud Dataflow
  - the Direct runner (without parallelism, on local machines)

*Note: this kernel uses the Direct runner, so generating statistics from the dataset may be slow.*

```
!pip install --user tensorflow_data_validation
import tensorflow_data_validation as tfdv
```

# Generating Statistics

Generating statistics means TFDV does a full pass over the dataset and collects its characteristics (aggregation metrics); these training statistics are later used to validate the statistics of the test data at prediction or serving time.

- Currently, we can prepare statistics for a dataset in three ways:
  - directly from CSVs
  - from a pandas DataFrame
  - from TensorFlow-specific TFRecords

```
dipole_moments=pd.read_csv("../input/dipole_moments.csv")
magnetic_shielding_tensors=pd.read_csv("../input/magnetic_shielding_tensors.csv")
mulliken_charges=pd.read_csv("../input/mulliken_charges.csv")
potential_energy=pd.read_csv("../input/potential_energy.csv")
structures=pd.read_csv("../input/structures.csv")
scalar_coupling_contributions=pd.read_csv("../input/scalar_coupling_contributions.csv")

#dipole_moments=tfdv.generate_statistics_from_csv('../input/dipole_moments.csv')
#magnetic_shielding_tensors=tfdv.generate_statistics_from_csv('../input/magnetic_shielding_tensors.csv')
#mulliken_charges=tfdv.generate_statistics_from_csv('../input/mulliken_charges.csv')
#potential_energy=tfdv.generate_statistics_from_csv('../input/potential_energy.csv')
#scalar_coupling_contributions=tfdv.generate_statistics_from_csv('../input/scalar_coupling_contributions.csv')
```

Run either of the above approaches to generate statistics; I observed that generating statistics from a pandas DataFrame was a bit faster than generating from the CSVs.
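Conceptually, the statistics TFDV collects in its full pass are per-feature aggregates: value counts, missing-value counts, and numeric summaries. A stdlib-only sketch of the idea (this is an illustration, not the TFDV API; the rows and column names are made up):

```python
from statistics import mean, stdev

# Made-up stand-in rows for a real dataset; None marks a missing value.
rows = [
    {"molecule": "m1", "X": 0.1, "Y": 2.0},
    {"molecule": "m2", "X": 0.4, "Y": None},
    {"molecule": "m3", "X": 0.7, "Y": 1.0},
]

def column_stats(rows, column):
    """Collect per-feature aggregates: count, missing count, mean/std if numeric."""
    values = [row.get(column) for row in rows]
    present = [v for v in values if v is not None]
    stats = {"count": len(values), "missing": len(values) - len(present)}
    if present and all(isinstance(v, (int, float)) for v in present):
        stats["mean"] = mean(present)
        stats["std"] = stdev(present) if len(present) > 1 else 0.0
    return stats

x_stats = column_stats(rows, "X")  # no missing values
y_stats = column_stats(rows, "Y")  # one missing value
```

TFDV computes a much richer set of aggregates (histograms, top values, string stats) over Beam, but the shape of the output is the same: one summary per feature.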
# Visualizing the Generated Statistics in Facets - This is as simple as calling **tfdv.visualize_statistics** api ``` dipole_moments_stats=tfdv.generate_statistics_from_dataframe(dipole_moments) tfdv.visualize_statistics(dipole_moments_stats) magnetic_shielding_tensors_stats=tfdv.generate_statistics_from_dataframe(magnetic_shielding_tensors) tfdv.visualize_statistics(magnetic_shielding_tensors_stats) mulliken_charges_stats=tfdv.generate_statistics_from_dataframe(mulliken_charges) tfdv.visualize_statistics(mulliken_charges_stats) potential_energy_stats=tfdv.generate_statistics_from_dataframe(potential_energy) tfdv.visualize_statistics(potential_energy_stats) structures_stats=tfdv.generate_statistics_from_dataframe(structures) tfdv.visualize_statistics(structures_stats) scalar_coupling_contributions_stats=tfdv.generate_statistics_from_dataframe(scalar_coupling_contributions) tfdv.visualize_statistics(scalar_coupling_contributions_stats) ``` # Saving schema for training statistics - Schema is nothing but the expected features /schema of the given dataset - Schema can be prepared in two ways: - Manually write the schema - USe the generated Satistics to infer schema. 
- You can save/serialize the schema of your training dataset to disk, and later load it to validate the statistics of the test data

```
dipole_moments_schema=tfdv.infer_schema(dipole_moments_stats)
tfdv.write_schema_text(dipole_moments_schema,"dipole_moments_schema")
magnetic_shielding_tensors_schema=tfdv.infer_schema(magnetic_shielding_tensors_stats)
tfdv.write_schema_text(magnetic_shielding_tensors_schema,"magnetic_shielding_tensors_schema")
mulliken_charges_schema=tfdv.infer_schema(mulliken_charges_stats)
tfdv.write_schema_text(mulliken_charges_schema,"mulliken_charges_schema")
potential_energy_schema=tfdv.infer_schema(potential_energy_stats)
tfdv.write_schema_text(potential_energy_schema,"potential_energy_schema")
structures_schema=tfdv.infer_schema(structures_stats)
tfdv.write_schema_text(structures_schema,"structures_schema")
scalar_coupling_contributions_schema=tfdv.infer_schema(scalar_coupling_contributions_stats)
tfdv.write_schema_text(scalar_coupling_contributions_schema,"scalar_coupling_contributions_schema")

print(os.listdir(".")), tfdv.load_schema_text('magnetic_shielding_tensors_schema')

tfdv.display_schema(dipole_moments_schema)
tfdv.display_schema(magnetic_shielding_tensors_schema)
tfdv.display_schema(mulliken_charges_schema)
tfdv.display_schema(potential_energy_schema)
tfdv.display_schema(scalar_coupling_contributions_schema)
tfdv.display_schema(structures_schema)

# visualize_statistics expects the statistics objects, not the raw DataFrames
tfdv.visualize_statistics(potential_energy_stats)
tfdv.visualize_statistics(scalar_coupling_contributions_stats)
```
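To make "infer a schema, then validate against it" concrete, here is a plain-dict sketch of the idea (this is illustrative only, not the TFDV schema protobuf; all names and rows are made up):

```python
def infer_schema(rows):
    """Infer each column's type and requiredness from training rows."""
    schema = {}
    for row in rows:
        for name, value in row.items():
            entry = schema.setdefault(name, {"type": type(value).__name__,
                                             "required": True})
            if value is None:
                entry["required"] = False
    return schema

def validate(row, schema):
    """Return a list of anomaly descriptions for a new row."""
    anomalies = []
    for name, spec in schema.items():
        value = row.get(name)
        if value is None:
            if spec["required"]:
                anomalies.append(f"missing required column: {name}")
        elif type(value).__name__ != spec["type"]:
            anomalies.append(f"type mismatch in column: {name}")
    return anomalies

train = [{"atom": "C", "x": 0.1}, {"atom": "H", "x": 0.9}]
schema = infer_schema(train)
clean = validate({"atom": "N", "x": 0.5}, schema)      # conforms to the schema
skewed = validate({"atom": "O", "x": "oops"}, schema)  # type mismatch on x
```

TFDV's `validate_statistics` works at the level of aggregate statistics rather than single rows, but the workflow is the same: infer once from training data, then check every new batch against the saved schema.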
```
import sys
sys.path.append("/home/sidhu/Documents/tf-transformers/src/")

import tensorflow_text as text
import tensorflow as tf
import numpy as np
import tensorflow_datasets as tfds

# Load tokenizer
vocab_file = 'bert_tokenizer_dir/vocab.txt'

def _create_vocab_table_and_initializer(vocab_file):
    vocab_initializer = tf.lookup.TextFileInitializer(
        vocab_file,
        key_dtype=tf.string,
        key_index=tf.lookup.TextFileIndex.WHOLE_LINE,
        value_dtype=tf.int64,
        value_index=tf.lookup.TextFileIndex.LINE_NUMBER)
    vocab_table = tf.lookup.StaticHashTable(vocab_initializer, default_value=-1)
    return vocab_table, vocab_initializer

vocab_table, vocab_initializer = _create_vocab_table_and_initializer(vocab_file)
bert_tokenizer = text.BertTokenizer(vocab_table, lower_case=False)

CLS_ID, SEP_ID, PAD_ID, UNK_ID, MASK_ID = (101, 102, 0, 100, 103)

imdb = tfds.load('imdb_reviews')

def get_masked_input_and_labels(encoded_texts):
    # 15% BERT masking
    inp_mask = np.random.rand(*encoded_texts.shape) < 0.15
    # Do not mask special tokens
    inp_mask[encoded_texts == CLS_ID] = False
    inp_mask[encoded_texts == SEP_ID] = False
    # Set targets to -1 by default; -1 means "ignore"
    labels = -1 * np.ones(encoded_texts.shape, dtype=int)
    # Set labels for masked tokens
    labels[inp_mask] = encoded_texts[inp_mask]

    # Prepare input
    encoded_texts_masked = np.copy(encoded_texts)
    # Replace 90% of the selected tokens with [MASK];
    # this means leaving 10% of the selection unchanged
    inp_mask_2mask = inp_mask & (np.random.rand(*encoded_texts.shape) < 0.90)
    encoded_texts_masked[inp_mask_2mask] = MASK_ID
    # Set 10% of the selection (1/9 of the masked tokens) to a random token
    inp_mask_2random = inp_mask_2mask & (np.random.rand(*encoded_texts.shape) < 1 / 9)
    encoded_texts_masked[inp_mask_2random] = np.random.randint(
        3, MASK_ID, inp_mask_2random.sum()
    )

    # Prepare sample_weights to pass to .fit() method
    sample_weights = np.ones(labels.shape)
    sample_weights[labels == -1] = 0

    # y_labels starts as a copy of the input tokens
    y_labels = np.copy(encoded_texts)

    # Extract masked lm positions (where we have masked)
    # and use tf.ragged to convert them into a dense tensor
    indexes, positions = np.where(encoded_texts_masked == MASK_ID)
    unique, counts = np.unique(indexes, return_counts=True)
    counts = counts[:-1]  # np.split only needs the boundaries between rows
    counts = np.cumsum(counts)
    masked_lm_positions = tf.ragged.constant(np.split(positions, counts))
    masked_lm_positions = masked_lm_positions.to_tensor(PAD_ID)

    # Gather the positions we want
    y_labels = np.take_along_axis(y_labels, masked_lm_positions.numpy(), axis=1)
    sample_weights = np.take_along_axis(sample_weights, masked_lm_positions.numpy(), axis=1)
    return encoded_texts_masked, y_labels, sample_weights, masked_lm_positions

def create_mlm(encoded_text):
    # `get_masked_input_and_labels` works on NumPy arrays, so convert
    # the incoming TensorFlow tensor first.
    return get_masked_input_and_labels(encoded_text.numpy())

def add_start_end(ragged):
    count = ragged.bounding_shape()[0]
    starts = tf.fill([count, 1], CLS_ID)
    ends = tf.fill([count, 1], SEP_ID)
    return tf.concat([starts, ragged, ends], axis=1)

MAX_SEQ_LEN = 128

def text_to_instance(batch):
    text = batch['text']
    encoded_text = bert_tokenizer.tokenize(text)
    encoded_text = tf.cast(encoded_text.merge_dims(-2, -1), tf.int32)
    encoded_text = encoded_text[:, :MAX_SEQ_LEN - 2]
    encoded_text = add_start_end(encoded_text)
    encoded_text = encoded_text.to_tensor()
    input_ids, masked_lm_labels, masked_lm_weights, masked_lm_positions = tf.py_function(
        create_mlm, [encoded_text], [tf.int32, tf.int32, tf.float32, tf.int32])

    input_type_ids = tf.zeros_like(input_ids)
    input_mask = tf.ones_like(input_ids)

    inputs = {}
    inputs['input_ids'] = input_ids
    inputs['input_type_ids'] = input_type_ids
    inputs['input_mask'] = input_mask
    inputs['masked_lm_positions'] = masked_lm_positions

    labels = {}
    labels['masked_lm_labels'] = masked_lm_labels
    labels['masked_lm_mask'] = masked_lm_weights
    return (inputs, labels)

batch_size = 5
dataset_unsupervised = imdb['unsupervised']
dataset_unsupervised = dataset_unsupervised.batch(batch_size)
dataset_unsupervised = dataset_unsupervised.map(text_to_instance)

for (batch_inputs, batch_labels) in dataset_unsupervised.take(1):
    print(batch_inputs, batch_labels)

from tf_transformers.models import BertModel
from tf_transformers.losses import cross_entropy_loss

model_name = 'bert-base-cased'
model, config = BertModel.get_model(model_name=model_name,
                                    use_masked_lm_positions=True,
                                    return_all_layer_outputs=True)

# Inspect the model signature
model.input
model.output

model_outputs = model(batch_inputs)
model_outputs['all_layer_token_logits']

def loss_fn(y_true_dict, y_pred_dict):
    loss_dict = {}
    loss_holder = []
    for i, layer_output in enumerate(y_pred_dict['all_layer_token_logits']):
        layer_loss = cross_entropy_loss(labels=y_true_dict['masked_lm_labels'],
                                        logits=layer_output,
                                        label_weights=y_true_dict['masked_lm_mask'])
        loss_dict['layer_{}'.format(i)] = layer_loss
        loss_holder.append(layer_loss)
    loss_dict['loss'] = tf.reduce_mean(loss_holder)
    return loss_dict

loss_fn(batch_labels, model_outputs)

class MaskedTextGenerator(tf.keras.callbacks.Callback):
    def __init__(self, tokenizer, top_k=5):
        self.tokenizer = tokenizer
        model, config = BertModel.get_model(model_name='bert-base-cased',
                                            use_masked_lm_positions=False,
                                            return_all_layer_outputs=True)
        self.original_model = model

    def on_epoch_end(self, epoch, logs=None):
        self.original_model.set_weights(self.model.get_weights())
        sample_text = "I have watched this [MASK] and it was awesome"
        input_ids = tf.constant(self.tokenizer.encode(sample_text))
        masked_index = np.where(input_ids == MASK_ID)[0][0]
        input_ids = tf.expand_dims(input_ids, axis=0)
        input_type_ids = tf.zeros_like(input_ids)
        input_mask = tf.ones_like(input_ids)
        inputs = {}
        inputs["input_ids"] = input_ids
        inputs["input_type_ids"] = input_type_ids
        inputs["input_mask"] = input_mask
        outputs = self.original_model(inputs)
        for i, layer_output in enumerate(outputs['all_layer_token_logits']):
            prob_value = tf.reduce_max(layer_output, axis=-1)[0][masked_index]
            predicted_token = self.tokenizer.decode(
                [tf.argmax(layer_output, axis=-1)[0][masked_index]])
            print("Layer {}, {}, {}".format(i, predicted_token, prob_value))

generator_callback = MaskedTextGenerator(tokenizer)
```
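The `0.90` and `1/9` constants in `get_masked_input_and_labels` above may look odd, but together they implement BERT's standard 80/10/10 split over the 15% of selected tokens: 90% are first replaced with `[MASK]`, one ninth of those (i.e. 10% of the selection) are then overwritten with a random token, and the remaining 10% stay unchanged. A standalone NumPy check of that arithmetic (illustrative only, not part of the pipeline above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

selected = rng.random(n) < 0.15                 # tokens picked for prediction
to_mask = selected & (rng.random(n) < 0.90)     # 90% of those get [MASK] first
to_random = to_mask & (rng.random(n) < 1 / 9)   # 1/9 of the masked -> random token

frac_mask = (to_mask & ~to_random).sum() / selected.sum()
frac_random = to_random.sum() / selected.sum()
frac_keep = (selected & ~to_mask).sum() / selected.sum()
print(frac_mask, frac_random, frac_keep)  # approximately 0.80, 0.10, 0.10
```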
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D2_HiddenDynamics/student/W3D2_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Neuromatch Academy: Week 3, Day 2, Tutorial 1
# Hidden Dynamics: Sequential Probability Ratio Test

__Content creators:__ Yicheng Fei and Xaq Pitkow

__Content reviewers:__ John Butler, Matt Krause, Spiros Chavlis, Michael Waskom, Jesse Livezey, and Byron Galbraith

---
# Tutorial Objectives

In W3D1, we learned how to combine the sensory evidence and our prior experience with Bayes' Theorem, producing a posterior probability distribution that would let us choose between the most probable of *two* options (fish being on the left or fish being on the right). Here, we add a *third* option: choosing to collect more evidence before making a decision.

---

In this notebook we will perform a *Sequential Probability Ratio Test* (SPRT) between two hypotheses $s=+1$ and $s=-1$ by running simulations of a *Drift Diffusion Model (DDM)*. As data comes in, we accumulate evidence linearly until a stopping criterion is met before deciding which hypothesis to accept.

In this tutorial, you will
* Simulate the Drift-Diffusion Model.
* Gain intuition about the tradeoff between decision speed and accuracy.
```
#@title Video 1: Overview of Tutorials on Hidden Dynamics
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="ofNRpSpRxl4", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```

---
# Setup

```
# Imports
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

#@title Figure settings
import ipywidgets as widgets  # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/NMA2020/nma.mplstyle")

#@title Helper functions

def simulate_and_plot_SPRT_fixedtime(sigma, stop_time, num_sample, verbose=True):
  """Simulate and plot a SPRT for a fixed amount of time given a std.

  Args:
    sigma (float): Standard deviation of the observations.
    stop_time (int): Number of steps to run before stopping.
    num_sample (int): The number of samples to plot.
  """
  evidence_history_list = []
  if verbose:
    print("#Trial\tTotal_Evidence\tDecision")
  for i in range(num_sample):
    evidence_history, decision, Mvec = simulate_SPRT_fixedtime(sigma, stop_time)
    if verbose:
      print("{}\t{:f}\t{}".format(i, evidence_history[-1], decision))
    evidence_history_list.append(evidence_history)

  fig, ax = plt.subplots()
  maxlen_evidence = np.max(list(map(len, evidence_history_list)))
  ax.plot(np.zeros(maxlen_evidence), '--', c='red', alpha=1.0)
  for evidences in evidence_history_list:
    ax.plot(np.arange(len(evidences)), evidences)
  ax.set_xlabel("Time")
  ax.set_ylabel("Cumulated log likelihood ratio")
  ax.set_title("Log likelihood ratio trajectories under the fixed-time "
               "stopping rule")
  plt.show(fig)


def plot_accuracy_vs_stoptime(stop_time_list, accuracy_list):
  """Plot average decision accuracy against stopping time.

  Args:
    stop_time_list (list): List of number of steps to run before stopping.
    accuracy_list (list): List of accuracies, one for each stop time.
  """
  fig, ax = plt.subplots()
  ax.plot(stop_time_list, accuracy_list)
  ax.set_xlabel('Stop Time')
  ax.set_ylabel('Average Accuracy')
  plt.show(fig)


def simulate_and_plot_SPRT_fixedthreshold(sigma, num_sample, alpha, verbose=True):
  """Simulate and plot a SPRT with a fixed evidence threshold given a std.

  Args:
    sigma (float): Standard deviation of the observations.
    num_sample (int): The number of samples to plot.
    alpha (float): Desired error rate used to set the decision threshold.
  """
  # calculate evidence threshold from error rate
  threshold = threshold_from_errorrate(alpha)

  # run simulation
  evidence_history_list = []
  if verbose:
    print("#Trial\tTime\tCumulated Evidence\tDecision")
  for i in range(num_sample):
    evidence_history, decision, Mvec = simulate_SPRT_threshold(sigma, threshold)
    if verbose:
      print("{}\t{}\t{:f}\t{}".format(i, len(Mvec), evidence_history[-1], decision))
    evidence_history_list.append(evidence_history)

  fig, ax = plt.subplots()
  maxlen_evidence = np.max(list(map(len, evidence_history_list)))
  ax.plot(np.repeat(threshold, maxlen_evidence + 1), c="red")
  ax.plot(-np.repeat(threshold, maxlen_evidence + 1), c="red")
  ax.plot(np.zeros(maxlen_evidence + 1), '--', c='red', alpha=0.5)
  for evidences in evidence_history_list:
    ax.plot(np.arange(len(evidences) + 1), np.concatenate([[0], evidences]))
  ax.set_xlabel("Time")
  ax.set_ylabel("Cumulated log likelihood ratio")
  ax.set_title("Log likelihood ratio trajectories under the threshold rule")
  plt.show(fig)


def simulate_and_plot_accuracy_vs_threshold(sigma, threshold_list, num_sample):
  """Simulate and plot accuracy vs. decision speed for a set of thresholds
  given a std.

  Args:
    sigma (float): Standard deviation of the observations.
    threshold_list (list): List of thresholds for making a decision.
    num_sample (int): The number of samples to plot.
  """
  accuracies, decision_speeds = simulate_accuracy_vs_threshold(sigma,
                                                              threshold_list,
                                                              num_sample)

  # Plotting
  fig, ax = plt.subplots()
  ax.plot(decision_speeds, accuracies, linestyle="--", marker="o")
  ax.plot([np.amin(decision_speeds), np.amax(decision_speeds)],
          [0.5, 0.5], c='red')
  ax.set_xlabel("Average Decision speed")
  ax.set_ylabel('Average Accuracy')
  ax.set_title("Speed/Accuracy Tradeoff")
  ax.set_ylim(0.45, 1.05)
  plt.show(fig)


def threshold_from_errorrate(alpha):
  """Calculate the log likelihood ratio threshold from a desired error rate `alpha`

  Args:
    alpha (float): in (0,1), the desired error rate

  Returns:
    threshold: corresponding evidence threshold
  """
  threshold = np.log((1. - alpha) / alpha)
  return threshold
```

---
# Section 1: Introduction to the SPRT

## Section 1.1: The random dot task

A classic experimental task in neuroscience is the random dot kinematogram ([Newsome, Britten, Movshon 1989](https://www.nature.com/articles/341052a0.pdf)), in which a pattern of dots moves in random directions, but with some weak coherence that favors a net rightward or leftward motion. The observer must guess the direction. Neurons in the brain are informative about this task, and have responses that correlate with the choice. Below is a video by Pamela Reinagel of a rat guessing the direction of motion in such a task.

```
#@title Video 2: Rat performing the Random Dot Motion Task
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="oDxcyTn-0os", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```

In this tutorial, we will consider a model for this random dot motion task. In each time bin $t$, we are shown dots moving at net measured velocity $m_t$, either in a negative ($m<0$) or positive ($m>0$) direction.
Although the dots' velocities vary over time, the $m_t$ are generated by a fixed probability distribution $p(m|s)$ that depends on a fixed latent variable $s=\pm 1$:

$$
\begin{aligned}
p(m|s=+1) &= \mathcal{N}\left(\mu_+,\sigma^2\right) \\
&\quad\textrm{or} \\
p(m|s=-1) &= \mathcal{N}\left(\mu_-,\sigma^2\right)
\end{aligned}
$$

Here we assume the measurement probabilities have the same variances regardless of $s$, and different means. We, like the rat, want to synthesize our evidence to determine whether $s=+1$ or $-1$.

## Section 1.2: Sequential Probability Ratio Test (SPRT)

```
#@title Video 3: Decision making: Sequential Probability Ratio Test
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="DGoPoLkDiUw", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```

<img src="https://drive.google.com/uc?export=view&id=1vE2XQ5qMQ_pJgzgZRCnNVQEpP-nupt87" alt="HMM drawing" width="400">

Suppose we obtain a sequence of independent measurements $m_{1:T}$ from a distribution $p(m_{1:T}|s)$. Remember that $s$ is our hidden state and, for now, is either $-1$ or $1$. Our measurements come from either $p(m_t|s=-1)$ or $p(m_t|s=1)$. We wish to test which value of $s$ is more likely given our sequence of measurements.

A crucial assumption in Hidden Markov Models is that all measurements are drawn independently given the latent state. In the fishing example, you might have a high or low probability of catching fish depending on where the school of fish is --- but if you already *knew* the location $s$ of the school (which is what we mean by a conditional probability $p(m|s)$), then your chances of catching fish at one time are unaffected by whether you caught fish there previously.
Mathematically, we write this independence as $p(m_1, m_2|s)=p(m_1|s)p(m_2|s)$, using the product rule of probabilities. When we consider a whole time series of measurements $m_{1:T}$, we can compute the product $p(m_{1:T}|s)=\prod_{t=1}^T p(m_t|s)$.

We can then compare the total evidence up to time $T$ for our two hypotheses (of whether our state is $-1$ or $1$) by taking a ratio of the likelihoods:

$$L_T=\frac{\prod_{t=1}^T p(m_t|s=+1)}{\prod_{t=1}^T p(m_t|s=-1)}$$

The above tells us the likelihood of the measurements if $s = 1$ divided by the likelihood of the measurements if $s = -1$. It is convenient to take the _log_ of this likelihood ratio, converting the product into a sum:

$$S_T = \log L_T = \sum_{t=1}^T \log \frac{p(m_t|s=+1)}{p(m_t|s=-1)} \tag{1}$$

We can name each term in the sum as

$$\Delta_t= \log \frac{p(m_t|s=+1)}{p(m_t|s=-1)}$$

Due to the independence of measurements, this can be calculated recursively _online_ as new data points arrive:

$$ S_t = S_{t-1} + \Delta_t \tag{2}$$

where we update our log-likelihood ratio $S_t$ by $\Delta_t$ every time we see a new measurement $m_t$.

We will use $S_t$ to make our decisions! If $S_t$ is positive, the likelihood of $s = 1$ is higher. If it is negative, the likelihood of $s = -1$ is higher. We still need to decide *when* to commit to a decision, though, as $S_t$ can change with each new measurement. A rule for making a decision can be implemented in two ways:

1. Fixed time (Section 2): Stop collecting data after a predetermined number of measurements $t$, and accept the hypothesis that $s=+1$ if $S_t>0$, otherwise accept $s=-1$ if $S_t<0$ (and choose randomly if $S_t=0$). The significance level or desired error rate $\alpha$ can then be determined as $\alpha = (1+\exp(|S_t|))^{-1}$.
2. Confidence threshold (Bonus Section 1): Choose an acceptable error rate $\alpha$.
Then accept the hypothesis $s=1$ when $S_t \ge b=\log \frac{1-\alpha}{\alpha}$, analogously accept $s=-1$ when $S_t\le -b$, and keep collecting data until one of those confidence thresholds is reached.

Historical note: this is the rule that Alan Turing used to break the Enigma code and win World War II!

## Section 1.3: SPRT as a Drift Diffusion Model (DDM)

```
#@title Video 4: SPRT and the Random Dot Motion Task
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="7WBB4M_Vf58", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```

The evidence favoring the two latent states is random, but according to our model it will weakly favor one hypothesis over another. The accumulation of evidence will thus "drift" toward one outcome, while "diffusing" in random directions, hence the term "drift-diffusion model" (DDM). The process is most likely (but not guaranteed) to reach the correct outcome eventually.

We can do a little math below to show that the update $\Delta_t$ to the log-likelihood ratio is a Gaussian random number. You can derive this yourself, filling in the steps below, or skip to the end result.

**Bonus exercise: derive the Drift Diffusion Model from the SPRT**

Assume measurements are Gaussian-distributed with different means depending on the discrete latent variable $s$:

$$p(m|s=\pm 1) = \mathcal{N}\left(\mu_\pm,\sigma^2\right)=\frac{1}{\sqrt{2\pi\sigma^2}}\exp{\left[-\frac{(m-\mu_\pm)^2}{2\sigma^2}\right]}$$

In the log likelihood ratio for a single data point $m_t$, the normalizations cancel to give

$$\Delta_t=\log \frac{p(m_t|s=+1)}{p(m_t|s=-1)} = \frac{1}{2\sigma^2}\left[-\left(m_t-\mu_+\right)^2 + (m_t-\mu_-)^2\right] \tag{5}$$

It's convenient to rewrite $m=\mu_\pm + \sigma \epsilon$, where $\epsilon\sim \mathcal{N}(0,1)$ is a standard Gaussian variable with zero mean and unit variance. (Why does this give the correct probability for $m$?)
The preceding formula can then be rewritten as

$$\Delta_t = \frac{1}{2\sigma^2}\left( -((\mu_\pm+\sigma\epsilon)-\mu_+)^2 + ((\mu_\pm+\sigma\epsilon)-\mu_-)^2\right) \tag{6}$$

Let's assume that $s=+1$, so $\mu_\pm=\mu_+$ (if $s=-1$ then the result is the same with a reversed sign). In that case, the means in the first term $m_t-\mu_+$ cancel, leaving

$$\Delta_t = \frac{(\mu_+-\mu_-)^2}{2\sigma^2}+\frac{\mu_+-\mu_-}{\sigma}\epsilon \tag{7}$$

where the first term is the constant *drift* and the second term is the random *diffusion*. Adding these $\Delta_t$ over time gives a biased random walk known as the Drift Diffusion Model, $S_t=\sum_t \Delta_t$. The log-likelihood ratio is then normally distributed with a time-dependent mean and variance,

$$S_t\sim\mathcal{N}\left(\tfrac{1}{2}\frac{\delta\mu^2}{\sigma^2}t,\ \frac{\delta\mu^2}{\sigma^2}t\right)$$

where $\delta\mu=\mu_+-\mu_-$. The mean and the variance both increase linearly with time, so the standard deviation grows more slowly, only as $\sqrt{t}$. This means that the distributions become more and more distinct as evidence is acquired over time. You will simulate this process below.

**Neural application**

Neural responses in lateral intraparietal cortex (LIP) to the random-dot kinematogram have been well described by this drift-diffusion process (Huk and Shadlen 2005), suggesting that these neurons gradually integrate evidence. Interestingly, there is also a more recent competing hypothesis that neural activity jumps from low to high at random latent times, such that on average it looks like a gradual ramping (Latimer et al. 2015). Scientific evidence about these processes is judged by how well the corresponding Hidden Markov Models fit the data!
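The closed-form mean and variance of $S_t$ above can be checked with a quick Monte-Carlo simulation. A minimal sketch in plain NumPy (the parameter values are chosen arbitrarily for illustration and are not the tutorial's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

mu_pos, mu_neg, sigma = 1.0, -1.0, 2.0  # arbitrary example parameters
delta_mu = mu_pos - mu_neg
T, n_trials = 100, 2000

# Each increment: constant drift plus Gaussian diffusion (s = +1 is true)
drift = delta_mu**2 / (2 * sigma**2)
diffusion = (delta_mu / sigma) * rng.standard_normal((n_trials, T))
S = np.cumsum(drift + diffusion, axis=1)  # trajectories S_1 .. S_T

# Compare endpoint statistics with the theory:
#   mean = (delta_mu^2 / 2 sigma^2) T,  var = (delta_mu^2 / sigma^2) T
print(S[:, -1].mean(), drift * T)                       # empirical vs. theoretical mean
print(S[:, -1].var(), (delta_mu**2 / sigma**2) * T)     # empirical vs. theoretical variance
```

With these numbers the theoretical endpoint distribution is $\mathcal{N}(50, 100)$, and the empirical mean and variance over 2000 trials land close to those values.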
---
# Section 2: DDM with fixed-time stopping rule

## Section 2.1: Simulation of DDM with fixed-time stopping rule

```
#@title Video 5: Simulate the DDM with a fixed-time stopping rule
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="9WNAZnEa64Y", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```

### Coding Exercise 1: Simulating an SPRT model

Assume we are performing a random dot motion task, and at each time we see a moving dot with a sensory measurement $m_t$ of velocity. All data points are sampled from the same distribution $p$, which is either $p_+=\mathcal{N}\left(\mu,\sigma^2\right)$ or $p_-=\mathcal{N}\left(-\mu,\sigma^2\right)$, depending on which direction the dots are moving in. Let's now generate some simulated data under this setting and perform SPRT using the fixed-time stopping rule. In this exercise, without loss of generality, we assume the true data-generating model is $p_+$.

We will implement a function `simulate_SPRT_fixedtime`, which will generate measurements based on $\mu$, $\sigma$, and the true state. It will then accumulate evidence and output a decision on the state. We will use the helper function `log_likelihood_ratio`, implemented in the next cell, which computes the log of the likelihood of the state being 1 divided by the likelihood of the state being -1. We will then run and visualize 10 simulations of evidence accumulation and decision.
```
# @markdown Execute this cell to enable the helper function log_likelihood_ratio

def log_likelihood_ratio(Mvec, p0, p1):
  """Given a sequence (vector) of observed data, calculate the log of the
  likelihood ratio of p1 and p0

  Args:
    Mvec (numpy vector): A vector of scalar measurements
    p0 (Gaussian random variable): A normal random variable with `logpdf` method
    p1 (Gaussian random variable): A normal random variable with `logpdf` method

  Returns:
    llvec: a vector of log likelihood ratios for each input data point
  """
  return p1.logpdf(Mvec) - p0.logpdf(Mvec)


def simulate_SPRT_fixedtime(sigma, stop_time, true_dist=1):
  """Simulate a Sequential Probability Ratio Test with fixed time stopping
  rule. Two observation models are 1D Gaussian distributions N(1,sigma^2)
  and N(-1,sigma^2).

  Args:
    sigma (float): Standard deviation of observation models
    stop_time (int): Number of samples to take before stopping
    true_dist (1 or -1): Which state is the true state.

  Returns:
    evidence_history (numpy vector): the history of cumulated evidence given
                                     generated data
    decision (int): 1 for s = 1, -1 for s = -1
    Mvec (numpy vector): the generated sequences of measurement data in this trial
  """
  #################################################
  ## TODO for students ##
  # Fill out function and remove
  raise NotImplementedError("Student exercise: complete simulate_SPRT_fixedtime")
  #################################################

  # Set means of observation distributions
  mu_pos = 1.0
  mu_neg = -1.0

  # Make observation distributions
  p_pos = stats.norm(loc=mu_pos, scale=sigma)
  p_neg = stats.norm(loc=mu_neg, scale=sigma)

  # Generate a random sequence of measurements
  if true_dist == 1:
    Mvec = p_pos.rvs(size=stop_time)
  else:
    Mvec = p_neg.rvs(size=stop_time)

  # Calculate log likelihood ratio for each measurement (delta_t)
  ll_ratio_vec = log_likelihood_ratio(Mvec, p_neg, p_pos)

  # Calculate cumulated evidence (S) given a vector of individual evidences (hint: np.cumsum)
  evidence_history = ...

  # Make decision
  if evidence_history[-1] > 0:
    # Decision given positive S_t (last value of evidence history)
    decision = ...
  elif evidence_history[-1] < 0:
    # Decision given negative S_t (last value of evidence history)
    decision = ...
  else:
    # Random decision if S_t is 0
    decision = np.random.randint(2)

  return evidence_history, decision, Mvec


# Set random seed
np.random.seed(100)

# Set model parameters
sigma = 3.5      # standard deviation for p+ and p-
num_sample = 10  # number of simulations to run
stop_time = 150  # number of steps before stopping

simulate_and_plot_SPRT_fixedtime(sigma, stop_time, num_sample)
```

[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_HiddenDynamics/solutions/W3D2_Tutorial1_Solution_85d9b919.py)

*Example output:*

<img alt='Solution hint' align='left' width=573 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W3D2_HiddenDynamics/static/W3D2_Tutorial1_Solution_85d9b919_1.png>

### Interactive Demo 2.1: Trajectories under the fixed-time stopping rule

In the following demo, you can change the noise level in the observation model (sigma) and the number of time steps before stopping (stop_time) using the sliders. You will then observe 10 simulations with those parameters.

1. Are you more likely to make the wrong decision (choose the incorrect state) with high or low noise?
2. What happens when sigma is very small? Why?
3. Are you more likely to make the wrong decision (choose the incorrect state) with fewer or more time steps before stopping?

```
#@markdown Make sure you execute this cell to enable the widget!

def simulate_SPRT_fixedtime(sigma, stop_time, true_dist=1):
  """Simulate a Sequential Probability Ratio Test with fixed time stopping
  rule. Two observation models are 1D Gaussian distributions N(1,sigma^2)
  and N(-1,sigma^2).

  Args:
    sigma (float): Standard deviation of observation models
    stop_time (int): Number of samples to take before stopping
    true_dist (1 or -1): Which state is the true state.

  Returns:
    evidence_history (numpy vector): the history of cumulated evidence given
                                     generated data
    decision (int): 1 for s = 1, -1 for s = -1
    Mvec (numpy vector): the generated sequences of measurement data in this trial
  """
  # Set means of observation distributions
  mu_pos = 1.0
  mu_neg = -1.0

  # Make observation distributions
  p_pos = stats.norm(loc=mu_pos, scale=sigma)
  p_neg = stats.norm(loc=mu_neg, scale=sigma)

  # Generate a random sequence of measurements
  if true_dist == 1:
    Mvec = p_pos.rvs(size=stop_time)
  else:
    Mvec = p_neg.rvs(size=stop_time)

  # Calculate log likelihood ratio for each measurement (delta_t)
  ll_ratio_vec = log_likelihood_ratio(Mvec, p_neg, p_pos)

  # Calculate cumulated evidence (S) given a vector of individual evidences
  evidence_history = np.cumsum(ll_ratio_vec)

  # Make decision
  if evidence_history[-1] > 0:
    # Decision given positive S_t (last value of evidence history)
    decision = 1
  elif evidence_history[-1] < 0:
    # Decision given negative S_t (last value of evidence history)
    decision = -1
  else:
    # Random decision if S_t is 0
    decision = np.random.randint(2)

  return evidence_history, decision, Mvec


np.random.seed(100)
num_sample = 10

@widgets.interact
def plot(sigma=(0.05, 10.0, 0.05), stop_time=(5, 500, 1)):
  simulate_and_plot_SPRT_fixedtime(sigma, stop_time, num_sample, verbose=False)
```

[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_HiddenDynamics/solutions/W3D2_Tutorial1_Solution_8ab841e6.py)

## Section 2.2: Accuracy vs stopping time

If you stop taking samples too early (e.g., make a decision after only seeing 5 samples), or there's a huge amount of observation noise that buries the signal, you are likely to be driven by observation noise to a negative cumulated log likelihood ratio and thus make a wrong decision. You could get a sense of this by increasing the noise level or decreasing the stopping time in the last exercise.

Now let's look quantitatively at how decision accuracy varies with the number of samples we see. Accuracy is simply defined as the proportion of correct trials across our repeated simulations: $\frac{\# \textrm{ correct decisions}}{\# \textrm{ total simulation runs}}$.

### Coding Exercise 2: The Speed/Accuracy Tradeoff

We will fix our observation noise level. In this exercise you will implement a function to run several repeated simulations for a certain stopping time and calculate the average decision accuracy. We will then visualize the relation between average decision accuracy and stopping time.

```
def simulate_accuracy_vs_stoptime(sigma, stop_time_list, num_sample):
  """Calculate the average decision accuracy vs. stopping time by running
  repeated SPRT simulations for each stop time.

  Args:
    sigma (float): standard deviation for observation model
    stop_time_list (list-like object): a list of stopping times to run over
    num_sample (int): number of simulations to run per stopping time

  Returns:
    accuracy_list: a list of average accuracies corresponding to input `stop_time_list`
    decisions_list: a list of decisions made in all trials
  """
  #################################################
  ## TODO for students ##
  # Fill out function and remove
  raise NotImplementedError("Student exercise: complete simulate_accuracy_vs_stoptime")
  #################################################

  # Determine true state (1 or -1)
  true_dist = 1

  # Set up tracker of accuracy and decisions
  accuracies = np.zeros(len(stop_time_list),)
  decisions_list = []

  # Loop over stop times
  for i_stop_time, stop_time in enumerate(stop_time_list):

    # Set up tracker of decisions for this stop time
    decisions = np.zeros((num_sample,))

    # Loop over samples
    for i in range(num_sample):

      # Simulate run for this stop time (hint: last exercise)
      _, decision, _ = ...

      # Log decision
      decisions[i] = decision

    # Calculate accuracy
    accuracies[i_stop_time] = ...

    # Log decisions
    decisions_list.append(decisions)

  return accuracies, decisions_list


# Set random seed
np.random.seed(100)

# Set parameters of model
sigma = 4.65      # standard deviation for observation noise
num_sample = 200  # number of simulations to run for each stopping time
stop_time_list = np.arange(1, 150, 10)  # Array of stopping times to use

# Calculate accuracies for each stop time
accuracies, _ = simulate_accuracy_vs_stoptime(sigma, stop_time_list, num_sample)

# Visualize
plot_accuracy_vs_stoptime(stop_time_list, accuracies)
```

[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_HiddenDynamics/solutions/W3D2_Tutorial1_Solution_d5a87445.py)

*Example output:*

<img alt='Solution hint' align='left' width=560 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W3D2_HiddenDynamics/static/W3D2_Tutorial1_Solution_d5a87445_0.png>

### Interactive Demo 2.2: Accuracy versus stop-time

In the following demo, we will show the same visualization as in the previous exercise, but you will be able to vary the noise level `sigma` of the observation distributions. First think and discuss:

1. What do you expect low levels of noise to do to the accuracy vs stop time plot?
2. What do you expect high levels of noise to do to the accuracy vs stop time plot?

Play with the demo and see if you were correct or not.

```
#@markdown Make sure you execute this cell to enable the widget!

def simulate_accuracy_vs_stoptime(sigma, stop_time_list, num_sample):
  """Calculate the average decision accuracy vs. stopping time by running
  repeated SPRT simulations for each stop time.

  Args:
    sigma (float): standard deviation for observation model
    stop_time_list (list-like object): a list of stopping times to run over
    num_sample (int): number of simulations to run per stopping time

  Returns:
    accuracy_list: a list of average accuracies corresponding to input `stop_time_list`
    decisions_list: a list of decisions made in all trials
  """
  # Determine true state (1 or -1)
  true_dist = 1

  # Set up tracker of accuracy and decisions
  accuracies = np.zeros(len(stop_time_list),)
  decisions_list = []

  # Loop over stop times
  for i_stop_time, stop_time in enumerate(stop_time_list):

    # Set up tracker of decisions for this stop time
    decisions = np.zeros((num_sample,))

    # Loop over samples
    for i in range(num_sample):

      # Simulate run for this stop time (hint: last exercise)
      _, decision, _ = simulate_SPRT_fixedtime(sigma, stop_time, true_dist)

      # Log decision
      decisions[i] = decision

    # Calculate accuracy
    accuracies[i_stop_time] = np.sum(decisions == true_dist) / decisions.shape[0]

    # Log decisions
    decisions_list.append(decisions)

  return accuracies, decisions_list


np.random.seed(100)
num_sample = 100
stop_time_list = np.arange(1, 150, 10)

@widgets.interact
def plot(sigma=(0.05, 10.0, 0.05)):
  # Calculate accuracies for each stop time
  accuracies, _ = simulate_accuracy_vs_stoptime(sigma, stop_time_list, num_sample)
  # Visualize
  plot_accuracy_vs_stoptime(stop_time_list, accuracies)
```

[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_HiddenDynamics/solutions/W3D2_Tutorial1_Solution_a956a168.py)

Please see Bonus Section 1 to learn about and work with a different stopping rule for DDMs: a fixed threshold on confidence.

---
# Summary

Good job! By simulating Drift Diffusion Models to perform decision making, you have learnt how to

1. Calculate individual sample evidence as the log likelihood ratio of two candidate models, accumulate evidence from new data points, and make a decision based on current evidence in `Exercise 1`
2.
Run repeated simulations to get an estimate of decision accuracies in `Exercise 2`
3. Implement the thresholding stopping rule, where we can control our error rate by taking adequate amounts of data, and calculate the evidence threshold from a desired error rate in `Exercise 3`
4. Explore and gain intuition about the speed/accuracy tradeoff for perceptual decision making in `Exercise 4`

---

# Bonus

## Bonus Section 1: DDM with fixed thresholds on confidence

```
#@title Video 6: Fixed threshold on confidence
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="E8lvgFeIGQM", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```

The next exercises consider a variant of the DDM with fixed confidence thresholds instead of a fixed decision time. This may be a better description of neural integration. Please complete this material after you have finished the main content of all tutorials, if you would like extra information about this topic.

### Exercise 3: Simulating the DDM with fixed thresholds

In this exercise, we will use thresholding as our stopping rule and observe the behavior of the DDM. With the thresholding stopping rule, we define a desired error rate and continue making measurements until that error rate is reached. Experimental evidence suggests that evidence accumulation and a thresholding stopping strategy happen at the neuronal level (see [this article](https://www.annualreviews.org/doi/full/10.1146/annurev.neuro.29.051605.113038) for further reading).

* Complete the function `threshold_from_errorrate` to calculate the evidence threshold from the desired error rate $\alpha$ as described in the formulas below. The evidence thresholds $th_R$ and $th_L$ for $p_R$ and $p_L$ are opposites of each other, as shown below, so you can just return the absolute value.
$$
\begin{align}
th_{L} &= \log \frac{\alpha}{1-\alpha} = -th_{R} \\
th_{R} &= \log \frac{1-\alpha}{\alpha} = -th_{L}
\end{align}
$$

* Complete the function `simulate_SPRT_threshold` to simulate an SPRT with the thresholding stopping rule, given a noise level and desired threshold
* Run repeated simulations for a given noise level and a desired error rate, and visualize the DDM traces using our provided code

```
def simulate_SPRT_threshold(sigma, threshold, true_dist=1):
  """Simulate a Sequential Probability Ratio Test with thresholding stopping
  rule. Two observation models are 1D Gaussian distributions N(1,sigma^2) and
  N(-1,sigma^2).

  Args:
    sigma (float): Standard deviation
    threshold (float): Desired log likelihood ratio threshold to achieve
                       before making decision

  Returns:
    evidence_history (numpy vector): the history of cumulated evidence given
                                     generated data
    decision (int): 1 for pR, 0 for pL
    data (numpy vector): the generated sequences of data in this trial
  """
  muL = -1.0
  muR = 1.0
  pL = stats.norm(muL, sigma)
  pR = stats.norm(muR, sigma)

  has_enough_data = False
  data_history = []
  evidence_history = []
  current_evidence = 0.0

  # Keep sampling data until threshold is crossed
  while not has_enough_data:
    if true_dist == 1:
      Mvec = pR.rvs()
    else:
      Mvec = pL.rvs()

    ########################################################################
    # Insert your code here to:
    #    * Calculate the log-likelihood ratio for the new sample
    #    * Update the accumulated evidence
    raise NotImplementedError("`simulate_SPRT_threshold` is incomplete")
    ########################################################################

    # individual log likelihood ratios
    ll_ratio = log_likelihood_ratio(...)

    # cumulated evidence for this chunk
    evidence_history.append(...)
    # update the collection of all data
    data_history.append(Mvec)

    current_evidence = evidence_history[-1]

    # check if we've got enough data
    if abs(current_evidence) > threshold:
      has_enough_data = True

  data_history = np.array(data_history)
  evidence_history = np.array(evidence_history)

  # Make decision
  if evidence_history[-1] > 0:
    decision = 1
  elif evidence_history[-1] < 0:
    decision = 0
  else:
    decision = np.random.randint(2)

  return evidence_history, decision, data_history


np.random.seed(100)
sigma = 2.8
num_sample = 10
log10_alpha = -6.5  # log10(alpha)
alpha = np.power(10.0, log10_alpha)

################################################################################
# Un-comment the following code after completing this exercise
################################################################################
# simulate_and_plot_SPRT_fixedthreshold(sigma, num_sample, alpha)
```

[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_HiddenDynamics/solutions/W3D2_Tutorial1_Solution_f93723a3.py)

*Example output:*

<img alt='Solution hint' align='left' width=560 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W3D2_HiddenDynamics/static/W3D2_Tutorial1_Solution_f93723a3_2.png>

### Interactive Demo: DDM with fixed threshold

**Suggestion**

* Play with different values of `alpha` and `sigma` and observe how that affects the dynamics of the Drift-Diffusion Model.

```
#@title
#@markdown Make sure you execute this cell to enable the widget!
np.random.seed(100)
num_sample = 10

@widgets.interact
def plot(sigma=(0.05, 10.0, 0.05), log10_alpha=(-8, -1, .1)):
  alpha = np.power(10.0, log10_alpha)
  simulate_and_plot_SPRT_fixedthreshold(sigma, num_sample, alpha, verbose=False)
```

### Exercise 4: Speed/Accuracy Tradeoff Revisited

The faster you make a decision, the lower your accuracy often is. This phenomenon is known as the **speed/accuracy tradeoff**.
Humans can make this tradeoff in a wide range of situations, and many animal species, including ants, bees, rodents, and monkeys, show similar effects.

To illustrate the speed/accuracy tradeoff under the thresholding stopping rule, let's run some simulations under different thresholds and look at how average decision "speed" (1/length) changes with average decision accuracy. We use speed rather than decision length because, in real experiments, subjects can be incentivized to respond faster or slower; it's much harder to precisely control their decision time or error threshold.

* Complete the function `simulate_accuracy_vs_threshold` to simulate and compute average accuracies vs. average decision lengths for a list of error thresholds. You will need to supply code to calculate average decision 'speed' from the lengths of trials. You should also calculate the overall accuracy across these trials.

* We've set up a list of error thresholds. Run repeated simulations, collect the average accuracy and average length for each error rate in this list, and use our provided code to visualize the speed/accuracy tradeoff. You should see a positive correlation between length and accuracy.

```
def simulate_accuracy_vs_threshold(sigma, threshold_list, num_sample):
  """Calculate the average decision accuracy vs. average decision length by
  running repeated SPRT simulations with thresholding stopping rule for each
  threshold.
  Args:
    sigma (float): standard deviation for observation model
    threshold_list (list-like object): a list of evidence thresholds to run over
    num_sample (int): number of simulations to run per threshold

  Returns:
    accuracy_list: a list of average accuracies corresponding to input `threshold_list`
    decision_speed_list: a list of average decision speeds
  """

  decision_speed_list = []
  accuracy_list = []

  for threshold in threshold_list:
    decision_time_list = []
    decision_list = []

    for i in range(num_sample):

      # run simulation and get decision of current simulation
      _, decision, Mvec = simulate_SPRT_threshold(sigma, threshold)
      decision_time = len(Mvec)
      decision_list.append(decision)
      decision_time_list.append(decision_time)

    ########################################################################
    # Insert your code here to:
    #      * Calculate mean decision speed given a list of decision times
    #          * Hint: Think about speed as being inversely proportional
    #            to decision_length. If it takes 10 seconds to make one
    #            decision, our "decision speed" is 0.1 decisions per second.
    #      * Calculate the decision accuracy
    raise NotImplementedError("`simulate_accuracy_vs_threshold` is incomplete")
    ########################################################################

    # Calculate and store average decision speed and accuracy
    decision_speed = ...
    decision_accuracy = ...
    decision_speed_list.append(decision_speed)
    accuracy_list.append(decision_accuracy)

  return accuracy_list, decision_speed_list


################################################################################
# Un-comment the following code block after completing this exercise
################################################################################
# np.random.seed(100)
# sigma = 3.75
# num_sample = 200
# alpha_list = np.logspace(-2, -0.1, 8)
# threshold_list = threshold_from_errorrate(alpha_list)
# simulate_and_plot_accuracy_vs_threshold(sigma, threshold_list, num_sample)
```

[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_HiddenDynamics/solutions/W3D2_Tutorial1_Solution_656ae75a.py)

*Example output:*

<img alt='Solution hint' align='left' width=560 height=416 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W3D2_HiddenDynamics/static/W3D2_Tutorial1_Solution_656ae75a_0.png>

### Interactive Demo: Speed/Accuracy with a threshold rule

**Suggestions**

* Play with different values of the noise level `sigma` and observe how that affects the speed/accuracy tradeoff.

```
#@title
#@markdown Make sure you execute this cell to enable the widget!
np.random.seed(100)
num_sample = 100
alpha_list = np.logspace(-2, -0.1, 8)
threshold_list = threshold_from_errorrate(alpha_list)

@widgets.interact
def plot(sigma=(0.05, 10.0, 0.05)):
  simulate_and_plot_accuracy_vs_threshold(sigma, threshold_list, num_sample)
```
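The mapping from a desired error rate $\alpha$ to an evidence threshold is small enough to sketch directly. The snippet below is one possible implementation of `threshold_from_errorrate`, written from the formula $th_R = \log\frac{1-\alpha}{\alpha}$ in Exercise 3; it is a sketch, not the tutorial's own helper code:

```python
import numpy as np

def threshold_from_errorrate(alpha):
    """Evidence threshold |th| = log((1 - alpha) / alpha) for error rate alpha.

    The two decision thresholds are symmetric: +|th| (decide pR) and
    -|th| (decide pL), so returning the magnitude is enough.
    """
    alpha = np.asarray(alpha, dtype=float)
    return np.log((1.0 - alpha) / alpha)

# A stricter error rate demands more accumulated evidence:
print(threshold_from_errorrate(0.1))                    # log(9) ≈ 2.197
print(threshold_from_errorrate(np.array([0.1, 0.01])))  # vectorized
```

Because `np.asarray` accepts arrays, this version also covers the vectorized call `threshold_from_errorrate(alpha_list)` used in the demos above.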
# Optimization Methods Until now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days vs. just a few hours to get a good result. Gradient descent goes "downhill" on a cost function $J$. Think of it as trying to do this: <img src="images/cost.jpg" style="width:650px;height:300px;"> <caption><center> <u> **Figure 1** </u>: **Minimizing the cost is like finding the lowest point in a hilly landscape**<br> At each step of the training, you update your parameters following a certain direction to try to get to the lowest possible point. </center></caption> **Notations**: As usual, $\frac{\partial J}{\partial a } = $ `da` for any variable `a`. To get started, run the following code to import the libraries you will need. ``` import numpy as np import matplotlib.pyplot as plt import scipy.io import math import sklearn import sklearn.datasets from opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation from opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset from testCases import * %matplotlib inline plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' ``` ## 1 - Gradient Descent A simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent. **Warm-up exercise**: Implement the gradient descent update rule. 
The gradient descent rule is, for $l = 1, ..., L$: $$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{1}$$ $$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{2}$$ where L is the number of layers and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding. ``` # GRADED FUNCTION: update_parameters_with_gd def update_parameters_with_gd(parameters, grads, learning_rate): """ Update parameters using one step of gradient descent Arguments: parameters -- python dictionary containing your parameters to be updated: parameters['W' + str(l)] = Wl parameters['b' + str(l)] = bl grads -- python dictionary containing your gradients to update each parameters: grads['dW' + str(l)] = dWl grads['db' + str(l)] = dbl learning_rate -- the learning rate, scalar. Returns: parameters -- python dictionary containing your updated parameters """ L = len(parameters) // 2 # number of layers in the neural networks # Update rule for each parameter for l in range(L): ### START CODE HERE ### (approx. 
2 lines) parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads['dW' + str(l+1)] parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads['db' + str(l+1)] ### END CODE HERE ### return parameters parameters, grads, learning_rate = update_parameters_with_gd_test_case() parameters = update_parameters_with_gd(parameters, grads, learning_rate) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ``` **Expected Output**: <table> <tr> <td > **W1** </td> <td > [[ 1.63535156 -0.62320365 -0.53718766] [-1.07799357 0.85639907 -2.29470142]] </td> </tr> <tr> <td > **b1** </td> <td > [[ 1.74604067] [-0.75184921]] </td> </tr> <tr> <td > **W2** </td> <td > [[ 0.32171798 -0.25467393 1.46902454] [-2.05617317 -0.31554548 -0.3756023 ] [ 1.1404819 -1.09976462 -0.1612551 ]] </td> </tr> <tr> <td > **b2** </td> <td > [[-0.88020257] [ 0.02561572] [ 0.57539477]] </td> </tr> </table> A variant of this is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just 1 example. The update rule that you have just implemented does not change. What changes is that you would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent. - **(Batch) Gradient Descent**: ``` python X = data_input Y = labels parameters = initialize_parameters(layers_dims) for i in range(0, num_iterations): # Forward propagation a, caches = forward_propagation(X, parameters) # Compute cost. cost = compute_cost(a, Y) # Backward propagation. grads = backward_propagation(a, caches, parameters) # Update parameters. 
    parameters = update_parameters(parameters, grads)
```

- **Stochastic Gradient Descent**:

```python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
    for j in range(0, m):
        # Forward propagation on a single example (the j:j+1 slice keeps it 2-D)
        a, caches = forward_propagation(X[:, j:j+1], parameters)
        # Compute cost
        cost = compute_cost(a, Y[:, j:j+1])
        # Backward propagation
        grads = backward_propagation(a, caches, parameters)
        # Update parameters
        parameters = update_parameters(parameters, grads)
```

In Stochastic Gradient Descent, you use only 1 training example before updating the gradients. When the training set is large, SGD can be faster. But the parameters will "oscillate" toward the minimum rather than converge smoothly. Here is an illustration of this:

<img src="images/kiank_sgd.png" style="width:750px;height:250px;">
<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'>  : **SGD vs GD**<br> "+" denotes a minimum of the cost. SGD leads to many oscillations to reach convergence. But each step is a lot faster to compute for SGD than for GD, as it uses only one training example (vs. the whole batch for GD). </center></caption>

**Note** also that implementing SGD requires 3 for-loops in total:
1. Over the number of iterations
2. Over the $m$ training examples
3. Over the layers (to update all parameters, from $(W^{[1]},b^{[1]})$ to $(W^{[L]},b^{[L]})$)

In practice, you'll often get faster results if you use neither the whole training set nor only one training example to perform each update. Mini-batch gradient descent uses an intermediate number of examples for each step. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples.

<img src="images/kiank_minibatch.png" style="width:750px;height:250px;">
<caption><center> <u> <font color='purple'> **Figure 2** </u>: <font color='purple'> **SGD vs Mini-Batch GD**<br> "+" denotes a minimum of the cost.
Using mini-batches in your optimization algorithm often leads to faster optimization. </center></caption>

<font color='blue'>
**What you should remember**:
- The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples you use to perform one update step.
- You have to tune a learning rate hyperparameter $\alpha$.
- With a well-tuned mini-batch size, it usually outperforms either gradient descent or stochastic gradient descent (particularly when the training set is large).

## 2 - Mini-Batch Gradient descent

Let's learn how to build mini-batches from the training set (X, Y). There are two steps:
- **Shuffle**: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. Note that the random shuffling is done synchronously between X and Y, so that after the shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. The shuffling step ensures that examples will be split randomly into different mini-batches.

<img src="images/kiank_shuffle.png" style="width:550px;height:300px;">

- **Partition**: Partition the shuffled (X, Y) into mini-batches of size `mini_batch_size` (here 64). Note that the number of training examples is not always divisible by `mini_batch_size`. The last mini batch might be smaller, but you don't need to worry about this. When the final mini-batch is smaller than the full `mini_batch_size`, it will look like this:

<img src="images/kiank_partition.png" style="width:550px;height:300px;">

**Exercise**: Implement `random_mini_batches`. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the $1^{st}$ and $2^{nd}$ mini-batches:
```python
first_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]
second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]
...
```

Note that the last mini-batch might end up smaller than `mini_batch_size=64`. Let $\lfloor s \rfloor$ represent $s$ rounded down to the nearest integer (this is `math.floor(s)` in Python). If the total number of examples is not a multiple of `mini_batch_size=64` then there will be $\lfloor \frac{m}{mini\_batch\_size}\rfloor$ mini-batches with a full 64 examples, and the number of examples in the final mini-batch will be $m - mini\_batch\_size \times \lfloor \frac{m}{mini\_batch\_size}\rfloor$.

```
# GRADED FUNCTION: random_mini_batches

def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
    """
    Creates a list of random minibatches from (X, Y)

    Arguments:
    X -- input data, of shape (input size, number of examples)
    Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
    mini_batch_size -- size of the mini-batches, integer

    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """

    np.random.seed(seed)            # To make your "random" minibatches the same as ours
    m = X.shape[1]                  # number of training examples
    mini_batches = []

    # Step 1: Shuffle (X, Y)
    permutation = list(np.random.permutation(m))
    shuffled_X = X[:, permutation]
    shuffled_Y = Y[:, permutation].reshape((1,m))

    # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
    num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitioning
    for k in range(0, num_complete_minibatches):
        ### START CODE HERE ### (approx.
2 lines) mini_batch_X = shuffled_X[:, num_complete_minibatches*mini_batch_size:] mini_batch_Y = shuffled_Y[:, num_complete_minibatches*mini_batch_size:] ### END CODE HERE ### mini_batch = (mini_batch_X, mini_batch_Y) mini_batches.append(mini_batch) return mini_batches X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case() mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size) print ("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape)) print ("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape)) print ("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape)) print ("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape)) print ("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape)) print ("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape)) print ("mini batch sanity check: " + str(mini_batches[0][0][0][0:3])) ``` **Expected Output**: <table style="width:50%"> <tr> <td > **shape of the 1st mini_batch_X** </td> <td > (12288, 64) </td> </tr> <tr> <td > **shape of the 2nd mini_batch_X** </td> <td > (12288, 64) </td> </tr> <tr> <td > **shape of the 3rd mini_batch_X** </td> <td > (12288, 20) </td> </tr> <tr> <td > **shape of the 1st mini_batch_Y** </td> <td > (1, 64) </td> </tr> <tr> <td > **shape of the 2nd mini_batch_Y** </td> <td > (1, 64) </td> </tr> <tr> <td > **shape of the 3rd mini_batch_Y** </td> <td > (1, 20) </td> </tr> <tr> <td > **mini batch sanity check** </td> <td > [ 0.90085595 -0.7612069 0.2344157 ] </td> </tr> </table> <font color='blue'> **What you should remember**: - Shuffling and Partitioning are the two steps required to build mini-batches - Powers of two are often chosen to be the mini-batch size, e.g., 16, 32, 64, 128. 
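The shuffle and partition steps can also be checked end-to-end on a tiny synthetic set. This sketch is independent of the graded function and uses made-up sizes (20 examples, 3 features, mini-batches of 8):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
m, n_x, mini_batch_size = 20, 3, 8     # toy sizes: two full batches of 8, then one of 4

X = rng.standard_normal((n_x, m))      # columns are examples
Y = (rng.standard_normal((1, m)) > 0).astype(int)

# Step 1: shuffle X and Y with the SAME permutation so columns stay paired
permutation = rng.permutation(m)
shuffled_X = X[:, permutation]
shuffled_Y = Y[:, permutation]

# Step 2: carve off consecutive chunks; NumPy clips the last slice at m,
# so the final mini-batch simply comes out smaller
mini_batches = []
for k in range(math.ceil(m / mini_batch_size)):
    sl = slice(k * mini_batch_size, (k + 1) * mini_batch_size)
    mini_batches.append((shuffled_X[:, sl], shuffled_Y[:, sl]))

print([mb_X.shape for mb_X, _ in mini_batches])   # [(3, 8), (3, 8), (3, 4)]
```

The shapes match the expected output above: full batches of `mini_batch_size` columns, plus one smaller end-case batch.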
## 3 - Momentum

Because mini-batch gradient descent makes a parameter update after seeing just a subset of examples, the direction of the update has some variance, and so the path taken by mini-batch gradient descent will "oscillate" toward convergence. Using momentum can reduce these oscillations.

Momentum takes into account the past gradients to smooth out the update. We will store the 'direction' of the previous gradients in the variable $v$. Formally, this will be the exponentially weighted average of the gradient on previous steps. You can also think of $v$ as the "velocity" of a ball rolling downhill, building up speed (and momentum) according to the direction of the gradient/slope of the hill.

<img src="images/opt_momentum.png" style="width:400px;height:250px;">
<caption><center> <u><font color='purple'>**Figure 3**</u><font color='purple'>: The red arrows show the direction taken by one step of mini-batch gradient descent with momentum. The blue points show the direction of the gradient (with respect to the current mini-batch) on each step. Rather than just following the gradient, we let the gradient influence $v$ and then take a step in the direction of $v$.<br> <font color='black'> </center>

**Exercise**: Initialize the velocity. The velocity, $v$, is a python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the `grads` dictionary, that is: for $l =1,...,L$:
```python
v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
```
**Note** that the iterator l starts at 0 in the for loop while the first parameters are v["dW1"] and v["db1"] (that's a "one" on the superscript). This is why we are shifting l to l+1 in the `for` loop.
``` # GRADED FUNCTION: initialize_velocity def initialize_velocity(parameters): """ Initializes the velocity as a python dictionary with: - keys: "dW1", "db1", ..., "dWL", "dbL" - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters. Arguments: parameters -- python dictionary containing your parameters. parameters['W' + str(l)] = Wl parameters['b' + str(l)] = bl Returns: v -- python dictionary containing the current velocity. v['dW' + str(l)] = velocity of dWl v['db' + str(l)] = velocity of dbl """ L = len(parameters) // 2 # number of layers in the neural networks v = {} # Initialize velocity for l in range(L): ### START CODE HERE ### (approx. 2 lines) v["dW" + str(l+1)] = np.zeros((parameters["W" + str(l+1)].shape[0],parameters["W" + str(l+1)].shape[1])) v["db" + str(l+1)] = np.zeros((parameters["b" + str(l+1)].shape[0],1)) ### END CODE HERE ### return v parameters = initialize_velocity_test_case() v = initialize_velocity(parameters) print("v[\"dW1\"] = " + str(v["dW1"])) print("v[\"db1\"] = " + str(v["db1"])) print("v[\"dW2\"] = " + str(v["dW2"])) print("v[\"db2\"] = " + str(v["db2"])) ``` **Expected Output**: <table style="width:40%"> <tr> <td > **v["dW1"]** </td> <td > [[ 0. 0. 0.] [ 0. 0. 0.]] </td> </tr> <tr> <td > **v["db1"]** </td> <td > [[ 0.] [ 0.]] </td> </tr> <tr> <td > **v["dW2"]** </td> <td > [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]] </td> </tr> <tr> <td > **v["db2"]** </td> <td > [[ 0.] [ 0.] [ 0.]] </td> </tr> </table> **Exercise**: Now, implement the parameters update with momentum. The momentum update rule is, for $l = 1, ..., L$: $$ \begin{cases} v_{dW^{[l]}} = \beta v_{dW^{[l]}} + (1 - \beta) dW^{[l]} \\ W^{[l]} = W^{[l]} - \alpha v_{dW^{[l]}} \end{cases}\tag{3}$$ $$\begin{cases} v_{db^{[l]}} = \beta v_{db^{[l]}} + (1 - \beta) db^{[l]} \\ b^{[l]} = b^{[l]} - \alpha v_{db^{[l]}} \end{cases}\tag{4}$$ where L is the number of layers, $\beta$ is the momentum and $\alpha$ is the learning rate. 
All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$ (that's a "one" on the superscript). So you will need to shift `l` to `l+1` when coding. ``` # GRADED FUNCTION: update_parameters_with_momentum def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate): """ Update parameters using Momentum Arguments: parameters -- python dictionary containing your parameters: parameters['W' + str(l)] = Wl parameters['b' + str(l)] = bl grads -- python dictionary containing your gradients for each parameters: grads['dW' + str(l)] = dWl grads['db' + str(l)] = dbl v -- python dictionary containing the current velocity: v['dW' + str(l)] = ... v['db' + str(l)] = ... beta -- the momentum hyperparameter, scalar learning_rate -- the learning rate, scalar Returns: parameters -- python dictionary containing your updated parameters v -- python dictionary containing your updated velocities """ L = len(parameters) // 2 # number of layers in the neural networks # Momentum update for each parameter for l in range(L): ### START CODE HERE ### (approx. 
4 lines) # compute velocities v["dW" + str(l+1)] = beta*v["dW" + str(l+1)] + (1 - beta)*grads["dW" + str(l+1)] v["db" + str(l+1)] = beta*v["db" + str(l+1)] + (1 - beta)*grads["db" + str(l+1)] # update parameters parameters["W" + str(l+1)] -= learning_rate*v["dW" + str(l+1)] parameters["b" + str(l+1)] -= learning_rate*v["db" + str(l+1)] ### END CODE HERE ### return parameters, v parameters, grads, v = update_parameters_with_momentum_test_case() parameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) print("v[\"dW1\"] = " + str(v["dW1"])) print("v[\"db1\"] = " + str(v["db1"])) print("v[\"dW2\"] = " + str(v["dW2"])) print("v[\"db2\"] = " + str(v["db2"])) ``` **Expected Output**: <table style="width:90%"> <tr> <td > **W1** </td> <td > [[ 1.62544598 -0.61290114 -0.52907334] [-1.07347112 0.86450677 -2.30085497]] </td> </tr> <tr> <td > **b1** </td> <td > [[ 1.74493465] [-0.76027113]] </td> </tr> <tr> <td > **W2** </td> <td > [[ 0.31930698 -0.24990073 1.4627996 ] [-2.05974396 -0.32173003 -0.38320915] [ 1.13444069 -1.0998786 -0.1713109 ]] </td> </tr> <tr> <td > **b2** </td> <td > [[-0.87809283] [ 0.04055394] [ 0.58207317]] </td> </tr> <tr> <td > **v["dW1"]** </td> <td > [[-0.11006192 0.11447237 0.09015907] [ 0.05024943 0.09008559 -0.06837279]] </td> </tr> <tr> <td > **v["db1"]** </td> <td > [[-0.01228902] [-0.09357694]] </td> </tr> <tr> <td > **v["dW2"]** </td> <td > [[-0.02678881 0.05303555 -0.06916608] [-0.03967535 -0.06871727 -0.08452056] [-0.06712461 -0.00126646 -0.11173103]] </td> </tr> <tr> <td > **v["db2"]** </td> <td > [[ 0.02344157] [ 0.16598022] [ 0.07420442]]</td> </tr> </table> **Note** that: - The velocity is initialized with zeros. So the algorithm will take a few iterations to "build up" velocity and start to take bigger steps. 
- If $\beta = 0$, then this just becomes standard gradient descent without momentum.

**How do you choose $\beta$?**

- The larger the momentum $\beta$ is, the smoother the update, because we take the past gradients into account more. But if $\beta$ is too big, it could also smooth out the updates too much.
- Common values for $\beta$ range from 0.8 to 0.999. If you don't feel inclined to tune this, $\beta = 0.9$ is often a reasonable default.
- Tuning the optimal $\beta$ for your model might require trying several values to see what works best in terms of reducing the value of the cost function $J$.

<font color='blue'>
**What you should remember**:
- Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, mini-batch gradient descent or stochastic gradient descent.
- You have to tune a momentum hyperparameter $\beta$ and a learning rate $\alpha$.

## 4 - Adam

Adam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp (described in lecture) and Momentum.

**How does Adam work?**
1. It calculates an exponentially weighted average of past gradients, and stores it in variables $v$ (before bias correction) and $v^{corrected}$ (with bias correction).
2. It calculates an exponentially weighted average of the squares of the past gradients, and stores it in variables $s$ (before bias correction) and $s^{corrected}$ (with bias correction).
3. It updates parameters in a direction based on combining information from "1" and "2".
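These three steps can be traced for a single scalar parameter before writing the full dictionary version. The hyperparameter and gradient values below are illustrative, not prescribed by the exercise:

```python
import numpy as np

beta1, beta2, alpha, eps = 0.9, 0.999, 0.01, 1e-8
w, v, s, t = 1.0, 0.0, 0.0, 1      # parameter, both moment trackers, step count
dw = 0.5                           # pretend gradient for this step

v = beta1 * v + (1 - beta1) * dw             # step 1: EWA of gradients
s = beta2 * s + (1 - beta2) * dw ** 2        # step 2: EWA of squared gradients
v_corrected = v / (1 - beta1 ** t)           # bias corrections matter most
s_corrected = s / (1 - beta2 ** t)           # during the first few steps
w = w - alpha * v_corrected / (np.sqrt(s_corrected) + eps)   # step 3

print(v_corrected, s_corrected, w)
```

Note that at $t = 1$ the bias corrections recover $v^{corrected} = dw$ and $s^{corrected} = dw^2$, so the step is approximately $\alpha \cdot \text{sign}(dw)$ regardless of the gradient's magnitude.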
The update rule is, for $l = 1, ..., L$:

$$\begin{cases}
v_{dW^{[l]}} = \beta_1 v_{dW^{[l]}} + (1 - \beta_1) \frac{\partial \mathcal{J} }{ \partial W^{[l]} } \\
v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1 - (\beta_1)^t} \\
s_{dW^{[l]}} = \beta_2 s_{dW^{[l]}} + (1 - \beta_2) (\frac{\partial \mathcal{J} }{\partial W^{[l]} })^2 \\
s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1 - (\beta_2)^t} \\
W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}} + \varepsilon}
\end{cases}$$

where:
- t counts the number of steps taken of Adam
- L is the number of layers
- $\beta_1$ and $\beta_2$ are hyperparameters that control the two exponentially weighted averages.
- $\alpha$ is the learning rate
- $\varepsilon$ is a very small number to avoid dividing by zero

As usual, we will store all parameters in the `parameters` dictionary

**Exercise**: Initialize the Adam variables $v, s$ which keep track of the past information.

**Instruction**: The variables $v, s$ are python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as for `grads`, that is: for $l = 1, ..., L$:
```python
v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
s["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
s["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
```

```
# GRADED FUNCTION: initialize_adam

def initialize_adam(parameters) :
    """
    Initializes v and s as two python dictionaries with:
                - keys: "dW1", "db1", ..., "dWL", "dbL"
                - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.

    Arguments:
    parameters -- python dictionary containing your parameters.
parameters["W" + str(l)] = Wl parameters["b" + str(l)] = bl Returns: v -- python dictionary that will contain the exponentially weighted average of the gradient. v["dW" + str(l)] = ... v["db" + str(l)] = ... s -- python dictionary that will contain the exponentially weighted average of the squared gradient. s["dW" + str(l)] = ... s["db" + str(l)] = ... """ L = len(parameters) // 2 # number of layers in the neural networks v = {} s = {} # Initialize v, s. Input: "parameters". Outputs: "v, s". for l in range(L): ### START CODE HERE ### (approx. 4 lines) v["dW" + str(l+1)] = np.zeros((parameters["W" + str(l+1)].shape[0], parameters["W" + str(l+1)].shape[1])) v["db" + str(l+1)] = np.zeros((parameters["b" + str(l+1)].shape[0], 1)) s["dW" + str(l+1)] = np.zeros((parameters["W" + str(l+1)].shape[0], parameters["W" + str(l+1)].shape[1])) s["db" + str(l+1)] = np.zeros((parameters["b" + str(l+1)].shape[0], 1)) ### END CODE HERE ### return v, s parameters = initialize_adam_test_case() v, s = initialize_adam(parameters) print("v[\"dW1\"] = " + str(v["dW1"])) print("v[\"db1\"] = " + str(v["db1"])) print("v[\"dW2\"] = " + str(v["dW2"])) print("v[\"db2\"] = " + str(v["db2"])) print("s[\"dW1\"] = " + str(s["dW1"])) print("s[\"db1\"] = " + str(s["db1"])) print("s[\"dW2\"] = " + str(s["dW2"])) print("s[\"db2\"] = " + str(s["db2"])) ``` **Expected Output**: <table style="width:40%"> <tr> <td > **v["dW1"]** </td> <td > [[ 0. 0. 0.] [ 0. 0. 0.]] </td> </tr> <tr> <td > **v["db1"]** </td> <td > [[ 0.] [ 0.]] </td> </tr> <tr> <td > **v["dW2"]** </td> <td > [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]] </td> </tr> <tr> <td > **v["db2"]** </td> <td > [[ 0.] [ 0.] [ 0.]] </td> </tr> <tr> <td > **s["dW1"]** </td> <td > [[ 0. 0. 0.] [ 0. 0. 0.]] </td> </tr> <tr> <td > **s["db1"]** </td> <td > [[ 0.] [ 0.]] </td> </tr> <tr> <td > **s["dW2"]** </td> <td > [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]] </td> </tr> <tr> <td > **s["db2"]** </td> <td > [[ 0.] [ 0.] 
[ 0.]] </td> </tr> </table> **Exercise**: Now, implement the parameters update with Adam. Recall the general update rule is, for $l = 1, ..., L$: $$\begin{cases} v_{W^{[l]}} = \beta_1 v_{W^{[l]}} + (1 - \beta_1) \frac{\partial J }{ \partial W^{[l]} } \\ v^{corrected}_{W^{[l]}} = \frac{v_{W^{[l]}}}{1 - (\beta_1)^t} \\ s_{W^{[l]}} = \beta_2 s_{W^{[l]}} + (1 - \beta_2) (\frac{\partial J }{\partial W^{[l]} })^2 \\ s^{corrected}_{W^{[l]}} = \frac{s_{W^{[l]}}}{1 - (\beta_2)^t} \\ W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{W^{[l]}}}{\sqrt{s^{corrected}_{W^{[l]}}}+\varepsilon} \end{cases}$$ **Note** that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding. ``` # GRADED FUNCTION: update_parameters_with_adam def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01, beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8): """ Update parameters using Adam Arguments: parameters -- python dictionary containing your parameters: parameters['W' + str(l)] = Wl parameters['b' + str(l)] = bl grads -- python dictionary containing your gradients for each parameters: grads['dW' + str(l)] = dWl grads['db' + str(l)] = dbl v -- Adam variable, moving average of the first gradient, python dictionary s -- Adam variable, moving average of the squared gradient, python dictionary learning_rate -- the learning rate, scalar. 
beta1 -- Exponential decay hyperparameter for the first moment estimates beta2 -- Exponential decay hyperparameter for the second moment estimates epsilon -- hyperparameter preventing division by zero in Adam updates Returns: parameters -- python dictionary containing your updated parameters v -- Adam variable, moving average of the first gradient, python dictionary s -- Adam variable, moving average of the squared gradient, python dictionary """ L = len(parameters) // 2 # number of layers in the neural networks v_corrected = {} # Initializing first moment estimate, python dictionary s_corrected = {} # Initializing second moment estimate, python dictionary # Perform Adam update on all parameters for l in range(L): # Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v". ### START CODE HERE ### (approx. 2 lines) v["dW" + str(l+1)] = beta1*v["dW" + str(l+1)] + (1 - beta1)*grads["dW" + str(l+1)] v["db" + str(l+1)] = beta1*v["db" + str(l+1)] + (1 - beta1)*grads["db" + str(l+1)] ### END CODE HERE ### # Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected". ### START CODE HERE ### (approx. 2 lines) v_corrected["dW" + str(l+1)] = v["dW" + str(l+1)]/(1 - (beta1)**t) v_corrected["db" + str(l+1)] = v["db" + str(l+1)]/(1 - (beta1)**t) ### END CODE HERE ### # Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s". ### START CODE HERE ### (approx. 2 lines) s["dW" + str(l+1)] = beta2*s["dW" + str(l+1)] + (1 - beta2)*grads["dW" + str(l+1)]**2 s["db" + str(l+1)] = beta2*s["db" + str(l+1)] + (1 - beta2)*grads["db" + str(l+1)]**2 ### END CODE HERE ### # Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected". ### START CODE HERE ### (approx. 2 lines) s_corrected["dW" + str(l+1)] = s["dW" + str(l+1)]/(1 - (beta2)**t) s_corrected["db" + str(l+1)] = s["db" + str(l+1)]/(1 - (beta2)**t) ### END CODE HERE ### # Update parameters. 
Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters". ### START CODE HERE ### (approx. 2 lines) parameters["W" + str(l+1)] -= learning_rate*v_corrected["dW" + str(l+1)]/(np.sqrt(s_corrected["dW" + str(l+1)]) + epsilon) parameters["b" + str(l+1)] -= learning_rate*v_corrected["db" + str(l+1)]/(np.sqrt(s_corrected["db" + str(l+1)]) + epsilon) ### END CODE HERE ### return parameters, v, s parameters, grads, v, s = update_parameters_with_adam_test_case() parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) print("v[\"dW1\"] = " + str(v["dW1"])) print("v[\"db1\"] = " + str(v["db1"])) print("v[\"dW2\"] = " + str(v["dW2"])) print("v[\"db2\"] = " + str(v["db2"])) print("s[\"dW1\"] = " + str(s["dW1"])) print("s[\"db1\"] = " + str(s["db1"])) print("s[\"dW2\"] = " + str(s["dW2"])) print("s[\"db2\"] = " + str(s["db2"])) ``` **Expected Output**: <table> <tr> <td > **W1** </td> <td > [[ 1.63178673 -0.61919778 -0.53561312] [-1.08040999 0.85796626 -2.29409733]] </td> </tr> <tr> <td > **b1** </td> <td > [[ 1.75225313] [-0.75376553]] </td> </tr> <tr> <td > **W2** </td> <td > [[ 0.32648046 -0.25681174 1.46954931] [-2.05269934 -0.31497584 -0.37661299] [ 1.14121081 -1.09245036 -0.16498684]] </td> </tr> <tr> <td > **b2** </td> <td > [[-0.88529978] [ 0.03477238] [ 0.57537385]] </td> </tr> <tr> <td > **v["dW1"]** </td> <td > [[-0.11006192 0.11447237 0.09015907] [ 0.05024943 0.09008559 -0.06837279]] </td> </tr> <tr> <td > **v["db1"]** </td> <td > [[-0.01228902] [-0.09357694]] </td> </tr> <tr> <td > **v["dW2"]** </td> <td > [[-0.02678881 0.05303555 -0.06916608] [-0.03967535 -0.06871727 -0.08452056] [-0.06712461 -0.00126646 -0.11173103]] </td> </tr> <tr> <td > **v["db2"]** </td> <td > [[ 0.02344157] [ 0.16598022] [ 0.07420442]] </td> </tr> <tr> <td > **s["dW1"]** </td> <td > 
[[ 0.00121136 0.00131039 0.00081287] [ 0.0002525 0.00081154 0.00046748]] </td> </tr> <tr> <td > **s["db1"]** </td> <td > [[ 1.51020075e-05] [ 8.75664434e-04]] </td> </tr> <tr> <td > **s["dW2"]** </td> <td > [[ 7.17640232e-05 2.81276921e-04 4.78394595e-04] [ 1.57413361e-04 4.72206320e-04 7.14372576e-04] [ 4.50571368e-04 1.60392066e-07 1.24838242e-03]] </td> </tr> <tr> <td > **s["db2"]** </td> <td > [[ 5.49507194e-05] [ 2.75494327e-03] [ 5.50629536e-04]] </td> </tr> </table> You now have three working optimization algorithms (mini-batch gradient descent, Momentum, Adam). Let's implement a model with each of these optimizers and observe the difference. ## 5 - Model with different optimization algorithms Lets use the following "moons" dataset to test the different optimization methods. (The dataset is named "moons" because the data from each of the two classes looks a bit like a crescent-shaped moon.) ``` train_X, train_Y = load_dataset() ``` We have already implemented a 3-layer neural network. You will train it with: - Mini-batch **Gradient Descent**: it will call your function: - `update_parameters_with_gd()` - Mini-batch **Momentum**: it will call your functions: - `initialize_velocity()` and `update_parameters_with_momentum()` - Mini-batch **Adam**: it will call your functions: - `initialize_adam()` and `update_parameters_with_adam()` ``` def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9, beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True): """ 3-layer neural network model which can be run in different optimizer modes. Arguments: X -- input data, of shape (2, number of examples) Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples) layers_dims -- python list, containing the size of each layer learning_rate -- the learning rate, scalar. 
mini_batch_size -- the size of a mini batch beta -- Momentum hyperparameter beta1 -- Exponential decay hyperparameter for the past gradients estimates beta2 -- Exponential decay hyperparameter for the past squared gradients estimates epsilon -- hyperparameter preventing division by zero in Adam updates num_epochs -- number of epochs print_cost -- True to print the cost every 1000 epochs Returns: parameters -- python dictionary containing your updated parameters """ L = len(layers_dims) # number of layers in the neural networks costs = [] # to keep track of the cost t = 0 # initializing the counter required for Adam update seed = 10 # For grading purposes, so that your "random" minibatches are the same as ours # Initialize parameters parameters = initialize_parameters(layers_dims) # Initialize the optimizer if optimizer == "gd": pass # no initialization required for gradient descent elif optimizer == "momentum": v = initialize_velocity(parameters) elif optimizer == "adam": v, s = initialize_adam(parameters) # Optimization loop for i in range(num_epochs): # Define the random minibatches. 
We increment the seed to reshuffle differently the dataset after each epoch seed = seed + 1 minibatches = random_mini_batches(X, Y, mini_batch_size, seed) for minibatch in minibatches: # Select a minibatch (minibatch_X, minibatch_Y) = minibatch # Forward propagation a3, caches = forward_propagation(minibatch_X, parameters) # Compute cost cost = compute_cost(a3, minibatch_Y) # Backward propagation grads = backward_propagation(minibatch_X, minibatch_Y, caches) # Update parameters if optimizer == "gd": parameters = update_parameters_with_gd(parameters, grads, learning_rate) elif optimizer == "momentum": parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate) elif optimizer == "adam": t = t + 1 # Adam counter parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t, learning_rate, beta1, beta2, epsilon) # Print the cost every 1000 epoch if print_cost and i % 1000 == 0: print ("Cost after epoch %i: %f" %(i, cost)) if print_cost and i % 100 == 0: costs.append(cost) # plot the cost plt.plot(costs) plt.ylabel('cost') plt.xlabel('epochs (per 100)') plt.title("Learning rate = " + str(learning_rate)) plt.show() return parameters ``` You will now run this 3 layer neural network with each of the 3 optimization methods. ### 5.1 - Mini-batch Gradient descent Run the following code to see how the model does with mini-batch gradient descent. ``` # train 3-layer model layers_dims = [train_X.shape[0], 5, 2, 1] parameters = model(train_X, train_Y, layers_dims, optimizer = "gd") # Predict predictions = predict(train_X, train_Y, parameters) # Plot decision boundary plt.title("Model with Gradient Descent optimization") axes = plt.gca() axes.set_xlim([-1.5,2.5]) axes.set_ylim([-1,1.5]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) ``` ### 5.2 - Mini-batch gradient descent with momentum Run the following code to see how the model does with momentum. 
Because this example is relatively simple, the gains from using momentum are small; but for more complex problems you might see bigger gains.

```
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")

# Predict
predictions = predict(train_X, train_Y, parameters)

# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```

### 5.3 - Mini-batch with Adam mode

Run the following code to see how the model does with Adam.

```
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")

# Predict
predictions = predict(train_X, train_Y, parameters)

# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```

### 5.4 - Summary

<table>
  <tr>
    <td> **optimization method** </td>
    <td> **accuracy** </td>
    <td> **cost shape** </td>
  </tr>
  <tr>
    <td> Gradient descent </td>
    <td> 79.7% </td>
    <td> oscillations </td>
  </tr>
  <tr>
    <td> Momentum </td>
    <td> 79.7% </td>
    <td> oscillations </td>
  </tr>
  <tr>
    <td> Adam </td>
    <td> 94% </td>
    <td> smoother </td>
  </tr>
</table>

Momentum usually helps, but given the small learning rate and the simplistic dataset, its impact is almost negligible. Also, the huge oscillations you see in the cost come from the fact that some minibatches are more difficult than others for the optimization algorithm.

Adam, on the other hand, clearly outperforms mini-batch gradient descent and Momentum. If you run the model for more epochs on this simple dataset, all three methods will lead to very good results. However, you've seen that Adam converges a lot faster.
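The summary's point — that with a small learning rate plain gradient descent can crawl while Adam's normalized steps keep making progress — can be sketched on a toy 1-D objective with a very flat gradient (this stands in for the network; it is not the assignment's moons model, and `run_gd`/`run_adam` are made-up helper names):

```python
import numpy as np

# Toy objective f(w) = C * w**2 with tiny curvature C, so the raw gradient
# 2*C*w is small and plain gradient descent barely moves per step.
C = 1e-3

def run_gd(w, alpha, steps):
    for _ in range(steps):
        w -= alpha * 2 * C * w
    return w

def run_adam(w, alpha, steps, beta1=0.9, beta2=0.999, eps=1e-8):
    v = s = 0.0
    for t in range(1, steps + 1):
        g = 2 * C * w
        v = beta1 * v + (1 - beta1) * g          # first moment
        s = beta2 * s + (1 - beta2) * g ** 2     # second moment
        v_c = v / (1 - beta1 ** t)               # bias corrections
        s_c = s / (1 - beta2 ** t)
        w -= alpha * v_c / (np.sqrt(s_c) + eps)  # normalized step, roughly alpha per step
    return w

w_gd = run_gd(5.0, alpha=0.01, steps=400)
w_adam = run_adam(5.0, alpha=0.01, steps=400)
print(w_gd, w_adam)  # Adam ends much closer to the minimum at 0
```

This is the scale-invariance effect of dividing by $\sqrt{s^{corrected}}$: Adam's step size is roughly $\alpha$ whatever the gradient magnitude, while gradient descent's step shrinks with the gradient.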
Some advantages of Adam include: - Relatively low memory requirements (though higher than gradient descent and gradient descent with momentum) - Usually works well even with little tuning of hyperparameters (except $\alpha$) **References**: - Adam paper: https://arxiv.org/pdf/1412.6980.pdf
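As a closing illustration of why the bias correction in the update rule matters (a sketch, not part of the graded assignment): with $v$ initialized at zero, the uncorrected moving average underestimates early gradients, and dividing by $1 - \beta_1^t$ recovers the true mean exactly for a constant gradient stream.

```python
beta1 = 0.9
grads = [1.0] * 5            # a constant gradient stream with true mean 1.0
v = 0.0
for t, g in enumerate(grads, start=1):
    v = beta1 * v + (1 - beta1) * g
    v_corrected = v / (1 - beta1 ** t)

# After 5 steps the raw average still underestimates the true mean (1.0),
# while the corrected estimate recovers it for a constant stream.
print(round(v, 5), round(v_corrected, 5))  # 0.40951 1.0
```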
```
import tensorflow as tf
import os
import time
from matplotlib import pyplot as plt
from IPython import display
from IPython.display import clear_output      # used by DisplayCallback below
import tensorflow_datasets as tfds
from tensorflow.keras import layers           # used by PatchEncoder/TransformerBlock below

BUFFER_SIZE = 400
EPOCHS = 100
LAMBDA = 100
DATASET = 'seg'
BATCH_SIZE = 32
IMG_WIDTH = 128
IMG_HEIGHT = 128
patch_size = 8
num_patches = (IMG_HEIGHT // patch_size) ** 2
projection_dim = 64
embed_dim = 64
num_heads = 2
ff_dim = 32

assert IMG_WIDTH == IMG_HEIGHT, "image width and image height must have same dims"

tf.config.run_functions_eagerly(False)

dataset, info = tfds.load('oxford_iiit_pet:3.*.*', with_info=True)

def normalize(input_image, input_mask):
    input_image = tf.cast(input_image, tf.float32) / 255.0
    input_mask -= 1
    return input_image, input_mask

@tf.function
def load_image_train(datapoint):
    input_image = tf.image.resize(datapoint['image'], (128, 128))
    input_mask = tf.image.resize(datapoint['segmentation_mask'], (128, 128))
    if tf.random.uniform(()) > 0.5:
        input_image = tf.image.flip_left_right(input_image)
        input_mask = tf.image.flip_left_right(input_mask)
    input_image, input_mask = normalize(input_image, input_mask)
    return input_image, input_mask

def load_image_test(datapoint):
    input_image = tf.image.resize(datapoint['image'], (128, 128))
    input_mask = tf.image.resize(datapoint['segmentation_mask'], (128, 128))
    input_image, input_mask = normalize(input_image, input_mask)
    return input_image, input_mask

TRAIN_LENGTH = info.splits['train'].num_examples
STEPS_PER_EPOCH = TRAIN_LENGTH // BATCH_SIZE

train = dataset['train'].map(load_image_train, num_parallel_calls=tf.data.AUTOTUNE)
test = dataset['test'].map(load_image_test)

train_dataset = train.cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat()
train_dataset = train_dataset.prefetch(buffer_size=tf.data.AUTOTUNE)
test_dataset = test.batch(BATCH_SIZE)

# note: this local function shadows the `display` module imported from IPython above
def display(display_list):
    plt.figure(figsize=(15, 15))
    title = ['Input Image', 'True Mask', 'Predicted Mask']
    for i in range(len(display_list)):
        plt.subplot(1, len(display_list), i+1)
        plt.title(title[i])
plt.imshow(tf.keras.preprocessing.image.array_to_img(display_list[i])) plt.axis('off') plt.show() for image, mask in train.take(1): sample_image, sample_mask = image, mask display([sample_image, sample_mask]) def create_mask(pred_mask): pred_mask = tf.argmax(pred_mask, axis=-1) pred_mask = pred_mask[..., tf.newaxis] return pred_mask[0] def show_predictions(dataset=None, num=1): if dataset: for image, mask in dataset.take(num): pred_mask = generator.predict(image) display([image[0], mask[0], create_mask(pred_mask)]) else: display([sample_image, sample_mask, create_mask(generator.predict(sample_image[tf.newaxis, ...]))]) class Patches(tf.keras.layers.Layer): def __init__(self, patch_size): super(Patches, self).__init__() self.patch_size = patch_size def call(self, images): batch_size = tf.shape(images)[0] patches = tf.image.extract_patches( images=images, sizes=[1, self.patch_size, self.patch_size, 1], strides=[1, self.patch_size, self.patch_size, 1], rates=[1, 1, 1, 1], padding="SAME", ) patch_dims = patches.shape[-1] patches = tf.reshape(patches, [batch_size, -1, patch_dims]) return patches class PatchEncoder(tf.keras.layers.Layer): def __init__(self, num_patches, projection_dim): super(PatchEncoder, self).__init__() self.num_patches = num_patches self.projection = layers.Dense(units=projection_dim) self.position_embedding = layers.Embedding( input_dim=num_patches, output_dim=projection_dim ) def call(self, patch): positions = tf.range(start=0, limit=self.num_patches, delta=1) encoded = self.projection(patch) + self.position_embedding(positions) return encoded class TransformerBlock(tf.keras.layers.Layer): def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1): super(TransformerBlock, self).__init__() self.att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim) self.ffn = tf.keras.Sequential( [layers.Dense(ff_dim, activation="relu"), layers.Dense(embed_dim),] ) self.layernorm1 = layers.LayerNormalization(epsilon=1e-6) self.layernorm2 = 
layers.LayerNormalization(epsilon=1e-6) self.dropout1 = layers.Dropout(rate) self.dropout2 = layers.Dropout(rate) def call(self, inputs, training): attn_output = self.att(inputs, inputs) attn_output = self.dropout1(attn_output, training=training) out1 = self.layernorm1(inputs + attn_output) ffn_output = self.ffn(out1) ffn_output = self.dropout2(ffn_output, training=training) return self.layernorm2(out1 + ffn_output) from tensorflow import Tensor from tensorflow.keras.layers import Input, Conv2D, ReLU, BatchNormalization,\ Add, AveragePooling2D, Flatten, Dense from tensorflow.keras.models import Model def relu_bn(inputs: Tensor) -> Tensor: relu = ReLU()(inputs) bn = BatchNormalization()(relu) return bn def residual_block(x: Tensor, downsample: bool, filters: int, kernel_size: int = 3) -> Tensor: y = Conv2D(kernel_size=kernel_size, strides= (1 if not downsample else 2), filters=filters, padding="same")(x) y = relu_bn(y) y = Conv2D(kernel_size=kernel_size, strides=1, filters=filters, padding="same")(y) if downsample: x = Conv2D(kernel_size=1, strides=2, filters=filters, padding="same")(x) out = Add()([x, y]) out = relu_bn(out) return out from tensorflow.keras import layers def Generator(): inputs = layers.Input(shape=(128, 128, 3)) patches = Patches(patch_size)(inputs) encoded_patches = PatchEncoder(num_patches, projection_dim)(patches) x = TransformerBlock(64, num_heads, ff_dim)(encoded_patches) x = TransformerBlock(64, num_heads, ff_dim)(x) x = TransformerBlock(64, num_heads, ff_dim)(x) x = TransformerBlock(64, num_heads, ff_dim)(x) x = layers.Reshape((8, 8, 256))(x) x = layers.Conv2DTranspose(512, (5, 5), strides=(2, 2), padding='same', use_bias=False)(x) x = layers.BatchNormalization()(x) x = layers.LeakyReLU()(x) x = residual_block(x, downsample=False, filters=512) x = layers.Conv2DTranspose(256, (5, 5), strides=(2, 2), padding='same', use_bias=False)(x) x = layers.BatchNormalization()(x) x = layers.LeakyReLU()(x) x = residual_block(x, downsample=False, 
filters=256) x = layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False)(x) x = layers.BatchNormalization()(x) x = layers.LeakyReLU()(x) x = residual_block(x, downsample=False, filters=64) x = layers.Conv2DTranspose(32, (5, 5), strides=(2, 2), padding='same', use_bias=False)(x) x = layers.BatchNormalization()(x) x = layers.LeakyReLU()(x) x = residual_block(x, downsample=False, filters=32) x = layers.Conv2D(3, (3, 3), strides=(1, 1), padding='same', use_bias=False, activation='tanh')(x) return tf.keras.Model(inputs=inputs, outputs=x) generator = Generator() tf.keras.utils.plot_model(generator, show_shapes=True, dpi=64) generator.summary() class DisplayCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): clear_output(wait=True) show_predictions() print ('\nSample Prediction after epoch {}\n'.format(epoch+1)) generator.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) EPOCHS = 200 VAL_SUBSPLITS = 5 VALIDATION_STEPS = info.splits['test'].num_examples//BATCH_SIZE//VAL_SUBSPLITS model_history = generator.fit(train_dataset, epochs=EPOCHS, steps_per_epoch=STEPS_PER_EPOCH, validation_steps=VALIDATION_STEPS, validation_data=test_dataset) show_predictions(train_dataset) generator.save_weights('seg-gen-weights.h5') generator.load_weights('weights/seg-gen-weights (5).h5') for inp, tar in train_dataset: break plt.imshow(generator(inp)[0]) import numpy as np plt.imsave('pred1.png', np.array(create_mask(generator(inp))).astype(np.float32).reshape(128, 128)) plt.imshow(inp[0]) import numpy as np plt.imsave('tar1.png', np.array(inp[0]).astype(np.float32).reshape(128, 128, 3)) ```
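The generator's `Reshape((8, 8, 256))` between the transformer blocks and the `Conv2DTranspose` stack relies on some shape arithmetic that is easy to verify independently of TensorFlow, using only the constants defined at the top of the notebook:

```python
# Shape bookkeeping for the generator above (pure arithmetic, no TF needed).
IMG_HEIGHT = IMG_WIDTH = 128
patch_size = 8
projection_dim = 64

# A 128x128 image cut into 8x8 patches gives a 16x16 grid, i.e. 256 patches,
# so the encoded sequence has shape (num_patches, projection_dim) = (256, 64).
num_patches = (IMG_HEIGHT // patch_size) ** 2
seq_elems = num_patches * projection_dim

# layers.Reshape((8, 8, 256)) is only valid because the element counts match:
# 256 * 64 == 8 * 8 * 256 == 16384.
target_shape = (8, 8, 256)
target_elems = target_shape[0] * target_shape[1] * target_shape[2]

# Four stride-2 Conv2DTranspose layers then upsample the 8x8 map back to 128x128.
upsampled = 8 * 2 ** 4

print(num_patches, seq_elems == target_elems, upsampled)  # 256 True 128
```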
![OpenDC%20Serverless%20logo%20%28Final%29.png](attachment:OpenDC%20Serverless%20logo%20%28Final%29.png)

# Cold Lambda experiment results analysis

This notebook presents a full analysis of the results of the Lambda validation experiment from the OpenDC Serverless paper. It accompanies the OpenDC Serverless bachelor's thesis by Soufiane Jounaid, within @Large Research, Vrije Universiteit Amsterdam, 2020.

```
import string
import warnings
import re
import math
import datetime

import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import matplotlib.units as munits
from matplotlib.legend import Legend
from statsmodels.distributions.empirical_distribution import ECDF

warnings.filterwarnings('ignore')
```

## Input handling (skip if processed results available)

```
lambda_path = "resources/lambda/results/lambda"
lambda_results={
    'timer': pd.read_csv(f"{lambda_path}/timer-concurrency.csv"),
    'frequent': pd.read_csv(f"{lambda_path}/frequent-concurrency.csv"),
    'infrequent': pd.read_csv(f"{lambda_path}/infrequent-concurrency.csv"),
    'veryinfrequent': pd.read_csv(f"{lambda_path}/veryinfrequent-concurrency.csv"),
    'timer_noconcurrency': pd.read_csv(f"{lambda_path}/timer-noconcurrency.csv"),
    'frequent_noconcurrency': pd.read_csv(f"{lambda_path}/frequent-noconcurrency.csv"),
    'infrequent_noconcurrency': pd.read_csv(f"{lambda_path}/infrequent-noconcurrency.csv"),
    'veryinfrequent_noconcurrency': pd.read_csv(f"{lambda_path}/veryinfrequent-noconcurrency.csv"),
}

opendc_path = "resources/lambda/results/opendc-serverless"
opendc_results = {
    'timer': pd.read_parquet(f"{opendc_path}/timer-concurrency/randomAlloc-randomRouting-fixed-keep-aliveResourceManagement-600000msTimeout-18VM's.parquet", engine="pyarrow"),
    'frequent': pd.read_parquet(f"{opendc_path}/frequent-concurrency/randomAlloc-randomRouting-fixed-keep-aliveResourceManagement-840000msTimeout-18VM's.parquet", engine="pyarrow"),
    'infrequent':
pd.read_parquet(f"{opendc_path}/infrequent-concurrency/randomAlloc-randomRouting-fixed-keep-aliveResourceManagement-1740000msTimeout-18VM's.parquet", engine="pyarrow"), 'veryinfrequent': pd.read_parquet(f"{opendc_path}/veryinfrequent-concurrency/randomAlloc-randomRouting-fixed-keep-aliveResourceManagement-600000msTimeout-18VM's.parquet", engine="pyarrow"), 'timer_noconcurrency': pd.read_parquet(f"{opendc_path}/timer-noconcurrency/randomAlloc-randomRouting-fixed-keep-aliveResourceManagement-600000msTimeout-18VM's.parquet", engine="pyarrow"), 'frequent_noconcurrency': pd.read_parquet(f"{opendc_path}/frequent-noconcurrency/randomAlloc-randomRouting-fixed-keep-aliveResourceManagement-300000msTimeout-18VM's.parquet", engine="pyarrow"), 'infrequent_noconcurrency': pd.read_parquet(f"{opendc_path}/infrequent-noconcurrency/randomAlloc-randomRouting-hybrid-histogramResourceManagement-14400000msLimit-18VM's.parquet", engine="pyarrow"), 'veryinfrequent_noconcurrency': pd.read_parquet(f"{opendc_path}/veryinfrequent-noconcurrency/randomAlloc-randomRouting-fixed-keep-aliveResourceManagement-600000msTimeout-18VM's.parquet", engine="pyarrow")} ``` # Pre processing (skip if done once) ## Matching the timestamp format between both results ``` for key in lambda_results.keys(): lambda_results.update({f'{key}': lambda_results.get(key).rename(columns={ 'bin(1s)': 'Datetime', 'avg(@timestamp)': 'Time', 'avg(@duration)': 'ExecutionTime', 'avg(@initDuration)': 'ColdStarts', 'avg(@maxMemoryUsed /1024 / 1024)': 'MemoryUsage'})}) lambda_results.get(key)['Time'] = lambda_results.get(key)['Time'].astype(int) lambda_results.get(key).loc[lambda_results.get(key)['ColdStarts'] > 0, 'ColdStarts'] = 1 lambda_results.get(key).drop(lambda_results.get(key).columns.difference(['Time','ColdStarts']), 1, inplace=True) for key in list(opendc_results): opendc_results.get(key)['Time'] = (1596255027000 + opendc_results.get(key)['Time']).astype(int) opendc_results.update({f'{key}': 
opendc_results.get(key)[opendc_results.get(key)['Invocations'] > 0].reset_index(drop=True)})

    working_list = opendc_results.get(key).loc[opendc_results.get(key)['Invocations'] > 1]
    for row in working_list.iterrows():
        for i in range(row[1].Invocations - 1):
            opendc_results.update({f'{key}': opendc_results.get(key).append(row[1])
                                                           .sort_values('Time', axis=0)
                                                           .assign(Invocations=1)
                                                           .reset_index(drop=True)})
    opendc_results.get(key).loc[opendc_results.get(key)['ColdStarts'] > 0, 'ColdStarts'] = 1
    opendc_results.get(key).drop(opendc_results.get(key).columns.difference(['Time','ColdStarts']), 1, inplace=True)

for key in opendc_results.keys():
    dates = []
    for i in opendc_results.get(key)['Time']:
        dates.append(datetime.datetime.fromtimestamp(i/1000).strftime('%Y-%m-%d %H:%M:%S'))
    opendc_results.get(key).insert(0, 'Datetime', dates)

for key in lambda_results.keys():
    dates = []
    for i in lambda_results.get(key)['Time']:
        dates.append(datetime.datetime.fromtimestamp(i/1000))
    lambda_results.get(key).insert(0, 'Datetime', dates)
```

## Quick fix for loss of data due to internet loss

During the experiment, the internet connection between the invoker and the AWS API was cut off on August 1, 2020 at 14:17:16; the missing requests are therefore removed here.

Missing indices:
* frequent = 29, 38, 213, 183, 278
* frequent_noconcurrency = 182
* infrequent = 44, 100, 101, 102, 103, 104
* infrequent_noconcurrency = 60

```
opendc_results['frequent'].drop([29,38,213,183,278], axis=0, inplace=True)
opendc_results['frequent'].reset_index(drop=True, inplace=True)
opendc_results['frequent_noconcurrency'].drop(182, axis=0, inplace=True)
opendc_results['frequent_noconcurrency'].reset_index(drop=True, inplace=True)
opendc_results['infrequent'].drop([44,100,101,102,103,104], axis=0, inplace=True)
opendc_results['infrequent'].reset_index(drop=True, inplace=True)
opendc_results['infrequent_noconcurrency'].drop(60, axis=0, inplace=True)
opendc_results['infrequent_noconcurrency'].reset_index(drop=True, inplace=True) for key in opendc_results.keys(): opendc_results.get(key)['Datetime'] = lambda_results.get(key)['Datetime'] for key in lambda_results.keys(): lambda_results.get(key).to_csv(f"resources/lambda/results/processed-results/lambda_{key}.csv", index=False) opendc_results.get(key).to_csv(f"resources/lambda/results/processed-results/opendc_{key}.csv", index=False) ``` ## Processed results (if available) ``` path = "resources/lambda/results/processed-results" lambda_procresults={ 'timer': pd.read_csv(f"{path}/lambda_timer.csv", parse_dates=['Datetime'], index_col=['Datetime']), 'frequent': pd.read_csv(f"{path}/lambda_frequent.csv", parse_dates=['Datetime'], index_col=['Datetime']), 'infrequent': pd.read_csv(f"{path}/lambda_infrequent.csv", parse_dates=['Datetime'], index_col=['Datetime']), 'veryinfrequent': pd.read_csv(f"{path}/lambda_veryinfrequent.csv", parse_dates=['Datetime'], index_col=['Datetime']), 'timer_noconcurrency': pd.read_csv(f"{path}/lambda_timer_noconcurrency.csv", parse_dates=['Datetime'], index_col=['Datetime']), 'frequent_noconcurrency': pd.read_csv(f"{path}/lambda_frequent_noconcurrency.csv", parse_dates=['Datetime'], index_col=['Datetime']), 'infrequent_noconcurrency': pd.read_csv(f"{path}/lambda_infrequent_noconcurrency.csv", parse_dates=['Datetime'], index_col=['Datetime']), 'veryinfrequent_noconcurrency': pd.read_csv(f"{path}/lambda_veryinfrequent_noconcurrency.csv", parse_dates=['Datetime'], index_col=['Datetime']), } opendc_procresults={ 'timer': pd.read_csv(f"{path}/opendc_timer.csv", parse_dates=['Datetime'], index_col=['Datetime']), 'frequent': pd.read_csv(f"{path}/opendc_frequent.csv", parse_dates=['Datetime'], index_col=['Datetime']), 'infrequent': pd.read_csv(f"{path}/opendc_infrequent.csv", parse_dates=['Datetime'], index_col=['Datetime']), 'veryinfrequent': pd.read_csv(f"{path}/opendc_veryinfrequent.csv", parse_dates=['Datetime'], index_col=['Datetime']), 
'timer_noconcurrency': pd.read_csv(f"{path}/opendc_timer_noconcurrency.csv", parse_dates=['Datetime'], index_col=['Datetime']), 'frequent_noconcurrency': pd.read_csv(f"{path}/opendc_frequent_noconcurrency.csv", parse_dates=['Datetime'], index_col=['Datetime']), 'infrequent_noconcurrency': pd.read_csv(f"{path}/opendc_infrequent_noconcurrency.csv", parse_dates=['Datetime'], index_col=['Datetime']), 'infrequent_noconcurrency_hybrid': pd.read_csv(f"{path}/opendc_infrequent_noconcurrency_hybrid.csv", parse_dates=['Datetime'], index_col=['Datetime']), 'veryinfrequent_noconcurrency': pd.read_csv(f"{path}/opendc_veryinfrequent_noconcurrency.csv", parse_dates=['Datetime'], index_col=['Datetime']), } ``` # OpenDC vs Lambda Coldstart variation ``` sns.set() converter = mdates.ConciseDateConverter() munits.registry[np.datetime64] = converter munits.registry[datetime.date] = converter munits.registry[datetime.datetime] = converter ``` ## Comparative Cold Start event plots ### Timer Invocation Pattern ``` with sns.axes_style("ticks", {'axes.grid': True}): fig, ax =plt.subplots(2,1, figsize= (20,10)) fig.suptitle("Timer Invocation Pattern", fontsize=20) ax[0].set_ylim(-0.5,1.5) ax[1].set_ylim(-0.5,1.5) ax[0].set_title("Concurrency", fontsize=15) ax[1].set_title("No Concurrency", fontsize=15) ax[0].set_yticks([]) ax[0].set_ylabel('Cold start (line = true)', fontsize=15) ax[1].set_yticks([]) ax[1].set_ylabel('ColdStart (line = True)', fontsize=15) ax[1].set_xlabel('Time', fontsize=15) plot1 = ax[0].eventplot(lambda_procresults['timer'][lambda_procresults['timer']['ColdStarts'] > 0].index, color='purple') plot2 = ax[0].eventplot(opendc_procresults['timer'][opendc_procresults['timer']['ColdStarts'] > 0].index, lineoffsets=0, color='red') plot3 = ax[1].eventplot(lambda_procresults['timer_noconcurrency'][lambda_procresults['timer_noconcurrency']['ColdStarts'] > 0].index, color='purple') plot4 = 
ax[1].eventplot(opendc_procresults['timer_noconcurrency'][opendc_procresults['timer_noconcurrency']['ColdStarts'] > 0].index, lineoffsets=0, color='red') ax[0].legend([plot1,plot2], labels =['Lambda','OpenDC Serverless'], fontsize=15, loc="lower right") ax[1].legend([plot3,plot4], labels =['Lambda','OpenDC Serverless'], fontsize=15, loc="lower right") ax[0].tick_params(axis='both', which='major', labelsize=15) ax[0].tick_params(axis='both', which='minor', labelsize=15) ax[1].tick_params(axis='both', which='major', labelsize=15) ax[1].tick_params(axis='both', which='minor', labelsize=15) ``` ### Frequent Invocation Pattern ``` with sns.axes_style("ticks", {'axes.grid': True}): fig, ax =plt.subplots(2, 1, figsize= (20,10)) fig.suptitle("Frequent Invocation Pattern", fontsize= 20) ax[0].set_ylim(-0.5,1.5) ax[1].set_ylim(-0.5,1.5) ax[0].set_title("Concurrency", fontsize=15) ax[1].set_title("No Concurrency", fontsize=15) ax[0].set_yticks([]) ax[0].set_ylabel('Cold start (line = true)', fontsize=15) ax[1].set_yticks([]) ax[1].set_ylabel('ColdStart (line = True)', fontsize=15) ax[1].set_xlabel('Time', fontsize=15) plot1 = ax[0].eventplot(lambda_procresults['frequent'][lambda_procresults['frequent']['ColdStarts'] > 0].index, color='purple') plot2 = ax[0].eventplot(opendc_procresults['frequent'][opendc_procresults['frequent']['ColdStarts'] > 0].index, lineoffsets=0, color='red') plot3 = ax[1].eventplot(lambda_procresults['frequent_noconcurrency'][lambda_procresults['frequent_noconcurrency']['ColdStarts'] > 0].index, color='purple') plot4 = ax[1].eventplot(opendc_procresults['frequent_noconcurrency'][opendc_procresults['frequent_noconcurrency']['ColdStarts'] > 0].index, lineoffsets=0, color='red') ax[0].legend([plot1,plot2], labels =['Lambda','OpenDC Serverless'], fontsize=15) ax[1].legend([plot3,plot4], labels =['Lambda','OpenDC Serverless'], fontsize=15) ax[0].tick_params(axis='both', which='major', labelsize=15) ax[0].tick_params(axis='both', which='minor', labelsize=15) 
ax[1].tick_params(axis='both', which='major', labelsize=15) ax[1].tick_params(axis='both', which='minor', labelsize=15) ``` ### Infrequent Invocation Pattern ``` with sns.axes_style("ticks", {'axes.grid': True}): fig, ax =plt.subplots(2, 1, figsize= (20,10)) fig.suptitle("Infrequent Invocation Pattern", fontsize= 20) ax[0].set_ylim(-0.5,1.5) ax[1].set_ylim(-0.5,1.5) ax[0].set_title("Concurrency", fontsize=15) ax[1].set_title("No Concurrency", fontsize=15) ax[0].set_yticks([]) ax[0].set_ylabel('Cold start (line = true)', fontsize=15) ax[1].set_yticks([]) ax[1].set_ylabel('ColdStart (line = True)', fontsize=15) ax[1].set_xlabel('Time', fontsize=15) plot1 = ax[0].eventplot(lambda_procresults['infrequent'][lambda_procresults['infrequent']['ColdStarts'] > 0].index, color='purple') plot2 = ax[0].eventplot(opendc_procresults['infrequent'][opendc_procresults['infrequent']['ColdStarts'] > 0].index, lineoffsets=0, color='red') plot3 = ax[1].eventplot(lambda_procresults['infrequent_noconcurrency'][lambda_procresults['infrequent_noconcurrency']['ColdStarts'] > 0].index, color='purple') plot4 = ax[1].eventplot(opendc_procresults['infrequent_noconcurrency'][opendc_procresults['infrequent_noconcurrency']['ColdStarts'] > 0].index, lineoffsets=0, color='red') ax[0].legend([plot1,plot2], labels =['Lambda','OpenDC Serverless'], fontsize=15) ax[1].legend([plot3,plot4], labels =['Lambda','OpenDC Serverless'], fontsize=15) ax[0].tick_params(axis='both', which='major', labelsize=15) ax[0].tick_params(axis='both', which='minor', labelsize=15) ax[1].tick_params(axis='both', which='major', labelsize=15) ax[1].tick_params(axis='both', which='minor', labelsize=15) with sns.axes_style("ticks", {'axes.grid': True}): fig, ax =plt.subplots(1, figsize= (20,6)) fig.suptitle("Infrequent Invocation Pattern with Azure's Hybrid Histogram policy", fontsize=20) ax.set_ylim(-0.5,1.5) ax.set_title("No Concurrency", fontsize=15) ax.set_yticks([]) ax.set_ylabel('Cold start (line = true)', fontsize=15) 
ax.set_xlabel('Time', fontsize=15) plot1 = ax.eventplot(lambda_procresults['infrequent_noconcurrency'][lambda_procresults['infrequent_noconcurrency']['ColdStarts'] > 0].index, color='purple') plot2 = ax.eventplot(opendc_procresults['infrequent_noconcurrency_hybrid'][opendc_procresults['infrequent_noconcurrency_hybrid']['ColdStarts'] > 0].index, lineoffsets=0, color='red') ax.legend([plot1,plot2], labels =['Lambda','OpenDC Serverless'], fontsize=15) ax.tick_params(axis='both', which='major', labelsize=15) ax.tick_params(axis='both', which='minor', labelsize=15) ``` ### Very Infrequent Invocation Pattern ``` with sns.axes_style("ticks", {'axes.grid': True}): fig, ax =plt.subplots(2, 1, figsize= (20,10)) fig.suptitle("Very Infrequent Invocation Pattern", fontsize= 20) ax[0].set_ylim(-0.5,1.5) ax[1].set_ylim(-0.5,1.5) ax[0].set_title("Concurrency", fontsize=15) ax[1].set_title("No Concurrency", fontsize=15) ax[0].set_yticks([]) ax[0].set_ylabel('Cold start (line = true)', fontsize=15) ax[1].set_yticks([]) ax[1].set_ylabel('ColdStart (line = True)', fontsize=15) ax[1].set_xlabel('Time', fontsize=15) plot1 = ax[0].eventplot(lambda_procresults['veryinfrequent'][lambda_procresults['veryinfrequent']['ColdStarts'] > 0].index, color='purple') plot2 = ax[0].eventplot(opendc_procresults['veryinfrequent'][opendc_procresults['veryinfrequent']['ColdStarts'] > 0].index, lineoffsets=0, color='red') plot3 = ax[1].eventplot(lambda_procresults['veryinfrequent_noconcurrency'][lambda_procresults['veryinfrequent_noconcurrency']['ColdStarts'] > 0].index, color='purple') plot4 = ax[1].eventplot(opendc_procresults['veryinfrequent_noconcurrency'][opendc_procresults['veryinfrequent_noconcurrency']['ColdStarts'] > 0].index, lineoffsets=0, color='red') ax[0].legend([plot1,plot2], labels =['Lambda','OpenDC Serverless'], fontsize=15) ax[1].legend([plot3,plot4], labels =['Lambda','OpenDC Serverless'], fontsize=15) ax[0].tick_params(axis='both', which='major', labelsize=15) 
ax[0].tick_params(axis='both', which='minor', labelsize=15) ax[1].tick_params(axis='both', which='major', labelsize=15) ax[1].tick_params(axis='both', which='minor', labelsize=15) ```
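Every panel above relies on the same selection step: filter the frame to the rows where `ColdStarts > 0` and hand the resulting `DatetimeIndex` to `eventplot`. A minimal sketch of that step, using a small synthetic frame since the processed CSVs are assumed to exist only locally:

```python
import pandas as pd

# Synthetic stand-in for one of the processed result frames above
idx = pd.date_range("2021-01-01", periods=6, freq="min")
df = pd.DataFrame({"ColdStarts": [1, 0, 0, 2, 0, 1]}, index=idx)

# Same boolean-mask selection used for every eventplot call above
cold_times = df[df["ColdStarts"] > 0].index
print(list(cold_times))
```

Only the timestamps of cold-start events survive the mask, which is why each `eventplot` draws one vertical line per cold start.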
# Natural Language Inference and the Dataset
:label:`sec_natural-language-inference-and-dataset`

In :numref:`sec_sentiment`, we discussed the problem of sentiment analysis. This task aims to classify a single text sequence into predefined categories, such as a set of sentiment polarities. However, when we need to decide whether one sentence can be inferred from another, or to eliminate redundancy by identifying sentences that are semantically equivalent, knowing how to classify one text sequence is insufficient. Instead, we need to be able to reason over pairs of text sequences.

## Natural Language Inference

*Natural language inference* studies whether a *hypothesis* can be inferred from a *premise*, where both are text sequences. In other words, natural language inference determines the logical relationship between a pair of text sequences. Such relationships usually fall into three types:

* *Entailment*: the hypothesis can be inferred from the premise.
* *Contradiction*: the negation of the hypothesis can be inferred from the premise.
* *Neutral*: all the other cases.

Natural language inference is also known as the recognizing textual entailment task. For example, the following pair will be labeled as *entailment* because "showing affection" in the hypothesis can be inferred from "hugging one another" in the premise.

> Premise: Two women are hugging each other.

> Hypothesis: Two women are showing affection.

The following is an example of *contradiction*, as "running the coding example" indicates "not sleeping" rather than "sleeping".

> Premise: A man is running the coding example from Dive into Deep Learning.

> Hypothesis: The man is sleeping.

The third example shows a *neutral* relationship, because neither "famous" nor "not famous" can be inferred from the fact that "the musicians are performing for us".

> Premise: The musicians are performing for us.

> Hypothesis: The musicians are famous.

Natural language inference has been a central topic in understanding natural language. It enjoys wide applications, ranging from information retrieval to open-domain question answering. To study this problem, we will begin by investigating a popular natural language inference benchmark dataset.

## The Stanford Natural Language Inference (SNLI) Dataset

[**The Stanford Natural Language Inference (SNLI) corpus**] is a collection of over 500,000 labeled English sentence pairs :cite:`Bowman.Angeli.Potts.ea.2015`. We download and store the extracted SNLI dataset in the path `../data/snli_1.0`.

```
import os
import re
import torch
from torch import nn
from d2l import torch as d2l

#@save
d2l.DATA_HUB['SNLI'] = (
    'https://nlp.stanford.edu/projects/snli/snli_1.0.zip',
    '9fcde07509c7e87ec61c640c1b2753d9041758e4')

data_dir = d2l.download_extract('SNLI')
```

### [**Reading the Dataset**]

The original SNLI dataset contains much richer information than we really need in our experiments. Thus, we define a function `read_snli` to extract only part of the dataset, then return lists of premises, hypotheses, and their labels.

```
#@save
def read_snli(data_dir, is_train):
    """Parse the SNLI dataset into premises, hypotheses, and labels"""
    def extract_text(s):
        # Remove information that we will not use
        s = re.sub('\\(', '', s)
        s = re.sub('\\)', '', s)
        # Substitute two or more consecutive whitespaces with a single space
        s = re.sub('\\s{2,}', ' ', s)
        return s.strip()
    label_set = {'entailment': 0, 'contradiction': 1, 'neutral': 2}
    file_name = os.path.join(data_dir, 'snli_1.0_train.txt'
                             if is_train else 'snli_1.0_test.txt')
    with open(file_name, 'r') as f:
        rows = [row.split('\t') for row in f.readlines()[1:]]
    premises = [extract_text(row[1]) for row in rows if row[0] in label_set]
    hypotheses = [extract_text(row[2]) for row in rows if row[0] in label_set]
    labels = [label_set[row[0]] for row in rows if row[0] in label_set]
    return premises, hypotheses, labels
```

Now let's [**print the first 3 pairs**] of premises and hypotheses, as well as their labels ("0", "1", and "2" correspond to "entailment", "contradiction", and "neutral", respectively).

```
train_data = read_snli(data_dir, is_train=True)
for x0, x1, y in zip(train_data[0][:3], train_data[1][:3], train_data[2][:3]):
    print('premise:', x0)
    print('hypothesis:', x1)
    print('label:', y)
```

The training set has about 550,000 pairs, and the testing set has about 10,000 pairs. The following shows that the three [**labels "entailment", "contradiction", and "neutral" are balanced**] in both the training set and the testing set.

```
test_data = read_snli(data_dir, is_train=False)
for data in [train_data, test_data]:
    print([[row for row in data[2]].count(i) for i in range(3)])
```

### [**Defining a Class for Loading the Dataset**]

Below we define a class for loading the SNLI dataset. The variable `num_steps` in the class constructor specifies the length of a text sequence so that each minibatch of sequences will have the same shape. In other words, tokens after the first `num_steps` ones in a longer sequence are trimmed, while special "&lt;pad&gt;" tokens are appended to a shorter sequence until its length becomes `num_steps`. By implementing the `__getitem__` function, we can arbitrarily access the premise, hypothesis, and label with the index `idx`.

```
#@save
class SNLIDataset(torch.utils.data.Dataset):
    """A customized dataset to load the SNLI dataset"""
    def __init__(self, dataset, num_steps, vocab=None):
        self.num_steps = num_steps
        all_premise_tokens = d2l.tokenize(dataset[0])
        all_hypothesis_tokens = d2l.tokenize(dataset[1])
        if vocab is None:
            self.vocab = d2l.Vocab(all_premise_tokens + all_hypothesis_tokens,
                                   min_freq=5, reserved_tokens=['<pad>'])
        else:
            self.vocab = vocab
        self.premises = self._pad(all_premise_tokens)
        self.hypotheses = self._pad(all_hypothesis_tokens)
        self.labels = torch.tensor(dataset[2])
        print('read ' + str(len(self.premises)) + ' examples')

    def _pad(self, lines):
        return torch.tensor([d2l.truncate_pad(
            self.vocab[line], self.num_steps, self.vocab['<pad>'])
                             for line in lines])

    def __getitem__(self, idx):
        return (self.premises[idx], self.hypotheses[idx]), self.labels[idx]

    def __len__(self):
        return len(self.premises)
```

### [**Putting It All Together**]

Now we can invoke the `read_snli` function and the `SNLIDataset` class to download the SNLI dataset and return `DataLoader` instances for both the training and testing sets, together with the vocabulary of the training set. It is noteworthy that we must use the vocabulary constructed from the training set as that of the testing set. As a result, any new token from the testing set will be unknown to the model trained on the training set.

```
#@save
def load_data_snli(batch_size, num_steps=50):
    """Download the SNLI dataset and return data iterators and vocabulary"""
    num_workers = d2l.get_dataloader_workers()
    data_dir = d2l.download_extract('SNLI')
    train_data = read_snli(data_dir, True)
    test_data = read_snli(data_dir, False)
    train_set = SNLIDataset(train_data, num_steps)
    test_set = SNLIDataset(test_data, num_steps, train_set.vocab)
    train_iter = torch.utils.data.DataLoader(train_set, batch_size,
                                             shuffle=True,
                                             num_workers=num_workers)
    test_iter = torch.utils.data.DataLoader(test_set, batch_size,
                                            shuffle=False,
                                            num_workers=num_workers)
    return train_iter, test_iter, train_set.vocab
```

Here we set the batch size to 128 and the sequence length to 50, and invoke the `load_data_snli` function to get the data iterators and vocabulary. Then we print the vocabulary size.

```
train_iter, test_iter, vocab = load_data_snli(128, 50)
len(vocab)
```

Now we print the shape of the first minibatch. In contrast to sentiment analysis, we have two inputs `X[0]` and `X[1]` representing the premises and hypotheses, respectively.

```
for X, Y in train_iter:
    print(X[0].shape)
    print(X[1].shape)
    print(Y.shape)
    break
```

## Summary

* Natural language inference studies whether a hypothesis can be inferred from a premise, where both are text sequences.
* In natural language inference, relationships between premises and hypotheses include entailment, contradiction, and neutrality.
* The Stanford Natural Language Inference (SNLI) corpus is a popular benchmark dataset for natural language inference.

## Exercises

1. Machine translation has long been evaluated based on superficial $n$-gram matching between an output translation and a ground-truth translation. Can you design a measure for evaluating machine translation results by using natural language inference?
1. How can we change hyperparameters to reduce the vocabulary size?

[Discussions](https://discuss.d2l.ai/t/5722)
# Proof of concept of new "composable" ADMM formulation 3/30/21 This notebook is a proof of concept and understanding of the new ADMM formulation, based on grouping quadratic terms and linear constraints in with the global equality constraint. ``` %load_ext autoreload %autoreload 2 import numpy as np import matplotlib.pyplot as plt import pandas as pd from scipy import signal from scipy import sparse as sp from time import time import seaborn as sns import cvxpy as cvx sns.set_style('darkgrid') import sys sys.path.append('..') from osd import Problem from osd.components import GaussNoise, SmoothSecondDifference, PiecewiseConstant, SparseFirstDiffConvex, LaplaceNoise, Blank from osd.generators import proj_l2_d2, make_pwc_data from osd.utilities import progress ``` ## Problem data generation ``` T = 500 X_real = np.zeros((3, T)) X_real[0] = 0.1 * np.random.randn(T) X_real[1] = proj_l2_d2(np.random.randn(T), theta=5e2) * 2 X_real[2] = make_pwc_data(T, segments=4) y = np.sum(X_real, axis=0) fig, ax = plt.subplots(nrows=3, sharex=True, figsize=(14, 7)) ax[0].set_title('Smooth component') ax[0].plot(X_real[1]) ax[1].set_title('PWC component') ax[1].plot(X_real[2]) ax[2].set_title('Observed signal') ax[2].plot(y, linewidth=1, marker='.') # ax[2].plot(signal1 + signal2, label='true signal minus noise', ls='--') plt.tight_layout() plt.show() ``` ## Example 1: Quadratically smooth, plus Gaussian noise ``` y = np.sum(X_real[:2], axis=0) fig, ax = plt.subplots(nrows=3, sharex=True, figsize=(14, 7)) ax[0].set_title('Noise component') ax[0].plot(X_real[0]) ax[1].set_title('Smooth component') ax[1].plot(X_real[1]) ax[2].set_title('Observed signal') ax[2].plot(y, linewidth=1, marker='.') # ax[2].plot(signal1 + signal2, label='true signal minus noise', ls='--') plt.tight_layout() plt.show() ``` ### Solve with CVXPY + MOSEK ``` c1 = GaussNoise() c2 = SmoothSecondDifference(theta=1e2) components = [c1, c2] problem = Problem(y, components) problem.weights.value = [c.theta for c in 
problem.components]
problem.decompose(admm=False, solver='MOSEK')

K = len(components)
fig, ax = plt.subplots(nrows=K, sharex=True, figsize=(10,8))
for k in range(K):
    true = X_real[k]
    est = problem.estimates[k]
    ax[k].plot(true, label='true')
    ax[k].plot(est, label='estimated (mean adj.)')
    ax[k].set_title('Component {}'.format(k))
    ax[k].legend()
plt.tight_layout()
```

### Solve with new ADMM formulation

This problem consists of quadratic terms and linear constraints, so everything goes in the z-update. We have

$$ g(z,\tilde{z};\theta) = \left\lVert z_1 \right\rVert_2^2 + \theta\left\lVert \tilde{z}_2\right\rVert_2^2, $$

with

$$ \mathbf{dom}\,g = \left\{z,\tilde{z}\mid\sum_{k} z_k = y, D^2 z_2 = \tilde{z}_2\right\}.$$

Let $\hat{z} = \left[ \begin{matrix} z_1^T & z_2^T & \tilde{z}_2^T\end{matrix}\right]^T$. Then $g$ can be rewritten as

$$ g(\hat{z};P) = \hat{z}^T P \hat{z},$$

where $P\in\mathbf{R}^{(3T-2)\times(3T-2)}$ is a diagonal matrix with the first $T$ entries equal to 1, the next $T$ entries equal to 0, and the final $T-2$ entries equal to $\theta$.
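As a quick sanity check on this construction (a standalone sketch, separate from the solver code below): building the diagonal $P$ described above and evaluating $\hat{z}^T P \hat{z}$ on a random $\hat{z}$ reproduces $\lVert z_1 \rVert_2^2 + \theta\lVert \tilde{z}_2\rVert_2^2$ exactly. Note that the solver code below uses `P = 2 * np.diag(d)`; the extra factor of 2 appears because the KKT gradient doubles the quadratic term.

```python
import numpy as np

T, theta = 50, 1e2
rng = np.random.default_rng(0)

# Diagonal of P: first T entries 1, next T entries 0, final T-2 entries theta
d = np.concatenate([np.ones(T), np.zeros(T), theta * np.ones(T - 2)])
P = np.diag(d)

# z_hat stacks [z_1, z_2, z_tilde_2]
z_hat = rng.standard_normal(3 * T - 2)
z1, z2_tilde = z_hat[:T], z_hat[2 * T:]

lhs = z_hat @ P @ z_hat
rhs = np.sum(z1 ** 2) + theta * np.sum(z2_tilde ** 2)
assert np.isclose(lhs, rhs)
```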
``` # Initialization X = np.zeros((K, T)) X[0] = y X_tilde = np.zeros(T-2) Z = np.copy(X) Z_tilde = np.copy(X_tilde) U = np.zeros_like(X) U_tile = np.zeros_like(X_tilde) d = np.zeros(3 * T - 2) d[:T] = 1 d[2*T:] = 1e2 P = 2 * np.diag(d) D = np.diff(np.eye(T), axis=0, n=2) F_1 = np.block([np.eye(T), np.eye(T), np.zeros((T, T-2))]) F_2 = np.block([np.zeros((T-2, T)), D, -1 * np.eye(T-2)]) F = np.block([[F_1], [F_2]]) A = np.block([ [P + np.eye(3*T - 2), F.T], [F, np.zeros((F.shape[0], F.shape[0]))] ]) A.shape v = np.random.randn(3 * T - 2) g = np.block([y, np.zeros(T - 2)]) vp = np.block([v, g]) A_s = sp.csc_matrix(A) A_factored = sp.linalg.splu(A_s) first_solve = A_factored.solve(vp) r = vp - A_s.dot(first_solve) second_solve = first_solve + A_factored.solve(r) plt.plot(r) x = cvx.Variable(3 * T - 2) objective = cvx.Minimize( cvx.sum_squares(x[:T]) + 1e2 * cvx.sum_squares(x[2*T:]) + 0.5 * cvx.sum_squares(x - v) ) constraints = [ x[:T] + x[T:2*T] == y, cvx.diff(x[T:2*T], k=2) == x[2*T:] ] problem = cvx.Problem(objective, constraints) problem.solve(verbose=True, solver='MOSEK') objective.value (cvx.sum_squares(first_solve[:T]) + 1e2 * cvx.sum_squares(first_solve[2*T:3*T-2]) + 0.5 * cvx.sum_squares(first_solve[:3*T-2] - v)).value (cvx.sum_squares(second_solve[:T]) + 1e2 * cvx.sum_squares(second_solve[2*T:3*T-2]) + 0.5 * cvx.sum_squares(second_solve[:3*T-2] - v)).value cvx_est = np.block([x.value, problem.constraints[0].dual_value, problem.constraints[1].dual_value]) cvx_residual = A.dot(cvx_est) - vp linsolve_residual = A.dot(first_solve) - vp plt.plot(cvx_residual, linewidth=1, marker='.', alpha=0.2, label='cvx_residual') plt.plot(linsolve_residual, linewidth=1, marker='.', alpha=0.2, label='linsolve_residual') plt.legend(); plt.plot(x.value, linewidth=1, label='cvxpy') plt.plot(second_solve[:3*T-1], linewidth=1, label='kkt solve') plt.legend(); np.alltrue([ np.alltrue(np.isclose(second_solve[:T] + second_solve[T:2*T], y)), np.alltrue(np.isclose(x.value[:T] + 
x.value[T:2*T], y)) ]) plt.plot(x.value[:T] + x.value[T:2*T] - y, linewidth=1, label='cvxpy') plt.plot(second_solve[:T] + second_solve[T:2*T] - y, linewidth=1, label='kkt') plt.legend(); np.alltrue([ np.alltrue(np.isclose(np.diff(second_solve[T:2*T], n=2), second_solve[2*T:3*T-2])), np.alltrue(np.isclose(np.diff(x.value[T:2*T], n=2), x.value[2*T:])) ]) plt.plot(np.diff(x.value[T:2*T], n=2) - x.value[2*T:], linewidth=1, label='cvxpy') plt.plot(np.diff(second_solve[T:2*T], n=2) - second_solve[2*T:3*T-2], linewidth=1, label='kkt') plt.legend(); from scipy import sparse as sp A_s = sp.csc_matrix(A) A_s %timeit sp.csc_matrix(A) %timeit out_test = np.linalg.solve(A, vp) %timeit A_factored = sp.linalg.splu(A_s) A_factored = sp.linalg.splu(A_s) %timeit sparse_test = A_factored.solve(vp) A_factored = sp.linalg.splu(A_s) sparse_test = A_factored.solve(vp) r = vp - A_s.dot(sparse_test) print(np.linalg.norm(r)) c = A_factored.solve(r) x = sparse_test + c r = vp - A_s.dot(x) print(np.linalg.norm(r)) plt.plot(x.value) plt.plot(out_test[:3*T-1]) plt.plot(sparse_test[:3*T-1]) ```
<a href="https://colab.research.google.com/github/EvenSol/NeqSim-Colab/blob/master/notebooks/thermodynamics/ThermoPropertyCharts.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

```
#@title Thermodynamic property charts
#@markdown Demonstration of thermodynamic property charts using neqsim and CoolProp
%%capture
!pip install neqsim
!pip install CoolProp
import neqsim
from neqsim.thermo.thermoTools import *
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import math
plt.style.use('classic')
%matplotlib inline

#@title Thermodynamic property charts
#@markdown This video gives an introduction to creating thermodynamic property charts
from IPython.display import YouTubeVideo
YouTubeVideo('kLA9AGv8AB4', width=600, height=400)
```

# The P-H diagram

On the P-H diagram, pressure is indicated on the y-axis and enthalpy is indicated on the x-axis. Typically, enthalpy is in units of Btu/lb and pressure is in units of pounds per square inch (psi). The upside-down U shown on the diagram marks the points at which the refrigerant changes phase. The left vertical curve is the saturated liquid curve and the right vertical curve is the saturated vapor curve. The region between the two curves describes refrigerant states that contain a mixture of both liquid and vapor. Locations to the left of the saturated liquid curve indicate that the refrigerant is in liquid form, and locations to the right of the saturated vapor curve indicate that it is in vapor form. The point at which the two curves meet is called the critical point; above it, no amount of additional pressure will turn the vapor into a liquid.
``` #@title Create a pH diagram { run: "auto" } import CoolProp from CoolProp.Plots import PropertyPlot componentName = "R134a" #@param ["R134a", "ethane", "propane", "CO2", "nitrogen"] plot = PropertyPlot(componentName, 'PH', unit_system='EUR', tp_limits='ACHP') plot.calc_isolines(CoolProp.iQ, num=11) plot.calc_isolines(CoolProp.iT, num=25) plot.calc_isolines(CoolProp.iSmass, num=15) plot.show() ``` # The Ts diagram A Temperature-entropy diagram (T-s diagram) is the type of diagram most frequently used to analyze energy transfer system cycles. It is used in thermodynamics to visualize changes to temperature and specific entropy during a thermodynamic process or cycle. ``` #@title Create a Ts diagram { run: "auto" } componentName = "water" #@param ["water", "HEOS::R245fa", "R134a", "ethane", "propane", "CO2", "nitrogen"] import CoolProp from CoolProp.Plots import PropertyPlot plot = PropertyPlot(componentName, 'TS', unit_system='EUR', tp_limits='ORC') plot.calc_isolines(CoolProp.iQ, num=11) plot.calc_isolines(CoolProp.iP, iso_range=[1,50], num=10, rounding=True) plot.draw() plot.isolines.clear() plot.props[CoolProp.iP]['color'] = 'green' plot.props[CoolProp.iP]['lw'] = '0.5' plot.calc_isolines(CoolProp.iP, iso_range=[1,50], num=10, rounding=False) plot.show() #@title Enthalpy chart of natural gas minPressure = 1.0 #@param {type:"number"} maxPressure = 200.0 #@param {type:"number"} minTemperature = -150.0 #@param {type:"number"} maxTemperature = 200.0 #@param {type:"number"} nitrogen = 1.0 #@param {type:"number"} CO2 = 2.5 #@param {type:"number"} methane = 80.0 #@param {type:"number"} ethane = 5.0 #@param {type:"number"} propane = 2.5 #@param {type:"number"} ibutane = 1.25 #@param {type:"number"} nbutane = 1.25 #@param {type:"number"} ipentane = 0.5 #@param {type:"number"} npentane = 0.5 #@param {type:"number"} nhexane = 0.005 #@param {type:"number"} fluid1 = fluid('srk') fluid1.addComponent("nitrogen", nitrogen) fluid1.addComponent("CO2", CO2) 
fluid1.addComponent("methane", methane) fluid1.addComponent("ethane", ethane) fluid1.addComponent("propane", propane) fluid1.addComponent("i-butane", ibutane) fluid1.addComponent("n-butane", nbutane) fluid1.addComponent("i-pentane", ipentane) fluid1.addComponent("n-pentane", nbutane) fluid1.addComponent("n-hexane", nhexane) fluid1.setMixingRule(2); def enthalpy(temperature1,pressure1): fluid1.setPressure(pressure1, 'bara') fluid1.setTemperature(temperature1, 'C') TPflash(fluid1) fluid1.initProperties(); return fluid1.getEnthalpy('J/kg')/1000.0 temperature = np.arange(minTemperature, maxTemperature, int((maxTemperature-minTemperature)/100)+1) pressure = np.arange(minPressure, maxPressure, int((maxPressure-minPressure)/100)+1) X, Y = np.meshgrid(temperature, pressure) enthalpyGas = np.fromiter(map(enthalpy, X.ravel(), Y.ravel()), X.dtype).reshape(X.shape) thermoOps = neqsim.thermodynamicOperations.ThermodynamicOperations(fluid1) thermoOps.calcPTphaseEnvelope() fig, ax = plt.subplots() CS = ax.contour(temperature,pressure, enthalpyGas) ax.clabel(CS, inline=1, fontsize=10) plt.plot([x - 273.15 for x in list(thermoOps.getOperation().get("dewT"))],list(thermoOps.getOperation().get("dewP")), label="dew point") plt.plot([x - 273.15 for x in list(thermoOps.getOperation().get("bubT"))],list(thermoOps.getOperation().get("bubP")), label="bubble point") plt.title('Enthalpy [kJ/kg] of natural gas') plt.xlabel('Temperature [\u00B0C]') plt.ylabel('Pressure [bar]') plt.legend() plt.show() #@title Enthalpy and entropy lines in a phase envelope of natural gas minPressure = 1.0 #@param {type:"number"} maxPressure = 150.0 #@param {type:"number"} minTemperature = -150.0 #@param {type:"number"} maxTemperature = 50.0 #@param {type:"number"} nitrogen = 1.0 #@param {type:"number"} CO2 = 2.5 #@param {type:"number"} methane = 80.0 #@param {type:"number"} ethane = 5.0 #@param {type:"number"} propane = 2.5 #@param {type:"number"} ibutane = 1.25 #@param {type:"number"} nbutane = 1.0 #@param 
{type:"number"} ipentane = 0.4 #@param {type:"number"} npentane = 0.3 #@param {type:"number"} nhexane = 0.08#@param {type:"number"} fluid1 = fluid('srk') fluid1.addComponent("nitrogen", nitrogen) fluid1.addComponent("CO2", CO2) fluid1.addComponent("methane", methane) fluid1.addComponent("ethane", ethane) fluid1.addComponent("propane", propane) fluid1.addComponent("i-butane", ibutane) fluid1.addComponent("n-butane", nbutane) fluid1.addComponent("i-pentane", ipentane) fluid1.addComponent("n-pentane", nbutane) fluid1.addComponent("n-hexane", nhexane) fluid1.setMixingRule(2); def enthalpy(temperature1,pressure1): fluid1.setPressure(pressure1, 'bara') fluid1.setTemperature(temperature1, 'C') TPflash(fluid1) fluid1.initProperties(); return fluid1.getEnthalpy('J/kg')/1000.0 def entropy(temperature1,pressure1): fluid1.setPressure(pressure1, 'bara') fluid1.setTemperature(temperature1, 'C') TPflash(fluid1) fluid1.initProperties(); return fluid1.getEntropy('J/kgK')/1000.0 def fraction(temperature1,pressure1): fluid1.setPressure(pressure1, 'bara') fluid1.setTemperature(temperature1, 'C') TPflash(fluid1) fluid1.initProperties(); result = np.nan if(fluid1.getNumberOfPhases()>1): result = fluid1.getPhaseFraction('gas','mass') return result temperature = np.arange(minTemperature, maxTemperature, int((maxTemperature-minTemperature)/1000)+1) pressure = np.arange(minPressure, maxPressure, int((maxPressure-minPressure)/1000)+1) X, Y = np.meshgrid(temperature, pressure) enthalpyGas = np.fromiter(map(enthalpy, X.ravel(), Y.ravel()), X.dtype).reshape(X.shape) entropyGas = np.fromiter(map(entropy, X.ravel(), Y.ravel()), X.dtype).reshape(X.shape) gasQuality = np.fromiter(map(fraction, X.ravel(), Y.ravel()), X.dtype).reshape(X.shape) thermoOps = neqsim.thermodynamicOperations.ThermodynamicOperations(fluid1) thermoOps.calcPTphaseEnvelope() plt.rcParams['figure.figsize'] = [20, 10] fig, ax = plt.subplots() CS = ax.contour(temperature,pressure, enthalpyGas, 12 ,colors='r') 
CS.collections[0].set_label('enthalpy [kJ/kg]') CS2 = ax.contour(temperature,pressure, entropyGas, 12 ,colors='b') CS2.collections[0].set_label('entropy [kJ/kgK]') CS3 = ax.contour(temperature,pressure, gasQuality, 12 ,colors='k',levels=[0.5, 0.6, 0.7, 0.8, 0.9, 0.95]) CS3.collections[0].set_label('gas mass fraction') ax.clabel(CS, inline=1, fontsize=12) ax.clabel(CS2, inline=1, fontsize=12) ax.clabel(CS3, inline=1, fontsize=12) plt.plot([x - 273.15 for x in list(thermoOps.getOperation().get("dewT"))],list(thermoOps.getOperation().get("dewP")), label="dew point", linewidth=2, color='k') plt.plot([x - 273.15 for x in list(thermoOps.getOperation().get("bubT"))],list(thermoOps.getOperation().get("bubP")), label="bubble point", linestyle='--', linewidth=2, color='k') plt.title('Phase envelope of a rich natural gas') plt.xlabel('Temperature [\u00B0C]', fontsize=15) plt.ylabel('Pressure [bara]', fontsize=15) plt.legend() plt.xticks(ticks=[-150.0, -100.0, -80.0,- 70.0, -60.0, -50.0,-40.0, -30.0, -20.0, -10.0, 0.0, 10.0, 20.0, 30.0, 40.0, 50.0]) plt.show() ```
# MLFlow Pre-packaged Model Server AB Test Deployment

In this example we will build two models with MLFlow and deploy them as an A/B test deployment. This is powerful because it allows you to deploy a new model next to the old one while distributing a percentage of the traffic. These deployment strategies are quite simple using Seldon, and can be extended to shadow deployments, multi-armed bandits, etc.

## Tutorial Overview

This tutorial breaks down into the following sections:

1. Train the MLFlow elastic net wine example
2. Deploy your trained model leveraging our pre-packaged MLFlow model server
3. Test the deployed MLFlow model by sending requests
4. Deploy your second model as an A/B test
5. Visualise and monitor the performance of your models using Seldon Analytics

It follows closely our talk at the [Spark + AI Summit 2019 on Seldon and MLflow](https://www.youtube.com/watch?v=D6eSfd9w9eA).

## Dependencies

For this example to work you must be running Seldon 0.3.2 or above - you can follow our [getting started guide for this](https://docs.seldon.io/projects/seldon-core/en/latest/workflow/install.html).

In regards to other dependencies, make sure you have installed:

* Helm v3.0.0+
* kubectl v1.14+
* Python 3.6+
* MLFlow 1.1.0
* [pygmentize](https://pygments.org/docs/cmdline/)
* [tree](http://mama.indstate.edu/users/ice/tree/)

We will also take this chance to load the Python dependencies we will use throughout the tutorial:

```
import pandas as pd
import numpy as np
from seldon_core.seldon_client import SeldonClient
```

#### Let's get started! 🚀🔥

## 1. Train the first MLFlow Elastic Net Wine example

For our example, we will use the elastic net wine example from [MLflow's tutorial](https://www.mlflow.org/docs/latest/tutorial.html).
### MLproject As any other MLflow project, it is defined by its `MLproject` file: ``` !pygmentize -l yaml MLproject ``` We can see that this project uses Conda for the environment and that it's defined in the `conda.yaml` file: ``` !pygmentize conda.yaml ``` Lastly, we can also see that the training will be performed by the `train.py` file, which receives two parameters `alpha` and `l1_ratio`: ``` !pygmentize train.py ``` ### Dataset We will use the wine quality dataset. Let's load it to see what's inside: ``` data = pd.read_csv("wine-quality.csv") data.head() ``` ### Training We've set up our MLflow project and our dataset is ready, so we are now good to start training. MLflow allows us to train our model with the following command: ``` bash $ mlflow run . -P alpha=... -P l1_ratio=... ``` On each run, `mlflow` will set up the Conda environment defined by the `conda.yaml` file and will run the training commands defined in the `MLproject` file. ``` !mlflow run . -P alpha=0.5 -P l1_ratio=0.5 ``` Each of these commands will create a new run which can be visualised through the MLFlow dashboard as per the screenshot below. ![](images/mlflow-dashboard.png) Each of these models can actually be found on the `mlruns` folder: ``` !tree -L 1 mlruns/0 ``` ### MLmodel Inside each of these folders, MLflow stores the parameters we used to train our model, any metric we logged during training, and a snapshot of our model. If we look into one of them, we can see the following structure: ``` !tree mlruns/0/$(ls mlruns/0 | head -1) ``` In particular, we are interested in the `MLmodel` file stored under `artifacts/model`: ``` !pygmentize -l yaml mlruns/0/$(ls mlruns/0 | head -1)/artifacts/model/MLmodel ``` This file stores the details of how the model was stored. With this information (plus the other files in the folder), we are able to load the model back. Seldon's MLflow server will use this information to serve this model. 
Now we should upload our newly trained model into a public Google Cloud Storage or S3 bucket. To keep things simple we have already done this; you can find the model at `gs://seldon-models/mlflow/model-a`.

## 2. Deploy your model using the Pre-packaged Model Server for MLFlow

Now we can deploy our trained MLFlow model. For this we have to create a Seldon deployment configuration for the model server, which we will break down further below. We will be using the model we uploaded to our Google bucket (gs://seldon-models/mlflow/elasticnet_wine), but you can use your own model if you uploaded it to a public bucket.

## Setup Seldon Core

Use the setup notebook to [Setup Cluster](../../seldon_core_setup.ipynb#Setup-Cluster) with [Ambassador Ingress](../../seldon_core_setup.ipynb#Ambassador) and [Install Seldon Core](../../seldon_core_setup.ipynb#Install-Seldon-Core). Instructions [also online](./seldon_core_setup.html).

```
!pygmentize mlflow-model-server-seldon-config.yaml
```

Once we write our configuration file, we are able to deploy it to our cluster by running the following command:

```
!kubectl apply -f mlflow-model-server-seldon-config.yaml
```

Once it's created we just wait until it's deployed. It will basically download the image for the pre-packaged MLFlow model server and initialise it with the model we specified above. You can check the status of the deployment with the following command:

```
!kubectl rollout status deployment.apps/mlflow-deployment-mlflow-deployment-dag-0-wines-classifier
```

Once it's deployed, we should see a "successfully rolled out" message above. We can now test it!

## 3. Test the deployed MLFlow model by sending requests

Now that our model is deployed in Kubernetes, we are able to send requests to it. We will first need the URL that is currently exposed through Ambassador. If you are running this locally, you should be able to reach it through localhost; in this case we can use port 80.
``` !kubectl get svc | grep ambassador ``` Now we will select the first datapoint in our dataset to send to the model. ``` x_0 = data.drop(["quality"], axis=1).values[:1] print(list(x_0[0])) ``` We can try sending a request first using curl: ``` !curl -X POST -H 'Content-Type: application/json' \ -d "{'data': {'names': [], 'ndarray': [[7.0, 0.27, 0.36, 20.7, 0.045, 45.0, 170.0, 1.001, 3.0, 0.45, 8.8]]}}" \ http://localhost:80/seldon/seldon/mlflow-deployment/api/v0.1/predictions ``` We can also send the request by using our python client ``` from seldon_core.seldon_client import SeldonClient import math import numpy as np import subprocess HOST = "localhost" # Add the URL you found above port = "80" # Make sure you use the port above batch = x_0 payload_type = "ndarray" sc = SeldonClient( gateway="ambassador", namespace="seldon", gateway_endpoint=HOST + ":" + port) client_prediction = sc.predict( data=batch, deployment_name="mlflow-deployment", names=[], payload_type=payload_type) print(client_prediction.response) ``` ## 4. Deploy your second model as an A/B test Now that we have a model in production, it's possible to deploy a second model as an A/B test. Our model will also be an Elastic Net model but using a different set of parameters. We can easily train it by leveraging MLflow: ``` !mlflow run . -P alpha=0.75 -P l1_ratio=0.2 ``` As we did before, we will now need to upload our model to a cloud bucket. To speed things up, we already have done so and the second model is now accessible in `gs://seldon-models/mlflow/model-b`. ### A/B test We will deploy our second model as an A/B test. In particular, we will redirect 20% of the traffic to the new model. 
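Conceptually, such a split just means each incoming request is routed to model `b` with probability 0.2 and to model `a` otherwise. A stdlib-only sketch of the expected traffic shares (illustrative only; the actual routing is handled by Seldon's service orchestrator, not by user code):

```python
import random

random.seed(42)

N = 100_000
# Route each simulated request: 80% to model "a", 20% to the new model "b"
routes = ["b" if random.random() < 0.2 else "a" for _ in range(N)]

share_b = routes.count("b") / N
print(f"fraction routed to b: {share_b:.3f}")
```

Over many requests the observed share converges to the configured 20%, which is what the Grafana traffic breakdown in section 5 visualises.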
This can be done by simply adding a `traffic` attribute on our `SeldonDeployment` spec:

```
!pygmentize ab-test-mlflow-model-server-seldon-config.yaml
```

And similar to the model above, we only need to run the following to deploy it:

```
!kubectl apply -f ab-test-mlflow-model-server-seldon-config.yaml
```

We can check that the models have been deployed and are running with the following command. We should now see the "a-" and "b-" models.

```
!kubectl get pods
```

## 5. Visualise and monitor the performance of your models using Seldon Analytics

This section is optional, but by following the instructions you will be able to visualise the performance of both models as per the chart below. In order for this example to work you need to install and run the [Grafana Analytics package for Seldon Core](https://docs.seldon.io/projects/seldon-core/en/latest/analytics/analytics.html#helm-analytics-chart). For this we can access the URL with the command below; it will request a username and password, which by default are set to the following:

* Username: admin
* Password: password

You can access the Grafana dashboard through the port provided below:

```
!kubectl get svc grafana-prom -o jsonpath='{.spec.ports[0].nodePort}'
```

Now that we have both models running in our Kubernetes cluster, we can analyse their performance using Seldon Core's integration with Prometheus and Grafana. To do so, we will iterate over the training set (which can be found in `wine-quality.csv`), making a request and sending the feedback of the prediction. Since the `/feedback` endpoint requires a `reward` signal (i.e. the higher the better), we will simulate one as:

$$ R(x_{n}) = \begin{cases} \frac{1}{(y_{n} - f(x_{n}))^{2}} &, y_{n} \neq f(x_{n}) \\ 500 &, y_{n} = f(x_{n}) \end{cases} $$

where $R(x_{n})$ is the reward for input point $x_{n}$, $f(x_{n})$ is our trained model and $y_{n}$ is the actual value.
```
def _get_reward(y, y_pred):
    # y and y_pred are single-element arrays; return a scalar reward in both branches
    if y == y_pred:
        return 500.0
    return float(np.squeeze(1 / np.square(y - y_pred)))

def _test_row(row):
    input_features = row[:-1]
    feature_names = input_features.index.to_list()
    X = input_features.values.reshape(1, -1)
    y = row[-1].reshape(1, -1)

    # Note that we are re-using the SeldonClient defined previously
    r = sc.predict(
        deployment_name="mlflow-deployment",
        data=X,
        names=feature_names)

    y_pred = r.response.data.tensor.values
    reward = _get_reward(y, y_pred)
    sc.feedback(
        deployment_name="mlflow-deployment",
        prediction_request=r.request,
        prediction_response=r.response,
        reward=reward)

    return reward

data.apply(_test_row, axis=1)
```

You should now be able to see Seldon's pre-built Grafana dashboard.

![](images/grafana-mlflow.jpg)

At the bottom of the dashboard you can see the following charts:

- On the left: the requests per second, which shows the different traffic breakdown we specified.
- In the center: the reward, where you can see how model `a` outperforms model `b` by a large margin.
- On the right: the latency for each one of them.

You are able to add your own custom metrics, and try out other more complex deployments by following further guides at https://docs.seldon.io/projects/seldon-core/en/latest/workflow/README.html
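For reference, the same prediction request can be built from plain Python; a minimal sketch (the endpoint URL and payload shape mirror the curl example above; `build_payload` is a hypothetical helper, not part of the Seldon client, and the actual POST is left commented out since it needs a live cluster):

```python
import json

# Endpoint from the curl example above (assumes Ambassador on localhost:80)
ENDPOINT = "http://localhost:80/seldon/seldon/mlflow-deployment/api/v0.1/predictions"

def build_payload(features):
    # Seldon's ndarray payload: one inner list per row of features
    return {"data": {"names": [], "ndarray": [list(features)]}}

payload = build_payload(
    [7.0, 0.27, 0.36, 20.7, 0.045, 45.0, 170.0, 1.001, 3.0, 0.45, 8.8])
body = json.dumps(payload)
print(body)

# To actually send it (requires the deployment to be running):
# import requests
# r = requests.post(ENDPOINT, data=body, headers={"Content-Type": "application/json"})
# print(r.json())
```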
```
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import pathlib
import os
import pwd
import figformat

fig_width,fig_height,params=figformat.figure_format(fig_width=3.4,fig_height=3.4)
mpl.rcParams.update(params)
#mpl.rcParams.keys()
#help(figformat)

def get_username():
    return pwd.getpwuid(os.getuid())[0]

user = get_username()
run_dir = pathlib.Path(rf"/Users/{user}/GitHub/LEC/examples/")  #path to TABLES
print(f"run_dir: {run_dir}")
```

### The function $q(\chi_e)$ for $\chi_e\ll 1$

\begin{equation}
q(\chi_e\ll 1)\approx 1-\frac{55}{16}\sqrt{3}\chi + 48\chi^2 \nonumber
\end{equation}

```
def func(x):
    f=1-(55/16)*np.sqrt(3)*x + 48*x**2
    return f

x=np.arange(0.001,10,0.001)
```

### The function $q(\chi_e)$ for $\chi_e\gg 1$

\begin{equation}
q(\chi_e\gg 1)\approx\frac{48}{243}\Gamma(\frac{2}{3})\chi^{-4/3} \left[ 1 -\frac{81}{16\Gamma(2/3)}(3\chi)^{-2/3} \right] \nonumber
\end{equation}

```
def func2(y):
    f2=(48/243)*1.354*2.08*y**(-4/3)*(1-(81/16)*0.73855*(3*y)**(-2/3))
    return f2

y=np.arange(0.1,100,0.001)
```

### Extract $\chi_e$, $P_\mathrm{Q}$, $P_\mathrm{C}$, and $q(\chi_e)$

```
# np.float is deprecated (removed in recent numpy); use the builtin float instead
Xi,Prad,Pc,g = np.loadtxt('P_rad.dat',unpack=True,usecols=[0,1,2,3],dtype=float)

fig, ax = plt.subplots()
ax.plot(x,func(x), "b", ls='--')
ax.plot(y,func2(y),"g", ls='-.')
ax.plot(Xi, g, color='black')
ax.set_xlim(0.001,100)
ax.set_ylim(0.001,3)
ax.set_yscale('log')
ax.set_xscale('log')
ax.set_xlabel("$\chi_e$")
ax.set_ylabel("$q(\chi_e)$")
ax.minorticks_on()

ax2 = ax.twinx()
ax2.plot(Xi,Prad,'-',color='red',label='$P_Q$')
ax2.plot(Xi,Pc,'--',color='red',label='$P_C$')
ax2.set_ylim(1,1e7)
ax2.set_yscale('log')
ax2.set_ylabel("$P_\mathrm{rad} (W)$",color='red')
ax2.minorticks_on()
ax2.tick_params(axis='y',labelcolor='red')
ax2.tick_params(which='major',color='red')
ax2.tick_params(which='minor',color='red')
ax2.spines['right'].set_color('red')

eq1 = r"\begin{eqnarray*}" + \
    r"g(\chi_e\ll 1)&\approx&1-\frac{55}{16}\sqrt{3}\chi \\&+& 48\chi^2" + \
    r"\end{eqnarray*}"
#ax.text(0.35, 0.4, eq1, {'color': 'b', 'fontsize': 15}, va="top", ha="right")

eq2 = r"\begin{eqnarray*}" + \
    r"&&g(\chi_e\gg 1)\approx\frac{48}{243}\Gamma(\frac{2}{3})\chi^{-4/3} \\" + \
    r"&& \times \{ 1 -\frac{81}{16\Gamma(2/3)}(3\chi)^{-2/3} \}" + \
    r"\end{eqnarray*}"
#ax.text(1, 0.01, eq2, {'color': 'g', 'fontsize': 15}, va="top", ha="right")

fig = plt.gcf()
fig.set_size_inches(fig_width, fig_width/1.618)
fig.tight_layout()
plt.show()
# reuse the username helper defined above instead of a hardcoded home directory
fig.savefig(rf"/Users/{user}/GitHub/LEC/docs/source/figures/qchi.png",format='png',dpi=600,transparent=True, bbox_inches='tight')
```
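As a quick numerical sanity check on the two asymptotic branches plotted above, the limits can be verified directly (a sketch restating `func` and `func2` so the cell is self-contained; the numeric prefactors are copied from the code above):

```python
import numpy as np

# chi << 1 branch: q ≈ 1 - (55/16)*sqrt(3)*chi + 48*chi^2
def q_small(chi):
    return 1 - (55 / 16) * np.sqrt(3) * chi + 48 * chi**2

# chi >> 1 branch, with the same numeric prefactors used in func2 above
def q_large(chi):
    return (48 / 243) * 1.354 * 2.08 * chi**(-4 / 3) * (
        1 - (81 / 16) * 0.73855 * (3 * chi)**(-2 / 3))

# The small-chi expansion tends to 1 as chi -> 0, and the
# large-chi branch decays monotonically towards 0 for large chi.
print(q_small(0.001))   # close to 1
print(q_large(100.0))   # small positive number
```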
# End to end DLA prediction, localization, and column density measurement

![title](html/sightline.png)

This demo utilizes pretrained convolutional neural network classification and regression models. Predictions are executed and visualized at each step in the pipeline for DLA detection, localization, and column density measurement. This demo has no dependencies other than the Python files included with it: it will download the necessary FITS files (or you can specify a directory for it to load them from); you specify the plate, MJD, and fiber below.

```
# Enter the plate mjd and fiber of one or more SDSS 12 fits files, then re-run all cells
# The FITS files will be downloaded from the SDSS site and require internet access (and that the SDSS site is up)
LOCAL_FITS_FILES_LOCATION = "../BOSS_dat_all"
PLATE_MJD_FIBER = [
    [4637,55616,522],    # Good example (DR9:4091/20.494 +/- 0.069)
    [5002,55710,598],    # Good example (DR9:4990/20.885 +/- 0.104)
#     [5010,55748,888],  # No DLA example, localization off, but NO_DLA classification accurate
    [4565,55591,868],    # Multiple DLA detection, Minor peak detection issue (DR9:5029/20.088 +/- 0.062)
    [4091,55498,785],    # Misclassified but localization model finds DLA w/ minor issues (DR9:3856/20.685 +/- 0.113)
    [4568,55600,884],    # Left side issue example; bad central wavelength peaks calculations (DR9:3762/20.070 +/- 0.105)
]
```

# Results

The following 5 components are provided for each plate-mjd-fiber:

- 1) Print the central wavelength and column density measurement of each DLA detected in the sightline.
- 2) Plot of the full sightline in rest frame - 3) Plot of the DLA range 960A to 1216A in rest frame - 4) Plot of the localization confidence across the sightline (redshift matches graph 2) - 5) Plot of column density values measured at each point around the DLA (these values are averaged to arrive at the final printed value above) ``` %load_ext autoreload import sys sys.path.append('./src') %autoreload import imp, json, os, urllib, numpy as np from classification_model import predictions_ann as predictions_ann_c1 from localize_model import predictions_ann as predictions_ann_c2, predictions_to_central_wavelength from density_model import predictions_ann as predictions_ann_r1 from data_loader import normalize, read_fits_file, scan_flux_sample, \ scan_flux_about_central_wavelength, get_raw_data_for_classification, REST_RANGE from DataSet import DataSet from matplotlib import pyplot as plt %matplotlib inline plt.rcParams.update({'figure.max_open_warning': 0}) MODEL_CHECKPOINT_C1 = "models/classification_model" MODEL_CHECKPOINT_C2 = "models/localize_model" MODEL_CHECKPOINT_R1 = "models/density_model" # REST_RANGE = [920.0, 1334.0] def plot(y, x_label="Rest Frame", y_label="Flux", x=None, ylim=[-2,12], xlim=None, z_qso=None): fig, ax = plt.subplots(figsize=(15, 3.75)) if x is None: ax.plot(y, '-k') else: ax.plot(x,y,'-k') ax.set_xlabel(x_label) ax.set_ylabel(y_label) plt.ylim(ylim) plt.xlim(xlim) return fig, ax def load_data(plate,mjd,fiber): # Download the file (data1, z_qso) = read_fits_file(plate, mjd, fiber, fits_base_dir=LOCAL_FITS_FILES_LOCATION, download_if_notfound=False) # Classification 1 dataset classification1 = get_raw_data_for_classification(data1, z_qso) # Classification 2 dataset flux, offsets = scan_flux_sample(normalize(data1, z_qso), data1['loglam'], z_qso, -1, -1, plate, mjd, fiber, -1, -1, exclude_positive_samples=False, kernel=400, stride=1, pos_sample_kernel_percent=0.3) return data1, z_qso, DataSet(classification1), DataSet(flux), offsets for pmf 
in PLATE_MJD_FIBER:
    # Download fits file and create a custom DataSet object
    data1, z_qso, c1_dataset, c2_dataset, c2_offsets = load_data(pmf[0],pmf[1],pmf[2])
    lam = 10.0**data1['loglam']
    lam_rest = lam/(1.0 + z_qso)
    ix_dla_range = np.logical_and(lam_rest >= REST_RANGE[0], lam_rest <= REST_RANGE[1])
    y_plot_range = np.mean(data1['flux'][np.logical_not(np.isnan(data1['flux']))]) + 10

    # Classification model - Load model hyperparameter file & Generate predictions
    with open(MODEL_CHECKPOINT_C1+"_hyperparams.json", 'r') as fp:
        hyperparameters_c1 = json.load(fp)
    prediction, confidence = predictions_ann_c1(hyperparameters_c1, c1_dataset.fluxes, c1_dataset.labels, MODEL_CHECKPOINT_C1)

    # Print results
    print("\nClassified %d-%d-%d as %s with confidence of %d%% (where 50%% = guess, 100%% = confident)" % \
        (pmf[0],pmf[1],pmf[2], "HAS_DLA" if round(prediction)==1.0 else "NO_DLA", int((abs(0.5-confidence)+0.5)*100)))

    # Plot full sightline
    plot(data1['flux'], x_label="Full sightline in rest frame with invalid pixels removed", y_label='Flux',
         x=lam_rest, xlim=[REST_RANGE[0],lam_rest[-1]], ylim=[-2,y_plot_range])

    # Plot DLA range of sightline
    (fig_sight, ax_sight) = plot(data1['flux'],
        x_label="Sightline in range %dA to 1250A, region of interest for DLAs" % REST_RANGE[0],
        y_label='Flux', x=lam_rest, xlim=[REST_RANGE[0],1250], ylim=[-2,y_plot_range])

    # Localization model - Load model hyperparameter file
    with open(MODEL_CHECKPOINT_C2+"_hyperparams.json", 'r') as fp:
        hyperparameters_c2 = json.load(fp)
    loc_pred, loc_conf = predictions_ann_c2(hyperparameters_c2, c2_dataset.fluxes, c2_dataset.labels, c2_offsets, MODEL_CHECKPOINT_C2)
    (fig, ax) = plot(loc_conf, ylim=[0,1], x=lam_rest[ix_dla_range], xlim=[REST_RANGE[0],1250], z_qso=z_qso,
                     x_label="DLA Localization confidence & localization prediction(s)")

    # Identify peaks from classification-2 results
    (peaks, peaks_uncentered, smoothed_sample, ixs_left, ixs_right) = predictions_to_central_wavelength(loc_conf, 1, 50, 300)[0]
    print(np.shape(peaks), np.shape(peaks_uncentered), np.shape(loc_conf), np.shape(smoothed_sample))
    ax.plot(lam_rest[ix_dla_range], smoothed_sample, color='blue', alpha=0.9)

    for peak, peak_uncentered, ix_left, ix_right in zip(peaks, peaks_uncentered, ixs_left, ixs_right):
        peak_lam_rest = lam_rest[ix_dla_range][peak]
        if peak_lam_rest > 1250 or peak_lam_rest < REST_RANGE[0]:
            print(" > Excluded peak: %0.0fA" % peak_lam_rest)
            continue

        # Plot peak '+' markers
        ax.plot(lam_rest[ix_dla_range][peak_uncentered], loc_conf[peak_uncentered], '+', mew=3, ms=7, color='red', alpha=1)
        ax.plot(lam_rest[ix_dla_range][peak], smoothed_sample[peak], '+', mew=7, ms=15, color='blue', alpha=0.9)
        ax.plot(lam_rest[ix_dla_range][ix_left], loc_conf[peak_uncentered]/2, '+', mew=3, ms=7, color='orange', alpha=1)
        ax.plot(lam_rest[ix_dla_range][ix_right], loc_conf[peak_uncentered]/2, '+', mew=3, ms=7, color='orange', alpha=1)

        # Column density estimate
        density_data = DataSet(scan_flux_about_central_wavelength(data1['flux'], data1['loglam'], z_qso,
            peak_lam_rest*(1+z_qso), 0, 80, 0, 0, 0, 400, 0.2))
        with open(MODEL_CHECKPOINT_R1+"_hyperparams.json", 'r') as fp:
            hyperparameters_r1 = json.load(fp)
        density_pred = predictions_ann_r1(hyperparameters_r1, density_data.fluxes, density_data.labels, MODEL_CHECKPOINT_R1)
        density_pred_np = np.array(density_pred)
        mean_col_density_prediction = np.mean(density_pred_np)

        # Bar plot
        fig_b, ax_b = plt.subplots(figsize=(15, 3.75))
        ax_b.bar(np.arange(0,np.shape(density_pred_np)[1]), density_pred_np[0,:], 0.25)
        ax_b.set_xlabel("Individual Column Density estimates for peak @ %0.0fA, +/- 0.3 of mean. " % (peak_lam_rest) +
                        "Mean: %0.3f - Median: %0.3f - Stddev: %0.3f" % (np.mean(density_pred_np), np.median(density_pred_np), np.std(density_pred_np)))
        plt.ylim([mean_col_density_prediction-0.3,mean_col_density_prediction+0.3])
        ax_b.plot(np.arange(0,np.shape(density_pred_np)[1]),
                  np.ones((np.shape(density_pred_np)[1],),np.float32)*mean_col_density_prediction)

        # Sightline plot transparent marker boxes
        ax_sight.fill_between(lam_rest[ix_dla_range][peak-10:peak+10], y_plot_range, -2, color='gray', lw=0, alpha=0.1)
        ax_sight.fill_between(lam_rest[ix_dla_range][peak-30:peak+30], y_plot_range, -2, color='gray', lw=0, alpha=0.1)
        ax_sight.fill_between(lam_rest[ix_dla_range][peak-50:peak+50], y_plot_range, -2, color='gray', lw=0, alpha=0.1)
        ax_sight.fill_between(lam_rest[ix_dla_range][peak-70:peak+70], y_plot_range, -2, color='gray', lw=0, alpha=0.1)

        print(" > DLA central wavelength at: %0.0fA rest / %0.0fA spectrum w/ confidence %0.2f, has Column Density: %0.3f"
              % (peak_lam_rest, peak_lam_rest*(1+z_qso), smoothed_sample[peak], mean_col_density_prediction))

    handles, labels = ax.get_legend_handles_labels()
    ax.legend(['DLA pred','Smoothed pred','Original peak','Recentered peak','Centering points'], bbox_to_anchor=(0.25, 1.1))

    # Print URL
    print(" > http://dr12.sdss3.org/spectrumDetail?plateid=%d&mjd=%d&fiber=%d" % (pmf[0],pmf[1],pmf[2]))
```
# Introduction to Spatial Data Today we will introduce the basics of working with spatial data, including loading spatial datasets as shapefiles or CSV files, projecting data, performing geometric operations, spatially joining multiple datasets together, and simple mapping. ``` import geopandas as gpd import pandas as pd from shapely.geometry import Point ``` ## 1. Quick overview of key concepts ### What is GIS? GIS stands for geographic information system. GIS software lets you work with spatial data, that is, data associated with locations on the Earth. These locations are represented with coordinates: longitude (x), latitude (y), and often elevation (z). With GIS software you can collect, edit, query, analyze, and visualize spatial data. Examples of GIS software include ArcGIS, QGIS, PostGIS, and GeoPandas. ### Some terminology: - **geoid**: (that's *gee-oid*) the surface of the earth's gravity field, which approximates mean sea level - **spheroid** or **ellipsoid** (interchangeable terms): a model that smoothly approximates the geoid - **datum**: based on spheroid but incorporates local variations in the shape of the Earth. Used to describe a point on the Earth's surface, such as in latitude and longitude. - WGS84 (World Geodetic Survey 1984 datum) uses the WGS84 spheroid - The latitude and longitude coordinates of some point differ slightly based on the datum. GPS uses WGS84. - **coordinate reference system** (CRS) or spatial reference system (SRS): a series of parameters that [define](http://spatialreference.org/) the coordinate system and spatial extent (aka, domain) of some dataset. - **geographic coordinate system** (GCS): specifies a datum, spheroid, units of measure (such as meters), and a prime meridian - **projected coordinate system** or map projection: projects a map of the Earth's 3-D spherical surface onto a flat surface that can be measured in units like meters. Here's a [list of projections](https://en.wikipedia.org/wiki/List_of_map_projections). 
- **eastings** and **northings**: the x and y coordinates of a projected map, usually measured in meters - **false origin**: the 0,0 origin point from which eastings and northings are measured on a map, usually the lower left corner rather than the center - **PROJ.4**: a library to convert/project spatial data with consistent CRS [parameter names](https://github.com/OSGeo/proj.4/wiki/GenParms) ### Common CRS parameters (and their PROJ.4 names): - datum (datum) - ellipse (ellps) - projection (proj) - the name of the projected coordinate system, such as Albers Equal Area (aea) or Lambert Conformal Conic (lcc) - standard parallels (lat_1, lat_2) - where the projection surface touches the globe - at the standard parallels, the projection shows no distortion - central meridian and latitude of origin (lon_0, lat_0) - the origin of the projection's x and y coordinates (eastings and northings) - usually the center of the map projection - false easting and false northing (x_0, y_0) - offsets to add to all your eastings and northings - usually used to make all the coordinates on the map positive numbers by starting 0,0 at the lower left corner rather than the center of the map (see false origin, above) ### Common projection types: - *equal area* projections: maintain area at the expense of shape, distance, and direction - such as the [Albers Equal Area](https://en.wikipedia.org/wiki/Albers_projection) projection - *conformal* projections: maintain shapes at the expense of area, distance, and direction - such as the [Lambert Conformal Conic](https://en.wikipedia.org/wiki/Lambert_conformal_conic_projection) projection - *equidistant* projections: [preserve distance](https://en.wikipedia.org/wiki/Map_projection#Equidistant) from one point or along all meridians and parallels - *azimuthal* projections: maintain direction from one point to all other points - such as an [orthographic](https://en.wikipedia.org/wiki/Orthographic_projection_in_cartography) projection - others 
compromise to minimize overall distortion or aim for aesthetic value - such as the [Robinson](https://en.wikipedia.org/wiki/Robinson_projection) projection ## 2. Loading spatial data You can use a GIS like ArcGIS or QGIS to open a spatial data file (typically a shapefile, GeoJSON file, or CSV file with lat-long columns). Today we'll introduce the basic concepts of spatial data and GIS operations using [geopandas](http://geopandas.org/user.html), which spatializes pandas dataframes. It uses the [shapely](https://shapely.readthedocs.io/en/latest/manual.html) package for geometry. We'll focus on common, shared concepts and operations, rather than "how-to" in the user interface of a specific GIS program. ### 2a. Loading a shapefile Where to get census shapefiles? https://www.census.gov/cgi-bin/geo/shapefiles/index.php The term "shapefile" is a misnomer... a shapefile is actually a folder containing multiple files that contain spatial geometry, attribute data, projection information, etc: https://en.wikipedia.org/wiki/Shapefile ``` # tell geopandas to read a shapefile with its read_file() function, passing in the shapefile folder # this produces a GeoDataFrame gdf = gpd.read_file("../../data/tl_2017_06_tract/") gdf.shape # just like regular pandas, see the first 5 rows of the GeoDataFrame # this is a shapefile of polygon geometries, that is, tract boundaries gdf.head() # mapping is as easy as calling the GeoDataFrame's plot method ax = gdf.plot() # just like in regular pandas, we can filter and subset the GeoDataFrame # retain only tracts in LA, OC, Ventura counties counties = ["037", "059", "111"] gdf_tracts = gdf[gdf["COUNTYFP"].isin(counties)] gdf_tracts.shape # what is the CRS? # this derives from the shapefile's .prj file # always make sure the shapefile you load has prj info so you get a CRS attribute! gdf_tracts.crs ``` ### 2b. 
Loading a CSV file Often, you won't have a shapefile (which is explicitly spatial), but rather a CSV file which is implicitly spatial (contains lat-lng columns). If you're loading a CSV file (or other non-explicitly spatial file type) of lat-lng data: 1. first load the CSV file as a DataFrame the usual way with pandas 2. then create a new geopandas GeoDataFrame from your DataFrame 3. manually create a geometry column 4. set the CRS ``` # load rental listings data as a regular pandas dataframe df = pd.read_csv("../../data/listings-la_oc_vc.csv") df.shape # examine first five rows df.head() ``` **Always define the CRS** if you are manually creating a GeoDataFrame! Earlier, when we loaded the shapefile, geopandas loaded the CRS from the shapefile itself. But our CSV file is not explicitly spatial and it contains no CRS data, so we have to tell it what it is. In our case, the CRS is EPSG:4326, which is WGS84 lat-lng data, such as for GPS. Your data source should always tell you what CRS their coordinates are in. If they don't, ask! Don't just guess. ``` # create a new geopandas geodataframe manually from the pandas dataframe gdf_listings = gpd.GeoDataFrame(df) gdf_listings.shape # create a geometry column in our listings dataset to contain shapely geometry for geopandas to use # notice the shapely points are represented as lng, lat so that they are equivalent to x, y # notice that i'm setting the CRS geometry = gpd.points_from_xy(x=gdf_listings["longitude"], y=gdf_listings["latitude"]) gdf_listings["geometry"] = geometry gdf_listings.crs = "epsg:4326" gdf_listings.shape gdf_listings.head() # what's the CRS gdf_listings.crs ``` ## 3. Projection, part I Your datasets need to be in the same CRS if you want to work with them together. If they're not, then project one or both of them so they're in the same CRS. 
``` gdf_tracts.crs == gdf_listings.crs # project the tracts geodataframe to the CRS of the listings geodataframe gdf_tracts = gdf_tracts.to_crs(gdf_listings.crs) gdf_tracts.crs == gdf_listings.crs ``` **Be careful**: heed the difference between `gdf.crs` and `gdf.to_crs()`. The first tells you the geodataframe's current CRS. The latter projects the geodataframe to a new CRS. ## 4. Geometric operations GIS and spatial analysis use common "computational geometry" operations like intersects, within, and dissolve. - *intersects* tells you if each geometry in one dataset intersects with some other (single) geometry - *within* tells you if each geometry in one dataset is within some other (single) geometry - *dissolve* lets you aggregate data (merge their geometries together) if they share some attribute in common: this is the spatial equivalent of pandas's groupby function Many other operations exist in the world of GIS, but these are among the most common and useful. ### 4a. intersects Example: I want to find all the tracts that have at least 1 rental listing within their boundaries. So, I'm going to intersect the tracts with a single geometry object that represents all the listings. ``` # create a single, unified MultiPoint object containing all the listings' geometries # use geopandas unary_union attribute to get a single geometry object representing all the points unified_listings = gdf_listings["geometry"].unary_union type(unified_listings) unified_listings # get the tracts that spatially-intersect with anything in the listings dataset mask = gdf_tracts.intersects(unified_listings) gdf_tracts[mask].shape # how many tracts didn't intersect any rental listings? gdf_tracts[~mask].shape ``` ### 4b. dissolve Example: I want to merge all the tracts in each county to aggregate them up to the county level. This will merge all tract-level geometries into new county-level geometries. 
``` # dissolve lets you aggregate data based on shared values in some column, such as county fips codes gdf_tracts.head() # dissolve the tracts into counties # aggregate their numerical columns by summing them gdf_counties = gdf_tracts.dissolve("COUNTYFP", aggfunc="sum") gdf_counties # quick and dirty map of our 3 counties ax = gdf_counties.plot(cmap="plasma") ``` ### 4c. within Example: I want to find all the rental listings in Orange County. But my rental listings don't contain any explicit tract or county information: they only tell me lat-long. But I can use those lat-long coordinates to find which listings fall *within* the geometry (spatial boundary) of Orange County. ``` # get orange county's geometry oc_geometry = gdf_counties.loc["059", "geometry"] type(oc_geometry) oc_geometry # find all the listings within orange county mask = gdf_listings.within(oc_geometry) oc_listings = gdf_listings[mask] oc_listings.shape ``` ## 5. Projection, part II ``` # map the OC rental listings ax = oc_listings.plot() oc_listings.head() # you can easily calculate buffers # buffer creates a polygon around your geometry with some specified distance oc_listings_buffered = oc_listings.buffer(distance=0.1) # what are these units? 0.1 what? oc_listings_buffered.head() ax = oc_listings_buffered.plot(edgecolor="k", alpha=0.3) ``` But these buffers are weird because the data are not projected. They're all in lat-long degrees. Let's project it to a **projected coordinate system**. You need to look up an appropriate projection for the spatial extents of your data/map. This is a huge topic in and of itself, so for today we'll just focus on some (over-simplified) rules of thumb: 1. If you're mapping global data, choose a global projection like [Mercator](https://spatialreference.org/ref/sr-org/16/) or [Robinson](https://spatialreference.org/ref/esri/54030/) 2. 
If you're mapping national data, choose a national projection like [epsg:2163](https://spatialreference.org/ref/epsg/2163/) for the US 3. If you're mapping regional data, choose a local projection, like [UTM zone 11N](https://spatialreference.org/ref/epsg/32611/) for Southern California: ![](img/utm_zones.png) https://spatialreference.org/ is a good resource. There you can click the "proj4" link on any CRS's page to get a string you can use with geopandas. ``` # plot the tracts, then the listings on top of them ax = gdf_tracts.plot() ax = gdf_listings.plot(ax=ax, c="r", markersize=1) # define a CRS appropriate for projecting USA data usa_crs = "epsg:2163" gdf_tracts = gdf_tracts.to_crs(usa_crs) gdf_listings = gdf_listings.to_crs(usa_crs) # plot the projected tracts + listings ax = gdf_tracts.plot() ax = gdf_listings.plot(ax=ax, c="r", markersize=1) # specify a projection manually with a proj4 string # we'll map with UTM zone 11 which is good for Southern California (see link above) utm_11 = "+proj=utm +zone=11 +ellps=WGS84 +datum=WGS84 +units=m +no_defs" gdf_tracts = gdf_tracts.to_crs(utm_11) gdf_listings = gdf_listings.to_crs(utm_11) ax = gdf_tracts.plot() ax = gdf_listings.plot(ax=ax, c="r", markersize=1) # buffer listings by 5km then plot again ax = gdf_tracts.plot() ax = gdf_listings.buffer(5000).plot(ax=ax, fc="r") ``` So that's our projected data and shapefile. Notice how the shape has changed, and how the units make more sense: they are in meters now. So our buffers are a 5km radius from each point. Buffers are useful for tasks like, for example, finding all the transit stations within 1km of a census tract. ## 6. Non-spatial merge vs spatial joins ### 6a. Quick review of pandas (non-spatial) merge Joins two dataframes based on some column they have in common. 
``` # load the CA tract-level census data from previous weeks tract_indicators = pd.read_csv("../../data/census_tracts_data_ca.csv", dtype={"GEOID10": str}) tract_indicators.shape # 5 rows of the tracts census data tract_indicators.head() # 5 rows of the tracts shapefile gdf_tracts.head() # merge the 2 datasets on a shared column: tract fips code gdf_tracts_ind = pd.merge(left=gdf_tracts, right=tract_indicators, how="left", left_on="GEOID", right_on="GEOID10") gdf_tracts_ind.head() # merging a dataframe (right) into a (left) geodataframe, we got a geodataframe back and kept our CRS gdf_tracts_ind.crs ``` ### 6b. Geopandas spatial join Joins two geodataframes based on some shared spatial location. Let's say I want to know which county each rental listing is in: I need to *join* the listings to the county that each listing is within, using the gdf_counties GeoDataFrame we created earlier by dissolving the tracts. ``` # remember (again): always double-check CRS before any spatial operations gdf_listings.crs == gdf_counties.crs # they don't match, so project the counties to the CRS of the listings gdf_counties = gdf_counties.to_crs(gdf_listings.crs) gdf_listings.crs == gdf_counties.crs # spatial join listings to counties # this is a left-join to ensure we retain all the listings in our resulting merged dataset gdf_listings_counties = gpd.sjoin(gdf_listings, gdf_counties, how="left", op="within") gdf_listings_counties.shape # what did it do? inspect first 5 rows of listings gdf_listings_counties.head() # give "index_right" a more sensible column name gdf_listings_counties = gdf_listings_counties.rename(columns={"index_right": "county"}) gdf_listings_counties.head() groups = gdf_listings_counties.groupby("county") # which counties have the highest median asking rents? groups["rent"].median().sort_values(ascending=False) # which counties have the most bedrooms per unit in the listings? 
groups["bedrooms"].mean().sort_values(ascending=False)

# which counties have the most listings?
groups["geometry"].count().sort_values(ascending=False)

# create a subset of only those listings in orange county
# equivalent to gdf_listings.within(oc_geometry) from earlier
oc_listings = gdf_listings_counties[gdf_listings_counties["county"] == "059"]
oc_listings.shape
```

## 7. Mapping

```
# get the merged tracts + indicators that are in OC
oc_tracts_ind = gdf_tracts_ind[gdf_tracts_ind["COUNTYFP"] == "059"]
oc_tracts_ind.head()

# drop the tract that has 0 land area... it's just in the ocean
# census bureau uses these to represent territory boundary without making on-land tracts extend out into ocean
oc_tracts_ind = oc_tracts_ind[oc_tracts_ind["ALAND"] > 0]

# map using GeoDataFrame plot method with some style configurations
ax = oc_tracts_ind.plot(column="med_household_income", cmap="plasma", edgecolor="k", lw=0.2, figsize=(9, 6), legend=True)

# turn the "axis" off and save to disk
ax.axis("off")
ax.get_figure().savefig("oc-income.png", dpi=300, bbox_inches="tight")

# map a different column
ax = oc_tracts_ind.plot(column="pct_english_only", cmap="plasma", edgecolor="k", lw=0.2, figsize=(9, 6), legend=True)
ax.axis("off")
ax.get_figure().savefig("oc-language.png", dpi=300, bbox_inches="tight")

# map tracts as a basemap with listings as points on top
ax = oc_tracts_ind.plot(facecolor="#aaaaaa", edgecolor="w", lw=0.5, figsize=(12, 9), legend=False)

# now plot listings, colored by asking rent
ax = oc_listings.dropna().plot(ax=ax, markersize=10, legend=True, cmap="plasma", column="rent", scheme="Quantiles")
ax.axis("off")
ax.set_title("Apartment Listings in Orange County by Asking Rent (USD)")
ax.get_figure().savefig("oc-listings.png", dpi=300, bbox_inches="tight")

# now it's your turn
# make the plot above more effective and accessible using the visualization best practices you have read about
```
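Under the hood, a test like `within` for a point reduces to a point-in-polygon check. Shapely implements robust, optimized versions, but the classic ray-casting algorithm can be sketched in pure Python for intuition (an illustration only, not what geopandas actually runs):

```python
def point_in_polygon(x, y, polygon):
    """Ray casting: count crossings of a ray from (x, y) going right.

    polygon is a list of (x, y) vertex tuples; an odd crossing count
    means the point is inside.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_in_polygon(0.5, 0.5, square))  # True
print(point_in_polygon(1.5, 0.5, square))  # False
```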
``` from __future__ import absolute_import, division, print_function from matplotlib.font_manager import _rebuild; _rebuild() #Helper libraries import numpy as np import matplotlib.pyplot as plt import pandas as pd import scipy.io as spio import keras from keras.models import Sequential from keras.layers import Dense from keras.wrappers.scikit_learn import KerasClassifier from keras.utils import np_utils from sklearn.model_selection import KFold, cross_val_score, GridSearchCV, train_test_split, StratifiedKFold from sklearn.pipeline import Pipeline from sklearn.neural_network import MLPClassifier from sklearn.metrics import precision_recall_fscore_support, roc_curve, auc from sklearn.preprocessing import LabelEncoder, LabelBinarizer, StandardScaler from yellowbrick.classifier import ROCAUC, PrecisionRecallCurve from yellowbrick.model_selection import LearningCurve import csv import random import re import string import sys # Initialize random number generator for reproducibility. seed = 7 np.random.seed(seed) # Load in dataset. data = spio.loadmat("features_10s_2019-01-30.mat"); features = data['features']; labels = data['labels_features']; animal_id_features = data['animal_id_features'].transpose(); animal_names = data['animal_names'].transpose(); feat_names = data['feat_names']; col_names = pd.DataFrame(feat_names) # Label each feature column with its description. def find_between(s): start = '\''; end = '\''; return((s.split(start))[1].split(end)[0]) cols = []; c_names = col_names.values.ravel(); for x in range(len(c_names)): name = str (c_names[x]); cols.append(find_between(name)) # Create a DataFrame of features with columns named & rows labeled. feat_data = pd.DataFrame(data=features,columns=cols) feat_data.insert(0,'AnimalId',animal_id_features) feat_data.insert(0,'Labels',labels.transpose()) # Select the features corresponding to one animal. 
def get_single_animal_features(df, index): return df.loc[df['AnimalId'] == index] # Delete the rows corresponding to the animal left out. def get_loo_features(df, index): return df[df.AnimalId != index] # Return the full pooled feature set. def get_pooled_features(df): return df # Initialize the model with optimal gridsearched parameters. def get_mlp(): return MLPClassifier( max_iter=1500, verbose=51, tol=0.000001, learning_rate='constant', alpha=0.001, solver='adam', batch_size=512, activation='tanh') # Plot ROC curve, AUC. def plot_rocauc(animal_id, X_train, y_train, X_test, y_test, mlp): classes=["Normal","Pre-Ictal","Seizure"] v = ROCAUC(mlp, classes=classes) v.fit(X_train, y_train) # Fit the training data to the visualizer v.score(X_test, y_test) # Evaluate the model on the test data ROC_title = "ROCAUC_{}.png".format(animal_id) v.poof(outpath=ROC_title) # Save plot w unique title # Plot the precision-recall curve. def plot_precision_recall(animal_id, X_train, y_train, X_test, y_test, mlp): v = PrecisionRecallCurve(mlp) v.fit(X_train, y_train) # Fit the training data to the visualizer v.score(X_test, y_test) # Evaluate the model on the test data PR_title = "PR_{}.png".format(animal_id) v.poof(outpath=PR_title) # Save plot w unique title # Produce MLPClassification for each animal in the dataset. for i in range(1,13): index = i animal_id = animal_names[index-1][0] sys.stdout = open("log_{}.txt".format(animal_id), "w") print("Animal chosen: %s" % animal_names[index - 1][0]) # Get features. single_animal_features = get_single_animal_features(feat_data, index); # Get only labels corresponding to selected animal's features. y = single_animal_features['Labels'] X = single_animal_features.drop(columns={'Labels','AnimalId'}) # Split data into training (80%) and testing (20%). X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2); # Standardize the data since the MLP is sensitive to feature scaling. scaler = StandardScaler() # Fit only to the training data.
scaler.fit(X_train) # Apply the transformations to the data. X_train = scaler.transform(X_train) X_test = scaler.transform(X_test) # Construct multi-layer perceptron classifier. mlp_classifier = get_mlp() # Produce graphs. plot_rocauc(animal_id, X_train, y_train, X_test, y_test, mlp_classifier); plot_precision_recall(animal_id, X_train, y_train, X_test, y_test, mlp_classifier); # Run model with 4-fold cross validation. Report mean accuracy. scores = cross_val_score(mlp_classifier, X_train, y_train, cv=4) print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2)) sys.stdout.close() ```
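The pattern above — fitting `StandardScaler` on the training split only, then applying the same transform to the test split — is what keeps test statistics from leaking into training. A toy numerical sketch of why the order matters (made-up numbers, not the seizure features):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy "features": train and test drawn from different ranges so the
# effect is visible (hypothetical numbers, for illustration only)
X_train = np.array([[0.0], [2.0], [4.0]])
X_test = np.array([[10.0]])

scaler = StandardScaler()
scaler.fit(X_train)             # statistics come from the training data only
X_test_scaled = scaler.transform(X_test)

# The test point is standardized with the *training* mean (2.0) and
# std (sqrt(8/3), since sklearn uses the population std), so it maps
# far outside [-1, 1] rather than being rescaled to look "typical"
print(X_test_scaled)  # ≈ [[4.9]]
```

Had the scaler been fit on train and test together, the test point would have pulled the mean and std toward itself, quietly inflating the reported test performance.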
## Image网 Submission `128x128` This contains a submission for the Image网 leaderboard in the `128x128` category. In this notebook we: 1. Train on 1 pretext task: - Train a network to do image inpatining on Image网's `/train`, `/unsup` and `/val` images. 2. Train on 4 downstream tasks: - We load the pretext weights and train for `5` epochs. - We load the pretext weights and train for `20` epochs. - We load the pretext weights and train for `80` epochs. - We load the pretext weights and train for `200` epochs. Our leaderboard submissions are the accuracies we get on each of the downstream tasks. ``` import json import torch import numpy as np from functools import partial from fastai2.layers import Mish, MaxPool, LabelSmoothingCrossEntropy from fastai2.learner import Learner from fastai2.metrics import accuracy, top_k_accuracy from fastai2.basics import DataBlock, RandomSplitter, GrandparentSplitter, CategoryBlock from fastai2.optimizer import ranger, Adam, SGD, RMSProp from fastai2.vision.all import * from fastai2.vision.core import * from fastai2.vision.augment import * from fastai2.vision.learner import unet_learner, unet_config from fastai2.vision.models.xresnet import xresnet50, xresnet34 from fastai2.data.transforms import Normalize, parent_label from fastai2.data.external import download_url, URLs, untar_data from fastcore.utils import num_cpus from torch.nn import MSELoss from torchvision.models import resnet34 ``` ## Pretext Task: Image Inpainting ``` # We create this dummy class in order to create a transform that ONLY operates on images of this type # We will use it to create all input images class PILImageInput(PILImage): pass class RandomCutout(RandTransform): "Picks a random scaled crop of an image and resize it to `size`" split_idx = None def __init__(self, min_n_holes=5, max_n_holes=10, min_length=5, max_length=50, **kwargs): super().__init__(**kwargs) self.min_n_holes=min_n_holes self.max_n_holes=max_n_holes self.min_length=min_length 
self.max_length=max_length def encodes(self, x:PILImageInput): """ Note that we're accepting our dummy PILImageInput class. fastai2 will only pass images of this type to our encoder. This means that our transform will only be applied to input images and won't be run against output images. """ n_holes = np.random.randint(self.min_n_holes, self.max_n_holes) pixels = np.array(x) # Convert to mutable numpy array. FeelsBadMan h,w = pixels.shape[:2] for n in range(n_holes): h_length = np.random.randint(self.min_length, self.max_length) w_length = np.random.randint(self.min_length, self.max_length) h_y = np.random.randint(0, h) h_x = np.random.randint(0, w) y1 = int(np.clip(h_y - h_length / 2, 0, h)) y2 = int(np.clip(h_y + h_length / 2, 0, h)) x1 = int(np.clip(h_x - w_length / 2, 0, w)) x2 = int(np.clip(h_x + w_length / 2, 0, w)) pixels[y1:y2, x1:x2, :] = 0 return Image.fromarray(pixels, mode='RGB') # Default parameters gpu=None lr=1e-2 size=128 sqrmom=0.99 mom=0.9 eps=1e-6 epochs=15 bs=64 mixup=0. opt='ranger' arch='xresnet50' sh=0. sa=0 sym=0 beta=0.
act_fn='Mish' fp16=0 pool='AvgPool' dump=0 runs=1 meta='' # Chosen parameters lr=8e-3 sqrmom=0.99 mom=0.95 eps=1e-6 bs=64 opt='ranger' sa=1 fp16=1 #NOTE: My GPU cannot run fp16 :'( arch='xresnet50' pool='MaxPool' gpu=0 # NOTE: Normally loaded from their corresponding string m = xresnet34 act_fn = Mish pool = MaxPool def get_dbunch(size, bs, sh=0., workers=None): if size<=224: path = URLs.IMAGEWANG_160 else: path = URLs.IMAGEWANG source = untar_data(path) if workers is None: workers = min(8, num_cpus()) #CHANGE: Input is ImageBlock(cls=PILImageInput) #CHANGE: Output is ImageBlock #CHANGE: Splitter is RandomSplitter (instead of on /val folder) item_tfms=[RandomResizedCrop(size, min_scale=0.35), FlipItem(0.5), RandomCutout] batch_tfms=RandomErasing(p=0.9, max_count=3, sh=sh) if sh else None dblock = DataBlock(blocks=(ImageBlock(cls=PILImageInput), ImageBlock), splitter=RandomSplitter(0.1), get_items=get_image_files, get_y=lambda o: o, item_tfms=item_tfms, batch_tfms=batch_tfms) return dblock.dataloaders(source, path=source, bs=bs, num_workers=workers) # Use the Ranger optimizer opt_func = partial(ranger, mom=mom, sqr_mom=sqrmom, eps=eps, beta=beta) size = 128 bs = 64 dbunch = get_dbunch(size, bs, sh=sh) #CHANGE: We're predicting pixel values, so we're just going to predict an output for each RGB channel dbunch.vocab = ['R', 'G', 'B'] len(dbunch.train.dataset), len(dbunch.valid.dataset) dbunch.show_batch() learn = unet_learner(dbunch, partial(m, sa=sa), pretrained=False, opt_func=opt_func, metrics=[], loss_func=MSELoss()).to_fp16() cbs = MixUp(mixup) if mixup else [] learn.fit_flat_cos(15, lr, wd=1e-2, cbs=cbs) # I'm not using fastai2's .export() because I only want to save # the model's parameters.
torch.save(learn.model[0].state_dict(), 'imagewang_inpainting_15_epochs_nopretrain.pth') ``` ## Downstream Task: Image Classification ``` def get_dbunch(size, bs, sh=0., workers=None): if size<=224: path = URLs.IMAGEWANG_160 else: path = URLs.IMAGEWANG source = untar_data(path) if workers is None: workers = min(8, num_cpus()) item_tfms=[RandomResizedCrop(size, min_scale=0.35), FlipItem(0.5)] batch_tfms=RandomErasing(p=0.9, max_count=3, sh=sh) if sh else None dblock = DataBlock(blocks=(ImageBlock, CategoryBlock), splitter=GrandparentSplitter(valid_name='val'), get_items=get_image_files, get_y=parent_label, item_tfms=item_tfms, batch_tfms=batch_tfms) return dblock.dataloaders(source, path=source, bs=bs, num_workers=workers, )#item_tfms=item_tfms, batch_tfms=batch_tfms) dbunch = get_dbunch(size, bs, sh=sh) ``` ### 5 Epochs ``` epochs = 5 runs = 5 m_part = partial(m, c_out=20, act_cls=torch.nn.ReLU, sa=sa, sym=sym, pool=pool) for run in range(runs): print(f'Run: {run}') ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20)) learn = cnn_learner(dbunch, m_part, opt_func=opt_func, pretrained=False, metrics=[accuracy,top_k_accuracy], loss_func=CrossEntropyLossFlat(), config={'custom_head':ch}) if dump: print(learn.model); exit() if fp16: learn = learn.to_fp16() cbs = MixUp(mixup) if mixup else [] # # Load weights generated from training on our pretext task state_dict = torch.load('imagewang_inpainting_15_epochs_nopretrain.pth') learn.model[0].load_state_dict(state_dict) learn.unfreeze() learn.fit_flat_cos(epochs, lr, wd=1e-2, cbs=cbs) ``` ### 20 Epochs ``` epochs = 20 runs = 3 for run in range(runs): print(f'Run: {run}') ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20)) learn = cnn_learner(dbunch, m_part, opt_func=opt_func, pretrained=False, metrics=[accuracy,top_k_accuracy], loss_func=CrossEntropyLossFlat(), config={'custom_head':ch}) if dump: print(learn.model); exit() if fp16: learn = learn.to_fp16() cbs = MixUp(mixup) if 
mixup else [] # # Load weights generated from training on our pretext task state_dict = torch.load('imagewang_inpainting_15_epochs_nopretrain.pth') learn.model[0].load_state_dict(state_dict) learn.unfreeze() learn.fit_flat_cos(epochs, lr, wd=1e-2, cbs=cbs) ``` ## 80 epochs ``` epochs = 80 runs = 1 for run in range(runs): print(f'Run: {run}') ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20)) learn = cnn_learner(dbunch, m_part, opt_func=opt_func, pretrained=False, metrics=[accuracy,top_k_accuracy], loss_func=CrossEntropyLossFlat(), config={'custom_head':ch}) if dump: print(learn.model); exit() if fp16: learn = learn.to_fp16() cbs = MixUp(mixup) if mixup else [] # # Load weights generated from training on our pretext task state_dict = torch.load('imagewang_inpainting_15_epochs_nopretrain.pth') learn.model[0].load_state_dict(state_dict) learn.unfreeze() learn.fit_flat_cos(80, lr, wd=1e-2, cbs=cbs) for run in range(runs): print(f'Run: {run}') ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20)) learn = cnn_learner(dbunch, m_part, opt_func=opt_func, pretrained=False, metrics=[accuracy,top_k_accuracy], loss_func=CrossEntropyLossFlat(), config={'custom_head':ch}) if dump: print(learn.model); exit() if fp16: learn = learn.to_fp16() cbs = MixUp(mixup) if mixup else [] # # Load weights generated from training on our pretext task state_dict = torch.load('imagewang_inpainting_15_epochs_nopretrain.pth') learn.model[0].load_state_dict(state_dict) learn.freeze() learn.fit_flat_cos(epochs, lr, wd=1e-2, cbs=cbs) ``` Accuracy: **62.18%** ### 200 epochs ``` epochs = 200 runs = 1 for run in range(runs): print(f'Run: {run}') ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20)) learn = cnn_learner(dbunch, m_part, opt_func=opt_func, pretrained=False, metrics=[accuracy,top_k_accuracy], loss_func=CrossEntropyLossFlat(), config={'custom_head':ch}) if dump: print(learn.model); exit() if fp16: learn = learn.to_fp16() cbs = 
MixUp(mixup) if mixup else [] # # Load weights generated from training on our pretext task state_dict = torch.load('imagewang_inpainting_15_epochs_nopretrain.pth') learn.model[0].load_state_dict(state_dict) learn.unfreeze() learn.fit_flat_cos(epochs, lr, wd=1e-2, cbs=cbs) for run in range(runs): print(f'Run: {run}') ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20)) learn = cnn_learner(dbunch, m_part, opt_func=opt_func, pretrained=False, metrics=[accuracy,top_k_accuracy], loss_func=CrossEntropyLossFlat(), config={'custom_head':ch}) if dump: print(learn.model); exit() if fp16: learn = learn.to_fp16() cbs = MixUp(mixup) if mixup else [] # # Load weights generated from training on our pretext task state_dict = torch.load('imagewang_inpainting_15_epochs_nopretrain.pth') learn.model[0].load_state_dict(state_dict) learn.freeze() learn.fit_flat_cos(epochs, lr, wd=1e-2, cbs=cbs) ``` Accuracy: **62.03%**
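The `RandomCutout` transform that drives the inpainting pretext task boils down to clip-and-zero arithmetic on a pixel array. A self-contained numpy sketch of that core step, stripped of the fastai2 plumbing (the function name and toy image are mine, for illustration):

```python
import numpy as np

def cutout(pixels, cy, cx, h_len, w_len):
    """Zero an (h_len x w_len) patch centered at (cy, cx), clipped to the image."""
    h, w = pixels.shape[:2]
    y1 = int(np.clip(cy - h_len / 2, 0, h))
    y2 = int(np.clip(cy + h_len / 2, 0, h))
    x1 = int(np.clip(cx - w_len / 2, 0, w))
    x2 = int(np.clip(cx + w_len / 2, 0, w))
    pixels[y1:y2, x1:x2, :] = 0  # the "hole" the network must inpaint
    return pixels

# All-white 8x8 RGB image; cut a 4x4 hole in the middle
img = np.full((8, 8, 3), 255, dtype=np.uint8)
out = cutout(img, cy=4, cx=4, h_len=4, w_len=4)
# rows/cols 2..5 are now zero; the border pixels are untouched
```

The clipping is what lets holes near the border land partially off-image without an index error, which is why the transform can pick hole centers uniformly over the whole image.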
``` import numpy import pandas import math stops = pandas.read_csv('stop_withBG1.csv') stops.head() #stop_id is unique in stops file BG_data = stops.groupby('BG_label').size().to_frame('number of stops').reset_index() BG_data.head() stop_times = pandas.read_csv('stop_times.csv') stop_times.head() routes = pandas.read_csv('routes.txt') routes.head() trips = pandas.read_csv('trips.txt') trips.head() #how many times a stop_id appears in the stop_times file is the number of trip ids associated with that stop result = stop_times.groupby('stop_id').size().to_frame('number of trips').reset_index() result.head() #get the number of trips for each BG #join stop_id result with stops file (containing the BG labels) total = result.set_index('stop_id').join(stops.set_index('stop_id')) total.head() num_trips = total.groupby('BG_label').sum()['number of trips'] num_trips.head() fares = pandas.read_csv('fare_attributes.txt') fares.head() fares_rules = pandas.read_csv('fare_rules.txt') fares_rules.head() BG_data.to_csv('BG_num_stops.csv') num_trips.to_csv('BG_num_trips.csv') ################################################################# #travel time to LR station import googlemaps from datetime import datetime gmaps = googlemaps.Client(key='YOUR_API_KEY') #use your own key; never commit a real API key to a notebook ############################################################################### #input: long, lats as strings #output: journey time in seconds def get_travel_time(long1,lat1,long2,lat2): address1 = str(lat1) + ', ' + str(long1) address2 = str(lat2) + ', ' + str(long2) #print(address1, address2) now = datetime.now() directions_result = gmaps.distance_matrix(address1, address2, mode="transit", #walking, transit departure_time=now) #print(directions_result) time_s = directions_result['rows'][0]['elements'][0]['duration']['value'] return time_s def get_fastest_travel_time(long1,lat1,inputs): time = 100000000000 #query the travel time to every station and keep the minimum for index, row in inputs.iterrows(): long2 =
float(row['long']) lat2 = float(row['lat']) time_test = float(get_travel_time(long1,lat1,long2,lat2)) if time_test < time: time = time_test #convert the shortest travel time from seconds to minutes travel_time_min = float(time)/60 return travel_time_min #get the inputs - in this case the light rail stations inputs = pandas.read_csv('LightRail.csv')[['long','lat']] BG_centroids = pandas.read_csv('BG_centroids.csv') BG_centroids['time'] = 0 for index, row in BG_centroids.iloc[393:].iterrows(): long1 = str(row['long']) lat1 = str(row['lat']) #print(long1,lat1) travel_time = get_fastest_travel_time(long1,lat1,inputs) BG_centroids.at[index, 'time'] = travel_time BG_centroids.head() BG_centroids.to_csv('BG_trans_LR.csv') ##do we have street car or other type info available? #route_type = routes.groupby('route_type').size()['types'] route_type = routes.groupby('route_type').size().to_frame('types').reset_index() route_type.head() #0 = street car / light rail... does not differentiate these #3 = bus #4 = ferry ############################################################# #sidewalks sidewalks = pandas.read_csv('SDOT_Sidewalks.csv') sidewalks.head() sidewalks = sidewalks.assign(BG_geoid = 'Nan') sidewalks = sidewalks.assign(BG_label = 'Nan') from shapely.geometry import shape, Point import json with open('BG.geojson') as f: js = json.load(f) #for every sidewalk data point (row in sidewalks), find which BG it is in (according to shape) #saved to file -- do not rerun !
count = 0 for ind, row in sidewalks.iterrows(): if row.isnull()['Shape']: continue parse = row['Shape'].split(', ') lat = parse[0][1:] long = parse[1][:-1] #print(lat,long) point = Point(float(long), float(lat)) for feature in js['features']: #print(feature) polygon = shape(feature['geometry']) #print(polygon.contains(point)) if polygon.contains(point): count = count + 1 found_polygon = feature['properties'] #print(polygon['geoid'], polygon['name']) sidewalks.loc[ind, 'BG_geoid'] = found_polygon['GEO_ID_GRP'] sidewalks.loc[ind, 'BG_label'] = found_polygon['BLKGRP_LBL'] #print('Found containing polygon:', feature['properties']) break ind sidewalks.head() sidewalks.isnull().sum() sidewalks.to_csv('sidewalks_withBG.csv') total_len = sidewalks.groupby('BG_geoid').sum()['Shape_Length'] total_len.head() total_len.to_csv('BG_totalSW.csv') result = sidewalks.groupby(['BG_geoid','SRTS_SIDEWALK_RANK']).sum()['Shape_Length'] result.head() #BG sidewalk length where type/rank = # result1 = sidewalks.loc[sidewalks['SRTS_SIDEWALK_RANK'] == 4.0] result11 = result1.groupby(['BG_geoid']).sum()['Shape_Length'] result11.head() result11.to_csv('BG_SW4.csv') ```
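One caveat on `get_fastest_travel_time` above: it issues one Distance Matrix request per station per centroid, so N centroids times M stations billable API calls. A cheap improvement is to prefilter candidates by straight-line (haversine) distance and only query the API for the nearest few stations. A pure-Python sketch of that haversine step (function name and sample coordinates are mine; the two test points are only roughly Seattle light-rail station locations):

```python
import math

def haversine_km(long1, lat1, long2, lat2):
    """Great-circle distance in km between two (long, lat) points."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(long2 - long1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Roughly Westlake to Capitol Hill (approximate coordinates)
d = haversine_km(-122.337, 47.611, -122.320, 47.619)
```

Sorting stations by `haversine_km` and querying the API for only the closest two or three would cut the request count by an order of magnitude while almost never missing the true fastest transit connection.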
# Initialization Welcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning. If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results. A well chosen initialization can: - Speed up the convergence of gradient descent - Increase the odds of gradient descent converging to a lower training (and generalization) error To get started, run the following cell to load the packages and the planar dataset you will try to classify. ``` import numpy as np import matplotlib.pyplot as plt import sklearn import sklearn.datasets from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec %matplotlib inline plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # load image dataset: blue/red dots in circles train_X, train_Y, test_X, test_Y = load_dataset() ``` You would like a classifier to separate the blue dots from the red dots. ## 1 - Neural Network model You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with: - *Zeros initialization* -- setting `initialization = "zeros"` in the input argument. - *Random initialization* -- setting `initialization = "random"` in the input argument. This initializes the weights to large random values. - *He initialization* -- setting `initialization = "he"` in the input argument. 
This initializes the weights to random values scaled according to a paper by He et al., 2015. **Instructions**: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this `model()` calls. ``` def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"): """ Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID. Arguments: X -- input data, of shape (2, number of examples) Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples) learning_rate -- learning rate for gradient descent num_iterations -- number of iterations to run gradient descent print_cost -- if True, print the cost every 1000 iterations initialization -- flag to choose which initialization to use ("zeros","random" or "he") Returns: parameters -- parameters learnt by the model """ grads = {} costs = [] # to keep track of the loss m = X.shape[1] # number of examples layers_dims = [X.shape[0], 10, 5, 1] # Initialize parameters dictionary. if initialization == "zeros": parameters = initialize_parameters_zeros(layers_dims) elif initialization == "random": parameters = initialize_parameters_random(layers_dims) elif initialization == "he": parameters = initialize_parameters_he(layers_dims) # Loop (gradient descent) for i in range(0, num_iterations): # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID. a3, cache = forward_propagation(X, parameters) # Loss cost = compute_loss(a3, Y) # Backward propagation. grads = backward_propagation(X, Y, cache) # Update parameters. 
parameters = update_parameters(parameters, grads, learning_rate) # Print the loss every 1000 iterations if print_cost and i % 1000 == 0: print("Cost after iteration {}: {}".format(i, cost)) costs.append(cost) # plot the loss plt.plot(costs) plt.ylabel('cost') plt.xlabel('iterations (per thousands)') plt.title("Learning rate =" + str(learning_rate)) plt.show() return parameters ``` ## 2 - Zero initialization There are two types of parameters to initialize in a neural network: - the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$ - the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$ **Exercise**: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes. ``` # GRADED FUNCTION: initialize_parameters_zeros def initialize_parameters_zeros(layers_dims): """ Arguments: layer_dims -- python array (list) containing the size of each layer. Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": W1 -- weight matrix of shape (layers_dims[1], layers_dims[0]) b1 -- bias vector of shape (layers_dims[1], 1) ... WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1]) bL -- bias vector of shape (layers_dims[L], 1) """ parameters = {} L = len(layers_dims) # number of layers in the network for l in range(1, L): ### START CODE HERE ### (≈ 2 lines of code) parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l - 1])) parameters['b' + str(l)] = np.zeros((layers_dims[l], 1)) ### END CODE HERE ### return parameters parameters = initialize_parameters_zeros([3,2,1]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ``` **Expected Output**: <table> <tr> <td> **W1** </td> <td> [[ 0. 0. 0.] [ 0. 0.
0.]] </td> </tr> <tr> <td> **b1** </td> <td> [[ 0.] [ 0.]] </td> </tr> <tr> <td> **W2** </td> <td> [[ 0. 0.]] </td> </tr> <tr> <td> **b2** </td> <td> [[ 0.]] </td> </tr> </table> Run the following code to train your model on 15,000 iterations using zeros initialization. ``` parameters = model(train_X, train_Y, initialization = "zeros") print ("On the train set:") predictions_train = predict(train_X, train_Y, parameters) print ("On the test set:") predictions_test = predict(test_X, test_Y, parameters) ``` The performance is really bad: the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary: ``` print ("predictions_train = " + str(predictions_train)) print ("predictions_test = " + str(predictions_test)) plt.title("Model with Zeros initialization") axes = plt.gca() axes.set_xlim([-1.5,1.5]) axes.set_ylim([-1.5,1.5]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) ``` The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression. <font color='blue'> **What you should remember**: - The weights $W^{[l]}$ should be initialized randomly to break symmetry. - It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly. ## 3 - Random initialization To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values.
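Before moving on, the symmetry failure described above can be checked directly in a few lines of numpy. This toy two-layer network uses made-up data and sigmoid hidden units (with ReLU hidden units and zero initialization the weight matrices receive exactly zero gradient and never move at all, an even starker failure); either way, after many gradient-descent steps every hidden unit still has identical weights:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Toy data: 2 features, 4 examples, made-up labels
X = np.array([[1.0, -1.0, 0.5, 2.0],
              [0.3, 0.7, -0.2, 1.0]])
Y = np.array([[1.0, 0.0, 1.0, 1.0]])
m = X.shape[1]

# Zero initialization: 3 hidden units, 1 output unit
W1, b1 = np.zeros((3, 2)), np.zeros((3, 1))
W2, b2 = np.zeros((1, 3)), np.zeros((1, 1))

for _ in range(100):                      # plain gradient descent
    A1 = sigmoid(W1 @ X + b1)
    A2 = sigmoid(W2 @ A1 + b2)
    dZ2 = A2 - Y                          # sigmoid + cross-entropy gradient
    dZ1 = (W2.T @ dZ2) * A1 * (1 - A1)
    W2 -= 0.5 * (dZ2 @ A1.T) / m
    b2 -= 0.5 * dZ2.sum(axis=1, keepdims=True) / m
    W1 -= 0.5 * (dZ1 @ X.T) / m
    b1 -= 0.5 * dZ1.sum(axis=1, keepdims=True) / m

# Every hidden unit has learned exactly the same weights: symmetry never broke
print(W1)
```

The weights do move (they are not zero anymore), but all three rows of `W1` remain identical forever, so the three hidden units compute one and the same function.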
**Exercise**: Implement the following function to initialize your weights to large random values (scaled by \*10) and your biases to zeros. Use `np.random.randn(..,..) * 10` for weights and `np.zeros((.., ..))` for biases. We are using a fixed `np.random.seed(..)` to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters. ``` # GRADED FUNCTION: initialize_parameters_random def initialize_parameters_random(layers_dims): """ Arguments: layer_dims -- python array (list) containing the size of each layer. Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": W1 -- weight matrix of shape (layers_dims[1], layers_dims[0]) b1 -- bias vector of shape (layers_dims[1], 1) ... WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1]) bL -- bias vector of shape (layers_dims[L], 1) """ np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours parameters = {} L = len(layers_dims) # integer representing the number of layers for l in range(1, L): ### START CODE HERE ### (≈ 2 lines of code) parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * 10 parameters['b' + str(l)] = np.zeros((layers_dims[l], 1)) ### END CODE HERE ### return parameters parameters = initialize_parameters_random([3, 2, 1]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ``` **Expected Output**: <table> <tr> <td> **W1** </td> <td> [[ 17.88628473 4.36509851 0.96497468] [-18.63492703 -2.77388203 -3.54758979]] </td> </tr> <tr> <td> **b1** </td> <td> [[ 0.] [ 0.]] </td> </tr> <tr> <td> **W2** </td> <td> [[-0.82741481 -6.27000677]] </td> </tr> <tr> <td> **b2** </td> <td> [[ 0.]] </td> </tr> </table> Run the following code to train your model on 15,000 iterations using random initialization.
``` parameters = model(train_X, train_Y, initialization = "random") print ("On the train set:") predictions_train = predict(train_X, train_Y, parameters) print ("On the test set:") predictions_test = predict(test_X, test_Y, parameters) ``` If you see "inf" as the cost after iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this, but it isn't worth worrying about for our purposes. Anyway, it looks like you have broken symmetry, and this gives better results than before. The model is no longer outputting all 0s. ``` print (predictions_train) print (predictions_test) plt.title("Model with large random initialization") axes = plt.gca() axes.set_xlim([-1.5,1.5]) axes.set_ylim([-1.5,1.5]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) ``` **Observations**: - The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity. - Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm. - If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization. <font color='blue'> **In summary**: - Initializing weights to very large random values does not work well. - Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part! ## 4 - He initialization Finally, try "He Initialization"; this is named for the first author of He et al., 2015.
(If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of `sqrt(1./layers_dims[l-1])` where He initialization would use `sqrt(2./layers_dims[l-1])`.) **Exercise**: Implement the following function to initialize your parameters with He initialization. **Hint**: This function is similar to the previous `initialize_parameters_random(...)`. The only difference is that instead of multiplying `np.random.randn(..,..)` by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation. ``` # GRADED FUNCTION: initialize_parameters_he def initialize_parameters_he(layers_dims): """ Arguments: layer_dims -- python array (list) containing the size of each layer. Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": W1 -- weight matrix of shape (layers_dims[1], layers_dims[0]) b1 -- bias vector of shape (layers_dims[1], 1) ... WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1]) bL -- bias vector of shape (layers_dims[L], 1) """ np.random.seed(3) parameters = {} L = len(layers_dims) - 1 # integer representing the number of layers for l in range(1, L + 1): ### START CODE HERE ### (≈ 2 lines of code) parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * np.sqrt(2 / layers_dims[l-1]) parameters['b' + str(l)] = np.zeros((layers_dims[l], 1)) ### END CODE HERE ### return parameters parameters = initialize_parameters_he([2, 4, 1]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ``` **Expected Output**: <table> <tr> <td> **W1** </td> <td> [[ 1.78862847 0.43650985] [ 0.09649747 -1.8634927 ] [-0.2773882 -0.35475898] [-0.08274148 -0.62700068]] </td> </tr> <tr> <td> **b1** </td> <td> [[ 0.] [ 0.] [ 0.] 
[ 0.]] </td> </tr> <tr> <td> **W2** </td> <td> [[-0.03098412 -0.33744411 -0.92904268 0.62552248]] </td> </tr> <tr> <td> **b2** </td> <td> [[ 0.]] </td> </tr> </table> Run the following code to train your model on 15,000 iterations using He initialization. ``` parameters = model(train_X, train_Y, initialization = "he") print ("On the train set:") predictions_train = predict(train_X, train_Y, parameters) print ("On the test set:") predictions_test = predict(test_X, test_Y, parameters) plt.title("Model with He initialization") axes = plt.gca() axes.set_xlim([-1.5,1.5]) axes.set_ylim([-1.5,1.5]) plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y) ``` **Observations**: - The model with He initialization separates the blue and the red dots very well in a small number of iterations. ## 5 - Conclusions You have seen three different types of initializations. For the same number of iterations and same hyperparameters the comparison is: <table> <tr> <td> **Model** </td> <td> **Train accuracy** </td> <td> **Problem/Comment** </td> </tr> <tr> <td> 3-layer NN with zeros initialization </td> <td> 50% </td> <td> fails to break symmetry </td> </tr> <tr> <td> 3-layer NN with large random initialization </td> <td> 83% </td> <td> too large weights </td> </tr> <tr> <td> 3-layer NN with He initialization </td> <td> 99% </td> <td> recommended method </td> </tr> </table> <font color='blue'> **What you should remember from this notebook**: - Different initializations lead to different results - Random initialization is used to break symmetry and make sure different hidden units can learn different things - Don't initialize to values that are too large - He initialization works well for networks with ReLU activations.
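The He factor $\sqrt{2/n^{[l-1]}}$ from section 4 can also be sanity-checked numerically. He et al.'s derivation tracks the second moment $E[a^2]$ of the ReLU activations: the next layer's pre-activations have variance $n^{[l-1]} \cdot \text{Var}(w) \cdot E[a^2]$, so keeping $E[a^2] \approx 1$ keeps the signal scale stable from layer to layer. A quick numpy check (the bound is loose because this is a statistical estimate):

```python
import numpy as np

np.random.seed(0)
fan_in, fan_out, m = 1000, 1000, 2000

x = np.random.randn(fan_in, m)                              # unit-variance inputs
W = np.random.randn(fan_out, fan_in) * np.sqrt(2.0 / fan_in)  # He scaling
a = np.maximum(0, W @ x)                                    # ReLU activations

# Pre-activations have variance ~2, the ReLU keeps half of that second
# moment, so E[a^2] comes back to ~1, matching the inputs
print((a ** 2).mean())  # ≈ 1.0
```

Repeating the `W @ x` plus ReLU step across many layers keeps the activation scale near 1 rather than shrinking or exploding geometrically, which is exactly why He initialization trains so much faster in the comparison table above.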
# Using R and MATLAB in Jupyter with Python <a class="tocSkip"></a>

N.B. Although Jupyter stands for Julia, Python, and R, R Markdown is really the way to go if you're working with R.

# Run R code

Note that this requires running from a Python 3 instance of Jupyter (in my case, at least).

## R for Jupyter installation instructions:

In theory, you should just be able to run this line and be all set, but it didn't work for me: `conda install -c r r-essentials` <br><br>
If that didn't work, go through these steps:

- In R (not RStudio), run the following:
```R
install.packages(c('repr', 'IRdisplay', 'evaluate', 'crayon', 'pbdZMQ', 'devtools', 'uuid', 'digest'))
devtools::install_github('IRkernel/IRkernel')
IRkernel::installspec()  # to register the kernel in the current R installation
```
- Install the interface between R and Python in terminal with:
```
conda install -c r r-essentials
conda install rpy2
```
- Make sure you have R added to your PATH (in my case, `C:\Program Files\R\R-3.4.1\bin\x64`)
- _Windows_: Need `R_HOME` (same path as above, minus `\bin\x64`) and `R_USER` (just your Windows user name) added as separate environment variables.
- _Windows_: You may also need to add the following two directories to your PATH: `C:\Anaconda3\Library\mingw-w64\bin; C:\Anaconda3\Library\mingw-w64\lib`
- Install libraries like ggplot2 directly into R itself, not RStudio: `install.packages('ggplot2', dependencies=TRUE)`
- [See here for further information if needed](https://github.com/IRkernel/IRkernel) and [here for more info about R notebooks in Jupyter](https://www.datacamp.com/community/blog/jupyter-notebook-r).

# Example Python to R pipeline

First, make some example data in Python.

```
import pandas as pd

df = pd.DataFrame(
    {
        "Letter": ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
        "X": [4, 3, 5, 2, 1, 7, 7, 5, 9],
        "Y": [0, 4, 3, 6, 7, 10, 11, 9, 13],
        "Z": [1, 2, 3, 1, 2, 3, 1, 2, 3],
    }
)
```

Load the extension allowing one to run R code from within a Python notebook.

```
%load_ext rpy2.ipython
```

Do stuff in R with cell or line magics. "-i" imports to R, "-o" outputs from R back to Python.

<div class="alert alert-info">
For some reason, my configuration is stripping the arguments to the cell magic; may be an issue with jupytext (although a previous similar issue was supposedly fixed.) Use `%%R -i df` to make the below code work in the meantime.
</div>

```
%%R -i df
library("ggplot2")
ggplot(data = df) + geom_point(aes(x = X, y = Y, color = Letter, size = Z))
```

# Run MATLAB code

## MATLAB for Jupyter Installation

```bash
pip install matlab_kernel
pip install pymatbridge
```

If you're getting a "zmq channel closed" error, open jupyter notebook from a different port when using MATLAB

```bash
jupyter notebook --port=8889
```

## Example Python to MATLAB pipeline

Load the MATLAB extension for running MATLAB code within a Python notebook.

```
%load_ext pymatbridge
```

Let's try transposing an array from Python in MATLAB, then feeding it back into Python. <br>
First, define an array.

```
a = [
    [1, 2],
    [3, 4],
    [5, 6]
]
a
```

Now, transpose it easily in MATLAB!

```
%%matlab -i a -o a
a = a'
```

Finally, check that Python can see the correct value of `a`

```
a
```

## Plotting in MATLAB

```
%%matlab
b = linspace(0.01,6*pi,100);
plot(sin(b))
grid on
hold on
plot(cos(b),'r')
```

Exit MATLAB when done.

```
%unload_ext pymatbridge
```

# Bonus: Run Javascript code

Note that Javascript executes as the notebook is opened, even if it's been exported as HTML!

```
%%javascript
console.log('hey!')
```
# Functions #

In computer science, a function is an organized block of code through which actions are carried out. The reasons to use a function are:
- clearer code
- a single definition for a specific action
- shorter overall code
- the ability to repeat the same action easily

## Creating a function ##

In Python a function is defined with the keyword `def`, followed by the name we want to give the function and parentheses `()` that usually contain parameters; right after them comes the character `:`, then the **indented** code containing the **docstring** (which provides information about the function's usage and actions), followed by the instructions to execute and a final line with `return` and the value we want to return, possibly in the form of a variable. The logical structure is as follows:

```
def function_name(parameters):
    '''function docstring'''
    function_code
    return value
```

Now that we have understood the syntax, let's look at some examples.

### Required arguments ###

```
def somma(x, y): # two parameters are given as input
    '''adds two numbers''' # information about the function
    tot = x + y # compute the sum
    return tot # return the value of the sum
```

Now, if we write `somma()` and press `Shift + Tab` with the cursor inside the parentheses, a small window will appear with the **docstring** we wrote in the function and other information.

Let's now use the function we defined.

<div class="alert alert-block alert-danger">
Be careful: if we forget to provide values for the parameters, an <b>error</b> will be printed telling us that we forgot to define them.
</div>

```
somma() # no values specified
```

### Keyword arguments ###

```
tot = somma(x = 3, y = 2) # tot will hold the value returned by the function
print('The sum of 3 and 2 is:', tot)
```

As we can see, we passed actual values to the function by setting the parameters to specific values. It is also possible to drop the `parameter = value` syntax and write only the values, taking care to keep the order, unless the types of the variables differ; we will see this later.

```
tot = somma(3, 2) # implicitly x = 3, y = 2
print('The sum of 3 and 2 is:', tot)
```

### Functions without return or parameters ###

Functions can also have **no parameters, or return no value**, and still work. Let's see some examples:

```
def valore_nullo(): # no parameter is required
    '''returns the null value'''
    return 0 # return the value without performing any action

zero = valore_nullo() # note that no parameter was passed
print('The value returned by valore_nullo is:', zero)

def trattini(numero):
    '''prints a row of dashes, as many as the value of the parameter'''
    print('-' * numero) # note there is no return

trattini(80)
```

Note the following, though: if we assign a variable to the result of `trattini`, we get the value `None`.

```
valore = trattini(80)
print('valore is:', valore)
```

Of course the function's action is still performed, so 80 dashes will be printed, but the value returned is `None`. The reason is that, when `return` is missing, Python implicitly adds `return None` as the final line of the function.<br>
**Note:** `None` is a type of value that contains nothing and can be read as **nothing**; its behavior is very important!
```
# function with no parameters or return
def saluta(): # no parameter
    '''prints Ciao'''
    print('Ciao') # no return

saluta()
```

### Default arguments ###

In functions it may be desirable that, when nothing is specified, a default value is used and the actions are performed anyway.

```
def default_somma(x = 0, y = 0):
    '''adds two numbers; if no values are passed, both numbers are assumed to be zero'''
    tot = x + y
    return tot

# compare the behavior of the two functions
# case where the values are defined
tot_somma = somma(x = 3, y = 2)
tot_somma_default = default_somma(x = 3, y = 2)
print('Sum with defined values')
print('somma function:', tot_somma)
print('default_somma function:', tot_somma_default)
print('Sum without defined values')
print('default_somma function:', default_somma())
print('somma function:', somma())
```

As we can see, the function with default arguments causes no problem and returns the sum of the default values (0), while the function without default arguments raises an error.

### Variable-length arguments ###

When we cannot know how many parameters or arguments will be passed, we can use a parameter that collects the extra values.
#### Arbitrary arguments ####

An arbitrary argument is passed as a parameter preceded by `*`, which collects the values into a **tuple**. Let's see how it is used:

```
# *args allows an arbitrary number of values to be passed
def somma_arbitrary(*args):
    '''sums two or more values'''
    tot = 0 # initialize tot to 0
    for arg in args: # loop over the tuple
        # add each further value
        tot += arg # tot = tot + arg
    return tot

tot_arbitrary = somma_arbitrary(3, 2, 5)
print('Sum of 3, 2 and 5:', tot_arbitrary)
```

Note that in this case keyword arguments (`parameter = value`) are not needed, only a series of input values, which simplifies things a lot.

#### Arbitrary keyword arguments ####

When we want to specify the value associated with a particular name without knowing how many arguments we need, we can use a parameter (usually called **kwargs**) preceded by `**`; this collects the passed arguments into a **dictionary**.
```
def stampa_nomi(**kwargs):
    '''prints people's names; write the names in the format Name = 'Surname' '''
    # remember that kwargs is a dictionary
    for name, surname in kwargs.items():
        print('Name:', name, ' Surname:', surname)
    # return not necessary

stampa_nomi(Matteo = 'Conterno', Simona = 'Ferrero', Marco = 'Coluccio', Flavio = 'Pirazzi')
```

## Specifying the types of the parameters and of the return value ##

Since Python functions accept **any type** of variable as input for their parameters, this can lead to surprising behavior. For example, if we pass two strings to the first `somma` function, this is what happens:

```
tot = somma(x = 'Ehi', y = 'Ciao')
print('Result of summing Ehi and Ciao:', tot)
```

To annotate the type of a parameter, use the character `:` right after the parameter name, followed by the type; to annotate the type of the output, use `->` after the function's parentheses, followed by the type.

```
# x:int, y:int annotate the parameters as integers
# -> int annotates the output as an integer
# (annotations only document the expected types; they do not convert or enforce them)
def somma_tipo(x:int = 0, y:int = 0) -> (int): # note x:int, y:int
    '''adds two integers, returns an integer'''
    tot = x + y
    return tot

tot = somma_tipo(x = 2, y = 3)
print('Sum of 2 and 3:', tot, '\n the total has type:', type(tot))
```

Note, however, that this does not prevent the variables from taking values of other types, and running the code can return a value that is not of the annotated type.

```
somma_tipo(x = 'ciao', y = 'dopo')
```

This is because, according to the Python philosophy, functions should work with any type of variable; annotating the types is still useful to document what input is expected. If we want to forbid certain operations, we can raise an **error** (errors will be explained later).
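Since annotations are not enforced, one way to actually forbid wrong types is to check them explicitly and raise an error. A minimal sketch (the `somma_controllata` name is illustrative, not part of the lesson's code):

```python
def somma_controllata(x: int = 0, y: int = 0) -> int:
    '''adds two integers, raising TypeError for non-integer input'''
    # isinstance performs the runtime check that annotations alone do not
    if not isinstance(x, int) or not isinstance(y, int):
        raise TypeError('x and y must be integers')
    return x + y

print(somma_controllata(2, 3))  # works: 5
try:
    somma_controllata('ciao', 'dopo')  # raises TypeError instead of concatenating
except TypeError as e:
    print('Error:', e)
```

This is the manual version of what tools such as static type checkers verify before the code runs.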
## The pass statement ##

Function bodies cannot be empty, but we can create an empty function without running into errors by using the `pass` statement. The statement can also be used when particular conditions are met and no code needs to be executed in that part of the function.

```
def vuota():
    '''does nothing!'''
    pass

vuota()

def positivo(numero):
    '''returns nothing if the number is positive, otherwise says whether it is zero or negative'''
    if numero > 0: # the number is positive
        pass # do nothing!
    elif numero < 0:
        print('negative!')
    else:
        print('zero')

positivo(2) # does nothing
positivo(-3)
```

## Anonymous (lambda) functions ##

In some contexts it may not be necessary to define long functions, just short ones that perform a few operations; for this Python provides **lambda (anonymous) functions**.

```
# this function takes x as an argument and returns its square
# it is equivalent to
# def square(x):
#     return x * x
square = lambda x : x * x # note the lambda declaration
```

To call the function, just write its name and put a value between parentheses; it will return the result of the operations written after `:`.

```
five = 5
print('The square of', five, 'is', square(five))
```

Anonymous functions are generally used together with the list functions `filter` and `map` to clean data quickly, and they will also be used with dataframes later.

### Filter and map ###

**Filter** is a function whose **arguments are another function and a list**; the result is a new list containing the elements that satisfy the condition of the function.
```
lista = [1, 2, 4, 5, 6, 22, 43, 55]
nuova_lista = list(filter(lambda x: (x % 2) != 0, lista))
print('List of the odd numbers only:', nuova_lista)
```

**Map** is a function whose **arguments are another function and a list**; the result is a new list with the elements obtained by applying the function, with no condition required.

```
lista_quadrati = list(map(lambda x: x*x, lista)) # square of every element in the list
print('List of the squares:', lista_quadrati)
```

***

CONGRATULATIONS, YOU HAVE FINISHED THE LESSON ON FUNCTIONS!
##### Copyright 2020 Google LLC. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ``` import os import numpy import pandas from six.moves import zip from sklearn import mixture import gzip !pip install python-Levenshtein import Levenshtein ``` ## Code to fit GMM ``` R1_TILE21_WT_SEQ = 'DEEEIRTTNPVATEQYGSVSTNLQRGNR' # Covariance type to use in Gaussian Mixture Model. _COVAR_TYPE = 'full' # Number of components to use in Gaussian Mixture Model. _NUM_COMPONENTS = 2 class BinningLabeler(object): """Emits class labels from provided cutoff values. Input cutoffs are encoded as 1-D arrays. Given a cutoffs array of size n, creates n+1 labels for cutoffs, where the first bin is [-inf, cutoffs[0]], and last bin is (cutoffs[-1], inf]. """ def __init__(self, cutoffs): """Constructor. Args: cutoffs: (numpy.ndarray or list or numeric) values to bin data at. First bin is [-inf, cutoffs[0]], and last bin is (cutoffs[-1], inf]. Raises: ValueError: If no cutoff(s) (i.e. an empty list) is provided. """ cutoffs = numpy.atleast_1d(cutoffs) if cutoffs.size: self._cutoffs = numpy.sort(cutoffs) else: raise ValueError('Invalid cutoffs. At least one cutoff value required.') def predict(self, values): """Provides model labels for input value(s) using the cutoff bins. Args: values: (numpy.ndarray or numeric) Value(s) to infer a label on. Returns: A numpy array with length len(values) and labels corresponding to categories defined by the cutoffs array intervals. The labels are [0, 1, . . ., n], where n = len(cutoffs). 
    Note, labels correspond to bins in sorted order from smallest to largest
    cutoff value.
    """
    return numpy.digitize(values, self._cutoffs)


class TwoGaussianMixtureModelLabeler(object):
  """Emits class labels from Gaussian Mixture given input data.

  Input data is encoded as 1-D arrays. Allows for an optional ambiguous label
  between the two modelled Gaussian distributions.

  Without the optional ambiguous category, the two labels are:
    0 - For values more likely derived from the Gaussian with smaller mean
    2 - For values more likely derived from the Gaussian with larger mean

  When allowing for an ambiguous category the three labels are:
    0 - For values more likely derived from the Gaussian with smaller mean
    1 - For values which fall within an ambiguous probability cutoff.
    2 - For values more likely derived from the Gaussian with larger mean
  """

  def __init__(self, data):
    """Constructor.

    Args:
      data: (numpy.ndarray or list) Input data to model with Gaussian Mixture.
        Input data is presumed to be in the form [x1, x2, ...., xn].
    """
    self._data = numpy.array([data]).T
    self._gmm = mixture.GaussianMixture(
        n_components=_NUM_COMPONENTS,
        covariance_type=_COVAR_TYPE).fit(self._data)
    # Re-map the gaussian with smaller mean to the "0" label.
    self._label_by_index = dict(
        list(zip([0, 1], numpy.argsort(self._gmm.means_[:, 0]).tolist())))
    self._label_by_index_fn = numpy.vectorize(lambda x: self._label_by_index[x])

  def predict(self, values, probability_cutoff=0.):
    """Provides model labels for input value(s) using the GMM.

    Args:
      values: (array or single float value) Value(s) to infer a label on.
        When values=None, predictions are run on self._data.
      probability_cutoff: (float) Probability between 0 and 1 to identify
        which values correspond to ambiguous labels. At probability_cutoff=0
        (default) it only returns the original two state predictions.

    Returns:
      A numpy array with length len(values) and labels corresponding to
      0, 2 if probability_cutoff = 0 and 0, 1, 2 otherwise.
In the latter, 0 corresponds to the gaussian with smaller mean, 1 corresponds to the ambiguous label, and 2 corresponds to the gaussian with larger mean. """ values = numpy.atleast_1d(values) values = numpy.array([values]).T predictions = self._label_by_index_fn(self._gmm.predict(values)) # Re-map the initial 0,1 predictions to 0,2. predictions *= 2 if probability_cutoff > 0: probas = self._gmm.predict_proba(values) max_probas = numpy.max(probas, axis=1) ambiguous_values = max_probas < probability_cutoff # Set ambiguous label as 1. predictions[ambiguous_values] = 1 return predictions ``` ## Load validation experiment dataframe ``` with gzip.open('GAS1_target_20190516.csv.gz', 'rb') as f: gas1 = pandas.read_csv(f, index_col=None) gas1 = gas1.rename({ 'aa': 'sequence', 'mask': 'mutation_sequence', 'mut': 'num_mutations', 'category': 'partition', }, axis=1) gas1_orig = gas1.copy() ## for comparison below if needed gas1.head() ``` #### Validate that N->F columns computed as expected ``` numpy.testing.assert_allclose( gas1.GAS1_plasmid_F, gas1.GAS1_plasmid_N / gas1.GAS1_plasmid_N.sum()) numpy.testing.assert_allclose( gas1.GAS1_virus_F, gas1.GAS1_virus_N / gas1.GAS1_virus_N.sum()) ``` ### Filter sequences with insufficient plasmids #### Find zero-plasmid sequences ``` zero_plasmids_mask = gas1.GAS1_plasmid_N == 0 zero_plasmids_mask.sum() ``` #### Find low-plasmid count sequences These selection values are unreliable, more noisy ``` low_plasmids_mask = (gas1.GAS1_plasmid_N < 10) & ~zero_plasmids_mask low_plasmids_mask.sum() ``` #### Drop sequences that don't meet the plasmid count bars ``` seqs_to_remove = (low_plasmids_mask | zero_plasmids_mask) seqs_to_remove.sum() num_seqs_before_plasmid_filter = len(gas1) num_seqs_before_plasmid_filter gas1 = gas1[~seqs_to_remove].copy() num_seqs_before_plasmid_filter - len(gas1) len(gas1) ``` ### Add pseudocounts ``` PSEUDOCOUNT = 1 def counts_to_frequency(counts): return counts / counts.sum() gas1['virus_N'] = gas1.GAS1_virus_N + 
PSEUDOCOUNT gas1['plasmid_N'] = gas1.GAS1_plasmid_N + PSEUDOCOUNT gas1['virus_F'] = counts_to_frequency(gas1.virus_N) gas1['plasmid_F'] = counts_to_frequency(gas1.plasmid_N) ``` ### Compute viral selection ``` gas1['viral_selection'] = numpy.log2(gas1.virus_F / gas1.plasmid_F) assert 0 == gas1.viral_selection.isna().sum() assert not numpy.any(numpy.isinf(gas1.viral_selection)) gas1.viral_selection.describe() ``` ### Compute GMM threshold ``` # Classify the selection coeff series after fitting to a GMM gmm_model = TwoGaussianMixtureModelLabeler( gas1[gas1.partition.isin(['stop', 'wild_type'])].viral_selection) gas1['viral_selection_gmm'] = gmm_model.predict(gas1.viral_selection) # Compute the threshold for the viable class from the GMM labels selection_coeff_threshold = gas1.loc[gas1.viral_selection_gmm == 2, 'viral_selection'].min() print('selection coeff cutoff = %.3f' % selection_coeff_threshold) # Add a label column def is_viable_mutant(mutant_data): return mutant_data['viral_selection'] > selection_coeff_threshold gas1['is_viable'] = gas1.apply(is_viable_mutant, axis=1) print(gas1.is_viable.mean()) ``` ---- ### De-dupe model-designed sequences #### Partition the sequences that should not be de-deduped Split off the partitions for which we want to retain replicates, such as controls/etc. 
``` ml_generated_seqs = [ 'cnn_designed_plus_rand_train_seed', 'cnn_designed_plus_rand_train_walked', 'cnn_rand_doubles_plus_single_seed', 'cnn_rand_doubles_plus_single_walked', 'cnn_standard_seed', 'cnn_standard_walked', 'lr_designed_plus_rand_train_seed', 'lr_designed_plus_rand_train_walked', 'lr_rand_doubles_plus_single_seed', 'lr_rand_doubles_plus_single_walked', 'lr_standard_seed', 'lr_standard_walked', 'rnn_designed_plus_rand_train_seed', 'rnn_designed_plus_rand_train_walked', 'rnn_rand_doubles_plus_singles_seed', 'rnn_rand_doubles_plus_singles_walked', 'rnn_standard_seed', 'rnn_standard_walked', ] is_ml_generated_mask = gas1.partition.isin(ml_generated_seqs) ml_gen_df = gas1[is_ml_generated_mask].copy() non_ml_gen_df = gas1[~is_ml_generated_mask].copy() ml_gen_df.partition.value_counts() ml_gen_deduped = ml_gen_df.groupby('sequence').apply( lambda dupes: dupes.loc[dupes.plasmid_N.idxmax()]).copy() display(ml_gen_deduped.shape) ml_gen_deduped.head() ``` #### Concatenate de-deduped ML-generated seqs with rest ``` gas1_deduped = pandas.concat([ml_gen_deduped, non_ml_gen_df], axis=0) print(gas1_deduped.shape) gas1_deduped.partition.value_counts() ``` ## Compute edit distance for chip ``` gas1 = gas1_deduped gas1['num_edits'] = gas1.sequence.apply( lambda s: Levenshtein.distance(R1_TILE21_WT_SEQ, s)) gas1.num_edits.describe() COLUMN_SCHEMA = [ 'sequence', 'partition', 'mutation_sequence', 'num_mutations', 'num_edits', 'viral_selection', 'is_viable', ] gas1a = gas1[COLUMN_SCHEMA].copy() ``` #### Concat with training data chip ``` harvard = pandas.read_csv('r0r1_with_partitions_and_labels.csv', index_col=None) harvard = harvard.rename({ 'S': 'viral_selection', 'aa_seq': 'sequence', 'mask': 'mutation_sequence', 'mut': 'num_mutations', }, axis=1) designed_mask = harvard.partition.isin(['min_fit', 'thresh', 'temp']) harvard.loc[designed_mask, ['partition']] = 'designed' harvard['num_edits'] = harvard.sequence.apply( lambda s: Levenshtein.distance(R1_TILE21_WT_SEQ, s)) 
harvard.num_edits.describe() harvard1 = harvard[COLUMN_SCHEMA].copy() harvard1.head(3) harvard1['chip'] = 'harvard' gas1a['chip'] = 'gas1' combined = pandas.concat([ harvard1, gas1a, ], axis=0, sort=False) print(combined.shape) combined.partition.value_counts() combined.head() ```
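The de-duplication step above (keep, for each duplicated sequence, the replicate with the largest plasmid count) can be illustrated on toy data. This sketch uses `groupby(...).idxmax()` directly, an equivalent and slightly more idiomatic form of the groupby-apply used in the notebook; the column names mirror those above but the values are made up:

```python
import pandas as pd

df = pd.DataFrame({
    'sequence':  ['AAA', 'AAA', 'BBB', 'CCC', 'CCC'],
    'plasmid_N': [5, 12, 3, 7, 2],
})

# For each sequence, idxmax returns the row label of the maximum plasmid
# count; .loc then selects exactly one row per sequence.
deduped = df.loc[df.groupby('sequence')['plasmid_N'].idxmax()]
print(deduped)
```

The result keeps three rows, one per distinct sequence, with counts 12, 3, and 7.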
# Binary-Search

https://www.geeksforgeeks.org/binary-search/

Given a sorted array `arr[]` of n elements, write a function to search a given element x in `arr[]`.

A simple approach is to do a linear search; the time complexity of that algorithm is O(n). Another approach to perform the same task is Binary Search.

Binary Search: search a sorted array by repeatedly dividing the search interval in half. Begin with an interval covering the whole array. If the value of the search key is less than the item in the middle of the interval, narrow the interval to the lower half. Otherwise narrow it to the upper half. Repeatedly check until the value is found or the interval is empty.

We basically ignore half of the elements after just one comparison.

- Compare x with the middle element.
- If x matches the middle element, return the mid index.
- Else if x is greater than the mid element, then x can only lie in the right half subarray after the mid element, so recur for the right half.
- Else (x is smaller) recur for the left half.

### Recursive and iterative implementations of Binary Search

```
# Python3 program for binary search.
# Returns index of x in arr if present, else -1

# Recursive version
def binarySearchRecursive(arr, l, r, x):
    # check base case
    if r >= l:
        mid = l + (r - l) // 2
        # if the element is present at the middle itself
        if x == arr[mid]:
            return mid
        elif x < arr[mid]:
            return binarySearchRecursive(arr, l, mid - 1, x)
        else:
            return binarySearchRecursive(arr, mid + 1, r, x)
    else:
        return -1

# Iterative version
def binarySearch(arr, l, r, x):
    while l <= r:
        mid = l + (r - l) // 2
        # if the element is present at the middle itself
        if x == arr[mid]:
            return mid
        elif x < arr[mid]:
            r = mid - 1
        else:
            l = mid + 1
    return -1
```

```
%%time
arr = [2, 3, 4, 10, 40]
x = 10

# Function call
result = binarySearch(arr, 0, len(arr) - 1, x)

if result != -1:
    print(f"Element is present at index {result}")
else:
    print("Element is not present in array")
```

**Time Complexity**: The time complexity of Binary Search can be written as $T(n) = T(n/2) + c$. The above recurrence can be solved either using the recurrence tree method or the Master method. It falls in case II of the Master method, and the solution of the recurrence is $\Theta(\log n)$.

**Auxiliary Space**: $O(1)$ in the case of the iterative implementation; the recursive implementation uses $O(\log n)$ recursion call-stack space.
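In practice, Python's standard library already provides the binary-search primitive in the `bisect` module: `bisect_left` returns the leftmost insertion point, which equals the element's index when it is present. A sketch of the same search built on it:

```python
from bisect import bisect_left

def binary_search_stdlib(arr, x):
    """Return the index of x in sorted arr, or -1 if absent."""
    i = bisect_left(arr, x)  # leftmost position where x could be inserted
    if i < len(arr) and arr[i] == x:
        return i
    return -1

print(binary_search_stdlib([2, 3, 4, 10, 40], 10))  # -> 3
print(binary_search_stdlib([2, 3, 4, 10, 40], 7))   # -> -1
```

`bisect_left` is implemented in C, so for plain lookups in sorted lists it is both shorter and faster than a hand-rolled loop.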
```
import os

import requests

def get_table_url(table_name, base_url=os.environ['NEWSROOMDB_URL']):
    # use the base_url argument rather than re-reading the environment
    return '{}table/json/{}'.format(base_url, table_name)

def get_table_data(table_name):
    url = get_table_url(table_name)

    try:
        r = requests.get(url)
        return r.json()
    except:
        print("Request failed. Probably because the response is huge. We should fix this.")
        return get_table_data(table_name)

shooting_victims = get_table_data('shootings')
print("Loaded {} shooting victims".format(len(shooting_victims)))

homicides = get_table_data('homicides')
print("Loaded {} homicides".format(len(homicides)))

from shapely.geometry import shape

# Ohio Street on the north, Kinzie Street on the south, Leclaire Avenue on the west, and Lamon Avenue on the east
# I generated this GeoJSON using http://geojson.io/
boundary_json = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "properties": {},
            "geometry": {
                "type": "Polygon",
                "coordinates": [
                    [
                        [-87.75301694869995, 41.89130427991353],
                        [-87.7481460571289, 41.891376160028166],
                        [-87.74800658226013, 41.88790186196206],
                        [-87.7528989315033, 41.887837965055716],
                        [-87.75301694869995, 41.89130427991353]
                    ]
                ]
            }
        }
    ]
}

boundary = shape(boundary_json['features'][0]['geometry'])

from shapely.geometry import Point

def record_in_bounds(record, bounds, coordinate_field="Geocode Override"):
    # read the configurable field name instead of hard-coding it
    coordinates = record[coordinate_field][1:-1].split(',')

    if len(coordinates) != 2:
        return False

    point = Point(float(coordinates[1]), float(coordinates[0]))
    if bounds.contains(point):
        return True

    return False

def records_in_bounds(records, bounds):
    in_bounds = []

    for record in records:
        if record_in_bounds(record, bounds):
            in_bounds.append(record)

    return in_bounds

shooting_victims_within_bounds = records_in_bounds(shooting_victims, boundary)
homicides_within_bounds = records_in_bounds(homicides, boundary)

import pandas as pd

# Load records into Pandas data frames in case we need to do further analysis
shooting_victims_df = pd.DataFrame.from_records(shooting_victims_within_bounds)
homicides_df = pd.DataFrame.from_records(homicides_within_bounds)

# Output data to CSV to share with reporters
shooting_victims_df.to_csv("hubbard_playlot_park__shooting_victims.csv")
homicides_df.to_csv("hubbard_playlot_park__homicides.csv")
```
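Because the boundary polygon above is essentially an axis-aligned rectangle (four street edges), the membership test can be approximated without shapely by simple min/max comparisons. This is only a sketch for this particular rectangular boundary, not a substitute for `Polygon.contains` on general polygons; the rough box values come from the GeoJSON coordinates above:

```python
def in_bounding_box(lat, lng, lat_min, lat_max, lng_min, lng_max):
    """True if (lat, lng) falls inside the axis-aligned box."""
    return lat_min <= lat <= lat_max and lng_min <= lng <= lng_max

# Rough box derived from the GeoJSON coordinates above
LAT_MIN, LAT_MAX = 41.8878, 41.8914
LNG_MIN, LNG_MAX = -87.7530, -87.7480

print(in_bounding_box(41.8890, -87.7500, LAT_MIN, LAT_MAX, LNG_MIN, LNG_MAX))  # True
print(in_bounding_box(41.8950, -87.7500, LAT_MIN, LAT_MAX, LNG_MIN, LNG_MAX))  # False
```

The shapely version stays preferable the moment the boundary gains any non-rectangular edge.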
```
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from keras.datasets import fashion_mnist
from matplotlib import pyplot as plt

fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

# Scale pixel values to [0, 1] exactly once (the repeated divisions in the
# original cell would shrink the data several times over)
train_images = (train_images / 255.0).reshape(-1, 28, 28, 1).astype('float32')
test_images = (test_images / 255.0).reshape(-1, 28, 28, 1).astype('float32')

model = Sequential()
model.add(Conv2D(32, (3,3), activation='relu', padding = 'same', input_shape=(28,28,1)))
model.add(Conv2D(32, (3,3), activation ='relu', padding = 'same'))
model.add(MaxPooling2D(pool_size=(2, 2), strides = (2,2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))

# The labels are integers (0-9), so use the sparse loss rather than
# keras.losses.categorical_crossentropy, which expects one-hot labels
model.compile(loss = 'sparse_categorical_crossentropy',
              optimizer = keras.optimizers.Adagrad(),
              metrics = ['accuracy'])

# A simpler fully connected alternative on the flat 28x28 images
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
train_images = train_images / 255.0
test_images = test_images / 255.0

model = keras.Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation='relu'),
    Dropout(0.5),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax')
])
model.compile(optimizer = keras.optimizers.Adagrad(),
              loss = 'sparse_categorical_crossentropy',
              metrics = ['accuracy'])
model.summary()

# Track validation metrics so that 'val_accuracy' exists for the plot below
history = model.fit(train_images, train_labels, epochs=10, validation_split=0.2)

plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()

test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
```
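A recompilation with `categorical_crossentropy` only works if the integer class labels are one-hot encoded first; otherwise `sparse_categorical_crossentropy` is the correct loss. A minimal NumPy sketch of that conversion, equivalent in effect to `keras.utils.to_categorical` (the label values here are made up):

```python
import numpy as np

labels = np.array([3, 0, 9, 1])  # integer class labels in 0..9

# Indexing the identity matrix by the labels yields one row per label,
# with a single 1 in the column of that class
one_hot = np.eye(10)[labels]

print(one_hot.shape)          # (4, 10)
print(one_hot.argmax(axis=1)) # recovers the original integer labels
```

After this conversion, `categorical_crossentropy` and `sparse_categorical_crossentropy` compute the same quantity; the sparse variant just skips materializing the one-hot matrix.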
``` import tensorflow as tf tf.logging.set_verbosity(tf.logging.WARN) import pickle import numpy as np import os from sklearn.model_selection import train_test_split from sklearn.metrics import f1_score from sklearn.metrics import accuracy_score import os from tensorflow.python.client import device_lib from collections import Counter from math import inf f = open('../../Glove/word_embedding_glove', 'rb') word_embedding = pickle.load(f) f.close() word_embedding = word_embedding[: len(word_embedding)-1] f = open('../../Glove/vocab_glove', 'rb') vocab = pickle.load(f) f.close() word2id = dict((w, i) for i,w in enumerate(vocab)) id2word = dict((i, w) for i,w in enumerate(vocab)) unknown_token = "UNKNOWN_TOKEN" f = open("../../../dataset/sense/dict_sense-keys", 'rb') dict_sense_keys = pickle.load(f) f.close() f = open("../../../dataset/sense/dict_word-sense", 'rb') dict_word_sense = pickle.load(f) f.close() # Model Description sense_word = 'force' model_name = 'model-5' sense_word_dir = '../output/' + sense_word model_dir = sense_word_dir + '/' + model_name save_dir = os.path.join(model_dir, "save/") log_dir = os.path.join(model_dir, "log") if not os.path.exists(sense_word_dir): os.mkdir(sense_word_dir) if not os.path.exists(model_dir): os.mkdir(model_dir) if not os.path.exists(save_dir): os.mkdir(save_dir) if not os.path.exists(log_dir): os.mkdir(log_dir) f = open("../../../dataset/checkwords/"+ sense_word + "_data", 'rb') data = pickle.load(f) f.close() data_y = [] for i in range(len(data)): data_y.append(dict_sense_keys[data[i][0]][3]) sense_count = Counter(data_y) sense_count = sense_count.most_common()[:5] vocab_sense = [k for k,v in sense_count] vocab_sense = sorted(vocab_sense, key=lambda x:int(x[0])) print(sense_count,vocab_sense) def make_mask_matrix(sense_word,vocab_sense): mask_mat = [] sense_list = [int(string[0]) for string in vocab_sense] sense_count = list(Counter(sense_list).values()) start=0 prev=0 for i in range(len(set(sense_list))): 
temp_row=[0]*len(sense_list) for j in range(len(sense_list)): if j>=start and j<sense_count[i]+prev: temp_row[j]= 0 else: temp_row[j]= -10 start+=sense_count[i] prev+=sense_count[i] mask_mat.append(temp_row) return mask_mat mask_mat = make_mask_matrix(sense_word,vocab_sense) mask_mat data_x = [] data_label = [] data_pos = [] for i in range(len(data)): if dict_sense_keys[data[i][0]][3] in vocab_sense: data_x.append(data[i][1]) data_label.append(dict_sense_keys[data[i][0]][3]) data_pos.append(dict_sense_keys[data[i][0]][1]) print(len(data_label), len(data_y)) # vocab_sense = dict_word_sense[sense_word] sense2id = dict((s, i) for i,s in enumerate(vocab_sense)) id2sense = dict((i, s) for i,s in enumerate(vocab_sense)) count_pos = Counter(data_pos) count_pos = count_pos.most_common() vocab_pos = [int(k) for k,v in count_pos] vocab_pos = sorted(vocab_pos, key=lambda x:int(x)) pos2id = dict((str(s), i) for i,s in enumerate(vocab_pos)) id2pos = dict((i, str(s)) for i,s in enumerate(vocab_pos)) print(vocab_pos) max_len = 0 for i in range(len(data_x)): max_len = max(max_len, len(data_x[i])) if(len(data_x[i])>200): print(i) print("max_len: ", max_len) # Parameters mode = 'train' num_senses = len(vocab_sense) num_pos = len(vocab_pos) batch_size = 64 vocab_size = len(vocab) unk_vocab_size = 1 word_emb_size = len(word_embedding[0]) max_sent_size = max(200, max_len) hidden_size = 100 keep_prob = 0.5 l2_lambda = 0.001 init_lr = 0.005 decay_steps = 500 decay_rate = 0.96 clip_norm = 1 clipping = True lambda_loss_pos = 4 # MODEL def attention(input_x, input_mask, W_att): h_masked = tf.boolean_mask(input_x, input_mask) h_tanh = tf.tanh(h_masked) u = tf.matmul(h_tanh, W_att) a = tf.nn.softmax(u) c = tf.reduce_sum(tf.multiply(h_tanh, a), 0) return c x = tf.placeholder('int32', [batch_size, max_sent_size], name="x") y = tf.placeholder('int32', [batch_size], name="y") y_pos = tf.placeholder('int32', [batch_size], name="y_pos") x_mask = tf.placeholder('bool', [batch_size, max_sent_size], 
name='x_mask') is_train = tf.placeholder('bool', [], name='is_train') word_emb_mat = tf.placeholder('float', [None, word_emb_size], name='emb_mat') input_keep_prob = tf.cond(is_train,lambda:keep_prob, lambda:tf.constant(1.0)) x_len = tf.reduce_sum(tf.cast(x_mask, 'int32'), 1) mask_matrix = tf.constant(value=mask_mat, shape=list(np.array(mask_mat).shape), dtype='float32') # mask_matrix with tf.name_scope("word_embedding"): if mode == 'train': unk_word_emb_mat = tf.get_variable("word_emb_mat", dtype='float', shape=[unk_vocab_size, word_emb_size], initializer=tf.contrib.layers.xavier_initializer(uniform=True, seed=0, dtype=tf.float32)) else: unk_word_emb_mat = tf.get_variable("word_emb_mat", shape=[unk_vocab_size, word_emb_size], dtype='float') final_word_emb_mat = tf.concat([word_emb_mat, unk_word_emb_mat], 0) Wx = tf.nn.embedding_lookup(final_word_emb_mat, x) with tf.variable_scope("lstm1"): cell_fw1 = tf.contrib.rnn.BasicLSTMCell(hidden_size,state_is_tuple=True) cell_bw1 = tf.contrib.rnn.BasicLSTMCell(hidden_size,state_is_tuple=True) d_cell_fw1 = tf.contrib.rnn.DropoutWrapper(cell_fw1, input_keep_prob=input_keep_prob) d_cell_bw1 = tf.contrib.rnn.DropoutWrapper(cell_bw1, input_keep_prob=input_keep_prob) (fw_h1, bw_h1), _ = tf.nn.bidirectional_dynamic_rnn(d_cell_fw1, d_cell_bw1, Wx, sequence_length=x_len, dtype='float', scope='lstm1') h1 = tf.concat([fw_h1, bw_h1], 2) with tf.variable_scope("lstm2"): cell_fw2 = tf.contrib.rnn.BasicLSTMCell(hidden_size,state_is_tuple=True) cell_bw2 = tf.contrib.rnn.BasicLSTMCell(hidden_size,state_is_tuple=True) d_cell_fw2 = tf.contrib.rnn.DropoutWrapper(cell_fw2, input_keep_prob=input_keep_prob) d_cell_bw2 = tf.contrib.rnn.DropoutWrapper(cell_bw2, input_keep_prob=input_keep_prob) (fw_h2, bw_h2), _ = tf.nn.bidirectional_dynamic_rnn(d_cell_fw2, d_cell_bw2, h1, sequence_length=x_len, dtype='float', scope='lstm2') h = tf.concat([fw_h2, bw_h2], 2) with tf.variable_scope("attention_pos"): W_att1 = 
tf.Variable(tf.truncated_normal([2*hidden_size, 1], mean=0.0, stddev=0.1, seed=0), name="W_att1") c1 = tf.expand_dims(attention(h1[0], x_mask[0], W_att1), 0) for i in range(1, batch_size): c1 = tf.concat([c1, tf.expand_dims(attention(h1[i], x_mask[i], W_att1), 0)], 0) with tf.variable_scope("softmax_layer_pos"): W1 = tf.Variable(tf.truncated_normal([2*hidden_size, num_pos], mean=0.0, stddev=0.1, seed=0), name="W1") b1 = tf.Variable(tf.zeros([num_pos]), name="b1") drop_c1 = tf.nn.dropout(c1, input_keep_prob) logits_pos = tf.matmul(drop_c1, W1) + b1 predictions_pos = tf.argmax(logits_pos, 1) final_masking = tf.nn.embedding_lookup(mask_matrix, predictions_pos) with tf.variable_scope("attention"): W_att = tf.Variable(tf.truncated_normal([2*hidden_size, 1], mean=0.0, stddev=0.1, seed=0), name="W_att") c = tf.expand_dims(attention(h[0], x_mask[0], W_att), 0) for i in range(1, batch_size): c = tf.concat([c, tf.expand_dims(attention(h[i], x_mask[i], W_att), 0)], 0) with tf.variable_scope("softmax_layer"): W = tf.Variable(tf.truncated_normal([2*hidden_size, num_senses], mean=0.0, stddev=0.1, seed=0), name="W") b = tf.Variable(tf.zeros([num_senses]), name="b") drop_c = tf.nn.dropout(c, input_keep_prob) logits = tf.matmul(drop_c, W) + b masked_logits = logits + final_masking predictions = tf.argmax(masked_logits, 1) loss_pos = lambda_loss_pos * tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits_pos, labels=y_pos)) loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=masked_logits, labels=y)) global_step = tf.Variable(0, trainable=False, name="global_step") learning_rate = tf.train.exponential_decay(init_lr, global_step, decay_steps, decay_rate, staircase=True) tv_all = tf.trainable_variables() tv_regu =[] for t in tv_all: if t.name.find('b:')==-1: tv_regu.append(t) # l2 Loss l2_loss = l2_lambda * tf.reduce_sum([ tf.nn.l2_loss(v) for v in tv_regu ]) total_loss = loss + l2_loss + loss_pos #pos optimizer pos_optimizer = 
tf.train.AdamOptimizer(learning_rate).minimize(loss_pos, global_step) # Optimizer for loss optimizer = tf.train.AdamOptimizer(learning_rate) # Gradients and Variables for Loss grads_vars = optimizer.compute_gradients(total_loss) # Clipping of Gradients clipped_grads = grads_vars if(clipping == True): clipped_grads = [(tf.clip_by_norm(grad, clip_norm), var) for grad, var in clipped_grads] # Training Optimizer for Total Loss train_op = optimizer.apply_gradients(clipped_grads, global_step=global_step) # Summaries var_summaries = [] for v in tv_all: var_summary = tf.summary.histogram("{}/var".format(v.name), v) var_summaries.append(var_summary) var_summaries_merged = tf.summary.merge(var_summaries) loss_summary = tf.summary.scalar("loss", loss) loss_poss_summary = tf.summary.scalar('loss_pos',loss_pos) total_loss_summary = tf.summary.scalar("total_loss", total_loss) summary = tf.summary.merge_all() os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152 os.environ["CUDA_VISIBLE_DEVICES"]="0" config = tf.ConfigProto() config.gpu_options.allow_growth = True sess = tf.Session(config=config) sess.run(tf.global_variables_initializer()) # For initializing all the variables saver = tf.train.Saver() # For Saving the model summary_writer = tf.summary.FileWriter(log_dir, sess.graph) # For writing Summaries index = [] for i in range(len(data_x)): index.append(i) index_train, index_val, label_train, label_val = train_test_split(index, data_label, train_size=0.8, shuffle=True, stratify=data_label, random_state=0) data_x = np.array(data_x) data_pos = np.array(data_pos) x_train = data_x[index_train] pos_train = data_pos[index_train] x_val = data_x[index_val] pos_val = data_pos[index_val] def data_prepare(x, y, p): num_examples = len(x) xx = np.zeros([num_examples, max_sent_size], dtype=int) xx_mask = np.zeros([num_examples, max_sent_size], dtype=bool) yy = np.zeros([num_examples], dtype=int) pp = np.zeros([num_examples], dtype=int) for j in range(num_examples): for i in 
range(max_sent_size): if(i>=len(x[j])): break w = x[j][i] xx[j][i] = word2id[w] if w in word2id else word2id['UNKNOWN_TOKEN'] xx_mask[j][i] = True yy[j] = sense2id[y[j]] pp[j] = pos2id[p[j]] return xx, xx_mask, yy, pp def eval_score(yy, pred, pp, pred_pos): num_batches = int(len(yy)/batch_size) f1 = f1_score(yy[:batch_size*num_batches], pred, average='macro') accu = accuracy_score(yy[:batch_size*num_batches], pred) f1_pos = f1_score(pp[:batch_size*num_batches], pred_pos, average='macro') accu_pos = accuracy_score(pp[:batch_size*num_batches], pred_pos) return f1*100, accu*100, f1_pos*100, accu_pos*100 def model(xx, yy, mask, pp, train_cond=True, pre_train=False): num_batches = int(len(xx)/batch_size) losses = 0 preds = [] pos_preds = [] for j in range(num_batches): s = j * batch_size e = (j+1) * batch_size feed_dict = {x:xx[s:e], y:yy[s:e], y_pos:pp[s:e], x_mask:mask[s:e], is_train:train_cond, input_keep_prob:keep_prob, word_emb_mat:word_embedding} if(train_cond==True): if(pre_train==False): _, _loss, step, _summary = sess.run([train_op, total_loss, global_step, summary], feed_dict) else: _, _loss, step, _summary = sess.run([pos_optimizer, loss_pos, global_step, summary], feed_dict) summary_writer.add_summary(_summary, step) # if step%5==0: # print("Steps:{}".format(step), ", Loss: {}".format(_loss)) else: _loss, pred, pred_pos = sess.run([total_loss, predictions, predictions_pos], feed_dict) preds.append(pred) pos_preds.append(pred_pos) losses +=_loss if(train_cond==False): y_pred = [] y_pred_pos = [] for i in range(num_batches): for pred in preds[i]: y_pred.append(pred) for pred_pos in pos_preds[i]: y_pred_pos.append(pred_pos) return losses/num_batches, y_pred, y_pred_pos return losses/num_batches, step x_id_train, mask_train, y_train, pos_id_train = data_prepare(x_train, label_train, pos_train) x_id_val, mask_val, y_val, pos_id_val = data_prepare(x_val, label_val, pos_val) num_epochs = 60 pre_train_period = 1 log_period = 5 for i in range(num_epochs): random = 
np.random.choice(len(y_train), size=(len(y_train)), replace=False) x_id_train = x_id_train[random] y_train = y_train[random] mask_train = mask_train[random] pos_id_train = pos_id_train[random] if(i<pre_train_period): losses, step = model(x_id_train, y_train, mask_train, pos_id_train, pre_train=True) else: losses, step = model(x_id_train, y_train, mask_train, pos_id_train) print("Epoch:", i+1,"Step:", step, "loss:",losses) if((i+1)%log_period==0): saver.save(sess, save_path=save_dir) print("Model Saved") train_loss, train_pred, train_pred_pos = model(x_id_train, y_train, mask_train, pos_id_train, train_cond=False) f1_, accu_, f1_pos_, accu_pos_ = eval_score(y_train, train_pred, pos_id_train, train_pred_pos) print("Train: F1 : ", f1_, "Accu: ", accu_, "POS F1 : ", f1_pos_, "POS Accu: ", accu_pos_, "Loss: ", train_loss) val_loss, val_pred, val_pred_pos = model(x_id_val, y_val, mask_val, pos_id_val, train_cond=False) f1_, accu_, f1_pos_, accu_pos_ = eval_score(y_val, val_pred, pos_id_val, val_pred_pos) print("Val: F1 : ", f1_, "Accu: ", accu_, "POS F1 : ", f1_pos_, "POS Accu: ", accu_pos_, "Loss: ", val_loss) num_epochs = 10 pre_train_period = 0 log_period = 5 for i in range(num_epochs): random = np.random.choice(len(y_train), size=(len(y_train)), replace=False) x_id_train = x_id_train[random] y_train = y_train[random] mask_train = mask_train[random] pos_id_train = pos_id_train[random] if(i<pre_train_period): losses, step = model(x_id_train, y_train, mask_train, pos_id_train, pre_train=True) else: losses, step = model(x_id_train, y_train, mask_train, pos_id_train) print("Epoch:", i+1,"Step:", step, "loss:",losses) if((i+1)%log_period==0): saver.save(sess, save_path=save_dir) print("Model Saved") train_loss, train_pred, train_pred_pos = model(x_id_train, y_train, mask_train, pos_id_train, train_cond=False) f1_, accu_, f1_pos_, accu_pos_ = eval_score(y_train, train_pred, pos_id_train, train_pred_pos) print("Train: F1 : ", f1_, "Accu: ", accu_, "POS F1 : ", f1_pos_, "POS 
Accu: ", accu_pos_, "Loss: ", train_loss) val_loss, val_pred, val_pred_pos = model(x_id_val, y_val, mask_val, pos_id_val, train_cond=False) f1_, accu_, f1_pos_, accu_pos_ = eval_score(y_val, val_pred, pos_id_val, val_pred_pos) print("Val: F1 : ", f1_, "Accu: ", accu_, "POS F1 : ", f1_pos_, "POS Accu: ", accu_pos_, "Loss: ", val_loss) num_epochs = 20 pre_train_period = 0 log_period = 5 for i in range(num_epochs): random = np.random.choice(len(y_train), size=(len(y_train)), replace=False) x_id_train = x_id_train[random] y_train = y_train[random] mask_train = mask_train[random] pos_id_train = pos_id_train[random] if(i<pre_train_period): losses, step = model(x_id_train, y_train, mask_train, pos_id_train, pre_train=True) else: losses, step = model(x_id_train, y_train, mask_train, pos_id_train) print("Epoch:", i+1,"Step:", step, "loss:",losses) if((i+1)%log_period==0): saver.save(sess, save_path=save_dir) print("Model Saved") train_loss, train_pred, train_pred_pos = model(x_id_train, y_train, mask_train, pos_id_train, train_cond=False) f1_, accu_, f1_pos_, accu_pos_ = eval_score(y_train, train_pred, pos_id_train, train_pred_pos) print("Train: F1 : ", f1_, "Accu: ", accu_, "POS F1 : ", f1_pos_, "POS Accu: ", accu_pos_, "Loss: ", train_loss) val_loss, val_pred, val_pred_pos = model(x_id_val, y_val, mask_val, pos_id_val, train_cond=False) f1_, accu_, f1_pos_, accu_pos_ = eval_score(y_val, val_pred, pos_id_val, val_pred_pos) print("Val: F1 : ", f1_, "Accu: ", accu_, "POS F1 : ", f1_pos_, "POS Accu: ", accu_pos_, "Loss: ", val_loss) saver.restore(sess, save_dir) ```
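The `attention` helper in the script above tanh-squashes the hidden states kept by the mask, scores them against a learned vector, normalizes the scores, and returns the weighted sum. A dependency-free NumPy sketch of that pooling may help make the shapes concrete; the function name, the random inputs, and the normalization over the kept time steps (rather than the last axis) are my own choices for illustration, not the trained model:

```python
import numpy as np

def attention_pool(h, mask, w_att):
    """Softmax-weighted pooling over the unmasked time steps.

    h: (T, D) hidden states, mask: (T,) bool, w_att: (D, 1) score vector.
    Mirrors the tanh -> score -> normalize -> weighted-sum structure of the
    TF `attention` helper above, with the softmax taken over time steps.
    """
    h_masked = np.tanh(h[mask])        # keep only real (non-padding) steps
    scores = h_masked @ w_att          # (T_kept, 1) unnormalized scores
    a = np.exp(scores - scores.max())
    a = a / a.sum()                    # normalize over the kept time steps
    return (h_masked * a).sum(axis=0)  # (D,) context vector

rng = np.random.default_rng(0)
h = rng.normal(size=(5, 4))                        # 5 time steps, hidden size 4
mask = np.array([True, True, True, False, False])  # last 2 steps are padding
w = rng.normal(size=(4, 1))
c = attention_pool(h, mask, w)
assert c.shape == (4,)
```

The padding steps contribute nothing to the context vector, which is why the boolean mask must track the true sentence length.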
# Tensor shapes in Pyro 0.2

This tutorial introduces Pyro's organization of tensor dimensions. Before starting, you should familiarize yourself with [PyTorch broadcasting semantics](http://pytorch.org/docs/master/notes/broadcasting.html).

#### Summary:

- While you are learning or debugging, set `pyro.enable_validation(True)`.
- Tensors broadcast by aligning on the right: `torch.ones(3,4,5) + torch.ones(5)`.
- Distribution `.sample().shape == batch_shape + event_shape`.
- Distribution `.log_prob(x).shape == batch_shape` (but not `event_shape`!).
- Use `my_dist.expand_by([2,3,4])` to draw a batch of samples.
- Use `my_dist.independent(1)` to declare a dimension as dependent.
- Use `with pyro.iarange('name', size):` to declare a dimension as independent.
- All dimensions must be declared either dependent or independent.
- Try to support batching on the left. This lets Pyro auto-parallelize.
  - use negative indices like `x.sum(-1)` rather than `x.sum(2)`
  - use ellipsis notation like `pixel = image[..., i, j]`

#### Table of Contents

- [Distribution shapes](#Distributions-shapes:-batch_shape-and-event_shape)
  - [Examples](#Examples)
- [Reshaping distributions](#Reshaping-distributions)
- [It is always safe to assume dependence](#It-is-always-safe-to-assume-dependence)
- [Declaring independence with iarange](#Declaring-independent-dims-with-iarange)
- [Subsampling inside iarange](#Subsampling-tensors-inside-an-iarange)
- [Broadcasting to allow Parallel Enumeration](#Broadcasting-to-allow-parallel-enumeration)
  - [Writing parallelizable code](#Writing-parallelizable-code)

```
import os
import torch
import pyro
from torch.distributions import constraints
from pyro.distributions import Bernoulli, Categorical, MultivariateNormal, Normal
from pyro.infer import Trace_ELBO, TraceEnum_ELBO, config_enumerate
from pyro.optim import Adam

smoke_test = ('CI' in os.environ)
pyro.enable_validation(True)    # <---- This is always a good idea!
# We'll use this helper to check our models are correct.
def test_model(model, guide, loss):
    pyro.clear_param_store()
    loss.loss(model, guide)
```

## Distributions shapes: `batch_shape` and `event_shape` <a class="anchor" id="Distributions-shapes:-batch_shape-and-event_shape"></a>

PyTorch `Tensor`s have a single `.shape` attribute, but `Distribution`s have two shape attributes with special meaning: `.batch_shape` and `.event_shape`. These two combine to define the total shape of a sample

```py
x = d.sample()
assert x.shape == d.batch_shape + d.event_shape
```

Indices over `.batch_shape` denote independent random variables, whereas indices over `.event_shape` denote dependent random variables. Because the dependent random variables define probability together, the `.log_prob()` method only produces a single number for each event of shape `.event_shape`. Thus the total shape of `.log_prob()` is `.batch_shape`:

```py
assert d.log_prob(x).shape == d.batch_shape
```

Note that the `Distribution.sample()` method also takes a `sample_shape` parameter that indexes over independent identically distributed (iid) random variables, so that

```py
x2 = d.sample(sample_shape)
assert x2.shape == sample_shape + batch_shape + event_shape
```

In summary

```
      |      iid     | independent | dependent
------+--------------+-------------+------------
shape = sample_shape + batch_shape + event_shape
```

For example, univariate distributions have empty event shape (because each number is an independent event). Distributions over vectors like `MultivariateNormal` have `len(event_shape) == 1`. Distributions over matrices like `InverseWishart` have `len(event_shape) == 2`.

### Examples <a class="anchor" id="Examples"></a>

The simplest distribution shape is a single univariate distribution.

```
d = Bernoulli(0.5)
assert d.batch_shape == ()
assert d.event_shape == ()
x = d.sample()
assert x.shape == ()
assert d.log_prob(x).shape == ()
```

Distributions can be batched by passing in batched parameters.
```
d = Bernoulli(0.5 * torch.ones(3,4))
assert d.batch_shape == (3, 4)
assert d.event_shape == ()
x = d.sample()
assert x.shape == (3, 4)
assert d.log_prob(x).shape == (3, 4)
```

Another way to batch distributions is via the `.expand_by()` method. This only works if parameters are identical along the leftmost dimensions.

```
d = Bernoulli(torch.tensor([0.1, 0.2, 0.3, 0.4])).expand_by([3])
assert d.batch_shape == (3, 4)
assert d.event_shape == ()
x = d.sample()
assert x.shape == (3, 4)
assert d.log_prob(x).shape == (3, 4)
```

Multivariate distributions have nonempty `.event_shape`. For these distributions, the shapes of `.sample()` and `.log_prob(x)` differ:

```
d = MultivariateNormal(torch.zeros(3), torch.eye(3, 3))
assert d.batch_shape == ()
assert d.event_shape == (3,)
x = d.sample()
assert x.shape == (3,)            # == batch_shape + event_shape
assert d.log_prob(x).shape == ()  # == batch_shape
```

### Reshaping distributions <a class="anchor" id="Reshaping-distributions"></a>

In Pyro you can treat a univariate distribution as multivariate by calling the `.independent()` method.

```
d = Bernoulli(0.5 * torch.ones(3,4)).independent(1)
assert d.batch_shape == (3,)
assert d.event_shape == (4,)
x = d.sample()
assert x.shape == (3, 4)
assert d.log_prob(x).shape == (3,)
```

While you work with Pyro programs, keep in mind that samples have shape `batch_shape + event_shape`, whereas `.log_prob(x)` values have shape `batch_shape`. You'll need to ensure that `batch_shape` is carefully controlled by either trimming it down with `.independent(n)` or by declaring dimensions as independent via `pyro.iarange`.

### It is always safe to assume dependence <a class="anchor" id="It-is-always-safe-to-assume-dependence"></a>

Often in Pyro we'll declare some dimensions as dependent even though they are in fact independent, e.g.
```py pyro.sample("x", dist.Normal(0, 1).expand_by([10]).independent(1)) ``` This is useful for two reasons: First it allows us to easily swap in a `MultivariateNormal` distribution later. Second it simplifies the code a bit since we don't need an `iarange` (see below) as in ```py with pyro.iarange("x_iarange", 10): pyro.sample("x", dist.Normal(0, 1).expand_by([10])) ``` The difference between these two versions is that the second version with `iarange` informs Pyro that it can make use of independence information when estimating gradients, whereas in the first version Pyro must assume they are dependent (even though the normals are in fact independent). This is analogous to d-separation in graphical models: it is always safe to add edges and assume variables *may* be dependent (i.e. to widen the model class), but it is unsafe to assume independence when variables are actually dependent (i.e. narrowing the model class so the true model lies outside of the class, as in mean field). In practice Pyro's SVI inference algorithm uses reparameterized gradient estimators for `Normal` distributions so both gradient estimators have the same performance. ## Declaring independent dims with `iarange` <a class="anchor" id="Declaring-independent-dims-with-iarange"></a> Pyro models can use the context manager [pyro.iarange](http://docs.pyro.ai/en/dev/primitives.html#pyro.iarange) to declare that certain batch dimensions are independent. Inference algorithms can then take advantage of this independence to e.g. construct lower variance gradient estimators or to enumerate in linear space rather than exponential space. An example of an independent dimension is the index over data in a minibatch: each datum should be independent of all others. 
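Before turning to the `iarange` mechanics, the earlier claim that the `.independent(1)` version can be swapped for a `MultivariateNormal` has a quick numerical check: with identity covariance, the multivariate normal log density is exactly the sum of the per-dimension univariate ones. A NumPy sketch of that identity (the density formulas are written out by hand here, so no Pyro install is needed, and the test vector is arbitrary):

```python
import numpy as np

n = 10
x = np.linspace(-1.0, 1.0, n)  # one arbitrary draw of n coordinates

# Per-dimension standard-normal log densities (a "batch" of independent normals).
per_dim = -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

# Identity-covariance multivariate normal log density of the whole vector.
joint = -0.5 * (x @ x) - 0.5 * n * np.log(2 * np.pi)

# Declaring the dims dependent just sums the per-dimension terms into one number.
assert per_dim.shape == (n,)
assert np.isclose(per_dim.sum(), joint)
```

This is the numerical content of "`.log_prob()` produces a single number per event": the event's log density is the sum over its dependent coordinates.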
The simplest way to declare a dimension as independent is to declare the rightmost batch dimension as independent via a simple ```py with pyro.iarange("my_iarange"): # within this context, batch dimension -1 is independent ``` We recommend always providing an optional size argument to aid in debugging shapes ```py with pyro.iarange("my_iarange", len(my_data)): # within this context, batch dimension -1 is independent ``` Starting with Pyro 0.2 you can additionally nest `iaranges`, e.g. if you have per-pixel independence: ```py with pyro.iarange("x_axis", 320): # within this context, batch dimension -1 is independent with pyro.iarange("y_axis", 200): # within this context, batch dimensions -2 and -1 are independent ``` Note that we always count from the right by using negative indices like -2, -1. Finally if you want to mix and match `iarange`s for e.g. noise that depends only on `x`, some noise that depends only on `y`, and some noise that depends on both, you can declare multiple `iaranges` and use them as reusable context managers. In this case Pyro cannot automatically allocate a dimension, so you need to provide a `dim` argument (again counting from the right): ```py x_axis = pyro.iarange("x_axis", 3, dim=-2) y_axis = pyro.iarange("y_axis", 2, dim=-3) with x_axis: # within this context, batch dimension -2 is independent with y_axis: # within this context, batch dimension -3 is independent with x_axis, y_axis: # within this context, batch dimensions -3 and -2 are independent ``` Let's take a closer look at batch sizes within `iarange`s. 
```
def model1():
    a = pyro.sample("a", Normal(0, 1))
    b = pyro.sample("b", Normal(torch.zeros(2), 1).independent(1))
    with pyro.iarange("c_iarange", 2):
        c = pyro.sample("c", Normal(torch.zeros(2), 1))
    with pyro.iarange("d_iarange", 3):
        d = pyro.sample("d", Normal(torch.zeros(3,4,5), 1).independent(2))
    assert a.shape == ()       # batch_shape == ()    event_shape == ()
    assert b.shape == (2,)     # batch_shape == ()    event_shape == (2,)
    assert c.shape == (2,)     # batch_shape == (2,)  event_shape == ()
    assert d.shape == (3,4,5)  # batch_shape == (3,)  event_shape == (4,5)

    x_axis = pyro.iarange("x_axis", 3, dim=-2)
    y_axis = pyro.iarange("y_axis", 2, dim=-3)
    with x_axis:
        x = pyro.sample("x", Normal(0, 1).expand_by([3, 1]))
    with y_axis:
        y = pyro.sample("y", Normal(0, 1).expand_by([2, 1, 1]))
    with x_axis, y_axis:
        xy = pyro.sample("xy", Normal(0, 1).expand_by([2, 3, 1]))
        z = pyro.sample("z", Normal(0, 1).expand_by([2, 3, 1, 5]).independent(1))
    assert x.shape == (3, 1)        # batch_shape == (3,1)    event_shape == ()
    assert y.shape == (2, 1, 1)     # batch_shape == (2,1,1)  event_shape == ()
    assert xy.shape == (2, 3, 1)    # batch_shape == (2,3,1)  event_shape == ()
    assert z.shape == (2, 3, 1, 5)  # batch_shape == (2,3,1)  event_shape == (5,)

test_model(model1, model1, Trace_ELBO())
```

It is helpful to visualize the `.shape`s of each sample site by aligning them at the boundary between `batch_shape` and `event_shape`: dimensions to the right will be summed out in `.log_prob()` and dimensions to the left will remain.
```
 batch dims | event dims
-----------+-----------
           |      a = sample("a", Normal(0, 1))
          |2      b = sample("b", Normal(zeros(2), 1)
           |                      .independent(1))
           |      with iarange("c", 2):
          2|          c = sample("c", Normal(zeros(2), 1))
           |      with iarange("d", 3):
          3|4 5       d = sample("d", Normal(zeros(3,4,5), 1)
           |                     .independent(2))
           |
           |      x_axis = iarange("x", 3, dim=-2)
           |      y_axis = iarange("y", 2, dim=-3)
           |      with x_axis:
        3 1|          x = sample("x", Normal(0, 1).expand_by([3, 1]))
           |      with y_axis:
      2 1 1|          y = sample("y", Normal(0, 1).expand_by([2, 1, 1]))
           |      with x_axis, y_axis:
      2 3 1|          xy = sample("xy", Normal(0, 1).expand_by([2, 3, 1]))
      2 3 1|5         z = sample("z", Normal(0, 1).expand_by([2, 3, 1, 5])
           |                     .independent(1))
```

As an exercise, try to tabulate the shapes of sample sites in one of your own programs.

## Subsampling tensors inside an `iarange` <a class="anchor" id="Subsampling-tensors-inside-an-iarange"></a>

One of the main uses of [iarange](http://docs.pyro.ai/en/dev/primitives.html#pyro.iarange) is to subsample data. This is possible within an `iarange` because data are independent, so the expected value of the loss on, say, half the data should be half the expected loss on the full data.

To subsample data, you need to inform Pyro of both the original data size and the subsample size; Pyro will then choose a random subset of data and yield the set of indices.

```
data = torch.arange(100)

def model2():
    mean = pyro.param("mean", torch.zeros(len(data)))
    with pyro.iarange("data", len(data), subsample_size=10) as ind:
        assert len(ind) == 10   # ind is a LongTensor that indexes the subsample.
        batch = data[ind]       # Select a minibatch of data.
        mean_batch = mean[ind]  # Take care to select the relevant per-datum parameters.
# Do stuff with batch: x = pyro.sample("x", Normal(mean_batch, 1), obs=batch) assert len(x) == 10 test_model(model2, guide=lambda: None, loss=Trace_ELBO()) ``` ## Broadcasting to allow parallel enumeration <a class="anchor" id="Broadcasting-to-allow-parallel-enumeration"></a> Pyro 0.2 introduces the ability to enumerate discrete latent variables in parallel. This can significantly reduce the variance of gradient estimators when learning a posterior via [SVI](http://docs.pyro.ai/en/dev/inference_algos.html#pyro.infer.svi.SVI). To use discrete enumeration, Pyro needs to allocate tensor dimension that it can use for enumeration. To avoid conflicting with other dimensions that we want to use for `iarange`s, we need to declare a budget of the maximum number of tensor dimensions we'll use. This budget is called `max_iarange_nesting` and is an argument to [SVI](http://docs.pyro.ai/en/dev/inference_algos.html) (the argument is simply passed through to [TraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.html#pyro.infer.traceenum_elbo.TraceEnum_ELBO)). To understand `max_iarange_nesting` and how Pyro allocates dimensions for enumeration, let's revisit `model1()` from above. This time we'll map out three types of dimensions: enumeration dimensions on the left (Pyro takes control of these), batch dimensions in the middle, and event dimensions on the right. ``` max_iarange_nesting = 3 |<--->| enumeration|batch|event -----------+-----+----- |. . .| a = sample("a", Normal(0, 1)) |. . .|2 b = sample("b", Normal(zeros(2), 1) | | .independent(1)) | | with iarange("c", 2): |. . 2| c = sample("c", Normal(zeros(2), 1)) | | with iarange("d", 3): |. . 3|4 5 d = sample("d", Normal(zeros(3,4,5), 1) | | .independent(2)) | | | | x_axis = iarange("x", 3, dim=-2) | | y_axis = iarange("y", 2, dim=-3) | | with x_axis: |. 
3 1| x = sample("x", Normal(0, 1).expand_by([3,1])) | | with y_axis: |2 1 1| y = sample("y", Normal(0, 1).expand_by([2,1,1])) | | with x_axis, y_axis: |2 3 1| xy = sample("xy", Normal(0, 1).expand_by([2,3,1])) |2 3 1|5 z = sample("z", Normal(0, 1).expand_by([2,3,1,5])) | | .independent(1)) ``` Note that it is safe to overprovision `max_iarange_nesting=4` but we cannot underprovision `max_iarange_nesting=2` (or Pyro will error). Let's see how this works in practice. ``` @config_enumerate(default="parallel") def model3(): p = pyro.param("p", torch.arange(6) / 6) locs = pyro.param("locs", torch.tensor([-1., 1.])) a = pyro.sample("a", Categorical(torch.ones(6) / 6)) b = pyro.sample("b", Bernoulli(p[a])) # Note this depends on a. with pyro.iarange("c_iarange", 4): c = pyro.sample("c", Bernoulli(0.3).expand_by([4])) with pyro.iarange("d_iarange", 5): d = pyro.sample("d", Bernoulli(0.4).expand_by([5,4])) e_loc = locs[d.long()].unsqueeze(-1) e_scale = torch.arange(1, 8) e = pyro.sample("e", Normal(e_loc, e_scale) .independent(1)) # Note this depends on d. # enumerated|batch|event dims assert a.shape == ( 6, 1, 1 ) # Six enumerated values of the Categorical. assert b.shape == ( 2, 6, 1, 1 ) # 2 enumerated Bernoullis x 6 Categoricals. assert c.shape == ( 2, 1, 1, 1, 4 ) # Only 2 Bernoullis; does not depend on a or b. assert d.shape == (2, 1, 1, 1, 5, 4 ) # Only two Bernoullis. assert e.shape == (2, 1, 1, 1, 5, 4, 7) # This is sampled and depends on d. assert e_loc.shape == (2, 1, 1, 1, 5, 4, 1,) assert e_scale.shape == ( 7,) test_model(model3, model3, TraceEnum_ELBO(max_iarange_nesting=2)) ``` Let's take a closer look at those dimensions. First note that Pyro allocates enumeration dims starting from the right at `max_iarange_nesting`: Pyro allocates dim -3 to enumerate `a`, then dim -4 to enumerate `b`, then dim -5 to enumerate `c`, and finally dim -6 to enumerate `d`. Next note that variables only have extent (size > 1) in dimensions they depend on. 
This helps keep tensors small and computation cheap. We can draw a similar map of the tensor dimensions: ``` max_iarange_nesting = 2 |<->| enumeration batch event ------------|---|----- 6|1 1| a = pyro.sample("a", Categorical(torch.ones(6) / 6)) 2 1|1 1| b = pyro.sample("b", Bernoulli(p[a])) | | with pyro.iarange("c_iarange", 4): 2 1 1|1 4| c = pyro.sample("c", Bernoulli(0.3).expand_by([4])) | | with pyro.iarange("d_iarange", 5): 2 1 1 1|5 4| d = pyro.sample("d", Bernoulli(0.4).expand_by([5,4])) 2 1 1 1|5 4|1 e_loc = locs[d.long()].unsqueeze(-1) | |7 e_scale = torch.arange(1, 8) 2 1 1 1|5 4|7 e = pyro.sample("e", Normal(e_loc, e_scale) | | .independent(1)) ``` ### Writing parallelizable code <a class="anchor" id="Writing-parallelizable-code"></a> It can be tricky to write Pyro models that correctly handle parallelized sample sites. Two tricks help: [broadcasting](http://pytorch.org/docs/master/notes/broadcasting.html) and [ellipsis slicing](http://python-reference.readthedocs.io/en/latest/docs/brackets/ellipsis.html). Let's look at a contrived model to see how these work in practice. Our aim is to write a model that works both with and without enumeration. ``` width = 8 height = 10 sparse_pixels = torch.LongTensor([[3, 2], [3, 5], [3, 9], [7, 1]]) enumerated = None # set to either True or False below def fun(observe): p_x = pyro.param("p_x", torch.tensor(0.1), constraint=constraints.unit_interval) p_y = pyro.param("p_y", torch.tensor(0.1), constraint=constraints.unit_interval) x_axis = pyro.iarange('x_axis', width, dim=-2) y_axis = pyro.iarange('y_axis', height, dim=-1) # Note that the shapes of these sites depend on whether Pyro is enumerating. 
with x_axis: x_active = pyro.sample("x_active", Bernoulli(p_x).expand_by([width, 1])) with y_axis: y_active = pyro.sample("y_active", Bernoulli(p_y).expand_by([height])) if enumerated: assert x_active.shape == (2, width, 1) assert y_active.shape == (2, 1, 1, height) else: assert x_active.shape == (width, 1) assert y_active.shape == (height,) # The first trick is to broadcast. This works with or without enumeration. p = 0.1 + 0.5 * x_active * y_active if enumerated: assert p.shape == (2, 2, width, height) else: assert p.shape == (width, height) # The second trick is to index using ellipsis slicing. # This allows Pyro to add arbitrary dimensions on the left. dense_pixels = torch.zeros_like(p) for x, y in sparse_pixels: dense_pixels[..., x, y] = 1 if enumerated: assert dense_pixels.shape == (2, 2, width, height) else: assert dense_pixels.shape == (width, height) with x_axis, y_axis: if observe: pyro.sample("pixels", Bernoulli(p), obs=dense_pixels) def model4(): fun(observe=True) @config_enumerate(default="parallel") def guide4(): fun(observe=False) # Test without enumeration. enumerated = False test_model(model4, guide4, Trace_ELBO()) # Test with enumeration. enumerated = True test_model(model4, guide4, TraceEnum_ELBO(max_iarange_nesting=2)) ```
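Both tricks in `fun` above lean on right-aligned shape alignment, and NumPy shares PyTorch's broadcasting semantics, so they can be checked without any Pyro or Torch install. The shapes below are chosen to mirror the enumerated `(2, 2, width, height)` case from the model; everything else is illustrative:

```python
import numpy as np

width, height = 8, 10

# Trick 1: broadcasting. Shapes align from the right; size-1 dims stretch.
x_active = np.ones((2, width, 1))      # enumeration dim, then (width, 1)
y_active = np.ones((2, 1, 1, height))  # enumeration dim, then (height,)
p = 0.1 + 0.5 * x_active * y_active
assert p.shape == (2, 2, width, height)

# Trick 2: ellipsis slicing. `...` absorbs however many dims sit on the left,
# so the same indexing works with or without enumeration dims.
dense = np.zeros_like(p)
for x, y in [(3, 2), (3, 5), (3, 9), (7, 1)]:
    dense[..., x, y] = 1.0
assert dense[..., 3, 2].min() == 1.0   # set across every leading index
```

Because `...` counts from the right, the same line of code indexes a `(width, height)` tensor and a `(2, 2, width, height)` tensor identically, which is exactly what lets one model body serve both the enumerated and non-enumerated paths.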
# TensorFlow Core Learning Algorithms

In this notebook we will walk through 4 fundamental machine learning algorithms. We will apply each of these algorithms to unique problems and datasets before highlighting the use cases of each.

The algorithms we will focus on include:
- Linear Regression
- Classification
- Clustering
- Hidden Markov Models

It is worth noting that there are many tools within TensorFlow that could be used to solve the problems we will see below. I have chosen the tools that I believe give the most variety and are easiest to use.

## Linear Regression

Linear regression is one of the most basic forms of machine learning and is used to predict numeric values. In this tutorial we will use a linear model to predict the survival rate of passengers from the Titanic dataset.

*This section is based on the following documentation: https://www.tensorflow.org/tutorials/estimator/linear*

### How it Works

Before we dive in, I will provide a very surface-level explanation of the linear regression algorithm.

Linear regression follows a very simple concept. If data points are related linearly, we can generate a line of best fit for these points and use it to predict future values.

Let's take an example of a data set with one feature and one label.

```
import matplotlib.pyplot as plt
import numpy as np

x = [1, 2, 2.5, 3, 4]
y = [1, 4, 7, 9, 15]
plt.plot(x, y, 'ro')
plt.axis([0, 6, 0, 20])
```

We can see that this data has a linear correspondence. When the x value increases, so does the y. Because of this relation we can create a line of best fit for this dataset. In this example our line will only use one input variable, as we are working with two dimensions. In larger datasets with more features our line will have more features and inputs.

"Line of best fit refers to a line through a scatter plot of data points that best expresses the relationship between those points."

Here's a refresher on the equation of a line in 2D.
$ y = mx + b $

Here's an example of a line of best fit for this graph.

```
plt.plot(x, y, 'ro')
plt.axis([0, 6, 0, 20])
plt.plot(np.unique(x), np.poly1d(np.polyfit(x, y, 1))(np.unique(x)))
plt.show()
```

Once we've generated this line for our dataset, we can use its equation to predict future values. We just pass the features of the data point we would like to predict into the equation of the line and use the output as our prediction.

### Setup and Imports

Before we get started we must install *sklearn* and import the following modules.

```
from __future__ import absolute_import, division, print_function, unicode_literals

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import clear_output
from six.moves import urllib

import tensorflow.compat.v2.feature_column as fc
import tensorflow as tf
```

### Data

So, if you haven't realized by now, a major part of machine learning is data! In fact, it's so important that most of what we do in this tutorial will focus on exploring, cleaning and selecting appropriate data.

The dataset we will be focusing on here is the Titanic dataset. It has tons of information about each passenger on the ship. Our first step is always to understand the data and explore it. So, let's do that!

**Below we will load a dataset and learn how we can explore it using some built-in tools.**

```
# Load dataset.
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')  # training data
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')  # testing data
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
```

The ```pd.read_csv()``` method will return to us a new pandas *dataframe*. You can think of a dataframe like a table. In fact, we can actually have a look at the table representation.

We've decided to pop the "survived" column from our dataset and store it in a new variable. This column simply tells us if the person survived or not.
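For intuition about what `np.polyfit(x, y, 1)` computed above, the slope and intercept of the least-squares line can also be derived by hand; a minimal pure-Python sketch using the same toy points (illustrative only, not how the TensorFlow estimator trains):

```python
x = [1, 2, 2.5, 3, 4]
y = [1, 4, 7, 9, 15]
n = len(x)

# Closed-form least squares for a line y = m*x + b.
sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(xi * yi for xi, yi in zip(x, y))
sum_xx = sum(xi * xi for xi in x)

m = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
b = (sum_y - m * sum_x) / n  # slope m ≈ 4.7, intercept b ≈ -4.55

# Predicting a future value means plugging a new x into the line's equation.
def predict(new_x):
    return m * new_x + b

print(predict(5))  # ≈ 18.95
```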
To look at the data we'll use the ```.head()``` method from pandas. This will show us the first 5 items in our dataframe.

```
dftrain.head()
```

And if we want a more statistical analysis of our data we can use the ```.describe()``` method.

```
dftrain.describe()
```

And since we talked so much about shapes in the previous tutorial let's have a look at that too!

```
dftrain.shape
```

So we have 627 entries and 9 features, nice!

Now let's have a look at our survival information.

```
y_train.head()
```

Notice that each entry is either a 0 or 1. Can you guess which stands for survival?

**And now because visuals are always valuable let's generate a few graphs of the data.**

```
dftrain.age.hist(bins=20)

dftrain.sex.value_counts().plot(kind='barh')

dftrain['class'].value_counts().plot(kind='barh')

pd.concat([dftrain, y_train], axis=1).groupby('sex').survived.mean().plot(kind='barh').set_xlabel('% survive')
```

After analyzing this information, we should notice the following:
- Most passengers are in their 20's or 30's
- Most passengers are male
- Most passengers are in "Third" class
- Females have a much higher chance of survival

### Training vs Testing Data

You may have noticed that we loaded **two different datasets** above. This is because when we train models, we need two sets of data: **training and testing**.

The **training** data is what we feed to the model so that it can develop and learn. It is usually a much larger size than the testing data.

The **testing** data is what we use to evaluate the model and see how well it is performing. We must use a separate set of data that the model has not been trained on to evaluate it. Can you think of why this is?

Well, the point of our model is to be able to make predictions on NEW data, data that we have never seen before. If we simply test the model on the data that it has already seen we cannot measure its accuracy accurately. We can't be sure that the model hasn't simply memorized our training data.
This is why we need our testing and training data to be separate.

### Feature Columns

In our dataset we have two different kinds of information: **Categorical and Numeric**

Our **categorical data** is anything that is not numeric! For example, the sex column does not use numbers, it uses the words "male" and "female".

Before we continue and create/train a model we must convert our categorical data into numeric data. We can do this by encoding each category with an integer (ex. male = 1, female = 2). Fortunately for us TensorFlow has some tools to help!

```
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
                       'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']

feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
    vocabulary = dftrain[feature_name].unique()  # gets a list of all unique values from given feature column
    feature_columns.append(tf.feature_column.categorical_column_with_vocabulary_list(feature_name, vocabulary))

for feature_name in NUMERIC_COLUMNS:
    feature_columns.append(tf.feature_column.numeric_column(feature_name, dtype=tf.float32))

print(feature_columns)
```

Let's break this code down a little bit...

Essentially what we are doing here is creating a list of features that are used in our dataset. The cryptic lines of code inside the ```append()``` create an object that our model can use to map string values like "male" and "female" to integers. This allows us to avoid manually having to encode our dataframes.

*And here is some relevant documentation*

https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list?version=stable

### The Training Process

So, we are almost done preparing our dataset and I feel as though it's a good time to explain how our model is trained. Specifically, how input data is fed to our model.

For this specific model data is going to be streamed into it in small batches of 32.
This means we will not feed the entire dataset to our model at once, but simply small batches of entries. We will feed these batches to our model multiple times according to the number of **epochs**.

An **epoch** is simply one stream of our entire dataset. The number of epochs we define is the amount of times our model will see the entire dataset. We use multiple epochs in hope that after seeing the same data multiple times the model will better determine how to estimate it.

Ex. if we have 10 epochs, our model will see the same dataset 10 times.

Since we need to feed our data in batches and multiple times, we need to create something called an **input function**. The input function simply defines how our dataset will be converted into batches at each epoch.

### Input Function

The TensorFlow model we are going to use requires that the data we pass it comes in as a ```tf.data.Dataset``` object. This means we must create an *input function* that can convert our current pandas dataframe into that object.

Below you'll see a seemingly complicated input function; it comes straight from the TensorFlow documentation (https://www.tensorflow.org/tutorials/estimator/linear). I've commented as much as I can to make it understandable, but you may want to refer to the documentation for a detailed explanation of each method.
```
def make_input_fn(data_df, label_df, num_epochs=10, shuffle=True, batch_size=32):
    def input_function():  # inner function, this will be returned
        ds = tf.data.Dataset.from_tensor_slices((dict(data_df), label_df))  # create tf.data.Dataset object with data and its label
        if shuffle:
            ds = ds.shuffle(1000)  # randomize order of data
        ds = ds.batch(batch_size).repeat(num_epochs)  # split dataset into batches of 32 and repeat process for number of epochs
        return ds  # return a batch of the dataset
    return input_function  # return a function object for use

train_input_fn = make_input_fn(dftrain, y_train)  # here we will call the input_function that was returned to us to get a dataset object we can feed to the model
eval_input_fn = make_input_fn(dfeval, y_eval, num_epochs=1, shuffle=False)
```

### Creating the Model

In this tutorial we are going to use a linear estimator to utilize the linear regression algorithm.

Creating one is pretty easy! Have a look below.

```
linear_est = tf.estimator.LinearClassifier(feature_columns=feature_columns)
# We create a linear estimator by passing the feature columns we created earlier
```

### Training the Model

Training the model is as easy as passing the input functions that we created earlier.

```
linear_est.train(train_input_fn)  # train
result = linear_est.evaluate(eval_input_fn)  # get model metrics/stats by testing on testing data

clear_output()  # clears console output
print(result['accuracy'])  # the result variable is simply a dict of stats about our model
```

And now we have a model with a 74% accuracy (this will change each time)! Not crazy impressive but decent for our first try.

Now let's see how we can actually use this model to make predictions.

We can use the ```.predict()``` method to get survival probabilities from the model. This method will return a list of dicts that store a prediction for each of the entries in our testing data set. Below we've used some pandas magic to plot a nice graph of the predictions.
As you can see the survival rate is not very high :/

```
pred_dicts = list(linear_est.predict(eval_input_fn))
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])

probs.plot(kind='hist', bins=20, title='predicted probabilities')
```

That's it for linear regression! Now onto classification.

## Classification

Now that we've covered linear regression it is time to talk about classification. Where regression was used to predict a numeric value, classification is used to separate data points into classes of different labels. In this example we will use a TensorFlow estimator to classify flowers.

Since we've touched on how estimators work earlier, I'll go a bit quicker through this example.

This section is based on the following guide from the TensorFlow website.
https://www.tensorflow.org/tutorials/estimator/premade

### Imports and Setup

```
from __future__ import absolute_import, division, print_function, unicode_literals

import tensorflow as tf
import pandas as pd
```

### Dataset

This specific dataset separates flowers into 3 different classes of species.
- Setosa
- Versicolor
- Virginica

The information about each flower is the following.
- sepal length
- sepal width
- petal length
- petal width

```
CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Species']
SPECIES = ['Setosa', 'Versicolor', 'Virginica']
# Lets define some constants to help us later on

train_path = tf.keras.utils.get_file(
    "iris_training.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv")
test_path = tf.keras.utils.get_file(
    "iris_test.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv")

train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0)
test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0)
# Here we use keras (a module inside of TensorFlow) to grab our datasets and read them into a pandas dataframe
```

Let's have a look at our data.
```
train.head()
```

Now we can pop the species column off and use that as our label.

```
train_y = train.pop('Species')
test_y = test.pop('Species')
train.head()  # the species column is now gone

train.shape  # we have 120 entries with 4 features
```

### Input Function

Remember that nasty input function we created earlier. Well we need to make another one here! Fortunately for us this one is a little easier to digest.

```
def input_fn(features, labels, training=True, batch_size=256):
    # Convert the inputs to a Dataset.
    dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))

    # Shuffle and repeat if you are in training mode.
    if training:
        dataset = dataset.shuffle(1000).repeat()

    return dataset.batch(batch_size)
```

### Feature Columns

And you didn't think we forgot about the feature columns, did you?

```
# Feature columns describe how to use the input.
my_feature_columns = []
for key in train.keys():
    my_feature_columns.append(tf.feature_column.numeric_column(key=key))
print(my_feature_columns)
```

### Building the Model

And now we are ready to choose a model. For classification tasks there is a variety of different estimators/models that we can pick from. Some options are listed below.
- ```DNNClassifier``` (Deep Neural Network)
- ```LinearClassifier```

We can choose either model but the DNN seems to be the best choice. This is because we may not be able to find a linear correspondence in our data.

So let's build a model!

```
# Build a DNN with 2 hidden layers with 30 and 10 hidden nodes each.
classifier = tf.estimator.DNNClassifier(
    feature_columns=my_feature_columns,
    # Two hidden layers of 30 and 10 nodes respectively.
    hidden_units=[30, 10],
    # The model must choose between 3 classes.
    n_classes=3)
```

What we've just done is created a deep neural network that has two hidden layers. These layers have 30 and 10 neurons respectively. This is the number of neurons the TensorFlow official tutorial uses so we'll stick with it.
However, it is worth mentioning that the number of hidden neurons is an arbitrary number and many experiments and tests are usually done to determine the best choice for these values. Try playing around with the number of hidden neurons and see if your results change.

### Training

Now it's time to train the model!

```
classifier.train(
    input_fn=lambda: input_fn(train, train_y, training=True),
    steps=5000)
# We include a lambda to avoid creating an inner function previously
```

The only thing to explain here is the **steps** argument. This simply tells the classifier to run for 5000 steps. Try modifying this and seeing if your results change. Keep in mind that more is not always better.

### Evaluation

Now let's see how this trained model does!

```
eval_result = classifier.evaluate(
    input_fn=lambda: input_fn(test, test_y, training=False))

print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result))
```

Notice this time we didn't specify the number of steps. This is because during evaluation the model will only look at the testing data one time.

### Predictions

Now that we have a trained model it's time to use it to make predictions. I've written a little script below that allows you to type the features of a flower and see a prediction for its class.

```
def input_fn(features, batch_size=256):
    # Convert the inputs to a Dataset without labels.
    return tf.data.Dataset.from_tensor_slices(dict(features)).batch(batch_size)

features = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']
predict = {}

print("Please type numeric values as prompted.")
for feature in features:
    valid = False
    while not valid:  # keep asking until the input is numeric
        val = input(feature + ": ")
        if val.replace('.', '', 1).isdigit():
            valid = True
    predict[feature] = [float(val)]

predictions = classifier.predict(input_fn=lambda: input_fn(predict))
for pred_dict in predictions:
    class_id = pred_dict['class_ids'][0]
    probability = pred_dict['probabilities'][class_id]

    print('Prediction is "{}" ({:.1f}%)'.format(
        SPECIES[class_id], 100 * probability))

# Here is some example input and expected classes you can try above
expected = ['Setosa', 'Versicolor', 'Virginica']
predict_x = {
    'SepalLength': [5.1, 5.9, 6.9],
    'SepalWidth': [3.3, 3.0, 3.1],
    'PetalLength': [1.7, 4.2, 5.4],
    'PetalWidth': [0.5, 1.5, 2.1],
}
```

And that's pretty much it for classification!

## Clustering

Now that we've covered regression and classification it's time to talk about clustering data!

Clustering is a Machine Learning technique that involves the grouping of data points. In theory, data points that are in the same group should have similar properties and/or features, while data points in different groups should have highly dissimilar properties and/or features.

Unfortunately there are issues with the current version of TensorFlow and the implementation for KMeans. This means we cannot use KMeans without writing the algorithm from scratch. We aren't quite at that level yet, so we'll just explain the basics of clustering for now.

#### Basic Algorithm for K-Means.
- Step 1: Randomly pick K points to place K centroids
- Step 2: Assign all the data points to the centroids by distance. The closest centroid to a point is the one it is assigned to.
- Step 3: Average all the points belonging to each centroid to find the middle of those clusters (center of mass). Place the corresponding centroids into that position.
- Step 4: Reassign every point once again to the closest centroid.
- Step 5: Repeat steps 3-4 until no point changes which centroid it belongs to.

```
import numpy as np
import tensorflow as tf

num_points = 100
dimensions = 2
points = np.random.uniform(0, 1000, [num_points, dimensions])

def input_fn():
    return tf.compat.v1.train.limit_epochs(
        tf.convert_to_tensor(points, dtype=tf.float32), num_epochs=1)

num_clusters = 5
kmeans = tf.compat.v1.estimator.experimental.KMeans(
    num_clusters=num_clusters, use_mini_batch=False)

# train
num_iterations = 10
previous_centers = None
for _ in range(num_iterations):
    kmeans.train(input_fn)
    cluster_centers = kmeans.cluster_centers()
    if previous_centers is not None:
        print('delta:', cluster_centers - previous_centers)
    previous_centers = cluster_centers
    print('score:', kmeans.score(input_fn))
print('cluster centers:', cluster_centers)

# map the input points to their clusters
cluster_indices = list(kmeans.predict_cluster_index(input_fn))
for i, point in enumerate(points):
    cluster_index = cluster_indices[i]
    center = cluster_centers[cluster_index]
    print('point:', point, 'is in cluster', cluster_index, 'centered at', center)

kmeans.cluster_centers()
```

## Hidden Markov Models

"The Hidden Markov Model is a finite set of states, each of which is associated with a (generally multidimensional) probability distribution. Transitions among the states are governed by a set of probabilities called transition probabilities." [HMM](http://jedlik.phy.bme.hu/~gerjanos/HMM/node4.html)

A hidden Markov model works with probabilities to predict future events or states. In this section we will learn how to create a hidden Markov model that can predict the weather.

*This section is based on the following [TensorFlow tutorial](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/HiddenMarkovModel).*

### Data

Let's start by discussing the type of data we use when we work with a hidden Markov model.
In the previous sections we worked with large datasets of 100's of different entries. For a Markov model we are only interested in probability distributions that have to do with states.

We can find these probabilities from large datasets or may already have these values. We'll run through an example in a second that should clear some things up, but let's discuss the components of a Markov model.

**States:** In each Markov model we have a finite set of states. These states could be something like "warm" and "cold" or "high" and "low" or even "red", "green" and "blue". These states are "hidden" within the model, which means we do not directly observe them.

**Observations:** Each state has a particular outcome or observation associated with it based on a probability distribution. An example of this is the following: *On a hot day Tim has an 80% chance of being happy and a 20% chance of being sad.*

**Transitions:** Each state will have a probability defining the likelihood of transitioning to a different state. An example is the following: *a cold day has a 30% chance of being followed by a hot day and a 70% chance of being followed by another cold day.*

To create a hidden Markov model we need:
- States
- Observation Distribution
- Transition Distribution

For our purpose we will assume we already have this information available as we attempt to predict the weather on a given day.

### Imports and Setup

Due to a version mismatch with tensorflow v2 and tensorflow_probability we need to install the most recent version of tensorflow_probability (see below).

```
!pip install --upgrade tensorflow-probability

tf.__version__

import tensorflow_probability as tfp  # We are using a different module from tensorflow this time
import tensorflow as tf
```

### Weather Model

Taken directly from the [TensorFlow documentation](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/HiddenMarkovModel).
We will model a simple weather system and try to predict the temperature on each day given the following information.
1. Cold days are encoded by a 0 and hot days are encoded by a 1.
2. The first day in our sequence has an 80% chance of being cold.
3. A cold day has a 30% chance of being followed by a hot day.
4. A hot day has a 20% chance of being followed by a cold day.
5. On each day the temperature is normally distributed with mean and standard deviation 0 and 5 on a cold day and mean and standard deviation 15 and 10 on a hot day.

If you're unfamiliar with **standard deviation** it can be put simply as the range of expected values. In this example, on a hot day the average temperature is 15 and ranges from 5 to 25.

To model this in TensorFlow we will do the following.

```
tfd = tfp.distributions  # making a shortcut for later on
initial_distribution = tfd.Categorical(probs=[0.2, 0.8])  # Refer to point 2 above
transition_distribution = tfd.Categorical(probs=[[0.5, 0.5],
                                                 [0.2, 0.8]])  # refer to points 3 and 4 above
observation_distribution = tfd.Normal(loc=[0., 15.], scale=[5., 10.])  # refer to point 5 above

# the loc argument represents the mean and the scale is the standard deviation
```

We've now created distribution variables to model our system and it's time to create the hidden Markov model.

```
model = tfd.HiddenMarkovModel(
    initial_distribution=initial_distribution,
    transition_distribution=transition_distribution,
    observation_distribution=observation_distribution,
    num_steps=7)
```

The number of steps represents the number of days that we would like to predict information for. In this case we've chosen 7, an entire week.

To get the **expected temperatures** on each day we can do the following.
```
mean = model.mean()

# due to the way TensorFlow works on a lower level we need to evaluate part of the graph
# from within a session to see the value of this tensor

# in the new version of tensorflow we need to use tf.compat.v1.Session() rather than just tf.Session()
with tf.compat.v1.Session() as sess:
    print(mean.numpy())
```

## Conclusion

So that's it for the core learning algorithms in TensorFlow. Hopefully you've learned about a few interesting tools that are easy to use! To practice I'd encourage you to try out some of these algorithms on different datasets.
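As a rough sanity check on what `model.mean()` returns, the expected temperature for each day can be computed by hand from the distributions defined above: propagate the state probabilities through the transition matrix one day at a time, and take the probability-weighted mean of the two states' temperatures each day. A minimal pure-Python sketch, using the same `probs` ordering passed to `tfd.Categorical` and the same `loc` values passed to `tfd.Normal`:

```python
initial = [0.2, 0.8]            # initial state probabilities, as in the code above
transition = [[0.5, 0.5],       # row i: distribution over the next day's state
              [0.2, 0.8]]
means = [0.0, 15.0]             # mean temperature of each state

probs = initial
expected = []
for _ in range(7):              # num_steps = 7, one week
    expected.append(probs[0] * means[0] + probs[1] * means[1])
    # Marginal state distribution for the next day: probs @ transition.
    probs = [probs[0] * transition[0][0] + probs[1] * transition[1][0],
             probs[0] * transition[0][1] + probs[1] * transition[1][1]]

print([round(e, 4) for e in expected])  # starts 12.0, 11.1 and converges toward ~10.71
```

The sequence converges toward the stationary distribution's expected temperature, which is why the later days barely change.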
```
from IPython.display import HTML

# Cell visibility - COMPLETE:
#tag = HTML('''<style>
#div.input {
#    display:none;
#}
#</style>''')
#display(tag)

# Cell visibility - TOGGLE:
tag = HTML('''<script>
code_show=true;
function code_toggle() {
    if (code_show){
        $('div.input').hide()
    } else {
        $('div.input').show()
    }
    code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<p style="text-align:right">
Toggle cell visibility <a href="javascript:code_toggle()">here</a>.</p>''')
display(tag)
```

## Matrix Operations

This example provides insight into the basic matrix operations: addition, subtraction, multiplication, transposition and inversion. First define the size of matrices $A$ and $B$ (up to and including $3\times3$), then set their elements using the input fields. When the matrices are initialized, the input fields are pre-filled with random element values. For the matrices so defined, the desired operation is carried out with the corresponding button, and the result is also displayed in mathematical notation. Of course, not every operation is valid for an arbitrary pair of matrices; for example, matrix multiplication requires that the number of columns of the first matrix ($A$) equals the number of rows of the second matrix ($B$).

```
from ipywidgets import interactive, Button, HBox, VBox, GridBox, Layout, Text, Output, FloatText
import ipywidgets as widgets
from IPython.display import Markdown
import matplotlib.pyplot as plt
import numpy as np
import math
import random
from IPython.display import HTML
from IPython.display import display

style={'description_width': 'initial'}

cols_A = widgets.Dropdown(
    options=['1', '2', '3'],
    value='2',
    description='Št. stoplcev v matriki $A$:',
    disabled=False,
    style=style
)
rows_A = widgets.Dropdown(
    options=['1', '2', '3'],
    value='2',
    description='Št. vrstic v matriki $A$:',
    disabled=False,
    style=style
)
cols_B = widgets.Dropdown(
    options=['1', '2', '3'],
    value='2',
    description='Št. stoplcev v matriki $B$:',
    disabled=False,
    style=style
)
rows_B = widgets.Dropdown(
    options=['1', '2', '3'],
    value='2',
    description='Št. vrstic v matriki $B$:',
    disabled=False,
    style=style
)

left_box = VBox([cols_A, rows_A])
right_box = VBox([cols_B, rows_B])
menu_box = HBox([left_box, right_box])

button_create = widgets.Button(description="Ustvari matriki")
button_add = widgets.Button(description='Seštej', layout=widgets.Layout(margin='10px 5px 5px 5px'))
button_sub = widgets.Button(description='Odštej', layout=widgets.Layout(margin='5px'))
button_multiply = widgets.Button(description='Zmnoži', layout=widgets.Layout(margin='5px'))
button_transpose = widgets.Button(description='Transponiraj A', layout=widgets.Layout(margin='5px'))
button_inverze = widgets.Button(description='Inverz matrike A', layout=widgets.Layout(margin='5px'))
button_reset = widgets.Button(description='ponastavi vse', layout=widgets.Layout(margin='5px'))

box_layout_2 = widgets.Layout(border='solid blue', height='300px', width='40%')
box_layout_3 = widgets.Layout(border='solid black', height = '300px', width='60%')
output = widgets.Output(layout =
box_layout_2) output_info = widgets.Output(layout = box_layout_3) box_layout_1 = widgets.Layout(border='solid red', width='18%', align_items='center', margin='0px 2% 0px 0px') my_v_box = widgets.VBox([button_add, button_sub, button_multiply, button_transpose, button_inverze, button_reset], layout = box_layout_1 ) my_h_box = widgets.HBox([my_v_box,widgets.Label(value=' '), output]) first = True def on_button_create_clicked(b): global matrix_A, matrix_B, m_A, m_B global first title_A = Output() title_B = Output() row_values = [] for i in range(int(rows_A.value)): row = [FloatText(value=str(round(random.uniform(0, 10), 1)), layout=Layout(width='60px')) for i in range(int(cols_A.value))] hbox_row = HBox(children=row) row_values.append(hbox_row) m_A = VBox(children=row_values) row_values = [] for i in range(int(rows_B.value)): row = [FloatText(value=str(round(random.uniform(0, 10), 1)), layout=Layout(width='60px')) for i in range(int(cols_B.value))] hbox_row = HBox(children=row) row_values.append(hbox_row) m_B = VBox(children=row_values) if (first): output.layout.visibility = 'visible' output_info.layout.visibility = 'visible' my_h_box.layout.visibility = 'visible' first = False with output_info: output_info.clear_output() display(title_A, m_A, title_B, m_B) with title_A: print("Matrika A") with title_B: print("Matrika B") if first == False: with output: output.clear_output() def vmatrix(a): if len(a.shape) > 2: raise ValueError('bmatrix can at most display two dimensions') lines = str(a).replace('[', '').replace(']', '').splitlines() rv = [r'\begin{vmatrix}'] rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines] rv += [r'\end{vmatrix}'] return '\n'.join(rv) def get_matrices(): global matrix_A, matrix_B global m_A, m_B AA = [] for i in range(int(rows_A.value)): single_hbox = m_A.children[i]; for j in range(int(cols_A.value)): no = float(single_hbox.children[j].value); AA.append(no) BB = [] for i in range(int(rows_B.value)): single_hbox = m_B.children[i]; for j in 
range(int(cols_B.value)): no = float(single_hbox.children[j].value); BB.append(no) AA = np.array(AA).reshape(int(rows_A.value), int(cols_A.value)) BB = np.array(BB).reshape(int(rows_B.value), int(cols_B.value)) return AA, BB def on_button_add_clicked(b): A, B = get_matrices() with output: output.clear_output() display(Markdown('Matrika $A$: $%s$' % vmatrix(A))) display(Markdown('Matrika $B$: $%s$' % vmatrix(B))) nRowsA = len(A) nColsA = len(A[0]) nRowsB = len(B) nColsB = len(B[0]) if (nRowsA != nRowsB): print("Izbranih matrik ni možno sešteti") elif (nColsA != nColsB): print("Izbranih matrik ni možno sešteti") else: EE = np.add(A, B) display(Markdown('$A + B = $$%s$' % vmatrix(EE))) def on_button_sub_clicked(b): A, B = get_matrices() with output: output.clear_output() display(Markdown('Matrika $A$: $%s$' % vmatrix(A))) display(Markdown('Matrika $B$: $%s$' % vmatrix(B))) nRowsA = len(A) nColsA = len(A[0]) nRowsB = len(B) nColsB = len(B[0]) if (nRowsA != nRowsB): print("Izbranih matrik ni možno odšteti") elif (nColsA != nColsB): print("Izbranih matrik ni možno odšteti") else: EE = np.subtract(A, B) display(Markdown('$A - B =$ $%s$' % vmatrix(EE))) def on_button_multiply_clicked(b): A, B = get_matrices() with output: output.clear_output() display(Markdown('Matrika $A$: $%s$' % vmatrix(A))) display(Markdown('Matrika $B$: $%s$' % vmatrix(B))) try: EE = np.matmul(A, B) except: print("Izbranih matrik ni možno zmnožiti") pass else: display(Markdown('$AB = $$%s$' % vmatrix(EE))) def on_button_transpose_clicked(b): A, B = get_matrices() with output: output.clear_output() display(Markdown('Matrika $A$: $%s$' % vmatrix(A))) display(Markdown('Transponirana matrika $A$: $%s$' % vmatrix(A.T))) def on_button_inverze_clicked(b): A, B = get_matrices() with output: output.clear_output() display(Markdown('Matrika $A$: $%s$' % vmatrix(A))) try: inverse = np.linalg.inv(A) except np.linalg.LinAlgError: print("Inverz matrike A ne obstaja") pass else: display(Markdown('Inverz matrike $A$: 
$%s$' % vmatrix(inverse))) def on_button_reset_clicked(b): global first first = True output.layout.visibility = 'hidden' output_info.layout.visibility = 'hidden' my_h_box.layout.visibility = 'hidden' cols_A.value = '2' cols_B.value = '2' rows_A.value = '2' rows_B.value = '2' button_create.on_click(on_button_create_clicked) button_add.on_click(on_button_add_clicked) button_sub.on_click(on_button_sub_clicked) button_multiply.on_click(on_button_multiply_clicked) button_transpose.on_click(on_button_transpose_clicked) button_inverze.on_click(on_button_inverze_clicked) button_reset.on_click(on_button_reset_clicked) display(menu_box, button_create) display(output_info) display(my_h_box) my_h_box.layout.visibility = 'hidden' output_info.layout.visibility = 'hidden' ```
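The cells above render results through a `vmatrix` helper defined earlier in the notebook. For context, a minimal sketch of what such a helper might look like — hypothetical, and not necessarily the notebook's exact implementation:

```python
import numpy as np

def vmatrix(M):
    """Render a 2-D array as a LaTeX vmatrix string (hypothetical sketch)."""
    rows = [" & ".join(f"{float(v):g}" for v in row) for row in np.atleast_2d(M)]
    return r"\begin{vmatrix}" + r" \\ ".join(rows) + r"\end{vmatrix}"

print(vmatrix(np.array([[1, 2], [3, 4]])))
```

Wrapped in `Markdown('$%s$' % vmatrix(A))`, a string like this is rendered by MathJax as a matrix with vertical bars.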
# Self-Driving Car Engineer Nanodegree ## Project: **Finding Lane Lines on the Road** *** In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below. Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right. In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [writeup template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the IPython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project. --- Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image. **Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".** --- **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. 
You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.** --- <figure> <img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p> </figcaption> </figure> <p></p> <figure> <img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" /> <figcaption> <p></p> <p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p> </figcaption> </figure> **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. 
Also, consult the forums for more troubleshooting tips.** ## Import Packages ``` #importing some useful packages import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import cv2 %matplotlib inline ``` ## Read in an Image ``` #reading in an image image = mpimg.imread('test_images/solidWhiteRight.jpg') #printing out some stats and plotting print('This image is:', type(image), 'with dimensions:', image.shape) plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray') ``` ## Ideas for Lane Detection Pipeline **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:** `cv2.inRange()` for color selection `cv2.fillPoly()` for regions selection `cv2.line()` to draw lines on an image given endpoints `cv2.addWeighted()` to coadd / overlay two images `cv2.cvtColor()` to grayscale or change color `cv2.imwrite()` to output images to file `cv2.bitwise_and()` to apply a mask to an image **Check out the OpenCV documentation to learn about these and discover even more awesome functionality!** ## Helper Functions Below are some helper functions to help get you started. They should look familiar from the lesson! 
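To illustrate the color-selection idea behind `cv2.inRange()` and `cv2.bitwise_and()`, here is a NumPy-only sketch — the threshold is illustrative, not tuned for the project images:

```python
import numpy as np

def select_white(img, lo=200):
    """Keep only near-white pixels; everything else goes black.
    Equivalent in spirit to cv2.inRange followed by cv2.bitwise_and."""
    mask = np.all(img >= lo, axis=-1)   # True where R, G and B all exceed lo
    out = np.zeros_like(img)
    out[mask] = img[mask]               # copy through only the selected pixels
    return out

# tiny 1x2 "image": one white pixel, one grey pixel
img = np.array([[[255, 255, 255], [100, 100, 100]]], dtype=np.uint8)
print(select_white(img)[0, 1])          # the grey pixel is masked to black
```

Lane markings are white or yellow, so a selection like this (with a second range for yellow, ideally in HSV/HSL space) makes the later edge-detection steps much less noisy.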
``` import math def grayscale(img): """Applies the Grayscale transform This will return an image with only one color channel but NOTE: to see the returned image as grayscale (assuming your grayscaled image is called 'gray') you should call plt.imshow(gray, cmap='gray')""" return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Or use BGR2GRAY if you read an image with cv2.imread() # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def canny(img, low_threshold, high_threshold): """Applies the Canny transform""" return cv2.Canny(img, low_threshold, high_threshold) def gaussian_blur(img, kernel_size): """Applies a Gaussian Noise kernel""" return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0) def region_of_interest(img, vertices): """ Applies an image mask. Only keeps the region of the image defined by the polygon formed from `vertices`. The rest of the image is set to black. `vertices` should be a numpy array of integer points. """ #defining a blank mask to start with mask = np.zeros_like(img) #defining a 3 channel or 1 channel color to fill the mask with depending on the input image if len(img.shape) > 2: channel_count = img.shape[2] # i.e. 3 or 4 depending on your image ignore_mask_color = (255,) * channel_count else: ignore_mask_color = 255 #filling pixels inside the polygon defined by "vertices" with the fill color cv2.fillPoly(mask, vertices, ignore_mask_color) #returning the image only where mask pixels are nonzero masked_image = cv2.bitwise_and(img, mask) return masked_image def draw_lines(img, lines, color=[255, 0, 0], thickness=2): """ NOTE: this is the function you might want to use as a starting point once you want to average/extrapolate the line segments you detect to map out the full extent of the lane (going from the result shown in raw-lines-example.mp4 to that shown in P1_example.mp4). Think about things like separating line segments by their slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left line vs. the right line. 
Then, you can average the position of each of the lines and extrapolate to the top and bottom of the lane. This function draws `lines` with `color` and `thickness`. Lines are drawn on the image inplace (mutates the image). If you want to make the lines semi-transparent, think about combining this function with the weighted_img() function below """ for line in lines: for x1,y1,x2,y2 in line: cv2.line(img, (x1, y1), (x2, y2), color, thickness) def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap): """ `img` should be the output of a Canny transform. Returns an image with hough lines drawn. """ lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap) line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8) draw_lines(line_img, lines) return line_img # Python 3 has support for cool math symbols. def weighted_img(img, initial_img, α=0.8, β=1., γ=0.): """ `img` is the output of the hough_lines(), An image with lines drawn on it. Should be a blank image (all black) with lines drawn on it. `initial_img` should be the image before any processing. The result image is computed as follows: initial_img * α + img * β + γ NOTE: initial_img and img must be the same shape! """ return cv2.addWeighted(initial_img, α, img, β, γ) ``` ## Test Images Build your pipeline to work on the images in the directory "test_images" **You should make sure your pipeline works well on these images before you try the videos.** ``` import os os.listdir("test_images/") ``` ## Build a Lane Finding Pipeline Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report. Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters. ``` # TODO: Build your pipeline that will draw lane lines on the test_images # then save them to the test_images_output directory. 
``` ## Test on Videos You know what's cooler than drawing lanes over images? Drawing lanes over video! We can test our solution on two provided videos: `solidWhiteRight.mp4` `solidYellowLeft.mp4` **Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.** **If you get an error that looks like this:** ``` NeedDownloadError: Need ffmpeg exe. You can download it by calling: imageio.plugins.ffmpeg.download() ``` **Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.** ``` # Import everything needed to edit/save/watch video clips from moviepy.editor import VideoFileClip from IPython.display import HTML def process_image(image): # NOTE: The output you return should be a color image (3 channel) for processing video below # TODO: put your pipeline here, # you should return the final output (image where lines are drawn on lanes) return result ``` Let's try the one with the solid white lane on the right first ... ``` white_output = 'test_videos_output/solidWhiteRight.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5) clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4") white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!! 
%time white_clip.write_videofile(white_output, audio=False) ``` Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice. ``` HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(white_output)) ``` ## Improve the draw_lines() function **At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".** **Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.** Now for the one with the solid yellow lane on the left. This one's more tricky! 
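Before tackling it, here is one way the averaging/extrapolation idea described above can be sketched with NumPy: fit a mean slope and intercept over the segments on one side, then evaluate that line at the bottom of the image and the top of the region of interest. A simplified sketch, not the full project solution:

```python
import numpy as np

def average_line(segments, y_bottom, y_top):
    """Average a list of (x1, y1, x2, y2) segments into one line and
    return its endpoints at y_bottom and y_top."""
    slopes, intercepts = [], []
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue                     # skip vertical segments
        m = (y2 - y1) / (x2 - x1)
        slopes.append(m)
        intercepts.append(y1 - m * x1)
    m, b = np.mean(slopes), np.mean(intercepts)
    # x = (y - b) / m gives the x coordinate at each extrapolated y
    return (int((y_bottom - b) / m), y_bottom, int((y_top - b) / m), y_top)

# two roughly-parallel left-lane segments (negative slope in image coordinates)
left = [(100, 540, 200, 440), (120, 520, 220, 420)]
print(average_line(left, y_bottom=540, y_top=320))   # (100, 540, 320, 320)
```

In `draw_lines()` you would first split the Hough segments into left/right groups by slope sign before averaging each group separately.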
``` yellow_output = 'test_videos_output/solidYellowLeft.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5) clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4') yellow_clip = clip2.fl_image(process_image) %time yellow_clip.write_videofile(yellow_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(yellow_output)) ``` ## Writeup and Submission If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this IPython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file. ## Optional Challenge Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! 
``` challenge_output = 'test_videos_output/challenge.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5) clip3 = VideoFileClip('test_videos/challenge.mp4') challenge_clip = clip3.fl_image(process_image) %time challenge_clip.write_videofile(challenge_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(challenge_output)) ```
``` # Importing Modules import pandas as pd from glob import glob import numpy as np from PIL import Image import torch data_file = glob('Dataset\\*')[0] data_file # Class Names # 0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral. # Reading the data df = pd.read_csv(data_file) df.head() mystring = df['pixels'].tolist()[0] mylist = mystring.split(' ') myarray = np.array([mylist], dtype = np.uint8) reshaped_array = myarray.reshape((48,48)) new_image = Image.fromarray(reshaped_array) new_image.show() def convert_str_pixels_to_array(pixel_str): return np.array(pixel_str.split(' '), dtype = np.uint8) adf = df.head().copy() adf['img_arr'] = adf.apply(lambda x : convert_str_pixels_to_array(x['pixels']), axis = 1) adf['img_arr'].dtypes from torch.utils.data import Dataset import numpy as np # Borrowed from a medium article class FER2013Dataset(Dataset): """Face Expression Recognition Dataset""" def __init__(self, file_path): """ Args: file_path (string): Path to the csv file with emotion, pixel & usage. 
""" self.file_path = file_path self.classes = ('angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral') # Define the name of classes / expression with open(self.file_path) as f: # load how many row / image the data contains self.total_images = len(f.readlines()) - 1 #reduce 1 for row of column def __len__(self): # to return total images when call `len(dataset)` return self.total_images def __getitem__(self, idx): # to return image and emotion when call `dataset[idx]` if torch.is_tensor(idx): idx = idx.tolist() with open(self.file_path) as f: # read all the csv using readlines emotion, img, usage = f.readlines()[idx + 1].split(",") #plus 1 to skip first row (column name) emotion = int(emotion) # just make sure it is int not str img = img.split(" ") # because the pixels are seperated by space img = np.array(img, 'int') # just make sure it is int not str img = img.reshape(48,48) # change shape from 2304 to 48 * 48 sample = img, emotion return sample train_file = 'Dataset/Training.csv' val_file = 'Dataset/PrivateTest.csv' test_file = 'Dataset/PublicTest.csv' train_dataset = FER2013Dataset(file_path = train_file) val_dataset = FER2013Dataset(file_path = val_file) test_dataset = FER2013Dataset(file_path = test_file) train_loader = torch.utils.data.DataLoader(train_dataset, batch_size = 1, shuffle = True) val_loader = torch.utils.data.DataLoader(val_dataset, batch_size = 1, shuffle = True) test_loader = torch.utils.data.DataLoader(test_dataset, batch_size = 1, shuffle = True) def train(epoch): # Initializing or setting to the training method # model.train() for batch_idx, (data, target) in enumerate(train_loader): # Converting the tensor to be calculated on GPU if available data, target = data, target print(data) print(target) break train(1) ```
``` spark = SparkSession.builder \ .master("local") \ .appName("Natural Language Processing") \ .config("spark.executor.memory", "6gb") \ .getOrCreate() df = spark.read.format('com.databricks.spark.csv')\ .options(header='true', inferschema='true')\ .load('TherapyBotSession.csv') df.show() df = df.select('id', 'label', 'chat') df.show() df.groupBy("label") \ .count() \ .orderBy("count", ascending = False) \ .show() import pyspark.sql.functions as F df = df.withColumn('word_count',F.size(F.split(F.col('chat'),' '))) df.show() df.groupBy('label')\ .agg(F.avg('word_count').alias('avg_word_count'))\ .orderBy('avg_word_count', ascending = False) \ .show() df_plot = df.select('id', 'word_count').toPandas() import matplotlib.pyplot as plt %matplotlib inline df_plot.set_index('id', inplace=True) df_plot.plot(kind='bar', figsize=(16, 6)) plt.ylabel('Word Count') plt.title('Word Count distribution') plt.show() from textblob import TextBlob def sentiment_score(chat): return TextBlob(chat).sentiment.polarity from pyspark.sql.types import FloatType sentiment_score_udf = F.udf(lambda x: sentiment_score(x), FloatType()) df = df.select('id', 'label', 'chat','word_count', sentiment_score_udf('chat').alias('sentiment_score')) df.show() df.groupBy('label')\ .agg(F.avg('sentiment_score').alias('avg_sentiment_score'))\ .orderBy('avg_sentiment_score', ascending = False) \ .show() df = df.withColumn('words',F.split(F.col('chat'),' ')) df.show() stop_words = ['i','me','my','myself','we','our','ours','ourselves', 'you','your','yours','yourself','yourselves','he','him', 'his','himself','she','her','hers','herself','it','its', 'itself','they','them','their','theirs','themselves', 'what','which','who','whom','this','that','these','those', 'am','is','are','was','were','be','been','being','have', 'has','had','having','do','does','did','doing','a','an', 'the','and','but','if','or','because','as','until','while', 'of','at','by','for','with','about','against','between', 
'into','through','during','before','after','above','below', 'to','from','up','down','in','out','on','off','over','under', 'again','further','then','once','here','there','when','where', 'why','how','all','any','both','each','few','more','most', 'other','some','such','no','nor','not','only','own','same', 'so','than','too','very','can','will','just','don','should','now'] from pyspark.ml.feature import StopWordsRemover stopwordsRemovalFeature = StopWordsRemover(inputCol="words", outputCol="words without stop").setStopWords(stop_words) from pyspark.ml import Pipeline stopWordRemovalPipeline = Pipeline(stages=[stopwordsRemovalFeature]) pipelineFitRemoveStopWords = stopWordRemovalPipeline.fit(df) df = pipelineFitRemoveStopWords.transform(df) df.select('words', 'words without stop').show(5) label = F.udf(lambda x: 1.0 if x == 'escalate' else 0.0, FloatType()) df = df.withColumn('label', label('label')) df.select('label').show() import pyspark.ml.feature as feat TF_ = feat.HashingTF(inputCol="words without stop", outputCol="rawFeatures", numFeatures=100000) IDF_ = feat.IDF(inputCol="rawFeatures", outputCol="features") pipelineTFIDF = Pipeline(stages=[TF_, IDF_]) pipelineFit = pipelineTFIDF.fit(df) df = pipelineFit.transform(df) df.select('label', 'rawFeatures','features').show() (trainingDF, testDF) = df.randomSplit([0.75, 0.25], seed = 1234) from pyspark.ml.classification import LogisticRegression logreg = LogisticRegression(regParam=0.025) logregModel = logreg.fit(trainingDF) predictionDF = logregModel.transform(testDF) predictionDF.select('label', 'probability', 'prediction').show() predictionDF.crosstab('label', 'prediction').show() from sklearn import metrics actual = predictionDF.select('label').toPandas() predicted = predictionDF.select('prediction').toPandas() print('accuracy score: {}%'.format(round(metrics.accuracy_score(actual, predicted),3)*100)) from pyspark.ml.evaluation import BinaryClassificationEvaluator scores = predictionDF.select('label', 
'rawPrediction') evaluator = BinaryClassificationEvaluator() print('The ROC score is {}%'.format(round(evaluator.evaluate(scores),3)*100)) predictionDF.describe('label').show() ```
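For intuition, the `HashingTF` stage above maps each token into one of `numFeatures` buckets with a hash function and counts occurrences per bucket, so no vocabulary dictionary has to be kept. A NumPy sketch of the idea (using Python's built-in `hash`, not Spark's MurmurHash3):

```python
import numpy as np

def hashing_tf(tokens, num_features=16):
    """Hashing-trick term frequencies: bucket index = hash(token) % num_features."""
    vec = np.zeros(num_features)
    for tok in tokens:
        vec[hash(tok) % num_features] += 1.0
    return vec

vec = hashing_tf(["feeling", "down", "feeling"])
print(vec.sum())   # total token count is preserved: 3.0
```

The trade-off is that distinct tokens can collide in the same bucket, which is why `numFeatures` is set as high as 100,000 in the pipeline above; the IDF stage then downweights buckets that occur in many documents.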
# Visualisation In this notebook, we will further explore the different things that can be done with `interpret`. If you'd like a basic intro, see the [Interpret Intro Notebook](https://github.com/ttumiel/interpret/blob/master/nbs/Interpret-Intro.ipynb) ## Install ``` # install from PyPI !pip install interpret-pytorch # Install from github # !pip install git+https://github.com/ttumiel/interpret ``` ## Channel Visualisations ``` from interpret import OptVis, ImageParam, denorm, get_layer_names import torchvision, torch # Create a network. network = torchvision.models.googlenet(pretrained=True) # Print the layer names so that we can choose one to optimise for get_layer_names(network) ``` Perhaps we want to optimise for the layer 'inception4c/branch1/conv', we can select this layer by passing it into the class method `OptVis.from_layer`. This will create an `OptVis` object with the output of that layer as the objective. We can also choose which channel we would like to optimise for in that layer. ``` layer = 'inception4c/branch1/conv' # choose layer channel = 32 # choose channel in layer # Create an OptVis object that will create a layer objective to optimise optvis = OptVis.from_layer(network, layer=layer, channel=channel) # Parameterise input noise in colour decorrelated Fourier domain img_param = ImageParam(128, fft=True, decorrelate=True) # Create visualisation # thresh is a tuple containing the iterations at which to display the image optvis.vis(img_param, thresh=(250,500)) channel = 14 # choose channel in layer optvis = OptVis.from_layer(network, layer=layer, channel=channel) optvis.vis() # you can leave out the image parameterisation to use the default ``` ## Neuron Visualisations You can also view a single neuron in a particular channel or all of the neurons across a channel by passing in a value to `neuron`. 
``` channel = 14 # choose channel in layer neuron = 6 # Choose the center neuron optvis = OptVis.from_layer(network, layer=layer, channel=channel, neuron=neuron) optvis.vis() ``` ## Manually setting objectives We can also manually create an objective and pass it to the constructor of the `OptVis` class. By creating our own objective, we can do interesting things, like combine 2 different objectives to see how they interact: ``` from interpret.vis import LayerObjective objective32 = LayerObjective(network, layer, channel=32) objective14 = LayerObjective(network, layer, channel=14) objective = objective32 + objective14 optvis = OptVis(network, objective) optvis.vis() # And you can interpolate between them: objective = 0.75*objective32 + 0.25*objective14 optvis = OptVis(network, objective) optvis.vis() # Or you can minimise the activation by negating the objective. objective = -objective32 optvis = OptVis(network, objective) optvis.vis() ``` ## Other Objectives Additionally, you can optimise based on other objectives. An objective is just a function that returns a `loss` value to optimise. The objective can also be a class if it needs to save some state. For example the `LayerObjective` hooks into the pytorch model and grabs the output of the particular layer. It then returns the negative mean of this value as the loss (i.e. we want to maximise the activation of that particular layer.) Another objective to optimise for is `DeepDream`. This creates a dream-like effect on an input image. If you'd like to add other objectives, please make a PR! 
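The objective arithmetic used earlier (`objective32 + objective14`, `0.75*objective32`, `-objective32`) can be supported with a little operator overloading. A generic sketch of the idea — not `interpret`'s actual implementation:

```python
class Objective:
    """Wraps a function x -> loss and supports +, scaling and negation."""
    def __init__(self, fn):
        self.fn = fn

    def __call__(self, x):
        return self.fn(x)

    def __add__(self, other):
        return Objective(lambda x: self(x) + other(x))

    def __mul__(self, scale):
        return Objective(lambda x: scale * self(x))

    __rmul__ = __mul__

    def __neg__(self):
        return self * -1.0

a = Objective(lambda x: x ** 2)
b = Objective(lambda x: x + 1.0)
combined = 0.75 * a + 0.25 * b
print(combined(2.0))   # 0.75*4.0 + 0.25*3.0 = 3.75
```

Because every combination returns another `Objective`, the optimiser never needs to know whether it was handed a single layer objective or a weighted mix.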
``` # Deep Dream from interpret.vis import ImageFile # Download an image to apply attribution to !curl https://www.yourpurebredpuppy.com/dogbreeds/photos2-G/german-shepherd-05.jpg -o dog.jpg # Parameterise the image img_param = ImageFile("dog.jpg", size=256) img_param.cuda() # Deep Dream optvis = OptVis.from_dream(network, layer=layer) optvis.vis(img_param, thresh=(30,)); ``` ## Creating Objectives To create an objective, you can subclass `Objective`. This is particularly useful if you want to save some state. If you do not have any state, you can create a function that takes the network input `x` and returns some loss to minimise, and decorate it with `@Objective` so that it has all the `Objective` properties. ## Improving Visualisations Visualisations don't always play nice and sometimes you may have to change a few things around. You might try some of the following: - Add a bit of weight decay, using the `wd` parameter in `optvis.vis`. - Change the transformations. You will have to make sure the transformations operate on tensors so that the gradient can be propagated through. See `transforms.py`. This seems particularly useful for layers that are deep in the network, like the final output. - Add other regularisation terms like the L1 or L2 norm, or total variation to help reduce noise. ## Image Parameterisations We can parameterise the input image in several different ways. Naively, we can just initialise the image as random noise and feed that through the network - but this doesn't tend to work so well, often introducing noise into the visualisation. A better parameterisation is a spatial and colour decorrelated space ([See Feature Visualisation](https://distill.pub/2017/feature-visualization/)) which generates better visualisations. 
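Concretely, the spatially-decorrelated parameterisation keeps the optimisable parameters in the Fourier domain and maps them to pixel space with an inverse FFT before each forward pass. A grayscale NumPy sketch of that mapping — without the colour decorrelation or the library's exact frequency scaling:

```python
import numpy as np

def fourier_to_image(spectrum, size):
    """Map an rfft2-shaped complex spectrum to a real image in [0, 1]."""
    img = np.fft.irfft2(spectrum, s=(size, size))   # back to the spatial domain
    return 1.0 / (1.0 + np.exp(-img))               # sigmoid keeps values displayable

size = 8
rng = np.random.default_rng(0)
shape = (size, size // 2 + 1)                       # rfft2 output shape
spectrum = rng.normal(size=shape) + 1j * rng.normal(size=shape)
img = fourier_to_image(spectrum, size)
print(img.shape)   # (8, 8)
```

Gradients flow through the inverse FFT, so the optimiser updates the spectrum directly; scaling each frequency by roughly 1/frequency is what decorrelates neighbouring pixels and suppresses high-frequency noise.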
``` optvis = OptVis.from_layer(network, layer=layer, channel=channel) # Parameterise a naive pixel-based image img_param = ImageParam(128, fft=False, decorrelate=False) optvis.vis(img_param) # Use the colour and spatial decorrelated parameterisation img_param = ImageParam(128, fft=True, decorrelate=True) optvis.vis(img_param) ``` Anything that is differentiable could be used as an image parameterisation. In [Differentiable Image Parameterisations](https://distill.pub/2018/differentiable-parameterizations/) several different parameterisations are explored. Here we show the CPPN (compositional pattern producing network) that can create infinite resolution images by feeding in the xy coordinates of the image into a convolutional network and getting out RGB values at those coordinates. ``` from interpret import CPPNParam # Use a CPPN to parameterise image img_param = CPPNParam(128) # CPPN parameterisation works better without transform robustness # and uses a lower learning rate. optvis.vis(img_param, lr=0.004, transform=False); ```
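The resolution independence of the CPPN comes from feeding the normalised (x, y) coordinate of each pixel into the network and reading an RGB value out. A sketch of building that coordinate grid — the CPPN itself is omitted:

```python
import numpy as np

def coord_grid(size):
    """Return a (2, size, size) array of x and y coordinates in [-1, 1]."""
    coords = np.linspace(-1.0, 1.0, size)
    x, y = np.meshgrid(coords, coords)
    return np.stack([x, y])            # this becomes the CPPN's 2-channel input

grid = coord_grid(128)
print(grid.shape)   # (2, 128, 128)
```

To render at a higher resolution, you only regenerate the grid with a larger `size` and run the same trained network over it — the learned pattern is a continuous function of the coordinates.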
``` import numpy as np import scipy.sparse as sp import matplotlib.pyplot as plt import json from sklearn.metrics import f1_score import sys import os path = os.path.abspath(os.getcwd()) + '/../data_load' sys.path.insert(0, path) from movie_data import MovieData %matplotlib inline moviedata = MovieData(min_genre_frequency=0.05) print(moviedata.genre_labels) from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer vectorizer = TfidfVectorizer( max_df=0.7, analyzer='word', ngram_range=(1, 1), max_features=10000, stop_words='english' ) X = vectorizer.fit_transform(moviedata.plots) #vectorizer.get_feature_names() X.todense().shape from sklearn.decomposition import PCA pca = PCA(n_components=2000) XX = pca.fit_transform(X.todense()) plt.imshow(XX, aspect='auto') from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split( X.todense(), moviedata.one_hot_genres, test_size=0.4, random_state=42) XX.shape ``` ## Least Squares ``` genre_coeffs = [] scores = [] for i, label in enumerate(moviedata.genre_labels): coeffs = np.linalg.lstsq(X_train, y_train[:, i], rcond=None)[0] genre_coeffs.append(coeffs) y_pred = np.dot(X_test, coeffs) y_pred = np.array(y_pred > 0.25, dtype=int) score = f1_score(y_test[:, i], y_pred) scores.append(score) print("Genre: {}, Score: {}".format(label, score)) #np.linalg.lstsq(X_train, y_train[:, 0])[0] np.mean(scores) ``` ## Random Forest Model ``` from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import f1_score clf = RandomForestClassifier(n_estimators=25, max_features='auto', max_depth=None) clf.fit(X_train, y_train) y_guess_probs = clf.predict_proba(X_test) y_guess = clf.predict(X_test) scores = f1_score(y_test, y_guess, average=None) for item in list(zip(moviedata.genre_labels, scores)): print("{} {:.2f}".format(item[0], item[1])) print("Average {:.3f}".format(np.mean(scores))) np.mean(scores) ``` ## Logistic Regression ``` from sklearn.linear_model import LogisticRegression 
from sklearn.multiclass import OneVsRestClassifier from sklearn.svm import LinearSVC from sklearn.utils import shuffle from sklearn import preprocessing from sklearn import utils X_train, X_test, y_train, y_test = train_test_split( XX, moviedata.one_hot_genres, test_size=0.1, random_state=43 ) scores = [] for i, label in enumerate(moviedata.genre_labels): clf = LogisticRegression() clf.fit(X_train, y_train[:, i]) y_pred_probs = clf.predict_proba(X_test)[:, 1] plt.hist(y_pred_probs) y_pred = np.array(y_pred_probs > np.mean(y_pred_probs), dtype=int) score = f1_score(y_test[:, i], y_pred) scores.append(score) print("{} {:.2f}".format(label, score)) print('Mean f1 score: {:.3f}'.format(np.mean(scores))) ```
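Both models above pick the decision threshold somewhat arbitrarily (0.25 for least squares, the mean predicted probability for logistic regression). A common refinement is to sweep candidate thresholds and keep the one that maximises F1 on held-out data; a NumPy-only sketch, with F1 computed by hand to keep it self-contained:

```python
import numpy as np

def f1(y_true, y_pred):
    """Binary F1 = 2*TP / (2*TP + FP + FN)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def best_threshold(y_true, y_prob, grid=np.linspace(0.05, 0.95, 19)):
    """Return the candidate threshold with the highest F1 score."""
    scores = [f1(y_true, (y_prob > t).astype(int)) for t in grid]
    return float(grid[int(np.argmax(scores))])

y_true = np.array([0, 0, 1, 1, 1])
y_prob = np.array([0.10, 0.40, 0.35, 0.80, 0.90])
print(best_threshold(y_true, y_prob))
```

Tuning one threshold per genre this way (on a validation split, not the test split) usually lifts the mean F1 noticeably, since rare genres need lower thresholds than common ones.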
[Table of Contents](./table_of_contents.ipynb) # Discrete Bayes Filter ``` from __future__ import division, print_function %matplotlib inline #format the book import book_format book_format.set_style() ``` The Kalman filter belongs to a family of filters called *Bayesian filters*. Most textbook treatments of the Kalman filter present the Bayesian formula, perhaps show how it factors into the Kalman filter equations, but mostly keep the discussion at a very abstract level. That approach requires a fairly sophisticated understanding of several fields of mathematics, and it still leaves much of the work of understanding and forming an intuitive grasp of the situation in the hands of the reader. I will use a different way to develop the topic, to which I owe the work of Dieter Fox and Sebastian Thrun a great debt. It depends on building an intuition on how Bayesian statistics work by tracking an object through a hallway - they use a robot, I use a dog. I like dogs, and they are less predictable than robots, which imposes interesting difficulties for filtering. The first published example of this that I can find seems to be Fox 1999 [1], with a fuller example in Fox 2003 [2]. Sebastian Thrun also uses this formulation in his excellent Udacity course Artificial Intelligence for Robotics [3]. In fact, if you like watching videos, I highly recommend pausing reading this book in favor of the first few lessons of that course, and then coming back to this book for a deeper dive into the topic. Let's now use a simple thought experiment, much like we did with the g-h filter, to see how we might reason about the use of probabilities for filtering and tracking. ## Tracking a Dog Let's begin with a simple problem. We have a dog-friendly workspace, and so people bring their dogs to work. Occasionally the dogs wander out of offices and down the halls. We want to be able to track them. So during a hackathon somebody invented a sonar sensor to attach to the dog's collar. 
It emits a signal, listens for the echo, and based on how quickly an echo comes back we can tell whether the dog is in front of an open doorway or not. It also senses when the dog walks, and reports in which direction the dog has moved. It connects to the network via wifi and sends an update once a second.

I want to track my dog Simon, so I attach the device to his collar and then fire up Python, ready to write code to track him through the building. At first blush this may appear impossible. If I start listening to the sensor of Simon's collar I might read **door**, **hall**, **hall**, and so on. How can I use that information to determine where Simon is?

To keep the problem small enough to plot easily we will assume that there are only 10 positions in the hallway, which we will number 0 to 9, where 1 is to the right of 0. For reasons that will be clear later, we will also assume that the hallway is circular or rectangular. If you move right from position 9, you will be at position 0.

When I begin listening to the sensor I have no reason to believe that Simon is at any particular position in the hallway. From my perspective he is equally likely to be in any position. There are 10 positions, so the probability that he is in any given position is 1/10.

Let's represent our belief of his position in a NumPy array. I could use a Python list, but NumPy arrays offer functionality that we will be using soon.

```
import numpy as np
belief = np.array([1/10]*10)
print(belief)
```

In [Bayesian statistics](https://en.wikipedia.org/wiki/Bayesian_probability) this is called a [*prior*](https://en.wikipedia.org/wiki/Prior_probability). It is the probability prior to incorporating measurements or other information. More completely, this is called the *prior probability distribution*. A [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) is a collection of all possible probabilities for an event.
Probability distributions always sum to 1 because something had to happen; the distribution lists all possible events and the probability of each.

I'm sure you've used probabilities before - as in "the probability of rain today is 30%". The last paragraph sounds like more of that. But Bayesian statistics was a revolution in probability because it treats probability as a belief about a single event.

Let's take an example. I know that if I flip a fair coin infinitely many times I will get 50% heads and 50% tails. This is called [*frequentist statistics*](https://en.wikipedia.org/wiki/Frequentist_inference) to distinguish it from Bayesian statistics. Computations are based on the frequency with which events occur.

I flip the coin one more time and let it land. Which way do I believe it landed? Frequentist probability has nothing to say about that; it will merely state that 50% of coin flips land as heads. In some ways it is meaningless to assign a probability to the current state of the coin. It is either heads or tails, we just don't know which. Bayes treats this as a belief about a single event - the strength of my belief or knowledge that this specific coin flip is heads is 50%. Some object to the term "belief"; belief can imply holding something to be true without evidence. In this book it always is a measure of the strength of our knowledge. We'll learn more about this as we go.

Bayesian statistics takes past information (the prior) into account. We observe that it rains 4 times every 100 days. From this I could state that the chance of rain tomorrow is 1/25. This is not how weather prediction is done. If I know it is raining today and the storm front is stalled, it is likely to rain tomorrow. Weather prediction is Bayesian.

In practice statisticians use a mix of frequentist and Bayesian techniques. Sometimes finding the prior is difficult or impossible, and frequentist techniques rule. In this book we can find the prior.
When I talk about the probability of something I am referring to the probability that some specific thing is true given past events. When I do that I'm taking the Bayesian approach.

Now let's create a map of the hallway. We'll place the first two doors close together, and then another door further away. We will use 1 for doors, and 0 for walls:

```
hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
```

I start listening to Simon's transmissions on the network, and the first data I get from the sensor is **door**. For the moment assume the sensor always returns the correct answer. From this I conclude that he is in front of a door, but which one? I have no reason to believe he is in front of the first, second, or third door. What I can do is assign a probability to each door. All doors are equally likely, and there are three of them, so I assign a probability of 1/3 to each door.

```
import kf_book.book_plots as book_plots
from kf_book.book_plots import figsize, set_figsize
import matplotlib.pyplot as plt

belief = np.array([1/3, 1/3, 0, 0, 0, 0, 0, 0, 1/3, 0])
book_plots.bar_plot(belief)
```

This distribution is called a [*categorical distribution*](https://en.wikipedia.org/wiki/Categorical_distribution), which is a discrete distribution describing the probability of observing $n$ outcomes. It is a [*multimodal distribution*](https://en.wikipedia.org/wiki/Multimodal_distribution) because we have multiple beliefs about the position of our dog. Of course we are not saying that we think he is simultaneously in three different locations, merely that we have narrowed down our knowledge to one of these three locations. My (Bayesian) belief is that there is a 33.3% chance of being at door 0, 33.3% at door 1, and a 33.3% chance of being at door 8.

This is an improvement in two ways. I've rejected a number of hallway positions as impossible, and the strength of my belief in the remaining positions has increased from 10% to 33%. This will always happen.
As our knowledge improves the probabilities will get closer to 100%.

A few words about the [*mode*](https://en.wikipedia.org/wiki/Mode_%28statistics%29) of a distribution. Given a list of numbers, such as {1, 2, 2, 2, 3, 3, 4}, the *mode* is the number that occurs most often. For this set the mode is 2. A distribution can contain more than one mode. The list {1, 2, 2, 2, 3, 3, 4, 4, 4} contains the modes 2 and 4, because both occur three times. We say the former list is [*unimodal*](https://en.wikipedia.org/wiki/Unimodality), and the latter is *multimodal*.

Another term used for this distribution is a [*histogram*](https://en.wikipedia.org/wiki/Histogram). Histograms graphically depict the distribution of a set of numbers. The bar chart above is a histogram.

I hand coded the `belief` array in the code above. How would we implement this in code? We represent doors with 1, and walls as 0, so we will multiply the hallway variable by the percentage, like so:

```
belief = hallway * (1/3)
print(belief)
```

## Extracting Information from Sensor Readings

Let's put Python aside and think about the problem a bit. Suppose we were to read the following from Simon's sensor:

* door
* move right
* door

Can we deduce Simon's location? Of course! Given the hallway's layout there is only one place from which you can get this sequence, and that is at the left end. Therefore we can confidently state that Simon is in front of the second doorway. If this is not clear, suppose Simon had started at the second or third door. After moving to the right, his sensor would have returned 'wall'. That doesn't match the sensor readings, so we know he didn't start there. We can continue with that logic for all the remaining starting positions. The only possibility is that he is now in front of the second door. Our belief is:

```
belief = np.array([0., 1., 0., 0., 0., 0., 0., 0., 0., 0.])
```

I designed the hallway layout and sensor readings to give us an exact answer quickly.
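The elimination argument above can also be checked by brute force. Here is a quick standalone sketch (the `consistent_starts` helper is my own illustration, not part of the book's support code); it tries every starting position against the reading sequence, assuming perfect sensors:

```python
import numpy as np

hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])

def consistent_starts(hall, events):
    """Return every start position consistent with a sequence of
    ('sense', value) and ('move', steps) events, assuming perfect sensors."""
    n = len(hall)
    starts = []
    for start in range(n):
        pos, ok = start, True
        for kind, val in events:
            if kind == 'move':
                pos = (pos + val) % n       # circular hallway
            elif hall[pos] != val:          # sensor must match the map
                ok = False
                break
        if ok:
            starts.append(start)
    return starts

events = [('sense', 1), ('move', 1), ('sense', 1)]  # door, move right, door
print(consistent_starts(hallway, events))
```

Only start position 0 survives, which puts Simon at position 1 - the second door - after the move, matching the reasoning above.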
Real problems are not so clear cut. But this should trigger your intuition - the first sensor reading only gave us low probabilities (0.333) for Simon's location, but after a position update and another sensor reading we know more about where he is. You might suspect, correctly, that if you had a very long hallway with a large number of doors that after several sensor readings and position updates we would either be able to know where Simon was, or have narrowed the possibilities down to a small number. This is possible when a set of sensor readings only matches one to a few starting locations.

We could implement this solution now, but instead let's consider a real world complication to the problem.

## Noisy Sensors

Perfect sensors are rare. Perhaps the sensor would not detect a door if Simon sat in front of it while scratching himself, or misread if he is not facing down the hallway. Thus when I get **door** I cannot use 1/3 as the probability. I have to assign less than 1/3 to each door, and assign a small probability to each blank wall position. Something like

```Python
[.31, .31, .01, .01, .01, .01, .01, .01, .31, .01]
```

At first this may seem insurmountable. If the sensor is noisy it casts doubt on every piece of data. How can we conclude anything if we are always unsure? The answer, as for the problem above, is with probabilities. We are already comfortable assigning a probabilistic belief to the location of the dog; now we have to incorporate the additional uncertainty caused by the sensor noise.

Say we get a reading of **door**, and suppose that testing shows that the sensor is 3 times more likely to be right than wrong. We should scale the probability distribution by 3 where there is a door. If we do that the result will no longer be a probability distribution, but we will learn how to fix that in a moment.

Let's look at that in Python code. Here I use the variable `z` to denote the measurement.
`z` or `y` are customary choices in the literature for the measurement. As a programmer I prefer meaningful variable names, but I want you to be able to read the literature and/or other filtering code, so I will start introducing these abbreviated names now.

```
def update_belief(hall, belief, z, correct_scale):
    for i, val in enumerate(hall):
        if val == z:
            belief[i] *= correct_scale

belief = np.array([0.1] * 10)
reading = 1  # 1 is 'door'
update_belief(hallway, belief, z=reading, correct_scale=3.)
print('belief:', belief)
print('sum =', sum(belief))
plt.figure()
book_plots.bar_plot(belief)
```

This is not a probability distribution because it does not sum to 1.0. But the code is doing mostly the right thing - the doors are assigned a number (0.3) that is 3 times higher than the walls (0.1). All we need to do is normalize the result so that the probabilities correctly sum to 1.0. Normalization is done by dividing each element by the sum of all elements in the list. That is easy with NumPy:

```
belief / sum(belief)
```

FilterPy implements this with the `normalize` function:

```Python
from filterpy.discrete_bayes import normalize
normalize(belief)
```

It is a bit odd to say "3 times as likely to be right as wrong". We are working in probabilities, so let's specify the probability of the sensor being correct, and compute the scale factor from that. The equation for that is

$$scale = \frac{prob_{correct}}{prob_{incorrect}} = \frac{prob_{correct}}{1 - prob_{correct}}$$

Also, the `for` loop is cumbersome. As a general rule you will want to avoid using `for` loops in NumPy code. NumPy is implemented in C and Fortran, so if you avoid `for` loops the result often runs 100x faster than the equivalent loop.

How do we get rid of this `for` loop? NumPy lets you index arrays with boolean arrays. You create a boolean array with logical operators.
We can find all the doors in the hallway with:

```
hallway == 1
```

When you use the boolean array as an index to another array it returns only the elements where the index is `True`. Thus we can replace the `for` loop with

```python
belief[hall==z] *= scale
```

and only the elements which equal `z` will be multiplied by `scale`.

Teaching you NumPy is beyond the scope of this book. I will use idiomatic NumPy constructs and explain them the first time I present them. If you are new to NumPy there are many blog posts and videos on how to use NumPy efficiently and idiomatically.

Here is our improved version:

```
from filterpy.discrete_bayes import normalize

def scaled_update(hall, belief, z, z_prob): 
    scale = z_prob / (1. - z_prob)
    belief[hall==z] *= scale
    normalize(belief)

belief = np.array([0.1] * 10)
scaled_update(hallway, belief, z=1, z_prob=.75)

print('sum =', sum(belief))
print('probability of door =', belief[0])
print('probability of wall =', belief[2])
book_plots.bar_plot(belief, ylim=(0, .3))
```

We can see from the output that the sum is now 1.0, and that the probability of a door vs wall is still three times larger. The result also fits our intuition that the probability of a door must be less than 0.333, and that the probability of a wall must be greater than 0.0. Finally, it should fit our intuition that we have not yet been given any information that would allow us to distinguish between any given door or wall position, so all door positions should have the same value, and the same should be true for wall positions.

This result is called the [*posterior*](https://en.wikipedia.org/wiki/Posterior_probability), which is short for *posterior probability distribution*. All this means is a probability distribution *after* incorporating the measurement information (posterior means 'after' in this context). To review, the *prior* is the probability distribution before including the measurement's information.
Another term is the [*likelihood*](https://en.wikipedia.org/wiki/Likelihood_function). When we computed `belief[hall==z] *= scale` we were computing how *likely* each position was given the measurement. The likelihood is not a probability distribution because it does not sum to one.

The combination of these gives the equation

$$\mathtt{posterior} = \frac{\mathtt{likelihood} \times \mathtt{prior}}{\mathtt{normalization}}$$

When we talk about the filter's output we typically call the state after performing the prediction the *prior* or *prediction*, and we call the state after the update either the *posterior* or the *estimated state*.

It is very important to learn and internalize these terms as most of the literature uses them extensively.

Does `scaled_update()` perform this computation? It does. Let me recast it into this form:

```
def scaled_update(hall, belief, z, z_prob): 
    scale = z_prob / (1. - z_prob)
    likelihood = np.ones(len(hall))
    likelihood[hall==z] *= scale
    return normalize(likelihood * belief)
```

This function is not fully general. It contains knowledge about the hallway, and how we match measurements to it. We always strive to write general functions. Here we will remove the computation of the likelihood from the function, and require the caller to compute the likelihood themselves.

Here is a full implementation of the algorithm:

```python
def update(likelihood, prior):
    return normalize(likelihood * prior)
```

Computation of the likelihood varies per problem. For example, the sensor might not return just 1 or 0, but a `float` between 0 and 1 indicating the probability of being in front of a door. It might use computer vision and report a blob shape that you then probabilistically match to a door. It might use sonar and return a distance reading. In each case the computation of the likelihood will be different. We will see many examples of this throughout the book, and learn how to perform these calculations.

FilterPy implements `update`.
Here is the previous example in a fully general form:

```
from filterpy.discrete_bayes import update

def lh_hallway(hall, z, z_prob):
    """ compute likelihood that a measurement matches
    positions in the hallway."""
    try:
        scale = z_prob / (1. - z_prob)
    except ZeroDivisionError:
        scale = 1e8

    likelihood = np.ones(len(hall))
    likelihood[hall==z] *= scale
    return likelihood

belief = np.array([0.1] * 10)
likelihood = lh_hallway(hallway, z=1, z_prob=.75)
update(likelihood, belief)
```

## Incorporating Movement

Recall how quickly we were able to find an exact solution when we incorporated a series of measurements and movement updates. However, that occurred in a fictional world of perfect sensors. Might we be able to find an exact solution with noisy sensors?

Unfortunately, the answer is no. Even if the sensor readings perfectly match an extremely complicated hallway map, we cannot be 100% certain that the dog is in a specific position - there is, after all, a tiny possibility that every sensor reading was wrong! Naturally, in a more typical situation most sensor readings will be correct, and we might be close to 100% sure of our answer, but never 100% sure. This may seem complicated, but let's go ahead and program the math.

First let's deal with the simple case - assume the movement sensor is perfect, and it reports that the dog has moved one space to the right. How would we alter our `belief` array?

I hope that after a moment's thought it is clear that we should shift all the values one space to the right. If we previously thought there was a 50% chance of Simon being at position 3, then after he moved one position to the right we should believe that there is a 50% chance he is at position 4. The hallway is circular, so we will use modulo arithmetic to perform the shift.
```
def perfect_predict(belief, move):
    """ move the position by `move` spaces, where positive is
    to the right, and negative is to the left
    """
    n = len(belief)
    result = np.zeros(n)
    for i in range(n):
        result[i] = belief[(i-move) % n]
    return result

belief = np.array([.35, .1, .2, .3, 0, 0, 0, 0, 0, .05])
plt.subplot(121)
book_plots.bar_plot(belief, title='Before prediction', ylim=(0, .4))

belief = perfect_predict(belief, 1)
plt.subplot(122)
book_plots.bar_plot(belief, title='After prediction', ylim=(0, .4))
```

We can see that we correctly shifted all values one position to the right, wrapping from the end of the array back to the beginning.

The next cell animates this so you can see it in action. Use the slider to move forwards and backwards in time. This simulates Simon walking around and around the hallway. It does not yet incorporate new measurements so the probability distribution does not change shape, only position.

```
from ipywidgets import interact, IntSlider

belief = np.array([.35, .1, .2, .3, 0, 0, 0, 0, 0, .05])
perfect_beliefs = []

for _ in range(20):
    # Simon takes one step to the right
    belief = perfect_predict(belief, 1)
    perfect_beliefs.append(belief)

def simulate(time_step):
    book_plots.bar_plot(perfect_beliefs[time_step], ylim=(0, .4))

interact(simulate, time_step=IntSlider(value=0, max=len(perfect_beliefs)-1));
```

## Terminology

Let's pause a moment to review terminology. I introduced this terminology in the last chapter, but let's take a second to help solidify your knowledge.

The *system* is what we are trying to model or filter. Here the system is our dog. The *state* is its current configuration or value. In this chapter the state is our dog's position. We rarely know the actual state, so we say our filters produce the *estimated state* of the system. In practice this often gets called the state, so be careful to understand the context.
One cycle of prediction and updating with a measurement is called the state or system *evolution*, which is short for *time evolution* [7]. Another term is *system propagation*. It refers to how the state of the system changes over time. For filters, time is usually a discrete step, such as 1 second. For our dog tracker the system state is the position of the dog, and the state evolution is the position after a discrete amount of time has passed.

We model the system behavior with the *process model*. Here, our process model is that the dog moves one or more positions at each time step. This is not a particularly accurate model of how dogs behave. The error in the model is called the *system error* or *process error*.

The prediction is our new *prior*. Time has moved forward and we made a prediction without benefit of knowing the measurements.

Let's work an example. The current position of the dog is 17 m. Our epoch is 2 seconds long, and the dog is traveling at 15 m/s. Where do we predict he will be in two seconds?

Clearly,

$$ \begin{aligned}
\bar x &= 17 + (15 \cdot 2) \\
&= 47
\end{aligned}$$

I use bars over variables to indicate that they are priors (predictions). We can write the equation for the process model like this:

$$ \bar x_{k+1} = x_k + f_x(\bullet)$$

$x_k$ is the current position or state. If the dog is at 17 m then $x_k = 17$.

$f_x(\bullet)$ is the state propagation function for $x$. It describes how much $x_k$ changes over one time step. For our example it performs the computation $15 \cdot 2$, so we would define it as

$$f_x(v_x, t) = v_x t$$

## Adding Uncertainty to the Prediction

`perfect_predict()` assumes perfect measurements, but all sensors have noise. What if the sensor reported that our dog moved one space, but he actually moved two spaces, or zero? This may sound like an insurmountable problem, but let's model it and see what happens.
Assume that the sensor's movement measurement is 80% likely to be correct, 10% likely to overshoot one position to the right, and 10% likely to undershoot to the left. That is, if the movement measurement is 4 (meaning 4 spaces to the right), the dog is 80% likely to have moved 4 spaces to the right, 10% to have moved 3 spaces, and 10% to have moved 5 spaces.

Each result in the array now needs to incorporate probabilities for 3 different situations. For example, consider the reported movement of 2. If we are 100% certain the dog started from position 3, then there is an 80% chance he is at 5, and a 10% chance for either 4 or 6. Let's try coding that:

```
def predict_move(belief, move, p_under, p_correct, p_over):
    n = len(belief)
    prior = np.zeros(n)
    for i in range(n):
        prior[i] = (
            belief[(i-move) % n]   * p_correct +
            belief[(i-move-1) % n] * p_over +
            belief[(i-move+1) % n] * p_under)
    return prior

belief = [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]
prior = predict_move(belief, 2, .1, .8, .1)
book_plots.plot_belief_vs_prior(belief, prior)
```

It appears to work correctly. Now what happens when our belief is not 100% certain?

```
belief = [0, 0, .4, .6, 0, 0, 0, 0, 0, 0]
prior = predict_move(belief, 2, .1, .8, .1)
book_plots.plot_belief_vs_prior(belief, prior)
prior
```

Here the results are more complicated, but you should still be able to work it out in your head. The 0.04 is due to the possibility that the 0.4 belief undershot by 1. The 0.38 is due to the following: the 80% chance that we moved 2 positions (0.4 $\times$ 0.8) and the 10% chance that we undershot (0.6 $\times$ 0.1). Overshooting plays no role here because if we overshot both 0.4 and 0.6 would be past this position. **I strongly suggest working some examples until all of this is very clear, as so much of what follows depends on understanding this step.**

If you look at the probabilities after performing the update you might be dismayed.
In the example above we started with probabilities of 0.4 and 0.6 in two positions; after performing the update the probabilities are not only lowered, but they are strewn out across the map.

This is not a coincidence, or the result of a carefully chosen example - it is always true of the prediction. If the sensor is noisy we lose some information on every prediction. Suppose we were to perform the prediction an infinite number of times - what would the result be? If we lose information on every step, we must eventually end up with no information at all, and our probabilities will be equally distributed across the `belief` array. Let's try this with 100 iterations. The plot is animated; use the slider to change the step number.

```
belief = np.array([1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
predict_beliefs = []

for i in range(100):
    belief = predict_move(belief, 1, .1, .8, .1)
    predict_beliefs.append(belief)

print('Final Belief:', belief)

# make interactive plot
def show_prior(step):
    book_plots.bar_plot(predict_beliefs[step-1])
    plt.title('Step {}'.format(step))

interact(show_prior, step=IntSlider(value=1, max=len(predict_beliefs)));
```

After 100 iterations we have lost almost all information, even though we were 100% sure that we started in position 0. Feel free to play with the numbers to see the effect of differing numbers of updates. For example, after 100 updates a small amount of information is left, after 50 a lot is left, but by 200 iterations essentially all information is lost.

And, if you are viewing this online here is an animation of that output.

<img src="animations/02_no_info.gif">

I will not generate these standalone animations through the rest of the book. Please see the preface for instructions to run this book on the web, for free, or install IPython on your computer. This will allow you to run all of the cells and see the animations. It's very important that you practice with this code, not just read passively.
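One such practice check: the hand-computed values from the 0.4/0.6 example (0.04, 0.38, 0.52, 0.06) can be verified with a standalone copy of `predict_move()` - a sketch that needs nothing but NumPy:

```python
import numpy as np

def predict_move(belief, move, p_under, p_correct, p_over):
    """Shift belief by `move`, allowing one-position under/overshoot."""
    n = len(belief)
    prior = np.zeros(n)
    for i in range(n):
        prior[i] = (belief[(i-move) % n]   * p_correct +
                    belief[(i-move-1) % n] * p_over +
                    belief[(i-move+1) % n] * p_under)
    return prior

belief = [0, 0, .4, .6, 0, 0, 0, 0, 0, 0]
prior = predict_move(belief, 2, .1, .8, .1)
print(prior[3:7])   # the hand-computed 0.04, 0.38, 0.52, 0.06
print(prior.sum())  # the prediction is still a valid distribution
```

The probabilities spread out exactly as described above, and the total stays at 1.0 - the prediction smears belief but never creates or destroys it.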
## Generalizing with Convolution

We made the assumption that the movement error is at most one position. But it is possible for the error to be two, three, or more positions. As programmers we always want to generalize our code so that it works for all cases.

This is easily solved with [*convolution*](https://en.wikipedia.org/wiki/Convolution). Convolution modifies one function with another function. In our case we are modifying a probability distribution with the error function of the sensor. The implementation of `predict_move()` is a convolution, though we did not call it that. Formally, convolution is defined as

$$ (f \ast g) (t) = \int_0^t \!f(\tau) \, g(t-\tau) \, \mathrm{d}\tau$$

where $f\ast g$ is the notation for convolving $f$ by $g$. It does not mean multiply.

Integrals are for continuous functions, but we are using discrete functions. We replace the integral with a summation, and the parentheses with array brackets.

$$ (f \ast g) [t] = \sum\limits_{\tau=0}^t \!f[\tau] \, g[t-\tau]$$

Comparison shows that `predict_move()` is computing this equation - it computes the sum of a series of multiplications.

[Khan Academy](https://www.khanacademy.org/math/differential-equations/laplace-transform/convolution-integral/v/introduction-to-the-convolution) [4] has a good introduction to convolution, and Wikipedia has some excellent animations of convolutions [5]. But the general idea is already clear. You slide an array called the *kernel* across another array, multiplying the neighbors of the current cell with the values of the second array. In our example above we used 0.8 for the probability of moving to the correct location, 0.1 for undershooting, and 0.1 for overshooting. We make a kernel of this with the array `[0.1, 0.8, 0.1]`. All we need to do is write a loop that goes over each element of our array, multiplying by the kernel, and summing the results. To emphasize that the belief is a probability distribution I have named it `pdf`.
```
def predict_move_convolution(pdf, offset, kernel):
    N = len(pdf)
    kN = len(kernel)
    width = int((kN - 1) / 2)

    prior = np.zeros(N)
    for i in range(N):
        for k in range(kN):
            index = (i + (width-k) - offset) % N
            prior[i] += pdf[index] * kernel[k]
    return prior
```

This illustrates the algorithm, but it runs very slowly. SciPy provides a convolution routine `convolve()` in the `ndimage.filters` module. We need to shift the pdf by `offset` before convolution; `np.roll()` does that. The move and predict algorithm can be implemented with one line:

```python
convolve(np.roll(pdf, offset), kernel, mode='wrap')
```

FilterPy implements this with `discrete_bayes`' `predict()` function.

```
from filterpy.discrete_bayes import predict

belief = [.05, .05, .05, .05, .55, .05, .05, .05, .05, .05]
prior = predict(belief, offset=1, kernel=[.1, .8, .1])
book_plots.plot_belief_vs_prior(belief, prior, ylim=(0,0.6))
```

All of the elements are unchanged except the middle ones. The values in positions 4 and 6 should be

$$(0.1 \times 0.05)+ (0.8 \times 0.05) + (0.1 \times 0.55) = 0.1$$

Position 5 should be

$$(0.1 \times 0.05) + (0.8 \times 0.55)+ (0.1 \times 0.05) = 0.45$$

Let's ensure that it shifts the positions correctly for movements greater than one and for asymmetric kernels.

```
prior = predict(belief, offset=3, kernel=[.05, .05, .6, .2, .1])
book_plots.plot_belief_vs_prior(belief, prior, ylim=(0,0.6))
```

The position was correctly shifted by 3 positions and we give more weight to the likelihood of an overshoot vs an undershoot, so this looks correct.

Make sure you understand what we are doing. We are making a prediction of where the dog is moving, and convolving the probabilities to get the prior. If we weren't using probabilities we would use this equation that I gave earlier:

$$ \bar x_{k+1} = x_k + f_{\mathbf x}(\bullet)$$

The prior, our prediction of where the dog will be, is the amount the dog moved plus his current position.
The dog was at 10, he moved 5 meters, so he is now at 15 m. It couldn't be simpler. But we are using probabilities to model this, so our equation is:

$$ \bar{\mathbf x}_{k+1} = \mathbf x_k \ast f_{\mathbf x}(\bullet)$$

We are *convolving* the current probabilistic position estimate with a probabilistic estimate of how much we think the dog moved. It's the same concept, but the math is slightly different. $\mathbf x$ is bold to denote that it is an array of numbers.

## Integrating Measurements and Movement Updates

The problem of losing information during a prediction may make it seem as if our system would quickly devolve into having no knowledge. However, each prediction is followed by an update where we incorporate the measurement into the estimate. The update improves our knowledge. The output of the update step is fed into the next prediction. The prediction degrades our certainty. That is passed into another update, where certainty is again increased.

Let's think about this intuitively. Consider a simple case - you are tracking a dog while he sits still. During each prediction you predict he doesn't move. Your filter quickly *converges* on an accurate estimate of his position. Then the microwave in the kitchen turns on, and he goes streaking off. You don't know this, so at the next prediction you predict he is in the same spot. But the measurements tell a different story. As you incorporate the measurements your belief will be smeared along the hallway, leading towards the kitchen. On every epoch (cycle) your belief that he is sitting still will get smaller, and your belief that he is inbound towards the kitchen at a startling rate of speed increases.

That is what intuition tells us. What does the math tell us?

We have already programmed the update and predict steps. All we need to do is feed the result of one into the other, and we will have implemented a dog tracker!!! Let's see how it performs.
We will input measurements as if the dog started at position 0 and moved right one position each epoch. As in a real world application, we will start with no knowledge of his position by assigning equal probability to all positions.

```
from filterpy.discrete_bayes import update

hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
prior = np.array([.1] * 10)
likelihood = lh_hallway(hallway, z=1, z_prob=.75)
posterior = update(likelihood, prior)
book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5))
```

After the first update we have assigned a high probability to each door position, and a low probability to each wall position.

```
kernel = (.1, .8, .1)
prior = predict(posterior, 1, kernel)
book_plots.plot_prior_vs_posterior(prior, posterior, True, ylim=(0,.5))
```

The predict step shifted these probabilities to the right, smearing them about a bit. Now let's look at what happens at the next sense.

```
likelihood = lh_hallway(hallway, z=1, z_prob=.75)
posterior = update(likelihood, prior)
book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5))
```

Notice the tall bar at position 1. This corresponds with the (correct) case of starting at position 0, sensing a door, shifting 1 to the right, and sensing another door. No other positions make this set of observations as likely. Now we will add an update and then sense the wall.

```
prior = predict(posterior, 1, kernel)
likelihood = lh_hallway(hallway, z=0, z_prob=.75)
posterior = update(likelihood, prior)
book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5))
```

This is exciting! We have a very prominent bar at position 2 with a value of around 35%. It is over twice the value of any other bar in the plot, and is about 4% larger than our last plot, where the tallest bar was around 31%. Let's see one more cycle.
```
prior = predict(posterior, 1, kernel)
likelihood = lh_hallway(hallway, z=0, z_prob=.75)
posterior = update(likelihood, prior)
book_plots.plot_prior_vs_posterior(prior, posterior, ylim=(0,.5))
```

I ignored an important issue. Earlier I assumed that we had a motion sensor for the predict step; then, when talking about the dog and the microwave I assumed that you had no knowledge that he suddenly began running. I mentioned that your belief that the dog is running would increase over time, but I did not provide any code for this. In short, how do we detect and/or estimate changes in the process model if we aren't directly measuring it?

For now I want to ignore this problem. In later chapters we will learn the mathematics behind this estimation; for now it is a large enough task just to learn this algorithm. It is profoundly important to solve this problem, but we haven't yet built enough of the mathematical apparatus that is required, and so for the remainder of the chapter we will ignore the problem by assuming we have a sensor that senses movement.

## The Discrete Bayes Algorithm

This chart illustrates the algorithm:

```
book_plots.predict_update_chart()
```

This filter is a form of the g-h filter. Here we are using the percentages for the errors to implicitly compute the $g$ and $h$ parameters. We could express the discrete Bayes algorithm as a g-h filter, but that would obscure the logic of this filter. The filter equations are:

$$\begin{aligned} \bar {\mathbf x} &= \mathbf x \ast f_{\mathbf x}(\bullet)\, \, &\text{Predict Step} \\
\mathbf x &= \|\mathcal L \cdot \bar{\mathbf x}\|\, \, &\text{Update Step}\end{aligned}$$

$\mathcal L$ is the usual way to write the likelihood function, so I use that. The $\|\cdot\|$ notation denotes normalization: we need to normalize the product of the likelihood with the prior to ensure $\mathbf x$ is a probability distribution that sums to one.

We can express this in pseudocode.

**Initialization**

1.
Initialize our belief in the state

**Predict**

1. Based on the system behavior, predict state for the next time step
2. Adjust belief to account for the uncertainty in prediction

**Update**

1. Get a measurement and associated belief about its accuracy
2. Compute how likely it is the measurement matches each state
3. Update state belief with this likelihood

When we cover the Kalman filter we will use this exact same algorithm; only the details of the computation will differ. Algorithms in this form are sometimes called *predictor correctors*. We make a prediction, then correct it.

Let's animate this. First let's write functions to perform the filtering and to plot the results at any step. I've plotted the position of the doorways in black. Priors are drawn in orange, and the posterior in blue. I draw a thick vertical line to indicate where Simon really is. This is not an output of the filter - we know where Simon is only because we are simulating his movement.

```
def discrete_bayes_sim(prior, kernel, measurements, z_prob, hallway):
    # start from the supplied prior instead of hardcoding its length
    posterior = np.asarray(prior, dtype=float)
    priors, posteriors = [], []
    for i, z in enumerate(measurements):
        prior = predict(posterior, 1, kernel)
        priors.append(prior)

        likelihood = lh_hallway(hallway, z, z_prob)
        posterior = update(likelihood, prior)
        posteriors.append(posterior)
    return priors, posteriors

def plot_posterior(hallway, posteriors, i):
    plt.title('Posterior')
    book_plots.bar_plot(hallway, c='k')
    book_plots.bar_plot(posteriors[i], ylim=(0, 1.0))
    plt.axvline(i % len(hallway), lw=5)

def plot_prior(hallway, priors, i):
    plt.title('Prior')
    book_plots.bar_plot(hallway, c='k')
    book_plots.bar_plot(priors[i], ylim=(0, 1.0), c='#ff8015')
    plt.axvline(i % len(hallway), lw=5)

def animate_discrete_bayes(hallway, priors, posteriors):
    def animate(step):
        step -= 1
        i = step // 2
        if step % 2 == 0:
            plot_prior(hallway, priors, i)
        else:
            plot_posterior(hallway, posteriors, i)
    return animate
```

Let's run the filter and animate it.
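Before running it, the update equation from the last section — multiply the prior elementwise by the likelihood, then normalize — is worth sketching in isolation. `update_sketch` is an invented name; filterpy's `update` does the equivalent:

```python
import numpy as np

def update_sketch(likelihood, prior):
    # posterior is proportional to likelihood * prior; dividing by the
    # sum is the normalization written as ||L . x_bar|| in the equations
    posterior = likelihood * prior
    return posterior / posterior.sum()

hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
# sensor says "door" and is right 75% of the time
likelihood = np.where(hallway == 1, .75, .25)
posterior = update_sketch(likelihood, np.array([.1] * 10))
# each door position now carries three times the belief of a wall position
```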
```
# change these numbers to alter the simulation
kernel = (.1, .8, .1)
z_prob = 1.0
hallway = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])

# measurements with no noise
zs = [hallway[i % len(hallway)] for i in range(50)]

priors, posteriors = discrete_bayes_sim(prior, kernel, zs, z_prob, hallway)
interact(animate_discrete_bayes(hallway, priors, posteriors),
         step=IntSlider(value=1, max=len(zs)*2));
```

Now we can see the results. You can see how the prior shifts the position and reduces certainty, and the posterior stays in the same position and increases certainty as it incorporates the information from the measurement. I've made the measurement perfect with the line `z_prob = 1.0`; we will explore the effect of imperfect measurements in the next section.

Another thing to note is how accurate our estimate becomes when we are in front of a door, and how it degrades when in the middle of the hallway. This should make intuitive sense. There are only a few doorways, so when the sensor tells us we are in front of a door this boosts our certainty in our position. A long stretch of no doors reduces our certainty.

## The Effect of Bad Sensor Data

You may be suspicious of the results above because I always passed correct sensor data into the functions. However, we are claiming that this code implements a *filter* - it should filter out bad sensor measurements. Does it do that?
To make this easy to program and visualize I will change the layout of the hallway to mostly alternating doors and hallways, and run the algorithm on 6 correct measurements:

```
hallway = np.array([1, 0, 1, 0, 0]*2)
kernel = (.1, .8, .1)
prior = np.array([.1] * 10)
zs = [1, 0, 1, 0, 0, 1]
z_prob = 0.75
priors, posteriors = discrete_bayes_sim(prior, kernel, zs, z_prob, hallway)
interact(animate_discrete_bayes(hallway, priors, posteriors),
         step=IntSlider(value=12, max=len(zs)*2));
```

We have identified the likely cases of having started at position 0 or 5, because we saw this sequence of doors and walls: 1,0,1,0,0. Now I inject a bad measurement. The next measurement should be 0, but instead we get a 1:

```
measurements = [1, 0, 1, 0, 0, 1, 1]
priors, posteriors = discrete_bayes_sim(prior, kernel, measurements, z_prob, hallway);
plot_posterior(hallway, posteriors, 6)
```

That one bad measurement has significantly eroded our knowledge. Now let's continue with a series of correct measurements.

```
with figsize(y=5.5):
    measurements = [1, 0, 1, 0, 0, 1, 1, 1, 0, 0]
    for i, m in enumerate(measurements):
        likelihood = lh_hallway(hallway, z=m, z_prob=.75)
        posterior = update(likelihood, prior)
        prior = predict(posterior, 1, kernel)

        plt.subplot(5, 2, i+1)
        book_plots.bar_plot(posterior, ylim=(0, .4),
                            title='step {}'.format(i+1))
    plt.tight_layout()
```

We quickly filtered out the bad sensor reading and converged on the most likely positions for our dog.

## Drawbacks and Limitations

Do not be misled by the simplicity of the examples I chose. This is a robust and complete filter, and you may use the code in real world solutions. If you need a multimodal, discrete filter, this filter works.

With that said, this filter is not used often because it has several limitations. Getting around those limitations is the motivation behind the chapters in the rest of this book. The first problem is scaling.
Our dog tracking problem used only one variable, $pos$, to denote the dog's position. Most interesting problems will want to track several things in a large space. Realistically, at a minimum we would want to track our dog's $(x,y)$ coordinate, and probably his velocity $(\dot{x},\dot{y})$ as well. We have not covered the multidimensional case, but instead of an array we use a multidimensional grid to store the probabilities at each discrete location. Each `update()` and `predict()` step requires updating all values in the grid, so a simple four variable problem would require $O(n^4)$ running time *per time step*. Realistic filters can have 10 or more variables to track, leading to exorbitant computation requirements.

The second problem is that the filter is discrete, but we live in a continuous world. The histogram requires that you model the output of your filter as a set of discrete points. A 100 meter hallway requires 10,000 positions to model the hallway to 1cm accuracy. So each update and predict operation would entail performing calculations for 10,000 different probabilities. It gets exponentially worse as we add dimensions. A 100x100 m$^2$ courtyard requires 100,000,000 bins to get 1cm accuracy.

A third problem is that the filter is multimodal. In the last example we ended up with strong beliefs that the dog was in position 4 or 9. This is not always a problem. Particle filters, which we will study later, are multimodal and are often used because of this property. But imagine if the GPS in your car reported to you that it is 40% sure that you are on D street, and 30% sure you are on Willow Avenue.

A fourth problem is that it requires a measurement of the change in state. We need a motion sensor to detect how much the dog moves. There are ways to work around this problem, but it would complicate the exposition of this chapter, so, given the aforementioned problems, I will not discuss it further.
With that said, if I had a small problem that this technique could handle I would choose to use it; it is trivial to implement, debug, and understand, all virtues.

## Tracking and Control

We have been passively tracking an autonomously moving object. But consider this very similar problem. I am automating a warehouse and want to use robots to collect all of the items for a customer's order. Perhaps the easiest way to do this is to have the robots travel on a train track. I want to be able to send the robot a destination and have it go there. But train tracks and robot motors are imperfect. Wheel slippage and imperfect motors mean that the robot is unlikely to travel to exactly the position you command. There is more than one robot, and we need to know where they all are so we do not cause them to crash.

So we add sensors. Perhaps we mount magnets on the track every few feet, and use a Hall sensor to count how many magnets are passed. If we count 10 magnets then the robot should be at the 10th magnet. Of course it is possible to either miss a magnet or to count it twice, so we have to accommodate some degree of error. We can use the code from the previous section to track our robot since magnet counting is very similar to doorway sensing.

But we are not done. We've learned to never throw information away. If you have information you should use it to improve your estimate. What information are we leaving out? We know what control inputs we are feeding to the wheels of the robot at each moment in time. For example, let's say that once a second we send a movement command to the robot - move left 1 unit, move right 1 unit, or stand still. If I send the command 'move left 1 unit' I expect that in one second from now the robot will be 1 unit to the left of where it is now. This is a simplification because I am not taking acceleration into account, but I am not trying to teach control theory. Wheels and motors are imperfect.
The robot might end up 0.9 units away, or maybe 1.2 units. Now the entire solution is clear. We assumed that the dog kept moving in whatever direction he was previously moving. That is a dubious assumption for my dog! Robots are far more predictable. Instead of making a dubious prediction based on assumption of behavior we will feed in the command that we sent to the robot! In other words, when we call `predict()` we will pass in the commanded movement that we gave the robot along with a kernel that describes the likelihood of that movement. ### Simulating the Train Behavior We need to simulate an imperfect train. When we command it to move it will sometimes make a small mistake, and its sensor will sometimes return the incorrect value. ``` class Train(object): def __init__(self, track_len, kernel=[1.], sensor_accuracy=.9): self.track_len = track_len self.pos = 0 self.kernel = kernel self.sensor_accuracy = sensor_accuracy def move(self, distance=1): """ move in the specified direction with some small chance of error""" self.pos += distance # insert random movement error according to kernel r = random.random() s = 0 offset = -(len(self.kernel) - 1) / 2 for k in self.kernel: s += k if r <= s: break offset += 1 self.pos = int((self.pos + offset) % self.track_len) return self.pos def sense(self): pos = self.pos # insert random sensor error if random.random() > self.sensor_accuracy: if random.random() > 0.5: pos += 1 else: pos -= 1 return pos ``` With that we are ready to write the filter. We will put it in a function so that we can run it with different assumptions. I will assume that the robot always starts at the beginning of the track. The track is implemented as being 10 units long, but think of it as a track of length, say 10,000, with the magnet pattern repeated every 10 units. A length of 10 makes it easier to plot and inspect. 
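The cumulative-sum loop inside `move()` is the heart of the simulator, so here it is isolated as a hedged sketch. `sample_offset` is a name invented here; it assumes, as the `Train` code does, that `kernel[i]` gives the probability of a movement error of `i - center` positions:

```python
import random

def sample_offset(kernel):
    # Mirrors the loop inside Train.move(): draw r uniformly in [0, 1)
    # and walk the cumulative sum of the kernel until it exceeds r.
    r = random.random()
    s = 0.0
    offset = -(len(kernel) - 1) // 2   # leftmost possible error
    for k in kernel:
        s += k
        if r <= s:
            break
        offset += 1
    return offset

# sanity check the sampler by drawing many offsets
random.seed(0)
counts = {}
for _ in range(10000):
    o = sample_offset([.1, .8, .1])
    counts[o] = counts.get(o, 0) + 1
```

With kernel `[.1, .8, .1]` roughly 80% of the draws should come back as offset 0, and about 10% each as -1 and +1.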
```
from filterpy.discrete_bayes import normalize, predict, update

def train_filter(iterations, kernel, sensor_accuracy,
                 move_distance, do_print=True):
    track = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    prior = np.array([.9] + [0.01]*9)
    normalize(prior)
    posterior = prior[:]   # copy of the normalized prior
    robot = Train(len(track), kernel, sensor_accuracy)
    for i in range(iterations):
        # move the robot
        robot.move(distance=move_distance)

        # perform prediction
        prior = predict(posterior, move_distance, kernel)

        # and update the filter
        m = robot.sense()
        likelihood = lh_hallway(track, m, sensor_accuracy)
        posterior = update(likelihood, prior)
        index = np.argmax(posterior)

        if do_print:
            print('''time {}: pos {}, sensed {}, '''
                  '''at position {}'''.format(
                      i, robot.pos, m, track[robot.pos]))
            print('''        estimated position is {}'''
                  ''' with confidence {:.4f}%:'''.format(
                      index, posterior[index]*100))

    book_plots.bar_plot(posterior)
    if do_print:
        print()
        print('final position is', robot.pos)
        index = np.argmax(posterior)
        print('''Estimated position is {} with '''
              '''confidence {:.4f}%:'''.format(
                  index, posterior[index]*100))
```

Read the code and make sure you understand it. Now let's do a run with no sensor or movement error. If the code is correct it should be able to locate the robot with no error. The output is a bit tedious to read, but if you are at all unsure of how the update/predict cycle works make sure you read through it carefully to solidify your understanding.

```
import random
random.seed(3)
np.set_printoptions(precision=2, suppress=True, linewidth=60)
train_filter(4, kernel=[1.], sensor_accuracy=.999,
             move_distance=4, do_print=True)
```

We can see that the code was able to perfectly track the robot so we should feel reasonably confident that the code is working. Now let's see how it fares with some errors.

```
random.seed(5)
train_filter(4, kernel=[.1, .8, .1], sensor_accuracy=.9,
             move_distance=4, do_print=True)
```

There was a sensing error at time 1, but we are still quite confident in our position.
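`train_filter` reuses `lh_hallway` from earlier in the book. For reference, a hedged sketch of such a likelihood function — `lh_hallway_sketch` is an invented name, and up to a constant factor (which normalization removes anyway) it follows the convention used here: positions consistent with reading `z` are weighted by the sensor accuracy, the rest by its complement:

```python
import numpy as np

def lh_hallway_sketch(hall, z, z_prob):
    # every position whose map value equals the reading z is scaled up,
    # every other position is scaled down; update() normalizes later
    return np.where(np.asarray(hall) == z, z_prob, 1. - z_prob)

# door/wall map: reading z=1 (door) favors the three door positions
door_lik = lh_hallway_sketch([1, 1, 0, 0, 0, 0, 0, 0, 1, 0], z=1, z_prob=.75)

# track of position labels 0..9: a position reading singles out one bin,
# which is how the train example reuses the same likelihood function
track_lik = lh_hallway_sketch(np.arange(10), z=3, z_prob=.9)
```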
Now let's run a very long simulation and see how the filter responds to errors.

```
with figsize(y=5.5):
    for i in range(4):
        random.seed(3)
        plt.subplot(221+i)
        train_filter(148+i, kernel=[.1, .8, .1],
                     sensor_accuracy=.8,
                     move_distance=4, do_print=False)
        plt.title('iteration {}'.format(148+i))
```

We can see that there was a problem on iteration 149 as the confidence degrades. But within a few iterations the filter is able to correct itself and regain confidence in the estimated position.

## Bayes Theorem and the Total Probability Theorem

We developed the math in this chapter merely by reasoning about the information we have at each moment. In the process we discovered [*Bayes' Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem) and the [*Total Probability Theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Bayes' theorem tells us how to compute the probability of an event given previous information. We implemented the `update()` function with this probability calculation:

$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization\, factor}}$$

We haven't developed the mathematics to discuss Bayes yet, but this is Bayes' theorem. Every filter in this book is an expression of Bayes' theorem. In the next chapter we will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation:

$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$

where $\| \cdot\|$ expresses normalizing the term.

We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems. We will use this equation in every subsequent chapter.

Likewise, the `predict()` step computes the total probability of multiple possible events. This is known as the *Total Probability Theorem* in statistics, and we will also cover this in the next chapter after developing some supporting math.
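As a small numeric instance of both theorems, with numbers chosen to match the chapter's setup (3 doors in 10 positions, a sensor that is right 75% of the time):

```python
# Prior: 3 of the 10 hallway positions are doors
prior_door = 3 / 10

# Sensor model: P(reads "door" | at a door) and P(reads "door" | at a wall)
p_z_door = 0.75
p_z_wall = 0.25

# Total Probability Theorem: P(z = "door"), summed over both hypotheses
evidence = p_z_door * prior_door + p_z_wall * (1 - prior_door)

# Bayes' theorem: posterior = likelihood * prior / evidence
posterior_door = p_z_door * prior_door / evidence
```

A single "door" reading lifts the belief of being at a door from 30% to about 56% — the same arithmetic `update()` performs for every position at once.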
For now I need you to understand that Bayes' theorem is a formula to incorporate new information into existing information.

## Summary

The code is very short, but the result is impressive! We have implemented a form of a Bayesian filter. We have learned how to start with no information and derive information from noisy sensors. Even though the sensors in this chapter are very noisy (most sensors are more than 80% accurate, for example) we quickly converge on the most likely position for our dog. We have learned how the predict step always degrades our knowledge, but the addition of another measurement, even when it might have noise in it, improves our knowledge, allowing us to converge on the most likely result.

This book is mostly about the Kalman filter. The math it uses is different, but the logic is exactly the same as used in this chapter. It uses Bayesian reasoning to form estimates from a combination of measurements and process models.

**If you can understand this chapter you will be able to understand and implement Kalman filters.** I cannot stress this enough. If anything is murky, go back and reread this chapter and play with the code. The rest of this book will build on the algorithms that we use here. If you don't understand why this filter works you will have little success with the rest of the material. However, if you grasp the fundamental insight - multiplying probabilities when we measure, and shifting probabilities when we predict leads to a converging solution - then after learning a bit of math you are ready to implement a Kalman filter.

## References

* [1] D. Fox, W. Burgard, and S. Thrun. "Monte carlo localization: Efficient position estimation for mobile robots." In *Journal of Artificial Intelligence Research*, 1999. http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume11/fox99a-html/jair-localize.html
* [2] Dieter Fox, et al. "Bayesian Filters for Location Estimation". In *IEEE Pervasive Computing*, September 2003.
http://swarmlab.unimaas.nl/wp-content/uploads/2012/07/fox2003bayesian.pdf
* [3] Sebastian Thrun. "Artificial Intelligence for Robotics". https://www.udacity.com/course/cs373
* [4] Khan Academy. "Introduction to the Convolution". https://www.khanacademy.org/math/differential-equations/laplace-transform/convolution-integral/v/introduction-to-the-convolution
* [5] Wikipedia. "Convolution". http://en.wikipedia.org/wiki/Convolution
* [6] Wikipedia. "Law of total probability". http://en.wikipedia.org/wiki/Law_of_total_probability
* [7] Wikipedia. "Time Evolution". https://en.wikipedia.org/wiki/Time_evolution
* [8] "We need to rethink how we teach statistics from the ground up". http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up
```
import numpy as np
import pandas as pd
import tensorflow as tf

df = pd.read_csv('dataset_AirQual.csv')
df.info()
df['pm2.5'].describe()

#use fillna() method to replace missing values with mean value
df['pm2.5'].fillna(df['pm2.5'].mean(), inplace = True)
df.info()
df['cbwd'].unique()
df['cbwd'].value_counts()

#one hot encoding
cols = df.columns.tolist()
df_new = pd.get_dummies(df[cols])
df_new.head()

#put column pm2.5 at the end of the df
#avoid one of the column rearrangement steps
cols = df_new.columns.tolist()
cols_new = cols[:5] + cols[6:] + cols[5:6]
df_new = df_new[cols_new]
df_new.head()
```

Before I start to build, train and validate the model, I want to check the correlation between the independent variables and the dependent variable pm2.5.

The higher the cumulated wind speed (lws) and the more the wind is blowing from north west (cbwd_NW), the lower the concentration of pm2.5. <br> The more the wind is blowing from south west (cbwd_cv) and the higher the dew point (DEWP), the higher the concentration of pm2.5 in the air. The dew point indicates the absolute humidity. During times with high humidity, more pm2.5 particles can attach themselves to water droplets that hover in the air.
```
indep_var = cols_new[:-1]
df_new[indep_var].corrwith(df_new['pm2.5']).sort_values()

#get matrix arrays of dependent and independent variables
X = df_new.iloc[:, :-1].values
y = df_new.iloc[:, -1].values

from sklearn.preprocessing import StandardScaler

#training the model
def train(X_train, y):
    #scale the training set data and keep the fitted scaler,
    #so the validation data can be transformed consistently
    sc = StandardScaler()
    X_train_trans = sc.fit_transform(X_train)
    #initialize ANN as sequence of layers
    ann = tf.keras.models.Sequential()
    #add input and first hidden layer
    ann.add(tf.keras.layers.Dense(units=7, activation='relu'))
    #add second hidden layer
    ann.add(tf.keras.layers.Dense(units=7, activation='relu'))
    #add third hidden layer
    ann.add(tf.keras.layers.Dense(units=7, activation='relu'))
    #add fourth hidden layer
    ann.add(tf.keras.layers.Dense(units=7, activation='relu'))
    #add output layer
    ann.add(tf.keras.layers.Dense(units=1))
    #compile the ANN
    ann.compile(optimizer='adam', loss='mean_squared_error')
    #train ANN on training set
    ann.fit(X_train_trans, y, batch_size=32, epochs=100)
    return ann, sc

#make predictions (apply model to new data)
def predict(X_val, ann, sc):
    #transform the new data with the scaler fitted on the training
    #set -- refitting on validation data would leak information
    X_val_trans = sc.transform(X_val)
    y_pred = ann.predict(X_val_trans)
    return y_pred

from sklearn.metrics import mean_squared_error

#do k-fold cross-validation
from sklearn.model_selection import KFold

kfold = KFold(n_splits=10, shuffle=True, random_state=1)
mse_list = []
for train_idx, val_idx in kfold.split(X):
    #split data in train & val sets
    X_train = X[train_idx]
    X_val = X[val_idx]
    y_train = y[train_idx]
    y_val = y[val_idx]
    #train model and make predictions
    model, scaler = train(X_train, y_train)
    y_pred = predict(X_val, model, scaler)
    #evaluate
    mse = mean_squared_error(y_val, y_pred)
    mse_list.append(mse)

print('mse = %0.3f ± %0.3f' % (np.mean(mse_list), np.std(mse_list)))

#compare predicted values with real ones
np.set_printoptions(precision=2)
conc_vec = np.concatenate((y_pred.reshape(len(y_pred),1),
                           y_val.reshape(len(y_val),1)), 1)
conc_vec[50:100]
```
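As an aside on the feature scaling used above: in cross-validation, standardization statistics are conventionally computed on the training fold only and then reused to transform the validation fold (this is the distinction between `StandardScaler.fit_transform` and `transform`). A toy numpy sketch of that convention, with invented numbers:

```python
import numpy as np

# toy "folds" -- numbers invented for illustration
X_train = np.array([[1.], [2.], [3.], [4.]])
X_val = np.array([[10.]])

# statistics come from the training fold only
mu = X_train.mean(axis=0)
sigma = X_train.std(axis=0)

X_train_trans = (X_train - mu) / sigma
# the validation fold is transformed with the *training* statistics,
# never with statistics fitted on the validation data itself
X_val_trans = (X_val - mu) / sigma
```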
# The data block API

```
from fastai.gen_doc.nbdoc import *
from fastai.tabular import *
from fastai.text import *
from fastai.vision import *

np.random.seed(42)
```

The data block API lets you customize the creation of a [`DataBunch`](/basic_data.html#DataBunch) by isolating the underlying parts of that process in separate blocks, mainly:

1. Where are the inputs and how to create them?
1. How to split the data into a training and validation sets?
1. How to label the inputs?
1. What transforms to apply?
1. How to add a test set?
1. How to wrap in dataloaders and create the [`DataBunch`](/basic_data.html#DataBunch)?

Each of these may be addressed with a specific block designed for your unique setup. Your inputs might be in a folder, a csv file, or a dataframe. You may want to split them randomly, by certain indices or depending on the folder they are in. You can have your labels in your csv file or your dataframe, but they may come from folders or a specific function of the input. You may choose to add data augmentation or not. A test set is optional too. Finally you have to set the arguments to put the data together in a [`DataBunch`](/basic_data.html#DataBunch) (batch size, collate function...)

The data block API is called as such because you can mix and match each one of those blocks with the others, allowing total flexibility to create your customized [`DataBunch`](/basic_data.html#DataBunch) for training, validation and testing. The factory methods of the various [`DataBunch`](/basic_data.html#DataBunch) are great for beginners but you can't always make your data fit in the tracks they require.

<img src="imgs/mix_match.png" alt="Mix and match" width="200">

As usual, we'll begin with end-to-end examples, then switch to the details of each of those parts.

## Examples of use

Let's begin with our traditional MNIST example.
``` path = untar_data(URLs.MNIST_TINY) tfms = get_transforms(do_flip=False) path.ls() (path/'train').ls() ``` In [`vision.data`](/vision.data.html#vision.data), we create an easy [`DataBunch`](/basic_data.html#DataBunch) suitable for classification by simply typing: ``` data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=24) ``` This is aimed at data that is in folders following an ImageNet style, with the [`train`](/train.html#train) and `valid` directories, each containing one subdirectory per class, where all the pictures are. There is also a `test` directory containing unlabelled pictures. With the data block API, we can group everything together like this: ``` data = (ImageItemList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch data.show_batch(3, figsize=(6,6), hide_axis=False) ``` Let's look at another example from [`vision.data`](/vision.data.html#vision.data) with the planet dataset. This time, it's a multiclassification problem with the labels in a csv file and no given split between valid and train data, so we use a random split. The factory method is: ``` planet = untar_data(URLs.PLANET_TINY) planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.) data = ImageDataBunch.from_csv(planet, folder='train', size=128, suffix='.jpg', sep = ' ', ds_tfms=planet_tfms) ``` With the data block API we can rewrite this like that: ``` data = (ImageItemList.from_csv(planet, 'labels.csv', folder='train', suffix='.jpg') #Where to find the data? -> in planet 'train' folder .random_split_by_pct() #How to split in train/valid? 
-> randomly with the default 20% in valid .label_from_df(sep=' ') #How to label? -> use the csv file .transform(planet_tfms, size=128) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch data.show_batch(rows=2, figsize=(9,7)) ``` The data block API also allows you to get your data together in problems for which there is no direct [`ImageDataBunch`](/vision.data.html#ImageDataBunch) factory method. For a segmentation task, for instance, we can use it to quickly get a [`DataBunch`](/basic_data.html#DataBunch). Let's take the example of the [camvid dataset](http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/). The images are in an 'images' folder and their corresponding mask is in a 'labels' folder. ``` camvid = untar_data(URLs.CAMVID_TINY) path_lbl = camvid/'labels' path_img = camvid/'images' ``` We have a file that gives us the names of the classes (what each code inside the masks corresponds to: a pedestrian, a tree, a road...) ``` codes = np.loadtxt(camvid/'codes.txt', dtype=str); codes ``` And we define the following function that infers the mask filename from the image filename. ``` get_y_fn = lambda x: path_lbl/f'{x.stem}_P{x.suffix}' ``` Then we can easily define a [`DataBunch`](/basic_data.html#DataBunch) using the data block API. Here we need to use `tfm_y=True` in the transform call because we need the same transforms to be applied to the target mask as were applied to the image. ``` data = (SegmentationItemList.from_folder(path_img) .random_split_by_pct() .label_from_func(get_y_fn, classes=codes) .transform(get_transforms(), tfm_y=True, size=128) .databunch()) data.show_batch(rows=2, figsize=(7,5)) ``` Another example for object detection. We use our tiny sample of the [COCO dataset](http://cocodataset.org/#home) here. There is a helper function in the library that reads the annotation file and returns the list of images names with the list of labelled bboxes associated to it. 
We convert it to a dictionary that maps image names with their bboxes and then write the function that will give us the target for each image filename. ``` coco = untar_data(URLs.COCO_TINY) images, lbl_bbox = get_annotations(coco/'train.json') img2bbox = dict(zip(images, lbl_bbox)) get_y_func = lambda o:img2bbox[o.name] ``` The following code is very similar to what we saw before. The only new addition is the use of a special function to collate the samples in batches. This comes from the fact that our images may have multiple bounding boxes, so we need to pad them to the largest number of bounding boxes. ``` data = (ObjectItemList.from_folder(coco) #Where are the images? -> in coco .random_split_by_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_func(get_y_func) #How to find the labels? -> use get_y_func .transform(get_transforms(), tfm_y=True) #Data augmentation? -> Standard transforms with tfm_y=True .databunch(bs=16, collate_fn=bb_pad_collate)) #Finally we convert to a DataBunch and we use bb_pad_collate data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6)) ``` But vision isn't the only application where the data block API works. It can also be used for text and tabular data. With our sample of the IMDB dataset (labelled texts in a csv file), here is how to get the data together for a language model. ``` imdb = untar_data(URLs.IMDB_SAMPLE) data_lm = (TextList.from_csv(imdb, 'texts.csv', cols='text') #Where are the inputs? Column 'text' of this csv .random_split_by_pct() #How to split it? Randomly with the default 20% .label_for_lm() #Label it for a language model .databunch()) data_lm.show_batch() ``` For a classification problem, we just have to change the way labelling is done. Here we use the csv column `label`. 
```
data_clas = (TextList.from_csv(imdb, 'texts.csv', cols='text')
                .split_from_df(col='is_valid')
                .label_from_df(cols='label')
                .databunch())

data_clas.show_batch()
```

Lastly, for tabular data, we just have to pass the names of our categorical and continuous variables as an extra argument. We also add some [`PreProcessor`](/data_block.html#PreProcessor)s that are going to be applied to our data once the splitting and labelling is done.

```
adult = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(adult/'adult.csv')
dep_var = 'salary'
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'native-country']
cont_names = ['education-num', 'hours-per-week', 'age', 'capital-loss', 'fnlwgt', 'capital-gain']
procs = [FillMissing, Categorify, Normalize]

data = (TabularList.from_df(df, path=adult, cat_names=cat_names, cont_names=cont_names, procs=procs)
                   .split_by_idx(valid_idx=range(800,1000))
                   .label_from_df(cols=dep_var)
                   .databunch())

data.show_batch()
```

## Step 1: Provide inputs

The basic class to get your inputs into is the following one. It's also the same class that will contain all of your labels (hence the name [`ItemList`](/data_block.html#ItemList)).

```
show_doc(ItemList, title_level=3)
```

This class regroups the inputs for our model in `items` and saves a `path` attribute which is where it will look for any files (image files, csv file with labels...). `create_func` is applied to `items` to get the final output. `label_cls` will be called to create the labels from the result of the label function, `xtra` contains additional information (usually an underlying dataframe) and `processor` is to be applied to the inputs after the splitting and labelling. It has multiple subclasses depending on the type of data you're handling.
Here is a quick list: - [`CategoryList`](/data_block.html#CategoryList) for labels in classification - [`MultiCategoryList`](/data_block.html#MultiCategoryList) for labels in a multi classification problem - [`FloatList`](/data_block.html#FloatList) for float labels in a regression problem - [`ImageItemList`](/vision.data.html#ImageItemList) for data that are images - [`SegmentationItemList`](/vision.data.html#SegmentationItemList) like [`ImageItemList`](/vision.data.html#ImageItemList) but will default labels to [`SegmentationLabelList`](/vision.data.html#SegmentationLabelList) - [`SegmentationLabelList`](/vision.data.html#SegmentationLabelList) for segmentation masks - [`ObjectItemList`](/vision.data.html#ObjectItemList) like [`ImageItemList`](/vision.data.html#ImageItemList) but will default labels to `ObjectLabelList` - `ObjectLabelList` for object detection - [`PointsItemList`](/vision.data.html#PointsItemList) for points (of the type [`ImagePoints`](/vision.image.html#ImagePoints)) - [`ImageImageList`](/vision.data.html#ImageImageList) for image to image tasks - [`TextList`](/text.data.html#TextList) for text data - [`TextFilesList`](/text.data.html#TextFilesList) for text data stored in files - [`TabularList`](/tabular.data.html#TabularList) for tabular data - [`CollabList`](/collab.html#CollabList) for collaborative filtering Once you have selected the class that is suitable, you can instantiate it with one of the following factory methods ``` show_doc(ItemList.from_folder) show_doc(ItemList.from_df) show_doc(ItemList.from_csv) ``` ### Optional step: filter your data The factory method may have grabbed too many items. For instance, if you were searching sub folders with the `from_folder` method, you may have gotten files you don't want. To remove those, you can use one of the following methods. 
``` show_doc(ItemList.filter_by_func) show_doc(ItemList.filter_by_folder) show_doc(ItemList.filter_by_rand) show_doc(ItemList.to_text) show_doc(ItemList.use_partial_data) ``` ### Writing your own [`ItemList`](/data_block.html#ItemList) First check if you can't easily customize one of the existing subclasses by: - subclassing an existing one and replacing the `get` method (or the `open` method if you're dealing with images) - applying a custom `processor` (see step 4) - changing the default `label_cls` for the label creation - adding a default [`PreProcessor`](/data_block.html#PreProcessor) with the `_processor` class variable If this isn't the case and you really need to write your own class, there is a [full tutorial](/tutorial.itemlist) that explains how to proceed. ``` show_doc(ItemList.analyze_pred) show_doc(ItemList.get) show_doc(ItemList.new) ``` You'll never need to subclass this normally, just don't forget to add to `self.copy_new` the names of the arguments that need to be copied each time `new` is called in `__init__`. ``` show_doc(ItemList.reconstruct) ``` ## Step 2: Split the data between the training and the validation set This step is normally straightforward: you just have to pick one of the following functions, depending on what you need. ``` show_doc(ItemList.no_split) show_doc(ItemList.random_split_by_pct) show_doc(ItemList.split_by_files) show_doc(ItemList.split_by_fname_file) show_doc(ItemList.split_by_folder) jekyll_note("This method looks at the folder immediately after `self.path` for `valid` and `train`.") show_doc(ItemList.split_by_idx) show_doc(ItemList.split_by_idxs) show_doc(ItemList.split_by_list) show_doc(ItemList.split_by_valid_func) show_doc(ItemList.split_from_df) jekyll_warn("This method assumes the data has been created from a csv file or a dataframe.") ``` ## Step 3: Label the inputs To label your inputs, use one of the following functions.
Note that even if it's not in the documented arguments, you can always pass a `label_cls` that will be used to create those labels (the default is the one from your input [`ItemList`](/data_block.html#ItemList), and if there is none, it will go to [`CategoryList`](/data_block.html#CategoryList), [`MultiCategoryList`](/data_block.html#MultiCategoryList) or [`FloatList`](/data_block.html#FloatList) depending on the type of the labels). This is implemented in the following function: ``` show_doc(ItemList.get_label_cls) ``` The first example in these docs created labels as follows: ``` path = untar_data(URLs.MNIST_TINY) ll = ImageItemList.from_folder(path).split_by_folder().label_from_folder().train ``` If you want to save the data necessary to recreate your [`LabelList`](/data_block.html#LabelList) (not including saving the actual image/text/etc files), you can use `to_df` or `to_csv`: ```python ll.train.to_csv('tmp.csv') ``` Or just grab a `pd.DataFrame` directly: ``` ll.to_df().head() show_doc(ItemList.label_empty) show_doc(ItemList.label_from_list) show_doc(ItemList.label_from_df) jekyll_warn("This method only works with data objects created with either `from_csv` or `from_df` methods.") show_doc(ItemList.label_const) show_doc(ItemList.label_from_folder) jekyll_note("This method looks at the last subfolder in the path to determine the classes.") show_doc(ItemList.label_from_func) show_doc(ItemList.label_from_re) show_doc(CategoryList, title_level=3) ``` [`ItemList`](/data_block.html#ItemList) suitable for storing labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. `processor` will default to [`CategoryProcessor`](/data_block.html#CategoryProcessor). ``` show_doc(MultiCategoryList, title_level=3) ``` It will store list of labels in `items` belonging to `classes`. If `None` are passed, `classes` will be determined by the unique different labels. 
`sep` is used to split the content of `items` into a list of tags. If `one_hot=True`, the items contain the labels one-hot encoded. In this case, it is mandatory to pass a list of `classes` (as we can't use the different labels). ``` show_doc(FloatList, title_level=3) show_doc(EmptyLabelList, title_level=3) ``` ## Invisible step: preprocessing This isn't seen here in the API, but if you passed a `processor` (or a list of them) in your initial [`ItemList`](/data_block.html#ItemList) during step 1, it will be applied here. If you didn't pass any processor, a list of them might still be created depending on what is in the `_processor` variable of your class of items (this can be a list of [`PreProcessor`](/data_block.html#PreProcessor) classes). A processor is a transformation that is applied to all the inputs once at initialization, with a state computed on the training set that is then applied without modification on the validation set (and maybe the test set). For instance, it can process texts to tokenize and then numericalize them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set. Another example is in tabular data, where we fill missing values with (for instance) the median computed on the training set. That statistic is stored in the inner state of the [`PreProcessor`](/data_block.html#PreProcessor) and applied on the validation set. This is the generic class for all processors. ``` show_doc(PreProcessor, title_level=3) show_doc(PreProcessor.process_one) ``` Process one `item`. This method needs to be written in any subclass. ``` show_doc(PreProcessor.process) ``` Process a dataset. This defaults to applying `process_one` to every `item` of `ds`.
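The train/validation contract described above — fit the state once on the training items, then reapply it unchanged — can be sketched outside fastai. The class below is a hypothetical stand-in, not the actual [`PreProcessor`](/data_block.html#PreProcessor) API; it fills missing values with a median computed only on the training data:

```python
# A stand-alone sketch of the processor contract (hypothetical class, NOT the
# fastai PreProcessor API): the state -- here a median used to fill missing
# values -- is computed once on the training items and then reused without
# modification on the validation items.
class MedianFillProcessor:
    def __init__(self):
        self.median = None  # inner state, set while processing the training set

    def process(self, items, is_train):
        if is_train:
            present = sorted(x for x in items if x is not None)
            self.median = present[len(present) // 2]
        # validation (and test) items reuse the training-set median
        return [self.median if x is None else x for x in items]

proc = MedianFillProcessor()
train = proc.process([1, None, 3, 5], is_train=True)   # median computed here: 3
valid = proc.process([None, 10], is_train=False)       # reuses that median
print(train, valid)  # [1, 3, 3, 5] [3, 10]
```

The same pattern covers the tokenization example: the vocabulary would be the inner state built on the training texts and reused verbatim on the validation texts.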
``` show_doc(CategoryProcessor, title_level=3) show_doc(CategoryProcessor.generate_classes) show_doc(MultiCategoryProcessor, title_level=3) show_doc(MultiCategoryProcessor.generate_classes) ``` ## Optional steps ### Add transforms Transforms differ from processors in the sense that they are applied on the fly when we grab one item. They also may change each time we ask for the same item in the case of random transforms. ``` show_doc(LabelLists.transform) ``` This is primarily for the vision application. The `kwargs` are the ones expected by the type of transforms you pass. `tfm_y` is among them and if set to `True`, the transforms will be applied to input and target. ### Add a test set To add a test set, you can use one of the two following methods. ``` show_doc(LabelLists.add_test) jekyll_note("Here `items` can be an `ItemList` or a collection.") show_doc(LabelLists.add_test_folder) ``` **Important**! No labels will be collected, even if available. Instead, either the passed `label` argument or a first label from `train_ds` will be used for all entries of this dataset. In the `fastai` framework `test` datasets have no labels - this is the unknown data to be predicted. If you want to use a `test` dataset with labels, you probably need to use it as a validation set, as in: ``` data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() ...) 
``` Another approach is to use a normal validation set during training and then, once training is over, validate the labelled test set by swapping it in as the validation set: ``` tfms = [] path = Path('data').resolve() data = (ImageItemList.from_folder(path) .random_split_by_pct() .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn = create_cnn(data, models.resnet50, metrics=accuracy) learn.fit_one_cycle(5,1e-2) # now replace the validation dataset entry with the test dataset as a new validation dataset: # everything is exactly the same, except replacing `random_split_by_pct` w/ `split_by_folder` # (or perhaps you were already using the latter, so simply switch to valid='test') data_test = (ImageItemList.from_folder(path) .split_by_folder(train='train', valid='test') .label_from_folder() .transform(tfms) .databunch() .normalize() ) learn.data = data_test learn.validate() ``` Of course, your data block can be totally different, this is just an example. ## Step 4: convert to a [`DataBunch`](/basic_data.html#DataBunch) This last step is usually pretty straightforward. You just have to include all the arguments we pass to [`DataBunch.create`](/basic_data.html#DataBunch.create) (`bs`, `num_workers`, `collate_fn`). The class called to create a [`DataBunch`](/basic_data.html#DataBunch) is set in the `_bunch` attribute of the inputs of the training set if you need to modify it. Normally, the various subclasses we showed before handle that for you. ``` show_doc(LabelLists.databunch) ``` ## Inner classes ``` show_doc(LabelList, title_level=3) ``` Optionally apply `tfms` to `y` if `tfm_y` is `True`.
``` show_doc(LabelList.export) show_doc(LabelList.transform_y) show_doc(LabelList.get_state) show_doc(LabelList.load_empty) show_doc(LabelList.load_state) show_doc(LabelList.process) show_doc(LabelList.set_item) show_doc(LabelList.to_df) show_doc(LabelList.to_csv) show_doc(LabelList.transform) show_doc(ItemLists, title_level=3) show_doc(ItemLists.label_from_lists) show_doc(ItemLists.transform) show_doc(ItemLists.transform_y) show_doc(LabelLists, title_level=3) show_doc(LabelLists.get_processors) show_doc(LabelLists.load_empty) show_doc(LabelLists.load_state) show_doc(LabelLists.process) ``` ## Helper functions ``` show_doc(get_files) ``` ## Undocumented Methods - Methods moved below this line will intentionally be hidden ``` show_doc(CategoryList.new) show_doc(LabelList.new) show_doc(CategoryList.get) show_doc(LabelList.predict) show_doc(ItemList.new) show_doc(ItemList.process_one) show_doc(ItemList.process) show_doc(MultiCategoryProcessor.process_one) show_doc(FloatList.get) show_doc(CategoryProcessor.process_one) show_doc(CategoryProcessor.create_classes) show_doc(CategoryProcessor.process) show_doc(MultiCategoryList.get) show_doc(FloatList.new) show_doc(FloatList.reconstruct) show_doc(MultiCategoryList.analyze_pred) show_doc(MultiCategoryList.reconstruct) show_doc(CategoryList.reconstruct) show_doc(CategoryList.analyze_pred) show_doc(EmptyLabelList.reconstruct) show_doc(EmptyLabelList.get) show_doc(LabelList.databunch) ``` ## New Methods - Please document or move to the undocumented section
This notebook is intended to test the mavenn functionality that enables the computation of parameter uncertainties via inference on simulated data. ``` # Standard imports import pandas as pd import matplotlib.pyplot as plt import numpy as np import re import seaborn as sns import time import tensorflow as tf %matplotlib inline #Load mavenn and check path import mavenn !rm -rf 'models/' !mkdir 'models/' # Load example data data_df = mavenn.load_example_dataset('mpsa') trainval_df, test_df = mavenn.split_dataset(data_df) print(f'training + validation N: {len(trainval_df):,}') trainval_df.head(10) # Get sequence length L = len(trainval_df['x'][0]) # Define model model = mavenn.Model(L=L, alphabet='rna', gpmap_type='pairwise', regression_type='GE', ge_noise_model_type='SkewedT', ge_heteroskedasticity_order=2) # Set training data model.set_data(x=trainval_df['x'], y=trainval_df['y'], validation_flags=trainval_df['validation'], shuffle=True) # Fit model to data model.fit(learning_rate=.001, epochs=300, batch_size=200, early_stopping=True, early_stopping_patience=30, try_tqdm = True, linear_initialization=True, verbose=False) # Save model model.save('models/mpsa_ge_pairwise') # Evaluate on held-out test data, then show training history print('On test data:') # Get x and y x_test = test_df['x'].values y_test = test_df['y'].values # Compute variational information I_var, dI_var = model.I_variational(x=x_test, y=y_test) print(f'I_var_test: {I_var:.3f} +- {dI_var:.3f} bits') # Compute predictive information I_pred, dI_pred = model.I_predictive(x=x_test, y=y_test) print(f'I_pred_test: {I_pred:.3f} +- {dI_pred:.3f} bits') I_var_hist = model.history['I_var'] val_I_var_hist = model.history['val_I_var'] fig, ax = plt.subplots(1,1,figsize=[4,4]) ax.plot(I_var_hist, label='I_var_train') ax.plot(val_I_var_hist, label='I_var_val') ax.axhline(I_var, color='C2', linestyle=':', label='I_var_test') ax.axhline(I_pred, color='C3', linestyle=':', label='I_pred_test') ax.legend() ax.set_xlabel('epochs') ax.set_ylabel('bits') 
ax.set_title('training history') #ax.set_ylim([0, I_pred*1.2]); # Predict latent phenotype values (phi) on test data phi_test = model.x_to_phi(x_test) # Predict measurement values (yhat) on test data yhat_test = model.x_to_yhat(x_test) # Set phi lims and create grid in phi space phi_lim = [min(phi_test)-.5, max(phi_test)+.5] phi_grid = np.linspace(phi_lim[0], phi_lim[1], 1000) # Compute yhat at each phi gridpoint yhat_grid = model.phi_to_yhat(phi_grid) # Compute 90% CI for each yhat q = [0.05, 0.95] #[0.16, 0.84] yqs_grid = model.yhat_to_yq(yhat_grid, q=q) # Create figure fig, ax = plt.subplots(1, 1, figsize=[4, 4]) # Illustrate measurement process with GE curve ax.scatter(phi_test, y_test, color='C0', s=5, alpha=.2, label='test data') ax.plot(phi_grid, yhat_grid, linewidth=2, color='C1', label='$\hat{y} = g(\phi)$') ax.plot(phi_grid, yqs_grid[:, 0], linestyle='--', color='C1', label='90% CI') ax.plot(phi_grid, yqs_grid[:, 1], linestyle='--', color='C1') ax.set_xlim(phi_lim) ax.set_xlabel('latent phenotype ($\phi$)') ax.set_ylabel('measurement ($y$)') ax.set_title('measurement process') ax.legend() # Fix up plot fig.tight_layout() plt.show() #uncertainty_dict = model.compute_parameter_uncertainties(num_simulations=2) sim_models = model.bootstrap(data_df=data_df, num_models=20, initialize_from_fit_model=True,) # Save models for model_num, sim_model in enumerate(sim_models): sim_model.save(f'models/mpsa_ge_pairwise_sim_{model_num:02d}') !ls -lah models/ ```
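As a rough illustration of what the bootstrap above buys you (made-up numbers, independent of mavenn's actual API): once each refit model yields a parameter vector, the spread across refits estimates each parameter's uncertainty.

```python
import numpy as np

# Hypothetical stand-in for parameters collected from bootstrap refits; with
# real models you would gather the fitted parameters of each element of
# sim_models instead of simulating them here.
rng = np.random.default_rng(0)
true_theta = np.array([1.0, -0.5, 0.25])
boot_thetas = true_theta + 0.05 * rng.standard_normal((20, 3))  # 20 refits

theta_mean = boot_thetas.mean(axis=0)  # point estimate per parameter
theta_std = boot_thetas.std(axis=0)    # bootstrap standard error per parameter
for m, s in zip(theta_mean, theta_std):
    print(f'{m:+.3f} +- {s:.3f}')
```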
# 2-3.1 Intro Python # The Power of List Iteration - **for in: `for` loop using `in`** - for range: **`for range(start,stop,step)`** - more list methods: **`.extend()`, `+, .reverse(), .sort()`** - strings to lists, **`.split()`**, and list to strings, **`.join()`** ----- ><font size="5" color="#00A0B2" face="verdana"> <B>Student will be able to</B></font> - **Iterate through Lists** using **`for`** with **`in`** - Use **`for range()`** in looping operations - Use list methods **`.extend()`, `+, .reverse(), .sort()`** - convert between lists and strings using **`.split()`** and **`.join()`** # &nbsp; <font size="6" color="#00A0B2" face="verdana"> <B>Concepts</B></font> ## Iterate through Lists using # ` for in` [![view video](https://iajupyterprodblobs.blob.core.windows.net/imagecontainer/common/play_video.png)]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/3545f443-d2b5-4d77-9a8c-cfe07976c697/Unit2_Section3.1a-Iterate_through_Lists.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/3545f443-d2b5-4d77-9a8c-cfe07976c697/Unit2_Section3.1a-Iterate_through_Lists.vtt","srclang":"en","kind":"subtitles","label":"english"}]) ```python cities = ["New York", "Shanghai", "Munich", "Tokyo", "Dubai", "Mexico City", "São Paulo", "Hyderabad"] for city in cities: print(city) ``` # &nbsp; <font size="6" color="#00A0B2" face="verdana"> <B>Examples</B></font> ``` # [ ] review and run example cities = ["New York", "Shanghai", "Munich", "Tokyo", "Dubai", "Mexico City", "São Paulo", "Hyderabad"] for city in cities: print(city) # [ ] review and run example sales = [6, 8, 9, 11, 12, 17, 19, 20, 22] total = 0 for sale in sales: total += sale print("total sales:", total) ``` change the iterator variable name from "sale" to "dollars" or to any valid name ``` # [ ] review and run example 
sales = [6, 8, 9, 11, 12, 17, 19, 20, 22] total = 0 for dollars in sales: total += dollars print("total sales:", total) ``` # &nbsp; <font size="6" color="#B24C00" face="verdana"> <B>Task 1</B></font> ### Iterate through Lists using `in` ``` # [ ] create a list of 4 to 6 strings: birds # print each bird in the list # [ ] create a list of 7 integers: player_points # [ ] print double the points for each point value # [ ] create long_string by concatenating the items in the "birds" list previously created # print long_string - make sure to put a space between the bird names ``` # &nbsp; <font size="6" color="#00A0B2" face="verdana"> <B>Concepts</B></font> ## Sort and Filter [![view video](https://iajupyterprodblobs.blob.core.windows.net/imagecontainer/common/play_video.png)]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/32652fe3-7d3e-4b7b-b9fb-7f87a6c0ad59/Unit2_Section3.1b-Sort_and_Filter.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/32652fe3-7d3e-4b7b-b9fb-7f87a6c0ad59/Unit2_Section3.1b-Sort_and_Filter.vtt","srclang":"en","kind":"subtitles","label":"english"}]) ### use comparison operators while iterating lists ### &nbsp; <font size="6" color="#00A0B2" face="verdana"> <B>Examples</B></font> ``` # [ ] review and run example of sorting into strings to display foot_bones = ["calcaneus", "talus", "cuboid", "navicular", "lateral cuneiform", "intermediate cuneiform", "medial cuneiform"] longer_names = "" shorter_names = "" for bone_name in foot_bones: if len(bone_name) < 10: shorter_names += "\n" + bone_name else: longer_names += "\n" + bone_name print(shorter_names) print(longer_names) # [ ] review and run example of sorting into lists foot_bones = ["calcaneus", "talus", "cuboid", "navicular", "lateral cuneiform", "intermediate cuneiform", "medial cuneiform"] 
longer_names = [] shorter_names = [] for bone_name in foot_bones: if len(bone_name) < 10: shorter_names.append(bone_name) else: longer_names.append(bone_name) print(shorter_names) print(longer_names) ``` # &nbsp; <font size="6" color="#B24C00" face="verdana"> <B>Task 2</B></font> ## sort and filter ``` # [ ] Using cities from the example above iterate through the list using "for"/"in" # [ ] Print only cities starting with "m" # [ ] Using cities from the example above iterate through the list using "for"/"in" # cities = ["New York", "Shanghai", "Munich", "Tokyo", "Dubai", "Mexico City", "São Paulo", "Hyderabad"] # [ ] sort into lists with "A" in the city name and without "A" in the name: a_city & no_a_city ``` # &nbsp; <font size="6" color="#00A0B2" face="verdana"> <B>Concepts</B></font> ## More iteration of lists ## - Counting ## - Searching [![view video](https://iajupyterprodblobs.blob.core.windows.net/imagecontainer/common/play_video.png)]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/72acdbd5-454d-4900-b381-387ad49596f1/Unit2_Section3.1c_count_in_iteration.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/72acdbd5-454d-4900-b381-387ad49596f1/Unit2_Section3.1c_count_in_iteration.vtt","srclang":"en","kind":"subtitles","label":"english"}]) ### use string methods while iterating lists # &nbsp; <font size="6" color="#00A0B2" face="verdana"> <B>Examples</B></font> ``` # [ ] review and run example # iterates the "cities" list, count & sum letter "a" in each city name cities = ["New York", "Shanghai", "Munich", "Tokyo", "Dubai", "Mexico City", "São Paulo", "Hyderabad"] search_letter = "a" total = 0 for city_name in cities: total += city_name.lower().count(search_letter) print("The total # of \"" + search_letter + "\" found in the list is", total) ``` <font 
size="4" color="#00A0B2" face="verdana"> <B>search function</B></font> [![view video](https://iajupyterprodblobs.blob.core.windows.net/imagecontainer/common/play_video.png)](http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/2fa709be-5857-4291-beee-4d893d93468e/Unit2_Section3.1d_city_search_function.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/2fa709be-5857-4291-beee-4d893d93468e/Unit2_Section3.1d_city_search_function.vtt","srclang":"en","kind":"subtitles","label":"english"}]) ``` # [ ] review and run example # city_search function has a default list of cities to search def city_search(search_item, cities = ["New York", "Shanghai", "Munich", "Tokyo"] ): for city in cities: if city.lower() == search_item.lower(): return True else: # go to the next item pass # no more items in list return False # a list of cities visited_cities = ["New York", "Shanghai", "Munich", "Tokyo", "Dubai", "Mexico City", "São Paulo", "Hyderabad"] search = input("enter a city visited: ") # Search the default city list print(search, "in default cities is", city_search(search)) # search the list visited_cities using 2nd argument print(search, "in visited_cities list is", city_search(search,visited_cities)) ``` # &nbsp; <font size="6" color="#B24C00" face="verdana"> <B>Task 3</B></font> ## Program: Paint Stock check a list for a paint color request and print status of color "found"/"not found" - create list, paint_colors, with 5+ colors - get user input of string:color_request - iterate through each color in paint_colors to check for a match with color_request ``` # [ ] complete paint stock ``` # &nbsp; <font size="6" color="#B24C00" face="verdana"> <B>Task 4</B></font> ## Program: Foot Bones Quiz **Create a function** that will iterate through foot_bones looking for a match of a string 
argument - Call the function 2 times with the name of a footbone - print immediate feedback for each answer (correct - incorrect) - print the total # of foot_bones identified The program will use the foot_bones list: ```python foot_bones = ["calcaneus", "talus", "cuboid", "navicular", "lateral cuneiform", "intermediate cuneiform", "medial cuneiform"] ``` Bonus: remove correct response item from list if correct so user cannot answer same item twice ``` # [ ] Complete Foot Bones Quiz # foot_bones = ["calcaneus", "talus", "cuboid", "navicular", "lateral cuneiform", # "intermediate cuneiform", "medial cuneiform"] # [ ] bonus version ``` [Terms of use](http://go.microsoft.com/fwlink/?LinkID=206977) &nbsp; [Privacy & cookies](https://go.microsoft.com/fwlink/?LinkId=521839) &nbsp; © 2017 Microsoft
``` import sys import time import numpy as np sys.path.append('..') from deepgraph.utils.logging import log from deepgraph.utils.common import batch_parallel, ConfigMixin, shuffle_in_unison_inplace, pickle_dump from deepgraph.utils.image import batch_pad_mirror from deepgraph.constants import * from deepgraph.conf import rng from deepgraph.pipeline import Processor, Packet from deepgraph.nn.init import * class Transformer(Processor): """ Apply online random augmentation. """ def __init__(self, name, shapes, config, buffer_size=10): super(Transformer, self).__init__(name, shapes, config, buffer_size) self.mean = None def init(self): if self.conf("mean_file") is not None: self.mean = np.load(self.conf("mean_file")) else: log("Transformer - No mean file specified.", LOG_LEVEL_WARNING) def process(self): packet = self.pull() # Return if no data is there if not packet: return False # Unpack data, label = packet.data # Do processing log("Transformer - Processing data", LOG_LEVEL_VERBOSE) h = 240 w = 320 start = time.time() # Mean if packet.phase == PHASE_TRAIN or packet.phase == PHASE_VAL: data = data.astype(np.float32) if self.mean is not None: std = self.conf("std") for idx in range(data.shape[0]): # Subtract mean data[idx] = data[idx] - self.mean.astype(np.float32) if std is not None: data[idx] = data[idx] * std if self.conf("offset") is not None: label -= self.conf("offset") if packet.phase == PHASE_TRAIN: # Do elementwise operations data_old = data label_old = label data = np.zeros((data_old.shape[0], data_old.shape[1], h, w), dtype=np.float32) label = np.zeros((label_old.shape[0], h, w), dtype=np.float32) for idx in range(data.shape[0]): # Rotate # We rotate before cropping to be able to get filled corners # Maybe even adjust the border after rotating deg = np.random.randint(-5,6) # Operate on old data. Careful - data is already in float so we need to normalize and rescale afterwards # data_old[idx] = 255. 
* rotate_transformer_rgb_uint8(data_old[idx] * 0.003921568627, deg).astype(np.float32) # label_old[idx] = rotate_transformer_scalar_float32(label_old[idx], deg) # Take care of any empty areas, we crop on a smaller surface depending on the angle # TODO Remove this once loss supports masking shift = 0 #np.tan((deg/180.) * math.pi) # Random crops #cy = rng.randint(data_old.shape[2] - h - shift, size=1) #cx = rng.randint(data_old.shape[3] - w - shift, size=1) data[idx] = data_old[idx] label[idx] = label_old[idx] # Flip horizontally with probability 0.5 """ p = rng.randint(2) if p > 0: data[idx] = data[idx, :, :, ::-1] label[idx] = label[idx, :, ::-1] # RGB we mult with a random value between 0.8 and 1.2 r = rng.randint(80,121) / 100. g = rng.randint(80,121) / 100. b = rng.randint(80,121) / 100. data[idx, 0] = data[idx, 0] * r data[idx, 1] = data[idx, 1] * g data[idx, 2] = data[idx, 2] * b """ # Shuffle # data, label = shuffle_in_unison_inplace(data, label) elif packet.phase == PHASE_VAL: # Center crop pass #cy = (data.shape[2] - h) // 2 #cx = (data.shape[3] - w) // 2 #data = data[:, :, cy:cy+h, cx:cx+w] #label = label[:, cy:cy+h, cx:cx+w] end = time.time() log("Transformer - Processing took " + str(end - start) + " seconds.", LOG_LEVEL_VERBOSE) # Try to push into queue as long as thread should not terminate self.push(Packet(identifier=packet.id, phase=packet.phase, num=2, data=(data, label))) return True def setup_defaults(self): super(Transformer, self).setup_defaults() self.conf_default("mean_file", None) self.conf_default("offset", None) self.conf_default("std", 1.0) from theano.tensor.nnet import relu from deepgraph.graph import * from deepgraph.nn.core import * from deepgraph.nn.conv import * from deepgraph.nn.loss import * from deepgraph.pipeline import Optimizer, H5DBLoader, Pipeline # Print to console for testing def build_u_graph(): graph = Graph("u_depth") """ Inputs """ data = Data(graph, "data", T.ftensor4, shape=(-1, 3, 240, 320)) label = Data(graph, 
"label", T.ftensor3, shape=(-1, 1, 240, 320), config={ "phase": PHASE_TRAIN }) """ Contractive part """ conv_1 = Conv2D( graph, "conv_1", config={ "channels": 64, "kernel": (3, 3), "border_mode": 1, "activation": relu, "weight_filler": xavier(gain="relu"), "bias_filler": constant(0) } ) conv_2 = Conv2D( graph, "conv_2", config={ "channels": 64, "kernel": (3, 3), "border_mode": 1, "activation": relu, "weight_filler": xavier(gain="relu"), "bias_filler": constant(0) } ) pool_2 = Pool(graph, "pool_2", config={ "kernel": (2, 2) }) conv_3 = Conv2D( graph, "conv_3", config={ "channels": 128, "kernel": (3, 3), "border_mode": 1, "activation": relu, "weight_filler": xavier(gain="relu"), "bias_filler": constant(0) } ) conv_4 = Conv2D( graph, "conv_4", config={ "channels": 128, "kernel": (3, 3), "border_mode": 1, "activation": relu, "weight_filler": xavier(gain="relu"), "bias_filler": constant(0) } ) pool_4 = Pool(graph, "pool_4", config={ "kernel": (2, 2) }) conv_5 = Conv2D( graph, "conv_5", config={ "channels": 256, "kernel": (3, 3), "border_mode": 1, "activation": relu, "weight_filler": xavier(gain="relu"), "bias_filler": constant(0) } ) conv_6 = Conv2D( graph, "conv_6", config={ "channels": 256, "kernel": (3, 3), "border_mode": 1, "activation": relu, "weight_filler": xavier(gain="relu"), "bias_filler": constant(0) } ) pool_6 = Pool(graph, "pool_6", config={ "kernel": (2, 2), }) conv_7 = Conv2D( graph, "conv_7", config={ "channels": 512, "kernel": (3, 3), "border_mode": 1, "activation": relu, "weight_filler": xavier(gain="relu"), "bias_filler": constant(0) } ) conv_8 = Conv2D( graph, "conv_8", config={ "channels": 512, "kernel": (3, 3), "border_mode": 1, "activation": relu, "weight_filler": xavier(gain="relu"), "bias_filler": constant(0) } ) pool_8 = Pool(graph, "pool_8", config={ "kernel": (2, 2) }) """ Prediction core """ conv_9 = Conv2D( graph, "conv_9", config={ "channels": 128, "kernel": (3, 3), "border_mode": 1, "activation": relu, "weight_filler": 
xavier(gain="relu"), "bias_filler": constant(0) } ) fl_10 = Flatten(graph, "pc_10", config={ "dims" : 2 }) fc_10a = Dense(graph, "fc_10a", config={ "out": 4096, "activation": None, "weight_filler": xavier(), "bias_filler": constant(1) }) dp_10a = Dropout(graph, "dp_10a", config={ }) fc_10 = Dense(graph, "fc_10", config={ "out": 19200, "activation": None, "weight_filler": xavier(), "bias_filler": constant(1) }) dp_10 = Dropout(graph, "dp_10", config={ }) rs_10 = Reshape(graph, "rs_10", config={ "shape": (-1, 64, 15, 20) }) conv_10 = Conv2D( graph, "conv_10", config={ "channels": 64, "kernel": (3, 3), "border_mode": 1, "activation": relu, "weight_filler": xavier(gain="relu"), "bias_filler": constant(0) } ) """ Expansive path """ up_11 = Upsample(graph, "up_11", config={ "kernel": (2, 2) }) upconv_11 = Conv2D( graph, "upconv_11", config={ "channels": 512, "kernel": (3,3), "border_mode": 1, "activation": None, "weight_filler": xavier(), "bias_filler": constant(0) }) conv_12 = Conv2D( graph, "conv_12", config={ "channels": 512, "kernel": (3, 3), "border_mode": 1, "activation": relu, "weight_filler": xavier(gain="relu"), "bias_filler": constant(0) } ) conv_13 = Conv2D( graph, "conv_13", config={ "channels": 512, "kernel": (3, 3), "border_mode": 1, "activation": relu, "weight_filler": xavier(gain="relu"), "bias_filler": constant(0) } ) up_14 = Upsample(graph, "up_14", config={ "kernel": (2, 2) }) upconv_14 = Conv2D( graph, "upconv_14", config={ "channels": 256, "kernel": (3,3), "border_mode": 1, "activation": None, "weight_filler": xavier(), "bias_filler": constant(0) }) conv_15 = Conv2D( graph, "conv_15", config={ "channels": 256, "kernel": (3, 3), "border_mode": 1, "weight_filler": xavier(), "bias_filler": constant(0) } ) conv_16 = Conv2D( graph, "conv_16", config={ "channels": 256, "kernel": (3, 3), "border_mode": 1, "activation": relu, "weight_filler": xavier(gain="relu"), "bias_filler": constant(0) } ) up_17 = Upsample(graph, "up_17", config={ "kernel": (2, 2) }) 
upconv_17 = Conv2D( graph, "upconv_17", config={ "channels": 128, "kernel": (3,3), "border_mode": 1, "activation": None, "weight_filler": xavier(), "bias_filler": constant(0) }) conv_18 = Conv2D( graph, "conv_18", config={ "channels": 128, "kernel": (3, 3), "border_mode": 1, "activation": relu, "weight_filler": xavier(gain="relu"), "bias_filler": constant(0) } ) conv_19 = Conv2D( graph, "conv_19", config={ "channels": 128, "kernel": (3, 3), "border_mode": 1, "activation": relu, "weight_filler": xavier(gain="relu"), "bias_filler": constant(0) } ) up_20 = Upsample(graph, "up_20", config={ "mode": "constant", "kernel": (2, 2) }) upconv_20 = Conv2D( graph, "upconv_20", config={ "channels": 64, "kernel": (3,3), "border_mode": 1, "activation": None, "weight_filler": xavier(), "bias_filler": constant(0) }) conv_21 = Conv2D( graph, "conv_21", config={ "channels": 64, "kernel": (3, 3), "border_mode": 1, "activation": relu, "weight_filler": xavier(gain="relu"), "bias_filler": constant(0) } ) conv_22 = Conv2D( graph, "conv_22", config={ "channels": 64, "kernel": (3, 3), "border_mode": 1, "activation": relu, "weight_filler": xavier(gain="relu"), "bias_filler": constant(0) } ) conv_23 = Conv2D( graph, "conv_23", config={ "channels": 1, "kernel": (1, 1), "weight_filler": xavier(), "bias_filler": constant(0) } ) """ Feed forward nodes """ concat_20 = Concatenate(graph, "concat_20", config={ "axis": 1 }) concat_17 = Concatenate(graph, "concat_17", config={ "axis": 1 }) concat_14 = Concatenate(graph, "concat_14", config={ "axis": 1 }) concat_11 = Concatenate(graph, "concat_11", config={ "axis": 1 }) """ Losses / Error """ loss = EuclideanLoss(graph, "loss") error = MSE(graph, "mse", config={ "root": True, "is_output": True, "phase": PHASE_TRAIN }) """ Make connections """ data.connect(conv_1) conv_1.connect(conv_2) conv_2.connect(concat_20) conv_2.connect(pool_2) pool_2.connect(conv_3) conv_3.connect(conv_4) conv_4.connect(concat_17) conv_4.connect(pool_4) pool_4.connect(conv_5) 
conv_5.connect(conv_6) conv_6.connect(concat_14) conv_6.connect(pool_6) pool_6.connect(conv_7) conv_7.connect(conv_8) conv_8.connect(concat_11) conv_8.connect(pool_8) pool_8.connect(conv_9) conv_9.connect(fl_10) fl_10.connect(fc_10a) fc_10a.connect(dp_10a) dp_10a.connect(fc_10) fc_10.connect(dp_10) dp_10.connect(rs_10) rs_10.connect(conv_10) conv_10.connect(up_11) up_11.connect(upconv_11) upconv_11.connect(concat_11) concat_11.connect(conv_12) conv_12.connect(conv_13) conv_13.connect(up_14) up_14.connect(upconv_14) upconv_14.connect(concat_14) concat_14.connect(conv_15) conv_15.connect(conv_16) conv_16.connect(up_17) up_17.connect(upconv_17) upconv_17.connect(concat_17) concat_17.connect(conv_18) conv_18.connect(conv_19) conv_19.connect(up_20) up_20.connect(upconv_20) upconv_20.connect(concat_20) concat_20.connect(conv_21) conv_21.connect(conv_22) conv_22.connect(conv_23) conv_23.connect(loss) label.connect(loss) conv_23.connect(error) label.connect(error) return graph if __name__ == "__main__": batch_size = 8 chunk_size = 10*batch_size transfer_shape = ((chunk_size, 3, 240, 320), (chunk_size, 240, 320)) g = build_u_graph() # Build the training pipeline db_loader = H5DBLoader("db", ((chunk_size, 3, 480, 640), (chunk_size, 1, 480, 640)), config={ "db": '/home/ga29mix/nashome/data/nyu_depth_v2_combined_50.hdf5', # "db": '../data/nyu_depth_unet_large.hdf5', "key_data": "images", "key_label": "depths", "chunk_size": chunk_size }) transformer = Transformer("tr", transfer_shape, config={ # Measured for the data-set # "offset": 2.7321029 "mean_file" : "/home/ga29mix/nashome/data/nyu_depth_v2_combined_50.npy", "std": 1.0 / 76.18328376 }) optimizer = Optimizer("opt", g, transfer_shape, config={ "batch_size": batch_size, "chunk_size": chunk_size, "learning_rate": 0.001, # "learning_rate": 0.000001 converges ( a bit) "momentum": 0.9, "weight_decay": 0.0005, "print_freq": 200, "save_freq": 10000, # "weights": "../data/vnet_init_2_iter_4500.zip", "save_prefix": "../data/vnet" 
})
p = Pipeline(config={
    "validation_frequency": 15,
    "cycles": 3000
})
p.add(db_loader)
p.add(transformer)
p.add(optimizer)
p.run()

%matplotlib inline
import matplotlib.pyplot as plt

l = np.array([e["loss"] for e in optimizer.losses])
print(len(l))
plt.plot(l)
# print(g.last_updates[2].get_value())
```
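As a quick sanity check on the graph above (assuming the 240×320 input shape declared for the `data`/`label` tensors): the contractive path applies four 2×2 poolings, so the bottleneck feature maps are 15×20 — exactly what the `fc_10` width of 19200 (= 64·15·20) and the `rs_10` reshape to `(-1, 64, 15, 20)` encode.

```
# Spatial bookkeeping for the U-graph above: 240x320 input, four 2x2 poolings.
h, w = 240, 320
for _ in range(4):      # pool_2, pool_4, pool_6, pool_8
    h, w = h // 2, w // 2
print(h, w)             # -> 15 20, the bottleneck feature-map size
print(64 * h * w)       # -> 19200, matches fc_10 "out" and rs_10 (-1, 64, 15, 20)
```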
# Optical Tweezers 292 GUI

## Functionality:
- Connects to the Optical Tweezers rig through NI-DAQ control software and drivers to gather and analyze data.

## Support

| | Supports Functionality |
|-----------|--------------|
|Taking Sine| Yes |
|Graphing Sine| Yes |
|Taking Sawtooth| Yes |
|Graphing Sawtooth| Yes |
|Taking Power Spectrum| Yes |
|Graphing Power Spectrum| In Progress... |

## Guides:
GUI Documentation: [292 Optical Tweezers Documentation](https://docs.google.com/document/d/1UIyrJpPVibWfCxfNz_ZTX6MNZtwCGOgMLPl32wL9eTw/edit?usp=sharing)

## Contributors:
- Emma Li
- Ryland Birchmeier

```
%matplotlib notebook
import numpy as np
import scipy
from scipy import signal
import matplotlib.pyplot as plt
import time
import sys
import glob
import math
from multiprocessing import Process
import threading
from threading import Thread

"""
Using IPython File Upload: https://github.com/peteut/ipython-file-upload
pip install fileupload
jupyter nbextension install --py fileupload
jupyter nbextension enable --py fileupload
"""
import io
from IPython.display import display
# import fileupload
from PyDAQmx import *
import ctypes
from ctypes import byref
import pickle
import PySimpleGUI as sg
# matplotlib tkinter canvas import
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
import webbrowser
from scipy.optimize import leastsq
import traceback
from PIL import Image, ImageTk, ImageSequence
# import keyboard

print("Imports Ran Successfully")

#################################################################################
#
# To get this working on a new machine you might need to:
# * install the NI-DAQmx drivers from National Instruments
# * install PyDAQmx ('pip install PyDAQmx' should do it)
# * install PySimpleGUI ('pip install PySimpleGUI' or 'conda install -c conda-forge pysimplegui')
#
#################################################################################

class GlobalVariables:
    gif_filename = "Loading (3).gif"
    save_folder = ""
    analyze_folder = ""
    volts_to_um
= 2.58 # Recalibrate for Your Own Use!! viscosity = 8.9e-4 radius_bead = 6.25e-7 # sg.theme("BlueMono") # sg.theme("DarkTanBlue") sg.theme("DarkBlue") laser_power = 105 trial_num = 0 frequency = 12.5 amplitude = 0.25 num_trials = 6 test_voltage = 5 osc_num = 4 samp_freq = 5000 offset = 7.5 waveform = 'Sine' test_axis = 'X-Axis' oscillation_axis = 'X-Axis' # Power Spectrum sampling_frequency = 100000 number_datapoints = 100000 number_runs_per_bead = 10 ## data_loader = None app_state = 0 Running = True running_trials = False canvas_scrolled_to = 0 variables = GlobalVariables() class DataLoader: opened_files = [] grouped_files = [] # 2d array grouped_files_by_laser_power = [] # 3d array laser_powers = [] # Power Spectrum Data opened_power_spectrum = [] power_spectrum_by_laser_power = [] # 2d array ''' @Params folder_path : String for the name of the directory containing the data to be analyzed ''' def __init__(self, folder_path=variables.analyze_folder): self.folder_path = folder_path self.opened_files.clear() self.glob_data() self.grouped_files.clear() self.group_trials() self.grouped_files_by_laser_power.clear() self.group_groups_by_laser_power() self.laser_powers.clear() self.populate_laser_powers() # Power Spectrum self.opened_power_spectrum.clear() self.glob_power_spectrum_data() self.power_spectrum_by_laser_power.clear() self.group_power_spectrum_by_laser_power() ''' Note: Loads Oscillations Only! 
    '''
    def glob_data(self):
        fls = glob.glob(self.folder_path + "/*Oscillations*")
        print(fls)
        for file_name in fls:
            file_opened = pickle.load(open(file_name, 'rb'))
            self.opened_files.append(file_opened)
        print(f'{len(self.opened_files)} files opened')

    def glob_power_spectrum_data(self):
        fls = glob.glob(self.folder_path + "/*PowerSpectrum*")
        print(fls)
        for file_name in fls:
            file_opened = pickle.load(open(file_name, 'rb'))
            self.opened_power_spectrum.append(file_opened)
        print(f'{len(self.opened_power_spectrum)} files opened')  # count the power-spectrum files, not the oscillation files

    '''
    @Params
    opened_file : dictionary for individual trial storing params (e.g. axis, fs) and array of data points
    '''
    def get_psd(self, opened_file):
        # Center Graph Around 0
        zero_calibrated_psd = opened_file['inputData_0'] - opened_file['inputData_0'].mean()
        zero_calibrated_psd = zero_calibrated_psd[(round(len(zero_calibrated_psd) * 12 / 16)):(round(len(zero_calibrated_psd) * 15 / 16))]  # can change the multipliers if necessary
        zero_calibrated_psd.sort()
        zero_calibrated_psd = abs(zero_calibrated_psd)
        # Find Amplitude
        counter = 0
        num_data_points = 10
        for j in range(num_data_points):
            counter += zero_calibrated_psd[-(j + 1)]  # -(j + 1): an index of -0 would re-read the first element
        return counter / num_data_points  # aka average of most extreme 10 points

    def get_sawtooth_psd(self, opened_file):
        # Center Graph Around 0
        zero_calibrated_psd = opened_file['inputData_0'] - opened_file['inputData_0'].mean()
        zero_calibrated_psd = zero_calibrated_psd[(round(len(zero_calibrated_psd) * 12 / 16)):(round(len(zero_calibrated_psd) * 15 / 16))]  # can change the multipliers if necessary
        zero_calibrated_psd.sort()
        zero_calibrated_psd = abs(zero_calibrated_psd)
        # Find average PSD
        return np.mean(np.array(zero_calibrated_psd))

    def get_laser_power(self, opened_file):
        return opened_file['laser_power']

    def get_trial_num(self, opened_file):
        return opened_file['trial_num']

    def get_frequency(self, opened_file):
        return opened_file['frequency']

    def get_amplitude(self, opened_file):
        return opened_file['amplitude']

    def get_num_trials(self, opened_file):
return opened_file['num_trials'] def get_waveform(self, opened_file): return opened_file['waveform'] def group_trials(self): for file in self.opened_files: appended = False if len(self.grouped_files) != 0: for grouped_set in self.grouped_files: first_item_in_group = grouped_set[0] if self.get_laser_power(first_item_in_group)==self.get_laser_power(file) and self.get_frequency(first_item_in_group)==self.get_frequency(file) and self.get_amplitude(first_item_in_group)==self.get_amplitude(file) and self.get_waveform(first_item_in_group)==self.get_waveform(file): grouped_set.append(file) appended = True if not appended: self.grouped_files.append([file]) print(np.array(self.grouped_files).shape) def group_power_spectrum_by_laser_power(self): for file in self.opened_power_spectrum: appended = False if len(self.power_spectrum_by_laser_power) != 0: for group in self.power_spectrum_by_laser_power: first_item_in_group = group[0] if self.get_laser_power(first_item_in_group)==self.get_laser_power(file): group.append(file) appended = True if not appended: self.power_spectrum_by_laser_power.append([file]) print(np.array(self.power_spectrum_by_laser_power).shape) ''' @Params ''' def group_groups_by_laser_power(self): for group in self.grouped_files: appended = False if len(self.grouped_files_by_laser_power) != 0: for laser_power_group in self.grouped_files_by_laser_power: first_item_in_group = laser_power_group[0][0] print(first_item_in_group) print(group[0]) if self.get_laser_power(first_item_in_group) == self.get_laser_power(group[0]): laser_power_group.append(group) appended = True if not appended: self.grouped_files_by_laser_power.append([group]) print(np.array(self.grouped_files_by_laser_power).shape) ''' @Params ''' def populate_laser_powers(self): for group in self.grouped_files_by_laser_power: self.laser_powers.append(self.get_laser_power(group[0][0])) print("Laser Powers: " + str(self.laser_powers)) ''' @Params grouped_files : 1D array containing data from one frequency 
''' def get_psd_average_std(self, grouped_files): psd_list = [] for file in grouped_files: psd_list.append(self.get_psd(file)) average_psd = np.mean(np.array(psd_list)) std_psd = np.std(np.array(psd_list)) return average_psd, std_psd def get_psd_average_std_sawtooth(self, grouped_files): psd_list = [] for file in grouped_files: print(self.get_sawtooth_psd(file)) psd_list.append(self.get_sawtooth_psd(file)) average_psd = np.mean(np.array(psd_list)) std_psd = np.std(np.array(psd_list)) return average_psd, std_psd # Create Instance When Use class Controller: x_piezo = "Dev2/ao0" # The x-piezo is connected to AO0 (pin 12) y_piezo = "Dev2/ao1" # The y-piezo is connected to AO1 (pin 13) inChannels = "Dev2/ai0, Dev2/ai1, Dev2/ai2, Dev2/ai3" num_input_channels = 4 autoStart = 0 dataLayout = DAQmx_Val_GroupByChannel fillMode = DAQmx_Val_GroupByChannel timeout = 10 read = int32() sampsWritten = int32() saved_data = {} # Initialize empty dictionary to store data ''' @Params device : String of name of the channel connecting to the OT num_samps : int of samples per channel sampling_rate : float for rate of sampling per second tsk : Task to set up the connection between the GUI and the DAQ ? for outputs timing : boolean set True for oscillations ''' def setUpOutput(self, device, num_samps=20000, sampling_rate=1000, tsk=None, timing=True): ''' Initialize the DAQ for sending analog outputs For waveforms, use timing=True For sending a single voltage, use timing=False Device is "Dev1/ao0", "Dev1/ao1", or whatever the right channel is ''' if tsk is None: tsk = Task() tsk.CreateAOVoltageChan(device, "", 0.0, 10.0, DAQmx_Val_Volts, None) # sets up channel for output between 0 and 10 V if timing: tsk.CfgSampClkTiming("", sampling_rate, DAQmx_Val_Rising, DAQmx_Val_FiniteSamps, num_samps) return tsk ''' @Params tsk : Task to set up the connection between the GUI and the DAQ? 
for inputs sampling_rate : float for rate of sampling per second num_samps : int of samples per channel ''' def setUpInput(self, tsk=None, sampling_rate=1000, num_samps=20000): if tsk is None: tsk = Task() tsk.CreateAIVoltageChan(self.inChannels, "", DAQmx_Val_Cfg_Default, -10.0, 10.0, DAQmx_Val_Volts, None) tsk.CfgSampClkTiming("", sampling_rate, DAQmx_Val_Rising, DAQmx_Val_FiniteSamps, num_samps) return tsk ''' @Params test_axis : axis to test (x or y) test_distance : volts to move (not perfect yet... but gets the test done) ''' # Set the piezo stages to a certain position def test_move(self, test_axis, test_distance): print(test_distance) print(test_axis) axis = self.x_piezo if test_axis == 'Y-Axis': axis = self.y_piezo print(axis) to = self.setUpOutput(axis, timing=False) # Use "timing=False" for sending a single voltage to piezo voltage = float(test_distance) # read's the inputted voltage from textbox print(voltage) to.WriteAnalogScalarF64(1, 10.0, voltage, None) time.sleep(0.5) to.StopTask() print("Test Complete") def run_power_spectrum(self): self.saved_data = {} num_pts = variables.number_datapoints fs = variables.sampling_frequency inputData = np.zeros((num_pts*self.num_input_channels,),dtype=np.float64) ti = self.setUpInput(sampling_rate=fs, num_samps=num_pts) ti.StartTask() ti.ReadAnalogF64(num_pts,self.timeout*2,self.fillMode,inputData,len(inputData),byref(self.read),None) time.sleep(num_pts/fs + 0.5) ti.StopTask() self.saved_data['Fs'] = fs self.saved_data['laser_power'] = variables.laser_power self.saved_data['inputData_0'] = inputData[0:num_pts] self.saved_data['inputData_1'] = inputData[num_pts:2*num_pts] self.saved_data['inputData_2'] = inputData[2*num_pts:3*num_pts] self.saved_data['inputData_3'] = inputData[3*num_pts:4*num_pts] ''' @Params oscaxis : String detailing axis to be oscillated numosc : int of number of oscillations to be completed sampfreq : float for number of samples taken per sec oscfreq : float for the frequency (var) amp : float 
for amplitude of platform movement os : int of offset for graph wf : String for type of wave ''' def oscillate(self, oscaxis, numosc, sampfreq, oscfreq, amp, os, wf): print("-----------------------------------") print(oscaxis) print(numosc) print(sampfreq) print(oscfreq) print(amp) print(os) print(wf) print("-----------------------------------") axis = self.x_piezo # '-OscAxis-' # Check which axis to oscillate if oscaxis == 'Y-Axis': print("y") axis = self.y_piezo num_osc = float(numosc) # number of oscillations (from textbox) fs = float(sampfreq) # sampling frequency freq = float(oscfreq) # frequency of oscillation time_oscillating = float(num_osc / freq) # time spend oscillating samples_oscillating = time_oscillating * fs # number of samples osc_freq_in_samples = freq / fs # oscillation frequency in units of samples rather than Hz # This will be the argument of the sine or sawtooth or square wave function sine_argument = 2 * np.pi * osc_freq_in_samples * np.arange(samples_oscillating) amplitude = amp # Amplitude of oscillation offset = os # Offset waveform = wf # kind of waveform, Sine, Square or Sawtooth if waveform == 'Square': data = amplitude * scipy.signal.square(sine_argument) + offset elif waveform == 'Sawtooth': data = amplitude * scipy.signal.sawtooth(sine_argument, width=0.5) + offset elif waveform == 'Sine': data = amplitude * np.sin(sine_argument) + offset else: print("BAD WAVE ARGUMENT") data = np.ones_like(sine_argument) * offset half_second_of_const = np.ones((int(0.5 * fs),), dtype=np.float64) * offset # half-second pause before oscillations begin dataOut = np.hstack((half_second_of_const, data)) # Data to send to piezo inputData = np.zeros((len(dataOut) * self.num_input_channels,), dtype=np.float64) # Data to recieve from NI DAQ board to = self.setUpOutput(axis, num_samps=len(dataOut), sampling_rate=fs, timing=True) # Output task ti = self.setUpInput(sampling_rate=fs, num_samps=len(dataOut)) # Input task ti.StartTask() 
to.WriteAnalogF64(len(dataOut), self.autoStart, self.timeout, self.dataLayout, dataOut, byref(self.sampsWritten), None) # Define processes that will start the input and output of the NI DAQ def outputProc(): to.StartTask() def inputProc(): ti.ReadAnalogF64(len(dataOut), self.timeout * 2, self.fillMode, inputData, len(inputData), byref(self.read), None) # Make a list of threads. One thread is for input, the other output thread_list = [] thread_list.append(threading.Thread(target=outputProc)) thread_list.append(threading.Thread(target=inputProc)) for thread in thread_list: thread.start() time.sleep(time_oscillating + 0.5) time.sleep(1.0) to.StopTask() ti.StopTask() # Store the data in the dictionary "saved_data" self.saved_data['Fs'] = fs self.saved_data['axis'] = oscaxis # Added-> self.saved_data['laser_power'] = variables.laser_power self.saved_data['trial_num'] = variables.trial_num self.saved_data['frequency'] = variables.frequency self.saved_data['amplitude'] = variables.amplitude self.saved_data['num_trials'] = variables.num_trials self.saved_data['waveform'] = variables.waveform self.saved_data['osc_num'] = variables.osc_num # <-End Added self.saved_data['dataOut'] = dataOut self.saved_data['inputData_0'] = inputData[0:len(dataOut)] self.saved_data['inputData_1'] = inputData[len(dataOut):2 * len(dataOut)] self.saved_data['inputData_2'] = inputData[2 * len(dataOut):3 * len(dataOut)] self.saved_data['inputData_3'] = inputData[3 * len(dataOut):4 * len(dataOut)] print("oscasix = " + str(oscaxis)) print("numosc = " + str(numosc)) print("sampfreq = " + str(sampfreq)) print("oscfreq = " + str(oscfreq)) print("amp = " + str(amp)) print("os = " + str(os)) print("wf = " + str(wf)) print(self.saved_data) print("data successfully collected") ''' @Params save_location : folder to save data filepath ''' def save_previous_data(self, save_location): pickle.dump(self.saved_data, open( save_location + 
f"/Oscillations_{variables.waveform}_LaserPower{variables.laser_power}ma_OscFreq{variables.frequency}Trial{variables.trial_num}.p", 'wb')) print("Saved to: " + save_location) def save_power_spectrum(self, save_location): pickle.dump(self.saved_data, open( save_location + f"/PowerSpectrum_LaserPower{variables.laser_power}ma_Trial{variables.trial_num}.p", 'wb')) print("Saved to: " + save_location) controller = Controller() class TkinterDataCollectionGraphing: figure_canvas_agg_take_data = None figure_canvas_agg_analyze_data_psd_velocity = None figure_canvas_agg_k_vs_laser_power = None figure_canvas_agg_ps_k_vs_laser_power = None k_values = [] power_spectrum_k_values = [] ''' @Params figure_agg : Tkinter "FigureCanvasTkAgg" which holds a widget ''' def delete_figure_agg(self, figure_agg): figure_agg.get_tk_widget().forget() plt.close('all') ''' @Params canvas : tkinter canvas embed figure_canvas_agg : Tkinter "FigureCanvasTkAgg" which holds a widget ''' # helper method to draw plt figure in canvas def draw_figure(self, canvas, figure_canvas_agg): figure_canvas_agg.draw() figure_canvas_agg.get_tk_widget().pack(side='top', fill='both', expand=1) return figure_canvas_agg ''' @Params window : Pysimplegui window key : key for canvas to draw to ''' def display_previous_trial_graph(self, window, key): plt.ioff() plt.figure() fig = plt.gcf() # Define the time array. We can do this as we know the sampling frequency. times = np.arange(0, len(controller.saved_data['dataOut'])) * (1. 
/ controller.saved_data['Fs']) # saved_data is the global variable above oscillation method plt.cla() plt.plot(times, controller.saved_data['inputData_0'] - controller.saved_data['inputData_0'].mean(), label='Channel 0') plt.plot(times, controller.saved_data['inputData_2'] - controller.saved_data['inputData_2'].mean(), label='Channel 2') plt.legend(loc=2) plt.xlabel('Time (s)') plt.ylabel('Voltage (V)') # plt.show(block=False) figure_canvas_agg = FigureCanvasTkAgg(fig, window[key].TKCanvas) self.draw_figure(window[key].TKCanvas, figure_canvas_agg) return figure_canvas_agg def display_power_spectrum_k_values(self, window, key, data_loader): # Load all values into one averaged power spectrum dataset averaged_array = [] laser_powers = [] for laser_power_arr in data_loader.power_spectrum_by_laser_power: total = [] laser_powers.append(data_loader.get_laser_power(laser_power_arr[0])) for trial in laser_power_arr: x_coord = trial['inputData_0'] y_coord = trial['inputData_1'] total.append((np.array(x_coord) + np.array(y_coord))/2.0) averaged_array.append(np.array(total)/len(laser_power_arr)) print(np.array(averaged_array).shape) ####Support Methods### def theoreticalPS(f,fc,A): return A/(f**2 + fc**2) def plot_results(f, fc, logA, title=''): fig,ax = plt.subplots() ax.loglog(f[::2], 0.5*(ps_1_new+ps_0_new)[::2]) ax.loglog(f, theoreticalPS(f, fc, 10**logA), label="f_c = %.2f"%fc) ax.legend() ax.set_xlabel("Frequency (Hz)") ax.set_ylabel("Power Spectrum (V^2)") ax.set_title(title) def theoryPSerror(params, dataPS, freqs): theoretical_model = theoreticalPS(freqs, params[0], params[1]) return np.log(abs(dataPS-theoretical_model)) ####################### def get_k_value_from_data(load_index, averaged_array): saved_data_ps = averaged_array[load_index] plt_points = np.linspace(0,len(saved_data_ps)-1,min(4000,len(saved_data_ps)),dtype=np.int32) data_time = np.arange(0,len(saved_data_ps))/100000.0 #time points 
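# ----------------------------------------------------------------------
# Aside (illustrative helper, not part of this GUI): periodogram
# normalizations vary; the code below divides |rfft|^2 by (N / fs). A
# common one-sided convention is |rfft|^2 / (fs * N) with interior bins
# doubled, so that sum(psd) * (fs / N) recovers the signal's mean power.
def one_sided_psd(x, fs):
    n = len(x)
    spec = np.abs(np.fft.rfft(x)) ** 2 / (fs * n)
    spec[1:-1] *= 2  # fold the negative frequencies back in (assumes even n)
    return np.fft.rfftfreq(n, d=1.0 / fs), spec
# e.g. for x = sin(2*pi*50*t) at fs = 1 kHz, sum(spec) * (fs / n) ~= 0.5
# ----------------------------------------------------------------------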
#------------------------------------------------------------------------------------------ #Take the Fourier transform of the qpd signal and square it. That's the power spectrum nfft = len(saved_data_ps) ps_0 = np.abs(np.fft.rfft(saved_data_ps))**2 / (nfft/100000.0) ps_1 = np.abs(np.fft.rfft(saved_data_ps))**2 / (nfft/100000.0) time_step = 1./100000.0 #inverse of sampling frequency is time interval freqs = np.fft.rfftfreq(len(saved_data_ps), time_step) idx = np.argsort(freqs) #------------------------------------------------------------------------------------------ #Let's average the data. binning = np.geomspace(1,len(saved_data_ps),num=1024,endpoint=False,dtype=np.int32) freqs_new = []; ps_0_new = []; ps_1_new = [] freqs_used = freqs[idx]#[len(freqs)/2:] ps_0_used = ps_0[idx]#[len(freqs)/2:] ps_1_used = ps_1[idx]#[len(freqs)/2:] j=0 for i,space in enumerate(binning): if j<len(freqs_used): freqs_new.append(freqs_used[j:j+space].mean()) ps_0_new.append(ps_0_used[j:j+space].mean()) ps_1_new.append(ps_1_used[j:j+space].mean()) j=j+space else: break #Turn those lists into arrays freqs_new = np.array(freqs_new); ps_0_new = np.array(ps_0_new); ps_1_new = np.array(ps_1_new) #------------------------------------------------------------------------------------------ # calculate diffusion coefficient --> D = (kb * T)/(6pi* n * R) diffusion_coefficient = ((1.380649 * (10**(-23))) * 293.15)/(6 * math.pi * (variables.viscosity) * (variables.radius_bead)) SF_squared_0 = np.array(ps_0_new)*(np.array(freqs_new)**2) SF_squared_1 = np.array(ps_1_new)*(np.array(freqs_new)**2) # Go from 2/4 -> 3/4 of the dataset to get the relavant data averaged_0_y_values = np.mean(SF_squared_0[round(len(SF_squared_0)*1/2):round(len(SF_squared_0)*3/4)]) averaged_1_y_values = np.mean(SF_squared_1[round(len(SF_squared_1)*1/2):round(len(SF_squared_1)*3/4)]) averaged_0_and_1 = np.mean([averaged_0_y_values, averaged_1_y_values]) r = ((averaged_0_and_1 * (math.pi**2))/diffusion_coefficient)**(1/2) 
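# ----------------------------------------------------------------------
# Aside (illustrative, the helper name is not part of this GUI): the fit
# below assumes a Lorentzian thermal spectrum A / (f^2 + fc^2). The corner
# frequency fc maps to trap stiffness through the Stokes drag
# gamma = 6*pi*eta*R:  kappa = 2*pi*fc*gamma, which is what the
# "a = abs(fit_results[0]) * 2 * math.pi * ..." line further down computes.
def stiffness_from_corner_freq(fc_hz, eta, bead_radius):
    gamma = 6 * math.pi * eta * bead_radius  # Stokes drag coefficient (kg/s)
    return 2 * math.pi * fc_hz * gamma       # trap stiffness (N/m)
# e.g. fc = 35 Hz with the viscosity/radius defaults above gives ~2.3e-6 N/m
# ----------------------------------------------------------------------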
            averaged_x_and_y_list = []
            for i in range(len(ps_0_new)):
                averaged_x_and_y_list.append(np.mean([ps_0_new[i], ps_1_new[i]]))
            averaged_x_and_y_list = np.array(averaged_x_and_y_list)

            fc_list = []
            for i in range(len(averaged_x_and_y_list)):
                fc_list.append(((((r**2)*diffusion_coefficient)/(averaged_x_and_y_list[i]*(math.pi**2)))-(freqs_new[i]**2))**(1/2))
            fc_list = np.array(fc_list)
            fc_list = fc_list[fc_list >= 0]
            fc_list = np.sqrt(fc_list)
            fc = np.mean(np.array(fc_list))

            #################### MAKE USER INPUT ############################
            # Very Sensitive (Maybe find new fitting method!)
            initGuess = np.array([35, 10**6.5])
            fit_results, flag = leastsq(theoryPSerror, initGuess, args=(0.5*(ps_0_new+ps_1_new), freqs_new))
            ###############################################################

            # calculate alpha
            a = abs(fit_results[0]) * 2 * math.pi * 6 * math.pi * (variables.viscosity) * (variables.radius_bead)
            print("fc: " + str(fc))
            print("fit results: " + str(fit_results))
            k = a/r
            return k

        k_values = []
        for i in range(len(averaged_array)):
            k_values.append(get_k_value_from_data(i, averaged_array))
        print(f"K Values: {k_values}")
        self.power_spectrum_k_values = k_values

        # # Testing K VALUES when have no data (test code)
        # k_values = [4e-13, 5e-13, 6e-13]

        # k_values vs laser_powers
        # Create Plot
        plt.ioff()
        plt.figure()
        fig = plt.gcf()
        plt.cla()
        plt.ylabel('K values (N/V)')
        plt.xlabel('Laser Power (ma)')
        m3, b3 = np.polyfit(laser_powers, k_values, 1)
        plt.plot(np.array(laser_powers), m3 * np.array(laser_powers) + b3)
        plt.title("K (N/V) vs Laser Power (ma)")
        plt.scatter(laser_powers, k_values)
        plt.show(block=False)
        figure_canvas_agg = FigureCanvasTkAgg(fig, window[key].TKCanvas)
        self.draw_figure(window[key].TKCanvas, figure_canvas_agg)
        return figure_canvas_agg

    def display_power_spectrum_graph_previous_trial(self, window, key):
        plt_points = np.linspace(0, len(controller.saved_data['inputData_0'])-1, min(2000, len(controller.saved_data['inputData_0'])), dtype=np.int32)  # np.int32 as elsewhere; the bare np.int alias was removed from NumPy
        data_time =
np.arange(0,len(controller.saved_data['inputData_0']))/controller.saved_data['Fs'] time_step = 1./controller.saved_data['Fs'] ps_0 = np.abs(np.fft.fft(controller.saved_data['inputData_0']))**2 * time_step ps_1 = np.abs(np.fft.fft(controller.saved_data['inputData_1']))**2 * time_step freqs = np.fft.fftfreq(len(controller.saved_data['inputData_0']), time_step) idx = np.argsort(freqs) plt.ioff() plt.cla() fig2,ax2 = plt.subplots() ax2.loglog(freqs[idx][plt_points], ps_0[idx][plt_points]) ax2.loglog(freqs[idx][plt_points], ps_1[idx][plt_points]) figure_canvas_agg = FigureCanvasTkAgg(fig2, window[key].TKCanvas) self.draw_figure(window[key].TKCanvas, figure_canvas_agg) return figure_canvas_agg ''' @Params window : Pysimplegui window key : key for canvas to draw to file_list_one_laser_power : 2d array with groups of data for one velocity per array data_loader : dataloader object (should use variables.data_loader unless have other reason) Note: Only use one waveform per graph!! ''' def display_psd_vs_velocity_graph(self, window, key, file_list_one_laser_power, data_loader): waveform_used = file_list_one_laser_power[0][0]['waveform'] if waveform_used == 'Sine': print("Sine Graphing...") # w = 2 * math.pi * file_list_one_laser_power['frequency'] velocity_list_x = [] psd_list_y = [] psd_error_list_y = [] # Populate PSD list and Error List for psd_list in file_list_one_laser_power: # Get average psd and standard devaition of psds for point and error bars average_psd, std_psd = data_loader.get_psd_average_std(psd_list) psd_list_y.append(average_psd); psd_error_list_y.append(std_psd) #Populate Velocity List for files in file_list_one_laser_power: # Get Frequency and Amplitude amplitude_group = files[0]['amplitude'] frequency_group = files[0]['frequency'] velocity_list_x.append((2 * math.pi * frequency_group)*amplitude_group*(1/variables.volts_to_um)*0.000001) print(velocity_list_x) print(psd_list_y) print(psd_error_list_y) # Create Plot plt.ioff() plt.figure() fig = plt.gcf() 
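# ----------------------------------------------------------------------
# Aside (illustrative, not part of this GUI): the linear fit below is the
# basis of the drag-force calibration used in display_k_vs_laser_power.
# At low Reynolds number the bead settles where trap and drag forces
# balance, kappa * x = 6*pi*eta*R * v, so the detector amplitude grows
# linearly with stage velocity and kappa = gamma / slope (N/V when the
# displacement x is measured in volts).
def stiffness_from_drag_slope(slope_volts_per_mps, eta, bead_radius):
    gamma = 6 * math.pi * eta * bead_radius  # Stokes drag coefficient
    return gamma / slope_volts_per_mps       # trap stiffness (N/V)
# ----------------------------------------------------------------------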
plt.cla() plt.xlabel('Velocity (m/s)') plt.ylabel('PSD (V)') plt.errorbar(velocity_list_x, psd_list_y, yerr=psd_error_list_y, fmt='o', color='black', ecolor='lightgray', elinewidth=3, capsize=0); m1, b1 = np.polyfit(velocity_list_x, psd_list_y, 1) plt.plot(np.array(velocity_list_x), m1 * np.array(velocity_list_x) + b1) plt.title(f"{file_list_one_laser_power[0][0]['laser_power']}ma PSD(V) vs Velocity(m/s)") plt.scatter(velocity_list_x, psd_list_y) plt.show(block=False) figure_canvas_agg = FigureCanvasTkAgg(fig, window[key].TKCanvas) self.draw_figure(window[key].TKCanvas, figure_canvas_agg) return figure_canvas_agg elif waveform_used == 'Sawtooth': print("Sawtooth Graphing...") velocity_list_x_sawtooth = [] psd_list_y_sawtooth = [] psd_error_list_y_sawtooth = [] # Populate PSD list and Error List for psd_list in file_list_one_laser_power: # Get average psd and standard devaition of psds for point and error bars average_psd_sawtooth, std_psd_sawtooth = data_loader.get_psd_average_std_sawtooth(psd_list) psd_list_y_sawtooth.append(average_psd_sawtooth); psd_error_list_y_sawtooth.append(std_psd_sawtooth) #Populate Velocity List for files in file_list_one_laser_power: # Find the slope of the sawtooth lines to calculate velocity # Calculating x-axis of Position vs Time time_oscillating = files[0]['osc_num']/files[0]['frequency'] # Get Time Experienced Per Movement In One Direction num_of_slopes = files[0]['osc_num'] * 2 time_per_slope = time_oscillating/num_of_slopes # Distance in Volts Per Movement (double the amplitude experienced) sum_height_top_bottom = 0 counter = 0 for file in files: sum_height_top_bottom += max(file['inputData_2'])-min(file['inputData_2']) counter += 1 print("Len Files: " + str(len(files))) print("Counter: " + str(counter)) average_height_top_bottom = sum_height_top_bottom/counter slope_saw_wave = average_height_top_bottom/time_per_slope velocity_list_x_sawtooth.append(slope_saw_wave*(1/variables.volts_to_um)*0.000001) 
        print(velocity_list_x_sawtooth)
        print(psd_list_y_sawtooth)
        print(psd_error_list_y_sawtooth)
        # Create sawtooth plot (differences from sine: m2, b2, velocity_list_x_sawtooth,
        # psd_list_y_sawtooth, psd_error_list_y_sawtooth)
        plt.ioff()
        plt.figure()
        fig = plt.gcf()
        plt.cla()
        plt.xlabel('Velocity (m/s)')
        plt.ylabel('PSD (V)')
        plt.errorbar(velocity_list_x_sawtooth, psd_list_y_sawtooth, yerr=psd_error_list_y_sawtooth,
                     fmt='o', color='black', ecolor='lightgray', elinewidth=3, capsize=0)
        m2, b2 = np.polyfit(velocity_list_x_sawtooth, psd_list_y_sawtooth, 1)
        plt.plot(np.array(velocity_list_x_sawtooth), m2 * np.array(velocity_list_x_sawtooth) + b2)
        plt.title(f"{file_list_one_laser_power[0][0]['laser_power']}ma PSD(V) vs Velocity(m/s)")
        plt.scatter(velocity_list_x_sawtooth, psd_list_y_sawtooth)
        plt.show(block=False)
        figure_canvas_agg = FigureCanvasTkAgg(fig, window[key].TKCanvas)
        self.draw_figure(window[key].TKCanvas, figure_canvas_agg)
        return figure_canvas_agg

    '''
    @Params
        window : PySimpleGUI window
        key : canvas key
        data_loader : DataLoader object (should use variables.data_loader unless there is another reason)
    '''
    def display_k_vs_laser_power(self, window, key, data_loader):
        self.k_values = []

        def truncate(n, decimals=0):
            multiplier = 10 ** decimals
            return int(n * multiplier) / multiplier

        def populate_k_values(file_list_one_laser_power):
            print("K value populated sine")
            velocity_list_x = []
            psd_list_y = []
            psd_error_list_y = []
            # Populate PSD list and error list
            for psd_list in file_list_one_laser_power:
                # Get average PSD and standard deviation of PSDs for point and error bars
                average_psd, std_psd = data_loader.get_psd_average_std(psd_list)
                psd_list_y.append(average_psd)
                psd_error_list_y.append(std_psd)
            # Populate velocity list
            for files in file_list_one_laser_power:
                # Get frequency and amplitude
                amplitude_group = files[0]['amplitude']
                frequency_group = files[0]['frequency']
                velocity_list_x.append(
                    (2 * math.pi * frequency_group) * amplitude_group * (1 / variables.volts_to_um) * 0.000001)
            print(velocity_list_x)
            print(psd_list_y)
            slope, _ = np.polyfit(velocity_list_x, psd_list_y, 1)
            k_value = 6 * math.pi * variables.viscosity * variables.radius_bead * (1 / slope)
            self.k_values.append(k_value)

        def populate_k_values_sawtooth(file_list_one_laser_power):
            print("K value populated sawtooth")
            velocity_list_x_sawtooth = []
            psd_list_y_sawtooth = []
            psd_error_list_y_sawtooth = []
            # Populate PSD list and error list
            for psd_list in file_list_one_laser_power:
                # Get average PSD and standard deviation of PSDs for point and error bars
                average_psd_sawtooth, std_psd_sawtooth = data_loader.get_psd_average_std_sawtooth(psd_list)
                psd_list_y_sawtooth.append(average_psd_sawtooth)
                psd_error_list_y_sawtooth.append(std_psd_sawtooth)
            # Populate velocity list
            for files in file_list_one_laser_power:
                # Find the slope of the sawtooth lines to calculate velocity.
                # Calculating x-axis of Position vs Time
                time_oscillating = files[0]['osc_num'] / files[0]['frequency']
                # Get time experienced per movement in one direction
                num_of_slopes = files[0]['osc_num'] * 2
                time_per_slope = time_oscillating / num_of_slopes
                # Distance in volts per movement (double the amplitude experienced)
                sum_height_top_bottom = 0
                counter = 0
                for file in files:
                    sum_height_top_bottom += max(file['inputData_2']) - min(file['inputData_2'])
                    counter += 1
                print("Len Files: " + str(len(files)))
                print("Counter: " + str(counter))
                average_height_top_bottom = sum_height_top_bottom / counter
                slope_saw_wave = average_height_top_bottom / time_per_slope
                velocity_list_x_sawtooth.append(slope_saw_wave * (1 / variables.volts_to_um) * 0.000001)
            print(velocity_list_x_sawtooth)
            print(psd_list_y_sawtooth)
            print(psd_error_list_y_sawtooth)
            slope, _ = np.polyfit(velocity_list_x_sawtooth, psd_list_y_sawtooth, 1)
            k_value = 6 * math.pi * variables.viscosity * variables.radius_bead * (1 / slope)
            self.k_values.append(k_value)

        for file_list_one_laser_power_populate in data_loader.grouped_files_by_laser_power:
            if data_loader.get_waveform(file_list_one_laser_power_populate[0][0]) == "Sine":
                print("K value generated")
                populate_k_values(file_list_one_laser_power_populate)
                print("k = " + str(self.k_values[-1]))
            elif data_loader.get_waveform(file_list_one_laser_power_populate[0][0]) == "Sawtooth":
                print("K value generated")
                populate_k_values_sawtooth(file_list_one_laser_power_populate)
                print("k = " + str(self.k_values[-1]))
            else:
                print("not a wave type supported")
        print(data_loader.laser_powers)
        print(self.k_values)
        # Create plot
        plt.ioff()
        plt.figure()
        fig = plt.gcf()
        plt.cla()
        plt.ylabel('K values (N/V)')
        plt.xlabel('Laser Power (ma)')
        m1, b1 = np.polyfit(data_loader.laser_powers, self.k_values, 1)
        plt.plot(np.array(data_loader.laser_powers), m1 * np.array(data_loader.laser_powers) + b1)
        plt.title("K (N/V) vs Laser Power (ma)")
        plt.scatter(data_loader.laser_powers, self.k_values)
        plt.show(block=False)
        figure_canvas_agg = FigureCanvasTkAgg(fig, window[key].TKCanvas)
        self.draw_figure(window[key].TKCanvas, figure_canvas_agg)
        return figure_canvas_agg


tkinter_graphing = TkinterDataCollectionGraphing()

##################### Intro ###############################################
intro_screen = [[sg.Image(filename=variables.gif_filename, enable_events=True, key="-IMAGE-")]]

#################### Menu #################################################
title_screen_center_col = [
    [sg.Text("OT 292", size=(12, 1), font=("Verdana", 100), justification='center')],
    [sg.Text("----------------", size=(12, 1), font=("Verdana", 100), justification="center")],
    [sg.Button("Create New Dataset", font=("Verdana", 30)),
     sg.Button("Analyze Previous Dataset", font=("Verdana", 30))],
    [sg.Button("Manual", size=(15, 1), key='-Manual-')]]
title_screen_bottom_row = [[]]
title_screen_layout = [
    [sg.Column(title_screen_center_col),
     sg.Column(title_screen_bottom_row, element_justification='center')]]

##################### Add New Dataset #####################################
backbutton_col = [[sg.Button("<-- Back", key='-Back1-')]]
new_dataset_center_col = [
    [sg.T("")],
    [sg.Text("Choose a folder: "),
     sg.Input(key="-IN2-", change_submits=True, size=(50, 1)),
     sg.FolderBrowse(key="-IN-")],
    [sg.Button("Continue", size=(20, 1))]]
new_dataset_layout = [[sg.Column(backbutton_col),
                       sg.Column(new_dataset_center_col, element_justification='center')]]

##################### Taking Data #########################################
amplitude_input = sg.InputText(key="-Amplitude-", default_text=(f"{variables.amplitude}"))
frequency_input = sg.InputText(key="-OscFreq-", default_text=(f"{variables.frequency}"))
backbutton_col1 = [[sg.Button("<-- Back", key='-Back2-')]]
data_left_col = [
    [sg.Text("Data Parameters", size=(20, 1), font=("Verdana", 30))],
    [sg.Text("Oscillation Axis", size=(25, 1), font=("Verdana", 10)),
     sg.Combo(['X-Axis', 'Y-Axis'], default_value='X-Axis', key='-OscAxis-')],
    [sg.Text("Sampling Frequency (Hz)", size=(25, 1), font=("Verdana", 10)),
     sg.InputText(key="-SampFreq-", default_text=(variables.samp_freq))],
    [sg.Text("Amplitude (V)", size=(25, 1), font=("Verdana", 10)), amplitude_input],
    [sg.Text("Oscillation Frequency (Hz)", size=(25, 1), font=("Verdana", 10)), frequency_input],
    [sg.Text("Offset (For Graphing)", size=(25, 1), font=("Verdana", 10)),
     sg.InputText(key="-Offset-", default_text=(variables.offset))],
    [sg.Text("Waveform", size=(25, 1), font=("Verdana", 10)),
     sg.Combo(['Sine', 'Sawtooth'], default_value=(variables.waveform), key='-Waveform-')],
    [sg.Text("# of Oscillations", size=(25, 1), font=("Verdana", 10)),
     sg.InputText(key="-OscNum-", default_text=(variables.osc_num))],
    [sg.Text("----------------------------------------------------", size=(25, 1), font=("Verdana", 10))],
    [sg.Text("Data Save Info", size=(25, 1), font=("Verdana", 30))],
    [sg.Text("Laser Power (ma)", size=(25, 1), font=("Verdana", 10)),
     sg.InputText(key="-LaserPower-", default_text=(f"{variables.laser_power}"))],
    [sg.Button("Save Set Values")],
    [sg.Text("----------------------------------------------------", size=(25, 1), font=("Verdana", 10))],
    [sg.Text("Test Voltage", size=(25, 1), font=("Verdana", 30))],
    [sg.Text("Test Axis", size=(25, 1), font=("Verdana", 10)),
     sg.Combo(['X-Axis', 'Y-Axis'], default_value=(variables.test_axis), key='-TestAxis-')],
    [sg.Text("Test Distance (V)", size=(25, 1), font=("Verdana", 10)),
     sg.InputText(key="-TestVolts-", default_text=(variables.test_voltage))],
    [sg.Button("Move Distance (Test)", size=(20, 1)),
     sg.Button("Reset Trial Number", size=(20, 1))],
]
trial_text_field = sg.Text(f"Trial # {variables.trial_num}", size=(15, 1), font=("Verdana", 15))
amplitude_text_field = sg.Text(f"Amplitude = {variables.amplitude} V", size=(20, 1), font=("Verdana", 15))
frequency_text_field = sg.Text(f"Frequency = {variables.frequency} Hz", size=(20, 1), font=("Verdana", 15))
run_button = sg.Button("Start Trials", size=(20, 1))
data_middle_col = [
    [sg.Text("Run Trials", size=(0, 1), font=("Verdana", 30))],
    [sg.Text("Vary Amplitude or Frequency (Velocity)", size=(35, 1), font=("Verdana", 10)),
     sg.Combo(['Frequency', 'Amplitude'], default_value='Frequency', key='-FreqOrAmp-')],
    [sg.Text("Vary Amount (Hz for Freq, V for Amp)", size=(31, 1), font=("Verdana", 10)),
     sg.InputText(key="-VaryAmount-", default_text=("2.5"))],
    [sg.Text("Trials Per Velocity", size=(15, 1), font=("Verdana", 10)),
     sg.InputText(key="-NumTrialsPerVelocity-", default_text=("6"))],
    [run_button],
    [sg.Text("Currently On...", size=(15, 1), font=("Verdana", 30))],
    [trial_text_field],
    [amplitude_text_field],
    [frequency_text_field]]
graph_canvas = sg.Canvas(key='figCanvas')
data_right_col = [
    [sg.Text("Previous Trial Graph", size=(0, 1), font=("Verdana", 30))],
    [graph_canvas],
    [sg.Button("Accept and Run Next Trail", size=(15, 1)),
     sg.Button("Redo Last Trial", size=(15, 1))],
]
data_collect_layout = [
    [sg.Column(backbutton_col1), sg.Column(data_left_col, size=(400, 550)),
     sg.Column(data_middle_col), sg.Column(data_right_col)]]
print(len(data_collect_layout[0])) ########################Analyzing Data############################################### backbutton_analyze = [[sg.Button("<-- Back", key='-BackAnalyze-')]] scroller_canvas = sg.Canvas(key='figCanvasScroller') psd_vs_velocity_graphs = [[sg.Text("PSD vs Velocity Graphs", size=(0, 1), font=("Verdana", 30))], [scroller_canvas], [sg.Button("<-Previous", size=(20, 1)), sg.Button("Next->", size=(20, 1))], ] k_laser_power_canvas = sg.Canvas(key='klaserpower') k_value_text_field = sg.Text(f"K Values = {tkinter_graphing.k_values}", size=(80, 1), font=("Verdana", 10)) k_vs_laser_power_graphs = [[sg.Text("K vs Laser Power (Oscillations)", size=(0, 1), font=("Verdana", 30))], [k_laser_power_canvas], [k_value_text_field], ] k_laser_power_power_spectrum = sg.Canvas(key='pskvalues') k_value_field_ps = sg.Text(f"K Values = {tkinter_graphing.power_spectrum_k_values}", size=(80, 1), font=("Verdana", 10)) power_spectrum_layout = [[sg.Text("K vs Laser Power (Power Spectrum)", size=(0, 1), font=("Verdana", 30))], [k_laser_power_power_spectrum], [k_value_field_ps]] data_analyze_layout = [[sg.Column(backbutton_analyze)], [sg.Column(psd_vs_velocity_graphs), sg.Column(k_vs_laser_power_graphs), sg.Column(power_spectrum_layout)]] #########################Choose Method################################################# main_col_choose = [[sg.Text("Oscillations or Power Spectrum", size=(0, 1), font=("Verdana", 30))], [sg.Button("Oscillations", size=(30, 1)), sg.Button("Power Spectrum", size=(30, 1))]] data_choose_collect_method = [[sg.Column(main_col_choose)]] ###########################Power Spectrum Collect####################################### ps_back = [[sg.Button("<--Back")]] ps_col_1 = [[sg.Text("Power Spectrum Parameters", size=(0, 1), font=("Verdana", 30))], [sg.Text("Laser Power (ma)", size=(25, 1), font=("Verdana", 10)), sg.InputText(key="-LaserPower-", default_text=(f"{variables.laser_power}"))], [sg.Text("Sampling Frequency (Hz)", size=(25, 1), 
font=("Verdana", 10)), sg.InputText(key="-SampFreq-", default_text=(f"{variables.sampling_frequency}"))], [sg.Text("Number Datapoints", size=(25, 1), font=("Verdana", 10)), sg.InputText(key="-NumDat-", default_text=(f"{variables.number_datapoints}"))], # [sg.Text("# Runs Per Bead", size=(25, 1), font=("Verdana", 10)), # sg.InputText(key="-NumPerBead-", default_text=(f"{variables.number_runs_per_bead}"))], [sg.Button("Save Set Values")], [sg.Button("Run Trial"), sg.Button("Redo Trial")]] ps_canvas = sg.Canvas(key='-PsCanvas-') ps_col_2 = [[ps_canvas]] power_spectrum_collect_layout = [[sg.Column(ps_back), sg.Column(ps_col_1), sg.Column(ps_col_2)]] # Creates each window w/ relevant layout window_load = sg.Window('OT 292 DAQ Control', intro_screen, default_element_size=(12, 1), resizable=False, finalize=True) window_load.Hide() window_menu = sg.Window('OT 292 DAQ Control', title_screen_layout, default_element_size=(12, 1), resizable=False, finalize=True) window_menu.Hide() window_new_dataset = sg.Window('OT 292 DAQ Control', new_dataset_layout, default_element_size=(12, 1), resizable=False, finalize=True) window_new_dataset.Hide() window_collect_data = sg.Window('OT 292 DAQ Control', data_collect_layout, default_element_size=(12, 1), resizable=False, finalize=True) window_collect_data.Hide() window_analyze_data = sg.Window('OT 292 DAQ Control', data_analyze_layout, default_element_size=(12, 1), resizable=False, finalize=True) window_analyze_data.Hide() window_choose_data_collection_method = sg.Window('OT 292 DAQ Control', data_choose_collect_method, default_element_size=(12, 1), resizable=False, finalize=True) window_choose_data_collection_method.Hide() power_spectrum_collect_window = sg.Window('OT 292 DAQ Control', power_spectrum_collect_layout, default_element_size=(12, 1), resizable=False, finalize=True) power_spectrum_collect_window.Hide() #####################################Main Loop######################################################## # 0 = loading # 1 = menu 
# 2 = new dataset (folder select) # 6 = oscillations or power spectrum # 3 = collect dataset (oscillations) # 7 = collect dataset (power spectrum) # 4 = analyze dataset (folder select) # 5 = analyze previous dataset main page ''' Purpose: The Main Graphical Loop for Pysimplegui/Tkinter. This solely handels graphics and does none of the processing, movements, etc. ''' def main_loop(): while variables.Running: # End program if user closes window or # presses the OK button # display intro/loading page if variables.app_state == 0: window_menu.Hide() window_new_dataset.Hide() window_collect_data.Hide() window_analyze_data.Hide() power_spectrum_collect_window.Hide() window_choose_data_collection_method.Hide() window_load.UnHide() event, values = window_load.read() for frame in ImageSequence.Iterator(Image.open(variables.gif_filename)): event, values = window_load.read(timeout=5) window_load['-IMAGE-'].update(data=ImageTk.PhotoImage(frame)) if event == sg.WIN_CLOSED: variables.Running = False variables.app_state = 1 # display main menu page (includes Create New Dataset and Analyze Dataset Options + manual) elif variables.app_state == 1: window_load.Hide() window_new_dataset.Hide() window_collect_data.Hide() window_analyze_data.Hide() power_spectrum_collect_window.Hide() window_choose_data_collection_method.Hide() window_menu.UnHide() event, values = window_menu.read() if event == sg.WIN_CLOSED: variables.Running = False break elif event == "Create New Dataset": variables.app_state = 2 elif event == "Analyze Previous Dataset": variables.app_state = 4 elif event == "-Manual-": print("manual works") webbrowser.open_new(r'https://docs.google.com/document/d/1UIyrJpPVibWfCxfNz_ZTX6MNZtwCGOgMLPl32wL9eTw/edit?usp=sharing') # display new dataset page - define direction to the folder in which data will be saved elif variables.app_state == 2: window_load.Hide() window_menu.Hide() window_collect_data.Hide() window_analyze_data.Hide() power_spectrum_collect_window.Hide() 
window_new_dataset.UnHide() window_choose_data_collection_method.Hide() event, values = window_new_dataset.read() if event == sg.WIN_CLOSED: variables.Running = False break elif event == '-Back1-': variables.app_state -= 1 # checks to ensure that the user provides a final destination before proceeding elif event == "Continue" and values['-IN-'] != "": print(values["-IN-"]) variables.save_folder = values["-IN-"] variables.app_state = 6 elif event == "Continue" and values['-IN-'] == "": print("No Folder Selected") # Choose Oscillations or Power Spectrum Data Collection elif variables.app_state == 6: window_load.Hide() window_menu.Hide() window_collect_data.Hide() window_analyze_data.Hide() power_spectrum_collect_window.Hide() window_new_dataset.UnHide() window_choose_data_collection_method.UnHide() event, values = window_choose_data_collection_method.read() if event == sg.WIN_CLOSED: variables.Running = False break elif event == "Oscillations": variables.app_state = 3 elif event == "Power Spectrum": variables.app_state = 7 elif variables.app_state == 7: window_load.Hide() window_new_dataset.Hide() window_collect_data.Hide() window_analyze_data.Hide() power_spectrum_collect_window.UnHide() window_choose_data_collection_method.Hide() window_menu.Hide() event, values = power_spectrum_collect_window.read() if event == sg.WIN_CLOSED: variables.Running = False break elif event == "<--Back": variables.app_state = 2 elif event == "Save Set Values": variables.laser_power = int(values['-LaserPower-']) variables.sampling_frequency = int(values['-SampFreq-']) variables.number_datapoints = int(values['-NumDat-']) # variables.number_runs_per_bead = int(values['-NumPerBead-']) # Not using right now variables.trial_num=0 # Every time you save value, trial num goes to 0 elif event == "Run Trial": controller.run_power_spectrum() # Using same fig_canvas_agg -> Think should be fine though if tkinter_graphing.figure_canvas_agg_take_data != None: 
tkinter_graphing.delete_figure_agg(tkinter_graphing.figure_canvas_agg_take_data) tkinter_graphing.figure_canvas_agg_take_data = tkinter_graphing.display_power_spectrum_graph_previous_trial(power_spectrum_collect_window, '-PsCanvas-') controller.save_power_spectrum(variables.save_folder) controller.saved_data = {} variables.trial_num+=1 elif event == "Redo Trial": variables.trial_num-=1 controller.run_power_spectrum() # Using same fig_canvas_agg -> Think should be fine though if tkinter_graphing.figure_canvas_agg_take_data != None: tkinter_graphing.delete_figure_agg(tkinter_graphing.figure_canvas_agg_take_data) tkinter_graphing.figure_canvas_agg_take_data = tkinter_graphing.display_power_spectrum_graph_previous_trial(power_spectrum_collect_window, '-PsCanvas-') controller.save_power_spectrum(variables.save_folder) controller.saved_data = {} variables.trial_num+=1 values['-LaserPower-'] = variables.laser_power values['-SampFreq-'] = variables.sampling_frequency values['-NumDat-'] = variables.number_datapoints # values['-NumPerBead-'] = variables.number_runs_per_bead # display collect data page (oscillations) elif variables.app_state == 3: window_load.Hide() window_menu.Hide() window_new_dataset.Hide() window_analyze_data.Hide() power_spectrum_collect_window.Hide() window_choose_data_collection_method.Hide() window_collect_data.UnHide() event, values = window_collect_data.read() if event == sg.WIN_CLOSED: variables.Running = False break elif event == '-Back2-': variables.app_state -= 1 # saves the data collected for the specific trial to save_folder elif event == "Accept and Run Next Trail" and variables.running_trials: # Save Here controller.oscillate(variables.oscillation_axis, variables.osc_num, variables.samp_freq, variables.frequency, variables.amplitude, variables.offset, variables.waveform) ###Load and Display Data and save it### if tkinter_graphing.figure_canvas_agg_take_data != None: 
tkinter_graphing.delete_figure_agg(tkinter_graphing.figure_canvas_agg_take_data) tkinter_graphing.figure_canvas_agg_take_data = tkinter_graphing.display_previous_trial_graph(window_collect_data, 'figCanvas') controller.save_previous_data(variables.save_folder) controller.saved_data = {} ########################### if variables.trial_num == variables.num_trials - 1: variables.trial_num = 0 # increments whichever is being varied for the trials (default = frequency) by specified amount if values['-FreqOrAmp-'] == "Frequency": variables.frequency += float(values["-VaryAmount-"]) elif values['-FreqOrAmp-'] == "Amplitude": variables.amplitude += float(values["-VaryAmount-"]) variables.running_trials = False else: variables.trial_num += 1 elif event == "Start Trials" and not variables.running_trials: if variables.num_trials != 0: # Run Trials of set number variables.running_trials = True # params: oscaxis, numosc, sampfreq, oscfreq, amp, offset, waveform controller.oscillate(variables.oscillation_axis, variables.osc_num, variables.samp_freq, variables.frequency, variables.amplitude, variables.offset, variables.waveform) if tkinter_graphing.figure_canvas_agg_take_data != None: tkinter_graphing.delete_figure_agg(tkinter_graphing.figure_canvas_agg_take_data) print("Deleted") tkinter_graphing.figure_canvas_agg_take_data = tkinter_graphing.display_previous_trial_graph(window_collect_data, 'figCanvas') controller.save_previous_data(variables.save_folder) controller.saved_data = {} if variables.trial_num == variables.num_trials - 1: variables.trial_num = 0 # increments whichever is being varied for the trials (default = frequency) by specified amount if values['-FreqOrAmp-'] == "Frequency": variables.frequency += float(values["-VaryAmount-"]) elif values['-FreqOrAmp-'] == "Amplitude": variables.amplitude += float(values["-VaryAmount-"]) variables.running_trials = False else: print("Don't do zero trials, plz") elif event == "Start Trials" and not variables.running_trials: print( 
'Trials Already Started. Finish Out with \"Accept and Run Next Trial\" or Click \"Reset Trial Number\".') elif event == "Move Distance (Test)": print("Testing...") controller.test_move(variables.test_axis, variables.test_voltage) elif event == "Redo Last Trial" and variables.running_trials: controller.oscillate(variables.oscillation_axis, variables.osc_num, variables.samp_freq, variables.frequency, variables.amplitude, variables.offset, variables.waveform) ###Load and Display Data and save it### if tkinter_graphing.figure_canvas_agg_take_data != None: tkinter_graphing.delete_figure_agg(tkinter_graphing.figure_canvas_agg_take_data) print("Deleted") tkinter_graphing.figure_canvas_agg_take_data = tkinter_graphing.display_previous_trial_graph(window_collect_data, 'figCanvas') controller.save_previous_data(variables.save_folder) controller.saved_data = {} print("Redo Last Trial") elif event == "Save Set Values": # saves inputted values for trial name variables.laser_power = int(values['-LaserPower-']) variables.frequency = float(values['-OscFreq-']) variables.amplitude = float(values['-Amplitude-']) variables.num_trials = int(values['-NumTrialsPerVelocity-']) variables.test_voltage = float(values['-TestVolts-']) variables.test_axis = values['-TestAxis-'] variables.osc_num = int(values['-OscNum-']) variables.samp_freq = int(values['-SampFreq-']) variables.offset = float(values['-Offset-']) variables.waveform = values['-Waveform-'] variables.oscillation_axis = values['-OscAxis-'] elif event == "Reset Trial Number": variables.trial_num = 0 variables.running_trials = False trial_text_field.Update(f"Trial # {variables.trial_num}") amplitude_text_field.Update(f"Amplitude = {variables.amplitude} V") frequency_text_field.Update(f"Frequency = {variables.frequency} Hz") frequency_input.Update(variables.frequency) amplitude_input.Update(variables.amplitude) values['-LaserPower-'] = variables.laser_power values['-OscFreq-'] = variables.frequency values['-Amplitude-'] = 
variables.amplitude values['-NumTrialsPerVelocity-'] = variables.num_trials values['-TestVolts-'] = variables.test_voltage values['-TestAxis-'] = variables.test_axis values['-OscNum-'] = variables.osc_num values['-SampFreq-'] = variables.samp_freq values['-Offset-'] = variables.offset values['-Waveform-'] = variables.waveform values['-OscAxis-'] = variables.oscillation_axis elif variables.app_state == 4: window_load.Hide() window_menu.Hide() window_collect_data.Hide() window_analyze_data.Hide() power_spectrum_collect_window.Hide() window_choose_data_collection_method.Hide() window_new_dataset.UnHide() event, values = window_new_dataset.read() if event == sg.WIN_CLOSED: variables.Running = False break elif event == '-Back1-': variables.app_state = 1 # checks to ensure that the user provides a final destination before proceeding elif event == "Continue" and values['-IN-'] != "": print(values["-IN-"]) variables.analyze_folder = values["-IN-"] variables.data_loader = None variables.data_loader = DataLoader(variables.analyze_folder) if len(variables.data_loader.opened_files)!=0 or len(variables.data_loader.opened_power_spectrum)!=0: # PSD vs Velocity if tkinter_graphing.figure_canvas_agg_analyze_data_psd_velocity != None: tkinter_graphing.delete_figure_agg(tkinter_graphing.figure_canvas_agg_analyze_data_psd_velocity) print("Deleted") # K vs Laser Power if tkinter_graphing.figure_canvas_agg_k_vs_laser_power != None: tkinter_graphing.delete_figure_agg(tkinter_graphing.figure_canvas_agg_k_vs_laser_power) print("Deleted") k_value_text_field.Update(f"K Values = ") # Power_spectrum_k_values if tkinter_graphing.figure_canvas_agg_ps_k_vs_laser_power != None: tkinter_graphing.delete_figure_agg(tkinter_graphing.figure_canvas_agg_ps_k_vs_laser_power) print("Deleted") k_value_field_ps.Update(f"K Values = ") try: tkinter_graphing.figure_canvas_agg_analyze_data_psd_velocity = tkinter_graphing.display_psd_vs_velocity_graph( window_analyze_data, 'figCanvasScroller', 
variables.data_loader.grouped_files_by_laser_power[variables.canvas_scrolled_to], variables.data_loader) tkinter_graphing.figure_canvas_agg_k_vs_laser_power = tkinter_graphing.display_k_vs_laser_power( window_analyze_data, 'klaserpower', variables.data_loader) k_value_text_field.Update(f"K Values = {tkinter_graphing.k_values}") except: print("No Oscillation Data") # traceback.print_exc() try: tkinter_graphing.figure_canvas_agg_ps_k_vs_laser_power = tkinter_graphing.display_power_spectrum_k_values( window_analyze_data, 'pskvalues', variables.data_loader) k_value_field_ps.Update(f"K Values = {tkinter_graphing.power_spectrum_k_values}") except: print("No Power Spectrum Data") variables.app_state = 5 else: print("No Data to Analyze!") elif event == "Continue" and values['-IN-'] == "": print("No Folder Selected") elif variables.app_state == 5: window_load.Hide() window_menu.Hide() window_collect_data.Hide() power_spectrum_collect_window.Hide() window_new_dataset.Hide() window_choose_data_collection_method.Hide() window_analyze_data.UnHide() event, values = window_analyze_data.read() if event == sg.WIN_CLOSED: variables.Running = False break elif event == "-BackAnalyze-": variables.app_state = 4 elif event == "<-Previous" and variables.canvas_scrolled_to > 0: variables.canvas_scrolled_to -= 1 elif event == "Next->" and variables.canvas_scrolled_to < len(variables.data_loader.grouped_files_by_laser_power)-1: print(variables.canvas_scrolled_to) print(len(variables.data_loader.grouped_files_by_laser_power)) variables.canvas_scrolled_to += 1 try: # Graph PSD vs Velocity if tkinter_graphing.figure_canvas_agg_analyze_data_psd_velocity != None: tkinter_graphing.delete_figure_agg(tkinter_graphing.figure_canvas_agg_analyze_data_psd_velocity) print("Deleted") tkinter_graphing.figure_canvas_agg_analyze_data_psd_velocity = tkinter_graphing.display_psd_vs_velocity_graph( window_analyze_data, 'figCanvasScroller', 
variables.data_loader.grouped_files_by_laser_power[variables.canvas_scrolled_to], variables.data_loader) # Graph K vs Laser Power if tkinter_graphing.figure_canvas_agg_k_vs_laser_power != None: tkinter_graphing.delete_figure_agg(tkinter_graphing.figure_canvas_agg_k_vs_laser_power) print("Deleted") tkinter_graphing.figure_canvas_agg_k_vs_laser_power = tkinter_graphing.display_k_vs_laser_power( window_analyze_data, 'klaserpower', variables.data_loader) k_value_text_field.Update(f"K Values = {tkinter_graphing.k_values}") except: print("No Oscillation Data") try: # Power_spectrum_k_values if tkinter_graphing.figure_canvas_agg_ps_k_vs_laser_power != None: tkinter_graphing.delete_figure_agg(tkinter_graphing.figure_canvas_agg_ps_k_vs_laser_power) print("Deleted") tkinter_graphing.figure_canvas_agg_ps_k_vs_laser_power = tkinter_graphing.display_power_spectrum_k_values( window_analyze_data, 'pskvalues', variables.data_loader) k_value_field_ps.Update(f"K Values = {tkinter_graphing.power_spectrum_k_values}") except: print("No Power Spectrum Data") # quit() window_load.close() window_menu.close() window_new_dataset.close() window_collect_data.close() window_analyze_data.close() if __name__ == '__main__': # Final V0.0 main_loop() ```
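The `display_k_vs_laser_power` routine above turns the fitted slope of the PSD-vs-velocity line into a trap stiffness via Stokes' drag, `k = 6 * pi * viscosity * radius_bead / slope`. A minimal standalone sketch of just that conversion; the `viscosity` and `radius_bead` values below are hypothetical placeholders for `variables.viscosity` and `variables.radius_bead`:

```python
import math

# Hypothetical placeholder values (stand-ins for variables.viscosity / variables.radius_bead)
viscosity = 1.0e-3      # Pa*s, roughly water at room temperature
radius_bead = 0.5e-6    # m, a 1-um-diameter bead

def stiffness_from_slope(slope):
    """Trap stiffness from Stokes drag: k = 6*pi*eta*r / slope,
    where slope comes from the linear PSD-vs-velocity fit."""
    return 6 * math.pi * viscosity * radius_bead * (1 / slope)

# A steeper fitted slope implies a softer trap: doubling the slope halves k
k1 = stiffness_from_slope(0.002)
k2 = stiffness_from_slope(0.004)
assert abs(k1 / k2 - 2.0) < 1e-9
print(k1, k2)
```

This is why the analysis page fits a straight line through the PSD-vs-velocity points before computing k: only the slope of that line enters the stiffness estimate.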
# Learning to regress polynomial time series<br/>using a dense network

#### This is a Python3 notebook

```
import numpy as np
from numpy.lib.stride_tricks import as_strided
import matplotlib.pyplot as plt
%matplotlib inline

from tensorflow import keras
Sequential = keras.models.Sequential
Dense = keras.layers.Dense

window_size = 10
hidden_size = 20
seq_length = 100
n_train = 1000
n_test = 1000

def create_model(window_size, hidden_size, display_summary=True):
    """
    seq_length=None: flexible sequence length. recommended for actual usage.
    seq_length=NUMBER: recommended for model summary.
    """
    model = Sequential()
    model.add(Dense(name='window_dense', units=hidden_size, activation='relu',
                    input_shape=(window_size,)))
    model.add(Dense(name='hidden1', units=hidden_size, activation='relu'))
    model.add(Dense(name='hidden2', units=hidden_size, activation='relu'))
    model.add(Dense(name='regressor', units=1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    if display_summary:
        model.summary()
    return model

def generate_polynomial_sequences(seq_length, num_seqs, degree=3, span=2):
    x = np.linspace(-span, span, seq_length)
    monoms = x[:, np.newaxis] ** range(degree + 1)
    coeffs = np.random.randn(num_seqs, degree + 1)
    polynomes = np.matmul(coeffs, monoms.T)
    return polynomes

def plot_preds(real, predicted, num_plot=5, title=None):
    plt.figure()
    window_size = real.shape[1] - predicted.shape[1]
    plt.axvline(window_size, linestyle=':', color='k')
    x_real = np.arange(real.shape[1])
    x_pred = np.arange(window_size, real.shape[1])
    if title is not None:
        plt.title(title)
    for i_poly in np.random.randint(real.shape[0], size=num_plot):
        color = np.random.rand(3) * 0.75
        plt.plot(x_real, real[i_poly, :], color=color, linewidth=5)
        plt.plot(x_pred, predicted[i_poly, :], '--', color='k')
    plt.show()

def rolling_window(a, window):
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    return as_strided(a, shape=shape, strides=strides).copy()

sequences_train = generate_polynomial_sequences(seq_length, n_train)
sequences_test = generate_polynomial_sequences(seq_length, n_test)

windows_train = rolling_window(sequences_train[:, :-1], window_size)
windows_test = rolling_window(sequences_test[:, :-1], window_size)

x_train = windows_train.reshape(-1, window_size)
y_train = sequences_train[:, window_size:].flatten()
x_test = windows_test.reshape(-1, window_size)
y_test = sequences_test[:, window_size:].flatten()

num_plot = 5
plot_inds = np.random.randint(sequences_train.shape[0], size=num_plot)
plt.figure()
plt.title('some train sequences')
plt.plot(sequences_train[plot_inds, :].T)
plt.show()

model = create_model(window_size, hidden_size)

model.fit(x_train, y_train, epochs=1)
pred_test = model.predict(x_test).reshape(n_test, -1)
plot_preds(sequences_test, pred_test, title='after 1 epoch')

model.fit(x_train, y_train, epochs=2)
pred_test = model.predict(x_test).reshape(n_test, -1)
plot_preds(sequences_test, pred_test, title='after 3 epochs')

model.fit(x_train, y_train, epochs=6)
pred_test = model.predict(x_test).reshape(n_test, -1)
plot_preds(sequences_test, pred_test, title='after 9 epochs')

print('\nevaluating:\n')
loss_train = model.evaluate(x_train, y_train)
loss_test = model.evaluate(x_test, y_test)
print('loss_train:', loss_train)
print('loss_test:', loss_test)

%timeit model.predict(x_test).reshape(n_test, -1)
%timeit model.predict(x_test).reshape(n_test, -1)
%timeit model.predict(x_test).reshape(n_test, -1)
```
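The training pairs above hinge on `rolling_window`, which uses NumPy's stride trick to slice every sequence into overlapping windows. A tiny standalone check of what it produces on one short sequence:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

def rolling_window(a, window):
    # Same stride trick as in the notebook: a view of overlapping windows, then a copy
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    return as_strided(a, shape=shape, strides=strides).copy()

seq = np.arange(6.0)           # one sequence: [0, 1, 2, 3, 4, 5]
wins = rolling_window(seq, 3)  # shape (4, 3)
print(wins)
# rows: [0,1,2], [1,2,3], [2,3,4], [3,4,5]
```

Each length-`window` row is then paired with the value that follows it in the sequence, which is exactly how `x_train`/`y_train` are built (windows over `sequences[:, :-1]`, targets from `sequences[:, window_size:]`).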
```
import pydicom
from glob import glob
from random import randint
from copy import deepcopy
from datetime import datetime

import numpy as np
import pandas as pd

pydicom.config.enforce_valid_values = True
pd.set_option('display.max_rows', 500)

multi_arc_plan = pydicom.read_file('MVISO_VMATNEWSPLIT.dcm', force=True)
square_field_plan = pydicom.read_file('MVISO_SMALLSquare.dcm', force=True)

pydicom.Sequence

square_beam_limiting = square_field_plan.BeamSequence[0].ControlPointSequence[0].BeamLimitingDevicePositionSequence

def display_control_points(dcm, beam_index):
    control_point_sequence = dcm.BeamSequence[beam_index].ControlPointSequence
    gantry_angles = [sequence.GantryAngle for sequence in control_point_sequence]
    gantry_rotation = [sequence.GantryRotationDirection for sequence in control_point_sequence]
    collimator_angles = [sequence.BeamLimitingDeviceAngle for sequence in control_point_sequence]
    collimator_rotation = [sequence.BeamLimitingDeviceRotationDirection for sequence in control_point_sequence]
    cumulative_meterset_weight = [sequence.CumulativeMetersetWeight for sequence in control_point_sequence]
    data = np.vstack([
        gantry_angles, gantry_rotation, collimator_angles,
        collimator_rotation, cumulative_meterset_weight]).T
    return pd.DataFrame(data=data, columns=[
        'Gantry', 'Gantry Rotation', 'Collimator',
        'Collimator Rotation', 'Meterset Weight'])

def from_bipolar(angles: np.ndarray):
    ref = angles < 0
    angles[ref] = angles[ref] + 360
    return angles

gantry_beam_1 = from_bipolar(np.arange(-180, 181, 6.0))
gantry_beam_1[0] = 180.1
gantry_beam_1[-1] = 179.9
gantry_beam_1

coll_beam_1 = from_bipolar(np.arange(-180, 1, 3.0))
coll_beam_1[0] = 180.1
coll_beam_1

gantry_beam_2 = from_bipolar(np.arange(180, -181, -6.0))
gantry_beam_2[-1] = 180.1
gantry_beam_2[0] = 179.9
gantry_beam_2

coll_beam_2 = from_bipolar(np.arange(180, -1, -3.0))
coll_beam_2[0] = 179.9
coll_beam_2

num_cps = len(gantry_beam_2)
num_cps

meterset_weight = np.linspace(0, 1, num_cps)
meterset_weight

total_mu = "300.000000"
dose_rate = "300"

meterset_weight * float(total_mu)

cpfirst_beam1 = multi_arc_plan.BeamSequence[0].ControlPointSequence[0]
cpmid_beam1 = multi_arc_plan.BeamSequence[0].ControlPointSequence[1]
cplast_beam1 = multi_arc_plan.BeamSequence[0].ControlPointSequence[-1]

cpfirst_beam1.BeamLimitingDevicePositionSequence = square_beam_limiting
cpmid_beam1.BeamLimitingDevicePositionSequence = square_beam_limiting
cplast_beam1.BeamLimitingDevicePositionSequence = square_beam_limiting

cpfirst_beam1.BeamLimitingDeviceRotationDirection = 'CC'
cpmid_beam1.BeamLimitingDeviceRotationDirection = 'CC'

cpfirst_beam1.DoseRateSet = dose_rate
cpmid_beam1.DoseRateSet = dose_rate
cplast_beam1.DoseRateSet = dose_rate

cpfirst_beam2 = multi_arc_plan.BeamSequence[1].ControlPointSequence[0]
cpmid_beam2 = multi_arc_plan.BeamSequence[1].ControlPointSequence[1]
cplast_beam2 = multi_arc_plan.BeamSequence[1].ControlPointSequence[-1]

cpfirst_beam2.BeamLimitingDevicePositionSequence = square_beam_limiting
cpmid_beam2.BeamLimitingDevicePositionSequence = square_beam_limiting
cplast_beam2.BeamLimitingDevicePositionSequence = square_beam_limiting

cpfirst_beam2.BeamLimitingDeviceRotationDirection = 'CW'
cpmid_beam2.BeamLimitingDeviceRotationDirection = 'CW'

cpfirst_beam2.DoseRateSet = dose_rate
cpmid_beam2.DoseRateSet = dose_rate
cplast_beam2.DoseRateSet = dose_rate

beam1 = pydicom.Sequence([])
beam2 = pydicom.Sequence([])

cpfirst_beam1.GantryAngle = str(gantry_beam_1[0])
cpfirst_beam2.GantryAngle = str(gantry_beam_2[0])
cpfirst_beam1.BeamLimitingDeviceAngle = str(coll_beam_1[0])
cpfirst_beam2.BeamLimitingDeviceAngle = str(coll_beam_2[0])
beam1.append(cpfirst_beam1)
beam2.append(cpfirst_beam2)

for i in range(1, num_cps - 1):
    cpmid_beam1.GantryAngle = str(gantry_beam_1[i])
    cpmid_beam2.GantryAngle = str(gantry_beam_2[i])
    cpmid_beam1.BeamLimitingDeviceAngle = str(coll_beam_1[i])
    cpmid_beam2.BeamLimitingDeviceAngle = str(coll_beam_2[i])
    cpmid_beam1.CumulativeMetersetWeight = str(round(meterset_weight[i], 6))
    cpmid_beam2.CumulativeMetersetWeight = str(round(meterset_weight[i], 6))
    cpmid_beam1.ControlPointIndex = str(i)
    cpmid_beam2.ControlPointIndex = str(i)
    beam1.append(deepcopy(cpmid_beam1))
    beam2.append(deepcopy(cpmid_beam2))

cplast_beam1.GantryAngle = str(gantry_beam_1[-1])
cplast_beam2.GantryAngle = str(gantry_beam_2[-1])
cplast_beam1.BeamLimitingDeviceAngle = str(coll_beam_1[-1])
cplast_beam2.BeamLimitingDeviceAngle = str(coll_beam_2[-1])
cplast_beam1.ControlPointIndex = str(num_cps - 1)
cplast_beam2.ControlPointIndex = str(num_cps - 1)
beam1.append(cplast_beam1)
beam2.append(cplast_beam2)

new_plan = deepcopy(multi_arc_plan)

new_plan.BeamSequence[0].ControlPointSequence = beam1
new_plan.BeamSequence[1].ControlPointSequence = beam2

new_plan.BeamSequence[0].NumberOfControlPoints = str(num_cps)
new_plan.BeamSequence[1].NumberOfControlPoints = str(num_cps)

new_plan.BeamSequence[0].BeamName = "WL-CW"
new_plan.BeamSequence[1].BeamName = "WL-CC"

# new_plan

display_control_points(new_plan, 0)
display_control_points(new_plan, 1)

new_plan.FractionGroupSequence[0].ReferencedBeamSequence[0].BeamMeterset = total_mu
new_plan.FractionGroupSequence[0].ReferencedBeamSequence[1].BeamMeterset = total_mu

new_plan.RTPlanLabel = 'WLutzArc'
new_plan.RTPlanName = 'WLutzArc'

# new_plan

timestamp = datetime.now().isoformat(sep='_').split('.')[0].replace('-', '').replace(':', '')
plan_file_name = f"VMAT_MVISO_{timestamp}.dcm"
plan_file_name

new_plan.save_as(plan_file_name)

!echo {plan_file_name}
!dcmsend 192.168.100.200 104 {plan_file_name} --read-dataset --aetitle CMS_SCU --call EOS_RTD -d
!dcmsend --help
```
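One subtle piece of the geometry above is the angle convention: the arcs are generated on a bipolar [-180, 180] scale, while DICOM stores IEC-style angles in [0, 360). The conversion can be sanity-checked in isolation; a minimal numpy sketch of the same mapping, independent of the pydicom objects:

```python
import numpy as np

def from_bipolar(angles):
    """Map bipolar angles in [-180, 180) to IEC-style angles in [0, 360)."""
    angles = np.asarray(angles, dtype=float).copy()
    angles[angles < 0] += 360.0
    return angles

# Negative bipolar angles wrap to the upper half of the IEC range;
# non-negative angles pass through unchanged.
print(from_bipolar([-180.0, -90.0, 0.0, 90.0, 179.0]))
```

This is why the plan above nudges the end points to 180.1 and 179.9: exactly ±180° is ambiguous under this wrap, so the arc start/stop are moved just off the boundary.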
```
# # COMMENTS TO DO
# Condensed code based on the code from: https://jmetzen.github.io/2015-11-27/vae.html
%matplotlib inline
import tensorflow as tf
import tensorflow.contrib.layers as layers
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import numpy as np
import os
import time
import glob

from tensorflow.examples.tutorials.mnist import input_data

def plot(samples, w, h, fw, fh, iw=28, ih=28):
    fig = plt.figure(figsize=(fw, fh))
    gs = gridspec.GridSpec(w, h)
    gs.update(wspace=0.05, hspace=0.05)
    for i, sample in enumerate(samples):
        ax = plt.subplot(gs[i])
        plt.axis('off')
        ax.set_xticklabels([])
        ax.set_yticklabels([])
        ax.set_aspect('equal')
        plt.imshow(sample.reshape(iw, ih), cmap='Greys_r')
    return fig

def encoder(images, num_outputs_h0=8, num_outputs_h1=16, kernel_size=5, stride=2,
            num_hidden_fc=1024, z_dim=100):
    print("Encoder")
    h0 = layers.convolution2d(
        inputs=images, num_outputs=num_outputs_h0, kernel_size=kernel_size,
        stride=stride, activation_fn=tf.nn.relu, scope='e_cnn_%d' % (0,))
    print("Convolution 1 -> {}".format(h0))
    h1 = layers.convolution2d(
        inputs=h0, num_outputs=num_outputs_h1, kernel_size=kernel_size,
        stride=stride, activation_fn=tf.nn.relu, scope='e_cnn_%d' % (1,))
    print("Convolution 2 -> {}".format(h1))
    h1_dim = h1.get_shape().as_list()[1]
    h2_flat = tf.reshape(h1, [-1, h1_dim * h1_dim * num_outputs_h1])
    print("Reshape -> {}".format(h2_flat))
    h2_flat = layers.fully_connected(
        inputs=h2_flat, num_outputs=num_hidden_fc, activation_fn=tf.nn.relu,
        scope='e_d_%d' % (0,))
    print("FC 1 -> {}".format(h2_flat))
    z_mean = layers.fully_connected(
        inputs=h2_flat, num_outputs=z_dim, activation_fn=None,
        scope='e_d_%d' % (1,))
    print("Z mean -> {}".format(z_mean))
    z_log_sigma_sq = layers.fully_connected(
        inputs=h2_flat, num_outputs=z_dim, activation_fn=None,
        scope='e_d_%d' % (2,))
    return z_mean, z_log_sigma_sq

def decoder(z, num_hidden_fc=1024, h1_reshape_dim=7, kernel_size=5, h1_channels=16,
            h2_channels=8, output_channels=1, strides=2, output_dims=784):
    print("Decoder")
    batch_size = tf.shape(z)[0]
    h0 = layers.fully_connected(
        inputs=z, num_outputs=num_hidden_fc, activation_fn=tf.nn.relu,
        scope='d_d_%d' % (0,))
    print("FC 1 -> {}".format(h0))
    h1 = layers.fully_connected(
        inputs=h0, num_outputs=h1_reshape_dim * h1_reshape_dim * h1_channels,
        activation_fn=tf.nn.relu, scope='d_d_%d' % (1,))
    print("FC 2 -> {}".format(h1))
    h1_reshape = tf.reshape(h1, [-1, h1_reshape_dim, h1_reshape_dim, h1_channels])
    print("Reshape -> {}".format(h1_reshape))
    wdd2 = tf.get_variable('wd2', shape=(kernel_size, kernel_size, h2_channels, h1_channels),
                           initializer=tf.contrib.layers.xavier_initializer())
    bdd2 = tf.get_variable('bd2', shape=(h2_channels,), initializer=tf.constant_initializer(0))
    h2 = tf.nn.conv2d_transpose(h1_reshape, wdd2,
                                output_shape=(batch_size, h1_reshape_dim * 2, h1_reshape_dim * 2, h2_channels),
                                strides=(1, strides, strides, 1), padding='SAME')
    h2_out = tf.nn.relu(h2 + bdd2)
    h2_out = tf.reshape(h2_out, (batch_size, h1_reshape_dim * 2, h1_reshape_dim * 2, h2_channels))
    print("DeConv 1 -> {}".format(h2_out))
    h2_dim = h2_out.get_shape().as_list()[1]
    wdd3 = tf.get_variable('wd3', shape=(kernel_size, kernel_size, output_channels, h2_channels),
                           initializer=tf.contrib.layers.xavier_initializer())
    bdd3 = tf.get_variable('bd3', shape=(output_channels,), initializer=tf.constant_initializer(0))
    h3 = tf.nn.conv2d_transpose(h2_out, wdd3,
                                output_shape=(batch_size, h2_dim * 2, h2_dim * 2, output_channels),
                                strides=(1, strides, strides, 1), padding='SAME')
    h3_out = tf.nn.sigmoid(h3 + bdd3)
    # Workaround to use dynamic batch size...
    h3_out = tf.reshape(h3_out, (batch_size, h2_dim * 2, h2_dim * 2, output_channels))
    print("DeConv 2 -> {}".format(h3_out))
    h3_reshape = tf.reshape(h3_out, [-1, output_dims])
    print("Reshape -> {}".format(h3_reshape))
    return h3_reshape

mnist = input_data.read_data_sets('DATASETS/MNIST_TF', one_hot=True)

# For reconstructing the same or a different image (denoising)
images = tf.placeholder(tf.float32, shape=(None, 784))
images_28x28x1 = tf.reshape(images, [-1, 28, 28, 1])
images_target = tf.placeholder(tf.float32, shape=(None, 784))
is_training_placeholder = tf.placeholder(tf.bool)
learning_rate_placeholder = tf.placeholder(tf.float32)

z_dim = 100

with tf.variable_scope("encoder") as scope:
    z_mean, z_log_sigma_sq = encoder(images_28x28x1)

with tf.variable_scope("reparameterization") as scope:
    eps = tf.random_normal(shape=tf.shape(z_mean), mean=0.0, stddev=1.0, dtype=tf.float32)
    # z = mu + sigma*epsilon
    z = tf.add(z_mean, tf.multiply(tf.sqrt(tf.exp(z_log_sigma_sq)), eps))

with tf.variable_scope("decoder") as scope:
    x_reconstr_mean = decoder(z)
    scope.reuse_variables()
    ##### SAMPLING #######
    z_input = tf.placeholder(tf.float32, shape=[None, z_dim])
    x_sample = decoder(z_input)

# reconstr_loss = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(logits=x_reconstr_mean, labels=images_target), reduction_indices=1)
offset = 1e-7
obs_ = tf.clip_by_value(x_reconstr_mean, offset, 1 - offset)
reconstr_loss = -tf.reduce_sum(images_target * tf.log(obs_) + (1 - images_target) * tf.log(1 - obs_), 1)
latent_loss = -.5 * tf.reduce_sum(1. + z_log_sigma_sq - tf.pow(z_mean, 2) - tf.exp(z_log_sigma_sq),
                                  reduction_indices=1)
cost = tf.reduce_mean(reconstr_loss + latent_loss)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate_placeholder).minimize(cost)

init = tf.global_variables_initializer()

epochs = 1000
batch_size = 100
n_batches = mnist.train.images.shape[0] // batch_size
learning_rate = 0.001

OUTPUT_PATH = "OUT_CVAE_MNIST_WITH_NOISE/"
MODELS_PATH = "MODELS_CVAE_MNIST/"
if not os.path.exists(OUTPUT_PATH):
    os.makedirs(OUTPUT_PATH)
if not os.path.exists(MODELS_PATH):
    os.makedirs(MODELS_PATH)

CVAE_SAVER = tf.train.Saver()

print("N. Batches: {}".format(n_batches))

# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(epochs):
        start_time = time.time()
        mean_loss = 0
        for n_batch in range(n_batches):
            batch_xs, _ = mnist.train.next_batch(batch_size)
            # from https://blog.keras.io/building-autoencoders-in-keras.html
            noise_factor = 0.5
            noise_batch_xs = batch_xs + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=batch_xs.shape)
            noise_batch_xs = np.clip(noise_batch_xs, 0., 1.)
            out_recon, out_latent_loss, out_cost, _ = sess.run(
                [x_reconstr_mean, latent_loss, cost, optimizer],
                feed_dict={images: noise_batch_xs,
                           images_target: batch_xs,
                           is_training_placeholder: True,
                           learning_rate_placeholder: learning_rate})
            mean_loss += out_cost
            # print(" Loss {}".format(out_cost))
        if epoch % 100 == 0:
            print("Epoch {}, Mean Loss {:.2f}, time: {}".format(
                epoch, mean_loss / n_batches, time.time() - start_time))
            fig = plot(noise_batch_xs, 10, 10, 10, 10)
            # plt.show()
            plt.savefig(OUTPUT_PATH + 'input_{}.png'.format(str(epoch).zfill(3)), bbox_inches='tight')
            plt.close(fig)
            fig = plot(batch_xs, 10, 10, 10, 10)
            # plt.show()
            plt.savefig(OUTPUT_PATH + 'target_{}.png'.format(str(epoch).zfill(3)), bbox_inches='tight')
            plt.close(fig)
            fig = plot(out_recon, 10, 10, 10, 10)
            # plt.show()
            plt.savefig(OUTPUT_PATH + 'output_{}.png'.format(str(epoch).zfill(3)), bbox_inches='tight')
            plt.close(fig)
    # Save model weights to disk
    save_path = CVAE_SAVER.save(sess, MODELS_PATH + 'CONV_VAE_MNIST_WITH_NOISE.ckpt')
    print("Model saved in file: {}".format(save_path))

with tf.Session() as sess:
    sess.run(init)
    CVAE_SAVER.restore(sess, save_path)
    print("Model restored in file: {}".format(save_path))
    random_gen = sess.run(x_sample, feed_dict={z_input: np.random.randn(100, z_dim)})
    fig = plot(random_gen, 10, 10, 10, 10)
    plt.show()

with tf.Session() as sess:
    sess.run(init)
    CVAE_SAVER.restore(sess, save_path)
    print("Model restored in file: {}".format(save_path))

    INTERPOLATION_STEPS = 10
    alphaValues = np.linspace(0, 1, INTERPOLATION_STEPS)

    random_indexes = np.random.permutation(mnist.train.images.shape[0])[:2]
    x_samples = mnist.train.images[random_indexes].copy()
    z_projected_samples = sess.run(z, feed_dict={images: x_samples})

    fig = plot(x_samples, 1, 2, 10, 10)
    plt.show()
    fig = plot(z_projected_samples, 10, 10, 10, 10, 10, 10)
    plt.show()

    z_samples_interpolated = np.zeros((INTERPOLATION_STEPS, z_dim))
    for i, alpha in enumerate(alphaValues):
        z_samples_interpolated[i] = z_projected_samples[0] * (1 - alpha) + z_projected_samples[1] * alpha

    x_interpolated = sess.run(x_sample, feed_dict={z_input: z_samples_interpolated})
    fig = plot(x_interpolated, 1, INTERPOLATION_STEPS, 10, 10)
    plt.show()
```
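The reparameterization step above — `z = mu + sigma * eps` with `eps ~ N(0, I)` — is what keeps sampling differentiable, and `latent_loss` is the closed-form KL divergence between the approximate posterior and the unit Gaussian. A framework-free numpy sketch of both (function names here are illustrative, not part of the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(z_mean, z_log_sigma_sq):
    """Draw z = mu + sigma * eps with eps ~ N(0, I)."""
    eps = rng.standard_normal(z_mean.shape)
    return z_mean + np.sqrt(np.exp(z_log_sigma_sq)) * eps

def kl_to_standard_normal(z_mean, z_log_sigma_sq):
    """Per-sample KL(N(mu, sigma^2) || N(0, I)) = -0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2)."""
    return -0.5 * np.sum(1 + z_log_sigma_sq - z_mean ** 2 - np.exp(z_log_sigma_sq), axis=1)

mu = np.zeros((4, 100))
log_var = np.zeros((4, 100))
z = reparameterize(mu, log_var)
print(z.shape)  # (4, 100)

# KL is zero when the posterior already equals N(0, I), and grows as mu drifts:
print(kl_to_standard_normal(mu, log_var)[0])  # zero
print(kl_to_standard_normal(np.ones((1, 100)), np.zeros((1, 100)))[0])  # 0.5 per dim -> 50.0
```

Sampling `eps` outside the network and treating it as an input is the whole trick: gradients flow through `mu` and `log_var` while the randomness stays constant with respect to the parameters.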
# Mediation analysis with DoWhy: Direct and Indirect Effects

```
import numpy as np
import pandas as pd

from dowhy import CausalModel
import dowhy.datasets

# Warnings and logging
import warnings
warnings.filterwarnings('ignore')
```

## Creating a dataset

```
# Creating a dataset with a single confounder and a single mediator (num_frontdoor_variables)
data = dowhy.datasets.linear_dataset(10, num_common_causes=1,
                                     num_samples=10000,
                                     num_instruments=0,
                                     num_effect_modifiers=0,
                                     num_treatments=1,
                                     num_frontdoor_variables=1,
                                     treatment_is_binary=False,
                                     outcome_is_binary=False)
df = data['df']
print(df.head())
```

## Step 1: Modeling the causal mechanism

We create a dataset following a causal graph based on the frontdoor criterion. That is, there is no direct effect of the treatment on the outcome; all of the effect is mediated through the frontdoor variable FD0.

```
model = CausalModel(df,
                    data["treatment_name"], data["outcome_name"],
                    data["gml_graph"],
                    missing_nodes_as_confounders=True)

model.view_model()
from IPython.display import Image, display
display(Image(filename="causal_model.png"))
```

## Step 2: Identifying the natural direct and indirect effects

We use the `estimand_type` argument to specify that the target estimand should be for a **natural direct effect** or the **natural indirect effect**. For definitions, see [Interpretation and Identification of Causal Mediation](https://ftp.cs.ucla.edu/pub/stat_ser/r389-imai-etal-commentary-r421-reprint.pdf) by Judea Pearl.

**Natural direct effect**: effect due to the path v0->y.

**Natural indirect effect**: effect due to the path v0->FD0->y (mediated by FD0).

```
# Natural direct effect (nde)
identified_estimand_nde = model.identify_effect(estimand_type="nonparametric-nde",
                                                proceed_when_unidentifiable=True)
print(identified_estimand_nde)

# Natural indirect effect (nie)
identified_estimand_nie = model.identify_effect(estimand_type="nonparametric-nie",
                                                proceed_when_unidentifiable=True)
print(identified_estimand_nie)
```

## Step 3: Estimation of the effect

Currently only two-stage linear regression is supported for estimation. We plan to add a non-parametric Monte Carlo method soon, as described in [Imai, Keele and Yamamoto (2010)](https://projecteuclid.org/euclid.ss/1280841733).

#### Natural Indirect Effect

The estimator converts the mediation effect estimation into a series of backdoor effect estimations.

1. The first-stage model estimates the effect from treatment (v0) to the mediator (FD0).
2. The second-stage model estimates the effect from mediator (FD0) to the outcome (Y).

```
import dowhy.causal_estimators.linear_regression_estimator

causal_estimate_nie = model.estimate_effect(identified_estimand_nie,
                                            method_name="mediation.two_stage_regression",
                                            confidence_intervals=False,
                                            test_significance=False,
                                            method_params={
                                                'first_stage_model': dowhy.causal_estimators.linear_regression_estimator.LinearRegressionEstimator,
                                                'second_stage_model': dowhy.causal_estimators.linear_regression_estimator.LinearRegressionEstimator
                                            })
print(causal_estimate_nie)
```

Note that the value equals the true value of the natural indirect effect (up to random noise).

```
print(causal_estimate_nie.value, data["ate"])
```

The parameter is called `ate` because in the simulated dataset the direct effect is set to zero.

#### Natural Direct Effect

Now let us check whether the direct effect estimator returns the (correct) estimate of zero.

```
causal_estimate_nde = model.estimate_effect(identified_estimand_nde,
                                            method_name="mediation.two_stage_regression",
                                            confidence_intervals=False,
                                            test_significance=False,
                                            method_params={
                                                'first_stage_model': dowhy.causal_estimators.linear_regression_estimator.LinearRegressionEstimator,
                                                'second_stage_model': dowhy.causal_estimators.linear_regression_estimator.LinearRegressionEstimator
                                            })
print(causal_estimate_nde)
```

## Step 4: Refutations

TODO
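For intuition about what the two-stage estimator does, the same idea can be reproduced on a toy linear model: regress the mediator on the treatment, regress the outcome on the mediator, and multiply the slopes. This is only a sketch — not DoWhy's implementation — with made-up coefficients (2 and 5) and no confounder:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
v0 = rng.normal(size=n)                # treatment
fd0 = 2.0 * v0 + rng.normal(size=n)    # mediator: v0 -> FD0 effect is 2
y = 5.0 * fd0 + rng.normal(size=n)     # outcome: FD0 -> y effect is 5, no direct v0 -> y path

def ols_slope(target, regressor):
    """One-regressor least-squares slope."""
    return np.cov(regressor, target)[0, 1] / np.var(regressor, ddof=1)

first_stage = ols_slope(fd0, v0)    # ~2: treatment -> mediator
second_stage = ols_slope(y, fd0)    # ~5: mediator -> outcome
nie = first_stage * second_stage    # natural indirect effect ~ 2 * 5 = 10
print(round(nie, 2))
```

On this graph all of the effect is mediated, so the product of the two slopes recovers the total effect — the same reason the notebook compares the NIE estimate against `data["ate"]`.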
CTC (Connectionist Temporal Classification) is a tool for sequence modeling; its core is a specially defined **objective function / training criterion** [1].

> A Jupyter-notebook version is available in this [repo](https://github.com/DingKe/ml-tutorial/tree/master/ctc).

# 1. The Algorithm

Following Alex Graves' seminal paper [1], this section walks through the principles of CTC and implements CTC inference and training from scratch in numpy.

## 1.1 Formalizing the sequence problem

A sequence problem can be formalized as a function

$$N_w: (R^m)^T \rightarrow (R^n)^T$$

where the target sequence is a string over a vocabulary of size $n$; that is, $N_w$ outputs an $n$-dimensional probability distribution at each step (e.g. via softmax).

The network output is $y = N_w(x)$, where $y_k^t$ denotes the probability of the $k$-th symbol at time $t$.

![](https://distill.pub/2017/ctc/assets/full_collapse_from_audio.svg)

**Figure 1. Sequence modeling [[src](https://distill.pub/2017/ctc/)]**

Although the concrete form of $N_w$ is not restricted, it is assumed below to be some neural network (e.g. an RNN).

The following code builds a toy $N_w$:

```
import numpy as np

np.random.seed(1111)

T, V = 12, 5
m, n = 6, V

x = np.random.random([T, m])  # T x m
w = np.random.random([m, n])  # weights, m x n

def softmax(logits):
    max_value = np.max(logits, axis=1, keepdims=True)
    exp = np.exp(logits - max_value)
    exp_sum = np.sum(exp, axis=1, keepdims=True)
    dist = exp / exp_sum
    return dist

def toy_nw(x):
    y = np.matmul(x, w)  # T x n
    y = softmax(y)
    return y

y = toy_nw(x)
print(y)
print(y.sum(1, keepdims=True))
```

## 1.2 Alignment-free variable-length mapping

The formulation above is a one-to-one mapping between input and output steps. Sequence learning tasks are generally many-to-many (in speech recognition, for instance, hundreds of input frames may correspond to only a few syllables or characters, and there is no clear alignment between individual inputs and outputs). CTC resolves the many-to-one mapping by introducing a special blank symbol (written %).

Extend the original vocabulary $L$ to $L^\prime = L \cup \{\text{blank}\}$. On output strings, define the operator $B$: 1) merge consecutive identical symbols; 2) remove blank symbols.

For example, applying $B$ to "aa%bb%%cc" yields the string "abc", and "%a%b%cc%" likewise represents "abc":

$$B(aa\%bb\%\%cc) = B(\%a\%b\%cc\%) = abc$$

With blank and $B$, a variable-length mapping is achieved:

$$L^{\prime T} \rightarrow L^{\le T}$$

For this reason, CTC can only model problems whose output is no longer than the input.

## 1.3 Likelihood

Like most supervised learning, CTC trains with the maximum-likelihood criterion. Given input $x$, the conditional probability of output $l$ is

$$p(l|x) = \sum_{\pi \in B^{-1}(l)} p(\pi|x)$$

where $B^{-1}(l)$ is the set of length-$T$ strings that map to $l$ under $B$.

**CTC assumes the output probabilities are conditionally independent (given the input)**, so

$$p(\pi|x) = \prod_{t=1}^T y^t_{\pi_t}, \quad \forall \pi \in L^{\prime T}$$

Evaluated naively, this sum is intractable. Dynamic programming — a forward algorithm and a backward algorithm — gives an efficient way to compute both the likelihood and its gradient.

## 1.4 Forward algorithm

For the forward and backward passes, CTC first pads the target string: for $(a_1,\cdots,a_m)$, a blank is inserted between every pair of characters and at both ends, giving $(\%, a_1, \%, a_2, \%, \cdots, \%, a_m, \%)$. Below, $l$ denotes the original string and $l^\prime$ the padded string.

Define

$$\alpha_t(s) \stackrel{def}{=} \sum_{\pi \in N^T: B(\pi_{1:t}) = l_{1:s}} \prod_{t^\prime=1}^{t} y^{t^\prime}_{\pi_{t^\prime}}$$

Clearly,

$$\begin{aligned} \alpha_1(1) &= y_b^1, \\ \alpha_1(2) &= y_{l_1}^1, \\ \alpha_1(s) &= 0, \quad \forall s > 2 \end{aligned}$$

and from the definition of $\alpha$ the recursion follows:

$$\alpha_t(s) = \begin{cases} \left(\alpha_{t-1}(s)+\alpha_{t-1}(s-1)\right) y^t_{l^\prime_s}, & \text{if } l^\prime_s = b \text{ or } l^\prime_{s-2} = l^\prime_s \\ \left(\alpha_{t-1}(s)+\alpha_{t-1}(s-1)+\alpha_{t-1}(s-2)\right) y^t_{l^\prime_s}, & \text{otherwise} \end{cases}$$

### 1.4.1 Case 2

Case 2 of the recursion is the general situation. As the figure shows, when the symbol at position $s$ and time $t$ is a non-blank character (say a), it can be reached in three ways: 1) by repeating the previous symbol, i.e. the previous symbol was also a; 2) by a transition, where the previous symbol was not a. The second situation splits further: 2.1) the previous symbol was blank; 2.2) a was reached directly from a different non-blank character, skipping the blank in between (blanks are removed by $B$ anyway, so the blank is not mandatory).

![](https://distill.pub/2017/ctc/assets/cost_regular.svg)

**Figure 2. Forward algorithm, Case 2 [[src](https://distill.pub/2017/ctc/)]**

### 1.4.2 Case 1

Case 1 of the recursion is the special situation. As the figure shows, when the symbol at time $t$ is blank, it can only be reached in two ways: 1) by repeating the previous symbol (also blank); 2) by a transition from a non-blank symbol. When the symbol at time $t$ is non-blank but equals $l^\prime_{s-2}$, the situation resembles Case 2, except that the blank between two identical characters cannot be skipped (otherwise "aa" and "a" could not be distinguished), so again only two transitions are possible.

![](https://distill.pub/2017/ctc/assets/cost_no_skip.svg)

**Figure 3. Forward algorithm, Case 1 [[src](https://distill.pub/2017/ctc/)]**

All $\alpha$ values can be computed by dynamic programming, in $O(T \cdot L)$ time and space.

The likelihood involves only multiplications and additions, so it is differentiable, and tools with automatic differentiation (e.g. tensorflow or pytorch) could compute gradients automatically. Below we show how to compute the gradient by hand, efficiently.

```
def forward(y, labels):
    T, V = y.shape
    L = len(labels)
    alpha = np.zeros([T, L])

    # init
    alpha[0, 0] = y[0, labels[0]]
    alpha[0, 1] = y[0, labels[1]]

    for t in range(1, T):
        for i in range(L):
            s = labels[i]

            a = alpha[t - 1, i]
            if i - 1 >= 0:
                a += alpha[t - 1, i - 1]
            if i - 2 >= 0 and s != 0 and s != labels[i - 2]:
                a += alpha[t - 1, i - 2]

            alpha[t, i] = a * y[t, s]

    return alpha

labels = [0, 3, 0, 3, 0, 4, 0]  # 0 for blank
alpha = forward(y, labels)
print(alpha)
```

Finally, the likelihood is $p(l|x) = \alpha_T(|l^\prime|) + \alpha_T(|l^\prime|-1)$.

```
p = alpha[-1, -1] + alpha[-1, -2]
print(p)
```

## 1.5 Backward algorithm

Analogously to the forward pass, we define a backward pass. First define

$$\beta_t(s) \stackrel{def}{=} \sum_{\pi \in N^T: B(\pi_{t:T}) = l_{s:|l|}} \prod_{t^\prime=t}^{T} y^{t^\prime}_{\pi_{t^\prime}}$$

Clearly,

$$\begin{aligned} \beta_T(|l^\prime|) &= y_b^T, \\ \beta_T(|l^\prime|-1) &= y_{l_{|l|}}^T, \\ \beta_T(s) &= 0, \quad \forall s < |l^\prime| - 1 \end{aligned}$$

and the recursion follows:

$$\beta_t(s) = \begin{cases} \left(\beta_{t+1}(s)+\beta_{t+1}(s+1)\right) y^t_{l^\prime_s}, & \text{if } l^\prime_s = b \text{ or } l^\prime_{s+2} = l^\prime_s \\ \left(\beta_{t+1}(s)+\beta_{t+1}(s+1)+\beta_{t+1}(s+2)\right) y^t_{l^\prime_s}, & \text{otherwise} \end{cases}$$

```
def backward(y, labels):
    T, V = y.shape
    L = len(labels)
    beta = np.zeros([T, L])

    # init
    beta[-1, -1] = y[-1, labels[-1]]
    beta[-1, -2] = y[-1, labels[-2]]

    for t in range(T - 2, -1, -1):
        for i in range(L):
            s = labels[i]

            a = beta[t + 1, i]
            if i + 1 < L:
                a += beta[t + 1, i + 1]
            if i + 2 < L and s != 0 and s != labels[i + 2]:
                a += beta[t + 1, i + 2]

            beta[t, i] = a * y[t, s]

    return beta

beta = backward(y, labels)
print(beta)
```

## 1.6 Gradient

We now use the forward and backward quantities $\alpha$ and $\beta$ to compute the gradient. From the definitions of $\alpha$ and $\beta$:

$$\alpha_t(s)\beta_t(s) = y^t_{l_s^\prime} \cdot \sum_{\pi \in B^{-1}(l):\pi_t=l_s^\prime} \prod_{t^\prime=1}^T y^{t^\prime}_{\pi_{t^\prime}}$$

so

$$\frac{\alpha_t(s)\beta_t(s)}{y^t_{l_s^\prime}} = \sum_{\pi \in B^{-1}(l):\pi_t=l_s^\prime} \prod_{t^\prime=1}^T y^{t^\prime}_{\pi_{t^\prime}} = \sum_{\pi \in B^{-1}(l):\pi_t=l_s^\prime} p(\pi|x)$$

and hence the likelihood

$$p(l|x) = \sum_{s=1}^{|l^\prime|} \sum_{\pi \in B^{-1}(l):\pi_t=l_s^\prime} p(\pi|x) = \sum_{s=1}^{|l^\prime|} \frac{\alpha_t(s)\beta_t(s)}{y^t_{l_s^\prime}}$$

To compute $\frac{\partial p(l|x)}{\partial y^t_k}$, observe that in the sum on the right only the terms with $l^\prime_s = k$ contain $y^t_k$; all other terms have zero derivative and can be ignored. Thus

$$\frac{\partial p(l|x)}{\partial y^t_k} = \frac{\partial\, \frac{\alpha_t(k)\beta_t(k)}{y^t_k}}{\partial y^t_k}$$

By the quotient rule:

$$\frac{\partial p(l|x)}{\partial y^t_k} = \frac{\frac{2\,\alpha_t(k)\beta_t(k)}{y^t_k}\, y^t_k - \alpha_t(k)\beta_t(k) \cdot 1}{(y^t_k)^2} = \frac{\alpha_t(k)\beta_t(k)}{(y^t_k)^2}$$

> In the differentiation, the first term of the numerator arises because $\alpha_t(k)\beta_t(k)$ contains the product of two factors of $y^t_k$ (i.e. $(y^t_k)^2$); everything else is constant with respect to $y^t_k$.

The string $l$ may contain the character $k$ several times; their gradients accumulate, so the final result is:

$$\frac{\partial p(l|x)}{\partial y^t_k} = \frac{1}{(y^t_k)^2} \sum_{s \in lab(l, k)} \alpha_t(s)\beta_t(s)$$

where $lab(l, k) = \{s: l_s^\prime = k\}$.

We usually optimize the log-likelihood, so the gradient becomes:

$$\frac{\partial \ln p(l|x)}{\partial y^t_k} = \frac{1}{p(l|x)} \frac{\partial p(l|x)}{\partial y^t_k}$$

where the likelihood is already available from the forward pass: $p(l|x) = \alpha_T(|l^\prime|) + \alpha_T(|l^\prime|-1)$.

For a given training set $D$, the objective to optimize is:

$$O(D, N_w) = -\sum_{(x,z)\in D} \ln p(z|x)$$

Given the gradient, any optimization method (e.g. SGD, Adam) can be used for training.

```
def gradient(y, labels):
    T, V = y.shape
    L = len(labels)

    alpha = forward(y, labels)
    beta = backward(y, labels)
    p = alpha[-1, -1] + alpha[-1, -2]

    grad = np.zeros([T, V])
    for t in range(T):
        for s in range(V):
            lab = [i for i, c in enumerate(labels) if c == s]
            for i in lab:
                grad[t, s] += alpha[t, i] * beta[t, i]
            grad[t, s] /= y[t, s] ** 2

    grad /= p
    return grad

grad = gradient(y, labels)
print(grad)
```

To validate the implementation, we compare the forward-backward gradient against a numerical gradient.

```
def check_grad(y, labels, w=-1, v=-1, toleration=1e-3):
    grad_1 = gradient(y, labels)[w, v]

    delta = 1e-10
    original = y[w, v]

    y[w, v] = original + delta
    alpha = forward(y, labels)
    log_p1 = np.log(alpha[-1, -1] + alpha[-1, -2])

    y[w, v] = original - delta
    alpha = forward(y, labels)
    log_p2 = np.log(alpha[-1, -1] + alpha[-1, -2])

    y[w, v] = original

    grad_2 = (log_p1 - log_p2) / (2 * delta)
    if np.abs(grad_1 - grad_2) > toleration:
        print('[%d, %d]:%.2e' % (w, v, np.abs(grad_1 - grad_2)))

for toleration in [1e-5, 1e-6]:
    print('%.e' % toleration)
    for w in range(y.shape[0]):
        for v in range(y.shape[1]):
            check_grad(y, labels, w, v, toleration)
```

The two methods agree to within 1e-5 everywhere, with errors at most on the order of 1e-6. This is an initial check that both the derivation and the implementation of the forward-backward gradient are correct.

## 1.7 Logits gradient

In practice, for convenience, the CTC and softmax gradients can be fused, giving:

$$\frac{\partial \ln p(l|x)}{\partial u^t_k} = \frac{1}{y^t_k \, p(l|x)} \sum_{s \in lab(l, k)} \alpha_t(s)\beta_t(s) - y^t_k$$

This follows because softmax backpropagates as:

$$\frac{\partial \ln p(l|x)}{\partial u^t_k} = y^t_k \left(\frac{\partial \ln p(l|x)}{\partial y^t_k} - \sum_{j=1}^{V} \frac{\partial \ln p(l|x)}{\partial y^t_j}\, y^t_j\right)$$

and combining this with the gradient of §1.6 yields the fused formula above.

```
def gradient_logits_naive(y, labels):
    '''
    gradient by back propagation through softmax
    '''
    y_grad = gradient(y, labels)

    sum_y_grad = np.sum(y_grad * y, axis=1, keepdims=True)
    u_grad = y * (y_grad - sum_y_grad)

    return u_grad

def gradient_logits(y, labels):
    '''
    fused CTC + softmax gradient, computed directly
    '''
    T, V = y.shape
    L = len(labels)

    alpha = forward(y, labels)
    beta = backward(y, labels)
    p = alpha[-1, -1] + alpha[-1, -2]

    u_grad = np.zeros([T, V])
    for t in range(T):
        for s in range(V):
            lab = [i for i, c in enumerate(labels) if c == s]
            for i in lab:
                u_grad[t, s] += alpha[t, i] * beta[t, i]
            u_grad[t, s] /= y[t, s] * p

    u_grad -= y
    return u_grad

grad_l = gradient_logits_naive(y, labels)
grad_2 = gradient_logits(y, labels)

print(np.sum(np.abs(grad_l - grad_2)))
```

As above, a numerical gradient gives an initial check of correctness:

```
def check_grad_logits(x, labels, w=-1, v=-1, toleration=1e-3):
    grad_1 = gradient_logits(softmax(x), labels)[w, v]

    delta = 1e-10
    original = x[w, v]

    x[w, v] = original + delta
    y = softmax(x)
    alpha = forward(y, labels)
    log_p1 = np.log(alpha[-1, -1] + alpha[-1, -2])

    x[w, v] = original - delta
    y = softmax(x)
    alpha = forward(y, labels)
    log_p2 = np.log(alpha[-1, -1] + alpha[-1, -2])

    x[w, v] = original

    grad_2 = (log_p1 - log_p2) / (2 * delta)
    if np.abs(grad_1 - grad_2) > toleration:
        print('[%d, %d]:%.2e, %.2e, %.2e' % (w, v, grad_1, grad_2, np.abs(grad_1 - grad_2)))

np.random.seed(1111)
x = np.random.random([10, 10])
for toleration in [1e-5, 1e-6]:
    print('%.e' % toleration)
    for w in range(x.shape[0]):
        for v in range(x.shape[1]):
            check_grad_logits(x, labels, w, v, toleration)
```

# 2. Numerical stability

CTC training risks numerical underflow, especially for long sequences. Two numerically stable engineering techniques are described below: 1) computing in the log domain (common in many CRF implementations); 2) a rescaling trick (used in the original paper [1]).

## 2.1 Log-domain computation

Log-domain computation relies on the logsumexp operation. [Experience shows](https://github.com/baidu-research/warp-ctc) that computing in the log domain is numerically robust even in single precision, effectively avoiding underflow. The price of stability is extra computational complexity: the original implementation uses only multiplications and additions, while the log-domain version also needs logarithms and exponentials.

```
ninf = float('-inf')

def _logsumexp(a, b):
    '''
    np.log(np.exp(a) + np.exp(b))
    '''
    if a < b:
        a, b = b, a

    if b == ninf:
        return a
    else:
        return a + np.log(1 + np.exp(b - a))

def logsumexp(*args):
    '''
    from scipy.special import logsumexp
    logsumexp(args)
    '''
    res = args[0]
    for e in args[1:]:
        res = _logsumexp(res, e)
    return res
```

### 2.1.1 Log-domain forward algorithm

The log-domain forward algorithm is implemented as follows:

```
def forward_log(log_y, labels):
    T, V = log_y.shape
    L = len(labels)
    log_alpha = np.ones([T, L]) * ninf

    # init
    log_alpha[0, 0] = log_y[0, labels[0]]
    log_alpha[0, 1] = log_y[0, labels[1]]

    for t in range(1, T):
        for i in range(L):
            s = labels[i]

            a = log_alpha[t - 1, i]
            if i - 1 >= 0:
                a = logsumexp(a, log_alpha[t - 1, i - 1])
            if i - 2 >= 0 and s != 0 and s != labels[i - 2]:
                a = logsumexp(a, log_alpha[t - 1, i - 2])

            log_alpha[t, i] = a + log_y[t, s]

    return log_alpha

log_alpha = forward_log(np.log(y), labels)
alpha = forward(y, labels)
print(np.sum(np.abs(np.exp(log_alpha) - alpha)))
```

### 2.1.2 Log-domain backward algorithm

The log-domain backward algorithm is implemented as follows:

```
def backward_log(log_y, labels):
    T, V = log_y.shape
    L = len(labels)
    log_beta = np.ones([T, L]) * ninf

    # init
    log_beta[-1, -1] = log_y[-1, labels[-1]]
    log_beta[-1, -2] = log_y[-1, labels[-2]]

    for t in range(T - 2, -1, -1):
        for i in range(L):
            s = labels[i]

            a = log_beta[t + 1, i]
            if i + 1 < L:
                a = logsumexp(a, log_beta[t + 1, i + 1])
            if i + 2 < L and s != 0 and s != labels[i + 2]:
                a = logsumexp(a, log_beta[t + 1, i + 2])

            log_beta[t, i] = a + log_y[t, s]

    return log_beta

log_beta = backward_log(np.log(y), labels)
beta = backward(y, labels)
print(np.sum(np.abs(np.exp(log_beta) - beta)))
```

### 2.1.3 Log-domain gradient

On top of the log-domain forward and backward passes, the gradient can also be computed in the log domain.

```
def gradient_log(log_y, labels):
    T, V = log_y.shape
    L = len(labels)

    log_alpha = forward_log(log_y, labels)
    log_beta = backward_log(log_y, labels)
    log_p = logsumexp(log_alpha[-1, -1], log_alpha[-1, -2])

    log_grad = np.ones([T, V]) * ninf
    for t in range(T):
        for s in range(V):
            lab = [i for i, c in enumerate(labels) if c == s]
            for i in lab:
                log_grad[t, s] = logsumexp(log_grad[t, s], log_alpha[t, i] + log_beta[t, i])
            log_grad[t, s] -= 2 * log_y[t, s]

    log_grad -= log_p
    return log_grad

log_grad = gradient_log(np.log(y), labels)
grad = gradient(y, labels)
#print(log_grad)
#print(grad)
print(np.sum(np.abs(np.exp(log_grad) - grad)))
```

## 2.2 Rescaling

### 2.2.1 Forward algorithm

To avoid underflow, at each time step of the forward algorithm the computed $\alpha$ values are rescaled:

$$C_t \stackrel{def}{=} \sum_s \alpha_t(s)$$

$$\hat{\alpha}_t(s) = \frac{\alpha_t(s)}{C_t}$$

The rescaled $\alpha$ no longer shrinks as time steps accumulate; $\hat{\alpha}$ replaces $\alpha$ in the iteration for the next time step.

```
def forward_scale(y, labels):
    T, V = y.shape
    L = len(labels)
    alpha_scale = np.zeros([T, L])

    # init
    alpha_scale[0, 0] = y[0, labels[0]]
    alpha_scale[0, 1] = y[0, labels[1]]
    Cs = []
    C = np.sum(alpha_scale[0])
    alpha_scale[0] /= C
    Cs.append(C)

    for t in range(1, T):
        for i in range(L):
            s = labels[i]

            a = alpha_scale[t - 1, i]
            if i - 1 >= 0:
                a += alpha_scale[t - 1, i - 1]
            if i - 2 >= 0 and s != 0 and s != labels[i - 2]:
                a += alpha_scale[t - 1, i - 2]

            alpha_scale[t, i] = a * y[t, s]

        C = np.sum(alpha_scale[t])
        alpha_scale[t] /= C
        Cs.append(C)

    return alpha_scale, Cs
```

Because of the rescaling, computing the final probability requires compensation:

$$p(l|x) = \alpha_T(|l^\prime|) + \alpha_T(|l^\prime|-1) = \left(\hat\alpha_T(|l^\prime|) + \hat\alpha_T(|l^\prime|-1)\right) \prod_{t=1}^T C_t$$

$$\ln p(l|x) = \sum_{t=1}^T \ln C_t + \ln\left(\hat\alpha_T(|l^\prime|) + \hat\alpha_T(|l^\prime|-1)\right)$$

```
labels = [0, 1, 2, 0]  # 0 for blank

alpha_scale, Cs = forward_scale(y, labels)
log_p = np.sum(np.log(Cs)) + np.log(alpha_scale[-1, -1] + alpha_scale[-1, -2])

alpha = forward(y, labels)
p = alpha[-1, -1] + alpha[-1, -2]

print(np.log(p), log_p, np.log(p) - log_p)
```

### 2.2.2 Backward algorithm

The backward rescaling is analogous to the forward one:

$$D_t \stackrel{def}{=} \sum_s \beta_t(s)$$

$$\hat{\beta}_t(s) = \frac{\beta_t(s)}{D_t}$$

```
def backward_scale(y, labels):
    T, V = y.shape
    L = len(labels)
    beta_scale = np.zeros([T, L])

    # init
    beta_scale[-1, -1] = y[-1, labels[-1]]
    beta_scale[-1, -2] = y[-1, labels[-2]]
    Ds = []
    D = np.sum(beta_scale[-1, :])
    beta_scale[-1] /= D
    Ds.append(D)

    for t in range(T - 2, -1, -1):
        for i in range(L):
            s = labels[i]

            a = beta_scale[t + 1, i]
            if i + 1 < L:
                a += beta_scale[t + 1, i + 1]
            if i + 2 < L and s != 0 and s != labels[i + 2]:
                a += beta_scale[t + 1, i + 2]

            beta_scale[t, i] = a * y[t, s]

        D = np.sum(beta_scale[t])
        beta_scale[t] /= D
        Ds.append(D)

    return beta_scale, Ds[::-1]

beta_scale, Ds = backward_scale(y, labels)
print(beta_scale)
```

### 2.2.3 Gradient

$$\frac{\partial \ln p(l|x)}{\partial y^t_k} = \frac{1}{p(l|x)} \frac{\partial p(l|x)}{\partial y^t_k} = \frac{1}{p(l|x)} \frac{1}{(y^t_k)^2} \sum_{s \in lab(l, k)} \alpha_t(s)\beta_t(s)$$

Considering that

$$p(l|x) = \sum_{s=1}^{|l^\prime|} \frac{\alpha_t(s)\beta_t(s)}{y^t_{l_s^\prime}}$$

together with

$$\alpha_t(s) = \hat\alpha_t(s) \cdot \prod_{k=1}^t C_k$$

$$\beta_t(s) = \hat\beta_t(s) \cdot \prod_{k=t}^T D_k$$

we obtain

$$\frac{\partial \ln p(l|x)}{\partial y^t_k} = \frac{1}{\sum_{s=1}^{|l^\prime|} \frac{\hat\alpha_t(s)\hat\beta_t(s)}{y^t_{l^\prime_s}}} \frac{1}{(y^t_k)^2} \sum_{s \in lab(l, k)} \hat\alpha_t(s)\hat\beta_t(s)$$

Every factor in the rightmost expression has already been computed. The gradient implementation:

```
def gradient_scale(y, labels):
    T, V = y.shape
    L = len(labels)

    alpha_scale, _ = forward_scale(y, labels)
    beta_scale, _ = backward_scale(y, labels)

    grad = np.zeros([T, V])
    for t in range(T):
        for s in range(V):
            lab = [i for i, c in enumerate(labels) if c == s]
            for i in lab:
                grad[t, s] += alpha_scale[t, i] * beta_scale[t, i]
            grad[t, s] /= y[t, s] ** 2

        # normalize factor
        z = 0
        for i in range(L):
            z += alpha_scale[t, i] * beta_scale[t, i] / y[t, labels[i]]
        grad[t] /= z

    return grad

labels = [0, 3, 0, 3, 0, 4, 0]  # 0 for blank
grad_1 = gradient_scale(y, labels)
grad_2 = gradient(y, labels)
print(np.sum(np.abs(grad_1 - grad_2)))
```

### 2.2.4 Logits gradient

Analogously to the derivation of the $y$ gradient, the logits gradient is:

$$\frac{\partial \ln p(l|x)}{\partial u^t_k} = \frac{1}{y^t_k Z_t} \sum_{s \in lab(l, k)} \hat\alpha_t(s)\hat\beta_t(s) - y^t_k$$

where

$$Z_t \stackrel{def}{=} \sum_{s=1}^{|l^\prime|} \frac{\hat\alpha_t(s)\hat\beta_t(s)}{y^t_{l^\prime_s}}$$

# 3. Decoding

The trained $N_w$ can be used to predict the output string for new input samples, which requires decoding. Under the maximum-likelihood criterion, the optimal decoding result is:

$$h(x) = \underset{l \in L^{\le T}}{\mathrm{argmax}}\ p(l|x)$$

However, no efficient exact algorithm is known for this maximization. Several practical approximate decoding methods are introduced below.

## 3.1 Greedy search

Although $p(l|x)$ is hard to compute efficiently, CTC's independence assumption makes the probability of a specific string $\pi$ (before collapsing) easy to compute:

$$p(\pi|x) = \prod_{k=1}^T p(\pi_k|x)$$

Therefore, instead of searching for the string that maximizes $p(l|x)$, we settle for the string that maximizes $p(\pi|x)$:

$$h(x) \approx B(\pi^\star)$$

where

$$\pi^\star = \underset{\pi \in N^T}{\mathrm{argmax}}\ p(\pi|x)$$

With this simplification, constructing $\pi^\star$ becomes trivial (thanks to the independence assumption): output the most probable symbol at each time step:

$$\pi^\star = cat_{t=1}^T\left(\underset{s \in L^\prime}{\mathrm{argmax}}\ y^t_s\right)$$

```
def remove_blank(labels, blank=0):
    new_labels = []

    # combine duplicate
    previous = None
    for l in labels:
        if l != previous:
            new_labels.append(l)
            previous = l

    # remove blank
    new_labels = [l for l in new_labels if l != blank]

    return new_labels

def insert_blank(labels, blank=0):
    new_labels = [blank]
    for l in labels:
        new_labels += [l, blank]
    return new_labels

def greedy_decode(y, blank=0):
    raw_rs = np.argmax(y, axis=1)
    rs = remove_blank(raw_rs, blank)
    return raw_rs, rs

np.random.seed(1111)
y = softmax(np.random.random([20, 6]))
rr, rs = greedy_decode(y)
print(rr)
print(rs)
```

## 3.2 Beam search

Greedy search is clearly limited. For instance, it cannot produce any path other than the single best one. In many situations, having the n-best paths allows additional information to be used downstream to refine the search result. Beam search approximately finds the top-scoring paths.

```
def beam_decode(y, beam_size=10):
    T, V = y.shape
    log_y = np.log(y)

    beam = [([], 0)]
    for t in range(T):  # for every timestep
        new_beam = []
        for prefix, score in beam:
            for i in range(V):  # for every state
                new_prefix = prefix + [i]
                new_score = score + log_y[t, i]
                new_beam.append((new_prefix, new_score))

        # top beam_size
        new_beam.sort(key=lambda x: x[1], reverse=True)
        beam = new_beam[:beam_size]

    return beam

np.random.seed(1111)
y = softmax(np.random.random([20, 6]))
beam = beam_decode(y, beam_size=100)
for string, score in beam[:20]:
    print(remove_blank(string), score)
```

## 3.3 Prefix beam search

One problem with plain beam search is that several of the saved top-N paths may actually be the same final result (after merging repeats and removing blanks), which reduces the diversity of the search results. Prefix beam search instead merges identical prefixes during the search [2]. Following this [gist](https://gist.github.com/awni/56369a90d03953e370f3964c826ed4b0), a prefix beam search is implemented as follows:

```
from collections import defaultdict

def prefix_beam_decode(y, beam_size=10, blank=0):
    T, V = y.shape
    log_y = np.log(y)

    beam = [(tuple(), (0, ninf))]  # blank, non-blank
    for t in range(T):  # for every timestep
        new_beam = defaultdict(lambda: (ninf, ninf))
        for prefix, (p_b, p_nb) in beam:
            for i in range(V):  # for every state
                p = log_y[t, i]
                if i == blank:  # propose a blank
                    new_p_b, new_p_nb = new_beam[prefix]
                    new_p_b = logsumexp(new_p_b, p_b + p, p_nb + p)
                    new_beam[prefix] = (new_p_b, new_p_nb)
                    continue
                else:  # extend with a non-blank
                    end_t = prefix[-1] if prefix else None

                    # extend current prefix
                    new_prefix = prefix + (i,)
                    new_p_b, new_p_nb = new_beam[new_prefix]
                    if i != end_t:
                        new_p_nb = logsumexp(new_p_nb, p_b + p, p_nb + p)
                    else:
                        new_p_nb = logsumexp(new_p_nb, p_b + p)
                    new_beam[new_prefix] = (new_p_b, new_p_nb)

                    # keep current prefix
                    if i == end_t:
                        new_p_b, new_p_nb = new_beam[prefix]
                        new_p_nb = logsumexp(new_p_nb, p_nb + p)
                        new_beam[prefix] = (new_p_b, new_p_nb)

        # top beam_size
        beam = sorted(new_beam.items(), key=lambda x: logsumexp(*x[1]), reverse=True)
        beam = beam[:beam_size]

    return beam

np.random.seed(1111)
y = softmax(np.random.random([20, 6]))
beam = prefix_beam_decode(y, beam_size=100)
for string, score in beam[:20]:
    print(remove_blank(string), score)
```

## 3.4 Other decoding methods

The above covers the basic decoding methods. In practice, the search can incorporate additional information to improve accuracy (e.g. adding language-model scores in speech recognition: [prefix search + language model](https://github.com/PaddlePaddle/DeepSpeech/blob/develop/decoders/decoders_deprecated.py)).

Essentially, CTC is only a training criterion. After training, $N_w$ outputs a sequence of probability distributions, exactly like a conventional model trained with a cross-entropy criterion. Therefore, with some modification, the standard decoders of a given application domain can also be used for CTC decoding. For example, in speech recognition tasks, using
CTC 训练的声学模型可以无缝融入原来的 WFST 的解码器中[5](e.g. 参见 [EESEN](https://github.com/srvk/eesen))。 此外,[1] 给出了一种利用 CTC 顶峰特点的启发式搜索方法。 # 4. 工具 [warp-ctc](https://github.com/baidu-research/warp-ctc) 是百度开源的基于 CPU 和 GPU 的高效并行实现。warp-ctc 自身提供 C 语言接口,对于流利的机器学习工具([torch](https://github.com/baidu-research/warp-ctc/tree/master/torch_binding)、 [pytorch](https://github.com/SeanNaren/deepspeech.pytorch) 和 [tensorflow](https://github.com/baidu-research/warp-ctc/tree/master/tensorflow_binding)、[chainer](https://github.com/jheymann85/chainer_ctc))都有相应的接口绑定。 [cudnn 7](https://developer.nvidia.com/cudnn) 以后开始提供 CTC 支持。 Tensorflow 也原生支持 [CTC loss](https://www.tensorflow.org/api_docs/python/tf/nn/ctc_loss),及 greedy 和 beam search 解码器。 # 小结 1. CTC 可以建模无对齐信息的多对多序列问题(输入长度不小于输出),如语音识别、连续字符识别 [3,4]。 2. CTC 不需要输入与输出的对齐信息,可以实现端到端的训练。 3. CTC 在 loss 的计算上,利用了整个 labels 序列的全局信息,某种意义上相对逐帧计算损失的方法,"更加区分性"。 # References 1. Graves et al. [Connectionist Temporal Classification : Labelling Unsegmented Sequence Data with Recurrent Neural Networks](ftp://ftp.idsia.ch/pub/juergen/icml2006.pdf). 2. Hannun et al. [First-Pass Large Vocabulary Continuous Speech Recognition using Bi-Directional Recurrent DNNs](https://arxiv.org/abs/1408.2873). 3. Graves et al. [Towards End-To-End Speech Recognition with Recurrent Neural Networks](http://jmlr.org/proceedings/papers/v32/graves14.pdf). 4. Liwicki et al. [A novel approach to on-line handwriting recognition based on bidirectional long short-term memory networks](https://www.cs.toronto.edu/~graves/icdar_2007.pdf). 5. Zenkel et al. [Comparison of Decoding Strategies for CTC Acoustic Models](https://arxiv.org/abs/1708.004469). 5. Huannun. [Sequence Modeling with CTC](https://distill.pub/2017/ctc/).
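The decoder cells above call `softmax`, `logsumexp`, and `ninf`, which are defined earlier in the notebook and not repeated in this section. A minimal version consistent with how they are used here (the originals may differ in detail) is:

```python
import numpy as np

ninf = -np.inf  # log-probability of an impossible event

def softmax(logits):
    # row-wise softmax with the usual max-subtraction for numerical stability
    m = np.max(logits, axis=1, keepdims=True)
    e = np.exp(logits - m)
    return e / np.sum(e, axis=1, keepdims=True)

def logsumexp(*args):
    # log(exp(a_1) + exp(a_2) + ...) computed stably; tolerates ninf operands
    res = args[0]
    for a in args[1:]:
        res = np.logaddexp(res, a)
    return res
```

`np.logaddexp` handles the pairwise log-domain addition, so `logsumexp(ninf, x)` correctly returns `x` rather than overflowing or producing NaN.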
# CS 224U Jupyter Notebook Tutorial

Lucy will demo this notebook during the Week 1 Python special session, but this notebook should also be workable on your own if you decide to do so. The contents of this notebook were tested on a MacBook computer. Feel free to stop by office hours w/ questions or ask on Piazza if you run into problems.

## Table of Contents

1. [Starting up](#start)
2. [Cells](#cells)
    1. [Code](#code)
    2. [Markdown](#markdown)
3. [Kernels](#kernels)
4. [Shortcuts](#shortcuts)
5. [Shutdown](#shutdown)
6. [Extra](#noness)
7. [More resources](#res)

## Starting up <a id="start"></a>

This tutorial assumes that you have followed the [course setup](https://nbviewer.jupyter.org/github/cgpotts/cs224u/blob/master/setup.ipynb) instructions. This means Jupyter is installed using Conda.

1. Open up Terminal (Mac/Linux) or Command Prompt (Windows).
2. Enter a directory that you'd like to have as your `Home`, e.g. where your cloned `cs224u` Github repo resides.
3. Type `jupyter notebook` and enter. After a few moments, a new browser window should open, listing the contents of your `Home` directory.
    - Note that on your screen, you'll see something like `[I 17:23:47.479 NotebookApp] The Jupyter Notebook is running at: http://localhost:8888/`. This tells you where your notebook is located. So if you were to accidentally close the window, you can open it again while your server is running. For this example, navigating to `http://localhost:8888/` on your favorite web browser should open it up again.
    - You may also specify a port number, e.g. `jupyter notebook --port 5656`. In this case, `http://localhost:5656/` is where your directory resides.
4. Click on a notebook with `.ipynb` extension to open it. If you want to create a new notebook, in the top right corner, click on `New` and under `Notebooks`, click on `Python`. If you have multiple environments, you should choose the one you want, e.g. `Python [nlu]`.
    - You can rename your notebook by clicking on its name (originally "Untitled") at the top of the notebook and modifying it.
    - Files with the `.ipynb` extension are formatted as JSON, so if you open them in vim, emacs, or a code editor, they are much harder to read and edit.

Jupyter Notebooks allow for **interactive computing**.

## Cells <a id="cells"></a>

Cells help you organize your work into manageable chunks. The top of your notebook contains a row of buttons. If you hover over them, the tooltips explain what each one is for: saving, inserting a new cell, cut/copy/paste cells, moving cells up/down, running/stopping a cell, choosing cell types, etc. Under Edit, Insert, and Cell in the toolbar, there are more cell-related options.

Notice how the bar on the left of the cell changes color depending on whether you're in edit mode or command mode. This is useful for knowing when certain keyboard shortcuts apply (discussed later).

There are three main types of cells: **code**, **markdown**, and raw. Raw cells are less common than the other two, and you don't need to understand them to get going for cs224u. If you put anything in this type of cell, you can't run it. They are used for situations where you might want to convert your notebook to HTML or LaTeX using the `nbconvert` tool or File -> Download as a format that isn't `.ipynb`. Read more about raw cells [here](https://nbsphinx.readthedocs.io/en/0.4.2/raw-cells.html) if you're curious.

### Code <a id="code"></a>

Use the following code cells to explore various operations. Typically it's good practice to put import statements in the first cell or at least in their own cell.

The square brackets next to the cell indicate the order in which you run cells. If there is an asterisk, it means the cell is currently running. The output of a cell is usually any print statements in the cell and the value of the last line in the cell.
```
import time
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

print("cats")

# run this cell and notice how both strings appear as outputs
"cheese"

# cut/copy and paste this cell

# move this cell up and down

# run this cell
# toggle the output
# toggle scrolling to make long output smaller
# clear the output
for i in range(100):
    print("cats")

# run this cell and stop before it finishes
# stop acts like a KeyboardInterrupt
for i in range(100):
    time.sleep(1)  # make loop run slowly
    print("cats")

# running this cell leads to no output
def function1():
    print("dogs")

# put cursor in front of this comment and split and merge this cell.

def function2():
    print("cheese")

function1()
function2()
```

One difference between coding a Python script and a notebook is how you can run code "out of order" for the latter. This means you should be careful about variable reuse. It is good practice to order cells in the order which you expect someone to use the notebook, and organize code in ways that prevent problems from happening.

Clearing the output doesn't remove the old variable value. In the example below, we need to rerun cell A to start with a new `a`. If we don't keep track of how many times we've run cell B or cell C, we might encounter unexpected bugs.

```
# Cell A
a = []

# Cell B
# try running this cell multiple times to add more pineapple
a.append('pineapple')

# Cell C
# try running this cell multiple times to add more cake
a.append('cake')

# depending on the number of times you ran
# cells B and C, the output of this cell will
# be different.
a
```

Even deleting cell D's code after running it doesn't remove list `b` from this notebook. This means if you are modifying code, whatever outputs you had from old code may still remain in the background of your notebook.
```
# Cell D
# run this cell, delete/erase it, and run the empty cell
b = ['apple pie']

# b still exists after cell C is gone
b
```

Restart the kernel (Kernel -> Restart & Clear Output) to start anew. To check that things run okay in their intended order, restart and run everything (Kernel -> Restart & Run All). This is especially good to do before sharing your notebook with someone else.

Jupyter notebooks are handy for telling stories using your code. You can view dataframes and plots directly under each code cell.

```
# dataframe example
d = {'ingredient': ['flour', 'sugar'],
     '# of cups': [3, 4],
     'purchase date': ['April 1', 'April 4']}
df = pd.DataFrame(data=d)
df

# plot example
plt.title("pineapple locations")
plt.ylabel('latitude')
plt.xlabel('longitude')
_ = plt.scatter(np.random.randn(5), np.random.randn(5))
```

### Markdown <a id="markdown"></a>

The other type of cell is Markdown, which allows you to write blocks of text in your notebook. Double click on any Markdown cell to view/edit it. Don't worry if you don't remember all of these things right away. You'll write more code than Markdown essays for cs224u, but the following are handy things to be aware of.

You may notice that this cell's header is prefixed with `###`. The fewer hashtags, the larger the header. You can go up to five hashtags for the smallest level header.

Here is a table. You can emphasize text using underscores or asterisks. You can also include links.

| Markdown | Outcome |
| ----------------------------- | ---------------------------- |
| `_italics_ or *italics*` | _italics_ or *italics* |
| `__bold__ or **bold**` | __bold__ or **bold** |
| `[link](http://web.stanford.edu/class/cs224u/)` | [link](http://web.stanford.edu/class/cs224u/) |
| `[jump to Cells section](#cells)` | [jump to Cells section](#cells) |

Try removing/adding the `python` in the code formatting below to toggle code coloring.

```python
if text == code:
    print("You can write code between a pair of triple backquotes, e.g. ```long text``` or `short text`")
```

Latex also works: $y = \int_0^1 2x dx$

$$y = x^2 + x^3$$

> You can also format quotes by putting a ">" in front of each line.
>
> You can space your lines apart with ">" followed by no text.

There are three different ways to write a bullet list (asterisk, dash, plus):

* sugar
* tea
    * earl gray
    * english breakfast
- cats
    - persian
- dogs
+ pineapple
+ apple
    + granny smith

Example of a numbered list:

1. tokens
2. vectors
3. relations

You can also insert images: `![alt-text](./fig/nli-rnn-chained.png "Title")` (Try removing the backquotes and look at what happens.)

A line of dashes, e.g. `----------------`, becomes a divider.

------------------

## Kernels <a id="kernels"></a>

A kernel executes code in a notebook. You may have multiple conda environments on your computer. You can change which environment your notebook is using by going to Kernel -> Change kernel.

When you open a notebook, you may get a message that looks something like "Kernel not found. I couldn't find a kernel matching ____. Please select a kernel." This just means you need to choose the version of Python or environment that you want to have for your notebook. If you have difficulty getting your conda environment to show up as a kernel, [this](https://stackoverflow.com/questions/39604271/conda-environments-not-showing-up-in-jupyter-notebook) may help.

In our class we will be using IPython notebooks, which means the code cells run Python. Fun fact: there are also kernels for other languages, e.g. Julia. This means you can create notebooks in these other languages as well, if you have them on your computer.

## Shortcuts <a id="shortcuts"></a>

Go to Help -> Keyboard Shortcuts to view the shortcuts you may use in Jupyter Notebook.
Here are a few that I find useful on a regular basis:

- **run** a cell, select below: shift + enter
- **save** and checkpoint: command + S (just like other file types)
- enter **edit** mode from command mode: press enter
- enter **command** mode from edit mode: esc
- **delete** a cell (command mode): select a cell and press D
- **dedent** while editing: command + [
- **indent** while editing: command + ]

```
# play around with this cell with shortcuts
# delete this cell
# Edit -> Undo Delete Cells
for i in range(10):
    print("jelly beans")
```

## Shutdown <a id="shutdown"></a>

Notice that when you are done working and exit out of this notebook's window, the notebook icon in the home directory listing next to this notebook is green. This means your kernel is still running. If you want to shut it down, check the box next to your notebook in the directory and click "Shutdown."

To shutdown the jupyter notebook app as a whole, use Control-C in Terminal to stop the server and shut down all kernels.

## Extra <a id="noness"></a>

These are some extra things that aren't top priority to know but may be interesting.

**Checkpoints**

When you create a notebook, a checkpoint file is also saved in a hidden directory called `.ipynb_checkpoints`. Every time you manually save the notebook, the checkpoint file updates. Jupyter autosaves your work on occasion, which only updates the `.ipynb` file but not the checkpoint. You can revert back to the latest checkpoint using File -> Revert to Checkpoint.

**NbViewer**

We use this in our class for viewing jupyter notebooks from our course website. It allows you to render notebooks on the Internet. Check it out [here](https://nbviewer.jupyter.org/).

View -> **Cell toolbar**

- **Edit Metadata**: Modify the metadata of a cell by editing its json representation. Examples of metadata: whether cell output should be collapsed, whether it should be scrolled, deletability of cell, name, and tags.
- **Slideshow**: For turning your notebook into a presentation. This means different cells fall under slide types, e.g. Notes, Skip, Slide.

## More resources <a id="res"></a>

If you click on "Help" in the toolbar, there is a list of references for common Python tools, e.g. numpy, pandas.

[IPython website](https://ipython.org/)

[Markdown basics](https://daringfireball.net/projects/markdown/)

[Jupyter Notebook Documentation](https://jupyter-notebook.readthedocs.io/en/stable/index.html)

[Real Python Jupyter Tutorial](https://realpython.com/jupyter-notebook-introduction/)

[Dataquest Jupyter Notebook Tutorial](https://www.dataquest.io/blog/jupyter-notebook-tutorial/)

[Stack Overflow](https://stackoverflow.com/)
# Session Recording: Operationalizing Machine Learning models with Azure ML and Power BI

> I presented this session at Power Break on March 22, 2021. Topics covered - limitations of using Python query in Power BI, importance of MLOps, deploying ML models in Azure ML and consuming them in Power BI

- toc: true
- badges: true
- comments: true
- categories: [Power BI, azureml, machinelearning, python, mlops, powerbi]
- hide: false

## Machine Learning in Power BI

Power BI and Azure ML have native integration with each other, which means not only can you consume the deployed model in Power BI but also use the resources/tools in Azure ML to manage the MLOps process. There is a misconception that you need Power BI Premium to use Azure ML. In this session I show that's not the case. Below are the topics covered:

- Limitations of using Python or R in Power BI
    - Formula firewall
    - Privacy setting of the datasource must be Public
    - 30 min query timeout
    - Have to use personal gateway
    - Cannot use enhanced metadata
    - Dependency management
    - Not scaleable
    - No MLOps
- Steps to access Azure ML models in Power BI
    - Give Azure ML access to Power BI tenant in IAM
    - Use scoring script with input/output schema

![aml](https://docs.microsoft.com/en-us/azure/machine-learning/media/concept-azure-machine-learning-architecture/architecture.svg)

#### Topics Covered:

- Invoking Azure ML models in Power Query in Desktop and using dataflow in service
- Thoughts on batch-scoring
- Other considerations:
    - Azure ML models can only be called for real-time inference.
    - Use ACI for small models and AKS for large models that require GPU, scaleable compute, low latency, high availability
    - Real-time inferencing is expensive! Use batch scoring if real-time is not needed
    - Incremental refreshes in Power BI will work with models invoked from Azure ML but watch out for (query) performance
    - With Azure Arc, you can use your own Kubernetes service! Deploy the model in Azure ML but use your own K8s (AWS, GCP etc.)
    - Premium Per User gives you access to AutoML. If you have access to PPU, it may be worth first experimenting with AutoML before using an Azure ML model to save cost. PPU AutoML doesn't offer the same level of MLOps capabilities though.

>youtube: https://youtu.be/oLdMFJIxWDo

- **References:**
    - https://docs.microsoft.com/en-us/power-bi/connect-data/service-aml-integrate
    - https://pawarbi.github.io/blog/powerbi/r/python/2020/05/15/powerbi-python-r-tips.html
    - https://azure.microsoft.com/en-us/blog/innovate-across-hybrid-and-multicloud-with-new-azure-arc-capabilities/
    - https://docs.microsoft.com/en-us/power-bi/connect-data/desktop-python-scripts
    - https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-power-bi-custom-model
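The "scoring script with input/output schema" step mentioned above follows Azure ML's standard entry-script pattern: an `init()` that loads the model once per container, and a `run()` that receives a JSON payload per request. The sketch below is a hedged, stdlib-only stand-in (the scoring logic and column names are hypothetical; a real deployment would load the registered model from `AZUREML_MODEL_DIR` and decorate `run` with the `inference-schema` package's `@input_schema`/`@output_schema` so Power BI can discover the expected columns):

```python
import json

model = None  # loaded once per container in init()

def init():
    # In a real Azure ML deployment this would load the registered model, e.g.
    # joblib.load(os.path.join(os.environ["AZUREML_MODEL_DIR"], "model.pkl")).
    # A stand-in "model" keeps the sketch self-contained and runnable.
    global model
    model = lambda rows: [sum(r.values()) for r in rows]  # hypothetical scorer

def run(raw_data):
    # Azure ML passes the request body as a JSON string, conventionally of the
    # form {"data": [{"feature_1": ..., "feature_2": ...}, ...]}
    rows = json.loads(raw_data)["data"]
    predictions = model(rows)
    return json.dumps({"result": predictions})
```

With the schema decorators in place, the deployed endpoint advertises its input/output shape, which is what lets Power BI's Azure ML integration surface the model as a callable function in Power Query.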
# Evaluate 3

Training convergence figures.

```
import pickle
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
from sys_simulator.general import load_with_pickle, sns_confidence_interval_plot
from copy import deepcopy
import os

EXP_NAME = 'evaluate3'

# ddpg
ALGO_NAME = 'ddpg'
filepath = "/home/lucas/dev/sys-simulator-2/data/ddpg/evaluate5/20210522-204910/log.pickle"

# dql
# ALGO_NAME = 'dql'
# filepath = "/home/lucas/dev/sys-simulator-2/data/dql/evaluate2/20210524-232333/log.pickle"

# a2c
# ALGO_NAME = 'a2c'
# filepath = "/home/lucas/dev/sys-simulator-2/data/a2c/evaluate2/20210524-224905/log.pickle"

# output path
OUTPUT_PATH = f'/home/lucas/dev/sys-simulator-2/figs/{EXP_NAME}/{ALGO_NAME}'

file = open(filepath, 'rb')
data = pickle.load(file)
file.close()

mue_sinrs = np.array(data['mue_sinrs'])
mue_sinrs.shape

d2d_sinrs = np.array(data['d2d_sinrs'])
d2d_sinrs.shape

mue_tx_powers = np.array(data['mue_tx_powers'])
mue_tx_powers.shape

d2d_tx_powers = np.array(data['d2d_tx_powers'])
d2d_tx_powers.shape

d2d_tx_powers[0]

data['trajectories'].keys()

data['trajectories']['DUE.TX:1'][-10:]

mue_availability = np.array(data['mue_availability'])
mue_availability.shape
```

## Fonts config

```
x_font = {
    'family': 'serif',
    'color': 'black',
    'weight': 'normal',
    'size': 16,
}
y_font = {
    'family': 'serif',
    'color': 'black',
    'weight': 'normal',
    'size': 16,
}
ticks_font = {
    'fontfamily': 'serif',
    'fontsize': 13
}
legends_font = {
    'size': 13,
    'family': 'serif'
}
```

## MUE SINR

```
x = list(range(mue_sinrs.shape[0]))
plt.figure(figsize=(10, 7))
sns.lineplot(x=x, y=mue_sinrs.reshape(-1))
plt.xlabel('Steps', fontdict=x_font)
plt.ylabel('MUE SINR [dB]', fontdict=y_font)
plt.xticks(**ticks_font)
plt.yticks(**ticks_font)
fig_name = 'mue-sinr'
svg_path = f'{OUTPUT_PATH}/{fig_name}.svg'
eps_path = f'{OUTPUT_PATH}/{fig_name}.eps'
print(svg_path)
# save fig
plt.savefig(svg_path)
os.system(f'magick convert {svg_path} {eps_path}')
plt.show()
```

## MUE Tx Power

```
plt.figure(figsize=(10, 7))
sns.lineplot(x=x, y=mue_tx_powers.reshape(-1))
plt.xlabel('Steps', fontdict=x_font)
plt.ylabel('MUE Tx Power [dB]', fontdict=y_font)
plt.xticks(**ticks_font)
plt.yticks(**ticks_font)
fig_name = 'mue-tx-power'
svg_path = f'{OUTPUT_PATH}/{fig_name}.svg'
eps_path = f'{OUTPUT_PATH}/{fig_name}.eps'
print(svg_path)
# save fig
plt.savefig(svg_path)
os.system(f'magick convert {svg_path} {eps_path}')
plt.show()
```

## D2D SINR

```
plt.figure(figsize=(10, 7))
sns.lineplot(x=x, y=d2d_sinrs[:, 0].reshape(-1), label='Device 1')
sns.lineplot(x=x, y=d2d_sinrs[:, 1].reshape(-1), label='Device 2')
plt.xlabel('Steps', fontdict=x_font)
plt.ylabel('D2D SINR [dB]', fontdict=y_font)
plt.xticks(**ticks_font)
plt.yticks(**ticks_font)
plt.legend(prop=legends_font)
fig_name = 'd2d-sinr'
svg_path = f'{OUTPUT_PATH}/{fig_name}.svg'
eps_path = f'{OUTPUT_PATH}/{fig_name}.eps'
print(svg_path)
# save fig
plt.savefig(svg_path)
os.system(f'magick convert {svg_path} {eps_path}')
plt.show()
```

## D2D Tx Power

```
plt.figure(figsize=(10, 7))
sns.lineplot(x=x, y=d2d_tx_powers[:, 0].reshape(-1), label='Device 1')
sns.lineplot(x=x, y=d2d_tx_powers[:, 1].reshape(-1), label='Device 2')
plt.xlabel('Steps', fontdict=x_font)
plt.ylabel('D2D Tx Power [dB]', fontdict=y_font)
plt.xticks(**ticks_font)
plt.yticks(**ticks_font)
plt.legend(prop=legends_font)
fig_name = 'd2d-tx-power'
svg_path = f'{OUTPUT_PATH}/{fig_name}.svg'
eps_path = f'{OUTPUT_PATH}/{fig_name}.eps'
print(svg_path)
# save fig
plt.savefig(svg_path)
os.system(f'magick convert {svg_path} {eps_path}')
plt.show()
```

## MUE availability

```
plt.figure(figsize=(10, 7))
sns.lineplot(x=x, y=mue_availability.reshape(-1))
plt.xlabel('Steps', fontdict=x_font)
plt.ylabel('MUE availability', fontdict=y_font)
plt.xticks(**ticks_font)
plt.yticks([0.0, 1.0], **ticks_font)
fig_name = 'mue-availability'
svg_path = f'{OUTPUT_PATH}/{fig_name}.svg'
eps_path = f'{OUTPUT_PATH}/{fig_name}.eps'
print(svg_path)
# save fig
plt.savefig(svg_path)
os.system(f'magick convert {svg_path} {eps_path}')
plt.show()
```

## Avg MUE availability

```
mue_availability.mean()
```
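The per-figure save logic above (build `svg_path`/`eps_path`, `savefig`, convert with ImageMagick) is repeated in every plotting cell; it could be factored into a small helper. A sketch, assuming `magick` is available on the PATH and that the caller passes in the figure object:

```python
import os

def export_figure(fig, output_path, fig_name):
    """Save a figure as SVG, then convert it to EPS with ImageMagick,
    mirroring the svg_path/eps_path pattern used in the cells above."""
    svg_path = os.path.join(output_path, f"{fig_name}.svg")
    eps_path = os.path.join(output_path, f"{fig_name}.eps")
    fig.savefig(svg_path)
    # a non-zero exit status here simply means ImageMagick is unavailable
    os.system(f"magick convert {svg_path} {eps_path}")
    return svg_path, eps_path
```

Each plotting cell would then reduce to `export_figure(plt.gcf(), OUTPUT_PATH, 'mue-sinr')` just before `plt.show()`.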
# REGRESSION From Scratch With BOSTON HOUSE PRICE PREDICTION

```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in

import numpy as np  # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
from plotnine import ggplot, aes, geom_line, geom_point, geom_histogram

import warnings
warnings.filterwarnings('ignore')

# Importing useful classes from the sklearn library.
from sklearn.preprocessing import StandardScaler  # For scaling the dataset
from sklearn.model_selection import train_test_split  # For splitting the dataset
from sklearn.linear_model import LinearRegression  # For linear regression
from xgboost import XGBRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn import metrics
from sklearn.decomposition import PCA

# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os

# Importing the dataset.
names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
df = pd.read_csv('housing.csv', delim_whitespace=True, names=names)
display(df.head())
print(df.shape)

df.info()

# Plotting the heatmap of correlation between features
corr = df.corr()
plt.figure(figsize=(20, 20))
sns.heatmap(corr, cbar=True, square=True, fmt='.1f', annot=True, annot_kws={'size': 15}, cmap='Greens')
```

# ggplot2 style plotting

```
ggplot(df, aes(x=df['CRIM'], y=df['RAD'])) + geom_point()

ggplot(df, aes(x=df['CRIM'])) + geom_histogram(colour="black", fill="red", bins=10)

# Let us create the feature matrix and target vector.
x_train = df.iloc[:, :-1].values
y_train = df.iloc[:, -1].values

test_size = 0.3
X_train, X_test, y_train, y_test = train_test_split(x_train, y_train, test_size=test_size, random_state=12)
```

#### Scaling of the feature matrix

```
sc_X = StandardScaler()
x_train = sc_X.fit_transform(x_train)

pca = PCA(n_components=None)
x_train = pca.fit_transform(x_train)
explained_variance = pca.explained_variance_ratio_
explained_variance

print(f"The sum of the first 5 values is \t {0.47+0.11+0.09+0.06+0.06}, which is very good.")
print("So we will choose 5 features and reduce our training feature matrix to 5 features/columns.")

pca = PCA(n_components=5)
x_train = pca.fit_transform(x_train)

# Note: the models below are trained on the X_train/X_test split created above;
# the scaled, PCA-reduced x_train computed here is not reused further down.

# Create an XGBoost regressor
reg = XGBRegressor()

# Train the model using the training sets
reg.fit(X_train, y_train)

# Model prediction on train data
y_pred = reg.predict(X_train)
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_train, y_pred)))

# Predicting test data with the model
y_test_pred = reg.predict(X_test)
acc_xgb = metrics.r2_score(y_test, y_test_pred)
print('R^2:', acc_xgb)

# Create a linear regressor
lm = LinearRegression()

# Train the model using the training sets
lm.fit(X_train, y_train)

# Value of y intercept
lm.intercept_

# Model prediction on train data
y_pred = lm.predict(X_train)

# Predicting test data with the model
y_test_pred = lm.predict(X_test)

# Model evaluation
acc_linreg = metrics.r2_score(y_test, y_test_pred)
print('R^2:', acc_linreg)
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_train, y_pred)))

# Create a random forest regressor
reg = RandomForestRegressor()

# Train the model using the training sets
reg.fit(X_train, y_train)

# Model prediction on train data
y_pred = reg.predict(X_train)

# Predicting test data with the model
y_test_pred = reg.predict(X_test)

# Model evaluation
acc_rf = metrics.r2_score(y_test, y_test_pred)
print('R^2:', acc_rf)
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_train, y_pred)))

models = pd.DataFrame({
    'Model': ['Linear Regression', 'Random Forest', 'XGBoost'],
    'R-squared Score': [acc_linreg*100, acc_rf*100, acc_xgb*100]})
models.sort_values(by='R-squared Score', ascending=False)
```
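The two metrics reported throughout the notebook can be made concrete with a by-hand computation (a quick sanity check; `metrics.mean_squared_error` and `metrics.r2_score` compute the same quantities):

```python
import numpy as np

def rmse(y_true, y_pred):
    # root mean squared error: square root of the average squared residual
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    # coefficient of determination: 1 - SS_res / SS_tot
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

Predicting the mean of `y_true` for every sample gives an R² of exactly 0, and a perfect fit gives 1, which is why the model comparison table above sorts on R².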
``` import re import numpy as np import pandas as pd from pprint import pprint # Gensim import gensim import gensim.corpora as corpora from gensim.utils import simple_preprocess from gensim.models import CoherenceModel # spacy for lemmatization import spacy # Plotting tools !pip install pyLDAvis import pyLDAvis import pyLDAvis.gensim # don't skip this import matplotlib.pyplot as plt %matplotlib inline # Enable logging for gensim - optional import logging logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.ERROR) import warnings warnings.filterwarnings("ignore",category=DeprecationWarning) import pandas as pd df=pd.read_csv('BeforeLD.csv',keep_default_na=False) df.isnull().sum() df.head(5) hash_df=dict(Counter(df['hashtags'].to_list())) top_hash_df=pd.DataFrame(list(hash_df.items()),columns = ['word','count']).sort_values('count',ascending=False)[:20] top_hash_df.head(4) fig = go.Figure(go.Bar( x=top_hash_df['word'],y=top_hash_df['count'], marker={'color': top_hash_df['count'], 'colorscale': 'blues'}, text=top_hash_df['count'], textposition = "outside", )) fig.update_layout(title_text='Top Trended Hastags',xaxis_title="Hashtags", yaxis_title="Number of Tags",template="plotly_dark",height=700,title_x=0.5) fig.show() type(df) data=df.clean_text.values.tolist() def sent_to_words(sentences): for sentence in sentences: yield(gensim.utils.simple_preprocess(str(sentence), deacc=True)) # deacc=True removes punctuations data_words = list(sent_to_words(data)) data_words # NLTK Stop words import nltk nltk.download('stopwords') from nltk.corpus import stopwords stop_words = stopwords.words('english') #stop_words.extend(['nan', 'loans', 'loan', 'debt', 'use','student','char','students','000','also']) import matplotlib.pyplot as plt from sklearn.feature_extraction.text import CountVectorizer from textblob import TextBlob def get_top_n_words(n_top_words, count_vectorizer, text_data): ''' returns a tuple of the top n words in a sample and their 
```python
def get_top_n_words(n_top_words, count_vectorizer, text_data):
    '''
    returns a tuple of the top n words in a sample and their
    accompanying counts, given a CountVectorizer object and text sample
    '''
    vectorized_headlines = count_vectorizer.fit_transform(text_data.values.astype('U'))
    vectorized_total = np.sum(vectorized_headlines, axis=0)
    word_indices = np.flip(np.argsort(vectorized_total)[0, :], 1)
    word_values = np.flip(np.sort(vectorized_total)[0, :], 1)
    word_vectors = np.zeros((n_top_words, vectorized_headlines.shape[1]))
    for i in range(n_top_words):
        word_vectors[i, word_indices[0, i]] = 1
    words = [word[0].encode('ascii').decode('utf-8')
             for word in count_vectorizer.inverse_transform(word_vectors)]
    return (words, word_values[0, :n_top_words].tolist()[0])

count_vectorizer = CountVectorizer(stop_words=stop_words)
words, word_values = get_top_n_words(n_top_words=15,
                                     count_vectorizer=count_vectorizer,
                                     text_data=df.clean_text)

fig, ax = plt.subplots(figsize=(16, 8))
ax.bar(range(len(words)), word_values)
ax.set_xticks(range(len(words)))
ax.set_xticklabels(words, rotation='vertical')
ax.set_title('Top words in headlines dataset (excluding stop words)')
ax.set_xlabel('Word')
ax.set_ylabel('Number of occurrences')
plt.show()
```

```python
from wordcloud import WordCloud

# Generate a WordCloud for the clean_text column in the data
comment_words = ''
for val in df.clean_text:
    val = str(val)
    tokens = [token.lower() for token in val.split()]
    comment_words += " ".join(tokens) + " "

wordcloud = WordCloud(width=800, height=800,
                      background_color='white',
                      stopwords=stop_words,
                      min_font_size=10).generate(comment_words)

# plot the WordCloud image
plt.figure(figsize=(8, 8), facecolor=None)
plt.imshow(wordcloud)
plt.axis("off")
plt.tight_layout(pad=0)
plt.show()
```

```python
def ngram_df(corpus, nrange, n=None):
    '''returns a DataFrame of the n most frequent n-grams in a corpus'''
    vec = CountVectorizer(stop_words='english', ngram_range=nrange).fit(corpus)
    bag_of_words = vec.transform(corpus)
    sum_words = bag_of_words.sum(axis=0)
    words_freq = [(word, sum_words[0, idx]) for word, idx in vec.vocabulary_.items()]
    words_freq = sorted(words_freq, key=lambda x: x[1], reverse=True)
    total_list = words_freq[:n]
    return pd.DataFrame(total_list, columns=['text', 'count'])

unigram_df = ngram_df(df['clean_text'], (1, 1), 20)
bigram_df = ngram_df(df['clean_text'], (2, 2), 20)
trigram_df = ngram_df(df['clean_text'], (3, 3), 20)

# plots
import matplotlib.pyplot as plt
import plotly.graph_objects as go
from plotly.subplots import make_subplots

fig = make_subplots(rows=3, cols=1,
                    subplot_titles=("Unigram", "Bigram", "Trigram"),
                    specs=[[{"type": "scatter"}],
                           [{"type": "scatter"}],
                           [{"type": "scatter"}]])
fig.add_trace(go.Bar(y=unigram_df['text'][::-1], x=unigram_df['count'][::-1],
                     marker={'color': "blue"}, text=unigram_df['count'],
                     textposition="outside", orientation="h"), row=1, col=1)
fig.add_trace(go.Bar(y=bigram_df['text'][::-1], x=bigram_df['count'][::-1],
                     marker={'color': "blue"}, text=bigram_df['count'],
                     textposition="outside", orientation="h"), row=2, col=1)
fig.add_trace(go.Bar(y=trigram_df['text'][::-1], x=trigram_df['count'][::-1],
                     marker={'color': "blue"}, text=trigram_df['count'],
                     textposition="outside", orientation="h"), row=3, col=1)
fig.update_xaxes(showline=True, linewidth=2, linecolor='black', mirror=True)
fig.update_yaxes(showline=True, linewidth=2, linecolor='black', mirror=True)
fig.update_layout(title_text='Top N Grams', xaxis_title=" ", yaxis_title=" ",
                  showlegend=False, title_x=0.5, height=1200, template="plotly_dark")
fig.show()
```

```python
# Generate a WordCloud for the body column in the data
comment_words = ''
for val in df.body:
    val = str(val)
    tokens = [token.lower() for token in val.split()]
    comment_words += " ".join(tokens) + " "

wordcloud = WordCloud(width=800, height=800,
                      background_color='white',
                      stopwords=stop_words,
                      min_font_size=10).generate(comment_words)

# plot the WordCloud image
plt.figure(figsize=(8, 8), facecolor=None)
plt.imshow(wordcloud)
plt.axis("off")
plt.tight_layout(pad=0)
plt.show()
```

```python
train_data = df['clean_text'].to_frame()
print(train_data)

# Preprocessing: convert each string to a numerical vector
# using CountVectorizer (feature construction)
count_vectorizer = CountVectorizer(stop_words=stop_words, max_features=40000)
text_sample = train_data.sample(n=50, random_state=0).values.astype('U')
print(text_sample)
print('Body before vectorization: {}'.format(text_sample[1]))
document_term_matrix = count_vectorizer.fit_transform(text_sample.ravel())
print('Body after vectorization: \n{}'.format(document_term_matrix[1]))

# Clustering algorithm (Latent Semantic Analysis or Latent Dirichlet Allocation)
# number of topic categories
n_topics = 3

from sklearn.decomposition import TruncatedSVD
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.manifold import TSNE

lda_model = LatentDirichletAllocation(n_components=n_topics,
                                      learning_method='online',
                                      random_state=0, verbose=0)
lda_topic_matrix = lda_model.fit_transform(document_term_matrix)
```

Taking the argmax of each headline in this topic matrix will give the predicted topic of each headline in the sample. We can then sort these into counts of each topic.
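The argmax-then-count step described above can be checked on a tiny synthetic document-topic matrix (the numbers below are made up purely for illustration):

```python
import numpy as np
from collections import Counter

# Hypothetical document-topic matrix: 4 documents x 3 topics (made-up weights)
topic_matrix = np.array([[0.1, 0.7, 0.2],
                         [0.6, 0.3, 0.1],
                         [0.2, 0.2, 0.6],
                         [0.5, 0.4, 0.1]])

# Predicted topic of each document = column index of its largest weight
keys = topic_matrix.argmax(axis=1).tolist()
print(keys)                    # [1, 0, 2, 0]

# Counts per topic category
counts = Counter(keys)
print(sorted(counts.items()))  # [(0, 2), (1, 1), (2, 1)]
```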
```python
from collections import Counter

def get_keys(topic_matrix):
    '''returns an integer list of predicted topic categories for a given topic matrix'''
    keys = topic_matrix.argmax(axis=1).tolist()
    return keys

def keys_to_counts(keys):
    '''returns a tuple of topic categories and their accompanying magnitudes for a given list of keys'''
    count_pairs = Counter(keys).items()
    categories = [pair[0] for pair in count_pairs]
    counts = [pair[1] for pair in count_pairs]
    return (categories, counts)

lda_keys = get_keys(lda_topic_matrix)
lda_categories, lda_counts = keys_to_counts(lda_keys)
print(document_term_matrix)
```

The topic categories alone are not that helpful; in order to better characterize them, we find the most frequent words in each.

```python
def get_top_n_words(n, keys, document_term_matrix, count_vectorizer):
    '''
    returns a list of n_topics strings, where each string contains the n most
    common words in a predicted category, in order
    '''
    top_word_indices = []
    for topic in range(n_topics):
        # sum the term counts of all documents assigned to this topic
        temp_vector_sum = 0
        for i in range(len(keys)):
            if keys[i] == topic:
                temp_vector_sum += document_term_matrix[i]
        top_n_word_indices = np.flip(np.argsort(temp_vector_sum)[0][-n:], 0)
        top_word_indices.append(top_n_word_indices)
    top_words = []
    for topic in top_word_indices:
        topic_words = []
        for index in topic:
            temp_word_vector = np.zeros((1, document_term_matrix.shape[1]))
            temp_word_vector[:, index] = 1
            the_word = count_vectorizer.inverse_transform(temp_word_vector)[0][0]
            topic_words.append(the_word.encode('ascii').decode('utf-8'))
        top_words.append(" ".join(topic_words))
    return top_words

top_n_words_lda = get_top_n_words(10, lda_keys, document_term_matrix, count_vectorizer)
for i in range(len(top_n_words_lda)):
    print("Topic {}: ".format(i + 1), top_n_words_lda[i])
```

```python
# Create bigram and trigram models
bigram = gensim.models.Phrases(data_words, min_count=5, threshold=100)  # higher threshold, fewer phrases
```
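As an aside, with scikit-learn's LDA the top words per topic can also be read directly from the fitted model's `components_` matrix (topics x vocabulary terms), which avoids re-summing document rows. A minimal sketch on a made-up 2x5 weight matrix with a hypothetical vocabulary:

```python
import numpy as np

# Made-up topic-term weight matrix: 2 topics x 5 vocabulary terms
components = np.array([[0.1, 4.0, 0.3, 2.0, 0.2],
                       [3.0, 0.2, 1.5, 0.1, 0.4]])
feature_names = ['apple', 'ball', 'cat', 'dog', 'egg']  # hypothetical vocabulary

n = 2  # top words per topic
top_words = []
for weights in components:
    top = np.argsort(weights)[::-1][:n]  # indices of the n largest weights
    top_words.append(' '.join(feature_names[i] for i in top))
print(top_words)  # ['ball dog', 'apple cat']
```

With a real fitted model, `components` would be `lda_model.components_` and `feature_names` would come from the vectorizer's vocabulary.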
```python
trigram = gensim.models.Phrases(bigram[data_words], threshold=100)

# Faster way to get a sentence clubbed as a trigram/bigram
bigram_mod = gensim.models.phrases.Phraser(bigram)
trigram_mod = gensim.models.phrases.Phraser(trigram)

# See trigram example
print(trigram_mod[bigram_mod[data_words[0]]])

def process_words(texts, stop_words=stop_words,
                  allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
    """Remove stopwords, form bigrams and trigrams, and lemmatize"""
    texts = [[word for word in simple_preprocess(str(doc)) if word not in stop_words]
             for doc in texts]
    texts = [bigram_mod[doc] for doc in texts]
    texts = [trigram_mod[bigram_mod[doc]] for doc in texts]
    texts_out = []
    nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])
    for sent in texts:
        doc = nlp(" ".join(sent))
        texts_out.append([token.lemma_ for token in doc if token.pos_ in allowed_postags])
    # remove stopwords once more after lemmatization
    texts_out = [[word for word in simple_preprocess(str(doc)) if word not in stop_words]
                 for doc in texts_out]
    return texts_out

data_ready = process_words(data_words)  # processed text data
```
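The dictionary and `doc2bow` step used next map each tokenised document to sorted `(token_id, count)` pairs; that essential behavior can be sketched with the standard library alone (toy documents, no gensim required):

```python
from collections import Counter

docs = [['topic', 'model', 'topic'], ['model', 'data']]  # toy tokenised docs

# Build a token -> id mapping over the whole corpus
# (analogous to corpora.Dictionary)
vocab = {}
for doc in docs:
    for tok in doc:
        vocab.setdefault(tok, len(vocab))

# doc2bow-style representation: sorted (token_id, count) pairs per document
bow = [sorted((vocab[t], c) for t, c in Counter(doc).items()) for doc in docs]
print(bow)  # [[(0, 2), (1, 1)], [(1, 1), (2, 1)]]
```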
```python
# Create dictionary
id2word = corpora.Dictionary(data_ready)

# Term-document frequency
corpus = [id2word.doc2bow(text) for text in data_ready]

# View
print(corpus[:1])

# Human-readable format of corpus (term-frequency)
[[(id2word[id], freq) for id, freq in cp] for cp in corpus[:1]]

# Build LDA model
lda_model = gensim.models.ldamodel.LdaModel(corpus=corpus,
                                            id2word=id2word,
                                            num_topics=4,
                                            random_state=100,
                                            update_every=1,
                                            chunksize=10,
                                            passes=10,
                                            alpha='symmetric',
                                            iterations=100,
                                            per_word_topics=True)

# Print the keywords of the topics
pprint(lda_model.print_topics())
doc_lda = lda_model[corpus]
```

```python
# What is the dominant topic and its percentage contribution in each document?
def format_topics_sentences(ldamodel=None, corpus=corpus, texts=data):
    # Init output
    sent_topics_df = pd.DataFrame()
    # Get main topic in each document
    for i, row_list in enumerate(ldamodel[corpus]):
        row = row_list[0] if ldamodel.per_word_topics else row_list
        row = sorted(row, key=lambda x: (x[1]), reverse=True)
        # Get the dominant topic, percentage contribution and keywords for each document
        for j, (topic_num, prop_topic) in enumerate(row):
            if j == 0:  # => dominant topic
                wp = ldamodel.show_topic(topic_num)
                topic_keywords = ", ".join([word for word, prop in wp])
                sent_topics_df = sent_topics_df.append(
                    pd.Series([int(topic_num), round(prop_topic, 4), topic_keywords]),
                    ignore_index=True)
            else:
                break
    sent_topics_df.columns = ['Dominant_Topic', 'Perc_Contribution', 'Topic_Keywords']
    # Add original text to the end of the output
    contents = pd.Series(texts)
    sent_topics_df = pd.concat([sent_topics_df, contents], axis=1)
    return sent_topics_df

df_topic_sents_keywords = format_topics_sentences(ldamodel=lda_model,
                                                  corpus=corpus,
                                                  texts=data_ready)

# Format
df_dominant_topic = df_topic_sents_keywords.reset_index()
df_dominant_topic.columns = ['Document_No', 'Dominant_Topic',
                             'Topic_Perc_Contrib', 'Keywords', 'Text']
df_dominant_topic.head(10)
```

```python
# The most representative sentence for each topic
# Display setting to show more characters in column
pd.options.display.max_colwidth = 80

sent_topics_sorteddf_mallet = pd.DataFrame()
sent_topics_outdf_grpd = df_topic_sents_keywords.groupby('Dominant_Topic')
for i, grp in sent_topics_outdf_grpd:
    sent_topics_sorteddf_mallet = pd.concat(
        [sent_topics_sorteddf_mallet,
         grp.sort_values(['Perc_Contribution'], ascending=False).head(1)],
        axis=0)

# Reset index
sent_topics_sorteddf_mallet.reset_index(drop=True, inplace=True)

# Format
sent_topics_sorteddf_mallet.columns = ['Topic_Num', 'Topic_Perc_Contrib',
                                       'Keywords', 'Representative Text']

# Show
sent_topics_sorteddf_mallet.head(10)
```

```python
# Word clouds of top N keywords in each topic
from matplotlib import pyplot as plt
from wordcloud import WordCloud, STOPWORDS
import matplotlib.colors as mcolors

cols = [color for name, color in mcolors.TABLEAU_COLORS.items()]  # more colors: 'mcolors.XKCD_COLORS'

cloud = WordCloud(stopwords=stop_words,
                  background_color='white',
                  width=2500, height=1800,
                  max_words=10,
                  colormap='tab10',
                  color_func=lambda *args, **kwargs: cols[i],
                  prefer_horizontal=1.0)

topics = lda_model.show_topics(formatted=False)

fig, axes = plt.subplots(2, 2, figsize=(8, 7), sharex=True, sharey=True)
for i, ax in enumerate(axes.flatten()):
    fig.add_subplot(ax)
    topic_words = dict(topics[i][1])
    cloud.generate_from_frequencies(topic_words, max_font_size=300)
    plt.gca().imshow(cloud)
    plt.gca().set_title('Topic ' + str(i), fontdict=dict(size=16))
    plt.gca().axis('off')

plt.subplots_adjust(wspace=0, hspace=0)
plt.axis('off')
plt.margins(x=0, y=0)
plt.tight_layout()
plt.show()
```

```python
from collections import Counter

topics = lda_model.show_topics(formatted=False)
data_flat = [w for w_list in data_ready for w in w_list]
counter = Counter(data_flat)

out = []
for i, topic in topics:
    for word, weight in topic:
        out.append([word, i, weight, counter[word]])

df = pd.DataFrame(out, columns=['word', 'topic_id', 'importance', 'word_count'])

# Plot word count and weights of topic keywords
fig, axes = plt.subplots(2, 2, figsize=(10, 9), sharey=True, dpi=80)
cols = [color for name, color in mcolors.TABLEAU_COLORS.items()]
for i, ax in enumerate(axes.flatten()):
    ax.bar(x='word', height='word_count', data=df.loc[df.topic_id == i, :],
           color=cols[i], width=0.5, alpha=0.3, label='Word Count')
    ax_twin = ax.twinx()
    ax_twin.bar(x='word', height='importance', data=df.loc[df.topic_id == i, :],
                color=cols[i], width=0.2, label='Weights')
    ax.set_ylabel('Word Count', color=cols[i])
    ax_twin.set_ylim(0, 0.030)
    ax.set_ylim(0, 160)
    ax.set_title('Topic: ' + str(i), color=cols[i], fontsize=16)
    ax.tick_params(axis='y', left=False)
    ax.set_xticklabels(df.loc[df.topic_id == i, 'word'],
                       rotation=30, horizontalalignment='right')
    ax.legend(loc='upper left')
    ax_twin.legend(loc='upper right')

fig.tight_layout(w_pad=2)
fig.suptitle('Word Count and Importance of Topic Keywords', fontsize=22, y=1.05)
plt.show()
```

```python
# Sentence coloring of N documents
from matplotlib.patches import Rectangle

def sentences_chart(lda_model=lda_model, corpus=corpus, start=0, end=13):
    corp = corpus[start:end]
    mycolors = [color for name, color in mcolors.TABLEAU_COLORS.items()]

    fig, axes = plt.subplots(end - start, 1, figsize=(15, (end - start) * 0.75), dpi=60)
    axes[0].axis('off')
    for i, ax in enumerate(axes):
        if i > 0:
            corp_cur = corp[i - 1]
            topic_percs, wordid_topics, wordid_phivalues = lda_model[corp_cur]
            word_dominanttopic = [(lda_model.id2word[wd], topic[0])
                                  for wd, topic in wordid_topics]
            ax.text(0.01, 0.3, "Doc " + str(i - 1) + ": ",
                    verticalalignment='center', fontsize=12, color='black',
                    transform=ax.transAxes, fontweight=500)

            # Draw rectangle
            topic_percs_sorted = sorted(topic_percs, key=lambda x: (x[1]), reverse=True)
            ax.add_patch(Rectangle((0.0, 0.05), 0.99, 0.90, fill=None, alpha=1,
                                   color=mycolors[topic_percs_sorted[0][0]], linewidth=1))

            word_pos = 0.06
            for j, (word, topics) in enumerate(word_dominanttopic):
                if j < 14:
                    ax.text(word_pos, 0.5, word,
                            horizontalalignment='left', verticalalignment='center',
                            fontsize=12, color=mycolors[topics],
                            transform=ax.transAxes, fontweight=700)
                    word_pos += .009 * len(word)  # to move the word for the next iter
                    ax.axis('off')
            ax.text(word_pos, 0.5, '. . ',
                    horizontalalignment='left', verticalalignment='center',
                    fontsize=10, color='black', transform=ax.transAxes)

    plt.subplots_adjust(wspace=0, hspace=0)
    plt.suptitle('Sentence Topic Coloring for Documents: ' + str(start) + ' to ' + str(end - 2),
                 fontsize=15, y=0.95, fontweight=600)
    plt.tight_layout()
    plt.show()

sentences_chart()
```

```python
# Dominant topic and topic weights per document
def topics_per_document(model, corpus, start=0, end=1):
    corpus_sel = corpus[start:end]
    dominant_topics = []
    topic_percentages = []
    for i, corp in enumerate(corpus_sel):
        topic_percs, wordid_topics, wordid_phivalues = model[corp]
        dominant_topic = sorted(topic_percs, key=lambda x: x[1], reverse=True)[0][0]
        dominant_topics.append((i, dominant_topic))
        topic_percentages.append(topic_percs)
    return dominant_topics, topic_percentages

dominant_topics, topic_percentages = topics_per_document(model=lda_model,
                                                         corpus=corpus, end=-1)

# Distribution of dominant topics in each document
df = pd.DataFrame(dominant_topics, columns=['Document_Id', 'Dominant_Topic'])
dominant_topic_in_each_doc = df.groupby('Dominant_Topic').size()
df_dominant_topic_in_each_doc = dominant_topic_in_each_doc.to_frame(name='count').reset_index()

# Total topic distribution by actual weight
topic_weightage_by_doc = pd.DataFrame([dict(t) for t in topic_percentages])
df_topic_weightage_by_doc = topic_weightage_by_doc.sum().to_frame(name='count').reset_index()

# Top 3 keywords for each topic
topic_top3words = [(i, topic) for i, topics in lda_model.show_topics(formatted=False)
                   for j, (topic, wt) in enumerate(topics) if j < 3]
df_top3words_stacked = pd.DataFrame(topic_top3words, columns=['topic_id', 'words'])
df_top3words = df_top3words_stacked.groupby('topic_id').agg(', \n'.join)
df_top3words.reset_index(level=0, inplace=True)

from matplotlib.ticker import FuncFormatter

# Plot
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), dpi=120, sharey=True)

# Topic distribution by dominant topics
ax1.bar(x='Dominant_Topic', height='count', data=df_dominant_topic_in_each_doc,
        width=.5, color='firebrick')
ax1.set_xticks(range(len(df_dominant_topic_in_each_doc.Dominant_Topic.unique())))
tick_formatter = FuncFormatter(
    lambda x, pos: 'Topic ' + str(x) + '\n' +
    df_top3words.loc[df_top3words.topic_id == x, 'words'].values[0])
ax1.xaxis.set_major_formatter(tick_formatter)
ax1.set_title('Number of Documents by Dominant Topic', fontdict=dict(size=10))
ax1.set_ylabel('Number of Documents')
ax1.set_ylim(0, 1000)

# Topic distribution by topic weights
ax2.bar(x='index', height='count', data=df_topic_weightage_by_doc,
        width=.5, color='steelblue')
ax2.set_xticks(range(len(df_topic_weightage_by_doc.index.unique())))
ax2.xaxis.set_major_formatter(tick_formatter)
ax2.set_title('Number of Documents by Topic Weightage', fontdict=dict(size=10))
plt.show()
```

```python
# Interactive topic visualization
import pyLDAvis.gensim
pyLDAvis.enable_notebook()
vis = pyLDAvis.gensim.prepare(lda_model, corpus, dictionary=lda_model.id2word)
vis
```

```python
# Get topic weights and dominant topics
from sklearn.manifold import TSNE
from bokeh.plotting import figure, output_file, show
from bokeh.models import Label
from bokeh.io import output_notebook

# Get topic weights
topic_weights = []
for i, row_list in enumerate(lda_model[corpus]):
    topic_weights.append([w for i, w in row_list[0]])

# Array of topic weights
arr = pd.DataFrame(topic_weights).fillna(0).values

# Keep the well separated points (optional)
arr = arr[np.amax(arr, axis=1) > 0.35]

# Dominant topic number in each doc
topic_num = np.argmax(arr, axis=1)

# t-SNE dimension reduction
tsne_model = TSNE(n_components=2, verbose=1, random_state=0, angle=.99, init='pca')
tsne_lda = tsne_model.fit_transform(arr)

# Plot the topic clusters using Bokeh
output_notebook()
n_topics = 4
mycolors = np.array([color for name, color in mcolors.TABLEAU_COLORS.items()])
plot = figure(title="t-SNE Clustering of {} LDA Topics".format(n_topics),
              plot_width=300, plot_height=200)
plot.scatter(x=tsne_lda[:, 0], y=tsne_lda[:, 1], color=mycolors[topic_num])
show(plot)
```
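The "keep the well separated points" filter before t-SNE simply drops documents whose largest topic weight falls below a threshold, then takes the argmax of the survivors. On a made-up weight array:

```python
import numpy as np

# Made-up document-topic weights: 3 documents x 3 topics
arr = np.array([[0.90, 0.05, 0.05],   # clearly topic 0
                [0.34, 0.33, 0.33],   # ambiguous -> dropped by the filter
                [0.10, 0.80, 0.10]])  # clearly topic 1

kept = arr[np.amax(arr, axis=1) > 0.35]
topic_num = np.argmax(kept, axis=1)
print(kept.shape[0], topic_num.tolist())  # 2 [0, 1]
```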