# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# + [markdown] tags=["remove_cell"]
# # Grover’s Algorithm
# -

# In this section, we introduce Grover's algorithm and show how it can be used to solve unstructured search problems. We then implement Grover's algorithm on actual problems using Qiskit, and run it on a simulator and an actual device.
#
# ## Contents
#
# 1. [Introduction](#introduction)
#    1.1 [Algorithm Overview](#overview)
#    1.2 [Grover Step by Step](#steps)
#    1.2.1 [Preparing the Search Space](#state-prep)
#    1.2.2 [Creating the Oracle](#oracle)
#    1.2.3 [The Diffusion Operator](#diffusion)
# 2. [Example: 2 Qubits](#2qubits)
#    2.1 [Simulation](#2qubits-simulation)
#    2.2 [Device](#2qubits-device)
# 3. [Example: 3 Qubits](#3qubits)
#    3.1 [Simulation](#3qubits-simulation)
#    3.2 [Device](#3qubits-device)
# 4. [Problems](#problems)
#    4.1 [Solving Sudoku using Grover's Algorithm](#sudoku)
#    4.2 [Solving the Triangle Problem using Grover's Algorithm](#tri)
# 5. [References](#references)
#
# ## 1. Introduction <a id='introduction'></a>
#
# You have likely heard that one of the many advantages a quantum computer has over a classical computer is its superior speed when searching databases. Grover's algorithm demonstrates this capability. This algorithm can speed up an unstructured search problem quadratically, but its uses extend beyond that; it can serve as a general trick or subroutine to obtain quadratic run time improvements for a variety of other algorithms. This is called the amplitude amplification trick.
#
# Suppose you are given a large list of $N$ items. Among these items there is one item with a unique property that we wish to locate; we will call this one the winner $w$. Think of each item in the list as a box of a particular color. Say all items in the list are gray except the winner $w$, which is purple.
#
#
#
# ![image1](images/grover_list.png)
#
# To find the purple box -- the *marked item* -- using classical computation, one would have to check on average $N/2$ of these boxes, and in the worst case, all $N$ of them. On a quantum computer, however, we can find the marked item in roughly $\sqrt{N}$ steps with Grover's amplitude amplification trick. A quadratic speedup is indeed a substantial time-saver for finding marked items in long lists. Additionally, the algorithm does not use the list's internal structure, which makes it *generic;* this is why it immediately provides a quadratic quantum speed-up for many classical problems.

# + [markdown] formulas={"vspace": {"meaning": "A complex vector space is a vector space whose field of scalars is the complex numbers.", "say": "Euclidean complex n-space", "type": "Mathematical symbol"}}
# ### 1.1 Algorithm Overview <a id='overview'></a>
#
# Grover's algorithm consists of three main steps: state preparation, the oracle, and the diffusion operator. State preparation is where we create the search space, which is the set of all possible cases the answer could take. In the list example above, the search space would be all the items of that list.
# The oracle marks the correct answer (or answers) we are looking for, and the diffusion operator magnifies these answers so they stand out and can be measured at the end of the algorithm.
#
# ![image2](images/grover_steps.png)
#
# So how does the algorithm work? Before looking at the list of items, we have no idea where the marked item is.
# Therefore, any guess of its location is as good as any other, which can be expressed in terms of a uniform superposition: $|s \rangle = \frac{1}{\sqrt{N}} \sum_{x = 0}^{N -1} | x \rangle.$
#
# If at this point we were to measure in the standard basis $\{ | x \rangle \}$, this superposition would collapse, according to the fifth quantum law, to any one of the basis states with the same probability of $\frac{1}{N} = \frac{1}{2^n}$. Our chances of guessing the right value $w$ are therefore $1$ in $2^n$, as could be expected. Hence, on average we would need to try about $N/2 = 2^{n-1}$ times to guess the correct item.
#
# Enter the procedure called amplitude amplification, which is how a quantum computer significantly enhances this probability. This procedure stretches out (amplifies) the amplitude of the marked item, while shrinking the other items' amplitudes, so that measuring the final state will return the right item with near-certainty.
#
# This algorithm has a nice geometrical interpretation in terms of two reflections, which generate a rotation in a two-dimensional plane. The only two special states we need to consider are the winner $| w \rangle$ and the uniform superposition $| s \rangle$. These two vectors span a two-dimensional plane in the vector space $\cssId{vspace}{\mathbb{C}^N}$. They are not quite perpendicular because $| w \rangle$ occurs in the superposition with amplitude $N^{-1/2}$ as well.
# We can, however, introduce an additional state $|s'\rangle$ that is in the span of these two vectors, which is perpendicular to $| w \rangle$ and is obtained from $|s \rangle$ by removing $| w \rangle$ and rescaling.
#
# **Step 1**: The amplitude amplification procedure starts out in the uniform superposition $| s \rangle$, which is easily constructed from $| s \rangle = H^{\otimes n} | 0 \rangle^{\otimes n}$, or using other symmetric entangled states.
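# The amplitude amplification procedure described in this section can be sketched numerically with plain numpy matrices. The snippet below is a toy simulation, not the gate-level construction used later in this chapter: it builds the uniform superposition, the oracle reflection, and the diffusion reflection directly as matrices. The winner index `5` (the state $|101\rangle$) and the size $N = 8$ are arbitrary choices for illustration.

```python
import numpy as np

N = 8                                    # size of the search space (3 qubits)
w = 5                                    # index of the winner state |101>
s = np.ones(N) / np.sqrt(N)              # uniform superposition |s>

U_f = np.eye(N)                          # oracle reflection:
U_f[w, w] = -1                           # flips the sign of the winner only
U_s = 2 * np.outer(s, s) - np.eye(N)     # diffuser: reflection about |s>

t = int(np.pi / 4 * np.sqrt(N))          # optimal iteration count for one winner
psi = s.copy()
for _ in range(t):
    psi = U_s @ (U_f @ psi)              # one Grover rotation

print(t)                                 # 2 iterations for N = 8
print(round(psi[w] ** 2, 3))             # probability of measuring the winner (~0.945)
```

# Running more iterations than this overshoots: the state rotates past $|w\rangle$ and the success probability falls again.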
#
# ![image3](images/grover_step1.jpg)
#
# The left graphic corresponds to the two-dimensional plane spanned by the perpendicular vectors $|w\rangle$ and $|s'\rangle$, which allows us to express the initial state as $|s\rangle = \sin \theta | w \rangle + \cos \theta | s' \rangle,$ where $\theta = \arcsin \langle s | w \rangle = \arcsin \frac{1}{\sqrt{N}}$. The right graphic is a bar graph of the amplitudes of the state $| s \rangle$.
#
# **Step 2**: We apply the oracle reflection $U_f$ to the state $|s\rangle$.
#
# ![image4](images/grover_step2.jpg)
#
# Geometrically this corresponds to a reflection of the state $|s\rangle$ about $|s'\rangle$. This transformation means that the amplitude in front of the $|w\rangle$ state becomes negative, which in turn means that the average amplitude (indicated by a dashed line) has been lowered.
#
# **Step 3**: We now apply an additional reflection ($U_s$) about the state $|s\rangle$: $U_s = 2|s\rangle\langle s| - \mathbb{1}$. This transformation maps the state to $U_s U_f| s \rangle$ and completes the rotation.
#
# ![image5](images/grover_step3.jpg)
#
# Two reflections always correspond to a rotation. The transformation $U_s U_f$ rotates the initial state $|s\rangle$ closer towards the winner $|w\rangle$. The action of the reflection $U_s$ in the amplitude bar diagram can be understood as a reflection about the average amplitude. Since the average amplitude has been lowered by the first reflection, this transformation boosts the negative amplitude of $|w\rangle$ to roughly three times its original value, while it decreases the other amplitudes. We then return to **step 2** and repeat the application. This procedure is repeated several times to zero in on the winner.
#
# After $t$ steps we will be in the state $|\psi_t\rangle$ where: $| \psi_t \rangle = (U_s U_f)^t | s \rangle.$
#
# How many times do we need to apply the rotation? It turns out that roughly $\sqrt{N}$ rotations suffice.
# This becomes clear when looking at the amplitudes of the state $| \psi_t \rangle$. We can see that the amplitude of $| w \rangle$ grows linearly with the number of applications, $\sim t N^{-1/2}$. However, since we are dealing with amplitudes and not probabilities, the vector space's dimension enters as a square root. Therefore it is the amplitude, and not just the probability, that is being amplified in this procedure.
#
# To calculate the number of rotations, we need to know the size of the search space and the number of answers we are looking for. To get the optimal number of iterations $t$, we can follow the equation:
#
# $$
# t = \left\lfloor\frac{\pi}{4}\sqrt{\frac{N}{m}}\right\rfloor
# $$
#
# where $N$ is the size of the search space and $m$ is the number of answers we want.
#
# ![image6](images/grover_circuit_high_level.png)

# + [markdown] gloss={"ds": {"text": "The Dicke state |Dnk\u3009 is an equal-weight superposition of all n-qubit states with Hamming Weight k.", "title": "Dicke-state"}, "ss": {"text": "Also known as permutation-symmetric quantum states, these are states that are invariant under any permutation of their subsystems.", "title": "symmetric states"}}
# &nbsp;
#
# ## 1.2 Grover Step by Step <a id='steps'></a>
#
# Now that we have been through how Grover's algorithm actually works, let's look in a little more depth at the construction of, and the different cases for, each of its components.
#
# ## 1.2.1 Preparing the Search Space <a id='state-prep'></a>
#
# The first step of Grover's algorithm is the initial state preparation. As we just mentioned, the search space is all possible values we need to search through to find the answer we want. For the examples in this textbook, our 'database' is comprised of all the possible computational basis states our qubits can be in. For example, if we have 3 qubits, our list is the states $|000\rangle, |001\rangle, \dots |111\rangle$ (i.e. the states $|0\rangle \rightarrow |7\rangle$).
# So, in this case the size of our search space will be $N = 2^{3} = 8$.
#
# In some cases, if we know the range within the search space where the answer is guaranteed to be, we can eliminate the redundant basis states from our search space to speed up the algorithm and decrease the size of the circuit. Generally speaking, we can prepare our state using any [symmetric states](gloss:ss), such as [GHZ-states](https://quantum-computing.ibm.com/composer/docs/iqx/example-circuits/ghz), [W-states](https://quantum-computing.ibm.com/composer/docs/iqx/example-circuits/w-state), or [Dicke-states](gloss:ds).
#
# For example, if we are trying to solve a problem with one answer using 4 qubits, and we prepare our state using Hadamard gates (i.e. forming the full Hilbert space), $N$ will be 16. But if we know that the answer lies among the states in which exactly one qubit has the value 1 at any time, we can use the W-state instead of the full Hilbert space to prepare our states. Doing that decreases the size of the search space from 16 to 4, and the optimal number of iterations $t$ from 3 to 1.
# -

# ## 1.2.2 Creating the Oracle <a id='oracle'></a>
#
# The second and most important step of Grover’s algorithm is the oracle. Oracles add a negative phase to the solution states so they can stand out from the rest and be measured. That is, for any state $|x\rangle$ in the computational basis:
#
# $$
# U_\omega|x\rangle = \bigg\{
# \begin{aligned}
# \phantom{-}|x\rangle \quad \text{if} \; x \neq \omega \\
# -|x\rangle \quad \text{if} \; x = \omega \\
# \end{aligned}
# $$
#
# This oracle will be a diagonal matrix, where the entry that corresponds to the marked item will have a negative phase.
For example, if we have three qubits and $\omega = \text{101}$, our oracle will have the matrix: # # $$ # U_\omega = # \begin{bmatrix} # 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ # 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ # 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ # 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ # 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ # 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ # 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ # 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ # \end{bmatrix} # \begin{aligned} # \\ # \\ # \\ # \\ # \\ # \\ # \leftarrow \omega = \text{101}\\ # \\ # \\ # \\ # \end{aligned} # $$ # # # What makes Grover’s algorithm so powerful is how easy it is to convert a problem to an oracle of this form. There are many computational problems in which it is difficult to _find_ a solution, but relatively easy to _verify_ a solution. For example, we can easily verify a solution to a [sudoku](https://en.wikipedia.org/wiki/Sudoku) by checking all the rules are satisfied. For these problems, we can create a function $f$ that takes a proposed solution $x$, and returns $f(x) = 0$ if $x$ is not a solution ($x \neq \omega$) and $f(x) = 1$ for a valid solution ($x = \omega$). 
# Our oracle can then be described as:
#
# $$
# U_\omega|x\rangle = (-1)^{f(x)}|x\rangle
# $$
#
# and the oracle's matrix will be a diagonal matrix of the form:
#
# $$
# U_\omega =
# \begin{bmatrix}
# (-1)^{f(0)} & 0 & \cdots & 0 \\
# 0 & (-1)^{f(1)} & \cdots & 0 \\
# \vdots & 0 & \ddots & \vdots \\
# 0 & 0 & \cdots & (-1)^{f(2^n-1)} \\
# \end{bmatrix}
# $$
#
# <!-- ::: q-block.reminder -->
#
# ### Detail
#
# <details>
# <summary>Circuit Construction of a Grover Oracle</summary>
# <p>
# If we have our classical function $f(x)$, we can convert it to a reversible circuit of the form:
# </p><p>
# <img alt="A Classical Reversible Oracle" src="images/grover_boolean_oracle.svg">
# </p><p>
# If we initialize the 'output' qubit in the state $|{-}\rangle$, the phase kickback effect turns this into a Grover oracle (similar to the workings of the Deutsch-Jozsa oracle):
# </p><p>
# <img alt="Grover Oracle Constructed from a Classical Reversible Oracle" src="images/grover_phase_oracle.svg">
# </p><p>
# We then ignore the auxiliary ($|{-}\rangle$) qubit.
# </p>
# </details>
#
# <!-- ::: -->
#
# ## 1.2.3 The Diffusion Operator <a id='diffusion'></a>
#
# Finally, after the oracle has marked the correct answer by making its amplitude negative, comes the last step of Grover's algorithm: the diffusion operator.
#
# The construction of the diffusion operator depends on what we decide to use to prepare our initial states. Generally, the diffusion operator has the following construction.
#
# ![image7](images/grover_diff.png)
#
# For the next part of this chapter, we will create example oracles where we know $\omega$ beforehand, and not worry ourselves with whether these oracles are useful or not. At the end of the chapter, we will cover a short example where we create an oracle to solve a problem (sudoku) and a famous graph problem, the triangle finding problem.
# ## 2.
Example: 2 Qubits <a id='2qubits'></a>
#
# Let's first have a look at the case of Grover's algorithm for $N=4$, which is realized with 2 qubits. In this particular case, only <b>one rotation</b> is required to rotate the initial state $|s\rangle$ to the winner $|w\rangle$[3]:
# <ol>
# <li>
# Following the above introduction, in the case $N=4$ we have
#
# $$\theta = \arcsin \frac{1}{2} = \frac{\pi}{6}.$$
#
# </li>
# <li>
# After $t$ steps, we have $$(U_s U_\omega)^t | s \rangle = \sin \theta_t | \omega \rangle + \cos \theta_t | s' \rangle ,$$ where $$\theta_t = (2t+1)\theta.$$
#
# </li>
# <li>
# In order to obtain $| \omega \rangle$ we need $\theta_t = \frac{\pi}{2}$, which with $\theta=\frac{\pi}{6}$ inserted above results in $t=1$. This implies that after $t=1$ rotation the searched element is found.
# </li>
# </ol>
#
# We will now follow through an example using a specific oracle.
#
# #### Oracle for $\lvert \omega \rangle = \lvert 11 \rangle$
# Let's look at the case $\lvert \omega \rangle = \lvert 11 \rangle$. The oracle $U_\omega$ in this case acts as follows:
#
# $$U_\omega | s \rangle = U_\omega \frac{1}{2}\left( |00\rangle + |01\rangle + |10\rangle + |11\rangle \right) = \frac{1}{2}\left( |00\rangle + |01\rangle + |10\rangle - |11\rangle \right).$$
#
# or:
#
# $$
# U_\omega =
# \begin{bmatrix}
# 1 & 0 & 0 & 0 \\
# 0 & 1 & 0 & 0 \\
# 0 & 0 & 1 & 0 \\
# 0 & 0 & 0 & -1 \\
# \end{bmatrix}
# $$
#
# which you may recognise as the controlled-Z gate. Thus, for this example, our oracle is simply the controlled-Z gate:
#
# ![image8](images/grover_circuit_2qbuits_oracle_11.svg)
#
# #### Reflection $U_s$
# In order to complete the circuit we need to implement the additional reflection $U_s = 2|s\rangle\langle s| - \mathbb{1}$. Since this is a reflection about $|s\rangle$, we want to add a negative phase to every state orthogonal to $|s\rangle$.
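# As a quick numerical sanity check (a sketch, not part of the circuit construction), we can confirm the reflection property of $U_s = 2|s\rangle\langle s| - \mathbb{1}$ for 2 qubits: $|s\rangle$ itself is left unchanged, while any state orthogonal to $|s\rangle$ picks up a negative phase. The orthogonal state chosen below is arbitrary.

```python
import numpy as np

N = 4                                     # 2 qubits
s = np.ones(N) / np.sqrt(N)               # uniform superposition |s>
U_s = 2 * np.outer(s, s) - np.eye(N)      # reflection about |s>

print(np.allclose(U_s @ s, s))            # |s> is an eigenvector with eigenvalue +1

v = np.array([1, -1, 0, 0]) / np.sqrt(2)  # an arbitrary state orthogonal to |s>
print(np.allclose(U_s @ v, -v))           # orthogonal states gain a minus sign
```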
# # One way we can do this is to use the operation that transforms the state $|s\rangle \rightarrow |0\rangle$, which we already know is the Hadamard gate applied to each qubit: # # $$H^{\otimes n}|s\rangle = |0\rangle$$ # # Then we apply a circuit that adds a negative phase to the states orthogonal to $|0\rangle$: # # $$U_0 \frac{1}{2}\left( \lvert 00 \rangle + \lvert 01 \rangle + \lvert 10 \rangle + \lvert 11 \rangle \right) = \frac{1}{2}\left( \lvert 00 \rangle - \lvert 01 \rangle - \lvert 10 \rangle - \lvert 11 \rangle \right)$$ # # i.e. the signs of each state are flipped except for $\lvert 00 \rangle$. As can easily be verified, one way of implementing $U_0$ is the following circuit: # # ![Circuit for reflection around |0>](images/grover_circuit_2qbuits_reflection_0.svg) # # Finally, we do the operation that transforms the state $|0\rangle \rightarrow |s\rangle$ (the H-gate again): # # $$H^{\otimes n}U_0 H^{\otimes n} = U_s$$ # # The complete circuit for $U_s$ looks like this: # # ![Circuit for reflection around |s>](images/grover_circuit_2qbuits_reflection.svg) # # # #### Full Circuit for $\lvert w \rangle = |11\rangle$ # Since in the particular case of $N=4$ only one rotation is required we can combine the above components to build the full circuit for Grover's algorithm for the case $\lvert w \rangle = |11\rangle$: # # ![image11](images/grover_circuit_2qubits_full_11.svg) # # ### 2.1 Qiskit Implementation # # We now implement Grover's algorithm for the above case of 2 qubits for $\lvert w \rangle = |11\rangle$. 
# +
# initialization
import matplotlib.pyplot as plt
import numpy as np
import math

# importing Qiskit
from qiskit import IBMQ, Aer, transpile
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit.providers.ibmq import least_busy

# import basic plot tools
from qiskit.visualization import plot_histogram
# -

# We start by preparing a quantum circuit with two qubits:

n = 2
grover_circuit = QuantumCircuit(n)

# Then we simply need to write out the commands for the circuit depicted above. First, we need to initialize the state $|s\rangle$. Let's create a general function (for any number of qubits) so we can use it again later:

# + tags=["thebelab-init"]
def initialize_s(qc, qubits):
    """Apply a H-gate to 'qubits' in qc"""
    for q in qubits:
        qc.h(q)
    return qc
# -

grover_circuit = initialize_s(grover_circuit, [0,1])
grover_circuit.draw()

# Apply the Oracle for $|w\rangle = |11\rangle$. This oracle is specific to 2 qubits:

grover_circuit.cz(0,1)  # Oracle
grover_circuit.draw()

# <span id="general_diffuser"></span>
# We now want to apply the diffuser ($U_s$). The circuit below is specific to 2 qubits; in the next section we will create a general diffuser (for any number of qubits) so we can use it in other problems.

# + tags=["thebelab-init"]
# Diffusion operator (U_s) for 2 qubits
grover_circuit.h([0,1])
grover_circuit.z([0,1])
grover_circuit.cz(0,1)
grover_circuit.h([0,1])
grover_circuit.draw()
# -

# This is our finished circuit.

# ### 2.1.1 Experiment with Simulators <a id='2qubits-simulation'></a>
#
# Let's run the circuit in simulation.
# First, we can verify that we have the correct statevector:

sv_sim = Aer.get_backend('statevector_simulator')
result = sv_sim.run(grover_circuit).result()
statevec = result.get_statevector()
statevec

#from qiskit_textbook.tools import vector2latex
#vector2latex(statevec, pretext="|\\psi\\rangle =")

# As expected, the amplitude of every state that is not $|11\rangle$ is 0; this means we have a 100% chance of measuring $|11\rangle$:

# +
grover_circuit.measure_all()

qasm_sim = Aer.get_backend('qasm_simulator')
result = qasm_sim.run(grover_circuit).result()
counts = result.get_counts()
plot_histogram(counts)
# -

# ### 2.1.2 Experiment with Real Devices <a id='2qubits-device'></a>
#
# We can run the circuit on a real device as below.

# + tags=["uses-hardware"]
# Load IBM Q account and get the least busy backend device
provider = IBMQ.load_account()
provider = IBMQ.get_provider("ibm-q-internal")
device = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 3 and
                                      not x.configuration().simulator and x.status().operational==True))
print("Running on current least busy device: ", device)

# + tags=["uses-hardware"]
# Run our circuit on the least busy backend. Monitor the execution of the job in the queue
from qiskit.tools.monitor import job_monitor

transpiled_grover_circuit = transpile(grover_circuit, device, optimization_level=3)
job = device.run(transpiled_grover_circuit)
job_monitor(job, interval=2)

# + tags=["uses-hardware"]
# Get the results from the computation
results = job.result()
answer = results.get_counts(grover_circuit)
plot_histogram(answer)
# -

# We confirm that in the majority of cases the state $|11\rangle$ is measured. The other results are due to errors in the quantum computation.
#
# ## 3. Example: 3 Qubits <a id='3qubits'></a>
#
# We now go through the example of Grover's algorithm for 3 qubits with two marked states, $\lvert101\rangle$ and $\lvert110\rangle$, following the implementation found in Reference [2].
The quantum circuit to solve the problem using a phase oracle is: # # ![image12](images/grover_circuit_3qubits.png) # # <ol> # <li> # Apply Hadamard gates to $3$ qubits initialised to $\lvert000\rangle$ to create a uniform superposition: # $$\lvert \psi_1 \rangle = \frac{1}{\sqrt{8}} \left( # \lvert000\rangle + \lvert001\rangle + \lvert010\rangle + \lvert011\rangle + # \lvert100\rangle + \lvert101\rangle + \lvert110\rangle + \lvert111\rangle \right) $$ # </li> # # <li> # Mark states $\lvert101\rangle$ and $\lvert110\rangle$ using a phase oracle: # $$\lvert \psi_2 \rangle = \frac{1}{\sqrt{8}} \left( # \lvert000\rangle + \lvert001\rangle + \lvert010\rangle + \lvert011\rangle + # \lvert100\rangle - \lvert101\rangle - \lvert110\rangle + \lvert111\rangle \right) $$ # </li> # # <li> # Perform the reflection around the average amplitude: # # <ol> # <li> Apply Hadamard gates to the qubits # $$\lvert \psi_{3a} \rangle = \frac{1}{2} \left( # \lvert000\rangle +\lvert011\rangle +\lvert100\rangle -\lvert111\rangle \right) $$ # </li> # # <li> Apply X gates to the qubits # $$\lvert \psi_{3b} \rangle = \frac{1}{2} \left( # -\lvert000\rangle +\lvert011\rangle +\lvert100\rangle +\lvert111\rangle \right) $$ # </li> # # <li> Apply a doubly controlled Z gate between the 1, 2 (controls) and 3 (target) qubits # $$\lvert \psi_{3c} \rangle = \frac{1}{2} \left( # -\lvert000\rangle +\lvert011\rangle +\lvert100\rangle -\lvert111\rangle \right) $$ # </li> # <li> Apply X gates to the qubits # $$\lvert \psi_{3d} \rangle = \frac{1}{2} \left( # -\lvert000\rangle +\lvert011\rangle +\lvert100\rangle -\lvert111\rangle \right) $$ # </li> # <li> Apply Hadamard gates to the qubits # $$\lvert \psi_{3e} \rangle = \frac{1}{\sqrt{2}} \left( # -\lvert101\rangle -\lvert110\rangle \right) $$ # </li> # </ol> # </li> # # <li> # Measure the $3$ qubits to retrieve states $\lvert101\rangle$ and $\lvert110\rangle$ # </li> # </ol> # # Note that since there are 2 solutions and 8 possibilities, we will only need to run 
one iteration (steps 2 & 3).
#
# ### 3.1 Qiskit Implementation <a id='3qubit-implementation'></a>
#
# We now implement Grover's algorithm for the above [example](#3qubits) for $3$ qubits, searching for the two marked states $\lvert101\rangle$ and $\lvert110\rangle$. **Note:** Remember that Qiskit orders its qubits the opposite way round to this resource, so the circuit drawn will appear flipped about the horizontal.
#
# We create a phase oracle that will mark states $\lvert101\rangle$ and $\lvert110\rangle$ as the results (step 1).

qc = QuantumCircuit(3)
qc.cz(0, 2)
qc.cz(1, 2)
oracle_ex3 = qc.to_gate()
oracle_ex3.name = "U$_\omega$"

# In the last section we used a diffuser specific to 2 qubits; in the cell below we will create a general diffuser for any number of qubits.
#
# <!-- ::: q-block.reminder -->
#
# ### Detail
#
# <details>
# <summary>Creating a General Diffuser</summary>
#
# Remember that we can create $U_s$ from $U_0$:
#
# $$ U_s = H^{\otimes n} U_0 H^{\otimes n} $$
#
# And a multi-controlled-Z gate ($MCZ$) inverts the phase of the state $|11\dots 1\rangle$:
#
# $$
# MCZ =
# \begin{bmatrix}
# 1 & 0 & 0 & \cdots & 0 \\
# 0 & 1 & 0 & \cdots & 0 \\
# \vdots & \vdots & \vdots & \ddots & \vdots \\
# 0 & 0 & 0 & \cdots & -1 \\
# \end{bmatrix}
# \begin{aligned}
# \\
# \\
# \\
# \leftarrow \text{Add negative phase to} \; |11\dots 1\rangle\\
# \end{aligned}
# $$
#
# Applying an X-gate to each qubit performs the transformation:
#
# $$
# \begin{aligned}
# |00\dots 0\rangle & \rightarrow |11\dots 1\rangle\\
# |11\dots 1\rangle & \rightarrow |00\dots 0\rangle
# \end{aligned}
# $$
#
# So:
#
# $$ U_0 = - X^{\otimes n} (MCZ) X^{\otimes n} $$
#
# Using these properties together, we can create $U_s$ using H-gates, X-gates, and a single multi-controlled-Z gate:
#
# $$ U_s = - H^{\otimes n} U_0 H^{\otimes n} = H^{\otimes n} X^{\otimes n} (MCZ) X^{\otimes n} H^{\otimes n} $$
#
# Note that we can ignore the global phase of -1.
#
# </details>
#
# <!-- ::: -->

def diffuser(nqubits):
    qc = QuantumCircuit(nqubits)
    # Apply transformation |s> -> |00..0> (H-gates)
    for qubit in range(nqubits):
        qc.h(qubit)
    # Apply transformation |00..0> -> |11..1> (X-gates)
    for qubit in range(nqubits):
        qc.x(qubit)
    # Do multi-controlled-Z gate
    qc.h(nqubits-1)
    qc.mct(list(range(nqubits-1)), nqubits-1)  # multi-controlled-toffoli
    qc.h(nqubits-1)
    # Apply transformation |11..1> -> |00..0>
    for qubit in range(nqubits):
        qc.x(qubit)
    # Apply transformation |00..0> -> |s>
    for qubit in range(nqubits):
        qc.h(qubit)
    # We will return the diffuser as a gate
    U_s = qc.to_gate()
    U_s.name = "U$_s$"
    return U_s

# We'll now put the pieces together, with the creation of a uniform superposition at the start of the circuit and a measurement at the end. Note that since there are 2 solutions and 8 possibilities, we will only need to run one iteration.

n = 3
grover_circuit = QuantumCircuit(n)
grover_circuit = initialize_s(grover_circuit, [0,1,2])
grover_circuit.append(oracle_ex3, [0,1,2])
grover_circuit.append(diffuser(n), [0,1,2])
grover_circuit.measure_all()
grover_circuit.draw()

# ### 3.1.1 Experiment with Simulators <a id='3qubits-simulation'></a>
#
# We can run the above circuit on the simulator.

qasm_sim = Aer.get_backend('qasm_simulator')
transpiled_grover_circuit = transpile(grover_circuit, qasm_sim)
results = qasm_sim.run(transpiled_grover_circuit).result()
counts = results.get_counts()
plot_histogram(counts)

# As we can see, the algorithm discovers our marked states $\lvert101\rangle$ and $\lvert110\rangle$.

# ### 3.1.2 Experiment with Real Devices <a id='3qubits-device'></a>
#
# We can run the circuit on the real device as below.

# + tags=["uses-hardware"]
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 3 and
                                       not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)

# + tags=["uses-hardware"]
# Run our circuit on the least busy backend.
# Monitor the execution of the job in the queue
from qiskit.tools.monitor import job_monitor

transpiled_grover_circuit = transpile(grover_circuit, backend, optimization_level=3)
job = backend.run(transpiled_grover_circuit)
job_monitor(job, interval=2)

# + tags=["uses-hardware"]
# Get the results from the computation
results = job.result()
answer = results.get_counts(grover_circuit)
plot_histogram(answer)
# -

# As we can (hopefully) see, there is a higher chance of measuring $\lvert101\rangle$ and $\lvert110\rangle$. The other results are due to errors in the quantum computation.

# ## 4. Problems <a id='problems'></a>
#
# The function `grover_problem_oracle` below takes a number of qubits (`n`) and a `variant`, and returns an n-qubit oracle. The function will always return the same oracle for the same `n` and `variant`. You can see the solutions to each oracle by setting `print_solutions = True` when calling `grover_problem_oracle`.

from qiskit_textbook.problems import grover_problem_oracle
## Example Usage
n = 4
oracle = grover_problem_oracle(n, variant=1)  # variant 1 of the oracle, with n qubits
qc = QuantumCircuit(n)
qc.append(oracle, [0,1,2,3])
qc.draw()

# 1. `grover_problem_oracle(4, variant=2)` uses 4 qubits and has 1 solution.
#    a. How many iterations do we need to have a > 90% chance of measuring this solution?
#    b. Use Grover's algorithm to find this solution state.
#    c. What happens if we apply more iterations than the number we calculated in problem 1a above? Why?
#
# 2. With 2 solutions and 4 qubits, how many iterations do we need for a >90% chance of measuring a solution? Test your answer using the oracle `grover_problem_oracle(4, variant=1)` (which has two solutions).
#
# 3.
Create a function, `grover_solver(oracle, iterations)` that takes as input: # - A Grover oracle as a gate (`oracle`) # - An integer number of iterations (`iterations`) # # and returns a `QuantumCircuit` that performs Grover's algorithm on the '`oracle`' gate, with '`iterations`' iterations. # ## 4.1 Solving Sudoku using Grover's Algorithm <a id="sudoku"></a> # # The oracles used throughout this chapter so far have been created with prior knowledge of their solutions. We will now solve a simple problem using Grover's algorithm, for which we do not necessarily know the solution beforehand. Our problem is a 2×2 binary sudoku, which in our case has two simple rules: # # - No column may contain the same value twice # - No row may contain the same value twice # # If we assign each square in our sudoku to a variable like so: # # ![2×2 binary sudoku, with each square allocated to a different variable](images/binary_sudoku.png) # # we want our circuit to output a solution to this sudoku. # # Note that, while this approach of using Grover's algorithm to solve this problem is not practical (you can probably find the solution in your head!), the purpose of this example is to demonstrate the conversion of classical [decision problems](https://en.wikipedia.org/wiki/Decision_problem) into oracles for Grover's algorithm. # # ### 4.1.1 Turning the Problem into a Circuit # # We want to create an oracle that will help us solve this problem, and we will start by creating a circuit that identifies a correct solution. Similar to how we created a classical adder using quantum circuits in [_The Atoms of Computation_](https://qiskit.org/textbook/ch-states/atoms-computation.html), we simply need to create a _classical_ function on a quantum circuit that checks whether the state of our variable bits is a valid solution. 
#
# Since we need to check down both columns and across both rows, there are 4 conditions we need to check:
#
# ```
# v0 ≠ v1   # check along top row
# v2 ≠ v3   # check along bottom row
# v0 ≠ v2   # check down left column
# v1 ≠ v3   # check down right column
# ```
#
# Remember we are comparing classical (computational basis) states. For convenience, we can compile this set of comparisons into a list of clauses:

# + tags=["thebelab-init"]
clause_list = [[0,1],
               [0,2],
               [1,3],
               [2,3]]
# -

# We will assign the value of each variable to a bit in our circuit. To check these clauses computationally, we will use the `XOR` gate (we came across this in the atoms of computation).

# + tags=["thebelab-init"]
def XOR(qc, a, b, output):
    qc.cx(a, output)
    qc.cx(b, output)
# -

# Convince yourself that the `output0` bit in the circuit below will only be flipped if `input0 ≠ input1`:

# We will use separate registers to name the bits
in_qubits = QuantumRegister(2, name='input')
out_qubit = QuantumRegister(1, name='output')
qc = QuantumCircuit(in_qubits, out_qubit)
XOR(qc, in_qubits[0], in_qubits[1], out_qubit)
qc.draw()

# This circuit checks whether `input0 ≠ input1` and stores the result in `output0`. To check each clause, we repeat this circuit for each pairing in `clause_list` and store the output to a new bit:

# +
# Create separate registers to name bits
var_qubits = QuantumRegister(4, name='v')     # variable bits
clause_qubits = QuantumRegister(4, name='c')  # bits to store clause-checks

# Create quantum circuit
qc = QuantumCircuit(var_qubits, clause_qubits)

# Use XOR gate to check each clause
i = 0
for clause in clause_list:
    XOR(qc, clause[0], clause[1], clause_qubits[i])
    i += 1

qc.draw()
# -

# The final state of the bits `c0, c1, c2, c3` will only all be `1` in the case that the assignments of `v0, v1, v2, v3` are a solution to the sudoku.
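# Before continuing, it is worth confirming classically that these four clauses really do single out the sudoku solutions. The sketch below (plain Python, not a circuit) brute-forces all $2^4$ assignments:

```python
from itertools import product

clause_list = [[0, 1], [0, 2], [1, 3], [2, 3]]

def is_solution(v):
    """Return True if the assignment v = (v0, v1, v2, v3) satisfies every clause."""
    return all(v[a] != v[b] for a, b in clause_list)

solutions = [v for v in product([0, 1], repeat=4) if is_solution(v)]
print(solutions)  # [(0, 1, 1, 0), (1, 0, 0, 1)]
```

# So there are $m = 2$ winners among $N = 16$ states, which is the count we would use when choosing the number of Grover iterations.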
To complete our checking circuit, we want a single bit to be `1` if (and only if) all the clauses are satisfied; this way, we can look at just one bit to see if our assignment is a solution. We can do this using a multi-controlled Toffoli gate: # + # Create separate registers to name bits var_qubits = QuantumRegister(4, name='v') clause_qubits = QuantumRegister(4, name='c') output_qubit = QuantumRegister(1, name='out') qc = QuantumCircuit(var_qubits, clause_qubits, output_qubit) # Compute clauses i = 0 for clause in clause_list: XOR(qc, clause[0], clause[1], clause_qubits[i]) i += 1 # Flip 'output' bit if all clauses are satisfied qc.mct(clause_qubits, output_qubit) qc.draw() # - # The circuit above takes as input an initial assignment of the bits `v0`, `v1`, `v2` and `v3`, and all other bits should be initialised to `0`. After running the circuit, the state of the `out0` bit tells us if this assignment is a solution or not; `out0 = 0` means the assignment _is not_ a solution, and `out0 = 1` means the assignment _is_ a solution. # # **Important:** Before you continue, it is important you fully understand this circuit and are convinced it works as stated in the paragraph above. # # ### 4.1.2 Uncomputing, and Completing the Oracle # # We can now turn this checking circuit into a Grover oracle using [phase kickback](https://qiskit.org/textbook/ch-gates/phase-kickback.html). To recap, we have 3 registers: # - One register which stores our sudoku variables (we'll say $x = v_3, v_2, v_1, v_0$) # - One register that stores our clauses (this starts in the state $|0000\rangle$ which we'll abbreviate to $|0\rangle$) # - And one qubit ($|\text{out}_0\rangle$) that we've been using to store the output of our checking circuit.
# # To create an oracle, we need our circuit ($U_\omega$) to perform the transformation: # # $$ # U_\omega|x\rangle|0\rangle|\text{out}_0\rangle = |x\rangle|0\rangle|\text{out}_0\oplus f(x)\rangle # $$ # # If we set the `out0` qubit to the superposition state $|{-}\rangle$ we have: # # $$ # \begin{aligned} # U_\omega|x\rangle|0\rangle|{-}\rangle # &= U_\omega|x\rangle|0\rangle\otimes\tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle)\\ # &= |x\rangle|0\rangle\otimes\tfrac{1}{\sqrt{2}}(|0\oplus f(x)\rangle - |1\oplus f(x)\rangle) # \end{aligned} # $$ # # If $f(x) = 0$, then we have the state: # # $$ # \begin{aligned} # &= |x\rangle|0\rangle\otimes \tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle)\\ # &= |x\rangle|0\rangle|-\rangle\\ # \end{aligned} # $$ # # # (i.e. no change). But if $f(x) = 1$ (i.e. $x = \omega$), we introduce a negative phase to the $|{-}\rangle$ qubit: # # $$ # \begin{aligned} # &= \phantom{-}|x\rangle|0\rangle\otimes\tfrac{1}{\sqrt{2}}(|1\rangle - |0\rangle)\\ # &= \phantom{-}|x\rangle|0\rangle\otimes -\tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle)\\ # &= -|x\rangle|0\rangle|-\rangle\\ # \end{aligned} # $$ # # This is a functioning oracle that uses two auxiliary registers in the state $|0\rangle|{-}\rangle$: # # $$ # U_\omega|x\rangle|0\rangle|{-}\rangle = \Bigg\{ # \begin{aligned} # \phantom{-}|x\rangle|0\rangle|-\rangle \quad \text{for} \; x \neq \omega \\ # -|x\rangle|0\rangle|-\rangle \quad \text{for} \; x = \omega \\ # \end{aligned} # $$ # # To adapt our checking circuit into a Grover oracle, we need to guarantee the bits in the second register (`c`) are always returned to the state $|0000\rangle$ after the computation. To do this, we simply repeat the part of the circuit that computes the clauses which guarantees `c0 = c1 = c2 = c3 = 0` after our circuit has run. We call this step _'uncomputation'_. 
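The sign flip at the heart of this oracle is just the identity $X|{-}\rangle = -|{-}\rangle$, which is easy to check numerically; here is a tiny plain-Python sketch that tracks the two amplitudes directly:

```python
import math

# |-> has amplitudes (1/sqrt(2), -1/sqrt(2)) for |0>, |1>;
# the X (bit-flip) gate simply swaps the two amplitudes.
inv_sqrt2 = 1 / math.sqrt(2)
minus = (inv_sqrt2, -inv_sqrt2)
x_minus = (minus[1], minus[0])       # X applied to |->
neg_minus = (-minus[0], -minus[1])   # |-> with a global -1 phase
print(x_minus == neg_minus)  # -> True
```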
# + var_qubits = QuantumRegister(4, name='v') clause_qubits = QuantumRegister(4, name='c') output_qubit = QuantumRegister(1, name='out') cbits = ClassicalRegister(4, name='cbits') qc = QuantumCircuit(var_qubits, clause_qubits, output_qubit, cbits) def sudoku_oracle(qc, clause_list, clause_qubits): # Compute clauses i = 0 for clause in clause_list: XOR(qc, clause[0], clause[1], clause_qubits[i]) i += 1 # Flip 'output' bit if all clauses are satisfied qc.mct(clause_qubits, output_qubit) # Uncompute clauses to reset clause-checking bits to 0 i = 0 for clause in clause_list: XOR(qc, clause[0], clause[1], clause_qubits[i]) i += 1 sudoku_oracle(qc, clause_list, clause_qubits) qc.draw() # - # In summary, the circuit above performs: # # $$ # U_\omega|x\rangle|0\rangle|\text{out}_0\rangle = \Bigg\{ # \begin{aligned} # |x\rangle|0\rangle|\text{out}_0\rangle \quad \text{for} \; x \neq \omega \\ # |x\rangle|0\rangle\otimes X|\text{out}_0\rangle \quad \text{for} \; x = \omega \\ # \end{aligned} # $$ # # and if the initial state of $|\text{out}_0\rangle$ is $|{-}\rangle$: # # $$ # U_\omega|x\rangle|0\rangle|{-}\rangle = \Bigg\{ # \begin{aligned} # \phantom{-}|x\rangle|0\rangle|-\rangle \quad \text{for} \; x \neq \omega \\ # -|x\rangle|0\rangle|-\rangle \quad \text{for} \; x = \omega \\ # \end{aligned} # $$ # ### 4.1.3 The Full Algorithm # # All that's left to do now is to put all these components together.
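Before assembling the circuit, it is worth checking how many Grover iterations to use. With $N = 2^4$ basis states and $M = 2$ solutions, the usual estimate $\lfloor \frac{\pi}{4}\sqrt{N/M} \rfloor$ gives 2, and a plain-Python amplitude simulation (a sketch, independent of Qiskit) confirms that two iterations concentrate almost all of the probability on the solution states:

```python
import math
from itertools import product

clause_list = [[0, 1], [0, 2], [1, 3], [2, 3]]

def satisfies_all(bits):
    return all(bits[a] != bits[b] for a, b in clause_list)

states = list(product([0, 1], repeat=4))
N = len(states)                                          # 16 basis states
M = sum(satisfies_all(s) for s in states)                # 2 solutions
iterations = math.floor(math.pi / 4 * math.sqrt(N / M))  # -> 2

amps = [1 / math.sqrt(N)] * N                            # uniform superposition
for _ in range(iterations):
    # oracle: flip the sign of the solution states
    amps = [-a if satisfies_all(s) else a for s, a in zip(states, amps)]
    # diffuser: reflect every amplitude about the mean
    mean = sum(amps) / N
    amps = [2 * mean - a for a in amps]

p_solutions = sum(a * a for s, a in zip(states, amps) if satisfies_all(s))
print(round(p_solutions, 4))  # -> 0.9453
```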
# + var_qubits = QuantumRegister(4, name='v') clause_qubits = QuantumRegister(4, name='c') output_qubit = QuantumRegister(1, name='out') cbits = ClassicalRegister(4, name='cbits') qc = QuantumCircuit(var_qubits, clause_qubits, output_qubit, cbits) # Initialise 'out0' in state |-> qc.initialize([1, -1]/np.sqrt(2), output_qubit) # Initialise qubits in state |s> qc.h(var_qubits) qc.barrier()  # for visual separation ## First Iteration # Apply our oracle sudoku_oracle(qc, clause_list, clause_qubits) qc.barrier()  # for visual separation # Apply our diffuser qc.append(diffuser(4), [0,1,2,3]) ## Second Iteration sudoku_oracle(qc, clause_list, clause_qubits) qc.barrier()  # for visual separation # Apply our diffuser qc.append(diffuser(4), [0,1,2,3]) # Measure the variable qubits qc.measure(var_qubits, cbits) qc.draw(fold=-1) # - # Simulate and plot results qasm_simulator = Aer.get_backend('qasm_simulator') transpiled_qc = transpile(qc, qasm_simulator) result = qasm_simulator.run(transpiled_qc).result() plot_histogram(result.get_counts()) # There are two bit strings with a much higher probability of measurement than any of the others, `0110` and `1001`. These correspond to the assignments: # ``` # v0 = 0 # v1 = 1 # v2 = 1 # v3 = 0 # ``` # and # ``` # v0 = 1 # v1 = 0 # v2 = 0 # v3 = 1 # ``` # which are the two solutions to our sudoku! The aim of this section is to show how we can create Grover oracles from real problems. While this specific problem is trivial, the process can be applied (given large enough circuits) to any decision problem. To recap, the steps are: # # # &nbsp; # # <!-- ::: q-block.exercise --> # # ### Your turn # # 1. Create a reversible classical circuit that identifies a correct solution # 2. Use phase kickback and uncomputation to turn this circuit into an oracle # 3. Use Grover's algorithm to solve this oracle # # <!-- ::: --> # # + [markdown] gloss={"gt": {"text": "In math, graph theory is the study of graphs.
A graph is made up of nodes or points which are connected by edges.", "title": "graph theory"}} # ## 4.2 The Triangle-finding Problem Using Grover <a id='tri'></a> # # # One of the famous [graph theory](gloss:gt) problems is the [triangle-finding problem](https://en.wikipedia.org/wiki/Triangle-free_graph). In the triangle-finding problem, we are given a graph that may or may not contain a triangle. Our task is to find the triangle(s) within the graph and point out the nodes that form the triangle. For example, the graph below is a 4-node graph with a triangle between nodes `0`, `1`, and `2`. # # # ![image12](images/grover_tri.png) # # # To apply Grover's algorithm to this problem, we give the algorithm a list of _edges_ and the number of nodes in the graph. The algorithm will then do the rest: it will check whether there is a triangle in the graph or not, and if so it will mark the nodes forming that triangle. # # # Now, let's go through the steps of Grover's algorithm and see how we can construct each step to solve the triangle-finding problem. But first, let's define our input, which is the list of edges. # - #Edges list edges =[(0, 1), (0, 2), (1, 2), (2, 3)] #Number of nodes n_nodes = 4 # + [markdown] # ### 4.2.1 The state preparation # # To solve the problem, let's first focus on the example above, the case of finding a triangle in a 4-node graph. To do that, we need to go over all subgraphs within our graph and check whether any of them is a triangle. We will need 4 qubits; each qubit represents a node in the graph. The state of the qubit indicates whether the node is in the subgraph or not. For example, in the graph above, the triangle is between nodes 0, 1, and 2, which we can represent with the state `1110`.
The nodes with state `1` are in the subgraph (triangle) and the node with state `0` is not. # # For the state preparation, we could create a superposition of all possible states `0000` to `1111`, which can be done simply using `4` Hadamard gates. Doing so, we would need to repeat the oracle and diffusion steps `3` times. # But if each `1` in the state represents an active node, we don't really need to search the entire Hilbert space; we only need to search over the subgraphs with three nodes. # # This is a good example of using another type of symmetric state to prepare the search space. Since we only need to consider states with three 1's, we can form our search space another way: one way to create a superposition over only the states with three active nodes is to prepare the W-state and follow it with `4` NOT gates. This decreases the number of iterations needed from `3` to `1`. # # So, we first need to implement the W-state. W-states have the form: # # $$ # # |W\rangle = \frac{1}{\sqrt{n}}(|10\ldots0\rangle + |01\ldots0\rangle + \ldots + |00\ldots1\rangle) # # $$ # # In our case, we need to construct the four-qubit state $|W_{4}\rangle$ as described in reference 6. # + # We use the W-state implementation from reference 6 def control_rotation (qcir,cQbit,tQbit,theta): """ Apply a controlled rotation built from single-qubit u gates and a controlled-NOT Args: qcir: QuantumCircuit instance to apply the controlled rotation to. cQbit: control qubit. tQbit: target qubit. theta: rotation angle (in degrees). Returns: A modified version of the QuantumCircuit instance with the controlled rotation applied. """ theta_dash = math.asin(math.cos(math.radians(theta/2))) qcir.u(theta_dash,0,0,tQbit) qcir.cx(cQbit,tQbit) qcir.u(-theta_dash,0,0,tQbit) return qcir def wn (qcir,qbits): """ Create the W-state using the control-rotation function. Args: qcir: QuantumCircuit instance used to construct the W-state. qbits: the qubits used to construct the W-state. Returns: A modified version of the QuantumCircuit instance with the W-state construction gates. """ for i in range(len(qbits)): if i == 0: qcir.x(qbits[0]) qcir.barrier() else: p = 1/(len(qbits)-(i-1)) theta = math.degrees(math.acos(math.sqrt(p))) theta = 2* theta qcir = control_rotation(qcir,qbits[i-1],qbits[i],theta) qcir.cx(qbits[i],qbits[i-1]) qcir.barrier() return qcir,qbits sub_qbits = QuantumRegister(n_nodes) sub_cir = QuantumCircuit(sub_qbits, name="state_prep") sub_cir, sub_qbits = wn(sub_cir, sub_qbits) sub_cir.x(sub_qbits) stat_prep = sub_cir.to_instruction() inv_stat_prep = sub_cir.inverse().to_instruction() # - # ### 4.2.2 The oracle # # The oracle is what marks the correct answer. In our case, the oracle needs to take every subgraph and count the number of edges in that subgraph. If the number of edges is `3`, then we have a triangle; if not, it proceeds to the next subgraph. # # ![image13](images/grover_tri_oracle.png) # # For every edge in the graph, we will need one or two multi-controlled `NOT` gates. These gates act on two ancillary qubits, which should be in state `11` if a triangle is found. We need two ancillary qubits because counting a triangle's `3` edges requires two bits ($3 = 11_{b}$). The final step in the oracle is one more multi-controlled `NOT` that fires only if a triangle was found, flipping the state of another qubit (call it `tri_flag`) to `1`. # &nbsp; # # <!-- ::: q-block.reminder --> # # ### Reminder # # <details> # <summary>Oracle</summary> # The oracle's job is to verify and mark the correct answer. So, when you construct an oracle, you're basically building a circuit to verify certain conditions.
# </details> # # <!-- ::: --> # + def edge_counter(qc,qubits,anc,flag_qubit,k): # flip flag_qubit when the edge counter reads k (controls on the 1-bits of k) bin_k = bin(k)[2:][::-1] l = [] for i in range(len(bin_k)): if int(bin_k[i]) == 1: l.append(qubits[i]) qc.mct(l,flag_qubit,[anc]) def oracle(n_nodes, edges, qc, nodes_qubits, edge_anc, ancilla, neg_base): k = 3 #k is the number of edges; in the case of a triangle, it's 3 #1- Edge counter (forward circuit) qc.barrier() qc.ccx(nodes_qubits[edges[0][0]],nodes_qubits[edges[0][1]],edge_anc[0]) for i in range(1,len(edges)): qc.mct([nodes_qubits[edges[i][0]],nodes_qubits[edges[i][1]],edge_anc[0]], edge_anc[1], [ancilla[0]]) qc.ccx(nodes_qubits[edges[i][0]],nodes_qubits[edges[i][1]],edge_anc[0]) #---------------------------------------------------------------------------------------------------------- #2- Edge-count check qubit edg_k = int((k/2)*(k-1)) edge_counter(qc,edge_anc,ancilla[0],neg_base[0],edg_k) #---------------------------------------------------------------------------------------------------------- #3- Reverse (uncompute) edge count for i in range(len(edges)-1,0,-1): qc.ccx(nodes_qubits[edges[i][0]],nodes_qubits[edges[i][1]],edge_anc[0]) qc.mct([nodes_qubits[edges[i][0]],nodes_qubits[edges[i][1]],edge_anc[0]], edge_anc[1], [ancilla[0]]) qc.ccx(nodes_qubits[edges[0][0]],nodes_qubits[edges[0][1]],edge_anc[0]) qc.barrier() # - # ### 4.2.3 The diffusion operator # # As we said before, the construction of the diffusion operator depends on the type of state preparation we used, in this case the W-state. So, we need the inverse W-state preparation, a multi-controlled Z gate, and the original W-state preparation to form the diffusion operator.
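Before building the diffuser, a quick numeric check of the claim from section 4.2.1: with plain Hadamard preparation the search space is $2^4 = 16$ states, while the W-state-plus-NOT preparation restricts it to the $\binom{4}{3} = 4$ states with three active nodes, dropping the iteration estimate from `3` to `1` (assuming a single marked triangle):

```python
import math

# Iteration estimate floor(pi/4 * sqrt(search-space size)) for one solution.
n_nodes = 4
iters_h = math.floor(math.pi / 4 * math.sqrt(2 ** n_nodes))           # -> 3
iters_w = math.floor(math.pi / 4 * math.sqrt(math.comb(n_nodes, 3)))  # -> 1
print(iters_h, iters_w)
```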
def cnz(qc, num_control, node, anc): """Construct a multi-controlled Z gate Args: qc : QuantumCircuit to apply the gate to num_control : number of control qubits of the cnz gate node : node qubits anc : ancillary qubits """ if num_control>2: qc.ccx(node[0], node[1], anc[0]) for i in range(num_control-2): qc.ccx(node[i+2], anc[i], anc[i+1]) qc.cz(anc[num_control-2], node[num_control]) for i in range(num_control-2)[::-1]: qc.ccx(node[i+2], anc[i], anc[i+1]) qc.ccx(node[0], node[1], anc[0]) if num_control==2: qc.h(node[2]) qc.ccx(node[0], node[1], node[2]) qc.h(node[2]) if num_control==1: qc.cz(node[0], node[1]) def grover_diff(qc, nodes_qubits,edge_anc,ancilla,stat_prep,inv_stat_prep): qc.append(inv_stat_prep,qargs=nodes_qubits) qc.x(nodes_qubits) #==================================================== #3 control qubits Z gate cnz(qc,len(nodes_qubits)-1,nodes_qubits[::-1],ancilla) #==================================================== qc.x(nodes_qubits) qc.append(stat_prep,qargs=nodes_qubits) # ### 4.2.4 Putting it all together # # Now that we have all the components of the algorithm built and running, we can put them together. # Grover algo function def grover(n_nodes,stat_prep,inv_stat_prep): #N = 2**n_nodes # for optimal iterations count if the state prep is done using only H gates. N = math.comb(n_nodes, 3) #Since we are using W-state to perform initial preparation.
nodes_qubits = QuantumRegister(n_nodes, name='nodes') edge_anc = QuantumRegister(2, name='edge_anc') ancilla = QuantumRegister(n_nodes-2, name = 'cccx_diff_anc') neg_base = QuantumRegister(1, name='check_qubits') class_bits = ClassicalRegister(n_nodes, name='class_reg') tri_flag = ClassicalRegister(3, name='tri_flag') qc = QuantumCircuit(nodes_qubits, edge_anc, ancilla, neg_base, class_bits, tri_flag) # Initialize quantum flag qubit in |-> state qc.x(neg_base[0]) qc.h(neg_base[0]) # Initialize input qubits in superposition qc.append(stat_prep,qargs=nodes_qubits) qc.barrier() # Calculate iteration count iterations = math.floor(math.pi/4*math.sqrt(N)) for i in np.arange(iterations): qc.barrier() oracle(n_nodes, edges, qc, nodes_qubits, edge_anc, ancilla, neg_base) qc.barrier() grover_diff(qc, nodes_qubits,edge_anc,ancilla,stat_prep,inv_stat_prep) qc.measure(nodes_qubits,class_bits) return qc # Now, let's run the code and plot the histogram to see if our algorithm works as expected. qc = grover(n_nodes,stat_prep,inv_stat_prep) qc.draw() # Simulate and plot results qasm_simulator = Aer.get_backend('qasm_simulator') #transpiled_qc = transpile(qc, qasm_simulator) # Execute circuit and show results ex = execute(qc, qasm_simulator, shots = 5000) res = ex.result().get_counts(qc) plot_histogram(res) # <!-- ::: q-block.exercise --> # # ### Your turn # # Can you extend this problem to find a triangle in a graph of any size? # # Try in [IBM Quantum Lab](https://quantum-computing.ibm.com/lab) # # <!-- ::: --> # # ## 5. References <a id='references'></a> # # 1. <NAME> (1996), "A fast quantum mechanical algorithm for database search", Proceedings of the 28th Annual ACM Symposium on the Theory of Computing (STOC 1996), [doi:10.1145/237814.237866](http://doi.acm.org/10.1145/237814.237866), [arXiv:quant-ph/9605043](https://arxiv.org/abs/quant-ph/9605043) # 2.
<NAME>, <NAME>, <NAME>, <NAME>, <NAME> & <NAME> (2017), "Complete 3-Qubit Grover search on a programmable quantum computer", Nature Communications, Vol 8, Art 1918, [doi:10.1038/s41467-017-01904-7](https://doi.org/10.1038/s41467-017-01904-7), [arXiv:1703.10535 ](https://arxiv.org/abs/1703.10535) # 3. <NAME> & <NAME>, "Quantum Computation and Quantum Information", Cambridge: Cambridge University Press, 2000. # 4. <NAME>., <NAME>., <NAME>., & <NAME>. (2021). Entangled symmetric states and copositive matrices. Quantum, 5, 561. # 5. <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>, “Experimental realization of dicke states of up to six qubits for multiparty quantum networking,” Physical Review Letters, vol. 103, no. 2, Jul 2009. # 6. <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> et al., “Efficient quantum algorithms for ghz and w states, and implementation on the ibm quantum computer,” Advanced Quantum Technologies, vol. 2, no. 5-6, p.1900015, 2019. https://doi.org/10.1002/qute.201900015 # 7. <NAME>., <NAME>., & <NAME>. (2007). Quantum algorithms for the triangle problem. SIAM Journal on Computing, 37(2), 413-424. import qiskit.tools.jupyter # %qiskit_version_table
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="o5Dr14gaiaOp" # # IEOR 4157 Fall 2020 Final Report # - <NAME> (uni: bh2569) # - <NAME> (uni: rq217) # - <NAME> (uni: sy2938) # + [markdown] id="PmfR3rJJjDLf" # ## The Environments # # Environment setup # + colab={"base_uri": "https://localhost:8080/"} id="2k-RS4U2TDML" outputId="ab9ff1cb-8f10-4bce-9fb9-160ce38da32a" import os repo_name = 'final-project-qrdecomposition_final' data_path = '../downloads' if not os.path.isdir(data_path): os.mkdir(data_path) # + colab={"base_uri": "https://localhost:8080/"} id="J1bOv93r38ve" outputId="3e9f464a-a1be-485d-c7b7-28ebe0d2bc9f" #sanity check for cuda import torch from torch import nn use_cuda = torch.cuda.is_available() device = torch.device("cuda" if use_cuda else "cpu") print(device, torch.__version__) # + [markdown] id="kY2BaAjx45ot" # ## Download Movielens-latest # + id="S9zxt9I94YHy" # import requests, zipfile, io # url = "http://files.grouplens.org/datasets/movielens/ml-latest.zip" # r = requests.get(url) # with zipfile.ZipFile(io.BytesIO(r.content)) as zf: # for zip_info in zf.infolist(): # if zip_info.filename[-1] == '/': # continue # zip_info.filename = os.path.basename(zip_info.filename) # zf.extract(zip_info, data_path) # - movie_info_path = '../data/movies.csv' # !cp $movie_info_path $data_path # + colab={"base_uri": "https://localhost:8080/"} id="5wUZE7TP5lo6" outputId="0ce64539-fa96-4538-8dcc-86d9e117ebb0" #sanity check for downloaded files # !ls $data_path # + [markdown] id="cMy27vgN3Lub" # ### Import Libraries # + id="BmpezFnrTGxg" ###utilities from tqdm import tqdm import time import warnings warnings.filterwarnings("ignore") ###pyspark dependencies from pyspark.sql import SparkSession import pyspark.ml as M import pyspark.sql.functions as F import pyspark.sql.window as W
import pyspark.sql.types as T from pyspark.ml.recommendation import ALS ###numpy,scipy,pandas,sklearn stacks from scipy import sparse import pandas as pd import numpy as np from sklearn.feature_extraction.text import CountVectorizer from sklearn.compose import ColumnTransformer from sklearn.preprocessing import FunctionTransformer from sklearn.pipeline import Pipeline ###torch stacks import torch from torch import nn from pytorch_widedeep.preprocessing import DensePreprocessor from pytorch_widedeep.callbacks import ( LRHistory, EarlyStopping, ModelCheckpoint, ) from pytorch_widedeep.optim import RAdam from pytorch_widedeep.initializers import XavierNormal, KaimingNormal from pytorch_widedeep.models import Wide, DeepDense, WideDeep # + [markdown] id="wwIdrFTI3PtE" # ### Initiate Spark Session # + id="Yp9DPFjCqHd8" # os.environ["JAVA_HOME"] = "/datasets/home/65/965/yux164/.jdk/jdk-11.0.9.1+1" #for java path import psutil from pyspark.sql import SparkSession from pyspark import SparkContext, SparkConf NUM_WORKER = psutil.cpu_count(logical = False) NUM_THREAD = psutil.cpu_count(logical = True) def spark_session(): """[function for creating a spark session] Returns: [Spark Session]: [the spark session] """ conf_spark = SparkConf().set("spark.driver.host", "127.0.0.1")\ .set("spark.executor.instances", NUM_WORKER)\ .set("spark.executor.cores", int(NUM_THREAD / NUM_WORKER))\ .set("spark.executor.memory", '4g')\ .set("spark.sql.shuffle.partitions", NUM_THREAD) sc = SparkContext(conf = conf_spark) sc.setLogLevel('ERROR') spark = SparkSession(sc) print('Spark UI address {}'.format(spark.sparkContext.uiWebUrl)) return spark spark = spark_session() # + [markdown] id="yIfnVDADoxWj" # ## The Objective # + [markdown] id="twB3Ovyd1KlU" # ### Business Objective # # Our team's business focuses on providing general users with movies that they like. So many movies come out every year that no one can watch them all.
Our business goal is to provide personalized movie recommendations that fit each user's taste. Once users find that our service fits their taste, we can offer even more novel movies, shows, or TV series through a subscription to our website. We hope our technology will ultimately benefit each individual and push the entertainment industry forward. # + [markdown] id="z7EAItWk1L3x" # ### Intended users # The recommendation system is created for the general audience, so that everyone who enjoys movies benefits from our website. # + [markdown] id="fc65s6TD1OlN" # ### Business rules # In order to keep users entertained rather than only focusing on what they already know, one business rule we came up with is to include at least two different genres when recommending k movies, even if the predicted ratings might be low. We would love our users to explore things that they really want to try but haven't had the chance to try yet. Compared to other recommendation systems, the advantage of ours is that we aim not only for accuracy but also for the spirit of exploration and curiosity. # + [markdown] id="m1urqXVZ1UAm" # ### Performance requirements # For performance, we would like to serve online queries in real time. For model-based algorithms, it is fine to train the model offline and then serve it online. We will update our database regularly so that our model stays current. For this homework, we did not expand our scope to real-time serving; everything we did was in an offline setting. # + [markdown] id="Uqvg-vcX1V35" # ### Interpretability # In order to interpret the models better and also to serve our subscribed users better (getting to know their behaviours and interests more), we decided to constrain the matrix factorization algorithm to produce only non-negative matrices.
In that case, we would be able to identify some elements that are important for the algorithm to learn users' behaviours (higher values in the matrices would produce higher ratings). For the more sophisticated model (wide and deep), if possible later on, we want to study and understand users' behaviours through the embeddings learned by the neural network. # + [markdown] id="Fk_4OlS-pMTC" # ## The Data # # + [markdown] id="KB-fqhU0re-Q" # ### Sample # # We will first test our model on the sample of Movielens-ml-latest from homework 2. # # **sampling methodology** # # We perform conditional matrix sampling: we sample a matrix of $M$ user indices and $N$ movie indices, filtering out users who do not have at least $i$ ratings and movies which do not have at least $j$ ratings. If the numbers of users and movies do not meet the minimal requirements $M$ and $N$, we repeat the sampling process with an increased number of matrix indices for both users and movies until both requirements are met. # # In our case, we choose M = 20000, N = 2000, i = 100, j = 1000: 20000 users and 2000 movies, where a user must have rated at least 100 movies and a movie must have been rated at least 1000 times. We choose a denser matrix than in homework 2 because we need a ground truth of recommendations when we evaluate our model. That is, if the base model selects 50 items to recommend, then in our test set each user should on average have rated 50 items or more, so that we can evaluate our model on the test set.
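The two threshold filters inside the sampling procedure can be illustrated on toy data (hypothetical ratings, pure Python; the Spark implementation below does the same at scale):

```python
from collections import Counter

# Toy illustration of the two filtering passes: keep movies with at least
# item_threshold ratings, then keep users with at least user_threshold
# ratings among the surviving movies.
ratings = [(1, 10), (1, 20), (1, 30), (2, 10), (2, 20), (3, 10)]  # (userId, movieId)
user_threshold, item_threshold = 2, 2

movie_counts = Counter(m for _, m in ratings)                 # movie 30 has only 1 rating
kept = [(u, m) for u, m in ratings if movie_counts[m] >= item_threshold]
user_counts = Counter(u for u, _ in kept)                     # user 3 has only 1 left
kept = [(u, m) for u, m in kept if user_counts[u] >= user_threshold]
print(kept)  # -> [(1, 10), (1, 20), (2, 10), (2, 20)]
```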
# + id="q75WRyZSdpRg" #running this cell takes several minutes def sampling(ratings, num_user, num_item, user_threshold, item_threshold, random_seed, userCol='userId', itemCol='movieId', timeCol = 'timestamp', targetCol='rating'): """[method for generating a sample from a BIG dataset] Args: ratings (Pyspark DataFrame): [the BIG dataset] num_user (int): [the number of users the sample needs to have] num_item (int): [the number of items the sample needs to have] user_threshold (int): [the number of ratings a user needs to have] item_threshold (int): [the number of ratings a movie needs to have] random_seed (int): [random seed of the random sample] userCol (str, optional): [user column name]. Defaults to 'userId'. itemCol (str, optional): [item column name]. Defaults to 'movieId'. timeCol (str, optional): [timestamp column name]. Defaults to 'timestamp'. targetCol (str, optional): [rating/target column name]. Defaults to 'rating'. Returns: Pyspark DataFrame: [the sample] """ n_users, n_items = 0, 0 M, N = num_item, num_user while n_users < num_user or n_items < num_item: # keep sampling until both requirements are met movieid_filter = ratings.groupby(itemCol)\ .agg(F.count(userCol)\ .alias('cnt'))\ .where(F.col('cnt') >= item_threshold)\ .select(itemCol)\ .orderBy(F.rand(seed=random_seed))\ .limit(M) sample = ratings.join(movieid_filter, ratings[itemCol] == movieid_filter[itemCol])\ .select(ratings[userCol], ratings[itemCol], ratings[timeCol], ratings[targetCol]) userid_filter = sample.groupby(userCol)\ .agg(F.count(itemCol)\ .alias('cnt'))\ .where(F.col('cnt') >= user_threshold)\ .select(userCol)\ .orderBy(F.rand(seed=random_seed))\ .limit(N) sample = sample.join(userid_filter, ratings[userCol] == userid_filter[userCol])\ .select(ratings[userCol], ratings[itemCol], ratings[timeCol], ratings[targetCol]).persist() n_users, n_items = sample.select(userCol).distinct().count(), sample.select(itemCol).distinct().count() print(f'sample has {n_users} users and {n_items} items') M += 100 N += 100 return sample # - # how we
generate our sample # # ```python # num_user = 20000 # num_movie = 2000 # user_threshold = 100 # item_threshold = 1000 # random_seed = 0 # ratings = spark.read.csv(os.path.join(data_path,'ratings.csv'), header=True) # sample = sampling(ratings,num_user, num_movie, user_threshold, item_threshold, random_seed) # # save sample data to '/data/sample.csv' # sample = sample.persist() # sample.toPandas().to_csv(os.path.join(data_path, 'sample.csv'), index = False) # ``` # + #load sample from local path compressed_sample_path = '../data/sample.tar.gz' # !tar -xzvf $compressed_sample_path -C $data_path # !ls $data_path sample_path = os.path.join(data_path, 'samples', 'sample.csv') sample = spark.read.csv(sample_path, header=True).select('userId', 'movieId', 'rating').persist() sample_df = pd.read_csv(sample_path).drop('timestamp', axis = 1) # + id="X72p-AB4DUtW" #sanity check for sample sample.show(10) # + [markdown] id="1FtoZYiJraho" # #### sample overview # # # + colab={"base_uri": "https://localhost:8080/"} id="j-24Vb_rq3X7" outputId="8d91de43-c282-4ab1-8bda-91f1c8623f31" print(f''' number of data points in the sample: {sample.count()}, number of unique users in the sample: {sample.select('userId').distinct().count()}, number of unique movies in the sample: {sample.select('movieId').distinct().count()}, average number of movies a user rated:{sample.groupby('userId').agg(F.count('movieId').alias('cnt')).select(F.mean('cnt')).collect()[0][0]:.2f}, average number of ratings a movie received: {sample.groupby('movieId').agg(F.count('userId').alias('cnt')).select(F.mean('cnt')).collect()[0][0]:.2f}, average rating: {sample.select(F.mean('rating')).collect()[0][0]:.2f}, standard deviation of rating: {sample.select(F.stddev('rating')).collect()[0][0]:.2f}, average rating by user: {sample.groupby('userId').agg(F.mean('rating').alias('rating')).select(F.mean('rating')).collect()[0][0]:.2f}, standard deviation of rating by user mean: 
{sample.groupby('userId').agg(F.mean('rating').alias('rating')).select(F.stddev('rating')).collect()[0][0]:.2f}, average rating by movie: {sample.groupby('movieId').agg(F.mean('rating').alias('rating')).select(F.mean('rating')).collect()[0][0]:.2f}, standard deviation of rating by movie mean: {sample.groupby('movieId').agg(F.mean('rating').alias('rating')).select(F.stddev('rating')).collect()[0][0]:.2f} ''') # + [markdown] id="L8UBpsFnjQq4" # ## The Evaluation # + [markdown] id="7Rm4PyxhlEsn" # ### Metrics # # + [markdown] id="W2eD8HHPm4wN" # #### Root Mean Square Error (RMSE) # $RMSE = \sqrt{\frac{\sum_{i=1}^{n}(\hat{y}_i-y_i)^2}{n}}$. # # RMSE measures, on average, how far our rating predictions are from the real ratings. One of our strategies is to train our models to reduce this distance as much as possible using a loss very similar to RMSE, the mean squared error. RMSE is better for presentation purposes because it has the same unit as our original target. # + id="XKdCE47JnVV4" def rmse(with_pred_df, rating_col_name = "rating", pred_col_name = "prediction"): """[calculate rmse of the prediction] Args: with_pred_df (Pyspark DataFrame): [Pyspark DataFrame with target and prediction columns] rating_col_name (str, optional): [column of true values]. Defaults to "rating". pred_col_name (str, optional): [column of prediction values]. Defaults to "prediction". Returns: float: [rmse] """ return with_pred_df.select(F.sqrt(F.sum((F.col(rating_col_name) - \ F.col(pred_col_name))**2)/F.count(rating_col_name))).collect()[0][0] from sklearn.metrics import mean_squared_error def rmse_numpy(true, pred): return np.sqrt(mean_squared_error(true, pred)) # + [markdown] id="8VenF4GrnBf4" # #### Accuracy # # We define a rating greater than or equal to 3 as good and below 3 as bad. Accuracy is the percentage of predictions that agree with the true ratings on this good/bad split.
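As a toy illustration of this good/bad thresholding (hypothetical ratings, plain Python rather than our Spark pipeline):

```python
# (true rating, predicted rating) pairs; a rating >= 3 counts as good
pairs = [(4.0, 3.5), (2.5, 3.0), (3.0, 2.0), (1.0, 1.5)]
agreements = [(t >= 3) == (p >= 3) for t, p in pairs]
accuracy = sum(agreements) / len(agreements)
print(accuracy)  # -> 0.5 (the first and last pairs agree on the split)
```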
# + id="ftFnkjfSnWUG"
def acc(with_pred_df, rating_col_name = "rating", pred_col_name = "prediction"):
    """[calculate accuracy of the prediction]

    Args:
        with_pred_df (Pyspark DataFrame): [Pyspark DataFrame with target and prediction columns]
        rating_col_name (str, optional): [column of true values]. Defaults to "rating".
        pred_col_name (str, optional): [column of prediction values]. Defaults to "prediction".

    Returns:
        float: [accuracy]
    """
    TP = ((F.col(rating_col_name) >= 3) & (F.col(pred_col_name) >= 3))
    TN = ((F.col(rating_col_name) < 3) & (F.col(pred_col_name) < 3))
    correct = with_pred_df.filter(TP | TN)
    return correct.count() / with_pred_df.count()

from sklearn.metrics import accuracy_score
def acc_numpy(true, pred):
    return accuracy_score((true >= 3), (pred >= 3))

# + [markdown] id="BlL2PgC23ocb"
# #### Recall
#
# We adopt `Recall` as the metric for choosing our base model. This is another strategy that differentiates our system, and it serves our business goals directly: optimizing recall gives users a better experience by having the model surface more of the recommendations they truly like.
#
# The recall is the ratio `tp / (tp + fn)`, where `tp` is the number of true positives and `fn` the number of false negatives. Recall is intuitively the ability of the classifier to find all the positive samples. In our case, ratings of 3 or higher are positive instances and ratings below 3 are negative instances.
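As with accuracy, the `tp / (tp + fn)` computation can be traced on plain NumPy arrays before moving to Spark. The ratings below are hypothetical, chosen only to exercise both a true positive and a false negative.

```python
import numpy as np

# Hypothetical ratings for illustration only.
true_ratings = np.array([4.0, 3.5, 3.0, 2.0, 5.0, 1.0])
pred_ratings = np.array([3.0, 2.0, 4.0, 3.5, 4.5, 1.0])

actually_good  = true_ratings >= 3   # positive instances (user liked the movie)
predicted_good = pred_ratings >= 3   # model predicts "good"

TP = int(np.sum(actually_good & predicted_good))   # good movies the model also rated >= 3
FN = int(np.sum(actually_good & ~predicted_good))  # good movies the model missed
recall_value = TP / (TP + FN)
print(TP, FN, recall_value)  # 3 1 0.75
```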
# + id="q0Wt5WbL4JdP"
def recall(with_pred_df, rating_col_name = "rating", pred_col_name = "prediction"):
    TP = with_pred_df.filter((F.col(rating_col_name) >= 3) & (F.col(pred_col_name) >= 3)).count()
    FN = with_pred_df.filter((F.col(rating_col_name) >= 3) & (F.col(pred_col_name) < 3)).count()
    return TP / (TP + FN)

from sklearn.metrics import recall_score
def recall_numpy(true, pred):
    return recall_score((true >= 3), (pred >= 3))

# + [markdown] id="rCtwRTT4nMGL"
# #### ROC curve and AUC
#
# ROC plots the true positive rate against the false positive rate. Beyond recall itself, it shows how recall and the false-alarm rate (here, recommending movies the model believes are good but users rate as bad) move together as the decision threshold varies.
#
# AUC is the area under the ROC curve, which gives us a single scalar value to quantify the curve.

# + id="vn6o8FWenafu"
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
from seaborn import set_style, set_palette

def ROC(pred, truth):
    """
    given prediction and groundtruth labels, computes false positive rate and true positive rate
    """
    fpr, tpr, threshold = roc_curve(truth, pred)
    if auc(fpr, tpr) < 0.5:
        # scores rank inversely: flip their sign so the curve sits above the diagonal
        fpr, tpr, threshold = roc_curve(truth, -np.asarray(pred))
    return fpr, tpr

def _plot_ROC(auc_dict: dict):
    """
    plot ROC curves for the models in the provided dictionary
    @param auc_dict: a dictionary containing names of the models and their
                     corresponding false positive rates and true positive rates
    """
    # style setup to match with the rest of the report
    set_style("darkgrid")
    set_palette("deep")
    for k in auc_dict.keys():
        fpr, tpr = auc_dict[k]
        plt.plot(fpr, tpr, lw=2.5, label="{}, AUC= {:.1f}%".format(k, auc(fpr, tpr)*100))
    plt.ylim(0,1)
    plt.xlim(0,1)
    plt.grid(True)
    plt.legend(loc='upper left')
    plt.plot([0,1],[0.001,1],'r--')
    plt.tight_layout()

def plot_ROC_numpy(true, preds, model_names):
    plt.figure()
    true_binary = true >= 3
    for pred, model_name in zip(preds, model_names):
        _plot_ROC({model_name: ROC(pred, true_binary)})
    plt.show()

# + [markdown] id="BpK2J6uInPld"
# #### NDCG
#
# Normalized Discounted Cumulative Gain is calculated as $NDCG = \frac{DCG}{IDCG}$, where $DCG = \frac{1}{m}\sum_{u=1}^{m} \sum_{j \in I_u} \frac{2^{rel_{uj}} - 1}{\log_2(v_j+1)}$ and $IDCG$ is the ideal DCG.
#
# In short, NDCG measures the quality of the k recommended movies for a user as a whole; it is a ranking-quality measure. Unlike the other metrics, it reflects not only whether individual predictions are right but also where the recommended movies sit in the user's preference ordering: if the recommended movies are near the top of a user's list, the recommendation is good.

# + id="SfGhCuI8nbmf"
from sklearn.metrics import ndcg_score

# + [markdown] id="82wOrRgqoBoX"
# ### Train Test Split
#
# We perform the train/test split based on every user's activities:
# - train, test : $75\%, 25\%$
#
# We only use the 75/25 split, since the scalability of the base models was already shown in hw2.

# + id="GqHSeADIf02_"
def train_test_split(ratings, split, usercol='userId', itemcol='movieId',
                     timecol='timestamp', targetcol='rating'):
    """[function to make train test split with respect to user activities]

    Args:
        ratings (Pyspark DataFrame): [the rating DataFrame to be split]
        split (float): [proportion of training set]
        usercol (str, optional): [user column name]. Defaults to 'userId'.
        itemcol (str, optional): [item column name]. Defaults to 'movieId'.
        timecol (str, optional): [timestamp column name]. Defaults to 'timestamp'.
        targetcol (str, optional): [rating/target column name]. Defaults to 'rating'.
    Returns:
        [Pyspark DataFrame, Pyspark DataFrame]: [train set and test set, split per user]
    """
    window = W.Window.partitionBy(ratings[usercol]).orderBy(ratings[timecol].desc())
    ranked = ratings.select('*', F.rank().over(window).alias('rank'))
    rating_count = ratings.groupby(usercol).agg(F.count(itemcol).alias('cnt'))
    ranked = ranked.join(rating_count, ranked.userId == rating_count.userId)\
                   .select(ranked[usercol], ranked[itemcol], ranked[targetcol], ranked.rank, rating_count.cnt)
    ranked = ranked.withColumn('position', 1 - F.col('rank')/F.col('cnt'))\
                   .select(usercol, itemcol, targetcol, 'position')
    train = ranked.where(ranked.position < split).select(usercol, itemcol, targetcol)
    test = ranked.where(ranked.position >= split).select(usercol, itemcol, targetcol)
    return train, test
# -

# how we split the data
#
# ```python
# sample_train, sample_test = train_test_split(sample, .75)
# sample_train, sample_test = sample_train.persist(), sample_test.persist()
# # save to 'data/'
# sample_train.toPandas().to_csv(os.path.join('../data', 'sample_train.csv'), index = False)
# sample_test.toPandas().to_csv(os.path.join('../data', 'sample_test.csv'), index = False)
# ```

# +
# load from local files
sample_train_path = os.path.join(data_path, 'samples', 'sample_train.csv')
sample_test_path = os.path.join(data_path, 'samples', 'sample_test.csv')
movie_path = os.path.join(data_path, 'movies.csv')

sample_train = spark.read.csv(sample_train_path, header=True)
sample_test = spark.read.csv(sample_test_path, header=True)
sample_train_df = pd.read_csv(sample_train_path)
sample_test_df = pd.read_csv(sample_test_path)
movies = spark.read.csv(movie_path, header=True)
movies_df = pd.read_csv(movie_path)

sample_df = sample_df.merge(movies_df)
sample_train_df, sample_test_df = sample_train_df.merge(movies_df), sample_test_df.merge(movies_df)

# + colab={"base_uri": "https://localhost:8080/"} id="w9yRMzjaFRw3" outputId="792d685b-03a6-4d03-e149-0ea8d09ed28b"
print(sample.count(), sample_train.count(), sample_test.count())
print(sample_df.shape, sample_train_df.shape, sample_test_df.shape) # + [markdown] id="_iGjEqpPsaHV" # ## The Model # # In our project, we choose Architechture A. The folloing image indicates our detailed pipeline of Model. # + [markdown] id="jEPdF6obxuKj" # ![pipeline.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAB1gAAAIFCAYAAAB/IS+3AACAAElEQVR42uzd348kZ3no8Vzkzte+zz/AlcE/8A85sgCZKJGseAlhd2MiWT5Ywo6CIRHEOLZiZBH/CJYAQwBFtoyNDhxpIUKJwoUvogQFRbJygY4EymW0d1xyLhJpzjxNP80z77xVXT3TPdNT8/lKX+1uT3VVdU1vzbv93bfqN34DAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAM6MK1eu3Hr16tW7/+iP/uhjJLmJce6Ic4gzKQAAxvckje8BAIDxPUnj+9lz/fr1Ww4P8FuHHpDkKX0rzinOrAAAGN+TNL4HAADG9ySN72fJ8n+83PSmIrlFb8a5xRkWAADje5LG9wAAwPiepPH9rFj+z5djJ+fnn3+eJDeyd5L2P2EAANiP8f1f/MVfkORGGt8DALC/4/tPfepTJLmRxvdbpr2swLe+9a2Dn/zkJwf//u//TpIbGeeOOIe0lxtwpgUA4PzG96+88srBjRs3Dn7wgx+Q5EbGuSPOIcb3AADsz/j+2WefPfja17528PWvf50kNzLOHXEOMb7fAnEz2zauikQkT2sbWd04GwCA8xnfRxgRiUie1jayGt8DAHA+4/sIIyIRydPaRlbj+xOwvHb76iCauUpyWzNZ67nFtdwBADif8b2ZqyS3NZPV+B4AgPMf35u5SnJbM1mN70/J4YH7WL3nqjBEclvWe7Jeu3btk864AACc7fg+7p8oDJHclvWerMb3AACc/fg+7p8oDJHclvWerMb3pzxBC6wkd3iZYNdxBwDgjMf3AivJHV4m2PgeAIAzHt8LrCR3eJlg4/vTnKAFVpLb9Nvf/rYTNAAA5zi+F1hJbtMvf/nLxvcAAJzj+F5gJblN6xUoje9PeYIWWEkKrAAAzGd8L7CSFFgBAJjP+F5gJSmw7ukJWmAlKbACADCf8b3ASlJgBQBgPuN7gZWkwLqnJ2iBlaTACgDAfMb3AitJgRUAgPmM7wVWkgLrnp6gBVaSAisAAPMZ3wusJAVWAADmM74XWEkKrHt6ghZYSQqsAADMZ3wvsJIUWAEAmM/4XmAlKbDu6QlaYCUpsAIAMJ/xvcBKUmAFAGA+43uBlaTAuqcnaIGVpMAKAMB8xvcCK0mBFQCA+YzvBVaSAuuenqAFVpI
CKwAA8xnfC6wkBVYAAOYzvhdYSQqse3qCFlhJCqwAAMxnfC+wkhRYAQCYz/heYCUpsO7pCVpgJSmwAgAwn/G9wEpSYAUAYD7je4GVpMC6pydogZWkwAoAwHzG9wIrSYEVAID5jO8FVpIC656eoAVWkgIrAADzGd8LrCQFVgAA5jO+F1hJCqx7eoIWWEkKrAAAzGd8L7CSFFgBAJjP+F5gJSmw7ukJWmAlKbACADCf8b3ASlJgBQBgPuN7gZWkwLqnJ2iBlaTACgDAfMb3AitJgRUAgPmM7wVWkgLrnp6gBVaSAisAAPMZ3wusJAVWAADmM74XWEkKrHt6ghZYSQqsAADMZ3wvsJIUWAEAmM/4XmAlKbDu6QlaYCUpsAIAMJ/xvcBKUmAFAGA+43uBlaTAuqcnaIGVpMAKAMB8xvcCK0mBFQCA+YzvBVaSAuuenqAFVpICKwAA8xnfC6wkBVYAAOYzvhdYSQqse3qCFlhJCqwAAMxnfC+wkhRYAQCYz/heYCUpsO7pCVpgJSmwAgAwn/G9wEpSYAUAYD7je4GVpMC6pydogZWkwAoAwHzG9wIrSYEVAID5jO8FVpIC656eoAVWkgIrAADzGd8LrCQFVgAA5jO+F1hJCqx7eoIWWEkKrAAAzGd8L7CSFFgBAJjP+F5gJSmw7ukJWmAlKbACADCf8b3ASlJgBQBgPuN7gZWkwLqnJ2iBlaTACgDAfMb3AitJgRUAgPmM7wVWkgLrnp6gBVaSAisAAPMZ3wusJAVWAADmM74XWEkKrHt6ghZYSQqsAADMZ3wvsJIUWAEAmM/4XmAlKbDu6QlaYCUpsAIAMJ/xvcBKUmAFAGA+43uBlaTAuqcnaIGVpMCKOXPlypVbr169evfyZx+5VeO9Fe8xf9MgsJIUWAHjexrfAwIrSYFVYCVJgRUXmuvXr98S78HyfiR36VvxnvM3DwIrSYEVML6n8T0gsJIUWAVWkhRYceFY/o/2mz4U4Bl7M957/gZCYCUpsALG9zS+BwRWkgKrwEqSAisuDMv/2X7sw5eH/vyvya3b+xDG/3SHwEpSYAV2P77/6p88Tm5d43sIrCQFVgisJAVWXNafcUcuG/bgC28fvP/Nnx+8/63/JLfv4Xsr3mPt5cT8TYTASlJgBXYzvv+nz/3pwf+88pcHB3/zLLl1470V7zHjewisJAVWCKwkBVZcGq5cuXLrsbgqAvIMbCNrvBf9jYTASlJgBbY7vo/wJQLyLGwjq/E9BFaSAisEVpICK2bL8t5Mq/egmas8y5ms9b3nXk0QWEkKrMD2x/dmrvIsZ7Ia30NgJSmwQmAlKbDi0v18i/tjCn88S+s9Wa9du/ZJfyMhsJIUWIHt/XyL+2MKfzxL6z1Zje8hsJIUWCGwkhRYIbCSu79MsPMfBFaSAisgsHI+lwl2/oPASlJghcBKUmCFwEpu2w++dMP5DwIrSYEVEFg5E//18592/oPASlJghcBKUmCFwEoKrBBYSVJghcBKCqwQWEkKrH6+CawkBVZAYKXACuc/gZWkwAqBlRRYIbCSpMAqsJIUWAGBlQIrILCSFFjh55vASoEVAitJCqwCK0mBFRBYKbACAitJgRUQWCmwAgIrSYFVYCVJgRUCKwVW5z8IrCQFVkBgpcAKCKwkBVaBlSQFVgispMAKgZWkwAoIrBRYAYGVpMDqBC2wkhRYIbCSAisEVpIUWCGwkgIrBFaSAisEVpICKwRW0Y8CKwRWkhRYIbCSAisEVpIUWAVWkgIrILBSYAUEVpICKwRWUmCFwEqSAqvASlJgBQRWCqyAwEpSYAUEVgqscP4TWEkKrAIrSYEVEFgpsAICK0mBFRBYKbACAitJgVVgJUmBFQIrKbBCYCUpsAICKwVWQGAlKbAKrCQpsEJgJQVWCKwkBVaBFQIrBVbnPwisJAVWCKwkBVYIrKTACoGVJAVWCKykwAqBlaTACoGVpMAKP98EVgqscP4TWEkKrBBYSYEVAitJCqw
CK0mBFRBYKbACAitJgRUCKymwQmAlSYFVYCUpsAICKwVWQGAlKbACAisFVkBgJSmwCqwkBVYnaAisFFgBgZWkwAoIrBRYAYGVpMAqsJKkwAqBlRRYIbCSFFgBgZUCKyCwkhRYBVaSFFghsJICKwRWkhRYIbCSAisEVpICKwRWkgIrBFZSYIXASpICKwRWUmCFwEpSYPXzTWAlKbACAisFVkBgJSmwQmAlBVYIrCQpsAqsJAVWQGClwAoIrCQFVvj5JrBSYIXAKrCSFFgFVpICKyCwUmAFBFaSAisgsFJgBQRWkgKrwEqSAisEVgqszn8QWEkKrIDASoEVEFhJCqwCK0kKrBBYSYEVAitJgdXPNwisFFgBgZWkwOoELbCSFFghsJICKwRWXlxv3Lhx8MYbb6z83ve+57hQYIXASgqsEFg5M1966aWVX/rSlxwTCqwCK0mBFRBYKbAC+xhYI6h84QtfOGaGvLffflt42gMjqMb35cEHHzy47bbbFr9ft+wmxvtgH17fusj8jW9848h7dGz5+PrUZVvjfX8Z3/sCKwTWs/Fzv/+7K7/ziUcuTcT8xQufPfLa3/3skwIrcIaB9TOf+UzXjHkxDhCkfu1rr7128MILL5z5duN78pGPfGQx5o9fT/I9HfO8j2n8Gzbeb2PLRViO5WJ/43sQz9v375vAKrCSpMAKgXWPjcH1mO+7+uTBHc9/9+Cub74rggqsEFg3CipPP/30wX333bc4l2TAq8ZjL7/88iJwiZ3na8TCdYE1YmJ+7x5++OGDRx55ZGF9rP0+x9fPa2ZuvqZwLMDme7S+lt5+xzrjNebX83nx2JT3cG4rvGwzhQVWCKzTjDCY56GfP/OpU60jQuNlmy36t3/8h4vX/g9PPiqwAmccWB977LHV+esDH/jAsXH/7/3e7x0899xzwtShcSzimJxHrIsAOSWw1u9lLBvec889i8fi13wN1fM6nvG+yvfcWOjN92gsm8vHa5k6m/c8v28Cq8BKkgIrBNY99s4Xf7gaFEdMvePzr6+87d4HVl8TWQVWCKybmtGqhruIS48//vjq3HLesxw5PbBGQG0fz+9jO5sz1nUegTXeTxky1wXWDKa57zGTNZ8X/0Ggt2wsk7NRa5ydsm9PPfXUwl58nfPfA4EVAus0H/+dD63OWxELT7KO//qrP7u0gTXCqsAKnH1gbaNcnUUYY4CcNdl+7bIaQTBi3SazJ88jsLbfq6HZr7nO83gtuU8Zf4cC6xNPPHHk63HsM5hGbD3t9+28Z/AKrAIrSQqsEFjP0bte+/GvI+rh74fi6+1PfFEIFVghsJ46sOZswDoDUOS8GIG1FxKHAmt8oHYegTUCZkbQscAay8TX2uAZ4TSflzNT87H2vVoj62lmpcY+nNdsX4EVAut++MsXn16F0TwH/ffLzwisAitwoQNrDXBhzCLc98v3nkf43MfA2j42dnnh8wisEU0jesb3K/dtKHRGgA3r97ZG1tPMSo19OM8ZvAKrwEqSAisE1j0OrGGE1dXXX/+pGCqwQmA9dWCtUS8cuy9lfG1KvIrl8h6vQ5dsjfWsW6a33qFlp95PM7Y79BrG1l+fv8m9O2P5deusxysC37rAGuvr7cNQYG2X3/TYxzKbfJ96jgXWsdec79ucVRqXsu7F2LpsLHPS+8ROmQV71u9ZgRUC69nHwQ/e/f5VaA1Pci9RgVVgBfYtsNbLs64LUfHcdYEzxhV5f9dNYmh7Kdi6nvh9xLmIcEORLpaZejnZ3vZ2EYOnzAiurzP+nTclsPbWOxZY6/K5X+HUY3Dae/WOBdaIp0P7HcufNvzn5YbXvRemvr5YbmjZfZoBLrAKrCQFVkBgnRhY7/jMVxZfe++HHjr+tXI54QixYS/C3vnqO0eWDYeWu/3RpwfXI7AC8wms+bWYAdiLaxGs6j09I271ImW912u1bjOiUu8+sLHODFHxa710caw3nxPrj2UjcsV+1eVimRoXaziOdbT39oy
Zk/GP5lhf7nf82gt0sWzv+XVbuY74fayz7nNvnfW4tsdtLLCui5htYK0BMS+vW7cXr6Ue+wyeYT2+8ZzYr5OE1rHAOuW9mV8bWzb3tRdf6/svj0E9VkPvy/ZesvX7Fe+p/HsQX899i2WmvmdjX9rQGs9t92Nbs2oFVgis6/2DB+4/+M4nHln8PmexTomkca/WuJxwLBvmZYbzuf/86ccXwbFa79daH484m6EyjXWHEX7zeb944bOrr+e9YuO5uXxdthrPi/3JfQ3jOe3ysVys58WPPTx4L9qY3Rv7H8vkuuIYCqzAfgbWjFAxY7AXlDJu5uVe489tZIpI1bu/a8SxDK3xax07xzpipmOsMwNbrKduq11XL9y146PcXqwrl4nXFvtdnxO/r6+jfq3GwN52hu5vWl9TXuI21tXG5vg+5AzN9nWuC6xjEXPoubH9vBRv3WbsX31PxIzP/Fq9t2se/5OE1rHAmhF1LLCuOx6971t7+ev2/R/Ho94fNl9fRuf4ev2PB/Fv8Hpv2/haxOFYR7zn6vFsZ9zWvz9ndV9cgVVgJSmwAgLrhMAakTPC6uL+rJ/5yrHnxuPvu/rk4mvvfejj3ZmuGWgjnC5C7KNPH9tWhNXFdu59YLF8rDPXfVlCq8CKyxRYI/rUyNk+LyNR3gcz/gHbm+2aUS62ExEpvpazDXObOUMw15czI3Mb7fZrbEzb2Nc+HvGq97pj3bFc3he0vuZ4PPa1RtZe8IoYlrNBc5/r8Wy3VV9bvcxtPY5tfJ1yieCTBNZ6Keg4RjmLtUbB3rpiP+P11sDcC/GnCawZPHv3Pt0ksOaxmxIjcz15rOJ45Ps19idn+LbHLreby9bva/271HtvxjHOx+rz6/s+L5dcX8M2L1sssEJgHTeCYvwdjF8zfObf1aFYGUZcjGVi5msNojWwxvMzPEZ8zYhaQ2lGydxWPj+Cb70vbN2XCKW57Vx3/r6+lhpz82uxP7GfuXzuUzwn43J8Pbafy9f9jriaryl+zYiczxVYgf0KrBmxwohFbZRr759Zl89oWP8tEBEvZ0lmvIoxS29f4usRwfLXup6MVPFYrqfdv3hOXoY2ls+gFv9eai+7m6Ezlq+voS5bX18bWON15azcMPcpHq9xtd3/jHT1XqKxXB7XGqCnXiL4JIG17ke+jhoQe6E5X3NExxpnN539e9LAmpf3nXIf1vb7ljOD6/u+zqrOfcpoX8Ny/TtSg2q+f2osjcfy8Xys/keFfE/Ea8htn8VliwVWgZWkwAoIrJ3AGjNHc4bpInLe+8CvomfMOH3zZ8eeG8u0f17E2Oe/++sIG+uIoFqev1jfMrDG47lM3PN1dWniZYjthV2BFbhYgbXOpKsz8iIqtbNS8x+JsUyNgxkcMwxlFGojYm43o9TYpWDz+XUfelEug1Ub+WrIGgtp7b091z1e41pdNmNaDZPrtlUDYm+duwysGfTaAF0jet2/3rGMY5HLbnp527HAOrbfbTTdVWCt39P2+Xns2sfbSxLXwNoL9O3z8+9DfS/nsu2M56FZyQIrBNbtGjExQmENiPn3eigWZhiNqFjDZ+8SwblsBM32vq4RMePx+ljE0l7IjZA5tp0MxWHOxs1Ztr1Im49nPM04HNtrQ27dpwypsY36elwiGNiPwBr/loiAlGGozlZsZ1kOXbI241QGzwx1vYDWi3LtDL7Ybmwrw1M7kzbjWRtq29DXi3VD0TLHbO3jvcBaI2pdpr1vaB6HdqZtznzM/c1tt+vdVWCt4bd3r9R2n3NdbYwf+z6fNLDmOttj0d4feNPAOvReq+ttv3+9SxL3/mNCjam9IFyDdf4dat+7p7mvrMAqsJIUWAGB9YSBNWahRiTNULoKr48+vZhlum5dEU5z5mkbWIeeH4/ntnuPtxFXYAUuXmDtOXRJ1QxW7dfb4JTLRaQd24exmYoZ7mpYGopymzzeC2mbPD4Uzeps3KnrzCg4tM5dBtZNLsM7doxPun8nDaxtnDy
PwDq0zfZYjH1fe7OEe8cl1xHRdVtRVWCFwDrdCJw1XtaoWcNre0nh3n1ae+EzImTOFm2Xz9mvU+5tWtc5dK/XDKL18V44rcvXmbNhG4Hz8ZyROzS7V2AF9iOw9hwKZkNhrI1ZGRDb2aBDXxuKZkOzGadeKjYD15TAOvT4UKirUbedpbouTLbHMaN2+/xdBdaMfL1LQPe+NvQ6cv966zltYO3dZ3VXgXXo/dR7/wzN/B7ar/bY1Vngvb8fAqvASlJgBQTW87xE8Js/W0XTdoZpu1w8N2bAtlE0Q+lqJmxzyd9cfwTWWEcas2B74VVgBS5eYK2RKIPnUBjNGXYRhuL5aT5vSvQ6TQDct8Baj0EN1psG1joz+KwDay9u98LkusC66SVrp9yDtXef2rO6RPCUwNr+PWgvn70usE59vN6DNf6u9b5nAisE1u2bszgjQtZ7n2ZgrTM8ezNc20vxDoXPjJx1JmhetnfoMsSxnVhfe9nhse30Hs8Y3Ebk3mWS8/VW28drcBVYgf0LrL3w086ua2NRXsI3zcvjZswau79rjplq+BqKUxEce5eEHZo5GbEzHovZj7FfGXN3GVhzG71jVi8RW49XvcxynU3azsDdVWAdC9S9iDkUWOu+t7Odd3mJ4Dh+uwisee/ftPf+WRdY23vS9o5dvq/zvd1e6lpgFVhJCqyAwHpOgXV1ud5lOI37pB6Jo8v7q2YIzfu1trNO8/n1UsR5yeBVwL33gdXs2VaBFZhPYK2Xr+0Fq7icbF5WNu9JWc3LxI4FsqkBMLe1z4G1dwzq86cG1rEQuKvAmlE8Lue8r4F1yuzasWXzPTQ0I/u0gTUvrz3092BbgTUuxVwv3z10aWeBFQLr9i8PHLNII0i25qzTWKYXMXuRcSh81pmfGWxjmd6s0rz3aQbZDKQnDaxDoXjoNfWORVjvGdtuV2AF9i+wtuGnd8nSep/KvIdlNePSWGDtxdGhwBrry8gVz4t9ingaISweb6NeLhvLRLTqXdJ4m4E11x/704a13mWYe8erzoBtX8+uA2svCm8SWGN/zzKwTp25fNLAGu+f3vepdznrocA6NFu5fTzen/kfE9p7sgqsAitJgRUQWM85sK5moZZl8nk1uvYuEbzazjff/VVoXV4yOO+tOvacy6TAissSWNvZckP3A10X03Ld6yJQBqNe5MsAuM+B9STR7iJcIjjDZM7EnBJYNw1+Y8cw34NjgTXfM3l56l5EnRr6t3WJ4NZtBdbefZBPct9bgRUC63RzJurQzM6cOdreI3XsMrlD4bPeuzTWm+too2c+v16aeJNLBI8F1pitOyWwjh2znHXbu3SywArsX2CtQagXDcfCWDXDUW9mXm8dY5d9zXFOzvqMCBYBdej+sDUa9gLltgJr7NfQpV4zyGXwXTdDceh7cR4zWHOWaJ01vO4SwVMu2Tv1fZSRf2y/e5cP3sUlgjf5Xm0aWOsxzPfRLmeyCqwCK0mBFRBYNwisdZZpzjzN2ar1Ob1Ymssfu+fqMswO3YNVYAXmG1hjtlyGz/g1/tzGnfbxoei2brneLNU2NNWItA+Btd5rdV3gmhpYh17vLgNrRsxeGM24PeX+t2Pfw5MG1oinvbBZZ1jn+yq/H+3llWPZuG9pfC2W2WZgnfofDbYRWHuXAx4L0AIrBNbtuO4SvXn54KF7p44F0l5greuLyNqLlDlbta73tIE1ZsHGYzEzdux4DF32eCgut/dqFViB/QysNRy2wSnC5pQQ1bsM8Nh2hyJdRNS8vO6619OLVLsMrPka2+Xj+OVjuczQJZfbCNeG2l0F1jprdsrs1rFZmCfZv3WhPr6XYY3o+V4Yml19msBao/K6WaSnDayx7+02cvvr3icCq8BKUmAFBNYzCKwRSDOmxgzUfDyfkzNRhwLr4t6qJbLG72tgzT8v1vX8dwVW5z9cgsDahqE6izGiVkareP5QuIrHc7n4tQ18+bzcTsSxXCa2kfd6bQPWPgTWei/a3ms
7aWDNxyJu9ra17cCaYTJeQ0a8vBRtPl7jeO9YZghdF9I3Daw18ua+x7byPVXfk3XZfB01rg7dT3jK96q+j2v4rpE9vj9Dr30bgTX2q87OjW3nvk4JxwIrBNaTmZe+HVsmQ2p7Kd+8R+vQ5YOH1psBdWjmbH6txtDTBtZ8fi8U92bYTj0m7f4LrMB+BtY6G7SNfvWqGTGLcChGZRiMYJUxLJbNQNveU3Uo9tX9iFAVsSxtQ1U78zbszUzcRmCtl1Ju75say+U66gzPeHzoeOX6Yt/rdk4aMNcF1vr1Ohs49iNDdX1f9AJrvWfv0OzMkwbWfJ/k12tcbcPrpoE1Z1e3MTsfj+30Lve8rcAa+1Mvbx3bystmm8EqsJIUWJ2gIbCegXe++MMjgTMiaxiPL+6xuryk72L26us/XT0vImqNrLc/+nT3Hqw5O3Wxvs+/vvj9sTBb7uUaETeWjZmtsfzQrFqBFdjfwBrRJwJVhqOIY/HndqZchr2MSPn1Grry+W3g6i0Xv+8tl0Evg2oNYzVqZczL2BhfC+vz4/d5D8z6eA1vOTs3X1N7f8uhx+MYZUiLX/O15Gur9+XMEFbXmZe0reuM2Z9D68xlatCeEjFju7GNnFmaxzIea7/H8ee6/rrNNsrW90KsK19vLDv1UrXxvPrc3G4+PhTp29fSHod67Nr30Lpjlu+V+r2qM3fzEsTtvYnrh465rVxHHo/2vRmPx3Gd8ni+X2p4r9tzD1YIrLszQ+S6GJghtZ1VGrM8MzRGNI1wGetaFykjSuZlh9sZoDVyZmSN7Z/2HqyxndzXjMKxH7nPeengOsM2nh/LxPpimRpmc+ZvLpfryf0UWIGzDawRcTLoZFCK6DN2qeC852qNfvX5GadqbMzAlZEvY1wsm8vVCJoxrJ2dWMeIPXP5dr9yu+02c3vxWEbPeG15TOLxvD9qxtIaeGP5fL3xmuLxMANgewnbjIXtsQh7sxhzH+plkev+TwmLsf18bvwa+9CGxlhfjZb5OntRsj0Gdf1tqBx737XPjdeV+9a+//L7HvtY93PdcRj7vrWzd2uAjsdzv3Lf2ks812Ce9wRu38P5eKwvHs/vZR7T+v6r2+vdU1hgFVhJCqyAwLplx/5hkWF0ETxffad7T9UaWeP39V6tOds1wmt8bbXsvQ/8Kpw2lw6OuJuBNrcd4TW2I7ACFy+wZtCqtvEtA2Pv6xG/IjxFdAx791DNdcTz4vkZ5nr3w4zAFI/nMrG+GsYyGlbjdfRey9DjvXXEvm3yePs648/52vI5bUxsj9/YOuM1132P9WUA7H2PxgLrlO9x+73MZXpRsgbBTfanF1iH7L1/4vjk+2JstnAum69jbNkpfx/qMnF84rW2Ibk9du3s1228Z/M9EebfpamvTWCFwHqymavtmHvoUr89a2SNdWW8jN/X2aLt7NYaO3tfq+usAbO9tHDsWw2meVnfGoPbWbex3Xhu+7y4fHD7uvOSwhmP43nt/Vsjqtb9jGXynrVj97UVWCGw7iaw1pmgaRu4MhD1vh5jhfg3yFAcq+uI5SI6xXKx7RqRMk5V21l88dyModUMaBGmeuurMwbrY+164jlDj/eO09DjU45lvJY4HkORMI9XDc11vVMD65C9SzDX8Bnb7s3ezGOd38cwlt0kCA6974aOWQbN+Fq8z3qX1h0LrL3vW50xHPvTbjOPR2yv3ebQe3Xo8d57KrcXv4ax/t57XmAVWHkJ/Zd/+ZeDf/zHf1z5b//2b46LwAoIrJfINr4KrMDFCqzkSVx3SV9eTAVWCKz7ZcTQ3uzVszK2PWX757mPAisE1nmasz97ASrCV85qdKx269A9WHlxFFgF1gsdHr/61a8e8Z133ln7vAiU7fMiXO56f2Pfvv/972/8vNi32Mff/u3fXpxwz2JfKbACAisFVjj/CawUWCmwQmAlBVYIrPMzL6fam7mYl9SdeolaCqwCq59vAusFDqyf+MQnVh9AxO/XPS+eU5c/i8A
a+5qBdEoE7pmvU2AVWAGBlQIrILBSYKXACgisFFgBgfUkxr9zYqwZM1nrpVojrsZlg0PHSWClwCqwXgJrMA0jZo7NXs3QGcZzz2o/n3322YUnfb7AKrA642Liz6Wb169ff+bq1avvuf/++39TYKXACuc/gZUXx7jv6MMP//reffH7uOeoYyOwws+3+PkmsFJghfOfwLrNyBqXAa6fq0fwa++ryd3F1Tz+EbTjz0P3kKXAKrDyzALrWMT8u7/7u9VNus86sJ5WgVVgdcbFxJ9L9X1zcPiPkVcP/zFy96axQWClwArnP4GVZ++NGzcO3njjjSNGdHVsBFb4+bb05jZ+vgmsFFhxWc9/AuvXu/dcNYPy7I1j3tq7ZDMFVoGVZxJYI6zGrzFDNWaq9paNsBrLZawcCqzx/AiZ3/nOdxa/Dq1vm8Y2YntD21oXWNc9v7f82OWKYztirsCKix8YThobBFYKrHD+E1hJCqyY5883gZUCKy7r+U9gJSmwCqzsBNaIhTmLNWaqtstFfMzlxgJrnRFbLydc1/niiy+uHo9oW+/9Gl87/AG/+Fr8+vd///dH7hPbRsvYn4zDuc3Yh/ZSx0OBdd3z6+uNfa37l/ehrVE2lq9fP4/LKVNgxW7+AdL8Y+RHh3740FsEVgqscP4TWEkKrLjwP99GY4PASoEVzn8CK0mBVWBlN4jWCBkhsTd7NUPoUGDNdcWy3//+91eXFc7QGpE2l80I2bskcawnnrMukNYonI9nLG1fQ+/58ft8fs5G7T0/l4vHclZujcT1dcV19uOxfP0RX+O1CqwCK+b1D5B1sUFgpcAK5z+BlaTAigv78+1YbBBYKbDC+U9gJSmwCqwcDKw5S7UNkREL62NDgTVDajtLNJ9fo2XdVntZ3pwpui6QZgxtl814m5HzpM/PZTOw1pm2dZ0RVfOxNtjWSyZ7v51vYD3JwJI8SWy4du3aJw/9XwIr9yWw+rvJ8zr/CawkdxhYybP6+baIDYdeE1i5J4GVPLfzn8BKUmAVWDkQWGskrdEwQmJExzYs1sBaZ4O226gzTWtMzW3VywdnjJ1yid+hy/729q+3bBtSh54/FFh7j+fxjHW3r4ECKwVWUmClwEqSAisFVlJgpcBKUmAVWAXWmQXWeg/VCIQZEetlcMcCa3tp33ZmZ42Z9ZLC9RK7Ne6OBdL2Pqdj9z09zfM3Caw5AzfXEa/F7FWXCIZLBIt+dIlguEQwt+GNGzcOnnrqqYNHHnlk4dtvvz271xivKV/fXF8jXSIYLhFMukQwXCKY4WuvvXbw2GOPHXzkIx9Z+KUvfWl2rzFeU76+ub5GCqwCq8C6MKJqxsG4dG7Ew6H7mfYCa+/+rUOBtW4rZq7mn+ulfacE0pgBG4+31hmkp3n+poE1nheXH84Zum2gpsCKi/8PkF5UGPr5JrDOzztf/OHB+64+eXDbvQ8szvF3vfZjgRXOfwLrmcTV++67b3HeefDBBxe+8cYbs3yt8boefvjhxWud62ukwIq9+/l2LCoM/XwTWOfnu5998uBzv/+7Bx+8+/2Lnz3/9Vd/JrDC+U9gPZO4es899yzOOx/4wAcWvvTSS7N8rfG6clLSXF8jBVaBVWA9cm/SDIRtHNzWJYLrtmKmZ6xvKNCOXSK4XmJ4yLFLBK+Ln5sG1nrv1YjUY+GZAisuzj9A1v2jQ2C9fC4iq8AK5z+B9Yz8whe+sDjnvPzyy6uZnnOe3RmzVwVWCqzYx59vAut8jcgqsML5T2A9K+Pz8DjnxL+bcqbnnGd3xuxVgZUCq8A6+8BaY2lE1jaK9gJrLJNBNoJqXT7vq9q7fHDdVkTIoVjaC6T5Q6h3SeFtPn+TwNqLtXlc2uNIgRX7/w+QTf7RIbAKrAIrnP8EVsHR66XAir3++XZzGz/fBFaBVWDFZT3/CayCo9dLgVVg5ZrAWmd31og6FljruuLrGVkjQub0/9666vrGImQvkNY4GzNF6yWBh6Lw2PPrc+rvNwm
sdUZsrCPjchxP7zeBFRfi51L8o+OZw7+z79n0Hx0Cq8AqsML5T2AVHL1eCqzY359v2/r8SmAVWAVWXNbzn8AqOHq9FFgFVpb7hGb8jEhYA2hEwnb2ajwWsz1zRmY8N9aR0TKWzVmhuc4aMIf2JdYby8S6erNHY79ym3kp4TbqphmG85K8p31+vcxvrCOWj2OXxyKXzWNQZ+PWWcC9+8pSYMXl+Pl2FoH1rtd/Oumx7nPf/NmkZe/65rsXLoAuXtuhuzrW6wLrptuf+j0TWHFRzn8C6/buvfqNb3xjcc/VOOc89dRTi8sFh+19S+OxCJNPP/304lLC8dz262GEqhqtYv1T9iPWGdvvRc/4ejwey/S20xqXN45lH3/88dVrqvtRA2seg1hmSnAde61D+/a9731v9bUw9i0ea9edy8U+t1+vrz2/Fr/G98N7WWDFfH6+nUVg/eWLT096rOd/v/zMpGV/8cJnL1wAjdcW7upYrwusm25/6vdMYMVFOf8JrNu79+oLL7ywuOdqnHMee+yxxefMYXvf0ngswuQTTzyx+PdVPLf9evjcc8+tHo/fx/qn7EesM7bfi57x9Xg8lultpzUubxzLxjg+X1PdjxpY8xjEMlOC69hrHdq3GDvm18LYt3isXXcuF/vcfr2+9vxa/BrfD+9lgVVgvcRGOIwo2NreO7X9c+85deZoRs241G/EyPi1Xc/QTNp2PRmCp+xnBM9YR/xa17OL58ex6x2L+vrDiKrx+l0aWGCFwLqVsPfNdw9uf/TpRdS74/OvL8Jd/PreDz20+DWXu/PFH67CX3wt/txb3+K5D338yH8yue3eB46E1Ph9ritdrPPVd45Ewdyv25/44mJ78efFsofrjz/HvsZz7vjMVxbPj+3Eshkf87XEOsJYNr4ey+XrW8TKw/2p+x3L9sLv4ljF85evKZ9fX3vdVv55bJ3xOo4dr6VtYB3bfvta4/jF8/OYCawQWNkacS4+pLjvvvsW54mHH3548ecwl4nYl+ek+H0uG1G23qc17+Maj8fvc7k21rbhNNeZ247nxK913RE1c//C3J+6n739jX3J5euyuZ26rvoapxy3dp0RaXPb9fHY93h9eVxymXpcIpTW5+Z+RSDO4xDLZAjP71vus/eywAqBtRc4X/zYw4uo9w9PProId/HrHzxw/+LXXO7dzz65Cn/xtfhzb33xnMd/50NHzpcfvPv9R0Jq/D7XlcY6f/7Mp45Ewdyvv/3jP1xsL/68OOcdrj/+HPsaz/nOJx5ZPD+2E8tmfMzXEusIY9n4eiyXry+Wif2p+x3L9sJvPBbPz9eUz6+vvW4r/zy2zngd7fFK28A6tv32tcbxi+fnMRNYIbCyNeJcxMZ77rlnNVEn/hzmMhH78pwUv89lI8rW+7TmBKB4PH6fy7Wxtg2nuc7cdjwnfq3rjqiZ+1cnE9X97O1v7EsuX5fN7dR11dc45bi164xIm9uuj8e+x+vL45LL1OMSY8v63NyvGMPncYhlMoTn9y332XtZYBVYSQqswAUJrIs499qPV9Fu5eGfM7Au4mX8+TNf+VW0W0a+eLyuZ/X4Qx9fLBuRsZ2NudrWoYsY+vpPF8vmY0cia9mvRQA+XC6D4djjdzz/3SP7tYq4h/sV26zby/jZPh7rOxKOl19bxMvD/Ypt9EJoDapx/MK6/TaurvZ5eazjeGRwrevN7cfXpmw/I3IuI7BCYOWml8yNf/RniMzQF1G0FxgzgoYx2zKeG4FwLLBmlI31t9uMmNjOGq1hNrdVZ3pmpKwzQGPZWFcvsNbXm49FDJ0yi7UXeHuP5z7VbeVs13aZOF5jj9UAnbNup+wvBVZcvsAaRojLaJfGnzOwRoxc3IroE48sls3IF4/X9eTjEQxj2YiM7WzM3FYYy0cIjGXzsRpZ637FemK5DIZjj//zp48em3w89iu2WbeX8bN9PNZX15Ffi2Vjv2IbvRBag2ocv7Buv42r+bU81nE8MrjW9eb242t
Ttp8ROZcRWCGwctNL5sZMzAyRGfoiivYCY0bQMP79Fc+NcehYYK1XZWy3GTGxnTVaw2xuq870zEhZZ4DGsrGuXmCtrzcfixg6ZRZrL/D2Hs99qtvK2a7tMnG8xh6rATpn3U7ZXwqsAitJgRXYw8AacW4V/159ZxEAMwK2MTUjYM5kXQW/ex84dnnaxQzWZQTMANhG0Nheu53eftVtDz3ehsxeiKzhsxdSF8vn7NB4DflYmYWay0acHbu875EYurxcbx6X+tjQOo5sv7PeGoOPPffwNdT9E1ghsHJqYM2ZlBFM2+fkOSnDaw2s7SzVoe3mLNf2MsLtunvryOfmPtfo2q4vlqmX0u293gyxU+7NuklgzW0NheahWBy/b49nb11jx5cCKwTWjIL5WITOCIAZAduYmhEwZ7Jm8Iuw116eNh7LCJgBsI2gsb12O739qtseerwNmb0QWcNnL6SGOTs0XkM+Vmeh5rIRZ8cu71tjaF6uN49LfWxoHXX7vfXWGNw+N15D3T+BFQIrpwbWnEkZ/55qn5PnpAyvNbC2s1SHtpuzXNvLCLfr7q0jn5v7XKNru75Ypl5Kt/d6M8ROuTfrJoE1tzUUmodicf5H0no8e+saO74UWAVWkgIrcEECaxvs6uWCa1zMx3MGaTvzMy9tm7Fy8PK3h19vvza0X+seb6Pp0DaHZne2M0gzMk/Zj6H7p7b7kEG5DdK9deSyQ9uvUXrd/VsFVgisnBpY87zVC44ZI/N+o0OBdcwaROP5aRtPe/cqzfiby2yy/aHXO/T4aQJrROKcGVvvn9qb+VuPQe/1rIu1FFghsK4LrG2wq5cLrnExH88ZpO3Mz7y0bcbKocvfxtfbrw3t17rH22g6tM2h2Z3tDNKMzFP2Y+j+qe0+ZFBug3RvHbns0PZrlF53/1aBFQIrpwbWPG/1gmPGyLzf6FBgHbMG0Xh+2sbT3r1KM/7mMptsf+j1Dj1+msAakThnxtb7p/Zm/tZj0Hs962ItBVaBlaTACswksEbEy/t7hqvLBjeXER66N+uxmZjl3qWrCLmc0ZmXCT6vwNpGytVs1+UliVfHIO+dWiLp1MCa6xw75pO3X16DwAqBldsOrL1Zkm3s2zSwZngcs+5PXCo3w2tvmbzccO++rOcZWOs9W+t9XvOY1uM2ZM7kFVgFVgis2w6sEfHy/p5hXn62vYzw0L1Z25mY9d6lac7ozMsEn1dgbSNlznbNSxKnGWJrJJ0aWHOdY8d86vbraxBYIbBy24G1N0uyjX2bBtYMj2PW/YmxbYbX3jJ5ueHefVnPM7DWe7bW+7zmMa3HbcicySuwCqwCK0mBFbgkgbXeU7SaIW8oYk4JoMdmyy4vH3xWgbV9fChwRkTuHYM6u3dqYM0ZvzETeGpgPc32BVYIrDxpYM3At83AWi+N21t/716tY5f57d3PdV8Ca34t76ta7616kpm3AqvACoF1W4G13lO0miFvKGJOCaDttvLywWcVWNvHhwJnROTeMaize6cG1pzxGzOBpwbW02xfYIXAypMG1gx82wys9dK4vfX37tU6dpnf3v1c9yWw5tfyvqr13qonmXkrsAqsAitJgRWYeWBtLxF87LK6OaO1ubfq4GWAy71MhyLkvgXW3vFZF0e3MoN1ea/V02xfYIXAyk2D49A9UnuXDz7NJYLHgmbeizT2pc6kbfc59nFsxu15B9bezN14bb17rQqsAisE1vO6RHBrzmBt7606dBngei/ToQi5b4G1d3zWxdFtzGDNe62eZvsCKwRWbhoch+6R2rt88GkuETwWNPNepLEvdSZtu8+xj2Mzbs87sPZm7sZr691rVWAVWAVWkhRYcUkD6+1PfHHw3qpDM13HlsvLAB+7B2u9fPAyvu5LYF2t93Dfe5c2PklgHZvNO7j9gUsrC6wQWLmLwJozLuvM0aHoeZLAmtsdC5IZTmNbdaZru8+xTxmEY793GVhrJK371AusMaO2Db55/9i8H2vez3bqfgu
sAisE1tMG1r/94z8cvLfq0EzXseXyMsBtAKyXD874ui+BNdcb+967tPFJAuvYbN6h7Q9dWllghcDKXQTWnHFZZ44ORc+TBNbc7liQzHAa26ozXdt9jn3KIBz7vcvAWiNp3adeYI0ZtW3wzfvH5v1Y8362U/dbYBVYBVaSAisw08Ba497Y7NS492ouF1F2aLk6czNjYfwal8pt92FfAmudobsuIE8NrDU2Ly61XMLp2PbjvqsCKwRWgfUsAmudXZn3DX355ZdXIbOGvpME1vqcuGRufCARj4W57roPESZjmdiX3Ie6z3mZ4LH1bSOw1tm98Wsck1h/BukaWHO/Y52xP3k/1nop4/q/3OPrEZVzv+NrAqvACoF124G1xr2x2alx79VcLqLs0HJ15mbGwvg1LpXb7sO+BNY6Q3ddQJ4aWGtsjmNSw+nY9uO+qwIrBFaB9SwCax135n1D499WGTJr6DtJYK3PibHrc889t3gszHXXfYgwGcvEvuQ+1H3OywSPrW8bgbXO7o1f45jE+jNI18Ca+x3rjP3J+7HWSxnH47nf8fWIyrnf8TWBVWAVWEkKrE7QmElgjZmj9R6fEebisd4s1lwm/hwhL2JfDXl1uYyRMfM1lrvz1XdWMTX+nDNCI6yu4mFsv1w6OKNtPB7Pj+fG1+v+Lh4/3N/6+CIwLl9DnRm7CJmx7KHxvCOPHz4/H8/9ie2v9iWWXwbRnNEb5r1pV68tL5Wc2zp8rG6rrjO2uVrn4a+LY7p8fhu02+3nsc3tL47N4ffiyPa3HFkFVgis84tKGQrTmE1ZZ2XG73OGZTWiYl1PXSZ+3856HZuh2u5DxMcaEeP3dZmIpxkz2/2N/Wr3Nf83eSyXs0czjvYej9+vuy9s7He+5lg+Xm8NvLn/8UFQXXd+ONSuv7ffsVwG1oyrY+ugwAqBtc4crff4jDAXj/VmseYy8ecIeRH7asiry2WMjJmvsdzPn/nUKqbGn3NGaITVjIfxa710cEbbeDyeH8+Nr9f9jcdjf+vjYb6GOjM2vh5/DuN59fF4fj6e+xPbz32JxzOI5ozeMO9Nm68tn5vbisfqtuo6Y5u5zvg11pXPb4N2u/08trn92E58L+r2tx1ZBVYIrPMy4l2GwjRmU9ZZmfH7nGFZjX9j1fXUZeL37azXsRmq7T5EfKwRMX5fl4mxbcbMdn9jv3rj+9jHWC5nj2Yc7T0ev193X9jY73zNsXy83hp4c/8jitZ1Z4Bt19/b71guA2vG1bF1UGAVWEkKrMC+B9bXfryIcdVemIvAFzE0w17EvzbEtsulvfu3xvMz1C7W18zizGB6ZL8igL74w+7+Hnv8cP13vvF/u8v2XnM8v3ssSvDNfaqvqx6r2OaxbS0D9rp1ZrBt96+uf2z7ve2su2+uwAqBlVON2asxozIva7ttY72x/qH7p8bj6+6t2lvfLvd5l8fBe05ghcB62tmrEeOqvTAXgS9iaIa9iH9tiG2XS3v3b43nZ6iN9bWzODOYVmN7ESh7+9s+Huv/f8t43C7be83x/N7jNfjmPtXXVY9VbLPd1i87+9BbZwbbdv/q+se239vOuvvmCqwQWDnVmL0aMyrzsrbbNtYb6x+6f2o8vu7eqr317XKfd3kcvOcEVoGVJAVWzDCwkgIrBFaSAqufb5hPYCUFVgisJAVWP98EVpICKyCwUmAFBFaSAisEVlJghcBKkgKrwEpSYAUEVgqsgMBKUmCFn28CKwVWCKyiEEmBVWAlKbACAisFVkBgJSmwAgIrBVZAYCUpsAqsJCmwQmClwOr8B4GVpMAKCKwUWAGBlaTAKrCSpMAKgZUUWCGwkhRY/XyDwEqBFRBYSQqsTtACK0mBFQIrKbBCYCVJgRUCKymwQmAlKbBCYCUpsEJgFVgpsEJgJUmBFQIrKbBCYCVJgVVgJSmwAgIrBVZAYCUpsEJgJQVWCKwkKbAKrCQFVkBgpcAKCKwkBVZAYKXACuc/gZWkwCqwkhRYAYG
VAisgsJIUWAGBlQIrILCSFFgFVpIUWCGwkgIrBFaSAisgsFJgBQRWkgKrwEqSAisEVlJghcBKkgIrBFYKrM5/EFhJCqwQWEkKrBBYSYEVAitJCqwQWEmBFQIrSYHVzzeBlaTACgisFFjh/CewkhRYIbCSAisEVpIUWAVWkgIrILBSYAUEVpICKwRW0Y8CKwRWkhRYBVaSAisgsFJgBQRWkgIrILBSYAUEVpICq8BKUmB1gobASoHV+Q8CK0mBFRBYKbACAitJgVVgJUmBFQIrKbBCYCUpsAICKwVWQGAlKbAKrKIQSYEVAispsEJgJUmBFQIrKbBCYCUpsEJgJSmwQmAlBVYIrCQpsEJgJQVWCKwkKbAKrCQFVkBgpcAKCKwkBVYIrKTACoGVJAVWgZWkwAoIrBRYAYGVpMAKCKwUWOH8J7CSFFgFVpICKyCwUmAFBFaSAisgsFJgBQRWkgKrwEqSAisEVlJghcBKUmAFBFYKrIDASlJgFVhJUmCFwEoKrBBYSQqsfr5BYKXACgisJAVWCKwkBVYIrKTACoGVJAVWCKykwAqBlaTACoGVpMAKP98EVgqsEFhFIZICKwRWUmCFwEqSAqvASlJgBQRWCqyAwEpSYIXASgqsEFhJUmAVWEkKrIDASoEVEFhJCqyAwEqBFRBYSQqsAitJgdUJGgIrBVZAYCUpsAICKwVWQGAlKbAKrCQpsEJgJQVWCKwkBVZAYKXACgisJAVWgZUkBVYIrKTACoGVJAVWCKwUWAVWCKwkBVYIrCQFVgispMAKgZUkBVYIrKTACoGVpMDq55vASlJgBQRWCqxw/hNYSQqsEFhJgRUCK0kKrAIrSYEVEFh5gXzwhbed/yCwkhRYAYGVM/GfPvenzn8QWEkKrBBYSQqsEFjJXRrvuXz/Xbt27ZP+RkJgJSmwAgIrL67xnjO+h8BKUmCFwEpSYMXsuXr16t3l/Xfw/jd/LvzxbDx8r9X3XrwX/Y2EwEpSYAW2O77/n1f+UvjjmRjvNeN7CKwkBVYIrCQFVlwKrly5cmv9R3BcslX84zlcHvgg3ov+RkJgJSmwAtsd38clW8U/nsPlgY3vIbCSFFghsJIUWDH7n3FvHYusZrJyhzNX27jq3AeBlaTACuxufB/hy0xW7nLmahtXnfsgsJIUWCGwkhRYMXuuX79+y+F772bzD+LF/TEjhH3wpRvkqY33Ur3navFmvAf9TYTASlJgBXY7vo/7Y0YI+9fPf5o8tfFeqvdcNb6HwEpSYIXASlJgxaVjea+mm51/HJO79KZ7M0FgJSmwAsb3NL4HBFaSAqvASpICKy4ky//p/pYPBXhGvuV/tkNgJSmwAsb3NL4HBFaSAqvASpICKy48V65cuTX+1/G1a9c+ufxAhiNev379/4SOxXrjPRXvrXiP+ZsGgZWkwAoY3+/p+P5/H/oDx8L4HgIrSQqsAitJgRXAjogPFUJHAhBYSVJgBS4+169f/7DxPSCwkqTAKrCSFFgB7PYDmB+FjgQgsJKkwArMY3x/+Hf3PxwJQGAlSYFVYCUpsALYzYcvt+TfX/cbAgRWkhRYAeN7AAIrSYEVAitJgRXA+AcwHy4fwHzYEQEEVpIUWIGLy/I+tcb3gMBKkgKrwEpSYAWww8D6o/IBjMsEAwIrSQqswMUeH/xHGd+/6ogAAitJCqwCK0mBFcB24+ot5e+uy4gBAitJCqzABebKlSu3tuN7RwUQWElSYBVYSQqsALYbWD/cCawuIwYIrCQFVuN74AJSLw+cfvSjH/0tRwYQWEkKrMb3AitJgRXA9gLrjzqB1WWCAYGVpMBqfA9czLHBf7Tj+4iujgwgsJIUWI3vBVaSAiuALXD//ff/ZvvhSxpfc4QAgZWkwGp8D1wcYqZqb2zvP1ACAitJCqwCK0mBFcCWuHr16t1DgTW+5ggBAitJgdX4Hrg4XL9+/Zmh8f3h125xhACBlaTAanwvsJIUWAGc/gOYV0c+gHnVEQIEVpI
Cq/E9cKHGBTf9B0pAYCVJgVVgJSmwAtgRY5cHdplgQGAlSYEVuFhcvXr1PWNj+5jd6igBAitJgdX4XmAlKbACON0HMHevC6z+lzsgsJIUWI3vgYvB2OWBl950lACBlaTAanwvsJIUWAGc7gOYV9cFVpcJBgRWkgKr8T2w/yyvTnNz3fj+ypUrtzpagMBKUmA1vhdYSQqsAE7+AczBFF0mGBBYSQqsxvfAfjPl6jRLP+ZoAQIrSYHV+F5gJSmwAjjZBzDvmRpYY1lHDBBYSQqszrjA/jLl6jTLK9T8yNECBFaSAqvxvcBKUmAFcLIPYJ6ZGlhjWUcMEFhJCqzOuMB+ssnVaVyhBhBYSQqsxvcCK0mBFcDJxwM3N/gQ5qYjBgisJAVWZ1xgP9ng8sCuUAMIrCQFVuN7gZWkwArgpHz0ox/9rdb8+9v7miMGCKwkBVZnXGA/uX79+i2bjO9jeUcNEFhJCqzOuAIrSYEVwHbGCIu/v44EILCSpMAKGN8DEFhJCqwQWEkKrAB8AAMIrCQpsALG9wAEVpICq/G9wEpSYAXgAxjA+F5gJSmwAjC+BwRWkhRYBVaSAisAH8AAEFhJCqwAjO8BgZUkBVaBlaTACsAHMAAEVpICKwDjewACK0mBVWAlKbA6QQM+gAEgsJIUWAEY3wMQWEkKrAIrSQqsgA9gAAisJAVWAMb3AARWkgKrwCqwkhRYAfgABhBYSVJgBYzvAQisJAVWCKwkBVYAPoABBFaSFFgB43sAAitJCqwCK0mBFYAPYAAIrCQFVgDG94DASpICq8BKUmAF4AMYAAIrSYEVgPE9YHwvsJIUWAVWkgIrAB/AABBYSQqsAIzvAQisJAVWgZUkq9/61recoAEfwAAQWEnOxFdeecX4HjC+ByCwkpyJzz77rPG9wEpyH63/A+batWufdMYFfAADQGAleXGNc4rxPWB8D0BgJTkP45xifH8Krl69encp1Ac/+clPhCGSpzbOJfXcEucaZ1zABzAAzn58f+PGDWGI5KmNc4nxPWB8D+D8x/df+9rXhCGSpzbOJcb3p+TKlSu31oMYl/QUh0hu+fLAB3GuccYFfAAD4OzH93FJT3GI5JYvD2x8DxjfAzin8X1c0lMcIrnlywMb359igPVWG1nNZCV50pmrbVx1/XbABzAAznd8H2HETFaSJ5252sZV43vA+B7A+Y7vI4yYyUrypDNX27hqfH8Krl+/fsvhAbzZHNDF/RMjlHz7298myVHjXFHvuVq8GecYZ1rABzAAzn98H/dPjFDy5S9/mSRHjXNFveeq8T1gfA9g/8b3cf/ECCXxmRxJjhnninrPVeP7LbK8lvvNzsElyZN607XbAR/AADC+J2l8D8D4HoDxPUnj+9my/J8wb3lTkdyCb/mfL4APYAAY35M0vgdgfA/A+J6k8f2lIG5mG9X62rVrn1yesElyrXHOiHOHG2IDPoABYHxP0vgegPE9AON7ksb3AAAAPoABAAAAYHwPAAAAAACA4/gABgAAADC+<KEY>ER8AAMAAAAY3wMAAAAAAGAiPoABAAAAjO8BAAAAAAAwER/AAAAAAMb3AAAAAAAAmIgPYAAAAADjewAAAAAAAEzEBzAAAACA8T0AAAAAAAAm4gMYAAAAwPgeAAAAAAAAE/EBDAAAAGB8DwAAAAAAgIn4AAYAAAAwvgcAAAAAAMBEfAADAAAAGN8DAAAAAABgIj6AAQAAAIzvAQAAAAAAMBEfwAAAAADG9wAAAAAAAJiID2AAAAAA43sAAAAAAABMxAcwAAAAgPE9AAAAAAAAJuIDGAAAAMD4HgAAAAAAABPxAQwAAABgfA8AAAAAAICJ+AAGAAAAML4HAAAAAADARHwAAwAAABjfAwAAAAAAYCI+gAEAAACM7wEAAAAAADARH8AAAAAAxvcAAAAAAACYiA9gAAAAAON7AAAAAAAATMQHMAAAAIDxPQAAAAAAACbiAxgAAADA+B4AAAAAAAAT8QEMAAA
# *(embedded figure removed: base64 image data)*

# + [markdown] id="uVfRGlivsjVp"
# ### Base Algorithms
#
# TODO

# + [markdown] id="uHoV_1hsx1K9"
# #### Memory Based Collaborative Filtering
#
# **implementation details**
#
# The data is first transformed into a sparse matrix representation: (user by item) if user based and (item by user) if item based.
#
# Then the prediction matrix $R$ is trained with the following formula:
#
# $R$ is defined as $R_{i, j} = \mu_i + \frac{\sum_{v\in P_i(j)}S(i, v)\cdot (r_{vj} - \mu_v)}{\sum_{v\in P_i(j)}|S(i, v)|}$
#
# where $S$ is the Pearson similarity matrix:
#
# $S$ is defined as $S_{u,v} = \frac{\sum_{k\in I_u \cap I_v}(r_{uk} - \mu_u)(r_{vk} - \mu_v)}{\sqrt{\sum_{k\in I_u \cap I_v}(r_{uk} - \mu_u)^2}\sqrt{\sum_{k \in I_u \cap I_v}(r_{vk} - \mu_v)^2}}$
#
# The algorithm is implemented with numpy arrays (for prediction) and scipy CSR sparse matrices (for training).
#
# Every operation uses numpy matrix operations (dot product, norm, etc.), which trades extra memory for computational speed: on our experimental sample in user based CF, a Python for loop takes $\approx 10$ minutes to train, while the matrix-operation version takes $\approx 1$ minute.
#
# **user based collaborative filtering** (todo, edit R)
#
# When $R$ is (user by item) and $S$ is (user by user), it is User Based Collaborative Filtering.
#
# **item based collaborative filtering** (todo, edit R)
#
# When $R$ is (item by user) and $S$ is (item by item), it is Item Based Collaborative Filtering.

# + id="aJQ5sCu7b-vQ"
class Memory_based_CF():
    def __init__(self, spark, base, usercol='userId', itemcol='movieId', ratingcol='rating'):
        """[the memory based collaborative filtering model]

        Args:
            spark (Spark Session): [the current spark session]
            base (str): ['user' for user base or 'item' for item base]
            usercol (str, optional): [user column name]. Defaults to 'userId'.
            itemcol (str, optional): [item column name]. Defaults to 'movieId'.
            ratingcol (str, optional): [rating/target column name]. Defaults to 'rating'.
""" self.base = base self.usercol = usercol self.itemcol = itemcol self.ratingcol = ratingcol self.spark = spark self.X = None self.idxer = None self.similarity_matrix = None self.prediction_matrix = None def fit(self, _X): """[to train the model] Args: _X (Pyspark DataFrame): [the training set] """ X = self._preprocess(_X, True) self.X = X self.similarity_matrix = self._pearson_corr(X) self.prediction_matrix = self._get_predict() def predict(self, _X): """[to predict based on trained model] Args: _X (Pyspark DataFrame): [the DataFrame needed to make prediction] Returns: [Pyspark DataFrame]: [the DataFrame with prediction column] """ rows, cols = self._preprocess(_X, False) preds = [] for i,j in zip(rows,cols): preds.append(self.prediction_matrix[i, j]) df = self.idxer.transform(_X).select(self.usercol, self.itemcol, self.ratingcol).toPandas() df['prediction'] = preds return self.spark.createDataFrame(df) def recommend(self, X, numItem): idices = self.idxer.u_indxer.transform(X).toPandas()['userId_idx'].values.astype(int) items = np.asarray(np.argsort(self.prediction_matrix.T[idices, :])[:, -numItem:]) result = np.zeros((1, 3)) inverse_imat = pd.Series(self.idxer.i_indxer.labels) inverse_umat = pd.Series(self.idxer.u_indxer.labels) for u, i in zip(idices, items): result = np.vstack((result, np.hstack((inverse_umat.iloc[np.array([u for _ in range(len(i))])].values.reshape(-1, 1), inverse_imat.iloc[i.reshape(-k,)].values.reshape(-1, 1), np.asarray(self.prediction_matrix.T[np.array([u for _ in range(len(i))]), i]).reshape(-1, 1))))) df = pd.DataFrame(result[1:], columns = ['userId', 'movieId', 'prediction']) return self.spark.createDataFrame(df) def _preprocess(self, X, fit): """[preprocessing function before training and predicting] Args: X (Pyspark DataFrame): [training/predicting set] fit (bool): [if it is on training stage or not] Raises: NotImplementedError: [if not User base or Item base] Returns: sparse.csr_matrix: [if on training stage], numpy.array: [row and 
columns in np.array if on prediction stage] """ if fit: self.idxer = indexTransformer(self.usercol, self.itemcol) self.idxer.fit(X) _X = self.idxer.transform(X)\ .select(F.col(self.usercol+'_idx').alias(self.usercol), F.col(self.itemcol+'_idx').alias(self.itemcol), F.col(self.ratingcol)) _X = _X.toPandas().values if self.base == 'user': row = _X[:, 0].astype(int) col = _X[:, 1].astype(int) data = _X[:, 2].astype(float) elif self.base == 'item': row = _X[:, 1].astype(int) col = _X[:, 0].astype(int) data = _X[:, 2].astype(float) else: raise NotImplementedError return sparse.csr_matrix((data, (row, col))) else: _X = self.idxer.transform(X).select(self.usercol+'_idx', self.itemcol+'_idx').toPandas().values if self.base == 'user': row = _X[:, 0].astype(int) col = _X[:, 1].astype(int) elif self.base == 'item': row = _X[:, 1].astype(int) col = _X[:, 0].astype(int) else: raise NotImplementedError return row, col def _pearson_corr(self, A): """[generating pearson corretion matrix for the model when training] Args: A (sparse.csr_matrix): [the training set in sparse matrix form with entries of ratings] Returns: sparse.csr_matrix: [the pearson correlation matrix in sparse form] """ n = A.shape[1] rowsum = A.sum(1) centering = rowsum.dot(rowsum.T) / n C = (A.dot(A.T) - centering) / (n - 1) d = np.diag(C) coeffs = C / np.sqrt(np.outer(d, d)) return np.array(np.nan_to_num(coeffs)) - np.eye(A.shape[0]) def _get_predict(self): """[generating prediction matrix] Returns: sparse.csr_matrix: [the prediction matrix in sparse form] """ mu_iarray = np.array(np.nan_to_num(self.X.sum(1) / (self.X != 0).sum(1))).reshape(-1) mu_imat = np.vstack([mu_iarray for _ in range(self.X.shape[1])]).T x = self.X.copy() x[x==0] = np.NaN diff = np.nan_to_num(x-mu_imat) sim_norm_mat = abs(self.similarity_matrix).dot((diff!=0).astype(int)) w = self.similarity_matrix.dot(diff) / sim_norm_mat w = np.nan_to_num(w) return mu_imat + w class indexTransformer(): """[helper class for memory based model] """ def 
__init__(self, usercol='userId', itemcol='movieId', ratingcol='rating'): """[the index transformer for matrix purpose] Args: usercol (str, optional): [user column name]. Defaults to 'userId'. itemcol (str, optional): [item column name]. Defaults to 'movieId'. """ self.usercol = usercol self.itemcol = itemcol self.ratingcol = ratingcol self.u_indxer = M.feature.StringIndexer(inputCol=usercol, outputCol=usercol+'_idx', handleInvalid = 'skip') self.i_indxer = M.feature.StringIndexer(inputCol=itemcol, outputCol=itemcol+'_idx', handleInvalid = 'skip') self.X = None def fit(self, X): """[to train the transformer] Args: X (Pyspark DataFrame): [the DataFrame for training] """ self.X = X self.u_indxer = self.u_indxer.fit(self.X) self.i_indxer = self.i_indxer.fit(self.X) return def transform(self, X): """[to transform the DataFrame] Args: X (Pyspark DataFrame): [the DataFrame needs to be transformed] Returns: Pyspark DataFrame: [the transformed DataFrame with index] """ X_ = self.u_indxer.transform(X) X_ = self.i_indxer.transform(X_) return X_.orderBy([self.usercol+'_idx', self.itemcol+'_idx']) def fit_transform(self, X): """[combining fit and transform] Args: X (Pyspark DataFrame): [the DataFrame needs to be trained and transformed] Returns: Pyspark DataFrame: [the transformed DataFrame with index] """ self.fit(X) return self.transform(X) # + [markdown] id="e7gR1eXJyD6D" # #### Model Based Collaborative Filtering # # # **implementation details** # # The data first casted userId and movieId into integers and then fit into `pyspark.ml.recommendation.ALS`. # # Our implementation takes advantages of model based collaborative filtering algorithm implemented in `spark.ml`, in which users and products are described by a small set of latent factors that can be used to predict missing entries `spark.ml` uses the alternating least squares (ALS) algorithm to learn these latent factors. 
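# As a loose illustration of what ALS computes (a hedged NumPy sketch on a
# made-up rating matrix, not the `spark.ml` implementation used below), the
# algorithm alternates small ridge-regression solves for the user factors and
# the item factors:

```python
import numpy as np

# Tiny made-up ratings matrix; 0 marks a missing entry (illustration only).
R = np.array([[5., 4., 0., 1.],
              [4., 0., 0., 1.],
              [1., 1., 0., 5.],
              [0., 1., 5., 4.]])
mask = R > 0
rank, reg = 2, 0.1          # number of latent factors, regularization strength
rng = np.random.default_rng(0)
U = rng.standard_normal((R.shape[0], rank))  # user factors
V = rng.standard_normal((R.shape[1], rank))  # item factors

for _ in range(20):
    # Fix V and solve a small ridge regression for each user's factors,
    # then fix U and do the same for each item's factors.
    for u in range(R.shape[0]):
        Vu = V[mask[u]]
        U[u] = np.linalg.solve(Vu.T @ Vu + reg * np.eye(rank), Vu.T @ R[u, mask[u]])
    for i in range(R.shape[1]):
        Ui = U[mask[:, i]]
        V[i] = np.linalg.solve(Ui.T @ Ui + reg * np.eye(rank), Ui.T @ R[mask[:, i], i])

pred = U @ V.T  # dense prediction matrix, including the missing entries
rmse = np.sqrt(np.mean((pred[mask] - R[mask]) ** 2))
```

# After the loop, `U @ V.T` fills in the missing entries; `spark.ml` performs
# the same factorization in a distributed fashion and, in our configuration,
# with the `nonnegative` constraint.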
#
# Since there are many parameters in ALS of `spark.ml`, we fix `nonnegative = True` in order to increase interpretability, and we select only `regParam` (scale of the regularization term) and `rank` (number of hidden factors) to be tuned. (We also tried tuning the `maxIter` parameter, but `maxIter > 20` blows up memory on our machine with large `rank`, and it takes much longer with nearly the same results, so we keep `maxIter` at its default of 10.)

# + id="_kPnocHEf3Uh"
class Als():
    """[the predictor for Pyspark ALS]
    """
    def __init__(self, userCol, itemCol, ratingCol, regParam, seed, rank):
        self.userCol = userCol
        self.itemCol = itemCol
        self.ratingCol = ratingCol
        self.model = None
        self.als = ALS(userCol=userCol, itemCol=itemCol, ratingCol=ratingCol,
                       coldStartStrategy="drop", nonnegative=True,
                       regParam=regParam, seed=seed, rank=rank)

    def fit(self, _X):
        """[function to train the parameters of the predictor]

        Args:
            _X (Pyspark DataFrame): [training set]
        """
        X = self._preprocess(_X)
        self.model = self.als.fit(X)

    def predict(self, _X):
        """[function to make predictions over the test set]

        Args:
            _X (Pyspark DataFrame): [test set]

        Returns:
            Pyspark DataFrame: [DataFrame with a 'prediction' column holding the predicted values]
        """
        X = self._preprocess(_X)
        return self.model.transform(X)

    def recommend(self, X, numItems):
        return self.model.recommendForUserSubset(X, numItems)\
            .select(self.userCol, F.explode('recommendations').alias('recommendations'))\
            .select(self.userCol, 'recommendations.*')\
            .select(self.userCol, self.itemCol, F.col(self.ratingCol).alias('prediction'))

    def _preprocess(self, _X):
        """[preprocess the input dataset]

        Args:
            _X (Pyspark DataFrame): [the training or test set]

        Returns:
            Pyspark DataFrame: [the preprocessed DataFrame]
        """
        cast_int = lambda df: df.select([F.col(c).cast('int') for c in [self.userCol, self.itemCol]] +
                                        [F.col(self.ratingCol).cast('float')])
        return cast_int(_X)
# -

# #### Cold Start Model
#
# Strategy: TODO

class cold_start():
    def __init__(self, movie, sample):
        # NOTE: `sample` (the ratings DataFrame) was referenced as a global in
        # the original version; it is passed in explicitly here.
        movie_copy = movie.withColumn("year", F.regexp_extract(movie.title, r"(\d{4})", 0).cast(T.IntegerType()))
        movie_copy = movie_copy.withColumn("genre", F.explode(F.split(movie.genres, pattern="\|")))
        movie_copy = movie_copy.select("movieId", "title", "genre", "year")
        genres = movie_copy.select("genre").distinct().toPandas()['genre'].tolist()
        sample_copy = sample.select("userId", "movieId")
        total = sample_copy.join(movie_copy, ["movieId"], 'left')
        popular = total.groupby("movieId").count().sort("count", ascending=False)
        self.movie = movie
        self.popular = popular

    def recommend(self):
        # recommend the 50 most-rated movies to every cold-start user
        return self.popular.select("movieId").limit(50)

# + [markdown] id="a042EivvzocO"
# ### Advanced Algorithms
#
# TODO

# + [markdown] id="Arsp_CaDz1Dc"
# #### Wide and Deep
#
# TODO

# + id="HXE8nv9p0FmW"
class wide_deep():
    def __init__(self, wide_cols='genres', deep_cols=['userId', 'movieId'], target_col='rating',
                 deep_embs=[64, 64], deep_hidden=[64, 32, 16], deep_dropout=[0.1, 0.1, 0.1],
                 deep_bachnorm=True):
        self.wide = None
        self.deep = None
        self.deep_hidden = deep_hidden
        self.deep_dropout = deep_dropout
        self.deep_bachnorm = deep_bachnorm
        self.model = None
        self.wide_cols = wide_cols
        self.deep_cols = deep_cols
        self.embs = [(col, dim) for col, dim in zip(deep_cols, deep_embs)]
        self.wide_preprocessor = self._genre_preprocessor(wide_cols)
        self.deep_preprocessor = DensePreprocessor(embed_cols=self.embs)
        self.target_col = target_col

    def fit(self, train, n_epochs=10, batch_size=128, val_split=.1, verbose=True):
        X, y = train.drop(self.target_col, axis=1), train[self.target_col].values
        # vectorize the wide (genre) column only, not the whole frame
        wide_feature = self.wide_preprocessor.fit_transform(X[self.wide_cols])
        deep_feature = self.deep_preprocessor.fit_transform(X)
        self.wide = Wide(wide_dim=np.unique(wide_feature).shape[0], pred_dim=1)
        self.deep = DeepDense(hidden_layers=self.deep_hidden,
                              dropout=self.deep_dropout,
                              batchnorm=self.deep_bachnorm,
                              deep_column_idx=self.deep_preprocessor.deep_column_idx,
                              embed_input=self.deep_preprocessor.embeddings_input)
self.model = WideDeep(wide=self.wide, deepdense=self.deep) wide_opt = torch.optim.Adam(self.model.wide.parameters(), lr=0.01) deep_opt = RAdam(self.model.deepdense.parameters()) wide_sch = torch.optim.lr_scheduler.StepLR(wide_opt, step_size=3) deep_sch = torch.optim.lr_scheduler.StepLR(deep_opt, step_size=5) callbacks = [ LRHistory(n_epochs=n_epochs), EarlyStopping(patience=5), ModelCheckpoint(filepath="model_weights/wd_out"), ] optimizers = {"wide": wide_opt, "deepdense": deep_opt} schedulers = {"wide": wide_sch, "deepdense": deep_sch} initializers = {"wide": KaimingNormal, "deepdense": XavierNormal} self.model.compile(method='regression', optimizers=optimizers, lr_schedulers=schedulers, initializers=initializers, callbacks=callbacks, verbose=verbose) self.model.fit(X_wide=wide_feature, X_deep=deep_feature, target=y, n_epochs=n_epochs, batch_size=batch_size, val_split=val_split,) def load_pretrained(self, train, fp, device): X = train.copy() if type(self.wide_cols) == str: wide_feature = self.wide_preprocessor.fit_transform(X[[self.wide_cols]]) else: wide_feature = self.wide_preprocessor.fit_transform(X[self.wide_cols]) deep_feature = self.deep_preprocessor.fit_transform(X[self.deep_cols]) self.wide = Wide(wide_dim=np.unique(wide_feature).shape[0], pred_dim=1) self.deep = DeepDense(hidden_layers=self.deep_hidden, dropout=self.deep_dropout, batchnorm=self.deep_bachnorm, deep_column_idx=self.deep_preprocessor.deep_column_idx, embed_input=self.deep_preprocessor.embeddings_input) self.model = torch.load(fp, map_location=torch.device(device)) def predict(self, test): X = test.copy() wide_feature = self.wide_preprocessor.transform(X) deep_feature = self.deep_preprocessor.transform(X) return self.model.predict(X_wide=wide_feature, X_deep=deep_feature) def _genre_preprocessor(self, genre_feat): dense_layer = lambda X: X.toarray() genre_transformer = Pipeline(steps=[ ('tokenizer', CountVectorizer()), ('dense', FunctionTransformer(dense_layer, validate=False)) ]) preproc = 
ColumnTransformer(transformers=[('genre', genre_transformer, genre_feat),]) return preproc def _deep_preprocessor(self,embs): return DensePreprocessor(embed_cols=embs) # + [markdown] id="QeGO88eX0HUO" # #### Graph Neural Nets Embedding # # TODO if we have time # + id="tIEIsioGhc72" #todo # + [markdown] id="XyjT2GQ_S4QU" # ### Model Pipeline # # The model pipeline combines the models w.r.t graph above. # + def base_recommend(spark, base_model, cold_start_model, user_ids, movies, n, extra_features, user_id, item_id): userset = list(set(user_ids)) users = spark.createDataFrame(pd.DataFrame({base_model.userCol: userset})) base_recommend = base_model.recommend(users, n).toPandas() base_recommend = base_recommend.merge(movies, how='left') base_recommend = base_recommend[[user_id, item_id] + extra_features] base_recommend = base_recommend.astype({user_id: np.int64, item_id: np.int64}) cold_start_users = set(user_ids) - set(base_recommend[user_id].tolist()) for user in cold_start_users: cold_recommend = cold_start_model.recommend().toPandas().values.reshape(-1,) user_lst = [user for _ in range(n)] cold_recommendation = pd.DataFrame({user_id: user_lst, item_id: cold_recommend}) cold_recommendation = cold_recommendation.astype({user_id: np.int64, item_id: np.int64}) cold_recommendation = cold_recommendation.merge(movies, how='left') cold_recommendation = cold_recommendation[[user_id, item_id] + extra_features] base_recommend = base_recommend.append(cold_recommendation, ignore_index=True) return base_recommend def advanced_recommend(advanced_recommender, base_recommend, k, user_id, item_id): df = base_recommend.copy() prediction = advanced_model.predict(df) df['prediction'] = prediction df = df.set_index(item_id).groupby(user_id).prediction\ .apply(lambda x: x.sort_values(ascending=False)[:k]).reset_index() return df def final_recommender(spark, base_model, cold_start_model, advanced_recommender, users, movies, n = 50, k = 5, user_id = 'userId', item_id = 'movieId', 
extra_features = ['genres']):
    base_recommend_items = base_recommend(spark, base_model, cold_start_model, users,
                                          movies, n, extra_features, user_id, item_id)
    return advanced_recommend(advanced_recommender, base_recommend_items, k, user_id, item_id)

# + [markdown] id="mD5pOe4e1iSE"
# ## The Experiment
# -

compressed_sample_path = '../data/model_results.tar.gz'
# !tar -xzvf $compressed_sample_path -C $data_path
# !ls $data_path

# + [markdown] id="HRZdIu-_15_J"
# ### Choice of Base Model

# + [markdown] id="QHsBSmXVIqBS"
# Since user-based CF needs more than the 16 GB of memory Colab assigns (the session crashed), we abandon user-based CF.
#
# Thus, we choose our base model by comparing the recall and running time of item-based CF against ALS matrix factorization (model-based CF) on our sample data. (We use the parameters tuned for ALS in Homework 2: regParam = .15, rank = 10.)
#
# We run a benchmark on the test set to measure both the recall and the time cost of each base model.
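# The `recall` helper used in this benchmark comes from the project's metrics library and is not shown in this notebook; presumably it measures, per user, the share of held-out test items recovered in the top-50 recommendations. A minimal pure-Python sketch under that assumption (function name and signature are hypothetical):

```python
def recall_at_k(recommended, relevant):
    """recommended: dict user -> ordered list of recommended item ids
    relevant:    dict user -> set of held-out (test) item ids

    Returns the share of held-out items that appear in the recommendation
    list, averaged over users with at least one held-out item."""
    scores = []
    for user, rel in relevant.items():
        if not rel:
            continue  # no held-out items for this user, nothing to recall
        hits = len(rel & set(recommended.get(user, [])))
        scores.append(hits / len(rel))
    return sum(scores) / len(scores)


recommended = {1: ['a', 'b', 'c'], 2: ['x', 'y']}
relevant = {1: {'a', 'c', 'd'}, 2: {'z'}}
print(recall_at_k(recommended, relevant))  # (2/3 + 0/1) / 2 ≈ 0.333
```

# The real pipeline computes this over Spark DataFrames; the toy version only fixes the semantics being compared in the table below.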
# -

# How we trained the models and generated the base-model selection results:
# ``` python
# ## live training and inference for the base models
# # this cell takes several minutes to execute
#
# models = {'item_based': Memory_based_CF(spark, base='item', usercol='userId',
#                                         itemcol='movieId', ratingcol='rating'),
#           'als': Als(userCol='userId', itemCol='movieId', ratingCol='rating',
#                      regParam=.15, seed=0, rank=10)}
#
# def recommend(prediction, k, userCol='userId', itemCol='movieId',
#               ratingCol='rating', predCol='prediction'):
#     window = W.Window.partitionBy(prediction[userCol]).orderBy(prediction['prediction'].desc())
#     ranked = prediction.select('*', F.rank().over(window).alias('rank'))
#     recommended = ranked.where(ranked.rank <= k).select(F.col(userCol).cast('string'),
#                                                        F.col(itemCol).cast('string'),
#                                                        F.col(ratingCol).cast('double'),
#                                                        F.col(predCol).cast('double'))
#     return recommended
#
# recalls = []
# times = []
# predictions = []
# for model in models.keys():
#     # train the base model
#     models[model].fit(sample_train)
#     start = time.time()
#     prediction = models[model].predict(sample_test)
#     recommendation = recommend(prediction, 50)
#     recalls.append(recall(recommendation))
#     end = time.time()
#     times.append(end - start)
#     predictions.append(prediction)
#
# base_model_selection = pd.DataFrame({'recall': recalls, 'recommend time': times},
#                                     index=['item_based', 'als'])
# base_model_selection.to_csv('../model_results/base_model_selection.csv')
# ```

base_model_selection = pd.read_csv(os.path.join(data_path, 'model_results/base_model_selection.csv'),
                                   index_col=0)
display(base_model_selection)
# TODO

# + [markdown] id="0JzrLISTJPmn"
# For our sample dataset, the table above shows that item-based CF outperforms ALS on recall, but its recommendation time is much worse than the ALS model's. Since a real-time scenario requires instant recommendations, we choose `ALS` as our base model.
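# The `recommend` helper above ranks predictions per user with a Spark window and keeps `rank <= k`; the same top-k-per-user selection can be sketched without Spark (a toy stand-in, not the project's code):

```python
from collections import defaultdict


def top_k_per_user(predictions, k):
    """predictions: iterable of (user, item, predicted_rating) tuples.

    Mirrors the windowed Spark helper: partition by user, order by
    prediction descending, keep the first k items."""
    by_user = defaultdict(list)
    for user, item, pred in predictions:
        by_user[user].append((item, pred))
    return {
        user: [item for item, _ in sorted(items, key=lambda t: t[1], reverse=True)[:k]]
        for user, items in by_user.items()
    }


preds = [(1, 'a', 3.2), (1, 'b', 4.8), (1, 'c', 4.1), (2, 'a', 2.0), (2, 'b', 1.5)]
print(top_k_per_user(preds, 2))  # {1: ['b', 'c'], 2: ['a', 'b']}
```

# One behavioral difference: Spark's `rank()` gives tied predictions the same rank, so `rank <= k` can return more than k rows per user, while this sketch always truncates to exactly k.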
# + [markdown] id="dQl7ewpg2JSt" # #### Performance of Each Models # # - # how we train wide and deep # # ``` python # # wd = wide_deep() # wd.fit(sample_train_df) # test_pred = wd.predict(sample_test_df) # # ``` # how we generate the results # # ```python # # #getting prediction of base models # base_predictions = [pred.toPandas() for pred in predictions] # base_predictions = [pred.astype({'userId': np.int64, 'movieId': np.int64, 'rating': np.float64, 'prediction': np.float64}) \ # for pred in base_predictions] # for pred, model in zip(base_predictions, models.keys()): # pred.columns = ['userId', 'movieId','rating', model+'_prediction'] # results = sample_test_df[['userId', 'movieId','rating']].merge(base_predictions[0]) # results = results.merge(base_predictions[1]) # # results['deep_wide_prediction'] = test_pred # # results[['rating', 'item_based_prediction', # 'als_prediction', 'deep_wide_prediction']].to_csv('../model_results/model_test_results.csv', index=False) # # ``` all_preds_test = pd.read_csv(os.path.join(data_path,'model_results/model_test_results.csv')) # TODO need fit in metrics all_preds_test # + #todo rmse table, and other metrics # - plot_ROC_numpy(all_preds_test.rating.values, list(all_preds_test[['item_based_prediction', 'als_prediction', 'deep_wide_prediction']].values.T),\ ['item_based_prediction', 'als_prediction', 'deep_wide_prediction']) # ##### Observation # # TODO # # **performance wise** # # **memory wise** # # **time wise** # ### Experiment of Pipeline # How we run our pipeline # # ```python # train = sample_train_df.copy() # test = sample_test_df.copy() # use_cuda = torch.cuda.is_available() # device = torch.device("cuda" if use_cuda else "cpu") # # #users to generate recommendation # users = test.userId.unique().tolist() # # #base model has already trained in previous cells # ## train base model # base_model = Als(userCol='userId', itemCol='movieId', ratingCol='rating', regParam=.15, seed=0, rank=10) # base_model.fit(sample_train) # 
## load cold start model # cold_start_model = code_start(movies) # ## train wide and deep model # advanced_model = wide_deep() # ### if we want to live train the wide and deep model # advanced_model.fit(sample_train_df) # ### if we want to load pretrained model # advanced_model.load_pretrained(train, '../trained_model/wide_deep_sample.t', device) # # #generate recommendation for users n = how many base model recommends, k = how many advanced model recommends # final_recommend_items = final_recommender(spark, # base_model, # cold_start_model, # advanced_model, # users, # movies_df, n=50, k=5) # #save results # final_recommend_items.to_csv('../model_results/final_recommendations.csv', index=False) # ``` final_recommend_items = pd.read_csv(os.path.join(data_path,'model_results/final_recommendations.csv')) # + #todo need more info # - train_known_pred = sample_train_df[['userId', 'movieId', 'rating']].merge(final_recommend_items) print(train_known_pred.shape[0] / sample_train_df.shape[0]) print(acc_numpy(train_known_pred.rating, train_known_pred.prediction)) print(rmse_numpy(train_known_pred.rating, train_known_pred.prediction)) plot_ROC_numpy(train_known_pred.rating, [train_known_pred.prediction], 'final_model') test_known_pred = sample_test_df[['userId', 'movieId', 'rating']].merge(final_recommend_items) print(test_known_pred.shape[0] / sample_test_df.shape[0]) print(acc_numpy(test_known_pred.rating, test_known_pred.prediction)) print(rmse_numpy(test_known_pred.rating, test_known_pred.prediction)) plot_ROC_numpy(test_known_pred.rating, [test_known_pred.prediction], 'final_model') # + all_known_pred = sample_df[['userId', 'movieId', 'rating']].merge(final_recommend_items) print(all_known_pred.shape[0] / sample_df.shape[0]) print(acc_numpy(all_known_pred.rating, all_known_pred.prediction)) print(rmse_numpy(all_known_pred.rating, all_known_pred.prediction)) plot_ROC_numpy(all_known_pred.rating, [all_known_pred.prediction], 'final_model') # - 
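# `acc_numpy` and `rmse_numpy` are imported from the project's metrics library and are not shown in this notebook; a sketch of what they plausibly compute. The 3-star threshold mirrors the `rating > 3` cut used with `recall_score` below, but treating it as the accuracy cut-off is an assumption here:

```python
import math


def rmse(y_true, y_pred):
    # root-mean-squared error over two parallel rating sequences
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))


def acc(y_true, y_pred, threshold=3.0):
    # accuracy after binarising ratings at `threshold` ("liked" vs. "not liked");
    # the threshold value is an assumption, not taken from the project's code
    hits = sum((t > threshold) == (p > threshold) for t, p in zip(y_true, y_pred))
    return hits / len(y_true)


print(rmse([4, 2, 5], [3, 2, 5]))  # sqrt(1/3) ≈ 0.577
print(acc([4, 2, 5], [3, 2, 5]))   # 2/3 ≈ 0.667 (the 4-vs-3 pair disagrees at the cut)
```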
recall_score(test_known_pred.rating > 3, test_known_pred.prediction > 3) # + [markdown] id="x71ERwTi2ajW" # ### Conclusion # # TODO # + id="NMd-Z6xP2hjZ"
notebook/.ipynb_checkpoints/final_report_Bo_EDIT-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + def get_rank(X, n): x_rank = dict((x, i+1) for i, x in enumerate(sorted(set(X)))) return [x_rank[x] for x in X] n = int(input()) X = list(map(float, input().split())) Y = list(map(float, input().split())) rx = get_rank(X, n) ry = get_rank(Y, n) d = [(rx[i] -ry[i])**2 for i in range(n)] rxy = 1 - (6 * sum(d)) / (n * (n*n - 1)) print('%.3f' % rxy)
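# The closed form $r_{xy} = 1 - \frac{6\sum d_i^2}{n(n^2-1)}$ used above equals Spearman's coefficient only when there are no ties (the task input guarantees distinct values; with ties, `get_rank`'s `set()` would assign equal ranks and the formula no longer applies). For tie-free data it should match the Pearson correlation of the ranks, which we can verify directly:

```python
def pearson(xs, ys):
    # plain Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5


def rank(V):
    # 1-based rank of each value (assumes no ties)
    return [sorted(V).index(v) + 1 for v in V]


def spearman(X, Y):
    # closed-form Spearman coefficient, valid for tie-free data
    rx, ry = rank(X), rank(Y)
    n = len(X)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))


X = [10, 9.8, 8, 7.8, 7.7, 7, 6, 5, 4, 2]
Y = [200, 44, 32, 24, 22, 17, 15, 12, 8, 4]
print(spearman(X, Y))           # 1.0 (both sequences are strictly decreasing)
print(pearson(rank(X), rank(Y)))  # 1.0, agreeing with the closed form
```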
Day 7- Spearman's Rank Correlation Coefficient.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd # + from keras.models import Sequential from keras.layers import Dense, Activation model = Sequential([ Dense(32, input_shape=(784,)), Activation('relu'), Dense(10), Activation('softmax'), ]) # - # # # # + model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) model.compile(optimizer='rmsprop', loss='mse') # - # + model = Sequential() model.add(Dense(32, activation='relu', input_dim=100)) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) import numpy as np data = np.random.random((1000, 100)) labels = np.random.randint(2, size=(1000, 1)) model.fit(data, labels, epochs=10, batch_size=32) # + import keras model = Sequential() model.add(Dense(32, activation='relu', input_dim=100)) model.add(Dense(10, activation='softmax')) model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) import numpy as np data = np.random.random((1000, 100)) labels = np.random.randint(10, size=(1000, 1)) one_hot_labels = keras.utils.to_categorical(labels, num_classes=10) model.fit(data, one_hot_labels, epochs=10, batch_size=32) # + import numpy as np from keras.models import Sequential from keras.layers import Dense, Dropout x_train = np.random.random((1000, 20)) y_train = np.random.randint(2, size=(1000, 1)) x_test = np.random.random((100, 20)) y_test = np.random.randint(2, size=(100, 1)) model = Sequential() model.add(Dense(64, input_dim=20, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(64, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1, activation='sigmoid')) 
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) model.fit(x_train, y_train, epochs=20, batch_size=128) score = model.evaluate(x_test, y_test, batch_size=128) # - # + from keras.models import Sequential from keras.layers import Dense import numpy numpy.random.seed(7) dataset = numpy.loadtxt("c:/users/hp/downloads/diabetes.csv", delimiter=",", skiprows=1) X = dataset[:,0:8] Y = dataset[:,8] model = Sequential() model.add(Dense(12, input_dim=8, activation='relu')) model.add(Dense(8, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(X, Y, epochs=150, batch_size=10) scores = model.evaluate(X, Y) print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100)) # - import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt data = pd.read_csv('c:/users/hp/downloads/UCI_Credit_Card.csv') data.info() print('------------------------') print(data.describe()) print('------------------------') plt.figure(figsize=(25,25)) ax = plt.axes() corr = data.drop(['ID'], axis=1).corr() sns.heatmap(corr, vmax=1,vmin=-1, square=True, annot=True, cmap='Spectral',linecolor="white", linewidths=0.01, ax=ax) ax.set_title('Fig.9 : Correlation Coefficient Pair Plot',fontweight="bold", size=30) plt.show() # + from sklearn.preprocessing import StandardScaler from keras.utils import to_categorical predictors = data.drop(['ID','default.payment.next.month'], axis=1).as_matrix() predictors = StandardScaler().fit_transform(predictors) target = to_categorical(data['default.payment.next.month']) # - from keras.layers import Dense from keras.models import Sequential from keras.callbacks import EarlyStopping from sklearn.utils import class_weight non_default = len(data[data['default.payment.next.month']==0]) default = len(data[data['default.payment.next.month']==1]) ratio = float(default/(non_default+default)) print('Default Ratio 
:',ratio) # + n_cols = predictors.shape[1] early_stopping_monitor = EarlyStopping(patience=2) class_weight = {0:ratio, 1:1-ratio} model = Sequential() model.add(Dense(25, activation='relu', input_shape = (n_cols,))) model.add(Dense(25, activation='relu')) #model.add(Dense(20, activation='relu')) model.add(Dense(2, activation='softmax')) model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy']) model.fit(predictors, target, epochs=20, validation_split=0.3, callbacks = [early_stopping_monitor],class_weight=class_weight) # - test_y_predictions = model.predict(predictors) from sklearn import metrics matrix = metrics.confusion_matrix(target.argmax(axis=1),test_y_predictions.argmax(axis=1)) matrix # + y_pred = model.predict(predictors) y_pred = (y_pred > 0.8) y_pred matrix = metrics.confusion_matrix(target.argmax(axis=1),y_pred.argmax(axis=1)) matrix # - # + import matplotlib.pyplot as plt history = model.fit(predictors, target, validation_split=0.25, epochs=50, batch_size=16, verbose=1) plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.title('Model accuracy') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='upper left') plt.show() # Plot training & validation loss values plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Model loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='upper left') plt.show() # -
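# The confusion matrices above come from `sklearn.metrics`; the same count table can be built by hand (rows = true class, columns = predicted class, matching sklearn's convention). Note also that thresholding the two-column softmax output at 0.8, as done above, can leave a row with no `True` entry, in which case `argmax` silently falls back to class 0:

```python
def confusion_matrix(y_true, y_pred, n_classes=2):
    # rows index the true class, columns the predicted class
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m


y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]
print(confusion_matrix(y_true, y_pred))  # [[2, 1], [1, 2]]
```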
neural_network_keras_1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + [markdown] papermill={"duration": 0.014572, "end_time": "2021-09-11T19:02:49.116665", "exception": false, "start_time": "2021-09-11T19:02:49.102093", "status": "completed"} tags=[] # # 1. Parameters # + papermill={"duration": 0.025267, "end_time": "2021-09-11T19:02:49.150879", "exception": false, "start_time": "2021-09-11T19:02:49.125612", "status": "completed"} tags=["parameters"] # Defaults simulation_dir = 'simulations/unset' ncores = 48 # + papermill={"duration": 0.016533, "end_time": "2021-09-11T19:02:49.176310", "exception": false, "start_time": "2021-09-11T19:02:49.159777", "status": "completed"} tags=["injected-parameters"] # Parameters read_coverage = 30 mincov = 10 simulation_dir = "simulations/alpha-2.0-cov-30" iterations = 3 sub_alpha = 2.0 # + papermill={"duration": 0.017946, "end_time": "2021-09-11T19:02:49.200288", "exception": false, "start_time": "2021-09-11T19:02:49.182342", "status": "completed"} tags=[] from pathlib import Path import os simulation_data_dir = Path(simulation_dir) / 'simulated_data' initial_reads_dir = simulation_data_dir / 'reads_initial' reads_dir = simulation_data_dir / 'reads' assemblies_dir = simulation_data_dir / 'assemblies' if not reads_dir.exists(): os.mkdir(reads_dir) # + [markdown] papermill={"duration": 0.005345, "end_time": "2021-09-11T19:02:49.211796", "exception": false, "start_time": "2021-09-11T19:02:49.206451", "status": "completed"} tags=[] # # 2. Fix reads # # Fix read file names and data so they can be indexed. 
# + papermill={"duration": 0.193245, "end_time": "2021-09-11T19:02:49.409710", "exception": false, "start_time": "2021-09-11T19:02:49.216465", "status": "completed"} tags=[] import os # Fix warning about locale unset os.environ['LANG'] = 'en_US.UTF-8' # !pushd {initial_reads_dir}; prename 's/data_//' *.fq.gz; popd # + [markdown] papermill={"duration": 0.008732, "end_time": "2021-09-11T19:02:49.431585", "exception": false, "start_time": "2021-09-11T19:02:49.422853", "status": "completed"} tags=[] # Jackalope produces reads with non-standard identifiers where pairs of reads don't have matching identifiers. For example: # # * Pair 1: `@SH08-001-NC_011083-3048632-R/1` # * Pair 2: `@SH08-001-NC_011083-3048396-F/2` # # In order to run snippy, these paired identifiers need to match (except for the `/1` and `/2` suffix). # # So, I have to replace them all with something unique, but which matches in each pair of files. I do this by replacing the position (I think) with the read number (as it appears in the file). So the above identifiers become: # # * Pair 1: `@SH08-001-NC_011083-1/1` # * Pair 2: `@SH08-001-NC_011083-1/2` # + papermill={"duration": 1.031779, "end_time": "2021-09-11T19:02:50.469295", "exception": false, "start_time": "2021-09-11T19:02:49.437516", "status": "completed"} tags=[] import glob import os files = [os.path.basename(f) for f in glob.glob(f'{initial_reads_dir}/*.fq.gz')] # !parallel -j {ncores} -I% 'gzip -d --stdout {initial_reads_dir}/% | perl scripts/replace-fastq-header.pl | gzip > {reads_dir}/%' \ # ::: {' '.join(files)} # + papermill={"duration": 0.033328, "end_time": "2021-09-11T19:02:50.515664", "exception": false, "start_time": "2021-09-11T19:02:50.482336", "status": "completed"} tags=[] import shutil shutil.rmtree(initial_reads_dir) # + [markdown] papermill={"duration": 0.009208, "end_time": "2021-09-11T19:02:50.538543", "exception": false, "start_time": "2021-09-11T19:02:50.529335", "status": "completed"} tags=[] # # 3. 
Fix assemblies # # Fix assembly genome names # + papermill={"duration": 0.192747, "end_time": "2021-09-11T19:02:50.737929", "exception": false, "start_time": "2021-09-11T19:02:50.545182", "status": "completed"} tags=[] # !pushd {assemblies_dir}; prename 's/data__//' *.fa.gz; popd
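# `scripts/replace-fastq-header.pl` is not included in this notebook; a Python sketch of the logic described above (replace the position field in each Jackalope read id with a running read number so that mates get matching ids). The exact identifier layout is inferred from the examples and is an assumption:

```python
import re


def replace_fastq_headers(lines):
    """Rewrite FASTQ header lines like
        @SH08-001-NC_011083-3048632-R/1  ->  @SH08-001-NC_011083-1/1
    numbering reads in file order, so the n-th read in the R1 file and the
    n-th read in the R2 file share an id (up to the /1 and /2 suffix)."""
    out, read_no = [], 0
    for i, line in enumerate(lines):
        if i % 4 == 0:  # every FASTQ record is 4 lines; line 0 is the header
            read_no += 1
            line = re.sub(r'-\d+-[FR](/[12])$',
                          lambda m: f'-{read_no}{m.group(1)}', line)
        out.append(line)
    return out


reads = ["@SH08-001-NC_011083-3048632-R/1", "ACGT", "+", "IIII"]
print(replace_fastq_headers(reads)[0])  # @SH08-001-NC_011083-1/1
```

# The real pipeline streams gzipped files through the perl script in parallel; this sketch only captures the header transformation itself.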
evaluations/simulation/2-fix-simulated-files.simulation-alpha-2.0.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="ep6V64HssDX_" colab_type="text" # # Создание датасета обучения # # Данный документ предназначен для обработки и анализа предобработанных данных и построении пригодных для обучения данных # + id="vd0LcZTTd2u3" colab_type="code" colab={} import pandas as pd import numpy as np import os import gc import seaborn as sns from tqdm import tqdm, tqdm_notebook import matplotlib.pyplot as plt # %matplotlib inline # + id="At3CocPBd13S" colab_type="code" outputId="378f4d75-0a94-4106-aef6-46c28c43c486" executionInfo={"status": "ok", "timestamp": 1584258497503, "user_tz": -180, "elapsed": 25069, "user": {"displayName": "\u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u041f\u043e\u0434\u0447\u0435\u0437\u0435\u0440\u0446\u0435\u0432", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjHN-g8lRldqGjIHMSaJY1rxUBxuENPHHKk5mlq=s64", "userId": "04087359208169148337"}} colab={"base_uri": "https://localhost:8080/", "height": 121} from google.colab import drive drive.mount('/content/drive') # + id="ueD7hRg1eqP-" colab_type="code" colab={} os.chdir('/content/drive/Shared drives/Кредитные риски') # + id="WGmYw0zVEDMr" colab_type="code" colab={} from CreditRisks.metrics_library.rosstat_utils import * # + id="M9SN8G7yzqSn" colab_type="code" colab={} DIR='Датасеты/' DIR_IN=DIR+'Росстат_2019_компактный/' DIR_OUT=DIR+'revision_005/' # + [markdown] id="5R6YTT6EHlFm" colab_type="text" # # Генерация фич # + id="T7Gw2SArAu45" colab_type="code" outputId="24526656-fc2d-4998-8588-172a45d1a3af" executionInfo={"status": "ok", "timestamp": 1584223316264, "user_tz": -180, "elapsed": 1302682, "user": {"displayName": "\u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u041f\u043e\u0434\u0447\u0435\u0437\u0435\u0440\u0446\u0435\u0432", "photoUrl": 
"https://lh3.googleusercontent.com/a-/AOh14GjHN-g8lRldqGjIHMSaJY1rxUBxuENPHHKk5mlq=s64", "userId": "04087359208169148337"}} colab={"base_uri": "https://localhost:8080/", "height": 102, "referenced_widgets": ["48506760ebef4a1ba9ad394f5df014f2", "c56ba6cd94b041be954ff816a727fa21", "792f271f6ec3494da5a6f8c85570664e", "<KEY>", "<KEY>", "6912cf7b5a994d87af74f0f2d9a628cb", "b985b5effb3d459d89009eb312a1a5bd", "8c9eb4d412e9404d94d29edba431d7db"]} for year in tqdm_notebook(range(YEAR_FIRST, YEAR_LAST+1)): D = pd.read_csv(f'{DIR_IN}{year}.csv', index_col='inn') D['okved2'] = D['okved'].str.extract(r'(^[0-9]+.[0-9]+)').fillna('__null__') D['okved1'] = D['okved'].str.extract(r'(^[0-9]+)').fillna('__null__') D.to_csv(f'{DIR_OUT}01_src/{year}.csv') del D gc.collect() # + [markdown] id="TKMVqKMSz5hz" colab_type="text" # # Выбор релевантных компаний # + id="hGUCjCVKz0Pe" colab_type="code" outputId="d355685b-f446-464e-e7d8-e321a9c630af" executionInfo={"status": "ok", "timestamp": 1574974685811, "user_tz": -180, "elapsed": 30356, "user": {"displayName": "\u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u041f\u043e\u0434\u0447\u0435\u0437\u0435\u0440\u0446\u0435\u0432", "photoUrl": "<KEY>", "userId": "04087359208169148337"}} colab={"base_uri": "https://localhost:8080/", "height": 70} companies = pd.read_csv(f'{DIR_IN}companies_info.csv', index_col="inn") # + id="OCZQ6tg91S6u" colab_type="code" colab={} companies = companies[companies['okfs'] == 16] companies.drop(columns=['okfs'], inplace=True) # + id="RuQFALV11g2J" colab_type="code" colab={} if 'companies' in vars(): companies.to_csv(f'{DIR_OUT}companies_relevant.csv', index_label='inn') else: companies = pd.read_csv(f'{DIR_OUT}companies_relevant.csv', index_col='inn') # + [markdown] id="1GkOf7N7nDoz" colab_type="text" # # Создание датасета существования компаний # + id="MwOl30NWvtmY" colab_type="code" colab={} def is_default(x): ''' Расчитывает рейтинг компании 9 - Актив не равен пассиву или 0 или заполнено менее 5 - неправильная 
отчетность
    1-2 - ok
    3-5 - дефолт
    '''
    cond_def = ((x['16003'] != x['17003']) | (x['16004'] != x['17004']) |
                (x['16004'] == 0) | (x['17004'] == 0) |
                (x['non_zero'] < 5)).astype(np.uint8)
    cond_a = ((x['12003'] != 0) & (x['15003'] != 0) & (x['12003']/x['15003'] < 0.5)).astype(np.uint8)
    cond_b = (x['16004']/x['16003'] > 2).astype(np.uint8)
    cond_c = ((x['12303'] != 0) & (x['12304'] != 0) & ((x['12303']/x['16003'])/(x['12004']/x['16004']) > 1)).astype(np.uint8)
    cond_d = ((x['12103'] != 0) & (x['12104'] != 0) & ((x['12103']/x['12104'] > 3) | (x['12104']/x['12103'] > 3))).astype(np.uint8)
    # optimized "else": the +1 marks a "good" company
    return cond_def * 9 + (1 - cond_def) * (cond_a + cond_b + cond_c + cond_d + 1)

# + id="HHB2Wh0OJacj" colab_type="code" colab={}
# Drop everything (keep index)
companies.drop(columns=['name', 'okpo', 'okopf', 'okved', 'region'], inplace=True)

# + id="WrvPRDGFPbfT" colab_type="code" outputId="2f52c989-6750-4309-87eb-d9104d1b05a3" executionInfo={"status": "ok", "timestamp": 1574978761427, "user_tz": -180, "elapsed": 755755, "user": {"displayName": "\u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u041f\u043e\u0434\u0447\u0435\u0437\u0435\u0440\u0446\u0435\u0432", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbUw_CVz8K3RIunXwSoj-hNZ6f2buYD0JYAcB_=s64", "userId": "04087359208169148337"}} colab={"base_uri": "https://localhost:8080/", "height": 188}
for year in range(YEAR_FIRST, YEAR_LAST+1):
    D = pd.read_csv(f'{DIR_IN}{year}.csv', index_col='inn')
    # insert at position 4, just before the COLUMNS_VALUE columns
    D.insert(4, 'non_zero', D[COLUMNS_VALUE].astype(bool).sum(axis=1))
    companies[f'{year}'] = pd.Series(0, index=companies.index, dtype=np.uint8)
    total = 0
    for index, value in is_default(D).iteritems():
        if index in companies.index:
            companies.at[index, f'{year}'] = int(value)
            total += 1
    print(f"Done {year}, companies added {total}, total {D.shape[0]}")

# + id="fdZPO50JYKVy" colab_type="code" outputId="09902d6c-b6cd-43ae-d5ca-4334a74c31fd" executionInfo={"status": "ok",
"timestamp": 1574978846402, "user_tz": -180, "elapsed": 755, "user": {"displayName": "\u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u041f\u043e\u0434\u0447\u0435\u0437\u0435\u0440\u0446\u0435\u0432", "photoUrl": "https://lh3.googleusercontent.com/<KEY>", "userId": "04087359208169148337"}} colab={"base_uri": "https://localhost:8080/", "height": 431} companies.head(100) # + id="58yUyQ6GjK7v" colab_type="code" colab={} if 'companies' in vars(): companies.to_csv(f'{DIR_OUT}companies_status.csv', index_label='inn') else: dtypes = {} for year in range(YEAR_FIRST, YEAR_LAST+1): dtypes[str(year)] = np.uint8 companies = pd.read_csv(f'{DIR_OUT}companies_status.csv', index_col='inn', dtype=dtypes) # + [markdown] id="5sI-Ih0zODQ8" colab_type="text" # # Генерация данных для обучения # + [markdown] id="uLxlUvuOBIWY" colab_type="text" # ## Генерация датасета истории # + id="qs4mxu2iPlNU" colab_type="code" outputId="fa30627f-b372-4d90-d2c6-cdd7e945794a" executionInfo={"status": "ok", "timestamp": 1574979581722, "user_tz": -180, "elapsed": 3352, "user": {"displayName": "\u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u041f\u043e\u0434\u0447\u0435\u0437\u0435\u0440\u0446\u0435\u0432", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBbUw_CVz8K3RIunXwSoj-hNZ6f2buYD0JYAcB_=s64", "userId": "04087359208169148337"}} colab={"base_uri": "https://localhost:8080/", "height": 67} res = ((companies[f'{YEAR_FIRST}'] != 0) & (companies[f'{YEAR_FIRST}'] != 9)).astype(np.uint8) for year in range(YEAR_FIRST+1, YEAR_LAST+1): res += ((companies[f'{year}'] != 0) & (companies[f'{year}'] != 9)).astype(np.uint8) print('Before', companies.shape[0]) # Убираем компании, которые существуют менее 2 лет companies.drop(labels=res[res < 2].index, inplace=True) res.drop(labels=res[res < 2].index, inplace=True) print('Drop fast dead', companies.shape[0]) # Новые компании, по которым мало информации companies.drop(labels=res[(res < 3) & (companies[f'{YEAR_LAST}'] != 0) & (companies[f'{YEAR_LAST}'] != 9)].index, 
inplace=True)
print('Drop new', companies.shape[0])

# + id="uekCmQO0RgvR" colab_type="code"
drop_index = []
for index, row in companies.iterrows():
    exist = row[f'{YEAR_FIRST}'] != 0
    die = False
    for year in range(YEAR_FIRST + 1, YEAR_LAST + 1):
        exist_now = (row[str(year)] != 0) & (row[str(year)] != 9)
        if die and exist_now:
            drop_index.append(index)
            break
        if exist and not exist_now:
            die = True
        if not exist and exist_now:
            exist = True

# + id="19l1KYIOs1Z-" colab_type="code"
companies.drop(labels=drop_index, inplace=True)
print('Drop companies without reports or with too bad reports', companies.shape[0])

# + id="m7HkUGcqSIz0" colab_type="code"
if 'companies' in vars():
    companies.to_csv(f'{DIR_OUT}companies_status_filtered.csv', index_label='inn')
else:
    dtypes = {}
    for year in range(YEAR_FIRST, YEAR_LAST + 1):
        dtypes[str(year)] = np.uint8
    companies = pd.read_csv(f'{DIR_OUT}companies_status_filtered.csv', index_col='inn', dtype=dtypes)

# + [markdown] id="0VL8FzDcBAMZ" colab_type="text"
# ## Generating the request datasets

# + id="vsq5aa77mnTz" colab_type="code"
def generate_request_dataset(companies, year_begin, year_end, print_stats=True):
    # Drop companies that ever filed bad reports
    companies_ok = companies[(companies == 9).sum(axis=1) == 0]
    data = companies_ok.drop(columns=companies.columns)
    data['year_-1'] = -1
    data['year_0'] = -1
    data['target'] = -1
    for index, row in companies_ok.iterrows():
        # Target: something is wrong with the company in the final year
        year = year_end
        data.at[index, 'target'] = ((row[f'{year}'] != 1) & (row[f'{year}'] != 2)).astype(np.uint8)
        year -= 1
        write = 0
        while year >= year_begin and write >= NUM_WRITE:
            if row[f'{year}'] != 0:
                data.at[index, f'year_{write}'] = year
                write -= 1
            year -= 1
    # Drop the records for which no information was found
    data.drop(index=data[(data == -1).sum(axis=1) > 0].index, inplace=True)
    print("Mean target: ", data['target'].mean(), "\tcount: ", len(data))
    gc.collect()
    return data

# + id="JDjDkmqkpw1W" colab_type="code"
train = generate_request_dataset(companies, YEAR_FIRST, YEAR_LAST - 1)
test = generate_request_dataset(companies, YEAR_LAST - 2, YEAR_LAST)
prod = generate_request_dataset(companies, YEAR_FIRST, YEAR_LAST)

# + [markdown] id="1rFqcxH3dpCI" colab_type="text"
# ## Adding categorical features

# + id="-yUAneGJdtTU" colab_type="code"
companies_info = pd.read_csv(f'{DIR_IN}companies_info.csv', index_col='inn')
# okpo - an identifier with no predictive meaning
# name - the company name - carries little signal
# okopf - legal form, unchanged in 90% of cases
# okfs - ownership form, unchanged in 99% of cases
# okved - activity type, unchanged in 70% of cases
# inn - the first 2 digits encode the region code
companies_info.drop(columns=['okpo', 'name', 'okopf', 'okfs', 'okved'], inplace=True)

# + colab_type="code" id="0RUv_8lOok45"
train = train.join(companies_info)
test = test.join(companies_info)
prod = prod.join(companies_info)

# + id="tHMEDNUSk1wk" colab_type="code"
train.to_csv(f'{DIR_OUT}companies_request_train.csv', index_label='inn')
test.to_csv(f'{DIR_OUT}companies_request_test.csv', index_label='inn')
prod.to_csv(f'{DIR_OUT}companies_request_prod.csv', index_label='inn')

# + [markdown] id="nu_fZ1VaBNvs" colab_type="text"
# ## Generating the training dataset

# + id="J9lZfGShss9E" colab_type="code"
import csv
from typing import Union, List

class CsvMultiReader:
    __files = None
    __readers = None
    header = None
    __last_value = None

    def __init__(self, year_first: int, year_last: int, fmt: str):
        """
        Create a multi-file csv reader for indexed data
        :param year_first: first file to read
        :param year_last: last file to read
        :param fmt: file path format string
        """
        self.__files = {}
        self.__readers = {}
        self.__last_value = {}
        for year in range(year_first, year_last + 1):
            self.__files[year] = open(fmt.format(year))
            self.__readers[year] = csv.reader(self.__files[year])
            row = self.__readers[year].__next__()
            if self.header is None:
                self.header = row
            else:
                assert row == self.header
            self.__last_value[year] = None

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        for k, f in self.__files.items():
            f.close()

    def get_index(self, year: int, index: int) -> Union[List, None]:
        """
        Read the file forward until the line with the given index appears
        :param year: year file to read
        :param index: index to search for
        :return: the matching row if it exists, else None
        """
        row = self.__last_value[year]
        try:
            while row is None or int(row[0]) < index:
                row = next(self.__readers[year])
        except StopIteration:
            return None
        self.__last_value[year] = row
        if int(row[0]) == index:
            return row
        else:
            return None

# + id="grTzQQwpDPEc" colab_type="code"
tasks = ['train', 'test', 'prod', ]

# + id="e5y5NMENDT1y" colab_type="code"
for task in tasks:
    print("Task: ", task)
    fin = open(f'{DIR_OUT}02_middle/companies_request_{task}.csv', newline='')
    fout = open(f'{DIR_OUT}03_out/companies_ready_{task}.csv', 'w', newline='')
    rin = csv.reader(fin)
    rout = csv.writer(fout)
    head = rin.__next__()
    print(head)
    reader = CsvMultiReader(YEAR_FIRST, YEAR_LAST, f'{DIR_OUT}01_src/{{}}.csv')
    for h in reader.header[1:]:
        head.append(f'{head[1]}_{h}')
    for h in reader.header[1:]:
        head.append(f'{head[2]}_{h}')
    rout.writerow(head)
    with tqdm_notebook(total=10**6) as pbar:
        for row in rin:
            inn, y0, y1 = map(int, row[:3])
            r0 = reader.get_index(y0, inn)
            r1 = reader.get_index(y1, inn)
            if r0 is None or r1 is None:
                print(inn, 'is None')
            else:
                rout.writerow(row + r0[1:] + r1[1:])
            pbar.update()
    fin.close()
    fout.close()
    reader.__exit__(None, None, None)
    print("DONE!\n")

# + [markdown] id="CwVX4L2MhiUp" colab_type="text"
# # Saving in pickle format

# + id="HqV6gKmu79f5" colab_type="code"
NEW_DTYPES = RESULT_DTYPES.copy()
for key, dtype in NEW_DTYPES.items():
    try:
        _, _, c = key.split('_')
        code = int(c)
    except:
        continue
    NEW_DTYPES[key] = np.float32
for year in [0, -1]:
    for extra in [1, 2]:
        NEW_DTYPES[f'year_{year}_okved{extra}'] = str

# + id="Yv_jc92GLMYc" colab_type="code"
map_rules = []
for key, dtype in NEW_DTYPES.items():
    try:
        _, lag, c = key.split('_')
        lag = int(lag)
        code = int(c)
        old_code = f"year_-1_{code - 1}"
        new_code = f"year_0_{code + 5}"
        if lag == 0 and code % 10 == 4 and old_code in NEW_DTYPES:
            if new_code in NEW_DTYPES:
                print("Warning: the new column already exists!")
            print(key, old_code, new_code)
            map_rules.append((key, old_code, new_code))
    except ValueError:
        continue

# + id="A6y7H2M687G-" colab_type="code"
for task in tasks:
    df = pd.read_csv(f'{DIR_OUT}03_out/companies_ready_{task}.csv', dtype=NEW_DTYPES)
    for col1, col2, col_new in map_rules:
        df[col_new] = (df[col1] * (df[col1] != 0) + df[col2] * (df[col1] == 0)).astype(np.float32)
        # print(col1, (df[col1] == df[col2]).mean())
        df.drop(columns=[col1, col2], inplace=True)
    df.to_pickle(f'{DIR_OUT}companies_ready_float_{task}.pkl')
    # print("="*80)

# + id="22IZMhy1oVIA" colab_type="code" colab={}
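The merge applied by `map_rules` in the last cell is a coalesce: keep the first column where it is nonzero, otherwise fall back to the second. A minimal self-contained sketch of the same arithmetic trick on a toy frame (the column names `a` and `b` are mine, standing in for `col1` and `col2`):

```python
import numpy as np
import pandas as pd

# Toy frame: 'a' plays the role of col1, 'b' the role of col2
df = pd.DataFrame({'a': [3.0, 0.0, 5.0], 'b': [1.0, 2.0, 0.0]})

# Same trick as in the notebook: boolean masks multiply to 0/1,
# so rows where 'a' is 0 take their value from 'b'
df['merged'] = (df['a'] * (df['a'] != 0) + df['b'] * (df['a'] == 0)).astype(np.float32)

print(df['merged'].tolist())  # [3.0, 2.0, 5.0]
```

The same result could be obtained with `df['a'].where(df['a'] != 0, df['b'])`; the multiplicative form used in the notebook works only because the columns are numeric.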
notebooks/BigDataset.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# Plotting is not part of the assignment, but I have included it so that the correctness of the output can be checked visually against the plot.
#
# #### Note: I used the matplotlib library for plotting because plotting is not part of the assignment.
#
# I have not used any inbuilt function or library for the assignment itself.

import matplotlib.pyplot as plt

def plot(Polygon, P):
    index_for_polygon = [i for i in range(len(Polygon))]
    index_for_polygon.append(0)
    x = []
    y = []
    for k in range(len(index_for_polygon)):
        i = index_for_polygon[k]
        x.append(Polygon[i][0])
        y.append(Polygon[i][1])
    plt.plot(x, y)
    plt.scatter(P[0], P[1])
    plt.show()

# # Assignment I

# +
# Function to get the point of intersection of two lines
def line_intersection(line1, line2):
    xdiff = (line1[0][0] - line1[1][0], line2[0][0] - line2[1][0])
    ydiff = (line1[0][1] - line1[1][1], line2[0][1] - line2[1][1])

    def det(a, b):
        return a[0] * b[1] - a[1] * b[0]

    div = det(xdiff, ydiff)
    if div == 0:
        return "False", "False"
    d = (det(*line1), det(*line2))
    x = det(d, xdiff) / div
    y = det(d, ydiff) / div
    return x, y

# Function to get the minimum of the Y coordinates of the polygon
def minimum_y(polygon):
    minimum = polygon[0][1]
    for i in range(1, len(polygon)):
        if polygon[i][1] < minimum:
            minimum = polygon[i][1]
    return minimum

# Function to get the maximum of the Y coordinates of the polygon
def maximum_y(polygon):
    maximum = polygon[0][1]
    for i in range(1, len(polygon)):
        if polygon[i][1] > maximum:
            maximum = polygon[i][1]
    return maximum

# Function to get the maximum of the X coordinates of the polygon
def maximum_x(polygon):
    maximum = polygon[0][0]
    for i in range(1, len(polygon)):
        if polygon[i][0] > maximum:
            maximum = polygon[i][0]
    return maximum

# Function to check whether the point lies inside or outside the polygon
def check_point_lies_inside_or_outside_a_polygon(polygon, point):
    # If the point is one of the vertices of the polygon, return True
    if point in polygon:
        return "True"
    # If the Y coordinate of the point is not between the minimum and maximum
    # Y coordinates of the polygon, return False
    if point[1] > maximum_y(polygon) or point[1] < minimum_y(polygon):
        return "False"
    # Cast a horizontal ray from the point to the right and count intersections
    # with the polygon edges; an odd count means the point is inside
    count_for_inside = 0
    index_for_polygon = [i for i in range(len(polygon))]
    index_for_polygon.append(0)
    for k in range(len(index_for_polygon)):
        if k + 1 < len(index_for_polygon):
            i = index_for_polygon[k]
            j = index_for_polygon[k + 1]
            max_x = maximum_x(polygon)
            line1 = [[point[0], point[1]], [max_x, point[1]]]
            line2 = [[polygon[i][0], polygon[i][1]], [polygon[j][0], polygon[j][1]]]
            x, y = line_intersection(line1, line2)
            # Check the string sentinel first: comparing the string "False"
            # with a number would raise a TypeError in Python 3
            if x != "False" and x >= point[0] and x <= max_x:
                if x > point[0]:
                    count_for_inside += 1
    if count_for_inside % 2 == 1:
        return "True"
    else:
        # Otherwise check whether the point lies on the boundary of the polygon;
        # if it does, return True, otherwise return False
        count_for_border = 0
        for k in range(len(index_for_polygon)):
            if k + 1 < len(index_for_polygon):
                i = index_for_polygon[k]
                j = index_for_polygon[k + 1]
                if (polygon[i][0] - point[0]) != 0:
                    slope1 = (polygon[i][1] - point[1]) / (polygon[i][0] - point[0])
                else:
                    slope1 = 9999
                if (polygon[j][0] - point[0]) != 0:
                    slope2 = (polygon[j][1] - point[1]) / (polygon[j][0] - point[0])
                else:
                    slope2 = 9999
                if slope1 == slope2:
                    count_for_border += 1
                if count_for_border > 0:
                    break
        if count_for_border:
            return "True"
        else:
            return "False"
# -

# Case 1
Polygon = [[1, 0], [8, 3], [8, 8], [1, 5]]
P = [3, 5]
Output = check_point_lies_inside_or_outside_a_polygon(Polygon, P)
print("Input")
print("Polygon:", Polygon)
print("P:", P)
print()
print("Output:", Output)
plot(Polygon, P)

# Case 2
Polygon = [[-3, 2], [-2, -0.8], [0, 1.2], [2.2, 0], [2, 4.5]]
P = [0, 0]
Output = check_point_lies_inside_or_outside_a_polygon(Polygon, P)
print("Input")
print("Polygon:", Polygon)
print("P:", P)
print()
print("Output:", Output)
plot(Polygon, P)

# Case 3
Polygon = [[1, 3], [3, 1], [5, 5]]
P = [3.5, 2]
Output = check_point_lies_inside_or_outside_a_polygon(Polygon, P)
print("Input")
print("Polygon:", Polygon)
print("P:", P)
print()
print("Output:", Output)
plot(Polygon, P)

# Case 4
Polygon = [[1, 3], [8, 0], [6, 5]]
P = [3, 4]
Output = check_point_lies_inside_or_outside_a_polygon(Polygon, P)
print("Input")
print("Polygon:", Polygon)
print("P:", P)
print()
print("Output:", Output)
plot(Polygon, P)

# Case 5
Polygon = [[1, 3], [8, 0], [8, 5], [1, 8]]
P = [3, 10]
Output = check_point_lies_inside_or_outside_a_polygon(Polygon, P)
print("Input")
print("Polygon:", Polygon)
print("P:", P)
print()
print("Output:", Output)
plot(Polygon, P)
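For cross-checking the cases above, here is a compact sketch of the standard even-odd ray-casting rule (my own addition, not part of the submitted assignment). Unlike the assignment code, it only counts crossings where an edge's y-span actually straddles the ray, so it needs no point-of-intersection helper; note that it does not treat boundary points specially (a point exactly on an edge, as in Case 3, is reported as outside):

```python
def point_in_polygon(polygon, point):
    # Even-odd rule: cast a horizontal ray to the right and toggle
    # "inside" at every edge whose y-span straddles the point's height.
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

print(point_in_polygon([[1, 0], [8, 3], [8, 8], [1, 5]], (3, 5)))   # True  (Case 1: inside)
print(point_in_polygon([[1, 3], [8, 0], [8, 5], [1, 8]], (3, 10)))  # False (Case 5: outside)
```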
ANEXPERTISE/Assignment 1.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# ## Exercise 1: Data preparation
#
# In order to use our package the spike data has to be packed in the following format: data is a numpy array of size `(num_trial,2)`, `dtype = np.ndarray`. Here `num_trial` is the number of trials, and for each trial the first column is the array of all of the ISIs (interspike intervals, `dtype=float`), and the second column contains the corresponding neuronal indices (`dtype=int`). Neuronal indices are integers starting from `0`. The first ISI is always equal to the time difference between trial start and the first spike time. If the trial end time is recorded, the last ISI will be the time difference between the last spike and the trial end time, while the corresponding index will be `-1`.
#
# Example:
#
# 1st neuron (id=0) spike times: `0.12, 0.15, 0.25`.
#
# 2nd neuron (id=1) spike times: `0.05, 0.2`.
#
# Trial 0 starts at `t=0`, and ends at `t=0.27`.
#
# Then the data will look like this:
#
# `data[0][0] = np.array([0.05,0.07,0.03,0.05,0.05,0.02])`
#
# `data[0][1] = np.array([1,0,0,1,0,-1])`
#
# While this format is convenient for optimization, it is not standard in the field. Another disadvantage is that it is harder to visualise the spike trains of each neuron when using this format (since the spikes of all of the neurons are contained in a single array at each trial). In this task you will write code that converts spike data from a more conventional format to our format.
#
# You will load 5 trials of data generated from a ramping dynamics that is stored in the following format: `data` is a dictionary with the keys `trial_end_time` and `spikes`. The first entry is a 1D array of length `num_trial` with the recorded trial end times (all trials start from `t=0`). The second entry is a numpy array of size `(num_trial,num_neuron)`, and each entry is a 1D array that contains the spike times of a single neuron on a single trial.
#
# For the example above, the data in this format will look like this:
#
# `data = {"trial_end_time": np.array([0.27]), "spikes": np.array([[np.array([0.12, 0.15, 0.25]), np.array([0.05, 0.2])]], dtype=object)}`.
#
# Write code that converts spike data packed in this format into the format accepted by our package.

# +
# Package installation - needed to run in Google Colab. Skip this cell if you use a local jupyter notebook
# !pip install git+https://github.com/MikGen/TestBrainFlow

# Make a data folder and download the data files.
# !mkdir data
import urllib.request
urllib.request.urlretrieve('https://github.com/MikGen/TestBrainFlow/raw/master/Tutorial/CCN2021/data/Ex1.pkl', "data/Ex1.pkl")
urllib.request.urlretrieve('https://github.com/MikGen/TestBrainFlow/raw/master/Tutorial/CCN2021/data/Ex3_datasample1.pkl', "data/Ex3_datasample1.pkl")
urllib.request.urlretrieve('https://github.com/MikGen/TestBrainFlow/raw/master/Tutorial/CCN2021/data/Ex3_datasample2.pkl', "data/Ex3_datasample2.pkl")
# -

# Import packages for part 1
import neuralflow
import numpy as np
import matplotlib.pyplot as plt, matplotlib.gridspec as gridspec
import scipy
import pickle

# +
# First load the spike data
with open("data/Ex1.pkl", "rb") as fp:
    data_spikes = pickle.load(fp)
spikes = data_spikes['spikes']
trial_ends = data_spikes['trial_end_time']

# Calculate the number of trials and the number of neurons by using the shape of the spikes array
######INSERT YOUR CODE HERE############
num_trial = None
num_neuron = None
#######################################

# Allocate the data_ISI array that will have the desired format compatible with our package
data_ISI = np.empty((num_trial, 2), dtype=np.ndarray)

for i in range(num_trial):
    # spike_ind will contain all neural ids for a given trial.
    spike_ind = []
    for j in range(num_neuron):
        spike_ind = spike_ind + [j] * len(spikes[i, j])
    # Convert to numpy array
    spike_ind = np.array(spike_ind)

    # Now concatenate all spikes from all of the neurons on trial i into a single array.
    # Hint: you can select all spikes from a trial k using spikes[k,:], and use the np.concatenate function
    # to concatenate these into a single 1D array.
    ######INSERT YOUR CODE HERE############
    spike_trial = None
    #######################################

    # To create ISIs, we need to sort the spike_trial array. Since we also need to permute the spike_ind array
    # based on this sorting, we need to find the indices that would sort spike_trial (Hint: use np.argsort)
    ######INSERT YOUR CODE HERE############
    ind_sort = None
    #######################################

    # Apply in-place sorting for both the spike_trial and spike_ind arrays using ind_sort
    ######INSERT YOUR CODE HERE############
    spike_trial = None
    spike_ind = None
    #######################################

    # data_ISI[i,0] consists of the first ISI (between trial start time and the first spike), the rest of the ISIs,
    # and the last ISI between the last spike and the end of the trial.
    data_ISI[i, 0] = np.concatenate(([spike_trial[0]], spike_trial[1:] - spike_trial[:-1], [trial_ends[i] - spike_trial[-1]]))

    # data_ISI[i,1] will contain spike_ind, and at the end it should have -1 to indicate the end of the trial.
    # Use the np.concatenate function to concatenate the spike_ind array and an array that consists of a single element: -1.
    # Note that np.concatenate can concatenate arrays and lists, but it cannot concatenate arrays with a single
    # number. Therefore, -1 should be converted into a list or numpy array before it can be concatenated with spike_ind.
    ######INSERT YOUR CODE HERE##################
    data_ISI[i, 1] = None
    #############################################

# +
# Now let us use our class method that does the same thing. This method, however, assumes that the
# spikes are a 2D array of size (num_neuron,num_trial), so we need to transpose spikes before using it.
# See the docstring for energy_model_data_generation.transform_spikes_to_isi() for more details

# Transpose the spikes array
spikes_check = spikes.T

# Define time_epoch as a list of tuples, with length equal to the number of trials.
# Each tuple consists of two elements: trial start time and trial end time.
time_epoch = [(0, te) for te in trial_ends]

# Initialize the class instance
em = neuralflow.EnergyModel()

# Use the docstring for the method below to learn more details
data_check = em.transform_spikes_to_isi(spikes_check, time_epoch, last_event_is_spike=False)

# Now calculate the difference between our data_ISI and data_check and make sure the error is small
error = 0
for i in range(num_trial):
    for j in range(2):  # column 0: ISIs, column 1: neuronal indices
        error += np.sum(np.abs(data_ISI[i][j] - data_check[i][j]))
if error < 10**-8:
    print('Success! Please go to the next exercise!')
else:
    print('Something is wrong. Please modify your code before proceeding!')
# -

# ## Exercise 2: Generating spike data
#
# In this exercise you will generate latent trajectories and spike data from a ramping dynamics (linear potential function).
#
# First, you will initialize a class variable with the desired parameters (potential, p0, D, firing rate functions) and visualize these parameters. Here you will have two neural responses with different firing rate functions. Then you will use our class method to generate the spike data and latent trajectories for each trial.

# +
# Specify the parameters for the EnergyModel class instance that will be used for data generation.
# See the docstring for the neuralflow.EnergyModel() class.
# Here we use the Spectral Elements Method (SEM) to solve the eigenvalue-eigenvector problem.
# Our x-domain will consist of Ne elements with Np points per element, total number of grid points N = Ne*(Np-1)+1.
# Nv is the number of retained eigenvectors/eigenvalues of the operator H; it affects the precision of
# the computation (must be less than or equal to N-2).
# 'peq_model' specifies the model of the potential function (ramping dynamics corresponds to a linear potential),
# D0 is the noise magnitude, p0_model specifies the model for the initial probability distribution of the latent states,
# boundary_mode specifies the boundary conditions (absorbing/reflecting),
# num_neuron is the number of neurons,
# firing_model specifies the firing rate function for each neuron (a list of dictionaries, one per neuron).
# You are encouraged to inspect the neuralflow.peq_models.py and neuralflow.firing_rate_models.py modules to see
# the available template models.
EnergyModelParams = {
    'pde_solve_param': {'method': {'name': 'SEM', 'gridsize': {'Np': 8, 'Ne': 64}}},
    'Nv': 447,
    'peq_model': {"model": "linear_pot", "params": {"slope": -2.65}},
    'D0': 0.56,
    'p0_model': {"model": "single_well", "params": {"miu": 200, "xmin": 0}},
    'boundary_mode': 'absorbing',
    'num_neuron': 2,
    'firing_model': [{"model": "linear", "params": {"r_slope": 50, "r_bias": 60}},
                     {"model": "sinus", "params": {"bias": 50, "amp": 40}}],
    'verbose': True
}

# Create the class instance for data generation (and call it em_gt, which means the ground-truth energy model)
em_gt = neuralflow.EnergyModel(**EnergyModelParams)
# -

# Let us plot the model parameters. Once the class instance is created, the functions peq(x), p0(x), and f(x) will be calculated and stored in the class instance variable. The potential is represented by the variable `peq`, which is equal to `peq = C*exp(-Phi(x))`, where the constant `C` normalizes peq so that the integral of peq over the latent space is 1. For details, see the Supplementary Information of the <NAME>, Engel arXiv 2020 paper. If the class instance is called `em`, then `peq` can be accessed by calling `em.peq_` (1D array of size `em.N`, where `em.N` is the total number of grid points), the latent domain (x-grid) can be accessed by calling `em.x_d_` (1D array of size `em.N`), the p0 distribution by calling `em.p0_` (1D array of size `em.N`), D - by calling `em.D_` (float), and the firing rates for all neurons - by calling `em.fr_` (2D array of size `(em.N,num_neuron)`).

# +
# Beginning of Ex2p1
fig = plt.figure(figsize=(20, 15))
gs = gridspec.GridSpec(2, 2, wspace=0.5, hspace=0.5)

ax = plt.subplot(gs[0])
ax.set_title('Potential function')
plt.xlabel('latent state, x', fontsize=14)
plt.ylabel(r'Potential, $\Phi(x)$', fontsize=14)
# Plot the model potential, Phi(x)=-log(peq), versus the latent domain grid x.
# The np.log function can be used to take the natural log
######INSERT YOUR CODE HERE##################
plt.plot(None, None)
#############################################

ax = plt.subplot(gs[1])
ax.set_title(r'Distribution $p_0(x)$')
plt.xlabel('latent state, x', fontsize=14)
plt.ylabel(r'$p_0(x)$', fontsize=14)
# Plot p0(x) versus x.
######INSERT YOUR CODE HERE##################
plt.plot(None, None)
#############################################

ax = plt.subplot(gs[2])
ax.set_title('Firing rate function for neuron 1')
plt.xlabel('latent state, x', fontsize=14)
plt.ylabel(r'$f_1(x)$', fontsize=14)
# Plot the firing rate function for the first neuron versus x
######INSERT YOUR CODE HERE##################
plt.plot(None, None)
#############################################

ax = plt.subplot(gs[3])
ax.set_title('Firing rate function for neuron 2')
plt.xlabel('latent state, x', fontsize=14)
plt.ylabel(r'$f_2(x)$', fontsize=14)
# Plot the firing rate function for the second neuron versus x
######INSERT YOUR CODE HERE##################
plt.plot(None, None)
#############################################
# -

# Now, let us generate the spike data. See the docstring of the generate_data method for more options.
# +
# Specify data generation parameters
num_trial = 100
data_gen_params = {'deltaT': 0.0001, 'time_epoch': [(0, 100)] * num_trial, 'last_event_is_spike': False}

# Generate the data
data, time_bins, diff_traj, metadata = em_gt.generate_data(**data_gen_params)
# -

# ## Exercise 3: Analysis and visualization of the generated data
#
# In this exercise you will first find the two trials with the longest and the two trials with the shortest duration. For the selected 4 trials you will be asked to visualize: (i) the latent trajectories, (ii) the firing rates of the second neuron, (iii) the spike rasters of the second neuron.
#
# Then, you can visually inspect the spike raster and make sure that you observe many spikes when the firing rate attains higher values, and few spikes when the firing rate is low.

# +
# Beginning of Ex2p2
# Find the indices of the 2 longest and the 2 shortest trajectories.
# The diffusion trajectories are stored in the diff_traj array.

# Find the duration of all trials using the time_bins list. For each trial, this list contains the array with all time
# points at which the latent trajectory was recorded.
trial_duration = np.zeros(num_trial)
######INSERT YOUR CODE HERE##################
for i in range(num_trial):
    trial_duration[i] = None - None
#############################################

# Argsort the trial durations
ind_sort = np.argsort(trial_duration)

# Select the 2 indices of the trajectories with the longest and the shortest durations
######INSERT YOUR CODE HERE##################
ind_longest = None
ind_shortest = None
#############################################

# Let us plot the latent trajectories for the selected 4 trials.
color_set_1 = [[1, 0, 0], [1, 0.58, 0.77], [0.77, 0, 0.77]]
color_set_2 = [[0.13, 0.34, 0.48], [0.34, 0.8, 0.6], [0, 1, 0]]
fig = plt.figure(figsize=(15, 5))
plt.title(r'The two longest and two shortest latent trajectories $x(t)$')
plt.ylabel('latent state, x', fontsize=14)
plt.xlabel(r'$time, sec$', fontsize=14)
for i in range(2):
    plt.plot(time_bins[ind_longest[i]], diff_traj[ind_longest[i]], color=color_set_1[i])
    plt.plot(time_bins[ind_shortest[i]], diff_traj[ind_shortest[i]], color=color_set_2[i])

# +
# Beginning of Ex2p3
# Now plot the firing rate f_2(x(t)) for the 2nd neuron on the selected trials. Note that the firing rate function
# for the second neuron is accessed by calling em_gt.fr_[:,1] (as opposed to em_gt.fr_[:,0] for the 1st neuron).

# The firing rate function is defined only at the grid points. However, the generated latent trajectory can
# take arbitrary values in between the grid points. Thus, we need to interpolate this function
# using scipy.interpolate.interp1d
######INSERT YOUR CODE HERE##################
fr_interpolate = scipy.interpolate.interp1d(None, None)
#############################################

# Now calculate the firing rates f(x(t)) on the selected trials
fr_long_tr, fr_short_tr = np.empty((2,), dtype=np.ndarray), np.empty((2,), dtype=np.ndarray)
######INSERT YOUR CODE HERE##################
for i in range(2):
    fr_long_tr[i] = fr_interpolate(None)
    fr_short_tr[i] = fr_interpolate(None)
#############################################

# Now plot it
fig = plt.figure(figsize=(15, 5))
plt.title(r'Firing rate $f_2(x(t))$ of the second neuron on the selected 4 trials')
plt.ylabel('Firing rate, $hz$', fontsize=14)
plt.xlabel(r'$time, sec$', fontsize=14)
for i in range(2):
    plt.plot(time_bins[ind_longest[i]], fr_long_tr[i], color=color_set_1[i])
    plt.plot(time_bins[ind_shortest[i]], fr_short_tr[i], color=color_set_2[i])

# +
# Beginning of Ex2p4
# Now, let us plot the spike rasters of the 2nd neuron on the four selected trials.
# To do that, we first need to extract the spike times of this neuron
spikes_long_trial = np.empty((ind_longest.size,), dtype=np.ndarray)

# First, consider the longest trials
for i, trial in enumerate(ind_longest):
    ######INSERT YOUR CODE HERE##################
    # First, find the spike times of all of the neurons at a given trial by taking a cumsum of data[trial][0]
    spikes_all = np.cumsum(None)
    #############################################
    # Now find the corresponding neural indices
    nids = data[trial][1]
    # Now filter the spike times by index 1 (which corresponds to the 2nd neuron)
    spikes_long_trial[i] = spikes_all[nids == 1]

# Do the same thing for the shortest trials
spikes_short_trial = np.empty((ind_shortest.size,), dtype=np.ndarray)
for i, trial in enumerate(ind_shortest):
    spikes_all = np.cumsum(data[trial][0])
    nids = data[trial][1]
    spikes_short_trial[i] = spikes_all[nids == 1]

# Now visualize it
fig = plt.figure(figsize=(15, 5))
plt.title('Spike data on 4 selected trials for the 2nd neuron')
plt.ylabel('Trial number', fontsize=14)
plt.xlabel(r'$time, sec$', fontsize=14)
for i in range(2):
    plt.plot(spikes_long_trial[i], i * np.ones(spikes_long_trial[i].size), '|', color=color_set_1[i], markersize=60)
    plt.plot(spikes_short_trial[i], (i + 2) * np.ones(spikes_short_trial[i].size), '|', color=color_set_2[i], markersize=60)
plt.yticks([0, 1, 2, 3])
# -

# ## Putting it all together
#
# Now, you can visually inspect the spike raster and make sure that you observe many spikes when the firing rate attains higher values, and few spikes when the firing rate is low.

# +
fig = plt.figure(figsize=(20, 25))
gs = gridspec.GridSpec(3, 1, wspace=0.5, hspace=0.5)

# Latent trajectories:
ax = plt.subplot(gs[0])
ax.set_title(r'The two longest and two shortest latent trajectories $x(t)$')
plt.ylabel('latent state, x', fontsize=14)
plt.xlabel(r'$time, sec$', fontsize=14)
for i in range(2):
    ax.plot(time_bins[ind_longest[i]], diff_traj[ind_longest[i]], color=color_set_1[i])
    ax.plot(time_bins[ind_shortest[i]], diff_traj[ind_shortest[i]], color=color_set_2[i])

# Firing rates of the second neuron
ax = plt.subplot(gs[1])
ax.set_title(r'Firing rate $f_2(x(t))$ of the second neuron on the selected 4 trials')
plt.ylabel('Firing rate, $hz$', fontsize=14)
plt.xlabel(r'$time, sec$', fontsize=14)
for i in range(2):
    ax.plot(time_bins[ind_longest[i]], fr_long_tr[i], color=color_set_1[i])
    ax.plot(time_bins[ind_shortest[i]], fr_short_tr[i], color=color_set_2[i])

# Spikes of the second neuron
ax = plt.subplot(gs[2])
ax.set_title(r'Spike data on 4 selected trials for the 2nd neuron')
plt.ylabel('Trial number', fontsize=14)
plt.xlabel(r'$time, sec$', fontsize=14)
for i in range(2):
    ax.plot(spikes_long_trial[i], i * np.ones(spikes_long_trial[i].size), '|', color=color_set_1[i], markersize=60)
    ax.plot(spikes_short_trial[i], (i + 2) * np.ones(spikes_short_trial[i].size), '|', color=color_set_2[i], markersize=60)
plt.yticks([0, 1, 2, 3])
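As a quick numpy-only illustration of the ISI format from Exercise 1 (my own toy example, using the numbers given in the text): a cumulative sum inverts the conversion and recovers each neuron's spike times, which is exactly the `np.cumsum` trick used in Ex2p4 above.

```python
import numpy as np

# Toy trial from Exercise 1: ISIs and the matching neuronal indices (-1 marks trial end)
isis = np.array([0.05, 0.07, 0.03, 0.05, 0.05, 0.02])
ids = np.array([1, 0, 0, 1, 0, -1])

# Cumulative sum of the ISIs gives event times; filtering by neuron id
# recovers each neuron's spike train
times = np.cumsum(isis)
neuron0 = times[ids == 0]  # spike times of the 1st neuron (id=0)
neuron1 = times[ids == 1]  # spike times of the 2nd neuron (id=1)
trial_end = times[-1]

print(neuron0, neuron1, trial_end)
```

This should reproduce the spike times `0.12, 0.15, 0.25` for neuron 0, `0.05, 0.2` for neuron 1, and the trial end time `0.27` (up to floating-point error).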
Tutorial/CCN2021/Exercises/Part1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="gXXhctqjgXO7" # ##### Copyright 2021 The Cirq Developers # + cellView="form" id="z2RJVa8qgXou" #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # + [markdown] id="dd95be2a71eb" # # XEB and Coherent Error # + id="046b07823210" import numpy as np import cirq from cirq.contrib.svg import SVGCircuit # + [markdown] id="ace31cc4d258" # ## Set up Random Circuits # # We create a set of 10 random, two-qubit `circuits` which uses `SINGLE_QUBIT_GATES` to randomize the circuit and `SQRT_ISWAP` as the entangling gate. We will ultimately truncate each of these circuits according to `cycle_depths`. Please see [the XEB Theory notebook](./xeb_theory.ipynb) for more details. # + id="448db4e165e5" exponents = np.linspace(0, 7/4, 8) exponents # + id="91c5d7d9731f" import itertools SINGLE_QUBIT_GATES = [ cirq.PhasedXZGate(x_exponent=0.5, z_exponent=z, axis_phase_exponent=a) for a, z in itertools.product(exponents, repeat=2) ] SINGLE_QUBIT_GATES[:10], '...' 
# + id="fd2a6e10afe5" import cirq.google as cg from cirq.experiments import random_quantum_circuit_generation as rqcg SQRT_ISWAP = cirq.ISWAP**0.5 q0, q1 = cirq.LineQubit.range(2) # + id="bf85fef74b6d" # Make long circuits (which we will truncate) circuits = [ rqcg.random_rotations_between_two_qubit_circuit( q0, q1, depth=100, two_qubit_op_factory=lambda a, b, _: SQRT_ISWAP(a, b), single_qubit_gates=SINGLE_QUBIT_GATES) for _ in range(10) ] # + id="c7c044ec12ac" # We will truncate to these lengths cycle_depths = np.arange(3, 100, 9) cycle_depths # + [markdown] id="2e0f9de60ef1" # ## Emulate coherent error # # We request a $\sqrt{i\mathrm{SWAP}}$ gate, but the quantum hardware may execute something subtly different. Therefore, we move to a more general 5-parameter two qubit gate, `cirq.PhasedFSimGate`. # # This is the general excitation-preserving two-qubit gate, and the unitary matrix of PhasedFSimGate(θ, ζ, χ, γ, φ) is: # # [[1, 0, 0, 0], # [0, exp(-iγ - iζ) cos(θ), -i exp(-iγ + iχ) sin(θ), 0], # [0, -i exp(-iγ - iχ) sin(θ), exp(-iγ + iζ) cos(θ), 0], # [0, 0, 0, exp(-2iγ-iφ)]]. # # This parametrization follows eq (18) in https://arxiv.org/abs/2010.07965. Please read the docstring for `cirq.PhasedFSimGate` for more information. # # With the following code, we show how `SQRT_ISWAP` can be written as a specific `cirq.PhasedFSimGate`. # + id="a598f743d18a" sqrt_iswap_as_phased_fsim = cirq.PhasedFSimGate.from_fsim_rz( theta=-np.pi/4, phi=0, rz_angles_before=(0,0), rz_angles_after=(0,0)) np.testing.assert_allclose( cirq.unitary(sqrt_iswap_as_phased_fsim), cirq.unitary(SQRT_ISWAP), atol=1e-8 ) # + [markdown] id="31492475ce1b" # We'll also create a perturbed version. 
Note the $\pi/16$ `phi` angle:

# + id="da1de99252fa"
perturbed_sqrt_iswap = cirq.PhasedFSimGate.from_fsim_rz(
    theta=-np.pi/4,
    phi=np.pi/16,
    rz_angles_before=(0, 0),
    rz_angles_after=(0, 0))
np.round(cirq.unitary(perturbed_sqrt_iswap), 3)

# + [markdown] id="f4118b7dcbdc"
# We'll use this perturbed gate along with the `GateSubstitutionNoiseModel` to create a simulator that has a constant coherent error: each `SQRT_ISWAP` will be substituted with our perturbed version.

# + id="625cd8c4e43c"
def _sub_iswap(op):
    # Replace every ideal SQRT_ISWAP operation with the perturbed gate
    if op.gate == SQRT_ISWAP:
        return perturbed_sqrt_iswap.on(*op.qubits)
    return op

noise = cirq.devices.noise_model.GateSubstitutionNoiseModel(_sub_iswap)
noisy_sim = cirq.DensityMatrixSimulator(noise=noise)

# + [markdown] id="0ae1dfafb03c"
# ## Run the benchmark circuits
#
# We use the function `sample_2q_xeb_circuits` to execute all of our circuits at the requested `cycle_depths`.

# + id="ba0dcff52057"
from cirq.experiments.fidelity_estimation import sample_2q_xeb_circuits

sampled_df = sample_2q_xeb_circuits(
    sampler=noisy_sim,
    circuits=circuits,
    cycle_depths=cycle_depths,
    repetitions=10_000)
sampled_df.head()

# + [markdown] id="51292810e6a8"
# ## Compute fidelity assuming `SQRT_ISWAP`
#
# In contrast to the XEB Theory notebook, here we have added only coherent error (not depolarizing).
Nevertheless, the random, scrambling nature of the circuits shows circuit fidelity decaying with depth (at least when we assume that we were trying to apply a pure `SQRT_ISWAP` gate).

# + id="b5390dc443ab"
from cirq.experiments.fidelity_estimation import benchmark_2q_xeb_fidelities

fids = benchmark_2q_xeb_fidelities(sampled_df, circuits, cycle_depths)
fids.head()

# + id="8c08c9ab8109"
# %matplotlib inline
from matplotlib import pyplot as plt

xx = np.linspace(0, fids['cycle_depth'].max())
plt.plot(xx, (1-5e-3)**(4*xx), label=r'Exponential Reference')
plt.plot(fids['cycle_depth'], fids['fidelity'], 'o-', label='Perturbed fSim')
plt.ylabel('Circuit fidelity')
plt.xlabel('Cycle Depth $d$')
plt.legend(loc='best')

# + [markdown] id="6025a292d19b"
# ## Optimize `PhasedFSimGate` parameters
#
# We know which circuits we requested, and in this simulated example we know exactly what coherent error has happened. But in a real experiment, there is likely unknown coherent error that you would like to characterize. Therefore, we make the five angles in `PhasedFSimGate` free parameters and use a classical optimizer to find the set of parameters that best describes the data we collected from the noisy simulator (or device, if this were a real experiment).
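Before fitting all five angles, it helps to see the simpler version of the same idea: the exponential reference curve above has the form $f(d) = a \cdot p^d$, and its parameters can be recovered by ordinary least squares on log-fidelities. A minimal numpy sketch on synthetic, noise-free values (`a_true` and `p_true` are assumed for illustration, not taken from the experiment):

```python
import numpy as np

# assumed per-cycle decay model: f(d) = a * p**d
p_true, a_true = 0.98, 0.95
depths = np.arange(3, 100, 9)
fid_vals = a_true * p_true**depths

# log f = log a + d * log p is linear in d, so fit a degree-1 polynomial
slope, intercept = np.polyfit(depths, np.log(fid_vals), 1)
assert abs(np.exp(slope) - p_true) < 1e-6      # recovered p
assert abs(np.exp(intercept) - a_true) < 1e-6  # recovered a
```

The full characterization below does the analogous thing, except the fit parameters are gate angles rather than a single decay constant.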
# + id="baff45b4ad70"
import multiprocessing
pool = multiprocessing.get_context('spawn').Pool()

# + id="c3617e5b28de"
from cirq.experiments.fidelity_estimation import (
    parameterize_phased_fsim_circuit,
    characterize_phased_fsim_parameters_with_xeb,
    SqrtISwapXEBOptions,
)

# Only characterize theta and phi; hold the other three angles fixed.
options = SqrtISwapXEBOptions(
    characterize_theta=True,
    characterize_phi=True,
    characterize_chi=False,
    characterize_gamma=False,
    characterize_zeta=False
)
p_circuits = [parameterize_phased_fsim_circuit(circuit, options) for circuit in circuits]
res = characterize_phased_fsim_parameters_with_xeb(
    sampled_df, p_circuits, cycle_depths, options, pool=pool)

# + id="55535fcc871d"
res

# + id="305182fdc372"
_, names = options.get_initial_simplex_and_names()
final_params = dict(zip(names, res.x))
final_params

# + id="516392c92916"
fids_opt = benchmark_2q_xeb_fidelities(
    sampled_df, p_circuits, cycle_depths, param_resolver=final_params)

# + id="a11414898e89"
xx = np.linspace(0, fids['cycle_depth'].max())
p_depol = 5e-3  # from above
plt.plot(xx, (1-p_depol)**(4*xx), label=r'Exponential Reference')
plt.axhline(1, color='grey', ls='--')
plt.plot(fids['cycle_depth'], fids['fidelity'], 'o-', label='Perturbed fSim')
plt.plot(fids_opt['cycle_depth'], fids_opt['fidelity'], 'o-', label='Refit fSim')
plt.ylabel('Circuit fidelity')
plt.xlabel('Cycle Depth')
plt.legend(loc='best')
plt.tight_layout()
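As a standalone numeric check of the `PhasedFSimGate` parametrization quoted earlier, the sketch below (numpy only, no cirq) rebuilds the unitary from that matrix, confirms that $\theta = -\pi/4$ with all other angles zero reproduces $\sqrt{i\mathrm{SWAP}}$, and evaluates the overlap $|\mathrm{Tr}(U^\dagger V)|/4$ with the $\varphi = \pi/16$ perturbed gate:

```python
import numpy as np

def phased_fsim(theta, zeta, chi, gamma, phi):
    # Unitary from eq. (18) of arXiv:2010.07965, as quoted in the text above.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [1, 0, 0, 0],
        [0, np.exp(-1j*gamma - 1j*zeta)*c, -1j*np.exp(-1j*gamma + 1j*chi)*s, 0],
        [0, -1j*np.exp(-1j*gamma - 1j*chi)*s, np.exp(-1j*gamma + 1j*zeta)*c, 0],
        [0, 0, 0, np.exp(-2j*gamma - 1j*phi)],
    ])

s2 = 1/np.sqrt(2)
sqrt_iswap_u = np.array([
    [1, 0, 0, 0],
    [0, s2, 1j*s2, 0],
    [0, 1j*s2, s2, 0],
    [0, 0, 0, 1],
])
# theta = -pi/4, all other angles zero, gives sqrt(iSWAP)
np.testing.assert_allclose(phased_fsim(-np.pi/4, 0, 0, 0, 0), sqrt_iswap_u, atol=1e-12)

# overlap with the phi = pi/16 perturbed gate used in the noise model
v = phased_fsim(-np.pi/4, 0, 0, 0, np.pi/16)
overlap = abs(np.trace(sqrt_iswap_u.conj().T @ v)) / 4
assert 0.995 < overlap < 0.998  # close to, but below, 1
```

The overlap being slightly below 1 is the per-gate signature of the coherent error that the decay curves above measure at the circuit level.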
docs/characterization/xeb_coherent_noise.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # + import numpy.random # generate noisy/random data from sklearn import linear_model # training linear model import matplotlib.pyplot # general plotting from mpl_toolkits.mplot3d import Axes3D # 3d plotting import pandas as pd import numpy as np import operator # used for sorting array of tuples # - MIN_X = -15 MAX_X = 15 NUM_INPUTS = 100 # + ran_x = numpy.random.uniform(low=MIN_X, high=MAX_X, size=(NUM_INPUTS, 1)) # type: [[float]] dataset = pd.DataFrame(data=ran_x, columns=['x']) dataset.head() # - dataset['y'] = 0.3*dataset['x']+1 # line is y = 0.3x + 1, no noise yet dataset.plot.scatter(x='x', y='y') noise_y = numpy.random.normal(size=NUM_INPUTS) # generate noise using normal distribution dataset['y'] = dataset['y']+noise_y # add noise to dataset dataset.plot.scatter(x='x', y='y') model_one = linear_model.LinearRegression() # unfitted model print(dataset['x']) # see current shape of data print(dataset['y']) # see current shape of data reshaped_x = dataset['x'].values.reshape(-1, 1) reshaped_y = dataset['y'].values.reshape(-1, 1) print(reshaped_x) # see new shape print(reshaped_y) # see new shape model_one.fit(X=reshaped_x, y=reshaped_y) # fit model to data # function to print weights of model def print_model_fit(model): print('Intercept: {i} Coefficients: {c}'.format(i=model.intercept_, c=model.coef_)) print_model_fit(model_one) new_x_values = [[-1.23], [0.66], [1.98]] # some random values to predict predictions = model_one.predict(new_x_values) for datapoint, prediction in zip(new_x_values, predictions): print('Model prediction for {}: {}'.format(datapoint[0], prediction[0])) # scatter data and draw line on the same graph def linear_regression_against_data(model, data_x, data_y, MIN_X, MAX_X): fig = 
matplotlib.pyplot.figure(1)
    fig.suptitle('Data vs Regression')
    matplotlib.pyplot.xlabel('x-axis')
    matplotlib.pyplot.ylabel('y-axis')
    matplotlib.pyplot.scatter(data_x, data_y)
    all_X = numpy.linspace(MIN_X, MAX_X)
    all_Y = model.predict(list(zip(all_X)))
    matplotlib.pyplot.plot(all_X, all_Y)

linear_regression_against_data(model_one, reshaped_x, reshaped_y, MIN_X, MAX_X)

# # Solution for #1
# Answers to questions about dataset 1.
# 1. I expected `print_model_fit()` to print numbers similar to `Intercept: [1] Coefficients: [[0.3]]`, because the line used to make the data was y = 0.3x + 1.
# 2. The numbers the model should have output when given the new values should be around x*0.3 + 1. I say around and not exactly because noise was added to the data. So given the x-values -1.23, 0.66, and 1.98, I expected the model to predict y-values close to 0.631, 1.198, and 1.594.
# 3. I expected the line to generally follow the trend of all the points.
# 4. The line of code I would change would be `dataset['y'] = 0.3*dataset['x']+1`. I would first change 0.3 to an arbitrary number, say 5.5, and would then run all the code after that. I would make sure that the output of `print_model_fit(model_one)` reported a new coefficient, one close to the new arbitrary number. This would make sure that the model didn't just happen to work only with m=0.3. A similar approach would be taken for the y-intercept. Doing both of these would ensure that the model was working properly.
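The sanity check described in point 4 can also be done in closed form: ordinary least squares on freshly generated data should recover m ≈ 0.3 and b ≈ 1. A minimal numpy sketch (the seed and tolerances are arbitrary choices, not from the notebook):

```python
import numpy as np

rng = np.random.default_rng(42)
xs = rng.uniform(-15, 15, size=100)
ys = 0.3 * xs + 1 + rng.normal(size=100)  # same generating line as above

# closed-form least squares: solve [x 1] @ [m, b] ~= y
A = np.column_stack([xs, np.ones_like(xs)])
m, b = np.linalg.lstsq(A, ys, rcond=None)[0]
assert abs(m - 0.3) < 0.1   # slope close to 0.3
assert abs(b - 1.0) < 0.5   # intercept close to 1
```

This is the same computation `LinearRegression.fit()` performs internally, so the two should agree up to numerical precision on the same data.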
model_two = linear_model.LinearRegression() model_two.fit(X=reshaped_y, y=reshaped_x) # swapping x and y # since x and y have been swapped, we have to use the min and max y-values MIN_Y = 0.3*MIN_X+1 MAX_Y = 0.3*MAX_X+1 linear_regression_against_data(model_two, reshaped_y, reshaped_x, MIN_Y, MAX_Y) model_three = linear_model.LinearRegression() model_three.fit(X=reshaped_x, y=reshaped_x) # y = x linear_regression_against_data(model_three, reshaped_x, reshaped_x, MIN_X, MAX_X) # + MIN_X_3D = -10 MAX_X_3D = 10 NUM_INPUTS_3D = 50 noise_3d = numpy.random.normal(size=NUM_INPUTS_3D) x_3d = numpy.random.uniform(low=MIN_X_3D, high=MAX_X_3D, size=NUM_INPUTS_3D) y_3d = numpy.random.uniform(low=MIN_X_3D, high=MAX_X_3D, size=NUM_INPUTS_3D) z_3d = 0.5*x_3d - 2.7*y_3d - 2 + noise_3d # + data_3d = pd.DataFrame(data=x_3d, columns=['x']) data_3d['y'] = y_3d data_3d['z'] = z_3d data_3d.head() # + model_3d = linear_model.LinearRegression() model_3d.fit(data_3d[['x', 'y']], data_3d['z']) # why don't we have to reshape data in this example? print_model_fit(model_3d) # - def plot_3d(model, x, y, z, min, max): fig = matplotlib.pyplot.figure(1) fig.suptitle('3D Data vs Linear Plane') axes = fig.gca(projection='3d') axes.set_xlabel('x') axes.set_ylabel('y') axes.set_zlabel('z') axes.scatter(x, y, z) X = Y = numpy.arange(min, max, 0.05) X, Y = numpy.meshgrid(X, Y) Z = numpy.array(model.predict(list(zip(X.flatten(), Y.flatten())))).reshape(X.shape) axes.plot_surface(X, Y, Z, alpha=0.1) matplotlib.pyplot.show() plot_3d(model_3d, x_3d, y_3d, z_3d, MIN_X_3D, MAX_X_3D) # # Solution for #2 # # 1. I expected `print_model_fit()` to print: `Intercept: [-2] Coefficients: [0.5 -2.7]` because the line used to make the data was z = 0.5x - 2.7y - 2. # 2. I expected all the data to generally lie on the plane, because the plane is supposed to fit the general trend of the data. # 3. 
To check whether the linear regression code was working properly, I would take the same approach as with the 2D dataset: I would change the line `z_3d = 0.5*x_3d - 2.7*y_3d - 2 + noise_3d` by changing the coefficients. Then, I would check whether `print_model_fit()` changed to reflect the new values.
# 4. There were a few minor differences between working with this and the 2D dataset. One was that the data did not have to be reshaped (not sure why), and some more code had to be written to graph the 3D data and mesh.

MIN_X_QUAD = 0
MAX_X_QUAD = 20
NUM_INPUTS_QUAD = 50

x_quadratic = numpy.random.uniform(low=MIN_X_QUAD, high=MAX_X_QUAD, size=(NUM_INPUTS_QUAD, 1))
data_quadratic = pd.DataFrame(data=x_quadratic, columns=['x'])
noise_quadratic = numpy.random.normal(size=NUM_INPUTS_QUAD)

# y = x^2 - 20x + 1.5
# vertex: -b/2a = 10
data_quadratic['y'] = data_quadratic['x']*data_quadratic['x']-20*data_quadratic['x']+1.5+noise_quadratic

x_quadratic = data_quadratic['x'].values.reshape(-1, 1)
y_quadratic = data_quadratic['y'].values.reshape(-1, 1)

model_quadratic = linear_model.LinearRegression()  # generate model
model_quadratic.fit(x_quadratic, y_quadratic)  # fit model

# show results
print_model_fit(model_quadratic)
linear_regression_against_data(model_quadratic, x_quadratic, y_quadratic, MIN_X_QUAD, MAX_X_QUAD)

# +
half = int(NUM_INPUTS_QUAD/2)

# learned how reshape() works from https://stackoverflow.com/questions/18691084/what-does-1-mean-in-numpy-reshape
# reshape x and y data to 1D arrays
x_quadratic = x_quadratic.reshape(1, -1)[0]
y_quadratic = y_quadratic.reshape(1, -1)[0]

# combine x and y data into an array of tuples where, in each tuple, the first element is x and the second is y
x_y_combined = []
for i in range(0, NUM_INPUTS_QUAD):
    x_y_combined.append((x_quadratic[i], y_quadratic[i]))

# learned how to sort from
https://algocoding.wordpress.com/2015/04/14/how-to-sort-a-list-of-tuples-in-python-3-4/
# sort x and y data by x values, ascending
x_y_combined.sort(key=operator.itemgetter(0))

x_quadratic = []
y_quadratic = []
for pair in x_y_combined:
    x_quadratic.append([pair[0]])
    y_quadratic.append([pair[1]])

# split sorted dataset into two halves
# (slicing to NUM_INPUTS_QUAD, not NUM_INPUTS_QUAD-1, so the last point isn't dropped)
x_quadratic_left = x_quadratic[0:half]
y_quadratic_left = y_quadratic[0:half]
x_quadratic_right = x_quadratic[half:NUM_INPUTS_QUAD]
y_quadratic_right = y_quadratic[half:NUM_INPUTS_QUAD]
# -

model_quadratic_left = linear_model.LinearRegression()  # generate left-half model
model_quadratic_left.fit(x_quadratic_left, y_quadratic_left)  # fit left-half model
model_quadratic_right = linear_model.LinearRegression()  # generate right-half model
model_quadratic_right.fit(x_quadratic_right, y_quadratic_right)  # fit right-half model

linear_regression_against_data(model_quadratic_left, x_quadratic_left, y_quadratic_left, MIN_X_QUAD, MAX_X_QUAD)
linear_regression_against_data(model_quadratic_right, x_quadratic_right, y_quadratic_right, MIN_X_QUAD, MAX_X_QUAD)

# # First Solution for Exercise #3
# As you can see in my code above, I first sorted the data by x-value, ascending. I then split the data into halves, with the first half being the left side of the parabola and the second half being the right side. I then fit a model to each half.
#
# There is one problem with this approach: it assumes there is an equal amount of data on the left and right halves. When generating my data I kept this in mind to ensure that my approach would work.
#
# One potential method of getting around this issue is determining the vertex of the parabola and then splitting the data accordingly.
In practice this is quite difficult: you can't just pick the datapoint with the smallest y-value, because that doesn't take into account that the parabola may open downwards.

# https://scikit-learn.org/stable/auto_examples/linear_model/plot_polynomial_interpolation.html
# https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html#sklearn.preprocessing.PolynomialFeatures
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

degree = 2  # quadratic
model_polynomial = make_pipeline(PolynomialFeatures(degree), linear_model.LinearRegression())  # make model
# from my understanding, the code above replaces the matrix used for the regression with a polynomial version of it
model_polynomial.fit(x_quadratic, y_quadratic)  # fit model
linear_regression_against_data(model_polynomial, x_quadratic, y_quadratic, MIN_X_QUAD, MAX_X_QUAD)

# # Second Solution for #3
# I looked into the sklearn documentation on how to properly do polynomial regression.
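What `PolynomialFeatures(degree=2)` does can be reproduced by hand: build the feature columns [1, x, x²] and solve the least-squares problem directly. A minimal numpy sketch on data generated like the parabola above (the seed and tolerances are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
xq = rng.uniform(0, 20, size=200)
yq = xq**2 - 20*xq + 1.5 + rng.normal(size=200)  # same parabola as above

# manual "PolynomialFeatures": design matrix with columns [1, x, x^2]
A = np.column_stack([np.ones_like(xq), xq, xq**2])
c0, c1, c2 = np.linalg.lstsq(A, yq, rcond=None)[0]
assert abs(c2 - 1.0) < 0.05    # quadratic coefficient close to 1
assert abs(c1 + 20.0) < 0.5    # linear coefficient close to -20
```

The pipeline then just feeds these expanded columns into an ordinary `LinearRegression`, which is why the fit is still "linear" in the coefficients even though the curve is a parabola.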
# +
# https://scikit-learn.org/stable/modules/linear_model.html#ridge-regression-and-classification
#model_with_regularization = linear_model.Ridge(alpha=.5)  # make model with Ridge regularization

# https://numpy.org/doc/stable/reference/generated/numpy.logspace.html
# https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html#sklearn.linear_model.RidgeCV
model_with_regularization = linear_model.RidgeCV(alphas=np.logspace(-5, 5, 15), normalize=True)  # Ridge regularization w/ built-in cross-validation
model_with_regularization.fit(X=reshaped_x, y=reshaped_y)  # fit model to data
model_without_regularization = model_one  # model without regularization, already fitted

# print coefficients of the two models for comparison
print_model_fit(model_with_regularization)
print_model_fit(model_without_regularization)

# graph the two lines against the data
linear_regression_against_data(model_with_regularization, reshaped_x, reshaped_y, MIN_X, MAX_X)
linear_regression_against_data(model_without_regularization, reshaped_x, reshaped_y, MIN_X, MAX_X)

# compare the two models
# compare r-values for training data (the model with regularization should have a lower r-value)
# https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
r_regularization_training = model_with_regularization.score(reshaped_x, reshaped_y)
r_standard_training = model_without_regularization.score(reshaped_x, reshaped_y)
r_diff_training = r_regularization_training-r_standard_training  # this should be negative, because the model without regularization should perform better on the training data
print("Training data. With regularization: {}. Without regularization: {}. 
Diff: {}".format(r_regularization_training, r_standard_training, r_diff_training)) # compare r-values for test data # generate test data ran_test_x = numpy.random.uniform(low=MIN_X, high=MAX_X, size=(NUM_INPUTS, 1)) test_dataset = pd.DataFrame(data=ran_test_x, columns=['x']) test_dataset['y'] = 0.3*test_dataset['x']+1 # line is y = 0.3x + 1 # noise is added because the model without regularization should be overfitting to the specific noise of the training data # https://numpy.org/doc/stable/reference/random/generated/numpy.random.normal.html?highlight=numpy%20random%20normal#numpy.random.normal noise_test_y = numpy.random.normal(scale=1.5, size=NUM_INPUTS) test_dataset['y'] = test_dataset['y']+noise_test_y test_reshaped_x = test_dataset['x'].values.reshape(-1, 1) test_reshaped_y = test_dataset['y'].values.reshape(-1, 1) # calculate r-values and print r_regularization_testing = model_with_regularization.score(test_reshaped_x, test_reshaped_y) r_standard_testing = model_without_regularization.score(test_reshaped_x, test_reshaped_y) r_diff_testing = r_regularization_testing-r_standard_testing # this should be positive, because the model without regularization should be slightly overfitted print("Test data. With regularization: {}. Without regularization: {}. Diff: {}".format(r_regularization_testing, r_standard_testing, r_diff_testing)) # - # # Solution for #4 # As seen above, the r-score of the regularized model was less than the r-score of the regular model for the training set, but higher for the test set. This makes sense: without regularization, the regular model should have slightly overfitted to the training set, causing it to perform better on the training set but worse on the new data. # # The scores are so similar because there was not too much noise added to the data, so the effect of adding regularization was not very significant. 
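For intuition about what `RidgeCV` is doing underneath: ridge regression has the closed form w = (XᵀX + αI)⁻¹Xᵀy, and increasing α shrinks the coefficient toward zero. A minimal numpy sketch on data generated like the set above (the seed and α values are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.uniform(-15, 15, size=(100, 1))
yr = 0.3 * X[:, 0] + 1 + rng.normal(size=100)

def ridge_coef(X, y, alpha):
    # closed form: w = (X^T X + alpha I)^(-1) X^T y; intercept handled by centering
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    return np.linalg.solve(Xc.T @ Xc + alpha * np.eye(X.shape[1]), Xc.T @ yc)

w_ols = ridge_coef(X, yr, 0.0)[0]     # alpha = 0 reduces to ordinary least squares
w_big = ridge_coef(X, yr, 1e5)[0]     # heavy regularization
assert abs(w_ols - 0.3) < 0.05        # OLS recovers the true slope
assert abs(w_big) < abs(w_ols)        # larger alpha shrinks the coefficient
```

`RidgeCV` simply tries each α in the grid and keeps the one with the best cross-validated score, which is why its coefficient sits close to, but slightly below, the unregularized one here.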
degree = 2 # quadratic model_polynomial=make_pipeline(PolynomialFeatures(degree), linear_model.RidgeCV(alphas=np.logspace(-5, 5, 15), normalize=True)) # make polynomial model with Ridge regularization w/ built in cross validation # from my understanding, what the above code does is replace the matrix used for the regression with a polynomial version of it model_polynomial.fit(x_quadratic, y_quadratic) # fit model linear_regression_against_data(model_polynomial, x_quadratic, y_quadratic, MIN_X_QUAD, MAX_X_QUAD) # # Solutions for #3 & #4 Combined # I just used the same code as my second solution for #3, but instead of a regular linear model I used one with ridge regularization. linear_regression_against_data(model_one, reshaped_x, reshaped_y, MIN_X, MAX_X) print(f'Training r^2: {model_one.score(reshaped_x, reshaped_y)}') test_x = numpy.random.uniform(low=MIN_X, high=MAX_X, size=(NUM_INPUTS, 1)) test_dataset = pd.DataFrame(data=test_x, columns=['x']) test_dataset['y'] = 0.3*test_dataset['x']+1 test_noise_y = numpy.random.normal(scale=1.0, size=NUM_INPUTS) test_dataset['y'] = test_dataset['y']+test_noise_y test_x_reshaped = test_dataset['x'].values.reshape(-1, 1) test_y_reshaped = test_dataset['y'].values.reshape(-1, 1) linear_regression_against_data(model_one, test_x_reshaped, test_y_reshaped, MIN_X, MAX_X) print(f'Test r^2: {model_one.score(test_x_reshaped, test_y_reshaped)}') # # Adding Validation # 1. Theoretically, the best (adjusted) r^2 value is 1 and the worst is 0 (technically can be negative but in practice shouldn't occur). # 2. The model is getting the scores 0.91 and 0.89 instead of 1 because of the added noise to the dataset. # 3. The score for the test dataset is slightly lower than the training dataset: it is 0.89. This suggests that either there was slight overfitting or, because the difference is so low, it may just be due to the randomness of the noise.
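The r² values reported by `.score()` can be reproduced from the definition r² = 1 − SS_res/SS_tot. A minimal numpy sketch on synthetic data shaped like the training set above (the seed is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(3)
xs = rng.uniform(-15, 15, size=100)
ys = 0.3 * xs + 1 + rng.normal(size=100)

# fit, then compute r^2 = 1 - SS_res / SS_tot (what sklearn's .score() returns)
m, b = np.polyfit(xs, ys, 1)
y_hat = m * xs + b
ss_res = np.sum((ys - y_hat)**2)          # residual sum of squares
ss_tot = np.sum((ys - ys.mean())**2)      # total sum of squares
r2 = 1 - ss_res / ss_tot
assert 0.8 < r2 < 1.0  # noise with sigma = 1 caps r^2 well below 1
```

With unit-variance noise and this slope/range, the signal variance is about 6.75 against noise variance 1, so r² ≈ 0.87 is expected, matching the ~0.9 scores discussed above.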
New Jupyter Notebooks/.ipynb_checkpoints/LinearRegression-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import numpy as np
import math as mt
import matplotlib.pyplot as plt

# logarithmic decrement from the amplitude ratio 0.95/0.5, and the damping factor
decaimento_log = np.log(0.95/0.5)
fator_de_amortecimento = 1/(1+(2*np.pi/decaimento_log)**2)**0.5
fator_de_amortecimento

k = np.sqrt(1-fator_de_amortecimento**2)
tal = (np.arctan(k/fator_de_amortecimento)-np.arctan(k/(fator_de_amortecimento+1.6)))/k
tal
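The damping-ratio formula used above, ζ = 1/√(1+(2π/δ)²), can be cross-checked by measuring the log decrement δ from two successive peaks of a simulated damped oscillation. A minimal numpy sketch with an assumed ζ = 0.1 (not a value from this work):

```python
import numpy as np

zeta_true = 0.1                              # assumed damping ratio
wn = 1.0                                     # natural frequency (arbitrary units)
wd = wn * np.sqrt(1 - zeta_true**2)          # damped frequency
T = 2 * np.pi / wd                           # damped period

x = lambda t: np.exp(-zeta_true * wn * t) * np.cos(wd * t)

# log decrement from two peaks one period apart
delta = np.log(x(0.0) / x(T))
zeta_rec = 1 / np.sqrt(1 + (2 * np.pi / delta)**2)
assert abs(zeta_rec - zeta_true) < 1e-9      # formula inverts exactly
```

The recovery is exact because δ = 2πζ/√(1−ζ²) for a linear single-degree-of-freedom system, which is precisely what the formula inverts.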
10_periodo/vibracoes/Trabalho_5.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Box-Muller method for generating a normally distributed random variable

# In this exercise we want to generate random variables with a standard normal distribution, i.e. those whose probability density function is Gaussian with $\mu=0$ and $\sigma=1$:
#
# $$f(x)=\frac{e^{-x^2/2}}{\sqrt{2\pi}}$$
#
# In this case the cumulative distribution function is very hard to invert, so the inverse-transform method cannot be used, and the probability cannot be boxed in either, since $x$ lies in $(-\infty, \infty)$. For this reason the Box-Muller method is used, which generates pairs of independent random numbers with a standard normal distribution from two uniform random variables on [0,1].
#
#
# ### *Theoretical development - traditional Box-Muller method*

# Let X and Y be two standard normal random variables.
# We also require them to be independent, so their joint probability density satisfies:
#
# $$f(X,Y)=f(X)\,f(Y)$$
#
# Substituting the probability density of each one gives:
#
# $$f(X,Y)=\frac{e^{-(X^2+Y^2)/2}}{2\pi}$$

# Two new independent random variables $R^2$ and $\theta$ are defined through the change of variables:
#
# $$R^2=X^2+Y^2$$
#
# $$\tan(\theta)=\frac{Y}{X}$$

# We want to find the joint probability density of the variables $R^2$ and $\theta$; for this the change-of-variables theorem for random variables is used, since $f(X,Y)$ and the functions relating the two variables are known.
# Carrying out the corresponding integral of that theorem, one arrives at:
#
# $$f(R^2,\theta)=\frac{1}{2\pi}\cdot\frac{e^{-R^2/2}}{2}$$

# Requiring $R^2$ and $\theta$ to be independent gives:
#
# $$f(R^2,\theta)=f(R^2)\,f(\theta)$$
#
# It follows that $f(R^2,\theta)$ is the product of an exponential probability density with mean 2 and a uniform distribution on $[0,2\pi]$.
#
#
# Therefore,
#
# $$f(R^2)=\frac{e^{-R^2/2}}{2}$$
#
# $$f(\theta)=\frac{1}{2\pi}$$
#
# So, if two standard normal variables X and Y are drawn and the change of variables applied, one obtains two new variables, one uniform and one exponential. Following the inverse path, starting from these last two variables it is possible to obtain two independent standard normal variables, using:
#
# $$X=R\cos(\theta)$$
#
# $$Y=R\sin(\theta)$$
#
# This method is used to find X and Y by generating the variables $R^2$ and $\theta$, whose distributions are much simpler to construct than the Gaussian.

# Let $U_1$ and $U_2$ be two random variables, uniform on the interval [0,1].
# Exponentially distributed random numbers $R^2$ can be obtained through the inverse-transform method:
#
# $$R^2=-2\log(1-U_1)$$
#
# And since the distribution of $\theta$ is also uniform, it is obtained by multiplying one of the U variables by $2\pi$:
#
# $$\theta=2\pi U_2$$

# Substituting into X and Y gives:
# $$X=\sqrt{-2\log(1-U_1)}\cos(2\pi U_2)$$
# (Eq. 1)
# $$Y=\sqrt{-2\log(1-U_1)}\sin(2\pi U_2)$$

# The following program implements this:

from math import *
import numpy as np
import matplotlib.pyplot as plt
import random
import seaborn as sns
sns.set()

def box_muller(n):                             # define the function
    lista_x = []                               # empty lists
    lista_y = []
    for i in range(n):                         # repeat n times
        U1 = random.random()                   # uniform numbers on (0,1)
        U2 = random.random()
        X = sqrt(-2*log(1-U1))*cos(2*pi*U2)    # follows from (Eq. 1)
        Y = sqrt(-2*log(1-U1))*sin(2*pi*U2)
        lista_x.append(X)                      # append them to the lists
        lista_y.append(Y)
    return(lista_x, lista_y)

# Plot:
x, y = box_muller(n=1000)
sns.jointplot(x, y, kind='scatter', alpha=0.8, color='purple')
pass

# The jointplot() function draws a bivariate distribution in the center, together with the marginal distributions of the two variables X and Y. Those marginal distributions are standard normal.
# To see this better, the variable X alone is plotted (the same can be done analogously with Y) as a histogram.

plt.hist(x, density=True, color='purple')
plt.title('Empirical standard normal distribution')
plt.xlabel('X')
plt.ylabel('Probability')
plt.show()

# To check whether the generated random variables follow a normal distribution, a Q-Q plot is used: a graphical method for comparing the theoretical probability distribution with one drawn from a random sample.
# For this, the following program is written. It defines 'x_emp' containing the random variables X generated by the Box-Muller method and then sorts their values with '.sort()'. Once ordered, these values correspond to the empirical quantiles.
#
# Next, the quantile values of the theoretical distribution are needed. The scipy function 'st.norm.ppf' is used, which returns the position of the quantiles indicated by 'q=i/(x_tot)', the corresponding fraction.
#
# Below, the theoretical quantiles are plotted against the empirical ones.

from scipy import stats as st

x_emp, y_emp = box_muller(n=1000)       # generate (empirical) random variables
x_emp.sort()                            # sort those values --> empirical quantiles
x_tot = len(x_emp)

Q_teo = []
for i in range(x_tot):                  # loop over all the sorted x values
    b = st.norm.ppf(i/(x_tot), 0, 1)    # quantile at the position indicated by the empirical distribution
    Q_teo.append(b)

# Plot
plt.plot(Q_teo, x_emp, '.', color='purple')
plt.title('Q-Q plot of the variable X - Box-Muller method')
plt.xlabel('Theoretical quantiles')
plt.ylabel('Empirical quantiles')
plt.show()

# Visually inspecting the plot, the function obtained by sampling random variables is similar to the one being compared against, a standard normal distribution, since the Q-Q plot yields a straight line. At the extremes there is some deviation from the line.
#
#
# Finally, the computation time needed by this method is measured (it will be used later for comparison) using the 'timeit' function. A large number of points is generated so that the program takes an appreciable amount of time. Since computation time depends on many factors and is itself a random variable, the run is repeated 7 times and the average of those values is taken.
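The loop above can also be written in vectorized numpy form, which doubles as a quick sanity check that the generated samples have the moments of a standard normal (the seed and tolerances are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
u1 = rng.random(n)
u2 = rng.random(n)

# vectorized Box-Muller, same formulas as (Eq. 1)
r = np.sqrt(-2 * np.log(1 - u1))
xv = r * np.cos(2 * np.pi * u2)
yv = r * np.sin(2 * np.pi * u2)

# sample moments should match a standard normal: mean 0, std 1
assert abs(xv.mean()) < 0.02 and abs(yv.mean()) < 0.02
assert abs(xv.std() - 1) < 0.02 and abs(yv.std() - 1) < 0.02
```

The vectorized form avoids the Python-level loop entirely, which is also relevant to the timing comparison that follows.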
# t_box_muller=%timeit -o box_muller(n=1000000)

ts_box_muller = t_box_muller.all_runs
ts_box_muller

print('Mean computation time for traditional Box-Muller:', np.mean(ts_box_muller))

#
#
#
#
# ### *Modified Box-Muller method*

# Finally, the efficiency of the code is improved by generating random pairs inside the unit circle.
# For this, consider the triangle with hypotenuse R and angle $\theta$, with legs $V_1$ and $V_2$.
#
# First, pairs of random numbers uniformly distributed on the interval [-1,1] are generated. Given $U_1$ and $U_2$, two uniform variables on the interval [0,1], the variables $V_1$ and $V_2$ uniform on the interval [-1,1] can be obtained as:
#
# $$V_1=2U_1-1$$
#
# $$V_2=2U_2-1$$
#
# Thus the pairs $(V_1,V_2)$ are uniformly distributed over the 2x2 square centered at (0,0).
#
# For the pairs to be uniformly distributed inside the unit circle, one must require $R^2=V_1^2+V_2^2\le1$.
#
# Then the variable $S=R^2$ is uniformly distributed on [0,1].
#
#
# Writing $\theta$ in the following form:
# $$\cos(\theta)=\frac{V_1}{R}$$
# (Eq. 2)
# $$\sin(\theta)=\frac{V_2}{R}$$
#
# the equations for X and Y (Eq. 1) are rewritten, using S as the uniform [0,1] random variable instead of $U_1$ and using (Eq. 2) for the sine and cosine of $\theta$:
#
# $$X=\sqrt{\frac{-2\log(1-S)}{S}}\,V_1$$
# (Eq. 3)
# $$Y=\sqrt{\frac{-2\log(1-S)}{S}}\,V_2$$

# For this part, the following code defines the function 'box_mu', very similar to the function 'box_muller' but following the steps just explained.
def box_mu(n):                               # define the function
    lista_x = []                             # empty lists
    lista_y = []
    for i in range(n):                       # number of random draws
        U1 = random.random()                 # uniform on (0,1)
        U2 = random.random()
        V1 = 2*U1 - 1                        # uniform on (-1,1)
        V2 = 2*U2 - 1
        S = V1**2 + V2**2                    # define S
        if S <= 1:                           # keep only points inside the unit circle
            X = sqrt(-2*log(1-S)/S)*V1       # follows from (Eq. 3)
            Y = sqrt(-2*log(1-S)/S)*V2
            lista_x.append(X)                # append them to the lists
            lista_y.append(Y)
    return(lista_x, lista_y)

# Plot:
x2, y2 = box_mu(n=1000)
sns.jointplot(x2, y2, kind='scatter', alpha=0.8, color='crimson')
pass

# Along the X and Y axes, the marginal distributions obtained for the two variables X and Y can be seen. To confirm that they are standard normal distributions, the method is checked as in the previous case with a Q-Q plot. It is done only for the variable X; the case of Y is analogous.

x2_emp, y2_emp = box_mu(n=1000)
x2_emp.sort()                                # empirical quantiles
x2_tot = len(x2_emp)

Q2_teo = []
for i in range(x2_tot):
    b = st.norm.ppf(i/(x2_tot), 0, 1)        # quantile at the position indicated by the empirical distribution
    Q2_teo.append(b)

# Plot
plt.plot(Q2_teo, x2_emp, '.', color='crimson')
plt.title('Q-Q plot of the variable X - modified Box-Muller method')
plt.xlabel('Theoretical quantiles')
plt.ylabel('Empirical quantiles')
plt.show()

# Just as for the traditional Box-Muller method, the Q-Q plot verifies that the distribution obtained is comparable with a standard normal distribution.
# The computation time is also measured here, with the same value of n as for the traditional Box-Muller method:

# t_box_mu=%timeit -o box_mu(n=1000000)

ts_box_mu = t_box_mu.all_runs
ts_box_mu

print('Mean computation time for modified Box-Muller:', np.mean(ts_box_mu))

# The computation time obtained for the modified Box-Muller method is approximately 1.84 s, a value lower than that of the traditional Box-Muller method: 2 s.
This indicates a saving of 0.16 s of computation time, showing that the efficiency of the code is indeed improved.
#
#
#
#
# ## Conclusions
#
# This exercise shows that the Box-Muller transform can be used to generate two independent random variables with a standard normal distribution. The method was verified by comparing against a theoretical distribution with a Q-Q plot, which shows good agreement, so we conclude that the method is effective for constructing standard normal random variables.
#
# In addition to the traditional Box-Muller method, a variation was implemented that improves efficiency by reducing the computational cost. This variant also yields the expected distribution.
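A side effect of the modified method is that some candidate pairs are rejected: the acceptance probability is the ratio of the circle's area to the square's, π/4 ≈ 0.785, so `box_mu(n)` returns fewer than n samples. A quick Monte Carlo check (the seed is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
v1 = rng.uniform(-1, 1, n)
v2 = rng.uniform(-1, 1, n)

# fraction of points landing inside the unit circle
rate = ((v1**2 + v2**2) <= 1).mean()
assert abs(rate - np.pi / 4) < 0.01  # pi/4 ~= 0.785
```

Despite discarding roughly 21% of the draws, the modified method is faster overall because it avoids the sine and cosine calls of the traditional version.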
g2-p7-BoxMuller.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] colab_type="text" id="e4tokWP0XlH1"
# # Movie Review Classification

# + [markdown] colab_type="text" id="0NyCbIiTXlH2"
# **Loading the numpy, keras, and tensorflow libraries**
#
# Checking whether tensorflow has GPU support

# + colab_type="code" id="cklDvFi5XlH2"
import numpy as np
print("numpy version:", np.__version__)
import keras
print("keras version:", keras.__version__)
import tensorflow as tf
print("tensorflow version:", tf.__version__)
print("GPU support:", tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None))

# + [markdown] colab_type="text" id="4igLWL8PXlH7"
# **Loading the IMDB dataset**
#
# Only the 10,000 most frequent words will be kept

# + colab_type="code" id="jSa_x3f6XlH7"
# The modification below is not needed if numpy is newer than
# version 1.16.5.
# The change to np.load is needed to avoid an incompatibility
# between keras (>=2.2.4) and numpy (>=1.16.3).
Ela ocorre no arquivo # imdb.py da keras que precisa ter np.load(path ,allow_pickle=True). Ateh # este problema ser corrigido na keras a forma mais pratica eh alterar a # funcao np.load, carregar o dataset da keras e retornar a funcao ao seu # estado original (vide a seguir). #old = np.load #np.load = lambda *a,**k: old(*a,**k,allow_pickle=True) from keras.datasets import imdb (train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000) #np.load = old #del(old) # + [markdown] id="_7aVJtoemEHm" colab_type="text" # **Formato e estrutura do dataset IMDB** # # Cada elemento em train_data é uma lista de palavras codificadas que compõe um comentário sobre um filme. Por exemplo as 10 primeiras palavras da 15a lista seriam: # + id="YPCxt-InpWh7" colab_type="code" outputId="f74175da-694c-48a9-fe52-76729826601b" executionInfo={"status": "ok", "timestamp": 1583446890527, "user_tz": 180, "elapsed": 1099, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgrwR1J-1sr_G_5mgX_S_hGV2TcQ8I2Vn0v8Ekq=s64", "userId": "00687033162702398655"}} colab={"base_uri": "https://localhost:8080/", "height": 34} train_data.shape # + id="A0uohLmGUwLa" colab_type="code" outputId="6b141f0c-7f4a-43e5-9de2-8fef2a2d4251" executionInfo={"status": "ok", "timestamp": 1583446955695, "user_tz": 180, "elapsed": 1129, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgrwR1J-1sr_G_5mgX_S_hGV2TcQ8I2Vn0v8Ekq=s64", "userId": "00687033162702398655"}} colab={"base_uri": "https://localhost:8080/", "height": 34} print(train_data[14][:10]) # + [markdown] id="O-ORnBHtlDD2" colab_type="text" # Cada elemento em train_labels é a classificação dada ao comentário (0 negativo), (1 positivo). 
# For example, the review at index 78 was rated negative

# + id="CLLmfW2OlDD3" colab_type="code" colab={}
print(train_labels[78])

# + [markdown] id="fNSkU6Q_lDD7" colab_type="text"
# **Recovering the text from the word encoding**
#
# To turn the sequence of numbers back into text we can use the "word_index" dictionary. First we load the dictionary.

# + id="urP-V9WUlDD8" colab_type="code" colab={}
word_index = imdb.get_word_index()

# + [markdown] id="_fPFRoQTlDD_" colab_type="text"
# Next we print part of it to understand how the data is formatted.
# Note that the word 'fawn' was encoded in the dictionary as the number 34701

# + id="lmQw91J3lDD_" colab_type="code" colab={}
list(word_index.items())[:5]

# + [markdown] id="ZfDvTzRelDED" colab_type="text"
# Let's create a new dictionary named index_word, swapping the keys and the values.

# + id="JAbAy9ktlDEE" colab_type="code" colab={}
index_word = dict([(valor, chave) for chave, valor in word_index.items()])
list(index_word.items())[:5]

# + [markdown] id="BbwOG2xalDEH" colab_type="text"
# Once the dictionary is created we use it to "decode" a review. Note, however, that the encoding in the imdb dataset is offset by 3. So a word encoded as 4 in train_data appears as 1 in the dictionary. That is why i-3 is used when looking up a value in the dictionary from its key.
# + id="F6eCwNUBlDEH" colab_type="code" colab={}
decoded_review = ' '.join([index_word.get(i - 3, '?') for i in train_data[0]])
print(decoded_review[:100])

# + [markdown] id="w_77Ugy9lDEK" colab_type="text"
# **Multi-hot encoding of the input values**
#
# To feed the data to the neural network we need to encode the input values.
#
# Since there are 10,000 possible values (one per word), we build a matrix with 25,000 rows (one per review) and 10,000 columns (one per word).
#
# The words present in a review are marked with a 1 in their respective columns of the matrix. Note, however, that this method loses the order of the words. At this stage we keep only the words themselves of each review.

# + id="N2WpWAZ9lDEL" colab_type="code" colab={}
import numpy as np

def vectorize_sequences(sequencia, colunas=10000):
    resultado = np.zeros((len(sequencia), colunas))
    for indice, valor in enumerate(sequencia):
        resultado[indice, valor] = 1.
    return resultado

x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)

# + id="CN_H1-helDEO" colab_type="code" colab={}
x_train[0:4]

# + id="tZbO9hyRsqbU" colab_type="code" colab={}
x_train.shape

# + [markdown] id="uSlToFYalDER" colab_type="text"
# As for the labels, since they consist only of 0s and 1s, the vectorization is almost direct

# + id="B623xuHhlDER" colab_type="code" colab={}
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')

# + id="l69snaYElDEV" colab_type="code" colab={}
y_train[0:4]

# + id="yjmjPHOVs_me" colab_type="code" colab={}
y_train.shape

# + [markdown] id="mk3IcZXDlDEZ" colab_type="text"
# **Defining the Neural Network Architecture**

# + id="sbK3pVnBlDEZ" colab_type="code" colab={}
from keras import models
model = models.Sequential()
from keras import layers
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense( 1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
# or, equivalently, passing functions as arguments
from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.001), loss='binary_crossentropy', metrics=['accuracy'])
from keras import losses
model.compile(optimizer=optimizers.RMSprop(lr=0.001), loss=losses.binary_crossentropy, metrics=['accuracy'])
from keras import metrics
model.compile(optimizer=optimizers.RMSprop(lr=0.001), loss=losses.binary_crossentropy, metrics=[metrics.binary_accuracy])

# + [markdown] id="8GUh6usOlDEd" colab_type="text"
# In short, the network architecture can be built with the following commands:

# + id="Ce2M9ZCJlDEe" colab_type="code" colab={}
from keras import models, layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense( 1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])

# + [markdown] id="BjQAiiZJlDEj" colab_type="text"
# **Training the Neural Network**
#
# To monitor training we also create a *pre-test* dataset, that is, we set aside part of the training set as a validation group.

# + id="vvri8g5nlDEk" colab_type="code" colab={}
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]

# + [markdown] id="--U9q5cClDEl" colab_type="text"
# Training will run on the partial_x_train group for 20 epochs (passes over the whole training group, in this case partial_x_train). In each epoch the data is grouped into mini-batches of 512 samples. Validation happens on the validation set (x_val). The full set of parameters is shown below:

# + id="niVRpm5GlDEm" colab_type="code" colab={}
history = model.fit(partial_x_train, partial_y_train, epochs=20, batch_size=512, validation_data=(x_val, y_val))

# + [markdown] id="kpHsjuzflDEo" colab_type="text"
# The .fit method returns a History object, here assigned to *history*.
# It holds a dictionary that can be accessed through its *.history* property

# + id="JMGRp2bTlDEp" colab_type="code" colab={}
history_dict = history.history
history_dict.keys()

# + [markdown] id="zbAXJPlWlDEs" colab_type="text"
# With the information stored in this dictionary we can plot the evolution of the loss and the accuracy during training.

# + id="Y01Dl9S9lDEt" colab_type="code" colab={}
import matplotlib.pyplot as plt
# %matplotlib inline
history_dict = history.history
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epochs = range(1, len(history_dict['loss'])+1)
plt.title('Training and Validation Loss')
plt.plot(epochs, loss_values, 'bo', label='Training loss')  # 'bo' is a blue dot
plt.plot(epochs, val_loss_values, 'b', label='Validation loss')  # 'b' is a solid blue line
plt.legend()
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.show()

# + id="R5R8rJtvlDEv" colab_type="code" colab={}
plt.clf()  # Clear the previous figure from memory
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
epochs = range(1, len(history_dict['binary_accuracy'])+1)
plt.title('Training and Validation Accuracy')
plt.plot(epochs, acc_values, 'bo', label='Training acc')  # 'bo' is a blue dot
plt.plot(epochs, val_acc_values, 'b', label='Validation acc')  # 'b' is a solid blue line
plt.legend()
plt.xlabel('Epochs')
plt.ylabel('Acc')
plt.show()

# + [markdown] id="k1reMGDHlDEx" colab_type="text"
# Avoiding overfitting

# + [markdown] id="5kkBcHRklDEy" colab_type="text"
# An immediate (though very simplistic) way to avoid overfitting would be to train the network for only 3 epochs, as shown below

# + id="9l_t57DdlDEz" colab_type="code" colab={}
from keras import models, layers
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense( 1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs=3, batch_size=512)
print('test')
results = model.evaluate(x_test, y_test)
results

# + [markdown] id="U7tjDlJwlDE1" colab_type="text"
# **Exercises**

# + [markdown] id="moLXgX6bPttq" colab_type="text"
# * Do the following exercises based on a neural network that is:
#   * Sequential
#   * Takes a 1D input tensor of shape (10000,)
#   * Has two hidden layers of the dense type, each with 16 neurons and `relu` activation
#   * Has an output layer of 1 neuron with `sigmoid` activation
#   * Optimizer: 'rmsprop'
#   * Loss function: 'binary_crossentropy'
#   * Performance metric: 'accuracy'
#
# 1. Graphically determine the number of training epochs needed for the error on the validation set to be approximately equal to the error on the training set, and find the corresponding error value, for the following neural networks.
#    1. The network described above. Starting from it, change the parameters below (always relative to the network described above).
#    2. A network with a single hidden layer of 16 neurons
#    3. A network with three hidden layers of 16 neurons
#    4. A network with three hidden layers of 32 neurons
#    5. A network with three hidden layers of 64 neurons
#    6. A network with three hidden layers of 16 neurons and mean squared error as the loss function
#    7. A network with three hidden layers of 16 neurons and `tanh` activation.
#    8. A network with three hidden layers of 16, 32 and 16 neurons.
#    9. A network with three hidden layers of 16, 32 and 64 neurons.

# + id="W6PyS0ZLlDE2" colab_type="code" colab={}
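As a starting point for the exercises, the multi-hot input encoding used throughout this notebook can be checked on a toy example (a self-contained sketch: the function mirrors vectorize_sequences above, and the input sequences are made up):

```python
import numpy as np

def vectorize_sequences(sequences, dimension=10):
    # One row per sequence, 1.0 in every column whose index appears in it
    results = np.zeros((len(sequences), dimension))
    for i, seq in enumerate(sequences):
        results[i, seq] = 1.0
    return results

demo = vectorize_sequences([[0, 3, 5], [1, 1, 9]])
print(demo)
# Row 0 has ones at columns 0, 3 and 5; row 1 at columns 1 and 9
# (the repeated index 1 still produces a single 1.0 — word order and counts are lost)
```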
notebooks/fgv_classes/professor_mirapalheta/02.2.keras_classbinaria_filmes.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # ## Boston Survival Guide # ##### *Team members: <NAME>, <NAME>, <NAME>, <NAME>, <NAME> # #### Our team is interested in crimes that were reported in the Boston area (including 11 districts) from June 2015 to October 2018. # #### The main purpose of our project is to analyze patterns of crimes, the potential incentives for crimes, as well as how other factors such as housing prices and income correlate with crimes. # #### Crime Data Source: https://www.kaggle.com/ankkur13/boston-crime-data # #### Housing Data Source: https://www.zillow.com/boston-ma/home-values/ # #### Income Data Source: https://statisticalatlas.com/county-subdivision/Massachusetts/Suffolk-County/Boston/Household-Income # ### Table of Contents # -1 Preview of data & Data Cleaning # # -2 Data Analysis # # - 2.1 Time Analysis # # -- 2.1.1 Time series # # -- 2.1.2 Year/month/week/day/hour with the most crimes # # -- 2.1.3 Regression for time & time_Crime # # - 2.2 Location Analysis # # -- 2.2.1 Region/district/Street with the most crimes # # -- 2.2.2 Distribution of crimes by Districts -- Bubble chart # # -- 2.2.3 Location, Housing Prices and Crimes -- Distribution & Regression # # -- 2.2.4 Location_Income & Location_Crime -- Distribution & Regression # # - 2.3 Crime Attribute Analysis # # -- 2.3.1 Crime Types # # -- 2.3.2 Example.shooting # # - 2.4 Multivariate Analysis # # -- 2.4.1 Location & Crime Type # # -- 2.4.2 Crime Type & Time # # -- 2.4.3 Time & Location # # -3 Summary # # # ### 1. Preview of data & Data Cleaning # First, let's identify all the variables in our datasets.
# - Crime dataset # %%bq query SELECT * FROM `team-6-is-perfect.Boston_Crime.boston_crime` LIMIT 3 # - Housing dataset # %%bq query SELECT * FROM `team-6-is-perfect.Boston_Crime.boston_housing` LIMIT 3 # - Income dataset # %%bq query SELECT * FROM `team-6-is-perfect.Boston_Crime.boston_income_populationGrowth` LIMIT 3 # In order to match all three datasets by district, we refer to the following website: https://bpdnews.com/districts # Since car accidents and aircraft incidents are not really under the crime category, we create a new table without those two types of records. # We also removed missing values when creating the new table # + # %%bq query create or replace table Boston_Crime.boston as ( WITH crime AS( SELECT * EXCEPT ( OFFENSE_CODE, REPORTING_AREA, OFFENSE_DESCRIPTION,UCR_PART) FROM `team-6-is-perfect.Boston_Crime.boston_crime` ), housing AS ( SELECT Region_Name, District, `team-6-is-perfect.Boston_Crime.boston_housing`.current AS Housing_Price FROM `team-6-is-perfect.Boston_Crime.boston_housing` ), income AS ( SELECT District, Income FROM `team-6-is-perfect.Boston_Crime.boston_income_populationGrowth` ) SELECT INCIDENT_NUMBER, OFFENSE_Code_GROUP as Crime_Type, crime.DISTRICT , IF(SHOOTING='Y',1,0) as SHOOTING, OCCURRED_ON_DATE, YEAR, MONTH, DAY_OF_WEEK, HOUR, STREET, Lat, Long, Region_Name, housing.Housing_Price, income.Income FROM crime INNER JOIN housing ON crime.DISTRICT = housing.District INNER JOIN income ON crime.DISTRICT = income.District WHERE Lat != -1 AND LONG != -1 AND Lat IS NOT NULL AND Long IS NOT NULL AND OFFENSE_Code_GROUP != 'Motor Vehicle Accident Response' AND OFFENSE_Code_GROUP!= 'Aircraft' AND crime.DISTRICT IS NOT NULL ) # - # ### 2. Data Analysis # ### 2.1 Time Analysis # #### 2.1.1 Time series # Below is an overview of the number of crimes in time order # %%bq query -n crimes_year_month SELECT TIMESTAMP(CONCAT(CAST(EXTRACT(year from OCCURRED_ON_DATE) AS STRING), "-", CAST(EXTRACT(month from OCCURRED_ON_DATE) AS STRING),
"-01")) year_month, count(incident_number) as num_incident FROM Boston_Crime.boston GROUP BY year_month; # %%chart annotation --d crimes_year_month # #### 2.1.2 Year/month/week/day/hour with the most crimes # #### Question: Which year has the highest amount of crimes? # %%bq query SELECT year, count(year) as numcrime_year FROM Boston_Crime.boston Group by year Order by numcrime_year DESC; # #### Answer: 2017 has the highest amount of crime # #### Question: Which month has the highest amount of crimes? # %%bq query SELECT month, count(month) as numcrime_month FROM Boston_Crime.boston Group by month Order by numcrime_month DESC LIMIT 4; # #### Answer: On average, August has the highest amount of crimes. # #### Question: Which day of the week is the most dangerous? # %%bq query SELECT day_of_week, count(day_of_week) as numcrime_weekday FROM Boston_Crime.boston Group by day_of_week Order by numcrime_weekday DESC; # #### Answer: On average, Friday has the highest amount of crimes. # %%bq query -n crime_of_week SELECT day_of_week, count (day_of_week) as day_of_week_crime_amount,Case when day_of_week = 'Monday' then 1 when day_of_week = 'Tuesday' then 2 when day_of_week = 'Wednesday' then 3 when day_of_week = 'Thursday' then 4 when day_of_week = 'Friday' then 5 when day_of_week = 'Saturday' then 6 when day_of_week = 'Sunday' then 7 end as weekday FROM Boston_Crime.boston Group by day_of_week Order by weekday; # %%chart columns -d crime_of_week title: total crime amount in day_of_week height: 400 width: 1200 hAxis: title: Week vAxis: title: Crime Amount legend: none # #### Question: What time of day is the most dangerous?
# %%bq query SELECT hour, count(hour) as numcrime_hour FROM Boston_Crime.boston Group by hour Order by numcrime_hour DESC LIMIT 5; # #### Answer: On average, crimes mostly occur at 5pm # %%bq query -n crime_of_hour SELECT hour, count(hour) as numcrime_hour FROM Boston_Crime.boston Group by hour Order by numcrime_hour DESC # %%chart scatter -d crime_of_hour title: total crime amount of a day height: 400 width: 900 hAxis: title: time vAxis: title: crime amount # #### Question: Do crimes usually happen during the day or the night? # %%bq query SELECT Case when (hour >= 6 and hour <= 18) then 'daytime' else 'night' end as crime_happen_time, count(hour) as crime_hours FROM Boston_Crime.boston Group by crime_happen_time Order by crime_happen_time # #### Answer: It's more likely for crimes to happen in the daytime # %%bq query -n crime_time SELECT Case when (hour >= 6 and hour <= 18) then 'daytime' else 'night' end as crime_happen_time, count(hour) as crime_hours FROM Boston_Crime.boston Group by crime_happen_time # %%chart pie --d crime_time title: Daytime vs. Night Crime height: 400 width: 900 # #### 2.1.3 Regression for time & time_Crime # #### For the regression, we create three dummy variables and assign values (0 or 1) by categorizing each of them. 
# # - Month_type: # # -- “colder” as 1: from November to April # # -- “warmer” as 0: from May to October # # - Week_type: # # -- Weekends as 1 # # -- Weekdays as 0 # # - Hour_type: # # -- Daytime as 1: from 6am to 6pm # # -- Nighttime as 0: from 7pm to 5am # %%bq query select count (incident_number) as numcrime_in_time, case when month in (11,12,1,2,3,4) then 1 else 0 end as month_type, case when DAY_OF_WEEK in ('Saturday','Sunday') then 1 else 0 end as week_type, case when HOUR >=6 and hour <=18 then 1 else 0 end as hour_type FROM Boston_Crime.boston group by month_type,week_type,hour_type # %%bq query create or replace model`team-6-is-perfect.Boston_Crime.regression` options( model_type='linear_reg', input_label_cols=['numcrime_in_time']) as (select count (incident_number) as numcrime_in_time, case when month in (11,12,1,2,3,4) then 1 else 0 end as month_type, case when DAY_OF_WEEK in ('Saturday','Sunday') then 1 else 0 end as week_type, case when HOUR >=6 and hour <=18 then 1 else 0 end as hour_type FROM Boston_Crime.boston group by month_type,week_type,hour_type) # + # %%bq query with regression_table as (select count (incident_number) as numcrime_in_time, case when month in (11,12,1,2,3,4) then 1 else 0 end as month_type, case when DAY_OF_WEEK in ('Saturday','Sunday') then 1 else 0 end as week_type, case when HOUR >=6 and hour <=18 then 1 else 0 end as hour_type FROM Boston_Crime.boston group by month_type,week_type,hour_type) select * from ML.evaluate( model`team-6-is-perfect.Boston_Crime.regression`, table regression_table) # + # %%bq query with eval_table as (select count (incident_number) as numcrime_in_time, case when month in (11,12,1,2,3,4) then 1 else 0 end as month_type, case when DAY_OF_WEEK in ('Saturday','Sunday') then 1 else 0 end as week_type, case when HOUR >=6 and hour <=18 then 1 else 0 end as hour_type FROM Boston_Crime.boston group by month_type,week_type,hour_type) select * from ML.WEIGHTS( model`team-6-is-perfect.Boston_Crime.regression`, 
STRUCT(true AS standardize)) # - # #### Regression Function: predicted number of crimes = 34126.823 - 5166.37 * month_type(dummy) - 17169.359 * week_type(dummy) + 10282.926 * hour_type(dummy) # #### R-squared: 0.869, which indicates a reliable regression # ### 2.2 Location Analysis # #### 2.2.1 Region/District/Street with the most crimes # #### Question: Which neighborhood is the most dangerous? # %%bq query SELECT district, count(district) as numcrime_in_district FROM Boston_Crime.boston Group by district Order by numcrime_in_district DESC LIMIT 5; # #### Answer: The most dangerous district is B2, which is Roxbury. # #### Question: What are the most dangerous streets? # %%bq query SELECT street, count(street) as numcrime_street FROM Boston_Crime.boston Group by street Order by numcrime_street DESC LIMIT 5; # #### Answer: The most dangerous street in Boston is Washington St. # #### 2.2.2 Distribution of crimes by Districts -- bubble chart # #### We use a bubble chart to represent the distribution of crimes across the different districts of Boston # %%bq query SELECT distinct(district), count(district) as numcrime FROM Boston_Crime.boston GROUP BY district # %%bq query SELECT distinct district, AVG(lat) AS lat, AVG(long) AS lon FROM Boston_Crime.boston GROUP BY district # + codeCollapsed=false hiddenCell=true # %%bq query WITH temp1 AS( SELECT distinct(district), count(district) as numcrime FROM Boston_Crime.boston GROUP BY district), temp2 AS( SELECT distinct district, AVG(lat) AS lat, AVG(long) AS lon FROM Boston_Crime.boston GROUP BY district) SELECT district, lat, lon, numcrime FROM temp1 LEFT JOIN temp2 USING(district) # + # %%bq query -n crime_map WITH temp1 AS( SELECT distinct(district), count(district) as numcrime FROM Boston_Crime.boston GROUP BY district), temp2 AS( SELECT distinct district, AVG(lat) AS lat, AVG(long) AS lon FROM Boston_Crime.boston GROUP BY district) SELECT district, lat, lon, numcrime FROM temp1 LEFT JOIN temp2 USING(district) # - # %%chart bubbles --data
crime_map title: Crime in boston in different district height: 600 width: 1000 hAxis: title: latitude vAxis: title: longitude # #### 2.2.3 Location,Housing Prices and Crimes # We are interested in finding a relationship between crimes and housing prices # #### Question: Do housing prices have effects on the number of crimes? # ##### Distribution for Location_Housing price & Location_Crime # %%bq query SELECT District, Housing_Price, count(District) as numcrime_in_district FROM Boston_Crime.boston Group By District, Housing_Price # %%bq query -n housecrime SELECT Housing_Price as x, numcrime_in_district as y FROM (SELECT District, Housing_Price, count(District) as numcrime_in_district FROM Boston_Crime.boston Group By District, Housing_Price) # %%chart scatter --data housecrime title: housing price vs. crime amount in districts height: 500 width: 900 hAxis: title: housing price vAxis: title: crime amount # %%bq query create or replace model`team-6-is-perfect.Boston_Crime.regression2` options( model_type='linear_reg', input_label_cols=['numcrime_in_district']) as (SELECT Housing_Price, numcrime_in_district FROM (SELECT District, Housing_Price, count(District) as numcrime_in_district FROM Boston_Crime.boston Group By District, Housing_Price) ) # ##### Regression for Location_Housing price & Location_Crime # %%bq query with regression2_table as (SELECT Housing_Price, numcrime_in_district FROM (SELECT District, Housing_Price, count(District) as numcrime_in_district FROM Boston_Crime.boston Group By District, Housing_Price) ) select * from ML.evaluate( model`team-6-is-perfect.Boston_Crime.regression2`, table regression2_table) # + # %%bq query with eval2_table as (SELECT Housing_Price, numcrime_in_district FROM (SELECT District, Housing_Price, count(District) as numcrime_in_district FROM Boston_Crime.boston Group By District, Housing_Price) ) select * from ML.WEIGHTS( model`team-6-is-perfect.Boston_Crime.regression2`, STRUCT(true AS standardize)) # - # #### Answer: # 
#### Regression function: predicted number of crimes = 22751.215 + 909.399 * Housing_Price # #### R-squared is 0.006, which is not a reliable regression # #### 2.2.4 Location_Income & Location_Crime # #### Question: Does income have an effect on the amount of crime? # ##### Distribution for Location_Income & Location_Crime # %%bq query SELECT District, Income, count(District) as numcrime_in_district FROM Boston_Crime.boston GROUP By District, Income # %%bq query -n incomecrime SELECT Income as x, numcrime_in_district as y FROM (SELECT District, Income, count(District) as numcrime_in_district FROM Boston_Crime.boston Group By District, Income) # %%chart scatter --data incomecrime title: income vs. crime amount in districts height: 500 width: 900 hAxis: title: income vAxis: title: crime amount # ##### Regression for Location_Income & Location_Crime # %%bq query create or replace model`team-6-is-perfect.Boston_Crime.regression3` options( model_type='linear_reg', input_label_cols=['numcrime_in_district']) as (SELECT Income, numcrime_in_district FROM (SELECT District, Income, count(District) as numcrime_in_district FROM Boston_Crime.boston Group By District, Income) ) # + # %%bq query with regression3_table as (SELECT Income, numcrime_in_district FROM (SELECT District, Income, count(District) as numcrime_in_district FROM Boston_Crime.boston Group By District, Income) ) select * from ML.evaluate( model`team-6-is-perfect.Boston_Crime.regression3`, table regression3_table) # + # %%bq query with eval3_table as (SELECT Income, numcrime_in_district FROM (SELECT District, Income, count(District) as numcrime_in_district FROM Boston_Crime.boston Group By District, Income) ) select * from ML.WEIGHTS( model`team-6-is-perfect.Boston_Crime.regression3`, STRUCT(true AS standardize)) # - # #### Answer: # #### Regression function: predicted number of crimes = 22751.215 - 7330.979 * Income # #### R-squared is 0.365, which is a partially reliable regression # ### 2.3 Crime Attribute Analysis #
#### 2.3.1 Crime Types # #### Question: What is the most common crime in Boston? # %%bq query SELECT Crime_Type, count(Crime_Type) as CrimeFrequency FROM Boston_Crime.boston Group by Crime_Type Order by CrimeFrequency DESC LIMIT 3; # #### Answer: Larceny is the most common crime in Boston, and the second is Medical Assistance # ### In particular, we analyze incidents of shooting # #### 2.3.2 Example.shooting # #### Question: How many shootings were reported during the four years? # %%bq query SELECT COUNT(shooting) AS TotalShootingOffense FROM Boston_Crime.boston Where shooting = 1 # #### Answer: There were 1012 incidents of shooting reported over the four years. # %%bq query -n shooting_in_month SELECT TIMESTAMP(CONCAT(CAST(EXTRACT(year from OCCURRED_ON_DATE) AS STRING), "-", CAST(EXTRACT(month from OCCURRED_ON_DATE) AS STRING), "-01")) year_month, count(shooting) as shooting_in_month_base FROM Boston_Crime.boston WHERE SHOOTING = 1 GROUP BY year_month ORDER BY year_month; # %%chart line --d shooting_in_month title: shooting in a month base height: 400 width: 1000 hAxis: title: Time vAxis: title: Shooting Count # #### Based on the data, there are more than 15 shootings per month on average.
# ### 2.4 Multivariate Analysis

# #### 2.4.1 Location & Crime Type

# %%bq query
SELECT Street, Crime_Type, count(INCIDENT_NUMBER) as numcrime
FROM Boston_Crime.boston
GROUP BY Street, Crime_Type
ORDER BY numcrime DESC
LIMIT 10

# #### Larceny incidents occur most frequently on Boylston Street

# #### 2.4.2 Crime Type & Time

# %%bq query
SELECT hour, Crime_Type, count(INCIDENT_NUMBER) as numcrime
FROM Boston_Crime.boston
GROUP BY hour, Crime_Type
ORDER BY numcrime DESC
LIMIT 10

# #### Larceny occurs most frequently around noon

# #### 2.4.3 Time & Location

# %%bq query
SELECT hour, DISTRICT, count(INCIDENT_NUMBER) as numcrime
FROM Boston_Crime.boston
GROUP BY hour, DISTRICT
ORDER BY numcrime DESC
LIMIT 10

# #### District B2 (Roxbury) tops the crime list when grouped by hour and district.

# ### 3. Summary

# Based on our analysis above, Boston is a safe city in general. The most frequent crime is larceny. Crimes are more likely to happen on Washington St., the longest street in Boston, as well as in Roxbury. Most crimes happen between 4 pm and 7 pm. Housing prices don't have a statistically significant effect on crime, but income is partially significant.

# We hope our project will help you survive in Boston!!!
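# The `GROUP BY` / `count` / `ORDER BY ... DESC` pattern used throughout section 2.4 can also be reproduced locally with pandas. The sketch below is illustrative only: the `boston` dataframe here is a tiny hypothetical stand-in for the real `Boston_Crime.boston` table, with matching column names.

```python
import pandas as pd

# Hypothetical stand-in for the Boston_Crime.boston table (column names match the queries above)
boston = pd.DataFrame({
    "hour": [12, 12, 12, 18, 18, 9],
    "Crime_Type": ["Larceny", "Larceny", "Medical Assistance", "Larceny", "Vandalism", "Larceny"],
    "INCIDENT_NUMBER": ["I1", "I2", "I3", "I4", "I5", "I6"],
})

# Equivalent of:
#   SELECT hour, Crime_Type, count(INCIDENT_NUMBER) as numcrime
#   FROM Boston_Crime.boston GROUP BY hour, Crime_Type ORDER BY numcrime DESC
numcrime = (boston.groupby(["hour", "Crime_Type"])["INCIDENT_NUMBER"]
            .count()
            .reset_index(name="numcrime")
            .sort_values("numcrime", ascending=False))
print(numcrime.head(3))
```

# On this toy data, the top row is (hour 12, Larceny), mirroring the "Larceny around noon" finding above.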
Boston Survival Guide2.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

def partition(n):
    # Early stub: only the base cases are handled; for n > 1 it falls
    # through and returns None (superseded by p() below)
    if n == 1 or n == 0:
        return 1

partition(0)

# + language="html"
# <img src="https://wikimedia.org/api/rest_v1/media/math/render/svg/372a9d4b8378cd2b51d03d7e3aec77fba641b19c"></img>
# -

def p(n):
    # Euler's pentagonal-number recurrence (naive, exponential-time version):
    # p(n) = sum over k >= 1 of (-1)**(k-1) * (p(n - k(3k-1)/2) + p(n - k(3k+1)/2))
    if n < 0:
        return 0
    if n == 0:
        return 1
    return sum((-1)**(k - 1) * (p(n - k*(3*k - 1)//2) + p(n - k*(3*k + 1)//2))
               for k in range(1, n + 1))

p(5)

memory = {}

def p(n):
    # Memoized version: the loop walks the generalized pentagonal numbers
    # 1, 2, 5, 7, 12, 15, ... in consecutive pairs (delta, delta + diff),
    # alternating the sign with the parity of diff
    if memory.get(n) is not None:
        return memory[n]
    sums = 0
    diff = 1
    delta = 1
    nval = n
    if n < 0:
        return 0
    if n == 0:
        return 1
    while n >= 0:
        if diff % 2 != 0:
            sums += p(n - delta) + p(n - (delta + diff))
        else:
            sums = sums - (p(n - delta) + p(n - (delta + diff)))
        n = n - (delta + diff)
        delta += 2
        diff += 1
    memory[nval] = sums
    return sums

[p(i) for i in range(100)]

memory
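# As a cross-check (not part of the original notebook), here is a compact, self-contained restatement of the memoized pentagonal-number recurrence, verified against the well-known first values of the partition function (OEIS A000041).

```python
memory = {}

def p(n):
    # Euler's pentagonal-number recurrence with memoization:
    # p(n) = sum_{k>=1} (-1)**(k-1) * (p(n - k(3k-1)/2) + p(n - k(3k+1)/2))
    if n in memory:
        return memory[n]
    if n < 0:
        return 0
    if n == 0:
        return 1
    total, sign, k = 0, 1, 1
    while k * (3 * k - 1) // 2 <= n:
        total += sign * (p(n - k * (3 * k - 1) // 2) + p(n - k * (3 * k + 1) // 2))
        sign = -sign
        k += 1
    memory[n] = total
    return total

# First values of the partition function: p(0), p(1), ... = 1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42
expected = [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
assert [p(i) for i in range(11)] == expected
print("partition recurrence matches A000041")
```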
Partitions.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import pandas as pd
import joblib

# step 1: load the model that has already been trained and persisted
# (no DecisionTreeClassifier is needed here -- the model definition was removed since it is already trained)
model = joblib.load('music_recommender.joblib')

# test with a sample prediction: age 25, gender encoded as 0
model_prediction = model.predict([[25, 0]])
model_prediction  # display the prediction
# -

# load and print the music data to crosscheck the prediction's accuracy
# (the dataset must be loaded in this notebook too; 'music.csv' is assumed from the earlier training steps)
music_data = pd.read_csv('music.csv')
music_data
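# For context, here is a sketch of how a file like `music_recommender.joblib` could have been produced in the earlier training steps. The dataframe below is a tiny synthetic stand-in (the column names `age`, `gender`, `genre` are assumptions, not the real dataset), and the dump filename is deliberately different so it cannot clobber the real artifact.

```python
import pandas as pd
import joblib
from sklearn.tree import DecisionTreeClassifier

# Tiny synthetic stand-in for the music dataset (columns assumed: age, gender, genre)
music_data = pd.DataFrame({
    "age":    [20, 23, 25, 26, 29, 30, 31, 33],
    "gender": [1, 1, 1, 1, 0, 0, 0, 0],
    "genre":  ["HipHop", "HipHop", "HipHop", "Jazz", "Dance", "Acoustic", "Classical", "Classical"],
})

X = music_data.drop(columns=["genre"])
y = music_data["genre"]

# Train and persist the classifier, mirroring what step 4 above expects to find on disk
model = DecisionTreeClassifier()
model.fit(X, y)
joblib.dump(model, "music_recommender_demo.joblib")

# Reload and predict, exactly as the loading step does
reloaded = joblib.load("music_recommender_demo.joblib")
print(reloaded.predict(pd.DataFrame([[25, 0]], columns=["age", "gender"])))
```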
chapter Five/Lesson 3 Music model/step 4 Loading pretrained model.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# # datos_gov module - LEILA

# This Jupyter notebook contains worked examples of how to use the module that connects a user to Colombia's Open Data Portal (Portal de Datos Abiertos)

# ### Import the DatosGov class from the datos_gov module

# <!-- Import the DatosGov class from LEILA's datos_gov module. This class can be used to search information published on the Open Data Portal -->

from leila.datos_gov import DatosGov

# Import the inventory table of datos.gov.co. This table contains every publication on the Portal (datasets, external links, maps, charts, etc.).

inventario = DatosGov().tabla_inventario()

# The columns of the inventory table are the following:
#
# "**numero_api**": API number of the dataset. This is a unique identifier of each dataset on the Portal, used as the input to open it from code.
#
# "**nombre**": name of the publication
#
# "**descripcion**": description of the publication
#
# "**dueno**": owner of the publication.
#
# "**base_publica**": "si" if the information in the dataset is public, "no" otherwise
#
# "**tipo**": type of the publication, one of: "conjunto de datos" (dataset), "enlace externo" (external link), "mapa" (map), "grafico" (chart), "vista filtrada" (filtered view), "archivo o documento" (file or document), "historia" (story), "visualizacion" (visualization), "lente de datos" (data lens), "formulario" (form), "calendario" (calendar).
#
# "**categoria**": general topic of the published information
#
# "**terminos_clave**": key terms related to the publication
#
# "**url**": web link of the publication on the Open Data Portal
#
# "**fecha_creacion**": creation date of the publication
#
# "**fecha_actualizacion**": date the publication was last updated
#
# "**filas**": number of rows of the dataset, if applicable
#
# "**columnas**": number of columns of the dataset, if applicable
#
# "**correo_contacto**": contact email of the entity that owns the data
#
# "**licencia**": name of the data license
#
# "**entidad**": name of the entity that owns the data
#
# "**entidad_url**": web link of the entity that owns the data
#
# "**entidad_sector**": sector of the entity
#
# "**entidad_departamento**": department of the entity
#
# "**entidad_orden**": specifies whether the publication is of territorial, national, departmental or international order
#
# "**entidad_dependencia**": dependency of the entity that owns the data
#
# "**entidad_municipio**": municipality where the entity operates
#
# "**actualizacion_frecuencia**": update frequency of the data. It can be annual, biannual, monthly, quarterly, triannual, daily, five-yearly, weekly, among others. It may also not apply
#
# "**idioma**": language of the information
#
# "**cobertura**": scope of the information. It can be national, departmental, municipal, populated center or international

# ### Filter the inventory table

# It is possible to search for information of interest within the inventory table. The search can be made with terms or text looked up in the text-type columns of the inventory table, by a date range, or by the number of rows or columns.
# #### Example: search by key terms
#
# To search by key terms, build a Python dictionary whose keys are the names of the text columns of the inventory table on which to filter. The value of each key is a list containing one or more key terms. This dictionary is passed to the "tabla_inventario" method of DatosGov through the "filtro" parameter.
#
# The terms entered in the dictionary do not need to carry the accents or capitalization found in the original column of the inventory table. For example, the results will be the same when searching for the words "Economía", "economía", "economia" or "ECONOMÍA".
#
# Below is an example that filters the inventory table by the "nombre" and "tipo" columns. The "nombre" column is checked for the terms "economia" or "ambiente", and the "tipo" column for the term "conjunto de datos". That is, we are looking for datasets on economy or environment topics.

# +
# Create the dictionary with the desired filter
filtro = {
    "nombre": ["economia", "ambiente"],
    "tipo": ["conjunto de datos"]
}

# Open the inventory table with the desired filter
inventario = DatosGov().tabla_inventario(filtro=filtro)
# -

# Print the inventory table with the filter applied in the previous cell
inventario

# #### Example: search by row and column range
#
# To filter the inventory table by the size of a dataset, include the names of the "filas" and "columnas" columns in the dictionary. The values of these keys are lists with two elements each: the first element is the minimum number of rows or columns and the second the maximum.
#
# Below is an example filter that selects datasets with at least 50 and at most 60 rows, and at least 8 and at most 10 columns

# +
# Create the dictionary with the desired filter
filtro = {
    "filas": [50, 60],
    "columnas": [8, 10]
}

# Open the inventory table with the desired filter
inventario = DatosGov().tabla_inventario(filtro=filtro)
# -

# Print the API code, name, description, rows and columns of the filtered inventory table
inventario[["numero_api", "nombre", "descripcion", "filas", "columnas"]]

# #### Example: search by date

# The inventory table can also be filtered by date. To do so, pass the filter dictionary with one of the date columns and specify the desired start and end dates. The following example shows how to get the inventory table for publications created between January 1, 2020 and February 1, 2020.

# +
# Create the dictionary with the desired filter
filtro = {
    "fecha_creacion": ["2020-01-01", "2020-02-01"],
}

# Open the inventory table with the desired filter
inventario = DatosGov().tabla_inventario(filtro=filtro)
# -

# Show the table filtered by date
inventario

# ### Open a dataset from the Open Data Portal

# To open a dataset from datos.gov.co you need the API code of that dataset, passed to the "cargar_base" method of the DatosGov class. This method creates an object containing the dataframe and the metadata dictionary of the dataset, which can be obtained with the "to_dataframe" and "metadatos" methods.
#
# Below is the code to load the dataset "Pueblos indígenas a nivel Nacional 2020", which appears in the last filter of the inventory table.
# #### Load a dataset with its API number

# Define the variable "numero_api", which contains the API number of the dataset "Pueblos indígenas a nivel Nacional 2020"
numero_api = "etwv-wj8f"

# Download the information of the dataset into the variable "data" with the "cargar_base" method.
# The API number is assigned to the "api_id" parameter, and "limite_filas" specifies that only 200 rows of the dataset are downloaded
data = DatosGov().cargar_base(api_id=numero_api, limite_filas=200)

# #### Get the dataframe of the dataset

# Get the dataframe of the dataset with the "to_dataframe" method
datos = data.to_dataframe()

# Display a reduced version of the dataframe
datos

# #### Get the metadata dictionary of the dataset

# The metadata is obtained with the "metadatos" method and assigned to the variable "meta"
meta = data.metadatos()

# Display the metadata dictionary
meta
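# The case- and accent-insensitive matching described above ("Economía" matching "economia") can be illustrated with plain pandas and unicodedata. This is a sketch of the behavior on a tiny made-up inventory table, not LEILA's actual implementation:

```python
import unicodedata
import pandas as pd

def normalize(text: str) -> str:
    # Lower-case and strip accents, so "Economía" matches "economia"
    text = unicodedata.normalize("NFKD", text.lower())
    return "".join(c for c in text if not unicodedata.combining(c))

# Made-up miniature inventory table with the same column names as tabla_inventario
inventario = pd.DataFrame({
    "nombre": ["Indicadores de Economía Nacional", "Calidad del Ambiente", "Registro escolar"],
    "tipo": ["conjunto de datos", "conjunto de datos", "mapa"],
})

# Same filter shape as in the key-term example above
filtro = {"nombre": ["economia", "ambiente"], "tipo": ["conjunto de datos"]}

# Keep a row only if every filtered column contains at least one of its terms
mask = pd.Series(True, index=inventario.index)
for columna, terminos in filtro.items():
    normalizada = inventario[columna].map(normalize)
    mask &= normalizada.apply(lambda v, t=terminos: any(normalize(term) in v for term in t))

print(inventario[mask])
```

# The two dataset rows survive the filter; the "mapa" row is dropped by the "tipo" condition.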
ejemplos/ejemplo_datosgov.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python [conda env:gitdev]
#     language: python
#     name: conda-env-gitdev-py
# ---

# ## SVM Classification

# +
from numpy import linalg
from menpo import io as mio
from menpo.visualize import print_dynamic
from sklearn.utils.fixes import bincount
import itertools
import warnings
from sklearn.utils import check_X_y, check_array
from sklearn.utils.extmath import safe_sparse_dot
from dAAMs.lda import lda, predict, chunk, n_fold_generate
from menpo.feature import igo, hog, no_op, double_igo as digo, dsift, fast_dsift, hellinger_vector_128_dsift
from sklearn import svm
import numpy as np

# %matplotlib inline
# -

# ### verification generation

import itertools

images = mio.import_images("/homes/yz4009/wd/databases/ear/EarVerification/VGGEAR/bound/")

images[0].path.name

# Build (label, name) records; import_images yields paths in sorted order,
# so records with the same label arrive adjacent (required by groupby below)
data = []
for img in mio.import_images("/homes/yz4009/wd/databases/ear/EarVerification/VGGEAR/bound/"):
    label = int(img.path.stem.split("-")[0])
    name = img.path.name
    data.append((label, name))


def pair_generation(data, n_folds=3):
    np.random.seed(10)
    n_data = len(data)
    n_id = len(np.unique([d[0] for d in data]))
    fold_limit = n_data / n_folds
    # itertools.groupby only merges *consecutive* equal keys,
    # so `data` must already be ordered by label
    grouped_data = itertools.groupby(data, lambda x: x[0])
    positives = []
    negatives = []
    folds = []

    def if_exist(item, lists):
        return np.array([item[0] == l and (item[1] == d).all() for l, d in lists]).any()

    def fold_process(pos, neg):
        # Top up the negative pool to half the size of the positive pool
        while len(neg) < len(pos) / 2:
            r_pos = pos[np.random.randint(0, len(pos))]
            try:
                isexist = r_pos in neg
            except:
                isexist = if_exist(r_pos, neg)
            if not isexist:
                # if not r_pos in neg:
                neg.append(r_pos)
        # Pair every negative with a positive of a *different* identity
        pair_negs = []
        for r_neg in neg:
            found = False
            while not found:
                r_pos = pos[np.random.randint(0, len(pos))]
                if not r_pos[0] == r_neg[0]:
                    pair_negs.append(r_pos)
                    found = True
        return list(zip(pos[::2], pos[1::2])), list(zip(neg, pair_negs))

    for identity, g_it in grouped_data:
        p_list = []
        for d in g_it:
            p_list.append(d)
        if len(p_list) % 2 > 0:
            negatives.append(p_list.pop())
        positives += p_list
        if len(positives) + len(negatives) > fold_limit:
            folds.append(fold_process(positives, negatives))
            positives = []
            negatives = []
    folds.append(fold_process(positives, negatives))
    return folds


# data = mio.import_pickle('/homes/yz4009/wd/PickleModel/EarRecognition/LDA-VGG-Data-dsift.pkl')
# data = mio.import_pickle('/homes/yz4009/wd/PickleModel/EarRecognition/LDA-VGG-Data-dsift.pkl', encoding='latin1')
# data = mio.import_pickle('/homes/yz4009/wd/PickleModel/EarRecognition/LDA-VGG-Data.pkl', encoding='latin1')
# data = mio.import_pickle('/homes/yz4009/wd/PickleModel/EarRecognition/VGGEAR-bound-PEP.pkl', encoding='latin1')
data = mio.import_pickle('/homes/yz4009/wd/PickleModel/EarRecognition/WPUTEDB-.pkl', encoding='latin1')

# +
# %%time
folds = pair_generation(data, n_folds=5)
# -

print(len(folds[0][0]))

new_folds = []
for p, n in folds:
    np.random.shuffle(p)
    np.random.shuffle(n)
    new_folds.append([p[:185], n[:185]])

print(len(new_folds[0][0]))

mio.export_pickle(folds, '/homes/yz4009/wd/PickleModel/EarRecognition/WPUTEDB-5folds-PEP.pkl', overwrite=True)

mio.export_pickle(folds, '/homes/yz4009/wd/PickleModel/EarRecognition/VGGEAR-5folds-PEP.pkl', overwrite=True)

mio.export_pickle(new_folds, "/homes/yz4009/wd/databases/ear/EarVerification/VGGEAR/protocol.pkl")

folds = mio.import_pickle("/homes/yz4009/wd/databases/ear/EarVerification/WPUTEDB/protocol-yx.pkl")

new_folds = []
for pos, neg in folds:
    one_fold = []
    for (_, i1), (_, i2) in pos[:185]:
        one_fold.append([[i1, i2], 1])
    for (_, i1), (_, i2) in neg[:185]:
        one_fold.append([[i1, i2], 0])
    new_folds.append(one_fold)

mio.export_pickle(new_folds, "/homes/yz4009/wd/databases/ear/EarVerification/WPUTEDB/protocol.pkl", overwrite=True)

len(new_folds[0])
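# One caveat worth demonstrating: `pair_generation` above relies on `itertools.groupby`, which only merges *consecutive* runs of equal keys. The (label, name) records must therefore arrive ordered by label — which holds here because `import_images` walks paths in sorted order, but would silently fragment identities otherwise:

```python
import itertools

# Label 1 appears twice but non-adjacently, so groupby splits it into two groups
data = [(1, "a"), (2, "b"), (1, "c")]
groups = [(label, list(items)) for label, items in itertools.groupby(data, lambda x: x[0])]
print(len(groups))  # 3 groups, not 2: the two label-1 records were not merged

# Sorting by label first restores the expected grouping
data_sorted = sorted(data, key=lambda x: x[0])
groups_sorted = [(label, list(items)) for label, items in itertools.groupby(data_sorted, lambda x: x[0])]
print(len(groups_sorted))  # 2 groups
```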
DeformableModelsOfEars/Fold-Generation.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
# default_exp auth
# -

#export
import base64
import os
import getpass
import requests

# +
#export
from tightai.conf import CLI_ENDPOINT, API_ENDPOINT, TIGHTAI_LOCAL_DIRECTORY, TIGHTAI_LOCAL_CREDENTIALS
from tightai.handlers import credentials

AUTH_PATH = TIGHTAI_LOCAL_CREDENTIALS
# -

#hide
test = False
if test:
    CLI_ENDPOINT = "http://cli.desalsa.io:8000"


# +
#export
class Auth:
    def username_input(self, label='Username / Email'):
        username = input("{label}: ".format(label=label))
        if username is None or username == "":
            print("Username is required. \n")
            return self.username_input()
        return username

    def email_input(self):
        email = input("Email: ")
        if email is None or email == "":
            print("Email is required. \n")
            return self.email_input()
        return email

    def password_input(self, label='Password (typing hidden on purpose)'):
        pw = getpass.getpass("{label}: ".format(label=label))
        if pw is None or pw == "":
            print("Password is required. \n")
            return self.password_input()
        return pw

    def login(self, username=None, password=None):
        if username is None:
            username = self.username_input()
        if password is None:
            password = self.password_input()
        TIGHTAI_LOCAL_DIRECTORY.mkdir(parents=True, exist_ok=True)
        data = {"username": username, "password": password}
        cli_login = f"{CLI_ENDPOINT}/login/"
        r = requests.post(cli_login, data=data)
        if r.status_code in range(200, 299):
            response_data = r.json()
            user = response_data.get("user")
            token = response_data.get("token")
            if user is not None:
                username = user.get('username')
                credentials.to_environ(username=username, token=token)
                credentials.to_file(username=username, token=token)
                print("You are now logged in")
                #return True
        else:
            try:
                response = r.json()
            except:
                return "There was an error in your request."
            if "non_field_errors" in response:
                errors = response['non_field_errors']
                for e in errors:
                    print(e)
            #return None
        #return True

    def logout(self):
        return credentials.remove()

    def signup(self):
        username = self.username_input(label='Username')
        email = self.email_input()
        password = self.password_input(label='Password')
        confirm_password = self.password_input(label='Confirm Password')
        # Keep prompting until both entries match
        while confirm_password != password:
            print("Passwords do not match. Please try again.")
            password = self.password_input(label='Password')
            confirm_password = self.password_input(label='Confirm Password')
        data = {"username": username, "email": email, "password": password, "password2": confirm_password}
        cli_login = "{base_endpoint}/register/".format(base_endpoint=CLI_ENDPOINT)
        r = requests.post(cli_login, data=data)
        if r.status_code in range(200, 299):
            response_data = r.json()
            print("You are now registered. \nPlease confirm your email. \nWe will send you a confirmation from `<EMAIL>` shortly.\nThank you.")
            user = response_data.get("user")
            token = response_data.get("token")
            if user is not None:
                username = user.get('username')
                credentials.to_environ(username=username, token=token)
                credentials.to_file(username=username, token=token)
            #return True
        else:
            try:
                response = r.json()
            except:
                return "There was an error in your request."
            if "non_field_errors" in response:
                errors = response['non_field_errors']
                for e in errors:
                    print(e)
            #return None
        #return True


auth = Auth()
path = AUTH_PATH
login = auth.login
logout = auth.logout
signup = auth.signup
# -

logout()

login()
03_auth.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# Lambda School Data Science
#
# *Unit 2, Sprint 1, Module 2*
#
# ---

# + [markdown] colab_type="text" id="7IXUfiQ2UKj6"
# # Regression 2
#
# ## Assignment
#
# You'll continue to **predict how much it costs to rent an apartment in NYC,** using the dataset from renthop.com.
#
# - [ ] Do train/test split. Use data from April & May 2016 to train. Use data from June 2016 to test.
# - [ ] Engineer at least two new features. (See below for explanation & ideas.)
# - [ ] Fit a linear regression model with at least two features.
# - [ ] Get the model's coefficients and intercept.
# - [ ] Get regression metrics RMSE, MAE, and $R^2$, for both the train and test data.
# - [ ] What's the best test MAE you can get? Share your score and features used with your cohort on Slack!
# - [ ] As always, commit your notebook to your fork of the GitHub repo.
#
#
# #### [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)
#
# > "Some machine learning projects succeed and some fail. What makes the difference? Easily the most important factor is the features used." — <NAME>, ["A Few Useful Things to Know about Machine Learning"](https://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf)
#
# > "Coming up with features is difficult, time-consuming, requires expert knowledge. 'Applied machine learning' is basically feature engineering." — <NAME>, [Machine Learning and AI via Brain simulations](https://forum.stanford.edu/events/2011/2011slides/plenary/2011plenaryNg.pdf)
#
# > Feature engineering is the process of using domain knowledge of the data to create features that make machine learning algorithms work.
#
# #### Feature Ideas
# - Does the apartment have a description?
# - How long is the description?
# - How many total perks does each apartment have?
# - Are cats _or_ dogs allowed?
# - Are cats _and_ dogs allowed?
# - Total number of rooms (beds + baths)
# - Ratio of beds to baths
# - What's the neighborhood, based on address or latitude & longitude?
#
# ## Stretch Goals
# - [ ] If you want more math, skim [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 3.1, Simple Linear Regression, & Chapter 3.2, Multiple Linear Regression
# - [ ] If you want more introduction, watch [<NAME>, Statistics 101: Simple Linear Regression](https://www.youtube.com/watch?v=ZkjP5RJLQF4) (20 minutes, over 1 million views)
# - [ ] Add your own stretch goal(s) !

# + colab={} colab_type="code" id="o9eSnDYhUGD7"
# %%capture
import sys

# If you're on Colab:
if 'google.colab' in sys.modules:
    DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
    # !pip install category_encoders==2.*

# If you're working locally:
else:
    DATA_PATH = 'C:/Users/ryanh/DS-Unit-2-Linear-Models/data/'

# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')

# + colab={} colab_type="code" id="cvrw-T3bZOuW"
import numpy as np
import pandas as pd

# Read New York City apartment rental listing data
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)

# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= np.percentile(df['price'], 0.5)) &
        (df['price'] <= np.percentile(df['price'], 99.5)) &
        (df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
        (df['latitude'] < np.percentile(df['latitude'], 99.95)) &
        (df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
        (df['longitude'] <= np.percentile(df['longitude'], 99.95))]

# +
# Show all columns and inspect the head
pd.set_option('display.max_columns', None)
df.head(3)

# +
# Convert to datetime
df['created'] = df['created'].apply(np.datetime64)

# +
# Engineer two features: description length and total amenity count
df['description_length'] = df['description'].apply(lambda s: len(s) if isinstance(s, str) else 0)

amenity_cols = ['cats_allowed', 'hardwood_floors', 'dogs_allowed', 'doorman', 'dishwasher',
                'laundry_in_building', 'fitness_center', 'laundry_in_unit', 'roof_deck',
                'outdoor_space', 'dining_room', 'high_speed_internet', 'balcony',
                'swimming_pool', 'terrace', 'loft', 'garden_patio', 'wheelchair_access',
                'common_outdoor_space']
df['amenities'] = df[amenity_cols].sum(axis=1)

# +
# Train/test split: per the assignment, train on April & May 2016, test on June 2016
train_date1 = np.datetime64("2016-04-01")
train_date2 = np.datetime64("2016-06-01")
test_date1 = np.datetime64("2016-06-01")
test_date2 = np.datetime64("2016-07-01")

train = df[(train_date1 <= df["created"]) & (df["created"] < train_date2)]
test = df[(test_date1 <= df["created"]) & (df["created"] < test_date2)]

# +
# Imports
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Features and target
features = ["description_length", "amenities"]
target = "price"

# Create the model and fit it on the training features and target
model = LinearRegression()
model.fit(train[features], train[target])

# Test MAE (y_true first, then predictions)
mean_absolute_error(test[target], model.predict(test[features]))

# +
# Model coefficients and intercept
coef = model.coef_
intercept = model.intercept_
print(f"coef: {coef}, intercept: {intercept}")

# +
# Metrics for the test data (note: r2_score is not symmetric, so y_true must come first)
predict = model.predict(test[features])
actual = test[target]
mae = mean_absolute_error(actual, predict)
R2 = r2_score(actual, predict)
RMSE = mean_squared_error(actual, predict)**0.5
print(f"""
Metrics for the test data:
MAE: {mae}
R2: {R2}
RMSE: {RMSE}
""")

# +
# Metrics for the train data
predict = model.predict(train[features])
actual = train[target]
mae = mean_absolute_error(actual, predict)
R2 = r2_score(actual, predict)
RMSE = mean_squared_error(actual, predict)**0.5
print(f"""
Metrics for the train data:
MAE: {mae}
R2: {R2}
RMSE: {RMSE}
""")
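# A few more features from the assignment's idea list can be computed the same way. The sketch below uses a tiny hypothetical frame rather than the real renthop data, just to show the transformations; in the notebook they would be applied to `df` before the train/test split.

```python
import pandas as pd

# Hypothetical mini listing frame; column names follow the renthop data used above
listings = pd.DataFrame({
    "bedrooms": [1, 2, 0, 3],
    "bathrooms": [1.0, 1.0, 1.0, 2.0],
    "cats_allowed": [1, 0, 1, 1],
    "dogs_allowed": [0, 0, 1, 1],
})

# Total number of rooms (beds + baths) and ratio of beds to baths
listings["total_rooms"] = listings["bedrooms"] + listings["bathrooms"]
listings["beds_per_bath"] = listings["bedrooms"] / listings["bathrooms"]

# "Cats *or* dogs allowed" vs "cats *and* dogs allowed"
listings["cats_or_dogs"] = ((listings["cats_allowed"] == 1) | (listings["dogs_allowed"] == 1)).astype(int)
listings["cats_and_dogs"] = ((listings["cats_allowed"] == 1) & (listings["dogs_allowed"] == 1)).astype(int)

print(listings[["total_rooms", "beds_per_bath", "cats_or_dogs", "cats_and_dogs"]])
```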
module2-regression-2/LS_DS_212_assignment.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # <a href="https://colab.research.google.com/github/RecoHut-Projects/recohut/blob/master/tutorials/modeling/T541654_group_rec_ddpg_ml1m_pytorch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # # Group Recommendations with Actor-critic RL Agent in MDP Environment on ML-1m Dataset # <img src='https://github.com/RecoHut-Stanzas/S758139/raw/main/images/group_recommender_actorcritic_1.svg'> # ## **Step 1 - Setup the environment** # ### **1.1 Install libraries** # !pip install -q -U git+https://github.com/RecoHut-Projects/recohut.git -b v0.0.3 # ### **1.2 Download datasets** # !wget -q --show-progress https://files.grouplens.org/datasets/movielens/ml-1m.zip # ### **1.3 Import libraries** # + from typing import Tuple, List, Dict import os import pandas as pd from collections import deque, defaultdict import shutil import zipfile import torch import numpy as np from scipy.sparse import coo_matrix # + # Utils from recohut.transforms.user_grouping import GroupGenerator from recohut.models.layers.ou_noise import OUNoise # Models from recohut.models.actor_critic import Actor, Critic from recohut.models.embedding import GroupEmbedding # RL from recohut.rl.memory import ReplayMemory from recohut.rl.agents.ddpg import DDPGAgent from recohut.rl.envs.recsys import Env # - # ### **1.4 Set params** class Config(object): """ Configurations """ def __init__(self): # Data self.data_folder_path = './data/silver' self.item_path = os.path.join(self.data_folder_path, 'movies.dat') self.user_path = os.path.join(self.data_folder_path, 'users.dat') self.group_path = os.path.join(self.data_folder_path, 'groupMember.dat') self.saves_folder_path = os.path.join('saves') # Recommendation system self.history_length = 5 
self.top_K_list = [5, 10, 20] self.rewards = [0, 1] # Reinforcement learning self.embedding_size = 32 self.state_size = self.history_length + 1 self.action_size = 1 self.embedded_state_size = self.state_size * self.embedding_size self.embedded_action_size = self.action_size * self.embedding_size # Numbers self.item_num = None self.user_num = None self.group_num = None self.total_group_num = None # Environment self.env_n_components = self.embedding_size self.env_tol = 1e-4 self.env_max_iter = 1000 self.env_alpha = 0.001 # Actor-Critic network self.actor_hidden_sizes = (128, 64) self.critic_hidden_sizes = (32, 16) # DDPG algorithm self.tau = 1e-3 self.gamma = 0.9 # Optimizer self.batch_size = 64 self.buffer_size = 100000 self.num_episodes = 10 # recommended = 1000 self.num_steps = 5 # recommended = 100 self.embedding_weight_decay = 1e-6 self.actor_weight_decay = 1e-6 self.critic_weight_decay = 1e-6 self.embedding_learning_rate = 1e-4 self.actor_learning_rate = 1e-4 self.critic_learning_rate = 1e-4 self.eval_per_iter = 10 # OU noise self.ou_mu = 0.0 self.ou_theta = 0.15 self.ou_sigma = 0.2 self.ou_epsilon = 1.0 # GPU if torch.cuda.is_available(): self.device = torch.device("cuda:0") else: self.device = torch.device("cpu") # ## **Step 2 - Data preparation** data_path = './ml-1m' output_path = './data/silver' # + ratings = pd.read_csv(os.path.join(data_path,'ratings.dat'), sep='::', engine='python', header=None) group_generator = GroupGenerator( user_ids=np.arange(ratings[0].max()+1), item_ids=np.arange(ratings[1].max()+1), ratings=ratings, output_path=output_path, rating_threshold=4, num_groups=1000, group_sizes=[2, 3, 4, 5], min_num_ratings=20, train_ratio=0.7, val_ratio=0.1, negative_sample_size=100, verbose=True) shutil.copyfile(src=os.path.join(data_path, 'movies.dat'), dst=os.path.join(output_path, 'movies.dat')) shutil.copyfile(src=os.path.join(data_path, 'users.dat'), dst=os.path.join(output_path, 'users.dat')) # - os.listdir(output_path) class 
DataLoader(object): """ Data Loader """ def __init__(self, config: Config): """ Initialize DataLoader :param config: configurations """ self.config = config self.history_length = config.history_length self.item_num = self.get_item_num() self.user_num = self.get_user_num() self.group_num, self.total_group_num, self.group2members_dict, self.user2group_dict = self.get_groups() if not os.path.exists(self.config.saves_folder_path): os.mkdir(self.config.saves_folder_path) def get_item_num(self) -> int: """ Get number of items :return: number of items """ df_item = pd.read_csv(self.config.item_path, sep='::', index_col=0, engine='python') self.config.item_num = df_item.index.max() return self.config.item_num def get_user_num(self) -> int: """ Get number of users :return: number of users """ df_user = pd.read_csv(self.config.user_path, sep='::', index_col=0, engine='python') self.config.user_num = df_user.index.max() return self.config.user_num def get_groups(self): """ Get number of groups and group members :return: group_num, total_group_num, group2members_dict, user2group_dict """ df_group = pd.read_csv(self.config.group_path, sep=' ', header=None, index_col=None, names=['GroupID', 'Members']) df_group['Members'] = df_group['Members']. 
\ apply(lambda group_members: tuple(map(int, group_members.split(',')))) group_num = df_group['GroupID'].max() users = set() for members in df_group['Members']: users.update(members) users = sorted(users) total_group_num = group_num + len(users) df_user_group = pd.DataFrame() df_user_group['GroupID'] = list(range(group_num + 1, total_group_num + 1)) df_user_group['Members'] = [(user,) for user in users] df_group = df_group.append(df_user_group, ignore_index=True) group2members_dict = {row['GroupID']: row['Members'] for _, row in df_group.iterrows()} user2group_dict = {user: group_num + user_index + 1 for user_index, user in enumerate(users)} self.config.group_num = group_num self.config.total_group_num = total_group_num return group_num, total_group_num, group2members_dict, user2group_dict def load_rating_data(self, mode: str, dataset_name: str, is_appended=True) -> pd.DataFrame(): """ Load rating data :param mode: in ['user', 'group'] :param dataset_name: name of the dataset in ['train', 'val', 'test'] :param is_appended: True to append all datasets before this dataset :return: df_rating """ assert (mode in ['user', 'group']) and (dataset_name in ['train', 'val', 'test']) rating_path = os.path.join(self.config.data_folder_path, mode + 'Rating' + dataset_name.capitalize() + '.dat') df_rating_append = pd.read_csv(rating_path, sep=' ', header=None, index_col=None, names=['GroupID', 'MovieID', 'Rating', 'Timestamp']) print('Read data:', rating_path) if is_appended: if dataset_name == 'train': df_rating = df_rating_append elif dataset_name == 'val': df_rating = self.load_rating_data(mode=mode, dataset_name='train') df_rating = df_rating.append(df_rating_append, ignore_index=True) else: df_rating = self.load_rating_data(mode=mode, dataset_name='val') df_rating = df_rating.append(df_rating_append, ignore_index=True) else: df_rating = df_rating_append return df_rating def _load_rating_matrix(self, df_rating: pd.DataFrame()): """ Load rating matrix :param df_rating: rating 
data :return: rating_matrix """ group_ids = df_rating['GroupID'] item_ids = df_rating['MovieID'] ratings = df_rating['Rating'] rating_matrix = coo_matrix((ratings, (group_ids, item_ids)), shape=(self.total_group_num + 1, self.config.item_num + 1)).tocsr() return rating_matrix def load_rating_matrix(self, dataset_name: str): """ Load group rating matrix :param dataset_name: name of the dataset in ['train', 'val', 'test'] :return: rating_matrix """ assert dataset_name in ['train', 'val', 'test'] df_user_rating = self.user2group(self.load_rating_data(mode='user', dataset_name=dataset_name)) df_group_rating = self.load_rating_data(mode='group', dataset_name=dataset_name) df_group_rating = df_group_rating.append(df_user_rating, ignore_index=True) rating_matrix = self._load_rating_matrix(df_group_rating) return rating_matrix def user2group(self, df_user_rating): """ Change user ids to group ids :param df_user_rating: user rating :return: df_user_rating """ df_user_rating['GroupID'] = df_user_rating['GroupID'].apply(lambda user_id: self.user2group_dict[user_id]) return df_user_rating def _load_eval_data(self, df_data_train: pd.DataFrame(), df_data_eval: pd.DataFrame(), negative_samples_dict: Dict[tuple, list]) -> pd.DataFrame(): """ Write evaluation data :param df_data_train: train data :param df_data_eval: evaluation data :param negative_samples_dict: one dictionary mapping (group_id, item_id) to negative samples :return: data for evaluation """ df_eval = pd.DataFrame() last_state_dict = defaultdict(list) groups = [] histories = [] actions = [] negative_samples = [] for group_id, rating_group in df_data_train.groupby(['GroupID']): rating_group.sort_values(by=['Timestamp'], ascending=True, ignore_index=True, inplace=True) state = rating_group[rating_group['Rating'] == 1]['MovieID'].values.tolist() last_state_dict[group_id] = state[-self.config.history_length:] for group_id, rating_group in df_data_eval.groupby(['GroupID']): rating_group.sort_values(by=['Timestamp'], 
ascending=True, ignore_index=True, inplace=True) action = rating_group[rating_group['Rating'] == 1]['MovieID'].values.tolist() state = deque(maxlen=self.history_length) state.extend(last_state_dict[group_id]) for item_id in action: if len(state) == self.config.history_length: groups.append(group_id) histories.append(list(state)) actions.append(item_id) negative_samples.append(negative_samples_dict[(group_id, item_id)]) state.append(item_id) df_eval['group'] = groups df_eval['history'] = histories df_eval['action'] = actions df_eval['negative samples'] = negative_samples return df_eval def load_negative_samples(self, mode: str, dataset_name: str): """ Load negative samples :param mode: in ['user', 'group'] :param dataset_name: name of the dataset in ['val', 'test'] :return: negative_samples_dict """ assert (mode in ['user', 'group']) and (dataset_name in ['val', 'test']) negative_samples_path = os.path.join(self.config.data_folder_path, mode + 'Rating' + dataset_name.capitalize() + 'Negative.dat') negative_samples_dict = {} with open(negative_samples_path, 'r') as negative_samples_file: for line in negative_samples_file.readlines(): negative_samples = line.split() ids = negative_samples[0][1:-1].split(',') group_id = int(ids[0]) if mode == 'user': group_id = self.user2group_dict[group_id] item_id = int(ids[1]) negative_samples = list(map(int, negative_samples[1:])) negative_samples_dict[(group_id, item_id)] = negative_samples return negative_samples_dict def load_eval_data(self, mode: str, dataset_name: str, reload=False): """ Load evaluation data :param mode: in ['user', 'group'] :param dataset_name: in ['val', 'test'] :param reload: True to reload the dataset file :return: data for evaluation """ assert (mode in ['user', 'group']) and (dataset_name in ['val', 'test']) exp_eval_path = os.path.join(self.config.saves_folder_path, 'eval_' + mode + '_' + dataset_name + '_' + str(self.config.history_length) + '.pkl') if reload or not os.path.exists(exp_eval_path): if 
dataset_name == 'val': df_rating_train = self.load_rating_data(mode=mode, dataset_name='train') else: df_rating_train = self.load_rating_data(mode=mode, dataset_name='val') df_rating_eval = self.load_rating_data(mode=mode, dataset_name=dataset_name, is_appended=False) if mode == 'user': df_rating_train = self.user2group(df_rating_train) df_rating_eval = self.user2group(df_rating_eval) negative_samples_dict = self.load_negative_samples(mode=mode, dataset_name=dataset_name) df_eval = self._load_eval_data(df_rating_train, df_rating_eval, negative_samples_dict) df_eval.to_pickle(exp_eval_path) print('Save data:', exp_eval_path) else: df_eval = pd.read_pickle(exp_eval_path) print('Load data:', exp_eval_path) return df_eval # ## **Step 3 - Training & Evaluation** class Evaluator(object): """ Evaluator """ def __init__(self, config: Config): """ Initialize Evaluator :param config: configurations """ self.config = config def evaluate(self, agent: DDPGAgent, df_eval: pd.DataFrame(), mode: str, top_K=5): """ Evaluate the agent :param agent: agent :param df_eval: evaluation data :param mode: in ['user', 'group'] :param top_K: length of the recommendation list :return: avg_recall_score, avg_ndcg_score """ recall_scores = [] ndcg_scores = [] for _, row in df_eval.iterrows(): group = row['group'] history = row['history'] item_true = row['action'] item_candidates = row['negative samples'] + [item_true] np.random.shuffle(item_candidates) state = [group] + history items_pred = agent.get_action(state=state, item_candidates=item_candidates, top_K=top_K) recall_score = 0 ndcg_score = 0 for k, item in enumerate(items_pred): if item == item_true: recall_score = 1 ndcg_score = np.log2(2) / np.log2(k + 2) break recall_scores.append(recall_score) ndcg_scores.append(ndcg_score) avg_recall_score = float(np.mean(recall_scores)) avg_ndcg_score = float(np.mean(ndcg_scores)) print('%s: Recall@%d = %.4f, NDCG@%d = %.4f' % (mode.capitalize(), top_K, avg_recall_score, top_K, avg_ndcg_score)) return 
avg_recall_score, avg_ndcg_score def train(config: Config, env: Env, agent: DDPGAgent, evaluator: Evaluator, df_eval_user: pd.DataFrame(), df_eval_group: pd.DataFrame()): """ Train the agent with the environment :param config: configurations :param env: environment :param agent: agent :param evaluator: evaluator :param df_eval_user: user evaluation data :param df_eval_group: group evaluation data :return: """ rewards = [] for episode in range(config.num_episodes): state = env.reset() agent.noise.reset() episode_reward = 0 for step in range(config.num_steps): action = agent.get_action(state) new_state, reward, _, _ = env.step(action) agent.replay_memory.push((state, action, reward, new_state)) state = new_state episode_reward += reward if len(agent.replay_memory) >= config.batch_size: agent.update() rewards.append(episode_reward / config.num_steps) print('Episode = %d, average reward = %.4f' % (episode, episode_reward / config.num_steps)) if (episode + 1) % config.eval_per_iter == 0: for top_K in config.top_K_list: evaluator.evaluate(agent=agent, df_eval=df_eval_user, mode='user', top_K=top_K) for top_K in config.top_K_list: evaluator.evaluate(agent=agent, df_eval=df_eval_group, mode='group', top_K=top_K) config = Config() dataloader = DataLoader(config) rating_matrix_train = dataloader.load_rating_matrix(dataset_name='val') df_eval_user_test = dataloader.load_eval_data(mode='user', dataset_name='test') df_eval_group_test = dataloader.load_eval_data(mode='group', dataset_name='test') env = Env(config=config, rating_matrix=rating_matrix_train, dataset_name='val') noise = OUNoise(embedded_action_size=config.embedded_action_size, ou_mu=config.ou_mu, ou_theta=config.ou_theta, ou_sigma=config.ou_sigma, ou_epsilon=config.ou_epsilon) agent = DDPGAgent(config=config, noise=noise, group2members_dict=dataloader.group2members_dict, verbose=True) evaluator = Evaluator(config=config) train(config=config, env=env, agent=agent, evaluator=evaluator, df_eval_user=df_eval_user_test, 
df_eval_group=df_eval_group_test) # ## **Closure** # For more details, you can refer to https://github.com/RecoHut-Stanzas/S758139. # <a href="https://github.com/RecoHut-Stanzas/S758139/blob/main/reports/S758139_Report.ipynb" alt="S758139_Report"> <img src="https://img.shields.io/static/v1?label=report&message=active&color=green" /></a> <a href="https://github.com/RecoHut-Stanzas/S758139" alt="S758139"> <img src="https://img.shields.io/static/v1?label=code&message=github&color=blue" /></a> # !pip install -q watermark # %reload_ext watermark # %watermark -a "Sparsh A." -m -iv -u -t -d # --- # **END**
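The `Evaluator.evaluate` loop above scores a ranked list against a single held-out item: Recall@K then reduces to a hit indicator, and NDCG@K to `1 / log2(rank + 2)` at the hit position. A standalone sketch of that metric logic (the helper name is mine, not part of the notebook):

```python
import numpy as np

def recall_ndcg_at_k(items_pred, item_true, top_K=5):
    """Recall@K and NDCG@K when exactly one item is relevant.

    Mirrors the scoring inside Evaluator.evaluate: with a single
    ground-truth item, Recall@K reduces to a hit indicator and
    NDCG@K to 1 / log2(rank + 2) at the hit position.
    """
    recall, ndcg = 0.0, 0.0
    for k, item in enumerate(items_pred[:top_K]):
        if item == item_true:
            recall = 1.0
            ndcg = np.log2(2) / np.log2(k + 2)  # log2(2) == 1
            break
    return recall, ndcg

print(recall_ndcg_at_k([7, 3, 9], 7))  # hit at rank 0 -> (1.0, 1.0)
print(recall_ndcg_at_k([3, 9, 7], 7))  # hit at rank 2 -> (1.0, 0.5)
print(recall_ndcg_at_k([3, 9, 1], 7))  # miss          -> (0.0, 0.0)
```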
tutorials/modeling/T541654_group_rec_ddpg_ml1m_pytorch.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import math import os from matplotlib.pyplot import cm from animation.scene import Video, GIFfromMP4Video from spaces.threeD import State from spaces.twoD import SGDVisOneVariable # - # # Momentum # + # 3d surfaces # test functions from spaces.threeD import State from spaces.twoD import SGDVisOneVariable state = State( space_lim_min=-7, space_lim_max=7, x_initial=-6, y_initial=0, test_function="parabolic", iteration=240, ) steps_standard = state.run_gd(epsilon=0.001, alpha=0.9, nesterov=False) steps_nesterov = state.run_gd(epsilon=0.001, alpha=0.9, nesterov=True) steps_adam = state.run_adam(epsilon=0.01) steps_adagrad = state.run_adagrad(epsilon=0.1) steps_rmsprop = state.run_rmsprop(epsilon=0.1) # - state.plot_steps( [ # steps_standard, # steps_nesterov, # steps_adam, steps_adagrad, steps_rmsprop, ], colors=["purple", "gray", "green", "red"], steps_until_n=90, n_back=90, ) for i in range(240): file_path = "frames/plot_{0:03}.png".format(i) fig = state.plot_steps( [steps_adagrad, steps_rmsprop], steps_until_n=i, azimuth=5 + 5 * math.log(i + 1), elevation=20 + 6 * math.log(i + 1), color_map=cm.gray, n_back=20, plot_title="Adagrad (blue) vs RMSProp (green)", colors=["blue", "green"], ) fig.savefig(file_path) # + FILE_NAME_WO_EXTENSION = "adagrad-vs-rmsprop" video = Video(dir_to_save="frames", video_name=FILE_NAME_WO_EXTENSION, frame_rate=29) cmd_video = video.get_fmpeg_video_cmd() os.system(cmd_video) video # + gif = GIFfromMP4Video(file_name=FILE_NAME_WO_EXTENSION, dir_to_save="", frame_rate=29) cmd_gif = gif.get_fmpeg_gif_cmd() os.system(cmd_gif) gif
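The animation above contrasts Adagrad and RMSProp; since the `spaces` and `animation` modules are project-specific, here is a minimal, self-contained sketch of the two update rules being compared, run on the same 1-D parabola (the learning rates are illustrative, not the notebook's settings):

```python
import numpy as np

def adagrad_step(x, grad, cache, lr=0.1, eps=1e-8):
    # Adagrad accumulates *all* past squared gradients, so its
    # effective step size only ever shrinks.
    cache = cache + grad ** 2
    x = x - lr * grad / (np.sqrt(cache) + eps)
    return x, cache

def rmsprop_step(x, grad, cache, lr=0.1, decay=0.9, eps=1e-8):
    # RMSProp keeps an exponentially decaying average instead, so
    # old gradients are forgotten and the step size stays usable.
    cache = decay * cache + (1 - decay) * grad ** 2
    x = x - lr * grad / (np.sqrt(cache) + eps)
    return x, cache

# Minimize f(x) = x^2 (gradient 2x) starting from x = -6,
# echoing the parabolic test surface; 240 iterations as above.
xa = xr = -6.0
ca = cr = 0.0
for _ in range(240):
    xa, ca = adagrad_step(xa, 2 * xa, ca)
    xr, cr = rmsprop_step(xr, 2 * xr, cr)
print(abs(xa), abs(xr))  # RMSProp ends much closer to the minimum
```

The printout makes the difference in the animation concrete: Adagrad's monotonically growing cache leaves it crawling partway down the slope, while RMSProp's decayed cache keeps making progress.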
experiments-3d.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Data Visualization for Exploration # This notebook details data visualization for exploring a dataset. The goal is to understand more about the data as a human, not to make beautiful graphs, communicate, or feature engineering input into models. # + import pandas as pd import numpy as np import scipy.stats as st #ggplot equivalent: plotnine from plotnine import * #scales package equivalent: mizani from mizani.breaks import * from mizani.formatters import * #widgets from ipywidgets import interact, interactive, fixed, interact_manual import ipywidgets as widgets #utility import utils def clean_comma(x): return float(str(x).replace(',','')) # - ''' Snippet for plotnine with thai font by @korakot https://gist.github.com/korakot/01d181229b21411b0a20784e0ca20d3d ''' import matplotlib # # !wget https://github.com/Phonbopit/sarabun-webfont/raw/master/fonts/thsarabunnew-webfont.ttf -q # # !cp thsarabunnew-webfont.ttf /usr/share/fonts/truetype/ matplotlib.font_manager._rebuild() matplotlib.rc('font', family='TH Sarabun New') theme_set(theme_minimal(11, 'TH Sarabun New')); df = pd.read_csv('data/taladrod.csv') df['sales_price'] = df.sales_price.map(clean_comma) df['market_price'] = df.market_price.map(clean_comma) df.head() # ## Warming Up: Missing Values # We use grammar of graphics implementation `ggplot` (ported to Python as `plotnine`) to explore the `taladrod` dataset. Grammar of graphics is an especially useful tool since we do not know exactly what kind of plots we want to see and want to be able to add them up as we go. 
# # ![Grammar of Graphics](images/ggpyramid.png) # Source: [A Comprehensive Guide to the Grammar of Graphics for Effective Visualization of Multi-dimensional Data](https://towardsdatascience.com/a-comprehensive-guide-to-the-grammar-of-graphics-for-effective-visualization-of-multi-dimensional-1f92b4ed4149) missing = utils.check_missing(df) missing['over90'] = missing.per_missing.map(lambda x: True if x>0.9 else False) missing.head() g = (ggplot(missing,aes(x='rnk',y='per_missing',fill='over90')) + #base plot geom_col() + #type of plot geom_text(aes(x='rnk',y='per_missing+0.1',label='round(100*per_missing,2)')) +#annotate scale_y_continuous(labels=percent_format()) + #y-axis tick theme_minimal() + coord_flip()#theme and flipping plot ) g #drop columns with too many missing values df.drop(missing[missing.over90==True].col_name,1,inplace=True) df.head() # ## Categorical Variables # We want to know primarily two things about our categorical variables: # 1. How each variable is distributed # 2. How each variable relate to the dependent variable # * 2.1 when dependent variable is numerical # * 2.2 when dependent variable is categorical cat_vars = ['brand','series','gen','color','gear','contact_location'] cat_df = df[cat_vars].copy() cat_df.head() # To simplify the data cleaning step, we "otherify" values that appear less than 3% of the time in all categorical columns. #otherify popular values; you can (should?) also have a mapping dict for col in cat_vars: cat_df = utils.otherify(cat_df,col,th=0.03) # ### Value Distribution # Even without plotting them out, we can see the value distribution in each variable using `ipywidgets`. interact(utils.value_dist, df =fixed(cat_df), col = widgets.Dropdown(options=list(cat_df.columns),value='brand')) # **Exercise** Implement `cat_plot` function that plots value distribution for each categorical variable. 
# + def cat_plot(df,col): return utils.cat_plot(df,col) #input dataframe and column #output histogram plot of value distribution interact(cat_plot, df=fixed(cat_df), col = widgets.Dropdown(options=list(cat_df.columns),value='brand')) # + #excluding others def cat_plot_noothers(df,col): x = df.copy() x = x[x[col]!='others'] return utils.cat_plot(x,col) + utils.thai_text(8) interact(cat_plot_noothers, df=fixed(cat_df), col = widgets.Dropdown(options=list(cat_df.columns),value='gen')) # - # ### Numerical and Categorical Variables #relationship between dependent variable and categorical variable cat_df['sales_price'] = utils.boxcox(df['sales_price']) cat_df.head() #relationship between sales price and color cat_df.groupby('color').sales_price.describe() # **Exercise** Implement `numcat_plot` function that plots the relationship between a dependent numerical variable and an independent categorical as displayed above. Useful geoms are `geom_boxplot`, `geom_violin` and `geom_jitter`. Optionally remove outliers before plotting. def numcat_plot(df,num,cat, no_outliers=True, geom=geom_boxplot()): return utils.numcat_plot(df,num,cat, no_outliers, geom) #plot the summary above interact(numcat_plot, df=fixed(cat_df), num=fixed('sales_price'), no_outliers = widgets.Checkbox(value=True), geom=fixed(geom_boxplot()), #geom_violin, geom_jitter cat= widgets.Dropdown(options=list(cat_df.columns)[:-1],value='gen')) interact(numcat_plot, df=fixed(cat_df), num=fixed('sales_price'), no_outliers = widgets.Checkbox(value=True), geom=fixed(geom_violin()), #geom_violin, geom_jitter cat= widgets.Dropdown(options=list(cat_df.columns)[:-1],value='series')) # Sometimes we want to see the numerical distribution filled with categories. This is especially useful plotting the results of a binary classification. 
# + def numdist_plot(df, num,cat, geom=geom_density(alpha=0.5), no_outliers=True): return utils.numdist_plot(df, num, cat, geom, no_outliers) #either #density: geom_density(alpha=0.5) #histogram: geom_histogram(binwidth=0.5, position='identity',alpha=0.5) #position: identity or dodge numdist_plot(cat_df,'sales_price','gear') # - numdist_plot(cat_df,'sales_price','gear', geom=geom_histogram(binwidth=0.5, position='dodge',alpha=0.5)) # ### Categorical and Categorical Variables # **Exercise** We can cross-tabulate categorical variables to see their relationship by using `facet_wrap`; for instance, if our dependent variable is `gear` and indpendent variable of interest is `color`. def catcat_plot(df, cat_dep, cat_ind): return utils.catcat_plot(df,cat_dep,cat_ind) interact(catcat_plot, df=fixed(cat_df), cat_dep=widgets.Dropdown(options=list(cat_df.columns)[:-1],value='gear'), cat_ind= widgets.Dropdown(options=list(cat_df.columns)[:-1],value='color')) # ### Multiple Ways of Relationships # You can use `facet_grid` to display multiple ways of relationships; but keep in mind that this is probably what your model is doing anyways so it might not be most human-readable plot to explore. #getting fancy; not necessarily the best idea new_df = utils.remove_outliers(cat_df,'sales_price') g = (ggplot(new_df, aes(x='gen',y='sales_price')) + geom_boxplot() + theme_minimal() + facet_grid('contact_location~color') + theme(axis_text_x = element_text(angle = 90, hjust = 1)) ) + utils.thai_text(8) g # ## Numerical Variables # We want to know two things about numerical variables: # 1. Their distributions # 2. 
Their relationships with one another; possibly this involves transforming variables to make them less skewed aka more difficult to see variations import datetime now = datetime.datetime.now() df['nb_year'] = now.year - df['year'] num_vars = ['nb_year','sales_price','market_price','subscribers'] num_df = df[num_vars].dropna() #this is why you need to deal with missing values BEFORE exploration num_df.describe() # `seaborn` has an excellent `pairplot` implementation which not only shows the distribution of values but also their relathionships. It seems like we can get what we want easily; however, as we can see `sales_price` and `market_price` are a little skewed, making it more difficult to see their relationships with other more spread out variables. import seaborn as sns sns.pairplot(num_df) #non-normal data is a problem! # In a lot of cases, a variable with normally distributed values have more variations and easier for us to see their relationships with other variables. We will try to transform our skewed variables to more "normal" ones to see if that helps. # # **Q-Q plot** compares two probability distributions by plotting their quantiles against each other. We can use this to determine the normality of a variable by plotting the sample quantiles (from the data we have) against its theoretical quantiles (where the quantiles would be if the variable is normally distributed). interact(utils.qq_plot, df=fixed(num_df), col=widgets.Dropdown(options=list(num_df.columns))) # **Box-Cox transformation** is a statistical technique used to make data look like more normally distributed. # # \begin{align} # g_\lambda(y) = \left\{ # \begin{array}{lr}\displaystyle\frac{y^\lambda - 1}{\lambda} & \lambda \neq 0\\ # & \\ # \log(y) & \lambda = 0 # \end{array} # \right. # \end{align} # **Exercise** Implement `boxcox` transformation according to the equation above. 
def boxcox(ser, lamb=0):
    pass
    # input a column from pandas dataframe
    # output transformed column

# One way of choosing the hyperparameter $\lambda$ is to look at the Q-Q plot and choose the transformation that makes the slope closest to 1.

# +
# see transformation results
def what_lamb(df, col, lamb):
    sample_df = df.copy()
    former_g = utils.qq_plot(sample_df, col)
    sample_df[col] = utils.boxcox(sample_df[col], lamb)
    print(utils.qq_plot(sample_df, col), former_g)

interact(what_lamb,
         df=fixed(num_df),
         col=widgets.Dropdown(options=list(num_df.columns), value='sales_price'),
         lamb=widgets.FloatSlider(min=-3, max=3, step=0.5, value=0))
# -

# This can also be automated by plotting a slope for each arbitrary $\lambda$; for instance, from -3 to 3.

lamb_df = utils.boxcox_lamb_df(num_df.subscribers)

interact(utils.boxcox_plot,
         df=fixed(num_df),
         col=widgets.Dropdown(options=list(num_df.columns), value='sales_price'),
         ls=fixed([i / 10 for i in range(-30, 31, 5)]))

# transform sales and market prices
for col in ['sales_price', 'market_price']:
    num_df['new_' + col] = utils.boxcox(num_df[col], utils.boxcox_lamb(num_df[col]))

# You can see that, post transformation, the (lack of) relationships between variables is much clearer to see.

sns.pairplot(num_df[['nb_year', 'new_sales_price', 'new_market_price', 'subscribers']])  # a little better!

# For our example, we have only four numerical variables; but imagine when you have ten or more. You may want to plot their distributions separately from their relationships.

num_m = num_df.melt()
num_m.head()

# **Exercise** Implement `value_dist_plot` to plot the value distribution of every variable.

def value_dist_plot(df, bins=30):
    # input: dataframe with only numerical variables
    # output: distribution plot for each variable
    return utils.value_dist_plot(df, bins)

value_dist_plot(num_df)

# Likewise, in case there are too many pairs of relationships, you might plot them pair by pair with `ipywidgets` and `seaborn`'s `jointplot` function.

interact(utils.jointplot,
         df=fixed(num_df),
         col_x=widgets.Dropdown(options=list(num_df.columns), value='sales_price'),
         col_y=widgets.Dropdown(options=list(num_df.columns), value='market_price'),
         kind=widgets.Dropdown(options=['scatter', 'resid', 'reg', 'hex', 'kde', 'point'], value='scatter'))

# As you might have noticed, we have not used any statistical concept to describe the relationships, and that is by design. We can also see a correlation table with a simple `pandas` function:

# correlation table if you must; but it's just ONE number per relationship
num_df.corr(method='pearson').style.background_gradient(cmap='coolwarm')

# +
def pearson_corr(x, y):
    sub_x = x - x.mean()
    sub_y = y - y.mean()
    return (sub_x * sub_y).sum() / np.sqrt((sub_x ** 2).sum() * (sub_y ** 2).sum())

# spearman and kendall: pearson with rank variables
pearson_corr(df.nb_year, df.sales_price)
# -

# However, the famous Anscombe plots show us that it is always better to look at the distribution rather than a single summary number.
#
# ![Anscombe's Quartet](images/anscombe.png)
#
# Source: [A Comprehensive Guide to the Grammar of Graphics for Effective Visualization of Multi-dimensional Data](https://towardsdatascience.com/a-comprehensive-guide-to-the-grammar-of-graphics-for-effective-visualization-of-multi-dimensional-1f92b4ed4149)
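The `boxcox` exercise above follows directly from the piecewise definition: $(y^\lambda - 1)/\lambda$ for $\lambda \neq 0$ and $\log(y)$ for $\lambda = 0$. One possible solution sketch (a standalone function, rather than the notebook's `utils.boxcox`):

```python
import numpy as np

def boxcox(ser, lamb=0):
    """Box-Cox transform: (y**lamb - 1) / lamb for lamb != 0, log(y) for lamb == 0.

    Works on any array-like of positive values (e.g. a pandas Series).
    """
    y = np.asarray(ser, dtype=float)
    if lamb == 0:
        return np.log(y)
    return (y ** lamb - 1) / lamb

prices = np.array([1.0, np.e, np.e ** 2])
print(boxcox(prices, lamb=0))  # [0. 1. 2.]
print(boxcox(prices, lamb=1))  # identical to prices - 1
```

Note that the $\lambda \neq 0$ branch converges to $\log(y)$ as $\lambda \to 0$, which is why the two cases join smoothly when scanning $\lambda$ values.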
explore.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os import struct import numpy as np def load_mnist(path, kind='train'): labels_path = os.path.join(path, '%s-labels-idx1-ubyte' % kind) images_path = os.path.join(path, '%s-images-idx3-ubyte' % kind) with open(labels_path, 'rb') as lbpath: magic, n = struct.unpack('>II', lbpath.read(8)) labels = np.fromfile(lbpath, dtype=np.uint8) with open(images_path, 'rb') as imgpath: magic, num, rows, cols = struct.unpack('>IIII', imgpath.read(16)) images = np.fromfile(imgpath, dtype=np.uint8).reshape(len(labels), 784) return images, labels # - # reding the MNIST dataset into variables X_train, y_train = load_mnist('./data/mnist', kind='train') print('Rows: %d, columns: %d' % (X_train.shape[0], X_train.shape[1])) X_test, y_test = load_mnist('./data/mnist', kind='t10k') print('Rows: %d, columns: %d' % (X_test.shape[0], X_test.shape[1])) # visualizing examples of the digits import matplotlib.pyplot as plt fig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True) ax = ax.flatten() for i in range(10): img = X_train[y_train == i][0].reshape(28, 28) ax[i].imshow(img, cmap='Greys', interpolation='nearest') ax[0].set_xticks([]) ax[0].set_yticks([]) plt.tight_layout() plt.show() # plotting multiple examples of the same digit to see the diference fig, ax = plt.subplots(nrows=5, ncols=5, sharex=True, sharey=True) ax = ax.flatten() for i in range(25): img = X_train[y_train == 7][i].reshape(28, 28) ax[i].imshow(img, cmap='Greys', interpolation='nearest') ax[0].set_xticks([]) ax[0].set_yticks([]) plt.tight_layout() plt.show() # + # implementing a multi-layer perceptron import numpy as np from scipy.special import expit import sys class NeuralNetMLP(object): """ Feedforward neural network / Multi-layer perceptron classifier. 
Parameters ------------ n_output : int Number of output units, should be equal to the number of unique class labels. n_features : int Number of features (dimensions) in the target dataset. Should be equal to the number of columns in the X array. n_hidden : int (default: 30) Number of hidden units. l1 : float (default: 0.0) Lambda value for L1-regularization. No regularization if l1=0.0 (default) l2 : float (default: 0.0) Lambda value for L2-regularization. No regularization if l2=0.0 (default) epochs : int (default: 500) Number of passes over the training set. eta : float (default: 0.001) Learning rate. alpha : float (default: 0.0) Momentum constant. Factor multiplied with the gradient of the previous epoch t-1 to improve learning speed w(t) := w(t) - (grad(t) + alpha*grad(t-1)) decrease_const : float (default: 0.0) Decrease constant. Shrinks the learning rate after each epoch via eta / (1 + epoch*decrease_const) shuffle : bool (default: True) Shuffles training data every epoch if True to prevent circles. minibatches : int (default: 1) Divides training data into k minibatches for efficiency. Normal gradient descent learning if k=1 (default). random_state : int (default: None) Set random state for shuffling and initializing the weights. Attributes ----------- cost_ : list Sum of squared errors after each epoch. """ def __init__(self, n_output, n_features, n_hidden=30, l1=0.0, l2=0.0, epochs=500, eta=0.001, alpha=0.0, decrease_const=0.0, shuffle=True, minibatches=1, random_state=None): np.random.seed(random_state) self.n_output = n_output self.n_features = n_features self.n_hidden = n_hidden self.w1, self.w2 = self._initialize_weights() self.l1 = l1 self.l2 = l2 self.epochs = epochs self.eta = eta self.alpha = alpha self.decrease_const = decrease_const self.shuffle = shuffle self.minibatches = minibatches def _encode_labels(self, y, k): """Encode labels into one-hot representation Parameters ------------ y : array, shape = [n_samples] Target values. 
Returns ----------- onehot : array, shape = (n_labels, n_samples) """ onehot = np.zeros((k, y.shape[0])) for idx, val in enumerate(y): onehot[val, idx] = 1.0 return onehot def _initialize_weights(self): """Initialize weights with small random numbers.""" w1 = np.random.uniform(-1.0, 1.0, size=self.n_hidden*(self.n_features + 1)) w1 = w1.reshape(self.n_hidden, self.n_features + 1) w2 = np.random.uniform(-1.0, 1.0, size=self.n_output*(self.n_hidden + 1)) w2 = w2.reshape(self.n_output, self.n_hidden + 1) return w1, w2 def _sigmoid(self, z): """Compute logistic function (sigmoid) Uses scipy.special.expit to avoid overflow error for very small input values z. """ # return 1.0 / (1.0 + np.exp(-z)) return expit(z) def _sigmoid_gradient(self, z): """Compute gradient of the logistic function""" sg = self._sigmoid(z) return sg * (1 - sg) def _add_bias_unit(self, X, how='column'): """Add bias unit (column or row of 1s) to array at index 0""" if how == 'column': X_new = np.ones((X.shape[0], X.shape[1]+1)) X_new[:, 1:] = X elif how == 'row': X_new = np.ones((X.shape[0]+1, X.shape[1])) X_new[1:, :] = X else: raise AttributeError('`how` must be `column` or `row`') return X_new def _feedforward(self, X, w1, w2): """Compute feedforward step Parameters ----------- X : array, shape = [n_samples, n_features] Input layer with original features. w1 : array, shape = [n_hidden_units, n_features] Weight matrix for input layer -> hidden layer. w2 : array, shape = [n_output_units, n_hidden_units] Weight matrix for hidden layer -> output layer. Returns ---------- a1 : array, shape = [n_samples, n_features+1] Input values with bias unit. z2 : array, shape = [n_hidden, n_samples] Net input of hidden layer. a2 : array, shape = [n_hidden+1, n_samples] Activation of hidden layer. z3 : array, shape = [n_output_units, n_samples] Net input of output layer. a3 : array, shape = [n_output_units, n_samples] Activation of output layer. 
""" a1 = self._add_bias_unit(X, how='column') z2 = w1.dot(a1.T) a2 = self._sigmoid(z2) a2 = self._add_bias_unit(a2, how='row') z3 = w2.dot(a2) a3 = self._sigmoid(z3) return a1, z2, a2, z3, a3 def _L2_reg(self, lambda_, w1, w2): """Compute L2-regularization cost""" return (lambda_/2.0) * (np.sum(w1[:, 1:] ** 2) + np.sum(w2[:, 1:] ** 2)) def _L1_reg(self, lambda_, w1, w2): """Compute L1-regularization cost""" return (lambda_/2.0) * (np.abs(w1[:, 1:]).sum() + np.abs(w2[:, 1:]).sum()) def _get_cost(self, y_enc, output, w1, w2): """Compute cost function. Parameters ---------- y_enc : array, shape = (n_labels, n_samples) one-hot encoded class labels. output : array, shape = [n_output_units, n_samples] Activation of the output layer (feedforward) w1 : array, shape = [n_hidden_units, n_features] Weight matrix for input layer -> hidden layer. w2 : array, shape = [n_output_units, n_hidden_units] Weight matrix for hidden layer -> output layer. Returns --------- cost : float Regularized cost. """ term1 = -y_enc * (np.log(output)) term2 = (1 - y_enc) * np.log(1 - output) cost = np.sum(term1 - term2) L1_term = self._L1_reg(self.l1, w1, w2) L2_term = self._L2_reg(self.l2, w1, w2) cost = cost + L1_term + L2_term return cost def _get_gradient(self, a1, a2, a3, z2, y_enc, w1, w2): """ Compute gradient step using backpropagation. Parameters ------------ a1 : array, shape = [n_samples, n_features+1] Input values with bias unit. a2 : array, shape = [n_hidden+1, n_samples] Activation of hidden layer. a3 : array, shape = [n_output_units, n_samples] Activation of output layer. z2 : array, shape = [n_hidden, n_samples] Net input of hidden layer. y_enc : array, shape = (n_labels, n_samples) one-hot encoded class labels. w1 : array, shape = [n_hidden_units, n_features] Weight matrix for input layer -> hidden layer. w2 : array, shape = [n_output_units, n_hidden_units] Weight matrix for hidden layer -> output layer. 
Returns --------- grad1 : array, shape = [n_hidden_units, n_features] Gradient of the weight matrix w1. grad2 : array, shape = [n_output_units, n_hidden_units] Gradient of the weight matrix w2. """ # backpropagation sigma3 = a3 - y_enc z2 = self._add_bias_unit(z2, how='row') sigma2 = w2.T.dot(sigma3) * self._sigmoid_gradient(z2) sigma2 = sigma2[1:, :] grad1 = sigma2.dot(a1) grad2 = sigma3.dot(a2.T) # regularize grad1[:, 1:] += self.l2 * w1[:, 1:] grad1[:, 1:] += self.l1 * np.sign(w1[:, 1:]) grad2[:, 1:] += self.l2 * w2[:, 1:] grad2[:, 1:] += self.l1 * np.sign(w2[:, 1:]) return grad1, grad2 def predict(self, X): """Predict class labels Parameters ----------- X : array, shape = [n_samples, n_features] Input layer with original features. Returns: ---------- y_pred : array, shape = [n_samples] Predicted class labels. """ if len(X.shape) != 2: raise AttributeError('X must be a [n_samples, n_features] array.\n' 'Use X[:,None] for 1-feature classification,' '\nor X[[i]] for 1-sample classification') a1, z2, a2, z3, a3 = self._feedforward(X, self.w1, self.w2) y_pred = np.argmax(z3, axis=0) return y_pred def fit(self, X, y, print_progress=False): """ Learn weights from training data. Parameters ----------- X : array, shape = [n_samples, n_features] Input layer with original features. y : array, shape = [n_samples] Target class labels. print_progress : bool (default: False) Prints progress as the number of epochs to stderr. 
        Returns:
        ----------
        self

        """
        self.cost_ = []
        X_data, y_data = X.copy(), y.copy()
        y_enc = self._encode_labels(y, self.n_output)

        delta_w1_prev = np.zeros(self.w1.shape)
        delta_w2_prev = np.zeros(self.w2.shape)

        for i in range(self.epochs):

            # adaptive learning rate
            self.eta /= (1 + self.decrease_const*i)

            if print_progress:
                sys.stderr.write('\rEpoch: %d/%d' % (i+1, self.epochs))
                sys.stderr.flush()

            if self.shuffle:
                idx = np.random.permutation(y_data.shape[0])
                X_data, y_enc = X_data[idx], y_enc[:, idx]

            mini = np.array_split(range(y_data.shape[0]), self.minibatches)
            for idx in mini:

                # feedforward
                a1, z2, a2, z3, a3 = self._feedforward(X_data[idx], self.w1, self.w2)
                cost = self._get_cost(y_enc=y_enc[:, idx], output=a3, w1=self.w1, w2=self.w2)
                self.cost_.append(cost)

                # compute gradient via backpropagation
                grad1, grad2 = self._get_gradient(a1=a1, a2=a2, a3=a3, z2=z2,
                                                  y_enc=y_enc[:, idx],
                                                  w1=self.w1, w2=self.w2)

                delta_w1, delta_w2 = self.eta * grad1, self.eta * grad2
                self.w1 -= (delta_w1 + (self.alpha * delta_w1_prev))
                self.w2 -= (delta_w2 + (self.alpha * delta_w2_prev))
                delta_w1_prev, delta_w2_prev = delta_w1, delta_w2

        return self

# +
# initializing a new neural network with 784 input units, 50 hidden units, and 10 output units
nn = NeuralNetMLP(n_output=10,
                  n_features=X_train.shape[1],
                  n_hidden=50,
                  l2=0.1,
                  l1=0.0,
                  epochs=1000,
                  eta=0.001,
                  alpha=0.001,
                  decrease_const=0.00001,
                  shuffle=True,
                  minibatches=50,
                  random_state=1)

nn.fit(X_train, y_train, print_progress=True)
# -

# visualizing the cost
plt.plot(range(len(nn.cost_)), nn.cost_)
plt.ylim([0, 2000])
plt.ylabel('Cost')
plt.xlabel('Epochs * 50')
plt.tight_layout()
plt.show()

# +
# plotting a smoother version of the cost function against the number of epochs by averaging over the mini-batch intervals
batches = np.array_split(range(len(nn.cost_)), 1000)
cost_ary = np.array(nn.cost_)
cost_avgs = [np.mean(cost_ary[i]) for i in batches]

plt.plot(range(len(cost_avgs)), cost_avgs, color='red')
plt.ylim([0, 2000])
plt.ylabel('Cost')
plt.xlabel('Epochs')
plt.tight_layout()
plt.show()
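The backpropagation step above relies on `_sigmoid_gradient`; its correctness is easy to check against a central finite difference. A standalone sketch using a pure-Python sigmoid in place of `scipy.special.expit`:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_gradient(z):
    # sigma'(z) = sigma(z) * (1 - sigma(z)), the identity used by
    # NeuralNetMLP._sigmoid_gradient during backpropagation
    s = sigmoid(z)
    return s * (1.0 - s)

# Compare the analytic gradient against a central finite difference.
h = 1e-6
for z in (-2.0, 0.0, 3.0):
    numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)
    assert abs(sigmoid_gradient(z) - numeric) < 1e-8

print(sigmoid_gradient(0.0))  # 0.25, the maximum: sigma(0) = 0.5
```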
ch12/.ipynb_checkpoints/01-image-recognition-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # !pip install easytorch from easytorch import EasyTorch, ETTrainer, ConfusionMatrix, ETMeter from torchvision import datasets, transforms from torch import nn import torch.nn.functional as F import torch from IPython.display import Image transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ]) # **Define the neural network. I just borrowed it from here: https://github.com/pytorch/examples/blob/master/mnist/main.py** class MNISTNet(nn.Module): def __init__(self): super(MNISTNet, self).__init__() self.conv1 = nn.Conv2d(1, 32, 3, 1) self.conv2 = nn.Conv2d(32, 64, 3, 1) self.dropout1 = nn.Dropout(0.25) self.dropout2 = nn.Dropout(0.5) self.fc1 = nn.Linear(9216, 128) self.fc2 = nn.Linear(128, 10) def forward(self, x): x = self.conv1(x) x = F.relu(x) x = self.conv2(x) x = F.relu(x) x = F.max_pool2d(x, 2) x = self.dropout1(x) x = torch.flatten(x, 1) x = self.fc1(x) x = F.relu(x) x = self.dropout2(x) x = self.fc2(x) output = F.log_softmax(x, dim=1) return output # + class MNISTTrainer(ETTrainer): def _init_nn_model(self): self.nn['model'] = MNISTNet() def iteration(self, batch): inputs = batch[0].to(self.device['gpu']).float() labels = batch[1].to(self.device['gpu']).long() out = self.nn['model'](inputs) loss = F.nll_loss(out, labels) _, pred = torch.max(out, 1) meter = self.new_meter() meter.averages.add(loss.item(), len(inputs)) meter.metrics['cfm'].add(pred, labels.float()) return {'loss': loss, 'meter': meter, 'predictions': pred} def init_experiment_cache(self): self.cache['log_header'] = 'Loss|Accuracy,F1,Precision,Recall' self.cache.update(monitor_metric='f1', metric_direction='maximize') def new_meter(self): return ETMeter( cfm=ConfusionMatrix(num_classes=10) ) # - train_dataset = datasets.MNIST('../data',
train=True, download=True, transform=transform) val_dataset = datasets.MNIST('../data', train=False, transform=transform) dataloader_args = {'train': {'dataset': train_dataset}, 'validation': {'dataset': val_dataset}} runner = EasyTorch(phase='train', batch_size=128, epochs=5, gpus=[0], dataloader_args=dataloader_args) runner.run(MNISTTrainer) Image('net_logs/experiment/experiment_train_log_0.png') Image('net_logs/experiment/experiment_train_log_1.png') # ### Saved logs import json import pprint as ppr log = json.loads(open('net_logs/experiment/experiment_log.json').read()) log
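The `transforms.Normalize((0.1307,), (0.3081,))` step above standardizes each pixel with the MNIST channel mean and standard deviation. A pure-Python sketch of the arithmetic it performs (no torch required; `normalize` is an illustrative helper, not part of torchvision):

```python
# Normalize computes (pixel - mean) / std channel-wise; MNIST has one channel.
MNIST_MEAN, MNIST_STD = 0.1307, 0.3081

def normalize(pixel, mean=MNIST_MEAN, std=MNIST_STD):
    """Standardize a single pixel value already scaled to [0, 1] by ToTensor."""
    return (pixel - mean) / std

print(normalize(MNIST_MEAN))  # the dataset mean maps to exactly 0.0
print(normalize(0.0))         # black pixels map to a negative value
```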
examples/MNIST_easytorch_CNN.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/scsanjay/ml_from_scratch/blob/main/03.%20Naive%20Bayes/MultinomialNaiveBayes.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="R1KLHqUdOrlx" # # Implementation of Multinomial Naive Bayes # + id="jo5E-QjUb5I4" import numpy as np # + id="UTRwsEa7NXKF" class MultinomialNaiveBayes: """ Parameters ---------- alpha : float, default=1.0 fit_prior : bool, default=True class_prior : array-like of shape (n_classes,), default=None Attributes ---------- class_count_ : ndarray of shape (n_classes,) class_log_prior_ : ndarray of shape (n_classes, ) classes_ : ndarray of shape (n_classes,) n_classes_ : int feature_count_ : ndarray of shape (n_classes, n_features) feature_log_prob_ : ndarray of shape (n_classes, n_features) n_features_ : int """ def __init__(self, alpha=1.0, fit_prior=True, class_prior=None): self.alpha = alpha self.fit_prior = fit_prior self.class_prior = class_prior def fit(self, X, y): """ Parameters ---------- X : array-like of shape (n_samples, n_features) y : array-like of shape (n_samples,) Returns ------- self : object """ # convert train data to numpy array if in other form X_train = np.array(X) y_train = np.array(y) n_samples = len(y_train) # get distinct class labels self.classes_ = np.sort(np.unique(y_train)) # get total number of class available self.n_classes_ = len(self.classes_) # get frequency for each class self.class_count_ = np.zeros(self.n_classes_) for idx, class_ in enumerate(self.classes_): self.class_count_[idx] = np.count_nonzero(y_train == class_) # get log priors self.class_log_prior_ = np.zeros(self.n_classes_) if self.class_prior is not None: 
self.class_log_prior_ = np.log(np.array(self.class_prior)) elif self.fit_prior == False: self.class_log_prior_ = np.full(self.n_classes_, -np.log(self.n_classes_)) else: self.class_log_prior_ = np.log(self.class_count_/n_samples) # number of features self.n_features_ = X_train.shape[1] # get feature counts and log likelihood probabilities # for each class and each feature self.feature_count_ = np.zeros((self.n_classes_, self.n_features_)) self.feature_log_prob_ = np.zeros((self.n_classes_, self.n_features_)) for i, class_ in enumerate(self.classes_): # get data according to class temp_data = X_train[np.where(y_train==class_)] self.feature_count_[i] = np.sum(temp_data, axis=0) self.feature_log_prob_[i] = np.log( (self.feature_count_[i]+self.alpha)/ (np.sum(self.feature_count_[i])+self.alpha*self.n_features_) ) return self def predict(self, X): """ Parameters ---------- X : array-like of shape (n_samples, n_features) Returns ------- C : ndarray of shape (n_samples,) """ # convert test data to numpy array if in other form X_test = np.array(X) y_pred = np.empty(len(X_test)) # predict class for each test data point for idx, x in enumerate(X_test): # SUM(xij*log_likelihood(fij))+log_prior(j) # where j is class and fij is probability of feature i given class j y_pred[idx] = self.classes_[np.argmax(np.dot(x, self.feature_log_prob_.T) + self.class_log_prior_)] return y_pred # + [markdown] id="tUChhFKVPpO3" # # Compare the implementation with sklearn.naive_bayes.MultinomialNB # + id="LBNYFvH2AA1A" from sklearn.naive_bayes import MultinomialNB # + id="1XIfC_n9Ca4M" # Let's create some data X_train = np.array([ [2,1,3,1,0], [1,3,2,0,1], [0,0,1,2,3], [1,0,0,3,1], [1,0,0,2,2] ]) y_train = np.array([1, 1, 0, 0, 0]) X_test = np.array([ [3,1,2,1,0], [0,1,0,1,3] ]) # + [markdown] id="LSCJlFm9QhwD" # my_clf is an object of the implemented Multinomial Naive Bayes # + colab={"base_uri": "https://localhost:8080/"} id="n6EK8b-bC8V5" outputId="8421b7f4-5248-400a-dac4-eaf6260e5d75" my_clf =
MultinomialNaiveBayes() my_clf.fit(X_train, y_train) # + colab={"base_uri": "https://localhost:8080/"} id="S3lfUHdqRRyK" outputId="42477bd3-e4e8-41fc-b66c-ef5bace34a85" help(my_clf) # + [markdown] id="CE75GNLqQ_KS" # clf is an object of sklearn's implementation # + colab={"base_uri": "https://localhost:8080/"} id="dVLINBzoC_AF" outputId="2e010af1-535d-437f-e6e4-b2b229f1ee65" clf = MultinomialNB() clf.fit(X_train, y_train) # + [markdown] id="MxAf7wfbSNHv" # ### Let's compare the attributes # + colab={"base_uri": "https://localhost:8080/"} id="X-sSB6gaRy7L" outputId="965119a5-6891-40e6-cc70-699387235d46" print(my_clf.class_log_prior_) print(clf.class_log_prior_) # + colab={"base_uri": "https://localhost:8080/"} id="aRGmp5JjSAXd" outputId="d02aa498-f5dc-40a6-b0cc-7fd27d3d24f8" print(my_clf.feature_count_) print(clf.feature_count_) # + colab={"base_uri": "https://localhost:8080/"} id="4lHsLun5OgVP" outputId="e7fb07f9-4ef6-4b44-ed01-51fa01817df2" print(my_clf.feature_log_prob_) print(clf.feature_log_prob_) # + [markdown] id="<KEY>" # ### Let's compare the predictions # + colab={"base_uri": "https://localhost:8080/"} id="kOMhJW2JOVaz" outputId="563c0ce4-0269-4a9d-a3a5-9858aab6df31" my_clf.predict(X_test) # + colab={"base_uri": "https://localhost:8080/"} id="4uYookUxDLEi" outputId="fd7d4c92-49fd-4847-d7e4-5fc4ca30551b" clf.predict(X_test) # + [markdown] id="GTZFhkiTS0DB" # ### Let's try setting the prior # + colab={"base_uri": "https://localhost:8080/"} id="FSIEouYKTMZF" outputId="18ad02f8-2184-465a-cb2b-090ca46441e6" my_clf = MultinomialNaiveBayes(class_prior=[1000,1]) my_clf.fit(X_train, y_train) my_clf.predict(X_test) # + colab={"base_uri": "https://localhost:8080/"} id="c0pXD0RnTUl7" outputId="7ff9f7e1-723f-4e7e-f195-bc9ab62c0760" clf = MultinomialNB(class_prior=[1000,1]) clf.fit(X_train, y_train) clf.predict(X_test) # + [markdown] id="DccQA3mpU3uN" # ### Let's try with alpha=10 # + colab={"base_uri": "https://localhost:8080/"} id="jt32ywvOVCW0"
outputId="8c55d7d5-3c39-4680-9572-bf30631c23df" my_clf = MultinomialNaiveBayes(alpha=10) my_clf.fit(X_train, y_train) print(my_clf.feature_log_prob_) # + colab={"base_uri": "https://localhost:8080/"} id="fujkURDoVYsB" outputId="f4374395-3650-4861-e96b-accc0aee77e9" clf = MultinomialNB(alpha=10) clf.fit(X_train, y_train) print(clf.feature_log_prob_) # + [markdown] id="233EgXmCT1NY" # ## Everything seems to be working fine
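The core of `fit` above is the Laplace-smoothed log-likelihood `log((count_i + alpha) / (total + alpha * n_features))`. A small standalone check of that formula with hypothetical counts (`smoothed_log_prob` is an illustrative helper, not part of the class):

```python
import math

def smoothed_log_prob(feature_counts, alpha=1.0):
    """Laplace-smoothed log-likelihoods for one class, as in feature_log_prob_."""
    total = sum(feature_counts)
    n_features = len(feature_counts)
    return [math.log((c + alpha) / (total + alpha * n_features))
            for c in feature_counts]

probs = smoothed_log_prob([2, 1, 3, 1, 0], alpha=1.0)
# thanks to the alpha terms, the smoothed probabilities still sum to 1
print(sum(math.exp(p) for p in probs))  # 1.0 (up to float rounding)
```

Note that even the zero-count feature gets a finite log-probability, which is exactly why the smoothing is there.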
03. Naive Bayes/MultinomialNaiveBayes.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="ivfUwthfWPbs" # # Introduction # # With the growth of data-centric Machine Learning, Active Learning has grown in popularity amongst businesses and researchers. Active Learning seeks to progressively train ML models so that the resultant model requires a smaller amount of training data to achieve competitive scores. # # The structure of an Active Learning pipeline involves a classifier and an oracle. The oracle is an annotator that cleans, selects, labels the data, and feeds it to the model when required. The oracle is a trained individual or a group of individuals that ensure consistency in labeling of new data. # # The process starts with annotating a small subset of the full dataset and training an initial model. The best model checkpoint is saved and then tested on a balanced test set. The test set must be carefully sampled because the full training process will be dependent on it. Once we have the initial evaluation scores, the oracle is tasked with labeling more samples; the number of data points to be sampled is usually determined by the business requirements. After that, the newly sampled data is added to the training set, and the training procedure repeats. This cycle continues until either an acceptable score is reached or some other business metric is met. # # This tutorial provides a basic demonstration of how Active Learning works by demonstrating a ratio-based (least confidence) sampling strategy that results in lower overall false positive and negative rates when compared to a model trained on the entire dataset. This sampling falls under the domain of uncertainty sampling, in which new datasets are sampled based on the uncertainty that the model outputs for the corresponding label.
In our example, we compare our model's false positive and false negative rates and annotate the new data based on their ratio. # # Some other sampling techniques include: # # Committee sampling: Using multiple models to vote for the best data points to be sampled # Entropy reduction: Sampling according to an entropy threshold, selecting more of the samples that produce the highest entropy score. # Minimum margin based sampling: Selects data points closest to the decision boundary # + id="xTF6wyPaWGay" # import the libraries import tensorflow_datasets as tfds import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import matplotlib.pyplot as plt import re import string tfds.disable_progress_bar() # + colab={"base_uri": "https://localhost:8080/"} id="ujm7IIcHXFMQ" outputId="97bbb2fb-2185-4d6b-ea14-f066ce9ca5c0" # Load and preprocess the data dataset = tfds.load( "imdb_reviews", split="train + test", as_supervised=True, batch_size=-1, shuffle_files=False, ) reviews, labels = tfds.as_numpy(dataset) print("Total reviews :", reviews.shape[0]) # + id="HcY5UGXTXw8H" # Active Learning starts by labeling a small set of data.
val_split = 2500 test_split = 2500 train_split = 7500 # Separate the positive and negative samples x_positive, y_positive = reviews[labels == 1], labels[labels == 1] x_negative, y_negative = reviews[labels == 0], labels[labels == 0] # + id="jH7-UH7sYJxD" # Create the train, val, and test sets x_val, y_val = ( tf.concat((x_positive[:val_split], x_negative[:val_split]), 0), tf.concat((y_positive[:val_split], y_negative[:val_split]), 0), ) x_test, y_test = ( tf.concat( ( x_positive[val_split : val_split + test_split], x_negative[val_split : val_split + test_split], ), 0, ), tf.concat( ( y_positive[val_split : val_split + test_split], y_negative[val_split : val_split + test_split], ), 0, ), ) x_train, y_train = ( tf.concat( ( x_positive[val_split + test_split : val_split + test_split + train_split], x_negative[val_split + test_split : val_split + test_split + train_split], ), 0, ), tf.concat( ( y_positive[val_split + test_split : val_split + test_split + train_split], y_negative[val_split + test_split : val_split + test_split + train_split], ), 0, ), ) # + id="aJvqDUKuY3h8" # The remaining pool of samples is stored separately.
These are only labeled as and when required x_pool_positive, y_pool_positive = ( x_positive[val_split + test_split + train_split :], y_positive[val_split + test_split + train_split :], ) x_pool_negative, y_pool_negative = ( x_negative[val_split + test_split + train_split :], y_negative[val_split + test_split + train_split :], ) # + colab={"base_uri": "https://localhost:8080/"} id="AiChOQLwZCde" outputId="697f178e-d153-42ae-b294-7c2673d7c77d" # Creating TF Datasets for faster prefetching and parallelization train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)) test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)) pool_negatives = tf.data.Dataset.from_tensor_slices( (x_pool_negative, y_pool_negative) ) pool_positives = tf.data.Dataset.from_tensor_slices( (x_pool_positive, y_pool_positive) ) print(f"Initial training set size: {len(train_dataset)}") print(f"Validation set size: {len(val_dataset)}") print(f"Testing set size: {len(test_dataset)}") print(f"Unlabeled negative pool: {len(pool_negatives)}") print(f"Unlabeled positive pool: {len(pool_positives)}") # + [markdown] id="EKZnq-26ZT8h" # # Processing and cleaning step # + id="EawJMhNOZXHy" def custom_standardization(input_data): lowercase = tf.strings.lower(input_data) stripped_html = tf.strings.regex_replace(lowercase, "<br />", " ") return tf.strings.regex_replace( stripped_html, f"[{re.escape(string.punctuation)}]", "" ) vectorizer = layers.TextVectorization( 3000, standardize=custom_standardization, output_sequence_length=150 ) # Adapting the dataset vectorizer.adapt( train_dataset.map(lambda x, y: x, num_parallel_calls=tf.data.AUTOTUNE).batch(256) ) # + id="5LaUVh0MZbIA" # Apply the vectorization def vectorize_text(text, label): text = vectorizer(text) return text, label train_dataset = train_dataset.map( vectorize_text, num_parallel_calls=tf.data.AUTOTUNE ).prefetch(tf.data.AUTOTUNE) pool_negatives =
pool_negatives.map(vectorize_text, num_parallel_calls=tf.data.AUTOTUNE) pool_positives = pool_positives.map(vectorize_text, num_parallel_calls=tf.data.AUTOTUNE) val_dataset = val_dataset.batch(256).map( vectorize_text, num_parallel_calls=tf.data.AUTOTUNE ) test_dataset = test_dataset.batch(256).map( vectorize_text, num_parallel_calls=tf.data.AUTOTUNE ) # + [markdown] id="iE20jQ3eZn3R" # # Helper functions # + id="tJGHK_35Zrpl" # Helper function for merging new history objects with older ones def append_history(losses, val_losses, accuracy, val_accuracy, history): losses = losses + history.history["loss"] val_losses = val_losses + history.history["val_loss"] accuracy = accuracy + history.history["binary_accuracy"] val_accuracy = val_accuracy + history.history["val_binary_accuracy"] return losses, val_losses, accuracy, val_accuracy # Plotter function def plot_history(losses, val_losses, accuracies, val_accuracies): plt.plot(losses) plt.plot(val_losses) plt.legend(["train_loss", "val_loss"]) plt.xlabel("Epochs") plt.ylabel("Loss") plt.show() plt.plot(accuracies) plt.plot(val_accuracies) plt.legend(["train_accuracy", "val_accuracy"]) plt.xlabel("Epochs") plt.ylabel("Accuracy") plt.show() # + [markdown] id="O-SNjUvGZ23P" # # Creating the LSTM model # + id="WD0BI9dBZ7OE" def create_model(): model = keras.models.Sequential( [ layers.Input(shape=(150,)), layers.Embedding(input_dim=3000, output_dim=128), layers.Bidirectional(layers.LSTM(32, return_sequences=True)), layers.GlobalMaxPool1D(), layers.Dense(20, activation='relu'), layers.Dropout(0.5), layers.Dense(1, activation='sigmoid'), ] ) model.summary() return model # + [markdown] id="F8yyxNika-7a" # # Training the model.
40,000 labeled samples are used # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="Q-jtq26TbC4S" outputId="a064261f-c1db-4970-b737-2d49b567d88d" def train_full_model(full_train_dataset, val_dataset, test_dataset): model = create_model() model.compile( loss="binary_crossentropy", optimizer="rmsprop", metrics=[ keras.metrics.BinaryAccuracy(), keras.metrics.FalseNegatives(), keras.metrics.FalsePositives(), ], ) # We will save the best model at every epoch and load the best one for evaluation on the test set history = model.fit( full_train_dataset.batch(256), epochs=25, validation_data=val_dataset, callbacks=[ keras.callbacks.EarlyStopping(patience=4, verbose=1), keras.callbacks.ModelCheckpoint( "FullModelCheckpoint.h5", verbose=1, save_best_only=True ), ], ) # Plot history plot_history( history.history["loss"], history.history["val_loss"], history.history["binary_accuracy"], history.history["val_binary_accuracy"], ) # Loading the best checkpoint model = keras.models.load_model("FullModelCheckpoint.h5") print("-" * 100) print( "Test set evaluation: ", model.evaluate(test_dataset, verbose=0, return_dict=True), ) print("-" * 100) return model # Sampling the full train dataset to train on full_train_dataset = ( train_dataset.concatenate(pool_positives) .concatenate(pool_negatives) .cache() .shuffle(20000) ) # Training the full model full_dataset_model = train_full_model(full_train_dataset, val_dataset, test_dataset)
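The ratio-based sampling strategy described in the introduction can be sketched independently of the Keras code: given the model's false-positive and false-negative counts, the labeling budget is split toward the class the model gets wrong more often. The split rule below is an illustrative assumption for demonstration, not the exact rule used by any particular library:

```python
def sampling_split(false_positives, false_negatives, total_to_sample):
    """Split a labeling budget between the negative and positive pools
    proportionally to the false-positive / false-negative ratio."""
    total_errors = false_positives + false_negatives
    if total_errors == 0:  # model is perfect this round: sample evenly
        return total_to_sample // 2, total_to_sample // 2
    # more false positives -> sample more negatives, and vice versa
    n_negatives = round(total_to_sample * false_positives / total_errors)
    return n_negatives, total_to_sample - n_negatives

neg, pos = sampling_split(false_positives=300, false_negatives=100, total_to_sample=2000)
print(neg, pos)  # 1500 500
```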
05_DeepLearning/04_review_classification_by_Active_Learning.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Predict Training Time and SageMaker Training Instance RAM and CPU resource consumption for Synthetic Data # # This notebook walks through how you can use the `canary_training` library to generate projections of training time, RAM, and CPU usage (collectively referred to here as "resource consumption"). # # To briefly summarize, the canary_training library works by creating many small training jobs on small percentages of the data (generally 1, 2, and 3 percent). Based on the statistics gathered (using the SageMaker Profiler) it then extrapolates the resource consumption for the complete training job. # # **Note** If you are using a SageMaker Notebook Instance, please use the `conda_python3` kernel. If you are using SageMaker Studio, please use the `Python 3 (Data Science)` kernel. import sagemaker import pandas import logging logger = logging.getLogger('log') #set up logging if not done already if not logger.handlers: logger.setLevel(logging.INFO) # This notebook relies on the `canary_training` package, which will be used for generating extrapolations. #In SageMaker Studio #install the canary training library, which is in the directory above # !pip install ../canary_training/ #in a SageMaker Notebook Instance # #!pip install /home/ec2-user/SageMaker/canary_training/Canary_Training/canary_training #make sure this points to the canary_training directory from canary_training import * # ## Set up the Canary Job estimator and parameters # Before using canary_training to generate predictions of resource consumption, we need to define a few things. # # 1. A standard SageMaker estimator which defines our model. # 2. The instance(s) that we want to test. # 3. How many data points we want to base predictions on.
# # In this example, we will try to predict resource consumption (i.e. CPU, RAM, and training time) when training on an `ml.m5.2xlarge`. # # This uses a synthetic dataset with 10 GB of data. The dataset has 20 columns and is partitioned into 100 files of 100 MB each. # # In this notebook, we use the SageMaker XGBoost built-in algorithm to generate an ML model. # # **Note**: The dataset used for the ML model is located here: `s3://aws-hcls-ml/public_assets_support_materials/canary_training_data/10_gb_20_cols/`. # First we will set the canary training configuration and options. We will be training on 1%, 2%, and 3% of the data in triplicate. # + import boto3 import sagemaker from sagemaker import image_uris from sagemaker.session import Session from sagemaker.inputs import TrainingInput from time import gmtime,strftime import random role = sagemaker.get_execution_role() region = boto3.Session().region_name sagemaker_session = sagemaker.Session() output_bucket = sagemaker_session.default_bucket() instance_types=["ml.m5.2xlarge"] #instance_types=["ml.m5.4xlarge","ml.m4.16xlarge","ml.p3.2xlarge"] #you can test multiple instances if you wish for canary training. #set canary training parameters and inputs output_s3_location=f"s3://{output_bucket}/synthetic_output_data" #create a random local temporary directory which will be copied to s3 #If this exists already, you can just point to it random_number=random.randint(10000000, 99999999) the_temp_dir=f"canary-training-temp-dir-{str(random_number)}" training_percentages=[.01,.01,.01,.02,.02,.02,.03,.03,.03] #train jobs in triplicate in order to increase statistical confidence # - print(output_bucket) # Now we set standard SageMaker Estimator parameters. Because this is just a test, we use the same data for both the `training` and `validation` channels. # + #location of input data for training; make sure to exclude the final "/", e.g.
"taxi_yellow_trip_data_processed" and not "taxi_yellow_trip_data_processed/" data_location='s3://aws-hcls-ml/public_assets_support_materials/canary_training_data/10_gb_20_cols' hyperparameters = { "max_depth":"5", "eta":"0.2", "gamma":"4", "min_child_weight":"6", "subsample":"0.7", "objective":"reg:squarederror", "num_round":"50"} # set an output path where the trained model will be saved job_name = f"canary-train-experiment-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}-{str(random.random())}".replace(".","") xgboost_container = sagemaker.image_uris.retrieve("xgboost", region, "1.2-1") instance_type="None" # construct a SageMaker estimator that calls the xgboost-container estimator = sagemaker.estimator.Estimator(image_uri=xgboost_container, hyperparameters=hyperparameters, role=role, instance_count=1, instance_type=instance_type, volume_size=300, #large dataset needs lots of disk space output_path=f'{output_s3_location}/{the_temp_dir}') # - # ## Set up canary training jobs # # We will set up the canary training by: # 1. Creating samples of the underlying data # 2. Creating manifest files that will be used for these smaller training jobs # 3. Copying the underlying manifest files to S3. # 4. Building estimators for SageMaker that will be used for these smaller training jobs. # + ct=CanaryTraining(data_location=data_location,output_s3_location=output_s3_location, the_temp_dir=the_temp_dir,instance_types=instance_types,estimator=estimator,training_percentages=training_percentages) ct.prepare_canary_training_data() # - # ## Kick off canary training jobs # Now that we have the list of estimators, let's kick off the canary training jobs. # **Note**: By default, the canary_training library kicks off all of the jobs in parallel. For this example, this means that there will be 9 `ml.m5.2xlarge` jobs running. If your account does not support this many jobs of that instance type (and you cannot request an increase), you can run each job serially.
# # If you run the jobs in parallel, the total amount of time taken is about 15 minutes. If you run them one after another, it takes about 1.5 hours. #kick off in parallel ct.kick_off_canary_training_jobs(training_channels_list=['train','validation'],wait=False) #set wait equal to True if you cannot/do not want to run all jobs in parallel # ## Wait until the jobs are finished before continuing in the next section!!! # Before continuing, please make sure that all the jobs kicked off for canary training are finished. You can see these jobs in the `SageMaker Training` console. # ## Gather Statistics and Perform Extrapolations # # In the next section we will gather statistics around the training jobs, and use them to **extrapolate** resource consumption for the entire training job. We will do three things: # # 1. Extract relevant information from the training job and the SageMaker Profiler around CPU, RAM, and Training Time. # 2. Report the extrapolated CPU usage, RAM, Training Time, and cost. # 3. Report the raw CPU usage, RAM, and Training Time for the canary training jobs themselves. This will allow the user to make an informed decision based on this detailed information. # # (note that if a statistic is not relevant, it will have a value of -1 or -1.1) #submitted_jobs_information predicted_resource_usage_df,raw_actual_resource_usage_df=ct.get_predicted_resource_consumption() predicted_resource_usage_df.head() # Now report the raw info from the canary jobs. # # # **Note** Due to the stochastic nature of the canary jobs, the forecasts that you get may change from run to run. # ## Inspect Canary Training Job Results # You can inspect the underlying data for the canary training results. This is the data that was used to create the forecasts. While the forecasts may be useful, we strongly encourage data scientists to inspect the raw results as well. Note that CPUUtilization, MemoryUsedPercent, GPUUtilization, and GPUMemoryUtilization are all p99 values.
raw_actual_resource_usage_df.head() # ## (Optional) Let's now kick off the full training job # If you wish, feel free to kick off the entire training job to check the results. # # **NOTE** This training job takes around 33 minutes (2000 seconds) to run. # + #estimator.instance_type="ml.m5.2xlarge" #content_type = "csv" #train_input = data_location #validation_input = data_location #train_input=sagemaker.inputs.TrainingInput(train_input,content_type='csv') #validation_input=sagemaker.inputs.TrainingInput(validation_input,content_type='csv') #job_name=f"full-training--job-{strftime('%Y-%m-%d-%H-%M-%S', gmtime())}-{str(random.random())}".replace(".","") #estimator.fit(inputs={'train': train_input, 'validation': validation_input},job_name=job_name,wait=False,logs="All")
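The extrapolation idea behind `get_predicted_resource_consumption` can be illustrated with an ordinary least-squares line through (data fraction, measured metric) points, evaluated at 100% of the data. The numbers below are made up for illustration; this is only a sketch of the approach, not the library's actual model:

```python
def linear_extrapolate(fractions, measurements, target=1.0):
    """Fit y = a*x + b by ordinary least squares and evaluate at `target`."""
    n = len(fractions)
    mean_x = sum(fractions) / n
    mean_y = sum(measurements) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(fractions, measurements))
    var = sum((x - mean_x) ** 2 for x in fractions)
    a = cov / var
    b = mean_y - a * mean_x
    return a * target + b

# made-up canary runs: training time in seconds at 1%, 2%, 3% of the data,
# each fraction run in triplicate as in the notebook's configuration
fractions = [0.01, 0.01, 0.01, 0.02, 0.02, 0.02, 0.03, 0.03, 0.03]
seconds = [80, 82, 78, 100, 101, 99, 120, 119, 121]
print(linear_extrapolate(fractions, seconds))  # ~2060 s for the full dataset
```

A linear fit is only one plausible model; training time is not always linear in dataset size, which is one reason to inspect the raw canary results as suggested above.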
Canary_Training/quick_start_example_notebooks/0_quick_start_canary_training_example_synthetic_data.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="Yyarj65ix5n3" colab_type="code" outputId="2b8c7327-c5e8-4901-d467-f391ba58d876" executionInfo={"status": "ok", "timestamp": 1583450524679, "user_tz": 180, "elapsed": 4064, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgrwR1J-1sr_G_5mgX_S_hGV2TcQ8I2Vn0v8Ekq=s64", "userId": "00687033162702398655"}} colab={"base_uri": "https://localhost:8080/", "height": 165} import numpy as np print("numpy version:", np.__version__) import keras print("keras version:", keras.__version__) import tensorflow as tf print("tensorflow version:", tf.__version__) print("GPU support:", tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None)) # + [markdown] id="lVbWSckgx5n9" colab_type="text" # # Classifying newswires into multiple categories # + [markdown] id="1mk8JGiaTkKy" colab_type="text" # In the previous example we learned how to classify input vectors into two mutually exclusive categories using a densely connected architecture. In this example we will learn how to classify input vectors into multiple categories. # # The example uses a set of Reuters newswires with 46 mutually exclusive topics. Since each newswire can only be assigned to a single topic, this is a case of "single-label multiclass classification". If the input vectors could be assigned to multiple categories at once, we would have a "multilabel multiclass classification" problem. # + [markdown] id="9EaUYmJWx5n-" colab_type="text" # ## The Reuters dataset # + [markdown] id="UZ4MkVQ-TgW_" colab_type="text" # The _Reuters_ dataset is a collection of short Reuters newswires published in 1986.
There are 46 distinct topics, and each topic has at least 10 newswires in the training set. # # Below we download the dataset from the internet, with the required adjustments to the _np.load_ function already in place. The *num_words=10000* parameter restricts the corpus to the 10,000 most frequently used words. With that, the training and test sets will contain 8,982 and 2,246 records, respectively. # + id="IreB4907x5n-" colab_type="code" outputId="6f25f3f3-754d-4919-ab97-86e512b55fed" executionInfo={"status": "ok", "timestamp": 1583450628707, "user_tz": 180, "elapsed": 2458, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgrwR1J-1sr_G_5mgX_S_hGV2TcQ8I2Vn0v8Ekq=s64", "userId": "00687033162702398655"}} colab={"base_uri": "https://localhost:8080/", "height": 51} #old = np.load #np.load = lambda *a,**k: old(*a,**k,allow_pickle=True) from keras.datasets import reuters (train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000) #np.load = old #del(old) # + id="eP_qE87Qx5oC" colab_type="code" outputId="8f3e6156-fbc7-4839-b231-ea5a7c12538c" executionInfo={"status": "ok", "timestamp": 1583450803572, "user_tz": 180, "elapsed": 837, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgrwR1J-1sr_G_5mgX_S_hGV2TcQ8I2Vn0v8Ekq=s64", "userId": "00687033162702398655"}} colab={"base_uri": "https://localhost:8080/", "height": 34} len(train_data) # + id="4TXleUBEx5oG" colab_type="code" outputId="2302d8c8-d0e5-45b4-fb0e-88879f5ab222" executionInfo={"status": "ok", "timestamp": 1583450812616, "user_tz": 180, "elapsed": 1114, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgrwR1J-1sr_G_5mgX_S_hGV2TcQ8I2Vn0v8Ekq=s64", "userId": "00687033162702398655"}} colab={"base_uri": "https://localhost:8080/", "height": 34} len(test_data) # + [markdown] id="2mkBqXuax5oJ" colab_type="text" # Once the download has run, we can inspect the records and see that each word has been encoded as a number. For example, the 11th record of the training set is shown below: # + id="mDok1EmIx5oJ" colab_type="code" outputId="588a647e-b303-4520-b4dc-c02e8a36d4a2" executionInfo={"status": "ok", "timestamp": 1583450817207, "user_tz": 180, "elapsed": 1093, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgrwR1J-1sr_G_5mgX_S_hGV2TcQ8I2Vn0v8Ekq=s64", "userId": "00687033162702398655"}} colab={"base_uri": "https://localhost:8080/", "height": 68} np.array(train_data[10]) # + [markdown] id="QTlvU3JXx5oM" colab_type="text" # Using the word index dictionary we can decode the newswire. Note that the indices are shifted (offset) by 3 positions, since indices 0, 1, and 2 of each newswire are reserved for metadata, namely "padding", "start of sequence", and "unknown" # + id="EYCzr53Xx5oN" colab_type="code" outputId="fef59e1e-aadc-48f7-8638-ed14823406f3" executionInfo={"status": "ok", "timestamp": 1583450885187, "user_tz": 180, "elapsed": 1494, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgrwR1J-1sr_G_5mgX_S_hGV2TcQ8I2Vn0v8Ekq=s64", "userId": "00687033162702398655"}} colab={"base_uri": "https://localhost:8080/", "height": 51} word_index = reuters.get_word_index() reverse_word_index = dict([(value, key) for (key, value) in word_index.items()]) decoded_newswire = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]]) # + id="hhz6365Mx5oP" colab_type="code" outputId="6be7f89f-93b1-4d4a-d026-453889c71357" executionInfo={"status": "ok", "timestamp": 1583450894211, "user_tz": 180, "elapsed": 1038, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgrwR1J-1sr_G_5mgX_S_hGV2TcQ8I2Vn0v8Ekq=s64", "userId": "00687033162702398655"}} colab={"base_uri": "https://localhost:8080/", "height": 54} decoded_newswire # + [markdown]
id="W4l5u4Qfx5oS" colab_type="text" # O vetor de saida é composto do rótulo associado com cada notícia, a saber um número entre 0 e 45. # + id="OHrnBwasx5oT" colab_type="code" outputId="95f0c6e0-4310-4e88-f8d5-5a42d72770d3" executionInfo={"status": "ok", "timestamp": 1583450947423, "user_tz": 180, "elapsed": 1481, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgrwR1J-1sr_G_5mgX_S_hGV2TcQ8I2Vn0v8Ekq=s64", "userId": "00687033162702398655"}} colab={"base_uri": "https://localhost:8080/", "height": 34} train_labels[10] # + [markdown] id="yVoNG8hKx5oW" colab_type="text" # ## Preparação dos dados # + [markdown] id="2jERr6bWTcbb" colab_type="text" # A célula abaixo vetoriza os dados de entrada preparando os mesmos para uso pela rede neural. O processo de vetorização consiste em criar uma matriz com n linhas (8.982 no conjunto de treino) e 10.000 colunas uma para cada palavra. Caso a palavra esteja presente na notícia a coluna correspondente terá o valor de 1, caso contrário receberá o valor de 0. Observe que neste tipo de vetorização estaremos interessados apenas se a palavra está ou não presente na notícia. A ordem das palavras ou a quantidade de vezes que a palavra aparece na notícia serão perdidas nesta vetorização. # + id="naj6JIMox5oX" colab_type="code" colab={} import numpy as np def vectorize_sequences(sequences, dimension=10000): results = np.zeros((len(sequences), dimension)) for i, sequence in enumerate(sequences): results[i, sequence] = 1. 
return results x_train = vectorize_sequences(train_data) x_test = vectorize_sequences(test_data) # + id="LJIcnTniveCl" colab_type="code" outputId="99b7fe47-b839-4682-b0d0-4f8e4705ab6d" executionInfo={"status": "ok", "timestamp": 1583451014903, "user_tz": 180, "elapsed": 1011, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgrwR1J-1sr_G_5mgX_S_hGV2TcQ8I2Vn0v8Ekq=s64", "userId": "00687033162702398655"}} colab={"base_uri": "https://localhost:8080/", "height": 136} x_train[:10] # + id="zvG4X-s1wXXQ" colab_type="code" outputId="c18a4913-8db4-4f04-f656-95b4338a8fd3" executionInfo={"status": "ok", "timestamp": 1583451018005, "user_tz": 180, "elapsed": 1097, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgrwR1J-1sr_G_5mgX_S_hGV2TcQ8I2Vn0v8Ekq=s64", "userId": "00687033162702398655"}} colab={"base_uri": "https://localhost:8080/", "height": 34} x_train.shape # + id="VPizdDYZwaXk" colab_type="code" outputId="b33da0cd-feb6-4cec-94c9-e8fbf123f36e" executionInfo={"status": "ok", "timestamp": 1583451042086, "user_tz": 180, "elapsed": 1009, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgrwR1J-1sr_G_5mgX_S_hGV2TcQ8I2Vn0v8Ekq=s64", "userId": "00687033162702398655"}} colab={"base_uri": "https://localhost:8080/", "height": 34} x_train.min(), x_train.max() # + [markdown] id="FwceUsSQx5oa" colab_type="text" # Um processo similar será utilizado para codificar os rótulos das notícias. Neste caso teremos no conjunto de treino uma matriz de saida com 8.982 linhas e 46 colunas, uma para cada tópico. Cada linha desta matriz terá apenas uma posição marcada com "1" e as demais com "0" indicando que cada notícia só poderá ser classificada em um único tópico. # # Este tipo de vetorização em inglès se chama *one hot encoding*. 
# + id="6iE0Mgs-x5oa" colab_type="code" colab={}
def to_one_hot(labels, dimension=46):
    results = np.zeros((len(labels), dimension))
    for i, label in enumerate(labels):
        results[i, label] = 1.
    return results

one_hot_train_labels = to_one_hot(train_labels)
one_hot_test_labels = to_one_hot(test_labels)

# + id="joYJjoVAxH44" colab_type="code" colab={}
one_hot_train_labels[:2]

# + [markdown] id="OwG_wxyIx5od" colab_type="text"
# The same encoding can also be produced with the ```to_categorical``` function available in the Keras library (see below):

# + id="YfkpUgKqx5oe" colab_type="code" colab={}
from keras.utils.np_utils import to_categorical

one_hot_train_labels = to_categorical(train_labels)
one_hot_test_labels = to_categorical(test_labels)

# + [markdown] id="2aPjbgIqx5oh" colab_type="text"
# ## Building the neural network

# + [markdown] id="14gvEQYbTYj9" colab_type="text"
# Since the output has 46 classes, the intermediate layers of the network use 64 neurons each. This avoids losing information about the classes during the *feed forward* pass through the network.
#
# Another point to consider: since one of 46 possible classes must be chosen, the output layer has 46 neurons with the *softmax* activation function.
#
# As a reminder, softmax turns the output layer into a probability distribution: for each of the possible output classes it computes the probability that the input belongs to that class, and the predicted class is the one with the highest probability.
# + id="YC66mZt6x5oi" colab_type="code" colab={}
from keras import models
from keras import layers

model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))

# + [markdown] id="t0C_PNr3x5ol" colab_type="text"
# Next the model is compiled by choosing the optimization method (*rmsprop* here), the loss function (*categorical crossentropy*) and the metric to track (*accuracy*). The *categorical crossentropy* loss is typical of classification problems (for value-prediction problems, mean squared error is used instead).
#
# For reference, categorical crossentropy is computed as $E = -\sum_i y_i \log(p_i)$, where $y_i$ is the label value for category $i$ and $p_i$ is the predicted probability that the sample belongs to category $i$.
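As a sanity check on the formula, categorical crossentropy can be evaluated directly with NumPy. This is a standalone sketch, independent of Keras, with made-up label and prediction vectors:

```python
import numpy as np

def categorical_crossentropy(y_true, p_pred):
    """E = -sum_i y_i * log(p_i) for a one-hot label and a probability vector."""
    eps = 1e-12  # avoid log(0)
    return -np.sum(y_true * np.log(p_pred + eps))

# One-hot label for class 2 out of 4, and a prediction putting 0.7 on class 2
y = np.array([0., 0., 1., 0.])
p = np.array([0.1, 0.1, 0.7, 0.1])

loss = categorical_crossentropy(y, p)  # equals -log(0.7), about 0.357
```

With a one-hot label only the true class contributes, so the loss reduces to the negative log-probability assigned to the correct topic.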
# + id="ol1O2viIx5om" colab_type="code" colab={}
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# + [markdown] id="y0nLtfy4x5op" colab_type="text"
# ## Training the neural network

# + [markdown] id="TNUPChGfTTzG" colab_type="text"
# We set aside 1,000 records from the training set to use as a validation set.

# + id="NR3LBHH9x5op" colab_type="code" colab={}
x_val = x_train[:1000]
partial_x_train = x_train[1000:]

y_val = one_hot_train_labels[:1000]
partial_y_train = one_hot_train_labels[1000:]

# + [markdown] id="AC5gTfHCx5ot" colab_type="text"
# Next we train the network for 20 epochs.

# + id="_slD0CPLx5ou" colab_type="code" colab={}
history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=20,
                    batch_size=512,
                    validation_data=(x_val, y_val))

# + [markdown] id="yZIXlVuTx5oz" colab_type="text"
# After training, we plot the evolution of the loss and accuracy curves on both datasets (training and validation).
# + id="LibsJhycx5o0" colab_type="code" colab={}
import matplotlib.pyplot as plt
# %matplotlib inline

loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()

# + id="LnGPbBkXx5o3" colab_type="code" colab={}
plt.clf()

# Note: older Keras stores these under 'acc'/'val_acc'; newer versions use
# 'accuracy'/'val_accuracy'.
acc = history.history['acc']
val_acc = history.history['val_acc']

plt.plot(epochs, acc, 'bo', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()

# + [markdown] id="sLxN5-ISx5o6" colab_type="text"
# The plots point to overfitting after about 8 training epochs. We therefore retrain the network for a total of 8 epochs and then evaluate the results on the test set.
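The stopping epoch can also be read programmatically from the history dictionary rather than off the plot. A sketch with made-up numbers; in the notebook the list would come from `history.history['val_loss']`:

```python
import numpy as np

# Hypothetical validation losses for 10 epochs (stand-in for
# history.history['val_loss'])
val_loss = [2.1, 1.5, 1.2, 1.0, 0.95, 0.93, 0.96, 1.05, 1.2, 1.4]

# Epochs are 1-indexed in the plots, hence the +1
best_epoch = int(np.argmin(val_loss)) + 1
print(best_epoch)  # → 6
```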
# + id="GMWrEA77x5o6" colab_type="code" colab={}
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(partial_x_train,
          partial_y_train,
          epochs=8,
          batch_size=512,
          validation_data=(x_val, y_val))

results = model.evaluate(x_test, one_hot_test_labels)

# + id="WeSgZYkMx5o9" colab_type="code" colab={}
results

# + [markdown] id="1_qgTETJx5pA" colab_type="text"
# We obtained roughly 78% accuracy, meaning we misclassify about 22% of the cases. For comparison, a completely random classifier would get only about 19% right on this 46-class dataset, as the next cell estimates by shuffling the test labels. The results are therefore promising when compared with random guessing.
# + id="zDGjLPSex5pA" colab_type="code" colab={}
import copy

test_labels_copy = copy.copy(test_labels)
np.random.shuffle(test_labels_copy)
float(np.sum(np.array(test_labels) == np.array(test_labels_copy))) / len(test_labels)

# + [markdown] id="Kw5EHhLIx5pE" colab_type="text"
# ### Generating predictions on new data

# + [markdown] id="QoC96bOoTJSw" colab_type="text"
# For this we use the ```.predict``` method of Keras ```model``` objects.

# + id="AO9bkm6ax5pF" colab_type="code" colab={}
predictions = model.predict(x_test)

# + [markdown] id="JGMufiuAx5pH" colab_type="text"
# Each entry in ```predictions``` is a vector with 46 positions, one per topic, holding the probability that the news item belongs to that topic. Each such vector therefore sums to 1, and its largest entry is the predicted class.
# + id="L1Rh__zgx5pI" colab_type="code" colab={}
predictions[0].shape, np.sum(predictions[0]), np.argmax(predictions[0])

# + [markdown] id="_ccN5uk7x5pP" colab_type="text"
# ## A different way to format the input vectors and the loss function

# + [markdown] id="LK0_Lfc0TGDI" colab_type="text"
# Another option is to use tensors of integer labels instead of the *one hot encoder* formatting. In that case the loss function must be *sparse_categorical_crossentropy*.

# + id="t4yg8Ar7x5pQ" colab_type="code" colab={}
y_train = np.array(train_labels)
y_test = np.array(test_labels)
y_train

# + id="wdhlj3vqx5pS" colab_type="code" colab={}
model.compile(optimizer='rmsprop',
              loss='sparse_categorical_crossentropy',
              metrics=['acc'])

# + [markdown] id="zGdItEgQx5pV" colab_type="text"
# ## The importance of layers with enough neurons

# + [markdown] id="HFpL-Gw6TCVt" colab_type="text"
# The next example shows the effect of creating an "informational" bottleneck in one of the intermediate layers, which here has only 4 neurons. In this case the network's accuracy tops out at roughly 70%.
# + id="Qe37U2sFx5pW" colab_type="code" colab={}
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(4, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(partial_x_train,
          partial_y_train,
          epochs=20,
          batch_size=128,
          validation_data=(x_val, y_val))

# + [markdown] id="4nOYL8qwx5pZ" colab_type="text"
# ## Conclusions

# + [markdown] id="BlAiiY7KS7_8" colab_type="text"
# * When classifying data into N categories, the final layer must be of type ```Dense``` with N neurons.
# * In a multi-class problem with a single label per record, the network should end with the `softmax` activation function.
# * In classification problems the most common loss function is *categorical crossentropy*.
# * The labels can be formatted in two ways:
#     * "one hot encoding", with the `categorical_crossentropy` loss
#     * as integers, with the `sparse_categorical_crossentropy` loss.
# * When there are many categories, informational bottlenecks should be avoided by keeping every intermediate layer at least as wide as the number of categories.
# + [markdown] id="nE3lyrxex5pY" colab_type="text"
# ## Exercises

# + [markdown] id="hJsVBukJSzyO" colab_type="text"
# * Starting from the neural network developed in this exercise, determine graphically the number of training epochs needed for the validation error to match the training error, the error value itself, and the accuracy obtained on the test set, for the following configurations:
#     1. 1 intermediate layer with 32 neurons
#     2. 2 intermediate layers with 32 neurons
#     3. 3 intermediate layers with 32 neurons
#     4. 1 intermediate layer with 128 neurons
#     5. 2 intermediate layers with 128 neurons
#     6. 3 intermediate layers with 128 neurons
#
# * Explain why the `softmax` activation function was used in the output layer.
#
# * For a specific input sample presented to the network, what is the meaning of the values in the 46 output neurons? What should their sum be? Why? How is the predicted class obtained in this case?
#
# * If the output layer used the sigmoid function instead, what would the output values represent? How could the predicted class be obtained in that case?
#
# * Define the concept of an "information bottleneck" in a neural network. What are its possible effects?
#
# * Why is `categorical_crossentropy` the most appropriate loss function in this example, rather than root mean square error?
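For the exercise about the values in the output neurons, the behavior of softmax can be checked with a small NumPy sketch (the logits below are made up, not taken from the trained model):

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability; the result is unchanged
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])  # hypothetical pre-activation outputs
probs = softmax(logits)

print(round(float(probs.sum()), 6))   # → 1.0 (a probability distribution)
print(int(np.argmax(probs)))          # → 0, the predicted class
```

The outputs sum to 1 because softmax normalizes the exponentiated logits, which is why each output neuron can be read as a class probability.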
notebooks/fgv_classes/professor_mirapalheta/02.3.keras_classmultipla_noticias.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Content-based Recommender v1
#
# Yolanda

# +
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

ds = pd.read_csv("rest.csv")
# -

# input format:
# - two columns
#   - id: restaurant names
#   - description: Menu

ds.head()

# + tags=[]
tf = TfidfVectorizer(analyzer='word', ngram_range=(1, 3), min_df=0, stop_words='english')
tfidf_matrix = tf.fit_transform(ds['description'])

cosine_similarities = linear_kernel(tfidf_matrix, tfidf_matrix)

results = {}
for idx, row in ds.iterrows():
    similar_indices = cosine_similarities[idx].argsort()[:-100:-1]
    similar_items = [(cosine_similarities[idx][i], ds['id'][i]) for i in similar_indices]
    results[row['id']] = similar_items[1:]  # drop the item itself (similarity 1)

print('done!')

def item(id):
    return ds.loc[ds['id'] == id]['description'].tolist()[0].split(' - ')[0]

# Just reads the results out of the dictionary.
def recommend(item_id, num):
    print("Recommending " + str(num) + " products similar to " + item_id + ": " + item(item_id))
    print("-------")
    recs = results[item_id][:num]
    for rec in recs:
        print("Name: " + rec[1])
        print("Menu: " + item(rec[1]) + " (score: " + str(rec[0]) + ")")

recommend(item_id="Crystal Food Mania", num=5)

# + tags=[]
recommend(item_id="<NAME>", num=5)

# + tags=[]
recommend(item_id="Kareem's", num=5)
# -
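A note on why `linear_kernel` is enough here: `TfidfVectorizer` L2-normalizes each row by default (`norm='l2'`), so a plain dot product between rows already equals their cosine similarity, and `linear_kernel` is the cheaper call. A pure-NumPy sketch of the equivalence, using made-up vectors rather than the actual tf-idf matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
rows = rng.random((3, 5))

# L2-normalize the rows, as TfidfVectorizer(norm='l2') does
normed = rows / np.linalg.norm(rows, axis=1, keepdims=True)

linear = normed @ normed.T  # what linear_kernel computes

# Explicit cosine similarity: dot product divided by the norms
norms = np.linalg.norm(normed, axis=1)
cosine = (normed @ normed.T) / np.outer(norms, norms)

print(np.allclose(linear, cosine))  # → True
```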
content_based_recommendation_v1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Applied Data Science Capstone # # This notebook has been created as part of the IBM Applied Data Science Capstone on Coursera. # # The aim of this project is to compare neighbourhoods in the Toronto area using data obtained through the Foursquare API and Python tools including pandas, numpy, matplotlib and seaborn. import pandas as pd import numpy as np print('Hello Capstone Project Course!')
toronto_clustering_exercise/notebooks/00-introduction.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] id="qYk44mBwJf6E" colab_type="text"
# **Important note about this notebook: the idea is to download torrents to GDrive using Colab. Do not use this for wrong purposes.**
#
# ---
#
# **Note:** To get more disk space:
# > Go to Runtime -> Change Runtime and set GPU as the Hardware Accelerator.
# Please note that there is a limit on this disk space; do some searching on the maximum storage you can get from Colab.

# + [markdown] id="91uCrUr_gCNC" colab_type="text"
# > ## Start session (run the following code segment)
#
# You'll be asked to provide a token for your GDrive. Please use the link suggested at the end to get the token. Once you have it, paste it into the input box and press Enter.

# + id="5wqr-i-QfG1F" colab_type="code" colab={}
# !apt install python3-libtorrent

import libtorrent as lt
from google.colab import drive

ses = lt.session()
ses.listen_on(49152, 65535)
drive.mount("/content/drive")

downloads = []  # shared by both input methods below

# + [markdown] id="0Er7A41igfpT" colab_type="text"
# ## Use one of the following

# + [markdown] id="lZ7yUWwhgv3C" colab_type="text"
# ### Add torrent file (give the path to your torrent file)

# + colab_type="code" id="0et2A6N3udA0" colab={}
from google.colab import files

source = files.upload()
for index in range(len(source)):
    params = {
        "save_path": "/content/drive/My Drive/Torrent/c",
        "ti": lt.torrent_info(list(source.keys())[index]),
    }
    downloads.append(ses.add_torrent(params))

# + [markdown] id="a9PuxJSng--I" colab_type="text"
# ### Provide the magnet link
#
# Once you run this code, a blank input box will appear for the magnet link. You can add multiple magnet links in one run (enter them one by one, pressing Enter after each). When you are done adding links, type **Exit** to continue to the next step.

# + id="Cwi1GMlxy3te" colab_type="code" colab={}
params = {"save_path": "/content/drive/My Drive/Torrent"}

while True:
    magnet_link = input("Enter Magnet Link Or Type Exit: ")
    if magnet_link.lower() == "exit":
        break
    downloads.append(
        lt.add_magnet_uri(ses, magnet_link, params)
    )

# + [markdown] id="zaE9wiLehytg" colab_type="text"
# ### Start the download
#
# The files will be saved to your GDrive in the folder named **Torrent**.

# + colab_type="code" id="DBNoYYoSuDBT" colab={}
import time

from IPython.display import display
import ipywidgets as widgets

state_str = [
    "queued",
    "checking",
    "downloading metadata",
    "downloading",
    "finished",
    "seeding",
    "allocating",
    "checking fastresume",
]

layout = widgets.Layout(width="auto")
style = {"description_width": "initial"}
download_bars = [
    widgets.FloatSlider(
        step=0.01, disabled=True, layout=layout, style=style
    )
    for _ in downloads
]
display(*download_bars)

while downloads:
    next_shift = 0
    for index, download in enumerate(downloads[:]):
        bar = download_bars[index + next_shift]
        if not download.is_seed():
            s = download.status()

            bar.description = " ".join(
                [
                    download.name(),
                    str(s.download_rate / 1000),
                    "kB/s",
                    state_str[s.state],
                ]
            )
            bar.value = s.progress * 100
        else:
            next_shift -= 1
            ses.remove_torrent(download)
            downloads.remove(download)
            # Seems to be not working in Colab (see
            # https://github.com/googlecolab/colabtools/issues/726#issue-486731758)
            bar.close()
            download_bars.remove(bar)
            print(download.name(), "complete")

    time.sleep(1)
cola_drive.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # <NAME>:
# We are in a competition to win the archery contest in Sherwood. With our bow and arrows we shoot at a target and try to hit as close as possible to the center.
#
# The center of the target is represented by the values (0, 0) on the coordinate axes.
#
# ![](images/arrows.jpg)
#
# ## Goals:
# * data structures: lists, sets, tuples
# * logical operators: if-elif-else
# * loop: while/for
# * minimum (optional sorting)
#
# ## Description:
# In 2-dimensional space, a point can be defined by a pair of values that correspond to the horizontal coordinate (x) and the vertical coordinate (y). The space can be divided into 4 zones (quadrants): Q1, Q2, Q3, Q4, whose single point of union is the point (0, 0).
#
# If a point is in Q1, both its x coordinate and its y coordinate are positive. Here are links to Wikipedia to familiarize yourself with these quadrants.
#
# https://en.wikipedia.org/wiki/Cartesian_coordinate_system
#
# https://en.wikipedia.org/wiki/Euclidean_distance
#
# ## Shots
# ```
# points = [(4, 5), (-0, 2), (4, 7), (1, -3), (3, -2), (4, 5),
#           (3, 2), (5, 7), (-5, 7), (2, 2), (-4, 5), (0, -2),
#           (-4, 7), (-1, 3), (-3, 2), (-4, -5), (-3, 2),
#           (5, 7), (5, 7), (2, 2), (9, 9), (-8, -9)]
# ```
#
# ## Tasks
# 1. <NAME> is famous for hitting an arrow with another arrow. Did you get it?
# 2. Calculate how many arrows have fallen in each quadrant.
# 3. Find the point closest to the center. Calculate its distance to the center.
# 4. If the target has a radius of 9, calculate the number of arrows that must be picked up in the forest.
# +
# Variables
points = [(4, 5), (-0, 2), (4, 7), (1, -3), (3, -2), (4, 5),
          (3, 2), (5, 7), (-5, 7), (2, 2), (-4, 5), (0, -2),
          (-4, 7), (-1, 3), (-3, 2), (-4, -5), (-3, 2),
          (5, 7), (5, 7), (2, 2), (9, 9), (-8, -9)]
# -

# 1. <NAME> is famous for hitting an arrow with another arrow. Did you get it?

repeated = {p for p in points if points.count(p) > 1}
if repeated:
    print("Yeah!", repeated)
else:
    print("OOOPS")

# 2. Calculate how many arrows have fallen in each quadrant.
# (Standard convention: Q1 is (+, +), Q2 is (-, +), Q3 is (-, -), Q4 is (+, -);
# points lying on an axis are not counted in any quadrant.)

q1 = []
q2 = []
q3 = []
q4 = []

for a, b in points:
    if a > 0 and b > 0:
        q1.append((a, b))
    elif a < 0 and b > 0:
        q2.append((a, b))
    elif a < 0 and b < 0:
        q3.append((a, b))
    elif a > 0 and b < 0:
        q4.append((a, b))

print(len(q1), "arrows have fallen in Q1")
print(len(q2), "arrows have fallen in Q2")
print(len(q3), "arrows have fallen in Q3")
print(len(q4), "arrows have fallen in Q4")

# + code_folding=[]
# 3. Find the point closest to the center. Calculate its distance to the center.
# Defining a function that calculates the distance to the center can help.

distances = []
info = []

for a, b in points:
    # Note: ** 0.5, not ** 1/2 — the latter computes (x ** 1) / 2 because of
    # operator precedence, which is not the square root.
    dist = (a ** 2 + b ** 2) ** 0.5
    info.append((a, b, dist))
    distances.append(dist)

for a, b, c in info:
    if c == min(distances):
        print((a, b), "is the closest point to the center and is", c, "away")
# -

# 4. If the target has a radius of 9, calculate the number of arrows that
# must be picked up in the forest.

in_the_forest = 0
for a, b, c in info:
    if c > 9:
        in_the_forest += 1

print("You will need to pick", in_the_forest, "arrows")
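An alternative that sidesteps the `** 1/2` precedence pitfall entirely is the standard library: `math.hypot` and `math.dist` (the latter in Python 3.8+) both compute the Euclidean distance directly:

```python
import math

point = (3, -2)

# Distance from the point to the center (0, 0), two equivalent ways
d1 = math.hypot(*point)          # sqrt(3**2 + (-2)**2)
d2 = math.dist(point, (0, 0))    # general point-to-point distance

print(d1)
```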
robin-hood/robin-hood.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # + [markdown] origin_pos=0 # # Networks Using Blocks (VGG) # :label:`sec_vgg` # # While AlexNet offered empirical evidence that deep CNNs # can achieve good results, it did not provide a general template # to guide subsequent researchers in designing new networks. # In the following sections, we will introduce several heuristic concepts # commonly used to design deep networks. # # Progress in this field mirrors that in chip design # where engineers went from placing transistors # to logical elements to logic blocks. # Similarly, the design of neural network architectures # had grown progressively more abstract, # with researchers moving from thinking in terms of # individual neurons to whole layers, # and now to blocks, repeating patterns of layers. # # The idea of using blocks first emerged from the # [Visual Geometry Group](http://www.robots.ox.ac.uk/~vgg/) (VGG) # at Oxford University, # in their eponymously-named *VGG* network. # It is easy to implement these repeated structures in code # with any modern deep learning framework by using loops and subroutines. # # # ## VGG Blocks # # The basic building block of classic CNNs # is a sequence of the following: # (i) a convolutional layer # with padding to maintain the resolution, # (ii) a nonlinearity such as a ReLU, # (iii) a pooling layer such # as a max pooling layer. # One VGG block consists of a sequence of convolutional layers, # followed by a max pooling layer for spatial downsampling. # In the original VGG paper :cite:`Simonyan.Zisserman.2014`, # the authors # employed convolutions with $3\times3$ kernels with padding of 1 (keeping height and width) # and $2 \times 2$ max pooling with stride of 2 # (halving the resolution after each block). # In the code below, we define a function called `vgg_block` # to implement one VGG block. 
# # + [markdown] origin_pos=1 tab=["tensorflow"] # The function takes two arguments # corresponding to the number of convolutional layers `num_convs` # and the number of output channels `num_channels`. # # + origin_pos=5 tab=["tensorflow"] from d2l import tensorflow as d2l import tensorflow as tf def vgg_block(num_convs, num_channels): blk = tf.keras.models.Sequential() for _ in range(num_convs): blk.add(tf.keras.layers.Conv2D(num_channels,kernel_size=3, padding='same',activation='relu')) blk.add(tf.keras.layers.MaxPool2D(pool_size=2, strides=2)) return blk # + [markdown] origin_pos=6 # ## VGG Network # # Like AlexNet and LeNet, # the VGG Network can be partitioned into two parts: # the first consisting mostly of convolutional and pooling layers # and the second consisting of fully-connected layers. # This is depicted in :numref:`fig_vgg`. # # ![From AlexNet to VGG that is designed from building blocks.](../img/vgg.svg) # :width:`400px` # :label:`fig_vgg` # # # The convolutional part of the network connects several VGG blocks from :numref:`fig_vgg` (also defined in the `vgg_block` function) # in succession. # The following variable `conv_arch` consists of a list of tuples (one per block), # where each contains two values: the number of convolutional layers # and the number of output channels, # which are precisely the arguments required to call # the `vgg_block` function. # The fully-connected part of the VGG network is identical to that covered in AlexNet. # # The original VGG network had 5 convolutional blocks, # among which the first two have one convolutional layer each # and the latter three contain two convolutional layers each. # The first block has 64 output channels # and each subsequent block doubles the number of output channels, # until that number reaches 512. # Since this network uses 8 convolutional layers # and 3 fully-connected layers, it is often called VGG-11. 
#
# + origin_pos=7 tab=["tensorflow"]
conv_arch = ((1, 64), (1, 128), (2, 256), (2, 512), (2, 512))

# + [markdown] origin_pos=8
# The following code implements VGG-11. This is a simple matter of executing a for-loop over `conv_arch`.
#
# + origin_pos=11 tab=["tensorflow"]
def vgg(conv_arch):
    net = tf.keras.models.Sequential()
    # The convolutional part
    for (num_convs, num_channels) in conv_arch:
        net.add(vgg_block(num_convs, num_channels))
    # The fully-connected part
    net.add(tf.keras.models.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(4096, activation='relu'),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(4096, activation='relu'),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(10)]))
    return net

net = vgg(conv_arch)

# + [markdown] origin_pos=12
# Next, we will construct a single-channel data example
# with a height and width of 224 to observe the output shape of each layer.
#
# + origin_pos=15 tab=["tensorflow"]
X = tf.random.uniform((1, 224, 224, 1))
for blk in net.layers:
    X = blk(X)
    print(blk.__class__.__name__, 'output shape:\t', X.shape)

# + [markdown] origin_pos=16
# As you can see, we halve height and width at each block,
# finally reaching a height and width of 7
# before flattening the representations
# for processing by the fully-connected part of the network.
#
# ## Training
#
# Since VGG-11 is more computationally heavy than AlexNet,
# we construct a network with a smaller number of channels.
# This is more than sufficient for training on Fashion-MNIST.
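To back up the claim that VGG-11 is computationally heavy, the parameter count can be estimated with plain Python (a sketch assuming 3×3 kernels with bias, the 1-channel 224×224 input used above, and the 7×7×512 flatten size from the shape printout):

```python
conv_arch = ((1, 64), (1, 128), (2, 256), (2, 512), (2, 512))

def conv_params(conv_arch, in_channels=1, k=3):
    """Count parameters in the convolutional part of a VGG-style network."""
    total = 0
    for num_convs, out_channels in conv_arch:
        for _ in range(num_convs):
            total += (k * k * in_channels + 1) * out_channels  # weights + bias
            in_channels = out_channels
    return total

conv = conv_params(conv_arch)  # 9,219,328 parameters in the conv layers

# Fully-connected part: 7*7*512 -> 4096 -> 4096 -> 10, each with a bias term
fc = (7 * 7 * 512 + 1) * 4096 + (4096 + 1) * 4096 + (4096 + 1) * 10

print(conv + fc)  # → 128806154, roughly 1.3e8 parameters
```

Most of the parameters live in the first fully-connected layer, but the convolutional layers dominate the floating-point operations because they are applied at every spatial position.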
#

# + origin_pos=18 tab=["tensorflow"]
ratio = 4
small_conv_arch = [(pair[0], pair[1] // ratio) for pair in conv_arch]
# Recall that this has to be a function that will be passed to
# `d2l.train_ch6()` so that model building/compiling need to be within
# `strategy.scope()` in order to utilize the CPU/GPU devices that we have
net = lambda: vgg(small_conv_arch)

# + [markdown] origin_pos=19
# Apart from using a slightly larger learning rate,
# the model training process is similar to that of AlexNet in :numref:`sec_alexnet`.
#

# + origin_pos=20 tab=["tensorflow"]
lr, num_epochs, batch_size = 0.05, 10, 128
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=224)
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr)

# + [markdown] origin_pos=21
# ## Summary
#
# * VGG-11 constructs a network using reusable convolutional blocks. Different VGG models can be defined by the differences in the number of convolutional layers and output channels in each block.
# * The use of blocks leads to very compact representations of the network definition. It allows for efficient design of complex networks.
# * In their VGG paper, Simonyan and Zisserman experimented with various architectures. In particular, they found that several layers of deep and narrow convolutions (i.e., $3 \times 3$) were more effective than fewer layers of wider convolutions.
#
# ## Exercises
#
# 1. When printing out the dimensions of the layers we only saw 8 results rather than 11. Where did the remaining 3 layers' information go?
# 1. Compared with AlexNet, VGG is much slower in terms of computation, and it also needs more GPU memory. Analyze the reasons for this.
# 1. Try changing the height and width of the images in Fashion-MNIST from 224 to 96. What influence does this have on the experiments?
# 1. Refer to Table 1 in the VGG paper :cite:`Simonyan.Zisserman.2014` to construct other common models, such as VGG-16 or VGG-19.
# # + [markdown] origin_pos=24 tab=["tensorflow"] # [Discussions](https://discuss.d2l.ai/t/277) #
d2l-en/tensorflow/chapter_convolutional-modern/vgg.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Stochastic Differential Equations (SDEs)
#
# Continuous-state dynamical systems lead to deterministic systems of ordinary differential equations (ODEs). Stochastic versions of such models are stochastic differential equations (SDEs).

# ## Fitzhugh-Nagumo Model
#
#
# \begin{equation}
# \begin{split}
# \frac{dv}{dt} =& v-v^3-w+I_{ext}\\
# \frac{dw}{dt} =& \frac{1}{\tau}(v-a-bw)
# \end{split}
# \end{equation}

# +
import numpy as np
import pandas as pd

def fitzhugh_nagumo(x, t, a, b, tau, I):
    """ Fitzhugh-Nagumo model. """
    v, w = x
    dvdt = v-v**3-w+I
    dwdt = (v-a-b*w)/tau
    return np.array([dvdt, dwdt])

# +
from functools import partial
from scipy.integrate import odeint

def integration_SDE(model, noise_flow, y0, t):
    '''
    Euler-Maruyama integration.
    dY(t) = f(Y(t),t)dt + g(Y(t),t)dB(t)
    y(0) = y0
    '''
    y = np.zeros((len(t), len(y0)))
    y[0] = y0
    for n, dt in enumerate(np.diff(t), 1):
        # drift uses the deterministic model; the noise term uses noise_flow
        y[n] = y[n-1] + model(y[n-1], dt) * dt + noise_flow(y[n-1], dt) * np.random.normal(0, np.sqrt(dt))
    return y

n_runs = 10
t_span = np.linspace(0, 1000, num=10000)
brownian_noise = lambda y, t: 0.01
initial_conditions = [(-0.5, -0.1), [0, -0.16016209760708508]]
I_ext = [0, 0.19, 0.22, 0.5]

import matplotlib.pyplot as plt
fig, ax = plt.subplots(len(I_ext), 1, figsize=(15, 10*len(I_ext)))

for idx, current_ext in enumerate(I_ext):
    # Evaluate the fitzhugh_nagumo model with the specified parameters a, b, tau, I in param
    param = {'a': -0.3, 'b': 1.4, 'tau': 20, 'I': current_ext}
    model = partial(fitzhugh_nagumo, **param)
    ic = initial_conditions[1]

    sde_solutions = np.zeros((10000, 2, n_runs))
    for i in range(n_runs):
        sde_solutions[:, :, i] = integration_SDE(model, brownian_noise, y0=ic, t=t_span)
    ode_solution = odeint(model, y0=ic, t=t_span)

    v_sde, w_sde = (sde_solutions[:, 0, :], sde_solutions[:, 1, :])
    # Drop nans in case a stochastic run results in an ill-behaved solution
    v_sde = pd.DataFrame(v_sde).dropna(axis=1).to_numpy()
    v_ode, w_ode = (ode_solution[:, 0], ode_solution[:, 1])

    ax[idx].plot(t_span, v_ode, label='V - ODE', color='k')
    ax[idx].plot(t_span, np.median(v_sde, 1), label=r'Median V - SDE', color='r', linestyle='-.')
    ax[idx].plot(t_span, v_sde, color='r', linestyle='-.', alpha=0.2)
    ax[idx].set_xlabel('Time (ms)')
    ax[idx].set_ylabel('Membrane Potential (mV)')
    ax[idx].set_title(r'External Stimulus $I_e=${}'.format(param['I']))
    ax[idx].set_ylim([-2, 2])
    ax[idx].legend()
# -
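A minimal scalar sketch of the Euler-Maruyama scheme used above (names are illustrative), with the drift and diffusion terms kept clearly separate:

```python
import numpy as np

def euler_maruyama(drift, diffusion, y0, t, rng=None):
    """Integrate dY = drift(Y, t) dt + diffusion(Y, t) dB with Euler-Maruyama."""
    rng = np.random.default_rng() if rng is None else rng
    y = np.empty(len(t))
    y[0] = y0
    for n, dt in enumerate(np.diff(t), 1):
        dB = rng.normal(0.0, np.sqrt(dt))  # Brownian increment with variance dt
        y[n] = y[n - 1] + drift(y[n - 1], t[n - 1]) * dt + diffusion(y[n - 1], t[n - 1]) * dB
    return y

# With zero diffusion this reduces to the deterministic Euler scheme,
# so dY = -Y dt integrated from y(0)=1 should land near exp(-5) at t=5.
t = np.linspace(0, 5, 5001)
y = euler_maruyama(lambda y, t: -y, lambda y, t: 0.0, 1.0, t)
print(y[-1])
```

The zero-diffusion check is a cheap way to validate any SDE integrator: it must agree with the corresponding ODE solver in that limit.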
SDEs/.ipynb_checkpoints/example-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: cssp37 # language: python # name: cssp37 # --- # # # <b>Tutorial 4: Advanced data analysis</b> # # # ## Learning Objectives: # # In this session we will learn: # 1. to calculate frequency of wet days # 2. to calculate percentiles # 3. how to calculate some useful climate extremes statistics # ## Contents # # 1. [Frequency of wet days](#freq) # 2. [Percentiles](#percent) # 3. [Investigating extremes](#extremes) # 4. [Exercises](#exercise) # <div class="alert alert-block alert-warning"> # <b>Prerequisites</b> <br> # - Basic programming skills in python<br> # - Familiarity with python libraries Iris, Numpy and Matplotlib<br> # - Basic understanding of climate data<br> # - Tutorial 1, 2 and 3 # </div> # ___ # ## 1. Frequency of wet days<a id='freq'></a> # ### 1.1 Import libraries # Import the necessary libraries. Current datasets are in zarr format, we need zarr and xarray libraries to access the data import numpy as np import xarray as xr import zarr import iris import os from iris.analysis import Aggregator import dask dask.config.set(scheduler=dask.get) import dask.array as da import iris.quickplot as qplt import iris.plot as iplt import cartopy.crs as ccrs import cartopy.feature as cfeature import matplotlib.pyplot as plt from catnip.preparation import extract_rot_cube, add_bounds from scripts.xarray_iris_coord_system import XarrayIrisCoordSystem as xics xi = xics() xr.set_options(display_style='text') # Work around for AML bug that won't display HTML output. 
# ### 1.2 Set up authentication for the Azure blob store
#
# The data for this course is held online in an Azure Blob Storage Service. To access this we use a SAS (shared access signature). You should have been given the credentials for this service before the course, but if not please ask your instructor. We use the getpass module here to avoid putting the token into the public domain. Run the cell below and in the box enter your SAS and press return. This will store the password in the variable SAS.

import getpass
# SAS WITHOUT leading '?'
SAS = getpass.getpass()

# We now use the Zarr library to connect to this storage. This is a little like opening a file on a local file system, but it works without downloading the data. This makes use of the Azure Blob Storage service. The zarr.ABSStore method returns a zarr.storage.ABSStore object which we can now use to access the Zarr data in the same way we would use a local file. If you have a Zarr file on a local file system you could skip this step and instead just use the path to the Zarr data below when opening the dataset.

store = zarr.ABSStore(container='metoffice-20cr-ds', prefix='daily/', account_name="metdatasa", blob_service_kwargs={"sas_token":SAS})
type(store)

# ### 1.3 Read daily data
# A Dataset consists of coordinates and data variables. Let's use xarray's **open_zarr()** method to read all our zarr data into a dataset object and display its metadata

# use the open_zarr() method to read in the whole dataset metadata
dataset = xr.open_zarr(store)

# print out the metadata
dataset

# Convert the dataset into an iris cubelist.
# +
from xarray_iris_coord_system import XarrayIrisCoordSystem as xics
xi = xics()
# create an empty list to hold the iris cubes
cubelist = iris.cube.CubeList([])

# use the DataSet.apply() to convert the dataset to an Iris CubeList
dataset.apply(lambda da: cubelist.append(xi.to_iris(da)))

# print out the cubelist.
cubelist
# -

# ---
# <div class="alert alert-block alert-info">
# <b>Note:</b> The following <b>sections</b> demonstrate analysis of moderate extremes. The basis of climate extremes analysis is a common set of standard extreme climate indices, defined by the World Climate Research Programme <a href="https://www.wcrp-climate.org/etccdi">Expert Team on Climate Change Detection and Indices (ETCCDI)</a>
#
# <br>There are 27 climate extremes indices, nicely summarised by the <a href="https://www.climdex.org/learn/indices/">Climdex</a> website.
# </div>

# ### 1.4 Calculate number of wet days ($\mathrm{pr} \geq 1 mm \;day^{-1}$)
#
# In this section we'll be looking at wet days, a threshold measure giving the count of days when $\mathrm{pr} \geq 1 mm \;day^{-1}$, and R95p, the 95th percentile of precipitation on wet days ($\mathrm{pr} \geq 1 mm \;day^{-1}$) in the 1851-1900 period over the Shanghai region.
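Before working with the full cubes, the wet-day count and R95p definitions can be sketched on synthetic data (plain numpy, illustrative values only, not the 20CR data used below):

```python
import numpy as np

# ~10 years of synthetic daily rainfall totals in mm/day (illustrative only)
rng = np.random.default_rng(0)
pr = rng.gamma(shape=0.4, scale=5.0, size=3650)

wet = pr >= 1.0                        # wet-day mask (>= 1 mm/day)
n_wet = int(wet.sum())                 # count of wet days
pct_wet = 100.0 * n_wet / pr.size      # as a percentage of all days
r95p = float(np.percentile(pr[wet], 95))  # 95th percentile of rain on wet days only

print(n_wet, round(pct_wet, 1), round(r95p, 1))
```

Note that R95p is computed over the wet days only, which is why the iris version below masks out dry days before collapsing the time coordinate.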
# Extract the 'precipitation_flux' cube
pflx = cubelist.extract_strict('precipitation_flux')

# To avoid warnings when collapsing coordinates and also when plotting, add bounds to all coordinates
pflx = add_bounds(pflx,['time', 'grid_latitude', 'grid_longitude'])

# convert units to mm/day (equivalent to 'kg m-2 day-1')
pflx.convert_units('kg m-2 day-1')

# Applying the time and region constraint

# +
# define time constraint and extract 1851-1900 period
start_time = 1851
end_time = 1900

# define the time constraint
time_constraint = iris.Constraint(time=lambda cell: start_time <= cell.point.year <= end_time)

# load the data into cubes applying the time constraint
pflx = pflx.extract(time_constraint)

# extract Shanghai region and constrain with time
# defining Shanghai region coords
min_lat=29.0
max_lat=32.0
min_lon=118.0
max_lon=123.0

# extract data for the Shanghai region using extract_rot_cube() function
pflx = extract_rot_cube(pflx, min_lat, min_lon, max_lat, max_lon)
# -

# now use the iris COUNT aggregator to count the number of days with > 1mm precip
wetdays = pflx.collapsed('time', iris.analysis.COUNT, function=lambda values: values > 1)
wetdays.rename('number of wet days (>=1mm/day)')

# +
# Find wet days as a percentage of total days
total_days = len(pflx.coord('time').points)
pcent_wetdays = (wetdays / total_days) * 100

# renaming the cube name and units
pcent_wetdays.rename('percentage of wet days (>=1mm/day)')
pcent_wetdays.units = '%'
# -

# Now, we can plot the number and percentage of wet days

fig = plt.figure(figsize=(12, 6))
fig.suptitle('Number of wet days (1851-1900)', fontsize=16)
ax1 = fig.add_subplot(1, 2, 1, projection=ccrs.PlateCarree())
qplt.pcolormesh(wetdays)
ax1.coastlines()
ax1 = fig.add_subplot(1, 2, 2, projection=ccrs.PlateCarree())
qplt.pcolormesh(pcent_wetdays)
ax1.coastlines()
plt.show()

# <div class="alert alert-block alert-success">
# <b>Task:</b><br><ul>
# <li> Calculate and visualise the percentage difference of wet days from the past (1851-1880) to the recent (1981-2010) 30-year period.
# </ul>
# </div>

# +
# Write your code here ..

# +
# Write your code here ..
# -

# ___
# ## 2. Percentiles<a id='percent'></a>
# ### 2.1 Calculating 95th percentile of precipitation
#
# In this section we will calculate the extreme precipitation, i.e. the 95th percentile of rainfall on wet days over the Shanghai region from 1981-2010.

# +
# Extract the 'precipitation_flux' cube
pflx = cubelist.extract_strict('precipitation_flux')

# change the units to kg m-2 d-1
pflx.convert_units('kg m-2 d-1')

# +
# define time constraint and extract 1981-2010 period
start_time = 1981
end_time = 2010

# define the time constraint
cons = iris.Constraint(time=lambda cell: start_time <= cell.point.year <= end_time)

# load the data into cubes applying the time constraint
pflx = pflx.extract(cons)

# extract Shanghai region and constrain with time
# defining Shanghai region coords
min_lat=29.0
max_lat=32.0
min_lon=118.0
max_lon=123.0

# extract data for the Shanghai region using extract_rot_cube() function
pflx = extract_rot_cube(pflx, min_lat, min_lon, max_lat, max_lon)
# -

# make a copy of the cube, mask where daily rainfall < 1 so that only wet days
# are included in the calculation
pflx_wet = pflx.copy()
pflx_wet.data = np.ma.masked_less(pflx_wet.data, 1.0)

# Now we can use the *iris.analysis.PERCENTILE* method to calculate the percentile.

pflx_pc95 = pflx_wet.collapsed('time', iris.analysis.PERCENTILE, percent=95.)
pflx_pc95.rename('R95p of daily rainfall')

fig = plt.figure(figsize=(12, 6))
fig.suptitle('Extreme rainfall', fontsize=16)
qplt.pcolormesh(pflx_pc95)
plt.gca().coastlines()
plt.show()

# ___
# ## 3. Investigate extremes<a id='extremes'></a>
# ### 3.1 Calculate the extreme index TX90P
# Calculate the frequency of warm days in the present (extreme index TX90P), i.e. the number of days which exceed the 90th percentile temperatures in the baseline. Then calculate the number of days as a percentage.
# +
# first extract the air_temperature at 1.5m cubes from the cubelist
air_temp = cubelist.extract('air_temperature' & iris.AttributeConstraint(Height='1.5 m'))

# constraint for the maximum temperature
max_temp_cons = iris.Constraint(cube_func=lambda c: (len(c.cell_methods) > 0) and
                                (c.cell_methods[0].method == 'maximum'))

# define time constraint and extract 1851-1900 period (the baseline)
start_time = 1851
end_time = 1900

# define the time constraint
time_constraint = iris.Constraint(time=lambda cell: start_time <= cell.point.year <= end_time)

# applying the pressure, maximum temperature and time constraints getting a single cube
max_temp = air_temp.extract_strict(max_temp_cons & time_constraint)

# defining Shanghai region coords
min_lat=29.0
max_lat=32.0
min_lon=118.0
max_lon=123.0

# extract data for the Shanghai region using extract_rot_cube() function
max_temp = extract_rot_cube(max_temp, min_lat, min_lon, max_lat, max_lon)
# -

max_temp_pc90 = max_temp.collapsed('time', iris.analysis.PERCENTILE, percent=90.)
max_temp_pc90.rename('R90p of daily maximum temperature')

# +
# extract a single cube of maximum air_temperature at 1.5m from the cubelist
max_temp = cubelist.extract_strict('air_temperature' & iris.AttributeConstraint(Height='1.5 m')
                                   & max_temp_cons)

# extract data for the Shanghai region using extract_rot_cube() function
max_temp = extract_rot_cube(max_temp, min_lat, min_lon, max_lat, max_lon)

# Now extract present day
start_time = 1981
end_time = 2010
time_constraint = iris.Constraint(time=lambda cell: start_time <= cell.point.year <= end_time)
max_temp = max_temp.extract(time_constraint)
# -

# Now we need to calculate the number of warm days; we do so by counting all the data points that are greater than the 90th percentile of the baseline period within the last 30 years. We can use the numpy method **np.where**, which returns 1 where max_temp is greater than max_temp_pc90 and 0 otherwise.
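A minimal numpy sketch of this broadcast-and-count step on toy data (shapes and values are illustrative, not the 20CR cubes):

```python
import numpy as np

# Toy daily maxima (K) on a (time, lat, lon) grid
rng = np.random.default_rng(1)
tmax = rng.normal(300.0, 5.0, size=(365, 3, 4))

# Per-grid-cell 90th percentile over the time axis, shape (3, 4)
thresh = np.percentile(tmax, 90, axis=0)

# The (365, 3, 4) >= (3, 4) comparison broadcasts over the time axis
exceed = np.where(tmax >= thresh, 1, 0)   # 1 where the day exceeds its cell's threshold
nwarm = exceed.sum(axis=0)                # warm-day count per cell
print(nwarm.shape)
```

Roughly 10% of the 365 days fall at or above each cell's own 90th percentile, so each count should be in the high thirties.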
#
#

# +
# make new cube to hold the counts
nwarmdays = max_temp_pc90.copy()

# Use broadcasting to identify all cells where daily temperatures in the future exceed the 90th percentile
temp_gt_pc90 = np.where(max_temp.data >= max_temp_pc90.data, 1, 0)

# using np.ma.sum to sum the number of warm days above the 90th percentile
nwarmdays.data = np.ma.sum(temp_gt_pc90, axis=0)

# the sum above removes the mask - reinstate it with
nwarmdays.data.mask = max_temp_pc90.data.mask
nwarmdays.units = '1'
# -

qplt.pcolormesh(nwarmdays)
plt.gca().coastlines()
plt.show()

# Calculate the percentage of warm days by using **iris.analysis.maths**

ndays = max_temp.shape[0]
# calculating percentage
nwd_pcent = iris.analysis.maths.divide(iris.analysis.maths.multiply(nwarmdays, 100), ndays)
nwd_pcent.units="%"

# Plotting the percentage of warm days
qplt.pcolormesh(nwd_pcent)
plt.title('Percentage of warm days')
plt.gca().coastlines()
plt.tight_layout()
plt.show()

# <div class="alert alert-block alert-success">
# <b>Task:</b><br><ul>
# <li> Calculate and plot the past (1851-1880) and present (1981-2010) 90th percentile of maximum temperature and the difference between them.
# </ul>
# </div>

# +
# Enter your code here ..

# +
# Enter your code here ..
# -

# ___
# ## 4. Exercises<a id='exercise'></a>
#
# In this exercise we will calculate the percentage of total precipitation from 1981-2010 which falls on very wet days (where a very wet day is one on which daily rainfall exceeds the 95th percentile of the baseline) over the Shanghai region.
#
# Further, we also calculate the percentage of very wet days in the past (1851-1880) and see the difference by plotting the difference of heavy rainfall in the past and present.

# ### Exercise 1: calculate the percentage of total precipitation from 1981-2010 on very wet days (>= 95th percentile)

# +
# write your code here ...
# -

# ### Exercise 2: calculate the percentage of total precipitation from 1851-1880 on very wet days (>= 95th percentile)

# +
# write your code here ...

# -

# ### Exercise 3: Calculate the difference

# +
# write your code here ...

# -

# ### Exercise 4: Plot the percentages and difference

# +
# write your code here ...

# -

# ___
# <div class="alert alert-block alert-success">
# <b>Summary</b><br>
# In this session we learned how:<br>
# <ul>
# <li>to calculate extreme values and percentages
# <li>to calculate basic extreme value indices
# </ul>
#
# </div>
#
notebooks/CSSP_20CRDS_Tutorials/tutorial_4_advance_analysis.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# [<NAME>](http://www.sebastianraschka.com)

# [back](https://github.com/rasbt/matplotlib-gallery) to the `matplotlib-gallery` at [https://github.com/rasbt/matplotlib-gallery](https://github.com/rasbt/matplotlib-gallery)

# %load_ext watermark

# %watermark -u -v -d -p matplotlib,numpy

# <font size="1.5em">[More info](http://nbviewer.ipython.org/github/rasbt/python_reference/blob/master/ipython_magic/watermark.ipynb) about the `%watermark` extension</font>

# %matplotlib inline

# # Scatter plots in matplotlib

# # Sections
# - [Basic scatter plot](#Basic-scatter-plot)
# - [Scatter plot with labels](#Scatter-plot-with-labels)
# - [Scatter plot of 2 classes with decision boundary](#Scatter-plot-of-2-classes-with-decision-boundary)
# - [Increasing point size with distance from the origin](#Increasing-point-size-with-distance-from-the-origin)

# <br>
# <br>

# # Basic scatter plot

# [[back to top](#Sections)]

# +
from matplotlib import pyplot as plt
import numpy as np

# Generating a Gaussian dataset:
# creating random vectors from the multivariate normal distribution
# given mean and covariance

mu_vec1 = np.array([0,0])
cov_mat1 = np.array([[2,0],[0,2]])

x1_samples = np.random.multivariate_normal(mu_vec1, cov_mat1, 100)
x2_samples = np.random.multivariate_normal(mu_vec1+0.2, cov_mat1+0.2, 100)
x3_samples = np.random.multivariate_normal(mu_vec1+0.4, cov_mat1+0.4, 100)
# x1_samples.shape -> (100, 2), 100 rows, 2 columns

plt.figure(figsize=(8,6))

# plot each dataset against its own y-values
# (the original cells reused x1_samples for the x2 and x3 series)
plt.scatter(x1_samples[:,0], x1_samples[:,1], marker='x',
            color='blue', alpha=0.7, label='x1 samples')
plt.scatter(x2_samples[:,0], x2_samples[:,1], marker='o',
            color='green', alpha=0.7, label='x2 samples')
plt.scatter(x3_samples[:,0], x3_samples[:,1], marker='^',
            color='red', alpha=0.7, label='x3 samples')
plt.title('Basic scatter plot')
plt.xlabel('Variable X')
plt.ylabel('Variable Y')
plt.legend(loc='upper right')

plt.show()
# -

# <br>
# <br>

# # Scatter plot with labels

# [[back to top](#Sections)]

# +
import matplotlib.pyplot as plt

x_coords = [0.13, 0.22, 0.39, 0.59, 0.68, 0.74, 0.93]
y_coords = [0.75, 0.34, 0.44, 0.52, 0.80, 0.25, 0.55]

fig = plt.figure(figsize=(8,5))

plt.scatter(x_coords, y_coords, marker='s', s=50)

for x, y in zip(x_coords, y_coords):
    plt.annotate(
        '(%s, %s)' %(x, y),
        xy=(x, y),
        xytext=(0, -10),
        textcoords='offset points',
        ha='center',
        va='top')

plt.xlim([0,1])
plt.ylim([0,1])
plt.show()
# -

# <br>
# <br>

# # Scatter plot of 2 classes with decision boundary

# [[back to top](#Sections)]

# +
# 2-category classification with random 2D-sample data
# from a multivariate normal distribution

import numpy as np
from matplotlib import pyplot as plt

def decision_boundary(x_1):
    """ Calculates the x_2 value for plotting the decision boundary."""
    return 4 - np.sqrt(-x_1**2 + 4*x_1 + 6 + np.log(16))

# Generating a Gaussian dataset:
# creating random vectors from the multivariate normal distribution
# given mean and covariance

mu_vec1 = np.array([0,0])
cov_mat1 = np.array([[2,0],[0,2]])
x1_samples = np.random.multivariate_normal(mu_vec1, cov_mat1, 100)
mu_vec1 = mu_vec1.reshape(1,2).T # to 1-col vector

mu_vec2 = np.array([1,2])
cov_mat2 = np.array([[1,0],[0,1]])
x2_samples = np.random.multivariate_normal(mu_vec2, cov_mat2, 100)
mu_vec2 = mu_vec2.reshape(1,2).T # to 1-col vector

# Main scatter plot and plot annotation
f, ax = plt.subplots(figsize=(7, 7))
ax.scatter(x1_samples[:,0], x1_samples[:,1], marker='o', color='green', s=40, alpha=0.5)
ax.scatter(x2_samples[:,0], x2_samples[:,1], marker='^', color='blue', s=40, alpha=0.5)
plt.legend(['Class1 (w1)', 'Class2 (w2)'], loc='upper right')
plt.title('Densities of 2 classes with 100 bivariate random patterns each')
plt.ylabel('x2')
plt.xlabel('x1')
# annotation text matching the sampling parameters above
ftext = 'p(x|w1) ~ N(mu1=(0,0)^t, cov1=2I)\np(x|w2) ~ N(mu2=(1,2)^t, cov2=I)'
plt.figtext(.15,.8, ftext, fontsize=11, ha='left')

# Adding decision boundary to plot
x_1 = np.arange(-5, 5, 0.1)
bound = decision_boundary(x_1)
plt.plot(x_1, bound, 'r--', lw=3)

plt.show()
# -

# <br>
# <br>

# # Increasing point size with distance from the origin

# [[back to top](#Sections)]

# +
import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(8,6))

# Generating a Gaussian dataset:
# creating random vectors from the multivariate normal distribution
# given mean and covariance

mu_vec1 = np.array([0,0])
cov_mat1 = np.array([[1,0],[0,1]])
X = np.random.multivariate_normal(mu_vec1, cov_mat1, 500)
R = X**2
R_sum = R.sum(axis=1)

plt.scatter(X[:, 0], X[:, 1],
            color='gray', marker='o',
            s=32. * R_sum,
            edgecolor='black',
            alpha=0.5)

plt.show()
# -
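The marker sizing in the last cell reduces to a squared-distance calculation, which can be checked on a few hand-picked points (a small sketch):

```python
import numpy as np

# Marker area proportional to the squared Euclidean distance from the origin
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 4.0]])
R_sum = (X ** 2).sum(axis=1)   # squared distances: 0, 1, 4, 25
sizes = 32.0 * R_sum           # same scaling factor as the plot above
print(R_sum.tolist())  # [0.0, 1.0, 4.0, 25.0]
```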
Unit02/extension_studies/Matplotlib_gallery/ipynb/scatterplots.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import pandas as pd
import json  # The module we need to decode JSON

# #### Decode all JSON Columns

# +
df = pd.read_csv("tmdb_5000_movies.csv")

sample_type = type(df["genres"][0])
print(f"The data type before using apply function: {sample_type}\n")

# Column names that contain JSON
json_cols = ['genres', 'keywords', 'production_companies',
             'spoken_languages', 'production_countries']

def clean_json(x):
    """Apply function for decoding a JSON string into a Python object"""
    return json.loads(x)

# Apply the function column wise to each column of interest
for x in json_cols:
    df[x] = df[x].apply(clean_json)

sample_type2 = type(df["genres"][0])
print(f"The data type after using apply function: {sample_type2}")
# -

# #### One-Hot-Encode all JSON Data

# +
def clean_json2(x):
    # store the column labels built from this row's list of dictionaries
    ls = []
    # loop through the list of dictionaries
    for y in range(len(x[0])):
        # Access each key and value in each dictionary
        for k, v in x[0][y].items():
            # append column names to ls
            ls.append(str(k) + "_" + str(v))
    # create a new column, or change 0 to 1 if the keyword exists
    for z in range(len(ls)):
        # If the column is not in df yet, create it filled with zeros,
        # then flag the current row (df.loc avoids chained indexing)
        if ls[z] not in df.columns:
            df[ls[z]] = 0
        df.loc[x.name, ls[z]] = 1
    return

print("Original Shape", df.shape)
# Loop over all columns, clean the JSON and create new columns
for x in json_cols:
    df[[x]].apply(clean_json2, axis=1)
print("New Shape", df.shape)
# -

df.head()
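The same decode-then-one-hot idea can also be expressed with pandas built-ins rather than a row-wise loop. A sketch on a tiny synthetic frame (column names are illustrative; this is not the TMDB file used above):

```python
import json
import pandas as pd

df = pd.DataFrame({
    "title": ["A", "B"],
    "genres": ['[{"id": 1, "name": "Action"}, {"id": 2, "name": "Drama"}]',
               '[{"id": 2, "name": "Drama"}]'],
})

df["genres"] = df["genres"].apply(json.loads)   # decode the JSON strings

# Build "key_value" labels per row, then one-hot encode with get_dummies;
# explode() turns each row's list into one row per label, and the groupby
# collapses the indicators back to one row per original record.
labels = df["genres"].apply(lambda ds: [f"{k}_{v}" for d in ds for k, v in d.items()])
dummies = pd.get_dummies(labels.explode()).groupby(level=0).max()
out = df.join(dummies)

print(sorted(dummies.columns))  # ['id_1', 'id_2', 'name_Action', 'name_Drama']
```

This avoids growing the DataFrame column-by-column inside an `apply`, which is slow and triggers chained-indexing warnings.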
json/part3.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # Isotropic Total Variation (ADMM) # ================================ # # This example compares denoising via isotropic and anisotropic total # variation (TV) regularization <cite data-cite="rudin-1992-nonlinear"/> # <cite data-cite="goldstein-2009-split"/>. It solves the denoising problem # # $$\mathrm{argmin}_{\mathbf{x}} \; (1/2) \| \mathbf{y} - \mathbf{x} # \|_2^2 + \lambda R(\mathbf{x}) \;,$$ # # where $R$ is either the isotropic or anisotropic TV regularizer. # In SCICO, switching between these two regularizers is a one-line # change: replacing an # [L1Norm](../_autosummary/scico.functional.rst#scico.functional.L1Norm) # with a # [L21Norm](../_autosummary/scico.functional.rst#scico.functional.L21Norm). # Note that the isotropic version exhibits fewer block-like artifacts on # edges that are not vertical or horizontal. # + import jax from xdesign import SiemensStar, discrete_phantom import scico.numpy as snp import scico.random from scico import functional, linop, loss, plot from scico.optimize.admm import ADMM, LinearSubproblemSolver from scico.util import device_info plot.config_notebook_plotting() # - # Create a ground truth image. N = 256 # image size phantom = SiemensStar(16) x_gt = snp.pad(discrete_phantom(phantom, 240), 8) x_gt = jax.device_put(x_gt) # convert to jax type, push to GPU x_gt = x_gt / x_gt.max() # Add noise to create a noisy test image. σ = 0.75 # noise standard deviation noise, key = scico.random.randn(x_gt.shape, seed=0) y = x_gt + σ * noise # Denoise with isotropic total variation. # + λ_iso = 1.4e0 f = loss.SquaredL2Loss(y=y) g_iso = λ_iso * functional.L21Norm() # The append=0 option makes the results of horizontal and vertical finite # differences the same shape, which is required for the L21Norm. 
C = linop.FiniteDifference(input_shape=x_gt.shape, append=0)

solver = ADMM(
    f=f,
    g_list=[g_iso],
    C_list=[C],
    rho_list=[1e1],
    x0=y,
    maxiter=100,
    subproblem_solver=LinearSubproblemSolver(cg_kwargs={"tol": 1e-3, "maxiter": 20}),
    itstat_options={"display": True, "period": 10},
)

print(f"Solving on {device_info()}\n")
solver.solve()
x_iso = solver.x
print()
# -

# Denoise with anisotropic total variation for comparison.

# +
# Tune the weight to give the same data fidelity as the isotropic case.
λ_aniso = 1.2e0
g_aniso = λ_aniso * functional.L1Norm()

solver = ADMM(
    f=f,
    g_list=[g_aniso],
    C_list=[C],
    rho_list=[1e1],
    x0=y,
    maxiter=100,
    subproblem_solver=LinearSubproblemSolver(cg_kwargs={"tol": 1e-3, "maxiter": 20}),
    itstat_options={"display": True, "period": 10},
)

solver.solve()
x_aniso = solver.x
print()
# -

# Compute and print the data fidelity.

for x, name in zip((x_iso, x_aniso), ("Isotropic", "Anisotropic")):
    df = f(x)
    print(f"Data fidelity for {name} TV was {df:.2e}")

# Plot results.

# +
plt_args = dict(norm=plot.matplotlib.colors.Normalize(vmin=0, vmax=1.5))
fig, ax = plot.subplots(nrows=2, ncols=2, sharex=True, sharey=True, figsize=(11, 10))
plot.imview(x_gt, title="Ground truth", fig=fig, ax=ax[0, 0], **plt_args)
plot.imview(y, title="Noisy version", fig=fig, ax=ax[0, 1], **plt_args)
plot.imview(x_iso, title="Isotropic TV denoising", fig=fig, ax=ax[1, 0], **plt_args)
plot.imview(x_aniso, title="Anisotropic TV denoising", fig=fig, ax=ax[1, 1], **plt_args)
fig.subplots_adjust(left=0.1, right=0.99, top=0.95, bottom=0.05, wspace=0.2, hspace=0.01)
fig.colorbar(
    ax[0, 0].get_images()[0], ax=ax, location="right", shrink=0.9, pad=0.05, label="Arbitrary Units"
)
fig.suptitle("Denoising comparison")
fig.show()

# zoomed version
fig, ax = plot.subplots(nrows=2, ncols=2, sharex=True, sharey=True, figsize=(11, 10))
plot.imview(x_gt, title="Ground truth", fig=fig, ax=ax[0, 0], **plt_args)
plot.imview(y, title="Noisy version", fig=fig, ax=ax[0, 1], **plt_args)
plot.imview(x_iso,
title="Isotropic TV denoising", fig=fig, ax=ax[1, 0], **plt_args) plot.imview(x_aniso, title="Anisotropic TV denoising", fig=fig, ax=ax[1, 1], **plt_args) ax[0, 0].set_xlim(N // 4, N // 4 + N // 2) ax[0, 0].set_ylim(N // 4, N // 4 + N // 2) fig.subplots_adjust(left=0.1, right=0.99, top=0.95, bottom=0.05, wspace=0.2, hspace=0.01) fig.colorbar( ax[0, 0].get_images()[0], ax=ax, location="right", shrink=0.9, pad=0.05, label="Arbitrary Units" ) fig.suptitle("Denoising comparison (zoomed)") fig.show()
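The difference between the two regularizers can be made concrete with plain numpy on a tiny image (a sketch, independent of SCICO's operators; the boundary handling here replicates the edge rather than exactly matching `FiniteDifference(append=0)`):

```python
import numpy as np

# A tiny diagonal ramp; its horizontal and vertical differences are both
# nonzero at the same pixel, which is exactly where the two norms disagree.
x = np.array([[0.0, 1.0],
              [1.0, 2.0]])

dh = np.diff(x, axis=1, append=x[:, -1:])  # horizontal differences, same shape as x
dv = np.diff(x, axis=0, append=x[-1:, :])  # vertical differences, same shape as x

tv_aniso = np.abs(dh).sum() + np.abs(dv).sum()   # L1 norm of all differences
tv_iso = np.sqrt(dh**2 + dv**2).sum()            # L21 norm: l2 over the pair per pixel, then sum

print(tv_aniso, tv_iso)  # 4.0 vs 2 + sqrt(2): isotropic TV penalizes the diagonal edge less
```

This is why the isotropic result above shows fewer block-like artifacts on edges that are not axis-aligned: diagonal gradients cost less under the L21 norm than under the L1 norm.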
notebooks/denoise_tv_admm.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Appendix 3: Python Libraries Crash Course # ## Part 2: Multi-Dimensional Numpy Arrays # ### Nested Lists project_1 = [-200, 20, 50, 70, 100, 50] project_2 = [-50, 10, 25, 25, 50] project_3 = [-1000, 200, 200, 300, 500, 500, 750, 250] all_proj = [project_1, project_2, project_3] all_proj type(all_proj) len(all_proj) all_proj[0] all_proj[2] all_proj[-1] all_proj[:2] all_proj[-2:] all_proj[0][0] all_proj[:][0] I0 = [] for proj in all_proj: I0.append(proj[0]) I0 # ### 2-dim Numpy Arrays import numpy as np cf1 = [-200, 20, 50, 70, 100, 50] cf2 = [-150, 25, 60, 50, 50, 40] cf3 = [-250, 10, 25, 50, 125, 200] nl = [cf1, cf2, cf3] nl np.array(nl) cfs = np.array(nl) cfs type(cfs) cfs.shape cfs[0] cfs[-1] cfs[1:] cfs[0][0] cfs[0, 0] # ### Slicing 2-dim Numpy Arrays (Part 1) import numpy as np cf1 = [-200, 20, 50, 70, 100, 50] cf2 = [-150, 25, 60, 50, 50, 40] cf3 = [-250, 10, 25, 50, 125, 200] cfs = np.array([cf1, cf2, cf3]) cfs cfs.shape cfs[:, 0] cfs[:, -1] cfs[:, :2] cfs[2, 1:] cfs[-2:, :2] project_1 = [-200, 20, 50, 70, 100, 50] project_2 = [-50, 10, 25, 25, 50] project_3 = [-1000, 200, 200, 300, 500, 500, 750, 250] array = np.array([project_1, project_2, project_3]) array array.shape # + # array[:, 0] # - project_1 = np.array([-200, 20, 50, 70, 100, 50, 0, 0]) project_2 = np.array([-50, 10, 25, 25, 50, 0, 0, 0]) project_3 = np.array([-1000, 200, 200, 300, 500, 500, 750, 250]) projects = np.array([project_1, project_2, project_3]) projects projects.shape projects[:, 0] project_1 = np.array([-200, 20, 50, 70, 100, 50]) project_1.resize(8) project_1 # ### Slicing 2-dim Numpy Arrays (Part 2) import numpy as np cf1 = np.array([-200, 20, 50, 70, 100, 50]) cf2 = np.array([-150, 25, 60, 50, 50, 40]) cf3 = np.array([-250, 10, 25, 50, 125, 200]) cfs = 
np.array([cf1, cf2, cf3]) cfs cf1[[0, 3, -1]] cfs[[0,-1], 0] cfs[0, [0, 2, -1]] # ### Recap: Changing Elements in a Numpy Array / slice import numpy as np cf1 = [-200, 20, 50, 70, 100, 50] cf2 = [-150, 25, 60, 50, 50, 40] cf3 = [-250, 10, 25, 50, 125, 200] cfs = np.array([cf1, cf2, cf3]) cfs cfs[:, 0] = -200 cfs cfs[:, 0] = cfs[:, 0] - 10 cfs cfs[:, 0] = [-200, -150, -250] cfs I0s = cfs[:, 0] I0s I0s[0] = -180 I0s cfs cf5 = cfs[:, -1].copy() cf5 cf5[-1] = 150 cf5 cfs # ### Row- and Column wise operations import numpy as np np.set_printoptions(precision=2, suppress= True) project_1 = [-200, 20, 50, 70, 100, 50, 0, 0] project_2 = [-50, 10, 25, 25, 50, 0, 0, 0] project_3 = [-1000, 200, 200, 300, 500, 500, 750, 250] projects = np.array([project_1, project_2, project_3]) projects np.sum(projects) np.sum(projects, axis = 0) np.sum(projects, axis = 1) np.cumsum(projects, axis = 1) np.mean(projects, axis = 1) np.mean(projects, axis = 0) np.mean(projects)
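The axis convention used by these reductions can be summarized in a quick check (a small sketch): `axis=0` aggregates down the rows (one result per column), `axis=1` across the columns (one result per row).

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])

print(a.sum(axis=0))         # [5 7 9]  - column totals
print(a.sum(axis=1))         # [ 6 15]  - row totals
print(np.cumsum(a, axis=1))  # [[ 1  3  6]
                             #  [ 4  9 15]] - running totals along each row
```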
Appendix3_Materials/Video_Lectures_NBs/NB_02_multidim_arrays.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ### CSS Selectors
# - A way of picking out HTML elements when applying CSS styles to them
# - Ways to select an element:
#     - by tag name
#     - by id value
#     - by class value
#     - by attribute value

# ###### 1. Select by tag name
# - selects the data1 element
# - css-selector : div

# ###### 2. Select by id value
# - selects data2 by its id value
# - css-selector : #txt

# ###### 3. Select by class value
# - selects data3 by its class value
# - css-selector : .no2

# ###### 4. Select by attribute value
# - selects data4 by its attribute values
# - css-selector : [val='d4'], [id='da4']

# ###### 5. Combining selectors
# - selects the span element whose class value is no5
# - css-selector : span.no5

# + language="html"
# <div id='wrap' class='dss'>data1</div>
# <p id='txt' class='dss-txt no1'>data2</p>
# <span class='dss-txt no2' val='d3'>data3</span>
# <span class='no5' id='da4' val='d4'>data4</span>
# -

# ###### 6. The :not selector
# - selects elements while excluding those that match a given condition
# - selects every ds-class element except the data2 element
# - css-selector : .ds:not(.dss2)

# ###### 7. :nth-child
# - selects the nth child element
# - selecting data3:
#     - css-selector : .ds:nth-child(3)
# - if <span>data0</span> is added, the same selector picks data2 (the nth-child(n) position is evaluated first)

# + language="html"
# <div>
#     <span>data0</span>
#     <p class='ds dss1'> data1</p>
#     <p class='ds dss2'> data2</p>
#     <p class='ds dss3'> data3</p>
#     <p class='ds dss4'> data4</p>
# </div>
# -

# ###### 8. Selecting elements by hierarchy
# - select only direct children
#     - .wrap-1 > h5 : selects inner1
# - select all descendant elements
#     - .wrap-1 h5 : selects both inner1 and inner2

# + language="html"
#
# <div class='wrap-1'>
#     <h5>inner1</h5>
#     <div class='wrap-2'>
#         <h5>inner2</h5>
#     </div>
# </div>
# -
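CSS selectors are evaluated by a browser or a crawling library; as a rough stdlib stand-in (illustrative only, not part of the crawling course), ElementTree's limited XPath can mimic the tag-, attribute-, and position-based selections above. Note class matching here is exact-string only, unlike real CSS class selectors.

```python
import xml.etree.ElementTree as ET

html = """
<div>
  <span>data0</span>
  <p class="ds dss1">data1</p>
  <p class="ds dss2">data2</p>
  <p class="ds dss3">data3</p>
</div>
"""
root = ET.fromstring(html)  # root is the <div> element

by_tag = [e.text for e in root.findall(".//p")]                    # like the tag selector 'p'
by_attr = [e.text for e in root.findall(".//p[@class='ds dss2']")] # like [class='ds dss2'] (exact match)
third_child = list(root)[2].text                                   # like :nth-child(3), counting the span

print(by_tag)        # ['data1', 'data2', 'data3']
print(by_attr)       # ['data2']
print(third_child)   # data2
```

For real crawling work a CSS-selector engine (e.g. the `.select()` method of an HTML parsing library) handles class lists and pseudo-classes properly; this sketch only illustrates the selection logic.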
python/Crawling/07_css_selector.ipynb
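In Python crawling code these selectors are typically passed to BeautifulSoup's `select()` or lxml's `cssselect()`. As a dependency-free sketch, the standard library's `xml.etree.ElementTree` can emulate the tag-, id- and attribute-based cases through its limited XPath support — note that `[@class='…']` is an exact string match, not the token-wise match a real CSS class selector performs:

```python
import xml.etree.ElementTree as ET

html = """<div>
  <span>data0</span>
  <p id='txt' class='ds dss2'>data2</p>
  <p class='ds dss3'>data3</p>
</div>"""

root = ET.fromstring(html)

# tag name ("p"), id ("#txt") and attribute ("[class='ds dss3']") selection
by_tag = [e.text for e in root.findall('.//p')]
by_id = root.find(".//p[@id='txt']").text
by_attr = root.find(".//p[@class='ds dss3']").text
print(by_tag, by_id, by_attr)  # ['data2', 'data3'] data2 data3
```

For real pages (which are rarely well-formed XML), stick with BeautifulSoup or lxml; this only illustrates the selection logic.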
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import time from collections import namedtuple import numpy as np import tensorflow as tf with open('Downloads/text.txt', 'r') as f: text=f.read() vocab = sorted(set(text)) vocab_to_int = {c: i for i, c in enumerate(vocab)} int_to_vocab = dict(enumerate(vocab)) encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32) #print(vocab) #print(vocab_to_int) #print(int_to_vocab) #encoded contains the entire text, encoded character-wise. Example: MONICA: 29 56 ...etc where 29 is M and 56 is O #print(encoded) def get_batches(arr, batch_size, n_steps): # Get the number of characters per batch and number of batches we can make chars_per_batch = batch_size * n_steps n_batches = len(arr)//chars_per_batch # Keep only enough characters to make full batches arr = arr[:n_batches * chars_per_batch] # Reshape into batch_size rows arr = arr.reshape((batch_size, -1)) for n in range(0, arr.shape[1], n_steps): # The features x = arr[:, n:n+n_steps] # The targets, shifted by one y_temp = arr[:, n+1:n+n_steps+1] # For the very last batch, y will be one character short at the end of # the sequences which breaks things. To get around this, I'll make an # array of the appropriate size first, of all zeros, then add the targets. # This will introduce a small artifact in the last batch, but it won't matter. 
y = np.zeros(x.shape, dtype=x.dtype) y[:,:y_temp.shape[1]] = y_temp yield x, y #batches = get_batches(encoded, 10, 50) #x,y = next(batches) #print(x,y) # - def build_inputs(batch_size, num_steps): ''' Define placeholders for inputs, targets, and dropout''' # Declare placeholders we'll feed into the graph inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs') targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets') # Keep probability placeholder for drop out layers keep_prob = tf.placeholder(tf.float32, name='keep_prob') return inputs, targets, keep_prob def build_lstm(lstm_size, num_layers, batch_size, keep_prob): ''' Build LSTM cell. lstm_size: Size of the hidden layers in the LSTM cells num_layers: Number of LSTM layers''' #Build the LSTM Cell def build_cell(lstm_size, keep_prob): # Use a basic LSTM cell lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) # Add dropout to the cell drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) return drop # Stack up multiple LSTM layers, for deep learning cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)]) initial_state = cell.zero_state(batch_size, tf.float32) return cell, initial_state def build_output(lstm_output, in_size, out_size): ''' Build a softmax layer, return the softmax output and logits. Arguments --------- x: Input tensor in_size: Size of the input tensor, for example, size of the LSTM cells out_size: Size of this softmax layer ''' # Reshape output so it's a bunch of rows, one row for each step for each sequence. 
# That is, the shape should be batch_size*num_steps rows by lstm_size columns seq_output = tf.concat(lstm_output, axis=1) x = tf.reshape(seq_output, [-1, in_size]) # Connect the RNN outputs to a softmax layer with tf.variable_scope('softmax'): softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1)) softmax_b = tf.Variable(tf.zeros(out_size)) # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch # of rows of logit outputs, one for each step and sequence logits = tf.matmul(x, softmax_w) + softmax_b # Use softmax to get the probabilities for predicted characters out = tf.nn.softmax(logits, name='predictions') return out, logits def build_loss(logits, targets, lstm_size, num_classes): ''' Calculate the loss from the logits and the targets. Arguments --------- logits: Logits from final fully connected layer targets: Targets for supervised learning lstm_size: Number of LSTM hidden units num_classes: Number of classes in targets ''' # One-hot encode targets and reshape to match logits, one row per batch_size per step y_one_hot = tf.one_hot(targets, num_classes) y_reshaped = tf.reshape(y_one_hot, logits.get_shape()) # Softmax cross entropy loss loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped) loss = tf.reduce_mean(loss) return loss def build_optimizer(loss, learning_rate, grad_clip): ''' Build optmizer for training, using gradient clipping. 
Arguments: loss: Network loss learning_rate: Learning rate for optimizer ''' # Optimizer for training, using gradient clipping to control exploding gradients tvars = tf.trainable_variables() grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip) train_op = tf.train.AdamOptimizer(learning_rate) optimizer = train_op.apply_gradients(zip(grads, tvars)) return optimizer class CharRNN: def __init__(self, num_classes, batch_size=64, num_steps=50, lstm_size=128, num_layers=2, learning_rate=0.001, grad_clip=5, sampling=False): # When we're using this network for sampling later, we'll be passing in # one character at a time, so providing an option for that if sampling == True: batch_size, num_steps = 1, 1 else: batch_size, num_steps = batch_size, num_steps tf.reset_default_graph() # Build the input placeholder tensors self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps) # Build the LSTM cell cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob) ### Run the data through the RNN layers # First, one-hot encode the input tokens x_one_hot = tf.one_hot(self.inputs, num_classes) # Run each sequence step through the RNN and collect the outputs outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state) self.final_state = state # Get softmax predictions and logits self.prediction, self.logits = build_output(outputs, lstm_size, num_classes) # Loss and optimizer (with gradient clipping) self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes) self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip) # + epochs = 150 # Print losses every N interations print_every_n = 50 # Save every N iterations save_every_n = 200 model = CharRNN(len(vocab), batch_size=64, num_steps=50, lstm_size=128, num_layers=2, learning_rate=0.001) batch_size=64 num_steps=50 lstm_size=128 num_layers=2 learning_rate=0.001 saver = tf.train.Saver(max_to_keep=100) with tf.Session() as 
sess: sess.run(tf.global_variables_initializer()) # Use the line below to load a checkpoint and resume training #saver.restore(sess, 'checkpoints/______.ckpt') counter = 0 for e in range(epochs): # Train network new_state = sess.run(model.initial_state) loss = 0 for x, y in get_batches(encoded, batch_size, num_steps): counter += 1 start = time.time() feed = {model.inputs: x, model.targets: y, model.keep_prob: 0.6, model.initial_state: new_state} batch_loss, new_state, _ = sess.run([model.loss, model.final_state, model.optimizer], feed_dict=feed) if (counter % print_every_n == 0): end = time.time() print('Epoch: {}/{}... '.format(e+1, epochs), 'Training Step: {}... '.format(counter), 'Training loss: {:.4f}... '.format(batch_loss), '{:.4f} sec/batch'.format((end-start))) if (counter % save_every_n == 0): saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size)) saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size)) # - tf.train.get_checkpoint_state('checkpoints') def pick_top_n(preds, vocab_size, top_n=5): p = np.squeeze(preds) p[np.argsort(p)[:-top_n]] = 0 p = p / np.sum(p) c = np.random.choice(vocab_size, 1, p=p)[0] return c def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "): samples = [c for c in prime] model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True) saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, checkpoint) new_state = sess.run(model.initial_state) for c in prime: x = np.zeros((1, 1)) x[0,0] = vocab_to_int[c] feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.prediction, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) for i in range(n_samples): x[0,0] = c feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.prediction, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) 
samples.append(int_to_vocab[c]) return ''.join(samples) tf.train.latest_checkpoint('checkpoints') checkpoint = tf.train.latest_checkpoint('checkpoints') samp = sample(checkpoint, 10000, lstm_size, len(vocab), prime="Far") print(samp)
proj.ipynb
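The `get_batches` generator above is easiest to sanity-check on a toy sequence. With 25 fake character codes, `batch_size=2` and `n_steps=5` keep only 20 codes, reshape them into 2 rows, and yield two `(2, 5)` x/y pairs where y is x shifted left by one (the final window is zero-padded, as the notebook's comment notes):

```python
import numpy as np

def get_batches(arr, batch_size, n_steps):
    chars_per_batch = batch_size * n_steps
    n_batches = len(arr) // chars_per_batch
    # keep only enough characters for full batches, one sequence per row
    arr = arr[:n_batches * chars_per_batch].reshape((batch_size, -1))
    for n in range(0, arr.shape[1], n_steps):
        x = arr[:, n:n + n_steps]
        y = np.zeros_like(x)                  # pad the final target window
        y_temp = arr[:, n + 1:n + n_steps + 1]
        y[:, :y_temp.shape[1]] = y_temp
        yield x, y

encoded = np.arange(25)                       # stand-in for encoded text
x, y = next(get_batches(encoded, batch_size=2, n_steps=5))
print(x.shape, y.shape)                       # (2, 5) (2, 5)
print(x[0], y[0])                             # [0 1 2 3 4] [1 2 3 4 5]
```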
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Reference: https://www.tensorflow.org/versions/r1.1/get_started/mnist/beginners # Reference: http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html # Reference: https://github.com/marcoancona/DeepExplain/blob/master/examples/mnist_tensorflow.ipynb from tensorflow.examples.tutorials.mnist import input_data import tensorflow as tf # import "Skater" related functions # %matplotlib inline from skater.util.image_ops import load_image, show_image, normalize, add_noise, flip_pixels, image_transformation from skater.util.image_ops import in_between, greater_than, greater_than_or_equal, equal_to from skater.core.local_interpretation.dnni.deep_interpreter import DeepInterpreter from skater.core.visualizer.image_relevance_visualizer import visualize # - # ### Download the MNIST dataset current_level = tf.logging.get_verbosity() tf.logging.set_verbosity(tf.logging.ERROR) mnist = input_data.read_data_sets("/tmp/", one_hot=True) tf.logging.set_verbosity(current_level) # ### Initialize TensorFlow session sess = tf.Session() # ### Initialize variables for the Network # + # Parameters learning_rate = 0.005 num_steps = 2000 batch_size = 128 # Network Parameters n_hidden_1 = 256 # 1st layer number of neurons n_hidden_2 = 256 # 2nd layer number of neurons num_input = 784 # MNIST data input (img shape: 28*28) num_classes = 10 # MNIST total classes (0-9 digits) # tf Graph input as tensors X = tf.placeholder("float", [None, num_input] , name="input") Y = tf.placeholder("float", [None, num_classes], name="output") # weights and biases for each Layer weights = { 'h1': tf.Variable(tf.random_normal([num_input, n_hidden_1], mean=0.0, stddev=0.05)), 'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], mean=0.0, stddev=0.05)), 'out': 
tf.Variable(tf.random_normal([n_hidden_2, num_classes], mean=0.0, stddev=0.05)) } biases = { 'b1': tf.Variable(tf.zeros([n_hidden_1])), 'b2': tf.Variable(tf.zeros([n_hidden_2])), 'out': tf.Variable(tf.zeros([num_classes])) } # - # ### Create a model # + def model(x, act=tf.nn.relu): layer_1 = act(tf.add(tf.matmul(x, weights['h1']), biases['b1'])) layer_2 = act(tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])) out_layer = tf.add(tf.matmul(layer_2, weights['out']), biases['out'], name="absolute_output") return out_layer # Construct model logits = model(X) # - # ### Specify a loss function # + loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits( logits=logits, labels=Y)) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) train_op = optimizer.minimize(loss_op) # - # ### Define Evaluation correct_predictions = tf.equal(tf.argmax(logits, 1), tf.argmax(Y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32)) # ### Train the model # + # Initialize the variables (i.e. 
assign their default value) init = tf.global_variables_initializer() sess.run(init) for step in range(1, num_steps+1): batch_x, batch_y = mnist.train.next_batch(batch_size) # Run optimization op (backprop) sess.run(train_op, feed_dict={X: batch_x, Y: batch_y}) if step % 100 == 0 or step == 1: # Calculate batch loss and accuracy loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x, Y: batch_y}) print("Step {} Minibatch Loss= {:.4f} Training Accuracy= {:.3f}".format(step, loss, acc)) print("success") # - # ### Evaluate performance on Test dataset # + # Calculate accuracy for MNIST test images test_x = mnist.test.images test_y = mnist.test.labels print("Test accuracy:", sess.run(accuracy, feed_dict={X: test_x, Y: test_y})) # - # ### Persist the model for future use # + # Reference: https://stackoverflow.com/questions/33759623/tensorflow-how-to-save-restore-a-model #init = tf.global_variables_initializer()# #sess.run(init) saver = tf.train.Saver() saver.save(sess, './data/models/simple_mnist_mlp/simple_mnist_mlp', global_step=num_steps) # - # ### Interpret with Skater test_idx = 189 input_x_i = test_x[[test_idx]] input_y_i = test_y[test_idx].reshape(1, 10) with DeepInterpreter(session=sess) as di: # 1. Restore the persisted model # 2. 
Retrieve the input tensor from the restored model saver = tf.train.import_meta_graph('./data/models/simple_mnist_mlp/simple_mnist_mlp-2000.meta') saver.restore(sess, tf.train.latest_checkpoint('./data/models/simple_mnist_mlp/')) graph = tf.get_default_graph() X = graph.get_tensor_by_name("input:0") Y = graph.get_tensor_by_name("output:0") target_tensor = model(X) y_class = tf.argmax(target_tensor, 1) xs = input_x_i ys = input_y_i print("X shape: {}".format(xs.shape)) print("Y shape: {}".format(ys.shape)) # Predictions eval_dict = {X: xs, Y: ys} predicted_class = sess.run(y_class, feed_dict=eval_dict) print("Predicted Class: {}".format(predicted_class)) #relevance_scores = di.explain('elrp', target_tensor * ys, X, xs, use_case='image') relevance_scores = { 'elrp': di.explain('elrp', target_tensor * ys, X, xs, use_case='image'), 'integrated gradient': di.explain('ig', target_tensor * ys, X, xs, use_case='image'), } # + # %matplotlib inline import matplotlib.pyplot as plt input_x = [input_x_i.reshape(28, 28)] input_y = input_y_i n_cols = int(len(relevance_scores)) + 1 # +1 to add a column for the original image n_rows = len(input_x) fig, axes = plt.subplots(nrows=n_rows, ncols=n_cols, figsize=(6*n_cols, 6*n_rows)) # set the properties for text font = {'family': 'avenir', 'color': 'white', 'weight': 'normal', 'size': 14, } fig.patch.set_facecolor('black') for index, xi in enumerate(input_x): ax = axes.flatten()[index*n_cols] visualize(xi, cmap='gray', axis=axes[index], alpha_edges=1.0, alpha_bgcolor=1).set_title('Original Image: {}'.format(input_y[index]), fontdict=font) for j, r_type in enumerate(relevance_scores): axj = axes.flatten()[index*n_cols+j+1] # Remember to reshape the relevance_score matrix as a 2-D array # Red: highlights positive relevance # Blue: highlights negative relevance visualize(relevance_scores[r_type][index].reshape(28, 28), original_input_img=xi, axis=axj, percentile=99, alpha_edges=1.0, alpha_bgcolor=0.75).set_title('Relevance Type: 
"{}"'.format(r_type), fontdict=font) # -
examples/image_interpretability/mnist_mlp_tensorflow.ipynb
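The loss used above, `tf.nn.softmax_cross_entropy_with_logits`, fuses the softmax and the cross-entropy for numerical stability. A NumPy sketch of the same computation (row-max subtraction is the log-sum-exp trick; one loss value per example):

```python
import numpy as np

def softmax_xent(logits, one_hot):
    # subtract the row max so exp() cannot overflow
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(one_hot * log_probs).sum(axis=1)  # cross-entropy per example

logits = np.array([[2.0, 2.0], [10.0, 0.0]])
labels = np.array([[1.0, 0.0], [1.0, 0.0]])
losses = softmax_xent(logits, labels)
print(losses)  # ~[log 2 = 0.6931, 4.5e-05]
```

Taking `tf.reduce_mean` over these per-example values gives the scalar `loss_op` trained above.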
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Tutorial: Hand gesture classification with EMG data using Riemannian metrics # In this notebook we are using EMG time series collected by 8 electrodes placed on the arm skin. We are going to show how to: # # - Process this kind of signal into covariance matrices that we can manipulate with geomstats tools. # - Apply ML algorithms on this data to classify four different hand gestures present in the data (Rock, Paper, Scissors, Ok). # - Compare the different methods (Riemannian metrics, projection on the tangent space, Euclidean metric) to each other. # <img src="figures/paper_rock_scissors.png" /> # ## Context # # The data are acquired from somOS-interface: an sEMG armband that allows you to interact via bluetooth with an Android smartphone (you can contact <NAME> (<EMAIL>) or <NAME> (<EMAIL>) for more info on how to make this kind of armband yourself). # # An example application is to record static signs that are linked with different actions (moving a cursor and clicking, sign recognition for command-based personal assistants, ...). In these experiments, we want to evaluate the difference in performance (measured as the accuracy of sign recognition) between three different real-life situations where we change the conditions of training (when the user records signs, or "calibrates" the device) and testing (when the app guesses what sign the user is doing): # # - 1. What is the accuracy when doing sign recognition right after training? # - 2. What is the accuracy when calibrating, removing and replacing the armband at the same position and then testing? # - 3. What is the accuracy when calibrating, removing the armband and giving it to someone else who tests it without calibration?
# # To simulate these situations, we record data from two different users (rr and mg) and in two different sessions (s1 or s2). The user puts the armband on before every session and removes it after every session. # # Quick description of the data: # # - Each row corresponds to one acquisition; there is an acquisition every ~4 ms for 8 electrodes, which corresponds to a 250 Hz acquisition rate. # - The time column is in ms. # - The columns c0 to c7 correspond to the electrical value recorded at each of the 8 electrodes (arbitrary unit). # - The label corresponds to the sign being recorded by the user at this time point ('rest', 'rock', 'paper', 'scissors', or 'ok'). 'rest' corresponds to a rested arm. # - The exp column identifies the user (rr or mg) and the session (s1 or s2). # # Note: Another interesting use case, not explored in this notebook, would be to test the accuracy when calibrating, removing the armband and giving it to someone else who calibrates it on their own arm before testing it. The idea is that transfer learning might help get better results (or faster calibration) than calibrating on one user. # ## Setup # Before starting this tutorial, we set the working directory to be the root of the geomstats repository. In order to have the code working on your machine, you need to change this path to the path of your geomstats repository.
# + import os import subprocess import matplotlib matplotlib.interactive(True) import matplotlib.pyplot as plt geomstats_gitroot_path = subprocess.check_output( ['git', 'rev-parse', '--show-toplevel'], universal_newlines=True) os.chdir(geomstats_gitroot_path[:-1]) print('Working directory: ', os.getcwd()) import geomstats.backend as gs gs.random.seed(2021) # - # ## Parameters N_ELECTRODES = 8 N_SIGNS = 4 # ## The Data # # + import geomstats.datasets.utils as data_utils data = data_utils.load_emg() # - data.head() # + tags=["nbsphinx-thumbnail"] fig, ax = plt.subplots(N_SIGNS, figsize=(20, 20)) label_list = ['rock', 'scissors', 'paper', 'ok'] for i, label_i in enumerate(label_list): sign_df = data[data.label==label_i].iloc[:100] for electrode in range(N_ELECTRODES): ax[i].plot(sign_df.iloc[:, 1 + electrode]) ax[i].title.set_text(label_i) # - # We are removing the sign 'rest' for the rest of the analysis. data = data[data.label != 'rest'] # ### Preprocessing into covariance matrices # + import numpy as np import pandas as pd ### Parameters. N_STEPS = 100 LABEL_MAP = {'rock': 0, 'scissors': 1, 'paper': 2, 'ok': 3} MARGIN = 1000 # - # Unpacking data into arrays for batching data_dict = { 'time': gs.array(data.time), 'raw_data': gs.array(data[['c{}'.format(i) for i in range(N_ELECTRODES)]]), 'label': gs.array(data.label), 'exp': gs.array(data.exp)} # + from geomstats.datasets.prepare_emg_data import TimeSeriesCovariance cov_data = TimeSeriesCovariance(data_dict, N_STEPS, N_ELECTRODES, LABEL_MAP, MARGIN) cov_data.transform() # - # We check that these matrices belong to the space of SPD matrices.
# + import geomstats.geometry.spd_matrices as spd manifold = spd.SPDMatrices(N_ELECTRODES) # - gs.all(manifold.belongs(cov_data.covs)) # #### Covariance plots of the Euclidean average fig, ax = plt.subplots(2, 2, figsize=(20, 10)) for label_i, i in cov_data.label_map.items(): label_ids = np.where(cov_data.labels==i)[0] sign_cov_mat = cov_data.covs[label_ids] mean_cov = np.mean(sign_cov_mat, axis=0) ax[i // 2, i % 2].matshow(mean_cov) ax[i // 2, i % 2].title.set_text(label_i) # Looking at the Euclidean average of the SPD matrices for each sign does not show a striking difference between 3 of our signs (scissors, paper, and ok). The Minimum Distance to Mean (MDM) algorithm will probably perform poorly if using the Euclidean mean here. # #### Covariance plots of the Frechet mean of the affine-invariant metric from geomstats.learning.frechet_mean import FrechetMean from geomstats.geometry.spd_matrices import SPDMetricAffine metric_affine = SPDMetricAffine(N_ELECTRODES) mean_affine = FrechetMean(metric=metric_affine, point_type='matrix') fig, ax = plt.subplots(2, 2, figsize=(20, 10)) for label_i, i in cov_data.label_map.items(): label_ids = np.where(cov_data.labels==i)[0] sign_cov_mat = cov_data.covs[label_ids] mean_affine.fit(X=sign_cov_mat) mean_cov = mean_affine.estimate_ ax[i // 2, i % 2].matshow(mean_cov) ax[i // 2, i % 2].title.set_text(label_i) # We see that the average matrices computed using the affine-invariant metric are now more differentiated from each other and can potentially give better results when using MDM to predict the sign linked to a matrix sample. # ## Sign Classification # We are now going to train some classifiers on those matrices to see how accurately we can discriminate these 4 hand positions. # The baseline accuracy is defined as the accuracy we get by randomly guessing the signs. In our case, the baseline accuracy is 25%.
from sklearn.pipeline import Pipeline from sklearn.linear_model import LogisticRegression from sklearn.model_selection import cross_validate from sklearn.preprocessing import StandardScaler # Hiding the numerous sklearn warnings import warnings warnings.filterwarnings('ignore') # !pip install tensorflow from tensorflow.keras.wrappers.scikit_learn import KerasClassifier import tensorflow as tf # N_EPOCHS is the number of epochs on which to train the MLP. Recommended is ~100 N_EPOCHS = 10 N_FEATURES = int(N_ELECTRODES * (N_ELECTRODES + 1) / 2) # ### A. Test on the same session and user as Training/Calibration # # In this first part we are training our model on the same session that we are testing it on. In real life, it corresponds to a user calibrating his armband right before using it. To do this, we are splitting every session in k-folds, training on $(k-1)$ fold to test on the $k^{th}$ last fold. # class ExpResults: """Class handling the score collection and plotting among the different experiments. """ def __init__(self, exps): self.exps = exps self.results = {} self.exp_ids = {} # Compute the index corresponding to each session only once at initialization. for exp in set(self.exps): self.exp_ids[exp] = np.where(self.exps==exp)[0] def add_result(self, model_name, model, X, y): """Add the results from the cross validated pipeline. For the model 'pipeline', it will add the cross validated results of every session in the model_name entry of self.results. Parameters ---------- model_name : str Name of the pipeline/model that we are adding results from. model : sklearn.pipeline.Pipeline sklearn pipeline that we are evaluating. X : array data that we are ingesting in the pipeline. y : array labels corresponding to the data. 
""" self.results[model_name] = {'fit_time': [], 'score_time': [], 'test_score': [], 'train_score': []} for exp in self.exp_ids.keys(): ids = self.exp_ids[exp] exp_result = cross_validate(model, X[ids], y[ids], return_train_score=True) for key in exp_result.keys(): self.results[model_name][key] += list(exp_result[key]) print('Average training score: {:.4f}, Average test score: {:.4f}'.format( np.mean(self.results[model_name]['train_score']), np.mean(self.results[model_name]['test_score']))) def plot_results(self, title, variables, err_bar=None, save_name=None, xlabel='Model', ylabel='Acc'): """Plot bar plot comparing the different pipelines' results. Compare the results added previously using the 'add_result' method with bar plots. Parameters ---------- title : str Title of the plot. variables : list of str List of the variables to plot (e.g. train_score, test_score,...) err_bar : list of float list of error to use for plotting error bars. If None, std is used by default. save_name : str path to save the plot. If None, plot is not saved. xlabel : str Label of the x-axis. ylabel : str Label of the y-axis. """ ### Some default parameters. w = 0.5 colors = ['b', 'r', 'gray'] ### Reshaping the results for plotting. x_labels = self.results.keys() list_vec = [] for variable in variables: list_vec.append(np.array([self.results[model][variable] for model in x_labels]).transpose()) rand_m1 = lambda size: np.random.random(size) * 2 - 1 ### Plots parameters. label_loc = np.arange(len(x_labels)) center_bar = [w * (i - 0.5) for i in range(len(list_vec))] ### Plots values. avg_vec = [np.nanmean(vec, axis=0) for vec in list_vec] if err_bar is None: err_bar = [np.nanstd(vec, axis=0) for vec in list_vec] ### Plotting the data.
fig, ax = plt.subplots(figsize=(20, 15)) for i, vec in enumerate(list_vec): label_i = variables[i] + ' (n = {})'.format(len(vec)) rects = ax.bar(label_loc + center_bar[i], avg_vec[i], w, label=label_i, yerr=err_bar[i], color=colors[i], alpha=0.6) for j, x in enumerate(label_loc): ax.scatter((x + center_bar[i]) + rand_m1(vec[:, j].size) * w/4, vec[:, j], color=colors[i], edgecolor='k') # Add some text for labels, title and custom x-axis tick labels, etc. ax.set_xlabel(xlabel) ax.set_ylabel(ylabel) ax.set_title(title) ax.set_xticks(label_loc) ax.set_xticklabels(x_labels) ax.legend() plt.legend() ### Saving the figure under the given name. if save_name is not None: plt.savefig(save_name) exp_arr = data.exp.iloc[cov_data.batches] intra_sessions_results = ExpResults(exp_arr) # #### A.0. Using Logistic Regression on the vectorized Matrix (Euclidean Method) # + pipeline = Pipeline( steps=[('standardize', StandardScaler()), ('logreg', LogisticRegression(solver='lbfgs', multi_class='multinomial'))]) intra_sessions_results.add_result(model_name='logreg_eucl', model=pipeline, X=cov_data.covecs, y=cov_data.labels) # - # #### A.1. Using MLP on the vectorized Matrix (Euclidean Method) # + def create_model(weights='initial_weights.hd5', n_features=N_FEATURES, n_signs=N_SIGNS): """Function to create the model, required for using KerasClassifier to wrap a Keras model inside a scikit-learn estimator. We added weight saving/loading to remove the randomness of the weight initialization (for better comparison).
""" model = tf.keras.models.Sequential([ tf.keras.layers.Dense(n_features, activation='relu', input_shape=(n_features,)), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(17, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(n_signs, activation='softmax'), ]) model.compile(loss = 'sparse_categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) if weights is None: model.save_weights('initial_weights.hd5') else: model.load_weights(weights) return model def create_model_covariance(weights='initial_weights.hd5'): return create_model(weights=weights, n_features=N_FEATURES) # - # Use the line below to generate the 'initial_weights.hd5' file generate_weights = create_model(weights=None) # + pipeline = Pipeline( steps=[('standardize', StandardScaler()), ('mlp', KerasClassifier(build_fn=create_model, epochs=N_EPOCHS, verbose=0))]) intra_sessions_results.add_result(model_name='mlp_eucl', model=pipeline, X=cov_data.covecs, y=cov_data.labels) # - # #### A.2. Using Tangent space projection + Logistic Regression # + from geomstats.learning.preprocessing import ToTangentSpace pipeline = Pipeline( steps=[('feature_ext', ToTangentSpace(geometry=metric_affine)), ('standardize', StandardScaler()), ('logreg', LogisticRegression(solver='lbfgs', multi_class='multinomial'))]) intra_sessions_results.add_result(model_name='logreg_affinvariant_tangent', model=pipeline, X=cov_data.covs, y=cov_data.labels) # - # #### A.3. Using Tangent space projection + MLP # + pipeline = Pipeline( steps=[('feature_ext', ToTangentSpace(geometry=metric_affine)), ('standardize', StandardScaler()), ('mlp', KerasClassifier(build_fn=create_model_covariance, epochs=N_EPOCHS, verbose=0))]) intra_sessions_results.add_result(model_name='mlp_affinvariant_tangent', model=pipeline, X=cov_data.covs, y=cov_data.labels) # - # #### A.4. 
Using Euclidean MDM # + from geomstats.learning.mdm import RiemannianMinimumDistanceToMeanClassifier from geomstats.geometry.spd_matrices import SPDMetricEuclidean pipeline = Pipeline( steps=[('clf', RiemannianMinimumDistanceToMeanClassifier( riemannian_metric=SPDMetricEuclidean(n=N_ELECTRODES), n_classes=N_SIGNS))]) intra_sessions_results.add_result(model_name='mdm_eucl', model=pipeline, X=cov_data.covs, y=cov_data.labels) # - # #### A.5. Using Riemannian MDM # + pipeline = Pipeline( steps=[('clf', RiemannianMinimumDistanceToMeanClassifier( riemannian_metric=SPDMetricAffine(n=N_ELECTRODES), n_classes=N_SIGNS))]) intra_sessions_results.add_result(model_name='mdm_affinvariant', model=pipeline, X=cov_data.covs, y=cov_data.labels) # - # #### Summary plots intra_sessions_results.plot_results('intra_sess', ['test_score'])
notebooks/usecase_emg_sign_classification_in_spd_manifold.ipynb
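`TimeSeriesCovariance` above windows the 8-channel signal and turns each window into an 8×8 covariance matrix. A minimal sketch of why those matrices land on the SPD manifold (symmetric, positive eigenvalues) whenever a window holds more samples than channels — random numbers stand in for the EMG signal here:

```python
import numpy as np

rng = np.random.default_rng(0)
window = rng.standard_normal((100, 8))      # 100 samples x 8 electrodes
cov = np.cov(window, rowvar=False)          # 8 x 8 covariance matrix

print(cov.shape)                            # (8, 8)
print(np.allclose(cov, cov.T))              # symmetric
print(np.all(np.linalg.eigvalsh(cov) > 0))  # positive definite
```

This is the property that `manifold.belongs(cov_data.covs)` verifies in the notebook before any Riemannian machinery is applied.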
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="2cZSnB12OSVA" colab_type="code" colab={} from gtts import gTTS # + id="CI_ZkI1ZPTO5" colab_type="code" colab={} texto = 'aquecimento global' # + id="UpoptNbVRQ9b" colab_type="code" colab={} language = 'pt-br' # + id="E9fnzndiRSou" colab_type="code" colab={} fala = gTTS(text = texto, lang = language, slow =False) # + id="7bJs3A5DReli" colab_type="code" colab={} fala.save('aquecimento.mp3') # + id="gy5mGviGYnml" colab_type="code" colab={} with open('star.txt') as livro: texto = livro.read() # + id="FN-KQ1U9Z2Rs" colab_type="code" colab={} texto = livro.read() # + id="e0dmKaYQbFw0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 205} outputId="1eea147a-311f-48f5-8d15-9534420ab13e" texto # + id="QlraJJNEaszG" colab_type="code" colab={} language = 'en' # + id="ThPGtBGRawAm" colab_type="code" colab={} fala = gTTS(text = texto, lang=language, slow=False) # + id="vUZmbTo3a_7I" colab_type="code" colab={} fala.save('star.mp3') # + id="2h32vrgnbK-g" colab_type="code" colab={}
texto.ipynb
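One pitfall in the cells above: after the `with` block exits, `livro` is closed, so the stand-alone `texto = livro.read()` cell raises `ValueError`. A self-contained sketch of the behavior (a temporary file stands in for the notebook's `star.txt`; the file name here is hypothetical):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), 'star_demo.txt')
with open(path, 'w') as f:
    f.write('A long time ago in a galaxy far, far away...')

with open(path) as livro:
    texto = livro.read()   # read everything while the file is still open

print(texto)

reread_failed = False
try:
    livro.read()           # the handle is closed once the block exits
except ValueError:
    reread_failed = True
print('re-read after close raises ValueError:', reread_failed)
```

The fix in the notebook is simply to do all reading inside the `with` block, as the first file-reading cell already does.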
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Friday, May 28, 2021
# ### Programmers - 약수의 개수와 덧셈 (Count of Divisors and Sum) (Python)
# ### Problem : https://programmers.co.kr/learn/courses/30/lessons/77884
# ### Blog : https://somjang.tistory.com/entry/Programmers-%EC%95%BD%EC%88%98%EC%9D%98-%EA%B0%9C%EC%88%98%EC%99%80-%EB%8D%A7%EC%85%88-Python

# ### Solution
def solution(left, right):
    answer = 0
    for num in range(left, right + 1):
        operator = 1
        divisor_num = len([n for n in range(1, num + 1) if num % n == 0])
        if divisor_num % 2 == 1:
            operator = -1
        answer += num * operator
    return answer
DAY 301 ~ 400/DAY379_[Programmers] 약수의 개수와 덧셈 (Python).ipynb
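The list comprehension above scans all of 1..num for every number, making the solution roughly O(N²) over the range. Counting divisors in pairs up to √num gives the same answer faster (equivalently, only perfect squares have an odd divisor count, since their square-root divisor has no distinct partner):

```python
import math

def count_divisors(num):
    # divisors come in pairs (d, num // d); count both unless d * d == num
    cnt = 0
    for d in range(1, math.isqrt(num) + 1):
        if num % d == 0:
            cnt += 1 if d * d == num else 2
    return cnt

def solution(left, right):
    # add numbers with an even divisor count, subtract the rest
    return sum(n if count_divisors(n) % 2 == 0 else -n
               for n in range(left, right + 1))

print(solution(13, 17))  # 43
```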
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: 'Python 3.8.10 64-bit (''PyTorchTest'': conda)'
#     name: python3
# ---

# +
import torch
import torch.nn as nn
import numpy as np
from numpy.random import default_rng


def set_seed(seed=42069):
    # set a random seed for reproducibility
    # (the original overrode the `seed` argument with 42069 inside the body,
    # so the parameter was silently ignored; the fixed default keeps the
    # original behavior)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)


# +
def init_weight():
    # y0 dim: (1, 2)
    w1 = np.array([[0.2, 0.3], [0.4, 0.2], [0.3, 0.4]], dtype='f')
    # y1 dim: (1, 3)
    w2 = np.array([[0.2, 0.3, 0.4], [0.4, 0.2, 0.3], [0.3, 0.4, 0.2]], dtype='f')
    # y2 dim: (1, 3)
    w3 = np.array([[0.2, 0.3, 0.4], [0.4, 0.2, 0.3]], dtype='f')
    # y3 dim: (1, 2)
    return w1, w2, w3


class CMlp(nn.Module):
    def __init__(self, encrypt=False):
        super(CMlp, self).__init__()
        w1, w2, w3 = init_weight()
        self.fc1 = nn.Linear(2, 3, False)
        self.fc1.weight.data = torch.from_numpy(w1)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Linear(3, 3, False)
        self.fc2.weight.data = torch.from_numpy(w2)
        self.relu2 = nn.ReLU()
        self.fc3 = nn.Linear(3, 2, False)
        self.fc3.weight.data = torch.from_numpy(w3)
        if encrypt:
            rng = default_rng(0)
            self.r1 = np.absolute(rng.standard_normal((3, 1), dtype='f'))
            self.r2 = np.absolute(rng.standard_normal((3, 1), dtype='f'))
            self.r3 = np.absolute(rng.standard_normal((2, 1), dtype='f'))
            self.fc1.weight.data = torch.from_numpy(w1 * self.r1)
            self.fc2.weight.data = torch.from_numpy(w2 * self.r2 / self.r1.transpose())
            self.fc3.weight.data = torch.from_numpy(w3 / self.r2.transpose() + self.r3)
        self.y2 = None
        self.y3 = None
        self.alpha = None

    def forward(self, x):
        y1 = self.fc1(x)
        self.y2 = self.fc2(self.relu1(y1))
        self.alpha = self.y2.sum()
        self.y3 = self.fc3(self.relu2(self.y2))
        self.y3.retain_grad()
        return self.y3


# + pycharm={"name": "#%%\n"}
# setup gpu or cpu
# device = 'cuda' if torch.cuda.is_available() else 'cpu'
device = 'cpu'

r = (0.2, 0.4, 0.8)
x = torch.tensor([[0.2, 0.3]], device=device)
y_hat = torch.tensor([[0.5, 0.5]], device=device)

net = CMlp().to(device)
print('----------- plaintext weight ---------------')
for p in net.parameters():
    print(p.data)

y = net(x)
print('y: ', y)
criterion = nn.MSELoss()
loss = criterion(y, y_hat)
loss.backward(retain_graph=True)
print('----------- plaintext grad -----------------')
for p in net.parameters():
    print(p.grad)
w_gradlist = [p.grad for p in net.parameters()]

print('----------- ciphertext weight ---------------')
net_c = CMlp(encrypt=True).to(device)
for p in net_c.parameters():
    print(p.data)

y_c = net_c(x)
c_loss = criterion(y_c, y_hat)
c_loss.backward(retain_graph=True)
print('----------- ciphertext grad ---------------')
for p in net_c.parameters():
    print(p.grad)
# clone so the later zero_grad()/backward() calls on net_c don't clobber
# these saved gradients (p.grad is a live reference)
c_w_gradlist = [p.grad.detach().clone() for p in net_c.parameters()]

r_a = torch.from_numpy(net_c.r3).to(device)
print('Get yc: ', y_c)
print('Get yc from y: ', y + net_c.alpha * r_a.t())

print('Ly derivative')
print(y - y_hat)
print(y.grad)
print('Lhaty derivative')
print(y_c - y_hat)
print(y_c.grad)
y_c_grad = y_c.grad

# + [markdown] pycharm={"name": "#%% md\n"}
# $ \frac{\partial \widehat{L}}{\partial \widehat{W}^{(l)}} $
# -

optim = torch.optim.Optimizer(net.parameters(), {})
optim_c = torch.optim.Optimizer(net_c.parameters(), {})

# ### set grad to zero

for p in net_c.parameters():
    print(p.grad)
optim_c.zero_grad()
for p in net_c.parameters():
    print(p.grad)

# ## get $\frac{\partial \alpha}{\partial \widehat{w}^{(l)}}$

net_c.alpha.backward(retain_graph=True)
alpha_gradlist = [p.grad.detach().clone() for p in net_c.parameters()]
for p in net_c.parameters():
    print(p.grad)

# ### set grad to zero and get $\mathbf{r}^t \frac{\partial \widehat{y}^{L}}{\partial \widehat{w}^{(l)}}$

optim_c.zero_grad()
r = r_a.t()
for p in net_c.parameters():
    print(p.grad)
y_c.backward(r, retain_graph=True)
c_yw_gradlist = [p.grad for p in net_c.parameters()]

print(c_w_gradlist[2])
print(w_gradlist[2])
print(c_yw_gradlist[2])

# Compute $\frac{\partial \widehat{L}}{\partial \widehat{W}}$
#
# \begin{equation}
# \frac{\partial \widehat{L}}{\partial \widehat{W}} = \frac{1}{R^{(l)}} \circ \frac{\partial L}{\partial W} + r^T \cdot \alpha \frac{\partial \widehat{y}^{(L)}}{\partial \widehat{W}} + r^T \cdot (\frac{\partial \widehat{L}}{\partial \widehat{y}^{(L)}})^{T} \frac{\partial \alpha}{\partial \widehat{W}} - r^T r \alpha \frac{\partial \alpha}{\partial \widehat{W}}
# \end{equation}
#
# ## Layer3 c_w3 grad with simple computing
#
# \begin{equation}
# \frac{\partial \widehat{L}}{\partial \widehat{W}} = \frac{1}{R^{(l)}} \circ \frac{\partial L}{\partial W} + r^T \cdot \alpha \frac{\partial \widehat{y}^{(L)}}{\partial \widehat{W}}
# \end{equation}
#

print(w_gradlist[2] * net_c.r2.transpose() + c_yw_gradlist[2] * net_c.alpha)
print(y_c_grad.reshape(1, -1))
t = r.matmul(y_c_grad.reshape(1, -1).t())
print(w_gradlist[1])
print(alpha_gradlist[1])

# ## Layer2 c_w2 grad

print(w_gradlist[1] / net_c.r2 * net_c.r1.transpose() + c_yw_gradlist[1] * net_c.alpha + alpha_gradlist[1] * t - r.matmul(r.t()) * net_c.alpha * alpha_gradlist[1])

# ## Layer1 c_w1 grad

print(w_gradlist[0] / net_c.r1 + c_yw_gradlist[0] * net_c.alpha + alpha_gradlist[0] * t - r.matmul(r.t()) * net_c.alpha * alpha_gradlist[0])
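The masking scheme above leans on one basic fact: a strictly positive, row-wise multiplicative mask commutes with ReLU, so a masked network can compute the same function while each layer's gradient picks up the inverse mask — the $\frac{1}{R^{(l)}} \circ \frac{\partial L}{\partial W}$ term in the recovery formula. This is a minimal NumPy sanity check of that fact with manual backprop; the shapes, the loss $L = \tfrac{1}{2}\lVert y - \hat{y}\rVert^2$, and the single mask `r` here are illustrative stand-ins, not the notebook's `r1`/`r2`/`r3` construction.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 1))
y_hat = rng.standard_normal((2, 1))
w1 = rng.standard_normal((3, 2))
w2 = rng.standard_normal((2, 3))
r = np.abs(rng.standard_normal((3, 1))) + 0.1   # strictly positive row mask

def forward_backward(w1, w2):
    # two-layer ReLU net, loss L = 0.5 * ||y - y_hat||^2, manual backprop
    h_pre = w1 @ x                   # (3, 1)
    h = np.maximum(h_pre, 0.0)       # ReLU
    y = w2 @ h                       # (2, 1)
    dy = y - y_hat                   # dL/dy
    g_w2 = dy @ h.T                  # dL/dW2, (2, 3)
    dh = (w2.T @ dy) * (h_pre > 0)   # backprop through ReLU
    g_w1 = dh @ x.T                  # dL/dW1, (3, 2)
    return y, g_w1, g_w2

y, g1, g2 = forward_backward(w1, w2)
# masked weights: W1_hat = r ∘ W1 (rows scaled up), W2_hat = W2 / r^T
# (columns scaled down) -- the scalings cancel through the positive ReLU
y_m, g1_m, g2_m = forward_backward(r * w1, w2 / r.T)

assert np.allclose(y, y_m)        # masked net computes the identical output
assert np.allclose(g1_m, g1 / r)  # layer-1 grad is scaled by 1/R row-wise
assert np.allclose(g2_m, g2 * r.T)  # layer-2 grad is scaled by R column-wise
```

The same cancellation is what lets `CMlp(encrypt=True)` chain `r2 / r1.T` across layers; only the additive `r3` term breaks the exact function equality and has to be subtracted out via `alpha`.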
test/crypto_mlp.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="e7DCzf-EHzL_"
# # CLIPDraw
# Synthesize drawings to match a text prompt!
#
# ![](https://kvfrans.com/content/images/2021/06/Screen-Shot-2021-06-10-at-8.47.23-PM.png)
#
# > This work presents CLIPDraw, an algorithm that synthesizes novel drawings based on natural language input. CLIPDraw does not require any training; rather a pre-trained CLIP language-image encoder is used as a metric for maximizing similarity between the given description and a generated drawing. Crucially, CLIPDraw operates over vector strokes rather than pixel images, a constraint that biases drawings towards simpler human-recognizable shapes. Results compare between CLIPDraw and other synthesis-through-optimization methods, as well as highlight various interesting behaviors of CLIPDraw, such as satisfying ambiguous text in multiple ways, reliably producing drawings in diverse artistic styles, and scaling from simple to complex visual representations as stroke count is increased.
#
# This Colab notebook goes along with the paper: [CLIPDraw: Exploring Text-to-Drawing Synthesis through Language-Image Encoders](https://arxiv.org/abs/2106.14843)
#
# by **<NAME>**, <NAME>, <NAME>
#
# Read the blog post for cool results and analysis! [https://kvfrans.com/clipdraw-exploring-text-to-drawing-synthesis/](https://kvfrans.com/clipdraw-exploring-text-to-drawing-synthesis/)
#
# Feel free to tweet me any cool creations, [@kvfrans](https://twitter.com/kvfrans)
#
# Code adapted from diffvg: https://github.com/BachiLi/diffvg/blob/master/apps/painterly_rendering.py
#
# ---
#
# **STEPS:**
#
# 1. Click "Connect" in the top right corner
# 2. Runtime -> Change runtime type -> Hardware accelerator -> GPU
# 2. Click the run button on "Pre Installation".
# This will install dependencies, it may take a while.
# 2. **Important:** Runtime -> Restart Runtime
# 3. Run the "Imports and Notebook Utilities" and "Load CLIP" sections.
#
# 5. "Curve Optimizer" will synthesize a drawing to match your text. You can edit the text prompt at the top of the code block.
# 6. "Video Renderer" can create videos that show the optimization process, and videos that render a drawing stroke-by-stroke.

# + id="5dyyH781qzIC" cellView="form"
#@title Pre Installation {vertical-output: true}
import subprocess

CUDA_version = [s for s in subprocess.check_output(["nvcc", "--version"]).decode("UTF-8").split(", ") if s.startswith("release")][0].split(" ")[-1]
print("CUDA version:", CUDA_version)

if CUDA_version == "10.0":
    torch_version_suffix = "+cu100"
elif CUDA_version == "10.1":
    torch_version_suffix = "+cu101"
elif CUDA_version == "10.2":
    torch_version_suffix = ""
else:
    torch_version_suffix = "+cu110"

# # !pip install torch==1.7.1{torch_version_suffix} torchvision==0.8.2{torch_version_suffix} -f https://download.pytorch.org/whl/torch_stable.html ftfy regex
# %cd /content/
# !pip install svgwrite
# !pip install svgpathtools
# !pip install cssutils
# !pip install numba
# !pip install torch-tools
# !pip install visdom
# !git clone https://github.com/BachiLi/diffvg
# %cd diffvg
# # !ls
# !git submodule update --init --recursive
# !python setup.py install
# !pip install ftfy regex tqdm
# !pip install git+https://github.com/openai/CLIP.git --no-deps

# + colab={"base_uri": "https://localhost:8080/"} id="hjt9T3ARukAg" cellView="form" outputId="add047b2-7848-40d8-f7de-589706feab3e"
#@title Imports and Notebook Utilities {vertical-output: true}
# %tensorflow_version 2.x

import os
import io
import PIL.Image, PIL.ImageDraw
import base64
import zipfile
import json
import requests
import numpy as np
import matplotlib.pylab as pl
import glob

from IPython.display import Image, HTML, clear_output
from tqdm import tqdm_notebook, tnrange

os.environ['FFMPEG_BINARY'] = 'ffmpeg'
import moviepy.editor as mvp
from moviepy.video.io.ffmpeg_writer import FFMPEG_VideoWriter


def imread(url, max_size=None, mode=None):
    if url.startswith(('http:', 'https:')):
        r = requests.get(url)
        f = io.BytesIO(r.content)
    else:
        f = url
    img = PIL.Image.open(f)
    if max_size is not None:
        img = img.resize((max_size, max_size))
    if mode is not None:
        img = img.convert(mode)
    img = np.float32(img)/255.0
    return img

def np2pil(a):
    if a.dtype in [np.float32, np.float64]:
        a = np.uint8(np.clip(a, 0, 1)*255)
    return PIL.Image.fromarray(a)

def imwrite(f, a, fmt=None):
    a = np.asarray(a)
    if isinstance(f, str):
        fmt = f.rsplit('.', 1)[-1].lower()
        if fmt == 'jpg':
            fmt = 'jpeg'
        f = open(f, 'wb')
    np2pil(a).save(f, fmt, quality=95)

def imencode(a, fmt='jpeg'):
    a = np.asarray(a)
    if len(a.shape) == 3 and a.shape[-1] == 4:
        fmt = 'png'
    f = io.BytesIO()
    imwrite(f, a, fmt)
    return f.getvalue()

def im2url(a, fmt='jpeg'):
    encoded = imencode(a, fmt)
    base64_byte_string = base64.b64encode(encoded).decode('ascii')
    return 'data:image/' + fmt.upper() + ';base64,' + base64_byte_string

def imshow(a, fmt='jpeg'):
    display(Image(data=imencode(a, fmt)))

def tile2d(a, w=None):
    a = np.asarray(a)
    if w is None:
        w = int(np.ceil(np.sqrt(len(a))))
    th, tw = a.shape[1:3]
    pad = (w-len(a))%w
    a = np.pad(a, [(0, pad)]+[(0, 0)]*(a.ndim-1), 'constant')
    h = len(a)//w
    a = a.reshape([h, w]+list(a.shape[1:]))
    a = np.rollaxis(a, 2, 1).reshape([th*h, tw*w]+list(a.shape[4:]))
    return a

from torchvision import utils

def show_img(img):
    img = np.transpose(img, (1, 2, 0))
    img = np.clip(img, 0, 1)
    img = np.uint8(img * 254)
    # img = np.repeat(img, 4, axis=0)
    # img = np.repeat(img, 4, axis=1)
    pimg = PIL.Image.fromarray(img, mode="RGB")
    imshow(pimg)

def zoom(img, scale=4):
    img = np.repeat(img, scale, 0)
    img = np.repeat(img, scale, 1)
    return img

class VideoWriter:
    def __init__(self, filename='_autoplay.mp4', fps=30.0, **kw):
        self.writer = None
        self.params = dict(filename=filename, fps=fps, **kw)

    def add(self, img):
        img =
np.asarray(img) if self.writer is None: h, w = img.shape[:2] self.writer = FFMPEG_VideoWriter(size=(w, h), **self.params) if img.dtype in [np.float32, np.float64]: img = np.uint8(img.clip(0, 1)*255) if len(img.shape) == 2: img = np.repeat(img[..., None], 3, -1) self.writer.write_frame(img) def close(self): if self.writer: self.writer.close() def __enter__(self): return self def __exit__(self, *kw): self.close() if self.params['filename'] == '_autoplay.mp4': self.show() def show(self, **kw): self.close() fn = self.params['filename'] display(mvp.ipython_display(fn, **kw)) # !nvidia-smi -L import numpy as np import torch import os # torch.set_default_tensor_type('torch.cuda.FloatTensor') print("Torch version:", torch.__version__) # # !pip install DALL-E # + cellView="form" colab={"base_uri": "https://localhost:8080/"} id="Z-Wt7UjTi8Le" outputId="73f1514b-5e05-4275-a54a-789ba6213998" #@title Load CLIP {vertical-output: true} # os.environ['CUDA_LAUNCH_BLOCKING'] = '1' import os import clip import torch import torch.nn.functional as F import torchvision from torchvision import transforms from torchvision.datasets import CIFAR100 # Load the model device = torch.device('cuda') model, preprocess = clip.load('ViT-B/32', device, jit=False) nouns = "aardvark abyssinian accelerator accordion account accountant acknowledgment acoustic acrylic act action active activity actor actress adapter addition address adjustment adult advantage advertisement advice afghanistan africa aftermath afternoon aftershave afterthought age agenda agreement air airbus airmail airplane airport airship alarm albatross alcohol algebra algeria alibi alley alligator alloy almanac alphabet alto aluminium aluminum ambulance america amount amusement anatomy anethesiologist anger angle angora animal anime ankle answer ant antarctica anteater antelope anthony anthropology apartment apology apparatus apparel appeal appendix apple appliance approval april aquarius arch archaeology archeology archer architecture 
area argentina argument aries arithmetic arm armadillo armchair armenian army arrow art ash ashtray asia asparagus asphalt asterisk astronomy athlete atm atom attack attempt attention attic attraction august aunt australia australian author authorisation authority authorization avenue babies baboon baby back backbone bacon badge badger bag bagel bagpipe bail bait baker bakery balance balinese ball balloon bamboo banana band bandana bangladesh bangle banjo bank bankbook banker bar barbara barber barge baritone barometer base baseball basement basin basket basketball bass bassoon bat bath bathroom bathtub battery battle bay beach bead beam bean bear beard beast beat beautician beauty beaver bed bedroom bee beech beef beer beet beetle beggar beginner begonia behavior belgian belief believe bell belt bench bengal beret berry bestseller betty bibliography bicycle bike bill billboard biology biplane birch bird birth birthday bit bite black bladder blade blanket blinker blizzard block blood blouse blow blowgun blue board boat bobcat body bolt bomb bomber bone bongo bonsai book bookcase booklet boot border botany bottle bottom boundary bow bowl bowling box boy bra brace bracket brain brake branch brand brandy brass brazil bread break breakfast breath brian brick bridge british broccoli brochure broker bronze brother brother-in-law brow brown brush bubble bucket budget buffer buffet bugle building bulb bull bulldozer bumper bun burglar burma burn burst bus bush business butane butcher butter button buzzard cabbage cabinet cable cactus cafe cake calculator calculus calendar calf call camel camera camp can canada canadian cancer candle cannon canoe canvas cap capital cappelletti capricorn captain caption car caravan carbon card cardboard cardigan care carnation carol carp carpenter carriage carrot cart cartoon case cast castanet cat catamaran caterpillar cathedral catsup cattle cauliflower cause caution cave c-clamp cd ceiling celery celeste cell cellar cello celsius cement 
cemetery cent centimeter century ceramic cereal certification chain chair chalk chance change channel character chard charles chauffeur check cheek cheese cheetah chef chemistry cheque cherries cherry chess chest chick chicken chicory chief child children chill chime chimpanzee chin china chinese chive chocolate chord christmas christopher chronometer church cicada cinema circle circulation cirrus citizenship city clam clarinet class claus clave clef clerk click client climb clipper cloakroom clock close closet cloth cloud cloudy clover club clutch coach coal coast coat cobweb cockroach cocktail cocoa cod coffee coil coin coke cold collar college collision colombia colon colony color colt column columnist comb comfort comic comma command commission committee community company comparison competition competitor composer composition computer condition condor cone confirmation conga congo conifer connection consonant continent control cook cooking copper copy copyright cord cork cormorant corn cornet correspondent cost cotton couch cougar cough country course court cousin cover cow cowbell crab crack cracker craftsman crate crawdad crayfish crayon cream creator creature credit creditor creek crib cricket crime criminal crocodile crocus croissant crook crop cross crow crowd crown crush cry cub cuban cucumber cultivator cup cupboard cupcake curler currency current curtain curve cushion custard customer cut cuticle cycle cyclone cylinder cymbal dad daffodil dahlia daisy damage dance dancer danger daniel dash dashboard database date daughter david day dead deadline deal death deborah debt debtor decade december decimal decision decrease dedication deer defense deficit degree delete delivery den denim dentist deodorant department deposit description desert design desire desk dessert destruction detail detective development dew diamond diaphragm dibble dictionary dietician difference digestion digger digital dill dime dimple dinghy dinner dinosaur diploma dipstick direction 
dirt disadvantage discovery discussion disease disgust dish distance distribution distributor diving division divorced dock doctor dog dogsled doll dollar dolphin domain donald donkey donna door dorothy double doubt downtown dragon dragonfly drain drake drama draw drawbridge drawer dream dredger dress dresser dressing drill drink drive driver driving drizzle drop drug drum dry dryer duck duckling dugout dungeon dust eagle ear earth earthquake ease east edge edger editor editorial education edward eel effect egg eggnog eggplant egypt eight elbow element elephant elizabeth ellipse emery employee employer encyclopedia end enemy energy engine engineer engineering english enquiry entrance environment epoch epoxy equinox equipment era error estimate ethernet ethiopia euphonium europe evening event examination example exchange exclamation exhaust ex-husband existence expansion experience expert explanation ex-wife eye eyebrow eyelash eyeliner face facilities fact factory fahrenheit fairies fall family fan fang farm farmer fat father father-in-law faucet fear feast feather feature february fedelini feedback feeling feet felony female fender ferry ferryboat fertilizer fiber fiberglass fibre fiction field fifth fight fighter file find fine finger fir fire fired fireman fireplace firewall fish fisherman flag flame flare flat flavor flax flesh flight flock flood floor flower flugelhorn flute fly foam fog fold font food foot football footnote force forecast forehead forest forgery fork form format fortnight foundation fountain fowl fox foxglove fragrance frame france freckle freeze freezer freighter french freon friction friday fridge friend frog front frost frown fruit fuel fur furniture galley gallon game gander garage garden garlic gas gasoline gate gateway gauge gazelle gear gearshift geese gemini gender geography geology geometry george geranium german germany ghana ghost giant giraffe girdle girl gladiolus glass glider gliding glockenspiel glove glue goal goat gold 
goldfish golf gondola gong good-bye goose gore-tex gorilla gosling government governor grade grain gram granddaughter grandfather grandmother grandson grape graphic grass grasshopper gray grease great-grandfather great-grandmother greece greek green grenade grey grill grip ground group grouse growth guarantee guatemalan guide guilty guitar gum gun gym gymnast hacksaw hail hair haircut half-brother half-sister halibut hall hallway hamburger hammer hamster hand handball handicap handle handsaw harbor hardboard hardcover hardhat hardware harmonica harmony harp hat hate hawk head headlight headline health hearing heart heat heaven hedge height helen helicopter helium hell helmet help hemp hen heron herring hexagon hill himalayan hip hippopotamus history hobbies hockey hoe hole holiday home honey hood hook hope horn horse hose hospital hot hour hourglass house hovercraft hub hubcap humidity humor hurricane hyacinth hydrant hydrofoil hydrogen hyena hygienic ice icebreaker icicle icon idea ikebana illegal imprisonment improvement impulse inch income increase index india indonesia industry ink innocent input insect instruction instrument insulation insurance interactive interest internet interviewer intestine invention inventory invoice iran iraq iris iron island israel italian italy jacket jaguar jail jam james january japan japanese jar jasmine jason jaw jeans jeep jeff jelly jellyfish jennifer jet jewel jogging john join joke joseph journey judge judo juice july jumbo jump jumper june jury justice jute kale kamikaze kangaroo karate karen kayak kendo kenneth kenya ketchup kettle kettledrum kevin key keyboard keyboarding kick kidney kilogram kilometer kimberly kiss kitchen kite kitten kitty knee knickers knife knight knot knowledge kohlrabi korean laborer lace ladybug lake lamb lamp lan land landmine language larch lasagna latency latex lathe laugh laundry laura law lawyer layer lead leaf learning leather leek leg legal lemonade lentil leo leopard letter lettuce level 
libra library license lier lift light lightning lilac lily limit linda line linen link lion lip lipstick liquid liquor lisa list literature litter liver lizard llama loaf loan lobster lock locket locust look loss lotion love low lumber lunch lunchroom lung lunge lute luttuce lycra lynx lyocell lyre lyric macaroni machine macrame magazine magic magician maid mail mailbox mailman makeup malaysia male mall mallet man manager mandolin manicure manx map maple maraca marble march margaret margin maria marimba mark mark market married mary mascara mask mass match math mattock may mayonnaise meal measure meat mechanic medicine meeting melody memory men menu mercury message metal meteorology meter methane mexican mexico mice michael michelle microwave middle mile milk milkshake millennium millimeter millisecond mimosa mind mine minibus mini-skirt minister mint minute mirror missile mist mistake mitten moat modem mole mom monday money monkey month moon morning morocco mosque mosquito mother mother-in-law motion motorboat motorcycle mountain mouse moustache mouth move multi-hop multimedia muscle museum music musician mustard myanmar nail name nancy napkin narcissus nation neck need needle neon nepal nephew nerve nest net network news newsprint newsstand nic nickel niece nigeria night nitrogen node noise noodle north north america north korea norwegian nose note notebook notify novel november number numeric nurse nut nylon oak oatmeal objective oboe observation occupation ocean ocelot octagon octave october octopus odometer offence offer office oil okra olive onion open opera operation ophthalmologist opinion option orange orchestra orchid order organ organisation organization ornament ostrich otter ounce output outrigger oval oven overcoat owl owner ox oxygen oyster package packet page pail pain paint pair pajama pakistan palm pamphlet pan pancake pancreas panda pansy panther panties pantry pants panty pantyhose paper paperback parade parallelogram parcel parent parentheses 
park parrot parsnip part particle partner partridge party passbook passenger passive pasta paste pastor pastry patch path patient patio patricia paul payment pea peace peak peanut pear pedestrian pediatrician peen peer-to-peer pelican pen penalty pencil pendulum pentagon peony pepper perch perfume period periodical peripheral permission persian person peru pest pet pharmacist pheasant philippines philosophy phone physician piano piccolo pickle picture pie pig pigeon pike pillow pilot pimple pin pine ping pink pint pipe pisces pizza place plain plane planet plant plantation plaster plasterboard plastic plate platinum play playground playroom pleasure plier plot plough plow plywood pocket poet point poison poland police policeman polish politician pollution polo polyester pond popcorn poppy population porch porcupine port porter position possibility postage postbox pot potato poultry pound powder power precipitation preface prepared pressure price priest print printer prison probation process processing produce product production professor profit promotion propane property prose prosecution protest protocol pruner psychiatrist psychology ptarmigan puffin pull puma pump pumpkin punch punishment puppy purchase purple purpose push pvc pyjama pyramid quail quality quart quarter quartz queen question quicksand quiet quill quilt quince quit quiver quotation rabbi rabbit racing radar radiator radio radish raft rail railway rain rainbow raincoat rainstorm rake ramie random range rat rate raven ravioli ray rayon reaction reading reason receipt recess record recorder rectangle red reduction refrigerator refund regret reindeer relation relative religion relish reminder repair replace report representative request resolution respect responsibility rest restaurant result retailer revolve revolver reward rhinoceros rhythm rice richard riddle rifle ring rise risk river riverbed road roadway roast robert robin rock rocket rod roll romania romanian ronald roof room rooster root rose 
rotate route router rowboat rub rubber rugby rule run russia russian rutabaga ruth sack sagittarius sail sailboat sailor salad salary sale salesman salmon salt sampan samurai sand sandra sandwich santa sarah sardine satin saturday sauce saudi arabia sausage save saw saxophone scale scallion scanner scarecrow scarf scene scent schedule school science scissors scooter scorpio scorpion scraper screen screw screwdriver sea seagull seal seaplane search seashore season seat second secretary secure security seed seeder segment select selection self semicircle semicolon sense sentence separated september servant server session sex shade shadow shake shallot shame shampoo shape share shark sharon shears sheep sheet shelf shell shield shingle ship shirt shock shoe shoemaker shop shorts shoulder shovel show shrimp shrine siamese siberian side sideboard sidecar sidewalk sign signature silica silk silver sing singer single sink sister sister-in-law size skate skiing skill skin skirt sky slash slave sled sleep sleet slice slime slip slipper slope smash smell smile smoke snail snake sneeze snow snowboarding snowflake snowman snowplow snowstorm soap soccer society sociology sock soda sofa softball softdrink software soil soldier son song soprano sort sound soup sousaphone south africa south america south korea soy soybean space spade spaghetti spain spandex spark sparrow spear specialist speedboat sphere sphynx spider spike spinach spleen sponge spoon spot spring sprout spruce spy square squash squid squirrel stage staircase stamp star start starter state statement station statistic steam steel stem step step-aunt step-brother stepdaughter step-daughter step-father step-grandfather step-grandmother stepmother step-mother step-sister stepson step-son step-uncle steven stew stick stinger stitch stock stocking stomach stone stool stop stopsign stopwatch store storm story stove stranger straw stream street streetcar stretch string structure study sturgeon submarine substance subway 
success sudan suede sugar suggestion suit summer sun sunday sundial sunflower sunshine supermarket supply support surfboard surgeon surname surprise susan sushi swallow swamp swan sweater sweatshirt sweatshop swedish sweets swim swimming swing swiss switch sword swordfish sycamore syria syrup system table tablecloth tabletop tachometer tadpole tail tailor taiwan talk tank tanker tanzania target taste taurus tax taxi taxicab tea teacher teaching team technician teeth television teller temper temperature temple tempo tendency tennis tenor tent territory test text textbook texture thailand theater theory thermometer thing thistle thomas thought thread thrill throat throne thumb thunder thunderstorm thursday ticket tie tiger tights tile timbale time timer timpani tin tip tire titanium title toad toast toe toenail toilet tomato tom-tom ton tongue tooth toothbrush toothpaste top tornado tortellini tortoise touch tower town toy tractor trade traffic trail train tramp transaction transmission transport trapezoid tray treatment tree trial triangle trick trigonometry trip trombone trouble trousers trout trowel truck trumpet trunk t-shirt tsunami tub tuba tuesday tugboat tulip tuna tune turkey turkey turkish turn turnip turnover turret turtle tv twig twilight twine twist typhoon tyvek uganda ukraine ukrainian umbrella uncle underclothes underpants undershirt underwear unit united kingdom unshielded use utensil uzbekistan vacation vacuum valley value van vase vault vegetable vegetarian veil vein velvet venezuela venezuelan verdict vermicelli verse vessel vest veterinarian vibraphone vietnam view vinyl viola violet violin virgo viscose vise vision visitor voice volcano volleyball voyage vulture waiter waitress walk wall wallaby wallet walrus war warm wash washer wasp waste watch watchmaker water waterfall wave wax way wealth weapon weasel weather wedge wednesday weed weeder week weight whale wheel whip whiskey whistle white wholesaler whorl wilderness william willow wind 
windchime window windscreen windshield wine wing winter wire wish witch withdrawal witness wolf woman women wood wool woolen word work workshop worm wound wrecker wren wrench wrinkle wrist writer xylophone yacht yak yam yard yarn year yellow yew yogurt yoke yugoslavian zebra zephyr zinc zipper zone zoo zoology" nouns = nouns.split(" ") noun_prompts = ["a drawing of a " + x for x in nouns] # Calculate features with torch.no_grad(): nouns_features = model.encode_text(torch.cat([clip.tokenize(noun_prompts).to(device)])) print(nouns_features.shape, nouns_features.dtype) # + id="4XIVMSJuWgxG" #@title Curve Optimizer {vertical-output: true} # %cd /content/diffvg/apps/ prompt = "Watercolor painting of an underwater submarine." neg_prompt = "A badly drawn sketch." neg_prompt_2 = "Many ugly, messy drawings." text_input = clip.tokenize(prompt).to(device) text_input_neg1 = clip.tokenize(neg_prompt).to(device) text_input_neg2 = clip.tokenize(neg_prompt_2).to(device) use_negative = False # Use negative prompts? # Thanks to <NAME> for this. # In the CLIPDraw code used to generate examples, we don't normalize images # before passing into CLIP, but really you should. Turn this to True to do that. use_normalized_clip = False # Calculate features with torch.no_grad(): text_features = model.encode_text(text_input) text_features_neg1 = model.encode_text(text_input_neg1) text_features_neg2 = model.encode_text(text_input_neg2) import pydiffvg import torch import skimage import skimage.io import random import ttools.modules import argparse import math import torchvision import torchvision.transforms as transforms pydiffvg.set_print_timing(False) gamma = 1.0 # ARGUMENTS. Feel free to play around with these, especially num_paths. 
args = lambda: None
args.num_paths = 256
args.num_iter = 1000
args.max_width = 50

# Use GPU if available
pydiffvg.set_use_gpu(torch.cuda.is_available())
device = torch.device('cuda')
pydiffvg.set_device(device)

canvas_width, canvas_height = 224, 224
num_paths = args.num_paths
max_width = args.max_width

# Image Augmentation Transformation
augment_trans = transforms.Compose([
    transforms.RandomPerspective(fill=1, p=1, distortion_scale=0.5),
    transforms.RandomResizedCrop(224, scale=(0.7, 0.9)),
])

if use_normalized_clip:
    augment_trans = transforms.Compose([
        transforms.RandomPerspective(fill=1, p=1, distortion_scale=0.5),
        transforms.RandomResizedCrop(224, scale=(0.7, 0.9)),
        transforms.Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711))
    ])

# Initialize Random Curves
shapes = []
shape_groups = []
for i in range(num_paths):
    num_segments = random.randint(1, 3)
    num_control_points = torch.zeros(num_segments, dtype=torch.int32) + 2
    points = []
    p0 = (random.random(), random.random())
    points.append(p0)
    for j in range(num_segments):
        radius = 0.1
        p1 = (p0[0] + radius * (random.random() - 0.5), p0[1] + radius * (random.random() - 0.5))
        p2 = (p1[0] + radius * (random.random() - 0.5), p1[1] + radius * (random.random() - 0.5))
        p3 = (p2[0] + radius * (random.random() - 0.5), p2[1] + radius * (random.random() - 0.5))
        points.append(p1)
        points.append(p2)
        points.append(p3)
        p0 = p3
    points = torch.tensor(points)
    points[:, 0] *= canvas_width
    points[:, 1] *= canvas_height
    path = pydiffvg.Path(num_control_points=num_control_points, points=points,
                         stroke_width=torch.tensor(1.0), is_closed=False)
    shapes.append(path)
    path_group = pydiffvg.ShapeGroup(shape_ids=torch.tensor([len(shapes) - 1]), fill_color=None,
                                     stroke_color=torch.tensor([random.random(), random.random(),
                                                                random.random(), random.random()]))
    shape_groups.append(path_group)

# Just some diffvg setup
scene_args = pydiffvg.RenderFunction.serialize_scene(
    canvas_width, canvas_height, shapes, shape_groups)
render = pydiffvg.RenderFunction.apply
img = render(canvas_width, canvas_height, 2, 2, 0, None, *scene_args)

points_vars = []
stroke_width_vars = []
color_vars = []
for path in shapes:
    path.points.requires_grad = True
    points_vars.append(path.points)
    path.stroke_width.requires_grad = True
    stroke_width_vars.append(path.stroke_width)
for group in shape_groups:
    group.stroke_color.requires_grad = True
    color_vars.append(group.stroke_color)

# Optimizers
points_optim = torch.optim.Adam(points_vars, lr=1.0)
width_optim = torch.optim.Adam(stroke_width_vars, lr=0.1)
color_optim = torch.optim.Adam(color_vars, lr=0.01)

# Run the main optimization loop
for t in range(args.num_iter):
    # Anneal learning rate (makes videos look cleaner)
    if t == int(args.num_iter * 0.5):
        for g in points_optim.param_groups:
            g['lr'] = 0.4
    if t == int(args.num_iter * 0.75):
        for g in points_optim.param_groups:
            g['lr'] = 0.1

    points_optim.zero_grad()
    width_optim.zero_grad()
    color_optim.zero_grad()
    scene_args = pydiffvg.RenderFunction.serialize_scene(
        canvas_width, canvas_height, shapes, shape_groups)
    img = render(canvas_width, canvas_height, 2, 2, t, None, *scene_args)
    img = img[:, :, 3:4] * img[:, :, :3] + torch.ones(img.shape[0], img.shape[1], 3,
        device=pydiffvg.get_device()) * (1 - img[:, :, 3:4])
    if t % 5 == 0:
        pydiffvg.imwrite(img.cpu(), '/content/res/iter_{}.png'.format(int(t/5)), gamma=gamma)
    img = img[:, :, :3]
    img = img.unsqueeze(0)
    img = img.permute(0, 3, 1, 2)  # NHWC -> NCHW

    loss = 0
    NUM_AUGS = 4
    img_augs = []
    for n in range(NUM_AUGS):
        img_augs.append(augment_trans(img))
    im_batch = torch.cat(img_augs)
    image_features = model.encode_image(im_batch)
    for n in range(NUM_AUGS):
        loss -= torch.cosine_similarity(text_features, image_features[n:n+1], dim=1)
        if use_negative:
            loss += torch.cosine_similarity(text_features_neg1, image_features[n:n+1], dim=1) * 0.3
            loss += torch.cosine_similarity(text_features_neg2, image_features[n:n+1], dim=1) * 0.3

    # Backpropagate the gradients.
    loss.backward()

    # Take a gradient descent step.
    points_optim.step()
    width_optim.step()
    color_optim.step()
    for path in shapes:
        path.stroke_width.data.clamp_(1.0, max_width)
    for group in shape_groups:
        group.stroke_color.data.clamp_(0.0, 1.0)

    if t % 10 == 0:
        show_img(img.detach().cpu().numpy()[0])
        # show_img(torch.cat([img.detach(), img_aug.detach()], axis=3).cpu().numpy()[0])
        print('render loss:', loss.item())
        print('iteration:', t)
        with torch.no_grad():
            im_norm = image_features / image_features.norm(dim=-1, keepdim=True)
            noun_norm = nouns_features / nouns_features.norm(dim=-1, keepdim=True)
            similarity = (100.0 * im_norm @ noun_norm.T).softmax(dim=-1)
            values, indices = similarity[0].topk(5)
            print("\nTop predictions:\n")
            for value, index in zip(values, indices):
                print(f"{nouns[index]:>16s}: {100 * value.item():.2f}%")

# + id="Eru5XUuOCi6c" cellView="form"
#@title Video Renderer {vertical-output: true}

# Render a picture with each stroke.
with torch.no_grad():
    for i in range(args.num_paths):
        print(i)
        scene_args = pydiffvg.RenderFunction.serialize_scene(
            canvas_width, canvas_height, shapes[:i+1], shape_groups[:i+1])
        img = render(canvas_width, canvas_height, 2, 2, t, None, *scene_args)
        img = img[:, :, 3:4] * img[:, :, :3] + torch.ones(img.shape[0], img.shape[1], 3,
            device=pydiffvg.get_device()) * (1 - img[:, :, 3:4])
        pydiffvg.imwrite(img.cpu(), '/content/res/stroke_{}.png'.format(i), gamma=gamma)
print("ffmpeging")

# Convert the intermediate renderings to a video.
from subprocess import call call(["ffmpeg", "-y", "-framerate", "60", "-i", "/content/res/iter_%d.png", "-vb", "20M", "/content/res/out.mp4"]) call(["ffmpeg", "-y", "-framerate", "60", "-i", "/content/res/stroke_%d.png", "-vb", "20M", "/content/res/out_strokes.mp4"]) call(["ffmpeg", "-y", "-i", "/content/res/out.mp4", "-filter_complex", "[0]trim=0:2[hold];[0][hold]concat[extended];[extended][0]overlay", "/content/res/out_longer.mp4"]) call(["ffmpeg", "-y", "-i", "/content/res/out_strokes.mp4", "-filter_complex", "[0]trim=0:2[hold];[0][hold]concat[extended];[extended][0]overlay", "/content/res/out_strokes_longer.mp4"]) display(mvp.ipython_display("/content/res/out_longer.mp4")) display(mvp.ipython_display("/content/res/out_strokes_longer.mp4")) # + id="b2jqNT0VYPWp" cellView="form" #@title Pixel Optimizer (Ignore) {vertical-output: true} # %cd /content/diffvg/apps/ prompt = "Underwater" text_input = clip.tokenize(prompt).to(device) # Calculate features with torch.no_grad(): text_features = model.encode_text(text_input) import torch import skimage import skimage.io import random import ttools.modules import argparse import math import torchvision import torchvision.transforms as transforms class ImageBase(torch.nn.Module): def __init__(self): super().__init__() self.p = torch.nn.Parameter(torch.ones(224, 224, 3)) def forward(self): return torch.nn.functional.sigmoid(self.p) device = torch.device('cuda') canvas_width, canvas_height = 224, 224 augment_trans = transforms.Compose([ transforms.RandomPerspective(fill=1, p=1), transforms.RandomResizedCrop(224, scale=(0.7,0.9)), ]) ib = ImageBase().to(device) t_img = imread('https://lh5.googleusercontent.com/mjvIYutjtOGEEU2cBYuFMvCrBCg4-MGh3DqCRlLqwn5I6VvdKdtwWvAYlndQbv-VUudPcecQ_TEGFjYaTuS_r0LNI83Sp8MlXJb6OarJ9mu-IkmKPlg9Gaw3gOjQvvgvuUB5ghJjlaE') target = torch.from_numpy(t_img).to(torch.float32) ib.p = target # Optimize optim = torch.optim.Adam(ib.parameters(), lr=0.01) # Adam iterations. 
for t in range(args.num_iter): optim.zero_grad() img = ib() # Convert img from HWC to NCHW img = img.unsqueeze(0) img = img.permute(0, 3, 1, 2) # NHWC -> NCHW loss = 0 for n in range(16): img_aug = augment_trans(img) image_features = model.encode_image(img_aug) loss -= torch.cosine_similarity(text_features, image_features, dim=1) # loss += torch.abs(torch.mean(1-img_aug)) * 0.1 # Backpropagate the gradients. loss.backward() # Take a gradient descent step. optim.step() if t % 10 == 0: show_img(img.detach().cpu().numpy()[0]) show_img(img_aug.detach().cpu().numpy()[0]) print('render loss:', loss.item()) print('iteration:', t) with torch.no_grad(): im_norm = image_features / image_features.norm(dim=-1, keepdim=True) noun_norm = nouns_features / nouns_features.norm(dim=-1, keepdim=True) similarity = (100.0 * im_norm @ noun_norm.T).softmax(dim=-1) values, indices = similarity[0].topk(5) print("\nTop predictions:\n") for value, index in zip(values, indices): print(f"{nouns[index]:>16s}: {100 * value.item():.2f}%") # Convert the intermediate renderings to a video. from subprocess import call call(["ffmpeg", "-framerate", "24", "-i", "results/painterly_rendering/iter_%d.png", "-vb", "20M", "results/painterly_rendering/out.mp4"])
clipdraw.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # DALI binary arithmetic operators - type promotions
#
# In this example, we describe the type promotion rules for binary arithmetic operators in DALI. Details on using arithmetic operators in DALI can be found in the "DALI expressions and arithmetic operators" notebook.

# ## Prepare the test pipeline
#
# First, we prepare some helper code so we can easily manipulate the types and values that will appear as tensors in the DALI pipeline.
#
# We will use numpy as the source of the custom data, and we also need several imports from DALI to create a Pipeline and use the ExternalSource operator.

# +
import numpy as np
from nvidia.dali.pipeline import Pipeline
import nvidia.dali.ops as ops
import nvidia.dali.types as types
from nvidia.dali.types import Constant

batch_size = 1
# -

# ### Defining the data
#
# As we are dealing with binary operators, we need two inputs. We will create a simple helper function that returns two numpy arrays of the given numpy types, filled with arbitrarily selected values; this makes manipulating the types easy. In an actual scenario, the data processed by DALI arithmetic operators would be tensors produced by other operators, containing images, video sequences, or other data.
#
# Keep in mind that the shapes of both inputs need to match, as these are element-wise operations.

# +
left_magic_values = [42, 8]
right_magic_values = [9, 2]

def get_data(left_type, right_type):
    return ([left_type(left_magic_values)], [right_type(right_magic_values)])

batch_size = 1
# -

# ### Defining the pipeline
#
# The next step is to define the Pipeline. We override `Pipeline.iter_setup`, a method called by the pipeline before every `Pipeline.run`.
# It is meant to feed the data into the `ExternalSource()` operators indicated by `self.left` and `self.right`. The data is obtained from the `get_data` function, to which we pass the left and right types.
#
# Note that we do not need to instantiate any additional operators; we can use regular Python arithmetic expressions on the results of other operators in the `define_graph` step.
#
# For convenience, we'll wrap the usage of arithmetic operations in a lambda called `operation`, specified when creating the pipeline.
#
# `define_graph` will return both our data inputs and the result of applying `operation` to them.

class ArithmeticPipeline(Pipeline):
    def __init__(self, operation, left_type, right_type, batch_size, num_threads, device_id):
        super(ArithmeticPipeline, self).__init__(batch_size, num_threads, device_id, seed=12)
        self.left_source = ops.ExternalSource()
        self.right_source = ops.ExternalSource()
        self.operation = operation
        self.left_type = left_type
        self.right_type = right_type

    def define_graph(self):
        self.left = self.left_source()
        self.right = self.right_source()
        return self.left, self.right, self.operation(self.left, self.right)

    def iter_setup(self):
        (l, r) = get_data(self.left_type, self.right_type)
        self.feed_input(self.left, l)
        self.feed_input(self.right, r)

# ## Type promotion rules
#
# Type promotions for binary operators are described below. The type promotion rules are commutative. They apply to `+`, `-`, `*`, and `//`. The `/` operator always returns a float32 for integer inputs, and applies the rules below when at least one of the inputs is a floating point number.
#
# | Operand Type | Operand Type | Result Type      | Additional Conditions  |
# |:------------:|:------------:|:----------------:| ---------------------- |
# | T            | T            | T                |                        |
# | floatX       | T            | floatX           | where T is not a float |
# | floatX       | floatY       | float(max(X, Y)) |                        |
# | intX         | intY         | int(max(X, Y))   |                        |
# | uintX        | uintY        | uint(max(X, Y))  |                        |
# | intX         | uintY        | int2Y            | if X <= Y              |
# | intX         | uintY        | intX             | if X > Y               |
#
# The `bool` type is considered the smallest unsigned integer type and is treated as `uint1` with respect to the table above.
#
# The bitwise binary `|`, `&`, and `^` operations abide by the same type promotion rules as arithmetic binary operations, but their inputs are restricted to integral types (bool included).
#
# Only multiplication `*` and the bitwise operations `|`, `&`, `^` can accept two `bool` inputs.

# ### Using the Pipeline
#
# Let's create a Pipeline that adds two tensors of type `uint8`, run it, and see the results.

# +
def build_and_run(pipe, op_name):
    pipe.build()
    pipe_out = pipe.run()
    l = pipe_out[0].as_array()
    r = pipe_out[1].as_array()
    out = pipe_out[2].as_array()
    print("{} {} {} = {}; \n\twith types {} {} {} -> {}\n".format(l, op_name, r, out, l.dtype, op_name, r.dtype, out.dtype))

pipe = ArithmeticPipeline((lambda x, y: x + y), np.uint8, np.uint8, batch_size=batch_size, num_threads=2, device_id=0)
build_and_run(pipe, "+")
# -

# Let's see how all of the operators behave with different type combinations by generalizing the example above. You can use `np_types` or `np_int_types` in the loops to see all possible type combinations. To reduce the output, we limit ourselves to only a few of them. We also set some additional printing options for numpy to make the output more aligned.
np.set_printoptions(precision=2)

# +
arithmetic_operations = [((lambda x, y: x + y), "+"),
                         ((lambda x, y: x - y), "-"),
                         ((lambda x, y: x * y), "*"),
                         ((lambda x, y: x / y), "/"),
                         ((lambda x, y: x // y), "//")]

bitwise_operations = [((lambda x, y: x | y), "|"),
                      ((lambda x, y: x & y), "&"),
                      ((lambda x, y: x ^ y), "^")]

np_types = [np.int8, np.int16, np.int32, np.int64,
            np.uint8, np.uint16, np.uint32, np.uint64,
            np.float32, np.float64]

for (op, op_name) in arithmetic_operations:
    for left_type in [np.uint8]:
        for right_type in [np.uint8, np.int32, np.float32]:
            pipe = ArithmeticPipeline(op, left_type, right_type, batch_size=batch_size, num_threads=2, device_id=0)
            build_and_run(pipe, op_name)

for (op, op_name) in bitwise_operations:
    for left_type in [np.uint8]:
        for right_type in [np.uint8, np.int32]:
            pipe = ArithmeticPipeline(op, left_type, right_type, batch_size=batch_size, num_threads=2, device_id=0)
            build_and_run(pipe, op_name)
# -

# ## Using Constants
#
# Instead of operating only on tensor data, DALI expressions can also work with constants. These can be values of the Python `int` and `float` types used directly, or those values wrapped in `nvidia.dali.types.Constant`. An operation between a tensor and a constant results in the constant being broadcast to all elements of the tensor. The same constant is used with all samples in the batch.
#
# *Note: Currently all values of integral constants are passed to DALI as int32 and all values of floating point constants are passed to DALI as float32.*
#
# Python `int` values will be treated as `int32` and `float` values as `float32` with regard to type promotions.
#
# The DALI `Constant` can be used to indicate other types. It accepts a `DALIDataType` enum value as its second argument and has convenience member functions like `.uint8()` or `.float32()` that can be used for conversions.
#
# As our expressions will consist of a tensor and a constant, we will adjust our previous pipeline and helper functions - they now only need to generate one tensor.

# +
class ArithmeticConstantsPipeline(Pipeline):
    def __init__(self, operation, tensor_data_type, batch_size, num_threads, device_id):
        super(ArithmeticConstantsPipeline, self).__init__(batch_size, num_threads, device_id, seed=12)
        self.tensor_source = ops.ExternalSource()
        self.operation = operation
        self.tensor_data_type = tensor_data_type

    def define_graph(self):
        self.tensor = self.tensor_source()
        return self.tensor, self.operation(self.tensor)

    def iter_setup(self):
        (t, _) = get_data(self.tensor_data_type, self.tensor_data_type)
        self.feed_input(self.tensor, t)

def build_and_run_with_const(pipe, op_name, constant, is_const_left=False):
    pipe.build()
    pipe_out = pipe.run()
    t_in = pipe_out[0].as_array()
    t_out = pipe_out[1].as_array()
    if is_const_left:
        print("{} {} {} = \n{}; \n\twith types {} {} {} -> {}\n".format(constant, op_name, t_in, t_out, type(constant), op_name, t_in.dtype, t_out.dtype))
    else:
        print("{} {} {} = \n{}; \n\twith types {} {} {} -> {}\n".format(t_in, op_name, constant, t_out, t_in.dtype, op_name, type(constant), t_out.dtype))
# -

# Now the `ArithmeticConstantsPipeline` can be parametrized with a function that takes the tensor and returns the result of an arithmetic operation between that tensor and a constant.
#
# We also adjusted our print message.
#
# Now we will check all the cases we mentioned at the beginning: `int` and `float` constants, and `nvidia.dali.types.Constant`.
# +
constant = 10
pipe = ArithmeticConstantsPipeline((lambda x: x + constant), np.uint8, batch_size=batch_size, num_threads=2, device_id=0)
build_and_run_with_const(pipe, "+", constant)

constant = 10
pipe = ArithmeticConstantsPipeline((lambda x: x + constant), np.float32, batch_size=batch_size, num_threads=2, device_id=0)
build_and_run_with_const(pipe, "+", constant)

constant = 42.3
pipe = ArithmeticConstantsPipeline((lambda x: x + constant), np.uint8, batch_size=batch_size, num_threads=2, device_id=0)
build_and_run_with_const(pipe, "+", constant)

constant = 42.3
pipe = ArithmeticConstantsPipeline((lambda x: x + constant), np.float32, batch_size=batch_size, num_threads=2, device_id=0)
build_and_run_with_const(pipe, "+", constant)
# -

# As we can see, the value of the constant is applied to all the elements of the tensor to which it is added.
#
# Now let's check how to use the DALI `Constant` wrapper.
#
# Passing an `int` or `float` to DALI `Constant` marks it as `int32` or `float32`, respectively.

# +
constant = Constant(10)
pipe = ArithmeticConstantsPipeline((lambda x: x * constant), np.uint8, batch_size=batch_size, num_threads=2, device_id=0)
build_and_run_with_const(pipe, "*", constant)

constant = Constant(10.0)
pipe = ArithmeticConstantsPipeline((lambda x: constant * x), np.uint8, batch_size=batch_size, num_threads=2, device_id=0)
build_and_run_with_const(pipe, "*", constant, True)
# -

# We can either explicitly specify the type as a second argument, or use the convenience conversion member functions.
# +
constant = Constant(10, types.DALIDataType.UINT8)
pipe = ArithmeticConstantsPipeline((lambda x: x * constant), np.uint8, batch_size=batch_size, num_threads=2, device_id=0)
build_and_run_with_const(pipe, "*", constant)

constant = Constant(10.0, types.DALIDataType.UINT8)
pipe = ArithmeticConstantsPipeline((lambda x: constant * x), np.uint8, batch_size=batch_size, num_threads=2, device_id=0)
build_and_run_with_const(pipe, "*", constant, True)

constant = Constant(10).uint8()
pipe = ArithmeticConstantsPipeline((lambda x: constant * x), np.uint8, batch_size=batch_size, num_threads=2, device_id=0)
build_and_run_with_const(pipe, "*", constant, True)
# -

# ## Treating tensors as scalars
#
# If one of the tensors is considered a scalar input, the same rules apply.
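# To make the promotion table above concrete, the rules can be sketched as a small standalone Python helper. This is a hypothetical illustration of the rules as stated, not part of DALI's API; in particular, the clamping of widths at 64 bits is an assumption for cases like `int64` vs `uint64`.

```python
def _parse(t):
    """Split a type name like 'uint16' into (kind, bit width).

    bool is treated as the smallest unsigned integer ('uint1'),
    as described above.
    """
    if t == "bool":
        return ("uint", 1)
    for kind in ("uint", "int", "float"):
        if t.startswith(kind):
            return (kind, int(t[len(kind):]))
    raise ValueError("unknown type: " + t)

def promote(a, b):
    """Result type of a binary arithmetic op, per the promotion table."""
    if a == b:                            # T op T -> T
        return a
    (ka, wa), (kb, wb) = _parse(a), _parse(b)
    if ka == "float" or kb == "float":
        if ka == kb:                      # floatX op floatY -> float(max(X, Y))
            return "float%d" % max(wa, wb)
        return a if ka == "float" else b  # floatX op T -> floatX
    if ka == kb:                          # same signedness -> wider of the two
        return "%s%d" % (ka, max(wa, wb))
    # mixed signed/unsigned: wi = int width, wu = uint width
    wi, wu = (wa, wb) if ka == "int" else (wb, wa)
    if wi <= wu:                          # intX op uintY -> int2Y, if X <= Y
        return "int%d" % min(2 * wu, 64)  # assumed clamp at 64 bits
    return "int%d" % wi                   # intX op uintY -> intX, if X > Y
```

# For instance, `promote("int8", "uint8")` yields `"int16"`, and `promote("float32", "int64")` yields `"float32"`, mirroring the results printed by the loops above.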
docs/examples/general/expressions/expr_type_promotions.ipynb
# -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.6.0 # language: julia # name: julia-1.6 # --- # # Some Typical Financial Calculations # # This notebook (a) estimates CAPM equations and autocorrelations; (b) implements a simple trading strategy; (c) calculates Value at Risk using a simple model for time-varying volatility; (d) calculates the Black-Scholes option price and implied volatility; (e) calculates and draws the mean-variance frontier (w/w.o short selling restrictions). # ## Load Packages and Extra Functions # # The [Roots](https://github.com/JuliaMath/Roots.jl) package solves non-linear equations and the [StatsBase](https://github.com/JuliaStats/StatsBase.jl) package has methods for estimating autocorrelations etc. # + using Printf, Dates, DelimitedFiles, LinearAlgebra, Roots, Distributions, StatsBase include("jlFiles/printmat.jl") # + using Plots #pyplot(size=(600,400)) #use pyplot or gr gr(size=(480,320)) default(fmt = :svg) # - # # Load Data # + x = readdlm("Data/MyData.csv",',',skipstart=1) #monthly return data ym = round.(Int,x[:,1]) #yearmonth, like 200712 Rme = x[:,2] #market excess return Rf = x[:,3] #interest rate R = x[:,4] #return small growth stocks Re = R - Rf #excess returns T = size(Rme,1) dN = Date.(string.(ym),"yyyymm") #convert to string and then Julia Date printmat([dN[1:4] Re[1:4] Rme[1:4]]) # - # # CAPM # # The CAPM regression is # # $R_{it}^{e} =\alpha_{i}+\beta_{i}R_{mt}^{e}+\varepsilon_{it}$, # # where $R_{it}^{e}$ is the excess return of asset $i$ and $R_{mt}^{e}$ is the market excess return. Theory says that $\alpha=0$, which is easily tested. 
# + x = [ones(T) Rme] #regressors y = copy(Re) #to get standard OLS notation b = x\y #OLS u = y - x*b #residuals covb = inv(x'x)*var(u) #cov(b), see any textbook stdb = sqrt.(diag(covb)) #std(b) R2 = 1 - var(u)/var(y) printmat([b stdb b./stdb],colNames=["coeff","std","t-stat"],rowNames=["α","β"]) printlnPs("R2: ",R2) printlnPs("no. of observations: ",T) # - # # Return Autocorrelation # # That is, the correlation of $R_{t}^{e}$ and $R_{t-s}^{e}$. # # It can be shown that the t-stat of an autocorrelation is $\sqrt{T}$ times the autocorrelation. # + plags = 1:5 xCorr = autocor(Re,plags) #using the StatsBase package println("Autocorrelations (different lags) of the excess returns in Re") printmat([xCorr sqrt(T)*xCorr],colNames=["autocorr","t-stat"],rowNames=string.(plags),cell00="lag") # - # # A Trading Strategy # # The next cell implements a very simple momentum trading strategy. # # 1. If $R_{t-1}^{e}\ge0$, then we hold the market index and shorten the riskfree from $t-1$ to $t$. This means that we will earn $R_{t}^{e}$. # # 2. Instead, if $R_{t-1}^{e}<0$, then we do the opposite. This means that we will earn $-R_{t}^{e}$. # # This simple strategy could be coded without using a loop, but "vectorization" does not speed up much. # + (w,Rp) = (fill(NaN,T),fill(NaN,T)) for t = 2:T w[t] = (Re[t-1] < 0)*(-1) + (Re[t-1] >= 0)*1 #w is -1 or 1 Rp[t] = w[t]*Re[t] end μ = [mean(Rp[2:end]) mean(Re[2:end])] σ = [std(Rp[2:end]) std(Re[2:end])] printlnPs("The annualized mean excess return of the strategy and a passive portfolio are: ",μ*12) printlnPs("The annualized Sharpe ratios are: ",sqrt(12)*μ./σ) # - # # Value at Risk # # The next cell constructs an simple estimate of $\sigma_t^2$ as a backward looking moving average (the RiskMetrics approach): # # $\sigma_t^2 = \lambda \sigma_{t-1}^2 + (1-\lambda) (R_{t-1} -\mu)^2$, # where $\mu$ is the average return (for all data). 
# # Then, we calculate the 95% VaR by assuming a $N(\mu,\sigma_t^2)$ distribution: # # $\textrm{VaR}_{t} = - (\mu-1.64\sigma_t)$. # # If the model is correct, then $-R_t > \text{VaR}_{t}$ should only happen 5% of the times. # + μ = mean(Rme) λ = 0.95 #weight on old volatility σ² = fill(var(Rme),T) #RiskMetrics approach to estimate variance for t = 2:T σ²[t] = λ*σ²[t-1] + (1-λ)*(Rme[t-1]-μ)^2 end VaR95 = -(μ .- 1.64*sqrt.(σ²)); #VaR at 95% level # + xTicksLoc = [Date(1980);Date(1990);Date(2000);Date(2010)] xTicksLab = Dates.format.(xTicksLoc,"Y") p1 = plot( dN,VaR95, color = :blue, legend = false, xticks = (xTicksLoc,xTicksLab), ylim = (0,11), title = "1-month Value at Risk (95%)", ylabel = "%", annotation = (Date(1982),1,text("(for US equity market)",8,:left)) ) display(p1) # - # # Options # ## Black-Scholes Option Price # # Let $S$ be the the current spot price of an asset and $y$ be the interest rate. # # The Black-Scholes formula for a European call option with strike price $K$ and time to expiration $m$ is # # $C =S\Phi(d_{1}) -e^{-ym}K\Phi(d_{2})$, where # # $d_{1} =\frac{\ln(S/K)+(y+\sigma^{2}/2)m}{\sigma\sqrt{m}} \ \text{ and } \ d_{2}=d_{1}-\sigma\sqrt{m}$ # # and where $\Phi(d)$ denotes the probability of $x\leq d$ when $x$ has an $N(0,1)$ distribution. All variables except the volatility ($\sigma$) are directly observable. 
# + Φ(x) = cdf(Normal(0,1),x) #a one-line function, Pr(z<=x) for N(0,1) """ Calculate Black-Scholes european call option price """ function OptionBlackSPs(S,K,m,y,σ) d1 = ( log(S/K) + (y+1/2*σ^2)*m ) / (σ*sqrt(m)) d2 = d1 - σ*sqrt(m) c = S*Φ(d1) - exp(-y*m)*K*Φ(d2) return c end # + σ = 0.4 c1 = OptionBlackSPs(10,10,0.5,0.1,σ) printlnPs("\n","call price according to Black-Scholes: ",c1) K = range(7,stop=13,length=51) c = OptionBlackSPs.(10,K,0.5,0.1,σ); # - p1 = plot( K,c, color = :red, legend = false, title = "Black-Scholes call option price", xlabel = "strike price", ylabel = "option price" ) display(p1) # # Implied Volatility # # is the $\sigma$ value that makes the Black-Scholes equation give the same option price as observed on the market. It is often interpreted as the "market uncertainty." # # The next cell uses the call option price calculated above as the market price. The implied volatility should then equal the volatility used above (this is a way to check your coding). # # The next few cells instead use some data on options on German government bonds. 
# + #solve for implied vol iv = find_zero(σ->OptionBlackSPs(10,10,0.5,0.1,σ)-c1,(0.00001,5)) printlnPs("Implied volatility: ",iv,", compare with: $σ") # - # LIFFE Bunds option data, trade date April 6, 1994 K = [ #strike prices; Mx1 vector 92.00; 94.00; 94.50; 95.00; 95.50; 96.00; 96.50; 97.00; 97.50; 98.00; 98.50; 99.00; 99.50; 100.0; 100.5; 101.0; 101.5; 102.0; 102.5; 103.0; 103.5 ]; C = [ #call prices; Mx1 vector 5.13; 3.25; 2.83; 2.40; 2.00; 1.64; 1.31; 1.02; 0.770; 0.570; 0.400; 0.280; 0.190; 0.130; 0.0800; 0.0500; 0.0400; 0.0300; 0.0200; 0.0100; 0.0100 ]; S = 97.05 #spot price m = 48/365 #time to expiry in years y = 0.0 #Interest rate: LIFFE=>no discounting N = length(K) # + iv = fill(NaN,N) #looping over strikes for i = 1:N iv[i] = find_zero(sigma->OptionBlackSPs(S,K[i],m,y,sigma)-C[i],(0.00001,5)) end println("Strike and iv for data: ") printmat([K iv]) # - p1 = plot( K,iv, color = :red, legend =false, title = "Implied volatility", xlabel = "strike price", annotation = (98,0.09,text("Bunds options April 6, 1994",8,:left)) ) display(p1) # # Mean-Variance Frontier # # Given a vector of average returns ($\mu$) and a variance-covariance matrix ($\Sigma$), the mean-variance frontier shows the lowest possible portfolio uncertainty for a given expected portfolio return (denoted $\mu\text{star}$ below). # # It is thus the solution to a quadratic minimization problem. The cells below will use the explicit (matrix) formulas for this solution, but we often have to resort to numerical methods when there are portfolio restrictions. # # It is typically plotted with the portfolio standard deviation on the horizontal axis and the portfolio expected return on the vertical axis. # # We calculate and plot two different mean-variance frontiers: (1) when we only consider risky assets; (2) when we also consider a risk-free asset. 
# + μ = [11.5; 9.5; 6]/100 #expected returns Σ = [166 34 58; #covariance matrix 34 64 4; 58 4 100]/100^2 Rf = 0.03 #riskfree return (an interest rate) println("μ: ") printmat(μ) println("Σ: ") printmat(Σ) println("Rf: ") printmat(Rf) # + function MVCalc(μstar,μ,Σ) #calculate the std of a portfolio on MVF of risky assets n = length(μ) #μstar is a scalar, μ a vector and Σ a matrix oneV = ones(n) Σ_1 = inv(Σ) A = μ'Σ_1*μ B = μ'Σ_1*oneV C = oneV'Σ_1*oneV λ = (C*μstar - B)/(A*C-B^2) δ = (A-B*μstar)/(A*C-B^2) w = Σ_1 *(μ*λ + δ*oneV) StdRp = sqrt(w'Σ*w) return StdRp,w end function MVCalcRf(μstar,μ,Σ,Rf) #calculates the std of a portfolio on MVF of (Risky,Riskfree) n = length(μ) μe = μ .- Rf #expected excess returns Σ_1 = inv(Σ) w = (μstar - Rf)/(μe'Σ_1*μe) * Σ_1 *μe StdRp = sqrt(w'Σ*w) return StdRp,w end # + μstar = range(Rf,stop=0.15,length=201) L = length(μstar) StdRp = fill(NaN,L) #risky assets only for i = 1:L #loop over different expected portfolio returns StdRp[i] = MVCalc(μstar[i],μ,Σ)[1] #[1] to get the first output end # - p1 = plot( StdRp*100,μstar*100, linecolor = :red, xlim = (0,15), ylim = (0,15), label = "MVF", legend = :topleft, title = "MVF, only risky assets", xlabel = "Std(Rp), %", ylabel = "ERp, %" ) scatter!(sqrt.(diag(Σ))*100,μ*100,color=:red,label="assets") display(p1) StdRpRf = fill(NaN,L) #with riskfree too for i = 1:L StdRpRf[i] = MVCalcRf(μstar[i],μ,Σ,Rf)[1] end p1 = plot( [StdRp StdRpRf]*100,[μstar μstar]*100, legend = nothing, linestyle = [:solid :dash], linecolor = [:red :blue], xlim = (0,15), ylim = (0,15), title = "MVF, risky and riskfree assets", xlabel = "Std(Rp), %", ylabel = "ERp, %" ) scatter!(sqrt.(diag(Σ))*100,μ*100,color=:red) display(p1) # # Mean-Variance Frontier without Short Selling (extra) # # The code below solves (numerically) the following minimization problem # # $\min \text{Var}(R_p) \: \text{ s.t. } \: \text{E}R_p = \mu^*$, # # and where we also require $w_i\ge 0$ and $\sum_{i=1}^{n}w_{i}=1$. 
# # To solve this, we use the [JuMP](https://github.com/JuliaOpt/JuMP.jl) package (version >= 0.19) together with [Ipopt](https://github.com/JuliaOpt/Ipopt.jl). using JuMP, Ipopt function MeanVarNoSSPs(μ,Σ,μstar) #MV with no short-sales, numerical minimization n = length(μ) if minimum(μ) <= μstar <= maximum(μ) #try only if feasible model = Model(optimizer_with_attributes(Ipopt.Optimizer,"print_level" => 1)) @variable(model,w[i=1:n] >= 0.0) #no short sales @objective(model,Min,w'*Σ*w) #minimize portfolio variance @constraint(model,sum(w) == 1.0) #w sums to 1 @constraint(model,w'*μ == μstar) #mean equals μstar optimize!(model) if has_values(model) w_p = value.(w) StdRp = sqrt(w_p'Σ*w_p) end else (w_p,StdRp) = (NaN,NaN) end return StdRp,w_p end Std_no_ss = fill(NaN,length(μstar)) for i = 1:length(μstar) #risky assets only, no short sales Std_no_ss[i] = MeanVarNoSSPs(μ,Σ,μstar[i])[1] end p1 = plot( [StdRp Std_no_ss]*100,[μstar μstar]*100, linecolor = [:red :green], linestyle = [:solid :dash], linewidth = 2, label = ["no constraints" "no short sales"], xlim = (0,15), ylim = (0,15), legend = :topleft, title = "MVF (with/without constraints)", xlabel = "Std(Rp), %", ylabel = "ERp, %" ) scatter!(sqrt.(diag(Σ))*100,μ*100,color=:red,label="assets") display(p1)
Tutorial_05_Finance.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import matplotlib.pyplot as plt import re import nltk import numpy as np import seaborn as sns sns.set_style('dark') from itertools import chain from collections import Counter from nltk.corpus import stopwords stop_words = set(stopwords.words('english')) # - train = pd.read_csv('train.csv') test = pd.read_csv('test.csv') train.head() # ## Label plt.figure(figsize=(7, 6)) sns.countplot(data=train, x='label') plt.title('Values per label', fontsize=13) plt.xlabel('Label', fontsize=13) plt.ylabel('Count', fontsize=13) # ## Tweet # ### Hashtags def detect_hashtags(text): text = str(text) count = re.findall(r'\#[\w]+', text) return len(count) train['hashtags'] = train['tweet'].apply(detect_hashtags) train.head() plt.figure(figsize=(7, 6)) sns.histplot(data=train, x='hashtags', binwidth=2) plt.title('Hashtags per text', fontsize=13) plt.xlabel('hashtags', fontsize=13) plt.ylabel('Label', fontsize=13) print('Max hashtags per text: {}'.format(max(train['hashtags']))) print(train['tweet'][np.argmax(train['hashtags'])]) # ### Usernames def detect_usernames(text): text = str(text) count = re.findall(r'\@[\w]+', text) return len(count) train['usernames'] = train['tweet'].apply(detect_usernames) train.head() plt.figure(figsize=(7, 6)) sns.histplot(data=train, x='usernames', binwidth=2) plt.title('Usernames per text', fontsize=13) plt.xlabel('Usernames', fontsize=13) plt.ylabel('Label', fontsize=13) print('Max usernames (mentions) per text: {}'.format(max(train['usernames']))) print(train['tweet'][np.argmax(train['usernames'])]) # ### Words def count_words(text): text = str(text) text = text.split(' ') return len(text) train['words'] = train['tweet'].apply(count_words) train.head() plt.figure(figsize=(7, 6)) sns.histplot(data=train, 
x='words', binwidth=2) plt.title('Words per text', fontsize=13) plt.xlabel('Words', fontsize=13) plt.ylabel('Label', fontsize=13) print('Max characters per text: {}'.format(max(train['words']))) print(train['tweet'][np.argmax(train['words'])]) # ### Numbers def count_numbers(text): text = str(text) count = re.findall(r'\d', text) return len(count) train['numbers'] = train['tweet'].apply(count_numbers) train.head() plt.figure(figsize=(7, 6)) sns.histplot(data=train, x='numbers', binwidth=2) plt.title('Numbers per text', fontsize=13) plt.xlabel('Numbers', fontsize=13) plt.ylabel('Label', fontsize=13) print('Max numbers per text: {}'.format(max(train['numbers']))) print(train['tweet'][np.argmax(train['numbers'])]) # ### Characters def count_characters(text): text = str(text) count = re.findall(r'\w', text) return len(count) train['characters'] = train['tweet'].apply(count_characters) train.head() plt.figure(figsize=(7, 6)) sns.histplot(data=train, x='characters', binwidth=2) plt.title('Characters per text', fontsize=13) plt.xlabel('Characters', fontsize=13) plt.ylabel('Label', fontsize=13) print('Max characters per text: {}'.format(max(train['characters']))) print(train['tweet'][np.argmax(train['characters'])]) # ### Special Characters def count_special_characters(text): text = str(text) count = re.findall(r'[^\w\s]', text) return len(count) train['specials'] = train['tweet'].apply(count_special_characters) train.head() plt.figure(figsize=(7, 6)) sns.histplot(data=train, x='specials', binwidth=2) plt.title('Special characters per text', fontsize=13) plt.xlabel('Specials', fontsize=13) plt.ylabel('Label', fontsize=13) print('Max special characters per text: {}'.format(max(train['specials']))) print(train['tweet'][np.argmax(train['specials'])]) # ### Stopwords def count_stopwords(text): text = str(text) count = [word for word in text.split(' ') if word in stop_words] return len(count) train['stopwords'] = train['tweet'].apply(count_stopwords) train.head() 
plt.figure(figsize=(7, 6)) sns.histplot(data=train, x='stopwords', binwidth=2) plt.title('Stopwords per text', fontsize=13) plt.xlabel('Stopwords', fontsize=13) plt.ylabel('Label', fontsize=13) print('Max stopwords per text: {}'.format(max(train['stopwords']))) # ### Word count counter = Counter(chain.from_iterable(map(str.split, train['tweet'].tolist()))) common_words = counter.most_common(20) common_words def check_stopword(common_words): common_words = [(word[0], 'yes') if word[0] in stop_words else (word[0], 'no') for word in common_words] return common_words check_stopword(common_words) # ### Word count per label #label 0 train_label_zero = train.copy() train_label_zero = train_label_zero[train_label_zero['label'] == 0] counter_zero = Counter(chain.from_iterable(map(str.split, train_label_zero['tweet'].tolist()))) counter_zero = counter_zero.most_common(35) counter_zero check_stopword(counter_zero) #label 1 train_label_one = train.copy() train_label_one = train_label_one[train_label_one['label'] == 1] counter_one = Counter(chain.from_iterable(map(str.split, train_label_one['tweet'].tolist()))) counter_one = counter_one.most_common(20) counter_one check_stopword(counter_one)
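# As the label-wise counts above show, many of the top tokens are stopwords. A minimal pure-Python sketch of counting after filtering them out — using made-up tweets and a toy stopword set, not the notebook's data — looks like this:

```python
from collections import Counter
from itertools import chain

# Hypothetical stand-ins for the train tweets and the NLTK stopword list
tweets = [
    "i love this sunny day",
    "this day is the worst day",
    "love the sun love the day",
]
stop_words = {"i", "this", "the", "is", "a"}

def top_words(texts, n=3):
    """Most common tokens across texts, after dropping stopwords."""
    tokens = chain.from_iterable(t.split() for t in texts)
    counts = Counter(tok for tok in tokens if tok not in stop_words)
    return counts.most_common(n)

print(top_words(tweets))  # [('day', 4), ('love', 3), ...]
```

# Applying the same filter before `Counter` in the label-wise cells above would remove the need for the `check_stopword` post-hoc check.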
Code/Studies/EDA/EDA.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import os
os.sys.path.append(os.path.dirname(os.path.abspath('.')))

# ## Data preparation

# +
import numpy as np

data = np.array([[2.5, 3.5, 3, 3.5, 2.5, 3],
                 [3, 3.5, 1.5, 5, 3.5, 3],
                 [2.5, 3, 0, 3.5, 0, 4],
                 [0, 3.5, 3, 0, 4, 4],
                 [3, 4, 2, 3, 2, 3],
                 [3, 4, 0, 5, 3.5, 3],
                 [0, 4.5, 0, 4, 1, 0]])
n_users, n_items = data.shape
# -

# With the user-item matrix in place, we can compute the similarity between every pair of users:

# +
from metrics.pairwise.euclidean_distances import euclidean_distances

dist_mat = euclidean_distances(data)  # pairwise distance matrix between users
sim_mat = 1/(1+dist_mat)  # convert distances into similarities
# -

# Given a user $user_{i}$, first find the $k$ users most similar to them:

i = 6  # the last user
k = 3  # use the 3 most similar users
top_k_sim = sim_mat[i][sim_mat[i] != 1].argsort(
)[-1:-k-1:-1]  # exclude the self-similarity of 1, then take the k most similar users

# The essence of recommendation is suggesting things the user has not yet seen or used, so we find the items the user has not rated and compute the similar users' weighted ratings for them:

# +
cand_items_mask = (data[i] == 0)  # boolean mask of the unrated items
cand_items = np.arange(len(data[i]))[cand_items_mask]  # indices of the candidate items

# similar users' ratings of the candidate items, shape (top_users, cand_items)
scores = data[top_k_sim, :][:, cand_items_mask]

# sum of the neighbours' similarities, used as the denominator
denominator = np.sum(
    sim_mat[i, top_k_sim], axis=0)

scores = np.sum(
    scores * sim_mat[i, top_k_sim].reshape(-1, 1), axis=0)  # weight by similarity and sum
scores = scores/denominator  # divide by the summed similarities

idx = np.argsort(scores)[::-1]  # indices sorted by score, descending
scores = scores[idx]
cand_items = cand_items[idx]
print(scores, cand_items)
# -

# Wrapping this up as a function for testing:

# +
def CF(data, i, k=5):
    '''
    i: user index
    k: use the k most similar users
    '''
    dist_mat = euclidean_distances(data)  # pairwise row distance matrix
    sim_mat = 1/(1+dist_mat)  # convert distances into similarities

    top_k_sim = sim_mat[i][sim_mat[i] != 1].argsort()[-1:-k-1:-1]

    cand_items_mask = (data[i] == 0)
    cand_items = np.arange(len(data[i]))[cand_items_mask]

    # similar users' ratings of the candidate items, shape (top_users, cand_items)
    scores = data[top_k_sim, :][:, cand_items_mask]

    # sum of the neighbours' similarities, used as the denominator
    denominator = np.sum(
        sim_mat[i, top_k_sim], axis=0)

    scores = np.sum(
        scores * sim_mat[i,
                        top_k_sim].reshape(-1, 1), axis=0)  # weight by similarity and sum
    scores = scores/denominator  # divide by the summed similarities

    idx = np.argsort(scores)[::-1]  # indices sorted by score, descending
    scores = scores[idx]
    cand_items = cand_items[idx]
    return [(item, score) for item, score in zip(cand_items, scores)]


CF(data, 6, 3)
# -

# To recommend users for a given item instead, simply transpose the data matrix.

data_T = data.T

CF(data_T, 2, 2)
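The similarity weighting at the heart of `CF` can be checked in isolation: a candidate item's score is just the similarity-weighted mean of the neighbours' ratings. A standalone sketch with made-up numbers:

```python
import numpy as np

# Two neighbours rated the same candidate item; the target user did not.
neighbour_ratings = np.array([4.0, 2.0])
neighbour_sims = np.array([0.8, 0.2])  # similarities to the target user

# Weighted mean: sum(rating * sim) / sum(sim).
score = np.sum(neighbour_ratings * neighbour_sims) / np.sum(neighbour_sims)
print(score)  # 3.6 -- pulled toward the more similar neighbour's rating of 4.0
```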
recommend/1. user_based_CF.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # A step towards the Single Particle Model
#
# In the [previous notebook](./2-a-pde-model.ipynb) we saw how to solve a PDE model in PyBaMM. Now it is time to solve a real-life battery problem! We consider the problem of spherical diffusion in the negative electrode particle within the single particle model. That is,
# \begin{equation*}
#   \frac{\partial c}{\partial t} = \nabla \cdot (D \nabla c),
# \end{equation*}
# with the following boundary and initial conditions:
# \begin{equation*}
#   \left.\frac{\partial c}{\partial r}\right\vert_{r=0} = 0, \quad \left.\frac{\partial c}{\partial r}\right\vert_{r=R} = -\frac{j}{FD}, \quad \left.c\right\vert_{t=0} = c_0,
# \end{equation*}
# where $c$ is the concentration, $r$ the radial coordinate, $t$ time, $R$ the particle radius, $D$ the diffusion coefficient, $j$ the interfacial current density, $F$ Faraday's constant, and $c_0$ the initial concentration.
#
# In this example we use the following parameters:
#
# | Symbol | Units            | Value                 |
# |:-------|:-----------------|:----------------------|
# | $R$    | m                | $10 \times 10^{-6}$   |
# | $D$    | m$^{2}$ s$^{-1}$ | $3.9 \times 10^{-14}$ |
# | $j$    | A m$^{-2}$       | $1.4$                 |
# | $F$    | C mol$^{-1}$     | $96485$               |
# | $c_0$  | mol m$^{-3}$     | $2.5 \times 10^{4}$   |
#
# Note that all battery models in PyBaMM are written in dimensionless form for better numerical conditioning. This is discussed further in [the simple SEI model notebook](./5-a-simple-SEI-model.ipynb).
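Before handing the problem to PyBaMM, the PDE and boundary conditions above can be sanity-checked with a hand-rolled explicit finite-difference scheme. This is only an illustrative sketch (uniform grid, explicit Euler stepping, a ghost node for the flux boundary — not the finite-volume discretisation PyBaMM uses below):

```python
import numpy as np

# Parameters from the table above.
R, D, j, F, c0 = 10e-6, 3.9e-14, 1.4, 96485.0, 2.5e4

n = 21
r = np.linspace(0.0, R, n)
dr = r[1] - r[0]
dt = 0.5  # s; comfortably below the explicit stability limit dr^2 / (2 D) ~ 3 s
c = np.full(n, c0)

for _ in range(int(3600 / dt)):  # one hour
    cn = c.copy()
    # Interior nodes: dc/dt = D (1/r^2) d/dr (r^2 dc/dr), centred differences.
    rp = (r[1:-1] + dr / 2) ** 2
    rm = (r[1:-1] - dr / 2) ** 2
    cn[1:-1] = c[1:-1] + dt * D * (
        rp * (c[2:] - c[1:-1]) - rm * (c[1:-1] - c[:-2])
    ) / (r[1:-1] ** 2 * dr ** 2)
    # r = 0: symmetry (dc/dr = 0) gives dc/dt = 6 D (c_1 - c_0) / dr^2.
    cn[0] = c[0] + dt * 6 * D * (c[1] - c[0]) / dr ** 2
    # r = R: ghost node enforcing dc/dr = -j / (F D).
    ghost = c[-2] - 2 * dr * j / (F * D)
    cn[-1] = c[-1] + dt * D * (
        (R + dr / 2) ** 2 * (ghost - c[-1]) - (R - dr / 2) ** 2 * (c[-1] - c[-2])
    ) / (R ** 2 * dr ** 2)
    c = cn

print(c[-1])  # surface concentration after one hour: depleted below c0, lowest at the surface
```

Because $j > 0$ lithium leaves through the surface, the profile should fall below $c_0$ everywhere after an hour, with the minimum at $r = R$, which is also the qualitative behaviour of the PyBaMM solution plotted later in this notebook.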
# ## Setting up the model
#
# As before, we begin by importing the PyBaMM library into this notebook, along with any other packages we require, and start with an empty `pybamm.BaseModel`.

# +
import pybamm
import numpy as np
import matplotlib.pyplot as plt

model = pybamm.BaseModel()
# -

# We then define all of the model variables and parameters. Parameters are created using the `pybamm.Parameter` class and are given informative names (with units). Later, we will provide parameter values and the `Parameter` objects will be turned into numerical values. For more information please see the [parameter values notebook](../parameter-values.ipynb).

# +
R = pybamm.Parameter("Particle radius [m]")
D = pybamm.Parameter("Diffusion coefficient [m2.s-1]")
j = pybamm.Parameter("Interfacial current density [A.m-2]")
F = pybamm.Parameter("Faraday constant [C.mol-1]")
c0 = pybamm.Parameter("Initial concentration [mol.m-3]")

c = pybamm.Variable("Concentration [mol.m-3]", domain="negative particle")
# -

# Now we define our model equations, boundary and initial conditions, as in the previous example. Note that both boundary conditions are of Neumann type, matching the zero-flux condition at the centre and the prescribed flux at the surface.

# +
# governing equations
N = -D * pybamm.grad(c)  # flux
dcdt = -pybamm.div(N)
model.rhs = {c: dcdt}

# boundary conditions
lbc = pybamm.Scalar(0)
rbc = -j / F / D
model.boundary_conditions = {c: {"left": (lbc, "Neumann"), "right": (rbc, "Neumann")}}

# initial conditions
model.initial_conditions = {c: c0}
# -

# Finally, we add any variables of interest to the dictionary `model.variables`

model.variables = {
    "Concentration [mol.m-3]": c,
    "Surface concentration [mol.m-3]": pybamm.surf(c),
    "Flux [mol.m-2.s-1]": N,
}

# ## Using the model
# In order to discretise and solve the model we need to provide values for all of the parameters.
# This is done via the `pybamm.ParameterValues` class, which accepts a dictionary of parameter names and values.

param = pybamm.ParameterValues(
    {
        "Particle radius [m]": 10e-6,
        "Diffusion coefficient [m2.s-1]": 3.9e-14,
        "Interfacial current density [A.m-2]": 1.4,
        "Faraday constant [C.mol-1]": 96485,
        "Initial concentration [mol.m-3]": 2.5e4,
    }
)

# Here all of the parameters are simply scalars, but they can also be functions or read in from data (see the [parameter values notebook](../parameter-values.ipynb)).

# As in the previous example, we define the particle geometry. Note that in this example the definition of the geometry contains a parameter, the particle radius $R$

r = pybamm.SpatialVariable("r", domain=["negative particle"], coord_sys="spherical polar")
geometry = {"negative particle": {"primary": {r: {"min": pybamm.Scalar(0), "max": R}}}}

# Both the model and geometry can now be processed by the parameter class. This replaces the parameters with the values

param.process_model(model)
param.process_geometry(geometry)

# We can now set up our mesh, choose a spatial method, and discretise our model

# +
submesh_types = {"negative particle": pybamm.MeshGenerator(pybamm.Uniform1DSubMesh)}
var_pts = {r: 20}
mesh = pybamm.Mesh(geometry, submesh_types, var_pts)

spatial_methods = {"negative particle": pybamm.FiniteVolume()}
disc = pybamm.Discretisation(mesh, spatial_methods)
disc.process_model(model);
# -

# The model is now discretised and ready to be solved.

# ### Solving the model
# As in the previous example, we choose a solver and times at which we want the solution returned.
# + # solve solver = pybamm.ScipySolver() t = np.linspace(0, 3600, 600) solution = solver.solve(model, t) # post-process, so that the solution can be called at any time t or space r # (using interpolation) c = solution["Concentration [mol.m-3]"] c_surf = solution["Surface concentration [mol.m-3]"] # plot fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13, 4)) ax1.plot(solution.t, c_surf(solution.t)) ax1.set_xlabel("Time [s]") ax1.set_ylabel("Surface concentration [mol.m-3]") r = mesh["negative particle"][0].nodes # radial position time = 1000 # time in seconds ax2.plot(r * 1e6, c(t=time, r=r), label="t={}[s]".format(time)) ax2.set_xlabel("Particle radius [microns]") ax2.set_ylabel("Concentration [mol.m-3]") ax2.legend() plt.tight_layout() plt.show() # - # In the [next notebook](./4-comparing-full-and-reduced-order-models.ipynb) we consider the limit of fast diffusion in the particle. This leads to a reduced-order model for the particle behaviour, which we compare with the full (Fickian diffusion) model.
examples/notebooks/Creating Models/3-negative-particle-problem.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# +
import pickle

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPool2D

# +
pickle_in = open("X1.pickle", "rb")
X = pickle.load(pickle_in)

pickle_in = open("y1.pickle", "rb")
y = pickle.load(pickle_in)

X = X/255.0

# VGG19-style architecture: stacks of 3x3 convolutions followed by 2x2 max pooling
model = Sequential()

model.add(Conv2D(input_shape=X.shape[1:], filters=64, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=64, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2)))

model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2)))

model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2)))

model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2)))

model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2), strides=(2,2)))

model.add(Flatten())
model.add(Dense(units=4096, activation="relu"))
model.add(Dense(units=4096, activation="relu"))
model.add(Dense(units=10, activation="softmax"))

model.compile(optimizer='nadam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

model.fit(X, y, batch_size=10, epochs=1, validation_split=0.25)

model.save('VGG19.model')
model.save_weights('VGG19.h5')
# -
VGG19.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Toolkit & Setup

# In this lesson we introduce some concepts and tools that we will use throughout the course. The end goal is to correctly install the MAT281 _toolkit_.

# + [markdown] slideshow={"slide_type": "slide"}
# ## Operating System

# + [markdown] slideshow={"slide_type": "slide"}
# * Personally I recommend **Linux**, in particular distributions such as Ubuntu, Mint or Fedora, for how simple and uniform installation is.
# * On **Windows**, implementations are sometimes not fully integrated and occasionally not available at all. There are currently two alternatives for running Linux on Windows:
#     - [**Windows Subsystem for Linux**](https://docs.microsoft.com/en-us/windows/wsl/about)
#     - [**Docker**](https://www.docker.com/)
# * If you have a **macOS** machine there should be no problem.

# + [markdown] slideshow={"slide_type": "slide"}
# ## Command Line Interface (CLI)

# + [markdown] slideshow={"slide_type": "slide"}
# * A way for users to interact with a program through lines of text.
# * Typically used through a terminal/*shell* (see image).
# * In day-to-day work it streamlines your workflow.
# * Lets you move around and manipulate directories and files, and install/update tools, applications, software, etc.
# -

# ![cli](https://upload.wikimedia.org/wikipedia/commons/2/29/Linux_command-line._Bash._GNOME_Terminal._screenshot.png)
# *Screenshot of a sample bash session in GNOME Terminal 3, Fedora 15.
# [Wikipedia](https://en.wikipedia.org/wiki/Command-line_interface)*

# + [markdown] slideshow={"slide_type": "slide"}
# ## Virtual Environments

# + [markdown] slideshow={"slide_type": "slide"}
# * __Recurring problems__
#     - Incompatible library (*package*) dependencies.
#     - Difficulty sharing and reproducing results, e.g. not knowing the versions of the installed libraries.
#     - Keeping a virtual machine for every project is tedious and costly.
#     - Constant fear that installing something new will break your setup for good.
# * __Solution__
#     - Isolate each project to improve compatibility and the reproducibility of results.
# * __How?__
#     - By using virtual environments.

# + [markdown] slideshow={"slide_type": "subslide"}
# ### Conda

# + [markdown] slideshow={"slide_type": "subslide"}
# ![Conda](https://conda.io/docs/_images/conda_logo.svg)
#
# *Package, dependency and environment management for any language—Python, R, Ruby, Lua, Scala, Java, JavaScript, C/ C++, FORTRAN.* [(Link)](https://conda.io/docs/)

# + [markdown] slideshow={"slide_type": "subslide"}
# __Why Conda?__
#
# * Open source.
# * Manages libraries __and__ virtual environments.
# * Works on Linux, Windows and macOS.
# * Language-agnostic (it was initially developed for Python).
# * Easy to install and use.
#     - Miniconda: installer for `conda` (recommended).
#     - Anaconda: installer for `conda` plus other scientific packages.

# + [markdown] slideshow={"slide_type": "subslide"}
# __Other alternatives__
# * ```pip``` + ```virtualenv```: the former is Python's favourite package manager and the latter a virtual-environment manager; the drawback is that they are Python-only.
#     - Note: you can also install with ```pip``` inside Conda.
# * __Docker__ is a tool very much in fashion in large projects because it is, in simple terms, halfway between virtual environments and virtual machines.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Python

# + [markdown] slideshow={"slide_type": "slide"}
# The main scientific libraries we will install and use during the course are:
#
# * [Numpy](http://www.numpy.org/): scientific computing.
# * [Pandas](https://pandas.pydata.org/): data analysis.
# * [Matplotlib](https://matplotlib.org/): visualization.
# * [Altair](https://altair-viz.github.io/): declarative visualization.
# * [Scikit-Learn](http://scikit-learn.org/stable/): machine learning.
#
# As the semester goes on we will cover each of these (and other) libraries in more detail.

# + [markdown] slideshow={"slide_type": "slide"}
# ## Project Jupyter

# + [markdown] slideshow={"slide_type": "slide"}
# *[Project Jupyter](https://jupyter.org/index.html) exists to develop open-source software, open-standards, and services for interactive computing across dozens of programming languages.*
#
# <img src="https://2.bp.blogspot.com/-Q23VBETHLS0/WN_lgpxinkI/AAAAAAAAA-k/f3DJQfBre0QD5rwMWmGIGhBGjU40MTAxQCLcB/s1600/jupyter.png" alt="" align="center"/>

# + [markdown] slideshow={"slide_type": "subslide"}
# ### Jupyter Notebook

# + [markdown] slideshow={"slide_type": "subslide"}
# A web application for creating and sharing documents that contain code, equations, visualizations and text. Its uses include:
#
# * Data cleaning
# * Data transformation
# * Numerical simulation
# * Statistical modelling
# * Data visualization
# * Machine learning
# * Much more.

# + [markdown] slideshow={"slide_type": "subslide"}
# ![Jupyter Notebook Example](https://jupyter.org/assets/jupyterpreview.png)

# + [markdown] slideshow={"slide_type": "subslide"}
# ### Jupyter Lab

# + [markdown] slideshow={"slide_type": "subslide"}
# * The next generation of the *Project Jupyter* user interface.
# * Like Jupyter Notebook, it can edit .ipynb files (notebooks) and adds tools such as a terminal, a text editor, a file browser, etc.
# * Jupyter Lab will eventually replace Jupyter Notebook (although the stable version was only released a few months ago).
# * It has a number of extensions you can install (and even develop yourself).
#
# You can try Jupyter Lab with just two clicks!
#
# 1. Go to this link: https://github.com/jupyterlab/jupyterlab-demo
# 2. Click the binder badge: ![Binder](https://mybinder.org/badge_logo.svg)
# -

# ### Other projects
# Among the best known are:
#
# * [JupyterHub](https://jupyterhub.readthedocs.io/): serve Jupyter Notebooks to multiple users.
# * [nbviewer](https://nbviewer.jupyter.org/): share Jupyter Notebooks.
# * [Jupyter Book](https://jupyterbook.org/): build and publish books on computational topics.
# * [Jupyter Docker Stacks](https://jupyter-docker-stacks.readthedocs.io/): Jupyter images for use with Docker.

# + [markdown] slideshow={"slide_type": "slide"}
# ## Git

# + [markdown] slideshow={"slide_type": "slide"}
# _[__Git__](https://git-scm.com/) is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency._
#
# Version control is a system that records the changes made to a file or set of files over time, so that you can recover specific versions later. Although the examples in this book use source-code files as the files under version control, you can really do the same with almost any type of file on a computer ([source](https://git-scm.com/book/es/v2/Inicio---Sobre-el-Control-de-Versiones-Acerca-del-Control-de-Versiones)).
#
# It is important to understand that _Git_ is the tool that versions your projects; however, when you want more functionality, such as sharing or synchronizing your work, you need external services. The best known are:
#
# * GitHub
# * GitLab
# * Bitbucket
#
# Think of it this way: anyone could implement e-mail between two computers connected over a LAN but not to the Internet, yet people use services such as Gmail, Outlook, etc. to take better advantage of what e-mail technology offers. This is a close analogy for the difference between Git and services such as GitHub or GitLab.

# + [markdown] slideshow={"slide_type": "subslide"}
# ## GitHub

# + [markdown] slideshow={"slide_type": "subslide"}
# _[GitHub](https://github.com/) is a development platform inspired by the way you work. From open source to business, you can host and review code, manage projects, and build software alongside 30 million developers._
# That is, it is a platform for hosting projects that use Git as their version-control system.
#
# The **course material** will be available on GitHub at the following link: https://github.com/aoguedao/mat281_2020S2.
#
# As an experiment this year, a course website is being built with Jupyter Book and hosted on GitHub: https://aoguedao.github.io/mat281_2020S2
# -

# ## Summary
# * Operating system: any, although Unix-based options are recommended.
# * Programming language: Python.
# * Virtual environments: Conda, preferably via Miniconda.
# * Working environment: Jupyter Lab.
# * Version control: Git & GitHub.
lessons/M1L02_toolkit_and_setup.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Add an object to the interactive
#
# *This will take a name and use the information from wikipedia to generate an entry*

# +
# %load_ext autoreload
# %autoreload 2

# This class has all the methods needed to create an object. It is minimally commented for now...
from addObjectUtils import SVLobject
# -

# ## Create a few new objects

obj = SVLobject()
obj.name = "SN 1987a"
obj.category = "Nebulae"
obj.fileName = "userObjects/SN1987a.json"
obj.view = 0.025
obj.createObject()

obj = SVLobject()
obj.name = "Cassiopeia A"
obj.category = "Nebulae"
obj.fileName = "userObjects/CassiopeiaA.json"
obj.view = 0.5
obj.createObject()

obj = SVLobject()
obj.name = "Parker Probe"
obj.category = "Satellites"
obj.fileName = "userObjects/ParkerProbe.json"
obj.createObject()

# +
# This one takes a VERY long time
# -- lots of images that it can't find captions for, and it has to search again and again through the entire parse tree
# -- also a lot of strange-looking captions coming out here
obj = SVLobject()
obj.name = "New Horizons"
obj.category = "Satellites"
obj.fileName = "userObjects/NewHorizons.json"
obj.createObject()
# -

# ### Compile all the objects for the interactive

from compileObjects import compileAll
compileAll()
data/addObject.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .r
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: R
#     language: R
#     name: ir
# ---

# # Connect to Dremio with R (use this as a template)

# +
################################################
# Connect to Dremio
################################################
# Don't edit this

if (!require(odbc)) { install.packages("odbc"); require("odbc") }
if (!require(getPass)) { install.packages("getPass"); require("getPass") }
require(DBI)

dremio_host <- 'dremio-client.dremio.svc.cluster.local'
dremio_port <- 31010
dremio_driver <- Sys.getenv('DREMIO_DRIVER')

cnxn <- DBI::dbConnect(
    odbc::odbc(),
    driver = "Dremio ODBC Driver 64-bit",
    uid = getPass::getPass(prompt = "Dremio Username: "),
    pwd = getPass::getPass(prompt = "Dremio Password: "),
    host = dremio_host,
    port = dremio_port,
    AuthenticationType = "Basic Authentication",
    ConnectionType = "Direct"
)
print("Connected.")
# -

# ## An overview of what's available

print("Catalogs:")
sql = "SELECT * FROM INFORMATION_SCHEMA.CATALOGS LIMIT 5"
request <- dbSendQuery(cnxn, sql)
df <- dbFetch(request, n = 100)
df

print("Tables:")
dbListTables(cnxn)

print("Columns:")
sql = "SELECT * FROM INFORMATION_SCHEMA.COLUMNS LIMIT 5"
request <- dbSendQuery(cnxn, sql)
df <- dbFetch(request, n = 100)
df

dbListFields(cnxn, "dremiosharedstorage.shared.\"12100121.csv\"")

# +
# If you want to close the connection
# DBI::dbDisconnect(cnxn)

# +
# For more commands, see the SQL Reference
# https://docs.dremio.com/sql-reference/

################################################
# End of Connect to Dremio
################################################
# -

# # Get started with your analysis!

sql = "SELECT * FROM dremiosharedstorage.shared.\"12100121.csv\""
request <- dbSendQuery(cnxn, sql)
df <- dbFetch(request)
df

# The upstream data that I'm using didn't label the columns, so I have to.
colnames(df) <- df[1,]
df <- df[-1, ]
names(df)

library(dplyr)
values = df %>% select(VALUE)

# string to number
values <- as.data.frame(lapply(values, as.numeric))

mean(values$VALUE)
sd(values$VALUE)

v = values$VALUE
# hist(c(values$VALUE), "Values", breaks = 20)
hist(v, main = "Distribution of VALUE", breaks = 100)

# +
# Exclude outliers
hist(v[v < 4000], main = "Distribution of VALUE (outliers excluded)", xlim = c(-50, 4000), breaks = 60)
.dremio/02-Dremio-R-Connect.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# <br>
#
# # MACHINE LEARNING AND STATISTICS PROJECT 2020
#
# This notebook is my project for the Machine Learning and Statistics 2020 module. It contains the following sections: the references used to complete the project, the project instructions as specified by the lecturer, the purpose of the project, and the sections explaining the processes, with markdown cells along with further comments inside the code cells.
#
# ***
# References
#
# - [Fundamentals DA project's repository](https://github.com/Ainara12/Fundamentals-Project/blob/master/Fundamentals%20DA-Project.ipynb)
# - [Introduction to Linear and Polynomial Regression](https://towardsdatascience.com/introduction-to-linear-regression-and-polynomial-regression-f8adc96f31cb)
# - [Numpy polyfit function documentation](https://numpy.org/doc/stable/reference/generated/numpy.polyfit.html)
# - [Numpy poly1d function documentation](https://numpy.org/doc/stable/reference/generated/numpy.poly1d.html)
# - [Lecturer's simple linear regression Jupyter notebook](https://github.com/ianmcloughlin/jupyter-teaching-notebooks/blob/master/simple-linear-regression.ipynb)
# - [Machine Learning Polynomial Regression](https://www.w3schools.com/python/python_ml_polynomial_regression.asp)
# - [Introduction to Keras](https://keras.io/getting_started/intro_to_keras_for_engineers/)
# - [Lecturer's linear regression in Keras Jupyter notebook](https://github.com/ianmcloughlin/jupyter-teaching-notebooks/blob/master/keras-linear.ipynb)
# - [Neural network with Keras tutorial](https://machinelearningmastery.com/tutorial-first-neural-network-python-keras/)
# - [Classification and regression using Keras
# tutorial](https://stackabuse.com/tensorflow-2-0-solving-classification-and-regression-problems/)
# - [Linear regression using Python Sklearn video tutorial](https://www.youtube.com/watch?v=b0L47BeklTE&ab_channel=RylanFowers)
# - [How to deploy a machine learning model using flask](https://towardsdatascience.com/deploy-a-machine-learning-model-using-flask-da580f84e60c)
# - [Saving a machine learning model](https://www.geeksforgeeks.org/saving-a-machine-learning-model/)
# - [How to save ML models](https://www.kaggle.com/prmohanty/python-how-to-save-and-load-ml-models)
#
# ***

# <br>
#
# ## Project instructions
#
# ![Project%20Instructions.jpg](attachment:Project%20Instructions.jpg)

# <br>
#
# ## Overall purpose of this project
#
# My goal with this project is to use machine learning to make predictions using the 'powerproduction' dataset. I have some work done on this in my project for the Fundamentals of Data Analysis module, which I will use as a reference to create a model that predicts a wind turbine's output power from wind speed values [Link to Fundamentals DA project in github](https://github.com/Ainara12/Fundamentals-Project/blob/master/Fundamentals%20DA--Project.ipynb). Once this model is complete I will create a script that runs a web service based on the model, and a Dockerfile to build and run the web service in a container.
#
# To achieve this goal I am using the information and knowledge gathered during the course of this module, along with the references consulted and detailed in this document.

# <br>
#
# ## Dataset analysis
#
# ### Loading and observing the raw dataset
#
# Before we get into creating a prediction model, I am looking into the raw dataset to get a feel for how it looks and to see some descriptive statistics.
# See the work in the code cells below:

# +
# Importing the modules I am going to use
import pandas as pd  # to load and organize the dataset
import matplotlib.pyplot as plt  # for visualisation
import numpy as np  # to work with arrays and apply regression functions

# loading dataset
data = pd.read_csv(r'Powerproduction dataset.csv', header=0)
# converting this dataset into a dataframe with pandas
powerdata = pd.DataFrame(data, columns=['speed', 'power'])
print(powerdata)

# +
# initially describing the dataset using pandas functionality
powerdata.describe()
# -

# <br>
#
# The code cells above show us the structure of this dataset. The dataset is based on a real-life scenario in which we have 499 rows (initially 500, but I dropped the first row as it had no information on it) and two columns, for wind speed and the power generated.
# The dataset shows the relation between the wind speed and the power output derived from that speed.
#
# Using the describe command we get some general information, such as the mean (average) of 'speed' being 12.62 while that of 'power' is 48.11, the standard deviation, and the minimum and maximum values.
#
# In the cells below I am plotting this data so we can see a clearer picture of the correlation between these 2 variables.

# +
# Plotting the raw dataset
# %matplotlib inline
plt.rcParams['figure.figsize'] = ([15, 10])

# plotting each column against the row index: speed as blue dots, power in green
plt.plot(powerdata['speed'], marker='o', markerfacecolor='blue', markersize=3,
         color='skyblue', linewidth=4, label='speed')
plt.plot(powerdata['power'], marker='2', color='olive', linewidth=4, label='power')
plt.legend()  # adding legend
plt.show()
# -

# <br>
#
# As seen above, and considering my analysis in the [Fundamentals-DA project's repository](https://github.com/Ainara12/Fundamentals-Project/blob/master/Fundamentals%20DA--Project.ipynb), I have concluded that this dataset is better fit by a polynomial linear regression.
#
# Polynomial regression uses the relationship between the variables to find the best way to draw a line through the data points, but this line does not need to be straight.
#
# This type of regression has some advantages that might be useful for analysing and predicting with this dataset, such as:
#
# - A broad range of functions can be fit under it.
# - It fits a wide range of curvature in the source data.
#
# Some inconveniences might also arise:
#
# - The presence of outliers can strongly distort the results.
# - In general there are fewer model-validation tools for detecting outliers in this type of regression than for simple linear regression.
#
# See below how I created the curve that fits this data and found the **R-squared value** to be high (closer to 1).
#
# In order to apply the polynomial regression to our plot, I am creating a variable that uses the **numpy poly1d** and **numpy polyfit** functions to generate the curve that fits the data.

# +
# Applying polynomial regression to our plot:
# first I separate the 2 variables to represent them
speed = powerdata['speed']
power = powerdata['power']

Polynom = np.poly1d(np.polyfit(speed, power, deg=3))
xline = np.linspace(0.0, 25, 200)  # creating an evenly spaced axis spanning the observed speeds

# plotting these elements with a scatter plot
plt.rcParams['figure.figsize'] = ([15, 10])
plt.scatter(speed, power)  # using a scatterplot to represent the two variables
plt.plot(xline, Polynom(xline), 'r*')  # adding xline along with the result of the poly1d and polyfit functions
plt.show()
# -

# <br>
#
# Now we calculate the **R-squared value**, also called the **coefficient of determination**. This value measures how well a regression model fits the data. We can obtain it by squaring the correlation coefficient associated with the **numpy polyfit** fit, and it can indicate a positive or negative relationship.
#
# The correlation measured here is Pearson's correlation coefficient, which differs from the **coefficient of determination** described above in that it measures the strength of the linear relationship between 2 sets of observations — in this case, how much the output power depends on the wind speed; power output is the dependent variable (the *y* value). It also tells us whether the relationship is positive or negative.
#
# In this case, since we have a polynomial regression, I am using **Sklearn** to obtain the **coefficient of determination**. I followed the steps found in this [guide](https://www.w3schools.com/python/python_ml_polynomial_regression.asp).
#
# I describe the process in the cell below.

# +
# calculating the R-squared value with sklearn
# first, I import the specific module from sklearn
from sklearn.metrics import r2_score

# Using the r2_score function with my previously created variable 'Polynom'
r2 = r2_score(power, Polynom(speed))
print('The R-squared value is: ', r2)
# -

# <br>
#
# ### How do we interpret the R-squared value?
#
# The closer a value is to 1, the better the fit; in this case, with 0.7321827537382541, we have a moderate to very good fit. This implies that most of the changes in the dependent variable *y* (power) are explained by the corresponding changes in the independent variable *x* (speed).

# <br>
#
# ## Presenting and training the *Model*
#
# In this section I am going to create a model to predict values, based on what we have learned about this dataset in the previous sections.
# Once I have created the model I will analyse its accuracy.
#
# <br>
#
# ### Presenting the model
#
# My model is based on this [guide/tutorial](https://www.w3schools.com/python/python_ml_polynomial_regression.asp), in which, after confirming that polynomial regression fits the data very well, I use the fitted polynomial to predict how much power will be generated from a specific wind speed value.
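Before testing the model, the R-squared value computed earlier with `r2_score` can be verified against its textbook definition, $R^2 = 1 - SS_{res}/SS_{tot}$. A minimal standalone check on synthetic, noise-free cubic data (illustrative numbers, not the powerproduction dataset):

```python
import numpy as np

# Noise-free cubic data: a degree-3 polynomial fit should explain essentially all variance.
x = np.linspace(0, 10, 20)
y = 2 * x**3 - x + 5

p = np.poly1d(np.polyfit(x, y, deg=3))

ss_res = np.sum((y - p(x)) ** 2)      # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)  # total sum of squares
r2 = 1 - ss_res / ss_tot

print(round(r2, 6))  # 1.0 -- a perfect fit on noiseless data
```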
#
# In this example we are going to test the model considering that the wind speed is 2.179.
# Let's see the steps I have taken below and what output power result we obtain:

# +
#Using my variable Polynom created in previous steps
Polynom=np.poly1d(np.polyfit(speed, power,deg=3))

#I enter the value that I want to find out with this model
Power_generated = Polynom(2.179)
print('This is the amount of power generated considering wind speed is 2.179 mph:')
print(Power_generated)
# -

#
# <br>
#
# ## Experimenting with Keras
#
# In this section of the project I am experimenting with Keras neural networks, following the [lecturer's tutorials](https://web.microsoftstream.com/video/b3c0a6ba-86b6-4f4a-bc1d-48d26c868bea).
#
# My focus here is to set aside a portion of the data for training the model and so reach more accurate results.
# See my attempt below:

# ### What is Keras for machine learning?
#
# Keras is a Python library that can be used on top of TensorFlow to make deep learning models faster and easier to implement.
#
#
# I am using this [tutorial](https://stackabuse.com/tensorflow-2-0-solving-classification-and-regression-problems/) as reference.
# See the steps below.
#
# - First I import the necessary 'sklearn' modules to split the data into training and test sets.
# - I import 'Tensorflow keras layers' and 'Tensorflow keras models' to create and train the model.

# +
#First I import the needed modules to divide the dataset into training and
#test sets
from sklearn.model_selection import train_test_split

#x and y are the speed and power columns loaded from the dataset earlier
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.20, random_state=0)
from tensorflow.keras.layers import Input, Dense, Activation,Dropout
from tensorflow.keras.models import Model
# -

# <br>
#
# - The next step is to create the model using 'Sequential'. 
There are two ways to build Keras models (functional and sequential); in this case I am using the same approach as the tutorial, as this is the simplest type of model, with a linear stack of layers. For this dataset I think it is enough to use Sequential.

# +
#Creating model
import tensorflow.keras as kr #importing keras under the alias used below

model = kr.models.Sequential()
model.add(kr.layers.Dense(2, input_shape=(1,), activation='sigmoid', kernel_initializer="glorot_uniform", bias_initializer="glorot_uniform"))
model.add(kr.layers.Dense(1, activation='linear', kernel_initializer="glorot_uniform", bias_initializer="glorot_uniform"))

model.compile(kr.optimizers.Adam(learning_rate=0.001), loss='mean_squared_error')#Adjusting learning rate to 0.001 for more accuracy
# -

# <br>
#
# In the next step we train the model. I selected a batch size of 10, which means the model will pass 10 values at a time, and 1000 epochs.

# +
#training model
history= power_model = model.fit(x_train, y_train, batch_size=10, epochs=1000, verbose=1, validation_split=0.2)
# -

# <br>
#
# Now we plot the result, and can see that the result changes as the model learns each time.

# +
#plotting results

#resizing
plt.rcParams['figure.figsize']=([15, 10])

#'poly' is assumed to be a DataFrame with the dataset's 'x' (speed) and 'y' (power) columns prepared earlier
plt.plot(poly['x'], poly['y'], label='actual')
plt.plot(poly['x'], model.predict(poly['x']), label='prediction')
plt.legend()

# +
#checking the History object to see the record of the loss values
#and metric values during training
'''Source:https://www.tensorflow.org/guide/keras/train_and_evaluate '''
history.history

# +
# Making predictions with this model:
prediction=model.predict([4.0,0.234])
print('This is the amount of power generated for the wind speed you have provided:')
print(prediction)
# -

# <br>
#
# To evaluate this model, I used the method shown in the tutorial: the root mean squared error method.
# This method consists of taking the square root of the mean of the squared errors. 
# +
#Evaluating the performance of a regression model on the test set using
#the root mean squared error method
from sklearn.metrics import mean_squared_error
from math import sqrt

pred_train = model.predict(x_train)
print(np.sqrt(mean_squared_error(y_train,pred_train)))

pred = model.predict(x_test)
print(np.sqrt(mean_squared_error(y_test,pred)))
# -

# <br>
#
# ### Conclusion about this model
#
# Looking at the shape that the prediction is creating, it seems that this prediction model is more accurate.
# Considering the evaluation we performed in the code cell above using root mean squared error, we can see that both the train and test sets are performing well, as they both give similar values.
# In cases where, for example, the model performs better on the training set than on the test set, we talk about 'overfitting'.
# 'Overfitting' means that the model has an excessively complex structure and learns both the existing relations among the data and the noise. On the other hand, 'underfitting' is often the consequence of a model being unable to capture the relations among the data.
# [source](https://realpython.com/train-test-split-python-data/#underfitting-and-overfitting).
#
#
#
# <br>
#
# ## Creating a *Model* using Sklearn
#
# After the approaches made in previous sections, I have decided to use one last approach with the 'sklearn' module.
# Using Keras and TensorFlow to create and train models on this dataset was good practice, and I will certainly keep learning about it.
#
# I would now like to use a linear regression model following [this tutorial](https://www.youtube.com/watch?v=b0L47BeklTE&ab_channel=RylanFowers), which I think might be easier and more accurate for making predictions as the model to be included in the second part of this project. 
# See details of the process in the markdown and code cells below:

#First we import the modules we are going to use
from sklearn.linear_model import LinearRegression #import linear regression model
from sklearn.model_selection import train_test_split #to divide data
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# +
#loading data in this section for better understanding
Powerdataset=pd.read_csv('Powerproduction dataset.csv', delimiter=',')
x=Powerdataset['speed']
y=Powerdataset['power']

#plotting dataset to have this included on this section:
Powerdataset.plot(kind='scatter', x= 'speed', y='power')
plt.show()

# +
#Let's create our linear regression model
#doing test train split
X_train, X_test, y_train, y_test= train_test_split(Powerdataset.speed, Powerdataset.power)

# +
#Let's see how this split looks
plt.scatter(X_train, y_train, label='Training Data', color='g', alpha=.7)
plt.scatter(X_test, y_test, label='Test Data', color= 'r', alpha=.7)
plt.legend()
plt.title('Test train split')
plt.show()
# -

# <br>
#
# Once we have our data split, so that one part is used for training and the other part for testing (as we can see in the visualization above this cell), we move on to the model creation section.

# +
#Model creation

#naming the model LR (Linear Regression), as in the tutorial, for easier understanding
LR=LinearRegression()
LR.fit(X_train.values.reshape(-1,1), y_train.values)#fitting on X_train and y_train; X_train is reshaped because sklearn
#expects the features as a 2-dimensional array
# -

# <br>
#
# The next step is to use the model to predict on our test data. 
See below:

# +
# Predicting
prediction=LR.predict(X_test.values.reshape(-1,1))

#Plotting X_test against the prediction results in the same plot
plt.plot(X_test, prediction, label='Linear Regression', color='r')
plt.scatter(X_test, y_test, label='Actual Test Data', color='b', alpha=.7)
plt.legend()
plt.show()
# -

# <br>
#
# As seen in previous sections, the **R-squared value** we found for the simple linear regression model used on this dataset is 0.531347729791333, which is a low to moderate fit.
# Let's try this model to predict the power output based on the wind speed values we enter as input to our model.
#
#
#

# +
#Making predictions for specific values

#using the predict command we enter a sample wind speed:
print('This is the power generated considering your input: ')
LR.predict(np.array([[25.00]]))[0]
# -

# <br>
#
# We can see that the power generated result we received is the same as the power result we can see in our plot above. The model seems to be accurate enough.
# Finally, we will use sklearn's score function to evaluate this model's accuracy.

# +
#score function
LR.score(X_test.values.reshape(-1,1),y_test.values)
# -

# <br>
#
# Considering the maximum score is 1.0, this does not seem a bad model to fit to this dataset.

# <br>
#
# ### Saving my model
#
# The last thing I am going to do with this model is save it using the 'joblib' module, so it can be used by our server to predict power values depending on the wind speed. I have used this [source](https://www.kaggle.com/prmohanty/python-how-to-save-and-load-ml-models) and this [source](https://towardsdatascience.com/deploy-a-machine-learning-model-using-flask-da580f84e60c) to find out the best approach.

#First we import the joblib module
import joblib

# +
joblib_file = "joblib_LR_Model.pkl"

joblib.dump(LR, "joblib_LR_Model.pkl" ) #saving model as a pickle file. 
# +
# Load from file
joblib_LR_model = joblib.load(joblib_file)

joblib_LR_model

# +
#Calculate test score
score = joblib_LR_model.score(X_test.values.reshape(-1,1),y_test.values)

print(score)
print("Test score: {0:.2f} %".format(100 * score))

# +
# Predict the labels using the reloaded model
Ypredict = joblib_LR_model.predict(np.array([[25.00]]))[0]

print('This is the power generated considering your input:',Ypredict)
# -

# <br>
#
# This was my first approach to saving the model. I ended up using 'pickle' instead, as it seemed to work better and more simply.
# For this I used a linear regression model, following this [tutorial](https://towardsdatascience.com/deploy-a-machine-learning-model-using-flask-da580f84e60c).
# See the process below.
#

#Importing needed modules
from sklearn.model_selection import train_test_split #sklearn to select model and LR
from sklearn.linear_model import LinearRegression
import pickle #importing to save the model to disk

# +
#loading dataset
dataset = pd.read_csv('Powerproduction dataset.csv')

X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 1].values
# -

#train data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.33, random_state = 0)

# +
#creating model
regressor = LinearRegression()
regressor.fit(X_train, y_train)

y_pred = regressor.predict(X_test)
# -

#Saving model with pickle
pickle.dump(regressor, open('model.pkl','wb'))

# +
#Loading and printing model to make predictions
model = pickle.load(open('model.pkl','rb'))
print(model.predict([[7.20]]))#this command predicts, for example, how much power is generated with 7.20 mps of wind
# -

# <br>
#
# ## Web service development
#
# In this section I am going to go through the steps I followed to complete the last part of the tasks, which is to develop a web service that works in conjunction with my machine learning model.
#
# This web server will respond with predicted power values based on speed values sent as HTTP requests. 
#
# You can find my references for this section below:
#
# ***
# References:
#
# - [Lecturer's example app repository](https://github.com/ianmcloughlin/random-app)
#
# - [Creating a virtual environment](https://realpython.com/python-virtual-environments-a-primer/)
#
# - [Flask Quickstart documentation](https://flask.palletsprojects.com/en/1.1.x/quickstart/)
#
# - [Bootstrap introduction](https://getbootstrap.com/docs/5.0/getting-started/introduction/)
#
# - [Stackoverflow post](https://stackoverflow.com/questions/65515800/the-method-is-not-allowed-for-the-requested-url-using-flask-post-method)
#
# - [Deploy machine learning model using flask](https://towardsdatascience.com/how-to-easily-deploy-machine-learning-models-using-flask-b95af8fe34d4)
#
# - [Send data from textbox into flask](https://stackoverflow.com/questions/12277933/send-data-from-a-textbox-into-flask)
#
# - [Python Flask tutorial](https://www.youtube.com/watch?v=MwZwr5Tvyxo&list=RDCMUCCezIgC97PvUuR4_gbFUs5g&index=2&ab_channel=CoreySchafer)
#
# - [Deploy machine learning models using flask](https://www.kdnuggets.com/2019/10/easily-deploy-machine-learning-models-using-flask.html)
#
# ***

# <br>
#
# ### App creation with Flask: first attempt
#
# My first intention was to create an app that loaded the model saved using the 'pickle' or 'joblib' modules as displayed above, and then calculated the predictions to be shown in a 'results' tab for the users.
#
# In the end, this approach did not work as I wanted; instead it shows a graphic with the prediction values extracted from the linear regression model using sklearn's LinearRegression, as shown in previous sections of this notebook.
#
# Please see below the documentation on how I tried to create this version; the work is available in this repository, in the drafts folder, in a subfolder called *initial_app_trial*.
#
# Please see my code for this initial app below for your reference. 
For this I used this [guide](https://towardsdatascience.com/how-to-easily-deploy-machine-learning-models-using-flask-b95af8fe34d4) as reference:
#
# - The structure I followed was to create an app called 'app.py' as the server, a separate file to respond to the HTTP requests, and to add the model file. Then I created separate folders: *static*, for images, and *templates*, for my HTML index file, which I intended to contain a button where the user could select a value for wind speed and then obtain a prediction of the power generated, based on the model.

# +
#initial app trial app.py code.

from flask import Flask, render_template, jsonify,request,redirect
# here I import flask, render_template to load my html document,
#the jsonify module which serializes data to JSON format, and request to manage request objects
import pickle
import numpy as np

app = Flask(__name__) #creating an instance of an app with Flask
model = pickle.load(open('model.pkl','rb')) #Loading my model with pickle

@app.route('/') #defining root route
def home():
    return render_template('index.html')

@app.route('/predict',methods=['POST'])
#defining the predict route with method POST; here I wanted to get values from the index
#form
def predict():

    int_features = [int(x) for x in request.form.values()]
    final_features = [np.array(int_features)]
    prediction = model.predict(final_features)

    output = round(prediction[0], 2)

    return render_template('index.html', prediction_text='Power generated would be {}'.format(output))

@app.route('/results',methods=['POST'])
def results():

    data = request.get_json(force=True)
    prediction = model.predict([np.array(list(data.values()))])

    output = prediction[0]
    return jsonify(output)

if __name__=='__main__': #function to add debugging options
    app.run(debug=True, port=5000)

# +
#request.py file code

import requests

url = 'http://127.0.0.1:5000/results' #selecting the url and the data to be sent as wind. 
r = requests.post(url,json={'wind':12})

print(r.json())

# +
#Finally, this is the index.html document that I wanted to use as the form to introduce the inputs

<!DOCTYPE html>
<html >
<head>
  <meta charset="UTF-8">
  <title>Deployment Tutorial 1</title>
  <link href='https://fonts.googleapis.com/css?family=Pacifico' rel='stylesheet' type='text/css'>
<link href='https://fonts.googleapis.com/css?family=Arimo' rel='stylesheet' type='text/css'>
<link href='https://fonts.googleapis.com/css?family=Hind:300' rel='stylesheet' type='text/css'>
<link href='https://fonts.googleapis.com/css?family=Open+Sans+Condensed:300' rel='stylesheet' type='text/css'>
<link rel="stylesheet" href="{{ url_for('static', filename='css/style.css') }}">
</head>

<body style="background: #000;">
 <div class="login">
	<h1>Power output prediction</h1>

     <!-- Main Input For Receiving Query to our ML -->
    <form action="{{ url_for('predict')}}" method="post">
    	<input type="text" name="wind" placeholder="wind" required="required" />
        <button type="submit" class="btn btn-primary btn-block btn-large">Predict power output</button>
    </form>

   <br>
   <br>
   {{ prediction_text }}

 </div>
</body>
</html>
# -

# <br>
#
# Since I was repeatedly getting error messages when trying this approach, I posted a question on Stack Overflow, which gave me some insights. I would like to add it here as a reference: I asked for help and was advised of some things that I then tried. See the [post here](https://stackoverflow.com/questions/65515800/the-method-is-not-allowed-for-the-requested-url-using-flask-post-method).

# <br>
#
# ### App creation with Flask: last attempt
#
# After many trials (some of them are included in my drafts folder), I finally created an app. Initially I wanted it to load the saved model and use it to make predictions; I am going to keep working on this, so I am leaving some parts of that code in it, although they might not be functional at the moment. 
#
# The result is in the folder named **lr_app** [direct link here](https://github.com/Ainara12/Machine-Learning-Statistics-Project2020/tree/master/my_project/lr_app).
#
# In order to create this app (and the previous attempts) I created a virtual environment following this [guide](https://realpython.com/python-virtual-environments-a-primer/), since I was having issues using Flask with my Python configuration.
#
# Once I had created this environment I was able to work with Flask inside the app folder. See below a copy of the code I used to create this app, and some pictures of what you can see when running it.
#
# The lr_app includes the following files, which are all included in the app folder:
#
# - static folder: includes the images used.
# - templates folder: includes the HTML files. 'about.html' gives instructions on how to get to the different parts of the web app, and 'myform.html' is the form used to enter the values.
# - .dockerignore file.
# - Dockerfile: as requested, contains the commands the user needs to call on the command line to assemble the image.
# - model.pkl: the model saved with pickle, to be loaded in the app.
# - README: details instructions on how to run the app on different operating systems, and also instructions on how to build and run the Docker image.
# - requirements.txt: includes the packages needed to run this app. 
# - server.py: this is the app code.

# +
#code for server.py

#importing necessary modules
from flask import Flask, render_template, request, redirect, url_for
import pickle

app = Flask(__name__)

model = pickle.load(open('model.pkl','rb')) #Loading my model with pickle

@app.route('/')
def index():
    return render_template('about.html') #the about page uses the static 'wind-turbines.jpg' image

@app.route('/submit_form',methods=['POST','GET'])
#Creating a submit form for the user to enter the necessary wind speed values
def submit_form():
    if request.method == 'POST':
        # Get the data from the POST request.
        data = request.form.get('text', type=float)
        return redirect(url_for('predict', pred=data)) #returning the data entered through the form as a redirect url.
    #on GET requests, render the form; the myform file is in the 'templates' folder
    return render_template('myform.html')

@app.route('/<pred>')
#accessing this by adding the entered value to the route returns the plot with our model's
#prediction values in relation to the actual values
def predict(pred):
    return app.send_static_file('Prediction plot LR.png')

if __name__=='__main__':
    app.run(debug=True, port=5000)

# +
#myform.html document code

<!DOCTYPE html>
<html>
<head>
	<title>Power output calculator</title>
</head>
<body>

<form action="/submit_form" method="post" class="reveal-content">
  Please enter wind speed to calculate power output
  <br>
  <input name="text" type="number" step="0.01">
  <button type="Submit" class="btn btn-default btn-lg">Send</button>
</form>

</body>
</html>

# +
#about.html document code

<!DOCTYPE html>
<html>
<head>
	<title>About page</title>
</head>
<body>
<h1> Instructions for this web app</h1>
'With this app you can enter wind speed values to get a prediction graphic based on my model, built on the Powerproduction dataset. 
Please navigate to the next page to access the submit form, using the route /submit_form'
</body>
</html>
# -

# <br>
#
# The final page returns an image of the prediction using linear regression in relation to the actual data.
#
# ![Prediction%20plot%20LR.png](attachment:Prediction%20plot%20LR.png)
#
# <br>
#
# As a conclusion to this project, I would like to add here the images showing how the web app looks, in case there are any errors when trying to run it.
#
# This is the initial page, with instructions on how to operate this web app:
#
# ![Root%20page.jpg](attachment:Root%20page.jpg)
#
# <br>
#
# This is the main page with the form:
#
# ![Submitform%20page.png](attachment:Submitform%20page.png)
#
#
# <br>
#
# When we enter a value it goes into the input bar, and we can then move to the 'prediction' page, where the image displays:
#
# ![last%20page.png](attachment:last%20page.png)

# ## END OF PROJECT
Machine Learning & Statistics-Project 2020.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

import torch
import torch.utils.data
import torchvision
from glob import glob
import json
import os
from pathlib import Path
import random
from PIL import Image

config = json.load(open(os.path.expanduser("~/.thesis.conf")))
db_folder = Path(config['datasets']) / Path("archive_org/")
identifier = "pointlinetoplane00kand"
os.chdir(str(db_folder))

# +
class ArchiveOrgDataset(torch.utils.data.Dataset):
    """Dataset of .jp2 page scans found under path/*/*/*.jp2."""

    def __init__(self, path, transform=None):
        self.path = path
        self.jp2 = glob(str(path / '*' / '*' / '*.jp2'))
        self.transform = transform

    def __len__(self):
        return len(self.jp2)

    def __getitem__(self, index):
        assert index < len(self.jp2)
        img = Image.open(self.jp2[index])
        if self.transform:
            img = self.transform(img)
        return img


class RotateTask(object):
    """Self-supervised rotation pretext task: rotate an image by a random
    multiple of 90 degrees and return the rotated tensor with its label."""

    def __init__(self, interpolation=Image.BILINEAR, resample=False, expand=False, center=None):
        self.interpolation = interpolation
        self.resample = resample
        self.expand = expand
        self.center = center  # was previously hard-coded to None, ignoring the argument

        self.directions = {0: 0,
                           1: 90,
                           2: 180,
                           3: 270
                           }

    def __call__(self, img):
        d = random.randint(0, 3)
        rimg = img.rotate(self.directions[d], resample=self.interpolation,
                          expand=self.expand, center=self.center)
        rimg = torchvision.transforms.functional.to_tensor(rimg)
        return dict(x=rimg, label=d)
# -

createRotateTask = torchvision.transforms.Compose([
    torchvision.transforms.RandomResizedCrop(64, scale=(0.8, 1), ratio=(1, 1)),
    RotateTask()
])

aset = ArchiveOrgDataset(db_folder, transform=createRotateTask)

aset[2]

data_loader = torch.utils.data.DataLoader(aset, batch_size=4)
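# The four-way rotation labelling that `RotateTask` performs can be illustrated without any image or deep learning libraries: `np.rot90` stands in for PIL's `rotate`, and the label is simply the number of 90-degree turns. A hypothetical, framework-free sketch:

```python
import random
import numpy as np

def rotate_task(img, rng=random):
    """Rotate `img` by a random multiple of 90 degrees and return
    (rotated, label), where label in {0, 1, 2, 3} encodes the angle."""
    label = rng.randint(0, 3)
    return np.rot90(img, k=label), label

img = np.arange(16).reshape(4, 4)
rotated, label = rotate_task(img)

# Rotating back by the labelled amount recovers the original image,
# which is what makes the label predictable from the rotated input alone.
restored = np.rot90(rotated, k=-label)
print(np.array_equal(restored, img))  # → True
```

# A model trained to predict `label` from `rotated` learns image features without manual annotation, which is the point of the rotation pretext task.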
notebooks/data/archive_dataset.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import mne import pickle import sys import os # BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) # sys.path.append(BASE_DIR) # %matplotlib inline mne.utils.set_config('MNE_USE_CUDA', 'true') mne.cuda.init_cuda(verbose=True) # - #Load transformed data from saved file into list data=pickle.load(open('pickled/OpenBCISession_2020-02-14_11-09-00-SEVEN', 'rb')) # + #Naming system for blocks into integers bloc={ "sync":1, "baseline":2, "stressor":3, "survey":4, "rest":5, "slowBreath":6, "paced":7 } def createMNEObj(data, name='Empty'): #Create Metadata sampling_rate = 125 channel_names = ['Fp1', 'Fp2', 'C3', 'C4', 'P7', 'P8', 'O1', 'O2', 'F7', 'F8', 'F3', 'F4', 'T7', 'T8', 'P3', 'P4', 'time', 'bpm', 'ibi', 'sdnn', 'sdsd', 'rmssd', 'pnn20', 'pnn50', 'hr_mad', 'sd1', 'sd2', 's', 'sd1/sd2', 'breathingrate', 'segment_indices1', 'segment_indices2', 'block'] channel_types = ['eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'eeg', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'misc', 'stim'] n_channels = len(channel_types) info = mne.create_info(ch_names=channel_names, sfreq=sampling_rate, ch_types=channel_types) info['description'] = name print(info) transformed = [] start=-1.0 for i in range(len(data)): add=[] add=data[i][1:17] # print(data[i][19].keys()) if start==-1: start=data[i][18].hour*3600 + data[i][18].minute*60 + data[i][18].second + data[i][18].microsecond/1000 add.append(0.0) else: tim=data[i][18].hour*3600 + data[i][18].minute*60 + data[i][18].second + data[i][18].microsecond/1000 add.append(tim-start) # 
add.append(str(data[i][18].hour)+':'+str(data[i][18].minute)+':'+str(data[i][18].second)+':'+str(int(data[i][18].microsecond/1000))) # try: add.append(data[i][19]['bpm']) # except Exception as e: # print(e, i) # print(data[i][19]) # print(len(data)) add.append(data[i][19]['ibi']) add.append(data[i][19]['sdnn']) add.append(data[i][19]['sdsd']) add.append(data[i][19]['rmssd']) add.append(data[i][19]['pnn20']) add.append(data[i][19]['pnn50']) add.append(data[i][19]['hr_mad']) add.append(data[i][19]['sd1']) add.append(data[i][19]['sd2']) add.append(data[i][19]['s']) add.append(data[i][19]['sd1/sd2']) add.append(data[i][19]['breathingrate']) add.append(data[i][19]['segment_indices'][0]) add.append(data[i][19]['segment_indices'][1]) add.append(bloc[data[i][20]]) transformed.append(np.array(add)) transformed=np.array(transformed) print(transformed[0]) #have to convert rows to columns to fit MNE structure transformed=transformed.transpose() print(transformed[0], transformed[1], transformed[2], transformed[3]) print(len(transformed[0])) loaded=mne.io.RawArray(transformed, info) return loaded # - raw=createMNEObj(data) #raw.filter(0.5, 50, fir_design='firwin') mne.io.Raw.filter(raw,l_freq=1,h_freq=50) # montage = mne.channels.read_custom_montage('./cap.txt') montage = mne.channels.make_standard_montage('easycap-M1') raw.set_montage(montage, raise_if_subset=False) layout = mne.channels.read_layout('cap', path='./') # layout.plot() # same result as: mne.viz.plot_layout(biosemi_layout) raw.plot_psd_topo(layout=layout) raw.plot_psd() mne.io.Raw.filter(raw,l_freq=1,h_freq=50) raw.plot_psd() # + def hasNan(nl): for thing in nl: if type(thing) is list: if hasNan(thing): return True if thing != thing: return True return False hasNan(raw) # - # picks = mne.pick_channels_regexp(raw.ch_names, regexp='EEG 05.') raw.plot(duration=10.0, start=0.0, n_channels=1) raw.plot() raw.plot(duration=10.0, start=0.0, order=['bpm', 'ibi', 'sdnn', 'sdsd', 'rmssd', 'pnn20', 'pnn50', 'hr_mad', 'sd1', 
'sd2', 's', 'sd1/sd2'],
         n_channels=16)

data = raw.get_data()
print(data.shape)

raw.plot_sensors(kind='topomap', ch_type='eeg');

start, stop = 0, 200
data, times = raw[:, start:stop]  # fetch all channels and the first 200 time points
print(data.shape)
print(times.shape)

raw.load_data()
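# The `hasNan` helper above relies on the IEEE-754 rule that NaN is the only value not equal to itself, so `x != x` is true exactly when `x` is NaN. A minimal standalone version of the same check:

```python
def has_nan(values):
    """Return True if any element, including in nested lists, is NaN."""
    for v in values:
        if isinstance(v, list):
            if has_nan(v):
                return True
        elif v != v:  # only NaN fails to equal itself
            return True
    return False

print(has_nan([1.0, [2.0, float("nan")]]))  # → True
print(has_nan([1.0, [2.0, 3.0]]))           # → False
```

# For large numeric arrays, `np.isnan(arr).any()` on the output of `raw.get_data()` is the faster way to run the same check.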
mnePipeline-Copy2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- from __future__ import absolute_import from __future__ import division from __future__ import print_function from datetime import datetime import os.path import sys import time import math import numpy as np from six.moves import xrange import tensorflow as tf import threading from config import * from data_pre import kitti from utils.util import * from nets import * # + FLAGS = tf.app.flags.FLAGS tf.app.flags.DEFINE_string('dataset','KITTI',"""Current kitti dataset.""") tf.app.flags.DEFINE_string('data_path','',"""Root directory of data""") tf.app.flags.DEFINE_string('image_set','train',"""can be set as train, trainval,val or test""") tf.app.flags.DEFINE_string('train_dir','../log',"""event log""") tf.app.flags.DEFINE_integer('max_steps',1000000,"""maximum number of batches to run""") tf.app.flags.DEFINE_string('net','squeezeSeg',"""net architecture""") tf.app.flags.DEFINE_string('pretrained_model_path','',"""path to pretrained model""") tf.app.flags.DEFINE_integer('summary_step',10,"""Number of steps to save summary""") tf.app.flags.DEFINE_integer('checkpoint_step',5000,"""number of steps to save""") tf.app.flags.DEFINE_string('gpu','0',"""gpu_id""") # -
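# `tf.app.flags` (TensorFlow 1.x) is essentially a thin wrapper around command-line flag parsing. A rough stdlib equivalent of a few of the flags above using `argparse`, shown as a sketch rather than a drop-in replacement:

```python
import argparse

parser = argparse.ArgumentParser(description="squeezeSeg training flags (argparse sketch)")
parser.add_argument("--dataset", default="KITTI", help="Current kitti dataset.")
parser.add_argument("--image_set", default="train", help="can be set as train, trainval, val or test")
parser.add_argument("--max_steps", type=int, default=1000000, help="maximum number of batches to run")
parser.add_argument("--net", default="squeezeSeg", help="net architecture")
parser.add_argument("--gpu", default="0", help="gpu id")

# An empty list parses the defaults; a real script would pass sys.argv[1:].
FLAGS = parser.parse_args([])
print(FLAGS.dataset, FLAGS.max_steps)  # → KITTI 1000000
```

# Unlike `tf.app.flags`, `argparse` does not register flags in a global singleton, so the parsed `FLAGS` object must be passed around explicitly.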
src/.ipynb_checkpoints/train-checkpoint.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + active=""
# Description:
# 1. Sort a linked list using insertion sort. A graphical example of insertion sort:
# 2. Initially, the partially sorted list (black) contains only the first element of the list.
# 3. On each iteration, one element (red) is removed from the input data and inserted in place into the sorted list.
#
# Insertion sort algorithm:
# 1. Insertion sort iterates, consuming one input element per repetition and growing the sorted output list.
# 2. On each iteration, insertion sort removes one element from the input data, finds the position where it belongs within the sorted list, and inserts it there.
# 3. Repeat until no input elements remain.
# -

# <img src='147.jpg'>

class Solution:
    def insertionSortList(self, head: ListNode) -> ListNode:
        # ListNode is the singly linked list node class (with .val and .next)
        # provided by the LeetCode environment.
        if not head or not head.next:
            return head
        dummy = ListNode(-float('inf'))  # sentinel head of the sorted list
        p = q = dummy                    # q tracks the tail of the sorted prefix
        cur = head
        while cur:
            if q.val < cur.val:
                # cur already belongs at the tail of the sorted list
                q.next = cur
                q = cur
                cur = cur.next
            else:
                # detach cur and scan from the front for its position
                q.next = cur.next
                while p.next and p.next.val < cur.val:
                    p = p.next
                cur.next = p.next
                p.next = cur
                p = dummy
            cur = q.next
        return dummy.next
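# The LeetCode snippet above assumes a `ListNode` class supplied by the judge. A self-contained run of the same algorithm, with a minimal `ListNode` and helper functions (the helper names here are illustrative):

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def build(values):
    """Build a linked list from a Python list and return its head."""
    head = None
    for v in reversed(values):
        head = ListNode(v, head)
    return head

def to_list(head):
    """Collect linked-list values back into a Python list."""
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out

def insertion_sort_list(head):
    """Plain insertion sort: detach each node from the input and splice
    it into its place in the sorted list hanging off `dummy`."""
    dummy = ListNode(float("-inf"))
    while head:
        nxt = head.next
        p = dummy
        while p.next and p.next.val < head.val:
            p = p.next
        head.next = p.next
        p.next = head
        head = nxt
    return dummy.next

print(to_list(insertion_sort_list(build([4, 2, 1, 3]))))  # → [1, 2, 3, 4]
```

# The `Solution` above adds a tail pointer `q` so that already-largest elements are appended in O(1) instead of rescanning from the front; the splicing step is otherwise the same.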
Linked List/0902/147. Insertion Sort List.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Intro to Pyro models # # This is Part I of the time series tutorial https://pyro.ai/time # See also [Part II](https://pyro.ai/time/part_ii_inference.ipynb) and # [Part III](https://pyro.ai/time/part_iii_custom.ipynb). # ## Setup # # First install Pyro # ```sh # pip install pyro-ppl # ``` # and import Pyro and PyTorch # + import math import torch import pyro import pyro.distributions as dist from matplotlib import pyplot # %matplotlib inline # %config InlineBackend.rc = {'figure.facecolor': (1, 1, 1, 1)} pyro.enable_validation() print(pyro.__version__) print(torch.__version__) # - # ### The Pyro language has two basic primitives # ```py # x = pyro.sample(“x”, Bernoulli(0.5)) # assert isinstance(x, torch.Tensor) # # pyro.sample(“data”, Normal(0., 1.), # obs=data) # # theta = pyro.param(“theta”, torch.ones(100), # constraint=positive) # ``` # # The first `sample()` generates a random value and records it in the Pyro runtime. # # The second `sample()` conditions on observed data. # # The final `param()` statement declares a learnable parameter. # ### Pyro models are Python functions # Here is a Pyro model for Poisson regression def model(counts): slope = pyro.sample("slope", dist.Normal(0, 0.1)) intercept = pyro.sample("intercept", dist.Normal(math.log(10), 1)) for t in range(len(counts)): rate = torch.exp(intercept + slope * t) counts[t] = pyro.sample("count_{}".format(t), dist.Poisson(rate), obs=counts[t]) return slope, intercept, counts # Note this has two sample statements and an observe statement. There are no learnable parameters. # ## Pyro models can generate data # # Running a Pyro model will generate a sample from the prior. # + pyro.set_rng_seed(0) # We pass counts = [None, ..., None] to indicate time duration. 
true_slope, true_intercept, true_counts = model([None] * 10) print("true_counts = {}".format(torch.stack(true_counts))) pyplot.figure(figsize=(10, 6), dpi=300) pyplot.plot([c.item() for c in true_counts]); # - # ## We can guess parameters of the model from data # # We just saw one way to use a Pyro model -- to call it via `model()`. # # Another way to use a model is to pass the model to an inference algorithm and let the algorithm guess what the model is doing based on observed data (here `true_counts`). This way of using a model is a _nonstandard interpretation_ of the model. # + # %%time # This uses Pyro inference algorithms which we'll discuss later in Part II. from pyro.infer.autoguide import AutoDelta from pyro.infer import SVI, Trace_ELBO from pyro.optim import Adam guide = AutoDelta(model) svi = SVI(model, guide, Adam({"lr": 0.1}), Trace_ELBO()) for i in range(101): loss = svi.step(true_counts) # true_counts is passed as argument to model() if i % 10 == 0: print("loss = {}".format(loss)) # - print("true_slope = {}".format(true_slope)) print("true_intercept = {}".format(true_intercept)) guess = guide() print("guess = {}".format(guess)) # ## The `guide` is also a Pyro model # # Note that the `guide` is also a Pyro model (created by `AutoDelta`). The guide has `pyro.param` statements that declare tunable parameters. We'll write custom guides later in [Part III](https://pyro.ai/time/part_iii_custom.ipynb). # ## We then predict future data # # We've now seen two ways to use a model: to generate data from the prior, and to learn parameters from data. # # A third way to use a Pyro model is to predict new observed data by guiding the model. This uses two of Pyro's effects: # - `trace` records guesses made by the guide, and # - `replay` conditions the model on those guesses, allowing the model to generate conditional samples. 
# # We'll wrap this idiom as a `forecast()` method: # + from pyro import poutine def forecast(forecast_steps=10): counts = true_counts + [None] * forecast_steps # observed data + blanks to fill in guide_trace = poutine.trace(guide).get_trace(counts) _, _, counts = poutine.replay(model, guide_trace)(counts) return counts # - # ## Pyro predicts by drawing samples # # We can now call `forecast()` multiple times to generate samples. num_samples = 20 pyplot.figure(figsize=(10, 6), dpi=300) for _ in range(num_samples): full_counts = forecast(10) forecast_counts = full_counts[len(true_counts):] pyplot.plot([c.item() for c in full_counts], "r", label=None if _ else "forecast", alpha=0.3) pyplot.plot([c.item() for c in true_counts], "k-", label="truth") pyplot.legend(); # ## Summary # # We've seen three ways to interpret Pyro models: # 1. Running the models to generate data from the prior. # This is the _standard interpretation_. # 2. Training a guide to guess hidden variables in the model from data. # 3. Replaying the model using the trained guide. # ## Next steps # # In [Part II](https://pyro.ai/time/part_ii_inference.ipynb) we'll explore other inference methods in Pyro.
2019-08-time-series/part_i_models.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # MissOh DataLoader # ### AnotherMissOh Visual Structure # - json_data['file_name'] : 'AnotherMissOh01.mp4' # - json_data['visual_results'] # - json_data['visual_results'][0].keys() : dict_keys(['start_time', 'end_time', 'vid', 'image_info']) # - { # 'start_time': '00:02:51;16', # 'end_time': '00:02:54;15', # 'vid': 'AnotherMissOh01_001_0078', # 'image_info': ...} # - json_data['visual_results'][0]['image_info'] # - [{'frame_id': 'AnotherMissOh01_001_0078_IMAGE_0000004295', # 'place': 'none', # 'persons': [ # {'person_id': 'Haeyoung1', # 'person_info': { # 'face_rect': {'min_x': 515, 'min_y': 0, 'max_x': 845, 'max_y': 443}, # 'full_rect': {'min_x': 278, 'min_y': 2, 'max_x': 1025, 'max_y': 769}, # 'behavior': 'stand up', # 'predicate': 'none', # 'emotion': 'Neutral', # 'face_rect_score': '0.5', # 'full_rect_score': '0.9'}, # 'related_objects': []}], # 'objects': []}, # - {'frame_id': 'AnotherMissOh01_001_0078_IMAGE_0000004311', # 'place': '', # 'persons': [{ # 'person_id':'Haeyoung1', # 'person_info': { # 'face_rect': {'min_x': 515, 'min_y': 0, 'max_x': 831, 'max_y': 411}, # 'full_rect': {'min_x': 270, 'min_y': 0, 'max_x': 1025, 'max_y': 768}, # 'behavior': 'stand up', # 'predicate': 'none', # 'emotion': 'Neutral', # 'face_rect_score': '0.5', # 'full_rect_score': '0.9'}, # 'related_objects': []}], # 'objects': []},] # + # # !apt-get install graphviz xdg-utils # - import sys, os sys.path.append("../") # go to parent dir # + import os from torch.utils.data import Dataset, DataLoader import cv2 import pickle import numpy as np import glob from torchvision.transforms import Compose, Resize, ToTensor, Normalize from PIL import Image import json import argparse import matplotlib.pyplot as plt from Yolo_v2_pytorch.src.utils import * from graphviz import 
Digraph, Graph # - def is_not_blank(s): return bool(s and s.strip()) MissOh_CLASSES = ['person'] print(MissOh_CLASSES[0]) global colors colors = pickle.load(open("../Yolo_v2_pytorch/src/pallete", "rb")) print(colors[0]) import sys, os sys.path.append("../") # go to parent dir # + import os import glob import argparse import pickle import cv2 import numpy as np from Yolo_v2_pytorch.src.utils import * import torch.nn.functional as F from torch.utils.data import DataLoader from Yolo_v2_pytorch.src.yolo_net import Yolo from Yolo_v2_pytorch.src.anotherMissOh_dataset import AnotherMissOh, Splits, SortFullRect, PersonCLS,PBeHavCLS, FaceCLS, ObjectCLS, P2ORelCLS from torchvision.transforms import Compose, Resize, ToTensor from PIL import Image import matplotlib.pyplot as plt import time from lib.place_model import place_model, label_mapping, accuracy, label_remapping, place_buffer from lib.person_model import person_model from lib.behavior_model import behavior_model from lib.pytorch_misc import optimistic_restore, de_chunkize, clip_grad_norm, flatten from lib.focal_loss import FocalLossWithOneHot, FocalLossWithOutOneHot, CELossWithOutOneHot from lib.face_model import face_model from lib.object_model import object_model from lib.relation_model import relation_model from lib.emotion_model import emotion_model, crop_face_emotion, EmoCLS num_persons = len(PersonCLS) num_behaviors = len(PBeHavCLS) num_faces = len(FaceCLS) num_objects = len(ObjectCLS) num_relations = len(P2ORelCLS) num_emos = len(EmoCLS) def get_args(): parser = argparse.ArgumentParser( "You Only Look Once: Unified, Real-Time Object Detection") parser.add_argument("--image_size", type=int, default=448, help="The common width and height for all images") parser.add_argument("--batch_size", type=int, default=1, help="The number of images per batch") parser.add_argument("--conf_threshold", type=float, default=0.35) parser.add_argument("--nms_threshold", type=float, default=0.5) 
parser.add_argument("--pre_trained_model_type", type=str, choices=["model", "params"], default="model") parser.add_argument("--data_path_test", type=str, default="./Yolo_v2_pytorch/missoh_test/", help="the root folder of dataset") parser.add_argument("--saved_path", type=str, default="./checkpoint/refined_models") parser.add_argument("--img_path", type=str, default="./data/AnotherMissOh/AnotherMissOh_images_ver3.2/") parser.add_argument("--json_path", type=str, default="./data/AnotherMissOh/AnotherMissOh_Visual_ver3.2/") parser.add_argument("-model", dest='model', type=str, default="baseline") parser.add_argument("-display", dest='display', action='store_true') parser.add_argument("-emo_net_ch", dest='emo_net_ch', type=int, default=64) args = parser.parse_args([]) return args # get args. opt = get_args() print(opt) # - import networkx as nx from networkx.drawing.nx_pydot import read_dot #from networkx.drawing.nx_agraph import read_dot from networkx.readwrite import json_graph opt.img_path = "../data/AnotherMissOh/AnotherMissOh_images_ver3.2/" opt.json_path = "../data/AnotherMissOh/AnotherMissOh_Visual_ver3.2/" opt.saved_path = "../checkpoint/refined_models" opt.display = True # + tform = [ Resize((448, 448)), # should match Yolo_V2 ToTensor(), # Normalize(# should match Yolo_V2 #mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ] transf = Compose(tform) # splits the episodes into train, val, test train, val, test = Splits(num_episodes=18) # load datasets train_set = AnotherMissOh(train, opt.img_path, opt.json_path, False) val_set = AnotherMissOh(val, opt.img_path, opt.json_path, False) test_set = AnotherMissOh(test, opt.img_path, opt.json_path, False) episode = 7 infer = [episode] infer_set = AnotherMissOh(infer, opt.img_path, opt.json_path, False) # model path model_path = "{}/anotherMissOh_{}.pth".format( opt.saved_path,opt.model) # + print(torch.cuda.is_available()) if torch.cuda.is_available(): torch.cuda.manual_seed(123) device =
torch.cuda.current_device() else: torch.manual_seed(123) # set test loader params test_params = {"batch_size": opt.batch_size, "shuffle": False, "drop_last": False, "collate_fn": custom_collate_fn} # set test loader test_loader = DataLoader(infer_set, **test_params) # face model if True: model_face = face_model(num_persons, num_faces, device) trained_face = '../checkpoint/refined_models' + os.sep + "{}".format( 'anotherMissOh_only_params_face.pth') model_face.load_state_dict(torch.load(trained_face)) print("loaded with {}".format(trained_face)) else: trained_face = '../checkpoint/face' + os.sep + "{}".format( 'anotherMissOh_face.pth') model_face =torch.load(trained_face) print("loaded with {}".format(trained_face)) model_face.cuda(device) model_face.eval() # emotion model if True: model_emo = emotion_model(opt.emo_net_ch, num_persons, device) trained_emotion = '../checkpoint/refined_models' + os.sep + "{}".format( 'anotherMissOh_only_params_emotion_integration.pth') model_emo.load_state_dict(torch.load(trained_emotion)) print("loaded with {}".format(trained_emotion)) model_emo.cuda(device) model_emo.eval() # + # load the color map for detection results colors = pickle.load(open("../Yolo_v2_pytorch/src/pallete", "rb")) width, height = (1024, 768) width_ratio = float(opt.image_size) / width height_ratio = float(opt.image_size) / height # - def is_not_blank(s): return bool(s and s.strip()) def graph_to_json(episode, scene, frm, info, save_file=None): if save_file is None: save_file = 'temp_graph' import string strseq = string.ascii_uppercase # define graph dot = Digraph('G',filename='{}.gv'.format(save_file),engine='fdp') dot.attr('graph', rotate = '0', dpi='600',rankdir='TB', size='10,8') dot.attr('node', height='0.1', fontsize='6') dot.attr('edge', fontsize='6') place = "{}".format(info['place']) sound = "{}".format(info['sound']) if not is_not_blank(place): place = 'none' if not is_not_blank(sound): sound = 'none' num_of_persons = len(info['persons']) 
num_of_objects = len(info['objects']) frm_graph = 'episode_{}_scene_{}_frame_{}'.format( episode, scene, frm) #dot.node(frm_graph, style='filled', color='lightgrey') episode_node = "episode_{:02d}".format(episode) scene_node = "scene_{:03d}".format(scene) frame_node = "frame_{:04d}".format(frm) dot.node(episode_node, style='filled', color='lightgrey') dot.node(scene_node, style='filled', color='lightgrey') dot.node(frame_node, style='filled', color='lightgrey') # backgrounds-------------------------------------------- dot.node(place, style='filled', color='lightblue') dot.node(sound, style='filled', color='lightblue') if is_not_blank(episode_node) and is_not_blank(scene_node): dot.edge(episode_node, scene_node) if is_not_blank(scene_node) and is_not_blank(frame_node): dot.edge(scene_node, frame_node) if is_not_blank(frame_node) and is_not_blank(place): dot.edge(frame_node, place) if is_not_blank(frame_node) and is_not_blank(sound): dot.edge(frame_node, sound) # person ------------------------------------------------ for person_id in info['persons'].keys(): if is_not_blank(person_id): dot.node(person_id) # behavior--- if 'behavior' in info['persons'][person_id].keys(): behavior_id = info['persons'][person_id]['behavior'] else: behavior_id = 'none' if is_not_blank(behavior_id): dot.node(behavior_id, style='filled', color='green') # emotion--- if 'emotion' in info['persons'][person_id].keys(): emotion_id = info['persons'][person_id]['emotion'] else: emotion_id = 'none' if is_not_blank(emotion_id): dot.node(emotion_id, style='filled', color='blue') if is_not_blank(frame_node) and is_not_blank(person_id): dot.edge(frame_node, person_id) if is_not_blank(person_id) and is_not_blank(behavior_id): dot.edge(person_id, behavior_id) if is_not_blank(person_id) and is_not_blank(emotion_id): dot.edge(person_id, emotion_id) # relation --------------------------------------------- for object_id in info['objects'].keys(): if is_not_blank(object_id): dot.node(object_id, 
style='filled', color='gold') for person_id in info['relations'].keys(): if person_id not in info['persons'].keys(): dot.node(person_id) dot.edge(frame_node, person_id) for object_id in info['relations'][person_id].keys() : if object_id not in info['objects'].keys(): dot.node(object_id) dot.edge(frame_node, object_id) predicate = info['relations'][person_id][object_id] dot.edge(person_id, object_id,label=predicate, color='red') # convert dot graph to json if False: dot_to_json =json.dumps(json_graph.node_link_data(dot)) else: dot_to_json = json.dumps(info) with open('{}.json'.format(save_file), 'w') as f: json.dump(dot_to_json, f) # show in image dot.format = 'png' dot.render('{}.gv'.format(save_file), view=True) graph = cv2.imread('{}.gv.png'.format(save_file)) graph = cv2.resize(graph, dsize=(0, 0), fx=600.0/graph.shape[0], fy=600.0/graph.shape[0]) if True: plt.figure(figsize=(10,10)) plt.imshow(graph) plt.show() plt.close() # + clip = 0 frm = 0 #---------------graph structure-------------------------- graph_json = {} graph_json['persons'] = {} graph_json['objects'] = {} graph_json['relations'] = {} # ---------------persons--------------------------------- graph_json['persons']['Haeyoung1'] = {} graph_json['persons']['Haeyoung1']['emotion'] = 'happy' graph_json['persons']['Haeyoung1']['behavior'] = 'talking' graph_json['persons']['Deogi'] = {} graph_json['persons']['Deogi']['emotion'] = 'happy' graph_json['persons']['Deogi']['behavior'] = 'eating' # ---------------objects--------------------------------- graph_json['objects']['spoon'] = {} graph_json['objects']['spoon']['Deogi'] = 'N_R' # ---------------Relations------------------------------- graph_json['relations']['Deogi'] = {} graph_json['relations']['Deogi']['spoon'] = 'holding' # ---------------Backgrounds----------------------------- graph_json['place'] = 'kitchen' graph_json['sound'] = 'talking' info = graph_json print(info) graph_to_json(episode, clip, frm, graph_json) # - # Sequence buffers 
buffer_images = [] graph_info = {} # load test clips for iter, batch in enumerate(test_loader): image, info = batch scene = iter episode = episode # sort label info on fullrect image, label, behavior_label, obj_label, face_label, emo_label, frame_id = SortFullRect( image, info, is_train=False) try : image = torch.cat(image,0).cuda(device) except: continue # -----------------(2) inference ------------------------- # face if np.array(face_label).size > 0 : face_logits = model_face(image) predictions_face = post_processing(face_logits, opt.image_size, FaceCLS, model_face.detector.anchors, opt.conf_threshold, opt.nms_threshold) if len(predictions_face) != 0: num_preds = len(predictions_face) num_face_per_pred = [len(pred) for pred in predictions_face] image_c = image.permute(0,2,3,1).cpu() face_crops, _ = crop_face_emotion(image_c, predictions_face, None, opt) face_crops = face_crops.cuda(device).contiguous() emo_logits_raw = model_emo(face_crops) emo_logits, idx = [], 0 for pl in num_face_per_pred: emo_logits.append(emo_logits_raw[idx:idx+pl]) idx = idx+pl for idx, frame in enumerate(frame_id): # ---------------(3) mkdir for evaluations---------------------- f_info = frame[0].split('/') save_dir = '../results/drama-graph/{}/{}/{}/'.format( f_info[4], f_info[5], f_info[6]) if not os.path.exists(save_dir): os.makedirs(save_dir) f_file = f_info[7] mAP_file = "{}_{}_{}_{}".format(f_info[4], f_info[5], f_info[6], f_info[7].replace("jpg", "txt")) if opt.display: # AnotherMissOh07_002_0036_IMAGE_0000002672.txt print("frame.__len__{}, mAP_file:{}".format(len(frame_id), mAP_file)) # --------------(5) visualization of inferences ---------- # out of try : pdb.set_trace = lambda : None try: # for some empty video clips img = image[idx] # ToTensor function normalizes image pixel values into [0,1] np_img = img.cpu().numpy() np_img = np.transpose(np_img,(1,2,0)) * 255 output_image = cv2.cvtColor(np_img,cv2.COLOR_RGB2BGR) output_image = cv2.resize(output_image, (width, height)) 
#************************************** graph_json = {} graph_json['persons'] = {} graph_json['objects'] = {} graph_json['relations'] = {} graph_json['sound'] = 'none' #************************************** # face and emotion if len(predictions_face) != 0: prediction_face = predictions_face[idx] prediction_emo = emo_logits[idx] for pi,pred in enumerate(prediction_face): xmin = int(max(pred[0] / width_ratio, 0)) ymin = int(max(pred[1] / height_ratio, 0)) xmax = int(min((pred[2]) / width_ratio, width)) ymax = int(min((pred[3]) / height_ratio, height)) color = colors[FaceCLS.index(pred[5])] cv2.rectangle(output_image, (xmin, ymin), (xmax, ymax), color, 2) text_size = cv2.getTextSize( pred[5] + ' : %.2f' % pred[4], cv2.FONT_HERSHEY_PLAIN, 1, 1)[0] cv2.rectangle( output_image, (xmin, ymin), (xmin + text_size[0] + 100, ymin + text_size[1] + 20), color, -1) cv2.putText( output_image, pred[5] + ' : %.2f' % pred[4], (xmin, ymin + text_size[1] + 4), cv2.FONT_HERSHEY_PLAIN, 1, (255, 255, 255), 1) # save detection results pred_cls = pred[5] cat_pred = '%s %s %s %s %s %s\n' % ( pred_cls, str(pred[4]), str(xmin), str(ymin), str(xmax), str(ymax)) print("face_pred:{}".format(cat_pred)) print("detected {}".format( save_dir + "{}".format(f_file))) #************************************************** graph_json['persons'][pred_cls] = {} #************************************************** # update emotion model and the prediction if True: emo_ij = F.softmax(prediction_emo[pi], dim=0).argmax().detach().cpu().numpy() emo_txt = EmoCLS[emo_ij] cv2.putText(output_image, emo_txt, (xmin, ymin), cv2.FONT_HERSHEY_PLAIN, 2, (0,255,255), 2, cv2.LINE_AA) #****************************************************** graph_json['persons'][pred_cls]['emotion'] = emo_txt #****************************************************** else: print("non-detected {}".format( save_dir + "{}".format(f_file))) # save output image cv2.imwrite(save_dir + "{}".format(f_file), output_image) # save images plt_output_image = 
cv2.cvtColor(output_image, cv2.COLOR_BGR2RGB) plt.figure(figsize=(20,10)) plt.imshow(plt_output_image.astype('uint8')) plt.show() plt.close() #***************************************** frm_name = "episode_{:02d}_scene_{:03d}_frame_{:04d}".format(episode, scene, idx) save_file = save_dir + frm_name print(graph_json) graph_to_json(episode, scene, idx, graph_json, save_file) #***************************************** except: print("excepted...") continue
jupyter/drama-graph-infer-face_emotion.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Activity 04 - Breast Cancer Diagnosis Classification using Artificial Neural Networks (with Answers) # # In this activity we will be using the Breast Cancer dataset [https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)) available under the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php). The dataset contains characteristics of the cell nuclei present in the digitized image of a fine needle aspirate (FNA) of a breast mass, with the labels _malignant_ and _benign_ for each cell nucleus. Throughout this activity we will use the measurements provided in the dataset to classify between malignant and benign cells. # # ## Import the Required Packages # For this exercise we will require the Pandas package for loading the data, the matplotlib package for plotting, and scikit-learn for creating the neural network model and for feature and model selection. Import all of the required packages and relevant modules for these tasks.
import pandas as pd import matplotlib.pyplot as plt from sklearn.neural_network import MLPClassifier from sklearn.model_selection import train_test_split from sklearn import preprocessing # ## Load the Data # Load the Breast Cancer Diagnosis dataset using Pandas and examine the first 5 rows df = pd.read_csv('../Datasets/breast-cancer-data.csv') df.head() # Dissect the data into input (X) and output (y) variables X, y = df[[c for c in df.columns if c != 'diagnosis']], df.diagnosis # ## Feature Engineering # As we can see in the first 5 rows above, different columns have different scales of magnitude, so before constructing and training a neural network model we normalize the dataset. For this we use the MinMaxScaler API from sklearn, which scales each column's values to the range 0 to 1, as discussed in the Logistic Regression section of this chapter (see Exercise 3.03) X_array = X.values #returns a numpy array min_max_scaler = preprocessing.MinMaxScaler() X_array_scaled = min_max_scaler.fit_transform(X_array) X = pd.DataFrame(X_array_scaled, columns=X.columns) # Let us examine the first five rows of the normalized dataset. X.head() # ## Constructing the Neural Network Model # Before we can construct the model we must first convert the diagnosis values into labels that can be used within the model. Replace: # # 1. The diagnosis string *benign* with the value 0 # 2. The diagnosis string *malignant* with the value 1 diagnoses = [ 'benign', # 0 'malignant', # 1 ] output = [diagnoses.index(diag) for diag in y] # Also, in order to impartially evaluate the model, we should split the training dataset into a training and a validation set.
train_X, valid_X, train_y, valid_y = train_test_split(X, output, test_size=0.2, random_state=123) # Create the model using the normalized dataset and the assigned *diagnosis* labels model = MLPClassifier(solver='sgd', hidden_layer_sizes=(100,), max_iter=1000, random_state=1, learning_rate_init=.01) model.fit(X=train_X, y=train_y) # Compute the accuracy of the model against the validation set: model.score(valid_X, valid_y)
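# `model.score` returns the mean accuracy on the validation set. For intuition, the same quantity can be computed by hand — the labels below are hypothetical examples, not taken from the actual dataset:

```python
# Accuracy = fraction of predictions that match the true labels.
# MLPClassifier.score(valid_X, valid_y) computes this same ratio internally.
def accuracy(y_true, y_pred):
    assert len(y_true) == len(y_pred)
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

y_true = [0, 1, 1, 0, 1]  # hypothetical diagnoses (0 = benign, 1 = malignant)
y_pred = [0, 1, 0, 0, 1]  # hypothetical model predictions
print(accuracy(y_true, y_pred))  # -> 0.8
```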
Activity04/Activity04.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # combine_rows to set # Translations between `pyCPA.core.combine_rows()` and PRIMAP2's # `da.pr.set()` and `ds.pr.set()`. # # ## case 1 # # Task: Sum three categories to one (new) category, restricted # to a specific classification and measure. Also add the # name of the new category. # # ### pyCPA.core.combine_rows() # + pycharm={"name": "#%%\n"} grouping_1 = { "category": [["1.AA", "1.B", "1.C"], "+", "1."], "classification": [["Total for category"], "+", "Total for category"], "measure": [["Net emissions/removals"], "+", "Net emissions/removals"], "categoryName": [["*"], "+", "Energy"], } other_cols = {} cols_to_remove = [] inplace = True verbose = False combine_rows( scmrun, grouping_1, other_cols, cols_to_remove, inplace, verbose ) # - # ### ds.pr.set() # + pycharm={"name": "#%%\n"} # restrict and compute new value cat1 = ds.pr.loc[ { "category": ["1.AA", "1.B", "1.C"], "classification": "Total for category", "measure": "Net emissions/removals", } ].sum("category") # set new value; never inplace, so needs ds = ds… ds = ds.pr.set("category", "1.", cat1, existing="overwrite") # update categoryName ds["categoryName"].pr.loc[{"category": "1."}] = "Energy" # - # ## case 2 # Task: Calculate a (new) category as the sum of two categories, # minus a third category. # Again, restrict the result # to a specific classification and measure and add the # (new) category to the data. Also add the # name of the new category.
# # ### pyCPA.core.combine_rows() # + pycharm={"name": "#%%\n"} grouping_3 = { "category": [["1.AA", "1.B", "1.C"], ["+", "-", "+"], "1"], "classification": [["Total for category"], "+", "Total for category"], "measure": [["Net emissions/removals"], "+", "Net emissions/removals"], "categoryName": [["*"], "+", "Energy"], } combined_data = combine_rows( scmrun, grouping_3, other_cols, cols_to_remove, inplace, verbose ) # + [markdown] pycharm={"name": "#%% md\n"} # ### ds.pr.set() # + pycharm={"name": "#%%\n"} # restrict restricted = ds.pr.loc[ { "classification": "Total for category", "measure": "Net emissions/removals", } ] # compute new value cat1 = ( restricted.pr.loc[{"category": "1.AA"}] - restricted.pr.loc[{"category": "1.B"}] + restricted.pr.loc[{"category": "1.C"}] ) # set new value ds = ds.pr.set("category", "1", cat1, existing="overwrite") # update categoryName ds["categoryName"].pr.loc[{"category": "1"}] = "Energy"
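# The sign-list semantics shared by both translations can be sketched in plain Python — a toy over a dict of category totals, not the real pyCPA or PRIMAP2 code:

```python
# Combine categories with per-category signs, as in grouping_3:
# the new category "1" = +1.AA - 1.B + 1.C
def combine(values, categories, signs, new_category):
    total = sum(
        values[cat] if sign == "+" else -values[cat]
        for cat, sign in zip(categories, signs)
    )
    values[new_category] = total
    return values

data = {"1.AA": 10.0, "1.B": 4.0, "1.C": 1.0}  # hypothetical emissions totals
combine(data, ["1.AA", "1.B", "1.C"], ["+", "-", "+"], "1")
print(data["1"])  # -> 7.0
```

The real APIs additionally carry the classification/measure restriction and the `categoryName` update shown above.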
rosetta/combine_rows-set.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="xipxpNzPAl8f" # !pip install -I numpy==1.19.2 # !pip install snowflake-connector-python import warnings warnings.filterwarnings("ignore") # !pip install -I pyarrow==5.0.0 # + id="L0MjBet4Ravf" # import basic data science libraries import pandas as pd import numpy as np # + id="SxcPzQwFR9Pk" import snowflake.connector import getpass # using a simpler way to use your login info without embedding it in the notebook # other enterprise connection patterns (e.g., SSO) are in the Snowflake docs: https://docs.snowflake.com/en/user-guide/python-connector-example.html snowflake_username = getpass.getpass("Enter Snowflake Username") snowflake_pwd = getpass.getpass("Enter Snowflake Password") snowflake_acct = 'nna57244.us-east-1' print(snowflake_username) print(snowflake_acct) # + colab={"base_uri": "https://localhost:8080/"} id="tOUWxQAy9iqH" outputId="df0c5738-f805-4ba7-897b-eec2fc2bfaf0" ctx = snowflake.connector.connect( user=snowflake_username, password=snowflake_pwd, account=snowflake_acct ) cs = ctx.cursor() try: cs.execute("SELECT current_version()") one_row = cs.fetchone() print(one_row[0]) cs.execute("USE DATABASE PREDICTIVE_MAINTENANCE") query_output = cs.execute( "select top 18 UDI, FAILURE_SCORE from DAILY_SCORED_MACHINES ORDER BY FAILURE_SCORE DESC;" ) df_snowflake_scored_data = query_output.fetch_pandas_all() finally: cs.close() ctx.close() # + colab={"base_uri": "https://localhost:8080/", "height": 607} id="e1khUpc6yySY" outputId="05362b09-70a6-46aa-9cb8-49e86eb5923c" df_snowflake_scored_data
notebooks/pm_snowflake_daily_top18_dec2021_v1.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: GeoNotebook + GeoPySpark # language: python # name: geonotebook3 # --- # # Analyzing the Crop Data Layer # # In this exercise, we will be analyzing the 2016 [Cropland Data Layer](https://www.nass.usda.gov/Research_and_Science/Cropland/SARS1a.php), # or CDL, which is a 30mx30m national scale land cover data layer created annually for the continental United States using satellite imagery and extensive agricultural ground truth. # # There are 4 objectives in this exercise: # # - __Objective 1__: View the entire CDL layer on the map. # - __Objective 2__: View CDL, cropped to your state polygon, on the map. # - __Objective 3__: Determine the 3 most popular crops that were grown in your state in 2016, according to the CDL. # - __Objective 4__: View a specific crop on the map. # + import geopyspark as gps from pyspark import SparkContext import json from shapely.geometry import mapping, shape from shapely.ops import transform import pyproj import urllib.request from functools import partial from geonotebook.wrappers import TMSRasterData, GeoJsonData import numpy as np import pandas as pd import matplotlib.pyplot as plt # - # ### Setup: State data and Spark initialization # # The next 2 cells grab the shapes for our state and start up the spark context.
# + # Grab data for Nevada state_name, county_name = "NV", "Mineral" def get_state_shapes(state, county): project = partial( pyproj.transform, pyproj.Proj(init='epsg:4326'), pyproj.Proj(init='epsg:3857')) state_url = "https://raw.githubusercontent.com/johan/world.geo.json/master/countries/USA/{}.geo.json".format(state) county_url = "https://raw.githubusercontent.com/johan/world.geo.json/master/countries/USA/{}/{}.geo.json".format(state,county) read_json = lambda url: json.loads(urllib.request.urlopen(url).read().decode("utf-8")) state_ll = shape(read_json(state_url)['features'][0]['geometry']) state_wm = transform(project, state_ll) county_ll = shape(read_json(county_url)['features'][0]['geometry']) county_wm = transform(project, county_ll) return (state_ll, state_wm, county_ll, county_wm) (state_ll, state_wm, county_ll, county_wm) = get_state_shapes(state_name, county_name) # - # Set up our spark context conf = gps.geopyspark_conf(appName="Exercise 1") \ .setMaster("local[*]") \ .set(key='spark.ui.enabled', value='true') \ .set(key="spark.driver.memory", value="8G") \ .set("spark.hadoop.yarn.timeline-service.enabled", False) sc = SparkContext(conf=conf) # ### Setup: color and value data for CDL # # The following values are necessary to accomplish the objectives. # # See the final cells of the notebook for expanded versions of `values_to_crops` and `crops`; they are on one line here for notebook readability. 
# + # URI for the catalog catalog_uri = "s3://datahub-catalogs-us-east-1" # Layer Name for querying the catalog cdl_layer_name = "cdl-2016-zoomed" # ColorMap for rendering the CDL values cdl_colormap = gps.ColorMap.from_break_map({1: 0xffd300ff,2: 0xff2626ff,3: 0x00a8e5ff,4: 0xff9e0cff,5: 0x267000ff,6: 0xffff00ff,10: 0x70a500ff,11: 0x00af4cff,12: 0xdda50cff,13: 0xdda50cff,14: 0x7fd3ffff,21: 0xe2007cff,22: 0x896354ff,23: 0xd8b56bff,24: 0xa57000ff,25: 0xd69ebcff,26: 0x707000ff,27: 0xad007cff,28: 0xa05989ff,29: 0x700049ff,30: 0xd69ebcff,31: 0xd1ff00ff,32: 0x7f99ffff,33: 0xd6d600ff,34: 0xd1ff00ff,35: 0x00af4cff,36: 0xffa5e2ff,37: 0xa5f28cff,38: 0x00af4cff,39: 0xd69ebcff,41: 0xa800e5ff,42: 0xa50000ff,43: 0x702600ff,44: 0x00af4cff,45: 0xb27fffff,46: 0x702600ff,47: 0xff6666ff,48: 0xff6666ff,49: 0xffcc66ff,50: 0xff6666ff,51: 0x00af4cff,52: 0x00ddafff,53: 0x54ff00ff,54: 0xf2a377ff,55: 0xff6666ff,56: 0x00af4cff,57: 0x7fd3ffff,58: 0xe8bfffff,59: 0xafffddff,60: 0x00af4cff,61: 0xbfbf77ff,63: 0x93cc93ff,64: 0xc6d69eff,65: 0xccbfa3ff,66: 0xff00ffff,67: 0xff8eaaff,68: 0xba004fff,69: 0x704489ff,70: 0x007777ff,71: 0xb29b70ff,72: 0xffff7fff,74: 0xb5705bff,75: 0x00a582ff,76: 0xead6afff,77: 0xb29b70ff,81: 0xf2f2f2ff,82: 0x9b9b9bff,83: 0x4c70a3ff,87: 0x7fb2b2ff,88: 0xe8ffbfff,92: 0x00ffffff,111: 0x4c70a3ff,112: 0xd3e2f9ff,121: 0x9b9b9bff,122: 0x9b9b9bff,123: 0x9b9b9bff,124: 0x9b9b9bff,131: 0xccbfa3ff,141: 0x93cc93ff,142: 0x93cc93ff,143: 0x93cc93ff,152: 0xc6d69eff,176: 0xe8ffbfff,190: 0x7fb2b2ff,195: 0x7fb2b2ff,204: 0x00ff8cff,205: 0xd69ebcff,206: 0xff6666ff,207: 0xff6666ff,208: 0xff6666ff,209: 0xff6666ff,210: 0xff8eaaff,211: 0x334933ff,212: 0xe57026ff,213: 0xff6666ff,214: 0xff6666ff,216: 0xff6666ff,217: 0xb29b70ff,218: 0xff8eaaff,219: 0xff6666ff,220: 0xff8eaaff,221: 0xff6666ff,222: 0xff6666ff,223: 0xff8eaaff,224: 0x00af4cff,225: 0xffd300ff,226: 0xffd300ff,227: 0xff6666ff,229: 0xff6666ff,230: 0x896354ff,231: 0xff6666ff,232: 0xff2626ff,233: 0xe2007cff,234: 0xff9e0cff,235: 0xff9e0cff,236: 
0xa57000ff,237: 0xffd300ff,238: 0xa57000ff,239: 0x267000ff,240: 0x267000ff,241: 0xffd300ff,242: 0x000099ff,243: 0xff6666ff,244: 0xff6666ff,245: 0xff6666ff,246: 0xff6666ff,247: 0xff6666ff,248: 0xff6666ff,249: 0xff6666ff,250: 0xff6666ff,251: 0xffd300ff,252: 0x267000ff,253: 0xa57000ff,254: 0x267000ff}) # A map of CDL raster values to the category they represent. values_to_crops = {0: 'Background',1: 'Corn',2: 'Cotton',3: 'Rice',4: 'Sorghum',5: 'Soybeans',6: 'Sunflower',10: 'Peanuts',11: 'Tobacco',12: 'Sweet Corn',13: 'Pop or Orn Corn',14: 'Mint',21: 'Barley',22: 'Durum Wheat',23: 'Spring Wheat',24: 'Winter Wheat',25: 'Other Small Grains',26: 'Dbl Crop WinWht/Soybeans',27: 'Rye',28: 'Oats',29: 'Millet',30: 'Speltz',31: 'Canola',32: 'Flaxseed',33: 'Safflower',34: 'Rape Seed',35: 'Mustard',36: 'Alfalfa',37: 'Other Hay/Non Alfalfa',38: 'Camelina',39: 'Buckwheat',41: 'Sugarbeets',42: 'Dry Beans',43: 'Potatoes',44: 'Other Crops',45: 'Sugarcane',46: 'Sweet Potatoes',47: 'Misc Vegs & Fruits',48: 'Watermelons',49: 'Onions',50: 'Cucumbers',51: 'Chick Peas',52: 'Lentils',53: 'Peas',54: 'Tomatoes',55: 'Caneberries',56: 'Hops',57: 'Herbs',58: 'Clover/Wildflowers',59: 'Sod/Grass Seed',60: 'Switchgrass',61: 'Fallow/Idle Cropland',63: 'Forest',64: 'Shrubland',65: 'Barren',66: 'Cherries',67: 'Peaches',68: 'Apples',69: 'Grapes',70: 'Christmas Trees',71: 'Other Tree Crops',72: 'Citrus',74: 'Pecans',75: 'Almonds',76: 'Walnuts',77: 'Pears',81: 'Clouds/No Data',82: 'Developed',83: 'Water',87: 'Wetlands',88: 'Nonag/Undefined',92: 'Aquaculture',111: 'Open Water',112: 'Perennial Ice/Snow ',121: 'Developed/Open Space',122: 'Developed/Low Intensity',123: 'Developed/Med Intensity',124: 'Developed/High Intensity',131: 'Barren',141: 'Deciduous Forest',142: 'Evergreen Forest',143: 'Mixed Forest',152: 'Shrubland',176: 'Grassland/Pasture',190: 'Woody Wetlands',195: 'Herbaceous Wetlands',204: 'Pistachios',205: 'Triticale',206: 'Carrots',207: 'Asparagus',208: 'Garlic',209: 'Cantaloupes',210: 
'Prunes',211: 'Olives',212: 'Oranges',213: 'Honeydew Melons',214: 'Broccoli',216: 'Peppers',217: 'Pomegranates',218: 'Nectarines',219: 'Greens',220: 'Plums',221: 'Strawberries',222: 'Squash',223: 'Apricots',224: 'Vetch',225: 'Dbl Crop WinWht/Corn',226: 'Dbl Crop Oats/Corn',227: 'Lettuce',229: 'Pumpkins',230: 'Dbl Crop Lettuce/Durum Wht',231: 'Dbl Crop Lettuce/Cantaloupe',232: 'Dbl Crop Lettuce/Cotton',233: 'Dbl Crop Lettuce/Barley',234: 'Dbl Crop Durum Wht/Sorghum',235: 'Dbl Crop Barley/Sorghum',236: 'Dbl Crop WinWht/Sorghum',237: 'Dbl Crop Barley/Corn',238: 'Dbl Crop WinWht/Cotton',239: 'Dbl Crop Soybeans/Cotton',240: 'Dbl Crop Soybeans/Oats',241: 'Dbl Crop Corn/Soybeans',242: 'Blueberries',243: 'Cabbage',244: 'Cauliflower',245: 'Celery',246: 'Radishes',247: 'Turnips',248: 'Eggplants',249: 'Gourds',250: 'Cranberries',254: 'Dbl Crop Barley/Soybeans'} # A reverse map of above, allowing you to lookup CDL values from category name. crops_to_values = {v: k for k, v in values_to_crops.items()} # List of crop category names which are relevant to objective 3. 
crops = ['Corn','Cotton','Rice','Sorghum','Soybeans','Sunflower','Peanuts','Tobacco','Sweet Corn','Pop or Orn Corn','Mint','Barley','Durum Wheat','Spring Wheat','Winter Wheat','Other Small Grains','Rye','Oats','Millet','Speltz','Canola','Flaxseed','Safflower','Rape Seed','Mustard','Alfalfa','Other Hay/Non Alfalfa','Camelina','Buckwheat','Sugarbeets','Dry Beans','Potatoes','Other Crops','Sugarcane','Sweet Potatoes','Misc Vegs & Fruits','Watermelons','Onions','Cucumbers','Chick Peas','Lentils','Peas','Tomatoes','Caneberries','Hops','Herbs','Clover/Wildflowers','Cherries','Peaches','Apples','Grapes','Pecans','Almonds','Walnuts','Pears','Pistachios','Triticale','Carrots','Asparagus','Garlic','Cantaloupes','Prunes','Olives','Oranges','Honeydew Melons','Broccoli','Peppers','Pomegranates','Nectarines','Greens','Plums','Strawberries','Squash','Apricots','Vetch','Lettuce','Pumpkins','Blueberries','Cabbage','Cauliflower','Celery','Radishes','Turnips','Eggplants','Gourds','Cranberries']
# -

# ## Objective 1: View the entire CDL layer on the map.
#
# Build a TMS server from the catalog at `s3://datahub-catalogs-us-east-1` and the layer with name `cdl_layer_name`, and use the `cdl_colormap` declared above.
#
# Center the map on your state.

# ## Objective 2: View CDL, cropped to your state polygon, on the map.
#
# Query the catalog for the `cdl_layer_name` layer at zoom 13 for tiles intersecting your state. (Hint: make sure to use the correct projection for the `query_geom`!).
#
# Mask the layer by the query geometry, and view the resulting layer on the map.

# ## Objective 3: Determine the 3 most popular crops that were grown in your state in 2016, according to the CDL.
#
# Count the number of cells per crop value for your state.
# Create a bar graph of the counts per crop to see the most
# popular crops in your state.
#
# __Note__: Take a screenshot or write down the top crops in your state - it will come in handy in Exercise 3!

# ## Objective 4: View a specific crop on the map.
#
# Choose a crop from the above graph that has a high value. Use a color ramp that
# highlights your chosen crop in red, and hides all other crops. Then search
# for dense areas of your crop.

# ## Extra credit: View a specific crop on the map, using numpy to filter values.
#
# Accomplish the same thing as Objective 4. Instead of using a ColorMap to accomplish this, use map_tiles to map over the tiles of the layer and set all values that don't match the crop value to tile.no_data_value. Then paint the resulting layer on the map.

# ## Reference: CDL values and crop names
#
# Below is an expanded dictionary of values to CDL categories,
# as well as a list of crop names (with non-crop categories removed).

values_to_crops = {0: 'Background', 1: 'Corn', 2: 'Cotton', 3: 'Rice', 4: 'Sorghum', 5: 'Soybeans', 6: 'Sunflower', 10: 'Peanuts', 11: 'Tobacco', 12: 'Sweet Corn', 13: 'Pop or Orn Corn', 14: 'Mint', 21: 'Barley', 22: 'Durum Wheat', 23: 'Spring Wheat', 24: 'Winter Wheat', 25: 'Other Small Grains', 26: 'Dbl Crop WinWht/Soybeans', 27: 'Rye', 28: 'Oats', 29: 'Millet', 30: 'Speltz', 31: 'Canola', 32: 'Flaxseed', 33: 'Safflower', 34: 'Rape Seed', 35: 'Mustard', 36: 'Alfalfa', 37: 'Other Hay/Non Alfalfa', 38: 'Camelina', 39: 'Buckwheat', 41: 'Sugarbeets', 42: 'Dry Beans', 43: 'Potatoes', 44: 'Other Crops', 45: 'Sugarcane', 46: 'Sweet Potatoes', 47: 'Misc Vegs & Fruits', 48: 'Watermelons', 49: 'Onions', 50: 'Cucumbers', 51: 'Chick Peas', 52: 'Lentils', 53: 'Peas', 54: 'Tomatoes', 55: 'Caneberries', 56: 'Hops', 57: 'Herbs', 58: 'Clover/Wildflowers', 59: 'Sod/Grass Seed', 60: 'Switchgrass', 61: 'Fallow/Idle Cropland', 63: 'Forest', 64: 'Shrubland', 65: 'Barren', 66: 'Cherries', 67: 'Peaches', 68: 'Apples', 69: 'Grapes', 70: 'Christmas Trees', 71: 'Other Tree Crops', 72: 'Citrus', 74: 'Pecans', 75: 'Almonds', 76: 'Walnuts', 77: 'Pears', 81: 'Clouds/No Data', 82: 'Developed', 83: 'Water', 87: 'Wetlands', 88: 'Nonag/Undefined', 92: 'Aquaculture', 111: 'Open Water', 112: 'Perennial
Ice/Snow ', 121: 'Developed/Open Space', 122: 'Developed/Low Intensity', 123: 'Developed/Med Intensity', 124: 'Developed/High Intensity', 131: 'Barren', 141: 'Deciduous Forest', 142: 'Evergreen Forest', 143: 'Mixed Forest', 152: 'Shrubland', 176: 'Grassland/Pasture', 190: 'Woody Wetlands', 195: 'Herbaceous Wetlands', 204: 'Pistachios', 205: 'Triticale', 206: 'Carrots', 207: 'Asparagus', 208: 'Garlic', 209: 'Cantaloupes', 210: 'Prunes', 211: 'Olives', 212: 'Oranges', 213: 'Honeydew Melons', 214: 'Broccoli', 216: 'Peppers', 217: 'Pomegranates', 218: 'Nectarines', 219: 'Greens', 220: 'Plums', 221: 'Strawberries', 222: 'Squash', 223: 'Apricots', 224: 'Vetch', 225: 'Dbl Crop WinWht/Corn', 226: 'Dbl Crop Oats/Corn', 227: 'Lettuce', 229: 'Pumpkins', 230: 'Dbl Crop Lettuce/Durum Wht', 231: 'Dbl Crop Lettuce/Cantaloupe', 232: 'Dbl Crop Lettuce/Cotton', 233: 'Dbl Crop Lettuce/Barley', 234: 'Dbl Crop Durum Wht/Sorghum', 235: 'Dbl Crop Barley/Sorghum', 236: 'Dbl Crop WinWht/Sorghum', 237: 'Dbl Crop Barley/Corn', 238: 'Dbl Crop WinWht/Cotton', 239: 'Dbl Crop Soybeans/Cotton', 240: 'Dbl Crop Soybeans/Oats', 241: 'Dbl Crop Corn/Soybeans', 242: 'Blueberries', 243: 'Cabbage', 244: 'Cauliflower', 245: 'Celery', 246: 'Radishes', 247: 'Turnips', 248: 'Eggplants', 249: 'Gourds', 250: 'Cranberries', 254: 'Dbl Crop Barley/Soybeans'} crops = ['Corn', 'Cotton', 'Rice', 'Sorghum', 'Soybeans', 'Sunflower', 'Peanuts', 'Tobacco', 'Sweet Corn', 'Pop or Orn Corn', 'Mint', 'Barley', 'Durum Wheat', 'Spring Wheat', 'Winter Wheat', 'Other Small Grains', 'Rye', 'Oats', 'Millet', 'Speltz', 'Canola', 'Flaxseed', 'Safflower', 'Rape Seed', 'Mustard', 'Alfalfa', 'Other Hay/Non Alfalfa', 'Camelina', 'Buckwheat', 'Sugarbeets', 'Dry Beans', 'Potatoes', 'Other Crops', 'Sugarcane', 'Sweet Potatoes', 'Misc Vegs & Fruits', 'Watermelons', 'Onions', 'Cucumbers', 'Chick Peas', 'Lentils', 'Peas', 'Tomatoes', 'Caneberries', 'Hops', 'Herbs', 'Clover/Wildflowers', 'Cherries', 'Peaches', 'Apples', 'Grapes', 'Pecans', 
'Almonds', 'Walnuts', 'Pears', 'Pistachios', 'Triticale', 'Carrots', 'Asparagus', 'Garlic', 'Cantaloupes', 'Prunes', 'Olives', 'Oranges', 'Honeydew Melons', 'Broccoli', 'Peppers', 'Pomegranates', 'Nectarines', 'Greens', 'Plums', 'Strawberries', 'Squash', 'Apricots', 'Vetch', 'Lettuce', 'Pumpkins', 'Blueberries', 'Cabbage', 'Cauliflower', 'Celery', 'Radishes', 'Turnips', 'Eggplants', 'Gourds', 'Cranberries']
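For the extra-credit objective, the heart of the `map_tiles` approach is a NumPy mask over a tile's cell array. The sketch below shows only that core step on a plain array; the surrounding GeoPySpark plumbing (an `import geopyspark as gps`, the queried layer, and `tile.no_data_value` as the real sentinel) is assumed, and `NO_DATA` and `mask_to_crop` are illustrative names, not part of any API.

```python
import numpy as np

# Hypothetical no-data sentinel; in GeoPySpark you would use tile.no_data_value.
NO_DATA = -1

def mask_to_crop(cells, crop_value):
    """Keep only cells equal to crop_value; blank out everything else."""
    out = cells.copy()
    out[out != crop_value] = NO_DATA
    return out

# 1 is 'Corn' in the CDL value map above.
tile_cells = np.array([[1, 2, 2],
                       [1, 5, 1]])
print(mask_to_crop(tile_cells, 1))
```

Inside the function you pass to `map_tiles`, the same masking would be applied to `tile.cells`, and the result rebuilt into a tile carrying the original cell type and no-data value before painting it on the map.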
exercises/exercise1/Exercise 1 - Analyzing CDL.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Processing digitraffic.fi API data with Python
#
# Measurement data from the Finnish Transport Agency's (Liikennevirasto) road-traffic measurement points is freely available online.
# In this code example we read this data and shape it into a GeoJSON-format list of measurement points that can be drawn on a map (OpenStreetMap). A great deal of sensor data is available, but we use only the air temperatures.

# + deletable=true editable=true
from io import BytesIO
import gzip
from urllib.request import Request, urlopen

# helper routine that tells the web server we can accept gzip-compressed data (Request)
# if the data arrives in that form (response), we use the gzip module to decompress it
def ReadDataURL( url ):
    req = Request( url )
    req.add_header( 'Accept-encoding', 'gzip' )
    response = urlopen( req )
    if response.info().get('Content-Encoding') == 'gzip':
        buffer = BytesIO( response.read() )
        with gzip.GzipFile( fileobj=buffer ) as MemoryFile:
            # gzip-compressed data is returned decompressed:
            return MemoryFile.read()
    else:
        # "plain" data is returned as-is:
        return response.read()

# + deletable=true editable=true
import json

# the metadata contains the stations' locations and names
metaRaw = ReadDataURL( 'https://tie.digitraffic.fi/api/v1/metadata/weather-stations?lastUpdated=false' )

# the data coming from the network is in JSON format.
# convert it into a Python dictionary:
metaData = json.loads( metaRaw.decode('utf-8') )

# the data of a single station, printed
print( json.dumps( metaData['features'][0], indent=2 ) )

# +
# store each station's coordinates and name, keyed by its identifier
STATIONS = dict()
for station in metaData['features']:
    # this 'geometry' and 'properties' structure is the same as in the GeoJSON format:
    STATIONS[ station['id'] ] = { 'geometry' : station['geometry'],
                                  'properties': {'name' : station['properties']['name'] } }
STATIONS

# + deletable=true editable=true
# current sensor data for all stations can be found at this URL:
weatherRaw = ReadDataURL('https://tie.digitraffic.fi/api/v1/data/weather-data?lastUpdated=false')
weatherData = json.loads( weatherRaw.decode('utf-8') )
print( json.dumps( weatherData['weatherStations'][0], indent=2) )  # test printout of one station

# +
# pick out the value of the "ILMA" (air temperature) sensor from each station
for station in weatherData['weatherStations']:
    for sensor in station['sensorValues']:
        if sensor['name'] == 'ILMA':
            # the STATIONS structure was already created from the station metadata;
            # here we add one 'properties' field for the temperature:
            STATIONS[sensor['roadStationId']]['properties']['Temperature'] = sensor['sensorValue']

# all the collected data
STATIONS

# +
# create a dictionary that holds the station data in GeoJSON format
# https://en.wikipedia.org/wiki/GeoJSON
GEO = dict()
GEO['type'] = 'FeatureCollection'
GEO['features'] = list() # this list holds all the "points", i.e. the measurement stations

for station in STATIONS.keys():
    # the data collected earlier is already in the right shape for GeoJSON,
    # so after adding the type information
    # we simply store each station in the 'features' list
    ITEM = STATIONS[station]
    ITEM['type'] = 'Feature'
    GEO['features'].append( ITEM )
GEO

# + deletable=true editable=true
# using Jupyter notebook extensions we create a text field (label)
# and a map (map)
from ipyleaflet import Map, GeoJSON
import ipywidgets as ipyw

label = ipyw.Label(layout=ipyw.Layout(width='100%'))
label.value = u'0.0'

# on top of the map we create GeoJSON points corresponding to the weather stations collected above
layer = GeoJSON( data = GEO )

# when a map point is clicked, print the temperature and station name
# stored in the GeoJSON properties into the text field
def click_handler(event=None,id=None,properties=None):
    label.value = str( properties.get('Temperature') ) +' ℃ : ' + properties.get('name')
layer.on_click(click_handler)

# create the map centered on Tampere
map = Map( zoom=7, center=[61.4978,23.7610])
# add the weather-station points on top of the map
map.add_layer( layer )

# show the text field and the map in this notebook
ipyw.VBox( [label,map] )

# + deletable=true editable=true
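The gzip branch of `ReadDataURL` can be exercised offline: when the server answers with `Content-Encoding: gzip`, the helper wraps the body in `BytesIO` and decompresses it with `gzip.GzipFile`, exactly as this self-contained round trip does (the payload here is a stand-in, not a real API response):

```python
import gzip
from io import BytesIO

payload = b'{"weatherStations": []}'   # stand-in for an API response body
compressed = gzip.compress(payload)    # what the server would send

# same decompression path as in ReadDataURL:
buffer = BytesIO(compressed)
with gzip.GzipFile(fileobj=buffer) as MemoryFile:
    decoded = MemoryFile.read()

print(decoded == payload)  # True
```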
python35-digitraffic-temperature-map.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="Hav2F1sxNHmy" # [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/enterprise/healthcare/EntityResolution_ICDO_SNOMED.ipynb) # + [markdown] colab_type="text" id="DYxY5b8WNHmw" # <img src="https://nlp.johnsnowlabs.com/assets/images/logo.png" width="180" height="50" style="float: left;"> # + colab={} colab_type="code" id="8SQ9KP8dNHnG" import json import os from pyspark.ml import Pipeline from pyspark.sql import SparkSession from sparknlp.annotator import * from sparknlp_jsl.annotator import * from sparknlp.base import * import sparknlp_jsl # + colab={} colab_type="code" id="lpwcEH4qNHnN" from pyspark.sql import functions as F import pandas as pd pd.set_option("display.max_colwidth", 1000) # + [markdown] colab_type="text" id="i_hdH_5LNHnW" # # ICD-O - SNOMED Entity Resolution - version 2.4.6 # # ## Example for ICD-O Entity Resolution Pipeline # A common NLP problem in medical applications is to identify histology behaviour in documented cancer studies. # # In this example we will use Spark-NLP to identify and resolve histology behavior expressions and resolve them to an ICD-O code. 
# # Some cancer related clinical notes (taken from https://www.cancernetwork.com/case-studies): # https://www.cancernetwork.com/case-studies/large-scrotal-mass-multifocal-intra-abdominal-retroperitoneal-and-pelvic-metastases # https://oncology.medicinematters.com/lymphoma/chronic-lymphocytic-leukemia/case-study-small-b-cell-lymphocytic-lymphoma-and-chronic-lymphoc/12133054 # https://oncology.medicinematters.com/lymphoma/epidemiology/central-nervous-system-lymphoma/12124056 # https://oncology.medicinematters.com/lymphoma/case-study-cutaneous-t-cell-lymphoma/12129416 # # Note 1: Desmoplastic small round cell tumor # <div style="border:2px solid #747474; background-color: #e3e3e3; margin: 5px; padding: 10px"> # A 35-year-old African-American man was referred to our urology clinic by his primary care physician for consultation about a large left scrotal mass. The patient reported a 3-month history of left scrotal swelling that had progressively increased in size and was associated with mild left scrotal pain. He also had complaints of mild constipation, with hard stools every other day. He denied any urinary complaints. On physical examination, a hard paratesticular mass could be palpated in the left hemiscrotum extending into the left groin, separate from the left testicle, and measuring approximately 10 × 7 cm in size. A hard, lower abdominal mass in the suprapubic region could also be palpated in the midline. The patient was admitted urgently to the hospital for further evaluation with cross-sectional imaging and blood work. # # Laboratory results, including results of a complete blood cell count with differential, liver function tests, coagulation panel, and basic chemistry panel, were unremarkable except for a serum creatinine level of 2.6 mg/dL. Typical markers for a testicular germ cell tumor were within normal limits: the beta–human chorionic gonadotropin level was less than 1 mIU/mL and the alpha fetoprotein level was less than 2.8 ng/mL. 
A CT scan of the chest, abdomen, and pelvis with intravenous contrast was obtained, and it showed large multifocal intra-abdominal, retroperitoneal, and pelvic masses (Figure 1). On cross-sectional imaging, a 7.8-cm para-aortic mass was visualized compressing the proximal portion of the left ureter, creating moderate left hydroureteronephrosis. Additionally, three separate pelvic masses were present in the retrovesical space, each measuring approximately 5 to 10 cm at their largest diameter; these displaced the bladder anteriorly and the rectum posteriorly. # # The patient underwent ultrasound-guided needle biopsy of one of the pelvic masses on hospital day 3 for definitive diagnosis. Microscopic examination of the tissue by our pathologist revealed cellular islands with oval to elongated, irregular, and hyperchromatic nuclei; scant cytoplasm; and invading fibrous tissue—as well as three mitoses per high-powered field (Figure 2). Immunohistochemical staining demonstrated strong positivity for cytokeratin AE1/AE3, vimentin, and desmin. Further mutational analysis of the cells detected the presence of an EWS-WT1 fusion transcript consistent with a diagnosis of desmoplastic small round cell tumor. # </div> # # Note 2: SLL and CLL # <div style="border:2px solid #747474; background-color: #e3e3e3; margin: 5px; padding: 10px"> # A 72-year-old man with a history of diabetes mellitus, hypertension, and hypercholesterolemia self-palpated a left submandibular lump in 2012. Complete blood count (CBC) in his internist’s office showed solitary leukocytosis (white count 22) with predominant lymphocytes for which he was referred to a hematologist. Peripheral blood flow cytometry on 04/11/12 confirmed chronic lymphocytic leukemia (CLL)/small lymphocytic lymphoma (SLL): abnormal cell population comprising 63% of CD45 positive leukocytes, co-expressing CD5 and CD23 in CD19-positive B cells. CD38 was negative but other prognostic markers were not assessed at that time. 
The patient was observed regularly for the next 3 years and his white count trend was as follows: 22.8 (4/2012) --> 28.5 (07/2012) --> 32.2 (12/2012) --> 36.5 (02/2013) --> 42 (09/2013) --> 44.9 (01/2014) --> 75.8 (2/2015). His other counts stayed normal until early 2015 when he also developed anemia (hemoglobin [HGB] 10.9) although platelets remained normal at 215. He had been noticing enlargement of his cervical, submandibular, supraclavicular, and axillary lymphadenopathy for several months since 2014 and a positron emission tomography (PET)/computed tomography (CT) scan done in 12/2014 had shown extensive diffuse lymphadenopathy within the neck, chest, abdomen, and pelvis. Maximum standardized uptake value (SUV max) was similar to low baseline activity within the vasculature of the neck and chest. In the abdomen and pelvis, however, there was mild to moderately hypermetabolic adenopathy measuring up to SUV of 4. The largest right neck nodes measured up to 2.3 x 3 cm and left neck nodes measured up to 2.3 x 1.5 cm. His right axillary lymphadenopathy measured up to 5.5 x 2.6 cm and on the left measured up to 4.8 x 3.4 cm. Lymph nodes on the right abdomen and pelvis measured up to 6.7 cm and seemed to have some mass effect with compression on the urinary bladder without symptoms. He underwent a bone marrow biopsy on 02/03/15, which revealed hypercellular marrow (60%) with involvement by CLL (30%); flow cytometry showed CD38 and ZAP-70 positivity; fluorescence in situ hybridization (FISH) analysis showed 13q deletion/monosomy 13; IgVH was unmutated; karyotype was 46XY. # </div> # # Note 3: CNS lymphoma # <div style="border:2px solid #747474; background-color: #e3e3e3; margin: 5px; padding: 10px"> # A 56-year-old woman began to experience vertigo, headaches, and frequent falls. A computed tomography (CT) scan of the brain revealed the presence of a 1.6 x 1.6 x 2.1 cm mass involving the fourth ventricle (Figure 14.1). 
A gadolinium-enhanced magnetic resonance imaging (MRI) scan confirmed the presence of the mass, and a stereotactic biopsy was performed that demonstrated a primary central nervous system lymphoma (PCNSL) with a diffuse large B-cell histology. Complete blood count (CBC), lactate dehydrogenase (LDH), and beta-2-microglobulin were normal. Systemic staging with a positron emission tomography (PET)/CT scan and bone marrow biopsy showed no evidence of lymphomatous involvement outside the CNS. An eye exam and lumbar puncture showed no evidence of either ocular or leptomeningeal involvement. # </div> # # Note 4: Cutaneous T-cell lymphoma # <div style="border:2px solid #747474; background-color: #e3e3e3; margin: 5px; padding: 10px"> # An 83-year-old female presented with a progressing pruritic cutaneous rash that started 8 years ago. On clinical exam there were numerous coalescing, infiltrated, scaly, and partially crusted erythematous plaques distributed over her trunk and extremities and a large fungating ulcerated nodule on her right thigh covering 75% of her total body surface area (Figure 10.1). Lymphoma associated alopecia and a left axillary lymphadenopathy were also noted. For the past 3–4 months she reported fatigue, severe pruritus, night sweats, 20 pounds of weight loss, and loss of appetite. # </div> # + [markdown] colab_type="text" id="RhNMBEeqNHnY" # Let's create a dataset with all four case studies # + colab={} colab_type="code" id="GMM6CNKWNHnZ" notes = [] notes.append("""A 35-year-old African-American man was referred to our urology clinic by his primary care physician for consultation about a large left scrotal mass. The patient reported a 3-month history of left scrotal swelling that had progressively increased in size and was associated with mild left scrotal pain. He also had complaints of mild constipation, with hard stools every other day. He denied any urinary complaints. 
On physical examination, a hard paratesticular mass could be palpated in the left hemiscrotum extending into the left groin, separate from the left testicle, and measuring approximately 10 × 7 cm in size. A hard, lower abdominal mass in the suprapubic region could also be palpated in the midline. The patient was admitted urgently to the hospital for further evaluation with cross-sectional imaging and blood work. Laboratory results, including results of a complete blood cell count with differential, liver function tests, coagulation panel, and basic chemistry panel, were unremarkable except for a serum creatinine level of 2.6 mg/dL. Typical markers for a testicular germ cell tumor were within normal limits: the beta–human chorionic gonadotropin level was less than 1 mIU/mL and the alpha fetoprotein level was less than 2.8 ng/mL. A CT scan of the chest, abdomen, and pelvis with intravenous contrast was obtained, and it showed large multifocal intra-abdominal, retroperitoneal, and pelvic masses (Figure 1). On cross-sectional imaging, a 7.8-cm para-aortic mass was visualized compressing the proximal portion of the left ureter, creating moderate left hydroureteronephrosis. Additionally, three separate pelvic masses were present in the retrovesical space, each measuring approximately 5 to 10 cm at their largest diameter; these displaced the bladder anteriorly and the rectum posteriorly. The patient underwent ultrasound-guided needle biopsy of one of the pelvic masses on hospital day 3 for definitive diagnosis. Microscopic examination of the tissue by our pathologist revealed cellular islands with oval to elongated, irregular, and hyperchromatic nuclei; scant cytoplasm; and invading fibrous tissue—as well as three mitoses per high-powered field (Figure 2). Immunohistochemical staining demonstrated strong positivity for cytokeratin AE1/AE3, vimentin, and desmin. 
Further mutational analysis of the cells detected the presence of an EWS-WT1 fusion transcript consistent with a diagnosis of desmoplastic small round cell tumor.""") notes.append("""A 72-year-old man with a history of diabetes mellitus, hypertension, and hypercholesterolemia self-palpated a left submandibular lump in 2012. Complete blood count (CBC) in his internist’s office showed solitary leukocytosis (white count 22) with predominant lymphocytes for which he was referred to a hematologist. Peripheral blood flow cytometry on 04/11/12 confirmed chronic lymphocytic leukemia (CLL)/small lymphocytic lymphoma (SLL): abnormal cell population comprising 63% of CD45 positive leukocytes, co-expressing CD5 and CD23 in CD19-positive B cells. CD38 was negative but other prognostic markers were not assessed at that time. The patient was observed regularly for the next 3 years and his white count trend was as follows: 22.8 (4/2012) --> 28.5 (07/2012) --> 32.2 (12/2012) --> 36.5 (02/2013) --> 42 (09/2013) --> 44.9 (01/2014) --> 75.8 (2/2015). His other counts stayed normal until early 2015 when he also developed anemia (hemoglobin [HGB] 10.9) although platelets remained normal at 215. He had been noticing enlargement of his cervical, submandibular, supraclavicular, and axillary lymphadenopathy for several months since 2014 and a positron emission tomography (PET)/computed tomography (CT) scan done in 12/2014 had shown extensive diffuse lymphadenopathy within the neck, chest, abdomen, and pelvis. Maximum standardized uptake value (SUV max) was similar to low baseline activity within the vasculature of the neck and chest. In the abdomen and pelvis, however, there was mild to moderately hypermetabolic adenopathy measuring up to SUV of 4. The largest right neck nodes measured up to 2.3 x 3 cm and left neck nodes measured up to 2.3 x 1.5 cm. His right axillary lymphadenopathy measured up to 5.5 x 2.6 cm and on the left measured up to 4.8 x 3.4 cm. 
Lymph nodes on the right abdomen and pelvis measured up to 6.7 cm and seemed to have some mass effect with compression on the urinary bladder without symptoms. He underwent a bone marrow biopsy on 02/03/15, which revealed hypercellular marrow (60%) with involvement by CLL (30%); flow cytometry showed CD38 and ZAP-70 positivity; fluorescence in situ hybridization (FISH) analysis showed 13q deletion/monosomy 13; IgVH was unmutated; karyotype was 46XY.""") notes.append("A 56-year-old woman began to experience vertigo, headaches, and frequent falls. A computed tomography (CT) scan of the brain revealed the presence of a 1.6 x 1.6 x 2.1 cm mass involving the fourth ventricle (Figure 14.1). A gadolinium-enhanced magnetic resonance imaging (MRI) scan confirmed the presence of the mass, and a stereotactic biopsy was performed that demonstrated a primary central nervous system lymphoma (PCNSL) with a diffuse large B-cell histology. Complete blood count (CBC), lactate dehydrogenase (LDH), and beta-2-microglobulin were normal. Systemic staging with a positron emission tomography (PET)/CT scan and bone marrow biopsy showed no evidence of lymphomatous involvement outside the CNS. An eye exam and lumbar puncture showed no evidence of either ocular or leptomeningeal involvement.") notes.append("An 83-year-old female presented with a progressing pruritic cutaneous rash that started 8 years ago. On clinical exam there were numerous coalescing, infiltrated, scaly, and partially crusted erythematous plaques distributed over her trunk and extremities and a large fungating ulcerated nodule on her right thigh covering 75% of her total body surface area (Figure 10.1). Lymphoma associated alopecia and a left axillary lymphadenopathy were also noted. 
For the past 3–4 months she reported fatigue, severe pruritus, night sweats, 20 pounds of weight loss, and loss of appetite.") # Notes column names docid_col = "doc_id" note_col = "text_feed" data = spark.createDataFrame([(i,n.lower()) for i,n in enumerate(notes)]).toDF(docid_col, note_col) # + [markdown] colab_type="text" id="dCPjYF8BNHng" # And let's build a SparkNLP pipeline with the following stages: # - DocumentAssembler: Entry annotator for our pipelines; it creates the data structure for the Annotation Framework # - SentenceDetector: Annotator to pragmatically separate complete sentences inside each document # - Tokenizer: Annotator to separate sentences in tokens (generally words) # - WordEmbeddings: Vectorization of word tokens, in this case using word embeddings trained from PubMed, ICD10 and other clinical resources. # - EntityResolver: Annotator that performs search for the KNNs, in this case trained from ICDO Histology Behavior. # + [markdown] colab_type="text" id="qcX8fqmANHnh" # In order to find cancer related chunks, we are going to use a pretrained Search Trie wrapped up in our TextMatcher Annotator; and to identify treatments/procedures we are going to use our good old NER. 
# # - NerDLModel: TensorFlow based Named Entity Recognizer, trained to extract PROBLEMS, TREATMENTS and TESTS # - NerConverter: Chunk builder out of tokens tagged by the Ner Model # + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="bnrv5H_vNHnj" outputId="f8510f00-93e1-4d0a-922e-3514ba165ce1" docAssembler = DocumentAssembler().setInputCol(note_col).setOutputCol("document") sentenceDetector = SentenceDetector().setInputCols("document").setOutputCol("sentence") tokenizer = Tokenizer().setInputCols("sentence").setOutputCol("token") #Working on adjusting WordEmbeddingsModel to work with the subset of matched tokens word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\ .setInputCols("sentence", "token")\ .setOutputCol("word_embeddings") # + colab={"base_uri": "https://localhost:8080/", "height": 119} colab_type="code" id="wFTmht9XNHnw" outputId="a5112bfd-142c-4291-911a-8c1d4b5ec6dd" icdo_ner = NerDLModel.pretrained("ner_bionlp", "en", "clinical/models")\ .setInputCols("sentence", "token", "word_embeddings")\ .setOutputCol("icdo_ner") icdo_chunk = NerConverter().setInputCols("sentence","token","icdo_ner").setOutputCol("icdo_chunk").setWhiteList(["Cancer"]) icdo_chunk_embeddings = ChunkEmbeddings()\ .setInputCols("icdo_chunk", "word_embeddings")\ .setOutputCol("icdo_chunk_embeddings") icdo_chunk_resolver = ChunkEntityResolverModel.pretrained("chunkresolve_icdo_clinical", "en", "clinical/models")\ .setInputCols("token","icdo_chunk_embeddings")\ .setOutputCol("tm_icdo_code") # + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="_dR03PWjNHn3" outputId="67b886ba-3c37-4711-f504-cc903bc79c33" clinical_ner = NerDLModel.pretrained("ner_clinical", "en", "clinical/models") \ .setInputCols(["sentence", "token", "word_embeddings"]) \ .setOutputCol("ner") ner_converter = NerConverter() \ .setInputCols(["sentence", "token", "ner"]) \ 
.setOutputCol("ner_chunk").setWhiteList(["PROBLEM"]) ner_chunk_tokenizer = ChunkTokenizer()\ .setInputCols("ner_chunk")\ .setOutputCol("ner_token") ner_chunk_embeddings = ChunkEmbeddings()\ .setInputCols("ner_chunk", "word_embeddings")\ .setOutputCol("ner_chunk_embeddings") # + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="IZl49BCVNHn-" outputId="a417d9b5-e7bb-49c7-a497-aa340af92c61" ner_snomed_resolver = \ ChunkEntityResolverModel.pretrained("chunkresolve_snomed_findings_clinical","en","clinical/models")\ .setInputCols("ner_token","ner_chunk_embeddings").setOutputCol("snomed_result")\ .setEnableWmd(True).setEnableTfidf(True).setEnableJaccard(True)\ .setCaseSensitive(False).setDistanceWeights([1,7,7,0,0,0]).setExtramassPenalty(1).setNeighbours(30).setAllDistancesMetadata(True) # + colab={} colab_type="code" id="EQenYMniNHoE" pipelineFull = Pipeline().setStages([ docAssembler, sentenceDetector, tokenizer, word_embeddings, clinical_ner, ner_converter, ner_chunk_embeddings, ner_chunk_tokenizer, ner_snomed_resolver, icdo_ner, icdo_chunk, icdo_chunk_embeddings, icdo_chunk_resolver ]) # + [markdown] colab_type="text" id="5-fevCw_NHoL" # Let's train our Pipeline and make it ready to start transforming # + colab={} colab_type="code" id="BmJI1pN4NHoM" pipelineModelFull = pipelineFull.fit(data) # + colab={} colab_type="code" id="_z_FMNcJNHoT" output = pipelineModelFull.transform(data).cache() # + [markdown] colab_type="text" id="dsqbRaEiNHoZ" # ### EntityResolver: # Trained on an augmented ICDO Dataset from JSL Data Market it provides histology codes resolution for the matched expressions. Other than providing the code in the "result" field it provides more metadata about the matching process: # # - target_text -> Text to resolve # - resolved_text -> Best match text # - confidence -> Relative confidence for the top match (distance to probability) # - confidence_ratio -> Relative confidence for the top match. 
TopMatchConfidence / SecondMatchConfidence # - alternative_codes -> List of other plausible codes (in the KNN neighborhood) # - alternative_confidence_ratios -> Rest of confidence ratios # - all_k_results -> All resolved codes for metrics calculation purposes # - sentence -> SentenceId # - chunk -> ChunkId # + colab={} colab_type="code" id="w9zgN6I-TcEh" # + colab={} colab_type="code" id="cDZ0ywHaTczN" def quick_metadata_analysis(df, doc_field, chunk_field, code_fields): code_res_meta = ", ".join([f"{cf}.metadata" for cf in code_fields]) expression = f"explode(arrays_zip({chunk_field}.begin, {chunk_field}.end, {chunk_field}.result, {chunk_field}.metadata, "+code_res_meta+")) as a" top_n_rest = [(f"float(a['{i+4}'].confidence) as {(cf.split('_')[0])}_conf", f"arrays_zip(split(a['{i+4}'].all_k_results,':::'),split(a['{i+4}'].all_k_resolutions,':::')) as {cf.split('_')[0]+'_opts'}") for i, cf in enumerate(code_fields)] top_n_rest_args = [] for tr in top_n_rest: for t in tr: top_n_rest_args.append(t) return df.selectExpr(doc_field, expression) \ .orderBy(docid_col, F.expr("a['0']"), F.expr("a['1']"))\ .selectExpr(f"concat_ws('::',{doc_field},a['0'],a['1']) as coords", "a['2'] as chunk","a['3'].entity as entity", *top_n_rest_args) # + colab={} colab_type="code" id="NxGONpdfTb_B" icdo = \ quick_metadata_analysis(output, docid_col, "icdo_chunk",["tm_icdo_code"]).toPandas() # + colab={} colab_type="code" id="_WMLu8vbU7kZ" snomed = \ quick_metadata_analysis(output, docid_col, "ner_chunk",["snomed_result"]).toPandas() # + colab={} colab_type="code" id="qBJt01j4VjL5" # + colab={"base_uri": "https://localhost:8080/", "height": 390} colab_type="code" id="lCYB49SdVkbu" outputId="90cac713-03b6-4179-c2b2-4e8815b7a0bc" icdo # + colab={"base_uri": "https://localhost:8080/", "height": 572} colab_type="code" id="0aKIffkvVjJ5" outputId="c9793b72-5c93-4541-a904-8232ce<PASSWORD>" snomed # + colab={} colab_type="code" id="yP9T-rZyVjEK" # -
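The `all_k_results` and `all_k_resolutions` metadata fields are `':::'`-joined strings, which `quick_metadata_analysis` pairs up with `arrays_zip(split(...), split(...))` in Spark SQL. The same pairing in plain Python (the codes and resolution texts below are illustrative ICD-O-style entries, not actual model output):

```python
def parse_k_results(metadata):
    """Pair up the ':::'-joined code and resolution lists from the
    resolver metadata, mirroring arrays_zip(split(...), split(...))."""
    codes = metadata['all_k_results'].split(':::')
    texts = metadata['all_k_resolutions'].split(':::')
    return list(zip(codes, texts))

# Illustrative metadata record (not real model output):
meta = {'all_k_results': '8070/3:::8140/3',
        'all_k_resolutions': 'Squamous cell carcinoma, NOS:::Adenocarcinoma, NOS'}
print(parse_k_results(meta))
```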
# Source notebook: jupyter/enterprise/healthcare/databricks/EntityResolution_ICDO_SNOMED.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: dspy3 # language: python # name: dspy3 # --- # imports from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences from keras.utils import to_categorical # define 5 documents docs = ['Well done!', 'Good work work', 'Great effort', 'Nice work', 'Excellent!'] t = Tokenizer() t.fit_on_texts(docs) # 'work' appears 3 times print(t.word_counts) print(t.word_index) # 'work' appears in 2 documents print(t.word_docs) new_doc = ['Excellent work done good great, Bob!'] # Whether or not each word is present in the document. mode = 'binary' is the default. encoded_doc = t.texts_to_matrix(new_doc, mode = 'binary') print(encoded_doc[0]) # matches up with t.word_index # ignores "Bob" encoded_seq = t.texts_to_sequences(new_doc) print(encoded_seq[0]) # adds zeros to the front of the sequence print(pad_sequences([encoded_seq[0]], maxlen=10).shape) print(pad_sequences([encoded_seq[0]], maxlen=10)[0].shape) to_categorical([5], num_classes=10) # # Features/weights of all images # - transfer learning: strip off last layer of CNN - probably a fully connected layer with softmax activation, for classification - take the weights (? x 1) and feed into an RNN (specifically LSTM) # - greedy search vs beam search for image caption # - think of a tree structure # - greedy search: given a word, choose the most likely next word; then, given the first two words, choose the most likely third word, etc. # - greedy search may not result in globally optimal outcome # - beam search: given a word, limit to top N most likely next words.... 
# - other extreme: form every possible caption and choose the best # - model architecture of CNN: ResNet model, which is pretrained on the ImageNet dataset # - reshape each of 8,000 color images # ![caption_tree.png](caption_tree.png) # + run_control={"marked": false} def extract_features(directory): """Modify ResNet and pass all images through modified ResNet; collect results in a dictionary""" # load the CNN model; need to import ResNet50 model = ResNet50() # pop off the last layer of this model model.layers.pop() print(model.summary()) # output is the new last layer of the model; is this step necessary? # need to import Model model = Model(inputs = model.inputs, outputs = model.layers[-1].output) # view architecture / parameters print(model.summary()) # pass all 8K images through the model and collect weights in a dictionary features = {} # need to import listdir for name in listdir(directory): filename = directory + '/' + name # load and reshape image # shouldn't target_size = (3,224,224)? image = load_img(filename, target_size = (224,224)) # convert the image pixels to a (3 dimensional?) 
numpy array, then to a 4 dimensional array image = img_to_array(image) image = image.reshape((1,image.shape[0],image.shape[1],image.shape[2])) # preprocess image in a black box before passing into model image = preprocess_input(image) feature = model.predict(image, verbose = 0) # image_id - all but .jpg - will be a key in features dictionary image_id = name.split('.')[0] features[image_id] = feature print('>%s' % name) return features # - # imports from os import listdir # will dump the features dictionary into a .pkl file from pickle import dump, load from keras.applications.resnet50 import ResNet50, preprocess_input # from keras.applications.vgg16 import VGG16, preprocess_input # from keras.applications.vgg19 import VGG19 from keras.preprocessing.image import load_img, img_to_array from keras.models import Model, load_model # used after copying model from EC2 instance # checking the functionality of listdir listdir('../Flicker8k_Dataset/')[:5] # + directory = 'Flicker8k_Dataset/' features = extract_features(directory) # + print('Extracted features for %d images' % len(features)) dump(features, open('resnet_features.pkl','wb')) # why 'wb' and not just 'w'? # - # # Images with multiple descriptions (human captions) def load_doc(filename): """Open and read text file containing human captions - load into memory""" # open the file in read mode file = open(filename, 'r') # read all the human captions doc = file.read() # close the context manager file.close() return doc # + filename_captions = '../Flickr8k_text/Flickr8k.token.txt' doc = load_doc(filename_captions) # - def load_descriptions(doc): """Dictionary of photo identifier (aka image_id) to list of 5 textual descriptions""" descriptions = {} # iterate through lines of doc for line in doc.split('\n'): tokens = line.split() # tokens is a list, split by whitespace if len(tokens) < 2: continue # move on to next line; continue vs pass? 
image_id, image_desc = tokens[0], tokens[1:] image_id = image_id.split('.')[0] # again, drop the .jpg # re-join the description after previously splitting image_desc = ' '.join(image_desc) if image_id not in descriptions.keys(): descriptions[image_id] = [] descriptions[image_id].append(image_desc) # .append for lists, .update for sets return descriptions # + descriptions = load_descriptions(doc) print(len(descriptions)) # this means there are 92 images not included in any of train, dev, and test sets # - # ## Clean the descriptions and reduce the size of the vocab # # - convert all words to lowercase # - remove all punctuation; what's the easiest way to do this? # - remove words with fewer than 2 characters, e.g. "a" # - remove words containing at least one number def clean_descriptions(descriptions): """Clean textual descriptions through a series of list comprehensions""" # make a translation table to filter out punctuation table = str.maketrans('', '', string.punctuation) # why can't it be ", " ??!! 
for key, desc_list in descriptions.items(): # for desc in desc_list: for i in range(len(desc_list)): desc = desc_list[i] # tokenize the description desc = desc.split() # convert to lowercase via list comprehension desc = [word.lower() for word in desc] # probably can remove punctuation before converting to lowercase desc = [word.translate(table) for word in desc] desc = [word for word in desc if len(word) > 1] desc = [word for word in desc if word.isalpha()] # overwrite desc_list[i] desc_list[i] = ' '.join(desc) import string clean_descriptions(descriptions) string.punctuation # + def to_vocabulary(descriptions): """Determine the size of the vocabulary: the number of unique words""" vocab = set() for key, desc_list in descriptions.items(): for desc in desc_list: vocab.update(desc.split()) return vocab # vocab = [] # for key, desc_list in descriptions.items(): # for desc in desc_list: # vocab.append(word for word in desc.split()) # return set(vocab) # + vocabulary = to_vocabulary(descriptions) print('Size of vocabulary: %d' % len(vocabulary)) # - def save_descriptions(descriptions, filename): """One line per description, not one line per image!""" lines = [] for key, desc_list in descriptions.items(): for desc in desc_list: lines.append(key + ' ' + desc) print(len(lines)) data = '\n'.join(lines) file = open(filename, 'w') # why not "wb"? "wb" only for .pkl file.write(data) file.close() save_descriptions(descriptions, 'descriptions.txt') # Note that $40460 = 8092\times 5$. # # Just the training images and descriptions def load_set(filename): """Obtain list of image_id's for training images for filtering purposes""" doc = load_doc(filename) dataset = [] for line in doc.split('\n'): if len(line) < 1: continue # will there be any line with zero characters ?! identifier = line.split('.')[0] dataset.append(identifier) return set(dataset) # why are we allowed to de-duplicate only at the very end? 
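The cleaning rules listed above (lowercase, strip punctuation, drop one-character and non-alphabetic tokens) can be checked on a single caption. A small self-contained sketch mirroring `clean_descriptions` — `clean_caption` is a hypothetical helper added for illustration:

```python
import string

def clean_caption(caption):
    """Apply the same steps as clean_descriptions to one caption:
    lowercase, remove punctuation via a translation table, then drop
    tokens that are one character long or contain non-letters."""
    table = str.maketrans('', '', string.punctuation)
    tokens = caption.split()
    tokens = [w.lower() for w in tokens]
    tokens = [w.translate(table) for w in tokens]
    tokens = [w for w in tokens if len(w) > 1 and w.isalpha()]
    return ' '.join(tokens)

print(clean_caption('A dog runs on the beach, chasing 2 birds!'))
# dog runs on the beach chasing birds
```

Note the order matters: punctuation is stripped before the `isalpha` filter, otherwise "birds!" would be discarded instead of kept as "birds".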
def load_clean_descriptions(filename, dataset): """Load RELEVANT clean descriptions into memory, wrapped in startseq, endseq""" descriptions = {} doc = load_doc(filename) for line in doc.split('\n'): tokens = line.split() image_id, image_desc = tokens[0], tokens[1:] # done this before if image_id in dataset: if image_id not in descriptions.keys(): descriptions[image_id] = [] # wrap description in startseq, endseq image_desc = 'startseq ' + ' '.join(image_desc) + ' endseq' descriptions[image_id].append(image_desc) return descriptions def load_photo_features(filename, dataset): """Load FEATURES of relevant photos, as a dictionary""" all_features = load(open(filename, 'rb')) # filter based on image_id's with a dictionary comprehension features = {image_id: all_features[image_id] for image_id in dataset} return features filename_training = '../Flickr8k_text/Flickr_8k.trainImages.txt' train = load_set(filename_training) print('Number of training images: %d' % len(train)) train_descriptions = load_clean_descriptions('descriptions.txt', train) print(len(train_descriptions)) train_features = load_photo_features('resnet_features.pkl', train) print(len(train_features)) def to_lines(descriptions): """All descriptions, of training images, in a list - prior to encoding""" all_desc = [] for key, desc_list in descriptions.items(): for desc in desc_list: all_desc.append(desc) # keys not included in all_desc return all_desc def create_tokenizer(descriptions): """Fit Keras tokenizer on training descriptions""" all_desc = to_lines(descriptions) tokenizer = Tokenizer() tokenizer.fit_on_texts(all_desc) return tokenizer # return fitted tokenizer tokenizer = create_tokenizer(train_descriptions) training_vocab_size = len(tokenizer.word_index) + 1 # add 1 due to zero indexing # tokenizer.word_index is a dictionary with keys being the (unique) words in the training vocabulary # training vocab contains the words "startseq", "endseq" print('Size of vocabulary - training images: %d' % 
training_vocab_size) import numpy as np def max_length(descriptions): """Return maximum length across all training descriptions""" all_desc = to_lines(descriptions) return max(len(desc.split()) for desc in all_desc) max_length = max_length(train_descriptions) print('Length of longest caption among training images: %d' % max_length) def create_sequences(tokenizer,max_length,descriptions,photos): # more like create_arrays """Input - output pairs for each image""" X1, X2, y = [], [], [] for key, desc_list in descriptions.items(): for desc in desc_list: # encode each description; recall: each description begins with "startseq" and ends with "endseq" seq = tokenizer.texts_to_sequences([desc])[0] # already fitted tokenizer on training descriptions # convert seq into several X2, y pairs for i in range(1,len(seq)): in_seq, out_seq = seq[:i], seq[i] # add zeros to the front of in_seq so that len(in_seq) = max_length in_seq = pad_sequences([in_seq], maxlen = max_length)[0] # encode (one-hot-encode) out_seq out_seq = to_categorical([out_seq], num_classes = training_vocab_size)[0] X1.append(photos[key][0]) # why not just photos[key] ??? 
X2.append(in_seq) y.append(out_seq) return np.array(X1), np.array(X2), np.array(y) # return numpy arrays for model training X1train, X2train, ytrain = create_sequences(tokenizer, max_length, train_descriptions, train_features) print(X1train.shape) print(X2train.shape) print(ytrain.shape) # # Model structure and training def define_model(max_length, training_vocab_size): """Model which feeds photo features into an LSTM layer/cell and generates captions one word at a time""" input_1 = Input(shape = (2048,)) f1 = Dropout(0.5)(input_1) # for regularization # fully connected layer with 256 nodes, 256 = 2 ** 8, 2048 = 2 ** 11 f2 = Dense(256, activation = 'relu')(f1) # input_shape = , "leaky relu" input_2 = Input(shape = (max_length,)) # recall that after padding, len(in_seq) = max_length # 5 human captions per image s1 = Embedding(input_dim = training_vocab_size, output_dim = 256, mask_zero = True)(input_2) # embed each word as a vector with 256 components s2 = Dropout(0.5)(s1) s3 = LSTM(256)(s2) decoder1 = add([f2,s3]) # f2 + s3 decoder2 = Dense(256, activation = 'relu')(decoder1) outputs = Dense(training_vocab_size, activation = 'softmax')(decoder2) model = Model(inputs = [input_1, input_2], outputs = outputs) model.compile(loss = 'categorical_crossentropy', optimizer = 'adam') # model.fit, model.predict # categorical_crossentropy vs BLEU score # can't directly optimize for BLEU score print(model.summary()) # plot_model(model, to_file = 'model.png', show_shapes = True) return model # - 6,000 training images - 30,000 training captions # - ~7,500 unique words in training captions - this is training_vocab_size # - after tokenizing, think of tokenizer.word_index dictionary # - values in this dictionary range from 1 to training_vocab_size # - from the documentation: If mask_zero is set to True (ignore zeros added during padding), input_dim should equal size of vocabulary + 1. 
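It helps to see exactly what `create_sequences` produces for one caption: every prefix of the encoded sequence becomes a padded input and the next token becomes the target. A dependency-free sketch with toy token ids and manual left-padding standing in for `pad_sequences` (the real code also one-hot-encodes the target with `to_categorical`):

```python
def expand_caption(seq, max_length):
    """Mimic create_sequences for one encoded caption: each prefix
    seq[:i] becomes a zero-padded input of length max_length, and
    seq[i] becomes the corresponding target token."""
    pairs = []
    for i in range(1, len(seq)):
        in_seq, out_tok = seq[:i], seq[i]
        padded = [0] * (max_length - len(in_seq)) + in_seq  # pad on the left
        pairs.append((padded, out_tok))
    return pairs

# toy ids: "startseq"=1, "dog"=7, "runs"=9, "endseq"=2, max_length=5
for x, y in expand_caption([1, 7, 9, 2], 5):
    print(x, '->', y)
# [0, 0, 0, 0, 1] -> 7
# [0, 0, 0, 1, 7] -> 9
# [0, 0, 1, 7, 9] -> 2
```

So a caption of length $k$ yields $k-1$ training pairs, each paired with the same photo feature vector — which is why `X1.append(photos[key][0])` runs once per pair.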
# imports from keras.utils.vis_utils import plot_model from keras.layers import Dense, Embedding, Input, LSTM, Dropout from keras.layers.merge import add from keras.callbacks import ModelCheckpoint model = define_model(max_length, training_vocab_size) # - for embedding layer, $1940224 = 256\times 7579$ # - $524544 = (256\times 2048) + 256$ # - for LSTM layer/cell, $525312 = 4(256^2 + (256\times 256) + 256)$ # - $65792 = (256\times 256) + 256$ # - $1947803 = (256\times 7579) + 7579$ # ![Plot-of-the-Caption-Generation-Deep-Learning-Model.png](Plot-of-the-Caption-Generation-Deep-Learning-Model.png) # + run_control={"marked": false} # check validation loss after each epoch and save models which improve val_loss filepath = 'resnet_model-ep{epoch:02d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5' # .hdf5 checkpoint = ModelCheckpoint(filepath, monitor = 'val_loss', verbose = 1, save_best_only = True, mode = 'min') # - # dev images, i.e. validation images filename_dev = '../Flickr8k_text/Flickr_8k.devImages.txt' dev = load_set(filename_dev) print('Number of images in dev dataset: %d' % len(dev)) # include only descriptions pertaining to dev images dev_descriptions = load_clean_descriptions('descriptions.txt', dev) print(len(dev_descriptions)) # include only features pertaining to dev images dev_features = load_photo_features('resnet_features.pkl', dev) print(len(dev_features)) # same max_length = 34, same tokenizer trained on training captions X1dev, X2dev, ydev = create_sequences(tokenizer, max_length, dev_descriptions, dev_features) print(X1dev.shape) print(X2dev.shape) print(ydev.shape) # finally, let's fit the captioning model which was defined by define_model # why 20 epochs? verbose = 2 more or less verbose than verbose = 1? 
model.fit([X1train,X2train], ytrain, epochs=20, verbose=2, callbacks=[checkpoint], validation_data=([X1dev,X2dev], ydev)) # # Model evaluation by BLEU scores # So far, we have used the training images to fit the captioning model, and the development images to determine val_loss. Now we will use the *test* images for the first time, to evaluate the trained model. def word_from_id(integer, tokenizer): """Convert integer (value) to corresponding vocabulary word (key) using tokenizer.word_index dictionary""" for word, index in tokenizer.word_index.items(): if index == integer: return word return None def generate_caption(model, photo, tokenizer, max_length): """Given a photo feature vector, generate a caption, word by word, using the model just trained""" # caption begins with "startseq" in_text = 'startseq' # iterate over maximum potential length of caption for i in range(max_length): # encode in_text using tokenizer.word_index sequence = tokenizer.texts_to_sequences([in_text])[0] # pad this sequence so that its length is max_length = 34 sequence = pad_sequences([sequence], maxlen = max_length) # predict next word in the sequence; y_vec is vector of probabilities with 7579 components y_vec = model.predict([photo,sequence], verbose = 0) # pick out the position of the word with greatest probability y_int = np.argmax(y_vec) # convert this position into English word by means of the function we just wrote word = word_from_id(y_int, tokenizer) if word is None: break # recursion: append word as input for generating the next word in_text += ' ' + word if word == 'endseq': break return in_text def evaluate_model(model, photos, descriptions, tokenizer, max_length): """Compare the generated caption with the 5 human descriptions across the whole test set""" actual, generated = [], [] for key, desc_list in descriptions.items(): yhat = generate_caption(model, photos[key], tokenizer, max_length) # each desc begins with "startseq" and ends with "endseq" # split_desc is a list of 5 
sublists split_desc = [desc.split() for desc in desc_list] # actual is a list of lists of lists actual.append(split_desc) # generated is a list of lists generated.append(yhat.split()) print(len(actual)) print(len(generated)) # compute BLEU scores print('BLEU-1: %f' % corpus_bleu(actual, generated, weights = (1.0,0,0,0))) print('BLEU-2: %f' % corpus_bleu(actual, generated, weights = (0.5,0.5,0,0))) print('BLEU-3: %f' % corpus_bleu(actual, generated, weights = (0.33,0.33,0.33,0))) print('BLEU-4: %f' % corpus_bleu(actual, generated, weights = (0.25,0.25,0.25,0.25))) # + language="bash" # # pip install nltk # - from nltk.translate.bleu_score import corpus_bleu # test images, previously unused # shouldn't there be 1,092 test images? filename_test = '../Flickr8k_text/Flickr_8k.testImages.txt' test = load_set(filename_test) print('Number of images in test dataset: %d' % len(test)) # include only descriptions pertaining to test images test_descriptions = load_clean_descriptions('descriptions.txt', test) print(len(test_descriptions)) # include only features pertaining to test images test_features = load_photo_features('resnet_features.pkl', test) print(len(test_features)) # load the model which was trained on an AWS EC2 instance filename_model = '../resnet_model-ep03-loss3.586-val_loss3.777.h5' model = load_model(filename_model) evaluate_model(model, test_features, test_descriptions, tokenizer, max_length) # # Generate captions for entirely new images dump(tokenizer, open('tokenizer.pkl', 'wb'), protocol=3) type(tokenizer) def extract_features_2(filename): """Extract features for just one photo, unlike extract_features""" # instantiate the ResNet50 CNN model model = ResNet50() model.layers.pop() model = Model(inputs = model.inputs, outputs = model.layers[-1].output) # not strictly necessary # reshape image before passing through pretrained ResNet model image = load_img(filename, target_size=(224,224)) image = img_to_array(image) print(image.shape) image = 
image.reshape((1,image.shape[0],image.shape[1],image.shape[2])) print(image.shape) image = preprocess_input(image) features_2 = model.predict(image, verbose = 0) # the prediction is a vector with 2048 components return features_2 photo = extract_features_2('example.jpg') caption = generate_caption(model, photo, tokenizer, max_length) caption = caption.split() caption = ' '.join(caption[1:-1]) print(caption.upper()) # ![example.jpg](example.jpg) import matplotlib.pyplot as plt # %matplotlib inline # %config InlineBackend.figure_format='retina' dog = plt.imread('example.jpg') plt.imshow(dog); sorted(tokenizer.word_counts.items(), key=lambda kv: kv[1], reverse=True)[:50] # # Coding practice - data structures # # *Cracking the Coding Interview* # ## Class of nodes for binary trees, and functions for traversal class Node: def __init__(self, value): self.val = value self.left = None self.right = None def trav(self): if self.left: self.left.trav() print(self.val) if self.right: self.right.trav() def preorder(self): print(self.val) if self.left: self.left.preorder() if self.right: self.right.preorder() def postorder(self): if self.left: self.left.postorder() if self.right: self.right.postorder() print(self.val) node_8 = Node(8) node_3 = Node(3) node_10 = Node(10) node_8.left = node_3 node_8.right = node_10 node_1 = Node(1) node_6 = Node(6) node_3.left = node_1 node_3.right = node_6 node_4 = Node(4) node_7 = Node(7) node_6.left = node_4 node_6.right = node_7 node_14 = Node(14) node_13 = Node(13) node_10.right = node_14 node_14.left = node_13 # ## Function to create minimal / balanced BST from sorted array # + def min_bst_helper(start,end,arr): if start > end: return mid = (start + end) // 2 n = Node(arr[mid]) # print(n.val) n.left = min_bst_helper(start,mid - 1,arr) n.right = min_bst_helper(mid + 1,end,arr) return n def min_bst(sort_arr): return min_bst_helper(0,len(sort_arr) - 1,sort_arr) # - sort_arr = [1,3,4,6,7,8,10,13,14] min_bst(sort_arr).val 
min_bst(sort_arr).left.val min_bst(sort_arr).right.val test_list = [] test_set = set() test_list.append(4) test_list.append(3) # test_list.insert(0,3) test_list test_list.append(3) test_set.update([3]) test_list test_set test_list.append(3) test_set.update([3]) test_list test_set test_list.append(4) test_list test_list.pop() # the last thing that was appended gets popped off, like a stack test_list # ## Heaps - specifically, min heaps from heapq import heappush, heappop test_heap = [] heappush(test_heap, 3) heappush(test_heap, 4) heappush(test_heap, 2) heappush(test_heap, 5) heappush(test_heap, 1) heappush(test_heap, 7) heappush(test_heap, 8) heappush(test_heap, 6) test_heap print(test_heap[0]) print(min(test_heap)) # ## Class of stacks, which are basically just Python lists - LIFO! class Stack: def __init__(self): self.stack = [] def stackpop(self): if len(self.stack) == 0: return "Can't pop since it's empty!" else: return self.stack.pop() def stackpush(self,val): return self.stack.append(val) def stackpeak(self): if len(self.stack) == 0: return "Can't peek since it's empty" else: return self.stack[-1] test_stack = Stack() test_stack.stack test_stack.stackpop() test_stack.stackpush(3) test_stack.stack # ## Towers of Hanoi, a meta-class problem (OOP) # + class Tower: def __init__(self, i): self.disks = Stack() self.index = i # def index(self): # return self.index def add(self, d): # d is the value of the disk we are trying to place if len(self.disks.stack) != 0 and self.disks.stackpeak() <= d: print("Error placing disk " + str(d)) else: self.disks.stackpush(d) def move_top_to(self, t): # t is the index of another tower top = self.disks.stackpop() t.add(top) def move_disks(self, n, destination, buffer): # destination, buffer are indices for the other two towers if n > 0: self.move_disks(n-1, buffer, destination) self.move_top_to(destination) buffer.move_disks(n-1, destination, self) # - def hanoi(n): # n is the number of disks towers = [] for i in range(3): # 
towers[i] = Tower(i) towers.append(Tower(i)) for j in range(n, 0, -1): towers[0].add(j) # populating Tower(0) with the n disks towers[0].move_disks(n, towers[2], towers[1]) return towers towers = hanoi(5) towers[0].disks.stack towers[2].disks.stack towers[1].disks.stack # ## Making change - RECURSION # + def count_ways(amount): denoms = [100,50,25,10,5,1] return count_ways_helper(amount, denoms, 0) def count_ways_helper(amount, denoms, index): if index >= len(denoms) - 1 or amount == 0: return 1 # not ways += 1? don't increment index by 1. denom_amount = denoms[index] ways = 0 # clearing and resetting ways?! for i in range(amount): if i * denom_amount > amount: break amount_remaining = amount - (i * denom_amount) ways += count_ways_helper(amount_remaining, denoms, index + 1) return ways # - count_ways(100) def num_ways(amount): ways = 0 for i in range(amount + 1): for j in range((amount // 5) + 1): for k in range((amount // 10) + 1): for l in range((amount // 25) + 1): if i + 5*j + 10*k + 25*l == amount: ways += 1 return ways num_ways(500) # ## Class of queues - FIFO! 
class Queue: def __init__(self): self.queue = [] def queuepop(self): # dequeue if len(self.queue) == 0: return "Can't pop since it's empty" else: return self.queue.pop() def queuepush(self,val): # enqueue # return self.queue.append(val) return self.queue.insert(0,val) test_queue = Queue() test_queue.queue test_queue.queuepop() test_queue.queuepush(2) test_queue.queuepush(3) test_queue.queue test_queue.queuepop() test_queue.queue # ## Class of nodes for singly linked lists class Node_LL: def __init__(self,value): self.val = value self.next = None def traverse(self): node = self while node != None: print(node.val) node = node.next def trav_recursive(self): print(self.val) if self.next: self.next.trav_recursive() node1 = Node_LL(12) # the head node node2 = Node_LL(99) node3 = Node_LL(37) node1.next = node2 node2.next = node3 node1.traverse() node1.trav_recursive() # ## Class of nodes for doubly linked lists class Node_DLL: def __init__(self,value): self.val = value self.next = None self.prev = None def traverse_forward(self): node = self while node != None: print(node.val) node = node.next def traverse_backward(self): node = self while node != None: print(node.val) node = node.prev def delete(self): self.prev.next = self.next self.next.prev = self.prev node1 = Node_DLL(12) node2 = Node_DLL(99) node3 = Node_DLL(37) node1.next = node2 node2.next = node3 node3.prev = node2 node2.prev = node1 node1.traverse_forward() node3.traverse_backward() node2.delete() node1.next.val node3.prev.val # ## Breadth first search / traversal for binary trees (w/o queues) def bfs(node): result = [] current_level = [node] while current_level != []: next_level = [] for node in current_level: result.append(node.val) if node.left: next_level.append(node.left) if node.right: next_level.append(node.right) current_level = next_level return result bfs(node_8) node_8.trav() node_8.preorder() node_8.postorder() # ## Miscellaneous functions def word_count_helper(string): output_dict = {} for word in
string.split(' '): if word in output_dict.keys(): output_dict[word] += 1 # value is count, not a list else: output_dict[word] = 1 return output_dict word_count_helper('hello hello world') def max_profit(prices): if not prices: print('There are no prices!') else: max_profit = 0 max_price = prices[-1] # min_price = prices[0] for price in prices[::-1]: if max_price - price > max_profit: max_profit = max_price - price if price > max_price: max_price = price # if max_profit < price - min_price: # max_profit = price - min_price # if price < min_price: # min_price = price return max_profit prices = [3,-1,4,9.5,0] max_profit(prices) max_profit([]) def magic_slow(sort_arr): magic_indices = [] for i in range(len(sort_arr)): if sort_arr[i] == i: magic_indices.append(i) if len(magic_indices) == 0: return "There are no magic indices" else: return magic_indices magic_slow([0,1,2,3]) magic_slow([1,2,3,4]) magic_slow([-40,-20,-1,1,2,3,5,7,9,12,13]) # + def magic_fast_helper(arr, start, end): if start > end: return mid = (start + end) // 2 if arr[mid] == mid: magic_indices.append(mid) elif arr[mid] > mid: return magic_fast_helper(arr, start, mid - 1) else: return magic_fast_helper(arr, mid + 1, end) def magic_fast(sort_arr): return magic_fast_helper(sort_arr, 0, len(sort_arr) - 1) # - magic_indices = [] magic_fast([-40,-20,-1,1,2,3,5,7,9,12,13]) print(magic_indices) def power(n): if n == 0: return [[]] if n == 1: return [[], [1]] temp_list = [] for subset in power(n-1): temp_list.append(subset + [n]) if n > 1: return power(n-1) + temp_list power(5) def kaprekar(number): if len(str(number)) != 4 or len(set(str(number))) == 1: return "Invalid input" else: ascending = int(''.join(sorted(str(number)))) descending = int(''.join(sorted(str(number), reverse = True))) output = descending - ascending count = 1 while output != 6174: ascending = int(''.join(sorted(str(output)))) descending = int(''.join(sorted(str(output), reverse = True))) output = descending - ascending count += 1 return 
count kaprekar(5790) # ## Fibonacci: memoization & recursion def fibonacci(n): """Return nth number in Fibonacci sequence iteratively (bottom-up)""" if n < 0: print('Value Error: input must be nonnegative integer!') else: if n == 0: return 0 if n == 1: return 1 a = 0 b = 1 for i in range(2,n): c = a + b a = b b = c return a + b fibonacci(-1) fibonacci(35) def fib(n): """Return the nth Fibonacci number using recursion""" if n < 0: print('Value Error: input must be nonnegative integer!') else: if n == 0: return 0 elif n == 1: return 1 else: return fib(n-1) + fib(n-2) fib(35) fib(-1)
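The `fibonacci` function above builds the answer bottom-up rather than caching recursive calls; true memoization keeps the recursive shape of `fib` but stores each result so it is computed once. A minimal sketch using `functools.lru_cache` (the name `fib_memo` is just for this example):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    """Recursive Fibonacci with memoization: each fib_memo(k) is
    evaluated once and cached, so the exponential fib(n-1) + fib(n-2)
    recursion tree collapses to O(n) calls."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(35))  # 9227465
```

Compare with the plain recursive `fib(35)` above, which re-derives the same subproblems roughly `fib(35)` times and takes seconds, while the cached version returns immediately.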
# Source notebook: Image_Captioning_ResNet.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/Ajjukota/Earthquake_Prediction_Model_with_NeuralNetwork/blob/main/earthuake.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="7pS2zx5WsGG_" # **Importing necessary libraries** # + id="vGByHNhZsKcA" import numpy as np import matplotlib.pyplot as plt import pandas as pd # + [markdown] id="MF60dbP8toqn" # Importing the data
HBlcmNlbnQpOwoKICAgIG91dHB1dEVsZW1lbnQuYXBwZW5kQ2hpbGQobGkpOwoKICAgIGNvbnN0IGZpbGVEYXRhUHJvbWlzZSA9IG5ldyBQcm9taXNlKChyZXNvbHZlKSA9PiB7CiAgICAgIGNvbnN0IHJlYWRlciA9IG5ldyBGaWxlUmVhZGVyKCk7CiAgICAgIHJlYWRlci5vbmxvYWQgPSAoZSkgPT4gewogICAgICAgIHJlc29sdmUoZS50YXJnZXQucmVzdWx0KTsKICAgICAgfTsKICAgICAgcmVhZGVyLnJlYWRBc0FycmF5QnVmZmVyKGZpbGUpOwogICAgfSk7CiAgICAvLyBXYWl0IGZvciB0aGUgZGF0YSB0byBiZSByZWFkeS4KICAgIGxldCBmaWxlRGF0YSA9IHlpZWxkIHsKICAgICAgcHJvbWlzZTogZmlsZURhdGFQcm9taXNlLAogICAgICByZXNwb25zZTogewogICAgICAgIGFjdGlvbjogJ2NvbnRpbnVlJywKICAgICAgfQogICAgfTsKCiAgICAvLyBVc2UgYSBjaHVua2VkIHNlbmRpbmcgdG8gYXZvaWQgbWVzc2FnZSBzaXplIGxpbWl0cy4gU2VlIGIvNjIxMTU2NjAuCiAgICBsZXQgcG9zaXRpb24gPSAwOwogICAgZG8gewogICAgICBjb25zdCBsZW5ndGggPSBNYXRoLm1pbihmaWxlRGF0YS5ieXRlTGVuZ3RoIC0gcG9zaXRpb24sIE1BWF9QQVlMT0FEX1NJWkUpOwogICAgICBjb25zdCBjaHVuayA9IG5ldyBVaW50OEFycmF5KGZpbGVEYXRhLCBwb3NpdGlvbiwgbGVuZ3RoKTsKICAgICAgcG9zaXRpb24gKz0gbGVuZ3RoOwoKICAgICAgY29uc3QgYmFzZTY0ID0gYnRvYShTdHJpbmcuZnJvbUNoYXJDb2RlLmFwcGx5KG51bGwsIGNodW5rKSk7CiAgICAgIHlpZWxkIHsKICAgICAgICByZXNwb25zZTogewogICAgICAgICAgYWN0aW9uOiAnYXBwZW5kJywKICAgICAgICAgIGZpbGU6IGZpbGUubmFtZSwKICAgICAgICAgIGRhdGE6IGJhc2U2NCwKICAgICAgICB9LAogICAgICB9OwoKICAgICAgbGV0IHBlcmNlbnREb25lID0gZmlsZURhdGEuYnl0ZUxlbmd0aCA9PT0gMCA/CiAgICAgICAgICAxMDAgOgogICAgICAgICAgTWF0aC5yb3VuZCgocG9zaXRpb24gLyBmaWxlRGF0YS5ieXRlTGVuZ3RoKSAqIDEwMCk7CiAgICAgIHBlcmNlbnQudGV4dENvbnRlbnQgPSBgJHtwZXJjZW50RG9uZX0lIGRvbmVgOwoKICAgIH0gd2hpbGUgKHBvc2l0aW9uIDwgZmlsZURhdGEuYnl0ZUxlbmd0aCk7CiAgfQoKICAvLyBBbGwgZG9uZS4KICB5aWVsZCB7CiAgICByZXNwb25zZTogewogICAgICBhY3Rpb246ICdjb21wbGV0ZScsCiAgICB9CiAgfTsKfQoKc2NvcGUuZ29vZ2xlID0gc2NvcGUuZ29vZ2xlIHx8IHt9OwpzY29wZS5nb29nbGUuY29sYWIgPSBzY29wZS5nb29nbGUuY29sYWIgfHwge307CnNjb3BlLmdvb2dsZS5jb2xhYi5fZmlsZXMgPSB7CiAgX3VwbG9hZEZpbGVzLAogIF91cGxvYWRGaWxlc0NvbnRpbnVlLAp9Owp9KShzZWxmKTsK", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 73} 
# + colab={"base_uri": "https://localhost:8080/", "height": 73} id="FXcHmdGUsa-B" outputId="d9319051-3e68-4cf8-d79d-4266a2ac7308"
from google.colab import files

uploaded = files.upload()

# + [markdown] id="5JsbqIOkwgjx"
#

# + colab={"base_uri": "https://localhost:8080/", "height": 256} id="IHHUdDaKt3MV" outputId="22cd74ab-9575-48be-bc6f-6408317cbf5e"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv('eathquakedata.csv')
data.head()

# + [markdown] id="xQB3Vxy-wkqX"
# Converting the date and time to Unix time

# + colab={"base_uri": "https://localhost:8080/", "height": 256} id="bxZjwXiOuULQ" outputId="8f8ade4a-bdd5-4f9a-a05f-89e6dda0aa1b"
import datetime
import time

timestamp = []
for d, t in zip(data['Date'], data['Time']):
    try:
        ts = datetime.datetime.strptime(d + ' ' + t, '%m/%d/%Y %H:%M:%S')
        timestamp.append(time.mktime(ts.timetuple()))
    except ValueError:
        # print('ValueError')
        timestamp.append('ValueError')

timeStamp = pd.Series(timestamp)
data['Timestamp'] = timeStamp.values
finalData = data.drop(['Date', 'Time'], axis=1)
finalData = finalData[finalData.Timestamp != 'ValueError']
finalData.head()

# + colab={"base_uri": "https://localhost:8080/", "height": 857} id="9mzEsSXr05my" outputId="90e4c2eb-1be6-46c4-b0c8-88e885ed4b0d"
# !apt-get install libgeos-3.5.0
# !apt-get install libgeos-dev
# !pip install https://github.com/matplotlib/basemap/archive/master.zip

# + [markdown] id="6RJfkvEe1u7f"
# Visualizing the data

# + colab={"base_uri": "https://localhost:8080/", "height": 441} id="pp9JYqDzvClg" outputId="4cb230e2-c434-40f1-faac-fb572a1f1fe5"
from mpl_toolkits.basemap import Basemap

m = Basemap(projection='mill', llcrnrlat=-80, urcrnrlat=80,
            llcrnrlon=-180, urcrnrlon=180, lat_ts=20, resolution='c')

longitudes = data["Longitude"].tolist()
latitudes = data["Latitude"].tolist()
x, y = m(longitudes, latitudes)

fig = plt.figure(figsize=(12, 10))
plt.title("All affected areas")
m.plot(x, y, "o", markersize=2, color='blue')
m.drawcoastlines()
m.fillcontinents(color='coral', lake_color='aqua')
m.drawmapboundary()
m.drawcountries()
plt.show()

# + [markdown] id="0mh4744O3hot"
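One caveat with the conversion above: `time.mktime` interprets the parsed timestamp in the machine's local time zone, so the resulting Unix times shift depending on where the notebook runs. A small sketch of a timezone-stable alternative using the stdlib's `calendar.timegm` (the function name `to_unix_utc` is just for illustration):

```python
import calendar
import datetime

def to_unix_utc(date_str, time_str):
    """Parse 'MM/DD/YYYY' + 'HH:MM:SS' and return Unix seconds, treating
    the stamp as UTC rather than local time (as time.mktime would)."""
    ts = datetime.datetime.strptime(date_str + ' ' + time_str, '%m/%d/%Y %H:%M:%S')
    return calendar.timegm(ts.timetuple())

print(to_unix_utc('01/01/1970', '00:00:01'))  # 1
```

Either convention works for the regression below, as long as it is applied consistently; `timegm` just removes the dependence on the host's locale.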
#

# + colab={"base_uri": "https://localhost:8080/"} id="zZwcvxrQ0D34" outputId="7ca7bbc4-dfaf-4aed-f105-14420fe6d196"
X = finalData[['Timestamp', 'Latitude', 'Longitude']]
y = finalData[['Magnitude', 'Depth']]

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
X_train.shape

# + [markdown] id="cqOoNp5a3iva"
# Creating a neural network to fit the data from the training set.
#
# ---
# > This neural network consists of three dense layers with (16, 16, 2) nodes. The two hidden layers use the activation function chosen by the grid search (sigmoid or ReLU), and the output layer uses softmax.

# + id="dJLJ8Axx3Blh"
from keras.models import Sequential
from keras.layers import Dense

def create_model(neurons, activation, optimizer, loss):
    model = Sequential()
    model.add(Dense(neurons, activation=activation, input_shape=(3,)))
    model.add(Dense(neurons, activation=activation))
    model.add(Dense(2, activation='softmax'))
    model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
    return model

# + id="gZqC3hvX4nhW"
from keras.wrappers.scikit_learn import KerasClassifier

model = KerasClassifier(build_fn=create_model, verbose=0)

neurons = [16]
batch_size = [10]
epochs = [10]
activation = ['sigmoid', 'relu']
optimizer = ['SGD', 'Adadelta']
loss = ['squared_hinge']
param_grid = dict(neurons=neurons, batch_size=batch_size, epochs=epochs,
                  activation=activation, optimizer=optimizer, loss=loss)

# + colab={"base_uri": "https://localhost:8080/"} id="dar-PrQp4qgk" outputId="4ef437e1-ce48-46da-de78-9c7a6798e9ed"
from sklearn.model_selection import learning_curve, GridSearchCV

grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1)

# Cast the features and targets to plain integer arrays so Keras accepts them.
X_train = np.asarray(X_train).astype(int)
y_train = np.asarray(y_train).astype(int)

grid_result = grid.fit(X_train, y_train)

print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
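For orientation, the grid above is small: `GridSearchCV` fits one model per parameter combination (times the number of cross-validation folds). The combination count can be checked with the stdlib alone:

```python
import itertools

# Same hyperparameter grid as above; only activation and optimizer vary.
param_grid = {
    'neurons': [16], 'batch_size': [10], 'epochs': [10],
    'activation': ['sigmoid', 'relu'],
    'optimizer': ['SGD', 'Adadelta'],
    'loss': ['squared_hinge'],
}
keys = sorted(param_grid)
combos = [dict(zip(keys, values))
          for values in itertools.product(*(param_grid[k] for k in keys))]
print(len(combos))  # 4 candidate settings
```

So the search trains only 4 distinct configurations (2 activations x 2 optimizers), each evaluated across the CV folds.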
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, stdev, param))

# + colab={"base_uri": "https://localhost:8080/"} id="ub1yRhVM40i0" outputId="9d4d4011-2909-4bb7-9539-9c7a6362d2c9"
model = Sequential()
model.add(Dense(16, activation='relu', input_shape=(3,)))
model.add(Dense(16, activation='relu'))
model.add(Dense(2, activation='softmax'))

# Cast the features and targets to plain integer arrays so Keras accepts them.
X_train = np.asarray(X_train).astype(int)
y_train = np.asarray(y_train).astype(int)
X_test = np.asarray(X_test).astype(int)
y_test = np.asarray(y_test).astype(int)

model.compile(optimizer='SGD', loss='squared_hinge', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=10, epochs=20, verbose=1,
          validation_data=(X_test, y_test))

[test_loss, test_acc] = model.evaluate(X_test, y_test)
print("Evaluation result on Test Data : Loss = {}, accuracy = {}".format(test_loss, test_acc))

# + [markdown] id="85Asj1qe_k7A"
# > **The accuracy of this neural network at predicting earthquakes is nearly 92%.**

# + id="1UAQGF6M9l75"
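One architectural note on the final layer above: a `softmax` output always sums to 1 across its units, which is worth keeping in mind when the two targets are raw magnitude and depth values; a linear output with a regression loss (e.g. MSE) is the more conventional choice for such targets. A stdlib sketch of the softmax constraint:

```python
import math

def softmax(xs):
    # Standard softmax: exponentiate, then normalise so the outputs sum to 1.
    # Subtracting the max first keeps exp() from overflowing.
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

out = softmax([5.8, 70.0])  # illustrative activations, not real targets
print(sum(out))  # 1.0 (up to floating point)
```

Whatever two numbers go in, the outputs are forced onto a simplex, so the network cannot emit an unconstrained (magnitude, depth) pair through this head.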
earthuake.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).
#
# Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:

NAME = ""
COLLABORATORS = ""

# ---

# <!--NOTEBOOK_HEADER-->
# *This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);
# content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).*

# <!--NAVIGATION-->
# < [Packing and Relax](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/06.02-Packing-design-and-regional-relax.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Protein Design 2](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/06.04-Protein-Design-2.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/06.03-Design-with-a-resfile-and-relax.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>

# # Protein Design with a Resfile and FastRelax
#
# Keywords: FastDesign, FastRelax, ResfileCommandOperation, Resfile, ResidueSelector, MoveMapFactory, TaskFactory, TaskOperation, NoRepackDisulfides, IncludeCurrent, ReadResfile, conf2pdb_chain(), pose_from_rcsb(), create_score_function(), CA_rmsd()

# ## Overview
#
# In this Workshop, we will learn the classic way to design proteins, but in the same breath introduce the concept of design using a flexible backbone protocol.
#
# This protocol is essentially design during FastRelax. A separate class, FastDesign, has a few more options for design, but they are essentially the same.
#
# Many modern designs have used this FastDesign/RelaxedDesign protocol - including many Science papers from the Baker lab and the RosettaAntibodyDesign (RAbD) protocol that we will cover in another tutorial.
#
# Before this workshop, you should read about the resfile syntax here: https://www.rosettacommons.org/docs/latest/rosetta_basics/file_types/resfiles

# *Warning*: This notebook uses `pyrosetta.distributed.viewer` code, which runs in `jupyter notebook` and might not run if you're using `jupyterlab`.

# Notebook setup
import sys
if 'google.colab' in sys.modules:
    # !pip install pyrosettacolabsetup
    import pyrosettacolabsetup
    pyrosettacolabsetup.mount_pyrosetta_install()
    print("Notebook is set for PyRosetta use in Colab. Have fun!")

# **Make sure you are in the directory with the pdb files:**
#
# `cd google_drive/My\ Drive/student-notebooks/`

# +
import logging
logging.basicConfig(level=logging.INFO)
import pyrosetta
import pyrosetta.toolbox

from IPython.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
# -

# ### Initialize PyRosetta

pyrosetta.init("-ignore_unrecognized_res 1 -ex1 -ex2aro -detect_disulf 0")

# For this tutorial, let's use the well-studied native protein crambin from PDB ID 1AB1 (http://www.rcsb.org/structure/1AB1).
#
# Set up the input pose and scorefunction:

start_pose = pyrosetta.toolbox.rcsb.pose_from_rcsb("1AB1", ATOM=True, CRYS=False)
pose = start_pose.clone()
scorefxn = pyrosetta.create_score_function("ref2015_cart.wts")

# Make a list of which residues are cysteine:

cys_res = []
for i, aa in enumerate(start_pose.sequence(), start=1):
    if aa == "C":
        cys_res.append(i)
print(cys_res)

# Inspect `start_pose` using the `PyMolMover` or `dump_pdb()`

# ## Design strategy:
#
# Design away the cysteine residues (i.e.
# disulfide bonds) using a resfile, allowing all side-chains to re-pack and all backbone and side-chain torsions to minimize using the `FastRelax` mover.
#
# Read more about resfile file structure at https://www.rosettacommons.org/docs/latest/rosetta_basics/file_types/resfiles

# To write a resfile, we need to know which chain to mutate.
#
# We can see that the pose consists of only chain "A" by printing the `pose.pdb_info()` object:

print(pose.pdb_info())

# More programmatically, we could find which chains are in the `pose` using `pyrosetta.rosetta.core.pose.conf2pdb_chain(pose)`, which returns a `pyrosetta.rosetta.std.map_unsigned_long_char` object that is iterable.

print(pyrosetta.rosetta.core.pose.conf2pdb_chain(pose))

for k, v in pyrosetta.rosetta.core.pose.conf2pdb_chain(pose).items():
    print(v)

# So we could write a resfile to disc indicating design specifications to mutate only the cysteine residues in chain "A". Use the syntax described below, and save your resfile in this directory as `resfile`.
#
# https://www.rosettacommons.org/docs/latest/rosetta_basics/file_types/resfiles

# + deletable=false nbgrader={"cell_type": "code", "checksum": "2744d118ca06e7d661aa5160e7c6b1a9", "grade": true, "grade_id": "cell-4dfc4f43e2031e10", "locked": false, "points": 0, "schema_version": 3, "solution": true}
# YOUR CODE HERE
raise NotImplementedError()
# -

# Note that we don't necessarily need a resfile to use resfile commands. We can now do this in an intuitive way through code and `ResidueSelectors` using the `ResfileCommandOperation`. The main docs for the XML interface are available below, however the code-level interface is extremely similar. Use `?` to get more info on this. The operation is located in `pyrosetta.rosetta.core.pack.task.operation`, as we saw in the previous tutorial.
#
# https://www.rosettacommons.org/docs/latest/scripting_documentation/RosettaScripts/TaskOperations/taskoperations_pages/ResfileCommandOperation

# Now we can set up the TaskOperations for the `FastRelax` mover. These tell `FastRelax` which residues to design or repack during the packer steps in `FastRelax`. You should be familiar with this from the previous tutorial.
#
# We use `IncludeCurrent` to include the current rotamer from the crystal structure during packing.

# +
# The task factory accepts all the task operations
tf = pyrosetta.rosetta.core.pack.task.TaskFactory()

# These are pretty standard
tf.push_back(pyrosetta.rosetta.core.pack.task.operation.InitializeFromCommandline())
tf.push_back(pyrosetta.rosetta.core.pack.task.operation.IncludeCurrent())
tf.push_back(pyrosetta.rosetta.core.pack.task.operation.NoRepackDisulfides())

# Include the resfile
tf.push_back(pyrosetta.rosetta.core.pack.task.operation.ReadResfile(resfile))

# Convert the task factory into a PackerTask to take a look at it
packer_task = tf.create_task_and_apply_taskoperations(pose)
# View the PackerTask
print(packer_task)
# -

# The PackerTask looks as intended!
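As a quick reminder of the resfile format that `ReadResfile` consumes (illustrative syntax only — the residue numbers below are made up, and this is not the exercise solution): a resfile opens with default commands applied to every residue, then `start`, then per-residue overrides of the form `<PDB number> <chain> <command>`.

```python
# Illustrative resfile only -- not the exercise solution.
# NATAA = repack but keep the native amino acid (the default here);
# ALLAA = allow all 20 amino acids (design) at the listed positions.
resfile_text = """NATAA
start
3 A ALLAA
4 A ALLAA
"""

with open("resfile_example", "w") as f:
    f.write(resfile_text)

print(open("resfile_example").read().splitlines()[1])  # start
```

Your own resfile should instead list the cysteine positions found in `cys_res` above.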
# Now we can set up a `MoveMap` or a `MoveMapFactory` to specify which torsions are free to minimize during the minimization steps of the `FastDesign` mover.

# Set up a MoveMapFactory
mmf = pyrosetta.rosetta.core.select.movemap.MoveMapFactory()
mmf.all_bb(setting=True)
mmf.all_bondangles(setting=True)
mmf.all_bondlengths(setting=True)
mmf.all_chi(setting=True)
mmf.all_jumps(setting=True)
mmf.set_cartesian(setting=True)

# +
# Set up a MoveMap
# mm = pyrosetta.rosetta.core.kinematics.MoveMap()
# mm.set_bb(True)
# mm.set_chi(True)
# mm.set_jump(True)

# If needed, you could turn off bb and chi torsions for individual residues like this:
#
# vector1 of true/false for each residue in the pose
# subset_to_minimize = do_something_set.apply(pose)
#
# for i in range(1, pose.size() + 1):
#     if (not subset_to_minimize[i]):
#         mm.set_bb(i, False)
#         mm.set_chi(i, False)
# -

# Because some Movers only take as input a `MoveMap`, for backwards-compatibility one could generate a `MoveMap` from a `MoveMapFactory` using the `MoveMapFactory` function `create_movemap_from_pose(pose)`

# Now let's double-check some more `pose` information to verify that we are ready for `FastRelax`:

display_pose = pyrosetta.rosetta.protocols.fold_from_loops.movers.DisplayPoseLabelsMover()
display_pose.tasks(tf)
display_pose.movemap_factory(mmf)
display_pose.apply(pose)

# Setting up `FastRelax` prints the default `relaxscript`, showing the `ramp_repack_min` settings with the following assignments:
# >ramp_repack_min [scale:fa_rep] [min_tolerance] [coord_cst_weight]

# +
fr = pyrosetta.rosetta.protocols.relax.FastRelax(scorefxn_in=scorefxn, standard_repeats=1)
fr.cartesian(True)
fr.set_task_factory(tf)
fr.set_movemap_factory(mmf)
fr.min_type("lbfgs_armijo_nonmonotone")  # For non-Cartesian scorefunctions, use "dfpmin_armijo_nonmonotone"
# Note that this min_type is automatically set when you set the cartesian option.
# But it is good to be aware of this - as not all protocols will do this for you.

# fr.set_movemap(mm)             # Could have optionally specified a MoveMap instead of MoveMapFactory
# fr.minimize_bond_angles(True)  # If not using MoveMapFactory, could specify bond angle minimization here
# fr.minimize_bond_lengths(True) # If not using MoveMapFactory, could specify bond length minimization here
# -

# For recommendations on setting `fr.min_type()` for the scorefunction being used, see: https://www.rosettacommons.org/docs/latest/rosetta_basics/structural_concepts/minimization-overview#recommendations

# Run Fast(Design)! Note: this takes ~1min 31s

# + deletable=false nbgrader={"cell_type": "code", "checksum": "85fef49f0467d2e5df695c1c0b935a7b", "grade": true, "grade_id": "cell-5bcc2bfb4357035d", "locked": false, "points": 0, "schema_version": 3, "solution": true}
# YOUR CODE HERE
raise NotImplementedError()
# -

# ### Analysis
#
# Inspect the resulting design!

# By how many Angstroms RMSD did the backbone Cα atoms move?

pyrosetta.rosetta.core.scoring.CA_rmsd(start_pose, pose)

# What is the delta `total_score` from `start_pose` to `pose`? Why is it large?

delta_total_score = scorefxn(pose) - scorefxn(start_pose)
print(delta_total_score)

# What is the per-residue energy difference for each mutated position between `start_pose` and `pose`?
# + deletable=false nbgrader={"cell_type": "code", "checksum": "aeea781ee5e947300b9eef9988b903f3", "grade": true, "grade_id": "cell-32cfb702cb53c564", "locked": false, "points": 0, "schema_version": 3, "solution": true}
# YOUR CODE HERE
raise NotImplementedError()
# -

# <!--NAVIGATION-->
# < [Packing and Relax](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/06.02-Packing-design-and-regional-relax.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Protein Design 2](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/06.04-Protein-Design-2.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/06.03-Design-with-a-resfile-and-relax.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
student-notebooks/06.03-Design-with-a-resfile-and-relax.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Imports and setup

# +
from utilities import *

import math  # used by the rotation-range helpers below
import numpy as np
import cv2
import os
import re
import importlib

import pandas as pd
import sklearn
from sklearn.metrics import mean_squared_error
from sklearn import preprocessing

from functools import reduce

from matplotlib import cm
import matplotlib as mpl
import matplotlib.animation as animation
import matplotlib.colors as colors
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1 import make_axes_locatable

# %matplotlib widget

plt.rcParams.update({
    'grid.linewidth': 0,
    'grid.color': 'lightgrey',
    'savefig.facecolor': (0.0, 0.0, 0.0, 0.0),
    'savefig.transparent': True,
})
# -

# ## Basic functions

# +
def scanDir(directory, extension='avi', filter_string=None, filter_out=False, verbose=False):
    file_list = []
    for root, dirs, files in os.walk(directory):
        for name in files:
            if name.lower().endswith(extension):
                filename = os.path.join(root, name)
                if verbose:
                    print("Found file with extension ." + extension + ": " + filename)
                file_list.append(filename)
    if filter_string is not None:
        if filter_out:
            file_list = [file for file in file_list if not re.search(filter_string, file)]
        else:
            file_list = [file for file in file_list if re.search(filter_string, file)]
    return file_list

def convertRStoLS(df):
    newdf = df.copy()
    colnames = newdf.columns
    cols_to_flip = [col for col in colnames if 'ABAD' in col or 'LAR' in col]
    for col in cols_to_flip:
        newdf[col] = newdf[col]*-1
    return newdf

def convertCMtoMM(df):
    newdf = df.copy()
    colnames = newdf.columns
    data_cols = [col for col in colnames if 'frame' not in col]
    for col in data_cols:
        newdf[col] = newdf[col]*10
    return newdf

def format3dPlot(axObj, title, xRange, yRange, zRange, view=None,
                 color='grey', minimal=False):
    axObj.set_title(title)
    if view:
        axObj.view_init(view[0], view[1])
    if minimal:
        axObj.set_axis_off()
    axObj.set_xlabel('- add X + abd', size='small', color=colors['red'])
    axObj.set_ylabel('- sup Y + prn', size='small', color=colors['green'])
    axObj.set_zlabel('- ext Z + flx', size='small', color=colors['blue'])
    axObj.set_xlim(xRange[0], xRange[1])
    axObj.set_ylim(yRange[0], yRange[1])
    axObj.set_zlim(zRange[0], zRange[1])
    axObj.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
    axObj.w_yaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
    axObj.w_zaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
    axObj.w_xaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
    axObj.w_yaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
    axObj.w_zaxis.line.set_color((1.0, 1.0, 1.0, 0.0))
    axObj.minorticks_off()
    axObj.tick_params(reset=True, colors=color, labelsize='x-small')
    return axObj

def getShoulderRotationRanges(df):
    maxRx = math.ceil(max(df['Rx'])/10)*10
    maxcRx = math.ceil(max(df['cRx'])/10)*10
    minRx = math.floor(min(df['Rx'])/10)*10
    mincRx = math.floor(min(df['cRx'])/10)*10
    maxRy = math.ceil(max(df['Ry'])/10)*10
    minRy = math.floor(min(df['Ry'])/10)*10
    maxRz = math.ceil(max(df['Rz'])/10)*10
    minRz = math.floor(min(df['Rz'])/10)*10
    result = {'maxRx':maxRx, 'maxcRx':maxcRx, 'minRx':minRx, 'mincRx':mincRx,
              'maxRy':maxRy, 'minRy':minRy, 'maxRz':maxRz, 'minRz':minRz}
    return result

def getElbowRotationRanges(df):
    maxRx = math.ceil(max(df['eRx'])/10)*10
    maxcRx = math.ceil(max(df['ceRx'])/10)*10
    minRx = math.floor(min(df['eRx'])/10)*10
    mincRx = math.floor(min(df['ceRx'])/10)*10
    maxRy = math.ceil(max(df['eRy'])/10)*10
    minRy = math.floor(min(df['eRy'])/10)*10
    maxRz = math.ceil(max(df['eRz'])/10)*10
    minRz = math.floor(min(df['eRz'])/10)*10
    result = {'maxRx':maxRx, 'maxcRx':maxcRx, 'minRx':minRx, 'mincRx':mincRx,
              'maxRy':maxRy, 'minRy':minRy, 'maxRz':maxRz, 'minRz':minRz}
    return result

def getClavscapRotationRanges(df):
    maxRx = math.ceil(max(df['clavRx'])/10)*10
    minRx = math.floor(min(df['clavRx'])/10)*10
    result = {'maxRx':maxRx, 'minRx':minRx}
    return result

def getTotalRotationRanges(df):
    shoulder = getShoulderRotationRanges(df)
    elbow = getElbowRotationRanges(df)
    clavscap = getClavscapRotationRanges(df)
    maxRx = max(shoulder['maxRx'], elbow['maxRx'], clavscap['maxRx'])
    maxcRx = max(shoulder['maxcRx'], elbow['maxcRx'])
    minRx = min(shoulder['minRx'], elbow['minRx'], clavscap['minRx'])
    mincRx = min(shoulder['mincRx'], elbow['mincRx'])
    maxRy = max(shoulder['maxRy'], elbow['maxRy'])
    minRy = min(shoulder['minRy'], elbow['minRy'])
    maxRz = max(shoulder['maxRz'], elbow['maxRz'])
    minRz = min(shoulder['minRz'], elbow['minRz'])
    result = {'maxRx':maxRx, 'maxcRx':maxcRx, 'minRx':minRx, 'mincRx':mincRx,
              'maxRy':maxRy, 'minRy':minRy, 'maxRz':maxRz, 'minRz':minRz}
    return result

def on_move(event):
    if event.inaxes == ax0:
        ax1.view_init(elev=ax0.elev, azim=ax0.azim)
    elif event.inaxes == ax1:
        ax0.view_init(elev=ax1.elev, azim=ax1.azim)
    else:
        return
    fig.canvas.draw_idle()

def on_move4(event):
    if event.inaxes == ax0:
        ax1.view_init(elev=ax0.elev, azim=ax0.azim)
        ax2.view_init(elev=ax0.elev, azim=ax0.azim)
        ax3.view_init(elev=ax0.elev, azim=ax0.azim)
    elif event.inaxes == ax1:
        ax0.view_init(elev=ax1.elev, azim=ax1.azim)
        ax2.view_init(elev=ax1.elev, azim=ax1.azim)
        ax3.view_init(elev=ax1.elev, azim=ax1.azim)
    elif event.inaxes == ax2:
        ax0.view_init(elev=ax2.elev, azim=ax2.azim)
        ax1.view_init(elev=ax2.elev, azim=ax2.azim)
        ax3.view_init(elev=ax2.elev, azim=ax2.azim)
    elif event.inaxes == ax3:
        ax0.view_init(elev=ax3.elev, azim=ax3.azim)
        ax1.view_init(elev=ax3.elev, azim=ax3.azim)
        ax2.view_init(elev=ax3.elev, azim=ax3.azim)
    else:
        return
    fig.canvas.draw_idle()

def on_move6(event):
    if event.inaxes == ax0:
        ax1.view_init(elev=ax0.elev, azim=ax0.azim)
        ax2.view_init(elev=ax0.elev, azim=ax0.azim)
        ax3.view_init(elev=ax0.elev, azim=ax0.azim)
        ax4.view_init(elev=ax0.elev, azim=ax0.azim)
        ax5.view_init(elev=ax0.elev, azim=ax0.azim)
    elif event.inaxes == ax1:
        ax0.view_init(elev=ax1.elev, azim=ax1.azim)
        ax2.view_init(elev=ax1.elev, azim=ax1.azim)
        ax3.view_init(elev=ax1.elev, azim=ax1.azim)
        ax4.view_init(elev=ax1.elev, azim=ax1.azim)
        ax5.view_init(elev=ax1.elev, azim=ax1.azim)
    elif event.inaxes == ax2:
        ax0.view_init(elev=ax2.elev, azim=ax2.azim)
        ax1.view_init(elev=ax2.elev, azim=ax2.azim)
        ax3.view_init(elev=ax2.elev, azim=ax2.azim)
        ax4.view_init(elev=ax2.elev, azim=ax2.azim)
        ax5.view_init(elev=ax2.elev, azim=ax2.azim)
    elif event.inaxes == ax3:
        ax0.view_init(elev=ax3.elev, azim=ax3.azim)
        ax1.view_init(elev=ax3.elev, azim=ax3.azim)
        ax2.view_init(elev=ax3.elev, azim=ax3.azim)
        ax4.view_init(elev=ax3.elev, azim=ax3.azim)
        ax5.view_init(elev=ax3.elev, azim=ax3.azim)
    elif event.inaxes == ax4:
        ax0.view_init(elev=ax4.elev, azim=ax4.azim)
        ax1.view_init(elev=ax4.elev, azim=ax4.azim)
        ax2.view_init(elev=ax4.elev, azim=ax4.azim)
        ax3.view_init(elev=ax4.elev, azim=ax4.azim)
        ax5.view_init(elev=ax4.elev, azim=ax4.azim)
    elif event.inaxes == ax5:
        ax0.view_init(elev=ax5.elev, azim=ax5.azim)
        ax1.view_init(elev=ax5.elev, azim=ax5.azim)
        ax2.view_init(elev=ax5.elev, azim=ax5.azim)
        ax3.view_init(elev=ax5.elev, azim=ax5.azim)
        ax4.view_init(elev=ax5.elev, azim=ax5.azim)
    else:
        return
    fig.canvas.draw_idle()

def addCosGrid(gridAx, xRange, yRange, zRange, interval, zLevels=1, alpha=0, **kwargs):
    xMin = math.floor(xRange[0])
    xMax = math.ceil(xRange[1])+1
    yMin = math.floor(yRange[0])
    yMax = math.ceil(yRange[1])+1
    zMin = math.floor(zRange[0])
    zMax = math.ceil(zRange[1])+1
    xs = np.arange(xMin, xMax+1, 1)
    ys = np.arange(yMin, yMax, 1)
    xSize = abs(xMin)+abs(xMax)
    ySize = abs(yMin)+abs(yMax)
    zSize = abs(zMin)+abs(zMax)
    alphas = np.ones(xSize)*alpha
    xx, yy = np.meshgrid(xs, ys)
    cxx = xx*np.cos(np.radians(yy))
    zMaxMinMax = max((abs(zMin), abs(zMax)))
    if zLevels < 2:
        zs = np.zeros(zSize)
    else:
        zs = np.linspace(zMaxMinMax*-1, zMaxMinMax, zLevels)
    for zLevel in list(range(zLevels)):
        zz = np.ones((cxx.shape[0], cxx.shape[1]))*zs[zLevel]
        gridAx.plot_wireframe(cxx, yy, zz, rcount=xSize/interval, ccount=ySize/interval, **kwargs)

def animate(i):
    ax0.view_init(elev=45, azim=i/10)
    ax1.view_init(elev=45, azim=i/10)
    return fig
# -

# ## Ingest data

# +
# input_dir=r"/Volumes/spierce_lab/lab/NSF forelimb project/Sophie echidna project/Filtered XROMM trials"
# all_mots = scanDir(input_dir,"mot")
# all_mots = [path for path in all_mots if "XROMM" not in os.path.basename(path)]
# mot_dict = [{'id':os.path.splitext(os.path.basename(mot))[0],'path':os.path.dirname(mot),'mot_df':pd.read_csv(mot,sep='\t',header=6)} for mot in all_mots]

# frame_ranges = {'44L':
#                     {'9':(35,799),
#                      '13':(2,799),
#                      '14':(2,799),
#                     },
#                 '46L':
#                     {'15':(2,800),
#                      '16':(2,800),
#                      '17':(2,800),
#                      '18':(2,800),
#                     },
#                 '46R':
#                     {'2':(2,800),
#                      '3':(2,800),
#                      '4':(2,800),
#                      '9':(2,800),
#                     },
#                 '48L':
#                     {'4':(74,800),
#                      '5':(74,800),
#                      '6':(77,800),
#                      '7':(51,800),
#                      '8':(59,800),
#                     },
#                 '48R':
#                     {'15':(67,800),
#                      '16':(79,800),
#                      '17':(83,800),
#                      '18':(87,800),
#                      '19':(67,800),
#                     },
#                 }

# for trial in mot_dict:
#     animal = trial['id'].rsplit('_',1)[0].replace('_','').replace('HS','')
#     trial['animal'] = animal
#     run = trial['id'].rsplit('_',1)[-1].replace('tr','').replace('run','').replace('Run','')
#     trial['run'] = run
#     all_MMAs = scanDir(trial['path'] ,"csv")
#     # maya_MMAs = [path for path in all_MMAs if not any (exclusions in os.path.basename(path) for exclusions in ["plt","SIMM"])]
#     maya_MMAs = [path for path in all_MMAs if "redo" in os.path.basename(path)]
#     simm_MMAs = [path for path in all_MMAs if "plt" in os.path.basename(path)]
#     maya_shoulder = [path for path in maya_MMAs if "houlder" in path]
#     simm_shoulder = [path for path in simm_MMAs if "houlder" in path and "MMA" in path]
#     maya_elbow = [path for path in maya_MMAs if "lbow" in path]
#     simm_elbow = [path for path in simm_MMAs if "lbow" in path and "MMA" in path]
#     maya_clav = [path for path in maya_MMAs if "lav" in path and "houlder" not in path]
#     simm_clav = [path for path in simm_MMAs if "lav" in path and "MMA" in path]
#     # if len(maya_shoulder) != len(simm_shoulder):
#     #     print("shoulder "+trial['id'])
#     #     print(maya_shoulder)
#     #     print(simm_shoulder)
#     # if len(maya_elbow) != len(simm_elbow):
#     #     print("elbow "+trial['id'])
#     #     print(maya_elbow)
#     #     print(simm_elbow)
#     # if len(maya_clav) != len(simm_clav):
#     #     print("clav "+trial['id'])
#     #     print(maya_clav)
#     #     print(simm_clav)
#     maya_crop = frame_ranges[animal][run]
#     simm_shoulder_dfs = [pd.read_csv(simm) for simm in simm_shoulder]
#     maya_shoulder_dfs = [pd.read_csv(maya).iloc[maya_crop[0]:maya_crop[1]+1].reset_index(drop=True) for maya in maya_shoulder]
#     simm_elbow_dfs = [pd.read_csv(simm) for simm in simm_elbow]
#     maya_elbow_dfs = [pd.read_csv(maya).iloc[maya_crop[0]:maya_crop[1]+1].reset_index(drop=True) for maya in maya_elbow]
#     simm_clav_dfs = [pd.read_csv(simm) for simm in simm_clav]
#     maya_clav_dfs = [pd.read_csv(maya).iloc[maya_crop[0]:maya_crop[1]+1].reset_index(drop=True) for maya in maya_clav]
#     for joint in [maya_shoulder_dfs,maya_elbow_dfs,maya_clav_dfs]:
#         for maya_df in joint:
#             maya_df['frame'] = maya_df.index + 1
#     trial['simm_shoulder'] = simm_shoulder_dfs
#     trial['maya_shoulder'] = maya_shoulder_dfs
#     trial['simm_elbow'] = simm_elbow_dfs
#     trial['maya_elbow'] = maya_elbow_dfs
#     trial['simm_clav'] = simm_clav_dfs
#     trial['maya_clav'] = maya_clav_dfs

# simm_colnames = []
# maya_colnames = []
# for trial in mot_dict:
#     for joint in ['simm_shoulder','simm_elbow','simm_clav']:
#         for simm_df in trial[joint]:
#             simm_colnames.append(list(simm_df.columns))
#     for joint in ['maya_shoulder','maya_elbow','maya_clav']:
#         for maya_df in trial[joint]:
#             maya_colnames.append(list(maya_df.columns))
# simm_colnames = dict.fromkeys(sorted(list(set([item for sublist in simm_colnames for item in sublist]))))
# maya_colnames = dict.fromkeys(sorted(list(set([item for sublist in maya_colnames for item in sublist]))))

# simm_replacements = {
#     r'(?=.*frame).*':'frame',
#     r'(?=.*bic)(?=.*brev).*':'biceps_brevis',
#     r'(?=.*bic)(?=.*long).*':'biceps_longus',
#     r'(?=.*coraco)(?=.*long).*':'coracobrachialis_longus',
#     r'(?=.*delt)(?=.*clav).*':'deltoid_clav',
#     r'(?=.*lat)(?=.*pt1).*':'latissimus_1',
#     r'(?=.*lat)(?=.*pt2).*':'latissimus_vert',
#     r'(?=.*lat)(?=.*pt3).*':'latissimus_3',
#     r'(?=.*lat)(?=.*pt4).*':'latissimus_scap',
#     r'(?=.*pec)(?=.*pt1).*':'pectoralis_intermediate',
#     r'(?=.*pec)(?=.*pt2).*':'pectoralis_cran',
#     r'(?=.*pec)(?=.*pt3).*':'pectoralis_caud',
#     r'(?=.*triceps).*':'triceps_longus',
#     r'(?=.*elbow).*':'elbow',
#     r'(?=.*shoulder).*':'shoulder',
#     r'(?=.*clavscap).*':'clavscap.ABAD',
#     r'(?=.*rotation).*':'LAR',
#     r'(?=.*uction).*':'ABAD',
#     r'(?=.*exion).*':'FLEX',
# }
# maya_replacements = {
#     r'(?=.*frame).*':'frame',
#     r'(?=.*bicep).*':'biceps_brevis',
#     r'(?=.*cb).*':'coracobrachialis_longus',
#     r'(?=.*clavd).*':'deltoid_clav',
#     r'(?=.*lat)(?=.*scap).*':'latissimus_scap',
#     r'(?=.*lat)(?=.*vert).*':'latissimus_vert',
#     r'(?=.*pec)(?=.*o1_).*':'pectoralis_cran',
#     r'(?=.*pec)(?=.*o2_).*':'pectoralis_caud',
#     r'(?=.*pec)(?=.*pt1).*':'pectoralis_cran',
#     r'(?=.*pec)(?=.*pt2).*':'pectoralis_caud',
#     r'(?=.*triceps).*':'triceps_longus',
#     r'(?=.*elbow).*':'elbow',
#     r'(?=.*shoulder).*':'shoulder',
#     r'(?=.*_clav_).*':'clavscap',
#     r'(?=.*yma).*':'LAR',
#     r'(?=.*xma).*':'ABAD',
#     r'(?=.*zma).*':'FLEX',
# }

# simm_colnames_new = {}
# maya_colnames_new = {}
# for colname in simm_colnames:
#     ids = []
#     for condition in simm_replacements:
#         if re.match(condition, colname, re.IGNORECASE):
#             ids.append(simm_replacements[condition])
#     simm_colnames_new[colname] = 'simm.'+'.'.join(ids) if 'frame' not in colname else '.'.join(ids)
# for colname in maya_colnames:
#     ids = []
#     for condition in maya_replacements:
#         if re.match(condition, colname, re.IGNORECASE):
#             ids.append(maya_replacements[condition])
#     maya_colnames_new[colname] = 'maya.'+'.'.join(ids) if 'frame' not in colname else '.'.join(ids)

# for trial in mot_dict:
#     dfs = []
#     for joint in ['shoulder','elbow','clav']:
#         for simm_df in trial['simm'+'_'+joint]:
#             simm_df.rename(simm_colnames_new, axis=1, inplace=True)
#             dfs.append(simm_df)
#         for maya_df in trial['maya'+'_'+joint]:
#             maya_df.rename(maya_colnames_new, axis=1, inplace=True)
#             maya_df = convertCMtoMM(maya_df)
#             if trial['animal'][-1] == 'R':
#                 maya_df = convertRStoLS(maya_df)
#             dfs.append(maya_df)
#     dfs.append(trial['mot_df'])
#     df = reduce(lambda df1,df2: pd.merge(df1,df2,on='frame'), dfs)
#     df['animal'] = trial['animal']
#     df['run'] = trial['run']
#     trial['df'] = df

# dfs = [trial['df'] for trial in mot_dict]
# df = reduce(lambda df1,df2: pd.merge(df1,df2,how='outer'), dfs)
# all_data = df.copy()

# shoulder_rot_df = all_data.loc[:,['shoulder_abduction_adduction','shoulder_LA_Rotation','shoulder_flexion_extension']]
# shoulder_rot_df['radRy_LAR'] = np.radians(shoulder_rot_df['shoulder_LA_Rotation'])
# shoulder_rot_df['cosRy_LAR'] = np.cos(shoulder_rot_df['radRy_LAR'])
# shoulder_rot_df['Rx_ABADcosRy_LAR'] = shoulder_rot_df['cosRy_LAR'].multiply(shoulder_rot_df['shoulder_abduction_adduction'])
# all_data['cRx'] = shoulder_rot_df['Rx_ABADcosRy_LAR']
# elbow_rot_df = all_data.loc[:,['elbow_abduction_adduction','elbow_LA_Rotation','elbow_flexion_extension']]
# elbow_rot_df['radRy_LAR'] = np.radians(elbow_rot_df['elbow_LA_Rotation'])
# elbow_rot_df['cosRy_LAR'] = np.cos(elbow_rot_df['radRy_LAR'])
# elbow_rot_df['Rx_ABADcosRy_LAR'] = elbow_rot_df['cosRy_LAR'].multiply(elbow_rot_df['elbow_abduction_adduction'])
# all_data['ceRx'] = elbow_rot_df['Rx_ABADcosRy_LAR']
# all_data = all_data.rename({'shoulder_abduction_adduction':'Rx','shoulder_LA_Rotation':'Ry','shoulder_flexion_extension':'Rz','elbow_abduction_adduction':'eRx','elbow_LA_Rotation':'eRy','elbow_flexion_extension':'eRz','clavscap_angle':'clavRx'} ,axis=1)

# all_data.to_csv('/Users/phil/Desktop/phil2021feb_all_echidna_newaxes_alljoints.csv')
# all_data.to_csv(input_dir+'/phil2021feb_all_echidna_newaxes_alljoints.csv')
# all_data.to_csv('/Users/phil/Development/possumpolish/echidna_plots/phil2021feb_all_echidna_newaxes_alljoints.csv')
# -

# ## Define plotting parameters

all_data = pd.read_csv('/Users/phil/Development/possumpolish/phil2021feb_all_echidna_newaxes_alljoints.csv', index_col=0)

colors = {'red': '#B74B4B', 'green': '#8EC15A', 'blue': '#5083D2'}

viewYZ = (0, 0)
viewXZ = (0, 90)
viewXY = (90, 90)
view3Q = (45, 45)
view3Qneg = (-45, 225)
view3Qst = (-45, 45)
view3Qsw = (135, 45)
vX, vY = viewXY

# [col for col in all_data.columns if muscle in col and joint in col and axis in col and src in col]

# ### RMSE

muscle_dict = makeMuscleDict(muscles_to_compare, joints)

def calculateRMSE(muscle_dict, all_data):
    # Build a nested {muscle: {joint: {axis: {src: [columns]}}}} lookup, then
    # collapse each (muscle, joint, axis) entry to the RMSE between its
    # Maya and SIMM columns.
    muscle_dict_for_RMSE = {muscle: {joint: {axis: {src: [col for col in all_data.columns
                                                          if muscle in col and joint in col and axis in col and src in col]
                                                    for src in ['maya', 'simm']}
                                             for axis in ['ABAD', 'LAR', 'FLEX']}
                                     for joint in muscle_dict[muscle].keys()}
                            for muscle in muscle_dict.keys()}
    for muscle in muscle_dict_for_RMSE:
        for joint in muscle_dict_for_RMSE[muscle]:
            for axis in ['ABAD', 'LAR', 'FLEX']:
                df = all_data.filter(like='.'.join([muscle, joint, axis])).dropna()
                mayacols = df.filter(like='maya').columns
                simmcols = df.filter(like='simm').columns
                if len(mayacols) == len(simmcols):
                    rmse = mean_squared_error(df.filter(like='maya'), df.filter(like='simm'), squared=False)
                    muscle_dict_for_RMSE[muscle][joint][axis] = rmse
                else:
                    del muscle_dict_for_RMSE[muscle][joint][axis]
    return muscle_dict_for_RMSE

muscle_dict_for_RMSE = calculateRMSE(muscle_dict, all_data)

muscle_dict_for_RMSE['deltoid_clav']
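The Maya-vs-SIMM comparison above reduces each moment-arm column pair to a single root-mean-square error. A minimal sketch of that computation on toy data — the column names `maya.x` and `simm.x` are illustrative stand-ins, not the real notebook columns:

```python
import numpy as np
import pandas as pd

# Two hypothetical sources measuring the same quantity.
df = pd.DataFrame({
    'maya.x': [1.0, 2.0, 3.0, 4.0],
    'simm.x': [1.5, 2.5, 2.5, 4.5],
})

# RMSE computed directly; equivalent to sklearn's
# mean_squared_error(a, b, squared=False) used in calculateRMSE.
diff = df['maya.x'].to_numpy() - df['simm.x'].to_numpy()
rmse = float(np.sqrt(np.mean(diff ** 2)))
print(rmse)  # 0.5
```

Because RMSE squares the residuals, a few large disagreements between the two models dominate the score more than many small ones.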
# Scratch check of a single muscle/joint/axis pair (loop variables undefined at
# module level, so kept commented out):
# mean_squared_error(all_data['.'.join(['maya', muscle, joint, axis])].dropna(), all_data['.'.join(['simm', muscle, joint, axis])].dropna(), squared=False)

# ## Set up dictionary for muscle plots

muscles_to_compare = ['biceps_brevis', 'coracobrachialis_longus', 'deltoid_clav', 'latissimus_vert', 'latissimus_scap', 'pectoralis_cran', 'pectoralis_caud', 'triceps_longus']
joints = ['shoulder', 'elbow', 'clavscap']

def makeMuscleDict(muscles_to_compare, joints):
    muscle_dict = dict.fromkeys(muscles_to_compare)
    for muscle in muscles_to_compare:
        muscle_dict[muscle] = dict.fromkeys(joints)
        subset_list = [name for name in all_data.columns if muscle in name]
        for joint in joints:
            subsubset_list = [name for name in subset_list if joint in name]
            if joint == 'clavscap':
                subsubset_list = [name for name in subsubset_list if 'ABAD' in name]
            if len(subsubset_list):
                subset_df = all_data.dropna(subset=subsubset_list)
                seplist = "|".join(subsubset_list)
                muscle_dict[muscle][joint] = subset_df.iloc[:, subset_df.columns.str.contains('frame|animal|run|Rx|Ry|Rz|cRx|ceRx|eRx|eRy|eRz|clavRx|' + seplist)].copy()
            else:
                del muscle_dict[muscle][joint]
    return muscle_dict

# ## Plot rom, broken out by animal-side

# +
def plotROMSeparate(df, joint):
    maxRx, maxcRx, minRx, mincRx, maxRy, minRy, maxRz, minRz = getTotalRotationRanges(df).values()
    plt.close('all')
    plt.rcParams['grid.linewidth'] = 0
    plt.rcParams['grid.color'] = 'lightgrey'
    fig = plt.figure(figsize=[12, 12], constrained_layout=True)
    ax0 = fig.add_subplot(221, projection='3d', proj_type='ortho')
    ax1 = fig.add_subplot(222, projection='3d', proj_type='ortho')
    ax2 = fig.add_subplot(223, projection='3d', proj_type='ortho')
    ax3 = fig.add_subplot(224, projection='3d', proj_type='ortho')
    groups = df.groupby("animal")  # group the passed-in df, not the global all_data
    for name, group in groups:
        if joint == 'shoulder':
            xs = group.cRx
            ys = group.Ry
            zs = group.Rz
        elif joint == 'elbow':
            xs = group.ceRx
            ys = group.eRy
            zs = group.eRz
        ax0.scatter(xs, ys, zs, s=3, depthshade=False, label=name)
        ax1.scatter(xs, ys, zs,
s=3, depthshade=False, label=name) ax2.scatter(xs,ys,zs, s=3, depthshade=False, label=name) ax3.scatter(xs,ys,zs, s=3, depthshade=False, label=name) format3dPlot(ax0, 'ROM', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=view3Q) addCosGrid(ax0, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(ax1, 'ROM', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewYZ) addCosGrid(ax1, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(ax2, 'ROM', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewXZ) addCosGrid(ax2, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(ax3, 'ROM', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewXY) addCosGrid(ax3, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) fig.suptitle(joint+' ROM for all sides, color by side', fontsize=16) plt.legend() plotROMSeparate(all_data,'shoulder') # plotROMSeparate(all_data,'elbow') # - # ## Plot rom, pooled # + # rom, broken out by animal-side def plotROMPooled(df, joint): maxRx, maxcRx, minRx, mincRx, maxRy, minRy, maxRz, minRz = getTotalRotationRanges(df).values() plt.close('all') plt.rcParams['grid.linewidth'] = 0 plt.rcParams['grid.color'] = 'lightgrey' fig = plt.figure(figsize=[12,12], constrained_layout=True) ax0 = fig.add_subplot(221, projection='3d', proj_type = 'ortho') ax1 = fig.add_subplot(222, projection='3d', proj_type = 'ortho') ax2 = fig.add_subplot(223, projection='3d', proj_type = 'ortho') ax3 = fig.add_subplot(224, projection='3d', proj_type = 'ortho') if joint == 'shoulder': xs = df.cRx ys = df.Ry zs = df.Rz elif joint == 'elbow': xs = df.ceRx ys = df.eRy zs = df.eRz ax0.scatter(xs,ys,zs, s=3, depthshade=False, label=joint) ax1.scatter(xs,ys,zs, s=3, depthshade=False, label=joint) ax2.scatter(xs,ys,zs, s=3, depthshade=False, label=joint) ax3.scatter(xs,ys,zs, s=3, 
        depthshade=False, label=joint)
    format3dPlot(ax0, 'ROM', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=view3Q)
    addCosGrid(ax0, (minRx, maxRx), (minRy, maxRy), (minRz, maxRz), 5, zLevels=1, color='grey', linewidths=0.1)
    format3dPlot(ax1, 'ROM', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewYZ)
    addCosGrid(ax1, (minRx, maxRx), (minRy, maxRy), (minRz, maxRz), 5, zLevels=1, color='grey', linewidths=0.1)
    format3dPlot(ax2, 'ROM', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewXZ)
    addCosGrid(ax2, (minRx, maxRx), (minRy, maxRy), (minRz, maxRz), 5, zLevels=1, color='grey', linewidths=0.1)
    format3dPlot(ax3, 'ROM', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewXY)
    addCosGrid(ax3, (minRx, maxRx), (minRy, maxRy), (minRz, maxRz), 5, zLevels=1, color='grey', linewidths=0.1)
    fig.suptitle(joint + ' ROM for all sides, pooled', fontsize=16)

plotROMPooled(all_data, 'shoulder')
# plotROMPooled(all_data, 'elbow')
# -

# ## Plot convex hulls

# +
from scipy.spatial import Delaunay
import numpy as np
from collections import defaultdict

def alpha_shape_3D(pos, alpha):
    """
    Compute the alpha shape (concave hull) of a set of 3D points.
    Parameters:
        pos - np.array of shape (n,3) points.
        alpha - alpha value.
    return outer surface vertex indices, edge indices, and triangle indices
    """
    tetra = Delaunay(pos)
    # Find radius of the circumsphere.
    # By definition, the radius of the sphere fitting inside the tetrahedron
    # needs to be smaller than the alpha value.
    # http://mathworld.wolfram.com/Circumsphere.html
    tetrapos = np.take(pos, tetra.simplices, axis=0)  # .simplices (formerly .vertices, removed in newer SciPy)
    normsq = np.sum(tetrapos**2, axis=2)[:, :, None]
    ones = np.ones((tetrapos.shape[0], tetrapos.shape[1], 1))
    a = np.linalg.det(np.concatenate((tetrapos, ones), axis=2))
    Dx = np.linalg.det(np.concatenate((normsq, tetrapos[:, :, [1, 2]], ones), axis=2))
    Dy = -np.linalg.det(np.concatenate((normsq, tetrapos[:, :, [0, 2]], ones), axis=2))
    Dz = np.linalg.det(np.concatenate((normsq, tetrapos[:, :, [0, 1]], ones), axis=2))
    c = np.linalg.det(np.concatenate((normsq, tetrapos), axis=2))
    r = np.sqrt(Dx**2 + Dy**2 + Dz**2 - 4*a*c) / (2*np.abs(a))
    # Keep tetrahedrons whose circumsphere radius is below alpha
    tetras = tetra.simplices[r < alpha, :]
    # triangles
    TriComb = np.array([(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)])
    Triangles = tetras[:, TriComb].reshape(-1, 3)
    Triangles = np.sort(Triangles, axis=1)
    # Remove triangles that occur twice, because they are within shapes
    TrianglesDict = defaultdict(int)
    for tri in Triangles:
        TrianglesDict[tuple(tri)] += 1
    Triangles = np.array([tri for tri in TrianglesDict if TrianglesDict[tri] == 1])
    # edges
    EdgeComb = np.array([(0, 1), (0, 2), (1, 2)])
    Edges = Triangles[:, EdgeComb].reshape(-1, 2)
    Edges = np.sort(Edges, axis=1)
    Edges = np.unique(Edges, axis=0)
    Vertices = np.unique(Edges)
    return Vertices, Edges, Triangles

# +
def plotAlphaShape(df, joint, alpha=50):
    maxRx, maxcRx, minRx, mincRx, maxRy, minRy, maxRz, minRz = getTotalRotationRanges(df).values()
    plt.close('all')
    plt.rcParams['grid.linewidth'] = 0
    plt.rcParams['grid.color'] = 'lightgrey'
    fig = plt.figure(figsize=[12, 12], constrained_layout=True)
    ax0 = fig.add_subplot(221, projection='3d', proj_type='ortho')
    ax1 = fig.add_subplot(222, projection='3d', proj_type='ortho')
    ax2 = fig.add_subplot(223, projection='3d', proj_type='ortho')
    ax3 = fig.add_subplot(224, projection='3d', proj_type='ortho')
    if joint == 'shoulder':
        xs = df.cRx
        ys = df.Ry
        zs = df.Rz
    elif joint == 'elbow':
        xs =
df.ceRx ys = df.eRy zs = df.eRz alphaVert,alphaEdge, alphaTri = alpha_shape_3D(np.array([xs,ys,zs]).T,alpha) ax0.plot_trisurf(xs,ys,alphaTri,zs, shade=True, linewidth=0, antialiased=True) ax1.plot_trisurf(xs,ys,alphaTri,zs, shade=True, linewidth=0, antialiased=True) ax2.plot_trisurf(xs,ys,alphaTri,zs, shade=True, linewidth=0, antialiased=True) ax3.plot_trisurf(xs,ys,alphaTri,zs, shade=True, linewidth=0, antialiased=True) # ax0.scatter(xs,ys,zs, s=3, depthshade=False, label=name) # ax1.scatter(xs,ys,zs, s=3, depthshade=False, label=name) # ax2.scatter(xs,ys,zs, s=3, depthshade=False, label=name) # ax3.scatter(xs,ys,zs, s=3, depthshade=False, label=name) format3dPlot(ax0, 'ROM', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=view3Q) addCosGrid(ax0, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(ax1, 'ROM', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewYZ) addCosGrid(ax1, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(ax2, 'ROM', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewXZ) addCosGrid(ax2, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(ax3, 'ROM', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewXY) addCosGrid(ax3, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) fig.suptitle(joint+' ROM for all trials, pooled, convex hulled @ alpha '+str(alpha), fontsize=16) plt.legend() plotAlphaShape(all_data,'elbow', 19) plt.savefig('/Users/phil/Development/possumpolish/echidna_plots/elbowEnvelope.svg', format='svg') # plotAlphaShape(all_data,'elbow') # + import rpy2 import rpy2.robjects as ro from rpy2.robjects.packages import importr from rpy2.robjects import pandas2ri pandas2ri.activate() from rpy2.robjects.conversion import localconverter import rpy2.ipython.html rpy2.ipython.html.init_printing() from rpy2.robjects.lib.dplyr 
import DataFrame from rpy2.robjects import rl alphashape3d = importr('alphashape3d') tidyverse = importr('tidyverse') ro.r(''' getCriticalAlpha <- function(components, alphas){ criticalIndex <- 1 for (i in 1:length(components)){ comp = as.numeric(components[[i]]) if (any(comp != 1)){ criticalIndex <- i } } criticalIndex <- criticalIndex + 1 print(alphas[[criticalIndex]]) return(list(criticalIndex, alphas[[criticalIndex]])) } ''') getCriticalAlpha = ro.r['getCriticalAlpha'] with localconverter(ro.default_converter + pandas2ri.converter): rdf_glenoid = ro.r['as.matrix'](ro.conversion.py2rpy(all_data[['cRx','Ry','Rz']].copy())) rdf_elbow = ro.r['as.matrix'](ro.conversion.py2rpy(all_data[['ceRx','eRy','eRz']].copy())) subTenAlphas = np.linspace(0.1,9.9, 99) supTenAlphas = np.linspace(10, 100, 91) alphas = np.concatenate((subTenAlphas,supTenAlphas)) alphaCols = [str(round(alpha,3)).replace('.','_') for alpha in alphas] def getAlphaObjects(rdf, alphas): alphaCols = [str(round(alpha,3)).replace('.','_') for alpha in alphas] print("...calculating alphas...") alphaShapes = alphashape3d.ashape3d(rdf, alphas, pert=True) print("...calculating components...") components = alphashape3d.components_ashape3d(alphaShapes, indexAlpha="all") print("...calculating volumes...") volumes = alphashape3d.volume_ashape3d(alphaShapes, indexAlpha="all") print("...calculating critical alpha...") [crit_index, crit_val] = getCriticalAlpha(components, alphas) shapes = np.array(alphaShapes, dtype=object) tetras_df = pd.DataFrame(alphaShapes[0]).rename(columns={**{0:'index1',1:'index2',2:'index3',3:'index4',4:'intervals'},**dict(zip(range(5,len(alphas)+5),alphaCols))}) triangles_df = pd.DataFrame(alphaShapes[1]).rename(columns={**{0:'index1',1:'index2',2:'index3',3:'on_convex_hull',4:'attached',5:'intervals1',6:'intervals2',7:'intervals3'},**dict(zip(range(8,len(alphas)+8),alphaCols))}) edges_df = 
pd.DataFrame(alphaShapes[2]).rename(columns={**{0:'index1',1:'index2',2:'on_convex_hull',3:'attached',4:'intervals1',5:'intervals2',6:'intervals3'},**dict(zip(range(7,len(alphas)+7),alphaCols))}) vertices_df = pd.DataFrame(alphaShapes[3]).rename(columns={**{0:'index',1:'on_convex_hull',2:'intervals1',3:'intervals2'},**dict(zip(range(4,len(alphas)+4),alphaCols))}) vertices_df['x'],vertices_df['y'],vertices_df['z'] = alphaShapes[4].T[0],alphaShapes[4].T[1],alphaShapes[4].T[2] components_df = pd.DataFrame(np.array(components, dtype=object).T).rename(columns=dict(zip(range(0,len(alphas)+1),alphaCols))) volumes_df = pd.DataFrame(np.array(volumes, dtype=object)).transpose().rename(columns=dict(zip(range(0,len(alphas)+1),alphaCols))) return {'tetrahedrons':tetras_df, 'triangles': triangles_df, 'edges': edges_df, 'vertices': vertices_df, 'components': components_df, 'volumes': volumes_df, 'crit_val':crit_val, 'crit_index':crit_index } # - glenoid_alphas = getAlphaObjects(rdf_glenoid, alphas) elbow_alphas = getAlphaObjects(rdf_elbow, alphas) len(alphas) # + # # %pip install kneed from kneed import DataGenerator, KneeLocator volumes_df = glenoid_alphas['volumes'].T.join(elbow_alphas['volumes'].T, lsuffix='glenoid', rsuffix='elbow').replace(0, np.nan) volumes_df.index = pd.to_numeric([index.replace('_','.') for index in volumes_df.index]) volumes_df.columns=['glenoid','elbow'] volumes_df['glenoid-elbow ratio'] = volumes_df['glenoid']/volumes_df['elbow'] volumes_df['ratio d1'] = np.gradient(volumes_df['glenoid-elbow ratio']) zero_crossings = [volumes_df.index[index] for index in np.where(np.diff(np.sign(volumes_df['ratio d1'])))[0]] zero_crossing_rows = volumes_df.loc[zero_crossings].loc[(volumes_df['elbow'] >= volumes_df['elbow'].max()/2)&(volumes_df['glenoid'] >= volumes_df['glenoid'].max()/2)] first_zero_crossing_row = zero_crossing_rows.iloc[0] second_zero_crossing_row = zero_crossing_rows.iloc[1] kneedle = KneeLocator(list(volumes_df.index), 
volumes_df.loc[:,'glenoid-elbow ratio'], S=20.0, curve="concave", direction="increasing") fig, ax = plt.subplots() ax1= volumes_df.loc[:,['glenoid','elbow']].plot(ax=ax, style=['#3091FA','#F09A47']) ax2= volumes_df.loc[:,['glenoid-elbow ratio']].plot(secondary_y=True, style='green',ax=ax, label='glenoid-elbow ratio') ax1.set_ylabel('Volume') ax2.set_ylabel('Ratio') # volumes_df.loc[:,['ratio d1']].plot(secondary_y=True, style='g',ax=ax) ax.vlines(kneedle.knee, volumes_df.min().min(), volumes_df.max().max(), color='r', label='knee') # ax.vlines(second_zero_crossing_row.name, volumes_df.min().min(), volumes_df.max().max(), color='r', label='second 0 crossing of ratio d1') # ax.vlines(allEnclosed_d, volumes_df.min().min(), volumes_df.max().max(), color='blue', linestyle=':', label='glenoid critical') # ax.vlines(allEnclosed_s, volumes_df.min().min(), volumes_df.max().max(), color='orange', linestyle=':', label='elbow critical') # ax.vlines(waterTight_d, volumes_df.min().min(), volumes_df.max().max(), color='blue', linestyle='dashed', label='g;enoi watertight') # ax.vlines(waterTight_s, volumes_df.min().min(), volumes_df.max().max(), color='orange', linestyle='dashed', label='tegu watertight') print(kneedle.knee) plt.savefig('/Users/phil/Development/possumpolish/echidna_plots/volumeAnalysis.svg', format='svg') # + def addDetailGrid(gridAx, xRange, yRange, zRange, interval, zLevels=1, alpha=0, **kwargs): xMin = math.floor(xRange[0]) xMax = math.ceil(xRange[1])+1 yMin = math.floor(yRange[0]) yMax = math.ceil(yRange[1])+1 zMin = math.floor(zRange[0]) zMax = math.ceil(zRange[1])+1 xs= np.arange(xMin, xMax+1, 1) ys= np.arange(yMin, yMax, 1) xSize = abs(xMin)+abs(xMax) ySize = abs(yMin)+abs(yMax) zSize = abs(zMin)+abs(zMax) alphas = np.ones(xSize)*alpha xx, yy = np.meshgrid(xs, ys) cxx = xx*np.cos(np.radians(yy)) zMaxMinMax = max((abs(zMin),abs(zMax))) if zLevels <2: zs = np.zeros(zSize) else: zs = np.linspace(zMaxMinMax*-1, zMaxMinMax, zLevels) for zLevel in 
list(range(zLevels)): zz = np.ones((cxx.shape[0],cxx.shape[1]))*zs[zLevel] gridAx.plot_wireframe(cxx, yy, zz, rcount = xSize/interval, ccount=ySize/interval, **kwargs) def format3dPlotFancy(axObj, title, xRange, yRange, zRange, view=None, color='grey', minimal=False): axObj.set_title(title) if view: axObj.view_init(view[0], view[1]) if minimal: axObj.set_axis_off() axObj.set_xlabel('- add X + abd', size='small', color=colors['red']) axObj.set_ylabel('- sup Y + prn', size='small', color=colors['green']) axObj.set_zlabel('- ext Z + flx', size='small', color=colors['blue']) axObj.set_xlim(xRange[0], xRange[1]) axObj.set_ylim(yRange[0], yRange[1]) axObj.set_zlim(zRange[0], zRange[1]) axObj.w_xaxis.set_pane_color((1.0, 1.0, 1.0, 0.0)) axObj.w_yaxis.set_pane_color((1.0, 1.0, 1.0, 0.0)) axObj.w_zaxis.set_pane_color((1.0, 1.0, 1.0, 0.0)) axObj.w_xaxis.line.set_color((1.0, 1.0, 1.0, 0.0)) axObj.w_yaxis.line.set_color((1.0, 1.0, 1.0, 0.0)) axObj.w_zaxis.line.set_color((1.0, 1.0, 1.0, 0.0)) # axObj.minorticks_off() axObj.tick_params(reset=True,colors=color, width=10, bottom=False, top=False, left=False, right=False , labelrotation=45, which='both',labelsize='x-small') return axObj def getTotalRotationRanges(df): shoulder = getShoulderRotationRanges(df) elbow = getElbowRotationRanges(df) clavscap = getClavscapRotationRanges(df) maxRx = max(shoulder['maxRx'],elbow['maxRx'],clavscap['maxRx']) maxcRx = max(shoulder['maxcRx'],elbow['maxcRx']) minRx = min(shoulder['minRx'],elbow['minRx'],clavscap['minRx']) mincRx = min(shoulder['mincRx'],elbow['mincRx']) maxRy = max(shoulder['maxRy'],elbow['maxRy']) minRy = min(shoulder['minRy'],elbow['minRy']) maxRz = max(shoulder['maxRz'],elbow['maxRz']) minRz = min(shoulder['minRz'],elbow['minRz']) result = {'maxRx':maxRx, 'maxcRx':maxcRx, 'minRx':minRx, 'mincRx':mincRx, 'maxRy':maxRy, 'minRy':minRy, 'maxRz':maxRz, 'minRz':minRz} return result def greaterOfTwoRanges(dict_1_morekeys, dict_2_fewerkeys): result_dict = dict_1_morekeys.copy() for key 
in dict_2_fewerkeys.keys(): if key[0:3] == 'min': result_dict[key] = dict_1_morekeys[key] if dict_1_morekeys[key] < dict_2_fewerkeys[key] else dict_2_fewerkeys[key] elif key[0:3] == 'max': result_dict[key] = dict_1_morekeys[key] if dict_1_morekeys[key] > dict_2_fewerkeys[key] else dict_2_fewerkeys[key] else: raise NameError('value comparison failed') return result_dict def plotAlphaShapeCombined(df, alpha=50): ##for big range: glenoid_dict = {'maxRx':118/2, 'minRx':-118/2, 'maxRy':54/2, 'minRy':-54/2, 'maxRz':26/2, 'minRz':-26/2} elbow_dict = {'maxRx':22/2, 'minRx':-22/2, 'maxRy':20/2, 'minRy':-20/2, 'maxRz':114/2, 'minRz':-114/2} model_dict = greaterOfTwoRanges(glenoid_dict, elbow_dict) result = greaterOfTwoRanges(getTotalRotationRanges(df), model_dict) maxabs = max(np.abs(list(result.values()))) ##for regular range: maxRx,maxcRx,minRx,mincRx,maxRy,minRy,maxRz,minRz = getTotalRotationRanges(df).values() plt.close('all') plt.rcParams['grid.linewidth'] = 0.1 plt.rcParams['axes.linewidth'] = 0 plt.rcParams['grid.color'] = 'grey' fig = plt.figure(figsize=[12,6], constrained_layout=True) ax0 = fig.add_subplot(231, projection='3d', proj_type = 'ortho') ax1 = fig.add_subplot(232, projection='3d', proj_type = 'ortho') ax2 = fig.add_subplot(233, projection='3d', proj_type = 'ortho') ax3 = fig.add_subplot(234, projection='3d', proj_type = 'ortho') ax4 = fig.add_subplot(235, projection='3d', proj_type = 'ortho') ax5 = fig.add_subplot(236, projection='3d', proj_type = 'ortho') alphaVertShoulder,alphaEdgeShoulder, alphaTriShoulder = alpha_shape_3D(np.array([df.cRx,df.Ry,df.Rz]).T,alpha) alphaVertElbow,alphaEdgeElbow, alphaTriElbow = alpha_shape_3D(np.array([df.ceRx,df.eRy,df.eRz]).T,alpha) # plot shoulder ax0.plot_trisurf(df.cRx,df.Ry,alphaTriShoulder,df.Rz, shade=False, linewidth=0.02, antialiased=True, alpha=0.65, color='C0', edgecolor = 'C0') ax1.plot_trisurf(df.cRx,df.Ry,alphaTriShoulder,df.Rz, shade=False, linewidth=0.02, antialiased=True, alpha=0.65, color='C0', 
edgecolor = 'C0') ax2.plot_trisurf(df.cRx,df.Ry,alphaTriShoulder,df.Rz, shade=False, linewidth=0.02, antialiased=True, alpha=0.65, color='C0', edgecolor = 'C0') ax0.plot3D(df.Rx, df.Ry, df.Rz, marker='.', color='#1852CC', markersize=1, zorder=3, alpha=0.1) ax1.plot3D(df.Rx, df.Ry, df.Rz, marker='.', color='#1852CC', markersize=1, zorder=3, alpha=0.1) ax2.plot3D(df.Rx, df.Ry, df.Rz, marker='.', color='#1852CC', markersize=1, zorder=3, alpha=0.1) #plot elbow ax3.plot_trisurf(df.ceRx,df.eRy,alphaTriElbow,df.eRz, shade=False, linewidth=0.02, antialiased=True, alpha=0.65, color='C1',edgecolor = 'C1') ax4.plot_trisurf(df.ceRx,df.eRy,alphaTriElbow,df.eRz, shade=False, linewidth=0.02, antialiased=True, alpha=0.65, color='C1',edgecolor = 'C1') ax5.plot_trisurf(df.ceRx,df.eRy,alphaTriElbow,df.eRz, shade=False, linewidth=0.02, antialiased=True, alpha=0.65, color='C1',edgecolor = 'C1') ax3.plot3D(df.eRx, df.eRy, df.eRz, marker='.', color='#E84A00', markersize=1, zorder=3, alpha=0.05) ax4.plot3D(df.eRx, df.eRy, df.eRz, marker='.', color='#E84A00', markersize=1, zorder=3, alpha=0.05) ax5.plot3D(df.eRx, df.eRy, df.eRz, marker='.', color='#E84A00', markersize=1, zorder=3, alpha=0.05) ##for regular range # format3dPlotFancy(ax0, '', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz),view=viewYZ) # format3dPlotFancy(ax1, '', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz),view=viewXZ) # format3dPlotFancy(ax2, '', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewXY) # format3dPlotFancy(ax3, '', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewYZ) # format3dPlotFancy(ax4, '', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewXZ) # format3dPlotFancy(ax5, '', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewXY) ##for big range format3dPlotFancy(ax0, '', (-maxabs, maxabs), (-maxabs, maxabs), (-maxabs, maxabs),view=viewYZ) format3dPlotFancy(ax1, '', (-maxabs, maxabs), (-maxabs, maxabs),(-maxabs, maxabs),view=viewXZ) format3dPlotFancy(ax2, '', 
(-maxabs, maxabs), (-maxabs, maxabs),(-maxabs, maxabs), view=viewXY) format3dPlotFancy(ax3, '', (-maxabs, maxabs), (-maxabs, maxabs),(-maxabs, maxabs), view=viewYZ) format3dPlotFancy(ax4, '', (-maxabs, maxabs), (-maxabs, maxabs),(-maxabs, maxabs), view=viewXZ) format3dPlotFancy(ax5, '', (-maxabs, maxabs), (-maxabs, maxabs),(-maxabs, maxabs),view=viewXY) ax0.grid(True) fig.suptitle(' ROM for all trials, pooled, convex hulled @ alpha '+str(alpha), fontsize=16) plt.legend() plotAlphaShapeCombined(all_data, 20) plt.savefig('/Users/phil/Development/possumpolish/echidna_plots/newROMbigRange.svg', format='svg') # - # ## Plot RGB muscles extrema = {"glenoid cRx min":all_data.loc[:,'cRx'].min(), "glenoid cRx max":all_data.loc[:,'cRx'].max(), "glenoid Ry min":all_data.loc[:,'Ry'].min(), "glenoid Ry max":all_data.loc[:,'Ry'].max(), "glenoid Rz min":all_data.loc[:,'Rz'].min(), "glenoid Rz max":all_data.loc[:,'Rz'].max(), "elbow cRx min":all_data.loc[:,'ceRx'].min(), "elbow cRx max":all_data.loc[:,'ceRx'].max(), "elbow Ry min":all_data.loc[:,'eRy'].min(), "elbow Ry max":all_data.loc[:,'eRy'].max(), "elbow Rz min":all_data.loc[:,'eRz'].min(), "elbow Rz max":all_data.loc[:,'eRz'].max(), } extrema # + # %matplotlib widget maxRx, maxcRx, minRx, mincRx, maxRy, minRy, maxRz, minRz = getTotalRotationRanges(all_data).values() def rgbDict(muscle_dict, global_norm = None): result_dict = muscle_dict.copy() for muscle in result_dict: for joint in result_dict[muscle]: result_dict[muscle][joint] = rgbMap(result_dict[muscle][joint].copy(), joint, global_norm) return result_dict def rgbMap(df, joint, global_norm = None): meta = df.filter(['frame','animal','run','Rx','Ry','Rz','cRx','eRx','eRy','eRz','ceRx']) mmas = df.drop(['frame','animal','run','Rx','Ry','Rz','cRx','eRx','eRy','eRz','ceRx'],axis=1).filter(like=joint) mmas_pos = mmas.apply(lambda x : np.where(x > 0, x, 0),axis=0) mmas_neg = mmas.apply(lambda x : np.where(x < 0, abs(x), 0),axis=0) mmas_pos.columns = [name+'.pos' for name in 
mmas_pos.columns] mmas_neg.columns = [name+'.neg' for name in mmas_neg.columns] mmas_binned = mmas_pos.join(mmas_neg) if global_norm: mmas_normalized = mmas_binned/global_norm else: mmMax = mmas_binned.max().max() mmas_normalized = mmas_binned/mmMax mean_simmPos = mmas_normalized.filter(regex=r'(simm.*pos.*)').mean(axis=1) mean_simmNeg = mmas_normalized.filter(regex=r'(simm.*neg.*)').mean(axis=1) mean_mayaPos = mmas_normalized.filter(regex=r'(maya.*pos.*)').mean(axis=1) mean_mayaNeg = mmas_normalized.filter(regex=r'(maya.*neg.*)').mean(axis=1) mean_max = np.array([mean_simmPos,mean_simmNeg,mean_mayaPos,mean_mayaNeg]).max() meta['simm.scale.pos'] = mean_simmPos/mean_max meta['simm.scale.neg'] = mean_simmNeg/mean_max meta['maya.scale.pos'] = mean_mayaPos/mean_max meta['maya.scale.neg'] = mean_mayaNeg/mean_max result = meta.join(mmas_normalized) return(result) def plotPosNegMMAs(df, joint): if joint == 'clavscap': return # plt.close('all') plt.rcParams['grid.linewidth'] = 0 plt.rcParams['grid.color'] = 'lightgrey' fig = plt.figure(figsize=[12,12], constrained_layout=True) four = gridspec.GridSpec(2, 2, figure=fig) topleft = four[0].subgridspec(3, 3) topright = four[1].subgridspec(3, 3) bottomleft = four[2].subgridspec(3, 3) bottomright = four[3].subgridspec(3, 3) tl_ax0 = fig.add_subplot(topleft[:-1,:], projection='3d', proj_type = 'ortho') tl_ax1 = fig.add_subplot(topleft[-1,0], projection='3d', proj_type = 'ortho') tl_ax2 = fig.add_subplot(topleft[-1,1], projection='3d', proj_type = 'ortho') tl_ax3 = fig.add_subplot(topleft[-1,2], projection='3d', proj_type = 'ortho') tr_ax0 = fig.add_subplot(topright[:-1,:], projection='3d', proj_type = 'ortho') tr_ax1 = fig.add_subplot(topright[-1,0], projection='3d', proj_type = 'ortho') tr_ax2 = fig.add_subplot(topright[-1,1], projection='3d', proj_type = 'ortho') tr_ax3 = fig.add_subplot(topright[-1,2], projection='3d', proj_type = 'ortho') bl_ax0 = fig.add_subplot(bottomleft[1:,:], projection='3d', proj_type = 'ortho') bl_ax1 
= fig.add_subplot(bottomleft[0,0], projection='3d', proj_type = 'ortho') bl_ax2 = fig.add_subplot(bottomleft[0,1], projection='3d', proj_type = 'ortho') bl_ax3 = fig.add_subplot(bottomleft[0,2], projection='3d', proj_type = 'ortho') br_ax0 = fig.add_subplot(bottomright[1:,:], projection='3d', proj_type = 'ortho') br_ax1 = fig.add_subplot(bottomright[0,0], projection='3d', proj_type = 'ortho') br_ax2 = fig.add_subplot(bottomright[0,1], projection='3d', proj_type = 'ortho') br_ax3 = fig.add_subplot(bottomright[0,2], projection='3d', proj_type = 'ortho') simm_pos_cols = df.filter(regex=r'(simm.*pos.*)').columns.tolist()[1:]+['simm.scale.pos'] maya_pos_cols = df.filter(regex=r'(maya.*pos.*)').columns.tolist()[1:]+['maya.scale.pos'] simm_neg_cols = df.filter(regex=r'(simm.*neg.*)').columns.tolist()[1:]+['simm.scale.neg'] maya_neg_cols = df.filter(regex=r'(maya.*neg.*)').columns.tolist()[1:]+['maya.scale.neg'] if len(simm_pos_cols) == 2: df['zeros'] = 0 axistype = simm_pos_cols[0].split('.')[-2] for colset in [simm_pos_cols, maya_pos_cols, simm_neg_cols, maya_neg_cols]: if axistype == 'ABAD': colset.insert(1,'zeros') colset.insert(1,'zeros') if axistype == 'LAR': colset.insert(0,'zeros') colset.insert(2,'zeros') if axistype == 'FLEX': colset.insert(0,'zeros') colset.insert(0,'zeros') if joint == 'shoulder': xs = df.cRx ys = df.Ry zs = df.Rz elif joint == 'elbow': xs = df.ceRx ys = df.eRy zs = df.eRz for axview in [tl_ax0,tl_ax1,tl_ax2,tl_ax3]: axview.scatter(xs,ys,zs, s=df['simm.scale.pos']*10, c=df[simm_pos_cols], depthshade=False, edgecolors='none', vmin=0, vmax=1) for axview in [tr_ax0,tr_ax1,tr_ax2,tr_ax3]: axview.scatter(xs,ys,zs, s=df['maya.scale.pos']*10, c=df[maya_pos_cols], depthshade=False, edgecolors='none', vmin=0, vmax=1) for axview in [bl_ax0,bl_ax1,bl_ax2,bl_ax3]: axview.scatter(xs,ys,zs, s=df['simm.scale.neg']*10, c=df[simm_neg_cols], depthshade=False, edgecolors='none', vmin=0, vmax=1) for axview in [br_ax0,br_ax1,br_ax2,br_ax3]: axview.scatter(xs,ys,zs, 
s=df['maya.scale.neg']*10, c=df[maya_neg_cols], depthshade=False, edgecolors='none', vmin=0, vmax=1) format3dPlot(tl_ax0, 'SIMM +', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=view3Q) addCosGrid(tl_ax0, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(tl_ax1, 'X', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewYZ) addCosGrid(tl_ax1, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(tl_ax2, 'Y', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewXZ) addCosGrid(tl_ax2, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(tl_ax3, 'Z', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewXY) addCosGrid(tl_ax3, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(tr_ax0, 'Maya +', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=view3Q) addCosGrid(tr_ax0, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(tr_ax1, 'X', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewYZ) addCosGrid(tr_ax1, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(tr_ax2, 'Y', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewXZ) addCosGrid(tr_ax2, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(tr_ax3, 'Z', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewXY) addCosGrid(tr_ax3, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(bl_ax0, 'SIMM -', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=view3Q) addCosGrid(bl_ax0, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(bl_ax1, 'X', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewYZ) addCosGrid(bl_ax1, 
(minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(bl_ax2, 'Y', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewXZ) addCosGrid(bl_ax2, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(bl_ax3, 'Z', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewXY) addCosGrid(bl_ax3, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(br_ax0, 'Maya -', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=view3Q) addCosGrid(br_ax0, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(br_ax1, 'X', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewYZ) addCosGrid(br_ax1, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(br_ax2, 'Y', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewXZ) addCosGrid(br_ax2, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) format3dPlot(br_ax3, 'Z', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=viewXY) addCosGrid(br_ax3, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) plot_title = list(set([" ".join(col.split('.')[1:3]) for col in df.columns if len(col.split('.')) > 3]))[0] fig.suptitle(plot_title, fontsize=16) global_mmMax = abs(all_data.filter(regex=r'\.[ABDEFLRX]{3,}').max()).max() muscle_dict = makeMuscleDict(muscles_to_compare, joints) rgbMaps = rgbDict(muscle_dict) for muscle in rgbMaps: for joint in rgbMaps[muscle]: plotPosNegMMAs(rgbMaps[muscle][joint],joint) # - # ## Make RGB cube # + plt.close('all') x = np.linspace(0,1,101) y = np.linspace(0,1,101) z = np.linspace(0,1,101) xx, yy, zz = np.meshgrid(x, y, z) NT = np.product(xx.shape) data = { "x": np.reshape(xx,NT), "y": np.reshape(yy,NT), "z": np.reshape(zz,NT) } cube = pd.DataFrame(data=data) cube fig = 
plt.figure(figsize=[12,6], constrained_layout=False)
ax0 = fig.add_subplot(121, projection='3d', proj_type='ortho')
ax1 = fig.add_subplot(122, projection='3d', proj_type='ortho')
ax0.scatter(cube.x, cube.y, cube.z, s=1, c=cube[['x','y','z']], depthshade=False)
ax1.scatter(cube.x, cube.y, cube.z, s=1, c=cube[['x','y','z']], depthshade=False)
format3dPlot(ax0, '', (0, 1), (0, 1), (0, 1), view=view3Q, minimal=True)
format3dPlot(ax1, '', (0, 1), (0, 1), (0, 1), view=view3Qneg, minimal=True)
ax0.text(1, 1, 1, "XYZ (1,1,1)", color='black', fontsize=12, horizontalalignment='center', verticalalignment='center')
ax0.text(1, 0, 0, "X (1,0,0)", color='black', fontsize=12, horizontalalignment='right', verticalalignment='center')
ax0.text(0, 1, 0, "Y (0,1,0)", color='black', fontsize=12, horizontalalignment='left', verticalalignment='center')
ax0.text(0, 0, 1, "Z (0,0,1)", color='black', fontsize=12, horizontalalignment='center', verticalalignment='bottom')
ax1.text(0, 0, 0, "(0,0,0)", color='white', fontsize=12, horizontalalignment='center', verticalalignment='center')
ax1.text(1, 1, 0, "XY (1,1,0)", color='black', fontsize=12, horizontalalignment='center', verticalalignment='top')
ax1.text(0, 1, 1, "YZ (0,1,1)", color='black', fontsize=12, horizontalalignment='right', verticalalignment='center')
ax1.text(1, 0, 1, "XZ (1,0,1)", color='black', fontsize=12, horizontalalignment='left', verticalalignment='center')
fig.suptitle('color key', fontsize=16)
plt.legend()
# -

# ## Check for duplicated/weird data

# +
def getDuplicateColumns(df):
    """Map each column name to the names of any later columns with identical values."""
    duplicateColumnNames = {}
    for x in range(df.shape[1]):
        col = df.iloc[:, x]
        for y in range(x + 1, df.shape[1]):
            otherCol = df.iloc[:, y]
            if col.equals(otherCol):
                if col.isnull().values.all():
                    pass
                else:
                    # list.append returns None, so assign the list first and
                    # append in place (the original one-liner stored None for
                    # any column with more than one duplicate)
                    if duplicateColumnNames.get(col.name):
                        duplicateColumnNames[col.name].append(otherCol.name)
                    else:
                        duplicateColumnNames[col.name] = [otherCol.name]
    if len(duplicateColumnNames):
        return duplicateColumnNames
    else:
        return None

def
checkOM(df): match = {} ten_x = {} unknown = {} flagged_columns = {} df_filtered = df.dropna(axis=1,how='all').filter(regex=r'(.*\..*\.)') muscles = {name.split('.',1)[-1] for name in df_filtered.columns} for muscle in muscles: simm_name = 'simm.'+muscle maya_name = 'maya.'+muscle if (simm_name in df.columns) and (maya_name in df.columns): simm_mean = df[simm_name].mean() maya_mean = df[maya_name].mean() simm_om = math.floor(math.log10(abs(simm_mean))) maya_om = math.floor(math.log10(abs(maya_mean))) if simm_om != maya_om: maya_10x_om = math.floor(math.log10(abs(maya_mean*10))) if simm_om == maya_10x_om: ten_x[muscle] = {'simm_mean':simm_mean, 'maya_mean':maya_mean, 'simm_om':simm_om, 'maya_om':maya_om, 'maya_10x_om':maya_10x_om} else: unknown[muscle] = {'simm_mean':simm_mean, 'maya_mean':maya_mean, 'simm_om':simm_om, 'maya_om':maya_om, 'maya_10x_om':maya_10x_om} else: match[muscle] = {'simm_mean':simm_mean, 'maya_mean':maya_mean, 'simm_om':simm_om, 'maya_om':maya_om} return {'match': match, 'ten_x': ten_x, 'unknown': unknown} animal_classes = set(all_data['animal']) run_dict = {animal:dict.fromkeys({run for run in set(all_data[all_data['animal'] == animal]['run'])},{'order_of_magnitude':{},'same_as':[]}) for animal in animal_classes} for animal in run_dict: for run in run_dict[animal]: duplicated = getDuplicateColumns(all_data[(all_data['run']==run) & (all_data['animal']==animal)]) run_dict[animal][run]['same_as'].append(duplicated) orders_of_magnitude = checkOM(all_data[(all_data['run']==run) & (all_data['animal']==animal)]) run_dict[animal][run]['order_of_magnitude'] = orders_of_magnitude matches, tens, unknowns = [],[],[] for animal in run_dict: for run in run_dict[animal]: instance = run_dict[animal][run]['order_of_magnitude'] matches.append({animal+'.'+str(run):instance['match']}) tens.append({animal+'.'+str(run):instance['ten_x']}) unknowns.append({animal+'.'+str(run):instance['unknown']}) # - # ## Plot interval-scaled per-muscle difference # + # %matplotlib 
widget maxRx, maxcRx, minRx, mincRx, maxRy, minRy, maxRz, minRz = getTotalRotationRanges(all_data).values() def diffMap(muscle_dict, global_norm=True): result_dict = muscle_dict.copy() maxes = [] for muscle in result_dict: for joint in result_dict[muscle]: df = result_dict[muscle][joint] meta = df.filter(['frame','animal','run','Rx','Ry','Rz','cRx','eRx','eRy','eRz','ceRx']) mmas = df.drop(['frame','animal','run','Rx','Ry','Rz','cRx','eRx','eRy','eRz','ceRx'],axis=1).filter(like=joint) for axis in ['ABAD','LAR','FLEX']: cols_to_compare = mmas.filter(like=axis) if len(cols_to_compare.columns): simm_minus_maya = cols_to_compare[cols_to_compare.columns[0]] - cols_to_compare[cols_to_compare.columns[1]] mmas['simm_minus_maya.'+axis] = simm_minus_maya total = meta.join(mmas) maxes.append(simm_minus_maya.abs().max()) result_dict[muscle][joint] = total globalAbsMax = np.array(maxes).max() if global_norm: for muscle in result_dict.keys(): for joint in result_dict[muscle]: df = result_dict[muscle][joint] diff_cols = df.filter(like='simm_minus_maya').columns for col in diff_cols: df[col] /= globalAbsMax return([result_dict, globalAbsMax]) def diffMapIntervalScaled(muscle_dict): result_dict = muscle_dict.copy() for muscle in result_dict: for joint in result_dict[muscle]: df = result_dict[muscle][joint] meta = df.filter(['frame','animal','run','Rx','Ry','Rz','cRx','eRx','eRy','eRz','ceRx']) mmas = df.drop(['frame','animal','run','Rx','Ry','Rz','cRx','eRx','eRy','eRz','ceRx'],axis=1).filter(like=joint) for axis in ['ABAD','LAR','FLEX']: cols_to_compare = mmas.filter(like=axis) if len(cols_to_compare.columns): simm_col = cols_to_compare[cols_to_compare.columns[0]] maya_col = cols_to_compare[cols_to_compare.columns[1]] max_max = max(simm_col.max(), maya_col.max()) min_min = min(simm_col.min(), maya_col.min()) interval = max_max-min_min interval_scaled = abs((simm_col-maya_col)/interval) mmas['interval_scaled.'+axis] = interval_scaled total = meta.join(mmas) 
result_dict[muscle][joint] = total return(result_dict) def plotDiffMMAs(df, joint, norm_factor=1): vmin = norm_factor*-1 vmax = norm_factor if joint == 'clavscap': return plt.rcParams['grid.linewidth'] = 0 plt.rcParams['grid.color'] = 'lightgrey' fig = plt.figure(figsize=[15,5], constrained_layout=True) if joint == 'shoulder': xs = df.cRx ys = df.Ry zs = df.Rz elif joint == 'elbow': xs = df.ceRx ys = df.eRy zs = df.eRz if 'simm_minus_maya.ABAD' in df.columns: ax0 = fig.add_subplot(131, projection='3d', proj_type = 'ortho') abad = ax0.scatter(xs,ys,zs, s=abs(df['simm_minus_maya.ABAD'])*1, c=df['simm_minus_maya.ABAD'], cmap='RdBu', depthshade=False, edgecolors='none', vmin=vmin, vmax=vmax) format3dPlot(ax0, 'ABAD', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=view3Q) addCosGrid(ax0, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) if 'simm_minus_maya.LAR' in df.columns: ax1 = fig.add_subplot(132, projection='3d', proj_type = 'ortho') lar = ax1.scatter(xs,ys,zs, s=abs(df['simm_minus_maya.LAR'])*1, c=df['simm_minus_maya.LAR'], cmap='RdBu', depthshade=False, edgecolors='none', vmin=vmin, vmax=vmax) format3dPlot(ax1, 'LAR', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=view3Q) addCosGrid(ax1, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) if 'simm_minus_maya.FLEX' in df.columns: ax2 = fig.add_subplot(133, projection='3d', proj_type = 'ortho') flex = ax2.scatter(xs,ys,zs, s=abs(df['simm_minus_maya.FLEX'])*1, c=df['simm_minus_maya.FLEX'], cmap='RdBu', depthshade=False, edgecolors='none', vmin=vmin, vmax=vmax) format3dPlot(ax2, 'FLEX', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=view3Q) addCosGrid(ax2, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) plot_title = list(set([" ".join(col.split('.')[1:3]+["simm - maya"]) for col in df.columns if len(col.split('.')) > 3]))[0] fig.suptitle(plot_title, fontsize=16) divider = 
make_axes_locatable(ax0) fig.colorbar(abad) def plotDiffMMAsIntervalScaled(df, joint): vmin = 0 vmax = 1 if joint == 'clavscap': return plt.rcParams['grid.linewidth'] = 0 plt.rcParams['grid.color'] = 'lightgrey' fig = plt.figure(figsize=[15,5], constrained_layout=True) if joint == 'shoulder': xs = df.cRx ys = df.Ry zs = df.Rz elif joint == 'elbow': xs = df.ceRx ys = df.eRy zs = df.eRz col_prefix = 'interval_scaled' ABADcol, LARcol, FLEXcol = col_prefix+'.ABAD',col_prefix+'.LAR',col_prefix+'.FLEX' if ABADcol in df.columns: ax0 = fig.add_subplot(131, projection='3d', proj_type = 'ortho') abad = ax0.scatter(xs,ys,zs, s=abs(df[ABADcol])*1, c=df[ABADcol], cmap='gist_heat_r', depthshade=False, edgecolors='none', vmin=vmin, vmax=vmax) format3dPlot(ax0, 'ABAD', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=view3Q) addCosGrid(ax0, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) if LARcol in df.columns: ax1 = fig.add_subplot(132, projection='3d', proj_type = 'ortho') lar = ax1.scatter(xs,ys,zs, s=abs(df[LARcol])*1, c=df[LARcol], cmap='gist_heat_r', depthshade=False, edgecolors='none', vmin=vmin, vmax=vmax) format3dPlot(ax1, 'LAR', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=view3Q) addCosGrid(ax1, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) if FLEXcol in df.columns: ax2 = fig.add_subplot(133, projection='3d', proj_type = 'ortho') flex = ax2.scatter(xs,ys,zs, s=abs(df[FLEXcol])*1, c=df[FLEXcol], cmap='gist_heat_r', depthshade=False, edgecolors='none', vmin=vmin, vmax=vmax) format3dPlot(ax2, 'FLEX', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=view3Q) addCosGrid(ax2, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) plot_title = list(set([" ".join(col.split('.')[1:3]+[col_prefix]) for col in df.columns if len(col.split('.')) > 3]))[0] fig.suptitle(plot_title, fontsize=16) divider = make_axes_locatable(ax0) fig.colorbar(abad) 
muscle_dict = makeMuscleDict(muscles_to_compare, joints) # diffmaps, norm_factor = diffMap(muscle_dict, False) # # for muscle in diffmaps: # # for joint in diffmaps[muscle]: # # plotDiffMMAs(diffmaps[muscle][joint],joint,norm_factor) diffMapInterval = diffMapIntervalScaled(muscle_dict) for muscle in diffMapInterval: for joint in diffMapInterval[muscle]: plotDiffMMAsIntervalScaled(diffMapInterval[muscle][joint],joint) # - # ## Plot per-muscle per-axis moment arms # # + # %matplotlib widget maxRx, maxcRx, minRx, mincRx, maxRy, minRy, maxRz, minRz = getTotalRotationRanges(all_data).values() def separateMap(muscle_dict): result_dict = muscle_dict.copy() for muscle in result_dict: for joint in result_dict[muscle]: df = result_dict[muscle][joint] meta = df.filter(['frame','animal','run','Rx','Ry','Rz','cRx','eRx','eRy','eRz','ceRx']) mmas = df.drop(['frame','animal','run','Rx','Ry','Rz','cRx','eRx','eRy','eRz','ceRx'],axis=1).filter(like=joint) muscle_max = mmas.max().max() muscle_min = mmas.min().min() abs_max = max(abs(muscle_max),abs(muscle_min)) mmas /= abs_max total = meta.join(mmas) result_dict[muscle][joint] = total return(result_dict) def plotXYZSeparate(df, joint, output_dir=None): vmin = -1 vmax = 1 if joint == 'clavscap': return plt.rcParams['grid.linewidth'] = 0 plt.rcParams['grid.color'] = 'lightgrey' if joint == 'shoulder': xs = df.cRx ys = df.Ry zs = df.Rz elif joint == 'elbow': xs = df.ceRx ys = df.eRy zs = df.eRz mma_cols = [col for col in df.columns if re.search(r'[a-z].*\.[a-z].*\.[a-z].*\.[a-z].*', col, re.IGNORECASE)] mma_name = mma_cols[0].split('.',1)[1].rsplit('.',1)[0] #vertical fig = plt.figure(figsize=[10,15], constrained_layout=True) #ABAD ax0 = fig.add_subplot(321, projection='3d', proj_type = 'ortho') abad_simm_col = 'simm.'+mma_name+'.ABAD' abad_simm = ax0.scatter(xs,ys,zs, s=abs(df[abad_simm_col])*1, c=df[abad_simm_col], cmap='PuOr', depthshade=False, edgecolors='none', vmin=vmin, vmax=vmax) format3dPlot(ax0, 'SIMM ABDUCTION-ADDUCTION', 
(mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=view3Q, minimal=True) addCosGrid(ax0, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) ax1 = fig.add_subplot(322, projection='3d', proj_type = 'ortho') abad_maya_col = 'maya.'+mma_name+'.ABAD' abad_maya = ax1.scatter(xs,ys,zs, s=abs(df[abad_maya_col])*1, c=df[abad_maya_col], cmap='PuOr', depthshade=False, edgecolors='none', vmin=vmin, vmax=vmax) format3dPlot(ax1, 'MAYA ABDUCTION-ADDUCTION', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=view3Q, minimal=True) addCosGrid(ax1, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) #LAR ax2 = fig.add_subplot(323, projection='3d', proj_type = 'ortho') lar_simm_col = 'simm.'+mma_name+'.LAR' lar_simm = ax2.scatter(xs,ys,zs, s=abs(df[lar_simm_col])*1, c=df[lar_simm_col], cmap='PuOr', depthshade=False, edgecolors='none', vmin=vmin, vmax=vmax) format3dPlot(ax2, 'SIMM LONG-AXIS ROTATION', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=view3Q, minimal=True) addCosGrid(ax2, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) ax3 = fig.add_subplot(324, projection='3d', proj_type = 'ortho') lar_maya_col = 'maya.'+mma_name+'.LAR' lar_maya = ax3.scatter(xs,ys,zs, s=abs(df[lar_maya_col])*1, c=df[lar_maya_col], cmap='PuOr', depthshade=False, edgecolors='none', vmin=vmin, vmax=vmax) format3dPlot(ax3, 'MAYA LONG-AXIS ROTATION', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=view3Q, minimal=True) addCosGrid(ax3, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) #FLEX ax4 = fig.add_subplot(325, projection='3d', proj_type = 'ortho') flex_simm_col = 'simm.'+mma_name+'.FLEX' flex_simm = ax4.scatter(xs,ys,zs, s=abs(df[flex_simm_col])*1, c=df[flex_simm_col], cmap='PuOr', depthshade=False, edgecolors='none', vmin=vmin, vmax=vmax) format3dPlot(ax4, 'SIMM FLEXION-EXTENSION', (mincRx, maxcRx), (minRy, maxRy), (minRz, 
maxRz), view=view3Q, minimal=True) addCosGrid(ax4, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) ax5 = fig.add_subplot(326, projection='3d', proj_type = 'ortho') flex_maya_col = 'maya.'+mma_name+'.FLEX' flex_maya = ax5.scatter(xs,ys,zs, s=abs(df[flex_maya_col])*1, c=df[flex_maya_col], cmap='PuOr', depthshade=False, edgecolors='none', vmin=vmin, vmax=vmax) format3dPlot(ax5, 'MAYA FLEXION-EXTENSION', (mincRx, maxcRx), (minRy, maxRy), (minRz, maxRz), view=view3Q, minimal=True) addCosGrid(ax5, (minRx,maxRx), (minRy,maxRy), (minRz,maxRz), 5, zLevels=1, color='grey', linewidths=0.1) plot_title = mma_name fig.suptitle(plot_title, fontsize=16) fig.colorbar(abad_simm, aspect=100, ax=[ax0, ax2, ax4],location='right') if output_dir: plt.savefig(output_dir+plot_title+'.svg', format='svg') muscle_dict = makeMuscleDict(muscles_to_compare, joints) sepMap = separateMap(muscle_dict) # plotXYZSeparate(sepMap['biceps_brevis']['shoulder'],'shoulder') output_dir = "/Users/phil/Development/possumpolish/echidna_plots/separate/" for muscle in sepMap: for joint in sepMap[muscle]: musc_plot = plotXYZSeparate(sepMap[muscle][joint],joint, output_dir) # - # ## boxplots # + # %matplotlib inline def posNegMap(df): df = df.copy() meta = df.filter(['frame','animal','run']) mmas = all_data.filter(regex=r'[a-z].*\.') mmas_pos = mmas.apply(lambda x : np.where(x > 0, x, np.nan),axis=0) mmas_neg = mmas.apply(lambda x : np.where(x < 0, x, np.nan),axis=0) mmas_pos.columns = [name+'.pos' for name in mmas_pos.columns] mmas_neg.columns = [name+'.neg' for name in mmas_neg.columns] mmas_binned = mmas_pos.join(mmas_neg) newdf = meta.join(mmas_binned) unpivot = newdf.melt(id_vars=['frame','animal','run']) unpivot[['source','muscle','joint','axis','valence']] = unpivot['variable'].str.split('.',expand=True) return unpivot def perMuscleBoxplot(df, muscle, joint, output_dir=None): pad = 1 df = df[(df['muscle'] == muscle)&(df['joint'] == 
joint)].drop(['frame','run'],axis=1) if not len(df): return simm = df[df['source']=='simm'] maya = df[df['source']=='maya'] yMax = max(simm['value'].max(),maya['value'].max())+pad yMin = min(simm['value'].min(),maya['value'].min())-pad if pd.isna(yMin): return fig, (axSimm, axMaya) = plt.subplots(1, 2, sharey=False, figsize=[20,10], constrained_layout=True) if not joint == 'clavscap': simmPlot = simm.boxplot(column='value',by=['source','axis','valence'], ax=axSimm, patch_artist=True, rot=90, positions=[2,1,6,5,4,3], boxprops=dict(edgecolor='black',linewidth=2), capprops=dict(color='black',linewidth=2), whiskerprops=dict(color='black',linewidth=2), flierprops=dict(color='black', markeredgecolor='black',markersize=4, marker='x'), medianprops=dict(color='black',linewidth=2)) mayaPlot = maya.boxplot(column='value',by=['source','axis','valence'], ax=axMaya, patch_artist=True, rot=90, positions=[2,1,6,5,4,3], boxprops=dict(edgecolor='black',linewidth=2), capprops=dict(color='black',linewidth=2), whiskerprops=dict(color='black',linewidth=2), flierprops=dict(color='black', markeredgecolor='black',markersize=4, marker='x'), medianprops=dict(color='black',linewidth=2)) axSimm.set_xticklabels(['ADDUCTION','ABDUCTION','SUPINATION','PRONATION','EXTENSION','FLEXION'], rotation=45, fontsize=16, ha="right", rotation_mode="anchor" ) axMaya.set_xticklabels(['ADDUCTION','ABDUCTION','SUPINATION','PRONATION','EXTENSION','FLEXION'], rotation=45, fontsize=16, ha="right", rotation_mode="anchor" ) faces = ['red', 'red', 'dodgerblue', 'dodgerblue', 'lawngreen', 'lawngreen'] else: simm = simm[simm['axis']=='ABAD'] maya = maya[maya['axis']=='ABAD'] yMax = max(simm['value'].max(),maya['value'].max())+pad yMin = min(simm['value'].min(),maya['value'].min())-pad simmPlot = simm.boxplot(column='value',by=['source','axis','valence'], ax=axSimm, patch_artist=True, rot=90, positions=[2,1], boxprops=dict(edgecolor='black',linewidth=2), capprops=dict(color='black',linewidth=2), 
whiskerprops=dict(color='black',linewidth=2), flierprops=dict(color='black', markeredgecolor='black',markersize=4, marker='x'), medianprops=dict(color='black',linewidth=2)) mayaPlot = maya.boxplot(column='value',by=['source','axis','valence'], ax=axMaya, patch_artist=True, rot=90, positions=[2,1], boxprops=dict(edgecolor='black',linewidth=2), capprops=dict(color='black',linewidth=2), whiskerprops=dict(color='black',linewidth=2), flierprops=dict(color='black', markeredgecolor='black',markersize=4, marker='x'), medianprops=dict(color='black',linewidth=2)) axSimm.set_xticklabels(['ADDUCTION','ABDUCTION'], rotation=45, fontsize=16, ha="right", rotation_mode="anchor" ) axMaya.set_xticklabels(['ADDUCTION','ABDUCTION'], rotation=45, fontsize=16, ha="right", rotation_mode="anchor" ) faces = ['red', 'red'] simmPatches = [child for child in simmPlot.get_children() if type(child)== mpl.patches.PathPatch] mayaPatches = [child for child in mayaPlot.get_children() if type(child)== mpl.patches.PathPatch] for simmPatch, mayaPatch, face in zip(simmPatches, mayaPatches, faces): simmPatch.set_facecolor(face) mayaPatch.set_facecolor(face) axSimm.set_title('SIMM', fontsize=20) axSimm.set_ylim(yMin, yMax) axSimm.set_xlabel('') axSimm.set_ylabel('Muscle Moment Arm (mm)',fontsize=16) axSimm.tick_params(axis='y', which='major', labelsize=14) axSimm.axhline(c='grey',ls='dotted',linewidth=2) axMaya.set_title('Maya', fontsize=20) axMaya.set_ylim(yMin, yMax) axMaya.set_xlabel('') axMaya.tick_params(axis='y', which='major', labelsize=14) axMaya.axhline(c='grey',ls='dotted',linewidth=2) plot_title = df['variable'].unique()[0].split('.',1)[1].rsplit('.',1)[0].rsplit('.',1)[0] fig.suptitle(plot_title, fontsize=25) if output_dir: plt.savefig(output_dir+plot_title+'.svg', format='svg') output_dir = "/Users/phil/Development/possumpolish/echidna_plots/boxplots/" boxPlotDf = posNegMap(all_data) for muscle in muscles_to_compare: for joint in ['clavscap']: perMuscleBoxplot(boxPlotDf, muscle, joint, 
output_dir) # - # ## boxplots (summed) # + # %matplotlib inline def posNegMapForSum(df): df = df.copy() meta = df.filter(['frame','animal','run']) mmas = all_data.filter(regex=r'[a-z].*\.') mmas_pos = mmas.apply(lambda x : np.where(x > 0, x, np.nan),axis=0) mmas_neg = mmas.apply(lambda x : np.where(x < 0, x, np.nan),axis=0) mmas_pos.columns = [name+'.pos' for name in mmas_pos.columns] mmas_neg.columns = [name+'.neg' for name in mmas_neg.columns] mmas_binned = mmas_pos.join(mmas_neg) newdf = meta.join(mmas_binned) unpivot = newdf.melt(id_vars=['frame','animal','run']) unpivot[['source','muscle','joint','axis','valence']] = unpivot['variable'].str.split('.',expand=True) return unpivot def summedMomentArms(df, muscles_to_compare, joint): df['uid'] = df['animal']+'_'+df['run'].astype(str)+'_'+df['frame'].astype(str) df = df[(df['muscle'].isin(muscles_to_compare))&(df['joint'] == joint)].drop(['run','animal','variable'],axis=1) if not len(df): return result = pd.DataFrame(index=df['uid'].unique(), columns=['simm_ABAD_pos','simm_ABAD_neg','simm_LAR_pos','simm_LAR_neg','simm_FLEX_pos','simm_FLEX_neg','maya_ABAD_pos','maya_ABAD_neg','maya_LAR_pos','maya_LAR_neg','maya_FLEX_pos','maya_FLEX_neg']) for uid in result.index: current = df[df['uid']==uid] simm = current[current['source']=='simm'] maya = current[current['source']=='maya'] simm_pos = simm[simm['valence']=='pos'] simm_neg = simm[simm['valence']=='neg'] maya_pos = maya[maya['valence']=='pos'] maya_neg = maya[maya['valence']=='neg'] result.loc[uid, 'simm_ABAD_pos'] = simm_pos[simm_pos['axis']=='ABAD']['value'].sum() result.loc[uid, 'simm_ABAD_neg'] = simm_neg[simm_neg['axis']=='ABAD']['value'].sum() result.loc[uid, 'maya_ABAD_pos'] = maya_pos[maya_pos['axis']=='ABAD']['value'].sum() result.loc[uid, 'maya_ABAD_neg'] = maya_neg[maya_neg['axis']=='ABAD']['value'].sum() result.loc[uid, 'simm_LAR_pos'] = simm_pos[simm_pos['axis']=='LAR']['value'].sum() result.loc[uid, 'simm_LAR_neg'] = 
simm_neg[simm_neg['axis']=='LAR']['value'].sum() result.loc[uid, 'maya_LAR_pos'] = maya_pos[maya_pos['axis']=='LAR']['value'].sum() result.loc[uid, 'maya_LAR_neg'] = maya_neg[maya_neg['axis']=='LAR']['value'].sum() result.loc[uid, 'simm_FLEX_pos'] = simm_pos[simm_pos['axis']=='FLEX']['value'].sum() result.loc[uid, 'simm_FLEX_neg'] = simm_neg[simm_neg['axis']=='FLEX']['value'].sum() result.loc[uid, 'maya_FLEX_pos'] = maya_pos[maya_pos['axis']=='FLEX']['value'].sum() result.loc[uid, 'maya_FLEX_neg'] = maya_neg[maya_neg['axis']=='FLEX']['value'].sum() result.replace(0, np.nan, inplace=True) # print(uid+' done') return result def summedBoxplot(df, joint, output_dir=None): pad = 5 yMax = df.max().max()+pad yMin = df.min().min()-pad simm = df.filter(regex=r'simm_') maya = df.filter(regex=r'maya_') if joint == 'clavscap': simm = simm.filter(regex=r'_ABAD') maya = maya.filter(regex=r'_ABAD') fig, (axSimm, axMaya) = plt.subplots(1, 2, sharey=False, figsize=[20,10], constrained_layout=True) simmPlot = simm.boxplot(ax=axSimm, patch_artist=True, rot=90, boxprops=dict(edgecolor='black',linewidth=2), capprops=dict(color='black',linewidth=2), whiskerprops=dict(color='black',linewidth=2), flierprops=dict(color='black', markeredgecolor='black',markersize=4, marker='x'), medianprops=dict(color='black',linewidth=2)) mayaPlot = maya.boxplot(ax=axMaya, patch_artist=True, rot=90, boxprops=dict(edgecolor='black',linewidth=2), capprops=dict(color='black',linewidth=2), whiskerprops=dict(color='black',linewidth=2), flierprops=dict(color='black', markeredgecolor='black',markersize=4, marker='x'), medianprops=dict(color='black',linewidth=2)) if not joint == 'clavscap': axSimm.set_xticklabels(['ABDUCTION','ADDUCTION','PRONATION','SUPINATION','FLEXION','EXTENSION'], rotation=45, fontsize=16, ha="right", rotation_mode="anchor" ) axMaya.set_xticklabels(['ABDUCTION','ADDUCTION','PRONATION','SUPINATION','FLEXION','EXTENSION'], rotation=45, fontsize=16, ha="right", rotation_mode="anchor" ) faces = 
['red', 'red', 'lawngreen', 'lawngreen', 'dodgerblue', 'dodgerblue']
    else:
        print('clavscap')
        axSimm.set_xticklabels(['ADDUCTION','ABDUCTION'], rotation=45, fontsize=16, ha="right", rotation_mode="anchor")
        axMaya.set_xticklabels(['ADDUCTION','ABDUCTION'], rotation=45, fontsize=16, ha="right", rotation_mode="anchor")
        faces = ['red', 'red']
    simmPatches = [child for child in axSimm.get_children() if type(child) == mpl.patches.PathPatch]
    mayaPatches = [child for child in axMaya.get_children() if type(child) == mpl.patches.PathPatch]
    for simmPatch, mayaPatch, face in zip(simmPatches, mayaPatches, faces):
        simmPatch.set_facecolor(face)
        mayaPatch.set_facecolor(face)
    axSimm.set_title('SIMM', fontsize=20)
    axSimm.set_ylim(yMin, yMax)
    axSimm.set_xlabel('')
    axSimm.set_ylabel('Muscle Moment Arm (mm)', fontsize=16)
    axSimm.tick_params(axis='y', which='major', labelsize=14)
    axSimm.axhline(c='grey', ls='dotted', linewidth=2)
    axMaya.set_title('Maya', fontsize=20)
    axMaya.set_ylim(yMin, yMax)
    axMaya.set_xlabel('')
    axMaya.tick_params(axis='y', which='major', labelsize=14)
    axMaya.axhline(c='grey', ls='dotted', linewidth=2)
    plot_title = joint + '_summed_moment_arms'
    fig.suptitle(plot_title, fontsize=25)
    if output_dir:
        plt.savefig(output_dir + plot_title + '.svg', format='svg')

output_dir = "/Users/phil/Development/possumpolish/echidna_plots/boxplots/"
# boxPlotDf = posNegMapForSum(all_data)
# shoulderSum = summedMomentArms(boxPlotDf, muscles_to_compare, 'shoulder')
# elbowSum = summedMomentArms(boxPlotDf, muscles_to_compare, 'elbow')
# clavscapSum = summedMomentArms(boxPlotDf, muscles_to_compare, 'clavscap')
# summedBoxplot(shoulderSum, 'shoulder', output_dir=output_dir)
# summedBoxplot(elbowSum, 'elbow', output_dir=output_dir)
summedBoxplot(clavscapSum, 'clavscap', output_dir=output_dir)
# -

# ## Check minmax rom for fig.1
# 46R z is 48 from here vs 34 in fig, everything else is the same
animal_sides = all_data['animal'].unique()
rom_ranges = {}
for animal_side in animal_sides:
    df =
all_data[all_data['animal'] == animal_side] x_max, x_min = df['Rx'].max(), df['Rx'].min() y_max, y_min = df['Ry'].max(), df['Ry'].min() z_max, z_min = df['Rz'].max(), df['Rz'].min() rom_ranges[animal_side] = {'x':abs(x_max-x_min), 'y':abs(y_max-y_min), 'z':abs(z_max-z_min), } rom_ranges # ## Plot per-trial per-axis moment arms vs time # + def plot2Dcomparison(df, joint, trials): absMaxMMA = abs(df.drop(['frame','animal','run','Rx','Ry','Rz','cRx','eRx','eRy','eRz','ceRx'],axis=1).filter(like=joint)).max().max()*1.25 maxFrame = df['frame'].max() df = df[df['trial'].isin(trials)] fig = plt.figure(figsize=[12,8]) axes = [] for num in range(len(trials)): axes.insert(num, fig.add_subplot(int('32'+str(num+1)))) dfX = df[df['trial']==trials[num]] simm_xs = dfX.filter(regex=r'(simm.*ABAD)') simm_ys = dfX.filter(regex=r'(simm.*LAR)') simm_zs = dfX.filter(regex=r'(simm.*FLEX)') maya_xs = dfX.filter(regex=r'(maya.*ABAD)') maya_ys = dfX.filter(regex=r'(maya.*LAR)') maya_zs = dfX.filter(regex=r'(maya.*FLEX)') axes[num].title.set_text(trials[num]) axes[num].plot(dfX.frame,maya_xs, lw=0.75, c='#FF0000', linestyle='solid', label='ABAD experimental estimate') axes[num].plot(dfX.frame,simm_xs, lw=1, c='#FF0000', linestyle='dotted', label='ABAD model prediction') if joint != 'clavscap': axes[num].plot(dfX.frame,maya_ys, lw=0.75, c='#00CC00', linestyle='solid', label='LAR experimental estimate') axes[num].plot(dfX.frame,simm_ys, lw=1, c='#00CC00', linestyle='dotted', label='LAR model prediction') axes[num].plot(dfX.frame,maya_zs, lw=0.75, c='#0000FF', linestyle='solid', label='FE experimental estimate') axes[num].plot(dfX.frame,simm_zs, lw=1, c='#0000FF', linestyle='dotted', label='FE model prediction') axes[num].axhline(c='#060606', lw=0.5) axes[num].set_xlim(0, maxFrame) axes[num].set_ylim(-absMaxMMA, absMaxMMA) plot_title = list(set([" ".join(col.split('.')[1:3]) for col in df.columns if len(col.split('.')) > 3]))[0] fig.suptitle(plot_title, fontsize=16) 
plt.subplots_adjust(hspace=0.5, wspace=0.25) plt.legend(fontsize='small',bbox_to_anchor=(2, 0.5),loc='center right', ncol=1) muscle_dict = makeMuscleDict(muscles_to_compare, joints) for muscle in muscle_dict: for joint in muscle_dict[muscle]: muscle_dict[muscle][joint]['trial'] = muscle_dict[muscle][joint]['animal'] + ['_run']+muscle_dict[muscle][joint]['run'].astype(str) trials = sorted(muscle_dict[muscle][joint]['trial'].unique()) plot2Dcomparison(muscle_dict[muscle][joint],joint, trials) # - # ## Plot 2D diffs # + def plot2Ddiff(df, joint, trials): absMaxMMA = abs(df.drop(['frame','animal','run','Rx','Ry','Rz','cRx','eRx','eRy','eRz','ceRx'],axis=1).filter(like=joint)).max().max()*1.25 maxFrame = df['frame'].max() df = df[df['trial'].isin(trials)] fig = plt.figure(figsize=[12,8]) axes = [] for num in range(len(trials)): axes.insert(num, fig.add_subplot(int('32'+str(num+1)))) dfX = df[df['trial']==trials[num]] simm_xs = dfX.filter(regex=r'(simm.*ABAD)') simm_ys = dfX.filter(regex=r'(simm.*LAR)') simm_zs = dfX.filter(regex=r'(simm.*FLEX)') maya_xs = dfX.filter(regex=r'(maya.*ABAD)') maya_ys = dfX.filter(regex=r'(maya.*LAR)') maya_zs = dfX.filter(regex=r'(maya.*FLEX)') diff_xs = simm_xs.values - maya_xs.values diff_ys = simm_ys.values - maya_ys.values diff_zs = simm_zs.values - maya_zs.values axes[num].title.set_text(trials[num]) axes[num].plot(dfX.frame,diff_xs, lw=0.75, c='#FF0000', linestyle='solid', label='ABAD model - experimental') if joint != 'clavscap': axes[num].plot(dfX.frame,diff_ys, lw=0.75, c='#00CC00', linestyle='solid', label='LAR model - experimental') axes[num].plot(dfX.frame,diff_zs, lw=1, c='#0000FF', linestyle='dotted', label='FE model - experimental') axes[num].axhline(c='#060606', lw=0.5) axes[num].set_xlim(0, maxFrame) axes[num].set_ylim(-absMaxMMA, absMaxMMA) plot_title = list(set([" ".join(col.split('.')[1:3]) for col in df.columns if len(col.split('.')) > 3]))[0] fig.suptitle(plot_title, fontsize=16) plt.subplots_adjust(hspace=0.5, 
wspace=0.25) plt.legend(fontsize='small',bbox_to_anchor=(2, 0.5),loc='center right', ncol=1) muscle_dict = makeMuscleDict(muscles_to_compare, joints) for muscle in muscle_dict: for joint in muscle_dict[muscle]: muscle_dict[muscle][joint]['trial'] = muscle_dict[muscle][joint]['animal'] + '_run' + muscle_dict[muscle][joint]['run'].astype(str) trials = sorted(muscle_dict[muscle][joint]['trial'].unique()) plot2Ddiff(muscle_dict[muscle][joint],joint, trials)
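The per-axis column selection in the two plotting functions above hinges on `DataFrame.filter(regex=...)`, and the diffs rely on `.values` to strip the mismatched column labels before subtracting. A minimal sketch of that pattern, with hypothetical column names following the `simm`/`maya` naming scheme:

```python
import pandas as pd

# Toy frame with hypothetical columns in the simm/maya naming scheme used above
df = pd.DataFrame({
    'frame': [0, 1, 2],
    'simm.pectoralis.ABAD.ma': [1.0, 1.1, 1.2],   # model prediction
    'maya.pectoralis.ABAD.ma': [0.9, 1.0, 1.4],   # experimental estimate
})

simm_xs = df.filter(regex=r'simm.*ABAD')   # model ABAD moment-arm columns
maya_xs = df.filter(regex=r'maya.*ABAD')   # experimental ABAD moment-arm columns

# .values drops the differing column labels so the arrays subtract
# elementwise, exactly as plot2Ddiff does
diff_xs = simm_xs.values - maya_xs.values
```

Subtracting the two DataFrames directly would align on column names and produce all-NaN columns, which is why the functions subtract `.values` instead.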
.ipynb_checkpoints/Echidna3DPlots-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Week 8 of Introduction to Biological System Design # ## Feedforward Loops # ### <NAME> # # Pre-requisite: To get the best out of this notebook, make sure that you have basic understanding of ordinary differential equations (ODE) and Hill functions to model gene regulatory effects. For more information on ODE modeling you may refer to any standard book on engineering math and [BFS](http://www.cds.caltech.edu/~murray/BFSwiki/index.php?title=Main_Page) for more information on Hill functions. You can learn more about how to numerically simulate ODEs deterministically from the [week3_intro_ode.ipynb](https://pages.hmc.edu/pandey/reading/week3_intro_ode.ipynb) notebook. Further, it is also assumed that you have a working knowledge of gene expression processes, use of Hill functions for gene regulation, and biological system motifs. Computational examples with Hill functions are discussed in [week4_hill_functions.ipynb](https://pages.hmc.edu/pandey/reading/week4_hill_functions.ipynb) whereas design choices underlying biological motifs are shown in [week6_system_analysis.ipynb](https://pages.hmc.edu/pandey/reading/week6_system_analysis.ipynb). This notebook builds on the code discussed in week6_system_analysis.ipynb to analyze feedforward loop motifs. # # Disclaimer: Concepts demonstrated in this notebook have been inspired from the discussion on feedforward loops in [Alon](https://www.taylorfrancis.com/books/mono/10.1201/9781420011432/introduction-systems-biology-uri-alon) and [Biocircuits Lecture by Elowitz and Bois](https://www.taylorfrancis.com/books/mono/10.1201/9781420011432/introduction-systems-biology-uri-alon). 
# + # To plot heatmaps in this notebook, you may need to # install a package called "seaborn" # To install seaborn, run the following command # (or install the package "seaborn" using the Anaconda Navigator search) # # !pip install seaborn # - # # Coherent Feedforward Loops (CFFL) # Consider the motif where X --> Y --> Z and X --> Z directly as well. # # ## C1-FFL with AND logic def c1_ffl_and(x,t,*args): """ ODE model for C1-FFL with AND logic. """ k, n_X, K_X, n_Y, K_Y, d_Z = args X, Y, Z = x dZ_dt = k * (X**n_X)/(K_X**n_X + X**n_X) *\ (Y**n_Y)/(K_Y**n_Y + Y**n_Y) - d_Z * Z # Since X and Y don't change, the rate of change # of X and Y is equal to zero. We are only modeling # rate of change of Z. return np.array([0, 0, dZ_dt]) # + import numpy as np import matplotlib.pyplot as plt from scipy.integrate import odeint X = np.linspace(0, 5, 10) Y = np.linspace(0, 5, 10) timepoints = np.linspace(0,100,10) Z_ss = np.zeros((len(X),len(Y))) # parameters: k = 1 n_X = 1 K_X = 2 n_Y = 1 K_Y = 2 d_Z = 1 for i, x0 in enumerate(X): for j, y0 in enumerate(Y): initial_condition = np.array([x0,y0,0]) solution = odeint(c1_ffl_and, y0 = initial_condition, t = timepoints, args = (k, n_X, K_X, n_Y, K_Y, d_Z)) # Store steady-state value Z_ss[i,j] = solution[:,2][-1] # - import seaborn as sn ax = sn.heatmap(Z_ss, xticklabels = np.around(X,1), yticklabels = np.around(Y,1)) ax.tick_params(labelsize = 12) cbar_ax = ax.figure.axes[-1] cbar_ax.tick_params(labelsize = 12) cbar_ax.set_ylabel('Z', fontsize = 14) ax.set_xlabel('X', fontsize = 14) ax.set_ylabel('Y', fontsize = 14); # ### C1-FFL with AND logic exhibits delayed response from scipy import signal timepoints = np.linspace(0, 100, 100, endpoint = True) max_toxin_value = 20 #arbitrary units toxin_signal = max_toxin_value*np.ones_like(timepoints) *\ -1*signal.square(2*np.pi*2*timepoints, duty = 0.55) for i, s in enumerate(toxin_signal): if s < 0: toxin_signal[i] = 0 fig, ax = plt.subplots(figsize = (12,4)) ax.plot(toxin_signal, color = 'black', lw = 3) 
ax.set_xlabel('Time (days)', fontsize = 14) ax.set_ylabel('Toxin signal, X, (A.U.)', fontsize = 14) ax.tick_params(labelsize = 14) def c1_ffl_and(x,t,*args): """ ODE model for C1-FFL with AND logic. """ k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z = args X, Y, Z = x dY_dt = k_Y * (X**n_X)/(K_X**n_X + X**n_X) - d_Y * Y dZ_dt = k_Z * (X**n_X)/(K_X**n_X + X**n_X) *\ (Y**n_Y)/(K_Y**n_Y + Y**n_Y) - d_Z * Z # Since X is fixed input, it doesn't change. # the rate of change # of X is equal to zero. We are only modeling # rate of change of Y and Z. return np.array([0, dY_dt, dZ_dt]) # + fig, ax = plt.subplots(figsize = (12,4)) fig.suptitle('Response of C1-FFL (AND logic) to Pulsating Signal', fontsize = 18); # parameters: k_Y = 1 k_Z = 1 n_X = 3 K_X = 1 n_Y = 3 K_Y = 5 d_Y = 1 d_Z = 1 # Normalize the values def normalize(solution): """ Normalize by maximum value in the odeint solution except when the values are zero, to avoid division by zero. """ normalized_solution = np.zeros_like(solution.T) for i, val_array in enumerate(solution.T): max_value = np.max(val_array) for j, val in enumerate(val_array): if max_value == 0: normalized_solution[i, j] = val else: normalized_solution[i, j] = val/max_value return normalized_solution.T # Plot X ax.plot(toxin_signal/np.max(toxin_signal), color = 'black', lw = 3, label = 'X') # For X = 0 previous_time = 0 array_nonzero = np.where(toxin_signal != 0)[0] next_time = array_nonzero[0] t_solve = np.linspace(previous_time, next_time, next_time - previous_time, endpoint = True) solution = odeint(c1_ffl_and, y0 = np.array([0, 0, 0]), t = t_solve, args = (k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z )) normalized_solution = normalize(solution) ax.plot(t_solve, normalized_solution[:,1], 'r', lw = 3, label = 'Y') ax.plot(t_solve, normalized_solution[:,2], 'b', lw = 3, label = 'Z') # For X = max_toxin_value previous_time = next_time array_zero = np.where(toxin_signal == 0)[0] next_time = array_zero[np.where(array_zero > previous_time)][0] t_solve = 
np.linspace(previous_time,next_time, next_time - previous_time, endpoint = True) solution = odeint(c1_ffl_and, y0 = np.array([max_toxin_value, 0, 0]), t = t_solve, args = (k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z )) normalized_solution = normalize(solution) ax.plot(t_solve, normalized_solution[:,1], 'r', lw = 3) ax.plot(t_solve, normalized_solution[:,2], 'b', lw = 3) y_ss = normalized_solution[:,1][-1] z_ss = normalized_solution[:,2][-1] # For X = 0 again previous_time = next_time array_zero = np.where(toxin_signal != 0)[0] next_time = array_zero[np.where(array_zero > previous_time)][0] t_solve = np.linspace(previous_time, next_time, next_time - previous_time, endpoint = True) solution = odeint(c1_ffl_and, y0 = np.array([0, y_ss, z_ss]), t = t_solve, args = (k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z )) normalized_solution = normalize(solution) ax.plot(t_solve, normalized_solution[:,1], 'r', lw = 3) ax.plot(t_solve, normalized_solution[:,2], 'b', lw = 3) # For X = max_toxin_value, again previous_time = next_time next_time = int(timepoints[-1]) # last point t_solve = np.linspace(previous_time, next_time, next_time - previous_time, endpoint = True) solution = odeint(c1_ffl_and, y0 = np.array([max_toxin_value, 0, 0]), t = t_solve, args = (k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z )) normalized_solution = normalize(solution) ax.plot(t_solve, normalized_solution[:,1], 'r', lw = 3) ax.plot(t_solve, normalized_solution[:,2], 'b', lw = 3) ax.set_xlabel('Time (days)', fontsize = 14) ax.set_ylabel('Signals', fontsize = 14) ax.tick_params(labelsize = 14) ax.legend(fontsize = 14) # - # ### C1-FFL with AND logic filters short pulses from scipy import signal timepoints = np.linspace(0, 100, 100, endpoint = True) max_toxin_value = 20 #arbitrary units toxin_signal = max_toxin_value*np.ones_like(timepoints) *\ -1*signal.square(2*np.pi*2*timepoints, duty = 0.95) for i, s in enumerate(toxin_signal): if s < 0: toxin_signal[i] = 0 fig, ax = plt.subplots(figsize = (12,4)) ax.plot(toxin_signal, 
color = 'black', lw = 3) ax.set_xlabel('Time (days)', fontsize = 14) ax.set_ylabel('Toxin signal, X, (A.U.)', fontsize = 14) ax.tick_params(labelsize = 14) # + fig, ax = plt.subplots(figsize = (12,4)) fig.suptitle('C1-FFL filters short pulses', fontsize = 18); # parameters: k_Y = 40 k_Z = 40 n_X = 3 K_X = 25 n_Y = 3 K_Y = 20 d_Y = 1 d_Z = 1 # Plot X ax.plot(toxin_signal, color = 'black', lw = 3, label = 'X') # For X = 0 previous_time = 0 array_nonzero = np.where(toxin_signal != 0)[0] next_time = array_nonzero[0] t_solve = np.linspace(previous_time, next_time, next_time - previous_time, endpoint = True) solution = odeint(c1_ffl_and, y0 = np.array([0, 0, 0]), t = t_solve, args = (k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z )) ax.plot(t_solve, solution[:,1], 'r', lw = 3, label = 'Y') ax.plot(t_solve, solution[:,2], 'b', lw = 3, label = 'Z') # For X = max_toxin_value previous_time = next_time array_zero = np.where(toxin_signal == 0)[0] next_time = array_zero[np.where(array_zero > previous_time)][0] t_solve = np.linspace(previous_time,next_time, next_time - previous_time, endpoint = True) solution = odeint(c1_ffl_and, y0 = np.array([max_toxin_value, 0, 0]), t = t_solve, args = (k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z )) ax.plot(t_solve, solution[:,1], 'r', lw = 3) ax.plot(t_solve, solution[:,2], 'b', lw = 3) y_ss = solution[:,1][-1] z_ss = solution[:,2][-1] # For X = 0 again previous_time = next_time array_zero = np.where(toxin_signal != 0)[0] next_time = array_zero[np.where(array_zero > previous_time)][0] t_solve = np.linspace(previous_time, next_time, next_time - previous_time, endpoint = True) solution = odeint(c1_ffl_and, y0 = np.array([0, y_ss, z_ss]), t = t_solve, args = (k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z )) ax.plot(t_solve, solution[:,1], 'r', lw = 3) ax.plot(t_solve, solution[:,2], 'b', lw = 3) # For X = max_toxin_value, again previous_time = next_time next_time = int(timepoints[-1]) # last point t_solve = np.linspace(previous_time, next_time, next_time - 
previous_time, endpoint = True) solution = odeint(c1_ffl_and, y0 = np.array([max_toxin_value, 0, 0]), t = t_solve, args = (k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z )) ax.plot(t_solve, solution[:,1], 'r', lw = 3) ax.plot(t_solve, solution[:,2], 'b', lw = 3) ax.set_xlabel('Time (days)', fontsize = 14) ax.set_ylabel('Signals', fontsize = 14) ax.tick_params(labelsize = 14) ax.legend(fontsize = 14) # - # ## C1-FFL with OR logic def c1_ffl_or(x,t,*args): """ ODE model for C1-FFL with OR logic. """ k, n_X, K_X, n_Y, K_Y, d_Z = args X, Y, Z = x dZ_dt = k * ((X**n_X)/(K_X**n_X + X**n_X) +\ (Y**n_Y)/(K_Y**n_Y + Y**n_Y)) - d_Z * Z # Since X and Y don't change, the rate of change # of X and Y is equal to zero. We are only modeling # rate of change of Z. return np.array([0, 0, dZ_dt]) # + X = np.linspace(0, 5, 10) Y = np.linspace(0, 5, 10) timepoints = np.linspace(0,100,10) Z_ss = np.zeros((len(X),len(Y))) # parameters: k = 1 n_X = 1 K_X = 2 n_Y = 1 K_Y = 2 d_Z = 1 for i, x0 in enumerate(X): for j, y0 in enumerate(Y): initial_condition = np.array([x0,y0,0]) solution = odeint(c1_ffl_or, y0 = initial_condition, t = timepoints, args = (k, n_X, K_X, n_Y, K_Y, d_Z)) # Store steady-state value Z_ss[i,j] = solution[:,2][-1] # - import seaborn as sn ax = sn.heatmap(Z_ss, xticklabels = np.around(X,1), yticklabels = np.around(Y,1)) ax.tick_params(labelsize = 12) cbar_ax = ax.figure.axes[-1] cbar_ax.tick_params(labelsize = 12) cbar_ax.set_ylabel('Z', fontsize = 14) ax.set_xlabel('X', fontsize = 14) ax.set_ylabel('Y', fontsize = 14); # ### C1-FFL with OR logic exhibits delayed response def c1_ffl_or(x,t,*args): """ ODE model for C1-FFL with OR logic. """ k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z = args X, Y, Z = x dY_dt = k_Y * (X**n_X)/(K_X**n_X + X**n_X) - d_Y * Y dZ_dt = k_Z * ((X**n_X)/(K_X**n_X + X**n_X) +\ (Y**n_Y)/(K_Y**n_Y + Y**n_Y)) - d_Z * Z # Since X is fixed input, it doesn't change. # the rate of change # of X is equal to zero. We are only modeling # rate of change of Y and Z. 
return np.array([0, dY_dt, dZ_dt]) from scipy import signal timepoints = np.linspace(0, 100, 100, endpoint = True) max_toxin_value = 20 #arbitrary units toxin_signal = max_toxin_value*np.ones_like(timepoints) *\ -1*signal.square(2*np.pi*2*timepoints, duty = 0.55) for i, s in enumerate(toxin_signal): if s < 0: toxin_signal[i] = 0 fig, ax = plt.subplots(figsize = (12,4)) ax.plot(toxin_signal, color = 'black', lw = 3) ax.set_xlabel('Time (days)', fontsize = 14) ax.set_ylabel('Toxin signal, X, (A.U.)', fontsize = 14) ax.tick_params(labelsize = 14) # + fig, ax = plt.subplots(figsize = (12,4)) fig.suptitle('Response of C1-FFL (OR logic) to Pulsating Signal', fontsize = 18); # parameters: k_Y = 1 k_Z = 1 n_X = 1 K_X = 1 n_Y = 1 K_Y = 1 d_Y = 1 d_Z = 1 # Normalize the values def normalize(solution): """ Normalize by maximum value in the odeint solution except when the values are zero, to avoid division by zero. """ normalized_solution = np.zeros_like(solution.T) for i, val_array in enumerate(solution.T): max_value = np.max(val_array) for j, val in enumerate(val_array): if max_value == 0: normalized_solution[i, j] = val else: normalized_solution[i, j] = val/max_value return normalized_solution.T # Plot X ax.plot(toxin_signal/np.max(toxin_signal), color = 'black', lw = 3, label = 'X') # For X = 0 previous_time = 0 array_nonzero = np.where(toxin_signal != 0)[0] next_time = array_nonzero[0] t_solve = np.linspace(previous_time, next_time, next_time - previous_time, endpoint = True) solution = odeint(c1_ffl_or, y0 = np.array([0, 0, 0]), t = t_solve, args = (k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z )) normalized_solution = normalize(solution) ax.plot(t_solve, normalized_solution[:,1], 'r', lw = 3, label = 'Y') ax.plot(t_solve, normalized_solution[:,2], 'b', lw = 3, label = 'Z') # For X = max_toxin_value previous_time = next_time array_zero = np.where(toxin_signal == 0)[0] next_time = array_zero[np.where(array_zero > previous_time)][0] t_solve = np.linspace(previous_time,next_time, 
next_time - previous_time, endpoint = True) solution = odeint(c1_ffl_or, y0 = np.array([max_toxin_value, 0, 0]), t = t_solve, args = (k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z )) normalized_solution = normalize(solution) ax.plot(t_solve, normalized_solution[:,1], 'r', lw = 3) ax.plot(t_solve, normalized_solution[:,2], 'b', lw = 3) y_ss = normalized_solution[:,1][-1] z_ss = normalized_solution[:,2][-1] # For X = 0 again previous_time = next_time array_zero = np.where(toxin_signal != 0)[0] next_time = array_zero[np.where(array_zero > previous_time)][0] t_solve = np.linspace(previous_time, next_time, next_time - previous_time, endpoint = True) solution = odeint(c1_ffl_or, y0 = np.array([0, y_ss, z_ss]), t = t_solve, args = (k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z )) normalized_solution = normalize(solution) ax.plot(t_solve, normalized_solution[:,1], 'r', lw = 3) ax.plot(t_solve, normalized_solution[:,2], 'b', lw = 3) # For X = max_toxin_value, again previous_time = next_time next_time = int(timepoints[-1]) # last point t_solve = np.linspace(previous_time, next_time, next_time - previous_time, endpoint = True) solution = odeint(c1_ffl_or, y0 = np.array([max_toxin_value, 0, 0]), t = t_solve, args = (k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z )) normalized_solution = normalize(solution) ax.plot(t_solve, normalized_solution[:,1], 'r', lw = 3) ax.plot(t_solve, normalized_solution[:,2], 'b', lw = 3) ax.set_xlabel('Time (days)', fontsize = 14) ax.set_ylabel('Signals', fontsize = 14) ax.tick_params(labelsize = 14) ax.legend(fontsize = 14) # - # # Incoherent Feedforward Loops (IFFL) # Consider the motif where X --> Y --| Z and X --> Z indirectly as well. def i1_ffl(x,t,*args): """ ODE model for I1-FFL. """ k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z = args X, Y, Z = x dY_dt = k_Y * (X**n_X)/(K_X**n_X + X**n_X) - d_Y * Y dZ_dt = k_Z * (X**n_X)/(K_X**n_X + X**n_X) *\ (K_Y**n_Y)/(K_Y**n_Y + Y**n_Y) - d_Z * Z # Since X is fixed input, it doesn't change. 
# the rate of change # of X is equal to zero. We are only modeling # rate of change of Y and Z. return np.array([0, dY_dt, dZ_dt]) from scipy import signal timepoints = np.linspace(0, 100, 100, endpoint = True) max_toxin_value = 20 #arbitrary units toxin_signal = max_toxin_value*np.ones_like(timepoints) *\ -1*signal.square(2*np.pi*2*timepoints, duty = 0.55) for i, s in enumerate(toxin_signal): if s < 0: toxin_signal[i] = 0 fig, ax = plt.subplots(figsize = (12,4)) ax.plot(toxin_signal, color = 'black', lw = 3) ax.set_xlabel('Time (days)', fontsize = 14) ax.set_ylabel('Toxin signal, X, (A.U.)', fontsize = 14) ax.tick_params(labelsize = 14) # + fig, ax = plt.subplots(figsize = (12,4)) fig.suptitle('I1-FFL generates a pulse', fontsize = 18); # parameters: k_Y = 20 k_Z = 20 n_X = 4 K_X = 10 n_Y = 4 K_Y = 10 d_Y = 1 d_Z = 1 # Plot X ax.plot(toxin_signal, color = 'black', lw = 3, label = 'X') # For X = 0 previous_time = 0 array_nonzero = np.where(toxin_signal != 0)[0] next_time = array_nonzero[0] t_solve = np.linspace(previous_time, next_time, next_time - previous_time, endpoint = True) solution = odeint(i1_ffl, y0 = np.array([0, 0, 0]), t = t_solve, args = (k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z )) ax.plot(t_solve, solution[:,1], 'r', lw = 3, label = 'Y') ax.plot(t_solve, solution[:,2], 'b', lw = 3, label = 'Z') # For X = max_toxin_value previous_time = next_time array_zero = np.where(toxin_signal == 0)[0] next_time = array_zero[np.where(array_zero > previous_time)][0] t_solve = np.linspace(previous_time,next_time, next_time - previous_time, endpoint = True) solution = odeint(i1_ffl, y0 = np.array([max_toxin_value, 0, 0]), t = t_solve, args = (k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z )) ax.plot(t_solve, solution[:,1], 'r', lw = 3) ax.plot(t_solve, solution[:,2], 'b', lw = 3) y_ss = solution[:,1][-1] z_ss = solution[:,2][-1] # For X = 0 again previous_time = next_time array_zero = np.where(toxin_signal != 0)[0] next_time = array_zero[np.where(array_zero > previous_time)][0] 
t_solve = np.linspace(previous_time, next_time, next_time - previous_time, endpoint = True) solution = odeint(i1_ffl, y0 = np.array([0, y_ss, z_ss]), t = t_solve, args = (k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z )) ax.plot(t_solve, solution[:,1], 'r', lw = 3) ax.plot(t_solve, solution[:,2], 'b', lw = 3) # For X = max_toxin_value, again previous_time = next_time next_time = int(timepoints[-1]) # last point t_solve = np.linspace(previous_time, next_time, next_time - previous_time, endpoint = True) solution = odeint(i1_ffl, y0 = np.array([max_toxin_value, 0, 0]), t = t_solve, args = (k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z )) ax.plot(t_solve, solution[:,1], 'r', lw = 3) ax.plot(t_solve, solution[:,2], 'b', lw = 3) ax.set_xlabel('Time (days)', fontsize = 14) ax.set_ylabel('Signals', fontsize = 14) ax.tick_params(labelsize = 14) ax.legend(fontsize = 14) # - from scipy import signal timepoints = np.linspace(0, 100, 100, endpoint = True) max_toxin_value = 20 #arbitrary units toxin_signal = max_toxin_value*np.ones_like(timepoints) *\ -1*signal.square(2*np.pi*1*timepoints, duty = 0.3) for i, s in enumerate(toxin_signal): if s < 0: toxin_signal[i] = 0 toxin_signal[-1] = 20 fig, ax = plt.subplots(figsize = (12,4)) ax.plot(toxin_signal, color = 'black', lw = 3) ax.set_xlabel('Time (days)', fontsize = 14) ax.set_ylabel('Toxin signal, X, (A.U.)', fontsize = 14) ax.tick_params(labelsize = 14) def unregulated(x, t, *args): k, d = args return k - d*x # + fig, ax = plt.subplots(figsize = (12,4)) fig.suptitle('Response of I1-FFL to Pulsating Signal', fontsize = 18); # parameters (IFFL): k_Y = 1 k_Z = 1 n_X = 4 K_X = 1 n_Y = 4 K_Y = 1 d_Y = 0.5 d_Z = 0.5 # parameters (unregulated): k = 1 d = 0.5 # Normalize the values def normalize(solution): """ Normalize by maximum value in the odeint solution except when the values are zero, to avoid division by zero. 
""" normalized_solution = np.zeros_like(solution.T) for i, val_array in enumerate(solution.T): max_value = np.max(val_array) for j, val in enumerate(val_array): if max_value == 0: normalized_solution[i, j] = val else: normalized_solution[i, j] = val/max_value return normalized_solution.T # Plot X ax.plot(toxin_signal/np.max(toxin_signal), color = 'black', lw = 3, label = 'X') # For X = 0 previous_time = 0 array_nonzero = np.where(toxin_signal != 0)[0] next_time = array_nonzero[0] t_solve = np.linspace(previous_time, next_time, next_time - previous_time, endpoint = True) solution = odeint(i1_ffl, y0 = np.array([0, 0, 0]), t = t_solve, args = (k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z )) normalized_solution = normalize(solution) ax.plot(t_solve, normalized_solution[:,1], 'r', lw = 3, label = 'Y') ax.plot(t_solve, normalized_solution[:,2], 'b', lw = 3, label = 'Z') # For X = max_toxin_value previous_time = next_time array_zero = np.where(toxin_signal == 0)[0] next_time = int(timepoints[-1]) t_solve = np.linspace(previous_time,next_time, next_time - previous_time, endpoint = True) solution = odeint(i1_ffl, y0 = np.array([max_toxin_value, 0, 0]), t = t_solve, args = (k_Y, k_Z, n_X, K_X, n_Y, K_Y, d_Y, d_Z )) normalized_solution = normalize(solution) ax.plot(t_solve, normalized_solution[:,1], 'r', lw = 3) ax.plot(t_solve, normalized_solution[:,2], 'b', lw = 3) unreg_solution = odeint(unregulated, y0 = np.array([0]), t = t_solve, args = (k,d)) unreg_normalized_solution = normalize(unreg_solution) ax.plot(t_solve, unreg_normalized_solution, color = 'orange', lw = 3, label = 'unregulated') y_ss = normalized_solution[:,1][-1] z_ss = normalized_solution[:,2][-1] ax.set_xlabel('Time (days)', fontsize = 14) ax.set_ylabel('Signals', fontsize = 14) ax.tick_params(labelsize = 14) ax.legend(fontsize = 14) # -
reading/week8_feedforward_loops.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (TensorFlow 2.3 Python 3.7 CPU Optimized) # language: python # name: python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/tensorflow-2.3-cpu-py37-ubuntu18.04-v1 # --- # + [markdown] Collapsed="true" # # MLOps workshop with Amazon SageMaker # # ## Module 03: Transform the data and train a model using SageMaker managed training job. # # In this module we will use the same dataset and model, but will update the model to use features of SageMaker to scale dataset transformation and model training beyond Jupyter notebook. # # This notebook includes all key steps such as preprocessing data with SageMaker Processing, and model training and deployment with SageMaker hosted training and inference. Automatic Model Tuning in SageMaker is used to tune the model's hyperparameters. If you are using TensorFlow 2, you can use the Amazon SageMaker prebuilt TensorFlow 2 framework container with training scripts similar to those you would use outside SageMaker. 
# - # !pip install sagemaker -U # + Collapsed="false" import boto3 import os import sagemaker import tensorflow as tf sess = sagemaker.session.Session() bucket = sess.default_bucket() region = boto3.Session().region_name data_dir = os.path.join(os.getcwd(), 'data') os.makedirs(data_dir, exist_ok=True) train_dir = os.path.join(os.getcwd(), 'data/train') os.makedirs(train_dir, exist_ok=True) test_dir = os.path.join(os.getcwd(), 'data/test') os.makedirs(test_dir, exist_ok=True) raw_dir = os.path.join(os.getcwd(), 'data/raw') os.makedirs(raw_dir, exist_ok=True) batch_dir = os.path.join(os.getcwd(), 'data/batch') os.makedirs(batch_dir, exist_ok=True) print(f'SageMaker Version: {sagemaker.__version__}') # + [markdown] Collapsed="false" # # SageMaker Processing for dataset transformation <a class="anchor" id="SageMakerProcessing"> # # Next, we'll import the dataset and transform it with SageMaker Processing, which can be used to process terabytes of data in a SageMaker-managed cluster separate from the instance running your notebook server. In a typical SageMaker workflow, notebooks are only used for prototyping and can be run on relatively inexpensive and less powerful instances, while processing, training and model hosting tasks are run on separate, more powerful SageMaker-managed instances. SageMaker Processing includes off-the-shelf support for Scikit-learn, as well as a Bring Your Own Container option, so it can be used with many different data transformation technologies and tasks. An alternative to SageMaker Processing is [SageMaker Data Wrangler](https://aws.amazon.com/sagemaker/data-wrangler/), a visual data preparation tool integrated with the SageMaker Studio UI. # # To work with SageMaker Processing, first we'll load the Boston Housing dataset, save the raw feature data and upload it to Amazon S3 so it can be accessed by SageMaker Processing. We'll also save the labels for training and testing. 
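The raw arrays are persisted with NumPy's `.npy` format before being uploaded; a quick sketch of the save/load round trip used here, writing to a temporary directory rather than `./data/raw`:

```python
import os
import tempfile
import numpy as np

raw_dir = tempfile.mkdtemp()          # stand-in for ./data/raw
x_train = np.random.rand(404, 13)     # shaped like the Boston Housing train features

np.save(os.path.join(raw_dir, 'x_train.npy'), x_train)

# The .npy file round-trips dtype and shape exactly
restored = np.load(os.path.join(raw_dir, 'x_train.npy'))
```

Because shape and dtype travel with the file, the processing script can load these arrays with no extra metadata.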
# + Collapsed="false" import numpy as np from tensorflow.python.keras.datasets import boston_housing from sklearn.preprocessing import StandardScaler (x_train, y_train), (x_test, y_test) = boston_housing.load_data() np.save(os.path.join(raw_dir, 'x_train.npy'), x_train) np.save(os.path.join(raw_dir, 'x_test.npy'), x_test) np.save(os.path.join(raw_dir, 'y_train.npy'), y_train) np.save(os.path.join(raw_dir, 'y_test.npy'), y_test) s3_prefix = 'tf-2-workflow' rawdata_s3_prefix = '{}/data/raw'.format(s3_prefix) raw_s3 = sess.upload_data(path='./data/raw/', key_prefix=rawdata_s3_prefix) print(raw_s3) # + [markdown] Collapsed="false" # Next, simply supply an ordinary Python data preprocessing script as shown below. For this example, we're using a SageMaker prebuilt Scikit-learn framework container, which includes many common functions for processing data. There are few limitations on what kinds of code and operations you can run, and only a minimal API contract: input and output data must be placed in specified directories. If this is done, SageMaker Processing automatically loads the input data from S3 and uploads transformed data back to S3 when the job is complete. 
# + Collapsed="false" # %%writefile preprocessing.py import glob import numpy as np import os from sklearn.preprocessing import StandardScaler if __name__=='__main__': input_files = glob.glob('{}/*.npy'.format('/opt/ml/processing/input')) print('\nINPUT FILE LIST: \n{}\n'.format(input_files)) scaler = StandardScaler() x_train = np.load(os.path.join('/opt/ml/processing/input', 'x_train.npy')) scaler.fit(x_train) for file in input_files: raw = np.load(file) # only transform feature columns if 'y_' not in file: transformed = scaler.transform(raw) if 'train' in file: if 'y_' in file: output_path = os.path.join('/opt/ml/processing/train', 'y_train.npy') np.save(output_path, raw) print('SAVED LABEL TRAINING DATA FILE\n') else: output_path = os.path.join('/opt/ml/processing/train', 'x_train.npy') np.save(output_path, transformed) print('SAVED TRANSFORMED TRAINING DATA FILE\n') else: if 'y_' in file: output_path = os.path.join('/opt/ml/processing/test', 'y_test.npy') np.save(output_path, raw) print('SAVED LABEL TEST DATA FILE\n') else: output_path = os.path.join('/opt/ml/processing/test', 'x_test.npy') np.save(output_path, transformed) print('SAVED TRANSFORMED TEST DATA FILE\n') # + [markdown] Collapsed="false" # Before starting the SageMaker Processing job, we instantiate a `SKLearnProcessor` object. This object allows you to specify the instance type to use in the job, as well as how many instances. Spinning a cluster is just a matter of setting `instance_count` to 2 or more, but our transformation has a `StandardScaler` which must be run over all training data and applied equally to train and test data. That can't be parallelized with `scikit-learn`, but since the dataset is small, that is not a problem. 
# + Collapsed="false" from sagemaker import get_execution_role from sagemaker.sklearn.processing import SKLearnProcessor try: execution_role = get_execution_role() except ValueError: execution_role = "AmazonSageMaker-ExecutionRole-20191003T111555" sklearn_processor1 = SKLearnProcessor(framework_version='0.23-1', role=execution_role, instance_type='ml.m5.xlarge', instance_count=1) # + [markdown] Collapsed="false" # We're now ready to run the Processing job. # # To enable distributing the data files equally among the instances, you could have specified the `ShardedByS3Key` distribution type in the `ProcessingInput` object. This would have ensured that if you have `n` instances, each instance will receive `1/n` files from the specified S3 bucket. # This is not needed in this case since the dataset is fairly small. # # It may take around 3 minutes for the following code cell to run, mainly to set up the cluster. At the end of the job, the cluster automatically will be torn down by SageMaker. # + Collapsed="false" from sagemaker.processing import ProcessingInput, ProcessingOutput from time import gmtime, strftime processing_job_name = "tf-2-workflow-{}".format(strftime("%d-%H-%M-%S", gmtime())) output_destination = 's3://{}/{}/data'.format(bucket, s3_prefix) sklearn_processor1.run( code='preprocessing.py', job_name=processing_job_name, inputs=[ProcessingInput( source=raw_s3, destination='/opt/ml/processing/input' )], outputs=[ ProcessingOutput(output_name='train', destination='{}/train'.format(output_destination), source='/opt/ml/processing/train'), ProcessingOutput(output_name='test', destination='{}/test'.format(output_destination), source='/opt/ml/processing/test') ] ) preprocessing_job_description = sklearn_processor1.jobs[-1].describe() # + [markdown] Collapsed="false" # In the log output of the SageMaker Processing job above, you should be able to see logs in two different colors for the two different instances, and that each instance received different files. 
Without the `ShardedByS3Key` distribution type, each instance would have received a copy of **all** files. By spreading the data equally among `n` instances, you should receive a speedup by approximately a factor of `n` for most stateless data transformations. After saving the job results locally, we'll move on to training and inference code. # + Collapsed="false" x_train_in_s3 = '{}/train/x_train.npy'.format(output_destination) y_train_in_s3 = '{}/train/y_train.npy'.format(output_destination) x_test_in_s3 = '{}/test/x_test.npy'.format(output_destination) y_test_in_s3 = '{}/test/y_test.npy'.format(output_destination) # !aws s3 cp {x_train_in_s3} ./data/train/x_train.npy # !aws s3 cp {y_train_in_s3} ./data/train/y_train.npy # !aws s3 cp {x_test_in_s3} ./data/test/x_test.npy # !aws s3 cp {y_test_in_s3} ./data/test/y_test.npy # + [markdown] Collapsed="false" # # SageMaker hosted training <a class="anchor" id="SageMakerHostedTraining"> # # Now that we've prepared a dataset, we can move on to SageMaker's model training functionality. With SageMaker hosted training the actual training itself occurs not on the notebook instance, but on a separate cluster of machines managed by SageMaker. Before starting hosted training, the data must be in S3, or an EFS or FSx for Lustre file system. We'll upload to S3 now, and confirm the upload was successful. # + Collapsed="false" s3_prefix = 'tf-2-workflow' traindata_s3_prefix = '{}/data/train'.format(s3_prefix) testdata_s3_prefix = '{}/data/test'.format(s3_prefix) # + Collapsed="false" train_s3 = sess.upload_data(path='./data/train/', key_prefix=traindata_s3_prefix) test_s3 = sess.upload_data(path='./data/test/', key_prefix=testdata_s3_prefix) inputs = {'train':train_s3, 'test': test_s3} print(inputs) # + [markdown] Collapsed="false" # We're now ready to set up an Estimator object for hosted training. We simply call `fit` to start the actual hosted training. 
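With the Boston Housing training split of 404 examples and the batch size of 128 used in this module's hyperparameters, each epoch is only a handful of gradient steps (assuming the training script iterates the full split once per epoch). A quick check of the arithmetic:

```python
import math

n_train = 404      # size of the Boston Housing training split
batch_size = 128   # matches the batch_size hyperparameter used in this module
epochs = 70

steps_per_epoch = math.ceil(n_train / batch_size)
total_steps = steps_per_epoch * epochs
```

With so few steps per epoch, per-step overhead is negligible and the ~3-minute job duration is dominated by cluster startup rather than training itself.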
# + Collapsed="false" from sagemaker.tensorflow import TensorFlow train_instance_type = 'ml.c5.xlarge' hyperparameters = {'epochs': 70, 'batch_size': 128, 'learning_rate': 0.01} hosted_estimator = TensorFlow( source_dir='code', entry_point='train.py', instance_type=train_instance_type, instance_count=1, hyperparameters=hyperparameters, role=sagemaker.get_execution_role(), base_job_name='tf-2-workflow', framework_version='2.3.1', py_version='py37') # + [markdown] Collapsed="false" # After starting the hosted training job with the `fit` method call below, you should observe the valication loss converge with each epoch. Can we do better? We'll look into a way to do so in the **Automatic Model Tuning** section below. In the meantime, the hosted training job should take about 3 minutes to complete. # + Collapsed="false" hosted_estimator.fit(inputs) # + [markdown] Collapsed="false" # The training job produces a model saved in S3 that we can retrieve. This is an example of the modularity of SageMaker: having trained the model in SageMaker, you can now take the model out of SageMaker and run it anywhere else. Alternatively, you can deploy the model into a production-ready environment using SageMaker's hosted endpoints functionality, as shown in the **SageMaker hosted endpoint** section below. # # Retrieving the model from S3 is very easy: the hosted training estimator you created above stores a reference to the model's location in S3. You simply copy the model from S3 using the estimator's `model_data` property and unzip it to inspect the contents. 
# + Collapsed="false" # !aws s3 cp {hosted_estimator.model_data} ./model/model.tar.gz # + [markdown] Collapsed="false" # The unzipped archive should include the assets required by TensorFlow Serving to load the model and serve it, including a .pb file: # + Collapsed="false" # !tar -xvzf ./model/model.tar.gz -C ./model # + [markdown] Collapsed="false" # # Automatic Model Tuning <a class="anchor" id="AutomaticModelTuning"> # # So far we have simply run one Hosted Training job without any real attempt to tune hyperparameters to produce a better model. Selecting the right hyperparameter values to train your model can be difficult, and typically is very time consuming if done manually. The right combination of hyperparameters is dependent on your data and algorithm; some algorithms have many different hyperparameters that can be tweaked; some are very sensitive to the hyperparameter values selected; and most have a non-linear relationship between model fit and hyperparameter values. SageMaker Automatic Model Tuning helps automate the hyperparameter tuning process: it runs multiple training jobs with different hyperparameter combinations to find the set with the best model performance. # # We begin by specifying the hyperparameters we wish to tune, and the range of values over which to tune each one. We also must specify an objective metric to be optimized: in this use case, we'd like to minimize the validation loss. 
# + Collapsed="false" from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner hyperparameter_ranges = { 'learning_rate': ContinuousParameter(0.001, 0.2, scaling_type="Logarithmic"), 'epochs': IntegerParameter(10, 50), 'batch_size': IntegerParameter(64, 256), } metric_definitions = [{'Name': 'loss', 'Regex': ' loss: ([0-9\\.]+)'}, {'Name': 'val_loss', 'Regex': ' val_loss: ([0-9\\.]+)'}] objective_metric_name = 'val_loss' objective_type = 'Minimize' # + [markdown] Collapsed="false" # Next we specify a HyperparameterTuner object that takes the above definitions as parameters. Each tuning job must be given a budget: a maximum number of training jobs. A tuning job will complete after that many training jobs have been executed. # # We also can specify how much parallelism to employ, in this case three jobs, meaning that the tuning job will complete after two series of three jobs in parallel have completed. For the default Bayesian Optimization tuning strategy used here, the tuning search is informed by the results of previous groups of training jobs, so we don't run all of the jobs in parallel, but rather divide the jobs into groups of parallel jobs. There is a trade-off: using more parallel jobs will finish tuning sooner, but likely will sacrifice tuning search accuracy. # # Now we can launch a hyperparameter tuning job by calling the `fit` method of the HyperparameterTuner object. The tuning job may take around 10 minutes to finish. While you're waiting, the status of the tuning job, including metadata and results for individual training jobs within the tuning job, can be checked in the SageMaker console in the **Hyperparameter tuning jobs** panel.
# + Collapsed="false" tuner = HyperparameterTuner(hosted_estimator, objective_metric_name, hyperparameter_ranges, metric_definitions, max_jobs=6, max_parallel_jobs=3, objective_type=objective_type) tuning_job_name = "tf-2-workflow-{}".format(strftime("%d-%H-%M-%S", gmtime())) tuner.fit(inputs, job_name=tuning_job_name) tuner.wait() # + [markdown] Collapsed="false" # After the tuning job is finished, we can use the `HyperparameterTuningJobAnalytics` object from the SageMaker Python SDK to list the top 5 tuning jobs with the best performance. Although the results vary from tuning job to tuning job, the best validation loss from the tuning job (under the FinalObjectiveValue column) likely will be substantially lower than the validation loss from the hosted training job above, where we did not perform any tuning other than manually increasing the number of epochs once. # + Collapsed="false" tuner_metrics = sagemaker.HyperparameterTuningJobAnalytics(tuning_job_name) tuner_metrics.dataframe().sort_values(['FinalObjectiveValue'], ascending=True).head(5) # + [markdown] Collapsed="false" # The total training time and training jobs status can be checked with the following lines of code. Because automatic early stopping is by default off, all the training jobs should be completed normally. For an example of a more in-depth analysis of a tuning job, see the SageMaker official sample [HPO_Analyze_TuningJob_Results.ipynb](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/hyperparameter_tuning/analyze_results/HPO_Analyze_TuningJob_Results.ipynb) notebook. 
# + Collapsed="false" total_time = tuner_metrics.dataframe()['TrainingElapsedTimeSeconds'].sum() / 3600 print("The total training time is {:.2f} hours".format(total_time)) tuner_metrics.dataframe()['TrainingJobStatus'].value_counts() # + [markdown] Collapsed="false" # # SageMaker hosted endpoint <a class="anchor" id="SageMakerHostedEndpoint"> # # Assuming the best model from the tuning job is better than the model produced by the individual hosted training job above, we could now easily deploy that model to production. A convenient option is to use a SageMaker hosted endpoint, which serves real-time predictions from the trained model (for asynchronous, offline predictions on large datasets, you can use SageMaker Processing or SageMaker Batch Transform). The endpoint will retrieve the TensorFlow SavedModel created during training and deploy it within a SageMaker TensorFlow Serving container. This all can be accomplished with one line of code. # # More specifically, by calling the `deploy` method of the HyperparameterTuner object we instantiated above, we can directly deploy the best model from the tuning job to a SageMaker hosted endpoint. # + Collapsed="false" tuning_predictor = tuner.deploy(initial_instance_count=1, instance_type='ml.m5.xlarge') # + [markdown] Collapsed="false" # We can compare the predictions generated by this endpoint with the actual target values: # + Collapsed="false" results = tuning_predictor.predict(x_test[:10])['predictions'] flat_list = [float('%.1f'%(item)) for sublist in results for item in sublist] print('predictions: \t{}'.format(np.array(flat_list))) print('target values: \t{}'.format(y_test[:10].round(decimals=1))) # + [markdown] Collapsed="false" # To avoid billing charges from stray resources, you can delete the prediction endpoint to release its associated instance(s).
# + Collapsed="false" sess.delete_endpoint(tuning_predictor.endpoint_name) # + [markdown] Collapsed="false" # # Batch Scoring Step <a class="anchor" id="BatchScoringStep"> # # The final step in this pipeline is offline, batch scoring (inference/prediction). The inputs to this step will be the model we trained earlier, and the test data. A simple, ordinary Python script is all we need to do the actual batch inference. # + Collapsed="false" # %%writefile batch-score.py import os import subprocess import sys import numpy as np import pathlib import tarfile def install(package): subprocess.check_call([sys.executable, "-m", "pip", "install", package]) if __name__ == "__main__": install('tensorflow==2.3.1') model_path = f"/opt/ml/processing/model/model.tar.gz" with tarfile.open(model_path, 'r:gz') as tar: tar.extractall('./model') import tensorflow as tf model = tf.keras.models.load_model('./model/1') test_path = "/opt/ml/processing/test/" x_test = np.load(os.path.join(test_path, 'x_test.npy')) y_test = np.load(os.path.join(test_path, 'y_test.npy')) scores = model.evaluate(x_test, y_test, verbose=2) print("\nTest MSE :", scores) output_dir = "/opt/ml/processing/batch" pathlib.Path(output_dir).mkdir(parents=True, exist_ok=True) evaluation_path = f"{output_dir}/score-report.txt" with open(evaluation_path, 'w') as writer: writer.write(f"Test MSE : {scores}") # + [markdown] Collapsed="false" # We'll use SageMaker Processing here to perform batch scoring. 
# + Collapsed="false" framework_version = "0.23-1" batch_instance_type = "ml.c5.xlarge" batch_instance_count = 1 batch_scorer = SKLearnProcessor( framework_version=framework_version, instance_type=batch_instance_type, instance_count=batch_instance_count, base_job_name="tf-2-workflow-batch", role=execution_role ) # + Collapsed="false" batch_scorer.run( inputs=[ ProcessingInput( source=tuner.best_estimator().model_data, destination="/opt/ml/processing/model" ), ProcessingInput( source=sklearn_processor1.latest_job.outputs[1].destination, # [0] is train, [1] is test destination="/opt/ml/processing/test" ) ], outputs=[ProcessingOutput(output_name="batch", source="/opt/ml/processing/batch")], code="./batch-score.py" ) # + Collapsed="false" report_path = f"{batch_scorer.latest_job.outputs[0].destination}/score-report.txt" # !aws s3 cp {report_path} ./score-report.txt --quiet && cat score-report.txt
labs/03_manual_sagemaker_process_train/03_manual_sagemaker_process_train.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # September 10 - Training results on the new dataset # + # Imports import sys import os import time import math # Add the path to the parent directory to augment search for module par_dir = os.path.abspath(os.path.join(os.getcwd(), os.pardir)) if par_dir not in sys.path: sys.path.append(par_dir) # Plotting import import matplotlib.pyplot as plt import numpy as np # Import the utils for plotting the metrics from plot_utils import plot_utils from plot_utils import notebook_utils_2 # - # ## $\beta$-VAE and VAE (latent dimensions=128) training # + run_ids = ["20190909_190620", "20190909_183138"] model_ids = ["ENet(128)", r"$\beta$-ENet(128)"] dump_dirs = ["/home/akajal/WatChMaL/VAE/dumps/" + run_id + "/" for run_id in run_ids] training_logs = [dump_dir + "log_train.csv" for dump_dir in dump_dirs] val_logs = [dump_dir + "log_val.csv" for dump_dir in dump_dirs] # Plot training log plot_utils.plot_vae_training(training_logs, model_ids, {model_ids[0]:["orange", "cyan"], model_ids[1]:["red", "blue"]}, downsample_interval=64, legend_loc=(0.65,0.88), show_plot=True) # Plot validation log plot_utils.plot_vae_training(val_logs, model_ids, {model_ids[0]:["orange", "cyan"], model_ids[1]:["red", "blue"]}, downsample_interval=64, legend_loc=(0.65,0.88), show_plot=True) # - # ## Planar flow training (using $\beta$ annealing) # + run_ids = ["20190909_223147", "20190909_223333", "20190909_215637"] model_ids = ["PNF(4)", "PNF(8)", "PNF(16)"] dump_dirs = ["/home/akajal/WatChMaL/VAE/dumps/" + run_id + "/" for run_id in run_ids] training_logs = [dump_dir + "log_train.csv" for dump_dir in dump_dirs] val_logs = [dump_dir + "log_val.csv" for dump_dir in dump_dirs] # Plot training log plot_utils.plot_vae_training(training_logs, model_ids, {model_ids[0]:["orange", "cyan"], 
model_ids[1]:["red", "blue"], model_ids[2]:["pink", "brown"]}, downsample_interval=64, legend_loc=(0.88,0.88), show_plot=True) # Plot validation log plot_utils.plot_vae_training(val_logs, model_ids, {model_ids[0]:["orange", "cyan"], model_ids[1]:["red", "blue"], model_ids[2]:["pink", "brown"]}, downsample_interval=64, legend_loc=(0.87,0.88), show_plot=True) # + run_ids = ["20190910_184445", "20190910_184617", "20190909_225041"] model_ids = ["PNF(64)", "PNF(128)", "PNF(256)"] dump_dirs = ["/home/akajal/WatChMaL/VAE/dumps/" + run_id + "/" for run_id in run_ids] training_logs = [dump_dir + "log_train.csv" for dump_dir in dump_dirs] val_logs = [dump_dir + "log_val.csv" for dump_dir in dump_dirs] # Plot training log plot_utils.plot_vae_training(training_logs, model_ids, {model_ids[0]:["orange", "cyan"], model_ids[1]:["red", "blue"], model_ids[2]:["pink", "brown"]}, downsample_interval=64, legend_loc=(0.88,0.88), show_plot=True) # Plot validation log plot_utils.plot_vae_training(val_logs, model_ids, {model_ids[0]:["orange", "cyan"], model_ids[1]:["red", "blue"], model_ids[2]:["pink", "brown"]}, downsample_interval=64, legend_loc=(0.87,0.88), show_plot=True) # - # ## Pure classifier training # + # Using the absolute path run_id = "20190911_021551" dump_dir = "/home/akajal/WatChMaL/VAE/dumps/" + run_id + "/" model_name = "ENet-CL(128)" training_log, val_log = dump_dir + "log_train.csv", dump_dir + "log_val.csv" # Plot training log plot_utils.plot_training([training_log], [model_name], {model_name:["red", "blue"]}, downsample_interval=32, show_plot=True) # Plot validation log plot_utils.plot_training([val_log], [model_name], {model_name:["red", "blue"]}, downsample_interval=32, show_plot=True) # - # ## Validate the 128 latent dimensional VAE ( both with and without beta annealing ) # + latent_dims = [128, 128] dumps = ["20190911_025430", "20190911_023041"] # First check that all the indices from the test validation set exist in all the dumps ldump_idx_arr = None # Iterate 
over the dumps and check the indices for latent_dim, dump in zip(latent_dims, dumps): print("----------------------------------------------------") print("Reading metrics from VAE with {0} latent dimensions :".format(latent_dim)) print("----------------------------------------------------") dump_npz_path = "/home/akajal/WatChMaL/VAE/dumps/{0}/val_valid_iteration_metrics.npz".format(dump) dump_npz_arr = np.load(dump_npz_path) dump_indices = np.sort(dump_npz_arr["indices"]) if ldump_idx_arr is not None: if not np.array_equal(dump_indices, ldump_idx_arr): print("Index array for latent dims {0} not equal to all the other.".format(latent_dim)) else: print("Index array equal to the first index array") else: ldump_idx_arr = dump_indices # + # Collect the metrics for plotting as well recon_loss_values, kl_loss_values = [], [] recon_std_values, kl_std_values = [], [] recon_stderr_values, kl_stderr_values = [], [] # Iterate over the dumps and check the indices for latent_dim, dump in zip(latent_dims, dumps): print("\n----------------------------------------------------") print("Printing metrics for VAE with {0} latent dimensions :".format(latent_dim)) print("----------------------------------------------------") dump_npz_path = "/home/akajal/WatChMaL/VAE/dumps/{0}/val_valid_iteration_metrics.npz".format(dump) npz_arr = np.load(dump_npz_path) dump_recon_loss, dump_kl_loss = npz_arr["recon_loss"], npz_arr["kl_loss"] mean_recon_loss, std_recon_loss = np.mean(dump_recon_loss), np.std(dump_recon_loss) stderr_recon_loss = std_recon_loss/math.sqrt(dump_recon_loss.shape[0]) recon_loss_values.append(mean_recon_loss) recon_std_values.append(std_recon_loss) recon_stderr_values.append(stderr_recon_loss) mean_kl_loss, std_kl_loss = np.mean(dump_kl_loss), np.std(dump_kl_loss) stderr_kl_loss = std_kl_loss/math.sqrt(dump_kl_loss.shape[0]) kl_loss_values.append(mean_kl_loss) kl_std_values.append(std_kl_loss) kl_stderr_values.append(stderr_kl_loss) print("Recon Loss metrics") print("Mean 
Recon loss : {0}".format(mean_recon_loss)) print("Std Recon loss : {0}".format(std_recon_loss)) print("Stderr Recon loss : {0}\n".format(stderr_recon_loss)) print("KL Loss metrics") print("Mean KL loss : {0}".format(mean_kl_loss)) print("Std KL loss : {0}".format(std_kl_loss)) print("Stderr KL loss : {0}".format(stderr_kl_loss)) # -
notebooks/notebooks_archive/September 10 - Training results on the new dataset.ipynb
# -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # language: R # name: ir # --- # # "COVID-19: Your State is Not That Different" # > "All 50 US States and Washington DC saw exponential growth in confirmed cases, and have not shown clear signs of slowing down." # # - toc: false # - branch: master # - badges: true # - comments: true # - categories: [covid] # <!-- - image: images/test.png --> # - hide: false # + #hide library(tidyverse) library(lubridate) library(janitor) theme_set(theme_light(base_size = 20)) # - # This post is inspired by the following work: # # * The smoothed [log-log plot](https://aatishb.com/covidtrends/) of cumulative cases against new cases, by [Aatish Bhatia](https://aatishb.com/) and [Minute Physics](https://www.youtube.com/watch?v=54XLXg4fYsc). The log-log scale and the stock vs (smoothed 7-day) flow help us to see the exponential path each country was on, and make it easier to spot when the growth has passed the exponential growth phase and started to slow down. # * <NAME>'s [replication](https://kieranhealy.org/blog/archives/2020/03/27/a-covid-small-multiple/) of <NAME>-Murdoch's [small-multiple plot](https://www.ft.com/coronavirus-latest) of confirmed cases by country. The technique is useful to contrast a country's trajectory with other countries trajectories, putting things into perspective. # # I combine these two approaches to show that all 50 States and Washington DC in the United States saw exponential growth in confirmed cases, and have not shown clear signs of slowing down, with some caveats below. 
# #hide fixed_date = lubridate::date("2020-03-28") us_states_raw <- read_csv("https://covidtracking.com/api/states/daily.csv") %>% janitor::clean_names() us_states <- us_states_raw %>% select(date, state, positive, negative, pending, death, test = total_test_results) %>% mutate(date = lubridate::ymd(date)) %>% # filter(date <= lubridate::today() - 1) %>% filter(date <= fixed_date) %>% pivot_longer(positive:test, names_to = "measure", values_to = "count") # + #hide state_name <- read_csv(' "sname","state" "Alabama","AL" "Alaska","AK" "Arizona","AZ" "Arkansas","AR" "California","CA" "Colorado","CO" "Connecticut","CT" "Delaware","DE" "D.C.","DC" "Florida","FL" "Georgia","GA" "Hawaii","HI" "Idaho","ID" "Illinois","IL" "Indiana","IN" "Iowa","IA" "Kansas","KS" "Kentucky","KY" "Louisiana","LA" "Maine","ME" "Montana","MT" "Nebraska","NE" "Nevada","NV" "New Hampshire","NH" "New Jersey","NJ" "New Mexico","NM" "New York","NY" "North Carolina","NC" "North Dakota","ND" "Ohio","OH" "Oklahoma","OK" "Oregon","OR" "Maryland","MD" "Massachusetts","MA" "Michigan","MI" "Minnesota","MN" "Mississippi","MS" "Missouri","MO" "Pennsylvania","PA" "Rhode Island","RI" "South Carolina","SC" "South Dakota","SD" "Tennessee","TN" "Texas","TX" "Utah","UT" "Vermont","VT" "Virginia","VA" "Washington","WA" "West Virginia","WV" "Wisconsin","WI" "Wyoming","WY" ') states_gt100 <- us_states %>% filter(measure == "positive") %>% group_by(state) %>% filter(date == max(date)) %>% filter(count >= 100) %>% pull(state) states_cumulative <- us_states %>% inner_join(state_name, by = "state") %>% filter(measure == "positive") %>% select(state, sname, date, count) %>% group_by(state) %>% mutate( increase_7d = count - lag(count, 7, order_by = date), increase_4d = count - lag(count, 4, order_by = date), increase_1d = count - lag(count, 1, order_by = date), ) %>% ungroup() all_states_background <- states_cumulative %>% select(st = state, date, count, starts_with("increase_")) endpoints <- states_cumulative %>% 
group_by(state) %>% filter(date == max(date)) %>% ungroup() state_name_label <- states_cumulative %>% group_by(state) %>% filter(date == max(date)) %>% ungroup() %>% mutate( count = 1, increase_7d = max(increase_7d, na.rm = T) - 1e4, increase_4d = max(increase_4d, na.rm = T) - 1e4, increase_1d = max(increase_1d, na.rm = T) - 1e4, ) %>% select(state, sname, count, starts_with("increase_")) # + #hide_input options(repr.plot.width = 12, repr.plot.height = 15, repr.plot.res = 100) plt = states_cumulative %>% ggplot(mapping = aes(x = count, y = increase_7d)) + # The line traces for every country, in every panel geom_line(data = all_states_background, aes(group = st), size = 0.2, color = "gray80") + # The line trace in red, for the country in any given panel geom_line(color = "firebrick", lineend = "round") + # The point at the end. Bonus trick: some points can have fills! geom_point(data = endpoints, size = 2.2, shape = 21, color = "firebrick", fill = "firebrick2" ) + # The country label inside the panel, in lieu of the strip label geom_text(data = state_name_label, mapping = aes(label = sname), vjust = "inward", hjust = "inward", fontface = "bold", color = "firebrick", size = 5) + # Log transform and friendly labels scale_x_log10(labels = scales::label_number_si()) + scale_y_log10(labels = scales::label_number_si()) + # Facet by country, order from high to low facet_wrap(~ reorder(state, -count), ncol = 7) + labs(x = "Number of Confirmed Cases (log10 scale)", y = "Number of New Confirmed Cases in the Last 7 Days (log10 scale)", title = "All States' confirmed cases are growing exponentially, and no sign of slowing down", # title = "Exponential Growth of COVID-19 Confirmed Cases by US State", subtitle = paste("Total confirmed cases and rolling 7-day total of new confirmed cases.\nStates with straighter trajectories are closer to exponential growth.", "Data as of", format(max(us_states$date), "%A, %B %e, %Y")), caption = "<NAME> @paulymli / Data:
https://covidtracking.com") + theme(plot.title = element_text(size = rel(1), face = "bold"), plot.subtitle = element_text(size = rel(0.7)), plot.caption = element_text(size = rel(0.7)), # turn off the strip label and tighten the panel spacing strip.text = element_blank(), panel.spacing.x = unit(-0.05, "lines"), panel.spacing.y = unit(0.3, "lines"), axis.text.x = element_text(size = rel(0.8)), axis.text.y = element_text(size = rel(0.8)), axis.title.x = element_text(size = rel(1)), axis.title.y = element_text(size = rel(1)), legend.text = element_text(size = rel(1))) suppressWarnings(print(plt)) # - # A few notes to help with the interpretation of the graph: # # * The x axis is the cumulative number of confirmed cases in log10 scale, and the y axis is the number of new confirmed cases in the last 7 days in log10 scale. # * Each day corresponds to a point on each of the graphs. Imagine a dot climbing up to the top right corner of each graph, with the large red dot representing the latest day, leaving a trace behind it. # * The gray background lines of each small-multiple panel are the growth paths of all other states. # * The 7-day period in the y axis is to smooth the daily fluctuations in new confirmed cases. # * If the growth of confirmed cases is exponential, the slope of log(confirmed cases) vs log(total new cases in the last 7 days) is 45 degrees. Therefore the fact that every state is on the 45-degree line suggests every state is on an exponential growth path. # * But the 45-degree line does not suggest all states share the same rate of growth, just that all states are on some exponential path. # * This graph only considers confirmed/detected cases, not the actual number of infections, which can only be larger than the detected count. Limited testing is likely an important factor that confounds the interpretation: the actual spread of the virus could be faster or slower than the increase in confirmed cases.
Combining confirmed-case and hospitalization trends may help paint a clearer picture. Although the time series is shorter, hospitalization trends are also exponential in states that report the data. # * The chart is not intended to be predictive: it's tracking the smoothed trends to see where we are, not where we are going to be. Specifically, it's tracking whether or not the confirmed cases are still on an exponential growth trajectory.
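# # The 45-degree claim in the notes above follows from a short calculation (a sketch added here for clarity, using the same quantities plotted in the chart):

```latex
C(t) = C_0 e^{kt}
\;\Longrightarrow\;
\Delta_7(t) = C(t) - C(t-7) = C(t)\,\bigl(1 - e^{-7k}\bigr)
\;\Longrightarrow\;
\log \Delta_7(t) = \log C(t) + \log\bigl(1 - e^{-7k}\bigr)
```

# On log-log axes, $\Delta_7$ against $C$ is therefore a line of slope 1 (45 degrees); the growth rate $k$ only shifts the intercept. This is also why the 45-degree line cannot distinguish fast from slow exponential growth, only indicate that growth is on some exponential path.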
_notebooks/2020-03-29-us-covid-trends-by-state.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # <a href="https://colab.research.google.com/github/jdthayer/ATMS-597-SP-2020/blob/master/ATMS_597_Project_1_Thayer_FINALFINALFINAL.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # + [markdown] id="KnNoTce5eqR-" colab_type="text" # ATMS-597-SP-2020 Project 1 # Task: Create an object-oriented python module that converts temperatures interchangeably between degrees Celsius, Fahrenheit, and Kelvin. # # Group Members:<NAME>, <NAME>, <NAME> # # # + [markdown] id="2kS-l3oF0gok" colab_type="text" # Installing needed libraries and importing packages # + id="Gxr1-0QTBix5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 277} outputId="363f05f8-b108-41d3-c6c8-4007a3ef8ce1" # !apt-get -qq install libproj-dev proj-data proj-bin libgeos-dev import numpy as np # + id="kt-yPTgxeDe6" colab_type="code" colab={} # ATMS-597-Project # class temp_array(): """ A class to store temperature values and convert them interchangeably between Celsius, Fahrenheit, and Kelvin. """ # Defining the function for temperature values and units def __init__(self, temp_values, temp_units): # This is a special function to initiate a method self.temp_values = temp_values self.temp_units = temp_units # Defining the function for temperature conversion between Kelvin, Celsius, and Fahrenheit def tempconvert(self, conversion_unit): """ This function converts temperature values from the current units to desired units. Supported units are degrees C, degrees F, and K. 
Input: conversion_unit - desired unit for output temperature values (string; 'C', 'K', or 'F') Output: A new instance of the temp_array class which includes the new temperature values in the desired units """ # Not a list unless...it is islist = False # Checking whether the temperature input type is a list or numpy array if type(self.temp_values) is list: self.temp_values = np.asarray(self.temp_values) islist = True elif isinstance(self.temp_values, np.ndarray) == True: self.temp_values = self.temp_values # Setting up logic comparisons to convert temperatures stated in Celsius to Kelvin or Fahrenheit if self.temp_units == "C" and conversion_unit == "C": print ("Values are already in degrees C") new = temp_array(self.temp_values, self.temp_units) elif self.temp_units == "C" and conversion_unit == "K": new = temp_array(self.temp_values + 273.15,"K") elif self.temp_units == "C" and conversion_unit == "F": new = temp_array(self.temp_values * 9.0 / 5.0 + 32.0,"F") elif self.temp_units == "C": raise ValueError("Error: incompatible temperature unit. Conversion units must be Fahrenheit or Kelvin.") # Setting up logic comparisons to convert temperatures stated in Kelvin to Celsius or Fahrenheit if self.temp_units == "K" and conversion_unit == "K": print ("Values are already in degrees K") new = temp_array(self.temp_values, self.temp_units) elif self.temp_units == "K" and conversion_unit == "C": new = temp_array(self.temp_values - 273.15,"C") elif self.temp_units == "K" and conversion_unit == "F": new = temp_array((self.temp_values - 273.15) * 9.0 / 5.0 + 32.0,"F") elif self.temp_units == "K": raise ValueError("Error: incompatible temperature unit.
Conversion units must be Celsius or Fahrenheit.") # Setting up logic comparisons to convert temperatures stated in Fahrenheit to Celsius or Kelvin if self.temp_units == "F" and conversion_unit == "F": print ("Values are already in degrees F") new = temp_array(self.temp_values, self.temp_units) elif self.temp_units == "F" and conversion_unit == "K": new = temp_array((self.temp_values - 32.0) * (5.0 / 9.0) + 273.15,"K") elif self.temp_units == "F" and conversion_unit == "C": new = temp_array((self.temp_values - 32.0) * (5.0 / 9.0),"C") elif self.temp_units == "F": raise ValueError("Error: incompatible temperature unit. Conversion units must be Celsius or Kelvin.") # Return output as list if a list was provided if islist: new.temp_values = list(new.temp_values) return new # Otherwise return as-is else: return new # Organized printing of output for easy visualization def print_nice_output(self, orig_values, orig_units, multiple_input = False): """ This function prints the given original temperature values and the converted values. Input: orig_values - original temperature values (list, array, or single number - float or int) orig_units - original units (string; 'C', 'K', or 'F') multiple_input - set to true if temperature values are passed as a list or array, false if single number Output: Original and converted temperature values will be printed to the console.
""" # Do this if temperature values are type list or array if multiple_input: # Print original values print('Original temps were:') for i in range(len(orig_values)): print('{0:3.2f}'.format(orig_values[i]), orig_units) #Print converted values print('New temps are:') for i in range(len(self.temp_values)): print('{0:3.2f}'.format(self.temp_values[i]), self.temp_units) # Do this if single temperature value else: # Print original and converted values print('Original temp was {0:3.2f}'.format(orig_values) + orig_units + ', new temp is {0:3.2f}'.format(self.temp_values) + self.temp_units + '.') # Add white space for readability print('\n') # + id="tlKTunXgyiL3" colab_type="code" outputId="a00217b5-8ca4-4a96-fa86-759ce8e01de4" colab={"base_uri": "https://localhost:8080/", "height": 518} # Below are examples demonstrating the functionality of the temp_array class # Run this code if this is the main script if __name__ == "__main__": # Example of functionality with a single temperature value # Create class instance temp = temp_array(50, 'C') # Convert to new units temp_in_K = temp.tempconvert('K') # Print output temp_in_K.print_nice_output(temp.temp_values, temp.temp_units) # Example of functionality with an array of temperatures # Create class instance temp_arr = temp_array(np.asarray([50., 60., 70., 80.]), 'F') # Convert to new units temp_arr_in_C = temp_arr.tempconvert('C') # Print output temp_arr_in_C.print_nice_output(temp_arr.temp_values, temp_arr.temp_units, multiple_input = True) # Example of functionality with a list of temperatures # Create class instance temp_list = temp_array([300., 270., 250., 299.], 'K') # Convert to new units temp_list_in_C = temp_list.tempconvert('C') # Print output temp_list_in_C.print_nice_output(temp_list.temp_values, temp_list.temp_units, multiple_input = True) # + id="jHaXoX_Rrouc" colab_type="code" colab={}
ATMS-597-SP-2020-Project-1/ATMS_597_Project_1_Thayer_FINALFINALFINAL.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import pandas as pd import sklearn import matplotlib.pyplot as plt from sklearn.metrics import r2_score, median_absolute_error, mean_absolute_error from sklearn.metrics import median_absolute_error, mean_squared_error, mean_squared_log_error from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor from sklearn.linear_model import LinearRegression import matplotlib.pyplot as plt import properscoring as ps import scipy.stats as st import gc as garbage import warnings warnings.filterwarnings('ignore') # %matplotlib inline # #%load_ext line_profiler # - def root_mean_squared_error(y_true, y_pred): return np.sqrt(mean_squared_error(y_true, y_pred)) def mean_average_percentage_error(y_true, y_pred): return np.nanmean(np.abs((y_true - y_pred) / y_true))*100. 
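# A small worked example of the two error metrics defined above, restated with NumPy only so the check stands alone (the notebook's RMSE wraps `sklearn.metrics.mean_squared_error`, which is equivalent):

```python
import numpy as np

def root_mean_squared_error(y_true, y_pred):
    # sqrt of the mean squared residual
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def mean_average_percentage_error(y_true, y_pred):
    # mean absolute error relative to the true value, in percent
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return np.nanmean(np.abs((y_true - y_pred) / y_true)) * 100.0

y_true = np.array([1.0, 2.0, 4.0])
y_pred = np.array([1.0, 2.0, 2.0])

# Only the last point is off, by 2: RMSE = sqrt((0 + 0 + 4) / 3)
assert abs(root_mean_squared_error(y_true, y_pred) - np.sqrt(4.0 / 3.0)) < 1e-12
# The last point is off by 50% of its true value: MAPE = (0 + 0 + 50) / 3
assert abs(mean_average_percentage_error(y_true, y_pred) - 50.0 / 3.0) < 1e-9
```

# Note that MAPE divides by `y_true`, so it is undefined at zero load and asymmetric between over- and under-prediction, which matters when comparing models on this dataset.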
garbage.collect() # # Load dataset print('numpy', np.__version__) print('pandas', pd.__version__) print('scikit-learn', sklearn.__version__) # !curl -O https://archive.ics.uci.edu/ml/machine-learning-databases/00235/household_power_consumption.zip # !unzip household_power_consumption.zip fname = './household_power_consumption.txt' df = pd.read_csv(fname, sep=';', parse_dates={'dt' : ['Date', 'Time']}, infer_datetime_format=True, low_memory=False, na_values=['nan','?'], index_col='dt') df.head() df.describe() # + ts = df.Global_active_power.resample('1h').mean() hourly_m = ts.groupby(ts.index.hour).mean() hourly_50 = ts.groupby(ts.index.hour).quantile(0.50) hourly_25 = ts.groupby(ts.index.hour).quantile(0.25) hourly_75 = ts.groupby(ts.index.hour).quantile(0.75) hourly_05 = ts.groupby(ts.index.hour).quantile(0.05) hourly_95 = ts.groupby(ts.index.hour).quantile(0.95) # - plt.figure(figsize=(6,3)) plt.fill_between(hourly_m.index, hourly_05, hourly_95, alpha=0.1, color='blue', label='90%') plt.fill_between(hourly_m.index, hourly_25, hourly_75, alpha=0.2, color='blue', label='IQR') plt.plot(hourly_m, label='mean', color='k', linestyle='solid') #plt.plot(hourly_50, label='median', color='k', linestyle='dashed') plt.ylabel('Load [kW]') plt.xlabel('hour of day') plt.grid(True) plt.xticks(np.arange(0, 24, step=4)) plt.xlim(0,23) plt.ylim(0,4) plt.tight_layout() plt.legend(loc='upper left') plt.savefig('hourly.png', dpi=300) # + ts = df.Global_active_power.resample('1d').mean() daily_m = ts.groupby(ts.index.dayofweek).mean() daily_50 = ts.groupby(ts.index.dayofweek).quantile(0.50) daily_25 = ts.groupby(ts.index.dayofweek).quantile(0.25) daily_75 = ts.groupby(ts.index.dayofweek).quantile(0.75) daily_05 = ts.groupby(ts.index.dayofweek).quantile(0.05) daily_95 = ts.groupby(ts.index.dayofweek).quantile(0.95) # - plt.figure(figsize=(6,3)) plt.fill_between(daily_m.index, daily_05, daily_95, alpha=0.1, color='blue', label='90%') plt.fill_between(daily_m.index, daily_25, daily_75,
alpha=0.2, color='blue', label='IQR') plt.plot(daily_m, label='mean', color='k', linestyle='solid') #plt.plot(daily_50, label='median', color='k', linestyle='dashed') plt.ylabel('Load [kW]') plt.xlabel('day of week') plt.xticks(np.arange(7),('Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun')) plt.grid(True) plt.xlim(0,6) plt.ylim(0,2.5) plt.tight_layout() plt.legend(loc='upper left') plt.savefig('daily.png', dpi=300) # + ts = df.Global_active_power.resample('1w').mean() weekly_m = ts.groupby(ts.index.weekofyear).mean() weekly_50 = ts.groupby(ts.index.weekofyear).quantile(0.50) weekly_25 = ts.groupby(ts.index.weekofyear).quantile(0.25) weekly_75 = ts.groupby(ts.index.weekofyear).quantile(0.75) weekly_05 = ts.groupby(ts.index.weekofyear).quantile(0.05) weekly_95 = ts.groupby(ts.index.weekofyear).quantile(0.95) weekly_m.index -= 1 # - plt.figure(figsize=(6,3)) plt.fill_between(weekly_m.index, weekly_05, weekly_95, alpha=0.1, color='blue', label='90%') plt.fill_between(weekly_m.index, weekly_25, weekly_75, alpha=0.2, color='blue', label='IQR') plt.plot(weekly_m, label='mean', color='k', linestyle='solid') #plt.plot(daily_50, label='median', color='k', linestyle='dashed') plt.ylabel('Load [kW]') plt.xlabel('month of the year') plt.grid(True) plt.xticks(np.arange(0,53,4.34),('Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec')) plt.annotate('Winter holidays', xy=(8, 0.5), xytext=(6, 0.2), arrowprops={'arrowstyle':'-'}) plt.annotate('Easter', xy=(14, 1.5), xytext=(12, 1.8), arrowprops={'arrowstyle':'-'}) plt.annotate('Summer holidays', xy=(32, 0.8), xytext=(20, 1.4), arrowprops={'arrowstyle':'-'}) plt.annotate('All Saints', xy=(43, 0.6), xytext=(40, 0.2), arrowprops={'arrowstyle':'-'}) plt.annotate('Christmas', xy=(50, 1.7), xytext=(40, 2), arrowprops={'arrowstyle':'-'}) plt.xlim(0,52.1) plt.ylim(0,2.5) plt.tight_layout() plt.legend(loc='upper left') plt.savefig('weekly.png', dpi=300) df.Global_active_power.resample('1T').mean().describe() 
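The cells above and below sweep the same series over several resolutions (`1T`, `15min`, `1h`, `1d`, `1w`); the core operation is pandas time-based resampling. A minimal sketch of what `.resample(...).mean()` does, on a toy series with invented values:

```python
import pandas as pd

# four 15-minute readings collapse into a single hourly mean
idx = pd.date_range('2010-01-01 00:00', periods=4, freq='15min')
ts = pd.Series([1.0, 2.0, 3.0, 4.0], index=idx)
hourly = ts.resample('1h').mean()  # one bucket covering 00:00-01:00
```

Readings that are `NaN` in a bucket are simply ignored by `mean()`, which is why the missing-data handling further down still matters.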
2.049280e+06 / 2.049280e+06 df.Global_active_power.resample('15min').mean().describe() df.Global_active_power.resample('1h').mean().describe() df.Global_active_power.resample('1d').mean().describe() df.Global_active_power.resample('1w').mean().describe() df.fillna(df.shift(7, freq='d'), inplace=True) df.fillna(method='pad', inplace=True) print(df.isnull().sum()) (1. - 2.049280e+06 / df.shape[0]) * 100 train_date = pd.Timestamp('01-01-2009') test_date = pd.Timestamp('01-01-2010') def long_term_fit(ts): y = ts year = y.index.year.to_series(name='year', index=y.index) dayofyear = y.index.dayofyear.to_series(name='dayofyear', index=y.index) month = y.index.month.to_series(name='month', index=y.index) dayofweek = y.index.dayofweek.to_series(name='dayofweek', index=y.index) hour = y.index.hour.to_series(name='hour', index=y.index) minute = y.index.minute.to_series(name='minute', index=y.index) time = hour + minute / 60. time.name = 'hour' X = pd.concat([year, dayofyear, dayofweek, time], axis=1) print('Find optimal tree depth...') depth = 0 rmse_val = np.inf for d in range(8, 11): print('Depth: %d'%d) rf = RandomForestRegressor(n_estimators=100, n_jobs=-1, oob_score=True, max_features='sqrt', max_depth=d, random_state=42) rf.fit(X[:train_date], y[:train_date]) rmse = root_mean_squared_error(rf.predict(X[train_date:test_date]), y[train_date:test_date]) if (rmse < rmse_val): rmse_val = rmse depth = d print('MAX_DEPTH: %d - RMSE_VAL %f' %(depth, rmse_val)) print('Fit random forest...') rf = RandomForestRegressor(n_estimators=100, n_jobs=-1, oob_score=True, max_features='sqrt', max_depth=depth, random_state=42) rf.fit(X[:test_date], y[:test_date]) rmse_train = root_mean_squared_error(rf.predict(X[:test_date]), y[:test_date]) rmse_test = root_mean_squared_error(rf.predict(X[test_date:]), y[test_date:]) print('RMSE_TRAIN: %f - RMSE_TEST %f' %(rmse_train, rmse_test)) return rf def long_term_predict(dt, rf): year = dt.year.to_series(name='year', index=dt) dayofyear = 
dt.dayofyear.to_series(name='dayofyear', index=dt) month = dt.month.to_series(name='month', index=dt) dayofweek = dt.dayofweek.to_series(name='dayofweek', index=dt) hour = dt.hour.to_series(name='hour', index=dt) minute = dt.minute.to_series(name='minute', index=dt) time = hour + minute / 60. time.name = 'hour' X = pd.concat([year, dayofyear, dayofweek, time], axis=1) ts_lt = pd.Series(rf.predict(X), index=dt) return ts_lt def short_term_fit(ts, ts_lt, lookback, steps): ts_ref = ts_lt.reindex_like(ts) res = ts - ts_ref Xy = pd.concat([res.shift(-h) for h in range(-lookback+1, steps+1)], axis=1).dropna() Xy.columns = range(-lookback+1, steps+1) X = Xy.loc[:,:0] y = Xy.loc[:,1:] print('Fit Linear regression...') lr = LinearRegression(n_jobs=-1) lr.fit(X[:test_date], y[:test_date]) return lr def short_term_predict(ts, ts_lt, lr, lookback, steps): ts_ref = ts_lt.reindex_like(ts) res = ts - ts_ref X = pd.concat([res.shift(-h) for h in range(-lookback+1, 1)], axis=1).dropna() X.columns = range(-lookback+1, 1) print('Predict linear regression...') res_st = pd.DataFrame(lr.predict(X), index=X.index) res_st.columns = range(1, steps+1) ts_st = pd.DataFrame() for s in res_st.columns: ts_st[s] = ts_ref + res_st[s].shift(s) return ts_st def short_term_single(ts, rf, lr, lookback, steps): t0 = ts.index[-1] resolution = ts.index.freq dt = pd.date_range(t0 + pd.Timedelta(resolution), freq=resolution, periods=steps) y_lt = long_term_predict(dt, rf) x = ts[-lookback:] res = x - long_term_predict(x.index, rf) y_st = y_lt + pd.Series(lr.predict(res.values.reshape(1,-1)).flatten(), index=dt) return y_st def deterministic_results(ts, ts_st, ts_p, name='deterministic', x_label='forecast time [min]', x_factor = 1): y_hat = ts_p[test_date:].dropna() y_gt = ts.reindex_like(y_hat) MAE_p = mean_absolute_error(y_gt, y_hat) MAPE_p = mean_average_percentage_error(y_gt, y_hat) RMSE_p = np.sqrt(mean_squared_error(y_gt, y_hat)) MAE_st = [] MAPE_st = [] RMSE_st = [] for s in ts_st.columns: y_hat = 
ts_st[s][test_date:].dropna() y_gt = ts.reindex_like(y_hat) MAE_st.append(mean_absolute_error(y_gt, y_hat)) MAPE_st.append(mean_average_percentage_error(y_gt, y_hat)) RMSE_st.append(np.sqrt(mean_squared_error(y_gt, y_hat))) SS_st = 1. - RMSE_st / RMSE_p print('MAE p: %f'%MAE_p) print('MAPE p: %f'%MAPE_p) print('RMSE p: %f'%RMSE_p) print() print('MAE: %f - %f - %f'%(MAE_st[0], np.mean(MAE_st), MAE_st[-1])) print('MAPE: %f - %f - %f'%(MAPE_st[0], np.mean(MAPE_st), MAPE_st[-1])) print('RMSE: %f - %f - %f'%(RMSE_st[0], np.mean(RMSE_st), RMSE_st[-1])) print('SS: %f - %f - %f'%(SS_st[0], np.mean(SS_st), SS_st[-1])) plt.figure(figsize=(4,4)) #plt.plot(range(1, len(ts_st.columns) +1), RMSE_st, color='tab:orange', label='Deterministic', linestyle='dashed', linewidth=2) #plt.plot((1, len(ts_st.columns)), (RMSE_p, RMSE_p), color='tab:green', label='Persistence', linestyle='dotted', linewidth=2) plt.plot(np.arange(1, len(ts_st.columns) +1)*x_factor, RMSE_st, color='tab:orange', label='Deterministic', linestyle='none', marker='s') plt.plot(np.arange(1, len(ts_st.columns) +1)*x_factor, RMSE_p * np.ones(len(ts_st.columns)), color='tab:green', label='Persistence', linestyle='none', marker='v') plt.ylabel('RMSE [kW]') plt.xlabel(x_label) plt.grid(True) plt.xlim(0, len(ts_st.columns)*x_factor) #plt.ylim(0, 1.) 
plt.tight_layout() plt.legend(loc='lower right') plt.savefig(name + '.png', dpi=300) def error_quantiles(ts, ts_d): err = ts - ts_d hour = err.index.hour.to_series(name='hour', index=err.index) eq = err.groupby(hour).quantile(np.around(np.arange(0.05, 1.0, 0.05), 3)) return eq def probabilistic_results_mean(ts, ts_st, ts_p, name='probabilistic', frac=1.0, x_label='forecast time [min]', x_factor = 1): crps_p = [] crps_d = [] crps_q = [] if (frac < 1.0): ts_st_train = ts_st[:test_date].dropna().sample(frac=frac, random_state=42).sort_index() else: ts_st_train = ts_st[:test_date].dropna() ts_train = ts.reindex_like(ts_st_train) ts_p_train = ts_p.reindex_like(ts_st_train) if (frac < 1.0): ts_st_test = ts_st[test_date:].dropna().sample(frac=frac, random_state=42).sort_index() else: ts_st_test = ts_st[test_date:].dropna() ts_test = ts.reindex_like(ts_st_test) ts_p_test = ts_p.reindex_like(ts_st_test) for s in ts_st.columns: if not(s % 10): print(s) eq = error_quantiles(ts_train, ts_st_train[s]).unstack().values ts_q = np.broadcast_to(ts_st_test[s].values.reshape(-1,1), (len(ts_st_test), 19)) h = ts_st_test.index.hour ts_q = (ts_q + eq[h,:]).clip(0.) 
ts_q = pd.DataFrame(ts_q, index=ts_st_test.index, columns=list(np.around(np.arange(0.05, 1.0, 0.05), 3))) crps_p.append(ps.crps_ensemble(ts_test, ts_p_test).mean()) crps_d.append(ps.crps_ensemble(ts_test, ts_st_test[s]).mean()) crps_q.append(ps.crps_ensemble(ts_test, ts_q).mean()) print('CRPS_p: %f - %f - %f'%(crps_p[0], np.mean(crps_p), crps_p[-1])) print('CRPS_d: %f - %f - %f'%(crps_d[0], np.mean(crps_d), crps_d[-1])) print('CRPS_q: %f - %f - %f'%(crps_q[0], np.mean(crps_q), crps_q[-1])) plt.figure(figsize=(4,4)) #plt.plot(range(1, len(ts_st.columns) +1), crps_q, color='tab:blue', label='Probabilistic', linestyle='solid', linewidth=2) #plt.plot(range(1, len(ts_st.columns) +1), crps_d, color='tab:orange', label='Deterministic', linestyle='dashed', linewidth=2) #plt.plot(range(1, len(ts_st.columns) +1), crps_p, color='tab:green', label='Persistence', linestyle='dotted', linewidth=2) plt.plot(np.arange(1, len(ts_st.columns) +1)*x_factor, crps_q, color='tab:blue', label='Probabilistic', linestyle='none', marker='o') plt.plot(np.arange(1, len(ts_st.columns) +1)*x_factor, crps_d, color='tab:orange', label='Deterministic', linestyle='none', marker='s') plt.plot(np.arange(1, len(ts_st.columns) +1)*x_factor, crps_p, color='tab:green', label='Persistence', linestyle='none', marker='v') plt.ylabel('CRPS [kW]') plt.xlabel(x_label) plt.grid(True) plt.xlim(0, len(ts_st.columns)*x_factor) #plt.ylim(0, 0.6) plt.tight_layout() plt.legend(loc='lower right') plt.savefig(name + '.png', dpi=300) def probabilistic_results_hourly(ts, ts_st, ts_p, name='hourly', s=1): ts_st_train = ts_st[:test_date].dropna() ts_train = ts.reindex_like(ts_st_train) ts_p_train = ts_p.reindex_like(ts_st_train) ts_st_test = ts_st[test_date:].dropna() ts_test = ts.reindex_like(ts_st_test) ts_p_test = ts_p.reindex_like(ts_st_test) eq = error_quantiles(ts_train, ts_st_train[s]).unstack().values ts_q = np.broadcast_to(ts_st_test[s].values.reshape(-1,1), (len(ts_st_test), 19)) h = ts_st_test.index.hour ts_q = 
(ts_q + eq[h,:]).clip(0.) ts_q = pd.DataFrame(ts_q, index=ts_st_test.index, columns=list(np.around(np.arange(0.05, 1.0, 0.05), 3))) crps_p = ps.crps_ensemble(ts_test, ts_p_test) crps_d = ps.crps_ensemble(ts_test, ts_st_test[s]) crps_q = ps.crps_ensemble(ts_test, ts_q) crps_p_h = np.empty(24) crps_d_h = np.empty(24) crps_q_h = np.empty(24) for i in range(24): crps_p_h[i] = crps_p[h == i].mean() crps_d_h[i] = crps_d[h == i].mean() crps_q_h[i] = crps_q[h == i].mean() plt.figure(figsize=(8,4)) plt.plot(crps_q_h, color='tab:blue', label='Probabilistic', linestyle='solid', marker='o') plt.plot(crps_d_h, color='tab:orange', label='Deterministic', linestyle='solid', marker='s') plt.plot(crps_p_h, color='tab:green', label='Persistence', linestyle='solid', marker='v') plt.ylim([0,1.0]) plt.ylabel('CRPS [kW]') plt.xlabel('hour of the day') #plt.title('Forecast horizon: ' + name) plt.legend() plt.grid(True) plt.tight_layout() plt.legend(loc='upper left') plt.savefig('hourly_crps_' + name + '.png', dpi=300) def probabilistic_results_hist(ts, ts_st, h=7): ts_st_train = ts_st[:test_date].dropna() ts_train = ts.reindex_like(ts_st_train) ts_p_train = ts_p.reindex_like(ts_st_train) ts_st_test = ts_st[test_date:].dropna() ts_test = ts.reindex_like(ts_st_test) ts_p_test = ts_p.reindex_like(ts_st_test) plt.figure(figsize=(6,3)) for s in [1, 2, 4, 8]: err = ts_train - ts_st_train[s] hour = err.index.hour.to_series(name='hour', index=err.index) err[hour==h].plot.density(label='steps: %d'%s) plt.xlim(-2,4) #plt.ylabel('CRPS [kW]') #plt.xlabel('hour of the day') #plt.title('Forecast horizon: ' + name) plt.legend() plt.grid(True) plt.tight_layout() plt.legend(loc='upper left') #plt.savefig('density_%d.png'%h, dpi=300) def probabilistic_results_time(ts, ts_st, y_st, t0, lookback, steps, name='ts', frac=1.0): tA = t0 - (lookback)*pd.Timedelta(resolution) tB = t0 + (steps)*pd.Timedelta(resolution) if (frac < 1.0): ts_st_train = ts_st[:test_date].dropna().sample(frac=frac, 
random_state=42).sort_index() else: ts_st_train = ts_st[:test_date].dropna() ts_train = ts.reindex_like(ts_st_train) if (frac < 1.0): ts_st_test = ts_st[test_date:].dropna().sample(frac=frac, random_state=42).sort_index() else: ts_st_test = ts_st[test_date:].dropna() ts_test = ts.reindex_like(ts_st_test) y_05 = y_st.copy() y_25 = y_st.copy() y_75 = y_st.copy() y_95 = y_st.copy() for s in range(1, steps+1): err = ts_train - ts_st_train[s] hour = err.index.hour.to_series(name='hour', index=err.index) eq = err.groupby(hour).quantile([0.05, 0.25, 0.75, 0.95]).unstack().values y_05.iloc[s-1] += eq[y_05.index.hour[s-1],0] y_25.iloc[s-1] += eq[y_25.index.hour[s-1],1] y_75.iloc[s-1] += eq[y_75.index.hour[s-1],2] y_95.iloc[s-1] += eq[y_95.index.hour[s-1],3] y_05 = y_05.clip(0.) y_25 = y_25.clip(0.) y_75 = y_75.clip(0.) y_95 = y_95.clip(0.) plt.figure(figsize=(8,4)) y_st.plot(label='Forecast', color='k', linestyle='solid') ts[tA:tB].plot(label='Measure', color='k', linestyle='dotted') plt.fill_between(y_st.index, y_05, y_95, alpha=0.1, color='blue', label='90%') plt.fill_between(y_st.index, y_25, y_75, alpha=0.2, color='blue', label='IQR') plt.xlabel('') plt.ylabel('Load [kW]') plt.xlim(tA, tB) plt.grid(True) plt.tight_layout() plt.legend(loc='upper left') plt.savefig(name + '.png', dpi=300) ts = df.Global_active_power # + # %%time resolution = '1min' lookback = 60 steps = 60 rf = long_term_fit(ts.resample(resolution).mean()) ts_lt = long_term_predict(ts.resample(resolution).mean().index, rf) lr = short_term_fit(ts.resample(resolution).mean(), ts_lt, lookback, steps) # - # %%time ts_st = short_term_predict(ts.resample(resolution).mean(), ts_lt, lr, lookback, steps) # %%time ts_p = ts.resample(resolution).mean().shift(steps) deterministic_results(ts.resample(resolution).mean(), ts_st, ts_p, 'det1', x_factor=1, x_label='forecast time [min]') # %%time probabilistic_results_mean(ts.resample(resolution).mean(), ts_st, ts_p, 'prob1', frac=0.1, x_factor=1, x_label='forecast time 
[min]') # %%time t0 = pd.Timestamp('2010-5-26 19:15:00') #t0 = pd.Timestamp('2009-5-27 19:00:00') y_st = short_term_single(ts.resample(resolution).mean()[:t0], rf, lr, lookback, steps) probabilistic_results_time(ts.resample(resolution).mean(), ts_st, y_st, t0, lookback, steps, name='ts_1', frac=0.1) # + # %%time resolution = '15min' lookback = 4*24 steps = 4*24 rf = long_term_fit(ts.resample(resolution).mean()) ts_lt = long_term_predict(ts.resample(resolution).mean().index, rf) lr = short_term_fit(ts.resample(resolution).mean(), ts_lt, lookback, steps) # - # %%time ts_st = short_term_predict(ts.resample(resolution).mean(), ts_lt, lr, lookback, steps) # %%time ts_p = ts.resample(resolution).mean().shift(steps) deterministic_results(ts.resample(resolution).mean(), ts_st, ts_p, 'det2', x_factor=0.25, x_label='forecast time [h]') # %%time probabilistic_results_mean(ts.resample(resolution).mean(), ts_st, ts_p, 'prob2', x_factor=0.25, x_label='forecast time [h]') # %%time t0 = pd.Timestamp('2010-5-26 19:15:00') y_st = short_term_single(ts.resample(resolution).mean()[:t0], rf, lr, lookback, steps) probabilistic_results_time(ts.resample(resolution).mean(), ts_st, y_st, t0, lookback, steps, name='ts_2') probabilistic_results_hourly(ts.resample(resolution).mean(), ts_st, ts_p, '15min', s=1) probabilistic_results_hourly(ts.resample(resolution).mean(), ts_st, ts_p, '1h', s=4) probabilistic_results_hourly(ts.resample(resolution).mean(), ts_st, ts_p, '2h', s=8) probabilistic_results_hist(ts, ts_st, h=16) # + # %%time resolution = '1h' lookback = 24*7 steps = 24*7 rf = long_term_fit(ts.resample(resolution).mean()) ts_lt = long_term_predict(ts.resample(resolution).mean().index, rf) lr = short_term_fit(ts.resample(resolution).mean(), ts_lt, lookback, steps) # - # %%time ts_st = short_term_predict(ts.resample(resolution).mean(), ts_lt, lr, lookback, steps) # %%time ts_p = ts.resample(resolution).mean().shift(steps) deterministic_results(ts.resample(resolution).mean(), ts_st, ts_p, 
'det3', x_factor=1, x_label='forecast time [h]') # %%time probabilistic_results_mean(ts.resample(resolution).mean(), ts_st, ts_p, 'prob3', x_factor=1, x_label='forecast time [h]') # %%time t0 = pd.Timestamp('2010-5-26 19:15:00') y_st = short_term_single(ts.resample(resolution).mean()[:t0], rf, lr, lookback, steps) probabilistic_results_time(ts.resample(resolution).mean(), ts_st, y_st, t0, lookback, steps, name='ts_3') # + # %%time rf_resolution = '7d' y_rf = ts.resample(rf_resolution).mean() year = y_rf.index.year.to_series(name='year', index=y_rf.index) dayofyear = y_rf.index.dayofyear.to_series(name='dayofyear', index=y_rf.index) month = y_rf.index.month.to_series(name='month', index=y_rf.index) weekofyear = y_rf.index.weekofyear.to_series(name='weekofyear', index=y_rf.index) dayofweek = y_rf.index.dayofweek.to_series(name='dayofweek', index=y_rf.index) hour = y_rf.index.hour.to_series(name='hour', index=y_rf.index) minute = y_rf.index.minute.to_series(name='minute', index=y_rf.index) time = hour + minute / 60. 
time.name = 'hour' X_rf = pd.concat([year, month, weekofyear], axis=1) print('Find optimal tree depth...') depth = 0 rmse_val = np.inf for d in range(3, 10): rf = RandomForestRegressor(n_estimators=100, n_jobs=-1, oob_score=True, max_features='sqrt', max_depth=d, random_state=42) rf.fit(X_rf[:train_date], y_rf[:train_date]) rmse = root_mean_squared_error(rf.predict(X_rf[train_date:test_date]), y_rf[train_date:test_date]) if (rmse < rmse_val): rmse_val = rmse depth = d print('MAX_DEPTH: %d - RMSE_VAL %f' %(depth, rmse_val)) print('Fit random forest...') rf = RandomForestRegressor(n_estimators=100, n_jobs=-1, oob_score=True, max_features='sqrt', max_depth=depth, random_state=42) rf.fit(X_rf[:test_date], y_rf[:test_date]) rmse_train = root_mean_squared_error(rf.predict(X_rf[:test_date]), y_rf[:test_date]) rmse_test = root_mean_squared_error(rf.predict(X_rf[test_date:]), y_rf[test_date:]) print('RMSE_TRAIN: %f - RMSE_TEST %f' %(rmse_train, rmse_test)) # + # %%time ts_rf = pd.Series(rf.predict(X_rf), index=y_rf.index) print('MAE p: %f'%(mean_absolute_error(y_rf[test_date:], y_rf.shift(52)[test_date:]))) print('MAPE p: %f'%(mean_average_percentage_error(y_rf[test_date:], y_rf.shift(52)[test_date:]))) print('RMSE p: %f'%(np.sqrt(mean_squared_error(y_rf[test_date:], y_rf.shift(52)[test_date:])))) print() print('MAE: %f'%(mean_absolute_error(y_rf[test_date:], ts_rf[test_date:]))) print('MAPE: %f'%(mean_average_percentage_error(y_rf[test_date:], ts_rf[test_date:]))) print('RMSE: %f'%(np.sqrt(mean_squared_error(y_rf[test_date:], ts_rf[test_date:])))) print('SS: %f'%(1. 
- np.sqrt(mean_squared_error(y_rf[test_date:], ts_rf[test_date:])) / np.sqrt(mean_squared_error(y_rf[test_date:], y_rf.shift(52)[test_date:])))) print() print('CRPS_p: %f'%(ps.crps_ensemble(y_rf[test_date:], y_rf.shift(52)[test_date:]).mean())) print('CRPS_d: %f'%(ps.crps_ensemble(y_rf[test_date:], ts_rf[test_date:]).mean())) # - plt.figure(figsize=(8,4)) ts_rf[test_date-pd.Timedelta('7 days'):].plot(label='Forecast', color='tab:blue', linestyle='solid') ts_rf[:test_date].plot(label='Fit', color='tab:orange', linestyle='dashed') y_rf.plot(label='Real', color='k', linestyle='dotted') plt.ylabel('Load [kW]') #plt.xlabel('time') plt.xlabel('') plt.grid(True) plt.tight_layout() plt.legend(loc='lower left') plt.savefig('weekly_forecast.png', dpi=300) print(len(ts_rf[:test_date]), len(ts_rf[test_date:])) import holidays hd = holidays.France hd for date, name in sorted(holidays.France(years=[2007, 2008, 2009, 2010]).items()): print(date, name) y_rf.plot() plt.xlim('2007', '2011') plt.plot((pd.Timestamp('2007-04-09'), pd.Timestamp('2007-04-09')), (0,2)) plt.plot((pd.Timestamp('2007-11-01'), pd.Timestamp('2007-11-01')), (0,2)) plt.plot((pd.Timestamp('2008-03-24'), pd.Timestamp('2008-03-24')), (0,2)) plt.plot((pd.Timestamp('2008-11-01'), pd.Timestamp('2008-11-01')), (0,2)) plt.plot((pd.Timestamp('2009-04-13'), pd.Timestamp('2009-04-13')), (0,2)) plt.plot((pd.Timestamp('2009-11-01'), pd.Timestamp('2009-11-01')), (0,2)) plt.plot((pd.Timestamp('2010-04-05'), pd.Timestamp('2010-04-05')), (0,2)) plt.plot((pd.Timestamp('2010-11-01'), pd.Timestamp('2010-11-01')), (0,2))
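The skill score printed throughout, `SS = 1 - RMSE_model / RMSE_persistence`, compares the forecast against the naive persistence baseline. A minimal numeric sketch, with all numbers invented for illustration:

```python
import numpy as np

y      = np.array([1.0, 2.0, 3.0, 4.0])   # observations
y_hat  = np.array([1.1, 1.9, 3.2, 3.8])   # model forecast
y_pers = np.array([0.0, 1.0, 2.0, 3.0])   # persistence: previous value

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
ss = 1.0 - rmse(y, y_hat) / rmse(y, y_pers)
# ss > 0 means the model improves on persistence; ss = 1 would be a perfect forecast
```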
Scenarios-RF-LR.ipynb
# -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: julia 1.4.1 # language: julia # name: julia-1.4 # --- # ## Online system identification in Duffing oscillator by free energy minimisation # # This project considers a [Duffing oscillator](https://en.wikipedia.org/wiki/Duffing_equation), a driven damped harmonic oscillator with a cubic nonlinearity in its spring stiffness component. State-space model description of the system: # # $$\begin{align} # m \frac{d^2 x(t)}{dt^2} + c \frac{d x(t)}{dt} + a x(t) + b x^3(t) =&\ u(t) + w(t) \\ # y(t) =&\ x(t) + v(t) # \end{align}$$ # # where # $$\begin{align} # m =&\ \text{mass} \\ # c =&\ \text{damping} \\ # a =&\ \text{linear stiffness} \\ # b =&\ \text{nonlinear stiffness} \\ # y(t) =&\ \text{observation (displacement)} \\ # x(t) =&\ \text{state (displacement)} \\ # u(t) =&\ \text{force} \\ # v(t) =&\ \text{measurement noise} \\ # w(t) =&\ \text{process noise} # \end{align}$$ # # The process noise is a Wiener process, where the increment is Gaussian distributed $w(t) \sim \mathcal{N}(0, \tau^{-1}dt)$. The parameter $\tau$ represents the precision of the process. The measurement noise is also a Wiener process, $v(t) \sim \mathcal{N}(0, \xi^{-1}dt)$. # # ## Experiment: simulation error # # In this notebook, we will perform a simulation error experiment: the model will simulate future outputs for a large time horizon using only inputs and inferred parameters. The predictions will be evaluated and compared to a few benchmark methods. # ### Data # # There is an electronic implementation of the Duffing oscillator on the Nonlinear System Identification Benchmark website: http://nonlinearbenchmark.org/#Silverbox. It's called the Silverbox setup. 
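To build intuition for the model before touching the Silverbox data, the deterministic part of the oscillator can be simulated directly. A minimal semi-implicit Euler sketch (written in Python rather than Julia, purely for illustration; the parameter values below are invented, not the Silverbox ones):

```python
import numpy as np

# Duffing oscillator m*x'' + c*x' + a*x + b*x^3 = u(t), with u = 0 and no noise
m, c, a, b = 1.0, 0.1, -1.0, 1.0   # a < 0, b > 0 gives the double-well regime
dt, T = 1e-3, 20000                # 20 time units
x, v = 0.5, 0.0                    # initial displacement and velocity
traj = np.empty(T)
for t in range(T):
    acc = (-c * v - a * x - b * x ** 3) / m
    v += dt * acc                  # update velocity first (semi-implicit Euler)
    x += dt * v
    traj[t] = x

# starting inside the right-hand well, the damped motion stays at x > 0
```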
using Pkg Pkg.activate(".") Pkg.instantiate() using CSV using DataFrames using Plots pyplot(); viz = true; # + # Read data from CSV file df = CSV.read("data/SNLS80mV.csv", ignoreemptylines=true) df = select(df, [:V1, :V2]) # Sampling frequency fs = 610.35 # Shorthand input = df[:,1] output = df[:,2] # Time horizon T = size(df, 1); # + # Zoomed in versions of signals tt = 50000:1:50500 p01 = plot(tt, input[tt], color="red", label="", markersize=2, xlabel="", ylabel="input", xticks=:none, ylims=[-0.22, 0.22], tickfontsize=14, legendfontsize=12, guidefontsize=16) p02 = plot(tt, output[tt], color="blue", label="", markersize=2, xlabel="time (t)", ylabel="output", ylims=[-0.22, 0.22], tickfontsize=14, legendfontsize=12, guidefontsize=16) p00 = plot(p01, p02, layout=(2,1), size=(800,400)) savefig(p00, "figures/signals_zoomed.png") savefig(p00, "figures/signals_zoomed.pdf") # + # Select training set ix_trn = collect(40101:131072) input_trn = input[ix_trn] output_trn = output[ix_trn] T_trn = length(ix_trn); # Select validation set ix_val = 1:40100 input_val = input[ix_val] output_val = output[ix_val] T_val = length(ix_val); # - # Plot entire series with training split ss = 80 ix = 1:ss:T p11 = plot(ix, output[ix], color="blue", linewidth=2, xlabel="time (t)", ylabel="output", label="", ylim=[-.24, .24], tickfontsize=14, legendfontsize=12, guidefontsize=16) vline!([40100], color="black", linewidth=4, label="") p12 = plot(ix, input[ix], color="red", linewidth=2, xticks=:none, xlabel="", ylabel="input", label="", ylim=[-.24, .24], tickfontsize=14, legendfontsize=12, guidefontsize=16) vline!([40100], color="black", linewidth=4, label="") p10 = plot(p12, p11, layout=(2,1), size=(1200,400)) savefig(p10, "figures/dataset_split.png") # Plot example of signals ss = 4 ix = 126000:ss:127500 p31 = Plots.plot(ix, output[ix], color="blue", linewidth=3, xlabel="time (t)", label="output") Plots.plot!(ix, input[ix], color="red", linewidth=3, xlabel="time (t)", label="input", 
size=(1200,300), ylim=[-.16, .21], legend=:topright, tickfontsize=14, legendfontsize=12, ylabel="signal", guidefontsize=16) savefig(p31, "figures/input-output_seq1.png") savefig(p31, "figures/input-output_seq1.pdf") # ## Solution steps # # ### 1. Discretize # # I'm using a central difference for the second derivative and a forward difference for the first derivative. Let $w_t$ be a sample from $\mathcal{N}(0, \tau^{-1})$. The state transition can now be written as the following discrete-time system: # # $$\begin{align} # m (x_{t+1} - 2x_{t} + x_{t-1}) + c (x_{t+1} - x_{t}) + a x_t + b x_t^3 =&\ u_t + w_t # \end{align}$$ # Re-writing this as a function of $x_{t+1}$ yields: # $$\begin{align} # % (m + c) x_{t+1}&\ + (-2m - c + a) x_{t} + bx_t^3 + m x_{t-1} = u_t + w_t \\ # x_{t+1}&\ = \frac{2m + c - a}{m + c} x_{t} + \frac{-b}{m + c}x_t^3 + \frac{-m}{m + c} x_{t-1} + \frac{1}{m + c} u_t + \frac{1}{m + c} w_t \, . # \end{align}$$ # # ### 2. Substitute variables and reduce order # # I can cast the above system into matrix form: # # $$ \underbrace{\begin{bmatrix} x_{t+1} \\ x_{t} \end{bmatrix}}_{z_t} = \underbrace{\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}}_{S} \underbrace{\begin{bmatrix} x_{t} \\ x_{t-1} \end{bmatrix}}_{z_{t-1}} + \underbrace{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}_{s} g(\theta, z_{t-1}) + \begin{bmatrix} 1 \\ 0 \end{bmatrix} \eta u_t + \begin{bmatrix} 1 \\ 0 \end{bmatrix} \tilde{w}_t \, ,$$ # # where # # $$\begin{align} # \theta_1 = \frac{2m+c-a}{m+c} \ , \quad # \theta_2 = \frac{-b}{m+c} \ , \quad # \theta_3 = \frac{-m}{m+c} \ , \quad # \eta = \frac{1}{m+c} \ , \quad # \gamma^{-1} = \frac{\tau^{-1}}{(m+c)^2} \, , # \end{align}$$ # # with $g(\theta, z_{t-1}) = \theta_1 x_t + \theta_2 x_t^3 + \theta_3 x_{t-1}$ and $\tilde{w}_t \sim \mathcal{N}(0, \gamma^{-1})$. In total, I have five unknowns $m,c,a,b,\tau$ and five equations. 
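The substitution in step 2 is easy to check numerically: pick arbitrary physical parameters, form $(\theta_1, \theta_2, \theta_3, \eta)$, and verify that the $\theta$-parameterised recursion reproduces the direct rearrangement of the discretised equation. A sketch with made-up values (not part of the notebook's inference code), in Python for illustration:

```python
# made-up physical parameters (m, c, a, b)
m, c, a, b = 1.3, 0.2, -0.9, 0.7
theta1 = (2 * m + c - a) / (m + c)
theta2 = -b / (m + c)
theta3 = -m / (m + c)
eta = 1.0 / (m + c)

x_t, x_tm1, u_t = 0.4, 0.3, 0.1   # arbitrary state history and input

# substituted form: x_{t+1} = th1*x_t + th2*x_t^3 + th3*x_{t-1} + eta*u_t
x_next_sub = theta1 * x_t + theta2 * x_t ** 3 + theta3 * x_tm1 + eta * u_t

# direct rearrangement of m(x_{t+1}-2x_t+x_{t-1}) + c(x_{t+1}-x_t) + a*x_t + b*x_t^3 = u_t
x_next_dir = (u_t + (2 * m + c - a) * x_t - b * x_t ** 3 - m * x_tm1) / (m + c)
```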
I can invert the mapping between $\phi = (m, c, a, b, \tau)$ and $\psi = (\theta_1, \theta_2, \theta_3, \eta, \gamma)$ to recover MAP estimates for the physical parameters. An additional advantage of variable substitution is that it allows for more freedom in choosing priors. # # The system is now a nonlinear autoregressive process: # # $$z_t = f(\theta, z_{t-1}, \eta, u_t) + \tilde{w}_t$$ # # where $f(\theta, z_{t-1}, \eta, u_t) = Sz_{t-1} + s g(\theta, z_{t-1}) + s \eta u_t$. Note that the states are two-dimensional now. # # ### 3. Convert to Gaussian probability # # Integrating out $\tilde{w}_t$ and $v_t$ produces a Gaussian state transition node: # # $$\begin{align} # z_t \sim&\ \mathcal{N}(f(\theta, z_{t-1}, \eta, u_t), V) \\ # y_t \sim&\ \mathcal{N}(s^{\top} z_t, \xi^{-1}) \, , # \end{align}$$ # # where $V = \begin{bmatrix} \gamma^{-1} & 0 \\ 0 & \epsilon \end{bmatrix}$ and $W = V^{-1} = \begin{bmatrix} \gamma & 0 \\ 0 & \epsilon^{-1} \end{bmatrix}$. # # ### 4. Approximating the nonlinearity # # The nonlinearity is approximated using a first-order Taylor expansion. The work here revolves around working out the expectations for $g(x,\theta)$: # # $$ g(\theta, x) = g(m_{\theta}, m_x) + J_{x}(m_{\theta}, m_x)^{\top}(x - m_x) + J_{\theta}(m_{\theta}, m_x)^{\top}(\theta - m_{\theta}) \, ,$$ # # where $J_x$ denotes the partial derivative of $g$ with respect to $x$ and $J_{\theta}$ w.r.t. $\theta$. Note that our current $g$ is linear in $\theta$ and one could argue that the approximation is unnecessary. However, this form is more general and the first-order Taylor expansion is exact anyway. # # ### 5. Choose priors # # We know that mass $m$ and process precision $\gamma$ are strictly positive parameters and that the damping and stiffness coefficients can be both positive and negative. By examining the nonlinear transform $\psi = G(\phi)$, we realize that $\theta_1$, $\theta_2$, $\theta_3$ and $\eta$ can be both positive and negative, but $\gamma$ can only be positive. 
As such, we choose the following priors: # # $$\begin{align} # \theta \sim \text{Normal}(m^{0}_{\theta}, V^{0}_{\theta}) \ , \quad # \eta \sim \text{Normal}(m^{0}_{\eta}, v^{0}_{\eta}) \ , \quad # \gamma \sim \text{Gamma}(a^{0}_\gamma, b^{0}_\gamma) \, . # \end{align}$$ # # ### 6. Choose recognition model # # We do not introduce any independencies; the recognition model follows the generative model: # # $$\begin{align} # q(\theta) \sim \text{Normal}(m_{\theta}, V_{\theta}) \ , \quad # q(\eta) \sim \text{Normal}(m_{\eta}, v_{\eta}) \ , \quad # q(\gamma) \sim \text{Gamma}(a_\gamma, b_\gamma) \, . # \end{align}$$ # # ## Implementation # # The procedure described above was implemented using [ForneyLab.jl](https://github.com/biaslab/ForneyLab.jl) with a custom node called "NLARX". It contains a Nonlinear Latent Autoregressive model with eXogenous input to model the state transition. # + using LinearAlgebra using ForneyLab using ForneyLab: unsafeMean, unsafeCov, unsafeVar, unsafePrecision using ProgressMeter include("NLARX-node/NLARX.jl") include("NLARX-node/util.jl") using .NLARX # + # System identification graph graph1 = FactorGraph() # Static parameters @RV θ ~ GaussianMeanPrecision(placeholder(:m_θ, dims=(3,)), placeholder(:w_θ, dims=(3,3))) @RV η ~ GaussianMeanPrecision(placeholder(:m_η), placeholder(:w_η)) @RV γ ~ Gamma(placeholder(:a_γ), placeholder(:b_γ)) @RV ξ ~ Gamma(placeholder(:a_ξ), placeholder(:b_ξ)) # Nonlinearity g(θ, x) = θ[1]*x[1] + θ[2]*x[1]^3 + θ[3]*x[2] # State prior @RV z_tmin1 ~ GaussianMeanPrecision(placeholder(:m_z, dims=(2,)), placeholder(:w_z, dims=(2, 2)), id=:z_tmin1) # Autoregressive node @RV z_t ~ NLatentAutoregressiveX(θ, z_tmin1, η, placeholder(:u_t), γ, g=g, id=:z_t) # Specify likelihood @RV y_t ~ GaussianMeanPrecision(dot([1. 
, 0.], z_t), ξ, id=:y_t) # Placeholder for observation placeholder(y_t, :y_t) # Draw time-slice subgraph ForneyLab.draw(graph1) # + # Specify recognition model q1 = PosteriorFactorization(z_t, z_tmin1, θ, η, γ, ξ, ids=[:z_t, :z_tmin1, :θ, :η, :γ, :ξ]) algo1 = variationalAlgorithm(q1, free_energy=true) # Compile inference algorithm source_code1 = algorithmSourceCode(algo1, free_energy=true) eval(Meta.parse(source_code1)); # println(source_code1) # - # ### Infer parameters on training data # + # Inference parameters num_iterations = 10 # Initialize marginal distribution and observed data dictionaries data = Dict() marginals = Dict() # Initialize free energy tracking array free_energy_trn = zeros(T_trn, num_iterations) # Initialize arrays of parameterizations params_z = (zeros(2,T_trn+1), repeat(.1 .*float(eye(2)), outer=(1,1,T_trn+1))) params_θ = (ones(3,T_trn+1), repeat(.1 .*float(eye(3)), outer=(1,1,T_trn+1))) params_η = (2*ones(1,T_trn+1), 1e2 *ones(1,T_trn+1)) params_γ = (1e8*ones(1,T_trn+1), 1e3*ones(1,T_trn+1)) params_ξ = (1e8*ones(1,T_trn+1), 1e1*ones(1,T_trn+1)) # Start progress bar p = Progress(T_trn, 1, "At time ") # Perform inference at each time-step for t = 1:T_trn # Update progress bar update!(p, t) # Initialize marginals marginals[:z_tmin1] = ProbabilityDistribution(Multivariate, GaussianMeanPrecision, m=params_z[1][:,t], w=params_z[2][:,:,t]) marginals[:z_t] = ProbabilityDistribution(Multivariate, GaussianMeanPrecision, m=params_z[1][:,t], w=params_z[2][:,:,t]) marginals[:θ] = ProbabilityDistribution(Multivariate, GaussianMeanPrecision, m=params_θ[1][:,t], w=params_θ[2][:,:,t]) marginals[:η] = ProbabilityDistribution(Univariate, GaussianMeanPrecision, m=params_η[1][1,t], w=params_η[2][1,t]) marginals[:γ] = ProbabilityDistribution(Univariate, Gamma, a=params_γ[1][1,t], b=params_γ[2][1,t]) marginals[:ξ] = ProbabilityDistribution(Univariate, Gamma, a=params_ξ[1][1,t], b=params_ξ[2][1,t]) data = Dict(:y_t => output_trn[t], :u_t => input_trn[t], :m_z => 
params_z[1][:,t], :w_z => params_z[2][:,:,t], :m_θ => params_θ[1][:,t], :w_θ => params_θ[2][:,:,t], :m_η => params_η[1][1,t], :w_η => params_η[2][1,t], :a_γ => params_γ[1][1,t], :b_γ => params_γ[2][1,t], :a_ξ => params_ξ[1][1,t], :b_ξ => params_ξ[2][1,t]) # Iterate variational parameter updates for i = 1:num_iterations # Update parameters stepη!(data, marginals) stepθ!(data, marginals) # Update states stepz_t!(data, marginals) stepz_tmin1!(data, marginals) # Update noise stepγ!(data, marginals) stepξ!(data, marginals) # Compute free energy free_energy_trn[t, i] = freeEnergy(data, marginals) end # Store current parameterizations of marginals params_z[1][:,t+1] = unsafeMean(marginals[:z_t]) params_z[2][:,:,t+1] = marginals[:z_t].params[:w] params_θ[1][:,t+1] = unsafeMean(marginals[:θ]) params_θ[2][:,:,t+1] = marginals[:θ].params[:w] params_η[1][1,t+1] = unsafeMean(marginals[:η]) params_η[2][1,t+1] = marginals[:η].params[:w] params_γ[1][1,t+1] = marginals[:γ].params[:a] params_γ[2][1,t+1] = marginals[:γ].params[:b] params_ξ[1][1,t+1] = marginals[:ξ].params[:a] params_ξ[2][1,t+1] = marginals[:ξ].params[:b] end # - # ### Simulate validation data # + # Prediction graph graph2 = FactorGraph() # Autoregressive node @RV z_pred ~ NLatentAutoregressiveX(placeholder(:θ, dims=(3,)), placeholder(:z_tmin1, dims=(2,)), placeholder(:η), placeholder(:u_t), placeholder(:γ), g=g, id=:z_pred_t) # Draw time-slice subgraph ForneyLab.draw(graph2) # Inference algorithm q2 = PosteriorFactorization(z_pred, ids=[:z_pred]) algo2 = variationalAlgorithm(q2, free_energy=true) source_code2 = algorithmSourceCode(algo2, free_energy=true) eval(Meta.parse(source_code2)); # println(source_code2) # + # Initialize free energy tracking array free_energy_pred = zeros(T_val, num_iterations) # Initialize future state arrays params_preds = (zeros(2, T_val), repeat(.1 .*float(eye(2)), outer=(1,1,T_val))) # Start simulation with known output params_preds[1][1,2] = output_val[2] params_preds[1][2,2] = 
output_val[1] # Start progress bar p = Progress(T_val, 1, "At time ") for t = 3:T_val update!(p, t) # Initialize marginals marginals[:z_pred] = ProbabilityDistribution(Multivariate, GaussianMeanPrecision, m=params_preds[1][:,t], w=params_preds[2][:,:,t]) # Clamp data data = Dict(:u_t => input[t], :z_tmin1 => params_preds[1][:,t-1], :θ => params_θ[1][:,end], :η => params_η[1][end], :γ => params_γ[1][end]/params_γ[2][end]) # Iterate variational parameter updates for i = 1:num_iterations # Make prediction stepz_pred!(data, marginals) # Compute free energy free_energy_pred[t, i] = freeEnergy(data, marginals) end # Store current parameterizations of marginals params_preds[1][:,t] = unsafeMean(marginals[:z_pred]) params_preds[2][:,:,t] = marginals[:z_pred].params[:w] end # - # Store predictions for later comparisons results_NLARX = Dict() results_NLARX["preds"] = params_preds results_NLARX["params_z"] = params_z; results_NLARX["params_θ"] = params_θ; results_NLARX["params_η"] = params_η; results_NLARX["params_γ"] = params_γ; results_NLARX["params_ξ"] = params_ξ; results_NLARX["FE_pred"] = free_energy_pred; results_NLARX["FE_trn"] = free_energy_trn; # ### Visualize results # + # Mean and std dev of predictions predictions_mean = params_preds[1][1,:] predictions_std = sqrt.(inv.(params_preds[2][1,1,:])) # Subsample for visualization ss = 10 # viz_ix = 50000:ss:60000 viz_ix = 1024:ss:40000 # Plot predictions p23 = scatter(viz_ix, output[viz_ix], label="observations", xlabel="time (t)", ylims=[-.4, .4], color="black") plot!(viz_ix, predictions_mean[viz_ix], ribbon=[predictions_std[viz_ix], predictions_std[viz_ix]], label="predictions", color="red") # - Plots.savefig(p23, "figures/simulations_nlarx.png") # + # Compute prediction error pred_error = (predictions_mean[2:end] .- output_val[2:end]).^2 # Subsample for visualization ss = 10 viz_ix = 1:ss:40000 # Scatter error over time p24 = scatter(viz_ix, pred_error[viz_ix], color="black", xlabel="time (t)", ylabel="Prediction 
error", label="", ylims=[1e-10, 1e-2], yscale=:log10) # - Plots.savefig(p24, "figures/pred-error_nlarx.png") # + # Subsample for visualization ss = 10 viz_ix = 1:ss:40000 # Scatter error over time p24 = plot(viz_ix, free_energy_trn[viz_ix,end], color="black", xlabel="time (t)", ylabel="F[q]", label="", title="Free energy at training time") # + # Subsample for visualization ss = 10 viz_ix = 1:ss:40000 # Scatter error over time p24 = plot(viz_ix, free_energy_pred[viz_ix,end], color="black", xlabel="time (t)", ylabel="F[q]", label="", title="Free energy of predictions") # - # ## Baseline: linear autoregression # + # System identification graph graph3 = FactorGraph() # Static parameters @RV θ ~ GaussianMeanPrecision(placeholder(:m_θ, dims=(2,)), placeholder(:w_θ, dims=(2,2))) @RV η ~ GaussianMeanPrecision(placeholder(:m_η), placeholder(:w_η)) @RV γ ~ Gamma(placeholder(:a_γ), placeholder(:b_γ)) @RV ξ ~ Gamma(placeholder(:a_ξ), placeholder(:b_ξ)) # Linear autoregression function g(θ, x) = θ[1]*x[1] + θ[2]*x[2] # State prior @RV z_tmin1 ~ GaussianMeanPrecision(placeholder(:m_z, dims=(2,)), placeholder(:w_z, dims=(2, 2)), id=:z_tmin1) # Autoregressive node @RV z_t ~ NLatentAutoregressiveX(θ, z_tmin1, η, placeholder(:u_t), γ, g=g, id=:z_t) # Specify likelihood @RV y_t ~ GaussianMeanPrecision(dot([1. 
, 0.], z_t), ξ, id=:y_t) # Placeholder for observation placeholder(y_t, :y_t) # Specify recognition model q3 = PosteriorFactorization(z_t, z_tmin1, θ, η, γ, ξ, ids=[:z_t, :z_tmin1, :θ, :η, :γ, :ξ]) algo3 = variationalAlgorithm(q3, free_energy=true) # Compile inference algorithm source_code3 = algorithmSourceCode(algo3, free_energy=true) eval(Meta.parse(source_code3)); # println(source_code3) # + # Inference parameters num_iterations = 10 # Initialize marginal distribution and observed data dictionaries data = Dict() marginals = Dict() # Initialize free energy tracking array free_energy_trn = zeros(T_trn, num_iterations) # Initialize arrays of parameterizations params_z = (zeros(2,T_trn+1), repeat(.1 .*float(eye(2)), outer=(1,1,T_trn+1))) params_θ = (ones(2,T_trn+1), repeat(.1 .*float(eye(2)), outer=(1,1,T_trn+1))) params_η = (2*ones(1,T_trn+1), 1e2 *ones(1,T_trn+1)) params_γ = (1e8*ones(1,T_trn+1), 1e3*ones(1,T_trn+1)) params_ξ = (1e8*ones(1,T_trn+1), 1e1*ones(1,T_trn+1)) # Start progress bar p = Progress(T_trn, 1, "At time ") # Perform inference at each time-step for t = 1:T_trn # Update progress bar update!(p, t) # Initialize marginals marginals[:z_tmin1] = ProbabilityDistribution(Multivariate, GaussianMeanPrecision, m=params_z[1][:,t], w=params_z[2][:,:,t]) marginals[:z_t] = ProbabilityDistribution(Multivariate, GaussianMeanPrecision, m=params_z[1][:,t], w=params_z[2][:,:,t]) marginals[:θ] = ProbabilityDistribution(Multivariate, GaussianMeanPrecision, m=params_θ[1][:,t], w=params_θ[2][:,:,t]) marginals[:η] = ProbabilityDistribution(Univariate, GaussianMeanPrecision, m=params_η[1][1,t], w=params_η[2][1,t]) marginals[:γ] = ProbabilityDistribution(Univariate, Gamma, a=params_γ[1][1,t], b=params_γ[2][1,t]) marginals[:ξ] = ProbabilityDistribution(Univariate, Gamma, a=params_ξ[1][1,t], b=params_ξ[2][1,t]) data = Dict(:y_t => output_trn[t], :u_t => input_trn[t], :m_z => params_z[1][:,t], :w_z => params_z[2][:,:,t], :m_θ => params_θ[1][:,t], :w_θ => params_θ[2][:,:,t], 
:m_η => params_η[1][1,t], :w_η => params_η[2][1,t], :a_γ => params_γ[1][1,t], :b_γ => params_γ[2][1,t], :a_ξ => params_ξ[1][1,t], :b_ξ => params_ξ[2][1,t]) # Iterate variational parameter updates for i = 1:num_iterations # Update parameters stepη!(data, marginals) stepθ!(data, marginals) # Update states stepz_t!(data, marginals) stepz_tmin1!(data, marginals) # Update noise stepγ!(data, marginals) stepξ!(data, marginals) # Compute free energy free_energy_trn[t, i] = freeEnergy(data, marginals) end # Store current parameterizations of marginals params_z[1][:,t+1] = unsafeMean(marginals[:z_t]) params_z[2][:,:,t+1] = marginals[:z_t].params[:w] params_θ[1][:,t+1] = unsafeMean(marginals[:θ]) params_θ[2][:,:,t+1] = marginals[:θ].params[:w] params_η[1][1,t+1] = unsafeMean(marginals[:η]) params_η[2][1,t+1] = marginals[:η].params[:w] params_γ[1][1,t+1] = marginals[:γ].params[:a] params_γ[2][1,t+1] = marginals[:γ].params[:b] params_ξ[1][1,t+1] = marginals[:ξ].params[:a] params_ξ[2][1,t+1] = marginals[:ξ].params[:b] end # + # Prediction graph graph4 = FactorGraph() # Autoregressive node @RV z_pred ~ NLatentAutoregressiveX(placeholder(:θ, dims=(3,)), placeholder(:z_tmin1, dims=(2,)), placeholder(:η), placeholder(:u_t), placeholder(:γ), g=g, id=:z_pred_t) # Inference algorithm q4 = PosteriorFactorization(z_pred, ids=[:z_pred]) algo4 = variationalAlgorithm(q4, free_energy=true) source_code4 = algorithmSourceCode(algo4, free_energy=true) eval(Meta.parse(source_code4)); # + # Initialize free energy tracking array free_energy_pred = zeros(T_val, num_iterations) # Initialize future state arrays params_preds = (zeros(2, T_val), repeat(.1 .*float(eye(2)), outer=(1,1,T_val))) # Start simulation with known output params_preds[1][1,2] = output_val[2] params_preds[1][2,2] = output_val[1] # Start progress bar p = Progress(T_val, 1, "At time ") for t = 3:T_val update!(p, t) # Initialize marginals marginals[:z_pred] = ProbabilityDistribution(Multivariate, GaussianMeanPrecision, 
m=params_preds[1][:,t], w=params_preds[2][:,:,t]) # Clamp data data = Dict(:u_t => input[t], :z_tmin1 => params_preds[1][:,t-1], :θ => params_θ[1][:,end], :η => params_η[1][end], :γ => params_γ[1][end]/params_γ[2][end]) # Iterate variational parameter updates for i = 1:num_iterations # Make prediction stepz_pred!(data, marginals) # Compute free energy free_energy_pred[t, i] = freeEnergy(data, marginals) end # Store current parameterizations of marginals params_preds[1][:,t] = unsafeMean(marginals[:z_pred]) params_preds[2][:,:,t] = marginals[:z_pred].params[:w] end # - # Store predictions for later comparisons results_LARX = Dict() results_LARX["preds"] = params_preds results_LARX["params_z"] = params_z; results_LARX["params_θ"] = params_θ; results_LARX["params_η"] = params_η; results_LARX["params_γ"] = params_γ; results_LARX["params_ξ"] = params_ξ; results_LARX["FE_pred"] = free_energy_pred; results_LARX["FE_trn"] = free_energy_trn; # ### Visualize results # + # Mean and std dev of predictions predictions_mean = params_preds[1][1,:] predictions_std = sqrt.(inv.(params_preds[2][1,1,:])) # Subsample for visualization ss = 40 # viz_ix = 50000:ss:60000 viz_ix = 1024:ss:40000 # Plot predictions p230 = scatter(viz_ix, output[viz_ix], color="black", label="observations", xlabel="time (t)", ylims=[-.4, .4], legend=:topleft) plot!(viz_ix, predictions_mean[viz_ix], ribbon=[predictions_std[viz_ix], predictions_std[viz_ix]], color="red", label="predictions") # - Plots.savefig(p230, "figures/simulations_larx.png") # + # Compute prediction error pred_error = (predictions_mean[3:end] .- output_val[3:end]).^2 # Subsample for visualization ss = 10 viz_ix = 1:ss:40000 # Scatter error over time p240 = scatter(viz_ix, pred_error[viz_ix], color="black", xlabel="time (t)", ylabel="Prediction error", ylims=[1e-10, 1e0], label="", yscale=:log10) # - Plots.savefig(p240, "figures/sim-error_larx.png") # + # Subsample for visualization ss = 10 viz_ix = 1:ss:40000 # Scatter error over time p24 
= plot(viz_ix, free_energy_trn[viz_ix,end], color="black", xlabel="time (t)", ylabel="F[q]", label="", title="Free energy at training time")

# +
# Subsample for visualization
ss = 10
viz_ix = 1:ss:40000

# Scatter error over time
p24 = plot(viz_ix, free_energy_pred[viz_ix,end], color="black", xlabel="time (t)", ylabel="F[q]", label="", title="Free energy of predictions")
# -

# ### Baseline: offline sigmoid network NARX model
#
# As a baseline, I ran a nonlinear ARX model using Matlab's System Identification Toolbox. Parameters are estimated using a Prediction Error Minimisation (PEM) procedure. It simultaneously estimates a static nonlinearity that I'm modelling with a sigmoid network. I've used 4 units to keep the comparison with the 4-coefficient NLARX model fair.

using MAT
results_NARX = matread("results/results_narx_sigmoidnet4_simulation.mat")

# +
# Subsample for visualization
ss = 10
viz_ix = 1:ss:40000

# Plot predictions
p402 = scatter(viz_ix, output[viz_ix], label="observations", ylims=[-.4, .4], color="black")
plot!(viz_ix, results_NARX["pred_states"][viz_ix], label="predictions", xlabel="time (t)", color="red")

# +
# Subsample for visualization
ss = 40
viz_ix = 1:ss:40000

# Scatter error over time
p403 = scatter(viz_ix, results_NARX["pred_error"][viz_ix], color="black", xlabel="time (t)", ylims=[1e-10, 1e0], ylabel="Prediction error", label="", yscale=:log10)
# -

# ## Comparison

# +
# Store prediction error for each step ahead
prederror_LARX = (results_LARX["preds"][1][1,3:end] .- output_val[3:end]).^2
prederror_NLARX = (results_NLARX["preds"][1][1,3:end] .- output_val[3:end]).^2
prederror_NARX = results_NARX["pred_error"]

println("MSE FEM-LARX = "*string(mean(prederror_LARX)))
println("MSE FEM-NLARX = "*string(mean(prederror_NLARX)))
println("MSE PEM-NARX = "*string(mean(prederror_NARX)))

# +
# Joint plot of prediction errors

# Subsample for visualization
ss = 40
viz_ix = 1:ss:40000

# Scatter error over time
p71 = plot(viz_ix, output[viz_ix], label="",
color="blue", alpha=0.5, ylims=[-0.2 0.2]) plot!(viz_ix, results_NLARX["preds"][1][1,viz_ix], color="black", alpha=0.9, xlabel="", ylabel="", label="FEM-NLARX", legend=:topleft) p72 = plot(viz_ix, output[viz_ix], label="", color="blue", alpha=0.5, ylims=[-0.2 0.2]) plot!(viz_ix, results_LARX["preds"][1][1,viz_ix], color="black", alpha=0.9, xlabel="", ylabel="", label="FEM-LARX", legend=:topleft) p73 = plot(viz_ix, output[viz_ix], label="", color="blue", alpha=0.5, ylims=[-0.2 0.2]) plot!(viz_ix, results_NARX["pred_states"][viz_ix], color="black", alpha=0.9, xlabel="", ylabel="output", label="PEM-NARX", legend=:topleft) p70 = plot(p73, p72, p71, layout=(1,3), size=(1200,300)) # - savefig(p70, "figures/sim-states-comparison.png") # + # Joint plot of prediction errors prederror_LARX = (results_LARX["preds"][1][1,3:end] .- output_val[3:end]).^2 prederror_NLARX = (results_NLARX["preds"][1][1,3:end] .- output_val[3:end]).^2 prederror_NARX = results_NARX["pred_error"] # Subsample for visualization ss = 40 viz_ix = 1:ss:40000 # Scatter error over time p81 = plot(viz_ix, results_NLARX["preds"][1][1,viz_ix], color="purple", alpha=0.9, xlabel="", ylabel="", label="predictions", title="FEM-NLARX", legend=:topleft) plot!(viz_ix, prederror_NLARX[viz_ix], color="black", alpha=0.9, xlabel="time (t)", ylabel="", label="error", yticks=:none) p82 = plot(viz_ix, results_LARX["preds"][1][1,viz_ix], color="purple", alpha=0.9, xlabel="", label="predictions", title="FEM-LARX", legend=:topleft) plot!(viz_ix, prederror_LARX[viz_ix], color="black", alpha=0.9, xlabel="time (t)", ylabel="", label="error", yticks=:none) p83 = plot(viz_ix, results_NARX["pred_states"][viz_ix], color="purple", alpha=0.9, xlabel="", ylabel="", label="predictions", title="PEM-NARX", legend=:topleft) plot!(viz_ix, prederror_NARX[viz_ix], color="black", alpha=0.9, xlabel="time (t)", ylabel="", label="error") p80 = plot(p83, p82, p81, layout=(1,3), size=(1200,300)) # - savefig(p80, "figures/simulation-comparison.png") 
# + # Joint plot of prediction errors # Subsample for visualization ss = 40 viz_ix = 1:ss:40000 # Scatter error over time p61 = plot(viz_ix, prederror_NLARX[viz_ix], color="black", alpha=0.9, xlabel="time (t)", title="", ylabel="", label="error", yticks=:none, ylims=[10e-12, 10e-2], yscale=:log10) p62 = plot(viz_ix, prederror_LARX[viz_ix], color="black", alpha=0.9, xlabel="time (t)", title="", ylabel="", label="error", yticks=:none, ylims=[10e-12, 10e-2], yscale=:log10) p63 = plot(viz_ix, prederror_NARX[viz_ix], color="black", alpha=0.9, xlabel="time (t)", title="", ylabel="", label="error", ylims=[10e-12, 10e-2], yscale=:log10) p60 = plot(p63, p62, p61, layout=(1,3), size=(1200,300)) # - Plots.savefig(p60, "figures/sim-error-comparison.png")
FEM_simerror.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="--60e6djzsjv"
# # Modeling ODE with only absorbed solar radiation to predict the change in temperature for 200 years from 1850 and visualizing the model

# + colab={"base_uri": "https://localhost:8080/"} id="XTc-OKVMzs39" outputId="88f776a8-2224-4fcb-b46c-9987d4492ce4"
#from sympy import solve
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import solve_ivp

S = 1368
α = 0.3
absorbed_solar_radiation = S*(1 - α)/4
temp0 = 14
C = 51.0

F = lambda t, s: (1/C) * absorbed_solar_radiation

t_span = [0, 210]
s0 = [temp0]
t_eval = np.arange(0, 210, 50)
sol = solve_ivp(F, t_span, s0, t_eval=t_eval)
sol

# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="g5l-YaCLJlgv" outputId="28cf9dd9-f8f3-47ae-bfec-a0598e9b8c67"
plt.figure(figsize=(12, 4))
plt.subplot(121)
plt.plot(sol.t, sol.y[0])
plt.title('Absorbing Solar Radiation (only)')
plt.xlabel('Years from 1850')
plt.ylabel('Temperature °C')
plt.show()

# + [markdown] id="_3iVBASjOkHV"
# # Extending the model with thermal radiation and visualizing it

# + colab={"base_uri": "https://localhost:8080/"} id="hnNZsVcVOkQ7" outputId="e3113b84-e422-4bd6-92bb-e7458ba83729"
B = 1.3
start_temp = 14 #0->28, default: 14

F1 = lambda t, s: (1/C) * B * (temp0-s)

t_span = [0, 210]
s0 = [start_temp]
t_eval = np.arange(0, 210, 50)
sol = solve_ivp(F1, t_span, s0, t_eval=t_eval)
sol

# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="tK_H16cJQ8s7" outputId="0ba985f5-b270-48e4-fbc1-967c4bcf7c49"
plt.figure(figsize=(12, 4))
plt.subplot(121)
plt.plot(sol.t, sol.y[0])
plt.title('Energy Balance Model (Healthy Earth)')
plt.xlabel('Years from start')
plt.ylabel('Temperature °C')
plt.ylim(0, 30)
plt.show()

# + [markdown] id="scIVMcsnVxPj"
# # Extending the model with Greenhouse Effects
and visualizing it # + colab={"base_uri": "https://localhost:8080/"} id="V31vYyd-VxWI" outputId="aa2645ba-930c-439d-ae7a-2425c02f19dc" def calc_greenhouse_effect(CO2): return forcing_coef*np.log(CO2/CO2_PreIndust) forcing_coef = 5.0 CO2_PreIndust = 280.0 calc_greenhouse_effect(CO2_PreIndust * (1 + np.power((15/220), 3))) # + colab={"base_uri": "https://localhost:8080/"} id="II_jmGvQ0j7y" outputId="dfa73555-68cc-4723-e778-f94f0e8a06aa" F2 = lambda t, s: (1/C) * (B * (temp0-s) + calc_greenhouse_effect(CO2_PreIndust * (1 + np.power((t/220), 3)))) sol = solve_ivp(F2, [0, 210], [start_temp], t_eval=np.arange(0, 210, 50)) sol # + colab={"base_uri": "https://localhost:8080/", "height": 295} id="0fwZaRUP6zS1" outputId="298f8b90-5805-4bce-f8df-6db4e531cc9d" plt.figure(figsize=(12, 4)) plt.subplot(121) plt.plot(sol.t, sol.y[0]) plt.title('Model with CO₂') plt.xlabel('Years from 1850') plt.ylabel('Temperature °C') plt.ylim(10, 20) plt.show() # + id="rq5RXOpl8Raq" tmp_CO2 = [] for i in range(1850, 2021): t_year = i - 1850 CO2_from_1850 = CO2_PreIndust * (1 + np.power((t_year/220), 3)) tmp_CO2.append(CO2_from_1850) tmp_CO2 # + colab={"base_uri": "https://localhost:8080/", "height": 265} id="oKEHNSPX9TrD" outputId="e729c5dc-b6b7-421b-ad23-12a212cffdc8" plt.figure(figsize=(12, 4)) plt.subplot(121) plt.plot(range(1850, 2021), tmp_CO2) plt.show() # + [markdown] id="BJh4YDySAms3" # # Compare with NASA # + colab={"base_uri": "https://localhost:8080/", "height": 423} id="MFFrXPW3AoB2" outputId="55071d1d-72c7-4527-8d93-bfadb1e0574c" import pandas as pd url ='https://data.giss.nasa.gov/gistemp/graphs/graph_data/Global_Mean_Estimates_based_on_Land_and_Ocean_Data/graph.txt' df = pd.read_csv(url, skiprows=3, sep='\s+').drop(0) df # + colab={"base_uri": "https://localhost:8080/", "height": 423} id="UA6HfOX7CAHr" outputId="ee1df730-27d5-4a2d-de58-3e0a615190d1" df['Year'] = df['Year'].astype('float64') df['No_Smoothing'] = df['No_Smoothing'] + 14.15 df # + colab={"base_uri": 
"https://localhost:8080/"} id="Jr-zeWzNF8C6" outputId="4b1a1476-0d0f-4752-d7ba-1115ed4bb904" BB = 1.3 #[0.0, 4.0] CC = 51.0 #[10.0, 200.0] F3 = lambda t, s: (1/CC) * (BB * (temp0-s) + calc_greenhouse_effect(CO2_PreIndust * (1 + np.power((t/220), 3)))) solp4 = solve_ivp(F3, [0, 171], [start_temp], t_eval=np.arange(0, 171, 1)) solp4 # + colab={"base_uri": "https://localhost:8080/", "height": 279} id="Vx23zpSgDGOf" outputId="bb8be20d-038f-456d-f2ea-99b4e1f190eb" plt.figure(figsize=(16, 4)) plt.subplot(121) plt.plot(df['Year'].tolist(), df['No_Smoothing'].tolist(), label='NASA Observations') plt.plot(range(1850, 2021), solp4.y[0], label='Predicted Temperature from model') plt.xlabel('Years') plt.ylabel('Temp °C') plt.legend() plt.show() # + [markdown] id="nm1D_GeQuBPQ" # # Improving the model # + colab={"base_uri": "https://localhost:8080/"} id="ttDjC_12uANe" outputId="6dc5823b-7a11-4220-9713-864c7781d724" def calc_alpha(T, alpha_0=0.3, alpha_i=0.5, delta_T=10.0): if T < -delta_T: return alpha_i elif -delta_T <= T < delta_T: return alpha_i + (alpha_0 - alpha_i) * (T + delta_T) / (2 * delta_T) elif T >= delta_T: return alpha_0 F_final = lambda t, s: ((1/85.0) * (BB * (temp0 - s) + \ calc_greenhouse_effect(CO2_PreIndust * (1 + np.power((t/220), 3))))) * ((1/85.0) * (S * (1 - calc_alpha(t))/4)) solp_final = solve_ivp(F_final, [0, 171], [start_temp], t_eval=np.arange(0, 171, 1)) solp_final # + colab={"base_uri": "https://localhost:8080/", "height": 279} id="z0QAGZwy7oef" outputId="eb0eff84-315f-48a9-d681-c1bf2f42fb93" plt.figure(figsize=(16, 4)) plt.subplot(121) plt.plot(df['Year'].tolist(), df['No_Smoothing'].tolist(), label='NASA Observations') plt.plot(range(1850, 2021), solp_final.y[0], label='Predicted Temperature from improved model') plt.xlabel('Years') plt.ylabel('Temp °C') plt.legend() plt.show()
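# A remark on the improved model above: `F_final` multiplies the greenhouse/relaxation term by the absorbed-solar term, and `calc_alpha` is evaluated at the time `t` rather than at the current temperature. A conventional zero-dimensional energy balance instead *adds* the forcings and lets the albedo depend on temperature. The sketch below shows that additive form; it reuses the illustrative parameter values from this notebook, and the offset `A` is a constant introduced here (an assumption, not part of the notebook) so that the system sits in equilibrium at `temp0` when the extra CO₂ forcing is zero.

```python
# Additive zero-dimensional energy balance with temperature-dependent albedo:
#   C * dT/dt = S*(1 - alpha(T))/4 - (A + B*T) + F_CO2(t)
# Parameter values are illustrative, taken from the cells above.
import numpy as np
from scipy.integrate import solve_ivp

S = 1368.0            # solar constant, W/m^2
C = 51.0              # effective heat capacity
B = 1.3               # climate feedback parameter, W/(m^2 K)
temp0 = 14.0          # pre-industrial equilibrium temperature, °C
CO2_PreIndust = 280.0
forcing_coef = 5.0

def alpha(T, alpha_0=0.3, alpha_i=0.5, delta_T=10.0):
    """Ice-albedo feedback: albedo is a function of temperature, not time."""
    if T < -delta_T:
        return alpha_i
    if T < delta_T:
        return alpha_i + (alpha_0 - alpha_i) * (T + delta_T) / (2 * delta_T)
    return alpha_0

def co2_forcing(t):
    # Same cubic CO2 growth scenario as in the notebook (t in years since 1850)
    return forcing_coef * np.log((CO2_PreIndust * (1 + (t / 220.0) ** 3)) / CO2_PreIndust)

# Choose A so that the balance is exactly zero at T = temp0 with no CO2 forcing
A = S * (1 - alpha(temp0)) / 4 - B * temp0

def dTdt(t, T):
    return (S * (1 - alpha(T[0])) / 4 - (A + B * T[0]) + co2_forcing(t)) / C

sol = solve_ivp(dTdt, [0, 171], [temp0], t_eval=np.arange(0, 171, 1))
```

Because the forcings are additive, the model starts at its equilibrium `temp0` and warms gradually as the CO₂ forcing ramps up, which makes the role of each term easy to inspect.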
Project/Weather_Prediction.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %load_ext autoreload # %autoreload 2 # # Get form # # *Getting the form of a molecular system* import molsysmt as msm item = msm.demo_systems.files['1sux.pdb'] msm.get_form(item) item = msm.demo_systems.files['1sux.mmtf'] msm.get_form(item) item1 = msm.demo_systems.files['1sux.pdb'] item2 = msm.demo_systems.files['1sux.mmtf'] msm.get_form([item1, item2]) # + import numpy as np item = np.zeros(shape=[10,4,3])*msm.puw.unit('angstroms') msm.get_form(item) # - item = np.zeros(shape=[10,4,3])*msm.puw.unit('nm') msm.get_form(item) msm.get_form('pdbid:2LAO') msm.get_form('aminoacids3:ACEALAGLYVALNME') msm.get_form('aminoacids1:ALYDERRRT') msm.get_form('2LAO') msm.get_form('ACEALAGLYVALNME') msm.get_form('ALYDERRRT') pdb_text = ('HETATM 3274 C1 BEN A 302 -9.410 30.002 12.405 1.00 61.32 C \n' 'HETATM 3275 C2 BEN A 302 -10.677 29.482 12.626 1.00 58.40 C \n' 'HETATM 3276 C3 BEN A 302 -10.836 28.180 13.091 1.00 49.12 C \n' 'HETATM 3277 C4 BEN A 302 -9.725 27.387 13.331 1.00 56.99 C \n' 'HETATM 3278 C5 BEN A 302 -8.454 27.906 13.109 1.00 53.41 C \n' 'HETATM 3279 C6 BEN A 302 -8.298 29.207 12.650 1.00 55.79 C \n' 'HETATM 3280 C BEN A 302 -9.255 31.315 11.933 1.00 63.37 C \n' 'HETATM 3281 N1 BEN A 302 -8.925 31.552 10.675 1.00 73.79 N \n' 'HETATM 3282 N2 BEN A 302 -9.382 32.348 12.740 1.00 62.54 N ') print(pdb_text) msm.get_form('pdb:'+pdb_text) msm.get_form(pdb_text) msm.view(pdb_text, standardize=False)
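# The calls above show that `msm.get_form` disambiguates a string from its prefix tag (`pdbid:`, `aminoacids3:`, `pdb:`) or, failing that, by guessing from its content or file extension. The dispatcher below is a toy sketch of that idea only — `guess_form` and its return labels are hypothetical and are not molsysmt's actual implementation or form names:

```python
# Toy form detector mimicking the string conventions demonstrated above.
# Hypothetical helper; not part of the molsysmt API.
def guess_form(item: str) -> str:
    # Explicit prefix tags take priority (order matters: 'pdbid:' before 'pdb:')
    prefixes = {
        'pdbid:': 'string:pdb_id',
        'aminoacids3:': 'string:aminoacids3',
        'aminoacids1:': 'string:aminoacids1',
        'pdb:': 'string:pdb_text',
    }
    for prefix, form in prefixes.items():
        if item.startswith(prefix):
            return form
    # File-extension heuristics
    if item.lower().endswith('.pdb'):
        return 'file:pdb'
    if item.lower().endswith('.mmtf'):
        return 'file:mmtf'
    # Content heuristics when no explicit tag is given
    if 'ATOM' in item or 'HETATM' in item:
        return 'string:pdb_text'
    if len(item) == 4 and item[0].isdigit():
        return 'string:pdb_id'
    return 'string:aminoacids'

guess_form('pdbid:2LAO')  # 'string:pdb_id'
```

This mirrors why the untagged forms (`'2LAO'`, raw PDB text) still resolve correctly: a four-character code starting with a digit looks like a PDB id, and text containing `HETATM` records looks like PDB file content.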
docs/contents/Get_form.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python3 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Pandas # # ### Instructions # # This assignment will be done completely inside this Jupyter notebook with answers placed in the cell provided. # # All python imports that are needed shown. # # Follow all the instructions in this notebook to complete these tasks. # # Make sure the CSV data files is in the same folder as this notebook - alumni.csv, groceries.csv # + gather={"logged": 1616864891337} # Imports needed to complete this assignment import pandas as pd # - # ### Question 1 : Import CSV file (1 Mark) # # # Write code to load the alumni csv dataset into a Pandas DataFrame called 'alumni'. # # + gather={"logged": 1616864893867} #q1 (1) #reading csv alumni = pd.read_csv("alumni.csv") alumni # - # ### Question 2 : Understand the data set (5 Marks) # # Use the following pandas commands to understand the data set: a) head, b) tail, c) dtypes, d) info, e) describe # + gather={"logged": 1616864895191} #a) (1) #head:shows first 5 records alumni.head() # + gather={"logged": 1616864895779} #b) (1) #tail:shows last 5 recods alumni.tail() # + gather={"logged": 1616864896204} #c) (1) #dtypes:data types of features/columns alumni.dtypes # + gather={"logged": 1616864896809} #d) (1) #info:full dataframe summary alumni.info() # + gather={"logged": 1616864897568} #e) (1) #describe:shows basic statistical details of numerical columns alumni.describe() # - # ### Question 3 : Cleaning the data set - part A (3 Marks) # # a) Use clean_currency method below to strip out commas and dollar signs from Savings ($) column and put into a new column called 'Savings'. 
# + gather={"logged": 1616864898266}
def clean_currency(curr):
    return float(curr.replace(",", "").replace("$", ""))

clean_currency("$66,000")

# + gather={"logged": 1616864898869}
#a) (2)
#method1
alumni['Savings'] = alumni['Savings ($)'].apply(lambda saving: clean_currency(saving))
alumni

# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1616864899367}
#method 2
#initializing an empty array
savings = []
#for loop to clean all values in Savings ($) and append the clean values to array
for saving in alumni['Savings ($)']:
    savings.append(clean_currency(saving))
#creating a new column Savings and assigning the clean values
alumni["Savings"] = savings
alumni
# -

# b) Uncomment 'alumni.dtypes.Savings' to check that the type change has occurred

# + gather={"logged": 1616864899927}
#b) (1)
alumni.dtypes.Savings
# -

# ### Question 4 : Cleaning the data set - part B (5 Marks)
#
# a) Run the 'alumni["Gender"].value_counts()' to see the incorrect 'M' fields that need to be converted to 'Male'

# + gather={"logged": 1616864900483}
# a) (1)
alumni['Gender'].value_counts()
# -

# b) Now use a '.str.replace' on the 'Gender' column to convert the incorrect 'M' fields. Hint: We must use ^...$ to restrict the pattern to match the whole string.

# + gather={"logged": 1616864901025}
# b) (1)
# regex=True is required: since pandas 2.0, str.replace treats the pattern as a literal string by default
alumni['Gender'].str.replace('^M$', 'Male', regex=True)
# -

# c) That didn't set the alumni["Gender"] column, however.
# You will need to update the column when using the replace command 'alumni["Gender"]=<replace command>', show how this is done below

# + gather={"logged": 1616864901525}
# c) (1)
alumni['Gender'] = alumni['Gender'].str.replace('^M$', 'Male', regex=True)
# -

# d) You can set it directly by using the df.loc command, show how this can be done by using the 'df.loc[row_indexer,col_indexer] = value' command to convert the 'M' to 'Male'

# + gather={"logged": 1616864902022}
# d) (1)
alumni.loc[alumni['Gender'] == 'M', 'Gender'] = 'Male'
# -

# e) Now run the 'value_counts' for Gender again to see the correct columns - 'Male' and 'Female'

# + gather={"logged": 1616864902557}
# e) (1)
alumni['Gender'].value_counts()
# -

# ### Question 5 : Working with the data set (4)
#
# a) get the median, b) mean and c) standard deviation for the 'Salary' column

# + gather={"logged": 1616864903092}
# a)(1)
#median
alumni['Salary'].median()

# + gather={"logged": 1616864903667}
# b)(1)
#mean
alumni['Salary'].mean()

# + gather={"logged": 1616864904322}
# c)(1)
#std
alumni.Salary.std()
# -

# d) identify which alumni paid more than $15000 in fees, using the 'Fee' column

# + gather={"logged": 1616864904951}
# d) (1)
alumni[alumni['Fee'] > 15000] #alumni whose row index is 18
# -

# ### Question 6 : Visualise the data set (4 Marks)
#
# a) Using the 'Diploma Type' column, plot a bar chart and show its value counts.

# + gather={"logged": 1616864905810}
#a) (1)
#bar plot
alumni['Diploma Type'].value_counts().plot(kind = 'bar')
# -

# b) Now create a box plot comparison between 'Savings' and 'Salary' columns

# + gather={"logged": 1616864906306}
#b) (1)
#combined box plot
alumni[['Savings','Salary']].plot(kind = 'box')
# -

# c) Generate a histogram with the 'Salary' column and use 12 bins.

# + gather={"logged": 1616864906913}
#c) (1)
#histogram
alumni['Salary'].plot(kind = 'hist', bins = 12)
# -

# d) Generate a scatter plot comparing 'Salary' and 'Savings' columns.
# + gather={"logged": 1616864918148} #d) (1) #scatter plot alumni.plot(kind = 'scatter', x = 'Savings', y = 'Salary') # - # ### Question 7 : Contingency Table (2 Marks) # # Using both the 'Martial Status' and 'Defaulted' create a contingency table. Hint: crosstab # + gather={"logged": 1616864919055} # Q7 (2) #shows the frequency with which certain groups of data appear pd.crosstab(alumni['Marital Status'], alumni['Defaulted'])
Pandas Assignment.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="-8DWs66YCcg4" # this is based on https://github.com/dimun/pate_torch/blob/master/PATE.ipynb # + [markdown] colab_type="text" id="p-JylkvK_Q5z" # # Private Aggregation of Teacher Ensembles (PATE) # + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="VEuCLJfuevUj" outputId="bf05bed5-3c22-4020-cde0-fa53ce590072" # !pip freeze | grep torch # + [markdown] colab_type="text" id="gLn9eKfe_Q51" # ## Import libraries # + colab={} colab_type="code" id="cyjvipBT_Q52" import torch import numpy as np from torchvision import datasets import torchvision.transforms as transforms from torch.utils.data import Subset # + [markdown] colab_type="text" id="1nI-_HBC_Q55" # ## Load the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html) # # Downloading may take a few moments, and you should see your progress as the data is loading. You may also choose to change the `batch_size` if you want to load more data at a time. 
# + colab={"base_uri": "https://localhost:8080/", "height": 369, "referenced_widgets": ["88c34b495ef84d18868c89d86ba74781", "<KEY>", "b22a21cee9b74cf9bf88335f39f5eafc", "<KEY>", "bd429a24c41f4f0aa03da63558677bce", "3f96f5ab3d6d47bcb7948e36233e4983", "<KEY>", "<KEY>", "<KEY>", "01c73e5d7b3e4c7bae717f0ee51e1058", "<KEY>", "3d2ff1e00e844fc08f54f19cd2dff484", "<KEY>", "5f1007296f3c4071ad9c1ee99643f596", "<KEY>", "<KEY>", "<KEY>", "1a9ee64ef77e4160bc0e59c01099be18", "f2f58cbdbe464dd6b4240425ed69a567", "2a27418390754d9397bd06b49363809a", "596367d5d9a14f6c9a1fd9139dfaa442", "<KEY>", "a73fa9db86344b59afe151470d01326f", "5ed4cdf05f514f6195381c0c6a83df89", "94fc494383184117aa3748673053bbe7", "<KEY>", "<KEY>", "0fadbe1391d741f5982f6ff9a88dfb37", "74e87bd28def4a0187379d04fb4b83ed", "2fc36c2d96ec4ea1b42ce74a40072db1", "<KEY>", "7bac61dd31f64b56bfa1d2d76ef2e7e9"]} colab_type="code" id="xDTtX0cf_Q56" outputId="bb9949c0-d348-4c59-9ec8-0f69b53f70c5" # number of subprocesses to use for data loading num_workers = 0 # how many samples per batch to load batch_size = 32 # convert data to torch.FloatTensor transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))] ) # choose the training and test datasets train_data = datasets.MNIST( root='data', train=True, download=True, transform=transform ) test_data = datasets.MNIST( root='data', train=False, download=True, transform=transform ) # + [markdown] colab_type="text" id="vcgFJx4l_Q59" # Function for returning dataloaders for a specified number of teachers. # + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="dUYhi5o1_Q59" outputId="48bb2c52-29e1-4cf6-9150-50fa8216a410" # number of teachers to essemble num_teachers = 100 def get_data_loaders(train_data, num_teachers=10): """Simple partitioning algorithm that returns the right portion of the data needed by a given teacher out of a certain number of teachers. 
Each teacher model will get a disjoint subset of the training data. """ teacher_loaders = [] data_size = len(train_data) // num_teachers for i in range(num_teachers): indices = list(range(i * data_size, (i+1) * data_size)) subset_data = Subset(train_data, indices) loader = torch.utils.data.DataLoader( subset_data, batch_size=batch_size, num_workers=num_workers ) teacher_loaders.append(loader) return teacher_loaders teacher_loaders = get_data_loaders(train_data, num_teachers) # + [markdown] colab_type="text" id="2MwZOUVw_Q6A" # Define a student training set of 9,000 examples and a test set of 1,000 examples. Use 9,000 samples from the dataset's test subset as unlabeled training points - they will be labeled using the teacher predictions. # + colab={} colab_type="code" id="bJjZ1DYX_Q6B" student_train_data = Subset(test_data, list(range(9000))) student_test_data = Subset(test_data, list(range(9000, 10000))) student_train_loader = torch.utils.data.DataLoader( student_train_data, batch_size=batch_size, num_workers=num_workers ) student_test_loader = torch.utils.data.DataLoader( student_test_data, batch_size=batch_size, num_workers=num_workers ) # + [markdown] colab_type="text" id="K-0his1L_Q6D" # ## Defining models # # We are going to define a single model architecture shared by all the teachers.
# + colab={} colab_type="code" id="Mg7fkeWD_Q6E" import torch.nn as nn import torch.nn.functional as F import torch.optim as optim class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 10, kernel_size=5) self.conv2 = nn.Conv2d(10, 20, kernel_size=5) self.conv2_drop = nn.Dropout2d() self.fc1 = nn.Linear(320, 50) self.fc2 = nn.Linear(50, 10) def forward(self, x): x = F.relu(F.max_pool2d(self.conv1(x), 2)) x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2)) x = x.view(-1, 320) x = F.relu(self.fc1(x)) x = F.dropout(x, training=self.training) x = self.fc2(x) return F.log_softmax(x) # + colab={} colab_type="code" id="w66owE9-_Q6H" device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") def train(model, trainloader, criterion, optimizer, epochs=10, print_every=120): model.to(device) steps = 0 running_loss = 0 for e in range(epochs): # Model in training mode, dropout is on model.train() for images, labels in trainloader: images, labels = images.to(device), labels.to(device) steps += 1 optimizer.zero_grad() output = model.forward(images) loss = criterion(output, labels) loss.backward() optimizer.step() running_loss += loss.item() # + colab={} colab_type="code" id="ItF8KZAv_Q6K" def predict(model, dataloader): outputs = torch.zeros(0, dtype=torch.long).to(device) model.to(device) model.eval() for images, labels in dataloader: images, labels = images.to(device), labels.to(device) output = model.forward(images) ps = torch.argmax(torch.exp(output), dim=1) outputs = torch.cat((outputs, ps)) return outputs # + [markdown] colab_type="text" id="SrACeiNU_Q6N" # ## Training all the teacher models # # Here we define and train the teachers # + colab={"base_uri": "https://localhost:8080/", "height": 103, "referenced_widgets": ["a7019a3be194415abc43ebe967a8417e", "42ffe71cf63f47d2b8196325ff08edb0", "cf7916b493ae4e6bbf64f60286178ed3", "<KEY>", "6ef387f1f9e1424bab89cc16d901009c", "<KEY>", "<KEY>", 
"41bfc86d40da4a7ca22f4a857da32d9c"]} colab_type="code" id="cmiezzAL_Q6O" outputId="9ac64250-10ea-4b74-8d39-e6422462fe50" from tqdm.notebook import trange # Instantiate and train the models for each teacher def train_models(num_teachers): models = [] for t in trange(num_teachers): model = Net() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.003) train(model, teacher_loaders[t], criterion, optimizer) models.append(model) return models models = train_models(num_teachers) # + [markdown] colab_type="text" id="fWK73FIX_Q6R" # ## Aggregated teacher # # This function predict the labels from all the dataset in each of the teachers, then return all the predictions and the maximum votation after adding laplacian noise # + colab={} colab_type="code" id="jyrxvbfc_Q6R" import numpy as np # + colab={} colab_type="code" id="vi48rANh_Q6U" # define standard deviation for noise standard_deviation = 5.0 # + [markdown] colab_type="text" id="JNwBqVpO_Q6X" # # Aggregated teacher # # This function makes the predictions in all the teachers, count the votes and add noise, then returns the votation and the argmax results. 
# + colab={} colab_type="code" id="W_2mvbuc_Q6Y" def aggregated_teacher(models, data_loader, standard_deviation=1.0): preds = torch.torch.zeros((len(models), 9000), dtype=torch.long) print('Running teacher predictions...') for i, model in enumerate(models): results = predict(model, data_loader) preds[i] = results print('Calculating aggregates...') labels = np.zeros(preds.shape[1]).astype(int) for i, image_preds in enumerate(np.transpose(preds)): label_counts = np.bincount(image_preds, minlength=10).astype(float) label_counts += np.random.normal(0, standard_deviation, len(label_counts)) labels[i] = np.argmax(label_counts) return preds.numpy(), np.array(labels) # + colab={"base_uri": "https://localhost:8080/", "height": 88} colab_type="code" id="JtPsYvpJ_Q6b" outputId="7a6b8d41-bea0-4745-aa96-107c5df97982" teacher_models = models preds, student_labels = aggregated_teacher(teacher_models, student_train_loader, standard_deviation) # + [markdown] colab_type="text" id="DF_cMtyN_Q6i" # # Training the student # # Now we will train the student with the aggregated teacher labels # + colab={} colab_type="code" id="nOnkfKCF_Q6i" def student_loader(student_train_loader, labels): for i, (data, _) in enumerate(iter(student_train_loader)): yield data, torch.from_numpy(labels[i*len(data):(i+1)*len(data)]) # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="MWjR-iHn_Q6l" outputId="47e5115f-c2a3-40c3-f3ea-d0b6de7089f2" student_model = Net() criterion = nn.NLLLoss() optimizer = optim.Adam(student_model.parameters(), lr=0.001) epochs = 10 student_model.to(device) steps = 0 running_loss = 0 for e in range(epochs): student_model.train() train_loader = student_loader(student_train_loader, student_labels) for images, labels in train_loader: images, labels = images.to(device), labels.to(device) steps += 1 optimizer.zero_grad() output = student_model.forward(images) loss = criterion(output, labels) loss.backward() optimizer.step() running_loss += 
loss.item() if steps % 50 == 0: test_loss = 0 accuracy = 0 student_model.eval() with torch.no_grad(): for images, labels in student_test_loader: images, labels = images.to(device), labels.to(device) log_ps = student_model(images) test_loss += criterion(log_ps, labels).item() ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)) student_model.train() print("Epoch: {}/{}.. ".format(e+1, epochs), "Training Loss: {:.3f}.. ".format(running_loss/len(student_train_loader)), "Test Loss: {:.3f}.. ".format(test_loss/len(student_test_loader)), "Test Accuracy: {:.3f}".format(accuracy/len(student_test_loader))) running_loss = 0 # + [markdown] colab_type="text" id="tnDXbz1dyy13" # # Privacy Analysis # + [markdown] colab_type="text" id="dyoxKjqni1q1" # In Papernot et al. (2018), the authors detail how data-dependent differential-privacy bounds can be computed to estimate the privacy cost of training the student. They provide a script that performs this analysis from the vote counts and the standard deviation of the noise used.
# + colab={} colab_type="code" id="Ku2DLGSMi2v3" # !git clone https://github.com/tensorflow/privacy # + colab={} colab_type="code" id="4Z6WQvctjj-2" # %cd privacy/research/pate_2018/ICLR2018 # + colab={"base_uri": "https://localhost:8080/", "height": 136} colab_type="code" id="E8-GzZIJ455H" outputId="df2a8e11-f666-4b02-e99a-71b34407320f" # !ls # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="Q-YB9oMg_vZ-" outputId="cf73b6aa-b340-487c-bc4d-4c395255ea66" preds.shape # + colab={} colab_type="code" id="ln_Vk86q5baY" # put together the counts matrix: clean_votes = [] for image_preds in np.transpose(preds): label_counts = np.bincount(image_preds, minlength=10).astype(float) clean_votes.append(label_counts) clean_votes = np.array(label_counts) # + colab={} colab_type="code" id="Y05DIkjJ6HsT" with open('clean_votes.npy', 'wb') as file_obj: np.save(file_obj, clean_votes) # + colab={} colab_type="code" id="dD_OuAoMAgE3" #with open('labels_for_dump.npy', 'wb') as file_obj: # np.save(file_obj, preds) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="0u-48maABIzy" outputId="9db2c4cb-abf3-4f25-f7d1-66632fe3af2d" standard_deviation # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="s9WSMiey478R" outputId="d2771a69-daa6-4d17-aa50-aaec90df6217" # !python smooth_sensitivity_table.py --sigma2=5.0 --counts_file=clean_votes.npy --delta=1e-5 # + [markdown] colab_type="text" id="EoCa4P73Eyda" # Data Independent Epsilon: 34.226 # # Data Dependent Epsilon: 6.998
chapter11/Securing Models Against Attack.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from IPython.display import HTML HTML(''' <script> code_show=false; function code_toggle(){ if(code_show){$('.prompt, .input, .output_stderr, .output_error').hide();} else{$('.input, .prompt, .output_stderr, .output_error').show();} code_show=!code_show; } function initialize(){ document.body.style.fontFamily='Palatino'; var output=$('.output_subarea.output_text.output_stream.output_stdout'); $.merge(output,$('.output_subarea.output_text.output_result')); for(var i=0;i<output.length;i++)for(var j=0;j<output[i].children.length;j++) output[i].children[j].style.fontFamily='Palatino'; code_toggle(); } $(document).ready(initialize); </script> Click <a href="javascript:code_toggle()">here</a> to show/hide codes in this notebook. ''') # ### 7.3.6 Exercises # #### Exercise 6. # Consider the open-loop transfer function of a process: $$G(p)={\frac {1}{p(1+0.1p)(1+p)}}$$ # # Determine *graphically* the PD controller so as to optimize the settling time of the system, while guaranteeing a velocity error of 10% and a phase margin of 45°. # ##### Solution: # # Setting the velocity error to 10% fixes the gain: $\varepsilon_v={\frac{1}{K_P}}=0.1$, hence $K_P=10$.
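This gain can be checked numerically. Here is a small sketch using only NumPy (independent of the control toolbox used below): the velocity error constant is $K_v=\lim_{s\to 0} s\,K_P G(s)$, so evaluating $s\,K_P G(s)$ at a very small $s$ should give $K_v\approx 10$ and a ramp error $1/K_v\approx 0.1$:

```python
import numpy as np

def L(s, Kp=10):
    """Open-loop transfer function Kp*G(s) = Kp / (s (1+0.1s)(1+s))."""
    return Kp / (s * (1 + 0.1 * s) * (1 + s))

s = 1e-9           # approximate the limit s -> 0
Kv = s * L(s)      # velocity error constant
ev = 1 / Kv        # steady-state error to a unit ramp
print(Kv, ev)      # Kv ~ 10, ev ~ 0.1 (i.e. 10 %)
```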
# # To choose the derivative time constant, we have two possibilities: # + from IPython.display import display, Markdown from control.matlab import * # Python Control Systems Toolbox (compatibility with MATLAB) import numpy as np # Library to manipulate arrays and matrices import matplotlib.pyplot as plt # Library to create figures and plots import math # Library for some mathematical operations import ReguLabFct as rlf # Helper library for the Gramme regulation laboratory # + # Open-loop transfer function G1 = tf(1, [1, 0]) G2 = tf(1, [0.1, 1]) G3 = tf(1, [1, 1]) G = G1*G2*G3 # G from the problem statement MP = 45 # Required phase margin Kp = 10 # - # ##### $1^{st}$ method: cancellation of the dominant pole # # $\tau_d=1$, and we check the performance obtained for the corrected system. # + tD = 1 Corr = Kp*tf([tD, 1],1) fig = plt.figure("Nichols",figsize=(10,5)) ax = fig.subplots() rlf.nichols(G, grid = False, labels=['G(p)'], NameOfFigure = "Nichols") rlf.nichols(Kp*G, grid = False, labels=['10*G(p)'], NameOfFigure = "Nichols", linestyle = '-.') rlf.nichols(Corr*G, grid = False, labels=['10*(1+p)*G(p)'], NameOfFigure = "Nichols", linestyle = '--') ax.plot(-180+MP, 0,'k+'); # ; suppresses matplotlib output lines # - # The performance of the corrected system is: # + fig = plt.figure("Step Response",figsize=(10,5)) ax = fig.subplots() # Uncorrected system # ------------------ Gbf = feedback(G,1) info = rlf.info() rlf.stepWithInfo(Gbf, info, NameOfFigure="Step Response", sysName='SystInit') # Returns all the step-response info ep = (1-info.DCGain)*100 # Position error gm, pm, wg, wp = margin(G) # Extract the gain margin (Gm) and the phase margin (Pm) print("\nUncorrected system") print("------------------") print(f"""Phase margin = {pm:.3f}° DC gain = {info.DCGain:.3f} Rise Time = {info.RiseTime:.3f} s Peak amplitude = {info.Peak:.3f} Overshoot = {info.Overshoot:.3f}% Settling Time = {info.SettlingTime:.3f} s """)
# Corrected system # ---------------- Gbf_PD = feedback(Corr*G,1) info_PD = rlf.info() rlf.stepWithInfo(Gbf_PD, info_PD, NameOfFigure="Step Response", sysName='SystCorr', linestyle='-.') # Returns all the step-response info ep_PD = (1-info_PD.DCGain)*100 # Position error gm, pm, wg, wp = margin(Corr*G) # Extract the gain margin (Gm) and the phase margin (Pm) print("\nCorrected system") print("----------------") print(f"""Phase margin = {pm:.3f}° DC gain = {info_PD.DCGain:.3f} Rise Time = {info_PD.RiseTime:.3f} s Peak amplitude = {info_PD.Peak:.3f} Overshoot = {info_PD.Overshoot:.3f}% Settling Time = {info_PD.SettlingTime:.3f} s """) # Add some details ax.set_xlim(0, 10); # Zoom on the region of interest ax.arrow(info.SettlingTime, 0, -(info.SettlingTime-info_PD.SettlingTime), 0, length_includes_head=True, width=.005, head_width=0.05, head_length=0.05, color='g'); ax.text(info.SettlingTime-(info.SettlingTime-info_PD.SettlingTime)/2, 0.05, 'Improvement of the\nsettling time', verticalalignment='bottom', horizontalalignment='center', color='g'); # - # ##### $2^{nd}$ method: frequency-domain placement # # With $K_P$ fixed, let us plot $K_P*G(p)$: the system is unstable, and we will try to stabilize it through the $(1+\tau_D*p)$ term. # # At the pulsation $\omega=\frac{10}{\tau_D}$, this term shifts the magnitude by +20 dB and introduces a phase lead of +90° (+84° to be precise). # # Since we want the dash-dotted system, once corrected, to pass through the point (0 dB, -135°), let us look for the point whose phase is -135°-84°; its magnitude is -23 dB. It will therefore be corrected by the $(1+\tau_D*p)$ term and will pass *approximately* through the desired point (0 dB, -135°). # # On the dashed curve, the point (-23 dB, -135°-84°) corresponds to a pulsation of 9.9 rad/s, hence: $\tau_D=\frac{10}{\omega_{at\,-135°-84°}}=\frac{10}{9.9}=1.01s$.
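The "+20 dB and +84°" figures for the derivative term are easy to verify numerically: at $\omega=10/\tau_D$ the frequency response of $(1+\tau_D p)$ is $1+10j$, whatever the value of $\tau_D$. A NumPy sketch:

```python
import numpy as np

tau_d = 1.01                   # value read off the Nichols chart above
w = 10 / tau_d                 # pulsation where the effect is evaluated
z = 1 + 1j * w * tau_d         # frequency response of (1 + tau_d*p) at p = jw, i.e. 1 + 10j
gain_db = 20 * np.log10(abs(z))
phase_deg = np.degrees(np.angle(z))
print(f"{gain_db:.2f} dB, {phase_deg:.1f} deg")  # ~ +20.04 dB, +84.3 deg
```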
# + # Read the phase mag, w = rlf.getValues(G, -180+MP-84, NameOfFigure="Bode Gbo_P") # tD tD = 10/w # = recommended tD display(Markdown(rf"$\tau_D$={tD:.2f}")) Corr = Kp*tf([tD, 1],1) display(Markdown(r'$C(p)*G(p) = 10*\frac{(1+1.01p)}{p(1+0.1p)(1+p)}$')) fig = plt.figure("Nichols",figsize=(10,5)) ax = fig.subplots() rlf.nichols(G, grid = False, labels=['G(p)'], NameOfFigure = "Nichols") rlf.nichols(Kp*G, grid = False, labels=['Kp*G(p)'], NameOfFigure = "Nichols", linestyle = '-.') rlf.nichols(Corr*G, grid = False, labels=['C(p)*G(p)'], NameOfFigure = "Nichols", linestyle = '--') ax.plot(-180+MP, 0,'k+'); # Add the (+) marker the locus should pass through gm, pm, wg, wp = margin(Corr*G) # Extract the gain margin (Gm) and the phase margin (Pm) print(f"The corrected system has a phase margin of {pm:.2f}° and a gain margin of {gm:.2f} dB.") # - # Since the phase margin is still too large, we will increase $\tau_D$ by trial and error. # + tD = 15/w # tD found by trial and error display(Markdown(rf"$\tau_D$={tD:.2f}")) Corr2 = Kp*tf([tD, 1],1) display(Markdown(r'$C_2(p)*G(p) = 10*\frac{(1+1.51p)}{p(1+0.1p)(1+p)}$')) fig = plt.figure("Nichols",figsize=(10,5)) ax = fig.subplots() rlf.nichols(G, grid = False, labels=['G(p)'], NameOfFigure = "Nichols") rlf.nichols(Kp*G, grid = False, labels=['Kp*G(p)'], NameOfFigure = "Nichols", linestyle = '-.') rlf.nichols(Corr*G, grid = False, labels=['C(p)*G(p)'], NameOfFigure = "Nichols", linestyle = '--') rlf.nichols(Corr2*G, grid = False, labels=['C2(p)*G(p)'], NameOfFigure = "Nichols", linestyle = ':') ax.plot(-180+MP, 0,'k+'); # Add the (+) marker the locus should pass through gm, pm, wg, wp = margin(Corr2*G) # Extract the gain margin (Gm) and the phase margin (Pm) print(f"The corrected system has a phase margin of {pm:.2f}° and a gain margin of {gm:.2f} dB.") # - # The performance of the corrected system is: # + fig = plt.figure("Step Response",figsize=(10,5)) ax = fig.subplots() #
Uncorrected system # ------------------ Gbf = feedback(G,1) info = rlf.info() rlf.stepWithInfo(Gbf, info, NameOfFigure="Step Response", sysName='SystInit') # Returns all the step-response info ep = (1-info.DCGain)*100 # Position error gm, pm, wg, wp = margin(G) # Extract the gain margin (Gm) and the phase margin (Pm) print("\nUncorrected system") print("------------------") print(f"""Phase margin = {pm:.3f}° DC gain = {info.DCGain:.3f} Rise Time = {info.RiseTime:.3f} s Peak amplitude = {info.Peak:.3f} Overshoot = {info.Overshoot:.3f}% Settling Time = {info.SettlingTime:.3f} s """) # Corrected system # ---------------- Gbf_PD = feedback(Corr2*G,1) info_PD = rlf.info() rlf.stepWithInfo(Gbf_PD, info_PD, NameOfFigure="Step Response", sysName='SystCorr', linestyle='-.') # Returns all the step-response info ep_PD = (1-info_PD.DCGain)*100 # Position error gm, pm, wg, wp = margin(Corr2*G) # Extract the gain margin (Gm) and the phase margin (Pm) print("\nCorrected system") print("----------------") print(f"""Phase margin = {pm:.3f}° DC gain = {info_PD.DCGain:.3f} => Position error = {ep_PD:.3f}% Rise Time = {info_PD.RiseTime:.3f} s Peak amplitude = {info_PD.Peak:.3f} Overshoot = {info_PD.Overshoot:.3f}% Settling Time = {info_PD.SettlingTime:.3f} s """) # Add some details ax.set_xlim(0, 10); # Zoom on the region of interest ax.arrow(info.SettlingTime, 0, -(info.SettlingTime-info_PD.SettlingTime), 0, length_includes_head=True, width=.005, head_width=0.05, head_length=0.05, color='g'); ax.text(info.SettlingTime-(info.SettlingTime-info_PD.SettlingTime)/2, 0.05, 'Improvement of the\nsettling time', verticalalignment='bottom', horizontalalignment='center', color='g'); # + # Measure the velocity error t = linspace(0, 20, 1000) s = t; [y, t, xout] = lsim(Gbf,s,t) # Simulate the ramp response => velocity error [y2, t, xout2] = lsim(Gbf_PD,s,t) # Simulate the ramp response => velocity error plt.figure("Velocity error",figsize=(10,7)) plt.subplot(3,1,1);
plt.plot(t,s); plt.title("The ramp S(t)") plt.subplot(3,1,2); plt.plot(t,y); plt.plot(t, y2, linestyle='-.'); plt.title("The response Y(t) to the ramp S(t)") plt.subplot(3,1,3); plt.plot(t,(s-y)); plt.plot(t, (s-y2), linestyle='-.'); plt.title("The error S(t)-Y(t)") plt.subplots_adjust(hspace=0.5) # Leave some space for the titles ev = s[-1] - y[-1] # Velocity error of the original system ev2 = s[-1] - y2[-1] # Velocity error of the corrected system display(Markdown(rf"The velocity error of the original system is {ev*100:.1f}% and that of the corrected system is {ev2*100:.1f}%.")) # - HTML('''<script>initialize();</script>Click <a href="javascript:code_toggle()">here</a> to show/hide codes in this notebook.''')
Jupyter/ChapitresTheoriques/.ipynb_checkpoints/7.3-checkpoint.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import plotly.express as px from prettytable import PrettyTable import plotly.graph_objects as go import numpy as np # ## Import data df = pd.read_csv("PROCESSED_PairedAnalysis_13datasets_aBSREL-MH_vs_aBSREL.csv") df df.describe() # ## Plot p-values # + # Ignore for now """ # Bar plot for AICc - Unconstrained dataset = [] BASE = [] MH = [] for index, row in df.iterrows(): dataset.append(df["Filename"][index].replace(".BUSTEDS-MH.json","")) BASE.append(df["pvalue_BUSTEDS"][index]) MH.append(df["pvalue_BUSTEDSMH"][index]) #end for #plot fig = go.Figure(data=[ go.Bar(name='BUSTEDS p-value', y=dataset, x=BASE, orientation='h'), go.Bar(name='BUSTEDS-MH p-value', y=dataset, x=MH, orientation='h') ]) # Change the bar mode fig.update_layout(barmode='group') fig.update_traces(marker_line_color='rgb(8,48,107)', marker_line_width=1.5, opacity=0.75) fig.update_layout(title_text='Unconstrained model, p-value comparison') fig.show() """ # + """ fig = go.Figure() fig = px.histogram(df, x="delta p-values", marginal="violin", nbins=len(df.index), title='delta p-values [n=' + str(len(df.index)) + ']') fig.update_traces(marker_color='rgb(158,202,225)', marker_line_color='rgb(0,0,0)', marker_line_width=1.5, opacity=0.75) fig.show() """ # + ## Plots # p-value comparison # AIC comparison # LogL comparison # DH rate comparison # TH rate comparison # SRV CoV?
# - # ## Plot cAIC -- Full adaptive models # + # Bar plot for AICc - Unconstrained dataset = [] MH = [] BASE = [] for index, row in df.iterrows(): dataset.append(df["Filename"][index].replace(".aBSREL-MH.json","")) MH.append(df["aBSREL-MH_FULL_AICc"][index]) BASE.append(df["aBSREL_FULL_AICc"][index]) #end for #plot fig = go.Figure(data=[ go.Bar(name='aBSREL-MH cAIC', y=dataset, x=MH, orientation='h'), go.Bar(name='aBSREL cAIC', y=dataset, x=BASE, orientation='h') ]) # Change the bar mode fig.update_layout(barmode='group') fig.update_traces(marker_line_color='rgb(8,48,107)', marker_line_width=1.5, opacity=0.75) title = 'cAIC values - aBSREL-MH versus aBSREL (Full Adaptive)' fig.update_layout(title_text=title) fig.show() fig.write_image(title.replace(" ", "") + ".png", engine="kaleido") # - # ## Plot cAIC -- MH Full adaptive model versus MH MG94xREV Model # + # Bar plot for AICc - Unconstrained dataset = [] MH_MG94 = [] MH_Unconstrained = [] for index, row in df.iterrows(): dataset.append(df["Filename"][index].replace(".aBSREL-MH.json","")) MH_MG94.append(df["aBSREL-MH_MG94_AICc"][index]) MH_Unconstrained.append(df["aBSREL-MH_FULL_AICc"][index]) #end for #plot fig = go.Figure(data=[ go.Bar(name='aBSREL-MH MG94 cAIC', y=dataset, x=MH_MG94, orientation='h'), go.Bar(name='aBSREL-MH Full adaptive cAIC', y=dataset, x=MH_Unconstrained, orientation='h') ]) # Change the bar mode fig.update_layout(barmode='group') fig.update_traces(marker_line_color='rgb(8,48,107)', marker_line_width=1.5, opacity=0.75) title='cAIC values - aBSREL-MH (Full adaptive) and aBSREL-MH (MG94)' fig.update_layout(title_text=title) fig.show() fig.write_image(title.replace(" ", "") + ".png", engine="kaleido") # - # ## delta cAIC -- Full adaptive models # + tag_pair = [["aBSREL-MH_FULL_AICc", "aBSREL_FULL_AICc"]] import random for tag in tag_pair: dataset = [] delta_value = [] for index, row in df.iterrows(): dataset.append(str(df["Filename"][index]).replace(".aBSREL-MH.json", ""))
delta_value.append(df[tag[0]][index] - df[tag[1]][index]) #end for #plot fig = go.Figure(data=[ go.Bar(y=dataset, x=delta_value, orientation='h') ]) # Change the bar mode fig.update_layout(barmode='group') title = 'delta cAIC -- aBSREL-MH and aBSREL (Full adaptive)' fig.update_layout(title_text=title) fig.update_traces(marker_color='rgb(130,202,225)', marker_line_color='rgb(28,48,107)', marker_line_width=1.5, opacity=0.75) #fig.show() fig.update_layout( autosize=False, width=800, height=600,) fig.show() #output = tag[0].replace("MH_", "") + ' values - BUSTEDS-MH_vs_BUSTEDS' #output = "DELTA_" + output.replace(" ", "") + ".png" #fig.write_image(output) fig.write_image(title.replace(" ", "") + ".png", engine="kaleido") """ Lower AIC is better: 1 - 2 = -1, so negative delta cAIC values indicate that the MH model is the better fit """ # + fig = go.Figure() df2 = pd.DataFrame(delta_value, columns=['delta_cAIC_Unconstrained']) fig = px.histogram(df2, x="delta_cAIC_Unconstrained", marginal="violin", nbins=len(df.index), title='delta-cAIC for Unconstrained Models [n=' + str(len(df.index)) + ']') fig.update_traces(marker_color='rgb(52, 201, 235)', marker_line_color='rgb(0,0,0)', marker_line_width=1.5, opacity=0.75) fig.show() # - """ import plotly.graph_objects as go help_fig = px.scatter(df2, x="sepal_width", y="sepal_length", trendline="ols") x_trend = help_fig["data"][1]['x'] y_trend = help_fig["data"][1]['y'] fig.add_trace(go.Line(x=x_trend, y=y_trend)) """ # ## delta cAIC -- Unconstrained model versus MG94xREV Model # + tag_pair = [["aBSREL-MH_FULL_AICc", "aBSREL-MH_MG94_AICc"]] import random for tag in tag_pair: dataset = [] delta_value = [] for index, row in df.iterrows(): dataset.append(str(df["Filename"][index]).replace(".aBSREL-MH.json", "")) delta_value.append(df[tag[0]][index] - df[tag[1]][index]) #end for #plot fig = go.Figure(data=[ go.Bar(y=dataset, x=delta_value, orientation='h') ]) # Change the bar mode fig.update_layout(barmode='group') title="delta cAIC
-- aBSREL-MH (Full Adaptive) model versus aBSREL-MH (MG94) model" fig.update_layout(title_text=title) fig.update_traces(marker_color='rgb(130,202,225)', marker_line_color='rgb(28,48,107)', marker_line_width=1.5, opacity=0.75) #fig.show() fig.update_layout( autosize=False, width=800, height=600,) fig.show() fig.write_image(title.replace(" ", "") + ".png", engine="kaleido") #output = tag[0].replace("MH_", "") + ' values - BUSTEDS-MH_vs_BUSTEDS' #output = "DELTA_" + output.replace(" ", "") + ".png" #fig.write_image(output) """ Lower AIC is better: 1 - 2 = -1, so negative delta cAIC values indicate that the Unconstrained (Full adaptive) model is the better fit """ # - # ## lnL -- Full Adaptive Models # + # Bar plot for AICc - Unconstrained dataset = [] MH = [] BASE = [] for index, row in df.iterrows(): dataset.append(df["Filename"][index].replace(".aBSREL-MH.json","")) MH.append(df["aBSREL-MH_FULL_LL"][index]) BASE.append(df["aBSREL_FULL_LL"][index]) #end for #plot fig = go.Figure(data=[ go.Bar(name='aBSREL-MH Full adaptive LogL', y=dataset, x=MH, orientation='h'), go.Bar(name='aBSREL Full adaptive LogL', y=dataset, x=BASE, orientation='h') ]) # Change the bar mode fig.update_layout(barmode='group') fig.update_traces(marker_line_color='rgb(8,48,107)', marker_line_width=1.5, opacity=0.75) title="lnL values - aBSREL-MH and aBSREL (Full adaptive)" fig.update_layout(title_text=title) fig.show() fig.write_image(title.replace(" ", "") + ".png", engine="kaleido") # - # ## lnL values - aBSREL-MH (Full adaptive) and aBSREL-MH (MG94) # + dataset = [] MH_MG94 = [] MH_Unconstrained = [] for index, row in df.iterrows(): dataset.append(df["Filename"][index].replace(".aBSREL-MH.json","")) MH_MG94.append(df["aBSREL-MH_MG94_LL"][index]) MH_Unconstrained.append(df["aBSREL-MH_FULL_LL"][index]) #end for #plot fig = go.Figure(data=[ go.Bar(name='aBSREL-MH MG94 LogL', y=dataset, x=MH_MG94, orientation='h'), go.Bar(name='aBSREL-MH Full Adaptive LogL', y=dataset, x=MH_Unconstrained, orientation='h') ]) # Change the
bar mode fig.update_layout(barmode='group') fig.update_traces(marker_line_color='rgb(8,48,107)', marker_line_width=1.5, opacity=0.75) title="lnL values - aBSREL-MH (Full adaptive) and aBSREL-MH (MG94)" fig.update_layout(title_text=title) fig.show() fig.write_image(title.replace(" ", "") + ".png", engine="kaleido") # - # ## delta Log L -- Full Adaptive Models # + tag_pair = [["aBSREL-MH_FULL_LL", "aBSREL_FULL_LL"]] import random for tag in tag_pair: dataset = [] delta_value = [] for index, row in df.iterrows(): dataset.append(str(df["Filename"][index]).replace(".aBSREL-MH.json", "")) delta_value.append(df[tag[0]][index] - df[tag[1]][index]) #end for #plot fig = go.Figure(data=[ go.Bar(y=dataset, x=delta_value, orientation='h') ]) # Change the bar mode fig.update_layout(barmode='group') title = "delta for lnL -- aBSREL-MH versus aBSREL (Full Adaptive model)" fig.update_layout(title_text=title) fig.update_traces(marker_color='rgb(130,202,225)', marker_line_color='rgb(28,48,107)', marker_line_width=1.5, opacity=0.75) #fig.show() fig.update_layout( autosize=False, width=800, height=600,) fig.show() fig.write_image(title.replace(" ", "") + ".png", engine="kaleido") #output = tag[0].replace("MH_", "") + ' values - BUSTEDS-MH_vs_BUSTEDS' #output = "DELTA_" + output.replace(" ", "") + ".png" #fig.write_image(output) """ Negative delta lnL values indicate convergence problems """ # - # ## delta for lnL -- aBSREL-MH (Full Adaptive) versus aBSREL-MH MG94 # + tag_pair = [["aBSREL-MH_FULL_LL", "aBSREL-MH_MG94_LL"]] import random for tag in tag_pair: dataset = [] delta_value = [] for index, row in df.iterrows(): dataset.append(str(df["Filename"][index]).replace(".aBSREL-MH.json", "")) delta_value.append(df[tag[0]][index] - df[tag[1]][index]) #end for #plot fig = go.Figure(data=[ go.Bar(y=dataset, x=delta_value, orientation='h') ]) # Change the bar mode fig.update_layout(barmode='group') title = "delta for lnL -- aBSREL-MH (Full Adaptive) versus aBSREL-MH (MG94) model"
fig.update_layout(title_text=title) fig.update_traces(marker_color='rgb(130,202,225)', marker_line_color='rgb(28,48,107)', marker_line_width=1.5, opacity=0.75) #fig.show() fig.update_layout( autosize=False, width=800, height=600,) fig.show() fig.write_image(title.replace(" ", "") + ".png", engine="kaleido") #output = tag[0].replace("MH_", "") + ' values - BUSTEDS-MH_vs_BUSTEDS' #output = "DELTA_" + output.replace(" ", "") + ".png" #fig.write_image(output) """ Negative delta LL are convergence problems """ # -
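As a closing aside (not part of the original analysis): a delta cAIC can be converted into a relative likelihood with the standard Akaike formula $\exp(-|\Delta|/2)$, which quantifies how probable the higher-cAIC model is relative to the better one. The delta values below are hypothetical illustrations:

```python
import numpy as np

def relative_likelihood(delta_aic):
    """exp(-|delta|/2): probability of the higher-AIC model relative to the lower-AIC one."""
    return np.exp(-np.abs(np.asarray(delta_aic, dtype=float)) / 2)

deltas = np.array([-0.5, -2.0, -10.0])  # hypothetical delta cAIC (negative = MH better)
print(relative_likelihood(deltas))       # roughly [0.78, 0.37, 0.007]
```

A delta of about -10 or beyond is conventionally read as essentially no support for the worse model.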
scripts/View_PairedAnalysis_For_aBSREL-MH_and_aBSREL.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # [NTDS'19] assignment 1: network science # [ntds'19]: https://github.com/mdeff/ntds_2019 # # [Eda Bayram](https://lts4.epfl.ch/bayram), [EPFL LTS4](https://lts4.epfl.ch) and # [<NAME>](https://people.epfl.ch/nikolaos.karalias), [EPFL LTS2](https://lts2.epfl.ch). # ## Students # # * Team: `25` # * Students: `<NAME>`, `<NAME>`, `<NAME>`, `<NAME>` # ## Rules # # Grading: # * The first deadline is for individual submissions. The second deadline is for the team submission. # * All team members will receive the same grade based on the team solution submitted on the second deadline. # * As a fallback, a team can ask for individual grading. In that case, solutions submitted on the first deadline are graded. # * Collaboration between team members is encouraged. No collaboration between teams is allowed. # # Submission: # * Textual answers shall be short. Typically one to two sentences. # * Code has to be clean. # * You cannot import any other library than we imported. # Note that Networkx is imported in the second section and cannot be used in the first. # * When submitting, the notebook is executed and the results are stored. I.e., if you open the notebook again it should show numerical results and plots. We won't be able to execute your notebooks. # * The notebook is re-executed from a blank state before submission. That is to be sure it is reproducible. You can click "Kernel" then "Restart Kernel and Run All Cells" in Jupyter. # ## Objective # # The purpose of this milestone is to explore a given dataset and represent it as a network by constructing different graphs. In the first section, you will analyze the network properties.
In the second section, you will explore various network models and find the network model that best fits the ones you construct from the dataset. # ## Cora Dataset # # The [Cora dataset](https://linqs.soe.ucsc.edu/node/236) consists of scientific publications classified into one of seven research fields. # # * **Citation graph:** the citation network can be constructed from the connections given in the `cora.cites` file. # * **Feature graph:** each publication in the dataset is described by a 0/1-valued word vector indicating the absence/presence of the corresponding word from the dictionary and its research field, given in the `cora.content` file. The dictionary consists of 1433 unique words. A feature graph can be constructed using the Euclidean distance between the feature vectors of the publications. # # The [`README`](data/cora/README) provides details about the content of [`cora.cites`](data/cora/cora.cites) and [`cora.content`](data/cora/cora.content). # ## Section 1: Network Properties # + import numpy as np import pandas as pd from matplotlib import pyplot as plt # %matplotlib inline # - # ### Utils functions def are_matrix_equals(a1, a2): if a1.shape != a2.shape: return False for index in np.ndindex(a1.shape): if a1[index] != a2[index]: return False return True def array_map(f, *x): return np.array(list(map(f,*x))) def is_matrix_symmetric(a): differences = np.count_nonzero(a - a.transpose()) return differences == 0 def get_index_of_1D(array, element): i, = np.where(array == element) if len(i) == 0: return -1 return i[0] # + def prune_ids_adjacency_matrix(matrix, ids): """set to zeros rows/columns for given ids""" pruned = np.copy(matrix) for i in ids: pruned[i] = 0 pruned[:,i] = 0 return pruned def reduce_ids_adjacency_matrix(matrix, ids): """only keeps given ids""" removed_ids = np.setdiff1d(range(matrix.shape[0]), ids) return reduce_ids_adjacency_matrix_removeids(matrix, removed_ids) def reduce_ids_adjacency_matrix_removeids(matrix, ids):
"""only remove given ids""" pruned = np.delete(matrix, ids, axis=0) pruned = np.delete(pruned, ids, axis=1) return pruned # - def unzip(zipped): return [ i for i, _ in zipped ], [ j for _, j in zipped ] def clean_diagonal(matrix): np.fill_diagonal(matrix, 0) return matrix # ### Question 1: Construct a Citation Graph and a Feature Graph # Read the `cora.content` file into a Pandas DataFrame by setting a header for the column names. Check the `README` file. features_names = ['w'+str(i) for i in range(1433)] column_list = ['uid'] + features_names + ['class_label'] df = pd.read_csv('data/cora/cora.content', delimiter='\t', names=column_list) df.set_index('uid') df.head() # Print out the number of papers contained in each of the reasearch fields. # # **Hint:** You can use the `value_counts()` function. df.class_label.value_counts() # Select all papers from a field of your choice and store their feature vectors into a NumPy array. # Check its shape. my_field = 'Neural_Networks' filtered_data = df[df.class_label == my_field] features = filtered_data[features_names].values features.shape # Let $D$ be the Euclidean distance matrix whose $(i,j)$ entry corresponds to the Euclidean distance between feature vectors $i$ and $j$. # Using the feature vectors of the papers from the field which you have selected, construct $D$ as a Numpy array. # + def euclidean_distance_vectors(a, b): return np.linalg.norm(a-b) def euclidean_distance_matrix(features_matrix): matrix = [[euclidean_distance_vectors(a,b) for b in features_matrix] for a in features_matrix] return np.array(matrix) distance = euclidean_distance_matrix(features) distance.shape # - # Check the mean pairwise distance $\mathbb{E}[D]$. mean_distance = distance.mean() mean_distance # Plot an histogram of the euclidean distances. 
plt.figure(1, figsize=(8, 4)) plt.title("Histogram of Euclidean distances between papers") plt.hist(distance.flatten()); # Now create an adjacency matrix for the papers by thresholding the Euclidean distance matrix. # The resulting (unweighted) adjacency matrix should have entries # $$ A_{ij} = \begin{cases} 1, \; \text{if} \; d(i,j)< \mathbb{E}[D], \; i \neq j, \\ 0, \; \text{otherwise.} \end{cases} $$ # # First, let us choose the mean distance as the threshold. # + def binary_threshold_matrix(matrix, threshold): ''' returns a copy of the input matrix with values set to 1 if under threshold, 0 otherwise''' return np.where(matrix<threshold, 1, 0) threshold = mean_distance A_feature = clean_diagonal(binary_threshold_matrix(distance, threshold)) A_feature.shape # - # Now read the `cora.cites` file and construct the citation graph by converting the given citation connections into an adjacency matrix. # + def create_adjacency_matrix(edges, uids = None, directed = False): if uids is None: uids = np.unique(edges, axis=None) length = uids.shape[0] a = np.zeros((length,length)) for edge in edges: id1 = get_index_of_1D(uids, edge[0]) id2 = get_index_of_1D(uids, edge[1]) if id1 == -1 or id2 == -1: continue a[id1,id2] = 1 if not directed: a[id2,id1] = 1 return a cora_cites = np.genfromtxt('data/cora/cora.cites', delimiter='\t') A_citation_full = create_adjacency_matrix(cora_cites) A_citation_full.shape # - # Get the adjacency matrix of the citation graph for the field that you chose. # You have to appropriately reduce the adjacency matrix of the citation graph. filtered_uids = filtered_data['uid'].values A_citation = create_adjacency_matrix(cora_cites, filtered_uids) A_citation.shape # Check if your adjacency matrix is symmetric. Symmetrize your final adjacency matrix if it's not already symmetric. # + assert is_matrix_symmetric(A_citation) is True np.count_nonzero(A_citation - A_citation.transpose()) # - # Check the shape of your adjacency matrix again. 
A_citation.shape # ### Question 2: Degree Distribution and Moments # What is the total number of edges in each graph? # + def count_edges(adjacency_matrix, directed = False): count = np.count_nonzero(adjacency_matrix) if directed: return count return count/2 num_edges_feature = count_edges(A_feature) num_edges_citation = count_edges(A_citation) print(f"Number of edges in the feature graph: {num_edges_feature}") print(f"Number of edges in the citation graph: {num_edges_citation}") # - # Plot the degree distribution histogram for each of the graphs. # + def compute_nodes_degrees(adjacency_matrix, directed = False): if directed: raise NotImplementedError return np.sum(adjacency_matrix, axis=0) degrees_citation = compute_nodes_degrees(A_citation) degrees_feature = compute_nodes_degrees(A_feature) deg_hist_normalization = np.ones(degrees_citation.shape[0]) / degrees_citation.shape[0] fig, axes = plt.subplots(1, 2, figsize=(16, 4)) axes[0].set_title('Citation graph degree distribution') axes[0].hist(degrees_citation, weights=deg_hist_normalization); axes[1].set_title('Feature graph degree distribution') axes[1].hist(degrees_feature, weights=deg_hist_normalization); # - # Calculate the first and second moments of the degree distribution of each graph. 
# +
def compute_nth_moment(adjacency_matrix, degree=1, directed=False, center=0):
    nodes_degrees = compute_nodes_degrees(adjacency_matrix, directed)
    nodes_degrees_t = array_map(lambda x: (x - center)**degree, nodes_degrees)
    return np.mean(nodes_degrees_t)

cit_moment_1 = compute_nth_moment(A_citation, 1)
cit_moment_2 = compute_nth_moment(A_citation, 2, center=cit_moment_1)
feat_moment_1 = compute_nth_moment(A_feature, 1)
feat_moment_2 = compute_nth_moment(A_feature, 2, center=feat_moment_1)

print(f"1st moment of citation graph: {cit_moment_1} (sqrt = {np.sqrt(cit_moment_1)})")
print(f"2nd moment of citation graph: {cit_moment_2} (sqrt = {np.sqrt(cit_moment_2)})")
print(f"1st moment of feature graph: {feat_moment_1} (sqrt = {np.sqrt(feat_moment_1)})")
print(f"2nd moment of feature graph: {feat_moment_2} (sqrt = {np.sqrt(feat_moment_2)})")
# -

# What information do the moments provide you about the graphs?
# Explain the differences in moments between graphs by comparing their degree distributions.

# **Your answer here:**
# The first moment is the average degree: it tells us how **dense** the network is, ranging from **isolated nodes** (0) to a **complete graph** (number of nodes - 1).
#
# The second central moment (the variance) helps us classify the network: a **regular lattice** has zero variance, a **random network** has a variance close to the average degree (standard deviation ≈ sqrt(average degree)), and a **scale-free network** has an unbounded variance.
#
# This is well reflected in the degree distributions:
# 1. Citation graph: most nodes have a similar, low degree (low variance) and the graph is not very dense (low average degree). The standard deviation is close to the square root of the average degree, so it could be a random network.
# 2. Feature graph: the variance is large because many nodes have a high degree (around 800) while many others have a much smaller one (around 200). This suggests that big hubs exist in the graph, so it could be a scale-free network.
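The mean-versus-variance heuristic used above can be illustrated on synthetic degree sequences (an illustrative sketch, not part of the assignment; the distributions, parameters, and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Degrees in an Erdos-Renyi-like network are approximately Poisson:
# the variance stays close to the mean degree.
er_degrees = rng.poisson(lam=4, size=10_000)

# A heavy-tailed (Zipf) degree sequence: a few huge hubs
# inflate the variance far beyond the mean.
sf_degrees = rng.zipf(a=2.5, size=10_000)

print(f"Poisson: mean={er_degrees.mean():.2f}, var={er_degrees.var():.2f}")
print(f"Zipf:    mean={sf_degrees.mean():.2f}, var={sf_degrees.var():.2f}")
```

For the Poisson sequence the two numbers nearly coincide, while for the heavy-tailed one the variance dwarfs the mean, which is the signature used above to tell a random network from a scale-free one.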
# Select the 20 largest hubs for each of the graphs and remove them. Observe the sparsity pattern of the adjacency matrices of the citation and feature graphs before and after such a reduction.

# +
# 1. get the ids of the 20 largest hubs
# 2. prune the matrix of these ids
NUMBER_OF_HUBS = 20

ids_largest_hubs_citation = np.argpartition(degrees_citation, -NUMBER_OF_HUBS)[-NUMBER_OF_HUBS:]
ids_largest_hubs_feature = np.argpartition(degrees_feature, -NUMBER_OF_HUBS)[-NUMBER_OF_HUBS:]

reduced_A_feature = reduce_ids_adjacency_matrix_removeids(A_feature, ids_largest_hubs_feature)
reduced_A_citation = reduce_ids_adjacency_matrix_removeids(A_citation, ids_largest_hubs_citation)

fig, axes = plt.subplots(2, 2, figsize=(16, 16))
axes[0, 0].set_title('Feature graph: adjacency matrix sparsity pattern')
axes[0, 0].spy(A_feature);
axes[0, 1].set_title('Feature graph without top 20 hubs: adjacency matrix sparsity pattern')
axes[0, 1].spy(reduced_A_feature);
axes[1, 0].set_title('Citation graph: adjacency matrix sparsity pattern')
axes[1, 0].spy(A_citation);
axes[1, 1].set_title('Citation graph without top 20 hubs: adjacency matrix sparsity pattern')
axes[1, 1].spy(reduced_A_citation);
# -

# Plot the new degree distribution histograms.
# + reduced_degrees_feat = compute_nodes_degrees(reduced_A_feature) reduced_degrees_cit = compute_nodes_degrees(reduced_A_citation) deg_hist_normalization_citation = np.ones(reduced_degrees_cit.shape[0])/reduced_degrees_cit.shape[0] deg_hist_normalization_feature = np.ones(reduced_degrees_feat.shape[0])/reduced_degrees_feat.shape[0] fig, axes = plt.subplots(2, 2, figsize=(16, 8)) axes[0,0].set_title('Reduced citation graph degree distribution') axes[0,0].hist(reduced_degrees_cit, weights=deg_hist_normalization_citation); axes[0,1].set_title('Reduced feature graph degree distribution') axes[0,1].hist(reduced_degrees_feat, weights=deg_hist_normalization_feature); # recalling graphs distribution to compare more easily axes[1,0].set_title('Citation graph degree distribution') axes[1,0].hist(degrees_citation, weights=deg_hist_normalization); axes[1,1].set_title('Feature graph degree distribution') axes[1,1].hist(degrees_feature, weights=deg_hist_normalization); # - # Compute the first and second moments for the new graphs. # + reduced_cit_moment_1 = compute_nth_moment(reduced_A_citation, 1) reduced_cit_moment_2 = compute_nth_moment(reduced_A_citation, 2, center = reduced_cit_moment_1) reduced_feat_moment_1 = compute_nth_moment(reduced_A_feature, 1) reduced_feat_moment_2 = compute_nth_moment(reduced_A_feature, 2, center = reduced_feat_moment_1) print("Citation graph first moment:", reduced_cit_moment_1) print("Citation graph second moment:", reduced_cit_moment_2) print("Feature graph first moment: ", reduced_feat_moment_1) print("Feature graph second moment: ", reduced_feat_moment_2) # - # Print the number of edges in the reduced graphs. num_edges_reduced_feature = count_edges(reduced_A_feature) num_edges_reduced_citation = count_edges(reduced_A_citation) print(f'Num edges in reduced feature={num_edges_reduced_feature}') print(f'Num edges in reduced citation={num_edges_reduced_citation}') # Is the effect of removing the hubs the same for both networks? 
Look at the percentage changes for each moment. Which of the moments is affected the most and in which graph? Explain why. # # **Hint:** Examine the degree distributions. # + def show_variation(old_value, new_value, name = 'feature'): variation = (new_value-old_value)/old_value*100 print('{} varied by {}%'.format(name, variation)) return name, variation variations_params = [ (num_edges_feature, num_edges_reduced_feature, 'num_edges_feature'), (num_edges_citation, num_edges_reduced_citation, 'num_edges_citation'), (cit_moment_1, reduced_cit_moment_1, 'moment_1_citation'), (cit_moment_2, reduced_cit_moment_2, 'moment_2_citation'), (feat_moment_1, reduced_feat_moment_1, 'moment_1_feature'), (feat_moment_2, reduced_feat_moment_2, 'moment_2_feature') ] variations = [show_variation(a,b,c) for a,b,c in variations_params] x, heights = unzip(variations) plt.rcdefaults() fig, ax = plt.subplots() y_pos = np.arange(len(x)) ax.barh(y_pos, heights, align='center') ax.set_yticks(y_pos) ax.set_yticklabels(x) ax.invert_yaxis() # labels read top-to-bottom ax.set_xlabel('Variation (% of old value)') plt.show() # - # **Your answer here:** It has more effect on the 2nd moment of the citation graph. Looking at the degree distribution, we can see that the citation graph has a few nodes with a very high degree. Removing them will decrease the variance significantly. On the other hand, the feature graph is less impacted because its degree distribution is more spread out and there are many nodes with high degrees, therefore removing only twenty of them will not significantly reduce the variance. # ### Question 3: Pruning, sparsity, paths # By adjusting the threshold of the euclidean distance matrix, prune the feature graph so that its number of edges is roughly close (within a hundred edges) to the number of edges in the citation graph. 
# +
def dichotomic_search(min_param, max_param, expected_val, param_to_val, epsilon=100, max_loops=50):
    '''param_to_val should be a strictly increasing function of param on [min_param, max_param]'''
    test_param = (min_param + max_param) / 2
    test_value = param_to_val(test_param)
    max_loops = max_loops - 1
    if max_loops == 0:
        print('WARNING: max_loops parameter exceeded -> dichotomic_search interrupted')
        return test_param
    if expected_val - epsilon < test_value < expected_val + epsilon:
        return test_param
    if test_value > expected_val:
        return dichotomic_search(min_param, test_param, expected_val, param_to_val, epsilon, max_loops)
    else:
        return dichotomic_search(test_param, max_param, expected_val, param_to_val, epsilon, max_loops)

threshold_2 = dichotomic_search(
    0,  # zero is the lowest possible bound
    mean_distance,  # we know that the mean distance is already too high
    num_edges_citation,  # our target value (the number of edges in the citation graph)
    lambda x: count_edges(clean_diagonal(binary_threshold_matrix(distance, x))),
    epsilon=5
)

print(f"Optimal threshold is {threshold_2}")

A_feature_pruned = clean_diagonal(binary_threshold_matrix(distance, threshold_2))
num_edges_feature_pruned = count_edges(A_feature_pruned)
print(f"Number of edges in the feature graph: {num_edges_feature}")
print(f"Number of edges in the feature graph after pruning: {num_edges_feature_pruned}")
print(f"Number of edges in the citation graph: {num_edges_citation}")
# -

# Check your results by comparing the sparsity patterns and total number of edges between the graphs.

fig, axes = plt.subplots(1, 2, figsize=(12, 6))
axes[0].set_title('Citation graph sparsity')
axes[0].spy(A_citation);
axes[1].set_title('Feature graph sparsity')
axes[1].spy(A_feature_pruned);

# Let $C_{k}(i,j)$ denote the number of paths of length $k$ from node $i$ to node $j$.
#
# We define the path matrix $P$, with entries
# $P_{ij} = \displaystyle\sum_{k=0}^{N} C_{k}(i,j).$

# Calculate the path matrices for both the citation and the unpruned feature graphs for $N=10$.
#
# **Hint:** Use [powers of the adjacency matrix](https://en.wikipedia.org/wiki/Adjacency_matrix#Matrix_powers).

# +
def compute_N_path_matrix(adjacency_matrix, N, next_n=0, acc=None):
    if acc is None:
        acc = np.zeros(adjacency_matrix.shape).astype('float64')
    else:
        acc = acc.astype('float64')
    mul = np.linalg.matrix_power(adjacency_matrix, 0).astype('float64')  # identity = A^0
    for i in range(next_n, N + 1):
        acc += mul
        mul = mul @ adjacency_matrix
    return acc

N = 10
path_matrix_citation = compute_N_path_matrix(A_citation, N)
path_matrix_feature = compute_N_path_matrix(A_feature, N)
# -

# Check the sparsity pattern for both path matrices.

fig, axes = plt.subplots(1, 2, figsize=(16, 9))
axes[0].set_title('Citation Path matrix sparsity')
axes[0].spy(path_matrix_citation);
axes[1].set_title('Feature Path matrix sparsity')
axes[1].spy(path_matrix_feature);

# Now calculate the path matrix of the pruned feature graph for $N=10$. Plot the corresponding sparsity pattern. Is there any difference?

path_matrix_pruned = compute_N_path_matrix(A_feature_pruned, N)
plt.figure(figsize=(12, 6))
plt.title('Pruned Feature Path matrix sparsity')
plt.spy(path_matrix_pruned);

# **Your answer here:** The pruned feature graph has many fewer edges than the original feature graph, so its path matrix is much sparser: with fewer connecting edges, it takes more steps to reach one node from another.

# Describe how you can use the above process of counting paths to determine whether a graph is connected or not. Is the original (unpruned) feature graph connected?

# **Your answer here:** A graph is connected when there exists a path between any two nodes. In a graph with $N$ nodes, any shortest path has length at most $N-1$, so the graph is connected exactly when the path matrix computed up to $k = N-1$ contains no zeros, i.e. when there is at least one path between any two nodes in the network.
The original feature graph is connected because at N=10, there are no more zeros left in the path matrix. # + def get_matrix_connectivity(adjacency_matrix, N_max = 50, next_n = 0, output_logs = False): ''' returns -1 if not connected or the diameter if connected ''' path_matrix = compute_N_path_matrix(adjacency_matrix, next_n) if np.count_nonzero(path_matrix==0) == 0: if output_logs: print(f"Matrix is connected with paths of max {next_n} step(s)") return next_n if next_n == N_max: if output_logs: print(f"Matrix is not connected") print(f"WARNING: connectivity has been checked for maximum {N_max}-step path") return -1 return get_matrix_connectivity(adjacency_matrix, N_max, next_n + 1, output_logs) def is_matrix_connected(adjacency_matrix, N_max = 50, next_n = 1, output_logs = False): return get_matrix_connectivity(adjacency_matrix, N_max, next_n, output_logs) != -1 print("Checking feature (unpruned) connectivity...") is_matrix_connected(A_feature, output_logs = True) # - # If the graph is connected, how can you guess its diameter using the path matrix? # **Your answer here:** The diameter of the graph is the lowest iteration of computing the path matrix at which there are no more zeros left in the path matrix, since this is equal to the maximum shortest path between nodes in the network. # If any of your graphs is connected, calculate the diameter using that process. diameter = get_matrix_connectivity(A_feature) print(f"The diameter is: {diameter}") # Check if your guess was correct using [NetworkX](https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.distance_measures.diameter.html). # Note: usage of NetworkX is only allowed in this part of Section 1. 
import networkx as nx
feature_graph = nx.from_numpy_matrix(A_feature)
print(f"Diameter according to networkx: {nx.diameter(feature_graph)}")

# ## Section 2: Network Models

# In this section, you will analyze the feature and citation graphs you constructed in the previous section in terms of the network model types.
# For this purpose, you can use the NetworkX library imported below.

import networkx as nx

# Let us create NetworkX graph objects from the adjacency matrices computed in the previous section.

G_citation = nx.from_numpy_matrix(A_citation)
print('Number of nodes: {}, Number of edges: {}'.format(G_citation.number_of_nodes(), G_citation.number_of_edges()))
print('Number of self-loops: {}, Number of connected components: {}'.format(G_citation.number_of_selfloops(), nx.number_connected_components(G_citation)))

# In the rest of this assignment, we will consider the pruned feature graph as the feature network.

G_feature = nx.from_numpy_matrix(A_feature_pruned)
print('Number of nodes: {}, Number of edges: {}'.format(G_feature.number_of_nodes(), G_feature.number_of_edges()))
print('Number of self-loops: {}, Number of connected components: {}'.format(G_feature.number_of_selfloops(), nx.number_connected_components(G_feature)))

# ### Question 4: Simulation with Erdős–Rényi and Barabási–Albert models

# Create an Erdős–Rényi and a Barabási–Albert graph using NetworkX to simulate the citation graph and the feature graph you have. When choosing parameters for the networks, take into account the number of vertices and edges of the original networks.

# The number of nodes should exactly match the number of nodes in the original citation and feature graphs.

assert len(G_citation.nodes()) == len(G_feature.nodes())
n = len(G_citation.nodes())
n

# The number of edges should match the average of the number of edges in the citation and the feature graph.
m = np.round((G_citation.size() + G_feature.size()) / 2) m # How do you determine the probability parameter for the Erdős–Rényi graph? # **Your answer here:** This parameter describes the probability that two nodes in the graph are connected. Therefore, we divide the number of links by the total number of possible links in the network. p = 2*m/(n*(n-1)) G_er = nx.erdos_renyi_graph(n, p, seed = 1) # we put a seed to be able to compare btw different notebooks # Check the number of edges in the Erdős–Rényi graph. print('My Erdos-Rényi network has {} edges.'.format(G_er.size())) # How do you determine the preferential attachment parameter for Barabási–Albert graphs? # **Your answer here:** The preferential attachment parameter describes the number of edges to attach from a new node to existing nodes, which can be approximated by the number of edges per node in the network. q = int(round(m/n)) G_ba = nx.barabasi_albert_graph(n, q, seed=1) # we put a seed to be able to compare btw different notebooks # Check the number of edges in the Barabási–Albert graph. print('My Barabási-Albert network has {} edges.'.format(G_ba.size())) # ### Question 5: Giant Component # Check the size of the largest connected component in the citation and feature graphs. def get_giant_component(graph): Gcc = sorted(nx.connected_component_subgraphs(graph), key=len, reverse=True) return Gcc[0] giant_citation = get_giant_component(G_citation) print('The giant component of the citation graph has {} nodes and {} edges.'.format(giant_citation.number_of_nodes(), giant_citation.size())) giant_feature = get_giant_component(G_feature) print('The giant component of the feature graph has {} nodes and {} edges.'.format(giant_feature.number_of_nodes(), giant_feature.size())) # Check the size of the giant components in the generated Erdős–Rényi graph. 
giant_er = get_giant_component(G_er) print('The giant component of the Erdos-Rényi network has {} nodes and {} edges.'.format(giant_er.number_of_nodes(), giant_er.size())) # Let us match the number of nodes in the giant component of the feature graph by simulating a new Erdős–Rényi network. # How do you choose the probability parameter this time? # # **Hint:** Recall the expected giant component size from the lectures. # **Your answer here:** # Number of nodes in network (N): 818 # Wanted number of nodes in GC (Ng): 117 # That is to say N-Ng (= 701) nodes not in GC... # We know that the probability for a node not to be in Gc is (1-p)^Ng # So we should have N.(1-p)^Ng = N-Ng <=> p = 1-((N-Ng)/N)^(1/Ng) # Finally, p = 0.0013183989472099755 Ng = giant_feature.number_of_nodes() N = n print(f'Number of nodes in network (N): {N}') print(f'Wanted number of nodes in GC (Ng): {Ng}') print(f'That is to say N-Ng (= {N-Ng}) nodes not in GC...') print(f'We know that the probability for a node not to be in Gc is (1-p)**Ng') print(f'So we should have N*(1-p)**Ng = N-Ng <=> p = 1 - ((N-Ng)/N)**(1/Ng)') p_new = 1 - ((N-Ng)/N)**(1/Ng) print(f'Finally, p = {p_new}') # Check the size of the new Erdős–Rényi network and its giant component. G_er_new = nx.erdos_renyi_graph(n, p_new, seed = 1) print('My new Erdos Renyi network that simulates the citation graph has {} edges.'.format(G_er_new.size())) giant_er_new = get_giant_component(G_er_new) print('The giant component of the new Erdos-Rényi network has {} nodes and {} edges.'.format(giant_er_new.number_of_nodes(), giant_er_new.size())) # ### Question 6: Degree Distributions # Recall the degree distribution of the citation and the feature graph. 
fig, axes = plt.subplots(1, 2, figsize=(15, 6)) axes[0].set_title('Citation graph') citation_degrees = sorted([d for n, d in G_citation.degree()], reverse=True) axes[0].hist(citation_degrees); axes[1].set_title('Feature graph') feature_degrees = sorted([d for n, d in G_feature.degree()], reverse=True) axes[1].hist(feature_degrees); # What does the degree distribution tell us about a network? Can you make a prediction on the network model type of the citation and the feature graph by looking at their degree distributions? # **Your answer here:** The degree distribution shows us how many nodes in the network have a given degree, which then allows us to determine a network model we can use to estimate the degree distribution of our network. For example, the above graphs seem to follow a power law distribution because there are many nodes with very low degrees and a significant number of hubs with high degrees. # Now, plot the degree distribution histograms for the simulated networks. fig, axes = plt.subplots(1, 3, figsize=(20, 4)) axes[0].set_title('Erdos-Rényi network') er_degrees = [G_er.degree(n) for n in G_er.nodes()] axes[0].hist(er_degrees); axes[1].set_title('Barabási-Albert network') ba_degrees = [G_ba.degree(n) for n in G_ba.nodes()] axes[1].hist(ba_degrees); axes[2].set_title('new Erdos-Rényi network') er_new_degrees = [G_er_new.degree(n) for n in G_er_new.nodes()] axes[2].hist(er_new_degrees); # In terms of the degree distribution, is there a good match between the citation and feature graphs and the simulated networks? # For the citation graph, choose one of the simulated networks above that match its degree distribution best. Indicate your preference below. # **Your answer here:** The best match to the feature and citation degree distribution graphs is the Barabasi-Albert network because they both have a high number of low-degree nodes and several high-degree hubs. 
The Erdos-Renyi graphs have a more Gaussian shape, which does not describe the degree distribution of our networks very well.

# You can also simulate a network using the configuration model to match its degree distribution exactly. Refer to [Configuration model](https://networkx.github.io/documentation/stable/reference/generated/networkx.generators.degree_seq.configuration_model.html#networkx.generators.degree_seq.configuration_model).
#
# Let us create another network to match the degree distribution of the feature graph.

G_config = nx.configuration_model([d for n, d in G_feature.degree()])
print('Configuration model has {} nodes and {} edges.'.format(G_config.number_of_nodes(), G_config.size()))

#fig, axe = plt.subplots(figsize=(10, 4))
#axe.set_title('configuration model network')
#degrees = [G_config.degree(n) for n in G_config.nodes()]
#axe.hist(degrees);

# Does this mean that the configuration model creates the same graph as the feature graph? If not, how can you tell that they are not the same?

# **Your answer here:** We do not create the same graph, because the configuration model generates a random graph based only on the provided degree sequence. Even though it has the same number of nodes and edges as our graph, the edges are placed randomly among the nodes in a way that preserves the degree distribution, so the exact wiring of the graph is not preserved.

# ### Question 7: Clustering Coefficient

# Let us check the average clustering coefficient of the original citation and feature graphs.

p_citation = nx.average_clustering(G_citation)
p_citation

nx.average_clustering(G_feature)

# What does the clustering coefficient tell us about a network? Comment on the values you obtain for the citation and feature graph.

# **Your answer here:** The clustering coefficient of a node is a value ranging from 0 to 1 that indicates what fraction of the neighbors of that node are connected to each other.
The low average clustering coefficients of our graphs imply that they are relatively unclustered. However, the citation graph coefficient is twice that of the feature graph. We could explain this difference by the existence of "citation clusters": papers that all cite each other. This should not be the case because citations in papers are constrained by the course of time; a paper must be published in order to be cited by another one. However, the fact that we built the citation graph as an undirected graph makes this phenomenon possible.

# Now, let us check the average clustering coefficient for the simulated networks.

nx.average_clustering(G_er)

nx.average_clustering(G_ba)

nx.average_clustering(nx.Graph(G_config))

# Comment on the values you obtain for the simulated networks. Is there any good match to the citation or feature graph in terms of clustering coefficient?

# **Your answer here:** The clustering coefficients of the simulated networks are all much lower than those of the citation and feature graphs, so there is no good match.

# Check the other [network model generators](https://networkx.github.io/documentation/networkx-1.10/reference/generators.html) provided by NetworkX. Which one do you predict to have a better match to the citation graph or the feature graph in terms of degree distribution and clustering coefficient at the same time? Justify your answer.

# **Your answer here:** The Powerlaw Cluster Graph may give us a better match in terms of degree distribution and clustering coefficient, because it keeps the power-law degree distribution of the Barabási–Albert model while adding a parameter (the probability of closing a triangle) that lets us emulate the clustering of the feature graph.

# If you find a better fit, create a graph object below for that network model. Print the number of edges and the average clustering coefficient. Plot the histogram of the degree distribution.
# +
def print_graph_brief(graph, name='Test'):
    print(f'--- {name} ---')
    print(f'Model has {graph.number_of_nodes()} nodes and {graph.size()} edges.')
    print(f'Clustering coefficient is {nx.average_clustering(graph)}')
    print(f'Self-loops: {graph.number_of_selfloops()} - Connected components: {nx.number_connected_components(graph)}')

n_edges_citation = 2*G_citation.size()  # undirected: each edge contributes to two degrees
average_edges_per_node = int(n_edges_citation/n)

G_test = nx.powerlaw_cluster_graph(n=n, m=average_edges_per_node, p=p_citation)
print_graph_brief(G_test)
print_graph_brief(G_citation, name='Citation graph')

fig, axes = plt.subplots(1, 2, figsize=(20, 4))
axes[0].set_title('My model network')
degrees = [G_test.degree(n) for n in G_test.nodes()]
axes[0].hist(degrees);
axes[1].set_title('Citation graph')
axes[1].hist(degrees_citation);
# -

# Comment on the similarities of your match.

# **Your answer here:** The simulated degree distribution seems fairly similar to that of the citation graph in that they both follow a power law distribution, but there seem to be fewer nodes with higher degrees than in the original graph.
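Eyeballing histograms works, but a small helper makes the model-versus-graph comparison more systematic. This is a sketch with a made-up helper name (`compare_to_target`) and toy synthetic graphs standing in for the citation graph; the two gaps are crude summary statistics, not an official goodness-of-fit measure:

```python
import networkx as nx
import numpy as np

def compare_to_target(candidate, target):
    """Crude similarity report between a generated graph and a target:
    absolute gaps in average clustering coefficient and in mean degree."""
    clustering_gap = abs(nx.average_clustering(candidate) - nx.average_clustering(target))
    mean_degree = lambda g: np.mean([d for _, d in g.degree()])
    mean_degree_gap = abs(mean_degree(candidate) - mean_degree(target))
    return {'clustering_gap': clustering_gap, 'mean_degree_gap': mean_degree_gap}

# Toy synthetic graphs standing in for the citation graph and its candidate models
target = nx.powerlaw_cluster_graph(n=200, m=3, p=0.3, seed=1)
ba = nx.barabasi_albert_graph(n=200, m=3, seed=1)
plc = nx.powerlaw_cluster_graph(n=200, m=3, p=0.3, seed=2)

print('BA  vs target:', compare_to_target(ba, target))
print('PLC vs target:', compare_to_target(plc, target))
```

The same helper could be pointed at `G_citation` and the generators from Question 4 to rank candidate models with one line per model instead of a grid of histograms.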
assignments/1_network_science.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:kaggle] # language: python # name: conda-env-kaggle-py # --- # + # %load_ext autoreload # %autoreload 2 # %matplotlib inline # %config InlineBackend.figure_format = 'retina' import os import numpy as np, pandas as pd import matplotlib.pyplot as plt, seaborn as sns from tqdm import tqdm, tqdm_notebook from pathlib import Path from collections import Counter from sklearn.decomposition import PCA from ast import literal_eval from functools import partial import re sns.set() DATA = Path('../../data') RAW = DATA/'raw' PROCESSED = DATA/'processed' SUBMISSIONS = DATA/'submissions' # - # %%time product = pd.read_csv(RAW/'productid_category.csv', low_memory=False) train_tracking = pd.read_csv(RAW/'train_tracking.csv', low_memory=False) test_tracking = pd.read_csv(RAW/'test_tracking.csv', low_memory=False) train_session = pd.read_csv(RAW/'train_session.csv', low_memory=False) test_session = pd.read_csv(RAW/'random_submission.csv', low_memory=False) train_features = train_session.copy() test_features = test_session.copy() # + def add_page(train_tracking): def extract_page(x): pages_types = ['_LR', '_PA', '_LP', '_CAROUSEL', '_SHOW_CASE'] pages = ['CAROUSEL', 'PA', 'SEARCH', 'SHOW_CASE', 'LIST_PRODUCT'] pages_map = [['PURCHASE_PRODUCT_UNKNOW_ORIGIN', 'UNKNOWN']] for pages_type in pages_types: if x.endswith(pages_type): return x[-len(pages_type)+1:] for page in pages: if x == page: return x for page_map in pages_map: if x == page_map[0]: return page_map[1] return '::' + x train_tracking['page'] = train_tracking.type.apply(extract_page) return train_tracking def simplify_categories(product): counter1 = product.groupby('category_product_id_level1').size() counter1dict = counter1.to_dict() mapcat = {} for idx in counter1dict: if counter1dict[idx] > 10: mapcat[idx] = idx else: mapcat[idx] = 10e7 
    product['cat1'] = product.category_product_id_level1.apply(lambda x: mapcat[x])
    return product


def convert_jsonproducts(train_tracking, column):
    def convert_json(x):
        if pd.isnull(x):
            return x
        else:
            return literal_eval(x)
    train_tracking['product_list'] = train_tracking[column].apply(convert_json)
    return train_tracking


def nn_convert_jsonproducts(train_tracking, column):
    train_tracking['product_list'] = train_tracking[column].apply(literal_eval)
    return train_tracking


def fast_convert_jsonproducts(train_tracking, column):
    # Raw string avoids invalid-escape warnings; extracts every sku value.
    prog = re.compile(r"'sku': *'([a-zA-Z0-9+=/]+)'")
    train_tracking['product_list'] = train_tracking[column].apply(lambda val: re.findall(prog, val))
    return train_tracking
# -

print('Loading pages')
train_tracking = add_page(train_tracking)
print('Loading categories')
product = simplify_categories(product)
print('Loading catmap')
catmap = dict(zip(product.product_id, product.cat1))

# +
def cat_counter(prodlist, catmap):
    try:
        counter = {}
        for prod in prodlist:
            # Entries are either sku strings (fast_convert_jsonproducts)
            # or dicts with a 'sku' key (convert_jsonproducts).
            sku = prod['sku'] if isinstance(prod, dict) else prod
            if sku not in catmap:
                # print('CANT FIND ' + sku)
                # print(prodlist)
                cat = 10e7
            else:
                cat = int(catmap[sku])
            if cat in counter:
                counter[cat] = counter[cat] + 1
            else:
                counter[cat] = 1
        return counter
    except Exception:
        print(prodlist)
        print("ERROR")
        return {}


def prod_counter(prodlist):
    try:
        counter = {}
        for prod in prodlist:
            if prod in counter:
                counter[prod] = counter[prod] + 1
            else:
                counter[prod] = 1
        return counter
    except Exception:
        print(prodlist)
        print("ERROR")
        return {}


def merge_counters(counters):
    merged = {}
    for counter in counters:
        for key in counter:
            if key in merged:
                merged[key] = merged[key] + counter[key]
            else:
                merged[key] = counter[key]
        # merged = {**merged, **counter}
    return merged
# -

# # Main category in purchases view

def main_cat_purchase(session_features):
    carousel = convert_jsonproducts(train_tracking[train_tracking.type=='PURCHASE_PRODUCT_CAROUSEL'].copy(), 'ocarproducts')
    # cat_counter needs catmap; a bare apply(cat_counter) would only pass the product list.
    carousel['prod_counter'] = carousel.product_list.apply(partial(cat_counter, catmap=catmap))
    session_carousel =
carousel.groupby('sid').prod_counter.agg(merge_counters).reset_index()
    session_carousel = session_carousel.rename(columns={'prod_counter': 'prod_counter_car'})
    lp = convert_jsonproducts(train_tracking[train_tracking.type=='PURCHASE_PRODUCT_LP'].copy(), 'products')
    lp['prod_counter'] = lp.product_list.apply(partial(cat_counter, catmap=catmap))
    session_lp = lp.groupby('sid').prod_counter.agg(merge_counters).reset_index()
    session_lp = session_lp.rename(columns={'prod_counter': 'prod_counter_lp'})
    lr = convert_jsonproducts(train_tracking[train_tracking.type=='PURCHASE_PRODUCT_LR'].copy(), 'oproducts')
    lr['prod_counter'] = lr.product_list.apply(partial(cat_counter, catmap=catmap))
    session_lr = lr.groupby('sid').prod_counter.agg(merge_counters).reset_index()
    session_lr = session_lr.rename(columns={'prod_counter': 'prod_counter_lr'})
    # Renaming before the merges gives the column names used below
    # (a plain merge would produce prod_counter_x / prod_counter_y instead).
    session_categories = pd.merge(pd.merge(session_carousel, session_lp, on='sid', how='left'), session_lr, on='sid', how='left')
    session_categories['prod_counters'] = list(zip(session_categories.prod_counter_car, session_categories.prod_counter_lp, session_categories.prod_counter_lr))

    def merge_xyz(row):
        counters = []
        for i in range(3):
            if pd.notnull(row[i]):
                counters.append(row[i])
        merged = merge_counters(counters)
        evaluate = Counter(merged)
        return evaluate.most_common(1)[0][0]

    session_categories['top_cat'] = session_categories.prod_counters.apply(merge_xyz)
    top_cat_sessions = session_categories[['sid', 'top_cat']]
    top_cat_sessions.columns = ['sid', 'MAIN_CATEGORY_PURCHASED_VIEW']
    session_features = pd.merge(session_features, top_cat_sessions, on='sid', how='left')
    return session_features


# +
session_data = []

carousel = convert_jsonproducts(train_tracking[train_tracking.type=='PURCHASE_PRODUCT_CAROUSEL'].copy(), 'ocarproducts')
carousel['prod_counter0'] = carousel.product_list.apply(partial(cat_counter, catmap=catmap))
session_carousel = carousel.groupby('sid').prod_counter0.agg(merge_counters).reset_index()
session_data.append(session_carousel)

lp = convert_jsonproducts(train_tracking[train_tracking.type=='PURCHASE_PRODUCT_LP'].copy(), 'products')
lp['prod_counter1'] = lp.product_list.apply(partial(cat_counter, catmap=catmap))
session_lp = lp.groupby('sid').prod_counter1.agg(merge_counters).reset_index()
session_data.append(session_lp)

lr =
convert_jsonproducts(train_tracking[train_tracking.type=='PURCHASE_PRODUCT_LR'].copy(), 'oproducts')
lr['prod_counter2'] = lr.product_list.apply(partial(cat_counter, catmap=catmap))
session_lr = lr.groupby('sid').prod_counter2.agg(merge_counters).reset_index()
session_data.append(session_lr)

bkt = convert_jsonproducts(train_tracking[train_tracking.type=='ADD_TO_BASKET_CAROUSEL'].copy(), 'ocarproducts')
bkt['prod_counter3'] = bkt.product_list.apply(partial(cat_counter, catmap=catmap))
session_bkt = bkt.groupby('sid').prod_counter3.agg(merge_counters).reset_index()
session_data.append(session_bkt)

len(carousel), len(lr), len(lp), len(bkt)
# -

bkt = convert_jsonproducts(train_tracking[train_tracking.type=='ADD_TO_BASKET_LR'].copy(), 'oproducts')

# +
bkt = bkt.drop(bkt[pd.isnull(bkt.oproducts)].index, axis=0)
bkt['prod_counter5'] = bkt.product_list.apply(partial(cat_counter, catmap=catmap))
session_bkt = bkt.groupby('sid').prod_counter5.agg(merge_counters).reset_index()
session_data.append(session_bkt)
len(bkt)
# -

session_categories = pd.merge(pd.merge(session_carousel, session_lp, on='sid', how='left'), session_lr, on='sid', how='left')
# The per-source columns keep their distinct names (prod_counter0/1/2) through the merge.
session_categories['prod_counters'] = list(zip(session_categories.prod_counter0, session_categories.prod_counter1, session_categories.prod_counter2))
# session_categories.applymap(merge_xyz)
# session_categories

def merge_xyz(row):
    counters = []
    for i in range(3):
        if pd.notnull(row[i]):
            counters.append(row[i])
    merged = merge_counters(counters)
    evaluate = Counter(merged)
    return evaluate.most_common(1)[0][0]

session_categories['top_cat'] = session_categories.prod_counters.apply(merge_xyz)
top_cat_sessions = session_categories[['sid', 'top_cat']]
top_cat_sessions.columns = ['sid', 'MAIN_CATEGORY_PURCHASED_VIEW']
var = pd.merge(session_features, top_cat_sessions, on='sid', how='left')
len(var[pd.notnull(var.MAIN_CATEGORY_PURCHASED_VIEW)])
# top_cat_sessions

275/len(session_features)

# # OCAR Products main category

# +
session_data = []
ocarprods =
convert_jsonproducts(train_tracking[pd.notnull(train_tracking.ocarproducts)].copy(), 'ocarproducts')
# -

ocarprods['prod_counter'] = ocarprods.product_list.apply(partial(cat_counter, catmap=catmap))
session_ocar = ocarprods.groupby('sid').prod_counter.agg(merge_counters).reset_index()
len(session_ocar)

25484/len(session_features)

# # Main watched product

# +
def watched_category(session_features, train_tracking, catmap):
    prods = fast_convert_jsonproducts(train_tracking[pd.notnull(train_tracking.products)].copy(), 'products')
    prods['prod_counter'] = prods.product_list.apply(partial(cat_counter, catmap=catmap))
    session_prods = prods.groupby('sid').prod_counter.agg(merge_counters).reset_index()

    def top_cat(x):
        evaluation = Counter(x)
        return evaluation.most_common(1)[0][0]

    session_prods['top_cat'] = session_prods.prod_counter.apply(top_cat)
    session_cat = session_prods[['sid', 'top_cat']].copy()
    session_cat.columns = ['sid', 'WATCHED_CATEGORY']
    session_features = pd.merge(session_features, session_cat, on='sid', how='left')
    return session_features


def watched_product(session_features, train_tracking, product):
    prods = fast_convert_jsonproducts(train_tracking[pd.notnull(train_tracking.products)].copy(), 'products')
    prods['prod_counter'] = prods.product_list.apply(prod_counter)
    session_prods = prods.groupby('sid').prod_counter.agg(merge_counters).reset_index()

    def top_cat(x):
        evaluation = Counter(x)
        return evaluation.most_common(1)[0][0]

    session_prods['top_cat'] = session_prods.prod_counter.apply(top_cat)
    session_cat = session_prods[['sid', 'top_cat']].copy()
    session_cat.columns = ['sid', 'WATCHED_PRODUCT']
    session_features = pd.merge(session_features, session_cat, on='sid', how='left')
    return session_features
# -

# +
len(train_tracking[pd.notnull(train_tracking.products)])
test_data = train_tracking[pd.notnull(train_tracking.products)].sample(10000).copy()

def test(f1, f2):
    return sum(test_data.products.apply(lambda val: f1(val) == f2(val)))

prog = re.compile(r"'sku': *'([a-zA-Z0-9+=/]+)'")
test(lambda val: len(re.findall(prog,
val)), lambda x: len(literal_eval(x)))/10000
# -

prods = fast_convert_jsonproducts(train_tracking[pd.notnull(train_tracking.products)].copy(), 'products')
prods['prod_counter'] = prods.product_list.apply(prod_counter)
session_prods = prods.groupby('sid').prod_counter.agg(merge_counters).reset_index()
len(session_prods)

len(session_prods)/len(session_features)

def top_cat(x):
    evaluation = Counter(x)
    return evaluation.most_common(1)[0][0]

session_prods['top_cat'] = session_prods.prod_counter.apply(top_cat)
session_cat = session_prods[['sid', 'top_cat']].copy()
session_cat.columns = ['sid', 'WATCHED_PRODUCT']
session_features = pd.merge(session_features, session_cat, on='sid', how='left')

train_features = watched_product(train_features, train_tracking, product)
test_features = watched_product(test_features, test_tracking, product)

train_features[['sid', 'WATCHED_PRODUCT']].to_feather(PROCESSED/'train_WP.feather')
test_features[['sid', 'WATCHED_PRODUCT']].to_feather(PROCESSED/'test_WP.feather')

len(train_features.WATCHED_PRODUCT.unique())/len(train_features)
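The sku-extraction and counter-merging helpers above can be exercised on a toy tracking value; a minimal sketch, where the `raw` string and the sku values are made up for illustration:

```python
import re
from collections import Counter

# Hypothetical raw value, shaped like the 'products' column parsed above.
raw = "[{'sku': 'AB12'}, {'sku': 'CD34'}, {'sku': 'AB12'}]"

# Same pattern idea as fast_convert_jsonproducts, written as a raw string.
prog = re.compile(r"'sku': *'([a-zA-Z0-9+=/]+)'")
skus = re.findall(prog, raw)
print(skus)  # ['AB12', 'CD34', 'AB12']

# Per-row counting (prod_counter) and cross-row merging (merge_counters)
# both reduce to Counter arithmetic:
merged = Counter(skus) + Counter(['CD34'])
print(dict(merged))  # {'AB12': 2, 'CD34': 2}
```

The regex shortcut skips a full `literal_eval` parse, which is why the notebook checks above that both approaches agree on a 10,000-row sample.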
notebooks/franco/03 - Categories Features.ipynb
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: 'Python 3.8.5 64-bit (''base'': conda)'
#     name: python3
# ---

# +
import glob
import os
import pathlib
import shutil
import re
# -

def change_suffix(file_name, from_suffix, to_suffix):
    # Get the file extension
    sf = pathlib.PurePath(file_name).suffix
    # Check whether this file should be converted
    if sf == from_suffix:
        # Get the file name without the extension
        st = pathlib.PurePath(file_name).stem
        # Build the new file name
        to_name = st + to_suffix
        # Copy the file under the new name
        shutil.copyfile(file_name, to_name)
        return to_name

# +
change_suffix('sample.abc', '.abc', '.xyz')
file_name = glob.glob('*.tex')[0]
file_name = change_suffix(file_name, '.tex', '.txt')
# -

def tex_trans(data_lines):
    # data_lines = data_lines.replace('documentclass', 'bocumentclass')
    # Extract everything between \begin{document} and \end{document}
    data_lines = re.search(r'\\begin{document}[\s\S]*\\end{document}', data_lines).group()
    # data_lines = data_lines[data_lines.find(r'\begin{document}')+16:]
    # data_lines = data_lines.replace('\n', '')
    # Collapse runs of spaces into a single space
    data_lines = re.sub('[ ]+', ' ', data_lines)
    # Collapse multiple newlines into one
    data_lines = re.sub('[\n]+', '\n', data_lines)
    # Remove whitespace right after a newline
    data_lines = re.sub('\n'+'[ ]+', '\n', data_lines)
    # Replace equation environments
    equations = re.findall(r'\\begin{equation}[\s\S]*?\\end{equation}', data_lines)
    for equation in equations:
        data_lines = data_lines.replace(equation, 'EQUATION')
    aligns = re.findall(r'\\begin{align}[\s\S]*?\\end{align}', data_lines)
    equations = re.findall(r'\\begin{equation\*}[\s\S]*?\\end{equation\*}', data_lines)
    for equation in equations:
        data_lines = data_lines.replace(equation, 'EQUATION')
    for align in aligns:
        # was `replace(eq, ...)`, which referenced an undefined name
        data_lines = data_lines.replace(align, 'EQUATION')
    commentouts = re.findall(r'%.*\n', data_lines)
    for com in commentouts:
        data_lines = data_lines.replace(com, '')
    functions = re.findall(r'(\n\\[\s\S]*?)(?=\n)', data_lines)
    # functions = re.findall(r'(\n\\[\s\S]*?)\n', data_lines)  # note: this variant does not return the duplicate matches
    for func in functions:
        data_lines = data_lines.replace(func, '')
    eqs = re.findall(r'\$[\s\S]*?\$', data_lines)
    for eq in eqs:
        data_lines = data_lines.replace(eq, 'EQ')
    # str.lstrip/rstrip strip *characters*, not prefixes/suffixes,
    # so remove the literal markers explicitly instead.
    if data_lines.startswith(r'\begin{document}'):
        data_lines = data_lines[len(r'\begin{document}'):]
    if data_lines.endswith(r'\end{document}'):
        data_lines = data_lines[:-len(r'\end{document}')]
    # data_lines = data_lines.replace(re.search(r'\\begin{equation}[\s\S]*\\end{equation}', data_lines).group(), 'EQUATION')
    # Remove line breaks inside sentences
    data_lines = re.sub(r'\n(?=[a-z])', ' ', data_lines)
    data_lines = re.sub(r'\n(?=[0123456789])', ' ', data_lines)
    data_lines = re.sub('[ ]+', ' ', data_lines)
    return data_lines

# +
with open(file_name, encoding='utf-8') as f:
    data_lines = f.read()

data_lines = tex_trans(data_lines)

# Save the result under a new file name
new_file_name = 'output.txt'
with open(new_file_name, mode='w', encoding='utf-8') as f:
    f.write(data_lines)
# -
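On a toy document, the core substitutions in `tex_trans` reduce to a couple of `re` calls; a minimal sketch, where the `sample` string is made up for illustration:

```python
import re

# Hypothetical miniature .tex source.
sample = ("\\begin{document}\n"
          "Intro text.\n"
          "\\begin{equation}\nE = mc^2\n\\end{equation}\n"
          "Inline $x + 1$ math.\n"
          "\\end{document}")

# Keep only the document body, as in tex_trans.
body = re.search(r'\\begin{document}[\s\S]*\\end{document}', sample).group()
# Display equations collapse to one placeholder, inline math to another.
body = re.sub(r'\\begin{equation}[\s\S]*?\\end{equation}', 'EQUATION', body)
body = re.sub(r'\$[\s\S]*?\$', 'EQ', body)
print('EQUATION' in body, '$' in body)  # True False
```

The non-greedy `[\s\S]*?` is what keeps each substitution from swallowing everything between the first and last environment in the file.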
tex/tex_trans.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="bJVJHn6Ji57J" colab_type="text" # # Import Library & Data Analysis # # + id="DyjKcEVBFxPR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 124} outputId="bda1615f-f8c5-48bf-a7fe-2dbc2cb65726" # connecting google drive with google colab from google.colab import drive drive.mount('/content/drive') # + id="Q9GExNuxhBos" colab_type="code" colab={} import pandas as pd import numpy as np import seaborn as sns import time from scipy import stats import matplotlib.pyplot as plt # + id="hwMm5ufhhBuk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 226} outputId="5f456d34-e47f-44dc-970c-56a5cac0f852" df_borrower = pd.read_csv('drive/My Drive/DS-course2 - Dr. Xuan Ha/w4/borrower_data.csv') df_borrower.sample(5) # + id="3cJkiY-ThCHd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 363} outputId="5b1a465c-c468-451a-be01-4ce35e8b3766" df_loan = pd.read_csv('drive/My Drive/DS-course2 - Dr. 
Xuan Ha/w4/loan_data.csv') df_loan.sample(10) # + id="XLyKSiXAiRBz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 607} outputId="7b778be3-e022-4e22-c71d-ac7ee0e634f5" df_borrower.info() # Checking nan print('\n > Train columns with null values:\n', df_borrower.isnull().sum()) # print('\n > Train columns with unique values:\n', df_borrower.nunique()) print('\n Unique value in loan_id: ', df_borrower['loan_id'].nunique()) # + id="U08NipjMhCAu" colab_type="code" colab={} # Generating the training df_borrower_tr df_borrower_tr = df_borrower # Replacing Nan values by new categorical values, ex: if currently_repaying_other_loans is null, # I will adjust it with 2 df_borrower_tr['currently_repaying_other_loans'] = df_borrower_tr['currently_repaying_other_loans'].fillna(2) df_borrower_tr['fully_repaid_previous_loans'] = df_borrower_tr['fully_repaid_previous_loans'].fillna(2) df_borrower_tr['avg_percentage_credit_card_limit_used_last_year'] = df_borrower_tr['avg_percentage_credit_card_limit_used_last_year'].fillna(0) # + id="V1EQwPAuhB8j" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 312} outputId="d2f4c4cc-f821-4aff-cd60-62ef2cd5f004" df_borrower_tr.info() # + id="EcC9A2OQhCEO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 364} outputId="0addafd9-9c22-4ec7-9784-498e3a5c87b8" df_loan.info() print('\n> Train columns with null values:\n', df_loan.isnull().sum()) print('\n Unique value in loan_id: ', df_loan['loan_id'].nunique()) # + id="KmcQV1-iRV0q" colab_type="code" colab={} # Generating the training dataframe df_loan_tr = df_loan df_loan_tr['loan_repaid'] = df_loan_tr['loan_repaid'].fillna('not granted') # + id="phNWtqU7afV-" colab_type="code" colab={} # using list comprehension df_loan_tr['profit'] = [ 1 if x == 1 else (-1 if x == 0 else 0) for x in df_loan_tr['loan_repaid']] #df_loan_tr['profit'] = np.select( # [(df_loan_tr['loan_granted'] == 1) & (df_loan_tr['loan_repaid'] == 0), # 
# (df_loan_tr['loan_granted'] == 1) & (df_loan_tr['loan_repaid'] == 1)],
# [-1,1], default= 0)
# np.select works well on big dataframes

# + id="EZKG3z7-QnMl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 144} outputId="3ae616af-d725-4ebb-c28f-a2189b4f73db"
df_loan_tr.groupby(['loan_granted', 'loan_repaid']).size().reset_index(name='Freq')

# + id="PGkzKNL9iswU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 144} outputId="cfad24db-cea6-48aa-fdf2-c43fbc8cdcc5"
df_loan_tr.groupby(['loan_granted', 'loan_repaid'])['profit'].sum().reset_index(name='Profit')

# + id="fjQedCpWGF2H" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 112} outputId="fc4d93c6-e9cf-48b1-fb72-86aecc636499"
df_loan_tr['moneymaking'] = [-1 if x == 0 else 1 for x in df_loan_tr['loan_repaid']]
df_loan_tr.groupby(['moneymaking']).size().reset_index(name='Freq')

# + id="GegI9oSNGGGn" colab_type="code" colab={}
df_model = pd.merge(df_borrower_tr, df_loan_tr, how='inner', on='loan_id')

# + id="AZfJczfgGGOD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 416} outputId="1d8b246b-4e66-44ab-bd51-2c4c5033bfad"
df_model.info()

# + id="z1eo8uFQGGLu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 226} outputId="e04e9906-ba72-4200-93d0-f41b7409ad49"
df_model.sample(5)

# + [markdown] id="iQCh9ANkJrlg" colab_type="text"
# # Feature Engineering

# + [markdown] id="EwUWs2e1BPB3" colab_type="text"
# ## 1.Graphical analysis
# Modelling with the "profit" target variable, I would face a multiclass classification
# problem; unfortunately, my knowledge is not enough to solve that problem at this moment.
#
# Thus, I change my target to the "moneymaking" feature, as I want to predict exactly
# whether the borrower will repay the loan or not.
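The three-class rule encoded by the commented-out `np.select` above can be written as a plain function; a minimal sketch (the function name `profit_label` is mine, for illustration):

```python
def profit_label(loan_granted, loan_repaid):
    # Mirrors the np.select rule: granted & repaid -> +1,
    # granted & defaulted -> -1, everything else (not granted) -> 0.
    if loan_granted == 1 and loan_repaid == 1:
        return 1
    if loan_granted == 1 and loan_repaid == 0:
        return -1
    return 0

rows = [(1, 1), (1, 0), (0, 'not granted')]
print([profit_label(g, r) for g, r in rows])  # [1, -1, 0]
```

This works with the list comprehension used in the notebook because ungranted loans have `loan_repaid` filled with `'not granted'`, so they never match the 1/0 branches.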
#
#
#

# + id="sXeYh_7xLlKb" colab_type="code" colab={}
df_train = df_model[df_model.columns.difference(['loan_id','date','profit','loan_repaid','loan_granted'])]

# + id="TrKR3siNJoZP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1180} outputId="8a43e2e9-37a1-460d-f5e9-08a67b209d33"
import matplotlib.pyplot as plt

correlations = df_train.corr()
#fig = plt.figure()
#ax = fig.add_subplot(111)
#cax = ax.matshow(correlations, vmin=-1, vmax=1)
#fig.colorbar(cax)
#names = list(df_model)
#ax.set_xticklabels(names)
#ax.set_yticklabels(names)
#plt.show()

# Using seaborn package
# Generate a mask for the upper triangle
# (np.bool was removed from NumPy; use the builtin bool)
mask = np.zeros_like(correlations, dtype=bool)
mask[np.triu_indices_from(mask)] = True

# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))

# Generate a custom diverging colormap
cmap = sns.diverging_palette(260, 10, as_cmap=True)

# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(correlations, mask=mask, cmap=cmap, vmin=-1, vmax=1, center=0,
            square=True, linewidths=.5, cbar_kws={"shrink": .5})
correlations

# + id="TaQedDsvOlaw" colab_type="code" colab={}
# Use box plots for the several continuous variables

# + id="RBa3CiN1Joks" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 450} outputId="275f15d4-48fd-4791-8fd8-b4553a4c5670"
sns.boxplot(x="moneymaking", y="total_credit_card_limit", whis=1.6, data=df_train)
# whis is the proportion of the IQR (interquartile range) past the low and high
# quartiles to extend the plot whiskers;
# therefore, maximum = Q3 + 1.6*IQR, minimum = Q1 - 1.6*IQR
Q1 = df_train['total_credit_card_limit'].quantile(0.25)
Q3 = df_train['total_credit_card_limit'].quantile(0.75)
IQR = Q3 - Q1
print('> No. outliers: %d \n' % ((df_train['total_credit_card_limit'] < (Q1 - 1.6 * IQR)) | (df_train['total_credit_card_limit'] > (Q3 + 1.6 * IQR))).sum())

# + id="Y85lRUL0P61U" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 450}
outputId="c1a0093b-788e-44b0-c082-1f956ff6ec64"
sns.boxplot(x="moneymaking", y="checking_amount", whis=1.6, data=df_train)
# same IQR rule as above: maximum = Q3 + 1.6*IQR, minimum = Q1 - 1.6*IQR
Q1 = df_train['checking_amount'].quantile(0.25)
Q3 = df_train['checking_amount'].quantile(0.75)
IQR = Q3 - Q1
print('> No. outliers: %d \n' % ((df_train['checking_amount'] < (Q1 - 1.6 * IQR)) | (df_train['checking_amount'] > (Q3 + 1.6 * IQR))).sum())

# + id="k66uUwNdeI6W" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 450} outputId="12e8c9c3-ea59-4e24-bdb2-84d67c904e5a"
sns.boxplot(x="moneymaking", y="saving_amount", whis=1.6, data=df_train)
# same IQR rule as above
Q1 = df_train['saving_amount'].quantile(0.25)
Q3 = df_train['saving_amount'].quantile(0.75)
IQR = Q3 - Q1
print('> No. outliers: %d \n' % ((df_train['saving_amount'] < (Q1 - 1.6 * IQR)) | (df_train['saving_amount'] > (Q3 + 1.6 * IQR))).sum())

# + [markdown] id="Q_1wfAoYAV2m" colab_type="text"
# ### Note:
# saving_amount and checking_amount are heavily skewed; however, we cannot simply drop
# these values as outliers, because they could contain critical information for modelling.
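The outlier rule repeated in these cells can be isolated into a tiny helper; a pure-Python sketch with made-up numbers (`statistics.quantiles` with `method='inclusive'` should match pandas' default linear interpolation):

```python
import statistics

def iqr_outlier_count(values, whis=1.6):
    # Same rule as the notebook: flag points beyond
    # Q1 - whis*IQR or Q3 + whis*IQR.
    q1, _, q3 = statistics.quantiles(values, n=4, method='inclusive')
    iqr = q3 - q1
    lo, hi = q1 - whis * iqr, q3 + whis * iqr
    return sum(1 for v in values if v < lo or v > hi)

data = [10, 12, 11, 13, 12, 11, 14, 95]  # one extreme value
print(iqr_outlier_count(data))  # 1
```

With `whis=1.6` the fences are slightly wider than the common 1.5-IQR convention, which is why the whisker setting on the boxplots above matches the printed counts.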
# + id="jxfAFZx2fFDv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 450} outputId="cb98087a-4f8b-48ca-fb5b-27dcc10ca3ee"
sns.boxplot(x="moneymaking", y="yearly_salary", whis=1.6, data=df_train)
# same IQR rule as above: maximum = Q3 + 1.6*IQR, minimum = Q1 - 1.6*IQR
Q1 = df_train['yearly_salary'].quantile(0.25)
Q3 = df_train['yearly_salary'].quantile(0.75)
IQR = Q3 - Q1
print('> No. outliers: %d \n' % ((df_train['yearly_salary'] < (Q1 - 1.6 * IQR)) | (df_train['yearly_salary'] > (Q3 + 1.6 * IQR))).sum())

# + id="b155LxYw_gAE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 450} outputId="d69631d7-979f-48a5-b527-63e8e1c5ac80"
sns.boxplot(x="moneymaking", y="age", whis=1.6, data=df_train)
# same IQR rule as above
Q1 = df_train['age'].quantile(0.25)
Q3 = df_train['age'].quantile(0.75)
IQR = Q3 - Q1
print('> No. outliers: %d \n' % ((df_train['age'] < (Q1 - 1.6 * IQR)) | (df_train['age'] > (Q3 + 1.6 * IQR))).sum())

# + [markdown] id="Yuy65MtyjZjF" colab_type="text"
# ## 2.Important Features

# + id="52oKkW4Jj2z8" colab_type="code" colab={}
from sklearn.model_selection import train_test_split
from sklearn.ensemble import ExtraTreesClassifier

# + id="rnOKKEUljY85" colab_type="code" colab={}
y = df_train['moneymaking']
X = df_train.drop(['moneymaking'], axis=1).copy()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# + id="fKpXC0DEnNSA" colab_type="code" colab={}
# Encoding the categorical features and getting the dummies matrix
X_train = pd.get_dummies(X_train, prefix=['loan_purpose'], columns=['loan_purpose'])

# + id="iqjbvQqhl57j" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 121}
outputId="afdf4f19-989c-41ab-921e-e1523ccf7344"
# Build a forest and compute the feature importances
clf_forest = ExtraTreesClassifier(n_estimators=250, random_state=0)
clf_forest.fit(X_train, y_train)

# + id="ZRM0zeYGl6E9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 602} outputId="bbf46996-548b-4698-a4e8-df0b102bc91e"
# std = np.std([tree.feature_importances_ for tree in clf_forest.estimators_], axis=0)
features = X_train.columns
importances = clf_forest.feature_importances_
nSelectedFeature = 11
# Take the nSelectedFeature largest importances, so the bar plot
# matches the printed ranking below
indices = np.argsort(importances)[-nSelectedFeature:]

# Print the feature ranking
rank = np.argsort(clf_forest.feature_importances_)[::-1]
print("Feature ranking:")
for f in range(nSelectedFeature):
    print("%d. %s (%f)" % (f + 1, features[rank[f]], importances[rank[f]]))

# Bar plot
plt.title('Feature Importances')
plt.barh(range(len(indices)), importances[indices], color='r', align='center')
plt.yticks(range(len(indices)), [features[i] for i in indices])
plt.xlabel('Relative Importance')

# + [markdown] id="q3amUI3PfWWW" colab_type="text"
# ### Note:
# After this step, I can decide to drop the following features: yearly_salary,
# is_first_loan, currently_repaying_other_loans. <br>
# This is because yearly_salary correlates significantly with is_employed, and the
# latter feature gets higher importance in the model. The same holds for removing
# is_first_loan and currently_repaying_other_loans.
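The argsort-based ranking above can be checked on toy numbers; a minimal sketch, where the feature names and importance values are made up for illustration:

```python
# Hypothetical importances, shaped like clf_forest.feature_importances_.
features = ['saving_amount', 'checking_amount', 'is_employed', 'age']
importances = [0.35, 0.30, 0.25, 0.10]

# Descending rank, as in `np.argsort(importances)[::-1]`, in plain Python.
rank = sorted(range(len(importances)), key=importances.__getitem__, reverse=True)
top3 = [features[i] for i in rank[:3]]
print(top3)  # ['saving_amount', 'checking_amount', 'is_employed']
```

Slicing the rank to the top k is exactly what the bar plot does with `importances[indices]`, only in reverse order.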
# + [markdown] id="0TwfCh-LBgyo" colab_type="text"
# ## 3.Feature selection

# + id="OFQybrKHfFZR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 226} outputId="aeb3aed4-c5f8-464b-b3e3-5736acb4f972"
df_trained = df_train[df_train.columns.difference(['is_employed','is_first_loan','currently_repaying_other_loans'])]
df_trained.head()

# + id="kAa29SZ8Cciw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 619} outputId="3dafcb45-f693-45a2-df94-258d685817c2"
# I use the Box-Cox transformation to reduce the skew of a continuous variable
# (plus 1 to eliminate zero cases)
df_trained['checking_amount'], para = stats.boxcox(df_trained['checking_amount']+1)
col = df_trained['checking_amount']
print(np.isinf(col).sum() > 1)
print(col.isnull().sum() > 0)
Q1 = col.quantile(0.25)
Q3 = col.quantile(0.75)
IQR = Q3 - Q1
print('> No. outliers: %d \n' % ((col < (Q1 - 1.6 * IQR)) | (col > (Q3 + 1.6 * IQR))).sum())

plt.figure(figsize=(15,6))
plt.subplot(1, 2, 1)
fig = sns.boxplot(y=col)
fig.set_title('')
fig.set_ylabel('checking_amount')
plt.subplot(1, 2, 2)
fig = sns.distplot(col.dropna())  #.hist(bins=20)
fig.set_ylabel('Volume')
fig.set_xlabel('checking_amount')
plt.show()

# + id="22XgQA2sEGLI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 616} outputId="0089dc49-241d-46a1-cd36-3ef273f085d5"
# Same Box-Cox transformation for saving_amount
df_trained['saving_amount'], para = stats.boxcox(df_trained['saving_amount']+1)
col = df_trained['saving_amount']
print(np.isinf(col).sum() > 1)
print(col.isnull().sum() > 0)
Q1 = col.quantile(0.25)
Q3 = col.quantile(0.75)
IQR = Q3 - Q1
print('> No. outliers: %d \n' % ((col < (Q1 - 1.6 * IQR)) | (col > (Q3 + 1.6 * IQR))).sum())

plt.figure(figsize=(15,6))
plt.subplot(1, 2, 1)
fig = sns.boxplot(y=col)
fig.set_title('')
fig.set_ylabel('saving_amount')  # was mislabelled 'checking_amount'
plt.subplot(1, 2, 2)
fig = sns.distplot(col.dropna())  #.hist(bins=20)
fig.set_ylabel('Volume')
fig.set_xlabel('saving_amount')  # was mislabelled 'checking_amount'
plt.show()

# + id="YowdfoCVdCPE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 616} outputId="e23afa88-964d-425f-adda-d8aee32a69d4"
# Same Box-Cox transformation for yearly_salary
df_trained['yearly_salary'], para = stats.boxcox(df_trained['yearly_salary']+1)
col = df_trained['yearly_salary']
print(np.isinf(col).sum() > 1)
print(col.isnull().sum() > 0)
Q1 = col.quantile(0.25)
Q3 = col.quantile(0.75)
IQR = Q3 - Q1
print('> No. outliers: %d \n' % ((col < (Q1 - 1.6 * IQR)) | (col > (Q3 + 1.6 * IQR))).sum())

plt.figure(figsize=(15,6))
plt.subplot(1, 2, 1)
fig = sns.boxplot(y=col)
fig.set_title('')
fig.set_ylabel('yearly_salary')
plt.subplot(1, 2, 2)
fig = sns.distplot(col.dropna())  #.hist(bins=20)
fig.set_ylabel('Volume')
fig.set_xlabel('yearly_salary')
plt.show()

# + [markdown] id="pGOG8H4XEelS" colab_type="text"
# # Fitting model

# + [markdown] id="sk66GVjQM2eR" colab_type="text"
# ## Define function

# + colab_type="code" id="HfJvvsgLNHO-" colab={}
from sklearn.base import BaseEstimator
from sklearn.base import ClassifierMixin
from sklearn.preprocessing import LabelEncoder
from sklearn.externals import six
from sklearn.base import clone
from sklearn.pipeline import _name_estimators
import numpy as np
import operator


class MajorityVoteClassifier(BaseEstimator, ClassifierMixin):
    """ A majority vote ensemble classifier

    Parameters
    ----------
    classifiers : array-like, shape = [n_classifiers]
        Different classifiers for the ensemble

    vote : str, {'classlabel', 'probability'} (default='classlabel')
        If 'classlabel' the prediction is based on the argmax of class labels.
        Else if 'probability', the argmax of the sum of probabilities is used
        to predict the class label (recommended for calibrated classifiers).
weights : array-like, shape = [n_classifiers], optional (default=None) If a list of `int` or `float` values are provided, the classifiers are weighted by importance; Uses uniform weights if `weights=None`. """ def __init__(self, classifiers, vote='classlabel', weights=None): self.classifiers = classifiers self.named_classifiers = {key: value for key, value in _name_estimators(classifiers)} self.vote = vote self.weights = weights def fit(self, X, y): """ Fit classifiers. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] Matrix of training samples. y : array-like, shape = [n_samples] Vector of target class labels. Returns ------- self : object """ if self.vote not in ('probability', 'classlabel'): raise ValueError("vote must be 'probability' or 'classlabel'" "; got (vote=%r)" % self.vote) if self.weights and len(self.weights) != len(self.classifiers): raise ValueError('Number of classifiers and weights must be equal' '; got %d weights, %d classifiers' % (len(self.weights), len(self.classifiers))) # Use LabelEncoder to ensure class labels start with 0, which # is important for np.argmax call in self.predict self.lablenc_ = LabelEncoder() self.lablenc_.fit(y) self.classes_ = self.lablenc_.classes_ self.classifiers_ = [] for clf in self.classifiers: fitted_clf = clone(clf).fit(X, self.lablenc_.transform(y)) self.classifiers_.append(fitted_clf) return self def predict(self, X): """ Predict class labels for X. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] Matrix of training samples. Returns ---------- maj_vote : array-like, shape = [n_samples] Predicted class labels. 
""" if self.vote == 'probability': maj_vote = np.argmax(self.predict_proba(X), axis=1) else: # 'classlabel' vote # Collect results from clf.predict calls predictions = np.asarray([clf.predict(X) for clf in self.classifiers_]).T maj_vote = np.apply_along_axis( lambda x: np.argmax(np.bincount(x, weights=self.weights)), axis=1, arr=predictions) maj_vote = self.lablenc_.inverse_transform(maj_vote) return maj_vote def predict_proba(self, X): """ Predict class probabilities for X. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training vectors, where n_samples is the number of samples and n_features is the number of features. Returns ---------- avg_proba : array-like, shape = [n_samples, n_classes] Weighted average probability for each class per sample. """ probas = np.asarray([clf.predict_proba(X) for clf in self.classifiers_]) avg_proba = np.average(probas, axis=0, weights=self.weights) return avg_proba def get_params(self, deep=True): """ Get classifier parameter names for GridSearch""" if not deep: return super(MajorityVoteClassifier, self).get_params(deep=False) else: out = self.named_classifiers.copy() for name, step in six.iteritems(self.named_classifiers): for key, value in six.iteritems(step.get_params(deep=True)): out['%s__%s' % (name, key)] = value return out # + [markdown] id="AhKWeQ9ENl1N" colab_type="text" # ## Majority Voting # + id="Rn2eL9dhGRXm" colab_type="code" colab={} from sklearn import datasets from sklearn.preprocessing import StandardScaler from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import train_test_split import numpy as np from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.pipeline import Pipeline from sklearn.model_selection import cross_val_score # + id="pRrhfWqjGSEC" colab_type="code" colab={} y = df_trained['moneymaking'] X = 
df_trained.drop(['moneymaking'],axis=1).copy() X=pd.get_dummies(X, prefix=['loan_purpose'], columns=['loan_purpose']) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) # + id="x4_TWKr1GhiC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 104} outputId="ae6ae0e2-3f1b-49c2-a253-de34e744ed10" clf1 = LogisticRegression(penalty='l2', C=0.001, random_state=1) clf2 = DecisionTreeClassifier(max_depth=1, criterion='entropy', random_state=0) clf3 = KNeighborsClassifier(n_neighbors=1, p=2, metric='minkowski') pipe1 = Pipeline([['sc', StandardScaler()], ['clf', clf1]]) pipe3 = Pipeline([['sc', StandardScaler()], ['clf', clf3]]) clf_labels = ['Logistic regression', 'Decision tree', 'KNN'] print('10-fold cross validation:\n') for clf, label in zip([pipe1, clf2, pipe3], clf_labels): scores = cross_val_score(estimator=clf,X=X_train,y=y_train,cv=10,scoring='roc_auc') print("ROC AUC: %0.2f (+/- %0.2f) [%s]"% (scores.mean(), scores.std(), label)) # + id="YzvhentgGhpU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 86} outputId="1484ada9-0ac2-4428-d41c-b33975cd8530" # Majority Rule (hard) Voting mv_clf = MajorityVoteClassifier(classifiers=[pipe1, clf2, pipe3]) clf_labels += ['Majority voting'] all_clf = [pipe1, clf2, pipe3, mv_clf] for clf, label in zip(all_clf, clf_labels): scores = cross_val_score(estimator=clf,X=X_train,y=y_train,cv=10,scoring='roc_auc') print("ROC AUC: %0.2f (+/- %0.2f) [%s]"% (scores.mean(), scores.std(), label)) # + id="zEMqy5tFMMDz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 361} outputId="25ab2b7a-7dbc-48b7-b0cd-d6ef600f71c8" from sklearn.metrics import roc_curve from sklearn.metrics import auc colors = ['black', 'orange', 'blue', 'green'] linestyles = [':', '--', '-.', '-'] for clf, label, clr, ls in zip(all_clf, clf_labels, colors, linestyles): # assuming the label of the positive class is 1 y_pred = clf.fit(X_train, 
y_train).predict_proba(X_test)[:, 1] fpr, tpr, thresholds = roc_curve(y_true=y_test, y_score=y_pred) roc_auc = auc(x=fpr, y=tpr) plt.plot(fpr, tpr, color=clr, linestyle=ls, label='%s (auc = %0.2f)' % (label, roc_auc)) plt.legend(loc='lower right') plt.plot([0, 1], [0, 1], linestyle='--', color='gray', linewidth=2) plt.xlim([-0.1, 1.1]) plt.ylim([-0.1, 1.1]) plt.grid(alpha=0.5) plt.xlabel('False positive rate (FPR)') plt.ylabel('True positive rate (TPR)') #plt.savefig('images/04_04', dpi=300) plt.show() # + [markdown] id="zehuP9wLGGE2" colab_type="text" # ## Bagging # + id="ewfWK3VUO4U0" colab_type="code" colab={} from sklearn.ensemble import BaggingClassifier from sklearn.tree import DecisionTreeClassifier tree = DecisionTreeClassifier(criterion='entropy', max_depth=None, random_state=1) bag = BaggingClassifier(base_estimator=tree, n_estimators=500, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, n_jobs=1, random_state=1) # + id="enJzW5HbO4fw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 104} outputId="d9f70bde-e963-49fc-d4d2-93a9d5dc122b" from sklearn.metrics import accuracy_score tree = tree.fit(X_train, y_train) tree_y_train_pred = tree.predict(X_train) tree_y_test_pred = tree.predict(X_test) tree_train = accuracy_score(y_train, tree_y_train_pred) tree_test = accuracy_score(y_test, tree_y_test_pred) print('Decision tree train/test accuracies %.3f/%.3f' % (tree_train, tree_test)) print('comparing bank profitability vs my model profitability in test_set %.3f/%.3f \n' % (y_test.sum(), tree_y_test_pred.sum())) bag = bag.fit(X_train, y_train) bag_y_train_pred = bag.predict(X_train) bag_y_test_pred = bag.predict(X_test) bag_train = accuracy_score(y_train, bag_y_train_pred) bag_test = accuracy_score(y_test, bag_y_test_pred) print('Bagging train/test accuracies %.3f/%.3f' % (bag_train, bag_test)) print('comparing bank profitability vs my model profitability in test_set %.3f/%.3f' % (y_test.sum(), bag_y_test_pred.sum())) # +
[markdown] id="RgV_CxIjGJp8" colab_type="text" # ## Adaptive Boosting # + id="rH3YsqKsWkIQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 104} outputId="55054fe6-74da-44d5-e43c-d3a453d80233" from sklearn.ensemble import AdaBoostClassifier tree = DecisionTreeClassifier(criterion='entropy', max_depth=None, random_state=1) ada = AdaBoostClassifier(base_estimator=tree, n_estimators=500, learning_rate=0.1, random_state=1) tree = tree.fit(X_train, y_train) tree_y_train_pred = tree.predict(X_train) tree_y_test_pred = tree.predict(X_test) tree_train = accuracy_score(y_train, tree_y_train_pred) tree_test = accuracy_score(y_test, tree_y_test_pred) print('Decision tree train/test accuracies %.3f/%.3f' % (tree_train, tree_test)) print('comparing bank profitability vs my model profitability in test_set %.3f/%.3f \n' % (y_test.sum(), tree_y_test_pred.sum())) ada = ada.fit(X_train, y_train) ada_y_train_pred = ada.predict(X_train) ada_y_test_pred = ada.predict(X_test) ada_train = accuracy_score(y_train, ada_y_train_pred) ada_test = accuracy_score(y_test, ada_y_test_pred) print('AdaBoost train/test accuracies %.3f/%.3f' % (ada_train, ada_test)) print('comparing bank profitability vs my model profitability in test_set %.3f/%.3f' % (y_test.sum(), ada_y_test_pred.sum())) # + [markdown] id="EWPj1PXuPmhy" colab_type="text" # # Final answer: # 1. As discussed in the Feature Engineering section, I originally built a target variable that follows a three-class strategy: -1 lost, 0 neutral, 1 earned. However, solving that multiclass classification is beyond my level at this time, so I reduced it to the binary problem described above. # # # 2. Using the revenue-level rules above, and under my assumption, the model only loses money (-1) when customers do not repay the rent amount; in every other case we earn 1.
Following the results above, my model's profit is lower than the bank's in three of the cases; however, the Bagging model gives a positive result, and in that case my model outperforms the bank's customer credit rating model - 13466/14144. # # # 3. As mentioned in the Important Features section, the feature ranking is as follows: 1. saving_amount (0.187259), 2. checking_amount (0.180769), 3. total_credit_card_limit (0.127340), 4. avg_percentage_credit_card_limit_used_last_year (0.123178), 5. age (0.108905), 6. yearly_salary (0.102952), 7. dependent_number (0.068194), 8. is_employed (0.036917). Hence, the most critical feature is the saving amount, while "is_employed" ranks only 8th out of 10; it is strongly correlated with "yearly_salary" but less important than it. # # 4. It is hard to give a definitive answer about which other variables to add; in my opinion, we should collect more information about the customer, such as living address, academic level, or vehicle type.
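The weighted class-label vote that `MajorityVoteClassifier` implements with `np.bincount` and `np.apply_along_axis` reduces, for a single sample, to a weighted count of the predicted labels. A minimal stdlib sketch of that one step (the function name is mine, for illustration only):

```python
from collections import Counter

def weighted_majority_vote(predictions, weights=None):
    """Return the label with the largest (weighted) vote total.

    predictions: one predicted label per classifier, for a single sample.
    weights: optional per-classifier weights (defaults to equal weights).
    """
    weights = weights or [1.0] * len(predictions)
    totals = Counter()
    for label, weight in zip(predictions, weights):
        totals[label] += weight
    # the winning label is the one with the largest accumulated weight
    return max(totals, key=totals.get)
```

With classifier weights like `[0.6, 0.2, 0.2]`, the first classifier alone can outvote the other two, which is exactly what the `weights` argument of the ensemble controls.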
Vicohub-DS-with-Advantages-Python/Data/week4/HW4.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Watersheds Segmentation # + import matplotlib.pyplot as plt # %matplotlib inline import SimpleITK as sitk from myshow import myshow, myshow3d # Download data to work on # %run update_path_to_download_script from downloaddata import fetch_data as fdata # - img = sitk.ReadImage(fdata("cthead1.png")) myshow(img) # ## Gradient Watersheds Segmentation sigma=img.GetSpacing()[0] level=4 feature_img = sitk.GradientMagnitude(img) myshow(feature_img, "Edge Features") ws_img = sitk.MorphologicalWatershed(feature_img, level=0, markWatershedLine=True, fullyConnected=False) myshow(sitk.LabelToRGB(ws_img), "Watershed Over Segmentation") # + from ipywidgets import interact, interactive, FloatSlider def callback(feature_img,*args, **kwargs): ws_img = sitk.MorphologicalWatershed(feature_img,*args, **kwargs) myshow(sitk.LabelToRGB(ws_img), "Watershed Segmentation") interact(lambda **kwargs: callback(feature_img, **kwargs), markWatershedLine=True, fullyConnected=False, level=FloatSlider(min=0, max=255, step=0.1, value=4.0) ) # - # ## Segmentation From Markers min_img = sitk.RegionalMinima(feature_img, backgroundValue=0, foregroundValue=1.0, fullyConnected=False, flatIsMinima=True) marker_img = sitk.ConnectedComponent(min_img, fullyConnected=False) myshow(sitk.LabelToRGB(marker_img), "Too many local minima markers") ws = sitk.MorphologicalWatershedFromMarkers(feature_img, marker_img, markWatershedLine=True, fullyConnected=False) myshow(sitk.LabelToRGB(ws), "Watershed Oversegmentation from markers") # + pt = [60,60] idx = img.TransformPhysicalPointToIndex(pt) marker_img *= 0 marker_img[0,0] = 1 marker_img[idx] = 2 ws = sitk.MorphologicalWatershedFromMarkers(feature_img, marker_img, markWatershedLine=True, fullyConnected=False) myshow(sitk.LabelOverlay(img, ws, 
opacity=.2), "Watershed Oversegmentation from manual markers") # - # ## Binary Watersheds for Object Separation rgb_img = sitk.ReadImage(fdata("coins.png")) myshow(rgb_img, "coins.png") img = sitk.VectorIndexSelectionCast(rgb_img,1) myshow(img, "Green Coins") feature_img = sitk.GradientMagnitudeRecursiveGaussian(img, sigma=1.5) myshow(feature_img) ws_img = sitk.MorphologicalWatershed(feature_img, level=4, markWatershedLine=False, fullyConnected=False) myshow(sitk.LabelToRGB(ws_img), "Watershed Over Segmentation") seg = sitk.ConnectedComponent(ws_img!=ws_img[0,0]) myshow(sitk.LabelOverlay(img, seg), "Foreground Components") filled = sitk.BinaryFillhole(seg!=0) d = sitk.SignedMaurerDistanceMap(filled, insideIsPositive=False, squaredDistance=False, useImageSpacing=False) myshow(d, "Inside Distance Map") ws = sitk.MorphologicalWatershed( d, markWatershedLine=False, level=1) myshow(sitk.LabelOverlay(img, ws)) ws = sitk.Mask( ws, sitk.Cast(seg, ws.GetPixelID())) myshow(sitk.LabelOverlay(img, ws), "Split Objects") # # Multi-label Morphology seg = ws radius=10 bd_img = sitk.BinaryDilate(seg!=0, radius) myshow(bd_img, "Binary Dilate") dist_img = sitk.SignedMaurerDistanceMap(seg!=0, insideIsPositive=False, squaredDistance=False, useImageSpacing=False) wsd_img = sitk.MorphologicalWatershedFromMarkers(dist_img, seg, markWatershedLine=False) myshow(sitk.LabelOverlay(img,wsd_img)) md_img = sitk.Mask(wsd_img,bd_img) myshow(sitk.LabelToRGB(md_img), "Multi-label Dilate") e_img=sitk.BinaryErode(md_img!=0, radius) mo_img=sitk.Mask(md_img, e_img) myshow(sitk.LabelOverlay(img, mo_img), "Multi-label Closing")
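The marker-based watershed used above is easiest to picture in one dimension: flood outward from the markers in order of increasing height, and each pixel joins the basin that reaches it first. A toy stdlib sketch of that idea (my simplification, not the SimpleITK implementation, which also handles watershed lines and connectivity options):

```python
import heapq

def watershed_1d(height, markers):
    """Label every index of a 1-D height profile by flooding from markers.

    height: list of heights (the 'feature image').
    markers: dict mapping index -> basin label.
    """
    labels = dict(markers)
    # priority queue ordered by height: lowest pixels flood first
    heap = [(height[i], i) for i in markers]
    heapq.heapify(heap)
    while heap:
        _, i = heapq.heappop(heap)
        for j in (i - 1, i + 1):
            if 0 <= j < len(height) and j not in labels:
                labels[j] = labels[i]  # basin i reaches j first
                heapq.heappush(heap, (height[j], j))
    return [labels[i] for i in range(len(height))]
```

The ridge between two minima ends up split between the basins, which is why over-segmentation appears as soon as there are too many local minima acting as markers.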
Python/32_Watersheds_Segmentation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Imputation import pandas as pd import numpy as np import statsmodels from statsmodels.imputation import mice import random random.seed(10) # ## Create data frame df = pd.read_csv("http://goo.gl/19NKXV") df.head() original = df.copy() original.describe().loc['count',:] # **Add some missing values** def add_nulls(df, n): new = df.copy() new.iloc[random.sample(range(new.shape[0]), n), :] = np.nan return new df.Cholesterol = add_nulls(df[['Cholesterol']], 20) df.Smoking = add_nulls(df[['Smoking']], 20) df.Education = add_nulls(df[['Education']], 20) df.Age = add_nulls(df[['Age']], 5) df.BMI = add_nulls(df[['BMI']], 5) # Confirm the presence of null values df.describe() # **Create categorical variables** for col in ['Gender', 'Smoking', 'Education']: df[col] = df[col].astype('category') df.dtypes # **Create dummy variables** df = pd.get_dummies(df); # ## Impute data # Replace null values using the MICE model # **MICEData class** imp = mice.MICEData(df) # **Imputation for one feature** # The `conditional_formula` attribute is a dictionary containing the models that will be used to impute the data for each column. This can be updated to change the imputation model. imp.conditional_formula['BMI'] before = imp.data.BMI.copy() # The `perturb_params` method must be called before running the `impute` method, which runs the imputation. It updates the specified column in the `data` attribute.
imp.perturb_params('BMI') imp.impute('BMI') after = imp.data.BMI import matplotlib.pyplot as plt plt.clf() fig, ax = plt.subplots(1, 1) ax.plot(before, 'or', label='before', alpha=1, ms=8) ax.plot(after, 'ok', label='after', alpha=0.8, mfc='w', ms=8) plt.legend(); pd.DataFrame(dict(before=before.describe(), after=after.describe())) before[before != after] after[before != after] # ### Impute all imp.update_all(2) imp.plot_fit_obs('BMI'); imp.plot_fit_obs('Age'); # ### Validation original.mean() for col in original.mean().index: x = original.mean()[col] y = imp.data[col].mean() e = abs(x - y) / x print("{:<12} original={:>8.2f}, imputed={:>8.2f}, error={:>5.2g}%".format(col, x, y, e * 100)) # ## MICE # The `MICE` class allows fitting a model to data that contain missing values.
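One MICE-style imputation step for a column amounts to fitting a model of that column on the complete rows and filling the gaps with its predictions; `MICEData` additionally perturbs the fitted parameters (hence `perturb_params`) so repeated imputations reflect model uncertainty. A minimal one-predictor sketch of the deterministic part (function names are mine):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def impute_column(xs, ys):
    """Fill None entries of ys from a regression of y on x over complete rows."""
    complete = [(x, y) for x, y in zip(xs, ys) if y is not None]
    a, b = fit_line([x for x, _ in complete], [y for _, y in complete])
    return [y if y is not None else a + b * x for x, y in zip(xs, ys)]
```

Chaining such steps across columns, re-using freshly imputed values as predictors, is the "chained equations" in MICE.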
research/imputation.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from scipy import stats import numpy as np import matplotlib.pyplot as plt from s import Sample, ZHyp, THyp, ChiHyp, Z2Hyp, T2Hyp, FHyp, AltHypKind # - def plot_dist(dist, label=''): domain = np.linspace(dist.ppf(0.001), dist.ppf(0.999), 200) values = dist.pdf(domain) plt.plot(domain, values, label=label) plt.fill_between(domain, 0, values, alpha=.2) plt.show() # + def v1(): sample1 = Sample.from_data( '1', np.array([8.60, 9.70, 7.83, 8.77, 9.15, 9.40, 9.36, 8.90, 10.22, 7.13]) ) sample1.describe() hyp = THyp(kind=AltHypKind.RIGHT, m=9, sample=sample1) alpha = 0.12 plot_dist(hyp.dist) print('Crit values:') print(hyp.critical_values(alpha)) criterion_value, _, p_value, result = hyp.full_test(sample1, alpha) print('Criterion value: {}'.format(criterion_value)) print('P-value: {}'.format(p_value)) print('H0' if result else 'H1') v1() # + def v2(): sample1 = Sample.from_data( '1', np.array([10.73, 9.878, 10.12, 10.58, 10.56, 10.50, 10.93, 10.32, 10.23, 10.89]) ) sample2 = Sample.from_data( '2', np.array([9.594, 11.37, 10.53, 11.04, 10.47, 10.30, 10.90, 9.878, 10.84, 10.60]) ) sample1.describe() sample2.describe() hyp = Z2Hyp(kind=AltHypKind.LEFT, sigma1=0.35, sigma2=0.44) alpha = 0.18 real_dist1 = stats.norm(10.5, 0.35) real_dist2 = stats.norm(10.5, 0.44) plot_dist(hyp.dist) print('Crit values:') print(hyp.critical_values(alpha)) criterion_value, _, p_value, result = hyp.full_test(sample1, sample2, alpha) print('Criterion value: {}'.format(criterion_value)) print('P-value: {}'.format(p_value)) print('H0' if result else 'H1') v2() # + def v3(): sample1 = Sample.from_data( '1', np.array([27.84, 27.65, 26.47, 28.18, 29.33]) ) sample2 = Sample.from_data( '2', np.array([29.28, 28.40, 28.90, 30.47, 30.48, 30.34, 29.44, 28.23, 28.96]) ) sample1.describe() 
sample2.describe() hyp = Z2Hyp(kind=AltHypKind.TWO_SIDED, sigma1=0.9, sigma2=0.9) alpha = 0.20 plot_dist(hyp.dist) print('Crit values:') print(hyp.critical_values(alpha)) criterion_value, _, p_value, result = hyp.full_test(sample1, sample2, alpha) print('Criterion value: {}'.format(criterion_value)) print('P-value: {}'.format(p_value)) print('H0' if result else 'H1') v3() # + def v4(): sample1 = Sample.from_data( '1', np.array([19.29, 20.04, 23.29, 16.00, 21.47, 16.05, 19.02, 15.34, 20.23, 19.00]) ) sample2 = Sample.from_data( '2', np.array([19.11, 17.81, 23.75, 20.70, 18.51, 19.72, 19.38, 18.49, 19.32, 18.93]) ) sample1.describe() sample2.describe() hyp = FHyp(kind=AltHypKind.RIGHT, sample1=sample1, sample2=sample2) alpha = 0.14 real_dist1 = stats.norm(18.5, 2.20) real_dist2 = stats.norm(19.2, 1.65) plot_dist(hyp.dist) print('Crit values:') print(hyp.critical_values(alpha)) criterion_value, _, p_value, result = hyp.full_test(sample1, sample2, alpha) print('Criterion value: {}'.format(criterion_value)) print('P-value: {}'.format(p_value)) print('H0' if result else 'H1') v4() # + def v5(): sample1 = Sample.from_data( '1', np.array([8.60, 9.70, 7.83, 8.77, 9.15, 9.40, 9.36, 8.90, 10.22, 7.13]) ) sample1.describe() hyp = ZHyp(kind=AltHypKind.TWO_SIDED, m=9, sigma=np.sqrt(0.5625)) alpha = 0.2 plot_dist(hyp.dist) print('Crit values:') print(hyp.critical_values(alpha)) criterion_value, _, p_value, result = hyp.full_test(sample1, alpha) print('Criterion value: {}'.format(criterion_value)) print('P-value: {}'.format(p_value)) print('H0' if result else 'H1') v5() # + def v6(): sample1 = Sample.from_data( '1', np.array([36.90, 34.47, 33.78, 30.72, 33.04, 37.09, 34.94, 36.73, 30.69, 35.68]) ) sample2 = Sample.from_data( '2', np.array([32.26, 29.95, 39.11, 40.90, 38.73, 34.21, 31.79, 37.27, 40.88, 32.88]) ) sample1.describe() sample2.describe() hyp = Z2Hyp(kind=AltHypKind.LEFT, sigma1=2.7, sigma2=3.8) alpha = 0.15 plot_dist(hyp.dist) print('Crit values:') 
print(hyp.critical_values(alpha)) criterion_value, _, p_value, result = hyp.full_test(sample1, sample2, alpha) print('Criterion value: {}'.format(criterion_value)) print('P-value: {}'.format(p_value)) print('H0' if result else 'H1') v6() # + def v7(): sample1 = Sample.from_data( '1', np.array([27.8, 27.6, 26.4, 28.1, 29.3, 26.1, 28.8]) ) sample2 = Sample.from_data( '2', np.array([29.2, 28.4, 28.9, 30.4, 30.4, 30.3, 29.4, 28.2, 28.9, 27.4, 29.7]) ) sample1.describe() sample2.describe() hyp = FHyp(kind=AltHypKind.TWO_SIDED, sample1=sample1, sample2=sample2) alpha = 0.20 real_dist1 = stats.norm(27.8, 0.9) real_dist2 = stats.norm(29.3, 0.9) plot_dist(hyp.dist) print('Crit values:') print(hyp.critical_values(alpha)) criterion_value, _, p_value, result = hyp.full_test(sample1, sample2, alpha) print('Criterion value: {}'.format(criterion_value)) print('P-value: {}'.format(p_value)) print('H0' if result else 'H1') v7() # + def v8(): sample1 = Sample.from_data( '1', np.array([29.29, 30.04, 33.29, 26.00, 31.47, 26.05, 29.02, 25.34, 30.23, 29.00]) ) sample2 = Sample.from_data( '2', np.array([34.11, 32.81, 38.75, 35.70, 31.51, 36.72, 34.38, 30.49, 35.32, 33.93]) ) sample1.describe() sample2.describe() hyp = Z2Hyp(kind=AltHypKind.LEFT, sigma1=2.2, sigma2=1.65) alpha = 0.25 plot_dist(hyp.dist) print('Crit values:') print(hyp.critical_values(alpha)) criterion_value, _, p_value, result = hyp.full_test(sample1, sample2, alpha) print('Criterion value: {}'.format(criterion_value)) print('P-value: {}'.format(p_value)) print('H0' if result else 'H1') v8() # + def v9(): sample1 = Sample.from_data( '1', np.array([36.90, 34.47, 33.78, 30.72, 33.04, 37.09, 34.94, 36.73, 30.69, 35.68]) ) sample2 = Sample.from_data( '2', np.array([32.26, 29.95, 39.11, 40.90, 38.73, 34.21, 31.79, 37.27, 40.88, 32.88]) ) sample1.describe() sample2.describe() hyp = FHyp(kind=AltHypKind.LEFT, sample1=sample1, sample2=sample2) alpha = 0.18 plot_dist(hyp.dist) print('Crit values:') 
print(hyp.critical_values(alpha)) criterion_value, _, p_value, result = hyp.full_test(sample1, sample2, alpha) print('Criterion value: {}'.format(criterion_value)) print('P-value: {}'.format(p_value)) print('H0' if result else 'H1') v9() # + def v10(): sample1 = Sample.from_data( '1', np.array([10.73, 9.878, 10.12, 10.58, 10.56, 10.50, 10.93, 10.32, 10.23, 10.89]) ) sample2 = Sample.from_data( '2', np.array([ 9.60, 11.37, 9.77, 9.20, 10.70, 9.28, 10.44, 10.26, 11.31, 9.62]) ) sample1.describe() sample2.describe() hyp = FHyp(kind=AltHypKind.TWO_SIDED, sample1=sample1, sample2=sample2) alpha = 0.20 plot_dist(hyp.dist) print('Crit values:') print(hyp.critical_values(alpha)) criterion_value, _, p_value, result = hyp.full_test(sample1, sample2, alpha) print('Criterion value: {}'.format(criterion_value)) print('P-value: {}'.format(p_value)) print('H0' if result else 'H1') v10() # + def v11(): sample1 = Sample.from_data( '1', np.array([2.89, 3.26, 2.52, 2.41, 3.28, 2.17, 2.57, 1.67, 3.04, 2.90]) ) sample2 = Sample.from_data( '2', np.array([3.34, 2.86, 3.26, 3.14, 2.97, 3.23, 2.97, 3.04, 2.87, 2.73]) ) sample1.describe() sample2.describe() hyp = Z2Hyp(kind=AltHypKind.LEFT, sigma1=0.6, sigma2=0.2) alpha = 0.15 plot_dist(hyp.dist) print('Crit values:') print(hyp.critical_values(alpha)) criterion_value, _, p_value, result = hyp.full_test(sample1, sample2, alpha) print('Criterion value: {}'.format(criterion_value)) print('P-value: {}'.format(p_value)) print('H0' if result else 'H1') v11()
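The `Z2Hyp` test used repeatedly above presumably computes the classic two-sample Z statistic for known standard deviations; that computation needs nothing beyond the stdlib. A sketch under that assumption (the custom `s` module may differ in details):

```python
import math

def two_sample_z(x1, x2, sigma1, sigma2):
    """Two-sample Z statistic and two-sided p-value, sigmas known."""
    m1, m2 = sum(x1) / len(x1), sum(x2) / len(x2)
    se = math.sqrt(sigma1 ** 2 / len(x1) + sigma2 ** 2 / len(x2))
    z = (m1 - m2) / se
    # standard normal CDF via the error function
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p
```

One-sided variants keep the same statistic and take the appropriate single tail instead of doubling it.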
test-half-semester.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np from pathlib import Path dPath = Path("../docs/dumps") import pickle with open(dPath / "train_data.pkl", 'rb') as filename: train_data = pickle.load(filename) with open(dPath / "valid_data.pkl", 'rb') as filename: valid_data = pickle.load(filename) X_train = train_data.drop("Detected", axis=1) y_train = train_data.Detected X_valid = valid_data.drop("Detected", axis=1) y_valid = valid_data.Detected with open(dPath / "rf_exp_04_names.pkl", 'rb') as filename: names = pickle.load(filename) X_train = X_train[names] X_valid = X_valid[names] X_train.head() from imblearn.over_sampling import ADASYN sm = ADASYN(random_state=42, n_jobs=-1, n_neighbors=5) # %time X_train, y_train = sm.fit_resample(X_train, y_train) from catboost import CatBoostClassifier, Pool, cv cb = CatBoostClassifier( custom_loss=['AUC'], random_seed=42, cat_features=[1], iterations=1000, verbose=10) cb.fit(X_train, y_train); cb.save_model(str(dPath / 'cb_exp_1.dump')) from sklearn.metrics import classification_report from sklearn import metrics def conf_matr(m): y_pred = m.predict(X_train)=='True' print(classification_report(y_train, y_pred)) conf_matr(cb) cb.get_all_params()
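The `classification_report` printed by `conf_matr` summarizes precision, recall, and F1; for a binary problem these come directly from the confusion-matrix counts. A stdlib sketch (names are mine):

```python
def binary_prf(y_true, y_pred):
    """Precision, recall, and F1 for binary labels (truthy = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

After ADASYN oversampling, scores computed on the (resampled) training set are optimistic, which is why the held-out `X_valid` is the fairer place to report them.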
notebooks/cb_experiment_2.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.8 64-bit (''base'': conda)' # name: python3 # --- import numpy as np import matplotlib.pyplot as plt import pandas as pd # + # Import the dataset dataset = pd.read_csv('./Section 19 - Decision Tree Classification/Social_Network_Ads.csv') X = dataset.iloc[:, [2,3]].values y = dataset.iloc[:, 4].values dataset.head() # - # Split the dataset into a training set and a test set from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0) # Feature scaling is not needed because Euclidean distances are not used from sklearn.tree import DecisionTreeClassifier classifier = DecisionTreeClassifier(criterion = 'entropy', random_state = 0) classifier.fit(X_train, y_train) # Predict the test-set results y_pred = classifier.predict(X_test) # Accuracy percentage print(np.mean(y_test == y_pred)) # Build a confusion matrix from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) cm # Plot the classifier results on the training set from matplotlib.colors import ListedColormap X_set, y_set = X_train, y_train X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 1), np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 500)) plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape), alpha = 0.75, cmap = ListedColormap(('red', 'green'))) plt.xlim(X1.min(), X1.max()) plt.ylim(X2.min(), X2.max()) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('Decision Tree Classification (Training set)')
plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show() # Plot the classifier results on the test set X_set, y_set = X_test, y_test X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 1), np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 500)) plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape), alpha = 0.75, cmap = ListedColormap(('red', 'green'))) plt.xlim(X1.min(), X1.max()) plt.ylim(X2.min(), X2.max()) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('Decision Tree Classification (Test set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show()
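With `criterion = 'entropy'`, the tree scores each candidate split by information gain: the drop from the parent's label entropy to the weighted entropy of the children. A stdlib sketch:

```python
import math

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def information_gain(parent, left, right):
    """Entropy reduction achieved by splitting parent into left + right."""
    n = len(parent)
    weighted = len(left) / n * entropy(left) + len(right) / n * entropy(right)
    return entropy(parent) - weighted
```

A perfectly mixed binary node has entropy 1 bit; a split that separates the classes completely gains the full bit, which is what the tree greedily maximizes at each node.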
Clases/Part 3 - Classification/19__Desicion_Tree_Classification.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %matplotlib notebook import numpy as np import matplotlib.pyplot as plt from ipywidgets import interact f = lambda x:print(x) interact(f, x=['apples','oranges']) def calcFresnel(ti,nt=2.5): tt = np.arcsin(np.sin(ti)/nt) Rs = (np.cos(ti)-nt*np.cos(tt))/(np.cos(ti)+nt*np.cos(tt)) Rs = np.real(Rs*np.conj(Rs)) Rp = (np.cos(tt)-nt*np.cos(ti))/(np.cos(tt)+nt*np.cos(ti)) Rp = np.real(Rp*np.conj(Rp)) return tt,Rp,Rs tis = np.linspace(0,np.pi/2,100) def plotFresnel(nt=1.8): tt,Rp,Rs = calcFresnel(tis,nt) plt.plot(tis*180/np.pi,Rp,label='$R_p$') plt.plot(tis*180/np.pi,Rs,label='$R_s$') plt.xlabel('Angle from normal / deg') plt.ylabel('Reflectivity') plt.legend() # - plotFresnel()
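As a sanity check on `calcFresnel`, at normal incidence the s and p formulas collapse to a single reflectance, R = ((n1 - n2) / (n1 + n2))**2, so the two curves should meet at zero degrees:

```python
def fresnel_normal(n1, n2):
    """Reflectance at normal incidence between two non-absorbing media."""
    return ((n1 - n2) / (n1 + n2)) ** 2
```

For air to glass (n about 1.5) this gives the familiar ~4% reflection per surface; for nt = 1.8 as plotted above it is about 8%.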
optics/fresnel.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # + from pathlib import Path from data.preprocessing_helpers import preprocess from features.as_numpy import get_data_as_numpy_array from models.train import split_into_training_and_testing_sets, train_model, model_test from visualization.plots import get_plot_for_best_fit_line # - raw_data_file_path = str(Path().resolve().parents[0] / "data" / "raw" / "housing_data.txt") clean_data_file_path = str(Path().resolve().parents[0] / "data" / "clean" / "clean_housing_data.txt") preprocess(raw_data_file_path, clean_data_file_path) data_array = get_data_as_numpy_array(clean_data_file_path, 2) training_set, testing_set = split_into_training_and_testing_sets(data_array) slope, intercept = train_model(training_set) print("Slope: {0}, Intercept: {1}".format(slope, intercept)) print("R Square of fit on the testing set is {0}".format(model_test(testing_set, slope, intercept))) training_figure = get_plot_for_best_fit_line(slope, intercept, training_set[:, 0], training_set[:, 1], "Training") training_figure.show() testing_figure = get_plot_for_best_fit_line(slope, intercept, testing_set[:, 0], testing_set[:, 1], "Testing") testing_figure.show()
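`model_test` above reports an R Square value; the coefficient of determination compares the residual sum of squares against the variance around the mean. A stdlib sketch, assuming `model_test` follows the standard definition:

```python
def r_squared(y, y_hat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1 - ss_res / ss_tot
```

A perfect fit scores 1.0; predicting the mean everywhere scores 0.0, and a model worse than the mean goes negative.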
notebooks/train_test_model.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: xpython # language: python # name: xpython # --- # + [markdown] deletable=false editable=false # Copyright 2020 <NAME> and made available under [CC BY-SA](https://creativecommons.org/licenses/by-sa/4.0) for text and [Apache-2.0](http://www.apache.org/licenses/LICENSE-2.0) for code. # # - # # Decision trees: Problem solving # # We previously looked at predicting whether or not a candy is popular based on its other properties using logistic regression. # # This gave us an idea of how different properties **add** together to make a candy popular, but it didn't give us as much of an idea of how the properties act on each other. # # In this session, you will predict whether or not a candy is popular using decision trees. # # This dataset [was collected](http://fivethirtyeight.com/features/the-ultimate-halloween-candy-power-ranking/) to discover the most popular Halloween candy. # # | Variable | Type | Description | # |:-----------------|:------------------|:--------------------------------------------------------------| # | chocolate | Numeric (binary) | Does it contain chocolate? | # | fruity | Numeric (binary) | Is it fruit flavored? | # | caramel | Numeric (binary) | Is there caramel in the candy? | # | peanutalmondy | Numeric (binary) | Does it contain peanuts, peanut butter or almonds? | # | nougat | Numeric (binary) | Does it contain nougat? | # | crispedricewafer | Numeric (binary) | Does it contain crisped rice, wafers, or a cookie component? | # | hard | Numeric (binary) | Is it a hard candy? | # | bar | Numeric (binary) | Is it a candy bar? | # | pluribus | Numeric (binary) | Is it one of many candies in a bag or box? | # | sugarpercent | Numeric (0 to 1) | The percentile of sugar it falls under within the data set. 
| # | pricepercent | Numeric (0 to 1) | The unit price percentile compared to the rest of the set. | # | winpercent | Numeric (percent) | The overall win percentage according to 269,000 matchups | # | popular | Numeric (binary) | 1 if win percentage is over 50% and 0 otherwise | # # <div style="text-align:center;font-size: smaller"> # <b>Source:</b> This dataset is Copyright (c) 2014 ESPN Internet Ventures and distributed under an MIT license. # </div> # # + [markdown] slideshow={"slide_type": "slide"} # ## Load the data # # First import `pandas`. # - # Load a dataframe with `"datasets/candy-data.csv"` but use `index_col="competitorname"` to make `competitorname` an ID instead of a variable. # Then display the dataframe. # ## Explore the data # # Since this is a dataset you've looked at before, just make a correlation heatmap to show how the variables are related to each other. # # Start by importing `plotly.express`. # And create and show the heatmap figure in one line. # ---------------------------- # **QUESTION:** # # Look at the first two columns. # What variables do these correspond to? # **ANSWER: (click here to edit)** # # # <hr> # **QUESTION:** # # How would you describe their pattern of correlation with other variables? # **ANSWER: (click here to edit)** # # # <hr> # ## Prepare train/test sets # # You need to split the dataframe into training data and testing data, and also separate the predictors from the class labels. # # Start by dropping the label, `popular`, and its counterpart, `winpercent`, to make a new dataframe called `X`. # Save a dataframe with just `popular` in `Y`. # Import `sklearn.model_selection` to split `X` and `Y` into train and test sets. # Now do the splits. Use `random_state=1` so we all get the same answer # ## Decision tree model # # First import `sklearn.tree`. # Now create the decision tree model # ---------------------------- # **QUESTION:** # # Why don't we need to scale anything? 
# **ANSWER: (click here to edit)** # # # <hr> # Fit the model and get predictions. # ## Evaluate model performance # # Import `sklearn.metrics`. # Get the accuracy. # And get the recall, precision, and f1. # As we can see, both the accuracy and the average precision, recall, and f1 are all very good. # ## Display the Tree # First import `graphviz`. # And use it to build the tree. Try to copy this from your other notebook if at all possible. # -------------- # # **QUESTION:** # # Explain the tree - what are the first three important decisions it makes? # **ANSWER: (click here to edit)** # # # <hr> # **QUESTION:** # # Consider what the logistic regression model said were important features below. # How does the decision tree compare? # # ![image.png](attachment:image.png) # **ANSWER: (click here to edit)** # # # <hr> # **QUESTION:** # # Which model (decision tree or logistic regression) do you think is more correct? # How would you know? # **ANSWER: (click here to edit)** # # # <hr> #
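The `train_test_split` call with `random_state=1` used above is essentially a seeded shuffle followed by a cut. A stdlib sketch of the idea (the seeding scheme here is mine, not scikit-learn's exact one, so the resulting split will differ):

```python
import random

def split_rows(rows, test_size=0.25, seed=1):
    """Shuffle indices with a fixed seed, then cut into train/test parts."""
    rng = random.Random(seed)
    idx = list(range(len(rows)))
    rng.shuffle(idx)
    cut = int(round(len(rows) * (1 - test_size)))
    return [rows[i] for i in idx[:cut]], [rows[i] for i in idx[cut:]]
```

Fixing the seed is what makes everyone in the class get the same split, and therefore the same tree and the same metrics.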
Decision-trees-PS.ipynb
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt # %matplotlib inline import math n = np.arange(1, 150, 1) # input from 1 ~ 150 (log2 is undefined at n = 0) merge = 6*n * np.log2(n) + 6*n insert = 0.5*n**2 # + plt.plot(n, merge, n, insert) plt.legend(('Merge Sort', 'Insertion Sort'), loc='upper right') plt.grid(True) plt.show() # - n = np.arange(200, 1400) merge = 6*n * np.log2(n) + 6*n insert = 0.5*n**2 plt.plot(n, merge, n, insert) plt.legend(('Merge Sort', 'Insertion Sort'), loc='upper right') plt.grid(True) plt.show()
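The curves above compare idealized operation counts (6n log2 n + 6n versus n^2/2); counting actual comparisons makes the crossover concrete. A stdlib sketch in which both sorts return the sorted list together with the number of comparisons performed:

```python
def insertion_sort(items):
    """Insertion sort with a comparison counter."""
    a, comparisons = list(items), 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a, comparisons

def merge_sort(items):
    """Top-down merge sort with a comparison counter."""
    if len(items) <= 1:
        return list(items), 0
    mid = len(items) // 2
    left, cl = merge_sort(items[:mid])
    right, cr = merge_sort(items[mid:])
    merged, i, j, comparisons = [], 0, 0, cl + cr
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, comparisons
```

On reversed input, insertion sort performs on the order of n^2/2 comparisons while merge sort stays near n log2 n, matching the plotted curves.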
Coursera/Divide and Conquer, Sorting and Searching, and Randomized Algorithms/Week-1/Excercise/MergeSort-Vs-InsertionSort.ipynb