fhmartinezs/Dinamicos_UD
notebooks/03_Laplace.ipynb
apache-2.0
import sympy from sympy import * sympy.init_printing() s = Symbol('s') t = Symbol('t', positive=True) """ Explanation: Analysis of Dynamic Systems Schedule: Getting started Introduction Mathematical bases Bode diagrams Modeling with linear elements State variables Block diagrams Time response Frequency response Stability Root Locus Final project Course evaluation Mathematical bases Complex Variable Theory. Differential equations. Laplace transform. Theory of matrices. Bode diagrams. End of explanation """ from IPython.display import Image Image(filename='img/laplace_fig1.png') """ Explanation: Laplace transform A tool for solving linear differential equations. Definition: For a function $f(t)$ such that: $$\int_{0}^{\infty }\left | f(t) \, e^{-\sigma t} \right | dt < \infty$$ for some finite real value of $\sigma$, the Laplace transform of $f(t)$ is defined as: $$F\left ( s \right ) = \int_{0}^{\infty } f\left ( t \right ) \, e^{-st} dt$$ This is the unilateral Laplace transform, also written: $$F\left ( s \right ) = \mathfrak{L} \left [ f\left ( t \right ) \right ]$$ The variable $s$ is known as the Laplace operator, and is a complex variable. All the information contained in $f(t)$ prior to $t = 0$ is ignored or considered equal to zero; this does not normally affect calculations, since the time reference normally used is $t = 0$. We will not use the definition of the Laplace transform directly, but rather the transform tables found in textbooks. 
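Even so, the defining integral can be evaluated directly with SymPy and compared against its built-in transform; a small sketch checking the table entry for $e^{-at}$:

```python
import sympy
from sympy import Symbol, exp, integrate, laplace_transform, oo

t = Symbol('t', positive=True)
s = Symbol('s', positive=True)
a = Symbol('a', positive=True)

# the defining integral F(s) = int_0^oo f(t) e^{-s t} dt, for f(t) = e^{-a t}
F_def = integrate(exp(-a * t) * exp(-s * t), (t, 0, oo))
# SymPy's built-in, table-style transform of the same function
F_tab = laplace_transform(exp(-a * t), t, s, noconds=True)
print(F_def, F_tab)  # both equal 1/(a + s)
```

Both routes give $\frac{1}{s+a}$, the familiar table entry.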
End of explanation """ import numpy as np import matplotlib.pyplot as plt x = [-1, 0, 0, 1, 2, 3] y = [0, 0, 1, 1, 1, 1] plt.plot(x, y, 'r', label='f(t) = u(t)', linewidth=3) plt.title('Unit step function') plt.xlabel('t') plt.ylabel('f(t)') plt.legend() plt.xlim(-1, 3) plt.ylim(-1, 2) plt.grid() plt.show() """ Explanation: Example: Consider the unit step function $f(t)$: End of explanation """ F1 = 5/(s*(s**2+s+2)) F1.expand() from control import * from scipy import signal sysF1 = signal.lti([5], [1, 1, 2, 0]) # F1(s) = 5 / (s**3 + s**2 + 2*s) w, H = signal.freqresp(sysF1) print(sysF1.zeros, sysF1.poles, sysF1.gain) plt.plot(sysF1.zeros.real, sysF1.zeros.imag, 'o') plt.plot(sysF1.poles.real, sysF1.poles.imag, 'x') plt.title('Poles and Zeros') plt.xlabel('Re') plt.ylabel('Im') plt.xlim(-3, 3) plt.ylim(-3, 3) plt.grid() plt.show() """ Explanation: $$F\left ( s \right ) = \mathfrak{L}\left [ u\left ( t \right ) \right ] = \int_{0}^{\infty } u\left ( t \right ) e^{-st} dt$$ $$\Rightarrow \, F\left ( s \right ) = \left [ - \frac{1}{s} e^{-st} \right ] _{0}^{\infty } = \frac{1}{s}$$ Inverse Laplace transform The goal is to recover $f(t)$ from $F(s)$: $$f\left ( t \right ) = \boldsymbol{\mathfrak{L}}^{-1} \left [ F\left ( s \right ) \right ]$$ $$f\left ( t \right ) = \frac{1}{2\pi j} \int_{c-j\infty }^{c+j\infty } F\left ( s \right ) e^{st} ds$$ Theorems 1. Multiplication by a constant $$\boldsymbol{\mathfrak{L}} \left [ k \, f \left (t \right ) \right ] = k \, F \left (s \right )$$ 2. Addition and subtraction $$\mathfrak{L} \left [ f_1 \left ( t \right ) \pm f_2 \left ( t \right ) \right ] = F_1 \left ( s \right ) \pm F_2 \left ( s \right )$$ 3. 
Differentiation $$\mathfrak{L} \left [ \frac{d\, f\left ( t \right )}{dt} \right ] = s \, F\left ( s \right ) - \lim_{t \to 0} f\left ( t \right ) = s \, F\left ( s \right ) - f\left ( 0 \right )$$ $$\mathfrak{L} \left [ \frac{d^n \, f\left ( t \right )}{dt^n} \right ] = s^n \, F\left ( s \right ) - \lim_{t \to 0} \left [ s^{n-1} f\left ( t \right ) + s^{n-2} \frac{df\left ( t \right )}{dt} + \cdots + s^{0} \frac{d^{n-1} f\left ( t \right )}{dt^{n-1}} \right ]$$ $$\mathfrak{L} \left [ \frac{d^n \, f\left ( t \right )}{dt^n} \right ] = s^n \, F\left ( s \right ) - s^{n-1} f\left ( 0 \right ) - s^{n-2} f^{\left ( 1 \right )} \left ( 0 \right ) - \cdots - f^{\left ( n-1 \right )} \left ( 0 \right )$$ 4. Integration $$\mathfrak{L} \left [ \int_{0}^{t} f\left ( \tau \right ) d\tau \right ] = \frac{F\left ( s \right )}{s}$$ $$\mathfrak{L} \left [ \int_{0}^{t_1} \int_{0}^{t_2} \cdots \int_{0}^{t_n} f\left ( \tau \right ) d\tau \, dt_1 \cdots dt_{n-1}\right ] = \frac{F\left ( s \right )}{s^n}$$ 5. Time shift $$\mathfrak{L} \left [ f\left ( t-T \right ) u\left ( t-T \right )\right ] = e^{-Ts} F\left ( s \right )$$ where $u\left ( t-T \right )$ is the unit step shifted to the right by $T$. 6. Initial value theorem $$\lim_{t \to 0} f\left ( t \right ) = \lim_{s \to \infty } sF\left ( s \right )$$ 7. Final value theorem $$\lim_{t \to \infty} f\left ( t \right ) = \lim_{s \to 0 } sF\left ( s \right )$$ This holds if and only if $sF(s)$ is analytic on the imaginary axis and in the right half-plane of $s$. 
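The final value theorem is easy to sanity-check with SymPy; here is a small sketch using $F(s) = \frac{1}{s(s+1)}$, whose inverse transform (from the tables) is $1 - e^{-t}$:

```python
import sympy
from sympy import Symbol, limit, exp, oo

s = Symbol('s')
t = Symbol('t', positive=True)

F = 1 / (s * (s + 1))            # s*F(s) = 1/(s+1): analytic for Re(s) >= 0
final_value = limit(s * F, s, 0)  # final value theorem: lim_{s->0} s F(s)
f = 1 - exp(-t)                   # inverse transform of F(s), from the tables
time_limit = limit(f, t, oo)      # direct limit in the time domain
print(final_value, time_limit)    # 1 1
```

Both limits agree, as the theorem predicts.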
Example: Consider the function $F_1(s)$: $$F_1\left ( s \right ) = \frac{5}{s\left ( s^2 + s + 2 \right )}$$ The poles of $F_1(s)$ are: End of explanation """ import sympy from sympy import * sympy.init_printing() s = Symbol('s') t = Symbol('t', positive=True) X = (-s**2-s+5)/((s)*(s+1)*(s+2)) X X.expand() xt = inverse_laplace_transform(X,s,t).evalf().simplify() xt """ Explanation: $$0, \, -\frac{1}{2} \pm j\frac{\sqrt{7}}{2}$$ Since $s F_1(s)$ is analytic on the imaginary axis and in the right half-plane of $s$, the final value theorem applies. Thus: $$\lim_{t \to \infty } f\left ( t \right ) = \lim_{s \to 0 } s F\left ( s \right ) = \lim_{s \to 0 } \frac{5}{s^2+s+2} = \frac{5}{2}$$ $$ \Rightarrow \lim_{t \to \infty } f\left ( t \right ) = \frac{5}{2} $$ Example: Consider the function $F_2(s)$: $$ F_2 \left ( s \right ) = \frac{\omega }{s^2 + \omega ^2} $$ And we know that $f_2 \left ( t \right ) = \sin \left ( \omega t \right )$. The poles are: $\pm j \omega $ $ \Rightarrow $ the final value theorem does not apply $ \Rightarrow $ we cannot determine the final value of $f_2(t)$ this way (indeed, $\sin \left ( \omega t \right )$ has no limit as $t \to \infty$). 8. Complex displacement $$\mathfrak{L} \left [ e^{\mp at} f\left ( t \right ) \right ] = F \left ( s \pm a \right )$$ 9. Real convolution (complex multiplication) $$F_1 \left ( s \right ) F_2 \left ( s \right ) = \mathfrak{L} \left [ \int_{0}^{t} f_1 \left ( \tau \right ) f_2 \left ( t - \tau \right ) d\tau \right ]$$ $$F_1 \left ( s \right ) F_2 \left ( s \right ) = \mathfrak{L} \left [ \int_{0}^{t} f_2 \left ( \tau \right ) f_1 \left ( t - \tau \right ) d\tau \right ]$$ $$ F_1 \left ( s \right ) F_2 \left ( s \right ) = \mathfrak{L} \left [ f_1 \left ( t \right ) * f_2 \left ( t \right ) \right ] $$ Note: $\mathfrak{L}^{-1} \left [ F_1 \left ( s \right ) F_2 \left ( s \right ) \right ] \neq f_1 \left ( t \right ) \, f_2 \left ( t \right ) $ 10. 
Complex convolution (real multiplication) $$\mathfrak{L} \left [ f_1 \left ( t \right ) f_2 \left ( t \right ) \right ] = F_1 \left ( s \right ) * F_2 \left ( s \right )$$ Partial fractions 1. Simple poles $$ X \left ( s \right ) = \frac{P \left ( s \right )}{Q \left ( s \right )} = \frac{P \left ( s \right )}{\left ( s+s_1 \right )\left ( s+s_2 \right )\cdots \left ( s+s_n \right )} $$ $$ \Rightarrow X \left ( s \right ) = \frac{A}{s+s_1} + \frac{B}{s+s_2} + \cdots + \frac{Z}{s+s_n} $$ where the coefficients $A, B, \ldots, Z$ are determined by multiplying both sides by $(s+s_i)$ and then substituting $s = -s_i$. Example: Consider the function $X(s)$: $$X \left ( s \right ) = \frac{5s+3}{\left ( s+1 \right )\left ( s+2 \right )\left ( s+3 \right )}$$ $$ \Rightarrow X \left ( s \right ) = \frac{A}{s+1} + \frac{B}{s+2} + \frac{C}{s+3} $$ Multiplying both sides by $(s+1)$: $$\left | A+\frac{s+1}{s+2} B+\frac{s+1}{s+3} C =\frac{5s+3}{\left ( s+2 \right )\left ( s+3 \right )} \right | _{s=-1}$$ $$\Rightarrow A = \frac{-5+3}{\left ( 1 \right )\left ( 2 \right )} = \frac{-2}{2} = -1$$ Multiplying both sides by $(s+2)$: $$\left | \frac{s+2}{s+1} A + B +\frac{s+2}{s+3} C =\frac{5s+3}{\left ( s+1 \right )\left ( s+3 \right )} \right | _{s=-2}$$ $$\Rightarrow B = \frac{-10+3}{\left ( -1 \right )\left ( 1 \right )} = \frac{-7}{-1} = 7$$ Multiplying both sides by $(s+3)$: $$\left | \frac{s+3}{s+1} A + \frac{s+3}{s+2} B + C =\frac{5s+3}{\left ( s+1 \right )\left ( s+2 \right )} \right | _{s=-3}$$ $$\Rightarrow C = \frac{-15+3}{\left ( -2 \right )\left ( -1 \right )} = \frac{-12}{2} = -6$$ $$\Rightarrow X \left ( s \right ) = \frac{-1}{s+1} + \frac{7}{s+2} + \frac{-6}{s+3}$$ 2. Poles of multiple order. The procedure is the same, but there are as many terms as the multiplicity of the pole. 
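The hand computation above can be cross-checked with SymPy's apart, which performs the same expansion:

```python
import sympy
from sympy import Symbol, apart

s = Symbol('s')
# the same function as in the worked example
X = (5 * s + 3) / ((s + 1) * (s + 2) * (s + 3))
X_pf = apart(X, s)
print(X_pf)  # matches A = -1, B = 7, C = -6
```

The result agrees term by term with the coefficients found by hand.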
Example: Consider the function $X(s)$: $$X\left ( s \right ) = \frac{1}{s\left ( s+1 \right )^3 \left ( s+2 \right )}$$ $$\Rightarrow X\left ( s \right ) = \frac{A}{s} + \frac{B_1}{\left ( s+1 \right )^3} + \frac{B_2}{\left ( s+1 \right )^2} + \frac{B_3}{\left ( s+1 \right )} + \frac{C}{s+2}$$ $$\Rightarrow A = \left [ \frac{1}{\left ( s+1 \right )^3 \left ( s+2 \right )} \right ]_{s=0} = \frac{1}{\left ( 1 \right )\left ( 2 \right )} = \frac{1}{2}$$ $$\Rightarrow B_1 = \left | \frac{1}{s\left ( s+2 \right )} \right |_{s=-1} = \frac{1}{\left ( -1 \right )\left ( 1 \right )} = -1$$ $$ \Rightarrow B_2 = \left | \frac{d}{ds} \left [ \frac{1}{s\left ( s+2 \right )} \right ] \right |_{s=-1} = \left | \frac{-\frac{d\left ( s^2 +2s \right )}{ds}}{s^2 \left ( s+2 \right )^2} \right |_{s=-1} = \left | \frac{-\left ( 2s+2 \right )}{s^2\left ( s+2 \right )^2} \right |_{s=-1} = 0 $$ $$ \Rightarrow B_3 = \left | \frac{1}{2!} \frac{d^2}{ds^2} \left [ \frac{1}{s\left ( s+2 \right )} \right ] \right |_{s=-1} = \left | \frac{1}{2!} \frac{d}{ds} \left [ \frac{-2s-2}{s^2\left ( s+2 \right )^2} \right ] \right |_{s=-1} $$ $$ = \left | \frac{1}{2} \, \frac{-2s^2\left ( s+2 \right )^2 + \left ( 2s+2 \right ) \left ( 4s^3 + 12s^2 + 8s \right )}{s^4\left ( s+2 \right )^4} \right |_{s=-1} = \frac{1}{2} \, \frac{-2\left ( 1 \right )\left ( 1 \right ) + \left ( 0 \right )\left ( 0 \right )}{\left ( 1 \right )\left ( 1 \right )} = -1 $$ General form: $$ B_r = \left | \frac{1}{\left ( r-1 \right )!} \frac{d^{r-1}}{ds^{r-1}} \left [ \left ( s+s_i \right )^{n} X\left ( s \right ) \right ] \right |_{s=-s_i} $$ where $n$ is the multiplicity of the pole at $s = -s_i$. $$\Rightarrow C = \left [ \frac{1}{s \left ( s+1 \right )^3 } \right ]_{s=-2} = \frac{1}{\left ( -2 \right )\left ( -1 \right )^3} = \frac{1}{2}$$ Therefore, the complete expansion is: $$ X\left ( s \right ) = \frac{\frac{1}{2}}{s} + \frac{-1}{\left ( s+1 \right )^3} + \frac{-1}{s+1} +\frac{\frac{1}{2}}{s+2} $$ 3. 
Simple complex poles. These are handled in the same way. Workshop: Carry out, by hand, the partial-fraction expansion of: $$ X\left ( s \right ) = \frac{\omega _0 ^2}{s\left ( s^2 + 2 \zeta \omega _0 s + \omega _0 ^2 \right )} $$ Example: Consider the differential equation: $$ \frac{d^2 x\left ( t \right )}{dt^2} + 3 \frac{dx\left ( t \right )}{dt} + 2 x\left ( t \right ) = 5\, u\left ( t \right ) $$ Find $x(t)$ given: $$ u\left ( t \right ) = \left \{ \begin{matrix} 1 & t \geq 0 \\ 0 & t < 0 \end{matrix} \right .$$ $$ x\left ( 0 \right ) = -1,\: x^{(1)} \left ( 0 \right ) = 2 $$ Solution Applying the Laplace transform: $$ s^2 X\left ( s \right ) - s x\left ( 0 \right ) - x^{(1)} \left ( 0 \right ) +3 s X\left ( s \right ) - 3 x\left ( 0 \right ) + 2 X\left ( s \right ) = \frac{5}{s} $$ $$ \Rightarrow X\left ( s \right ) = \frac{-s^2 - s + 5}{s\left ( s^2 + 3 s + 2 \right )} = \frac{-s^2 - s + 5}{s\left ( s+1 \right )\left ( s+2 \right )} $$ To calculate the inverse Laplace transform, we expand in partial fractions: $$ \Rightarrow X\left ( s \right ) = \frac{\frac{5}{2}}{s} - \frac{5}{s+1} + \frac{\frac{3}{2}}{s+2} $$ And using the tables: $$ \Rightarrow x\left ( t \right ) = \frac{5}{2} - 5 e^{-t} + \frac{3}{2} e^{-2t}, \quad t \geq 0 $$ $\frac{5}{2}$ is the steady-state response, and the rest is the transient response. If we are only looking for the steady-state response, we can apply the final value theorem: $$ \lim_{t \to \infty } x\left ( t \right ) = \lim_{s \to 0} s X\left ( s \right ) = \lim_{s \to 0} \frac{-s^2 - s + 5}{s^2 + 3s + 2} = \frac{5}{2} $$ That is one approach; the other option is to calculate it directly with SymPy: End of explanation """
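The closed-form solution obtained above can also be verified by substituting it back into the differential equation and the initial conditions; a quick SymPy check:

```python
import sympy
from sympy import Symbol, exp, diff, simplify, Rational

t = Symbol('t', positive=True)
x = Rational(5, 2) - 5 * exp(-t) + Rational(3, 2) * exp(-2 * t)

# the residual of x'' + 3 x' + 2 x = 5 must vanish identically
residual = simplify(diff(x, t, 2) + 3 * diff(x, t) + 2 * x - 5)
# the initial conditions x(0) = -1 and x'(0) = 2 must hold
x0 = x.subs(t, 0)
dx0 = diff(x, t).subs(t, 0)
print(residual, x0, dx0)  # 0 -1 2
```

All three checks pass, confirming the expansion and the table lookup.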
mrustl/flopy
examples/Notebooks/flopy3_swi2package_ex4.ipynb
bsd-3-clause
%matplotlib inline import os import platform import numpy as np import matplotlib.pyplot as plt import flopy.modflow as mf import flopy.utils as fu import flopy.plot as fp """ Explanation: FloPy SWI2 Example 4. Upconing Below a Pumping Well in a Two-Aquifer Island System This example problem is the fourth example problem in the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/) and simulates transient movement of the freshwater-seawater interface beneath an island in response to recharge and groundwater withdrawals. The island is 2,050$\times$2,050 m and consists of two 20-m thick aquifers that extend below sea level. The aquifers are confined, storage changes are not considered (all MODFLOW stress periods are steady-state), and the top and bottom of each aquifer is horizontal. The top of the upper aquifer and the bottom of the lower aquifer are impermeable. The domain is discretized into 61 columns, 61 rows, and 2 layers, with respective cell dimensions of 50 m (DELR), 50 m (DELC), and 20 m. A total of 230 years is simulated using three stress periods with lengths of 200, 12, and 18 years, with constant time steps of 0.2, 0.1, and 0.1 years, respectively. The horizontal and vertical hydraulic conductivity of both aquifers are 10 m/d and 0.2 m/d, respectively. The effective porosity is 0.2 for both aquifers. The model is extended 500 m offshore along all sides and the ocean boundary is represented as a general head boundary condition (GHB) in model layer 1. A freshwater head of 0 m is specified at the ocean bottom in all general head boundaries. The GHB conductance that controls outflow from the aquifer into the ocean is 62.5 m$^{2}$/d and corresponds to a leakance of 0.025 d$^{-1}$ (or a resistance of 40 days). The groundwater is divided into a freshwater zone and a seawater zone, separated by an active ZETA surface between the zones (NSRF=1) that approximates the 50-percent seawater salinity contour. 
Fluid density is represented using the stratified density option (ISTRAT=1). The dimensionless density difference ($\nu$) between freshwater and saltwater is 0.025. The tip and toe tracking parameters are a TOESLOPE and TIPSLOPE of 0.005, a default ALPHA of 0.1, and a default BETA of 0.1. Initially, the interface between freshwater and saltwater is 1 m below land surface on the island and at the top of the upper aquifer offshore. The SWI2 ISOURCE parameter is set to -2 in cells having GHBs so that water that infiltrates into the aquifer from the GHB cells is saltwater (zone 2), whereas water that flows out of the model at the GHB cells is identical to water at the top of the aquifer. ISOURCE in layer 2, row 31, column 36 is set to 2 so that a saltwater well may be simulated in the third stress period of simulation 2. In all other cells, the SWI2 ISOURCE parameter is set to 0, indicating boundary conditions have water that is identical to water at the top of the aquifer and can be either freshwater or saltwater, depending on the elevation of the active ZETA surface in the cell. A constant recharge rate of 0.4 millimeters per day (mm/d) is used in all three stress periods. The development of the freshwater lens is simulated for 200 years, after which a pumping well having a withdrawal rate of 250 m$^3$/d is started in layer 1, row 31, column 36. For the first simulation (simulation 1), the well pumps for 30 years, after which the interface almost reaches the top of the upper aquifer layer. In the second simulation (simulation 2), an additional well withdrawing saltwater at a rate of 25 m$^3$/d is simulated below the freshwater well in layer 2 , row 31, column 36, 12 years after the freshwater groundwater withdrawal begins in the well in layer 1. The saltwater well is intended to prevent the interface from upconing into the upper aquifer (model layer). Import numpy and matplotlib, set all figures to be inline, import flopy.modflow and flopy.utils. 
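As a quick sanity check of the numbers quoted above, the GHB conductance of 62.5 m$^{2}$/d follows directly from the 50 m cell dimensions and the 40-day resistance:

```python
# GHB conductance for a 50 m x 50 m cell with a 40-day resistance:
# C = (cell area) / (resistance); leakance is C per unit cell area.
delr, delc = 50.0, 50.0   # cell dimensions, m
resistance = 40.0         # ocean-bed resistance, d
conductance = delr * delc / resistance   # m^2/d
leakance = conductance / (delr * delc)   # 1/d
print(conductance, leakance)  # 62.5 0.025
```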
End of explanation """ # Set name of MODFLOW exe # assumes executable is in the user's path exe_name = 'mf2005' if platform.system() == 'Windows': exe_name = 'mf2005.exe' workspace = os.path.join('data') # make sure workspace directory exists if not os.path.exists(workspace): os.makedirs(workspace) """ Explanation: Define the name of your model and the location of the MODFLOW executable. All MODFLOW files and output will be stored in the subdirectory defined by the workspace. Create a model named ml and specify that this is a MODFLOW-2005 model. End of explanation """ ncol = 61 nrow = 61 nlay = 2 nper = 3 perlen = [365.25 * 200., 365.25 * 12., 365.25 * 18.] nstp = [1000, 120, 180] save_head = [200, 60, 60] steady = True """ Explanation: Define the number of layers, rows, and columns. The heads are computed quasi-steady state (hence a steady MODFLOW run) while the interface will move. There are three stress periods with lengths of 200, 12, and 18 years and 1,000, 120, and 180 steps. End of explanation """ # dis data delr, delc = 50.0, 50.0 botm = np.array([-10., -30., -50.]) """ Explanation: Specify the cell size along the rows (delr) and along the columns (delc) and the top and bottom of the aquifer for the DIS package. End of explanation """ # bas data # ibound - active except for the corners ibound = np.ones((nlay, nrow, ncol), dtype=int) ibound[:, 0, 0] = 0 ibound[:, 0, -1] = 0 ibound[:, -1, 0] = 0 ibound[:, -1, -1] = 0 # initial head data ihead = np.zeros((nlay, nrow, ncol), dtype=float) """ Explanation: Define the IBOUND array and starting heads for the BAS package. The corners of the model are defined to be inactive. End of explanation """ # lpf data laytyp=0 hk=10. vka=0.2 """ Explanation: Define the layers to be confined and define the horizontal and vertical hydraulic conductivity of the aquifer for the LPF package. 
End of explanation """ # boundary condition data # ghb data colcell, rowcell = np.meshgrid(np.arange(0, ncol), np.arange(0, nrow)) index = np.zeros((nrow, ncol), dtype=int) index[:, :10] = 1 index[:, -10:] = 1 index[:10, :] = 1 index[-10:, :] = 1 nghb = np.sum(index) lrchc = np.zeros((nghb, 5)) lrchc[:, 0] = 0 lrchc[:, 1] = rowcell[index == 1] lrchc[:, 2] = colcell[index == 1] lrchc[:, 3] = 0. lrchc[:, 4] = 50.0 * 50.0 / 40.0 # create ghb dictionary ghb_data = {0: lrchc} # recharge data rch = np.zeros((nrow, ncol), dtype=float) rch[index == 0] = 0.0004 # create recharge dictionary rch_data = {0: rch} # well data nwells = 2 lrcq = np.zeros((nwells, 4)) lrcq[0, :] = np.array((0, 30, 35, 0)) lrcq[1, :] = np.array([1, 30, 35, 0]) lrcqw = lrcq.copy() lrcqw[0, 3] = -250 lrcqsw = lrcq.copy() lrcqsw[0, 3] = -250. lrcqsw[1, 3] = -25. # create well dictionary base_well_data = {0: lrcq, 1: lrcqw} swwells_well_data = {0: lrcq, 1: lrcqw, 2: lrcqsw} # swi2 data nadptmx = 10 nadptmn = 1 nu = [0, 0.025] numult = 5.0 toeslope = nu[1] / numult # 0.005 tipslope = nu[1] / numult # 0.005 z1 = -10.0 * np.ones((nrow, ncol)) z1[index == 0] = -11.0 z = np.array([[z1, z1]]) iso = np.zeros((nlay, nrow, ncol), dtype=int) iso[0, :, :][index == 0] = 1 iso[0, :, :][index == 1] = -2 iso[1, 30, 35] = 2 ssz = 0.2 # swi2 observations obsnam = ['layer1_', 'layer2_'] obslrc = [[0, 30, 35], [1, 30, 35]] nobs = len(obsnam) iswiobs = 1051 """ Explanation: Define the boundary condition data for the model. End of explanation """ # oc data spd = {(0,199): ['print budget', 'save head'], (0,200): [], (0,399): ['print budget', 'save head'], (0,400): [], (0,599): ['print budget', 'save head'], (0,600): [], (0,799): ['print budget', 'save head'], (0,800): [], (0,999): ['print budget', 'save head'], (1,0): [], (1,59): ['print budget', 'save head'], (1,60): [], (1,119): ['print budget', 'save head'], (1,120): [], (2,0): [], (2,59): ['print budget', 'save head'], (2,60): [], (2,119): ['print budget', 'save head'], 
(2,120): [], (2,179): ['print budget', 'save head']} """ Explanation: Create output control (OC) data using words End of explanation """ modelname = 'swiex4_s1' ml = mf.Modflow(modelname, version='mf2005', exe_name=exe_name, model_ws=workspace) discret = mf.ModflowDis(ml, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0, delr=delr, delc=delc, top=botm[0], botm=botm[1:], nper=nper, perlen=perlen, nstp=nstp) bas = mf.ModflowBas(ml, ibound=ibound, strt=ihead) lpf = mf.ModflowLpf(ml, laytyp=laytyp, hk=hk, vka=vka) wel = mf.ModflowWel(ml, stress_period_data=base_well_data) ghb = mf.ModflowGhb(ml, stress_period_data=ghb_data) rch = mf.ModflowRch(ml, rech=rch_data) swi = mf.ModflowSwi2(ml, nsrf=1, istrat=1, toeslope=toeslope, tipslope=tipslope, nu=nu, zeta=z, ssz=ssz, isource=iso, nsolver=1, nadptmx=nadptmx, nadptmn=nadptmn, nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, obslrc=obslrc) oc = mf.ModflowOc(ml, stress_period_data=spd) pcg = mf.ModflowPcg(ml, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50) """ Explanation: Create the model with the freshwater well (Simulation 1) End of explanation """ ml.write_input() ml.run_model(silent=True) """ Explanation: Write the simulation 1 MODFLOW input files and run the model End of explanation """ modelname2 = 'swiex4_s2' ml2 = mf.Modflow(modelname2, version='mf2005', exe_name=exe_name, model_ws=workspace) discret = mf.ModflowDis(ml2, nlay=nlay, nrow=nrow, ncol=ncol, laycbd=0, delr=delr, delc=delc, top=botm[0], botm=botm[1:], nper=nper, perlen=perlen, nstp=nstp) bas = mf.ModflowBas(ml2, ibound=ibound, strt=ihead) lpf = mf.ModflowLpf(ml2, laytyp=laytyp, hk=hk, vka=vka) wel = mf.ModflowWel(ml2, stress_period_data=swwells_well_data) ghb = mf.ModflowGhb(ml2, stress_period_data=ghb_data) rch = mf.ModflowRch(ml2, rech=rch_data) swi = mf.ModflowSwi2(ml2, nsrf=1, istrat=1, toeslope=toeslope, tipslope=tipslope, nu=nu, zeta=z, ssz=ssz, isource=iso, nsolver=1, nadptmx=nadptmx, nadptmn=nadptmn, nobs=nobs, iswiobs=iswiobs, obsnam=obsnam, 
obslrc=obslrc) oc = mf.ModflowOc(ml2, stress_period_data=spd) pcg = mf.ModflowPcg(ml2, hclose=1.0e-6, rclose=3.0e-3, mxiter=100, iter1=50) """ Explanation: Create the model with the saltwater well (Simulation 2) End of explanation """ ml2.write_input() ml2.run_model(silent=True) """ Explanation: Write the simulation 2 MODFLOW input files and run the model End of explanation """ # read base model zeta zfile = fu.CellBudgetFile(os.path.join(ml.model_ws, modelname+'.zta')) kstpkper = zfile.get_kstpkper() zeta = [] for kk in kstpkper: zeta.append(zfile.get_data(kstpkper=kk, text='ZETASRF 1')[0]) zeta = np.array(zeta) # read swi obs zobs = np.genfromtxt(os.path.join(ml.model_ws, modelname+'.zobs'), names=True) """ Explanation: Load the simulation 1 ZETA data and ZETA observations. End of explanation """ # read saltwater well model zeta zfile2 = fu.CellBudgetFile(os.path.join(ml2.model_ws, modelname2+'.zta')) kstpkper = zfile2.get_kstpkper() zeta2 = [] for kk in kstpkper: zeta2.append(zfile2.get_data(kstpkper=kk, text='ZETASRF 1')[0]) zeta2 = np.array(zeta2) # read swi obs zobs2 = np.genfromtxt(os.path.join(ml2.model_ws, modelname2+'.zobs'), names=True) """ Explanation: Load the simulation 2 ZETA data and ZETA observations. End of explanation """ x = np.linspace(-1500, 1500, 61) xcell = np.linspace(-1500, 1500, 61) + delr / 2. 
xedge = np.linspace(-1525, 1525, 62) years = [40, 80, 120, 160, 200, 6, 12, 18, 24, 30] """ Explanation: Create arrays for the x-coordinates and the output years End of explanation """ # figure dimensions fwid, fhgt = 8.00, 5.50 flft, frgt, fbot, ftop = 0.125, 0.95, 0.125, 0.925 # line color definition icolor = 5 colormap = plt.cm.jet #winter cc = [] cr = np.linspace(0.9, 0.0, icolor) for idx in cr: cc.append(colormap(idx)) """ Explanation: Define figure dimensions and colors used for plotting ZETA surfaces End of explanation """ plt.rcParams.update({'legend.fontsize': 6, 'legend.frameon' : False}) fig = plt.figure(figsize=(fwid, fhgt), facecolor='w') fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop) # first plot ax = fig.add_subplot(2, 2, 1) # axes limits ax.set_xlim(-1500, 1500) ax.set_ylim(-50, -10) for idx in range(5): # layer 1 ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid', linewidth=0.5, color=cc[idx], label='{:2d} years'.format(years[idx])) # layer 2 ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid', linewidth=0.5, color=cc[idx], label='_None') ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0) # legend plt.legend(loc='lower left') # axes labels and text ax.set_xlabel('Horizontal distance, in meters') ax.set_ylabel('Elevation, in meters') ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7') ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7') ax.text(0.975, .1, 'Recharge conditions', transform=ax.transAxes, va='center', ha='right', size='8') # second plot ax = fig.add_subplot(2, 2, 2) # axes limits ax.set_xlim(-1500, 1500) ax.set_ylim(-50, -10) for idx in range(5, len(years)): # layer 1 ax.plot(xcell, zeta[idx, 0, 30, :], drawstyle='steps-mid', linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx])) # layer 2 ax.plot(xcell, zeta[idx, 1, 30, :], drawstyle='steps-mid', linewidth=0.5, color=cc[idx-5], 
label='_None') ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0) # legend plt.legend(loc='lower left') # axes labels and text ax.set_xlabel('Horizontal distance, in meters') ax.set_ylabel('Elevation, in meters') ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7') ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7') ax.text(0.975, .1, 'Freshwater well withdrawal', transform=ax.transAxes, va='center', ha='right', size='8') # third plot ax = fig.add_subplot(2, 2, 3) # axes limits ax.set_xlim(-1500, 1500) ax.set_ylim(-50, -10) for idx in range(5, len(years)): # layer 1 ax.plot(xcell, zeta2[idx, 0, 30, :], drawstyle='steps-mid', linewidth=0.5, color=cc[idx-5], label='{:2d} years'.format(years[idx])) # layer 2 ax.plot(xcell, zeta2[idx, 1, 30, :], drawstyle='steps-mid', linewidth=0.5, color=cc[idx-5], label='_None') ax.plot([-1500, 1500], [-30, -30], color='k', linewidth=1.0) # legend plt.legend(loc='lower left') # axes labels and text ax.set_xlabel('Horizontal distance, in meters') ax.set_ylabel('Elevation, in meters') ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7') ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7') ax.text(0.975, .1, 'Freshwater and saltwater\nwell withdrawals', transform=ax.transAxes, va='center', ha='right', size='8') # fourth plot ax = fig.add_subplot(2, 2, 4) # axes limits ax.set_xlim(0, 30) ax.set_ylim(-50, -10) t = zobs['TOTIM'][999:] / 365 - 200. tz2 = zobs['layer1_001'][999:] tz3 = zobs2['layer1_001'][999:] for i in range(len(t)): if zobs['layer2_001'][i+999] < -30. - 0.1: tz2[i] = zobs['layer2_001'][i+999] if zobs2['layer2_001'][i+999] < 20. 
- 0.1: tz3[i] = zobs2['layer2_001'][i+999] ax.plot(t, tz2, linestyle='solid', color='r', linewidth=0.75, label='Freshwater well') ax.plot(t, tz3, linestyle='dotted', color='r', linewidth=0.75, label='Freshwater and saltwater well') ax.plot([0, 30], [-30, -30], 'k', linewidth=1.0, label='_None') # legend leg = plt.legend(loc='lower right', numpoints=1) # axes labels and text ax.set_xlabel('Time, in years') ax.set_ylabel('Elevation, in meters') ax.text(0.025, .55, 'Layer 1', transform=ax.transAxes, va='center', ha='left', size='7') ax.text(0.025, .45, 'Layer 2', transform=ax.transAxes, va='center', ha='left', size='7'); """ Explanation: Recreate Figure 9 from the SWI2 documentation (http://pubs.usgs.gov/tm/6a46/). End of explanation """ fig = plt.figure(figsize=(fwid, fhgt/2)) fig.subplots_adjust(wspace=0.25, hspace=0.25, left=flft, right=frgt, bottom=fbot, top=ftop) colors = ['#40d3f7', '#F76541'] ax = fig.add_subplot(1, 2, 1) modelxsect = fp.ModelCrossSection(model=ml, line={'Row': 30}, extent=(0, 3050, -50, -10)) modelxsect.plot_fill_between(zeta[4, :, :, :], colors=colors, ax=ax, edgecolors='none') linecollection = modelxsect.plot_grid(ax=ax) ax.set_title('Recharge year {}'.format(years[4])); ax = fig.add_subplot(1, 2, 2) ax.set_xlim(0, 3050) ax.set_ylim(-50, -10) modelxsect.plot_fill_between(zeta[-1, :, :, :], colors=colors, ax=ax) linecollection = modelxsect.plot_grid(ax=ax) ax.set_title('Scenario year {}'.format(years[-1])); """ Explanation: Use ModelCrossSection plotting class and plot_fill_between() method to fill between zeta surfaces. End of explanation """
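The line colors in the figures above come from sampling a colormap at evenly spaced values; the same idea in a compact, standalone form (values chosen to match the notebook):

```python
import numpy as np
import matplotlib.pyplot as plt

icolor = 5
colormap = plt.cm.jet  # same colormap as the notebook
# five evenly spaced samples, from 0.9 down to 0.0
cc = [colormap(v) for v in np.linspace(0.9, 0.0, icolor)]
print(len(cc))  # 5 RGBA tuples
```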
steinam/teacher
jup_notebooks/data-science-ipython-notebooks-master/matplotlib/04.12-Three-Dimensional-Plotting.ipynb
mit
from mpl_toolkits import mplot3d """ Explanation: <!--BOOK_INFORMATION--> <img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png"> This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub. The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book! No changes were made to the contents of this notebook from the original. <!--NAVIGATION--> < Customizing Matplotlib: Configurations and Stylesheets | Contents | Geographic Data with Basemap > Three-Dimensional Plotting in Matplotlib Matplotlib was initially designed with only two-dimensional plotting in mind. Around the time of the 1.0 release, some three-dimensional plotting utilities were built on top of Matplotlib's two-dimensional display, and the result is a convenient (if somewhat limited) set of tools for three-dimensional data visualization. 
Three-dimensional plots are enabled by importing the mplot3d toolkit, included with the main Matplotlib installation: End of explanation """ %matplotlib inline import numpy as np import matplotlib.pyplot as plt fig = plt.figure() ax = plt.axes(projection='3d') """ Explanation: Once this submodule is imported, a three-dimensional axes can be created by passing the keyword projection='3d' to any of the normal axes creation routines: End of explanation """ ax = plt.axes(projection='3d') # Data for a three-dimensional line zline = np.linspace(0, 15, 1000) xline = np.sin(zline) yline = np.cos(zline) ax.plot3D(xline, yline, zline, 'gray') # Data for three-dimensional scattered points zdata = 15 * np.random.random(100) xdata = np.sin(zdata) + 0.1 * np.random.randn(100) ydata = np.cos(zdata) + 0.1 * np.random.randn(100) ax.scatter3D(xdata, ydata, zdata, c=zdata, cmap='Greens'); """ Explanation: With this three-dimensional axes enabled, we can now plot a variety of three-dimensional plot types. Three-dimensional plotting is one of the functionalities that benefits immensely from viewing figures interactively rather than statically in the notebook; recall that to use interactive figures, you can use %matplotlib notebook rather than %matplotlib inline when running this code. Three-dimensional Points and Lines The most basic three-dimensional plot is a line or collection of scatter plots created from sets of (x, y, z) triples. In analogy with the more common two-dimensional plots discussed earlier, these can be created using the ax.plot3D and ax.scatter3D functions. The call signature for these is nearly identical to that of their two-dimensional counterparts, so you can refer to Simple Line Plots and Simple Scatter Plots for more information on controlling the output. 
Here we'll plot a trigonometric spiral, along with some points drawn randomly near the line: End of explanation """ def f(x, y): return np.sin(np.sqrt(x ** 2 + y ** 2)) x = np.linspace(-6, 6, 30) y = np.linspace(-6, 6, 30) X, Y = np.meshgrid(x, y) Z = f(X, Y) fig = plt.figure() ax = plt.axes(projection='3d') ax.contour3D(X, Y, Z, 50, cmap='binary') ax.set_xlabel('x') ax.set_ylabel('y') ax.set_zlabel('z'); """ Explanation: Notice that by default, the scatter points have their transparency adjusted to give a sense of depth on the page. While the three-dimensional effect is sometimes difficult to see within a static image, an interactive view can lead to some nice intuition about the layout of the points. Three-dimensional Contour Plots Analogous to the contour plots we explored in Density and Contour Plots, mplot3d contains tools to create three-dimensional relief plots using the same inputs. Like two-dimensional ax.contour plots, ax.contour3D requires all the input data to be in the form of two-dimensional regular grids, with the Z data evaluated at each point. Here we'll show a three-dimensional contour diagram of a three-dimensional sinusoidal function: End of explanation """ ax.view_init(60, 35) fig """ Explanation: Sometimes the default viewing angle is not optimal, in which case we can use the view_init method to set the elevation and azimuthal angles. In the following example, we'll use an elevation of 60 degrees (that is, 60 degrees above the x-y plane) and an azimuth of 35 degrees (that is, rotated 35 degrees counter-clockwise about the z-axis): End of explanation """ fig = plt.figure() ax = plt.axes(projection='3d') ax.plot_wireframe(X, Y, Z, color='black') ax.set_title('wireframe'); """ Explanation: Again, note that this type of rotation can be accomplished interactively by clicking and dragging when using one of Matplotlib's interactive backends. 
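The view can also be set programmatically, for example to render the same axes from several camera angles; a small sketch (the surface and output filenames are arbitrary choices, not from the original text):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for scripted rendering
import matplotlib.pyplot as plt

x = np.linspace(-6, 6, 30)
X, Y = np.meshgrid(x, x)
Z = np.sin(np.sqrt(X**2 + Y**2))

fig = plt.figure()
ax = plt.axes(projection='3d')
ax.contour3D(X, Y, Z, 50, cmap='binary')
for elev, azim in [(20, 35), (60, 35), (60, 120)]:
    ax.view_init(elev, azim)  # elevation and azimuth, in degrees
    fig.savefig('view_{}_{}.png'.format(elev, azim))
```

Each saved image shows the identical data from a different viewpoint.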
Wireframes and Surface Plots Two other types of three-dimensional plots that work on gridded data are wireframes and surface plots. These take a grid of values and project it onto the specified three-dimensional surface, and can make the resulting three-dimensional forms quite easy to visualize. Here's an example of using a wireframe: End of explanation """ fig = plt.figure() ax = plt.axes(projection='3d') ax.plot_wireframe(X, Y, Z, color='black') ax.set_title('wireframe'); """ Explanation: A surface plot is like a wireframe plot, but each face of the wireframe is a filled polygon. Adding a colormap to the filled polygons can aid perception of the topology of the surface being visualized: End of explanation """ ax = plt.axes(projection='3d') ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='viridis', edgecolor='none') ax.set_title('surface'); """ Explanation: Note that though the grid of values for a surface plot needs to be two-dimensional, it need not be rectilinear. Here is an example of creating a partial polar grid, which when used with the surface3D plot can give us a slice into the function we're visualizing: End of explanation """ r = np.linspace(0, 6, 20) theta = np.linspace(-0.9 * np.pi, 0.8 * np.pi, 40) r, theta = np.meshgrid(r, theta) X = r * np.sin(theta) Y = r * np.cos(theta) Z = f(X, Y) ax = plt.axes(projection='3d') ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='viridis', edgecolor='none'); """ Explanation: Surface Triangulations For some applications, the evenly sampled grids required by the above routines are overly restrictive and inconvenient. In these situations, the triangulation-based plots can be very useful. What if rather than an even draw from a Cartesian or a polar grid, we instead have a set of random draws?
End of explanation """ ax = plt.axes(projection='3d') ax.scatter(x, y, z, c=z, cmap='viridis', linewidth=0.5); """ Explanation: We could create a scatter plot of the points to get an idea of the surface we're sampling from: End of explanation """ ax = plt.axes(projection='3d') ax.plot_trisurf(x, y, z, cmap='viridis', edgecolor='none'); """ Explanation: This leaves a lot to be desired. The function that will help us in this case is ax.plot_trisurf, which creates a surface by first finding a set of triangles formed between adjacent points (remember that x, y, and z here are one-dimensional arrays): End of explanation """ theta = np.linspace(0, 2 * np.pi, 30) w = np.linspace(-0.25, 0.25, 8) w, theta = np.meshgrid(w, theta) """ Explanation: The result is certainly not as clean as when it is plotted with a grid, but the flexibility of such a triangulation allows for some really interesting three-dimensional plots. For example, it is actually possible to plot a three-dimensional Möbius strip using this, as we'll see next. Example: Visualizing a Möbius strip A Möbius strip is similar to a strip of paper glued into a loop with a half-twist. Topologically, it's quite interesting because despite appearances it has only a single side! Here we will visualize such an object using Matplotlib's three-dimensional tools. The key to creating the Möbius strip is to think about it's parametrization: it's a two-dimensional strip, so we need two intrinsic dimensions. Let's call them $\theta$, which ranges from $0$ to $2\pi$ around the loop, and $w$ which ranges from -1 to 1 across the width of the strip: End of explanation """ phi = 0.5 * theta """ Explanation: Now from this parametrization, we must determine the (x, y, z) positions of the embedded strip. 
Thinking about it, we might realize that there are two rotations happening: one is the position of the loop about its center (what we've called $\theta$), while the other is the twisting of the strip about its axis (we'll call this $\phi$). For a Möbius strip, we must have the strip makes half a twist during a full loop, or $\Delta\phi = \Delta\theta/2$. End of explanation """ # radius in x-y plane r = 1 + w * np.cos(phi) x = np.ravel(r * np.cos(theta)) y = np.ravel(r * np.sin(theta)) z = np.ravel(w * np.sin(phi)) """ Explanation: Now we use our recollection of trigonometry to derive the three-dimensional embedding. We'll define $r$, the distance of each point from the center, and use this to find the embedded $(x, y, z)$ coordinates: End of explanation """ # triangulate in the underlying parametrization from matplotlib.tri import Triangulation tri = Triangulation(np.ravel(w), np.ravel(theta)) ax = plt.axes(projection='3d') ax.plot_trisurf(x, y, z, triangles=tri.triangles, cmap='viridis', linewidths=0.2); ax.set_xlim(-1, 1); ax.set_ylim(-1, 1); ax.set_zlim(-1, 1); """ Explanation: Finally, to plot the object, we must make sure the triangulation is correct. The best way to do this is to define the triangulation within the underlying parametrization, and then let Matplotlib project this triangulation into the three-dimensional space of the Möbius strip. This can be accomplished as follows: End of explanation """
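A satisfying sanity check on this parametrization (an addition here, not in the original text): after a full loop $\theta \to \theta + 2\pi$, the half-twist has turned the strip over, so the point at width $w$ must coincide with the point at width $-w$. The identification $(\theta, w) \sim (\theta + 2\pi, -w)$ can be verified numerically:

```python
import numpy as np

def mobius_xyz(theta, w):
    # same embedding as above: phi is the half-twist angle
    phi = 0.5 * theta
    r = 1 + w * np.cos(phi)
    return np.array([r * np.cos(theta), r * np.sin(theta), w * np.sin(phi)])

for theta in np.linspace(0, 2 * np.pi, 7):
    for w in (-0.25, 0.1, 0.25):
        assert np.allclose(mobius_xyz(theta, w),
                           mobius_xyz(theta + 2 * np.pi, -w))
print("the strip closes up on itself after one loop, with w -> -w")
```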
massimo-nocentini/on-python
UniFiCourseSpring2020/generators.ipynb
mit
__AUTHORS__ = {'am': ("Andrea Marino", "andrea.marino@unifi.it",), 'mn': ("Massimo Nocentini", "massimo.nocentini@unifi.it", "https://github.com/massimo-nocentini/",)} __KEYWORDS__ = ['Python', 'Jupyter', 'language', 'keynote',] """ Explanation: <p> <img src="http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg" alt="UniFI logo" style="float: left; width: 20%; height: 20%;"> <div align="right"> <small> Massimo Nocentini, PhD. <br><br> May 28, 2020: init </small> </div> </p> <br> <br> <div align="center"> <b>Abstract</b><br> A (very concise) introduction to some Python language constructs. </div> End of explanation """ def increment(a): return a + 1 increment(0) increment(1) L = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] L LL = [increment(a) for a in L] LL LLL = [increment(a) for a in LL] LLL r = range(10) r list(r) map(lambda i: i + 1, L) (lambda i: i + 1)(0) (lambda i: i + 1)(1) list(map(lambda i: i + 1, L)) M = map(lambda i: i + 1, L) M next(M) next(M) next(M) next(M) next(M) next(M) next(M) next(M) next(M) next(M) next(M) list(range(10)) list(i for i in range(10)) N = (i for i in range(10)) N list(N) next(N) """ Explanation: <center><img src="https://upload.wikimedia.org/wikipedia/commons/c/c3/Python-logo-notext.svg"></center> The Python language documentation Start at https://docs.python.org/3/index.html for everything. For the tutorial have a look at https://docs.python.org/3/tutorial/index.html, nicely written and complete. End of explanation """ from random import random # import the random generator, to be used to sample from the uniform distribution random() # a quick check that the random function works int(True) # this is a very quick check to see if a Boolean can be used as integer def Bernoulli(p): 'This is a generator for a Bernoulli random variable of parameter `p` for success.' 
while True: # forever we loop r = random() # get a sample yield int(r <= p) # if that sample denotes a success or a failure we *yield* that outcome yield # if we evaluate *yield* not in a context, Python raises an error because it is a construct help(Bernoulli) B = Bernoulli(p=0.6) # B is our random variable B next(B) next(B) next(B) sample = [next(B) for _ in range(1000)] sample[:20] # just for a quick evaluation, we print the first 20 elements from collections import Counter Counter(sample) B_flip = map(lambda o: 1-o, B) B_flip sample = [next(B_flip) for _ in range(1000)] sample[:20] # just for a quick evaluation, we print the first 20 elements def Bernoulli(p): 'This is a generator for a Bernoulli random variable of parameter `p` for success.' while True: # forever we loop r = random() # get a sample o = int(r <= p) # if that sample denotes a success or a failure we *yield* that outcome print('B ' + str(o)) yield o def flip(o): print('flip') return 1-o B_flip = map(flip, Bernoulli(p=0.9)) B_flip sample = [next(B_flip) for _ in range(20)] Counter(sample) class A(object): def __init__(self, j): self.j = j def __add__(self, i): return self.j + i def __radd__(self, i): return self.j + i def __lt__(self, i): return self.j < i def B(b): pass B B(3) is None def B(b): ... increment(4) a = A() increment(a) a = A() increment(a) A(3) + 1 1 + A(3) 1 + A(3) A(4) < 2 """ Explanation: we want to build an object that denotes a Bernoulli random variable. it is desired to be able to sample from that variable an arbitrary number of times, not known at design time. End of explanation """
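Since a generator such as Bernoulli never terminates, calling list on it directly would hang; the idiomatic way to take a bounded sample is itertools.islice. A small sketch (added here, using a deterministic generator so the result is reproducible):

```python
from itertools import islice

def naturals():
    # an infinite generator, in the same spirit as Bernoulli above
    n = 0
    while True:
        yield n
        n += 1

N = naturals()
sample = list(islice(N, 5))
print(sample)      # [0, 1, 2, 3, 4]
print(next(N))     # 5: the generator resumes where islice stopped
```

The same trick applies to the random variable above: `list(islice(Bernoulli(0.6), 1000))` replaces the `[next(B) for _ in range(1000)]` comprehension.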
Pencroff/deep-learning-course
lesson-1/1_notmnist.ipynb
mit
# These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import matplotlib.pyplot as plt import numpy as np import os import sys import time from datetime import timedelta import tarfile from IPython.display import display, Image from scipy import ndimage from scipy.spatial import distance from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle from ipyparallel import Client, require # Config the matplotlib backend as plotting inline in IPython %matplotlib inline %run label_util.py def draw_images(label, a_arr, b_arr, bins_size=20): x = np.array(range(bins_size)) f, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(8, 1)) h_a = np.histogram(a_arr, bins=bins_size) h_b = np.histogram(b_arr, bins=bins_size) ax1.imshow(a_arr) ax1.set_title('Label: ' + label) ax2.bar(x, h_a[0]) ax3.imshow(b_arr) ax4.bar(x, h_b[0]) plt.show() def overlapping_comparison(counters, dataset_a, dataset_b, number_shown = 3): letters = list(counters.keys()) i = 0 shown = 0 while shown < number_shown and i < len(letters): character = letters[i] similar_keys = list(counters[character].keys()) key = similar_keys[0] if key == 'counter' and len(similar_keys) > 1: key = similar_keys[1] idx_a = int(key) idx_b = counters[character][key][0] label = '{0} (Distance: {1}) [{2}] - [{3}]'.format(character, manhattan_distance(dataset_a[idx_a], dataset_b[idx_b]), idx_a, idx_b) draw_images(label, dataset_a[idx_a], dataset_b[idx_b]) i += 1 shown += 1 def display_overlap(counters): key_lst = sorted(counters.keys()) total = 0 for key in key_lst: total += counters[key]['counter'] print('Label {0}: {1}'.format(key, counters[key]['counter'])) print('Total:', total) def wrap_tuples(labels, dataset): result = [] for idx, item in enumerate(zip(labels, dataset)): result.append((idx, item[0], item[1])) return result def is_equal_comparison(a_arr, 
b_arr): return (a_arr==b_arr).all() def euclidean_distance(a_arr, b_arr): '''Euclidean distance without the sqrt''' return np.sum(np.power(a_arr - b_arr, 2)) @require('numpy as np') def manhattan_distance(a_arr, b_arr): return np.sum(np.absolute(a_arr - b_arr)) def count_duplication(counters, lbl, idxA, idxB): str_lbl = get_char_by_lbl(lbl) if str_lbl not in counters: counters[str_lbl] = {} counters[str_lbl]['counter'] = 0 counters[str_lbl]['counter'] += 1 if str(idxA) not in counters[str_lbl]: counters[str_lbl][str(idxA)] = [] counters[str_lbl][str(idxA)].append(idxB) def count_equal_data(label_lst_A, data_lst_A, label_lst_B, data_lst_B, distance_threshold=0, min_distance_threshold = 0): start_time = time.clock() counters = {} for idxA, lblA in enumerate(label_lst_A): for idxB, lblB in enumerate(label_lst_B): if lblA == lblB: itemA = data_lst_A[idxA] itemB = data_lst_B[idxB] if distance_threshold == 0 and is_equal_comparison(itemA, itemB): count_duplication(counters, lblA, idxA, idxB) if distance_threshold > 0 and distance_threshold >= manhattan_distance(itemA, itemB) > min_distance_threshold: count_duplication(counters, lblA, idxA, idxB) end_time = time.clock() return (counters, timedelta(seconds=end_time - start_time)) def count_equal_tuples(tuple_lst_A, tuple_lst_B, distance_threshold=0, min_distance_threshold = 0): idx_idx = 0 lbl_idx = 1 data_idx = 2 counters = {} for item_A in tuple_lst_A: for item_B in tuple_lst_B: if item_A[lbl_idx] == item_B[lbl_idx]: if distance_threshold == 0 and is_equal_comparison(item_A[data_idx], item_B[data_idx]): count_duplication(counters, item_A[lbl_idx], item_A[idx_idx], item_B[idx_idx]) if distance_threshold > 0 and distance_threshold >= manhattan_distance(item_A[data_idx], item_B[data_idx]) > min_distance_threshold: count_duplication(counters, item_A[lbl_idx], item_A[idx_idx], item_B[idx_idx]) return counters @require(get_char_by_lbl) def count_duplication(counters, lbl, idxA, idxB): str_lbl = get_char_by_lbl(lbl) if str_lbl 
not in counters: counters[str_lbl] = {} counters[str_lbl]['counter'] = 0 counters[str_lbl]['counter'] += 1 if str(idxA) not in counters[str_lbl]: counters[str_lbl][str(idxA)] = [] counters[str_lbl][str(idxA)].append(idxB) @require(is_equal_comparison, count_duplication, manhattan_distance) def item_acync_handler(): idx_idx = 0 lbl_idx = 1 data_idx = 2 for item_A in tuple_lst_A: for item_B in tuple_lst_B: if item_A[lbl_idx] == item_B[lbl_idx]: if distance_threshold == 0 and is_equal_comparison(item_A[data_idx], item_B[data_idx]): count_duplication(counters, item_A[lbl_idx], item_A[idx_idx], item_B[idx_idx]) if distance_threshold > 0 and distance_threshold >= manhattan_distance(item_A[data_idx], item_B[data_idx]) > min_distance_threshold: count_duplication(counters, item_A[lbl_idx], item_A[idx_idx], item_B[idx_idx]) def reduce_counters(counters_lst): result = {} for counters in counters_lst: for letter_key, item in counters.items(): if letter_key not in result: result[letter_key] = {'counter': 0} for key, value in item.items(): if key == 'counter': result[letter_key][key] += value elif key not in result[letter_key]: result[letter_key][key] = value else: for idx in value: result[letter_key][key].append(idx) return result def count_equal_tuples_parallel(tuple_lst_A, tuple_lst_B, distance_threshold=0, min_distance_threshold = 0): rc = Client() dview = rc[:] dview.push(dict(tuple_lst_B = tuple_lst_B, map_dict=map_dict, distance_threshold=distance_threshold, min_distance_threshold=min_distance_threshold)) dview['counters'] = {} dview.scatter('tuple_lst_A', tuple_lst_A) dview.block=True dview.apply(item_acync_handler) result = reduce_counters(dview['counters']) return result """ Explanation: Deep Learning Assignment 1 The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later. This notebook uses the notMNIST dataset to be used with python experiments. 
This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST. End of explanation """ url = 'http://commondatastorage.googleapis.com/books1000/' last_percent_reported = None def download_progress_hook(count, blockSize, totalSize): """A hook to report the progress of a download. This is mostly intended for users with slow internet connections. Reports every 1% change in download progress. """ global last_percent_reported percent = int(count * blockSize * 100 / totalSize) if last_percent_reported != percent: if percent % 5 == 0: sys.stdout.write("%s%%" % percent) sys.stdout.flush() else: sys.stdout.write(".") sys.stdout.flush() last_percent_reported = percent def maybe_download(filename, expected_bytes, force=False): """Download a file if not present, and make sure it's the right size.""" if force or not os.path.exists(filename): print('Attempting to download:', filename) filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook) print('\nDownload Complete!') statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified', filename) else: raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) """ Explanation: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine. 
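The byte-size comparison in maybe_download catches truncated downloads, but it cannot detect corruption; a content hash is a stronger check. A sketch of such a verifier (the sha256 helper is an addition here, not part of the original assignment):

```python
import hashlib
import os
import tempfile

def sha256_of_file(path, chunk_size=1 << 16):
    # hash in chunks so large archives never need to fit in memory
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

# demo on a small temporary file
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'notMNIST')
    path = tmp.name
digest = sha256_of_file(path)
print(len(digest))   # 64 hex characters
os.remove(path)
```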
End of explanation """ num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall() tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) """ Explanation: Extract the dataset from the compressed .tar.gz file. This should give you a set of directories, labelled A through J. End of explanation """ from IPython.display import Image, display num_first_items = 1 def display_first_items(folder_path): print('Letter:', folder_path[-1:]) lst = os.listdir(folder_path)[:num_first_items] for file_name in lst: full_file_name = os.path.join(folder_path, file_name) display(Image(filename=full_file_name)) for folder in train_folders: display_first_items(folder) for folder in test_folders: display_first_items(folder) """ Explanation: Problem 1 Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display. End of explanation """ image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. 
def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) num_images = 0 for image in image_files: image_file = os.path.join(folder, image) try: image_data = (ndimage.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) if image_data.mean() == 0.5: print('No data in image:', image_file) continue dataset[num_images, :, :] = image_data num_images = num_images + 1 except IOError as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1600) """ Explanation: Now let's load the data in a more manageable format. 
Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size. We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them. End of explanation """ def show_pickle(file_path): print(file_path) with open(file_path, 'rb') as f: dataset = pickle.load(f) plt.figure(figsize=(1,1)) plt.imshow(dataset[1]) plt.show() for pickle_file in train_datasets: show_pickle(pickle_file) for pickle_file in test_datasets: show_pickle(pickle_file) """ Explanation: Problem 2 Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot. End of explanation """ def show_pickle_stats(file_path): with open(file_path, 'rb') as f: dataset = pickle.load(f) print(file_path, len(dataset)) for pickle_file in train_datasets: show_pickle_stats(pickle_file) for pickle_file in test_datasets: show_pickle_stats(pickle_file) """ Explanation: Problem 3 Another check: we expect the data to be balanced across classes. Verify that. 
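One way to make "balanced" precise (an illustrative sketch with hypothetical counts, not the real dataset statistics): require that the largest and smallest class counts differ by at most a few percent.

```python
def is_balanced(counts, tolerance=0.05):
    # counts: per-class sample counts; balanced if the spread is small
    lo, hi = min(counts), max(counts)
    return (hi - lo) / hi <= tolerance

# hypothetical per-letter counts, in the spirit of the pickle stats above
counts = [52909, 52911, 52912, 52911, 52912, 52912, 52912, 52912, 52912, 52911]
print(is_balanced(counts))                  # True: spread well under 5%
print(is_balanced([52912] * 9 + [10000]))   # False: one class is starved
```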
End of explanation """ def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) print('train labels:', count_labels(train_labels)) print('valid labels:', count_labels(valid_labels)) print('test labels:', 
count_labels(test_labels)) """ Explanation: Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9. Also create a validation dataset for hyperparameter tuning. End of explanation """ def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) """ Explanation: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. End of explanation """ print('train labels:', count_labels(train_labels)) print('valid labels:', count_labels(valid_labels)) print('test labels:', count_labels(test_labels)) def show_data(dataset, labels, size=3): print('=============================================') for lbl, img_arr in zip(labels[:size], dataset[:size]): print(map_dict[str(lbl)]) plt.figure(figsize=(1,1)) plt.imshow(img_arr) plt.show() show_data(train_dataset, train_labels) show_data(test_dataset, test_labels) show_data(valid_dataset, valid_labels) """ Explanation: Problem 4 Convince yourself that the data is still good after shuffling! 
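One concrete way to convince yourself (a toy-data sketch, added here): because randomize applies a single shared permutation to both arrays, every (image, label) pair must survive the shuffle intact, and the multiset of labels must be unchanged.

```python
import numpy as np

def randomize(dataset, labels):
    # same idea as above: one permutation applied to both arrays
    permutation = np.random.permutation(labels.shape[0])
    return dataset[permutation, :, :], labels[permutation]

# toy data: image i is a 2x2 patch filled with its own label value
labels = np.arange(10, dtype=np.int32)
dataset = np.stack([np.full((2, 2), i, dtype=np.float32) for i in labels])

shuffled_data, shuffled_labels = randomize(dataset, labels)
assert all(shuffled_data[i, 0, 0] == shuffled_labels[i] for i in range(10))
assert sorted(shuffled_labels.tolist()) == labels.tolist()
print("pairing and label counts preserved by the shuffle")
```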
End of explanation """ pickle_file = 'notMNIST.pickle' try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) bins_size = 28 * 4 def calc_histogram(dataset, bins = bins_size): start_time = time.clock() hist_list = [] for item in dataset: hist = np.histogram(item, bins=bins) hist_list.append(hist[0]) end_time = time.clock() return (hist_list, timedelta(seconds=end_time - start_time)) train_histogram, calc_duration = calc_histogram(train_dataset, bins_size) print('Histograms for train dataset calculates in', calc_duration) valid_histogram, calc_duration = calc_histogram(valid_dataset, bins_size) print('Histograms for validation dataset calculates in', calc_duration) test_histogram, calc_duration = calc_histogram(test_dataset, bins_size) print('Histograms for test dataset calculates in', calc_duration) # pickle_hist_file = 'notMNIST.hist.pickle' # try: # f = open(pickle_hist_file, 'wb') # save = { # 'train_histogram': train_histogram, # 'valid_histogram': valid_histogram, # 'test_histogram': test_histogram, # } # pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) # f.close() # except Exception as e: # print('Unable to save data to', pickle_hist_file, ':', e) # raise # statinfo = os.stat(pickle_hist_file) # print('Compressed histograms pickle size:', statinfo.st_size) """ Explanation: Finally, let's save the data for later reuse: End of explanation """ pickle_file = 'notMNIST.pickle' pickle_hist_file = 'notMNIST.hist.pickle' try: with open(pickle_file, 'rb') as f: save = pickle.load(f) train_dataset = save['train_dataset'] train_labels = save['train_labels'] valid_dataset = 
save['valid_dataset'] valid_labels = save['valid_labels'] test_dataset = save['test_dataset'] test_labels = save['test_labels'] print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) except Exception as e: print('Unable to load full dataset to', pickle_file, ':', e) raise # try: # with open(pickle_hist_file, 'rb') as f: # save = pickle.load(f) # train_histogram = save['train_histogram'] # valid_histogram = save['valid_histogram'] # test_histogram = save['test_histogram'] # print('Training histogram:', len(train_histogram)) # print('Validation histogram:', len(valid_histogram)) # print('Testing histogram:', len(test_histogram)) # except Exception as e: # print('Unable to load full dataset to', pickle_file, ':', e) # raise start_time = time.clock() train_tuple_lst = wrap_tuples(train_labels, train_dataset) valid_tuple_lst = wrap_tuples(valid_labels, valid_dataset) test_tuple_lst = wrap_tuples(test_labels, test_dataset) end_time = time.clock() print('Labels and data sets to tuples time:', timedelta(seconds=end_time - start_time)) """ Explanation: Problem 5 By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it. Measure how much overlap there is between training, validation and test samples. Optional questions: - What about near duplicates between datasets? (images that are almost identical) - Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments. 
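For the exact-overlap part of the question, pairwise comparison is O(n·m); hashing each image's raw bytes first turns it into a set intersection, which scales much better. A sketch on toy arrays (an addition here; note that byte hashing only catches exact duplicates, while near duplicates still need a distance threshold as above):

```python
import hashlib
import numpy as np

def image_hash(img):
    # equal arrays always produce equal digests
    return hashlib.sha1(img.tobytes()).hexdigest()

a = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
b = a.copy()          # an exact duplicate
c = a + 1e-3          # a near duplicate: NOT caught by hashing
hashes_a = {image_hash(x) for x in [a]}
hashes_bc = {image_hash(x) for x in [b, c]}
print(len(hashes_a & hashes_bc))   # 1: only the exact copy overlaps
```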
End of explanation """ distance_overlapping = 10 """ Explanation: Comparison parallel and sync overlapping calculation For distance measurement between images used Manhattan metric (reference) End of explanation """ start_time = time.clock() overlap_valid_test = count_equal_tuples(valid_tuple_lst, test_tuple_lst) end_time = time.clock() duration = timedelta(seconds=end_time - start_time) print('Counting overlapping between validation and test datasets during', duration) display_overlap(overlap_valid_test) start_time = time.clock() overlap_valid_test_near = count_equal_tuples(valid_tuple_lst, test_tuple_lst, distance_overlapping) end_time = time.clock() duration = timedelta(seconds=end_time - start_time) print('Counting overlapping between validation and test datasets (with overlaping distance) during', duration) display_overlap(overlap_valid_test_near) """ Explanation: Synchronously End of explanation """ start_time = time.clock() overlap_valid_test = count_equal_tuples_parallel(valid_tuple_lst, test_tuple_lst) end_time = time.clock() duration = timedelta(seconds=end_time - start_time) print('Counting overlapping between validation and test datasets during', duration) display_overlap(overlap_valid_test) overlapping_comparison(overlap_valid_test, valid_dataset, test_dataset) start_time = time.clock() overlap_valid_test_near = count_equal_tuples_parallel(valid_tuple_lst, test_tuple_lst, distance_overlapping) end_time = time.clock() duration = timedelta(seconds=end_time - start_time) print('Counting overlapping between validation and test datasets (with overlaping distance) during', duration) display_overlap(overlap_valid_test_near) overlapping_comparison(overlap_valid_test_near, valid_dataset, test_dataset) start_time = time.clock() overlap_valid_test_far = count_equal_tuples_parallel(valid_tuple_lst, test_tuple_lst, 110, 100) end_time = time.clock() duration = timedelta(seconds=end_time - start_time) print('Counting overlapping between validation and test datasets 
(with overlapping interval) during', duration) display_overlap(overlap_valid_test_far) overlapping_comparison(overlap_valid_test_far, valid_dataset, test_dataset) """ Explanation: Asynchronously End of explanation """ start_time = time.clock() overlap_train_valid = count_equal_tuples_parallel(train_tuple_lst, valid_tuple_lst) end_time = time.clock() duration = timedelta(seconds=end_time - start_time) print('Counting overlapping between training and validation datasets during', duration) display_overlap(overlap_train_valid) overlapping_comparison(overlap_train_valid, train_dataset, valid_dataset) start_time = time.clock() overlap_train_valid_near = count_equal_tuples_parallel(train_tuple_lst, valid_tuple_lst, distance_overlapping) end_time = time.clock() duration = timedelta(seconds=end_time - start_time) print('Counting overlapping between training and validation datasets (with overlapping distance) during', duration) display_overlap(overlap_train_valid_near) overlapping_comparison(overlap_train_valid_near, train_dataset, valid_dataset) start_time = time.clock() overlap_train_test = count_equal_tuples_parallel(train_tuple_lst, test_tuple_lst) end_time = time.clock() duration = timedelta(seconds=end_time - start_time) print('Counting overlapping between training and test datasets during', duration) display_overlap(overlap_train_test) overlapping_comparison(overlap_train_test, train_dataset, test_dataset) start_time = time.clock() overlap_train_test_near = count_equal_tuples_parallel(train_tuple_lst, test_tuple_lst, distance_overlapping) end_time = time.clock() duration = timedelta(seconds=end_time - start_time) print('Counting overlapping between training and test datasets (with overlapping distance) during', duration) display_overlap(overlap_train_test_near) overlapping_comparison(overlap_train_test_near, train_dataset, test_dataset) %timeit is_equal_comparison(item_a, item_b) %timeit manhattan_distance(valid_histogram[8], valid_histogram[9]) %timeit
distance.cityblock(item_a.flatten(), item_b.flatten()) """ Explanation: Estimation overlapping End of explanation """ from sklearn.linear_model import LogisticRegression %run label_util.py # print('train labels:', count_labels(train_labels)) # print('valid labels:', count_labels(valid_labels)) # print('test labels:', count_labels(test_labels)) # show_data(train_dataset, train_labels) # show_data(test_dataset, test_labels) # show_data(valid_dataset, valid_labels) from collections import Counter cnt = Counter(valid_labels) keys = cnt.keys() one_class_size = 50 // len(keys) for key in keys: class_indexes = np.where(valid_labels == key)[0][:one_class_size] print(type(valid_labels[class_indexes]), valid_labels[class_indexes]) valid_labels.shape logreg = linear_model.LogisticRegression() """ Explanation: Problem 6 Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it. Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model. Optional question: train an off-the-shelf model on all the data! End of explanation """
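The balanced per-class slicing above (`one_class_size` indexes per class via Counter and np.where) can be sketched without numpy. `balanced_subset` and the toy `labels` list below are illustrative names, not part of the original notebook:

```python
from collections import Counter, defaultdict

def balanced_subset(labels, total_size):
    # Take total_size // n_classes indexes per class, in first-seen order,
    # mirroring the Counter / np.where slice used for valid_labels above.
    per_class = total_size // len(Counter(labels))
    buckets = defaultdict(list)
    for idx, label in enumerate(labels):
        if len(buckets[label]) < per_class:
            buckets[label].append(idx)
    # Concatenate the per-class index lists in label order.
    return [i for label in sorted(buckets) for i in buckets[label]]

labels = ['A', 'B', 'A', 'B', 'A', 'B', 'A', 'B']
print(balanced_subset(labels, 4))  # → [0, 2, 1, 3]
```

The same kind of slicing would feed the 50/100/1000/5000-sample fits of Problem 6 while keeping each subset class-balanced.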
maxhutch/mapcombine
examples/word_count/WordCount.ipynb
mit
def my_init(args, params, frame):
    from copy import deepcopy
    ans = {"words" : {}}
    base = deepcopy(ans)
    jobs = []
    ans["fname"] = "/tmp/Dickens/TaleOfTwoCities.txt"
    jobs.append(((0, 16271), params, args, deepcopy(ans)))
    ans["fname"] = "/tmp/Dickens/ChristmasCarol.txt"
    jobs.append(((0, 4236), params, args, deepcopy(ans)))
    ans["fname"] = "/tmp/Dickens/HardTimes.txt"
    jobs.append(((0, 12036), params, args, deepcopy(ans)))
    ans["fname"] = "/tmp/Dickens/GreatExpectations.txt"
    jobs.append(((0, 20415), params, args, deepcopy(ans)))
    ans["fname"] = "/tmp/Dickens/DavidCopperfield.txt"
    jobs.append(((0, 38588), params, args, deepcopy(ans)))
    ans["fname"] = "/tmp/Dickens/BleakHouse.txt"
    jobs.append(((0, 40234), params, args, deepcopy(ans)))
    ans["fname"] = "/tmp/Dickens/PickwickPapers.txt"
    jobs.append(((0, 36613), params, args, deepcopy(ans)))
    ans["fname"] = "/tmp/Dickens/OliverTwist.txt"
    jobs.append(((0, 19202), params, args, deepcopy(ans)))
    return jobs, base
""" Explanation: Word Count MapCombine expects three functions: - Initialization - Map - Reduce The reduce is also used as the combine. Initialize At init, set up jobs for Charles Dickens' books. Return the jobs list and an empty base case. For production jobs, this list would be read from meta-data or generated in a less verbose way.
End of explanation """
import glopen

def my_map(pos, nelm_to_read, params, ans, last):
    if "input_file" not in ans:
        ans["glopen"] = glopen.glopen(ans["fname"], "r",
                                      endpoint="maxhutch#alpha-admin")
        ans["input_file"] = ans["glopen"].__enter__()
    for i in range(nelm_to_read):
        line = ans["input_file"].readline()
        for tok in line.split():
            word = tok.strip('.,;:?!_/\\--"`')
            if word in ans["words"]:
                ans["words"][word] += 1
            else:
                ans["words"][word] = 1
    if last and False:
        ans["glopen"].__exit__(None, None, None)
        del ans["glopen"]
        del ans["input_file"]
    return ans
""" Explanation: Map Map increments a counter for each word. The first time it is called, it opens a remote file with the glopen context manager.
The last time it is called, it closes the context, which deletes the local copy of the file. The funny __enter__() business is because glopen is a context manager. End of explanation """ def my_reduce(whole, part): for word in part["words"]: if word in whole["words"]: whole["words"][word] += part["words"][word] else: whole["words"][word] = part["words"][word] return """ Explanation: Reduce Add the counts across the map outputs. End of explanation """ class Foo: pass args = Foo() args.MR_init = my_init args.map = my_map args.reduce = my_reduce args.thread = 2 args.verbose = False args.block = 1024 args.post = None params = {} jobs = [(args, params, 0),] """ Explanation: Putting it all together Outside of an IPython Notebook, these would be set on the command line with argparse. End of explanation """ from mapcombine import outer_process stuff = map(outer_process, jobs) for i, res in enumerate(stuff): print("Charles Dickens wrote 'linen' {:d} times, but 'Rotherhithe' only {:d} times.".format( res["words"]['linen'], res["words"]['Rotherhithe'])) """ Explanation: Here's the actual work. We can use any python map: - map builtin - IPython.Parallel.map - multiprocessing.Pool.map - multiprocessing.dummy.Pool.map - dask.bag.map End of explanation """
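The same map/reduce word-count pattern can be exercised without Globus access — a minimal sketch over in-memory strings, where `count_words` and `merge_counts` are hypothetical stand-ins for `my_map` and `my_reduce` (with a shortened strip set):

```python
def count_words(text):
    # Per-chunk "map": count words in one piece of text.
    counts = {}
    for tok in text.split():
        word = tok.strip('.,;:?!"')
        if word:
            counts[word] = counts.get(word, 0) + 1
    return counts

def merge_counts(whole, part):
    # Same shape as my_reduce above, minus the {"words": ...} wrapper.
    for word, n in part.items():
        whole[word] = whole.get(word, 0) + n
    return whole

total = {}
for part in map(count_words, ["It was the best of times,",
                              "it was the worst of times."]):
    merge_counts(total, part)
print(total["times"], total["was"])  # → 2 2
```

Because `count_words` is pure, any of the map implementations listed above (builtin map, multiprocessing.Pool.map, ...) can drive it unchanged, and the merge step can run wherever the partial results land.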
tensorflow/docs-l10n
site/en-snapshot/tfx/tutorials/transform/simple.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2021 The TensorFlow Authors. End of explanation """ try: import colab !pip install --upgrade pip except: pass """ Explanation: Preprocess data with TensorFlow Transform The Feature Engineering Component of TensorFlow Extended (TFX) Note: We recommend running this tutorial in a Colab notebook, with no setup required! Just click "Run in Google Colab". <div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/transform/simple"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/transform/simple.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/tfx/blob/master/docs/tutorials/transform/simple.ipynb"> <img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td> <td><a target="_blank" href="https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/transform/simple.ipynb"> <img width=32px src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td> </table></div> This example colab notebook provides a very simple example of how <a target='_blank' 
href='https://www.tensorflow.org/tfx/transform/get_started/'>TensorFlow Transform (<code>tf.Transform</code>)</a> can be used to preprocess data using exactly the same code for both training a model and serving inferences in production. TensorFlow Transform is a library for preprocessing input data for TensorFlow, including creating features that require a full pass over the training dataset. For example, using TensorFlow Transform you could: Normalize an input value by using the mean and standard deviation Convert strings to integers by generating a vocabulary over all of the input values Convert floats to integers by assigning them to buckets, based on the observed data distribution TensorFlow has built-in support for manipulations on a single example or a batch of examples. tf.Transform extends these capabilities to support full passes over the entire training dataset. The output of tf.Transform is exported as a TensorFlow graph which you can use for both training and serving. Using the same graph for both training and serving can prevent skew, since the same transformations are applied in both stages. Upgrade Pip To avoid upgrading Pip in a system when running locally, check to make sure that we're running in Colab. Local systems can of course be upgraded separately. End of explanation """ !pip install -q -U tensorflow_transform # This cell is only necessary because packages were installed while python was # running. It avoids the need to restart the runtime when running in Colab. 
import pkg_resources import importlib importlib.reload(pkg_resources) """ Explanation: Install TensorFlow Transform End of explanation """ import pathlib import pprint import tempfile import tensorflow as tf import tensorflow_transform as tft import tensorflow_transform.beam as tft_beam from tensorflow_transform.tf_metadata import dataset_metadata from tensorflow_transform.tf_metadata import schema_utils """ Explanation: Imports End of explanation """ raw_data = [ {'x': 1, 'y': 1, 's': 'hello'}, {'x': 2, 'y': 2, 's': 'world'}, {'x': 3, 'y': 3, 's': 'hello'} ] raw_data_metadata = dataset_metadata.DatasetMetadata( schema_utils.schema_from_feature_spec({ 'y': tf.io.FixedLenFeature([], tf.float32), 'x': tf.io.FixedLenFeature([], tf.float32), 's': tf.io.FixedLenFeature([], tf.string), })) """ Explanation: Data: Create some dummy data We'll create some simple dummy data for our simple example: raw_data is the initial raw data that we're going to preprocess raw_data_metadata contains the schema that tells us the types of each of the columns in raw_data. In this case, it's very simple. End of explanation """ def preprocessing_fn(inputs): """Preprocess input columns into transformed columns.""" x = inputs['x'] y = inputs['y'] s = inputs['s'] x_centered = x - tft.mean(x) y_normalized = tft.scale_to_0_1(y) s_integerized = tft.compute_and_apply_vocabulary(s) x_centered_times_y_normalized = (x_centered * y_normalized) return { 'x_centered': x_centered, 'y_normalized': y_normalized, 's_integerized': s_integerized, 'x_centered_times_y_normalized': x_centered_times_y_normalized, } """ Explanation: Transform: Create a preprocessing function The preprocessing function is the most important concept of tf.Transform. A preprocessing function is where the transformation of the dataset really happens. 
It accepts and returns a dictionary of tensors, where a tensor means a <a target='_blank' href='https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/Tensor'><code>Tensor</code></a> or <a target='_blank' href='https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/SparseTensor'><code>SparseTensor</code></a>. There are two main groups of API calls that typically form the heart of a preprocessing function: TensorFlow Ops: Any function that accepts and returns tensors, which usually means TensorFlow ops. These add TensorFlow operations to the graph that transform raw data into transformed data one feature vector at a time. They run for every example, during both training and serving. TensorFlow Transform Analyzers/Mappers: Any of the analyzers/mappers provided by tf.Transform. These also accept and return tensors, and typically contain a combination of TensorFlow ops and Beam computation; unlike TensorFlow ops, the Beam computation runs only once (prior to training, during analysis) and typically makes a full pass over the entire training dataset. Analyzers create tf.constant tensors, which are added to your graph. For example, tft.min computes the minimum of a tensor over the training dataset. Caution: When you apply your preprocessing function to serving inferences, the constants that were created by analyzers during training do not change. If your data has trend or seasonality components, plan accordingly. Note: The preprocessing_fn is not directly callable. This means that calling preprocessing_fn(raw_data) will not work. Instead, it must be passed to the Transform Beam API as shown in the following cells.
End of explanation """ def main(output_dir): # Ignore the warnings with tft_beam.Context(temp_dir=tempfile.mkdtemp()): transformed_dataset, transform_fn = ( # pylint: disable=unused-variable (raw_data, raw_data_metadata) | tft_beam.AnalyzeAndTransformDataset( preprocessing_fn)) transformed_data, transformed_metadata = transformed_dataset # pylint: disable=unused-variable # Save the transform_fn to the output_dir _ = ( transform_fn | 'WriteTransformFn' >> tft_beam.WriteTransformFn(output_dir)) return transformed_data, transformed_metadata output_dir = pathlib.Path(tempfile.mkdtemp()) transformed_data, transformed_metadata = main(str(output_dir)) print('\nRaw data:\n{}\n'.format(pprint.pformat(raw_data))) print('Transformed data:\n{}'.format(pprint.pformat(transformed_data))) """ Explanation: Syntax You're almost ready to put everything together and use <a target='_blank' href='https://beam.apache.org/'>Apache Beam</a> to run it. Apache Beam uses a <a target='_blank' href='https://beam.apache.org/documentation/programming-guide/#applying-transforms'>special syntax to define and invoke transforms</a>. For example, in this line: result = pass_this | 'name this step' &gt;&gt; to_this_call The method to_this_call is being invoked and passed the object called pass_this, and <a target='_blank' href='https://stackoverflow.com/questions/50519662/what-does-the-redirection-mean-in-apache-beam-python'>this operation will be referred to as name this step in a stack trace</a>. The result of the call to to_this_call is returned in result. You will often see stages of a pipeline chained together like this: result = apache_beam.Pipeline() | 'first step' &gt;&gt; do_this_first() | 'second step' &gt;&gt; do_this_last() and since that started with a new pipeline, you can continue like this: next_result = result | 'doing more stuff' &gt;&gt; another_function() Putting it all together Now we're ready to transform our data. 
We'll use Apache Beam with a direct runner, and supply three inputs: raw_data - The raw input data that we created above raw_data_metadata - The schema for the raw data preprocessing_fn - The function that we created to do our transformation End of explanation """ !ls -l {output_dir} """ Explanation: Is this the right answer? Previously, we used tf.Transform to do this: x_centered = x - tft.mean(x) y_normalized = tft.scale_to_0_1(y) s_integerized = tft.compute_and_apply_vocabulary(s) x_centered_times_y_normalized = (x_centered * y_normalized) x_centered - With input of [1, 2, 3] the mean of x is 2, and we subtract it from x to center our x values at 0. So our result of [-1.0, 0.0, 1.0] is correct. y_normalized - We wanted to scale our y values between 0 and 1. Our input was [1, 2, 3] so our result of [0.0, 0.5, 1.0] is correct. s_integerized - We wanted to map our strings to indexes in a vocabulary, and there were only 2 words in our vocabulary ("hello" and "world"). So with input of ["hello", "world", "hello"] our result of [0, 1, 0] is correct. Since "hello" occurs most frequently in this data, it will be the first entry in the vocabulary. x_centered_times_y_normalized - We wanted to create a new feature by crossing x_centered and y_normalized using multiplication. Note that this multiplies the results, not the original values, and our new result of [-0.0, 0.0, 1.0] is correct. Use the resulting transform_fn End of explanation """ loaded = tf.saved_model.load(str(output_dir/'transform_fn')) loaded.signatures['serving_default'] """ Explanation: The transform_fn/ directory contains a tf.saved_model implementing with all the constants tensorflow-transform analysis results built into the graph. 
It is possible to load this directly with tf.saved_model.load, but this is not easy to use:
End of explanation """
loaded = tf.saved_model.load(str(output_dir/'transform_fn'))
loaded.signatures['serving_default']
""" Explanation: A better approach is to load it using tft.TFTransformOutput. The TFTransformOutput.transform_features_layer method returns a tft.TransformFeaturesLayer object that can be used to apply the transformation:
End of explanation """
tf_transform_output = tft.TFTransformOutput(output_dir)
tft_layer = tf_transform_output.transform_features_layer()
tft_layer
""" Explanation: This tft.TransformFeaturesLayer expects a dictionary of batched features. So create a Dict[str, tf.Tensor] from the List[Dict[str, Any]] in raw_data:
End of explanation """
raw_data_batch = {
    's': tf.constant([ex['s'] for ex in raw_data]),
    'x': tf.constant([ex['x'] for ex in raw_data], dtype=tf.float32),
    'y': tf.constant([ex['y'] for ex in raw_data], dtype=tf.float32),
}
""" Explanation: You can use the tft.TransformFeaturesLayer on its own:
End of explanation """
transformed_batch = tft_layer(raw_data_batch)
{key: value.numpy() for key, value in transformed_batch.items()}
""" Explanation: Export A more typical use case would use tf.Transform to apply the transformation to the training and evaluation datasets (see the next tutorial for an example). Then, after training and before exporting the model, attach the tft.TransformFeaturesLayer as the first layer so that you can export it as part of your tf.saved_model.
For a concrete example, keep reading. An example training model Below is a model that: takes the transformed batch, stacks them all together into a simple (batch, features) matrix, runs them through a few dense layers, and produces 10 linear outputs. In a real use case you would apply a one-hot to the s_integerized feature. You could train this model on a dataset transformed by tf.Transform: End of explanation """ trained_model_output = trained_model(transformed_batch) trained_model_output.shape """ Explanation: Imagine we trained the model. trained_model.compile(loss=..., optimizer='adam') trained_model.fit(...) This model runs on the transformed inputs End of explanation """ class ExportModel(tf.Module): def __init__(self, trained_model, input_transform): self.trained_model = trained_model self.input_transform = input_transform @tf.function def __call__(self, inputs, training=None): x = self.input_transform(inputs) return self.trained_model(x) export_model = ExportModel(trained_model=trained_model, input_transform=tft_layer) """ Explanation: An example export wrapper Imagine you've trained the above model and want to export it. You'll want to include the transform function in the exported model: End of explanation """ export_model_output = export_model(raw_data_batch) export_model_output.shape tf.reduce_max(abs(export_model_output - trained_model_output)).numpy() """ Explanation: This combined model works on the raw data, and produces exactly the same results as calling the trained model directly: End of explanation """ import tempfile model_dir = tempfile.mkdtemp(suffix='tft') tf.saved_model.save(export_model, model_dir) reloaded = tf.saved_model.load(model_dir) reloaded_model_output = reloaded(raw_data_batch) reloaded_model_output.shape tf.reduce_max(abs(export_model_output - reloaded_model_output)).numpy() """ Explanation: This export_model includes the tft.TransformFeaturesLayer and is entierly self-contained. 
You can save it and restore it in another environment and still get exactly the same result: End of explanation """
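As a dependency-free recap of why the reloaded model agrees exactly: tf.Transform's analyzers run once over the training data and freeze their results as constants, which the exported transform then reuses everywhere. A sketch of that split in plain Python — here `analyze` imitates the analysis pass and the returned closure plays the role of the exported transform_fn; none of these names are tf.Transform APIs:

```python
def analyze(train_ys):
    # "Analyzer" phase: one full pass over the training data
    # produces constants (here min and max), like a tft analyzer would.
    lo, hi = min(train_ys), max(train_ys)

    def transform_fn(y):
        # "Mapper" phase: applied per example at training AND serving
        # time with the same frozen constants - so there is no skew.
        return (y - lo) / (hi - lo)

    return transform_fn

scale_to_0_1 = analyze([1.0, 2.0, 3.0])   # the y column of raw_data
print([scale_to_0_1(y) for y in [1.0, 2.0, 3.0]])  # → [0.0, 0.5, 1.0]
print(scale_to_0_1(2.5))  # serving-time input, same constants → 0.75
```

A serving-time value outside the training range would simply fall outside [0, 1], exactly as the Caution note above warns for drifting data.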
DeepLearningUB/EBISS2017
2. Automatic Differentiation.ipynb
mit
!pip install autograd """ Explanation: Automatic Differentiation and Computational Graphs The backpropagation algorithm was originally introduced in the 1970s, but its importance wasn't fully appreciated until a famous 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams. (Michael Nielsen in "Neural Networks and Deep Learning", http://neuralnetworksanddeeplearning.com/chap2.html). Backpropagation is the key algorithm that makes training deep models computationally tractable. For modern neural networks, it can make training with gradient descent as much as ten million times faster, relative to a naive implementation. That’s the difference between a model taking a week to train and taking 200,000 years. (Christopher Olah, 2016) We have seen that in order to optimize our models we need to compute the derivative of the loss function with respect to all model paramaters. The computation of derivatives in computer models is addressed by four main methods: manually working out derivatives and coding the result (as in the original paper describing backpropagation); numerical differentiation (using finite difference approximations); symbolic differentiation (using expression manipulation in software, such as Sympy); and automatic differentiation (AD). When training large and deep neural networks, AD is the only practical alternative. Automatic differentiation (AD) works by systematically applying the chain rule of differential calculus at the elementary operator level. Let $ y = f(g(x)) $ our target function. In its basic form, the chain rule states: $$ \frac{\partial f}{\partial x} = \frac{\partial f}{\partial g} \frac{\partial g}{\partial x} $$ or, if there are more than one variable $g_i$ in-between $y$ and $x$ (f.e. 
if $f$ is a two dimensional function such as $f(g_1(x), g_2(x))$), then: $$ \frac{\partial f}{\partial x} = \sum_i \frac{\partial f}{\partial g_i} \frac{\partial g_i}{\partial x} $$ For example, let's consider the derivative of this function: $$ f(x) = \frac{1}{1 + e^{- ({w}^T \cdot x + b)}} $$ Now, let's write how to evaluate $f(x)$ via a sequence of primitive operations: python x = ? f1 = w * x f2 = f1 + b f3 = -f2 f4 = 2.718281828459 ** f3 f5 = 1.0 + f4 f = 1.0/f5 The question mark indicates that $x$ is a value that must be provided. This program can compute the value of $x$ and also populate program variables. We can evaluate $\frac{\partial f}{\partial x}$ at some $x$ by using the chain rule. This is called forward-mode differentiation. In our case: python def dfdx_forward(x, w, b): f1 = w * x df1 = w # = df1/dx f2 = f1 + b df2 = df1 * 1.0 # = df1 * df2/df1 f3 = -f2 df3 = df2 * -1.0 # = df2 * df3/df2 f4 = 2.718281828459 ** f3 df4 = df3 * 2.718281828459 ** f3 # = df3 * df4/df3 f5 = 1.0 + f4 df5 = df4 * 1.0 # = df4 * df5/df4 df6 = df5 * -1.0 / f5 ** 2.0 # = df5 * df6/df5 return df6 It is interesting to note that this program can be readily executed if we have access to subroutines implementing the derivatives of primitive functions (such as $\exp{(x)}$ or $1/x$) and all intermediate variables are computed in the right order. It is also interesting to note that AD allows the accurate evaluation of derivatives at machine precision, with only a small constant factor of overhead. Forward differentiation is efficient for functions $f : \mathbb{R}^n \rightarrow \mathbb{R}^m$ with $n << m$ (only $O(n)$ sweeps are necessary). For cases $n >> m$ a different technique is needed. To this end, we will rewrite the chain rule as: $$ \frac{\partial f}{\partial x} = \frac{\partial g}{\partial x} \frac{\partial f}{\partial g} $$ to propagate derivatives backward from a given output. This is called reverse-mode differentiation. Reverse pass starts at the end (i.e. 
$\frac{\partial f}{\partial f} = 1$) and propagates backward to all dependencies. ```python def dfdx_backward(x, w, b): f1 = w * x f2 = f1 + b f3 = -f2 f4 = 2.718281828459 ** f3 f5 = 1.0 + f4 f6 = 1.0/f5 df6 = 1.0 # = df/df6 df5 = 1.0 * -1.0 / (f5 * f5) * df6 # = df6 * df6/df5 df4 = df5 * 1.0 # = df5 * df5/df4 df3 = df4 * numpy.log(2.718281828459) \ * 2.718281828459 ** f3 # = df4 * df4/df3 df2 = df3 * -1.0 # = df3 * df3/df2 df1 = df2 * 1.0 # = df2 * df2/df1 df = df1 * w # = df1 * df1/dx return df ``` In practice, reverse-mode differentiation is a two-stage process. In the first stage the original function code is run forward, populating $f_i$ variables. In the second stage, derivatives are calculated by propagating in reverse, from the outputs to the inputs. The most important property of reverse-mode differentiation is that it is cheaper than forward-mode differentiation for functions with a high number of input variables. In our case, $f : \mathbb{R}^n \rightarrow \mathbb{R}$, only one application of the reverse mode is sufficient to compute the full gradient of the function $\nabla f = \big( \frac{\partial y}{\partial x_1}, \dots ,\frac{\partial y}{\partial x_n} \big)$. This is the case of deep learning, where the number of input variables is very high. As we have seen, AD relies on the fact that all numerical computations are ultimately compositions of a finite set of elementary operations for which derivatives are known. For this reason, given a library of derivatives of all elementary functions in a deep neural network, we are able of computing the derivatives of the network with respect to all parameters at machine precision and applying stochastic gradient methods to its training. Without this automation process the design and debugging of optimization processes for complex neural networks with millions of parameters would be impossible. Autograd is a Python module (with only one function) that implements automatic differentiation. 
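Both hand-written programs can be checked numerically against the analytic derivative $\frac{\partial f}{\partial x} = w \, \sigma(wx+b)\left(1 - \sigma(wx+b)\right)$ — a quick sanity check, rewritten with math.exp in place of the 2.718281828459 literal:

```python
import math

def dfdx_forward(x, w, b):
    f1 = w * x
    df1 = w                      # df1/dx
    f2 = f1 + b
    df2 = df1
    f3 = -f2
    df3 = -df2
    f4 = math.exp(f3)
    df4 = df3 * math.exp(f3)
    f5 = 1.0 + f4
    df5 = df4
    return df5 * -1.0 / f5 ** 2.0

def dfdx_backward(x, w, b):
    # Forward sweep populates the intermediates ...
    f1 = w * x
    f2 = f1 + b
    f3 = -f2
    f4 = math.exp(f3)
    f5 = 1.0 + f4
    # ... reverse sweep propagates derivatives, seeded from df6 = 1.
    df5 = -1.0 / (f5 * f5)
    df4 = df5
    df3 = df4 * math.exp(f3)
    df2 = -df3
    df1 = df2
    return df1 * w

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, w, b = 0.5, 2.0, -1.0          # w*x + b = 0, so sigma = 0.5
analytic = w * sigmoid(w * x + b) * (1.0 - sigmoid(w * x + b))
print(dfdx_forward(x, w, b), dfdx_backward(x, w, b), analytic)  # all 0.5
```

Both modes agree with the hand-derived result to machine precision, which is exactly the guarantee AD makes.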
End of explanation """ import autograd.numpy as np from autograd import grad x = np.array([2, 5], dtype=float) def test(x): return np.log(x[0]) + x[0]*x[1] - np.sin(x[1]) grad_test = grad(test) print "({:.2f},{:.2f})".format(grad_test(x)[0],grad_test(x)[1]) """ Explanation: Autograd can automatically differentiate Python and Numpy code: It can handle most of Python’s features, including loops, if statements, recursion and closures. Autograd allows you to compute gradients of many types of data structures (Any nested combination of lists, tuples, arrays, or dicts). It can also compute higher-order derivatives. Uses reverse-mode differentiation (backpropagation) so it can efficiently take gradients of scalar-valued functions with respect to array-valued or vector-valued arguments. You can easily implement your custim gradients (good for speed, numerical stability, non-compliant code, etc). End of explanation """ import autograd.numpy as np from autograd import grad def sigmoid(x): return 1 / (1 + np.exp(-x)) def logistic_predictions(weights, inputs): return sigmoid(np.dot(inputs, weights)) def training_loss(weights, inputs, targets): preds = logistic_predictions(weights, inputs) label_probabilities = preds * targets + (1 - preds) * (1 - targets) return -np.sum(np.log(label_probabilities)) def optimize(inputs, targets, training_loss): # Optimize weights using gradient descent. gradient_loss = grad(training_loss) weights = np.zeros(inputs.shape[1]) print "Initial loss:", training_loss(weights, inputs, targets) for i in xrange(100): weights -= gradient_loss(weights, inputs, targets) * 0.01 print "Final loss:", training_loss(weights, inputs, targets) return weights # Build a toy dataset. 
inputs = np.array([[0.52, 1.12, 0.77], [0.88, -1.08, 0.15], [0.52, 0.06, -1.30], [0.74, -2.49, 1.39]]) targets = np.array([True, True, False, True]) weights = optimize(inputs, targets, training_loss) print "Weights:", weights """ Explanation: Then, logistic regression model fitting $$ f(x) = \frac{1}{1 + \exp^{-(w_0 + w_1 x)}} $$ can be implemented in this way: End of explanation """ %reset import numpy as np #Example dataset N_samples_per_class = 100 d_dimensions = 2 x = np.vstack((np.random.randn(N_samples_per_class, d_dimensions), np.random.randn(N_samples_per_class, d_dimensions) +np.array([5,5]))) y = np.concatenate([-1.0*np.ones(N_samples_per_class), 1.*np.ones(N_samples_per_class)]) %matplotlib inline import matplotlib.pyplot as plt fig, ax = plt.subplots(1, 1) fig.set_facecolor('#EAEAF2') idx = y==1 plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.25) idx = y==-1 plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.5,color='pink') import autograd.numpy as np from autograd import grad def SVM_predictions(w, inputs): return np.dot(w[0,:-1],inputs.T)+w[0,-1] #your code in this funtion def SVM_training_loss(weights, inputs, targets): pred = SVM_predictions(weights, inputs) pass def optimize(inputs, targets, training_loss): gradient_loss = grad(training_loss) weights = np.zeros((1,inputs.shape[1]+1)) print "Initial loss:", training_loss(weights, inputs, targets) for i in xrange(10000): weights -= gradient_loss(weights, inputs, targets) * 0.05 if i%1000 == 0: print " Loss:", training_loss(weights, inputs, targets) print "Final loss:", training_loss(weights, inputs, targets) return weights weights = optimize(x, y, SVM_training_loss) print "Weights", weights delta = 0.1 xx = np.arange(-4.0, 10.0, delta) yy = np.arange(-4.0, 10.0, delta) XX, YY = np.meshgrid(xx, yy) Xf = XX.flatten() Yf = YY.flatten() sz=XX.shape test_data = np.concatenate([Xf[:,np.newaxis],Yf[:,np.newaxis]],axis=1) Z = SVM_predictions(weights,test_data) fig, ax = plt.subplots(1, 1) 
fig.set_facecolor('#EAEAF2') Z = np.reshape(Z,(xx.shape[0],xx.shape[0])) plt.contour(XX,YY,Z,[0]) idx = y==1 plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.25) idx = y==-1 plt.scatter(x[idx.ravel(),0],x[idx.ravel(),1],alpha=0.5,color='pink') """ Explanation: Any complex function that can be decomposed in a set of elementary functions can be derived in an automatic way, at machine precision, by this algorithm! We no longer need to code complex derivatives to apply SGD! Exercise Make the necessary changes to the code below in order to compute a max-margin solution for a linear separation problem by using SGD. End of explanation """ %reset import matplotlib.pyplot as plt import sklearn import sklearn.datasets import sklearn.linear_model import matplotlib import autograd.numpy as np from autograd import grad from autograd.misc import flatten # Display plots inline and change default figure size %matplotlib inline matplotlib.rcParams['figure.figsize'] = (6.0, 4.0) # Generate a dataset and plot it np.random.seed(0) X, y = sklearn.datasets.make_moons(200, noise=0.20) plt.scatter(X[:,0], X[:,1], s=40, c=y, alpha=0.45) """ Explanation: Neural Network End of explanation """ num_examples = len(X) # training set size nn_input_dim = 2 # input layer dimensionality nn_output_dim = 2 # output layer dimensionality # Gradient descent parameters epsilon = 0.01 # learning rate for gradient descent reg_lambda = 0.01 # regularization strength def calculate_loss(model): W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2'] # Forward propagation to calculate our predictions z1 = np.dot(X,W1) + b1 a1 = np.tanh(z1) z2 = np.dot(a1,W2) + b2 exp_scores = np.exp(z2) probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # Calculating the loss corect_logprobs = -np.log(probs[range(num_examples), y]) data_loss = np.sum(corect_logprobs) # Add regulatization term to loss (optional) data_loss += reg_lambda/2 * (np.sum(np.square(W1)) + np.sum(np.square(W2))) return 
1./num_examples * data_loss # output (0 or 1) def predict(model, x): W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2'] # Forward propagation z1 = np.dot(x,W1) + b1 a1 = np.tanh(z1) z2 = np.dot(a1,W2) + b2 exp_scores = np.exp(z2) probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) return np.argmax(probs, axis=1) """ Explanation: Let's now build a 3-layer neural network with one input layer, one hidden layer, and one output layer. The number of nodes in the input layer is determined by the dimensionality of our data, 2. Similarly, the number of nodes in the output layer is determined by the number of classes we have, also 2. Our network makes predictions using forward propagation, which is just a bunch of matrix multiplications and the application of the activation function(s). If $x$ is the 2-dimensional input to our network then we calculate our prediction $\hat{y}$ (also two-dimensional) as follows: $$ z_1 = x W_1 + b_1 $$ $$ a_1 = \mbox{tanh}(z_1) $$ $$ z_2 = a_1 W_2 + b_2$$ $$ a_2 = \mbox{softmax}({z_2})$$ $W_1, b_1, W_2, b_2$ are parameters of our network, which we need to learn from our training data. You can think of them as matrices transforming data between layers of the network. Looking at the matrix multiplications above we can figure out the dimensionality of these matrices. If we use 500 nodes for our hidden layer then $W_1 \in \mathbb{R}^{2\times500}$, $b_1 \in \mathbb{R}^{500}$, $W_2 \in \mathbb{R}^{500\times2}$, $b_2 \in \mathbb{R}^{2}$. A common choice with the softmax output is the cross-entropy loss. If we have $N$ training examples and $C$ classes then the loss for our prediction $\hat{y}$ with respect to the true labels $y$ is given by: $$ \begin{aligned} L(y,\hat{y}) = - \frac{1}{N} \sum_{n \in N} \sum_{i \in C} y_{n,i} \log\hat{y}_{n,i} \end{aligned} $$ End of explanation """ # This function learns parameters for the neural network and returns the model. 
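For one-hot targets, the loss $L(y,\hat{y})$ above reduces to averaging $-\log$ of the probability assigned to the correct class. A minimal standalone sketch of softmax plus this cross-entropy (pure Python, no numpy; names are illustrative):

```python
import math

def softmax(scores):
    # Normalized exponentials: one row of probs.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, labels):
    # L(y, yhat) = -(1/N) * sum_n log(yhat[n][y_n]) for one-hot y.
    return -sum(math.log(p[y]) for p, y in zip(probs, labels)) / len(labels)

probs = [softmax([2.0, 1.0]), softmax([0.0, 3.0])]
loss = cross_entropy(probs, [0, 1])
print(loss)  # small, since both rows favor the correct class
```

This is the same quantity `calculate_loss` computes vectorized with `probs[range(num_examples), y]`, before the regularization term is added.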
# - nn_hdim: Number of nodes in the hidden layer
# - num_passes: Number of passes through the training data for gradient descent
# - print_loss: If True, print the loss every 1000 iterations
def build_model(nn_hdim, num_passes=20000, print_loss=False):
    # Initialize the parameters to random values. We need to learn these.
    np.random.seed(0)
    W1 = np.random.randn(nn_input_dim, nn_hdim) / np.sqrt(nn_input_dim)
    b1 = np.zeros((1, nn_hdim))
    W2 = np.random.randn(nn_hdim, nn_output_dim) / np.sqrt(nn_hdim)
    b2 = np.zeros((1, nn_output_dim))

    # This is what we return at the end
    model = {}

    # Gradient descent. For each batch...
    for i in range(0, num_passes):
        # Forward propagation
        z1 = np.dot(X, W1) + b1
        a1 = np.tanh(z1)
        z2 = np.dot(a1, W2) + b2
        exp_scores = np.exp(z2)
        probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)

        # Backpropagation
        delta3 = probs
        delta3[range(num_examples), y] -= 1
        dW2 = (a1.T).dot(delta3)
        db2 = np.sum(delta3, axis=0, keepdims=True)
        delta2 = delta3.dot(W2.T) * (1 - np.power(a1, 2))
        dW1 = np.dot(X.T, delta2)
        db1 = np.sum(delta2, axis=0)

        # Add regularization terms (b1 and b2 don't have regularization terms)
        dW2 += reg_lambda * W2
        dW1 += reg_lambda * W1

        # Gradient descent parameter update
        W1 += -epsilon * dW1
        b1 += -epsilon * db1
        W2 += -epsilon * dW2
        b2 += -epsilon * db2

        # Assign new parameters to the model
        model = {'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2}

        # Optionally print the loss.
        # This is expensive because it uses the whole dataset, so we don't want to do it too often.
        if print_loss and i % 1000 == 0:
            print("Loss after iteration %i: %f" % (i, calculate_loss(model)))

    return model

def plot_decision_boundary(pred_func):
    # Set min and max values and give it some padding
    x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
    y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
    h = 0.01
    # Generate a grid of points with distance h between them
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # Predict the function value for the whole grid
    Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    # Plot the contour and training examples
    plt.contourf(xx, yy, Z, alpha=0.45)
    plt.scatter(X[:, 0], X[:, 1], c=y, alpha=0.45)

# Build a model with a 3-dimensional hidden layer
model = build_model(3, print_loss=True)

# Plot the decision boundary
plot_decision_boundary(lambda x: predict(model, x))
plt.title("Decision Boundary for hidden layer size 3")
"""
Explanation: This is a version that solves the optimization problem by using the backpropagation algorithm (hand-coded derivatives):
End of explanation
"""

# This function learns parameters for the neural network and returns the model.
# - nn_hdim: Number of nodes in the hidden layer
# - num_passes: Number of passes through the training data for gradient descent
# - print_loss: If True, print the loss every 1000 iterations
def build_model(nn_hdim, num_passes=20000, print_loss=False):
    # Initialize the parameters to random values. We need to learn these.
    np.random.seed(0)
    W1 = np.random.randn(nn_input_dim, nn_hdim) / np.sqrt(nn_input_dim)
    b1 = np.zeros((1, nn_hdim))
    W2 = np.random.randn(nn_hdim, nn_output_dim) / np.sqrt(nn_hdim)
    b2 = np.zeros((1, nn_output_dim))

    # This is what we return at the end
    model = {'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2}

    # Gradient descent. For each batch...
    for i in range(0, num_passes):
        # Forward propagation
        z1 = np.dot(X, model['W1']) + model['b1']
        a1 = np.tanh(z1)
        z2 = np.dot(a1, model['W2']) + model['b2']
        exp_scores = np.exp(z2)
        probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)

        gradient_loss = grad(calculate_loss)
        model_flat, unflatten_m = flatten(model)
        grad_flat, unflatten_g = flatten(gradient_loss(model))
        model_flat -= grad_flat * 0.05
        model = unflatten_m(model_flat)

        # Optionally print the loss.
        # This is expensive because it uses the whole dataset, so we don't want to do it too often.
        if print_loss and i % 1000 == 0:
            print("Loss after iteration %i: %f" % (i, calculate_loss(model)))

    return model

def plot_decision_boundary(pred_func):
    # Set min and max values and give it some padding
    x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
    y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
    h = 0.01
    # Generate a grid of points with distance h between them
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # Predict the function value for the whole grid
    Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    # Plot the contour and training examples
    plt.contourf(xx, yy, Z, alpha=0.45)
    plt.scatter(X[:, 0], X[:, 1], c=y, alpha=0.45)

# Build a model with a 3-dimensional hidden layer
model = build_model(3, print_loss=True)

# Plot the decision boundary
plot_decision_boundary(lambda x: predict(model, x))
plt.title("Decision Boundary for hidden layer size 3")
"""
Explanation: The next version solves the optimization problem by using AD:
End of explanation
"""

plt.figure(figsize=(16, 32))
hidden_layer_dimensions = [1, 2, 3, 4, 5, 20, 50]
for i, nn_hdim in enumerate(hidden_layer_dimensions):
    plt.subplot(5, 2, i+1)
    plt.title('Hidden Layer size %d' % nn_hdim)
    model = build_model(nn_hdim)
    plot_decision_boundary(lambda x: predict(model, x))
plt.show()
"""
Explanation: Let's now get a sense of how varying the hidden layer size affects the result.
End of explanation
"""
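Autograd's `grad` builds the derivative automatically instead of requiring the hand-coded backpropagation shown earlier. As a minimal, self-contained sketch of the core idea, here is automatic differentiation in forward mode with dual numbers (a simplification: autograd itself uses reverse mode, which suits the many-parameters/one-loss setting of a neural network):

```python
# Tiny forward-mode automatic differentiation via dual numbers.
# A dual number carries a value together with its derivative, and every
# arithmetic operation propagates both -- so derivatives come out exact
# to machine precision, with no symbolic or numeric approximation.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def grad_at(f, x):
    # Seed dx/dx = 1 and read off f'(x) from the derivative slot.
    return f(Dual(x, 1.0)).dot

f = lambda x: 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2
print(grad_at(f, 4.0))                # 26.0
```

The same recipe extends to any composition of elementary operations, which is why `grad(calculate_loss)` above can differentiate the whole loss without any derivative being written by hand.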
GoogleCloudPlatform/vertex-ai-samples
notebooks/official/automl/sdk_automl_image_object_detection_batch.ipynb
apache-2.0
import os # Google Cloud Notebook if os.path.exists("/opt/deeplearning/metadata/env_version"): USER_FLAG = "--user" else: USER_FLAG = "" ! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG """ Explanation: Vertex AI SDK : AutoML training image object detection model for batch prediction <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_image_object_detection_batch.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_image_object_detection_batch.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> <td> <a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_image_object_detection_batch.ipynb"> <img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo"> Open in Vertex AI Workbench </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how to use the Vertex AI SDK to create image object detection models and do batch prediction using a Google Cloud AutoML model. Dataset The dataset used for this tutorial is the Salads category of the OpenImages dataset from TensorFlow Datasets. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the bounding box locations and the corresponding type of salad items in an image from a class of five items: salad, seafood, tomato, baked goods, or cheese. 
Objective In this tutorial, you create an AutoML image object detection model from a Python script, and then do a batch prediction using the Vertex AI SDK. You can alternatively create and deploy models using the gcloud command-line tool or online using the Cloud Console. The steps performed include: Create a Vertex Dataset resource. Train the model. View the model evaluation. Make a batch prediction. There is one key difference between using batch prediction and using online prediction: Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time. Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready. Costs This tutorial uses billable components of Google Cloud: Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Set up your local development environment If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following: The Cloud Storage SDK Git Python 3 virtualenv Jupyter notebook running in a virtual environment with Python 3 The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions: Install and initialize the SDK. Install Python 3. Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment. To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell. To launch Jupyter, run jupyter notebook on the command-line in a terminal shell. 
Open this notebook in the Jupyter Notebook Dashboard. Installation Install the latest version of Vertex AI SDK for Python. End of explanation """ ! pip3 install -U google-cloud-storage $USER_FLAG """ Explanation: Install the latest GA version of google-cloud-storage library. End of explanation """ ! pip3 install --upgrade tensorflow $USER_FLAG """ Explanation: Install the latest version of tensorflow library. End of explanation """ import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) """ Explanation: Restart the kernel Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages. End of explanation """ import os PROJECT_ID = "" # Get your Google Cloud project ID from gcloud if not os.getenv("IS_TESTING"): shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID: ", PROJECT_ID) """ Explanation: Before you begin GPU runtime This tutorial does not require a GPU runtime. Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage. If you are running this notebook locally, you will need to install the Cloud SDK. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $. End of explanation """ if PROJECT_ID == "" or PROJECT_ID is None: PROJECT_ID = "[your-project-id]" # @param {type:"string"} ! 
gcloud config set project $PROJECT_ID """ Explanation: Otherwise, set your project ID here. End of explanation """ REGION = "[your-region]" # @param {type:"string"} if REGION == "[your-region]": REGION = "us-central1" """ Explanation: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you. Americas: us-central1 Europe: europe-west4 Asia Pacific: asia-east1 You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services. Learn more about Vertex AI regions End of explanation """ from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") """ Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial. End of explanation """ # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. import os import sys # If on Google Cloud Notebook, then don't execute this code if not os.path.exists("/opt/deeplearning/metadata/env_version"): if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' """ Explanation: Authenticate your Google Cloud account If you are using Google Cloud Notebooks, your environment is already authenticated. 
Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. End of explanation """ BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"} BUCKET_URI = f"gs://{BUCKET_NAME}" if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]": BUCKET_URI = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP """ Explanation: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. End of explanation """ ! gsutil mb -l $REGION $BUCKET_URI """ Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. End of explanation """ ! 
gsutil ls -al $BUCKET_URI """ Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents: End of explanation """ import google.cloud.aiplatform as aiplatform """ Explanation: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants End of explanation """ aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME, location=REGION) """ Explanation: Initialize Vertex AI SDK for Python Initialize the Vertex AI SDK for Python for your project and corresponding bucket. End of explanation """ IMPORT_FILE = "gs://cloud-samples-data/vision/salads.csv" """ Explanation: Tutorial Now you are ready to start creating your own AutoML image object detection model. Location of Cloud Storage training data. Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage. End of explanation """ count = ! gsutil cat $IMPORT_FILE | wc -l print("Number of Examples", int(count[0])) print("First 10 rows") ! gsutil cat $IMPORT_FILE | head -10 """ Explanation: Quick peek at your data This tutorial uses a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file. Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows. End of explanation """ dataset = aiplatform.ImageDataset.create( display_name="Salads" + "_" + TIMESTAMP, gcs_source=[IMPORT_FILE], import_schema_uri=aiplatform.schema.dataset.ioformat.image.bounding_box, ) print(dataset.resource_name) """ Explanation: Create the Dataset Next, create the Dataset resource using the create method for the ImageDataset class, which takes the following parameters: display_name: The human readable name for the Dataset resource. gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource. 
import_schema_uri: The data labeling schema for the data items.

This operation may take several minutes.
End of explanation
"""

job = aiplatform.AutoMLImageTrainingJob(
    display_name="salads_" + TIMESTAMP,
    prediction_type="object_detection",
    multi_label=False,
    model_type="CLOUD",
    base_model=None,
)

print(job)
"""
Explanation: Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
Create training pipeline
An AutoML training pipeline is created with the AutoMLImageTrainingJob class, with the following parameters:

display_name: The human readable name for the TrainingJob resource.
prediction_type: The type of task to train the model for.
  classification: An image classification model.
  object_detection: An image object detection model.
multi_label: If a classification task, whether single (False) or multi-labeled (True).
model_type: The type of model for deployment.
  CLOUD: Deployment on Google Cloud.
  CLOUD_HIGH_ACCURACY_1: Optimized for accuracy over latency for deployment on Google Cloud.
  CLOUD_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on Google Cloud.
  MOBILE_TF_VERSATILE_1: Deployment on an edge device.
  MOBILE_TF_HIGH_ACCURACY_1: Optimized for accuracy over latency for deployment on an edge device.
  MOBILE_TF_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on an edge device.
base_model: (optional) Transfer learning from an existing Model resource -- supported for image classification only.

The instantiated object is the job for the training pipeline.
End of explanation
"""

model = job.run(
    dataset=dataset,
    model_display_name="salads_" + TIMESTAMP,
    training_fraction_split=0.8,
    validation_fraction_split=0.1,
    test_fraction_split=0.1,
    budget_milli_node_hours=20000,
    disable_early_stopping=False,
)
"""
Explanation: Run the training pipeline
Next, you run the job to start the training job by invoking the method run, with the following parameters:

dataset: The Dataset resource to train the model.
model_display_name: The human readable name for the trained model.
training_fraction_split: The percentage of the dataset to use for training.
test_fraction_split: The percentage of the dataset to use for test (holdout data).
validation_fraction_split: The percentage of the dataset to use for validation.
budget_milli_node_hours: (optional) The maximum training time, specified in milli node hours (1000 = one node hour).
disable_early_stopping: If True, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.

When complete, the run method returns the Model resource.
The execution of the training pipeline will take up to 1 hour 30 minutes.
End of explanation
"""

# Get model resource ID
models = aiplatform.Model.list(filter="display_name=salads_" + TIMESTAMP)

# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aiplatform.gapic.ModelServiceClient(
    client_options=client_options
)

model_evaluations = model_service_client.list_model_evaluations(
    parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
"""
Explanation: Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model.
As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.
End of explanation
"""

test_items = !gsutil cat $IMPORT_FILE | head -n2
cols_1 = str(test_items[0]).split(",")
cols_2 = str(test_items[1]).split(",")

if len(cols_1) == 11:
    test_item_1 = str(cols_1[1])
    test_label_1 = str(cols_1[2])
    test_item_2 = str(cols_2[1])
    test_label_2 = str(cols_2[2])
else:
    test_item_1 = str(cols_1[0])
    test_label_1 = str(cols_1[1])
    test_item_2 = str(cols_2[0])
    test_label_2 = str(cols_2[1])

print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
"""
Explanation: Send a batch prediction request
Send a batch prediction to your deployed model.
Get test item(s)
Now do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
"""

file_1 = test_item_1.split("/")[-1]
file_2 = test_item_2.split("/")[-1]

! gsutil cp $test_item_1 $BUCKET_URI/$file_1
! gsutil cp $test_item_2 $BUCKET_URI/$file_2

test_item_1 = BUCKET_URI + "/" + file_1
test_item_2 = BUCKET_URI + "/" + file_2
"""
Explanation: Copy test item(s)
For the batch prediction, copy the test items over to your Cloud Storage bucket.
End of explanation
"""

import json

import tensorflow as tf

gcs_input_uri = BUCKET_URI + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
    data = {"content": test_item_1, "mime_type": "image/jpeg"}
    f.write(json.dumps(data) + "\n")
    data = {"content": test_item_2, "mime_type": "image/jpeg"}
    f.write(json.dumps(data) + "\n")

print(gcs_input_uri)
! gsutil cat $gcs_input_uri
"""
Explanation: Make the batch input file
Now make a batch input file, which you will store in your Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial.
For a JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:

content: The Cloud Storage path to the image.
mime_type: The content type. In our example, it is a jpeg file.

For example:

{'content': '[your-bucket]/file1.jpg', 'mime_type': 'image/jpeg'}
End of explanation
"""

batch_predict_job = model.batch_predict(
    job_display_name="salads_" + TIMESTAMP,
    gcs_source=gcs_input_uri,
    gcs_destination_prefix=BUCKET_URI,
    machine_type="n1-standard-4",
    starting_replica_count=1,
    max_replica_count=1,
    sync=False,
)

print(batch_predict_job)
"""
Explanation: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:

job_display_name: The human readable name for the batch prediction job.
gcs_source: A list of one or more batch request input files.
gcs_destination_prefix: The Cloud Storage location for storing the batch prediction results.
machine_type: The type of machine for running batch prediction on dedicated resources. Not specifying the machine type will result in the batch prediction job being run with automatic resources.
starting_replica_count: The number of machine replicas used at the start of the batch operation. If not set, Vertex AI decides the starting number, not greater than max_replica_count. Only used if machine_type is set.
max_replica_count: The maximum number of machine replicas the batch operation may be scaled to. Only used if machine_type is set. Default is 10.
sync: If set to True, the call will block while waiting for the asynchronous batch job to complete.

For AutoML models, only manual scaling is supported. In manual scaling, both starting_replica_count and max_replica_count have the same value. For this batch job we are using manual scaling, setting both starting_replica_count and max_replica_count to the same value, 1.
End of explanation
"""

batch_predict_job.wait()
"""
Explanation: Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
End of explanation
"""

import json

bp_iter_outputs = batch_predict_job.iter_outputs()

prediction_results = list()
for blob in bp_iter_outputs:
    if blob.name.split("/")[-1].startswith("prediction"):
        prediction_results.append(blob.name)

tags = list()
for prediction_result in prediction_results:
    gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
    with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
        for line in gfile.readlines():
            line = json.loads(line)
            print(line)
"""
Explanation: Get the predictions
Next, get the results from the completed batch prediction job. The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:

content: The prediction request.
prediction: The prediction response.
ids: The internally assigned unique identifiers for each prediction request.
displayNames: The class names for each class label.
bboxes: The bounding box of each detected object.
End of explanation
"""

delete_bucket = False

# Delete the dataset using the Vertex dataset object
dataset.delete()

# Delete the model using the Vertex model object
model.delete()

# Delete the AutoML or Pipeline training job
job.delete()

# Delete the batch prediction job using the Vertex batch prediction object
batch_predict_job.delete()

if delete_bucket or os.getenv("IS_TESTING"):
    ! gsutil rm -r $BUCKET_URI
"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial: Dataset Model AutoML Training Job Batch Job Cloud Storage Bucket End of explanation """
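As a small follow-up to the Get the predictions section, here is a hedged sketch of pulling the highest-scoring detection out of a single prediction line. The sample values and the confidences key are illustrative assumptions, not taken from a real job -- inspect the actual JSONL written by your batch prediction job to confirm the exact schema and bounding-box coordinate order:

```python
import json

# One illustrative JSONL line, shaped like the schema described above
# (content / prediction, with displayNames and bboxes inside the response).
# The "confidences" key and all values here are hypothetical.
line = json.loads(
    '{"content": "gs://bucket/salad.jpg",'
    ' "prediction": {"ids": ["1", "2"],'
    ' "displayNames": ["Tomato", "Cheese"],'
    ' "confidences": [0.92, 0.40],'
    ' "bboxes": [[0.1, 0.4, 0.2, 0.5], [0.6, 0.9, 0.6, 0.9]]}}'
)

pred = line["prediction"]
# Index of the detection with the highest confidence score.
best = max(range(len(pred["confidences"])), key=lambda i: pred["confidences"][i])
print(pred["displayNames"][best], pred["bboxes"][best])
```

The same loop structure as in the iter_outputs() cell above can then be applied per line to collect detections across all result files.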
Kaggle/learntools
notebooks/time_series/raw/tut4.ipynb
apache-2.0
#$HIDE_INPUT$
import pandas as pd

# Federal Reserve dataset: https://www.kaggle.com/federalreserve/interest-rates
reserve = pd.read_csv(
    "../input/ts-course-data/reserve.csv",
    parse_dates={'Date': ['Year', 'Month', 'Day']},
    index_col='Date',
)

y = reserve.loc[:, 'Unemployment Rate'].dropna().to_period('M')
df = pd.DataFrame({
    'y': y,
    'y_lag_1': y.shift(1),
    'y_lag_2': y.shift(2),
})

df.head()
"""
Explanation: What is Serial Dependence?
In earlier lessons, we investigated properties of time series that were most easily modeled as time dependent properties, that is, with features we could derive directly from the time index. Some time series properties, however, can only be modeled as serially dependent properties, that is, using as features past values of the target series. The structure of these time series may not be apparent from a plot over time; plotted against past values, however, the structure becomes clear -- as we see in the figure below.
<figure style="padding: 1em;">
<img src="https://i.imgur.com/X0sSnwp.png" width=800, alt="">
<figcaption style="textalign: center; font-style: italic"><center>These two series have serial dependence, but not time dependence. Points on the right have coordinates <code>(value at time t-1, value at time t)</code>.
</center></figcaption>
</figure>

With trend and seasonality, we trained models to fit curves to plots like those on the left in the figure above -- the models were learning time dependence. The goal in this lesson is to train models to fit curves to plots like those on the right -- we want them to learn serial dependence.
Cycles
One especially common way for serial dependence to manifest is in cycles. Cycles are patterns of growth and decay in a time series associated with how the value in a series at one time depends on values at previous times, but not necessarily on the time step itself. Cyclic behavior is characteristic of systems that can affect themselves or whose reactions persist over time.
Economies, epidemics, animal populations, volcano eruptions, and similar natural phenomena often display cyclic behavior.
<figure style="padding: 1em;">
<img src="https://i.imgur.com/CC3TkAf.png" width=800, alt="">
<figcaption style="textalign: center; font-style: italic"><center>Four time series with cyclic behavior.
</center></figcaption>
</figure>

What distinguishes cyclic behavior from seasonality is that cycles are not necessarily time dependent, as seasons are. What happens in a cycle is less about the particular date of occurrence, and more about what has happened in the recent past. The (at least relative) independence from time means that cyclic behavior can be much more irregular than seasonality.
Lagged Series and Lag Plots
To investigate possible serial dependence (like cycles) in a time series, we need to create "lagged" copies of the series. Lagging a time series means to shift its values forward one or more time steps, or equivalently, to shift the times in its index backward one or more steps. In either case, the effect is that the observations in the lagged series will appear to have happened later in time.
This shows the monthly unemployment rate in the US (y) together with its first and second lagged series (y_lag_1 and y_lag_2, respectively). Notice how the values of the lagged series are shifted forward in time.
End of explanation """ #$HIDE_INPUT$ from pathlib import Path from warnings import simplefilter import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns from scipy.signal import periodogram from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split from statsmodels.graphics.tsaplots import plot_pacf simplefilter("ignore") # Set Matplotlib defaults plt.style.use("seaborn-whitegrid") plt.rc("figure", autolayout=True, figsize=(11, 4)) plt.rc( "axes", labelweight="bold", labelsize="large", titleweight="bold", titlesize=16, titlepad=10, ) plot_params = dict( color="0.75", style=".-", markeredgecolor="0.25", markerfacecolor="0.25", ) %config InlineBackend.figure_format = 'retina' def lagplot(x, y=None, lag=1, standardize=False, ax=None, **kwargs): from matplotlib.offsetbox import AnchoredText x_ = x.shift(lag) if standardize: x_ = (x_ - x_.mean()) / x_.std() if y is not None: y_ = (y - y.mean()) / y.std() if standardize else y else: y_ = x corr = y_.corr(x_) if ax is None: fig, ax = plt.subplots() scatter_kws = dict( alpha=0.75, s=3, ) line_kws = dict(color='C3', ) ax = sns.regplot(x=x_, y=y_, scatter_kws=scatter_kws, line_kws=line_kws, lowess=True, ax=ax, **kwargs) at = AnchoredText( f"{corr:.2f}", prop=dict(size="large"), frameon=True, loc="upper left", ) at.patch.set_boxstyle("square, pad=0.0") ax.add_artist(at) ax.set(title=f"Lag {lag}", xlabel=x_.name, ylabel=y_.name) return ax def plot_lags(x, y=None, lags=6, nrows=1, lagplot_kwargs={}, **kwargs): import math kwargs.setdefault('nrows', nrows) kwargs.setdefault('ncols', math.ceil(lags / nrows)) kwargs.setdefault('figsize', (kwargs['ncols'] * 2, nrows * 2 + 0.5)) fig, axs = plt.subplots(sharex=True, sharey=True, squeeze=False, **kwargs) for ax, k in zip(fig.get_axes(), range(kwargs['nrows'] * kwargs['ncols'])): if k + 1 <= lags: ax = lagplot(x, y, lag=k + 1, ax=ax, **lagplot_kwargs) ax.set_title(f"Lag {k + 1}", fontdict=dict(fontsize=14)) 
ax.set(xlabel="", ylabel="") else: ax.axis('off') plt.setp(axs[-1, :], xlabel=x.name) plt.setp(axs[:, 0], ylabel=y.name if y is not None else x.name) fig.tight_layout(w_pad=0.1, h_pad=0.1) return fig data_dir = Path("../input/ts-course-data") flu_trends = pd.read_csv(data_dir / "flu-trends.csv") flu_trends.set_index( pd.PeriodIndex(flu_trends.Week, freq="W"), inplace=True, ) flu_trends.drop("Week", axis=1, inplace=True) ax = flu_trends.FluVisits.plot(title='Flu Trends', **plot_params) _ = ax.set(ylabel="Office Visits") """ Explanation: By lagging a time series, we can make its past values appear contemporaneous with the values we are trying to predict (in the same row, in other words). This makes lagged series useful as features for modeling serial dependence. To forecast the US unemployment rate series, we could use y_lag_1 and y_lag_2 as features to predict the target y. This would forecast the future unemployment rate as a function of the unemployment rate in the prior two months. Lag plots A lag plot of a time series shows its values plotted against its lags. Serial dependence in a time series will often become apparent by looking at a lag plot. We can see from this lag plot of US Unemployment that there is a strong and apparently linear relationship between the current unemployment rate and past rates. <figure style="padding: 1em;"> <img src="https://i.imgur.com/Hvrboya.png" width=600, alt=""> <figcaption style="textalign: center; font-style: italic"><center>Lag plot of US Unemployment with autocorrelations indicated. </center></figcaption> </figure> The most commonly used measure of serial dependence is known as autocorrelation, which is simply the correlation a time series has with one of its lags. US Unemployment has an autocorrelation of 0.99 at lag 1, 0.98 at lag 2, and so on. Choosing lags When choosing lags to use as features, it generally won't be useful to include every lag with a large autocorrelation. 
In US Unemployment, for instance, the autocorrelation at lag 2 might result entirely from "decayed" information from lag 1 -- just correlation that's carried over from the previous step. If lag 2 doesn't contain anything new, there would be no reason to include it if we already have lag 1. The partial autocorrelation tells you the correlation of a lag accounting for all of the previous lags -- the amount of "new" correlation the lag contributes, so to speak. Plotting the partial autocorrelation can help you choose which lag features to use. In the figure below, lag 1 through lag 6 fall outside the intervals of "no correlation" (in blue), so we might choose lags 1 through lag 6 as features for US Unemployment. (Lag 11 is likely a false positive.) <figure style="padding: 1em;"> <img src="https://i.imgur.com/6nTe94E.png" width=600, alt=""> <figcaption style="textalign: center; font-style: italic"><center>Partial autocorrelations of US Unemployment through lag 12 with 95% confidence intervals of no correlation. </center></figcaption> </figure> A plot like that above is known as a correlogram. The correlogram is for lag features essentially what the periodogram is for Fourier features. Finally, we need to be mindful that autocorrelation and partial autocorrelation are measures of linear dependence. Because real-world time series often have substantial non-linear dependences, it's best to look at a lag plot (or use some more general measure of dependence, like mutual information) when choosing lag features. The Sunspots series has lags with non-linear dependence which we might overlook with autocorrelation. <figure style="padding: 1em;"> <img src="https://i.imgur.com/Q38UVOu.png" width=350, alt=""> <figcaption style="textalign: center; font-style: italic"><center>Lag plot of the <em>Sunspots</em> series. </center></figcaption> </figure> Non-linear relationships like these can either be transformed to be linear or else learned by an appropriate algorithm. 
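The autocorrelation statistic discussed here is easy to compute directly with pandas; a self-contained sketch on a toy series (synthetic data, not the unemployment series):

```python
import numpy as np
import pandas as pd

# Toy series with strong serial dependence: a random walk
rng = np.random.default_rng(0)
y = pd.Series(np.cumsum(rng.normal(size=200)))

# Autocorrelation at lag k is just the correlation of the series with its lag k
for k in range(1, 4):
    print(f"lag {k} autocorrelation: {y.autocorr(lag=k):.2f}")
```

For a random walk like this, the low-lag autocorrelations are close to 1, mirroring the strong serial dependence seen in US Unemployment.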
Example - Flu Trends The Flu Trends dataset contains records of doctor's visits for the flu for weeks between 2009 and 2016. Our goal is to forecast the number of flu cases for the coming weeks. We will take two approaches. In the first we'll forecast doctor's visits using lag features. Our second approach will be to forecast doctor's visits using lags of another set of time series: flu-related search terms as captured by Google Trends. End of explanation """ _ = plot_lags(flu_trends.FluVisits, lags=12, nrows=2) _ = plot_pacf(flu_trends.FluVisits, lags=12) """ Explanation: Our Flu Trends data shows irregular cycles instead of a regular seasonality: the peak tends to occur around the new year, but sometimes earlier or later, sometimes larger or smaller. Modeling these cycles with lag features will allow our forecaster to react dynamically to changing conditions instead of being constrained to exact dates and times as with seasonal features. Let's take a look at the lag and autocorrelation plots first: End of explanation """ def make_lags(ts, lags): return pd.concat( { f'y_lag_{i}': ts.shift(i) for i in range(1, lags + 1) }, axis=1) X = make_lags(flu_trends.FluVisits, lags=4) X = X.fillna(0.0) """ Explanation: The lag plots indicate that the relationship of FluVisits to its lags is mostly linear, while the partial autocorrelations suggest the dependence can be captured using lags 1, 2, 3, and 4. We can lag a time series in Pandas with the shift method. For this problem, we'll fill in the missing values the lagging creates with 0.0. 
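As a minimal illustration of what a lag-feature frame looks like, here is the same `make_lags` helper applied to a toy series (not the flu data):

```python
import pandas as pd

def make_lags(ts, lags):
    # One shifted copy of the series per lag, exactly as above
    return pd.concat(
        {f'y_lag_{i}': ts.shift(i) for i in range(1, lags + 1)},
        axis=1)

y = pd.Series([10, 20, 30, 40], name='y')
X = make_lags(y, lags=2).fillna(0.0)
# Row 3 sees the previous value (30.0) in y_lag_1 and 20.0 in y_lag_2;
# leading rows, where no lagged value exists yet, are filled with 0.0.
print(X)
```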
End of explanation
"""
#$HIDE_INPUT$
# Create target series and data splits
y = flu_trends.FluVisits.copy()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=60, shuffle=False)

# Fit and predict
model = LinearRegression()  # `fit_intercept=True` since we didn't use DeterministicProcess
model.fit(X_train, y_train)
y_pred = pd.Series(model.predict(X_train), index=y_train.index)
y_fore = pd.Series(model.predict(X_test), index=y_test.index)
#$HIDE_INPUT$
ax = y_train.plot(**plot_params)
ax = y_test.plot(**plot_params)
ax = y_pred.plot(ax=ax)
_ = y_fore.plot(ax=ax, color='C3')
"""
Explanation: In previous lessons, we were able to create forecasts for as many steps as we liked beyond the training data. When using lag features, however, we are limited to forecasting time steps whose lagged values are available. Using a lag 1 feature on Monday, we can't make a forecast for Wednesday, because the lag 1 value needed is Tuesday's, which hasn't happened yet.
We'll see strategies for handling this problem in Lesson 6. For this example, we'll just use values from a test set.
End of explanation
"""
#$HIDE_INPUT$
ax = y_test.plot(**plot_params)
_ = y_fore.plot(ax=ax, color='C3')
"""
Explanation: Looking just at the forecast values, we can see how our model needs a time step to react to sudden changes in the target series. This is a common limitation of models using only lags of the target series as features.
End of explanation
"""
#$HIDE_INPUT$
ax = flu_trends.plot(
    y=["FluCough", "FluVisits"],
    secondary_y="FluCough",
)
"""
Explanation: To improve the forecast we could try to find leading indicators, time series that could provide an "early warning" for changes in flu cases. For our second approach, then, we'll add to our training data the popularity of some flu-related search terms as measured by Google Trends.
Plotting the search phrase 'FluCough' against the target 'FluVisits' suggests such search terms could be useful as leading indicators: flu-related searches tend to become more popular in the weeks prior to office visits. End of explanation """ search_terms = ["FluContagious", "FluCough", "FluFever", "InfluenzaA", "TreatFlu", "IHaveTheFlu", "OverTheCounterFlu", "HowLongFlu"] # Create three lags for each search term X0 = make_lags(flu_trends[search_terms], lags=3) # Create four lags for the target, as before X1 = make_lags(flu_trends['FluVisits'], lags=4) # Combine to create the training data X = pd.concat([X0, X1], axis=1).fillna(0.0) """ Explanation: The dataset contains 129 such terms, but we'll just use a few. End of explanation """ #$HIDE_INPUT$ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=60, shuffle=False) model = LinearRegression() model.fit(X_train, y_train) y_pred = pd.Series(model.predict(X_train), index=y_train.index) y_fore = pd.Series(model.predict(X_test), index=y_test.index) ax = y_test.plot(**plot_params) _ = y_fore.plot(ax=ax, color='C3') """ Explanation: Our forecasts are a bit rougher, but our model appears to be better able to anticipate sudden increases in flu visits, suggesting that the several time series of search popularity were indeed effective as leading indicators. End of explanation """
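The "leading indicator" idea can be sketched on synthetic data: if one series simply runs two steps ahead of another, the lag-2 cross-correlation is nearly perfect (toy series, not the Google Trends data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
searches = pd.Series(rng.normal(size=100)).cumsum()            # the leading series
visits = searches.shift(2) + rng.normal(scale=0.1, size=100)   # trails it by two steps

# Past values of the leading series predict current values of the target
print(f"corr(visits, searches lag 2) = {visits.corr(searches.shift(2)):.2f}")
```

This is exactly why a lagged search-popularity feature can anticipate jumps that lags of the target alone would miss.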
dwhswenson/openpathsampling
examples/misc/tutorial_handle_nan.ipynb
mit
import openpathsampling as paths
import openpathsampling.engines.openmm as dyn_omm
import openpathsampling.engines as dyn
from simtk.openmm import app
import simtk.openmm as mm
import simtk.unit as unit
import mdtraj as md
import numpy as np
"""
Explanation: How to deal with errors in engines
Imports
End of explanation
"""
# this cell is all OpenMM specific
forcefield = app.ForceField('amber96.xml', 'tip3p.xml')
pdb = app.PDBFile("../resources/AD_initial_frame.pdb")
system = forcefield.createSystem(
    pdb.topology,
    nonbondedMethod=app.PME,
    nonbondedCutoff=1.0*unit.nanometers,
    constraints=app.HBonds,
    rigidWater=True,
    ewaldErrorTolerance=0.0005
)
hi_T_integrator = mm.LangevinIntegrator(
    500*unit.kelvin,
    1.0/unit.picoseconds,
    2.0*unit.femtoseconds)
hi_T_integrator.setConstraintTolerance(0.00001)
"""
Explanation: Setting up the engine
Now we set things up for the OpenMM simulation. We will need an openmm.System object and an openmm.Integrator object. To learn more about OpenMM, read the OpenMM documentation. The code we use here is based on output from the convenient web-based OpenMM builder.
End of explanation
"""
template = dyn_omm.snapshot_from_pdb("../resources/AD_initial_frame.pdb")
openmm_properties = {'OpenCLPrecision': 'mixed'}
engine_options = {
    'n_frames_max': 2000,
    'nsteps_per_frame': 10
}
engine = dyn_omm.Engine(
    template.topology,
    system,
    hi_T_integrator,
    openmm_properties=openmm_properties,
    options=engine_options
).named('500K')
engine.initialize('OpenCL')
"""
Explanation: The storage file will need a template snapshot. In addition, the OPS OpenMM-based Engine has a few properties and options that are set by these dictionaries.
End of explanation
"""
volA = paths.EmptyVolume()
volB = paths.EmptyVolume()
init_traj_ensemble = paths.AllOutXEnsemble(volA) | paths.AllOutXEnsemble(volB)
"""
Explanation: Defining states
We define deliberately non-existent (empty) states that we can never hit: good grounds for generating NaNs or overly long trajectories.
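The stopping rule these ensembles encode (keep extending the trajectory while every frame is outside both states) can be sketched in plain Python. The state definitions here are hypothetical one-dimensional checks, not the OPS implementation:

```python
# Toy version of the "all frames outside both states" rule:
# a trajectory may keep extending while no frame has entered either state.
def all_out(trajectory, in_state_a, in_state_b):
    return all(not in_state_a(x) and not in_state_b(x) for x in trajectory)

in_a = lambda x: x < -1.0   # hypothetical state A
in_b = lambda x: x > 1.0    # hypothetical state B

print(all_out([0.0, 0.5, -0.5], in_a, in_b))  # True: still outside both states
print(all_out([0.0, 1.5], in_a, in_b))        # False: a frame entered state B
```

Because the states above are empty, the real check can never fail, which is exactly what makes the run hit NaNs or the frame limit.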
End of explanation
"""
nan_causing_template = template.copy()
kinetics = template.kinetics.copy()
# this is crude but does the trick
kinetics.velocities = kinetics.velocities.copy()
kinetics.velocities[0] = \
    (np.zeros(template.velocities.shape[1]) + 1000000.) * \
    unit.nanometers / unit.picoseconds
nan_causing_template.kinetics = kinetics

# generate trajectory that includes frame in both states
try:
    trajectory = engine.generate(nan_causing_template, [init_traj_ensemble.can_append])
except dyn.EngineNaNError as e:
    print 'we got NaNs, oh no.'
    print 'last valid trajectory was of length %d' % len(e.last_trajectory)
except dyn.EngineMaxLengthError as e:
    print 'we ran into max length.'
    print 'last valid trajectory was of length %d' % len(e.last_trajectory)
"""
Explanation: Create a bad snapshot
End of explanation
"""
engine.options['n_frames_max'] = 10
engine.options['on_max_length'] = 'fail'

# generate trajectory that includes frame in both states
try:
    trajectory = engine.generate(template, [init_traj_ensemble.can_append])
except dyn.EngineNaNError as e:
    print 'we got NaNs, oh no.'
    print 'last valid trajectory was of length %d' % len(e.last_trajectory)
except dyn.EngineMaxLengthError as e:
    print 'we ran into max length.'
    print 'last valid trajectory was of length %d' % len(e.last_trajectory)
"""
Explanation: Now we will make a long trajectory
End of explanation
"""
mover = paths.ForwardShootMover(
    ensemble=init_traj_ensemble,
    selector=paths.UniformSelector(),
    engine=engine)
"""
Explanation: What if that happens inside of a simulation?
End of explanation
"""
End of explanation """ change = mover.move(init_sampleset) assert(isinstance(change, paths.movechange.RejectedMaxLengthSampleMoveChange)) assert(not change.accepted) change.samples[0].details.__dict__.get('stopping_reason') """ Explanation: Run the PathMover and check the change End of explanation """ init_sampleset = paths.SampleSet([paths.Sample( trajectory=paths.Trajectory([nan_causing_template] * 5), replica=0, ensemble = init_traj_ensemble )]) change = mover.move(init_sampleset) assert(isinstance(change, paths.movechange.RejectedNaNSampleMoveChange)) assert(not change.accepted) change.samples[0].details.__dict__.get('stopping_reason') """ Explanation: Let's try again what happens when nan is encountered End of explanation """ engine.options['on_nan'] = 'ignore' engine.options change = mover.move(init_sampleset) """ Explanation: Change the behaviour of the engine to ignore nans. This is really not advised, because not all platforms support this. CPU will always throw an nan error and End of explanation """
krondor/nlp-dsx-pot
Spark - Word2Vec Lab.ipynb
gpl-3.0
# The code was removed by DSX for sharing. """ Explanation: ACTION REQUIRED to get your credentials: Click on the empty cell below Then look for the data icon on the top right (drawing with zeros and ones) and click on it You should see the tweets.gz file, then click on "insert to code and choose the Spark SQLContext option from the drop down options" You should see a SparkSQL context code block inserted in the cell above with your credentials Replace the path name to path_1 (if it is not already) Run the below cell End of explanation """ t0 = time.time() #datapath = 'swift://'+credentials_1['container']+'.keystone/tweets.gz' tweets = sqlContext.read.json(path_1) tweets.registerTempTable("tweets") twr = tweets.count() print "Number of tweets read: ", twr print "Elapsed time (seconds): ", time.time() - t0 """ Explanation: Run the next cell to set up a connection to your object storage From the File IO mentu on the right, upload and import the tweets.gz dataset using the DSX UI. Import the dataset to the blank cell below as a SQLContext setup. Read the tweets as a Spark dataframe and count Type the following to load the dataframe and time the operation. t0 = time.time() tweets = sqlContext.read.json(path_1) tweets.registerTempTable("tweets") twr = tweets.count() print "Number of tweets read: ", twr print "Elapsed time (seconds): ", time.time() - t0 End of explanation """ tweets.printSchema() """ Explanation: Investigate Twitter Data Schema End of explanation """ filter = ['santa','claus','merry','christmas','eve', 'congrat','holiday','jingle','bell','silent', 'night','faith','hope','family','new', 'year','spirit','turkey','ham','food'] pd.DataFrame(filter,columns=['word']).head(5) """ Explanation: The keywords: christmas, santa, turkey, ... 
End of explanation """ # Construct SQL Command t0 = time.time() sqlString = "(" for substr in filter: sqlString = sqlString+"text LIKE '%"+substr+"%' OR " sqlString = sqlString+"text LIKE '%"+substr.upper()+"%' OR " sqlString=sqlString[:-4]+")" sqlFilterCommand = "SELECT lang, text FROM tweets WHERE (lang = 'en') AND "+sqlString # Query tweets in english that contain at least one of the keywords tweetsDF = sqlContext.sql(sqlFilterCommand).cache() twf = tweetsDF.count() print "Number of tweets after filtering: ", twf # last line add ~9 seconds (from ~0.72 seconds to ~9.42 seconds) print "Elapsed time (seconds): ", time.time() - t0 print "Percetage of Tweets Used: ", float(twf)/twr """ Explanation: Use Spark SQL to Filter Relevant Tweets: Relevant tweets: + In english and + Contain at least one of the keywords End of explanation """ tweetsRDD = tweetsDF.select('text').rdd def parseAndRemoveStopWords(text): t = text[0].replace(";"," ").replace(":"," ").replace('"',' ').replace('-',' ').replace("?"," ") t = t.replace(',',' ').replace('.',' ').replace('!','').replace("'"," ").replace("/"," ").replace("\\"," ") t = t.lower().split(" ") return t tw = tweetsRDD.map(parseAndRemoveStopWords) """ Explanation: Parse Tweets and Remove Stop Words End of explanation """ # map to df twDF = tw.map(lambda p: Row(text=p)).toDF() # default minCount = 5 (we may need to try something larger: 20-100 to reduce cost) # default vectorSize = 100 (we may want to keep default) t0 = time.time() word2Vec = Word2Vec(vectorSize=100, minCount=10, inputCol="text", outputCol="result") modelW2V = word2Vec.fit(twDF) wordVectorsDF = modelW2V.getVectors() print "Elapsed time (seconds) to train Word2Vec: ", time.time() - t0 vocabSize = wordVectorsDF.count() print "Vocabulary Size: ", vocabSize """ Explanation: Train Word2Vec Model Word2vec returns a dataframe with words and vectors Sometimes you need to run this block twice (strange reason that need to de-bug) End of explanation """ word = 'christmas' 
topN = 5
###
synonymsDF = modelW2V.findSynonyms(word, topN).toPandas()
synonymsDF[['word']].head(topN)
"""
Explanation: Find top N closest words
End of explanation
"""
word = 'dog'
topN = 5
###
synonymsDF = modelW2V.findSynonyms(word, topN).toPandas()
synonymsDF[['word']].head(topN)
"""
Explanation: As expected, results for unrelated terms are not accurate
End of explanation
"""
dfW2V = wordVectorsDF.select('vector').withColumnRenamed('vector','features')

numComponents = 3
pca = PCA(k = numComponents, inputCol = 'features', outputCol = 'pcaFeatures')
model = pca.fit(dfW2V)
dfComp = model.transform(dfW2V).select("pcaFeatures")
"""
Explanation: PCA on Top of Word2Vec using DF (spark.ml)
End of explanation
"""
def topNwordsToPlot(dfComp,wordVectorsDF,word,nwords):
    compX = np.asarray(dfComp.map(lambda vec: vec[0][0]).collect())
    compY = np.asarray(dfComp.map(lambda vec: vec[0][1]).collect())
    compZ = np.asarray(dfComp.map(lambda vec: vec[0][2]).collect())

    words = np.asarray(wordVectorsDF.select('word').toPandas().values.tolist())
    Feat = np.asarray(wordVectorsDF.select('vector').rdd.map(lambda v: np.asarray(v[0])).collect())

    Nw = words.shape[0]                      # total number of words
    ind_star = np.where(word == words)       # find index associated with 'word'
    wstar = Feat[ind_star,:][0][0]           # vector associated with 'word'
    nwstar = math.sqrt(np.dot(wstar,wstar))  # norm of the vector associated with 'word'

    dist = np.zeros(Nw)  # initialize vector of distances
    i = 0
    for w in Feat:  # loop to compute cosine distances between 'word' and the rest of the words
        den = math.sqrt(np.dot(w,w))*nwstar   # denominator of cosine distance
        dist[i] = abs( np.dot(wstar,w) )/den  # cosine distance to each word
        i = i + 1

    indexes = np.argpartition(dist,-(nwords+1))[-(nwords+1):]
    di = []
    for j in range(nwords+1):
        di.append(( words[indexes[j]], dist[indexes[j]], compX[indexes[j]], compY[indexes[j]], compZ[indexes[j]] ) )

    result=[]
    for elem in sorted(di,key=lambda x: x[1],reverse=True):
        result.append((elem[0][0], elem[2], elem[3], elem[4]))
return pd.DataFrame(result,columns=['word','X','Y','Z']) word = 'christmas' nwords = 200 ############# r = topNwordsToPlot(dfComp,wordVectorsDF,word,nwords) ############ fs=20 #fontsize w = r['word'] fig = plt.figure() ax = fig.add_subplot(111, projection='3d') height = 10 width = 10 fig.set_size_inches(width, height) ax.scatter(r['X'], r['Y'], r['Z'], color='red', s=100, marker='o', edgecolors='black') for i, txt in enumerate(w): if(i<2): ax.text(r['X'].ix[i],r['Y'].ix[i],r['Z'].ix[i], '%s' % (txt), size=30, zorder=1, color='k') ax.set_xlabel('1st. Component', fontsize=fs) ax.set_ylabel('2nd. Component', fontsize=fs) ax.set_zlabel('3rd. Component', fontsize=fs) ax.set_title('Visualization of Word2Vec via PCA', fontsize=fs) ax.grid(True) plt.show() """ Explanation: 3D Visualization End of explanation """ t0 = time.time() K = int(math.floor(math.sqrt(float(vocabSize)/2))) # K ~ sqrt(n/2) this is a rule of thumb for choosing K, # where n is the number of words in the model # feel free to choose K with a fancier algorithm dfW2V = wordVectorsDF.select('vector').withColumnRenamed('vector','features') kmeans = KMeans(k=K, seed=1) modelK = kmeans.fit(dfW2V) labelsDF = modelK.transform(dfW2V).select('prediction').withColumnRenamed('prediction','labels') print "Number of Clusters (K) Used: ", K print "Elapsed time (seconds) :", time.time() - t0 """ Explanation: K-means on top of Word2Vec using DF (spark.ml) End of explanation """
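The cosine-distance ranking inside `topNwordsToPlot` can be reduced to a few lines of NumPy; a sketch with toy two-dimensional vectors rather than real Word2Vec output:

```python
import numpy as np

def top_n_similar(word, words, vectors, n=2):
    # Rank all words by cosine similarity to `word`, excluding the word itself
    vecs = np.asarray(vectors, dtype=float)
    target = vecs[words.index(word)]
    sims = vecs @ target / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(target))
    order = np.argsort(-sims)
    return [words[i] for i in order if words[i] != word][:n]

words = ['christmas', 'santa', 'turkey', 'spark']
vectors = [[1.0, 0.1], [0.9, 0.2], [0.5, 0.8], [-1.0, 0.3]]
print(top_n_similar('christmas', words, vectors))  # ['santa', 'turkey']
```

Vectors pointing in nearly the same direction rank highest, which is what `findSynonyms` exploits on the trained embedding.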
sassoftware/sas-viya-machine-learning
Python-integration/The Data Science Pilot Action Set.ipynb
apache-2.0
import swat
import numpy as np
import pandas as pd

conn = swat.CAS('localhost', 5570, authinfo='~/.authinfo', caslib="CASUSER")
"""
Explanation: The Data Science Pilot Action Set
The dataSciencePilot action set consists of actions that implement a policy-based, configurable, and scalable approach to automating data science workflows. This action set can be used to automate an end-to-end workflow or to automate steps in the workflow such as data preparation, feature preprocessing, feature engineering, feature selection, and hyperparameter tuning. More information about this action set is available on its documentation page.
Table of Contents
Today we will set up the notebook and go through each of the seven actions.
Setting Up the Notebook
Explore Data
Explore Correlations
Analyze Missing Patterns
Detect Interactions
Screen Variables
Feature Machine
Generate Shadow Features
Select Features
Data Science Automated Machine Learning Pipeline
Conclusion
Setting Up the Notebook
First, we must import the Scripting Wrapper for Analytics Transfer (SWAT) package and use the package to connect to our Cloud Analytics Service (CAS).
End of explanation
"""
conn.builtins.loadactionset('dataSciencePilot')
conn.builtins.loadactionset('decisionTree')
"""
Explanation: Now we will load the dataSciencePilot action set and the decisionTree action set.
End of explanation
"""
tbl = 'hmeq'
hmeq = conn.read_csv("./data/hmeq.csv", casout=dict(name=tbl, replace=True))
hmeq.head()
"""
Explanation: Next, we must connect to our data source. We are using a data set for predicting home equity loan defaults.
End of explanation
"""
# Target Name
trt='BAD'
# Exploration Policy
expo = {'cardinality': {'lowMediumCutoff':40}}
# Screen Policy
scpo = {'missingPercentThreshold':35}
# Selection Policy
sepo = {'criterion': 'SU', 'topk':4}
# Transformation Policy
trpo = {'entropy': True, 'iqv': True, 'kurtosis': True, 'outlier': True}
"""
Explanation: Our target is “BAD”, meaning that it was a bad loan.
I am setting up a variable to hold our target information as well as our policy information. Each policy is applicable to specific actions, and I will provide more information about each policy later in the notebook.
End of explanation
"""
conn.dataSciencePilot.exploreData(
    table = tbl,
    target = trt,
    casOut = {'name': 'EXPLORE_DATA_OUT_PY', 'replace' : True},
    explorationPolicy = expo
)
conn.fetch(table = {'name': 'EXPLORE_DATA_OUT_PY'})
"""
Explanation: Explore Data
The exploreData action calculates various statistical measures for each column in your data set, such as Minimum, Maximum, Mean, Median, Mode, Number Missing, Standard Deviation, and more. The exploreData action also creates a hierarchical variable grouping with two levels. The first level groups variables according to their data type (interval, nominal, date, time, or datetime). The second level uses the following statistical metrics to group the interval and nominal data:
- Missing rate (interval and nominal).
- Cardinality (nominal).
- Entropy (nominal).
- Index of Qualitative Variation (IQV; interval and nominal).
- Skewness (interval).
- Kurtosis (interval).
- Outliers (interval).
- Coefficient of Variation (CV; interval).
This action returns a CAS table listing all the variables, the variable groupings, and the summary statistics. These groupings allow for a pipelined approach to data transformation and cleaning.
End of explanation
"""
conn.dataSciencePilot.exploreCorrelation(
    table = tbl,
    casOut = {'name':'CORR_PY', 'replace':True},
    target = trt
)
conn.fetch(table = {"name" : "CORR_PY"})
"""
Explanation: Explore Correlations
If a target is specified, the exploreCorrelation action performs a linear and nonlinear correlation analysis of the input variables and the target. If a target is not specified, the exploreCorrelation action performs a linear and nonlinear correlation analysis for all pairwise combinations of the input variables.
The correlation statistics available depend on the data type of each input variable in the pair. - Nominal-nominal correlation pairs have the following statistics available: Mutual Information (MI), Symmetric Uncertainty (SU), Information Value (IV; for binary target), Entropy, chi-square, G test (G2), and Cramer’s V. - Nominal-interval correlation pairs have the following statistics available: Mutual Information (MI), Symmetric Uncertainty (SU), Entropy, and F-test. - Interval-interval correlation pairs have the following statistics available: Mutual Information (MI), Symmetric Uncertainty (SU), Entropy, and Pearson correlation. This action returns a CAS table listing all the variable pairs and the correlation statistics. End of explanation """ conn.dataSciencePilot.analyzeMissingPatterns( table = tbl, target = trt, casOut = {'name':'MISS_PATTERN_PY', 'replace':True} ) conn.fetch(table = {'name': 'MISS_PATTERN_PY'}) """ Explanation: Analyze Missing Patterns If the target is specified, the analyzeMissingPatterns action performs a missing pattern analysis of the input variables and the target. If a target is not specified, the analyzeMissingPatterns action performs a missing pattern analysis for all pairwise combinations of the input variables. This analysis provides the correlation strength between missing patterns across variable pairs and dependencies of missingness in one variable and the values of the other variable. This action returns a CAS table listing all the missing variable pairs and the statistics around missingness. 
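To make the Mutual Information (MI) and Symmetric Uncertainty (SU) statistics used by these actions concrete, here is a toy computation for two nominal variables (plain Python, not the CAS implementation):

```python
import math
from collections import Counter

def entropy(xs):
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    # MI = H(X) + H(Y) - H(X, Y)
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def symmetric_uncertainty(xs, ys):
    # SU = 2 * MI / (H(X) + H(Y)): 0 for independent, 1 for identical variables
    return 2 * mutual_information(xs, ys) / (entropy(xs) + entropy(ys))

x = ['a', 'a', 'b', 'b']
print(symmetric_uncertainty(x, x))                     # 1.0: perfectly dependent
print(symmetric_uncertainty(x, ['a', 'b', 'a', 'b']))  # 0.0: independent
```

SU normalizes MI by the entropies of both variables, which is why it is convenient as a redundancy threshold.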
End of explanation
"""
# Transform data for binary format
conn.dataPreprocess.transform(
    table = hmeq,
    copyVars = ["BAD"],
    casOut = {"name": "hmeq_transform", "replace": True},
    requestPackages = [{"inputs":["JOB", "REASON"],
                        "catTrans":{"method": "label", "arguments":{"overrides":{"binMissing": True}}}},
                       {"inputs":["MORTDUE", "DEBTINC", "LOAN"],
                        "discretize": {"method": "quantile", "arguments":{"overrides":{"binMissing": True}}} }])
conn.fetch(table = {'name': 'hmeq_transform'})

conn.dataSciencePilot.detectInteractions(
    table ='hmeq_transform',
    target = trt,
    event = '1',
    sparse = True,
    inputs = ["_TR1_JOB", "_TR1_REASON", "_TR2_MORTDUE", "_TR2_DEBTINC", "_TR2_LOAN"],
    inputLevels = [7, 3, 6, 6, 6],
    casOut = {'name': 'DETECT_INT_OUT_PY', 'replace': True})
conn.fetch(table={'name':'DETECT_INT_OUT_PY'})
"""
Explanation: Detect Interactions
The detectInteractions action will assess the interactions between pairs of predictor variables and the correlation of each interaction with the response variable. Specifically, it will check whether the product of a pair of predictor variables correlates with the response variable. Since checking the correlation between the product of every predictor pair and the response variable can be computationally intensive, this action relies on the XYZ algorithm to search for these interactions efficiently in a high-dimensional space. The detectInteractions action requires that all predictor variables be in a binary format, but the response variable can be numeric, binary, or multi-class. Additionally, the detectInteractions action can handle data in a sparse format, such as when predictor variables are encoded using a one-hot-encoding scheme. In the example below, we will specify that our inputs are sparse. The output table shows the gamma value for each pair of variables.
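The core idea, checking whether the product of two binary predictors correlates with the response, can be illustrated without CAS (toy data; the real action uses the XYZ algorithm to make this search efficient):

```python
import numpy as np

rng = np.random.default_rng(3)
x1 = rng.integers(0, 2, size=1000)
x2 = rng.integers(0, 2, size=1000)
y = x1 * x2  # a response driven purely by the interaction

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(f"corr(x1, y)      = {corr(x1, y):.2f}")       # partial signal only
print(f"corr(x1 * x2, y) = {corr(x1 * x2, y):.2f}")  # the interaction: 1.00
```

Neither predictor alone fully explains the response, but their product does, which is the pattern detectInteractions is built to find.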
End of explanation """ conn.dataSciencePilot.screenVariables( table = tbl, target = trt, casOut = {'name': 'SCREEN_VARIABLES_OUT_PY', 'replace': True}, screenPolicy = {} ) conn.fetch(table = {'name': 'SCREEN_VARIABLES_OUT_PY'}) """ Explanation: Screen Variables The screenVariables action makes one of the following recommendations for each input variable: - Remove variable if there are significant data-quality issues. - Transform and keep variable if there are some data-quality issues. - Keep variable if there are no data quality issues. The screenVariables action considers the following features of the input variables to make its recommendation: - Missing rate exceeds threshold in screenPolicy (default is 90). - Constant value across input variable. - Mutual Information (MI) about the target is below the threshold in the screenPolicy (default is 0.05) - Entropy across levels. - Entropy reduction of target exceeds threshold in screenPolicy (default is 90); also referred to as leakage. - Symmetric Uncertainty (SU) of two variables exceed threshold in screenPolicy (default is 1); also referred to as redundancy. This action returns a CAS table listing all the input variables, the recommended action, and the reason for the recommended action. End of explanation """ conn.dataSciencePilot.featureMachine( table = tbl, target = trt, copyVars = trt, explorationPolicy = expo, screenPolicy = scpo, transformationPolicy = trpo, transformationOut = {"name" : "TRANSFORMATION_OUT", "replace" : True}, featureOut = {"name" : "FEATURE_OUT", "replace" : True}, casOut = {"name" : "CAS_OUT", "replace" : True}, saveState = {"name" : "ASTORE_OUT", "replace" : True} ) conn.fetch(table = {'name': 'TRANSFORMATION_OUT'}) conn.fetch(table = {'name': 'FEATURE_OUT'}) conn.fetch(table = {'name': 'CAS_OUT'}) """ Explanation: Feature Machine The featureMachine action creates an automated and parallel generation of features. 
The featureMachine action first explores the data and groups the input variables into categories with the same statistical profile, like the exploreData action. Next the featureMachine action screens variables to identify noise variables to exclude from further analysis, like the screenVariables action. Finally, the featureMachine action generates new features by using the available structured pipelines: - Missing indicator addition. - Mode imputation and rare value grouping. - Missing level and rare value grouping. - Median imputation. - Mode imputation and label encoding. - Missing level and label encoding. - Yeo-Johnson transformation and median imputation. - Box-Cox transformation. - Quantile binning with missing bins. - Regression tree binning. - Decision tree binning. - MDLP binning. - Target encoding. - Date, time, and datetime transformations. Depending on the parameters specified in the transformationPolicy, the featureMachine action can generate several features for each input variable. This action returns four CAS tables: the first lists information around the transformation pipelines, the second lists information around the transformed features, the third is the input table scored with the transformed features, and the fourth is an analytical store for scoring any additional input tables. 
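As one example of the pipelines listed above, "quantile binning with missing bins" can be sketched in NumPy (a toy approximation, not the CAS implementation):

```python
import numpy as np

def quantile_bin(values, n_bins=4):
    # Equal-frequency bins for observed values, plus a dedicated
    # bin (0) for missing values, mirroring "binMissing" above.
    arr = np.asarray(values, dtype=float)
    observed = arr[~np.isnan(arr)]
    edges = np.quantile(observed, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.full(arr.shape, 0)          # bin 0 reserved for missing
    mask = ~np.isnan(arr)
    bins[mask] = np.digitize(arr[mask], edges) + 1
    return bins

x = [1.0, 2.0, np.nan, 3.0, 4.0, 100.0, np.nan, 5.0]
print(quantile_bin(x))  # [1 1 0 2 3 4 0 4]
```

Keeping a separate missing bin lets downstream models treat missingness as information rather than discarding the row.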
End of explanation """ # Getting variable names and metadata from feature machine output fm = conn.CASTable('FEATURE_OUT').to_frame() inputs = fm['Name'].to_list() nom = fm.loc[fm['IsNominal'] == 1] nom = nom['Name'].to_list() # Generating Shadow Features conn.dataSciencePilot.generateShadowFeatures( table = 'CAS_OUT', nProbes = 2, inputs = inputs, nominals = nom, casout={"name" : "SHADOW_FEATURES_OUT", "replace" : True}, copyVars = trt ) conn.fetch(table = {"name" : "SHADOW_FEATURES_OUT"}) # Getting Feature Importance for Orginal Features feats = conn.decisionTree.forestTrain( table = 'CAS_OUT', inputs = inputs, target = trt, varImp = True) real_features = feats.DTreeVarImpInfo # Getting Feature Importance for Shadow Features inp = conn.CASTable('SHADOW_FEATURES_OUT').axes[1].to_list() shadow_feats = conn.decisionTree.forestTrain( table = 'SHADOW_FEATURES_OUT', inputs = inp, target = trt, varImp = True) sf = shadow_feats.DTreeVarImpInfo # Building dataframe for easy comparison feat_comp = pd.DataFrame(columns=['Variable', 'Real_Imp', 'SF_Imp1', 'SF_Imp2']) # Filling Variable Column of Data Frame from Feature feat_comp['Variable'] = real_features['Variable'] # Filling Importance Column of Data Frame from Feature feat_comp['Real_Imp'] = real_features['Importance'] # Finding each Feature's Shadow Feature for index, row in sf.iterrows(): temp_name = row['Variable'] temp_num = int(temp_name[-1:]) temp_name = temp_name[5:-2] temp_imp = row['Importance'] for ind, ro in feat_comp.iterrows(): if temp_name == ro['Variable']: if temp_num == 1: # Filling First Shadow Feature's Importance feat_comp.at[ind, 'SF_Imp1'] = temp_imp else: # Filling First Shadow Feature's Importance feat_comp.at[ind, 'SF_Imp2'] = temp_imp feat_comp.head() # Determining which features have an importance smaller than their shadow feature's importance to_drop = list() for ind, ro in feat_comp.iterrows(): if ro['Real_Imp'] <= ro['SF_Imp1'] or ro['Real_Imp'] <= ro['SF_Imp2']: 
        to_drop.append(ro['Variable'])

to_drop

# Dropping columns from CAS_OUT
CAS_OUT = conn.CASTable('CAS_OUT')
CAS_OUT = CAS_OUT.drop(to_drop, axis=1)
""" Explanation: Generate Shadow Features
The generateShadowFeatures action performs a scalable random permutation of input features to create shadow features. The shadow features are randomly selected from a matching distribution of each input feature. These shadow features can be used for all-relevant feature selection, which removes the inputs whose variable importance is lower than their shadow features' variable importance. The shadow features can also be used in a post-fit analysis using Permutation Feature Importance (PFI). By replacing each input with its shadow feature one by one and measuring the change in model performance, one can determine that feature's importance from the relative size of the change in the model's performance. In the example below, I will use the outputs of the feature machine for all-relevant feature selection. This involves getting the variable metadata from my feature machine table, generating my shadow features, finding the variable importance for my features and shadow features using a random forest, and comparing each variable's importance to that of its shadow features. In the end, I will only keep variables with a higher importance than their shadow features for the next phase.
End of explanation """
conn.dataSciencePilot.screenVariables(
    table='CAS_OUT',
    target=trt,
    screenPolicy=scpo,
    casout={"name" : "SCREEN_VARIABLES_OUT", "replace" : True}
)
conn.fetch(table = {"name" : "SCREEN_VARIABLES_OUT"})
""" Explanation: Select Features
The selectFeatures action performs a filter-based selection by the criterion selected in the selectionPolicy (default is the best ten input variables according to the Mutual Information statistic).
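The default Mutual Information criterion can be sketched in plain numpy for two discrete variables. This is a toy illustration of the statistic, not the CAS implementation:

```python
import numpy as np

def mutual_information(x, y):
    # MI computed from the empirical joint distribution of two discrete arrays
    xs, ys = np.unique(x), np.unique(y)
    mi = 0.0
    for xv in xs:
        for yv in ys:
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

a = np.array([0, 0, 1, 1, 0, 1, 0, 1])
b = np.array([0, 1, 0, 1, 0, 1, 1, 0])   # statistically unrelated to a
mi_self = mutual_information(a, a)       # equals the entropy of a
mi_cross = mutual_information(a, b)      # ~0 for independent variables
```

A variable shares maximal information with itself and none with an independent variable, which is why the statistic works as a relevance filter.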
The criteria available for selection include Chi-Square, Cramer's V, F-test, G2, Information Value, Mutual Information, Normalized Mutual Information statistic, Pearson correlation, and the Symmetric Uncertainty statistic. This action returns a CAS table listing the variables, their rank according to the selected criterion, and the value of the selected criterion.
End of explanation """
conn.dataSciencePilot.dsAutoMl(
    table = tbl,
    target = trt,
    explorationPolicy = expo,
    screenPolicy = scpo,
    selectionPolicy = sepo,
    transformationPolicy = trpo,
    modelTypes = ["decisionTree", "gradboost"],
    objective = "ASE",
    sampleSize = 10,
    topKPipelines = 10,
    kFolds = 5,
    transformationOut = {"name" : "TRANSFORMATION_OUT_PY", "replace" : True},
    featureOut = {"name" : "FEATURE_OUT_PY", "replace" : True},
    pipelineOut = {"name" : "PIPELINE_OUT_PY", "replace" : True},
    saveState = {"modelNamePrefix" : "ASTORE_OUT_PY", "replace" : True, "topK":1}
)
conn.fetch(table = {"name" : "TRANSFORMATION_OUT_PY"})
conn.fetch(table = {"name" : "FEATURE_OUT_PY"})
conn.fetch(table = {"name" : "PIPELINE_OUT_PY"})
""" Explanation: Data Science Automated Machine Learning Pipeline
The dsAutoMl action creates a policy-based, scalable, end-to-end automated machine learning pipeline for both regression and classification problems. The only input required from the user is the input data set and the target variable; optional parameters include the policy parameters for data exploration, variable screening, feature selection, and feature transformation. Overriding the default policy parameters allows a data scientist to configure the pipeline for their data science workflow. In addition, a data scientist may also select additional models to consider. By default, only a decision tree model is included in the pipeline, but neural networks, random forest models, and gradient boosting models are also available.
The dsAutoMl action first explores the data and groups the input variables into categories with the same statistical profile, like the exploreData action. Next the dsAutoMl action screens variables to identify noise variables to exclude from further analysis, like the screenVariables action. Then, the dsAutoMl action generates several new features for the input variables, like the featureMachine action. Once the new cleaned features are available, the dsAutoMl action selects features based on the selected criterion, like the selectFeatures action. From here, various pipelines are created using subsets of the selected features, chosen for each pipeline using a feature-representation algorithm. Then the chosen models are added to each pipeline and the hyperparameters for the selected models are optimized, like the modelComposer action of the Autotune action set. These hyperparameters are optimized for the selected objective parameter under cross-validation. By default, classification problems are optimized to have the smallest Misclassification Error Rate (MCE) and regression problems are optimized to have the smallest Average Square Error (ASE). Data scientists can then select their champion and challenger models from the pipelines. This action returns several CAS tables: the first lists information about the transformation pipelines, the second lists information about the transformed features, the third lists pipeline performance according to the objective parameter, and the last tables are analytical stores for creating the feature set and scoring with our model when new data is available.
End of explanation """
conn.close()
""" Explanation: Conclusion
The dataSciencePilot action set consists of actions that implement a policy-based, configurable, and scalable approach to automating data science workflows.
This action set can be used to automate an end-to-end workflow or to automate steps in the workflow such as data preparation, feature preprocessing, feature engineering, feature selection, and hyperparameter tuning. In this notebook, we demonstrated how to use each step of the dataSciencePilot action set using a Python interface.
End of explanation """
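As a closing aside, the permutation idea behind the shadow features used above can be mimicked in plain pandas. The column names and data below are made up; the real generateShadowFeatures action runs distributed in CAS.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy input table standing in for the CAS table (illustrative columns)
df = pd.DataFrame({"age": rng.integers(18, 80, size=50),
                   "income": rng.normal(50.0, 10.0, size=50)})

def make_shadow_features(frame, n_probes=2):
    # Each shadow feature is a random permutation of the original column,
    # so it keeps the marginal distribution but loses any link to the target
    shadows = {}
    for col in frame.columns:
        for probe in range(1, n_probes + 1):
            shadows["shad_%s_%d" % (col, probe)] = rng.permutation(frame[col].to_numpy())
    return pd.DataFrame(shadows, index=frame.index)

shadow_df = make_shadow_features(df)
```

Any model importance a shadow column earns is pure noise, which is what makes it a useful importance threshold for the matching real column.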
luofan18/deep-learning
tensorboard/Anna_KaRNNa_Summaries.ipynb
mit
import time from collections import namedtuple import numpy as np import tensorflow as tf """ Explanation: Anna KaRNNa In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book. This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN. <img src="assets/charseq.jpeg" width="500"> End of explanation """ with open('anna.txt', 'r') as f: text=f.read() vocab = set(text) vocab_to_int = {c: i for i, c in enumerate(vocab)} int_to_vocab = dict(enumerate(vocab)) chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32) text[:100] chars[:100] """ Explanation: First we'll load the text file and convert it into integers for our network to use. End of explanation """ def split_data(chars, batch_size, num_steps, split_frac=0.9): """ Split character data into training and validation sets, inputs and targets for each set. 
    Arguments
    ---------
    chars: character array
    batch_size: number of sequences per batch
    num_steps: number of sequence steps to keep in the input and pass to the network
    split_frac: fraction of batches to keep in the training set

    Returns train_x, train_y, val_x, val_y
    """
    slice_size = batch_size * num_steps
    n_batches = int(len(chars) / slice_size)

    # Drop the last few characters to make only full batches
    x = chars[: n_batches*slice_size]
    y = chars[1: n_batches*slice_size + 1]

    # Split the data into batch_size slices, then stack them into a 2D matrix
    x = np.stack(np.split(x, batch_size))
    y = np.stack(np.split(y, batch_size))

    # Now x and y are arrays with dimensions batch_size x n_batches*num_steps

    # Split into training and validation sets, keep the first split_frac batches for training
    split_idx = int(n_batches*split_frac)
    train_x, train_y = x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
    val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]

    return train_x, train_y, val_x, val_y

train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
""" Explanation: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text. Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches. The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
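The reshaping just described can be checked on a toy array — a small sketch of the same slicing, independent of the Anna Karenina data:

```python
import numpy as np

chars_toy = np.arange(21)                 # 21 fake "characters": 0, 1, ..., 20
bsz, steps = 2, 5                         # toy batch_size and num_steps
slice_size = bsz * steps
full = (len(chars_toy) // slice_size) * slice_size   # leftover characters dropped

x_toy = chars_toy[:full]                  # inputs: 0..19
y_toy = chars_toy[1:full + 1]             # targets: inputs shifted one character over

x_toy = np.stack(np.split(x_toy, bsz))    # shape (2, 10): one long row per batch slot
y_toy = np.stack(np.split(y_toy, bsz))
```

Every target is exactly the next character of its input, which is the whole point of the shift.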
End of explanation """ def get_batch(arrs, num_steps): batch_size, slice_size = arrs[0].shape n_batches = int(slice_size/num_steps) for b in range(n_batches): yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs] def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2, learning_rate=0.001, grad_clip=5, sampling=False): if sampling == True: batch_size, num_steps = 1, 1 tf.reset_default_graph() # Declare placeholders we'll feed into the graph with tf.name_scope('inputs'): inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs') x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot') with tf.name_scope('targets'): targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets') y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot') y_reshaped = tf.reshape(y_one_hot, [-1, num_classes]) keep_prob = tf.placeholder(tf.float32, name='keep_prob') # Build the RNN layers with tf.name_scope("RNN_cells"): lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers) with tf.name_scope("RNN_init_state"): initial_state = cell.zero_state(batch_size, tf.float32) # Run the data through the RNN layers with tf.name_scope("RNN_forward"): outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state) final_state = state # Reshape output so it's a bunch of rows, one row for each cell output with tf.name_scope('sequence_reshape'): seq_output = tf.concat(outputs, axis=1,name='seq_output') output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output') # Now connect the RNN outputs to a softmax layer and calculate the cost with tf.name_scope('logits'): softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1), name='softmax_w') softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b') logits = tf.matmul(output, softmax_w) + softmax_b 
tf.summary.histogram('softmax_w', softmax_w) tf.summary.histogram('softmax_b', softmax_b) with tf.name_scope('predictions'): preds = tf.nn.softmax(logits, name='predictions') tf.summary.histogram('predictions', preds) with tf.name_scope('cost'): loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss') cost = tf.reduce_mean(loss, name='cost') tf.summary.scalar('cost', cost) # Optimizer for training, using gradient clipping to control exploding gradients with tf.name_scope('train'): tvars = tf.trainable_variables() grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip) train_op = tf.train.AdamOptimizer(learning_rate) optimizer = train_op.apply_gradients(zip(grads, tvars)) merged = tf.summary.merge_all() # Export the nodes export_nodes = ['inputs', 'targets', 'initial_state', 'final_state', 'keep_prob', 'cost', 'preds', 'optimizer', 'merged'] Graph = namedtuple('Graph', export_nodes) local_dict = locals() graph = Graph(*[local_dict[each] for each in export_nodes]) return graph """ Explanation: I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch. End of explanation """ batch_size = 100 num_steps = 100 lstm_size = 512 num_layers = 2 learning_rate = 0.001 """ Explanation: Hyperparameters Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. 
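The sliding-window behaviour of get_batch can be exercised on a toy pair of arrays — a minimal standalone sketch of the same logic, without TensorFlow:

```python
import numpy as np

def get_batch_toy(arrs, num_steps):
    # Same sliding-window slicing as get_batch above
    batch_size, slice_size = arrs[0].shape
    n_batches = slice_size // num_steps
    for b in range(n_batches):
        yield [a[:, b * num_steps:(b + 1) * num_steps] for a in arrs]

x_toy = np.arange(20).reshape(2, 10)   # 2 sequences of 10 steps each
y_toy = x_toy + 1                      # targets: the next "character" at every step
batches = list(get_batch_toy([x_toy, y_toy], num_steps=5))
first_x, first_y = batches[0]
```

Each yielded window keeps the same row order, which is what lets the LSTM state carry over from one batch to the next.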
Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability. End of explanation """ !mkdir -p checkpoints/anna epochs = 10 save_every_n = 100 train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps) model = build_rnn(len(vocab), batch_size=batch_size, num_steps=num_steps, learning_rate=learning_rate, lstm_size=lstm_size, num_layers=num_layers) saver = tf.train.Saver(max_to_keep=100) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) train_writer = tf.summary.FileWriter('./logs/2/train', sess.graph) test_writer = tf.summary.FileWriter('./logs/2/test') # Use the line below to load a checkpoint and resume training #saver.restore(sess, 'checkpoints/anna20.ckpt') n_batches = int(train_x.shape[1]/num_steps) iterations = n_batches * epochs for e in range(epochs): # Train network new_state = sess.run(model.initial_state) loss = 0 for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1): iteration = e*n_batches + b start = time.time() feed = {model.inputs: x, model.targets: y, model.keep_prob: 0.5, model.initial_state: new_state} summary, batch_loss, new_state, _ = sess.run([model.merged, model.cost, model.final_state, model.optimizer], feed_dict=feed) loss += batch_loss end = time.time() print('Epoch {}/{} '.format(e+1, epochs), 'Iteration {}/{}'.format(iteration, iterations), 'Training loss: {:.4f}'.format(loss/b), '{:.4f} sec/batch'.format((end-start))) train_writer.add_summary(summary, iteration) if (iteration%save_every_n == 0) or (iteration == iterations): # Check performance, notice dropout has been set to 1 val_loss = [] new_state = sess.run(model.initial_state) for x, y in get_batch([val_x, val_y], num_steps): feed = {model.inputs: x, model.targets: y, model.keep_prob: 1., model.initial_state: 
new_state} summary, batch_loss, new_state = sess.run([model.merged, model.cost, model.final_state], feed_dict=feed) val_loss.append(batch_loss) test_writer.add_summary(summary, iteration) print('Validation loss:', np.mean(val_loss), 'Saving checkpoint!') #saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss))) tf.train.get_checkpoint_state('checkpoints/anna') """ Explanation: Training Time for training which is is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint. End of explanation """ def pick_top_n(preds, vocab_size, top_n=5): p = np.squeeze(preds) p[np.argsort(p)[:-top_n]] = 0 p = p / np.sum(p) c = np.random.choice(vocab_size, 1, p=p)[0] return c def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "): prime = "Far" samples = [c for c in prime] model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True) saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, checkpoint) new_state = sess.run(model.initial_state) for c in prime: x = np.zeros((1, 1)) x[0,0] = vocab_to_int[c] feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.preds, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) for i in range(n_samples): x[0,0] = c feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.preds, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) return ''.join(samples) checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt" samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = 
"checkpoints/anna/i200_l512_2.432.ckpt" samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt" samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt" samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) """ Explanation: Sampling Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that. The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. End of explanation """
rflamary/POT
docs/source/auto_examples/plot_otda_mapping.ipynb
mit
# Authors: Remi Flamary <remi.flamary@unice.fr> # Stanislas Chambon <stan.chambon@gmail.com> # # License: MIT License import numpy as np import matplotlib.pylab as pl import ot """ Explanation: OT mapping estimation for domain adaptation This example presents how to use MappingTransport to estimate at the same time both the coupling transport and approximate the transport map with either a linear or a kernelized mapping as introduced in [8]. [8] M. Perrot, N. Courty, R. Flamary, A. Habrard, "Mapping estimation for discrete optimal transport", Neural Information Processing Systems (NIPS), 2016. End of explanation """ n_source_samples = 100 n_target_samples = 100 theta = 2 * np.pi / 20 noise_level = 0.1 Xs, ys = ot.datasets.make_data_classif( 'gaussrot', n_source_samples, nz=noise_level) Xs_new, _ = ot.datasets.make_data_classif( 'gaussrot', n_source_samples, nz=noise_level) Xt, yt = ot.datasets.make_data_classif( 'gaussrot', n_target_samples, theta=theta, nz=noise_level) # one of the target mode changes its variance (no linear mapping) Xt[yt == 2] *= 3 Xt = Xt + 4 """ Explanation: Generate data End of explanation """ pl.figure(1, (10, 5)) pl.clf() pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples') pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples') pl.legend(loc=0) pl.title('Source and target distributions') """ Explanation: Plot data End of explanation """ # MappingTransport with linear kernel ot_mapping_linear = ot.da.MappingTransport( kernel="linear", mu=1e0, eta=1e-8, bias=True, max_iter=20, verbose=True) ot_mapping_linear.fit(Xs=Xs, Xt=Xt) # for original source samples, transform applies barycentric mapping transp_Xs_linear = ot_mapping_linear.transform(Xs=Xs) # for out of source samples, transform applies the linear mapping transp_Xs_linear_new = ot_mapping_linear.transform(Xs=Xs_new) # MappingTransport with gaussian kernel ot_mapping_gaussian = ot.da.MappingTransport( kernel="gaussian", eta=1e-5, mu=1e-1, bias=True, 
sigma=1, max_iter=10, verbose=True) ot_mapping_gaussian.fit(Xs=Xs, Xt=Xt) # for original source samples, transform applies barycentric mapping transp_Xs_gaussian = ot_mapping_gaussian.transform(Xs=Xs) # for out of source samples, transform applies the gaussian mapping transp_Xs_gaussian_new = ot_mapping_gaussian.transform(Xs=Xs_new) """ Explanation: Instantiate the different transport algorithms and fit them End of explanation """ pl.figure(2) pl.clf() pl.subplot(2, 2, 1) pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples', alpha=.2) pl.scatter(transp_Xs_linear[:, 0], transp_Xs_linear[:, 1], c=ys, marker='+', label='Mapped source samples') pl.title("Bary. mapping (linear)") pl.legend(loc=0) pl.subplot(2, 2, 2) pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples', alpha=.2) pl.scatter(transp_Xs_linear_new[:, 0], transp_Xs_linear_new[:, 1], c=ys, marker='+', label='Learned mapping') pl.title("Estim. mapping (linear)") pl.subplot(2, 2, 3) pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples', alpha=.2) pl.scatter(transp_Xs_gaussian[:, 0], transp_Xs_gaussian[:, 1], c=ys, marker='+', label='barycentric mapping') pl.title("Bary. mapping (kernel)") pl.subplot(2, 2, 4) pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples', alpha=.2) pl.scatter(transp_Xs_gaussian_new[:, 0], transp_Xs_gaussian_new[:, 1], c=ys, marker='+', label='Learned mapping') pl.title("Estim. mapping (kernel)") pl.tight_layout() pl.show() """ Explanation: Plot transported samples End of explanation """
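For in-sample points, transform applies a barycentric mapping: each source sample is sent to the coupling-weighted average of the target samples. A numpy-only sketch of that projection (this mirrors the idea, not the POT internals):

```python
import numpy as np

def barycentric_map(G, Xt):
    # Row-normalize the coupling, then average the target samples with those weights
    weights = G / G.sum(axis=1, keepdims=True)
    return weights @ Xt

Xt = np.array([[0.0, 0.0], [4.0, 4.0], [8.0, 0.0]])

G_id = np.eye(3) / 3.0              # coupling pairing source i exclusively with target i
mapped_id = barycentric_map(G_id, Xt)

G_uniform = np.ones((3, 3)) / 9.0   # coupling spreading each source's mass uniformly
mapped_uniform = barycentric_map(G_uniform, Xt)
```

A one-to-one coupling lands each source exactly on its target, while a uniform coupling collapses every source onto the target mean — the two extremes of how sharp the estimated transport plan is.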
iutzeler/Introduction-to-Python-for-Data-Sciences
4-4_Going_Further.ipynb
mit
import numpy as np import matplotlib.pyplot as plt %matplotlib inline #import seaborn as sns #sns.set() N = 100 #points to generate X = np.sort(10*np.random.rand(N, 1)**0.8 , axis=0) #abscisses y = 4 + 0.4*np.random.rand(N) - 1. / (X.ravel() + 0.5)**2 - 1. / (10.5 - X.ravel() ) # some complicated function plt.scatter(X,y) """ Explanation: <table> <tr> <td width=15%><img src="./img/UGA.png"></img></td> <td><center><h1>Introduction to Python for Data Sciences</h1></center></td> <td width=15%><a href="http://www.iutzeler.org" style="font-size: 16px; font-weight: bold">Franck Iutzeler</a> </td> </tr> </table> <br/><br/> <center><a style="font-size: 40pt; font-weight: bold">Chap. 4 - Scikit Learn </a></center> <br/><br/> 4. Going Further Creating new features/models We saw above how to transform categorical features. It is possible to modify them in a number of ways in order to create different model. For instance, from 1D point/value couples $(x,y)$, the linear regression fits a line. However, if we tranform $x$ into $(x^1,x^2,x^3)$, the same linear regression will fit a 3-degree polynomial. End of explanation """ from sklearn.linear_model import LinearRegression model = LinearRegression().fit(X, y) yfit = model.predict(X) plt.scatter(X, y) plt.plot(X, yfit,color='r',label="linear regression") plt.legend() """ Explanation: Linear regression will obviously be a bad fit. End of explanation """ from sklearn.preprocessing import PolynomialFeatures poly = PolynomialFeatures(degree=3, include_bias=False) # 3 degree without degree 0 (no constant) XPoly = poly.fit_transform(X) print(XPoly[:5,]) modelPoly = LinearRegression().fit(XPoly, y) yfitPoly = modelPoly.predict(XPoly) plt.scatter(X, y) plt.plot(X, yfit,color='r',label="linear regression") plt.plot(X, yfitPoly,color='k',label="Polynomial regression (deg 3)") plt.legend(loc = 'lower right') """ Explanation: Let us transform it into a 3-degree polynomial fit and perform the same linear regression. 
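Before applying it to our data, the effect of the degree-3 expansion is easy to inspect on a tiny input (a quick sanity check with scikit-learn's PolynomialFeatures):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

x_toy = np.array([[2.0], [3.0]])
expanded = PolynomialFeatures(degree=3, include_bias=False).fit_transform(x_toy)
# Each 1D sample x becomes the row (x, x**2, x**3)
```

The linear regression then fits one coefficient per power, which is exactly a cubic fit in the original variable.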
End of explanation """ from sklearn.pipeline import Pipeline from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression polyFeat = PolynomialFeatures(degree=3, include_bias=False) linReg = LinearRegression() polyReg = Pipeline([ ('polyFeat',polyFeat) , ('linReg',linReg) ]) polyReg.fit(X, y) # X original not XPoly yfitPolyNew = polyReg.predict(X) plt.scatter(X, y) plt.plot(X, yfit,color='r',label="linear regression") plt.plot(X, yfitPolyNew,color='k',label="Polynomial regression (deg 3)") plt.legend(loc = 'lower right') """ Explanation: Pipeline This 2-step fitting (Polynomial transform + Linear regression) calls for a replicated dataset which can be costly. That is why Scikit Learn implement a pipeline method that allows to perform multiple fit/transform sequentially. This pipeline can then be used as a model. End of explanation """ from sklearn.model_selection import cross_val_score cv_score = cross_val_score(polyReg, X, y, cv=5, scoring="neg_mean_absolute_error") # 5 groups cross validation print(cv_score) print("Mean score:" , np.mean(cv_score)) """ Explanation: Validation and Hyperparameters tuning We saw above (see the lasso example of the regression part) some basic examples of how to: * validate our model by splitting the dataset into training and testing set (using train_test_split) * tune hyperparameter by looking at the error for different values Scikit Learn actually provides some methods for that as well. Validation Scikit Learn offer a cross validation method that * split the dataset in several groups * for each of these group, fit the model on all but this group and computer the error on this one This way all the data has gone thought the learning and validating sets hence the cross validation. This is illustrated by the following figure from the Python Data Science Handbook by Jake VanderPlas. 
The score computer is computed either as the standard score of the estimator or can be precised with the <tt>scoring</tt> option (see the available metrics ). Warning All scorer objects follow the convention that higher return values are better than lower return values. Let us compute the cross validation for our polynomial fit problem. End of explanation """ polyReg.get_params() """ Explanation: Grid Search Now that scoring and cross validation is done, we can focus on investigating the best parameters of our polynomial model: * degree * presence or not of an intercept Let us see which are the parameters of our model (as this is a pipeline, this might be interesting to use the <tt>get_params</tt> function). End of explanation """ param_grid = [ {'polyFeat__degree': np.arange(1,12), 'linReg__fit_intercept': [True,False], 'polyFeat__include_bias': [True,False]}] from sklearn.model_selection import GridSearchCV grid = GridSearchCV(polyReg, param_grid, cv=5) grid.fit(X, y) """ Explanation: This enables to see the parameters corresponding to the quantities to fit: * degree: polyFeat__degree * presence or not of an intercept: linReg__fit_intercept and polyFeat__include_bias We can now construct a dictionary of values to test. End of explanation """ grid.best_params_ best_model = grid.best_estimator_.fit(X,y) overfit = polyReg.set_params(polyFeat__degree=15).fit(X,y) Xplot = np.linspace(-1,10.5,100).reshape(-1, 1) yBest = best_model.predict(Xplot) yOver = overfit.predict(Xplot) plt.scatter(X, y) plt.plot(Xplot, yBest , 'r' ,label="Best polynomial") plt.plot(Xplot, yOver , 'k' , label="overfitted (deg 15)") plt.legend(loc = 'lower right') plt.ylim([0,5]) plt.title("Best and overfitted models") """ Explanation: We can then get the best parameters and the corresponding model. 
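The step__parameter naming used in param_grid can be checked on a small fresh pipeline (the step names 'poly' and 'reg' here are illustrative):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

pipe = Pipeline([('poly', PolynomialFeatures()), ('reg', LinearRegression())])

# Nested parameters are exposed as <step name>__<parameter name>
params = pipe.get_params()
pipe.set_params(poly__degree=5)
new_degree = pipe.get_params()['poly__degree']
```

GridSearchCV relies on exactly this naming to reach inside the pipeline and set candidate values during the search.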
End of explanation """ f = open('./data/poems/poe-raven.txt', 'r') poe = f.read().replace('\n',' ').replace('.','').replace(',','').replace('-','') poe from sklearn.feature_extraction.text import CountVectorizer vec = CountVectorizer() X = vec.fit_transform([poe]) X """ Explanation: We notice that the grid search based on cross validation helped discarded overfitted models (as they were bad on validation sets). Text and Image Features We already saw an example of feature extraction from categorical data. However, for some particular categorical data, dedicated tools exist. For instance, text and images. Text feature extraction In Learning applications, words are usually more important than letters, so a basic way to extract features is to construct one feature per present word and count the occurences of this word. This is known as word count. An approach to mitigate very present words (like "the" , "a" , etc) is term frequency inverse document frequency (tf-idf) which weights the occurence count by how often it appears. End of explanation """ import pandas as pd pd.DataFrame(X.toarray(), columns=vec.get_feature_names()) """ Explanation: The vectorizer has registered the feature names and outed a sparse matrix that can be converted to a Dataframe. End of explanation """ from sklearn.feature_extraction.text import TfidfVectorizer vec = TfidfVectorizer() X = vec.fit_transform([poe]) pd.DataFrame(X.toarray(), columns=vec.get_feature_names()) """ Explanation: The tf-idf verctorizer works the same way. End of explanation """
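With a single document, every term appears in all (one) documents, so with scikit-learn's default smoothed idf each weight reduces to 1 and the tf-idf row is simply the L2-normalized count vector. A quick check, assuming the default TfidfVectorizer settings:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

doc = ["the raven and the door and the chamber"]

counts = CountVectorizer().fit_transform(doc).toarray().astype(float)
tfidf = TfidfVectorizer().fit_transform(doc).toarray()

# idf = ln((1+n)/(1+df)) + 1 = 1 when n = df = 1, leaving only L2 normalization
normalized_counts = counts / np.linalg.norm(counts)
```

The down-weighting of very common words only shows up once several documents are vectorized together, since idf then varies across terms.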
letsgoexploring/economicData
seigniorage/python/us_seigniorage_data.ipynb
mit
# Download monetary base and GDP deflator data m_base = fp.series('BOGMBASE') gdp_deflator = fp.series('A191RD3A086NBEA') # Convert monetary base data to annual frequency m_base = m_base.as_frequency('A') # Equalize data ranges for monetary base and GDP deflator data m_base, gdp_deflator = fp.window_equalize([m_base, gdp_deflator]) # GDP deflator base year base_year = gdp_deflator.units.split(' ')[-1].split('=')[0] # Construct real monetary base real_m_base = m_base.data/gdp_deflator.data*100/1000/1000 # Construct inflation data inflation = (gdp_deflator.data/gdp_deflator.data.shift(1))-1 """ Explanation: US Seigniorage Data Seigniorage is the real value of the change in the monetary base: \begin{align} \frac{M_{t}-M_{t-1}}{P_t} & = \Delta m_t + \frac{\pi_t}{1+\pi_t}m_{t-1} \end{align} where $\Delta m_t = m_t - m_{t-1}$ and $\pi_t = P_t/ P_{t-1} - 1$. The first term on the right-hand side is the revenue from increasing the real monetary base while the seond term is the inflation tax on the existing monetary base. This program downloads from FRED the following data for the US economy: monetary base (BOGMBASE) GDP deflator data (A191RD3A086NBEA) The program returns a CSV file with columns containing: average annual real monetary base (trillions of $) average annual GDP deflator inflation change in real monetary base from preceding year for 1960 only inflation tax from preceding year for 1960 only total seigniorage for 1960 only The purpose behind providing incomplete columns is to give students a starting point for completing the columns using Excel or a comparable tool. 
Download and manage data End of explanation """ real_m_base_col_name = 'real monetary base [billions of '+base_year+' dollars]' inflation_col_name = 'GDP deflator inflation' delta_m_base_col_name = 'change in real monetary base [billions of '+base_year+' dollars]' inflation_tax_col_name = 'inflation tax revenue [billions of '+base_year+' dollars]' total_col_name = 'total seigniorage [billions of '+base_year+' dollars]' # Put data into DataFrame df = pd.DataFrame({real_m_base_col_name:real_m_base,inflation_col_name:inflation}) # Add single value to second row of the change in real monetary base column df.loc[df.index[1],delta_m_base_col_name] = df.loc[df.index[1],real_m_base_col_name] - df.loc[df.index[0],real_m_base_col_name] # Add single value to second row of the inflation tax revenue column df.loc[df.index[1],inflation_tax_col_name] = 1/(1+1/df.loc[df.index[1],inflation_col_name]) * df.loc[df.index[0],real_m_base_col_name] # Add single value to second row of total seigniorage column df.loc[df.index[1],total_col_name] = df.loc[df.index[1],delta_m_base_col_name] + df.loc[df.index[1],inflation_tax_col_name] # Save to csv df.to_csv('../csv/us_seigniorage_data.csv',index=True) """ Explanation: Save data to csv End of explanation """
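The seigniorage decomposition above can be verified on made-up numbers — illustrative values, not the FRED series:

```python
# Toy nominal monetary base and price level for two consecutive years
M_prev, M_curr = 800.0, 900.0
P_prev, P_curr = 100.0, 105.0

m_prev, m_curr = M_prev / P_prev, M_curr / P_curr   # real monetary base
pi = P_curr / P_prev - 1                            # deflator inflation

total_seigniorage = (M_curr - M_prev) / P_curr      # left-hand side of the identity
delta_m = m_curr - m_prev                           # growth of the real base
inflation_tax = pi / (1 + pi) * m_prev              # tax on the existing real base
```

The two right-hand-side components sum exactly to the total, which is the identity students are asked to reproduce column by column in the spreadsheet.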
tarashor/vibrations
py/notebooks/draft/Corrugated geometries.ipynb
mit
from sympy import * from sympy.vector import CoordSys3D N = CoordSys3D('N') x1, x2, x3 = symbols("x_1 x_2 x_3") alpha1, alpha2, alpha3 = symbols("alpha_1 alpha_2 alpha3") R, L, ga, gv = symbols("R L g_a g_v") init_printing() """ Explanation: Corrugated Shells Init symbols for sympy End of explanation """ a1 = pi / 2 + (L / 2 - alpha1)/R x = (R + ga * cos(gv * a1)) * cos(a1) y = alpha2 z = (R + ga * cos(gv * a1)) * sin(a1) r = x*N.i + y*N.j + z*N.k """ Explanation: Corrugated cylindrical coordinates End of explanation """ r """ Explanation: Mid-surface coordinates is defined with the following vector $\vec{r}=\vec{r}(\alpha_1, \alpha_2)$ End of explanation """ r1 = trigsimp(r.diff(alpha1)) r2 = trigsimp(r.diff(alpha2)) # r1m=trigsimp(simplify((expand(r1.dot(r1))))) r1m=sympify("(R**2 + 2*R*g_a*cos(g_v*(L + pi*R - 2*alpha_1)/(2*R)) + g_a**2*(g_v**2*sin(g_v*(L + pi*R - 2*alpha_1)/(2*R))**2 + cos(g_v*(L + pi*R - 2*alpha_1)/(2*R))**2))/R**2") k1 = r1m**(S(3)/S(2)) k2 = trigsimp(r2.magnitude()**3) r1 = r1/k1 r2 = r2/k2 r1 r2 """ Explanation: Tangent to curve End of explanation """ n = r1.cross(r2) n """ Explanation: Normal to curve End of explanation """ dn = n.diff(alpha1) dn """ Explanation: Derivative of base vectors Let's find $\frac { d\vec{n} } { d\alpha_1}$ $\frac { d\vec{v} } { d\alpha_1}$ $\frac { d\vec{n} } { d\alpha_2}$ $\frac { d\vec{v} } { d\alpha_2}$ End of explanation """ dr1 = r1.diff(alpha1) dr1 """ Explanation: $ \frac { d\vec{n} } { d\alpha_1} = -\frac {1}{R} \vec{v} = -k \vec{v} $ End of explanation """ R_alpha=r+alpha3*n R_alpha R1=R_alpha.diff(alpha1) R2=R_alpha.diff(alpha2) R3=R_alpha.diff(alpha3) trigsimp(R1) R2 R3 """ Explanation: $ \frac { d\vec{v} } { d\alpha_1} = \frac {1}{R} \vec{n} = k \vec{n} $ Derivative of vectors $ \vec{u} = u_v \vec{v} + u_n\vec{n} $ $ \frac { d\vec{u} } { d\alpha_1} = \frac { d(u_v\vec{v}) } { d\alpha_1} + \frac { d(u_n\vec{n}) } { d\alpha_1} = \frac { du_n } { d\alpha_1} \vec{n} + u_n \frac { d\vec{n} } { d\alpha_1} + 
\frac { du_v } { d\alpha_1} \vec{v} + u_v \frac { d\vec{v} } { d\alpha_1} = \frac { du_n } { d\alpha_1} \vec{n} - u_n k \vec{v} + \frac { du_v } { d\alpha_1} \vec{v} + u_v k \vec{n}$ Then $ \frac { d\vec{u} } { d\alpha_1} = \left( \frac { du_v } { d\alpha_1} - u_n k \right) \vec{v} + \left( \frac { du_n } { d\alpha_1} + u_v k \right) \vec{n}$ $ \frac { d\vec{u} } { d\alpha_2} = \frac { d(u_n\vec{n}) } { d\alpha_2} + \frac { d(u_v\vec{v}) } { d\alpha_2} = \frac { du_n } { d\alpha_2} \vec{n} + u_n \frac { d\vec{n} } { d\alpha_2} + \frac { du_v } { d\alpha_2} \vec{v} + u_v \frac { d\vec{v} } { d\alpha_2} = \frac { du_n } { d\alpha_2} \vec{n} + \frac { du_v } { d\alpha_2} \vec{v} $ Base Vectors $\vec{R}_1, \vec{R}_2, \vec{R}_3$ End of explanation """ eps=trigsimp(R1.dot(R2.cross(R3))) R_1=simplify(trigsimp(R2.cross(R3)/eps)) R_2=simplify(trigsimp(R3.cross(R1)/eps)) R_3=simplify(trigsimp(R1.cross(R2)/eps)) R_1 R_2 R_3 """ Explanation: Base Vectors $\vec{R}^1, \vec{R}^2, \vec{R}^3$ End of explanation """ dx1da1=R1.dot(N.i) dx1da2=R2.dot(N.i) dx1da3=R3.dot(N.i) dx2da1=R1.dot(N.j) dx2da2=R2.dot(N.j) dx2da3=R3.dot(N.j) dx3da1=R1.dot(N.k) dx3da2=R2.dot(N.k) dx3da3=R3.dot(N.k) A=Matrix([[dx1da1, dx1da2, dx1da3], [dx2da1, dx2da2, dx2da3], [dx3da1, dx3da2, dx3da3]]) simplify(A) A_inv = trigsimp(A**-1) simplify(trigsimp(A_inv)) trigsimp(A.det()) """ Explanation: Jacobi matrix: $ A = \left( \begin{array}{ccc} \frac{\partial x_1}{\partial \alpha_1} & \frac{\partial x_1}{\partial \alpha_2} & \frac{\partial x_1}{\partial \alpha_3} \ \frac{\partial x_2}{\partial \alpha_1} & \frac{\partial x_2}{\partial \alpha_2} & \frac{\partial x_3}{\partial \alpha_3} \ \frac{\partial x_3}{\partial \alpha_1} & \frac{\partial x_3}{\partial \alpha_2} & \frac{\partial x_3}{\partial \alpha_3} \ \end{array} \right)$ $ \left[ \begin{array}{ccc} \vec{R}_1 & \vec{R}_2 & \vec{R}_3 \end{array} \right] = \left[ \begin{array}{ccc} \vec{e}_1 & \vec{e}_2 & \vec{e}_3 \end{array} \right] \cdot \left( 
\begin{array}{ccc} \frac{\partial x_1}{\partial \alpha_1} & \frac{\partial x_1}{\partial \alpha_2} & \frac{\partial x_1}{\partial \alpha_3} \ \frac{\partial x_2}{\partial \alpha_1} & \frac{\partial x_2}{\partial \alpha_2} & \frac{\partial x_3}{\partial \alpha_3} \ \frac{\partial x_3}{\partial \alpha_1} & \frac{\partial x_3}{\partial \alpha_2} & \frac{\partial x_3}{\partial \alpha_3} \ \end{array} \right) = \left[ \begin{array}{ccc} \vec{e}_1 & \vec{e}_2 & \vec{e}_3 \end{array} \right] \cdot A$ $ \left[ \begin{array}{ccc} \vec{e}_1 & \vec{e}_2 & \vec{e}_3 \end{array} \right] =\left[ \begin{array}{ccc} \vec{R}_1 & \vec{R}_2 & \vec{R}_3 \end{array} \right] \cdot A^{-1}$ End of explanation """ g11=R1.dot(R1) g12=R1.dot(R2) g13=R1.dot(R3) g21=R2.dot(R1) g22=R2.dot(R2) g23=R2.dot(R3) g31=R3.dot(R1) g32=R3.dot(R2) g33=R3.dot(R3) G=Matrix([[g11, g12, g13],[g21, g22, g23], [g31, g32, g33]]) G=trigsimp(G) G """ Explanation: Metric tensor ${\displaystyle \hat{G}=\sum_{i,j} g^{ij}\vec{R}_i\vec{R}_j}$ End of explanation """ g_11=R_1.dot(R_1) g_12=R_1.dot(R_2) g_13=R_1.dot(R_3) g_21=R_2.dot(R_1) g_22=R_2.dot(R_2) g_23=R_2.dot(R_3) g_31=R_3.dot(R_1) g_32=R_3.dot(R_2) g_33=R_3.dot(R_3) G_con=Matrix([[g_11, g_12, g_13],[g_21, g_22, g_23], [g_31, g_32, g_33]]) G_con=trigsimp(G_con) G_con G_inv = G**-1 G_inv """ Explanation: ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$ End of explanation """ dR1dalpha1 = trigsimp(R1.diff(alpha1)) dR1dalpha1 """ Explanation: Derivatives of vectors Derivative of base vectors End of explanation """ dR1dalpha2 = trigsimp(R1.diff(alpha2)) dR1dalpha2 dR1dalpha3 = trigsimp(R1.diff(alpha3)) dR1dalpha3 """ Explanation: $ \frac { d\vec{R_1} } { d\alpha_1} = -\frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) \vec{R_3} $ End of explanation """ dR2dalpha1 = trigsimp(R2.diff(alpha1)) dR2dalpha1 dR2dalpha2 = trigsimp(R2.diff(alpha2)) dR2dalpha2 dR2dalpha3 = trigsimp(R2.diff(alpha3)) dR2dalpha3 dR3dalpha1 = trigsimp(R3.diff(alpha1)) dR3dalpha1 """ 
Explanation: $ \frac { d\vec{R_1} } { d\alpha_3} = \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} $ End of explanation """ dR3dalpha2 = trigsimp(R3.diff(alpha2)) dR3dalpha2 dR3dalpha3 = trigsimp(R3.diff(alpha3)) dR3dalpha3 """ Explanation: $ \frac { d\vec{R_3} } { d\alpha_1} = \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} $ End of explanation """ u1=Function('u^1') u2=Function('u^2') u3=Function('u^3') q=Function('q') # q(alpha3) = 1+alpha3/R K = Symbol('K') # K = 1/R u1_nabla1 = u1(alpha1, alpha2, alpha3).diff(alpha1) + u3(alpha1, alpha2, alpha3) * K / q(alpha3) u2_nabla1 = u2(alpha1, alpha2, alpha3).diff(alpha1) u3_nabla1 = u3(alpha1, alpha2, alpha3).diff(alpha1) - u1(alpha1, alpha2, alpha3) * K * q(alpha3) u1_nabla2 = u1(alpha1, alpha2, alpha3).diff(alpha2) u2_nabla2 = u2(alpha1, alpha2, alpha3).diff(alpha2) u3_nabla2 = u3(alpha1, alpha2, alpha3).diff(alpha2) u1_nabla3 = u1(alpha1, alpha2, alpha3).diff(alpha3) + u1(alpha1, alpha2, alpha3) * K / q(alpha3) u2_nabla3 = u2(alpha1, alpha2, alpha3).diff(alpha3) u3_nabla3 = u3(alpha1, alpha2, alpha3).diff(alpha3) # $\nabla_2 u^2 = \frac { \partial u^2 } { \partial \alpha_2}$ grad_u = Matrix([[u1_nabla1, u2_nabla1, u3_nabla1],[u1_nabla2, u2_nabla2, u3_nabla2], [u1_nabla3, u2_nabla3, u3_nabla3]]) grad_u G_s = Matrix([[q(alpha3)**2, 0, 0],[0, 1, 0], [0, 0, 1]]) grad_u_down=grad_u*G_s expand(simplify(grad_u_down)) """ Explanation: $ \frac { d\vec{R_3} } { d\alpha_3} = \vec{0} $ Derivative of vectors $ \vec{u} = u^1 \vec{R_1} + u^2\vec{R_2} + u^3\vec{R_3} $ $ \frac { d\vec{u} } { d\alpha_1} = \frac { d(u^1\vec{R_1}) } { d\alpha_1} + \frac { d(u^2\vec{R_2}) } { d\alpha_1}+ \frac { d(u^3\vec{R_3}) } { d\alpha_1} = \frac { du^1 } { d\alpha_1} \vec{R_1} + u^1 \frac { d\vec{R_1} } { d\alpha_1} + \frac { du^2 } { d\alpha_1} \vec{R_2} + u^2 \frac { d\vec{R_2} } { d\alpha_1} + \frac { du^3 } { d\alpha_1} \vec{R_3} + u^3 \frac { d\vec{R_3} } { d\alpha_1} = \frac { du^1 } { d\alpha_1} \vec{R_1} - u^1 \frac {1}{R} 
\left( 1+\frac{\alpha_3}{R} \right) \vec{R_3} + \frac { du^2 } { d\alpha_1} \vec{R_2}+ \frac { du^3 } { d\alpha_1} \vec{R_3} + u^3 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1}$ Then $ \frac { d\vec{u} } { d\alpha_1} = \left( \frac { du^1 } { d\alpha_1} + u^3 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \right) \vec{R_1} + \frac { du^2 } { d\alpha_1} \vec{R_2} + \left( \frac { du^3 } { d\alpha_1} - u^1 \frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) \right) \vec{R_3}$ $ \frac { d\vec{u} } { d\alpha_2} = \frac { d(u^1\vec{R_1}) } { d\alpha_2} + \frac { d(u^2\vec{R_2}) } { d\alpha_2}+ \frac { d(u^3\vec{R_3}) } { d\alpha_2} = \frac { du^1 } { d\alpha_2} \vec{R_1} + \frac { du^2 } { d\alpha_2} \vec{R_2} + \frac { du^3 } { d\alpha_2} \vec{R_3} $ Then $ \frac { d\vec{u} } { d\alpha_2} = \frac { du^1 } { d\alpha_2} \vec{R_1} + \frac { du^2 } { d\alpha_2} \vec{R_2} + \frac { du^3 } { d\alpha_2} \vec{R_3} $ $ \frac { d\vec{u} } { d\alpha_3} = \frac { d(u^1\vec{R_1}) } { d\alpha_3} + \frac { d(u^2\vec{R_2}) } { d\alpha_3}+ \frac { d(u^3\vec{R_3}) } { d\alpha_3} = \frac { du^1 } { d\alpha_3} \vec{R_1} + u^1 \frac { d\vec{R_1} } { d\alpha_3} + \frac { du^2 } { d\alpha_3} \vec{R_2} + u^2 \frac { d\vec{R_2} } { d\alpha_3} + \frac { du^3 } { d\alpha_3} \vec{R_3} + u^3 \frac { d\vec{R_3} } { d\alpha_3} = \frac { du^1 } { d\alpha_3} \vec{R_1} + u^1 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} + \frac { du^2 } { d\alpha_3} \vec{R_2}+ \frac { du^3 } { d\alpha_3} \vec{R_3} $ Then $ \frac { d\vec{u} } { d\alpha_3} = \left( \frac { du^1 } { d\alpha_3} + u^1 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \right) \vec{R_1} + \frac { du^2 } { d\alpha_3} \vec{R_2}+ \frac { du^3 } { d\alpha_3} \vec{R_3}$ Gradient of vector $\nabla_1 u^1 = \frac { \partial u^1 } { \partial \alpha_1} + u^3 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}}$ $\nabla_1 u^2 = \frac { \partial u^2 } { \partial \alpha_1} $ $\nabla_1 u^3 = \frac { \partial u^3 } { \partial \alpha_1} - u^1 \frac {1}{R} 
\left( 1+\frac{\alpha_3}{R} \right) $ $\nabla_2 u^1 = \frac { \partial u^1 } { \partial \alpha_2}$ $\nabla_2 u^2 = \frac { \partial u^2 } { \partial \alpha_2}$ $\nabla_2 u^3 = \frac { \partial u^3 } { \partial \alpha_2}$ $\nabla_3 u^1 = \frac { \partial u^1 } { \partial \alpha_3} + u^1 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}}$ $\nabla_3 u^2 = \frac { \partial u^2 } { \partial \alpha_3} $ $\nabla_3 u^3 = \frac { \partial u^3 } { \partial \alpha_3}$ $ \nabla \vec{u} = \left( \begin{array}{ccc} \nabla_1 u^1 & \nabla_1 u^2 & \nabla_1 u^3 \ \nabla_2 u^1 & \nabla_2 u^2 & \nabla_2 u^3 \ \nabla_3 u^1 & \nabla_3 u^2 & \nabla_3 u^3 \ \end{array} \right)$ End of explanation """ B = zeros(9, 12) B[0,1] = (1+alpha3/R)**2 B[0,8] = (1+alpha3/R)/R B[1,2] = (1+alpha3/R)**2 B[2,0] = (1+alpha3/R)/R B[2,3] = (1+alpha3/R)**2 B[3,5] = S(1) B[4,6] = S(1) B[5,7] = S(1) B[6,9] = S(1) B[6,0] = -(1+alpha3/R)/R B[7,10] = S(1) B[8,11] = S(1) B """ Explanation: $ \left( \begin{array}{c} \nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \ \nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \ \nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \ \end{array} \right) = \left( \begin{array}{c} \left( 1+\frac{\alpha_2}{R} \right)^2 \frac { \partial u^1 } { \partial \alpha_1} + u^3 \frac {\left( 1+\frac{\alpha_3}{R} \right)}{R} \ \left( 1+\frac{\alpha_2}{R} \right)^2 \frac { \partial u^1 } { \partial \alpha_2} \ \left( 1+\frac{\alpha_3}{R} \right)^2 \frac { \partial u^1 } { \partial \alpha_3} + u^1 \frac {\left( 1+\frac{\alpha_3}{R} \right)}{R} \ \frac { \partial u^2 } { \partial \alpha_1} \ \frac { \partial u^2 } { \partial \alpha_2} \ \frac { \partial u^2 } { \partial \alpha_3} \ \frac { \partial u^3 } { \partial \alpha_1} - u^1 \frac {\left( 1+\frac{\alpha_3}{R} \right)}{R} \ \frac { \partial u^3 } { \partial \alpha_2} \ \frac { \partial u^3 } { \partial \alpha_3} \ \end{array} \right) $ $ \left( \begin{array}{c} \nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \ \nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \ \nabla_1 u_3 \ 
\nabla_2 u_3 \ \nabla_3 u_3 \ \end{array} \right) = B \cdot \left( \begin{array}{c} u^1 \ \frac { \partial u^1 } { \partial \alpha_1} \ \frac { \partial u^1 } { \partial \alpha_2} \ \frac { \partial u^1 } { \partial \alpha_3} \ u^2 \ \frac { \partial u^2 } { \partial \alpha_1} \ \frac { \partial u^2 } { \partial \alpha_2} \ \frac { \partial u^2 } { \partial \alpha_3} \ u^3 \ \frac { \partial u^3 } { \partial \alpha_1} \ \frac { \partial u^3 } { \partial \alpha_2} \ \frac { \partial u^3 } { \partial \alpha_3} \ \end{array} \right) $ End of explanation """ E=zeros(6,9) E[0,0]=1 E[1,4]=1 E[2,8]=1 E[3,1]=1 E[3,3]=1 E[4,2]=1 E[4,6]=1 E[5,5]=1 E[5,7]=1 E Q=E*B Q=simplify(Q) Q """ Explanation: Deformations tensor End of explanation """ T=zeros(12,6) T[0,0]=1 T[0,2]=alpha3 T[1,1]=1 T[1,3]=alpha3 T[3,2]=1 T[8,4]=1 T[9,5]=1 T Q=E*B*T Q=simplify(Q) Q """ Explanation: Tymoshenko theory $u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $ $u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $ $u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $ $ \left( \begin{array}{c} u^1 \ \frac { \partial u^1 } { \partial \alpha_1} \ \frac { \partial u^1 } { \partial \alpha_2} \ \frac { \partial u^1 } { \partial \alpha_3} \ u^2 \ \frac { \partial u^2 } { \partial \alpha_1} \ \frac { \partial u^2 } { \partial \alpha_2} \ \frac { \partial u^2 } { \partial \alpha_3} \ u^3 \ \frac { \partial u^3 } { \partial \alpha_1} \ \frac { \partial u^3 } { \partial \alpha_2} \ \frac { \partial u^3 } { \partial \alpha_3} \ \end{array} \right) = T \cdot \left( \begin{array}{c} u \ \frac { \partial u } { \partial \alpha_1} \ \gamma \ \frac { \partial \gamma } { \partial \alpha_1} \ w \ \frac { \partial w } { \partial \alpha_1} \ \end{array} \right) $ End of explanation """ from sympy import MutableDenseNDimArray C_x = MutableDenseNDimArray.zeros(3, 3, 3, 3) for i in range(3): for j in range(3): for k in range(3): for l in 
range(3): elem_index = 'C^{{{}{}{}{}}}'.format(i+1, j+1, k+1, l+1) el = Symbol(elem_index) C_x[i,j,k,l] = el C_x """ Explanation: Elasticity tensor(stiffness tensor) General form End of explanation """ C_x_symmetry = MutableDenseNDimArray.zeros(3, 3, 3, 3) def getCIndecies(index): if (index == 0): return 0, 0 elif (index == 1): return 1, 1 elif (index == 2): return 2, 2 elif (index == 3): return 0, 1 elif (index == 4): return 0, 2 elif (index == 5): return 1, 2 for s in range(6): for t in range(s, 6): i,j = getCIndecies(s) k,l = getCIndecies(t) elem_index = 'C^{{{}{}{}{}}}'.format(i+1, j+1, k+1, l+1) el = Symbol(elem_index) C_x_symmetry[i,j,k,l] = el C_x_symmetry[i,j,l,k] = el C_x_symmetry[j,i,k,l] = el C_x_symmetry[j,i,l,k] = el C_x_symmetry[k,l,i,j] = el C_x_symmetry[k,l,j,i] = el C_x_symmetry[l,k,i,j] = el C_x_symmetry[l,k,j,i] = el C_x_symmetry """ Explanation: Include symmetry End of explanation """ C_isotropic = MutableDenseNDimArray.zeros(3, 3, 3, 3) C_isotropic_matrix = zeros(6) mu = Symbol('mu') la = Symbol('lambda') for s in range(6): for t in range(s, 6): if (s < 3 and t < 3): if(t != s): C_isotropic_matrix[s,t] = la C_isotropic_matrix[t,s] = la else: C_isotropic_matrix[s,t] = 2*mu+la C_isotropic_matrix[t,s] = 2*mu+la elif (s == t): C_isotropic_matrix[s,t] = mu C_isotropic_matrix[t,s] = mu for s in range(6): for t in range(s, 6): i,j = getCIndecies(s) k,l = getCIndecies(t) el = C_isotropic_matrix[s, t] C_isotropic[i,j,k,l] = el C_isotropic[i,j,l,k] = el C_isotropic[j,i,k,l] = el C_isotropic[j,i,l,k] = el C_isotropic[k,l,i,j] = el C_isotropic[k,l,j,i] = el C_isotropic[l,k,i,j] = el C_isotropic[l,k,j,i] = el C_isotropic def getCalpha(C, A, q, p, s, t): res = S(0) for i in range(3): for j in range(3): for k in range(3): for l in range(3): res += C[i,j,k,l]*A[q,i]*A[p,j]*A[s,k]*A[t,l] return simplify(trigsimp(res)) C_isotropic_alpha = MutableDenseNDimArray.zeros(3, 3, 3, 3) for i in range(3): for j in range(3): for k in range(3): for l in range(3): c = 
getCalpha(C_isotropic, A_inv, i, j, k, l) C_isotropic_alpha[i,j,k,l] = c C_isotropic_alpha[0,0,0,0] C_isotropic_matrix_alpha = zeros(6) for s in range(6): for t in range(6): i,j = getCIndecies(s) k,l = getCIndecies(t) C_isotropic_matrix_alpha[s,t] = C_isotropic_alpha[i,j,k,l] C_isotropic_matrix_alpha """ Explanation: Isotropic material End of explanation """ C_orthotropic = MutableDenseNDimArray.zeros(3, 3, 3, 3) C_orthotropic_matrix = zeros(6) for s in range(6): for t in range(s, 6): elem_index = 'C^{{{}{}}}'.format(s+1, t+1) el = Symbol(elem_index) if ((s < 3 and t < 3) or t == s): C_orthotropic_matrix[s,t] = el C_orthotropic_matrix[t,s] = el for s in range(6): for t in range(s, 6): i,j = getCIndecies(s) k,l = getCIndecies(t) el = C_orthotropic_matrix[s, t] C_orthotropic[i,j,k,l] = el C_orthotropic[i,j,l,k] = el C_orthotropic[j,i,k,l] = el C_orthotropic[j,i,l,k] = el C_orthotropic[k,l,i,j] = el C_orthotropic[k,l,j,i] = el C_orthotropic[l,k,i,j] = el C_orthotropic[l,k,j,i] = el C_orthotropic """ Explanation: Orthotropic material End of explanation """ def getCalpha(C, A, q, p, s, t): res = S(0) for i in range(3): for j in range(3): for k in range(3): for l in range(3): res += C[i,j,k,l]*A[q,i]*A[p,j]*A[s,k]*A[t,l] return simplify(trigsimp(res)) C_orthotropic_alpha = MutableDenseNDimArray.zeros(3, 3, 3, 3) for i in range(3): for j in range(3): for k in range(3): for l in range(3): c = getCalpha(C_orthotropic, A_inv, i, j, k, l) C_orthotropic_alpha[i,j,k,l] = c C_orthotropic_alpha[0,0,0,0] C_orthotropic_matrix_alpha = zeros(6) for s in range(6): for t in range(6): i,j = getCIndecies(s) k,l = getCIndecies(t) C_orthotropic_matrix_alpha[s,t] = C_orthotropic_alpha[i,j,k,l] C_orthotropic_matrix_alpha """ Explanation: Orthotropic material in shell coordinates End of explanation """ P=eye(12,12) P[0,0]=1/(1+alpha3/R) P[1,1]=1/(1+alpha3/R) P[2,2]=1/(1+alpha3/R) P[3,0]=-1/(R*(1+alpha3/R)**2) P[3,3]=1/(1+alpha3/R) P Def=simplify(E*B*P) Def rows, cols = Def.shape 
D_p=zeros(rows, cols) q = 1+alpha3/R for i in range(rows): ratio = 1 if (i==0): ratio = q*q elif (i==3 or i == 4): ratio = q for j in range(cols): D_p[i,j] = Def[i,j] / ratio D_p = simplify(D_p) D_p """ Explanation: Physical coordinates $u^1=\frac{u_{[1]}}{1+\frac{\alpha_3}{R}}$ $\frac{\partial u^1} {\partial \alpha_3}=\frac{1}{1+\frac{\alpha_3}{R}} \frac{\partial u_{[1]}} {\partial \alpha_3} + u_{[1]} \frac{\partial} {\partial \alpha_3} \left( \frac{1}{1+\frac{\alpha_3}{R}} \right) = =\frac{1}{1+\frac{\alpha_3}{R}} \frac{\partial u_{[1]}} {\partial \alpha_3} - u_{[1]} \frac{1}{R \left( 1+\frac{\alpha_3}{R} \right)^2} $ End of explanation """ C_isotropic_alpha_p = MutableDenseNDimArray.zeros(3, 3, 3, 3) q=1+alpha3/R for i in range(3): for j in range(3): for k in range(3): for l in range(3): fact = 1 if (i==0): fact = fact*q if (j==0): fact = fact*q if (k==0): fact = fact*q if (l==0): fact = fact*q C_isotropic_alpha_p[i,j,k,l] = simplify(C_isotropic_alpha[i,j,k,l]*fact) C_isotropic_matrix_alpha_p = zeros(6) for s in range(6): for t in range(6): i,j = getCIndecies(s) k,l = getCIndecies(t) C_isotropic_matrix_alpha_p[s,t] = C_isotropic_alpha_p[i,j,k,l] C_isotropic_matrix_alpha_p C_orthotropic_alpha_p = MutableDenseNDimArray.zeros(3, 3, 3, 3) q=1+alpha3/R for i in range(3): for j in range(3): for k in range(3): for l in range(3): fact = 1 if (i==0): fact = fact*q if (j==0): fact = fact*q if (k==0): fact = fact*q if (l==0): fact = fact*q C_orthotropic_alpha_p[i,j,k,l] = simplify(C_orthotropic_alpha[i,j,k,l]*fact) C_orthotropic_matrix_alpha_p = zeros(6) for s in range(6): for t in range(6): i,j = getCIndecies(s) k,l = getCIndecies(t) C_orthotropic_matrix_alpha_p[s,t] = C_orthotropic_alpha_p[i,j,k,l] C_orthotropic_matrix_alpha_p """ Explanation: Stiffness tensor End of explanation """ D_p_T = D_p*T K = Symbol('K') D_p_T = D_p_T.subs(R, 1/K) simplify(D_p_T) """ Explanation: Tymoshenko End of explanation """ theta, h1, h2=symbols('theta h_1 h_2') 
square_geom=theta/2*(R+h2)**2-theta/2*(R+h1)**2 expand(simplify(square_geom)) """ Explanation: Square of segment $A=\frac {\theta}{2} \left( R + h_2 \right)^2-\frac {\theta}{2} \left( R + h_1 \right)^2$ End of explanation """ square_int=integrate(integrate(1+alpha3/R, (alpha3, h1, h2)), (alpha1, 0, theta*R)) expand(simplify(square_int)) """ Explanation: ${\displaystyle A=\int_{0}^{L}\int_{h_1}^{h_2} \left( 1+\frac{\alpha_3}{R} \right) d \alpha_1 d \alpha_3}, L=R \theta$ End of explanation """ simplify(D_p.T*C_isotropic_matrix_alpha_p*D_p) """ Explanation: Virtual work Isotropic material physical coordinates End of explanation """ W = simplify(D_p_T.T*C_isotropic_matrix_alpha_p*D_p_T*(1+alpha3*K)**2) W h=Symbol('h') E=Symbol('E') v=Symbol('nu') W_a3 = integrate(W, (alpha3, -h/2, h/2)) W_a3 = simplify(W_a3) W_a3.subs(la, E*v/((1+v)*(1-2*v))).subs(mu, E/((1+v)*2)) A_M = zeros(3) A_M[0,0] = E*h/(1-v**2) A_M[1,1] = 5*E*h/(12*(1+v)) A_M[2,2] = E*h**3/(12*(1-v**2)) Q_M = zeros(3,6) Q_M[0,1] = 1 Q_M[0,4] = K Q_M[1,0] = -K Q_M[1,2] = 1 Q_M[1,5] = 1 Q_M[2,3] = 1 W_M=Q_M.T*A_M*Q_M W_M """ Explanation: Isotropic material physical coordinates - Tymoshenko End of explanation """
mne-tools/mne-tools.github.io
0.13/_downloads/plot_sensor_connectivity.ipynb
bsd-3-clause
# Author: Martin Luessi <mluessi@nmr.mgh.harvard.edu>
#
# License: BSD (3-clause)

import numpy as np
from scipy import linalg

import mne
from mne import io
from mne.connectivity import spectral_connectivity
from mne.datasets import sample

print(__doc__)
"""
Explanation: Compute all-to-all connectivity in sensor space
Computes the Phase Lag Index (PLI) between all gradiometers and shows the
connectivity in 3D using the helmet geometry. The left visual stimulation data
are used, which produce strong connectivity in the right occipital sensors.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'

# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)

# Add a bad channel
raw.info['bads'] += ['MEG 2443']

# Pick MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,
                       exclude='bads')

# Create epochs for the visual condition
event_id, tmin, tmax = 3, -0.2, 0.5
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
                    baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6))

# Compute connectivity for band containing the evoked response.
# We exclude the baseline period
fmin, fmax = 3., 9.
sfreq = raw.info['sfreq']  # the sampling frequency
tmin = 0.0  # exclude the baseline period
con, freqs, times, n_epochs, n_tapers = spectral_connectivity(
    epochs, method='pli', mode='multitaper', sfreq=sfreq, fmin=fmin, fmax=fmax,
    faverage=True, tmin=tmin, mt_adaptive=False, n_jobs=1)

# the epochs contain an EOG channel, which we remove now
ch_names = epochs.ch_names
idx = [ch_names.index(name) for name in ch_names if name.startswith('MEG')]
con = con[idx][:, idx]

# con is a 3D array where the last dimension is size one since we averaged
# over frequencies in a single band.
Here we make it 2D con = con[:, :, 0] # Now, visualize the connectivity in 3D from mayavi import mlab # noqa mlab.figure(size=(600, 600), bgcolor=(0.5, 0.5, 0.5)) # Plot the sensor locations sens_loc = [raw.info['chs'][picks[i]]['loc'][:3] for i in idx] sens_loc = np.array(sens_loc) pts = mlab.points3d(sens_loc[:, 0], sens_loc[:, 1], sens_loc[:, 2], color=(1, 1, 1), opacity=1, scale_factor=0.005) # Get the strongest connections n_con = 20 # show up to 20 connections min_dist = 0.05 # exclude sensors that are less than 5cm apart threshold = np.sort(con, axis=None)[-n_con] ii, jj = np.where(con >= threshold) # Remove close connections con_nodes = list() con_val = list() for i, j in zip(ii, jj): if linalg.norm(sens_loc[i] - sens_loc[j]) > min_dist: con_nodes.append((i, j)) con_val.append(con[i, j]) con_val = np.array(con_val) # Show the connections as tubes between sensors vmax = np.max(con_val) vmin = np.min(con_val) for val, nodes in zip(con_val, con_nodes): x1, y1, z1 = sens_loc[nodes[0]] x2, y2, z2 = sens_loc[nodes[1]] points = mlab.plot3d([x1, x2], [y1, y2], [z1, z2], [val, val], vmin=vmin, vmax=vmax, tube_radius=0.001, colormap='RdBu') points.module_manager.scalar_lut_manager.reverse_lut = True mlab.scalarbar(title='Phase Lag Index (PLI)', nb_labels=4) # Add the sensor names for the connections shown nodes_shown = list(set([n[0] for n in con_nodes] + [n[1] for n in con_nodes])) for node in nodes_shown: x, y, z = sens_loc[node] mlab.text3d(x, y, z, raw.ch_names[picks[node]], scale=0.005, color=(0, 0, 0)) view = (-88.7, 40.8, 0.76, np.array([-3.9e-4, -8.5e-3, -1e-2])) mlab.view(*view) """ Explanation: Set parameters End of explanation """
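The quantity requested from spectral_connectivity above — the Phase Lag Index — is the absolute mean sign of the instantaneous phase difference between two signals. A self-contained sketch with plain NumPy/SciPy (synthetic 6 Hz signals with a made-up pi/4 lag, not MNE code):

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
t = np.arange(0, 5, 1 / 250.)  # 5 s at 250 Hz

# Two noisy 6 Hz oscillations; y consistently lags x by pi/4
x = np.sin(2 * np.pi * 6 * t) + 0.1 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 6 * t - np.pi / 4) + 0.1 * rng.standard_normal(t.size)

# PLI = |mean(sign(sin(phase_x - phase_y)))|; instantaneous phases
# come from the analytic signal (Hilbert transform)
dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
pli = np.abs(np.mean(np.sign(np.sin(dphi))))
print(pli)  # close to 1 for a consistent non-zero lag
```

Because phase differences of exactly 0 or pi contribute nothing, PLI is insensitive to zero-lag coupling of the kind produced by volume conduction, which is why it is a popular choice for sensor-space connectivity.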
jrg365/gpytorch
examples/02_Scalable_Exact_GPs/Simple_MultiGPU_GP_Regression.ipynb
mit
import math import torch import gpytorch import sys from matplotlib import pyplot as plt sys.path.append('../') from LBFGS import FullBatchLBFGS %matplotlib inline %load_ext autoreload %autoreload 2 """ Explanation: Exact GP Regression with Multiple GPUs and Kernel Partitioning Introduction In this notebook, we'll demonstrate training exact GPs on large datasets using two key features from the paper https://arxiv.org/abs/1903.08114: The ability to distribute the kernel matrix across multiple GPUs, for additional parallelism. Partitioning the kernel into chunks computed on-the-fly when performing each MVM to reduce memory usage. We'll be using the protein dataset, which has about 37000 training examples. The techniques in this notebook can be applied to much larger datasets, but the training time required will depend on the computational resources you have available: both the number of GPUs available and the amount of memory they have (which determines the partition size) have a significant effect on training time. End of explanation """ import os import urllib.request from scipy.io import loadmat dataset = 'protein' if not os.path.isfile(f'../{dataset}.mat'): print(f'Downloading \'{dataset}\' UCI dataset...') urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1nRb8e7qooozXkNghC5eQS0JeywSXGX2S', f'../{dataset}.mat') data = torch.Tensor(loadmat(f'../{dataset}.mat')['data']) """ Explanation: We will be using the Protein UCI dataset which contains a total of 40000+ data points. The next cell will download this dataset from a Google drive and load it. 
End of explanation """ import numpy as np N = data.shape[0] # make train/val/test n_train = int(0.8 * N) train_x, train_y = data[:n_train, :-1], data[:n_train, -1] test_x, test_y = data[n_train:, :-1], data[n_train:, -1] # normalize features mean = train_x.mean(dim=-2, keepdim=True) std = train_x.std(dim=-2, keepdim=True) + 1e-6 # prevent dividing by 0 train_x = (train_x - mean) / std test_x = (test_x - mean) / std # normalize labels mean, std = train_y.mean(),train_y.std() train_y = (train_y - mean) / std test_y = (test_y - mean) / std # make continguous train_x, train_y = train_x.contiguous(), train_y.contiguous() test_x, test_y = test_x.contiguous(), test_y.contiguous() output_device = torch.device('cuda:0') train_x, train_y = train_x.to(output_device), train_y.to(output_device) test_x, test_y = test_x.to(output_device), test_y.to(output_device) """ Explanation: Normalization and train/test Splits In the next cell, we split the data 80/20 as train and test, and do some basic z-score feature normalization. End of explanation """ n_devices = torch.cuda.device_count() print('Planning to run on {} GPUs.'.format(n_devices)) """ Explanation: How many GPUs do you want to use? In the next cell, specify the n_devices variable to be the number of GPUs you'd like to use. By default, we will use all devices available to us. 
End of explanation """ class ExactGPModel(gpytorch.models.ExactGP): def __init__(self, train_x, train_y, likelihood, n_devices): super(ExactGPModel, self).__init__(train_x, train_y, likelihood) self.mean_module = gpytorch.means.ConstantMean() base_covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel()) self.covar_module = gpytorch.kernels.MultiDeviceKernel( base_covar_module, device_ids=range(n_devices), output_device=output_device ) def forward(self, x): mean_x = self.mean_module(x) covar_x = self.covar_module(x) return gpytorch.distributions.MultivariateNormal(mean_x, covar_x) def train(train_x, train_y, n_devices, output_device, checkpoint_size, preconditioner_size, n_training_iter, ): likelihood = gpytorch.likelihoods.GaussianLikelihood().to(output_device) model = ExactGPModel(train_x, train_y, likelihood, n_devices).to(output_device) model.train() likelihood.train() optimizer = FullBatchLBFGS(model.parameters(), lr=0.1) # "Loss" for GPs - the marginal log likelihood mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model) with gpytorch.beta_features.checkpoint_kernel(checkpoint_size), \ gpytorch.settings.max_preconditioner_size(preconditioner_size): def closure(): optimizer.zero_grad() output = model(train_x) loss = -mll(output, train_y) return loss loss = closure() loss.backward() for i in range(n_training_iter): options = {'closure': closure, 'current_loss': loss, 'max_ls': 10} loss, _, _, _, _, _, _, fail = optimizer.step(options) print('Iter %d/%d - Loss: %.3f lengthscale: %.3f noise: %.3f' % ( i + 1, n_training_iter, loss.item(), model.covar_module.module.base_kernel.lengthscale.item(), model.likelihood.noise.item() )) if fail: print('Convergence reached!') break print(f"Finished training on {train_x.size(0)} data points using {n_devices} GPUs.") return model, likelihood """ Explanation: In the next cell we define our GP model and training code. 
For this notebook, the only thing different from the Simple GP tutorials is the use of the MultiDeviceKernel to wrap the base covariance module. This allows for the use of multiple GPUs behind the scenes. End of explanation """ import gc def find_best_gpu_setting(train_x, train_y, n_devices, output_device, preconditioner_size ): N = train_x.size(0) # Find the optimum partition/checkpoint size by decreasing in powers of 2 # Start with no partitioning (size = 0) settings = [0] + [int(n) for n in np.ceil(N / 2**np.arange(1, np.floor(np.log2(N))))] for checkpoint_size in settings: print('Number of devices: {} -- Kernel partition size: {}'.format(n_devices, checkpoint_size)) try: # Try a full forward and backward pass with this setting to check memory usage _, _ = train(train_x, train_y, n_devices=n_devices, output_device=output_device, checkpoint_size=checkpoint_size, preconditioner_size=preconditioner_size, n_training_iter=1) # when successful, break out of for-loop and jump to finally block break except RuntimeError as e: print('RuntimeError: {}'.format(e)) except AttributeError as e: print('AttributeError: {}'.format(e)) finally: # handle CUDA OOM error gc.collect() torch.cuda.empty_cache() return checkpoint_size # Set a large enough preconditioner size to reduce the number of CG iterations run preconditioner_size = 100 checkpoint_size = find_best_gpu_setting(train_x, train_y, n_devices=n_devices, output_device=output_device, preconditioner_size=preconditioner_size) """ Explanation: Automatically determining GPU Settings In the next cell, we automatically determine a roughly reasonable partition or checkpoint size that will allow us to train without using more memory than the GPUs available have. Not that this is a coarse estimate of the largest possible checkpoint size, and may be off by as much as a factor of 2. A smarter search here could make up to a 2x performance improvement. 
End of explanation """ model, likelihood = train(train_x, train_y, n_devices=n_devices, output_device=output_device, checkpoint_size=10000, preconditioner_size=100, n_training_iter=20) """ Explanation: Training End of explanation """ # Get into evaluation (predictive posterior) mode model.eval() likelihood.eval() with torch.no_grad(), gpytorch.settings.fast_pred_var(), gpytorch.beta_features.checkpoint_kernel(1000): # Make predictions on a small number of test points to get the test time caches computed latent_pred = model(test_x[:2, :]) del latent_pred # We don't care about these predictions, we really just want the caches. """ Explanation: Computing test time caches End of explanation """ with torch.no_grad(), gpytorch.settings.fast_pred_var(), gpytorch.beta_features.checkpoint_kernel(1000): %time latent_pred = model(test_x) test_rmse = torch.sqrt(torch.mean(torch.pow(latent_pred.mean - test_y, 2))) print(f"Test RMSE: {test_rmse.item()}") """ Explanation: Testing: Computing predictions End of explanation """
alasdairtran/mclearn
projects/alasdair/notebooks/04_learning_curves.ipynb
bsd-3-clause
# remove after testing %load_ext autoreload %autoreload 2 import pickle import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from itertools import product from sklearn.svm import SVC, LinearSVC from sklearn.ensemble import RandomForestClassifier from sklearn.cross_validation import StratifiedShuffleSplit from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LogisticRegression from sklearn.preprocessing import StandardScaler from sklearn.utils import shuffle from mclearn.classifier import (train_classifier, grid_search_logistic, grid_search_svm_poly, grid_search_svm_rbf, learning_curve) from mclearn.preprocessing import balanced_train_test_split from mclearn.tools import results_exist, load_results from mclearn.viz import plot_learning_curve, plot_average_learning_curve, plot_validation_accuracy_heatmap %matplotlib inline sns.set_style('ticks') fig_dir = '../thesis/figures/' target_col = 'class' sdss_features = ['psfMag_r_w14', 'psf_u_g_w14', 'psf_g_r_w14', 'psf_r_i_w14', 'psf_i_z_w14', 'petroMag_r_w14', 'petro_u_g_w14', 'petro_g_r_w14', 'petro_r_i_w14', 'petro_i_z_w14', 'petroRad_r'] vstatlas_features = ['rmagC', 'umg', 'gmr', 'rmi', 'imz', 'rmw1', 'w1m2'] sdss = pd.read_hdf('../data/sdss.h5', 'sdss') vstatlas = pd.read_hdf('../data/vstatlas.h5', 'vstatlas') X_sdss, _, y_sdss, _ = balanced_train_test_split( sdss[sdss_features], sdss[target_col], train_size=10000, test_size=0, random_state=2) X_vstatlas, _, y_vstatlas, _ = balanced_train_test_split( vstatlas[vstatlas_features], vstatlas[target_col], train_size=2360, test_size=0, random_state=2) """ Explanation: Learning Curves End of explanation """ sdss_rbf_path = '../pickle/04_learning_curves/sdss_rbf_scores.pickle' sdss_rbf_heat = fig_dir + '4_expt1/sdss_grid_rbf.pdf' sdss_poly_path = '../pickle/04_learning_curves/sdss_poly_scores.pickle' sdss_poly_heat = fig_dir + '4_expt1/sdss_grid_poly.pdf' sdss_logistic_path = 
'../pickle/04_learning_curves/sdss_logistic_scores.pickle' sdss_logistic_heat = fig_dir + '4_expt1/sdss_grid_logistic.pdf' sdss_paths = [sdss_rbf_path, sdss_poly_path, sdss_logistic_path] vstatlas_rbf_path = '../pickle/04_learning_curves/vstatlas_rbf_scores.pickle' vstatlas_rbf_heat = fig_dir + '4_expt1/vstatlas_grid_rbf.pdf' vstatlas_poly_path = '../pickle/04_learning_curves/vstatlas_poly_scores.pickle' vstatlas_poly_heat = fig_dir + '4_expt1/vstatlas_grid_poly.pdf' vstatlas_logistic_path = '../pickle/04_learning_curves/vstatlas_logistic_scores.pickle' vstatlas_logistic_heat = fig_dir + '4_expt1/vstatlas_grid_logistic.pdf' vstatlas_paths = [vstatlas_rbf_path, vstatlas_poly_path, vstatlas_logistic_path] logistic_labels = ['Degree 1, OVR, L1-norm', 'Degree 1, OVR, L2-norm', 'Degree 1, Multinomial, L2-norm', 'Degree 2, OVR, L1-norm', 'Degree 2, OVR, L2-norm', 'Degree 2, Multinomial, L2-norm', 'Degree 3, OVR, L1-norm', 'Degree 3, OVR, L2-norm', 'Degree 3, Multinomial, L2-norm'] poly_labels = ['Degree 1, OVR, Squared Hinge, L1-norm', 'Degree 1, OVR, Squared Hinge, L2-norm', 'Degree 1, OVR, Hinge, L2-norm', 'Degree 1, Crammer-Singer', 'Degree 2, OVR, Squared Hinge, L1-norm', 'Degree 2, OVR, Squared Hinge, L2-norm', 'Degree 2, OVR, Hinge, L2-norm', 'Degree 2, Crammer-Singer', 'Degree 3, OVR, Squared Hinge, L1-norm', 'Degree 3, OVR, Squared Hinge, L2-norm', 'Degree 3, OVR, Hinge, L2-norm', 'Degree 3, Crammer-Singer'] C_rbf_range = np.logspace(-2, 10, 13) C_range = np.logspace(-6, 6, 13) gamma_range = np.logspace(-9, 3, 13) if not results_exist(sdss_rbf_path): grid_search_svm_rbf(X_sdss, y_sdss, pickle_path=sdss_rbf_path) if not results_exist(sdss_poly_path): grid_search_svm_poly(X_sdss, y_sdss, pickle_path=sdss_poly_path) if not results_exist(sdss_logistic_path): grid_search_logistic(X_sdss, y_sdss, pickle_path=sdss_logistic_path) sdss_rbf, sdss_poly, sdss_logistic = load_results(sdss_paths) if not results_exist(vstatlas_rbf_path): grid_search_svm_rbf(X_vstatlas, 
y_vstatlas, pickle_path=vstatlas_rbf_path) if not results_exist(vstatlas_poly_path): grid_search_svm_poly(X_vstatlas, y_vstatlas, pickle_path=vstatlas_poly_path) if not results_exist(vstatlas_logistic_path): grid_search_logistic(X_vstatlas, y_vstatlas, pickle_path=vstatlas_logistic_path) vstatlas_rbf, vstatlas_poly, vstatlas_logistic = load_results(vstatlas_paths) """ Explanation: Hyperparameter Optimization We start by optimising the hyperparameters with grid search. For each combination, we use a 5-fold cross validation, each fold having 300 examples in the training set and 300 in the test set. End of explanation """ fig = plt.figure(figsize=(7, 3)) im = plot_validation_accuracy_heatmap(sdss_logistic, x_range=C_range, x_label='$C$', power10='x') plt.yticks(np.arange(0, 9), logistic_labels) plt.tick_params(top='off', right='off') plt.colorbar(im) fig.savefig(sdss_logistic_heat, bbox_inches='tight') """ Explanation: Logistic Regression The best parameters for the SDSS Dataset are {degree=3, multi_class='ovr', penalty='l2', C=0.01}. But this is too slow. We can only do degree 2. The best one here is {degree=2, multi_class='ovr', penalty='l1', C=1}. End of explanation """ fig = plt.figure(figsize=(7, 3)) im = plot_validation_accuracy_heatmap(vstatlas_logistic, x_range=C_range, x_label='$C$', power10='x') plt.yticks(np.arange(0, 9), logistic_labels) plt.tick_params(top='off', right='off') plt.colorbar(im) fig.savefig(vstatlas_logistic_heat, bbox_inches='tight') """ Explanation: Multinomial has the highest score, but it doesn't give us reliable probability estimates. The next best option for VST ATLAS is {degree=2, multi_class='ovr', penalty='l1', C=100}. 
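Mechanically, the grid search described above is an exhaustive loop over hyperparameter combinations that keeps the best-scoring one; a minimal self-contained sketch (the scoring function here is a toy stand-in peaking near the RBF values reported later, not the real cross-validated accuracy):

```python
import math
from itertools import product

def grid_search(score_fn, C_values, gamma_values):
    """Score every (C, gamma) pair and keep the best one."""
    best_params, best_score = None, float('-inf')
    for C, gamma in product(C_values, gamma_values):
        score = score_fn(C, gamma)
        if score > best_score:
            best_params, best_score = (C, gamma), score
    return best_params, best_score

# Toy scoring function with a single peak at C=1e4, gamma=1e-3.
def toy_score(C, gamma):
    return -((math.log10(C) - 4) ** 2 + (math.log10(gamma) + 3) ** 2)

C_grid = [10.0 ** k for k in range(-2, 11)]     # analogous to C_rbf_range
gamma_grid = [10.0 ** k for k in range(-9, 4)]  # analogous to gamma_range
params, score = grid_search(toy_score, C_grid, gamma_grid)
print(params)  # (10000.0, 0.001)
```

In the notebook itself this loop is wrapped inside `grid_search_svm_rbf` and friends, with the 5-fold cross-validation supplying the score for each combination.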
End of explanation """ fig = plt.figure(figsize=(9, 3.5)) im = plot_validation_accuracy_heatmap(sdss_poly, x_range=C_range, x_label='$C$', power10='x') plt.yticks(np.arange(0, 12), poly_labels) plt.tick_params(top='off', right='off') plt.colorbar(im) fig.savefig(sdss_poly_heat, bbox_inches='tight') """ Explanation: SVM with Polynomial Kernel The best parameters for the SDSS dataset are {degree=2, multi_class='ovr', loss='squared_hinge', penalty='l1', C=0.1}: End of explanation """ fig = plt.figure(figsize=(9, 3.5)) im = plot_validation_accuracy_heatmap(vstatlas_poly, x_range=C_range, x_label='$C$', power10='x') plt.yticks(np.arange(0, 12), poly_labels) plt.tick_params(top='off', right='off') plt.colorbar(im) fig.savefig(vstatlas_poly_heat, bbox_inches='tight') """ Explanation: The best parameters for the VST ATLAS dataset are {degree=1, multi_class='crammer-singer', C=1000}: End of explanation """ fig = plt.figure(figsize=(8, 4)) im = plot_validation_accuracy_heatmap(sdss_rbf, x_range=gamma_range, y_range=C_rbf_range, y_label='$C$', x_label='$\gamma$', power10='both') plt.tick_params(top='off', right='off') plt.colorbar(im) fig.savefig(sdss_rbf_heat, bbox_inches='tight') """ Explanation: SVM with RBF Kernel The best one is {C = 10,000, gamma=0.001}. End of explanation """ fig = plt.figure(figsize=(8, 4)) im = plot_validation_accuracy_heatmap(vstatlas_rbf, x_range=gamma_range, y_range=C_rbf_range, y_label='$C$', x_label='$\gamma$', power10='both') plt.tick_params(top='off', right='off') plt.colorbar(im) fig.savefig(vstatlas_rbf_heat, bbox_inches='tight') """ Explanation: The best one is {C = 1,000,000, gamma=0.001}. 
End of explanation """ X = np.asarray(sdss[sdss_features]) y = np.asarray(sdss[target_col]) cv = StratifiedShuffleSplit(y, n_iter=5, test_size=200000, train_size=300001, random_state=29) rbf = SVC(kernel='rbf', gamma=0.1, C=10, cache_size=2000, class_weight='auto') poly = LinearSVC(C=0.1, loss='squared_hinge', penalty='l1', dual=False, multi_class='ovr', fit_intercept=True, class_weight='auto', random_state=21) logistic = LogisticRegression(penalty='l1', dual=False, C=1, multi_class='ovr', solver='liblinear', class_weight='auto', random_state=21) forest = RandomForestClassifier(n_estimators=300, n_jobs=-1, class_weight='subsample', random_state=21) classifiers = [forest, logistic, rbf, poly, poly] degrees = [1, 2, 1, 2, 3] sample_sizes = np.concatenate((np.arange(100, 1000, 100), np.arange(1000, 10000, 1000), np.arange(10000, 100001, 10000), [200000, 300000])) curve_labels = ['Random Forest', 'Logistic Regression', 'RBF SVM', 'Degree 2 Polynomial SVM', 'Degree 3 Polynomial SVM'] pickle_paths = ['../pickle/04_learning_curves/sdss_lc_forest.pickle', '../pickle/04_learning_curves/sdss_lc_logistic_2.pickle', '../pickle/04_learning_curves/sdss_lc_rbf.pickle', '../pickle/04_learning_curves/sdss_lc_poly_2.pickle', '../pickle/04_learning_curves/sdss_lc_poly_3.pickle'] for classifier, degree, pickle_path in zip(classifiers, degrees, pickle_paths): if not results_exist(pickle_path): learning_curve(classifier, X, y, cv, sample_sizes, degree, pickle_path) all_learning_curves = load_results(pickle_paths) for c in all_learning_curves: print(np.array(c)[:, -1]) fig = plt.figure(figsize=(4, 4)) ax = plot_average_learning_curve(sample_sizes, all_learning_curves, curve_labels) ax.set_xscale('log') fig.savefig(fig_dir + '4_expt1/sdss_learning_curves.pdf', bbox_inches='tight') """ Explanation: Learning Curves SDSS Learning Curves Note that logistic regression with degree 3 polynomial transformation takes too long, so we skip obtaining its learning curves. 
End of explanation """ logistic_lc = np.array(all_learning_curves[1]) rbf_lc = np.array(all_learning_curves[2]) print(np.mean(logistic_lc[:,-1])) print(np.mean(rbf_lc[:,-1])) """ Explanation: Upper bounds for Logistic Regression and RBF SVM End of explanation """ X = np.asarray(vstatlas[vstatlas_features]) y = np.asarray(vstatlas[target_col]) cv = StratifiedShuffleSplit(y, n_iter=5, test_size=0.3, train_size=0.7, random_state=29) rbf = SVC(kernel='rbf', gamma=0.001, C=1000000, cache_size=2000, class_weight='auto') poly = LinearSVC(C=1000, multi_class='crammer_singer', fit_intercept=True, class_weight='auto', random_state=21) logistic = LogisticRegression(penalty='l1', dual=False, C=100, multi_class='ovr', solver='liblinear', class_weight='auto', random_state=21) forest = RandomForestClassifier(n_estimators=300, n_jobs=-1, class_weight='subsample', random_state=21) classifiers = [forest, logistic, rbf, poly] degrees = [1, 2, 1, 1] sample_sizes = np.concatenate((np.arange(100, 1000, 100), np.arange(1000, 10000, 1000), np.arange(10000, 30001, 10000), [35056])) curve_labels = ['Random Forest', 'Logistic Regression', 'RBF SVM', 'Linear SVM'] pickle_paths = ['../pickle/04_learning_curves/vstatlas_lc_forest.pickle', '../pickle/04_learning_curves/vstatlas_lc_logistic.pickle', '../pickle/04_learning_curves/vstatlas_lc_rbf.pickle', '../pickle/04_learning_curves/vstatlas_lc_poly.pickle'] for classifier, degree, pickle_path in zip(classifiers, degrees, pickle_paths): if not results_exist(pickle_path): learning_curve(classifier, X, y, cv, sample_sizes, degree, pickle_path) all_learning_curves = load_results(pickle_paths) for c in all_learning_curves: print(np.array(c)[:, -1]) fig = plt.figure(figsize=(4, 4)) ax = plot_average_learning_curve(sample_sizes, all_learning_curves, curve_labels) ax.set_xscale('log') fig.savefig(fig_dir + '4_expt1/vstatlas_learning_curves.pdf', bbox_inches='tight') logistic_lc = np.array(all_learning_curves[1]) rbf_lc = np.array(all_learning_curves[2]) 
print(np.max(logistic_lc[:,-1])) print(np.max(rbf_lc[:,-1])) """ Explanation: VST ATLAS Learning Curves End of explanation """ transformer = PolynomialFeatures(degree=2, interaction_only=False, include_bias=True) X_poly = transformer.fit_transform(X) %%timeit -n 1 -r 1 rbf = SVC(kernel='rbf', gamma=0.001, C=1000, cache_size=2000, class_weight='auto', probability=True) rbf.fit(X, y) %%timeit -n 1 -r 1 rbf = SVC(kernel='rbf', gamma=0.1, C=10, cache_size=2000, class_weight='auto', probability=True) rbf.fit(X, y) %%timeit -n 1 -r 1 poly = LinearSVC(C=1000, multi_class='crammer_singer', fit_intercept=True, class_weight='auto', random_state=21) poly.fit(X, y) %%timeit -n 1 -r 1 poly = LinearSVC(C=0.1, loss='squared_hinge', penalty='l1', dual=False, multi_class='ovr', fit_intercept=True, class_weight='auto', random_state=21) poly.fit(X_poly, y) %%timeit -n 1 -r 1 logistic = LogisticRegression(penalty='l1', dual=False, C=100, multi_class='ovr', solver='liblinear', random_state=21) logistic.fit(X_poly, y) """ Explanation: Appendix: Time Complexity End of explanation """
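The `%%timeit -n 1 -r 1` cells above rely on IPython magic; outside a notebook the same single-run measurement comes from the standard-library `timeit` module. A small sketch with a stand-in workload (not an actual classifier fit):

```python
import timeit

def fake_fit(n=10000):
    # stand-in workload for classifier.fit(X, y)
    return sum(i * i for i in range(n))

# one run, like `%%timeit -n 1 -r 1`
elapsed = timeit.timeit(fake_fit, number=1)
print("1 run: %.6f s" % elapsed)

# averaging over repeats gives a more stable estimate
per_run = timeit.timeit(fake_fit, number=50) / 50
print("mean of 50 runs: %.6f s" % per_run)
```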
intel-analytics/BigDL
apps/variational-autoencoder/using_variational_autoencoder_to_generate_digital_numbers.ipynb
apache-2.0
# a bit of setup import numpy as np from bigdl.dllib.nn.criterion import * from bigdl.dllib.feature.dataset import mnist from bigdl.dllib.keras.layers import * from bigdl.dllib.keras.models import Model from bigdl.dllib.keras.utils import * import datetime as dt IMAGE_SIZE = 784 IMAGE_ROWS = 28 IMAGE_COLS = 28 IMAGE_CHANNELS = 1 latent_size = 2 from bigdl.dllib.nncontext import * sc = init_nncontext("Variational Autoencoder Example") """ Explanation: Using Variational Autoencoder to Generate Digital Numbers Variational Autoencoders (VAEs) are very popular approaches to unsupervised learning of complicated distributions. In this example, we are going to use a VAE to generate digital numbers. In a standard autoencoder, we have an encoder network that takes in the original image and encodes it into a vector of latent variables, and a decoder network that takes in the latent vector and outputs a generated image that we hope looks similar to the original image. In a VAE, we constrain the latent variable to be unit Gaussian, so that we can sample latent variables from a unit Gaussian distribution and then use the decoder network to generate images. So we get the architecture above. Instead of generating the latent variables directly, the encoder network outputs a mean vector and a variance (or log-variance) vector, and the decoder takes the sampled latent vector to generate the output image. We also add a penalty on the KL divergence between the latent distribution and a unit Gaussian distribution.
Define the Model End of explanation """ def get_encoder(latent_size): input0 = Input(shape=(IMAGE_CHANNELS, IMAGE_COLS, IMAGE_ROWS)) #CONV conv1 = Convolution2D(16, 5, 5, input_shape=(IMAGE_CHANNELS, IMAGE_ROWS, IMAGE_COLS), border_mode='same', subsample=(2, 2))(input0) relu1 = LeakyReLU()(conv1) conv2 = Convolution2D(32, 5, 5, input_shape=(16, 14, 14), border_mode='same', subsample=(2, 2))(relu1) relu2 = LeakyReLU()(conv2) # 32,7,7 #fully connected to output mean vector and log-variance vector reshape = Reshape([7*7*32])(relu2) z_mean = Dense(latent_size)(reshape) z_log_var = Dense(latent_size)(reshape) model = Model([input0],[z_mean,z_log_var]) return model def get_decoder(latent_size): input0 = Input(shape=(latent_size,)) reshape0 = Dense(1568)(input0) reshape1 = Reshape((32, 7, 7))(reshape0) relu0 = Activation('relu')(reshape1) # use resize and conv layer instead of deconv layer resize1 = ResizeBilinear(14,14)(relu0) deconv1 = Convolution2D(16, 5, 5, subsample=(1, 1), activation='relu', border_mode = 'same', input_shape=(32, 14, 14))(resize1) resize2 = ResizeBilinear(28,28)(deconv1) deconv2 = Convolution2D(1, 5, 5, subsample=(1, 1), input_shape=(16, 28, 28), border_mode = 'same')(resize2) outputs = Activation('sigmoid')(deconv2) model = Model([input0],[outputs]) return model def get_autoencoder(latent_size): input0 = Input(shape=(IMAGE_CHANNELS, IMAGE_COLS, IMAGE_ROWS)) encoder = get_encoder(latent_size)(input0) sample = GaussianSampler()(encoder) decoder_model = get_decoder(latent_size) decoder = decoder_model(sample) model = Model([input0],[encoder,decoder]) return model,decoder_model autoencoder,decoder_model = get_autoencoder(2) """ Explanation: We are going to use a simple CNN network as our encoder and decoder. In the decoder, we upsample the image back to the original resolution with ResizeBilinear plus Convolution2D layers, used here in place of a SpatialFullConvolution (a.k.a. deconvolution or transposed-convolution) layer.
End of explanation """ def get_mnist(sc, mnist_path): (train_images, train_labels) = mnist.read_data_sets(mnist_path, "train") train_images = np.reshape(train_images, (60000, 1, 28, 28)) rdd_train_images = sc.parallelize(train_images) rdd_train_sample = rdd_train_images.map(lambda img: Sample.from_ndarray( (img > 128) * 1.0, [(img > 128) * 1.0, (img > 128) * 1.0])) return rdd_train_sample mnist_path = "/tmp/mnist" # please replace this train_data = get_mnist(sc, mnist_path) # (train_images, train_labels) = mnist.read_data_sets(mnist_path, "train") """ Explanation: Get the MNIST Dataset End of explanation """ batch_size = 100 criterion = ParallelCriterion() criterion.add(KLDCriterion(), 1.0) criterion.add(BCECriterion(size_average=False), 1.0/batch_size) """ Explanation: Define our Training Objective The size_average parameter in BCECriterion should be False: when size_average is True, the negative log-likelihood computed in BCECriterion is averaged over observations as well as dimensions, while in KLDCriterion the KL divergence is summed over observations, so the loss would be inconsistent. End of explanation """ autoencoder.compile(optimizer=Adam(0.001), loss=criterion) import os if not os.path.exists("./log"): os.makedirs("./log") app_name='vae-digits-'+dt.datetime.now().strftime("%Y%m%d-%H%M%S") autoencoder.set_tensorboard(log_dir='./log/',app_name=app_name) print("Saving logs to ", app_name) """ Explanation: Compile the Model End of explanation """ autoencoder.fit(x=train_data, batch_size=batch_size, nb_epoch = 6) """ Explanation: Start Training This step may take a while depending on your system.
End of explanation """ import matplotlib matplotlib.use('Agg') %pylab inline import matplotlib.pyplot as plt from matplotlib.pyplot import imshow import numpy as np import datetime as dt train_summary = TrainSummary('./log/', app_name) loss = np.array(train_summary.read_scalar("Loss")) plt.figure(figsize = (12,12)) plt.plot(loss[:,0],loss[:,1],label='loss') plt.xlim(0,loss.shape[0]+10) plt.grid(True) plt.title("loss") """ Explanation: Let's show the learning curve. End of explanation """ from matplotlib.pyplot import imshow img = np.column_stack([decoder_model.forward(np.random.randn(1,2)).reshape(28,28) for s in range(8)]) imshow(img, cmap='gray') """ Explanation: You can also open tensorboard to see this curve. Sample Some Images from the Decoder End of explanation """ # This code snippet references this keras example (https://github.com/keras-team/keras/blob/master/examples/variational_autoencoder.py) from scipy.stats import norm # display a 2D manifold of the digits n = 15 # figure with 15x15 digits digit_size = 28 figure = np.zeros((digit_size * n, digit_size * n)) # linearly spaced coordinates on the unit square were transformed through the inverse CDF (ppf) of the Gaussian # to produce values of the latent variables z, since the prior of the latent space is Gaussian grid_x = norm.ppf(np.linspace(0.05, 0.95, n)) grid_y = norm.ppf(np.linspace(0.05, 0.95, n)) for i, yi in enumerate(grid_x): for j, xi in enumerate(grid_y): z_sample = np.array([[xi, yi]]) x_decoded = decoder_model.forward(z_sample) digit = x_decoded.reshape(digit_size, digit_size) figure[i * digit_size: (i + 1) * digit_size, j * digit_size: (j + 1) * digit_size] = digit plt.figure(figsize=(10, 10)) plt.imshow(figure, cmap='Greys_r') plt.show() """ Explanation: Explore the Latent Space End of explanation """
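The latent-space grid above pushes evenly spaced probabilities through `norm.ppf`, the inverse CDF of the unit-Gaussian prior; the same coordinates can be computed with only the standard library (`statistics.NormalDist`, Python 3.8+). A sketch:

```python
from statistics import NormalDist

def latent_grid(n=15, lo=0.05, hi=0.95):
    """Evenly spaced probabilities mapped through the inverse CDF of a
    unit Gaussian, mirroring norm.ppf(np.linspace(lo, hi, n))."""
    inv = NormalDist().inv_cdf
    step = (hi - lo) / (n - 1)
    return [inv(lo + i * step) for i in range(n)]

grid = latent_grid()
print(round(grid[0], 3), round(grid[-1], 3))  # -1.645 1.645
```

Each pair (grid[i], grid[j]) is then one z_sample fed to the decoder.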
Amarchuk/2FInstability
data/n4258_u7353/.ipynb_checkpoints/n4258-checkpoint.ipynb
gpl-3.0
from IPython.display import HTML from IPython.display import Image import os %pylab %matplotlib inline %run ../../../utils/load_notebook.py from photometry import * from instabilities import * name = 'N4258' gtype = 'SA(s)ab' incl = 70. #(adopted by Epinat+2008) scale = 0.092 #kpc/arcsec according to ApJ 142 145(31pp) 2011 data_path = '../../data/ngc4258' sin_i, cos_i = np.sin(incl*np.pi/180.), np.cos(incl*np.pi/180.) """ Explanation: NGC 4258 (UGC 7353) End of explanation """ os.chdir(data_path) # Data from NED HTML('<iframe src=http://ned.ipac.caltech.edu/cgi-bin/objsearch?objname=ngc+4258&extend=no&hconst=\ 73&omegam=0.27&omegav=0.73&corr_z=1&out_csys=Equatorial&out_equinox=J2000.0&obj_sort=RA+or+Longitude&of=pre_text&zv_breaker=\ 30000.0&list_limit=5&img_stamp=YES width=1000 height=350></iframe>') # Data from HYPERLEDA HTML('<iframe src=http://leda.univ-lyon1.fr/ledacat.cgi?o=ngc4258 width=1000 height=350></iframe>') #SDSS Image('ngc3898_SDSS.jpeg', width=300) #JHK Image('ngc3898_JHK.jpg', width=300) """ Explanation: Papers Noordermeer 2008 https://ui.adsabs.harvard.edu/#abs/2008MNRAS.388.1381N/abstract Heraudeau 1999 https://ui.adsabs.harvard.edu/#abs/1999A&AS..136..509H/abstract https://ui.adsabs.harvard.edu/#abs/1994A&A...286..395V/abstract (V-band photometry and HI rotation curve) https://ui.adsabs.harvard.edu/#abs/2001MNRAS.323..188P/abstract (photometry and decomposition in V, rotation curve and dispersions out to 20 arcsec, $\rm{H}_{\alpha}$ imaging) https://arxiv.org/pdf/0805.0976v1.pdf ($\rm{H}_{\alpha}$ rotation curve) Mendez-Abreu https://ui.adsabs.harvard.edu/#abs/2008A&A...478..353M/abstract Gutierrez https://ui.adsabs.harvard.edu/#abs/2011AJ....142..145G/abstract Hameed 2005 http://iopscience.iop.org/article/10.1086/430211/pdf ($\rm{H_{\alpha}}$) Data End of explanation """ Image('u7353.png') """ Explanation: Stellar kinematics Velocity dispersions and the rotation curve are available in http://adsabs.harvard.edu/cgi-bin/bib_query?1998A%26AS..133..317H out to ~ 50'' (1 slit) End of explanation """ Image() """ Explanation: Photometric data Profiles in V, R, I are available here: http://mnras.oxfordjournals.org/content/312/1/2.full.pdf GALFIT decomposition in K: http://iopscience.iop.org/article/10.1088/0004-637X/780/1/69/pdf Bulge data in I: http://iopscience.iop.org/article/10.1086/423036/pdf B-V: http://iopscience.iop.org/article/10.1088/0004-637X/716/2/942/pdf I-band photometry, gas rotation curve, decomposition in V,I,J: http://pasj.oxfordjournals.org/content/60/3/493.full.pdf Gas data Rotation curve: * from $\rm{CO}$ here http://iopscience.iop.org/article/10.1086/511762/pdf * from $\rm{HI}$ here http://www.aanda.org/articles/aa/pdf/2011/06/aa16177-10.pdf * from CO + HI http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1997PASJ...49...17S&data_type=PDF_HIGH&whole_paper=YES&type=PRINTER&filetype=.pdf $\Sigma_{HI}$ and $\Sigma_{H_2}$ https://arxiv.org/pdf/1608.06735v1.pdf End of explanation """
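The inclination and scale set at the top of this notebook are what convert the observed quantities in these datasets into physical ones; a small sketch of the two conversions with illustrative numbers (not values read from the data files):

```python
import math

incl = 70.0     # inclination in degrees, as adopted above
scale = 0.092   # kpc per arcsec

sin_i = math.sin(math.radians(incl))

def arcsec_to_kpc(r_arcsec):
    return r_arcsec * scale

def deproject_velocity(v_los):
    """Line-of-sight rotation velocity -> in-plane velocity."""
    return v_los / sin_i

print(round(arcsec_to_kpc(50.0), 1))        # 4.6 (kpc at 50'')
print(round(deproject_velocity(188.0), 1))  # 200.1 (km/s)
```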
femtotrader/pyfolio
pyfolio/examples/single_stock_example.ipynb
apache-2.0
%matplotlib inline import pyfolio as pf """ Explanation: Single stock analysis example in pyfolio Here's a simple example where we produce a set of plots, called a tear sheet, for a stock. Import pyfolio End of explanation """ stock_rets = pf.utils.get_symbol_rets('FB') """ Explanation: Fetch the daily returns for a stock End of explanation """ pf.create_returns_tear_sheet(stock_rets, live_start_date='2015-12-1') """ Explanation: Create a full tear sheet for the single stock This will show charts about returns and shock events. End of explanation """
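The tear sheet's cumulative-performance plots compound the daily return series; the core arithmetic can be sketched in plain Python (an illustration of the math, not pyfolio's implementation):

```python
def cumulative_return(daily_returns):
    """Compound simple daily returns into a total return."""
    total = 1.0
    for r in daily_returns:
        total *= 1.0 + r
    return total - 1.0

# three illustrative daily returns: +1%, -0.5%, +2%
rets = [0.01, -0.005, 0.02]
print(round(cumulative_return(rets), 6))  # 0.025049
```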
bakanchevn/DBCourseMirea2017
Неделя 1/Задание в классе/Лекция 2.ipynb
gpl-3.0
%load_ext sql %sql sqlite:// %%sql pragma foreign_keys = ON; -- WARNING: by default off in sqlite drop table if exists product; -- This needs to be dropped if exists, see why further down! drop table if exists company; create table company ( cname varchar primary key, -- company name uniquely identifies the company. stockprice money, -- stock price is in money country varchar); -- country is just a string insert into company values ('GizmoWorks', 25.0, 'USA'); insert into company values ('Canon', 65.0, 'Japan'); insert into company values ('Hitachi', 15.0, 'Japan'); create table product( pname varchar primary key, -- name of the product price money, -- price of the product category varchar, -- category manufacturer varchar, -- manufacturer foreign key (manufacturer) references company(cname)); insert into product values('Gizmo', 19.99, 'Gadgets', 'GizmoWorks'); insert into product values('SingleTouch', 149.99, 'Photography', 'Canon'); insert into product values('PowerGizmo', 29.99, 'Gadgets', 'GizmoWorks'); insert into product values('MultiTouch', 203.99, 'Household', 'Hitachi'); %%sql DROP TABLE IF EXISTS franchise; CREATE TABLE franchise (name TEXT, db_type TEXT); INSERT INTO franchise VALUES ('Bobs Bagels', 'NoSQL'); INSERT INTO franchise VALUES ('eBagel', 'NoSQL'); INSERT INTO franchise VALUES ('BAGEL CORP', 'MySQL'); DROP TABLE IF EXISTS store; CREATE TABLE store (franchise TEXT, location TEXT); INSERT INTO store VALUES ('Bobs Bagels', 'NYC'); INSERT INTO store VALUES ('eBagel', 'PA'); INSERT INTO store VALUES ('BAGEL CORP', 'Chicago'); INSERT INTO store VALUES ('BAGEL CORP', 'NYC'); INSERT INTO store VALUES ('BAGEL CORP', 'PA'); DROP TABLE IF EXISTS bagel; CREATE TABLE bagel (name TEXT, price MONEY, made_by TEXT); INSERT INTO bagel VALUES ('Plain with shmear', 1.99, 'Bobs Bagels'); INSERT INTO bagel VALUES ('Egg with shmear', 2.39, 'Bobs Bagels'); INSERT INTO bagel VALUES ('eBagel Drinkable Bagel', 27.99, 'eBagel'); INSERT INTO bagel VALUES ('eBagel Expansion 
Pack', 1.99, 'eBagel'); INSERT INTO bagel VALUES ('Plain with shmear', 0.99, 'BAGEL CORP'); INSERT INTO bagel VALUES ('Organic Flax-seed bagel chips', 0.99, 'BAGEL CORP'); DROP TABLE IF EXISTS purchase; -- Note that date is an int here just to simplify things CREATE TABLE purchase (bagel_name TEXT, franchise TEXT, date INT, quantity INT, purchaser_age INT); INSERT INTO purchase VALUES ('Plain with shmear', 'Bobs Bagels', 1, 12, 28); INSERT INTO purchase VALUES ('Egg with shmear', 'Bobs Bagels', 2, 6, 47); INSERT INTO purchase VALUES ('Plain with shmear', 'BAGEL CORP', 2, 12, 24); INSERT INTO purchase VALUES ('Plain with shmear', 'BAGEL CORP', 3, 1, 17); INSERT INTO purchase VALUES ('eBagel Expansion Pack', 'eBagel', 1, 137, 5); INSERT INTO purchase VALUES ('Plain with shmear', 'Bobs Bagels', 4, 24, NULL); """ Explanation: Note: run the code below before the lecture End of explanation """ %sql SELECT * FROM Product; %%sql SELECT pname,price FROM Product ORDER BY pname """ Explanation: Sorting by a value that is not in the output SQL-89 forbids the following, but modern DBMSs allow it: SELECT pname FROM Product ORDER BY Price End of explanation """ %%sql SELECT pname FROM Product ORDER BY Price %%sql SELECT distinct pname FROM Product ORDER BY Price """ Explanation: Some DBMSs will execute the query above and some will not Set operations Let's generate 3 tables: * R is {1,2,3,4,5} * S is {} * T is {1,4,7,10} End of explanation """ # Create tables & insert some random numbers # Note: in Postgresql, try the generate_series function... 
%sql DROP TABLE IF EXISTS R; DROP TABLE IF EXISTS S; DROP TABLE IF EXISTS T; %sql CREATE TABLE R (A int); CREATE TABLE S (A int); CREATE TABLE T (A int); for i in range(1,6): %sql INSERT INTO R VALUES (:i) for i in range(1,11,3): %sql INSERT INTO T VALUES (:i) """ Explanation: Let's try to obtain $R \cap (S \cup T) = {1,4}$ End of explanation """ %%sql SELECT DISTINCT R.A FROM R, S, T WHERE R.A=S.A OR R.A=T.A """ Explanation: Why is the returned set empty? Let's look at the order of operations for this query: 1. Compute the Cartesian product of R, S, T 2. Filter the table from (1) by the WHERE condition. Let's run (1): End of explanation """ %sql SELECT DISTINCT R.A FROM R, S, T; """ Explanation: The Cartesian product is empty because S is empty! Union Let's use UNION: End of explanation """ %%sql SELECT R.A FROM R, S WHERE R.A=S.A UNION ALL SELECT R.A FROM R, T WHERE R.A=T.A """ Explanation: No duplicates (UNION returns a set) If we need duplicates, we use UNION ALL R = {1,2,3,4,5} S = {1,2,3,4,5} T = {1,4,7,10} End of explanation """ %sql DROP TABLE IF EXISTS S; CREATE TABLE S (A int); for i in range(1,6): %sql INSERT INTO S VALUES (:i) %%sql SELECT R.A FROM R, S WHERE R.A=S.A UNION ALL SELECT R.A FROM R, T WHERE R.A=T.A """ Explanation: Other set operations: INTERSECT, EXCEPT End of explanation """ %%sql SELECT R.A FROM R, S, T WHERE R.A = S.A INTERSECT SELECT R.A FROM R, S, T WHERE R.A = T.A %%sql SELECT R.A FROM R, S, T WHERE R.A = S.A EXCEPT SELECT R.A FROM R, S, T WHERE R.A = T.A """ Explanation: Example: BAGELS The story: eBagel is a new startup building a new NoSQL system eBagel has just received $100M in funding. 
However, their sales are declining, and you have been asked to analyze the data and figure out what is wrong Let's fill in the tables Franchise(name TEXT, db_type TEXT) Store(franchise TEXT, location TEXT) Bagel(name TEXT, price MONEY, made_by TEXT) Purchase(bagel_name TEXT, franchise TEXT, date INT, quantity INT, purchaser_age INT) Union Let's find the franchises located in PA or NYC to identify the competitors. End of explanation """ %%sql SELECT franchise FROM store WHERE location = 'NYC' UNION SELECT franchise FROM store WHERE location = 'PA'; """ Explanation: Intersect: a small problem... eBagel's CEO wants to learn about the back-end technologies of bagel companies that operate in several locations. Let's try the INTERSECT operator to find the database types of franchises that have stores in both PA and NYC: End of explanation """ %%sql SELECT f.db_type FROM franchise f, store s WHERE f.name = s.franchise AND s.location = 'NYC' INTERSECT SELECT f.db_type FROM franchise f, store s WHERE f.name = s.franchise AND s.location = 'PA' """ Explanation: What went wrong Looking at the data, only "MySQL" should have been returned as the result: End of explanation """ %%sql SELECT f.name, s.location, f.db_type FROM franchise f, store s WHERE f.name = s.franchise; """ Explanation: Let's look at the values before the intersection End of explanation """ %%sql SELECT f.db_type FROM franchise f, store s WHERE f.name = s.franchise AND s.location = 'NYC' %%sql SELECT f.db_type FROM franchise f, store s WHERE f.name = s.franchise AND s.location = 'PA' """ Explanation: The problem is that we applied the INTERSECT operation after projecting the query attributes rather than before Nested queries This task can be solved with nested queries End of 
explanation """ %%sql SELECT f.db_type FROM franchise f WHERE f.name IN ( SELECT b.made_by FROM bagel b, purchase p WHERE b.name = p.bagel_name AND p.purchaser_age >= 20 AND p.purchaser_age < 30); """ Explanation: Другой пример: eBagel's CEO хочет знать, какие бд используют компании с возрастом покупателей от 20 до 30: End of explanation """ %%sql SELECT f.db_type FROM franchise f, bagel b, purchase p WHERE f.name = b.made_by AND b.name = p.bagel_name AND p.purchaser_age >= 20 AND p.purchaser_age < 30; """ Explanation: Можно ли обойтись без вложенного запроса? End of explanation """ %%sql SELECT b.name, b.price FROM bagel b WHERE b.made_by = 'eBagel' AND EXISTS (SELECT name FROM bagel WHERE made_by <> 'eBagel' AND price > b.price); """ Explanation: Обращайте внимание на дубли! используйте DISTINCT. Также можно использовать: * ALL * ANY * EXISTS К сожалению, ALL и ANY не поддерживаются SQLite Покажем пример с EXISTS. Предположим, что мы хотим ответить на вопрос: у eBagel есть какие-нибудь products, которые дешевле всех продуктов конкурентов? End of explanation """ %sql SELECT AVG(price) FROM bagel WHERE made_by = 'eBagel'; %sql SELECT COUNT(*) AS "Number of Stores in PA" FROM store WHERE location = 'PA'; %sql SELECT COUNT(location) FROM store; %sql SELECT COUNT(DISTINCT location) FROM store; """ Explanation: Выводы: SQL: * Предоставляет высоко уровневый декларативный язык программирования для манипулирования данными(DML) * SFW блок - основа * Есть поддержка работы с множествами и вложенными запросами Агрегация SQL поддерживает следующие агрегатные операции: * SUM * COUNT * AVG * MIN * MAX Кроме COUNT, все операторы агрегации применяются только к единственному атрибуту Примеры End of explanation """ %%sql SELECT SUM(b.price * p.quantity) AS net_sales FROM bagel b, purchase p WHERE b.name = p.bagel_name; """ Explanation: Можно ли получить общую сумму, заработанную компаниями? 
End of explanation """ %%sql SELECT SUM(b.price * p.quantity) AS net_sales FROM bagel b, purchase p WHERE b.name = p.bagel_name; """ Explanation: The information can be broken down further! End of explanation """ %%sql SELECT b.made_by, SUM(b.price * p.quantity) AS revenue FROM bagel b, purchase p WHERE b.made_by = p.franchise AND b.name = p.bagel_name GROUP BY b.made_by; """ Explanation: Let's find only those companies whose total sales quantity is greater than 12. End of explanation """ %%sql SELECT b.name, SUM(b.price * p.quantity) AS sales FROM bagel b, purchase p WHERE b.name = p.bagel_name AND b.made_by = p.franchise GROUP BY b.name HAVING SUM(p.quantity) > 12; """ Explanation: Let's look at the previous query in detail First, form the SFW End of explanation """ %%sql SELECT * FROM bagel b, purchase p WHERE b.name = p.bagel_name AND b.made_by = p.franchise; """ Explanation: Now apply GROUP BY End of explanation """ %%sql SELECT b.name, GROUP_CONCAT(b.price, ',') AS prices, GROUP_CONCAT(b.made_by, ',') AS made_bys, bagel_name, GROUP_CONCAT(p.franchise, ',') AS franchises, GROUP_CONCAT(p.date, ',') AS dates, GROUP_CONCAT(p.quantity, ',') AS quantities, GROUP_CONCAT(p.purchaser_age, ',') AS purchaser_ages FROM bagel b, purchase p WHERE b.name = p.bagel_name AND b.made_by = p.franchise GROUP BY b.name; """ Explanation: GROUP_CONCAT shows the generated string Now apply HAVING; End of explanation """ %%sql SELECT b.name, GROUP_CONCAT(b.price, ',') AS prices, GROUP_CONCAT(b.made_by, ',') AS made_bys, bagel_name, GROUP_CONCAT(p.franchise, ',') AS franchises, GROUP_CONCAT(p.date, ',') AS dates, SUM(p.quantity) AS total_quantity, GROUP_CONCAT(p.purchaser_age, ',') AS purchaser_ages FROM bagel b, purchase p WHERE b.name = p.bagel_name AND b.made_by = p.franchise GROUP BY b.name HAVING SUM(p.quantity) > 12; """ Explanation: Finally, apply the projection. 
End of explanation """ %sql SELECT DISTINCT made_by FROM bagel WHERE name LIKE '%shmear%'; """ Explanation: Вхождение во множества: Найдем компании, у которых есть shmear в ассортименте: End of explanation """ %%sql SELECT DISTINCT made_by FROM bagel WHERE made_by NOT IN ( SELECT made_by FROM bagel WHERE name NOT LIKE '%shmear%'); """ Explanation: Найдем компании, у которых все продукты имеют shmear в названии: End of explanation """ %sql SELECT * FROM purchase WHERE bagel_name LIKE '%shmear%'; %%sql SELECT * FROM purchase WHERE bagel_name LIKE '%shmear%' AND (purchaser_age >= 5 OR purchaser_age < 5); """ Explanation: NULL в SQL End of explanation """ %%sql SELECT * FROM purchase WHERE bagel_name LIKE '%shmear%' AND (purchaser_age >= 5 OR purchaser_age < 5 OR purchaser_age IS NULL); """ Explanation: Применяя условия, необходимо корректно обрабатывать условия на NULL: End of explanation """ %%sql SELECT DISTINCT b.name FROM bagel b, purchase p WHERE b.name = p.bagel_name AND b.made_by = p.franchise; """ Explanation: Что произойдет если есть null в join'е? End of explanation """ %%sql SELECT DISTINCT b.name FROM bagel b INNER JOIN purchase p ON b.name = p.bagel_name AND b.made_by = p.franchise; """ Explanation: Пропускаем пончики, которые никто не покупал Inner/Outer Joins End of explanation """ %%sql SELECT DISTINCT b.name FROM bagel b LEFT OUTER JOIN purchase p ON b.name = p.bagel_name AND b.made_by = p.franchise; """ Explanation: INNER JOIN на таблицыA и B с условием на соединение C(A,B) возвращает только такие отношения (a,b), для которых C(a,b) = TRUE. LEFT OUTER JOIN. возвращает также (a, NULL) для тех кортежей a, для которых нет b таких, что C(a,b) = TRUE: End of explanation """
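As a side note, the NULL-padding behavior of LEFT OUTER JOIN can be checked with plain sqlite3 outside the %%sql magic. This is an illustrative sketch: the two toy tables below carry only a couple of columns and made-up rows, not the full course schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE bagel (name TEXT, made_by TEXT)")
cur.execute("CREATE TABLE purchase (bagel_name TEXT, franchise TEXT)")
cur.executemany("INSERT INTO bagel VALUES (?, ?)",
                [("Plain with shmear", "eBagel"), ("Egg with shmear", "BAGEL CORP")])
cur.execute("INSERT INTO purchase VALUES (?, ?)", ("Plain with shmear", "eBagel"))

# INNER JOIN drops bagels with no matching purchase
inner = cur.execute("""
    SELECT DISTINCT b.name FROM bagel b
    INNER JOIN purchase p ON b.name = p.bagel_name AND b.made_by = p.franchise
""").fetchall()

# LEFT OUTER JOIN keeps them, padding the right side with NULL (None in Python)
outer = cur.execute("""
    SELECT DISTINCT b.name, p.franchise FROM bagel b
    LEFT OUTER JOIN purchase p ON b.name = p.bagel_name AND b.made_by = p.franchise
""").fetchall()

print(inner)
print(outer)
```

Here the unsold bagel disappears from the inner join but comes back in the outer join paired with None.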
ireapps/cfj-2017
exercises/20. Exercise - Web scraping-working.ipynb
mit
# the URL to request

# get that page

# turn the page text into soup

# find the table of interest
"""
Explanation: Let's scrape some death row data
Texas executes a lot of criminals, and it has a web page that keeps track of people on its death row.
Using what you've learned so far, let's scrape this table into a CSV. Then we're going to write a function to grab a couple of pieces of additional data from the inmates' detail pages.
Import our libraries
Fetch and parse the summary page
End of explanation
"""
# find all table rows (skip the first one)

# open a file to write to

# create a writer object

# write header row

# loop over the rows

# extract the cells

# offense ID

# link to detail page

# last name

# first name

# dob

# sex

# race

# date received

# county

# offense date

# write out to file
"""
Explanation: Loop over the table rows and write to CSV
End of explanation
"""
"""Fetch details from a death row inmate's page."""

# create a dictionary with some default values
# as we go through, we're going to add stuff to it
# (if you want to explore further, there is actually
# a special kind of dictionary called a "defaultdict" to
# handle this use case) =>
# https://docs.python.org/3/library/collections.html#collections.defaultdict

# partway down the page, the links go to JPEGs instead of HTML pages
# we can't parse images, so we'll just return the empty dictionary

# get the page

# soup the HTML

# find the table of info

# target the mugshot, if it exists

# if there is a mug, grab the src and add it to the dictionary

# get a list of the "label" cells
# on some pages, they're identified by the class 'tabledata_bold_align_right_deathrow'
# on others, they're identified by the class 'tabledata_bold_align_right_unit'
# so we pass it a list of possible classes

# gonna do some fanciness here in the interests of DRY =>
# a list of attributes we're interested in -- should match exactly the text inside the cells of interest

# loop over the list of label cells that we targeted 
earlier
# check to see if the cell text is in our list of attributes

# if so, find the value -- go up to the tr and search for the other td --
# and add that attribute to our dictionary

# return the dictionary to the script
"""
Explanation: Let's write a parsing function
We need a function that will take a URL of a detail page and do these things:
Open the detail page URL using requests
Parse the contents using BeautifulSoup
Isolate the bits of information we're interested in: height, weight, eye color, hair color, native county, native state, link to mugshot
Return those bits of information in a dictionary
A couple of things to keep in mind: Not every inmate will have every piece of data. Also, not every inmate has an HTML detail page to parse -- the older ones are a picture. So we'll need to work around those limitations.
We shall call our function fetch_details().
End of explanation
"""
# open the CSV file to read from and the one to write to

# create a reader object

# the output headers are going to be the headers from the summary file
# plus a list of new attributes

# create the writer object

# write the header row

# loop over the rows in the input file

# print the inmate's name (so we can keep track of where we're at)
# helps with debugging, too

# call our function on the URL in the row

# add the two dicts together by
# unpacking them inside a new one
# and write out to file
"""
Explanation: Putting it all together
Now that we have our parsing function, we can:
Open and read the CSV files of summary inmate info (the one we just scraped)
Open and write a new CSV file of detailed inmate info
As we loop over the summary inmate data, we're going to call our new parsing function on the detail URL in each row. Then we'll combine the dictionaries (data from the row of summary data + new detailed data) and write out to the new file.
End of explanation
"""
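The "add the two dicts together by unpacking them inside a new one" step can be sketched with stdlib code only. This is a toy sketch, not the finished exercise solution: the column names and values below are illustrative, not the scraper's exact fields:

```python
import csv
import io

# one row from the summary CSV, and the extra details our parsing function returned
summary_row = {"first_name": "John", "last_name": "Doe", "link": "/dr_info/doe.html"}
details = {"height": "5-10", "weight": "185", "native_county": "Harris"}

# output headers: summary headers plus the new detail attributes
fieldnames = list(summary_row) + list(details)

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=fieldnames)
writer.writeheader()

# merge the two dicts by unpacking them into a new one, then write the row
writer.writerow({**summary_row, **details})

print(out.getvalue())
```

With a real scraper, `out` would be an open file and this merge/write pair would sit inside the loop over summary rows.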
ergosimulation/mpslib
scikit-mps/examples/ex_mpslib_hard_and_soft.ipynb
lgpl-3.0
import mpslib as mps
import numpy as np
import matplotlib.pyplot as plt

O=mps.mpslib(method='mps_snesim_tree', parameter_filename='mps_snesim.txt')
#O=mps.mpslib(method='mps_genesim', parameter_filename='mps_genesim.txt')

TI1, TI_filename1 = mps.trainingimages.strebelle(3, coarse3d=1)
O.par['soft_data_categories']=np.array([0,1])
O.ti=TI1

#%% Set parameters for MPSlib
O.par['rseed']=1
O.par['n_multiple_grids']=0;
O.par['n_cond']=16
O.par['n_cond_soft']=1
O.par['n_real']=100
O.par['debug_level']=-1
O.par['simulation_grid_size'][0]=18
O.par['simulation_grid_size'][1]=13
O.par['simulation_grid_size'][2]=1
O.par['hard_data_fnam']='hard.dat'
O.par['soft_data_fnam']='soft.dat'
O.delete_local_files()
O.par['n_max_cpdf_count']=100
"""
Explanation: MPSlib: hard and soft data in MPSlib
MPSlib can account for hard and soft data (both co-located and non-co-located). Details about the use of the preferential path and co- and non-co-located soft data can be found in
Hansen, Thomas Mejer, Klaus Mosegaard, and Knud Skou Cordua. "Multiple point statistical simulation using uncertain (soft) conditional data." Computers & geosciences 114 (2018): 1-10
Define hard data
Hard data (model parameters with no uncertainty) are given by the '''d_hard''' variable, with X, Y, Z, and VALUE for each conditional data point. 3 conditional hard data can be given by
O.d_hard = np.array( [[ ix1, iy1, iz1, val1],
[ ix2, iy2, iz2, val2],
[ ix3, iy3, iz3, val3]])
Define soft/uncertain data
Soft data (model parameters with uncertainty) are given by the '''d_soft''' variable, with X, Y, Z for the position, and a probability for each possible outcome. 
When considering a training image with two categories [0,1], then P(m=0)=0.2 at position [5,3,2] can be set as
O.d_soft = np.array( [[ 5, 3, 2, 0.2, 0.8]])
If a training image has 3 categories and P(m=0)=0.2, and P(m=1)=0.3, then
O.d_soft = np.array( [[ 5, 3, 2, 0.2, 0.3, 0.5]])
Preferential path
MPSlib makes use of a preferential simulation path, such that model parameters with more informed conditional information (i.e. with lower entropy) are simulated before nodes with less informed conditional information.
Especially when using sparse soft data, the preferential path should be preferred
O.par['shuffle_simulation_grid']=0 # Unilateral path
O.par['shuffle_simulation_grid']=1 # Random path
O.par['shuffle_simulation_grid']=2 # Preferential path
co-located soft data
By default only co-located soft data are considered during simulation, as given by
O.par['n_cond_soft']=0
O.par['shuffle_simulation_grid']=2 # Preferential path
Whenever using only co-located soft data, it is advised to use the preferential path
non-co-located soft data
Even when using the preferential path, model parameters with informed conditional information, close to the point being simulated, will not be taken into account. This means in practice that not all information in soft conditional data is used.
As an alternative, 'mps_genesim' can handle non-co-located soft data by using a rejection sampler to accept a proposed match m from scanning the TI, with a probability proportional to the product of the conditional information evaluated in m. This means that one can account for, in principle, any number of soft data, just as one can account for any number of hard data. In practice, it becomes computationally hard to account for many soft data. 
To set the number of soft data used for conditioning to 3, one can use
O.par['n_cond_soft']=3
When using multiple (or all) conditional soft data, the preferential path may not lead to more informed realizations than a random path, but simulation may be significantly faster using the preferential path, as model parameters with soft data will be simulated first, and the subsequent simulation will be conditional to only hard data, and hence computationally more efficient. Therefore it is advised to always use a preferential path.
End of explanation
"""
# Set hard data
d_hard = np.array([[ 15, 4, 0, 1],
[ 15, 5, 0, 1]])
#O.d_hard = d_hard
"""
Explanation: Hard data
End of explanation
"""
# Set soft data
d_soft = np.array([[ 2, 2, 0, 0.7, 0.3],
[ 5, 5, 0, 0.001, 0.999],
[ 10, 8, 0, 0.999, 0.001]])
O.d_soft = d_soft
"""
Explanation: Soft/uncertain data
End of explanation
"""
# Only co-located soft data
O.par['n_cond_soft']=0

gtxt=['unilateral','random','preferential']
shuffle_simulation_grid_arr = [0,1,2]
fig = plt.figure(figsize=(15, 8))
for i in range(len(shuffle_simulation_grid_arr)):
    # Set the simulation path
    O.par['shuffle_simulation_grid']=shuffle_simulation_grid_arr[i]
    O.delete_local_files()
    O.run_parallel()
    m_mean, m_std, m_mode=O.etype()
    plt.subplot(2,3,i+1)
    plt.imshow(m_mean.T, zorder=-1, vmin=0, vmax=1, cmap='hot')
    plt.colorbar(fraction=0.046, pad=0.04)
    plt.title('%s path' % gtxt[i])
    plt.subplot(2,3,3+i+1)
    plt.imshow(m_std.T, zorder=-1, vmin=0, vmax=0.4, cmap='gray')
    plt.title('std')
    plt.colorbar(fraction=0.046, pad=0.04)
"""
Explanation: Example 1: co-located soft data only
End of explanation
"""
# 1 non-co-located soft data value
O.par['n_cond_soft']=1

shuffle_simulation_grid_arr = [0,1,2]
fig = plt.figure(figsize=(15, 8))
for i in range(len(shuffle_simulation_grid_arr)):
    # Set the simulation path
    O.par['shuffle_simulation_grid']=shuffle_simulation_grid_arr[i]
    O.delete_local_files()
    O.run_parallel()
    m_mean, m_std, m_mode=O.etype()
    plt.subplot(2,3,i+1)
    plt.imshow(m_mean.T, zorder=-1, vmin=0, vmax=1, cmap='hot')
    plt.colorbar(fraction=0.046, pad=0.04)
    plt.title('%s path' % gtxt[i])
    plt.subplot(2,3,3+i+1)
    plt.imshow(m_std.T, zorder=-1, vmin=0, vmax=0.4, cmap='gray')
    plt.title('std')
    plt.colorbar(fraction=0.046, pad=0.04)
"""
Explanation: Example 2: 1 non-co-located soft data value only
End of explanation
"""
# 3 (all) non-co-located soft data values
O.par['n_cond_soft']=3

shuffle_simulation_grid_arr = [0,1,2]
fig = plt.figure(figsize=(15, 8))
for i in range(len(shuffle_simulation_grid_arr)):
    # Set the simulation path
    O.par['shuffle_simulation_grid']=shuffle_simulation_grid_arr[i]
    O.delete_local_files()
    O.run_parallel()
    m_mean, m_std, m_mode=O.etype()
    plt.subplot(2,3,i+1)
    plt.imshow(m_mean.T, zorder=-1, vmin=0, vmax=1, cmap='hot')
    plt.colorbar(fraction=0.046, pad=0.04)
    plt.title('%s path' % gtxt[i])
    plt.subplot(2,3,3+i+1)
    plt.imshow(m_std.T, zorder=-1, vmin=0, vmax=0.4, cmap='gray')
    plt.title('std')
    plt.colorbar(fraction=0.046, pad=0.04)
"""
Explanation: Example 3: 3 (all) non-co-located soft data only
End of explanation
"""
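The preferential-path idea — visit the most informed (lowest-entropy) nodes first — can be illustrated with a small NumPy sketch. This is a toy ordering under made-up soft probabilities, not MPSlib's internal implementation:

```python
import numpy as np

# soft probabilities P(m=0) for 5 grid nodes; 0.5 means completely uninformed
p0 = np.array([0.5, 0.999, 0.5, 0.7, 0.001])
p = np.stack([p0, 1.0 - p0], axis=1)   # rows are [P(m=0), P(m=1)]

# Shannon entropy of each node's soft distribution (0*log 0 treated as 0)
plogp = np.where(p > 0, p * np.log(p), 0.0)
entropy = -plogp.sum(axis=1)

# preferential path: simulate low-entropy (most informed) nodes first
path = np.argsort(entropy)
print(path)
```

Nodes with near-certain soft data (P close to 0 or 1) end up at the front of the path; uninformed 50/50 nodes come last.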
ml4a/ml4a-guides
examples/models/tacotron2.ipynb
gpl-2.0
%tensorflow_version 1.x !pip3 install --quiet ml4a """ Explanation: tacotron2: Text-to-speech synthesis Generates speech audio from a text string. See the original code and paper. Set up ml4a and enable GPU If you don't already have ml4a installed, or you are opening this in Colab, first enable GPU (Runtime > Change runtime type), then run the following cell to install ml4a and its dependencies. End of explanation """ from ml4a import audio from ml4a.models import tacotron2 text = 'Hello everyone!' speech = tacotron2.run(text) """ Explanation: Run tacotron2 End of explanation """ audio.display(speech.wav, speech.sampling_rate) """ Explanation: speech contains the raw waveform and sampling rate, which can be played back. End of explanation """ %matplotlib inline audio.plot(speech.wav, speech.sampling_rate) """ Explanation: You can also plot the waveform. End of explanation """
eugen/mlstudy
3. Logistic Regression/3. Binary Classification Exercise.ipynb
apache-2.0
%matplotlib inline
import sqlite3
import pandas as pd
import numpy as np
import nltk
import string
import matplotlib.pyplot as plt
import matplotlib as mpl

from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn import metrics
from sklearn.metrics import roc_curve, auc
from nltk.stem.porter import PorterStemmer
"""
Explanation: Making predictions over amazon fine food reviews dataset
Predictions
The purpose of this analysis is to build a prediction model with which we will be able to predict whether a recommendation is positive or negative. In this analysis, we will not focus on the Score, but only on the positive/negative sentiment of the recommendation.
To do so, we will work on Amazon's recommendation dataset and build a term-doc incidence matrix using term frequency and inverse document frequency weighting. When the data is ready, we will load it into predictive algorithms, mainly naïve Bayes and regression. In the end, we hope to find a "best" model for predicting the recommendation's sentiment.
Loading the data
In order to load the data, we will use the SQLITE dataset, from which we will only fetch the Score and the recommendation summary.
As we only want to get the global sentiment of the recommendations (positive or negative), we will purposefully ignore all Scores equal to 3. If the score is above 3, then the recommendation will be set to "positive". Otherwise, it will be set to "negative". 
The data will be split into a training set and a test set with a test set ratio of 0.2
End of explanation
"""
import os
from IPython.core.display import display, HTML

if not os.path.isfile('database.sqlite'):
    display(HTML("<h3 style='color: red'>Dataset database missing!</h3><h3> Please download it "+
                 "<a href='https://www.kaggle.com/snap/amazon-fine-food-reviews'>from here on Kaggle</a> "+
                 "and extract it to the current directory."))
    raise(Exception("missing dataset"))

con = sqlite3.connect('database.sqlite')
pd.read_sql_query("SELECT * FROM Reviews LIMIT 3", con)
"""
Explanation: Let's first check whether we have the dataset available:
End of explanation
"""
messages = pd.read_sql_query("""
SELECT
  Score,
  Summary,
  HelpfulnessNumerator as VotesHelpful,
  HelpfulnessDenominator as VotesTotal
FROM Reviews
WHERE Score != 3""", con)
"""
Explanation: Let's select only what's of interest to us:
End of explanation
"""
messages.head(5)
"""
Explanation: Let's see what we've got:
End of explanation
"""
messages["Sentiment"] = messages["Score"].apply(lambda score: "positive" if score > 3 else "negative")
messages.head(2)
"""
Explanation: Let's add the Sentiment column that turns the numeric score into either positive or negative.
End of explanation
"""
messages["Usefulness"] = TODO
messages.head(2)
"""
Explanation: Similarly, the Usefulness column turns the number of votes into either useful or useless using the formula (VotesHelpful/VotesTotal) > 0.8
End of explanation
"""
messages[messages.Score == 5].head(10)
"""
Explanation: Let's have a look at some 5s:
End of explanation
"""
TODO: select some reviews with score 1
"""
Explanation: And some 1s as well:
End of explanation
"""
from wordcloud import WordCloud, STOPWORDS
# Note: you need to install wordcloud with pip. 
# On Windows, you might need a binary package obtainable from here: http://www.lfd.uci.edu/~gohlke/pythonlibs/#wordcloud stopwords = set(STOPWORDS) #mpl.rcParams['figure.figsize']=(8.0,6.0) #(6.0,4.0) mpl.rcParams['font.size']=12 #10 mpl.rcParams['savefig.dpi']=100 #72 mpl.rcParams['figure.subplot.bottom']=.1 def show_wordcloud(data, title = None): wordcloud = WordCloud( background_color='white', stopwords=stopwords, max_words=200, max_font_size=40, scale=3, random_state=1 # chosen at random by flipping a coin; it was heads ).generate(str(data)) fig = plt.figure(1, figsize=(8, 8)) plt.axis('off') if title: fig.suptitle(title, fontsize=20) fig.subplots_adjust(top=2.3) plt.imshow(wordcloud) plt.show() show_wordcloud(messages["Summary_Clean"]) """ Explanation: Let's do some exploratory data analysis with WordClouds! End of explanation """ TODO: create word cloud from negative reviews TODO: create word cloud from positive reviews """ Explanation: We can also view wordclouds for only positive or only negative entries: End of explanation """ # first do some cleanup from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer import re import string cleanup_re = re.compile('[^a-z]+') def cleanup(sentence): sentence = sentence.lower() sentence = cleanup_re.sub(' ', sentence).strip() return sentence messages["Summary_Clean"] = messages["Summary"].apply(cleanup) train, test = train_test_split(messages, test_size=0.2) print("%d items in training data, %d in test data" % (len(train), len(test))) # To cleanup stop words, add stop_words = STOPWORDS # But it seems to function better without it count_vect = CountVectorizer(min_df = 1, ngram_range = (1, 4)) X_train_counts = count_vect.fit_transform(train["Summary_Clean"]) tfidf_transformer = TfidfTransformer() X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts) X_test_counts = count_vect.transform(test["Summary_Clean"]) X_test_tfidf = tfidf_transformer.transform(X_test_counts) y_train = 
train["Sentiment"] y_test = test["Sentiment"] # prepare prediction = dict() """ Explanation: Extracting features from text data SciKit cannot work with words, so we'll just assign a new dimention to each word and work with word counts. See more here: http://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html End of explanation """ word_features = count_vect.get_feature_names() word_features[10000:10010] chosen_word_idx = 99766 chosen_word_indices = np.nonzero(X_train_counts[:,chosen_word_idx].toarray().ravel())[0] for i in chosen_word_indices[0:10]: print("'%s' appears %d times in: %s" % ( word_features[chosen_word_idx], X_train_counts[i,chosen_word_idx], train["Summary"].values[i] )) """ Explanation: Let's explore a bit what those did End of explanation """ #TODO: find the counts for "gluten" and the reviews it appears in """ Explanation: Let's now find the counts for "gluten" and the phrases that contain that End of explanation """ from sklearn.naive_bayes import MultinomialNB model = MultinomialNB().fit(X_train_tfidf, y_train) prediction['Multinomial'] = model.predict(X_test_tfidf) """ Explanation: Create a Multinomial Naïve Bayes model End of explanation """ from sklearn.naive_bayes import BernoulliNB model = BernoulliNB().fit(X_train_tfidf, y_train) prediction['Bernoulli'] = model.predict(X_test_tfidf) """ Explanation: Create a Bernoulli Naïve Bayes model End of explanation """ from sklearn.linear_model import LogisticRegression logreg = LogisticRegression(C=1e5) logreg_result = logreg.fit(X_train_tfidf, y_train) prediction['Logistic'] = logreg.predict(X_test_tfidf) """ Explanation: Create a Logistic Regression model End of explanation """ from sklearn.svm import LinearSVC linsvc = LinearSVC(C=1e5) linsvc_result = linsvc.fit(X_train_tfidf, y_train) prediction['LinearSVC'] = linsvc.predict(X_test_tfidf) """ Explanation: Create a Linear SVC model End of explanation """ def formatt(x): if x == 'negative': return 0 return 1 vfunc = 
np.vectorize(formatt)

cmp = 0
colors = ['b', 'g', 'y', 'm', 'k']
for model, predicted in prediction.items():
    false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test.map(formatt), vfunc(predicted))
    roc_auc = auc(false_positive_rate, true_positive_rate)
    plt.plot(false_positive_rate, true_positive_rate, colors[cmp], label='%s: AUC %0.2f'% (model,roc_auc))
    cmp += 1

plt.title('Classifiers comparison with ROC')
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
"""
Explanation: Analyzing Results
Before analyzing the results, let's remember what Precision and Recall are (more here https://en.wikipedia.org/wiki/Precision_and_recall)
ROC Curves
In order to compare our learning algorithms, let's build the ROC curve. The curve with the highest AUC value will show our "best" algorithm.
During a first pass at data cleaning, stop-words removal was used, but the results were much worse. The reason for this could be that when people want to say what is or is not good, they use many small words like "not", for instance, and these words will typically be tagged as stop-words and removed. This is why, in the end, it was decided to keep the stop-words. For those who would like to try it by themselves, I have left the stop-words removal as a comment in the cleaning part of the analysis. 
End of explanation
"""
for model_name in ["Logistic", "LinearSVC"]:
    print("Confusion matrix for %s" % model_name)
    print(metrics.classification_report(y_test, prediction[model_name], target_names = ["positive", "negative"]))
    print()

def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues, labels=["positive", "negative"]):
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(labels))
    plt.xticks(tick_marks, labels, rotation=45)
    plt.yticks(tick_marks, labels)
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

# Compute confusion matrix
cm = confusion_matrix(y_test, prediction['Logistic'])
np.set_printoptions(precision=2)
plt.figure()
plot_confusion_matrix(cm)
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
plt.figure()
plot_confusion_matrix(cm_normalized, title='Normalized confusion matrix')
plt.show()
"""
Explanation: After plotting the ROC curve, it would appear that the Logistic regression method provides us with the best results, although the AUC value for this method is not outstanding... It looks like the best are LogisticRegression and LinearSVC. Let's see the accuracy, recall and confusion matrix for these models:
End of explanation
"""
words = count_vect.get_feature_names()
feature_coefs = pd.DataFrame(
    data = list(zip(words, logreg_result.coef_[0])),
    columns = ['feature', 'coef'])
feature_coefs.sort_values(by='coef')

def test_sample(model, sample):
    sample_counts = count_vect.transform([sample])
    sample_tfidf = tfidf_transformer.transform(sample_counts)
    result = model.predict(sample_tfidf)[0]
    prob = model.predict_proba(sample_tfidf)[0]
    print("Sample estimated as %s: negative prob %f, positive prob %f" % (result.upper(), prob[0], prob[1]))

test_sample(logreg, "The food was delicious, it smelled great and the taste was awesome")
test_sample(logreg, "The whole experience was horrible. 
The smell was so bad that it literally made me sick.")
test_sample(logreg, "The food was ok, I guess. The smell wasn't very good, but the taste was ok.")
"""
Explanation: Let's also have a look at what the best & worst words are by looking at the coefficients:
End of explanation
"""
show_wordcloud(messages[messages.Usefulness == "useful"]["Summary_Clean"], title = "Useful")
show_wordcloud(messages[messages.Usefulness == "useless"]["Summary_Clean"], title = "Useless")
"""
Explanation: Now let's try to predict how helpful a review is
End of explanation
"""
messages_ufn = messages[messages.VotesTotal >= 10]
messages_ufn.head()
"""
Explanation: Nothing seems to pop out... let's try to limit the dataset to only entries with at least 10 votes.
End of explanation
"""
show_wordcloud(messages_ufn[messages_ufn.Usefulness == "useful"]["Summary_Clean"], title = "Useful")
show_wordcloud(messages_ufn[messages_ufn.Usefulness == "useless"]["Summary_Clean"], title = "Useless")
"""
Explanation: Now let's try again with the word clouds:
End of explanation
"""
from sklearn.pipeline import Pipeline

train_ufn, test_ufn = train_test_split(messages_ufn, test_size=0.2)

ufn_pipe = Pipeline([
    ('vect', CountVectorizer(min_df = 1, ngram_range = (1, 4))),
    ('tfidf', TfidfTransformer()),
    ('clf', LogisticRegression(C=1e5)),
])
ufn_result = ufn_pipe.fit(train_ufn["Summary_Clean"], train_ufn["Usefulness"])
prediction['Logistic_Usefulness'] = ufn_pipe.predict(test_ufn["Summary_Clean"])
print(metrics.classification_report(test_ufn["Usefulness"], prediction['Logistic_Usefulness']))
"""
Explanation: This seems a bit better, let's see if we can build a model though
End of explanation
"""
ufn_scores = [a[0] for a in ufn_pipe.predict_proba(train_ufn["Summary"])]
ufn_scores = zip(ufn_scores, train_ufn["Summary"], train_ufn["VotesHelpful"], train_ufn["VotesTotal"])
ufn_scores = sorted(ufn_scores, key=lambda t: t[0], reverse=True)
# just make this into a DataFrame since jupyter renders it nicely: 
pd.DataFrame(ufn_scores) cm = confusion_matrix(test_ufn["Usefulness"], prediction['Logistic_Usefulness']) cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] np.set_printoptions(precision=2) plt.figure() plot_confusion_matrix(cm_normalized, labels=["useful", "useless"]) """ Explanation: Let's also see which of the reviews are rated by our model as most helpful and least helpful: End of explanation """ from sklearn.pipeline import FeatureUnion from sklearn.base import BaseEstimator, TransformerMixin # Useful to select only certain features in a dataset for forwarding through a pipeline # See: http://scikit-learn.org/stable/auto_examples/hetero_feature_union.html class ItemSelector(BaseEstimator, TransformerMixin): def __init__(self, key): self.key = key def fit(self, x, y=None): return self def transform(self, data_dict): return data_dict[self.key] train_ufn2, test_ufn2 = train_test_split(messages_ufn, test_size=0.2) ufn_pipe2 = Pipeline([ ('union', FeatureUnion( transformer_list = [ ('summary', Pipeline([ ('textsel', ItemSelector(key='Summary_Clean')), ('vect', CountVectorizer(min_df = 1, ngram_range = (1, 4))), ('tfidf', TfidfTransformer())])), ('score', ItemSelector(key=['Score'])) ], transformer_weights = { 'summary': 0.2, 'score': 0.8 } )), ('model', LogisticRegression(C=1e5)) ]) ufn_result2 = ufn_pipe2.fit(train_ufn2, train_ufn2["Usefulness"]) prediction['Logistic_Usefulness2'] = ufn_pipe2.predict(test_ufn2) print(metrics.classification_report(test_ufn2["Usefulness"], prediction['Logistic_Usefulness2'])) cm = confusion_matrix(test_ufn2["Usefulness"], prediction['Logistic_Usefulness2']) cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] np.set_printoptions(precision=2) plt.figure() plot_confusion_matrix(cm_normalized, labels=["useful", "useless"]) len(ufn_result2.named_steps['model'].coef_[0]) """ Explanation: Even more complicated pipeline End of explanation """ ufn_summary_pipe = next(tr[1] for tr in 
ufn_result2.named_steps["union"].transformer_list if tr[0]=='summary') ufn_words = ufn_summary_pipe.named_steps['vect'].get_feature_names() ufn_features = ufn_words + ["Score"] ufn_feature_coefs = pd.DataFrame( data = list(zip(ufn_features, ufn_result2.named_steps['model'].coef_[0])), columns = ['feature', 'coef']) ufn_feature_coefs.sort_values(by='coef') print("And the coefficient of the Score variable: ") ufn_feature_coefs[ufn_feature_coefs.feature == 'Score'] """ Explanation: Again, let's have a look at the best/worst words: End of explanation """
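The "assign a new dimension to each word and work with word counts" idea behind CountVectorizer, used throughout this analysis, can be sketched without scikit-learn. This is a toy whitespace-tokenized version with made-up documents, just to show the term-document count matrix:

```python
from collections import Counter

docs = ["the food was great", "the smell was bad bad"]

# build the vocabulary: one dimension per distinct word
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}

# term-document count matrix: one row per document, one column per word
rows = []
for d in docs:
    counts = Counter(d.split())
    rows.append([counts.get(w, 0) for w in vocab])

print(vocab)
print(rows)
```

CountVectorizer does the same thing at scale (plus n-grams and sparse storage), and TfidfTransformer then reweights these counts by inverse document frequency.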
usantamaria/iwi131
ipynb/12-Actividad-FuncionesYCondicionales/Actividad2.ipynb
cc0-1.0
def nota_minima(nota1, nota2):
    nota3 = 164 - nota1 - nota2
    return nota3

print nota_minima(0,0)
print nota_minima(35,65)
print nota_minima(88,70)
print nota_minima(100,100)
"""
Explanation: <header class="w3-container w3-teal">
<img src="images/utfsm.png" alt="" align="left"/>
<img src="images/inf.png" alt="" align="right"/>
</header>
<br/><br/><br/><br/><br/>
IWI131
Programación de Computadores
Sebastián Flores
http://progra.usm.cl/
https://www.github.com/usantamaria/iwi131
Activity 2 Groups
CANTILLANA, CASTRO, ARÉVALO
GALLARDO, MORALES, ZEGERS
ALMEIDA, ARRATIA, CANALES
ALVARADO, BAHAMONDE, CASTILLO
CARRIEL, CODDOU, ESTRADA
CATALAN, COLLAO, ETEROVIC
CHAURA, CURIHUAL, FLORES
ESTAY, DURÁN, GRUNERT
KANZUA, HERRERA, LÓPEZ C.
LOBOS, OGALDE, PÉREZ
LOPEZ, PROBOSTE, REQUENA
SALAS, SANTELICES, TORREBLANCA
MEDIANO, REYES
SCHWERTER, SANDOVAL
Activity 2
One group per row.
Designate a group leader.
Discuss the problem (inputs, outputs, specific requirements) before starting to write code.
Activity: 3 completely independent questions.
(a) Minimum Grade (Nota Mínima)
End of explanation
"""
def nota_minima(nota1, nota2):
    nota3 = -100
    while nota3<200:
        if (nota1+nota2+nota3)/3.0>=54.5:
            return nota3
        nota3 += 1
    return nota3

print nota_minima(0,0)
print nota_minima(35,65)
print nota_minima(88,70)
print nota_minima(100,100)
"""
Explanation: (a) Minimum Grade (Nota Mínima)
End of explanation
"""
def recuperativo(nota1, nota2, nota3):
    peor_nota = min(nota1, nota2, nota3)
    nota_recuperativo = 164 - nota1 - nota2 - nota3 + peor_nota
    return nota_recuperativo

print recuperativo(0,0,0)
print recuperativo(50,40,30)
print recuperativo(35,65,45)
print recuperativo(88,70,90)
print recuperativo(100,100,100)
"""
Explanation: (b) Make-up Test (Recuperativo)
End of explanation
"""
def recuperativo(nota1, nota2, nota3):
    rec1 = nota_minima(nota2, nota3)
    rec2 = nota_minima(nota1, nota3)
    rec3 = nota_minima(nota1, nota2)
    nota_recuperativo = min(rec1, min(rec2, rec3))
    return nota_recuperativo

print recuperativo(0,0,0)
print recuperativo(50,40,30)
print recuperativo(35,65,45)
print recuperativo(88,70,90)
print recuperativo(100,100,100)
"""
Explanation: (b) Make-up Test (Recuperativo)
End of explanation
"""
def alumno_critico():
    alumno_critico = 0
    nota_alumno_critico = -1
    alumno_actual = 1
    while True:
        print "Alumno", alumno_actual
        nota1 = int(raw_input("Certamen 1: "))
        if nota1==-1:
            break
        nota2 = int(raw_input("Certamen 2: "))
        nota3_necesaria = nota_minima(nota1, nota2)
        print "nota minima C3 es: ", nota3_necesaria
        nota3 = int(raw_input("Certamen 3: "))
        if nota3_necesaria > nota3:
            CR = recuperativo(nota1, nota2, nota3)
            print "nota minima CR es: ", CR
            if CR<=100 and CR>nota_alumno_critico:
                alumno_critico = alumno_actual
                nota_alumno_critico = CR
        alumno_actual += 1
    print "Alumno mas critico:", alumno_critico

alumno_critico()
"""
Explanation: (c) Students (Alumnos)
End of explanation
"""
fmfn/BayesianOptimization
examples/exploitation_vs_exploration.ipynb
mit
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

from bayes_opt import BayesianOptimization
"""
Explanation: Exploitation vs Exploration
End of explanation
"""
np.random.seed(42)
xs = np.linspace(-2, 10, 10000)

def f(x):
    return np.exp(-(x - 2) ** 2) + np.exp(-(x - 6) ** 2 / 10) + 1/ (x ** 2 + 1)

plt.plot(xs, f(xs))
plt.show()
"""
Explanation: Target function
End of explanation
"""
def plot_bo(f, bo):
    x = np.linspace(-2, 10, 10000)
    mean, sigma = bo._gp.predict(x.reshape(-1, 1), return_std=True)
    plt.figure(figsize=(16, 9))
    plt.plot(x, f(x))
    plt.plot(x, mean)
    plt.fill_between(x, mean + sigma, mean - sigma, alpha=0.1)
    plt.scatter(bo.space.params.flatten(), bo.space.target, c="red", s=50, zorder=10)
    plt.show()
"""
Explanation: Utility function for plotting
End of explanation
"""
bo = BayesianOptimization(
    f=f,
    pbounds={"x": (-2, 10)},
    verbose=0,
    random_state=987234,
)
bo.maximize(n_iter=10, acq="ucb", kappa=0.1)
plot_bo(f, bo)
"""
Explanation: Acquisition Function "Upper Confidence Bound"
Prefer exploitation (kappa=0.1)
Note that most points are around the peak(s).
End of explanation
"""
bo = BayesianOptimization(
    f=f,
    pbounds={"x": (-2, 10)},
    verbose=0,
    random_state=987234,
)
bo.maximize(n_iter=10, acq="ucb", kappa=10)
plot_bo(f, bo)
"""
Explanation: Prefer exploration (kappa=10)
Note that the points are more spread out across the whole range.
End of explanation
"""
bo = BayesianOptimization(
    f=f,
    pbounds={"x": (-2, 10)},
    verbose=0,
    random_state=987234,
)
bo.maximize(n_iter=10, acq="ei", xi=1e-4)
plot_bo(f, bo)
"""
Explanation: Acquisition Function "Expected Improvement"
Prefer exploitation (xi=1e-4)
Note that most points are around the peak(s).
End of explanation
"""
bo = BayesianOptimization(
    f=f,
    pbounds={"x": (-2, 10)},
    verbose=0,
    random_state=987234,
)
bo.maximize(n_iter=10, acq="ei", xi=1e-1)
plot_bo(f, bo)
"""
Explanation: Prefer exploration (xi=0.1)
Note that the points are more spread out across the whole range. 
End of explanation """ bo = BayesianOptimization( f=f, pbounds={"x": (-2, 10)}, verbose=0, random_state=987234, ) bo.maximize(n_iter=10, acq="poi", xi=1e-4) plot_bo(f, bo) """ Explanation: Acquisition Function "Probability of Improvement" Prefer exploitation (xi=0.0) Note that most points are around the peak(s). End of explanation """ bo = BayesianOptimization( f=f, pbounds={"x": (-2, 10)}, verbose=0, random_state=987234, ) bo.maximize(n_iter=10, acq="poi", xi=1e-1) plot_bo(f, bo) """ Explanation: Prefer exploration (xi=0.1) Note that the points are more spread out across the whole range. End of explanation """
fastai/course-v3
nbs/dl2/cyclegan.ipynb
apache-2.0
#path = Config().data_path() #! wget https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/horse2zebra.zip -P {path} #! unzip -q -n {path}/horse2zebra.zip -d {path} #! rm {path}/horse2zebra.zip path = Config().data_path()/'horse2zebra' path.ls() """ Explanation: Data One-time download, uncomment the next cells to get the data. End of explanation """ class ImageTuple(ItemBase): def __init__(self, img1, img2): self.img1,self.img2 = img1,img2 self.obj,self.data = (img1,img2),[-1+2*img1.data,-1+2*img2.data] def apply_tfms(self, tfms, **kwargs): self.img1 = self.img1.apply_tfms(tfms, **kwargs) self.img2 = self.img2.apply_tfms(tfms, **kwargs) return self def to_one(self): return Image(0.5+torch.cat(self.data,2)/2) class TargetTupleList(ItemList): def reconstruct(self, t:Tensor): if len(t.size()) == 0: return t return ImageTuple(Image(t[0]/2+0.5),Image(t[1]/2+0.5)) class ImageTupleList(ImageList): _label_cls=TargetTupleList def __init__(self, items, itemsB=None, **kwargs): self.itemsB = itemsB super().__init__(items, **kwargs) def new(self, items, **kwargs): return super().new(items, itemsB=self.itemsB, **kwargs) def get(self, i): img1 = super().get(i) fn = self.itemsB[random.randint(0, len(self.itemsB)-1)] return ImageTuple(img1, open_image(fn)) def reconstruct(self, t:Tensor): return ImageTuple(Image(t[0]/2+0.5),Image(t[1]/2+0.5)) @classmethod def from_folders(cls, path, folderA, folderB, **kwargs): itemsB = ImageList.from_folder(path/folderB).items res = super().from_folder(path/folderA, itemsB=itemsB, **kwargs) res.path = path return res def show_xys(self, xs, ys, figsize:Tuple[int,int]=(12,6), **kwargs): "Show the `xs` and `ys` on a figure of `figsize`. `kwargs` are passed to the show method." 
rows = int(math.sqrt(len(xs))) fig, axs = plt.subplots(rows,rows,figsize=figsize) for i, ax in enumerate(axs.flatten() if rows > 1 else [axs]): xs[i].to_one().show(ax=ax, **kwargs) plt.tight_layout() def show_xyzs(self, xs, ys, zs, figsize:Tuple[int,int]=None, **kwargs): """Show `xs` (inputs), `ys` (targets) and `zs` (predictions) on a figure of `figsize`. `kwargs` are passed to the show method.""" figsize = ifnone(figsize, (12,3*len(xs))) fig,axs = plt.subplots(len(xs), 2, figsize=figsize) fig.suptitle('Ground truth / Predictions', weight='bold', size=14) for i,(x,z) in enumerate(zip(xs,zs)): x.to_one().show(ax=axs[i,0], **kwargs) z.to_one().show(ax=axs[i,1], **kwargs) data = (ImageTupleList.from_folders(path, 'trainA', 'trainB') .split_none() .label_empty() .transform(get_transforms(), size=128) .databunch(bs=4)) data.show_batch(rows=2) """ Explanation: See this tutorial for a detailed walkthrough of how/why this custom ItemList was created. End of explanation """ def convT_norm_relu(ch_in:int, ch_out:int, norm_layer:nn.Module, ks:int=3, stride:int=2, bias:bool=True): return [nn.ConvTranspose2d(ch_in, ch_out, kernel_size=ks, stride=stride, padding=1, output_padding=1, bias=bias), norm_layer(ch_out), nn.ReLU(True)] def pad_conv_norm_relu(ch_in:int, ch_out:int, pad_mode:str, norm_layer:nn.Module, ks:int=3, bias:bool=True, pad=1, stride:int=1, activ:bool=True, init:Callable=nn.init.kaiming_normal_)->List[nn.Module]: layers = [] if pad_mode == 'reflection': layers.append(nn.ReflectionPad2d(pad)) elif pad_mode == 'border': layers.append(nn.ReplicationPad2d(pad)) p = pad if pad_mode == 'zeros' else 0 conv = nn.Conv2d(ch_in, ch_out, kernel_size=ks, padding=p, stride=stride, bias=bias) if init: init(conv.weight) if hasattr(conv, 'bias') and hasattr(conv.bias, 'data'): conv.bias.data.fill_(0.) 
layers += [conv, norm_layer(ch_out)] if activ: layers.append(nn.ReLU(inplace=True)) return layers class ResnetBlock(nn.Module): def __init__(self, dim:int, pad_mode:str='reflection', norm_layer:nn.Module=None, dropout:float=0., bias:bool=True): super().__init__() assert pad_mode in ['zeros', 'reflection', 'border'], f'padding {pad_mode} not implemented.' norm_layer = ifnone(norm_layer, nn.InstanceNorm2d) layers = pad_conv_norm_relu(dim, dim, pad_mode, norm_layer, bias=bias) if dropout != 0: layers.append(nn.Dropout(dropout)) layers += pad_conv_norm_relu(dim, dim, pad_mode, norm_layer, bias=bias, activ=False) self.conv_block = nn.Sequential(*layers) def forward(self, x): return x + self.conv_block(x) def resnet_generator(ch_in:int, ch_out:int, n_ftrs:int=64, norm_layer:nn.Module=None, dropout:float=0., n_blocks:int=6, pad_mode:str='reflection')->nn.Module: norm_layer = ifnone(norm_layer, nn.InstanceNorm2d) bias = (norm_layer == nn.InstanceNorm2d) layers = pad_conv_norm_relu(ch_in, n_ftrs, 'reflection', norm_layer, pad=3, ks=7, bias=bias) for i in range(2): layers += pad_conv_norm_relu(n_ftrs, n_ftrs *2, 'zeros', norm_layer, stride=2, bias=bias) n_ftrs *= 2 layers += [ResnetBlock(n_ftrs, pad_mode, norm_layer, dropout, bias) for _ in range(n_blocks)] for i in range(2): layers += convT_norm_relu(n_ftrs, n_ftrs//2, norm_layer, bias=bias) n_ftrs //= 2 layers += [nn.ReflectionPad2d(3), nn.Conv2d(n_ftrs, ch_out, kernel_size=7, padding=0), nn.Tanh()] return nn.Sequential(*layers) resnet_generator(3, 3) def conv_norm_lr(ch_in:int, ch_out:int, norm_layer:nn.Module=None, ks:int=3, bias:bool=True, pad:int=1, stride:int=1, activ:bool=True, slope:float=0.2, init:Callable=nn.init.kaiming_normal_)->List[nn.Module]: conv = nn.Conv2d(ch_in, ch_out, kernel_size=ks, padding=pad, stride=stride, bias=bias) if init: init(conv.weight) if hasattr(conv, 'bias') and hasattr(conv.bias, 'data'): conv.bias.data.fill_(0.) 
layers = [conv] if norm_layer is not None: layers.append(norm_layer(ch_out)) if activ: layers.append(nn.LeakyReLU(slope, inplace=True)) return layers def discriminator(ch_in:int, n_ftrs:int=64, n_layers:int=3, norm_layer:nn.Module=None, sigmoid:bool=False)->nn.Module: norm_layer = ifnone(norm_layer, nn.InstanceNorm2d) bias = (norm_layer == nn.InstanceNorm2d) layers = conv_norm_lr(ch_in, n_ftrs, ks=4, stride=2, pad=1) for i in range(n_layers-1): new_ftrs = 2*n_ftrs if i <= 3 else n_ftrs layers += conv_norm_lr(n_ftrs, new_ftrs, norm_layer, ks=4, stride=2, pad=1, bias=bias) n_ftrs = new_ftrs new_ftrs = 2*n_ftrs if n_layers <=3 else n_ftrs layers += conv_norm_lr(n_ftrs, new_ftrs, norm_layer, ks=4, stride=1, pad=1, bias=bias) layers.append(nn.Conv2d(new_ftrs, 1, kernel_size=4, stride=1, padding=1)) if sigmoid: layers.append(nn.Sigmoid()) return nn.Sequential(*layers) discriminator(3) """ Explanation: Models We use the models that were introduced in the cycleGAN paper. End of explanation """ class CycleGAN(nn.Module): def __init__(self, ch_in:int, ch_out:int, n_features:int=64, disc_layers:int=3, gen_blocks:int=6, lsgan:bool=True, drop:float=0., norm_layer:nn.Module=None): super().__init__() self.D_A = discriminator(ch_in, n_features, disc_layers, norm_layer, sigmoid=not lsgan) self.D_B = discriminator(ch_in, n_features, disc_layers, norm_layer, sigmoid=not lsgan) self.G_A = resnet_generator(ch_in, ch_out, n_features, norm_layer, drop, gen_blocks) self.G_B = resnet_generator(ch_in, ch_out, n_features, norm_layer, drop, gen_blocks) #G_A: takes real input B and generates fake input A #G_B: takes real input A and generates fake input B #D_A: trained to make the difference between real input A and fake input A #D_B: trained to make the difference between real input B and fake input B def forward(self, real_A, real_B): fake_A, fake_B = self.G_A(real_B), self.G_B(real_A) if not self.training: return torch.cat([fake_A[:,None],fake_B[:,None]], 1) idt_A, idt_B = self.G_A(real_A), 
self.G_B(real_B) #Needed for the identity loss during training. return [fake_A, fake_B, idt_A, idt_B] """ Explanation: We group two discriminators and two generators in a single model, then a Callback will take care of training them properly. End of explanation """ class AdaptiveLoss(nn.Module): def __init__(self, crit): super().__init__() self.crit = crit def forward(self, output, target:bool, **kwargs): targ = output.new_ones(*output.size()) if target else output.new_zeros(*output.size()) return self.crit(output, targ, **kwargs) """ Explanation: AdaptiveLoss is a wrapper around a PyTorch loss function to compare an output of any size with a single number (0. or 1.). It will generate a target with the same shape as the output. A discriminator returns a feature map, and we want it to predict zeros (or ones) for each feature. End of explanation """ class CycleGanLoss(nn.Module): def __init__(self, cgan:nn.Module, lambda_A:float=10., lambda_B:float=10, lambda_idt:float=0.5, lsgan:bool=True): super().__init__() self.cgan,self.l_A,self.l_B,self.l_idt = cgan,lambda_A,lambda_B,lambda_idt self.crit = AdaptiveLoss(F.mse_loss if lsgan else F.binary_cross_entropy) def set_input(self, input): self.real_A,self.real_B = input def forward(self, output, target): fake_A, fake_B, idt_A, idt_B = output #Generators should return identity on the datasets they try to convert to self.id_loss = self.l_idt * (self.l_A * F.l1_loss(idt_A, self.real_A) + self.l_B * F.l1_loss(idt_B, self.real_B)) #Generators are trained to trick the discriminators so the following should be ones self.gen_loss = self.crit(self.cgan.D_A(fake_A), True) + self.crit(self.cgan.D_B(fake_B), True) #Cycle loss self.cyc_loss = self.l_A * F.l1_loss(self.cgan.G_A(fake_B), self.real_A) self.cyc_loss += self.l_B * F.l1_loss(self.cgan.G_B(fake_A), self.real_B) return self.id_loss+self.gen_loss+self.cyc_loss """ Explanation: The main loss used to train the generators. 
It has three parts: - the classic GAN loss: they must make the critics believe their images are real - identity loss: if they are given an image from the set they are trying to imitate, they should return the same thing - cycle loss: if an image from A goes through the generator that imitates B then through the generator that imitates A, it should be the same as the initial image. Same for B and switching the generators End of explanation """ class CycleGANTrainer(LearnerCallback): _order = -20 #Need to run before the Recorder def _set_trainable(self, D_A=False, D_B=False): gen = (not D_A) and (not D_B) requires_grad(self.learn.model.G_A, gen) requires_grad(self.learn.model.G_B, gen) requires_grad(self.learn.model.D_A, D_A) requires_grad(self.learn.model.D_B, D_B) if not gen: self.opt_D_A.lr, self.opt_D_A.mom = self.learn.opt.lr, self.learn.opt.mom self.opt_D_A.wd, self.opt_D_A.beta = self.learn.opt.wd, self.learn.opt.beta self.opt_D_B.lr, self.opt_D_B.mom = self.learn.opt.lr, self.learn.opt.mom self.opt_D_B.wd, self.opt_D_B.beta = self.learn.opt.wd, self.learn.opt.beta def on_train_begin(self, **kwargs): self.G_A,self.G_B = self.learn.model.G_A,self.learn.model.G_B self.D_A,self.D_B = self.learn.model.D_A,self.learn.model.D_B self.crit = self.learn.loss_func.crit if not getattr(self,'opt_G',None): self.opt_G = self.learn.opt.new([nn.Sequential(*flatten_model(self.G_A), *flatten_model(self.G_B))]) else: self.opt_G.lr,self.opt_G.wd = self.opt.lr,self.opt.wd self.opt_G.mom,self.opt_G.beta = self.opt.mom,self.opt.beta if not getattr(self,'opt_D_A',None): self.opt_D_A = self.learn.opt.new([nn.Sequential(*flatten_model(self.D_A))]) if not getattr(self,'opt_D_B',None): self.opt_D_B = self.learn.opt.new([nn.Sequential(*flatten_model(self.D_B))]) self.learn.opt.opt = self.opt_G.opt self._set_trainable() self.id_smter,self.gen_smter,self.cyc_smter = SmoothenValue(0.98),SmoothenValue(0.98),SmoothenValue(0.98) self.da_smter,self.db_smter = 
SmoothenValue(0.98),SmoothenValue(0.98) self.recorder.add_metric_names(['id_loss', 'gen_loss', 'cyc_loss', 'D_A_loss', 'D_B_loss']) def on_batch_begin(self, last_input, **kwargs): self.learn.loss_func.set_input(last_input) def on_backward_begin(self, **kwargs): self.id_smter.add_value(self.loss_func.id_loss.detach().cpu()) self.gen_smter.add_value(self.loss_func.gen_loss.detach().cpu()) self.cyc_smter.add_value(self.loss_func.cyc_loss.detach().cpu()) def on_batch_end(self, last_input, last_output, **kwargs): self.G_A.zero_grad(); self.G_B.zero_grad() fake_A, fake_B = last_output[0].detach(), last_output[1].detach() real_A, real_B = last_input self._set_trainable(D_A=True) self.D_A.zero_grad() loss_D_A = 0.5 * (self.crit(self.D_A(real_A), True) + self.crit(self.D_A(fake_A), False)) self.da_smter.add_value(loss_D_A.detach().cpu()) loss_D_A.backward() self.opt_D_A.step() self._set_trainable(D_B=True) self.D_B.zero_grad() loss_D_B = 0.5 * (self.crit(self.D_B(real_B), True) + self.crit(self.D_B(fake_B), False)) self.db_smter.add_value(loss_D_B.detach().cpu()) loss_D_B.backward() self.opt_D_B.step() self._set_trainable() def on_epoch_end(self, last_metrics, **kwargs): return add_metrics(last_metrics, [s.smooth for s in [self.id_smter,self.gen_smter,self.cyc_smter, self.da_smter,self.db_smter]]) """ Explanation: The main callback to train a cycle GAN. The training loop will train the generators (so learn.opt is given those parameters) while the critics are trained by the callback during on_batch_end. 
End of explanation """ cycle_gan = CycleGAN(3,3, gen_blocks=9) learn = Learner(data, cycle_gan, loss_func=CycleGanLoss(cycle_gan), opt_func=partial(optim.Adam, betas=(0.5,0.99)), callback_fns=[CycleGANTrainer]) learn.lr_find() learn.recorder.plot() learn.fit(100, 1e-4) learn.save('100fit') learn = learn.load('100fit') """ Explanation: Training End of explanation """ learn.show_results(ds_type=DatasetType.Train, rows=2) learn.show_results(ds_type=DatasetType.Train, rows=2) """ Explanation: Let's look at some results using Learner.show_results. End of explanation """ len(learn.data.train_ds.items),len(learn.data.train_ds.itemsB) def get_batch(filenames, tfms, **kwargs): samples = [open_image(fn) for fn in filenames] for s in samples: s = s.apply_tfms(tfms, **kwargs) batch = torch.stack([s.data for s in samples], 0).cuda() return 2. * (batch - 0.5) fnames = learn.data.train_ds.items[:8] x = get_batch(fnames, get_transforms()[1], size=128) learn.model.eval() tfms = get_transforms()[1] bs = 16 def get_losses(fnames, gen, crit, bs=16): losses_in,losses_out = [],[] with torch.no_grad(): for i in progress_bar(range(0, len(fnames), bs)): xb = get_batch(fnames[i:i+bs], tfms, size=128) fakes = gen(xb) preds_in,preds_out = crit(xb),crit(fakes) loss_in = learn.loss_func.crit(preds_in, True,reduction='none') loss_out = learn.loss_func.crit(preds_out,True,reduction='none') losses_in.append(loss_in.view(loss_in.size(0),-1).mean(1)) losses_out.append(loss_out.view(loss_out.size(0),-1).mean(1)) return torch.cat(losses_in),torch.cat(losses_out) losses_A = get_losses(data.train_ds.x.items, learn.model.G_B, learn.model.D_B) losses_B = get_losses(data.train_ds.x.itemsB, learn.model.G_A, learn.model.D_A) def show_best(fnames, losses, gen, n=8): sort_idx = losses.argsort().cpu() _,axs = plt.subplots(n//2, 4, figsize=(12,2*n)) xb = get_batch(fnames[sort_idx][:n], tfms, size=128) with torch.no_grad(): fakes = gen(xb) xb,fakes = (1+xb.cpu())/2,(1+fakes.cpu())/2 for i in range(n): 
axs.flatten()[2*i].imshow(xb[i].permute(1,2,0)) axs.flatten()[2*i].axis('off') axs.flatten()[2*i+1].imshow(fakes[i].permute(1,2,0)) axs.flatten()[2*i+1].set_title(losses[sort_idx][i].item()) axs.flatten()[2*i+1].axis('off') show_best(data.train_ds.x.items, losses_A[1], learn.model.G_B) show_best(data.train_ds.x.itemsB, losses_B[1], learn.model.G_A) """ Explanation: Now let's go through all the images of the training set and find the ones that are the best converted (according to our critics) or the worst converted. End of explanation """
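The shape-matching trick in AdaptiveLoss above is easy to see in isolation: build an all-ones (real) or all-zeros (fake) target with the same shape as the critic's feature map, then apply the wrapped criterion. A plain-numpy sketch of that idea (an illustration with a toy feature map, not the fastai/PyTorch code):

```python
import numpy as np

def adaptive_mse(output, target_is_real):
    # the target has the same shape as the critic's patch output
    targ = np.ones_like(output) if target_is_real else np.zeros_like(output)
    return ((output - targ) ** 2).mean()

fmap = np.full((1, 8, 8), 0.75)      # a toy critic feature map
print(adaptive_mse(fmap, True))      # 0.0625  (wants 1s, each entry is off by 0.25)
print(adaptive_mse(fmap, False))     # 0.5625  (wants 0s, each entry is off by 0.75)
```

This is why a single boolean is enough to drive the LSGAN criterion over a feature map of any spatial size.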
Hexiang-Hu/mmds
final/Final-basic.ipynb
mit
## Q2 Solution. def hash(x): return math.fmod(3 * x + 2, 11) for i in xrange(1,12): print hash(i) """ Explanation: Q1. Solution 3-shingles for "hello world": hel, ell, llo, lo_, o_w ,_wo, wor, orl, rld => 9 in total Q2. Solution End of explanation """ ## Q3 Solution. prob = 1.0 / 10 a = (1 - prob)**4 print a b = (1 - ( 1 - (1 - prob)**2) )**2 print b c = (1 - (1.0 /10 * 1.0 / 9)) print c """ Explanation: Q3. This question involves three different Bloom-filter-like scenarios. Each scenario involves setting to 1 certain bits of a 10-bit array, each bit of which is initially 0. Scenario A: we use one hash function that randomly, and with equal probability, selects one of the ten bits of the array. We apply this hash function to four different inputs and set to 1 each of the selected bits. Scenario B: We use two hash functions, each of which randomly, with equal probability, and independently of the other hash function selects one of the of 10 bits of the array. We apply both hash functions to each of two inputs and set to 1 each of the selected bits. Scenario C: We use one hash function that randomly and with equal probability selects two different bits among the ten in the array. We apply this hash function to two inputs and set to 1 each of the selected bits. Let a, b, and c be the expected number of bits set to 1 under scenarios A, B, and C, respectively. Which of the following correctly describes the relationships among a, b, and c? End of explanation """ ## Q5 Solution. vec1 = np.array([2, 1, 1]) vec2 = np.array([10, -7, 1]) print vec1.dot(vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2)) """ Explanation: Q4. In this market-basket problem, there are 99 items, numbered 2 to 100. There is a basket for each prime number between 2 and 100. The basket for p contains all and only the items whose numbers are a multiple of p. For example, the basket for 17 contains the following items: {17, 34, 51, 68, 85}. What is the support of the pair of items {12, 30}? 
Q4 Solution. support = 2 => {2,4,6,8, ...} & {3, 6, 9,...} Q5. To two decimal places, what is the cosine of the angle between the vectors [2,1,1] and [10,-7,1]? End of explanation """ ## Q6 Solution. # probability that they agree at one particular band p1 = 0.6**2 print (1 - p1)**3 """ Explanation: Q6. In this question we use six minhash functions, organized as three bands of two rows each, to identify sets of high Jaccard similarity. If two sets have Jaccard similarity 0.6, what is the probability (to two decimal places) that this pair will become a candidate pair? End of explanation """ ## Q7 Solution. p1 = 1 - (1 - .9)**3 p2 = 1 - (1 - .1)**3 print "new LSH is (.4, .6, {}, {})-sensitive family".format(p1, p2) """ Explanation: Q7. Suppose we have a (.4, .6, .9, .1)-sensitive family of functions. If we apply a 3-way OR construction to this family, we get a new family of functions whose sensitivity is: End of explanation """ ## Q9 Solution. M = np.array([[0, 0, 0, .25], [1, 0, 0, .25], [0, 1, 0, .25], [0, 0, 1, .25]]) r = np.array([.25, .25, .25, .25]) for i in xrange(30): r = M.dot(r) print r """ Explanation: Q8. Suppose we have a database of (Class, Student, Grade) facts, each giving the grade the student got in the class. We want to estimate the fraction of students who have gotten A's in at least 10 classes, but we do not want to examine the entire relation, just a sample of 10% of the tuples. We shall hash tuples to 10 buckets, and take only those tuples in the first bucket. But to get a valid estimate of the fraction of students with at least 10 A's, we need to pick our hash key judiciously. To which Attribute(s) of the relation should we apply the hash function? Q8 Solution. 
We need to hash on the Student attribute only: then each student's tuples are either all in the sample or all out of it, so we can count the sampled students with at least 10 A's. Q9 Suppose the Web consists of four pages A, B, C, and D, that form a chain A-->B-->C-->D We wish to compute the PageRank of each of these pages, but since D is a "dead end," we will "teleport" from D with probability 1 to one of the four pages, each with equal probability. We do not teleport from pages A, B, or C. Assuming the sum of the PageRanks of the four pages is 1, what is the PageRank of page B, correct to two decimal places? End of explanation """ ## Q9 Solution. M = np.array([[0, 0, 0, .25], [1, 0, 0, .25], [0, 1, 0, .25], [0, 0, 1, .25]]) r = np.array([.25, .25, .25, .25]) for i in xrange(30): r = M.dot(r) print r """ Explanation: Q10. Suppose in the AGM model we have four individuals {A,B,C,D} and two communities. Community 1 consists of {A,B,C} and Community 2 consists of {B,C,D}. For Community 1 there is a 30% chance it will cause an edge between any two of its members. For Community 2 there is a 40% chance it will cause an edge between any two of its members. To the nearest two decimal places, what is the probability that there is an edge between B and C? End of explanation """ ## Q10 Solution. print 1 - (1 - .3)*(1 - .4) """ Explanation: Q11. X is a dataset of n columns for which we train a supervised Machine Learning algorithm. e is the error of the model measured against a validation dataset. Unfortunately, e is too high because the model has overfitted on the training data X and it doesn't generalize well. We now decide to reduce the model variance by reducing the dimensionality of X, using a Singular Value Decomposition, and using the resulting dataset to train our model. If i is the number of singular values used in the SVD reduction, how does e change as a function of i, for i ∈ {1, 2,...,n}? Solution. 
e is a convex function of i that starts low: with few singular values the model variance (and hence e) is small, and e rises again as i approaches n, where the original overfitting returns. End of explanation """ ##Q12 L = np.array([[-.25, -.5, -.76, -.29, -.03, -.07, -.01], [-.05, -.1, -.15, .20, .26, .51, .77 ]]).T print L V = np.array([[6.74, 0],[0, 5.44]]) print V R = np.array([[-.57, -.11, -.57, -.11, -.57], [-.09, 0.70, -.09, .7, -.09]]) print R print L.dot(V).dot(R) """ Explanation: Q13. Recall that the power iteration does r=X·r until converging, where X is an nxn matrix and n is the number of nodes in the graph. Using the power iteration notation above, what is the value of matrix X when solving topic-sensitive PageRank with teleport set {0,1} for the following graph? Use beta=0.8. (Recall that the teleport set contains the destination nodes used when teleporting). End of explanation """ X = 0.8 * np.array([[1.0/3, 0, 0], [1.0/3, 0, 0], [1.0/3, 1, 0]]) X += 0.2 * np.array([[.5, .5, .5], [.5, .5, .5], [ 0, 0, 0]]) print X """ Explanation: Q14. Here are two sets of integers S = {1,2,3,4} and T = {1,2,5,6,x}, where x stands for some integer. For how many different integer values of x are the Jaccard similarity and the Jaccard distance of S and T the same? (Note: x can be one of 1, 2, 5, or 6, but in that case T, being a set, will contain x only once and thus have four members, not five.) Solution. x = 3 or x = 4: in either case |S ∩ T| = 3 and |S ∪ T| = 6, so the Jaccard similarity and the Jaccard distance both equal 1/2. Q15. Which of the following are advantages of using decision trees? (check all correct options) Solution. My answer: It can handle categorical input data without any special preprocessing The resulting model is easy to interpret The training is easy to parallelize Q16. Consider a dataset of points x1,...,xn with labels y1,...,yn ∈ {-1, 1}, such that the data is separable. We run a soft-margin SVM and a hard-margin SVM, and in each case we obtain parameters w and b. Check the option that is true: The resulting w and b can be different, and the boundaries can be different. The resulting w and b are the same in the two cases, hence boundaries are the same. The resulting w and b can be different in the two cases, but the boundaries are the same. None of the above. Q17. Consider the following MapReduce algorithm. The input is a collection of positive integers. Given integer X, the Map function produces a tuple with key Y and value X for each prime divisor Y of X. For example, if X = 20, there are two key-value pairs: (2,20) and (5,20). The Reduce function, given a key K and list L of values, produces a tuple with key K and value sum(L) i.e., the sum of the values in the list. Given the input 9, 15, 16, 23, 25, 27, 28, 56 which of the following tuples appears in the final output? Solution. {2, 16 + 28 + 56 } => {2, 100} {3, 9 + 15 + 27 } => {3, 51} {5, 15 + 25} => {5, 40} {7, 28 + 56} => {7, 84} Q18. Suppose we run K-means clustering over the following set of points in 2-d space using the L1 distance metric: (1,1), (2,1), (2,2), (3,3), (4,2), (2,4), (4,4). We pick k=2 and the initial centroids are (1,1) and (4,4). Which of these is the centroid of the cluster containing the point (3,3) when the algorithm terminates? Recall that the L1 distance between two points is the sum of their distances along each dimension, e.g. the L1 distance between (1, 2) and (-1, 3) is 3. End of explanation """ def L1(p1, p2): return abs(p1[0] - p2[0]) + abs(p1[1] - p2[1]) points = [(1, 1), (2, 1), (2, 2), (3, 3), (4, 2), (2, 4), (4, 4)] centroids = [(1, 1), (4, 4)] def centroid(cl): return tuple(float(sum(c)) / len(cl) for c in zip(*cl)) def step(cs): return [centroid([p for p in points if L1(p, cs[0]) <= L1(p, cs[1])]), centroid([p for p in points if L1(p, cs[0]) > L1(p, cs[1])])] for _ in xrange(10): centroids = step(centroids) print centroids # the cluster containing (3,3) ends with centroid (3.25, 3.25) """ Explanation: Q20. Consider an execution of the BALANCE algorithm with 4 advertisers, A1, A2, A3, A4, and 4 kinds of queries, Q1, Q2, Q3, Q4. Advertiser A1 bids on queries Q1 and Q2; A2 bids on queries Q2 and Q3; A3 on queries Q3 and Q4; and A4 on queries Q1 and Q4. All bids are equal to 1, and all clickthrough rates are equal. All advertisers have a budget of 3, and ties are broken in favor of the advertiser with the lower index (e.g., A1 beats A2). Queries appear in the following order: Q1, Q2, Q3, Q3, Q1, Q2, Q3, Q1, Q4, Q1 Which advertiser’s budget is exhausted first? 
Solution Tracing BALANCE (each query goes to the eligible advertiser with the largest remaining budget, ties to the lower index) gives the winner sequence A1, A2, A3, A2, A4, A1, A3, A4, A3, A1. A3 wins the 3rd, 7th and 9th queries, so A3's budget is exhausted first (at the 9th query, Q4); A1 does not run out until the final query. Q21. Consider the bipartite graph with the following edges (you might want to draw a picture): (a,1), (a,3), (b,1), (b,2), (b,4), (c,2), (d,1), (d,4) Which of the following edges appears in NO perfect matching? Solution c can only be matched to 2, which forces a-3 and leaves b and d to split {1,4}: perfect match 1: a-3, b-4, c-2, d-1 perfect match 2: a-3, b-1, c-2, d-4 So the edges (a,1) and (b,2) appear in no perfect matching. Q22. The Utility Matrix below captures the ratings of 5 users (A,B,C,D,E) for 5 movies (P,Q,R,S,T). Each known rating is a number between 1 and 5, and blanks represent unknown ratings. What is the Pearson Correlation (also known as the Centered Cosine) between users B and D? <pre> P Q R S T A 2 4 B 3 1 2 C 5 5 D 4 3 2 E 4 5 1 </pre> End of explanation """ ## Solution vec1 = np.array([0, 1, -1, 0, 0]) vec2 = np.array([0, 1, 0, 0, -1]) print vec1.dot(vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2)) """ Explanation: Q23. The Utility Matrix below captures the ratings of 5 users (A,B,C,D,E) for 5 movies (P,Q,R,S,T). Each known rating is a number between 1 and 5, and blanks represent unknown ratings. Let (U,M) denote the rating of movie M by user U. We evaluate a Recommender System by withholding the ratings (A,P), (B,Q), and (C,S). The recommender system estimates (A,P)=1, (B,Q)=4, and (C,S)=5. What is the RMSE of the Recommender System, rounded to 2 decimal places? <pre> P Q R S T A 2 4 B 3 1 2 C 5 5 D 4 3 2 E 4 5 1 </pre> End of explanation """ ## Solution print "RMSE = {}".format(math.sqrt(2.0 / 3)) """ Explanation: End of explanation """
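The BALANCE trace in Q20 can be double-checked with a few lines of code (written in Python 3 syntax, unlike the Python 2 cells above; the tie-break key encodes the stated lower-index-wins rule):

```python
def first_exhausted(bids, budget, queries):
    # assign each query to the eligible advertiser with the largest
    # remaining budget; break ties in favor of the lower index
    left = {a: budget for ads in bids.values() for a in ads}
    for q in queries:
        eligible = [a for a in bids[q] if left[a] > 0]
        winner = max(eligible, key=lambda a: (left[a], -int(a[1:])))
        left[winner] -= 1
        if left[winner] == 0:
            return winner  # first advertiser whose budget hits zero
    return None

bids = {'Q1': ['A1', 'A4'], 'Q2': ['A1', 'A2'],
        'Q3': ['A2', 'A3'], 'Q4': ['A3', 'A4']}
queries = ['Q1', 'Q2', 'Q3', 'Q3', 'Q1', 'Q2', 'Q3', 'Q1', 'Q4', 'Q1']
print(first_exhausted(bids, 3, queries))  # -> A3
```

Running the simulation confirms that A3 runs out of budget at the 9th query.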
DaveBackus/Data_Bootcamp
Code/IPython/bootcamp_advgraphics_seaborn.ipynb
mit
import matplotlib as mpl import matplotlib.pyplot as plt import pandas as pd import seaborn as sns import sys %matplotlib inline """ Explanation: Graphics using Seaborn We previously have covered how to do some basic graphics using matplotlib. In this notebook we introduce a package called seaborn. seaborn builds on top of matplotlib by doing 2 things: Gives us access to more types of plots (Note: Every plot created in seaborn could be made by matplotlib, but you shouldn't have to worry about doing this) Sets better defaults for how the plot looks right away Before we start, make sure that you have seaborn installed. If not, then you can install it by conda install seaborn This notebook was created by Dave Backus, Chase Coleman, and Spencer Lyon for the NYU Stern course Data Bootcamp. End of explanation """ # check versions (overkill, but why not?) print('Python version:', sys.version) print('Pandas version: ', pd.__version__) print('Matplotlib version: ', mpl.__version__) print('Seaborn version: ', sns.__version__) """ Explanation: As per usual, we begin by listing the versions of each package that is used in this notebook. End of explanation """ tips = sns.load_dataset("tips") ansc = sns.load_dataset("anscombe") tita = sns.load_dataset("titanic") """ Explanation: Datasets There are some classical datasets that get used to demonstrate different types of plots. We will use several of them here. tips : This dataset has informaiton on waiter tips. Includes information such as total amount of the bill, tip amount, sex of waiter, what day of the week, which meal, and party size. anscombe: This dataset is a contrived example. It has 4 examples which differ drastically when you look at them, but they have the same correlation, regression coefficient, and $R^2$. titanic : This dataset has information on each of the passengers who were on the titanic. Includes information such as: sex, age, ticket class, fare paid, whether they were alone, and more. 
End of explanation """ plt.style.use("classic") def plot_tips(): fig, ax = plt.subplots(2, figsize=(8, 6)) tips[tips["sex"] == "Male"].plot(x="total_bill", y="tip", ax=ax[0], kind="scatter", color="blue") tips[tips["sex"] == "Female"].plot(x="total_bill", y="tip", ax=ax[1], kind="scatter", color="#F52887") ax[0].set_xlim(0, 60) ax[1].set_xlim(0, 60) ax[0].set_ylim(0, 15) ax[1].set_ylim(0, 15) ax[0].set_title("Male Tips") ax[1].set_title("Female Tips") fig.tight_layout() plot_tips() # fig.savefig("/home/chase/Desktop/foo.png") # sns.set() resets default seaborn settings sns.set() plot_tips() """ Explanation: Better Defaults Recall that in our previous notebook that we used plt.style.use to set styles. We will begin by setting the style to "classic"; this sets all of our default settings back to matplotlib's default values. Below we plot open and closing prices on the top axis and the implied returns on the bottom axis. End of explanation """ plt.style.available """ Explanation: What did you notice about the differences in the settings of the plot? Which do you like better? We like the second better. Investigate other styles and create the same plot as above using a style you like. You can choose from the list in the code below. If you have additional time, visit the seaborn docs and try changing other default settings. End of explanation """ sns.axes_style? sns.set() plot_tips() """ Explanation: We could do the same for a different style (like ggplot) End of explanation """ # Move back to seaborn defaults sns.set() """ Explanation: Exercise: Find a style you like and recreate the plot above using that style. The Juicy Stuff While having seaborn set sensible defaults is convenient, it isn't a particularly large innovation. We could choose sensible defaults and set them to be our default. 
The main benefit of seaborn is the types of graphs that it gives you access to -- All of which could be done in matplotlib, but, instead of 5 lines of code, it would require possibly hundreds of lines of code. Trust us... This is a good thing. We don't have time to cover everything that can be done in seaborn, but we suggest having a look at the gallery of examples. We will cover: kdeplot jointplot violinplot pairplot ... End of explanation """ fig, ax = plt.subplots() ax.hist(tips["tip"], bins=25) ax.set_title("Histogram of tips") plt.show() fig, ax = plt.subplots() sns.kdeplot(tips["tip"], ax=ax) ax.hist(tips["tip"], bins=25, alpha=0.25, normed=True, label="tip") ax.legend() fig.suptitle("Kernel Density with Histogram"); """ Explanation: kdeplot What does kde stand for? kde stands for "kernel density estimation." This is (far far far) beyond the scope of this class, but the basic idea is that this is a smoothed histogram. When we are trying to get information about distributions it sometimes looks nicer than a histogram does. End of explanation """ fig, ax = plt.subplots() sns.kdeplot(tips.total_bill, ax=ax) sns.kdeplot(tips.tip, ax=ax) # ax.hist(tips.total_bill, alpha=0.3, normed=True) """ Explanation: Exercise: Create your own kernel density plot using sns.kdeplot of "total_bill" from the tips dataframe End of explanation """ sns.jointplot(x="total_bill", y="tip", data=tips) """ Explanation: Jointplot We now show what jointplot does. It draws a scatter plot of two variables and puts their histogram just outside of the scatter plot. This tells you information about not only the joint distribution, but also the marginals. End of explanation """ sns.jointplot(x="total_bill", y="tip", data=tips, kind="kde") """ Explanation: We can also plot everything as a kernel density estimate -- Notice the main plot is now a contour map. 
End of explanation """ sns.jointplot(x="size", y="tip", data=tips, kind="kde") sns.jointplot(x="size", y="tip", data=tips) """ Explanation: Like an contour map Exercise: Create your own jointplot. Feel free to choose your own x and y data (if you can't decide then use x=size and y=tip). Interpret the output of the plot. End of explanation """ tita.head() tips.head() sns.violinplot? sns.violinplot(x="fare", y="deck", hue="survived", split=True, data=tita) # sns.swarmplot(x="class", y="age", hue="sex", data=tita) sns.boxplot(x="class", y="age", hue="sex", data=tita) """ Explanation: violinplot Some of the story of this notebook is that distributions matter and how we can show them. Violin plots are similar to a sideways kernel density and it allows us to look at how distributions matter over some aspect of the data. End of explanation """ sns.swarmplot(x="sex", y="age", hue="survived", data=tita) """ Explanation: Exercise: We might also want to look at the distribution of prices across ticket classes. Make a violin plot of the prices over the different ticket classes. End of explanation """ sns.pairplot(tips[["tip", "total_bill", "size"]], size=2.5) """ Explanation: Pairplot Pair plots show us two things. They show us the histograms of the variables along the diagonal and then the scatter plot of each pair of variables on the off diagonal pictures. Why might this be useful? It allows us to look get an idea of the correlations across each pair of variables and gives us an idea of their relationships across the variables. End of explanation """ sns.pairplot(tips[["tip", "total_bill", "size"]], size=2.5, diag_kind="kde") """ Explanation: Below is the same plot, but slightly different. What is different? End of explanation """ tips.head() sns.pairplot(tips[["tip", "total_bill", "size", "time"]], size=3.5, hue="time", diag_kind="kde") """ Explanation: What's different about this plot? Different colors for each company. 
End of explanation """ fig, ax = plt.subplots(2, 2, figsize=(8, 6)) ansc[ansc["dataset"] == "I"].plot.scatter(x="x", y="y", ax=ax[0, 0]) ansc[ansc["dataset"] == "II"].plot.scatter(x="x", y="y", ax=ax[0, 1]) ansc[ansc["dataset"] == "III"].plot.scatter(x="x", y="y", ax=ax[1, 0]) ansc[ansc["dataset"] == "IV"].plot.scatter(x="x", y="y", ax=ax[1, 1]) ax[0, 0].set_title("Dataset I") ax[0, 1].set_title("Dataset II") ax[1, 0].set_title("Dataset III") ax[1, 1].set_title("Dataset IV") fig.suptitle("Anscombe's Quartet") """ Explanation: lmplot We often want to think about running regressions of variables. A statistician named Francis Anscombe came up with four datasets that: Same mean for $x$ and $y$ Same variance for $x$ and $y$ Same correlation between $x$ and $y$ Same regression coefficient of $x$ on $y$ Below we show the scatter plot of the datasets to give you an idea of how different they are. End of explanation """ sns.lmplot(x="x", y="y", data=ansc, col="dataset", hue="dataset", col_wrap=2, ci=None) sns.lmplot(x="total_bill", y="tip", data=tips, hue="sex") """ Explanation: lmplot plots the data with the regression coefficient through it. End of explanation """ sns.regplot(x="total_bill", y="tip", data=tips) """ Explanation: regplot regplot also shows the regression line through data points End of explanation """
ND-CSE-30151/tock
docs/source/tutorial/General.ipynb
mit
from tock import * """ Explanation: More general machines End of explanation """ m1 = Machine([BASE, BASE, BASE, BASE], state=0, input=1) """ Explanation: We've seen finite automata, pushdown automata, and Turing machines, but many other kinds of automata can be created by instantiating a Machine directly. End of explanation """ m1.set_start_state('q1') m1.add_transition('q1, &, &, & -> q2, &, $, $') m1.add_transition('q2, a, &, & -> q2, &, a, &') m1.add_transition('q2, b, &, & -> q2, &, b, &') m1.add_transition('q2, # # #, &, & -> q3, &, &, &') m1.add_transition('q3, &, a, & -> q3, &, &, a') m1.add_transition('q3, &, b, & -> q3, &, &, b') m1.add_transition('q3, &, $, & -> q4, &, &, &') m1.add_transition('q4, a, &, a -> q4, &, &, &') m1.add_transition('q4, b, &, b -> q4, &, &, &') m1.add_transition('q4, _, &, $ -> q5, &, &, &') m1.add_accept_state('q5') m1 run(m1, 'a a b # # # a a b').shortest_path() run(m1, 'a a b # # # b a a').has_path() """ Explanation: The first argument is required. This machine has four stores, all of type BASE (to be explained below). The argument state=0 means that store 0 is the state. It's this store that is used to define the start and accept conditions, and this store that is used to define the nodes in a state transition diagram. The argument input=1 is required and means that store 1 is the input. When the automaton is run, the input string will be placed on this store. As with other kinds of machines, you can define a start state, transitions, and accept states using set_start_state, add_transition, and add_accept_state. End of explanation """ m2 = Machine([BASE, STREAM, BASE], state=0, input=1) m2.set_start_state('q1') m2.add_transition('q1, &, & -> q2, $') m2.add_transition('q2, a, & -> q2, a') m2.add_transition('q2, &, & -> q3, &') m2.add_transition('q3, b, a -> q3, &') m2.add_transition('q3, &, $ -> q4, &') m2.add_accept_state('q4') m2 """ Explanation: This is something like a 2-stack PDA that recognizes the language ${w###w}$. 
It works by transferring the first half of the input to the first stack, transferring the first stack to the second stack (reversing it), then checking the second half of the input against it. This example also demonstrates that we can read three # signs in one transition. Store types BASE Our example above demonstrated the BASE store type. We used a BASE store like a stack, and showed that if the lhs of a transition has more than one symbol, it pops that many symbols. Likewise, if the rhs of a transition has more than one symbol, it pushes that many symbols. Additionally, a BASE store has a head like a Turing machine; it just happens that we left the head in position 0. If the rhs is ^ v, where v is a string, the head moves to the cell to the left of v; if the rhs is v ^, the head moves to the cell to the right of v. For more information, see the reference section on Internals. STREAM The above example looked different from a standard PDA because the right-hand sides of transitions explicitly popped input symbols as they were read. And it had to explicitly check for a blank (_) indicating the end of the string before accepting. To make the input look more like a finite or pushdown automaton's, use the store type STREAM. A store of type STREAM (which only really makes sense for the input store) has two properties. Transitions do not have an entry in their right-hand side for STREAMs; it is implicitly &amp;. The input must be entirely consumed in order for the machine to accept the input string. End of explanation """ m3 = Machine([BASE, TAPE], state=0, input=1) m3.set_start_state('q1') m3.add_transition('q1, a -> q2, b, R') m3.add_accept_state('q2') m3 """ Explanation: This looks just like a pushdown automaton. TAPE The caret (^) notation above can be used to move the head as in a Turing machine, but to get more standard notation, use a store of type TAPE. 
Then the right-hand-side of a transition has two entries for that store, a write and a move (which can be L or R). End of explanation """ for t in m3.transitions: print(t) m3.transitions.append(machines.Transition('q1, b -> q2, c ^')) for t in m3.transitions: print(t) """ Explanation: Low-level interface Instead of creating an automaton using set_start_state, add_transition, and add_accept_state, you can also directly access the members start_config, transitions, and accept_config. This interface completely ignores store types. Transitions are created as for BASE stores: End of explanation """ m3.start_config m3.start_config = machines.Configuration('q2, &') m3.start_config """ Explanation: start_config specifies an initial value for every store (not just the state). The initial value for the input is ignored, as it will be replaced by the input string. End of explanation """ for c in m3.accept_configs: print(c) # accept in state q3 if current symbol is a m3.accept_configs.add(machines.Configuration('q3, a')) """ Explanation: accept_configs is a set of configurations, each which specifies a pattern for every store (not just the state). End of explanation """ for c in m2.accept_configs: print(c) """ Explanation: This is how STREAM stores are able to require that the input is fully consumed -- by making the accept configuration for the input store to default to a blank (_). End of explanation """
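Independent of tock, the two-stack strategy m1 uses for {w###w} can be sketched in plain Python (a hypothetical standalone function, not tock's implementation):

```python
def accepts_w_hashes_w(s):
    """Recognize {w###w : w in {a,b}*} with two stacks, mirroring m1's phases."""
    stack1, stack2 = [], []
    i = 0
    # Phase 1 (state q2): push input symbols onto the first stack until '###'
    while i < len(s) and s[i] in "ab":
        stack1.append(s[i])
        i += 1
    if s[i:i + 3] != "###":
        return False
    i += 3
    # Phase 2 (state q3): move stack 1 onto stack 2, so that popping
    # stack 2 now yields w front to back
    while stack1:
        stack2.append(stack1.pop())
    # Phase 3 (state q4): match the remaining input against stack 2
    for sym in s[i:]:
        if not stack2 or stack2.pop() != sym:
            return False
    return not stack2  # accept only if everything matched
```

This is only a deterministic simulation of the particular strategy described above; tock itself explores configurations nondeterministically.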
xpharry/Udacity-DLFoudation
image-classification/dlnd_image_classification.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import problem_unittests as tests import tarfile cifar10_dataset_folder_path = 'cifar-10-batches-py' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile('cifar-10-python.tar.gz'): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar: urlretrieve( 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz', 'cifar-10-python.tar.gz', pbar.hook) if not isdir(cifar10_dataset_folder_path): with tarfile.open('cifar-10-python.tar.gz') as tar: tar.extractall() tar.close() tests.test_folder_path(cifar10_dataset_folder_path) """ Explanation: Image Classification In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images. Get the Data Run the following cell to download the CIFAR-10 dataset for python. End of explanation """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 1 sample_id = 5 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id) """ Explanation: Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. 
Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data. The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    # TODO: Implement Function
    return x / 255

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    # TODO: Implement Function
    one_hot = np.zeros((len(x), 10))
    for i in range(len(x)):
        one_hot[i][x[i]] = 1
    return one_hot

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels.
Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Hint: Don't reinvent the wheel. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode) """ Explanation: Randomize Data As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset. Preprocess all the data and save it Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import pickle import problem_unittests as tests import helper # Load the Preprocessed Validation data valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb')) """ Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation """ import tensorflow as tf def neural_net_image_input(image_shape): """ Return a Tensor for a batch of image input : image_shape: Shape of the images : return: Tensor for image input. """ # TODO: Implement Function return tf.placeholder(tf.float32, shape = [None, image_shape[0], image_shape[1], image_shape[2]], name = 'x') def neural_net_label_input(n_classes): """ Return a Tensor for a batch of label input : n_classes: Number of classes : return: Tensor for label input. 
""" # TODO: Implement Function return tf.placeholder(tf.float32, shape = [None, n_classes], name = 'y') def neural_net_keep_prob_input(): """ Return a Tensor for keep probability : return: Tensor for keep probability. """ # TODO: Implement Function return tf.placeholder(tf.float32, name = 'keep_prob') """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tf.reset_default_graph() tests.test_nn_image_inputs(neural_net_image_input) tests.test_nn_label_inputs(neural_net_label_input) tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input) """ Explanation: Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. If you're finding it hard to dedicate enough time for this course a week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use TensorFlow Layers or TensorFlow Layers (contrib) to build each layer, except "Convolutional & Max Pooling" layer. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup. If you would like to get the most of this course, try to solve all the problems without TF Layers. Let's begin! Input The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions * Implement neural_net_image_input * Return a TF Placeholder * Set the shape using image_shape with batch size set to None. * Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_label_input * Return a TF Placeholder * Set the shape using n_classes with batch size set to None. 
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_keep_prob_input * Return a TF Placeholder for dropout keep probability. * Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder. These names will be used at the end of the project to load your saved model. Note: None for shapes in TensorFlow allow for a dynamic size. End of explanation """ def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): """ Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_strides: Stride 2-D Tuple for convolution :param pool_ksize: kernal size 2-D Tuple for pool :param pool_strides: Stride 2-D Tuple for pool : return: A tensor that represents convolution and max pooling of x_tensor """ # TODO: Implement Function # Input/Image input = x_tensor # Weight and bias weight = tf.Variable(tf.truncated_normal( [conv_ksize[0], conv_ksize[1], x_tensor.get_shape().as_list()[-1], conv_num_outputs], stddev=0.1)) bias = tf.Variable(tf.zeros(conv_num_outputs)) # Apply Convolution conv_layer = tf.nn.conv2d(input, weight, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME') # Add bias conv_layer = tf.nn.bias_add(conv_layer, bias) # Apply activation function conv_layer = tf.nn.relu(conv_layer) # Apply maxpooling conv_layer = tf.nn.max_pool( conv_layer, ksize=[1, pool_ksize[0], pool_ksize[1], 1], strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME') return conv_layer """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_con_pool(conv2d_maxpool) """ Explanation: Convolution and Max Pooling Layer Convolution layers have a lot of success with images. 
For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
 * We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
 * We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer. You're free to use any TensorFlow package for all the other layers.
End of explanation
"""
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # TODO: Implement Function
    # tf.contrib.layers.flatten keeps the batch dimension and infers the
    # flattened size on its own, so no explicit shape argument is needed
    return tf.contrib.layers.flatten(x_tensor)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
""" # TODO: Implement Function return tf.contrib.layers.fully_connected(x_tensor, num_outputs) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_fully_conn(fully_conn) """ Explanation: Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer. End of explanation """ def output(x_tensor, num_outputs): """ Apply a output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ # TODO: Implement Function weights = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs])) mul = tf.matmul(x_tensor, weights, name='mul') bias = tf.Variable(tf.zeros(num_outputs)) return tf.add(mul, bias) # y = tf.add(mul, bias) # fc = tf.contrib.layers.fully_connected(y, num_outputs) # return fc """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_output(output) """ Explanation: Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). You can use TensorFlow Layers or TensorFlow Layers (contrib) for this layer. Note: Activation, softmax, or cross entropy shouldn't be applied to this. End of explanation """ def conv_net(x, keep_prob): """ Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. 
: return: Tensor that represents logits """ # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers # Play around with different number of outputs, kernel size and stride # Function Definition from Above: # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) conv_num_outputs = 10 conv_ksize = [2, 2] conv_strides = [2, 2] pool_ksize = [2, 2] pool_strides = [2, 2] conv_layer = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) # TODO: Apply a Flatten Layer # Function Definition from Above: # flatten(x_tensor) conv_layer = flatten(conv_layer) # TODO: Apply 1, 2, or 3 Fully Connected Layers # Play around with different number of outputs # Function Definition from Above: # fully_conn(x_tensor, num_outputs) num_outputs = 10 conv_layer = fully_conn(conv_layer, num_outputs) # TODO: Apply an Output Layer # Set this to the number of classes # Function Definition from Above: # output(x_tensor, num_outputs) num_classes = 10 conv_layer = output(conv_layer, num_classes) # TODO: return output return conv_layer """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. 
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:

Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    # TODO: Implement Function
    session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization.
The optimization should use optimizer to optimize in session with a feed_dict of the following: * x for image input * y for labels * keep_prob for keep probability for dropout This function will be called for each batch, so tf.global_variables_initializer() has already been called. Note: Nothing needs to be returned. This function is only optimizing the neural network. End of explanation """ def print_stats(session, feature_batch, label_batch, cost, accuracy): """ Print information about loss and validation accuracy : session: Current TensorFlow session : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data : cost: TensorFlow cost function : accuracy: TensorFlow accuracy function """ # TODO: Implement Function loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0}) valid_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0}) print('Loss: {:>10.4f} Accuracy: {:.6f}'.format(loss, valid_acc)) """ Explanation: Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy. End of explanation """ # TODO: Tune Parameters epochs = 100 batch_size = 256 keep_probability = 0.2 """ Explanation: Hyperparameters Tune the following parameters: * Set epochs to the number of iterations until the network stops learning or start overfitting * Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory: * 64 * 128 * 256 * ... 
* Set keep_probability to the probability of keeping a node using dropout End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ print('Checking the Training on a Single Batch...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): batch_i = 1 for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) """ Explanation: Train on a Single CIFAR-10 Batch Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ save_model_path = './image_classification' print('Training...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): # Loop over all batches n_batches = 5 for batch_i in range(1, n_batches + 1): for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) # Save Model saver = tf.train.Saver() save_path = saver.save(sess, save_model_path) """ Explanation: Fully Train the Model Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches. 
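Since keep_probability reappears in this training loop, a plain-NumPy illustration of what dropout does with it may help ("inverted" dropout, independent of the TensorFlow graph above; the helper below is purely illustrative):

```python
import numpy as np

def dropout(activations, keep_probability, rng):
    """Inverted dropout: keep each unit with probability keep_probability
    and rescale the survivors so the expected activation is unchanged."""
    mask = rng.uniform(size=activations.shape) < keep_probability
    return activations * mask / keep_probability

rng = np.random.RandomState(0)
dropped = dropout(np.ones((4, 3)), 0.5, rng)  # surviving units become 2.0, the rest 0.0
```

Note that at evaluation time dropout is disabled, which is why the notebook feeds keep_prob: 1.0 when computing loss and accuracy.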
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import tensorflow as tf import pickle import helper import random # Set batch size if not already set try: if batch_size: pass except NameError: batch_size = 64 save_model_path = './image_classification' n_samples = 4 top_n_predictions = 3 def test_model(): """ Test the saved model against the test dataset """ test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb')) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load model loader = tf.train.import_meta_graph(save_model_path + '.meta') loader.restore(sess, save_model_path) # Get Tensors from loaded model loaded_x = loaded_graph.get_tensor_by_name('x:0') loaded_y = loaded_graph.get_tensor_by_name('y:0') loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') loaded_logits = loaded_graph.get_tensor_by_name('logits:0') loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0') # Get accuracy in batches for memory limitations test_batch_acc_total = 0 test_batch_count = 0 for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size): test_batch_acc_total += sess.run( loaded_acc, feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0}) test_batch_count += 1 print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count)) # Print Random Samples random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples))) random_test_predictions = sess.run( tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions), feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0}) helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions) test_model() """ Explanation: Checkpoint The model has been saved to disk. 
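The test loop above computes accuracy in batches via helper.batch_features_labels, which is not shown in this notebook; a batching helper of that shape could be sketched as follows (a hypothetical stand-in, not Udacity's actual helper):

```python
def batch_features_labels(features, labels, batch_size):
    """Yield (features, labels) chunks of at most batch_size items each."""
    for start in range(0, len(features), batch_size):
        end = start + batch_size
        yield features[start:end], labels[start:end]

batches = list(batch_features_labels(list(range(10)), list(range(10)), 3))
```

Chunking like this keeps memory bounded: only one batch of images has to live in memory at a time.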
Test Model Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. End of explanation """
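As a footnote on the accuracy measure used throughout this notebook: it is just the fraction of rows where the argmax of the logits matches the argmax of the one-hot labels. In NumPy terms, with toy numbers:

```python
import numpy as np

logits = np.array([[2.0, 0.1, 0.3],
                   [0.2, 0.9, 0.1],
                   [0.1, 0.2, 3.0],
                   [1.5, 0.1, 0.2]])
labels = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 0, 0],   # this row is misclassified (predicted class 2)
                   [1, 0, 0]])

correct = np.argmax(logits, axis=1) == np.argmax(labels, axis=1)
accuracy = correct.mean()  # 3 of 4 rows match -> 0.75
```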
CyberCRI/dataanalysis-herocoli-redmetrics
v1.52/Tests/1.6 Google form analysis - MCA.ipynb
cc0-1.0
%run "../Functions/1. Google form analysis.ipynb" """ Explanation: Table of Contents MCA <br> <br> <br> <br> End of explanation """ import mca np.set_printoptions(formatter={'float': '{: 0.4f}'.format}) pd.set_option('display.precision', 5) pd.set_option('display.max_columns', 25) """ Explanation: MCA <a id=MCA /> source: http://nbviewer.jupyter.org/github/esafak/mca/blob/master/docs/mca-BurgundiesExample.ipynb mca - Burgundies Example This example demonstrated capabilities of mca package by reproducing results of Multiple Correspondence Analysis, Hedbi & Valentin, 2007. Imports and loading data End of explanation """ data = pd.read_table('../../data/burgundies.csv',sep=',', skiprows=1, index_col=0, header=0) X = data.drop('oak_type', axis=1) j_sup = data.oak_type i_sup = np.array([0, 1, 0, 1, 0, .5, .5, 1, 0, 1, 0, 0, 1, 0, .5, .5, 1, 0, .5, .5, 0, 1]) ncols = 10 X.shape, j_sup.shape, i_sup.shape """ Explanation: For input format, mca uses DataFrame from pandas package. Here we use pandas to load CSV file with indicator matrix $X$ of categorical data with 6 observations, 10 variables and 22 levels in total. We also set up supplementary variable $j_{sup}$ and supplementary observation $i_{sup}$. End of explanation """ src_index = (['Expert 1'] * 7 + ['Expert 2'] * 9 + ['Expert 3'] * 6) var_index = (['fruity'] * 2 + ['woody'] * 3 + ['coffee'] * 2 + ['fruity'] * 2 + ['roasted'] * 2 + ['vanillin'] * 3 + ['woody'] * 2 + ['fruity'] * 2 + ['butter'] * 2 + ['woody'] * 2) yn = ['y','n']; rg = ['1', '2', '3']; val_index = yn + rg + yn*3 + rg + yn*4 col_index = pd.MultiIndex.from_arrays([src_index, var_index, val_index], names=['source', 'variable', 'value']) table1 = pd.DataFrame(data=X.values, index=X.index, columns=col_index) table1.loc['W?'] = i_sup table1['','Oak Type',''] = j_sup table1 """ Explanation: Table 1 "Data for the barrel-aged red burgundy wines example. “Oak Type" is an illustrative (supplementary) variable, the wine W? 
is an unknown wine treated as a supplementary observation." (Hedbi & Valentin, 2007) End of explanation """ mca_ben = mca.MCA(X, ncols=ncols) mca_ind = mca.MCA(X, ncols=ncols, benzecri=False) print(mca.MCA.__doc__) """ Explanation: MCA Let's create two MCA instances - one with Benzécri correction enabled (default) and one without it. Parameter ncols denotes number of categorical variables. End of explanation """ data = {'Iλ': pd.Series(mca_ind.L), 'τI': mca_ind.expl_var(greenacre=False, N=4), 'Zλ': pd.Series(mca_ben.L), 'τZ': mca_ben.expl_var(greenacre=False, N=4), 'cλ': pd.Series(mca_ben.L), 'τc': mca_ind.expl_var(greenacre=True, N=4)} # 'Indicator Matrix', 'Benzecri Correction', 'Greenacre Correction' columns = ['Iλ', 'τI', 'Zλ', 'τZ', 'cλ', 'τc'] table2 = pd.DataFrame(data=data, columns=columns).fillna(0) table2.index += 1 table2.loc['Σ'] = table2.sum() table2.index.name = 'Factor' table2 """ Explanation: Table 2 (L, expl_var) "Eigenvalues, corrected eigenvalues, proportion of explained inertia and corrected proportion of explained inertia. The eigenvalues of the Burt matrix are equal to the squared eigenvalues of the indicator matrix; The corrected eigenvalues for Benzécri and Greenacre are the same, but the proportion of explained variance differ. Eigenvalues are denoted by λ, proportions of explained inertia by τ (note that the average inertia used to compute Greenacre’s correction is equal to I = .7358)." (Hedbi & Valentin, 2007) Field L contains the eigenvalues, or the principal inertias, of the factors. Method expl_var returns proportion of explained inertia for each factor, whereas Greenacre corrections may be enabled with parameter greenacre and N limits number of retained factors. Note that Burt matrix values are not included in the following table, as it is not currently implemented in mca package. 
End of explanation """ mca_ind.inertia, mca_ind.L.sum(), mca_ben.inertia, mca_ben.L.sum() """ Explanation: The inertia is simply the sum of the principle inertias: End of explanation """ data = np.array([mca_ben.L[:2], mca_ben.expl_var(greenacre=True, N=2) * 100]).T df = pd.DataFrame(data=data, columns=['cλ','%c'], index=range(1,3)) df """ Explanation: Table 3 (fs_r, cos_r, cont_r, fs_r_sup) "Factor scores, squared cosines, and contributions for the observations (I-set). The eigenvalues and proportions of explained inertia are corrected using Benzécri/Greenacre formula. ~~Contributions corresponding to negative scores are in italic.~~ The mystery wine (Wine ?) is a supplementary observation. Only the first two factors are reported." (Hedbi & Valentin, 2007) Firstly, we once again tabulate eigenvalues and their proportions. This time only for the first two factors and as percentage. End of explanation """ fs, cos, cont = 'Factor score','Squared cosines', 'Contributions x 1000' table3 = pd.DataFrame(columns=X.index, index=pd.MultiIndex .from_product([[fs, cos, cont], range(1, 3)])) table3.loc[fs, :] = mca_ben.fs_r(N=2).T table3.loc[cos, :] = mca_ben.cos_r(N=2).T table3.loc[cont, :] = mca_ben.cont_r(N=2).T * 1000 table3.loc[fs, 'W?'] = mca_ben.fs_r_sup(pd.DataFrame([i_sup]), N=2)[0] np.round(table3.astype(float), 2) """ Explanation: Factor scores, squared cosines, and contributions for the observations are computed by fs_r, cos_r and cont_r methods respectively, where r denotes rows (i.e. observations). Again, N limits the number of retained factors. Factor scores of supplementary observation $i_{sup}$ is computed by method fs_r_sup. Note that squared cosines do not agree with those in the reference. See issue #1. 
End of explanation """ table4 = pd.DataFrame(columns=col_index, index=pd.MultiIndex .from_product([[fs, cos, cont], range(1, 3)])) table4.loc[fs, :] = mca_ben.fs_c(N=2).T table4.loc[cos, :] = mca_ben.cos_c(N=2).T table4.loc[cont,:] = mca_ben.cont_c(N=2).T * 1000 fs_c_sup = mca_ben.fs_c_sup(mca.dummy(pd.DataFrame(j_sup)), N=2) table4.loc[fs, ('Oak', '', 1)] = fs_c_sup[0] table4.loc[fs, ('Oak', '', 2)] = fs_c_sup[1] np.round(table4.astype(float), 2) """ Explanation: Table 4 (fs_c, cos_c, cont_c, fs_c_sup) "Factor scores, squared cosines, and contributions for the variables (J-set). The eigenvalues and percentages of inertia have been corrected using Benzécri/Greenacre formula. ~~Contributions corresponding to negative scores are in italic.~~ Oak 1 and 2 are supplementary variables." (Hedbi & Valentin, 2007) Computations for columns (i.e. variables) are analogous to those of rows. Before the supplementary variable factor scores can be computed, $j_{sup}$ must be converted from categorical variable into dummy indicator matrix by method mca.dummy. 
End of explanation """ %matplotlib inline import matplotlib.pyplot as plt points = table3.loc[fs].values labels = table3.columns.values plt.figure() plt.margins(0.1) plt.axhline(0, color='gray') plt.axvline(0, color='gray') plt.xlabel('Factor 1') plt.ylabel('Factor 2') plt.scatter(*points, s=120, marker='o', c='r', alpha=.5, linewidths=0) for label, x, y in zip(labels, *points): plt.annotate(label, xy=(x, y), xytext=(x + .03, y + .03)) plt.show() noise = 0.05 * (np.random.rand(*table4.T[fs].shape) - 0.5) fs_by_source = table4.T[fs].add(noise).groupby(level=['source']) fig, ax = plt.subplots() plt.margins(0.1) plt.axhline(0, color='gray') plt.axvline(0, color='gray') plt.xlabel('Factor 1') plt.ylabel('Factor 2') ax.margins(0.1) markers = '^', 's', 'o', 'o' colors = 'r', 'g', 'b', 'y' for fscore, marker, color in zip(fs_by_source, markers, colors): label, points = fscore ax.plot(*points.T.values, marker=marker, color=color, label=label, linestyle='', alpha=.5, mew=0, ms=12) ax.legend(numpoints=1, loc=4) plt.show() """ Explanation: Figure 1 "Multiple Correspondence Analysis. Projections on the first 2 dimensions. The eigenvalues (λ) and proportion of explained inertia (τ) have been corrected with Benzécri/Greenacre formula. (a) The I set: rows (i.e., wines), wine ? is a supplementary element. (b) The J set: columns (i.e., adjectives). Oak 1 and Oak 2 are supplementary elements. (the projection points have been slightly moved to increase readability). (Projections from Tables 3 and 4)." (Hedbi & Valentin, 2007) Following plots do not introduce anything new in terms of mca package, it just reuses factor scores from Tables 3 and 4. But everybody loves colourful graphs, so... End of explanation """
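The Benzécri correction applied throughout the tables above has a simple closed form: for an MCA on k categorical variables, indicator-matrix eigenvalues at or below 1/k are dropped, and the remaining ones are rescaled and squared. A minimal pure-Python sketch of that standard formula, independent of the mca package (the function name here is ours):

```python
def benzecri_correction(eigvals, k):
    """Benzécri-adjusted eigenvalues for an MCA on k categorical variables.

    Eigenvalues no larger than 1/k are treated as coding artifacts and
    dropped; the rest are rescaled by k/(k-1) and squared.
    """
    adjusted = []
    for lam in eigvals:
        if lam > 1.0 / k:
            adjusted.append((k / (k - 1.0) * (lam - 1.0 / k)) ** 2)
    return adjusted
```

With k = 10, an eigenvalue of exactly 0.10 sits at the 1/k cutoff and is dropped, while larger eigenvalues keep most of their inertia; this is why the corrected proportions of explained inertia concentrate so strongly on the leading factors.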
Source: jhconning/Dev-II, notebooks/Village_sharing.ipynb (license: bsd-3-clause)
import numpy as np import matplotlib.pyplot as plt from ipywidgets import interact, fixed, FloatSlider %matplotlib inline """ Explanation: Village consumption smoothing We simulate T periods of income for N individuals. Each individual receives a base level of income plus an income shocks. The income shocks can be independent or correlated. We plot income in each household with and without consumption smoothing. End of explanation """ rho = 0 N = 5 T = 15 V = 1 def shocks(rho=0, N=N, T=T): '''Returns an N by T matrix of multivariate normally distributed shocks with correlation rho''' mean = np.zeros(N) cov = np.ones((N, N), int)*rho*V np.fill_diagonal(cov, V) #print(cov) e = np.random.multivariate_normal(mean, cov, size=T) return e def incomes(rho=0, V=V, N = N, T=T): '''Generates random incomes for N over T periods''' t = np.arange(T) # time periods x0 = np.arange(10,10+N*5,5) # average income e = shocks(rho=rho, N=N, T=T)*V X = x0 + e XT = np.sum(X, axis = 1) w = x0/np.sum(x0) XS = np.array([XT * wt for wt in w]).T return t, X, XS def plot_cons(rho=0, V=1): #print('rho = {}'.format(rho)) t, X, XS = incomes(rho=rho, V=V, N=N, T=T) fig, ax = plt.subplots(figsize=(10,8)) ax.plot(t, X,'x-') ax.plot(t,XS,linestyle='dashed') ax.set_xlabel('time') ax.set_xticks(np.arange(T, step=1)) fig.suptitle(r'$\rho = {:0.2f}$'.format(rho)) ax.grid() plt.tight_layout(); interact(plot_cons, rho=(-0.25,0.99,0.05)); """ Explanation: Default parameters End of explanation """ plot_cons(rho=0, V=1) """ Explanation: Examples We illustrate with three different scenarios. Remember that these are random draws so will be different everytime these are run. independent incomes End of explanation """ plot_cons(rho=0.8, V=1) """ Explanation: Correlated incomes Less opportunity for risk sharing. So consumption will tend to follow income. End of explanation """ plot_cons(rho=-0.2, V=1) """ Explanation: Negatively correlated End of explanation """
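The sharing rule in incomes() above gives each household a fixed share of total village income, with shares proportional to its average income. The budget-balance property of that rule (consumption exhausts exactly the realized village income in every period) can be checked with a small standalone sketch; the numbers below are illustrative only:

```python
def pooled_consumption(base, shocks):
    """Per-period consumption for each household under proportional pooling.

    base   : list of average incomes, one entry per household
    shocks : list of periods, each a list of per-household income shocks
    Returns a list of per-period consumption lists.
    """
    weights = [b / sum(base) for b in base]
    plan = []
    for period in shocks:
        # realized village income this period
        total = sum(b + e for b, e in zip(base, period))
        plan.append([w * total for w in weights])
    return plan
```

Because each household consumes a fixed share of the pooled total, idiosyncratic shocks are spread across the village, while a common shock (all entries of a period moving together) still moves everyone's consumption, just as the correlated-income plots above show.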
Source: MarsUniversity/ece387, website/notes/AL5D/lynx_al5d-3.ipynb (license: mit)
%matplotlib inline from __future__ import print_function from __future__ import division import numpy as np from matplotlib import pyplot as plt from sympy import symbols, sin, cos, simplify, trigsimp, pi from math import radians as d2r from math import degrees as r2d from math import atan2, sqrt, acos, fabs class mDH(object): """ This uses the modified DH parameters see Craig, eqn (3.6) """ def __init__(self): pass def fk(self, params): t = np.eye(4) for p in params: t = t.dot(self.makeT(*p)) return t def makeT(self, a, alpha, d, theta): return np.array([ # classic DH [ cos(theta), -sin(theta), 0, a], [sin(theta)*cos(alpha), cos(theta)*cos(alpha), -sin(alpha), -d*sin(alpha)], [sin(theta)*sin(alpha), cos(theta)*sin(alpha), cos(alpha), d*cos(alpha)], [ 0, 0, 0, 1] ]) def eval(f): """ This allows you to simplify the trigonomic mess that kinematics can create and also substitute in some inputs in the process """ c = [] for row in f: r = [] for col in row: # use python symbolic toolbox to simplify the trig mess above r.append(simplify(col)) c.append(r) return np.array(c) def subs(f, m): """ This allows you to simplify the trigonomic mess that kinematics can create and also substitute in some inputs in the process """ c = [] for row in f: r = [] for col in row: r.append(col.subs(m)) c.append(r) return np.array(c) def rplot(t1, t2, t3, t4): """Given the 4 joint angles (in rads), plot the arm in the x-y and w-z planes""" l1 = 5.75 l2 = 7.375 l3 = 3.375 ptsx = [0] ptsy = [0] # our definition is reverse or these joints t3 = -t3 t4 = -t4 # link 1 x1 = l1*cos(t2) y1 = l1*sin(t2) ptsx.append(x1) ptsy.append(y1) # link 2 x2 = x1 + l2*cos(t2 + t3) y2 = y1 + l2*sin(t2 + t3) ptsx.append(x2) ptsy.append(y2) # link 3 x3 = x2 + l3*cos(t2 + t3 + t4) y3 = y2 + l3*sin(t2 + t3 + t4) ptsx.append(x3) ptsy.append(y3) plt.subplot(1,2,1,projection='polar') plt.plot([0, t1], [0, 1.0]) plt.grid(True) plt.title('Azimuth Angle (x-y plane)') plt.subplot(1,2,2) plt.plot(ptsx, ptsy, 'b-', 
marker='o') plt.axis('equal') plt.grid(True) plt.title('w-z Plane') """ Explanation: Lynx Motion AL5D Here is the serial manipulator we are using for our lab. Note all of the dimension at the bottom center. The toy RC servos for the robot arm work off of a pulse width modulation (PWM) where a pulse emmited with the following width will move the servo to a desired position. | Dir | PWM | |--------|------| |CCW_max | 1200 | |Center | 1500 | |CW_max | 1800 | Since this interface was developed around the 1970's, and toy RC servos are built for price not performance, different servos behave differently to the same signal. Thus, the user typically has to find the true limits of each servo to know how to command it. But the times shown above are approximate (close enough for hand grenades and horseshoes). End of explanation """ from sympy import pi t1, t2, t3, t4 = symbols('t1 t2 t3 t4') # a2, a3, a4, a5 = symbols('a2 a3 a4 a5') # t1 - base # t2 - shoulder # t3 - elbow # t4 - wrist # a2 = 2.75 # base to shoulder a2 = 0.0 a3 = 5.75 # shoulder to elbow a4 = 7.375 # elbow to wrist a5 = 3.375 # wrist to end effector # put all angles in rads # sybolic can't handle converting a symbol using d2r() # a, alpha, d, theta params = [ [ 0, 0, 0, t1], [ a3, -pi/2, 0, t2], [ a4, 0, 0, t3], [ a5, 0, 0, t4] ] # params = [ # [ 0, 0, 2.75, t1], # [ 0, pi/2, 0, t2], # [ 5.75, 0, 0, t3], # [ 7.375, 0, 0, t4], # [ 3.375, 0, 0, 0] # ] dh = mDH() al5d = dh.fk(params) al5d = eval(al5d) def printT(tt): """Print out the entire T matrix""" R = tt[0:3,0:3] D = tt[0:3, 3] print('-'*30) print('Position:') print(' x:', D[0]) print(' y:', D[1]) print(' z:', D[2]) # R(n, o, a) print('-'*30) print('Orientation') print(' nx:', R[0,0]) print(' ny:', R[0,1]) print(' nz:', R[0,2]) print('') print(' ox:', R[1,0]) print(' oy:', R[1,1]) print(' oz:', R[1,2]) print('') print(' ax:', R[2,0]) print(' ay:', R[2,1]) print(' az:', R[2,2]) def printP(dh): """dh is the forward kinematics equations matrix""" pos = 
[float(x) for x in dh[0:3, 3]] print('Pos (x,y,z): {:5.2f} {:5.2f} {:5.2f}'.format(*pos)) return pos def printDegrees(angles): """angles are in radians""" a = [r2d(x) for x in angles] print('Angles: {:6.1f} {:6.1f} {:6.1f} {:6.1f}'.format(*a)) # the symbolic forward kinematics of our robot arm printT(al5d) # Let's set some angles simp=subs(al5d, [(t1,0.0), (t2,pi/2), (t3, pi/2), (t4, 0.0)]) printT(simp) simp=subs(al5d, [(t1,0.0),(t2,d2r(111.5)), (t3, d2r(-127)), (t4, d2r(-74.5))]) printP(simp) """ Explanation: The DH parameters are: | i |$a_i$ | $\alpha_i$ | $d_i$ | $theta_i$ | |---|-------------|--------------|---------|------------| | 1 | 0 | 0 | $d_2$ | $\theta_1$ | | 2 | 0 |$\frac{\pi}{2}$| 0 | $\theta_2$ | | 3 | $a_3$ | 0 | 0 | $\theta_3$ | | 4 | $a_4$ | 0 | 0 | $\theta_4$ | | 5 | $a_5$ | 0 | 0 | 0 | End of explanation """ from math import atan2, acos, sqrt, pi, cos, sin def cosine_law(a, b, c, phase=False): if phase: angle = ((c**2 - (a**2 + b**2))/(2*a*b)) else: angle = ((c**2 - (a**2 + b**2))/(-2*a*b)) # print('cosine_law', angle) if angle > 1 or angle < -1: raise Exception('angle outside range') return acos(angle) def line(x1, y1, x2, y2): return sqrt((x2-x1)**2 + (y2-y1)**2) def mag(a, b): return sqrt(a**2 + b**2) def mag3(a, b, c): return sqrt(a**2 + b**2 + c**2) def inverse(x, y, z, orient): """ Azimuth angle is between x and w and lies in the x-y plane ^ x w | \ | \ | \ | \| <----------+ (z is out of the page - right hand rule) y Most of the robot arm move in the plane defined by w-z ^ z | o-----o | / \ | / E |/ +----------------> w All joint angles returned are in radians: (t1, t2, t3, t4) """ l1 = 5.75 l2 = 7.375 l3 = 3.375 # check workspace constraints if z < 0: raise Exception('z in ground') elif mag3(x,y,z) > (l1 + l2 + l3): raise Exception('out of reach') # get x-y plane azimuth t1 = atan2(y, x) # Now, most of the arm operates in the w-z frame w = mag(x, y) # new frame axis gamma = atan2(z, w) r = mag(z, w) c = mag(w-l3*cos(orient), 
z-l3*sin(orient)) t3 = cosine_law(l1, l2, c, True) d = cosine_law(l2, c, l1) e = cosine_law(c, l3, r) t4 = pi - d - e alpha = cosine_law(l1, c, l2) beta = cosine_law(c,r,l3) t2 = alpha + beta + gamma return (t1, t2, t3, t4) def checkPts(x, y, z, orient): """Given a point (in inches) and orientation (in rads), this calculates the joint angles, then uses those angles to calculate the forward solution and prints out the error. It also plots the arm. """ angles = inverse(x, y, z, orient) a,b,c,d = angles simp=subs(al5d, [(t1, a), (t2, b), (t3, -c), (t4, -d)]) pts = printP(simp) printDegrees(angles) rplot(*angles) error = [a-b for a,b in zip((x,y,z), pts)] print('Error: {:6.3f} {:6.3f} {:6.3f}'.format(*error)) checkPts(10.75, 0, 5.75, 0.0) # 0 90 -90 0 checkPts(7.385, 0, 5.75-3.375, -pi/2) x,y,z = (7.385*cos(pi/4), 7.385*sin(pi/4), 5.75-3.375) checkPts(x,y,z, -pi/2) x,y,z = (7.385*cos(-pi/4), 7.385*sin(-pi/4), 5.75-3.375) checkPts(x,y,z, -pi/2) x,y,z = (7.385*cos(pi/2), 7.385*sin(pi/2), 5.75-3.375) checkPts(x,y,z, -pi/2) checkPts(5,0,0, -pi/2) checkPts(7,-3,0, -pi/2) checkPts(7,6,4, -pi/2) """ Explanation: Inverse Kinematics Law of Cosines $$ a^2 = b^2 + c^2 - 2bc \cos(A) \rightarrow \cos(A)=\frac{-a^2+b^2+c^2}{2bc}\ b^2 = a^2 + c^2 - 2ac \cos(B) \rightarrow \cos(B)=\frac{a^2-b^2+c^2}{2ac}\ c^2 = a^2 + b^2 - 2ab \cos(C) \rightarrow \cos(C)=\frac{a^2+b^2-c^2}{2ab} $$ Wolfram: law of cosines Law of Sines $$ \frac{a}{\sin(A)} = \frac{b}{\sin(B)} = \frac{c}{\sin(C)} $$ Wolfram: law of sines Arm We are going to solve this using geometry! So, given a point (w,z) and the orientation of the end effector, we will calculate the joint angles ($\theta_2$, $\theta_3$, $\theta_4$). 
You will also want to make some checks: $z \geq 0$ $-1 \geq \arctan(\frac{y}{x}) \leq 1$ $\|(x,y,z) \| \leq \|(l_1, l_2, l_3) \|$ End of explanation """ print(cos(pi-1)) print(cos(1-pi)) print(cos(1)) # let's plot cos from -pi to pi def draw(p, title): test = [] for i in range(-pi*100, pi*100): if p == pi: ans = cos(pi - i/100) elif p == -pi: ans = cos(i/100 - pi) else: ans = cos(i/100) test.append(ans) x = [x/100 for x in range(-pi*100, pi*100)] plt.plot(x,test) plt.title(title) plt.grid(True) draw(pi, 'pi') draw(-pi, '-pi') draw(0, '0') """ Explanation: Phasing Let's show $\cos(\pi-\theta)$ and $\cos(\theta - \pi)$ are the same thing. End of explanation """
Source: google/uncertainty-baselines, experimental/language_structure/psl/colabs/gradient_based_constraint_learning_demo.ipynb (license: apache-2.0)
import numpy as np import pandas as pd import random import tensorflow as tf from tensorflow import keras """ Explanation: Gradient Based Constraint Learning Demo Licensed under the Apache License, Version 2.0. This colab explores joint learning neural networks with soft constraints. End of explanation """ # ======================================================================== # Constants # ======================================================================== _TRAIN_PATH = '' _TEST_PATH = '' _COLUMNS = ["age", "workclass", "fnlwgt", "education", "education_num", "marital_status", "occupation", "relationship", "race", "gender", "capital_gain", "capital_loss", "hours_per_week", "native_country", "income_bracket"] # ======================================================================== # Seed Data # ======================================================================== SEED = random.randint(-10000000, 10000000) print("Seed: %d" % SEED) tf.random.set_seed(SEED) # ======================================================================== # Load Data # ======================================================================== with tf.io.gfile.GFile(_TRAIN_PATH, 'r') as csv_file: train_df = pd.read_csv(csv_file, names=_COLUMNS, sep=r'\s*,\s*', na_values="?").dropna(how="any", axis=0) with tf.io.gfile.GFile(_TEST_PATH, 'r') as csv_file: test_df = pd.read_csv(csv_file, names=_COLUMNS, skiprows=[0], sep=r'\s*,\s*', na_values="?").dropna(how="any", axis=0) """ Explanation: Dataset and Task We test and validate our system over a common fairness dataset and task: Adult Census Income dataset. This data was extracted from the 1994 Census bureau database by Ronny Kohavi and Barry Becker. Our analysis aims at learning a model that does not bias predictions towards men over 50K through soft constraints. 
End of explanation """ #@title Prepare Dataset # ======================================================================== # Categorical Feature Columns # ======================================================================== # Unknown length occupation = tf.feature_column.categorical_column_with_hash_bucket( "occupation", hash_bucket_size=1000) native_country = tf.feature_column.categorical_column_with_hash_bucket( "native_country", hash_bucket_size=1000) # Known length gender = tf.feature_column.categorical_column_with_vocabulary_list( "gender", ["Female", "Male"]) race = tf.feature_column.categorical_column_with_vocabulary_list( "race", [ "White", "Asian-Pac-Islander", "Amer-Indian-Eskimo", "Other", "Black" ]) education = tf.feature_column.categorical_column_with_vocabulary_list( "education", [ "Bachelors", "HS-grad", "11th", "Masters", "9th", "Some-college", "Assoc-acdm", "Assoc-voc", "7th-8th", "Doctorate", "Prof-school", "5th-6th", "10th", "1st-4th", "Preschool", "12th" ]) marital_status = tf.feature_column.categorical_column_with_vocabulary_list( "marital_status", [ "Married-civ-spouse", "Divorced", "Married-spouse-absent", "Never-married", "Separated", "Married-AF-spouse", "Widowed" ]) relationship = tf.feature_column.categorical_column_with_vocabulary_list( "relationship", [ "Husband", "Not-in-family", "Wife", "Own-child", "Unmarried", "Other-relative" ]) workclass = tf.feature_column.categorical_column_with_vocabulary_list( "workclass", [ "Self-emp-not-inc", "Private", "State-gov", "Federal-gov", "Local-gov", "?", "Self-emp-inc", "Without-pay", "Never-worked" ]) # ======================================================================== # Numeric Feature Columns # ======================================================================== age = tf.feature_column.numeric_column("age") age_buckets = tf.feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65]) fnlwgt = tf.feature_column.numeric_column("fnlwgt") education_num = 
tf.feature_column.numeric_column("education_num") capital_gain = tf.feature_column.numeric_column("capital_gain") capital_loss = tf.feature_column.numeric_column("capital_loss") hours_per_week = tf.feature_column.numeric_column("hours_per_week") # ======================================================================== # Specify Features # ======================================================================== deep_columns = [ tf.feature_column.indicator_column(workclass), tf.feature_column.indicator_column(education), tf.feature_column.indicator_column(age_buckets), tf.feature_column.indicator_column(gender), tf.feature_column.indicator_column(relationship), tf.feature_column.embedding_column(native_country, dimension=8), tf.feature_column.embedding_column(occupation, dimension=8), ] features = { 'age': tf.keras.Input(shape=(1,), name='age'), 'education': tf.keras.Input(shape=(1,), name='education', dtype=tf.string), 'gender': tf.keras.Input(shape=(1,), name='gender', dtype=tf.string), 'native_country': tf.keras.Input(shape=(1,), name='native_country', dtype=tf.string), 'occupation': tf.keras.Input(shape=(1,), name='occupation', dtype=tf.string), 'relationship': tf.keras.Input(shape=(1,), name='relationship', dtype=tf.string), 'workclass': tf.keras.Input(shape=(1,), name='workclass', dtype=tf.string), } # ======================================================================== # Create Dataset # ======================================================================== def df_to_dataset(dataframe, shuffle=True, batch_size=512): dataframe = dataframe.copy() labels = dataframe.pop('income_bracket').apply(lambda x: ">50K" in x).astype(int) ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels)) if shuffle: ds = ds.shuffle(buffer_size=len(dataframe)) ds = ds.batch(batch_size) return ds #@title Helper Functions def confusion_matrix(predictions, labels, threshold=0.5): tp = 0 tn = 0 fp = 0 fn = 0 for prediction, label in zip(predictions, labels): if prediction 
> threshold: if label == 1: tp += 1 else: fp += 1 else: if label == 0: tn += 1 else: fn += 1 return (tp, tn, fp, fn) def remove_group(dataframe, predictions, group): dataframe = dataframe.copy() dataframe['predictions'] = predictions dataframe = dataframe[dataframe.gender != group] group_predictions = dataframe.pop('predictions') return dataframe, group_predictions def print_accuracy(dataframe, predictions, threshold=0.5): dataframe = dataframe.copy() labels = dataframe.pop('income_bracket').apply(lambda x: ">50K" in x).astype(int) tp, tn, fp, fn = confusion_matrix(predictions, labels, threshold=threshold) print("True Positives: %d True Negatives: %d False Positives %d False Negatives: %d" % (tp, tn, fp, fn)) print("Accuracy: %0.5f" % ((tp+tn) / (tp + tn + fp + fn))) print("Positive Accuracy: %0.5f" % (tp / (tp + fp))) print("Negative Accuracy: %0.5f" % (tn / (tn + fn))) print("Percentage Predicted over >50K: %0.5f" % (((tp + fp) / (tp + tn + fp + fn)) * 100)) return (tp, tn, fp, fn) def parity(m_tp, m_fp, m_tn, m_fn, f_tp, f_fp, f_tn, f_fn): return ((m_tp + m_fp) / (m_tp + m_tn + m_fp + m_fn)) - ((f_tp + f_fp) / (f_tp + f_tn + f_fp + f_fn)) def print_title(title, print_length=50): print(('-' * print_length) + '\n' + title + '\n' + ('-' * print_length)) def print_analysis(train_df, train_predictions, test_df, test_predictions): print_title("Train Accuracy") print_accuracy(train_df, train_predictions) print_title("Full Test Accuracy") print_accuracy(test_df, test_predictions) print_title("Male Test Accuracy") male_df, male_pred = remove_group(test_df, test_predictions, "Female") m_tp, m_tn, m_fp, m_fn = print_accuracy(male_df, male_pred) print_title("Female Test Accuracy") female_df, female_pred = remove_group(test_df, test_predictions, "Male") f_tp, f_tn, f_fp, f_fn = print_accuracy(female_df, female_pred) print_title("Parity") print(parity(m_tp, m_fp, m_tn, m_fn, f_tp, f_fp, f_tn, f_fn)) """ Explanation: Feature Columns The following code was taken from 
intro_to_fairness. In short, Tensorflow requires a mapping of data and so every column is specified. End of explanation """ def build_model(feature_columns, features): feature_layer = tf.keras.layers.DenseFeatures(feature_columns) hidden_layer_1 = tf.keras.layers.Dense(1024, activation='relu')(feature_layer(features)) hidden_layer_2 = tf.keras.layers.Dense(512, activation='relu')(hidden_layer_1) output = tf.keras.layers.Dense(1, activation='sigmoid')(hidden_layer_2) model = tf.keras.Model([v for v in features.values()], output) model.compile(optimizer='adam', loss='mse', metrics=['accuracy']) return model baseline_model = build_model(deep_columns, features) baseline_model.fit(df_to_dataset(train_df), epochs=50) test_predictions = baseline_model.predict(df_to_dataset(test_df, shuffle=False)) baseline_model.evaluate(df_to_dataset(test_df)) train_predictions = baseline_model.predict(df_to_dataset(train_df, shuffle=False)) baseline_model.evaluate(df_to_dataset(train_df)) """ Explanation: Create and Run Non-Constrained Neural Model Defining our neural model that will be used as a comparison. Note: this model was purposfully designed to be simplistic, as it is trying to highlight the benifit to learning with soft constraints. End of explanation """ print_analysis(train_df, train_predictions, test_df, test_predictions) """ Explanation: Analyze Non-Constrained Results For this example we look at the fairness constraint that the protected group (gender) should have no predictive difference between classes. In this situation this means that the ratio of positive predictions should be the same between male and female. Note: this is by no means the only fairness constraint needed to have a fair model, and in fact can result in some doubious results (as seen in the follwoing section). The results do clearly show a skew in ratios as males are have a higher ratio of >50k predictions. 
End of explanation """ def constrained_loss(data, logits, threshold=0.5, weight=3): """Linear constrained loss for equal ratio prediction for the protected group. The constraint: (#Female >50k / #Total Female) - (#Male >50k / #Total Male) This constraint penalizes predictions between the protected group (gender), such that the ratio between all classes must be the same. An important note: to maintian differentability we do not use #Female >50k (which requires a round operation), instead we set values below the threshold to zero, and sum the logits. Args: data: Input features. logits: Predictions made in the logit. threshold: Binary threshold for predicting positive and negative labels. weight: Weight of the constrained loss. Returns: A scalar loss of the constraint violations. """ gender_label, gender_idx, gender_count = tf.unique_with_counts(data['gender'], out_idx=tf.int32, name=None) cut_logits = tf.reshape(tf.cast(logits > threshold, logits.dtype) * logits, [-1]) def f1(): return gender_idx def f2(): return tf.cast(tf.math.logical_not(tf.cast(gender_idx, tf.bool)), tf.int32) # Load male indexes as ones and female indexes to zeros. male_index = tf.cond(tf.reduce_all(tf.equal(gender_label, tf.constant(["Male", "Female"]))), f1, f2) # Cast the integers to float32 to do a multiplication with the logits. male_index = tf.cast(male_index, tf.float32) # (#Male > 50k / #Total Male) male_prob = tf.divide(tf.reduce_sum(tf.multiply(cut_logits, male_index)), tf.reduce_sum(male_index)) # Flip all female indexes to one and male indexes to zeros. female_index = tf.math.logical_not(tf.cast(male_index, tf.bool)) # Cast the integers to float32 to do a multiplication with the logits. female_index = tf.cast(female_index, tf.float32) # (#Female > 50k / #Total Female) female_prob = tf.divide(tf.reduce_sum(tf.multiply(cut_logits, female_index)), tf.reduce_sum(female_index)) # Since tf.math.abs is not differentable, separate the loss into two hinges. 
loss = tf.add(tf.maximum(male_prob - female_prob, 0.0), tf.maximum(female_prob - male_prob, 0.0)) return tf.multiply(loss, weight) class StructureModel(keras.Model): def train_step(self, data): features, labels = data with tf.GradientTape() as tape: logits = self(features, training=True) standard_loss = self.compiled_loss(labels, logits, regularization_losses=self.losses) constraint_loss = constrained_loss(features, logits) loss = standard_loss + constraint_loss trainable_vars = self.trainable_variables gradients = tape.gradient(loss, trainable_vars) self.optimizer.apply_gradients(zip(gradients, trainable_vars)) self.compiled_metrics.update_state(labels, logits) return {m.name: m.result() for m in self.metrics} """ Explanation: Define Constraints This requires a constrained loss function and a custom train step within the keras model class. End of explanation """ def build_constrained_model(feature_columns, features): feature_layer = tf.keras.layers.DenseFeatures(feature_columns) hidden_layer_1 = tf.keras.layers.Dense(1024, activation='relu')(feature_layer(features)) hidden_layer_2 = tf.keras.layers.Dense(512, activation='relu')(hidden_layer_1) output = tf.keras.layers.Dense(1, activation='sigmoid')(hidden_layer_2) model = StructureModel([v for v in features.values()], output) model.compile(optimizer='adam', loss='mse', metrics=['accuracy']) return model constrained_model = build_constrained_model(deep_columns, features) constrained_model.fit(df_to_dataset(train_df), epochs=50) test_predictions = constrained_model.predict(df_to_dataset(test_df, shuffle=False)) constrained_model.evaluate(df_to_dataset(test_df)) train_predictions = constrained_model.predict(df_to_dataset(train_df, shuffle=False)) constrained_model.evaluate(df_to_dataset(train_df)) """ Explanation: Build and Run Constrained Neural Model End of explanation """ print_analysis(train_df, train_predictions, test_df, test_predictions) """ Explanation: Analyze Constrained Results Ideally this constraint 
should correct the ratio imbalance between the protected groups (gender). This means our parity should be very close to zero. Note: This constraint does not mean the neural classifier is guaranteed to generalize and make better predictions. It is more likely to attempt to balance the class prediction ratio in the simplest fashion (resulting in a worse accuracy). End of explanation """
Source: microsoft/dowhy, docs/source/example_notebooks/DoWhy-The Causal Story Behind Hotel Booking Cancellations.ipynb (license: mit)
%reload_ext autoreload %autoreload 2 # Config dict to set the logging level import logging.config DEFAULT_LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'loggers': { '': { 'level': 'INFO', }, } } logging.config.dictConfig(DEFAULT_LOGGING) # Disabling warnings output import warnings from sklearn.exceptions import DataConversionWarning, ConvergenceWarning warnings.filterwarnings(action='ignore', category=DataConversionWarning) warnings.filterwarnings(action='ignore', category=ConvergenceWarning) warnings.filterwarnings(action='ignore', category=UserWarning) #!pip install dowhy import dowhy import pandas as pd import numpy as np import matplotlib.pyplot as plt """ Explanation: DoWhy-The Causal Story Behind Hotel Booking Cancellations We consider what factors cause a hotel booking to be cancelled. This analysis is based on a hotel bookings dataset from Antonio, Almeida and Nunes (2019). On GitHub, the dataset is available at rfordatascience/tidytuesday. There can be different reasons for why a booking is cancelled. A customer may have requested something that was not available (e.g., car parking), a customer may have found later that the hotel did not meet their requirements, or a customer may have simply cancelled their entire trip. Some of these like car parking are actionable by the hotel whereas others like trip cancellation are outside the hotel's control. In any case, we would like to better understand which of these factors cause booking cancellations. The gold standard of finding this out would be to use experiments such as Randomized Controlled Trials wherein each customer is randomly assigned to one of the two categories i.e. each customer is either assigned a car parking or not. However, such an experiment can be too costly and also unethical in some cases (for example, a hotel would start losing its reputation if people learn that its randomly assigning people to different level of service). 
Can we somehow answer our query using only observational data or data that has been collected in the past? End of explanation """ dataset = pd.read_csv('https://raw.githubusercontent.com/Sid-darthvader/DoWhy-The-Causal-Story-Behind-Hotel-Booking-Cancellations/master/hotel_bookings.csv') dataset.head() dataset.columns """ Explanation: Data Description For a quick glance at the features and their descriptions, the reader is referred here: https://github.com/rfordatascience/tidytuesday/blob/master/data/2020/2020-02-11/readme.md End of explanation """ # Total stay in nights dataset['total_stay'] = dataset['stays_in_week_nights']+dataset['stays_in_weekend_nights'] # Total number of guests dataset['guests'] = dataset['adults']+dataset['children'] +dataset['babies'] # Creating the different_room_assigned feature dataset['different_room_assigned']=0 slice_indices =dataset['reserved_room_type']!=dataset['assigned_room_type'] dataset.loc[slice_indices,'different_room_assigned']=1 # Deleting older features dataset = dataset.drop(['stays_in_week_nights','stays_in_weekend_nights','adults','children','babies' ,'reserved_room_type','assigned_room_type'],axis=1) dataset.columns """ Explanation: Feature Engineering Let's create some new and meaningful features so as to reduce the dimensionality of the dataset. - Total Stay = stays_in_weekend_nights + stays_in_week_nights - Guests = adults + children + babies - Different_room_assigned = 1 if reserved_room_type & assigned_room_type are different, 0 otherwise.
End of explanation """ dataset.isnull().sum() # Country,Agent,Company contain 488,16340,112593 missing entries dataset = dataset.drop(['agent','company'],axis=1) # Replacing missing countries with most freqently occuring countries dataset['country']= dataset['country'].fillna(dataset['country'].mode()[0]) dataset = dataset.drop(['reservation_status','reservation_status_date','arrival_date_day_of_month'],axis=1) dataset = dataset.drop(['arrival_date_year'],axis=1) dataset = dataset.drop(['distribution_channel'], axis=1) # Replacing 1 by True and 0 by False for the experiment and outcome variables dataset['different_room_assigned']= dataset['different_room_assigned'].replace(1,True) dataset['different_room_assigned']= dataset['different_room_assigned'].replace(0,False) dataset['is_canceled']= dataset['is_canceled'].replace(1,True) dataset['is_canceled']= dataset['is_canceled'].replace(0,False) dataset.dropna(inplace=True) print(dataset.columns) dataset.iloc[:, 5:20].head(100) dataset = dataset[dataset.deposit_type=="No Deposit"] dataset.groupby(['deposit_type','is_canceled']).count() dataset_copy = dataset.copy(deep=True) """ Explanation: We also remove other columns that either contain NULL values or have too many unique values (e.g., agent ID). We also impute missing values of the country column with the most frequent country. We remove distribution_channel since it has a high overlap with market_segment. End of explanation """ counts_sum=0 for i in range(1,10000): counts_i = 0 rdf = dataset.sample(1000) counts_i = rdf[rdf["is_canceled"]== rdf["different_room_assigned"]].shape[0] counts_sum+= counts_i counts_sum/10000 """ Explanation: Calculating Expected Counts Since the number of number of cancellations and the number of times a different room was assigned is heavily imbalanced, we first choose 1000 observations at random to see that in how many cases do the variables; 'is_cancelled' & 'different_room_assigned' attain the same values. 
This whole process is then repeated 10000 times and the expected count turns out to be near 50% (i.e. the probability of these two variables attaining the same value at random). So statistically speaking, we have no definite conclusion at this stage. Thus assigning rooms different to what a customer had reserved during his booking earlier, may or may not lead to him/her cancelling that booking. End of explanation """ # Expected Count when there are no booking changes counts_sum=0 for i in range(1,10000): counts_i = 0 rdf = dataset[dataset["booking_changes"]==0].sample(1000) counts_i = rdf[rdf["is_canceled"]== rdf["different_room_assigned"]].shape[0] counts_sum+= counts_i counts_sum/10000 """ Explanation: We now consider the scenario when there were no booking changes and recalculate the expected count. End of explanation """ # Expected Count when there are booking changes = 66.4% counts_sum=0 for i in range(1,10000): counts_i = 0 rdf = dataset[dataset["booking_changes"]>0].sample(1000) counts_i = rdf[rdf["is_canceled"]== rdf["different_room_assigned"]].shape[0] counts_sum+= counts_i counts_sum/10000 """ Explanation: In the 2nd case, we take the scenario when there were booking changes(>0) and recalculate the expected count. 
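A quick way to sanity-check the ~50% baseline quoted above: if two binary variables with marginal probabilities $p$ and $q$ were independent, the probability of them taking the same value on a random row is $pq + (1-p)(1-q)$. A tiny pure-Python sketch (the marginals below are made up for illustration, not taken from the hotel dataset):

```python
def match_prob(p, q):
    # Probability that independent Bernoulli(p) and Bernoulli(q) variables
    # take the same value: both equal 1, or both equal 0.
    return p * q + (1 - p) * (1 - q)

print(match_prob(0.5, 0.5))  # 0.5: balanced, independent variables agree half the time
print(match_prob(0.3, 0.1))  # 0.66: imbalanced marginals alone push agreement above 50%
```

So an agreement rate near the independence baseline is consistent with no relationship at all, which is why the raw expected counts are inconclusive on their own.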
End of explanation """ import pygraphviz causal_graph = """digraph { different_room_assigned[label="Different Room Assigned"]; is_canceled[label="Booking Cancelled"]; booking_changes[label="Booking Changes"]; previous_bookings_not_canceled[label="Previous Booking Retentions"]; days_in_waiting_list[label="Days in Waitlist"]; lead_time[label="Lead Time"]; market_segment[label="Market Segment"]; country[label="Country"]; U[label="Unobserved Confounders",observed="no"]; is_repeated_guest; total_stay; guests; meal; hotel; U->{different_room_assigned,required_car_parking_spaces,guests,total_stay,total_of_special_requests}; market_segment -> lead_time; lead_time->is_canceled; country -> lead_time; different_room_assigned -> is_canceled; country->meal; lead_time -> days_in_waiting_list; days_in_waiting_list ->{is_canceled,different_room_assigned}; previous_bookings_not_canceled -> is_canceled; previous_bookings_not_canceled -> is_repeated_guest; is_repeated_guest -> {different_room_assigned,is_canceled}; total_stay -> is_canceled; guests -> is_canceled; booking_changes -> different_room_assigned; booking_changes -> is_canceled; hotel -> {different_room_assigned,is_canceled}; required_car_parking_spaces -> is_canceled; total_of_special_requests -> {booking_changes,is_canceled}; country->{hotel, required_car_parking_spaces,total_of_special_requests}; market_segment->{hotel, required_car_parking_spaces,total_of_special_requests}; }""" """ Explanation: There is definitely some change happening when the number of booking changes are non-zero. So it gives us a hint that Booking Changes may be affecting room cancellation. But is Booking Changes the only confounding variable? What if there were some unobserved confounders, regarding which we have no information(feature) present in our dataset. Would we still be able to make the same claims as before? Using DoWhy to estimate the causal effect Step-1. 
Create a Causal Graph Represent your prior knowledge about the predictive modelling problem as a CI graph using assumptions. Don't worry, you need not specify the full graph at this stage. Even a partial graph would be enough and the rest can be figured out by DoWhy ;-) Here is a list of assumptions that have then been translated into a Causal Diagram:- Market Segment has 2 levels, “TA” refers to the “Travel Agents” and “TO” means “Tour Operators”, so it should affect the Lead Time (which is simply the number of days between booking and arrival). Country would also play a role in deciding whether a person books early or not (hence more Lead Time) and what type of Meal a person would prefer. Lead Time would definitely affect the number of Days in Waitlist (there are lower chances of finding a reservation if you're booking late). Additionally, higher Lead Times can also lead to Cancellations. The number of Days in Waitlist, the Total Stay in nights and the number of Guests might affect whether the booking is cancelled or retained. Previous Booking Retentions would affect whether a customer is a repeated guest or not. Additionally, both of these variables would affect whether the booking gets cancelled or not (e.g., a customer who has retained his past 5 bookings has a higher chance of retaining this one also; similarly, a person who has been cancelling bookings has a higher chance of repeating the same). Booking Changes would affect whether the customer is assigned a different room or not, which might also lead to cancellation. Finally, the number of Booking Changes being the only variable affecting Treatment and Outcome is highly unlikely, and it's possible that there might be some Unobserved Confounders, regarding which we have no information captured in our data.
End of explanation """ model= dowhy.CausalModel( data = dataset, graph=causal_graph.replace("\n", " "), treatment="different_room_assigned", outcome='is_canceled') model.view_model() from IPython.display import Image, display display(Image(filename="causal_model.png")) """ Explanation: Here the Treatment is assigning the same type of room reserved by the customer during Booking. Outcome would be whether the booking was cancelled or not. Common Causes represent the variables that according to us have a causal affect on both Outcome and Treatment. As per our causal assumptions, the 2 variables satisfying this criteria are Booking Changes and the Unobserved Confounders. So if we are not specifying the graph explicitly (Not Recommended!), one can also provide these as parameters in the function mentioned below. To aid in identification of causal effect, we remove the unobserved confounder node from the graph. (To check, you can use the original graph and run the following code. The identify_effect method will find that the effect cannot be identified.) End of explanation """ #Identify the causal effect identified_estimand = model.identify_effect(proceed_when_unidentifiable=True) print(identified_estimand) """ Explanation: Step-2. Identify the Causal Effect We say that Treatment causes Outcome if changing Treatment leads to a change in Outcome keeping everything else constant. Thus in this step, by using properties of the causal graph, we identify the causal effect to be estimated End of explanation """ estimate = model.estimate_effect(identified_estimand, method_name="backdoor.propensity_score_weighting",target_units="ate") # ATE = Average Treatment Effect # ATT = Average Treatment Effect on Treated (i.e. those who were assigned a different room) # ATC = Average Treatment Effect on Control (i.e. those who were not assigned a different room) print(estimate) """ Explanation: Step-3. 
Estimate the identified estimand End of explanation """ refute1_results=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause") print(refute1_results) """ Explanation: The result is surprising. It means that having a different room assigned decreases the chances of a cancellation. There's more to unpack here: is this the correct causal effect? Could it be that different rooms are assigned only when the booked room is unavailable, and therefore assigning a different room has a positive effect on the customer (as opposed to not assigning a room)? There could also be other mechanisms at play. Perhaps assigning a different room only happens at check-in, and the chances of a cancellation once the customer is already at the hotel are low? In that case, the graph is missing a critical variable on when these events happen. Does different_room_assigned happen mostly on the day of the booking? Knowing that variable can help improve the graph and our analysis. While the associational analysis earlier indicated a positive correlation between is_canceled and different_room_assigned, estimating the causal effect using DoWhy presents a different picture. It implies that a decision/policy to reduce the number of different_room_assigned at hotels may be counter-productive. Step-4. Refute results Note that the causal part does not come from data. It comes from your assumptions that lead to identification. Data is simply used for statistical estimation. Thus it becomes critical to verify whether our assumptions were even correct in the first step or not! What happens when another common cause exists? What happens when the treatment itself was placebo? Method-1 Random Common Cause:- Adds randomly drawn covariates to data and re-runs the analysis to see if the causal estimate changes or not. If our assumption was originally correct then the causal estimate shouldn't change by much. 
End of explanation """ refute2_results=model.refute_estimate(identified_estimand, estimate, method_name="placebo_treatment_refuter") print(refute2_results) """ Explanation: Method-2 Placebo Treatment Refuter:- Randomly assigns any covariate as a treatment and re-runs the analysis. If our assumptions were correct then this newly found out estimate should go to 0. End of explanation """ refute3_results=model.refute_estimate(identified_estimand, estimate, method_name="data_subset_refuter") print(refute3_results) """ Explanation: Method-3 Data Subset Refuter:- Creates subsets of the data(similar to cross-validation) and checks whether the causal estimates vary across subsets. If our assumptions were correct there shouldn't be much variation. End of explanation """
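The intuition behind the placebo refuter can be sketched outside DoWhy: permuting the treatment column destroys any genuine causal link, so an effect estimator run on the permuted data should report a value near zero. A small illustrative simulation (not DoWhy's actual implementation; the true effect of +0.3 is made up):

```python
import random

random.seed(0)
n = 10000
# Simulated binary treatment with a true +0.3 effect on the outcome mean
treatment = [random.random() < 0.5 for _ in range(n)]
outcome = [0.3 * t + random.gauss(0, 1) for t in treatment]

def naive_effect(t, y):
    # Difference in mean outcome between treated and control units
    treated = [yi for ti, yi in zip(t, y) if ti]
    control = [yi for ti, yi in zip(t, y) if not ti]
    return sum(treated) / len(treated) - sum(control) / len(control)

real = naive_effect(treatment, outcome)

placebo_treatment = treatment[:]
random.shuffle(placebo_treatment)  # placebo: break the treatment-outcome link
placebo = naive_effect(placebo_treatment, outcome)

print(round(real, 2), round(placebo, 2))  # real is near 0.3, placebo is near 0.0
```

The same logic motivates the random-common-cause and data-subset refuters: a correct causal estimate should be stable under perturbations that do not touch the true causal structure, and should vanish under perturbations that destroy it.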
kylepolich/dataskeptic
blog/b010_pushing-data-to-home-sales-api.ipynb
cc0-1.0
import requests from datetime import datetime import json """ Explanation: Pushing property information to the Data Skeptic Home Sales Project API Writing that title out makes me realize how poorly this project is named! Perhaps some volunteer might take up the challenge of branding these efforts... In any event, I wanted to write up a quick blog post giving people a BARE BONES example, using Python, of how to push property data to the API. The objective of this demo is to be accessible to anyone who can get Python running. In time, a more formal module should be defined, but for now, let's keep it lightweight. End of explanation """ baseurl = 'https://home-sales-data-api.herokuapp.com/' """ Explanation: Right now the API is hosted on Heroku at the base URL shown below. In time, this might be moved. api.dataskeptic.com might be a better TLD, so I've called this out as a parameter. End of explanation """ r = requests.get(baseurl + '/api/property') response = json.loads(r.content) print(response) """ Explanation: The requests package is great for calling straightforward RESTful APIs. Since the request below is a simple GET request, you could make it in your browser if you'd like. Let's ask the API for all listings. End of explanation """ # Defining a few variables explicitly price = 1 bedrooms = 0 bathrooms = 0 car_spaces = None building_size = 10 land_size = 20 size_units = 'M' # metric raw_address = '123 Test Ave' # This is the actual request we want to send in JSON format to the server.
request = { "listing_timestamp": str(datetime.now()), "listing_type": 'F', # for sale "price": price, "bedrooms": bedrooms, "bathrooms": bathrooms, "car_spaces": car_spaces, "building_size": building_size, "land_size": land_size, "size_units": size_units, "raw_address": raw_address, "features": [] } # Let's send that request to the API, and see what response we get back r = requests.post(baseurl + '/api/property/', data = request) print(r.content) """ Explanation: Clearly, I'm writing this blog post in early days of the API as I only found a few listings, all of which appear to be tests. But that's ok, my next post will be about getting some actual data into the API so I can do more interesting things. As a next step, let's push in one more test property, so you can see how the upload happens. End of explanation """ r = requests.get(baseurl + '/api/property') resp = json.loads(r.content) success = False for prop in resp: if prop['id']==3: if prop['raw_address']==raw_address: success=True print success """ Explanation: Looks like our upload was successful. Let's request all listings and check that the one with this ID is there and matches our address. End of explanation """
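One habit worth adding when scripting against any REST API: validate the parsed response before trusting it. A dependency-free sketch of the id/address check used above (the payload shape mirrors the listings returned by the API; no live call is made, and the helper name is mine, not part of the API):

```python
def find_listing(listings, listing_id, expected_address):
    # Return True if a listing with the given id and address is present
    for prop in listings:
        if prop.get('id') == listing_id and prop.get('raw_address') == expected_address:
            return True
    return False

# Hypothetical payload shaped like the /api/property output
listings = [{'id': 3, 'raw_address': '123 Test Ave', 'price': 1}]
print(find_listing(listings, 3, '123 Test Ave'))  # True
print(find_listing(listings, 4, '123 Test Ave'))  # False
```

Using `.get()` rather than indexing means a malformed listing simply fails the match instead of raising a `KeyError`.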
kaiping/incubator-singa
doc/en/docs/notebook/regression.ipynb
apache-2.0
from __future__ import division from __future__ import print_function from builtins import range from past.utils import old_div %matplotlib inline import numpy as np import matplotlib.pyplot as plt """ Explanation: Train a linear regression model In this notebook, we are going to use the tensor module from PySINGA to train a linear regression model. We use this example to illustrate the usage of the tensor module of PySINGA. Please refer to the documentation page for more tensor functions provided by PySINGA. End of explanation """ from singa import tensor """ Explanation: To import the tensor module of PySINGA, run End of explanation """ a, b = 3, 2 f = lambda x: a * x + b gx = np.linspace(0.,1,100) gy = [f(x) for x in gx] plt.plot(gx, gy, label='y=f(x)') plt.xlabel('x') plt.ylabel('y') plt.legend(loc='best') """ Explanation: The ground-truth Our problem is to find a line that fits a set of 2-d data points. We first plot the ground truth line, End of explanation """ nb_points = 30 # generate training data train_x = np.asarray(np.random.uniform(0., 1., nb_points), np.float32) train_y = np.asarray(f(train_x) + np.random.rand(30), np.float32) plt.plot(train_x, train_y, 'bo', ms=7) """ Explanation: Generating the training data Then we generate the training data points by adding a random error to points sampled from the ground truth line. 30 data points are generated. End of explanation """ def plot(idx, x, y): global gx, gy, axes # print the ground truth line axes[idx//5, idx%5].plot(gx, gy, label='y=f(x)') # print the learned line axes[idx//5, idx%5].plot(x, y, label='y=kx+b') axes[idx//5, idx%5].legend(loc='best') # set hyper-parameters max_iter = 15 alpha = 0.05 # init parameters k, b = 2.,0. """ Explanation: Training via SGD We assume that the training data points are sampled from a line, but we don't know the line's slope and intercept. The training is then to learn the slope (k) and intercept (b) by minimizing the error, i.e. ||kx+b-y||^2. 1.
we set the initial values of k and b (could be any values). 2. we iteratively update k and b by moving them in the direction of reducing the prediction error, i.e. in the gradient direction. For every iteration, we plot the learned line. End of explanation """ # to plot the intermediate results fig, axes = plt.subplots(3, 5, figsize=(12, 8)) x = tensor.from_numpy(train_x) y = tensor.from_numpy(train_y) # sgd for idx in range(max_iter): y_ = x * k + b err = y_ - y loss = old_div(tensor.sum(err * err), nb_points) print('loss at iter %d = %f' % (idx, loss)) da1 = old_div(tensor.sum(err * x), nb_points) db1 = old_div(tensor.sum(err), nb_points) # update the parameters k -= da1 * alpha b -= db1 * alpha plot(idx, tensor.to_numpy(x), tensor.to_numpy(y_)) """ Explanation: SINGA tensor module supports basic linear algebra operations, like + - * /, and advanced functions including axpy, gemm, gemv, and random functions (e.g., Gaussian and Uniform). SINGA Tensor instances could be created via tensor.Tensor() by specifying the shape, and optionally the device and data type. Note that every Tensor instance should be initialized (e.g., via set_value() or random functions) before reading data from it. You can also create Tensor instances from numpy arrays; a numpy array can be converted into a SINGA tensor via tensor.from_numpy(np_ary). A SINGA tensor can be converted into a numpy array via tensor.to_numpy(); note that the tensor should be on the host device. Tensor instances can be transferred from other devices to the host device via to_host(). Users cannot read a single cell of the Tensor instance. To read a single cell, users need to convert the Tensor into a numpy array.
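For reference, the update quantities da1 and db1 used in the SGD loop follow from differentiating the mean squared error (the code drops the constant factor 2, which is effectively absorbed into the learning rate alpha):

$$L(k, b) = \frac{1}{N}\sum_{i=1}^{N}\left(k x_i + b - y_i\right)^2$$

$$\frac{\partial L}{\partial k} = \frac{2}{N}\sum_{i=1}^{N}\left(k x_i + b - y_i\right)x_i, \qquad \frac{\partial L}{\partial b} = \frac{2}{N}\sum_{i=1}^{N}\left(k x_i + b - y_i\right)$$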
End of explanation """ # to plot the intermediate results fig, axes = plt.subplots(3, 5, figsize=(12, 8)) x = tensor.from_numpy(train_x) y = tensor.from_numpy(train_y) # sgd for idx in range(max_iter): y_ = x * k + b err = y_ - y loss = old_div(tensor.sum(err * err), nb_points) print('loss at iter %d = %f' % (idx, loss)) da1 = old_div(tensor.sum(err * x), nb_points) db1 = old_div(tensor.sum(err), nb_points) # update the parameters k -= da1 * alpha b -= db1 * alpha plot(idx, tensor.to_numpy(x), tensor.to_numpy(y_)) """ Explanation: We can see that the learned line is becoming closer to the ground truth line (in blue color). Next: MLP example End of explanation """
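For readers without a SINGA installation, the same training loop can be mirrored in plain NumPy; a sketch using the same hyper-parameters (15 iterations, learning rate 0.05) with its own synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 30).astype(np.float32)
y = (3.0 * x + 2.0 + rng.random(30)).astype(np.float32)  # noisy line, as above

k, b, alpha = 2.0, 0.0, 0.05
losses = []
for _ in range(15):
    err = k * x + b - y
    losses.append(float(np.mean(err * err)))
    # gradient steps; the factor 2 is absorbed into alpha, matching the tensor version
    k -= alpha * float(np.mean(err * x))
    b -= alpha * float(np.mean(err))

print(losses[0], losses[-1])  # the loss shrinks as k and b move toward the true line
```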
mne-tools/mne-tools.github.io
0.22/_downloads/ecc61038e0082bd1c13f6a49dd4cd752/plot_70_fnirs_processing.ipynb
bsd-3-clause
import os import numpy as np import matplotlib.pyplot as plt from itertools import compress import mne fnirs_data_folder = mne.datasets.fnirs_motor.data_path() fnirs_cw_amplitude_dir = os.path.join(fnirs_data_folder, 'Participant-1') raw_intensity = mne.io.read_raw_nirx(fnirs_cw_amplitude_dir, verbose=True) raw_intensity.load_data() """ Explanation: Preprocessing functional near-infrared spectroscopy (fNIRS) data This tutorial covers how to convert functional near-infrared spectroscopy (fNIRS) data from raw measurements to relative oxyhaemoglobin (HbO) and deoxyhaemoglobin (HbR) concentration. :depth: 2 Here we will work with the fNIRS motor data &lt;fnirs-motor-dataset&gt;. End of explanation """ subjects_dir = mne.datasets.sample.data_path() + '/subjects' fig = mne.viz.create_3d_figure(size=(800, 600), bgcolor='white') fig = mne.viz.plot_alignment(raw_intensity.info, show_axes=True, subject='fsaverage', coord_frame='mri', trans='fsaverage', surfaces=['brain'], fnirs=['channels', 'pairs', 'sources', 'detectors'], subjects_dir=subjects_dir, fig=fig) mne.viz.set_3d_view(figure=fig, azimuth=20, elevation=60, distance=0.4, focalpoint=(0., -0.01, 0.02)) """ Explanation: View location of sensors over brain surface Here we validate that the location of sources-detector pairs and channels are in the expected locations. Source-detector pairs are shown as lines between the optodes, channels (the mid point of source-detector pairs) are optionally shown as orange dots. Source are optionally shown as red dots and detectors as black. 
End of explanation """ picks = mne.pick_types(raw_intensity.info, meg=False, fnirs=True) dists = mne.preprocessing.nirs.source_detector_distances( raw_intensity.info, picks=picks) raw_intensity.pick(picks[dists > 0.01]) raw_intensity.plot(n_channels=len(raw_intensity.ch_names), duration=500, show_scrollbars=False) """ Explanation: Selecting channels appropriate for detecting neural responses First we remove channels that are too close together (short channels) to detect a neural response (less than 1 cm distance between optodes). These short channels can be seen in the figure above. To achieve this we pick all the channels that are not considered to be short. End of explanation """ raw_od = mne.preprocessing.nirs.optical_density(raw_intensity) raw_od.plot(n_channels=len(raw_od.ch_names), duration=500, show_scrollbars=False) """ Explanation: Converting from raw intensity to optical density The raw intensity values are then converted to optical density. End of explanation """ sci = mne.preprocessing.nirs.scalp_coupling_index(raw_od) fig, ax = plt.subplots() ax.hist(sci) ax.set(xlabel='Scalp Coupling Index', ylabel='Count', xlim=[0, 1]) """ Explanation: Evaluating the quality of the data At this stage we can quantify the quality of the coupling between the scalp and the optodes using the scalp coupling index. This method looks for the presence of a prominent synchronous signal in the frequency range of cardiac signals across both photodetected signals. In this example the data is clean and the coupling is good for all channels, so we will not mark any channels as bad based on the scalp coupling index. End of explanation """ raw_od.info['bads'] = list(compress(raw_od.ch_names, sci < 0.5)) """ Explanation: In this example we will mark all channels with a SCI less than 0.5 as bad (this dataset is quite clean, so no channels are marked as bad). 
End of explanation """ raw_haemo = mne.preprocessing.nirs.beer_lambert_law(raw_od) raw_haemo.plot(n_channels=len(raw_haemo.ch_names), duration=500, show_scrollbars=False) """ Explanation: At this stage it is appropriate to inspect your data (for instructions on how to use the interactive data visualisation tool see tut-visualize-raw) to ensure that channels with poor scalp coupling have been removed. If your data contains lots of artifacts you may decide to apply artifact reduction techniques as described in ex-fnirs-artifacts. Converting from optical density to haemoglobin Next we convert the optical density data to haemoglobin concentration using the modified Beer-Lambert law. End of explanation """ fig = raw_haemo.plot_psd(average=True) fig.suptitle('Before filtering', weight='bold', size='x-large') fig.subplots_adjust(top=0.88) raw_haemo = raw_haemo.filter(0.05, 0.7, h_trans_bandwidth=0.2, l_trans_bandwidth=0.02) fig = raw_haemo.plot_psd(average=True) fig.suptitle('After filtering', weight='bold', size='x-large') fig.subplots_adjust(top=0.88) """ Explanation: Removing heart rate from signal The haemodynamic response has frequency content predominantly below 0.5 Hz. An increase in activity around 1 Hz can be seen in the data that is due to the person's heart beat and is unwanted. So we use a low pass filter to remove this. A high pass filter is also included to remove slow drifts in the data. End of explanation """ events, _ = mne.events_from_annotations(raw_haemo, event_id={'1.0': 1, '2.0': 2, '3.0': 3}) event_dict = {'Control': 1, 'Tapping/Left': 2, 'Tapping/Right': 3} fig = mne.viz.plot_events(events, event_id=event_dict, sfreq=raw_haemo.info['sfreq']) fig.subplots_adjust(right=0.7) # make room for the legend """ Explanation: Extract epochs Now that the signal has been converted to relative haemoglobin concentration, and the unwanted heart rate component has been removed, we can extract epochs related to each of the experimental conditions. 
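The channel-marking line above combines a NumPy boolean mask with `itertools.compress`, which keeps the elements of the first iterable wherever the second is truthy. The same idiom in isolation, with made-up channel names and SCI values:

```python
from itertools import compress

import numpy as np

# Hypothetical channel names and scalp coupling index values
ch_names = ['S1_D1 760', 'S1_D1 850', 'S2_D1 760', 'S2_D1 850']
sci = np.array([0.92, 0.95, 0.31, 0.40])

bads = list(compress(ch_names, sci < 0.5))
print(bads)  # ['S2_D1 760', 'S2_D1 850']
```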
First we extract the events of interest and visualise them to ensure they are correct. End of explanation """ reject_criteria = dict(hbo=80e-6) tmin, tmax = -5, 15 epochs = mne.Epochs(raw_haemo, events, event_id=event_dict, tmin=tmin, tmax=tmax, reject=reject_criteria, reject_by_annotation=True, proj=True, baseline=(None, 0), preload=True, detrend=None, verbose=True) epochs.plot_drop_log() """ Explanation: Next we define the range of our epochs, the rejection criteria, baseline correction, and extract the epochs. We visualise the log of which epochs were dropped. End of explanation """ epochs['Tapping'].plot_image(combine='mean', vmin=-30, vmax=30, ts_args=dict(ylim=dict(hbo=[-15, 15], hbr=[-15, 15]))) """ Explanation: View consistency of responses across trials Now we can view the haemodynamic response for our tapping condition. We visualise the response for both the oxy- and deoxyhaemoglobin, and observe the expected peak in HbO at around 6 seconds consistently across trials, and the consistent dip in HbR that is slightly delayed relative to the HbO peak. End of explanation """ epochs['Control'].plot_image(combine='mean', vmin=-30, vmax=30, ts_args=dict(ylim=dict(hbo=[-15, 15], hbr=[-15, 15]))) """ Explanation: We can also view the epoched data for the control condition and observe that it does not show the expected morphology. End of explanation """ fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(15, 6)) clims = dict(hbo=[-20, 20], hbr=[-20, 20]) epochs['Control'].average().plot_image(axes=axes[:, 0], clim=clims) epochs['Tapping'].average().plot_image(axes=axes[:, 1], clim=clims) for column, condition in enumerate(['Control', 'Tapping']): for ax in axes[:, column]: ax.set_title('{}: {}'.format(condition, ax.get_title())) """ Explanation: View consistency of responses across channels Similarly we can view how consistent the response is across the optode pairs that we selected. 
All the channels in this data are located over the motor cortex, and all channels show a similar pattern in the data. End of explanation """ evoked_dict = {'Tapping/HbO': epochs['Tapping'].average(picks='hbo'), 'Tapping/HbR': epochs['Tapping'].average(picks='hbr'), 'Control/HbO': epochs['Control'].average(picks='hbo'), 'Control/HbR': epochs['Control'].average(picks='hbr')} # Rename channels until the encoding of frequency in ch_name is fixed for condition in evoked_dict: evoked_dict[condition].rename_channels(lambda x: x[:-4]) color_dict = dict(HbO='#AA3377', HbR='b') styles_dict = dict(Control=dict(linestyle='dashed')) mne.viz.plot_compare_evokeds(evoked_dict, combine="mean", ci=0.95, colors=color_dict, styles=styles_dict) """ Explanation: Plot standard fNIRS response image Next we generate the most common visualisation of fNIRS data: plotting both the HbO and HbR on the same figure to illustrate the relation between the two signals. End of explanation """ times = np.arange(-3.5, 13.2, 3.0) topomap_args = dict(extrapolate='local') epochs['Tapping'].average(picks='hbo').plot_joint( times=times, topomap_args=topomap_args) """ Explanation: View topographic representation of activity Next we view how the topographic activity changes throughout the response. End of explanation """ times = np.arange(4.0, 11.0, 1.0) epochs['Tapping/Left'].average(picks='hbo').plot_topomap( times=times, **topomap_args) epochs['Tapping/Right'].average(picks='hbo').plot_topomap( times=times, **topomap_args) """ Explanation: Compare tapping of left and right hands Finally we generate topo maps for the left and right conditions to view the location of activity. First we visualise the HbO activity. End of explanation """ epochs['Tapping/Left'].average(picks='hbr').plot_topomap( times=times, **topomap_args) epochs['Tapping/Right'].average(picks='hbr').plot_topomap( times=times, **topomap_args) """ Explanation: And we also view the HbR activity for the two conditions. 
End of explanation """ fig, axes = plt.subplots(nrows=2, ncols=4, figsize=(9, 5), gridspec_kw=dict(width_ratios=[1, 1, 1, 0.1])) vmin, vmax, ts = -8, 8, 9.0 evoked_left = epochs['Tapping/Left'].average() evoked_right = epochs['Tapping/Right'].average() evoked_left.plot_topomap(ch_type='hbo', times=ts, axes=axes[0, 0], vmin=vmin, vmax=vmax, colorbar=False, **topomap_args) evoked_left.plot_topomap(ch_type='hbr', times=ts, axes=axes[1, 0], vmin=vmin, vmax=vmax, colorbar=False, **topomap_args) evoked_right.plot_topomap(ch_type='hbo', times=ts, axes=axes[0, 1], vmin=vmin, vmax=vmax, colorbar=False, **topomap_args) evoked_right.plot_topomap(ch_type='hbr', times=ts, axes=axes[1, 1], vmin=vmin, vmax=vmax, colorbar=False, **topomap_args) evoked_diff = mne.combine_evoked([evoked_left, evoked_right], weights=[1, -1]) evoked_diff.plot_topomap(ch_type='hbo', times=ts, axes=axes[0, 2:], vmin=vmin, vmax=vmax, colorbar=True, **topomap_args) evoked_diff.plot_topomap(ch_type='hbr', times=ts, axes=axes[1, 2:], vmin=vmin, vmax=vmax, colorbar=True, **topomap_args) for column, condition in enumerate( ['Tapping Left', 'Tapping Right', 'Left-Right']): for row, chroma in enumerate(['HbO', 'HbR']): axes[row, column].set_title('{}: {}'.format(chroma, condition)) fig.tight_layout() """ Explanation: And we can plot the comparison at a single time point for two conditions. End of explanation """ fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(6, 4)) mne.viz.plot_evoked_topo(epochs['Left'].average(picks='hbo'), color='b', axes=axes, legend=False) mne.viz.plot_evoked_topo(epochs['Right'].average(picks='hbo'), color='r', axes=axes, legend=False) # Tidy the legend. leg_lines = [line for line in axes.lines if line.get_c() == 'b'][:1] leg_lines.append([line for line in axes.lines if line.get_c() == 'r'][0]) fig.legend(leg_lines, ['Left', 'Right'], loc='lower right') """ Explanation: Lastly, we can also look at the individual waveforms to see what is driving the topographic plot above. 
End of explanation """
omartinsky/PYBOR
main.ipynb
mit
%pylab %matplotlib inline %run jupyter_helpers %run yc_framework figure_width = 16 """ Explanation: PYBOR PYBOR is a multi-curve interest rate framework and risk engine based on multivariate optimization techniques, written in Python. Copyright &copy; 2017 Ondrej Martinsky, All rights reserved www.github.com/omartinsky/pybor End of explanation """ eval_date = create_date('2017-01-03') def generate_pricing_curvemap(eval_date): random.seed(0) pricing_curvemap = CurveMap() t = linspace(eval_date+0, eval_date+365*80, 7) def createCurve(name, r0, speed, mean, sigma): return CurveConstructor.FromShortRateModel(name, t, r0, speed, mean, sigma, interpolation=InterpolationMode.CUBIC_LOGDF) def createCurveFromSpread(baseCurve, name, r0, speed, mean, sigma): out = createCurve(name, r0, speed, mean, sigma) out.add_another_curve(baseCurve) return out u3m = createCurve('USD.LIBOR.3M', 0.02, 0.03, 0.035, 5e-4) u6m = createCurveFromSpread(u3m, 'USD.LIBOR.6M', 0.01, 0.03, 0.011, 5e-4) u12m = createCurveFromSpread(u6m, 'USD.LIBOR.12M', 0.01, 0.03, 0.011, 5e-4) g3m = createCurveFromSpread(u3m, 'GBP.LIBOR.3M', 0.03, 0.03, 0.0, 5e-4) u1b = createCurve('USD/USD.OIS', 0.01, 0.03, 0.011, 5e-4) g1b = createCurveFromSpread(u1b, 'GBP/GBP.SONIA', 0.005, 0.03, 0.005, 5e-4) gu1b = createCurveFromSpread(u1b, 'GBP/USD.OIS', 0.001, 0.03, 0.001, 5e-4) pricing_curvemap.add_curve(u3m) pricing_curvemap.add_curve(u6m) pricing_curvemap.add_curve(u12m) pricing_curvemap.add_curve(g3m) pricing_curvemap.add_curve(g1b) pricing_curvemap.add_curve(u1b) pricing_curvemap.add_curve(gu1b) return pricing_curvemap pricing_curvemap = generate_pricing_curvemap(eval_date) # Display: figsize(figure_width, 6) linestyle('solid'), pricing_curvemap.plot(), title('Pricing Curvemap'), legend(), show(); """ Explanation: Pricing Curve Map Generate pricing curvemap using stochastic short-rate model $dr_t=a(b-r_t)dt + \sigma dW_t$ for curves and tenor/cross-currency basis spreads. 
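The pricing curves below are generated from a mean-reverting short-rate model, $dr_t = a(b - r_t)\,dt + \sigma\,dW_t$. A standalone Euler-Maruyama discretisation of that SDE, using the same parameter values as the USD.LIBOR.3M curve below (this is a sketch; `CurveConstructor.FromShortRateModel` in the framework wraps this idea):

```python
import numpy as np

def simulate_short_rate(r0, speed, mean, sigma, dt, n_steps, seed=0):
    # Euler-Maruyama scheme for dr = speed * (mean - r) dt + sigma dW
    rng = np.random.default_rng(seed)
    r = np.empty(n_steps + 1)
    r[0] = r0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        r[i + 1] = r[i] + speed * (mean - r[i]) * dt + sigma * dw
    return r

path = simulate_short_rate(r0=0.02, speed=0.03, mean=0.035, sigma=5e-4,
                           dt=1.0 / 365.0, n_steps=365)
print(path[0], path[-1])  # starts at 2% and drifts slowly toward the 3.5% mean
```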
This will be our starting point; the curves inside this curvemap will be used only to reprice market instruments. End of explanation """ cloned_curve = deepcopy(pricing_curvemap['USD.LIBOR.3M']) figsize(figure_width, 5), linestyle('solid'), title('Curve Interpolation Modes') for i, interpolation in enumerate(InterpolationMode._member_map_.values()): cloned_curve.set_interpolator(interpolation) cloned_curve.plot(label=interpolation), legend() """ Explanation: Interpolation Modes PYBOR supports three different interpolation methods: * Linear interpolation of the logarithm of discount factors (aka piecewise-constant in forward-rate space) * Linear interpolation of the continuously-compounded zero-rates * Cubic interpolation of the logarithm of discount factors Below is the same curve interpolated in the three different ways: End of explanation """ curve_builder = CurveBuilder('engine_usd_gbp.xlsx', eval_date) """ Explanation: Curve Builder Create the curve builder. Definitions of the curves and of the market instruments from which these curves are built are loaded from the Excel spreadsheet. End of explanation """ price_ladder = curve_builder.reprice(pricing_curvemap) """ Explanation: Instrument Repricing Use the curve builder (specifically, the instrument definitions it contains) to reprice instruments from the previously created pricing curve map.
Instrument prices are returned in a structure called a price ladder. End of explanation """ # Display: figsize(figure_width, 4) price_ladder.sublist('USD.LIBOR.3M').dataframe() """ Explanation: Display price ladder for a specific curve End of explanation """ figsize(figure_width, 6) m, r = curve_builder.get_instrument_rates(price_ladder.sublist('USD.LIBOR.3M')) m = [exceldate_to_pydate(int(i)) for i in m] title('USD.LIBOR.3M instrument par-rates') linestyle(' '), plt.plot(m,r,marker='.', label='USD.LIBOR.3M instrument par-rates') linestyle('-'), pricing_curvemap['USD.LIBOR.3M'].plot() legend(); """ Explanation: Display instrument par-rates Every instrument type has a specific relationship between the quoted price $P$ and the par-rate $r$. For instance: For interest rate swaps, $P = 100 \times r$ For interest rate futures, $P = 10000 \times (1 - r)$ The relationship between an interest rate curve in zero-rate space and instrument par-rates is often a source of confusion. Below is a graph illustrating the difference between the USD.LIBOR.3M pricing curve's zero rates and the par-rates of instruments (e.g. deposits, futures, swaps) repriced using this curve. As we can see, only the par-rates of money market (deposit) instruments correspond to the curve points plotted in zero-rate space. End of explanation """ build_output = curve_builder.build_curves(price_ladder) """ Explanation: Curve Building Build a brand new collection of curves from the instrument prices. This will take a few seconds to complete ... End of explanation """ # Display: figsize(figure_width, 6) title('Curvebuilder output') linestyle('solid'), build_output.output_curvemap.plot(), legend() linestyle('dotted'), pricing_curvemap.plot(); """ Explanation: Below is a comparison of the curves we have just built (solid lines) with the pricing curves (dotted lines). These lines should be as close to each other as possible.
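As a minimal, hypothetical illustration of the price/par-rate relationships quoted above (these helper functions are not part of the PYBOR API):

```python
def swap_price_from_rate(r):
    # Interest rate swaps: P = 100 * r
    return 100.0 * r

def future_price_from_rate(r):
    # Interest rate futures: P = 10000 * (1 - r)
    return 10000.0 * (1.0 - r)

def future_rate_from_price(p):
    # Inverting the futures quote recovers the par-rate
    return 1.0 - p / 10000.0
```

For example, a 1.5% futures par-rate quotes as 9850.0, and inverting the quote recovers the rate.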
End of explanation """ jacobian_dPdI = np.linalg.pinv(build_output.jacobian_dIdP) # Display: figsize(figure_width, 8) title("Jacobian Matrix"), xlabel('Pillars'), ylabel('Instruments') imshow(jacobian_dPdI), colorbar(); """ Explanation: Instrument/Pillar Jacobian Matrix The optimizer uses a gradient-descent method to minimize the error between the instrument par-rates calculated from the curves being optimized and the input instrument par-rates. To do this, the optimizer calculates the derivative ${\delta (I-I') / \delta P}$, where $I$ is the actual instrument par-rate, $I'$ is the target instrument par-rate and $P$ is the pillar value from the curve (practically speaking, the discount factor). The Jacobian matrix, a by-product of the curve-building process, can then be used for risk-calculation purposes, as illustrated later. End of explanation """ risk_calculator = RiskCalculator(curve_builder, build_output) """ Explanation: Risk Calculator The risk calculator is constructed from the curve builder (which contains curve definitions and market conventions) and the build output (which contains curves and the Jacobian matrix). End of explanation """ def visualise_bump(instrument_search_string, bumpsize): instruments, bumpsize = risk_calculator.find_instruments(instrument_search_string), bumpsize curvemap_bump = risk_calculator.get_bumped_curvemap(instruments, bumpsize, BumpType.JACOBIAN_REBUILD) # Display: figsize(figure_width, 6) linestyle('solid'), build_output.output_curvemap.plot(), legend() linestyle('dashed'), curvemap_bump.plot() title("Effect of bumping instrument %s" % instrument_search_string) """ Explanation: Let's define a convenience function that bumps the par-rate of a specific instrument by the given number of basis points and visualises the effect on all curves.
End of explanation """ visualise_bump('USD.LIBOR.3M__Swap__20Y', 1e-4) visualise_bump('USD.LIBOR.3M.*', 15e-4) """ Explanation: Bumping Market Instruments Bumping market instruments (such as those which define the USD.LIBOR.3M neutral curve) will cause a parallel shift of all other curves which are defined as a basis off this curve. End of explanation """ visualise_bump('USD.LIBOR.6M__BasisSwap__20Y', 1e-4) visualise_bump('USD.LIBOR.6M.*', 15e-4) """ Explanation: Bumping Basis Instruments Bumping basis instruments (USD.LIBOR.6M) will cause movement in the USD LIBOR 6M basis curve. End of explanation """
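The Jacobian-based bumping used above can be sketched in isolation. Assuming `jacobian_dIdP` maps pillar moves to instrument par-rate moves, its pseudo-inverse maps an instrument bump back to a first-order pillar move; the 2x2 matrix below is a toy example, not real PYBOR output:

```python
import numpy as np

# Toy Jacobian dI/dP: 2 instruments vs. 2 curve pillars (made-up numbers)
jacobian_dIdP = np.array([[0.8, 0.1],
                          [0.2, 0.9]])

# Pseudo-inverse gives dP/dI, as computed in the notebook
jacobian_dPdI = np.linalg.pinv(jacobian_dIdP)

# Bump the first instrument's par-rate by 1bp
instrument_bump = np.array([1e-4, 0.0])

# First-order estimate of the induced pillar moves
pillar_bump = jacobian_dPdI @ instrument_bump

# Pushing the pillar moves forward should recover the instrument bump
recovered = jacobian_dIdP @ pillar_bump
```

This round trip is exact here because the toy matrix is square and invertible; for rectangular Jacobians the pseudo-inverse gives the least-squares pillar move.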
hackgnar/pyubertooth
notebooks/AHA_Demo-4-28-16.ipynb
gpl-2.0
import time import sys sys.path.insert(0,"/Users/rholeman/src/pyubertooth") from pyubertooth.ubertooth import Ubertooth, ubertooth_rx_to_stdout from pylibbtbb.bluetooth_packet import BtbbPacket import bluetooth """ Explanation: Python Ubertooth Bindings bla bla bla bla jupyter has no spellcheck so drink when you find spelling errors https://github.com/hackgnar/pyubertooth Why not just use the provided CLI C tools and C libs? * My python implementation sucks and drops a lot of traffic. Makes it good for low resource devices. * Easy to deploy * Good for prototypes * Access to more functionality than what is provided by CLI without the need to hammer out some C * Easier to integrate with other libs Lib imports The only one that matters is import Ubertooth The BtbbPacket lib is important if you want to decode baseband data off the wire into something other than bits and bytes. However the current code sucks ass and its rewrite is my main objective for revisiting this project End of explanation """ u = Ubertooth() """ Explanation: Create an ubertooth device to interact with This supports real and file based devices so you can import a dump file for post mortem analysis or testing End of explanation """ print "serial: %s" % u.cmd_get_serial() print "part number: %s" % u.cmd_get_partnum() print "board id: %s" % u.cmd_get_board_id() """ Explanation: Get simple information from the device all of this is supported from the C CLI tools End of explanation """ for data in u.rx_stream(count=5): print data u.cmd_stop() """ Explanation: Stream raw data from the device This allows you to do your own deserialization of the btbb data or whatever you want to do End of explanation """ u.cmd_led_specan() u.cmd_stop() """ Explanation: Do the same stuff you can do from CLI but in python meh...
not so impressive, but cool End of explanation """ u.cmd_get_channel() u.cmd_set_channel(50) """ Explanation: Get & set core functionality on the fly You can kind of do this from the CLI tools, but this allows you to do shit on the fly without hacks or a C implementation End of explanation """ for i in range(10): print u.cmd_get_clock() u.cmd_set_clock() """ Explanation: Get & set C only functionality on the fly You can do this shit from the CLI tools. End of explanation """ for i in range(10): time.sleep(1) u.cmd_set_usrled(state=i%2) for i in range(100): time.sleep(0.1) x = u.cmd_get_clock() u.cmd_set_usrled(state=x%2) x = u.cmd_get_clock() u.cmd_set_rxled(state=x%2) x = u.cmd_get_clock() u.cmd_set_txled(state=x%2) """ Explanation: Blink lights like a boss because it makes us feel like we know how to program hardware End of explanation """ results = [] for data in u.rx_stream(): channel = u.cmd_get_channel() channel += 1 u.cmd_set_channel(channel % 79) pkt = BtbbPacket(data=data) if pkt.LAP: results.append(pkt) print pkt if len(results) > 10: break u.cmd_stop() """ Explanation: Stream and deserialize bluetooth data all in python We can even add some different spins on this by changing settings while doing live streams. 
For example, we can add channel survey (this wasn't always available in the CLI) with the following: channel = u.cmd_get_channel() channel += 1 u.cmd_set_channel(channel % 79) End of explanation """ import numpy as np import matplotlib.pyplot as plt %matplotlib inline tmp = {} for i in results: if i.LAP in tmp: tmp[i.LAP] += 1 else: tmp[i.LAP] = 1 LAPs = tmp.keys() lap_count = tmp.values() y_pos = np.arange(len(LAPs)) fig = plt.figure(figsize=(10, 5)) plt.barh(y_pos, lap_count, align='center', alpha=0.4) plt.yticks(y_pos, LAPs) plt.xlabel('count') plt.title('who is out there?') plt.show() """ Explanation: Paint a pretty picture Because managers love this shit End of explanation """ u_results = list(set([i.LAP for i in results])) u_results brute_uap = [chr(x).encode("hex") for x in range(256)] brute_uap for i in brute_uap: address = "0000%s%s" % (i,u_results[0]) print "trying %s" % address try: results = bluetooth.find_service(address=address) if results: print results break except: pass """ Explanation: Implement hybrid active/passive approaches NOTE: don't do this in osx because its python bluetooth support sucks End of explanation """
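The manual dict tally used for the bar chart above can also be written with `collections.Counter`; the LAP strings below are hypothetical placeholders for real `BtbbPacket.LAP` values:

```python
from collections import Counter

# Hypothetical LAPs as they might come out of a capture
laps = ["9e8b33", "9e8b33", "5d3c2a", "9e8b33"]

counts = Counter(laps)
# Most common LAP first, ready for plotting
ordered = counts.most_common()
```

`counts.keys()` and `counts.values()` then drop straight into the barh plot in place of the hand-rolled dict.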
clarkkev/attention-analysis
General_Analysis.ipynb
mit
import collections import pickle import matplotlib import numpy as np import seaborn as sns import sklearn from matplotlib import pyplot as plt from matplotlib import cm from sklearn import manifold sns.set_style("darkgrid") """ Explanation: General BERT Attention Analysis This notebook contains code for analyzing general patterns in BERT's attention (see Sections 3 and 6 of What Does BERT Look At? An Analysis of BERT's Attention) End of explanation """ def load_pickle(fname): with open(fname, "rb") as f: return pickle.load(f) # add, encoding="latin1") if using python3 and downloaded data # BERT-base Attention Maps extracted from Wikipedia # Data is a list of dicts of the following form: # { # "tokens": list of strings # "attns": [n_layers, n_heads, n_tokens, n_tokens] # tensor of attention weights # } data = load_pickle("./data/unlabeled_attn.pkl") n_docs = len(data) # Average Jensen-Shannon divergences between attention heads js_divergences = load_pickle("./data/head_distances.pkl") def data_iterator(): for i, doc in enumerate(data): if i % 100 == 0 or i == len(data) - 1: print("{:.1f}% done".format(100.0 * (i + 1) / len(data))) yield doc["tokens"], np.array(doc["attns"]) """ Explanation: Loading the Data You can download the data used for the notebook here. You can create your own data and extract your own attention maps using preprocess_unlabeled.py and extract_attention.py (see the README for details). End of explanation """ avg_attns = { k: np.zeros((12, 12)) for k in [ "self", "right", "left", "sep", "sep_sep", "rest_sep", "cls", "punct"] } print("Computing token stats") for tokens, attns in data_iterator(): n_tokens = attns.shape[-1] # create masks indicating where particular tokens are seps, clss, puncts = (np.zeros(n_tokens) for _ in range(3)) for position, token in enumerate(tokens): if token == "[SEP]": seps[position] = 1 if token == "[CLS]": clss[position] = 1 if token == "."
or token == ",": puncts[position] = 1 # create masks indicating which positions are relevant for each key sep_seps = np.ones((n_tokens, n_tokens)) sep_seps *= seps[np.newaxis] sep_seps *= seps[:, np.newaxis] rest_seps = np.ones((n_tokens, n_tokens)) rest_seps *= (np.ones(n_tokens) - seps)[:, np.newaxis] rest_seps *= seps[np.newaxis] selectors = { "self": np.eye(n_tokens, n_tokens), "right": np.eye(n_tokens, n_tokens, 1), "left": np.eye(n_tokens, n_tokens, -1), "sep": np.tile(seps[np.newaxis], [n_tokens, 1]), "sep_sep": sep_seps, "rest_sep": rest_seps, "cls": np.tile(clss[np.newaxis], [n_tokens, 1]), "punct": np.tile(puncts[np.newaxis], [n_tokens, 1]), } # get the average attention for each token type for key, selector in selectors.items(): if key == "sep_sep": denom = 2 elif key == "rest_sep": denom = n_tokens - 2 else: denom = n_tokens avg_attns[key] += ( (attns * selector[np.newaxis, np.newaxis]).sum(-1).sum(-1) / (n_docs * denom)) """ Explanation: Computing Average Attention to Particular Tokens/Positions (Sections 3.1 and 3.2 of the Paper) End of explanation """ uniform_attn_entropy = 0 # entropy of uniform attention entropies = np.zeros((12, 12)) # entropy of attention heads entropies_cls = np.zeros((12, 12)) # entropy of attention from [CLS] print("Computing entropy stats") for tokens, attns in data_iterator(): attns = 0.9999 * attns + (0.0001 / attns.shape[-1]) # smooth to avoid NaNs uniform_attn_entropy -= np.log(1.0 / attns.shape[-1]) entropies -= (attns * np.log(attns)).sum(-1).mean(-1) entropies_cls -= (attns * np.log(attns))[:, :, 0].sum(-1) uniform_attn_entropy /= n_docs entropies /= n_docs entropies_cls /= n_docs """ Explanation: Computing Attention Head Entropies (Section 3.3) End of explanation """ # Pretty colors BLACK = "k" GREEN = "#59d98e" SEA = "#159d82" BLUE = "#3498db" PURPLE = "#9b59b6" GREY = "#95a5a6" RED = "#e74c3c" ORANGE = "#f39c12" def get_data_points(head_data): xs, ys, avgs = [], [], [] for layer in range(12): for head in range(12): 
ys.append(head_data[layer, head]) xs.append(1 + layer) avgs.append(head_data[layer].mean()) return xs, ys, avgs """ Explanation: Plotting Utilities End of explanation """ width = 3 example_sep = 3 word_height = 1 pad = 0.1 def plot_attn(example, heads): """Plots attention maps for the given example and attention heads.""" for ei, (layer, head) in enumerate(heads): yoffset = 1 xoffset = ei * width * example_sep attn = example["attns"][layer][head][-15:, -15:] attn = np.array(attn) attn /= attn.sum(axis=-1, keepdims=True) words = example["tokens"][-15:] words[0] = "..." n_words = len(words) for position, word in enumerate(words): plt.text(xoffset + 0, yoffset - position * word_height, word, ha="right", va="center") plt.text(xoffset + width, yoffset - position * word_height, word, ha="left", va="center") for i in range(1, n_words): for j in range(1, n_words): plt.plot([xoffset + pad, xoffset + width - pad], [yoffset - word_height * i, yoffset - word_height * j], color="blue", linewidth=1, alpha=attn[i, j]) plt.figure(figsize=(12, 4)) plt.axis("off") plot_attn(data[581], [(0, 0), (2, 0), (7, 6), (10, 5)]) plt.show() """ Explanation: Examples of Attention Head Behavior (Figure 1) End of explanation """ def add_line(key, ax, color, label, plot_avgs=True): xs, ys, avgs = get_data_points(avg_attns[key]) ax.scatter(xs, ys, s=12, label=label, color=color) if plot_avgs: ax.plot(1 + np.arange(len(avgs)), avgs, color=color) ax.legend(loc="best") ax.set_xlabel("Layer") ax.set_ylabel("Avg. Attention") plt.figure(figsize=(5, 10)) ax = plt.subplot(3, 1, 1) for key, color, label in [ ("cls", RED, "[CLS]"), ("sep", BLUE, "[SEP]"), ("punct", PURPLE, ". 
or ,"), ]: add_line(key, ax, color, label) ax = plt.subplot(3, 1, 2) for key, color, label in [ ("rest_sep", BLUE, "other -> [SEP]"), ("sep_sep", GREEN, "[SEP] -> [SEP]"), ]: add_line(key, ax, color, label) ax = plt.subplot(3, 1, 3) for key, color, label in [ ("left", RED, "next token"), ("right", BLUE, "prev token"), ("self", PURPLE, "current token"), ]: add_line(key, ax, color, label, plot_avgs=False) plt.show() """ Explanation: Avg. Attention Plots (Figure 2, Sections 3.1 and 3.2) End of explanation """ xs, es, avg_es = get_data_points(entropies) xs, es_cls, avg_es_cls = get_data_points(entropies_cls) plt.figure(figsize=(5, 5)) def plot_entropies(ax, data, avgs, label, c): ax.scatter(xs, data, c=c, s=5, label=label) ax.plot(1 + np.arange(12), avgs, c=c) ax.plot([1, 12], [uniform_attn_entropy, uniform_attn_entropy], c="k", linestyle="--") ax.text(7, uniform_attn_entropy - 0.45, "uniform attention", ha="center") ax.legend(loc="lower right") ax.set_ylabel("Avg. Attention Entropy (nats)") ax.set_xlabel("Layer") plot_entropies(plt.subplot(2, 1, 1), es, avg_es, "BERT Heads", c=BLUE) plot_entropies(plt.subplot(2, 1, 2), es_cls, avg_es_cls, "BERT Heads from [CLS]", c=RED) plt.show() """ Explanation: Entropy Plots (Section 3.3, Figure 4) End of explanation """ ENTROPY_THRESHOLD = 3.8 # When to say a head "attends broadly" POSITION_THRESHOLD = 0.5 # When to say a head "attends to next/prev" SPECIAL_TOKEN_THRESHOLD = 0.6 # When to say a head "attends to [CLS]/[SEP]" # Heads that were found to have linguistic behaviors LINGUISTIC_HEADS = { (4, 3): "Coreference", (7, 10): "Determiner", (7, 9): "Direct object", (8, 5): "Object of prep.", (3, 9): "Passive auxiliary", (6, 5): "Possessive", } # Use multi-dimensional scaling to compute 2-dimensional embeddings that # reflect Jensen-Shannon divergences between attention heads.
mds = sklearn.manifold.MDS(metric=True, n_init=5, n_jobs=4, eps=1e-10, max_iter=1000, dissimilarity="precomputed") pts = mds.fit_transform(js_divergences) pts = pts.reshape((12, 12, 2)) pts_flat = pts.reshape([144, 2]) colormap = cm.seismic(np.linspace(0, 1.0, 12)) plt.figure(figsize=(4.8, 9.6)) plt.title("BERT Attention Heads") for color_by_layer in [False, True]: ax = plt.subplot(2, 1, int(color_by_layer) + 1) seen_labels = set() for layer in range(12): for head in range(12): label = "" color = GREY marker = "o" markersize = 4 x, y = pts[layer, head] if avg_attns["right"][layer, head] > POSITION_THRESHOLD: color = RED marker = ">" label = "attend to next" if avg_attns["left"][layer, head] > POSITION_THRESHOLD: color = BLUE label = "attend to prev." marker = "<" if entropies[layer, head] > ENTROPY_THRESHOLD: color = ORANGE label = "attend broadly" marker = "^" if avg_attns["cls"][layer, head] > SPECIAL_TOKEN_THRESHOLD: color = PURPLE label = "attend to [CLS]" marker = "$C$" markersize = 5 if avg_attns["sep"][layer, head] > SPECIAL_TOKEN_THRESHOLD: color = GREEN marker = "$S$" markersize = 5 label = "attend to [SEP]" if avg_attns["punct"][layer, head] > SPECIAL_TOKEN_THRESHOLD: color = SEA marker = "s" markersize = 3.2 label = "attend to . 
and ," if color_by_layer: label = str(layer + 1) color = colormap[layer] marker = "o" markersize = 3.8 if not color_by_layer: if (layer, head) in LINGUISTIC_HEADS: label = "" color = BLACK marker = "x" ax.text(x, y, LINGUISTIC_HEADS[(layer, head)], color=color) if label not in seen_labels: seen_labels.add(label) else: label = "" ax.plot([x], [y], marker=marker, markersize=markersize, color=color, label=label, linestyle="") ax.set_xticks([]) ax.set_yticks([]) ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) ax.spines["bottom"].set_visible(False) ax.spines["left"].set_visible(False) ax.set_facecolor((0.96, 0.96, 0.96)) plt.title(("Colored by Layer" if color_by_layer else "Behaviors")) handles, labels = ax.get_legend_handles_labels() ax.legend(handles, labels, loc="best") plt.suptitle("Embedded BERT attention heads", fontsize=14, y=1.05) plt.subplots_adjust(top=1, bottom=0, right=1, left=0, hspace=0.1, wspace=0) plt.show() """ Explanation: Clustering Attention Heads (Section 6, Figure 6) End of explanation """
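The head distances loaded at the top of this notebook are precomputed Jensen-Shannon divergences; a minimal numpy version for two discrete attention distributions might look like this (the averaging over query positions and documents is assumed to happen elsewhere):

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (in nats) between two discrete distributions."""
    p = np.asarray(p, dtype=float) + eps  # smooth to avoid log(0)
    q = np.asarray(q, dtype=float) + eps
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    kl_pm = np.sum(p * np.log(p / m))
    kl_qm = np.sum(q * np.log(q / m))
    return 0.5 * kl_pm + 0.5 * kl_qm
```

Identical distributions give 0, and fully disjoint ones approach ln 2, which makes the distance well suited as input to the MDS embedding above.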
OpenWIM/pywim
notebooks/presentations/scipyla2015/PyWIM-presentation.ipynb
mit
from IPython.display import display from matplotlib import pyplot as plt from scipy import signal from scipy import constants from scipy.signal import argrelextrema from collections import defaultdict from sklearn import metrics import statsmodels.api as sm import numpy as np import pandas as pd import numba as nb import sqlalchemy import psycopg2 import os import sys import datetime import matplotlib as mpl # local sys.path.insert(0, os.path.dirname(os.getcwd())) try: import PyDAQmx as pydaq except NotImplementedError: print('Using generic DAQ') import pywim.lib.daq.generic as pydaq # PyWIM from pywim.lib.vehicular_classification import dww from pywim.lib.vehicular_classification import dww_nb from pywim.lib.daq.generic import ( gen_synthetic_analog_data, gen_synthetic_digital_data ) # matplotlib mpl.style.use('ggplot') %matplotlib inline """ Explanation: Table of Contents 1. Using Python to support weigh-in-motion (WIM) of heavy vehicles 2. Project description 3. Data acquisition 3.1 Using synthetic data 4. Data storage and flow 5. Digital signal processing 5.1 Baseline correction 5.2 Signal filtering 5.3 Peak detection 5.4 Detecting the signal curve for the weight calculation 6. Calculations 6.1 Speed 6.2 Axle spacing 6.3 Area under the curve 6.4 Weights 7. Vehicle classification 8. Calibration of the weighing calculations 9. Automatic license plate recognition 10.
Conclusion <!--bibtex @TechReport{tech:optimization-vehicle-classification, Title = {Optimization Vehicle Classification}, Author = {van Boxel, DW and van Lieshout, RA}, Institution = {Ministerie van Verkeer en Waterstaat - Directoraat-Generaal Rijkswaterstaat - Dienst Weg- en Waterbouwkunde (DWW)}, Year = {2003}, Owner = {xmn}, Timestamp = {2014.10.22} } @Article{pattern-recogntion-of-strings, Title = {Pattern recognition of strings with substitutions, insertions, deletions and generalized transpositions}, Author = {Oommen, B John and Loke, Richard KS}, Journal = {Pattern Recognition}, Year = {1997}, Number = {5}, Pages = {789--800}, Volume = {30}, Publisher = {Elsevier} } @article{vanweigh, title={Weigh-in-Motion--Categorising vehicles}, author={van Boxel, DW and van Lieshout, RA and van Doorn, RA} } @misc{kistler2004installation, title={Installation Instructions: Lineas{\textregistered} Sensors for Weigh-in-Motion Type 9195E}, author={Kistler Instrumente, AG}, year={2004}, publisher={Kistler Instrumente AG, Switzerland} } @article{helmus2013nmrglue, title={Nmrglue: an open source Python package for the analysis of multidimensional NMR data}, author={Helmus, Jonathan J and Jaroniec, Christopher P}, journal={Journal of biomolecular NMR}, volume={55}, number={4}, pages={355--367}, year={2013}, publisher={Springer} } @article{billauer2008peakdet, title={peakdet: Peak detection using MATLAB}, author={Billauer, Eli}, journal={Eli Billauer’s home page}, year={2008} } @Article{article:alpr-using-python-and-opencv, Title = {Automatic License Plate Recognition using Python and OpenCV}, Author = {Sajjad, K.M.}, Year = {2010}, Institution = {Department of Computer Science and Engineering, MES College of Engineering, Kerala, India}, Owner = {xmn}, Timestamp = {2014.08.24} } @inproceedings{burnos2008auto, title={Auto-calibration and temperature correction of WIM systems}, author={Burnos, Piotr}, booktitle={Fifth International Conference on Weigh-in-Motion (ICWIM5)},
pages={439}, year={2008} } @inproceedings{gajda2012analysis, title={Analysis of the temperature influences on the metrological properties of polymer piezoelectric load sensors applied in Weigh-in-Motion systems}, author={Gajda, Janusz and Sroka, Ryszard and Stencel, Marek and Zeglen, Tadeusz and Piwowar, Piotr and Burnos, Piotr}, booktitle={Instrumentation and Measurement Technology Conference (I2MTC), 2012 IEEE International}, pages={772--775}, year={2012}, organization={IEEE} } --> <!-- %%javascript IPython.load_extensions('calico-document-tools'); --> 1. Using Python to support weigh-in-motion (WIM) of heavy vehicles Many road accidents are caused, directly or indirectly, by overloaded heavy vehicles. They damage the pavement and also suffer stronger dynamic effects in curves. To discourage overloading, these infractions must be enforced and, when necessary, the measures established by law must be applied, such as fines and seizures. One method under investigation in many parts of the world is weigh-in-motion. Its advantages include savings in physical space and operation, since its sensors are installed in the road itself, and it causes no delay for road users, because it can weigh heavy vehicles travelling at the design speed of the road. This work presents technologies useful for developing a computational system to support weigh-in-motion. The experience behind this work was gained through a project developed at the transportation laboratory (LabTrans) of the Universidade Federal de Santa Catarina (UFSC). Its goal is to serve as a starting point for future researchers on the subject.
The language used here will be Python, and the main libraries will be: numpy, scipy, pandas, sqlalchemy, statsmodels, numba, scikit-learn, pydaqmx, matplotlib. 2. Project description A computational weigh-in-motion system is basically composed of: - Signal acquisition from the weight sensors in the road; - Signal segmentation (to cut out the signal belonging to the measured truck); - Signal processing; - Calculations (speed, number of axles, axle groups, axle spacing, gross weight, weight per axle, weight per axle group, length); - Vehicle classification; - Calibration; - License plate recognition; - Infraction detection. The system must be fast and robust in order to process all this information in the shortest time possible. Python is not a language known for high performance, so libraries and techniques are needed to boost its processing capacity. Based on the weighing, classification and license plate recognition results it is possible to know whether the vehicle committed an infraction and, if so, to link the infraction to the identification of the offending vehicle. End of explanation """ samples_per_channel = 1000 number_of_channels = 1 task = pydaq.Task() task.CreateAIVoltageChan() task.CfgSampClkTiming() total_samples = pydaq.int32() data_size = samples_per_channel * number_of_channels data = np.zeros((data_size,), dtype=np.float64) task.StartTask() data = task.ReadAnalogF64( samples_per_channel, 10.0, pydaq.DAQmx_Val_GroupByChannel, data, data_size, pydaq.byref(total_samples), None ) plt.plot(np.linspace(0, 3, 15000), data, label='sensor 1') plt.title('DAQ') plt.grid(True) plt.ylabel('Voltage (V)') plt.xlabel('Time (s)') plt.show() """ Explanation: 3. Data acquisition Data acquisition was done with DAQmx acquisition boards from National Instruments (NI).
To communicate with them, the PyDAQmx library was used: a Python wrapper for the hardware drivers supplied by the company. This library is a complete interface to the NIDAQmx ANSI C drivers and imports all the driver functions and all the predefined constants. As a result, the library returns a numpy.array object. After acquiring the sensor signal, the system stores it in a circular buffer in memory which, in a parallel process, is scanned for a complete vehicle signal (segment). This process was built in a very simple way: the program waits for the signal from an inductive loop and, when it is triggered, segments the signal using the values of the following 3 seconds. End of explanation """ df = pd.DataFrame() sample_rate = 2000 total_seconds = 3.0 # analog channel 1 df['a1'] = gen_synthetic_analog_data( sample_rate=sample_rate, total_seconds=total_seconds, time_delay=0.7, noise_p=10 ) # analog channel 2 df['a2'] = gen_synthetic_analog_data( sample_rate=sample_rate, total_seconds=total_seconds, time_delay=1.0, noise_p=10 ) # digital loop df['d1'] = gen_synthetic_digital_data( sample_rate=sample_rate, total_seconds=total_seconds, time_delay=0.8 ) df.plot() plt.title('Sensor data') plt.grid(True) plt.ylabel('Voltage (V)') plt.xlabel('Time (s)') plt.legend() plt.show() """ Explanation: 3.1 Using synthetic data End of explanation """ # Connect to the database DATABASE = { 'host': 'localhost', 'database': 'pywim', 'port': '5432', 'user': 'pywim', 'password': 'pywim' } conn = psycopg2.connect(**DATABASE) engine = sqlalchemy.create_engine( 'postgresql+psycopg2://', creator=lambda: conn ) # creates acquisition data cur = conn.cursor() cur.execute( 'INSERT INTO wim.acquisition (id, date_time) ' + 'VALUES (DEFAULT, %s) RETURNING id', (datetime.datetime.now(),) ) acq_id = cur.fetchone()[0] conn.commit() cur.close() # save the sensor data into database df_data
= df.copy() df_data['acquisition'] = acq_id df_data['time_seconds'] = df_data.index df_data.rename( columns={ 'a1': 'sensor1', 'a2': 'sensor2', 'd1': 'inductive_loop' }, inplace=True ) df_data.to_sql( 'acquisition_data', con=engine, schema='wim', if_exists='append', index=False ) conn.commit() # select acquisition data from database df_data = pd.read_sql_query( ''' SELECT * FROM wim.acquisition_data WHERE acquisition=%s ''' % acq_id, con=engine, index_col='time_seconds' ) df_data.drop('acquisition', axis=1, inplace=True) df_data[['sensor1', 'sensor2', 'inductive_loop']].plot() plt.title('Sensor data') plt.grid(True) plt.ylabel('Voltage (V)') plt.xlabel('Time (s)') plt.legend() plt.show() """ Explanation: 4. Data storage and flow After segmentation, the raw data are stored in the database. This makes it possible to change the calculation methods or calibration parameters later and to analyze the methods used. In all calculation methods and functions of the system, the standard type for data sets is the pandas.DataFrame. It is used from the moment data is read from the database, together with sqlalchemy, through the calculations, plotting, and saving to the database or to CSV files. The pandas.DataFrame provides data-manipulation mechanisms very similar to those used in the R language. End of explanation """ df_filt = df.copy() for s in df_filt.keys(): df_filt[s] -= df_filt[s][:100].min() df_filt.plot() plt.title('Sensor data') plt.grid(True) plt.ylabel('Voltage (V)') plt.xlabel('Time (s)') plt.legend() plt.show() """ Explanation: 5. Digital signal processing Before the calculations can be made, the signal needs to be conditioned; for that, signal filtering and baseline correction must be applied.
For the filtering, this example follows the recommendation of <a name="ref-1"/>(KistlerInstrumente, 2004), the manufacturer of the Lineas sensors: a first-order low-pass filter at 600 Hz. 5.1 Baseline correction For the baseline correction, whichever method best suits the electrical characteristics of the sensor signal can be used. The nmrglue library <a name="ref-2"/>(Helmus and Jaroniec, 2013) has a proc_bl module containing many functions that can help with baseline correction. In the example below, the correction is done by subtracting from the signal the minimum value found in its first 100 points. End of explanation """ order = 1 freq = 600 # Hz lower_cut = freq / (sample_rate / 2) # cutoff normalized by the Nyquist frequency b, a = signal.butter(order, lower_cut) df_filt['a1'] = signal.filtfilt(b, a, df_filt['a1']) df_filt['a2'] = signal.filtfilt(b, a, df_filt['a2']) df_filt.plot() plt.title('Sensor data') plt.grid(True) plt.ylabel('Voltage (V)') plt.xlabel('Time (s)') plt.legend() plt.show() """ Explanation: 5.2 Signal filtering The filter used is a first-order low-pass with a cutoff frequency of 600 Hz, built with the butter and filtfilt functions from the scipy library.
End of explanation
"""
peaks = {}
_tmp = df_filt['a1'].values.copy()
_tmp[_tmp < 1] = 0.0
peaks['a1'] = argrelextrema(_tmp, np.greater, order=100)[0]
_tmp = df_filt['a2'].values.copy()
_tmp[_tmp < 1] = 0.0
peaks['a2'] = argrelextrema(_tmp, np.greater, order=100)[0]
df_peaks = pd.DataFrame()
df_peaks['peak_a1'] = np.zeros(df_filt.shape[0])
df_peaks['peak_a2'] = np.zeros(df_filt.shape[0])
df_peaks['peak_a1'][peaks['a1']] = 10
df_peaks['peak_a2'][peaks['a2']] = 10
df_peaks.index = df_filt.index
pd.concat((df_filt, df_peaks), axis=1).plot()
plt.title('Sensor data')
plt.grid(True)
plt.ylabel('Voltage (V)')
plt.xlabel('Time (s)')
plt.legend()
plt.show()
""" Explanation: 5.3 Peak detection
The peak-detection method must take the characteristics of the signal into account. In <a name="ref-3"/>(Billauer, 2008) there is a very good method for finding local maxima and minima. For the example data, the argrelextrema function from scipy is used, with a threshold of 1 volt to reject the noise in the signal.
End of explanation
"""
sensor_curve = defaultdict(dict)
sensor_curve_chart = defaultdict(list)
fig, axes = plt.subplots(nrows=2, ncols=2)
for k, s in enumerate(['a1', 'a2']):
    for i, peak in enumerate(peaks[s]):
        # keep the signal values around each peak so the area can be integrated later
        sensor_curve[s]['axle%s' % (i + 1)] = (
            df_filt[s].iloc[peak - 400:peak + 400].reset_index(drop=True)
        )
        df_filt[s].iloc[peak - 400:peak + 400].plot(ax=axes[k, i])
        axes[k, i].set_title('Sensor %s - Axle %s' % (s, i + 1))
plt.tight_layout()
""" Explanation: 5.4 Extracting the signal curve for the weight calculation
To cut out the curve used in the weight calculation for the Kistler Lineas sensors, the concept described in <a name="ref-4"/>(KistlerInstrumente, 2004) can be used. The figure below <a name="ref-5"/>(KistlerInstrumente, 2004) illustrates how the cut should be made.
<figure>
<img src="https://github.com/xmnfw/pywim/blob/master/docs/img/kistler-cut-signal-area.png?raw=true" alt="Cutting the signal area"/>
<center><figcaption>Cutting the signal area</figcaption></center>
</figure>
For the example data, a threshold of 0.2 and a $\Delta{t}$ of 20 can be adopted. To keep things simple, the cut is made from 400 points before each peak to 400 points after it.
End of explanation
"""
distance_sensors = 1  # meter
vehicle_speed = {}
time_points = peaks['a2'][0] - peaks['a1'][0]
d_time = time_points * (1 / sample_rate)
vehicle_speed['axle1'] = distance_sensors / d_time  # m/s
time_points = peaks['a2'][1] - peaks['a1'][1]
d_time = time_points * (1 / sample_rate)
vehicle_speed['axle2'] = distance_sensors / d_time  # m/s
df_speed = pd.DataFrame(
    vehicle_speed, index=['speed_sensor_0_1', 'speed_sensor_1_2']
)
vehicle_speed_mean = df_speed.mean()[0]
display(df_speed * 3.6)  # km/h
print('Mean speed:', vehicle_speed_mean * 3.6, 'km/h')
""" Explanation: 6. Calculations
From the information about the signal peaks and their curves, it is possible to start the calculations that determine the axle spacing, the speed, and the weight. Below, these calculations are presented using the example data generated in the previous sections.
6.1 Speed
To calculate the speed it is first necessary to know the distance between the sensors. For this example, a distance of 1 meter is adopted.
Speed is given by the formula $v = \frac{\Delta{s}}{\Delta{t}}$.
End of explanation
"""
axles_distance = defaultdict(dict)
time_points = peaks['a1'][1] - peaks['a1'][0]
d_time = time_points * (1 / sample_rate)
axles_distance['a1']['axle1-axle2'] = d_time * vehicle_speed_mean
time_points = peaks['a2'][1] - peaks['a2'][0]
d_time = time_points * (1 / sample_rate)
axles_distance['a2']['axle1-axle2'] = d_time * vehicle_speed_mean
df_distance_axles = pd.DataFrame(axles_distance)
display(df_distance_axles)
""" Explanation: 6.2 Axle spacing
To calculate the distance between axles, the speed must already have been calculated. The formula for the axle spacing is $\Delta{s} = v \cdot \Delta{t}$. In this example the mean speed is used, but the speed found for each axle could be used as well.
End of explanation
"""
df_area = pd.DataFrame()
time_interval = 1 / sample_rate
print('time interval:', time_interval)
for s in sensor_curve:
    area = {}
    for axle, v in sensor_curve[s].items():
        # sum with baseline correction
        area.update({axle: (v - v.min()).sum() * time_interval})
    df_area[s] = pd.Series(area)
df_area = df_area.T
display(df_area)
""" Explanation: 6.3 Area under the curve
Another piece of information needed for the weighing calculations is the area under the identified curve. To obtain it, the curve must be integrated or, in this case, the points of the curve are summed and scaled by the sampling interval.
End of explanation
"""
amp_sensibility = 0.15*10**-3  # 1.8 pC/N * 5V / 60000pC
C = pd.Series([1, 1])
Ls = pd.Series([0.53] * 2)
V = df_speed.reset_index(drop=True)
A = df_area.reset_index(drop=True)
W = pd.DataFrame()
for axle in V.keys():
    W[axle] = ((V[axle] / Ls) * A[axle] * C) / amp_sensibility / constants.g
display(W)
print('\nAverage per axle:')
display(W.mean())
print('\nTotal gross weight:', W.mean().sum(), 'kg')
""" Explanation: 6.4 Weights
To calculate the vehicle's weight, the speed and the curve of each axle are used.
For the Lineas sensors from Kistler, the following formula <a name="ref-6"/>(KistlerInstrumente, 2004) should be used: $W = ( V / L_s ) * A * C$, where W is the weight, V is the speed, $L_s$ is the width of the sensor, A is the integral of the curve, and C is a calibration constant. For other sensor types the formula is similar. For polymer piezoelectric and ceramic sensors it is necessary to apply a method that corrects the results for their sensitivity to temperature <a name="ref-7"/>(Burnos, 2008), <a name="ref-8"/>(Gajda et al., 2012).
For the example data, the axle weights and the total gross weight are calculated using, as parameters, a sensor width of 0.53 meters and a calibration constant equal to 1 for all sensors.
End of explanation
"""
layout = dww.layout_to_int(dww.layout((7, 2, 0.5, 2)))
layout_ref = dww.layout_to_int('-O----O-O----O-')
z = np.zeros((len(layout), len(layout_ref)), dtype=int)
%time resultado = dww_nb.D(layout, layout_ref, z)
%time dww.D(layout, layout_ref, z)
print(resultado)
""" Explanation: 7. Vehicle classification
Here, a vehicle-classification method based on the work of <a name="ref-9"/>(vanBoxel and vanLieshout, 2003) and <a name="ref-10"/>(Oommen and Loke, 1997) is presented. In this method, a set of reference layouts is used, each defined by a set of symbols representing the design of the vehicle, as can be seen in the figure below <a name="ref-11"/>(vanBoxel and vanLieshout, 2003).
<figure>
<img src="https://github.com/xmnfw/pywim/blob/master/docs/img/dww-layout.png?raw=true" alt="Examples of vehicle layouts"/>
<center><figcaption>Example *layouts* representing heavy-vehicle classes</figcaption></center>
</figure>
To classify a vehicle, the system builds a layout for the measured vehicle, compares it against the reference layouts, and assigns the vehicle the class whose reference layout is closest. This method performs poorly in pure Python. To solve that, the numba library was used, making it roughly 100 times faster. An adaptation of the algorithm was needed: before the comparisons are made, the vehicle layout and the reference-class layout are converted to numbers, so that the comparison function can be marked for compilation in nopython mode. The closer the result is to 0, the closer the vehicle layout is to the reference layout.
End of explanation
"""
# synthetic data
df_weight = pd.DataFrame({
    'a1': np.ones(200),
    'a2': np.ones(200),
    'target': np.ones(200)
})
df_weight.loc[:100, ['a1', 'a2']] = 8000
df_weight.loc[100:, ['a1', 'a2']] = 10000
df_weight['a1'] += np.random.random(200)*1000
df_weight['a2'] += np.random.random(200)*1000
df_weight.loc[:100, ['target']] = 8000
df_weight.loc[100:, ['target']] = 10000
r2 = {}
c = {}
predict = []
X = []
for i, s in enumerate(['a1', 'a2']):
    # Adds a constant term to the predictor
    X.append(sm.add_constant(df_weight[s]))
    model = sm.OLS(df_weight['target'], X[i])
    predict.append(model.fit())
    r2[s] = [predict[i].rsquared]
    c[s] = predict[i].params[s]
print('R2', r2)
print('CC', c)
i, s = 0, 'a1'
fig = sm.graphics.abline_plot(model_results=predict[i])
ax = fig.axes[0]
ax.scatter(df_weight[s], df_weight['target'])
ax.set_xlabel('Calculated value')
ax.set_ylabel('Target value')
ax.set_title('Weighing values, sensor %s' % s)
plt.tight_layout()
i, s = 1, 'a2'
fig = sm.graphics.abline_plot(model_results=predict[i])
ax = fig.axes[0]
ax.scatter(df_weight[s], df_weight['target'])
ax.set_xlabel('Calculated value')
ax.set_ylabel('Target value')
ax.set_title('Weighing values, sensor %s' % s)
plt.tight_layout()


def score_95_calc(metric_score, y, y_pred):
    if y.shape[0] < 1:
        print('size calc 0')
        return 0.0
    y_true = np.array([True] * y.shape[0])
    # within the +-5% tolerance band: both bounds must hold
    lb, ub = y - y * 0.05, y + y * 0.05
    y_pred_95 = (lb < y_pred) & (y_pred < ub)
    return metric_score(y_true, y_pred_95)


def score_95_base(metric_score, estimator, X_test, y_test):
    if y_test.shape[0] < 1:
        print('size base 0')
        return 0.0
    y_pred = estimator.predict(X_test)
    return score_95_calc(metric_score, y_test, y_pred)


def score_95_accuracy(estimator, X, y):
    return score_95_base(metrics.accuracy_score, estimator, X, y)


def score_95_precision(estimator, X, y):
    return score_95_base(metrics.precision_score, estimator, X, y)


def score_95_recall(estimator, X, y):
    return score_95_base(metrics.recall_score, estimator, X, y)


def score_95_f1_score(estimator, X, y):
    return score_95_base(metrics.f1_score, estimator, X, y)


df_weight_cc = df_weight[['a1', 'a2']].copy()
for s in ['a1', 'a2']:
    df_weight_cc[s] *= c[s]
df_gross_weight = df_weight_cc.mean(axis=1)
for _m_name, _metric in [
    ('accuracy', metrics.accuracy_score),
    ('precision', metrics.precision_score),
    ('recall', metrics.recall_score),
    ('f1 score', metrics.f1_score),
]:
    print(
        ('%s:' % _m_name).ljust(22, ' '),
        score_95_calc(_metric, df_weight['target'], df_gross_weight)
    )
""" Explanation: 8. Calibrating the weight calculations
Periodic calibration of weigh-in-motion systems is very important to keep the calculated weights within a small margin of error. This stage can be supported by the ordinary-least-squares (OLS) linear-regression method from the statsmodels library, which, for example, reports the coefficient of determination (R²) of the fitted regression.
The scikit-learn library can also be used at this stage to support the analysis of the results. To try these features out, synthetic, noisy weighing data are used, simulating the measurement errors over 100 passes each of two trucks of known weight.
End of explanation
"""
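The calibration idea above can be reduced to its core: fit a single factor mapping the calculated weight to the known reference weight, then measure how many calibrated readings fall inside the ±5% tolerance band. This is a minimal sketch with made-up numbers, not real acquisitions, and it avoids statsmodels by solving the one-parameter least-squares fit directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic runs of a truck with a known gross weight (kg), read by an
# uncalibrated system with a systematic factor and additive noise
true_weight = 9000.0
raw = true_weight * 0.85 + rng.normal(0.0, 150.0, size=100)
target = np.full(100, true_weight)

# Least-squares fit of a single calibration factor: target ~ c * raw
c = np.dot(raw, target) / np.dot(raw, raw)
calibrated = c * raw

# Fraction of runs within the +-5% tolerance band around the known weight
lb, ub = target * 0.95, target * 1.05
within = np.mean((calibrated > lb) & (calibrated < ub))
print(round(c, 3), within)
```

With the noise level above, the recovered factor is close to 1/0.85 and almost all calibrated readings land inside the band; widening the noise makes the ±5% score drop accordingly.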
spencer2211/deep-learning
autoencoder/Simple_Autoencoder_Solution.ipynb
mit
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
""" Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
"""
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
""" Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
"""
# Size of the encoding layer (the hidden layer)
encoding_dim = 32
image_size = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')
# Output of hidden layer
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded, image_size, activation=None)
# Sigmoid output from the logits
decoded = tf.nn.sigmoid(logits, name='output')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
""" Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1.
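The flattening is just a reshape between 28x28 images and length-784 vectors, and it is lossless; a small NumPy sketch:

```python
import numpy as np

batch = np.random.rand(4, 28, 28)     # a stand-in for a batch of MNIST images
flat = batch.reshape(-1, 28 * 28)     # what the network consumes
restored = flat.reshape(-1, 28, 28)   # what plt.imshow expects again

print(flat.shape)
print(np.array_equal(batch, restored))
```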
Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input. Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function. End of explanation """ # Create the session sess = tf.Session() """ Explanation: Training End of explanation """ epochs = 20 batch_size = 200 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) feed = {inputs_: batch[0], targets_: batch[0]} batch_cost, _ = sess.run([cost, opt], feed_dict=feed) print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}".format(batch_cost)) """ Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards. Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). 
We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
"""
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
    for img, ax in zip(images, row):
        ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
""" Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation
"""
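For reference, the TensorFlow documentation gives the numerically stable form that tf.nn.sigmoid_cross_entropy_with_logits computes, max(x, 0) - x*z + log(1 + exp(-|x|)) for logits x and targets z. A small NumPy check that this matches the naive cross-entropy on the sigmoid output:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stable_ce(x, z):
    # stable form: max(x, 0) - x*z + log(1 + exp(-|x|))
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

def naive_ce(x, z):
    # naive form: -z*log(sigmoid(x)) - (1-z)*log(1 - sigmoid(x))
    s = sigmoid(x)
    return -z * np.log(s) - (1 - z) * np.log(1 - s)

x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])  # logits
z = np.array([0.0, 1.0, 0.5, 0.0, 1.0])    # targets in [0, 1]
print(np.allclose(stable_ce(x, z), naive_ce(x, z)))
```

The stable form matters for large-magnitude logits, where exp(x) in the naive version would overflow.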
celiasmith/syde556
SYDE 556 Lecture 4 Transformation.ipynb
gpl-2.0
%pylab inline import numpy as np import nengo from nengo.dists import Uniform from nengo.processes import WhiteSignal from nengo.solvers import LstsqL2 T = 1.0 max_freq = 10 model = nengo.Network('Communication Channel', seed=3) with model: stim = nengo.Node(output=WhiteSignal(T, high=max_freq, rms=0.5)) ensA = nengo.Ensemble(20, dimensions=1, neuron_type=nengo.LIFRate()) ensB = nengo.Ensemble(19, dimensions=1, neuron_type=nengo.LIFRate()) temp = nengo.Ensemble(1, dimensions=1, neuron_type=nengo.LIFRate()) nengo.Connection(stim, ensA) stim_B = nengo.Connection(stim, ensB) connectionA = nengo.Connection(ensA, temp) #This is just to generate the decoders connectionB = nengo.Connection(ensB, temp) #This is just to generate the decoders stim_p = nengo.Probe(stim) a_rates = nengo.Probe(ensA.neurons, 'rates') b_rates = nengo.Probe(ensB.neurons, 'rates') sim = nengo.Simulator(model, seed=3) sim.run(T) x = sim.data[stim_p] d_i = sim.data[connectionA].weights.T A_i = sim.data[a_rates] d_j = sim.data[connectionB].weights.T B_j = sim.data[b_rates] #Add noise A_i = A_i + np.random.normal(scale=0.2*np.max(A_i), size=A_i.shape) B_j = B_j + np.random.normal(scale=0.2*np.max(B_j), size=B_j.shape) xhat_i = np.dot(A_i, d_i) yhat_j = np.dot(B_j, d_j) t = sim.trange() figure(figsize=(8,4)) subplot(1,2,1) plot(t, xhat_i, 'g', label='$\hat{x}$') plot(t, x, 'b', label='$x$') legend() xlabel('time (seconds)') ylabel('value') title('first population') ylim(-1,1) subplot(1,2,2) plot(t, yhat_j, 'g', label='$\hat{y}$') plot(t, x, 'b', label='$y$') legend() xlabel('time (seconds)') ylabel('value') title('second population') ylim(-1,1); """ Explanation: SYDE 556/750: Simulating Neurobiological Systems Accompanying Readings: Chapter 6 Transformation The story so far: The activity of groups of neurons can represent variables $x$ $x$ can be an aribitrary-dimension vector Each neuron has a preferred vector $e$ Current going into each neuron is $J = \alpha e \cdot x + J^{bias}$ We can interpret 
neural activity via $\hat{x}=\sum a_i d_i$
For spiking neurons, we filter the spikes first: $\hat{x}=\sum a_i(t)*h(t) d_i$
To compute $d$, generate some $x$ values and find the optimal $d$ (assuming some amount of noise)
So far we've just talked about neural activity in a single population
What about connections between neurons?
<img src="files/lecture4/communication1.png">
Connecting neurons
Up till now, we've always had the current going into a neuron be something we computed from $x$
$J = \alpha e \cdot x + J^{bias}$
This will continue to be how we handle inputs
Sensory neurons, for example
Or whatever's coming from the rest of the brain that we're not modelling (yet)
But what about other groups of neurons?
How do they end up getting the amount of input current that we're injecting with $J = \alpha e \cdot x + J^{bias}$?
Where does that current come from?
Inputs from neurons connected to this one
Through weighted synaptic connections
Let's think about neurons in a simple case
A communication channel
Let's say we have two groups of neurons
One group represents $x$
One group represents $y$
Can we pass the value from one group of neurons to the other?
Without worrying about biological plausibility to start, we can formulate this in two steps Drive the first population $a$ with the input, $x$, then decoded it to give $\hat{x}$ Now use $y=\hat{x}$ to drive the 2nd population $b$, and then decode that Let's start by first constructing the two populations Stimulate them both directly and decode to compare <img src="files/lecture4/comm_channel.png"> End of explanation """ #Have to run previous cells first model.connections.remove(stim_B) del stim_B def xhat_fcn(t): idx = int(t/sim.dt) if idx>=1000: idx=999 return xhat_i[idx] with model: xhat = nengo.Node(xhat_fcn) nengo.Connection(xhat, ensB) xhat_p = nengo.Probe(xhat) sim = nengo.Simulator(model, seed=3) sim.run(T) d_j = sim.data[connectionB].weights.T B_j = sim.data[b_rates] B_j = B_j + numpy.random.normal(scale=0.2*numpy.max(B_j), size=B_j.shape) yhat_j = numpy.dot(B_j, d_j) t = sim.trange() figure(figsize=(8,4)) subplot(1,2,1) plot(t, xhat_i, 'g', label='$\hat{x}$') plot(t, x, 'b', label='$x$') legend() xlabel('time (seconds)') ylabel('value') title('$\hat{x}$') ylim(-1,1) subplot(1,2,2) plot(t, yhat_j, 'g', label='$\hat{y}$') plot(t, x, 'b', label='$x$') legend() xlabel('time (seconds)') ylabel('value') title('$\hat{y}$ (second population)') ylim(-1,1); """ Explanation: So everything works fine if we drive each population with the same $x$, let's switch to $\hat{x}$ in the middle End of explanation """ #Have to run previous cells first n = nengo.neurons.LIFRate() alpha_j = sim.data[ensB].gain bias_j = sim.data[ensB].bias encoders_j = sim.data[ensB].encoders.T connection_weights = np.outer(alpha_j*encoders_j, d_i) J_j = np.dot(connection_weights, sim.data[a_rates].T).T + bias_j B_j = n.rates(J_j, gain=1, bias=0) #Gain and bias already in the previous line B_j = B_j + numpy.random.normal(scale=0.2*numpy.max(B_j), size=B_j.shape) xhat_j = numpy.dot(B_j, d_j) figure(figsize=(8,4)) subplot(1,2,1) plot(t, xhat_i, 'g', label='$\hat{x}$') plot(t, x, 'b', label='$x$') 
legend() xlabel('Time (s)') ylabel('Value') title('Decode from A') ylim(-1,1) subplot(1,2,2) plot(t, xhat_j, 'g', label='$\hat{y}$') plot(t, x, 'b', label='$y$') legend() xlabel('Time (s)') title('Decode from B'); ylim(-1,1); """ Explanation: Looks pretty much the same! (just delayed, maybe) So now we've passed one value to the other, but it's implausible The brain doesn't decode and then re-encode Can we skip those steps? Or combine them? A shortcut Let's write down what we've done: Encode into $a$: $a_i = G_i[\alpha_i e_i x + J^{bias}_i]$ Decode from $a$: $\hat{x} = \sum_i a_i d_i$ Set $y = \hat{x}$ Encode into $b$: $b_j = G_j[\alpha_j e_j y + J^{bias}_j]$ Decode from $b$: $\hat{y} = \sum_j b_j d_j$ Now let's just do the substitutions: I.e. substitute $y = \hat{x} = \sum_i a_i d_i$ into $b$ $b_j = G_j[\alpha_j e_j \sum_i a_i d_i + J^{bias}_j]$ $b_j = G_j[\sum_i \alpha_j e_j d_i a_i + J^{bias}_j]$ $b_j = G_j[\sum_i \omega_{ij}a_i + J^{bias}_j]$ where $\omega_{ij} = \alpha_j e_j \cdot d_i$ (an outer product) In other words, we can get the entire weight matrix just by multiplying the decoders from the first population with the encoders from the second population End of explanation """ J_j = numpy.outer(numpy.dot(A_i, d_i), alpha_j*encoders_j)+bias_j """ Explanation: In fact, instead of computing $\omega_{ij}$ at all, it is (usually) more efficient to just do the encoding/decoding Saves a lot of memory space, since you don't have to store a giant weight matrix Also, you have NxM multiplies for weights, but only do ~N+M multiplies for encode/decode End of explanation """ import nengo from nengo.processes import WhiteNoise from nengo.utils.matplotlib import rasterplot T = 1.0 max_freq = 5 model = nengo.Network() with model: stim = nengo.Node(output=WhiteSignal(T, high=max_freq, rms=0.3)) ensA = nengo.Ensemble(25, dimensions=1) ensB = nengo.Ensemble(23, dimensions=1) nengo.Connection(stim, ensA) nengo.Connection(ensA, ensB, transform=2) #function=lambda x: 2*x) stim_p = 
nengo.Probe(stim) ensA_p = nengo.Probe(ensA, synapse=.01) ensB_p = nengo.Probe(ensB, synapse=.01) ensA_spikes_p = nengo.Probe(ensA.neurons, 'spikes') ensB_spikes_p = nengo.Probe(ensB.neurons, 'spikes') sim = nengo.Simulator(model, seed=4) sim.run(T) t = sim.trange() figure(figsize=(8, 6)) subplot(2,1,1) ax = gca() plot(t, sim.data[stim_p],'b') plot(t, sim.data[ensA_p],'g') ylabel("Output EnsA") rasterplot(t, sim.data[ensA_spikes_p], ax=ax.twinx(), colors=['k']*25, use_eventplot=True) #axis('tight') ylabel("Neuron") subplot(2,1,2) ax = gca() plot(t, sim.data[stim_p],'b') plot(t, sim.data[ensB_p],'g') ylabel("Output EnsB") xlabel("Time"); rasterplot(t, sim.data[ensB_spikes_p], ax=ax.twinx(), colors=['k']*23, use_eventplot=True) #axis('tight') ylabel("Neuron"); """ Explanation: This means we get the exact same effect as having a weight matrix $\omega_{ij}$ if we just take the decoded value from one population and feed that into the next population using the normal encoding method These are numerically identical processes, since $\omega_{ij} = \alpha_j e_j \cdot d_i$ Spiking neurons The same approach works for spiking neurons Do exactly the same as before The $a_i(t)$ values are spikes, and we convolve with $h(t)$ Other transformations So this lets us take an $x$ value and feed it into another population Passing information from one group of neurons to the next We call this a 'Communication Channel' as you're just sending the information What about transforming that information in some way? Instead of $y=x$, can we do $y=f(x)$? Let's try $y=2x$ to start We already have a decoder for $\hat{x}$, so how do we get a decoder for $\hat{2x}$? 
Two ways Either use $2x$ when computing $\Upsilon$ Or just multiply your 'representational' decoder by $2$ End of explanation """ import nengo from nengo.processes import WhiteNoise from nengo.utils.matplotlib import rasterplot T = 1.0 max_freq = 5 model = nengo.Network() with model: stim = nengo.Node(output=WhiteSignal(T, high=max_freq, rms=0.3)) ensA = nengo.Ensemble(25, dimensions=1) ensB = nengo.Ensemble(23, dimensions=1) nengo.Connection(stim, ensA) nengo.Connection(ensA, ensB, function=lambda x: x**2) stim_p = nengo.Probe(stim) ensA_p = nengo.Probe(ensA, synapse=.01) ensB_p = nengo.Probe(ensB, synapse=.01) ensA_spikes_p = nengo.Probe(ensA.neurons, 'spikes') ensB_spikes_p = nengo.Probe(ensB.neurons, 'spikes') sim = nengo.Simulator(model, seed=4) sim.run(T) t = sim.trange() figure(figsize=(8, 6)) subplot(2,1,1) ax = gca() plot(t, sim.data[stim_p],'b') plot(t, sim.data[ensA_p],'g') ylabel("Output") rasterplot(t, sim.data[ensA_spikes_p], ax=ax.twinx(), colors=['k']*25, use_eventplot=True) ylabel("Neuron") subplot(2,1,2) ax = gca() plot(t, sim.data[stim_p],'b') plot(t, sim.data[ensB_p],'g') ylabel("Output") xlabel("Time"); rasterplot(t, sim.data[ensB_spikes_p], ax=ax.twinx(), colors=['k']*23, use_eventplot=True) ylabel("Neuron"); """ Explanation: What about a nonlinear function? 
$y = x^2$ End of explanation """ import nengo from nengo.processes import WhiteNoise from nengo.utils.matplotlib import rasterplot T = 1.0 max_freq = 5 model = nengo.Network() with model: stimA = nengo.Node(output=WhiteSignal(T, high=max_freq, rms=0.3, seed=3)) stimB = nengo.Node(output=WhiteSignal(T, high=max_freq, rms=0.3, seed=5)) ensA = nengo.Ensemble(25, dimensions=1) ensB = nengo.Ensemble(23, dimensions=1) ensC = nengo.Ensemble(24, dimensions=1) nengo.Connection(stimA, ensA) nengo.Connection(stimB, ensB) nengo.Connection(ensA, ensC) nengo.Connection(ensB, ensC) stimA_p = nengo.Probe(stimA) stimB_p = nengo.Probe(stimB) ensA_p = nengo.Probe(ensA, synapse=.01) ensB_p = nengo.Probe(ensB, synapse=.01) ensC_p = nengo.Probe(ensC, synapse=.01) sim = nengo.Simulator(model) sim.run(T) figure(figsize=(8,6)) #plot(t, sim.data[stimA_p],'g', label="$x$") plot(t, sim.data[ensA_p],'b', label="$\hat{x}$") #plot(t, sim.data[stimB_p],'c', label="$y$") plot(t, sim.data[ensB_p],'m--', label="$\hat{y}$") #plot(t, sim.data[stimB_p]+sim.data[stimA_p],'r', label="$x+y$") plot(t, sim.data[ensC_p],'k--', label="$\hat{z}$") legend(loc='best') ylabel("Output") xlabel("Time"); """ Explanation: When you set the connection 'function' in Nengo, it solves the same decoding equation as before, but for a function. In equations: $ d^{f(x)} = \Gamma^{-1} \Upsilon^{f(x)} $ $ \Upsilon_i^{f(x)} = \sum_x a_i f(x) \;dx$ $ \Gamma_{ij} = \sum_x a_i a_j \;dx $ $ \hat{f}(x) =\sum_i a_i d_i^{f(x)}$ In code: f_x = myfunction(x) gamma=np.dot(A.T,A)<br> upsilon_f=np.dot(A.T, f_x)<br> d_f = np.dot(np.linalg.pinv(gamma),upsilon) f_xhat = np.dot(A, d_f) We call standard $d_i$ "representational decoders" We call $d_i^{f(x)}$ "transformational decoders" (or "decoders for $f(x)$") Adding What if we want to combine the inputs from two different populations? 
Linear case: $z=x+y$ <img src="files/lecture4/adding1.png"> We want the total current going into a $z$ neuron to be $J=\alpha e \cdot (x+y) + J^{bias}$ How can we achieve this? Again, substitute into the equation, where $z = x+y \approx \hat{x}+\hat{y}$ $J_k=\alpha_k e \cdot (\hat{x}+\hat{y}) + J_k^{bias}$ $\hat{x} = \sum_i a_i d_i$ $\hat{y} = \sum_j a_j d_j$ $J_k=\alpha_k e_k \cdot (\sum_i a_i d_i+\sum_j a_j d_j) + J_k^{bias}$ $J_k=\sum_i(\alpha_k e_k \cdot d_i a_i) + \sum_j(\alpha_k e_k \cdot d_j a_j) + J_k^{bias}$ $J_k=\sum_i(\omega_{ik} a_i) + \sum_j(\omega_{jk} a_j) + J_k^{bias}$ $\omega_{ik}=\alpha_k e_k \cdot d_i$ and $\omega_{jk}=\alpha_k e_k \cdot d_j$ Putting multiple inputs into a neuron automatically gives us addition! End of explanation """ import nengo from nengo.processes import WhiteNoise from nengo.utils.matplotlib import rasterplot T = 1.0 max_freq = 5 model = nengo.Network() with model: stimA = nengo.Node(output=WhiteSignal(T, high=max_freq, rms=0.3, seed=3), size_out=2) stimB = nengo.Node(output=WhiteSignal(T, high=max_freq, rms=0.3, seed=5), size_out=2) stimA = nengo.Node([.3,.5]) stimB = nengo.Node([.3,-.5]) ensA = nengo.Ensemble(55, dimensions=2) ensB = nengo.Ensemble(53, dimensions=2) ensC = nengo.Ensemble(54, dimensions=2) nengo.Connection(stimA, ensA) nengo.Connection(stimB, ensB) nengo.Connection(ensA, ensC) nengo.Connection(ensB, ensC) stimA_p = nengo.Probe(stimA) stimB_p = nengo.Probe(stimB) ensA_p = nengo.Probe(ensA, synapse=.02) ensB_p = nengo.Probe(ensB, synapse=.02) ensC_p = nengo.Probe(ensC, synapse=.02) sim = nengo.Simulator(model) sim.run(T) figure() plot(sim.data[ensA_p][:,0], sim.data[ensA_p][:,1], 'g', label="$\hat{x}$") plot(sim.data[ensB_p][:,0], sim.data[ensB_p][:,1], 'm', label="$\hat{y}$") plot(sim.data[ensC_p][:,0], sim.data[ensC_p][:,1], 'k', label="$\hat{z}$") legend(loc='best') figure() plot(t, sim.data[stimA_p],'g', label="$x$") plot(t, sim.data[ensA_p],'b', label="$\hat{x}$") legend(loc='best') ylabel("Output") 
xlabel("Time") figure() plot(t, sim.data[stimB_p],'c', label="$y$") plot(t, sim.data[ensB_p],'m--', label="$\hat{y}$") legend(loc='best') ylabel("Output") xlabel("Time") figure() plot(t, sim.data[stimB_p]+sim.data[stimA_p],'r', label="$x+y$") plot(t, sim.data[ensC_p],'k--', label="$\hat{z}$") legend(loc='best') ylabel("Output") xlabel("Time"); """ Explanation: Vectors Almost nothing changes End of explanation """ import nengo from nengo.processes import WhiteNoise from nengo.utils.matplotlib import rasterplot T = 1.0 max_freq = 5 model = nengo.Network() with model: stimA = nengo.Node(output=WhiteSignal(T, high=max_freq, rms=0.5, seed=3)) stimB = nengo.Node(output=WhiteSignal(T, high=max_freq, rms=0.5, seed=5)) ensA = nengo.Ensemble(55, dimensions=1) ensB = nengo.Ensemble(53, dimensions=1) ensC = nengo.Ensemble(200, dimensions=2) ensD = nengo.Ensemble(54, dimensions=1) nengo.Connection(stimA, ensA) nengo.Connection(stimB, ensB) nengo.Connection(ensA, ensC, transform=[[1],[0]]) nengo.Connection(ensB, ensC, transform=[[0],[1]]) nengo.Connection(ensC, ensD, function=lambda x: x[0]*x[1]) stimA_p = nengo.Probe(stimA) stimB_p = nengo.Probe(stimB) ensA_p = nengo.Probe(ensA, synapse=.01) ensB_p = nengo.Probe(ensB, synapse=.01) ensC_p = nengo.Probe(ensC, synapse=.01) ensD_p = nengo.Probe(ensD, synapse=.01) sim = nengo.Simulator(model) sim.run(T) figure() plot(t, sim.data[stimA_p],'g', label="$x$") plot(t, sim.data[ensA_p],'b', label="$\hat{x}$") legend(loc='best') ylabel("Output") xlabel("Time") figure() plot(t, sim.data[stimB_p],'c', label="$y$") plot(t, sim.data[ensB_p],'m--', label="$\hat{y}$") legend(loc='best') ylabel("Output") xlabel("Time") figure() plot(t, sim.data[stimB_p]*sim.data[stimA_p],'r', label="$x * y$") plot(t, sim.data[ensD_p],'k--', label="$\hat{z}$") legend(loc='best') ylabel("Output") xlabel("Time"); from nengo_gui.ipython import IPythonViz IPythonViz(model)#, "configs/mult.py.cfg") """ Explanation: Summary We can use the decoders to find connection 
weights between groups of neurons $\omega_{ij}=\alpha_j e_j \cdot d_i$ Using connection weights is numerically identical to decoding and then encoding again Which can be much more efficient to implement Feeding two inputs into the same population results in addition These shortcuts rely on two assumptions: The input to a neuron is a weighted sum of its synaptic inputs $J_j = \sum_i a_i \omega_{ij}$ The mapping from $x$ to $J$ is of the form $J_j=\alpha_j e_j \cdot x + J_j^{bias}$ If these assumptions don't hold, you have to do some other form of optimization If you already have a decoder for $x$, you can quickly find a decoder for any linear function of $x$ If the decoder for $x$ is $d$, the decoder for $Mx$ is $Md$ For some other function of $x$, substitute in that function $f(x)$ when finding $\Upsilon$ Taking all of this into account, the most general form of the weights is: $\omega_{ij} = \alpha_j e_j M d_i^{f(x)}$ A recipe To find weights for any linear transformation Define the repn (enc/dec) for all variables involved in the operation. Write the transformation in terms of these variables. Write the transformation using the decoding expressions for all variables except the output variable. Substitute this expression into the encoding expression of the output variable. Volunteer for: $z = x+y$ $z = Rx$ R is a 2D rotation matrix: $$\left[ \begin{array}{cc} \cos \theta & \sin \theta \ -\sin \theta & \cos \theta \end{array} \right]$$ $z = x \times y$ General nonlinear functions What if we want to combine to compute a nonlinear function of two inputs? E.g., $z=x \times y$ We know how to compute nonlinear functions of a vector space E.g., $x^2$ If $x$ is a vector, you get a bunch of cross terms E.g. 
if $x$ is 2D this gives $x_1^2 + 2 x_1 x_2 + x_2^2$ This means that if you combine two inputs into a 2D space, you can get out their product End of explanation """ with model: stimB.output = lambda t: 0 if (t<.5) else .5 sim = nengo.Simulator(model) sim.run(T) figure() plot(t, sim.data[stimA_p],'g', label="$x$") plot(t, sim.data[ensA_p],'b', label="$\hat{x}$") legend(loc='best') ylabel("Output") xlabel("Time") figure() plot(t, sim.data[stimB_p],'c', label="$y$") plot(t, sim.data[ensB_p],'m--', label="$\hat{y}$") legend(loc='best') ylabel("Output") xlabel("Time") figure() plot(t, sim.data[stimB_p]*sim.data[stimA_p],'r', label="$x*y$") plot(t, sim.data[ensD_p],'k--', label="$\hat{z}$") legend(loc='best') ylabel("Output") xlabel("Time"); """ Explanation: Multiplication is quite powerful, and has lots of uses Gating of signals Attention effects Binding Statistical inference Here's a simple gating example using the same network End of explanation """
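The weight identity used throughout this lecture, $\omega_{ik}=\alpha_k e_k \cdot d_i$, can be checked numerically for scalar $x$ with plain Python: decoding $\hat{x}$ from the activities and then encoding it into each postsynaptic current gives the same currents as multiplying the activities through the full weight matrix. This is an illustrative sketch with made-up gains, encoders, and decoders (not Nengo code); the bias current $J^{bias}$ is omitted since it is added identically on both paths.

```python
import random

random.seed(0)

n_pre, n_post = 5, 4
a = [random.random() for _ in range(n_pre)]              # presynaptic activities
d = [random.gauss(0, 1) for _ in range(n_pre)]           # decoders for scalar x
alpha = [random.random() for _ in range(n_post)]         # postsynaptic gains
e = [random.choice([-1.0, 1.0]) for _ in range(n_post)]  # 1-D encoders

# Path 1: decode x_hat, then encode it into each postsynaptic current
x_hat = sum(ai * di for ai, di in zip(a, d))
J_decode_encode = [alpha[j] * e[j] * x_hat for j in range(n_post)]

# Path 2: full weight matrix w[j][i] = alpha_j * e_j * d_i
w = [[alpha[j] * e[j] * d[i] for i in range(n_pre)] for j in range(n_post)]
J_weights = [sum(w[j][i] * a[i] for i in range(n_pre)) for j in range(n_post)]

# the two paths agree to floating-point precision
assert max(abs(p - q) for p, q in zip(J_decode_encode, J_weights)) < 1e-9
```

This is why decode-then-encode can be used in place of an explicit $n_{pre} \times n_{post}$ weight matrix: it stores $n_{pre}+n_{post}$ numbers instead of their product.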
gregunz/ada2017
exam/data_cluedo/4-politicians.ipynb
mit
# Run the following to import necessary packages and import dataset. Do not use any additional plotting libraries. import pandas as pd from modules.util_politicians import evaluate, toggle_display dataset = "dataset/politicians.csv" df = pd.read_csv(dataset) df.head() """ Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><span><a href="#Politicians-Module" data-toc-modified-id="Politicians-Module-1">Politicians Module</a></span><ul class="toc-item"><li><ul class="toc-item"><li><span><a href="#Instructions-/-Notes:" data-toc-modified-id="Instructions-/-Notes:-1.0.1">Instructions / Notes:</a></span></li></ul></li><li><span><a href="#Try-to-find-publishable-(i.e.,-statistically-significant)-results-about-the-U.S.-economy-and-which-politicians-are-in-charge" data-toc-modified-id="Try-to-find-publishable-(i.e.,-statistically-significant)-results-about-the-U.S.-economy-and-which-politicians-are-in-charge-1.1">Try to find publishable (i.e., statistically significant) results about the U.S. economy and which politicians are in charge</a></span></li><li><span><a href="#Clue" data-toc-modified-id="Clue-1.2">Clue</a></span><ul class="toc-item"><li><span><a href="#Read-this-article-on-P-Hacking:-https://fivethirtyeight.com/features/science-isnt-broken/" data-toc-modified-id="Read-this-article-on-P-Hacking:-https://fivethirtyeight.com/features/science-isnt-broken/-1.2.1">Read this article on P-Hacking: <a href="https://fivethirtyeight.com/features/science-isnt-broken/" target="_blank">https://fivethirtyeight.com/features/science-isnt-broken/</a></a></span></li></ul></li></ul></li></ul></div> Politicians Module Instructions / Notes: Read these carefully Read and execute each cell in order, without skipping forward You may create new Jupyter notebook cells to use for e.g. 
testing, debugging, exploring, etc.- this is encouraged in fact!- just make sure that your final answer dataframes and answers use the set variables outlined below Have fun! End of explanation """ d_politicians, d_economy, d_outliers = toggle_display('Democrats') """ Explanation: Try to find publishable (i.e., statistically significant) results about the U.S. economy and which politicians are in charge End of explanation """ # Part 1 Democrats d_pval, d_corr = evaluate(df, 'Democrats', d_politicians, d_economy, d_outliers, 1) r_politicians, r_economy, r_outliers = toggle_display('republican') """ Explanation: Toggle the variables above and run the evaluate() function until you find a publishable result! End of explanation """ # Part 1 Republicans r_pval, r_corr = evaluate(df, 'Republicans', r_politicians, r_economy, r_outliers, 1) """ Explanation: Toggle the variables above and run evaluate until you find a publishable result! End of explanation """ d_politicians_clue, d_economy_clue, d_outliers_clue = toggle_display('Democrats') # Part 2 Democrats d_pval_clue, d_corr_clue = evaluate(df, 'Democrats', d_politicians_clue, d_economy_clue, d_outliers_clue, 2) r_politicians_clue, r_economy_clue, r_outliers_clue = toggle_display('republican') # Part 2 Republicans r_pval_clue, r_corr_clue = evaluate(df, 'Republicans', r_politicians_clue, r_economy_clue, r_outliers_clue, 2) """ Explanation: Clue Read this article on P-Hacking: https://fivethirtyeight.com/features/science-isnt-broken/ If you found that both parties impact the economy positively or negatively, try again below to show that one party is better than the other on the economy. If you already found one party is better than the other, try again below to find the opposite relationship. End of explanation """
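The evaluate helper above reduces each choice of politicians, economic indicators, and outlier handling to a p-value. A stdlib-only sketch (hypothetical code, not part of the course module) of why toggling those choices until p < 0.05 is p-hacking: every untried combination is a fresh hypothesis test, and even on pure noise the chance that at least one of many tests looks "significant" grows quickly.

```python
import random

def perm_pvalue(xs, ys, n_perm=200, rng=None):
    """Two-sided permutation p-value for the linear association of xs and ys."""
    rng = rng or random.Random(0)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    def cov_num(a, b):
        return sum((ai - mx) * (bi - my) for ai, bi in zip(a, b))
    observed = abs(cov_num(xs, ys))
    hits = 0
    for _ in range(n_perm):
        shuffled = ys[:]
        rng.shuffle(shuffled)
        if abs(cov_num(xs, shuffled)) >= observed:
            hits += 1
    return hits / n_perm

rng = random.Random(42)
# 20 "analysis choices": independent noise series standing in for different
# (politician subset, economic indicator, outlier rule) combinations
pvals = []
for _ in range(20):
    xs = [rng.gauss(0, 1) for _ in range(30)]
    ys = [rng.gauss(0, 1) for _ in range(30)]
    pvals.append(perm_pvalue(xs, ys, rng=random.Random(0)))
print(min(pvals))  # the smallest p-value is often "publishable" even on noise
```

Keeping the number of analyses you tried in mind (and correcting for it) is the point of the FiveThirtyEight article linked below the clue.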
d-k-b/udacity-deep-learning
tensorboard/Anna_KaRNNa_Name_Scoped.ipynb
mit
import time from collections import namedtuple import numpy as np import tensorflow as tf """ Explanation: Anna KaRNNa In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book. This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN. <img src="assets/charseq.jpeg" width="500"> End of explanation """ with open('anna.txt', 'r') as f: text=f.read() vocab = set(text) vocab_to_int = {c: i for i, c in enumerate(vocab)} int_to_vocab = dict(enumerate(vocab)) chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32) text[:100] chars[:100] """ Explanation: First we'll load the text file and convert it into integers for our network to use. End of explanation """ def split_data(chars, batch_size, num_steps, split_frac=0.9): """ Split character data into training and validation sets, inputs and targets for each set. 
Arguments --------- chars: character array batch_size: Number of examples in each batch num_steps: Number of sequence steps to keep in the input and pass to the network split_frac: Fraction of batches to keep in the training set Returns train_x, train_y, val_x, val_y """ slice_size = batch_size * num_steps n_batches = int(len(chars) / slice_size) # Drop the last few characters to make only full batches x = chars[: n_batches*slice_size] y = chars[1: n_batches*slice_size + 1] # Split the data into batch_size slices, then stack them into a 2D matrix x = np.stack(np.split(x, batch_size)) y = np.stack(np.split(y, batch_size)) # Now x and y are arrays with dimensions batch_size x n_batches*num_steps # Split into training and validation sets, keep the first split_frac batches for training split_idx = int(n_batches*split_frac) train_x, train_y = x[:, :split_idx*num_steps], y[:, :split_idx*num_steps] val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:] return train_x, train_y, val_x, val_y train_x, train_y, val_x, val_y = split_data(chars, 10, 200) train_x.shape train_x[:,:10] """ Explanation: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text. Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches. The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
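This batching arithmetic can be sanity-checked on a toy sequence without numpy; the sketch below mirrors split_data's tail-dropping and reshaping (the names are illustrative, not the notebook's code):

```python
chars = list(range(25))                        # stand-in for the encoded text
batch_size, num_steps = 2, 4
slice_size = batch_size * num_steps
n_batches = len(chars) // slice_size           # 25 // 8 -> 3 full slices
x = chars[: n_batches * slice_size]            # drop the ragged tail, keep 24
y = chars[1 : n_batches * slice_size + 1]      # targets: inputs shifted by one

row_len = n_batches * num_steps                # each of the batch_size rows
rows_x = [x[i * row_len : (i + 1) * row_len] for i in range(batch_size)]
rows_y = [y[i * row_len : (i + 1) * row_len] for i in range(batch_size)]

assert len(rows_x) == batch_size and len(rows_x[0]) == row_len
# every target is the character that follows its input
assert all(b == a + 1 for ra, rb in zip(rows_x, rows_y) for a, b in zip(ra, rb))
```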
End of explanation """ def get_batch(arrs, num_steps): batch_size, slice_size = arrs[0].shape n_batches = int(slice_size/num_steps) for b in range(n_batches): yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs] def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2, learning_rate=0.001, grad_clip=5, sampling=False): if sampling == True: batch_size, num_steps = 1, 1 tf.reset_default_graph() # Declare placeholders we'll feed into the graph with tf.name_scope('inputs'): inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs') x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot') with tf.name_scope('targets'): targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets') y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot') y_reshaped = tf.reshape(y_one_hot, [-1, num_classes]) keep_prob = tf.placeholder(tf.float32, name='keep_prob') # Build the RNN layers with tf.name_scope("RNN_layers"): lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers) with tf.name_scope("RNN_init_state"): initial_state = cell.zero_state(batch_size, tf.float32) # Run the data through the RNN layers with tf.name_scope("RNN_forward"): outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state) final_state = state # Reshape output so it's a bunch of rows, one row for each cell output with tf.name_scope('sequence_reshape'): seq_output = tf.concat(outputs, axis=1, name='seq_output') output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output') # Now connect the RNN outputs to a softmax layer and calculate the cost with tf.name_scope('logits'): softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1), name='softmax_w') softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b') logits = tf.matmul(output, softmax_w) + softmax_b with
tf.name_scope('predictions'): preds = tf.nn.softmax(logits, name='predictions') with tf.name_scope('cost'): loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss') cost = tf.reduce_mean(loss, name='cost') # Optimizer for training, using gradient clipping to control exploding gradients with tf.name_scope('train'): tvars = tf.trainable_variables() grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip) train_op = tf.train.AdamOptimizer(learning_rate) optimizer = train_op.apply_gradients(zip(grads, tvars)) # Export the nodes export_nodes = ['inputs', 'targets', 'initial_state', 'final_state', 'keep_prob', 'cost', 'preds', 'optimizer'] Graph = namedtuple('Graph', export_nodes) local_dict = locals() graph = Graph(*[local_dict[each] for each in export_nodes]) return graph """ Explanation: I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch. End of explanation """ batch_size = 100 num_steps = 100 lstm_size = 512 num_layers = 2 learning_rate = 0.001 """ Explanation: Hyperparameters Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability. 
End of explanation """ model = build_rnn(len(vocab), batch_size=batch_size, num_steps=num_steps, learning_rate=learning_rate, lstm_size=lstm_size, num_layers=num_layers) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) file_writer = tf.summary.FileWriter('./logs/3', sess.graph) """ Explanation: Write out the graph for TensorBoard End of explanation """ !mkdir -p checkpoints/anna epochs = 10 save_every_n = 200 train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps) model = build_rnn(len(vocab), batch_size=batch_size, num_steps=num_steps, learning_rate=learning_rate, lstm_size=lstm_size, num_layers=num_layers) saver = tf.train.Saver(max_to_keep=100) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # Use the line below to load a checkpoint and resume training #saver.restore(sess, 'checkpoints/anna20.ckpt') n_batches = int(train_x.shape[1]/num_steps) iterations = n_batches * epochs for e in range(epochs): # Train network new_state = sess.run(model.initial_state) loss = 0 for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1): iteration = e*n_batches + b start = time.time() feed = {model.inputs: x, model.targets: y, model.keep_prob: 0.5, model.initial_state: new_state} batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer], feed_dict=feed) loss += batch_loss end = time.time() print('Epoch {}/{} '.format(e+1, epochs), 'Iteration {}/{}'.format(iteration, iterations), 'Training loss: {:.4f}'.format(loss/b), '{:.4f} sec/batch'.format((end-start))) if (iteration%save_every_n == 0) or (iteration == iterations): # Check performance, notice dropout has been set to 1 val_loss = [] new_state = sess.run(model.initial_state) for x, y in get_batch([val_x, val_y], num_steps): feed = {model.inputs: x, model.targets: y, model.keep_prob: 1., model.initial_state: new_state} batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed) 
val_loss.append(batch_loss) print('Validation loss:', np.mean(val_loss), 'Saving checkpoint!') saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss))) tf.train.get_checkpoint_state('checkpoints/anna') """ Explanation: Training Time for training, which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint. End of explanation """ def pick_top_n(preds, vocab_size, top_n=5): p = np.squeeze(preds) p[np.argsort(p)[:-top_n]] = 0 p = p / np.sum(p) c = np.random.choice(vocab_size, 1, p=p)[0] return c def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "): samples = [c for c in prime] model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True) saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, checkpoint) new_state = sess.run(model.initial_state) for c in prime: x = np.zeros((1, 1)) x[0,0] = vocab_to_int[c] feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.preds, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) for i in range(n_samples): x[0,0] = c feed = {model.inputs: x, model.keep_prob: 1., model.initial_state: new_state} preds, new_state = sess.run([model.preds, model.final_state], feed_dict=feed) c = pick_top_n(preds, len(vocab)) samples.append(int_to_vocab[c]) return ''.join(samples) checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt" samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt" samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt" samp =
sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt" samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far") print(samp) """ Explanation: Sampling Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that. The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. End of explanation """
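pick_top_n above keeps only the top_n most probable characters and renormalizes before sampling. The same idea in stdlib Python (a hedged re-implementation for illustration, not the notebook's numpy version):

```python
import random

def pick_top_n_stdlib(probs, top_n=5, rng=None):
    """Sample an index after keeping only the top_n largest probabilities."""
    rng = rng or random.Random(0)
    keep = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_n]
    total = sum(probs[i] for i in keep)      # renormalize over the survivors
    r = rng.random() * total
    acc = 0.0
    for i in keep:
        acc += probs[i]
        if r <= acc:
            return i
    return keep[-1]

probs = [0.02, 0.40, 0.05, 0.30, 0.03, 0.20]
picks = [pick_top_n_stdlib(probs, top_n=3, rng=random.Random(s)) for s in range(200)]
assert set(picks) <= {1, 3, 5}   # only the three most likely indices can survive
```

Truncating to the top N trades a little diversity for far fewer low-probability (often nonsensical) characters in the generated text.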
dmolina/es_intro_python
09-Errors-and-Exceptions.ipynb
gpl-3.0
print(Q) """ Explanation: <!--BOOK_INFORMATION--> <img align="left" style="padding-right:10px;" src="fig/cover-small.jpg"> This notebook contains an excerpt from the Whirlwind Tour of Python by Jake VanderPlas; the content is available on GitHub. The text and code are released under the CC0 license; see also the companion project, the Python Data Science Handbook. <!--NAVIGATION--> < Defining and Using Functions | Contents | Iterators > Errors and Exceptions No matter your skill as a programmer, you will eventually make a coding mistake. Such mistakes come in three basic flavors: Syntax errors: Errors where the code is not valid Python (generally easy to fix) Runtime errors: Errors where syntactically valid code fails to execute, perhaps due to invalid user input (sometimes easy to fix) Semantic errors: Errors in logic: code executes without a problem, but the result is not what you expect (often very difficult to track-down and fix) Here we're going to focus on how to deal cleanly with runtime errors. As we'll see, Python handles runtime errors via its exception handling framework. Runtime Errors If you've done any coding in Python, you've likely come across runtime errors. They can happen in a lot of ways. 
For example, if you try to reference an undefined variable: End of explanation """ 1 + 'abc' """ Explanation: Or if you try an operation that's not defined: End of explanation """ 2 / 0 """ Explanation: Or you might be trying to compute a mathematically ill-defined result: End of explanation """ L = [1, 2, 3] L[1000] """ Explanation: Or maybe you're trying to access a sequence element that doesn't exist: End of explanation """ try: print("this gets executed first") except: print("this gets executed only if there is an error") """ Explanation: Note that in each case, Python is kind enough to not simply indicate that an error happened, but to spit out a meaningful exception that includes information about what exactly went wrong, along with the exact line of code where the error happened. Having access to meaningful errors like this is immensely useful when trying to trace the root of problems in your code. Catching Exceptions: try and except The main tool Python gives you for handling runtime exceptions is the try...except clause. Its basic structure is this: End of explanation """ try: print("let's try something:") x = 1 / 0 # ZeroDivisionError except: print("something bad happened!") """ Explanation: Note that the second block here did not get executed: this is because the first block did not return an error. Let's put a problematic statement in the try block and see what happens: End of explanation """ def safe_divide(a, b): try: return a / b except: return 1E100 safe_divide(1, 2) safe_divide(2, 0) """ Explanation: Here we see that when the error was raised in the try statement (in this case, a ZeroDivisionError), the error was caught, and the except statement was executed. One way this is often used is to check user input within a function or another piece of code. 
For example, we might wish to have a function that catches zero-division and returns some other value, perhaps a suitably large number like $10^{100}$: End of explanation """ safe_divide (1, '2') """ Explanation: There is a subtle problem with this code, though: what happens when another type of exception comes up? For example, this is probably not what we intended: End of explanation """ def safe_divide(a, b): try: return a / b except ZeroDivisionError: return 1E100 safe_divide(1, 0) safe_divide(1, '2') """ Explanation: Dividing an integer and a string raises a TypeError, which our over-zealous code caught and assumed was a ZeroDivisionError! For this reason, it's nearly always a better idea to catch exceptions explicitly: End of explanation """ raise RuntimeError("my error message") """ Explanation: We're now catching zero-division errors only, and letting all other errors pass through un-modified. Raising Exceptions: raise We've seen how valuable it is to have informative exceptions when using parts of the Python language. It's equally valuable to make use of informative exceptions within the code you write, so that users of your code (foremost yourself!) can figure out what caused their errors. The way you raise your own exceptions is with the raise statement. For example: End of explanation """ def fibonacci(N): L = [] a, b = 0, 1 while len(L) < N: a, b = b, a + b L.append(a) return L """ Explanation: As an example of where this might be useful, let's return to our fibonacci function that we defined previously: End of explanation """ def fibonacci(N): if N < 0: raise ValueError("N must be non-negative") L = [] a, b = 0, 1 while len(L) < N: a, b = b, a + b L.append(a) return L fibonacci(10) fibonacci(-10) """ Explanation: One potential problem here is that the input value could be negative. This will not currently cause any error in our function, but we might want to let the user know that a negative N is not supported. 
Errors stemming from invalid parameter values, by convention, lead to a ValueError being raised: End of explanation """ N = -10 try: print("trying this...") print(fibonacci(N)) except ValueError: print("Bad value: need to do something else") """ Explanation: Now the user knows exactly why the input is invalid, and could even use a try...except block to handle it! End of explanation """ try: x = 1 / 0 except ZeroDivisionError as err: print("Error class is: ", type(err)) print("Error message is:", err) """ Explanation: Diving Deeper into Exceptions Briefly, I want to mention here some other concepts you might run into. I'll not go into detail on these concepts and how and why to use them, but instead simply show you the syntax so you can explore more on your own. Accessing the error message Sometimes in a try...except statement, you would like to be able to work with the error message itself. This can be done with the as keyword: End of explanation """ class MySpecialError(ValueError): pass raise MySpecialError("here's the message") """ Explanation: With this pattern, you can further customize the exception handling of your function. Defining custom exceptions In addition to built-in exceptions, it is possible to define custom exceptions through class inheritance. For instance, if you want a special kind of ValueError, you can do this: End of explanation """ try: print("do something") raise MySpecialError("[informative error message here]") except MySpecialError: print("do something else") """ Explanation: This would allow you to use a try...except block that only catches this type of error: End of explanation """ try: print("try something here") except: print("this happens only if it fails") else: print("this happens only if it succeeds") finally: print("this happens no matter what") """ Explanation: You might find this useful as you develop more customized code. 
try...except...else...finally In addition to try and except, you can use the else and finally keywords to further tune your code's handling of exceptions. The basic structure is this: End of explanation """
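The branch order in that structure can be pinned down by recording which blocks run instead of printing (the function name here is illustrative): else runs only when the try block succeeds, and finally runs no matter what.

```python
def run(divisor):
    events = []
    try:
        events.append("try")
        _ = 1 / divisor
    except ZeroDivisionError:
        events.append("except")
    else:
        events.append("else")     # only if the try block raised nothing
    finally:
        events.append("finally")  # always, success or failure
    return events

assert run(2) == ["try", "else", "finally"]
assert run(0) == ["try", "except", "finally"]
```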
MarneeDear/softwarecarpentry
python lessons/Fundamentals/Functions.ipynb
mit
# a simple function that looks like a mathematical function # define a function called add_two_numbers that takes 2 arguments: num1 and num2 def add_two_numbers(num1, num2): # Under the def must be indented return num1 + num2 # use the return statement to tell the function what to return """ Explanation: Functions Functions in Python are like mathematical functions. A function will take some values, do something to them or with them, and return something. Python functions can take any combination of data types and data structures and return a single value that can be any data type or data structure End of explanation """ add_two_numbers(905, 90) # written a different way # define a function called add_two_numbers that takes 2 arguments: num1 and num2 def add_two_numbers(num1, num2): total = num1 + num2 # do the stuff # This is the body of the function return total # use the return statement to tell the function what to return result = add_two_numbers(905, 90) print(result) print(add_two_numbers(905, 90)) """ Explanation: Use return ... to give a value back to the caller. A function that doesn’t explicitly return a value automatically returns None. Defining a function does not run it. You must call the function to execute the code it contains. End of explanation """ # write your function here # Run this cell after defining your function print(quote('name', '"')) """ Explanation: Question 00 What does the following program print? (Don't actually code, just think about it.) ``` def report(pressure): print('pressure is: ', pressure) report(22.5) ``` Practice 00 "Adding" two strings produces their concatenation: 'a' + 'b' is 'ab'. Write a function called quote that takes two parameters called original and wrapper and returns a new string that has the wrapper character at the beginning and end of the original.
A call to your function should look like this: print(quote('name', '"')) "name" End of explanation """ # write your function here # Run this cell after defining your function print(outer('helium')) """ Explanation: Practice 01 If the variable s refers to a string, then s[0] is the string’s first character and s[-1] is its last. Write a function called outer that returns a string made up of just the first and last characters of its input. A call to your function should look like this: print(outer('helium')) hm End of explanation """ def fahr_to_kelvin(temp_f): # write your function here """ Explanation: Question 01 Explain why the two lines of output appeared in the order they did. ``` def print_date(year, month, day): joined = str(year) + '/' + str(month) + '/' + str(day) print(joined) result = print_date(1871, 3, 19) print('result of call is:', result) ``` OUTPUT: 1871/3/19 result of call is: None COMMIT YOUR WORK Why Use Functions? Functions let us break down our programs into smaller bits that can be reused and tested * Human beings can only keep a few items in working memory at a time. * Understand larger/more complicated ideas by understanding and combining pieces. * Components in a machine. * Functions serve the same purpose in programs. * Encapsulate complexity so that we can treat it as a single “thing”. * Also enables re-use. * Write one time, use many times. Testability Imagine a really big program with lots of lines of code. There is a problem somewhere in the code because you are not getting the results you expect How do you find the problem in your code? If your program is composed of lots of small functions that only do one thing then you can test each function individually. Reusability Imagine a really big program with lots of lines of code. There is a section of code you want to use in a different part of the program. How do you reuse that part of the code? 
If you just have one big program then you have to copy and paste that bit of code where you want it to go, but if that bit was a function, you could just use that function Always keep both of these concepts in mind when writing programs. Try to write small functions that do one thing Your programs should be composed of lots of functions that do one thing Never have one giant function that does a million things. Our Problem Last month we ran an experiment in the lab, but one of the windows was left open. If the temperature in the lab fell below 285 degrees Kelvin all of the data is ruined. Luckily a data logger was running, but unfortunately it only collects the temperature in Fahrenheit. Example log data: beginTime,endTime,Temp 1/1/2017 0:00,1/1/2017 1:00,54.0 1/1/2017 1:00,1/1/2017 2:00,11.7 1/1/2017 2:00,1/1/2017 3:00,11.7 Write a function that converts temperatures from Fahrenheit to Kelvin. ((temp_f - 32) * (5/9)) + 273.15 End of explanation """ def fahr_to_kelvin(temp_f): # write your function here """ Explanation: COMMIT YOUR WORK Documentation Along the same lines as testability and reusability is documentation. While Python is easy to read and follow, you will either need to share your code with others or you will forget what you did. End of explanation """ help(round) """ Explanation: Adding documentation to your own code is simple and easy. Immediately after defining the function add a documentation block with triple-quotes (''') def add_two_numbers(num1, num2): ''' This is where to put documentation Return the sum of two numbers ''' return num1 + num2 Add documentation to your function fahr_to_kelvin. End of explanation """ # Run this cell after adding documentation help(fahr_to_kelvin) """ Explanation: COMMIT YOUR WORK We read the packaging on the materials wrong! If the temperature in the lab fell below -5 degrees Celsius all of the data is ruined. Write a function that converts temperatures from Kelvin into Celsius.
temp_k - 273.15 End of explanation """ # write your function here """ Explanation: COMMIT YOUR WORK Because we know issues like this happen all of the time, let's prepare for the inevitability. Write a function to convert fahrenheit to celsius, without a formula. We could write out the formula, but we don’t need to. Instead, we can compose the two functions we have already created End of explanation """ # write your function here """ Explanation: This is our first taste of how larger programs are built: we define basic operations, then combine them in ever-large chunks to get the effect we want. Real-life functions will usually be larger than the ones shown here — typically half a dozen to a few dozen lines — but they shouldn’t ever be much longer than that, or the next person who reads it won’t be able to understand what’s going on. COMMIT YOUR WORK Write a function to convert from celsius to fahrenheit. (9/5) * temp_c + 32 End of explanation """ def display(a=1, b=2, c=3): print('a:', a, 'b:', b, 'c:', c) print('no parameters:') display() print('one parameter:') display(55) print('two parameters:') display(55, 66) """ Explanation: COMMIT YOUR WORK Arguments in call are matched to parameters in definition. Functions are most useful when they can operate on different data. Specify parameters when defining a function. These become variables when the function is executed. Are assigned the arguments in the call (i.e., the values passed to the function). Default Values If we usually want a function to work one way, but occasionally need it to do something else, we can allow people to pass a parameter when they need to but provide a default to make the normal case easier. End of explanation """ print('only setting the value of c') display(c=77) import numpy # Why does this not work? # What is wrong? How to fix it? 
numpy.loadtxt('LabTempHourlyJanuary2017.csv', ',') help(numpy.loadtxt) """ Explanation: As this example shows, parameters are matched up from left to right, and any that haven’t been given a value explicitly get their default value. We can override this behavior by naming the value as we pass it in: End of explanation """ def convert_temp(temp, temp_type='F'): # write your function here # Run this cell after writing convert_temp assert(convert_temp(-40.0, 'F'), -40.0) assert(convert_temp(0.0, 'C'), 32.0) assert(convert_temp(32.0, 'F'), 0.0) assert(convert_temp(54.0), 12.2) assert(convert_temp(12.2, 'C'), 54.0) """ Explanation: The filename is assigned to fname (which is what we want), but the delimiter string ',' is assigned to dtype rather than delimiter, because dtype is the second parameter in the list. However ‘,’ isn’t a known dtype so our code produced an error message when we tried to run it. When we call loadtxt we don’t have to provide fname= for the filename because it’s the first item in the list, but if we want the ‘,’ to be assigned to the variable delimiter, we do have to provide delimiter= for the second parameter since delimiter is not the second parameter in the list. It looks like the logger actually can collect celsius after all! Unfortunately it forgets what temperature type to log and has been intermittently logging both. Example log data: beginTime,endTime,Temp,TempType 1/1/2017 0:00,1/1/2017 1:00,54.0,F 1/1/2017 1:00,1/1/2017 2:00,11.7,C 1/1/2017 2:00,1/1/2017 3:00,11.7,C Write a function that either converts to fahrenheit or celsius based on a parameter, with a default assuming fahrenheit ('F') Remember if/else: End of explanation """ # write your code here """ Explanation: COMMIT YOUR WORK Let's load the logger data! Use numpy.loadtxt Don't forget the help function! What does numpy.loadtxt return? Write a function that loads a file and loops over each line and converts the temperature to celcius. End of explanation """
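One pitfall in the checking cell above: `assert(expr, value)` with parentheses builds a two-element tuple, which is always truthy, so those checks can never fail; the intended form is `assert expr == value`. Below is one possible sketch of `convert_temp` together with corrected checks. The rounding to one decimal place is an assumption made here so the expected values come out exactly:

```python
def fahr_to_celsius(temp_f):
    """Convert a temperature from Fahrenheit to Celsius."""
    return (temp_f - 32) * (5 / 9)

def celsius_to_fahr(temp_c):
    """Convert a temperature from Celsius to Fahrenheit."""
    return (9 / 5) * temp_c + 32

def convert_temp(temp, temp_type='F'):
    """Convert temp to the other scale; temp_type is the scale temp is in now."""
    if temp_type == 'F':
        result = fahr_to_celsius(temp)
    else:
        result = celsius_to_fahr(temp)
    return round(result, 1)

# Corrected checks: a bare comparison, not a tuple.
assert convert_temp(-40.0, 'F') == -40.0
assert convert_temp(0.0, 'C') == 32.0
assert convert_temp(32.0, 'F') == 0.0
assert convert_temp(54.0) == 12.2
assert convert_temp(12.2, 'C') == 54.0
```

Composing the two single-step converters, as suggested earlier in the lesson, keeps each function doing one thing.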
google-research/rlds
rlds/examples/rlds_tfds_envlogger.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Envlogger and TFDS End of explanation """ #@title Install Pip packages !pip install rlds[tensorflow] !pip install envlogger[tfds] !apt-get install libgmp-dev !pip install numpy #@title Imports import os import rlds import envlogger from envlogger.backends import rlds_utils from envlogger.backends import tfds_backend_writer from envlogger.testing import catch_env import numpy as np import tensorflow as tf import tensorflow_datasets as tfds import time from typing import Optional, List #@title Auxiliary function to get dataset directories _METADATA_FILENAME='features.json' def get_ds_paths(pattern: str) -> List[str]: """Returns the paths of tfds datasets under a (set of) directories. We assume that a sub-directory with features.json file contains the dataset files. Args: pattern: Root directory to search for dataset paths or a glob that matches a set of directories, e.g. /some/path or /some/path/prefix*. See tf.io.gfile.glob for the supported patterns. Returns: A list of paths that contain the environment logs. Raises: ValueError if the specified pattern matches a non-directory. 
""" paths = set([]) for root_dir in tf.io.gfile.glob(pattern): if not tf.io.gfile.isdir(root_dir): raise ValueError(f'{root_dir} is not a directory.') print(f'root: {root_dir}') for path, _, files in tf.io.gfile.walk(root_dir): if _METADATA_FILENAME in files: print(f'path: {path}') paths.add(path) return list(paths) """ Explanation: <table class="tfo-notebook-buttons" align="left"> <td> <a href="https://colab.research.google.com/github/google-research/rlds/blob/main/rlds/examples/rlds_tfds_envlogger.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Run In Google Colab"/></a> </td> </table> End of explanation """ generate_data_dir='/tmp/tensorflow_datasets/catch/' # @param num_episodes= 20 # @param max_episodes_per_shard = 1000 # @param os.makedirs(generate_data_dir, exist_ok=True) def record_data(data_dir, num_episodes, max_episodes_per_shard): env = catch_env.Catch() def step_fn(unused_timestep, unused_action, unused_env): return {'timestamp_ns': time.time_ns()} ds_config = tfds.rlds.rlds_base.DatasetConfig( name='catch_example', observation_info=tfds.features.Tensor( shape=(10, 5), dtype=tf.float32, encoding=tfds.features.Encoding.ZLIB), action_info=tf.int64, reward_info=tf.float64, discount_info=tf.float64, step_metadata_info={'timestamp_ns': tf.int64}) with envlogger.EnvLogger( env, backend = tfds_backend_writer.TFDSBackendWriter( data_directory=data_dir, split_name='train', max_episodes_per_file=max_episodes_per_shard, ds_config=ds_config), step_fn=step_fn) as env: print('Done wrapping environment with EnvironmentLogger.') print(f'Training a random agent for {num_episodes} episodes...') for i in range(num_episodes): print(f'episode {i}') timestep = env.reset() while not timestep.last(): action = np.random.randint(low=0, high=3) timestep = env.step(action) print(f'Done training a random agent for {num_episodes} episodes.') record_data(generate_data_dir, num_episodes, max_episodes_per_shard) """ Explanation: 
Generate a dataset In this example, we use the local TFDS backend. In order to generate the dataset, use the parameters below to configure: root_dir: where the dataset will be created. num_episodes: how many episodes to generate. max_episodes_per_shard: maximum number of episodes to include per file (episodes will be stored in multiple files and then read as a single dataset). End of explanation """ recover_dataset_path = '/tmp/tensorflow_datasets/catch/' # @param builder = tfds.builder_from_directory(recover_dataset_path) builder = rlds_utils.maybe_recover_last_shard(builder) """ Explanation: Recover a dataset When the process of generating one dataset didn't finish properly, it is possible for the last shard to be incomplete. Envlogger provides the functionality to recover this last shard. End of explanation """ load_dataset_path = '/tmp/tensorflow_datasets/catch/' # @param loaded_dataset = tfds.builder_from_directory(load_dataset_path).as_dataset(split='all') for e in loaded_dataset: print(e) """ Explanation: Load one dataset Loading one dataset generated with the TFDS backend uses just regular TFDS functionality. End of explanation """ multiple_dataset_path = '/tmp/tensorflow_datasets/catch' # @param subdir_A = 'subdir_A' # @param subdir_B = 'subdir_B' # @param dir_A = os.path.join(multiple_dataset_path, subdir_A) dir_B = os.path.join(multiple_dataset_path, subdir_B) os.makedirs(dir_A, exist_ok=True) os.makedirs(dir_B, exist_ok=True) record_data(dir_A, num_episodes, max_episodes_per_shard) record_data(dir_B, num_episodes, max_episodes_per_shard) ds = tfds.builder_from_directories([dir_A, dir_B]).as_dataset(split='all') for e in ds.take(5): print(e) """ Explanation: Load a dataset from multiple directories TFDS supports loading of one dataset from multiple directories as long as the data has the same shape. End of explanation """
ES-DOC/esdoc-jupyterhub
notebooks/csiro-bom/cmip6/models/sandbox-3/landice.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-3', 'landice') """ Explanation: ES-DOC CMIP6 Model Properties - Landice MIP Era: CMIP6 Institute: CSIRO-BOM Source ID: SANDBOX-3 Topic: Landice Sub-Topics: Glaciers, Ice. Properties: 30 (21 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:56 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Grid 4. Glaciers 5. Ice 6. Ice --&gt; Mass Balance 7. Ice --&gt; Mass Balance --&gt; Basal 8. Ice --&gt; Mass Balance --&gt; Frontal 9. Ice --&gt; Dynamics 1. Key Properties Land ice key properties 1.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. 
Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.ice_albedo') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "function of ice age" # "function of ice density" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Ice Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify how ice albedo is modelled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.4. Atmospheric Coupling Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which variables are passed between the atmosphere and ice (e.g. orography, ice mass) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.5. Oceanic Coupling Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which variables are passed between the ocean and ice End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ice velocity" # "ice thickness" # "ice temperature" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.6. 
Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which variables are prognostically calculated in the ice model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Software Properties Software properties of land ice code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Grid Land ice grid 3.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land ice scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.landice.grid.adaptive_grid')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Adaptive Grid
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Base Resolution
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
The base resolution (in metres), before any adaptation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Resolution Limit
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.5. Projection
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of glaciers, if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 4.3. Dynamic Areal Extent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does the model include a dynamic glacial extent? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Ice Ice sheet and ice shelf 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the ice sheet and ice shelf in the land ice scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.grounding_line_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "grounding line prescribed" # "flux prescribed (Schoof)" # "fixed grid size" # "moving grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 5.2. Grounding Line Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.ice_sheet') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.3. 
Ice Sheet Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are ice sheets simulated? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.ice_shelf') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.4. Ice Shelf Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are ice shelves simulated? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Ice --&gt; Mass Balance Description of the surface mass balance treatment 6.1. Surface Mass Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how and where the surface mass balance (SMB) is calulated. Include the temporal coupling frequeny from the atmosphere, whether or not a seperate SMB model is used, and if so details of this model, such as its resolution End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Ice --&gt; Mass Balance --&gt; Basal Description of basal melting 7.1. Bedrock Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of basal melting over bedrock End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. 
Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of basal melting over the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Ice --&gt; Mass Balance --&gt; Frontal Description of claving/melting from the ice shelf front 8.1. Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of calving from the front of the ice shelf End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Melting Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of melting from the front of the ice shelf End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Ice --&gt; Dynamics ** 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description if ice sheet and ice shelf dynamics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.approximation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SIA" # "SAA" # "full stokes" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9.2. Approximation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Approximation type used in modelling ice dynamics End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 9.3. Adaptive Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there an adaptive time scheme for the ice scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 9.4. Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep. End of explanation """
enoordeh/StatisticalMethods
code/mc2_sandbox.ipynb
gpl-2.0
import numpy as np
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
import scipy.stats
%matplotlib inline
"""
Explanation: Efficient Sampling Sandbox
This notebook is for playing with different MCMC algorithms, given a few difficult posterior distributions, below.
Choose one of the speed-ups from the Efficient Monte Carlo Sampling chunk (something that sounds implementable in a reasonable time) and modify/replace the sampler below to use it on one or more of the given PDFs. Compare performance (burn-in length, acceptance rate, etc.) with simple Metropolis. (Apart from the Gaussian PDF, chances are that nothing you do will work brilliantly, but you should see a difference!)
For Gibbs sampling, interpret the Gaussian as a likelihood function and put some thought into how you define your priors. Otherwise, just take the given PDF to be the posterior distribution.
Preliminaries
Import things
End of explanation
"""
def Rosenbrock_lnP(x, y, a=1.0, b=100.0):
    if y < 0.0:
        return -np.inf
    return -( (a-x)**2 + b*(y-x**2)**2 )
"""
Explanation: Some (log) PDFs to try sampling
All of these are two-dimensional, with parameters x and y.
Rosenbrock
Here, $y\geq0$.
End of explanation
"""
def eggbox_lnP(x, y):
    return (2.0 + np.cos(0.5*x)*np.cos(0.5*y))**3
"""
Explanation: Eggbox
End of explanation
"""
def sphshell_lnP(x, y, s=0.1):
    return -(np.sqrt(x**2+y**2) - 1)**2/(2.0*s**2)
"""
Explanation: Spherical shell
End of explanation
"""
gaussian_data = np.random.normal(size=(20)) # arbitrary data set

def gaussian_lnP(m, v):
    if v <= 0.0:
        return -np.inf
    # exponentiation (**), summed over the data points to give a scalar log-likelihood
    return np.sum(-0.5*(m-gaussian_data)**2/v - 0.5*np.log(v))
"""
Explanation: Gaussian
This one is not hard to sample using simple methods, but you can use it as a test to make sure things are working. The real idea is to do conjugate Gibbs sampling for a Gaussian likelihood - for which you would not actually make direct use of the function below. The model parameters are the mean and variance of the Gaussian.
End of explanation """ def propose(params, width): return params + width * np.random.randn(params.shape[0]) def step(current_params, current_lnP, width=1.0): trial_params = propose(current_params, width=width) trial_lnP = lnPost(trial_params, x, y) if np.log(np.random.rand(1)) <= trial_lnP-current_lnP: return (trial_params, trial_lnP) else: return (current_params, current_lnP) """ Explanation: The sampler This is the same off-the-shelf Metropolis sampler as in the Basic MCMC sandbox, with a Gaussian proposal. You'll want to improve on this. End of explanation """ params = -5.0 + np.random.rand(2) * 10.0 lnP = lnPost(params, x, y) Nsamples = 1000 samples = np.zeros((Nsamples, 2)) for i in range(Nsamples): params, lnP = step(params, lnP) samples[i,:] = params """ Explanation: Run This is a generic stub for randomizing initial parameter values and running. Note that the initial parameter range includes values that are not allowed for some of the above PDFs. Modify as necessary. End of explanation """
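As a baseline for judging any of the speed-ups, here is a self-contained 1-D Metropolis run with an acceptance-rate readout. The target is a simple unit Gaussian rather than one of the PDFs above, and the proposal width of 2.4 is just the classic rule-of-thumb scaling:

```python
import numpy as np

def ln_gauss(x):
    # log-density of a standard normal, up to an additive constant
    return -0.5 * x**2

def metropolis_1d(lnp, x0, width, nsteps, rng):
    """Run a simple Metropolis chain and report the acceptance fraction."""
    x, lx = x0, lnp(x0)
    chain = np.empty(nsteps)
    accepted = 0
    for i in range(nsteps):
        trial = x + width * rng.standard_normal()
        lt = lnp(trial)
        if np.log(rng.uniform()) <= lt - lx:
            x, lx = trial, lt
            accepted += 1
        chain[i] = x
    return chain, accepted / nsteps

rng = np.random.default_rng(42)
chain, rate = metropolis_1d(ln_gauss, 5.0, 2.4, 20000, rng)
print('acceptance rate:', rate)
print('post-burn-in mean:', chain[5000:].mean())
```

Tracking the acceptance rate like this makes it easy to see how a proposal-width change (or any other speed-up) shifts the chain's behaviour.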
NlGG/various
mcmc.ipynb
mit
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pymc as pm2
import pymc3 as pm
import time
import math
import numpy.random as rd
import pandas as pd
from pymc3 import summary
from pymc3.backends.base import merge_traces
import theano.tensor as T
"""
Explanation: <p>The following were used as references:</p>
<p>http://qiita.com/kenmatsu4/items/a0c703762a2429e21793</p>
<p>http://www.slideshare.net/shima__shima/2014-mtokyoscipy6</p>
<p>https://github.com/scipy-japan/tokyo-scipy/tree/master/006/shima__shima</p>
<p>Iwanami Data Science (岩波データサイエンス)</p>
<p>Introduction to Statistical Modeling for Data Analysis (データ解析のための統計モデリング入門)</p>
End of explanation
"""
def comb(n, r):
    if n == 0 or r == 0:
        return 1
    return comb(n, r - 1) * (n - r + 1) / r

def prob(n, y, q):
    p = comb(n, y) * q ** y * (1 - q) ** (n - y)
    return p

def likelihood(n, y, q):
    p = 1.0
    for i in y:
        p = p * prob(n, i, q)
    return p

def metropolis(n, y, q, b, num):
    qlist = np.array([q])
    for i in range(num):
        old_q = q
        q = q + np.random.choice([b, -b])
        old_l = likelihood(n, y, old_q)
        new_l = likelihood(n, y, q)
        if new_l > old_l:
            old_q = q
        else:
            r = new_l / old_l
            q = np.random.choice([q, old_q], p=[r, 1.0 - r])
        q = round(q, 5)
        qlist = np.append(qlist, q)
    return q, qlist

y = [4, 3, 4, 5, 5, 2, 3, 1, 4, 0, 1, 5, 5, 6, 5, 4, 4, 5, 3, 4]
q, qlist = metropolis(8, y, 0.3, 0.01, 10000)

plt.plot(qlist)

plt.hist(qlist)

qlist.mean()
"""
Explanation: <p>The Metropolis method</p>
<p>(1) Choose an initial value for the parameter q</p>
<p>(2) Randomly decide whether to increase or decrease q</p>
<p>(3) If the likelihood at the new q is larger, update q to the new value</p>
<p>(4) Even when the likelihood at the new q would be smaller, still update q to the new value with probability r</p>
End of explanation
"""
N = 40
X = np.random.uniform(10, size=N)
Y = X*30 + 4 + np.random.normal(0, 16, size=N)

plt.plot(X, Y, "o")

multicore = False
saveimage = False
itenum = 1000
t0 = time.clock()
chainnum = 3

with pm.Model() as model:
    alpha = pm.Normal('alpha', mu=0, sd=20)
    beta = pm.Normal('beta', mu=0, sd=20)
    sigma = pm.Uniform('sigma', lower=0)
    y = pm.Normal('y', mu=beta*X + alpha, sd=sigma, observed=Y)
    start = pm.find_MAP()
    step = pm.NUTS(state=start)

with model:
    if(multicore):
        trace = pm.sample(itenum, step, start=start,
                          njobs=chainnum, random_seed=range(chainnum), progress_bar=False)
    else:
        ts = [pm.sample(itenum, step, chain=i, progressbar=False) for i in range(chainnum)]
        trace = merge_traces(ts)
if(saveimage):
    pm.traceplot(trace).savefig("simple_linear_trace.png")
print "Rhat="+str(pm.gelman_rubin(trace))
t1=time.clock()
print "elapsed time=" + str(t1-t0)

if(not multicore):
    trace = ts[0]
with model:
    pm.traceplot(trace, model.vars)

pm.forestplot(trace)

summary(trace)

multicore = True
t0 = time.clock()
with model:
    if(multicore):
        trace = pm.sample(itenum, step, start=start,
                          njobs=chainnum, random_seed=range(chainnum), progress_bar=False)
    else:
        ts = [pm.sample(itenum, step, chain=i, progressbar=False) for i in range(chainnum)]
        trace = merge_traces(ts)
if(saveimage):
    pm.traceplot(trace).savefig("simple_linear_trace.png")
print "Rhat="+str(pm.gelman_rubin(trace))
t1=time.clock()
print "elapsed time=" + str(t1-t0)

if(not multicore):
    trace = ts[0]
with model:
    pm.traceplot(trace, model.vars)
"""
Explanation: <p>Worked example: individual differences and the number of surviving seeds</p>
Consider a certain plant, and let yi be the number of surviving seeds of individual i; yi ranges from 0 to 8. Below is a histogram of the data.
End of explanation
"""
data = pd.read_csv("http://hosho.ees.hokudai.ac.jp/~kubo/stat/iwanamibook/fig/hbm/data7a.csv")

plt.bar(range(9), data.groupby('y').sum().id)

data.groupby('y').sum().T
"""
Explanation: Suppose that seed survival could be explained by a binomial distribution with a common survival probability q.
End of explanation
"""
plt.hist(np.random.normal(0, 100, 1000))
"""
Explanation: <p>That model cannot explain the actual data,</p>
...so we use a GLMM, which can take individual differences into account.
<p>logit(qi) = β + ri</p>
The intercept β is a parameter common to all individuals; ri is the individual effect and follows a normal distribution with mean 0 and standard deviation s.
posterior ∝ p(Y|β, {ri}) × prior
<p>A non-informative prior is specified for β:</p>
p(β) = 1/√(2π×100^2) × exp(-β^2/(2×100^2))
End of explanation
"""
Y = np.array(data.y)[:6]
"""
Explanation: <p>For the prior of ri, a normal distribution with mean 0 and standard deviation s is assumed:</p>
p(ri|s) = 1/√(2π×s^2) × exp(-ri^2/(2×s^2))
<p>A non-informative prior is specified for s:</p>
p(s) = (continuous uniform distribution from 0 to 10^4)
End of explanation
"""
def invlogit(v):
    return T.exp(v)/(T.exp(v) + 1)

with pm.Model() as model_hier:
    s = pm.Uniform('s', 0, 1.0E+2)
    beta = pm.Normal('beta', 0, 1.0E+2)
    r = pm.Normal('r', 0, s, shape=len(Y))
    q = 
invlogit(beta+r)
    y = pm.Binomial('y', 8, q, observed=Y) #p(q|Y)
    step = pm.Slice([s, beta, r])
    trace_hier = pm.sample(1000, step)

with model_hier:
    pm.traceplot(trace_hier, model_hier.vars)

summary(trace_hier)

trace_hier

x_sample = np.random.normal(loc=1.0, scale=1.0, size=1000)

with pm.Model() as model:
    mu = pm.Normal('mu', mu=0., sd=0.1)
    x = pm.Normal('x', mu=mu, sd=1., observed=x_sample)

with model:
    start = pm.find_MAP()
    step = pm.NUTS()
    trace = pm.sample(10000, step, start)

pm.traceplot(trace)
plt.savefig("result1.jpg")

ndims = 2
nobs = 20

n = 1000
y_sample = np.random.binomial(1, 0.5, size=(n,))
x_sample=np.empty(n)
x_sample[y_sample==0] = np.random.normal(-1, 1, size=(n, ))[y_sample==0]
x_sample[y_sample==1] = np.random.normal(1, 1, size=(n, ))[y_sample==1]

with pm.Model() as model:
    p = pm.Beta('p', alpha=1.0, beta=1.0)
    y = pm.Bernoulli('y', p=p, observed=y_sample)
    mu0 = pm.Normal('mu0', mu=0., sd=1.)
    mu1 = pm.Normal('mu1', mu=0., sd=1.)
    mu = pm.Deterministic('mu', mu0 * (1-y_sample) + mu1 * y_sample)
    x = pm.Normal('x', mu=mu, sd=1., observed=x_sample)

with model:
    start = pm.find_MAP()
    step = pm.NUTS()
    trace = pm.sample(10000, step, start)

pm.traceplot(trace)
plt.savefig("result2.jpg")
"""
Explanation: Since y has many entries and using all of them would take too long, only 6 individuals are selected.
End of explanation
"""
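The inverse-logit link used in the model above is easy to sanity-check in plain NumPy; the intercept and individual effects below are illustrative numbers, not fitted values:

```python
import numpy as np

def invlogit(v):
    """Inverse logit: maps any real number into a probability in (0, 1)."""
    return np.exp(v) / (np.exp(v) + 1)

beta = 0.0                        # common intercept (illustrative)
r = np.array([-2.0, 0.0, 2.0])    # individual effects (illustrative)
q = invlogit(beta + r)            # per-individual survival probability
expected_seeds = 8 * q            # mean of Binomial(8, q)
print(q)
print(expected_seeds)             # the middle individual expects 4 of 8 seeds
```

A larger individual effect ri pushes the survival probability toward 1, which is exactly how the GLMM lets a single intercept coexist with very different individuals.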
oroszl/szamprob
notebooks/Package04/Interact.ipynb
gpl-3.0
%pylab inline
from ipywidgets import * # the package responsible for interactivity
"""
Explanation: Interactive functions and figures
Below we examine a simple way to make our Python functions interactive! The ipywidgets package will be our helper in this!
End of explanation
"""
t=linspace(0,2*pi,100);
plot(t,sin(t))
"""
Explanation: By now we already know how to plot a mathematical function:
End of explanation
"""
def freki(omega):
    plot(t,sin(omega*t))
freki(2.0)
"""
Explanation: Let us write a function that draws a signal with a given frequency!
End of explanation
"""
interact(freki,omega=(0,10,0.1));
"""
Explanation: Now comes the magic! With the interact() function we can make the function defined above interactive!
End of explanation
"""
def func(x):
    print(x)
"""
Explanation: Let us take a closer look at how this interact() construct works! To do so, let us first define a very simple function!
End of explanation
"""
interact(func,x=(0,10));
"""
Explanation: interact is a function that expects a function as its first parameter and expects that function's input parameters as keyword arguments! What it returns is an interactive widget, which can be of many kinds, but fundamentally its job is to drive the func function: it hands func an input parameter, runs it, and then waits for the user to change the state again.
If we give whole numbers written in parentheses as the keyword argument, we get a slider running over the integers:
End of explanation
"""
interact(func,x=False);
"""
Explanation: If we give a bool value, we get a tickable checkbox:
End of explanation
"""
interact(func,x=['hétfő','kedd','szerda']);
"""
Explanation: If we give a general list, we get a dropdown menu:
End of explanation
"""
interact(func,x=(0,10,0.1));
"""
Explanation: If the numbers written in the plain parentheses are not all integers (at least one of them is not), we get a float slider:
End of explanation
"""
interact(func,x=IntSlider(min=0,max=10,step=2,value=2,description='egesz szamos csuszka x='));
interact(func,x=FloatSlider(min=0,max=10,step=0.01,value=2,description='float szamos csuszka x='));
interact(func,x=Dropdown(options=['Hétfő','Kedd','Szerda'],description='legörülő x='));
interact(func,x=Checkbox());
interact(func,x=Text());
"""
Explanation: If we want to specify precisely what kind of interactivity we want, we can do so as follows:
integer slider$\rightarrow$IntSlider()
float slider$\rightarrow$FloatSlider()
dropdown menu$\rightarrow$Dropdown()
checkbox$\rightarrow$Checkbox()
text box$\rightarrow$Text()
A few examples below illustrate this:
End of explanation
"""
interact_manual(func,x=(0,10));
"""
Explanation: If a function takes a long time to evaluate, it is worth using interact_manual instead of interact. This runs the function only when we press the button that appears.
End of explanation """ t=linspace(0,2*pi,100); def oszci(A,omega,phi,szin): plot(t,A*sin(omega*t+phi),color=szin) plot(pi,A*sin(omega*pi+phi),'o') xlim(0,2*pi) ylim(-3,3) xlabel('$t$',fontsize=20) ylabel(r'$A\,\sin(\omega t+\varphi)$',fontsize=20) grid(True) interact(oszci, A =FloatSlider(min=1,max=2,step=0.1,value=2,description='A'), omega=FloatSlider(min=0,max=10,step=0.1,value=2,description=r'$\omega$'), phi =FloatSlider(min=0,max=2*pi,step=0.1,value=0,description=r'$\varphi$'), szin =Dropdown(options=['red','green','blue','darkcyan'],description='szín')); """ Explanation: More information about widgets can be found here. Finally, let's look at an interact with several variables! End of explanation """
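The abbreviation rules described above (integer tuple gives an integer slider, float tuple a float slider, bool a checkbox, list a dropdown, string a text box) can be sketched as a small dispatcher. This is a plain-Python illustration of the inference logic, not ipywidgets' actual implementation; the function name is our own:

```python
def infer_widget(abbrev):
    """Map an interact() abbreviation to a widget kind, mirroring the rules above."""
    if isinstance(abbrev, bool):      # check bool before int: bool is a subclass of int
        return "Checkbox"
    if isinstance(abbrev, str):
        return "Text"
    if isinstance(abbrev, (list, dict)):
        return "Dropdown"
    if isinstance(abbrev, tuple):
        # (min, max) or (min, max, step): all ints -> IntSlider, otherwise FloatSlider
        return "IntSlider" if all(isinstance(v, int) for v in abbrev) else "FloatSlider"
    raise TypeError("no widget rule for %r" % (abbrev,))

print(infer_widget((0, 10)))       # IntSlider
print(infer_widget((0, 10, 0.1)))  # FloatSlider
print(infer_widget(False))         # Checkbox
print(infer_widget(['a', 'b']))    # Dropdown
```

Note that the bool check must come first, since in Python `isinstance(True, int)` is also true.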
johntruckenbrodt/pyroSAR
datacube_prepare.ipynb
mit
from pyroSAR.datacube_util import Product, Dataset from pyroSAR.ancillary import groupby, find_datasets # define a directory containing processed SAR scenes dir = '/path/to/some/data' # define a name for the product YML; this is used for creating a new product in the datacube yml_product = './product_def.yml' # define a directory for storing the indexing YMLs; these are used to index the dataset in the datacube yml_index_outdir = './yml_indexing' # define a name for the ingestion YML; this is used to ingest the indexed datasets into the datacube yml_ingest = './ingestion.yml' # product description product_name_indexed = 'S1_GRD_index' product_name_ingested = 'S1_GRD_ingest' product_type = 'gamma0' description = 'this is just some test' # define the units of the dataset measurements (i.e. polarizations) units = 'backscatter' # alternatively this could be a dictionary: # units = {'VV': 'backscatter VV', 'VH': 'backscatter VH'} ingest_location = './ingest' # find pyroSAR files by metadata attributes files = find_datasets(dir, recursive=True, sensor=('S1A', 'S1B'), acquisition_mode='IW') # group the found files by their file basenames # files with the same basename are considered to belong to the same dataset grouped = groupby(files, 'outname_base') print(len(files)) print(len(grouped)) """ Explanation: This is a quick notebook to demonstrate the pyroSAR functionality for importing processed SAR scenes into an Open Data Cube End of explanation """ # create a new product and add the collected datasets to it # alternatively, an existing product can be used by providing the corresponding product YML file with Product(name=product_name_indexed, product_type=product_type, description=description) as prod: for dataset in grouped: with Dataset(dataset, units=units) as ds: # add the datasets to the product # this will generalize the metadata from those datasets to measurement descriptions, # which define the product definition prod.add(ds) # parse datacube indexing YMLs from 
# product and dataset metadata prod.export_indexing_yml(ds, yml_index_outdir) # write the product YML prod.write(yml_product) # print the product metadata, which is written to the product YML print(prod) """ Explanation: In the next step we create a new product, add the grouped datasets to it and create YML files for indexing the datasets in the cube. End of explanation """ with Product(yml_product) as prod: prod.export_ingestion_yml(yml_ingest, product_name_ingested, ingest_location, chunking={'x': 512, 'y': 512, 'time': 1}) """ Explanation: Now that we have a YML file for creating a new product and individual YML files for indexing the datasets, we can create a last YML file, which will ingest the indexed datasets into the cube. For this a new product is created and the files are converted to NetCDF, which are optimised for usage in the cube. The location of those NetCDF files also needs to be defined. End of explanation """
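The grouping step above, where files sharing a basename are treated as one dataset, can be illustrated with a plain-Python stand-in. pyroSAR's groupby works on parsed metadata attributes; the naming scheme and function below are invented purely for illustration:

```python
from collections import defaultdict
import os

def group_by_basename(files):
    """Group file paths whose names share a stem, e.g. per-polarization layers
    of the same scene (hypothetical <scene>_<polarization>.tif naming)."""
    grouped = defaultdict(list)
    for f in files:
        stem = os.path.basename(f)
        key = stem.rsplit('_', 1)[0]  # drop the trailing _VV / _VH part
        grouped[key].append(f)
    return dict(grouped)

files = ['S1A_IW_20180101_VV.tif', 'S1A_IW_20180101_VH.tif', 'S1A_IW_20180113_VV.tif']
groups = group_by_basename(files)
print(len(files), len(groups))  # 3 2
```

Three files collapse into two datasets, which is exactly the relationship the `print(len(files))` / `print(len(grouped))` pair above makes visible.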
fionapigott/Data-Science-45min-Intros
bandit-algorithms-101/bandit-algorithms-101.ipynb
unlicense
from IPython.display import Image Image(filename='img/treat_aud_reward.jpg') import matplotlib import matplotlib.pyplot as plt import numpy as np import seaborn as sns import pandas as pd from numpy.random import binomial from ggplot import * import random import sys plt.figure(figsize=(6,6),dpi=80); %matplotlib inline """ Explanation: Bandit Algorithms 101 2015 August 28 See imports below for requirements In the same directory you cloned Data-Science-45min-Intros, you should also clone the BanditBook repository from https://github.com/johnmyleswhite/BanditsBook Agenda A simulation approach to building intuition about A/B testing Practicalities of testing and operation Optimizing outcomes with multiple options and different payoffs per option 1. A simulation approach to building intuition about A/B testing This section isn't a cookbook for A/B testing. Rather, I am pointing to some key aspects of how we design and analyse A/B tests that will be useful when we get to the section on bandit algorithms. When thinking about A/B tests, let's focus on 3 key components: Treatment (e.g. content presented, vaccination, etc.) Reward (e.g. impression, click, proceeds of sale, immunity) Audience and context (The context is an arbitrary but specific set of circumstances under which the audience experiences the treatment.) End of explanation """ Image(filename='img/ab.jpg') """ Explanation: Simulating A/B testing to build intuition Situation: * 2 treatments * 1 uniform audience, split into two random groups * reward is merely the outcome Simulation: * discrete outcomes * 50/50 chances of outcome * Many trials End of explanation """ Image(filename='img/a.jpg') # This is A/ testing! 
# This is the result of 1 arm, 100 trials df = pd.DataFrame({"coin_toss":binomial(1,0.5,100)}) df.hist() # Everyone got the same treatment, this is the distribution of the outcome # reward is the total height of the right-hand bar plt.show() """ Explanation: Each test is like flipping a fair coin N times End of explanation """ # every sample is 0/1, heads or tails df.head() # now with a high probability of heads df = pd.DataFrame({"coin_toss":binomial(1,0.6,100)}) df.hist() plt.show() # Compare the variability across many different experiments # of 100 flips each (variability of the mean) df = pd.DataFrame({"coin_%i"%i:binomial(1,0.5,100) for i in range(20)}) df.hist() plt.show() # Can we distinguish a small difference in probability? df = pd.DataFrame({"coin_%i"%i:binomial(1,0.52,100) for i in range(20)}) df.hist() plt.show() """ Explanation: Run the cell above a few times. Can you easily make the case that the average reward is 50? End of explanation """ # 1 arm payoff = [-0.1,0.5] a = np.bincount(binomial(1,0.5,100)) print "Number of 0s and 1s:", a print "Total reward with payoff specified =", np.dot(a, payoff) """ Explanation: Rewards of the actions might be different What if reward for choice 0 = -0.1 and choice 1 = 0.5? End of explanation """ # 2-arm, equal unity reward per coin # (4 outcomes but 1,0=0,1 with this payoff vector) payoff = [0,1,2] a = np.bincount(binomial(2,0.5,100)) print a print np.dot(a, payoff) """ Explanation: Add treatment B to make it more interesting Now we want to compare the rewards for individuals in the audience choosing A or B. 
End of explanation """ payoff1=[0,1] reward1 = np.dot(np.bincount(binomial(1,0.5,100)), payoff1) print "Arm A reward = ", reward1 payoff2=[0,1.05] reward2 = np.dot(np.bincount(binomial(1,0.5,100)), payoff2) print "Arm B reward = ", reward2 total_reward = reward1 + reward2 print "Total reward for arms A and B = ", total_reward """ Explanation: Simulating multiple arms with different payoffs But more often, * Arm A (outcome 1/0) has a reward (e.g. 1.0 below) and * Arm B (outcome 1/0) has a different reward (e.g. 1.05 below) For example, imagine the case of tweet engagement. A user can click on a URL in a tweet (outcome 1/0) A user can retweet (outcome 1/0) Rewards for a site visit might be 10x those of a retweet End of explanation """ def a_b_test(one_payoff=[1, 1.01]): # assume payoff for outcome 0 is 0 reward1 = np.bincount(binomial(1,0.5,100))[1] * one_payoff[0] reward2 = np.bincount(binomial(1,0.5,100))[1] * one_payoff[1] return reward1, reward2, reward1 + reward2, reward1-reward2 n_tests = 1000 sim = np.array([a_b_test() for i in range(n_tests)]) df = pd.DataFrame(sim, columns=["t1", "t2", "tot", "diff"]) print "Number of tests in which Arm B won (expect > {} because of payoff) = {}".format( n_tests/2 , len(df[df["diff"] <= 0.0])) df.hist() plt.show() """ Explanation: Why worry about the total reward? I thought we wanted to know if A > B? Hold that thought for a couple more cells... From now on, assume reward = 0 for outcome 0 to keep things a little simpler. Everything we do can be generalized to a reward vector as above. How easy is it to differentiate two arms with different payoffs? Let's ask about choosing a winner. 
End of explanation """ def a_b_test(ps=[0.5, 0.51], one_payoff=[1, 1]): reward1 = np.bincount(binomial(1,ps[0],100))[1] * one_payoff[0] reward2 = np.bincount(binomial(1,ps[1],100))[1] * one_payoff[1] return reward1, reward2, reward1 + reward2, reward1-reward2 n_tests= 100 sim = np.array([a_b_test() for i in range(n_tests)]) df = pd.DataFrame(sim, columns=["t1", "t2", "tot", "diff"]) print "Number of tests in which Arm B won (expect > {} because of probability) = {}".format( n_tests/2 , len(df[df["diff"] <= 0.0])) df.hist() plt.show() """ Explanation: Now is the time to bring in your power and significance testing expertise. We are going to continue building intuition through simulation. Can we differentiate two arms with different probabilities? End of explanation """ Image(filename='img/abcd.jpg') # repeating what we did before with equal payoff, more arms # remember the degenerate outcomes df = pd.DataFrame({"tot_reward":binomial(2,0.5,100)}) df.hist() plt.show() # ok, now with 4 df = pd.DataFrame({"tot_reward":binomial(4,0.5,100)}) df.hist() plt.show() """ Explanation: More Arms End of explanation """ # a little more practice with total reward distribution trials = 100 probabilities = [0.1, 0.1, 0.9] reward = np.zeros(trials) for m in probabilities: # equal rewards of 1 or 0 reward += binomial(1,m,trials) df = pd.DataFrame({"reward":reward, "fair__uniform_reward":binomial(3,0.5,trials)}) df.hist() plt.show() """ Explanation: Quick aside Easy to simulate many arms with different probabilities when we have 0/1 reward. Maybe interesting to compare to uniform probability case? End of explanation """ sys.path.append('../../BanditsBook/python') from core import * """ Explanation: 2. Practicalities of testing and operation Explore vs Exploit? A marketer is going to have a hard time waiting for rigorous testing as winners appear to emerge Implement online system vs. testing system Chase successes vs. 
analyzing failures So maybe set some new objectives instead of asking whether A>B: * Want to automate balance of explore and exploit and run continuously * Implement one system Careful: * Some danger in forgetting about significance and power Bandit Book Utilities from "Bandit Algorithms" by John Myles White Three utilities: * arms * algorithms * monte carlo testing framework End of explanation """ random.seed(1) # Mean (arm probabilities) (Bernoulli) means = [0.1, 0.1, 0.1, 0.1, 0.9] # Multiple arms! n_arms = len(means) random.shuffle(means) arms = map(lambda (mu): BernoulliArm(mu), means) print("Best arm is " + str(ind_max(means))) t_horizon = 250 n_sims = 1000 data = [] for epsilon in [0.1, 0.2, 0.3, 0.4, 0.5]: algo = EpsilonGreedy(epsilon, [], []) algo.initialize(n_arms) # results are column oriented # simulation_num, time, chosen arm, reward, cumulative reward results = test_algorithm(algo, arms, n_sims, t_horizon) results.append([epsilon]*len(results[0])) data.extend(np.array(results).T) df = pd.DataFrame(data , columns = ["Sim" , "T" , "ChosenArm" , "Reward" , "CumulativeReward" , "Epsilon"]) df.head() a=df.groupby(["Epsilon", "T"]).mean().reset_index() a.head() ggplot(aes(x="T",y="Reward", color="Epsilon"), data=a) + geom_line() ggplot(aes(x="T",y="CumulativeReward", color="Epsilon"), data=a) + geom_line() """ Explanation: 3. Optimizing outcomes with multiple options with different payoffs How can we explore arms and exploit the best arm more often, but still explore? Answer 1: occasionally, we randomly explore losers. Epsilon Greedy Startup by visiting all the arms Keep track of current probabilities for each arm based on visits so far After startup, choose between exploring and exploiting for each new individual from the audience, exploring with probability epsilon Notes: * epsilon is the fraction of exploration * randomize everything all the time What's a good value for $\epsilon$? 
Depends on number of arms Depends on individuals per time and total time What are these parameters when one of the options is 9 times better than all of the others? Needs a simulation! (To keep it simple outcome 1/0 has reward 1/0.) End of explanation """ t_horizon = 250 n_sims = 1000 algo = AnnealingSoftmax([], []) algo.initialize(n_arms) data = np.array(test_algorithm(algo, arms, n_sims, t_horizon)).T df = pd.DataFrame(data) #df.head() df.columns = ["Sim", "T", "ChosenArm", "Reward", "CumulativeReward"] df.head() a=df.groupby(["T"]).mean().reset_index() a.head() ggplot(aes(x="T",y="Reward", color="Sim"), data=a) + geom_line() ggplot(aes(x="T",y="CumulativeReward", color="Sim"), data=a) + geom_line() """ Explanation: Annealing Softmax Upgrades to $\epsilon$-Greedy: * Need to run more experiments if rewards appear to be nearly equal * Keep track of results for exploration as well as exploitation We are tempted to choose each arm in proportion to its current value, i.e.: $p(A) \propto \frac{rA}{rA + rB}$ $p(B) \propto \frac{rB}{rA + rB}$ Remember Boltzmann, and about adding a temperature, $\tau$: $p(A) \propto \frac{\exp(rA/\tau)}{\exp(rA/\tau) + \exp(rB/\tau)}$ $p(B) \propto \frac{\exp(rB/\tau)}{\exp(rA/\tau) + \exp(rB/\tau)}$ And what is annealing? 
$\tau = 0$ is the deterministic case of winner takes all $\tau = \infty$ is all random, all the time Let the temperature go to zero over time to settle into the state slowly (adiabatically) End of explanation """ t_horizon = 250 n_sims = 1000 data = [] for alpha in [0.1, 0.3, 0.5, 0.7, 0.9]: algo = UCB2(alpha, [], []) algo.initialize(n_arms) results = test_algorithm(algo, arms, n_sims, t_horizon) results.append([alpha]*len(results[0])) data.extend(np.array(results).T) df = pd.DataFrame(data, columns = ["Sim", "T", "ChosenArm", "Reward", "CumulativeReward", "Alpha"]) df.head() a=df.groupby(["Alpha", "T"]).mean().reset_index() a.head() ggplot(aes(x="T",y="Reward", color="Alpha"), data=a) + geom_line() ggplot(aes(x="T",y="CumulativeReward", color="Alpha"), data=a) + geom_line() """ Explanation: UCB2 Add a confidence measure to our estimates of averages! End of explanation """
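The epsilon-greedy policy simulated above with the BanditsBook helpers can also be written from scratch in a few lines. This is a minimal sketch (our own class, not the library's EpsilonGreedy), played against the same Bernoulli arm means used above:

```python
import random

class SimpleEpsilonGreedy:
    """Minimal sketch of the epsilon-greedy policy described above."""
    def __init__(self, epsilon, n_arms):
        self.epsilon = epsilon
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms  # running mean reward per arm

    def select_arm(self):
        if random.random() < self.epsilon:            # explore a random arm
            return random.randrange(len(self.counts))
        return self.values.index(max(self.values))    # exploit the current best

    def update(self, arm, reward):
        self.counts[arm] += 1
        # incremental mean: new = old + (reward - old) / n
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

random.seed(1)
means = [0.1, 0.1, 0.1, 0.1, 0.9]  # arm 4 is the best arm
algo = SimpleEpsilonGreedy(0.1, len(means))
for _ in range(2000):
    arm = algo.select_arm()
    reward = 1.0 if random.random() < means[arm] else 0.0
    algo.update(arm, reward)
print(algo.values.index(max(algo.values)))
```

With 2000 pulls and a 0.9-vs-0.1 gap, the estimate for arm 4 should dominate; the exploration fraction epsilon is what keeps the other arms from being written off on early bad luck.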
molpopgen/fwdpy
docs/examples/views.ipynb
gpl-3.0
from __future__ import print_function import fwdpy as fp import pandas as pd from background_selection_setup import * """ Explanation: Example of taking 'views' from simulated populations End of explanation """ mutations = [fp.view_mutations(i) for i in pops] """ Explanation: Get the mutations that are segregating in each population: End of explanation """ for i in mutations: print(i[0]) """ Explanation: Look at the raw data in the first element of each list: End of explanation """ mutations2 = [pd.DataFrame(i) for i in mutations] for i in mutations2: print(i.head()) """ Explanation: Let's make that nicer, and convert each list of dictionaries to a Pandas DataFrame object: End of explanation """ nmuts = [i[i.neutral == True] for i in mutations2] for i in nmuts: print(i.head()) """ Explanation: The columns are: g = the generation when the mutation first arose h = the dominance n = the number of copies of the mutation in the population. You can use this to get its frequency. neutral = a boolean pos = the position of the mutation s = selection coefficient/effect size label = The label assigned to a mutation. These labels can be associated with Regions and Sregions. Here, 1 is a mutation from the neutral region, 2 a selected mutation from the 'left' region and 3 a selected mutation from the 'right' region. We can do all the usual subsetting, etc., using regular pandas tricks. For example, let's get the neutral mutations for each population: End of explanation """ gametes = [fp.view_gametes(i) for i in pops] """ Explanation: We can also take views of gametes: End of explanation """ for i in gametes: print(i[0]) """ Explanation: The format is really ugly. Each gamete is a dict with two elements: 'neutral' is a list of mutations not affecting fitness. The format is the same as for the mutation views above. 'selected' is a list of mutations that do affect fitness. The format is the same as for the mutation views above. 
End of explanation """ smuts = [i['selected'] for i in gametes[0]] """ Explanation: OK, let's clean that up. We'll focus on the selected mutations for each individual, and turn everything into a pd.DataFrame. We're only going to do this for the first simulated population. End of explanation """ smutsdf = pd.DataFrame() ind=0 ##Add the non-empty individuals to the df for i in smuts: if len(i)>0: smutsdf = pd.concat([smutsdf,pd.DataFrame(i,index=[ind]*len(i))]) ind += 1 smutsdf.head() """ Explanation: We now have a list of lists stored in 'smuts'. End of explanation """ dips = [fp.view_diploids(i,[0,1]) for i in pops] """ Explanation: That's much better. We can use the index to figure out which individual has which mutations, and their effect sizes, etc. Finally, we can also take views of diploids. Let's get the first two diploids in each population: End of explanation """ for key in dips[0][0]: print(key) """ Explanation: Again, the format here is ugly. Each diploid view is a dictionary: End of explanation """
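The flattening step above, turning per-individual lists of mutation dicts into one table, can be done by building a plain list of rows first and constructing the DataFrame once, instead of calling pd.concat inside the loop. A sketch with invented mutation records (note one design difference: the loop above increments ind only for non-empty individuals, whereas here each record keeps the true position of its carrier):

```python
def flatten_views(per_individual):
    """Flatten per-individual lists of mutation records into rows tagged with
    the carrier's index, skipping individuals with no selected mutations."""
    rows = []
    for ind, records in enumerate(per_individual):
        for rec in records:
            rows.append(dict(rec, individual=ind))
    return rows

# hypothetical 'selected' views for three gametes; the middle one carries nothing
smuts_example = [[{'pos': 0.12, 's': -0.01}],
                 [],
                 [{'pos': 0.7, 's': -0.2}, {'pos': 0.9, 's': -0.05}]]
rows = flatten_views(smuts_example)
print(len(rows))               # 3
print(rows[0]['individual'])   # 0
```

The resulting list of dicts can be handed to pd.DataFrame(rows) in a single call, which avoids the repeated-concat cost as populations grow.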
thewtex/SimpleITK-Notebooks
55_VH_Resample.ipynb
apache-2.0
from __future__ import print_function import matplotlib.pyplot as plt %matplotlib inline import SimpleITK as sitk print(sitk.Version()) from myshow import myshow # Download data to work on from downloaddata import fetch_data as fdata OUTPUT_DIR = "Output" """ Explanation: Resampling an Image onto Another's Physical Space The purpose of this Notebook is to demonstrate how the physical space described by the meta-data is used when resampling onto a reference image. End of explanation """ fixed = sitk.ReadImage(fdata("vm_head_rgb.mha")) moving = sitk.ReadImage(fdata("vm_head_mri.mha")) print(fixed.GetSize()) print(fixed.GetOrigin()) print(fixed.GetSpacing()) print(fixed.GetDirection()) print(moving.GetSize()) print(moving.GetOrigin()) print(moving.GetSpacing()) print(moving.GetDirection()) import sys resample = sitk.ResampleImageFilter() resample.SetReferenceImage(fixed) resample.SetInterpolator(sitk.sitkBSpline) resample.AddCommand(sitk.sitkProgressEvent, lambda: print("\rProgress: {0:03.1f}%...".format(100*resample.GetProgress()),end='')) resample.AddCommand(sitk.sitkProgressEvent, lambda: sys.stdout.flush()) out = resample.Execute(moving) """ Explanation: Load the RGB cryosectioning of the Visible Human Male dataset. The data is about 1GB so this may take several seconds, or a bit longer if this is the first time the data is downloaded from the midas repository. End of explanation """ myshow(out) #combine the two images using a checkerboard pattern: #because the moving image is single channel with a high dynamic range we rescale it to [0,255] and repeat #the channel 3 times vis = sitk.CheckerBoard(fixed,sitk.Compose([sitk.Cast(sitk.RescaleIntensity(out),sitk.sitkUInt8)]*3), checkerPattern=[15,10,1]) myshow(vis) """ Explanation: Because we are resampling the moving image using the physical location of the fixed image without any transformation (identity), most of the resulting volume is empty. The image content appears in slice 57 and below. 
End of explanation """ import os sitk.WriteImage(vis, os.path.join(OUTPUT_DIR, "example_resample_vis.mha")) temp = sitk.Shrink(vis,[3,3,2]) sitk.WriteImage(temp, [os.path.join(OUTPUT_DIR,"r{0:03d}.jpg".format(i)) for i in range(temp.GetSize()[2])]) """ Explanation: Write the image to the Output directory: (1) original as a single image volume and (2) as a series of smaller JPEG images which can be constructed into an animated GIF. End of explanation """
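The resampling above works because the reference image fixes an index-to-physical mapping for the output grid. For an axis-aligned image (identity direction matrix assumed) that mapping is just origin + spacing * index; the helper names below are ours, not SimpleITK's:

```python
def index_to_physical(index, origin, spacing):
    """Physical location of a voxel index on an axis-aligned grid.
    SetReferenceImage fixes exactly these quantities for the resampler output."""
    return tuple(o + s * i for i, o, s in zip(index, origin, spacing))

def physical_to_continuous_index(point, origin, spacing):
    """Inverse mapping: where a physical point falls in index space."""
    return tuple((p - o) / s for p, o, s in zip(point, origin, spacing))

origin, spacing = (10.0, -5.0), (2.0, 0.5)
p = index_to_physical((3, 4), origin, spacing)
print(p)                                                 # (16.0, -3.0)
print(physical_to_continuous_index(p, origin, spacing))  # (3.0, 4.0)
```

This round trip is why most of the resampled MRI volume comes out empty: the fixed image's physical extent simply does not overlap much of the moving image when the transform is identity.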
buzmakov/tomography_scripts
tomo/yaivan/empty_frames.ipynb
mit
import cv2 import numpy as np import matplotlib.pyplot as plt empty = plt.imread(data_root+'first_projection.tif').astype('float32') corr = plt.imread(data_root+'first_projection_corr.tif').astype('float32') tomo = plt.imread(data_root+'Raw/pin_2.24um_0000.tif').astype('float32') white = np.fromfile(data_root+'white0202_2016-02-11.ffr',dtype='<u2').astype('float32').reshape((2096, 4000)) black_1 = np.fromfile(data_root+'black0101_2016-02-09.ffr',dtype='<u2').astype('float32').reshape((2096, 4000)) black_2 = np.fromfile(data_root+'black0201_2016-02-16.ffr',dtype='<u2').astype('float32').reshape((2096, 4000)) def show_frame(data, label): data_filtered = cv2.medianBlur(data,5) plt.figure(figsize=(12,10)) plt.imshow(data) plt.title(label) plt.colorbar(orientation='horizontal') plt.show() plt.figure(figsize=(12,8)) plt.plot(data[1000]) plt.grid(True) plt.title(label+': central cut') plt.show() plt.figure(figsize=(12,10)) plt.imshow(data_filtered) plt.colorbar(orientation='horizontal') plt.title(label+' filtered') plt.show() plt.figure(figsize=(12,8)) plt.plot(data_filtered[1000]) plt.grid(True) plt.title(label+' filtered: central cut') plt.show() """ Explanation: List of files: * empty - a frame acquired from the tomograph without any corrections * corr - the same image as empty, but with correction applied * tomo - the same as empty, but acquired during the actual experiment * white - the empty beam used for normalizing the images (acquired the same day, during calibration) * black_1, black_2 - dark-current frames, acquired at different times End of explanation """ show_frame(white, 'White') """ Explanation: Here is the beam without an object. The axes are detector counts. Here and below, the first image and its central cut are shown as-is; the second image has median filtering applied (to remove scintillator noise). 
End of explanation """ show_frame(black_1, 'Black_1') """ Explanation: Here is dark current 1; the axes are detector counts End of explanation """ show_frame(black_2, 'Black_2') """ Explanation: Here is dark current 2; the axes are detector counts End of explanation """ show_frame(black_1 - black_2, 'Black_1 - Black_2') """ Explanation: Here is the difference between the two dark-current frames End of explanation """ show_frame(empty, 'Empty') """ Explanation: Here is the completely uncorrected image End of explanation """ show_frame(corr, 'Corr') """ Explanation: Here is the image normalized by the tomograph itself. It is odd that the maximum of the central cut is not at 65535 (2^16) but at roughly 65535*0.8. Does this mean that during reconstruction we should normalize not by 65535 when taking the logarithm, but by the maximum over the sinogram? End of explanation """ show_frame(tomo, 'tomo image') """ Explanation: Here is an image from the tomographic experiment End of explanation """ show_frame(corr - tomo, 'corr / tomo image') """ Explanation: Here is the difference between the images normalized by the tomograph in manual mode and during the experiment. They are apparently slightly shifted relative to each other End of explanation """ white_norm = (white - black_1) white_norm[white_norm<1] = 1 empty_norm = (empty/16 - black_1) empty_norm[empty_norm<1] =1 my_corr = empty_norm/white_norm my_corr[my_corr>1.1] = 1.1 show_frame(my_corr, 'my_corr image') """ Explanation: Here is my attempt to normalize the image. Traces of the direct beam are visible (the grid in the background), but this is probably because the direct beam depends on the detector-source distance (spherical intensity falloff), and the direct beam was measured at a different distance. Moreover, the intensity of the direct beam was apparently lower (by a factor of 16?) than during the experiment. (This needs to be checked.) End of explanation """ show_frame(my_corr*65535*0.87/corr, 'my_corr/corr image') """ Explanation: The beam corrected by us, divided by the one corrected by SkyScan. They seem to agree, up to noise. 
It follows that the normalization is performed according to the formula $$Signal=k\times 2^{16}\frac{I_1-dark}{I_0-dark}, k=0.87$$ End of explanation """
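The recovered formula can be applied directly. Below is a minimal flat/dark-field correction sketch on plain lists with made-up counts (the real code above works on numpy arrays; the function name and the clamping constant are our own, mirroring the white_norm[white_norm<1]=1 guard):

```python
def normalize(raw, white, dark, k=0.87, full_scale=2**16, eps=1.0):
    """Flat-field / dark-field correction following
    Signal = k * 2**16 * (I1 - dark) / (I0 - dark)."""
    num = [max(r - d, 0.0) for r, d in zip(raw, dark)]
    den = [max(w - d, eps) for w, d in zip(white, dark)]  # avoid division by ~0
    return [k * full_scale * n / d for n, d in zip(num, den)]

raw   = [150.0, 300.0, 80.0]   # sample projection counts (made up)
white = [400.0, 400.0, 400.0]  # empty-beam counts
dark  = [100.0, 100.0, 100.0]  # dark-current counts
print(normalize(raw, white, dark))
```

A pixel whose raw count falls below the dark level maps to zero, and pixels where the white frame barely exceeds the dark frame are clamped rather than amplified.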
ini-python-course/ss15
notebooks/Avoiding numerical pitfalls.ipynb
mit
print repr(2-1.8) print str(2-1.8) """ Explanation: Avoiding numerical pitfalls The harmonic series is convergent in floating point arithmetic \begin{align} \sum_{n=1}^{\infty} \; \frac{1}{n} \quad = \quad 34.1220356680478715816207113675773143768310546875 \end{align} (usually it is famously known to diverge to $\infty$) References Proof of divergence: http://www.free-education-resources.com/www.mathematik.net/harmonische-reihen/hr1s20.htm Proof of convergence for floating point: http://www.maths.tcd.ie/pub/ims/bull71/recipnote.pdf Resolution: http://fredrik-j.blogspot.de/2009/02/how-not-to-compute-harmonic-numbers.html Basics References http://www.johndcook.com/blog/2009/04/06/numbers-are-a-leaky-abstraction/ http://www.codeproject.com/Articles/29637/Five-Tips-for-Floating-Point-Programming http://www.codeproject.com/Articles/25294/Avoiding-Overflow-Underflow-and-Loss-of-Precision https://docs.python.org/2/tutorial/floatingpoint.html In floating point arithmetic, subtraction is rather inaccurate. Observe that 2-1.8 is not 0.2. However, Python catches this well known phenomenon in its str-method and converts the output to a convenient number. The following two lines illustrate not only numeric subtraction inaccuracy, but also the difference between repr and str. repr is designed to represent the value accurately, while str is intended for a convenient output format. 
End of explanation """ print (2-1.8 == 0.2) #Python-hack that actually works surprisingly well: print (str(2-1.8) == str(0.2)) #Recommended method with control over matching-precision: threshold = 0.000000001 print (((2-1.8) - 0.2)**2 < threshold) """ Explanation: Just to mention for completeness: Don't use exact equals-operator on floats: End of explanation """ import numpy as np import scipy.optimize as opt a = 3.0 b = 10e5 c = 5.0 pol = np.polynomial.Polynomial((c, b, a)) def f(x): return a*x**2+b*x+c #return (a*x+b)*x+c def f1(x): return 2*a*x+b def f2(x): return 2*a def solve_pq(): p = b/a q = c/a D = (p/2.0)**2 - q r1 = -p/2.0+D**0.5 r2 = -p/2.0-D**0.5 return (r1, r2) def solve_pq2(): p = b/a q = c/a D = (p/2.0)**2 - q r1 = -2.0*q/(p+2.0*D**0.5) r2 = -p/2.0-D**0.5 return (r1, r2) def solve_companion(): p = b/a q = c/a C = np.array([[0.0, -q], [1.0, -p]]) return np.linalg.eigvals(C) def solve_newton(r): return opt.newton(f, r, tol=1.48e-10)#, fprime=None, args=(), tol=1.48e-08, maxiter=50, fprime2=None def solve_newton2(r): return opt.newton(f, r, tol=1.48e-10, fprime=f1)#, args=(), tol=1.48e-08, maxiter=50, fprime2=None def solve_newton3(r): return opt.newton(f, r, tol=1.48e-10, fprime=f1, fprime2=f2) result = solve_pq() print "pq" print repr(result) print repr(f(result[0])) print repr(f(result[1])) result = solve_pq2() print "pq2" print repr(result) print repr(f(result[0])) print repr(f(result[1])) result = solve_companion() print "companion" print repr(result) print repr(f(result[0])) print repr(f(result[1])) result[0] = solve_newton(result[0]) result[1] = solve_newton(result[1]) print "newton" print repr(result) print repr(f(result[0])) print repr(f(result[1])) result = np.polynomial.polynomial.polyroots((c, b, a)) print "numpy" print repr(result) print repr(f(result[1])) print repr(f(result[0])) """ Explanation: Let's solve a quadratic equation. Naive solving becomes bad if low and large coefficients occur. 
Consider the equation $3 x^2 + 10^5 x + 5 = 0$ End of explanation """ import math n = 645645665476.43e160 m = 125624536575.76e150 #print repr(n**4/m**4) print repr((n/m)**4) print repr(math.exp(4*math.log(n)-4*math.log(m))) """ Explanation: Logarithms can avoid floating point overflow Especially probabilities often involve large factorials. These can become astronomically large. End of explanation """ import numpy as np import scipy #m = np.array([[0.5e90, 0.00008, -0.1, 46786767], [-0.5, 0.2, -0.00001, 0.000008653], [1200000000000000.00002, -600.8, -0.5, 0.0], [-12000, 600.8, -0.698065, 650.0]]) m = np.array([[0.5, 0.00008, -0.1, 4667], [-0.5, 0.2, -0.00001, 0.000008653], [1200.00002, -600.8, -0.5, 0.0], [-12000, 600.8, -0.698065, 650.0]]) #print m #mI = m**(-1) mI = np.linalg.inv(m) #print mI #print m.dot(mI) ev = [1.0e-12, 2.0, 88.8, -0.005] A = m.dot(np.diag(ev)).dot(mI) print A print "" AI = np.linalg.inv(A) print AI print "" print A.dot(AI) b = np.array([1.0, 2.0, 3.0, 4.0]) # Required is x solving Ax = b def err(x1): v = np.dot(A, x1)-b return np.sqrt(np.inner(v, v)) x = np.dot(AI, b) print err(x) x2 = scipy.linalg.solve(A, b) print err(x2) # A = QR Q, R = np.linalg.qr(A) Qb = np.dot(Q.T, b) x3 = scipy.linalg.solve_triangular(R, Qb) print err(x3) """ Explanation: Don't invert that matrix References http://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/ Summary: There's hardly ever a good reason to invert a matrix Solve linear equation systems directly Apply a QR-decomposition to solve multiple systems with the same matrix (but different right side) Even if the inverse is given for free, direct solving is still more accurate Inverses of sparse matrices are in general dense End of explanation """
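The cancellation-avoiding root trick used by solve_pq2 earlier can be distilled into one standalone function: compute the large-magnitude root with the sign that avoids subtracting nearly equal numbers, then recover the small root from the product of roots c/a. A sketch (function name ours), run on the same coefficients as above:

```python
import math

def stable_roots(a, b, c):
    """Roots of a*x**2 + b*x + c, avoiding the cancellation in -p/2 + sqrt(D)."""
    disc = math.sqrt(b * b - 4.0 * a * c)
    # pick the sign so -b and the square root reinforce rather than cancel
    big = (-b - disc) / 2.0 if b >= 0 else (-b + disc) / 2.0
    r1 = big / a
    r2 = (c / a) / r1   # product of the roots equals c/a
    return r1, r2

a, b, c = 3.0, 10e5, 5.0
r1, r2 = stable_roots(a, b, c)
f = lambda x: (a * x + b) * x + c   # Horner form of the polynomial
print(abs(f(r1)), abs(f(r2)))
```

Evaluating f at both roots gives residuals near machine precision for the small root, where the naive -p/2 + sqrt(D) formula loses most of its significant digits.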
HemantTiwariGitHub/AndroidNDSunshineProgress
HiddenMarkovModel_PoSTaggingFromScratch.ipynb
apache-2.0
#Importing libraries import nltk import random from sklearn.model_selection import train_test_split import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import time nltk.download('treebank') nltk.download('universal_tagset') # reading the Treebank tagged sentences nltk_data = list(nltk.corpus.treebank.tagged_sents(tagset='universal')) """ Explanation: <a href="https://colab.research.google.com/github/HemantTiwariGitHub/AndroidNDSunshineProgress/blob/master/HiddenMarkovModel_PoSTaggingFromScratch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> POS tagging using modified Viterbi Data Preparation End of explanation """ #check some details print(nltk_data[:5]) # we will need to split here as train and test needs to happen on entire sentences # Splitting into train and test random.seed(100) train_set, validation_set = train_test_split(nltk_data,test_size=0.05) print("Total Training Sentences : " + str(len(train_set))) print("Total Validations Sentences : " + str(len(validation_set))) print(train_set[:5]) # Create List of Word and Tokens WordTagPairList = [pair for sent in train_set for pair in sent] print(WordTagPairList[:5]) # Finding Total Data, Unique words and Tags print("Total Word Tag Pairs : " + str(len(WordTagPairList))) wordTokens = [pair[0] for pair in WordTagPairList] tagTokens = [pair[1] for pair in WordTagPairList] #Unique word tokens is the vocabulary Vocab = set(wordTokens) #Unique Tags are the tag set TagSet = set(tagTokens) print("Vocab List : " +str(Vocab)) print("Unique Vocab Size : " + str(len(Vocab))) print ("Tags List : " + str(TagSet)) print("Unique Tags Size : " + str(len(TagSet))) """ Explanation: Exploring and Processing Data End of explanation """ #First We need to build the Transition Probability (Tag to Tag) and Emission Probability (Tag to Vocab Word) Matrix """ Explanation: Build the vanilla Viterbi based POS tagger End of 
explanation """ # compute probability next tag based on previous tag def getTransitionProbability(nextTag, baseTag, train_bag = WordTagPairList): tags = [pair[1] for pair in train_bag] TotalCountOfBaseTag = len([t for t in tags if t==baseTag]) TotalCountNextTagFollowingBaseTag = 0 for index in range(len(tags)-1): if tags[index]==baseTag and tags[index+1] == nextTag: TotalCountNextTagFollowingBaseTag += 1 return (TotalCountNextTagFollowingBaseTag, TotalCountOfBaseTag) def getProbability(Numerator, Denominator): return Numerator/Denominator #Test print(getTransitionProbability(nextTag='VERB', baseTag='NOUN')) count1 , count2 = getTransitionProbability(nextTag='VERB', baseTag='NOUN') print(getProbability(count1, count2)) print(getTransitionProbability(nextTag='VERB', baseTag='ADV')) print(getTransitionProbability(nextTag='VERB', baseTag='.')) print(getTransitionProbability(nextTag='DET', baseTag='.')) #create the Transition Probability Matrix TransitionProbabilityMatrix = np.zeros((len(TagSet), len(TagSet)), dtype='float32') for i, baseTag in enumerate(list(TagSet)): for j, nextTag in enumerate(list(TagSet)): nextTagOnBaseTagCount, baseTagCount = getTransitionProbability(nextTag, baseTag) TransitionProbabilityMatrix[i, j] = nextTagOnBaseTagCount/baseTagCount # checking values of Transition Probabilities print(TransitionProbabilityMatrix) print("Dimensions of Transition Matrix are : " +str(len(TransitionProbabilityMatrix)) + "," + str(len(TransitionProbabilityMatrix[0])) ) TransitionProbabilityMatrixDF = pd.DataFrame(TransitionProbabilityMatrix, columns = list(TagSet), index=list(TagSet)) TransitionProbabilityMatrixDF.head(5) #checking the heatmap for how good the dependencies are. 
plt.figure(figsize=(12, 12)) sns.heatmap(TransitionProbabilityMatrixDF) plt.show() #Creating Vocab Emission Probability based on Tag State (TagSize x Vocab) matrix tagCount = len(TagSet) vocabCount = len(Vocab) EmissionProbabilityMatrix = np.zeros((tagCount, vocabCount)) #Emission Probability def getEmissionProbability(word, tag, train_bag = WordTagPairList): BaseWordTagPairList = [pair for pair in train_bag if pair[1]==tag] BaseTagCount = len(BaseWordTagPairList) WordEmissionCountOnTag = [pair[0] for pair in BaseWordTagPairList if pair[0]==word] WordEmissionCount = len(WordEmissionCountOnTag) return (WordEmissionCount, BaseTagCount) #test print(getEmissionProbability('best', 'ADJ')) """ Explanation: Solving "The learning problem" Stage End of explanation """ #Tagging Using Hidden Markov Model and Viterbi Heuristic def HMMTaggerWithViterbi(wordList, train_bag = WordTagPairList): PredictedTagList = [] TagsList = list(set([pair[1] for pair in train_bag])) for key, word in enumerate(wordList): #initialise list of probability column for a given observation CandidateProbabilityList = [] for tag in TagsList: if key == 0: #Probability of Tag Just after Start State TransitionProbability = TransitionProbabilityMatrixDF.loc['.', tag] else: #Probability of Tag after Previous Tag TransitionProbability = TransitionProbabilityMatrixDF.loc[PredictedTagList[-1], tag] #Emission Probability EmissionCount, TotalCount = getEmissionProbability(wordList[key], tag) EmissionProbability = EmissionCount/TotalCount #State Probability StateProbability = EmissionProbability * TransitionProbability CandidateProbabilityList.append(StateProbability) maximumStateProbability = max(CandidateProbabilityList) # getting state for which probability is maximum PredictedTag = TagsList[CandidateProbabilityList.index(maximumStateProbability)] PredictedTagList.append(PredictedTag) return list(zip(wordList, PredictedTagList)) """ Explanation: Creating the HMM Model with Viterbi Heuristics End of explanation """
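To make the decoding rule above concrete: at each word the tagger keeps only the previously chosen tag and picks the tag maximizing transition probability × emission probability. A minimal self-contained sketch of that per-word rule — the two-tag tagset and all probability values below are made up for illustration, not taken from the Treebank:

```python
# Toy transition P(tag | prev_tag) and emission P(word | tag) tables;
# '.' plays the start-of-sentence role, as in HMMTaggerWithViterbi above.
transition = {('.', 'NOUN'): 0.4, ('.', 'VERB'): 0.1,
              ('NOUN', 'NOUN'): 0.2, ('NOUN', 'VERB'): 0.5,
              ('VERB', 'NOUN'): 0.3, ('VERB', 'VERB'): 0.1}
emission = {('dogs', 'NOUN'): 0.6, ('dogs', 'VERB'): 0.01,
            ('bark', 'NOUN'): 0.05, ('bark', 'VERB'): 0.7}
tags = ['NOUN', 'VERB']

def greedy_decode(words):
    """Per-word argmax of transition * emission, as in the cell above."""
    prev, out = '.', []
    for w in words:
        best = max(tags, key=lambda t: transition.get((prev, t), 0.0)
                                       * emission.get((w, t), 0.0))
        out.append(best)
        prev = best
    return out

print(greedy_decode(['dogs', 'bark']))  # ['NOUN', 'VERB']
```

Note that this keeps a single best tag at each step rather than tracking the best score for every tag and backtracking at the end, which is what a full Viterbi decoder would do.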
random.seed(100) # choose 5 random sentence indices; randint is inclusive on both ends, so the upper bound must be len - 1 to stay in range rndom = [random.randint(0, len(validation_set) - 1) for x in range(5)] # list of validation sentences validationSentences = [validation_set[i] for i in rndom] #list of validation Tuples created from Validation Sentences validationTuples = [tup for sent in validationSentences for tup in sent] print(validationTuples) #list of only Validation words, stripped of the tags. These will be predicted by HMMTagger and later compared for accuracy ValidationWords = [tup[0] for sent in validationSentences for tup in sent] print(ValidationWords) def getHMMAccuracy(testSetList, OriginalWordTagList): start = time.time() TaggedWordSequence = HMMTaggerWithViterbi(testSetList) end = time.time() difference = end-start print("Time taken in seconds: ", difference) print(TaggedWordSequence) print(OriginalWordTagList) correctlyTaggedWordCount = [i for i, j in zip(TaggedWordSequence, OriginalWordTagList) if i == j] accuracy = len(correctlyTaggedWordCount)/len(TaggedWordSequence) print(accuracy) getHMMAccuracy(ValidationWords, validationTuples) """ Explanation: Validation Set Evaluation End of explanation """
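The accuracy computed in getHMMAccuracy is simply the fraction of predicted (word, tag) pairs that exactly match the gold-standard pairs. A self-contained sketch of that measure — the three pairs below are toy data for illustration, not Treebank output:

```python
predicted = [('The', 'DET'), ('dog', 'NOUN'), ('barks', 'NOUN')]
gold = [('The', 'DET'), ('dog', 'NOUN'), ('barks', 'VERB')]

# exact-match accuracy over aligned pairs, as in getHMMAccuracy
matches = [p for p, g in zip(predicted, gold) if p == g]
accuracy = len(matches) / len(predicted)
print(accuracy)  # 2 of 3 pairs match
```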
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
doc/notebooks/automaton.reduce.ipynb
gpl-3.0
import vcsn """ Explanation: automaton.reduce Compute an equivalent automaton with a minimal number of states. Preconditions: - its labelset is free - its weightset is a division ring or $\mathbb{Z}$. Postconditions: - the result is equivalent to the input automaton - the result is both accessible and co-accessible Caveats: - The reduction algorithm may well produce an automaton which will look more 'complicated' than the original one, especially when the latter is already of minimal dimension. See examples in $\mathbb{Z}$ below. - The computation of reduced representations implies the exact resolution of linear systems of equations, which becomes problematic as the dimension of the systems grows. See also: - automaton.minimize Examples End of explanation """ c = vcsn.context('lal_char(ab), q') a = c.expression('<2>(<3>a+<5>b+<7>a)*<11>', 'associative').standard() a a.reduce() """ Explanation: In $\mathbb{Q}$ End of explanation """ a.minimize() """ Explanation: To be contrasted with the results from automaton.minimize: End of explanation """ c = vcsn.context('lal_char(ab), z') a = c.expression('<2>(<3>a+<5>b+<7>a)*<11>', 'associative').standard() a.reduce() """ Explanation: In $\mathbb{Z}$ The same automaton, but on $\mathbb{Z}$, gives: End of explanation """ %%automaton a context = "lal_char(abc), z" $ -> 0 0 -> 0 a, b 0 -> 1 b 0 -> 2 <2>b 2 -> 2 <2>a, <2>b 2 -> 1 <2>b 1 -> 1 <4>a, <4>b 1 -> $ a.reduce() a = vcsn.context('lal_char(ab), z').expression('[ab]*b(<2>[ab])*').automaton() a a.reduce() """ Explanation: Caveats In $\mathbb{Z}$ the result may be surprising. For instance: End of explanation """
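As background for the linear-systems caveat above (standard weighted-automaton theory, not specific to this page): an automaton over a weightset $\mathbb{K}$ with $n$ states can be viewed as a linear representation $(\lambda, \mu, \gamma)$, and reduce computes an equivalent representation of minimal dimension, which requires exact linear algebra over $\mathbb{K}$:

```latex
% Weight of a word w computed by a K-weighted automaton with n states:
% \lambda: initial weights, \mu(a): transition matrix of letter a,
% \gamma: final weights.
\[
  s(w) = \lambda \, \mu(a_1)\,\mu(a_2)\cdots\mu(a_k) \, \gamma,
  \qquad w = a_1 a_2 \cdots a_k,
\]
\[
  \lambda \in \mathbb{K}^{1 \times n}, \qquad
  \mu(a) \in \mathbb{K}^{n \times n}, \qquad
  \gamma \in \mathbb{K}^{n \times 1}.
\]
```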
wehlutyk/brainscopypaste
notebooks/filter_evaluations.ipynb
gpl-3.0
SAMPLE_SIZE = 100 """ Explanation: Filter evaluations by precision-recall analysis 1 Setup Flags and settings. End of explanation """ from random import sample from textwrap import indent, fill import numpy as np %cd -q .. from brainscopypaste.conf import settings %cd -q notebooks from brainscopypaste.db import Cluster, Quote, Substitution from brainscopypaste.utils import init_db, session_scope, langdetect from brainscopypaste.filter import filter_quote_offset from brainscopypaste.mine import Model, Time, Source, Past, Durl engine = init_db() """ Explanation: Imports and database setup. End of explanation """ #with session_scope() as session: # quote_ids = sample([id for (id,) # in session.query(Quote.id).filter(Quote.filtered == False)], # SAMPLE_SIZE) """ Explanation: 2 Evaluate language filtering Either run this cell to generate a new selection... End of explanation """ quote_ids = [1956761, 1987670, 2013139, 1762813, 514301, 203909, 2369165, 1912805, 1534840, 464066, 152061, 720930, 2331572, 268789, 457062, 1219889, 790024, 2309594, 2712468, 1282691, 730591, 1840441, 382029, 376767, 1058965, 2423390, 1013911, 1558617, 2491471, 777666, 648372, 2123449, 630199, 1479403, 488899, 2501461, 1453754, 1190828, 239246, 1981112, 491273, 2554944, 80036, 1433989, 1067430, 67526, 937337, 952234, 110443, 123775, 1454732, 1600502, 1877933, 1746630, 1387646, 654399, 1896017, 254697, 72175, 657151, 2296113, 2692900, 2340689, 1253031, 2671619, 1299967, 2323288, 321528, 2391467, 2704508, 2596163, 2044260, 420936, 448593, 966688, 252997, 1573373, 328472, 1176294, 494275, 1110521, 74025, 647887, 2469274, 1748974, 209377, 116114, 2657880, 923507, 2530004, 200625, 671881, 209809, 2651830, 1105095, 120217, 16153, 95646, 1162318, 1267527] with session_scope() as session: strings = [session.query(Quote).get(id).string for id in quote_ids] strings_langs = [(string, langdetect(string)) for string in strings] print("Over a sample of {}, {} quotes are rejected because their detected " 
"language is not English" .format(SAMPLE_SIZE, np.sum([lang != 'en' for _, lang in strings_langs]))) """ Explanation: ... or this one to use the previous selection (which has already been coded). End of explanation """ for i, (string, lang) in enumerate(strings_langs): title = ' {} / {}'.format(i + 1, SAMPLE_SIZE) print('-' * (80 - len(title)) + title) print('Language:', lang) print() print(indent(fill(string), ' ' * 5)) print() """ Explanation: Here are the individual strings and their detected languages. End of explanation """ #with session_scope() as session: # cluster_ids = sample([id for (id,) # in session.query(Cluster.id).filter(Cluster.filtered == False)], # SAMPLE_SIZE) """ Explanation: The question, here, is whether language is properly detected or not (both precision and recall are important to us here). Hand-coding these sentences gives a direct precision-recall answer to this. See the paper for details. 3 Evaluate full cluster filtering Either run this cell to generate a new selection... 
End of explanation """ cluster_ids = [678375, 1030236, 200131, 227946, 182411, 1782509, 2571784, 2385863, 548464, 1417058, 442671, 417270, 206587, 2119286, 2509349, 1363677, 2384075, 98089, 2472784, 2291625, 1252250, 766764, 2005041, 1786498, 2532146, 547696, 119025, 99671, 2293637, 37913, 198342, 1824441, 393254, 1438581, 726763, 1162402, 136659, 700003, 1530925, 868029, 1988243, 1528193, 453911, 1118515, 942719, 163403, 37335, 2531211, 1207286, 139905, 2098233, 985830, 1309232, 1949178, 1208225, 163161, 2390394, 1000682, 33330, 1438761, 143158, 914871, 1012869, 1515844, 2155238, 318470, 1705922, 1998918, 1559891, 1567974, 1072475, 1377888, 2659498, 214657, 2307338, 2702297, 1718647, 1830269, 1075089, 563301, 2675991, 1936421, 650209, 1000108, 270310, 224933, 1078355, 2665218, 988251, 921771, 193937, 1005385, 922307, 966693, 1468503, 1112509, 476413, 1537564, 799385, 1344350] with session_scope() as session: clusters = [session.query(Cluster).get(id) for id in cluster_ids] strings_kepts = [] for c in clusters: fcluster = c.filter() if fcluster is not None: kept_quote_ids = set([q.id - filter_quote_offset() for q in fcluster.quotes]) else: kept_quote_ids = set([]) strings_kepts.append(([(q.string, q.id in kept_quote_ids) for q in c.quotes], c.filter() is not None)) print("Over a sample of {}, {} clusters are rejected by the cluster filter" .format(SAMPLE_SIZE, SAMPLE_SIZE - np.sum([kept for _, kept in strings_kepts]))) """ Explanation: ... or this one to use the previous selection (which has already been coded). End of explanation """ for i, (strings, ckept) in enumerate(strings_kepts): title = ' {} / {}'.format(i + 1, SAMPLE_SIZE) print('-' * (80 - len(title)) + title) print('Kept:', 'yes' if ckept else 'no') print() for string, skept in strings: fstring = indent(fill(string), ' ' * 5) if not skept: fstring = fstring[0] + 'x' + fstring[2:] print(fstring) print() """ Explanation: Here are the individual cluster strings and their respective rejected/kept status. 
End of explanation """ #with session_scope() as session: # # Sample 5 times more to make sure we get at least 100 substitutions # cluster_ids = sample([id for (id,) # in session.query(Cluster.id).filter(Cluster.filtered == True)], # 15 * SAMPLE_SIZE) """ Explanation: The question, here, is whether the cluster filter lets us keep only real clusters (i.e. not clusters of causally unrelated quotes; in other words, we'd like high precision, and we're not really interested in recall). The codings for these clusters are in the codings/filter_evaluations_clusters-precision-recall file, and precision-recall follows from them. 4 Evaluate the substitution filtering 4.0 Sample clusters and substitutions Sample clusters Either run this cell to generate a new selection... End of explanation """ cluster_ids = [ 1001058714, 1000234230, 1001661015, 1000793763, 1000992927, 1001447160, 1000928909, 1000652632, 1002429300, 1000424511, 1000028725, 1001804178, 1001962169, 1002010696, 1002620164, 1000541603, 1002038276, 1002178216, 1001264080, 1000658653, 1002656290, 1000602035, 1002444841, 1001270720, 1000980555, 1000049133, 1001594109, 1000874654, 1000270878, 1001267640, 1000587081, 1001502597, 1001290073, 1000742456, 1000803305, 1001768144, 1001790796, 1000420570, 1002288719, 1002195335, 1000630276, 1001310108, 1002544437, 1001355443, 1001215220, 1002184159, 1001954566, 1000417270, 1001062143, 1000193302, 1001073091, 1001318792, 1001333529, 1002230713, 1000253352, 1002165662, 1001000108, 1001492202, 1002697831, 1002554478, 1002203884, 1000368041, 1000743708, 1001803629, 1001123734, 1001892371, 1001817722, 1001887535, 1001566025, 1001800790, 1000778799, 1000917456, 1001772704, 1001500452, 1001176167, 1001580298, 1000786039, 1000276995, 1001329982, 1002416136, 1001861324, 1000314572, 1000873226, 1002526111, 1001214205, 1002498111, 1000073947, 1001636514, 1000381995, 1001214611, 1001605320, 1000100464, 1000853926, 1002047137, 1000532977, 1002113182, 1002222906, 1001773241, 1000492857, 
1000949715, 1000164833, 1000263282, 1000669491, 1000537798, 1000493265, 1001739413, 1002188621, 1002425058, 1000997844, 1001798690, 1000158229, 1000131343, 1001936787, 1001056713, 1000069848, 1001032119, 1000492141, 1000001841, 1000062295, 1001365462, 1001259190, 1002360197, 1000417145, 1001266464, 1000395595, 1002109225, 1000519609, 1002341113, 1002627477, 1001447816, 1002598858, 1000739902, 1001334186, 1000421527, 1001369574, 1001468521, 1001221778, 1002397089, 1000194804, 1001834376, 1000847659, 1000857363, 1001389106, 1000404597, 1002673066, 1002010483, 1002482970, 1001309374, 1002135445, 1000676392, 1001402554, 1000023387, 1000861742, 1000739666, 1002208103, 1001610066, 1002091675, 1000055741, 1000098549, 1002606093, 1001019247, 1002081961, 1000046067, 1002202634, 1002023513, 1001803811, 1002270829, 1001090860, 1000942827, 1002488258, 1000073692, 1002394081, 1002642836, 1000964777, 1001327690, 1000005769, 1001231453, 1001843109, 1002095337, 1000144749, 1000856624, 1001408503, 1002077967, 1000689166, 1002604060, 1002619245, 1000260034, 1002549738, 1000886518, 1001146812, 1001499168, 1000981840, 1000510458, 1001824441, 1000023650, 1002645020, 1002476030, 1001445978, 1001729766, 1002396899, 1002704984, 1000456379, 1000164580, 1001594835, 1001879217, 1001658148, 1000222104, 1000175721, 1001934132, 1000458812, 1000597134, 1000983803, 1001045067, 1001934934, 1001101372, 1002064679, 1001354812, 1001387573, 1001552266, 1001049178, 1000211591, 1001535131, 1000068077, 1002667160, 1001958712, 1002049050, 1001354277, 1001406656, 1001333150, 1001800505, 1001948524, 1000398683, 1002031029, 1001171493, 1001750650, 1000486399, 1002344374, 1000798536, 1002483840, 1001327852, 1000574008, 1002405878, 1001603998, 1001658395, 1000311644, 1001804403, 1001249334, 1001265825, 1001434048, 1001393656, 1001794262, 1001120310, 1001337829, 1001450332, 1002610877, 1001479081, 1000806304, 1000310768, 1000399139, 1001749658, 1000062651, 1001672488, 1000168730, 1000287594, 1000031766, 
1002453238, 1000907880, 1000172027, 1002431192, 1001999079, 1000072697, 1001986304, 1002115343, 1001729577, 1001750064, 1000630283, 1000459150, 1002202737, 1002558133, 1001727072, 1000130129, 1000657054, 1000701933, 1000953247, 1002349120, 1001881970, 1002607340, 1002248182, 1001559921, 1001252335, 1000301764, 1001307914, 1001284847, 1001340444, 1001785513, 1000158707, 1000072111, 1002577972, 1001846670, 1002551562, 1000423962, 1001392074, 1002514611, 1001067331, 1002031448, 1000179811, 1001451249, 1002350833, 1000722484, 1000541045, 1001513981, 1002327622, 1001060749, 1002530663, 1002644774, 1001007144, 1000257361, 1002563963, 1002394972, 1001058481, 1000854528, 1002613959, 1001676866, 1000073613, 1000209836, 1001243402, 1002440440, 1002364622, 1000090769, 1002031916, 1001978714, 1002287608, 1001866823, 1001583138, 1002663786, 1002190005, 1001188041, 1000497933, 1000061665, 1000964817, 1001367239, 1001935938, 1002551108, 1002637119, 1000525740, 1002690274, 1000762606, 1002218984, 1002540254, 1002012695, 1002535392, 1002045997, 1001761229, 1001345695, 1001540708, 1000409133, 1002281542, 1002031988, 1001905217, 1001927225, 1000456501, 1000103149, 1001595739, 1002629715, 1000633871, 1002582219, 1000125203, 1002622788, 1000152868, 1001691221, 1002550150, 1001367563, 1001517118, 1000871577, 1002129646, 1001997590, 1000404780, 1002438973, 1002241374, 1001038946, 1000088364, 1001781958, 1001978086, 1001035980, 1001110133, 1001604717, 1002078831, 1002488583, 1000792847, 1000641673, 1001165015, 1001283536, 1001500299, 1001336583, 1001464976, 1001775715, 1001334053, 1001752220, 1001059238, 1000061280, 1000343167, 1002262659, 1001017521, 1000926454, 1001549906, 1001399657, 1001596953, 1001511989, 1002134226, 1000997323, 1002423339, 1002561081, 1002351089, 1000541923, 1001581475, 1001867280, 1001543175, 1001503038, 1000093464, 1002711441, 1000730805, 1001702743, 1001948884, 1000065378, 1000215299, 1001326290, 1001638327, 1000007295, 1001515868, 1002212135, 1001575847, 
1002161948, 1001357387, 1001766184, 1000727306, 1000699503, 1000075976, 1002128547, 1002602954, 1001760223, 1000659375, 1002062608, 1000750867, 1001376415, 1000364749, 1001058467, 1000618691, 1001144316, 1001536519, 1002587087, 1001101833, 1000247382, 1000609876, 1001645973, 1001840226, 1001602395, 1002684664, 1000776962, 1001157755, 1002342517, 1000022381, 1001920189, 1001801346, 1001162402, 1002689307, 1002156227, 1001687826, 1002596959, 1002349648, 1001894040, 1001450690, 1001735336, 1001498688, 1001180316, 1000810720, 1001220544, 1001529035, 1001497563, 1000466115, 1000218057, 1002531776, 1001810079, 1000174553, 1000145964, 1001540017, 1002531576, 1001728260, 1002033880, 1001150137, 1001953094, 1002077979, 1001228678, 1002168298, 1000151281, 1002335600, 1002526758, 1001714987, 1001291988, 1000419221, 1002166820, 1000925310, 1000982141, 1001751176, 1001191639, 1000428712, 1000885331, 1000130902, 1001035008, 1000538396, 1002436753, 1002272435, 1001444011, 1000873940, 1000706171, 1000465018, 1000006623, 1000275277, 1002396180, 1002582721, 1000118329, 1000037089, 1000453971, 1000963958, 1000640803, 1001129926, 1001224067, 1000047577, 1000393604, 1001300568, 1000147695, 1000808608, 1001958358, 1001269596, 1002713792, 1002275177, 1002410739, 1002315383, 1000223504, 1001345610, 1000036857, 1002471258, 1001375224, 1002695360, 1001511054, 1002619075, 1002701703, 1000428130, 1000207474, 1000367210, 1002676594, 1002290678, 1000780766, 1002248783, 1000331455, 1000869013, 1002351131, 1001827981, 1001624275, 1001359969, 1000562736, 1000393157, 1001947238, 1002170490, 1002493182, 1000951354, 1000734774, 1000272483, 1001067850, 1002234686, 1002259959, 1001223336, 1002086622, 1002102465, 1002404394, 1000833030, 1001881525, 1002214009, 1001142763, 1001334771, 1001399809, 1000971111, 1000841798, 1000351662, 1000918770, 1000051328, 1001085842, 1001426393, 1001499469, 1002673391, 1002439989, 1002656579, 1000503223, 1000199277, 1001102502, 1000804320, 1001829920, 1002377023, 
1000669157, 1000789097, 1002582408, 1000755806, 1000826738, 1001776772, 1001368136, 1000804767, 1002148559, 1001719604, 1002662183, 1002320519, 1000047835, 1000240968, 1000980627, 1000889403, 1002350939, 1001131879, 1001271289, 1002320021, 1000812131, 1001424421, 1002461839, 1001799302, 1000362981, 1001116881, 1001201610, 1001904014, 1002449788, 1000361053, 1000143359, 1000922307, 1001675730, 1000773716, 1000964829, 1000830471, 1000431368, 1002320455, 1002506830, 1001079629, 1002573767, 1000514517, 1002143145, 1001158803, 1001644001, 1002618006, 1002492025, 1000676599, 1001041617, 1002145451, 1000017230, 1001866916, 1001265926, 1002274458, 1001815958, 1002702195, 1002236588, 1002247341, 1000118770, 1002178890, 1001472895, 1000063547, 1000762613, 1002297834, 1000807747, 1001970314, 1002276553, 1000562856, 1001140852, 1001601748, 1002639679, 1001182746, 1002650044, 1001227746, 1000170311, 1000564560, 1001740055, 1001904646, 1000539800, 1001501829, 1000531802, 1000128379, 1000790358, 1001940684, 1000158028, 1001881112, 1002492922, 1000803880, 1000643541, 1000052799, 1000297615, 1001726109, 1002543957, 1000054716, 1000155125, 1002652618, 1000769467, 1002042242, 1000368676, 1000858673, 1001822210, 1000350231, 1001442450, 1000050760, 1002041174, 1000233535, 1001572148, 1001432199, 1001742887, 1001172196, 1000364258, 1001970826, 1002105767, 1002037960, 1000451410, 1002099403, 1000967350, 1000147974, 1000341770, 1001680878, 1001613527, 1002528697, 1002715601, 1002474922, 1000818246, 1001278242, 1000681152, 1000997312, 1000010867, 1000952787, 1000993230, 1001786224, 1000203905, 1000911729, 1000885883, 1002075501, 1001614854, 1001924725, 1000385909, 1001433831, 1000201148, 1001606931, 1001941294, 1000962064, 1001860524, 1000774230, 1002033685, 1001261980, 1000288611, 1001334187, 1001921383, 1002166359, 1001938346, 1001262841, 1000993575, 1001270506, 1002702506, 1001842295, 1000382304, 1000042449, 1000021771, 1000481204, 1002121723, 1001652776, 1001226182, 1002225233, 
1001253286, 1001562116, 1000154064, 1000094446, 1001633759, 1001546738, 1001004738, 1002556326, 1001205897, 1000902561, 1002001937, 1001700843, 1002435437, 1000989228, 1001653126, 1001334470, 1002071621, 1001522103, 1001548001, 1001285963, 1000090895, 1000746072, 1001574873, 1002175814, 1002531988, 1002144195, 1001770967, 1001451200, 1001856086, 1000559388, 1001297817, 1000837494, 1001534871, 1000220313, 1001259385, 1002100069, 1001332921, 1000544786, 1001841512, 1001383653, 1000684470, 1002161309, 1002231396, 1002348115, 1001758109, 1001747036, 1000284497, 1000832130, 1001282170, 1002010186, 1000427241, 1001581775, 1000578192, 1000078262, 1000016694, 1002705118, 1001733624, 1001363335, 1002436545, 1001328603, 1002180807, 1000767722, 1002485195, 1002272469, 1001271235, 1001481701, 1001348533, 1000388469, 1001219997, 1001460577, 1001590875, 1001240142, 1000238824, 1000121322, 1002004474, 1001592727, 1001385776, 1002470366, 1002664998, 1001997277, 1001120647, 1002575276, 1001722300, 1002216129, 1000067607, 1000994775, 1001498352, 1002699359, 1000587363, 1000217706, 1002109991, 1000288982, 1002699713, 1002233823, 1001153232, 1000914136, 1002548433, 1001734510, 1002342520, 1002452277, 1001100860, 1000950342, 1000388189, 1000127650, 1002315582, 1001566979, 1000957774, 1000052007, 1000364394, 1001996793, 1001908176, 1002105936, 1002086992, 1002016069, 1001183918, 1000800618, 1001022239, 1001270359, 1002242646, 1002505634, 1000716919, 1000901112, 1002481410, 1000298970, 1000516124, 1000057052, 1002361761, 1002647536, 1000845811, 1002171985, 1002050592, 1000006586, 1001255552, 1001837963, 1001256691, 1001761220, 1000938672, 1000530532, 1000311534, 1001776809, 1000318357, 1002053600, 1001444195, 1002365910, 1001746878, 1000228196, 1002060204, 1001739958, 1001886840, 1000698210, 1002509868, 1001502009, 1001893527, 1002638619, 1002066935, 1001550280, 1002294683, 1000107898, 1000010223, 1001508748, 1001689201, 1001003737, 1001014351, 1002082703, 1002224120, 1002157272, 
1000882553, 1000053316, 1001414012, 1001342905, 1002156509, 1000251513, 1001232667, 1001399253, 1001855493, 1001013265, 1001688212, 1002583768, 1001529428, 1001127460, 1001698379, 1002315633, 1002318183, 1001620522, 1001378536, 1001574209, 1000569652, 1001604632, 1002697581, 1001575675, 1001117505, 1002051194, 1000183177, 1000855203, 1001423033, 1001777736, 1002318334, 1001056375, 1002426820, 1002236516, 1000150483, 1001578508, 1001358370, 1000011455, 1002362070, 1000806847, 1001801348, 1000085743, 1001413671, 1001540172, 1001829995, 1001851813, 1001597485, 1000078746, 1001840646, 1001995820, 1000160512, 1002451933, 1001191721, 1002203693, 1000238465, 1002033572, 1001320063, 1000872380, 1000623197, 1002147129, 1000011914, 1001530603, 1002446364, 1000079514, 1002448631, 1001076382, 1002223593, 1000490815, 1000630667, 1000376213, 1002462823, 1000941817, 1002357231, 1001797587, 1000926673, 1000594216, 1001861749, 1001684910, 1002501511, 1001579538, 1002606713, 1000695748, 1000903985, 1001546192, 1001996840, 1001472327, 1000032826, 1001545056, 1002014395, 1002095028, 1001913231, 1000075824, 1000952746, 1001853596, 1002182047, 1002628022, 1002004078, 1001566113, 1002163349, 1000790111, 1002285133, 1002494233, 1000397761, 1001439740, 1001101891, 1001618046, 1001291605, 1002314992, 1001234028, 1001559891, 1001477621, 1000780535, 1000875694, 1000645845, 1000484118, 1002111321, 1001878295, 1002646120, 1002039447, 1002029116, 1000156966, 1000010498, 1001769120, 1000175849, 1000418713, 1001915737, 1000709712, 1001922556, 1001283094, 1002208729, 1000333316, 1000012505, 1001047126, 1001172398, 1000177517, 1001487440, 1000001465, 1000882506, 1002659495, 1002052955, 1001398557, 1001303140, 1000860772, 1002525449, 1002527325, 1002183331, 1001776468, 1000205762, 1000597585, 1000113771, 1001870016, 1000025477, 1002086198, 1000502730, 1001981118, 1002473830, 1001480043, 1002528637, 1000838413, 1001653286, 1001330880, 1002679849, 1002506095, 1000486043, 1001982507, 1002103300, 
1000257805, 1001964267, 1000874462, 1001326641, 1001782479, 1001021296, 1001867545, 1000694265, 1002033815, 1000304505, 1000985515, 1000135881, 1002495192, 1000821217, 1001810184, 1002310190, 1002635800, 1000729831, 1000784775, 1001668437, 1002274466, 1000797484, 1002537649, 1000475111, 1000437411, 1001657664, 1001278789, 1001836754, 1001983301, 1001494451, 1002357233, 1002212481, 1001782077, 1001629749, 1000824935, 1002484633, 1000050369, 1000716672, 1001310861, 1001954605, 1002473461, 1000000607, 1000825586, 1001437178, 1000204392, 1002359775, 1001499856, 1001810430, 1002093548, 1001448073, 1001173925, 1000746976, 1002502228, 1000177952, 1001304657, 1001653859, 1000214289, 1000157134, 1001597852, 1002060579, 1000735315, 1002615587, 1001055343, 1002059341, 1000356177, 1000447259, 1000699286, 1002457427, 1001989841, 1001801036, 1001434979, 1001957769, 1000007005, 1000916205, 1001169392, 1001889767, 1000075558, 1002025030, 1001976875, 1001560208, 1001115658, 1001598286, 1001205754, 1002231878, 1000813552, 1001981838, 1000677677, 1001053575, 1002255917, 1002025140, 1001734307, 1001057270, 1001087649, 1002574757, 1001915861, 1001674897, 1001364136, 1002154777, 1001019768, 1001614035, 1000379490, 1001908572, 1002591014, 1000164476, 1000021830, 1002468679, 1001717996, 1001998254, 1000470822, 1001431751, 1001319414, 1002050667, 1000560326, 1001259195, 1000866918, 1001379510, 1002113186, 1001517995, 1001646691, 1000465784, 1001374778, 1001896574, 1002089665, 1000872939, 1001914402, 1001909903, 1002351476, 1002080607, 1001717244, 1000018324, 1001994530, 1001422230, 1001353347, 1000339608, 1000475920, 1001694035, 1000982021, 1001590385, 1001535698, 1000544821, 1001163684, 1000525655, 1002208497, 1001140991, 1000766128, 1000607192, 1000151074, 1000723980, 1002133819, 1000500270, 1000834415, 1002225356, 1000373694, 1001824087, 1000067090, 1000297958, 1002317409, 1001770960, 1001508749, 1000167477, 1001947065, 1001940057, 1000781919, 1000988086, 1002518061, 1001558269, 
1001815508, 1001091260, 1000263913, 1000405911, 1001511990, 1001723810, 1001074987, 1002376815, 1001680122, 1001506913, 1001594016, 1001519696, 1001789977, 1000692462, 1000029089, 1000338311, 1001956638, 1002575343, 1001274784, 1001851680, 1002390771, 1001326868, 1001743949, 1002550572, 1002559841, 1002530236, 1001459660, 1000449148, 1002536266, 1002701796, 1002253595, 1001124589, 1001133908, 1002463064, 1000392223, 1001121726, 1001289467, 1000289591, 1002105764, 1002381694, 1002341368, 1000827595, 1002084983, 1001682153, 1000633018, 1001113583, 1000928142, 1000033000, 1000621627, 1001084527, 1001200554, 1001458154, 1002412589, 1001972202, 1002691528, 1000265992, 1002659026, 1001366843, 1000467244, 1001664670, 1002489296, 1002483595, 1001900662, 1002686469, 1001330713, 1000830278, 1000795336, 1002364454, 1000811242, 1001678392, 1000033963, 1002634804, 1001804844, 1000527231, 1002344474, 1002369472, 1001578312, 1001845592, 1001540167, 1002337924, 1000539731, 1002311229, 1000201698, 1000953073, 1000629221, 1002663893, 1001791750, 1001914401, 1002604562, 1000828450, 1002613802, 1001268711, 1000927905, 1001578575, 1001005287, 1002131006, 1002396856, 1000010276, 1001202203, 1000137591, 1001716758, 1001404162, 1002497367, 1000410225, 1000002836, 1000456777, 1000005943, 1002065249, 1002074323, 1000814733, 1002442276, 1002663916, 1001447228, 1000979497, 1000956742, 1001038308, 1002638096, 1000415025, 1001868899, 1001017794, 1000056720, 1000058183, 1000821880, 1001228149, 1000480755, 1002499177, 1001537485, 1000339767, 1002044113, 1002573766, 1001269992, 1001354843, 1000118888, 1000149428, 1001803975, 1000222326, 1001257849, 1002187778, 1002149432, 1001445651, 1001971190, 1001377902, 1000168509, 1001351656, 1001713927, 1001104442, 1001890822, 1001869495, 1002575689, 1000762777, 1000150393, 1001259333, 1000565784, 1002312763, 1001455163, 1002431805, 1002170320, 1001727497, 1002120786, 1001211845, 1002009293, 1002002759, 1002256034, 1000960075, 1000905489, 1002410523, 
1000381073, 1000173214, 1002395671, 1002256806, 1001768142, 1000177679, 1002692425, 1002088771, 1000856361, 1001423847, 1001575759, 1000449711, 1001550004, 1001554359, 1001906605, 1001710361, 1001862213, 1002048548, 1000734007, 1001059720, 1002352183, 1001121521, 1000213563, 1001664822, 1002450758, 1001128119, 1001734650, 1000402851, 1000517539, 1001565687, 1001288446, 1000169483, 1000780042, 1000862817, 1000904640, 1001952654, 1000734391, 1001156795, 1001074923, 1001855482, 1001336701, 1001371862, 1002545044, 1001454360, 1001719858, 1001598709, 1001450807, 1000713806, 1000792912, 1002677052, 1001580304, 1002257558, 1001500784, 1002261599, 1000810175, 1001677954, 1001202743, 1002314510, 1001777562, 1000822655, 1001002115, 1001217819, 1002520043, 1000716537, 1001958057, 1002669138, 1002021941, 1000828887, 1000177607, 1001993644, 1002485054, 1001084532, 1000857473, ] """ Explanation: ... or this one to use the previous selection (which has already been coded). End of explanation """ #substitution_ids = {} #with session_scope() as session: # for max_distance in range(1, 3): # model = Model(Time.discrete, Source.majority, Past.last_bin, Durl.all, max_distance) # # Sample 5 times more to make sure we get at least 100 substitutions # # even with different quotes # substitution_ids[model] = sample( # [id for (id,) # in session.query(Substitution.id).filter(Substitution.model == model)], # 5 * SAMPLE_SIZE # ) """ Explanation: Sample substitutions Either run this cell to generate a new selection...
End of explanation """ substitution_ids = { Model(Time.discrete, Source.majority, Past.last_bin, Durl.all, 1): [ 5472, 10466, 8963, 6380, 778, 9747, 474, 830, 8524, 8271, 674, 6907, 5478, 9421, 8203, 7668, 6772, 2201, 6117, 3916, 8525, 10101, 1863, 1846, 2741, 8648, 9870, 4681, 1871, 7960, 6830, 7839, 4664, 2974, 5660, 7994, 4239, 5473, 8170, 10334, 4306, 9317, 1887, 9638, 6799, 6115, 849, 3303, 4259, 6105, 7455, 10475, 2176, 5343, 913, 8808, 420, 489, 5431, 872, 6754, 389, 9343, 8201, 6234, 498, 3411, 9359, 8402, 7506, 7084, 6809, 7417, 9337, 1924, 5685, 648, 415, 5702, 4612, 10714, 8804, 4269, 6758, 10980, 7140, 1967, 2968, 5443, 2661, 9635, 3911, 4346, 3496, 946, 879, 4845, 7148, 3445, 6574, 5650, 8046, 8943, 7115, 9914, 5688, 54, 1425, 932, 1498, 3318, 7357, 10723, 571, 6440, 10721, 5723, 4271, 943, 854, 4067, 2694, 6135, 57, 2434, 8190, 417, 5571, 2213, 6612, 2674, 7063, 4353, 6653, 800, 6138, 2688, 459, 859, 2430, 5269, 9871, 10381, 8986, 2072, 9381, 200, 354, 4194, 7394, 9877, 7033, 723, 9883, 8626, 4206, 486, 3617, 243, 5339, 5355, 9338, 1724, 811, 6130, 2224, 3311, 4616, 4521, 9410, 5482, 6369, 7963, 10768, 4256, 4061, 565, 3623, 6657, 141, 4226, 8642, 11052, 4052, 786, 40, 2025, 632, 541, 5755, 7486, 2706, 9560, 213, 10879, 5441, 5800, 8684, 2953, 6822, 411, 9342, 6847, 9494, 392, 6820, 1841, 6945, 5656, 467, 6433, 9407, 442, 6663, 1996, 5639, 7359, 2162, 8546, 4488, 7548, 6777, 3649, 2435, 4313, 7125, 348, 10991, 7082, 6795, 7142, 9535, 5464, 4620, 867, 9760, 8974, 5705, 4223, 7458, 8382, 8076, 2199, 6183, 10719, 10356, 988, 4330, 10971, 6871, 4374, 8899, 42, 2157, 8092, 5514, 744, 9361, 1961, 5555, 8163, 4463, 8887, 4134, 865, 432, 1413, 5754, 5469, 9414, 3305, 5669, 6181, 570, 7551, 8759, 695, 2419, 6903, 2202, 969, 7570, 2438, 5429, 4465, 179, 5239, 10438, 7501, 573, 2559, 9486, 9457, 6146, 949, 1377, 8926, 4390, 9745, 5780, 3404, 279, 9490, 3765, 4197, 2164, 4691, 5753, 10775, 277, 11048, 8414, 2014, 7678, 10338, 10636, 6367, 140, 4348, 4429, 3910, 
4220, 410, 2095, 3294, 2230, 4833, 2045, 3626, 5466, 3306, 3622, 2243, 5322, 8510, 2192, 6235, 4231, 564, 9423, 9333, 5492, 1725, 10378, 8270, 10173, 2154, 201, 2151, 8562, 3928, 7006, 5647, 9228, 2721, 8970, 3909, 2686, 9061, 5765, 7027, 824, 5436, 4860, 7534, 2090, 8192, 4126, 9466, 3446, 1009, 3450, 5248, 6182, 9888, 22, 513, 7389, 8819, 992, 11047, 5549, 1519, 7352, 10615, 616, 3624, 7112, 3290, 2181, 8941, 10877, 977, 2719, 4221, 10399, 2200, 5504, 6238, 9562, 10862, 9418, 10709, 3618, 9063, 9388, 7137, 897, 3407, 6362, 10468, 2047, 5673, 10618, 9552, 851, 3416, 4058, 8947, 6814, 5581, 701, 972, 4340, 4215, 689, 8713, 3956, 8707, 6176, 3413, 10021, 4516, 4780, 9422, 3347, 7032, 3340, 6431, 5724, 4288, 4496, 9417, 6119, 4057, 325, 4413, 9869, 8500, 220, 6152, 9385, 3293, 7996, 793, 219, 4456, 939, 8086, 613, 6948, 212, 1965, 4417, 10623, 804, 4449, 5424, 9766, 7542, 9782, 4361, 6356, 1964, 8061, 9393, 8566, 4240, 9401, 4323, 5484, 10423, 501, 8033, 3414, 8542, 4257, 6462, 4048, 2034, 7669, 3277, 4295, 7113, 973, 9588, 4480, 2463, 4451, 11113 ], Model(Time.discrete, Source.majority, Past.last_bin, Durl.all, 2): [ 7242, 6639, 9155, 6065, 8704, 3122, 9273, 3556, 7633, 1514, 2335, 7235, 1309, 4084, 3690, 6671, 4171, 3205, 1207, 2595, 2587, 4151, 2790, 9504, 3008, 9022, 1610, 3169, 9163, 10168, 2783, 9271, 8480, 2497, 3782, 9805, 3991, 9513, 10277, 2374, 5875, 5527, 6079, 8333, 11029, 3028, 3366, 7255, 6691, 3160, 2108, 10525, 10165, 1810, 6285, 2738, 3897, 2917, 10509, 8718, 10011, 2801, 1703, 8783, 4901, 6560, 5945, 669, 9647, 3802, 604, 8024, 7037, 7303, 4930, 2729, 6622, 2346, 4744, 3386, 10184, 7689, 1708, 3536, 10207, 3973, 7517, 5877, 6349, 2534, 8851, 1481, 7818, 2912, 5895, 9050, 2483, 1910, 8360, 3992, 4899, 1147, 5040, 1222, 4766, 3190, 2305, 8869, 10458, 773, 2730, 4994, 10863, 7299, 7566, 3534, 5334, 6016, 1490, 7904, 3686, 8794, 7664, 8698, 1239, 1809, 4185, 7523, 1915, 1894, 6569, 10556, 4018, 7376, 7699, 10256, 9717, 8717, 2765, 5843, 2266, 9266, 
9921, 7896, 7400, 7091, 1800, 4887, 7860, 1454, 812, 3831, 68, 9168, 9933, 4541, 3696, 9804, 2288, 3329, 7612, 2500, 1494, 8122, 7167, 9614, 9172, 3486, 6000, 8482, 8479, 3660, 10679, 10053, 8923, 1312, 10216, 9127, 2516, 9024, 8322, 3794, 9817, 10508, 7323, 5372, 9778, 1628, 1359, 5092, 6670, 8738, 3899, 7284, 9911, 4722, 1674, 3067, 10746, 2284, 4019, 10223, 8139, 9034, 6887, 7629, 3700, 1899, 2736, 5335, 8690, 2980, 10641, 9726, 7431, 2793, 4716, 3082, 5532, 3006, 2498, 1397, 7041, 1311, 1493, 6507, 10681, 257, 9895, 10098, 7653, 6561, 10542, 600, 5220, 1149, 10573, 5311, 2622, 9117, 5373, 5064, 3687, 8865, 7354, 9822, 11044, 5292, 8267, 8030, 1775, 8313, 9595, 8247, 578, 4509, 6320, 3674, 3115, 8456, 9001, 3482, 5396, 11096, 7934, 2678, 8108, 10962, 5079, 6995, 9603, 5048, 7695, 3161, 6517, 5022, 5059, 10265, 10130, 5121, 3584, 7716, 6197, 10282, 4628, 11060, 5003, 3509, 9068, 644, 2537, 4761, 3374, 1877, 1353, 10655, 11076, 9553, 5119, 107, 9610, 3801, 7880, 10871, 3144, 4578, 4767, 7774, 653, 3753, 251, 1067, 10076, 1460, 9184, 10580, 8921, 8604, 6709, 2914, 5143, 145, 10453, 9861, 8003, 4167, 9819, 5007, 10941, 5290, 7891, 8228, 2833, 4163, 1757, 1826, 3177, 3581, 4965, 7078, 6219, 3381, 8848, 1412, 7381, 9837, 1209, 2352, 6516, 314, 10728, 8349, 1423, 9057, 8356, 91, 6198, 159, 9939, 8769, 2878, 10560, 10729, 9287, 9735, 4591, 9907, 211, 1472, 5595, 6740, 9069, 1768, 5392, 4092, 3002, 6201, 6419, 9053, 2481, 1408, 1621, 4525, 6955, 7759, 8745, 7575, 2649, 658, 5938, 4139, 1602, 3085, 1895, 5106, 1650, 9721, 4954, 9521, 3763, 3221, 4972, 6093, 5826, 6340, 2623, 7742, 11065, 113, 5280, 6668, 7035, 3050, 4915, 7265, 8733, 7271, 1157, 9976, 10112, 3713, 10088, 7942, 9083, 2540, 4963, 2527, 5512, 6651, 9722, 5905, 1597, 6606, 5948, 172, 4991, 4756, 1111, 9708, 1231, 8615, 1560, 3134, 3708, 7892, 4593, 3850, 1566, 7438, 8113, 4181, 1182, 1685, 5405, 5913, 7779, 8107, 1177, 1734, 5950, 2600, 4913, 2261, 6039, 6297, 10818, 1154, 8788, 5289, 118, 1645, 5197, 3118, 
2792, 1945, 10583, 1049, 3324, 8121, 2533, 10425, 7713, 9774, 10939, 6957, 6558, 6705, 10066, 2116, 10930, 4633, 1194, 1692, 5828, 9840, 3780, 4508, 4995, 1624, 10176 ] } """ Explanation: ... or this one to use the previous selection (which has already been coded). End of explanation """ def print_substitution(number, substitution): title = ' {} / {}'.format(number, SAMPLE_SIZE) print('-' * (80 - len(title)) + title) if substitution.validate(): print('Kept: yes') else: print('Kept: no') print() print(' Tokens: {tokens[0]} -> {tokens[1]}' .format(tokens=substitution.tokens)) print(' Lemmas: {lemmas[0]} -> {lemmas[1]}' .format(lemmas=substitution.lemmas)) print() print(indent(fill(substitution.source.string), ' ' * 5)) print() print(indent(fill(substitution.destination.string), ' ' * 5)) print() def mine_print_substitutions(model): seen = 0 seen_substitutions = set() for cluster_id in cluster_ids: if seen >= SAMPLE_SIZE: break model.drop_caches() with session_scope() as session: cluster = session.query(Cluster).get(cluster_id) for substitution in cluster.substitutions(model): session.rollback() if seen >= SAMPLE_SIZE: break if (substitution.destination.id, substitution.position) in seen_substitutions: break seen += 1 seen_substitutions.add((substitution.destination.id, substitution.position)) print_substitution(seen, substitution) if seen < SAMPLE_SIZE: print("Didn't find {} substitutions, you should sample more clusters" .format(SAMPLE_SIZE)) def db_print_substitutions(model): seen = 0 seen_substitutions = set() for substitution_id in substitution_ids[model]: if seen >= SAMPLE_SIZE: break with session_scope() as session: substitution = session.query(Substitution).get(substitution_id) if (substitution.destination.id, substitution.position) in seen_substitutions: continue seen += 1 seen_substitutions.add((substitution.destination.id, substitution.position)) print_substitution(seen, substitution) """ Explanation: Our substitution printing function: End of explanation 
""" mine_print_substitutions(Model(Time.discrete, Source.majority, Past.last_bin, Durl.all, 1)) """ Explanation: 4.1 Single-substitution mining Here are the individual substitutions and their respective rejected/kept status, with single substitution. End of explanation """ db_print_substitutions(Model(Time.discrete, Source.majority, Past.last_bin, Durl.all, 1)) """ Explanation: Now with only valid substitutions, to make the precision analysis more accurate. End of explanation """ mine_print_substitutions(Model(Time.discrete, Source.majority, Past.last_bin, Durl.all, 2)) """ Explanation: Here, the only possible source of bad filtering is stopwords and rejection because of levenshtein distance <= 1 (the other checks are very specific). We already know we don't want substitutions on stopwords because we want to focus on meaningful changes, so the question is: 1. precision of the filter (i.e. are there any obvious missing checks?) 2. whether the levenshtein distance <= 1 rejection causes many meaningful substitutions to be lost or not: we want to know if there is high recall for this, so as to know whether using orthographic neighborhood density is useful or not. The codings the substitutions are in codings/filter_evaluations_substitutions-precision-recall; precision answers the first question, recall the second one. 4.2 Double-substitution mining Now the same with double-substitutions. End of explanation """ db_print_substitutions(Model(Time.discrete, Source.majority, Past.last_bin, Durl.all, 2)) """ Explanation: Now with only valid substitutions, to make the precision analysis more accurate. End of explanation """
ljo/collatex-tutorial
unit3/CollateX.ipynb
gpl-3.0
!pip install --pre --upgrade collatex """ Explanation: Collating for real with CollateX Okay, let's do some serious hands-on collation. First of all we want to make sure that you have the latest version of CollateX. That's why we do… End of explanation """ from collatex import * """ Explanation: You don't need to do this every time, but make sure you do it regularly. Next we need to tell Python that we will be needing the Python library that holds the code for CollateX… End of explanation """ collation = Collation() """ Explanation: Now we're ready to make a collation object. We do this with the slightly hermetic line of code: collation = Collation() Here the small caps collation is the arbitrary named variable that refers to a copy (officially it is called an instance) of the CollateX collation engine. We simply tell the collation library to create a new instance by saying Collation(). End of explanation """ collation.add_plain_witness( "A", "The quick brown fox jumped over the lazy dog.") collation.add_plain_witness( "B", "The brown fox jumped over the dog." ) collation.add_plain_witness( "C", "The bad fox jumped over the lazy dog." ) """ Explanation: Now we add some witnesses. Each witness gets a letter or name that will identify them, and for each we add the literal text of the witness to the collation object, like so… End of explanation """ alignment_table = collate(collation, layout='vertical', segmentation=False ) """ Explanation: And now we can let CollateX do its work of collating these witnesses and sit back for about 0.001 seconds. The result will be an alignment table, so we'll refer to the result with a variable named alignment_table. End of explanation """ print( alignment_table ) """ Explanation: Well, that worked nicely it seems. But there's no printout, no visualization. 
That's okay, we can come up with a printout of the alignment table too: End of explanation """ alignment_table = collate(collation, layout='vertical' ) print( alignment_table ) """ Explanation: Usually you will want those segments that run parallel to be collected and displayed together. Since this is what most people seem to want, CollateX does that by default. We switched this option off in the example above for a moment because the result then shows you more clearly what the underlying primary structure is that CollateX returns. But for all practicle purposes you will probabably lose that segmentation=False option. So, let's get rid of that, and collate again… End of explanation """ graph = collate( collation, output="svg", segmentation=True ) """ Explanation: The aligment table visualization is CollateX's default way of rendering a collation result. There are various ways in which one can depict collated results of course. The output in alignment table form can be a good basis for further visualizations. CollateX can also format your collation as a variant graph. This is a visualization that lets you trace from left to right through a directed network, to follow which witness carries what readings. 
End of explanation """ collation = Collation() witness_1859 = open( "./fixtures/Darwin/txt/darwin1859_par1.txt", encoding='utf-8' ).read() witness_1860 = open( "./fixtures/Darwin/txt/darwin1860_par1.txt", encoding='utf-8' ).read() witness_1861 = open( "./fixtures/Darwin/txt/darwin1861_par1.txt", encoding='utf-8' ).read() witness_1866 = open( "./fixtures/Darwin/txt/darwin1866_par1.txt", encoding='utf-8' ).read() witness_1869 = open( "./fixtures/Darwin/txt/darwin1869_par1.txt", encoding='utf-8' ).read() witness_1872 = open( "./fixtures/Darwin/txt/darwin1872_par1.txt", encoding='utf-8' ).read() collation.add_plain_witness( "1859", witness_1859 ) collation.add_plain_witness( "1860", witness_1860 ) collation.add_plain_witness( "1861", witness_1861 ) collation.add_plain_witness( "1866", witness_1866 ) collation.add_plain_witness( "1869", witness_1869 ) collation.add_plain_witness( "1872", witness_1872 ) """ Explanation: Okay, that's all good and nice, but that's just tiny fragments—we want decent chunks of text to collate! Well, we can do that too, although is requires a little more work. Specifically for reading in text files from the file system. If we didn't do it that way, we would have to key in all characters of each witness, and that's just a lot of unnecessary work if we have done those texts already in a file. The below code uses the open command to open each text file and appoint the contents to a variable with an appropriately chosen name. The encoding="utf-8" bit is needed because you should always tell Python which encoding your data uses. This is probably the only place and time where you will use that encoding directive: when you open a (text) file. End of explanation """ print( witness_1859 ) print( witness_1860 ) """ Explanation: Now let's check if these witnesses actually contain some text. 
End of explanation """ alignment_table = collate(collation, layout='vertical', output='html') """ Explanation: And now let's collate those witnesses and let's put the result up as an HTML-formatted alignment table… End of explanation """ alignment_table = collate(collation, layout='vertical', output='html2') """ Explanation: Hmm… but that's still a little hard to read. Wouldn't it be nice if we got a hint where the actual differences are? Sure, try… End of explanation """ graph = collate( collation, output="svg" ) """ Explanation: And finally, we can also generate the variant graph for this collation… End of explanation """
rubennj/pvlib-python
docs/tutorials/tmy_and_diffuse_irrad_models.ipynb
bsd-3-clause
# built-in python modules import os import inspect # scientific python add-ons import numpy as np import pandas as pd # plotting stuff # first line makes the plots appear in the notebook %matplotlib inline import matplotlib.pyplot as plt # finally, we import the pvlib library import pvlib # Find the absolute file path to your pvlib installation pvlib_abspath = os.path.dirname(os.path.abspath(inspect.getfile(pvlib))) # absolute path to a data file datapath = os.path.join(pvlib_abspath, 'data', '703165TY.csv') # read tmy data with year values coerced to a single year tmy_data, meta = pvlib.tmy.readtmy3(datapath, coerce_year=2015) tmy_data.index.name = 'Time' # TMY data seems to be given as hourly data with time stamp at the end # shift the index 30 Minutes back for calculation of sun positions tmy_data = tmy_data.shift(freq='-30Min')['2015'] tmy_data.GHI.plot() plt.ylabel('Irradiance (W/m**2)') tmy_data.DHI.plot() plt.ylabel('Irradiance (W/m**2)') surface_tilt = 30 surface_azimuth = 180 # pvlib uses 0=North, 90=East, 180=South, 270=West convention albedo = 0.2 # create pvlib Location object based on meta data sand_point = pvlib.location.Location(meta['latitude'], meta['longitude'], tz='US/Alaska', altitude=meta['altitude'], name=meta['Name'].replace('"','')) print(sand_point) solpos = pvlib.solarposition.get_solarposition(tmy_data.index, sand_point.latitude, sand_point.longitude) solpos.plot() # the extraradiation function returns a simple numpy array # instead of a nice pandas series. 
We will change this # in a future version dni_extra = pvlib.irradiance.get_extra_radiation(tmy_data.index) dni_extra = pd.Series(dni_extra, index=tmy_data.index) dni_extra.plot() plt.ylabel('Extra terrestrial radiation (W/m**2)') airmass = pvlib.atmosphere.get_relative_airmass(solpos['apparent_zenith']) airmass.plot() plt.ylabel('Airmass') """ Explanation: TMY data and diffuse irradiance models This tutorial explores using TMY data as inputs to different plane of array diffuse irradiance models. This tutorial requires pvlib > 0.6.0. Authors: * Rob Andrews (@Calama-Consulting), Heliolytics, June 2014 * Will Holmgren (@wholmgren), University of Arizona, July 2015, March 2016, August 2018 Setup See the tmy_to_power tutorial for more detailed explanations for the initial setup End of explanation """ diffuse_irrad = pd.DataFrame(index=tmy_data.index) models = ['Perez', 'Hay-Davies', 'Isotropic', 'King', 'Klucher', 'Reindl'] """ Explanation: Diffuse irradiance models Make an empty pandas DataFrame for the results. 
End of explanation """ diffuse_irrad['Perez'] = pvlib.irradiance.perez(surface_tilt, surface_azimuth, dhi=tmy_data.DHI, dni=tmy_data.DNI, dni_extra=dni_extra, solar_zenith=solpos.apparent_zenith, solar_azimuth=solpos.azimuth, airmass=airmass) """ Explanation: Perez End of explanation """ diffuse_irrad['Hay-Davies'] = pvlib.irradiance.haydavies(surface_tilt, surface_azimuth, dhi=tmy_data.DHI, dni=tmy_data.DNI, dni_extra=dni_extra, solar_zenith=solpos.apparent_zenith, solar_azimuth=solpos.azimuth) """ Explanation: HayDavies End of explanation """ diffuse_irrad['Isotropic'] = pvlib.irradiance.isotropic(surface_tilt, dhi=tmy_data.DHI) """ Explanation: Isotropic End of explanation """ diffuse_irrad['King'] = pvlib.irradiance.king(surface_tilt, dhi=tmy_data.DHI, ghi=tmy_data.GHI, solar_zenith=solpos.apparent_zenith) """ Explanation: King Diffuse model End of explanation """ diffuse_irrad['Klucher'] = pvlib.irradiance.klucher(surface_tilt, surface_azimuth, dhi=tmy_data.DHI, ghi=tmy_data.GHI, solar_zenith=solpos.apparent_zenith, solar_azimuth=solpos.azimuth) """ Explanation: Klucher Model End of explanation """ diffuse_irrad['Reindl'] = pvlib.irradiance.reindl(surface_tilt, surface_azimuth, dhi=tmy_data.DHI, dni=tmy_data.DNI, ghi=tmy_data.GHI, dni_extra=dni_extra, solar_zenith=solpos.apparent_zenith, solar_azimuth=solpos.azimuth) """ Explanation: Reindl End of explanation """ yearly = diffuse_irrad.resample('A', how='sum').dropna().squeeze() / 1000.0 # kWh monthly = diffuse_irrad.resample('M', how='sum', kind='period') / 1000.0 daily = diffuse_irrad.resample('D', how='sum') / 1000.0 """ Explanation: Calculate yearly, monthly, daily sums. 
End of explanation """ ax = diffuse_irrad.plot(title='In-plane diffuse irradiance', alpha=.75, lw=1) ax.set_ylim(0, 800) ylabel = ax.set_ylabel('Diffuse Irradiance [W]') plt.legend() diffuse_irrad.describe() diffuse_irrad.dropna().plot(kind='density') """ Explanation: Plot Results End of explanation """ ax_daily = daily.tz_convert('UTC').plot(title='Daily diffuse irradiation') ylabel = ax_daily.set_ylabel('Irradiation [kWh]') """ Explanation: Daily End of explanation """ ax_monthly = monthly.plot(title='Monthly average diffuse irradiation', kind='bar') ylabel = ax_monthly.set_ylabel('Irradiation [kWh]') """ Explanation: Monthly End of explanation """ yearly.plot(kind='barh') """ Explanation: Yearly End of explanation """ mean_yearly = yearly.mean() yearly_mean_deviation = (yearly - mean_yearly) / yearly * 100.0 yearly_mean_deviation.plot(kind='bar') """ Explanation: Compute the mean deviation from measured for each model and display as a function of the model End of explanation """
AnkitMalviya/Cognitive-Robot
notebooks/node_red_dsx_workflow.ipynb
apache-2.0
!pip install websocket-client """ Explanation: Derive insights on Olympics data using Python Pandas <font color='blue'> Expose an integration point using websockets for orchestration with Node-RED.</font> 1. Setup To prepare your environment, you need to install some packages. 1.1 Install the necessary packages You need the latest versions of these packages:<br> - websocket-client: is a python client for the Websockets.<br> - python-swiftclient: is a python client for the Swift API.<br><br> Install the websocket client: End of explanation """ !pip install python-swiftclient """ Explanation: Install IBM Bluemix Object Storage Client: End of explanation """ import pandas as pd import matplotlib.pyplot as plt import json import websocket import thread import time import swiftclient import codecs from io import StringIO """ Explanation: 1.2 Import packages and libraries Import the packages and libraries that you'll use: End of explanation """ olympics_data_filename = 'olympics.csv' dictionary_data_filename = 'dictionary.csv' """ Explanation: 2. Configuration Add configurable items of the notebook below 2.1 Add your service credentials for Object Storage You must create Object Storage service on Bluemix. To access data in a file in Object Storage, you need the Object Storage authentication credentials. Insert the Object Storage authentication credentials as <i><b>credentials_1</b></i> in the following cell after removing the current contents in the cell. 2.3 Global Variables Add global variables. End of explanation """ auth_url = credentials_1['auth_url']+"/v3" container = credentials_1["container"] IBM_Objectstorage_Connection = swiftclient.Connection( key=credentials_1['password'], authurl=auth_url, auth_version='3', os_options={ "project_id": credentials_1['project_id'], "user_id": credentials_1['user_id'], "region_name": credentials_1['region']}) def create_container(container_name): """ Create a container on Object Storage. 
""" x = IBM_Objectstorage_Connection.put_container(container_name) return x def put_object(container_name, fname, contents, content_type): """ Write contents to Object Storage. """ x = IBM_Objectstorage_Connection.put_object( container_name, fname, contents, content_type) return x def get_object(container_name, fname): """ Retrieve contents from Object Storage. """ Object_Store_file_details = IBM_Objectstorage_Connection.get_object( container_name, fname) return Object_Store_file_details[1] """ Explanation: 3. Persistence and Storage 3.1 Configure Object Storage Client End of explanation """ olympics = pd.read_csv(StringIO(get_object(container, olympics_data_filename).decode('utf-8'))) olympics = olympics.rename(columns = {'Country':'Code'}) olympics = olympics.rename(columns = {'Year':'Edition'}) dictionary = pd.read_csv(StringIO(get_object(container, dictionary_data_filename).decode('utf-8'))) olympics = pd.merge(olympics, dictionary, on='Code') olympics.head() """ Explanation: 4. Data 4.1 Prepare data Combine the olympics and dictionary data into a single dataframe: - Read olympics data from Object Storage.<br> - Rename columns<br> - Populate the data in the dictionary to the Olympics data with a merge<br><br> End of explanation """ def get_medals_gb_year_country(): """ Group by edition and country and sum medals count. """ medals_groupedBy_yearCountry = olympics.groupby(['Edition','Code']).apply(lambda country: country['Code'].count()) return medals_groupedBy_yearCountry def get_medals_gb_year_country_medal(): """ Group by edition, country, medal type and sum medals count. """ medals_groupedBy_yearCountryMedal = olympics.groupby(['Edition', 'Code', 'Medal']).apply(lambda country: country['Medal'].count()) return medals_groupedBy_yearCountryMedal def get_medals_last_10_years(countrycode): """ Get Gold, Silver and Bronze medals for a country for last 10 editions. 
""" last10pics = olympics['Edition'].unique() yrs = pd.Series(last10pics).nlargest(10) df = pd.DataFrame([], columns=['Year', 'Gold', 'Silver', 'Bronze']) medalsdf = get_medals_gb_year_country_medal() for yr in yrs: medaltally = medalsdf[yr][countrycode] gold = 0 silver = 0 bronze = 0 if 'Gold' in medaltally: gold = medaltally['Gold'] if 'Silver' in medaltally: silver = medaltally['Silver'] if 'Bronze' in medaltally: bronze = medaltally['Bronze'] df1 = pd.DataFrame([[yr,gold, silver, bronze]], columns=['Year', 'Gold', 'Silver', 'Bronze']) df = df.append(df1, ignore_index=True) df = df.sort_values(by=['Year'], ascending=True) df = df.reset_index() del df['index'] return df def get_correlation_medalstally(): """ Get correlation between the medals tally and population, GDP per capita. """ df = get_medals_gb_year_country() values = get_all_olympic_years().values size = values.size correlations = [] for i in range(size): year = values[i][0] df1 = df[year].to_frame(name="Tally") df1 = df1.reset_index() df2 = pd.merge(df1,dictionary, on='Code') corrpop = df2.corr().values[0][1] corrgdp = df2.corr().values[0][2] resp = {"Year": year, "Population":corrpop, "GDP":corrgdp} correlations.append(resp) return correlations def get_medals_category(countrycode, year): """ Get the medals count in different sports category for a country in an edition. """ df = olympics[olympics['Edition'] == year] df1 = df[df['Code'] == countrycode] df2 = df1.groupby(['Sport']).apply(lambda country: country['Medal'].count()) return df2 def get_medals_category_all(countrycode): """ Get the medals count in different sports category for a country for last ten editions. """ df1 = olympics[olympics['Code'] == countrycode] df2 = df1.groupby(['Sport']).apply(lambda country: country['Medal'].count()) return df2 def get_top_ten_gold_tally(year): """ Get the top ten gold medal winning countries in an edition. 
""" df = olympics[olympics['Edition'] == year] df1 = df[df['Medal'] == 'Gold'] df2 = df1.groupby(['Code']).apply(lambda country: country['Medal'].count()) return df2 def get_top_ten_total_tally(year): """ Get the top ten total medal winning countries in an edition. """ df = olympics[olympics['Edition'] == year] df1 = df.groupby(['Code']).apply(lambda country: country['Medal'].count()) return df1 def get_year_venue(): """ Get edition venue matrix. """ df = olympics[['Edition', 'City']] df = df.drop_duplicates() df = df.reset_index() df = df.set_index('Edition') del df['index'] return df.sort_index() def get_all_olympic_years(): """ Get list of all olympic editions. """ df = olympics['Edition'] df = df.drop_duplicates() df = df.reset_index() del df['index'] return df.sort_index() def get_all_countries(): """ Get list of all countries. """ df = olympics[['Code','Country']] df = df.drop_duplicates() df = df.reset_index() del df['index'] return df.sort(['Country'],ascending=[True]) def get_country_edition_data(countrycode,edition): """ Get data for a country and edition. """ df = olympics[olympics["Code"] == countrycode] df1 = df[df["Edition"] == edition] return df1 """ Explanation: 5. 
Insights on the data using Python Pandas Create re-usable functions End of explanation """ def on_message(ws, message): print(message) msg = json.loads(message) cmd = msg['cmd'] if cmd == 'MBY': country = msg['country'] tally = get_medals_last_10_years(country) tallyarray=[] for i, row in tally.iterrows(): medaltally = {"Year":int(row["Year"]), "Gold":int(row["Gold"]), "Silver":int(row["Silver"]), "Bronze":int(row["Bronze"])} tallyarray.append(medaltally) wsresponse = {} wsresponse["forcmd"] = "MBY" wsresponse["response"] = tallyarray ws.send(json.dumps(wsresponse)) elif cmd == 'MBSC': country = msg['country'] year = 2008 response = get_medals_category(country, year) ct = response.count() if ct > 5: response = response.nlargest(5) medals = [] categories = [] for i, row in response.iteritems(): categories.append(i) medals.append(row) wsresponse = {} wsresponse["forcmd"] = "MBSC" wsresponse["response"] = { "categories":categories, "medals":medals} ws.send(json.dumps(wsresponse)) elif cmd == 'MBSA': country = msg['country'] response = get_medals_category_all(country) ct = response.count() if ct > 5: response = response.nlargest(5) medals = [] categories = [] for i, row in response.iteritems(): categories.append(i) medals.append(row) wsresponse = {} wsresponse["forcmd"] = "MBSA" wsresponse["response"] = { "categories":categories, "medals":medals} ws.send(json.dumps(wsresponse)) elif cmd == 'T10G': edition = msg["edition"] response = get_top_ten_gold_tally(edition) ct = response.count() if ct > 10: response = response.nlargest(10) medals = [] for i, row in response.iteritems(): data = {"country":i,"tally":row} medals.append(data) wsresponse = {} wsresponse["forcmd"] = "T10G" wsresponse["response"] = medals print(wsresponse) ws.send(json.dumps(wsresponse)) elif cmd == 'T10M': year = msg["edition"] response = get_top_ten_total_tally(year) ct = response.count() if ct > 10: response = response.nlargest(10) medals = [] for i, row in response.iteritems(): data = 
{"country":i,"tally":row} medals.append(data) wsresponse = {} wsresponse["forcmd"] = "T10M" wsresponse["response"] = medals print(wsresponse) ws.send(json.dumps(wsresponse)) elif cmd == 'CORR': corr = get_correlation_medalstally() wsresponse = {} wsresponse["forcmd"] = "CORR" wsresponse["response"] = corr ws.send(json.dumps(wsresponse)) elif cmd == 'YV': yearvenue = get_year_venue() yearvenuearray = [] for i in range(yearvenue.size): value = {"Year":yearvenue.index[i],"Venue":yearvenue.values[i].tolist()[0]} yearvenuearray.append(value) responsejson = {} responsejson["forcmd"]="YV" responsejson["response"]=yearvenuearray ws.send(json.dumps(responsejson)) elif cmd == 'DATA': country = msg['country'] edition = msg['edition'] olympicsslice = get_country_edition_data(country,edition) data = [] numofcolumns = olympicsslice.columns.size cols = [] values = [] for column in olympicsslice.columns: cols.append(column) for value in olympicsslice.values: values.append(value.tolist()) data = {"cols":cols,"vals":values} responsejson = {} responsejson['forcmd']='DATA' responsejson['response']= data ws.send(json.dumps(responsejson)) elif cmd == 'EDITIONS': years = get_all_olympic_years() yearsarray = [] for i,row in years.iteritems(): for value in row: yearsarray.append(value) length = len(yearsarray) wsresponse = [] for i in range(length): year = {"text":yearsarray[i],"value":yearsarray[i]} wsresponse.append(year) responsejson = {} responsejson['forcmd']='EDITIONS' responsejson['response']= wsresponse ws.send(json.dumps(responsejson)) elif cmd == 'COUNTRIES': countries = get_all_countries() countriesarray = [] codearray = [] for i,row in countries.iteritems(): if i=='Code': for value in row: codearray.append(value) elif i=='Country': for value in row: countriesarray.append(value) length = len(codearray) wsresponse = [] for i in range(length): country = {"text":countriesarray[i],"value":codearray[i]} wsresponse.append(country) responsejson = {} responsejson['forcmd']='COUNTRIES' 
responsejson['response']= wsresponse ws.send(json.dumps(responsejson)) def on_error(ws, error): print(error) def on_close(ws): ws.send("DSX Listen End") def on_open(ws): def run(*args): for i in range(10000): hbeat = '{"cmd":"Olympics DSX HeartBeat"}' ws.send(hbeat) time.sleep(100) thread.start_new_thread(run, ()) def start_websocket_listener(): websocket.enableTrace(True) ws = websocket.WebSocketApp("ws://NODERED_BASE_URL/ws/orchestrate", on_message = on_message, on_error = on_error, on_close = on_close) ws.on_open = on_open ws.run_forever() """ Explanation: 6. Expose integration point with a websocket client End of explanation """ start_websocket_listener() """ Explanation: 7. Start websocket client End of explanation """
farr/emcee
docs/_static/notebooks/parallel.ipynb
mit
import os os.environ["OMP_NUM_THREADS"] = "1" """ Explanation: Parallelization End of explanation """ import emcee print(emcee.__version__) """ Explanation: With emcee, it's easy to make use of multiple CPUs to speed up slow sampling. There will always be some computational overhead introduced by parallelization so it will only be beneficial in the case where the model is expensive, but this is often true for real research problems. All parallelization techniques are accessed using the pool keyword argument in the :class:EnsembleSampler class but, depending on your system and your model, there are a few pool options that you can choose from. In general, a pool is any Python object with a map method that can be used to apply a function to a list of numpy arrays. Below, we will discuss a few options. This tutorial was executed with the following version of emcee: End of explanation """ import time import numpy as np def log_prob(theta): t = time.time() + np.random.uniform(0.005, 0.008) while True: if time.time() >= t: break return -0.5*np.sum(theta**2) """ Explanation: In all of the following examples, we'll test the code with the following convoluted model: End of explanation """ np.random.seed(42) initial = np.random.randn(32, 5) nwalkers, ndim = initial.shape nsteps = 100 sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob) start = time.time() sampler.run_mcmc(initial, nsteps, progress=True) end = time.time() serial_time = end - start print("Serial took {0:.1f} seconds".format(serial_time)) """ Explanation: This probability function will randomly sleep for a fraction of a second every time it is called. This is meant to emulate a more realistic situation where the model is computationally expensive to compute. 
To start, let's sample the usual (serial) way: End of explanation """ from multiprocessing import Pool with Pool() as pool: sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, pool=pool) start = time.time() sampler.run_mcmc(initial, nsteps, progress=True) end = time.time() multi_time = end - start print("Multiprocessing took {0:.1f} seconds".format(multi_time)) print("{0:.1f} times faster than serial".format(serial_time / multi_time)) """ Explanation: Multiprocessing The simplest method of parallelizing emcee is to use the multiprocessing module from the standard library. To parallelize the above sampling, you could update the code as follows: End of explanation """ from multiprocessing import cpu_count ncpu = cpu_count() print("{0} CPUs".format(ncpu)) """ Explanation: I have 4 cores on the machine where this is being tested: End of explanation """ with open("script.py", "w") as f: f.write(""" import sys import time import emcee import numpy as np from schwimmbad import MPIPool def log_prob(theta): t = time.time() + np.random.uniform(0.005, 0.008) while True: if time.time() >= t: break return -0.5*np.sum(theta**2) with MPIPool() as pool: if not pool.is_master(): pool.wait() sys.exit(0) np.random.seed(42) initial = np.random.randn(32, 5) nwalkers, ndim = initial.shape nsteps = 100 sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, pool=pool) start = time.time() sampler.run_mcmc(initial, nsteps) end = time.time() print(end - start) """) mpi_time = !mpiexec -n {ncpu} python script.py mpi_time = float(mpi_time[0]) print("MPI took {0:.1f} seconds".format(mpi_time)) print("{0:.1f} times faster than serial".format(serial_time / mpi_time)) """ Explanation: We don't quite get the factor of 4 runtime decrease that you might expect because there is some overhead in the parallelization, but we're getting pretty close with this example and this will get even closer for more expensive models. 
MPI Multiprocessing can only be used for distributing calculations across processors on one machine. If you want to take advantage of a bigger cluster, you'll need to use MPI. In that case, you need to execute the code using the mpiexec executable, so this demo is slightly more convoluted. For this example, we'll write the code to a file called script.py and then execute it using MPI, but when you really use the MPI pool, you'll probably just want to edit the script directly. To run this example, you'll first need to install the schwimmbad library because emcee no longer includes its own MPIPool. End of explanation """ def log_prob_data(theta, data): a = data[0] # Use the data somehow... t = time.time() + np.random.uniform(0.005, 0.008) while True: if time.time() >= t: break return -0.5*np.sum(theta**2) data = np.random.randn(5000, 200) sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob_data, args=(data,)) start = time.time() sampler.run_mcmc(initial, nsteps, progress=True) end = time.time() serial_data_time = end - start print("Serial took {0:.1f} seconds".format(serial_data_time)) """ Explanation: There is often more overhead introduced by MPI than multiprocessing so we get less of a gain this time. That being said, MPI is much more flexible and it can be used to scale to huge systems. Pickling, data transfer & arguments All parallel Python implementations work by spinning up multiple python processes with identical environments then and passing information between the processes using pickle. This means that the probability function must be picklable. Some users might hit issues when they use args to pass data to their model. These args must be pickled and passed every time the model is called. 
This can be a problem if you have a large dataset, as you can see here: End of explanation """ with Pool() as pool: sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob_data, pool=pool, args=(data,)) start = time.time() sampler.run_mcmc(initial, nsteps, progress=True) end = time.time() multi_data_time = end - start print("Multiprocessing took {0:.1f} seconds".format(multi_data_time)) print("{0:.1f} times faster(?) than serial".format(serial_data_time / multi_data_time)) """ Explanation: We basically get no change in performance when we include the data argument here. Now let's try including this naively using multiprocessing: End of explanation """ def log_prob_data_global(theta): a = data[0] # Use the data somehow... t = time.time() + np.random.uniform(0.005, 0.008) while True: if time.time() >= t: break return -0.5*np.sum(theta**2) with Pool() as pool: sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob_data_global, pool=pool) start = time.time() sampler.run_mcmc(initial, nsteps, progress=True) end = time.time() multi_data_global_time = end - start print("Multiprocessing took {0:.1f} seconds".format(multi_data_global_time)) print("{0:.1f} times faster than serial".format(serial_data_time / multi_data_global_time)) """ Explanation: Brutal. We can do better than that though. It's a bit ugly, but if we just make data a global variable and use that variable within the model calculation, then we take no hit at all. End of explanation """
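A middle ground between passing the data through args and the bare global-variable trick shown above is multiprocessing's pool initializer: the data is pickled once per worker process at pool start-up rather than once per model call. This is my own sketch of the pattern, not part of the emcee docs; the names init_worker, log_prob_initialized, and run_pool are mine.

```python
import numpy as np
from multiprocessing import Pool

# Per-process slot, filled once when each worker starts instead of being
# pickled along with every task the pool dispatches.
_worker_data = None

def init_worker(data):
    global _worker_data
    _worker_data = data

def log_prob_initialized(theta):
    # Reads the worker-local copy installed by init_worker.
    assert _worker_data is not None, "pool was created without the initializer"
    return -0.5 * np.sum(theta**2)

def run_pool(thetas, data, processes=2):
    # Each worker process runs init_worker(data) exactly once on start-up.
    with Pool(processes, initializer=init_worker, initargs=(data,)) as pool:
        return pool.map(log_prob_initialized, thetas)
```

Compared with the global-variable trick, this also works when the data is loaded inside a function rather than at import time; the cost is still one pickle of the data per worker, so it only helps when the number of likelihood calls is much larger than the worker count.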
Repository: mobarski/sandbox, file: covid19/inverness_covid19_medical_care_bkp2.ipynb, license: MIT
!pip install inverness """ Explanation: Notebook structure Approach idea Common section for all sub-tasks Outcomes data for COVID-19 after mechanical ventilation adjusted for age -> results Model recalculation Approach idea PROS todo todo CONS todo todo Common section for all sub-tasks Installation of required packages End of explanation """ import inverness model = inverness.Model('model_all_v7/').load(['fun','meta','phraser','dictionary','tfidf','lsi','dense_ann']) """ Explanation: Model load End of explanation """ from IPython.core.display import display, HTML from matplotlib import pyplot as plt from tqdm import tqdm import pandas as pd import re pd.set_option('display.max_rows', 100) """ Explanation: Standard imports End of explanation """ def score_text(text, criteria): """""" total = 0 value = 1 for c in criteria: if type(c) in (int,float): value = c else: c = c.replace('_',r'\b') matches = re.findall(c,text,re.I) total += value if matches else 0 #total += value*len(matches) return total def score_results(i_d_lists, criteria, mark=None): """""" results = [] for i,d in zip(*i_d_lists): doc = model.get_doc(i) text = model.doc_to_text(doc).replace('\n',' ').replace('\r',' ') s = score_text(text, criteria) html = highlight(text, criteria, style_by_group_id, default_style) rec = s,d,i,html,doc results += [rec] results.sort(key=lambda x:(-x[0],x[1])) return results # TODO ??? 
tqdm def score_queries(queries, criteria, K=50): """""" by_score = [] for query in queries: q = model.text_to_dense(query) i_d = model.dense_ann_query(q,K) results = score_results(i_d, criteria) score = agg_results(results) by_score += [(score,query)] by_score.sort() return by_score def highlight(text, criteria, styles={}, default='w=bold'): """""" group_id = 0 for c in criteria: if type(c) in (int,float): group_id += 1 else: c = c.replace('_',r'\b') c = f'({c}\\w*)' style = styles.get(group_id,default) style_props = [] for prop in style.split(','): k,_,v = prop.partition('=') if k=='w': style_props += [f'font-weight:{v}'] if k=='fg': style_props += [f'foreground-color:{v}'] if k=='bg': style_props += [f'background-color:{v}'] before = f'<span style="{";".join(style_props)}">' after = '</span>' text = re.sub(c, before+'\\1'+after, text, flags=re.I) # TODO default return text # L2 score def agg_results(results): """""" scores = [x[0] for x in results] return sum([x*x for x in scores])**0.5 # TODO break title into multiple lines def plot_results(results,title=''): """""" scores = [x[0] for x in results] scores.sort(reverse=True) plt.plot(scores) if title: plt.title(title) score = agg_results(results) plt.figtext(0.4, 1, f"L2 score: {score:.02f}") plt.show() # TEST #highlight_old("this is a test of this function",['thi',5,'_is'],mark=[1]) #highlight("this is a test of this function",['thi',5,'_is'],styles={0:'w=bold',1:'bold,bg=#FF0000,fg=#0000FF'}) #highlight("this is a test of this function",['thi',5,'_is']) """ Explanation: Helper functions End of explanation """ criteria = [ 50,'mechanical','ventilat', 2,'adjust','_age','_years','_old', 2,'_surviv','discharge','extubate','alive', 2,'nonsurviv','_died','dead','death','mortality','complication', 5,'Kaplan.Meier','APACHE','SOFA','RIFLE','Glasgow.Coma','GCS','SAPS','_RESP_','RSBI','1000.person_', 2,'figure','_fig[.]','_table', 2,'outcome','result','occurr','cohort','median', 
1,'duration','time','_day','patients','_stay','_week' ] style_by_group_id = {1:'w=bold',2:'bg=#FFFF00',3:'bg=#00FF00',4:'bg=#FFAAAA',5:'bg=#FFCC00',6:'bg=#FFAAFF',7:'bg=#00FFFF'} default_style = '' # https://www.mdcalc.com/covid-19 """ Explanation: Outcomes data for COVID-19 after mechanical ventilation adjusted for age Scoring criteria Color codes: green - positive outcome red - negative outcome amber - estimator yellow - age related cyan - outcome / result magenta - table / chart / figure End of explanation """ K = 50 queries = [ 'Outcomes data for COVID-19 after mechanical ventilation adjusted for age', 'results after mechnical ventilation discharged dead died', 'results after mechnical ventilation discharged dead died survived survivors adjusted age years old', 'results after mechnical ventilation discharged died survived survivors extubated adjusted', 'results after mechnical ventilation discharged dead died survived survivors adjusted age years old', 'results after mechnical ventilation discharged died survived extubated adjusted', 'results after mechnical ventilation discharged died survived extubated adjusted age', 'outcomes after mechnical ventilation discharged died survived extubated adjusted', 'results outcomes after mechnical ventilation discharged died survived extubated adjusted age', 'results outcomes after mechnical ventilation discharged died survived extubated', 'results outcomes mechnical ventilation discharged died survived extubated', 'results outcomes after mechnical ventilation discharged died survived extubated adjusted', ] for score,query in score_queries(queries, criteria, K): print(f"{score:10.02f} -- {query}") """ Explanation: Query selection While we could query our model with verbatim task question the intuition/experience tells that keyword based query can give better results. Here we score different queries (including verbatim task question) using criteria defined above. 
End of explanation """ query = 'results outcomes after mechnical ventilation \n discharged died survived extubated adjusted' """ Explanation: Final query End of explanation """ K = 500 q = model.text_to_dense(query) i_d_lists = model.dense_ann_query(q, K) results = score_results(i_d_lists, criteria) plot_results(results, title=query) """ Explanation: Query the model End of explanation """ N = 20 for score,dist,i,html,doc in results[:N]: display(HTML(f"{score} :: {dist:.03f} :: {i}<br>{html}")) """ Explanation: Display results End of explanation """
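As a standalone sanity check of the scoring rule used above (a number in the criteria list sets the current weight, a string is a case-insensitive regex fragment with `_` standing for a word boundary, and each matching fragment adds the current weight once), here is a minimal self-contained mirror of score_text. It is my re-implementation for illustration, not the notebook's exact code.

```python
import re

def score_text(text, criteria):
    # Mirror of the notebook's scorer: numbers switch the current weight,
    # strings are regex fragments ('_' marks a word boundary); each
    # fragment that matches anywhere adds the current weight once.
    total, value = 0, 1
    for c in criteria:
        if type(c) in (int, float):
            value = c
        else:
            pattern = c.replace('_', r'\b')
            if re.findall(pattern, text, re.I):
                total += value
    return total

sample = "Patients on mechanical ventilation were discharged after 21 days."
criteria = [50, 'mechanical', 'ventilat', 2, 'discharge', '_died', 1, '_day']
print(score_text(sample, criteria))  # prints 103
```

The leading underscore anchors '_day' at a word boundary, so it matches "days" but not "Sunday"; that is what the word-boundary underscores in the notebook's criteria buy.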
Repository: ES-DOC/esdoc-jupyterhub, file: notebooks/cnrm-cerfacs/cmip6/models/cnrm-esm2-1/ocnbgchem.ipynb, license: GPL-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'cnrm-esm2-1', 'ocnbgchem') """ Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem MIP Era: CMIP6 Institute: CNRM-CERFACS Source ID: CNRM-ESM2-1 Topic: Ocnbgchem Sub-Topics: Tracers. Properties: 65 (37 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:52 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks 4. Key Properties --&gt; Transport Scheme 5. Key Properties --&gt; Boundary Forcing 6. Key Properties --&gt; Gas Exchange 7. Key Properties --&gt; Carbon Chemistry 8. Tracers 9. Tracers --&gt; Ecosystem 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton 11. Tracers --&gt; Ecosystem --&gt; Zooplankton 12. Tracers --&gt; Disolved Organic Matter 13. Tracers --&gt; Particules 14. Tracers --&gt; Dic Alkalinity 1. Key Properties Ocean Biogeochemistry key properties 1.1. 
Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean biogeochemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean biogeochemistry model code (PISCES 2.0,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Geochemical" # "NPZD" # "PFT" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean biogeochemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Fixed" # "Variable" # "Mix of both" # TODO - please enter value(s) """ Explanation: 1.4. Elemental Stoichiometry Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe elemental stoichiometry (fixed, variable, mix of the two) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.5. Elemental Stoichiometry Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe which elements have fixed/variable stoichiometry End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of all prognostic tracer variables in the ocean biogeochemistry component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.7. Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of all diagnostic tracer variables in the ocean biogeochemistry component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.damping') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.8. Damping Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any tracer damping used (such as artificial correction or relaxation to climatology,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport Time stepping method for passive tracers transport in ocean biogeochemistry 2.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time stepping framework for passive tracers End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for passive tracers (if different from ocean) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks Time stepping framework for biology sources and sinks in ocean biogeochemistry 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time stepping framework for biology sources and sinks End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for biology sources and sinks (if different from ocean) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline" # "Online" # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Transport Scheme Transport scheme in ocean biogeochemistry 4.1. 
Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transport scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Use that of ocean model" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 4.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Transport scheme used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Use Different Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe transport scheme if different than that of ocean model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Atmospheric Chemistry model" # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Boundary Forcing Properties of biogeochemistry boundary forcing 5.1. Atmospheric Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how atmospheric deposition is modeled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Land Surface model" # TODO - please enter value(s) """ Explanation: 5.2.
River Input Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river input is modeled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Sediments From Boundary Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List which sediments are specified from boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.4. Sediments From Explicit Model Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List which sediments are specified from explicit sediment model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Gas Exchange *Properties of gas exchange in ocean biogeochemistry* 6.1. CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CO2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.2.
CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe CO2 gas exchange End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.3. O2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is O2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.4. O2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe O2 gas exchange End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.5. DMS Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is DMS gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.6. DMS Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify DMS gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.7. N2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.8. N2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.9. N2O Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2O gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.10. N2O Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2O gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.11. CFC11 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC11 gas exchange modeled ? 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.12. CFC11 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC11 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.13. CFC12 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC12 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.14. CFC12 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC12 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.15. SF6 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is SF6 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.16. 
SF6 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify SF6 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.17. 13CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 13CO2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.18. 13CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 13CO2 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.19. 14CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 14CO2 gas exchange modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.20. 14CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 14CO2 gas exchange scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.21. Other Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any other gas exchange End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other protocol" # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Carbon Chemistry Properties of carbon chemistry biogeochemistry 7.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how carbon chemistry is modeled End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea water" # "Free" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7.2. PH Scale Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, describe pH scale. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Constants If Not OMIP Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, list carbon chemistry constants. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Tracers Ocean biogeochemistry tracers 8.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of tracers in ocean biogeochemistry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 8.2. Sulfur Cycle Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sulfur cycle modeled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrogen (N)" # "Phosphorous (P)" # "Silicium (S)" # "Iron (Fe)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.3. Nutrients Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List nutrient species present in ocean biogeochemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrates (NO3)" # "Amonium (NH4)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.4. Nitrous Species If N Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If nitrogen present, list nitrous species. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Dentrification" # "N fixation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.5. Nitrous Processes If N Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If nitrogen present, list nitrous processes. 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Tracers --&gt; Ecosystem Ecosystem properties in ocean biogeochemistry 9.1. Upper Trophic Levels Definition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Definition of upper trophic level (e.g. based on size) ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Upper Trophic Levels Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Define how upper trophic level are treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "PFT including size based (specify both below)" # "Size based only (specify below)" # "PFT only (specify below)" # TODO - please enter value(s) """ Explanation: 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton Phytoplankton properties in ocean biogeochemistry 10.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of phytoplankton End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Diatoms" # "Nfixers" # "Calcifiers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.2. 
Pft Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Phytoplankton functional types (PFT) (if applicable) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microphytoplankton" # "Nanophytoplankton" # "Picophytoplankton" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.3. Size Classes Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Phytoplankton size classes (if applicable) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "Size based (specify below)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11. Tracers --&gt; Ecosystem --&gt; Zooplankton Zooplankton properties in ocean biogeochemistry 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of zooplankton End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microzooplankton" # "Mesozooplankton" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Size Classes Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Zooplankton size classes (if applicable) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 12. 
Tracers --&gt; Disolved Organic Matter Disolved organic matter properties in ocean biogeochemistry 12.1. Bacteria Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there bacteria representation ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Labile" # "Semi-labile" # "Refractory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. Lability Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe treatment of lability in dissolved organic matter End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diagnostic" # "Diagnostic (Martin profile)" # "Diagnostic (Balast)" # "Prognostic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Tracers --&gt; Particules Particulate carbon properties in ocean biogeochemistry 13.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is particulate carbon represented in ocean biogeochemistry? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "POC" # "PIC (calcite)" # "PIC (aragonite" # "BSi" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Types If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, type(s) of particulate matter taken into account End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "No size spectrum used" # "Full size spectrum" # "Discrete size classes (specify which below)" # TODO - please enter value(s) """ Explanation: 13.3. Size If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13.4. Size If Discrete Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic and discrete size, describe which size classes are used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Function of particule size" # "Function of particule type (balast)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Sinking Speed If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, method for calculation of sinking speed of particules End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "C13" # "C14)" # TODO - please enter value(s) """ Explanation: 14. Tracers --&gt; Dic Alkalinity DIC and alkalinity properties in ocean biogeochemistry 14.1. 
Carbon Isotopes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which carbon isotopes are modelled (C13, C14)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 14.2. Abiotic Carbon Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is abiotic carbon modelled ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Prognostic" # "Diagnostic)" # TODO - please enter value(s) """ Explanation: 14.3. Alkalinity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is alkalinity modelled ? End of explanation """
termanli/CLIOL
你好,Colaboratory.ipynb
lgpl-3.0
import tensorflow as tf

input1 = tf.ones((2, 3))
input2 = tf.reshape(tf.range(1, 7, dtype=tf.float32), (2, 3))
output = input1 + input2

with tf.Session():
    result = output.eval()
result

#help(tf.reshape)
#help(tf.range)
#tf.range(1, 7, dtype=tf.float32)
#help(tf.reshape(tf.range(1, 10, dtype=tf.float32), (3, 3)))

with tf.Session():
    result = tf.reshape(tf.range(1, 10, dtype=tf.float32), (3, 3)).eval()
result
"""
Explanation: <a href="https://colab.research.google.com/github/termanli/CLIOL/blob/master/%E4%BD%A0%E5%A5%BD%EF%BC%8CColaboratory.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<img height="60px" src="/img/colab_favicon.ico" align="left" hspace="20px" vspace="5px">
Welcome to Colaboratory!
Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. To learn more, see our FAQ.
Getting started
Colaboratory overview
Loading and saving data: local files, Drive, Sheets, Google Cloud Storage
Importing libraries and installing dependencies
Using Google Cloud BigQuery
Forms, charts, Markdown, and widgets
TensorFlow with GPU support
Machine Learning Crash Course: Intro to Pandas and First Steps with TensorFlow
Key features
Running TensorFlow code
With Colaboratory, you can execute TensorFlow code in your browser with a single click. The example below shows the addition of two matrices.
$\begin{bmatrix} 1. & 1. & 1. \\ 1. & 1. & 1. \\ \end{bmatrix} + \begin{bmatrix} 1. & 2. & 3. \\ 4. & 5. & 6. \\ \end{bmatrix} = \begin{bmatrix} 2. & 3. & 4. \\ 5. & 6. & 7. \\ \end{bmatrix}$
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(20)
y = [x_i + np.random.randn(1) for x_i in x]
a, b = np.polyfit(x, y, 1)
_ = plt.plot(x, y, 'o', np.arange(20), a*np.arange(20)+b, '-')
"""
Explanation: GitHub
You can save a copy of a Colab notebook by going to File > "Save a copy in GitHub…".
Any .ipynb on GitHub can be loaded simply by appending its path to colab.research.google.com/github/. For example, colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb will load that .ipynb from GitHub.
Visualization
Colaboratory includes many widely used libraries (such as matplotlib), which simplifies visualizing data.
End of explanation
"""
!pip install -q matplotlib-venn
from matplotlib_venn import venn2
_ = venn2(subsets = (3, 2, 1))
"""
Explanation: Want to use a new library? Install it with a pip install command at the top of the notebook. You can then use the library anywhere else in the notebook. To learn how to import commonly used libraries, see the importing libraries example notebook.
End of explanation
"""
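The element-wise matrix addition above can also be checked without TensorFlow. Here is a plain-Python sketch of the same sum (pure lists, no session needed):

```python
# Element-wise sum of two equally-shaped matrices, mirroring the
# TensorFlow example above: a 2x3 matrix of ones plus [[1,2,3],[4,5,6]].
def mat_add(a, b):
    """Element-wise sum of two nested lists with identical shapes."""
    return [[x + y for x, y in zip(row_a, row_b)] for row_a, row_b in zip(a, b)]

input1 = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
input2 = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
result = mat_add(input1, input2)
print(result)  # [[2.0, 3.0, 4.0], [5.0, 6.0, 7.0]]
```

TensorFlow does the same arithmetic, only deferred: `input1 + input2` builds a graph node, and the session's `eval()` is what actually produces the numbers.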
y2ee201/Deep-Learning-Nanodegree
my-experiments/Autoencoders/Autoencoder Keras.ipynb
mit
import numpy as np
from keras.layers import Dense
from keras.models import Sequential
from keras.optimizers import Adam
from sklearn.model_selection import train_test_split
from PIL import Image
from matplotlib.pyplot import imshow
"""
Explanation: Simple Autoencoder using Keras
Autoencoders are models that try to condense a high-dimensional source into a low-dimensional intermediary signature. This signature is then passed through a decoder that reproduces data in the dimensions of the source. Autoencoders can be thought of as compression algorithms, although lossy ones. In the example below, we take the MNIST digit dataset and use a neural network with a single hidden layer to encode images from a 784-dimensional space into a 32-dimensional signature, then try to replicate the 784-dimensional image from that signature. The goal is a simple encoder/decoder that can compress images and replicate them with acceptable accuracy.
End of explanation
"""
data = np.genfromtxt('train.csv', delimiter=',', skip_header=1)
print(data.shape)
"""
Explanation: Data
We are going to use the MNIST digit recognition dataset from Kaggle. It consists of 42000 handwritten digits converted to images and is one of the datasets most widely used in "hello world" programs for computer vision.
End of explanation
"""
X = data[:,1:]
y = data[:,0]
X = X/255
print(X.shape)
"""
Explanation: Preprocessing
In the MNIST dataset, each sample is a 28 x 28 pixel image flattened into a 784-dimensional space. Pixel values range from 0 to 255, and this dataset has only one channel. To help the neural network converge faster we normalize the pixel data to the range 0 to 1 by dividing the pixel values by 255. The first column of the dataset is the label for each image; we can discard it since we aren't interested in classification.
End of explanation
"""
model = Sequential()
model.add(Dense(32, input_dim=784, activation='relu'))
model.add(Dense(784, activation='linear'))
model.compile(loss='mse', optimizer=Adam(0.001))
"""
Explanation: Neural Network
We now build the neural network that will be our autoencoder. Its architecture is simple: one hidden layer of 32 neurons with rectified linear activation, sandwiched between input and output layers of 784 neurons each to match the sample dimensions. Because this is an encoder/decoder, the training input and target output are the same data. We use Keras to add the fully connected layers.
End of explanation
"""
model.fit(x=X, y=X, epochs=20, batch_size=200, verbose=1)
"""
Explanation: Training
This is straightforward: we feed the network the images and optimize the loss between its outputs and those same images.
End of explanation
"""
i = np.random.randint(42000, size=1)
encoded = data[i,1:]
decoded = model.predict((X[i,]).reshape(-1,784))
decoded = np.clip(decoded * 255, 0, 255)  # clip before casting to uint8

%matplotlib inline
im = Image.fromarray(encoded.reshape(28,28).astype(np.uint8))
imshow(im)

%matplotlib inline
im = Image.fromarray(decoded.reshape(28,28).astype(np.uint8))
imshow(im)
"""
Explanation: Validation
We now check visually how well the model performs. The training above reached an acceptable loss, but to see whether the model can replicate a digit satisfactorily we run a random sample through the encoder/decoder and compare the images. As you can see, the replication is lossy, with more jagged artifacts than the input due to the lower-resolution intermediate representation.
End of explanation
"""
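Beyond eyeballing the images, "lossy" can be quantified by scoring the reconstruction with the same mean squared error the `mse` training loss minimizes. A self-contained sketch on toy pixel vectors (illustrative values, not real MNIST samples):

```python
# Score a reconstruction with mean squared error, the quantity the
# autoencoder's 'mse' loss minimizes during training.
def mse(original, reconstructed):
    """Mean squared error between two equal-length pixel vectors."""
    assert len(original) == len(reconstructed)
    return sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)

# 784 input dimensions squeezed through a 32-dimensional code:
compression_ratio = 784 / 32  # 24.5x fewer numbers in the signature

original = [0.0, 0.5, 1.0, 0.5]       # toy normalized pixels
reconstructed = [0.1, 0.4, 0.9, 0.5]  # imperfect, as any lossy code is
error = mse(original, reconstructed)
print(compression_ratio, error)
```

A perfect reconstruction scores 0; the further the decoder drifts from its input, the larger this number grows.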
probml/pyprobml
notebooks/book1/01/iris_dtree.ipynb
mit
# Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os import pandas as pd from matplotlib.colors import ListedColormap from sklearn.datasets import load_iris import seaborn as sns # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt # Font sizes SIZE_SMALL = 18 # 14 SIZE_MEDIUM = 20 # 18 SIZE_LARGE = 24 # https://stackoverflow.com/a/39566040 plt.rc("font", size=SIZE_SMALL) # controls default text sizes plt.rc("axes", titlesize=SIZE_SMALL) # fontsize of the axes title plt.rc("axes", labelsize=SIZE_SMALL) # fontsize of the x and y labels plt.rc("xtick", labelsize=SIZE_SMALL) # fontsize of the tick labels plt.rc("ytick", labelsize=SIZE_SMALL) # fontsize of the tick labels plt.rc("legend", fontsize=SIZE_SMALL) # legend fontsize plt.rc("figure", titlesize=SIZE_LARGE) # fontsize of the figure title """ Explanation: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/iris_dtree.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Decision tree classifier on Iris data Based on https://github.com/ageron/handson-ml2/blob/master/06_decision_trees.ipynb End of explanation """ iris = load_iris() X = iris.data y = iris.target print(iris.feature_names) # Convert to pandas dataframe df = pd.DataFrame(data=X, columns=iris.feature_names) df["label"] = pd.Series(iris.target_names[y], dtype="category") # we pick a color map to match that used by decision tree graphviz # cmap = ListedColormap(['#fafab0','#a0faa0', '#9898ff']) # orange, green, blue/purple # cmap = ListedColormap(['orange', 'green', 'purple']) palette = {"setosa": "orange", "versicolor": "green", "virginica": "purple"} g = sns.pairplot(df, 
vars=df.columns[0:4], hue="label", palette=palette)
# g = sns.pairplot(df, vars = df.columns[0:4], hue="label")
plt.savefig("iris_scatterplot_v2.pdf")
plt.show()

from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_iris

iris = load_iris()
print(iris.target_names)
print(iris.feature_names)
# ndx = [0, 2]  # sepal length, petal length
ndx = [2, 3]  # petal length and width
X = iris.data[:, ndx]
y = iris.target
xnames = [iris.feature_names[i] for i in ndx]
ynames = iris.target_names

def plot_surface(clf, X, y, xnames, ynames):
    n_classes = 3
    plot_step = 0.02
    markers = ["o", "s", "^"]
    plt.figure(figsize=(10, 10))
    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step), np.arange(y_min, y_max, plot_step))
    plt.tight_layout(h_pad=0.5, w_pad=0.5, pad=2.5)
    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    plt.xlabel(xnames[0])
    plt.ylabel(xnames[1])
    # we pick a color map to match that used by decision tree graphviz
    cmap = ListedColormap(["orange", "green", "purple"])
    # cmap = ListedColormap(['blue', 'orange', 'green'])
    # cmap = ListedColormap(sns.color_palette())
    plot_colors = [cmap(i) for i in range(4)]
    cs = plt.contourf(xx, yy, Z, cmap=cmap, alpha=0.5)
    # Plot the training points
    for i, color, marker in zip(range(n_classes), plot_colors, markers):
        idx = np.where(y == i)
        plt.scatter(
            X[idx, 0], X[idx, 1], label=ynames[i], edgecolor="black", color=color, s=50, cmap=cmap, marker=marker
        )
    plt.legend()
"""
Explanation: Data
End of explanation
"""
tree_clf = DecisionTreeClassifier(max_depth=2, random_state=42)
tree_clf.fit(X, y)

from graphviz import Source
from sklearn.tree import export_graphviz

export_graphviz(
    tree_clf,
    out_file="iris_tree.dot",
    feature_names=xnames,
    class_names=ynames,
    rounded=True,
    impurity=False,
    filled=True,
)
Source.from_file("iris_tree.dot")
plt.savefig("dtree_iris_depth2_tree_v2.pdf")
plot_surface(tree_clf, X, y, xnames, ynames) plt.savefig("dtree_iris_depth2_surface_v2.pdf") """ Explanation: Depth 2 End of explanation """ tree_clf = DecisionTreeClassifier(max_depth=3, random_state=42) tree_clf.fit(X, y) export_graphviz( tree_clf, out_file="iris_tree.dot", feature_names=xnames, class_names=ynames, rounded=True, impurity=False, filled=True, ) Source.from_file("iris_tree.dot") plot_surface(tree_clf, X, y, xnames, ynames) """ Explanation: Depth 3 End of explanation """ tree_clf = DecisionTreeClassifier(max_depth=None, random_state=42) tree_clf.fit(X, y) from graphviz import Source from sklearn.tree import export_graphviz export_graphviz( tree_clf, out_file="iris_tree.dot", feature_names=xnames, class_names=ynames, rounded=True, filled=False, impurity=False, ) Source.from_file("iris_tree.dot") plot_surface(tree_clf, X, y, xnames, ynames) """ Explanation: Depth unrestricted End of explanation """
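The thresholds in the trees above (e.g. a first split on petal length) are not hand-picked: by default sklearn's DecisionTreeClassifier scores candidate splits by the drop in Gini impurity, hidden in the plots here because the graphviz export uses impurity=False. A minimal sketch of that criterion:

```python
# Gini impurity of a node: 1 minus the sum of squared class proportions.
# sklearn's DecisionTreeClassifier (criterion='gini', the default) picks
# the split whose children minimize the weighted sum of this quantity.
def gini(counts):
    """Gini impurity for a node holding the given per-class sample counts."""
    total = sum(counts)
    if total == 0:
        return 0.0
    return 1.0 - sum((c / total) ** 2 for c in counts)

root = gini([50, 50, 50])  # iris root: 50 samples per class -> 2/3
left = gini([50, 0, 0])    # a child containing only setosa: pure node
print(root, left)
```

The best first split on these features isolates all 50 setosa samples in a pure (impurity 0) leaf, which is why the same top split appears in every tree above regardless of depth.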
m3at/Labelizer
Labelizer_part2.ipynb
mit
%matplotlib inline
from __future__ import absolute_import
from __future__ import print_function

# import local library
import tools
import nnlstm

# import library to build the neural network
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM
from keras.optimizers import Adam

#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
%load_ext watermark
# for reproducibility
%watermark -a 'Paul Willot' -mvp numpy,scipy,keras
"""
Explanation: Building an LSTM neural network
End of explanation
"""
X_train, y_train, X_test, y_test, feature_names, max_features, classes_names, vectorizer = tools.load_pickle("data/unpadded_4_BacObjMetCon.pickle")
"""
Explanation: Let's gather the data from the previous notebook
End of explanation
"""
X_train, X_test, y_train, y_test = nnlstm.pad_sequence(X_train, X_test, y_train, y_test, maxlen=100)
"""
Explanation: and pad each vector to a regular size (necessary for the sequence processing)
End of explanation
"""
X_train, y_train, X_test, y_test, feature_names, max_features, classes_names, vectorizer = tools.load_pickle("/Users/meat/Documents/NII/data/training_4_BacObjMetCon.pickle")
"""
Explanation: Or directly get a bigger training and testing set:
End of explanation
"""
X_train[0][:100]

# one-hot vector for the 4 different labels
y_train[0]
"""
Explanation: Our data look like this:
End of explanation
"""
%%time
# takes approximately 50s to build
dim_out = len(classes_names)

net = Sequential()
net.add(Embedding(max_features, 16))
net.add(LSTM(16, 16))
net.add(Dense(16, dim_out))
net.add(Dropout(0.5))
net.add(Activation('softmax'))

net.compile(loss='categorical_crossentropy', optimizer='adam', class_mode="categorical")
"""
Explanation: Choosing the architecture
We use the Keras library, built on Theano.
Here I choose a very simple architecture because of my low-performance system (no graphics card), but of course feel free to try others. In particular, stacking LSTM layers could improve performance, according to this paper from Karpathy (which I used a lot as a reference).
End of explanation
"""
batch_size = 100
length_train = 15000  # length of the reduced training set (set to -1 for all)
length_test = 5000    # length of the reduced testing set (set to -1 for all)
nb_epoch = 10
patience = 2          # when to apply early stopping, if necessary

history = nnlstm.train_network(net,
                               X_train[:length_train], y_train[:length_train],
                               X_test[:length_test], y_test[:length_test],
                               nb_epoch,
                               batch_size=batch_size,
                               path_save="weights",
                               patience=patience)
"""
Explanation: Training on a small subset
End of explanation
"""
net.load_weights("weights/best.hdf5")
nnlstm.show_history(history)
"""
Explanation: The weights are saved at each epoch, and you can load 'best' for the epoch with the highest (accuracy * (loss/10))
End of explanation
"""
nnlstm.evaluate_network(net, X_test[:length_test], y_test[:length_test], classes_names, length=-1)
"""
Explanation: Evaluate the network
End of explanation
"""
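pad_sequence above is a helper from the local nnlstm module; assuming it wraps Keras's pad_sequences (the usual approach), every index sequence is left-padded with zeros, and left-truncated when too long, to the fixed maxlen=100. A plain-Python sketch of that idea (the helper's real internals may differ):

```python
# Sketch of fixed-length padding as Keras's pad_sequences does it by
# default: pad on the left with 0, truncate from the left when too long.
# (Assumed behaviour of the local nnlstm.pad_sequence helper.)
def left_pad(seq, maxlen, value=0):
    """Left-pad (or left-truncate) a list of token ids to length maxlen."""
    if len(seq) >= maxlen:
        return seq[-maxlen:]
    return [value] * (maxlen - len(seq)) + seq

print(left_pad([7, 8, 9], 5))           # [0, 0, 7, 8, 9]
print(left_pad([1, 2, 3, 4, 5, 6], 5))  # [2, 3, 4, 5, 6]
```

A fixed length is what lets the LSTM process the abstracts as uniform batches of shape (batch, 100).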
0x4a50/udacity-0x4a50-deep-learning-nanodegree
language-translation/dlnd_language_translation.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) """ Explanation: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. End of explanation """ view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) """ Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation """ def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. 
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ source_id_text = [] for line in source_text.split('\n'): source_id_text.append([source_vocab_to_int[word] for word in line.split()]) target_id_text = [] for line in target_text.split('\n'): target_id_text.append([target_vocab_to_int[word] for word in line.split()] + [target_vocab_to_int['<EOS>']]) return (source_id_text, target_id_text) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) """ Explanation: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] You can get other word ids using source_vocab_to_int and target_vocab_to_int. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) """ Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() """ Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) """ Explanation: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU End of explanation """ def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ inputs = tf.placeholder(tf.int32, shape=(None, None), name="input") targets = tf.placeholder(tf.int32, shape=(None, None)) learning_rate = tf.placeholder(tf.float32, shape=(), name="learning_rate") keep_prob = tf.placeholder(tf.float32, shape=(), name="keep_prob") target_sequence_length = tf.placeholder(tf.int32, shape=(None,), name="target_sequence_length") max_target_length = tf.reduce_max(target_sequence_length, name="max_target_len") source_sequence_length= tf.placeholder(tf.int32, shape=(None,), name="source_sequence_length") return (inputs, targets, learning_rate, keep_prob, target_sequence_length, max_target_length, source_sequence_length) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) """ Explanation: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - 
process_decoder_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:

Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1.
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1.

Return the placeholders in the following tuple: (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
"""
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for decoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
    sliced = tf.strided_slice(target_data, [0,0], [batch_size, -1], strides=[1,1])
    return tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), sliced], 1)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)
"""
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
End of explanation """ from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) """ embedding = tf.contrib.layers.embed_sequence( rnn_inputs, vocab_size=source_vocab_size, embed_dim=encoding_embedding_size ) stacked_lstm = tf.contrib.rnn.MultiRNNCell( [tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LSTMCell(rnn_size), keep_prob) for _ in range(num_layers)] ) output, state = tf.nn.dynamic_rnn( stacked_lstm, embedding, sequence_length=source_sequence_length, dtype=tf.float32 ) return output, state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) """ Explanation: Encoding Implement encoding_layer() to create a Encoder RNN layer: * Embed the encoder input using tf.contrib.layers.embed_sequence * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper * Pass cell and embedded input to tf.nn.dynamic_rnn() End of explanation """ def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability 
:return: BasicDecoderOutput containing training logits and sample_id """ helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length) decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, initial_state=encoder_state, output_layer=output_layer) return tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_summary_length)[0] """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) """ Explanation: Decoding - Training Create a training decoding layer: * Create a tf.contrib.seq2seq.TrainingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode End of explanation """ def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens') helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id) decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer=output_layer) return tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_target_sequence_length)[0] """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW
THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) """ Explanation: Decoding - Inference Create inference decoder: * Create a tf.contrib.seq2seq.GreedyEmbeddingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode End of explanation """ def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) dec_cells = tf.contrib.rnn.MultiRNNCell( [tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LSTMCell(rnn_size), keep_prob) for _ in range(num_layers)] ) output_layer = Dense(target_vocab_size, kernel_initializer=tf.truncated_normal_initializer(stddev=0.1)) with tf.variable_scope('decode'): training_decoder_logits = decoding_layer_train( encoder_state, dec_cells, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob ) with tf.variable_scope('decode', reuse=True): infer_decoder_logits = decoding_layer_infer( encoder_state, dec_cells, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], 
            max_target_sequence_length, target_vocab_size,
            output_layer, batch_size, keep_prob
        )

    return (training_decoder_logits, infer_decoder_logits)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)

"""
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.

Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.

Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
"""

def seq2seq_model(input_data, target_data, keep_prob, batch_size,
                  source_sequence_length, target_sequence_length,
                  max_target_sentence_length,
                  source_vocab_size, target_vocab_size,
                  enc_embedding_size, dec_embedding_size,
                  rnn_size, num_layers, target_vocab_to_int):
    """
    Build the Sequence-to-Sequence part of the neural network
    :param input_data: Input placeholder
    :param target_data: Target placeholder
    :param keep_prob: Dropout keep probability placeholder
    :param batch_size: Batch Size
    :param source_sequence_length: Sequence Lengths of source sequences in the batch
    :param target_sequence_length: Sequence Lengths of target sequences in the batch
    :param max_target_sentence_length: Maximum length of target sentences
    :param source_vocab_size: Source vocabulary size
    :param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
    enc_layer = encoding_layer(
        input_data, rnn_size, num_layers, keep_prob,
        source_sequence_length, source_vocab_size, enc_embedding_size
    )
    dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
    return decoding_layer(
        dec_input, enc_layer[1], target_sequence_length, max_target_sentence_length,
        rnn_size, num_layers, target_vocab_to_int, target_vocab_size,
        batch_size, keep_prob, dec_embedding_size
    )

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)

"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:

Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function. End of explanation """ # Number of Epochs epochs = 2 # Batch Size batch_size = 128 # RNN Size rnn_size = 128 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 128 decoding_embedding_size = 128 # Learning Rate learning_rate = 0.01 # Dropout Keep Probability keep_probability = 0.7 display_step = 10 """ Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. Set keep_probability to the Dropout keep probability Set display_step to state how many steps between each debug output statement End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, 
decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) """ Explanation: Build the Graph Build the graph using the neural network you implemented. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, 
pad_targets_batch, pad_source_lengths, pad_targets_lengths """ Explanation: Batch and pad the source and target sequences End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) 
                valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)

                print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
                      .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))

    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_path)
    print('Model Trained and Saved')

"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)

"""
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
"""

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests

_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()

"""
Explanation: Checkpoint
End of explanation
"""

def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
    sentence = sentence.lower()
    word_ids = []
    for word in sentence.split():
        if word in vocab_to_int:
            word_ids.append(vocab_to_int[word])
        else:
            word_ids.append(vocab_to_int["<UNK>"])
    return word_ids

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)

"""
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.

Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the &lt;UNK&gt; word id.
End of explanation """ translate_sentence = 'he saw a old yellow truck .' """ DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) """ Explanation: Translate This will translate translate_sentence from English to French. End of explanation """
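The lookup logic in sentence_to_seq is easy to sanity-check in isolation. Below is a minimal pure-Python sketch of the same lowercase-split-and-map behaviour; the toy vocabulary here is invented for illustration (the real vocab_to_int comes from helper.load_preprocess()):

```python
def to_word_ids(sentence, vocab_to_int, unk_token='<UNK>'):
    """Lowercase the sentence, split on whitespace, and map each word to its id,
    falling back to the <UNK> id for out-of-vocabulary words."""
    return [vocab_to_int.get(word, vocab_to_int[unk_token])
            for word in sentence.lower().split()]

# Toy vocabulary, invented for this example
vocab = {'<UNK>': 0, 'he': 1, 'saw': 2, 'a': 3, 'truck': 4}
ids = to_word_ids('He saw a QUANTUM truck', vocab)
print(ids)  # [1, 2, 3, 0, 4] -- 'quantum' is out of vocabulary, so it maps to <UNK>
```

Note that every unknown word collapses onto the same id, which is why the vocabulary built during preprocessing must reserve an &lt;UNK&gt; entry.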
icrtiou/coursera-ML
ex6-SVM/3- search for the best parameters.ipynb
mit
import numpy as np
import pandas as pd
import scipy.io as sio
from sklearn import svm, metrics
from sklearn.model_selection import GridSearchCV

mat = sio.loadmat('./data/ex6data3.mat')
print(mat.keys())

training = pd.DataFrame(mat.get('X'), columns=['X1', 'X2'])
training['y'] = mat.get('y')

cv = pd.DataFrame(mat.get('Xval'), columns=['X1', 'X2'])
cv['y'] = mat.get('yval')

print(training.shape)
training.head()

print(cv.shape)
cv.head()

"""
Explanation: load data
End of explanation
"""
candidate = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100]

# gamma to comply with sklearn parameter name
combination = [(C, gamma) for C in candidate for gamma in candidate]
len(combination)

search = []

for C, gamma in combination:
    svc = svm.SVC(C=C, gamma=gamma)
    svc.fit(training[['X1', 'X2']], training['y'])
    search.append(svc.score(cv[['X1', 'X2']], cv['y']))

best_score = search[np.argmax(search)]
best_param = combination[np.argmax(search)]
print(best_score, best_param)

best_svc = svm.SVC(C=100, gamma=0.3)
best_svc.fit(training[['X1', 'X2']], training['y'])
ypred = best_svc.predict(cv[['X1', 'X2']])

print(metrics.classification_report(cv['y'], ypred))

"""
Explanation: manual grid search for $C$ and $\sigma$
http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC
End of explanation
"""
parameters = {'C': candidate, 'gamma': candidate}

svc = svm.SVC()
clf = GridSearchCV(svc, parameters, n_jobs=-1)
clf.fit(training[['X1', 'X2']], training['y'])

clf.best_params_

clf.best_score_

ypred = clf.predict(cv[['X1', 'X2']])
print(metrics.classification_report(cv['y'], ypred))

"""
Explanation: sklearn GridSearchCV
http://scikit-learn.org/stable/modules/generated/sklearn.grid_search.GridSearchCV.html#sklearn.grid_search.GridSearchCV
End of explanation
"""
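The manual search is just an exhaustive loop over the Cartesian product of candidate values followed by an argmax. The pattern can be illustrated without scikit-learn or the dataset; here the real svc.score(...) call is replaced by a made-up toy_score function, an assumption purely for demonstration:

```python
import math

candidate = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100]
combination = [(C, gamma) for C in candidate for gamma in candidate]
print(len(combination))  # 81 -- a 9 x 9 grid

def toy_score(C, gamma):
    # Stand-in for svc.score(cv_X, cv_y); constructed to peak near C=100, gamma=0.3
    return -((math.log10(C) - 2) ** 2 + (math.log10(gamma) + 0.5) ** 2)

scores = [toy_score(C, gamma) for C, gamma in combination]
best_idx = max(range(len(scores)), key=scores.__getitem__)
print(combination[best_idx])  # (100, 0.3)
```

GridSearchCV does essentially this loop for you, with cross-validation and optional parallelism (n_jobs).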
tensorflow/docs-l10n
site/es-419/tutorials/quickstart/advanced.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf

from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model

"""
Explanation: TensorFlow 2.0 quickstart for experts
<table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/quickstart/advanced"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/es-419/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/es-419/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/es-419/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table>

Note: Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that this is an accurate and up-to-date reflection of the official English documentation. If you have suggestions for improving this translation, please send a pull request to the tensorflow/docs repository. To volunteer to write or review community translations, please contact the docs@tensorflow.org list.

This is a Google Colaboratory notebook. Python programs run directly in your browser — a great way to learn and use TensorFlow. To follow this tutorial, run this notebook in Google Colab by clicking the button at the top of this page.

In Colab, select "connect to a Python runtime": at the top right of the menu bar, select CONNECT. To run all the cells of this notebook: select Runtime > Run all.

Download and install the TensorFlow 2.0 package. Import TensorFlow into your program:
End of explanation
"""
mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Add a channels dimension
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]

"""
Explanation: Load and prepare the MNIST dataset
End of explanation
"""
train_ds = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train)).shuffle(10000).batch(32)

test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)

"""
Explanation: Use tf.data to batch and shuffle the dataset:
End of explanation
"""
class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = Conv2D(32, 3, activation='relu')
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10, activation='softmax')

    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)

# Create an instance of the model
model = MyModel()

"""
Explanation: Build the tf.keras model using the Keras model subclassing API:
End of explanation
"""
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()

optimizer = tf.keras.optimizers.Adam()

"""
Explanation: Choose an optimizer and a loss function for training your model:
End of explanation
"""
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')

"""
Explanation: Choose metrics to measure the loss and accuracy of the model. These metrics accumulate values over each epoch and then print the overall result.
End of explanation
"""
@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images)
        loss = loss_object(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    train_loss(loss)
    train_accuracy(labels, predictions)

"""
Explanation: Use tf.GradientTape to train the model.
End of explanation
"""
@tf.function
def test_step(images, labels):
    predictions = model(images)
    t_loss = loss_object(labels, predictions)

    test_loss(t_loss)
    test_accuracy(labels, predictions)

EPOCHS = 5

for epoch in range(EPOCHS):
    for images, labels in train_ds:
        train_step(images, labels)

    for test_images, test_labels in test_ds:
        test_step(test_images, test_labels)

    template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
    print(template.format(epoch+1,
                          train_loss.result(),
                          train_accuracy.result()*100,
                          test_loss.result(),
                          test_accuracy.result()*100))

    # Reset the metrics for the next epoch.
    train_loss.reset_states()
    train_accuracy.reset_states()
    test_loss.reset_states()
    test_accuracy.reset_states()

"""
Explanation: Test the model:
End of explanation
"""
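The accumulate-then-reset pattern used by the metrics above can be mimicked in plain Python. This is a rough, simplified stand-in for tf.keras.metrics.Mean, written only to illustrate why reset_states() is called between epochs:

```python
class MeanMetric:
    """Minimal stand-in for tf.keras.metrics.Mean: accumulate batch values
    over an epoch, report the running mean, and reset between epochs."""
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def __call__(self, value):
        self.total += value
        self.count += 1

    def result(self):
        return self.total / self.count if self.count else 0.0

    def reset_states(self):
        self.total, self.count = 0.0, 0

train_loss_metric = MeanMetric()
for batch_loss in [2.0, 1.0, 0.0]:   # losses from three batches
    train_loss_metric(batch_loss)
print(train_loss_metric.result())    # 1.0
train_loss_metric.reset_states()
print(train_loss_metric.result())    # 0.0
```

Without the reset, losses from earlier epochs would keep leaking into later averages.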
biothings/biothings_explorer
jupyter notebooks/EXPLAIN_ACE2_hydroxychloroquine_demo.ipynb
apache-2.0
!pip install git+https://github.com/biothings/biothings_explorer#egg=biothings_explorer

"""
Explanation: Introduction
This notebook demonstrates basic usage of BioThings Explorer, an engine for autonomously querying a distributed knowledge graph. BioThings Explorer can answer two classes of queries -- "PREDICT" and "EXPLAIN". PREDICT queries are described in PREDICT_demo.ipynb. Here, we describe EXPLAIN queries and how to use BioThings Explorer to execute them. A more detailed overview of the BioThings Explorer system is provided in these slides.
EXPLAIN queries are designed to identify plausible reasoning chains to explain the relationship between two entities. For example, in this notebook, we explore the question:
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;"Why does hydroxychloroquine have an effect on ACE2?"
To experiment with an executable version of this notebook, load it in Google Colaboratory.
Step 0: Load BioThings Explorer modules
First, install the biothings_explorer and biothings_schema packages, as described in this README. This only needs to be done once (but we include it here for compatibility with Colab).
End of explanation """ # import modules from biothings_explorer from biothings_explorer.hint import Hint from biothings_explorer.user_query_dispatcher import FindConnection """ Explanation: Next, import the relevant modules: Hint: Find corresponding bio-entity representation used in BioThings Explorer based on user input (could be any database IDs, symbols, names) FindConnection: Find intermediate bio-entities which connects user specified input and output End of explanation """ ht = Hint() # find all potential representations of ACE2 ace2_hint = ht.query("ACE2") # select the correct representation of ACE2 ace2 = ace2_hint['Gene'][0] ace2 # find all potential representations of hydroxychloroquine hydroxychloroquine_hint = ht.query("hydroxychloroquine") # select the correct representation of hydroxychloroquine hydroxychloroquine = hydroxychloroquine_hint['ChemicalSubstance'][0] hydroxychloroquine """ Explanation: Step 1: Find representation of "ACE2" and "hydroxychloroquine" in BTE In this step, BioThings Explorer translates our query strings "ACE2" and "hydroxychloroquine " into BioThings objects, which contain mappings to many common identifiers. Generally, the top result returned by the Hint module will be the correct item, but you should confirm that using the identifiers shown. Search terms can correspond to any child of BiologicalEntity from the Biolink Model, including DiseaseOrPhenotypicFeature (e.g., "lupus"), ChemicalSubstance (e.g., "acetaminophen"), Gene (e.g., "CDK2"), BiologicalProcess (e.g., "T cell differentiation"), and Pathway (e.g., "Citric acid cycle"). End of explanation """ help(FindConnection.__init__) """ Explanation: Step 2: Find intermediate nodes connecting ACE2 and hydroxychloroquine In this section, we find all paths in the knowledge graph that connect ACE2 and hydroxychloroquine . To do that, we will use FindConnection. This class is a convenient wrapper around two advanced functions for query path planning and query path execution. 
More advanced features for both query path planning and query path execution are in development and will be documented in the coming months. The parameters for FindConnection are described below:
End of explanation
"""
fc = FindConnection(input_obj=ace2, output_obj=hydroxychloroquine, intermediate_nodes=['BiologicalEntity'])

"""
Explanation: Here, we formulate a FindConnection query with "ACE2" as the input_obj and "hydroxychloroquine" as the output_obj. We further specify with the intermediate_nodes parameter that we are looking for paths joining ACE2 and hydroxychloroquine with one intermediate node that is a BiologicalEntity. (The ability to search for longer reasoning paths that include additional intermediate nodes will be added shortly.)
End of explanation
"""
# setting verbose=True displays all the steps BTE takes to find the connection
fc.connect(verbose=True)

"""
Explanation: We next execute the connect method, which performs the query path planning and query path execution process. In short, BioThings Explorer deconstructs the query into individual API calls, executes those API calls, and then assembles the results. A verbose log of this process is displayed below:
End of explanation
"""
df = fc.display_table_view()
df.head()

"""
Explanation: Step 3: Display and Filter results
This section demonstrates post-query filtering done in Python. Later, more advanced filtering functions will be added to the query path execution module for interleaved filtering, thereby enabling longer query paths. More details to come...
First, all matching paths can be exported to a data frame. Let's examine a sample of those results.
End of explanation
"""
df.node1_type.unique()

"""
Explanation: While most results are based on edges from semmed, edges from DGIdb, biolink, disgenet, mydisease.info and drugcentral were also retrieved from their respective APIs.
Next, let's look to see which genes are mentioned the most.
End of explanation
"""
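Once the matching paths are in a data frame, "display and filter" reduces to ordinary group-by and count operations. As a rough pure-Python sketch of that idea — the rows below are invented placeholders, not real BioThings Explorer output:

```python
from collections import Counter

# Hypothetical (input, intermediate, output) paths standing in for rows of df
paths = [
    ('ACE2', 'GeneA', 'hydroxychloroquine'),
    ('ACE2', 'GeneA', 'hydroxychloroquine'),
    ('ACE2', 'GeneB', 'hydroxychloroquine'),
]
intermediate_counts = Counter(p[1] for p in paths)
print(intermediate_counts.most_common(1))  # [('GeneA', 2)]
```

Intermediates supported by many independent paths are usually the most promising explanations to inspect first.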
OceanPARCELS/parcels
parcels/examples/tutorial_timevaryingdepthdimensions.ipynb
mit
%matplotlib inline
from parcels import FieldSet, ParticleSet, JITParticle, AdvectionRK4, ParticleFile, plotTrajectoriesFile
import numpy as np
from datetime import timedelta as delta
from os import path

"""
Explanation: Tutorial on how to use S-grids with time-evolving depth dimensions
Some hydrodynamic models (such as SWASH) have time-evolving depth dimensions, for example because they follow the waves on the free surface. Parcels can work with these types of models, but it is a bit involved to set up. That is why we explain here how to run Parcels on FieldSets with time-evolving depth dimensions.
End of explanation
"""
filenames = path.join('SWASH_data', 'field_*.nc')
variables = {'U': 'cross-shore velocity',
             'V': 'along-shore velocity',
             'depth_u': 'time varying depth_u'}

"""
Explanation: Here, we use sample data from the SWASH model. We first set the filenames and variables
End of explanation
"""
dimensions = {'U': {'lon': 'x', 'lat': 'y', 'depth': 'not_yet_set', 'time': 't'},
              'V': {'lon': 'x', 'lat': 'y', 'depth': 'not_yet_set', 'time': 't'},
              'depth_u': {'lon': 'x', 'lat': 'y', 'depth': 'not_yet_set', 'time': 't'}}

"""
Explanation: Now, the first key step when reading time-evolving depth dimensions is that we specify depth as 'not_yet_set' in the dimensions dictionary
End of explanation
"""
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions, mesh='flat', allow_time_extrapolation=True)
fieldset.U.set_depth_from_field(fieldset.depth_u)
fieldset.V.set_depth_from_field(fieldset.depth_u)

"""
Explanation: Then, after we create the FieldSet object, we set the depth dimension of the relevant Fields to fieldset.depth_u, using the Field.set_depth_from_field() method
End of explanation
"""
pset = ParticleSet(fieldset, JITParticle, lon=9.5, lat=12.5, depth=-0.1)
pfile = pset.ParticleFile("SwashParticles", outputdt=delta(seconds=0.05))

pset.execute(AdvectionRK4, dt=delta(seconds=0.005), output_file=pfile)

pfile.export()  # export the
trajectory data to a netcdf file
plotTrajectoriesFile('SwashParticles.nc');

"""
Explanation: Now, we can create a ParticleSet, run it, and plot the resulting trajectories
End of explanation
"""
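Conceptually, a time-evolving depth dimension means the vertical grid itself must be interpolated in time before a particle can be located in it. Below is a rough, stdlib-only sketch of that linear time interpolation, with invented numbers — it is an illustration of the idea, not the actual Parcels implementation:

```python
def interp_depth_levels(t, t0, t1, depths0, depths1):
    """Linearly interpolate each vertical level between two field snapshots."""
    w = (t - t0) / (t1 - t0)
    return [(1 - w) * d0 + w * d1 for d0, d1 in zip(depths0, depths1)]

# Free surface rising by 0.2 m between t=0 s and t=1 s
levels = interp_depth_levels(0.5, 0.0, 1.0,
                             [-1.0, -0.5, 0.0],   # levels at t=0
                             [-1.0, -0.5, 0.2])   # levels at t=1
print(levels)
```

Only after this temporal interpolation of the grid can the usual spatial interpolation of U and V proceed, which is why the depth dimension must be declared 'not_yet_set' and attached via set_depth_from_field().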
suriyan/ethnicolr
ethnicolr/examples/ethnicolr_app_contrib20xx-census_ln.ipynb
mit
import pandas as pd

df = pd.read_csv('/opt/names/fec_contrib/contribDB_2000.csv', nrows=100)
df.columns

from ethnicolr import census_ln

"""
Explanation: Application: 2000/2010 Political Campaign Contributions by Race
Using ethnicolr, we look to answer three basic questions:
<ol> <li>What proportion of contributions were made by blacks, whites, Hispanics, and Asians? <li>What proportion of unique contributors were blacks, whites, Hispanics, and Asians? <li>What proportion of total donations were given by blacks, whites, Hispanics, and Asians? </ol>
End of explanation
"""
df = pd.read_csv('/opt/names/fec_contrib/contribDB_2000.csv',
                 usecols=['amount', 'contributor_type', 'contributor_lname',
                          'contributor_fname', 'contributor_name'])
sdf = df[df.contributor_type=='I'].copy()
rdf2000 = census_ln(sdf, 'contributor_lname', 2000)
rdf2000['year'] = 2000

df = pd.read_csv('/opt/names/fec_contrib/contribDB_2010.csv.zip',
                 usecols=['amount', 'contributor_type', 'contributor_lname',
                          'contributor_fname', 'contributor_name'])
sdf = df[df.contributor_type=='I'].copy()
rdf2010 = census_ln(sdf, 'contributor_lname', 2010)
rdf2010['year'] = 2010

rdf = pd.concat([rdf2000, rdf2010])
rdf.head(20)

rdf.replace('(S)', 0, inplace=True)
rdf[['pctwhite', 'pctblack', 'pctapi', 'pctaian', 'pct2prace', 'pcthispanic']] = rdf[['pctwhite', 'pctblack', 'pctapi', 'pctaian', 'pct2prace', 'pcthispanic']].astype(float)

"""
Explanation: Load and Subset on Individual Contributors
End of explanation
"""
End of explanation """ udf = rdf.drop_duplicates(subset=['contributor_name']).copy() udf['white'] = udf.pctwhite / 100.0 udf['black'] = udf.pctblack / 100.0 udf['api'] = udf.pctapi / 100.0 udf['hispanic'] = udf.pcthispanic / 100.0 gdf = udf.groupby(['year']).agg({'white': 'sum', 'black': 'sum', 'api': 'sum', 'hispanic': 'sum'}) gdf.apply(lambda r: r / r.sum(), axis=1).style.format("{:.2%}") """ Explanation: What proportion of the donors were blacks, whites, Hispanics, and Asians? End of explanation """ rdf['white'] = rdf.amount * rdf.pctwhite / 100.0 rdf['black'] = rdf.amount * rdf.pctblack / 100.0 rdf['api'] = rdf.amount * rdf.pctapi / 100.0 rdf['hispanic'] = rdf.amount * rdf.pcthispanic / 100.0 gdf = rdf.groupby(['year']).agg({'white': 'sum', 'black': 'sum', 'api': 'sum', 'hispanic': 'sum'}) / 10e6 gdf.style.format("{:0.2f}") gdf.apply(lambda r: r / r.sum(), axis=1).style.format("{:.2%}") """ Explanation: What proportion of the total donation was given by blacks, whites, Hispanics, and Asians? End of explanation """
cranium/deep-learning
first-neural-network/DLND Your first neural network.ipynb
mit
%matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt """ Explanation: Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more. End of explanation """ data_path = 'Bike-Sharing-Dataset/hour.csv' rides = pd.read_csv(data_path) rides.head() """ Explanation: Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! End of explanation """ rides[:24*10].plot(x='dteday', y='cnt') """ Explanation: Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model. 
End of explanation """ dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] for each in dummy_fields: dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) rides = pd.concat([rides, dummies], axis=1) fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr'] data = rides.drop(fields_to_drop, axis=1) data.head() """ Explanation: Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies(). End of explanation """ quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'] # Store scalings in a dictionary so we can convert back later scaled_features = {} for each in quant_features: mean, std = data[each].mean(), data[each].std() scaled_features[each] = [mean, std] data.loc[:, each] = (data[each] - mean)/std """ Explanation: Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions. End of explanation """ # Save the last 21 days test_data = data[-21*24:] data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields] """ Explanation: Splitting the data into training, testing, and validation sets We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders. 
End of explanation """ # Hold out the last 60 days of the remaining data as a validation set train_features, train_targets = features[:-60*24], targets[:-60*24] val_features, val_targets = features[-60*24:], targets[-60*24:] """ Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). End of explanation """ def sigmoid(x): return 1/(1+np.exp(-x)) def sigmoid_prime(x): return sigmoid(x) * (1 - sigmoid(x)) class NeuralNetwork(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize weights self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.input_nodes)) self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5, (self.output_nodes, self.hidden_nodes)) self.lr = learning_rate #### Set this to your implemented sigmoid function #### # Activation function is the sigmoid function self.activation_function = sigmoid def train(self, inputs_list, targets_list): # Convert inputs list to 2d array inputs = np.array(inputs_list, ndmin=2).T targets = np.array(targets_list, ndmin=2).T #### Implement the forward pass here #### ### Forward pass ### # TODO: Hidden layer hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # TODO: Output layer final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer final_outputs = final_inputs # signals from final output layer #### Implement the backward pass here #### ### Backward pass ### output_errors = targets - final_outputs # Output 
layer error is the difference between desired target and actual output. output_grad = 1 hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors) # errors propagated to the hidden layer hidden_grad = sigmoid_prime(hidden_inputs) # TODO: sigmoid prime can be further reduced self.weights_hidden_to_output += self.lr * np.dot(output_grad * output_errors, hidden_outputs.T) # update hidden-to-output weights with gradient descent step self.weights_input_to_hidden += self.lr * np.dot(hidden_grad * hidden_errors, inputs.T) # update input-to-hidden weights with gradient descent step def run(self, inputs_list): # Run a forward pass through the network inputs = np.array(inputs_list, ndmin=2).T #### Implement the forward pass here #### # TODO: Hidden layer hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # TODO: Output layer final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer final_outputs = final_inputs # signals from final output layer return final_outputs def MSE(y, Y): return np.mean((y-Y)**2) """ Explanation: Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. 
We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation. We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation. Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$. Below, you have these tasks: 1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function. 2. Implement the forward pass in the train method. 3. Implement the backpropagation algorithm in the train method, including calculating the output error. 4. Implement the forward pass in the run method. End of explanation """ import sys ### Set the hyperparameters here ### epochs = 1500 learning_rate = .01 hidden_nodes = 6 output_nodes = 1 N_i = train_features.shape[1] network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) losses = {'train':[], 'validation':[]} for e in range(epochs): # Go through a random batch of 128 records from the training data set batch = np.random.choice(train_features.index, size=128) for record, target in zip(train_features.ix[batch].values, train_targets.ix[batch]['cnt']): network.train(record, target) # Printing out the training progress train_loss = MSE(network.run(train_features), train_targets['cnt'].values) val_loss = MSE(network.run(val_features), val_targets['cnt'].values) sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \ + "% ... Training loss: " + str(train_loss)[:5] \ + " ... 
Validation loss: " + str(val_loss)[:5]) losses['train'].append(train_loss) losses['validation'].append(val_loss) plt.plot(losses['train'], label='Training loss') plt.plot(losses['validation'], label='Validation loss') plt.legend() plt.ylim(ymax=0.5) """ Explanation: Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of epochs This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. 
Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. End of explanation """ fig, ax = plt.subplots(figsize=(8,4)) mean, std = scaled_features['cnt'] predictions = network.run(test_features)*std + mean ax.plot(predictions[0], label='Prediction') ax.plot((test_targets['cnt']*std + mean).values, label='Data') ax.set_xlim(right=len(predictions)) ax.legend() dates = pd.to_datetime(rides.ix[test_data.index]['dteday']) dates = dates.apply(lambda d: d.strftime('%b %d')) ax.set_xticks(np.arange(len(dates))[12::24]) _ = ax.set_xticklabels(dates[12::24], rotation=45) """ Explanation: Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. 
End of explanation """ import unittest inputs = [0.5, -0.2, 0.1] targets = [0.4] test_w_i_h = np.array([[0.1, 0.4, -0.3], [-0.2, 0.5, 0.2]]) test_w_h_o = np.array([[0.3, -0.1]]) class TestMethods(unittest.TestCase): ########## # Unit tests for data loading ########## def test_data_path(self): # Test that file path to dataset has been unaltered self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv') def test_data_loaded(self): # Test that data frame loaded self.assertTrue(isinstance(rides, pd.DataFrame)) ########## # Unit tests for network functionality ########## def test_activation(self): network = NeuralNetwork(3, 2, 1, 0.5) # Test that the activation function is a sigmoid self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5)))) def test_train(self): # Test that weights are updated correctly on training network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() network.train(inputs, targets) self.assertTrue(np.allclose(network.weights_hidden_to_output, np.array([[ 0.37275328, -0.03172939]]))) self.assertTrue(np.allclose(network.weights_input_to_hidden, np.array([[ 0.10562014, 0.39775194, -0.29887597], [-0.20185996, 0.50074398, 0.19962801]]))) def test_run(self): # Test correctness of run method network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() self.assertTrue(np.allclose(network.run(inputs), 0.09998924)) suite = unittest.TestLoader().loadTestsFromModule(TestMethods()) unittest.TextTestRunner().run(suite) """ Explanation: Thinking about your results Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does? Note: You can edit the text in this cell by double clicking on it. 
When you want to render the text, press control + enter Your answer below The model does a good job of predicting the data - especially the daily upswings and downswings. It didn't do as well at handling the days from Dec 21 - 31. The late December holiday season seems like it would be a difficult time to predict as it doesn't follow other holidays throughout the year. The training data only covers 2 years, so additional neurons and training data would help the network recognize the holiday season. Unit tests Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project. End of explanation """
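The forward pass implemented in the class above can be sketched standalone. The weights and input vector below are the ones from the notebook's own `test_run` unit test, so the result can be checked against its expected value:

```python
import numpy as np

# Standalone sketch of the forward pass: a sigmoid hidden layer followed by
# an identity (f(x) = x) output layer, using the test_run weights.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w_i_h = np.array([[0.1, 0.4, -0.3],
                  [-0.2, 0.5, 0.2]])    # hidden_nodes x input_nodes
w_h_o = np.array([[0.3, -0.1]])         # output_nodes x hidden_nodes

x = np.array([[0.5], [-0.2], [0.1]])    # one record as a column vector
hidden_outputs = sigmoid(w_i_h @ x)     # hidden layer activations
final_outputs = w_h_o @ hidden_outputs  # identity output, so no activation
print(final_outputs.item())             # approx. 0.09998924, as in test_run
```

Because the output activation is the identity, its derivative is 1, which is why the backward pass above multiplies the output error by `output_grad = 1`.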
sebp/scikit-survival
doc/user_guide/coxnet.ipynb
gpl-3.0
import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline from sksurv.datasets import load_breast_cancer from sksurv.linear_model import CoxPHSurvivalAnalysis, CoxnetSurvivalAnalysis from sksurv.preprocessing import OneHotEncoder from sklearn.model_selection import GridSearchCV, KFold from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler """ Explanation: Penalized Cox Models Cox's proportional hazards model is often an appealing model, because its coefficients can be interpreted in terms of hazard ratio, which often provides valuable insight. However, if we want to estimate the coefficients of many features, the standard Cox model falls apart, because internally it tries to invert a matrix that becomes singular due to correlations among features. Ridge This mathematical problem can be avoided by adding an $\ell_2$ penalty term on the coefficients that shrinks the coefficients toward zero. The modified objective has the form $$ \arg\max_{\beta}\quad\log \mathrm{PL}(\beta) - \frac{\alpha}{2} \sum_{j=1}^p \beta_j^2 , $$ where $\mathrm{PL}(\beta)$ is the partial likelihood function of the Cox model, $\beta_1,\ldots,\beta_p$ are the coefficients for $p$ features, and $\alpha \geq 0$ is a hyper-parameter that controls the amount of shrinkage. The resulting objective is often referred to as ridge regression. If $\alpha$ is set to zero, we obtain the standard, unpenalized Cox model. End of explanation """ X, y = load_breast_cancer() Xt = OneHotEncoder().fit_transform(X) Xt.round(2).head() """ Explanation: To demonstrate the use of penalized Cox models we are going to use the breast cancer data, which contains the expression levels of 76 genes, age, estrogen receptor status (er), tumor size and grade for 198 individuals. The objective is to predict the time to distant metastasis. First, we load the data and perform one-hot encoding of categorical variables er and grade. 
End of explanation """ alphas = 10. ** np.linspace(-4, 4, 50) coefficients = {} cph = CoxPHSurvivalAnalysis() for alpha in alphas: cph.set_params(alpha=alpha) cph.fit(Xt, y) key = round(alpha, 5) coefficients[key] = cph.coef_ coefficients = (pd.DataFrame .from_dict(coefficients) .rename_axis(index="feature", columns="alpha") .set_index(Xt.columns)) """ Explanation: Let us begin by fitting a penalized Cox model to various values of $\alpha$ using sksurv.linear_model.CoxPHSurvivalAnalysis and recording the coefficients we obtained for each $\alpha$. End of explanation """ def plot_coefficients(coefs, n_highlight): _, ax = plt.subplots(figsize=(9, 6)) n_features = coefs.shape[0] alphas = coefs.columns for row in coefs.itertuples(): ax.semilogx(alphas, row[1:], ".-", label=row.Index) alpha_min = alphas.min() top_coefs = coefs.loc[:, alpha_min].map(abs).sort_values().tail(n_highlight) for name in top_coefs.index: coef = coefs.loc[name, alpha_min] plt.text( alpha_min, coef, name + " ", horizontalalignment="right", verticalalignment="center" ) ax.yaxis.set_label_position("right") ax.yaxis.tick_right() ax.grid(True) ax.set_xlabel("alpha") ax.set_ylabel("coefficient") plot_coefficients(coefficients, n_highlight=5) """ Explanation: Now, we can inspect how the coefficients change for varying $\alpha$. End of explanation """ cox_lasso = CoxnetSurvivalAnalysis(l1_ratio=1.0, alpha_min_ratio=0.01) cox_lasso.fit(Xt, y) coefficients_lasso = pd.DataFrame( cox_lasso.coef_, index=Xt.columns, columns=np.round(cox_lasso.alphas_, 5) ) plot_coefficients(coefficients_lasso, n_highlight=5) """ Explanation: We can see that if the penalty has a large weight (to the right), all coefficients are shrunk almost to zero. As the penalty's weight is decreased, the coefficients' value increases. 
We can also observe that the paths for X203391_at and tumor grade quickly separate themselves from the remaining coefficients, which indicates that this particular gene expression level and tumor grade are important predictive factors for time to distant metastasis. LASSO While the $\ell_2$ (ridge) penalty does solve the mathematical problem of fitting a Cox model, we would still need to measure the expression levels of all 76 genes to make predictions. Ideally, we would like to select a small subset of features that are most predictive and ignore the remaining gene expression levels. This is precisely what the LASSO (Least Absolute Shrinkage and Selection Operator) penalty does. Instead of shrinking coefficients to zero it does a type of continuous subset selection, where a subset of coefficients are set to zero and are effectively excluded. This reduces the number of features that we would need to record for prediction. In mathematical terms, the $\ell_2$ penalty is replaced by a $\ell_1$ penalty, which leads to the optimization problem $$ \arg\max_{\beta}\quad\log \mathrm{PL}(\beta) - \alpha \sum_{j=1}^p |\beta_j| . $$ The main challenge is that we cannot directly control the number of features that get selected, but the value of $\alpha$ implicitly determines the number of features. Thus, we need a data-driven way to select a suitable $\alpha$ and obtain a parsimonious model. We can do this by first computing the $\alpha$ that would ignore all features (coefficients are all zero) and then incrementally decrease its value, let's say until we reach 1% of the original value. This has been implemented in sksurv.linear_model.CoxnetSurvivalAnalysis by specifying l1_ratio=1.0 to use the LASSO penalty and alpha_min_ratio=0.01 to search for 100 $\alpha$ values up to 1% of the estimated maximum. 
End of explanation """ cox_elastic_net = CoxnetSurvivalAnalysis(l1_ratio=0.9, alpha_min_ratio=0.01) cox_elastic_net.fit(Xt, y) coefficients_elastic_net = pd.DataFrame( cox_elastic_net.coef_, index=Xt.columns, columns=np.round(cox_elastic_net.alphas_, 5) ) plot_coefficients(coefficients_elastic_net, n_highlight=5) """ Explanation: The figure shows that the LASSO penalty indeed selects a small subset of features for large $\alpha$ (to the right) with only two features (purple and yellow line) being non-zero. As $\alpha$ decreases, more and more features become active and are assigned a non-zero coefficient until the entire set of features is used (to the left left). Similar to the plot above for the ridge penalty, the path for X203391_at stands out, indicating its importance in breast cancer. However, the overall most important factor seems to be a positive estrogen receptor status (er). Elastic Net The LASSO is a great tool to select a subset of discriminative features, but it has two main drawbacks. First, it cannot select more features than number of samples in the training data, which is problematic when dealing with very high-dimensional data. Second, if data contains a group of features that are highly correlated, the LASSO penalty is going to randomly choose one feature from this group. The Elastic Net penalty overcomes these problems by using a weighted combination of the $\ell_1$ and $\ell_2$ penalty by solving: $$ \arg\max_{\beta}\quad\log \mathrm{PL}(\beta) - \alpha \left( r \sum_{j=1}^p |\beta_j| + \frac{1 - r}{2} \sum_{j=1}^p \beta_j^2 \right) , $$ where $r \in [0; 1[$ is the relative weight of the $\ell_1$ and $\ell_2$ penalty. The Elastic Net penalty combines the subset selection property of the LASSO with the regularization strength of the Ridge penalty. This leads to better stability compared to the LASSO penalized model. 
For a group of highly correlated features, the latter would choose one feature randomly, whereas the Elastic Net penalized model would tend to select all. Usually, it is sufficient to give the $\ell_2$ penalty only a small weight to improve stability of the LASSO, e.g. by setting $r = 0.9$. As for the LASSO, the weight $\alpha$ implicitly determines the size of the selected subset, and usually has to be estimated in a data-driven manner. End of explanation """ import warnings from sklearn.exceptions import FitFailedWarning from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler coxnet_pipe = make_pipeline( StandardScaler(), CoxnetSurvivalAnalysis(l1_ratio=0.9, alpha_min_ratio=0.01, max_iter=100) ) warnings.simplefilter("ignore", UserWarning) warnings.simplefilter("ignore", FitFailedWarning) coxnet_pipe.fit(Xt, y) """ Explanation: Choosing penalty strength $\alpha$ Previously, we focused on the estimated coefficients to get some insight into which features are important for estimating time to distant metastasis. However, for prediction, we need to pick one particular $\alpha$, and the subset of features it implies. Here, we are going to use cross-validation to determine which subset and $\alpha$ generalizes best. Before we can use GridSearchCV, we need to determine the set of $\alpha$ which we want to evaluate. To do this, we fit a penalized Cox model to the whole data and retrieve the estimated set of alphas. Since, we are only interested in alphas and not the coefficients, we can use only a few iterations for improved speed. Note that we are using StandardScaler to account for scale differences among features and allow direct comparison of coefficients. 
End of explanation """ estimated_alphas = coxnet_pipe.named_steps["coxnetsurvivalanalysis"].alphas_ cv = KFold(n_splits=5, shuffle=True, random_state=0) gcv = GridSearchCV( make_pipeline(StandardScaler(), CoxnetSurvivalAnalysis(l1_ratio=0.9)), param_grid={"coxnetsurvivalanalysis__alphas": [[v] for v in estimated_alphas]}, cv=cv, error_score=0.5, n_jobs=1).fit(Xt, y) cv_results = pd.DataFrame(gcv.cv_results_) """ Explanation: Using the estimated set of alphas, we perform 5 fold cross-validation to estimate the performance – in terms of concordance index – for each $\alpha$. Note: this can take a while. End of explanation """ alphas = cv_results.param_coxnetsurvivalanalysis__alphas.map(lambda x: x[0]) mean = cv_results.mean_test_score std = cv_results.std_test_score fig, ax = plt.subplots(figsize=(9, 6)) ax.plot(alphas, mean) ax.fill_between(alphas, mean - std, mean + std, alpha=.15) ax.set_xscale("log") ax.set_ylabel("concordance index") ax.set_xlabel("alpha") ax.axvline(gcv.best_params_["coxnetsurvivalanalysis__alphas"][0], c="C1") ax.axhline(0.5, color="grey", linestyle="--") ax.grid(True) """ Explanation: We can visualize the results by plotting the mean concordance index and its standard deviation across all folds for each $\alpha$. 
End of explanation """ best_model = gcv.best_estimator_.named_steps["coxnetsurvivalanalysis"] best_coefs = pd.DataFrame( best_model.coef_, index=Xt.columns, columns=["coefficient"] ) non_zero = np.sum(best_coefs.iloc[:, 0] != 0) print("Number of non-zero coefficients: {}".format(non_zero)) non_zero_coefs = best_coefs.query("coefficient != 0") coef_order = non_zero_coefs.abs().sort_values("coefficient").index _, ax = plt.subplots(figsize=(6, 8)) non_zero_coefs.loc[coef_order].plot.barh(ax=ax, legend=False) ax.set_xlabel("coefficient") ax.grid(True) """ Explanation: The figure shows that there is a range for $\alpha$ to the right where it is too large and sets all coefficients to zero, as indicated by the 0.5 concordance index of a purely random model. On the other extreme, if $\alpha$ becomes too small, too many features enter the model and the performance approaches that of a random model again. The sweet spot (orange line) is somewhere in the middle. Let's inspect that model. End of explanation """ coxnet_pred = make_pipeline( StandardScaler(), CoxnetSurvivalAnalysis(l1_ratio=0.9, fit_baseline_model=True) ) coxnet_pred.set_params(**gcv.best_params_) coxnet_pred.fit(Xt, y) """ Explanation: The model selected a total of 21 features, and it deemed X204540_at to be the most important one, followed by X203391_at and positive estrogen receptor status: Survival and Cumulative Hazard Function Having selected a particular $\alpha$, we can perform prediction, either in terms of risk score using the predict function or in terms of survival or cumulative hazard function. For the latter two, we first need to re-fit the model with fit_baseline_model enabled. 
End of explanation """ surv_fns = coxnet_pred.predict_survival_function(Xt) time_points = np.quantile(y["t.tdm"], np.linspace(0, 0.6, 100)) legend_handles = [] legend_labels = [] _, ax = plt.subplots(figsize=(9, 6)) for fn, label in zip(surv_fns, Xt.loc[:, "er=positive"].astype(int)): line, = ax.step(time_points, fn(time_points), where="post", color="C{:d}".format(label), alpha=0.5) if len(legend_handles) <= label: name = "positive" if label == 1 else "negative" legend_labels.append(name) legend_handles.append(line) ax.legend(legend_handles, legend_labels) ax.set_xlabel("time") ax.set_ylabel("Survival probability") ax.grid(True) """ Explanation: For instance, we can now select a patient and determine how positive or negative estrogen receptor status would affect the survival function. End of explanation """
kimkipyo/dss_git_kkp
Python 복습/12일차.금_Pandas의 고급기능_DB/12일차_4T_Pandas Basic (4) - 파일 입출력 ( csv, excel, sql ).ipynb
mit
- Practice the steps above once more with real Excel file data - We already did file I/O by country; as a bonus, we will try some math calculations: max, mean, min, sum df = pd.DataFrame([{"Name": "KiPyo Kim", "Age": 29}, {"Name": "KiDong Kim", "Age": 33}]) df # let's just get to know the options df.to_csv("fastcampus.csv") df.to_csv("fastcampus.csv", index=False) df.to_csv("fastcampus.csv", index=False, header=False) """ Explanation: 4T_Pandas Basic (4) - File I/O ( csv, excel, sql ) End of explanation """ df.to_csv("fastcampus.csv", index=False, header=False, sep="|") """ Explanation: CSV (Comma-Separated Values) => data in which each value is separated by "," For example, KiPyo Kim | 29 | Analyst // that is what sep="|" does // doing this replaces the "," with "|" End of explanation """ df = pd.read_csv("fastcampus.csv") # it's this simple # a program that converts Excel files to csv in bulk => with Pandas it takes no time # something like read_excel().to_csv, let's try it pd.read_csv("fastcampus.csv") pd.read_csv("fastcampus.csv", header=None, sep="|") df = pd.read_csv("fastcampus.csv", header=None, sep="|") df.rename(columns={0: "Age", 1: "Name"}, inplace=True) df """ Explanation: When doing data analysis there are two kinds of formats: csv (Excel), XML, JSON == databases, and Pickle (quite important in data analysis) => it can store Python objects as they are; it saves Python code directly, i.e. classes and functions can be stored in binary form and used at any time End of explanation """
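The write/read pattern above can be round-tripped in memory, which makes it easy to see what the `index`, `header`, and `sep` options actually do. This sketch uses an in-memory buffer instead of the `fastcampus.csv` file the notebook assumes, and pins the column order explicitly so the rename step is unambiguous across pandas versions:

```python
import io
import pandas as pd

# Same data as the notebook cell, with column order fixed to [Age, Name].
df = pd.DataFrame([{"Name": "KiPyo Kim", "Age": 29},
                   {"Name": "KiDong Kim", "Age": 33}],
                  columns=["Age", "Name"])

buf = io.StringIO()
df.to_csv(buf, index=False, header=False, sep="|")  # rows like 29|KiPyo Kim
buf.seek(0)

df2 = pd.read_csv(buf, header=None, sep="|")  # columns come back as 0 and 1
df2 = df2.rename(columns={0: "Age", 1: "Name"})
print(df2)
```

Because the file was written without a header, `header=None` on the read side keeps the first data row from being swallowed as column names, and the rename restores the labels.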
Yu-Group/scikit-learn-sandbox
benchmarks/examine_outputs_py.ipynb
mit
import py_irf_benchmarks_utils import numpy as np import matplotlib.pyplot as plt import sys sys.path.insert(0, '../jupyter/utils') from irf_jupyter_utils import _get_histogram # recall output file file_in = 'specs/iRF_mod01.yaml' specs = py_irf_benchmarks_utils.yaml_to_dict(inp_yaml=file_in) # specify output file file_out = 'output/iRF_mod01_out.yaml' bm = py_irf_benchmarks_utils.yaml_to_dict(inp_yaml=file_out) """ Explanation: Lets examine iRF results in python End of explanation """ # calling a helper function # third argument is x_axis, fourth argument is y_axis py_irf_benchmarks_utils.plot_bm(bm, specs, 'n_estimators', 'time') py_irf_benchmarks_utils.plot_bm(bm, specs, 'n_estimators', 'accuracy_score') py_irf_benchmarks_utils.plot_bm(bm, specs, 'n_estimators', 'log_loss') """ Explanation: Plot some metrics End of explanation """ # Print the feature importance rankings from one trial print("Feature ranking:") feature_importances = bm[2]['feature_importances'][0] feature_importances_rank_idx = np.argsort(feature_importances)[::-1] for f in range(len(feature_importances)): print("%d. 
feature %d (%f)" % (f + 1 , feature_importances_rank_idx[f] , feature_importances[feature_importances_rank_idx[f]])) # Plot the feature importance rankings from one trial width = 12 height = 8 plt.figure(figsize=(width, height)) plt.title("Feature importances") plt.bar(range(len(feature_importances)) , feature_importances[feature_importances_rank_idx] , color="r" , align="center") plt.xticks(range(len(feature_importances)), feature_importances_rank_idx) plt.xlim([-1, len(feature_importances)]) plt.show() # lets look at the top 5 features across the trials for i in range(specs['n_trials'][0]): feature_importances = bm[2]['feature_importances'][i] feature_importances_rank_idx = np.argsort(feature_importances)[::-1] print('trial'+str(i), feature_importances_rank_idx[0:5]) """ Explanation: Lets look at feature importances from the last iteration of iRF The results printed in the notebook below are for n_estimators = 60 End of explanation """ # plot stability scores for one trial stability_scores = bm[2]['stability_all'][0] _get_histogram(stability_scores, sort = True) # examine top 5 stability scores across trials for i in range(specs['n_trials'][0]): stability_scores = bm[2]['stability_all'][i] data_y = sorted(stability_scores.values(), reverse=True) data_x = sorted(stability_scores, key=stability_scores.get, reverse=True) print('trial'+str(i), data_x[0:5]) """ Explanation: Examine stability scores End of explanation """
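The ranking idiom used in the cells above is worth isolating: `argsort` sorts ascending, and the `[::-1]` slice reverses it so index 0 is the most important feature. The importance values here are made up:

```python
import numpy as np

# Descending-importance ranking, exactly as used in the notebook cells above.
feature_importances = np.array([0.1, 0.45, 0.05, 0.4])
rank_idx = np.argsort(feature_importances)[::-1]
print(rank_idx)                        # feature indices, most important first
print(feature_importances[rank_idx])   # importances in descending order
```

Indexing the importance array with `rank_idx` reorders the values themselves, which is what the bar plot above relies on.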
wasit7/book_pae
pae/nb/parallel forest.ipynb
mit
import numpy as np
from matplotlib import pyplot as plt
import pickle
import os
%pylab inline
"""
Explanation: Parallel Forest Tutorial
This notebook shows the training process of the Parallel Random Forest. For cluster training please check https://github.com/wasit7/parallel_forest
Import modules
Import all necessary modules
End of explanation
"""
clmax = 5
spc = 500
theta_range = 2
# samples is the list of labels
samples = np.zeros(spc*clmax, dtype=np.uint32)
# I is the feature vector
I = np.zeros((spc*clmax, theta_range), dtype=np.float32)
marker = ['bo','co','go','ro','mo','yo','ko', 'bs','cs','gs','rs','ms','ys','ks']
# number of datasets being generated
# 8 for training
# another one for evaluation
N = 9
path = "train/"
if not os.path.exists(path):
    os.makedirs(path)
for n in xrange(N):
    for cl in xrange(clmax):
        xo = cl*spc
        # define label
        samples[xo:xo+spc] = cl
        phi = np.linspace(0, 2*np.pi, spc) + \
            np.random.randn(spc)*0.4*np.pi/clmax + \
            2*np.pi*cl/clmax
        r = np.linspace(0.1, 1, spc)
        I[xo:xo+spc, :] = np.transpose(np.array([r*np.cos(phi), r*np.sin(phi)]))
    with open(path+'dataset%02d.pic' % (n), 'wb') as pickleFile:
        # write label and feature vector
        theta_dim = 1
        pickle.dump((clmax, theta_dim, theta_range, len(samples), samples, I, None),
                    pickleFile, pickle.HIGHEST_PROTOCOL)
"""
Explanation: Generating datasets
End of explanation
"""
z = np.random.randint(0, spc*clmax, 1000)
for i in z:
    plt.plot(I[i,0], I[i,1], marker[samples[i]])
plt.hold(True)
"""
Explanation: Visualization of the dataset
End of explanation
"""
from pforest.master import master
m = master()
m.reset()
m.train()
"""
Explanation: Training
End of explanation
"""
with open('out_tree.pic', 'wb') as pickleFile:
    pickle.dump(m.root, pickleFile, pickle.HIGHEST_PROTOCOL)
with open('out_tree.pic', 'rb') as pickleFile:
    root = pickle.load(pickleFile)
"""
Explanation: Write and read the tree
You may need to save/load the tree to/from a pickle file
End of explanation
"""
ls
"""
Explanation: Check the file size
End of explanation
"""
from pforest.dataset import dataset
from pforest.tree import tree
# init the test tree
t = tree()
t.settree(root)
t.show()
"""
Explanation: The resulting decision tree
Termination codes (Q: min bag size, G: no information gain, D: reaching maximum depth)
End of explanation
"""
# load the last dataset, which was never used for training
dset = dataset(8)
correct = 0
for x in xrange(dset.size):
    L = t.getL(np.array([x]), dset)
    if dset.getL(x) == L:
        correct = correct + 1
    dset.setL(x, L)
print("recall rate: {}%".format(correct/float(dset.size)*100))
"""
Explanation: Recall rate
Loading a new dataset, the last one, for computing the recall rate
End of explanation
"""
# set up the new test set
# load dataset
dset = dataset(8)
d = 0.05
y, x = np.mgrid[slice(-1, 1+d, d), slice(-1, 1+d, d)]
# start labeling
L = np.zeros(x.shape, dtype=int)
for r in xrange(x.shape[0]):
    for c in xrange(x.shape[1]):
        u = (x[r,c], y[r,c])
        Prob = t.classify(u)
        L[r,c] = np.argmax(Prob)
"""
Explanation: Labelling
The computer uses the decision tree to classify the unknown feature vector u
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.axis([-1,1,-1,1])
ax.pcolor(x, y, L)
ax.hold(True)
"""
Explanation: 2D space partitioning by the decision tree
Displaying the labelled result
End of explanation
"""
z = np.random.randint(0, dset.size, 1000)
for i in z:
    ax.plot(dset.I[i,0], dset.I[i,1], marker[dset.samples[i]])
fig
t.classify([0.75,0.0])
"""
Explanation: Overlay the dataset
End of explanation
"""
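The G termination code above stands for "no information gain". As a plain-numpy sketch of the quantity involved (pforest's own split criterion may differ in detail), the gain of a candidate split is the parent entropy minus the size-weighted entropies of the two child bags:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (in bits) of an integer label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / float(labels.size)
    return -np.sum(p * np.log2(p))

def information_gain(parent, left, right):
    """Entropy reduction achieved by splitting `parent` into two bags."""
    n = float(parent.size)
    return entropy(parent) - (left.size / n) * entropy(left) \
                           - (right.size / n) * entropy(right)

parent = np.array([0, 0, 1, 1])
perfect = information_gain(parent, np.array([0, 0]), np.array([1, 1]))
useless = information_gain(parent, np.array([0, 1]), np.array([0, 1]))
print(perfect, useless)  # 1.0 0.0
```

A node whose best candidate split yields a gain of (near) zero, as in the second case, is a natural place to stop growing the tree.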
dietmarw/EK5312_ElectricalMachines
Chapman/Ch4-Problem_4-02.ipynb
unlicense
%pylab notebook
%precision 0
"""
Explanation: Exercises Electric Machinery Fundamentals
Chapter 4 Problem 4-2
End of explanation
"""
Vl = 13.8e3 # [V]
PF = 0.9
Xs = 2.5 # [Ohm]
Ra = 0.2 # [Ohm]
P = 50e6 # [W]
Pf_w = 1.0e6 # [W]
Pcore = 1.5e6 # [W]
Pstray = 0 # [W]
n_m = 1800 # [r/min]
"""
Explanation: Description
Given a 13.8-kV, 50-MVA, 0.9-power-factor-lagging, 60-Hz, four-pole, Y-connected synchronous machine with:
- a synchronous reactance of $2.5\,\Omega$
- an armature resistance of $0.2\,\Omega$
- friction and windage losses of 1 MW at 60 Hz
- core losses of 1.5 MW
The field circuit has a dc voltage of 120 V, and the maximum $I_F$ is 10 A. The current of the field circuit is adjustable over the range from 0 to 10 A. The OCC of this generator is shown in Figure P4-1 below:
<img src="figs/FigC_P4-1.jpg" width="70%">
End of explanation
"""
ia = P / (sqrt(3) * Vl)
Ia_angle = -arccos(PF)
Ia = ia * (cos(Ia_angle) + sin(Ia_angle)*1j)
print('Ia = {:.0f} A ∠{:.1f}°'.format(abs(Ia), Ia_angle/pi *180))
"""
Explanation: (a) How much field current is required to make the terminal voltage $V_T$ (or line voltage $V_L$) equal to 13.8 kV when the generator is running at no load?
(b) What is the internal generated voltage $E_A$ of this machine at rated conditions?
(c) What is the phase voltage $V_\phi$ of this generator at rated conditions?
(d) How much field current is required to make the terminal voltage $V_T$ equal to 13.8 kV when the generator is running at rated conditions?
(e) Suppose that this generator is running at rated conditions, and then the load is removed without changing the field current. What would the terminal voltage of the generator be?
(f) How much steady-state power and torque must the generator's prime mover be capable of supplying to handle the rated conditions?
(g) Construct a capability curve for this generator.
SOLUTION
(a) If the no-load terminal voltage is 13.8 kV, the required field current can be read directly from the open-circuit characteristic. It is $\underline{\underline{I_F = 3.50\,A}}$.
(b) This generator is Y-connected, so $I_L = I_A$. At rated conditions, the line and phase current in this generator is:
$$I_A = I_L = \frac{P}{\sqrt{3}V_L}$$
End of explanation
"""
V_phase = Vl / sqrt(3)
print('V_phase = {:.0f} V'.format(V_phase))
"""
Explanation: The phase voltage of this machine is:
$$V_\phi = V_T / \sqrt{3}$$
End of explanation
"""
Ea = V_phase + Ra*Ia + Xs*1j*Ia
Ea_angle = arctan(Ea.imag/Ea.real)
print('''
Ea = {:.0f} V ∠{:.1f}°
=================='''.format(abs(Ea), Ea_angle/pi*180))
"""
Explanation: The internal generated voltage of the machine is:
$$\vec{E}_A = \vec{V}_\phi + R_A\vec{I}_A + jX_S\vec{I}_A$$
End of explanation
"""
print('''
V_phase = {:.0f} V
================'''.format(V_phase))
"""
Explanation: (c) The phase voltage of the machine at rated conditions is:
End of explanation
"""
Vt_oc = sqrt(3) * abs(Ea)
print('Vt_oc = {:.0f} kV'.format(Vt_oc/1000))
"""
Explanation: (d) The equivalent open-circuit terminal voltage corresponding to an $E_A$ of the value calculated in (b) is:
End of explanation
"""
abs(Ea)
"""
Explanation: From the OCC, the required field current is $\underline{\underline{I_F = 10\,A}}$.
(e) If the load is removed without changing the field current then $V_\phi = E_A$:
End of explanation
"""
Pout = P*PF
print('Pout = {:.0f} MW'.format(Pout/1e6))
"""
Explanation: The corresponding terminal voltage would be $\underline{\underline{V_T = 20\,kV}}$.
(f) The input power to this generator is equal to the output power plus losses.
The rated output power is:
End of explanation
"""
Pcu = 3 * abs(Ia)**2 * Ra
print('Pcu = {:.1f} MW'.format(Pcu/1e6))
Pin = Pout + Pcu + Pf_w + Pcore + Pstray
print('Pin = {:.1f} MW'.format(Pin/1e6))
"""
Explanation: $$P_{CU} = 3I_A^2R_A$$
End of explanation
"""
w_m = n_m * (2*pi/60.0)
tau_app = Pin / w_m
print('''
tau_app = {:.0f} Nm
==================='''.format(tau_app))
"""
Explanation: Therefore the prime mover must be capable of supplying $P_{in}$. Since the generator is a four-pole 60 Hz machine, it must be turning at 1800 r/min. The required torque is:
$$\tau_{app} = \frac{P_{in}}{\omega_m}$$
End of explanation
"""
Q = - (3 * V_phase**2) / Xs
print('Q = {:.2f} Mvar'.format(Q/1e6))
"""
Explanation: (g) The rotor current limit of the capability curve would be drawn from an origin of:
$$Q = -\frac{3V^2_\phi}{X_S}$$
End of explanation
"""
De = (3 * V_phase * abs(Ea)) / Xs
print('De = {:.0f} Mvar'.format(De/1e6))
"""
Explanation: The radius of the rotor current limit is:
$$D_E = \frac{3V_\phi E_A}{X_S}$$
End of explanation
"""
S = 3 * V_phase * abs(Ia)
print('S = {:.0f} MVA'.format(S/1e6))
"""
Explanation: The stator current limit is a circle at the origin of radius:
$$S = 3V_\phi I_A$$
End of explanation
"""
theta = arange(-95, 95) # angle in degrees
rad = theta * pi/180 # angle in radians
s_curve = S * (cos(rad) + sin(rad)*1j)
"""
Explanation: Get points for the stator current limit:
End of explanation
"""
orig = Q*1j
theta = arange(65, 115) # angle in degrees
rad = theta * pi / 180 # angle in radians
r_curve = orig + De * (cos(rad) + sin(rad)*1j)
"""
Explanation: Get points for the rotor current limit:
End of explanation
"""
fig = figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(real(s_curve/1e6), imag(s_curve/1e6), 'b')
ax.plot(real(r_curve/1e6), imag(r_curve/1e6), 'r--')
ax.set_title('Synchronous Generator Capability Diagram')
ax.set_xlabel('Power (MW)')
ax.set_ylabel('Reactive Power (Mvar)')
ax.set_aspect('equal', 'datalim')
ax.legend(('stator current limit', 'rotor current limit'), loc=3)
ax.grid()
"""
Explanation: Plot the capability diagram:
End of explanation
"""
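As a quick numerical cross-check on the diagram (an editorial addition, not part of the original notebook — the constants below are taken from the results computed above), the rated operating point should lie on the stator-current circle and inside the rotor-current circle:

```python
from math import sqrt, acos, sin

# Values from the worked solution above (SI units).
S_rated = 50e6                      # stator limit radius, 3*V_phase*Ia
P_rated = S_rated * 0.9             # 45 MW at 0.9 PF
Q_rated = S_rated * sin(acos(0.9))  # ~21.8 Mvar at rated conditions
Q_origin = -76.17e6                 # rotor circle centre, -3*V_phase**2/Xs
D_E = 110.4e6                       # rotor limit radius, 3*V_phase*|Ea|/Xs

# Distance of the rated point (P_rated, Q_rated) from each circle's centre.
stator_dist = sqrt(P_rated**2 + Q_rated**2)              # from (0, 0)
rotor_dist = sqrt(P_rated**2 + (Q_rated - Q_origin)**2)  # from (0, Q_origin)

print(stator_dist / 1e6)  # ~50.0 -> exactly on the 50 MVA stator circle
print(rotor_dist / 1e6)   # ~107.8 -> inside the 110.4 Mvar rotor circle
```

This confirms the picture drawn above: at rated conditions the armature (stator) current is the binding limit, with a small margin left on the field (rotor) side.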
turbomanage/training-data-analyst
courses/machine_learning/deepdive/08_image/flowers_fromscratch_tpu.ipynb
apache-2.0
%%bash pip install apache-beam[gcp] """ Explanation: Flowers Image Classification with TensorFlow on Cloud ML Engine TPU This notebook demonstrates how to do image classification from scratch on a flowers dataset using the Estimator API. Unlike flowers_fromscratch.ipynb, here we do it on a TPU. Therefore, this will work only if you have quota for TPUs (not in Qwiklabs). It will cost about $3 if you want to try it out. End of explanation """ import os PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1 MODEL_TYPE = 'tpu' # do not change these os.environ['PROJECT'] = PROJECT os.environ['BUCKET'] = BUCKET os.environ['REGION'] = REGION os.environ['MODEL_TYPE'] = MODEL_TYPE os.environ['TFVERSION'] = '1.11' # Tensorflow version %%bash gcloud config set project $PROJECT gcloud config set compute/region $REGION """ Explanation: After doing a pip install, click on Reset Session so that the Python environment picks up the new package End of explanation """ %%bash gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | sed 's/,/ /g' | awk '{print $2}' | sort | uniq > /tmp/labels.txt %%bash gsutil cat gs://cloud-ml-data/img/flower_photos/train_set.csv | wc -l gsutil cat gs://cloud-ml-data/img/flower_photos/eval_set.csv | wc -l %%bash export PYTHONPATH=${PYTHONPATH}:${PWD}/flowersmodeltpu gsutil -m rm -rf gs://${BUCKET}/tpu/flowers/data python -m trainer.preprocess \ --train_csv gs://cloud-ml-data/img/flower_photos/train_set.csv \ --validation_csv gs://cloud-ml-data/img/flower_photos/eval_set.csv \ --labels_file /tmp/labels.txt \ --project_id $PROJECT \ --output_dir gs://${BUCKET}/tpu/flowers/data %%bash gsutil ls gs://${BUCKET}/tpu/flowers/data/ """ Explanation: Preprocess JPEG images to TF Records While using a GPU, it is okay to read the JPEGS directly from our input_fn. 
However, TPUs are too fast and it will be very wasteful to have the TPUs wait on I/O. Therefore, we'll preprocess the JPEGs into TF Records. This runs on Cloud Dataflow and will take <b> 15-20 minutes </b> End of explanation """ %%bash WITHOUT_TPU="--train_batch_size=2 --train_steps=5" OUTDIR=./flowers_trained rm -rf $OUTDIR export PYTHONPATH=${PYTHONPATH}:${PWD}/flowersmodeltpu python -m flowersmodeltpu.task \ --output_dir=$OUTDIR \ --num_train_images=3300 \ --num_eval_images=370 \ $WITHOUT_TPU \ --learning_rate=0.01 \ --project=${PROJECT} \ --train_data_path=gs://${BUCKET}/tpu/flowers/data/train* \ --eval_data_path=gs://${BUCKET}/tpu/flowers/data/validation* """ Explanation: Run as a Python module First run locally without --use_tpu -- don't be concerned if the process gets killed for using too much memory. End of explanation """ %%bash WITH_TPU="--train_batch_size=256 --train_steps=3000 --batch_norm --use_tpu" WITHOUT_TPU="--train_batch_size=2 --train_steps=5" OUTDIR=gs://${BUCKET}/flowers/trained_${MODEL_TYPE}_delete JOBNAME=flowers_${MODEL_TYPE}_$(date -u +%y%m%d_%H%M%S) echo $OUTDIR $REGION $JOBNAME gsutil -m rm -rf $OUTDIR gcloud ml-engine jobs submit training $JOBNAME \ --region=$REGION \ --module-name=flowersmodeltpu.task \ --package-path=${PWD}/flowersmodeltpu \ --job-dir=$OUTDIR \ --staging-bucket=gs://$BUCKET \ --scale-tier=BASIC_TPU \ --runtime-version=$TFVERSION \ -- \ --output_dir=$OUTDIR \ --num_train_images=3300 \ --num_eval_images=370 \ $WITH_TPU \ --learning_rate=0.01 \ --project=${PROJECT} \ --train_data_path=gs://${BUCKET}/tpu/flowers/data/train-* \ --eval_data_path=gs://${BUCKET}/tpu/flowers/data/validation-* %%bash MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/flowers/trained_${MODEL_TYPE}/export/exporter | tail -1) saved_model_cli show --dir $MODEL_LOCATION --all """ Explanation: Then, run it on Cloud ML Engine with --use_tpu End of explanation """ from google.datalab.ml import TensorBoard 
TensorBoard().start('gs://{}/flowers/trained_{}'.format(BUCKET, MODEL_TYPE)) for pid in TensorBoard.list()['pid']: TensorBoard().stop(pid) print 'Stopped TensorBoard with pid {}'.format(pid) """ Explanation: Monitoring training with TensorBoard Use this cell to launch tensorboard End of explanation """ %%bash MODEL_NAME="flowers" MODEL_VERSION=${MODEL_TYPE} MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/flowers/trained_${MODEL_TYPE}/export/exporter | tail -1) echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes" #gcloud ml-engine versions delete --quiet ${MODEL_VERSION} --model ${MODEL_NAME} #gcloud ml-engine models delete ${MODEL_NAME} #gcloud ml-engine models create ${MODEL_NAME} --regions $REGION gcloud alpha ml-engine versions create ${MODEL_VERSION} --machine-type mls1-c4-m4 --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION """ Explanation: Deploying and predicting with model Deploy the model: End of explanation """ %%bash gcloud alpha ml-engine models list """ Explanation: To predict with the model, let's take one of the example images that is available on Google Cloud Storage <img src="http://storage.googleapis.com/cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg" /> End of explanation """ %%bash IMAGE_URL=gs://cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg # Copy the image to local disk. gsutil cp $IMAGE_URL flower.jpg # Base64 encode and create request message in json format. python -c 'import base64, sys, json; img = base64.b64encode(open("flower.jpg", "rb").read()).decode(); print(json.dumps({"image_bytes":{"b64": img}}))' &> request.json """ Explanation: The online prediction service expects images to be base64 encoded as described here. 
End of explanation """ %%bash gcloud ml-engine predict \ --model=flowers2 \ --version=${MODEL_TYPE} \ --json-instances=./request.json """ Explanation: Send it to the prediction service End of explanation """
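The `python -c` one-liner used above to build `request.json` can equivalently be written out as a short script; the flow is simply read bytes → base64-encode → wrap in the JSON structure the prediction service expects (shown here with a tiny stand-in payload rather than a real JPEG):

```python
import base64
import json

def make_request(image_bytes):
    """Build the JSON request body for a base64-encoded image."""
    b64 = base64.b64encode(image_bytes).decode()
    return json.dumps({"image_bytes": {"b64": b64}})

# Example with the first bytes of a JPEG header instead of a full image:
body = make_request(b"\xff\xd8\xff")
print(body)  # {"image_bytes": {"b64": "/9j/"}}
```

For a real request you would pass `open("flower.jpg", "rb").read()` and write the result to `request.json`, exactly as the one-liner does.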
YzPaul3/h2o-3
h2o-py/demos/H2O_tutorial_eeg_eyestate.ipynb
apache-2.0
import h2o # Start an H2O Cluster on your local machine h2o.init() """ Explanation: H2O Tutorial: EEG Eye State Classification Author: Erin LeDell Contact: erin@h2o.ai This tutorial steps through a quick introduction to H2O's Python API. The goal of this tutorial is to introduce through a complete example H2O's capabilities from Python. Most of the functionality for a Pandas DataFrame is exactly the same syntax for an H2OFrame, so if you are comfortable with Pandas, data frame manipulation will come naturally to you in H2O. The modeling syntax in the H2O Python API may also remind you of scikit-learn. References: H2O Python API documentation and H2O general documentation Install H2O in Python Prerequisites This tutorial assumes you have Python 2.7 installed. The h2o Python package has a few dependencies which can be installed using pip. The packages that are required are (which also have their own dependencies): bash pip install requests pip install tabulate pip install scikit-learn If you have any problems (for example, installing the scikit-learn package), check out this page for tips. Install h2o Once the dependencies are installed, you can install H2O. We will use the latest stable version of the h2o package, which is currently "Tibshirani-8." The installation instructions are on the "Install in Python" tab on this page. ```bash The following command removes the H2O module for Python (if it already exists). pip uninstall h2o Next, use pip to install this version of the H2O Python module. pip install http://h2o-release.s3.amazonaws.com/h2o/rel-tibshirani/8/Python/h2o-3.6.0.8-py2.py3-none-any.whl ``` For reference, the Python documentation for the latest stable release of H2O is here. Start up an H2O cluster In a Python terminal, we can import the h2o package and start up an H2O cluster. 
End of explanation """ # This will not actually do anything since it's a fake IP address # h2o.init(ip="123.45.67.89", port=54321) """ Explanation: If you already have an H2O cluster running that you'd like to connect to (for example, in a multi-node Hadoop environment), then you can specify the IP and port of that cluster as follows: End of explanation """ #csv_url = "http://www.stat.berkeley.edu/~ledell/data/eeg_eyestate_splits.csv" csv_url = "https://h2o-public-test-data.s3.amazonaws.com/eeg_eyestate_splits.csv" data = h2o.import_file(csv_url) """ Explanation: Download EEG Data The following code downloads a copy of the EEG Eye State dataset. All data is from one continuous EEG measurement with the Emotiv EEG Neuroheadset. The duration of the measurement was 117 seconds. The eye state was detected via a camera during the EEG measurement and added later manually to the file after analysing the video frames. '1' indicates the eye-closed and '0' the eye-open state. All values are in chronological order with the first measured value at the top of the data. We can import the data directly into H2O using the import_file method in the Python API. The import path can be a URL, a local path, a path to an HDFS file, or a file on Amazon S3. End of explanation """ data.shape """ Explanation: Explore Data Once we have loaded the data, let's take a quick look. First the dimension of the frame: End of explanation """ data.head() """ Explanation: Now let's take a look at the top of the frame: End of explanation """ data.columns """ Explanation: The first 14 columns are numeric values that represent EEG measurements from the headset. The "eyeDetection" column is the response. There is an additional column called "split" that was added (by me) in order to specify partitions of the data (so we can easily benchmark against other tools outside of H2O using the same splits). 
I randomly divided the dataset into three partitions: train (60%), valid (20%) and test (20%), and marked which split each row belongs to in the "split" column. Let's take a look at the column names.
End of explanation
"""
columns = ['AF3', 'eyeDetection', 'split']
data[columns].head()
"""
Explanation: To select a subset of the columns to look at, typical Pandas indexing applies:
End of explanation
"""
y = 'eyeDetection'
data[y]
"""
Explanation: Now let's select a single column, for example -- the response column, and look at the data more closely:
End of explanation
"""
data[y].unique()
"""
Explanation: It looks like a binary response, but let's validate that assumption:
End of explanation
"""
data[y] = data[y].asfactor()
"""
Explanation: If you don't specify the column types when you import the file, H2O makes a guess at what your column types are. If there are 0's and 1's in a column, H2O will automatically parse that as numeric by default. Therefore, we should convert the response column to a more efficient "enum" representation -- in this case it is a categorical variable with two levels, 0 and 1. If the only column in my data that is categorical is the response, I typically don't bother specifying the column type during the parse, and instead use this one-liner to convert it afterwards:
End of explanation
"""
data[y].nlevels()
"""
Explanation: Now we can check that there are two levels in our response column:
End of explanation
"""
data[y].levels()
"""
Explanation: We can query the categorical "levels" as well ('0' and '1' stand for "eye open" and "eye closed") to see what they are:
End of explanation
"""
data.isna()
data[y].isna()
"""
Explanation: We may want to check if there are any missing values, so let's look for NAs in our dataset. For tree-based methods like GBM and RF, H2O handles missing feature values automatically, so it's not a problem if we are missing certain feature values.
However, it is always a good idea to check to make sure that you are not missing any of the training labels. To figure out which, if any, values are missing, we can use the isna method on the response column. The columns in an H2O Frame are also H2O Frames themselves, so all the methods that apply to a Frame also apply to a single column.
End of explanation
"""
data[y].isna().sum()
"""
Explanation: The isna method doesn't directly answer the question, "Does the response column contain any NAs?", rather it returns a 0 if that cell is not missing (Is NA? FALSE == 0) and a 1 if it is missing (Is NA? TRUE == 1). So if there are no missing values, then summing over the whole column should produce a sum equal to 0.0. Let's take a look:
End of explanation
"""
data.isna().sum()
"""
Explanation: Great, no missing labels. :-) Out of curiosity, let's see if there is any missing data in this frame:
End of explanation
"""
data[y].table()
"""
Explanation: The sum is still zero, so there are no missing values in any of the cells. The next thing I may wonder about in a binary classification problem is the distribution of the response in the training data. Is one of the two outcomes under-represented in the training set? Many real datasets have what's called an "imbalance" problem, where one of the classes has far fewer training examples than the other class. Let's take a look at the distribution:
End of explanation
"""
n = data.shape[0] # Total number of training samples
data[y].table()['Count']/n
"""
Explanation: OK, the data is not exactly evenly distributed between the two classes -- there are more 0's than 1's in the dataset. However, this level of imbalance shouldn't be much of an issue for the machine learning algos. (We will revisit this later in the modeling section below.)
Let's calculate the percentage that each class represents: End of explanation """ train = data[data['split']=="train"] train.shape valid = data[data['split']=="valid"] valid.shape test = data[data['split']=="test"] test.shape """ Explanation: Split H2O Frame into a train and test set So far we have explored the original dataset (all rows). For the machine learning portion of this tutorial, we will break the dataset into three parts: a training set, validation set and a test set. If you want H2O to do the splitting for you, you can use the split_frame method. However, we have explicit splits that we want (for reproducibility reasons), so we can just subset the Frame to get the partitions we want. Subset the data H2O Frame on the "split" column: End of explanation """ # Import H2O GBM: from h2o.estimators.gbm import H2OGradientBoostingEstimator """ Explanation: Machine Learning in H2O We will do a quick demo of the H2O software using a Gradient Boosting Machine (GBM). The goal of this problem is to train a model to predict eye state (open vs closed) from EEG data. Train and Test a GBM model End of explanation """ model = H2OGradientBoostingEstimator(distribution='bernoulli', ntrees=100, max_depth=4, learn_rate=0.1) """ Explanation: We first create a model object of class, "H2OGradientBoostingEstimator". This does not actually do any training, it just sets the model up for training by specifying model parameters. End of explanation """ x = list(train.columns) x del x[12:14] #Remove the 13th and 14th columns, 'eyeDetection' and 'split' x """ Explanation: Specify the predictor set and response The model object, like all H2O estimator objects, has a train method, which will actually perform model training. At this step we specify the training and (optionally) a validation set, along with the response and predictor variables. The x argument should be a list of predictor names in the training frame, and y specifies the response column. 
We have already set y = "eyeDetection" above, but we still need to specify x.
End of explanation
"""
x = list(train.columns)
x
del x[14:16]  # Remove the 15th and 16th columns, 'eyeDetection' and 'split'
x
"""
Explanation: Now that we have specified x and y, we can train the model:
End of explanation
"""
model.train(x=x, y=y, training_frame=train, validation_frame=valid)
"""
Explanation: Inspect Model
The type of results shown when you print a model is determined by the following:
- Model class of the estimator (e.g. GBM, RF, GLM, DL)
- The type of machine learning problem (e.g. binary classification, multiclass classification, regression)
- The data you specify (e.g. training_frame only, training_frame and validation_frame, or training_frame and nfolds)
Below, we see a GBM Model Summary, as well as training and validation metrics since we supplied a validation_frame. Since this is a binary classification task, we are shown the relevant performance metrics, which include: MSE, R^2, LogLoss, AUC and Gini. Also, we are shown a Confusion Matrix, where the threshold for classification is chosen automatically (by H2O) as the threshold which maximizes the F1 score. The scoring history is also printed, which shows the performance metrics over some increment such as "number of trees" in the case of GBM and RF. Lastly, for tree-based methods (GBM and RF), we also print variable importance.
End of explanation
"""
print(model)
"""
Explanation: Model Performance on a Test Set
Once a model has been trained, you can also use it to make predictions on a test set. In the case above, we just ran the model once, so our validation set (passed as validation_frame) could have also served as a "test set." We technically have already created test set predictions and evaluated test set performance. However, when performing model selection over a variety of model parameters, it is common for users to train a variety of models (using different parameters) using the training set, train, and a validation set, valid.
Once the user selects the best model (based on validation set performance), the true test of model performance is performed by making a final set of predictions on the held-out (never been used before) test set, test. You can use the model_performance method to generate predictions on a new dataset. The results are stored in an object of class, "H2OBinomialModelMetrics". End of explanation """ perf.r2() perf.auc() perf.mse() """ Explanation: Individual model performance metrics can be extracted using methods like r2, auc and mse. In the case of binary classification, we may be most interested in evaluating test set Area Under the ROC Curve (AUC). End of explanation """ cvmodel = H2OGradientBoostingEstimator(distribution='bernoulli', ntrees=100, max_depth=4, learn_rate=0.1, nfolds=5) cvmodel.train(x=x, y=y, training_frame=data) """ Explanation: Cross-validated Performance To perform k-fold cross-validation, you use the same code as above, but you specify nfolds as an integer greater than 1, or add a "fold_column" to your H2O Frame which indicates a fold ID for each row. Unless you have a specific reason to manually assign the observations to folds, you will find it easiest to simply use the nfolds argument. When performing cross-validation, you can still pass a validation_frame, but you can also choose to use the original dataset that contains all the rows. We will cross-validate a model below using the original H2O Frame which is called data. End of explanation """ print(cvmodel.auc(train=True)) print(cvmodel.auc(xval=True)) """ Explanation: This time around, we will simply pull the training and cross-validation metrics out of the model. To do so, you use the auc method again, and you can specify train or xval as True to get the correct metric. 
End of explanation
"""
print(cvmodel.auc(train=True))
print(cvmodel.auc(xval=True))
"""
Explanation: Grid Search
One way of evaluating models with different parameters is to perform a grid search over a set of parameter values. For example, in GBM, here are three model parameters that may be useful to search over:
- ntrees: Number of trees
- max_depth: Maximum depth of a tree
- learn_rate: Learning rate in the GBM
We will define a grid as follows:
End of explanation
"""
ntrees_opt = [5,50,100]
max_depth_opt = [2,3,5]
learn_rate_opt = [0.1,0.2]
hyper_params = {'ntrees': ntrees_opt, 'max_depth': max_depth_opt, 'learn_rate': learn_rate_opt}
"""
Explanation: Define an "H2OGridSearch" object by specifying the algorithm (GBM) and the hyper parameters:
End of explanation
"""
from h2o.grid.grid_search import H2OGridSearch
gs = H2OGridSearch(H2OGradientBoostingEstimator, hyper_params=hyper_params)
"""
Explanation: An "H2OGridSearch" object also has a train method, which is used to train all the models in the grid.
End of explanation
"""
gs.train(x=x, y=y, training_frame=train, validation_frame=valid)
"""
Explanation: Compare Models
End of explanation
"""
print(gs)

# print out the auc for all of the models
auc_table = gs.sort_by('auc(valid=True)', increasing=False)
print(auc_table)
"""
Explanation: The "best" model in terms of validation set AUC is listed first in auc_table.
End of explanation
"""
best_model = h2o.get_model(auc_table['Model Id'][0])
best_model.auc()
"""
Explanation: The last thing we may want to do is generate predictions on the test set using the "best" model, and evaluate the test set AUC.
End of explanation
"""
best_perf = best_model.model_performance(test)
best_perf.auc()
"""
Explanation: End of explanation
"""
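Independently of H2O, it is easy to see how many models the grid above asks for: the Cartesian product of the three option lists, 3 × 3 × 2 = 18 parameter combinations:

```python
from itertools import product

ntrees_opt = [5, 50, 100]
max_depth_opt = [2, 3, 5]
learn_rate_opt = [0.1, 0.2]

# Enumerate every parameter combination the grid search will train.
grid = [{'ntrees': n, 'max_depth': d, 'learn_rate': lr}
        for n, d, lr in product(ntrees_opt, max_depth_opt, learn_rate_opt)]

print(len(grid))  # 18 models: 3 * 3 * 2
print(grid[0])    # {'ntrees': 5, 'max_depth': 2, 'learn_rate': 0.1}
```

Keeping this count in mind is useful when adding options: the number of trained models grows multiplicatively with each new hyperparameter list.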
ralph-group/pymeasure
examples/Notebook Experiments/script2.ipynb
mit
%%writefile my_config.ini
[Filename]
prefix = my_data_
dated_folder = 1
directory = data
ext = csv
index =
datetimeformat = %Y%m%d_%H%M%S

[Logging]
console = 1
console_level = WARNING
filename = test.log
file_level = DEBUG

[matplotlib.rcParams]
axes.axisbelow = True
axes.prop_cycle = cycler('color', ['b', 'g', 'r', 'c', 'm', 'y', 'k'])
axes.edgecolor = 'white'
axes.facecolor = '#EAEAF2'
axes.grid = True
axes.labelcolor = '.15'
axes.labelsize = 11.0
axes.linewidth = 0.0
axes.titlesize = 12.0
figure.facecolor = 'white'
figure.figsize = [8.0, 5.5]
font.sans-serif = ['Arial', 'Liberation Sans', 'Bitstream Vera Sans', 'sans-serif']
grid.color = 'white'
grid.linestyle = '-'
grid.linewidth = 1.0
image.cmap = 'Greys'
legend.fontsize = 10.0
legend.frameon = False
legend.numpoints = 1
legend.scatterpoints = 1
lines.linewidth = 1.75
lines.markeredgewidth = 0.0
lines.markersize = 7.0
lines.solid_capstyle = 'round'
patch.facecolor = (0.2980392156862745, 0.4470588235294118, 0.6901960784313725)
patch.linewidth = 0.3
text.color = '.15'
xtick.color = '.15'
xtick.direction = 'out'
xtick.labelsize = 10.0
xtick.major.pad = 7.0
xtick.major.size = 0.0
xtick.major.width = 1.0
xtick.minor.size = 0.0
ytick.color = '.15'
ytick.direction = 'out'
ytick.labelsize = 10.0
ytick.major.pad = 7.0
ytick.major.size = 0.0
ytick.major.width = 1.0
ytick.minor.size = 0.0

%%writefile procedures.py
import random
from time import sleep
import logging

log = logging.getLogger('')
log.addHandler(logging.NullHandler())

from pymeasure.experiment import Procedure, IntegerParameter, Parameter, FloatParameter, Measurable

class TestProcedure(Procedure):

    iterations = IntegerParameter('Loop Iterations', default=100)
    delay = FloatParameter('Delay Time', units='s', default=0.2)
    seed = Parameter('Random Seed', default='12345')

    iteration = Measurable('Iteration', default=0)
    random_number = Measurable('Random Number', random.random)
    offset = Measurable('Random Number + 1', default=0)

    def startup(self):
        log.info("Setting up random number generator")
        random.seed(self.seed)

    def measure(self):
        data = self.get_datapoint()
        data['Random Number + 1'] = data['Random Number'] + 1
        log.debug("Produced numbers: %s" % data)
        self.emit('results', data)
        self.emit('progress', 100.*self.iteration.value/self.iterations)

    def execute(self):
        log.info("Starting to generate numbers")
        for self.iteration.value in range(self.iterations):
            self.measure()
            sleep(self.delay)
            if self.should_stop():
                log.warning("Catch stop command in procedure")
                break

    def shutdown(self):
        log.info("Finished")

%%writefile analysis.py
def add_offset(data, offset):
    return data['Random Number'] + offset

def analyse(data):
    data['Random Number + 2'] = add_offset(data, 2)
    return data

from pymeasure.experiment import Experiment, config
from procedures import TestProcedure
from analysis import analyse

config.set_file('my_config.ini')
%matplotlib inline

procedure = TestProcedure(iterations=10, delay=.1)
experiment = Experiment('test', procedure, analyse)

experiment.start()

import pylab as pl
pl.figure(figsize=(10, 4))
ax1 = pl.subplot(121)
experiment.plot('Iteration', 'Random Number', ax=ax1)
ax2 = pl.subplot(122)
experiment.plot('Iteration', 'Random Number + 1', ax=ax2)

experiment.plot_live()
"""
Explanation: More features for Experiment class: custom config, Measurable parameter, analysis function
This example uses the Experiment class to create a measurement from a procedure object, with the Measurable parameter to automatically generate sorted DATA_COLUMNS and MEASURE lists (which is then passed to the get_datapoint function of the Procedure class). The file my_config.ini is passed to set custom data saving, logging and matplotlib options. The analysis function is passed as an optional attribute, to produce on-the-fly data analysis for live plotting (only the raw data is saved on disk). To have analysed data saved on disk, create an empty Measurable and update it in the measure loop as also shown in the example below.
End of explanation
"""
experiment.data
"""
Explanation: Analysed data
End of explanation
"""
experiment.results.data
"""
Explanation: Raw data (as saved on disk)
End of explanation
"""
experiment.filename
"""
Explanation: Filename generated by config preferences
End of explanation
"""
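The my_config.ini file above is consumed by pymeasure's own config machinery (config.set_file); for reference, a minimal stdlib sketch of how such an INI section can be read with configparser. The keys shown are illustrative only and this is not pymeasure's actual loader:

```python
import configparser

# A fragment shaped like the [Logging] section of my_config.ini above.
ini_text = """
[Logging]
console = 1
console_level = WARNING
filename = test.log
"""

parser = configparser.ConfigParser()
parser.read_string(ini_text)

# getboolean() accepts 1/0, yes/no, true/false, on/off.
console_enabled = parser.getboolean('Logging', 'console')
level = parser.get('Logging', 'console_level')
print(console_enabled, level)
```

This mirrors the idea only; pymeasure additionally maps the [matplotlib.rcParams] section onto matplotlib settings.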
Upward-Spiral-Science/team1
code/Assignment10_Emily.ipynb
apache-2.0
plt.figure()
plt.figure(figsize=(28,7))

# x-direction
# sum up y-z plane at each x
plt.subplot(131)
unique_x = np.unique(csv_clean[:,0])
sum_x = [0]*len(unique_x)
i = 0
for x in unique_x:
    sum_x[i] = np.sum(csv_clean[csv_clean[:,0] == x][:,4])*0.0001
    i = i+1
plt.bar(unique_x, sum_x, 1)
plt.xlim(450, 3600)
plt.ylabel('density in synapses/voxel 1e+4',fontsize=20)
plt.xlabel('x-coordinate',fontsize=20)
plt.title('Total Density across Each X-Layer',fontsize=20)

# y-direction
# sum up x-z plane at each y
plt.subplot(132)
unique_y = np.unique(csv_clean[:,1])
sum_y = [0]*len(unique_y)
i = 0
for y in unique_y:
    sum_y[i] = np.sum(csv_clean[csv_clean[:,1] == y][:,4])*.0001
    i = i+1
plt.bar(unique_y, sum_y, 1)
plt.xlim(1570, 3190)
plt.ylabel('density in synapses/voxel 1e+4',fontsize=20)
plt.xlabel('y-coordinate',fontsize=20)
plt.title('Total Density across Each Y-Layer',fontsize=20)

# z-direction
# sum up x-y plane at each z
plt.subplot(133)
unique_z = np.unique(csv_clean[:,2])
sum_z = [0]*len(unique_z)
i = 0
for z in unique_z:
    sum_z[i] = np.sum(csv_clean[csv_clean[:,2] == z][:,4])*.0001
    i = i+1
plt.bar(unique_z, sum_z, 1)
plt.ylabel('density in synapses/voxel 1e+4',fontsize=20)
plt.xlabel('z-coordinate',fontsize=20)
plt.title('Total Density across Each Z-Layer',fontsize=20)
"""
Explanation: 1.
See how Density Varies in the X, Y, Z directions
End of explanation
"""
# Divide by middle z-layer
from scipy.signal import argrelextrema as relext

x_minima = relext(np.array(sum_x), np.less)
print 'X minima ', x_minima

delimeter = np.median(csv_clean[:,2])

left_volume = csv_clean[csv_clean[:,2] <= delimeter]
sum_left = [0]*len(unique_x)
i = 0
for x in unique_x:
    sum_left[i] = np.sum(left_volume[left_volume[:,0] == x][:,4])*.0001
    i = i+1
left_minima = relext(np.array(sum_left),np.less)

right_volume = csv_clean[csv_clean[:,2] > delimeter]
sum_right = [0]*len(unique_x)
i = 0
for x in unique_x:
    sum_right[i] = np.sum(right_volume[right_volume[:,0] == x][:,4])*.0001
    i = i+1
right_minima = relext(np.array(sum_right),np.less)

print "Left minima: ", left_minima
print "Right minima: ", right_minima
"""
Explanation: Note: It looks like there are evident density local minima that may define cortex layers. In the first plot, we see that there are 4 defined local minima that may be cortex layer boundaries. These fall around 600, 1000, 1700, 2500.
2. Do these persist across subsets of the full sample?
End of explanation
"""
import sklearn.mixture as mixture

# Check for uniformity in clusters along x-direction
plt.figure(figsize=(7,7))
divisions = np.unique(csv_clean[:,2])

# Randomly Sample
samples = 10000
perm = np.random.permutation(xrange(1, len(csv_clean[:])))
csv_clean_sample = csv_clean[perm[:samples]]

for d in divisions:
    z_layer = csv_clean_sample[csv_clean_sample[:,2] == d]
    #Run GMM on layer
    print 'Running GMM on layer ' + str(d)
    max_clusters = 35
    bic = np.array([])
    i = np.array(range(1, max_clusters))
    for idx in range(1, max_clusters):
        #print "Fitting and evaluating model with " + str(idx) + " clusters."
        gmm = mixture.GMM(n_components=idx,n_iter=1000,covariance_type='diag', random_state=1)
        gmm.fit(z_layer[:,(0,1,4)])
        bic = np.append(bic, gmm.bic(z_layer[:,(0,1,4)]))
    #print bic
    plt.plot(i, 1.0/bic)
    plt.hold(True)
plt.title('BIC for each Z-layer')
plt.ylabel('score')
plt.xlabel('number of clusters')
plt.legend(divisions)
plt.show()
"""
Explanation: K-means cluster centers across z-values with new optimal cluster number estimate
From above, the optimal number of clusters ranges across z-layers. The overall trend looks like the elbows are around 6 clusters.
End of explanation
"""
import sklearn.cluster as cluster

fig = plt.figure(figsize=(10, 7))
ax = fig.gca(projection='3d')
for d in divisions:
    z_layer = csv_clean[csv_clean[:,2] == d]
    #Run GMM on layer
    print 'Running GMM on layer ' + str(d)
    print "Fitting and evaluating model with 6 clusters."
    gmm = mixture.GMM(n_components=6,n_iter=1000,covariance_type='diag', random_state=1)
    gmm.fit(z_layer[:,(0,1,4)])
    center1 = [gmm.means_[0][0],gmm.means_[0][1],gmm.means_[0][2]]
    center2 = [gmm.means_[1][0],gmm.means_[1][1],gmm.means_[1][2]]
    center3 = [gmm.means_[2][0],gmm.means_[2][1],gmm.means_[2][2]]
    ax.scatter(center1, center2, center3, marker='o', s=100)
    plt.hold(True)
plt.title('Cluster centers for every Z-layer')
plt.ylabel('y')
plt.xlabel('x')
ax.set_zlabel('z')
plt.show()
"""
Explanation: Note: The local minima are similar across halves of the sample volume.
3. Estimating Optimal Number of Clusters in z-direction
Since from 1, it seems as though layering may be present in the z-direction.
End of explanation
"""
from mpl_toolkits.mplot3d import axes3d

# chopping data based on thresholds on x and y coordinates
x_bounds = (409, 3529)
y_bounds = (1564, 3124)

def check_in_bounds(row, x_bounds, y_bounds):
    if row[0] < x_bounds[0] or row[0] > x_bounds[1]:
        return False
    if row[1] < y_bounds[0] or row[1] > y_bounds[1]:
        return False
    if row[3] == 0:
        return False
    return True

indices_in_bound, = np.where(np.apply_along_axis(check_in_bounds, 1, csv, x_bounds, y_bounds))
data_thresholded = csv[indices_in_bound]
print data_thresholded.shape
n = data_thresholded.shape[0]

# Original
total_unmasked = np.sum(data_thresholded[:, 3])
total_syn = np.sum(data_thresholded[:, 4])
a = np.apply_along_axis(lambda x:x[4]/x[3], 1, data_thresholded)
#hist_n, bins, _ = plt.hist(a, 5000)
#plt.xlim(-.0001, .0035)
#plt.show()

# Spike
spike = a[np.logical_and(a <= 0.0015, a >= 0.0012)]
print "Points in spike: ", len(spike)
print "Average Density: ", np.mean(spike)
print "Std Deviation: ", np.std(spike)

# Histogram
hist_n, bins, _ = plt.hist(spike, 2000)
plt.title('Histogram of Synaptic Density')
plt.xlabel('Synaptic Density (syn/voxel)')
plt.ylabel('frequency')

# Scatter plot
data_thresholded[:,4] = a
spike_coords = data_thresholded[np.logical_and(data_thresholded[:,4] <= 0.0015, data_thresholded[:,4] >= 0.0012)]

# Randomly Sample
samples = 500
perm = np.random.permutation(xrange(1, len(spike_coords[:])))
spike_sample = spike_coords[perm[:samples]]

fig = plt.figure(figsize=(10, 7))
ax = fig.gca(projection='3d')
ax.view_init()
ax.dist = 10  # distance
ax.scatter(
    spike_sample[:, 0], spike_sample[:, 1], spike_sample[:, 2],  # data
    s = [10**(n*1500) for n in spike_sample[:,4]]
)
ax.set_xlim(np.min(spike_sample[:,0]),np.max(spike_sample[:,0]))
ax.set_ylim(np.min(spike_sample[:,1]),np.max(spike_sample[:,1]))
ax.set_zlim(np.min(spike_sample[:,2]),np.max(spike_sample[:,2]))
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.title('Random Sample of Points in Spike')
plt.show()
"""
Explanation: 4. Exploring the "spike"
Histogram from last time: average density 0.00115002980202, std dev 0.000406563246763
End of explanation
"""
fig = plt.figure(figsize=(10,7))
ax = fig.gca(projection='3d')
for d in divisions:
    z_layer = csv_clean[csv_clean[:,2] == d]
    #Run GMM on layer
    gmm = mixture.GMM(n_components=6,n_iter=1000,covariance_type='diag', random_state=1)
    gmm.fit(z_layer[:,(0,1,4)])
    center1 = [gmm.means_[0][0],gmm.means_[0][1],gmm.means_[0][2]]
    center2 = [gmm.means_[1][0],gmm.means_[1][1],gmm.means_[1][2]]
    center3 = [gmm.means_[2][0],gmm.means_[2][1],gmm.means_[2][2]]
    ax.scatter(center1, center2, center3, marker='o', s=100)
    plt.hold(True)
plt.hold(True)

resultant_estimate = np.array([-2.578, -1.369, -1])
print resultant_estimate.shape

x, y, z = np.meshgrid(np.arange(-500, 3500, 1000), np.arange(0, 3500, 1000), np.arange(0, 3000, 1000))
ax = fig.gca(projection='3d')
ax.quiver(x, y, z, resultant_estimate[0], resultant_estimate[1], resultant_estimate[2],
          length=500, color="Tomato", alpha=.8, arrow_length_ratio=.2)
ax.view_init(elev=18, azim=-30)
plt.title('Cluster centers for each Z-layer')
plt.ylabel('y')
plt.xlabel('x')
ax.set_zlabel('z')
ax.set_xticks([-1000, 0, 1000, 2000, 3000])
ax.set_yticks([0, 1000, 2000, 3000])
ax.set_zticks([0, 1000, 2000, 3000])
plt.show()
"""
Explanation: Note: There is no evident grouping or layering of the points in the spike. Needs further investigation.
5. Does our previous direction estimate make sense?
End of explanation
"""
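The layer-boundary search in this notebook leans on scipy.signal.argrelextrema with np.less. For reference, a self-contained numpy version of the same strict-local-minima test on a 1-D density profile (illustrative only; the toy densities array is not part of the original analysis):

```python
import numpy as np

def local_minima(values):
    # Indices i where values[i] is strictly smaller than both neighbours,
    # mirroring scipy.signal.argrelextrema(values, np.less) for 1-D input.
    v = np.asarray(values)
    interior = (v[1:-1] < v[:-2]) & (v[1:-1] < v[2:])
    return np.flatnonzero(interior) + 1

# Toy density profile: dips at indices 1 and 4 are candidate layer boundaries.
densities = np.array([5.0, 3.0, 4.0, 6.0, 2.0, 7.0])
print(local_minima(densities))
```

Endpoints are never reported, which matches argrelextrema's behaviour of only considering interior points.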
peendebak/SPI-rack
examples/D5b.ipynb
mit
# Import SPI rack and D5b module
from spirack import SPI_rack, D5b_module

import logging
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
"""
Explanation: D5b example notebook
Example notebook of the D5b, 8 channel 18-bit module. The module contains the same DACs as in the 16 channel D5a module, but it also contains an ARM microcontroller. This allows for operations where exact timing is needed. At the moment of writing it supports setting the DACs in a normal DC mode, or allows them to toggle between two values with fixed and precise timing. This toggling happens after an external trigger (by another module for example). It can be used in conjunction with a B2 or D4b module to act as a lock-in system. The D5b sends out triggers at every DAC toggle; this allows the B2/D4a to sync the measurements to this change.
SPI Rack setup
To use the D5b module, we need to import both the D5b_module and the SPI_rack module from the spirack library. All the communication with the SPI Rack runs through the SPI_rack object which communicates through a virtual COM port. This COM port can only be open on one instance on the PC. Make sure you close the connection here before you can use it somewhere else.
Import the logging library to be able to display the logging messages.
End of explanation
"""
End of explanation """ print('Version: ' + spi_rack.get_firmware_version()) print('Temperature: {:.2f} C'.format(spi_rack.get_temperature())) battery_v = spi_rack.get_battery() print('Battery: {:.3f}V, {:.3f}V'.format(battery_v[0], battery_v[1])) """ Explanation: Read back the version of the microcontroller software. This should return 1.6 or higher to be able to use the D5b properly. Als read the temperature and the battery voltages through the C1b, this way we verify that the connection with the SPI Rack is working. End of explanation """ d5b = D5b_module(spi_rack, module=1, reset_voltages=True) print("Firmware version: {}".format(d5b.get_firmware_version())) """ Explanation: Create a new D5b module object at the correct module address using the SPI object. By default the module resets the output voltages to 0 Volt. Before it does this, it will read back the current value. If this value is non-zero it will slowly ramp it to zero. If reset_voltages = False then the output will not be changed. To see that the we have a connection, we read back the firmware version. End of explanation """ d5b.set_clock_source('internal') print("Clock source: {}".format(d5b.get_clock_source())) """ Explanation: Configuring the D5b The D5b module can run from either a local (inside the module) clock or a user provided clock from the backplane. This backplane clock should be 10 MHz and either a square or a sine wave. If there are more modules with microcontrollers in the rack, and they need to run synchronously, it is recommended to use the backplane clock. For a single module it is fine to run it using the local clock. If the external clock is selected but not present, the user will get an ERROR to the logger and the microcontroller will keep running on the internal clock. Never turn off the external clock if the microcontroller is running on it. This will stop the module from functioning. 
In this example we will use the internal clock: End of explanation """ d5b.set_toggle_time(300) toggle_value = d5b.get_toggle_time() print('Toggle time: {} x 100 ns = {} s'.format(toggle_value, round(toggle_value*100e-9, 7))) d5b.set_toggle_amount(6) print('Toggle amount: {}'.format(d5b.get_toggle_amount())) """ Explanation: The toggle time of the DACs is set in steps of 100 ns (the 10 MHz clock) with a minimum of 30 &mu;s. We need to input a value as a multiple of this 100 ns. The toggle amount should be an even number with a minimum of two. End of explanation """ d5b.set_trigger_holdoff_time(30e-6) print('Holdoff time: {} s'.format(d5b.get_trigger_holdoff_time())) """ Explanation: The module will start toggling after it receives a trigger from the backplane (either directly controlled from the PC or from another module). If there are any filters and delays in the setup, we might want to wait with toggling the DAC before these are settled. This is what the hold-off time is for. It can be set in steps of 100 ns, with a minimum of 30 &mu;s. This time should be set in seconds. 
End of explanation """ DAC = 0 d5b.set_DAC_span(DAC, '4V_bi') print("Span DAC {}: {}".format(DAC, d5b.get_DAC_span(DAC))) d5b.set_DAC_mode(DAC, 'toggle') print("DAC {} mode: {}\n".format(DAC, d5b.get_DAC_mode(DAC))) d5b.set_DAC_voltage(DAC, 0) d5b.set_DAC_neg_toggle_voltage(DAC, -2) d5b.set_DAC_pos_toggle_voltage(DAC, 2) values = d5b.get_DAC_voltages(DAC) print('Voltage: {:.3f} V\nNegative Toggle: {:.3f} V\nPositive Toggle: {:.3f} V'.format(values[0],values[1],values[2])) DAC = 1 d5b.set_DAC_span(DAC, '4V_bi') print("Span DAC {}: {}".format(DAC, d5b.get_DAC_span(DAC))) d5b.set_DAC_mode(DAC, 'toggle') print("DAC {} mode: {}\n".format(DAC, d5b.get_DAC_mode(DAC))) d5b.set_DAC_voltage(DAC, 1) d5b.set_DAC_neg_toggle_voltage(DAC, -1) d5b.set_DAC_pos_toggle_voltage(DAC, 3) values = d5b.get_DAC_voltages(DAC) print('Voltage: {:.3f} V\nNegative Toggle: {:.3f} V\nPositive Toggle: {:.3f} V'.format(values[0],values[1],values[2])) """ Explanation: Running the D5b We will now set two DAC outputs to toggling mode and &pm;4V span. DAC 0 we will set to toggle &pm;2V around 0V and DAC1 to toggle between &pm;2V around 1V. For more details on the span, stepsize and voltages. See the documentation on the website and the D5a module example notebook. End of explanation """ spi_rack.trigger_now() """ Explanation: Here we generate the trigger directly from the PC to demonstrate the usage. End of explanation """ spi_rack.close() """ Explanation: You should be able to see the following output on your oscilloscope (minus the two top traces): <img src="Images/Scope_Image.png" alt="Scope Image" title="Scope Image" width="850" /> When done with this example, it is recommended to close the SPI Rack connection. This will allow other measurement scripts to access the device. End of explanation """
hainm/mdtraj
examples/clustering.ipynb
lgpl-2.1
from __future__ import print_function
%matplotlib inline

import mdtraj as md
import numpy as np
import matplotlib.pyplot as plt
import scipy.cluster.hierarchy
"""
Explanation: In this example, we cluster our alanine dipeptide trajectory using the RMSD distance metric and Ward's method.
End of explanation
"""
traj = md.load('ala2.h5')
"""
Explanation: Let's load up our trajectory. This is the trajectory that we generated in the "Running a simulation in OpenMM and analyzing the results with mdtraj" example. The first step is to build the rmsd cache, which precalculates some values for the RMSD computation.
End of explanation
"""
distances = np.empty((traj.n_frames, traj.n_frames))
for i in range(traj.n_frames):
    distances[i] = md.rmsd(traj, traj, i)
print('Max pairwise rmsd: %f nm' % np.max(distances))
"""
Explanation: Let's compute all pairwise rmsds between conformations.
End of explanation
"""
linkage = scipy.cluster.hierarchy.ward(distances)
"""
Explanation: scipy.cluster implements the Ward linkage algorithm (among others).
End of explanation
"""
plt.title('RMSD Ward hierarchical clustering')
scipy.cluster.hierarchy.dendrogram(linkage, no_labels=True, count_sort='descendent')
None
"""
Explanation: Let's plot the resulting dendrogram.
End of explanation
"""
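The RMSD matrix built above comes from md.rmsd. As a minimal sketch of what a pairwise RMSD is, here is a plain-numpy version on toy coordinates; note it deliberately omits the optimal superposition (alignment) that md.rmsd performs before measuring, so it is illustrative only:

```python
import numpy as np

def pairwise_rmsd(xyz):
    # All-pairs coordinate RMSD for an (n_frames, n_atoms, 3) array:
    # sqrt of the mean squared per-atom displacement between two frames.
    diff = xyz[:, None, :, :] - xyz[None, :, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1).mean(axis=-1))

# Two 3-atom frames; the second is the first translated 2 units along x.
xyz = np.zeros((2, 3, 3))
xyz[1, :, 0] = 2.0
d = pairwise_rmsd(xyz)
print(d[0, 1])  # 2.0, since every atom moved exactly 2 units
```

For a rigid translation like this, an aligned RMSD (as md.rmsd computes) would instead be zero, which is exactly why the superposition step matters for molecular conformations.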
jonathf/chaospy
docs/user_guide/fundamentals/quadrature_integration.ipynb
mit
import numpy
import chaospy

from problem_formulation import joint

joint
nodes = joint.sample(500, seed=1234)
weights = numpy.repeat(1/500, 500)

from matplotlib import pyplot
pyplot.scatter(*nodes)
pyplot.show()
"""
Explanation: Quadrature integration
Quadrature methods, or numerical integration, is a broad class of algorithms for performing integration of any function $g$ that is defined without requiring an analytical definition. In the scope of chaospy we limit this to methods that can be reduced to the following approximation:
$$\int p(q) g(q) dq \approx \sum_{n=1}^N W_n g(Q_n)$$
Here $p(q)$ is a weight function, which is assumed to be a probability density function, and $W_n$ and $Q_n$ are respectively quadrature weights and abscissas used to define the approximation.
The simplest application of such an approximation is Monte Carlo integration. In Monte Carlo you only need to select $W_n=1/N$ for all $n$ and $Q_n$ to be independent identically distributed samples drawn from the distribution of $p(q)$. For example:
End of explanation
"""
from problem_formulation import model_solver

evaluations = numpy.array([model_solver(node) for node in nodes.T])
estimate = numpy.sum(weights*evaluations.T, axis=-1)
"""
Explanation: Having the nodes and weights, we can now apply the quadrature integration. For a simple example, this might look something like:
End of explanation
"""
gauss_nodes, gauss_weights = chaospy.generate_quadrature(6, joint, rule="gaussian")
pyplot.scatter(*gauss_nodes, s=gauss_weights*1e4)
pyplot.show()
"""
Explanation: How to apply quadrature rules to build model approximations is discussed in more detail in pseudo-spectral projection.
Gaussian quadrature
Most integration problems when dealing with polynomial chaos expansion come with a weight function $p(x)$ which happens to be the probability density function. Gaussian quadrature creates weights and abscissas that are tailored to be optimal with the inclusion of a weight function.
It is therefore not one method, but a collection of methods, each tailored to different probability density functions. In chaospy Gaussian quadrature is a functionality attached to each probability distribution. This means that instead of explicitly supporting a list of Gaussian quadrature rules, all feasible rules are supported through the capability of the distribution implementation. For common distribution, this means that the quadrature rules are calculated analytically, while others are estimated using numerically stable methods. Generating optimal Gaussian quadrature in chaospy by passing the flag rule="gaussian" to the chaospy.generate_quadrature() function: End of explanation """ grid_nodes, grid_weights = chaospy.generate_quadrature( 3, joint, rule=["genz_keister_24", "fejer_2"], growth=True) pyplot.scatter(*grid_nodes, s=grid_weights*6e3) pyplot.show() """ Explanation: Since joint is bivariate consists of a normal and a uniform distribution, the optimal quadrature here is a tensor product of Gauss-Hermite and Gauss-Legendre quadrature. Weightless quadrature Most quadrature rules optimized to a given weight function are part of the Gaussian quadrature family. It does the embedding of the weight function automatically as that is what it is designed for. This is quite convenient in uncertainty quantification as weight functions in the form of probability density functions is almost always assumed. For most other quadrature rules, including a weight function is typically not canonical, however. So for consistency and convenience, chaospy does a small trick and embeds the influence of the weight into the quadrature weights: $$\int p(q) g(q) dq \approx \sum_i W_i p(Q_i) g(Q_i) = \sum_i W^{*}_i g(Q_i)$$ Here we substitute $W^{*}=W_i p(Q_i)$. This ensures us that all quadrature rules in chaospy behaves similarly as the Gaussian quadrature rules. For our bivariate example, we can either choose a single rule for each dimension, or individual rules. 
Since joint consists of both a normal and a uniform, it makes sense to choose the latter. For example Genz-Keister is know to work well with normal distributions and Clenshaw-Curtis should work well with uniform distributions: End of explanation """ interval = (-1, 1) chaospy.generate_quadrature(4, interval, rule="clenshaw_curtis") """ Explanation: As a sidenote: Even though embedding the density function into the weights is what is the most convenient think to do in uncertainty quantification, it might not be what is needed always. So if one do not want the embedding, it is possible to retrieve the classical unembedded scheme by replacing the probability density function as the second argument of chaospy.generate_quadrature() with an interval of interest. For example: End of explanation """ from pytest import raises with raises(AssertionError): chaospy.generate_quadrature(4, interval, rule="gaussian") """ Explanation: Also note that for quadrature rules that do require a weighting function, passing an interval instead of an distribution will cause an error: End of explanation """ sparse_nodes, sparse_weights = chaospy.generate_quadrature( 3, joint, rule=["genz_keister_24", "clenshaw_curtis"], sparse=True) idx = sparse_weights > 0 pyplot.scatter(*sparse_nodes[:, idx], s=sparse_weights[idx]*2e3) pyplot.scatter(*sparse_nodes[:, ~idx], s=-sparse_weights[~idx]*2e3, color="grey") pyplot.show() """ Explanation: Smolyak sparse-grid As the number of dimensions increases, the number of nodes and weights quickly grows out of hands, making it unfeasible to use quadrature integration. This is known as the curse of dimensionality, and except for Monte Carlo integration, there is really no way to completely guard against this problem. However there are a few ways to partially mitigate the problem, like Smolyak sparse-grid. Smolyak sparse-grid uses a rule over a combination of different quadrature orders and tailor it together into a new scheme. 
If the quadrature nodes are more or less nested between the different quadrature orders, as in the same nodes get reused a lot, then the Smolyak method can drastically reduce the quadrature nodes needed. To use Smolyak sparse-grid in chaospy, just pass the flag sparse=True to the chaospy.generate_quadrature() function. For example: End of explanation """
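The Monte Carlo special case described at the start of this notebook, $W_n = 1/N$ with $Q_n$ drawn from $p(q)$, can be sanity-checked without chaospy. A numpy-only sketch with a standard normal weight function and $g(q) = q^2$, whose exact integral is the variance, 1:

```python
import numpy as np

# Monte Carlo as the simplest instance of sum(W_n * g(Q_n)):
# W_n = 1/N for all n, and Q_n sampled i.i.d. from p(q).
rng = np.random.default_rng(1234)
N = 200_000
Q = rng.standard_normal(N)      # samples from p(q) = standard normal pdf
W = np.full(N, 1.0 / N)         # uniform Monte Carlo weights
estimate = np.sum(W * Q**2)     # approximates E[q^2] = Var[q] = 1
print(estimate)
```

The error shrinks like $1/\sqrt{N}$, which is exactly the slow convergence the deterministic quadrature rules in this notebook are designed to beat for smooth integrands.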
GoogleCloudPlatform/asl-ml-immersion
notebooks/recommendation_systems/labs/1_content_based_by_hand.ipynb
apache-2.0
!python3 -m pip freeze | grep tensorflow==2 || \
    python3 -m pip install tensorflow
"""
Explanation: Content Based Filtering by hand
This lab illustrates how to implement a content based filter using low level Tensorflow operations. The code here follows the technique explained in Module 2 of Recommendation Engines: Content Based Filtering.
Learning Objectives
Understand the mathematics behind the user feature matrix
Know how to calculate user ratings from user features and item features
To run this lab, we need to use TensorFlow version 2.0.
End of explanation
"""
import numpy as np
import tensorflow as tf
"""
Explanation: Make sure to restart your kernel to ensure this change has taken place.
End of explanation
"""
users = ["Ryan", "Danielle", "Vijay", "Chris"]
movies = [
    "Star Wars",
    "The Dark Knight",
    "Shrek",
    "The Incredibles",
    "Bleu",
    "Memento",
]
features = ["Action", "Sci-Fi", "Comedy", "Cartoon", "Drama"]

num_users = len(users)
num_movies = len(movies)
num_feats = len(features)
num_recommendations = 2
"""
Explanation: To start, we'll create our list of users, movies and features. While the users and movies represent elements in our database, for a content-based filtering method the features of the movies are likely hand-engineered and rely on domain knowledge to provide the best embedding space. Here we use the categories of Action, Sci-Fi, Comedy, Cartoon, and Drama to describe our movies (and thus our users). In this example, we will assume our database consists of four users and six movies, listed below.
End of explanation
"""
# Each row represents a user's rating for the different movies.
users_movies = tf.constant(
    [
        [4, 6, 8, 0, 0, 0],
        [0, 0, 10, 0, 8, 3],
        [0, 6, 0, 0, 3, 7],
        [10, 9, 0, 5, 0, 2],
    ],
    dtype=tf.float32,
)

# Features of the movies one-hot encoded.
# e.g.
# columns could represent
# ['Action', 'Sci-Fi', 'Comedy', 'Cartoon', 'Drama']
movies_feats = tf.constant(
    [
        [1, 1, 0, 0, 1],
        [1, 1, 0, 0, 0],
        [0, 0, 1, 1, 0],
        [1, 0, 1, 1, 0],
        [0, 0, 0, 0, 1],
        [1, 0, 0, 0, 1],
    ],
    dtype=tf.float32,
)
"""
Explanation: Initialize our users, movie ratings, and features
We'll need to enter the user's movie ratings and the k-hot encoded movie features matrix. Each row of the users_movies matrix represents a single user's rating (from 1 to 10) for each movie. A zero indicates that the user has not seen/rated that movie. The movies_feats matrix contains the features for each of the given movies. Each row represents one of the six movies, while the columns represent the five categories. A one indicates that a movie fits within a given genre/category.
End of explanation
"""
users_feats = # TODO: Use matrix multiplication to find the user features.
users_feats
"""
Explanation: Computing the user feature matrix
We will compute the user feature matrix; that is, a matrix containing each user's embedding in the five-dimensional feature space.
TODO 1: Calculate this as the matrix multiplication of the users_movies tensor with the movies_feats tensor.
End of explanation
"""
users_feats = users_feats / tf.reduce_sum(users_feats, axis=1, keepdims=True)
users_feats
"""
Explanation: Next we normalize each user feature vector to sum to 1. Normalizing isn't strictly necessary, but it makes it so that rating magnitudes will be comparable between users.
End of explanation
"""
top_users_features = tf.nn.top_k(users_feats, num_feats)[1]
top_users_features

for i in range(num_users):
    feature_names = [features[int(index)] for index in top_users_features[i]]
    print(f"{users[i]}: {feature_names}")
"""
Explanation: Ranking feature relevance for each user
We can use the users_feats computed above to represent the relative importance of each movie category for each user.
End of explanation
"""
users_ratings = # TODO: Use matrix multiplication to find user ratings.
users_ratings
"""
Explanation: Determining movie recommendations.
We'll now use the users_feats tensor we computed above to determine the movie ratings and recommendations for each user.
To compute the projected ratings for each movie, we compute the similarity measure between the user's feature vector and the corresponding movie feature vector. We will use the dot product as our similarity measure. In essence, this is a weighted movie average for each user.
TODO 2: Implement this as a matrix multiplication. Hint: one of the operands will need to be transposed.
End of explanation
"""
users_unseen_movies = tf.equal(users_movies, tf.zeros_like(users_movies))
ignore_matrix = tf.zeros_like(tf.cast(users_movies, tf.float32))

users_ratings_new = tf.where(users_unseen_movies, users_ratings, ignore_matrix)
users_ratings_new
"""
Explanation: The computation above finds the similarity measure between each user and each movie in our database. To focus only on the ratings for new movies, we apply a mask to the all_users_ratings matrix. If a user has already rated a movie, we ignore that rating. This way, we only focus on ratings for previously unseen/unrated movies.
End of explanation
"""
top_movies = tf.nn.top_k(users_ratings_new, num_recommendations)[1]
top_movies

for i in range(num_users):
    movie_names = [movies[index] for index in top_movies[i]]
    print(f"{users[i]}: {movie_names}")
"""
Explanation: Finally, let's grab and print out the top 2 rated movies for each user.
End of explanation
"""
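The linear algebra behind this lab can be checked on a tiny example with numpy. This is a stand-in illustration with made-up 2-user / 3-movie / 2-feature matrices, not the lab's TensorFlow solution cells:

```python
import numpy as np

# Ratings: rows are users, columns are movies (0 = not rated).
users_movies = np.array([[4.0, 0.0, 8.0],
                         [0.0, 6.0, 0.0]])
# One-hot movie features: rows are movies, columns are genres.
movies_feats = np.array([[1.0, 0.0],
                         [0.0, 1.0],
                         [1.0, 1.0]])

users_feats = users_movies @ movies_feats              # ratings-weighted genre totals
users_feats /= users_feats.sum(axis=1, keepdims=True)  # normalize each user to sum to 1
users_ratings = users_feats @ movies_feats.T           # dot-product similarity per movie

print(users_ratings.round(2))
```

User 0 rated movies in both genres (totals 12 and 8, normalized to [0.6, 0.4]), so the projected ratings are 0.6, 0.4 and 1.0 for the three movies, which is the weighted-average behaviour the lab describes.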
dwcaraway/intro-to-python-talk
python-intermediate.ipynb
unlicense
def say_hello():
    print('hello, world!')
"""
Explanation: Python Course 2: Intermediate Python
Functions
Functions encapsulate repeatable code. They're defined with the def keyword.
End of explanation
"""
say_hello()

def hi(name):
    print('hi', name)

hi("pythonistas")
"""
Explanation: Functions are invoked using parentheses (). Arguments are passed between the parentheses.
End of explanation
"""
def double(value):
    return value*2

print(double(4))
"""
Explanation: Functions can use the return keyword to stop execution and send a value back to the caller.
End of explanation
"""
```python
a[start:end]  # items start through end-1
a[start:]     # items start through the rest of the array
a[:end]       # items from the beginning through end-1
a[:]          # a copy of the whole array
```
There is also the step value, which can be used with any of the above:
```python
a[start:end:step]  # start through not past end, by step
```
The key point to remember is that the :end value represents the first value that is not in the selected slice. So, the difference between end and start is the number of elements selected (if step is 1, the default).
The other feature is that start or end may be a negative number, which means it counts from the end of the array instead of the beginning. So:
```python
a[-1]   # last item in the array
a[-2:]  # last two items in the array
a[:-2]  # everything except the last two items
```
Similarly, step may be a negative number:
```python
a[::-1]    # all items in the array, reversed
a[1::-1]   # the first two items, reversed
a[:-3:-1]  # the last two items, reversed
a[-3::-1]  # everything except the last two items, reversed
```
exercise
Write a function that takes a string and returns the reverse of it.
```python
reverse('abcde')  # returns `edcba`
```
Sets
Sets are like lists except they cannot have duplicate elements.
End of explanation """
list_with_dupes = [1, 2, 3, 3, 4, 5]
print(set(list_with_dupes))  # removes duplicate 3
""" Explanation: Dictionaries (dicts)
Dictionaries store key and value pairs. They let you store data with an object that you can use to look up the value. They're constructed using curly braces {}.
End of explanation """
a_dict = {'some': 'value', 'another': 'value'}
print(a_dict['another'])
""" Explanation: The dictionary works by calculating the hash of the key. You can see this using the __hash__ function.
End of explanation """
'another'.__hash__()
""" Explanation: Any value that is hashable can be used as a key, including numbers.
End of explanation """
mixed_keys = {4: 'somevalue'}
print(mixed_keys[4])
""" Explanation: New elements will be added or changed just by referencing them. The `del` keyword will delete entries.
End of explanation """
changing_dict = {'foo': 'bar', 'goner': "gone soon"}
changing_dict['foo'] = 'biv'  # update existing value
changing_dict['notfound'] = 'found now!'
#insert new value
del changing_dict['goner']  # removes key / value
print(changing_dict)
""" Explanation: You can iterate over keys in the dictionary using a for loop.
End of explanation """
starter_dictionary = {'a': 1, 'b': 2, 'c': 3}
for key in starter_dictionary:
    print(key)
""" Explanation: You can iterate over the values using the values() method.
End of explanation """
for val in starter_dictionary.values():
    print(val)
""" Explanation: We'll cover these if there's time
Classes
Classes encapsulate data and functions. They're a handy abstraction in many languages such as Java and C++. They're used in Python but not as often as functions. Create them using the class keyword.
End of explanation """
class ExampleClass:
    pass

ex = ExampleClass()  # construct an instance of the ExampleClass
""" Explanation: Unit Testing
End of explanation """
# foo.py
def foo():
    return 42

# test_foo.py
import unittest

class TestFoo(unittest.TestCase):
    def test_foo_returns_42(self):
        expected = 42
        actual = foo()
        self.assertEqual(expected, actual)

# Below doesn't work in a jupyter notebook
# if __name__ == '__main__':
#     unittest.main()

if __name__ == '__main__':
    unittest.main(argv=['first-arg-is-ignored'], exit=False)
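As a side note on the TestFoo example above: the snippet below is a self-contained variant that loads and runs the test case programmatically, which is roughly what unittest.main does under the hood. It also uses assertEqual rather than assertTrue(a == b), so a failure reports both values:

```python
# Self-contained variant of the TestFoo example: build a suite from the
# TestCase and run it with a runner (roughly what unittest.main does).
import unittest

def foo():
    return 42

class TestFoo(unittest.TestCase):
    def test_foo_returns_42(self):
        # assertEqual reports both values when the test fails
        self.assertEqual(foo(), 42)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestFoo)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun, result.wasSuccessful())  # 1 True
```

Running tests through a loader and runner like this also works inside a notebook, since it never calls sys.exit the way a bare unittest.main would.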
jhamrick/original-nbgrader
examples/create_assignment/Assignment Template.ipynb
mit
def squares(n):
    """Compute the squares of numbers from 1 to n, such that the ith element of the
    returned list equals i^2.
    """
    {% if solution %}
    if n < 1:
        raise ValueError("n must be greater than or equal to 1")
    return [i ** 2 for i in range(1, n + 1)]
    {% else %}
    # YOUR CODE HERE
    raise NotImplementedError
    {% endif %}
""" Explanation: Problem 1
Write a function that returns a list of numbers, such that $x_i=i^2$, for $1\leq i \leq n$. Make sure it handles the case where $n<1$ by raising a ValueError.
End of explanation """

squares(10)

"""Check that squares returns the correct output for several inputs"""
from nbgrader.tests import assert_equal
assert_equal(squares(1), [1])
assert_equal(squares(2), [1, 4])
assert_equal(squares(10), [1, 4, 9, 16, 25, 36, 49, 64, 81, 100])
assert_equal(squares(11), [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121])

"""Check that squares raises an error for invalid inputs"""
from nbgrader.tests import assert_raises
assert_raises(ValueError, squares, 0)
assert_raises(ValueError, squares, -4)
""" Explanation: Your function should print [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] for $n=10$. Check that it does:
End of explanation """

def sum_of_squares(n):
    """Compute the sum of the squares of numbers from 1 to n."""
    {% if solution %}
    return sum(squares(n))
    {% else %}
    # YOUR CODE HERE
    raise NotImplementedError
    {% endif %}
""" Explanation: Problem 2
Part A
Using your squares function, write a function that computes the sum of the squares of the numbers from 1 to $n$. Your function should call the squares function -- it should NOT reimplement its functionality.
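For instance, the relationship Part A describes can be sketched like this (a hypothetical standalone check, with squares written out in plain Python 3 here rather than taken from the template cell):

```python
# sum_of_squares should simply combine the pieces:
# squares(3) -> [1, 4, 9], and their sum is 14.
def squares(n):
    if n < 1:
        raise ValueError("n must be greater than or equal to 1")
    return [i ** 2 for i in range(1, n + 1)]

def sum_of_squares(n):
    # delegate to squares instead of re-deriving the squares
    return sum(squares(n))

print(squares(3), sum_of_squares(3))  # [1, 4, 9] 14
```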
End of explanation """

sum_of_squares(10)

"""Check that sum_of_squares returns the correct answer for various inputs."""
assert_equal(sum_of_squares(1), 1)
assert_equal(sum_of_squares(2), 5)
assert_equal(sum_of_squares(10), 385)
assert_equal(sum_of_squares(11), 506)

"""Check that sum_of_squares relies on squares."""
orig_squares = squares
del squares
try:
    assert_raises(NameError, sum_of_squares, 1)
except AssertionError:
    raise AssertionError("sum_of_squares does not use squares")
finally:
    squares = orig_squares
""" Explanation: The sum of squares from 1 to 10 should be 385. Verify that this is the answer you get:
End of explanation """

fig, ax = plt.subplots()  # do not delete this line!
{% if solution %}
x = range(1, 16)
y = [sum_of_squares(i) for i in x]
ax.plot(x, y)
ax.set_title("Sum of squares from 1 to $n$")
ax.set_xlabel("$n$")
ax.set_ylabel("sum")
ax.set_xlim([1, 15])
{% else %}
# YOUR CODE HERE
raise NotImplementedError
{% endif %}

"""Check that the axis limits are correct."""
assert_equal(ax.get_xlim(), (1.0, 15.0))

"""Check that the xlabel is set."""
from nbgrader.tests import assert_unequal
assert_unequal(ax.get_xlabel(), "", "xlabel not set")

"""Check that the ylabel is set."""
assert_unequal(ax.get_ylabel(), "", "ylabel not set")

"""Check that the title is set."""
assert_unequal(ax.get_title(), "", "title not set")

"""Check that the correct xdata was used."""
from nbgrader.tests import assert_allclose, assert_same_shape
lines = ax.get_lines()
assert_equal(len(lines), 1)
xdata = lines[0].get_xdata()
xdata_correct = np.arange(1, 16)
assert_same_shape(xdata, xdata_correct)
assert_allclose(xdata, xdata_correct)

"""Check that the correct ydata was used."""
lines = ax.get_lines()
assert_equal(len(lines), 1)
xdata = lines[0].get_xdata()
ydata = lines[0].get_ydata()
ydata_correct = [sum_of_squares(x) for x in xdata]
assert_same_shape(ydata, ydata_correct)
assert_allclose(ydata, ydata_correct)
""" Explanation: Part B
Using LaTeX math notation,
write out the equation that is implemented by your sum_of_squares function. {% if solution %} $\sum_{i=1}^n i^2$ {% else %} YOUR ANSWER HERE {% endif %} Part C Create a plot of the sum of squares for $n=1$ to $n=15$. Make sure to appropriately label the $x$-axis and $y$-axis, and to give the plot a title. Set the $x$-axis limits to be 1 (minimum) and 15 (maximum). End of explanation """
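As an aside beyond the assignment itself, the Part B sum has the closed form $\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}$, which gives a quick way to cross-check any implementation. A minimal sketch, independent of the template cells above:

```python
# Cross-check: computing the Part B sum directly and via the closed
# form n(n+1)(2n+1)/6 gives the same values across the Part C range.
for n in range(1, 16):
    direct = sum(i ** 2 for i in range(1, n + 1))
    closed_form = n * (n + 1) * (2 * n + 1) // 6
    assert direct == closed_form

print(sum(i ** 2 for i in range(1, 11)))  # 385
```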