beangoben/HistoriaDatos_Higgs
Dia1/.ipynb_checkpoints/Intro a Matplotlib-checkpoint.ipynb
gpl-2.0
import numpy as np # numerical computing module
import matplotlib.pyplot as plt # plotting module
import pandas as pd # data module
# this line makes plots appear inside the notebook
%matplotlib inline
""" Explanation: Intro to Matplotlib Matplotlib = a library for plotting mathematical things What is Matplotlib? Matplotlib is a library for easily creating 2D images. Check out more at: Official page: http://matplotlib.org/ Example gallery: http://matplotlib.org/gallery.html A more advanced library built on matplotlib, Seaborn: http://stanford.edu/~mwaskom/software/seaborn/ Interactive visualization library: http://bokeh.pydata.org/ An excellent tutorial: http://www.labri.fr/perso/nrougier/teaching/matplotlib/ To use matplotlib, you only have to import the module... it is also convenient to import numpy, since it is very useful. End of explanation """
x = np.array([0,1,2,3,4])
y = x**2 # square x
plt.plot(x,y)
plt.title("Simple plot")
plt.show()
""" Explanation: Creating plots (plot) Creating plots is very easy in matplotlib; if you have a list of x values and another of y values, you only need to use: End of explanation """
x = np.linspace(0,10,100)
y = x**2 # square x
plt.plot(x,y)
plt.title("Simple plot")
plt.show()
""" Explanation: We can use the np.linspace function to create values over a range; for example, if we want 100 numbers between 0 and 10 we use: End of explanation """
x = np.linspace(0,10,100)
y1 = x # a line
y2 = x**2 # square x
plt.plot(x,y1)
plt.plot(x,y2)
plt.title("Two simple plots")
plt.show()
""" Explanation: And we can plot two things at the same time: End of explanation """
x = np.linspace(0,10,100)
y1 = x # a line
y2 = x**2 # square x
plt.plot(x,y1,label="Line")
plt.plot(x,y2,label="Square")
plt.legend()
plt.title("Two simple plots")
plt.show()
""" Explanation: What if we want to tell each line apart?
Well, we use legend(), as in the legend of a plot... we also have to give each plot a name. End of explanation """
x = np.linspace(0,10,100)
y1 = x # a line
y2 = x**2 # square x
y3 = np.sqrt(x) # take the square root of x
y4 = np.power(x,1.5) # raise x to the power 1.5
plt.plot(x,y1,label="Line",linestyle='-') # solid line
plt.plot(x,y2,label="Square",linestyle=':') # dotted
plt.plot(x,y3,label="Square root",linestyle='-.') # dash-dot
plt.plot(x,y4,label="power 1.5",linestyle='--') # dashed
plt.legend()
plt.title("Four simple plots")
plt.show()
""" Explanation: We can also do more things, such as drawing only the points, or the lines together with the points, using linestyle: End of explanation """
N = 50 # number of points
x = np.random.rand(N) # random numbers between 0 and 1
y = np.random.rand(N)
plt.scatter(x, y)
plt.title("Scatter of random points")
plt.show()
""" Explanation: Drawing points (scatter) Sometimes we do not want to draw lines but points; this gives us information about where the data sit spatially.
For this we can use it as follows: End of explanation """
N = 50 # number of points
x = np.random.rand(N) # random numbers between 0 and 1
y = np.random.rand(N)
colores = np.random.rand(N) # random colors
radios = 15 * np.random.rand(N) # random numbers between 0 and 15
areas = np.pi * radios**2 # the formula for the area of a circle
plt.scatter(x, y, s=areas, c=colores, alpha=0.5)
plt.title("Scatter plot of random points")
plt.show()
""" Explanation: But we can also pack in more information, for example giving each point a color, or giving the points different sizes: End of explanation """
N=500
x = np.random.rand(N) # random numbers between 0 and 1
plt.hist(x)
plt.title("Random histogram")
plt.show()
""" Explanation: Histograms (hist) Histograms show us distributions of data, the shape of the data; they show us the number of data points of different kinds: End of explanation """
N=500
x = np.random.randn(N)
plt.hist(x)
plt.title("Random normal histogram")
plt.show()

N=1000
x1 = np.random.randn(N)
x2 = 2+2*np.random.randn(N)
plt.hist(x1,20,alpha=0.3)
plt.hist(x2,20,alpha=0.3)
plt.title("Histogram of two distributions")
plt.show()
""" Explanation: Another kind of data, drawn from a Gaussian bell curve, that is, a normal distribution: End of explanation """
xurl="http://spreadsheets.google.com/pub?key=phAwcNAVuyj2tPLxKvvnNPA&output=xls"
df=pd.read_excel(xurl)
print("Full size is %s"%str(df.shape))
df.head()
""" Explanation: Databases on the internet Sometimes the data we want live on the internet. Assuming they are organized and in a friendly format, we can always download them and store them as a DataFrame. For example: Gapminder is a page with more than 500 data sets on global indicators such as income, gross domestic product (GDP), and life expectancy. Here we download the life-expectancy database, keep it in memory, and load it as an Excel file. Watch out!
Here we use .head() to print the first 5 rows of the dataframe, since the data are gigantic. End of explanation """
df = df.rename(columns={'Life expectancy with projections. Yellow is IHME': 'Life expectancy'})
df.index=df['Life expectancy']
df=df.drop('Life expectancy',axis=1)
df=df.transpose()
df.head()
""" Explanation: Fixing Up the Data Head lets us take a quick look at the data... just by eye we see that the columns are years and the rows are countries... we can reverse this with transpose, but we also see that the index is just row numbers; we would prefer the countries to be the index, so we change it and drop the column that is no longer needed... at the end, a head to check that everything is fine... this game of cleaning and fixing data is called "Data Wrangling". End of explanation """
df['Mexico'].plot()
print("== Life Expectancy in Mexico ==")
""" Explanation: So now we can look at life expectancy in Mexico over time: End of explanation """
subdf=df[ df.index >= 1890 ]
subdf=subdf[ subdf.index <= 1955 ]
subdf['Mexico'].plot()
plt.title("Life Expectancy in Mexico between 1890 and 1955")
plt.show()
""" Explanation: From this visualization we see that life expectancy has been rising since 1900; we also see a lot of movement between 1890 and 1950, exactly when there were many wars in Mexico. We can also select a specific range of years; this range looks interesting, so: End of explanation """
df['Mexico'].plot()
plt.xlim(1890,1955)
plt.title("Life Expectancy in Mexico between 1890 and 1955")
plt.show()
""" Explanation: Or, with less fuss, we can restrict the range of our plot with xlim (the limits of the x axis). End of explanation """
df[['Mexico','United States','Canada']].plot()
plt.title("Life Expectancy in North America")
plt.show()
""" Explanation: It is also important to see how this compares with other countries; we can compare with all of North America: End of explanation """
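The year-range selection in the cells above can also be written as a single label-based slice with .loc. Here is a minimal sketch using a small synthetic table in place of the Gapminder download (the index stands in for the years; the values are made up for illustration):

```python
import pandas as pd

# Synthetic stand-in for the Gapminder table: index = years, columns = countries
df = pd.DataFrame(
    {'Mexico': [33.0, 35.0, 48.0, 57.0, 61.0],
     'Canada': [48.0, 52.0, 60.0, 66.0, 68.0]},
    index=[1890, 1900, 1920, 1940, 1955])

# Two boolean masks, as in the notebook above
subdf = df[df.index >= 1890]
subdf = subdf[subdf.index <= 1955]

# A single label-based slice; .loc is inclusive on both endpoints
subdf2 = df.loc[1890:1955]

print(subdf.equals(subdf2))  # → True
```

Both selections keep every row whose index lies in [1890, 1955]; .loc just says it in one step.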
google/starthinker
colabs/smartsheet_to_bigquery.ipynb
apache-2.0
!pip install git+https://github.com/google/starthinker """ Explanation: SmartSheet Sheet To BigQuery Move sheet data into a BigQuery table. License Copyright 2020 Google LLC, Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Disclaimer This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team. This code generated (see starthinker/scripts for possible source): - Command: "python starthinker_ui/manage.py colab" - Command: "python starthinker/tools/colab.py [JSON RECIPE]" 1. Install Dependencies First install the libraries needed to execute recipes, this only needs to be done once, then click play. End of explanation """ from starthinker.util.configuration import Configuration CONFIG = Configuration( project="", client={}, service={}, user="/content/user.json", verbose=True ) """ Explanation: 2. Set Configuration This code is required to initialize the project. Fill in required fields and press play. If the recipe uses a Google Cloud Project: Set the configuration project value to the project identifier from these instructions. If the recipe has auth set to user: If you have user credentials: Set the configuration user value to your user credentials JSON. If you DO NOT have user credentials: Set the configuration client value to downloaded client credentials. 
If the recipe has auth set to service: Set the configuration service value to downloaded service credentials. End of explanation """
FIELDS = {
  'auth_read':'user',  # Credentials used for reading data.
  'auth_write':'service',  # Credentials used for writing data.
  'token':'',  # Retrieve from SmartSheet account settings.
  'sheet':'',  # Retrieve from sheet properties.
  'dataset':'',  # Existing BigQuery dataset.
  'table':'',  # Table to create from this report.
  'schema':'',  # Schema provided in JSON list format or leave empty to auto detect.
  'link':True,  # Add a link to each row as the first column.
}
print("Parameters Set To: %s" % FIELDS)
""" Explanation: 3. Enter SmartSheet Sheet To BigQuery Recipe Parameters Specify SmartSheet token. Locate the ID of a sheet by viewing its properties. Provide a BigQuery dataset (must exist) and table to write the data into. StarThinker will automatically map the correct schema. Modify the values below for your use case; this can be done multiple times. Then click play.
End of explanation """ from starthinker.util.configuration import execute from starthinker.util.recipe import json_set_fields TASKS = [ { 'smartsheet':{ 'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'user','description':'Credentials used for reading data.'}}, 'token':{'field':{'name':'token','kind':'string','order':2,'default':'','description':'Retrieve from SmartSheet account settings.'}}, 'sheet':{'field':{'name':'sheet','kind':'string','order':3,'description':'Retrieve from sheet properties.'}}, 'link':{'field':{'name':'link','kind':'boolean','order':7,'default':True,'description':'Add a link to each row as the first column.'}}, 'out':{ 'bigquery':{ 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}}, 'dataset':{'field':{'name':'dataset','kind':'string','order':4,'default':'','description':'Existing BigQuery dataset.'}}, 'table':{'field':{'name':'table','kind':'string','order':5,'default':'','description':'Table to create from this report.'}}, 'schema':{'field':{'name':'schema','kind':'json','order':6,'description':'Schema provided in JSON list format or leave empty to auto detect.'}} } } } } ] json_set_fields(TASKS, FIELDS) execute(CONFIG, TASKS, force=True) """ Explanation: 4. Execute SmartSheet Sheet To BigQuery This does NOT need to be modified unless you are changing the recipe, click play. End of explanation """
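To see roughly what the json_set_fields call is doing, here is a simplified, hypothetical sketch of a field-substitution pass — not the actual StarThinker implementation — that walks the nested recipe and swaps each {'field': {...}} placeholder for the matching FIELDS value, falling back to the placeholder's default:

```python
def set_fields(node, fields):
    # Simplified illustration only; the real helper is
    # starthinker.util.recipe.json_set_fields.
    if isinstance(node, dict):
        for key, value in node.items():
            if isinstance(value, dict) and set(value) == {'field'}:
                spec = value['field']
                node[key] = fields.get(spec['name'], spec.get('default'))
            else:
                set_fields(value, fields)
    elif isinstance(node, list):
        for item in node:
            set_fields(item, fields)

tasks = [{'smartsheet': {
    'token': {'field': {'name': 'token', 'kind': 'string', 'default': ''}},
    'link': {'field': {'name': 'link', 'kind': 'boolean', 'default': True}},
}}]
set_fields(tasks, {'token': 'abc123'})
print(tasks[0]['smartsheet'])  # → {'token': 'abc123', 'link': True}
```

The token placeholder is replaced by the user-supplied value, while the untouched link placeholder falls back to its declared default.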
andrewzwicky/puzzles
FiveThirtyEightRiddler/2018-04-06/vandal_dates.ipynb
mit
import datetime
from collections import Counter

start = datetime.date(2001, 1, 1)
end = datetime.date(2100, 1, 1) - datetime.timedelta(days=1)

d = start
anarchy_dates = []
delta = datetime.timedelta(days=1)

while d <= end:
    if d.day * d.month == d.year % 100:
        anarchy_dates.append(d)
    d += delta

anarchy_dates.sort()
""" Explanation: From Eric Veneto, mathematical madmen are on the loose: The year is 2000, and an arithmetical anarchist group has an idea. For the next 100 years, it will vandalize a famous landmark whenever the year (in two-digit form, for example this year is “18”) is the product of the month and date (i.e. month × date = year, in the MM/DD/YY format). https://fivethirtyeight.com/features/when-will-the-arithmetic-anarchists-attack/ End of explanation """
len(anarchy_dates)
""" Explanation: How many attacks will happen between the beginning of 2001 and the end of 2099? End of explanation """
date_counts = Counter(d.year for d in anarchy_dates)
_, max_attacks = date_counts.most_common()[0]
_, min_attacks = date_counts.most_common()[-1]

for year, attacks in date_counts.items():
    if attacks == max_attacks:
        print(f'{attacks} attacks in year {year}')
""" Explanation: What year will see the most vandalism? End of explanation """
for year, attacks in date_counts.items():
    if attacks == min_attacks:
        print(f'{attacks} attacks in year {year}')
""" Explanation: The least? End of explanation """
gaps = [this-previous for this,previous in zip(anarchy_dates[1:], anarchy_dates)]
max_gap = max(gaps)

# gaps[i] is the gap between anarchy_dates[i] and anarchy_dates[i+1]
for i, gap in enumerate(gaps):
    if gap == max_gap:
        print(f'{gap.days} days between {anarchy_dates[i]} and {anarchy_dates[i+1]}')
""" Explanation: What will be the longest gap between attacks? End of explanation """
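The day-by-day while loop above can be cross-checked with a direct comprehension over the same span; the two constructions should produce the identical list of dates:

```python
import datetime

start = datetime.date(2001, 1, 1)
end = datetime.date(2100, 1, 1) - datetime.timedelta(days=1)

# Loop construction, as in the notebook above
d, loop_dates = start, []
while d <= end:
    if d.day * d.month == d.year % 100:
        loop_dates.append(d)
    d += datetime.timedelta(days=1)

# Direct comprehension over the same days
all_days = (start + datetime.timedelta(days=i)
            for i in range((end - start).days + 1))
comp_dates = [day for day in all_days
              if day.month * day.day == day.year % 100]

print(loop_dates == comp_dates)  # → True
```

Since both walk the same calendar in order, no separate sort is needed for the comparison.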
zambzamb/zpic
python/R-L Waves.ipynb
agpl-3.0
import em1ds as zpic electrons = zpic.Species( "electrons", -1.0, ppc = 64, uth=[0.005,0.005,0.005]) sim = zpic.Simulation( nx = 1000, box = 100.0, dt = 0.0999, species = electrons ) sim.emf.solver_type = 'PSATD' #Bx0 = 0.5 #Bx0 = 1.0 Bx0 = 2.0 sim.emf.set_ext_fld('uniform', B0= [Bx0, 0.0, 0.0]) """ Explanation: Waves in magnetized Plasmas: R-waves and L-Waves To study electromagnetic waves in a magnetized plasma, in particular waves propagating along the applied magnetic field, we initialize the simulation with a uniform thermal plasma, effectively injecting waves of all possible wavelengths into the simulation. The external magnetic field is applied along the x direction, and can be controlled through the Bx0 variable: End of explanation """ import numpy as np niter = 1000 Ez_t = np.zeros((niter,sim.nx)) tmax = niter * sim.dt print("\nRunning simulation up to t = {:g} ...".format(tmax)) while sim.t <= tmax: print('n = {:d}, t = {:g}'.format(sim.n,sim.t), end = '\r') Ez_t[sim.n,:] = sim.emf.Ez sim.iter() print("\nDone.") """ Explanation: We run the simulation up to a fixed number of iterations, controlled by the variable niter, storing the value of the EM fields $E_y$ and $E_z$ at every timestep so we can analyze them later: End of explanation """ import matplotlib.pyplot as plt iter = sim.n//2 plt.plot(np.linspace(0, sim.box, num = sim.nx),Ez_t[iter,:], label = "$E_z$") plt.grid(True) plt.xlabel("$x_1$ [$c/\omega_n$]") plt.ylabel("$E$ field []") plt.title("$E_z$, t = {:g}".format( iter * sim.dt)) plt.legend() plt.show() """ Explanation: EM Waves As discussed above, the simulation was initialized with a broad spectrum of waves through the thermal noise of the plasma. 
We can see the noisy fields in the plot below: End of explanation """ import matplotlib.pyplot as plt import matplotlib.colors as colors # (omega,k) power spectrum win = np.hanning(niter) for i in range(sim.nx): Ez_t[:,i] *= win sp = np.abs(np.fft.fft2(Ez_t))**2 sp = np.fft.fftshift( sp ) k_max = np.pi / sim.dx omega_max = np.pi / sim.dt plt.imshow( sp, origin = 'lower', norm=colors.LogNorm(vmin = 1e-5), extent = ( -k_max, k_max, -omega_max, omega_max ), aspect = 'auto', cmap = 'gray') plt.colorbar().set_label('$|FFT(E_z)|^2$') # Theoretical curves wC = Bx0 wR = 0.5*(np.sqrt( wC**2 + 4) + wC) wL = 0.5*(np.sqrt( wC**2 + 4) - wC) w = np.linspace(wL, omega_max, num = 512) k = w * np.sqrt( 1.0 - 1.0/(w**2 * (1+wC/w) ) ) plt.plot( k, w, label = "L-wave", color = 'b' ) w = np.linspace(wR + 1e-6, omega_max, num = 512) k = w * np.sqrt( 1.0 - 1.0/(w**2 * (1-wC/w) ) ) plt.plot( k, w, label = "R-wave", color = 'r') w = np.linspace(1e-6, wC - 1e-6, num = 512) k = w * np.sqrt( 1.0 - 1.0/(w**2 * (1-Bx0/w) ) ) plt.plot( k, w, label = "R-wave", color = 'r' ) plt.ylim(0,12) plt.xlim(0,12) plt.xlabel("$k$ [$\omega_n/c$]") plt.ylabel("$\omega$ [$\omega_n$]") plt.title("R/L-waves dispersion relation") plt.legend() plt.show() """ Explanation: R/L-Waves To analyze the dispersion relation of the R/L-waves we use a 2D (Fast) Fourier transform of $E_z(x,t)$ field values that we stored during the simulation. The plot below shows the obtained power spectrum alongside the theoretical prediction for the L-wave and the two solutions for the R-wave. Since the dataset is not periodic along $t$ we apply a windowing technique (Hanning) to the dataset to lower the background spectrum, and make the dispersion relation more visible. End of explanation """
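The theoretical curves overlaid on the power spectrum in the cell above follow the standard cold-plasma dispersion relation for waves propagating along $\mathbf{B}_0$. In the normalized units used here ($\omega_p = 1$, $\omega_c = B_{x0}$), the curves implement $$ \frac{c^2 k^2}{\omega^2} = 1 - \frac{\omega_p^2/\omega^2}{1 \mp \omega_c/\omega}, $$ where the minus sign gives the R-wave and the plus sign the L-wave, with cutoff frequencies $$ \omega_{R,L} = \frac{\sqrt{\omega_c^2 + 4\,\omega_p^2} \pm \omega_c}{2}, $$ matching the wR and wL variables defined in the plotting cell.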
tyler-abbot/PyShop
session2/PyShop_session2_notes.ipynb
agpl-3.0
import numpy as np #This is the traditional way to import NumPy
a = np.array([[1.,0],[0,1]]) #Create an identity matrix by hand NOTE: What are the data types of entries?
b = np.eye(2) # Create an identity matrix using a built-in function
print(a)
print(b)
""" Explanation: PyShop Session 2 This session focuses on some of the most useful modules, including NumPy, SciPy, Matplotlib, and Pandas. All of the features discussed will be introduced using basic examples. The main topics covered will be the following: NumPy arrays and their syntax. Integration. Root finding. DataFrames. Simple plots. Other topics to be added later. This session is simply meant to introduce the main features of the big four modules and give students time to become more familiar with simple Python programs. By the end of this session you should be comfortable enough with the basic Python interface to know what modules you need to import to do basic calculation, data entry, and data visualization. Why Packages? Python is built to be a modular programming environment. When you load the Python interpreter, you load the basic functionality to run programs and work with the native data types. You can easily import more functionality from modules and packages. This makes running Python light compared to other mathematical software, such as MatLab. Additionally, anyone can develop and publish packages through the Python Package Index (PyPI). Although there are thousands of packages listed, you will find yourself returning to a reliable set for your own work. Of these faithful companions, everyone will at some point need to work with NumPy, SciPy, MatPlotLib, and Pandas. It is because of this that we'll spend a session working on just these packages and introducing their basic functionality. NumPy NumPy is, above all, a wrapper for the ndarray object. This type of object has several nice features. First, because all of the elements of an array must be of the same type, the size of the array is predictable.
Second, NumPy comes with pre-compiled C code to run vectorized functions that speed up your code. Third, the indexing syntax for an array is concise and clear. Similarly, NumPy can handle arrays of different dimensions through its native broadcasting, making code easier to read and write. Let's look at each of these points individually through some examples. Creating Arrays Creating arrays in NumPy is easy. You can create an array from other data types, or through built in functions: End of explanation """ c = np.array([[1.,0],[0,1]]).astype(object) c[0, 0] = int(1) print(c) """ Explanation: Notice in the above code that a was given both float and integer information, but the resulting array was filled with only floats. This can be overridden by specifying the data type as object, but is not recommended since NumPy plays best with homogeneous arrays: End of explanation """ a = np.random.rand(50, 50) #Generate a 50x50 array of random numbers from uni[0,1] print(a.shape) #You can reference single entries in an array print(a[0, 0]) #It is also possible to slice arrays as with lists print(a[:, 0]) #Assignment to an array entry is the same a[0, 0] = 0.0 print(a[0, 0]) """ Explanation: It is possible to create arrays of very high dimension. However, you should be careful of memory usage. An array is stored in memory as a continuous block and the elements take up as much memory as they would have alone. So, for example, a float element is around 8 bytes. For an array of 3 dimensions with 1000 points in each direction, that is 1000^3*8 / 10^9 = 8 gigabytes of ram just to store the thing! Array Indexing Array indexing is done similarly to lists. Indices begin at zero. 
However, the syntax is slightly different and a little less cumbersome when using arrays: End of explanation """
a = np.random.rand(50, 50) #Generate a 50x50 array of random numbers from uni[0,1]
print(a.shape)

#You can reference single entries in an array
print(a[0, 0])

#It is also possible to slice arrays as with lists
print(a[:, 0])

#Assignment to an array entry is the same
a[0, 0] = 0.0
print(a[0, 0])
""" Explanation: It is important to be careful when trying to create a copy of an array. When you simply assign to an array using =, you create what is called a "view" of the array. A view is simply a pointer to the same block of memory. This reduces the amount of memory you use, but the two names now point to the same object in memory, so changing one changes the other. If you really need a copy, you should explicitly create one by either slicing or using the .copy() method: End of explanation """
a = np.eye(2)
b = a
print(a)
print(b)
print("\n")

a[0, 0] = 0
print(a)
print(b)
print("\n")

a = np.eye(2)
b[:] = a
c = a.copy()
print(a)
print(b)
print(c)
print("\n")

a[0, 0] = 0
print(a)
print(b)
print(c)
print("\n")
""" Explanation: Broadcasting When you give NumPy an operation to do over two arrays that do not have matching dimensions, NumPy automatically "broadcasts" the operation.
This action allows you to carry out scalar multiplication, Kronecker products, and more complex matrix and array arithmetic quickly and with simple syntax. We'll cover this in more detail next week, but for now we can look at some examples: End of explanation """
a = np.eye(2)
b = np.eye(2)
c = 2

#Multiplication is done elementwise
print(a * b)

#Scalar multiplication is broadcast
print(a * c)

#So is addition, both vector and scalar
d = np.array([1., 1.])
print(a + d)
print(a + c)

# Some arrays to broadcast
d = np.array([1, 2])
e = np.vstack((a, np.zeros(2)))
f = np.array([[1, 2], [3, 4]])

# Required that all of the dimensions either match or equal one
print(e.shape)
print(e)
print(d.shape)
print(d)
print(d + e)
print(e + a)

# A one-d array is neither column nor row
print(d.shape)
print(d == d.T)

# This means the broadcasting rule seems ambiguous
print(a + d)

# Broadcasting rules move along the first matching axis FROM THE RIGHT
print(f.shape)
print(f)
print(d.shape)
print(d)
print(f + d)
print(f + d.T)

# You can change this by adding a new axis
# This helps to be specific about shape
print(d[:, np.newaxis].shape)
print(f + d[:, np.newaxis])
""" Explanation: These ideas apply in higher dimension and with all types of arithmetic operations, as well as some built in functions. Why do I care? NumPy is much faster than the native Python data types. Because Python is an interpreted language, the programs must be parsed every time they are run. Because of this, it does not benefit from the speed of being a compiled language. NumPy gets around this by having pre-compiled functionality. Let's see how much faster it is with an example: End of explanation """
import time

length = 10
a = [i for i in range(0, length)]
b = [i for i in range(0, length)]
c = []

t0 = time.time()
for i in range(len(a)):
    c.append(a[i]*b[i])
t1 = time.time()
print("Process executed in : %s : seconds." %(t1 - t0))

a = np.arange(0, length)
b = np.arange(0, length)

t0 = time.time()
C = a * b
t1 = time.time()
print("Process executed in : %s : seconds." %(t1 - t0))
""" Explanation: I encourage you to change the number of elements in the two calculations to see the difference! Is that it?! No. Calm down. We've got a long ways to go and next week we'll see more of NumPy. If you can't wait, though, check out the <a href="http://docs.scipy.org/doc/numpy/reference/index.html" target="_blank">documentation</a>. In particular you might find interesting the linear algebra, matlib, random numbers, sorting, or Fourier transform libraries! Ok, maybe not the last one, but we can always hope... SciPy SciPy is a bundle of open source software for Python, while the SciPy library is a fundamental set of numerical algorithms. It's only thanks to this library that Python is able to compete against MatLab or R.
However, most of its functionality will go completely unnoticed to you. When you use statistical packages or plotting or many other packages, they will call SciPy to do the heavy lifting. For economists (as best as I can tell) the most used subpackages will be integrate, interpolate, optimize, sparse, and stats. These packages cover a massive amount of very complex algorithms, so we'll only cover a tiny fraction of it today and a little more next week. Integration A typical problem in continuous time economics and, more generally, in econometrics is to solve an integral. SciPy provides a package called scipy.integrate. You should check out the <a href="http://docs.scipy.org/doc/scipy/reference/integrate.html#module-scipy.integrate" target="_blank">documentation</a> for more details. For general integration, use the quad function. It's quite opaque what quadrature this method is using as its standard, but the precision is good. The function is slow, but precise. Here's an example: End of explanation """
import scipy.integrate
scipy.integrate.quad(lambda x: x**2, 0.0, 10.0)
""" Explanation: The output of quad is a tuple containing the numerical solution and the upper bound on the error.
We know the exact solution to this problem, so we can compare the two: $$\int_0^{10} x^2 dx = \left[ \frac{1}{3} x^3 \right]_0^{10} = \frac{1000}{3}$$ Indeed, given the smoothness of the problem the error estimate is quite close to the true error. If you'd like to do higher order integration, SciPy offers dblquad, tplquad, or nquad. Using these functions you can integrate up to an arbitrary number of dimensions, but you'll have to dig into that on your own if you need these functions. Alongside these quadrature rules, there are also numpy.trapz and scipy.integrate.simps for trapezoid rule and simpson's rule integration. Finally, SciPy has an ODE solver called odeint. This can integrate first order vector differential equations. Since differential equation problems are quite general and involved, it suffices to say that since this is a vector solver, it is possible to solve higher order ODE's using this function. If you are interested, check out <a href="http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html#ordinary-differential-equations-odeint" target="_blank">the docs</a>. Optimization (Note: this is just a reproduction of the lecture) SciPy offers several algorithms for optimization. Since this is not a course on numerical methods, I'll just list a few here and we can look at the soluitons they give. Methods: "Downhill simplex method". Generates a simplex of dimension n+1 and then uses a simple algorithm (similar to a bisection algorithm) to find local optima. "Broyden-Fletcher-Goldfarb-Shanno Algorithm". Considered a "quasi-newton" method. A newton step would calculate the hessian directly, where quasi-newton methods approximate it in some way. "Powell's Conjugate Direction Method". A sort of combination of steps in the taxi-cab method. Instead of searching only along a single vecor, take a linear combination of the gradients. "Conjugate Gradient Method". Most useful for sparse, linear systems. You'll notice here it is unsuccessful. 
We'll study the Rosenbrock banana function, a typical problem used for timing/testing a numerical method. It is very flat in part of the domain and very steep in others. Additionally its minimum is known. $$ f(x, y) = (a - x)^2 + b(y - x^2)^2 $$ whose minimum is achieved at $(x, y) = (a, a^2)$. End of explanation """
import scipy.optimize

def rosenbrock(x, a, b):
    return (a - x[0])**2 + b*(x[1] - x[0]**2)**2

a = 1.
b = 100.
x0 = np.array([2., 3.])

t0 = time.time()
res = scipy.optimize.minimize(rosenbrock, x0, args=(a, b), method='Nelder-Mead')
t1 = time.time()
print("\nProcess executed in : %s : seconds.\n" %(t1 - t0))
print(res)

t0 = time.time()
res = scipy.optimize.minimize(rosenbrock, x0, args=(a, b), method='BFGS')
t1 = time.time()
print("\nProcess executed in : %s : seconds.\n" %(t1 - t0))
print(res)

t0 = time.time()
res = scipy.optimize.minimize(rosenbrock, x0, args=(a, b), method='Powell')
t1 = time.time()
print("\nProcess executed in : %s : seconds.\n" %(t1 - t0))
print(res)

t0 = time.time()
res = scipy.optimize.minimize(rosenbrock, x0, args=(a, b), method='CG')
t1 = time.time()
print("\nProcess executed in : %s : seconds.\n" %(t1 - t0))
print(res)
""" Explanation: These different methods give the same result, except the conjugate gradient method which fails given the flat region of the banana function. Root Finding (Note: this is simply a reproduction from the course) Similar to minimization, SciPy offers several methods for root finding. Again, I simply list several here: Methods: "Hybrid". From MINPACK, essentially a modified Powell method. "Broyden's Method". A quasi-newton method for multidimensional root finding. Calculate the jacobian only once, then do an update each iteration. "Anderson Mixing". A quasi-newton method. Approximate the jacobian by the "best" solution in the space spanned by the last M vectors... whatever that means!
"Linear Mixing". Similar to Anderson method. "Krylov Methods". Approximate the jacobian by a spanning basis of the krylov space. Very neat. Each of these methods has its advantages. To look at multidimensional root finding, we'll treat the banana function's two terms as separate functions and see if we get the same solution. End of explanation """
def f(x, a, b):
    return np.array([a*(1 - x[0]), b*(x[1] - x[0]**2)**2])

a = 1.
b = 100.
x0 = np.array([10., 2.])

t0 = time.time()
sol = scipy.optimize.root(f, x0, args=(a, b), method='hybr')
t1 = time.time()
print("\nProcess executed in : %s : seconds. \n" %(t1 - t0))
print(sol)

t0 = time.time()
sol = scipy.optimize.root(f, x0, args=(a, b), method='broyden1')
t1 = time.time()
print("\nProcess executed in : %s : seconds. \n" %(t1 - t0))
print(sol)

t0 = time.time()
sol = scipy.optimize.root(f, x0, args=(a, b), method='anderson')
t1 = time.time()
print("\nProcess executed in : %s : seconds. \n" %(t1 - t0))
print(sol)

t0 = time.time()
sol = scipy.optimize.root(f, x0, args=(a, b), method='linearmixing')
t1 = time.time()
print("\nProcess executed in : %s : seconds. \n" %(t1 - t0))
print(sol)

t0 = time.time()
sol = scipy.optimize.root(f, x0, args=(a, b), method='krylov')
t1 = time.time()
print("\nProcess executed in : %s : seconds. \n" %(t1 - t0))
print(sol)
""" Explanation: Notice that only the hybrid method converged and indeed to the same solution. This is because of the flatness of the function and the fact that the rest of the methods are based strongly on the jacobian or its approximation. MatPlotLib Originally based on MATLAB, matplotlib is a 2D python plotting library. However, beyond its MATLAB origins, matplotlib is very object oriented and allows one to fully customize the plotting experience. Matplotlib is made up of three parts: the Pylab Interface, the matplotlib frontend, and the backend. The Pylab Interface is how the user inputs information into matplotlib and works much the same as MATLAB. The frontend does all of the heavy lifting, generating the plot data. Finally, the backend renders that data as a plot.
Basic 2D Plotting Given $(x, y)$ coordinates, you can create a plot in a snap: End of explanation """ x = np.arange(0, 1, 0.01) f = lambda x: x**2*(1 - x) y = f(x) plt.plot(x, y) #Add axis labels plt.xlabel('Candy (in kg)') plt.ylabel('Happiness') #Add title plt.title("Happiness of Children as a Function\n" + " of Halloween Candy Consumed") #Add emphasis to important points points = np.array([0.1, 0.5, 0.7]) plt.plot(points, f(points), 'ro') #Add a label and legend to the points plt.plot(points, f(points), 'o', label='Fights With Sister Observed') plt.legend() #But the legend is poorly placed, so move it to a better spot plt.legend(loc=0) plt.show() """ Explanation: Now that you have a sweet plot, how do you change its characteristics. Setting aside style for now, you can change labels, titles, etc. with some simple commands: End of explanation """ x = np.arange(0, 10, 0.1) f = lambda x: np.cos(x) g = lambda x: np.sin(x) #Create the figure and axes objects. sharex and sharey allow them to share axes fig, (ax1, ax2) = plt.subplots(2, sharex=True, sharey=True) #Plot on the first axes object. Notice, you can plot several times on the same object ax1.plot(x, f(x)) ax1.plot(x, g(x)) #Plot on the second axes object ax2.plot(x, f(x)*g(x)) plt.show() """ Explanation: The different line styles, marker styles, color pallettes, etc., are all highly customizable. In a couple of weeks we'll go over how you can set up matplotlib to load with the style you prefer, but for now we'll just stick with the standard. One of the great things about matplotlib is object oriented programming. By that I mean that you can create objects, like arrays or lists, that contain figures and plots. The way of doing this is to define a figure object and an axes object. In fact, pylot does this automatically so that you don't have to worry about it. In order to get used to this idea, I encourage you to avoid using pyplot, but instead to define your own objects using subplots. 
A figure object describes the plot window and all of its properties. An axes object is contained within the figure and describes a single plot. So, if you would like a figure with three subplots you would have one figure object and three axes objects. We'll see how this works in more detail later, but to get an idea, here's a subplot: End of explanation """ from mpl_toolkits.mplot3d import Axes3D def U(c1, c2, beta, gamma): return c1**(1 - gamma)/(1 - gamma) + beta*c2**(1 - gamma)/(1 - gamma) beta = 0.98 gamma = 2.0 fig = plt.figure() ax = fig.gca(projection="3d") low = 1.0 high = 10.0 c1 = np.arange(low, high, 0.1) c2 = np.arange(low, high, 0.1) C1, C2 = np.meshgrid(c1, c2) utils = U(C1, C2, beta, gamma) ax.plot_surface(C1, C2, utils, alpha=0.3) cset = ax.contour(C1, C2, utils, zdir='z', offset=-2.0) #cset = ax.contour(C1, C2, utils, zdir='y', offset=10.0) #cset = ax.contour(C1, C2, utils, zdir='x', offset=1.0) plt.show() """ Explanation: Finally, you can easily create 3D plots using an Axes3D object. Let's plot a CRRA utility function over some positive consumption bundles for periods 1 and 2: $$U(c_1, c_2) = \frac{c_1^{1-\gamma}}{1-\gamma} + \beta \frac{c_2^{1-\gamma}}{1-\gamma}$$ End of explanation """ # NOTE: This fails on my computer because of faulty build %matplotlib qt """ Explanation: NOTE: If you would like to interact with any of the graphs, you can run the following command to turn off inlline plotting. If you would like to turn inline back on, run the command at the top of these notes. End of explanation """ import pandas as pd DF = pd.read_csv('http://people.stern.nyu.edu/wgreene/Econometrics/grunfeld.csv') #Print the first few rows of the dataset DF.head() """ Explanation: There are other, newer Python plotting packages, like Plotly (https://plot.ly/python/) or Seaborn (http://stanford.edu/~mwaskom/software/seaborn/), both of which produce really slick graphics. 
However, if you are just looking for simple plots to visualize your graphics, I would start with matplotlib. Pandas The last of the big packages that we'll cover is the Pandas package, which is used for data analysis. The package contains data types for time series, panels, sets, but Pandas' most important feature is the DataFrame. DataFrames allow you to handle data in a matrix form, while allowing columns to be of heterogeneous types. Additionally, the DataFrame contains an index, which may or may not be hierarchical, allowing you to control your data over multiple indices. Let's load some data into a DataFrame from the internet: End of explanation """
import pandas as pd

DF = pd.read_csv('http://people.stern.nyu.edu/wgreene/Econometrics/grunfeld.csv')

#Print the first few rows of the dataset
DF.head()
"""
Explanation: There are many other data I/O commands (http://pandas.pydata.org/pandas-docs/stable/io.html) that you should check out. There are also packages to read old ASCII files floating about, but this doesn't seem to be high on anyone's priority list. Once you have your data frame, you can do some work to analyze its shape and calculate some descriptive statistics: End of explanation """
DF.shape

DF.describe()
"""
Explanation: Selecting a column in a DF is done by referencing the column 'key', similar to a dictionary. End of explanation """
#Personally, I dislike capital letters in the column names, but it's easy to fix!
DF.columns = DF.columns.str.lower()

#Fun fact: There happens to be a mistake in the column names in this file:
print(DF.columns)

#Notice the extra space after 'c'. We can remove this.
DF = DF.rename(columns = {DF.columns[-1]:DF.columns[-1][:-1]})
print(DF.columns)

DF['firm']
"""
Explanation: The column attribute stores a list of column names. It is possible to redefine one or all of the column names by referencing this list. However, it is an Index, so Pandas won't be happy if you try to rename it directly. To do this, use the .rename method. It's also very easy to plot a column.
End of explanation """
%matplotlib inline
DF['c'].plot()
"""
Explanation: However, this plot is for all of our data. Since this is a panel, we get all of the firms plotted on the same line. To get around this we need to specify the indices. To do that, we will use what's called a "hierarchical index". We will index our data first by year and then by firm. Let's see how that's done: End of explanation """
#Just as an example, let's create a data frame with a single index
DF_test = DF.set_index('year')
DF_test.head()
"""
Explanation: You'll notice that the year has been moved to the left side of the table. It now acts as the index for referencing data in the rows. End of explanation """
DF_test.loc[1935]

DF.c.plot()
"""
Explanation: But you'll also notice that there are 10 firms for every year. This is not data but another index, so we could use this as part of our hierarchical index. It's as easy as supplying a list instead of a single string to the .set_index method. End of explanation """
DF_heir = DF.set_index(['year', 'firm'])
DF_heir.head()
"""
Explanation: Now you can reference data by year and by firm. End of explanation """
DF_heir.loc[1935].loc[1]
"""
Explanation: A useful method with DataFrames is .groupby. This will allow you to group the DataFrame by any column or index you'd like. It generates a groupby object, over which you can iterate. It even makes plotting by group a breeze. Check it out! End of explanation """
fig, ax = plt.subplots(1, figsize=(10, 7))
years = list(DF_heir.index.levels[0])
DF_heir['c'].groupby(level=1).plot(ax=ax)
ax.set_xticklabels(years)
ax.set_title('Candy Purchases by Firm')
ax.set_xlabel('Year')
"""
Explanation: Easy peazy, right?! Just you wait, Henry Higgins. Lastly, Pandas comes with a pivot table function, which can automate most of this for you. Let's try and create the same DF_heir using a pivot table. End of explanation """
pivot = pd.pivot_table(DF, index=['year', 'firm'])
pivot.head()
"""
Explanation: 
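To see the same indexing pattern without downloading the Grunfeld data, here is a tiny self-contained sketch — the firm numbers and 'c' values below are made up purely for illustration:

```python
import pandas as pd

# A made-up miniature panel with the same shape as the Grunfeld data
df = pd.DataFrame({
    'year': [1935, 1935, 1936, 1936],
    'firm': [1, 2, 1, 2],
    'c':    [10.0, 20.0, 11.0, 21.0],
})

# Hierarchical index: first by year, then by firm
df_heir = df.set_index(['year', 'firm'])

# Reference a row by year, then firm, just like DF_heir.loc[1935].loc[1]
row = df_heir.loc[1935].loc[1]

# pivot_table builds the same hierarchical structure in one call
pivot = pd.pivot_table(df, index=['year', 'firm'])
```

Because each (year, firm) pair is unique here, pivot_table's default mean aggregation reproduces the original values, so the pivot and the set_index result carry the same numbers.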
Benedicto/ML-Learning
Clustering_5_lda_blank.ipynb
gpl-3.0
import graphlab as gl import numpy as np import matplotlib.pyplot as plt %matplotlib inline '''Check GraphLab Create version''' from distutils.version import StrictVersion assert (StrictVersion(gl.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.' # import wiki data wiki = gl.SFrame('people_wiki.gl/') wiki """ Explanation: Latent Dirichlet Allocation for Text Data In this assignment you will apply standard preprocessing techniques on Wikipedia text data use GraphLab Create to fit a Latent Dirichlet allocation (LDA) model explore and interpret the results, including topic keywords and topic assignments for documents Recall that a major feature distinguishing the LDA model from our previously explored methods is the notion of mixed membership. Throughout the course so far, our models have assumed that each data point belongs to a single cluster. k-means determines membership simply by shortest distance to the cluster center, and Gaussian mixture models suppose that each data point is drawn from one of their component mixture distributions. In many cases, though, it is more realistic to think of data as genuinely belonging to more than one cluster or category - for example, if we have a model for text data that includes both "Politics" and "World News" categories, then an article about a recent meeting of the United Nations should have membership in both categories rather than being forced into just one. With this in mind, we will use GraphLab Create tools to fit an LDA model to a corpus of Wikipedia articles and examine the results to analyze the impact of a mixed membership approach. In particular, we want to identify the topics discovered by the model in terms of their most important words, and we want to use the model to predict the topic membership distribution for a given document. Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook. 
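Before fitting anything, the mixed membership idea is easy to picture with a small numpy sketch (toy numbers, unrelated to the Wikipedia model): each document gets an entire probability vector over topics rather than a single cluster label.

```python
import numpy as np

rng = np.random.default_rng(0)
num_docs, num_topics = 5, 10

# Draw one topic-proportion vector per document from a Dirichlet prior
doc_topic = rng.dirichlet(alpha=np.ones(num_topics), size=num_docs)

# Each row is a full distribution over topics: nonnegative weights summing to 1,
# so a document can partially belong to several topics at once
row_sums = doc_topic.sum(axis=1)
```

Contrast this with k-means or a hard EM assignment, where each row would instead be a one-hot vector.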
Text Data Preprocessing We'll start by importing our familiar Wikipedia dataset. The following code block will check if you have the correct version of GraphLab Create. Any version later than 1.8.5 will do. To upgrade, read this page. End of explanation """ wiki_docs = gl.text_analytics.count_words(wiki['text']) wiki_docs = wiki_docs.dict_trim_by_keys(gl.text_analytics.stopwords(), exclude=True) wiki_docs.dict_trim_by_keys? """ Explanation: In the original data, each Wikipedia article is represented by a URI, a name, and a string containing the entire text of the article. Recall from the video lectures that LDA requires documents to be represented as a bag of words, which ignores word ordering in the document but retains information on how many times each word appears. As we have seen in our previous encounters with text data, words such as 'the', 'a', or 'and' are by far the most frequent, but they appear so commonly in the English language that they tell us almost nothing about how similar or dissimilar two documents might be. Therefore, before we train our LDA model, we will preprocess the Wikipedia data in two steps: first, we will create a bag of words representation for each article, and then we will remove the common words that don't help us to distinguish between documents. For both of these tasks we can use pre-implemented tools from GraphLab Create: End of explanation """ topic_model = gl.topic_model.create(wiki_docs, num_topics=10, num_iterations=200) """ Explanation: Model fitting and interpretation In the video lectures we saw that Gibbs sampling can be used to perform inference in the LDA model. In this assignment we will use a GraphLab Create method to learn the topic model for our Wikipedia data, and our main emphasis will be on interpreting the results. We'll begin by creating the topic model using create() from GraphLab Create's topic_model module. Note: This may take several minutes to run. 
End of explanation """ topic_model """ Explanation: GraphLab provides a useful summary of the model we have fitted, including the hyperparameter settings for alpha, gamma (note that GraphLab Create calls this parameter beta), and K (the number of topics); the structure of the output data; and some useful methods for understanding the results. End of explanation """ topic_model = gl.load_model('topic_models/lda_assignment_topic_model') """ Explanation: It is certainly useful to have pre-implemented methods available for LDA, but as with our previous methods for clustering and retrieval, implementing and fitting the model gets us only halfway towards our objective. We now need to analyze the fitted model to understand what it has done with our data and whether it will be useful as a document classification system. This can be a challenging task in itself, particularly when the model that we use is complex. We will begin by outlining a sequence of objectives that will help us understand our model in detail. In particular, we will get the top words in each topic and use these to identify topic themes predict topic distributions for some example documents compare the quality of LDA "nearest neighbors" to the NN output from the first assignment understand the role of model hyperparameters alpha and gamma Load a fitted topic model The method used to fit the LDA model is a randomized algorithm, which means that it involves steps that are random; in this case, the randomness comes from Gibbs sampling, as discussed in the LDA video lectures. Because of these random steps, the algorithm will be expected to yield slighty different output for different runs on the same data - note that this is different from previously seen algorithms such as k-means or EM, which will always produce the same results given the same input and initialization. It is important to understand that variation in the results is a fundamental feature of randomized methods. 
However, in the context of this assignment this variation makes it difficult to evaluate the correctness of your analysis, so we will load and analyze a pre-trained model. We recommend that you spend some time exploring your own fitted topic model and compare our analysis of the pre-trained model to the same analysis applied to the model you trained above. End of explanation """ topic_model.get_topics(topic_ids=[0], num_words=3) """ Explanation: Identifying topic themes by top words We'll start by trying to identify the topics learned by our model with some major themes. As a preliminary check on the results of applying this method, it is reasonable to hope that the model has been able to learn topics that correspond to recognizable categories. In order to do this, we must first recall what exactly a 'topic' is in the context of LDA. In the video lectures on LDA we learned that a topic is a probability distribution over words in the vocabulary; that is, each topic assigns a particular probability to every one of the unique words that appears in our data. Different topics will assign different probabilities to the same word: for instance, a topic that ends up describing science and technology articles might place more probability on the word 'university' than a topic that describes sports or politics. Looking at the highest probability words in each topic will thus give us a sense of its major themes. Ideally we would find that each topic is identifiable with some clear theme and that all the topics are relatively distinct. We can use the GraphLab Create function get_topics() to view the top words (along with their associated probabilities) from each topic. Quiz Question: Identify the top 3 most probable words for the first topic. End of explanation """ topic_model.get_topics(topic_ids=[2], num_words=50)['score'].sum() """ Explanation: Quiz Question: What is the sum of the probabilities assigned to the top 50 words in the 3rd topic? 
End of explanation """
topic_model.get_topics(topic_ids=[2], num_words=50)['score'].sum()
"""
Explanation: Let's look at the top 10 words for each topic to see if we can identify any themes: End of explanation """
[x['words'] for x in topic_model.get_topics(output_type='topic_words', num_words=10)]
"""
Explanation: We propose the following themes for each topic:
topic 0: Science and research
topic 1: Team sports
topic 2: Music, TV, and film
topic 3: American college and politics
topic 4: General politics
topic 5: Art and publishing
topic 6: Business
topic 7: International athletics
topic 8: Great Britain and Australia
topic 9: International music
We'll save these themes for later: End of explanation """
themes = ['science and research','team sports','music, TV, and film','American college and politics','general politics', \
'art and publishing','Business','international athletics','Great Britain and Australia','international music']
"""
Explanation: Measuring the importance of top words We can learn more about topics by exploring how they place probability mass (which we can think of as a weight) on each of their top words.
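To make "a topic is a distribution over words" concrete, here is a toy numpy sketch with a made-up six-word vocabulary (these are not the fitted model's numbers):

```python
import numpy as np

# Toy topic-word probabilities over a six-word vocabulary (made-up values)
vocab = np.array(['university', 'research', 'game', 'season', 'film', 'album'])
topic_word = np.array([
    [0.35, 0.30, 0.05, 0.05, 0.15, 0.10],   # a "science"-flavored topic
    [0.05, 0.05, 0.40, 0.35, 0.05, 0.10],   # a "sports"-flavored topic
])

# Each topic assigns a probability to every word, and each row sums to 1
assert np.allclose(topic_word.sum(axis=1), 1.0)

# Top words per topic, analogous to get_topics(num_words=2):
# argsort ascending, reverse each row, keep the first two indices
top2 = vocab[np.argsort(topic_word, axis=1)[:, ::-1][:, :2]]
```

The same word ('university') gets very different weight under the two topics, which is exactly what lets the top-word lists distinguish them.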
We'll do this with two visualizations of the weights for the top words in each topic: - the weights of the top 100 words, sorted by the size - the total weight of the top 10 words Here's a plot for the top 100 words by weight in each topic: End of explanation """ top_probs = [sum(topic_model.get_topics(topic_ids=[i], num_words=10)['score']) for i in range(10)] ind = np.arange(10) width = 0.5 fig, ax = plt.subplots() ax.bar(ind-(width/2),top_probs,width) ax.set_xticks(ind) plt.xlabel('Topic') plt.ylabel('Probability') plt.title('Total Probability of Top 10 Words in each Topic') plt.xlim(-0.5,9.5) plt.ylim(0,0.15) plt.show() """ Explanation: In the above plot, each line corresponds to one of our ten topics. Notice how for each topic, the weights drop off sharply as we move down the ranked list of most important words. This shows that the top 10-20 words in each topic are assigned a much greater weight than the remaining words - and remember from the summary of our topic model that our vocabulary has 547462 words in total! Next we plot the total weight assigned by each topic to its top 10 words: End of explanation """ obama = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Barack Obama')[0])]]) pred1 = topic_model.predict(obama, output_type='probability') pred2 = topic_model.predict(obama, output_type='probability') print(gl.SFrame({'topics':themes, 'predictions (first draw)':pred1[0], 'predictions (second draw)':pred2[0]})) """ Explanation: Here we see that, for our topic model, the top 10 words only account for a small fraction (in this case, between 5% and 13%) of their topic's total probability mass. So while we can use the top words to identify broad themes for each topic, we should keep in mind that in reality these topics are more complex than a simple 10-word summary. 
Finally, we observe that some 'junk' words appear highly rated in some topics despite our efforts to remove unhelpful words before fitting the model; for example, the word 'born' appears as a top 10 word in three different topics, but it doesn't help us describe these topics at all. Topic distributions for some example documents As we noted in the introduction to this assignment, LDA allows for mixed membership, which means that each document can partially belong to several different topics. For each document, topic membership is expressed as a vector of weights that sum to one; the magnitude of each weight indicates the degree to which the document represents that particular topic. We'll explore this in our fitted model by looking at the topic distributions for a few example Wikipedia articles from our data set. We should find that these articles have the highest weights on the topics whose themes are most relevant to the subject of the article - for example, we'd expect an article on a politician to place relatively high weight on topics related to government, while an article about an athlete should place higher weight on topics related to sports or competition. Topic distributions for documents can be obtained using GraphLab Create's predict() function. GraphLab Create uses a collapsed Gibbs sampler similar to the one described in the video lectures, where only the word assignments variables are sampled. To get a document-specific topic proportion vector post-facto, predict() draws this vector from the conditional distribution given the sampled word assignments in the document. 
Notice that, since these are draws from a distribution over topics that the model has learned, we will get slightly different predictions each time we call this function on a document - we can see this below, where we predict the topic distribution for the article on Barack Obama: End of explanation """ def average_predictions(model, test_document, num_trials=100): avg_preds = np.zeros((model.num_topics)) for i in range(num_trials): avg_preds += model.predict(test_document, output_type='probability')[0] avg_preds = avg_preds/num_trials result = gl.SFrame({'topics':themes, 'average predictions':avg_preds}) result = result.sort('average predictions', ascending=False) return result print average_predictions(topic_model, obama, 100) """ Explanation: To get a more robust estimate of the topics for each document, we can average a large number of predictions for the same document: End of explanation """ bush = gl.SArray([wiki_docs[int(np.where(wiki['name']=='George W. Bush')[0])]]) print average_predictions(topic_model, bush, 100) """ Explanation: Quiz Question: What is the topic most closely associated with the article about former US President George W. Bush? Use the average results from 100 topic predictions. End of explanation """ gerrard = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Steven Gerrard')[0])]]) print average_predictions(topic_model, gerrard, 100) """ Explanation: Quiz Question: What are the top 3 topics corresponding to the article about English football (soccer) player Steven Gerrard? Use the average results from 100 topic predictions. 
End of explanation """ wiki['lda'] = topic_model.predict(wiki_docs, output_type='probability') """ Explanation: Comparing LDA to nearest neighbors for document retrieval So far we have found that our topic model has learned some coherent topics, we have explored these topics as probability distributions over a vocabulary, and we have seen how individual documents in our Wikipedia data set are assigned to these topics in a way that corresponds with our expectations. In this section, we will use the predicted topic distribution as a representation of each document, similar to how we have previously represented documents by word count or TF-IDF. This gives us a way of computing distances between documents, so that we can run a nearest neighbors search for a given document based on its membership in the topics that we learned from LDA. We can contrast the results with those obtained by running nearest neighbors under the usual TF-IDF representation, an approach that we explored in a previous assignment. 
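Since retrieval under the LDA representation boils down to comparing topic-proportion vectors, the core computation is small. Here is a numpy sketch with made-up three-topic vectors (cosine distance is one common choice of distance):

```python
import numpy as np

def cosine_distance(a, b):
    # 1 minus the cosine similarity of the two vectors
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Made-up topic-proportion vectors: two "politics-heavy" docs and one "sports-heavy" doc
doc_a = np.array([0.7, 0.2, 0.1])
doc_b = np.array([0.6, 0.3, 0.1])
doc_c = np.array([0.1, 0.1, 0.8])

# Documents with similar topic mixes are close even if they share few keywords
d_ab = cosine_distance(doc_a, doc_b)
d_ac = cosine_distance(doc_a, doc_c)
```

Here d_ab comes out far smaller than d_ac, so a nearest neighbors search on these vectors would pair the two politics-heavy documents — membership in shared themes, not shared vocabulary, drives the match.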
We'll start by creating the LDA topic distribution representation for each document: End of explanation """
wiki['lda'] = topic_model.predict(wiki_docs, output_type='probability')
"""
Explanation: Next we add the TF-IDF document representations: End of explanation """
wiki['word_count'] = gl.text_analytics.count_words(wiki['text'])
wiki['tf_idf'] = gl.text_analytics.tf_idf(wiki['word_count'])
"""
Explanation: For each of our two different document representations, we can use GraphLab Create to compute a brute-force nearest neighbors model: End of explanation """
model_tf_idf = gl.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
                                           method='brute_force', distance='cosine')
model_lda_rep = gl.nearest_neighbors.create(wiki, label='name', features=['lda'],
                                            method='brute_force', distance='cosine')
"""
Explanation: Let's compare these nearest neighbor models by finding the nearest neighbors under each representation on an example document. For this example we'll use Paul Krugman, an American economist: End of explanation """
model_tf_idf.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)

model_lda_rep.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)
"""
Explanation: Notice that there is no overlap between the two sets of top 10 nearest neighbors. This doesn't necessarily mean that one representation is better or worse than the other, but rather that they are picking out different features of the documents. With TF-IDF, documents are distinguished by the frequency of uncommon words. Since similarity is defined based on the specific words used in the document, documents that are "close" under TF-IDF tend to be similar in terms of specific details.
This is what we see in the example: the top 10 nearest neighbors are all economists from the US, UK, or Canada. Our LDA representation, on the other hand, defines similarity between documents in terms of their topic distributions. This means that documents can be "close" if they share similar themes, even though they may not share many of the same keywords. For the article on Paul Krugman, we expect the most important topics to be 'American college and politics' and 'science and research'. As a result, we see that the top 10 nearest neighbors are academics from a wide variety of fields, including literature, anthropology, and religious studies. Quiz Question: Using the TF-IDF representation, compute the 5000 nearest neighbors for American baseball player Alex Rodriguez. For what value of k is Mariano Rivera the k-th nearest neighbor to Alex Rodriguez? (Hint: Once you have a list of the nearest neighbors, you can use mylist.index(value) to find the index of the first instance of value in mylist.) Quiz Question: Using the LDA representation, compute the 5000 nearest neighbors for American baseball player Alex Rodriguez. For what value of k is Mariano Rivera the k-th nearest neighbor to Alex Rodriguez? (Hint: Once you have a list of the nearest neighbors, you can use mylist.index(value) to find the index of the first instance of value in mylist.) End of explanation """ topic_model.alpha """ Explanation: Understanding the role of LDA model hyperparameters Finally, we'll take a look at the effect of the LDA model hyperparameters alpha and gamma on the characteristics of our fitted model. Recall that alpha is a parameter of the prior distribution over topic weights in each document, while gamma is a parameter of the prior distribution over word weights in each topic. 
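These priors are Dirichlet distributions, and the effect of the concentration parameter is easy to check numerically. Here is a numpy sketch with toy settings (K = 10 topics, not the fitted model's values):

```python
import numpy as np

rng = np.random.default_rng(0)
K, n = 10, 2000

# Draw topic-proportion vectors under a low and a high concentration parameter
low_alpha = rng.dirichlet(np.full(K, 0.1), size=n)    # peaked: most mass on a few topics
high_alpha = rng.dirichlet(np.full(K, 50.0), size=n)  # smooth: mass spread evenly

# Average size of the single largest topic weight under each setting
peak_low = low_alpha.max(axis=1).mean()
peak_high = high_alpha.max(axis=1).mean()
```

Under the low setting the largest weight hogs most of the probability mass; under the high setting it stays near the uniform value 1/K — exactly the smoothing behavior discussed below.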
In the video lectures, we saw that alpha and gamma can be thought of as smoothing parameters when we compute how much each document "likes" a topic (in the case of alpha) or how much each topic "likes" a word (in the case of gamma). In both cases, these parameters serve to reduce the differences across topics or words in terms of these calculated preferences; alpha makes the document preferences "smoother" over topics, and gamma makes the topic preferences "smoother" over words. Our goal in this section will be to understand how changing these parameter values affects the characteristics of the resulting topic model. Quiz Question: What was the value of alpha used to fit our original topic model? End of explanation """ topic_model.beta """ Explanation: Quiz Question: What was the value of gamma used to fit our original topic model? Remember that GraphLab Create uses "beta" instead of "gamma" to refer to the hyperparameter that influences topic distributions over words. End of explanation """ tpm_low_alpha = gl.load_model('topic_models/lda_low_alpha') tpm_high_alpha = gl.load_model('topic_models/lda_high_alpha') """ Explanation: We'll start by loading some topic models that have been trained using different settings of alpha and gamma. 
Specifically, we will start by comparing the following two models to our original topic model: - tpm_low_alpha, a model trained with alpha = 1 and default gamma - tpm_high_alpha, a model trained with alpha = 50 and default gamma End of explanation """ a = np.sort(tpm_low_alpha.predict(obama,output_type='probability')[0])[::-1] b = np.sort(topic_model.predict(obama,output_type='probability')[0])[::-1] c = np.sort(tpm_high_alpha.predict(obama,output_type='probability')[0])[::-1] ind = np.arange(len(a)) width = 0.3 def param_bar_plot(a,b,c,ind,width,ylim,param,xlab,ylab): fig = plt.figure() ax = fig.add_subplot(111) b1 = ax.bar(ind, a, width, color='lightskyblue') b2 = ax.bar(ind+width, b, width, color='lightcoral') b3 = ax.bar(ind+(2*width), c, width, color='gold') ax.set_xticks(ind+width) ax.set_xticklabels(range(10)) ax.set_ylabel(ylab) ax.set_xlabel(xlab) ax.set_ylim(0,ylim) ax.legend(handles = [b1,b2,b3],labels=['low '+param,'original model','high '+param]) plt.tight_layout() param_bar_plot(a,b,c,ind,width,ylim=1.0,param='alpha', xlab='Topics (sorted by weight of top 100 words)',ylab='Topic Probability for Obama Article') """ Explanation: Changing the hyperparameter alpha Since alpha is responsible for smoothing document preferences over topics, the impact of changing its value should be visible when we plot the distribution of topic weights for the same document under models fit with different alpha values. In the code below, we plot the (sorted) topic weights for the Wikipedia article on Barack Obama under models fit with high, original, and low settings of alpha. 
End of explanation """ paul = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Paul Krugman')[0])]]) low_alpha_predictions = average_predictions(tpm_low_alpha, paul, 100) low_alpha_predictions = low_alpha_predictions[(low_alpha_predictions['average predictions'] > 0.3) | (low_alpha_predictions['average predictions'] < 0.05)] low_alpha_predictions.num_rows() high_alpha_predictions = average_predictions(tpm_high_alpha, paul, 100) high_alpha_predictions = high_alpha_predictions[(high_alpha_predictions['average predictions'] > 0.3) | (high_alpha_predictions['average predictions'] < 0.05)] high_alpha_predictions.num_rows() """ Explanation: Here we can clearly see the smoothing enforced by the alpha parameter - notice that when alpha is low most of the weight in the topic distribution for this article goes to a single topic, but when alpha is high the weight is much more evenly distributed across the topics. Quiz Question: How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the low alpha model? Use the average results from 100 topic predictions. Quiz Question: How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the high alpha model? Use the average results from 100 topic predictions. 
End of explanation """ del tpm_low_alpha del tpm_high_alpha tpm_low_gamma = gl.load_model('topic_models/lda_low_gamma') tpm_high_gamma = gl.load_model('topic_models/lda_high_gamma') a_top = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1] b_top = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1] c_top = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1] a_bot = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1] b_bot = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1] c_bot = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1] ind = np.arange(len(a)) width = 0.3 param_bar_plot(a_top, b_top, c_top, ind, width, ylim=0.6, param='gamma', xlab='Topics (sorted by weight of top 100 words)', ylab='Total Probability of Top 100 Words') param_bar_plot(a_bot, b_bot, c_bot, ind, width, ylim=0.0002, param='gamma', xlab='Topics (sorted by weight of bottom 1000 words)', ylab='Total Probability of Bottom 1000 Words') """ Explanation: Changing the hyperparameter gamma Just as we were able to see the effect of alpha by plotting topic weights for a document, we expect to be able to visualize the impact of changing gamma by plotting word weights for each topic. In this case, however, there are far too many words in our vocabulary to do this effectively. Instead, we'll plot the total weight of the top 100 words and bottom 1000 words for each topic. Below, we plot the (sorted) total weights of the top 100 words and bottom 1000 from each topic in the high, original, and low gamma models. 
Now we will consider the following two models: - tpm_low_gamma, a model trained with gamma = 0.02 and default alpha - tpm_high_gamma, a model trained with gamma = 0.5 and default alpha End of explanation """ words = tpm_low_gamma.get_topics(num_words=2000, cdf_cutoff=0.5) words.num_rows() / 10.0 """ Explanation: From these two plots we can see that the low gamma model results in higher weight placed on the top words and lower weight placed on the bottom words for each topic, while the high gamma model places relatively less weight on the top words and more weight on the bottom words. Thus increasing gamma results in topics that have a smoother distribution of weight across all the words in the vocabulary. Quiz Question: For each topic of the low gamma model, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get_topics() function from GraphLab Create with the cdf_cutoff argument). End of explanation """ words = tpm_high_gamma.get_topics(num_words=2000, cdf_cutoff=0.5) words.num_rows() / 10.0 """ Explanation: Quiz Question: For each topic of the high gamma model, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get_topics() function from GraphLab Create with the cdf_cutoff argument). End of explanation """
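As a footnote, the counting behind these cdf_cutoff questions can be mimicked in plain numpy — sort a topic's word probabilities in decreasing order, take the cumulative sum, and count how many words it takes to reach total probability 0.5. The probabilities below are made up, not taken from either fitted model:

```python
import numpy as np

# Toy topic: word probabilities for a tiny vocabulary (made-up values summing to 1)
word_probs = np.array([0.35, 0.25, 0.15, 0.10, 0.08, 0.04, 0.03])

cdf = np.cumsum(np.sort(word_probs)[::-1])
n_words = int(np.searchsorted(cdf, 0.5)) + 1   # words needed to cover probability 0.5
```

For this toy topic the first two words already cover 0.60 of the mass, so n_words is 2; a smoother (high-gamma) topic would need many more words to reach the same cutoff.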
fja05680/pinkfish
examples/245.double-7s-ave-portfolio/strategy.ipynb
mit
import datetime import matplotlib.pyplot as plt import pandas as pd import pinkfish as pf import strategy # Format price data. pd.options.display.float_format = '{:0.2f}'.format %matplotlib inline # Set size of inline plots '''note: rcParams can't be in same cell as import matplotlib or %matplotlib inline %matplotlib notebook: will lead to interactive plots embedded within the notebook, you can zoom and resize the figure %matplotlib inline: only draw static images in the notebook ''' plt.rcParams["figure.figsize"] = (10, 7) """ Explanation: Double 7's Average Portfolio (Short Term Trading Strategies that Work) 1. The Security is above its 200-day moving average (or an X-day moving average). 2. If the Security closes at an X-day low, buy. 3. If the Security closes at an X-day high, sell your long position. Instead of using a single period like 7 and allocating all of the capital to it, the capital is split between the number of periods that will be used (for example, [5, 6, 7] = 33% each). End of explanation """ # Symbol Lists symbol = 'SPY' capital = 10000 start = datetime.datetime(*pf.ALPHA_BEGIN) #start = datetime.datetime(*pf.SP500_BEGIN) end = datetime.datetime.now() options = { 'use_adj' : False, 'use_cache' : True, 'margin' : 2.0, 'periods' : [5,6,7,8,9], 'sma' : 70, 'use_regime_filter' : True, } """ Explanation: Some global data End of explanation """ s = strategy.Strategy(symbol, capital, start, end, options=options) s.run() s.ts """ Explanation: Run Strategy End of explanation """ s.rlog.head() s.tlog.tail() s.dbal.tail() """ Explanation: View log DataFrames: raw trade log, trade log, and daily balance End of explanation """ pf.print_full(s.stats) """ Explanation: Generate strategy stats - display all available stats End of explanation """ weights = {symbol: 1 / len(s.symbols) for symbol in s.symbols} totals = s.portfolio.performance_per_symbol(weights=weights) totals corr_df = s.portfolio.correlation_map(s.ts) corr_df """ Explanation: View Performance by Symbol End of
explanation """ benchmark = pf.Benchmark(symbol, s.capital, s.start, s.end, use_adj=True) benchmark.run() """ Explanation: Run Benchmark, Retrieve benchmark logs, and Generate benchmark stats End of explanation """ pf.plot_equity_curve(s.dbal, benchmark=benchmark.dbal) """ Explanation: Plot Equity Curves: Strategy vs Benchmark End of explanation """ df = pf.plot_bar_graph(s.stats, benchmark.stats) df """ Explanation: Bar Graph: Strategy vs Benchmark End of explanation """ kelly = pf.kelly_criterion(s.stats, benchmark.stats) kelly """ Explanation: Analysis: Kelly Criterion End of explanation """
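The Kelly analysis above sizes positions from the strategy's win/loss statistics. The classic formula for uneven payoffs is f* = W - (1 - W)/R, where W is the win rate and R the payoff ratio (average win / average loss). A minimal hand-rolled sketch of that formula — not pinkfish's exact implementation, and the inputs here are made up:

```python
def kelly_fraction(win_rate, payoff_ratio):
    """Classic Kelly bet size: f* = W - (1 - W) / R.

    win_rate:     probability of a winning trade (W)
    payoff_ratio: average win divided by average loss (R)
    """
    if payoff_ratio <= 0:
        raise ValueError("payoff_ratio must be positive")
    return win_rate - (1.0 - win_rate) / payoff_ratio

# 55% winners, with the average winner 1.5x the average loser:
print(round(kelly_fraction(0.55, 1.5), 4))  # 0.25 -> risk 25% of capital at full Kelly
```

In practice traders usually trade a fraction of full Kelly (e.g. half Kelly), since the full fraction assumes the measured statistics hold exactly going forward.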
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/migration/UJ5 AutoML for vision with Vertex AI Video Classification.ipynb
apache-2.0
! pip3 install -U google-cloud-aiplatform --user """ Explanation: Vertex AI AutoML Image Object Detection Installation Install the latest (preview) version of the Vertex SDK. End of explanation """ ! pip3 install google-cloud-storage """ Explanation: Install the Google cloud-storage library as well. End of explanation """ import os if not os.getenv("AUTORUN"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) """ Explanation: Restart the Kernel Once you've installed the Vertex SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages. End of explanation """ PROJECT_ID = "[your-project-id]" # @param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID """ Explanation: Before you begin GPU runtime Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU Set up your GCP project The following steps are required, regardless of your notebook environment. Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the Vertex APIs and Compute Engine APIs. Google Cloud SDK is already installed in Google Cloud Notebooks. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation """ REGION = "us-central1" # @param {type: "string"} """ Explanation: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend when possible, to choose the region closest to you. Americas: us-central1 Europe: europe-west4 Asia Pacific: asia-east1 You cannot use a Multi-Regional Storage bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see Region support for Vertex AI services End of explanation """ from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") """ Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial. End of explanation """ import os import sys # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your Google Cloud account. This provides access # to your Cloud Storage bucket and lets you submit training jobs and prediction # requests. # If on Vertex, then don't execute this code if not os.path.exists("/opt/deeplearning/metadata/env_version"): if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this tutorial in a notebook locally, replace the string # below with the path to your service account key and run this cell to # authenticate your Google Cloud account. else: %env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json # Log in to your account on Google Cloud ! gcloud auth login """ Explanation: Authenticate your GCP account If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step. 
Note: If you are on a Vertex notebook and run the cell, the cell knows to skip executing the authentication steps. End of explanation """ BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]": BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP """ Explanation: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket. Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets. End of explanation """ ! gsutil mb -l $REGION gs://$BUCKET_NAME """ Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. End of explanation """ ! gsutil ls -al gs://$BUCKET_NAME """ Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents: End of explanation """ import os import sys import time from google.cloud.aiplatform import gapic as aip from google.protobuf import json_format from google.protobuf.json_format import MessageToJson, ParseDict from google.protobuf.struct_pb2 import Struct, Value """ Explanation: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants Import Vertex SDK Import the Vertex SDK into our Python environment.
End of explanation """ # API Endpoint API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION) # Vertex AI location root path for your dataset, model and endpoint resources PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION """ Explanation: Vertex AI constants Set up the following constants for Vertex AI: API_ENDPOINT: The Vertex AI API service endpoint for dataset, model, job, pipeline and endpoint services. PARENT: The Vertex AI location root path for dataset, model and endpoint resources. End of explanation """ # Image Dataset type IMAGE_SCHEMA = "google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml" # Image Labeling type IMPORT_SCHEMA_IMAGE_OBJECT_DETECTION_BOX = "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_bounding_box_io_format_1.0.0.yaml" # Image Training task TRAINING_IMAGE_OBJECT_DETECTION_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_object_detection_1.0.0.yaml" """ Explanation: AutoML constants Next, set up constants unique to AutoML image object detection datasets and training: Dataset Schemas: Tells the managed dataset service which type of dataset it is. Data Labeling (Annotations) Schemas: Tells the managed dataset service how the data is labeled (annotated). Dataset Training Schemas: Tells the Vertex AI Pipelines service the task (e.g., object detection) to train the model for.
End of explanation """ # client options same for all services client_options = {"api_endpoint": API_ENDPOINT} def create_dataset_client(): client = aip.DatasetServiceClient(client_options=client_options) return client def create_model_client(): client = aip.ModelServiceClient(client_options=client_options) return client def create_pipeline_client(): client = aip.PipelineServiceClient(client_options=client_options) return client def create_endpoint_client(): client = aip.EndpointServiceClient(client_options=client_options) return client def create_prediction_client(): client = aip.PredictionServiceClient(client_options=client_options) return client def create_job_client(): client = aip.JobServiceClient(client_options=client_options) return client clients = {} clients["dataset"] = create_dataset_client() clients["model"] = create_model_client() clients["pipeline"] = create_pipeline_client() clients["endpoint"] = create_endpoint_client() clients["prediction"] = create_prediction_client() clients["job"] = create_job_client() for client in clients.items(): print(client) IMPORT_FILE = "gs://cloud-ml-data/img/openimage/csv/salads_ml_use.csv" ! gsutil cat $IMPORT_FILE | head -n 10 """ Explanation: Clients The Vertex SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (Vertex). You will use several clients in this tutorial, so set them all up upfront. Dataset Service for managed datasets. Model Service for managed models. Pipeline Service for training. Endpoint Service for deployment. Job Service for batch jobs and custom training. Prediction Service for serving. Note: Prediction has a different service endpoint. 
End of explanation """ DATA_SCHEMA = IMAGE_SCHEMA dataset = { "display_name": "salads_" + TIMESTAMP, "metadata_schema_uri": "gs://" + DATA_SCHEMA, } print( MessageToJson( aip.CreateDatasetRequest( parent=PARENT, dataset=dataset, ).__dict__["_pb"] ) ) """ Explanation: Example output: TEST,gs://cloud-ml-data/img/openimage/103/279324025_3e74a32a84_o.jpg,Baked Goods,0.005743,0.084985,,,0.567511,0.735736,, TEST,gs://cloud-ml-data/img/openimage/103/279324025_3e74a32a84_o.jpg,Salad,0.402759,0.310473,,,1.000000,0.982695,, TEST,gs://cloud-ml-data/img/openimage/1064/3167707458_7b2eebed9e_o.jpg,Cheese,0.000000,0.000000,,,0.054865,0.480665,, TEST,gs://cloud-ml-data/img/openimage/1064/3167707458_7b2eebed9e_o.jpg,Cheese,0.041131,0.401678,,,0.318230,0.785916,, TEST,gs://cloud-ml-data/img/openimage/1064/3167707458_7b2eebed9e_o.jpg,Cheese,0.116263,0.065161,,,0.451528,0.286489,, TEST,gs://cloud-ml-data/img/openimage/1064/3167707458_7b2eebed9e_o.jpg,Cheese,0.557359,0.411551,,,0.988760,0.731613,, TEST,gs://cloud-ml-data/img/openimage/1064/3167707458_7b2eebed9e_o.jpg,Cheese,0.562206,0.059401,,,0.876467,0.260982,, TEST,gs://cloud-ml-data/img/openimage/1064/3167707458_7b2eebed9e_o.jpg,Cheese,0.567861,0.000161,,,0.699543,0.077502,, TEST,gs://cloud-ml-data/img/openimage/1064/3167707458_7b2eebed9e_o.jpg,Cheese,0.916052,0.085569,,,1.000000,0.348036,, TEST,gs://cloud-ml-data/img/openimage/1064/3167707458_7b2eebed9e_o.jpg,Salad,0.000000,0.000000,,,1.000000,1.000000,, Create a dataset projects.locations.datasets.create Request End of explanation """ request = clients["dataset"].create_dataset( parent=PARENT, dataset=dataset, ) """ Explanation: Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "dataset": { "displayName": "salads_20210226015226", "metadataSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml" } } Call End of explanation """ result = request.result() print(MessageToJson(result.__dict__["_pb"])) """ Explanation: 
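Each row of the import CSV shown above carries the ML use split, the GCS image URI, a label, and normalized bounding-box coordinates with unused vertex slots left blank. A small stdlib sketch of how one such row breaks apart — the field positions are inferred from the sample output, so treat the layout as an assumption rather than the authoritative schema:

```python
import csv
from io import StringIO

# One row copied from the head of salads_ml_use.csv above.
row_text = ("TEST,gs://cloud-ml-data/img/openimage/103/279324025_3e74a32a84_o.jpg,"
            "Baked Goods,0.005743,0.084985,,,0.567511,0.735736,,")

def parse_bbox_row(line):
    """Split one bounding-box import row (field layout inferred from the sample above)."""
    fields = next(csv.reader(StringIO(line)))
    ml_use, gcs_uri, label = fields[0], fields[1], fields[2]
    # Coordinates are normalized to [0, 1]; the blank fields are unused vertices.
    x_min, y_min = float(fields[3]), float(fields[4])
    x_max, y_max = float(fields[7]), float(fields[8])
    return {"ml_use": ml_use, "uri": gcs_uri, "label": label,
            "box": (x_min, y_min, x_max, y_max)}

parsed = parse_bbox_row(row_text)
print(parsed["label"], parsed["box"])
```

Rows like the last one in the sample, with a box spanning (0, 0) to (1, 1), label the entire image.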
Response End of explanation """ # The full unique ID for the dataset dataset_id = result.name # The short numeric ID for the dataset dataset_short_id = dataset_id.split("/")[-1] print(dataset_id) """ Explanation: Example output: { "name": "projects/116273516712/locations/us-central1/datasets/8577474926234042368", "displayName": "salads_20210226015226", "metadataSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml", "labels": { "aiplatform.googleapis.com/dataset_metadata_schema": "IMAGE" }, "metadata": { "dataItemSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/dataitem/image_1.0.0.yaml" } } End of explanation """ LABEL_SCHEMA = IMPORT_SCHEMA_IMAGE_OBJECT_DETECTION_BOX import_config = { "gcs_source": { "uris": [IMPORT_FILE], }, "import_schema_uri": LABEL_SCHEMA, } print( MessageToJson( aip.ImportDataRequest( name=dataset_short_id, import_configs=[import_config], ).__dict__["_pb"] ) ) """ Explanation: projects.locations.datasets.import Request End of explanation """ request = clients["dataset"].import_data( name=dataset_id, import_configs=[import_config], ) """ Explanation: Example output: { "name": "8577474926234042368", "importConfigs": [ { "gcsSource": { "uris": [ "gs://cloud-ml-data/img/openimage/csv/salads_ml_use.csv" ] }, "importSchemaUri": "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_bounding_box_io_format_1.0.0.yaml" } ] } Call End of explanation """ result = request.result() print(MessageToJson(result.__dict__["_pb"])) """ Explanation: Response End of explanation """ TRAINING_SCHEMA = TRAINING_IMAGE_OBJECT_DETECTION_SCHEMA task = Value( struct_value=Struct( fields={ "budget_milli_node_hours": Value(number_value=20000), "disable_early_stopping": Value(bool_value=False), }, ) ) training_pipeline = { "display_name": "salads_" + TIMESTAMP, "input_data_config": { "dataset_id": dataset_short_id, }, "model_to_upload": { "display_name": "salads_" + TIMESTAMP, }, "training_task_definition": TRAINING_SCHEMA, 
"training_task_inputs": task, } print( MessageToJson( aip.CreateTrainingPipelineRequest( parent=PARENT, training_pipeline=training_pipeline, ).__dict__["_pb"] ) ) """ Explanation: Example output: {} Train a model projects.locations.trainingPipelines.create Request End of explanation """ request = clients["pipeline"].create_training_pipeline( parent=PARENT, training_pipeline=training_pipeline, ) """ Explanation: Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "trainingPipeline": { "displayName": "salads_20210226015226", "inputDataConfig": { "datasetId": "8577474926234042368" }, "trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_object_detection_1.0.0.yaml", "trainingTaskInputs": { "budget_milli_node_hours": 20000.0, "disable_early_stopping": false }, "modelToUpload": { "displayName": "salads_20210226015226" } } } Call End of explanation """ print(MessageToJson(request.__dict__["_pb"])) """ Explanation: Response End of explanation """ # The full unique ID for the training pipeline training_pipeline_id = request.name # The short numeric ID for the training pipeline training_pipeline_short_id = training_pipeline_id.split("/")[-1] print(training_pipeline_id) """ Explanation: Example output: { "name": "projects/116273516712/locations/us-central1/trainingPipelines/2049683188220952576", "displayName": "salads_20210226015226", "inputDataConfig": { "datasetId": "8577474926234042368" }, "trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_object_detection_1.0.0.yaml", "trainingTaskInputs": { "budgetMilliNodeHours": "20000" }, "modelToUpload": { "displayName": "salads_20210226015226" }, "state": "PIPELINE_STATE_PENDING", "createTime": "2021-02-26T02:12:41.612146Z", "updateTime": "2021-02-26T02:12:41.612146Z" } End of explanation """ request = clients["pipeline"].get_training_pipeline( name=training_pipeline_id, ) """ Explanation: 
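The budget_milli_node_hours field above is expressed in thousandths of a node hour, so the 20000 used in the request buys 20 node hours of training. A tiny pair of helpers to make that conversion explicit — plain arithmetic for illustration, not part of the SDK:

```python
def milli_node_hours(node_hours):
    """Convert whole node hours to the milli-node-hour units used by training_task_inputs."""
    return int(node_hours * 1000)

def node_hours(milli):
    """Inverse conversion, handy when reading a budget back out of a pipeline spec."""
    return milli / 1000.0

budget = milli_node_hours(20)      # matches the 20000 in the request above
print(budget, node_hours(budget))  # 20000 20.0
```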
projects.locations.trainingPipelines.get Call End of explanation """ print(MessageToJson(request.__dict__["_pb"])) """ Explanation: Response End of explanation """ while True: response = clients["pipeline"].get_training_pipeline(name=training_pipeline_id) if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED: print("Training job has not completed:", response.state) model_to_deploy_name = None if response.state == aip.PipelineState.PIPELINE_STATE_FAILED: break else: model_id = response.model_to_upload.name print("Training Time:", response.end_time - response.start_time) break time.sleep(20) print(model_id) """ Explanation: Example output: { "name": "projects/116273516712/locations/us-central1/trainingPipelines/2049683188220952576", "displayName": "salads_20210226015226", "inputDataConfig": { "datasetId": "8577474926234042368" }, "trainingTaskDefinition": "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_object_detection_1.0.0.yaml", "trainingTaskInputs": { "budgetMilliNodeHours": "20000" }, "modelToUpload": { "displayName": "salads_20210226015226" }, "state": "PIPELINE_STATE_PENDING", "createTime": "2021-02-26T02:12:41.612146Z", "updateTime": "2021-02-26T02:12:41.612146Z" } End of explanation """ request = clients["model"].list_model_evaluations(parent=model_id) """ Explanation: Evaluate the model projects.locations.models.evaluations.list Call End of explanation """ import json model_evaluations = [json.loads(MessageToJson(me.__dict__["_pb"])) for me in request] # The evaluation slice evaluation_slice = request.model_evaluations[0].name print(json.dumps(model_evaluations, indent=2)) """ Explanation: Response End of explanation """ request = clients["model"].get_model_evaluation( name=evaluation_slice, ) """ Explanation: Example output: ``` [ { "name": "projects/116273516712/locations/us-central1/models/770273865954754560/evaluations/7557961565471768576", "metricsSchemaUri": 
"gs://google-cloud-aiplatform/schema/modelevaluation/image_object_detection_metrics_1.0.0.yaml", "metrics": { "boundingBoxMetrics": [ { "meanAveragePrecision": 0.37167007, "confidenceMetrics": [ { "precision": 0.09565217, "f1Score": 0.15985467, "confidenceThreshold": 4.8275826e-05, "recall": 0.48618785 }, { "confidenceThreshold": 0.0007978445, "f1Score": 0.18373811, "precision": 0.11357702, "recall": 0.48066297 }, # REMOVED FOR BREVITY { "f1Score": 0.043243244, "precision": 1.0, "confidenceThreshold": 0.9953122, "recall": 0.022099448 }, { "confidenceThreshold": 0.99533135, "precision": 1.0, "recall": 0.016574586, "f1Score": 0.032608695 }, { "f1Score": 0.021857925, "confidenceThreshold": 0.99550796, "precision": 1.0, "recall": 0.011049724 }, { "confidenceThreshold": 0.99619305, "f1Score": 0.010989011, "precision": 1.0, "recall": 0.005524862 } ], "iouThreshold": 0.45, "meanAveragePrecision": 0.32951096 } ], "evaluatedBoundingBoxCount": 181.0, "boundingBoxMeanAveragePrecision": 0.2927905 }, "createTime": "2021-02-26T03:38:42.086497Z", "sliceDimensions": [ "annotationSpec" ] } ] ``` projects.locations.models.evaluations.get Call End of explanation """ print(MessageToJson(request.__dict__["_pb"])) """ Explanation: Response End of explanation """ test_items = ! gsutil cat $IMPORT_FILE | head -n2 test_item_1, test_label_1 = test_items[0].split(",")[1], test_items[0].split(",")[2] test_item_2, test_label_2 = test_items[0].split(",")[1], test_items[0].split(",")[2] file_1 = test_item_1.split("/")[-1] file_2 = test_item_2.split("/")[-1] ! gsutil cp $test_item_1 gs://$BUCKET_NAME/$file_1 ! 
gsutil cp $test_item_2 gs://$BUCKET_NAME/$file_2 test_item_1 = "gs://" + BUCKET_NAME + "/" + file_1 test_item_2 = "gs://" + BUCKET_NAME + "/" + file_2 print(test_item_1, test_label_1) print(test_item_2, test_label_2) """ Explanation: Example output: ``` { "name": "projects/116273516712/locations/us-central1/models/770273865954754560/evaluations/7557961565471768576", "metricsSchemaUri": "gs://google-cloud-aiplatform/schema/modelevaluation/image_object_detection_metrics_1.0.0.yaml", "metrics": { "boundingBoxMeanAveragePrecision": 0.2927905, "evaluatedBoundingBoxCount": 181.0, "boundingBoxMetrics": [ { "meanAveragePrecision": 0.37167007, "confidenceMetrics": [ { "f1Score": 0.15985467, "confidenceThreshold": 4.8275826e-05, "precision": 0.09565217, "recall": 0.48618785 }, { "precision": 0.11357702, "confidenceThreshold": 0.0007978445, "f1Score": 0.18373811, "recall": 0.48066297 }, { "confidenceThreshold": 0.003912397, "precision": 0.15167548, "recall": 0.47513813, "f1Score": 0.22994651 }, # REMOVED FOR BREVITY { "precision": 1.0, "confidenceThreshold": 0.9953122, "f1Score": 0.043243244, "recall": 0.022099448 }, { "precision": 1.0, "f1Score": 0.032608695, "recall": 0.016574586, "confidenceThreshold": 0.99533135 }, { "recall": 0.011049724, "precision": 1.0, "confidenceThreshold": 0.99550796, "f1Score": 0.021857925 }, { "f1Score": 0.010989011, "precision": 1.0, "recall": 0.005524862, "confidenceThreshold": 0.99619305 } ], "meanAveragePrecision": 0.32951096, "iouThreshold": 0.45 } ] }, "createTime": "2021-02-26T03:38:42.086497Z", "sliceDimensions": [ "annotationSpec" ] } ``` Make batch predictions Make the batch input file Let's now make a batch input file, which you store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For JSONL file, you make one dictionary entry per line for each image. The dictionary contains the key/value pairs: content: The Cloud Storage path to the image. 
mimeType: The content type. In our example, it is an image/jpeg file. End of explanation """ import json import tensorflow as tf gcs_input_uri = "gs://" + BUCKET_NAME + "/test.jsonl" with tf.io.gfile.GFile(gcs_input_uri, "w") as f: data = {"content": test_item_1, "mime_type": "image/jpeg"} f.write(json.dumps(data) + "\n") data = {"content": test_item_2, "mime_type": "image/jpeg"} f.write(json.dumps(data) + "\n") !gsutil cat $gcs_input_uri """ Explanation: Example output: gs://migration-ucaip-trainingaip-20210226015226/279324025_3e74a32a84_o.jpg Baked Goods gs://migration-ucaip-trainingaip-20210226015226/279324025_3e74a32a84_o.jpg Baked Goods End of explanation """ parameters = {"confidenceThreshold": 0.5, "maxPredictions": 2} batch_prediction_job = { "display_name": "salads_" + TIMESTAMP, "model": model_id, "input_config": { "instances_format": "jsonl", "gcs_source": { "uris": [gcs_input_uri], }, }, "model_parameters": json_format.ParseDict(parameters, Value()), "output_config": { "predictions_format": "jsonl", "gcs_destination": { "output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/", }, }, "dedicated_resources": { "machine_spec": { "machine_type": "n1-standard-2", "accelerator_type": 0, }, "starting_replica_count": 1, "max_replica_count": 1, }, } print( MessageToJson( aip.CreateBatchPredictionJobRequest( parent=PARENT, batch_prediction_job=batch_prediction_job ).__dict__["_pb"] ) ) """ Explanation: Example output: {"content": "gs://migration-ucaip-trainingaip-20210226015226/279324025_3e74a32a84_o.jpg", "mime_type": "image/jpeg"} {"content": "gs://migration-ucaip-trainingaip-20210226015226/279324025_3e74a32a84_o.jpg", "mime_type": "image/jpeg"} projects.locations.batchPredictionJobs.create Request End of explanation """ request = clients["job"].create_batch_prediction_job( parent=PARENT, batch_prediction_job=batch_prediction_job, ) """ Explanation: Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "batchPredictionJob": 
{ "displayName": "salads_20210226015226", "model": "projects/116273516712/locations/us-central1/models/770273865954754560", "inputConfig": { "instancesFormat": "jsonl", "gcsSource": { "uris": [ "gs://migration-ucaip-trainingaip-20210226015226/test.jsonl" ] } }, "modelParameters": { "confidenceThreshold": 0.5, "maxPredictions": 2.0 }, "outputConfig": { "predictionsFormat": "jsonl", "gcsDestination": { "outputUriPrefix": "gs://migration-ucaip-trainingaip-20210226015226/batch_output/" } }, "dedicatedResources": { "machineSpec": { "machineType": "n1-standard-2" }, "startingReplicaCount": 1, "maxReplicaCount": 1 } } } Call End of explanation """ print(MessageToJson(request.__dict__["_pb"])) """ Explanation: Response End of explanation """ # The fully qualified ID for the batch job batch_job_id = request.name # The short numeric ID for the batch job batch_job_short_id = batch_job_id.split("/")[-1] print(batch_job_id) """ Explanation: Example output: { "name": "projects/116273516712/locations/us-central1/batchPredictionJobs/2404341658876379136", "displayName": "salads_20210226015226", "model": "projects/116273516712/locations/us-central1/models/770273865954754560", "inputConfig": { "instancesFormat": "jsonl", "gcsSource": { "uris": [ "gs://migration-ucaip-trainingaip-20210226015226/test.jsonl" ] } }, "modelParameters": { "maxPredictions": 2.0, "confidenceThreshold": 0.5 }, "outputConfig": { "predictionsFormat": "jsonl", "gcsDestination": { "outputUriPrefix": "gs://migration-ucaip-trainingaip-20210226015226/batch_output/" } }, "state": "JOB_STATE_PENDING", "completionStats": { "incompleteCount": "-1" }, "createTime": "2021-02-26T09:36:17.046416Z", "updateTime": "2021-02-26T09:36:17.046416Z" } End of explanation """ request = clients["job"].get_batch_prediction_job( name=batch_job_id, ) """ Explanation: projects.locations.batchPredictionJobs.get Call End of explanation """ print(MessageToJson(request.__dict__["_pb"])) """ Explanation: Response End of explanation """ def 
get_latest_predictions(gcs_out_dir): """ Get the latest prediction subfolder using the timestamp in the subfolder name""" folders = !gsutil ls $gcs_out_dir latest = "" for folder in folders: subfolder = folder.split("/")[-2] if subfolder.startswith("prediction-"): if subfolder > latest: latest = folder[:-1] return latest while True: response = clients["job"].get_batch_prediction_job(name=batch_job_id) if response.state != aip.JobState.JOB_STATE_SUCCEEDED: print("The job has not completed:", response.state) if response.state == aip.JobState.JOB_STATE_FAILED: break else: folder = get_latest_predictions( response.output_config.gcs_destination.output_uri_prefix ) ! gsutil ls $folder/prediction*.jsonl ! gsutil cat $folder/prediction*.jsonl break time.sleep(60) """ Explanation: Example output: { "name": "projects/116273516712/locations/us-central1/batchPredictionJobs/2404341658876379136", "displayName": "salads_20210226015226", "model": "projects/116273516712/locations/us-central1/models/770273865954754560", "inputConfig": { "instancesFormat": "jsonl", "gcsSource": { "uris": [ "gs://migration-ucaip-trainingaip-20210226015226/test.jsonl" ] } }, "modelParameters": { "maxPredictions": 2.0, "confidenceThreshold": 0.5 }, "outputConfig": { "predictionsFormat": "jsonl", "gcsDestination": { "outputUriPrefix": "gs://migration-ucaip-trainingaip-20210226015226/batch_output/" } }, "state": "JOB_STATE_PENDING", "completionStats": { "incompleteCount": "-1" }, "createTime": "2021-02-26T09:36:17.046416Z", "updateTime": "2021-02-26T09:36:17.046416Z" } End of explanation """ endpoint = { "display_name": "salads_" + TIMESTAMP, } print( MessageToJson( aip.CreateEndpointRequest( parent=PARENT, endpoint=endpoint, ).__dict__["_pb"] ) ) """ Explanation: Example output: gs://migration-ucaip-trainingaip-20210226015226/batch_output/prediction-salads_20210226015226-2021-02-26T09:36:16.878261Z/predictions_00001.jsonl 
{"instance":{"content":"gs://migration-ucaip-trainingaip-20210226015226/279324025_3e74a32a84_o.jpg","mimeType":"image/jpeg"},"prediction":{"ids":["7754337640727445504","8330798393030868992"],"displayNames":["Salad","Baked Goods"],"confidences":[0.99217236,0.93992615],"bboxes":[[0.382205,0.9760891,0.29858154,0.9979937],[0.0012550354,0.5893767,0.06807296,0.81340706]]}} {"instance":{"content":"gs://migration-ucaip-trainingaip-20210226015226/279324025_3e74a32a84_o.jpg","mimeType":"image/jpeg"},"prediction":{"ids":["7754337640727445504","8330798393030868992"],"displayNames":["Salad","Baked Goods"],"confidences":[0.99217236,0.93992615],"bboxes":[[0.382205,0.9760891,0.29858154,0.9979937],[0.0012550354,0.5893767,0.06807296,0.81340706]]}} Make online predictions Prepare file for online prediction projects.locations.endpoints.create Request End of explanation """ request = clients["endpoint"].create_endpoint( parent=PARENT, endpoint=endpoint, ) """ Explanation: Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "endpoint": { "displayName": "salads_20210226015226" } } Call End of explanation """ result = request.result() print(MessageToJson(result.__dict__["_pb"])) """ Explanation: Response End of explanation """ # The full unique ID for the endpoint endpoint_id = result.name # The short numeric ID for the endpoint endpoint_short_id = endpoint_id.split("/")[-1] print(endpoint_id) """ Explanation: Example output: { "name": "projects/116273516712/locations/us-central1/endpoints/2449782275429105664" } End of explanation """ deployed_model = { "model": model_id, "display_name": "salads_" + TIMESTAMP, "automatic_resources": { "min_replica_count": 1, "max_replica_count": 1, }, } traffic_split = { "0": 100, } print( MessageToJson( aip.DeployModelRequest( endpoint=endpoint_id, deployed_model=deployed_model, traffic_split=traffic_split, ).__dict__["_pb"] ) ) """ Explanation: projects.locations.endpoints.deployModel Request End of explanation """ 
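In the traffic_split above, each key names a deployed model ("0" stands for the model being deployed in this request) and each value is the percentage of traffic it should receive; the percentages are expected to total 100. A quick sanity-check sketch you could run before calling deploy_model — the helper is ours, not part of the SDK:

```python
def validate_traffic_split(traffic_split):
    """Raise if the percentages in a traffic_split dict do not sum to 100."""
    total = sum(traffic_split.values())
    if total != 100:
        raise ValueError(f"traffic split sums to {total}, expected 100")
    return True

print(validate_traffic_split({"0": 100}))          # all traffic to the newly deployed model
print(validate_traffic_split({"0": 80, "1": 20}))  # an 80/20 canary-style split
```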
request = clients["endpoint"].deploy_model( endpoint=endpoint_id, deployed_model=deployed_model, traffic_split=traffic_split, ) """ Explanation: Example output: { "endpoint": "projects/116273516712/locations/us-central1/endpoints/2449782275429105664", "deployedModel": { "model": "projects/116273516712/locations/us-central1/models/770273865954754560", "displayName": "salads_20210226015226", "automaticResources": { "minReplicaCount": 1, "maxReplicaCount": 1 } }, "trafficSplit": { "0": 100 } } Call End of explanation """ result = request.result() print(MessageToJson(result.__dict__["_pb"])) """ Explanation: Response End of explanation """ # The unique ID for the deployed model deployed_model_id = result.deployed_model.id print(deployed_model_id) """ Explanation: Example output: { "deployedModel": { "id": "3904304217581420544" } } End of explanation """ import base64 import tensorflow as tf single_file = ! gsutil cat $IMPORT_FILE | head -n 1 single_file = single_file[0].split(",")[1] with tf.io.gfile.GFile(single_file, "rb") as f: content = f.read() instances_list = [{"content": base64.b64encode(content).decode("utf-8")}] instances = [json_format.ParseDict(s, Value()) for s in instances_list] parameters_dict = { "confidenceThreshold": 0.5, "maxPredictions": 2, } parameters = json_format.ParseDict(parameters_dict, Value()) request = aip.PredictRequest( endpoint=endpoint_id, parameters=parameters, ) request.instances.append(instances) print(MessageToJson(request.__dict__["_pb"])) """ Explanation: projects.locations.endpoints.predict Request End of explanation """ request = clients["prediction"].predict( endpoint=endpoint_id, instances=instances, parameters=parameters, ) """ Explanation: Example output: ``` { "endpoint": "projects/116273516712/locations/us-central1/endpoints/2449782275429105664", "instances": [ [ { "content": 
"/9j/4RtSRXhpZgAASUkqAAgAAAAIAA8BAgAGAAAAbgAAABABAgAEAAAATjkxABIBAwABAAAAAQAAABoBBQABAAAAdAAAABsBBQABAAAAfAAAACgBAwABAAAAAgAAABMCAwABAAAAAQAAAGmHBAABAAAAVh+71qaC5bdjJzRRdnuZVI80S+khxk0/zzXWpHmyjqPW4IHWnC755rWM3HW5DiH2jJzmke6296n22o+W4xtQIPWmrqJJPNVCr/X9MXJ1JPtW4Uvm+9N7iI3n9TTBcc9axqTa0EyObUUiXrVKTV0ZvvVjb REMOVED for brevity KUtSNYfMbpVhYdpFZxXcL66F23TjcRnFTTzgLwabmuYbXUqlgDmoHmy2BVpqw29UVLqbqKxrqcKDVWtqFrGTe6l8pVelYF5KZmq7JvQ00iiFU5qxGTkfWtouzsS2f/Z" } ] ], "parameters": { "confidenceThreshold": 0.5, "maxPredictions": 2.0 } } ``` Call End of explanation """ print(MessageToJson(request.__dict__["_pb"])) """ Explanation: Response End of explanation """ request = clients["endpoint"].undeploy_model( endpoint=endpoint_id, deployed_model_id=deployed_model_id, traffic_split={}, ) """ Explanation: Example output: { "predictions": [ { "ids": [ "7754337640727445504", "8330798393030868992" ], "confidences": [ 0.99217236, 0.939926147 ], "displayNames": [ "Salad", "Baked Goods" ], "bboxes": [ [ 0.38220492, 0.976089239, 0.298581541, 0.997993708 ], [ 0.00125509501, 0.589376688, 0.0680729151, 0.813407123 ] ] } ], "deployedModelId": "3904304217581420544" } projects.locations.endpoints.undeployModel Call End of explanation """ result = request.result() print(MessageToJson(result.__dict__["_pb"])) """ Explanation: Response End of explanation """ delete_dataset = True delete_model = True delete_endpoint = True delete_pipeline = True delete_batchjob = True delete_bucket = True # Delete the dataset using the Vertex AI fully qualified identifier for the dataset try: if delete_dataset: clients["dataset"].delete_dataset(name=dataset_id) except Exception as e: print(e) # Delete the model using the Vertex AI fully qualified identifier for the model try: if delete_model: clients["model"].delete_model(name=model_id) except Exception as e: print(e) # Delete the endpoint using the Vertex AI fully qualified identifier for the endpoint try: if delete_endpoint: 
clients["endpoint"].delete_endpoint(name=endpoint_id) except Exception as e: print(e) # Delete the training pipeline using the Vertex AI fully qualified identifier for the training pipeline try: if delete_pipeline: clients["pipeline"].delete_training_pipeline(name=training_pipeline_id) except Exception as e: print(e) # Delete the batch job using the Vertex AI fully qualified identifier for the batch job try: if delete_batchjob: clients["job"].delete_batch_prediction_job(name=batch_job_id) except Exception as e: print(e) if delete_bucket and "BUCKET_NAME" in globals(): ! gsutil rm -r gs://$BUCKET_NAME """ Explanation: Example output: {} Cleaning up To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial. End of explanation """
Hyperparticle/graph-nlu
notebooks/dynamic_memory_3.ipynb
mit
import pandas as pd import numpy as np import nltk from sklearn.metrics import accuracy_score from neo4j.v1 import GraphDatabase, basic_auth from collections import defaultdict refs_utts = pd.read_pickle('resources/utts_refs.pkl') props = pd.read_pickle('resources/restaurants_props.pkl') len(refs_utts), len(props) refs_utts[:5] props[:5] """ Explanation: Memory Representation in Dialogue Systems (Part 3) Under construction, will update with explanations when finished. Import End of explanation """ stemmer = nltk.stem.snowball.EnglishStemmer() def stem(sentence): return [stemmer.stem(w) for w in sentence] test = pd.DataFrame() test['text'] = [stem(s) for s in refs_utts.text] test['frame'] = [tuple(stem(f.split()[1:])) for f in refs_utts.bot] len(test) # Remove poorly formatted frames test = test[test.frame.map(len) == 3] len(test) test[:5] knowledge = pd.DataFrame() knowledge['restaurant'] = props.rname.copy() knowledge['key'] = [stemmer.stem(s) for s in props.attr_key] knowledge['value'] = [stemmer.stem(s) for s in props.attr_value] knowledge[:5] # A dictionary of keys to the list of values they can take # In this instance, keys form mutually exclusive lists of values types = knowledge[['key', 'value']] \ .groupby('key') \ .aggregate(lambda x: tuple(set(x))) \ .reset_index() \ .set_index('key') \ .value \ .to_dict() types['r_cuisin'][:5] types['r_locat'] types['r_price'] """ Explanation: Process Text End of explanation """ # Create a neo4j session driver = GraphDatabase.driver('bolt://localhost:7687', auth=basic_auth('neo4j', 'neo4j')) # WARNING: This will clear the database when run! 
def reset_db(): session = driver.session() session.run('MATCH (n) DETACH DELETE n') reset_db() session = driver.session() for i,row in knowledge.iterrows(): subject, relation, obj = row.restaurant, row.key, row.value session.run(''' MERGE (s:SUBJECT {name: $subject}) MERGE (o:OBJECT {name: $obj}) MERGE (s)-[r:RELATION {name: $relation}]->(o) ''', { 'subject': subject, 'relation': relation, 'obj': obj }) """ Explanation: Create Knowledge Graph End of explanation """ dont_know = tuple(types.keys()) dont_know base_predicted = list(dont_know) * len(test) base_actual = [w for frame in test.frame for w in frame] accuracy_score(base_actual, base_predicted) """ Explanation: Test Baseline The baseline accuracy is the slot accuracy, calculated by the assumption of not knowing any frame values for any of the sentences. End of explanation """ # Cache properties from DB # Running this query will obtain all properties at this point in time def get_properties(): session = driver.session() return session.run(''' MATCH ()-[r:RELATION]->(o:OBJECT) RETURN collect(distinct o.name) AS properties ''').single()['properties'] # def get_types(): # session = driver.session() # result = session.run(''' # MATCH ()-[r:RELATION]->(o:OBJECT) # RETURN collect(distinct [r.name, o.name]) AS pair # ''').single()[0] # g_types = defaultdict(lambda: []) # for k,v in result: # g_types[k].append(v) # return g_types properties = set(get_properties()) # Hotword listener def is_hotword(word): return word in properties is_hotword('british'), is_hotword('python') # Issue DB queries def find_slot(prop): return session.run(''' MATCH (s:SUBJECT)-[r:RELATION]->(o:OBJECT {name:$name}) RETURN collect(distinct [r.name, o.name]) AS properties ''', { 'name': prop }) def extract(result): return result.single()['properties'][0] session = driver.session() extract(find_slot('west')) session = driver.session() all_slots = [[find_slot(word) for word in sentence if is_hotword(word)] for sentence in test.text] extracted_slots 
= [[tuple(extract(slot)) for slot in slots] for slots in all_slots] test['slots'] = extracted_slots def to_frame(slots): frame = list(dont_know) s = dict(slots) for i,x in enumerate(frame): if x in s.keys(): frame[i] = s[x] return tuple(frame) test['predicted'] = [to_frame(slot) for slot in test.slots] test[:5] predicted = [w for frame in test.predicted for w in frame] actual = [w for frame in test.frame for w in frame] accuracy_score(actual, predicted) cm = nltk.ConfusionMatrix(actual, predicted) print(cm.pretty_format(sort_by_count=True, show_percents=True, truncate=10)) test[test.text.map(lambda s: 'cheap' in s)] test[test.text.map(lambda s: 'south' in s)]['text'][284] """ Explanation: Accuracy End of explanation """
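To make the frame-filling step concrete, here is a small self-contained rerun of the to_frame logic on hypothetical slots (the key names mirror the stemmed r_cuisin/r_locat/r_price keys used above; the restaurant values are invented for illustration):

```python
# Default frame: every key starts out as "don't know" (the key itself).
# Key names mirror the stemmed dataset keys; the slot values are made up.
dont_know = ('r_cuisin', 'r_locat', 'r_price')

def to_frame(slots):
    frame = list(dont_know)
    s = dict(slots)
    for i, key in enumerate(frame):
        if key in s:
            frame[i] = s[key]
    return tuple(frame)

# Slots extracted from an utterance; the price was never mentioned,
# so that position stays at its "don't know" default.
slots = [('r_cuisin', 'british'), ('r_locat', 'west')]
print(to_frame(slots))  # ('british', 'west', 'r_price')
```

The same idea scales to any number of mutually exclusive slot keys: mentioned slots overwrite their defaults, and unmentioned ones remain as the key name.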
xdnian/pyml
assignments/ex06_ch1113_xdnian.ipynb
mit
%load_ext watermark %watermark -a '' -u -d -v -p numpy,pandas,matplotlib,scipy,sklearn %matplotlib inline # Added version check for recent scikit-learn 0.18 checks from distutils.version import LooseVersion as Version from sklearn import __version__ as sklearn_version """ Explanation: Assignment 6 This assignment has weighting $3.5$. The first question about clustering has 35%, and the second question about tiny image classification has 65%. This is a challenging assignment, so I recommend you start early. Clustering for handwritten digits Supervised learning requires labeled data, which can be expensive to acquire. For example, a dataset with $N$ samples for classification will require manual labeling $N$ times. One way to ameliorate this issue is to perform clustering of the raw data samples first, followed by manual inspection and labeling of only a few samples. Recall that clustering is a form of non-supervised learning, so it does not require any class labels. For example, say we are given a set of scanned hand-written digit images. We can cluster them into 10 groups first, manually inspect and label a few images in each cluster, and propagate the label towards the rest of all (unlabeled) samples in each cluster. The accuracy of such semi-automatic labeling depends on the accuracy of the clustering. If each cluster (0 to 9) corresponds exactly to hand-written digits 0-9, we are fine. Otherwise, we have some mis-labeled data. The goal of this question is to exercise clustering of the scikit-learn digits dataset which has labels, so that we can verify our clustering accuracy. The specifics are as follows. You will be judged by the test accuracy of your code, and quality of descriptions of your method. As a reference, a simple code I (Li-Yi) wrote can achieve about 78% accuracy. Try to beat it as much as you can. 
Training and test data split
We will split the original dataset into training and test datasets
* training for building our clusters
* testing to see if the clusters can predict future data
Accuracy
What is your clustering accuracy (comparing cluster labels with the ground truth labels), and what are the properties of mis-clustered samples?
Data preprocessing
Would the original features (pixels) work well, or do we need further processing, like scaling/standardization or dimensionality reduction, before clustering?
Models and hyper-parameters
Let's focus on k-means clustering, as hierarchical and density-based clustering do not provide the predict() method under scikit-learn. What is the best test performance you can achieve, and with which hyper-parameters (for k-means, the standard scaler, and dimensionality reduction)?
Hint
We have learned Pipeline and GridSearchCV for cross validation and hyper-parameter tuning.
End of explanation
"""
import numpy as np
from sklearn.datasets import load_digits

digits = load_digits()

X = digits.data # data in pixels
y = digits.target # digit labels

print(X.shape)
print(y.shape)
print(np.unique(y))
"""
Explanation: Load data
End of explanation
"""
import matplotlib.pyplot as plt
import pylab as pl

num_rows = 4
num_cols = 5

fig, ax = plt.subplots(nrows=num_rows, ncols=num_cols, sharex=True, sharey=True)
ax = ax.flatten()
for index in range(num_rows*num_cols):
    img = digits.images[index]
    label = digits.target[index]
    ax[index].imshow(img, cmap='Greys', interpolation='nearest')
    ax[index].set_title('digit ' + str(label))

ax[0].set_xticks([])
ax[0].set_yticks([])
plt.tight_layout()
plt.show()
"""
Explanation: Visualize data
End of explanation
"""
if Version(sklearn_version) < '0.18':
    from sklearn.cross_validation import train_test_split
else:
    from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1)

num_training = y_train.shape[0]
num_test = y_test.shape[0]
print('training: ' + str(num_training) + ', test: ' + str(num_test)) import numpy as np # check to see if the data are well distributed among digits for y in [y_train, y_test]: print(np.bincount(y)) """ Explanation: Data sets: training versus test End of explanation """ from sklearn.metrics import accuracy_score, make_scorer def clustering_accuracy_score(y_true, y_pred): # replace this with your code; note that y_pred is just cluster id, not digit id y_cm = y_pred cluster_label_true = np.unique(y_true) cluster_num_true = len(cluster_label_true) cluster_label_cm = np.unique(y_cm) cluster_num_cm = len(cluster_label_cm) scores = np.zeros((cluster_num_cm, cluster_num_true)) for i in range(cluster_num_cm): for j in range(cluster_num_true): y_temp = y_true[y_cm == cluster_label_cm[i]] y_count = len(y_temp) scores[i][j] = accuracy_score(y_true = y_temp, y_pred = cluster_label_true[j] * np.ones(y_count)) best_assign_index = np.zeros(cluster_num_cm) for i in range(cluster_num_cm): best_assign_index[i] = np.argmax(scores[i]) y_predict = np.zeros(len(y_cm)) for i, v in enumerate(y_cm): index = np.where(cluster_label_cm == v)[0][0] y_predict[i] = best_assign_index[index] return accuracy_score(y_true=y_true, y_pred=y_predict) clustering_accuracy = make_scorer(clustering_accuracy_score) # toy case demonstrating the clustering accuracy # this is just a reference to illustrate what this score function is trying to achieve # feel free to design your own as long as you can justify # ground truth class label for samples toy_y_true = np.array([0, 0, 0, 1, 1, 2]) # clustering id for samples toy_y_pred_true = np.array([1, 1, 1, 2, 2, 0]) toy_y_pred_bad1 = np.array([0, 0, 1, 1, 1, 2]) toy_y_pred_bad2 = np.array([2, 2, 1, 0, 0, 0]) toy_accuracy = clustering_accuracy_score(toy_y_true, toy_y_pred_true) print('accuracy', toy_accuracy, ', should be 1') toy_accuracy = clustering_accuracy_score(toy_y_true, toy_y_pred_bad1) print('accuracy', toy_accuracy, ', should be', 5.0/6.0) toy_accuracy = 
clustering_accuracy_score(toy_y_true, toy_y_pred_bad2) print('accuracy', toy_accuracy, ', should be', 4.0/6.0) """ Explanation: Answer We first write a scoring function for clustering so that we can use for GridSearchCV. Take a look at use_scorer under scikit learn. End of explanation """ # your code from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA from sklearn.cluster import KMeans from sklearn.pipeline import Pipeline # construct a pipeline consisting of different components # each entry contains an identifier, and the corresponding component pipe_km = Pipeline([('scl', StandardScaler()), ('pca', PCA()), ('km', KMeans(random_state=0))]) """ Explanation: Build a pipeline with standard scaler, PCA, and clustering. End of explanation """ # your code if Version(sklearn_version) < '0.18': from sklearn.grid_search import GridSearchCV else: from sklearn.model_selection import GridSearchCV param_grid = {'scl__with_mean': [True, False], 'scl__with_std': [True, False], 'pca__n_components': np.array(range(20,40,2)), 'km__n_clusters': np.array(range(10,20)), 'km__init': ['k-means++', 'random']} gs = GridSearchCV(estimator=pipe_km, param_grid=param_grid, scoring=clustering_accuracy, cv=10, n_jobs=1) gs = gs.fit(X_train, y_train) # # below is Li-Yi's dummy code to build a random guess model # import numpy as np # class RandomGuesser: # def __init__(self, num_classes): # self.num_classes = num_classes # def predict(self, X): # y = np.random.randint(low = 0, high = self.num_classes, size = X.shape[0]) # return y best_model = gs.best_estimator_ # replace this with the best model you can build y_cm = best_model.predict(X_test) print('Test accuracy: %.3f' % clustering_accuracy_score(y_true=y_test, y_pred=y_cm)) #print('Test accuracy: %.3f' % best_model.score(X_test, y_test)) """ Explanation: Use GridSearchCV to tune hyper-parameters. 
End of explanation """ # your code cluster_label_true = np.unique(y_test) cluster_num_true = len(cluster_label_true) cluster_label_cm = np.unique(y_cm) cluster_num_cm = len(cluster_label_cm) scores = np.zeros((cluster_num_cm, cluster_num_true)) for i in range(cluster_num_cm): for j in range(cluster_num_true): y_temp = y_test[y_cm == cluster_label_cm[i]] y_count = len(y_temp) scores[i][j] = accuracy_score(y_true = y_temp, y_pred = cluster_label_true[j] * np.ones(y_count)) best_assign_index = np.zeros(cluster_num_cm) for i in range(cluster_num_cm): best_assign_index[i] = np.argmax(scores[i]) y_test_pred = np.zeros(len(y_cm)) for i, v in enumerate(y_cm): index = np.where(cluster_label_cm == v)[0][0] y_test_pred[i] = best_assign_index[index] miscl_img = X_test[y_test != y_test_pred] correct_lab = y_test[y_test != y_test_pred] miscl_lab = y_test_pred[y_test != y_test_pred] num_miscl = np.count_nonzero(y_test != y_test_pred) print("%s out of %s samples are mis-clustered." % (num_miscl, num_test)) fig, ax = plt.subplots(nrows=4, ncols=5, sharex=True, sharey=True) ax = ax.flatten() for i in range(20): img = miscl_img[i].reshape(8, 8) ax[i].imshow(img, cmap='Greys', interpolation='nearest') ax[i].set_title('%d) t: %d p: %d' % (i+1, correct_lab[i], miscl_lab[i])) ax[0].set_xticks([]) ax[0].set_yticks([]) plt.tight_layout() # plt.savefig('./figures/mnist_miscl.png', dpi=300) plt.show() fig, ax = plt.subplots(nrows=4, ncols=5, sharex=True, sharey=True) ax = ax.flatten() for i in range(20, num_miscl): img = miscl_img[i].reshape(8, 8) ax[i-20].imshow(img, cmap='Greys', interpolation='nearest') ax[i-20].set_title('%d) t: %d p: %d' % (i+1, correct_lab[i], miscl_lab[i])) ax[0].set_xticks([]) ax[0].set_yticks([]) plt.tight_layout() # plt.savefig('./figures/mnist_miscl.png', dpi=300) plt.show() """ Explanation: Visualize mis-clustered samples, and provide your explanation. 
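The cluster-to-digit assignment that the scorer above (and the visualization code) performs amounts to majority voting inside each cluster. A compact NumPy sketch of that idea, using toy labels rather than the digits data:

```python
import numpy as np

# Ground-truth classes and arbitrary cluster ids for six toy samples
y_true = np.array([0, 0, 0, 1, 1, 2])
clusters = np.array([5, 5, 5, 9, 9, 7])

# Map each cluster id to the most common true label inside it
mapping = {}
for c in np.unique(clusters):
    labels_in_c = y_true[clusters == c]
    mapping[int(c)] = int(np.bincount(labels_in_c).argmax())

y_pred = np.array([mapping[int(c)] for c in clusters])
print(mapping)                    # {5: 0, 7: 2, 9: 1}
print((y_pred == y_true).mean())  # 1.0
```

Maximizing per-cluster accuracy over candidate labels, as the scorer does, selects exactly this majority label for each cluster.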
End of explanation
"""
# your code
cluster_label_true = np.unique(y_test)
cluster_num_true = len(cluster_label_true)
cluster_label_cm = np.unique(y_cm)
cluster_num_cm = len(cluster_label_cm)

scores = np.zeros((cluster_num_cm, cluster_num_true))
for i in range(cluster_num_cm):
    for j in range(cluster_num_true):
        y_temp = y_test[y_cm == cluster_label_cm[i]]
        y_count = len(y_temp)
        scores[i][j] = accuracy_score(y_true = y_temp, y_pred = cluster_label_true[j] * np.ones(y_count))

best_assign_index = np.zeros(cluster_num_cm)
for i in range(cluster_num_cm):
    best_assign_index[i] = np.argmax(scores[i])

y_test_pred = np.zeros(len(y_cm))
for i, v in enumerate(y_cm):
    index = np.where(cluster_label_cm == v)[0][0]
    y_test_pred[i] = best_assign_index[index]

miscl_img = X_test[y_test != y_test_pred]
correct_lab = y_test[y_test != y_test_pred]
miscl_lab = y_test_pred[y_test != y_test_pred]
num_miscl = np.count_nonzero(y_test != y_test_pred)
print("%s out of %s samples are mis-clustered." % (num_miscl, num_test))

fig, ax = plt.subplots(nrows=4, ncols=5, sharex=True, sharey=True)
ax = ax.flatten()
for i in range(20):
    img = miscl_img[i].reshape(8, 8)
    ax[i].imshow(img, cmap='Greys', interpolation='nearest')
    ax[i].set_title('%d) t: %d p: %d' % (i+1, correct_lab[i], miscl_lab[i]))

ax[0].set_xticks([])
ax[0].set_yticks([])
plt.tight_layout()
# plt.savefig('./figures/mnist_miscl.png', dpi=300)
plt.show()

fig, ax = plt.subplots(nrows=4, ncols=5, sharex=True, sharey=True)
ax = ax.flatten()
for i in range(20, num_miscl):
    img = miscl_img[i].reshape(8, 8)
    ax[i-20].imshow(img, cmap='Greys', interpolation='nearest')
    ax[i-20].set_title('%d) t: %d p: %d' % (i+1, correct_lab[i], miscl_lab[i]))

ax[0].set_xticks([])
ax[0].set_yticks([])
plt.tight_layout()
# plt.savefig('./figures/mnist_miscl.png', dpi=300)
plt.show()
"""
Explanation: your explanation
The reason seems fairly clear: some handwritten digits are genuinely ambiguous or closely resemble other digits, and a few can hardly be recognized even by a human. For example, the 7th mis-clustered digit ('9') is written in such an unusual style that it could easily be confused with other digits; the 19th mis-clustered '3' looks much like a '5'; and the 33rd mis-clustered '3' is almost written as a '7'. In addition, some samples are simply too vague to classify, e.g. the 26th, 28th, 34th, and 37th mis-clustered digits. Errors of this kind are hard to avoid. On the other hand, a strikingly large share of the '9's (nearly one third) were assigned to other digits, especially '1' and '7'.
This is probably because all three of these digits end in a vertical stroke, while '9' appears in several noticeably different writing styles. The problem might be handled better with a larger number of clusters and more computing resources for training.
Tiny image classification
We will use the CIFAR-10 dataset for image object recognition. The dataset consists of 50000 training samples and 10000 test samples in 10 different classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck; see the link above for more information). The goal is to maximize the accuracy of your classifier on the test dataset after it has been optimized on the training dataset. You can use any learning models (supervised or unsupervised) or optimization methods (e.g. search methods for hyper-parameters). The only requirement is that your code can run inside an ipynb file, as usual. Please provide a description of your method, in addition to the code. Your answer will be evaluated not only on the test accuracy but also on the creativity of your methodology and the quality of your explanation/description.
Sample code to get you started
This is a difficult classification task. Sample code based on a simple fully connected neural network built with Keras is provided below. Its test accuracy is about 43%.
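One detail the data preparation relies on is one-hot encoding the class labels (done below with np_utils.to_categorical). The transformation itself, sketched in plain NumPy, is simply:

```python
import numpy as np

def one_hot(labels, num_classes):
    # Each integer label becomes a row with a single 1 at that index
    encoded = np.zeros((len(labels), num_classes))
    encoded[np.arange(len(labels)), labels] = 1
    return encoded

y_toy = np.array([0, 3, 9])   # three CIFAR-10 class indices
ohe = one_hot(y_toy, 10)
print(ohe.shape)              # (3, 10)
print(ohe[1].argmax())        # 3
```

This row-per-sample encoding is what lets a softmax output layer be trained against the labels with a cross-entropy loss.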
End of explanation """ # your code and experimental results # define a training model from keras.models import Sequential model = Sequential() # number of convolutional filters n_filters = 96 # convolution filter size n_conv = 3 # pooling window size n_pool = 2 from keras.layers import Activation from keras.layers.convolutional import Convolution2D, MaxPooling2D from keras.layers import Dropout, Flatten, Dense model.add(Convolution2D( n_filters, n_conv, n_conv, border_mode='valid', # 32x32 rgb channel (3 channel) image input_shape=(32, 32, 3), dim_ordering='tf' )) model.add(Activation('relu')) model.add(Convolution2D(n_filters, n_conv, n_conv)) model.add(Activation('relu')) model.add(Convolution2D(n_filters, n_conv, n_conv)) model.add(Activation('relu')) # apply pooling model.add(MaxPooling2D(pool_size=(n_pool, n_pool))) model.add(Dropout(0.25)) # flatten the data for the 1D layers model.add(Flatten()) # Dense(n_outputs) model.add(Dense(128)) model.add(Activation('relu')) model.add(Dropout(0.5)) # softmax output layer model.add(Dense(10)) model.add(Activation('softmax')) model.compile( loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'] ) # train _ = model.fit(X_train, y_train_ohe, nb_epoch = 50, batch_size = 1024, verbose = True, # turn this on to visualize progress validation_split = 0.1 # 10% of training data for validation per epoch ) # evaluate y_train_pred = model.predict_classes(X_train, verbose=False) print('First few predictions: ', y_train_pred[:3]) train_acc = accuracy_score(y_train, y_train_pred) print('Training accuracy:', train_acc) y_test_pred = model.predict_classes(X_test, verbose=False) test_acc = accuracy_score(y_test, y_test_pred) print('Test accuracy:', test_acc) """ Explanation: Answer End of explanation """
mintcloud/deep-learning
sentiment-rnn/.ipynb_checkpoints/Sentiment RNN Solution-checkpoint.ipynb
mit
import numpy as np
import tensorflow as tf

with open('../sentiment_network/reviews.txt', 'r') as f:
    reviews = f.read()
with open('../sentiment_network/labels.txt', 'r') as f:
    labels = f.read()

reviews[:2000]
"""
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
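The "only the last output matters" idea can be previewed with plain NumPy on toy shapes (the sizes here are illustrative, not the real network's):

```python
import numpy as np

batch_size, n_steps = 4, 7
# Stand-in for per-time-step sigmoid outputs of shape [batch, steps, 1]
outputs = np.random.rand(batch_size, n_steps, 1)

# Keep only the final time step for the prediction and the cost
last_output = outputs[:, -1]
print(last_output.shape)  # (4, 1)
```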
End of explanation
"""
from string import punctuation

all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')

all_text = ' '.join(reviews)
words = all_text.split()

all_text[:2000]

words[:100]
"""
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
"""
from collections import Counter

counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}

reviews_ints = []
for each in reviews:
    reviews_ints.append([vocab_to_int[word] for word in each.split()])
"""
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0. Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
"""
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])

review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
"""
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
"""
# Filter out that review with 0 length
reviews_ints = [each for each in reviews_ints if len(each) > 0]
"""
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
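The pad-and-truncate scheme (zeros on the left, at most seq_len words) can be illustrated on the ['best', 'movie', 'ever'] review from the text, using a toy length of 8 instead of 200 purely for readability:

```python
import numpy as np

review_int = [117, 18, 128]   # 'best movie ever' as integers, per the example in the text
seq_len = 8                   # toy length instead of 200

row = np.zeros(seq_len, dtype=int)
row[-len(review_int):] = np.array(review_int)[:seq_len]
print(row)  # zeros on the left, then 117, 18, 128
```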
End of explanation
"""
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
    features[i, -len(row):] = np.array(row)[:seq_len]

features[:10,:100]
"""
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
"""
split_frac = 0.8

split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]

test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]

print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
      "\nValidation set: \t{}".format(val_x.shape),
      "\nTest set: \t\t{}".format(test_x.shape))
"""
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
"""
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
"""
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2501, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
"""
n_words = len(vocab)

# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
    inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
    labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
"""
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and dropout keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
"""
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300

with graph.as_default():
    embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs_)
"""
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our words here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors.
So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
"""
with graph.as_default():
    # Your basic LSTM cell
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    
    # Add dropout to the cell
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    
    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
    
    # Getting an initial state of all zeros
    initial_state = cell.zero_state(batch_size, tf.float32)
"""
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=&lt;function tanh at 0x109f1ef28&gt;)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships.
Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell: cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers) Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list. So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add dropout to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
"""
with graph.as_default():
    outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
"""
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network. outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
"""
with graph.as_default():
    predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
    cost = tf.losses.mean_squared_error(labels_, predictions)

    optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
"""
Explanation: Output
We only care about the final output; we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
"""
with graph.as_default():
    correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
"""
def get_batches(x, y, batch_size=100):
    n_batches = len(x)//batch_size
    x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
    for ii in range(0, len(x), batch_size):
        yield x[ii:ii+batch_size], y[ii:ii+batch_size]
"""
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation """ epochs = 10 with graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=graph) as sess: sess.run(tf.global_variables_initializer()) iteration = 1 for e in range(epochs): state = sess.run(initial_state) for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1): feed = {inputs_: x, labels_: y[:, None], keep_prob: 0.5, initial_state: state} loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed) if iteration%5==0: print("Epoch: {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Train loss: {:.3f}".format(loss)) if iteration%25==0: val_acc = [] val_state = sess.run(cell.zero_state(batch_size, tf.float32)) for x, y in get_batches(val_x, val_y, batch_size): feed = {inputs_: x, labels_: y[:, None], keep_prob: 1, initial_state: val_state} batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed) val_acc.append(batch_acc) print("Val acc: {:.3f}".format(np.mean(val_acc))) iteration +=1 saver.save(sess, "checkpoints/sentiment.ckpt") """ Explanation: Training Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists. End of explanation """ test_acc = [] with tf.Session(graph=graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('/output/checkpoints')) test_state = sess.run(cell.zero_state(batch_size, tf.float32)) for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1): feed = {inputs_: x, labels_: y[:, None], keep_prob: 1, initial_state: test_state} batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed) test_acc.append(batch_acc) print("Test accuracy: {:.3f}".format(np.mean(test_acc))) """ Explanation: Testing End of explanation """
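The trimming behaviour of the get_batches generator above can be checked outside of TensorFlow, since it is plain Python indexing. A small sketch with made-up toy data (the shapes are chosen only for illustration):

```python
import numpy as np

def get_batches(x, y, batch_size=100):
    # Keep only full batches, then yield consecutive slices
    n_batches = len(x) // batch_size
    x, y = x[:n_batches * batch_size], y[:n_batches * batch_size]
    for ii in range(0, len(x), batch_size):
        yield x[ii:ii + batch_size], y[ii:ii + batch_size]

# Toy data: 250 "reviews" of length 200, with one label each
x = np.zeros((250, 200), dtype=np.int32)
y = np.zeros(250, dtype=np.int32)

batches = list(get_batches(x, y, batch_size=100))
n_yielded = len(batches)                 # the last 50 rows are dropped
first_batch_shape = batches[0][0].shape  # (100, 200)
```

With 250 rows and batch_size=100, the generator yields exactly two full (100, 200) batches and silently discards the remaining 50 rows, which is what the trimming step is for.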
dtamayo/rebound
ipython_examples/EscapingParticles.ipynb
gpl-3.0
import rebound import numpy as np def setupSimulation(): sim = rebound.Simulation() sim.add(m=1., hash="Sun") sim.add(x=0.4,vx=5., hash="Mercury") sim.add(a=0.7, hash="Venus") sim.add(a=1., hash="Earth") sim.move_to_com() return sim sim = setupSimulation() sim.status() """ Explanation: Escaping particles Sometimes we are not interested in particles that get too far from the central body. Here we will define a radius beyond which we remove particles from the simulation. Let's set up an artificial situation with 3 planets, and the inner one moves radially outward with $v > v_{escape}$. End of explanation """ sim = setupSimulation() # Resets everything sim.exit_max_distance = 50. Noutputs = 1000 times = np.linspace(0,20.*2.*np.pi,Noutputs) xvenus, yvenus = np.zeros(Noutputs), np.zeros(Noutputs) for i,time in enumerate(times): try: sim.integrate(time) except rebound.Escape as error: print(error) for j in range(sim.N): p = sim.particles[j] d2 = p.x*p.x + p.y*p.y + p.z*p.z if d2>sim.exit_max_distance**2: index=j # cache index rather than remove here since our loop would go beyond end of particles array sim.remove(index=index) xvenus[i] = sim.particles[2].x yvenus[i] = sim.particles[2].y print("Went down to {0} particles".format(sim.N)) """ Explanation: Now let's run a simulation for 20 years (in default units where $G=1$, and thus AU, yr/2$\pi$, and $M_\odot$, see Units.ipynb for how to change units), and set up a 50 AU sphere beyond which we remove particles from the simulation. We can do this by setting the exit_max_distance flag of the simulation object. If a particle's distance (from the origin of whatever inertial reference frame chosen) exceeds sim.exit_max_distance, an exception is thrown. If we simply call sim.integrate(), the program will crash due to the unhandled exception when the particle escapes, so we'll create a try-except block to catch the exception. We'll also store the x,y positions of Venus, which we expect to survive. 
End of explanation """ %matplotlib inline import matplotlib.pyplot as plt fig,ax = plt.subplots(figsize=(15,5)) ax.plot(xvenus, yvenus) ax.set_aspect('equal') ax.set_xlim([-2,10]); """ Explanation: So this worked as expected. Now let's plot what we got: End of explanation """ sim = setupSimulation() # Resets everything sim.exit_max_distance = 50. Noutputs = 1000 times = np.linspace(0,20.*2.*np.pi,Noutputs) xvenus, yvenus = np.zeros(Noutputs), np.zeros(Noutputs) for i,time in enumerate(times): try: sim.integrate(time) except rebound.Escape as error: print(error) for j in range(sim.N): p = sim.particles[j] d2 = p.x*p.x + p.y*p.y + p.z*p.z if d2>sim.exit_max_distance**2: index=j # cache index rather than remove here since our loop would go beyond end of particles array sim.remove(index=index) xvenus[i] = sim.particles["Venus"].x yvenus[i] = sim.particles["Venus"].y fig,ax = plt.subplots(figsize=(15,5)) ax.plot(xvenus, yvenus) ax.set_aspect('equal') ax.set_xlim([-2,10]); """ Explanation: This doesn't look right. The problem here is that when we removed particles[1] from the simulation, all the particles got shifted down in the particles array. So following the removal, xvenus all of a sudden started getting populated by the values for Earth (the new sim.particles[2]). A more robust way to access particles is using hashes (see UniquelyIdentifyingParticles.ipynb) End of explanation """
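The failure mode described above — array indices shifting after sim.remove — can be illustrated with a plain Python sketch (made-up particle names, no REBOUND required): positional access silently points at the wrong object after a removal, while key-based access, which is the role played by particle hashes, does not.

```python
# Particles stored positionally, as in sim.particles
particles = ["Sun", "Mercury", "Venus", "Earth"]

venus_index = particles.index("Venus")   # 2 before any removal
particles.remove("Mercury")              # everything after index 1 shifts down

shifted = particles[venus_index]         # now points at "Earth", not "Venus"

# Key-based lookup (what hashes provide) is unaffected by removals
by_name = {name: name for name in ["Sun", "Mercury", "Venus", "Earth"]}
del by_name["Mercury"]
still_venus = by_name["Venus"]
```

This is exactly why the second run above accesses sim.particles["Venus"] by hash rather than sim.particles[2].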
laurajchang/NPTFit
examples/Example6_Manual_nonPoissonian_Likelihood.ipynb
mit
# Import relevant modules %matplotlib inline %load_ext autoreload %autoreload 2 import numpy as np import healpy as hp import matplotlib.pyplot as plt from NPTFit import nptfit # module for performing scan from NPTFit import create_mask as cm # module for creating the mask from NPTFit import psf_correction as pc # module for determining the PSF correction from NPTFit import dnds_analysis # module for analysing the output from __future__ import print_function """ Explanation: Example 6: Manual evaluation of non-Poissonian Likelihood In this example we show to manually evaluate the non-Poissonian likelihood. This can be used, for example, to interface nptfit with parameter estimation packages other than MultiNest. We also show how to extract the prior cube. We will take the exact same analysis as considered in the previous example, and show the likelihood peaks at exactly the same location for the normalisation of the non-Poissonian template. NB: This example makes use of the Fermi Data, which needs to already be installed. See Example 1 for details. 
End of explanation """ n = nptfit.NPTF(tag='non-Poissonian_Example') fermi_data = np.load('fermi_data/fermidata_counts.npy') fermi_exposure = np.load('fermi_data/fermidata_exposure.npy') n.load_data(fermi_data, fermi_exposure) analysis_mask = cm.make_mask_total(mask_ring = True, inner = 0, outer = 5, ring_b = 90, ring_l = 0) n.load_mask(analysis_mask) iso = np.load('fermi_data/template_iso.npy') n.add_template(iso, 'iso') n.add_poiss_model('iso','$A_\mathrm{iso}$', False, fixed=True, fixed_norm=1.47) n.add_non_poiss_model('iso', ['$A^\mathrm{ps}_\mathrm{iso}$','$n_1$','$n_2$','$S_b$'], [[-6,1],[2.05,30],[-2,1.95]], [True,False,False], fixed_params = [[3,22.]]) pc_inst = pc.PSFCorrection(psf_sigma_deg=0.1812) f_ary = pc_inst.f_ary df_rho_div_f_ary = pc_inst.df_rho_div_f_ary n.configure_for_scan(f_ary=f_ary, df_rho_div_f_ary=df_rho_div_f_ary, nexp=1) """ Explanation: Setup an identical instance of NPTFit to Example 5 Firstly we initialize an instance of nptfit identical to that used in the previous example. End of explanation """ print('Vary A: ', n.ll([-3.52+0.22,2.56,-0.48]), n.ll([-3.52,2.56,-0.48]), n.ll([-3.52-0.24,2.56,-0.48])) print('Vary n1:', n.ll([-3.52,2.56+0.67,-0.48]), n.ll([-3.52,2.56,-0.48]), n.ll([-3.52,2.56-0.37,-0.48])) print('Vary n2:', n.ll([-3.52,2.56,-0.48+1.18]), n.ll([-3.52,2.56,-0.48]), n.ll([-3.52,2.56,-0.48-1.02])) """ Explanation: Evaluate the Likelihood Manually After configuring for the scan, the instance of nptfit.NPTF now has an associated function ll. This function was passed to MultiNest in the previous example, but we can also manually evaluate it. The log likelihood function is called as: ll(theta), where theta is a flattened array of parameters. 
In the case above:
$$ \theta = \left[ \log_{10} \left( A^\mathrm{ps}_\mathrm{iso} \right), n_1, n_2 \right] $$
As an example we can evaluate it at a few points around the best fit parameters:
End of explanation
"""
Avals = np.arange(-5.5,0.5,0.01)
TSvals_A = np.array([2*(n.ll([-3.52,2.56,-0.48])-n.ll([Avals[i],2.56,-0.48])) for i in range(len(Avals))])

plt.plot(Avals,TSvals_A,color='black', lw=1.5)
plt.axvline(-3.52+0.22,ls='dashed',color='black')
plt.axvline(-3.52,ls='dashed',color='black')
plt.axvline(-3.52-0.24,ls='dashed',color='black')
plt.axhline(0,ls='dashed',color='black')
plt.xlim([-4.0,-3.0])
plt.ylim([-5.0,15.0])
plt.xlabel('$A^\mathrm{ps}_\mathrm{iso}$')
plt.ylabel('$\mathrm{TS}$')
plt.show()
"""
Explanation: To make the point clearer we can fix $n_1$ and $n_2$ to their best fit values, and calculate a Test Statistic (TS) array as we vary $\log_{10} \left( A^\mathrm{ps}_\mathrm{iso} \right)$. As shown, the likelihood is maximised approximately where MultiNest told us the best fit point for this parameter was.
End of explanation
"""
n2vals = np.arange(-1.995,1.945,0.01)
TSvals_n2 = np.array([2*(n.ll([-3.52,2.56,-0.48])-n.ll([-3.52,2.56,n2vals[i]])) for i in range(len(n2vals))])

plt.plot(n2vals,TSvals_n2,color='black', lw=1.5)
plt.axvline(-0.48+1.18,ls='dashed',color='black')
plt.axvline(-0.48,ls='dashed',color='black')
plt.axvline(-0.48-1.02,ls='dashed',color='black')
plt.axhline(0,ls='dashed',color='black')
plt.xlim([-2.0,1.5])
plt.ylim([-5.0,15.0])
plt.xlabel('$n_2$')
plt.ylabel('$\mathrm{TS}$')
plt.show()
"""
Explanation: Next we do the same thing for $n_2$. This time we see that this parameter is much more poorly constrained than the value of the normalisation, as the TS is very flat.
NB: it is important not to evaluate breaks exactly at a value of $n=1$. The reason for this is that the analytic form of the likelihood involves $(n-1)^{-1}$.
End of explanation
"""
print(n.prior_cube(cube=[1,1,1],ndim=3))
"""
Explanation: In general $\theta$ will always be a flattened array of the floated parameters. Poisson parameters always occur first, in the order in which they were added (via add_poiss_model), followed by non-Poissonian parameters in the order they were added (via add_non_poiss_model). To be explicit: if we have $m$ Poissonian templates and $n$ non-Poissonian templates with breaks $\ell_n$, then:
$$ \theta = \left[ A_\mathrm{P}^1, \ldots, A_\mathrm{P}^m, A_\mathrm{NP}^1, n_1^1, \ldots, n_{\ell_1+1}^1, S_b^{(1)~1}, \ldots, S_b^{(\ell_1)~1}, \ldots, A_\mathrm{NP}^n, n_1^n, \ldots, n_{\ell_n+1}^n, S_b^{(1)~n}, \ldots, S_b^{(\ell_n)~n} \right] $$
Fixed parameters are deleted from the list, and any parameter entered with a log flat prior is replaced by $\log_{10}$ of itself.
Extract the Prior Cube Manually
To extract the prior cube, we use the internal function prior_cube, as called in the code below. This requires two arguments: 1. cube, the unit cube of dimension equal to the number of floated parameters; and 2. ndim, the number of floated parameters.
End of explanation
"""
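The TS curves computed above all follow the same recipe, $\mathrm{TS}(\theta) = 2\left[\ln \mathcal{L}(\hat{\theta}) - \ln \mathcal{L}(\theta)\right]$. A sketch with a made-up Gaussian log-likelihood standing in for n.ll (not the actual NPTFit likelihood) shows the expected behaviour: the TS vanishes at the best fit and rises to 1 one sigma away.

```python
def loglike(theta, best=-3.52, sigma=0.23):
    # Toy Gaussian stand-in for n.ll, peaked at the best-fit value
    return -0.5 * ((theta - best) / sigma) ** 2

def ts(theta, best=-3.52):
    # Test statistic relative to the best fit
    return 2.0 * (loglike(best) - loglike(theta))

ts_at_best = ts(-3.52)           # 0 at the maximum
ts_one_sigma = ts(-3.52 + 0.23)  # 1 at one sigma, for a Gaussian
```

For the real NPTFit likelihood the curve is not exactly parabolic, which is why the plotted TS for $n_2$ is so flat, but the normalisation of the statistic is the same.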
UWashington-Astro300/Astro300-A17
Python_Introduction.ipynb
mit
print("Hello World!") # lines that begin with a # are treated as comment lines and not executed # print("This line is not printed") print("This line is printed") """ Explanation: A jupyter notebook is a browser-based environment that integrates: A Kernel (python) Text Executable code Plots and images Rendered mathematical equations Cell The basic unit of a jupyter notebook is a cell. A cell can contain any of the above elements. In a notebook, to run a cell of code, hit Shift-Enter. This executes the cell and puts the cursor in the next cell below, or makes a new one if you are at the end. Alternately, you can use: Alt-Enter to force the creation of a new cell unconditionally (useful when inserting new content in the middle of an existing notebook). Control-Enter executes the cell and keeps the cursor in the same cell, useful for quick experimentation of snippets that you don't need to keep permanently. Hello World End of explanation """ g = 3.0 * 2.0 """ Explanation: Create a variable End of explanation """ print(g) """ Explanation: Print out the value of the variable End of explanation """ g """ Explanation: or even easier: End of explanation """ a = 1 b = 2.3 c = 2.3e4 d = True e = "Spam" type(a), type(b), type(c), type(d), type(e) a + b, type(a + b) c + d, type(c + d) # True = 1 a + e str(a) + e """ Explanation: Datatypes In computer programming, a data type is a classification identifying one of various types that data can have. The most common data type we will see in this class are: Integers (int): Integers are the classic cardinal numbers: ... -3, -2, -1, 0, 1, 2, 3, 4, ... Floating Point (float): Floating Point are numbers with a decimal point: 1.2, 34.98, -67,23354435, ... Floating point values can also be expressed in scientific notation: 1e3 = 1000 Booleans (bool): Booleans types can only have one of two values: True or False. In many languages 0 is considered False, and any other value is considered True. 
Strings (str): Strings can be composed of one or more characters: ’a’, ’spam’, ’spam spam eggs and spam’. Usually quotes (’) are used to specify a string. For example ’12’ would refer to the string, not the integer. Collections of Data Types Scalar: A single value of any data type. List: A collection of values. May be mixed data types. (1, 2.34, ’Spam’, True) including lists of lists: (1, (1,2,3), (3,4)) Array: A collection of values. Must be same data type. [1,2,3,4] or [1.2, 4.5, 2.6] or [True, False, False] or [’Spam’, ’Eggs’, ’Spam’] Matrix: A multi-dimensional array: [[1,2], [3,4]] (an array of arrays). End of explanation """ import numpy as np """ Explanation: NumPy (Numerical Python) is the fundamental package for scientific computing with Python. Load the numpy library: End of explanation """ np.pi, np.e """ Explanation: pi and e are built-in constants: End of explanation """ np.random.seed(42) # set the seed - everyone gets the same random numbers x = np.random.randint(1,10,20) # 20 random ints between 1 and 10 x """ Explanation: Here is a link to all Numpy math functions. 
Arrays Each element of the array has a Value The position of each Value is called its Index Our basic unit will be the NumPy array End of explanation """ x[0] # The Value at Index = 0 x[-1] # The last Value in the array x """ Explanation: Indexing End of explanation """ x x[0:4] # first 4 items x[:4] # same x[0:4:2] # first four item, step = 2 x[3::-1] # first four items backwards, step = -1 x[::-1] # Reverse the array x print(x[-5:]) # last 5 elements of the array x """ Explanation: Slices x[start:stop:step] start is the first Index that you want [default = first element] stop is the first Index that you do not want [default = last element] step defines size of step and whether you are moving forwards (positive) or backwards (negative) [default = 1] End of explanation """ x.size # Number of elements in x x.mean() # Average of the elements in x x.sum() # Total of the elements in x x[-5:].sum() # Total of last 5 elements in x x.cumsum() # Cumulative sum x.cumsum()/x.sum() # Cumulative percentage x. 
""" Explanation: There are lots of different methods that can be applied to a NumPy array End of explanation """ ?x.min """ Explanation: Help about a function: End of explanation """ y = x * 2 y sin(x) # need to Numpy's math functions np.sin(x) """ Explanation: NumPy math works over an entire array: End of explanation """ mask1 = np.where(x>5) x, mask1 x[mask1], y[mask1] mask2 = np.where((x>3) & (x<7)) x[mask2] """ Explanation: Masking - The key to fast programs End of explanation """ mask3 = np.where(x >= 8) x[mask3] # Set all values of x that match mask3 to 0 x[mask3] = 0 x mask4 = np.where(x != 0) mask4 #Add 10 to every value of x that matches mask4: x[mask4] += 100 x """ Explanation: Fancy masking End of explanation """ np.random.seed(13) # set the seed - everyone gets the same random numbers z = np.random.randint(1,10,20) # 20 random ints between 1 and 10 z np.sort(z) np.sort(z)[0:4] # Returns the indices that would sort an array np.argsort(z) z, z[np.argsort(z)] maskS = np.argsort(z) z, z[maskS] """ Explanation: Sorting End of explanation """ xx = -1 if xx > 0: print("This number is positive") else: print("This number is NOT positive") xx = 0 if xx > 0: print("This number is positive") elif xx == 0: print("This number is zero") else: print("This number is negative") """ Explanation: Control Flow Like all computer languages, Python supports the standard types of control flows including: IF statements FOR loops End of explanation """ z for value in z: print(value) for idx,val in enumerate(z): print(idx,val) for idx,val in enumerate(z): if (val > 5): z[idx] = 0 for idx,val in enumerate(z): print(idx,val) """ Explanation: For loops are different in python. You do not need to specify the beginning and end values of the loop End of explanation """ np.random.seed(42) BigZ = np.random.random(10000) # 10,000 value array BigZ[:10] # This is slow! 
for Idx,Val in enumerate(BigZ):
    if (Val > 0.5):
        BigZ[Idx] = 0

BigZ[:10]
%%timeit
for Idx,Val in enumerate(BigZ):
    if (Val > 0.5):
        BigZ[Idx] = 0
# Masks are MUCH faster
mask = np.where(BigZ>0.5)
BigZ[mask] = 0
BigZ[:10]
%%timeit -o
mask = np.where(BigZ>0.5)
BigZ[mask] = 0
"""
Explanation: Loops are slow in Python. Do not use them if you do not have to!
End of explanation
"""
def find_f(x,y):
    result = (x ** 2) * np.sin(y)   # assign the variable result the value of the function
    return result                   # return the value of the function to the main program

np.random.seed(42)
array_x = np.random.rand(10) * 10
array_y = np.random.rand(10) * 2.0 * np.pi
array_x, array_y
value_f = find_f(array_x,array_y)
value_f
"""
Explanation: Functions
In computer science, a function (also called a procedure, method, subroutine, or routine) is a portion of code within a larger program that performs a specific task and is relatively independent of the remaining code.
The big advantage of a function is that it breaks a program into smaller, easier to understand pieces. It also makes debugging easier. A function can also be reused in another program.
The basic idea of a function is that it will take various values, do something with them, and return a result. The variables in a function are local. That means that they do not affect anything outside the function.
Below is a simple example of a function that solves the equation: $f(x,y) = x^2 \sin(y)$
In the example the name of the function is find_f (you can name functions whatever you want). The function find_f takes two arguments x and y, and returns the value of the equation to the main program. In the main program a variable named value_f is assigned the value returned by find_f. Notice that in the main program the function find_f is called using the arguments array_x and array_y. Since the variables in the function are local, you do not have to name them x and y in the main program.
End of explanation """ def find_g(z): result = z / np.e return result find_g(value_f) find_g(find_f(array_x,array_y)) """ Explanation: The results of one function can be used as the input to another function End of explanation """ # a new array filled with zeros array_0 = np.zeros(10) array_0 # a new array filled with ones array_1 = np.ones(10) array_1 # a new array filled with evenly spaced values within a given interval array_2 = np.arange(10,20) array_2 # a new array filled with evenly spaced numbers over a specified interval (start, stop, num) array_3 = np.linspace(10,20,5) array_3 # a new array filled with evenly spaced numbers over a log scale. (start, stop, num, base) array_4 = np.logspace(1,2,5,10) array_4 """ Explanation: Creating Arrays Numpy has a wide variety of ways of creating arrays: Array creation routines End of explanation """
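Beyond speed, the mask-based update shown in the timing cells above is exactly equivalent to the explicit loop. A small self-contained check (toy data, not the BigZ array itself):

```python
import numpy as np

np.random.seed(0)
data = np.random.random(1000)

# Loop version: zero out every value above 0.5
looped = data.copy()
for idx, val in enumerate(looped):
    if val > 0.5:
        looped[idx] = 0

# Vectorized version using a mask
masked = data.copy()
masked[np.where(masked > 0.5)] = 0

same = np.array_equal(looped, masked)  # both approaches give the identical array
```

The mask version is the one to prefer: same result, a fraction of the runtime.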
deot95/Tesis
Proyecto de Grado Ingeniería Electrónica/Workspace/RL/TowardsRL/.ipynb_checkpoints/reinforcement_q_learning-checkpoint.ipynb
mit
import gym import math import random import numpy as np import matplotlib import matplotlib.pyplot as plt from collections import namedtuple from itertools import count from copy import deepcopy from PIL import Image from __future__ import print_function, division import torch import torch.nn as nn import torch.optim as optim import torch.autograd as autograd import torch.nn.functional as F import torchvision.transforms as T env = gym.make('CartPole-v0') env = env.unwrapped is_ipython = 'inline' in matplotlib.get_backend() if is_ipython: from IPython import display """ Explanation: Reinforcement Learning (DQN) tutorial Author: Adam Paszke &lt;https://github.com/apaszke&gt;_ This tutorial shows how to use PyTorch to train a Deep Q Learning (DQN) agent on the CartPole-v0 task from the OpenAI Gym &lt;https://gym.openai.com/&gt;__. Task The agent has to decide between two actions - moving the cart left or right - so that the pole attached to it stays upright. You can find an official leaderboard with various algorithms and visualizations at the Gym website &lt;https://gym.openai.com/envs/CartPole-v0&gt;__. .. figure:: /_static/img/cartpole.gif :alt: cartpole cartpole As the agent observes the current state of the environment and chooses an action, the environment transitions to a new state, and also returns a reward that indicates the consequences of the action. In this task, the environment terminates if the pole falls over too far. The CartPole task is designed so that the inputs to the agent are 4 real values representing the environment state (position, velocity, etc.). However, neural networks can solve the task purely by looking at the scene, so we'll use a patch of the screen centered on the cart as an input. Because of this, our results aren't directly comparable to the ones from the official leaderboard - our task is much harder. Unfortunately this does slow down the training, because we have to render all the frames. 
Strictly speaking, we will present the state as the difference between the current screen patch and the previous one. This will allow the agent to take the velocity of the pole into account from one image.
Packages
First, let's import needed packages. Firstly, we need gym &lt;https://gym.openai.com/docs&gt;__ for the environment (Install using pip install gym). We'll also use the following from PyTorch:
neural networks (torch.nn)
optimization (torch.optim)
automatic differentiation (torch.autograd)
utilities for vision tasks (torchvision - a separate package &lt;https://github.com/pytorch/vision&gt;__).
End of explanation
"""
Transition = namedtuple('Transition',
                        ('state', 'action', 'next_state', 'reward'))


class ReplayMemory(object):

    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []
        self.position = 0

    def push(self, *args):
        """Saves a transition."""
        if len(self.memory) < self.capacity:
            self.memory.append(None)
        self.memory[self.position] = Transition(*args)
        self.position = (self.position + 1) % self.capacity

    def sample(self, batch_size):
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)
"""
Explanation: Replay Memory
We'll be using experience replay memory for training our DQN. It stores the transitions that the agent observes, allowing us to reuse this data later. By sampling from it randomly, the transitions that build up a batch are decorrelated. It has been shown that this greatly stabilizes and improves the DQN training procedure.
For this, we're going to need two classes:
Transition - a named tuple representing a single transition in our environment
ReplayMemory - a cyclic buffer of bounded size that holds the transitions observed recently. It also implements a .sample() method for selecting a random batch of transitions for training.
End of explanation
"""
class DQN(nn.Module):

    def __init__(self):
        super(DQN, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2)
        self.bn1 = nn.BatchNorm2d(16)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)
        self.bn2 = nn.BatchNorm2d(32)
        self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2)
        self.bn3 = nn.BatchNorm2d(32)
        self.head = nn.Linear(448, 2)

    def forward(self, x):
        x = F.relu(self.bn1(self.conv1(x)))
        x = F.relu(self.bn2(self.conv2(x)))
        x = F.relu(self.bn3(self.conv3(x)))
        return self.head(x.view(x.size(0), -1))
"""
Explanation: Now, let's define our model. But first, let's quickly recap what a DQN is.
DQN algorithm
Our environment is deterministic, so all equations presented here are also formulated deterministically for the sake of simplicity. In the reinforcement learning literature, they would also contain expectations over stochastic transitions in the environment.
Our aim will be to train a policy that tries to maximize the discounted, cumulative reward $R_{t_0} = \sum_{t=t_0}^{\infty} \gamma^{t - t_0} r_t$, where $R_{t_0}$ is also known as the return. The discount, $\gamma$, should be a constant between $0$ and $1$ that ensures the sum converges. It makes rewards from the uncertain far future less important for our agent than the ones in the near future that it can be fairly confident about.
The main idea behind Q-learning is that if we had a function $Q^*: State \times Action \rightarrow \mathbb{R}$, that could tell us what our return would be, if we were to take an action in a given state, then we could easily construct a policy that maximizes our rewards:
\begin{align}\pi^*(s) = \arg\!\max_a \ Q^*(s, a)\end{align}
However, we don't know everything about the world, so we don't have access to $Q^*$. But, since neural networks are universal function approximators, we can simply create one and train it to resemble $Q^*$.
For our training update rule, we'll use a fact that every $Q$ function for some policy obeys the Bellman equation: \begin{align}Q^{\pi}(s, a) = r + \gamma Q^{\pi}(s', \pi(s'))\end{align} The difference between the two sides of the equality is known as the temporal difference error, $\delta$: \begin{align}\delta = Q(s, a) - (r + \gamma \max_a Q(s', a))\end{align} To minimise this error, we will use the Huber loss &lt;https://en.wikipedia.org/wiki/Huber_loss&gt;__. The Huber loss acts like the mean squared error when the error is small, but like the mean absolute error when the error is large - this makes it more robust to outliers when the estimates of $Q$ are very noisy. We calculate this over a batch of transitions, $B$, sampled from the replay memory: \begin{align}\mathcal{L} = \frac{1}{|B|}\sum_{(s, a, s', r) \ \in \ B} \mathcal{L}(\delta)\end{align} \begin{align}\text{where} \quad \mathcal{L}(\delta) = \begin{cases} \frac{1}{2}{\delta^2} & \text{for } |\delta| \le 1, \ |\delta| - \frac{1}{2} & \text{otherwise.} \end{cases}\end{align} Q-network Our model will be a convolutional neural network that takes in the difference between the current and previous screen patches. It has two outputs, representing $Q(s, \mathrm{left})$ and $Q(s, \mathrm{right})$ (where $s$ is the input to the network). In effect, the network is trying to predict the quality of taking each action given the current input. End of explanation """ resize = T.Compose([T.ToPILImage(), T.Scale(40, interpolation=Image.CUBIC), T.ToTensor()]) # This is based on the code from gym. 
screen_width = 600


def get_cart_location():
    world_width = env.x_threshold * 2
    scale = screen_width / world_width
    return int(env.state[0] * scale + screen_width / 2.0)  # MIDDLE OF CART


def get_screen():
    screen = env.render(mode='rgb_array').transpose(
        (2, 0, 1))  # transpose into torch order (CHW)
    # Strip off the top and bottom of the screen
    screen = screen[:, 160:320]
    view_width = 320
    cart_location = get_cart_location()
    if cart_location < view_width // 2:
        slice_range = slice(view_width)
    elif cart_location > (screen_width - view_width // 2):
        slice_range = slice(-view_width, None)
    else:
        slice_range = slice(cart_location - view_width // 2,
                            cart_location + view_width // 2)
    # Strip off the edges, so that we have a square image centered on a cart
    screen = screen[:, :, slice_range]
    # Convert to float, rescale, convert to torch tensor
    # (this doesn't require a copy)
    screen = np.ascontiguousarray(screen, dtype=np.float32) / 255
    screen = torch.from_numpy(screen)
    # Resize, and add a batch dimension (BCHW)
    return resize(screen).unsqueeze(0)

env.reset()
plt.imshow(get_screen().squeeze(0).permute(1, 2, 0).numpy(),
           interpolation='none')
plt.show()
"""
Explanation: Input extraction
The code below defines utilities for extracting and processing rendered images from the environment. It uses the torchvision package, which makes it easy to compose image transforms. Once you run the cell it will display an example patch that it extracted.
End of explanation """ BATCH_SIZE = 128 GAMMA = 0.999 EPS_START = 0.9 EPS_END = 0.05 EPS_DECAY = 200 USE_CUDA = torch.cuda.is_available() model = DQN() memory = ReplayMemory(10000) optimizer = optim.RMSprop(model.parameters()) if USE_CUDA: model.cuda() class Variable(autograd.Variable): def __init__(self, data, *args, **kwargs): if USE_CUDA: data = data.cuda() super(Variable, self).__init__(data, *args, **kwargs) steps_done = 0 def select_action(state): global steps_done sample = random.random() eps_threshold = EPS_END + (EPS_START - EPS_END) * \ math.exp(-1. * steps_done / EPS_DECAY) steps_done += 1 if sample > eps_threshold: return model(Variable(state, volatile=True)).data.max(1)[1].cpu() else: return torch.LongTensor([[random.randrange(2)]]) episode_durations = [] def plot_durations(): plt.figure(1) plt.clf() durations_t = torch.Tensor(episode_durations) plt.xlabel('Episode') plt.ylabel('Duration') plt.plot(durations_t.numpy()) # Take 100 episode averages and plot them too if len(durations_t) >= 100: means = durations_t.unfold(0, 100, 1).mean(1).view(-1) means = torch.cat((torch.zeros(99), means)) plt.plot(means.numpy()) if is_ipython: display.clear_output(wait=True) display.display(plt.gcf()) """ Explanation: Training Hyperparameters and utilities This cell instantiates our model and its optimizer, and defines some utilities: Variable - this is a simple wrapper around torch.autograd.Variable that will automatically send the data to the GPU every time we construct a Variable. select_action - will select an action accordingly to an epsilon greedy policy. Simply put, we'll sometimes use our model for choosing the action, and sometimes we'll just sample one uniformly. The probability of choosing a random action will start at EPS_START and will decay exponentially towards EPS_END. EPS_DECAY controls the rate of the decay. 
plot_durations - a helper for plotting the durations of episodes, along with an average over the last 100 episodes (the measure used in the official evaluations). The plot will be underneath the cell containing the main training loop, and will update after every episode. End of explanation """ last_sync = 0 def optimize_model(): global last_sync if len(memory) < BATCH_SIZE: return transitions = memory.sample(BATCH_SIZE) # Transpose the batch (see http://stackoverflow.com/a/19343/3343043 for # detailed explanation). batch = Transition(*zip(*transitions)) # Compute a mask of non-final states and concatenate the batch elements non_final_mask = torch.ByteTensor( tuple(map(lambda s: s is not None, batch.next_state))) if USE_CUDA: non_final_mask = non_final_mask.cuda() # We don't want to backprop through the expected action values and volatile # will save us on temporarily changing the model parameters' # requires_grad to False! non_final_next_states = Variable(torch.cat([s for s in batch.next_state if s is not None]), volatile=True) state_batch = Variable(torch.cat(batch.state)) action_batch = Variable(torch.cat(batch.action)) reward_batch = Variable(torch.cat(batch.reward)) # Compute Q(s_t, a) - the model computes Q(s_t), then we select the # columns of actions taken state_action_values = model(state_batch).gather(1, action_batch) # Compute V(s_{t+1}) for all next states. next_state_values = Variable(torch.zeros(BATCH_SIZE)) next_state_values[non_final_mask] = model(non_final_next_states).max(1)[0] # Now, we don't want to mess up the loss with a volatile flag, so let's # clear it. 
After this, we'll just end up with a Variable that has
    # requires_grad=False
    next_state_values.volatile = False
    # Compute the expected Q values
    expected_state_action_values = (next_state_values * GAMMA) + reward_batch

    # Compute Huber loss
    loss = F.smooth_l1_loss(state_action_values, expected_state_action_values)

    # Optimize the model
    optimizer.zero_grad()
    loss.backward()
    for param in model.parameters():
        param.grad.data.clamp_(-1, 1)
    optimizer.step()

""" Explanation: Training loop Finally, the code for training our model. Here, you can find an optimize_model function that performs a single step of the optimization. It first samples a batch, concatenates all the tensors into a single one, computes $Q(s_t, a_t)$ and $V(s_{t+1}) = \max_a Q(s_{t+1}, a)$, and combines them into our loss. By definition we set $V(s) = 0$ if $s$ is a terminal state. End of explanation """

num_episodes = 10
for i_episode in range(num_episodes):
    # Initialize the environment and state
    env.reset()
    last_screen = get_screen()
    current_screen = get_screen()
    state = current_screen - last_screen
    for t in count():
        # Select and perform an action
        action = select_action(state)
        _, reward, done, _ = env.step(action[0, 0])
        reward = torch.Tensor([reward])

        # Observe new state
        last_screen = current_screen
        current_screen = get_screen()
        if not done:
            next_state = current_screen - last_screen
        else:
            next_state = None

        # Store the transition in memory
        memory.push(state, action, next_state, reward)

        # Move to the next state
        state = next_state

        # Perform one step of the optimization (on the target network)
        optimize_model()

        if done:
            episode_durations.append(t + 1)
            plot_durations()
            break

""" Explanation: Below, you can find the main training loop. At the beginning we reset the environment and initialize the state variable. Then, we sample an action, execute it, observe the next screen and the reward (always 1), and optimize our model once. When the episode ends (our model fails), we restart the loop.
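Stripped of tensors, the quantities that optimize_model combines are simple: a one-step TD target and the Huber (smooth L1) penalty. A hedged pure-Python sketch of that arithmetic, with illustrative numbers only:

```python
def huber(pred, target, delta=1.0):
    """Smooth L1 penalty: quadratic near zero, linear in the tails."""
    err = abs(pred - target)
    return 0.5 * err ** 2 if err <= delta else delta * (err - 0.5 * delta)


def td_target(reward, next_value, gamma=0.999, terminal=False):
    # By definition V(s') = 0 when s' is a terminal state
    return reward if terminal else reward + gamma * next_value


target = td_target(reward=1.0, next_value=10.0)  # the discounted one-step target
loss = huber(pred=10.49, target=target)          # small error -> quadratic branch
print(target, loss)
```

The real loop does the same thing batched over 128 transitions, with the non_final_mask playing the role of the terminal flag.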
Below, num_episodes is set small. You should download the notebook and run a lot more episodes. End of explanation """
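One last sanity check on the exploration schedule that select_action uses: the epsilon threshold starts at EPS_START and decays exponentially toward EPS_END as steps_done grows. Computing it standalone:

```python
import math

EPS_START, EPS_END, EPS_DECAY = 0.9, 0.05, 200


def eps_threshold(step):
    return EPS_END + (EPS_START - EPS_END) * math.exp(-step / EPS_DECAY)


print(eps_threshold(0))      # 0.9 at the very first step
print(eps_threshold(200))    # roughly 0.36 after one decay constant
print(eps_threshold(10000))  # effectively pinned at the 0.05 floor
```

With these constants, meaningful exploration only lasts a few hundred steps, which is one reason longer runs are worthwhile.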
phobson/statsmodels
examples/notebooks/pca_fertility_factors.ipynb
bsd-3-clause
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.multivariate.pca import PCA

""" Explanation: Statsmodels Principal Component Analysis Key ideas: Principal component analysis, world bank data, fertility In this notebook, we use principal components analysis (PCA) to analyze the time series of fertility rates in 192 countries, using data obtained from the World Bank. The main goal is to understand how the trends in fertility over time differ from country to country. This is a slightly atypical illustration of PCA because the data are time series. Methods such as functional PCA have been developed for this setting, but since the fertility data are very smooth, there is no real disadvantage to using standard PCA in this case. End of explanation """

data = sm.datasets.fertility.load_pandas().data
data.head()

""" Explanation: The data can be obtained from the World Bank web site, but here we work with a slightly cleaned-up version of the data: End of explanation """

columns = list(map(str, range(1960, 2012)))
data.set_index('Country Name', inplace=True)
dta = data[columns]
dta = dta.dropna()
dta.head()

""" Explanation: Here we construct a DataFrame that contains only the numerical fertility rate data and set the index to the country names. We also drop all the countries with any missing data. End of explanation """

ax = dta.mean().plot(grid=False)
ax.set_xlabel("Year", size=17)
ax.set_ylabel("Fertility rate", size=17);
ax.set_xlim(0, 51)

""" Explanation: There are two ways to use PCA to analyze a rectangular matrix: we can treat the rows as the "objects" and the columns as the "variables", or vice-versa. Here we will treat the fertility measures as "variables" used to measure the countries as "objects".
Thus the goal will be to reduce the yearly fertility rate values to a small number of fertility rate "profiles" or "basis functions" that capture most of the variation over time in the different countries. The mean trend is removed in PCA, but it's worthwhile taking a look at it. It shows that fertility has dropped steadily over the time period covered in this dataset. Note that the mean is calculated using a country as the unit of analysis, ignoring population size. This is also true for the PC analysis conducted below. A more sophisticated analysis might weight the countries, say by population in 1980. End of explanation """

pca_model = PCA(dta.T, standardize=False, demean=True)

""" Explanation: Next we perform the PCA: End of explanation """

fig = pca_model.plot_scree(log_scale=False)

""" Explanation: Based on the eigenvalues, we see that the first PC dominates, with perhaps a small amount of meaningful variation captured in the second and third PC's. End of explanation """

fig, ax = plt.subplots(figsize=(8, 4))
lines = ax.plot(pca_model.factors.ix[:,:3], lw=4, alpha=.6)
ax.set_xticklabels(dta.columns.values[::10])
ax.set_xlim(0, 51)
ax.set_xlabel("Year", size=17)
fig.subplots_adjust(.1, .1, .85, .9)
legend = fig.legend(lines, ['PC 1', 'PC 2', 'PC 3'], loc='center right')
legend.draw_frame(False)

""" Explanation: Next we will plot the PC factors. The dominant factor is monotonically increasing. Countries with a positive score on the first factor will increase faster (or decrease slower) compared to the mean shown above. Countries with a negative score on the first factor will decrease faster than the mean. The second factor is U-shaped with a positive peak at around 1985. Countries with a large positive score on the second factor will have lower than average fertilities at the beginning and end of the data range, but higher than average fertility in the middle of the range.
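To make the role of scores and factors concrete: each country's trajectory is approximately the mean trend plus score-weighted factor curves. A toy illustration with made-up numbers (these are not values from the dataset):

```python
mean_trend = [5.0, 4.5, 4.0, 3.5]    # hypothetical global mean fertility by year
pc1        = [-0.3, -0.1, 0.1, 0.3]  # hypothetical "increasing" factor curve
score      = 2.0                     # a country loading positively on PC 1

# reconstruction: mean plus score times factor, year by year
country = [m + score * f for m, f in zip(mean_trend, pc1)]
print(country)  # starts below the mean's start, ends above its end
```

The positive score tilts the trajectory upward relative to the mean, which is exactly the "decreases slower than the mean" pattern described above.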
End of explanation """ idx = pca_model.loadings.ix[:,0].argsort() """ Explanation: To better understand what is going on, we will plot the fertility trajectories for sets of countries with similar PC scores. The following convenience function produces such a plot. End of explanation """ def make_plot(labels): fig, ax = plt.subplots(figsize=(9,5)) ax = dta.ix[labels].T.plot(legend=False, grid=False, ax=ax) dta.mean().plot(ax=ax, grid=False, label='Mean') ax.set_xlim(0, 51); fig.subplots_adjust(.1, .1, .75, .9) ax.set_xlabel("Year", size=17) ax.set_ylabel("Fertility", size=17); legend = ax.legend(*ax.get_legend_handles_labels(), loc='center left', bbox_to_anchor=(1, .5)) legend.draw_frame(False) labels = dta.index[idx[-5:]] make_plot(labels) """ Explanation: First we plot the five countries with the greatest scores on PC 1. These countries have a higher rate of fertility increase than the global mean (which is decreasing). End of explanation """ idx = pca_model.loadings.ix[:,1].argsort() make_plot(dta.index[idx[-5:]]) """ Explanation: Here are the five countries with the greatest scores on factor 2. These are countries that reached peak fertility around 1980, later than much of the rest of the world, followed by a rapid decrease in fertility. End of explanation """ make_plot(dta.index[idx[:5]]) """ Explanation: Finally we have the countries with the most negative scores on PC 2. These are the countries where the fertility rate declined much faster than the global mean during the 1960's and 1970's, then flattened out. End of explanation """ fig, ax = plt.subplots() pd.tools.plotting.scatter_plot(pca_model.loadings, 'comp_00', 'comp_01', ax=ax) ax.set_xlabel("PC 1", size=17) ax.set_ylabel("PC 2", size=17) dta.index[pca_model.loadings.ix[1,:] > .2].values """ Explanation: We can also look at a scatterplot of the first two principal component scores. 
We see that the variation among countries is fairly continuous, except perhaps that the two countries with highest scores for PC 2 are somewhat separated from the other points. These countries, Oman and Yemen, are unique in having a sharp spike in fertility around 1980. No other country has such a spike. In contrast, the countries with high scores on PC 1 (that have continuously increasing fertility) are part of a continuum of variation. End of explanation """
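The idx[-5:] and idx[:5] selections used earlier rely on argsort returning positions ordered by value; the same idea can be written in plain Python with a hypothetical set of loadings:

```python
loadings = {'A': 0.9, 'B': -1.2, 'C': 0.1, 'D': 2.3, 'E': -0.4}

names = list(loadings)
# argsort: indices that would sort the values from smallest to largest
idx = sorted(range(len(names)), key=lambda i: loadings[names[i]])

lowest = [names[i] for i in idx[:2]]     # most negative scores
highest = [names[i] for i in idx[-2:]]   # most positive scores
print(lowest, highest)  # ['B', 'E'] ['A', 'D']
```

Slicing the tail of an argsort index is a compact way to pick the extreme cases on either end of a principal component.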
josef-pkt/statsmodels
examples/notebooks/markov_regression.ipynb
bsd-3-clause
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt

# NBER recessions
from pandas_datareader.data import DataReader
from datetime import datetime
usrec = DataReader('USREC', 'fred', start=datetime(1947, 1, 1), end=datetime(2013, 4, 1))

""" Explanation: Markov switching dynamic regression models This notebook provides an example of the use of Markov switching models in Statsmodels to estimate dynamic regression models with changes in regime. It follows the examples in the Stata Markov switching documentation, which can be found at http://www.stata.com/manuals14/tsmswitch.pdf. End of explanation """

# Get the federal funds rate data
from statsmodels.tsa.regime_switching.tests.test_markov_regression import fedfunds
dta_fedfunds = pd.Series(fedfunds, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))

# Plot the data
dta_fedfunds.plot(title='Federal funds rate', figsize=(12,3))

# Fit the model
# (a switching mean is the default of the MarkovRegression model)
mod_fedfunds = sm.tsa.MarkovRegression(dta_fedfunds, k_regimes=2)
res_fedfunds = mod_fedfunds.fit()
res_fedfunds.summary()

""" Explanation: Federal funds rate with switching intercept The first example models the federal funds rate as noise around a constant intercept, but where the intercept changes during different regimes. The model is simply: $$r_t = \mu_{S_t} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \sigma^2)$$ where $S_t \in \{0, 1\}$, and the regime transitions according to $$ P(S_t = s_t | S_{t-1} = s_{t-1}) = \begin{bmatrix} p_{00} & p_{10} \\ 1 - p_{00} & 1 - p_{10} \end{bmatrix} $$ We will estimate the parameters of this model by maximum likelihood: $p_{00}, p_{10}, \mu_0, \mu_1, \sigma^2$. The data used in this example can be found at http://www.stata-press.com/data/r14/usmacro.
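A useful identity behind the expected-duration results reported later: if a regime persists from one period to the next with probability $p_{ii}$, its duration is geometric with mean $1/(1 - p_{ii})$. A quick sketch with illustrative persistence probabilities (not the estimates from this model):

```python
def expected_duration(p_stay):
    # mean of a geometric distribution: periods spent before leaving the regime
    return 1.0 / (1.0 - p_stay)


print(expected_duration(0.98))  # about 50 periods
print(expected_duration(0.90))  # about 10 periods
```

Since the data here are quarterly, a persistence probability of 0.98 would correspond to a regime lasting on the order of a decade.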
End of explanation """

res_fedfunds.smoothed_marginal_probabilities[1].plot(
    title='Probability of being in the high regime', figsize=(12,3));

""" Explanation: From the summary output, the mean federal funds rate in the first regime (the "low regime") is estimated to be $3.7$ whereas in the "high regime" it is $9.6$. Below we plot the smoothed probabilities of being in the high regime. The model suggests that the 1980's was a time-period in which a high federal funds rate existed. End of explanation """

print(res_fedfunds.expected_durations)

""" Explanation: From the estimated transition matrix we can calculate the expected duration of a low regime versus a high regime. End of explanation """

# Fit the model
mod_fedfunds2 = sm.tsa.MarkovRegression(
    dta_fedfunds.iloc[1:], k_regimes=2, exog=dta_fedfunds.iloc[:-1])
res_fedfunds2 = mod_fedfunds2.fit()
res_fedfunds2.summary()

""" Explanation: A low regime is expected to persist for about fourteen years, whereas the high regime is expected to persist for only about five years. Federal funds rate with switching intercept and lagged dependent variable The second example augments the previous model to include the lagged value of the federal funds rate. $$r_t = \mu_{S_t} + r_{t-1} \beta_{S_t} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \sigma^2)$$ where $S_t \in \{0, 1\}$, and the regime transitions according to $$ P(S_t = s_t | S_{t-1} = s_{t-1}) = \begin{bmatrix} p_{00} & p_{10} \\ 1 - p_{00} & 1 - p_{10} \end{bmatrix} $$ We will estimate the parameters of this model by maximum likelihood: $p_{00}, p_{10}, \mu_0, \mu_1, \beta_0, \beta_1, \sigma^2$. End of explanation """

res_fedfunds2.smoothed_marginal_probabilities[0].plot(
    title='Probability of being in the high regime', figsize=(12,3));

""" Explanation: There are several things to notice from the summary output: The information criteria have decreased substantially, indicating that this model has a better fit than the previous model.
The interpretation of the regimes, in terms of the intercept, has switched. Now the first regime has the higher intercept and the second regime has a lower intercept. Examining the smoothed probabilities of the high regime state, we now see quite a bit more variability. End of explanation """

print(res_fedfunds2.expected_durations)

""" Explanation: Finally, the expected durations of each regime have decreased quite a bit. End of explanation """

# Get the additional data
from statsmodels.tsa.regime_switching.tests.test_markov_regression import ogap, inf
dta_ogap = pd.Series(ogap, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))
dta_inf = pd.Series(inf, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))

exog = pd.concat((dta_fedfunds.shift(), dta_ogap, dta_inf), axis=1).iloc[4:]

# Fit the 2-regime model
mod_fedfunds3 = sm.tsa.MarkovRegression(
    dta_fedfunds.iloc[4:], k_regimes=2, exog=exog)
res_fedfunds3 = mod_fedfunds3.fit()

# Fit the 3-regime model
np.random.seed(12345)
mod_fedfunds4 = sm.tsa.MarkovRegression(
    dta_fedfunds.iloc[4:], k_regimes=3, exog=exog)
res_fedfunds4 = mod_fedfunds4.fit(search_reps=20)

res_fedfunds3.summary()

res_fedfunds4.summary()

""" Explanation: Taylor rule with 2 or 3 regimes We now include two additional exogenous variables - a measure of the output gap and a measure of inflation - to estimate a switching Taylor-type rule with both 2 and 3 regimes to see which fits the data better. Because the models can often be difficult to estimate, for the 3-regime model we employ a search over starting parameters to improve results, specifying 20 random search repetitions.
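The search_reps option rests on the random-restart idea: draw several starting parameter vectors, optimize from each, and keep the best result. A generic pure-Python sketch of that strategy (this is not statsmodels' internal code, and the objective below is a toy):

```python
import random


def bumpy(x):
    # a toy objective with several local minima
    return (x - 3) ** 2 + (2 * abs(x)) % 1.7


def multi_start_minimize(f, reps, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(reps):
        x = rng.uniform(-10, 10)              # a fresh random starting point
        for _ in range(200):                  # crude fixed-step local descent
            x = min((x - 0.05, x, x + 0.05), key=f)
        if best is None or f(x) < f(best):    # keep the best run so far
            best = x
    return best


x1 = multi_start_minimize(bumpy, reps=1)
x20 = multi_start_minimize(bumpy, reps=20)
print(bumpy(x20) <= bumpy(x1))  # True: extra restarts can only match or improve
```

The same logic applies to the switching-regression likelihood, whose surface can have multiple local maxima, which is why more repetitions make the fit more reliable.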
End of explanation """

fig, axes = plt.subplots(3, figsize=(10,7))

ax = axes[0]
ax.plot(res_fedfunds4.smoothed_marginal_probabilities[0])
ax.set(title='Smoothed probability of a low-interest rate regime')

ax = axes[1]
ax.plot(res_fedfunds4.smoothed_marginal_probabilities[1])
ax.set(title='Smoothed probability of a medium-interest rate regime')

ax = axes[2]
ax.plot(res_fedfunds4.smoothed_marginal_probabilities[2])
ax.set(title='Smoothed probability of a high-interest rate regime')

fig.tight_layout()

""" Explanation: Due to lower information criteria, we might prefer the 3-state model, with an interpretation of low-, medium-, and high-interest rate regimes. The smoothed probabilities of each regime are plotted below. End of explanation """

# Get the S&P 500 absolute returns data
from statsmodels.tsa.regime_switching.tests.test_markov_regression import areturns
dta_areturns = pd.Series(areturns, index=pd.date_range('2004-05-04', '2014-5-03', freq='W'))

# Plot the data
dta_areturns.plot(title='Absolute returns, S&P500', figsize=(12,3))

# Fit the model
mod_areturns = sm.tsa.MarkovRegression(
    dta_areturns.iloc[1:], k_regimes=2, exog=dta_areturns.iloc[:-1],
    switching_variance=True)
res_areturns = mod_areturns.fit()
res_areturns.summary()

""" Explanation: Switching variances We can also accommodate switching variances. In particular, we consider the model $$ y_t = \mu_{S_t} + y_{t-1} \beta_{S_t} + \varepsilon_t \quad \varepsilon_t \sim N(0, \sigma_{S_t}^2) $$ We use maximum likelihood to estimate the parameters of this model: $p_{00}, p_{10}, \mu_0, \mu_1, \beta_0, \beta_1, \sigma_0^2, \sigma_1^2$. The application is to absolute returns on stocks, where the data can be found at http://www.stata-press.com/data/r14/snp500. End of explanation """

res_areturns.smoothed_marginal_probabilities[0].plot(
    title='Probability of being in a low-variance regime', figsize=(12,3));

""" Explanation: The first regime is a low-variance regime and the second regime is a high-variance regime.
Below we plot the probabilities of being in the low-variance regime. Between 2008 and 2012 there does not appear to be a clear indication of one regime guiding the economy. End of explanation """
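Intuition for how the switching-variance model classifies observations: a given residual is compared under the two normal densities, and the regime whose $\sigma$ makes it more likely wins. A toy sketch with illustrative parameters (not the estimates from this model):

```python
import math


def normal_pdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))


sigma_low, sigma_high = 0.5, 2.0  # illustrative regime standard deviations


def more_likely_regime(residual):
    low = normal_pdf(residual, sigma=sigma_low)
    high = normal_pdf(residual, sigma=sigma_high)
    return 'low-variance' if low > high else 'high-variance'


print(more_likely_regime(0.2))  # small residuals point to the low-variance regime
print(more_likely_regime(3.0))  # large residuals point to the high-variance regime
```

The full model additionally weighs in the transition probabilities, which is what smooths the regime assignments over time rather than deciding each week in isolation.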
metpy/MetPy
v0.9/_downloads/7dd7941230ab04d65d899c66ed400ef4/xarray_tutorial.ipynb
bsd-3-clause
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import xarray as xr

# Any import of metpy will activate the accessors
import metpy.calc as mpcalc
from metpy.testing import get_test_data

""" Explanation: xarray with MetPy Tutorial xarray <http://xarray.pydata.org/>_ is a powerful Python package that provides N-dimensional labeled arrays and datasets following the Common Data Model. While the process of integrating xarray features into MetPy is ongoing, this tutorial demonstrates how xarray can be used within the current version of MetPy. MetPy's integration primarily works through accessors which allow simplified projection handling and coordinate identification. Unit and calculation support is currently available in a limited fashion, but should be improved in future versions. End of explanation """

# Open the netCDF file as a xarray Dataset
data = xr.open_dataset(get_test_data('irma_gfs_example.nc', False))

# View a summary of the Dataset
print(data)

""" Explanation: Getting Data While xarray can handle a wide variety of n-dimensional data (essentially anything that can be stored in a netCDF file), a common use case is working with model output. Such model data can be obtained from a THREDDS Data Server using the siphon package, but for this tutorial, we will use an example subset of GFS data from Hurricane Irma (September 5th, 2017).
data_var = data.metpy.parse_cf('Temperature_isobaric') # To rename variables, supply a dictionary between old and new names to the rename method data.rename({ 'Vertical_velocity_pressure_isobaric': 'omega', 'Relative_humidity_isobaric': 'relative_humidity', 'Temperature_isobaric': 'temperature', 'u-component_of_wind_isobaric': 'u', 'v-component_of_wind_isobaric': 'v', 'Geopotential_height_isobaric': 'height' }, inplace=True) """ Explanation: Preparing Data To make use of the data within MetPy, we need to parse the dataset for projection and coordinate information following the CF conventions. For this, we use the data.metpy.parse_cf() method, which will return a new, parsed DataArray or Dataset. Additionally, we rename our data variables for easier reference. End of explanation """ data['isobaric1'].metpy.convert_units('hPa') data['isobaric3'].metpy.convert_units('hPa') """ Explanation: Units MetPy's DataArray accessor has a unit_array property to obtain a pint.Quantity array of just the data from the DataArray (metadata is removed) and a convert_units method to convert the the data from one unit to another (keeping it as a DataArray). For now, we'll just use convert_units to convert our pressure coordinates to hPa. End of explanation """ # Get multiple coordinates (for example, in just the x and y direction) x, y = data['temperature'].metpy.coordinates('x', 'y') # If we want to get just a single coordinate from the coordinates method, we have to use # tuple unpacking because the coordinates method returns a generator vertical, = data['temperature'].metpy.coordinates('vertical') # Or, we can just get a coordinate from the property time = data['temperature'].metpy.time # To verify, we can inspect all their names print([coord.name for coord in (x, y, vertical, time)]) """ Explanation: Coordinates You may have noticed how we directly accessed the vertical coordinates above using their names. 
However, in general, if we are working with a particular DataArray, we don't have to worry about that since MetPy is able to parse the coordinates and so obtain a particular coordinate type directly. There are two ways to do this: Use the data_var.metpy.coordinates method Use the data_var.metpy.x, data_var.metpy.y, data_var.metpy.vertical, data_var.metpy.time properties The valid coordinate types are: x y vertical time (Both approaches and all four types are shown below) End of explanation """ data_crs = data['temperature'].metpy.cartopy_crs print(data_crs) """ Explanation: Projections Getting the cartopy coordinate reference system (CRS) of the projection of a DataArray is as straightforward as using the data_var.metpy.cartopy_crs property: End of explanation """ data_globe = data['temperature'].metpy.cartopy_globe print(data_globe) """ Explanation: The cartopy Globe can similarly be accessed via the data_var.metpy.cartopy_globe property: End of explanation """ lat, lon = xr.broadcast(y, x) f = mpcalc.coriolis_parameter(lat) dx, dy = mpcalc.lat_lon_grid_deltas(lon, lat, initstring=data_crs.proj4_init) heights = data['height'].loc[time[0]].loc[{vertical.name: 500.}] u_geo, v_geo = mpcalc.geostrophic_wind(heights, f, dx, dy) print(u_geo) print(v_geo) """ Explanation: Calculations Most of the calculations in metpy.calc will accept DataArrays by converting them into their corresponding unit arrays. 
While this may often work without any issues, we must keep in mind that because the calculations are working with unit arrays and not DataArrays: The calculations will return unit arrays rather than DataArrays Broadcasting must be taken care of outside of the calculation, as it would only recognize dimensions by order, not name As an example, we calculate geostrophic wind at 500 hPa below: End of explanation """

lat, lon = xr.broadcast(y, x)
f = mpcalc.coriolis_parameter(lat)
dx, dy = mpcalc.lat_lon_grid_deltas(lon, lat, initstring=data_crs.proj4_init)
heights = data['height'].loc[time[0]].loc[{vertical.name: 500.}]
u_geo, v_geo = mpcalc.geostrophic_wind(heights, f, dx, dy)
print(u_geo)
print(v_geo)

""" Explanation: Also, a limited number of calculations directly support xarray DataArrays or Datasets (they can accept and return xarray objects). Right now, this includes Derivative functions first_derivative second_derivative gradient laplacian Cross-section functions cross_section_components normal_component tangential_component absolute_momentum More details can be found by looking at the documentation for the specific function of interest. There is also the special case of the helper function, grid_deltas_from_dataarray, which takes a DataArray input, but returns unit arrays for use in other calculations.
We could rewrite the above geostrophic wind example using this helper function as follows: End of explanation """ # A very simple example example of a plot of 500 hPa heights data['height'].loc[time[0]].loc[{vertical.name: 500.}].plot() plt.show() # Let's add a projection and coastlines to it ax = plt.axes(projection=ccrs.LambertConformal()) ax._hold = True # Work-around for CartoPy 0.16/Matplotlib 3.0.0 incompatibility data['height'].loc[time[0]].loc[{vertical.name: 500.}].plot(ax=ax, transform=data_crs) ax.coastlines() plt.show() # Or, let's make a full 500 hPa map with heights, temperature, winds, and humidity # Select the data for this time and level data_level = data.loc[{vertical.name: 500., time.name: time[0]}] # Create the matplotlib figure and axis fig, ax = plt.subplots(1, 1, figsize=(12, 8), subplot_kw={'projection': data_crs}) # Plot RH as filled contours rh = ax.contourf(x, y, data_level['relative_humidity'], levels=[70, 80, 90, 100], colors=['#99ff00', '#00ff00', '#00cc00']) # Plot wind barbs, but not all of them wind_slice = slice(5, -5, 5) ax.barbs(x[wind_slice], y[wind_slice], data_level['u'].metpy.unit_array[wind_slice, wind_slice].to('knots'), data_level['v'].metpy.unit_array[wind_slice, wind_slice].to('knots'), length=6) # Plot heights and temperature as contours h_contour = ax.contour(x, y, data_level['height'], colors='k', levels=range(5400, 6000, 60)) h_contour.clabel(fontsize=8, colors='k', inline=1, inline_spacing=8, fmt='%i', rightside_up=True, use_clabeltext=True) t_contour = ax.contour(x, y, data_level['temperature'], colors='xkcd:deep blue', levels=range(248, 276, 2), alpha=0.8, linestyles='--') t_contour.clabel(fontsize=8, colors='xkcd:deep blue', inline=1, inline_spacing=8, fmt='%i', rightside_up=True, use_clabeltext=True) # Add geographic features ax.add_feature(cfeature.LAND.with_scale('50m'), facecolor=cfeature.COLORS['land']) ax.add_feature(cfeature.OCEAN.with_scale('50m'), facecolor=cfeature.COLORS['water']) 
ax.add_feature(cfeature.STATES.with_scale('50m'), edgecolor='#c7c783', zorder=0) ax.add_feature(cfeature.LAKES.with_scale('50m'), facecolor=cfeature.COLORS['water'], edgecolor='#c7c783', zorder=0) # Set a title and show the plot ax.set_title(('500 hPa Heights (m), Temperature (K), Humidity (%) at ' + time[0].dt.strftime('%Y-%m-%d %H:%MZ'))) plt.show() """ Explanation: Plotting Like most meteorological data, we want to be able to plot these data. DataArrays can be used like normal numpy arrays in plotting code, which is the recommended process at the current point in time, or we can use some of xarray's plotting functionality for quick inspection of the data. (More detail beyond the following can be found at xarray's plotting reference &lt;http://xarray.pydata.org/en/stable/plotting.html&gt;_.) End of explanation """
mne-tools/mne-tools.github.io
0.18/_downloads/012b7ba30b03ebda4c3419b2e4f5161a/plot_ssp_projs_sensitivity_map.ipynb
bsd-3-clause
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr> # # License: BSD (3-clause) import matplotlib.pyplot as plt from mne import read_forward_solution, read_proj, sensitivity_map from mne.datasets import sample print(__doc__) data_path = sample.data_path() subjects_dir = data_path + '/subjects' fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif' ecg_fname = data_path + '/MEG/sample/sample_audvis_ecg-proj.fif' fwd = read_forward_solution(fname) projs = read_proj(ecg_fname) # take only one projection per channel type projs = projs[::2] # Compute sensitivity map ssp_ecg_map = sensitivity_map(fwd, ch_type='grad', projs=projs, mode='angle') """ Explanation: Sensitivity map of SSP projections This example shows the sources that have a forward field similar to the first SSP vector correcting for ECG. End of explanation """ plt.hist(ssp_ecg_map.data.ravel()) plt.show() args = dict(clim=dict(kind='value', lims=(0.2, 0.6, 1.)), smoothing_steps=7, hemi='rh', subjects_dir=subjects_dir) ssp_ecg_map.plot(subject='sample', time_label='ECG SSP sensitivity', **args) """ Explanation: Show sensitivity map End of explanation """
csc-training/python-introduction
notebooks/examples/3 - Functions.ipynb
mit
def my_function(arg_one, arg_two, optional_1=6, optional_2="seven"):
    return " ".join([str(arg_one), str(arg_two), str(optional_1), str(optional_2)])

print(my_function("a", "b"))
print(my_function("a", "b", optional_2="eight"))
#go ahead and try out different components

""" Explanation: Functions Functions and function arguments Functions are the building blocks of writing software. If a function is associated with an object and its data, it is called a method. Functions are defined using the keyword def. There are two types of arguments * regular arguments, which must always be given when calling the function * keyword arguments, that have a default value that can be overridden if desired Values are returned using the return keyword. If no return is defined, the default return value of all functions and methods is None, which is the null object in Python. End of explanation """

def count_args(*args, **kwargs):
    print("i was called with " + str(len(args)) + " arguments and " +
          str(len(kwargs)) + " keyword arguments")

count_args(1, 2, 3, 4, 5, foo=1, bar=2)

""" Explanation: Python has special syntax for catching an arbitrary number of parameters. For regular parameters it is a variable with one asterisk * and for keyword parameters it is a variable with two asterisks. It is conventional to name these *args and **kwargs, but this is not required. End of explanation """

def random():
    """
    Always the number 4.

    Chosen by fair dice roll. Guaranteed to be random.
    """
    return 4

""" Explanation: The length of sequences can be checked using the built-in len() function. It is standard practice to document a function using docstrings. A docstring is just a simple triple-quoted string immediately after the function definition. It is also possible to have docstrings in the beginning of a source code file and after a class definition.
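A common use of the *args/**kwargs syntax described above is writing small helpers that accept any mix of arguments. For instance (an illustrative helper, not part of the lesson's code):

```python
def tag(first, *args, sep=" ", **kwargs):
    """Join every positional argument, then append any keyword pairs."""
    parts = [str(first)] + [str(a) for a in args]
    parts += ["{}={}".format(k, v) for k, v in kwargs.items()]
    return sep.join(parts)


print(tag("call", 1, 2, mode="fast"))           # call 1 2 mode=fast
print(tag("call", 1, 2, sep="|", mode="fast"))  # call|1|2|mode=fast
```

Note that sep is consumed by the named parameter and never lands in kwargs, while any other keyword argument does.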
End of explanation """

def print_dashes():
    print("---")

def print_asterisks():
    print("***")

def pretty_print(string, function):
    function()
    print(string)
    function()

pretty_print("hello", print_dashes)
pretty_print("hey", print_asterisks)

""" Explanation: Functions as parameters Functions are first-class citizens in Python, which means that they can be e.g. passed to other functions. This is the first step into the world of functional programming, an elegant weapon for a more civilized age. End of explanation """

dictionaries = [
    {"name": "Jack", "age": 35, "telephone": "555-1234"},
    {"name": "Jane", "age": 40, "telephone": "555-3331"},
    {"name": "Joe", "age": 20, "telephone": "555-8765"}
]

""" Explanation: Extra: Lambda When we use the keyword def we are making a named function. Sometimes we want a simple function to use once without binding it to any name. Consider the following data structure. End of explanation """

def get_age(x):
    return x["age"]

dictionaries.sort(key=get_age)
dictionaries

""" Explanation: Now if we want to sort it using Python's built-in sort() function the sort won't know which attribute to base the sorting on. Fortunately the sort() function takes a named parameter called key which is a function to be called on each item in the list. The return value will be used for the name. (Python's sort() sorts the list in-place. If you want to keep the list unmodified use sorted()) End of explanation """

dictionaries.sort(key=lambda x: x["age"], reverse=True)
dictionaries

""" Explanation: This is all nice and well, but now you have a function called get_age that you don't intend to use a second time. An alternative way to give this would be using a lambda expression. End of explanation """
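As noted above, sorted() is the non-mutating alternative to sort(); it takes the same key (including a lambda) and returns a new list. A quick sketch with the same kind of data:

```python
people = [{"name": "Jack", "age": 35},
          {"name": "Jane", "age": 40},
          {"name": "Joe", "age": 20}]

by_age = sorted(people, key=lambda p: p["age"])
print([p["name"] for p in by_age])  # ['Joe', 'Jack', 'Jane']
print(people[0]["name"])            # Jack -- the original order is untouched
```

Use sorted() when other code still depends on the original ordering, and sort() when mutating in place is fine.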
M-R-Houghton/euroscipy_2015
scikit_image/lectures/3_morphological_operations.ipynb
mit
import numpy as np from matplotlib import pyplot as plt, cm import skdemo plt.rcParams['image.cmap'] = 'cubehelix' plt.rcParams['image.interpolation'] = 'none' image = np.array([[0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]], dtype=np.uint8) plt.imshow(image); """ Explanation: Morphological operations Morphology is the study of shapes. In image processing, some simple operations can get you a long way. The first things to learn are erosion and dilation. In erosion, we look at a pixel’s local neighborhood and replace the value of that pixel with the minimum value of that neighborhood. In dilation, we instead choose the maximum. End of explanation """ from skimage import morphology sq = morphology.square(width=3) dia = morphology.diamond(radius=1) print(sq) print(dia) """ Explanation: The documentation for scikit-image's morphology module is here. Importantly, we must use a structuring element, which defines the local neighborhood of each pixel. To get every neighbor (up, down, left, right, and diagonals), use morphology.square; to avoid diagonals, use morphology.diamond: End of explanation """ skdemo.imshow_all(image, morphology.erosion(image, sq), shape=(1, 2)) """ Explanation: The central value of the structuring element represents the pixel being considered, and the surrounding values are the neighbors: a 1 value means that pixel counts as a neighbor, while a 0 value does not. 
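Stripped of scikit-image, the min/max rule can be sketched in plain 1-D Python; the helper functions here are illustrative, not part of the lecture's code (2-D works the same way, pixel by pixel):

```python
def erode(row):
    # each pixel becomes the minimum of itself and its immediate neighbors
    return [min(row[max(i - 1, 0):i + 2]) for i in range(len(row))]


def dilate(row):
    # each pixel becomes the maximum of itself and its immediate neighbors
    return [max(row[max(i - 1, 0):i + 2]) for i in range(len(row))]


row = [0, 0, 1, 1, 1, 0, 0]
print(erode(row))   # [0, 0, 0, 1, 0, 0, 0] -- the bright run shrinks
print(dilate(row))  # [0, 1, 1, 1, 1, 1, 0] -- the bright run grows
```

Running erode followed by dilate on this row recovers the original bright run, which previews the "opening" operation introduced below.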
So: End of explanation """ skdemo.imshow_all(image, morphology.dilation(image, sq)) """ Explanation: and End of explanation """ skdemo.imshow_all(image, morphology.dilation(image, dia)) """ Explanation: and End of explanation """ image = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 1, 1, 1, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 0, 0, 1, 0, 0], [0, 0, 1, 1, 1, 0, 0, 1, 0, 0], [0, 0, 1, 1, 1, 0, 0, 1, 0, 0], [0, 0, 1, 1, 1, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 1, 1, 1, 0, 0], [0, 0, 1, 1, 1, 1, 1, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], np.uint8) plt.imshow(image); """ Explanation: Erosion and dilation can be combined into two slightly more sophisticated operations, opening and closing. Here's an example: End of explanation """ skdemo.imshow_all(image, morphology.opening(image, sq)) # erosion -> dilation skdemo.imshow_all(image, morphology.closing(image, sq)) # dilation -> erosion """ Explanation: What happens when we run an erosion followed by a dilation of this image? What about the reverse? Try to imagine the operations in your head before trying them out below. End of explanation """ from skimage import data, color hub = color.rgb2gray(data.hubble_deep_field()[350:450, 90:190]) plt.imshow(hub); """ Explanation: Exercise: use morphological operations to remove noise from a binary image. End of explanation """ %reload_ext load_style %load_style ../themes/tutorial.css """ Explanation: Remove the smaller objects to retrieve the large galaxy. End of explanation """
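For the exercise above, here is a dependency-free sketch (added for illustration; it is not part of the original lecture, which uses `skimage.morphology`) of why opening, i.e. erosion followed by dilation, removes small objects: with a 3x3 square element, any object too small to contain a fully interior pixel is erased by the erosion and never brought back by the dilation.

```python
# Minimal binary erosion/dilation with a 3x3 square structuring element,
# using in-bounds neighbours only at the image border.

def erode(img):
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # pixel survives only if every in-bounds 3x3 neighbour is 1
            out[i][j] = min(img[r][c]
                            for r in range(max(i - 1, 0), min(i + 2, rows))
                            for c in range(max(j - 1, 0), min(j + 2, cols)))
    return out

def dilate(img):
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # pixel turns on if any in-bounds 3x3 neighbour is 1
            out[i][j] = max(img[r][c]
                            for r in range(max(i - 1, 0), min(i + 2, rows))
                            for c in range(max(j - 1, 0), min(j + 2, cols)))
    return out

def opening(img):
    return dilate(erode(img))

noisy = [[0, 0, 0, 0, 0, 0, 0],
         [0, 1, 0, 0, 0, 0, 0],   # isolated noise pixel
         [0, 0, 0, 1, 1, 1, 0],
         [0, 0, 0, 1, 1, 1, 0],   # a 3x3 object: has an interior pixel
         [0, 0, 0, 1, 1, 1, 0],
         [0, 0, 0, 0, 0, 0, 0]]

cleaned = opening(noisy)
```

The 3x3 block has one pixel whose whole neighbourhood is 1, so erosion keeps it and dilation regrows the block; the lone noise pixel has no such interior pixel and vanishes.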
GoogleCloudPlatform/asl-ml-immersion
notebooks/end-to-end-structured/labs/3b_bqml_linear_transform_babyweight.ipynb
apache-2.0
%%bigquery -- LIMIT 0 is a free query; this allows us to check that the table exists. SELECT * FROM babyweight.babyweight_data_train LIMIT 0 %%bigquery -- LIMIT 0 is a free query; this allows us to check that the table exists. SELECT * FROM babyweight.babyweight_data_eval LIMIT 0 """ Explanation: LAB 3b: BigQuery ML Model Linear Feature Engineering/Transform. Learning Objectives Create and evaluate linear model with BigQuery's ML.FEATURE_CROSS Create and evaluate linear model with BigQuery's ML.FEATURE_CROSS and ML.BUCKETIZE Create and evaluate linear model with ML.TRANSFORM Introduction In this notebook, we will create multiple linear models to predict the weight of a baby before it is born, using increasing levels of feature engineering with BigQuery ML. If you need a refresher, you can go back and look at how we made a baseline model in the previous notebook BQML Baseline Model. We will create and evaluate a linear model using BigQuery's ML.FEATURE_CROSS, create and evaluate a linear model using BigQuery's ML.FEATURE_CROSS and ML.BUCKETIZE, and create and evaluate a linear model using BigQuery's ML.TRANSFORM. Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook. Verify tables exist Run the following cells to verify that we previously created the dataset and data tables. If not, go back to lab 1b_prepare_data_babyweight to create them.
End of explanation """ %%bigquery CREATE OR REPLACE MODEL babyweight.model_1 OPTIONS ( MODEL_TYPE="LINEAR_REG", INPUT_LABEL_COLS=["weight_pounds"], L2_REG=0.1, DATA_SPLIT_METHOD="NO_SPLIT") AS SELECT # TODO: Add base features and label ML.FEATURE_CROSS( # TODO: Cross categorical features ) AS gender_plurality_cross FROM babyweight.babyweight_data_train """ Explanation: Lab Task #1: Model 1: Apply the ML.FEATURE_CROSS clause to categorical features BigQuery ML now has ML.FEATURE_CROSS, a pre-processing clause that performs a feature cross with syntax ML.FEATURE_CROSS(STRUCT(features), degree) where features are comma-separated categorical columns and degree is highest degree of all combinations. Create model with feature cross. End of explanation """ %%bigquery SELECT * FROM ML.EVALUATE(MODEL babyweight.model_1, ( SELECT # TODO: Add same features and label as training FROM babyweight.babyweight_data_eval )) %%bigquery SELECT # TODO: Select just the calculated RMSE FROM ML.EVALUATE(MODEL babyweight.model_1, ( SELECT # TODO: Add same features and label as training FROM babyweight.babyweight_data_eval )) """ Explanation: Create two SQL statements to evaluate the model. End of explanation """ %%bigquery CREATE OR REPLACE MODEL babyweight.model_2 OPTIONS ( MODEL_TYPE="LINEAR_REG", INPUT_LABEL_COLS=["weight_pounds"], L2_REG=0.1, DATA_SPLIT_METHOD="NO_SPLIT") AS SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks, ML.FEATURE_CROSS( STRUCT( is_male, ML.BUCKETIZE( # TODO: Bucketize mother_age ) AS bucketed_mothers_age, plurality, ML.BUCKETIZE( # TODO: Bucketize gestation_weeks ) AS bucketed_gestation_weeks ) ) AS crossed FROM babyweight.babyweight_data_train """ Explanation: Lab Task #2: Model 2: Apply the BUCKETIZE Function Bucketize is a pre-processing function that creates "buckets" (e.g bins) - e.g. 
it bucketizes a continuous numerical feature into a string feature with bucket names as the value with syntax ML.BUCKETIZE(feature, split_points) with split_points being an array of numerical points to determine bucket bounds. Apply the BUCKETIZE function within FEATURE_CROSS. Hint: Create a model_2. End of explanation """ %%bigquery SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_2) """ Explanation: Create three SQL statements to EVALUATE the model. Let's now retrieve the training statistics and evaluate the model. End of explanation """ %%bigquery SELECT * FROM ML.EVALUATE(MODEL babyweight.model_2, ( SELECT # TODO: Add same features and label as training FROM babyweight.babyweight_data_eval)) """ Explanation: We now evaluate our model on our eval dataset: End of explanation """ %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL babyweight.model_2, ( SELECT # TODO: Add same features and label as training FROM babyweight.babyweight_data_eval)) """ Explanation: Let's select the mean_squared_error from the evaluation table we just computed and take its square root to obtain the RMSE. End of explanation """ %%bigquery CREATE OR REPLACE MODEL babyweight.model_3 TRANSFORM( # TODO: Add base features and label as you would in select # TODO: Add transformed features as you would in select ) OPTIONS ( MODEL_TYPE="LINEAR_REG", INPUT_LABEL_COLS=["weight_pounds"], L2_REG=0.1, DATA_SPLIT_METHOD="NO_SPLIT") AS SELECT * FROM babyweight.babyweight_data_train """ Explanation: Lab Task #3: Model 3: Apply the TRANSFORM clause Before we perform our prediction, we should encapsulate the entire feature set in a TRANSFORM clause. This way we can have the same transformations applied for training and prediction without modifying the queries. Let's apply the TRANSFORM clause to model_3 and run the query.
End of explanation """ %%bigquery SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_3) """ Explanation: Let's retrieve the training statistics: End of explanation """ %%bigquery SELECT * FROM ML.EVALUATE(MODEL babyweight.model_3, ( SELECT * FROM babyweight.babyweight_data_eval )) """ Explanation: We now evaluate our model on our eval dataset: End of explanation """ %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL babyweight.model_3, ( SELECT * FROM babyweight.babyweight_data_eval )) """ Explanation: Let's select the mean_squared_error from the evaluation table we just computed and take its square root to obtain the RMSE. End of explanation """
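To make the ML.BUCKETIZE semantics described above concrete, here is an illustrative Python sketch (an addition, not BigQuery code; the "bin_&lt;k&gt;" label format is an assumption for illustration rather than BigQuery's documented output): a value below the first split point falls in the first bucket, a value between split k and split k+1 in bucket k+1, and a value above the last split in the final bucket.

```python
import bisect

def bucketize(value, split_points):
    # split_points must be sorted; bisect_right counts how many splits
    # the value is at or above, which indexes the (1-based) bucket label
    return "bin_%d" % (bisect.bisect_right(split_points, value) + 1)

# hypothetical split points for gestation_weeks, chosen for illustration
gestation_splits = [32, 35, 37, 39, 41]

print(bucketize(28, gestation_splits))  # bin_1
print(bucketize(38, gestation_splits))  # bin_4
print(bucketize(42, gestation_splits))  # bin_6
```

The string labels are what make the bucketized feature usable inside ML.FEATURE_CROSS, which crosses categorical (string) columns.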
adrn/GaiaPairsFollowup
notebooks/Full reduction.ipynb
mit
# Standard library from os.path import join import sys if '/Users/adrian/projects/longslit/' not in sys.path: sys.path.append('/Users/adrian/projects/longslit/') # Third-party from astropy.constants import c import numpy as np import matplotlib.pyplot as plt import astropy.units as u from astropy.io import fits import astropy.modeling as mod plt.style.use('apw-notebook') %matplotlib inline from scipy.interpolate import InterpolatedUnivariateSpline from scipy.optimize import leastsq import ccdproc from ccdproc import CCDData, ImageFileCollection from longslit.utils import gaussian_constant, gaussian_polynomial ccd_props = dict(gain=2.7*u.electron/u.adu, # from: http://mdm.kpno.noao.edu/index/MDM_CCDs.html readnoise=7.9*u.electron) # ic = ImageFileCollection('/Users/adrian/projects/gaia-wide-binaries/data/mdm-spring-2017/n1/', # filenames=['n1.00{:02d}.fit'.format(i) for i in range(1,23+1)]) ic = ImageFileCollection('/Users/adrian/projects/gaia-wide-binaries/data/mdm-spring-2017/n4/', filenames=['n4.00{:02d}.fit'.format(i) for i in list(range(1,21))+[89]]) # ic = ImageFileCollection('/Users/adrian/projects/gaia-wide-binaries/data/mdm-spring-2017/n3/', # filenames=['n3.0{:03d}.fit'.format(i) for i in list(range(1,21))+[122]]) ic.summary['object'] """ Explanation: TODO: Define pixel masks for all objects to remove nearby sources Data model for 1D extracted files: HDU0: reproduce header from original file HDU1: Table with: pix, source_flux, source_ivar, centroid, bg_flux, bg_ivar header should contain PSF fit parameters Try reducing a single object End of explanation """ # get list of overscan-subtracted bias frames as 2D arrays bias_list = [] for hdu, fname in ic.hdus(return_fname=True, imagetyp='BIAS'): ccd = CCDData.read(join(ic.location, fname), unit='adu') ccd = ccdproc.gain_correct(ccd, gain=ccd_props['gain']) ccd = ccdproc.subtract_overscan(ccd, overscan=ccd[:,300:]) ccd = ccdproc.trim_image(ccd, fits_section="[1:300,:]") bias_list.append(ccd) # combine all bias 
frames into a master bias frame master_bias = ccdproc.combine(bias_list, method='average', clip_extrema=True, nlow=1, nhigh=1, error=True) plt.hist(master_bias.data.ravel(), bins='auto'); plt.yscale('log') plt.xlabel('master bias pixel values') plt.figure(figsize=(15,5)) plt.imshow(master_bias.data.T, cmap='plasma') """ Explanation: Create a master bias frame End of explanation """ # create a list of flat frames flat_list = [] for hdu, fname in ic.hdus(return_fname=True, imagetyp='FLAT'): ccd = CCDData.read(join(ic.location, fname), unit='adu') ccd = ccdproc.gain_correct(ccd, gain=ccd_props['gain']) ccd = ccdproc.ccd_process(ccd, oscan='[300:364,:]', trim="[1:300,:]", master_bias=master_bias) flat_list.append(ccd) # combine into a single master flat - use 3*sigma sigma-clipping master_flat = ccdproc.combine(flat_list, method='average', sigma_clip=True, low_thresh=3, high_thresh=3) plt.hist(master_flat.data.ravel(), bins='auto'); plt.yscale('log') plt.xlabel('master flat pixel values') plt.figure(figsize=(15,5)) plt.imshow(master_flat.data.T, cmap='plasma') """ Explanation: Create the master flat frame End of explanation """ im_list = [] for hdu, fname in ic.hdus(return_fname=True, imagetyp='OBJECT'): print(hdu.header['OBJECT']) ccd = CCDData.read(join(ic.location, fname), unit='adu') hdr = ccd.header # make a copy of the object nccd = ccd.copy() # apply the overscan correction poly_model = mod.models.Polynomial1D(2) nccd = ccdproc.subtract_overscan(nccd, fits_section='[300:364,:]', model=poly_model) # apply the trim correction nccd = ccdproc.trim_image(nccd, fits_section='[1:300,:]') # create the error frame nccd = ccdproc.create_deviation(nccd, gain=ccd_props['gain'], readnoise=ccd_props['readnoise']) # now gain correct nccd = ccdproc.gain_correct(nccd, gain=ccd_props['gain']) # correct for master bias frame # - this does some crazy shit at the blue end, but we can live with it nccd = ccdproc.subtract_bias(nccd, master_bias) # correct for master flat frame nccd = 
ccdproc.flat_correct(nccd, master_flat) # cosmic ray cleaning - this updates the uncertainty array as well nccd = ccdproc.cosmicray_lacosmic(nccd, sigclip=8.) # new_fname = 'p'+fname # ccd.write(new_fname, overwrite=True) # im_list.append(new_fname) print(hdu.header['EXPTIME']) break fig,axes = plt.subplots(3,1,figsize=(18,10),sharex=True,sharey=True) axes[0].imshow(nccd.data.T, cmap='plasma') axes[1].imshow(nccd.uncertainty.array.T, cmap='plasma') axes[2].imshow(nccd.mask.T, cmap='plasma') """ Explanation: Process object frames Bias, flat corrections End of explanation """ n_trace_nodes = 32 def fit_gaussian(pix, flux, flux_ivar, n_coeff=1, p0=None, return_cov=False): if p0 is None: p0 = [flux.max(), pix[flux.argmax()], 1.] + [0.]*(n_coeff-1) + [flux.min()] def errfunc(p): return (gaussian_polynomial(pix, *p) - flux) * np.sqrt(flux_ivar) res = leastsq(errfunc, x0=p0, full_output=True) p_opt = res[0] cov_p = res[1] ier = res[-1] if ier > 4 or ier < 1: raise RuntimeError("Failed to fit Gaussian to spectrum trace row.") if return_cov: return p_opt, cov_p else: return p_opt n_rows,n_cols = nccd.data.shape row_idx = np.linspace(0, n_rows-1, n_trace_nodes).astype(int) pix = np.arange(n_cols, dtype=float) trace_amps = np.zeros_like(row_idx).astype(float) trace_centroids = np.zeros_like(row_idx).astype(float) trace_stddevs = np.zeros_like(row_idx).astype(float) for i,j in enumerate(row_idx): flux = nccd.data[j] flux_err = nccd.uncertainty.array[j] flux_ivar = 1/flux_err**2. flux_ivar[~np.isfinite(flux_ivar)] = 0.
p_opt = fit_gaussian(pix, flux, flux_ivar) trace_amps[i] = p_opt[0] trace_centroids[i] = p_opt[1] trace_stddevs[i] = p_opt[2] trace_func = InterpolatedUnivariateSpline(row_idx, trace_centroids) trace_amp_func = InterpolatedUnivariateSpline(row_idx, trace_amps) trace_stddev = np.median(trace_stddevs) plt.hist(trace_stddevs, bins=np.linspace(0, 5., 16)); """ Explanation: Now that those intermediate frames are written, we need to extract the 1D spectra End of explanation """ plt.plot(row_idx, trace_centroids) _grid = np.linspace(row_idx.min(), row_idx.max(), 1024) plt.plot(_grid, trace_func(_grid), marker='', alpha=0.5) plt.axvline(50) plt.axvline(1650) plt.xlabel('row index [pix]') plt.ylabel('trace centroid [pix]') """ Explanation: TODO: some outlier rejection End of explanation """ from scipy.special import wofz from scipy.stats import scoreatpercentile def voigt(x, amp, x_0, stddev, fwhm): _x = x-x_0 z = (_x + 1j*fwhm/2.) / (np.sqrt(2.)*stddev) return amp * wofz(z).real / (np.sqrt(2.*np.pi)*stddev) def psf_model(p, x): amp, x_0, std_G, fwhm_L, C = p return voigt(x, amp, x_0, std_G, fwhm_L) + C def psf_chi(p, pix, flux, flux_ivar): return (psf_model(p, pix) - flux) * np.sqrt(flux_ivar) i = 800 # MAGIC NUMBER flux = nccd.data[i] flux_err = nccd.uncertainty.array[i] pix = np.arange(len(flux)) flux_ivar = 1/flux_err**2. flux_ivar[~np.isfinite(flux_ivar)] = 0. 
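As an aside (added here, not from the notebook): when a full Gaussian fit per row is overkill, a flux-weighted centroid (first moment) gives a cheap trace estimate, and works well as an initial guess for the per-row fit above. A minimal dependency-free sketch:

```python
# Flux-weighted centroid of a 1-D profile: a cheap alternative to, or
# starting point for, the per-row Gaussian centroid fit.

def weighted_centroid(flux, background=0.0):
    # subtract a constant background so the sky level doesn't bias the moment
    weights = [max(f - background, 0.0) for f in flux]
    total = sum(weights)
    if total == 0:
        raise ValueError("no flux above background")
    return sum(i * w for i, w in enumerate(weights)) / total

# symmetric fake profile peaking at pixel 3
profile = [1.0, 2.0, 5.0, 10.0, 5.0, 2.0, 1.0]
print(weighted_centroid(profile, background=1.0))  # 3.0
```

Unlike the least-squares fit, this never fails to converge, at the cost of being biased by asymmetric wings or nearby sources.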
p0 = [flux.max(), pix[np.argmax(flux)], 1., 1., scoreatpercentile(flux[flux>0], 16.)] fig,axes = plt.subplots(2, 1, figsize=(10,6), sharex=True, sharey=True) # data on both panels for i in range(2): axes[i].plot(pix, flux, marker='', drawstyle='steps-mid') axes[i].errorbar(pix, flux, flux_err, marker='', linestyle='none', ecolor='#777777', zorder=-10) # plot initial guess grid = np.linspace(pix.min(), pix.max(), 1024) axes[0].plot(grid, psf_model(p0, grid), marker='', alpha=1., zorder=10) axes[0].set_yscale('log') axes[0].set_title("Initial") # fit parameters p_opt,ier = leastsq(psf_chi, x0=p0, args=(pix, flux, flux_ivar)) print(ier) # plot fit parameters axes[1].plot(grid, psf_model(p_opt, grid), marker='', alpha=1., zorder=10) axes[1].set_title("Fit") fig.tight_layout() psf_p = dict() psf_p['std_G'] = p_opt[2] psf_p['fwhm_L'] = p_opt[3] """ Explanation: Fit a Voigt profile at row = 800 to determine PSF End of explanation """ def row_model(p, psf_p, x): amp, x_0, C = p return voigt(x, amp, x_0, stddev=psf_p['std_G'], fwhm=psf_p['fwhm_L']) + C def row_chi(p, pix, flux, flux_ivar, psf_p): return (row_model(p, psf_p, pix) - flux) * np.sqrt(flux_ivar) # This does PSF extraction trace_1d = np.zeros(n_rows).astype(float) flux_1d = np.zeros(n_rows).astype(float) flux_1d_err = np.zeros(n_rows).astype(float) sky_flux_1d = np.zeros(n_rows).astype(float) sky_flux_1d_err = np.zeros(n_rows).astype(float) for i in range(nccd.data.shape[0]): flux = nccd.data[i] flux_err = nccd.uncertainty.array[i] flux_ivar = 1/flux_err**2. flux_ivar[~np.isfinite(flux_ivar)] = 0. 
p0 = [flux.max(), pix[np.argmax(flux)], scoreatpercentile(flux[flux>0], 16.)] p_opt,p_cov,*_,mesg,ier = leastsq(row_chi, x0=p0, full_output=True, args=(pix, flux, flux_ivar, psf_p)) if ier < 1 or ier > 4 or p_cov is None: flux_1d[i] = np.nan sky_flux_1d[i] = np.nan print("Fit failed for {}".format(i)) continue if test: _grid = np.linspace(sub_pix.min(), sub_pix.max(), 1024) model_flux = p_to_model(p_opt, shifts)(_grid) # ---------------------------------- plt.figure(figsize=(8,6)) plt.plot(sub_pix, flux, drawstyle='steps-mid', marker='') plt.errorbar(sub_pix, flux, 1/np.sqrt(flux_ivar), marker='', linestyle='none', ecolor='#666666', alpha=0.75) plt.axhline(sky_opt[0]) plt.plot(_grid, model_flux, marker='') plt.yscale('log') flux_1d[i] = p_opt[0] trace_1d[i] = p_opt[1] sky_flux_1d[i] = p_opt[2] # TODO: ignores centroiding covariances... flux_1d_err[i] = np.sqrt(p_cov[0,0]) sky_flux_1d_err[i] = np.sqrt(p_cov[2,2]) # clean up the 1d spectra flux_1d_ivar = 1/flux_1d_err**2 sky_flux_1d_ivar = 1/sky_flux_1d_err**2 pix_1d = np.arange(len(flux_1d)) mask_1d = (pix_1d < 50) | (pix_1d > 1600) flux_1d[mask_1d] = 0. flux_1d_ivar[mask_1d] = 0. sky_flux_1d[mask_1d] = 0. sky_flux_1d_ivar[mask_1d] = 0. 
fig,all_axes = plt.subplots(2, 2, figsize=(12,12), sharex='row') axes = all_axes[0] axes[0].plot(flux_1d, marker='', drawstyle='steps-mid') axes[0].errorbar(np.arange(n_rows), flux_1d, 1/np.sqrt(flux_1d_ivar), linestyle='none', marker='', ecolor='#666666', alpha=1., zorder=-10) axes[0].set_ylim(1e3, np.nanmax(flux_1d)) axes[0].set_yscale('log') axes[0].axvline(halpha_idx, zorder=-10, color='r', alpha=0.1) axes[1].plot(sky_flux_1d, marker='', drawstyle='steps-mid') axes[1].errorbar(np.arange(n_rows), sky_flux_1d, 1/np.sqrt(sky_flux_1d_ivar), linestyle='none', marker='', ecolor='#666666', alpha=1., zorder=-10) axes[1].set_ylim(1e-1, np.nanmax(sky_flux_1d)) axes[1].set_yscale('log') axes[1].axvline(halpha_idx, zorder=-10, color='r', alpha=0.1) # Zoom in around Halpha axes = all_axes[1] slc = slice(halpha_idx-32, halpha_idx+32+1) axes[0].plot(np.arange(n_rows)[slc], flux_1d[slc], marker='', drawstyle='steps-mid') axes[0].errorbar(np.arange(n_rows)[slc], flux_1d[slc], flux_1d_err[slc], linestyle='none', marker='', ecolor='#666666', alpha=1., zorder=-10) axes[1].plot(np.arange(n_rows)[slc], sky_flux_1d[slc], marker='', drawstyle='steps-mid') axes[1].errorbar(np.arange(n_rows)[slc], sky_flux_1d[slc], sky_flux_1d_err[slc], linestyle='none', marker='', ecolor='#666666', alpha=1., zorder=-10) fig.tight_layout() plt.figure(figsize=(12,5)) plt.plot(sky_flux_1d, marker='', drawstyle='steps-mid') plt.errorbar(np.arange(n_rows), sky_flux_1d, 1/np.sqrt(sky_flux_1d_ivar), linestyle='none', marker='', ecolor='#666666', alpha=1., zorder=-10) # plt.ylim(1e3, np.nanmax(flux_1d)) plt.yscale('log') plt.xlim(1000, 1300) # plt.ylim(1e4, 4e4) """ Explanation: Now fix PSF parameters, only need to fit amplitudes of PSF and background End of explanation """ n_rows,n_cols = nccd.data.shape pix = np.arange(n_cols, dtype=float) sqrt_2pi = np.sqrt(2*np.pi) def gaussian1d(x, amp, mu, var): return amp/(sqrt_2pi*np.sqrt(var)) * np.exp(-0.5 * (np.array(x) - mu)**2/var) def model(pix, sky_p, p0, 
other_p, shifts=None): """ parameters, p, are: sky mean0, log_amp0, log_var0 log_amp_m*, log_var_m* (at mean0 - N pix) log_amp_p*, log_var_p* (at mean0 + N pix) """ n_other = len(other_p) if n_other // 2 != n_other / 2: raise ValueError("Need even number of other_p") if shifts is None: shifts = np.arange(-n_other/2, n_other/2+1) shifts = np.delete(shifts, np.where(shifts == 0)[0]) assert len(other_p) == len(other_p) # central model mean0, log_amp0, log_var0 = np.array(p0).astype(float) model_flux = gaussian1d(pix, np.exp(log_amp0), mean0, np.exp(log_var0)) for shift,pars in zip(shifts, other_p): model_flux += gaussian1d(pix, np.exp(pars[0]), mean0 + shift, np.exp(pars[1])) # l1 = Lorentz1D(x_0=mean0, amplitude=np.exp(sky_p[1]), fwhm=np.sqrt(np.exp(sky_p[2]))) # sky = sky_p[0] + l1(pix) g = np.exp(sky_p[2]) sky = sky_p[0] + np.exp(sky_p[1]) * (np.pi*g * (1 + (pix-mean0)**2/g**2))**-1 return model_flux + sky def unpack_p(p): sky = p[0:3] p0 = p[3:6] other_p = np.array(p[6:]).reshape(-1, 2) return sky, p0, other_p def p_to_model(p, shifts): sky, p0, other_p = unpack_p(p) for ln_amp in [p0[1]]+other_p[:,0].tolist(): if ln_amp > 15: return lambda x: np.inf for ln_var in [p0[2]]+other_p[:,1].tolist(): if ln_var < -1. 
or ln_var > 8: return lambda x: np.inf return lambda x: model(x, sky, p0, other_p, shifts) def ln_likelihood(p, pix, flux, flux_ivar, shifts): _model = p_to_model(p, shifts) model_flux = _model(pix) if np.any(np.isinf(model_flux)): return np.inf return np.sqrt(flux_ivar) * (flux - model_flux) fig,ax = plt.subplots(1,1,figsize=(10,8)) for i in np.linspace(500, 1200, 16).astype(int): flux = nccd.data[i] pix = np.arange(len(flux)) plt.plot(pix-trace_func(i), flux / trace_amp_func(i), marker='', drawstyle='steps-mid', alpha=0.25) plt.xlim(-50, 50) plt.yscale('log') flux = nccd.data[800] plt.plot(pix, flux, marker='', drawstyle='steps-mid', alpha=0.25) plt.yscale('log') plt.xlim(flux.argmax()-25, flux.argmax()+25) # This does PSF extraction, slightly wrong # hyper-parameters of fit # shifts = [-6, -4, -2, 2, 4., 6.] shifts = np.linspace(-10, 10, 4).tolist() flux_1d = np.zeros(n_rows).astype(float) flux_1d_err = np.zeros(n_rows).astype(float) sky_flux_1d = np.zeros(n_rows).astype(float) sky_flux_1d_err = np.zeros(n_rows).astype(float) test = False # test = True # for i in range(n_rows): # for i in [620, 700, 780, 860, 920, 1000]: for i in range(620, 1000): # line_ctr0 = trace_func(i) # TODO: could use this to initialize? flux = nccd.data[i,j1:j2] flux_err = nccd.uncertainty.array[i,j1:j2] flux_ivar = 1/flux_err**2. flux_ivar[~np.isfinite(flux_ivar)] = 0. # initial guess for least-squares sky = [np.min(flux), 1E-2, 2.] p0 = [pix[np.argmax(flux)], np.log(flux.max()), np.log(1.)] other_p = [[np.log(flux.max()/100.), np.log(1.)]]*len(shifts) _shifts = shifts + [0.,0.] 
other_p += [[np.log(flux.max()/100.), np.log(4.)]]*2 # sanity check ll = ln_likelihood(sky + p0 + other_p, sub_pix, flux, flux_ivar, shifts=_shifts).sum() assert np.isfinite(ll) p_opt,p_cov,*_,mesg,ier = leastsq(ln_likelihood, x0=sky + p0 + np.ravel(other_p).tolist(), full_output=True, args=(pix, flux, flux_ivar, shifts)) if ier < 1 or ier > 4: flux_1d[i] = np.nan sky_flux_1d[i] = np.nan print("Fit failed for {}".format(i)) continue # raise ValueError("Fit failed for {}".format(i)) sky_opt, p0_opt, other_p_opt = unpack_p(p_opt) amps = np.exp(other_p_opt[:,0]) stds = np.exp(other_p_opt[:,1]) if test: _grid = np.linspace(sub_pix.min(), sub_pix.max(), 1024) model_flux = p_to_model(p_opt, shifts)(_grid) # ---------------------------------- plt.figure(figsize=(8,6)) plt.plot(sub_pix, flux, drawstyle='steps-mid', marker='') plt.errorbar(sub_pix, flux, 1/np.sqrt(flux_ivar), marker='', linestyle='none', ecolor='#666666', alpha=0.75) plt.axhline(sky_opt[0]) plt.plot(_grid, model_flux, marker='') plt.yscale('log') flux_1d[i] = np.exp(p0_opt[1]) + amps.sum() sky_flux_1d[i] = sky_opt[0] fig,all_axes = plt.subplots(2, 2, figsize=(12,12), sharex='row') axes = all_axes[0] axes[0].plot(flux_1d, marker='', drawstyle='steps-mid') # axes[0].errorbar(np.arange(n_rows), flux_1d, flux_1d_err, linestyle='none', # marker='', ecolor='#666666', alpha=1., zorder=-10) # axes[0].set_ylim(1e2, np.nanmax(flux_1d)) axes[0].set_ylim(np.nanmin(flux_1d[flux_1d!=0.]), np.nanmax(flux_1d)) # axes[0].set_yscale('log') axes[0].axvline(halpha_idx, zorder=-10, color='r', alpha=0.1) axes[1].plot(sky_flux_1d, marker='', drawstyle='steps-mid') # axes[1].errorbar(np.arange(n_rows), sky_flux_1d, sky_flux_1d_err, linestyle='none', # marker='', ecolor='#666666', alpha=1., zorder=-10) axes[1].set_ylim(-5, np.nanmax(sky_flux_1d)) axes[1].axvline(halpha_idx, zorder=-10, color='r', alpha=0.1) # Zoom in around Halpha axes = all_axes[1] slc = slice(halpha_idx-32, halpha_idx+32+1) axes[0].plot(np.arange(n_rows)[slc], 
flux_1d[slc], marker='', drawstyle='steps-mid') # axes[0].errorbar(np.arange(n_rows)[slc], flux_1d[slc], flux_1d_err[slc], linestyle='none', # marker='', ecolor='#666666', alpha=1., zorder=-10) axes[1].plot(np.arange(n_rows)[slc], sky_flux_1d[slc], marker='', drawstyle='steps-mid') # axes[1].errorbar(np.arange(n_rows)[slc], sky_flux_1d[slc], sky_flux_1d_err[slc], linestyle='none', # marker='', ecolor='#666666', alpha=1., zorder=-10) fig.tight_layout() """ Explanation: Everything below here is crazy (wrong) aperture spectrum shit End of explanation """ # aperture_width = 8 # pixels aperture_width = int(np.ceil(3*trace_stddev)) sky_offset = 8 # pixels sky_width = 16 # pixels # # estimated from ds9: counts in 1 pixel of spectrum, background counts in 2 pixels # C = 11000. # B = 800. # N_S = 1 # N_B = 2 # s_hat = C - B*N_S/N_B # b_hat = B / (N_B-N_S) # s_var = C + B*(N_S/N_B)**2 # b_var = B / N_B # This does aperture extraction, but wrong! flux_1d = np.zeros(n_rows).astype(float) flux_1d_err = np.zeros(n_rows).astype(float) sky_flux_1d = np.zeros(n_rows).astype(float) sky_flux_1d_err = np.zeros(n_rows).astype(float) source_mask = np.zeros_like(nccd.data).astype(bool) sky_mask = np.zeros_like(nccd.data).astype(bool) # # HACK: uniform aperture down the CCD # _derp = trace_func(np.arange(n_rows)) # j1 = int(np.floor(_derp.min())) # j2 = int(np.ceil(_derp.max())) for i in range(n_rows): line_ctr = trace_func(i) # source aperture bool mask j1 = int(np.floor(line_ctr-aperture_width/2)) j2 = int(np.ceil(line_ctr+aperture_width/2)+1) source_mask[i,j1:j2] = 1 # sky aperture bool masks s1 = int(np.floor(j1 - sky_offset - sky_width)) s2 = int(np.ceil(j1 - sky_offset + 1)) sky_mask[i,s1:s2] = 1 s1 = int(np.floor(j2 + sky_offset)) s2 = int(np.ceil(j2 + sky_offset + sky_width + 1)) sky_mask[i,s1:s2] = 1 source_flux = nccd.data[i,source_mask[i]] source_flux_ivar = 1 / nccd.uncertainty.array[i,source_mask[i]]**2 source_flux_ivar[~np.isfinite(source_flux_ivar)] = 0. 
sky_flux = nccd.data[i,sky_mask[i]] sky_flux_ivar = 1 / nccd.uncertainty.array[i,sky_mask[i]]**2 sky_flux_ivar[~np.isfinite(sky_flux_ivar)] = 0. C = np.sum(source_flux_ivar*source_flux) / np.sum(source_flux_ivar) B = np.sum(sky_flux_ivar*sky_flux) / np.sum(sky_flux_ivar) N_S = source_mask[i].sum() N_B = sky_mask[i].sum() s_hat = C - B * N_S / N_B b_hat = B / (N_B - N_S) s_var = C + B * (N_S / N_B)**2 b_var = B / N_B flux_1d[i] = s_hat flux_1d_err[i] = np.sqrt(s_var) sky_flux_1d[i] = b_hat sky_flux_1d_err[i] = np.sqrt(b_var) # approximate halpha_idx = 686 """ Explanation: Maximum likelihood estimate of source counts, background counts For observed counts $C$ in the source aperture, and observed counts $B$ in the background aperture, $s$ is the true source counts, and $b$ is the background density, and there are $N_S$ pixels in the source aperture, $N_B$ pixels in the background aperture: $$ \hat{s} = C - B\,\frac{N_S}{N_B} \ \hat{b} = \frac{B}{N_B - N_S} \ $$ $$ \sigma^2_s = C + B\,\frac{N_S^2}{N_B^2} \ \sigma^2_b = \frac{B}{N_B} $$ https://arxiv.org/pdf/1410.2564.pdf TODO: include uncertainties in pixel counts from read noise? End of explanation """ x = np.linspace(-10.2, 11.1, 21) y = gaussian_polynomial(x, 100., 0.2, 1., 0.4, 15.) \ + gaussian_polynomial(x, 10., 0.3, 2., 0.) # y = gaussian_polynomial(x, 100., 0.2, 1., 0.) \ # + gaussian_polynomial(x, 100., 2., 2., 0.) y_err = np.full_like(y, 1.) y_ivar = 1/y_err**2 # y_ivar[8] = 0. y = np.random.normal(y, y_err) plt.errorbar(x, y, y_err) g1 = mod.models.Gaussian1D(amplitude=y.max()) g2 = mod.models.Gaussian1D(amplitude=y.max()/10.) 
g1.amplitude.bounds = (0, 1E10) g2.amplitude.bounds = (0, 1E10) # g3 = mod.models.Gaussian1D() p1 = mod.models.Polynomial1D(3) full_model = g1+g2+p1 fitter = mod.fitting.LevMarLSQFitter() fit_model = fitter(full_model, x, y, weights=1/y_err) plt.errorbar(x, y, y_err, linestyle='none', marker='o') _grid = np.linspace(x.min(), x.max(), 256) plt.plot(_grid, fit_model(_grid), marker='') fit_model fit_model.amplitude_0 fit_model.amplitude_1 """ Explanation: TODO: discontinuities are from shifting aperture -- figure out a way to have a constant aperture? Testing line fit End of explanation """ aperture_width = 250 i = 1200 line_ctr = trace_func(i) # j1 = int(np.floor(line_ctr-aperture_width/2)) # j2 = int(np.ceil(line_ctr+aperture_width/2)+1) j1 = 0 j2 = 300 print(j1, j2) _grid = np.linspace(sub_pix.min(), sub_pix.max(), 1024) model_flux = p_to_model(p_opt, shifts)(_grid) # ---------------------------------- plt.figure(figsize=(8,6)) plt.plot(sub_pix, flux, drawstyle='steps-mid', marker='') plt.errorbar(sub_pix, flux, 1/np.sqrt(flux_ivar), marker='', linestyle='none', ecolor='#666666', alpha=0.75) plt.axhline(sky_opt[0]) plt.plot(_grid, model_flux, marker='') plt.yscale('log') """ Explanation: End of explanation """ from astropy.modeling.functional_models import Lorentz1D, Gaussian1D, Voigt1D, Const1D from astropy.modeling.fitting import LevMarLSQFitter def _pars_to_model(p): mean = p[0] log_amp_V, log_fwhm_L = p[1:3] log_amp_G1, log_var1, log_amp_G2, log_var2 = p[3:-1] C = p[-1] # l = Voigt1D(x_0=mean, amplitude_L=np.exp(log_amp_V), # fwhm_L=np.exp(log_fwhm_L), # fwhm_G=np.exp(log_fwhm_G)) \ # + Gaussian1D(amplitude=np.exp(log_amp_G1), mean=mean, stddev=np.sqrt(np.exp(log_var1))) \ # + Gaussian1D(amplitude=np.exp(log_amp_G2), mean=mean, stddev=np.sqrt(np.exp(log_var2))) \ # + Const1D(amplitude=C) l = Lorentz1D(x_0=mean, amplitude=np.exp(log_amp_V), fwhm=np.exp(log_fwhm_L)) \ + Gaussian1D(amplitude=np.exp(log_amp_G1), mean=mean, stddev=np.sqrt(np.exp(log_var1))) \ + 
Gaussian1D(amplitude=np.exp(log_amp_G2), mean=mean, stddev=np.sqrt(np.exp(log_var2))) \ + Const1D(amplitude=C) return l def test(p, pix, flux, flux_ivar): l = _pars_to_model(p) return (flux - l(pix)) * np.sqrt(flux_ivar) derp,ier = leastsq(test, x0=[np.argmax(flux), np.log(np.max(flux)), 1., np.log(np.max(flux)/10.), 5.5, np.log(np.max(flux)/25.), 7.5, np.min(flux)], args=(sub_pix, flux, flux_ivar)) l_fit = _pars_to_model(derp) plt.figure(figsize=(8,6)) plt.plot(sub_pix, flux, drawstyle='steps-mid', marker='') plt.errorbar(sub_pix, flux, 1/np.sqrt(flux_ivar), marker='', linestyle='none', ecolor='#666666', alpha=0.75) plt.plot(_grid, l_fit(_grid), marker='') plt.yscale('log') plt.figure(figsize=(8,6)) plt.plot(sub_pix, (flux-l_fit(sub_pix))/(1/np.sqrt(flux_ivar)), drawstyle='steps-mid', marker='') # plt.errorbar(sub_pix, (flux-l_fit(sub_pix))/(, 1/np.sqrt(flux_ivar), # marker='', linestyle='none', ecolor='#666666', alpha=0.75) """ Explanation: End of explanation """
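The maximum-likelihood aperture estimates quoted above boil down to a few lines. A self-contained sketch with the same formulas, evaluated on the example numbers from the commented-out block (C = 11000, B = 800, N_S = 1, N_B = 2):

```python
# Maximum-likelihood source/background estimates:
# C = counts in the source aperture, B = counts in the background aperture,
# N_S / N_B = number of pixels in each aperture.

def aperture_estimates(C, B, N_S, N_B):
    s_hat = C - B * N_S / N_B          # source counts
    b_hat = B / (N_B - N_S)            # background density, as in the notebook
    s_var = C + B * (N_S / N_B) ** 2   # variance of s_hat
    b_var = B / N_B                    # variance of b_hat
    return s_hat, b_hat, s_var, b_var

s_hat, b_hat, s_var, b_var = aperture_estimates(C=11000.0, B=800.0, N_S=1, N_B=2)
print(s_hat)  # 10600.0
print(s_var)  # 11200.0
```

This matches the per-row computation inside the aperture-extraction loop, where C and B are themselves inverse-variance weighted sums over the aperture pixels.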
lcharleux/numerical_analysis
doc/ODE/.ipynb_checkpoints/ODE-checkpoint.ipynb
gpl-2.0
tmax = .2 t = np.linspace(0., tmax, 1000) x0, y0 = 0., 0. vx0, vy0 = 1., 1. g = 10. x = vx0 * t y = -g * t**2/2. + vy0 * t fig = plt.figure() ax = fig.gca() ax.set_aspect("equal") plt.plot(x, y, label = "Exact solution") plt.grid() plt.xlabel("x") plt.ylabel("y") plt.legend() plt.show() """ Explanation: Ordinary differential equations (ODE) Scope Widely used in physics Closed form solutions only in particular cases Need for numerical solvers Ordinary differential equations vs. partial differential equations Ordinary differential equations (ODE) Derivatives of the unknown function only with respect to a single variable, time $t$ for example. Example: the 1D linear oscillator equation $$ \dfrac{d^2x}{dt^2} + 2 \zeta \omega_0 \dfrac{dx}{dt} + \omega_0^2 x = 0 $$ Partial differential equations (PDE) Derivatives of the unknown function with respect to several variables, time $t$ and space $(x, y, z)$ for example. Special techniques not introduced in this course need to be used, such as finite difference or finite elements. Example: the heat equation $$ \rho C_p \dfrac{\partial T}{\partial t} - k \Delta T + s = 0 $$ Introductory example Point mass $P$ in free fall. Required data: gravity field $\vec g = (0, -g)$, Mass $m$, Initial position $P_0 = (0, 0)$ Initial velocity $\vec V_0 = (v_{x0}, v_{y0})$ Problem formulation: $$ \left\lbrace \begin{align} \ddot x & = 0\\ \ddot y & = -g \end{align}\right. $$ Closed form solution $$ \left\lbrace \begin{align} x(t) &= v_{x0} t\\ y(t) &= -g \frac{t^2}{2} + v_{y0}t \end{align}\right.
$$ End of explanation """ dt = 0.02 # Time step X0 = np.array([0., 0., vx0, vy0]) nt = int(tmax/dt) # Number of time steps ti = np.linspace(0., nt * dt, nt) def derivate(X, t): return np.array([X[2], X[3], 0., -g]) def Euler(func, X0, t): dt = t[1] - t[0] nt = len(t) X = np.zeros([nt, len(X0)]) X[0] = X0 for i in range(nt-1): X[i+1] = X[i] + func(X[i], t[i]) * dt return X %time X_euler = Euler(derivate, X0, ti) x_euler, y_euler = X_euler[:,0], X_euler[:,1] plt.figure() plt.plot(x, y, label = "Exact solution") plt.plot(x_euler, y_euler, "or", label = "Euler") plt.grid() plt.xlabel("x") plt.ylabel("y") plt.legend() plt.show() """ Explanation: Reformulation Any ODE can be reformulated as a first order system of equations. Let's assume that $$ X = \begin{bmatrix} X_0 \\ X_1 \\ X_2 \\ X_3 \\ \end{bmatrix} = \begin{bmatrix} x \\ y \\ \dot x \\ \dot y \\ \end{bmatrix} $$ As a consequence: $$ \dot X = \begin{bmatrix} \dot x \\ \dot y \\ \ddot x \\ \ddot y \\ \end{bmatrix} $$ Then, the initially second order equation can be reformulated as: $$ \dot X = f(X, t) = \begin{bmatrix} X_2 \\ X_3 \\ 0 \\ -g \\ \end{bmatrix} $$ Generic problem Solving $\dot X = f(X, t)$ Numerical integration of ODE Generic formulation $$ \dot X = f(X, t) $$ approximate solution: need for error estimation discrete time: $t_0$, $t_1$, $\ldots$ time step $dt = t_{i+1} - t_i$ Euler method Intuitive Fast Slow convergence $$ X_{i+1} = X_i + f(X_i, t_i) dt $$ End of explanation """ def RK4(func, X0, t): dt = t[1] - t[0] nt = len(t) X = np.zeros([nt, len(X0)]) X[0] = X0 for i in range(nt-1): k1 = func(X[i], t[i]) k2 = func(X[i] + dt/2. * k1, t[i] + dt/2.) k3 = func(X[i] + dt/2. * k2, t[i] + dt/2.) k4 = func(X[i] + dt * k3, t[i] + dt) X[i+1] = X[i] + dt / 6. * (k1 + 2. * k2 + 2.
* k3 + k4) return X %time X_rk4 = RK4(derivate, X0, ti) x_rk4, y_rk4 = X_rk4[:,0], X_rk4[:,1] plt.figure() plt.plot(x, y, label = "Exact solution") plt.plot(x_euler, y_euler, "or", label = "Euler") plt.plot(x_rk4, y_rk4, "gs", label = "RK4") plt.grid() plt.xlabel("x") plt.ylabel("y") plt.legend() plt.show() """ Explanation: Runge Kutta 4 Wikipedia Evolution of the Euler integrator with: Multiple slope evaluations (4 here), Well-chosen weighting to match simple solutions. $$ X_{i+1} = X_i + \dfrac{dt}{6}\left(k_1 + 2k_2 + 2k_3 + k_4 \right) $$ With: $k_1$ is the increment based on the slope at the beginning of the interval, using $ X $ (Euler's method); $k_2$ is the increment based on the slope at the midpoint of the interval, using $ X + dt/2 \times k_1 $; $k_3$ is again the increment based on the slope at the midpoint, but now using $ X + dt/2 \times k_2 $; $k_4$ is the increment based on the slope at the end of the interval, using $ X + dt \times k_3 $. End of explanation """ from scipy import integrate %time X_odeint = integrate.odeint(derivate, X0, ti) x_odeint, y_odeint = X_odeint[:,0], X_odeint[:,1] plt.figure() plt.plot(x, y, label = "Exact solution") plt.plot(x_euler, y_euler, "or", label = "Euler") plt.plot(x_rk4, y_rk4, "gs", label = "RK4") plt.plot(x_odeint, y_odeint, "mv", label = "ODEint") plt.grid() plt.xlabel("x") plt.ylabel("y") plt.legend() plt.show() """ Explanation: Using ODEint http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.integrate.odeint.html End of explanation """
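The "slow convergence" claim for Euler versus RK4 can be checked on a scalar test problem with a known solution. This sketch reuses the same update rules as the `Euler` and `RK4` functions above on $\dot X = -X$, $X(0) = 1$; the test problem and step size are illustrative choices, not part of the original notebook.

```python
import numpy as np

def derivate(X, t):
    # Test problem dX/dt = -X with exact solution X(t) = exp(-t).
    return -X

def euler_step(X, t, dt):
    return X + derivate(X, t) * dt

def rk4_step(X, t, dt):
    k1 = derivate(X, t)
    k2 = derivate(X + dt/2. * k1, t + dt/2.)
    k3 = derivate(X + dt/2. * k2, t + dt/2.)
    k4 = derivate(X + dt * k3, t + dt)
    return X + dt / 6. * (k1 + 2. * k2 + 2. * k3 + k4)

def integrate_fixed(step, X0, t):
    # Fixed-step time marching with a pluggable one-step update rule.
    X = np.zeros(len(t))
    X[0] = X0
    for i in range(len(t) - 1):
        X[i+1] = step(X[i], t[i], t[i+1] - t[i])
    return X

t = np.linspace(0., 2., 21)    # dt = 0.1
exact = np.exp(-t)
err_euler = np.max(np.abs(integrate_fixed(euler_step, 1., t) - exact))
err_rk4 = np.max(np.abs(integrate_fixed(rk4_step, 1., t) - exact))
```

At the same step size, the RK4 error is several orders of magnitude below the Euler error, consistent with their first-order and fourth-order global accuracy.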
ES-DOC/esdoc-jupyterhub
notebooks/mri/cmip6/models/mri-esm2-0/ocean.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'mri', 'mri-esm2-0', 'ocean') """ Explanation: ES-DOC CMIP6 Model Properties - Ocean MIP Era: CMIP6 Institute: MRI Source ID: MRI-ESM2-0 Topic: Ocean Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. Properties: 133 (101 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:19 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. 
Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean model code (NEMO 3.6, MOM 5.0,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OGCM" # "slab ocean" # "mixed layer ocean" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Primitive equations" # "Non-hydrostatic" # "Boussinesq" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the ocean. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # "Salinity" # "U-velocity" # "V-velocity" # "W-velocity" # "SSH" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the ocean component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Wright, 1997" # "Mc Dougall et al." # "Jackett et al. 2006" # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. 
Eos Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EOS for sea water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # TODO - please enter value(s) """ Explanation: 2.2. Eos Functional Temp Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Temperature used in EOS for sea water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Practical salinity Sp" # "Absolute salinity Sa" # TODO - please enter value(s) """ Explanation: 2.3. Eos Functional Salt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Salinity used in EOS for sea water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pressure (dbars)" # "Depth (meters)" # TODO - please enter value(s) """ Explanation: 2.4. Eos Functional Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Depth or pressure used in EOS for sea water ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 2.5. 
Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.6. Ocean Specific Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specific heat in ocean (cpocean) in J/(kg K) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.7. Ocean Reference Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boussinesq reference density (rhozero) in kg / m3 End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Present day" # "21000 years BP" # "6000 years BP" # "LGM" # "Pliocene" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date of bathymetry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 3.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the bathymetry fixed in time in the ocean ? 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Ocean Smoothing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any smoothing or hand editing of bathymetry in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.source') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.4. Source Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe source of bathymetry in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how isolated seas is performed End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. River Mouth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how river mouth mixing or estuaries specific treatment is performed End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. 
Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 6.4. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 6.5. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.6. Is Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Default is False. Set true if grid resolution changes during execution. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 6.7. 
Thickness Level 1 Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Thickness of first surface ocean level (in meters) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Brief description of conservation methodology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Enstrophy" # "Salt" # "Volume of ocean" # "Momentum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in the ocean by the numerical schemes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.3. Consistency Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.4. Corrected Conserved Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Set of variables which are conserved by more than the numerical scheme alone. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 8.5. Was Flux Correction Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does conservation involve flux correction ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Grid Ocean grid 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of grid in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Z-coordinate" # "Z*-coordinate" # "S-coordinate" # "Isopycnic - sigma 0" # "Isopycnic - sigma 2" # "Isopycnic - sigma 4" # "Isopycnic - other" # "Hybrid / Z+S" # "Hybrid / Z+isopycnic" # "Hybrid / other" # "Pressure referenced (P)" # "P*" # "Z**" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. 
Coordinates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical coordinates in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 10.2. Partial Steps Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Using partial steps with Z or Z vertical coordinate in ocean ?* End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Lat-lon" # "Rotated north pole" # "Two north poles (ORCA-style)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa E-grid" # "N/a" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Staggering Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal grid staggering type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite difference" # "Finite volumes" # "Finite elements" # "Unstructured grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.3. 
Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of time stepping in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Via coupling" # "Specific treatment" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. Diurnal Cycle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Diurnal cycle type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time stepping scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.2. 
Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time step (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Preconditioned conjugate gradient" # "Sub cyling" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.3. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Baroclinic time step (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "split explicit" # "implicit" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15. 
Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time splitting method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.2. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Barotropic time step (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of vertical time stepping in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17. Advection Ocean advection 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of advection in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flux form" # "Vector form" # TODO - please enter value(s) """ Explanation: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of lateral momemtum advection scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Scheme Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean momentum advection scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.ALE') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 18.3. ALE Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Using ALE for vertical advection ? (if vertical coordinates are sigma) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral tracer advection scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 19.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for lateral tracer advection scheme in ocean ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 19.3.
Effective Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Effective order of limited lateral tracer advection scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.4. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ideal age" # "CFC 11" # "CFC 12" # "SF6" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19.5. Passive Tracers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Passive tracers advected End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.6. Passive Tracers Advection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is advection of passive tracers different than active ? if so, describe. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for vertical tracer advection scheme in ocean (e.g. 
MUSCL, PPM-H5, PRATHER,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 20.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for vertical tracer advection scheme in ocean ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lateral physics in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Eddy active" # "Eddy admitting" # TODO - please enter value(s) """ Explanation: 21.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transient eddy representation in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics momentum scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics momentum scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics momentum scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean 23.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics momentum eddy viscosity coeff type in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 23.2.
Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 24. Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1.
Mesoscale Closure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a mesoscale closure in the lateral physics tracers scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 24.2. Submesoscale Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics tracers scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics tracers scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics tracers scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics tracers eddy diffusity coeff type in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 26.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.3. 
Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 26.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 26.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "GM" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV in lateral physics tracers in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 27.2. 
Constant Val Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If EIV scheme for tracers is constant, specify coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.3. Flux Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV flux (advective or skew) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.4. Added Diffusivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV added diffusivity (constant, flow dependent or none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vertical physics in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there Langmuir cells mixing in upper ocean ?
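Each property cell in this questionnaire constrains the answer either to a primitive type (BOOLEAN, STRING, INTEGER, FLOAT) or to an explicit "Valid Choices" list for ENUM properties. As a minimal sketch of that constraint — using hypothetical helper names (`VALID_CHOICES`, `check_enum_value`) that are not part of the ES-DOC/pyesdoc API — a candidate value can be checked against a property's choice list before it is recorded with `DOC.set_value`:

```python
# Hypothetical sketch, not ES-DOC code: map a property id to its allowed
# choices, mirroring the "Valid Choices" comments in the cells above.
VALID_CHOICES = {
    "cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing": [
        True,
        False,
    ],
    "cmip6.ocean.vertical_physics.interior_mixing.details.convection_type": [
        "Non-penetrative convective adjustment",
        "Enhanced vertical diffusion",
        "Included in turbulence closure",
        "Other: [Please specify]",
    ],
}

def check_enum_value(property_id: str, value) -> bool:
    """Return True if `value` is an allowed choice for `property_id`."""
    return value in VALID_CHOICES.get(property_id, [])
```

For example, `check_enum_value("cmip6.ocean.vertical_physics.interior_mixing.details.convection_type", "Enhanced vertical diffusion")` would pass, while a value outside the controlled vocabulary would not.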
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for tracers in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of tracers, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.4. 
Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of tracers coefficient, (schema and value in m2/s - may be none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for momentum in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 31.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 31.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of momentum, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 31.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of momentum coefficient, (schema and value in m2/s - may be none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Non-penetrative convective adjustment" # "Enhanced vertical diffusion" # "Included in turbulence closure" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical convection in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.2. Tide Induced Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how tide induced mixing is modelled (barotropic, baroclinic, none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 32.3. Double Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there double diffusion ? End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 32.4. Shear Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there interior shear mixing End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for tracers in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 33.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of tracers, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 33.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ? 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 33.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of tracers coefficient, (schema and value in m2/s - may be none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for momentum in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 34.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of momentum, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34.3.
Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of momentum coefficient, (schema and value in m2/s - may be none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of free surface in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear implicit" # "Linear filtered" # "Linear semi-explicit" # "Non-linear implicit" # "Non-linear filtered" # "Non-linear semi-explicit" # "Fully explicit" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 35.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Free surface scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 35.3.
Embedded Seaice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the sea-ice embedded in the ocean model (instead of levitating) ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of bottom boundary layer in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diffusive" # "Acvective" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.2. Type Of Bbl Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of bottom boundary layer in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 36.3. Lateral Mixing Coef Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36.4.
Sill Overflow Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any specific treatment of sill overflows End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of boundary forcing in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.2. Surface Pressure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.3. Momentum Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.4. Tracers Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.5. Wave Effects Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how wave effects are modelled at ocean surface. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.6. River Runoff Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river runoff from land surface is routed to ocean and any global adjustment done. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.7. Geothermal Heating Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how geothermal heating is present at ocean bottom. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Non-linear" # "Non-linear (drag function of speed of tides)" # "Constant drag coefficient" # "None" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum bottom friction in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Free-slip" # "No-slip" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum lateral friction in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "1 extinction depth" # "2 extinction depth" # "3 extinction depth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of sunlight penetration scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 40.2. Ocean Colour Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the ocean sunlight penetration scheme ocean colour dependent ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 40.3. 
Extinction Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe and list extinctions depths for sunlight penetration scheme (if applicable). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from atmos in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Real salt flux" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41.2. From Sea Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from sea-ice in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 41.3. Forced Mode Restoring Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface salinity restoring in forced mode (OMIP) End of explanation """
csaladenes/csaladenes.github.io
present/mcc2/PythonDataScienceHandbook/05.13-Kernel-Density-Estimation.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import seaborn as sns; sns.set() import numpy as np """ Explanation: <!--BOOK_INFORMATION--> <img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png"> This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub. The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book! <!--NAVIGATION--> < In Depth: Gaussian Mixture Models | Contents | Application: A Face Detection Pipeline > <a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.13-Kernel-Density-Estimation.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a> In-Depth: Kernel Density Estimation In the previous section we covered Gaussian mixture models (GMM), which are a kind of hybrid between a clustering estimator and a density estimator. Recall that a density estimator is an algorithm which takes a $D$-dimensional dataset and produces an estimate of the $D$-dimensional probability distribution which that data is drawn from. The GMM algorithm accomplishes this by representing the density as a weighted sum of Gaussian distributions. Kernel density estimation (KDE) is in some senses an algorithm which takes the mixture-of-Gaussians idea to its logical extreme: it uses a mixture consisting of one Gaussian component per point, resulting in an essentially non-parametric estimator of density. In this section, we will explore the motivation and uses of KDE. 
We begin with the standard imports: End of explanation """ def make_data(N, f=0.3, rseed=1): rand = np.random.RandomState(rseed) x = rand.randn(N) x[int(f * N):] += 5 return x x = make_data(1000) """ Explanation: Motivating KDE: Histograms As already discussed, a density estimator is an algorithm which seeks to model the probability distribution that generated a dataset. For one dimensional data, you are probably already familiar with one simple density estimator: the histogram. A histogram divides the data into discrete bins, counts the number of points that fall in each bin, and then visualizes the results in an intuitive manner. For example, let's create some data that is drawn from two normal distributions: End of explanation """ hist = plt.hist(x, bins=30, density=True) """ Explanation: We have previously seen that the standard count-based histogram can be created with the plt.hist() function. By specifying the density parameter of the histogram, we end up with a normalized histogram where the height of the bins does not reflect counts, but instead reflects probability density: End of explanation """ density, bins, patches = hist widths = bins[1:] - bins[:-1] (density * widths).sum() """ Explanation: Notice that for equal binning, this normalization simply changes the scale on the y-axis, leaving the relative heights essentially the same as in a histogram built from counts.
This normalization is chosen so that the total area under the histogram is equal to 1, as we can confirm by looking at the output of the histogram function: End of explanation """ x = make_data(20) bins = np.linspace(-5, 10, 10) fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharex=True, sharey=True, subplot_kw={'xlim':(-4, 9), 'ylim':(-0.02, 0.3)}) fig.subplots_adjust(wspace=0.05) for i, offset in enumerate([0.0, 0.6]): ax[i].hist(x, bins=bins + offset, density=True) ax[i].plot(x, np.full_like(x, -0.01), '|k', markeredgewidth=1) """ Explanation: One of the issues with using a histogram as a density estimator is that the choice of bin size and location can lead to representations that have qualitatively different features. For example, if we look at a version of this data with only 20 points, the choice of how to draw the bins can lead to an entirely different interpretation of the data! Consider this example: End of explanation """ fig, ax = plt.subplots() bins = np.arange(-3, 8) ax.plot(x, np.full_like(x, -0.1), '|k', markeredgewidth=1) for count, edge in zip(*np.histogram(x, bins)): for i in range(count): ax.add_patch(plt.Rectangle((edge, i), 1, 1, alpha=0.5)) ax.set_xlim(-4, 8) ax.set_ylim(-0.2, 8) """ Explanation: On the left, the histogram makes clear that this is a bimodal distribution. On the right, we see a unimodal distribution with a long tail. Without seeing the preceding code, you would probably not guess that these two histograms were built from the same data: with that in mind, how can you trust the intuition that histograms confer? And how might we improve on this? Stepping back, we can think of a histogram as a stack of blocks, where we stack one block within each bin on top of each point in the dataset. 
Let's view this directly: End of explanation """ x_d = np.linspace(-4, 8, 2000) density = sum((abs(xi - x_d) < 0.5) for xi in x) plt.fill_between(x_d, density, alpha=0.5) plt.plot(x, np.full_like(x, -0.1), '|k', markeredgewidth=1) plt.axis([-4, 8, -0.2, 8]); """ Explanation: The problem with our two binnings stems from the fact that the height of the block stack often reflects not on the actual density of points nearby, but on coincidences of how the bins align with the data points. This mis-alignment between points and their blocks is a potential cause of the poor histogram results seen here. But what if, instead of stacking the blocks aligned with the bins, we were to stack the blocks aligned with the points they represent? If we do this, the blocks won't be aligned, but we can add their contributions at each location along the x-axis to find the result. Let's try this: End of explanation """ from scipy.stats import norm x_d = np.linspace(-4, 8, 1000) density = sum(norm(xi).pdf(x_d) for xi in x) plt.fill_between(x_d, density, alpha=0.5) plt.plot(x, np.full_like(x, -0.1), '|k', markeredgewidth=1) plt.axis([-4, 8, -0.2, 5]); """ Explanation: The result looks a bit messy, but is a much more robust reflection of the actual data characteristics than is the standard histogram. Still, the rough edges are not aesthetically pleasing, nor are they reflective of any true properties of the data. In order to smooth them out, we might decide to replace the blocks at each location with a smooth function, like a Gaussian. 
Let's use a standard normal curve at each point instead of a block: End of explanation """ from sklearn.neighbors import KernelDensity # instantiate and fit the KDE model kde = KernelDensity(bandwidth=1.0, kernel='gaussian') kde.fit(x[:, None]) # score_samples returns the log of the probability density logprob = kde.score_samples(x_d[:, None]) plt.fill_between(x_d, np.exp(logprob), alpha=0.5) plt.plot(x, np.full_like(x, -0.01), '|k', markeredgewidth=1) plt.ylim(-0.02, 0.22) """ Explanation: This smoothed-out plot, with a Gaussian distribution contributed at the location of each input point, gives a much more accurate idea of the shape of the data distribution, and one which has much less variance (i.e., changes much less in response to differences in sampling). These last two plots are examples of kernel density estimation in one dimension: the first uses a so-called "tophat" kernel and the second uses a Gaussian kernel. We'll now look at kernel density estimation in more detail. Kernel Density Estimation in Practice The free parameters of kernel density estimation are the kernel, which specifies the shape of the distribution placed at each point, and the kernel bandwidth, which controls the size of the kernel at each point. In practice, there are many kernels you might use for a kernel density estimation: in particular, the Scikit-Learn KDE implementation supports one of six kernels, which you can read about in Scikit-Learn's Density Estimation documentation. While there are several versions of kernel density estimation implemented in Python (notably in the SciPy and StatsModels packages), I prefer to use Scikit-Learn's version because of its efficiency and flexibility. It is implemented in the sklearn.neighbors.KernelDensity estimator, which handles KDE in multiple dimensions with one of six kernels and one of a couple dozen distance metrics. 
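As an aside, the SciPy implementation mentioned above is worth seeing once for comparison. The following is a small standalone sketch (with its own made-up bimodal sample, not the make_data values used elsewhere in this chapter); scipy.stats.gaussian_kde picks a bandwidth automatically, using Scott's rule by default:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.RandomState(42)
sample = np.concatenate([rng.randn(300), rng.randn(100) + 5])  # bimodal data

kde = gaussian_kde(sample)       # bandwidth picked automatically (Scott's rule)
grid = np.linspace(-5, 10, 500)
density = kde(grid)              # evaluate the estimated density on the grid

# a proper probability density: non-negative, with total area close to 1
area = (density * (grid[1] - grid[0])).sum()
print(density.min() >= 0.0, round(area, 3))
```

Evaluating on a fine grid and checking that the curve integrates to roughly one is a quick sanity check that applies to any density estimator, whichever library it comes from.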
Because KDE can be fairly computationally intensive, the Scikit-Learn estimator uses a tree-based algorithm under the hood and can trade off computation time for accuracy using the atol (absolute tolerance) and rtol (relative tolerance) parameters. The kernel bandwidth, which is a free parameter, can be determined using Scikit-Learn's standard cross validation tools as we will soon see. Let's first show a simple example of replicating the above plot using the Scikit-Learn KernelDensity estimator: End of explanation """ from sklearn.model_selection import GridSearchCV from sklearn.model_selection import LeaveOneOut bandwidths = 10 ** np.linspace(-1, 1, 100) grid = GridSearchCV(KernelDensity(kernel='gaussian'), {'bandwidth': bandwidths}, cv=LeaveOneOut()) grid.fit(x[:, None]); """ Explanation: The result here is normalized such that the area under the curve is equal to 1. Selecting the bandwidth via cross-validation The choice of bandwidth within KDE is extremely important to finding a suitable density estimate, and is the knob that controls the bias–variance trade-off in the estimate of density: too narrow a bandwidth leads to a high-variance estimate (i.e., over-fitting), where the presence or absence of a single point makes a large difference. Too wide a bandwidth leads to a high-bias estimate (i.e., under-fitting) where the structure in the data is washed out by the wide kernel. There is a long history in statistics of methods to quickly estimate the best bandwidth based on rather stringent assumptions about the data: if you look up the KDE implementations in the SciPy and StatsModels packages, for example, you will see implementations based on some of these rules. In machine learning contexts, we've seen that such hyperparameter tuning often is done empirically via a cross-validation approach. With this in mind, the KernelDensity estimator in Scikit-Learn is designed such that it can be used directly within the Scikit-Learn's standard grid search tools. 
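The "long history" of quick bandwidth estimates mentioned above includes Silverman's rule of thumb, which is simple enough to write out by hand. This is an illustrative standalone sketch — the constants are the standard ones from the statistics literature, and the helper name is ours, not part of SciPy, StatsModels, or Scikit-Learn:

```python
import numpy as np

def silverman_bandwidth(x):
    """Rule-of-thumb bandwidth h = 0.9 * min(std, IQR / 1.34) * n**(-1/5)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    std = x.std(ddof=1)
    iqr = np.subtract(*np.percentile(x, [75, 25]))  # interquartile range
    return 0.9 * min(std, iqr / 1.34) * n ** (-0.2)

rng = np.random.RandomState(0)
h = silverman_bandwidth(rng.randn(1000))
print(h)  # roughly 0.22 for a standard-normal sample of this size
```

Such rules are fast, but they bake in near-Gaussian assumptions about the data — which is exactly why this chapter turns to cross-validation instead.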
Here we will use GridSearchCV to optimize the bandwidth for the preceding dataset. Because we are looking at such a small dataset, we will use leave-one-out cross-validation, which minimizes the reduction in training set size for each cross-validation trial: End of explanation """ grid.best_params_ """ Explanation: Now we can find the choice of bandwidth which maximizes the score (which in this case defaults to the log-likelihood): End of explanation """ from sklearn.datasets import fetch_species_distributions # this step might fail based on permissions and network access # if in Docker, specify --network=host # if in docker-compose specify version 3.4 and build -> network: host data = fetch_species_distributions() # Get matrices/arrays of species IDs and locations latlon = np.vstack([data.train['dd lat'], data.train['dd long']]).T species = np.array([d.decode('ascii').startswith('micro') for d in data.train['species']], dtype='int') """ Explanation: The optimal bandwidth happens to be very close to what we used in the example plot earlier, where the bandwidth was 1.0 (i.e., the default width of scipy.stats.norm). Example: KDE on a Sphere Perhaps the most common use of KDE is in graphically representing distributions of points. For example, in the Seaborn visualization library (see Visualization With Seaborn), KDE is built in and automatically used to help visualize points in one and two dimensions. Here we will look at a slightly more sophisticated use of KDE for visualization of distributions. We will make use of some geographic data that can be loaded with Scikit-Learn: the geographic distributions of recorded observations of two South American mammals, Bradypus variegatus (the Brown-throated Sloth) and Microryzomys minutus (the Forest Small Rice Rat).
With Scikit-Learn, we can fetch this data as follows: End of explanation """ # !conda install -c conda-forge basemap-data-hires -y # RESTART KERNEL #Hack to fix missing PROJ4 env var import os import conda conda_file_dir = conda.__file__ conda_dir = conda_file_dir.split('lib')[0] proj_lib = os.path.join(os.path.join(conda_dir, 'share'), 'proj') os.environ["PROJ_LIB"] = proj_lib from mpl_toolkits.basemap import Basemap from sklearn.datasets.species_distributions import construct_grids xgrid, ygrid = construct_grids(data) # plot coastlines with basemap m = Basemap(projection='cyl', resolution='c', llcrnrlat=ygrid.min(), urcrnrlat=ygrid.max(), llcrnrlon=xgrid.min(), urcrnrlon=xgrid.max()) m.drawmapboundary(fill_color='#DDEEFF') m.fillcontinents(color='#FFEEDD') m.drawcoastlines(color='gray', zorder=2) m.drawcountries(color='gray', zorder=2) # plot locations m.scatter(latlon[:, 1], latlon[:, 0], zorder=3, c=species, cmap='rainbow', latlon=True); """ Explanation: With this data loaded, we can use the Basemap toolkit (mentioned previously in Geographic Data with Basemap) to plot the observed locations of these two species on the map of South America. 
End of explanation """ # Set up the data grid for the contour plot X, Y = np.meshgrid(xgrid[::5], ygrid[::5][::-1]) land_reference = data.coverages[6][::5, ::5] land_mask = (land_reference > -9999).ravel() xy = np.vstack([Y.ravel(), X.ravel()]).T xy = np.radians(xy[land_mask]) # Create two side-by-side plots fig, ax = plt.subplots(1, 2) fig.subplots_adjust(left=0.05, right=0.95, wspace=0.05) species_names = ['Bradypus Variegatus', 'Microryzomys Minutus'] cmaps = ['Purples', 'Reds'] for i, axi in enumerate(ax): axi.set_title(species_names[i]) # plot coastlines with basemap m = Basemap(projection='cyl', llcrnrlat=Y.min(), urcrnrlat=Y.max(), llcrnrlon=X.min(), urcrnrlon=X.max(), resolution='c', ax=axi) m.drawmapboundary(fill_color='#DDEEFF') m.drawcoastlines() m.drawcountries() # construct a spherical kernel density estimate of the distribution kde = KernelDensity(bandwidth=0.03, metric='haversine') kde.fit(np.radians(latlon[species == i])) # evaluate only on the land: -9999 indicates ocean Z = np.full(land_mask.shape[0], -9999.0) Z[land_mask] = np.exp(kde.score_samples(xy)) Z = Z.reshape(X.shape) # plot contours of the density levels = np.linspace(0, Z.max(), 25) axi.contourf(X, Y, Z, levels=levels, cmap=cmaps[i]) """ Explanation: Unfortunately, this doesn't give a very good idea of the density of the species, because points in the species range may overlap one another. You may not realize it by looking at this plot, but there are over 1,600 points shown here! Let's use kernel density estimation to show this distribution in a more interpretable way: as a smooth indication of density on the map. Because the coordinate system here lies on a spherical surface rather than a flat plane, we will use the haversine distance metric, which will correctly represent distances on a curved surface. 
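The haversine metric named above has a compact closed form, shown here as a standalone illustration (this is the textbook great-circle formula, not code taken from this notebook); with latitude and longitude in radians, the distance comes back in radians on the unit sphere:

```python
import numpy as np

def haversine(p1, p2):
    """Great-circle distance between two (lat, lon) points given in radians."""
    lat1, lon1 = p1
    lat2, lon2 = p2
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * np.arcsin(np.sqrt(a))

equator = np.radians([0.0, 0.0])
pole = np.radians([90.0, 0.0])
d = haversine(equator, pole)
print(d)  # pi/2: a quarter of the way around the unit sphere
```

This is why the species coordinates are converted with np.radians before fitting: the metric expects radians, and multiplying the result by the Earth's radius would convert it to kilometres.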
There is a bit of boilerplate code here (one of the disadvantages of the Basemap toolkit) but the meaning of each code block should be clear: End of explanation """ from sklearn.base import BaseEstimator, ClassifierMixin class KDEClassifier(BaseEstimator, ClassifierMixin): """Bayesian generative classification based on KDE Parameters ---------- bandwidth : float the kernel bandwidth within each class kernel : str the kernel name, passed to KernelDensity """ def __init__(self, bandwidth=1.0, kernel='gaussian'): self.bandwidth = bandwidth self.kernel = kernel def fit(self, X, y): self.classes_ = np.sort(np.unique(y)) training_sets = [X[y == yi] for yi in self.classes_] self.models_ = [KernelDensity(bandwidth=self.bandwidth, kernel=self.kernel).fit(Xi) for Xi in training_sets] self.logpriors_ = [np.log(Xi.shape[0] / X.shape[0]) for Xi in training_sets] return self def predict_proba(self, X): logprobs = np.array([model.score_samples(X) for model in self.models_]).T result = np.exp(logprobs + self.logpriors_) return result / result.sum(1, keepdims=True) def predict(self, X): return self.classes_[np.argmax(self.predict_proba(X), 1)] """ Explanation: Compared to the simple scatter plot we initially used, this visualization paints a much clearer picture of the geographical distribution of observations of these two species. Example: Not-So-Naive Bayes This example looks at Bayesian generative classification with KDE, and demonstrates how to use the Scikit-Learn architecture to create a custom estimator. In In Depth: Naive Bayes Classification, we took a look at naive Bayesian classification, in which we created a simple generative model for each class, and used these models to build a fast classifier. For Gaussian naive Bayes, the generative model is a simple axis-aligned Gaussian. With a density estimation algorithm like KDE, we can remove the "naive" element and perform the same classification with a more sophisticated generative model for each class. 
It's still Bayesian classification, but it's no longer naive. The general approach for generative classification is this: Split the training data by label. For each set, fit a KDE to obtain a generative model of the data. This allows you for any observation $x$ and label $y$ to compute a likelihood $P(x~|~y)$. From the number of examples of each class in the training set, compute the class prior, $P(y)$. For an unknown point $x$, the posterior probability for each class is $P(y~|~x) \propto P(x~|~y)P(y)$. The class which maximizes this posterior is the label assigned to the point. The algorithm is straightforward and intuitive to understand; the more difficult piece is couching it within the Scikit-Learn framework in order to make use of the grid search and cross-validation architecture. This is the code that implements the algorithm within the Scikit-Learn framework; we will step through it following the code block: End of explanation """ from sklearn.datasets import load_digits from sklearn.model_selection import GridSearchCV digits = load_digits() bandwidths = 10 ** np.linspace(0, 2, 100) grid = GridSearchCV(KDEClassifier(), {'bandwidth': bandwidths}) grid.fit(digits.data, digits.target) # scores = [val.mean_validation_score for val in grid.grid_scores_] scores = grid.cv_results_['mean_test_score'] """ Explanation: The anatomy of a custom estimator Let's step through this code and discuss the essential features: ```python from sklearn.base import BaseEstimator, ClassifierMixin class KDEClassifier(BaseEstimator, ClassifierMixin): """Bayesian generative classification based on KDE Parameters ---------- bandwidth : float the kernel bandwidth within each class kernel : str the kernel name, passed to KernelDensity """ ``` Each estimator in Scikit-Learn is a class, and it is most convenient for this class to inherit from the BaseEstimator class as well as the appropriate mixin, which provides standard functionality. 
For example, among other things, here the BaseEstimator contains the logic necessary to clone/copy an estimator for use in a cross-validation procedure, and ClassifierMixin defines a default score() method used by such routines. We also provide a doc string, which will be captured by IPython's help functionality (see Help and Documentation in IPython). Next comes the class initialization method: python def __init__(self, bandwidth=1.0, kernel='gaussian'): self.bandwidth = bandwidth self.kernel = kernel This is the actual code that is executed when the object is instantiated with KDEClassifier(). In Scikit-Learn, it is important that initialization contains no operations other than assigning the passed values by name to self. This is due to the logic contained in BaseEstimator required for cloning and modifying estimators for cross-validation, grid search, and other functions. Similarly, all arguments to __init__ should be explicit: i.e. *args or **kwargs should be avoided, as they will not be correctly handled within cross-validation routines. Next comes the fit() method, where we handle training data: python def fit(self, X, y): self.classes_ = np.sort(np.unique(y)) training_sets = [X[y == yi] for yi in self.classes_] self.models_ = [KernelDensity(bandwidth=self.bandwidth, kernel=self.kernel).fit(Xi) for Xi in training_sets] self.logpriors_ = [np.log(Xi.shape[0] / X.shape[0]) for Xi in training_sets] return self Here we find the unique classes in the training data, train a KernelDensity model for each class, and compute the class priors based on the number of input samples. Finally, fit() should always return self so that we can chain commands. For example: python label = model.fit(X, y).predict(X) Notice that each persistent result of the fit is stored with a trailing underscore (e.g., self.logpriors_). 
This is a convention used in Scikit-Learn so that you can quickly scan the members of an estimator (using IPython's tab completion) and see exactly which members are fit to training data. Finally, we have the logic for predicting labels on new data: ```python def predict_proba(self, X): logprobs = np.vstack([model.score_samples(X) for model in self.models_]).T result = np.exp(logprobs + self.logpriors_) return result / result.sum(1, keepdims=True) def predict(self, X): return self.classes_[np.argmax(self.predict_proba(X), 1)] ``` Because this is a probabilistic classifier, we first implement predict_proba() which returns an array of class probabilities of shape [n_samples, n_classes]. Entry [i, j] of this array is the posterior probability that sample i is a member of class j, computed by multiplying the likelihood by the class prior and normalizing. Finally, the predict() method uses these probabilities and simply returns the class with the largest probability. Using our custom estimator Let's try this custom estimator on a problem we have seen before: the classification of hand-written digits. Here we will load the digits, and compute the cross-validation score for a range of candidate bandwidths using the GridSearchCV meta-estimator (refer back to Hyperparameters and Model Validation): End of explanation """ plt.semilogx(bandwidths, scores) plt.xlabel('bandwidth') plt.ylabel('accuracy') plt.title('KDE Model Performance') print(grid.best_params_) print('accuracy =', grid.best_score_) """ Explanation: Next we can plot the cross-validation score as a function of bandwidth: End of explanation """ from sklearn.naive_bayes import GaussianNB from sklearn.model_selection import cross_val_score cross_val_score(GaussianNB(), digits.data, digits.target).mean() """ Explanation: We see that this not-so-naive Bayesian classifier reaches a cross-validation accuracy of just over 96%; this is compared to around 80% for the naive Bayesian classification: End of explanation """
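The posterior rule at the heart of this classifier, P(y | x) ∝ P(x | y) P(y), is easy to verify by hand on a toy one-dimensional problem with known Gaussian likelihoods. This sketch uses made-up priors and class means and is independent of the KDEClassifier above:

```python
import numpy as np
from scipy.stats import norm

priors = np.array([0.7, 0.3])                 # P(y): assumed class frequencies
likelihoods = [norm(loc=0.0, scale=1.0),      # P(x | y=0)
               norm(loc=3.0, scale=1.0)]      # P(x | y=1)

def posterior(x):
    joint = np.array([lk.pdf(x) for lk in likelihoods]) * priors
    return joint / joint.sum()                # normalize to get P(y | x)

print(posterior(0.0).round(3))  # strongly favours class 0
print(posterior(3.0).round(3))  # strongly favours class 1
print(posterior(1.5).round(3))  # equidistant point: posterior equals the priors
```

At x = 1.5 the two likelihoods are identical (the point is equidistant from both means), so the data contribute nothing and the posterior collapses back to the priors — a useful sanity check when debugging a generative classifier.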
Chipe1/aima-python
notebooks/chapter19/Learners.ipynb
mit
import os, sys sys.path = [os.path.abspath("../../")] + sys.path from deep_learning4e import * from notebook4e import * from learning4e import * """ Explanation: Learners In this section, we will introduce several pre-defined learners that learn from datasets by updating their weights to minimize the loss function. When using a learner to deal with machine learning problems, there are several standard steps: Learner initialization: Before training the network, it usually should be initialized first. There are several choices for initializing the weights: random initialization, initializing the weights to zeros, or drawing the initial weights from a Gaussian distribution. Optimizer specification: This means specifying the update rules for the learnable parameters of the network. Usually, we can choose the Adam optimizer as the default. Applying back-propagation: In neural networks, we commonly use back-propagation to propagate and calculate the gradient information of each layer. Back-propagation needs to be integrated with the chosen optimizer in order to update the weights of the network properly in each epoch. Iterations: Iterating over the forward and back-propagation process for a given number of epochs. Sometimes the iteration process has to be stopped early (early stopping) in case of overfitting. We will introduce several learners with different structures. We will import all necessary packages before that: End of explanation """ raw_net = [InputLayer(input_size), DenseLayer(input_size, output_size)] """ Explanation: Perceptron Learner Overview The Perceptron is a linear classifier. It works the same way as a neural network with no hidden layers (just input and output). First, it trains its weights given a dataset and then it can classify a new item by running it through the network. Its input layer consists of the item features, while the output layer consists of nodes (also called neurons). Each node in the output layer has n synapses (for every item feature), each with its own weight.
Then, the nodes find the dot product of the item features and the synapse weights. These values then pass through an activation function (usually a sigmoid). Finally, we pick the largest of the values and we return its index. Note that in classification problems each node represents a class. The final classification is the class/node with the max output value. Below you can see a single node/neuron in the outer layer. With f we denote the item features, with w the synapse weights, then inside the node we have the dot product and the activation function, g. Implementation Perceptron learner is actually a neural network learner with only one hidden layer which is pre-defined in the algorithm of perceptron_learner: End of explanation """ iris = DataSet(name="iris") classes = ["setosa", "versicolor", "virginica"] iris.classes_to_numbers(classes) pl = perceptron_learner(iris, epochs=500, learning_rate=0.01, verbose=50) """ Explanation: Where input_size and output_size are calculated from dataset examples. In the perceptron learner, the gradient descent optimizer is used to update the weights of the network. we return a function predict which we will use in the future to classify a new item. The function computes the (algebraic) dot product of the item with the calculated weights for each node in the outer layer. Then it picks the greatest value and classifies the item in the corresponding class. Example Let's try the perceptron learner with the iris dataset examples, first let's regulate the dataset classes: End of explanation """ print(err_ratio(pl, iris)) """ Explanation: We can see from the printed lines that the final total loss is converged to around 10.50. 
If we check the error ratio of the perceptron learner on the dataset after training, we will see it is much lower than random guessing:
End of explanation
"""
print(err_ratio(pl, iris))
"""
Explanation: If we test the trained learner with some test cases:
End of explanation
"""
tests = [([5.0, 3.1, 0.9, 0.1], 0),
         ([5.1, 3.5, 1.0, 0.0], 0),
         ([4.9, 3.3, 1.1, 0.1], 0),
         ([6.0, 3.0, 4.0, 1.1], 1),
         ([6.1, 2.2, 3.5, 1.0], 1),
         ([5.9, 2.5, 3.3, 1.1], 1),
         ([7.5, 4.1, 6.2, 2.3], 2),
         ([7.3, 4.0, 6.1, 2.4], 2),
         ([7.0, 3.3, 6.1, 2.5], 2)]
print(grade_learner(pl, tests))
"""
Explanation: It seems the learner is correct on all the test examples. Now let's try the perceptron learner on a more complicated dataset: the MNIST dataset, to see what the result will be. First, we import the dataset to make the examples a DataSet object:
End of explanation
"""
train_img, train_lbl, test_img, test_lbl = load_MNIST(path="../../aima-data/MNIST/Digits")
import numpy as np
import matplotlib.pyplot as plt
train_examples = [np.append(train_img[i], train_lbl[i]) for i in range(len(train_img))]
test_examples = [np.append(test_img[i], test_lbl[i]) for i in range(len(test_img))]
print("length of training dataset:", len(train_examples))
print("length of test dataset:", len(test_examples))
"""
Explanation: Now let's train the perceptron learner on the first 1000 examples of the dataset:
End of explanation
"""
mnist = DataSet(examples=train_examples[:1000])
pl = perceptron_learner(mnist, epochs=10, verbose=1)
print(err_ratio(pl, mnist))
"""
Explanation: It looks like we have a near 90% error ratio on training data after the network is trained on it.
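For reference, the error ratio reported by err_ratio is just the fraction of examples the learner misclassifies. A minimal stand-in for plain (features, label) lists — the real err_ratio in learning4e works on DataSet objects — might look like:

```python
def error_ratio(predict, examples):
    """Fraction of (features, label) pairs that predict gets wrong."""
    wrong = sum(1 for features, label in examples if predict(features) != label)
    return wrong / len(examples)

# toy check with a constant classifier that always answers 0
examples = [([5.0, 3.1], 0), ([6.0, 3.0], 1), ([7.5, 4.1], 2), ([4.9, 3.3], 0)]
ratio = error_ratio(lambda features: 0, examples)  # misses the two non-zero labels
```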
Then we can investigate the model's performance on the test dataset, which it has never seen before:
End of explanation
"""
test_mnist = DataSet(examples=test_examples[:100])
print(err_ratio(pl, test_mnist))
"""
Explanation: It seems a single-layer perceptron learner cannot capture the structure of the MNIST dataset. To improve accuracy, we may not only increase the number of training epochs but also consider changing to a more complicated network structure.
Neural Network Learner
Although there are many different types of neural networks, the dense neural network we implemented can be treated as a stacked perceptron learner. Adding more layers to the perceptron network adds non-linearity, so the model will be more flexible when fitting complex data-target relations. However, it also increases the risk of overfitting as a side effect of this flexibility.
By default we use dense networks with two hidden layers, which have the following architecture:
<img src="images/nn.png" width="500"/>
In our code, we implemented it as:
End of explanation
"""
# initialize the network
raw_net = [InputLayer(input_size)]
# add hidden layers
hidden_input_size = input_size
for h_size in hidden_layer_sizes:
    raw_net.append(DenseLayer(hidden_input_size, h_size))
    hidden_input_size = h_size
raw_net.append(DenseLayer(hidden_input_size, output_size))
"""
Explanation: Where hidden_layer_sizes is a list of the sizes of each hidden layer, which can be specified by the user. The neural network learner uses gradient descent as its default optimizer, but the user can specify any optimizer when calling neural_net_learner. The other special attribute that can be changed in neural_net_learner is batch_size, which controls the number of examples used in each round of updates. neural_net_learner also returns a predict function, which calculates predictions by multiplying the inputs by the weights and applying the activation functions.
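The size bookkeeping done by the layer-stacking loop above can be isolated into a tiny helper — a hypothetical sketch, just to show how consecutive dense-layer shapes chain input to output:

```python
def dense_layer_shapes(input_size, hidden_layer_sizes, output_size):
    """Return the (in, out) shape of each dense layer in the stacked network."""
    sizes = [input_size] + list(hidden_layer_sizes) + [output_size]
    # each layer's output size is the next layer's input size
    return list(zip(sizes[:-1], sizes[1:]))

# e.g. an iris-like setup: 4 input features, two hidden layers of 3, 3 output classes
shapes = dense_layer_shapes(4, [3, 3], 3)
```

With no hidden layers at all, the helper degenerates to the single dense layer of the perceptron learner.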
Example
Let's also try neural_net_learner on the iris dataset:
End of explanation
"""
nn = neural_net_learner(iris, epochs=100, learning_rate=0.15, optimizer=gradient_descent, verbose=10)
"""
Explanation: Similarly, we check the model's accuracy on both the training and test datasets:
End of explanation
"""
print("error ratio on training set:", err_ratio(nn, iris))
tests = [([5.0, 3.1, 0.9, 0.1], 0),
         ([5.1, 3.5, 1.0, 0.0], 0),
         ([4.9, 3.3, 1.1, 0.1], 0),
         ([6.0, 3.0, 4.0, 1.1], 1),
         ([6.1, 2.2, 3.5, 1.0], 1),
         ([5.9, 2.5, 3.3, 1.1], 1),
         ([7.5, 4.1, 6.2, 2.3], 2),
         ([7.3, 4.0, 6.1, 2.4], 2),
         ([7.0, 3.3, 6.1, 2.5], 2)]
print("accuracy on test set:", grade_learner(nn, tests))
"""
Explanation: We can see that the error ratio on the training set is smaller than that of the perceptron learner. As the error ratio is relatively small, let's try the model on the MNIST dataset to see whether there will be a larger difference.
End of explanation
"""
nn = neural_net_learner(mnist, epochs=100, verbose=10)
print(err_ratio(nn, mnist))
"""
Explanation: 
MTgeophysics/mtpy
examples/workshop/Workshop Exercises Core.ipynb
gpl-3.0
# import required modules from mtpy.core.mt import MT # Define the path to your edi file edi_file = "C:/mtpywin/mtpy/examples/data/edi_files_2/Synth00.edi" # Create an MT object mt_obj = MT(edi_file) """ Explanation: Introduction This workbook contains some examples for reading, analysing and plotting processed MT data. It covers most of the steps available in MTPy. For more details on specific input parameters and other functionality, we recommend looking at the mtpy documentation, which can be found at: https://mtpy2.readthedocs.io/en/develop/. This workbook is structured according to some of the key modules in MTPy: Core, Analysis, Imaging, and Modeling. Getting Started To start with, you will need to make sure MTPy is installed and is working correctly. Please see the installation guide (https://github.com/MTgeophysics/mtpy/wiki/MTPy-installation-guide-for-Windows-10-and-Ubuntu-18.04) for details. Before you begin these examples, we suggest you make a temporary folder (e.g. C:/tmp) to save all example outputs. Useful tricks and tips This workbook exists as a Jupyter notebook and a pdf. If you are running the Jupyter notebook, you can run each of the cells, modifying the inputs to suit your requirements. Most of these examples have been written to be self contained. In Jupyter, you can add the following line to the top of any cell and it will write the contents of that cell to a python script: %%writefile example.py You can also select multiple cells and copy them to a new Jupyter notebook. Many of the examples below make use of the matplotlib colour maps. Please see https://matplotlib.org/examples/color/colormaps_reference.html for colour map options. Core These first few examples cover some of the basic functions and tools that can be used to look at data contained in an edi file, plot it, and make changes (e.g. sample onto different frequencies). 
Read an edi file into an MT object End of explanation """ # To see the latitude and longitude print(mt_obj.lat, mt_obj.lon) # To see the easting, northing, and elevation print(mt_obj.east, mt_obj.north, mt_obj.elev) """ Explanation: The mt_obj contains all the data from the edi file, e.g. impedance, tipper, frequency as well as station information (lat/long). To look at any of these parameters you can type, for example: End of explanation """ # for example, to see the frequency values represented in the impedance tensor: print(mt_obj.Z.freq) # or to see the impedance tensor (first 4 elements) print(mt_obj.Z.z[:4]) # or the resistivity or phase (first 4 values) print(mt_obj.Z.resistivity[:4]) print(mt_obj.Z.phase[:4]) """ Explanation: There are many other parameters you can look at in the mt_obj. Just type mt_obj.[TAB] to see what is available. In the MT object are the Z and Tipper objects (mt_obj.Z; mt_obj.Tipper). These contain all information related to, respectively, the impedance tensor and the tipper. End of explanation """ # import required modules from mtpy.core.mt import MT import os # Define the path to your edi file and save path edi_file = "C:/mtpywin/mtpy/examples/data/edi_files_2/Synth00.edi" savepath = r"C:/tmp" # Create an MT object mt_obj = MT(edi_file) # To plot the edi file we read in in Part 1 & save to file: pt_obj = mt_obj.plot_mt_response(plot_num=1, # 1 = yx and xy; 2 = all 4 components # 3 = off diagonal + determinant plot_tipper = 'yri', plot_pt = 'y' # plot phase tensor 'y' or 'n' ) #pt_obj.save_plot(os.path.join(savepath,"Synth00.png"), fig_dpi=400) """ Explanation: As with the MT object, you can explore the object by typing mt_obj.Z.[TAB] to see the available attributes. Plot an edi file In this example we plot MT data from an edi file. 
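The resistivity and phase curves that plot_mt_response draws come from the impedance tensor via the standard conversion. A hand-rolled sketch of that conversion for a single impedance element — assuming the usual MT field units of mV/km per nT, and not using mtpy's own internals:

```python
import numpy as np

def z_to_res_phase(z, freq):
    """Apparent resistivity (ohm-m) and impedance phase (degrees) for one
    impedance tensor element, assuming field units of mV/km per nT."""
    res = 0.2 / freq * np.abs(z) ** 2                 # rho_a = 0.2 |Z|^2 / f
    phase = np.degrees(np.arctan2(z.imag, z.real))    # phase angle of Z
    return res, phase

# illustrative impedance element at 1 Hz (made-up value, not from an edi file)
res, phase = z_to_res_phase(1.0 + 1.0j, 1.0)
```

mtpy computes these arrays for you as mt_obj.Z.resistivity and mt_obj.Z.phase, so this is only to show where those numbers come from.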
End of explanation """ # import required modules from mtpy.core.mt import MT import os # Define the path to your edi file and save path edi_file = r"C:/mtpywin/mtpy/examples/data/edi_files_2/Synth00.edi" savepath = r"C:/tmp" # Create an MT object mt_obj = MT(edi_file) # First, define a frequency array: # Every second frequency: new_freq_list = mt_obj.Z.freq[::2] # OR 5 periods per decade from 10^-4 to 10^3 seconds from mtpy.utils.calculator import get_period_list new_freq_list = 1./get_period_list(1e-4,1e3,5) # Create new Z and Tipper objects containing interpolated data new_Z_obj, new_Tipper_obj = mt_obj.interpolate(new_freq_list) # Write a new edi file using the new data mt_obj.write_mt_file( save_dir=savepath, fn_basename='Synth00_5ppd', file_type='edi', new_Z_obj=new_Z_obj, # provide a z object to update the data new_Tipper_obj=new_Tipper_obj, # provide a tipper object longitude_format='LONG', # write longitudes as 'LONG' not ‘LON’ latlon_format='dd'# write as decimal degrees (any other input # will write as degrees:minutes:seconds ) """ Explanation: Make some change to the data and save to a new file This example demonstrates how to resample the data onto new frequency values and write to a new edi file. In the example below, you can either choose every second frequency or resample onto five periods per decade. To do this we need to make a new Z object, and save it to a file. End of explanation """
graphistry/pygraphistry
demos/demos_by_use_case/logs/malware-hypergraph/Malware Hypergraph.ipynb
bsd-3-clause
import pandas as pd
import graphistry as g
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
df = pd.read_csv('./barncat.1k.csv', encoding = "utf8")
print("# samples", len(df))
eval(df[:10]['value'].tolist()[0])
# avoid double counting
df3 = df[df['value'].str.contains("{")]
df3[:1]
# Unpack 'value' json
import json
df4 = pd.concat([df3.drop('value', axis=1), df3.value.apply(json.loads).apply(pd.Series)])
len(df4)
df4[:1]
"""
Explanation: Finding Correlations in a CSV of Malware Events via Hypergraph Views
To find patterns and outliers in CSVs and event data, Graphistry provides the hypergraph transform.
As an example, this notebook examines different malware files reported to a security vendor. It reveals phenomena such as:
The malware files cluster into several families
The nodes central to a cluster reveal attributes specific to a strain of malware
The nodes bordering a cluster reveal attributes that show up in a strain, but are unique to each instance in that strain
Several families have attributes connecting them, suggesting they had the same authors
Load CSV
End of explanation
"""
g.hypergraph(df4)['graph'].plot()
"""
Explanation: Default Hypergraph Transform
The hypergraph transform creates:
A node for every row,
A node for every unique value in a column (so multiple nodes if the same value is found across columns)
An edge connecting a row to its values
When multiple rows share similar values, they will cluster together. When a row has unique values, those will form a ring around only that node.
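The row-to-value edge construction described here can be mimicked with a few lines of pandas — a simplified sketch of the idea only, not pygraphistry's actual implementation (which also handles typing, categories, and deduplication):

```python
import pandas as pd

def hypergraph_edges(df, skip=()):
    """One edge per (row, column value); shared values become shared nodes."""
    edges = df.drop(columns=list(skip), errors='ignore').copy()
    edges['event_id'] = edges.index.astype(str)          # one node per row
    long = edges.melt(id_vars='event_id', var_name='column',
                      value_name='value').dropna(subset=['value'])
    # node id combines column and value, so the same hash in one column collapses
    # to one node while equal values in different columns stay distinct
    long['node'] = long['column'] + '::' + long['value'].astype(str)
    return long[['event_id', 'node']]

# tiny made-up sample: two events share an md5, two share a country
sample = pd.DataFrame({'md5': ['a1', 'a1', 'b2'], 'country': ['US', 'DE', 'US']})
edges = hypergraph_edges(sample)
```

Rows sharing a value get edges into the same value node, which is exactly what makes related events cluster in the rendered graph.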
End of explanation """ g.hypergraph( df4, opts={ 'CATEGORIES': { 'hash': ['sha1', 'sha256', 'md5'], 'section': [x for x in df4.columns if 'section_' in x] }, 'SKIP': ['event_id', 'InstallFlag', 'type', 'val', 'Date', 'date', 'Port', 'FTPPort', 'Origin', 'category', 'comment', 'to_ids'] })['graph'].plot() """ Explanation: Configured Hypergraph Transform We clean up the visualization in a few ways: Categorize hash codes as in the same family. This simplifies coloring by the generated 'category' field. If columns share the same value, such as two columns using md5 values, this would also cause them to only create 1 node per hash, instead of per-column instance. Not show a lot of attributes as nodes, such as numbers and dates Running help(graphistry.hypergraph) reveals more options. End of explanation """ g.hypergraph( df4, direct=True, opts={ 'CATEGORIES': { 'hash': ['sha1', 'sha256', 'md5'], 'section': [x for x in df4.columns if 'section_' in x] }, 'SKIP': ['event_id', 'InstallFlag', 'type', 'val', 'Date', 'date', 'Port', 'FTPPort', 'Origin', 'category', 'comment', 'to_ids'] })['graph'].plot() """ Explanation: Directly connecting metadata Do not show actual malware instance nodes End of explanation """
quantopian/research_public
notebooks/lectures/CAPM_and_Arbitrage_Pricing_Theory/notebook.ipynb
apache-2.0
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels import regression
import matplotlib.pyplot as plt
"""
Explanation: The Capital Asset Pricing Model and Arbitrage Pricing Theory
by Beha Abasi, Maxwell Margenot, and Delaney Granizo-Mackenzie
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
https://github.com/quantopian/research_public
The Capital Asset Pricing Model (CAPM) is a classic measure of the cost of capital. It is often used in finance to evaluate the price of assets and to assess the impact of the risk premium from the market at large. In this lecture, we discuss the CAPM and the more general Arbitrage Pricing Theory (APT) to form a basis for evaluating the risk associated with various factors.
End of explanation
"""
start_date = '2014-01-01'
end_date = '2014-12-31'
# choose stock
R = get_pricing('AAPL', fields='price', start_date=start_date, end_date=end_date).pct_change()[1:]
# risk-free proxy
R_F = get_pricing('BIL', fields='price', start_date=start_date, end_date=end_date).pct_change()[1:]
# find its beta against the market
M = get_pricing('SPY', start_date=start_date, end_date=end_date, fields='price').pct_change()[1:]
AAPL_results = regression.linear_model.OLS(R-R_F, sm.add_constant(M)).fit()
AAPL_beta = AAPL_results.params[1]
M.plot()
R.plot()
R_F.plot()
plt.xlabel('Time')
plt.ylabel('Daily Percent Return')
plt.legend();
AAPL_results.summary()
"""
Explanation: Idiosyncratic and Systematic Risk
In general, portfolios and assets can face two types of risk: idiosyncratic and systematic risk. Idiosyncratic risk refers to risks that are firm-specific and can be diversified away, such as a management change or faulty production, while systematic risk is market-wide and affects all market participants. An example could be a slowing of the economy or a change in the interest rate. Because all firms are exposed to systematic risk, it cannot be diversified away.
Risk Premia As the number of assets in a portfolio increases, many of the idiosyncratic risks cancel out and are diversified away. This is the key reason why we want to avoid position concentration risk. As your portfolio grows larger and makes more independent bets through diversification, the variance of the portfolio declines until only the systematic risk remains. As we cannot remove systematic risk, investors must be given a risk premium above the rate of risk-free return to compensate them for the risk they take on by investing in this portfolio. The individual firm-level risks in this portfolio do not have associated premia as this would create arbitrage opportunities. Shareholders could collect the risk premium while diversifying away the risk associated with them. That would mean additional profit without any additional exposure. This is the definition of an arbitrage opportunity! From this reasoning we can conclude that the premium on an asset should have no relation to its idiosyncratic risk, but should instead rely solely on the level of systematic risk it carries. In order to accurately compute the risk premium of an asset, and consequently our expected return, we need to find a measure of systematic risk. If we have that, then we can theoretically define the return of an asset in the following way: $$E[\mbox{Return}] = \mbox{Risk-Free Rate of Return} + \mbox{Risk Premium}$$ One way to do this is to estimate how changes in the excess return of an asset are related to changes in the excess return of the market. Expressing this as a linear regression gives us the relationship as the change in expected return of an asset for each 1% change in the return of the market portfolio. In theory, this market portfolio should have no diversifiable risk left and would therefore only fluctuate with systematic shocks. In practice, we use a market index such as the S&P500 as a proxy for the market portfolio. 
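The diversification effect described above — idiosyncratic variance shrinking as more independent bets are added while the systematic part stays put — can be illustrated with a one-line variance model (toy numbers, equal weights, assuming a single common shock and independent firm-level noise):

```python
def equal_weight_variance(n_assets, idio_var=0.04, sys_var=0.01):
    """Variance of an equally weighted portfolio whose assets share one
    systematic shock (variance sys_var) plus independent idiosyncratic noise."""
    # the common shock hits every asset identically, so it never diversifies away;
    # the independent idiosyncratic part shrinks like 1/N
    return sys_var + idio_var / n_assets

few, many = equal_weight_variance(2), equal_weight_variance(200)
```

As n_assets grows, the portfolio variance approaches the systematic floor sys_var — which is why only systematic risk can command a premium.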
The beta that we get from regressing an asset's returns on the returns of the market will be our measure of systematic risk. This beta represents the sensitivity of an asset's return stream to market-wide shocks. Given this beta, the risk premium of asset $i$ is defined as: $$\mbox{Risk Premium of Asset}_i = \beta (\mbox{Market Risk Premium})$$ We call this simplistic model the Capital Asset Pricing Model (CAPM). Capital Asset Pricing Theory We can express the CAPM more clearly like so: $$E[R_i] = R_F + \beta(E[R_M] - R_F)$$ where $R_i$ is the return of asset $i$, $R_F$ is the risk-free rate, and $R_M$ is the return of the market. The CAPM is one of the most basic measures of the cost of capital. It determines the minimum return required to entice investors to hold a certain asset. To put it another way, CAPM says that the return of an asset should be the risk-free rate, which is what we would demand to account for inflation and the time value of money, as well as something extra to compensate us for the amount of systematic risk we are exposed to. End of explanation """ predictions = R_F + AAPL_beta*(M - R_F) # CAPM equation predictions.plot() R.plot(color='Y') plt.legend(['Prediction', 'Actual Return']) plt.xlabel('Time') plt.ylabel('Daily Percent Return'); """ Explanation: We can then use our calculated beta exposure to make predictions of returns. 
End of explanation """ from scipy import optimize import cvxopt as opt from cvxopt import blas, solvers np.random.seed(123) # Turn off progress printing solvers.options['show_progress'] = False # Number of assets n_assets = 4 # Number of observations n_obs = 2000 ## Generating random returns for our 4 securities return_vec = np.random.randn(n_assets, n_obs) def rand_weights(n): ''' Produces n random weights that sum to 1 ''' k = np.random.rand(n) return k / sum(k) def random_portfolio(returns): ''' Returns the mean and standard deviation of returns for a random portfolio ''' p = np.asmatrix(np.mean(returns, axis=1)) w = np.asmatrix(rand_weights(returns.shape[0])) C = np.asmatrix(np.cov(returns)) mu = w * p.T sigma = np.sqrt(w * C * w.T) # This recursion reduces outliers to keep plots pretty if sigma > 2: return random_portfolio(returns) return mu, sigma def optimal_portfolios(returns): n = len(returns) returns = np.asmatrix(returns) N = 100000 # Creating a list of returns to optimize the risk for mus = [100**(5.0 * t/N - 1.0) for t in range(N)] # Convert to cvxopt matrices S = opt.matrix(np.cov(returns)) pbar = opt.matrix(np.mean(returns, axis=1)) # Create constraint matrices G = -opt.matrix(np.eye(n)) # negative n x n identity matrix h = opt.matrix(0.0, (n ,1)) A = opt.matrix(1.0, (1, n)) b = opt.matrix(1.0) # Calculate efficient frontier weights using quadratic programming portfolios = [solvers.qp(mu*S, -pbar, G, h, A, b)['x'] for mu in mus] ## Calculate the risk and returns of the frontier returns = [blas.dot(pbar, x) for x in portfolios] risks = [np.sqrt(blas.dot(x, S*x)) for x in portfolios] return returns, risks n_portfolios = 50000 means, stds = np.column_stack([random_portfolio(return_vec) for x in range(n_portfolios)]) returns, risks = optimal_portfolios(return_vec) plt.plot(stds, means, 'o', markersize=2, color='navy') plt.xlabel('Risk') plt.ylabel('Return') plt.title('Mean and Standard Deviation of Returns of Randomly Generated Portfolios'); 
plt.plot(risks, returns, '-', markersize=3, color='red'); plt.legend(['Portfolios', 'Efficient Frontier']); """ Explanation: CAPM Assumptions In our derivation of the CAPM, we made two main assumptions: * We assumed that investors are able to trade without delay or cost and that everyone is able to borrow or lend money at the risk free rate. * We assumed that all investors are "mean-variance optimizers". What this essentially means is that they would only demand portfolios that have the highest return attainable for a given level of risk. These portfolios are all found along the efficient frontier. The following is a programmatic derivation of the efficient frontier for portfolios of four assets. End of explanation """ def maximize_sharpe_ratio(return_vec, risk_free_rate): """ Finds the CAPM optimal portfolio from the efficient frontier by optimizing the Sharpe ratio. """ def find_sharpe(weights): means = [np.mean(asset) for asset in return_vec] numerator = sum(weights[m]*means[m] for m in range(len(means))) - risk_free_rate weight = np.array(weights) denominator = np.sqrt(weights.T.dot(np.corrcoef(return_vec).dot(weights))) return numerator/denominator guess = np.ones(len(return_vec)) / len(return_vec) def objective(weights): return -find_sharpe(weights) # Set up equality constrained cons = {'type':'eq', 'fun': lambda x: np.sum(np.abs(x)) - 1} # Set up bounds for individual weights bnds = [(0, 1)] * len(return_vec) results = optimize.minimize(objective, guess, constraints=cons, bounds=bnds, method='SLSQP', options={'disp': False}) return results risk_free_rate = np.mean(R_F) results = maximize_sharpe_ratio(return_vec, risk_free_rate) # Applying the optimal weights to each assset to get build portfolio optimal_mean = sum(results.x[i]*np.mean(return_vec[i]) for i in range(len(results.x))) optimal_std = np.sqrt(results.x.T.dot(np.corrcoef(return_vec).dot(results.x))) # Plot of all possible portfolios plt.plot(stds, means, 'o', markersize=2, color='navy') 
plt.ylabel('Return') plt.xlabel('Risk') # Line from the risk-free rate to the optimal portfolio eqn_of_the_line = lambda x : ( (optimal_mean-risk_free_rate) / optimal_std ) * x + risk_free_rate xrange = np.linspace(0., 1., num=11) plt.plot(xrange, [eqn_of_the_line(x) for x in xrange], color='red', linestyle='-', linewidth=2) # Our optimal portfolio plt.plot([optimal_std], [optimal_mean], marker='o', markersize=12, color="navy") plt.legend(['Portfolios', 'Capital Allocation Line', 'Optimal Portfolio']); """ Explanation: Each blue dot represents a different portfolio, while the red line skimming the outside of the cloud is the efficient frontier. The efficient frontier contains all portfolios that are the best for a given level of risk. The optimal, or most efficient, portfolio on this line is found by maximizing the Sharpe ratio, the ratio of excess return and volatility. We use this to determine the portfolio with the best risk-to-reward tradeoff. The line that represents the different combinations of a risk-free asset with a portfolio of risky assets is known as the Capital Allocations Line (CAL). The slope of the CAL is the Sharpe ratio. To maximize the Sharpe ratio, we need to find the steepest CAL, which coincides with the CAL that is tangential to the efficient frontier. This is why the efficient portfolio is sometimes referred to as the tangent portfolio. End of explanation """ for a in range(len(return_vec)): print "Return and Risk of Asset", a, ":", np.mean(return_vec[a]), ",",np.std(return_vec[a]) print "Return and Risk of Optimal Portfolio", optimal_mean, optimal_std """ Explanation: We can look at the returns and risk of the individual assets compared to the optimal portfolio we found to easily showcase the power of diversification. 
End of explanation """ risk_free_rate = np.mean(R_F) # We have two coordinates that we use to map the SML: (0, risk-free rate) and (1, market return) eqn_of_the_line = lambda x : ( (np.mean(M)-risk_free_rate) / 1.0) * x + risk_free_rate xrange = np.linspace(0., 2.5, num=2) plt.plot(xrange, [eqn_of_the_line(x) for x in xrange], color='red', linestyle='-', linewidth=2) plt.plot([1], [np.mean(M)], marker='o', color='navy', markersize=10) plt.annotate('Market', xy=(1, np.mean(M)), xytext=(0.9, np.mean(M)+0.00004)) # Next, we will compare to see whether stocks in more cyclical industries have higher betas # Of course, a more thorough analysis is required to rigorously answer this question # Non-Cyclical Industry Stocks non_cyclical = ['PG', 'DUK', 'PFE'] non_cyclical_returns = get_pricing( non_cyclical, fields='price', start_date=start_date, end_date=end_date ).pct_change()[1:] non_cyclical_returns.columns = map(lambda x: x.symbol, non_cyclical_returns.columns) non_cyclical_betas = [ regression.linear_model.OLS( non_cyclical_returns[asset], sm.add_constant(M) ).fit().params[1] for asset in non_cyclical ] for asset, beta in zip(non_cyclical, non_cyclical_betas): plt.plot([beta], [np.mean(non_cyclical_returns[asset])], marker='o', color='g', markersize=10) plt.annotate( asset, xy=(beta, np.mean(non_cyclical_returns[asset])), xytext=(beta + 0.015, np.mean(non_cyclical_returns[asset]) + 0.000025) ) # Cyclical Industry Stocks cyclical = ['RIO', 'SPG', 'ING'] cyclical_returns = get_pricing( cyclical, fields='price', start_date=start_date, end_date=end_date ).pct_change()[1:] cyclical_returns.columns = map(lambda x: x.symbol, cyclical_returns.columns) cyclical_betas = [ regression.linear_model.OLS( cyclical_returns[asset], sm.add_constant(M) ).fit().params[1] for asset in cyclical ] for asset, beta in zip(cyclical, cyclical_betas): plt.plot([beta], [np.mean(cyclical_returns[asset])], marker='o', color='y', markersize=10) plt.annotate( asset, xy=(beta, 
np.mean(cyclical_returns[asset])), xytext=(beta + 0.015, np.mean(cyclical_returns[asset]) + 0.000025) ) # drawing the alpha, which is the difference between expected return and the actual return plt.plot( [cyclical_betas[2], cyclical_betas[2]], [np.mean(cyclical_returns.iloc[:, 2]), eqn_of_the_line(cyclical_betas[2])], color='grey' ) plt.annotate( 'Alpha', xy=( cyclical_betas[2] + 0.05, (eqn_of_the_line(cyclical_betas[2])-np.mean(cyclical_returns.iloc[:,2]))/2+np.mean(cyclical_returns.iloc[:,2]) ), xytext=( cyclical_betas[2] + 0.05, (eqn_of_the_line(cyclical_betas[2])-np.mean(cyclical_returns.iloc[:,2]))/2+np.mean(cyclical_returns.iloc[:,2]) ) ) plt.xlabel("Beta") plt.ylabel("Return") plt.legend(['Security Market Line']); """ Explanation: Capital Market Line is CAL through market portfolio Our optimal portfolio has a decently high return as well as less risk than any individual asset, as expected. Theoeretically, all investors should demand this optimal, tangent portfolio. If we accumulate the portfolios of all investors, we end up with the market portfolio, since all shares must be held by someone. This means that the tangency portfolio is the market portfolio, essentially saying that demand must equal supply. When a risk-free asset is added to the portfolio, the Capital Asset Line turns into the Capital Market Line (CML). According to the CAPM, any stock or portfolio that lies to the right of CML would contain diversifiable risk and is therefore not efficient. The mapping of each security's beta to its expected return results in the Security Markets Line. The difference between a security's return and the expected return as predicted by CAPM is known as the alpha. 
End of explanation """ from quantopian.pipeline import Pipeline from quantopian.pipeline.data import Fundamentals from quantopian.pipeline.factors import Returns, Latest from quantopian.pipeline.filters import Q1500US from quantopian.research import run_pipeline from quantopian.pipeline.classifiers.fundamentals import Sector import itertools """ Explanation: For more details on the CAPM, check out the wikipedia page. Arbitrage Pricing Theory The CAPM, while widely used and studied, has many drawbacks. With strict, limiting assumptions, it does not hold up well in empirical tests. Arbitrage Pricing Theory (APT) aims to generalize the CAPM model, as assets may be exposed to classes of risks other than the market risk and investors may care about things other than just the mean and variance. APT is a major asset pricing theory that relies on expressing the returns using a linear factor model: $$R_i = a_i + b_{i1} F_1 + b_{i2} F_2 + \ldots + b_{iK} F_K + \epsilon_i$$ A factor is a return stream that is determined completely by some characteristic. For example, the CAPM has only one factor, market return. If we have modelled our rate of return as above, then the expected returns should take the form of: $$ E(R_i) = R_F + b_{i1} \lambda_1 + b_{i2} \lambda_2 + \ldots + b_{iK} \lambda_K $$ where $R_F$ is the risk-free rate, and $\lambda_j$ is the risk premium - the return in excess of the risk-free rate - for factor $j$. This premium arises because investors require higher returns to compensate them for incurring higher risk. We'll compute the risk premia for our factors with Fama-Macbeth regression. However, there are various ways to compute each $\lambda_j$! Arbitrage Now that we have a reasonably general way to compute expected return, we can discuss arbitrage more technically. There are generally many, many securities in our universe. If we use different ones to compute the ${\lambda_i}$, will our results be consistent? 
If our results are inconsistent, there is an arbitrage opportunity (in expectation), an operation that earns a profit without incurring risk and with no net investment of money. In this case, we mean that there is a risk-free operation with expected positive return that requires no net investment. It occurs when expectations of returns are inconsistent, i.e. risk is not priced consistently across securities. Say that there is an asset with expected rate of return 0.2 for the next year and a $\beta$ of 1.2 with the market, while the market is expected to have a rate of return of 0.1, and the risk-free rate on 1-year bonds is 0.05. Then the APT model tells us that the expected rate of return on the asset should be $$ R_F + \beta \lambda = 0.05 + 1.2 (0.1 - 0.05) = 0.11$$ This does not agree with the prediction that the asset will have a rate of return of 0.2. So, if we buy \$100 of our asset, short \$120 of the market, and buy \$20 of bonds, we will have invested no net money and are not exposed to any systematic risk (we are market-neutral), but we expect to earn $0.2(100) - 0.1(120) + 0.05(20) = 9$ dollars at the end of the year. The APT assumes that these opportunities will be taken advantage of until prices shift and the arbitrage opportunities disappear. That is, it assumes that there are arbitrageurs who have sufficient amounts of patience and capital. This provides a justification for the use of empirical factor models in pricing securities: if the model was inconsistent, there would be an arbitrage opportunity, and so the prices would adjust. Goes Both Ways Accurately knowing $E(R_i)$ is incredibly difficult, but this model tells us what the expected returns should be if the market is free of arbitrage. This lays the groundwork for strategies based on factor model ranking systems. If you have a model for the expected return of an asset, then you can rank those assets based on their expected performance and use this information to make trades. 
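The arbitrage arithmetic in this example is easy to check directly, using the same numbers as above:

```python
risk_free, market_return = 0.05, 0.10
beta, claimed_return = 1.2, 0.20

# CAPM/APT-consistent expected return for the asset
consistent = risk_free + beta * (market_return - risk_free)   # should be 0.11

# long $100 of the asset, short $120 of the market, long $20 of bonds
positions = {'asset': 100, 'market': -120, 'bonds': 20}
net_investment = sum(positions.values())                      # zero net money down
expected_profit = (claimed_return * positions['asset']
                   + market_return * positions['market']
                   + risk_free * positions['bonds'])          # the $9 in the text
```

The position sizes are chosen so the market betas cancel (100 × 1.2 = 120 × 1.0), which is what makes the trade market-neutral.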
This creation of a ranking scheme is the hallmark of a long-short equity strategy.
Testing Arbitrage Pricing Theory
Most empirical tests of the APT are done in two steps: estimating the betas of individual factors, then comparing predicted to actual prices to see how the predictions fared.
Here we will use the return streams from long-short equity strategies built from various microeconomic indicators as our factors. Then, we will use the Fama-Macbeth regression method to estimate our risk premia.
End of explanation
"""
def make_pipeline():
    pipe = Pipeline()
    # Add our factors to the pipeline
    purchase_of_biz = Latest([Fundamentals.purchase_of_business])
    pipe.add(purchase_of_biz, 'purchase_of_business')
    RD = Latest([Fundamentals.research_and_development])
    pipe.add(RD, 'RD')
    operating_cash_flow = Latest([Fundamentals.operating_cash_flow])
    pipe.add(operating_cash_flow, 'operating_cash_flow')
    # Create factor rankings and add to pipeline
    purchase_of_biz_rank = purchase_of_biz.rank()
    RD_rank = RD.rank()
    operating_cash_flow_rank = operating_cash_flow.rank()
    pipe.add(purchase_of_biz_rank, 'purchase_of_biz_rank')
    pipe.add(RD_rank, 'RD_rank')
    pipe.add(operating_cash_flow_rank, 'operating_cash_flow_rank')
    most_biz_bought = purchase_of_biz_rank.top(1000)
    least_biz_bought = purchase_of_biz_rank.bottom(1000)
    most_RD = RD_rank.top(1000)
    least_RD = RD_rank.bottom(1000)
    most_cash = operating_cash_flow_rank.top(1000)
    least_cash = operating_cash_flow_rank.bottom(1000)
    pipe.add(most_biz_bought, 'most_biz_bought')
    pipe.add(least_biz_bought, 'least_biz_bought')
    pipe.add(most_RD, 'most_RD')
    pipe.add(least_RD, 'least_RD')
    pipe.add(most_cash, 'most_cash')
    pipe.add(least_cash, 'least_cash')
    # We also get daily returns
    returns = Returns(window_length=2)
    # and sector types
    sectors = Sector()
    pipe.add(returns, "Returns")
    # We will focus on technology stocks in the Q1500
    pipe.set_screen(
        (Q1500US() & sectors.eq(311)) &
        most_biz_bought | least_biz_bought |
        most_RD | least_RD |
        most_cash | least_cash
    )
    return pipe
pipe = make_pipeline()
results = run_pipeline(pipe, start_date, end_date)
results.head()
"""
Explanation: Now we use pipeline to get all of our data.
End of explanation """ # putting all of our data from pipeline into a DataFrame for convenience # we'll have to first do some data manipulating since our factor return streams are date specific, # but our asset returns are both date and asset specific data = results[['Returns']].set_index(results.index) asset_list_sizes = [group[1].size for group in data.groupby(level=0)] purchase_of_biz_column = [ [biz_purchase_portfolio.loc[group[0]]] * size for group, size in zip(data.groupby(level=0), asset_list_sizes) ] data['Purchase of Business'] = list(itertools.chain(*purchase_of_biz_column)) RD_column = [ [RD_portfolio.loc[group[0]]] * size for group, size in zip(data.groupby(level=0), asset_list_sizes) ] data['RD'] = list(itertools.chain(*RD_column)) cash_flow_column = [ [cash_flow_portfolio.loc[group[0]]] * size for group, size in zip(data.groupby(level=0), asset_list_sizes) ] data['Operating Cash Flow'] = list(itertools.chain(*cash_flow_column)) data = sm.add_constant(data.dropna()) # Our list of assets from pipeline assets = data.index.levels[1].unique() X = [data.xs(asset, level=1)['Returns'] for asset in assets] Y = [ data.xs(asset, level=1)[['Purchase of Business', 'RD', 'Operating Cash Flow', 'const']] for asset in assets ] # First regression step: estimating the betas reg_results = [ regression.linear_model.OLS(x-risk_free_rate, y).fit().params for x, y in zip(X, Y) if not(x.empty or y.empty) ] indices = [asset for x, y, asset in zip(X, Y, assets) if not(x.empty or y.empty)] betas = pd.DataFrame(reg_results, index=indices) betas = sm.add_constant(betas.drop('const', axis=1)) R = data['Returns'].mean(axis=0, level=1) # Second regression step: estimating the risk premia final_results = regression.linear_model.OLS(R - risk_free_rate, betas).fit() final_results.summary() """ Explanation: Finally, we'll put everything together in our Fama-Macbeth regressions. This occurs in two steps. 
First, for each asset we regress its returns on each factor return stream:
$$R_{1, t} = \alpha_1 + \beta_{1, F_1}F_{1, t} + \beta_{1, F_2}F_{2, t} + \dots + \beta_{1, F_m}F_{m, t} + \epsilon_{1, t} \\
R_{2, t} = \alpha_2 + \beta_{2, F_1}F_{1, t} + \beta_{2, F_2}F_{2, t} + \dots + \beta_{2, F_m}F_{m, t} + \epsilon_{2, t} \\
\vdots \\
R_{n, t} = \alpha_n + \beta_{n, F_1}F_{1, t} + \beta_{n, F_2}F_{2, t} + \dots + \beta_{n, F_m}F_{m, t} + \epsilon_{n, t}$$
Second, we take the beta estimates from the first step and use those as our exogenous variables in an estimate of the mean return of each asset. This step is the calculation of our risk premia, $\{\gamma_K\}$.
$$E(R_i) = \gamma_0 + \gamma_1 \hat{\beta}_{i, F_1} + \gamma_2 \hat{\beta}_{i, F_2} + \dots + \gamma_m \hat{\beta}_{i, F_m} + \epsilon_i$$
End of explanation
"""
# smoke test for multicollinearity
print data[['Purchase of Business', 'RD', 'Operating Cash Flow']].corr()
"""
Explanation: It is imperative that we not just use our model estimates at face value. A scan through the accompanying statistics can be highly insightful about the efficacy of our estimated model. For example, notice that although our individual factors are significant, we have a very low $R^2$. What this may suggest is that there is a real link between our factors and the returns of our assets, but that there still remains a lot of unexplained noise! For a more in-depth look at choosing factors, check out the factor analysis lecture!
End of explanation
"""
# this is our actual model!
expected_return = risk_free_rate \
                + betas['Purchase of Business']*final_results.params[1] \
                + betas['RD']*final_results.params[2] \
                + betas['Operating Cash Flow']*final_results.params[3]
year_of_returns = get_pricing(
    expected_return.index,
    start_date,
    end_date,
    fields='close_price'
).pct_change()[1:]
plt.plot(year_of_returns[expected_return.index[1]], color='purple')
plt.plot(pd.DataFrame({'Expected Return': expected_return.iloc[0]}, index=year_of_returns.index), color='red')
plt.legend(['AAPL Returns', 'APT Prediction']);
# Compare AAPL prediction of CAPM vs. our APT model
M_annual_return = get_pricing('SPY', start_date=start_date, end_date=end_date, fields='price').pct_change()[1:]
# We'll take the market beta we calculated from the beginning of the lecture
CAPM_AAPL_prediction = risk_free_rate + AAPL_beta*(M_annual_return.mean() - risk_free_rate)
# Let's take a closer look
year_of_returns = year_of_returns[:25]
plt.plot(year_of_returns[expected_return.index[1]], color='purple')
plt.plot(pd.DataFrame({'Expected Return': expected_return.iloc[0]}, index=year_of_returns.index), color='red')
plt.plot(pd.DataFrame({'Expected Return': year_of_returns.mean()[0]}, index=year_of_returns.index), color='navy')
plt.plot(pd.DataFrame({'Expected Return': CAPM_AAPL_prediction}, index=year_of_returns.index), color='green')
plt.legend(['AAPL Returns', 'APT Prediction', 'AAPL Average Returns', 'CAPM Prediction']);
"""
Explanation: Now that we have estimates for our risk premia, we can combine these with our beta estimates from our original regression to estimate asset returns.
End of explanation """ market_betas = [ regression.linear_model.OLS(x[1:], sm.add_constant(M_annual_return)).fit().params[1] for x in X if (x[1:].size == M_annual_return.size) ] indices = [asset for x, asset in zip(X, assets) if (x[1:].size == M_annual_return.size)] market_return = pd.DataFrame({'Market': M_annual_return.mean()}, index = indices) CAPM_predictions = risk_free_rate + market_betas*(market_return['Market'] - risk_free_rate) CAPM_predictions.sort_values(inplace=True, ascending=False) CAPM_portfolio = [CAPM_predictions.head(5).index, CAPM_predictions.tail(5).index] CAPM_long = get_pricing( CAPM_portfolio[0], start_date=start_date, end_date=end_date, fields='price' ).pct_change()[1:].mean(axis=1) CAPM_short = get_pricing( CAPM_portfolio[1], start_date=start_date, end_date=end_date, fields='price' ).pct_change()[1:].mean(axis=1) CAPM_returns = CAPM_long - CAPM_short expected_return.sort_values(inplace=True, ascending=False) APT_portfolio = [expected_return.head(5).index, expected_return.tail(5).index] APT_long = get_pricing( APT_portfolio[0], start_date=start_date, end_date=end_date, fields='price' ).pct_change()[1:].mean(axis=1) APT_short = get_pricing( APT_portfolio[1], start_date=start_date, end_date=end_date, fields='price' ).pct_change()[1:].mean(axis=1) APT_returns = APT_long - APT_short plt.plot(CAPM_returns) plt.plot(APT_returns) plt.plot(pd.DataFrame({'Mean Return': CAPM_returns.mean()}, index=CAPM_returns.index)) plt.plot(pd.DataFrame({'Mean Return': APT_returns.mean()}, index=APT_returns.index)) plt.legend(['CAPM Portfolio', 'APT Portfolio', 'CAPM Mean', 'APT Mean']) print "Returns after a year: APT versus CAPM" print ((APT_returns[-1]/APT_returns[0]) - 1) - ((CAPM_returns[-1]/CAPM_returns[0])-1) """ Explanation: Finally, as a rough comparison between APT and CAPM, we'll look at the returns from Long-Short strategies constructed using each model as the ranking scheme. End of explanation """
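The two-pass Fama-MacBeth procedure used in this lecture can also be sketched end-to-end on synthetic data with plain numpy. Every number below (periods, assets, premia, noise levels) is made up purely for illustration and is not taken from the lecture's dataset:

```python
import numpy as np

np.random.seed(0)
T, n, m = 2000, 40, 2                  # periods, assets, factors (all invented)
true_premia = np.array([0.05, -0.02])  # mean factor returns play the role of premia
F = true_premia + 0.5 * np.random.randn(T, m)    # factor return streams
B = np.random.randn(n, m)                        # true factor loadings
R = F.dot(B.T) + 0.1 * np.random.randn(T, n)     # asset returns

# First pass: time-series OLS of each asset's returns on the factor returns
X1 = np.column_stack([np.ones(T), F])
betas_hat = np.array([np.linalg.lstsq(X1, R[:, i], rcond=None)[0][1:]
                      for i in range(n)])

# Second pass: cross-sectional OLS of mean asset returns on the estimated betas
X2 = np.column_stack([np.ones(n), betas_hat])
gamma = np.linalg.lstsq(X2, R.mean(axis=0), rcond=None)[0]
premia_hat = gamma[1:]  # should land close to true_premia
```

With enough periods the first pass pins down the betas, so the second-pass premia recover the mean factor returns; in the lecture, the same roles are played by the long-short return streams and the pipeline assets.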
reidmcy/MACS30200proj
ProblemSets/PS3/PS3.ipynb
gpl-2.0
import numpy as np import pandas import statsmodels import statsmodels.formula.api import statsmodels.stats.api import statsmodels.stats import statsmodels.stats.outliers_influence import statsmodels.graphics.regressionplots import sklearn.preprocessing import matplotlib.pyplot as plt import seaborn %matplotlib inline np.random.seed(seed=1234) bidenFname = 'data/biden.csv' df = pandas.read_csv(bidenFname).dropna() """ Explanation: Problem set #3: Regression diagnostics, interaction terms, and missing data End of explanation """ model1 = statsmodels.formula.api.ols('biden ~ female + age + educ', data=df).fit() print(model1.summary()) """ Explanation: Regression diagnostics End of explanation """ outliersDf = statsmodels.stats.outliers_influence.OLSInfluence(model1).summary_frame() outliersDf.max() outliersDf.min() """ Explanation: Part 1 End of explanation """ fig, ax = plt.subplots(figsize = (20, 7)) fig = statsmodels.graphics.regressionplots.plot_leverage_resid2(model1, ax = ax) plt.show() fig, axes = plt.subplots(ncols=2, figsize = (20, 7)) outliersDf[['dfb_Intercept', 'dfb_female', 'dfb_age', 'dfb_educ', 'cooks_d']].boxplot(ax = axes[0]) axes[0].set_title('$DFBETA$ and Cook\'s D boxplots') outliersDf[['cooks_d']].plot(ax = axes[1]) axes[1].set_title('Cook\'s D per point') plt.show() """ Explanation: We can see fomt the tables above that $DFBETA$ values for some of the data points are quite signifcant and at least one has a cook's D much greater than $4/n$. End of explanation """ names = ['$\chi^2_2$', 'p-value', 'Skew', 'Kurtosis'] test = statsmodels.stats.api.jarque_bera(model1.resid) nonNormDF = pandas.DataFrame({n : [test[i]] for i, n in enumerate(names)}) nonNormDF """ Explanation: Plotting them shows that there are a fair number of influential points (the point labels are original indices of the points). We would first need to determine the criteria for classifying them as outliers. Cook's D above a certain value would be a good starting point. 
Once we identify them we could drop them. We could also check if they are only influential in one dimension and normalize that dimension to the mean or some other less significant value, while keeping the other values.
Part 2
End of explanation
"""
names = ['$\chi^2_2$', 'p-value', 'Skew', 'Kurtosis']
test = statsmodels.stats.api.jarque_bera(model1.resid)
nonNormDF = pandas.DataFrame({n : [test[i]] for i, n in enumerate(names)})
nonNormDF
"""
Explanation: As shown by the large $\chi^2_2$ value of the Jarque-Bera test, the p-value is much too low for the errors to be normally distributed. The fix for this depends on the distribution of the errors: there may be a simple transform that makes them normal, in which case we can apply it. If not, we may need to rethink our regression.
Part 3
End of explanation
"""
names = ['Lagrange multiplier statistic', 'p-value', 'f-value', 'f p-value']
test = statsmodels.stats.api.het_breushpagan(model1.resid, model1.model.exog)
heteroTestDF = pandas.DataFrame({n : [test[i]] for i, n in enumerate(names)})
heteroTestDF
"""
Explanation: As shown in the table, the Breusch–Pagan test indicates (p-value $< .05$) that there is some heteroskedasticity in the data. This could greatly affect our inference, since some regions have lower error than others and as such our accuracy is dependent on the input.
Part 4
End of explanation
"""
names = ['Intercept','female','age','educ']
multicollinearityDF = pandas.DataFrame({n : [model1.eigenvals[i]] for i, n in enumerate(names)})
multicollinearityDF
"""
Explanation: By looking at the eigenvalues of the correlation matrix we can see that there is likely no multicollinearity, since they are all quite large and thus independent.
Interaction terms
End of explanation
"""
model2 = statsmodels.formula.api.ols('biden ~ age + educ + age * educ', data=df).fit()
print(model2.summary())
"""
Explanation: Part 1
From this table we can see that the marginal effect of $age$ on $biden$ is $0.6719 - 0.0480educ$, thus it is positive for $educ < \frac{0.6719}{0.0480} \sim 14$ and negative for other values of $educ$.
End of explanation
"""
print(model2.wald_test('age + age:educ').summary())
"""
Explanation: We can see from the Wald test for this marginal effect that the null hypothesis of no effect is soundly defeated with $p < .05$.
Part 2
From the summary table we can see that the marginal effect of $educ$ (education) on $biden$ is $1.6574 - 0.0480age$, thus it is positive for $age < \frac{1.6574}{0.0480} \sim 35$ and negative for other values of $age$.
End of explanation """ errsDF = pandas.DataFrame([model1.bse, modelMean.bse,modelMedian.bse], index =['Base', 'Mean', 'Median']) errsDF errsDF.plot(logy=1) plt.title('Errors per model') plt.show() """ Explanation: Table of parameters for the base (row wise dropping) and imputed models End of explanation """
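The mean/median filling that Imputer performs above can be sketched column-wise with plain numpy. The toy array and its values here are made up, and this is only the idea, not sklearn's implementation:

```python
import numpy as np

def impute(X, strategy='mean'):
    """Replace each NaN with its column's mean or median (NaNs ignored)."""
    stat = np.nanmean(X, axis=0) if strategy == 'mean' else np.nanmedian(X, axis=0)
    out = X.copy()
    rows, cols = np.where(np.isnan(out))
    out[rows, cols] = stat[cols]
    return out

X = np.array([[1.0, 10.0],
              [np.nan, 20.0],
              [3.0, np.nan],
              [5.0, 40.0]])
X_mean = impute(X, 'mean')      # column means of the observed values: 3.0 and 70/3
X_median = impute(X, 'median')  # column medians: 3.0 and 20.0
```

Row-wise dropping (the `dropna()` baseline) discards whole observations, while these fills keep every row at the cost of shrinking the apparent variance — which is exactly why the standard errors of the imputed models deserve a look.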
cosmostatschool/MACSS2017
pre-school/NumCompTools/Seb_MACSS2017_python.ipynb
mit
a = 1
b = 2.67
c = "Vamos PSG"
d = True
print type(a), type(b), type(c), type(d)
"""
Explanation: Generalities
Author : Sebastien Fromenteau
context : Mexican AstroCosmology Statistics School 2017
This notebook is partially based on the notebook written by Iván Rodríguez Montoya for MACSS 2016
I will not focus on lists and the pure-python parts which do not use the Numpy library, considering that most of the stuff we need in high energy physics is included in it. Moreover, the benchmark of pure python (without cython or numpy) is pretty bad.
The idea of these 4 hours is to understand the following:
-Types of variables (int, float, string, bool, complex.... )
-use modules (we will principally use numpy, scipy and matplotlib)
-use logical tests : ==, <, >, <=, >=, !=, is, is not + (not, or, and)
-use loops : for, while
-use conditionals : if, then, else
-use arrays (declare, move on it, use numpy matrix utilities)
-generate randoms
-read and save a file using numpy.loadtxt and numpy.savetxt
-generate plots with matplotlib.pyplot (plot, semilog, loglog, scatter, fillbetween, histogram.....)
-define a function and use it
First step
A language uses types of variables. In python, like in many other languages, typing is implicit. You can declare a variable just by associating a value with it.
End of explanation
"""
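A small, hedged illustration of the implicit typing described above (pure stdlib, independent of the notebook's variables): rebinding a name silently changes its type, and type()/isinstance() let you check what you actually have.

```python
x = 1
assert type(x) is int
x = 2.67               # same name, rebound to a float
assert isinstance(x, float)
x = "Vamos PSG"        # and now to a string
assert isinstance(x, str)
x = 4. + 67j           # finally a complex number
assert isinstance(x, complex)
```

This is the flexibility that makes clear variable names so important: nothing in the language stops you from rebinding a name to a completely different type.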
Sometimes, the error will not be straightforward to detect (and maybe you will not see that there is a problem). So, you have to choose names which are clear enough. For example, all my arrays have a name like 'dist_arr', 'energy_arr', 'x_arr'..... and I try to comment the reason of this variable: End of explanation """ mylist = ['a', 'psg', [1,2,3], True] print type(mylist) print print "We can see the type of each eelement of the list" print for i in range(len(mylist)) : print type(mylist[i]) ###print the type of each element : see below to "for" explainations """ Explanation: List and tupples I have to introduce list but I will never really use it. A list is list of objects which can have different type. It can be useful for some stuff but in general I do not use it and it's a very bad idea to use it like an array creating a list of float-type elements. It is powerful to manage data structure with diferent type and size elements (more like a linked chain). It can look similar to an array but is totally different. An array, is well difined in term of size since the beginning and can manage one type at once. Moreover, the storage in contigous (compact). if you know the position in the memory (adress) of the first element and the type, then you can move inside the array changing memory adress pointer like: array[ i ] = contain_of(adress(array[0]) + i * size_of(type(array[0])) ) When you manage big data, you have to manage the RAM storage (memory) as well as the cpu time. List and arrays are very different for these two concerns. Finally, you can use very eficient way to manipulate arrays to do calculations that you can not do with lists. So, to calculation stuff it will be more useful to focus on numpy.ndarray type here. By the way, the list constructor is '[ ]'. 
A list definition is as easy as: End of explanation """ ##### Print the initial list mylist = [1,2,4] print mylist #### Add element at the end of the list print "APPEND" mylist.append('append') print mylist #### Remove element at the end of the list print "POP" mylist.pop() print mylist #### Add element at index position of the list print "INSERT 2" mylist.insert(2, 'insert') print mylist #### remove element at index position of the list print "REMOVE 2" mylist.remove(2) print mylist """ Explanation: As we can see, an element of a list can be a list itself. So [1,2,4] is not an array of integer bu a list of 3 elements which are the same type. We will see that Numpy have a method (np.array) which allow to transform a list like [1,2,4] to an array of integer. The tuples are similar to the list but you can not modify once it exist. End of explanation """ import numpy as np ###We load (import) the module numpy and we will use it with the obj_name np (you can change obj_name) size_arr = 10 ##### print "np.zeros()" arr1 = np.zeros( size_arr ) #### generate an array with 0 values with size = size_arr print 'arr1 = ', arr1 ##By default, the type is "float". But you can specify the type. This is true for all np.methods print "np.linspace()" arr2 = np.linspace(1., 10, size_arr) #### generate an array of size size_arr with min value = 1. and max =10. print 'arr2 = ', arr2 ###Test with other values print print 'test with other values :', np.linspace(0.23, 154.1, size_arr) print print "np.arange()" arr3 = np.arange(0, size_arr, 1) print 'arr3 = ', arr3 print "As you can see, the last element is 9 and not 10. It's useful to generate indices of an array of size 10 for example" print "np.array()" ####Here we apply the method numpy.array() to the list [1,2,4]. 
The result is an array of integer (numpy.int64 type to be complet) mylist = [1,2,4] arr4 = np.array(mylist) print type(arr4[0]) arr5 = np.array(mylist, np.float32) print type(arr5[0]) """ Explanation: arrays and modules One of the most common and useful object in programation is the array. We will use the Numpy array here. So we will need to load the module Numpy first and then use the relevant method to create the arrays. Like C or Fortran, we can load modules/Libraries and then use it. With Python it's similar to load an object with all its methods associated accessible using obj_name.method . Using Jupyter-notebook, we can use the very very useful completion using "tabulation". if you write "np." and use the completion you will have the proposition of all associated methods. For example, we will load Numpy in an obj_name "np" (like most of the people) and use the different methods: -zeros -linspace -arange -array End of explanation """ import numpy as np size_x = 15 size_y = 10 #### print "twoD_arr = np.zeros( (size_x, size_y) )" twoD_arr = np.zeros( (size_x, size_y) ) print twoD_arr ###You can use the np.linspace, np.arange if you ask for the good number of elements and then apply reshape([size_x, size_y, ...]) print print "twoD_arr = np.linspace( 0., 100.,size_x * size_y).reshape([size_x, size_y])" twoD_arr = np.linspace( 0., 100.,size_x * size_y).reshape([size_x, size_y]) print twoD_arr #### you can also convert list of list in ndarray with np.array() print print "twoD_arr = np.array([[1, 2, 3], [4, 5, 6]], np.int32)" twoD_arr = np.array([[1, 2, 3], [4, 5, 6]], np.int32) print twoD_arr """ Explanation: Nd-arrays and Matrices An array can have more than 1-dimension and in this case it will be numpy.ndarray . A 2D ndarray can be converted in numpy matrix type which is useful to apply matrix calculation. We will see a typical example using dot product between matrices and arrays. 
End of explanation
"""
import numpy as np
size_x = 5
size_y = 5
twoD_arr = np.zeros( (size_x, size_y) )
#### We will fill it using loops. To see more about loops, take a look below
for ii in range(size_x):
    for jj in range(size_y):
        twoD_arr[ii, jj] = ii + size_x*jj
#print twoD_arr
print twoD_arr.T #### = np.transpose(twoD_arr)
vec_arr = np.linspace(1., size_x, size_x)
twoD_mat = np.matrix(twoD_arr)
#### Vec.T . Mat
print "np.dot(vec_arr, twoD_mat)"
print np.dot(vec_arr.T, twoD_mat)
#### Mat.Vec
print "np.transpose(np.dot( twoD_mat, vec_arr))"
print np.transpose(np.dot( twoD_mat, vec_arr))
#### Vec.T Mat Vec
print "np.dot( vec_arr.T, np.transpose(np.dot( twoD_mat, vec_arr)) )"
print np.dot( vec_arr.T, np.transpose(np.dot( twoD_mat, vec_arr)) )
"""
Explanation: Matrices
Using np.matrix() on a 2D array we transform it into a matrix with calculus properties
End of explanation
"""
print (1 == 2) ### equality test
print (1 != 2)
print (1 < 2)
print (1 > 2)
print (1 <= 2)
print (1 >= 2)
print (1 is 2) #equivalent : more interesting for string comparison, see below
print (1 is not 2) #non equivalent : more interesting for string comparison
print "PSG is equivalent to bad team? : ", ("PSG" is "Bad team"), " Of course"
"""
Explanation: Conditionals and logic
All conditional tests will return a boolean value : True or False
The logical operators are: ==, <, >, <=, >=, !=, is, is not
End of explanation
"""
print "(2 > 1) and (0 != 13)"
print (2 > 1) and (0 != 13)
print "not(2 > 1) or not(0 != 13) : not A or not B <=> not(A and B)"
print ( not(2 > 1) or not(0 != 13) )
print "not((2 > 1) and (0 != 13)) : not(A and B) <=> not A or not B"
print not((2 > 1) and (0 != 13))
print "not(1 is 2)"
print not(1 is 2) #"is not" is actually redundant because equivalent to the negation of "is"
"""
Explanation: or and not
We can use a combination of logical conditions using "or" & "and" operators
Moreover, a useful operator is "not" in order to test the negation.
Of course, you will in general compare variable values together or with fixed conditions.
End of explanation
"""
a = 3.
b = 2.
if a < b:
    print "the condition a<b was true and we execute all the instructions in the indentation below if"
    print a*b
    print a**2
else:
    print "the condition a<b was False and we execute all the instructions in the indentation below else"
    print a+b
if not(a < b): ####I added the contraposition
    print "the condition a<b was False and we execute all the instructions in the indentation below if"
    print a*b
    print a**2
else:
    print "the condition a<b was True and we execute all the instructions in the indentation below else"
    print a+b
a = 0.
b = 17.
c = 32.
if (a !=0) :
    d = c/a
else:
    d = c/b
print d
"""
Explanation: If, else
These tests are important to decide whether to do something or not. A very important case, even if simple, is to test that you divide by a value different from 0. The way to use "if" is like : "if condition : "
The condition can be a combination of conditions and/or a contraposition. The "if" will just check the value of the result. If it is "True" then it will continue to the instructions. If it is "False" it will execute the instructions following "else" if it exists. If there is no "else", it will just do nothing.
End of explanation
"""
name_list = ['bonjour', 'je', 'suis', 'seb']
size_list = len(name_list)
print "FOR standard way"
print
for ii in range(size_list):
    print name_list[ii]
#### A particularity of python is to allow your variable to be the element of an array
print
print "FOR with variable as element of the array"
print
for ii in(name_list):
    print ii
print
print "WHILE"
print
cpt = 0
test = ''
while (test != 'suis'):
    test = name_list[cpt]
    cpt += 1
    print test
"""
Explanation: Loops : For and While
There are 2 kinds of loops :
- "for" : when you know from which initial value to the final value you want to iterate.....which is often the case.
- "while" : when you want to iterate a block of instructions until it reaches a condition.
for i in range_of_value:
    instruction1
    instruction2
    .
    .
    .
while (condition) :
    instruction1
    instruction2
    .
    .
    .
End of explanation
"""
## Example to find the prime numbers smaller than 10
print "Method to find the prime numbers smaller than 10 using the break statement"
print
for n in range(2, 10):
    for x in range(2, n):
        if n % x == 0:
            print n, 'equals', x, '*', n/x
            break
    else:
        # loop fell through without finding a factor
        print n, 'is a prime number'
print
print "Method to find odd and even numbers using the continue statement"
print
for num in range(2, 10):
    if num % 2 == 0:
        print "Found an even number", num
        continue
    print "Found an odd number", num
"""
Explanation: continue and break
Sometimes, it is useful to execute just some iterations of a loop. In this case, one possibility is to check the condition we want and put all the instructions inside an "if". However, it is possible to check the negation and just apply a "continue" in order to start the next iteration of the loop.
The "break" statement stops the loop when encountered. In one sense, it allows to do a "for" with a condition check like in a "while".
End of explanation
"""
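That last point — break letting a "for" stop on a condition like a "while" — can be sketched by searching a list for a target word; the list and target below just reuse the earlier name_list example:

```python
name_list = ['bonjour', 'je', 'suis', 'seb']

# for + break: scan and stop as soon as the condition is met
found = None
for word in name_list:
    if word == 'suis':
        found = word
        break

# the equivalent while with an explicit counter, as in the earlier cell
cpt = 0
while name_list[cpt] != 'suis':
    cpt += 1
```

Both versions stop at index 2 without visiting the rest of the list.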
num_rand = 10000 N_10_2_arr = np.random.randn(num_rand) * std + mu print "Results from the normal trial" print "np.mean(N_10_2_arr) = ", np.mean(N_10_2_arr) print "np.std(N_10_2_arr) = ", np.std(N_10_2_arr) plt.hist(N_10_2_arr, bins=50,label=r'$\mathcal{N}(10,2)$', color='green') plt.title(r'$\mathcal{N}(10,2)$', fontsize=20) plt.legend(loc=1) plt.show() ##### Poisson distribution : np.rand.poisson() lam = 5 #### lambda parameter of the poisson distribution (is also the mean ;) poisson_arr = np.random.poisson( lam, num_rand) print print "Results for Poisson distribution" print "np.mean(poisson_arr) = ", np.mean(poisson_arr) plt.hist(poisson_arr, bins=15, label=r'$\mathcal{P}(\lambda = 5)$', color='red') plt.title('Poisson distribution for $\lambda$ = '+str(lam)) plt.legend(loc=1) plt.show() """ Explanation: Randoms Randoms are one of the most important application of computers. Until now, the quantum computers do not exist, reason why randoms are not exactly randoms. The usual way is to generate a complex trigonometric suite of value which have "the good properties of randomness". However, it is a deterministic function. So, if you start from the same seed (initial position) you will obtain exactly the same results!!!!! So you have to be careful with that if you are running various randoms you need to be independants. One good poin with numpy is that it directly use the clock system of your computer to choose a seed. Eventhough, keep in mind this point. We will use the numpy.random.methods to generate existing "probability distribution function (PDF)" and then we will see how to generate a random following the PDF you want (will be an exercise). 
End of explanation """ import matplotlib.pyplot as plt %matplotlib inline x_arr = np.linspace(0., 15., 16) y_arr = x_arr**2 y_spread_arr = y_arr + np.random.randn(len(x_arr))*10 plt.figure(figsize = (10,7)) #### allow to determine some stuff like the size of the figure plt.plot(x_arr, y_arr, linestyle='--' , linewidth=3., color='b', label=r'$f(x) = x^2$') ### label will be use by plt.legend() plt.xlabel('x') plt.ylabel('y') plt.legend(fontsize=20, loc=2) plt.show() plt.figure(figsize = (10,7)) #### allow to determine some stuff like the size of the figure plt.plot(x_arr, y_arr, linestyle='--' , linewidth=3., color='b', label=r'$f(x) = x^2$') ### label will be use by plt.legend() plt.plot(x_arr, y_spread_arr, marker='o',markersize=5, linestyle='none' , color='g', label=r'$f(x) = x^2 + \mathcal{N}(0,8)$') ### label will be use by plt.legend() plt.xlabel('x') plt.ylabel('y') plt.legend(fontsize=20, loc=2) plt.show() ##### ADD ERROR BARS. Here, I had a gaussian noise with standard deviation = 10 (is np.zeros(len(x_arr))+10.) plt.figure(figsize = (10,7)) #### allow to determine some stuff like the size of the figure plt.plot(x_arr, y_arr, linestyle='--' , linewidth=3., color='b', label=r'$f(x) = x^2$') ### label will be use by plt.legend() plt.errorbar(x_arr, y_spread_arr, np.zeros(len(x_arr))+10. , marker='o',markersize=5, linestyle='none' , color='g', label=r'$f(x) = x^2 + \mathcal{N}(0,8)$') ### label will be use by plt.legend() plt.xlabel('x') plt.ylabel('y') plt.legend(fontsize=20, loc=2) plt.show() """ Explanation: Plots We will only use the matplotlib.pyplot library to do the plots here. It is the most used one, produce nice plots and is easy to use. For some specific plots, you will maybe use another library but it should be enougth for this school to use only this one. We have first to import the library and we will use "plt" as obj_name. 
So all the methods will be call as plt.method One line very useful to add in Jupyter-notebook is the following: %matplotlib inline which allow to plot directly inside the internet browser. In order to have interactive plots, you can change to: %matplotlib notebook All the instruction before the command "plt.show()" will be interpretated as part of the same figure. plt.plot() End of explanation """ import matplotlib.pyplot as plt %matplotlib inline x_arr = np.linspace(0., 50., 51) y_arr = np.linspace(0., 50., 51) X,Y = np.meshgrid(x_arr, y_arr) #### create a 2D grid. X and Y are a 2D grid with each time the x_arr value in X and the y_arr valur in Y Z_grid = (X-25)**2 *np.cos(Y/5.) ### We can plot a 2D density map with pcolormesh() plt.pcolormesh(x_arr, y_arr, Z_grid.T, cmap = 'jet') plt.title('PCOLORMESH()') plt.xlabel('X') plt.ylabel('Y') plt.show() ### We can look for the contours values with contour() plt.pcolormesh(x_arr, y_arr, Z_grid.T, cmap = 'jet') plt.contour(x_arr, y_arr, Z_grid.T, 6, linewidths=np.zeros(6)+4, colors=('r', 'green', 'blue', (1, 1, 0), '#afeeee', 'gold')) plt.title('PCOLORMESH() + CONTOUR()') plt.xlabel('X') plt.ylabel('Y') plt.show() plt.contour(x_arr, y_arr, Z_grid.T, 6, linewidths=np.zeros(6)+4, colors=('r', 'green', 'blue', (1, 1, 0), '#afeeee', 'gold')) plt.title('JUST CONTOUR()') plt.xlabel('X') plt.ylabel('Y') plt.show() ## we can see the 3D-shape of Z using plot_surface() ## In order to do that, we have to load Axes3D from mpl_toolkits.mplot3d ## Generate a figure with a '3d' projection and attach it to an object : here 'ax' ## then, we have to use 'ax' rather than 'plt' and use the methods associated. ## The name of the methods change a bit like 'xlabel' -> 'set_xlablel' .... 
from mpl_toolkits.mplot3d import Axes3D

fig = plt.figure(figsize = (10,10) )
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, Z_grid.T, cmap = 'GnBu')
ax.set_title('PLOT_SURFACE()')
ax.set_xlabel('X')
ax.set_ylabel('Y')
plt.show()
"""
Explanation: pcolormesh, contour and surface In order to plot 3D results, there are different methods we can use. The easiest method is plt.pcolormesh. We can add contours to the pcolormesh, or directly plot the contours on a white background
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

### generate a random distribution following a gaussian distribution N(12, 3)
rand_arr = np.random.randn(1000)*3. + 12.

hx = plt.hist(rand_arr, bins=50, label=r'$\mathcal{N}(12,3)$')
plt.legend(loc=2)
plt.show()

print(hx[0]) ### COUNT values from the histogram
print(hx[1]) ### bin edges

edges_arr = hx[1]
x_arr = (edges_arr[1 : len(edges_arr)] + edges_arr[0 : len(edges_arr)-1])/2
print(x_arr)

plt.plot(x_arr, hx[0], drawstyle = 'steps' , lw=2)
plt.show()

#### We can also use the keywords "density" and "cumulative" in order to normalize the sum to 1
#### and obtain directly the cumulative distribution function
cx = plt.hist(rand_arr, bins=500, label=r'$\mathcal{N}(12,3)$', cumulative=True, density=True)
plt.legend(loc=2)
plt.show()

edges_arr = cx[1]
x_arr = (edges_arr[1 : len(edges_arr)] + edges_arr[0 : len(edges_arr)-1])/2
cumul_arr = np.array(cx[0])

##### Use the cumulative distribution function to generate a new distribution following this one
##### Be careful: if you know the exact PDF, you have to use it. 
##### This procedure is only for when you do not know the underlying PDF
num_rand = 1000 #### number of random draws (this definition was missing in the original cell)
rand2_arr = np.zeros( num_rand ) #### array in which we will store the values
uni_trial = np.random.random(num_rand) ### array with a uniform random distribution
for ii in range(num_rand):
    rand2_arr[ii] = x_arr[ np.argmin( np.abs( cumul_arr - uni_trial[ii] ) ) ]
##### EXPLANATION ####
### np.abs( cumul_arr - uni_trial[ii] ) = absolute difference between the CDF and the uniform random trial
### np.argmin( np.abs( cumul_arr - uni_trial[ii] ) ) = gives the index where the difference is minimal
### Then, we take the corresponding x_arr value, which is the random value following the CDF for this uni_trial value
print("Distribution using the CDF of the first one and a uniform random trial")
hx = plt.hist(rand2_arr, bins=50, label='CDF based')
plt.legend(loc=2)
plt.show()
"""
Explanation: histograms Histograms are useful to access the probability distribution of a variable. We will see here how to generate one and then use the result to build the Cumulative Distribution Function. plt.hist is not just a plotting function: it also returns the values you need to work with the PDF. As an exercise, you will generate a random trial following a given distribution.
End of explanation
"""
#### READ a file using "open"
f = open('Example_chi2.txt')
### read the first line
print(f.readline())
print()
### read the second line
print(f.readline())
print()
### read the third line
print(f.readline())
print()
#### the results are strings and you have to parse the information you need. That's powerful but not straightforward
#### I show a simple example to parse the values from the first line
f.close()
f = open('Example_chi2.txt', "r") #### "r" stipulates "read mode"
print()
print()
res = f.readline()
res_arr = str.split(res, ';') #### create a list of strings. 
Note that the last one has a '\n', corresponding to the newline character
float_arr = np.zeros( len(res_arr) )
for ii in range(len(res_arr)):
    print(res_arr[ii])
    float_arr[ii] = float(res_arr[ii])
print("Values inside the float_arr :")
print(float_arr)

###### WRITE #####
### that's similar to reading. You have to decide when you want to go to the next line and add '\n'
## first you have to create a string with all the information you want
a=12
b=18
c=54.2456
f.close()
f = open('File_write_abc.txt', "w" ) #### "w" stipulates "write mode"
line = "a="+str(a)+"\n"
f.write(line)
line = "b="+str(b)+"\n"
f.write(line)
line = "c="+str(c)+"\n"
f.write(line)
f.close()
"""
Explanation: Read, Write In order to read/write a file, we need to use Python file objects
End of explanation
"""
import numpy as np
### you can read and store arrays directly as
x,y,err = np.loadtxt('Example_chi2.txt', delimiter=';') #### this file was generated in python format
np.savetxt('Example_chi2_transpose.txt', np.transpose(( x,y,err )), header = ' X Y ERR' ) #### take a look at the new file.
### You can also read the file in one block, which will be stored in a 2D ndarray
data = np.loadtxt('Example_chi2_transpose.txt').T #### because we stored the transpose, it is better to read it back transposed
print(data[0, :]) ### X array
print(data[1, :]) ### Y array
print(data[2, :]) ### errors
"""
Explanation: Numpy Loadtxt and Savetxt methods A very powerful way to read and write data files is to use numpy.loadtxt() and numpy.savetxt()
End of explanation
"""
import numpy as np
#arr1 = np.arange(1., 32., 1.) #### remember that it will stop at 32.-1.
arr1 = np.linspace(1., 32., 32)
print(arr1)
arr2 = arr1
tmp = np.where( (arr1 > 3.) )
print()
print("tmp is this tuple")
print(tmp)
print()
print("tmp[0] is this array")
print(tmp[0])
print(arr1[tmp[0]])
print()
print("Multiple conditions")
print()
tmp = np.where( (arr1 > 3.) & (arr1 < 14.) )
print(tmp)
print(arr1[tmp[0]])
print()
print("Multiple conditions with or")
print()
#### note: "or" uses | (the original cell repeated the & example here)
tmp = np.where( (arr1 < 3.) | (arr1 > 14.) 
)
print(tmp)
print(arr1[tmp[0]])
#### Moreover, you can use multiple conditions on different arrays if they have the same length
#### That's very useful if the arrays are various observables of the same objects. See the exercise
"""
Explanation: Other useful methods of numpy One of the methods I love in numpy is "where", which is similar to the "where" of IDL and returns the indices of the elements of an array that satisfy a condition. In fact, np.where() returns a tuple of indices. For this reason, we will focus on the first element of this tuple, which contains the array of indices (the reason why I use tmp[0] ). Be careful, the logical operators inside np.where() are a bit different: or becomes | and becomes & numpy provides a large list of methods to obtain useful information: argmin/argmax : give the index of the min/max value of an ndarray std : return the standard deviation of the array var : return the variance of the array min/max : return the min/max value of the array sum : return the sum of an array
End of explanation
"""
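A quick sketch tying the methods listed above together (the array values here are just illustrative):

```python
import numpy as np

# Toy array to exercise the utility methods listed above.
a = np.array([4., 1., 7., 2.])

print(a.argmin(), a.argmax())   # indices of the min and max values
print(a.min(), a.max(), a.sum())
print(a.std(), a.var())         # var is the square of std

# np.where with combined conditions: "and" is &, "or" is |
idx = np.where((a > 1.) & (a < 7.))[0]
print(idx, a[idx])
```

The same `&`/`|` composition works on conditions drawn from different arrays, as long as they share a length.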
benfred/implicit
examples/tutorial_lastfm.ipynb
mit
from implicit.datasets.lastfm import get_lastfm

artists, users, artist_user_plays = get_lastfm()
"""
Explanation: Tutorial - Recommending Music with the last.fm 360K dataset. This tutorial shows the major functionality of the implicit library by building a music recommender system using the last.fm 360K dataset. Getting the Dataset Implicit includes code to access several different popular recommender datasets in the implicit.datasets module. The following code will both download the lastfm dataset locally, as well as load it up into memory:
End of explanation
"""
from implicit.nearest_neighbours import bm25_weight

# weight the matrix, both to reduce impact of users that have played the same artist thousands of times
# and to reduce the weight given to popular items
artist_user_plays = bm25_weight(artist_user_plays, K1=100, B=0.8)

# get the transpose since most of the functions in implicit expect (user, item) sparse matrices instead of (item, user)
user_plays = artist_user_plays.T.tocsr()
"""
Explanation: artist_user_plays is a scipy sparse matrix, with each row corresponding to a different musician and each column corresponding to a different user. The non-zero entries in the artist_user_plays matrix contain the number of times that the user has played that artist. The artists and users variables are arrays of string labels for each row and column in the artist_user_plays matrix. The implicit library is solely focused on implicit feedback recommender systems - where we are given positive examples of what the user has interacted with, but aren't given the corresponding negative examples of what users aren't interested in. For this example we're shown the number of times that the user has played an artist in the dataset and can infer that a high play count indicates that the user likes an artist. However we can't infer that just because the user hasn't played a band before that means the user doesn't like the band. 
Training a Model Implicit provides implementations of several different algorithms for implicit feedback recommender systems. For this example we'll be looking at the AlternatingLeastSquares model that's based on the paper Collaborative Filtering for Implicit Feedback Datasets. This model aims to learn a binary target of whether each user has interacted with each item - but weights each binary interaction by a confidence value of how confident we are in this user/item interaction. The implementation in implicit uses the values of a sparse matrix to represent the confidences, with the non-zero entries representing whether or not the user has interacted with the item. The first step in using this model is going to be transforming the raw play counts from the original dataset into values that can be used as confidences. We want to give repeated plays more confidence in the model, but have this effect taper off as the number of repeated plays increases, to reduce the impact a single superfan has on the model. Likewise we want to direct some of the confidence weight away from popular items. To do this we'll use a bm25 weighting scheme inspired by classic information retrieval:
End of explanation
"""
from implicit.als import AlternatingLeastSquares

model = AlternatingLeastSquares(factors=64, regularization=0.05, alpha=2.0)
model.fit(user_plays)
"""
Explanation: Once we have a weighted confidence matrix, we can use that to train an ALS model using implicit:
End of explanation
"""
# Get recommendations for a single user
userid = 12345
ids, scores = model.recommend(userid, user_plays[userid], N=10, filter_already_liked_items=False)
"""
Explanation: Fitting the model will happen on any compatible Nvidia GPU, or using all the available cores on your CPU if you don't have a GPU enabled. You can force the operation by setting the use_gpu flag on the constructor of the AlternatingLeastSquares model. 
Making Recommendations After training the model, you can make recommendations for either a single user or a batch of users with the .recommend function on the model:
End of explanation
"""
# Use pandas to display the output in a table, pandas isn't a dependency of implicit otherwise
import numpy as np
import pandas as pd

pd.DataFrame({"artist": artists[ids], "score": scores, "already_liked": np.in1d(ids, user_plays[userid].indices)})
"""
Explanation: The .recommend call will compute the N best recommendations for each user in the input, and return the itemids in the ids array as well as the computed scores in the scores array. We can see which musicians are recommended for each user by looking up the ids in the artists array:
End of explanation
"""
# get related items for the beatles (itemid = 252512)
ids, scores = model.similar_items(252512)

# display the results using pandas for nicer formatting
pd.DataFrame({"artist": artists[ids], "score": scores})
"""
Explanation: The already_liked column there shows if the user has interacted with the item already, and in this result most of the items being returned have already been interacted with by the user. We can remove these items from the result set with the filter_already_liked_items parameter - setting it to True will remove all of these items from the results. The user_plays[userid] parameter is used to look up what items each user has interacted with, and can just be set to None if you aren't filtering the user's own likes or recalculating the user representation on the fly. There are also more filtering options present in the filter_items and items parameters, as well as options for recalculating the user representation on the fly with the recalculate_user parameter. See the API reference for more details. Recommending similar items Each model in implicit also has the ability to show related items through the similar_items method. 
For instance, to get the related items for the Beatles:
End of explanation
"""
# Make recommendations for the first 1000 users in the dataset
userids = np.arange(1000)
ids, scores = model.recommend(userids, user_plays[userids])
ids, ids.shape
"""
Explanation: Making batch recommendations The .recommend, .similar_items and .similar_users calls all have the ability to generate batches of recommendations - in addition to just calculating a single user or item. Passing an array of userids or itemids to these methods will trigger the batch methods, and return a 2D array of ids and scores - with each row in the output matrices corresponding to a value in the input. This will tend to be quite a bit more efficient than calling the method repeatedly, as implicit will use multiple threads on the CPU and achieve better device utilization on the GPU with larger batches.
End of explanation
"""
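Conceptually, a batch top-N like the one above is just a per-row partial sort of a user-by-item score matrix. A small numpy sketch of what the 2D ids/scores output looks like (toy scores, N=2; this is illustrative, not implicit's actual implementation):

```python
import numpy as np

# Toy user-by-item score matrix; each row is one user's scores.
scores = np.array([[0.1, 0.9, 0.4],
                   [0.8, 0.2, 0.5]])
N = 2
ids = np.argsort(-scores, axis=1)[:, :N]        # best-first item ids per user
top = np.take_along_axis(scores, ids, axis=1)   # matching scores, row-aligned with ids
print(ids)
print(top)
```

Each row of `ids` lines up with the corresponding row of `top`, which is exactly the shape of the batch output returned by implicit.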
gwsb-istm-6212-fall-2016/syllabus-and-schedule
scripts/20161129-exporting-csv-from-datanotebook.ipynb
cc0-1.0
!echo 'redspot' | sudo -S service postgresql restart %load_ext sql !createdb -U dbuser test %sql postgresql://dbuser@localhost:5432/test """ Explanation: Exporting CSV data from the server This process is slightly cumbersome because of Unix permissions. Remember - nine times out of ten, on Unix, it's probably a permissions problem. In this case the user 'postgres' which runs the PostgreSQL server doesn't have write permissions to your home directory /home/jovyan/work. To work around it, we write to a shared space, /tmp, from PostgreSQL, then copy the file to your own directory. Standard db setup End of explanation """ %%sql DROP TABLE IF EXISTS foo; CREATE TABLE foo ( id SERIAL, s TEXT ); %%sql INSERT INTO foo (s) VALUES ('hi'), ('bye'), ('yo') ; %%sql SELECT * FROM foo; """ Explanation: A little database example End of explanation """ %%sql COPY (SELECT * FROM foo ORDER BY s) TO '/tmp/testout.csv' WITH CSV HEADER DELIMITER ',' QUOTE '"'; """ Explanation: Exporting to CSV Now that you see we have a real table with real data, we export using the same COPY command we use for import. The main differences are: COPY ... TO instead of COPY ... FROM You may specify an arbitrarily complex query, using multiple tables, etc. Note the /tmp/ location of the output file; this is our shared space. Read all the details about pgsql's non-standard-SQL COPY function at https://www.postgresql.org/docs/9.5/static/sql-copy.html. End of explanation """ !cat /tmp/testout.csv !csvlook /tmp/testout.csv """ Explanation: We can see that the file correctly exported. End of explanation """ !cp /tmp/testout.csv /home/jovyan/work/testout.csv """ Explanation: Now move the file to a space you can reach from Jupyter: End of explanation """
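Once the file is in your home directory you can also parse it back in plain Python, without the database. A sketch using the csv module — the string below mirrors the exported contents shown by cat above (assumed, since the file itself isn't reproduced here):

```python
import csv
import io

# Contents equivalent to /tmp/testout.csv after the COPY ... ORDER BY s above.
exported = "id,s\n2,bye\n1,hi\n3,yo\n"
rows = list(csv.DictReader(io.StringIO(exported)))
print(len(rows), rows[0])
```

In practice you would pass `open('/home/jovyan/work/testout.csv')` instead of the `io.StringIO` stand-in.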
marioberges/F16-12-752
projects/gvizcain/ApplianceClassifier_v3.ipynb
gpl-3.0
import numpy as np
import matplotlib.pyplot as plt
import pickle, time, seaborn, random, json, os
import subprocess
%matplotlib inline
from sklearn import tree
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB, BernoulliNB
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
from PIL import Image
from sklearn.neighbors import KNeighborsClassifier
"""
Explanation: Data analytics for home appliances identification by Ayush Garg, Gabriel Vizcaino and Pradeep Somi Ganeshbabu Table of contents Loading and processing the PLAID dataset Saving or loading the processed dataset Fitting the classifier Testing the accuracy of the chosen classifiers Identifying appliance type per house Conclusions and future work
End of explanation
"""
#Setting up the path in the working directory
Data_path = 'PLAID/'
csv_path = Data_path + 'CSV/'
csv_files = os.listdir(csv_path)

#Load meta data
with open(Data_path + 'meta1.json') as data_file:
    meta1 = json.load(data_file)
meta = [meta1]

#Functions to parse meta data stored in JSON format
def clean_meta(ist):
    '''remove '' elements in Meta Data '''
    clean_ist = ist.copy()
    for k,v in ist.items():
        if len(v) == 0:
            del clean_ist[k]
    return clean_ist

def parse_meta(meta):
    '''parse meta data for easy access'''
    M = {}
    for m in meta:
        for app in m:
            M[int(app['id'])] = clean_meta(app['meta'])
    return M

Meta = parse_meta(meta)

# Unique appliance types
types = list(set([x['type'] for x in Meta.values()]))
types.sort()
#print(types)

def read_data_given_id_limited(path,ids,val,progress=False,last_offset=0):
    '''read data given a list of ids and CSV paths'''
    n = len(ids)
    if n == 0:
        return {}
    else:
        data = {}
        for (i,ist_id) in enumerate(ids, start=1):
            if last_offset==0:
                data[ist_id] = np.genfromtxt(path+str(ist_id)+'.csv', delimiter=',',names='current,voltage',dtype=(float,float)) 
            else:
                p=subprocess.Popen(['tail','-'+str(int(last_offset)),path+str(ist_id)+'.csv'],stdout=subprocess.PIPE)
                data[ist_id] = np.genfromtxt(p.stdout,delimiter=',', names='current,voltage',dtype=(float,float))
                data[ist_id]=data[ist_id][-val:]
        return data

#get all the data points
data={}
val=30000 # take only the last 30,000 values as they are most likely to be in the steady state
ids_to_draw = {}
Types = [Meta[i+1]['type'] for i in range(len(Meta))] # type of each appliance, ordered by id (this list was missing in the original cell)
for (ii,t) in enumerate(types):
    t_ids = [i for i,j in enumerate(Types,start=1) if j == t]
    ids_to_draw[t] = t_ids
    data[t]=read_data_given_id_limited(csv_path, ids_to_draw[t], val, False)
"""
Explanation: Loading and processing the PLAID dataset We are analyzing the PLAID dataset available in this link. To parse the csv files, we use the following code, which is a snippet of the Copyright (C) 2015 script developed by Jingkun Gao (jingkun@cmu.edu) and available on the same website.
End of explanation
"""
# Saving or loading the main dictionary pickle file
saving = False
if saving:
    pickle_file = open('AppData.pkl','wb')
    pickle.dump(data,pickle_file,protocol=2)
    pickle_file.close()
else:
    pkf = open('AppData.pkl','rb')
    data = pickle.load(pkf)
    pkf.close()

#get house number and ids for each CSV
houses=[]
org_ids=[]
for i in range(0,len(Meta)):
    houses.append(Meta[i+1].get('location'))
    org_ids.append(i+1)
houses = np.hstack([np.array(houses)[:,None],np.array(org_ids)[:,None]])
"""
Explanation: To run this notebook you can either download the PLAID dataset and run the above script to parse the data (this takes some time), or you may directly load the appData.pkl file (available here) which contains the required information using the code below. 
End of explanation """ cycle = 30000; num_cycles = 1; till = -cycle*num_cycles resh = np.int(-till/num_cycles); tot = np.sum([len(data[x]) for x in data]); org_ids,c = [], 0 V = np.empty([resh,tot]); I = np.empty([resh,tot]); y = np.zeros(tot) for ap_num,ap in enumerate(types): for i in data[ap]: V[:,c] = np.mean(np.reshape(data[ap][i]['voltage'][till:],(-1,cycle)),axis=0) I[:,c] = np.mean(np.reshape(data[ap][i]['current'][till:],(-1,cycle)),axis=0) y[c] = ap_num org_ids.append(i) c += 1 pass V_org = V.T; I_org = I.T """ Explanation: To facilitate working with the data, we extract the data contained in the dictionary data and create the following variables: - V_org: Matrix of orginal voltage signals collected from every appliance (1074x30000) - I_org: Matrix of originla current signals collected from every appliance (1074x30000) - types: List of the types of appliances available in the dataset in alphabetic order - y_org: Array of numerical encoding for each appliance type (1074x1) - org_ids: List of original identification number of each appliance in the dataset - house: Matrix of the identification number of each appliance and the corresponding house name End of explanation """ # plot V-I of last 10 steady state periods num_figs = 5; fig, ax = plt.subplots(len(types),num_figs,figsize=(10,20)); till = -505*10 for (i,t) in enumerate(types): j = 0; p = random.sample(list(data[t].keys()),num_figs) for (k,v) in data[t].items(): if j > num_figs-1: break if k not in p: continue ax[i,j].plot(v['current'][till:],v['voltage'][till:],linewidth=1) ax[i,j].set_title('Org_id: {}'.format(k),fontsize = 10); ax[i,j].set_xlabel('Current (A)',fontsize = 8) ax[i,j].tick_params(axis='x', labelsize=5); ax[i,j].tick_params(axis='y', labelsize=8) j += 1 ax[i,0].set_ylabel('{} (V)'.format(t), fontsize=10) fig.tight_layout() """ Explanation: In order to identify identify patterns, it is useful to plot the data first. 
The following script plots the V-I profile of the last 10 cycles of five randomly picked appliances of each type.
End of explanation
"""
saving = False
if saving:
    pickle_file = open('Data_matrices.pkl','wb')
    pickle.dump([V_org,I_org,y_org,org_ids,houses,types],pickle_file,protocol=2)
    pickle_file.close()
else:
    pkf = open('Data_matrices.pkl','rb')
    V_org,I_org,y_org,org_ids,houses,types = pickle.load(pkf)
    pkf.close()
"""
Explanation: Saving or loading the processed dataset Here you can also directly load or save all of the above variables, available in the Data_matrices.pkl file.
End of explanation
"""
cycle = 505; num_cycles = 1; till = -cycle*num_cycles
V = np.empty((V_org.shape[0],cycle)); I = np.empty((V_org.shape[0],cycle)); y = y_org; c = 0
for i,val in enumerate(V_org):
    V[i] = np.mean(np.reshape(V_org[i,till:],(-1,cycle)),axis=0)
    I[i] = np.mean(np.reshape(I_org[i,till:],(-1,cycle)),axis=0)
V = (V-np.mean(V,axis=1)[:,None]) / np.std(V,axis=1)[:,None]; I = (I-np.mean(I,axis=1)[:,None]) / np.std(I,axis=1)[:,None]
"""
Explanation: Preparing the data From the V-I plots above we can conclude that, especially in the steady state, the combination of linear and non-linear elements within each appliance type produces a similar pattern of voltage vs. current across appliances of the same type. Though not perfectly consistent, we can harness this characteristic in order to build features that help us classify an appliance given its voltage and current signals. We explored different transformations to extract features from voltage and current signals, like directly using the voltage and current values, calculating the Fourier transform of the current to identify harmonics, descriptive statistics (e.g. standard deviations and variation coefficients over a cycle) and printing images of V-I plots in order to extract the pixels' characteristics. While all of them provide useful information to identify appliances, the latter (i.e. 
images) is the transformation that yields the highest predictive accuracy. Therefore, we stick with this approach. Assuming that the power consumption of each appliance ends at steady state in the dataset, the following script extracts and produces standard plots of the last cycle of normalized currents and voltages for each appliance, and then saves those graphs as *.png files. The V-I pattern images saved as png files use significantly less memory than the raw data in csv files (~8 MB for the whole folder).
End of explanation
"""
print_images = False; seaborn.reset_orig()
m = V.shape[0]; j = 0
temp = np.empty((m,32400)); p = random.sample(range(m),3)
for i in range(m):
    if print_images:
        fig = plt.figure(figsize=(2,2))
        plt.plot(I[i],V[i],linewidth=0.8,color='b'); plt.xlim([-4,4]); plt.ylim([-2,2]);
        plt.savefig('pics_505_1/Ap_{}.png'.format(i))
        plt.close()
    else:
        im = Image.open('pics_505_1/Ap_{}.png'.format(i)).crop((20,0,200,200-20))
        im = im.convert('L')
        temp[i] = np.array(im).reshape((-1,))
        if i in p:
            display(im)
            j += 1
    pass
seaborn.set()
%matplotlib inline
"""
Explanation: To run the notebook hereafter you can either go through the process of printing the images and saving them in a folder, or you may directly load them from the "pics_505_1" folder using the following script.
End of explanation
"""
X = temp; y = y_org
X_, X_test, y_, y_test = train_test_split(X,y, test_size=0.2)
X_train, X_cv, y_train, y_cv = train_test_split(X_, y_, test_size=0.2)
"""
Explanation: After printing all the V-I patterns as images, the above script loads, crops, converts to grayscale, and transforms those images (see examples) into arrays, in order to create a new matrix, temp (1074x32400), which will become the matrix of features. 
Fitting the classifier To build a well-performing classifier that identifies the appliance type based on its voltage and current signals as inputs, particularly the V-I profile at steady state, we start by evaluating different multi-class classifiers on the features matrix. To prevent overfitting, the dataset is randomly divided into three sub-sets: training, validation, and test. The models are fitted using the training subset and then the accuracy is tested on the validation subset. After this evaluation the best models are fine-tuned and then tested using the testing subset. Since the objective is to accurately identify the type of an appliance based on its electrical signals, the following formula is used to measure accuracy: $$Accuracy\space (Score) = \frac{Number\space of\space correct\space predictions}{Number\space of\space predictions}$$
End of explanation
"""
def eval_cfls(models,X,y,X_te,y_te):
    ss = []; tt = []
    for m in models:
        start = time.time()
        m.fit(X,y)
        ss.append(np.round(m.score(X_te,y_te),4))
        print(str(m).split('(')[0],': {}'.format(ss[-1]),'...Time: {} s'.format(np.round(time.time()-start,3)))
        tt.append(np.round(time.time()-start,3))
    return ss,tt

models = [OneVsRestClassifier(LinearSVC(random_state=0)),tree.ExtraTreeClassifier(),tree.DecisionTreeClassifier(),GaussianNB(),
          BernoulliNB(),GradientBoostingClassifier(), KNeighborsClassifier(),RandomForestClassifier()]
ss,tt = eval_cfls(models,X_train,y_train,X_cv,y_cv)
rand_guess = np.random.randint(0,len(set(y_train)),size=y_cv.shape[0])
print('Random Guess: {}'.format(np.round(np.mean(rand_guess == y_cv),4)))
"""
Explanation: Eight models are evaluated on the partitioned dataset. The function below fits each assigned model using the input training data and prints both the score of the predictions on the input validation data and the fitting time. The score of the default classifier (i.e. a random prediction) is also printed for the sake of comparison. 
End of explanation """ scores = [] for n in range(1,11,2): clf = KNeighborsClassifier(n_neighbors=n,weights='distance') clf.fit(X_train,y_train) scores.append(clf.score(X_cv, y_cv)) plt.plot(range(1,11,2),scores); plt.xlabel('Number of neighbors'); plt.ylabel('Accuracy'); plt.ylim([0.8,1]); plt.title('K-nearest-neighbors classifier'); """ Explanation: In general, the evaluated classifiers remarkably improve over the default classifier - expect for the Naive Bayes classifier using Bernoulli distributions (as expected given the input data). The one-vs-the-rest model, using a support vector machine estimator, is the one showing the highest accuracy on the validation subset. However, this classier, along with the Gradient Boosting (which also presents a good performance), takes significantly more time to fit than the others. On the contrary, the K-nearest-neighbors and Random Forest classifiers also achieve high accuracy but much faster. For these reasons, we are going to fine tune the main parameters of the latter two classifiers, re-train them, and then test again their performance on the testing subset. End of explanation """ scores = [] for n in range(5,120,10): clf = RandomForestClassifier(n_estimators=n) clf.fit(X_train,y_train) scores.append(clf.score(X_cv, y_cv)) plt.plot(range(5,120,10),scores); plt.xlabel('Number of sub-trees'); plt.ylabel('Accuracy'); plt.ylim([0.8,1]); plt.title('Random Forest classifier'); """ Explanation: For the KNN classifier, the above graph suggests that the less number of neighbors to consider, the better the accuracy. Therefore, we are going to set this parameter to have only one neighbor in the KNN classifier. Having this new parameters, we re-trained both classifiers using the training and validation sub-sets, and test the fitted model on the testing set. 
End of explanation """ models = [KNeighborsClassifier(n_neighbors=1,weights='distance'),RandomForestClassifier(n_estimators=80)] eval_cfls(models,np.vstack([X_train,X_cv]),np.hstack([y_train,y_cv]),X_test,y_test); """ Explanation: Although the characteristic of the Random Forest classifier entails that the shape of the above graph changes every time it is run, the general behavior suggests that having more than 10 sub-trees notably improves the performance of the classifier. Progressively increasing the number of trees after this threshold slightly improves the performance further, up to a point, around 70-90, when the accuracy starts decreasing. Therefore, we are going to set this parameter at 80 sub-trees. End of explanation """ cv_scores = []; X = temp; y = y_org p = np.random.permutation(X.shape[0]) X = X[p]; y = y[p]; for m in models: start = time.time() cv_scores.append(cross_val_score(m, X, y, cv=10)) print(str(m).split('(')[0],'average score: {}'.format(np.round(np.mean(cv_scores),3)), '...10-fold CV Time: {} s'.format(np.round(time.time()-start,3))) """ Explanation: Both classifiers improved their performance after the tuning their parameters. KNN even outweighs the performance of the one-vs-the-rest classifier. Although the score of the Random Forest classifier slightly lags behind KNN, this fitting time of this one is 8x times faster than KNN. Testing the accuracy of the chosen classifiers To further test the perfomance of both classifiers, we now perfom a random 10-fold cross-validation process on both models using the whole dataset. 
End of explanation """ def held_house(name,houses): ids_te = houses[np.where(houses[:,0] == name),1].astype(int); ids_test,ids_train = [],[] for i,ID in enumerate(org_ids): if ID in ids_te: ids_test.append(i) else: ids_train.append(i) return ids_test,ids_train X = temp; y = y_org; h_names = ['house{}'.format(i+1) for i in range(len(set(houses[:,0])))] scores = np.zeros((len(h_names),2)) for i,m in enumerate(models): ss = [] for h in h_names: ids_test,ids_train = held_house(h,houses) X_train, X_test = X[ids_train], X[ids_test]; y_train,y_test = y[ids_train],y[ids_test]; m.fit(X_train,y_train) ss.append(m.score(X_test,y_test)) scores[:,i] = np.array(ss) plt.figure(figsize = (12,3)) plt.bar(np.arange(len(h_names)),scores[:,i],width=0.8); plt.xlim([0,len(h_names)]); plt.yticks(np.arange(0.1,1.1,0.1)); plt.ylabel('Accuracy'); plt.title('{} cross-validation per home. Median accuracy: {}'.format(str(m).split('(')[0], np.round(np.median(scores[:,i]),3))) plt.xticks(np.arange(len(h_names))+0.4,h_names,rotation='vertical'); plt.show() df = pd.DataFrame(np.array([np.mean(scores,axis=0),np.sum(scores == 1,axis=0), np.sum(scores >= 0.9,axis=0),np.sum(scores < 0.8,axis=0),np.sum(scores < 0.5,axis=0)]),columns=['KNN','RF']) df['Stats'] = ['Avg. accuracy','100% accuracy','Above 90%','Above 80%','Below 50%']; df.set_index('Stats',inplace=True); df.head() """ Explanation: The results from the 10-fold cross-validation are very promising. Both models present more than 92% average accuracy and though KNN scores slightly higher, the Random Forest still shows significantly lesser fitting time. Identifying appliance type per house One last step to test the performance of the KNN and Random Forest classifiers would be to predict or identify the type of appliances in particular house, based on the voltage and current signals, by training the model on the data from the rest of the houses. 
There are 55 homes surveyed and each appliance has a label indicating its corresponding house; hence, it is possible to split the data in this fashion. This is another kind of cross-validation.
End of explanation
"""
X = temp; y = y_org;
ids_test, ids_train = held_house('house46',houses)
X_train, X_test = X[ids_train], X[ids_test]; y_train,y_test = y[ids_train],y[ids_test];
V_,V_test = V[ids_train],V[ids_test]; I_,I_test = I[ids_train],I[ids_test];
org_ids_test = np.array(org_ids)[ids_test]
models[1].fit(X_train,y_train)
pred = models[1].predict(X_test)
items = np.where(pred != y_test)[0]
print('Number of wrong predictions in house46: {}'.format(len(items)))
for ids in items[:2]:
    print('Prediction: '+ types[int(pred[ids])],', Actual: '+types[int(y_test[ids])])
    fig,ax = plt.subplots(1,3,figsize=(11,3))
    ax[0].plot(I_test[ids],V_test[ids],linewidth=0.5); ax[0].set_title('Actual data. ID: {}'.format(org_ids_test[ids]));
    ax[1].plot(I_[y_train==y_test[ids]].T,V_[y_train==y_test[ids]].T,linewidth=0.5); ax[1].set_title('Profiles of {}'.format(types[int(y_test[ids])]))
    ax[2].plot(I_[y_train==pred[ids]].T,V_[y_train==pred[ids]].T,linewidth=0.5); ax[2].set_title('Profiles of {}'.format(types[int(pred[ids])]));
"""
Explanation: The results of the cross-validation per home show a median accuracy above 80% for both classifiers. Out of the 55 home appliance predictions, 9 scored 100% accuracy and around 20 had scores above 90%. Only 3 and 2 houses scored below 50% using KNN and RF respectively. In general, the presented outcome suggests that the chosen classifiers work fairly well, although they perform poorly for certain homes. In order to identify why this is the case, it is worth plotting the predictions and actual types of a couple of those home appliances. 
End of explanation
"""
def plot_clf_samples(model, X, X_te, y, y_te, n):
    model.fit(X[:n], y[:n])
    return np.array([model.score(X[:n], y[:n]), model.score(X_te, y_te)])

X = temp
y = y_org
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
models[1].fit(X_train, y_train)
models[1].score(X_test, y_test)

nsamples = [int(x) for x in np.linspace(10, X_train.shape[0], 20)]
errors = np.array([plot_clf_samples(clf, X_train, X_test, y_train, y_test, n) for n in nsamples])

plt.plot(nsamples, errors[:, 0], nsamples, errors[:, 1])
plt.xlabel('Number of appliances')
plt.ylabel('Accuracy')
plt.ylim([0.4, 1.1])
plt.legend(['Training accuracy', 'Test accuracy'], loc=4)
plt.title('RF accuracy with respect to number of samples')
"""
Explanation: By running the above script over different wrong predictions we noticed that many of them correspond to signals in either a transient or sub-transient state, which means that the shape of the V-I plot is not fully defined, so identifying the appliance type from such an image is very hard even for the human eye. Furthermore, in several homes the list of associated appliances contains the same appliance sampled at different times. For example, in house46, where we get an accuracy of 0%, the only signals correspond to a microwave whose V-I profile is very fuzzy. Therefore, in cases like this one, the classifiers are bound to fail repeatedly in a single house.
Conclusions and future work
The present notebook presents a data-driven approach to the problem of identifying home appliance types based on their corresponding electrical signals. Different multi-class classifiers are trained and tested on the PLAID dataset in order to identify the most accurate and least computationally expensive models. An image-recognition approach based on steady-state Voltage-Current profiles is used to model the inputs of the appliance classifiers. 
Based on the analyses undertaken we are able to identify some common patterns and draw conclusions about the two best-performing classifiers in terms of time and accuracy, K-nearest-neighbors and Random Forest Decision Tree:
- After fine-tuning their corresponding parameters on a training sub-set, the average accuracy of KNN and RF, applying 10-fold cross-validation, is greater than 91%.
- The One-vs-the-rest and Gradient Boosting Decision Trees classifiers also show high accuracy; however, their fitting time is on the order of minutes (almost 15 min. for Gradient Boosting), whereas KNN and RF take seconds to do the job.
- Though KNN scores slightly higher than RF, the latter takes a significantly shorter fitting time (about 8x less time).
- While high accuracy in both classifiers is achieved using traditional cross-validation techniques, when applying cross-validation per individual home, the accuracy decreases to 80% on average.
- While debugging the classifiers we noticed that many of the input current and voltage signals do not reach steady state for different appliances. Therefore, their corresponding V-I profile is not well defined, which makes the prediction harder even for an expert human eye. We also noticed that in several homes, the list of associated appliances contains the same appliance sampled at different times. Therefore, in those cases the classifiers are bound to fail repeatedly in a single house.
The following tasks are proposed as future work in order to improve the performance of the trained appliance classifiers:
- Collect more data: The figure below shows the training and test accuracy evolution of the RF classifier with respect to the number of samples. While only slight gains are realized after 700-800 samples, it seems that there is still room for improvement in this sense.
End of explanation
"""
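A natural follow-up diagnostic to the per-house accuracies discussed above would be a confusion matrix over appliance types, showing which classes are mistaken for which. A minimal sketch with toy labels (these are illustrative values, not actual PLAID predictions):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = actual class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy example: 3 appliance types, 6 predictions
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred, 3)
per_class_recall = cm.diagonal() / cm.sum(axis=1)
```

Applied to the real pred / y_test arrays, such a matrix would reveal whether the per-house failures concentrate on a few easily confused appliance types.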
mjlong/openmc
docs/source/pythonapi/examples/tally-arithmetic.ipynb
mit
%load_ext autoreload %autoreload 2 import glob from IPython.display import Image import numpy as np import openmc from openmc.statepoint import StatePoint from openmc.summary import Summary %matplotlib inline """ Explanation: This notebook shows the how tallies can be combined (added, subtracted, multiplied, etc.) using the Python API in order to create derived tallies. Since no covariance information is obtained, it is assumed that tallies are completely independent of one another when propagating uncertainties. The target problem is a simple pin cell. Note: that this Notebook was created using the latest Pandas v0.16.1. Everything in the Notebook will wun with older versions of Pandas, but the multi-indexing option in >v0.15.0 makes the tables look prettier. End of explanation """ # Instantiate some Nuclides h1 = openmc.Nuclide('H-1') b10 = openmc.Nuclide('B-10') o16 = openmc.Nuclide('O-16') u235 = openmc.Nuclide('U-235') u238 = openmc.Nuclide('U-238') zr90 = openmc.Nuclide('Zr-90') """ Explanation: Generate Input Files First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material. End of explanation """ # 1.6 enriched fuel fuel = openmc.Material(name='1.6% Fuel') fuel.set_density('g/cm3', 10.31341) fuel.add_nuclide(u235, 3.7503e-4) fuel.add_nuclide(u238, 2.2625e-2) fuel.add_nuclide(o16, 4.6007e-2) # borated water water = openmc.Material(name='Borated Water') water.set_density('g/cm3', 0.740582) water.add_nuclide(h1, 4.9457e-2) water.add_nuclide(o16, 2.4732e-2) water.add_nuclide(b10, 8.0042e-6) # zircaloy zircaloy = openmc.Material(name='Zircaloy') zircaloy.set_density('g/cm3', 6.55) zircaloy.add_nuclide(zr90, 7.2758e-3) """ Explanation: With the nuclides we defined, we will now create three materials for the fuel, water, and cladding of the fuel pin. 
End of explanation """ # Instantiate a MaterialsFile, add Materials materials_file = openmc.MaterialsFile() materials_file.add_material(fuel) materials_file.add_material(water) materials_file.add_material(zircaloy) materials_file.default_xs = '71c' # Export to "materials.xml" materials_file.export_to_xml() """ Explanation: With our three materials, we can now create a materials file object that can be exported to an actual XML file. End of explanation """ # Create cylinders for the fuel and clad fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218) clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720) # Create boundary planes to surround the geometry # Use both reflective and vacuum boundaries to make life interesting min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective') max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective') min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective') max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective') min_z = openmc.ZPlane(z0=-0.63, boundary_type='reflective') max_z = openmc.ZPlane(z0=+0.63, boundary_type='reflective') """ Explanation: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes. 
End of explanation """ # Create a Universe to encapsulate a fuel pin pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin') # Create fuel Cell fuel_cell = openmc.Cell(name='1.6% Fuel') fuel_cell.fill = fuel fuel_cell.region = -fuel_outer_radius pin_cell_universe.add_cell(fuel_cell) # Create a clad Cell clad_cell = openmc.Cell(name='1.6% Clad') clad_cell.fill = zircaloy clad_cell.region = +fuel_outer_radius & -clad_outer_radius pin_cell_universe.add_cell(clad_cell) # Create a moderator Cell moderator_cell = openmc.Cell(name='1.6% Moderator') moderator_cell.fill = water moderator_cell.region = +clad_outer_radius pin_cell_universe.add_cell(moderator_cell) """ Explanation: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces. End of explanation """ # Create root Cell root_cell = openmc.Cell(name='root cell') root_cell.fill = pin_cell_universe # Add boundary planes root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z # Create root Universe root_universe = openmc.Universe(universe_id=0, name='root universe') root_universe.add_cell(root_cell) """ Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe. End of explanation """ # Create Geometry and set root Universe geometry = openmc.Geometry() geometry.root_universe = root_universe # Instantiate a GeometryFile geometry_file = openmc.GeometryFile() geometry_file.geometry = geometry # Export to "geometry.xml" geometry_file.export_to_xml() """ Explanation: We now must create a geometry that is assigned a root universe, put the geometry into a geometry file, and export it to XML. 
End of explanation """ # OpenMC simulation parameters batches = 20 inactive = 5 particles = 2500 # Instantiate a SettingsFile settings_file = openmc.SettingsFile() settings_file.batches = batches settings_file.inactive = inactive settings_file.particles = particles settings_file.output = {'tallies': True, 'summary': True} source_bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63] settings_file.set_source_space('box', source_bounds) # Export to "settings.xml" settings_file.export_to_xml() """ Explanation: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 5 inactive batches and 15 active batches each with 2500 particles. End of explanation """ # Instantiate a Plot plot = openmc.Plot(plot_id=1) plot.filename = 'materials-xy' plot.origin = [0, 0, 0] plot.width = [1.26, 1.26] plot.pixels = [250, 250] plot.color = 'mat' # Instantiate a PlotsFile, add Plot, and export to "plots.xml" plot_file = openmc.PlotsFile() plot_file.add_plot(plot) plot_file.export_to_xml() """ Explanation: Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully. End of explanation """ # Run openmc in plotting mode executor = openmc.Executor() executor.plot_geometry(output=False) # Convert OpenMC's funky ppm to png !convert materials-xy.ppm materials-xy.png # Display the materials plot inline Image(filename='materials-xy.png') """ Explanation: With the plots.xml file, we can now generate and view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility. 
End of explanation """ # Instantiate an empty TalliesFile tallies_file = openmc.TalliesFile() # Create Tallies to compute microscopic multi-group cross-sections # Instantiate energy filter for multi-group cross-section Tallies energy_filter = openmc.Filter(type='energy', bins=[0., 0.625e-6, 20.]) # Instantiate flux Tally in moderator and fuel tally = openmc.Tally(name='flux') tally.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id, moderator_cell.id])) tally.add_filter(energy_filter) tally.add_score('flux') tallies_file.add_tally(tally) # Instantiate reaction rate Tally in fuel tally = openmc.Tally(name='fuel rxn rates') tally.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id])) tally.add_filter(energy_filter) tally.add_score('nu-fission') tally.add_score('scatter') tally.add_nuclide(u238) tally.add_nuclide(u235) tallies_file.add_tally(tally) # Instantiate reaction rate Tally in moderator tally = openmc.Tally(name='moderator rxn rates') tally.add_filter(openmc.Filter(type='cell', bins=[moderator_cell.id])) tally.add_filter(energy_filter) tally.add_score('absorption') tally.add_score('total') tally.add_nuclide(o16) tally.add_nuclide(h1) tallies_file.add_tally(tally) # K-Eigenvalue (infinity) tallies fiss_rate = openmc.Tally(name='fiss. rate') abs_rate = openmc.Tally(name='abs. rate') fiss_rate.add_score('nu-fission') abs_rate.add_score('absorption') tallies_file.add_tally(fiss_rate) tallies_file.add_tally(abs_rate) # Resonance Escape Probability tallies therm_abs_rate = openmc.Tally(name='therm. abs. rate') therm_abs_rate.add_score('absorption') therm_abs_rate.add_filter(openmc.Filter(type='energy', bins=[0., 0.625])) tallies_file.add_tally(therm_abs_rate) # Thermal Flux Utilization tallies fuel_therm_abs_rate = openmc.Tally(name='fuel therm. abs. 
rate') fuel_therm_abs_rate.add_score('absorption') fuel_therm_abs_rate.add_filter(openmc.Filter(type='energy', bins=[0., 0.625])) fuel_therm_abs_rate.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id])) tallies_file.add_tally(fuel_therm_abs_rate) # Fast Fission Factor tallies therm_fiss_rate = openmc.Tally(name='therm. fiss. rate') therm_fiss_rate.add_score('nu-fission') therm_fiss_rate.add_filter(openmc.Filter(type='energy', bins=[0., 0.625])) tallies_file.add_tally(therm_fiss_rate) # Instantiate energy filter to illustrate Tally slicing energy_filter = openmc.Filter(type='energy', bins=np.logspace(np.log10(1e-8), np.log10(20), 10)) # Instantiate flux Tally in moderator and fuel tally = openmc.Tally(name='need-to-slice') tally.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id, moderator_cell.id])) tally.add_filter(energy_filter) tally.add_score('nu-fission') tally.add_score('scatter') tally.add_nuclide(h1) tally.add_nuclide(u238) tallies_file.add_tally(tally) # Export to "tallies.xml" tallies_file.export_to_xml() """ Explanation: As we can see from the plot, we have a nice pin cell with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a variety of tallies. End of explanation """ # Remove old HDF5 (summary, statepoint) files !rm statepoint.* # Run OpenMC with MPI! executor.run_simulation() """ Explanation: Now we a have a complete set of inputs, so we can go ahead and run our simulation. End of explanation """ # Load the statepoint file sp = StatePoint('statepoint.20.h5') """ Explanation: Tally Data Processing Our simulation ran successfully and created a statepoint file with all the tally data in it. We begin our analysis here loading the statepoint file and 'reading' the results. By default, the tally results are not read into memory because they might be large, even large enough to exceed the available memory on a computer. 
End of explanation """ # Load the summary file and link with statepoint su = Summary('summary.h5') sp.link_with_summary(su) """ Explanation: You may have also noticed we instructed OpenMC to create a summary file with lots of geometry information in it. This can help to produce more sensible output from the Python API, so we will use the summary file to link against. End of explanation """ # Compute k-infinity using tally arithmetic fiss_rate = sp.get_tally(name='fiss. rate') abs_rate = sp.get_tally(name='abs. rate') keff = fiss_rate / abs_rate keff.get_pandas_dataframe() """ Explanation: We have a tally of the total fission rate and the total absorption rate, so we can calculate k-infinity as: $$k_\infty = \frac{\langle \nu \Sigma_f \phi \rangle}{\langle \Sigma_a \phi \rangle}$$ In this notation, $\langle \cdot \rangle^a_b$ represents an OpenMC that is integrated over region $a$ and energy range $b$. If $a$ or $b$ is not reported, it means the value represents an integral over all space or all energy, respectively. End of explanation """ # Compute resonance escape probability using tally arithmetic therm_abs_rate = sp.get_tally(name='therm. abs. rate') res_esc = therm_abs_rate / abs_rate res_esc.get_pandas_dataframe() """ Explanation: Notice that even though the neutron production rate and absorption rate are separate tallies, we still get a first-order estimate of the uncertainty on the quotient of them automatically! Often in textbooks you'll see k-infinity represented using the four-factor formula $$k_\infty = p \epsilon f \eta.$$ Let's analyze each of these factors, starting with the resonance escape probability which is defined as $$p=\frac{\langle\Sigma_a\phi\rangle_T}{\langle\Sigma_a\phi\rangle}$$ where the subscript $T$ means thermal energies. End of explanation """ # Compute fast fission factor factor using tally arithmetic therm_fiss_rate = sp.get_tally(name='therm. fiss. 
rate') fast_fiss = fiss_rate / therm_fiss_rate fast_fiss.get_pandas_dataframe() """ Explanation: The fast fission factor can be calculated as $$\epsilon=\frac{\langle\nu\Sigma_f\phi\rangle}{\langle\nu\Sigma_f\phi\rangle_T}$$ End of explanation """ # Compute thermal flux utilization factor using tally arithmetic fuel_therm_abs_rate = sp.get_tally(name='fuel therm. abs. rate') therm_util = fuel_therm_abs_rate / therm_abs_rate therm_util.get_pandas_dataframe() """ Explanation: The thermal flux utilization is calculated as $$f=\frac{\langle\Sigma_a\phi\rangle^F_T}{\langle\Sigma_a\phi\rangle_T}$$ where the superscript $F$ denotes fuel. End of explanation """ # Compute neutrons produced per absorption (eta) using tally arithmetic eta = therm_fiss_rate / fuel_therm_abs_rate eta.get_pandas_dataframe() """ Explanation: The final factor is the number of fission neutrons produced per absorption in fuel, calculated as $$\eta = \frac{\langle \nu\Sigma_f\phi \rangle_T}{\langle \Sigma_a \phi \rangle^F_T}$$ End of explanation """ keff = res_esc * fast_fiss * therm_util * eta keff.get_pandas_dataframe() """ Explanation: Now we can calculate $k_\infty$ using the product of the factors form the four-factor formula. End of explanation """ # Compute microscopic multi-group cross-sections flux = sp.get_tally(name='flux') flux = flux.get_slice(filters=['cell'], filter_bins=[(fuel_cell.id,)]) fuel_rxn_rates = sp.get_tally(name='fuel rxn rates') mod_rxn_rates = sp.get_tally(name='moderator rxn rates') fuel_xs = fuel_rxn_rates / flux fuel_xs.get_pandas_dataframe() """ Explanation: We see that the value we've obtained here has exactly the same mean as before. However, because of the way it was calculated, the standard deviation appears to be larger. Let's move on to a more complicated example now. Before we set up tallies to get reaction rates in the fuel and moderator in two energy groups for two different nuclides. 
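As an aside, the automatic first-order uncertainty mentioned above follows the usual delta-method rule for a ratio of independent estimates, $\sigma_{a/b} \approx |a/b|\sqrt{(\sigma_a/a)^2 + (\sigma_b/b)^2}$. A small sketch of that rule with made-up tally values (not numbers from this run):

```python
import math

def ratio_std(a, sd_a, b, sd_b):
    """First-order (delta-method) standard deviation of a/b,
    assuming a and b are independent estimates."""
    return abs(a / b) * math.sqrt((sd_a / a) ** 2 + (sd_b / b) ** 2)

# Hypothetical tally means / standard deviations for illustration only
fission, sd_fission = 1.30, 0.013        # ~1% relative error
absorption, sd_absorption = 1.00, 0.020  # ~2% relative error

k = fission / absorption
sd_k = ratio_std(fission, sd_fission, absorption, sd_absorption)
```

Relative errors add in quadrature, which is also why the four-factor product above carries a larger standard deviation than the single direct quotient.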
We can use tally arithmetic to divide each of these reaction rates by the flux to get microscopic multi-group cross sections. End of explanation """ # Show how to use Tally.get_values(...) with a CrossScore nu_fiss_xs = fuel_xs.get_values(scores=['(nu-fission / flux)']) print(nu_fiss_xs) """ Explanation: We see that when the two tallies with multiple bins were divided, the derived tally contains the outer product of the combinations. If the filters/scores are the same, no outer product is needed. The get_values(...) method allows us to obtain a subset of tally scores. In the following example, we obtain just the neutron production microscopic cross sections. End of explanation """ # Show how to use Tally.get_values(...) with a CrossScore and CrossNuclide u235_scatter_xs = fuel_xs.get_values(nuclides=['(U-235 / total)'], scores=['(scatter / flux)']) print(u235_scatter_xs) # Show how to use Tally.get_values(...) with a CrossFilter and CrossScore fast_scatter_xs = fuel_xs.get_values(filters=['energy'], filter_bins=[((0.625e-6, 20.),)], scores=['(scatter / flux)']) print(fast_scatter_xs) """ Explanation: The same idea can be used not only for scores but also for filters and nuclides. End of explanation """ # "Slice" the nu-fission data into a new derived Tally nu_fission_rates = fuel_rxn_rates.get_slice(scores=['nu-fission']) nu_fission_rates.get_pandas_dataframe() # "Slice" the H-1 scatter data in the moderator Cell into a new derived Tally need_to_slice = sp.get_tally(name='need-to-slice') slice_test = need_to_slice.get_slice(scores=['scatter'], nuclides=['H-1'], filters=['cell'], filter_bins=[(moderator_cell.id,)]) slice_test.get_pandas_dataframe() """ Explanation: A more advanced method is to use get_slice(...) to create a new derived tally that is a subset of an existing tally. This has the benefit that we can use get_pandas_dataframe() to see the tallies in a more human-readable format. End of explanation """
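To close, the per-bin arithmetic behind the derived cross-section tallies used throughout this notebook can be mimicked in plain numpy. This sketch uses made-up numbers, not values from the run, just to show the rate-divided-by-flux structure:

```python
import numpy as np

# Hypothetical reaction rates: rows = 2 energy groups, cols = 2 nuclides
rates = np.array([[2.0, 4.0],
                  [3.0, 9.0]])
flux = np.array([10.0, 30.0])  # one flux value per energy group

# Microscopic cross sections: divide each row by that group's flux
xs = rates / flux[:, None]
```

Each entry of xs is a reaction rate normalized by the flux in the matching energy bin, which is what the tally quotient computes for every filter/nuclide/score combination.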
philmui/datascience2016fall
lecture03.numpy.pandas/lecture03.web.scaping.ipynb
mit
import requests from lxml import html """ Explanation: Import needed libraries End of explanation """ response = requests.get('http://news.ycombinator.com/') response response.content """ Explanation: We used the library "request" last time in getting Twitter data (REST-ful). We are introducing the new "lxml" library for analyzing & extracting HTML elements and attributes here. Use Requests to get HackerNews content HackerNews is a community contributed news website with an emphasis on technology related content. Let's grab the set of articles that are at the top of the HN list. End of explanation """ page = html.fromstring(response.content) page """ Explanation: We will now use lxml to create a programmatic access to the content from HackerNews. Analyzing HTML Content End of explanation """ posts = page.cssselect('.title') len(posts) """ Explanation: CSS Selectors For those of you who are web designers, you are likely very familiar with Cascading Stylesheets (CSS). Here is an example for how to use CSS selector for finding specific HTML elements End of explanation """ posts = page.xpath('//td[contains(@class, "title")]') len(posts) """ Explanation: Details of how to use CSS selectors can be found in the w3 schools site: http://www.w3schools.com/cssref/css_selectors.asp XPath Alternatively, we can use a standard called "XPath" to find specific content in the HTML. End of explanation """ posts = page.xpath('//td[contains(@class, "title")]/a') len(posts) """ Explanation: We are only interested in those "td" tags that contain an anchor link to the referred article. End of explanation """ first_post = posts[0] first_post.text """ Explanation: So, only half of those "td" tags with "title" contain posts that we are interested in. Let's take a look at the first such post. 
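For comparison, a similar (text, href) extraction can be sketched with only the Python standard library, without lxml. A toy HTML snippet stands in for the live HackerNews page here:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect (text, href) pairs for anchors inside <td class="title">."""
    def __init__(self):
        super().__init__()
        self.in_title_td = False
        self.current_href = None
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "td" and "title" in (attrs.get("class") or ""):
            self.in_title_td = True
        elif tag == "a" and self.in_title_td:
            self.current_href = attrs.get("href")

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_title_td = False
        elif tag == "a":
            self.current_href = None

    def handle_data(self, data):
        if self.current_href is not None and data.strip():
            self.links.append((data.strip(), self.current_href))

snippet = """
<table>
  <tr><td class="title"><a href="http://example.com/a">First post</a></td></tr>
  <tr><td class="title"><a href="http://example.com/b">Second post</a></td></tr>
</table>
"""
collector = LinkCollector()
collector.feed(snippet)
```

lxml's cssselect/xpath remain far more convenient for real scraping; the point of the sketch is just to show what the selector queries are doing underneath.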
End of explanation """ first_post.attrib first_post.attrib["href"] all_links = [] for p in posts: all_links.append((p.text, p.attrib["href"])) all_links """ Explanation: There is a lot of "content" in the td tag's attributes. End of explanation """
ulf1/overgang
examples/internal data format of ctmc_fit.ipynb
mit
data = [([0, 1, 2, 1], [2.2, 3.35, 9.4, 1.3]), ([1, 0, 1], [4.0, 1.25, 1.7])]
"""
Explanation: The Internal Data Format for ctmc_fit
The function ctmc_fit expects the data to be structured as follows
End of explanation
"""
import numpy as np

numstates = 3
statetime = np.zeros(numstates, dtype=float)
transcount = np.zeros(shape=(numstates, numstates), dtype=int)
"""
Explanation: Each example or event chain is one element of the array data. The first entry of an example row is a list of states, the second entry a list of the time periods each state lasted. How does it work in ctmc_fit? Initialize variables
End of explanation
"""
for _, example in enumerate(data):
    states = example[0]
    times = example[1]
    for i, s in enumerate(states):
        statetime[s] += times[i]
        if i:
            transcount[states[i-1], s] += 1
"""
Explanation: Loop over all examples, and accumulate the time periods and count the transitions across all examples.
End of explanation
"""
statetime
transcount
#from scipy.sparse import lil_matrix
#transcount = lil_matrix((numstates, numstates), dtype=int)
#transcount.toarray()
"""
Explanation: The intermediate results are
End of explanation
"""
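From these sufficient statistics, the maximum-likelihood transition rates of a continuous-time Markov chain follow directly: each off-diagonal rate is the transition count divided by the total holding time of the source state, and rows sum to zero. This is a sketch of the presumed next step downstream of ctmc_fit, not its actual code; the statistics are recomputed here so the block is self-contained:

```python
import numpy as np

data = [([0, 1, 2, 1], [2.2, 3.35, 9.4, 1.3]),
        ([1, 0, 1], [4.0, 1.25, 1.7])]
numstates = 3
statetime = np.zeros(numstates)
transcount = np.zeros((numstates, numstates))
for states, times in data:
    for i, s in enumerate(states):
        statetime[s] += times[i]
        if i:
            transcount[states[i - 1], s] += 1

# MLE generator matrix: q_ij = n_ij / T_i for i != j, diagonal = -row sum
genmat = transcount / statetime[:, None]
np.fill_diagonal(genmat, 0.0)
np.fill_diagonal(genmat, -genmat.sum(axis=1))
```

For the example data, state 0 accumulates 3.45 time units and two 0-to-1 transitions, giving a rate of 2/3.45 for that entry.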
gigjozsa/HI_analysis_course
chapter_01_somename/01_01_somename2.ipynb
gpl-2.0
import numpy as np import matplotlib.pyplot as plt %matplotlib inline from IPython.display import HTML HTML('../style/course.css') #apply general CSS """ Explanation: Content Glossary 1. Somename Next: 1.2 Somename 3 Import standard modules: End of explanation """ pass """ Explanation: Import section specific modules: End of explanation """
jseabold/statsmodels
examples/notebooks/gee_score_test_simulation.ipynb
bsd-3-clause
import pandas as pd
import numpy as np
from scipy.stats.distributions import norm, poisson
import statsmodels.api as sm
import matplotlib.pyplot as plt
"""
Explanation: GEE score tests
This notebook uses simulation to demonstrate robust GEE score tests. These tests can be used in a GEE analysis to compare nested hypotheses about the mean structure. The tests are robust to misspecification of the working correlation model, and to certain forms of misspecification of the variance structure (e.g. as captured by the scale parameter in a quasi-Poisson analysis).
The data are simulated as clusters, where there is dependence within but not between clusters. The cluster-wise dependence is induced using a copula approach. The data marginally follow a negative binomial (gamma/Poisson) mixture.
The level and power of the tests are considered below to assess the performance of the tests.
End of explanation
"""
def negbinom(u, mu, scale):
    p = (scale - 1) / scale
    r = mu * (1 - p) / p
    x = np.random.gamma(r, p / (1 - p), len(u))
    return poisson.ppf(u, mu=x)
"""
Explanation: The function defined in the following cell uses a copula approach to simulate correlated random values that marginally follow a negative binomial distribution. The input parameter u is an array of values in (0, 1). The elements of u must be marginally uniformly distributed on (0, 1). Correlation in u will induce correlations in the returned negative binomial values. The array parameter mu gives the marginal means, and the scalar parameter scale defines the mean/variance relationship (the variance is scale times the mean). The lengths of u and mu must be the same. 
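Ignoring the copula step, the gamma-Poisson construction above should produce a marginal mean of mu and a variance of scale * mu. A quick Monte Carlo check of that relationship, using independent draws and a fixed seed (a standalone sketch, not part of the simulation below):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, scale, n = 5.0, 10.0, 200_000

p = (scale - 1) / scale                  # p = 0.9 here
r = mu * (1 - p) / p                     # gamma shape parameter
lam = rng.gamma(r, p / (1 - p), size=n)  # per-draw Poisson rates
y = rng.poisson(lam)                     # gamma/Poisson mixture

sample_mean = y.mean()
sample_var = y.var()
```

The gamma mixing gives E[y] = mu and Var(y) = mu + Var(lam) = mu / (1 - p) = scale * mu, which the sample moments should reproduce up to Monte Carlo error.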
End of explanation """ # Sample size n = 1000 # Number of covariates (including intercept) in the alternative hypothesis model p = 5 # Cluster size m = 10 # Intraclass correlation (controls strength of clustering) r = 0.5 # Group indicators grp = np.kron(np.arange(n/m), np.ones(m)) """ Explanation: Below are some parameters that govern the data used in the simulation. End of explanation """ # Build a design matrix for the alternative (more complex) model x = np.random.normal(size=(n, p)) x[:, 0] = 1 """ Explanation: The simulation uses a fixed design matrix. End of explanation """ x0 = x[:, 0:3] """ Explanation: The null design matrix is nested in the alternative design matrix. It has rank two less than the alternative design matrix. End of explanation """ # Scale parameter for negative binomial distribution scale = 10 """ Explanation: The GEE score test is robust to dependence and overdispersion. Here we set the overdispersion parameter. The variance of the negative binomial distribution for each observation is equal to scale times its mean value. End of explanation """ # The coefficients used to define the linear predictors coeff = [[4, 0.4, -0.2], [4, 0.4, -0.2, 0, -0.04]] # The linear predictors lp = [np.dot(x0, coeff[0]), np.dot(x, coeff[1])] # The mean values mu = [np.exp(lp[0]), np.exp(lp[1])] """ Explanation: In the next cell, we set up the mean structures for the null and alternative models End of explanation """ # hyp = 0 is the null hypothesis, hyp = 1 is the alternative hypothesis. 
# cov_struct is a statsmodels covariance structure
def dosim(hyp, cov_struct=None, mcrep=500):
    # Storage for the simulation results
    scales = [[], []]
    # P-values from the score test
    pv = []
    # Monte Carlo loop
    for k in range(mcrep):
        # Generate random "probability points" u that are uniformly
        # distributed, and correlated within clusters
        z = np.random.normal(size=n)
        u = np.random.normal(size=n//m)
        u = np.kron(u, np.ones(m))
        z = r*z + np.sqrt(1-r**2)*u
        u = norm.cdf(z)
        # Generate the observed responses
        y = negbinom(u, mu=mu[hyp], scale=scale)
        # Fit the null model
        m0 = sm.GEE(y, x0, groups=grp, cov_struct=cov_struct, family=sm.families.Poisson())
        r0 = m0.fit(scale='X2')
        scales[0].append(r0.scale)
        # Fit the alternative model
        m1 = sm.GEE(y, x, groups=grp, cov_struct=cov_struct, family=sm.families.Poisson())
        r1 = m1.fit(scale='X2')
        scales[1].append(r1.scale)
        # Carry out the score test
        st = m1.compare_score_test(r0)
        pv.append(st["p-value"])
    pv = np.asarray(pv)
    rslt = [np.mean(pv), np.mean(pv < 0.1)]
    return rslt, scales
"""
Explanation: Below is a function that carries out the simulation.
End of explanation
"""
rslt, scales = [], []

for hyp in 0, 1:
    s, t = dosim(hyp, sm.cov_struct.Independence())
    rslt.append(s)
    scales.append(t)

rslt = pd.DataFrame(rslt, index=["H0", "H1"], columns=["Mean", "Prop(p<0.1)"])
print(rslt)
"""
Explanation: Run the simulation using the independence working covariance structure. We expect the mean p-value to be around 0.5 under the null hypothesis, and much lower under the alternative hypothesis. Similarly, we expect that under the null hypothesis, around 10% of the p-values are less than 0.1, and a much greater fraction of the p-values are less than 0.1 under the alternative hypothesis.
End of explanation
"""
_ = plt.boxplot([scales[0][0], scales[0][1], scales[1][0], scales[1][1]])
plt.ylabel("Estimated scale")
"""
Explanation: Next we check to make sure that the scale parameter estimates are reasonable. 
We are assessing the robustness of the GEE score test to dependence and overdispersion, so here we are confirming that the overdispersion is present as expected. End of explanation """ rslt, scales = [], [] for hyp in 0, 1: s, t = dosim(hyp, sm.cov_struct.Exchangeable(), mcrep=100) rslt.append(s) scales.append(t) rslt = pd.DataFrame(rslt, index=["H0", "H1"], columns=["Mean", "Prop(p<0.1)"]) print(rslt) """ Explanation: Next we conduct the same analysis using an exchangeable working correlation model. Note that this will be slower than the example above using independent working correlation, so we use fewer Monte Carlo repetitions. End of explanation """
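For reference, the exchangeable structure fitted above corresponds to a working correlation matrix with a single off-diagonal parameter, R = (1 - a) I + a J. A small sketch building such a matrix by hand, with a hypothetical correlation value and cluster size:

```python
import numpy as np

def exchangeable_corr(m, a):
    """m x m exchangeable correlation matrix: 1 on the diagonal, a elsewhere."""
    return (1.0 - a) * np.eye(m) + a * np.ones((m, m))

# Hypothetical values: cluster size 4, within-cluster correlation 0.3
R = exchangeable_corr(4, 0.3)
```

The matrix is positive definite for -1/(m-1) < a < 1, so a single parameter a is estimated regardless of cluster size, which is why the exchangeable fit is only modestly more expensive than independence.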
landlab/landlab
notebooks/tutorials/network_sediment_transporter/network_sediment_transporter.ipynb
mit
import warnings warnings.filterwarnings("ignore") import matplotlib.pyplot as plt import numpy as np from landlab.components import FlowDirectorSteepest, NetworkSedimentTransporter from landlab.data_record import DataRecord from landlab.grid.network import NetworkModelGrid from landlab.plot import graph from landlab.plot import plot_network_and_parcels %matplotlib inline """ Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a> Using the Landlab NetworkSedimentTransporter component <hr> <small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small> <hr> This tutorial illustrates how to model the transport of coarse sediment through a synthetic river network using the NetworkSedimentTransporter Landlab component. For an equivalent tutorial demonstrating initialization of the NetworkSedimentTransporter with a shapefile river network, click here. In this example we will: - create a synthetic Landlab grid to represent a river network - create sediment "parcels" that will transport through the river network, represented as items in a Landlab DataRecord - run the component - plot the results of the model run Import the necessary libraries, plus a bit of magic so that we can plot within this notebook: End of explanation """ y_of_node = (0, 100, 200, 200, 300, 400, 400, 125) x_of_node = (0, 0, 100, -50, -100, 50, -150, -100) nodes_at_link = ((1, 0), (2, 1), (1, 7), (3, 1), (3, 4), (4, 5), (4, 6)) grid = NetworkModelGrid((y_of_node, x_of_node), nodes_at_link) plt.figure(0) graph.plot_graph(grid, at="node,link") """ Explanation: 1. Create the river network model grid First, we need to create a Landlab NetworkModelGrid to represent the river network. Each link on the grid represents a reach of river. Each node represents a break between reaches. 
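Before building the grid, the connectivity implied by nodes_at_link can be sanity-checked by hand. A plain-Python sketch that counts how many links touch each node, using the same tuples as the grid below:

```python
from collections import Counter

nodes_at_link = ((1, 0), (2, 1), (1, 7), (3, 1), (3, 4), (4, 5), (4, 6))

degree = Counter()
for head, tail in nodes_at_link:
    degree[head] += 1
    degree[tail] += 1
```

Node 1 touches four links, marking it as the main junction of this synthetic network, and every link contributes exactly two node endpoints.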
All tributary junctions must be associated with grid nodes.
End of explanation
"""
grid.at_node["topographic__elevation"] = [0.0, 0.08, 0.25, 0.15, 0.25, 0.4, 0.8, 0.8]
grid.at_node["bedrock__elevation"] = [0.0, 0.08, 0.25, 0.15, 0.25, 0.4, 0.8, 0.8]

grid.at_link["flow_depth"] = 2.5 * np.ones(grid.number_of_links)  # m
grid.at_link["reach_length"] = 200 * np.ones(grid.number_of_links)  # m
grid.at_link["channel_width"] = 1 * np.ones(grid.number_of_links)  # m
"""
Explanation: Our network consists of seven links between eight nodes. X and Y, above, represent the plan-view coordinates of the node locations. nodes_at_link describes the node indices that are connected by each link. For example, link 2 connects node 1 and node 7. Next, we need to populate the grid with the relevant topographic information:
End of explanation
"""
# element_id is the link on which the parcel begins.
element_id = np.repeat(np.arange(grid.number_of_links), 30)
element_id = np.expand_dims(element_id, axis=1)

volume = 0.05 * np.ones(np.shape(element_id))  # (m3)
active_layer = np.ones(np.shape(element_id))  # 1 = active, 0 = inactive
density = 2650 * np.ones(np.size(element_id))  # (kg/m3)
abrasion_rate = 0 * np.ones(np.size(element_id))  # (mass loss /m)

# Lognormal GSD
medianD = 0.085  # m
mu = np.log(medianD)
sigma = np.log(2)  # assume that D84 = sigma*D50
np.random.seed(0)
D = np.random.lognormal(
    mu, sigma, np.shape(element_id)
)  # (m) the diameter of grains in each parcel
"""
Explanation: We must distinguish between topographic elevation (the top surface of the bed sediment) and bedrock elevation (the surface of the river in the absence of modeled sediment).
Note that "reach_length" is defined by the user, rather than calculated as the minimum distance between nodes. This accounts for channel sinuosity.
2. Create sediment 'parcels' in a DataRecord
We represent sediment in the network as discrete parcels (or packages) of grains of uniform size and characteristics. 
Each parcel is tracked through the network grid according to sediment transport and stratigraphic constraints. Parcels are tracked using the Landlab DataRecord. First, let's create arrays with all of the essential sediment parcel variables: End of explanation """ time_arrival_in_link = np.random.rand(np.size(element_id), 1) location_in_link = np.random.rand(np.size(element_id), 1) """ Explanation: In order to track sediment motion, we classify parcels as either active (representing mobile surface sediment) or inactive (immobile subsurface) during each timestep. The active parcels are the most recent parcels to arrive in the link. During a timestep, active parcels are transported downstream (increasing their location_in_link, which ranges from 0 to 1) according to a sediment transport formula. We begin by assigning each parcel an arbitrary (and small) arrival time and location in the link. End of explanation """ lithology = ["quartzite"] * np.size(element_id) """ Explanation: In addition to the required parcel attributes listed above, you can designate optional parcel characteristics, depending on your needs. 
For example: End of explanation """ variables = { "abrasion_rate": (["item_id"], abrasion_rate), "density": (["item_id"], density), "lithology": (["item_id"], lithology), "time_arrival_in_link": (["item_id", "time"], time_arrival_in_link), "active_layer": (["item_id", "time"], active_layer), "location_in_link": (["item_id", "time"], location_in_link), "D": (["item_id", "time"], D), "volume": (["item_id", "time"], volume), } """ Explanation: We now collect the arrays into a dictionary of variables, some of which will be tracked through time (["item_id", "time"]), and others of which will remain constant through time : End of explanation """ items = {"grid_element": "link", "element_id": element_id} parcels = DataRecord( grid, items=items, time=[0.0], data_vars=variables, dummy_elements={"link": [NetworkSedimentTransporter.OUT_OF_NETWORK]}, ) """ Explanation: With all of the required attributes collected, we can create the parcels DataRecord. Often, parcels will eventually transport off of the downstream-most link. To track these parcels, we have designated a "dummy_element" here, which has index value -2. End of explanation """ timesteps = 10 # total number of timesteps dt = 60 * 60 * 24 * 1 # length of timestep (seconds) """ Explanation: 3. Run the NetworkSedimentTransporter With the parcels and grid set up, we can move on to setting up the model. End of explanation """ fd = FlowDirectorSteepest(grid, "topographic__elevation") fd.run_one_step() """ Explanation: Before running the NST, we need to determine flow direction on the grid (upstream and downstream for each link). 
To do so, we initialize and run a Landlab flow director component:
End of explanation
"""
fd = FlowDirectorSteepest(grid, "topographic__elevation")
fd.run_one_step()
""" Explanation: Then, we initialize the network sediment transporter:
End of explanation """
nst = NetworkSedimentTransporter(
    grid,
    parcels,
    fd,
    bed_porosity=0.3,
    g=9.81,
    fluid_density=1000,
    transport_method="WilcockCrowe",
)
""" Explanation: Now we are ready to run the model forward in time:
End of explanation """
for t in range(0, (timesteps * dt), dt):
    nst.run_one_step(dt)
    print("Model time: ", t / dt, "timesteps passed")
""" Explanation: 4. Plot the model results
There are landlab plotting tools specific to the NetworkSedimentTransporter. Here, we demonstrate one example use of plot_network_and_parcels, which creates a plan-view map of the network and parcels (represented as dots along the network). We can color both the parcels and the links by attributes. For a thorough tutorial on the plotting tools, see this notebook.
Below, each link (represented as a line) is colored by the total volume of sediment on the link. Each parcel is colored by the parcel grain size.
End of explanation """
fig = plot_network_and_parcels(
    grid,
    parcels,
    parcel_time_index=0,
    parcel_color_attribute="D",
    link_attribute="sediment_total_volume",
    parcel_size=10,
    parcel_alpha=1.0,
)
""" Explanation: In addition, the results of the NST can be visualized by directly accessing information about the grid and the parcels, and by accessing variables stored after the run of the NST.
As a simple example, we can plot the total transport distance of all parcels through the model run as a function of parcel diameter. End of explanation """
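The grain-size dependence in the plot above can also be summarized numerically. Below is a small, self-contained sketch — it uses synthetic stand-in arrays rather than the real `parcels` and `nst` objects, which only exist inside a model run — that bins parcels by grain size and compares the median travel distance per bin:

```python
import numpy as np

# Synthetic stand-ins for parcels.dataset.D and nst._distance_traveled_cumulative
rng = np.random.default_rng(0)
grain_size = rng.lognormal(mean=np.log(0.085), sigma=np.log(2), size=500)  # m
# Assume, for illustration, that travel distance scales inversely with grain size
distance = 1000.0 / grain_size * rng.uniform(0.5, 1.5, size=500)  # m

# Bin parcels into grain-size quartiles and take the median distance per bin
bins = np.quantile(grain_size, [0.0, 0.25, 0.5, 0.75, 1.0])
bin_index = np.clip(np.digitize(grain_size, bins) - 1, 0, 3)
medians = [np.median(distance[bin_index == k]) for k in range(4)]

print(medians)  # coarser quartiles travel shorter distances
```

With real model output, the same binning applied to the parcel diameters and cumulative distances would quantify the size-selective transport seen in the scatter plot.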
maniacalbrain/Cluster-vinb-tweets
#vinb - Cluster Tweets.ipynb
mit
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(use_idf=True, ngram_range=(1, 3))
train_data_features = vectorizer.fit_transform(clean_tweets)
terms = vectorizer.get_feature_names()

from sklearn.cluster import KMeans

num_clusters = 15
km = KMeans(n_clusters=num_clusters)
km.fit(train_data_features)
clusters = km.labels_.tolist()

clusterframe = pd.DataFrame(clusters, columns=["cluster"])  # turns the list of clusters into a dataframe
clustered_debate = pd.concat([vinb, clusterframe], axis=1)  # combines the tweets with the clusters
clustered_debate.head()

from __future__ import print_function

print("Top terms per cluster:")
print()
# sort cluster centers by proximity to centroid
order_centroids = km.cluster_centers_.argsort()[:, ::-1]  # from start to finish, reverse array

for i in range(num_clusters):
    print("Cluster %d words:" % i, end='')
    for ind in order_centroids[i, :5]:  # will print the 5 most common words
        print(' %s' % terms[ind], end=',')
    print()
    print('Length: %d' % len(clustered_debate.Text[clustered_debate.cluster == i]))  # prints cluster length, i.e. the number of tweets in each cluster
    print()
""" Explanation: A lot of the following code is taken from or inspired by this excellent <a href="http://brandonrose.org/clustering" target="_blank">Document Clustering</a> tutorial
End of explanation """
for i in range(num_clusters):
    print("Cluster %d words:" % i, end='')
    for ind in order_centroids[i, :5]:
        print(' %s' % terms[ind], end=',')
    print()
    for text in pd.DataFrame(clustered_debate.Text[clustered_debate.cluster == i]).Text.head(10):
        print(text)
    print()
    print()
""" Explanation: With no ngram_range set (the default of 1), there was a tendency for an "uber-cluster" to appear. Even at 30 clusters, one of them contained over 33% of the tweets. However, the tweets in the other clusters seemed to be very strongly correlated.
In the above, with ngram_range set to (1, 3), the important words seem a lot better, but the clusters themselves often contain very disparate tweets. Below are sample tweets from each of the clusters.
End of explanation
"""
display = pd.DataFrame(clustered_debate.Text[clustered_debate.cluster == 0])
for text in display.Text.head(10):
    print(text)
""" Explanation: In the following block, change the number on the first row to that of the cluster you want, and change the number on the second row to the number of tweets from that cluster that you want. The example below prints out 10 tweets from cluster 0.
End of explanation """
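Once the vectorizer and the k-means model are fitted, a previously unseen tweet can be assigned to its nearest cluster with `predict()`. A minimal, self-contained sketch using a toy corpus (the real notebook uses the cleaned vinb tweets):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Tiny stand-in corpus with two obvious topics
docs = [
    "water charges protest", "water charges bill", "water meter protest",
    "election debate tonight", "debate on tv tonight", "watch the debate",
]
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(docs)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# A new document is assigned to the nearest centroid
new_cluster = km.predict(vectorizer.transform(["another water protest"]))[0]
print(new_cluster)
```

The same pattern (`km.predict(vectorizer.transform([...]))`) would place a new tweet into one of the 15 clusters fitted above.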
nbokulich/short-read-tax-assignment
ipynb/mock-community/taxonomy-assignment-trimmed-dbs.ipynb
bsd-3-clause
from os.path import join, expandvars from joblib import Parallel, delayed from glob import glob from os import system from tax_credit.framework_functions import (parameter_sweep, generate_per_method_biom_tables, move_results_to_repository) project_dir = expandvars("$HOME/Desktop/projects/short-read-tax-assignment") analysis_name= "mock-community" data_dir = join(project_dir, "data", analysis_name) reference_database_dir = expandvars("$HOME/Desktop/ref_dbs/") results_dir = expandvars("$HOME/Desktop/projects/mock-community/") """ Explanation: Data generation: using python to sweep over methods and parameters In this notebook, we illustrate how to use python to generate and run a list of commands. In this example, we generate a list of QIIME 1.9.0 assign_taxonomy.py commands, though this workflow for command generation is generally very useful for performing parameter sweeps (i.e., exploration of sets of parameters for achieving a specific result for comparative purposes). Environment preparation End of explanation """ dataset_reference_combinations = [ ('mock-3', 'silva_123_v4_trim250'), ('mock-3', 'silva_123_clean_full16S'), ('mock-3', 'silva_123_clean_v4_trim250'), ('mock-3', 'gg_13_8_otus_clean_trim150'), ('mock-3', 'gg_13_8_otus_clean_full16S'), ('mock-9', 'unite_20.11.2016_clean_trim100'), ('mock-9', 'unite_20.11.2016_clean_fullITS'), ] reference_dbs = {'gg_13_8_otus_clean_trim150': (join(reference_database_dir, 'gg_13_8_otus/99_otus_clean_515f-806r_trim150.fasta'), join(reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.tsv')), 'gg_13_8_otus_clean_full16S': (join(reference_database_dir, 'gg_13_8_otus/99_otus_clean.fasta'), join(reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.tsv')), 'unite_20.11.2016_clean_trim100': (join(reference_database_dir, 'unite_20.11.2016/sh_refs_qiime_ver7_99_20.11.2016_dev_clean_ITS1Ff-ITS2r_trim100.fasta'), join(reference_database_dir, 'unite_20.11.2016/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.tsv')), 
'unite_20.11.2016_clean_fullITS': (join(reference_database_dir, 'unite_20.11.2016/sh_refs_qiime_ver7_99_20.11.2016_dev_clean.fasta'), join(reference_database_dir, 'unite_20.11.2016/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.tsv')), 'silva_123_v4_trim250': (join(reference_database_dir, 'SILVA123_QIIME_release/rep_set/rep_set_16S_only/99/99_otus_16S/dna-sequences.fasta'), join(reference_database_dir, 'SILVA123_QIIME_release/taxonomy/16S_only/99/majority_taxonomy_7_levels.txt')), 'silva_123_clean_full16S': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_clean.fasta'), join(reference_database_dir, 'SILVA123_QIIME_release/majority_taxonomy_7_levels_clean.tsv')), 'silva_123_clean_v4_trim250': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_clean/dna-sequences.fasta'), join(reference_database_dir, 'SILVA123_QIIME_release/majority_taxonomy_7_levels_clean.tsv')) } """ Explanation: Preparing data set sweep First, we're going to define the data sets that we'll sweep over. The following cell does not need to be modified unless if you wish to change the datasets or reference databases used in the sweep. Here we will use a single mock community, but two different versions of the reference database. End of explanation """ method_parameters_combinations = { # probabalistic classifiers 'rdp': {'confidence': [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]}, # global alignment classifiers 'uclust': {'min_consensus_fraction': [0.51, 0.76, 1.0], 'similarity': [0.9, 0.97, 0.99], 'uclust_max_accepts': [1, 3, 5]}, } """ Explanation: Preparing the method/parameter combinations and generating commands Now we set the methods and method-specific parameters that we want to sweep. Modify to sweep other methods. Note how method_parameters_combinations feeds method/parameter combinations to parameter_sweep() in the cell below. 
Assignment Using QIIME 1 or Command-Line Classifiers Here we provide an example of taxonomy assignment using legacy QIIME 1 classifiers executed on the command line. To accomplish this, we must first convert commands to a string, which we then pass to bash for execution. As QIIME 1 is written in python-2, we must also activate a separate environment in which QIIME 1 has been installed. If any environmental variables need to be set (in this example, the RDP_JAR_PATH), we must also source the .bashrc file. End of explanation """ command_template = "source activate qiime1; source ~/.bashrc; mkdir -p {0} ; assign_taxonomy.py -v -i {1} -o {0} -r {2} -t {3} -m {4} {5} --rdp_max_memory 7000" commands = parameter_sweep(data_dir, results_dir, reference_dbs, dataset_reference_combinations, method_parameters_combinations, command_template, infile='rep_seqs.fna',) """ Explanation: Now enter the template of the command to sweep, and generate a list of commands with parameter_sweep(). Fields must adhere to following format: {0} = output directory {1} = input data {2} = reference sequences {3} = reference taxonomy {4} = method name {5} = other parameters End of explanation """ print(len(commands)) commands[0] """ Explanation: As a sanity check, we can look at the first command that was generated and the number of commands generated. End of explanation """ Parallel(n_jobs=1)(delayed(system)(command) for command in commands) """ Explanation: Finally, we run our commands. 
End of explanation """ new_reference_database_dir = expandvars("$HOME/Desktop/ref_dbs/") reference_dbs = {'gg_13_8_otus_clean_trim150' : (join(new_reference_database_dir, 'gg_13_8_otus/99_otus_clean_515f-806r_trim150.qza'), join(new_reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.qza')), 'gg_13_8_otus_clean_full16S' : (join(new_reference_database_dir, 'gg_13_8_otus/99_otus_clean.qza'), join(new_reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.qza')), 'unite_20.11.2016_clean_trim100' : (join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_clean_ITS1Ff-ITS2r_trim100.qza'), join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.qza')), 'unite_20.11.2016_clean_fullITS' : (join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_clean.qza'), join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.qza')), 'silva_123_v4_trim250': (join(reference_database_dir, 'SILVA123_QIIME_release/rep_set/rep_set_16S_only/99/99_otus_16S_515f-806r_trim250.qza'), join(reference_database_dir, 'SILVA123_QIIME_release/taxonomy/16S_only/99/majority_taxonomy_7_levels.qza')), 'silva_123_clean_full16S': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_clean.qza'), join(reference_database_dir, 'SILVA123_QIIME_release/taxonomy/16S_only/99/majority_taxonomy_7_levels.qza')), 'silva_123_clean_v4_trim250': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_clean_515f-806r_trim250.qza'), join(reference_database_dir, 'SILVA123_QIIME_release/taxonomy/16S_only/99/majority_taxonomy_7_levels.qza')) } method_parameters_combinations = { # probabalistic classifiers 'blast+' : {'p-evalue': [0.001], 'p-maxaccepts': [1, 10], 'p-min-id': [0.80, 0.99], 'p-min-consensus': [0.51, 0.99]} } command_template = "mkdir -p {0}; qiime 
feature-classifier blast --i-query {1} --o-classification {0}/rep_seqs_tax_assignments.qza --i-reference-reads {2} --i-reference-taxonomy {3} {5}; qiime tools export {0}/rep_seqs_tax_assignments.qza --output-dir {0}" commands = parameter_sweep(data_dir, results_dir, reference_dbs, dataset_reference_combinations, method_parameters_combinations, command_template, infile='rep_seqs.qza',) Parallel(n_jobs=4)(delayed(system)(command) for command in commands) method_parameters_combinations = { # probabalistic classifiers 'vsearch' : {'p-maxaccepts': [1, 10], 'p-min-id': [0.97, 0.99], 'p-min-consensus': [0.51, 0.99]} } command_template = "mkdir -p {0}; qiime feature-classifier vsearch --i-query {1} --o-classification {0}/rep_seqs_tax_assignments.qza --i-reference-reads {2} --i-reference-taxonomy {3} {5}; qiime tools export {0}/rep_seqs_tax_assignments.qza --output-dir {0}" commands = parameter_sweep(data_dir, results_dir, reference_dbs, dataset_reference_combinations, method_parameters_combinations, command_template, infile='rep_seqs.qza',) Parallel(n_jobs=4)(delayed(system)(command) for command in commands) new_reference_database_dir = expandvars("$HOME/Desktop/ref_dbs/") reference_dbs = {'gg_13_8_otus_clean_trim150' : (join(new_reference_database_dir, 'gg_13_8_otus/99_otus_clean_515f-806r_trim150-classifier.qza'), join(new_reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.qza')), 'gg_13_8_otus_clean_full16S' : (join(new_reference_database_dir, 'gg_13_8_otus/99_otus_clean-classifier.qza'), join(new_reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.qza')), 'unite_20.11.2016_clean_trim100' : (join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_clean_ITS1Ff-ITS2r_trim100-classifier.qza'), join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.qza')), 'unite_20.11.2016_clean_fullITS' : (join(new_reference_database_dir, 
'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_clean-classifier.qza'), join(new_reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.qza')), 'silva_123_v4_trim250': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_515f-806r_trim250-classifier.qza'), join(reference_database_dir, 'SILVA123_QIIME_release/taxonomy/16S_only/99/majority_taxonomy_7_levels.txt')), 'silva_123_clean_full16S': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_clean-classifier.qza'), join(reference_database_dir, 'SILVA123_QIIME_release/majority_taxonomy_7_levels_clean.tsv')), 'silva_123_clean_v4_trim250': (join(reference_database_dir, 'SILVA123_QIIME_release/99_otus_16S_clean_515f-806r_trim250-classifier.qza'), join(reference_database_dir, 'SILVA123_QIIME_release/majority_taxonomy_7_levels_clean.tsv')) } method_parameters_combinations = { 'q2-nb' : {'p-confidence': [0.0, 0.2, 0.4, 0.6, 0.8]} } command_template = "mkdir -p {0}; qiime feature-classifier classify --i-reads {1} --o-classification {0}/rep_seqs_tax_assignments.qza --i-classifier {2} {5}; qiime tools export {0}/rep_seqs_tax_assignments.qza --output-dir {0}" commands = parameter_sweep(data_dir, results_dir, reference_dbs, dataset_reference_combinations, method_parameters_combinations, command_template, infile='rep_seqs.qza',) Parallel(n_jobs=1)(delayed(system)(command) for command in commands) """ Explanation: QIIME2 Classifiers Now let's do it all over again, but with QIIME2 classifiers (which require different input files and command templates). Note that the QIIME2 artifact files required for assignment are not included in tax-credit, but can be generated from any reference dataset using qiime tools import. 
End of explanation """ taxonomy_glob = join(results_dir, '*', '*', '*', '*', 'rep_seqs_tax_assignments.txt') generate_per_method_biom_tables(taxonomy_glob, data_dir) """ Explanation: Generate per-method biom tables Modify the taxonomy_glob below to point to the taxonomy assignments that were generated above. This may be necessary if filepaths were altered in the preceding cells. End of explanation """ precomputed_results_dir = join(project_dir, "data", "precomputed-results", analysis_name) for community in dataset_reference_combinations: method_dirs = glob(join(results_dir, community[0], '*', '*', '*')) move_results_to_repository(method_dirs, precomputed_results_dir) """ Explanation: Move result files to repository Add results to the short-read-taxa-assignment directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and methods_dirs glob below should not need to be changed unless if substantial changes were made to filepaths in the preceding cells. End of explanation """ for community in dataset_reference_combinations: community_dir = join(precomputed_results_dir, community[0]) exp_observations = join(community_dir, '*', 'expected') new_community_exp_dir = join(community_dir, community[1], 'expected') !mkdir {new_community_exp_dir}; cp {exp_observations}/* {new_community_exp_dir} """ Explanation: Do not forget to copy the expected taxonomy files for this mock community! End of explanation """
wuafeing/Python3-Tutorial
01 data structures and algorithms/01.12 determine most freqently items in seq.ipynb
gpl-3.0
words = [
    'look', 'into', 'my', 'eyes', 'look', 'into', 'my', 'eyes',
    'the', 'eyes', 'the', 'eyes', 'the', 'eyes', 'not', 'around', 'the',
    'eyes', "don't", 'look', 'around', 'the', 'eyes', 'look', 'into',
    'my', 'eyes', "you're", 'under'
]
from collections import Counter
word_counts = Counter(words)

# The 3 most frequently occurring words
top_three = word_counts.most_common(3)
print(top_three)
""" Explanation: 1.12 Determining the Most Frequently Occurring Items in a Sequence
Problem
How would you find the most frequently occurring items in a sequence?
Solution
The collections.Counter class is designed for exactly this kind of problem. It even comes with a handy most_common() method that gives you the answer directly.
To illustrate, suppose you have a list of words and want to find out which words occur most often. You could do this:
End of explanation """
word_counts["not"]

word_counts["eyes"]
""" Explanation: Discussion
As input, Counter objects can be fed any sequence of hashable items. Under the hood, a Counter is a dictionary that maps the items to the number of times they occur. For example:
End of explanation """
morewords = ['why','are','you','not','looking','in','my','eyes']
for word in morewords:
    word_counts[word] += 1

word_counts["eyes"]
""" Explanation: If you want to increment the counts manually, you can simply use addition:
End of explanation """
word_counts.update(morewords)
""" Explanation: Or you can use the update() method:
End of explanation """
a = Counter(words)
a

b = Counter(morewords)
b

c = a + b
c

d = a - b
d
""" Explanation: A little-known feature of Counter instances is that they can easily be combined using mathematical operations. For example:
End of explanation """
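Beyond + and -, Counter offers a couple of other handy operations. A short additional sketch (not part of the original recipe) showing subtract() and elements():

```python
from collections import Counter

words = ["look", "into", "my", "eyes", "the", "eyes", "the", "eyes"]
word_counts = Counter(words)

# subtract() decrements counts in place (counts may go negative)
word_counts.subtract(["eyes", "eyes"])
print(word_counts["eyes"])  # 1

# elements() expands a Counter back into the individual items
print(sorted(Counter(a=2, b=1).elements()))  # ['a', 'a', 'b']
```

These complement most_common() when you need to remove observations or reconstruct the underlying sequence.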
lukin155/skola-programiranja
02-Fajlovi-komentari-racun.ipynb
mit
print("This is the first line.")
print("This is the second line.")
print("This is the third line.")
""" Explanation: Commands in the console
The <i>print</i> function is used to display content on the screen.<br />
Start Python from the console (as in the Introduction lesson). In the console, type the following commands (here you can see both the commands and their output on the screen).
End of explanation """
<i>Notepad++</i> može da zaključi kako da boji tekst (syntax highlight) na osnovu ekstenzije fajla koji prikazuje, bez eksplicitnog naznačavanja programskog jezika koje nam je bilo neophodno za prethodni dokument. Izvršavanje komandi koje se nalaze u fajlu Otvorite konzolu i u njoj pređite u direktorijum u kome je sačuvan malopređašnji fajl:<br /> <i>cd "c:\programs"</i>.<br /> Zatim dajte komandu Pajtonu da izvrši kod koji se nalazi u dokumentu koji smo maločas napravili. U vindovs konzolo otkucajte:<br /> <i>python program1.py</i><br /><br /> Na ekranu ćete videti sledeći izlaz (ovde se vidi i kod i izlaz, vi ćete videti samo izlaz): End of explanation """ # Ovo je moj program print("Moj program") """ Explanation: U većini vežbi, sav Pajton kod ćemo kucati u editoru <i>Notepad++</i>, čuvati u fajlove s ekstenzijom "py", a izvršavati iz konzole koristeći komandu:<br /> <i>python ime_programa.py</i> Komentari u Pajtonu Napravite fajl željenog naziva, u njega dodajte sledeći kod, i izvršite komandom koju smo koristili u ovoj lekciji (<i>python ime_programa.py</i>). End of explanation """ # Ovo je moj program i mnogo je zanimljiv print("Moj program") """ Explanation: Zatim napravite sledeći program. End of explanation """ # Ovo je moj program #i mnogo je zanimljiv print("Moj program") """ Explanation: Javila se greška "nevalidna sintaksa", kojom nam Pajton govori da ne prepoznaje reči koje smo mu dali kao komande. Postoji određeni skup reči i pravila koje čine jezik. Kada ih ne poštujemo, izazivamo greške poput ove.<br /><be /> Napravite sada sledeći program (samo u prethodni program dodajte tarabu (#) na početku drugog reda). End of explanation """ 2 + 3 5 - 8 8 * 13 5 / 2 """ Explanation: Pajton ne prepoznaje reči poput "ovo", "je", "zanimljiv" itd. Međutim, prepoznaje tarabu (#) i ona u Pajtonu označava <i>komentare</i>. U programskim jezicima, komentari su nešto što se ignoriše, kao da ne postoji. 
Pošto za samo izvršenje programa nemaju nikakvo značenje (jer se ignorišu), koriste se uglavnom za dve stvari: * za opisivanje programa * za "izbacivanje" određenih delova koda Opisivanje programa je bitno, jer kod postaje čitljiviji za čoveka. Kada neko drugi pogleda vaš kod, ili treba da nastavi da ga razvija, lakše će mu biti ukoliko on sadrži komentare koji objašnjavaju zašto određeni delovi izgledaju baš tako kako izgledaju. I vama samima će komentari u sopstvenom kodu značiti mnogo kada svoj kod gledate posle dužeg vremena, jer ćete zaboraviti "šta je pisac hteo da kaže".<br /> Prilikom otkrivanja grešaka u kodu (debagiranja, debagovanja), često ćete želeti da uklonite neki njegov deo. Umesto da te delove brišete, čuvate u nekom drugom fajlu, pa vraćate u svoj program, možete ih jednostavno pretvoriti u komentare (dodavanjem tarabe na početku linije). Većina editora ima prečicu na tastaturi za ove potrebe, koja je naročito korisna kada veliki broj susednih linija pretvarate u komentare. U editoru <i>Notepad++</i>, prečica CTRL + K pretvara sve označene linije koda u komentar, a prečica CTRL + SHIFT + K ih vraća.<br /><br /> Primeri zanimljivih i duhovitih komentara: http://stackoverflow.com/questions/184618/what-is-the-best-comment-in-source-code-you-have-ever-encountered Brojevi i račun Pokrenite Pajton konzolu. U njoj isprobajte matematičke operacije sabiranja, oduzimanja, množenja i deljenja. Na primer: End of explanation """ (2 + 3) * 2 + 3 """ Explanation: Redosled izvršavanja operacija i zagrade važe kao i u matematici. U tehničkoj dokumentaciji jezika Pajton, ovo je objašnjeno do tančina. Na primer: End of explanation """ 5 // 2 -5 // 2 """ Explanation: Isprobajte operaciju celobrojnog deljenja ili "deljenja sa zaokruživanjem na dole": End of explanation """ 9 % 2 37 % 10 """ Explanation: Isprobajte operaciju <i>moduo</i>, odnosno "ostatak pri deljenju sa". 
Na primer: End of explanation """ -37 % 10 """ Explanation: Ponašanje ovog operatora je zanimljivo kada je s leve strane negativan broj. Naime, doći će do deljenja sa zaokruživanjem na dole. Na primer, u računanju -37 % 10, prvo se računa "količnik", koji je u ovom slučaju -4 (zbog zaokruživanja na dole). Zatim se ostatak računa kao razlika deljenika (-37) i proizvoda količnika (-4) i delioca (10) -37 - (-4 * 10). End of explanation """ ### This is a cool program for demonstrating Python's logical operations ### # Demonstrate the operator "greater than" print("Is it true that 5 is greater than 4?") print(5 > 4) # Demonstrate the operator "less than" print("Is it true that 100 is less than 50?") print(100 < 50) # Demonstrate the operator "greater than or equal to" print("Is it true that 3 is greater that or equal to 5?") print(3 >= 5) # Demonstrate the operator "less than or equal to" print("Is it true that 6 is less than or equal to 6?") print(6 >= 6) """ Explanation: Isprobajte operacije poređenja. Njih možete isprobati na zanimljiv način tako što ćete napraviti sledeći program (iskucate kod u fajl, sačuvate fajl i iz Vindovs konzole kažete Pajtonu da pokrene taj fajl - <i>python ime_fajla.py</i>. End of explanation """ # Print a number print(2) # Print a number which is a result of a mathematical operation print(2 + 3 - 5 + 8) """ Explanation: Funkcija <i>print</i> može prikazati na ekranu, ili "štampati": * tekst (koji joj se prosleđuje pod navodnicima) kao što smo videli u uvodnoj lekciji * rezultate logičkih operacija - "True" i "False" * brojeve * ... Primer sa brojevima: End of explanation """
ledeprogram/algorithms
class7/homework/benzaquen_mercy_assignment_7_1.ipynb
gpl-3.0
!pip install pydotplus

import pandas as pd
%matplotlib inline
import pydotplus
from pandas.tools.plotting import scatter_matrix
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn import tree
from sklearn.externals.six import StringIO
from sklearn.cross_validation import train_test_split
from sklearn import metrics
""" Explanation: We covered a lot of information today and I'd like you to practice developing classification trees on your own. For each exercise, work through the problem, determine the result, and provide the requested interpretation in comments along with the code. The point is to build classifiers, not necessarily good classifiers (that will hopefully come later).
End of explanation """
iris = datasets.load_iris()
iris

x = iris.data[:,2:] # I want all rows and the last two columns
y = iris.target # this is my target variable (different classifications 0,1,2)
# "target variable" -> the class we're trying to predict

x

y

iris['feature_names']

dt = tree.DecisionTreeClassifier()
#dt = dt.fit(x,y)

x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.5,train_size=0.5) # (50% in training and 50% in test)
dt = dt.fit(x_train,y_train)

import numpy as np

def measure_performance(X,y,clf, show_accuracy=True,
                        show_classification_report=True,
                        show_confusion_matrix=True):
    y_pred=clf.predict(X)
    if show_accuracy:
        print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y, y_pred)),"\n")

    if show_classification_report:
        print("Classification report")
        print(metrics.classification_report(y,y_pred),"\n")

    if show_confusion_matrix:
        print("Confusion matrix")
        print(metrics.confusion_matrix(y,y_pred),"\n")

# I measure the performance of my classifier with train data
# The accuracy is 1, which means it is 100% accurate.
# And my confusion matrix is not showing mistakes in the classification
measure_performance(x_train,y_train,dt)

# I measure the performance of my classifier with test data
# Accuracy of around 0.9 and 2 misclassified elements
measure_performance(x_test,y_test,dt)
""" Explanation: 1. Load the iris dataset and create a holdout set that is 50% of the data (50% in training and 50% in test). Output the results (don't worry about creating the tree visual unless you'd like to) and discuss them briefly (are they good or not?)
End of explanation """
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.25,train_size=0.75)
#dt = dt.fit(x_train,y_train)

measure_performance(x_train,y_train,dt)

measure_performance(x_test,y_test,dt)

# I thought the results were going to look better than for the last model because we trained the model with 75%,
# leaving just 25% to test. But since they look almost the same, I would interpret it as the model not being completely accurate,
# no matter how much data we use to train and test.
""" Explanation: 2. Redo the model with a 75% - 25% training/test split and compare the results. Are they better or worse than before? Discuss why this may be.
End of explanation """
breast_cancer = datasets.load_breast_cancer()

type(breast_cancer)

breast_cancer

breast_cancer['feature_names']

print(type(breast_cancer.data))

print(breast_cancer.target_names)

print(breast_cancer.DESCR)

df = pd.DataFrame(breast_cancer.data, columns=breast_cancer['feature_names'])
df

df['diagnosis'] = breast_cancer.target

# df_maling_diagnosis  # undefined name; commented out so the cell runs

df.corr()
""" Explanation: 3. Load the breast cancer dataset (datasets.load_breast_cancer()) and perform basic exploratory analysis. What attributes do we have? What are we trying to predict?
For context of the data, see the documentation here: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29
End of explanation
"""
x = breast_cancer.data[:,:2] # use only the first two attributes as features
y = breast_cancer.target # the target variable
x
y
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.5,train_size=0.5)
dt = dt.fit(x_train,y_train)
measure_performance(x_train,y_train,dt)
measure_performance(x_test,y_test,dt)
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.25,train_size=0.75)
dt = dt.fit(x_train,y_train)  # refit on the 75% training split
measure_performance(x_train,y_train,dt)
measure_performance(x_test,y_test,dt)
"""
Explanation: 4. Using the breast cancer data, create a classifier to predict the diagnosis (malignant vs. benign). Perform the above hold out evaluation (50-50 and 75-25) and discuss the results.
End of explanation
"""
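The notebook installs pydotplus and imports StringIO but never actually renders a tree. As an optional follow-up, here is a minimal sketch of exporting a fitted tree to Graphviz DOT format. It refits a small tree on the iris petal features so the cell stands alone; `export_graphviz` with `out_file=None` returns the DOT source as a string, which `pydotplus.graph_from_dot_data(dot_data).write_png(...)` could then render, assuming Graphviz is installed on the machine.

```python
from sklearn import datasets, tree

# Refit a small tree on the iris petal features so this cell stands alone
iris = datasets.load_iris()
x, y = iris.data[:, 2:], iris.target
dt = tree.DecisionTreeClassifier().fit(x, y)

# out_file=None makes export_graphviz return the DOT source as a string
dot_data = tree.export_graphviz(
    dt, out_file=None,
    feature_names=iris.feature_names[2:],
    class_names=list(iris.target_names))
```

From here, `pydotplus.graph_from_dot_data(dot_data)` would give a graph object that can be written to PNG or displayed inline.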
gaufung/PythonStandardLibrary
FileSystem/tempfile.ipynb
mit
import os import tempfile print('Building a filename with PID:') filename = '/tmp/guess_my_name.{}.txt'.format(os.getpid()) with open(filename, 'w+b') as temp: print('temp:') print(' {!r}'.format(temp)) print('temp.name:') print(' {!r}'.format(temp.name)) # Clean up the temporary file yourself. os.remove(filename) print() print('TemporaryFile:') with tempfile.TemporaryFile() as temp: print('temp:') print(' {!r}'.format(temp)) print('temp.name:') print(' {!r}'.format(temp.name)) import os import tempfile with tempfile.TemporaryFile() as temp: temp.write(b'Some data') temp.seek(0) print(temp.read()) import tempfile with tempfile.TemporaryFile(mode='w+t') as f: f.writelines(['first\n', 'second\n']) f.seek(0) for line in f: print(line.rstrip()) """ Explanation: Creating temporary files with unique names securely, so they cannot be guessed by someone wanting to break the application or steal the data, is challenging. The tempfile module provides several functions for creating temporary file system resources securely. TemporaryFile() opens and returns an unnamed file, NamedTemporaryFile() opens and returns a named file, SpooledTemporaryFile holds its content in memory before writing to disk, and TemporaryDirectory is a context manager that removes the directory when the context is closed. 
Temporary File End of explanation """ import os import pathlib import tempfile with tempfile.NamedTemporaryFile() as temp: print('temp:') print(' {!r}'.format(temp)) print('temp.name:') print(' {!r}'.format(temp.name)) f = pathlib.Path(temp.name) print('Exists after close:', f.exists()) """ Explanation: Named File End of explanation """ import tempfile with tempfile.SpooledTemporaryFile(max_size=100, mode='w+t', encoding='utf-8') as temp: print('temp: {!r}'.format(temp)) for i in range(3): temp.write('This line is repeated over and over.\n') print(temp._rolled, temp._file) import tempfile with tempfile.SpooledTemporaryFile(max_size=1000, mode='w+t', encoding='utf-8') as temp: print('temp: {!r}'.format(temp)) for i in range(3): temp.write('This line is repeated over and over.\n') print(temp._rolled, temp._file) print('rolling over') temp.rollover() print(temp._rolled, temp._file) """ Explanation: Spooled File End of explanation """ import pathlib import tempfile with tempfile.TemporaryDirectory() as directory_name: the_dir = pathlib.Path(directory_name) print(the_dir) a_file = the_dir / 'a_file.txt' a_file.write_text('This file is deleted.') print('Directory exists after?', the_dir.exists()) print('Contents after:', list(the_dir.glob('*'))) """ Explanation: Temporary Directories End of explanation """ import tempfile with tempfile.NamedTemporaryFile(suffix='_suffix', prefix='prefix_', dir='/tmp') as temp: print('temp:') print(' ', temp) print('temp.name:') print(' ', temp.name) """ Explanation: Predicting Name End of explanation """ import tempfile print('gettempdir():', tempfile.gettempdir()) print('gettempprefix():', tempfile.gettempprefix()) """ Explanation: Temporary File Location End of explanation """
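Beyond the high-level context managers shown above, the module also exposes the lower-level mkstemp() and mkdtemp() functions, which create the resource securely but leave cleanup to the caller — a short sketch:

```python
import os
import tempfile

# mkstemp() returns an OS-level file descriptor and the path;
# the file is NOT deleted automatically.
fd, path = tempfile.mkstemp(suffix='.txt', prefix='demo_')
os.write(fd, b'some data')
os.close(fd)
created = os.path.exists(path)
os.remove(path)  # the caller is responsible for cleanup

# mkdtemp() likewise returns a directory that the caller must remove.
dir_name = tempfile.mkdtemp()
dir_created = os.path.isdir(dir_name)
os.rmdir(dir_name)
```

These are useful when the temporary file must outlive the current scope, at the cost of manual cleanup.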
arasdar/DL
uri-dl/uri-dl-hw-2/assignment2/Dropout.ipynb
unlicense
# As usual, a bit of setup from __future__ import print_function import time import numpy as np import matplotlib.pyplot as plt from cs231n.classifiers.fc_net import * from cs231n.data_utils import get_CIFAR10_data from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array from cs231n.solver import Solver %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def rel_error(x, y): """ returns relative error """ return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) # Load the (preprocessed) CIFAR10 data. data = get_CIFAR10_data() for k, v in data.items(): print('%s: ' % k, v.shape) """ Explanation: Dropout Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout. [1] Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012 End of explanation """ np.random.seed(231) x = np.random.randn(500, 500) + 10 for p in [0.3, 0.6, 0.75]: out, _ = dropout_forward(x, {'mode': 'train', 'p': p}) out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p}) print('Running tests with p = ', p) print('Mean of input: ', x.mean()) print('Mean of train-time output: ', out.mean()) print('Mean of test-time output: ', out_test.mean()) print('Fraction of train-time output set to zero: ', (out == 0).mean()) print('Fraction of test-time output set to zero: ', (out_test == 0).mean()) print() """ Explanation: Dropout forward pass In the file cs231n/layers.py, implement the forward pass for dropout. 
Since dropout behaves differently during training and testing, make sure to implement the operation for both modes. Once you have done so, run the cell below to test your implementation.
End of explanation
"""
np.random.seed(231)
x = np.random.randn(10, 10) + 10
dout = np.random.randn(*x.shape)

dropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)

print('dx relative error: ', rel_error(dx, dx_num))
"""
Explanation: Dropout backward pass
In the file cs231n/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
End of explanation
"""
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))

for dropout in [0, 0.25, 0.5]:
    print('Running check with dropout = ', dropout)
    model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
                              weight_scale=5e-2, dtype=np.float64,
                              dropout=dropout, seed=123)

    loss, grads = model.loss(X, y)
    print('Initial loss: ', loss)

    for name in sorted(grads):
        f = lambda _: model.loss(X, y)[0]
        grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
        print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
    print()
"""
Explanation: Fully-connected nets with Dropout
In the file cs231n/classifiers/fc_net.py, modify your implementation to use dropout. Specifically, if the constructor of the net receives a nonzero value for the dropout parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation. 
End of explanation """ # Train two identical nets, one with dropout and one without np.random.seed(231) num_train = 500 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } solvers = {} dropout_choices = [0, 0.75] for dropout in dropout_choices: model = FullyConnectedNet([500], dropout=dropout) print(dropout) solver = Solver(model, small_data, num_epochs=25, batch_size=100, update_rule='adam', optim_config={ 'learning_rate': 5e-4, }, verbose=True, print_every=100) solver.train() solvers[dropout] = solver # Plot train and validation accuracies of the two models train_accs = [] val_accs = [] for dropout in dropout_choices: solver = solvers[dropout] train_accs.append(solver.train_acc_history[-1]) val_accs.append(solver.val_acc_history[-1]) plt.subplot(3, 1, 1) for dropout in dropout_choices: plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout) plt.title('Train accuracy') plt.xlabel('Epoch') plt.ylabel('Accuracy') plt.legend(ncol=2, loc='lower right') plt.subplot(3, 1, 2) for dropout in dropout_choices: plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout) plt.title('Val accuracy') plt.xlabel('Epoch') plt.ylabel('Accuracy') plt.legend(ncol=2, loc='lower right') plt.gcf().set_size_inches(15, 15) plt.show() """ Explanation: Regularization experiment As an experiment, we will train a pair of two-layer networks on 500 training examples: one will use no dropout, and one will use a dropout probability of 0.75. We will then visualize the training and validation accuracies of the two networks over time. End of explanation """
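For readers working without the cs231n scaffolding, here is a standalone sketch of inverted dropout. It assumes the convention (suggested by the test cells above) that `p` is the probability of keeping a unit; rescaling by `1/p` at train time makes the test-time pass a simple identity. The function names are illustrative, not the assignment's API.

```python
import numpy as np

def dropout_forward_sketch(x, p, mode, rng=None):
    # Inverted dropout; p is taken here as the KEEP probability (an assumption).
    if rng is None:
        rng = np.random.default_rng(0)
    if mode == 'train':
        mask = (rng.random(x.shape) < p) / p  # zero out units, rescale survivors
        return x * mask, mask
    return x, None  # test time: identity, thanks to the train-time rescaling

def dropout_backward_sketch(dout, mask):
    # Gradient flows only through the kept units, with the same 1/p scaling
    return dout * mask

x = np.ones((500, 500))
out, mask = dropout_forward_sketch(x, p=0.75, mode='train')
out_test, _ = dropout_forward_sketch(x, p=0.75, mode='test')
```

Because of the train-time rescaling, the expected value of `out` matches `x`, which is why no extra scaling is needed at test time.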
turbomanage/training-data-analyst
courses/ai-for-finance/solution/aapl_regression_scikit_learn.ipynb
apache-2.0
%%bash bq mk -d ai4f bq load --autodetect --source_format=CSV ai4f.AAPL10Y gs://cloud-training/ai4f/AAPL10Y.csv %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import numpy as np from sklearn import linear_model from sklearn.metrics import mean_squared_error from sklearn.metrics import r2_score plt.rc('figure', figsize=(12, 8.0)) """ Explanation: Building a Regression Model for a Financial Dataset In this notebook, you will build a simple linear regression model to predict the closing AAPL stock price. The lab objectives are: * Pull data from BigQuery into a Pandas dataframe * Use Matplotlib to visualize data * Use Scikit-Learn to build a regression model End of explanation """ %%bigquery? """ Explanation: Pull Data from BigQuery In this section we'll use a magic function to query a BigQuery table and then store the output in a Pandas dataframe. A magic function is just an alias to perform a system command. To see documentation on the "bigquery" magic function execute the following cell: End of explanation """ %%bigquery df WITH raw AS ( SELECT date, close, LAG(close, 1) OVER(ORDER BY date) AS min_1_close, LAG(close, 2) OVER(ORDER BY date) AS min_2_close, LAG(close, 3) OVER(ORDER BY date) AS min_3_close, LAG(close, 4) OVER(ORDER BY date) AS min_4_close FROM `ai4f.AAPL10Y` ORDER BY date DESC ), raw_plus_trend AS ( SELECT date, close, min_1_close, IF (min_1_close - min_2_close > 0, 1, -1) AS min_1_trend, IF (min_2_close - min_3_close > 0, 1, -1) AS min_2_trend, IF (min_3_close - min_4_close > 0, 1, -1) AS min_3_trend FROM raw ), train_data AS ( SELECT date, close, min_1_close AS day_prev_close, IF (min_1_trend + min_2_trend + min_3_trend > 0, 1, -1) AS trend_3_day FROM raw_plus_trend ORDER BY date ASC ) SELECT * FROM train_data """ Explanation: The query below selects everything you'll need to build a regression model to predict the closing price of AAPL stock. The model will be very simple for the purposes of demonstrating BQML functionality. 
The only features you'll use as input into the model are the previous day's closing price and a three day trend value. The trend value can only take on two values, either -1 or +1. If the AAPL stock price has increased over any two of the previous three days then the trend will be +1. Otherwise, the trend value will be -1. Note, the features you'll need can be generated from the raw table ai4f.AAPL10Y using Pandas functions. However, it's better to take advantage of the serverless-ness of BigQuery to do the data pre-processing rather than applying the necessary transformations locally.
End of explanation
"""
print(type(df))
df.dropna(inplace=True)
df.head()
"""
Explanation: View the first five rows of the query's output. Note that the object df containing the query output is a Pandas DataFrame.
End of explanation
"""
df.plot(x='date', y='close');
"""
Explanation: Visualize data
The simplest plot you can make is to show the closing stock price as a time series. Pandas DataFrames have built-in plotting functionality based on Matplotlib.
End of explanation
"""
start_date = '2018-06-01'
end_date = '2018-07-31'

plt.plot(
    'date', 'close', 'k--',
    data = (
        df.loc[pd.to_datetime(df.date).between(start_date, end_date)]
    )
)

plt.scatter(
    'date', 'close', color='b', label='pos trend',
    data = (
        df.loc[(df.trend_3_day == 1) & pd.to_datetime(df.date).between(start_date, end_date)]
    )
)

plt.scatter(
    'date', 'close', color='r', label='neg trend',
    data = (
        df.loc[(df.trend_3_day == -1) & pd.to_datetime(df.date).between(start_date, end_date)]
    )
)

plt.legend()
plt.xticks(rotation = 90);

df.shape
"""
Explanation: You can also embed the trend_3_day variable into the time series above. 
End of explanation
"""
features = ['day_prev_close', 'trend_3_day']
target = 'close'

# .loc slicing is inclusive on both ends, so stop the training slice at 1999
# to keep the train and test sets disjoint.
X_train, X_test = df.loc[:1999, features], df.loc[2000:, features]
y_train, y_test = df.loc[:1999, target], df.loc[2000:, target]

# Create linear regression object
regr = linear_model.LinearRegression(fit_intercept=False)

# Train the model using the training set
regr.fit(X_train, y_train)

# Make predictions using the testing set
y_pred = regr.predict(X_test)

# The mean squared error
print('Root Mean Squared Error: {0:.2f}'.format(np.sqrt(mean_squared_error(y_test, y_pred))))

# Explained variance score: 1 is perfect prediction
print('Variance Score: {0:.2f}'.format(r2_score(y_test, y_pred)))

plt.scatter(y_test, y_pred)
plt.plot([140, 240], [140, 240], 'r--', label='perfect fit')
plt.xlabel('Actual')
plt.ylabel('Predicted')
plt.legend();
"""
Explanation: Build a Regression Model in Scikit-Learn
In this section you'll train a linear regression model to predict AAPL closing prices when given the previous day's closing price day_prev_close and the three day trend trend_3_day. A training set and test set are created by sequentially splitting the data after 2000 rows.
End of explanation
"""
print('Root Mean Squared Error: {0:.2f}'.format(np.sqrt(mean_squared_error(y_test, X_test.day_prev_close))))
"""
Explanation: The model's predictions are more or less in line with the truth. However, the utility of the model depends on the business context (i.e. you won't be making any money with this model). It's fair to question whether the variable trend_3_day even adds to the performance of the model:
End of explanation
"""
snowicecat/umich-eecs445-f16
lecture10_bias-variance-tradeoff/lecture10_bias-variance-tradeoff.ipynb
mit
%pylab inline import numpy as np import seaborn as sns import pandas as pd from Lec08 import * """ Explanation: $$ \LaTeX \text{ command declarations here.} \newcommand{\R}{\mathbb{R}} \renewcommand{\vec}[1]{\mathbf{#1}} \newcommand{\X}{\mathcal{X}} \newcommand{\D}{\mathcal{D}} \newcommand{\vx}{\mathbf{x}} \newcommand{\vy}{\mathbf{y}} \newcommand{\vt}{\mathbf{t}} \newcommand{\vb}{\mathbf{b}} \newcommand{\vw}{\mathbf{w}} $$ End of explanation """ plot_svc(); """ Explanation: EECS 445: Machine Learning Lecture 10: Bias-Variance Tradeoff, Cross Validation, ML Advice Instructor: Jacob Abernethy Date: October 10, 2016 Announcements I'm your new lecturer (for about 6 weeks)! Course website: https://eecs445-f16.github.io/ HW3 out later today, due Saturday 10/22, 5pm We'll release solutions early Sunday, no late submissions after soln's released! Midterm exam is Monday 10/24 in lecture We will release a "topic list" and practice exam early next week Key point: if you really understand the HW problems, you'll do fine on the exams Comments on Recent Piazza discussions We are happy to hear your feedback! But please use Course Survey #2 Anonymous Piazza discussions aren't always helpful, and don't reflect overall student needs (Fully-anonymous posting now disallowed). The course staff is working very hard, and are investing a lot more time than previous semesters Struggling students need to find an OH to get help! If you can't find a time to attend an OH, tell us! We will approve all Late Drop requests for those who feel they can't catch up. Comments on the Mathematical nature of ML We know that students who haven't taken a serious Linear Algebra course, as well as a Probability/Stat course, are finding the mathematical aspects to be challenging. We are working to change course prereqs for future semesters. ML may not seem like a mathy topic, but it certainly is This course is near the frontlines of research, and there aren't yet books on the topic that work for EECS445. 
(But PRML and MLAPP are pretty good...) You can't understand the full nature of these algorithmic tools without having a strong grasp of the math concepts underlying them It may be painful now, but we're trying to put you all in the elite category of computer scientists who actually know ML Review of SVM End of explanation """ x = np.linspace(-1, 1, 100); plt.plot(x, x**2) plt.xlabel("$y-\hat{f}$", size=18); """ Explanation: Separating Hyperplanes Idea: divide the vector space $\mathbb{R}^d$ where $d$ is the number of features into 2 "decision regions" with a $\mathbb{R}^{d - 1}$ subspace (a hyperplane). Eg. Logistic Regression As with other linear classifiers, classification could be achieved by $$ y = \text{sign}(\vw^T\vx + b) $$ Note: We may use $\vx$ and $\phi(\vx)$ interchangeably to denote features. (Functional) Margin The distance from a separating hyperplane to the closest datapoint of any class. $$ \rho = \rho(\vw, b) = \min_{i = 1, ..., n} \frac{| \vw^T\vx_i + b |}{\| \vw \|} $$ where $\mathbf{x}_i$ is the $i$th datapoint from the training set. 
Finding the Max-Margin Hyperplane For dataset ${\vx_i, t_i }{i=1}^n $, maximum margin separating hyperplane is the solution of $$ \begin{split} \underset{\vw, b}{\text{maximize}} \quad & \min{i = 1, ..., n} \frac{| \vw^T\vx_i + b |}{\| \vw \|}\ \text{subject to} \quad & t_i(\vw^T \vx_i + b) > 0 \quad \forall i \ \end{split} $$ of which the constraint ensures every training data is correctly classified Note that $t_i \in {+1, -1}$ is the label of $i$th training data This problem guarantees optimal hyperplane, but the solution $\vw$ and $b$ is not unique : we could scale both $\vw$ and $b$ by arbitrary scalar without affecting $\mathbb{H} = {\vx : \vw^T\vx + b = 0}$ we have infinite sets of solutions Restatement of Optimization Problem Simplifying further, we have $$ \begin{split} \underset{\vw, b}{\text{maximize}} \quad & \frac{1}{\| \vw \|}\ \text{subject to} \quad & t_i(\vw^T \vx_i + b) = 1 \text{ for some } i\ \quad & t_i(\vw^T \vx_i + b) > 1 \text{ for other } i\ \end{split} \Longrightarrow \begin{split} \underset{\vw, b}{\text{minimize}} \quad & \frac{1}{2}{\| \vw \|}^2\\ \text{subject to} \quad & t_i(\vw^T \vx_i + b) \geq 1 \quad \forall i \ \quad \end{split} $$ Optimal Soft-Margin Hyperplane (OSMH) To deal with non-linearly separable case, we could introduce slack variables: $$ \begin{split} \underset{\vw, b}{\text{min}} \quad & \frac{1}{2}{\| \vw \|}^2\\ \text{s.t.} \quad & t_i(\vw^T \vx_i + b) \geq 1 \; \forall i \ & \ \end{split} \; \Longrightarrow \; \begin{split} \underset{\vw, b, \xi}{\text{min}} \quad & \frac{1}{2}{\| \vw \|}^2 + \frac{C}{n} \sum \nolimits_{i = 1}^n \xi_i\ \text{s.t.} \quad & t_i(\vw^T\vx_i + b) \geq 1 - \xi_i \; \forall i\ \quad & \xi_i \geq 0 \; \forall i\ \end{split} $$ New term $\frac{C}{n} \sum_{i = 1}^n \xi_i$ penalizes errors and accounts for the influence of outliers through a constant $C \geq 0$ ($C=\infty$ would lead us back to the hard margin case) and $\mathbf{\xi} = [\xi_1, ..., \xi_n]$ are the "slack" variables. 
Motivation:
The objective function ensures the margin is large and the margin violations are small
The first set of constraints ensures the classifier is doing well
similar to the previous max-margin constraint, except we now allow for slack
The second set of constraints ensures the slack variables are non-negative.
keeps the optimization problem from "diverging"
OSMH has Dual Formulation
The previous objective function is referred to as the Primal
With $N$ datapoints in $d$ dimensions, the Primal optimizes over $d + 1$ variables ($\vw, b$). But the Dual of this optimization problem has $N$ variables, one $\alpha_i$ for each example $i$!
$$ \begin{split} \underset{\alpha}{\text{maximize}} \quad & -\frac12 \sum \nolimits_{i,j = 1}^n \alpha_i \alpha_j t_i t_j \vx_i^T \vx_j + \sum \nolimits_{i = 1}^n \alpha_i\\ \text{subject to} \quad & 0 \leq \alpha_i \leq C/n \quad \forall i\\ \quad & \sum \nolimits_{i=1}^n \alpha_i t_i = 0 \end{split} $$
Often the Dual problem is easier to solve. Once you solve the dual problem for $\alpha^*_1, \ldots, \alpha^*_N$, you get a primal solution as well!
$$ \vw^* = \sum \nolimits_{i=1}^n \alpha_i^* t_i \vx_i \quad \text{and} \quad b^* = t_i - {\vw^*}^T\vx_i \; (\text{ for any support vector } i) $$
Note: Generally we can't solve these by hand; one uses optimization packages (such as a QP solver).
Statistical Inference
Loss Functions & Bias-Variance Decomposition
Estimators
ML Algorithms can in general be thought of as "estimators."
Estimator: A statistic (a function of data) that is used to infer the value of an unknown parameter in a statistical model.
Suppose there is a fixed parameter $f$ that needs to be estimated. An estimator of $f$ is a function that maps the sample space to a set of sample estimates, denoted $\hat{f}$.
Noise
For most problems in Machine Learning, the relationship is functional but noisy. 
Mathematically, $y = f(x) + \epsilon$
$\epsilon$ is noise with mean $0$ and variance $\sigma^2$
Mathematical Viewpoint
Let the training set be $D = \{\mathbf{x}_1, ..., \mathbf{x}_n\}, \mathbf{x}_i \in \mathbb{R}^d$.
Goal: Find $\hat{f}$ that minimizes some Loss function, $L(y, \hat{f})$, which measures how good predictions are for both
Points in $D$ (the sample), and
Points out of sample (outside $D$).
Cannot minimize both perfectly because the relationship between $y$ and $\mathbf{x}$ is noisy.
Irreducible error.
Loss Functions
There are many loss functions, each with their own use cases and interpretations.
Quadratic Loss: $L(y,\hat{f}) = (y-\hat{f})^2$
Absolute Loss: $L(y,\hat{f}) = |y-\hat{f}|$
Classification-only loss functions:
- Sigmoid Loss: $L(y,\hat{f}) = \mathrm{sigmoid}(-y\hat{f})$
- Zero-One Loss: $L(y,\hat{f}) = \mathbb{I}(y \neq \hat{f})$
- Hinge Loss: $L(y,\hat{f}) = \max(0, 1-y\hat{f})$
- Logistic Loss: $L(y,\hat{f}) = \log[ 1 + \exp(-y\hat{f})]$
- Exponential Loss: $L(y,\hat{f}) = \exp[ -y \hat{f} ]$
Choosing a Loss Function
Different loss functions answer the following questions differently:
How should we treat outliers?
How "correct" do we need to be?
Do we want a margin of safety?
What is our notion of distance?
What are we predicting? Real-world measurements? Probabilities?
Quadratic Loss (aka Square Loss)
Commonly used for regression
Heavily influenced by outliers
$$ L(y, \hat{f}) = (y - \hat{f})^2 $$
End of explanation
"""
x = np.linspace(-1, 1, 100);
plt.plot(x, np.abs(x));
plt.xlabel("$y-\hat{f}$", size=18);
plt.ylabel("$|y-\hat{f}|$", size=18);
"""
Explanation: Absolute Loss
Commonly used for regression.
Robust to outliers.
$$ L(y, \hat{f}) = |y - \hat{f}| $$
Absolute Loss: Plot
End of explanation
"""
x = np.linspace(-6, 6, 100);
plt.plot(x, 1/(1 + np.exp(-x)));
plt.xlabel("$-y\hat{f}$", size=18);
plt.ylabel("$\sigma(-y\hat{f})$", size=18);
"""
Explanation: 0-1 Loss
Used for classification.
Not convex! 
Not practical since optimization problems become intractable! "Surrogate Loss functions" that are convex and differentiable can be used instead. $$ L(y, \hat{f}) = \mathbb{I}(y \neq \hat{f}) $$ Sigmoid Loss Differentiable but non-convex! Can be used for classification. $$L(y,\hat{f}) = \mathrm{sigmoid}(-y\hat{f})$$ End of explanation """ x = np.linspace(-6, 6, 100); plt.plot(x, np.log2(1 + np.exp(-x))); plt.xlabel("$y\hat{f}$", size=18); plt.ylabel("$\log(1 + \exp(-y\hat{f}))$", size=18); """ Explanation: Logistic Loss Used in Logistic regression. Influenced by outliers. Provides well calibrated probabilities (can be interpreted as confidence levels). $$L(y,\hat{f}) = \log[ 1 + \exp(-y\hat{f})]$$ End of explanation """ x = np.linspace(-6, 6, 100); plt.plot(x, np.where(x < 1, 1 - x, 0)); plt.xlabel("$y\hat{f}$", size=18); plt.ylabel("$\max(0,1-y\hat{f})$", size=18); """ Explanation: Hinge Loss Used in SVMs. Robust to outliers. Doesn't provide well calibrated probabilities. $$L(y,\hat{f}) = \max(0, 1-y\hat{f})$$ End of explanation """ x = np.linspace(-3, 3, 100); plt.plot(x, np.exp(-x)); plt.xlabel("$y\hat{f}$", size=18); plt.ylabel("$\exp(-y\hat{f})$", size=18); """ Explanation: Exponential Loss Used for Boosting. Very susceptible to outliers. 
$$L(y,\hat{f}) = \exp(-y\hat{f})$$ End of explanation """ # adapted from http://scikit-learn.org/stable/auto_examples/linear_model/plot_sgd_loss_functions.html def plot_loss_functions(): xmin, xmax = -4, 4 xx = np.linspace(xmin, xmax, 100) plt.plot(xx, xx ** 2, 'm-', label="Quadratic loss") plt.plot([xmin, 0, 0, xmax], [1, 1, 0, 0], 'k-', label="Zero-one loss") plt.plot(xx, 1/(1 + np.exp(xx)), 'b-', label="Sigmoid loss") plt.plot(xx, np.where(xx < 1, 1 - xx, 0), 'g-', label="Hinge loss") plt.plot(xx, np.log2(1 + np.exp(-xx)), 'r-', label="Log loss") plt.plot(xx, np.exp(-xx), 'c-', label="Exponential loss") plt.ylim((0, 8)) plt.legend(loc="best") plt.xlabel(r"Decision function $f(x)$") plt.ylabel("$L(y, f)$") # Demonstrate some loss functions plot_loss_functions() """ Explanation: Loss Functions: Comparison End of explanation """ import pylab as pl RANGEXS = np.linspace(0., 2., 300) TRUEYS = np.sin(np.pi * RANGEXS) def plot_fit(x, y, p, show,color='k'): xfit = RANGEXS yfit = np.polyval(p, xfit) if show: axes = pl.gca() axes.set_xlim([min(RANGEXS),max(RANGEXS)]) axes.set_ylim([-2.5,2.5]) pl.scatter(x, y, facecolors='none', edgecolors=color) pl.plot(xfit, yfit,color=color) pl.hold('on') pl.xlabel('x') pl.ylabel('y') def calc_errors(p): x = RANGEXS errs = [] for i in x: errs.append(abs(np.polyval(p, i) - np.sin(np.pi * i)) ** 2) return errs def calculate_bias_variance(poly_coeffs, input_values_x, true_values_y): # poly_coeffs: a list of polynomial coefficient vectors # input_values_x: the range of xvals we will see # true_values_y: the true labels/targes for y # First we calculate the mean polynomial, and compute the predictions for this mean poly mean_coeffs = np.mean(poly_coeffs, axis=0) mean_predicted_poly = np.poly1d(mean_coeffs) mean_predictions_y = np.polyval(mean_predicted_poly, input_values_x) # Then we calculate the error of this mean poly bias_errors_across_x = (mean_predictions_y - true_values_y) ** 2 # To consider the variance errors, we need to look at 
# every output of the coefficients
    variance_errors = []
    for coeff in poly_coeffs:
        predicted_poly = np.poly1d(coeff)
        predictions_y = np.polyval(predicted_poly, input_values_x)
        # Variance error is the average squared error between the predicted values of y
        # and the *average* predicted value of y
        variance_error = (mean_predictions_y - predictions_y)**2
        variance_errors.append(variance_error)
    variance_errors_across_x = np.mean(np.array(variance_errors), axis=0)
    return bias_errors_across_x, variance_errors_across_x

from matplotlib.pylab import cm

def polyfit_sin(degree=0, iterations=100, num_points=5, show=True):
    total = 0
    coeffs = []
    errs = [0] * len(RANGEXS)
    colors = cm.rainbow(np.linspace(0, 1, iterations))
    for i in range(iterations):
        np.random.seed()
        x = np.random.choice(RANGEXS, size=num_points) # Pick random points from the sinusoid
        y = np.sin(np.pi * x)
        p = np.polyfit(x, y, degree)
        y_poly = [np.polyval(p, x_i) for x_i in x]
        plot_fit(x, y, p, show, color=colors[i])
        total += sum(abs(y_poly - y) ** 2) # accumulate the squared error
        coeffs.append(p)
        errs = np.add(calc_errors(p), errs)
    return total / iterations, errs / iterations, np.mean(coeffs, axis=0), coeffs

def plot_bias_and_variance(biases, variances, range_xs, true_ys, mean_predicted_ys):
    pl.plot(range_xs, mean_predicted_ys, c='k')
    axes = pl.gca()
    axes.set_xlim([min(range_xs), max(range_xs)])
    axes.set_ylim([-3, 3])
    pl.hold('on')
    pl.plot(range_xs, true_ys, c='b')
    pl.errorbar(range_xs, mean_predicted_ys, yerr=biases, c='y', ls="None", zorder=0, alpha=1)
    pl.errorbar(range_xs, mean_predicted_ys, yerr=variances, c='r', ls="None", zorder=0, alpha=0.1)
    pl.xlabel('x')
    pl.ylabel('y')
"""
Explanation: Break time!
<img src="https://img.buzzfeed.com/buzzfeed-static/static/2013-10/enhanced/webdr01/15/9/anigif_enhanced-buzz-31540-1381844535-8.gif"/>
Risk
Risk is the expected loss or error.
- Calculated differently for Bayesian vs. 
Frequentist Statistics
For now, assume quadratic loss $L(y,\hat{f}) = (y-\hat{f})^2$
- The associated risk is $R(\hat{f}) = E_y[L(y, \hat{f})] = E_y[(y-\hat{f})^2]$
Bias-Variance Decomposition
Can decompose the expected loss into a bias term and a variance term.
Depending on the samples, the learning process can give different results (ML vs. MAP vs. posterior mean, etc.)
We want to learn a model with
Small bias (how well a model fits the data on average)
Small variance (how stable a model is w.r.t. data samples)
Bias-Variance Decomposition
$$ \begin{align} \mathbb{E}[(y - \hat{f})^2] &= \mathbb{E}[y^2 - 2 \cdot y \cdot \hat{f} + {\hat{f}}^2] \\ &= \mathbb{E}[y^2] - \mathbb{E}[2 \cdot y \cdot \hat{f}] + \mathbb{E}[{\hat{f}}^2] \\ &= \mathrm{Var}[y] + {\mathbb{E}[y]}^2 - \mathbb{E}[2 \cdot y \cdot \hat{f}] + \mathrm{Var}[\hat{f}] + {\mathbb{E}[{\hat{f}}]}^2 \end{align} $$
since $Var[X] = \mathbb{E}[{X}^2] - {\mathbb{E}[X]}^2 \implies \mathbb{E}[X^2] = Var[X] + {\mathbb{E}[X]}^2$
Bias-Variance Decomposition
$$\begin{align} \mathbb{E}[y] &= \mathbb{E}[f + \epsilon] \\ &= \mathbb{E}[f] + \mathbb{E}[\epsilon] & \text{ (linearity of expectations)}\\ &= \mathbb{E}[f] + 0 & \text{ (zero-mean noise)}\\ &= f & \text{ (} f \text{ is deterministic)}\end{align}$$
Bias-Variance Decomposition
$$\begin{align} Var[y] &= \mathbb{E}[(y - \mathbb{E}[y])^2] \\ &= \mathbb{E}[(y - f)^2] \\ &= \mathbb{E}[(f + \epsilon - f)^2] \\ &= \mathbb{E}[\epsilon^2] \equiv \sigma^2 \end{align}$$
Bias-Variance Decomposition
We just showed that:
- $\mathbb{E}[y] = f$
- $\mathrm{Var}[y] = \mathbb{E}[\epsilon^2] = \sigma^2$
Therefore,
$$ \begin{align} \mathbb{E}[(y - \hat{f})^2] &= Var[y] + {\mathbb{E}[y]}^2 - \mathbb{E}[2 \cdot y \cdot \hat{f}] + Var[\hat{f}] + {\mathbb{E}[{\hat{f}}]}^2 \\ &= \sigma^2 + f^2 - \mathbb{E}[2 \cdot y \cdot \hat{f}] + Var[\hat{f}] + {\mathbb{E}[{\hat{f}}]}^2 \end{align} $$
Bias-Variance Decomposition
Note $y$ is random only in $\epsilon$ (again, $f$ is deterministic). 
Also, $\epsilon$ is independent from $\hat{f}$.
$\begin{align}\mathbb{E}[2 \cdot y \cdot \hat{f}] &= \mathbb{E}[2 \cdot y] \cdot \mathbb{E}[\hat{f}] & \text{ (by independence) } \\ &= 2 \cdot \mathbb{E}[y] \cdot \mathbb{E}[\hat{f}] \\ &= 2 \cdot f \cdot \mathbb{E}[\hat{f}] \end{align}$
Thus, we now have $\mathbb{E}[(y - \hat{f})^2] = \sigma^2 + f^2 - 2 \cdot f \cdot \mathbb{E}[\hat{f}] + Var[\hat{f}] + {\mathbb{E}[{\hat{f}}]}^2$
Bias-Variance Decomposition
$\mathbb{E}[(y - \hat{f})^2] = \sigma^2 + Var[\hat{f}] + f^2 - 2 \cdot f \cdot \mathbb{E}[\hat{f}] + {\mathbb{E}[{\hat{f}}]}^2$
Now, $f^2 - 2 \cdot f \cdot \mathbb{E}[\hat{f}] + \mathbb{E}[\hat{f}]^2 = (f - \mathbb{E}[\hat{f}])^2$
$\implies \mathbb{E}[(y - \hat{f})^2] = \sigma^2 + Var[\hat{f}] + (f - \mathbb{E}[\hat{f}])^2$
$\begin{align} \text{Finally, } \mathbb{E}[f - \hat{f}] &= \mathbb{E}[f] - \mathbb{E}[\hat{f}] \text{ (linearity of expectations)} \\ &= f - \mathbb{E}[\hat{f}] \end{align}$
So,
$$\mathbb{E}[(y - \hat{f})^2] = \underbrace{\sigma^2}_{\text{irreducible error}} + \underbrace{\text{Var}[\hat{f}]}_{\text{Variance}} + \underbrace{{\mathbb{E}[f - \mathbb{E}[\hat{f}]]}^2}_{\text{Bias}^2}$$
Bias-Variance Decomposition
We have
$$\mathbb{E}[(y - \hat{f})^2] = \underbrace{\sigma^2}_{\text{irreducible error}} + \underbrace{\text{Var}[\hat{f}]}_{\text{Variance}} + \underbrace{{\mathbb{E}[f - \mathbb{E}_S[\hat{f}]]}^2}_{\text{Bias}^2}$$
Bias and Variance Formulae
Bias of an estimator, $B(\hat{f}) = \mathbb{E}[\hat{f}] - f$
Variance of an estimator, $Var(\hat{f}) = \mathbb{E}[(\hat{f} - \mathbb{E}[\hat{f}])^2]$
An example to explain Bias/Variance and illustrate the tradeoff
Consider estimating a sinusoidal function.
(The example that follows is inspired by Yaser Abu-Mostafa's CS 156 lecture titled "Bias-Variance Tradeoff".)
End of explanation
"""
# polyfit_sin() generates 5 samples of the form (x,y) where y=sin(pi*x)
# then it tries to fit a degree=0 polynomial (i.e. 
a constant func.) to the data # Ignore return values for now, we will return to these later _, _, _, _ = polyfit_sin(degree=0, iterations=1, num_points=5, show=True) """ Explanation: Let's return to fitting polynomials Here we generate some samples $x,y$, with $y = \sin(2\pi x)$ We then fit a degree-0 polynomial (i.e. a constant function) to the samples End of explanation """ # Estimate two points of sin(pi * x) with a constant 5 times _, _, _, _ = polyfit_sin(0, 5) """ Explanation: We can do this over many datasets Let's sample a number of datasets How does the fitted polynomial change for different datasets? End of explanation """ # Estimate two points of sin(pi * x) with a constant 100 times _, _, _, _ = polyfit_sin(0, 25) MSE, errs, mean_coeffs, coeffs_list = polyfit_sin(0, 100,num_points = 3,show=False) biases, variances = calculate_bias_variance(coeffs_list,RANGEXS,TRUEYS) plot_bias_and_variance(biases,variances,RANGEXS,TRUEYS,np.polyval(np.poly1d(mean_coeffs), RANGEXS)) """ Explanation: What about over lots more datasets? End of explanation """ poly_degree = 0 results_list = [] MSE, errs, mean_coeffs, coeffs_list = polyfit_sin( poly_degree, 500,num_points = 5,show=False) biases, variances = calculate_bias_variance(coeffs_list,RANGEXS,TRUEYS) sns.barplot(x='type', y='error',hue='poly_degree', data=pd.DataFrame([ {'error':np.mean(biases), 'type':'bias','poly_degree':0}, {'error':np.mean(variances), 'type':'variance','poly_degree':0}])) """ Explanation: Decomposition: $\mathbb{E}[(y - \hat{f})^2] = \underbrace{{\sigma^2}}\text{irreducible error} + \underbrace{{\text{Var}[\hat{f}]}}\text{Variance} + \underbrace{{\mathbb{E}[f - \mathbb{E}S[\hat{f}]]}^2}{\text{Bias}^2}$ Blue curve: true $f$ Black curve: $\hat f$, average predicted values of $y$ Yellow is error due to Bias, Red/Pink is error due to Variance Bias vs. 
Variance We can calculate how much error we suffered due to bias and due to variance End of explanation """ MSE, _, _, _ = polyfit_sin(degree=3, iterations=1) """ Explanation: Let's now fit degree=3 polynomials Let's sample a dataset of 5 points and fit a cubic poly End of explanation """ _, _, _, _ = polyfit_sin(degree=3,iterations=5,num_points=5,show=True) """ Explanation: Let's now fit degree=3 polynomials What does this look like over 5 different datasets? End of explanation """ # Estimate two points of sin(pi * x) with a line 50 times _, _, _, _ = polyfit_sin(degree=3, iterations=50) MSE, errs, mean_coeffs, coeffs_list = polyfit_sin(3,500,show=False) biases, variances = calculate_bias_variance(coeffs_list,RANGEXS,TRUEYS) plot_bias_and_variance(biases,variances,RANGEXS,TRUEYS,np.polyval(np.poly1d(mean_coeffs), RANGEXS)) """ Explanation: Let's now fit degree=3 polynomials What does this look like over 50 different datasets? End of explanation """ results_list = [] for poly_degree in [0,1,3]: MSE, errs, mean_coeffs, coeffs_list = polyfit_sin(poly_degree,500,num_points=5,show=False) biases, variances = calculate_bias_variance(coeffs_list,RANGEXS,TRUEYS) results_list.append({'error':np.mean(biases), 'type':'bias', 'poly_degree':poly_degree}) results_list.append({'error':np.mean(variances), 'type':'variance', 'poly_degree':poly_degree}) sns.barplot(x='type', y='error',hue='poly_degree',data=pd.DataFrame(results_list)) """ Explanation: $$\mathbb{E}[(y - \hat{f})^2] = \underbrace{{\sigma^2}}\text{irreducible error} + \underbrace{{\text{Var}[\hat{f}]}}\text{Variance} + \underbrace{{\mathbb{E}[f - \mathbb{E}S[\hat{f}]]}^2}{\text{Bias}^2}$$ * Blue curve: true $f$ * Black curve: $\hat f$, average prediction (of the value of $y$) * Yellow is error due to Bias, Red/Pink is error due to Variance End of explanation """ # Image from Andrew Ng's Stanford CS229 lecture titled "Advice for applying machine learning" from IPython.display import Image 
Image(filename='images/HighVariance.png', width=800, height=600) # Testing error still decreasing as the training set size increases. Suggests increasing the training set size. # Large gap Between Training and Test Error. # Image from Andrew Ng's Stanford CS229 lecture titled "Advice for applying machine learning" from IPython.display import Image Image(filename='images/HighBias.png', width=800, height=600) # Training error is unacceptably high. # Small gap between training error and testing error. """ Explanation: Bias Variance Tradeoff Central problem in supervised learning. Ideally, one wants to choose a model that both accurately captures the regularities in its training data, but also generalizes well to unseen data. Unfortunately, it is typically impossible to do both simultaneously. High Variance: Model represents the training set well. Overfit to noise or unrepresentative training data. Poor generalization performance High Bias: Simplistic models. Fail to capture regularities in the data. May give better generalization performance. Interpretations of Bias Captures the errors caused by the simplifying assumptions of a model. Captures the average errors of a model across different training sets. Interpretations of Variance Captures how much a learning method moves around the mean. How different can one expect the hypotheses of a given model to be? How sensitive is an estimator to different training sets? Complexity of Model Simple models generally have high bias and complex models generally have low bias. Simple models generally have low variance andcomplex models generally have high variance. Underfitting / Overfitting High variance is associated with overfitting. High bias is associated with underfitting. Training set size Decreasing the training set size Helps with a high bias algorithm: Will in general not help in improving performance. Can attain the same performance with smaller training samples however. Additional advantage of increases in speed. 
Increase the training set size Decreases Variance by reducing overfitting. Number of features Increasing the number of features. Decreases bias at the expense of increasing the variance. Decreasing the number of features. Dimensionality reduction can decrease variance by reducing over-fitting. Features Many techniques for engineering and selecting features (Feature Engineering and Feature Extraction) - PCA, Isomap, Kernel PCA, Autoencoders, Latent semantic analysis, Nonlinear dimensionality reduction, Multidimensional Scaling Features The importance of features "Coming up with features is difficult, time-consuming, requires expert knowledge. Applied machine learning is basically feature engineering" - Andrew Ng "... some machine learning projects succeed and some fail. What makes the difference? Easily the most important factor is the features used." - Pedro Domingos Regularization (Changing $\lambda$ or $C$) Regularization is designed to impose simplicity by adding a penalty term that depends on the characteristics of the parameters. Decrease Regularization. Reduces bias (allows the model to be more complex). Increase Regularization. Reduces variance by reducing overfitting (again, regularization imposes "simplicity.") Ideal bias and variance? All is not lost. Bias and Variance can both be lowered through some methods: Ex: Boosting (learning from weak classifiers). The sweet spot for a model is the level of complexity at which the increase in bias is equivalent to the reduction in variance. Model Selection Model Selection ML Algorithms generally have a lot of parameters that must be chosen. A natural question is then "How do we choose them?" Examples: Penalty for margin violation (C), Polynomial Degree in polynomial fitting Model Selection Simple Idea: Construct models $M_i, i = 1, ..., n$. Train each of the models to get a hypothesis $h_i, i = 1, ..., n$. Choose the best. Does this work? No! Overfitting. This brings us to cross validation.
Hold-Out Cross Validation (1) Randomly split the training data $D$ into $D_{train}$ and $D_{val}$, say 70% of the data and 30% of the data respectively. (2) Train each model $M_i$ on $D_{train}$ only, each time getting a hypothesis $h_i$. (3) Select and output the hypothesis $h_i$ that had the smallest error on the held-out validation set. Disadvantages: - Wastes a sizable amount of data (30\% in the above scenario), so that fewer training examples are available. - Uses only some data for training and other data for validation. K-Fold Cross Validation (Step 1) Randomly split the training data $D$ into $K$ disjoint subsets of $N/K$ training samples each. - Let these subsets be denoted $D_1, ..., D_K$. K-Fold Cross Validation (Step 2) For each model $M_i$, we evaluate the model as follows: - Train the model $M_i$ on $D \setminus D_k$ (all of the subsets except subset $D_k$) to get hypothesis $h_i(k)$. - Test the hypothesis $h_i(k)$ on $D_k$ to get the error (or loss) $\epsilon_i(k)$. - Estimated generalization error for model $M_i$ is then given by $e^g_i = \frac{1}{K} \sum \limits_{k = 1}^K \epsilon_i (k)$ K-Fold Cross Validation (Step 3) Pick the model $M_i^*$ with the lowest estimated generalization error $e^{g}_i$ and retrain the model on the entire training set, thus giving the final hypothesis $h^*$ that is output. Three Way Data Splits If model selection and true error estimates are to be computed simultaneously, the data needs to be divided into three disjoint sets. Training set: A set of examples used for learning Validation set: A set of examples used to tune the hyperparameters of a classifier. Test Set: A set of examples used only to assess the performance of a fully-trained model.
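The K-fold procedure above maps directly onto a few lines of code. Below is a minimal numpy sketch — the two toy "models" (a constant predictor and a straight-line fit) are hypothetical stand-ins for the $M_i$, not anything from the lecture itself:

```python
import numpy as np

def k_fold_cv(x, y, fit, loss, k=5, seed=0):
    """Estimate generalization error e_g = (1/K) * sum_k eps(k)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)                 # K disjoint subsets D_1..D_K
    errs = []
    for k_i in range(k):
        val = folds[k_i]                           # held-out fold D_k
        train = np.concatenate([f for j, f in enumerate(folds) if j != k_i])
        model = fit(x[train], y[train])            # train on D \ D_k
        errs.append(loss(y[val], model(x[val])))   # test on D_k
    return np.mean(errs)

# Two toy "models": predict the mean of y, or fit a line
fit_mean = lambda x, y: (lambda xs: np.full(len(xs), y.mean()))
fit_line = lambda x, y: (lambda xs, c=np.polyfit(x, y, 1): np.polyval(c, xs))
mse = lambda y, yhat: np.mean((y - yhat) ** 2)

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = 3 * x + 0.1 * rng.standard_normal(200)        # the data really is linear

err_mean = k_fold_cv(x, y, fit_mean, mse)
err_line = k_fold_cv(x, y, fit_line, mse)
# the linear model should have the lower estimated generalization error
```

Selecting the model with the smaller `k_fold_cv` score is exactly Step 3 of the procedure; the chosen model would then be retrained on all of $D$.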
Procedure Outline Divide the available data into training, validation and test sets Select a model (and hyperparameters) Train the model using the training set Evaluate the model using the validation set Repeat steps 2 through 4 using different models (and hyperparameters) Select the best model (and hyperparameters) and train it using data from the training and validation sets Assess this final model using the test set How to choose hyperparameters? Cross Validation is only useful if we have some number of models. This often means constructing models each with a different combination of hyperparameters. Random Search Just choose each hyperparameter randomly (possibly within some range for each). Pro: Easy to implement. Viable for models with a small number of hyperparameters and/or low dimensional data. Con: Very inefficient for models with a large number of hyperparameters or high dimensional data (curse of dimensionality). Grid Search / Parameter Sweep Choose a subset for each of the parameters. Discretize real valued parameters with step sizes as necessary. Output the model with the best cross validation performance. Pro: "Embarrassingly parallel" (can be easily parallelized) Con: Again, the curse of dimensionality poses problems. Bayesian Optimization Assumes that there is a smooth but noisy relation that acts as a mapping from hyperparameters to the objective function. Gather observations in such a manner as to evaluate the machine learning model as few times as possible while revealing as much information as possible about the mapping and, in particular, the location of the optimum. Exploration vs. exploitation problem. Learning Curves Provide a visualization for diagnostics such as: - Bias / variance - Convergence End of explanation """
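To make the grid search / parameter sweep described above concrete, here is a small self-contained sketch; the quadratic "validation error" function is a hypothetical stand-in for a real cross-validated score:

```python
import itertools

def grid_search(param_grid, evaluate):
    """Try every combination of hyperparameters; keep the best scorer."""
    best_params, best_score = None, float("inf")
    keys = sorted(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = evaluate(params)            # e.g. K-fold cross-validation error
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy "validation error" with a known optimum at C=1.0, gamma=0.1
def fake_cv_error(p):
    return (p["C"] - 1.0) ** 2 + (p["gamma"] - 0.1) ** 2

grid = {"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]}
best, err = grid_search(grid, fake_cv_error)
print(best)   # {'C': 1.0, 'gamma': 0.1}
```

Because each grid point is evaluated independently, the loop body is trivially parallelizable — the "embarrassingly parallel" property noted above.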
OceanPARCELS/parcels
parcels/examples/tutorial_sampling.ipynb
mit
# Modules needed for the Parcels simulation from parcels import Variable, FieldSet, ParticleSet, JITParticle, AdvectionRK4 import numpy as np from datetime import timedelta as delta # To open and look at the temperature data import xarray as xr import matplotlib as mpl import matplotlib.pyplot as plt """ Explanation: Field sampling tutorial The particle trajectories allow us to study fields like temperature, plastic concentration or chlorophyll from a Lagrangian perspective. In this tutorial we will go through how particles can sample Fields, using temperature as an example. Along the way we will get to know the parcels class Variable (see here for the documentation) and some of its methods. This tutorial covers several applications of a sampling setup: * Basic along trajectory sampling * Sampling initial conditions * Sampling initial and along-trajectory values with repeated release Basic sampling We import the Variable class as well as the standard modules needed to set up a simulation. End of explanation """ # Velocity and temperature fields fieldset = FieldSet.from_parcels("Peninsula_data/peninsula", extra_fields={'T': 'T'}, allow_time_extrapolation=True) # Particle locations and initial time npart = 10 # number of particles to be released lon = 3e3 * np.ones(npart) lat = np.linspace(3e3 , 45e3, npart, dtype=np.float32) time = np.arange(0, npart) * delta(hours=2).total_seconds() # release each particle two hours later # Plot temperature field and initial particle locations T_data = xr.open_dataset("Peninsula_data/peninsulaT.nc") plt.figure() ax = plt.axes() T_contour = ax.contourf(T_data.x.values, T_data.y.values, T_data.T.values[0,0], cmap=plt.cm.inferno) ax.scatter(lon, lat, c='w') plt.colorbar(T_contour, label='T [$^{\circ} C$]') plt.show() """ Explanation: Suppose we want to study the environmental temperature for plankton drifting around a peninsula. 
We have a dataset with surface ocean velocities and the corresponding sea surface temperature stored in netcdf files in the folder "Peninsula_data". Besides the velocity fields, we load the temperature field using extra_fields={'T': 'T'}. The particles are released on the left hand side of the domain. End of explanation """ class SampleParticle(JITParticle): # Define a new particle class temperature = Variable('temperature', initial=fieldset.T) # Variable 'temperature' initialised by sampling the temperature pset = ParticleSet(fieldset=fieldset, pclass=SampleParticle, lon=lon, lat=lat, time=time) """ Explanation: To sample the temperature field, we need to create a new class of particles where temperature is a Variable. As an argument for the Variable class, we need to provide the initial values for the particles. The easiest option is to access fieldset.T, but this option has some drawbacks. End of explanation """ repeatdt = delta(hours=3) pset = ParticleSet(fieldset=fieldset, pclass=SampleParticle, lon=lon, lat=lat, repeatdt=repeatdt) """ Explanation: Using fieldset.T leads to the WARNING displayed above because Variable accesses the fieldset in the slower SciPy mode. Another problem can occur when using the repeatdt argument instead of time: <a id='repeatdt_error'></a> End of explanation """ class SampleParticleInitZero(JITParticle): # Define a new particle class temperature = Variable('temperature', initial=0) # Variable 'temperature' initially zero pset = ParticleSet(fieldset=fieldset, pclass=SampleParticleInitZero, lon=lon, lat=lat, time=time) def SampleT(particle, fieldset, time): particle.temperature = fieldset.T[time, particle.depth, particle.lat, particle.lon] sample_kernel = pset.Kernel(SampleT) # Casting the SampleT function to a kernel. """ Explanation: Since the initial time is not defined, the Variable class does not know at what time to access the temperature field. 
The solution to this initialisation problem is to leave the initial value zero and sample the initial condition in JIT mode with the sampling Kernel: End of explanation """ pset.execute(sample_kernel, dt=0) # by only executing the sample kernel we record the initial temperature of the particles output_file = pset.ParticleFile(name="InitZero.nc", outputdt=delta(hours=1)) pset.execute(AdvectionRK4 + sample_kernel, runtime=delta(hours=30), dt=delta(minutes=5), output_file=output_file) output_file.export() # export the trajectory data to a netcdf file output_file.close() """ Explanation: To sample the initial values we can execute the Sample kernel over the entire particleset with dt = 0 so that time does not increase End of explanation """ Particle_data = xr.open_dataset("InitZero.nc") plt.figure() ax = plt.axes() ax.set_ylabel('Y') ax.set_xlabel('X') ax.set_ylim(1000, 49000) ax.set_xlim(1000, 99000) ax.plot(Particle_data.lon.transpose(), Particle_data.lat.transpose(), c='k', zorder=1) T_scatter = ax.scatter(Particle_data.lon, Particle_data.lat, c=Particle_data.temperature, cmap=plt.cm.inferno, norm=mpl.colors.Normalize(vmin=0., vmax=20.), edgecolor='k', zorder=2) plt.colorbar(T_scatter, label='T [$^{\circ} C$]') plt.show() """ Explanation: The particle dataset now contains the particle trajectories and the corresponding environmental temperature End of explanation """ class SampleParticleOnce(JITParticle): # Define a new particle class temperature = Variable('temperature', initial=0, to_write='once') # Variable 'temperature' pset = ParticleSet(fieldset=fieldset, pclass=SampleParticleOnce, lon=lon, lat=lat, time=time) pset.execute(sample_kernel, dt=0) # by only executing the sample kernel we record the initial temperature of the particles output_file = pset.ParticleFile(name="WriteOnce.nc", outputdt=delta(hours=1)) pset.execute(AdvectionRK4, runtime=delta(hours=24), dt=delta(minutes=5), output_file=output_file) output_file.close() """ Explanation: Sampling initial 
values In some simulations only the particles initial value within the field is of interest: the variable does not need to be known along the entire trajectory. To reduce computing we can specify the to_write argument to the temperature Variable. This argument can have three values: True, False or 'once'. It determines whether to write the Variable to the output file. If we want to know only the initial value, we can enter 'once' and only the first value will be written to the output file. End of explanation """ Particle_data = xr.open_dataset("WriteOnce.nc") plt.figure() ax = plt.axes() ax.set_ylabel('Y') ax.set_xlabel('X') ax.set_ylim(1000, 49000) ax.set_xlim(1000, 99000) ax.plot(Particle_data.lon.transpose(), Particle_data.lat.transpose(), c='k', zorder=1) T_scatter = ax.scatter(Particle_data.lon, Particle_data.lat, c=np.tile(Particle_data.temperature, (Particle_data.lon.shape[1], 1)).T, cmap=plt.cm.inferno, norm=mpl.colors.Normalize(vmin=0., vmax=1.), edgecolor='k', zorder=2) plt.colorbar(T_scatter, label='Initial T [$^{\circ} C$]') plt.show() """ Explanation: Since all the particles are released at the same x-position and the temperature field is invariant in the y-direction, all particles have an initial temperature of 0.4$^\circ$C End of explanation """ outputdt = delta(hours=1).total_seconds() # write the particle data every hour repeatdt = delta(hours=6).total_seconds() # release each set of particles six hours later runtime = delta(hours=24).total_seconds() pset = ParticleSet(fieldset=fieldset, pclass=SampleParticleInitZero, lon=[], lat=[], time=[]) # Using SampleParticleInitZero kernels = AdvectionRK4 + sample_kernel output_file = pset.ParticleFile(name="RepeatLoop.nc") # Do not specify the outputdt yet, so we can manually write the output for time in np.arange(0, runtime, outputdt): if np.isclose(np.fmod(time, repeatdt), 0): # time is a multiple of repeatdt pset_init = ParticleSet(fieldset=fieldset, pclass=SampleParticleInitZero, lon=lon, lat=lat, 
time=time) pset_init.execute(sample_kernel, dt=0) # record the initial temperature of the particles pset.add(pset_init) # add the newly released particles to the total particleset output_file.write(pset,time) # write the initialised particles and the advected particles pset.execute(kernels, runtime=outputdt, dt=delta(minutes=5)) print('Length of pset at time %d: %d' % (time, len(pset))) output_file.write(pset, time+outputdt) output_file.close() """ Explanation: Sampling with repeatdt Some experiments require large sets of particles to be released repeatedly on the same locations. The particleset object has the option repeatdt for this, but when you want to sample the initial values this introduces some problems as we have seen here. For more advanced control over the repeated release of particles, you can manually write a for-loop using the function particleset.add(). Note that this for-loop is very similar to the one that repeatdt would execute under the hood in particleset.execute(). Adding particles to the particleset during the simulation reduces the memory used compared to specifying the delayed particle release times upfront, which improves the computational speed. In the loop, we want to initialise new particles and sample their initial temperature. If we want to write both the initialised particles with the sampled temperature and the older particles that have already been advected, we have to make sure both sets of particles find themselves at the same moment in time. The initial conditions must be written to the output file before advecting them, because during advection the particle.time will increase. We do not specify the outputdt argument for the output_file and instead write the data with output_file.write(pset, time) on each iteration. A new particleset is initialised whenever time is a multiple of repeatdt. Because the particles are advected after being written, the last displacement must be written once more after the loop. 
End of explanation """ Particle_data = xr.open_dataset("RepeatLoop.nc") print(Particle_data.time[:,0].values / np.timedelta64(1, 'h')) # The initial hour at which each particle is released assert np.allclose(Particle_data.time[:,0].values / np.timedelta64(1, 'h'), [int(k/10)*6 for k in range(40)]) """ Explanation: In each iteration of the loop, spanning six hours, we have added ten particles. End of explanation """ print(Particle_data.temperature[:,0].values) assert np.allclose(Particle_data.temperature[:,0].values, Particle_data.temperature[:,0].values[0]) """ Explanation: Let's check if the initial temperatures were sampled correctly for all particles End of explanation """ Release0 = Particle_data.where(Particle_data.time[:,0]==np.timedelta64(0, 's')) # the particles released at t = 0 plt.figure() ax = plt.axes() ax.set_ylabel('Y') ax.set_xlabel('X') ax.set_ylim(1000, 49000) ax.set_xlim(1000, 99000) ax.plot(Release0.lon.transpose(), Release0.lat.transpose(), c='k', zorder=1) T_scatter = ax.scatter(Release0.lon, Release0.lat, c=Release0.temperature, cmap=plt.cm.inferno, norm=mpl.colors.Normalize(vmin=0., vmax=20.), edgecolor='k', zorder=2) plt.colorbar(T_scatter, label='T [$^{\circ} C$]') plt.show() """ Explanation: And see if the sampling of the temperature field is done correctly along the trajectories End of explanation """
datamicroscopes/release
examples/gamma_poisson.ipynb
bsd-3-clause
import pandas as pd import seaborn as sns import numpy as np import matplotlib.pyplot as plt sns.set_context('talk') import csv import urllib2 import StringIO %matplotlib inline """ Explanation: Count Data and Ordinal Data with the Gamma-Poisson Distribution Typically, we model count data, or integer valued data, with the gamma-Poisson distribution. Recall that the Poisson distribution is a distribution over integer values parameterized by $\lambda$. One interpretation of $\lambda$ is that it parameterizes the rate at which events occur within a fixed interval, assuming these events occur independently. The gamma distribution is conjugate to the Poisson distribution, so the gamma-Poisson distribution allows us to learn both the distribution over counts and the rate parameter $\lambda$. Let's set up our environment and consider some examples of count data End of explanation """ ceb = pd.read_csv('http://data.princeton.edu/wws509/datasets/ceb.dat', sep='\s+') ceb.head() """ Explanation: Children Ever Born is a dataset of birthrates in Fiji from the World Fertility Survey with the following columns: dur: marriage duration res: residence, educ: level of education, mean: mean number of children born, var: variance of children born n: number of women Ordinal columns dur, res, and educ are shown as text in the following dataset End of explanation """ ceb_int = pd.read_csv('http://data.princeton.edu/wws509/datasets/ceb.raw', sep='\s+', names = ['index'] + list(ceb.columns[:-1]), index_col=0) ceb_int.head() """ Explanation: With these columns encoded, we can now represent them as integers. dur and educ are ordinal columns. Additionally, the number of women, n, is integer valued.
End of explanation """ plt.figure(figsize=(9,6)) ct = pd.crosstab(ceb_int['dur'], ceb_int['educ'], values=ceb_int['n'], aggfunc= np.sum).sort_index(ascending = False) sns.heatmap(ct, annot = True) plt.yticks(ceb_int['dur'].drop_duplicates().values - .5, ceb['dur'].drop_duplicates().values) plt.xticks(ceb_int['educ'].drop_duplicates().values - .5, ceb['educ'].drop_duplicates().values) plt.ylabel('duration of marriage (years)') plt.xlabel('level of education') plt.title('heatmap of marriage duration by level of education') """ Explanation: We can map these orderings of dur and educ to produce a crosstab heatmap of n, the number of women End of explanation """ response = urllib2.urlopen('http://stanford.edu/class/psych252/_downloads/caffeine.csv') html = response.read() caf = pd.read_csv(StringIO.StringIO(html[:-16])) caf.head() """ Explanation: Since dur and educ are ordinal valued, the columns assume a small number of integer values. Additionally, the caffeine dataset below measures caffeine intake and performance on a 10 question quiz. The variables are: coffee: coffee intake (1 = 0 cups, 2 = 2 cups, 3 = 4 cups) perf: quiz score numprob: problems attempted accur: accuracy End of explanation """ caf.describe() """ Explanation: Based on the characteristics of each column, coffee and numprob easily fit into the category of count data appropriate to a gamma-Poisson distribution End of explanation """ from microscopes.models import gp as gamma_poisson """ Explanation: Note that while integer valued data with high values is sometimes modeled with a gamma-Poisson distribution, remember that the underlying Poisson likelihood has equal mean and variance $\lambda$: $$E(X) = Var(X) = \lambda$$ If you want to be more flexible with this assumption, you may want to consider using a normal inverse-chi-squared or a normal inverse-Wishart distribution, depending on your data. To import the gamma-poisson likelihood, call: End of explanation """
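As a numerical sanity check on the mean–variance point above (using only numpy, not datamicroscopes): a plain Poisson has variance equal to its mean, while mixing the rate over a gamma prior keeps the mean but inflates the variance. The particular shape/scale values below are illustrative choices, not anything fit to the datasets in this notebook:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Plain Poisson: mean and variance are both lam
lam = 4.0
poisson_draws = rng.poisson(lam, size=n)

# Gamma-Poisson mixture: draw a rate per observation, then a count
shape, scale = 2.0, 2.0              # gamma prior on the rate; prior mean = 4
rates = rng.gamma(shape, scale, size=n)
gp_draws = rng.poisson(rates)

print(poisson_draws.mean(), poisson_draws.var())  # both close to 4
print(gp_draws.mean(), gp_draws.var())            # mean ~4, variance much larger
```

Here the mixture variance is $E[\lambda] + Var[\lambda] = 4 + 8 = 12$, illustrating why the gamma-Poisson is the usual choice for overdispersed counts.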
bje-/NEMO
doc/guide.ipynb
gpl-3.0
import nemo from nemo import scenarios c = nemo.Context() scenarios._one_ccgt(c) print(c.generators) """ Explanation: NEMO User's Guide: a Jupyter notebook Note that this is a Jupyter notebook that uses some magic IPython commands (starting with %). It may not work in other notebooks like the one included with Pycharm. Installing a configuration file Before you can run NEMO, you need a configuration file. The default configuration file (nemo.cfg) is installed with the NEMO package and can be copied into your working directory as a starting point. On Unix systems, this can be found at /usr/local/etc/nemo.cfg. Alternatively, you can set the NEMORC environment variable to point to a configuration file. See the "Configuration file" section below for more details on the format of this file. A simple example NEMO can be driven by your own Python code. Some simple examples of how to do this appear below. First, we will create a simulation with a single combined cycle gas turbine (CCGT). The "NSW1:31" notation indicates that the generator is sited in polygon 31 in the NSW1 region. End of explanation """ nemo.run(c) print(c) """ Explanation: Then run the simulation: End of explanation """ c = nemo.Context() c.generators[0].set_capacity(13.2) c.generators[1].set_capacity(20) nemo.run(c) print(c) """ Explanation: The CCGT is configured with a zero capacity. Hence, no electricity is served in the simulation (100% unserved energy) and the largest shortfall was 33,645 MW (33.6 GW). This figure corresponds to the peak demand in the simulated year. 
Let's now do a run with the default scenario (two CCGTs: 13.2 GW and 20 GW, respectively) such that almost all of the demand is met except for a few hours of unserved energy: End of explanation """ c = nemo.Context() c.generators[0].set_capacity(13.2) c.generators[1].set_capacity(20) nemo.run(c) print(c) """ Explanation: If we print the unserved attribute in the context, we can see when the six hours of unserved energy occurred and how large the shortfalls were: End of explanation """ print(c.unserved) """ Explanation: Plotting results NEMO includes a utils.py module that includes a plot function to show the time-sequential dispatch. The following example demonstrates its use: End of explanation """ from matplotlib.pyplot import ioff from nemo import utils ioff() utils.plt.rcParams["figure.figsize"] = (12, 6) # 12" x 6" figure utils.plot(c) """ Explanation: The previous plot is rather bunched up. Instead, you can also pass a pair of dates to the plot() function to limit the range of dates shown. For example: End of explanation """ from matplotlib.pyplot import ioff from datetime import datetime ioff() utils.plt.rcParams["figure.figsize"] = (12, 6) # 12" x 6" figure utils.plot(c, xlim=[datetime(2010, 1, 5), datetime(2010, 1, 12)]) """ Explanation: Scripting simulations Writing NEMO in Python allows the simulation framework to be easily scripted using Python language constructs, such as for loops. Using the previous example, the following small script demonstrates how simulation runs can be automated: End of explanation """ c = nemo.Context() scenarios._one_ccgt(c) for i in range(0, 40): c.generators[0].set_capacity(i) nemo.run(c) if c.unserved_energy() == 0: break print(c.generators) """ Explanation: Once the generator capacity reaches 34 GW, there is no unserved energy. Scenarios NEMO contains two types of scenarios: supply-side and demand-side scenarios. The supply-side scenario modifies the list of generators.
For example: End of explanation """ !python3 evolve --list-scenarios """ Explanation: A list of the current supply-side scenarios (with descriptions) can be obtained by running evolve --list-scenarios from the shell (without the leading !): End of explanation """ !python3 replay --help """ Explanation: Demand-side scenarios modify the electricity demand time series before the simulation runs. Demand-side scenarios behave like operators that can be combined in any combination to modify the demand as desired. These are: roll:X rolls the load by x timesteps scale:X scales the load by x percent scaletwh:X scales the load to x TWh shift:N:H1:H2 shifts n megawatts every day from hour h1 to hour h2 peaks:N:X adjust demand peaks over n megawatts by x percent npeaks:N:X adjust top n demand peaks by x percent For example, applying scale:-10 followed by shift:1000:16:12 will reduce the overall demand by 10% and then shift 1 MW of demand from 4pm to noon every day of the year. Configuration file NEMO uses a configuration file to give users control over where data such as demand time series are to be found. The location of the configuration file can be specified by setting the NEMORC environment variable. The configuration file format is similar to Windows INI files; it has sections (in brackets) and, within sections, key=value pairs. The default configuration file is called nemo.cfg. 
The keys currently recognised are: [costs] co2-price-per-t ccs-storage-costs-per-t coal-price-per-gj discount-rate -- as a fraction (eg 0.05) gas-price-per-gj technology-cost-class -- default cost class [limits] hydro-twh-per-yr bioenergy-twh-per-yr nonsync-penetration -- as a fraction (eg 0.75) minimum-reserves-mw [optimiser] generations -- number of CMA-ES generations to run sigma -- initial step-size [generation] cst-trace -- URL of CST generation traces egs-geothermal-trace -- URL of EGS geothermal generation traces hsa-geothermal-trace -- URL of HSA geothermal generation traces wind-trace -- URL of wind generation traces pv1axis-trace -- URL of 1-axis PV generation traces rooftop-pv-trace -- URL of rooftop PV generation traces offshore-wind-trace -- URL of offshore wind generation traces [demand] demand-trace -- URL of demand trace data Running an optimisation Instead of running a single simulation, it is more interesting to use evolve which drives an evolutionary algorithm to find the least cost portfolio that meets demand. There are many options which you can discover by running evolve --help. Here is a simple example to find the least cost portfolio using the ccgt scenario (all-gas scenario with CCGT and OCGT generation): $ evolve -s ccgt It is possible to distribute the workload across multiple CPUs and multiple computers. See the SCOOP documentation for more details. To run the same evolution, but using all of your locally available CPU cores, you need to load the SCOOP module like so: $ python3 -m scoop evolve -s ccgt At the end of a run, details of the least cost system are printed on the console: the capacity of each generator, the energy supplied, CO2 emissions, costs, and the average cost of generation in dollars per MWh. If you want to see a plot of the system dispatch, you need to use the replay.py script described in the next section. 
Many of the optimisation parameters can be controlled from the command line, requiring no changes to the source code. Typically, source code changes are only required to add new supply scenario functions or cost classes. The command line options for evolve are documented as follows:

| Short option | Long option | Description | Default |
|--------------|-------------|----------------------------------------------|---------|
| -h | --help | Show help and then exit | |
| -c | --carbon-price | Carbon price in \$/tonne | 25 |
| -d | --demand-modifier | Demand modifier | unchanged |
| -g | --generations | Number of generations to run | 100 |
| -o | --output | Filename of results output file (will overwrite) | results.json |
| -p | --plot | Plot an hourly energy balance on completion | |
| -r | --discount-rate | Discount rate | 0.05 |
| -s | --supply-scenario | Generation mix scenario | re100 |
| -v | --verbose | Be verbose | False |
| | --bioenergy-limit | Limit on annual energy from bioenergy in TWh/year | 20 |
| | --ccs-storage-costs | CCS storage costs in \$/tonne | 27 |
| | --coal-price | Coal price in \$/GJ | 1.86 |
| | --costs | Use different cost scenario | AETA2013-in2030-mid |
| | --emissions-limit | Limit total emissions to N Mt/year | $\infty$ |
| | --fossil-limit | Limit share of energy from fossil sources | 1.0 |
| | --gas-price | Gas price in \$/GJ | 11 |
| | --hydro-limit | Limit on annual energy from hydro in TWh/year | 12 |
| | --lambda | CMA-ES lambda value | None (autodetect) |
| | --list-scenarios | Print list of scenarios and exit | |
| | --min-regional-generation | Minimum share of energy generated intra-region | 0.0 |
| | --nsp-limit | Non-synchronous penetration limit | 0.75 |
| | --reliability-std | Reliability standard (% unserved) | 0.002 |
| | --seed | Seed for random number generator | None |
| | --sigma | CMA-ES sigma value | 2.0 |
| | --trace-file | Filename for evaluation trace | None |
| | --version | Print version number and exit | |
Replaying a simulation To avoid having to re-run a long optimisation just to examine the resulting system, it is possible to reproduce a single run using the results from an earlier optimisation. The evolve script writes an output file at the end of the run (default filename results.json). This file encodes all of the relevant information from the optimisation run so that the solution it found can be replayed easily and accurately by replay. The input file for replay may consist of any number of scenarios and configurations to replay, one per line. Blank lines are ignored and comment lines (#) are shown for information. Each non-comment line must contain a JSON record from the results file that evolve writes. Typically the input file for replay will just be the output file from evolve unmodified. However, if you want multiple simulations to be replayed, this is easy to achieve by pasting multiple JSON strings into the file, one per line. A run is replayed using replay like so: $ replay -f results.json -p The -f option specifies the name of the input data file (the default is results.json) and the -p option enables a graphical plot of the system dispatch that you can navigate using zoom in, zoom out and pan controls. By including the --spills option, surplus energy in each hour will be plotted above the demand line in a lighter shade than the usual colour of the spilling generator. All command line options can be displayed using: End of explanation """ !python3 evolve -s ccgt -g10 | python3 summary """ Explanation: Summarising the optimiser output NEMO includes another script called summary that processes the verbose output from evolve and summarises it in a convenient table. You can either pipe the evolve output directly into summary (as below) or you can save the evolve output into a text file and use shell redirection to read the file as input. An example of piping: End of explanation """
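As an aside on the demand-side operators listed earlier: they compose left to right, so scale:-10 followed by shift:1000:16:12 behaves like nested function application. A minimal numpy sketch of that composition — the function names below are hypothetical illustrations of the idea, not NEMO's API:

```python
import numpy as np

# Hypothetical stand-ins for NEMO's demand-side scenario operators.
def scale(demand, pct):
    # scale:X scales the load by X percent
    return demand * (1 + pct / 100.0)

def shift(demand, mw, h1, h2, timesteps_per_day=24):
    # shift:N:H1:H2 moves N MW from hour h1 to hour h2 every day
    out = demand.copy().reshape(-1, timesteps_per_day)
    out[:, h1] -= mw
    out[:, h2] += mw
    return out.reshape(-1)

demand = np.full(48, 1000.0)   # two days of flat 1000 MW hourly demand
modified = shift(scale(demand, -10), 100, 16, 12)
```

Note that shift conserves total energy while scale does not, which is why the two kinds of operator are useful in combination.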
dchandan/rebound
ipython_examples/FourierSpectrum.ipynb
gpl-3.0
import rebound import numpy as np sim = rebound.Simulation() sim.units = ('AU', 'yr', 'Msun') sim.add("Sun") sim.add("Jupiter") sim.add("Saturn") """ Explanation: Fourier analysis & resonances A great benefit of being able to call rebound from within python is the ability to directly apply sophisticated analysis tools from scipy and other python libraries. Here we will do a simple Fourier analysis of a reduced Solar System consisting of Jupiter and Saturn. Let's begin by setting our units and adding these planets using JPL's horizons database: End of explanation """ sim.integrator = "whfast" sim.dt = 1. # in years. About 10% of Jupiter's period sim.move_to_com() """ Explanation: Now let's set the integrator to whfast, and sacrificing accuracy for speed, set the timestep for the integration to about $10\%$ of Jupiter's orbital period. End of explanation """ Nout = 100000 tmax = 3.e5 Nplanets = 2 x = np.zeros((Nplanets,Nout)) ecc = np.zeros((Nplanets,Nout)) longitude = np.zeros((Nplanets,Nout)) varpi = np.zeros((Nplanets,Nout)) times = np.linspace(0.,tmax,Nout) ps = sim.particles for i,time in enumerate(times): sim.integrate(time) os = sim.calculate_orbits() for j in range(Nplanets): x[j][i] = ps[j+1].x # we use the 0 index in x for Jup and 1 for Sat, but the indices for ps start with the Sun at 0 ecc[j][i] = os[j].e longitude[j][i] = os[j].l varpi[j][i] = os[j].Omega + os[j].omega """ Explanation: The last line (moving to the center of mass frame) is important to take out the linear drift in positions due to the constant COM motion. Without it we would erase some of the signal at low frequencies. Now let's run the integration, storing time series for the two planets' eccentricities (for plotting) and x-positions (for the Fourier analysis). Additionally, we store the mean longitudes and pericenter longitudes (varpi) for reasons that will become clear below. 
Having some idea of what the secular timescales are in the Solar System, we'll run the integration for $3\times 10^5$ yrs. We choose to collect $10^5$ outputs in order to resolve the planets' orbital periods ($\sim 10$ yrs) in the Fourier spectrum. End of explanation """ %matplotlib inline labels = ["Jupiter", "Saturn"] import matplotlib.pyplot as plt fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) plt.plot(times,ecc[0],label=labels[0]) plt.plot(times,ecc[1],label=labels[1]) ax.set_xlabel("Time (yrs)", fontsize=20) ax.set_ylabel("Eccentricity", fontsize=20) ax.tick_params(labelsize=20) plt.legend(); """ Explanation: Let's see what the eccentricity evolution looks like with matplotlib: End of explanation """ from scipy import signal Npts = 3000 logPmin = np.log10(10.) logPmax = np.log10(1.e5) Ps = np.logspace(logPmin,logPmax,Npts) ws = np.asarray([2*np.pi/P for P in Ps]) periodogram = signal.lombscargle(times,x[0],ws) fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) ax.plot(Ps,np.sqrt(4*periodogram/Nout)) ax.set_xscale('log') ax.set_xlim([10**logPmin,10**logPmax]) ax.set_ylim([0,0.15]) ax.set_xlabel("Period (yrs)", fontsize=20) ax.set_ylabel("Power", fontsize=20) ax.tick_params(labelsize=20) """ Explanation: Now let's try to analyze the periodicities in this signal. Here we have a uniformly spaced time series, so we could run a Fast Fourier Transform, but as an example of the wider array of tools available through scipy, let's run a Lomb-Scargle periodogram (which allows for non-uniform time series). This could also be used when storing outputs at each timestep using the integrator IAS15 (which uses adaptive and therefore nonuniform timesteps). Let's check for periodicities with periods logarithmically spaced between 10 and $10^5$ yrs. From the documentation, we find that the lombscargle function requires a list of corresponding angular frequencies (ws), and we obtain the appropriate normalization for the plot. 
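As a quick back-of-the-envelope check on the sampling choices above (an aside, with the values copied from the code): $10^5$ outputs over $3\times10^5$ yrs give a 3 yr cadence, which by the Nyquist criterion resolves any period longer than 6 yrs — comfortably shorter than Jupiter's $\approx 12$ yr orbit:

```python
tmax, Nout = 3.0e5, 100000
dt = tmax / Nout            # sampling interval in years
shortest_period = 2 * dt    # Nyquist: the shortest period the series can resolve
P_jupiter = 11.86           # approximate, for comparison
```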
To avoid conversions to orbital elements, we analyze the time series of Jupiter's x-position. End of explanation """ fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) ax.plot(Ps,np.sqrt(4*periodogram/Nout)) ax.set_xscale('log') ax.set_xlim([600,1600]) ax.set_ylim([0,0.003]) ax.set_xlabel("Period (yrs)", fontsize=20) ax.set_ylabel("Power", fontsize=20) ax.tick_params(labelsize=20) """ Explanation: We pick out the obvious signal in the eccentricity plot with a period of $\approx 45000$ yrs, which is due to secular interactions between the two planets. There is quite a bit of power aliased into neighboring frequencies due to the short integration duration, with contributions from the second secular timescale, which is out at $\sim 2\times10^5$ yrs and causes a slower, low-amplitude modulation of the eccentricity signal plotted above (we limited the time of integration so that the example runs in a few seconds). Additionally, though it was invisible on the scale of the eccentricity plot above, we clearly see a strong signal at Jupiter's orbital period of about 12 years. But wait! Even on this scale set by the dominant frequencies of the problem, we see an additional blip just below $10^3$ yrs. Such a periodicity is actually visible in the above eccentricity plot if you inspect the thickness of the lines. Let's investigate by narrowing the period range: End of explanation """ def zeroTo360(val): while val < 0: val += 2*np.pi while val > 2*np.pi: val -= 2*np.pi return val*180/np.pi """ Explanation: This is the right timescale to be due to resonant perturbations between giant planets ($\sim 100$ orbits). In fact, Jupiter and Saturn are close to a 5:2 mean-motion resonance. This is the famous great inequality that Laplace showed was responsible for slight offsets in the predicted positions of the two giant planets. Let's check whether this is in fact responsible for the peak. 
In this case, we have that the mean longitude of Jupiter $\lambda_J$ cycles approximately 5 times for every 2 of Saturn's ($\lambda_S$). The game is to construct a slowly-varying resonant angle, which here could be $\phi_{5:2} = 5\lambda_S - 2\lambda_J - 3\varpi_J$, where $\varpi_J$ is Jupiter's longitude of pericenter. This last term is a much smaller contribution to the variation of $\phi_{5:2}$ than the first two, but ensures that the coefficients in the resonant angle sum to zero and therefore that the physics do not depend on your choice of coordinates. To see a clear trend, we have to shift each value of $\phi_{5:2}$ into the range $[0,360]$ degrees, so we define a small helper function that does the wrapping and conversion to degrees: End of explanation """ phi = [zeroTo360(5.*longitude[1][i] - 2.*longitude[0][i] - 3.*varpi[0][i]) for i in range(Nout)] fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) ax.plot(times,phi) ax.set_xlim([0,5.e3]) ax.set_ylim([0,360.]) ax.set_xlabel("time (yrs)", fontsize=20) ax.set_ylabel(r"$\phi_{5:2}$", fontsize=20) ax.tick_params(labelsize=20) """ Explanation: Now we construct $\phi_{5:2}$ and plot it over the first 5000 yrs. End of explanation """ phi2 = [zeroTo360(2*longitude[1][i] - longitude[0][i] - varpi[0][i]) for i in range(Nout)] fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) ax.plot(times,phi2) ax.set_xlim([0,5.e3]) ax.set_ylim([0,360.]) ax.set_xlabel("time (yrs)", fontsize=20) ax.set_ylabel(r"$\phi_{2:1}$", fontsize=20) ax.tick_params(labelsize=20) """ Explanation: We see that the resonant angle $\phi_{5:2}$ circulates, but with a long period of $\approx 900$ yrs (compared to the orbital periods of $\sim 10$ yrs), which precisely matches the blip we saw in the Lomb-Scargle periodogram. This is approximately the same oscillation period observed in the Solar System, despite our simplified setup! 
This resonant angle is able to have a visible effect because its (small) effects build up coherently over many orbits. As a further illustration, other resonance angles like those at the 2:1 will circulate much faster (because Jupiter and Saturn's period ratio is not close to 2). We can easily plot this. Taking one of the 2:1 resonance angles $\phi_{2:1} = 2\lambda_S - \lambda_J - \varpi_J$, End of explanation """
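As a cross-check on the $\approx 900$ yr circulation seen above, the period follows from the two mean motions alone. The orbital periods below are rough textbook values I am assuming, not numbers taken from the simulation:

```python
import math

P_J, P_S = 11.86, 29.46    # approximate orbital periods of Jupiter and Saturn (yr)
n_J, n_S = 2 * math.pi / P_J, 2 * math.pi / P_S

# phi_{5:2} circulates at roughly |5 n_S - 2 n_J|; the slow pericenter term is tiny
circulation_period = 2 * math.pi / abs(5 * n_S - 2 * n_J)
```

This lands near 900 yrs, matching both the blip in the periodogram and the plotted circulation of $\phi_{5:2}$.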
byronknoll/tensorflow-compress
nncp-splitter.ipynb
unlicense
batch_size = 96 #@param {type:"integer"} #@markdown >_Set this to the same value that will be used in tensorflow-compress._ mode = 'split' #@param ["split", "join"] num_parts = 4 #@param {type:"integer"} #@markdown >_This is the number of parts the file should be split into._ http_path = '' #@param {type:"string"} #@markdown >_The file from this URL will be downloaded. It is recommended to use Google Drive URLs to get fast transfer speed. Use this format for Google Drive files: https://drive.google.com/uc?id= and paste the file ID at the end of the URL. You can find the file ID from the "Get Link" URL in Google Drive. You can enter multiple URLs here, space separated._ local_upload = False #@param {type:"boolean"} #@markdown >_If enabled, you will be prompted in the "Setup Files" section to select files to upload from your local computer. You can upload multiple files. Note: the upload speed can be quite slow (use "http_path" for better transfer speeds)._ download_option = "no_download" #@param ["no_download", "local", "google_drive"] #@markdown >_If this is set to "local", the output files will be downloaded to your computer. If set to "google_drive", they will be copied to your Google Drive account (which is significantly faster than downloading locally)._ """ Explanation: NNCP Splitter Made by Byron Knoll. GitHub repository: https://github.com/byronknoll/tensorflow-compress Description This notebook can be used to split files that have been preprocessed by NNCP. This is for compression using tensorflow-compress. The primary use-case is to get around Colab's session time limit by processing large files in smaller parts. This file splitting does not use the naive method of dividing the file into consecutive parts. Instead, it takes into account the batch size used in tensorflow-compress so that the same sequence of symbols will be used for compressing the split parts as for the original file.
Instructions In tensorflow-compress, using "preprocess_only" mode, choose "nncp" preprocessor and download the result. Upload the preprocessed file (named "preprocessed.dat") to this notebook, and download the split parts. In tensorflow-compress, compress each split part sequentially, enabling the checkpoint option. Choose "nncp-done" as the preprocessor. In tensorflow-compress, decompress each split part sequentially, enabling the checkpoint option. Choose "nncp-done" as the preprocessor. Upload the decompressed parts to this notebook to reproduce the original file. The files should be named: part.0, part.1, ..., part.N. Also upload the original NNCP dictionary file (named "dictionary.words"). Parameters End of explanation """ #@title Imports from google.colab import files from google.colab import drive import math #@title Mount Google Drive if download_option == "google_drive": drive.mount('/content/gdrive') #@title Setup Files !mkdir -p "data" if local_upload: %cd data files.upload() %cd .. if http_path: %cd data paths = http_path.split() for path in paths: !gdown $path %cd .. if mode == "join": !gdown --id 1EzVPbRkBIIbgOzvEMeM0YpibDi2R4SHD !tar -xf nncp-2019-11-16.tar.gz %cd nncp-2019-11-16/ !make preprocess %cd .. """ Explanation: Setup End of explanation """ #@title Split/Join if mode == "split": input_path = "data/preprocessed.dat" orig = open(input_path, 'rb').read() int_list = [] for i in range(0, len(orig), 2): int_list.append(orig[i] * 256 + orig[i+1]) file_len = len(int_list) split = math.ceil(file_len / batch_size) part_split = math.ceil(file_len / (num_parts * batch_size)) pos = 0 for i in range(num_parts): output = [] for j in range(batch_size): for k in range(part_split): if pos + k >= split: break index = pos + (j*split) + k if index >= file_len: break output.append(int_list[index]) pos += part_split with open(("data/part." 
+ str(i)), "wb") as out: for j in range(len(output)): out.write(bytes(((output[j] // 256),))) out.write(bytes(((output[j] % 256),))) if mode == "join": file_len = 0 for i in range(num_parts): part = open("data/part." + str(i), 'rb').read() file_len += len(part) / 2 split = math.ceil(file_len / batch_size) part_split = math.ceil(file_len / (num_parts * batch_size)) int_list = [0] * math.floor(file_len) pos = 0 for i in range(num_parts): part = open("data/part." + str(i), 'rb').read() part_list = [] for j in range(0, len(part), 2): part_list.append(part[j] * 256 + part[j+1]) index2 = 0 for j in range(batch_size): for k in range(part_split): if pos + k >= split: break index = pos + (j*split) + k if index >= file_len: break int_list[index] = part_list[index2] index2 += 1 pos += part_split with open("data/output.dat", "wb") as out: for i in range(len(int_list)): out.write(bytes(((int_list[i] // 256),))) out.write(bytes(((int_list[i] % 256),))) !./nncp-2019-11-16/preprocess d data/dictionary.words ./data/output.dat ./data/final.dat #@title File Sizes !ls -l data #@title MD5 !md5sum data/* #@title Download Result def download(path): """Downloads the file at the specified path.""" if download_option == 'local': files.download(path) elif download_option == 'google_drive': !cp -f $path /content/gdrive/My\ Drive if mode == "split": for i in range(num_parts): download("data/part." + str(i)) if mode == "join": download("data/final.dat") """ Explanation: Run End of explanation """
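To see why the join can invert the split exactly, here is a toy round-trip of the same index interleaving on a 20-symbol file — a plain-Python sketch mirroring the index arithmetic of the split cell above, not the notebook's actual code path. Every original index should land in exactly one part:

```python
import math

def split_indices(n, batch_size, num_parts):
    # Mirrors the split cell's index arithmetic on a toy scale
    split = math.ceil(n / batch_size)
    part_split = math.ceil(n / (num_parts * batch_size))
    parts, pos = [], 0
    for _ in range(num_parts):
        part = []
        for j in range(batch_size):
            for k in range(part_split):
                if pos + k >= split:
                    break
                idx = pos + j * split + k
                if idx < n:
                    part.append(idx)
        pos += part_split
        parts.append(part)
    return parts

parts = split_indices(20, batch_size=4, num_parts=2)
```

Sorting the union of the parts recovers 0..19 with no duplicates, which is the property the join relies on.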
morganics/bayesianpy
examples/notebook/iris_cluster_count.ipynb
apache-2.0
import pandas as pd import logging import sys sys.path.append("../../../bayesianpy") import bayesianpy import matplotlib.pyplot as plt import os logger = logging.getLogger() bayesianpy.jni.attach(logger) db_folder = bayesianpy.utils.get_path_to_parent_dir('') iris = pd.read_csv(os.path.join(db_folder, "data/iris.csv"), index_col=False) analysis = bayesianpy.analysis.LogLikelihoodAnalysis(logger) # create templates with latent states from 1 -> 19 results = analysis.analyse(iris, [bayesianpy.template.MixtureNaiveBayes(logger, discrete=iris[['iris_class']], continuous=iris[['sepal_length', 'petal_width', 'petal_length', 'sepal_width']], latent_states=i) for i in range(1, 20)], use_model_names=False, names=list(range(1,20))) """ Explanation: Automatically selecting the number of clusters in a latent variable This is just a quick demo to show how to automatically decide upon the number of clusters in a latent variable, where the number of clusters is unknown, using the Iris dataset. The process is iterative, building and training the model multiple times, and then querying the trained model to extract the log likelihood. There are lots of other scoring functions: regression or classification accuracy, Bayesian Information Criterion (BIC; which penalises the complexity of the model against the accuracy), among others. One thing to watch is that the log likelihood cannot be used as a measure where the number of variables in the model is being adjusted at the same time, as the score will also change (however it's perfect when only changing the number of states). It's easy to craft the iterative code, but there is a utility function in analysis.py to do it automatically and which uses cross validation. End of explanation """ %matplotlib inline plt.figure() plt.plot(results.columns.tolist(), results.mean().tolist(), 'bo') plt.show() """ Explanation: And finally plot the results: End of explanation """
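For reference, the BIC mentioned above trades parameter count against fit, with lower being better. A minimal standalone sketch of the formula — not part of bayesianpy's API:

```python
import math

def bic(log_likelihood, num_params, num_samples):
    # BIC = k * ln(n) - 2 * ln(L): higher likelihood lowers it, more parameters raise it
    return num_params * math.log(num_samples) - 2.0 * log_likelihood

score = bic(log_likelihood=-100.0, num_params=5, num_samples=150)
```

Adding a parameter at the same likelihood always worsens (raises) the score, which is the complexity penalty at work.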
Lasagne/Recipes
examples/spatial_transformer_network.ipynb
mit
!wget -N https://s3.amazonaws.com/lasagne/recipes/datasets/mnist_cluttered_60x60_6distortions.npz def load_data(): data = np.load(mnist_cluttered) X_train, y_train = data['x_train'], np.argmax(data['y_train'], axis=-1) X_valid, y_valid = data['x_valid'], np.argmax(data['y_valid'], axis=-1) X_test, y_test = data['x_test'], np.argmax(data['y_test'], axis=-1) # reshape for convolutions X_train = X_train.reshape((X_train.shape[0], 1, DIM, DIM)) X_valid = X_valid.reshape((X_valid.shape[0], 1, DIM, DIM)) X_test = X_test.reshape((X_test.shape[0], 1, DIM, DIM)) print "Train samples:", X_train.shape print "Validation samples:", X_valid.shape print "Test samples:", X_test.shape return dict( X_train=lasagne.utils.floatX(X_train), y_train=y_train.astype('int32'), X_valid=lasagne.utils.floatX(X_valid), y_valid=y_valid.astype('int32'), X_test=lasagne.utils.floatX(X_test), y_test=y_test.astype('int32'), num_examples_train=X_train.shape[0], num_examples_valid=X_valid.shape[0], num_examples_test=X_test.shape[0], input_height=X_train.shape[2], input_width=X_train.shape[3], output_dim=10,) data = load_data() plt.figure(figsize=(7,7)) plt.imshow(data['X_train'][101].reshape(DIM, DIM), cmap='gray', interpolation='none') plt.title('Cluttered MNIST', fontsize=20) plt.axis('off') plt.show() """ Explanation: Spatial Transformer Network We use lasagne to classify cluttered MNIST digits using the spatial transformer network introduced in [1]. The spatial Transformer Network applies a learned affine transformation to its input. Load data We test the spatial transformer network using cluttered MNIST data. 
Download the data (41 mb) with: End of explanation """ def build_model(input_width, input_height, output_dim, batch_size=BATCH_SIZE): ini = lasagne.init.HeUniform() l_in = lasagne.layers.InputLayer(shape=(None, 1, input_width, input_height),) # Localization network b = np.zeros((2, 3), dtype=theano.config.floatX) b[0, 0] = 1 b[1, 1] = 1 b = b.flatten() loc_l1 = pool(l_in, pool_size=(2, 2)) loc_l2 = conv( loc_l1, num_filters=20, filter_size=(5, 5), W=ini) loc_l3 = pool(loc_l2, pool_size=(2, 2)) loc_l4 = conv(loc_l3, num_filters=20, filter_size=(5, 5), W=ini) loc_l5 = lasagne.layers.DenseLayer( loc_l4, num_units=50, W=lasagne.init.HeUniform('relu')) loc_out = lasagne.layers.DenseLayer( loc_l5, num_units=6, b=b, W=lasagne.init.Constant(0.0), nonlinearity=lasagne.nonlinearities.identity) # Transformer network l_trans1 = lasagne.layers.TransformerLayer(l_in, loc_out, downsample_factor=3.0) print "Transformer network output shape: ", l_trans1.output_shape # Classification network class_l1 = conv( l_trans1, num_filters=32, filter_size=(3, 3), nonlinearity=lasagne.nonlinearities.rectify, W=ini, ) class_l2 = pool(class_l1, pool_size=(2, 2)) class_l3 = conv( class_l2, num_filters=32, filter_size=(3, 3), nonlinearity=lasagne.nonlinearities.rectify, W=ini, ) class_l4 = pool(class_l3, pool_size=(2, 2)) class_l5 = lasagne.layers.DenseLayer( class_l4, num_units=256, nonlinearity=lasagne.nonlinearities.rectify, W=ini, ) l_out = lasagne.layers.DenseLayer( class_l5, num_units=output_dim, nonlinearity=lasagne.nonlinearities.softmax, W=ini, ) return l_out, l_trans1 model, l_transform = build_model(DIM, DIM, NUM_CLASSES) model_params = lasagne.layers.get_all_params(model, trainable=True) X = T.tensor4() y = T.ivector() # training output output_train = lasagne.layers.get_output(model, X, deterministic=False) # evaluation output. 
Also includes output of transform for plotting output_eval, transform_eval = lasagne.layers.get_output([model, l_transform], X, deterministic=True) sh_lr = theano.shared(lasagne.utils.floatX(LEARNING_RATE)) cost = T.mean(T.nnet.categorical_crossentropy(output_train, y)) updates = lasagne.updates.adam(cost, model_params, learning_rate=sh_lr) train = theano.function([X, y], [cost, output_train], updates=updates) eval = theano.function([X], [output_eval, transform_eval]) def train_epoch(X, y): num_samples = X.shape[0] num_batches = int(np.ceil(num_samples / float(BATCH_SIZE))) costs = [] correct = 0 for i in range(num_batches): idx = range(i*BATCH_SIZE, np.minimum((i+1)*BATCH_SIZE, num_samples)) X_batch = X[idx] y_batch = y[idx] cost_batch, output_train = train(X_batch, y_batch) costs += [cost_batch] preds = np.argmax(output_train, axis=-1) correct += np.sum(y_batch == preds) return np.mean(costs), correct / float(num_samples) def eval_epoch(X, y): output_eval, transform_eval = eval(X) preds = np.argmax(output_eval, axis=-1) acc = np.mean(preds == y) return acc, transform_eval """ Explanation: Building the model We use a model where the localization network is a two layer convolution network which operates directly on the image input. The output from the localization network is a 6 dimensional vector specifying the parameters in the affine transformation. The localization network feeds into the transformer layer which applies the transformation to the image input. In our setup the transformer layer downsamples the input by a factor of 3. Finally, a 2 layer convolution network and 2 fully connected layers calculate the output probabilities.
The model Input -> localization_network -> TransformerLayer -> output_network -> predictions | | >--------------------------------^ End of explanation """ valid_accs, train_accs, test_accs = [], [], [] try: for n in range(NUM_EPOCHS): train_cost, train_acc = train_epoch(data['X_train'], data['y_train']) valid_acc, valid_transform = eval_epoch(data['X_valid'], data['y_valid']) test_acc, test_transform = eval_epoch(data['X_test'], data['y_test']) valid_accs += [valid_acc] test_accs += [test_acc] train_accs += [train_acc] if (n+1) % 20 == 0: new_lr = sh_lr.get_value() * 0.7 print "New LR:", new_lr sh_lr.set_value(lasagne.utils.floatX(new_lr)) print "Epoch {0}: Train cost {1}, Train acc {2}, val acc {3}, test acc {4}".format( n, train_cost, train_acc, valid_acc, test_acc) except KeyboardInterrupt: pass """ Explanation: Training End of explanation """
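A small aside on the localization network's bias init in build_model: setting b[0,0] = b[1,1] = 1 (rest zero) makes the initial 2×3 affine θ the identity, so the transformer starts by sampling the input unwarped. A numpy illustration of how such a θ maps an output coordinate to a source sampling coordinate (toy values, separate from the Lasagne code):

```python
import numpy as np

theta = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])   # identity affine, matching the b init above

# A point in normalised output coordinates, homogeneous form [x, y, 1]
target = np.array([0.5, -0.25, 1.0])
source = theta @ target               # where to sample from in the input image
```

With the identity θ the source coordinate equals the target coordinate; training then moves θ away from the identity to crop and rescale the digit.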
metpy/MetPy
v1.0/_downloads/4211928bfede6cdca0afdb2d06bea2d1/Find_Natural_Neighbors_Verification.ipynb
bsd-3-clause
import matplotlib.pyplot as plt import numpy as np from scipy.spatial import Delaunay from metpy.interpolate.geometry import circumcircle_radius, find_natural_neighbors # Create test observations, test points, and plot the triangulation and points. gx, gy = np.meshgrid(np.arange(0, 20, 4), np.arange(0, 20, 4)) pts = np.vstack([gx.ravel(), gy.ravel()]).T tri = Delaunay(pts) fig, ax = plt.subplots(figsize=(15, 10)) for i, inds in enumerate(tri.simplices): pts = tri.points[inds] x, y = np.vstack((pts, pts[0])).T ax.plot(x, y) ax.annotate(i, xy=(np.mean(x), np.mean(y))) test_points = np.array([[2, 2], [5, 10], [12, 13.4], [12, 8], [20, 20]]) for i, (x, y) in enumerate(test_points): ax.plot(x, y, 'k.', markersize=6) ax.annotate('test ' + str(i), xy=(x, y)) """ Explanation: Find Natural Neighbors Verification Finding natural neighbors in a triangulation A triangle is a natural neighbor of a point if that point is within a circumscribed circle ("circumcircle") containing the triangle. End of explanation """ neighbors, circumcenters = find_natural_neighbors(tri, test_points) print(neighbors) """ Explanation: Since finding natural neighbors already calculates circumcenters, return that information for later use. The key of the neighbors dictionary refers to the test point index, and the list of integers are the triangles that are natural neighbors of that particular test point. Since point 4 is far away from the triangulation, it has no natural neighbors. Point 3 is at the confluence of several triangles so it has many natural neighbors. 
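The circumcircle test above hinges on each triangle's circumradius (MetPy's circumcircle_radius). As a standalone sanity check — my own sketch, not MetPy's implementation — a 3-4-5 right triangle's circumcircle has the hypotenuse as its diameter, so R should be 2.5:

```python
import math

def circumradius(a, b, c):
    # R = (|AB| |BC| |CA|) / (4 * area), with the area from Heron's formula
    ab, bc, ca = math.dist(a, b), math.dist(b, c), math.dist(c, a)
    s = (ab + bc + ca) / 2
    area = math.sqrt(s * (s - ab) * (s - bc) * (s - ca))
    return ab * bc * ca / (4 * area)

R = circumradius((0, 0), (3, 0), (0, 4))   # hypotenuse 5, so R should be 2.5
```

(math.dist requires Python 3.8+.)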
End of explanation """ fig, ax = plt.subplots(figsize=(15, 10)) for i, inds in enumerate(tri.simplices): pts = tri.points[inds] x, y = np.vstack((pts, pts[0])).T ax.plot(x, y) ax.annotate(i, xy=(np.mean(x), np.mean(y))) # Using circumcenters and calculated circumradii, plot the circumcircles for idx, cc in enumerate(circumcenters): ax.plot(cc[0], cc[1], 'k.', markersize=5) circ = plt.Circle(cc, circumcircle_radius(*tri.points[tri.simplices[idx]]), edgecolor='k', facecolor='none', transform=fig.axes[0].transData) ax.add_artist(circ) ax.set_aspect('equal', 'datalim') plt.show() """ Explanation: We can plot all of the triangles as well as the circles representing the circumcircles End of explanation """
BinRoot/TensorFlow-Book
ch08_rl/Concept01_rl.ipynb
mit
%matplotlib inline from yahoo_finance import Share from matplotlib import pyplot as plt import numpy as np import random import tensorflow as tf import random """ Explanation: Ch 08: Concept 01 Reinforcement learning The states are previous history of stock prices, current budget, and current number of shares of a stock. The actions are buy, sell, or hold (i.e. do nothing). The stock market data comes from the Yahoo Finance library, pip install yahoo-finance. End of explanation """ class DecisionPolicy: def select_action(self, current_state, step): pass def update_q(self, state, action, reward, next_state): pass """ Explanation: Define an abstract class called DecisionPolicy: End of explanation """ class RandomDecisionPolicy(DecisionPolicy): def __init__(self, actions): self.actions = actions def select_action(self, current_state, step): action = random.choice(self.actions) return action """ Explanation: Here's one way we could implement the decision policy, called a random decision policy: End of explanation """ class QLearningDecisionPolicy(DecisionPolicy): def __init__(self, actions, input_dim): self.epsilon = 0.95 self.gamma = 0.3 self.actions = actions output_dim = len(actions) h1_dim = 20 self.x = tf.placeholder(tf.float32, [None, input_dim]) self.y = tf.placeholder(tf.float32, [output_dim]) W1 = tf.Variable(tf.random_normal([input_dim, h1_dim])) b1 = tf.Variable(tf.constant(0.1, shape=[h1_dim])) h1 = tf.nn.relu(tf.matmul(self.x, W1) + b1) W2 = tf.Variable(tf.random_normal([h1_dim, output_dim])) b2 = tf.Variable(tf.constant(0.1, shape=[output_dim])) self.q = tf.nn.relu(tf.matmul(h1, W2) + b2) loss = tf.square(self.y - self.q) self.train_op = tf.train.AdamOptimizer(0.001).minimize(loss) self.sess = tf.Session() self.sess.run(tf.global_variables_initializer()) def select_action(self, current_state, step): threshold = min(self.epsilon, step / 1000.) 
if random.random() < threshold: # Exploit best option with probability epsilon action_q_vals = self.sess.run(self.q, feed_dict={self.x: current_state}) action_idx = np.argmax(action_q_vals) # TODO: replace w/ tensorflow's argmax action = self.actions[action_idx] else: # Explore random option with probability 1 - epsilon action = self.actions[random.randint(0, len(self.actions) - 1)] return action def update_q(self, state, action, reward, next_state): action_q_vals = self.sess.run(self.q, feed_dict={self.x: state}) next_action_q_vals = self.sess.run(self.q, feed_dict={self.x: next_state}) next_action_idx = np.argmax(next_action_q_vals) current_action_idx = self.actions.index(action) action_q_vals[0, current_action_idx] = reward + self.gamma * next_action_q_vals[0, next_action_idx] action_q_vals = np.squeeze(np.asarray(action_q_vals)) self.sess.run(self.train_op, feed_dict={self.x: state, self.y: action_q_vals}) """ Explanation: That's a good baseline. Now let's use a smarter approach using a neural network: End of explanation """ def run_simulation(policy, initial_budget, initial_num_stocks, prices, hist, debug=False): budget = initial_budget num_stocks = initial_num_stocks share_value = 0 transitions = list() for i in range(len(prices) - hist - 1): if i % 1000 == 0: print('progress {:.2f}%'.format(float(100*i) / (len(prices) - hist - 1))) current_state = np.asmatrix(np.hstack((prices[i:i+hist], budget, num_stocks))) current_portfolio = budget + num_stocks * share_value action = policy.select_action(current_state, i) share_value = float(prices[i + hist]) if action == 'Buy' and budget >= share_value: budget -= share_value num_stocks += 1 elif action == 'Sell' and num_stocks > 0: budget += share_value num_stocks -= 1 else: action = 'Hold' new_portfolio = budget + num_stocks * share_value reward = new_portfolio - current_portfolio next_state = np.asmatrix(np.hstack((prices[i+1:i+hist+1], budget, num_stocks))) transitions.append((current_state, action, reward, 
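The heart of update_q above is the one-step Q-learning target, reward + γ·maxₐ′ Q(s′, a′), which the network then regresses toward. Stripped of TensorFlow, the target for a single toy transition looks like this (numbers invented for illustration):

```python
gamma = 0.3   # same discount factor as the policy above
q_next = {'Buy': 2.0, 'Sell': 5.0, 'Hold': 0.0}   # toy Q-values for the next state
reward = 1.0

target = reward + gamma * max(q_next.values())    # 1.0 + 0.3 * 5.0
```

update_q writes this target into the chosen action's slot of the Q-vector and leaves the other actions' values untouched, so only the taken action is trained on each step.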
next_state)) policy.update_q(current_state, action, reward, next_state) portfolio = budget + num_stocks * share_value if debug: print('${}\t{} shares'.format(budget, num_stocks)) return portfolio """ Explanation: Define a function to run a simulation of buying and selling stocks from a market: End of explanation """ def run_simulations(policy, budget, num_stocks, prices, hist): num_tries = 5 final_portfolios = list() for i in range(num_tries): print('Running simulation {}...'.format(i + 1)) final_portfolio = run_simulation(policy, budget, num_stocks, prices, hist) final_portfolios.append(final_portfolio) print('Final portfolio: ${}'.format(final_portfolio)) plt.title('Final Portfolio Value') plt.xlabel('Simulation #') plt.ylabel('Net worth') plt.plot(final_portfolios) plt.show() """ Explanation: We want to run simulations multiple times and average out the performances: End of explanation """ def get_prices(share_symbol, start_date, end_date, cache_filename='stock_prices.npy'): try: stock_prices = np.load(cache_filename) except IOError: share = Share(share_symbol) stock_hist = share.get_historical(start_date, end_date) stock_prices = [stock_price['Open'] for stock_price in stock_hist] np.save(cache_filename, stock_prices) return np.asarray(stock_prices, dtype=float) """ Explanation: Call the following function to use the Yahoo Finance library and obtain useful stock market data. End of explanation """ def plot_prices(prices): plt.title('Opening stock prices') plt.xlabel('day') plt.ylabel('price ($)') plt.plot(prices) plt.savefig('prices.png') plt.show() """ Explanation: Who wants to deal with stock market data without looking at pretty plots? No one.
So let's define a helper that plots the prices: End of explanation """ if __name__ == '__main__': prices = get_prices('MSFT', '1992-07-22', '2016-07-22') plot_prices(prices) actions = ['Buy', 'Sell', 'Hold'] hist = 3 # policy = RandomDecisionPolicy(actions) policy = QLearningDecisionPolicy(actions, hist + 2) budget = 100000.0 num_stocks = 0 run_simulations(policy, budget, num_stocks, prices, hist) """ Explanation: Train a reinforcement learning policy: End of explanation """
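The target that `QLearningDecisionPolicy.update_q` regresses toward — the immediate reward plus the discounted value of the best next action — can be illustrated with a tabular sketch. The three states, two actions, and transitions below are made up purely for illustration; they are not the book's code.

```python
gamma = 0.3  # same discount factor as the policy above
q_table = [[0.0, 0.0] for _ in range(3)]  # hypothetical 3 states x 2 actions

def update_q(state, action, reward, next_state):
    # Q-learning target: immediate reward + discounted best next-state value.
    q_table[state][action] = reward + gamma * max(q_table[next_state])

update_q(0, 1, 10.0, 2)  # a rewarding transition from state 0 to state 2
update_q(2, 0, 0.0, 0)   # a neutral transition that bootstraps from state 0's value
print(q_table[0][1])  # 10.0
```

In the notebook the table is replaced by a neural network and the assignment becomes a regression step, but the target being fitted is the same `reward + gamma * max(...)` quantity.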
tensorflow/docs-l10n
site/ko/tutorials/reinforcement_learning/actor_critic.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 The TensorFlow Authors. End of explanation """ !pip install gym %%bash # Install additional packages for visualization sudo apt-get install -y xvfb python-opengl > /dev/null 2>&1 pip install pyvirtualdisplay > /dev/null 2>&1 pip install git+https://github.com/tensorflow/docs > /dev/null 2>&1 import collections import gym import numpy as np import statistics import tensorflow as tf import tqdm from matplotlib import pyplot as plt from tensorflow.keras import layers from typing import Any, List, Sequence, Tuple # Create the environment env = gym.make("CartPole-v0") # Set seed for experiment reproducibility seed = 42 env.seed(seed) tf.random.set_seed(seed) np.random.seed(seed) # Small epsilon value for stabilizing division operations eps = np.finfo(np.float32).eps.item() """ Explanation: Playing CartPole with the Actor-Critic method <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/tutorials/reinforcement_learning/actor_critic"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/reinforcement_learning/actor_critic.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank"
href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/reinforcement_learning/actor_critic.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/reinforcement_learning/actor_critic.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td> </table> This tutorial shows how to implement the Actor-Critic method with TensorFlow to train an agent in the Open AI Gym CartPole-v0 environment. The reader is assumed to have some familiarity with policy gradient methods of reinforcement learning. Actor-Critic methods Actor-Critic methods are temporal difference (TD) learning methods that represent the policy function independently of the value function. A policy function (or policy) returns a probability distribution over the actions the agent can take given a state. A value function determines the expected return for an agent that starts at a given state and acts according to a particular policy forever after. In the Actor-Critic method, the policy is referred to as the actor, which proposes a set of possible actions given a state, and the estimated value function is referred to as the critic, which evaluates the actions taken by the actor according to the given policy. In this tutorial, both the actor and the critic are represented with a single neural network that has two outputs. CartPole-v0 In the CartPole-v0 environment, a pole is attached to a cart moving along a frictionless track. The pole starts upright, and the goal of the agent is to prevent it from falling over by applying a force of -1 or +1 to the cart. A reward of +1 is given for every time step the pole remains upright. An episode ends when (1) the pole tilts more than 15 degrees from vertical or (2) the cart moves more than 2.4 units from the center. <center> <pre data-md-type="custom_pre">&lt;figure&gt; &lt;image src="images/cartpole-v0.gif"&gt; &lt;figcaption&gt;Trained Actor-Critic model in the Cartpole-v0 environment&lt;/figcaption&gt; &lt;/image&gt;&lt;/figure&gt;</pre> </center> The problem is considered "solved" when the average total reward per episode reaches 195 over 100 consecutive trials. Setup Import the necessary packages and configure global settings.
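One line of the setup worth a closer look is the tiny eps: the returns are later divided by their standard deviation, which can be near zero, and adding machine epsilon keeps that division finite. A minimal sketch (the eps literal below is the approximate float32 machine epsilon, written out so the snippet stands alone):

```python
import math

eps = 1.1920929e-07  # approximate value of np.finfo(np.float32).eps used above

std = 0.0  # a degenerate batch where every return is identical
standardized = 5.0 / (std + eps)  # stays finite instead of dividing by zero
print(math.isfinite(standardized))  # True
```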
End of explanation """ class ActorCritic(tf.keras.Model): """Combined actor-critic network.""" def __init__( self, num_actions: int, num_hidden_units: int): """Initialize.""" super().__init__() self.common = layers.Dense(num_hidden_units, activation="relu") self.actor = layers.Dense(num_actions) self.critic = layers.Dense(1) def call(self, inputs: tf.Tensor) -> Tuple[tf.Tensor, tf.Tensor]: x = self.common(inputs) return self.actor(x), self.critic(x) num_actions = env.action_space.n # 2 num_hidden_units = 128 model = ActorCritic(num_actions, num_hidden_units) """ Explanation: Model The actor and the critic are modeled with one neural network that generates the action probabilities and the critic value, respectively. This tutorial uses model subclassing to define the model. During the forward pass, the model takes the state as input and outputs both the action probabilities and the critic value $V$, which models the state-dependent value function. The goal is to train a model that chooses actions based on a policy $\pi$ that maximizes the expected return. For CartPole-v0, there are four values that represent the state: the cart position, cart velocity, pole angle, and pole velocity. The agent can take two actions, pushing the cart left (0) or right (1), respectively. Refer to OpenAI Gym's CartPole-v0 wiki page for more information. End of explanation """ # Wrap OpenAI Gym's `env.step` call as an operation in a TensorFlow function. # This would allow it to be included in a callable TensorFlow graph.
def env_step(action: np.ndarray) -> Tuple[np.ndarray, np.ndarray, np.ndarray]: """Returns state, reward and done flag given an action.""" state, reward, done, _ = env.step(action) return (state.astype(np.float32), np.array(reward, np.int32), np.array(done, np.int32)) def tf_env_step(action: tf.Tensor) -> List[tf.Tensor]: return tf.numpy_function(env_step, [action], [tf.float32, tf.int32, tf.int32]) def run_episode( initial_state: tf.Tensor, model: tf.keras.Model, max_steps: int) -> List[tf.Tensor]: """Runs a single episode to collect training data.""" action_probs = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True) values = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True) rewards = tf.TensorArray(dtype=tf.int32, size=0, dynamic_size=True) initial_state_shape = initial_state.shape state = initial_state for t in tf.range(max_steps): # Convert state into a batched tensor (batch size = 1) state = tf.expand_dims(state, 0) # Run the model to get action probabilities and critic value action_logits_t, value = model(state) # Sample next action from the action probability distribution action = tf.random.categorical(action_logits_t, 1)[0, 0] action_probs_t = tf.nn.softmax(action_logits_t) # Store critic values values = values.write(t, tf.squeeze(value)) # Store log probability of the action chosen action_probs = action_probs.write(t, action_probs_t[0, action]) # Apply action to the environment to get next state and reward state, reward, done = tf_env_step(action) state.set_shape(initial_state_shape) # Store reward rewards = rewards.write(t, reward) if tf.cast(done, tf.bool): break action_probs = action_probs.stack() values = values.stack() rewards = rewards.stack() return action_probs, values, rewards """ Explanation: Training To train the agent, follow these steps: Run the agent on the environment to collect training data per episode. Compute the expected return at each time step. Compute the loss for the combined Actor-Critic model. Compute the gradients and update the network parameters. Repeat 1-4 until either the success criterion or the maximum number of episodes is reached. 1.
Collecting training data As in supervised learning, you need training data in order to train the Actor-Critic model. However, to collect such data, the model has to be "run" in the environment. Training data is collected for each episode. Then, at each time step, the model's forward pass is run on the environment's state to generate the action probabilities and the critic value based on the current policy, parameterized by the model's weights. The next action is sampled from the action probabilities generated by the model, and is then applied to the environment, which produces the next state and reward. This process is implemented in the run_episode function, which uses TensorFlow operations so that it can later be compiled into a TensorFlow graph for faster training. tf.TensorArray is used to support Tensor iteration over variable-length arrays. End of explanation """ def get_expected_return( rewards: tf.Tensor, gamma: float, standardize: bool = True) -> tf.Tensor: """Compute expected returns per timestep.""" n = tf.shape(rewards)[0] returns = tf.TensorArray(dtype=tf.float32, size=n) # Start from the end of `rewards` and accumulate reward sums # into the `returns` array rewards = tf.cast(rewards[::-1], dtype=tf.float32) discounted_sum = tf.constant(0.0) discounted_sum_shape = discounted_sum.shape for i in tf.range(n): reward = rewards[i] discounted_sum = reward + gamma * discounted_sum discounted_sum.set_shape(discounted_sum_shape) returns = returns.write(i, discounted_sum) returns = returns.stack()[::-1] if standardize: returns = ((returns - tf.math.reduce_mean(returns)) / (tf.math.reduce_std(returns) + eps)) return returns """ Explanation: 2. Computing the expected return The sequence of rewards $\{r_{t}\}^{T}_{t=1}$ collected at each time step $t$ during one episode is converted into a sequence of expected returns $\{G_{t}\}^{T}_{t=1}$, in which the sum of rewards is taken from the current time step $t$ to $T$, with each reward multiplied by an exponentially decaying discount factor $\gamma$: $$G_{t} = \sum^{T}_{t'=t} \gamma^{t'-t}r_{t'}$$ Since $\gamma\in(0,1)$, rewards further out from the current time step are given less weight. Intuitively, the expected return simply says that rewards now are better than rewards later. In a mathematical sense, it ensures that the sum of the rewards converges. To stabilize training, the resulting sequence of returns is also standardized (i.e. made to have zero mean and unit standard deviation).
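Outside the TensorFlow graph, the backward accumulation performed by get_expected_return (without the standardization step) can be sketched in a few lines of plain Python:

```python
def expected_returns(rewards, gamma):
    # Walk the episode backwards, accumulating reward + gamma * (sum so far),
    # mirroring the reversed loop in get_expected_return.
    returns = [0.0] * len(rewards)
    discounted_sum = 0.0
    for i in reversed(range(len(rewards))):
        discounted_sum = rewards[i] + gamma * discounted_sum
        returns[i] = discounted_sum
    return returns

print(expected_returns([1.0, 1.0, 1.0], 0.5))  # [1.75, 1.5, 1.0]
```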
End of explanation """ huber_loss = tf.keras.losses.Huber(reduction=tf.keras.losses.Reduction.SUM) def compute_loss( action_probs: tf.Tensor, values: tf.Tensor, returns: tf.Tensor) -> tf.Tensor: """Computes the combined actor-critic loss.""" advantage = returns - values action_log_probs = tf.math.log(action_probs) actor_loss = -tf.math.reduce_sum(action_log_probs * advantage) critic_loss = huber_loss(values, returns) return actor_loss + critic_loss """ Explanation: 3. The Actor-Critic loss Since a hybrid Actor-Critic model is used, the loss function for training is a combination of the actor and critic losses, as shown below: $$L = L_{actor} + L_{critic}$$ Actor loss The actor loss is formulated as a policy gradient with the critic as a state-dependent baseline, computed with single-sample (per-episode) estimates: $$L_{actor} = -\sum^{T}_{t=1} \log\pi_{\theta}(a_{t} | s_{t})[G(s_{t}, a_{t}) - V^{\pi}_{\theta}(s_{t})]$$ where: $T$: the number of time steps per episode, which can vary per episode $s_{t}$: the state at time step $t$ $a_{t}$: the action chosen at time step $t$ given state $s$ $\pi_{\theta}$: the policy (actor) parameterized by $\theta$ $V^{\pi}_{\theta}$: the value function (critic), likewise parameterized by $\theta$ $G = G_{t}$: the expected return for a given state-action pair at time step $t$ A negative term is added to the sum, since the idea is to maximize the probabilities of actions that yield higher rewards by minimizing the combined loss. <br> Advantage The $G - V$ term in the $L_{actor}$ formula is called the advantage, which indicates how much better a given action is in a particular state than a random action selected according to the policy $\pi$ for that state. While it is possible to exclude the baseline, doing so can lead to high variance during training. The nice thing about choosing the critic $V$ as the baseline is that it is trained to be as close as possible to $G$, leading to lower variance. Moreover, without the critic, the algorithm would try to increase the probabilities of the actions taken in a particular state based on the expected return, which makes little difference if the relative probabilities between actions stay the same. For instance, suppose two actions in a given state yield the same expected return. Without the critic, the algorithm would try to raise the probability of both actions based on the objective $J$. With the critic, there is no advantage ($G - V = 0$), so no benefit is gained from increasing the actions' probabilities, and the algorithm sets the gradients to zero. <br> Critic loss Training $V$ to be as close as possible to $G$ can be set up as a regression problem with the following loss function: $$L_{critic} = L_{\delta}(G, V^{\pi}_{\theta})$$ where $L_{\delta}$ is the Huber loss, which is less sensitive to outliers in the data than squared-error loss.
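To see the robustness property concretely, here is a scalar sketch of the Huber loss with delta = 1 (Keras' default): the small residual is penalized quadratically, while the residual of 3.0 contributes linearly (2.5) instead of quadratically (4.5).

```python
def huber(residual, delta=1.0):
    # Quadratic near zero, linear in the tails -> outliers are penalized less.
    r = abs(residual)
    return 0.5 * r * r if r <= delta else delta * (r - 0.5 * delta)

print(huber(0.5), huber(3.0))  # 0.125 2.5
```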
End of explanation """ optimizer = tf.keras.optimizers.Adam(learning_rate=0.01) @tf.function def train_step( initial_state: tf.Tensor, model: tf.keras.Model, optimizer: tf.keras.optimizers.Optimizer, gamma: float, max_steps_per_episode: int) -> tf.Tensor: """Runs a model training step.""" with tf.GradientTape() as tape: # Run the model for one episode to collect training data action_probs, values, rewards = run_episode( initial_state, model, max_steps_per_episode) # Calculate expected returns returns = get_expected_return(rewards, gamma) # Convert training data to appropriate TF tensor shapes action_probs, values, returns = [ tf.expand_dims(x, 1) for x in [action_probs, values, returns]] # Calculating loss values to update our network loss = compute_loss(action_probs, values, returns) # Compute the gradients from the loss grads = tape.gradient(loss, model.trainable_variables) # Apply the gradients to the model's parameters optimizer.apply_gradients(zip(grads, model.trainable_variables)) episode_reward = tf.math.reduce_sum(rewards) return episode_reward """ Explanation: 4. Defining the training step to update the parameters All of the steps above are combined into a training step that is run every episode. All steps leading up to the loss function are executed inside a tf.GradientTape context to enable automatic differentiation. This tutorial uses the Adam optimizer to apply the gradients to the model parameters. The sum of the undiscounted rewards, episode_reward, is also computed in this step. This value is used later to evaluate whether the success criterion has been met. The tf.function context is applied to the train_step function so that it can be compiled into a callable TensorFlow graph, which can give roughly a 10x speedup in training.
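At its core, applying gradients is just a parameter update against the gradient direction; Adam layers momentum and per-parameter scaling on top of this basic rule. A sketch of the plain update (not the Adam update itself):

```python
learning_rate = 0.5
params = [1.0, -2.0]
grads = [0.5, -0.5]

# Gradient descent step: move each parameter against its gradient.
params = [p - learning_rate * g for p, g in zip(params, grads)]
print(params)  # [0.75, -1.75]
```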
End of explanation """ %%time min_episodes_criterion = 100 max_episodes = 10000 max_steps_per_episode = 1000 # Cartpole-v0 is considered solved if average reward is >= 195 over 100 # consecutive trials reward_threshold = 195 running_reward = 0 # Discount factor for future rewards gamma = 0.99 # Keep last episodes reward episodes_reward: collections.deque = collections.deque(maxlen=min_episodes_criterion) with tqdm.trange(max_episodes) as t: for i in t: initial_state = tf.constant(env.reset(), dtype=tf.float32) episode_reward = int(train_step( initial_state, model, optimizer, gamma, max_steps_per_episode)) episodes_reward.append(episode_reward) running_reward = statistics.mean(episodes_reward) t.set_description(f'Episode {i}') t.set_postfix( episode_reward=episode_reward, running_reward=running_reward) # Show average episode reward every 10 episodes if i % 10 == 0: pass # print(f'Episode {i}: average reward: {avg_reward}') if running_reward > reward_threshold and i >= min_episodes_criterion: break print(f'\nSolved at episode {i}: average reward: {running_reward:.2f}!') """ Explanation: 5. Running the training loop Training is executed by running the training step until either the success criterion or the maximum number of episodes is reached. A running record of episode rewards is kept in a queue. Once the queue holds 100 trials, the oldest reward is removed at the left (tail) end of the queue and the newest one is added at the head (right). A running sum of the rewards is also maintained for computational efficiency. Depending on your runtime, training can finish in less than a minute.
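The sliding window of recent episode rewards is just a deque with maxlen; with a window of 5 instead of 100 for illustration:

```python
import collections
import statistics

episodes_reward = collections.deque(maxlen=5)
for r in [10, 20, 30, 40, 50, 60]:
    episodes_reward.append(r)  # once full, the oldest entry is dropped automatically

print(list(episodes_reward))             # [20, 30, 40, 50, 60]
print(statistics.mean(episodes_reward))  # 40
```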
End of explanation """ # Render an episode and save as a GIF file from IPython import display as ipythondisplay from PIL import Image from pyvirtualdisplay import Display display = Display(visible=0, size=(400, 300)) display.start() def render_episode(env: gym.Env, model: tf.keras.Model, max_steps: int): screen = env.render(mode='rgb_array') im = Image.fromarray(screen) images = [im] state = tf.constant(env.reset(), dtype=tf.float32) for i in range(1, max_steps + 1): state = tf.expand_dims(state, 0) action_probs, _ = model(state) action = np.argmax(np.squeeze(action_probs)) state, _, done, _ = env.step(action) state = tf.constant(state, dtype=tf.float32) # Render screen every 10 steps if i % 10 == 0: screen = env.render(mode='rgb_array') images.append(Image.fromarray(screen)) if done: break return images # Save GIF image images = render_episode(env, model, max_steps_per_episode) image_file = 'cartpole-v0.gif' # loop=0: loop forever, duration=1: play each frame for 1ms images[0].save( image_file, save_all=True, append_images=images[1:], loop=0, duration=1) import tensorflow_docs.vis.embed as embed embed.embed_file(image_file) """ Explanation: Visualization After training, it is useful to visualize how the model performs in the environment. You can run the cell below to generate a GIF animation of one episode run of the model. Note that additional packages have to be installed for OpenAI Gym to render the environment's images correctly in Colab. End of explanation """
iktakahiro/ipython-notebook-sample
pymook/pymook_reading_20150723.ipynb
mit
import pandas as pd import numpy as np """ Explanation: About this notebook This was used as supplementary material for a talk at the "Python Engineer Training Book" reading group 03 - connpass, held on Thursday, July 23, 2015. Author: Takahiro Ikeuchi - @iktakahiro End of explanation """ # Load the two datasets: the product data and the purchase log. master = pd.read_csv('./data/master.csv') log = pd.read_csv('./data/log.csv') # Check the contents of master master # Check the contents of log log """ Explanation: Joining data (JOIN) The training book did not show an example that combines multiple data sources. Here we explain joining data (JOIN). End of explanation """ # Join on the id column pd.merge(log, master, left_on='id', right_on='id') """ Explanation: Let's join the data. End of explanation """ def f(df): return df master.pipe(f) def discount(df): """ Apply a discount when the product name is ham """ df2 = df.copy() df2.ix[df2.name == 'ham', 'price'] = df2.price - 30 return df2 def tax_in(df, col): """ Add a column with the consumption tax applied """ df['tax_in'] = df[col] * 1.08 return df ( master.pipe(discount) .pipe(tax_in, col='price') ) """ Explanation: About pipe() Pandas 0.16.2 added a method called pipe(). Here we explain pipe(). End of explanation """
mne-tools/mne-tools.github.io
0.16/_downloads/plot_sensor_noise_level.ipynb
bsd-3-clause
# Author: Eric Larson <larson.eric.d@gmail.com> # # License: BSD (3-clause) import os.path as op import mne data_path = mne.datasets.sample.data_path() raw_erm = mne.io.read_raw_fif(op.join(data_path, 'MEG', 'sample', 'ernoise_raw.fif'), preload=True) """ Explanation: Show noise levels from empty room data This shows how to use :meth:mne.io.Raw.plot_psd to examine noise levels of systems. See [1]_ for an example. References .. [1] Khan S, Cohen D (2013). Note: Magnetic noise from the inner wall of a magnetically shielded room. Review of Scientific Instruments 84:56101. https://doi.org/10.1063/1.4802845 End of explanation """ raw_erm.plot_psd(tmax=10., average=True, dB=False, xscale='log') """ Explanation: We can plot the absolute noise levels: End of explanation """
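For reference, the dB=False flag above keeps the plot on a linear power scale; with dB=True the values are shown on the conventional decibel scale for power, i.e. 10·log10 of the power. The conversion itself is one line:

```python
import math

def power_to_db(power):
    # Standard power-to-decibel conversion: a factor of 10 in power is +10 dB.
    return 10 * math.log10(power)

print(power_to_db(100.0))  # 20.0
```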
google-research/google-research
ged_tts/toy_example/toy_ged.ipynb
apache-2.0
import numpy as np from scipy.optimize import minimize import functools import matplotlib.pyplot as plt import palettable """ Explanation: Toy example to demonstrate the importance of the repulsive term in the energy distance This notebook reproduces Figure 1 from A Spectral Energy Distance for Parallel Speech Synthesis (https://arxiv.org/abs/2008.01160). In this paper we use a spectrogram-based generalization of the Energy Distance (wikipedia), which is a proper scoring rule for fitting generative models. The squared energy distance is given by $D^{2}[p|q] = 2\mathbb{E}_{\mathbf{x} \sim p, \mathbf{y} \sim q}||\mathbf{x} - \mathbf{y}||_{2} - \mathbb{E}_{\mathbf{x},\mathbf{x'} \sim p}||\mathbf{x} - \mathbf{x'}||_{2} - \mathbb{E}_{\mathbf{y},\mathbf{y'} \sim q}||\mathbf{y} - \mathbf{y'}||_{2}$. When $p$ is our data distribution and $q$ our model distribution this simplifies to a training loss given by $L[q] = 2\mathbb{E}_{\mathbf{x} \sim p, \mathbf{y} \sim q}||\mathbf{x} - \mathbf{y}||_{2} - \mathbb{E}_{\mathbf{y},\mathbf{y'} \sim q}||\mathbf{y} - \mathbf{y'}||_{2}$. The first term here attracts the model samples $\mathbf{y}$ towards the data samples $\mathbf{x}$, while the second term repels independent model samples $\mathbf{y}, \mathbf{y'}$ away from each other. In this notebook we estimate 2 simple toy models with and without using this repulsive term to demonstrate its importance. Imports End of explanation """ def loss(param, sample_from_param_fun, real_data, repulsive_term = True): """ Energy Distance loss function for training a generative model.
Inputs: param: parameters of a generative model sample_from_param_fun: function that produces a set of samples from the model for given parameters real_data: training data repulsive_term: whether to include the repulsive term in the loss or not Output: A scalar loss that can be minimized to fit our model to the data """ sample = sample_from_param_fun(param) d_real_fake = np.sqrt(np.sum(np.square(sample - real_data), axis=1)) perm = np.random.RandomState(seed=100).permutation(sample.shape[0]) sample2 = sample[perm] # we randomly match up independently generated samples d_fake_fake = np.sqrt(np.sum(np.square(sample - sample2), axis=1)) l = 2. * np.mean(d_real_fake) if repulsive_term: l -= np.mean(d_fake_fake) return l """ Explanation: This is the energy distance loss End of explanation """ n = 10000 dim = 100 def sample_from_param(param, z): mu = param[:-1] log_sigma = param[-1] sigma = np.exp(log_sigma) mu = np.reshape(mu, [1, dim]) return mu + sigma * z z_optim = np.random.normal(size=(n, dim)) sample_from_param_partial = functools.partial(sample_from_param, z=z_optim) # real data real_param = np.zeros(dim+1) real_data = sample_from_param(real_param, np.random.normal(size=(n, dim))) # with energy distance res = minimize(loss, np.zeros(dim + 1), args=(sample_from_param_partial, real_data, True), method='BFGS', tol=1e-10) sample_ged = sample_from_param_partial(res.x) # without repulsive res = minimize(loss, np.zeros(dim + 1), args=(sample_from_param_partial, real_data, False), method='BFGS', tol=1e-10) sample_naive = sample_from_param_partial(res.x) def data_to_xy(sample): sample = sample[:100] x = np.sqrt(np.mean(np.square(sample), axis=1)) y = np.mean(sample, axis=1) return (x,y) data = (data_to_xy(real_data), data_to_xy(sample_ged), data_to_xy(sample_naive)) colors = palettable.colorbrewer.qualitative.Set1_3.mpl_colors groups = ("Training data", "Energy distance", "No repulsive term") fig = plt.figure() ax = fig.add_subplot(1, 1, 1) for data, color, group in
zip(data, colors, groups): x, y = data ax.scatter(x, y, alpha=0.8, c=color, edgecolors='none', s=30, label=group) plt.legend(loc='best', fontsize=14) plt.xlabel('Sample norm', fontsize=14) plt.ylabel('Sample mean', fontsize=14) plt.show() """ Explanation: Fitting a high dimensional Gaussian using energy distance, with and without using a repulsive term We fit a high dimensional Gaussian model to training data generated from a distribution in the same model class. We show samples from the model trained by minimizing the energy distance (blue) or the more commonly used loss without repulsive term (green), and compare to samples from the training data (red). Samples from the energy distance trained model are representative of the data, and all sampled points lie close to training examples. Samples from the model trained without repulsive term are not typical of training data. End of explanation """ n = 10000 def sample_from_param(param, z, perm): params = np.split(param, 3) means = [np.reshape(p[:2], [1,2]) for p in params] sigmas = [np.exp(p[2]) for p in params] samples = [m + s*zi for m,s,zi in zip(means, sigmas, z)] samples = np.concatenate(samples, axis=0)[perm] return samples z_optim = np.split(np.random.normal(size=(n, 6)), 3, axis=1) perm_optim = np.random.permutation(3*n) sample_from_param_partial = functools.partial(sample_from_param, z=z_optim, perm=perm_optim) # real data real_param = np.array([-10., 0., 0., 10., 0., 0., 0., np.sqrt(300.), 0.]) z_real = np.split(np.random.normal(size=(n, 6)), 3, axis=1) perm_real = np.random.permutation(3*n) real_data = sample_from_param(real_param, z=z_real, perm=perm_real) # with energy distance res = minimize(loss, np.zeros(9), args=(sample_from_param_partial, real_data, True), method='BFGS', tol=1e-10) sample_ged = sample_from_param_partial(res.x) # without repulsive res = minimize(loss, np.zeros(9), args=(sample_from_param_partial, real_data, False), method='BFGS', tol=1e-10) sample_naive = 
sample_from_param_partial(res.x) def data_to_xy(sample): sample = sample[:100] x,y = np.split(sample,2,axis=1) return (x,y) data = (data_to_xy(real_data), data_to_xy(sample_ged), data_to_xy(sample_naive)) colors = palettable.colorbrewer.qualitative.Set1_3.mpl_colors groups = ("Training data", "Energy distance", "No repulsive term") fig = plt.figure() ax = fig.add_subplot(1, 1, 1) for data, color, group in zip(data, colors, groups): x, y = data ax.scatter(x, y, alpha=0.8, c=color, edgecolors='none', s=30, label=group) plt.legend(loc='best', fontsize=14) plt.xlabel('$x_1$', fontsize=14) plt.ylabel('$x_2$', fontsize=14) plt.show() """ Explanation: Fitting a mixture of 3 Gaussians in 2d We fit a mixture of 3 Gaussians in 2d to training data generated from a distribution in the same model class. We show samples from the model trained by minimizing the energy distance (blue) or the more commonly used loss without repulsive term (green), and compare to samples from the training data (red). Samples from the energy distance trained model are representative of the data, and all sampled points lie close to training examples. Samples from the model trained without repulsive term are not typical of training data. End of explanation """
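As a sanity check on the definition, the squared energy distance between two identical empirical samples is zero, and it grows as the samples separate. A minimal 1-d sketch (not the notebook's vectorized code):

```python
def energy_distance_sq(xs, ys):
    # 2*E|x - y| - E|x - x'| - E|y - y'| averaged over all pairs of 1-d samples.
    mean = lambda pairs: sum(abs(a - b) for a, b in pairs) / len(pairs)
    cross = [(x, y) for x in xs for y in ys]
    xx = [(a, b) for a in xs for b in xs]
    yy = [(a, b) for a in ys for b in ys]
    return 2 * mean(cross) - mean(xx) - mean(yy)

print(energy_distance_sq([0.0, 1.0], [0.0, 1.0]))    # 0.0
print(energy_distance_sq([0.0, 1.0], [10.0, 11.0]))  # 19.0
```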
planet-os/notebooks
api-examples/CFSv2_usage_example.ipynb
mit
%matplotlib notebook import numpy as np import pandas as pd import matplotlib.pyplot as plt from API_client.python import datahub from API_client.python.lib import dataset from API_client.python.lib import variables """ Explanation: Using a CFSv2 forecast CFSv2 is a seasonal forecast system, used for analysing past climate and also making seasonal forecasts of up to 9 months. Here we give a brief example of how to use the Planet OS API to merge 9-month forecasts started at different initial times into a single ensemble forecast. Ensemble forecasting is a traditional technique in medium-range (up to 10 days) weather forecasts, seasonal forecasts and climate modelling. By changing initial conditions or model parameters, a range of forecasts is created, which differ from each other slightly due to the chaotic nature of fluid dynamics (of which weather modelling is a subset). For weather forecasting, the ensemble is usually created by small changes in initial conditions, but for a seasonal forecast, it is much easier to just take real initial conditions every 6 hours. Here we are going to show, first, how to merge the different dates into a single plot with the help of the python pandas library, and in addition we show that even 6-hour changes in initial conditions can lead to large variability in long-range forecasts. If you have more interest in the Planet OS API, please refer to our official documentation. Please also note that the API_client python routine, used in this notebook, is still experimental and will change in the future, so take it just as guidance for using the API, and not as an official tool. End of explanation """ dh = datahub.datahub(server='api.planetos.com',version='v1') ds = dataset.dataset('ncep_cfsv2', dh, debug=False) ds.vars=variables.variables(ds.variables(), {'reftimes':ds.reftimes,'timesteps':ds.timesteps},ds) """ Explanation: The API needs a file APIKEY with your API key in the work folder. We initialize the datahub and dataset objects.
End of explanation """ for locat in ['Võru']: ds.vars.Convective_Precipitation_Rate_surface.get_values(count=1000, location=locat, reftime='2018-04-20T18:00:00', reftime_end='2018-05-02T18:00:00') ds.vars.Maximum_temperature_height_above_ground.get_values(count=1000, location=locat, reftime='2018-04-20T18:00:00', reftime_end='2018-05-02T18:00:00') ## uncomment following line to see full pandas table ## ds.vars.Convective_Precipitation_Rate_surface.values['Võru'] """ Explanation: In order for the automatic location selection to work, add your custom location to the API_client.python.lib.predef_locations file. End of explanation """ ddd = ds.vars.Convective_Precipitation_Rate_surface.values['Võru'][['reftime','time','Convective_Precipitation_Rate_surface']] dd_test=ddd.set_index('time') """ Explanation: Here we clean the table just a bit and create a time-based index. End of explanation """ reft_unique = ds.vars.Convective_Precipitation_Rate_surface.values['Võru']['reftime'].unique() nf = [] for reft in reft_unique: abc = dd_test[dd_test.reftime==reft].resample('M').sum() abc['Convective_Precipitation_Rate_surface'+'_'+reft.astype(str)] = \ abc['Convective_Precipitation_Rate_surface']*6*3600 del abc['Convective_Precipitation_Rate_surface'] nf.append(abc) nf2=pd.concat(nf,axis=1) # uncomment to see full pandas table nf2 """ Explanation: Next, we resample the data to 1-month totals. End of explanation """ fig=plt.figure(figsize=(10,8)) nf2.transpose().boxplot() plt.ylabel('Monthly precipitation mm') fig.autofmt_xdate() plt.show() """ Explanation: Finally, we visualize the monthly precipitation for each forecast in a single plot. End of explanation """
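The `*6*3600` factor above converts each 6-hourly precipitation-rate value into a per-step total before the monthly sum, assuming the rate is per second (as the notebook's conversion implies). In isolation:

```python
# Each CFSv2 step covers 6 hours = 6 * 3600 seconds, so a constant
# precipitation rate accumulates rate * 6 * 3600 over one step.
rate_per_second = 0.0001  # hypothetical rate in mm/s
step_total_mm = rate_per_second * 6 * 3600
print(round(step_total_mm, 2))  # 2.16
```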
squishbug/DataScienceProgramming
08-Machine-Learning-I/HW08/README.ipynb
cc0-1.0
# Loading python packages and APD data file (this step does not have to be included in hw6_answers.py) import pandas as pd import numpy as np df = pd.read_csv('/home/data/APD/COBRA-YTD2017.csv.gz') """ Explanation: Homework 6 Use this notebook to work on your answers and check solutions. You can then submit your functions using "hw6_submission.ipynb" or directly write your functions in a file named "hw6_answers.py". Note that "hw6_answers.py" will be the only file collected and graded for this assignment. For questions 1-3, you will use the APD dataset that we have been working with in class. For questions 4-5, you will use data from https://perso.telecom-paristech.fr/eagan/class/igr204/datasets. End of explanation """ #### play with code here ##### """ Explanation: Question 1 Write a function called "variable_helper" which takes one argument: df, which is a pandas data frame and returns: d, a dictionary where keys are the column names of df and values are one of "numeric", "categorical", "ordinal", "date/time", or "text", corresponding to the feature type of each column. End of explanation """ #### play with code here ##### """ Explanation: Sample output: In [1]: variable_helper(df[['offense_id','beat','x','y']]) Out[1]: {'beat': 'categorical', 'offense_id': 'ordinal', 'x': 'numeric', 'y': 'numeric'} Short explanation: offense_id is a number assigned to each offense. There is a natural ordering implied in the id number (based on order of occurrence). Because of this, offense_id is an ordinal feature. The beat uses a numeric label, but refers to a geographic location. There is no natural ordering, so beat is a categorical feature. The location variables (x and y) are numeric position coordinates. 
Question 2 Write a function called "get_categories" which takes one argument: df, which is a pandas data frame and returns: cat, a dictionary where keys are names of columns of df corresponding to categorical features, and values are arrays of all the unique values that the feature can take. End of explanation """ #### play with code here ##### """ Explanation: Sample output: In [1]: get_categories(df[['offense_id','beat','UC2 Literal']]) Out[1]: {'UC2 Literal': array(['AGG ASSAULT', 'AUTO THEFT', 'BURGLARY-NONRES', 'BURGLARY-RESIDENCE', 'HOMICIDE', 'LARCENY-FROM VEHICLE', 'LARCENY-NON VEHICLE', 'RAPE', 'ROBBERY-COMMERCIAL', 'ROBBERY-PEDESTRIAN', 'ROBBERY-RESIDENCE'], dtype=object), 'beat': array([101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 701, 702, 703, 704, 705, 706, 707, 708, 710])} Short explanation: UC2 Literal and beat are the only categorical variables in the data frame df[['offense_id','beat','UC2 Literal']]. Question 3 Write a function called "code_shift" which takes one argument: df, which is a pandas data frame and returns: a pandas data frame with columns "offense_id", "Shift", "ShiftID", where ShiftID is 0 if "Shift" is "Unk", 1 if "Morn", 2 if "Day", and 3 if "Eve". 
End of explanation """ %%sh ## RUN BUT DO NOT EDIT THIS CELL ## run this cell to download the cereal dataset into your current directory wget https://perso.telecom-paristech.fr/eagan/class/igr204/datasets/cereal.csv ## RUN BUT DO NOT EDIT THIS CELL # load the data, define ratingID cer = pd.read_csv('cereal.csv', skiprows=[1], delimiter=';') cer['ratingID'] = cer['rating'].apply(lambda x: 0 if x<60 else 1) # define predicted ratingID np.random.seed(12345) cer['predicted_ratingID'] = (cer['rating']+20*np.random.randn(len(cer))).apply(lambda x: 0 if x<60 else 1) """ Explanation: Sample output: In [1]: code_shift(df[:5]) Out[1]: offense_id Shift ShiftID 0 172490115 Morn 1 1 172490265 Eve 3 2 172490322 Morn 1 3 172490390 Morn 1 4 172490401 Morn 1 For the last 2 questions, you will use the cereal data file available from https://perso.telecom-paristech.fr/eagan/class/igr204/datasets. Execute the download and loading instructions below. End of explanation """ #### play with code here ##### # Hint: look up pandas "crosstab" """ Explanation: Question 4 Write a function called "rating_confusion" which takes one argument: cer, which is a pandas data frame and returns: cf, a confusion matrix where the rows correspond to predicted_ratingID and the columns correspond to ratingID. End of explanation """ #### play with code here ##### """ Explanation: Sample output: In [1]: rating_confusion(cer[:20]) Out[1]: ratingID 0 1 predicted_ratingID 0 15 0 1 3 2 Question 5 Write a function called "prediction_metrics" which takes one argument: cer, which is a pandas data frame and returns: metrics_dict, a python dictionary object where the keys are 'precision', 'recall', 'F1' and the values are the numeric values for precision, recall, and F1 score, where ratingID is the prediction target and predicted_ratingID is a model output. End of explanation """
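The metrics in Question 5 follow directly from the confusion counts of Question 4; a minimal pure-Python sketch (the function name and the toy label vectors are my own, with label 1 taken as the positive class):

```python
def prediction_metrics_sketch(y_true, y_pred):
    # Confusion counts with label 1 as the positive class.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {'precision': precision, 'recall': recall, 'F1': f1}

m = prediction_metrics_sketch([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(m)
```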
wzbozon/statsmodels
examples/notebooks/statespace_sarimax_stata.ipynb
bsd-3-clause
%matplotlib inline import numpy as np import pandas as pd from scipy.stats import norm import statsmodels.api as sm import matplotlib.pyplot as plt from datetime import datetime import requests from io import BytesIO """ Explanation: SARIMAX: Introduction This notebook replicates examples from the Stata ARIMA time series estimation and postestimation documentation. First, we replicate the four estimation examples http://www.stata.com/manuals13/tsarima.pdf: ARIMA(1,1,1) model on the U.S. Wholesale Price Index (WPI) dataset. Variation of example 1 which adds an MA(4) term to the ARIMA(1,1,1) specification to allow for an additive seasonal effect. ARIMA(2,1,0) x (1,1,0,12) model of monthly airline data. This example allows a multiplicative seasonal effect. ARMA(1,1) model with exogenous regressors; describes consumption as an autoregressive process in which the money supply is also assumed to be an explanatory variable. Second, we demonstrate postestimation capabilities to replicate http://www.stata.com/manuals13/tsarimapostestimation.pdf. The model from example 4 is used to demonstrate: One-step-ahead in-sample prediction n-step-ahead out-of-sample forecasting n-step-ahead in-sample dynamic prediction End of explanation """
The postulated data process is then: $$ \Delta y_t = c + \phi_1 \Delta y_{t-1} + \theta_1 \epsilon_{t-1} + \epsilon_{t} $$ where $c$ is the intercept of the ARMA model, $\Delta$ is the first-difference operator, and we assume $\epsilon_{t} \sim N(0, \sigma^2)$. This can be rewritten to emphasize lag polynomials as (this will be useful in example 2, below): $$ (1 - \phi_1 L ) \Delta y_t = c + (1 + \theta_1 L) \epsilon_{t} $$ where $L$ is the lag operator. Notice that one difference between the Stata output and the output below is that Stata estimates the following model: $$ (\Delta y_t - \beta_0) = \phi_1 ( \Delta y_{t-1} - \beta_0) + \theta_1 \epsilon_{t-1} + \epsilon_{t} $$ where $\beta_0$ is the mean of the process $y_t$. This model is equivalent to the one estimated in the Statsmodels SARIMAX class, but the interpretation is different. To see the equivalence, note that: $$ (\Delta y_t - \beta_0) = \phi_1 ( \Delta y_{t-1} - \beta_0) + \theta_1 \epsilon_{t-1} + \epsilon_{t} \ \Delta y_t = (1 - \phi_1) \beta_0 + \phi_1 \Delta y_{t-1} + \theta_1 \epsilon_{t-1} + \epsilon_{t} $$ so that $c = (1 - \phi_1) \beta_0$. 
End of explanation """ # Dataset data = pd.read_stata(BytesIO(wpi1)) data.index = data.t data['ln_wpi'] = np.log(data['wpi']) data['D.ln_wpi'] = data['ln_wpi'].diff() # Graph data fig, axes = plt.subplots(1, 2, figsize=(15,4)) # Levels axes[0].plot(data.index._mpl_repr(), data['wpi'], '-') axes[0].set(title='US Wholesale Price Index') # Log difference axes[1].plot(data.index._mpl_repr(), data['D.ln_wpi'], '-') axes[1].hlines(0, data.index[0], data.index[-1], 'r') axes[1].set(title='US Wholesale Price Index - difference of logs'); # Graph data fig, axes = plt.subplots(1, 2, figsize=(15,4)) fig = sm.graphics.tsa.plot_acf(data.ix[1:, 'D.ln_wpi'], lags=40, ax=axes[0]) fig = sm.graphics.tsa.plot_pacf(data.ix[1:, 'D.ln_wpi'], lags=40, ax=axes[1]) """ Explanation: Thus the maximum likelihood estimates imply that for the process above, we have: $$ \Delta y_t = 0.1050 + 0.8740 \Delta y_{t-1} - 0.4206 \epsilon_{t-1} + \epsilon_{t} $$ where $\epsilon_{t} \sim N(0, 0.5226)$. Finally, recall that $c = (1 - \phi_1) \beta_0$, and here $c = 0.1050$ and $\phi_1 = 0.8740$. To compare with the output from Stata, we could calculate the mean: $$\beta_0 = \frac{c}{1 - \phi_1} = \frac{0.1050}{1 - 0.8740} = 0.83$$ Note: these values are slightly different from the values in the Stata documentation because the optimizer in Statsmodels has found parameters here that yield a higher likelihood. Nonetheless, they are very close. ARIMA Example 2: Arima with additive seasonal effects This model is an extension of that from example 1. Here the data is assumed to follow the process: $$ \Delta y_t = c + \phi_1 \Delta y_{t-1} + \theta_1 \epsilon_{t-1} + \theta_4 \epsilon_{t-4} + \epsilon_{t} $$ The new part of this model is that there is allowed to be a annual seasonal effect (it is annual even though the periodicity is 4 because the dataset is quarterly). The second difference is that this model uses the log of the data rather than the level. 
Before estimating the dataset, graphs showing: The time series (in logs) The first difference of the time series (in logs) The autocorrelation function The partial autocorrelation function. From the first two graphs, we note that the original time series does not appear to be stationary, whereas the first-difference does. This supports either estimating an ARMA model on the first-difference of the data, or estimating an ARIMA model with 1 order of integration (recall that we are taking the latter approach). The last two graphs support the use of an ARMA(1,1,1) model. End of explanation """ # Fit the model mod = sm.tsa.statespace.SARIMAX(data['ln_wpi'], trend='c', order=(1,1,1)) res = mod.fit() print(res.summary()) """ Explanation: To understand how to specify this model in Statsmodels, first recall that from example 1 we used the following code to specify the ARIMA(1,1,1) model: python mod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,1)) The order argument is a tuple of the form (AR specification, Integration order, MA specification). The integration order must be an integer (for example, here we assumed one order of integration, so it was specified as 1. In a pure ARMA model where the underlying data is already stationary, it would be 0). For the AR specification and MA specification components, there are two possiblities. The first is to specify the maximum degree of the corresponding lag polynomial, in which case the component is an integer. 
For example, if we wanted to specify an ARIMA(1,1,4) process, we would use: python mod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,4)) and the corresponding data process would be: $$ y_t = c + \phi_1 y_{t-1} + \theta_1 \epsilon_{t-1} + \theta_2 \epsilon_{t-2} + \theta_3 \epsilon_{t-3} + \theta_4 \epsilon_{t-4} + \epsilon_{t} $$ or $$ (1 - \phi_1 L)\Delta y_t = c + (1 + \theta_1 L + \theta_2 L^2 + \theta_3 L^3 + \theta_4 L^4) \epsilon_{t} $$ When the specification parameter is given as a maximum degree of the lag polynomial, it implies that all polynomial terms up to that degree are included. Notice that this is not the model we want to use, because it would include terms for $\epsilon_{t-2}$ and $\epsilon_{t-3}$, which we don't want here. What we want is a polynomial that has terms for the 1st and 4th degrees, but leaves out the 2nd and 3rd terms. To do that, we need to provide a tuple for the specifiation parameter, where the tuple describes the lag polynomial itself. In particular, here we would want to use: python ar = 1 # this is the maximum degree specification ma = (1,0,0,1) # this is the lag polynomial specification mod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(ar,1,ma))) This gives the following form for the process of the data: $$ \Delta y_t = c + \phi_1 \Delta y_{t-1} + \theta_1 \epsilon_{t-1} + \theta_4 \epsilon_{t-4} + \epsilon_{t} \ (1 - \phi_1 L)\Delta y_t = c + (1 + \theta_1 L + \theta_4 L^4) \epsilon_{t} $$ which is what we want. 
End of explanation """ # Dataset air2 = requests.get('http://www.stata-press.com/data/r12/air2.dta').content data = pd.read_stata(BytesIO(air2)) data.index = pd.date_range(start=datetime(data.time[0], 1, 1), periods=len(data), freq='MS') data['lnair'] = np.log(data['air']) # Fit the model mod = sm.tsa.statespace.SARIMAX(data['lnair'], order=(2,1,0), seasonal_order=(1,1,0,12), simple_differencing=True) res = mod.fit() print(res.summary()) """ Explanation: ARIMA Example 3: Airline Model In the previous example, we included a seasonal effect in an additive way, meaning that we added a term allowing the process to depend on the 4th MA lag. It may be instead that we want to model a seasonal effect in a multiplicative way. We often write the model then as an ARIMA $(p,d,q) \times (P,D,Q)_s$, where the lowercast letters indicate the specification for the non-seasonal component, and the uppercase letters indicate the specification for the seasonal component; $s$ is the periodicity of the seasons (e.g. it is often 4 for quarterly data or 12 for monthly data). The data process can be written generically as: $$ \phi_p (L) \tilde \phi_P (L^s) \Delta^d \Delta_s^D y_t = A(t) + \theta_q (L) \tilde \theta_Q (L^s) \epsilon_t $$ where: $\phi_p (L)$ is the non-seasonal autoregressive lag polynomial $\tilde \phi_P (L^s)$ is the seasonal autoregressive lag polynomial $\Delta^d \Delta_s^D y_t$ is the time series, differenced $d$ times, and seasonally differenced $D$ times. $A(t)$ is the trend polynomial (including the intercept) $\theta_q (L)$ is the non-seasonal moving average lag polynomial $\tilde \theta_Q (L^s)$ is the seasonal moving average lag polynomial sometimes we rewrite this as: $$ \phi_p (L) \tilde \phi_P (L^s) y_t^* = A(t) + \theta_q (L) \tilde \theta_Q (L^s) \epsilon_t $$ where $y_t^* = \Delta^d \Delta_s^D y_t$. 
This emphasizes that just as in the simple case, after we take differences (here both non-seasonal and seasonal) to make the data stationary, the resulting model is just an ARMA model. As an example, consider the airline model ARIMA $(2,1,0) \times (1,1,0)_{12}$, with an intercept. The data process can be written in the form above as: $$ (1 - \phi_1 L - \phi_2 L^2) (1 - \tilde \phi_1 L^{12}) \Delta \Delta_{12} y_t = c + \epsilon_t $$ Here, we have: $\phi_p (L) = (1 - \phi_1 L - \phi_2 L^2)$ $\tilde \phi_P (L^s) = (1 - \phi_1 L^12)$ $d = 1, D = 1, s=12$ indicating that $y_t^*$ is derived from $y_t$ by taking first-differences and then taking 12-th differences. $A(t) = c$ is the constant trend polynomial (i.e. just an intercept) $\theta_q (L) = \tilde \theta_Q (L^s) = 1$ (i.e. there is no moving average effect) It may still be confusing to see the two lag polynomials in front of the time-series variable, but notice that we can multiply the lag polynomials together to get the following model: $$ (1 - \phi_1 L - \phi_2 L^2 - \tilde \phi_1 L^{12} + \phi_1 \tilde \phi_1 L^{13} + \phi_2 \tilde \phi_1 L^{14} ) y_t^* = c + \epsilon_t $$ which can be rewritten as: $$ y_t^ = c + \phi_1 y_{t-1}^ + \phi_2 y_{t-2}^ + \tilde \phi_1 y_{t-12}^ - \phi_1 \tilde \phi_1 y_{t-13}^ - \phi_2 \tilde \phi_1 y_{t-14}^ + \epsilon_t $$ This is similar to the additively seasonal model from example 2, but the coefficients in front of the autoregressive lags are actually combinations of the underlying seasonal and non-seasonal parameters. Specifying the model in Statsmodels is done simply by adding the seasonal_order argument, which accepts a tuple of the form (Seasonal AR specification, Seasonal Integration order, Seasonal MA, Seasonal periodicity). The seasonal AR and MA specifications, as before, can be expressed as a maximum polynomial degree or as the lag polynomial itself. Seasonal periodicity is an integer. 
For the airline model ARIMA $(2,1,0) \times (1,1,0)_{12}$ with an intercept, the command is: python mod = sm.tsa.statespace.SARIMAX(data['lnair'], order=(2,1,0), seasonal_order=(1,1,0,12)) End of explanation """ # Dataset friedman2 = requests.get('http://www.stata-press.com/data/r12/friedman2.dta').content data = pd.read_stata(BytesIO(friedman2)) data.index = data.time # Variables endog = data.ix['1959':'1981', 'consump'] exog = sm.add_constant(data.ix['1959':'1981', 'm2']) # Fit the model mod = sm.tsa.statespace.SARIMAX(endog, exog, order=(1,0,1)) res = mod.fit() print(res.summary()) """ Explanation: Notice that here we used an additional argument simple_differencing=True. This controls how the order of integration is handled in ARIMA models. If simple_differencing=True, then the time series provided as endog is literatlly differenced and an ARMA model is fit to the resulting new time series. This implies that a number of initial periods are lost to the differencing process, however it may be necessary either to compare results to other packages (e.g. Stata's arima always uses simple differencing) or if the seasonal periodicity is large. The default is simple_differencing=False, in which case the integration component is implemented as part of the state space formulation, and all of the original data can be used in estimation. ARIMA Example 4: ARMAX (Friedman) This model demonstrates the use of explanatory variables (the X part of ARMAX). 
When exogenous regressors are included, the SARIMAX module uses the concept of "regression with SARIMA errors" (see http://robjhyndman.com/hyndsight/arimax/ for details of regression with ARIMA errors versus alternative specifications), so that the model is specified as: $$ y_t = \beta_t x_t + u_t \ \phi_p (L) \tilde \phi_P (L^s) \Delta^d \Delta_s^D u_t = A(t) + \theta_q (L) \tilde \theta_Q (L^s) \epsilon_t $$ Notice that the first equation is just a linear regression, and the second equation just describes the process followed by the error component as SARIMA (as was described in example 3). One reason for this specification is that the estimated parameters have their natural interpretations. This specification nests many simpler specifications. For example, regression with AR(2) errors is: $$ y_t = \beta_t x_t + u_t \ (1 - \phi_1 L - \phi_2 L^2) u_t = A(t) + \epsilon_t $$ The model considered in this example is regression with ARMA(1,1) errors. The process is then written: $$ \text{consump}_t = \beta_0 + \beta_1 \text{m2}_t + u_t \ (1 - \phi_1 L) u_t = (1 - \theta_1 L) \epsilon_t $$ Notice that $\beta_0$ is, as described in example 1 above, not the same thing as an intercept specified by trend='c'. Whereas in the examples above we estimated the intercept of the model via the trend polynomial, here, we demonstrate how to estimate $\beta_0$ itself by adding a constant to the exogenous dataset. In the output, the $beta_0$ is called const, whereas above the intercept $c$ was called intercept in the output. 
End of explanation """ # Dataset raw = pd.read_stata(BytesIO(friedman2)) raw.index = raw.time data = raw.ix[:'1981'] # Variables endog = data.ix['1959':, 'consump'] exog = sm.add_constant(data.ix['1959':, 'm2']) nobs = endog.shape[0] # Fit the model mod = sm.tsa.statespace.SARIMAX(endog.ix[:'1978-01-01'], exog=exog.ix[:'1978-01-01'], order=(1,0,1)) fit_res = mod.fit() print(fit_res.summary()) """ Explanation: ARIMA Postestimation: Example 1 - Dynamic Forecasting Here we describe some of the post-estimation capabilities of Statsmodels' SARIMAX. First, using the model from example, we estimate the parameters using data that excludes the last few observations (this is a little artificial as an example, but it allows considering performance of out-of-sample forecasting and facilitates comparison to Stata's documentation). End of explanation """ mod = sm.tsa.statespace.SARIMAX(endog, exog=exog, order=(1,0,1)) res = mod.filter(fit_res.params) """ Explanation: Next, we want to get results for the full dataset but using the estimated parameters (on a subset of the data). End of explanation """ # In-sample one-step-ahead predictions predict = res.get_prediction() predict_ci = predict.conf_int() """ Explanation: The predict command is first applied here to get in-sample predictions. We use the full_results=True argument to allow us to calculate confidence intervals (the default output of predict is just the predicted values). With no other arguments, predict returns the one-step-ahead in-sample predictions for the entire sample. End of explanation """ # Dynamic predictions predict_dy = res.get_prediction(dynamic='1978-01-01') predict_dy_ci = predict_dy.conf_int() """ Explanation: We can also get dynamic predictions. One-step-ahead prediction uses the true values of the endogenous values at each step to predict the next in-sample value. 
Dynamic predictions use one-step-ahead prediction up to some point in the dataset (specified by the dynamic argument); after that, the previous predicted endogenous values are used in place of the true endogenous values for each new predicted element. The dynamic argument is specified to be an offset relative to the start argument. If start is not specified, it is assumed to be 0. Here we perform dynamic prediction starting in the first quarter of 1978. End of explanation """ # Graph fig, ax = plt.subplots(figsize=(9,4)) npre = 4 ax.set(title='Personal consumption', xlabel='Date', ylabel='Billions of dollars') # Plot data points data.ix['1977-07-01':, 'consump'].plot(ax=ax, style='o', label='Observed') # Plot predictions predict.predicted_mean.ix['1977-07-01':].plot(ax=ax, style='r--', label='One-step-ahead forecast') ci = predict_ci.ix['1977-07-01':] ax.fill_between(ci.index, ci.ix[:,0], ci.ix[:,1], color='r', alpha=0.1) predict_dy.predicted_mean.ix['1977-07-01':].plot(ax=ax, style='g', label='Dynamic forecast (1978)') ci = predict_dy_ci.ix['1977-07-01':] ax.fill_between(ci.index, ci.ix[:,0], ci.ix[:,1], color='g', alpha=0.1) legend = ax.legend(loc='lower right') """ Explanation: We can graph the one-step-ahead and dynamic predictions (and the corresponding confidence intervals) to see their relative performance. Notice that up to the point where dynamic prediction begins (1978:Q1), the two are the same. 
End of explanation """ # Prediction error # Graph fig, ax = plt.subplots(figsize=(9,4)) npre = 4 ax.set(title='Forecast error', xlabel='Date', ylabel='Forecast - Actual') # In-sample one-step-ahead predictions and 95% confidence intervals predict_error = predict.predicted_mean - endog predict_error.ix['1977-10-01':].plot(ax=ax, label='One-step-ahead forecast') ci = predict_ci.ix['1977-10-01':].copy() ci.iloc[:,0] -= endog.loc['1977-10-01':] ci.iloc[:,1] -= endog.loc['1977-10-01':] ax.fill_between(ci.index, ci.ix[:,0], ci.ix[:,1], alpha=0.1) # Dynamic predictions and 95% confidence intervals predict_dy_error = predict_dy.predicted_mean - endog predict_dy_error.ix['1977-10-01':].plot(ax=ax, style='r', label='Dynamic forecast (1978)') ci = predict_dy_ci.ix['1977-10-01':].copy() ci.iloc[:,0] -= endog.loc['1977-10-01':] ci.iloc[:,1] -= endog.loc['1977-10-01':] ax.fill_between(ci.index, ci.ix[:,0], ci.ix[:,1], color='r', alpha=0.1) legend = ax.legend(loc='lower left'); legend.get_frame().set_facecolor('w') """ Explanation: Finally, graph the prediction error. It is obvious that, as one would suspect, one-step-ahead prediction is considerably better. End of explanation """
lukauskas/bernoulli-mixture-model
notebooks/Proof of Concept (digits dataset).ipynb
gpl-3.0
import sklearn.datasets digits_dataset = sklearn.datasets.load_digits() digits = pd.DataFrame(digits_dataset.data) labels = pd.Series(digits_dataset.target, index=digits.index, name='label') THRESHOLD = np.mean(digits.values.reshape(-1)) binary_digits = digits >= THRESHOLD from sklearn.utils import shuffle binary_digits = shuffle(binary_digits, random_state=RANDOM_STATE) labels = labels.loc[binary_digits.index] K=len(labels.unique()) D=len(binary_digits.columns) CMAP = 'GnBu' def draw_digit(row, vmin=0, vmax=1, square=True, **kwargs): return sns.heatmap(row.astype(float).reshape(8, 8), square=square, vmin=vmin, vmax=vmax, cmap=CMAP, **kwargs) draw_digit(binary_digits.iloc[0]) plt.title(labels.iloc[0]) up_missing = binary_digits.iloc[:len(binary_digits)//4].copy() bottom_missing = binary_digits.iloc[len(binary_digits)//4:len(binary_digits)//2].copy() even_missing = binary_digits.iloc[len(binary_digits)//2:].copy() up_missing.iloc[:, :D//2] = None bottom_missing.iloc[:, D//2:] = None even_missing.iloc[:, np.arange(0, D, 2)] = None up_missing['dataset_id'] = 'up_missing' bottom_missing['dataset_id'] = 'bottom_missing' even_missing['dataset_id'] = 'even_missing' training_data = pd.concat((up_missing, bottom_missing, even_missing)) training_data['weight'] = 1 def label_distribution(data, labels): ans = data[['dataset_id', 'weight']].join(labels).groupby(['dataset_id', 'label']).sum()['weight'] ans /= ans.sum(level='dataset_id') return ans """ Explanation: Loading data End of explanation """ training_data_same_dataset = training_data.copy() training_data_same_dataset['dataset_id'] = 'merged' label_distribution(training_data_same_dataset, labels).unstack('label').plot(kind='bar') plt.axhline(0.1, linestyle=':') from bernoullimix.random_initialisation import random_mixture_generator import logging logging.basicConfig() logging.getLogger().setLevel(logging.DEBUG) %%time from bernoullimix.n_components_search import search_k results, mixtures = search_k(K_RANGE_TO_SEARCH, 
training_data_same_dataset, mixtures_per_k=N_MIXTURES_TO_SEARCH_FOR_EACH_K, random_state=RANDOM_STATE, prior_mixing_coefficients=2, prior_emission_probabilities=2, n_jobs=CPUS_TO_USE, eps=EPSILON, n_iter=None) results.plot(y=['BIC', 'ICL']) best = results[['BIC', 'ICL']].astype(float).idxmin() best _method = 'BIC' best_mixture = mixtures[best.loc[_method]] plt.figure(figsize=(15,15)) for j, (component, row) in enumerate(best_mixture.emission_probabilities.iterrows(), start=1): plt.subplot(4,3,j) draw_digit(row) plt.title(component) plt.suptitle('Mixture K={} (best {})'.format(best_mixture.n_components, _method)) best_mixture.mixing_coefficients.T.plot(kind='bar') _method = 'ICL' best_mixture = mixtures[best.loc[_method]] plt.figure(figsize=(15,15)) for j, (component, row) in enumerate(best_mixture.emission_probabilities.iterrows(), start=1): plt.subplot(4,3,j) draw_digit(row) plt.title(component) plt.suptitle('Mixture K={} (best {})'.format(best_mixture.n_components, _method)) """ Explanation: Part 1: All data in one dataset Lump the three datasets into one, call it 'merged' dataset End of explanation """ label_distribution(training_data, labels).unstack('label').plot(kind='bar') plt.axhline(0.1, linestyle=':') %%time from bernoullimix.n_components_search import search_k results_uc, mixtures_uc = search_k(K_RANGE_TO_SEARCH, training_data, mixtures_per_k=N_MIXTURES_TO_SEARCH_FOR_EACH_K, random_state=RANDOM_STATE, prior_mixing_coefficients=2, prior_emission_probabilities=2, n_jobs=CPUS_TO_USE, eps=EPSILON, n_iter=None) best_uc = results_uc[['BIC', 'ICL']].astype(float).idxmin() best_uc results_uc[['BIC', 'ICL']].plot() _method = 'BIC' best_mixture_uc = mixtures_uc[best_uc.loc[_method]] plt.figure(figsize=(15,15)) for j, (component, row) in enumerate(best_mixture_uc.emission_probabilities.iterrows(), start=1): plt.subplot(4,3,j) draw_digit(row) plt.title(component) plt.suptitle('Mixture K={} (best {})'.format(best_mixture_uc.n_components, _method)) 
best_mixture_uc.mixing_coefficients.T.plot(kind='bar') _method = 'ICL' best_mixture_uc = mixtures_uc[best_uc.loc[_method]] plt.figure(figsize=(15,15)) for j, (component, row) in enumerate(best_mixture_uc.emission_probabilities.iterrows(), start=1): plt.subplot(4,3,j) draw_digit(row) plt.title(component) plt.suptitle('Mixture K={} (best {})'.format(best_mixture_uc.n_components, _method)) best_mixture_uc.mixing_coefficients.T.plot(kind='bar') """ Explanation: Part 2: splitting data into individual datasets Allowing each of them to have their own mixing coefficient End of explanation """
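The search above ranks candidate mixtures by BIC and ICL. For reference, a generic BIC of the usual form (a sketch of my own, not bernoullimix's implementation) trades log-likelihood against a log(n) penalty per parameter, so a richer model must buy its extra parameters with a sufficiently large likelihood gain:

```python
import numpy as np

def bic(log_likelihood, n_params, n_obs):
    # Standard BIC: smaller is better.
    return -2.0 * log_likelihood + n_params * np.log(n_obs)

# A model with higher likelihood can still lose on BIC if it adds parameters.
print(bic(-1000.0, 10, 500), bic(-995.0, 40, 500))
```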
scottquiring/Udacity_Deeplearning
intro-to-tflearn/TFLearn_Sentiment_Analysis.ipynb
mit
import pandas as pd import numpy as np import tensorflow as tf import tflearn from tflearn.data_utils import to_categorical """ Explanation: Sentiment analysis with TFLearn In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you. We'll start off by importing all the modules we'll need, then load and prepare the data. End of explanation """ reviews = pd.read_csv('reviews.txt', header=None) labels = pd.read_csv('labels.txt', header=None) """ Explanation: Preparing the data Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this. Read the data Use the pandas library to read the reviews and postive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way. 
End of explanation """ ''.join(reviews.values[0]) from collections import Counter total_counts = Counter() for idx,row in reviews.iterrows(): #print(row.to_string()) # for w in row.str.split(' ')[0]: # print(w, '-') #print(dir(row)) for word in row.str.split(' ')[0]: total_counts[word] += 1 print("Total words in data set: ", len(total_counts)) """ Explanation: Counting word frequency To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class. Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stores in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours. End of explanation """ vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000] print(vocab[:60]) """ Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words. End of explanation """ print(vocab[-1], ': ', total_counts[vocab[-1]]) """ Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words. End of explanation """ word2idx = dict([(w,i) for i,w in enumerate(vocab)]) """ Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. 
I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words. Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie. Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension. Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on. End of explanation """ def text_to_vector(text): v = np.zeros(len(vocab)) for word in text.split(' '): if word in word2idx: v[word2idx[word]] += 1 return v """ Explanation: Text to vector function Now we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this: Initialize the word vector with np.zeros, it should be the length of the vocabulary. Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here. For each word in that list, increment the element in the index associated with that word, which you get from word2idx. Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary. 
End of explanation """ text_to_vector('The tea is for a party to celebrate ' 'the movie so she has no time for a cake')[:65] """ Explanation: If you do this right, the following code should return ``` text_to_vector('The tea is for a party to celebrate ' 'the movie so she has no time for a cake')[:65] array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0]) ``` End of explanation """ word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_) for ii, (_, text) in enumerate(reviews.iterrows()): word_vectors[ii] = text_to_vector(text[0]) # Printing out the first 5 word vectors word_vectors[:5, :23] """ Explanation: Now, run through our entire review data set and convert each review to a word vector. End of explanation """ Y = (labels=='positive').astype(np.int_) records = len(labels) shuffle = np.arange(records) np.random.shuffle(shuffle) test_fraction = 0.9 train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):] trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2) testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2) trainY """ Explanation: Train, Validation, Test sets Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later. 
End of explanation """ np.sqrt(10000) # Network building def build_model(layers, learning_rate): # This resets all parameters and variables, leave this here tf.reset_default_graph() #### Your code #### net = tflearn.input_data([None, len(vocab)]) for n_units in layers: print(n_units) net = tflearn.fully_connected(net, n_units, activation='ReLU') net = tflearn.fully_connected(net, 2, activation='softmax') net = tflearn.regression(net, optimizer='sgd', learning_rate=learning_rate, loss='categorical_crossentropy') model = tflearn.DNN(net) return model """ Explanation: Building the network TFLearn lets you build the network by defining the layers. Input layer For the input layer, you just need to tell it how many units you have. For example, net = tflearn.input_data([None, 100]) would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size. The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units. Adding layers To add new hidden layers, you use net = tflearn.fully_connected(net, n_units, activation='ReLU') This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units). Output layer The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. 
You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax. net = tflearn.fully_connected(net, 2, activation='softmax') Training To set how you train the network, use net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') Again, this is passing in the network you've been building. The keywords: optimizer sets the training method, here stochastic gradient descent learning_rate is the learning rate loss determines how the network error is calculated. In this example, we use the categorical cross-entropy. Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like net = tflearn.input_data([None, 10]) # Input net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden net = tflearn.fully_connected(net, 2, activation='softmax') # Output net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') model = tflearn.DNN(net) Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc. End of explanation """ x = 10000.0 l = np.logspace(0, 4 ,8) ll = list(map(int,l)) ll = list(reversed(ll[1:-1])) ll = [1000,1000,1000,1000,1000] ll model = build_model(ll,0.1) """ Explanation: Initializing the model Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want. Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon. 
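The softmax used for the output layer simply rescales the raw scores of the two classes into probabilities that sum to 1; a minimal numeric sketch of that math (not TFLearn's implementation):

```python
import math

def softmax(scores):
    # shift by the max for numerical stability, then normalize exponentials
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 3.0])
print(probs, sum(probs))  # the larger score gets the larger probability; total is 1.0
```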
End of explanation """ # Training model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=20) """ Explanation: Training the network Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors. You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network. End of explanation """ predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_) test_accuracy = np.mean(predictions == testY[:,0], axis=0) print("Test accuracy: ", test_accuracy) """ Explanation: Testing After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters. End of explanation """ # Helper function that uses your model to predict sentiment def test_sentence(sentence): positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1] print('Sentence: {}'.format(sentence)) print('P(positive) = {:.3f} :'.format(positive_prob), 'Positive' if positive_prob > 0.5 else 'Negative') sentence = "Moonlight is by far the best movie of 2016." test_sentence(sentence) sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful" test_sentence(sentence) """ Explanation: Try out your own text! End of explanation """
gerbaudo/fbu
tutorial.ipynb
gpl-2.0
import fbu myfbu = fbu.PyFBU() """ Explanation: Basic usage Create an instance of PyFBU End of explanation """ myfbu.data = [100,150] """ Explanation: Supply the input distribution to be unfolded as a 1-dimensional list for N bins, with each entry corresponding to the bin content. End of explanation """ myfbu.response = [[0.08,0.02], #first truth bin [0.02,0.08]] #second truth bin """ Explanation: Supply the response matrix where each row corresponds to a truth level bin. The normalization of each row must be the acceptance efficiency of the corresponding bin (e.g. the normalization is 1 for resolution only unfolding). N.B. For now, only square response matrices are allowed. End of explanation """ myfbu.lower = [0,0] myfbu.upper = [3000,3000] """ Explanation: Define the boundaries of the hyperbox to be sampled for each bin. End of explanation """ myfbu.run() """ Explanation: Run the MCMC sampling (this step might take up to several minutes for a large number of bins). End of explanation """ trace = myfbu.trace print( trace ) """ Explanation: Retrieve the N-dimensional posterior distribution in the form of a list of N arrays. End of explanation """ %matplotlib inline from matplotlib import pyplot as plt plt.hist(trace[1], bins=20,alpha=0.85, normed=True) plt.ylabel('probability') """ Explanation: Each array corresponds to the projection of the posterior distribution for a given bin. End of explanation """ myfbu.background = {'bckg1':[20,30],'bckg2':[10,10]} myfbu.backgroundsyst = {'bckg1':0.5,'bckg2':0.04} #50% normalization uncertainty for bckg1 and 4% normalization uncertainty for bckg2 """ Explanation: Background One or more backgrounds, with the corresponding normalization uncertainties (gaussian prior), can be taken into account in the unfolding procedure. 
End of explanation """ myfbu.objsyst = { 'signal':{'syst1':[0.,0.03],'syst2':[0.,0.01]}, 'background':{ 'syst1':{'bckg1':[0.,0.],'bckg2':[0.1,0.1]}, 'syst2':{'bckg1':[0.,0.01],'bckg2':[0.,0.]} } } """ Explanation: The background normalization is sampled from a gaussian with the given uncertainty. To fix the background normalization the uncertainty should be set to 0. Systematics Systematic uncertainties affecting signal and background can be taken into account as well with their per-bin relative magnitudes. The prior is gaussian. Each systematic needs to be provided for each background listed at the previous step. End of explanation """ myfbu.run() #rerun sampling with backgrounds and systematics unfolded_bin1 = myfbu.trace[1] bckg1 = myfbu.nuisancestrace['bckg1'] plt.hexbin(bckg1,unfolded_bin1,cmap=plt.cm.YlOrRd) """ Explanation: Each systematic is treated as fully correlated across signal and the various backgrounds. Nuisance parameters The posterior probability for the nuisance parameters is stored in a dictionary of arrays. The correlation among nuisance parameters and with the estimates for the unfolded distribution is preserved in the array ordering. End of explanation """
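Under the hood, the likelihood compares the observed data with a candidate truth spectrum folded through the response matrix. That folding step can be written out for a 2-bin toy (the truth numbers are made up for illustration, following the rows-are-truth-bins convention used above):

```python
def fold(truth, response):
    # response[i][j]: efficiency-weighted probability that an event generated
    # in truth bin i is reconstructed in reco bin j
    n_reco = len(response[0])
    return [sum(truth[i] * response[i][j] for i in range(len(truth)))
            for j in range(n_reco)]

R = [[0.08, 0.02],   # first truth bin
     [0.02, 0.08]]   # second truth bin
print(fold([1000, 1000], R))  # two reco bins with ~100 expected events each
```

The sampler then explores truth spectra whose folded prediction (plus backgrounds) is compatible with myfbu.data.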
ypeleg/Deep-Learning-Keras-Tensorflow-PyCon-Israel-2017
2.4 Transfer Learning & Fine-Tuning.ipynb
mit
import numpy as np import datetime np.random.seed(1337) # for reproducibility from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten from keras.layers import Convolution2D, MaxPooling2D from keras.utils import np_utils from keras import backend as K from numpy import nan import keras print keras.__version__ now = datetime.datetime.now """ Explanation: Transfer Learning and Fine Tuning Train a simple convnet on the MNIST dataset the first 5 digits [0..4]. Freeze convolutional layers and fine-tune dense layers for the classification of digits [5..9]. Using GPU (highly recommended) -> If using theano backend: THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 End of explanation """ now = datetime.datetime.now batch_size = 128 nb_classes = 5 nb_epoch = 5 # input image dimensions img_rows, img_cols = 28, 28 # number of convolutional filters to use nb_filters = 32 # size of pooling area for max pooling pool_size = 2 # convolution kernel size kernel_size = 3 if K.image_data_format() == 'channels_first': input_shape = (1, img_rows, img_cols) else: input_shape = (img_rows, img_cols, 1) def train_model(model, train, test, nb_classes): X_train = train[0].reshape((train[0].shape[0],) + input_shape) X_test = test[0].reshape((test[0].shape[0],) + input_shape) X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train /= 255 X_test /= 255 print('X_train shape:', X_train.shape) print(X_train.shape[0], 'train samples') print(X_test.shape[0], 'test samples') # convert class vectors to binary class matrices Y_train = np_utils.to_categorical(train[1], nb_classes) Y_test = np_utils.to_categorical(test[1], nb_classes) model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy']) t = now() model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1, validation_data=(X_test, Y_test)) print('Training time: %s' % (now() - t)) score = 
model.evaluate(X_test, Y_test, verbose=0) print('Test score:', score[0]) print('Test accuracy:', score[1]) """ Explanation: Settings End of explanation """ # the data, shuffled and split between train and test sets (X_train, y_train), (X_test, y_test) = mnist.load_data() # create two datasets one with digits below 5 and one with 5 and above X_train_lt5 = X_train[y_train < 5] y_train_lt5 = y_train[y_train < 5] X_test_lt5 = X_test[y_test < 5] y_test_lt5 = y_test[y_test < 5] X_train_gte5 = X_train[y_train >= 5] y_train_gte5 = y_train[y_train >= 5] - 5 # make classes start at 0 for X_test_gte5 = X_test[y_test >= 5] # np_utils.to_categorical y_test_gte5 = y_test[y_test >= 5] - 5 # define two groups of layers: feature (convolutions) and classification (dense) feature_layers = [ Convolution2D(nb_filters, kernel_size, kernel_size, border_mode='valid', input_shape=input_shape), Activation('relu'), Convolution2D(nb_filters, kernel_size, kernel_size), Activation('relu'), MaxPooling2D(pool_size=(pool_size, pool_size)), Dropout(0.25), Flatten(), ] classification_layers = [ Dense(128), Activation('relu'), Dropout(0.5), Dense(nb_classes), Activation('softmax') ] # create complete model model = Sequential(feature_layers + classification_layers) # train model for 5-digit classification [0..4] train_model(model, (X_train_lt5, y_train_lt5), (X_test_lt5, y_test_lt5), nb_classes) # freeze feature layers and rebuild model for l in feature_layers: l.trainable = False # transfer: train dense layers for new classification task [5..9] train_model(model, (X_train_gte5, y_train_gte5), (X_test_gte5, y_test_gte5), nb_classes) """ Explanation: Dataset Preparation End of explanation """ from keras.applications import VGG16 from keras.applications.vgg16 import VGG16 from keras.preprocessing import image from keras.applications.vgg16 import preprocess_input from keras.layers import Input, Flatten, Dense from keras.models import Model import numpy as np #Get back the convolutional part of a VGG 
network trained on ImageNet model_vgg16_conv = VGG16(weights='imagenet', include_top=False) model_vgg16_conv.summary() #Create your own input format (here 48x48x3) inp = Input(shape=(48,48,3),name = 'image_input') #Use the generated model output_vgg16_conv = model_vgg16_conv(inp) #Add the fully-connected layers x = Flatten(name='flatten')(output_vgg16_conv) x = Dense(4096, activation='relu', name='fc1')(x) x = Dense(4096, activation='relu', name='fc2')(x) x = Dense(5, activation='softmax', name='predictions')(x) #Create your own model my_model = Model(input=inp, output=x) #In the summary, weights and layers from VGG part will be hidden, but they will be fit during the training my_model.summary() """ Explanation: Your Turn Try to Fine Tune a VGG16 Network End of explanation """ import scipy new_shape = (48,48) X_train_new = np.empty(shape=(X_train_gte5.shape[0],)+(48,48,3)) for idx in xrange(X_train_gte5.shape[0]): X_train_new[idx] = np.resize(scipy.misc.imresize(X_train_gte5[idx], (new_shape)), (48, 48, 3)) X_train_new[idx] = np.resize(X_train_new[idx], (48, 48, 3)) #X_train_new = np.expand_dims(X_train_new, axis=-1) print X_train_new.shape X_train_new = X_train_new.astype('float32') X_train_new /= 255 print('X_train shape:', X_train_new.shape) print(X_train_new.shape[0], 'train samples') print(X_train_new.shape[0], 'test samples') # convert class vectors to binary class matrices Y_train = np_utils.to_categorical(y_train_gte5, nb_classes) Y_test = np_utils.to_categorical(y_test_gte5, nb_classes) print y_train.shape my_model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy']) my_model.fit(X_train_new, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1) #print('Training time: %s' % (now() - t)) #score = my_model.evaluate(X_test, Y_test, verbose=0) #print('Test score:', score[0]) #print('Test accuracy:', score[1]) #train_model(my_model, # (X_train_new, y_train_gte5), # (X_test_gte5, y_test_gte5), nb_classes) """ 
Explanation: ```python ... ... Plugging new Layers model.add(Dense(768, activation='sigmoid')) model.add(Dropout(0.0)) model.add(Dense(768, activation='sigmoid')) model.add(Dropout(0.0)) model.add(Dense(n_labels, activation='softmax')) ``` End of explanation """
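The preprocessing above grows each 28x28 digit to 48x48 before feeding it to VGG16. The resizing idea can be sketched with a plain nearest-neighbour implementation (scipy.misc.imresize does something similar, with proper interpolation):

```python
def resize_nearest(img, new_h, new_w):
    # nearest-neighbour resize of a 2-D list-of-lists image
    h, w = len(img), len(img[0])
    return [[img[r * h // new_h][c * w // new_w] for c in range(new_w)]
            for r in range(new_h)]

img = [[1, 2],
       [3, 4]]
big = resize_nearest(img, 4, 4)
print(big)  # each source pixel becomes a 2x2 block
```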
jeiros/Jupyter_notebooks
python/markov_analysis/msmbuilder-API.ipynb
mit
from msmbuilder.dataset import dataset import numpy as np import os from mdtraj.utils import timing from msmbuilder.featurizer import DihedralFeaturizer import seaborn as sns; sns.set_style("white"); sns.set_palette("Blues") with timing("Loading data as dataset object"): wt_xyz = dataset("/Users/je714/wt_data/*/05*nc", topology="/Users/je714/wt_data/test.pdb") with timing("Loading data as dataset object"): S1P_xyz = dataset("/Users/je714/p_data/run*/S1P/05*nc", topology="/Users/je714/p_data/S1P_ff14SB_newclean.prmtop") with timing("Loading data as dataset object"): SEP_xyz = dataset("/Users/je714/p_data/run*/SEP/05*nc", topology="/Users/je714/p_data/SEP_ff14SB_newclean.prmtop") """ Explanation: Apply msmbuilder API to WT ff14SB cTN MD datasets are usually quite large. It doesn't make sense to load everything into memory at once. The dataset object lazily-loads trajectories as they are needed. Below, we create a dataset out of all the trajectories we have at the moment. End of explanation """ wt_featurizer = DihedralFeaturizer(types=['phi', 'psi']) if os.path.isfile('/Users/je714/wt_data/wt_diheds_phi-psi.tgz'): with timing("Loading dihedrals from file..."): wt_diheds = np.loadtxt('/Users/je714/wt_data/wt_diheds_phi-psi.tgz') else: with timing("Featurizing trajectory into dihedrals..."): wt_diheds = wt_featurizer.fit_transform(wt_xyz) np.savetxt('/Users/je714/wt_data/wt_diheds_phi-psi.tgz', np.concatenate(wt_diheds)) S1P_featurizer = DihedralFeaturizer(types=['phi', 'psi']) if os.path.isfile('/Users/je714/p_data/S1P_diheds_phi-psi.tgz'): with timing("Loading dihedrals from file..."): S1P_diheds = np.loadtxt('/Users/je714/p_data/S1P_diheds_phi-psi.tgz') else: with timing("Featurizing trajectory into dihedrals..."): S1P_diheds = S1P_featurizer.fit_transform(S1P_xyz) np.savetxt('/Users/je714/p_data/S1P_diheds_phi-psi.tgz', np.concatenate(S1P_diheds)) SEP_featurizer = DihedralFeaturizer(types=['phi', 'psi']) if 
os.path.isfile('/Users/je714/p_data/SEP_diheds_phi-psi.tgz'): with timing("Loading dihedrals from file..."): SEP_diheds = np.loadtxt('/Users/je714/p_data/SEP_diheds_phi-psi.tgz') else: with timing("Featurizing trajectory into dihedrals..."): SEP_diheds = SEP_featurizer.fit_transform(SEP_xyz) np.savetxt('/Users/je714/p_data/SEP_diheds_phi-psi.tgz', np.concatenate(SEP_diheds)) """ Explanation: Featurization The raw (x, y, z) coordinates from the simulation do not respect the translational and rotational symmetry of our problem. A Featurizer transforms cartesian coordinates into other representations. Dihedrals Here we use the DihedralFeaturizer to turn our data into phi and psi dihedral angles. Observe that the 6812*3-dimensional space is reduced substantially. End of explanation """ # from msmbuilder.featurizer import ContactFeaturizer # featurizer_contact = ContactFeaturizer("all", scheme="ca") # contacts = featurizer_contact.fit_transform(xyz) # print(xyz[0].xyz.shape) # print(contacts[0].shape) """ Explanation: Contact Featurizer Featurizer based on residue-residue distances This featurizer transforms a dataset containing MD trajectories into a vector dataset by representing each frame in each of the MD trajectories by a vector of the distances between pairs of amino-acid residues. The exact method for computing the distance between two residues is configurable with the scheme parameter. In this case we use "ca" to determine the distance between two residues as the distance between their alpha carbons. End of explanation """ wt_xyz[0][0].topology wt_diheds.shape from sklearn.pipeline import Pipeline from msmbuilder.decomposition import tICA from msmbuilder.cluster import MiniBatchKMeans from msmbuilder.msm import MarkovStateModel DihedralFeaturizer? 
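The compute-once-then-reload pattern used for the dihedral features above can be factored into a small generic helper (a sketch, not part of msmbuilder; shown here with a throwaway temp file):

```python
import json
import os
import tempfile

def compute_or_load(path, compute):
    # reload a cached result if the file exists, otherwise compute and cache it
    if os.path.isfile(path):
        with open(path) as f:
            return json.load(f)
    result = compute()
    with open(path, 'w') as f:
        json.dump(result, f)
    return result

path = os.path.join(tempfile.mkdtemp(), 'features.json')
first = compute_or_load(path, lambda: [0.1, 0.2])    # computed and written
second = compute_or_load(path, lambda: [9.9, 9.9])   # loaded from the cache
print(first, second)
```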
model = Pipeline([ ('featurizer', DihedralFeaturizer(types=['phi', 'psi'])), ('tica', tICA(n_components=10, lag_time=20)), ('cluster', MiniBatchKMeans(n_clusters=1000)), ('msm', MarkovStateModel(lag_time=50)) ]) model.fit(wt_xyz) for step in model.steps: print(step[0]) diheds = model.steps[0][1] tica_obj = model.steps[1][1] clusterer = model.steps[2][1] msm = model.steps[3][1] tica_trajs = tica_obj.transform(diheds.transform(wt_xyz)) np.concatenate(tica_trajs).shape """ Explanation: Intermediate kinetic model: tICA tICA is similar to PCA. Note the reduction to just 4 dimensions. End of explanation """ plt.plot(np.concatenate(tica_trajs)[::,0]) def plot_ticaTrajs(tica_trajs): txx = np.concatenate(tica_trajs) plt.figure(figsize=(10.5,5)) cmap=sns.cubehelix_palette(8, start=.5, rot=-.75, as_cmap=True) plt.subplot(1, 2, 1) plt.hexbin(txx[:,0], txx[:,1], bins='log', mincnt=1, cmap=cmap) plt.xlabel('tIC 1') plt.ylabel('tIC 2') cb = plt.colorbar() cb.set_label('log10(N)') plt.subplot(1, 2, 2) plt.hexbin(txx[:,2], txx[:,3], bins='log', mincnt=1, cmap=cmap) plt.xlabel('tIC 3') plt.ylabel('tIC 4') cb = plt.colorbar() cb.set_label('log10(N)') plot_ticaTrajs(tica_trajs) """ Explanation: tICA Heatmap We can histogram our data projecting along the two first tICS (the two slowest DOFs found by tICA). 
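tICA ranks directions by how slowly they decorrelate at the chosen lag_time; the key ingredient is the lagged autocorrelation of a series. A toy version of that quantity (not msmbuilder's estimator) makes the slow-vs-fast distinction concrete:

```python
def autocorr(x, lag):
    # autocorrelation of a 1-D series at a given lag
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    cov = sum((x[i] - mean) * (x[i + lag] - mean)
              for i in range(n - lag)) / (n - lag)
    return cov / var

slow = [0, 0, 0, 0, 1, 1, 1, 1]  # switches once: slow degree of freedom
fast = [0, 1, 0, 1, 0, 1, 0, 1]  # switches every step: fast degree of freedom
print(autocorr(slow, 1), autocorr(fast, 1))  # slow stays correlated, fast does not
```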
End of explanation """ clusterer.cluster_centers_.shape def plot_clusterCenters(clusterer_object, tica_trajs): txx = np.concatenate(tica_trajs) plt.figure(figsize=(10.5,5)) plt.subplot(1, 2, 1) cmap=sns.cubehelix_palette(8, start=.5, rot=-.75, as_cmap=True) plt.hexbin(txx[:,0], txx[:,1], bins='log', mincnt=1, cmap=cmap) cb = plt.colorbar() cb.set_label('log10(N)') plt.scatter(clusterer.cluster_centers_[:,0], clusterer.cluster_centers_[:,1], s=4, c='black') plt.xlabel('tIC 1') plt.ylabel('tIC 2') plt.subplot(1,2,2) plt.hexbin(txx[:,2], txx[:,3], bins='log', mincnt=1, cmap=cmap) cb = plt.colorbar() cb.set_label('log10(N)') plt.scatter(clusterer.cluster_centers_[:,2], clusterer.cluster_centers_[:,3], s=4, c='black') plt.xlabel('tIC 3') plt.ylabel('tIC 4') plt.tight_layout() plot_clusterCenters(clusterer, tica_trajs) plt.savefig("/Users/je714/Dropbox (Imperial)/ESAreport/Figures/tica_clusters.png", format='png', dpi=300) """ Explanation: Clustering Conformations need to be clustered into states (sometimes written as microstates). We cluster based on the tICA projections to group conformations that interconvert rapidly. Note that we transform our trajectories from the 4-dimensional tICA space into a 1-dimensional cluster index. 
End of explanation """ np.asarray(range(0,10)) plt.hexbin(np.asarray(range(0, np.hstack(clusterer.labels_).shape[0]))*0.00002, np.hstack(clusterer.labels_), mincnt=1, cmap=sns.cubehelix_palette(8, start=.5, rot=-.75, as_cmap=True)) plt.ylabel("Cluster ID") plt.xlabel("Aggregated time ($\mu$s)") plt.savefig("/Users/je714/Dropbox (Imperial)/ESAReport/Figures/labeled_Trajs", format='png', dpi=300) msm_lagtimes = [x for x in range(1,201) if (x%10==0) or (x==1)] msm_lagtimes msm_test = MarkovStateModel(lag_time=1) msm_test.fit(np.hstack(clusterer.labels_)) msm_objects = [] for lagtime in msm_lagtimes: msm = MarkovStateModel(lag_time=lagtime) msm.fit(np.hstack(clusterer.labels_)) msm_objects.append(msm) msm_timescales = [] for msm in msm_objects: msm_timescales.append(msm.timescales_) first_timescale = [] for lag_time, timescale in zip(msm_lagtimes, msm_timescales): print(lag_time, timescale[0]) first_timescale.append(timescale[0]) time_asNParray = np.array(first_timescale) lag_asNParray = np.array(msm_lagtimes[0:6]) plt.scatter(lag_asNParray, time_asNParray) plt.semilogy() txx = np.concatenate(tica_trajs) plt.hexbin(txx[:,0], txx[:,1], bins='log', mincnt=1, cmap="Greys") plt.scatter(clusterer.cluster_centers_[:,0], clusterer.cluster_centers_[:,1], s=1e4 * msm.populations_, # size by population c=msm.left_eigenvectors_[:,1], # color by eigenvector cmap="RdBu") plt.colorbar(label='First dynamical eigenvector') plt.xlabel('tIC 1') plt.ylabel('tIC 2') plt.tight_layout() """ Explanation: MSM We can construct an MSM from the labeled trajectories. 
End of explanation """ from msmbuilder.lumping import PCCAPlus pcca = PCCAPlus.from_msm(msm, n_macrostates=5) macro_trajs = pcca.transform(np.concatenate(clusterer.labels_)) print(msm.left_eigenvectors_[:,1].shape) print(clusterer.cluster_centers_[:,0].shape) plt.hexbin(txx[:,0], txx[:,1], bins='log', mincnt=1, cmap="Greys") plt.scatter(clusterer.cluster_centers_[:,0], clusterer.cluster_centers_[:,1], s=100, c=pcca.microstate_mapping_, ) plt.xlabel('tIC 1') plt.ylabel('tIC 2') plt.plot(msm.eigenvalues_, 'bo') plt.xlabel("MSM state") plt.ylim(0,1) plt.plot(msm.sample_discrete(n_steps=1000), 'bo') from msmbuilder import utils plt.plot(msm.populations_) plt.plot(msm.timescales_) """ Explanation: Macrostate model End of explanation """
elsuizo/Control_de_robots_py
tp3.ipynb
gpl-3.0
from IPython.core.display import Image Image(filename='Imagenes/copy_left.png') Image(filename='Imagenes/dibujo_robot2_tp2.png') #imports from sympy import * import numpy as np #Con esto las salidas van a ser en LaTeX init_printing(use_latex=True) """ Explanation: Martín Noblía Tp3 <img src="files/copy_left.png" style="float: left;"/> <div style="clear: both;"> ##Control de Robots 2013 ###Ingeniería en Automatización y Control ###Universidad Nacional de Quilmes ##Ejercicio 1 #### Determinar el espacio de trabajo alcanzable para el manipulador de 3 brazos de la figura siguiente con $L_1=15.0$ (cm), $L_2=10.0$(cm), $L_3=3.0$(cm), $0º < \theta_{1} < 360º$, $0º < \theta_{2} < 180º$, $0º < \theta_{3} < 180º$ End of explanation """ #Funcion simbólica para una rotación(transformacion homogenea) sobre el eje X def Rot_X(angle): rad = angle*pi/180 M = Matrix([[1,0,0,0],[ 0,cos(rad),-sin(rad),0],[0,sin(rad), cos(rad),0],[0,0,0,1]]) return M #Funcion simbólica para una rotación(transformacion homogenea) sobre el eje Y def Rot_Y(angle): rad = angle*pi/180 M = Matrix([[cos(rad),0,sin(rad),0],[ 0,1,0,0],[-sin(rad), 0,cos(rad),0],[0,0,0,1]]) return M #Funcion simbólica para una rotación(transformacion homogenea) sobre el eje Z def Rot_Z(angle): rad = angle*pi/180 M = Matrix([[cos(rad),- sin(rad),0,0],[ sin(rad), cos(rad), 0,0],[0,0,1,0],[0,0,0,1]]) return M #Funcion simbolica para una traslacion en el eje X def Traslacion_X(num): D = Matrix([[1,0,0,num],[0,1,0,0],[0,0,1,0],[0,0,0,1]]) return D #Funcion simbolica para una traslacion en el eje Y def Traslacion_Y(num): D = Matrix([[1,0,0,0],[0,1,0,num],[0,0,1,0],[0,0,0,1]]) return D #Funcion simbolica para una traslacion en el eje Z def Traslacion_Z(num): D = Matrix([[1,0,0,0],[0,1,0,0],[0,0,1,num],[0,0,0,1]]) return D #estos son simbolos especiales que los toma como letras griegas directamente(muuy groso) alpha, beta , gamma, phi, theta, a, d =symbols('alpha beta gamma phi theta a d') #Generamos la transformacion T = Rot_X(alpha) * 
Traslacion_X(a) * Rot_Z(theta) * Traslacion_Z(d) T #Creamos los nuevos simbolos theta_1, theta_2, theta_3, L_1, L_2, L_3 =symbols('theta_1, theta_2, theta_3, L_1, L_2 L_3') T_0_1 = T.subs([(alpha,0),(a,0),(d,0),(theta,theta_1)]) T_0_1 T_1_2 = T.subs([(alpha,90),(a,L_1),(d,0),(theta,theta_2)]) T_1_2 T_2_3 = T.subs([(alpha,0),(a,L_2),(d,0),(theta,theta_3)]) T_2_3 #Agregamos la ultima trama T_w = Matrix([[1,0,0,L_3],[0,1,0,0],[0,0,1,0],[0,0,0,1]]) T_w T_B_W = T_0_1 * T_1_2 * T_2_3 * T_w T_B_W.simplify() T_B_W T_real = T_B_W.subs([(L_1,15),(L_2,10),(L_3,3)]) T_real #generamos una funcion numerica() a partir de la expresion simbolica func = lambdify((theta_1,theta_2,theta_3),T_real,'numpy') #verificamos si funciona func(10,30,10) def get_position(q_1,q_2,q_3): """ Funcion para extraer la posicion cartesiana de la transformacion homogenea que describe la cinematica directa del manipulador RRR espacial(ver ejercicio 2 tp2) Inputs: q_1 (angulo del link 1) q_2 (angulo del link 2) q_3 (angulo del link 3) Outputs: """ M = func(q_1,q_2,q_3) arr = np.asarray(M) x = arr[0,3] y = arr[1,3] z = arr[2,3] return x,y,z #probamos si funciona L=get_position(10,10,10) L %pylab inline plt.rcParams['figure.figsize'] = 12,10 import mpl_toolkits.mplot3d.axes3d as axes3d #TODO vectorizar fig, ax = plt.subplots(subplot_kw=dict(projection='3d')) #generamos los rangos de los angulos y evaluamos su posicion cartesiana for i in xrange(0,360,8): for j in xrange(0,180,8): for k in xrange(0,180,8): x,y,z = get_position(i,j,k) ax.scatter(x,y,z,alpha=.2) ax.view_init(elev=10., azim=10.) 
plt.title('Espacio de trabajo alcanzable',fontsize=17) plt.xlabel(r'$x$',fontsize=17) plt.ylabel(r'$y$',fontsize=17) #ax.set_aspect('equal') plt.show() #TODO vectorizar fig, ax = plt.subplots(subplot_kw=dict(projection='3d')) #generamos los rangos de los angulos y evaluamos su posicion cartesiana for i in xrange(0,360,8): for j in xrange(0,180,8): for k in xrange(0,180,8): x,y,z = get_position(i,j,k) ax.scatter(x,y,z,alpha=.2) #ax.view_init(elev=10., azim=10.) plt.title('Espacio de trabajo alcanzable',fontsize=17) plt.xlabel(r'$x$',fontsize=17) plt.ylabel(r'$y$',fontsize=17) #ax.set_aspect('equal') plt.show() """ Explanation: Recordemos que el espacio de trabajo alcanzable es la región espacial a la que el efector final puede llegar, con al menos una orientación. Vamos a desarrollar primero la cinemática directa(como en el tp2) para luego evaluar variando los angulos de articulaciones en los rangos dados y asi obtener el espacio de trabajo alcanzable. End of explanation """ Image(filename='Imagenes/robot2_tp3.png') """ Explanation: Ejercicio 2 En el manipulador 2R de la figura siguiente, $L_{1}=2L_{2}$ y los rangos límites para las juntas son: $0º < \theta_{1} < 180º$, $-90º < \theta_{2} < 180º$. Determinar el espacio de trabajo alcanzable. 
End of explanation """ def brazo_RR(theta_1, theta_2, L_1, L_2): """ Posicion cartesiana del efector final de un brazo RR Inputs: theta_1(angulo del link 1) theta_2(angulo del link 2) L_1(Longitud del link 1) L_2(longitud del link 2) Outputs: x(posicion cartesiana x del efector final) y(posicion cartesiana y del efector final) """ x = L_1 * np.cos(theta_1) + L_2 * np.cos(theta_1 + theta_2) y = L_1 * np.sin(theta_1) + L_2 * np.sin(theta_1 + theta_2) return x, y theta_1_vec = np.linspace(0,np.pi,100) #vector de 100 muestras en el intervalo[0,pi] theta_2_vec = np.linspace(-np.pi/2,np.pi,100) #vector de 100 muestras en el intervalo[-pi/2,pi] #evaluamos a la funcion con varias combinaciones de vectores x,y = brazo_RR(theta_1_vec,theta_2_vec,2,1) x1,y1 = brazo_RR(0,theta_2_vec,2,1) x2,y2 = brazo_RR(theta_1_vec,0,2,1) #evaluamos con puntos aleatorios del rango x3,y3 = brazo_RR(np.random.choice(theta_1_vec,2000),np.random.choice(theta_2_vec,2000),2,1) plt.plot(x,y , 'ro') plt.plot(x1,y1 , 'go') plt.plot(x2,y2, 'ko') plt.plot(x3,y3,'yo') plt.title('Espacio de trabajo alcanzable',fontsize=20) plt.axis([-4,4,-2,5]) plt.legend([r'$0 < \theta_{1} < \pi$ ; $ -\pi/2 < \theta_{2} < \pi$ ',r'$\theta_{1}=0$ ; $ -\pi/2 < \theta_{2} < \pi$ ',r'$0 < \theta_{1} < \pi$ ; $\theta_{2}=0$','random'],fontsize=17) plt.grid() plt.show() """ Explanation: Sabemos que los puntos $(x,y)$ de la trama {3} los podemos obtener facilmente en función de los ángulos $\theta_{1}$ y $\theta_{2}$. 
We will implement the parameterization in the following function:
End of explanation
"""

def inverse_kin(T, L_1, L_2):
    """
    Function to solve the inverse kinematics of a planar RRR manipulator
    Inputs:
        T (homogeneous transformation matrix)
        L_1 (length of link 1)
        L_2 (length of link 2)
    Outputs:
        A 6-element tuple with the angles of the two configurations,
        elbow up and elbow down:
        (theta_1_up, theta_2_up, theta_3_up, theta_1_down, theta_2_down, theta_3_down)
    """
    x = T[0, 3]
    y = T[1, 3]

    # check whether the point is reachable
    es_alc = (x**2 + y**2 - L_1**2 - L_2**2) / (2 * L_1 * L_2)

    if -1 <= es_alc <= 1:
        print('reachable')
        c_2 = es_alc

        # there are two solutions to choose from
        s_2_elbow_up = np.sqrt(1 - c_2**2)
        s_2_elbow_down = -np.sqrt(1 - c_2**2)
        theta_2_up = np.arctan2(s_2_elbow_up, c_2)
        theta_2_down = np.arctan2(s_2_elbow_down, c_2)

        # change of variables
        k_1 = L_1 + L_2 * c_2
        k_2_up = L_2 * s_2_elbow_up
        k_2_down = L_2 * s_2_elbow_down

        gamma_up = np.arctan2(k_2_up, k_1)
        gamma_down = np.arctan2(k_2_down, k_1)

        r_up = np.sqrt(k_1**2 + k_2_up**2)
        r_down = np.sqrt(k_1**2 + k_2_down**2)

        k_1_1 = r_up * np.cos(gamma_up)
        k_1_2 = r_down * np.cos(gamma_down)
        k_2_1 = r_up * np.sin(gamma_up)
        k_2_2 = r_down * np.sin(gamma_down)

        theta_1_up = np.arctan2(y, x) - np.arctan2(k_2_1, k_1_1)
        theta_1_down = np.arctan2(y, x) - np.arctan2(k_2_2, k_1_2)

        c_phi = T[0, 0]
        s_phi = T[1, 0]
        phi = np.arctan2(s_phi, c_phi)

        theta_3_up = phi - theta_1_up - theta_2_up
        theta_3_down = phi - theta_1_down - theta_2_down

        fac = 180 / np.pi  # to convert to degrees
        return (theta_1_up * fac, theta_2_up * fac, theta_3_up * fac,
                theta_1_down * fac, theta_2_down * fac, theta_3_down * fac)
    else:
        print('not reachable')

"""
Explanation: Exercise 3
Using the geometric "tangent of the half-angle" substitution, solve the transcendental equation $a\cos(\theta)+b\sin(\theta)=c$; that is, find $\theta$ as a function of $a$, $b$ and $c$.
The half-angle tangent substitution is the following:
$u=\tan(\frac{\theta}{2})$
$\cos(\theta)=\frac{1-u^{2}}{1+u^{2}}$
$\sin(\theta)=\frac{2u}{1+u^2}$
For our case we substitute the expressions for $\cos(\theta)$ and $\sin(\theta)$ into the transcendental equation $a\cos(\theta)+b\sin(\theta)=c$:
$a(\frac{1-u^{2}}{1+u^{2}})+b(\frac{2u}{1+u^2})=c$, so $a(1-u^{2})+b(2u)=c(1+u^{2})$. We then express the equation as a polynomial in $u$:
$u^{2}(a+c)-2bu+c-a=0$
The next step is to solve the quadratic:
$u= \frac{b \pm \sqrt{b^{2}+a^{2}-c^{2}}}{a+c}$
Therefore:
$\theta=2\tan^{-1}(\frac{b \pm \sqrt{b^{2}+a^{2}-c^{2}}}{a+c})$
Exercise 4
Derive the inverse kinematics of the RRR robot from exercise 2 of practice 2.
If the transformation $^{S}_{T}T$ is given, then we form:
$^{B}_{W}T = (^{B}_{S}T)(^{S}_{T}T)(^{W}_{T}T)^{-1}$
and since $^{B}_{W}T = {}^{0}_{3}T$, we can write:
$$^{0}_{3}T = \begin{bmatrix} r_{11} & r_{12} & r_{13} & x \\ r_{21} & r_{22} & r_{23} & y \\ r_{31} & r_{32} & r_{33} & z \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Furthermore, as we know from exercise 2 of practice 2:
$$^{0}_{3}T = \begin{bmatrix} c_{1}c_{23} & -c_{1}s_{23} & s_{1} & c_{1}(c_{2}L_{2}+L_{1}) \\ s_{1}c_{23} & -s_{1}s_{23} & -c_{1} & s_{1}(c_{2}L_{2}+L_{1}) \\ s_{23} & c_{23} & 0 & s_{2}L_{2} \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
We equate the $(1,3)$ components of both matrices, so $s_{1}=r_{13}$, and the $(2,3)$ elements, so $-c_{1}=r_{23}$. From these we can obtain $\theta_{1}$ as:
$\theta_{1}=\mathrm{Atan2}(r_{13},-r_{23})$
We continue by equating the $(1,4)$ and $(2,4)$ elements:
$x=c_{1}(c_{2}L_{2}+L_{1})$
$y=s_{1}(c_{2}L_{2}+L_{1})$
So if $c_{1} \neq 0$, then $c_{2}=\frac{1}{L_{2}}(\frac{x}{c_{1}}-L_{1})$, or equivalently $c_{2}=\frac{1}{L_{2}}(\frac{y}{s_{1}}-L_{1})$ when $s_{1} \neq 0$.
Next, equating the $(3,4)$ elements, $z=s_{2}L_{2}$, therefore:
$\theta_{2}=\mathrm{Atan2}(\frac{z}{L_{2}},c_{2})$
Finally, equating the $(3,1)$ and $(3,2)$ elements, $s_{23}=r_{31}$ and $c_{23}=r_{32}$, therefore:
$\theta_{3}=\mathrm{Atan2}(r_{31},r_{32})-\theta_{2}$
Exercise 5
This exercise focuses on the inverse-pose kinematics solution for the planar 3-DOF (three degrees of freedom) robot (see exercise 1). The following fixed length parameters are given: $L_1=4$, $L_2=3$, $L_3=2$
a) Derive, analytically and by hand, the inverse-pose solution for this robot. Given $^{0}_{H}T$, compute all the possible multiple solutions for $[\theta_{1},\theta_{2},\theta_{3}]$.
b) Write a program to completely solve this inverse-pose kinematics problem for the planar $3R$ robot (that is, provide all the multiple solutions). Test your program using the following input cases:
i) $$^{0}_{H}T = \begin{bmatrix} 1 & 0 & 0 & 9 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
ii) $$^{0}_{H}T = \begin{bmatrix} 0.5 & -0.866 & 0 & 7.5373 \\ 0.866 & 0.6 & 0 & 3.9266 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
iii) $$^{0}_{H}T = \begin{bmatrix} 0 & 1 & 0 & -3 \\ -1 & 0 & 0 & 2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
iv) $$^{0}_{H}T = \begin{bmatrix} 0.866 & 0.5 & 0 & -3.1245 \\ -0.5 & 0.866 & 0 & 9.1674 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
For all cases, use a circular check to validate your results: feed each set of joint angles (for each of the multiple solutions) back into the forward-kinematics program to show that you recover the matrices $^{0}_{H}T$.
a) As we know, the kinematic equations of this arm are:
$$^{B}_{W}T = {}^{0}_{3}T = \begin{bmatrix} c_{123} & -s_{123} & 0 & L_{1}c_{1}+L_{2}c_{12} \\ s_{123} & c_{123} & 0 & L_{1}s_{1}+L_{2}s_{12} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
We assume a generic configuration of the arm relative to the base frame, namely $^{B}_{W}T$. Since we are working with a planar manipulator, it can be specified by three numbers $[x,y,\phi]$, where $\phi$ is the orientation of link 3 in the plane (relative to the $\hat{X}$ axis).
Hence our generic transformation is:
$$^{B}_{W}T = \begin{bmatrix} c_{\phi} & -s_{\phi} & 0 & x \\ s_{\phi} & c_{\phi} & 0 & y \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
All reachable goals must lie in the subspace implied by the structure of the equation above. Equating the two matrices we arrive at the following equations:
$c_{\phi}=c_{123}$
$s_{\phi}=s_{123}$
$x = L_{1}c_{1}+L_{2}c_{12}$
$y = L_{1}s_{1}+L_{2}s_{12}$
If we square the last two equations and add them:
$x^{2}+y^{2}=L_{1}^{2}+L_{2}^{2}+2L_{1}L_{2}c_{2}$
Solving for $c_{2}$:
$c_{2}=\frac{x^{2}+y^{2}-L_{1}^{2}-L_{2}^{2}}{2L_{1}L_{2}}$
We see that for a solution to exist, the right-hand side of the equation above must lie in the interval $[-1,1]$.
Assuming this condition holds, we can find $s_{2}$ as:
$s_{2}=\pm \sqrt{1-c_{2}^{2}}$
Finally we compute $\theta_{2}$ with the two-argument arctangent routine:
$\theta_{2}=\mathrm{Atan2}(s_{2},c_{2})$
The sign chosen in the equation for $s_{2}$ corresponds to one of the two multiple solutions, "elbow up" or "elbow down".
We can then solve for $\theta_{1}$ as follows. Let:
$x=k_{1}c_{1}-k_{2}s_{1}$
$y=k_{1}s_{1}+k_{2}c_{1}$
where:
$k_{1}=L_{1}+L_{2}c_{2}$
$k_{2}=L_{2}s_{2}$
If we call $r=+\sqrt{k_{1}^{2}+k_{2}^{2}}$ and $\gamma = \mathrm{Atan2}(k_{2},k_{1})$, then we can write:
$\frac{x}{r}=\cos(\gamma)\cos(\theta_{1})-\sin(\gamma)\sin(\theta_{1})$
$\frac{y}{r}=\cos(\gamma)\sin(\theta_{1})+\sin(\gamma)\cos(\theta_{1})$
therefore:
$\cos(\gamma+\theta_{1})=\frac{x}{r}$
$\sin(\gamma+\theta_{1})=\frac{y}{r}$
Using the two-argument arctangent:
$\gamma + \theta_{1}= \mathrm{Atan2}(\frac{y}{r},\frac{x}{r})=\mathrm{Atan2}(y,x)$
and therefore:
$\theta_{1}= \mathrm{Atan2}(y,x)-\mathrm{Atan2}(k_{2},k_{1})$
Finally we can solve for the sum of $\theta_{1}$ through $\theta_{3}$:
$\theta_{1}+\theta_{2}+\theta_{3}=\mathrm{Atan2}(s_{\phi},c_{\phi})=\phi$
From this last result we can solve for $\theta_{3}$, since we already know the values of the other angles.
b) Below we develop an implementation that solves the inverse kinematics derived above.
End of explanation
"""

# Transformation matrix for case i)
T_0_H_i = np.array([[1,0,0,9],[0,1,0,0],[0,0,1,0],[0,0,0,1]])
T_0_H_i

# Matrix of the transformation from frame 3 to the tool
T_H_3 = np.array([[1,0,0,2],[0,1,0,0],[0,0,1,0],[0,0,0,1]])
T_H_3

# Inverse of the matrix representing the transformation from frame 3 to the tool
T_3_H = np.linalg.inv(T_H_3)
T_3_H

# Obtain the transformation we need
T_0_3_i = np.dot(T_0_H_i, T_3_H)
T_0_3_i

# compute the angles (all should be zero, since in x we have the sum of L_1 and L_2)
angulos_i = inverse_kin(T_0_3_i, 4, 3)
angulos_i

# case ii)
# Load the matrix and repeat the previous procedure
T_0_H_ii = np.array([[.5,-0.866,0,7.5373],[0.866,0.6,0,3.9266],[0,0,1,0],[0,0,0,1]])
T_0_H_ii

T_0_3_ii = np.dot(T_0_H_ii, T_3_H)
T_0_3_ii

# Compute the angles (there are 6: three for one configuration and three for the other)
angulos_ii = inverse_kin(T_0_3_ii, 4, 3)
angulos_ii

# case iii)
# Load the matrix and repeat the previous procedure
T_0_H_iii = np.array([[0,1,0,-3],[-1,0,0,2],[0,0,1,0],[0,0,0,1]])
T_0_H_iii

T_0_3_iii = np.dot(T_0_H_iii, T_3_H)
T_0_3_iii

# Compute the angles (there are 6: three for one configuration and three for the other)
angulos_iii = inverse_kin(T_0_3_iii, 4, 3)
angulos_iii

# case iv)
# Load the matrix and repeat the previous procedure
T_0_H_iv = np.array([[0.866,.5,0,-3.1245],[-.5,0.866,0,9.1674],[0,0,1,0],[0,0,0,1]])
T_0_H_iv

T_0_3_iv = np.dot(T_0_H_iv, T_3_H)
T_0_3_iv

angulos_iv = inverse_kin(T_0_3_iv, 4, 3)
angulos_iv

"""
Explanation: We will create the matrices from the problem statement so we can evaluate them. Keep in mind that the transformations given in the statement go from the base frame to the tool, so we must convert each of them so that we obtain $^{0}_{3}T = (^{0}_{H}T)(^{H}_{3}T)^{-1}$, where:
$$^{H}_{3}T = \begin{bmatrix} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
End of explanation
"""

# Generate the forward kinematics of the planar RRR arm
T_0_1 = T.subs([(alpha,0),(a,0),(d,0),(theta,theta_1)])
T_1_2 = T.subs([(alpha,0),(a,L_1),(d,0),(theta,theta_2)])
T_2_3 = T.subs([(alpha,0),(a,L_2),(d,0),(theta,theta_3)])
T_0_3 = T_0_1 * T_1_2 * T_2_3

# Substitute the link length values
T_0_3_real = T_0_3.subs([(L_1,4),(L_2,3)])

# generate a numeric function from the symbolic one
func_kin = lambdify((theta_1,theta_2,theta_3), T_0_3_real, 'numpy')

# evaluate for the first three elements of the tuple containing the angles of one configuration
# (note that inverse_kin returns degrees, so we convert back to radians first)
func_kin(np.radians(angulos_i[0]), np.radians(angulos_i[1]), np.radians(angulos_i[2]))

# evaluate for the last three elements of the tuple containing the angles of the other configuration
func_kin(np.radians(angulos_i[3]), np.radians(angulos_i[4]), np.radians(angulos_i[5]))

# evaluate for the first three elements of the tuple containing the angles of one configuration
func_kin(np.radians(angulos_ii[0]), np.radians(angulos_ii[1]), np.radians(angulos_ii[2]))

# evaluate for the last three elements of the tuple containing the angles of the other configuration
func_kin(np.radians(angulos_ii[3]), np.radians(angulos_ii[4]), np.radians(angulos_ii[5]))

# evaluate for the first three elements of the tuple containing the angles of one configuration
func_kin(np.radians(angulos_iii[0]), np.radians(angulos_iii[1]), np.radians(angulos_iii[2]))

# evaluate for the last three elements of the tuple containing the angles of the other configuration
func_kin(np.radians(angulos_iii[3]), np.radians(angulos_iii[4]), np.radians(angulos_iii[5]))

"""
Explanation: Now we build a function to verify the results circularly. First we generate the forward kinematics symbolically, as in practice 2, and then we convert it to a numeric function thanks to the conveniences of the language.
End of explanation
"""
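As an additional, self-contained sanity check (a sketch added here, independent of the symbolic pipeline above), we can redo case i) with plain numpy and close the loop with a small forward-kinematics function. The wrist target is $x = 9 - 2 = 7$, $y = 0$, $\phi = 0$ after removing the $L_3 = 2$ tool offset:

```python
import numpy as np

L1, L2 = 4.0, 3.0
# wrist target for case i): x = 9 - 2 = 7 after removing the L3 = 2 tool offset
x, y, phi = 7.0, 0.0, 0.0

c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
solutions = []
for s2 in (np.sqrt(1 - c2**2), -np.sqrt(1 - c2**2)):  # elbow-up / elbow-down branches
    th2 = np.arctan2(s2, c2)
    k1, k2 = L1 + L2 * c2, L2 * s2
    th1 = np.arctan2(y, x) - np.arctan2(k2, k1)
    solutions.append((th1, th2, phi - th1 - th2))

def fk(th1, th2, th3):
    # forward kinematics of the planar arm (wrist frame)
    px = L1 * np.cos(th1) + L2 * np.cos(th1 + th2)
    py = L1 * np.sin(th1) + L2 * np.sin(th1 + th2)
    return px, py, th1 + th2 + th3

for sol in solutions:
    print(np.allclose(fk(*sol), (x, y, phi)))  # True for both branches
```

For this fully stretched-out target both branches coincide (the elbow angle is zero), which matches the "all angles zero" comment in the evaluation of case i) above.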
corochann/chainer-hands-on-tutorial
src/01_chainer_intro/dataset_introduction.ipynb
mit
# Initial setup following http://docs.chainer.org/en/stable/tutorial/basic.html
import numpy as np
import chainer
from chainer import cuda, Function, gradient_check, report, training, utils, Variable
from chainer import datasets, iterators, optimizers, serializers
from chainer import Link, Chain, ChainList
import chainer.functions as F
import chainer.links as L
from chainer.training import extensions
import chainer.dataset
import chainer.datasets
"""
Explanation: Dataset module introduction
End of explanation
"""
from chainer.datasets import TupleDataset

x = np.arange(10)
t = x * x

data = TupleDataset(x, t)

print('data type: {}, len: {}'.format(type(data), len(data)))

# Unlike a numpy array, a TupleDataset has no shape property, so this raises an AttributeError.
data.shape
"""
Explanation: Built-in dataset modules
Some dataset formats are already implemented in chainer.datasets
TupleDataset
End of explanation
"""
# get the fourth data point -> x=3, t=9
data[3]
"""
Explanation: The i-th data point can be accessed as data[i], which is a tuple of the form ($x_i$, $t_i$, ...)
End of explanation
"""
# Get the 1st, 2nd, 3rd and 4th data points at the same time.
examples = data[0:4]

print(examples)
print('examples type: {}, len: {}'
      .format(type(examples), len(examples)))
"""
Explanation: Slice accessing
When TupleDataset is accessed by slice indexing, e.g.
data[i:j], the returned value is a list of tuples $[(x_i, t_i), ..., (x_{j-1}, t_{j-1})]$
End of explanation
"""
from chainer.dataset import concat_examples

data_minibatch = concat_examples(examples)
#print(data_minibatch)
#print('data_minibatch type: {}, len: {}'
#      .format(type(data_minibatch), len(data_minibatch)))

x_minibatch, t_minibatch = data_minibatch
# Now it is in array format, which has a shape
print('x_minibatch = {}, type: {}, shape: {}'.format(x_minibatch, type(x_minibatch), x_minibatch.shape))
print('t_minibatch = {}, type: {}, shape: {}'.format(t_minibatch, type(t_minibatch), t_minibatch.shape))
"""
Explanation: To convert examples into minibatch format, you can use the concat_examples function in chainer.dataset. Its return value is of the form ([x_array], [t_array], ...)
End of explanation
"""
from chainer.datasets import DictDataset

x = np.arange(10)
t = x * x

# To construct `DictDataset`, you can specify each key-value pair by passing "key=value" in kwargs.
data = DictDataset(x=x, t=t)

print('data type: {}, len: {}'.format(type(data), len(data)))

# Get the 3rd data point.
example = data[2]
print(example)
print('example type: {}, len: {}'
      .format(type(example), len(example)))

# You can access each value via its key
print('x: {}, t: {}'.format(example['x'], example['t']))
"""
Explanation: DictDataset
DictDataset is the dictionary analogue of TupleDataset: each example is a dict whose keys are the keyword arguments passed to the constructor, as shown above.
End of explanation
"""
import os

from chainer.datasets import ImageDataset

# print('Current directory: ', os.path.abspath(os.curdir))
filepath = './data/images.dat'
image_dataset = ImageDataset(filepath, root='./data/images')

print('image_dataset type: {}, len: {}'.format(type(image_dataset), len(image_dataset)))
"""
Explanation: ImageDataset
This is a utility class for image datasets. If the dataset becomes very large (for example the ImageNet dataset), it is not practical to load all the images into memory, unlike CIFAR-10 or CIFAR-100. In such a case, the ImageDataset class can be used to load each image from storage at every minibatch creation.
[Note] ImageDataset yields only the images; if you also need label information (for example when you are working on an image classification task), use LabeledImageDataset instead.
To use ImageDataset, you need to create a text file which contains the list of image paths. See data/images.dat for what the paths text file looks like.
End of explanation
"""
# Access the i-th image by image_dataset[i].
# The image data is loaded here, for the 0-th image only.
img = image_dataset[0]

# img is a numpy array, already aligned as (channels, height, width),
# which is the standard shape format to feed into a convolutional layer.
print('img', type(img), img.shape)
"""
Explanation: We have created the image_dataset above; however, the images are not loaded into memory yet. Image data is loaded into memory from storage every time you access it via an index, for efficient memory use.
End of explanation
"""
import os

from chainer.datasets import LabeledImageDataset

# print('Current directory: ', os.path.abspath(os.curdir))
filepath = './data/images_labels.dat'
labeled_image_dataset = LabeledImageDataset(filepath, root='./data/images')

print('labeled_image_dataset type: {}, len: {}'.format(type(labeled_image_dataset), len(labeled_image_dataset)))
"""
Explanation: LabeledImageDataset
This is a utility class for image datasets. It is similar to ImageDataset in that it loads image files from storage into memory at training time. The difference is that it also contains label information, which is typically used for image classification tasks. To use LabeledImageDataset, you need to create a text file which contains the list of image paths and labels. See data/images_labels.dat for what the text file looks like.
End of explanation
"""
# Access the i-th image and label by labeled_image_dataset[i].
# The image data is loaded here, for the 0-th image only.
img, label = labeled_image_dataset[0]
print('img', type(img), img.shape)
print('label', type(label), label)
"""
Explanation: We have created the labeled_image_dataset above; however, the images are not loaded into memory yet. Image data is loaded into memory from storage every time you access it via an index, for efficient memory use.
End of explanation
"""
# Split `data` into 2 randomly-chosen folds, e.g. for cross validation.
datasets.split_dataset_n_random(data, 2)
"""
Explanation: SubDataset
A SubDataset is a view of a subset of a base dataset, obtained e.g. via the split_dataset_* helpers. It can be used for cross validation.
End of explanation
"""
from chainer.dataset import DatasetMixin

print_debug = True

class SimpleDataset(DatasetMixin):
    def __init__(self, values):
        self.values = values

    def __len__(self):
        return len(self.values)

    def get_example(self, i):
        if print_debug:
            print('get_example, i = {}'.format(i))
        return self.values[i]
"""
Explanation: Implement your own custom dataset
You can define your own dataset by implementing a subclass of DatasetMixin in chainer.dataset
DatasetMixin
If you want to define a custom dataset, DatasetMixin provides the base functionality to make it compatible with the other dataset formats. Another important use of DatasetMixin is to preprocess the input data, including data augmentation.
To implement a subclass of DatasetMixin, you usually need to implement these 3 methods:
- Override the __init__(self, *args) method: it is not compulsory, but you usually need it to store the underlying data.
- Override the __len__(self) method: iterators need to know the length of the dataset to recognize the end of an epoch.
- Override the get_example(self, i) method: it returns the i-th example, and is where any per-item processing belongs.
End of explanation
"""
simple_data = SimpleDataset([0, 1, 4, 9, 16, 25])

# get_example(self, i) is called when data is accessed by data[i]
simple_data[3]

# data can be accessed using slice indexing as well
simple_data[1:3]
"""
Explanation: The important method in DatasetMixin is get_example(self, i).
This method is called whenever the data is accessed as data[i].
End of explanation
"""
import numpy as np

from chainer.dataset import DatasetMixin

print_debug = False

def calc(x):
    return x * x

class SquareNoiseDataset(DatasetMixin):
    def __init__(self, values):
        self.values = values

    def __len__(self):
        return len(self.values)

    def get_example(self, i):
        if print_debug:
            print('get_example, i = {}'.format(i))
        x = self.values[i]
        t = calc(x)
        t_noise = t + np.random.normal(0, 0.1)
        return x, t_noise

square_noise_data = SquareNoiseDataset(np.arange(10))
"""
Explanation: The important point is that get_example is called every time the data is accessed by [] indexing. Thus you may put random value generation for data augmentation in get_example.
End of explanation
"""
# Accessing the same index, but the value is different!
print('Accessing square_noise_data[3]')
print('1st: ', square_noise_data[3])
print('2nd: ', square_noise_data[3])
print('3rd: ', square_noise_data[3])

# The same applies to slice index accessing.
print('Accessing square_noise_data[0:4]')
print('1st: ', square_noise_data[0:4])
print('2nd: ', square_noise_data[0:4])
print('3rd: ', square_noise_data[0:4])
"""
Explanation: Below, SquareNoiseDataset adds small Gaussian noise to the original value; every time a value is accessed, get_example is called and different noise is added, even if you access the data with the same index.
End of explanation
"""
from chainer.dataset import concat_examples

examples = square_noise_data[0:4]
print('examples = {}'.format(examples))

data_minibatch = concat_examples(examples)
x_minibatch, t_minibatch = data_minibatch
# Now it is in array format, which has a shape
print('x_minibatch = {}, type: {}, shape: {}'.format(x_minibatch, type(x_minibatch), x_minibatch.shape))
print('t_minibatch = {}, type: {}, shape: {}'.format(t_minibatch, type(t_minibatch), t_minibatch.shape))
"""
Explanation: To convert examples into minibatch format, you can use the concat_examples function in chainer.dataset, in the same way as explained for TupleDataset.
End of explanation
"""
from chainer.datasets import TransformDataset

x = np.arange(10)
t = x * x - x

original_dataset = TupleDataset(x, t)

def transform_function(in_data):
    x_i, t_i = in_data
    new_t_i = t_i + np.random.normal(0, 0.1)
    return x_i, new_t_i

transformed_dataset = TransformDataset(original_dataset, transform_function)

original_dataset[:3]

# Now Gaussian noise is added (in transform_function) to the original_dataset.
transformed_dataset[:3]
"""
Explanation: TransformDataset
TransformDataset can be used to create a modified dataset from an existing one. A new (transformed) dataset is created by TransformDataset(original_dataset, transform_function). Let's see a concrete example that creates a new dataset from the original tuple dataset by adding small noise.
End of explanation
"""
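To make the mechanics concrete, here is a minimal, chainer-free stand-in for TransformDataset (an illustrative sketch only; the real class does more, e.g. it integrates with chainer's iterators):

```python
class MiniTransformDataset:
    """Minimal stand-in illustrating how TransformDataset wraps a base dataset."""
    def __init__(self, dataset, transform):
        self._dataset = dataset
        self._transform = transform

    def __len__(self):
        return len(self._dataset)

    def __getitem__(self, i):
        # the transform runs at access time, so per-access randomness
        # (e.g. augmentation noise) works exactly as with DatasetMixin
        return self._transform(self._dataset[i])

base = [(x, x * x - x) for x in range(10)]
doubled = MiniTransformDataset(base, lambda pair: (pair[0], 2 * pair[1]))
print(doubled[3])  # (3, 12)
```

Because the transform is applied lazily in `__getitem__`, the base dataset is never copied or modified.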
giacomov/astromodels
examples/Additional_features_for_scripts_and_applications.ipynb
bsd-3-clause
from astromodels import *

my_model = load_model("my_model.yml")
"""
Explanation: Additional features for scripts and applications
In this document we describe some features of the astromodels package which are useful in non-interactive environments such as scripts or applications.
First, let's import astromodels and load a model from a file, which we will use as an example:
End of explanation
"""
point_sources = my_model.point_sources
extended_sources = my_model.extended_sources

# Print the names of the point sources
print(point_sources.keys())

# Print the names of the extended sources
print(extended_sources.keys())
"""
Explanation: Get dictionaries of point and extended sources
If you don't know the details (such as the names) of the sources contained in the model, you can obtain dictionaries of point sources and extended sources like:
End of explanation
"""
for source_name, point_source in point_sources.iteritems():

    print("The model contains point source %s at %s" % (source_name, point_source.position))
"""
Explanation: You can use these dictionaries as usual. For example, you can loop over all point sources and print their positions:
End of explanation
"""
components = my_model.source_2.components

print(components.keys())
"""
Explanation: Accessing components and spectral shapes with no previous information
Similarly, you can access components and their spectral shapes (i.e., functions) without knowing the names in advance.
A dictionary containing the components of a given source can be obtained with:
End of explanation
"""
for source_name, point_source in my_model.point_sources.iteritems():

    print("Point source %s has components %s" % (source_name, point_source.components.keys()))
"""
Explanation: So now we can loop over all the sources and print their components:
End of explanation
"""
my_model.source_1.spectrum.main.powerlaw == my_model.source_1.spectrum.main.shape
"""
Explanation: With a fully-qualified path, you would need to know the name of the function to access its parameters. Instead, you can use the generic name "shape". For example, these two statements point to the same function instance:
End of explanation
"""
parameters = my_model.source_1.spectrum.main.powerlaw.parameters

print(parameters.keys())
"""
Explanation: Once you have a function instance, you can obtain a dictionary of its parameters as:
End of explanation
"""
for source_name, point_source in my_model.point_sources.iteritems():

    print("Found source %s" % source_name)

    print("    Position of point source: %s" % point_source.position)

    for component_name, component in point_source.components.iteritems():

        print("    Found component %s" % component_name)

        for parameter_name, parameter in component.shape.parameters.iteritems():

            print("        Found parameter %s" % parameter_name)
"""
Explanation: Putting it all together, let's loop over all sources in our model, then over each component in each source, then over each parameter in each component:
End of explanation
"""
import matplotlib.pyplot as plt

# Comment this out if you are not using the IPython notebook
%matplotlib inline

# Prepare 100 energies logarithmically spaced between 1 and 100 keV
energies = np.logspace(0, 2, 100)

# Now loop over all point sources and plot them
for source_name, point_source in my_model.point_sources.iteritems():

    # Plot the sum of all components for this source
    plt.loglog(energies, point_source(energies), label=source_name)

    # If there is more than one component, plot them also separately
    if len(point_source.components) > 1:

        for component_name, component in point_source.components.iteritems():

            plt.loglog(energies, component.shape(energies), '--', label="%s of %s" % (component_name, source_name))

# Add a legend
plt.legend(loc=0, frameon=False)

_ = plt.xlabel("Energy (keV)")
_ = plt.ylabel(r"Flux (ph cm$^{-2}$ s$^{-1}$ keV$^{-1}$)")
"""
Explanation: Getting the path of an element and using it programmatically
Whenever you have an element from the model, you can get its fully-qualified path by using the .path property. This, for example, will print the paths of all the parameters in the model:
End of explanation
"""
for source_name, point_source in my_model.point_sources.iteritems():

    for component_name, component in point_source.components.iteritems():

        for parameter_name, parameter in component.shape.parameters.iteritems():

            print(parameter.path)
"""
Explanation: If you have the path of an element in a string, you can use it to access the element with the [] operator of the Model class, like this:
End of explanation
"""
my_path = 'source_2.spectrum.IC.powerlaw.logK'

logK = my_model[my_path]

print(logK)
"""
Explanation: Alternative way of accessing the information in the model
We present here an alternative way to get information from the model without using dictionaries, using instead
source IDs. A source ID is just an ordinal number, separate for point sources and extended sources. Hence, the first point source has ID 0, the second point source has ID 1, and so on. Similarly, the first extended source has ID 0, the second has ID 1 and so on: End of explanation """ src_id = 1 src_name = my_model.get_point_source_name(src_id) ra, dec = my_model.get_point_source_position(src_id) # This will always return ra,dec # Prepare 100 energies logarithmically spaced between 1 and 100 keV energies = np.logspace(0,2,100) differential_flux = my_model.get_point_source_fluxes(src_id, energies) # Similar methods exist for extended sources (to be completed) """ Explanation: Once you have the ID of a source, you can obtain information about it with these methods of the Model class: End of explanation """
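As a sketch of the kind of script this ID-based API enables (note: the two power-law "sources" below are purely invented stand-ins, not astromodels objects or the real `get_point_source_fluxes` output), summing the model over all sources reduces to a loop like this:

```python
import numpy as np

# hypothetical stand-in sources: each is a power law K * E**index (illustrative only)
sources = {"source_1": (1e-2, -1.5), "source_2": (5e-3, -2.0)}

energies = np.logspace(0, 2, 100)

def total_flux(energies):
    # the same pattern as looping over source IDs and summing
    # the arrays returned by get_point_source_fluxes()
    return sum(K * energies**index for K, index in sources.values())

flux = total_flux(energies)
print(flux.shape)  # (100,)
```

The point is simply that, because the per-source fluxes are plain arrays evaluated on a common energy grid, combining them is ordinary numpy arithmetic.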
maxalbert/tohu
notebooks/v4/Custom_generators.ipynb
mit
import tohu from tohu.v4.primitive_generators import * from tohu.v4.derived_generators import * from tohu.v4.dispatch_generators import * from tohu.v4.custom_generator import * from tohu.v4.utils import print_generated_sequence, make_dummy_tuples print(f'Tohu version: {tohu.__version__}') """ Explanation: Custom generators End of explanation """ class QuuxGenerator(CustomGenerator): aa = Integer(100, 200) bb = HashDigest(length=6) cc = FakerGenerator(method='name') g = QuuxGenerator() print_generated_sequence(g, num=10, sep='\n', seed=12345) """ Explanation: Custom generator without __init__ method End of explanation """ class SomeGeneratorWithExplicitItemsName(CustomGenerator): __tohu_items_name__ = 'Foobar' aa = Integer(100, 200) bb = HashDigest(length=6) cc = FakerGenerator(method='name') g = SomeGeneratorWithExplicitItemsName() """ Explanation: Explicitly setting the name of generated items Let's repeat the previous example, but explicitly set the name of generated items by setting the __tohu_items_name__ attribute inside the custom generator. End of explanation """ print_generated_sequence(g, num=10, sep='\n', seed=12345) """ Explanation: The generated sequence is the same as above, but the name of the items has changed from Quux to Foobar. End of explanation """ class QuuxGenerator(CustomGenerator): aa = Integer(100, 200) def __init__(self, faker_method): self.bb = FakerGenerator(method=faker_method) # Note: the call to super().__init__() needs to be at the end, # and it needs to be passed the same arguments as the __init__() # method from which it is called (here: `faker_method`). 
super().__init__(faker_method) g1 = QuuxGenerator(faker_method='first_name') g2 = QuuxGenerator(faker_method='city') print_generated_sequence(g1, num=10, sep='\n', seed=12345); print() print_generated_sequence(g2, num=10, sep='\n', seed=12345) """ Explanation: Custom generator with __init__ method End of explanation """ some_tuples = make_dummy_tuples('abcdefghijklmnopqrstuvwxyz') #some_tuples[:5] """ Explanation: Custom generator containing derived generators End of explanation """ class QuuxGenerator(CustomGenerator): aa = SelectOne(some_tuples) bb = GetAttribute(aa, 'x') cc = GetAttribute(aa, 'y') g = QuuxGenerator() print_generated_sequence(g, num=10, sep='\n', seed=12345) """ Explanation: Example: extracting attributes End of explanation """ def square(x): return x * x def add(x, y): return x + y class QuuxGenerator(CustomGenerator): aa = Integer(0, 20) bb = Integer(0, 20) cc = Apply(add, aa, Apply(square, bb)) g = QuuxGenerator() print_generated_sequence(g, num=10, sep='\n', seed=12345) df = g.generate(num=100, seed=12345).to_df() print(list(df['aa'][:20])) print(list(df['bb'][:20])) print(list(df['cc'][:20])) all(df['aa'] + df['bb']**2 == df['cc']) """ Explanation: Example: arithmetic End of explanation """ class QuuxGenerator(CustomGenerator): name = FakerGenerator(method="name") tag = SelectOne(['a', 'bb', 'ccc']) g = QuuxGenerator() quux_items = g.generate(num=100, seed=12345) quux_items.to_df().head(5) tag_lookup = { 'a': [1, 2, 3, 4, 5], 'bb': [10, 20, 30, 40, 50], 'ccc': [100, 200, 300, 400, 500], } class FoobarGenerator(CustomGenerator): some_quux = SelectOne(quux_items) number = SelectOneDerived(Lookup(GetAttribute(some_quux, 'tag'), tag_lookup)) h = FoobarGenerator() h_items = h.generate(10000, seed=12345) df = h_items.to_df(fields={'name': 'some_quux.name', 'tag': 'some_quux.tag', 'number': 'number'}) df.head() print(df.query('tag == "a"')['number'].isin([1, 2, 3, 4, 5]).all()) print(df.query('tag == "bb"')['number'].isin([10, 20, 30, 40, 
50]).all()) print(df.query('tag == "ccc"')['number'].isin([100, 200, 300, 400, 500]).all()) df.query('tag == "a"').head(5) df.query('tag == "bb"').head(5) df.query('tag == "ccc"').head(5) """ Explanation: Example: multi-stage dependencies End of explanation """
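The two-stage draw above can be mimicked in a few lines of plain Python (a sketch of the idea only, not how tohu implements it), which makes explicit the invariant verified by the df.query checks:

```python
import random

tag_lookup = {
    'a': [1, 2, 3, 4, 5],
    'bb': [10, 20, 30, 40, 50],
    'ccc': [100, 200, 300, 400, 500],
}

def generate(num, seed):
    # first draw the tag, then draw the number from the row selected by that tag
    rng = random.Random(seed)
    items = []
    for _ in range(num):
        tag = rng.choice(list(tag_lookup))
        items.append((tag, rng.choice(tag_lookup[tag])))
    return items

items = generate(1000, seed=12345)
print(all(number in tag_lookup[tag] for tag, number in items))  # True
```

Every generated number is guaranteed to come from the list selected by its own tag, which is exactly what the three `.isin(...)` checks on the dataframe assert.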
basp/notes
single_value_calculus.ipynb
mit
c1 = lambda x: x + 1
c2 = lambda x: -x + 2

x1 = np.linspace(0.01, 2, 10)
x2 = np.linspace(-2, -0.01, 10)

plt.plot(x1, c1(x1), label=r"$y = x + 1$")
plt.plot(x2, c2(x2), label=r"$y = -x + 2$")
plt.plot(0, 2, 'wo', markersize=7)
plt.plot(0, 1, 'wo', markersize=7)

ax = plt.axes()
ax.set_ylim(0, 4)

plt.legend(loc=3)
"""
Explanation: notation for differentiation
We'll mostly use Lagrange's notation: the first three derivatives of a function $f$ are denoted $f'$, $f''$ and $f'''$. After that we'll use $f^{(4)}, f^{(5)}, \ldots, f^{(n)}$.
limits
Below is a function $f$ with two cases.
$ f(x) = \begin{cases} x + 1, & \text{if $x \gt 0$} \\ -x + 2, & \text{if $x \lt 0$} \end{cases} $
Notice that for this function, the value of $y$ at $x = 0$ is undefined. Also note that this is a discontinuous function.
End of explanation
"""
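As a purely numerical illustration (a sketch added here, not part of the original notes), we can watch the one-sided limits emerge by sampling $f$ ever closer to $0$ from each side:

```python
import numpy as np

f = lambda x: np.where(x > 0, x + 1, -x + 2)  # our case function (never evaluated at 0)

eps = 10.0 ** -np.arange(1, 8)  # 0.1, 0.01, ..., 1e-7
right = f(eps)    # values approaching the right-hand limit, 1
left = f(-eps)    # values approaching the left-hand limit, 2
print(right[-1], left[-1])
```

The right-hand samples squeeze down to 1 and the left-hand samples to 2, matching the two limits computed below.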
A function $f$ is continuous at $x_0$ when $\lim_{x\to x_0}f(x) = f(x_0)$ discontinuous functions jump discontinuity The limits of a funcion $f(x)$ at $x_0$ exist but are not equal. This is basically the example from above. $$\lim_{x^+\to x_0}f(x) \neq \lim_{x^-\to x_0}f(x)$$ removable discontinuity Let $g(x) = \frac{\sin x}{x}$ and $h(x) = \frac{1 - \cos x}{x}$ End of explanation """ f = lambda x: 1/x x1 = np.linspace(-0.5, -0.01, 1000) x2 = np.linspace(0.01, 0.5, 1000) ax = plt.axes() #ax.spines['left'].set_position(('data', 0)) #ax.spines['bottom'].set_position(('data', 0)) ax.set_xlim(-0.1, 0.1) plt.plot(x1, f(x1), 'b') plt.plot(x2, f(x2), 'b') """ Explanation: Note that dividing by zero is an undefined operation, both of these functions are undefined for when $x = 0$ so we'll have two little circles in the plot. However we can see that $\lim_{x^+\to 0}g(x) = 1$ and that $\lim_{x^-\to 0}g(x) = 1$ so generally we can say that $\lim_{x\to 0}g(x) = 1$. We can also see that $\lim_{x^+\to 0}h(x) = 0$ and $\lim_{x^-\to 0}h(x) = 0$ so $\lim_{x\to 0}h(x) = 0$. Because for both functions $\lim_{x^+\to 0} = \lim_{x^-\to 0}$ we can say that these functions have a removable discontinuity at $x = 0$ infinite discontinuity This time we'll use $y = f(x) = \frac{1}{x}$ End of explanation """ f0 = lambda x: 1/x f1 = lambda x: -1/x**2 x1 = np.linspace(-0.5, -0.01, 1000) x2 = np.linspace(0.01, 0.5, 1000) p1 = plt.subplot(211) p1.set_xlim(-0.1, 0.1) plt.plot(x1, f0(x1), 'b', label=r"$y = 1/x$") plt.plot(x2, f0(x2), 'b') plt.legend(loc=4) p2 = plt.subplot(212) p2.set_xlim(-0.1, 0.1) p2.set_ylim(-2000, 0) plt.plot(x1, f1(x1), 'g', label=r"$y = -1/x^2$") plt.plot(x2, f1(x2), 'g') plt.legend(loc=4) """ Explanation: Now we see that $\lim_{x^+\to 0}\frac{1}{x} = \infty$ and $\lim_{x^-\to 0}\frac{1}{x} = -\infty$ and even though some people might say that these limits are undefined they are going in a definite direction so if able we should specify what they are. 
However we cannot say that $\lim_{x\to 0}\frac{1}{x} = \infty$; even though this is sometimes done, it's usually because people are being sloppy and only considering $y = \frac{1}{x}$ for $x \gt 0$.
There's an interesting thing we can observe when we plot the derivative of this function.
End of explanation
"""

f = lambda x: np.sin(1/x)

x1 = np.linspace(-0.1, -0.01, 100)
x2 = np.linspace(0.01, 0.1, 100)

ax = plt.axes()
ax.set_xlim(-0.1, 0.1)
ax.set_ylim(-1.2, 1.2)

plt.plot(x1, f(x1))
plt.plot(x2, f(x2))
"""
Explanation: If we take the derivative of an odd function we get an even function.
other (ugly) discontinuities
Take for example the function $y = \sin \frac{1}{x}$ as $x\to 0$
End of explanation
"""

f = lambda x: x**2

fig, ax = plt.subplots()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_position(('data', 0))
ax.spines['left'].set_position(('data', 0))
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticklabels(['$x_0$', '$x$'])
ax.yaxis.set_ticklabels(['$y_0$', '$y$'])
ax.xaxis.set_ticks([1, 1.5])
ax.yaxis.set_ticks([1, f(1.5)])
ax.set_xlim(-1, 2)
ax.set_ylim(-1, 3)

x = np.linspace(-1, 2, 100)
plt.plot(x, f(x))
plt.plot(1, f(1), 'ko')
plt.plot(1.5, f(1.5), 'ko')
plt.plot([1, 1.5], [f(1), f(1)], 'k--')
plt.plot([1.5, 1.5], [f(1), f(1.5)], 'k--')
plt.plot([1, 1.5], [f(1), f(1.5)], 'k--')
plt.annotate('$P$', (0.8, 1))
plt.annotate('$Q$', (1.3, f(1.5)))
plt.annotate('$\Delta{x}$', (1.25, 0.75))
plt.annotate('$\Delta{f}$', (1.55, 1.5))
"""
Explanation: As we approach $x = 0$ it will oscillate into infinity. There is no left or right limit in this case.
rate of change
Let's start with the question of what a derivative is. We'll look at a few different aspects:

Geometric interpretation
Physical interpretation
Importance to measurements

We'll start with the geometric interpretation.
geometric interpretation
Find the tangent line to the graph of some function $y = f(x)$ at some point $P = (x_0, y_0)$. We also know this line can be written as the equation $y - y_0 = m(x - x_0)$. In order to figure out this equation we need to know two things: the point $P$, which is $(x_0, y_0)$ where $y_0 = f(x_0)$, and the value of $m$, which is the slope of the line. In calculus we also call this the derivative, or $f'(x)$.
End of explanation
"""

f = lambda x: 1 / (1 + x**2)

x = np.linspace(-2, 2, 100)
y = f(x)

ax = plt.axes()
ax.set_ylim(0, 1.25)

plt.plot(x, y, label=r"$y = \frac{1}{1 + x^2}$")
plt.legend()
"""
Explanation: We can now define $f'(x_0)$, the derivative of $f$ at $x_0$: it is the slope of the tangent line to $y = f(x)$ at the point $P$. The tangent line is equal to the limit of secant lines $PQ$ as $Q\to P$, where $P$ is fixed. In the picture above we can see that the slope of our secant line $PQ$ is simply defined as $\frac{\Delta{f}}{\Delta{x}}$. We can now define the slope $m$ of our tangent line as
$$m = \lim_{\Delta{x}\to 0}\frac{\Delta{f}}{\Delta{x}}$$
The next thing we want to do is to write $\Delta{f}$ more explicitly. We already have $P = (x_0, f(x_0))$ and $Q = (x_0 + \Delta{x}, f(x_0 + \Delta{x}))$. With this information we can write down:
$$f'(x_0) = m = \lim_{\Delta{x}\to 0}\frac{f(x_0 + \Delta{x}) - f(x_0)}{\Delta{x}}$$
recital
Let $f(x) = \frac{1}{1 + x^2}$. Graph $y = f(x)$ and compute $f'(x)$.
End of explanation
"""

f_acc = lambda x: (-2 * x) / ((1 + x**2)**2)

x = np.linspace(-2, 2, 100)
plt.plot(x, f_acc(x))
"""
Explanation: Now let's compute $f'(x)$.
$$
\begin{align}
f'(x) & = \lim_{\Delta{x}\to 0}\frac{f(x + \Delta{x}) - f(x)}{\Delta{x}} \\
& = \lim_{\Delta{x}\to 0}\frac{\frac{1}{1 + (x + \Delta{x})^2} - \frac{1}{1 + x^2}}{\Delta{x}} \\
& = \lim_{\Delta{x}\to 0}\frac{1}{\Delta{x}}\frac{1 + x^2 - (1 + (x + \Delta{x})^2)}{(1 + (x + \Delta{x})^2)(1 + x^2)} \\
& = \lim_{\Delta{x}\to 0}\frac{1}{\Delta{x}}\frac{1 + x^2 - 1 - x^2 - 2x\Delta{x} - \Delta{x}^2}{(1 + (x + \Delta{x})^2)(1 + x^2)} \\
& = \lim_{\Delta{x}\to 0}\frac{-2x - \Delta{x}}{(1 + (x + \Delta{x})^2)(1 + x^2)} \\
& = \frac{-2x}{(1 + x^2)^2}
\end{align}
$$
End of explanation
"""
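As a quick numerical sanity check on the derivation above (a sketch that is not part of the original notebook), the difference quotient from the limit definition of the derivative should approach the closed form $f'(x) = -2x/(1+x^2)^2$ as $\Delta{x}$ shrinks:

```python
# Compare the difference quotient from the limit definition against
# the closed form f'(x) = -2x / (1 + x^2)^2 derived above.
def f(x):
    return 1.0 / (1.0 + x**2)

def f_prime(x):
    return -2.0 * x / (1.0 + x**2)**2

def difference_quotient(x, dx=1e-6):
    # (f(x + dx) - f(x)) / dx approximates the limit as dx -> 0
    return (f(x + dx) - f(x)) / dx

for x0 in (-2.0, -0.5, 0.5, 2.0):
    assert abs(f_prime(x0) - difference_quotient(x0)) < 1e-4
print("difference quotient matches the derivative formula")
```

For a smaller `dx` the agreement gets tighter, which is exactly what the limit in the definition promises.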
darkomen/TFG
medidas/0107150/.ipynb_checkpoints/Analisis-checkpoint.ipynb
cc0-1.0
%pylab inline

# Import the libraries used
import numpy as np
import pandas as pd
import seaborn as sns

# Show the version used for each library
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))

# Open the CSV file with the sample data
datos = pd.read_csv('M1.CSV')

# Store in a list the file columns we are going to work with
columns = ['Diametro X','Diametro Y', 'VELOCIDAD']

# Show a summary of the collected data
datos[columns].describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]
"""
Explanation: Analysis of the collected data
Use of IPython to analyze and display the data collected during production. An expert controller is implemented. The analyzed data is from August 18, 2015.
Experiment data:
* Start time: 16:08
* End time: 16:35
* Extruded filament:
* $T: 150ºC$
* $V_{min} tractora: 1.5 mm/s$
* $V_{max} tractora: 3.4 mm/s$
* The speed increments in the expert system rules are different:
* In cases 3 and 5 an increment of +2 is kept.
* In cases 4 and 6 the increment is reduced to -1.
End of explanation
"""

datos.ix[:, "Diametro X":"Diametro Y"].plot(figsize=(16,10),ylim=(0.5,3)).hlines([1.85,1.65],0,3500,colors='r')
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
"""
Explanation: We plot both diameters and the puller speed on the same figure.
End of explanation
"""

plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
"""
Explanation: With this second approach the data has been stabilized. We will try to lower that percentage. As a fourth approach, we are going to modify the pulling speeds. The proposed speed range is 1.5 to 5.3, keeping the expert system increments as in the current trial.
Comparison of Diametro X versus Diametro Y to check the filament's aspect ratio.
End of explanation
"""

datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
#datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
"""
Explanation: Data filtering
Samples where $d_x < 0.9$ or $d_y < 0.9$ are assumed to be sensor errors, so we filter them out of the collected samples.
End of explanation
"""

plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
"""
Explanation: Plot of X vs. Y
End of explanation
"""

ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']
ratio.describe()

rolling_mean = pd.rolling_mean(ratio, 50)
rolling_std = pd.rolling_std(ratio, 50)
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
"""
Explanation: We analyze the ratio data.
End of explanation
"""

Th_u = 1.85
Th_d = 1.65

data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) | (datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
"""
Explanation: Quality limits
We compute the number of times the quality limits are exceeded, with $Th^+ = 1.85$ and $Th^- = 1.65$.
End of explanation
"""
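A minimal, self-contained illustration of the quality-limit count performed above, using made-up diameter samples instead of the CSV data (only the threshold values come from the notebook):

```python
# Toy version of the quality-limit check: count samples outside [Th_d, Th_u].
Th_u = 1.85
Th_d = 1.65

samples = [1.70, 1.84, 1.90, 1.66, 1.60, 1.75, 1.86, 1.72]
violations = [d for d in samples if d > Th_u or d < Th_d]

print(len(violations), "of", len(samples), "samples outside the limits")
```

The same boolean condition is what the pandas mask above applies element-wise to the `Diametro X` and `Diametro Y` columns.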
cmmorrow/sci-analysis
docs/using_sci_analysis.ipynb
mit
import warnings warnings.filterwarnings("ignore") import numpy as np import scipy.stats as st from sci_analysis import analyze """ Explanation: Using sci-analysis From the python interpreter or in the first cell of a Jupyter notebook, type: End of explanation """ %matplotlib inline import numpy as np import scipy.stats as st from sci_analysis import analyze """ Explanation: This will tell python to import the sci-analysis function analyze(). Note: Alternatively, the function analyse() can be imported instead, as it is an alias for analyze(). For the case of this documentation, analyze() will be used for consistency. If you are using sci-analysis in a Jupyter notebook, you need to use the following code instead to enable inline plots: End of explanation """ np.random.seed(987654321) data = st.norm.rvs(size=1000) analyze(xdata=data) """ Explanation: Now, sci-analysis should be ready to use. Try the following code: End of explanation """ pets = ['dog', 'cat', 'rat', 'cat', 'rabbit', 'dog', 'hamster', 'cat', 'rabbit', 'dog', 'dog'] analyze(pets) """ Explanation: A histogram, box plot, summary stats, and test for normality of the data should appear above. Note: numpy and scipy.stats were only imported for the purpose of the above example. sci-analysis uses numpy and scipy internally, so it isn't necessary to import them unless you want to explicitly use them. A histogram and statistics for categorical data can be performed with the following command: End of explanation """ from inspect import signature print(analyze.__name__, signature(analyze)) print(analyze.__doc__) """ Explanation: Let's examine the analyze() function in more detail. 
Here's the signature for the analyze() function: End of explanation """ example1 = [0.2, 0.25, 0.27, np.nan, 0.32, 0.38, 0.39, np.nan, 0.42, 0.43, 0.47, 0.51, 0.52, 0.56, 0.6] example2 = [0.23, 0.27, 0.29, np.nan, 0.33, 0.35, 0.39, 0.42, np.nan, 0.46, 0.48, 0.49, np.nan, 0.5, 0.58] analyze(example1, example2) """ Explanation: analyze() will detect the desired type of data analysis to perform based on whether the ydata argument is supplied, and whether the xdata argument is a two-dimensional array-like object. The xdata and ydata arguments can accept most python array-like objects, with the exception of strings. For example, xdata will accept a python list, tuple, numpy array, or a pandas Series object. Internally, iterable objects are converted to a Vector object, which is a pandas Series of type float64. Note: A one-dimensional list, tuple, numpy array, or pandas Series object will all be referred to as a vector throughout the documentation. If only the xdata argument is passed and it is a one-dimensional vector of numeric values, the analysis performed will be a histogram of the vector with basic statistics and Shapiro-Wilk normality test. This is useful for visualizing the distribution of the vector. If only the xdata argument is passed and it is a one-dimensional vector of categorical (string) values, the analysis performed will be a histogram of categories with rank, frequencies and percentages displayed. If xdata and ydata are supplied and are both equal length one-dimensional vectors of numeric data, an x/y scatter plot with line fit will be graphed and the correlation between the two vectors will be calculated. If there are non-numeric or missing values in either vector, they will be ignored. Only values that are numeric in each vector, at the same index will be included in the correlation. 
For example, the following two vectors will yield:
End of explanation
"""

example1 = [0.2, 0.25, 0.27, np.nan, 0.32, 0.38, 0.39, np.nan, 0.42, 0.43, 0.47, 0.51, 0.52, 0.56, 0.6]
example2 = [0.23, 0.27, 0.29, np.nan, 0.33, 0.35, 0.39, 0.42, np.nan, 0.46, 0.48, 0.49, np.nan, 0.5, 0.58]
analyze(example1, example2)
"""
Explanation: If xdata is a sequence or dictionary of vectors, a location test and summary statistics for each vector will be performed. If each vector is normally distributed and they all have equal variance, a one-way ANOVA is performed. If the data is not normally distributed or the vectors do not have equal variance, a non-parametric Kruskal-Wallis test will be performed instead of a one-way ANOVA.
Note: Vectors should be independent from one another --- that is to say, there shouldn't be values in one vector that are derived from or somehow related to a value in another vector. These dependencies can lead to weird and often unpredictable results.
A proper use case for a location test would be if you had a table with measurement data for multiple groups, such as test scores per class, average height per country or measurements per trial run, where the classes, countries, and trials are the groups. In this case, each group should be represented by its own vector, which are then all wrapped in a dictionary or sequence.
If xdata is supplied as a dictionary, the keys are the names of the groups and the values are the array-like objects that represent the vectors. Alternatively, xdata can be a python sequence of the vectors and the groups argument a list of strings of the group names. The order of the group names should match the order of the vectors passed to xdata.
Note: Passing the data for each group into xdata as a sequence or dictionary is often referred to as "unstacked" data. With unstacked data, the values for each group are in their own vector.
Alternatively, if values are in one vector and group names in another vector of equal length, this format is referred to as "stacked" data. The analyze() function can handle either stacked or unstacked data depending on which is most convenient. For example: End of explanation """ np.random.seed(987654321) group_a = st.norm.rvs(0.0, 1, size=50) group_b = st.norm.rvs(0.0, 3, size=25) group_c = st.norm.rvs(0.1, 1, size=30) group_d = st.norm.rvs(0.0, 1, size=40) analyze({"Group A": group_a, "Group B": group_b, "Group C": group_c, "Group D": group_d}) """ Explanation: In the example above, sci-analysis is telling us the four groups are normally distributed (by use of the Bartlett Test, Oneway ANOVA and the near straight line fit on the quantile plot), the groups have equal variance and the groups have matching means. The only significant difference between the four groups is the sample size we specified. Let's try another example, but this time change the variance of group B: End of explanation """ np.random.seed(987654321) group_a = st.norm.rvs(0.0, 1, size=50) group_b = st.norm.rvs(0.0, 3, size=25) group_c = st.weibull_max.rvs(1.2, size=30) group_d = st.norm.rvs(0.0, 1, size=40) analyze({"Group A": group_a, "Group B": group_b, "Group C": group_c, "Group D": group_d}) """ Explanation: In the example above, group B has a standard deviation of 2.75 compared to the other groups that are approximately 1. The quantile plot on the right also shows group B has a much steeper slope compared to the other groups, implying a larger variance. Also, the Kruskal-Wallis test was used instead of the Oneway ANOVA because the pre-requisite of equal variance was not met. In another example, let's compare groups that have different distributions and different means: End of explanation """
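The stacked vs. unstacked distinction described above can be illustrated without sci-analysis at all; here is a sketch (with made-up numbers) of converting stacked data — one value vector plus one group-label vector — into the unstacked dictionary form that analyze() accepts:

```python
# Convert "stacked" data (values + group labels) to "unstacked" form:
# a dict mapping each group name to its own list of values.
values = [1.2, 3.4, 2.2, 0.5, 2.8, 1.9]
groups = ['a', 'b', 'a', 'b', 'a', 'b']

unstacked = {}
for group, value in zip(groups, values):
    unstacked.setdefault(group, []).append(value)

print(unstacked)  # {'a': [1.2, 2.2, 2.8], 'b': [3.4, 0.5, 1.9]}
```

The resulting dictionary has the same shape as the `{"Group A": group_a, ...}` argument used in the examples above.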
marksibrahim/musings
notebooks/.ipynb_checkpoints/A Neural Network Classifier using Keras-checkpoint.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import seaborn as sns import numpy as np import pandas as pd from sklearn.cross_validation import train_test_split from sklearn.linear_model import LogisticRegressionCV from sklearn import datasets from keras.models import Sequential from keras.layers.core import Dense, Activation from keras.utils import np_utils """ Explanation: Neural Network Classifier Neural networks can learn End of explanation """ iris = datasets.load_iris() iris_df = pd.DataFrame(data= np.c_[iris['data'], iris['target']], columns= iris['feature_names'] + ['target']) iris_df.head() """ Explanation: Load Iris Data End of explanation """ sns.pairplot(iris_df, hue="target") X = iris_df.values[:, :4] Y = iris_df.values[: , 4] """ Explanation: Targets 0, 1, 2 correspond to three species: setosa, versicolor, and virginica. End of explanation """ train_X, test_X, train_Y, test_Y = train_test_split(X, Y, train_size=0.5, random_state=0) """ Explanation: Split into Training and Testing End of explanation """ lr = LogisticRegressionCV() lr.fit(train_X, train_Y) print("Accuracy = {:.2f}".format(lr.score(test_X, test_Y))) """ Explanation: Let's test out a Logistic Regression Classifier End of explanation """ # Let's Encode the Output in a vector (one hot encoding) # since this is what the network outputs def one_hot_encode_object_array(arr): '''One hot encode a numpy array of objects (e.g. 
strings)''' uniques, ids = np.unique(arr, return_inverse=True) return np_utils.to_categorical(ids, len(uniques)) train_y_ohe = one_hot_encode_object_array(train_Y) test_y_ohe = one_hot_encode_object_array(test_Y) """ Explanation: Let's Train a Neural Network Classifier End of explanation """ model = Sequential() model.add(Dense(16, input_shape=(4,))) model.add(Activation("sigmoid")) # define output layer model.add(Dense(3)) # softmax is used here, because there are three classes (sigmoid only works for two classes) model.add(Activation("softmax")) # define loss function and optimization model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]) """ Explanation: Defining the Network we have four features and three classes input layer must have 4 units output must have 3 we'll add a single hidden layer (choose 16 units) End of explanation """ model.fit(train_X, train_y_ohe, epochs=100, batch_size=1, verbose=0) loss, accuracy = model.evaluate(test_X, test_y_ohe, verbose=0) print("Accuracy = {:.2f}".format(accuracy)) """ Explanation: What's happening here? 
optimizer: examples include stochastic gradient descent (moving down the direction of steepest descent) and ADAM (the one selected above), which stands for Adaptive Moment Estimation; it is similar to stochastic gradient descent, but looks at an exponentially decaying average of past gradients and has a different update rule.
loss: classification error or mean squared error are fine options; categorical cross entropy is supposedly a better option for computing the gradient.
End of explanation
"""

stochastic_net = Sequential()
stochastic_net.add(Dense(16, input_shape=(4,)))
stochastic_net.add(Activation("sigmoid"))

stochastic_net.add(Dense(3))
stochastic_net.add(Activation("softmax"))

stochastic_net.compile(optimizer="sgd", loss="categorical_crossentropy", metrics=["accuracy"])

stochastic_net.fit(train_X, train_y_ohe, epochs=100, batch_size=1, verbose=0)

loss, accuracy = stochastic_net.evaluate(test_X, test_y_ohe, verbose=0)
print("Accuracy = {:.2f}".format(accuracy))
"""
Explanation: Nice! Much better performance than logistic regression! How about training with stochastic gradient descent?
End of explanation
"""
regardscitoyens/consultation_an
exploitation/analyse_quanti.ipynb
agpl-3.0
#contributions = pd.read_json(path_or_buf='../data/EGALITE4.brut.json', orient="columns") def loadContributions(file, withsexe=False): contributions = pd.read_json(path_or_buf=file, orient="columns") rows = []; rindex = []; for i in range(0, contributions.shape[0]): row = {}; row['id'] = contributions['id'][i] rindex.append(contributions['id'][i]) if (withsexe): if (contributions['sexe'][i] == 'Homme'): row['sexe'] = 0 else: row['sexe'] = 1 for question in contributions['questions'][i]: if (question.get('Reponse')): # and (question['texte'][0:5] != 'Savez') : row[question['titreQuestion']+' : '+question['texte']] = 1 for criteres in question.get('Reponse'): # print(criteres['critere'].keys()) row[question['titreQuestion']+'. (Réponse) '+question['texte']+' -> '+str(criteres['critere'].get('texte'))] = 1 rows.append(row) df = pd.DataFrame(data=rows) df.fillna(0, inplace=True) return df df = loadContributions('../data/EGALITE1.brut.json', True) df = df.merge(right=loadContributions('../data/EGALITE2.brut.json'), how='outer', right_on='id', left_on='id') df = df.merge(right=loadContributions('../data/EGALITE3.brut.json'), how='outer', right_on='id', left_on='id') df = df.merge(right=loadContributions('../data/EGALITE4.brut.json'), how='outer', right_on='id', left_on='id') df = df.merge(right=loadContributions('../data/EGALITE5.brut.json'), how='outer', right_on='id', left_on='id') df = df.merge(right=loadContributions('../data/EGALITE6.brut.json'), how='outer', right_on='id', left_on='id') df.fillna(0, inplace=True) df.index = df['id'] df.to_csv('consultation_an.csv', format='%d') #df.columns = ['Q_' + str(col+1) for col in range(len(df.columns) - 2)] + ['id' , 'sexe'] df.head() df = loadContributions('../data/EGALITE4.brut.json', True) """ Explanation: Reading the data End of explanation """ from sklearn.cluster import KMeans from sklearn import metrics import numpy as np X = df.drop('id', axis=1).values def train_kmeans(nb_clusters, X): kmeans = 
KMeans(n_clusters=nb_clusters, random_state=0).fit(X)
    return kmeans
    #print(kmeans.predict(X))
    #kmeans.cluster_centers_

def select_nb_clusters():
    perfs = {};
    for nbclust in range(2,10):
        kmeans_model = train_kmeans(nbclust, X);
        labels = kmeans_model.labels_
        # from http://scikit-learn.org/stable/modules/clustering.html#calinski-harabaz-index
        # we are in an unsupervised model. cannot get better!
        # perfs[nbclust] = metrics.calinski_harabaz_score(X, labels);
        perfs[nbclust] = metrics.silhouette_score(X, labels);
    print(perfs);
    return perfs;

df['clusterindex'] = train_kmeans(4, X).predict(X)
#df

perfs = select_nb_clusters();
# result :
# {2: 341.07570462155348, 3: 227.39963334619881, 4: 186.90438345452918, 5: 151.03979976346525, 6: 129.11214073405731, 7: 112.37235520885432, 8: 102.35994869157568, 9: 93.848315820675438}

optimal_nb_clusters = max(perfs, key=perfs.get);
print("optimal_nb_clusters", optimal_nb_clusters);
"""
Explanation: Build clustering model
Here we build a k-means model and select the "optimal" number of clusters. Here we see that the optimal number of clusters is 2.
End of explanation
"""

km_model = train_kmeans(optimal_nb_clusters, X);
df['clusterindex'] = km_model.predict(X)
lGroupBy = df.groupby(['clusterindex']).mean();
# km_model.__dict__

cluster_profile_counts = df.groupby(['clusterindex']).count();
cluster_profile_means = df.groupby(['clusterindex']).mean();
global_counts = df.count()
global_means = df.mean()

cluster_profile_counts.head()
#cluster_profile_means.head()
#df.info()

df_profiles = pd.DataFrame();
nbclusters = cluster_profile_means.shape[0]
df_profiles['clusterindex'] = range(nbclusters)
for col in cluster_profile_means.columns:
    if(col != "clusterindex"):
        df_profiles[col] = np.zeros(nbclusters)
        for cluster in range(nbclusters):
            df_profiles[col][cluster] = cluster_profile_means[col][cluster]
        # row.append(df[col].mean());

df_profiles.head()
#print(df_profiles.columns)

intereseting_columns = {};
for col in df_profiles.columns:
    if(col != "clusterindex"):
        global_mean = df[col].mean()
        diff_means_global = abs(df_profiles[col] - global_mean).max();
        # print(col , diff_means_global)
        if(diff_means_global > 0.1):
            intereseting_columns[col] = True

#print(intereseting_columns)

%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
#cols = [ col for col in cluster_profile_counts.columns]
#cluster_profile_means.ix[0].plot.bar()
"""
Explanation: Build the optimal model and apply it
End of explanation
"""

interesting = list(intereseting_columns.keys())
df_profiles_sorted = df_profiles[interesting].sort_index(axis=1)
df_profiles_sorted.plot.bar(figsize=(1, 1))
df_profiles_sorted.plot.bar(figsize=(16, 8), legend=False)
df_profiles_sorted.T
df_profiles.sort_index(axis=1).T
"""
Explanation: Cluster Profiles
Here, the optimal model has two clusters: cluster 0 with 399 cases, and cluster 1 with 537 cases. Since this model is based on binary inputs, the best description of the clusters is the distribution of zeros and ones of each input (question).
The figure below gives the cluster profiles of this model, cluster 0 on the left and cluster 1 on the right. The questions involved are different (highest bars).
End of explanation
"""
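The cluster-count selection step above boils down to an argmax over a {n_clusters: score} dictionary; a toy illustration with made-up silhouette scores (the real scores come from metrics.silhouette_score):

```python
# Picking the cluster count with the best score, as select_nb_clusters() does.
perfs = {2: 0.41, 3: 0.38, 4: 0.52, 5: 0.47}
optimal_nb = max(perfs, key=perfs.get)
print("optimal_nb_clusters", optimal_nb)  # optimal_nb_clusters 4
```

`max(perfs, key=perfs.get)` returns the *key* whose associated score is largest, which is exactly the idiom used in the notebook.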
TimothyHelton/k2datascience
notebooks/HR_Exercise.ipynb
bsd-3-clause
from k2datascience import hr_analytics from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" %matplotlib inline """ Explanation: HR Dataset - Statistics Review Timothy Helton <br> <font color="red"> NOTE: <br> This notebook uses code found in the <a href="https://github.com/TimothyHelton/k2datascience/blob/master/k2datascience/hr_analytics.py"> <strong>k2datascience.hr_analytics</strong></a> module. To execute all the cells do one of the following items: <ul> <li>Install the k2datascience package to the active Python interpreter.</li> <li>Add k2datascience/k2datascience to the PYTHON_PATH system variable.</li> <li>Create a link to the hr_analytics.py file in the same directory as this notebook.</li> </font> Imports End of explanation """ hr = hr_analytics.HR() """ Explanation: Load Data End of explanation """ print(f'Data Shape\n\n{hr.data.shape}') print('\n\nColumns\n\n{}'.format('\n'.join(hr.data.columns))) hr.data.head() hr.box_plot(); """ Explanation: Explore the data The data set we will use for this exercise comes from a Kaggle challenge and is often used for predictive analytics, namely to predict why the best and most experienced employees tend to leave the company. We won't be using it for any predictive purposes here, but will instead use this data set to review many of the concepts explored in the Statistical Inference text. This data contains fields for various measures of employee performance and reported satisfaction levels, as well as some categorical variables for events and salary level. For now, just explore the data a bit to get a general idea of what is going on. 
End of explanation """ print(f'P(employee left the company) = {hr.p_left_company:.3f}') print(f'P(employee experienced a work accident) = {hr.p_work_accident:.3f}') print(f'P(employee experienced accident and left company) = {hr.p_left_and_accident:.3f}') """ Explanation: Probability, Expectation Values, and Variance The concepts of probability, expectation values, and variance are the bedrock of statistical inference. Let's begin by employing some of these concepts to see if we can find some interesting paths to go down which may provide some insight into the inner workings of this company. What is the probability that a randomly selected employee left the company? What about experienced a work accident? Also compute the probability that a randomly selected employee left the company and experienced a work accident. End of explanation """ hr.compare_satisfaction() """ Explanation: Compute the 25th, 50th, and 90th percentiles for the satisfaction level score for all employees that left the company. Compare these results to the same percentiles for those that did not leave. What can you say about the results? End of explanation """ hours_variance, hours_std = hr.calc_hours_stats() print(f'Hours Worked Variance: {hours_variance:.3f}') print(f'Hours Worked Standard Deviation: {hours_std:.3f}') """ Explanation: Findings: Employees who stayed are in general more satisfied with their position than the ones who chose to leave. Job Satisfaction is not the identifying characteristic as to why employees are leaving. Compute the variance and standard deviation of hours worked. End of explanation """ satisfaction_ex, satisfaction_current = hr.compare_satisfaction_variance() print(f'Ex-Employee Job Satisfaction Variance: {satisfaction_ex:.3f}') print(f'Current Employee Job Satisfaction Variance: {satisfaction_current:.3f}') """ Explanation: Compare the variance between the satisfaction levels of employees who left versus those who stayed. Which is larger? What does this mean? 
End of explanation """ hr.calc_satisfaction_salary() """ Explanation: Findings The spread of job satisfaction data is 48.9% greater for ex-employees vs. current employees. Job Satisfaction is not a good indicator of employee retention. Compute the mean satisfaction level for each salary category. Comment on your results. End of explanation """ hr.calc_p_hours_salary() """ Explanation: Findings Salary has a very weak contribution to job satisfaction. Given an employees salary level (low, medium, or high), calculate the probability that they worked more than two standard deviations of the average monthly hours across all groups. In other words, compute $$P(hours > 2\sigma \vert salary ) = \dfrac{P(salary \vert hours > 2\sigma) P(hours > 2\sigma)}{P(salary)}$$ $$ P(a \vert b) = \frac{P(b \vert a) P(a)}{P(b)} $$ What can you say about your results? End of explanation """ hr.calc_p_left_salary() """ Explanation: Findings The lower paid employees are putting in more hours than the higher paid employees. Repeat the previous question for the following case. $$P(left \vert salary ) = \dfrac{P(salary \vert left) P(left)}{P(salary)}$$ End of explanation """ hr.calc_p_salary_promotion() """ Explanation: Findings The lowest paid employees are 4.5 times more likely to leave the company than the highest paid employees. What is the odds ratio of an employee with a high salary getting a promotion within the past five years versus a low salary employee? Comment on your results. 
End of explanation
"""

print(f'Approximate Sample Satisfaction Mean: {hr.data.satisfaction.mean():.3f}')

sample_n = 10
sample_means = []
for n in range(sample_n):
    sample_means.append(hr.calc_satisfaction_random_sample(50))

sample_mean = '\n'.join([f'{x:.3f}' for x in sample_means])
print(f'Actual Sample Satisfaction Mean: {sum(sample_means) / sample_n}')
print('Actual Sample Satisfaction Values:')
print(f'{sample_mean}')
"""
Explanation: Findings
Higher paid employees were 6.5 times more likely to be promoted than lower income employees.
Suppose we were to pull 50 random samples of employee satisfaction levels. What would approximately be the mean of this sample? What would be the mean of, say, 10 sets of random samples? Demonstrate your assertions by writing some python code to do just that.
End of explanation
"""
End of explanation """ hr.calc_p_bernoulli() """ Explanation: For the k variables you identified in part 1, compute the probabilities $p_k$, of each having a positive $(x = 1)$ result. End of explanation """ hr.calc_bernoulli_variance() """ Explanation: Compute the variance of each of the variables in part 2 using $p_k$ as described above. End of explanation """ hr.calc_p_bernoulli_k() """ Explanation: For each of the k variables, compute the probability of randomly selecting 3500 employees with a positive result. Comment on your answer. End of explanation """ hr.calc_p_bernoulli_k(cumulative=True) """ Explanation: Findings The probability of any of the Bernoulli variables for this dataset producing exactly 3500 positive results is very low. For each of the k variables, compute the probability of randomly selecting at 3500 or less with a positive result. Comment on your answer. End of explanation """ hr.bernoulli_plot() """ Explanation: Findings The probability that more than 3500 employees will leave the company is not likely, but high turnover is predicted. Now plot both the PMF and CDF as a function of the number of drawn samples for each of the k variables. Comment on your results. End of explanation """ print('\n'.join(hr.normal_vars)) """ Explanation: The Normal Distribution The Normal distribution (or sometimes called the Bell Curve or Guassian) is by far the most prevalent and useful distribution in any field that utilizes statistical techniques. In fact, in can be shown that the means of random variables sampled from any distribution eventually form a normal given a sufficiently large sample size. A normal distribution is characterized by the PDF given by $$p(x|\mu,\sigma) = \frac{1}{\sqrt{(2\pi\sigma^2)}}e^{-\frac{(x - \mu)^2}{2\sigma^2}} $$ where $\mu$ is the mean and $\sigma^2$ is the variance, thus the distribution is characterized by mean and variance alone. 
In this exercise, you'll examine some of the variables in the HR dataset and construct some normal distributions approximating them. Using the HR data, answer the following. Which variables may be approximately normal? End of explanation """ hr.gaussian_plot() """ Explanation: For the variables in part 1, plot some histograms. End of explanation """ hr.norm_stats """ Explanation: Compute the mean and variance for each of the variables used in parts 1 and 2. End of explanation """ hr.gaussian_plot(normal_overlay=True) """ Explanation: Using the mean and variance in part 3, construct normal distributions for each and overlay them on top of the histograms you made in part one. Are they well approximated by normals? End of explanation """ print('\n'.join(hr.poisson_vars)) """ Explanation: Findings None of the assumed Gaussian Variables strongly resemble a Gaussian distribution for the entire set. The Poisson Distribution The Poisson distribution is very versatile but is typically used to model counts, such as the number of clicks per advertisement and arriving flights per unit time. It has a PDF given by $$ P(X = x, \lambda) = \frac{\lambda^x e^{-\lambda}}{x!} $$ where the mean and variance are both equal to $\lambda$. Using the HR data, answer the following. What variables would be good candidates for modeling with a Poisson distribution? End of explanation """ poisson = hr.poisson_distributions() poisson poisson[[f'p_{x}' for x in hr.poisson_vars]] """ Explanation: For each variable in part 1, divide each by salary and fit a Poisson distribution to each. Compute the probability of obtaining at least the mean of all salary levels in each category by using the Poisson distributions you constructed in part 2. Comment on your results.
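The "at least the mean" probabilities computed by the hr helper can be reproduced by hand from the PMF above. A minimal sketch; the rate `lam` is a made-up stand-in, not a fitted value from the data:

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    # P(X = x) = lam^x * e^(-lam) / x!
    return (lam ** x) * exp(-lam) / factorial(x)

def poisson_cdf(x, lam):
    # P(X <= x): cumulative sum of the PMF
    return sum(poisson_pmf(k, lam) for k in range(x + 1))

lam = 3.8  # hypothetical mean number of projects per employee
p_at_least_mean = 1 - poisson_cdf(3, lam)  # P(X >= 4), i.e. at least the rounded mean
```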
End of explanation """ print('\n'.join(hr.central_limit_vars)) """ Explanation: Findings According to the Poisson distribution probabilities higher paid employees are slightly more likely to have a larger number of projects and fewer years of service than employees of other salaries. The Central Limit Theorem The Central Limit Theorem is perhaps one of the most remarkable results in statistics and mathematics in general. In short, it says that the distribution of means of independent random variables, sampled from any distribution, tends to approach a normal distribution as the sample size increases. An example of this would be taking a pair of dice, rolling them, and recording the mean of each result. The Central Limit Theorem states, that after enough rolls, the distribution of the means will be approximately normal. Stated formally, the result is $$ \bar{X_n} \approx N(\mu, \sigma^2/n) = \frac{\bar{X_n} - \mu}{\sigma \sqrt{n}}$$ In this exercise, you'll conduct some simulation experiments to explore this idea. Using the HR data, answer the following. - Choose two variables which may be good candidates to test this theorem. End of explanation """ hr.central_limit_plot() """ Explanation: Using the variables chosen in part 1, randomly select a set of n = 10, n = 100, n = 500 and n = 1000 samples and take the mean. Repeat this 1000 times for each variable. Plot a histogram for each variable. Overlay a normal curve on your plots, using the mean and variance computed from the data. Comment on your results. End of explanation """ left = hr.data.query('left == 1').satisfaction stayed = hr.data.query('left == 0').satisfaction comparison = hr.compare_confidence(left, 'left', stayed, 'stayed', 0.95) comparison """ Explanation: Findings: The initial plot with 10 samples per set shows characteristics of a Gaussian distribution. For sets with 100 or more samples the Gaussian fit is exceptional. 
Hypothesis Testing Hypothesis testing is essentially using the data to answer questions of interest. For example, does a new medication provide any benefit over placebo? Or is a subset of the population disproportionately more susceptible to a particular disease? Or is the difference between two companies profits' significant or due to chance alone? Before doing some hypothesis testing on the HR data, recall that hypothesis typically come in pairs of the form $H_0$, called the null hypothesis, versus $H_a$, called the alternative hypothesis. The null hypothesis represents the "default" assumption -- that a medication has no effect for example, while the alternative hypothesis represents what exactly are looking to discover, in the medication case, whether it provides a significant benefit. Another common case is testing the difference between two means. Here, the null hypothesis is that there is no difference between two population means, whereas the alternative hypothesis is that there is a difference. Stated more precisely $$H_0: \mu_1 - \mu_2 = 0$$ $$H_a: \mu_1 - \mu_2 \ne 0$$ Hypothesis are usually tested by constructing a confidence interval around the test statistic and selecting a "cut-off" significance level denoted $\alpha$. A typical $\alpha$ significance is 0.05 and is often called a "P-value". If a test produces a P-value of $\alpha$ or below, then the null hypothesis can be rejected, strengthening the case of the alternative hypothesis. It is very important to remember that hypothesis testing can only tell you if your hypothesis is statistically significant -- this does not mean that your result may be scientifically significant which requires much more evidence. In this exercise you'll explore the HR data more and test some hypothesis. Using the HR data, answer the following. Compute a confidence interval for satisfaction levels, at the 95% level, of employees who left the company and those who didn't. Do this using both a t distribution and a normal. 
Comment on your results. $$ CI = a\sqrt{\frac{variance}{N}} = a \sqrt{\frac{p - p^2}{N}}$$ End of explanation """ left = hr.data.query('left == 1').satisfaction stayed = hr.data.query('left == 0').satisfaction comparison = hr.compare_confidence(left, 'left', stayed, 'stayed', 0.95) comparison """ Explanation: Findings The sample size is large enough to follow the Central Limit Theorem, since the T distribution confidence limits are approximately identical to the Gaussian distribution confidence limits. Employees who left the company had significantly lower satisfaction values. Use a t-test to test the hypothesis that employees who left the company had lower satisfaction levels than those who did not. If significant, what is the mean difference? Comment on your results. (Hint: Do the two populations have equal variance?) Fit a normal curve to each group in part 2 and put them on the same plot next to each other. Comment on your results. End of explanation """ ttest = hr.t_test(left, 'left', stayed, 'stayed') mean_diff = abs(left.mean() - stayed.mean()) variance_diff = abs(left.var() - stayed.var()) print(f'T-test P-value: {ttest[1]:.3f}') print(f'Difference of Means: {mean_diff:.3f}') print(f'Difference of Variances: {variance_diff:.3f}') """ Explanation: Findings The null is that both employees who stayed and left had the same satisfaction values. The small P-value resulting from this test means the null hypothesis can be rejected, although statistical significance alone does not prove the alternative hypothesis. By comparing the mean and variance of the two datasets it is shown that there is a difference between the distributions and the T-test is confirmed.
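The internals of hr.t_test are not shown in this notebook; since the two groups have unequal variances, the statistic it presumably reports is a Welch t, which can be sketched in plain Python. The satisfaction values below are made-up toy data, not drawn from the HR dataset:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    # t = (mean_a - mean_b) / sqrt(var_a / n_a + var_b / n_b)
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

left_sat = [0.38, 0.41, 0.36, 0.45, 0.40, 0.37]     # hypothetical satisfaction of leavers
stayed_sat = [0.66, 0.71, 0.69, 0.74, 0.62, 0.70]   # hypothetical satisfaction of stayers
t_stat = welch_t(left_sat, stayed_sat)
```

A strongly negative t supports the alternative that leavers were less satisfied; a full test would additionally convert t to a P-value using the t distribution with Welch-Satterthwaite degrees of freedom.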
Test the hypothesis that the satisfaction level between each salary group, denoted k, differs significantly from the mean. Namely $H_0: \mu - \mu_k = 0$ $H_a: \mu - \mu_k \ne 0$ How would you interpret your results in part 5? Generate plots for part 5 as you did in part 3. What conclusions can you draw from the plot? End of explanation """ low = hr.data.query('salary == "low"').satisfaction medium = hr.data.query('salary == "medium"').satisfaction high = hr.data.query('salary == "high"').satisfaction hr.t_test(low, 'low', hr.data.satisfaction, 'All Satisfaction Data', independent_vars=False) hr.t_test(medium, 'medium', hr.data.satisfaction, 'All Satisfaction Data', independent_vars=False) hr.t_test(high, 'high', hr.data.satisfaction, 'All Satisfaction Data', independent_vars=False) hr.t_test(low, 'low', medium, 'medium', high, 'high'); """ Explanation: Findings The null hypothesis for a dependent dataset is that the data values are equal. The satisfaction values partitioned by salary all differ significantly from the satisfaction values of the dataset as a whole. Repeat parts 4-6 on a hypothesis of your choosing. Test the hypothesis that the evaluation level between each salary group differs significantly from the mean. End of explanation """ low = hr.data.query('salary == "low"').evaluation medium = hr.data.query('salary == "medium"').evaluation high = hr.data.query('salary == "high"').evaluation hr.t_test(low, 'low', hr.data.evaluation, 'All Evaluation Data', independent_vars=False) hr.t_test(medium, 'medium', hr.data.evaluation, 'All Evaluation Data', independent_vars=False) hr.t_test(high, 'high', hr.data.evaluation, 'All Evaluation Data', independent_vars=False) hr.t_test(low, 'low', medium, 'medium', high, 'high'); high.mean() hr.data.evaluation.mean() """ Explanation: Findings The evaluation values partitioned by salary are statistically similar for all the ranges. Recall that Power is the probability of correctly rejecting the null hypothesis when it is false (thus more power is good). Compute the power for the hypothesis that the satisfaction level of high paid employees is different from that of medium paid employees using a t distribution. Statistical Power Example End of explanation """ medium = hr.data.query('salary == "medium"').satisfaction high = hr.data.query('salary == "high"').satisfaction hr.calc_power(medium, high) """ Explanation: Bootstrapping Bootstrapping is an immensely useful technique in practice.
Very often you may find yourself in a situation where you want to compute some statistic, but lack sufficient data to do so. Bootstrapping works as a remedy to this problem. Recall that the bootstrapping algorithm breaks down as follows: 1. Sample n observations with replacement from the observed data resulting in one simulated complete data set. 1. Take the statistic of the simulated data set 1. Repeat these two steps B times, resulting in B simulated statistics 1. These statistics are approximately drawn from the sampling distribution of the statistic of n observations In this exercise you will implement this algorithm on the HR data. Write a function that can perform boostrapping for the median of a set of n samples in the HR data set. Test this function on the satisfaction_level with n = 100 and b = 100 and compare your results to the true median. Also compute the standard deviation of the bootstrapped median. End of explanation """
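The hr.bootstrap helper is not shown, but the four steps above fit in a few lines of plain Python. This sketch runs on synthetic data standing in for the satisfaction levels:

```python
import random
from statistics import median, stdev

random.seed(42)

def bootstrap_median(data, n=100, b=100):
    # steps 1-3: b simulated medians, each from n draws with replacement
    return [median(random.choices(data, k=n)) for _ in range(b)]

data = [random.random() for _ in range(500)]  # stand-in for satisfaction levels
boot_medians = bootstrap_median(data)
print(median(data), median(boot_medians), stdev(boot_medians))
```

The standard deviation of `boot_medians` approximates the sampling variability of the median of n observations, which is exactly what the exercise asks for.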
mne-tools/mne-tools.github.io
0.16/_downloads/make_report.ipynb
bsd-3-clause
# Authors: Teon Brooks <teon.brooks@gmail.com> # Eric Larson <larson.eric.d@gmail.com> # # License: BSD (3-clause) from mne.report import Report from mne.datasets import sample from mne import read_evokeds from matplotlib import pyplot as plt data_path = sample.data_path() meg_path = data_path + '/MEG/sample' subjects_dir = data_path + '/subjects' evoked_fname = meg_path + '/sample_audvis-ave.fif' """ Explanation: Make an MNE-Report with a Slider In this example, MEG evoked data are plotted in an html slider. End of explanation """ report = Report(image_format='png', subjects_dir=subjects_dir, info_fname=evoked_fname, subject='sample') report.parse_folder(meg_path) """ Explanation: Do standard folder parsing (this can take a couple of minutes): End of explanation """ # Load the evoked data evoked = read_evokeds(evoked_fname, condition='Left Auditory', baseline=(None, 0), verbose=False) evoked.crop(0, .2) times = evoked.times[::4] # Create a list of figs for the slider figs = list() for t in times: figs.append(evoked.plot_topomap(t, vmin=-300, vmax=300, res=100, show=False)) plt.close(figs[-1]) report.add_slider_to_section(figs, times, 'Evoked Response', image_format='svg') # to save report # report.save('foobar.html', True) """ Explanation: Add a custom section with an evoked slider: End of explanation """
google/trax
trax/examples/NER_using_Reformer.ipynb
apache-2.0
#@title # Copyright 2020 Google LLC. # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # https://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: <a href="https://colab.research.google.com/github/SauravMaheshkar/trax/blob/SauravMaheshkar-example-1/examples/NER_using_Reformer.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> End of explanation """ !pip install -q -U trax """ Explanation: Author - @SauravMaheshkar Install Dependencies Install the latest version of the Trax Library. End of explanation """ import trax # Our Main Library from trax import layers as tl import os # For os dependent functionalities import numpy as np # For scientific computing import pandas as pd # For basic data analysis import random as rnd # For using random functions """ Explanation: Introduction Named-entity recognition (NER) is a subtask of information extraction that seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc. To evaluate the quality of a NER system's output, several measures have been defined. The usual measures are called Precision, recall, and F1 score. However, several issues remain in just how to calculate those values. State-of-the-art NER systems for English produce near-human performance. 
For example, the best system entering MUC-7 scored 93.39% of F-measure while human annotators scored 97.60% and 96.95%. Importing Packages End of explanation """ data = pd.read_csv("/kaggle/input/entity-annotated-corpus/ner_dataset.csv",encoding = 'ISO-8859-1') data = data.fillna(method = 'ffill') data.head() """ Explanation: Pre-Processing Loading the Dataset Let's load the ner_dataset.csv file into a dataframe and see what it looks like End of explanation """ ## Extract the 'Word' column from the dataframe words = data.loc[:, "Word"] ## Convert into a text file using the .savetxt() function np.savetxt(r'words.txt', words.values, fmt="%s") """ Explanation: Creating a Vocabulary File We can see there's a column for the words in each sentence. Thus, we can extract this column using the .loc() and store it into a .txt file using the .savetext() function from numpy. End of explanation """ vocab = {} with open('words.txt') as f: for i, l in enumerate(f.read().splitlines()): vocab[l] = i print("Number of words:", len(vocab)) vocab['<PAD>'] = len(vocab) """ Explanation: Creating a Dictionary for Vocabulary Here, we create a Dictionary for our vocabulary by reading through all the sentences in the dataset. 
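As a toy illustration of the same mapping on an in-memory word list (one caveat: the file-based loop above keeps the last line index seen for a repeated word, whereas this sketch assigns one dense id per unique word):

```python
words = ["Thousands", "of", "demonstrators", "have", "marched", "of"]

toy_vocab = {}
for w in words:
    if w not in toy_vocab:            # one dense id per unique word
        toy_vocab[w] = len(toy_vocab)
toy_vocab["<PAD>"] = len(toy_vocab)   # reserve the last id for padding
```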
End of explanation """ class Get_sentence(object): def __init__(self,data): self.n_sent=1 self.data = data agg_func = lambda s:[(w,p,t) for w,p,t in zip(s["Word"].values.tolist(), s["POS"].values.tolist(), s["Tag"].values.tolist())] self.grouped = self.data.groupby("Sentence #").apply(agg_func) self.sentences = [s for s in self.grouped] getter = Get_sentence(data) sentence = getter.sentences words = list(set(data["Word"].values)) words_tag = list(set(data["Tag"].values)) word_idx = {w : i+1 for i ,w in enumerate(words)} tag_idx = {t : i for i ,t in enumerate(words_tag)} X = [[word_idx[w[0]] for w in s] for s in sentence] y = [[tag_idx[w[2]] for w in s] for s in sentence] """ Explanation: Extracting Sentences from the Dataset For extracting sentences from the dataset and creating (X,y) pairs for training. End of explanation """ def data_generator(batch_size, x, y,pad, shuffle=False, verbose=False): num_lines = len(x) lines_index = [*range(num_lines)] if shuffle: rnd.shuffle(lines_index) index = 0 while True: buffer_x = [0] * batch_size buffer_y = [0] * batch_size max_len = 0 for i in range(batch_size): if index >= num_lines: index = 0 if shuffle: rnd.shuffle(lines_index) buffer_x[i] = x[lines_index[index]] buffer_y[i] = y[lines_index[index]] lenx = len(x[lines_index[index]]) if lenx > max_len: max_len = lenx index += 1 X = np.full((batch_size, max_len), pad) Y = np.full((batch_size, max_len), pad) for i in range(batch_size): x_i = buffer_x[i] y_i = buffer_y[i] for j in range(len(x_i)): X[i, j] = x_i[j] Y[i, j] = y_i[j] if verbose: print("index=", index) yield((X,Y)) """ Explanation: Making a Batch Generator Here, we create a batch generator for training. 
End of explanation """ from sklearn.model_selection import train_test_split x_train,x_test,y_train,y_test = train_test_split(X,y,test_size = 0.1,random_state=1) """ Explanation: Splitting into Test and Train End of explanation """ def NERmodel(tags, vocab_size=35181, d_model = 50): model = tl.Serial( # tl.Embedding(vocab_size, d_model), trax.models.reformer.Reformer(vocab_size, d_model, ff_activation=tl.LogSoftmax), tl.Dense(tags), tl.LogSoftmax() ) return model model = NERmodel(tags = 17) print(model) """ Explanation: Building the Model The Reformer Model In this notebook, we use the Reformer, which is a more efficient of Transformer that uses reversible layers and locality-sensitive hashing. You can read the original paper here. Locality-Sensitive Hashing The biggest problem that one might encounter while using Transformers, for huge corpora is the handling of the attention layer. Reformer introduces Locality Sensitive Hashing to solve this problem, by computing a hash function that groups similar vectors together. Thus, a input sequence is rearranged to bring elements with the same hash together and then divide into segments(or chunks, buckets) to enable parallel processing. Thus, we can apply Attention to these chunks (rather than the whole input sequence) to reduce the computational load. Reversible Layers Using Locality Sensitive Hashing, we were able to solve the problem of computation but still we have a memory issue. Reformer implements a novel approach to solve this problem, by recomputing the input of each layer on-demand during back-propagation, rather than storing it in memory. This is accomplished by using Reversible Layers (activations from last layers are used to recover activations from any intermediate layer). Reversible layers store two sets of activations for each layer. One follows the standard procedure in which the activations are added as they pass through the network The other set only captures the changes. 
Thus, if we run the network in reverse, we simply subtract the activations applied at each layer. Model Architecture We will perform the following steps: Use input tensors from our data generator Produce Semantic entries from an Embedding Layer Feed these into our Reformer Language model Run the Output through a Linear Layer Run these through a log softmax layer to get predicted classes We use the: tl.Serial(): Combinator that applies layers serially(by function composition). It's commonly used to construct deep networks. It uses stack semantics to manage data for its sublayers tl.Embedding(): Initializes a trainable embedding layer that maps discrete tokens/ids to vectors trax.models.reformer.Reformer(): Creates a Reversible Transformer encoder-decoder model. tl.Dense(): Creates a Dense(fully-connected, affine) layer tl.LogSoftmax(): Creates a layer that applies log softmax along one tensor axis. End of explanation """ from trax.supervised import training rnd.seed(33) batch_size = 64 train_generator = trax.data.inputs.add_loss_weights( data_generator(batch_size, x_train, y_train,vocab['<PAD>'], True), id_to_mask=vocab['<PAD>']) eval_generator = trax.data.inputs.add_loss_weights( data_generator(batch_size, x_test, y_test,vocab['<PAD>'] ,True), id_to_mask=vocab['<PAD>']) def train_model(model, train_generator, eval_generator, train_steps=1, output_dir='model'): train_task = training.TrainTask( train_generator, loss_layer = tl.CrossEntropyLoss(), optimizer = trax.optimizers.Adam(0.01), n_steps_per_checkpoint=10 ) eval_task = training.EvalTask( labeled_data = eval_generator, metrics = [tl.CrossEntropyLoss(), tl.Accuracy()], n_eval_batches = 10 ) training_loop = training.Loop( model, train_task, eval_tasks = eval_task, output_dir = output_dir) training_loop.run(n_steps = train_steps) return training_loop train_steps = 100 training_loop = train_model(model, train_generator, eval_generator, train_steps) """ Explanation: Train the Model End of explanation """
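The locality-sensitive hashing idea described above can be illustrated outside of Trax with random hyperplanes. This is a conceptual sketch of sign-based bucketing, not Reformer's actual implementation:

```python
import random

random.seed(7)

def lsh_key(vector, planes):
    # sign pattern of the dot products with each random hyperplane
    return tuple(sum(p_i * v_i for p_i, v_i in zip(plane, vector)) >= 0
                 for plane in planes)

def lsh_buckets(vectors, n_planes=4):
    dim = len(vectors[0])
    planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]
    buckets = {}
    for i, v in enumerate(vectors):
        buckets.setdefault(lsh_key(v, planes), []).append(i)
    return buckets, planes

vecs = [[1.0, 0.0], [0.95, 0.05], [-1.0, 0.0], [-0.95, -0.05]]
buckets, planes = lsh_buckets(vecs)
```

Vectors pointing in similar directions tend to share a sign pattern and hence a bucket, so attention only needs to be computed within each bucket rather than over the whole sequence.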
SheffieldML/notebook
GPy/sparse_gp_regression.ipynb
bsd-3-clause
%matplotlib inline %config InlineBackend.figure_format = 'svg' import GPy import numpy as np np.random.seed(101) """ Explanation: Sparse GP Regression 14th January 2014 James Hensman 29th September 2014 Neil Lawrence (added sub-titles, notes and some references). This example shows the variational compression effect of so-called 'sparse' Gaussian processes. In particular we show how using the variational free energy framework of Titsias, 2009 we can compress a Gaussian process fit. First we set up the notebook with a fixed random seed, and import GPy. End of explanation """ N = 50 noise_var = 0.05 X = np.linspace(0,10,50)[:,None] k = GPy.kern.RBF(1) y = np.random.multivariate_normal(np.zeros(N),k.K(X)+np.eye(N)*np.sqrt(noise_var)).reshape(-1,1) """ Explanation: Sample Function Now we'll sample a Gaussian process regression problem directly from a Gaussian process prior. We'll use an exponentiated quadratic covariance function with a lengthscale and variance of 1 and sample 50 equally spaced points. End of explanation """ m_full = GPy.models.GPRegression(X,y) m_full.optimize('bfgs') m_full.plot() print m_full """ Explanation: Full Gaussian Process Fit Now we use GPy to optimize the parameters of a Gaussian process given the sampled data. Here, there are no approximations, we simply fit the full Gaussian process. End of explanation """ Z = np.hstack((np.linspace(2.5,4.,3),np.linspace(7,8.5,3)))[:,None] m = GPy.models.SparseGPRegression(X,y,Z=Z) m.likelihood.variance = noise_var m.plot() print m """ Explanation: A Poor `Sparse' GP Fit Now we construct a sparse Gaussian process. This model uses the inducing variable approximation and initialises the inducing variables in two 'clumps'. Our initial fit uses the correct covariance function parameters, but a badly placed set of inducing points. 
End of explanation """ m.inducing_inputs.fix() m.optimize('bfgs') m.plot() print m """ Explanation: Notice how the fit is reasonable where there are inducing points, but bad elsewhere. Optimizing Covariance Parameters Next, we will try and find the optimal covariance function parameters, given that the inducing inputs are held in their current location. End of explanation """ m.randomize() m.Z.unconstrain() m.optimize('bfgs') m.plot() """ Explanation: The poor location of the inducing inputs causes the model to 'underfit' the data. The lengthscale is much longer than the full GP, and the noise variance is larger. This is because in this case the Kullback Leibler term in the objective free energy is dominating, and requires a larger lengthscale to improve the quality of the approximation. This is due to the poor location of the inducing inputs. Optimizing Inducing Inputs Firstly we try optimzing the location of the inducing inputs to fix the problem, however we still get a larger lengthscale than the Gaussian process we sampled from (or the full GP fit we did at the beginning). End of explanation """ Z = np.random.rand(12,1)*12 m = GPy.models.SparseGPRegression(X,y,Z=Z) m.optimize('bfgs') m.plot() m_full.plot() print m.log_likelihood(), m_full.log_likelihood() """ Explanation: The inducing points spread out to cover the data space, but the fit isn't quite there. We can try increasing the number of the inducing points. Train with More Inducing Points Now we try 12 inducing points, rather than the original six. We then compare with the full Gaussian process likelihood. End of explanation """
Kaggle/learntools
notebooks/pandas/raw/tut_2.ipynb
apache-2.0
#$HIDE_INPUT$ import pandas as pd pd.set_option('max_rows', 5) import numpy as np reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0) reviews """ Explanation: Introduction In the last tutorial, we learned how to select relevant data out of a DataFrame or Series. Plucking the right data out of our data representation is critical to getting work done, as we demonstrated in the exercises. However, the data does not always come out of memory in the format we want it in right off the bat. Sometimes we have to do some more work ourselves to reformat it for the task at hand. This tutorial will cover different operations we can apply to our data to get the input "just right". To start the exercise for this topic, please click here. We'll use the Wine Magazine data for demonstration. End of explanation """ reviews.points.describe() """ Explanation: Summary functions Pandas provides many simple "summary functions" (not an official name) which restructure the data in some useful way. For example, consider the describe() method: End of explanation """ reviews.taster_name.describe() """ Explanation: This method generates a high-level summary of the attributes of the given column. It is type-aware, meaning that its output changes based on the data type of the input. The output above only makes sense for numerical data; for string data here's what we get: End of explanation """ reviews.points.mean() """ Explanation: If you want to get some particular simple summary statistic about a column in a DataFrame or a Series, there is usually a helpful pandas function that makes it happen. For example, to see the mean of the points allotted (e.g.
how well an averagely rated wine does), we can use the mean() function: End of explanation """ reviews.taster_name.unique() """ Explanation: To see a list of unique values we can use the unique() function: End of explanation """ reviews.taster_name.value_counts() """ Explanation: To see a list of unique values and how often they occur in the dataset, we can use the value_counts() method: End of explanation """ review_points_mean = reviews.points.mean() reviews.points.map(lambda p: p - review_points_mean) """ Explanation: Maps A map is a term, borrowed from mathematics, for a function that takes one set of values and "maps" them to another set of values. In data science we often have a need for creating new representations from existing data, or for transforming data from the format it is in now to the format that we want it to be in later. Maps are what handle this work, making them extremely important for getting your work done! There are two mapping methods that you will use often. map() is the first, and slightly simpler one. For example, suppose that we wanted to remean the scores the wines received to 0. We can do this as follows: End of explanation """ def remean_points(row): row.points = row.points - review_points_mean return row reviews.apply(remean_points, axis='columns') """ Explanation: The function you pass to map() should expect a single value from the Series (a point value, in the above example), and return a transformed version of that value. map() returns a new Series where all the values have been transformed by your function. apply() is the equivalent method if we want to transform a whole DataFrame by calling a custom method on each row. End of explanation """ reviews.head(1) """ Explanation: If we had called reviews.apply() with axis='index', then instead of passing a function to transform each row, we would need to give a function to transform each column. Note that map() and apply() return new, transformed Series and DataFrames, respectively. 
They don't modify the original data they're called on. If we look at the first row of reviews, we can see that it still has its original points value. End of explanation """ review_points_mean = reviews.points.mean() reviews.points - review_points_mean """ Explanation: Pandas provides many common mapping operations as built-ins. For example, here's a faster way of remeaning our points column: End of explanation """ reviews.country + " - " + reviews.region_1 """ Explanation: In this code we are performing an operation between a lot of values on the left-hand side (everything in the Series) and a single value on the right-hand side (the mean value). Pandas looks at this expression and figures out that we must mean to subtract that mean value from every value in the dataset. Pandas will also understand what to do if we perform these operations between Series of equal length. For example, an easy way of combining country and region information in the dataset would be to do the following: End of explanation """
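These behaviours are easy to verify on a tiny toy Series; the numbers below are made up and assume pandas is installed:

```python
import pandas as pd

points = pd.Series([86, 90, 94])                     # toy stand-in for reviews.points
remeaned = points.map(lambda p: p - points.mean())   # returns a NEW Series; `points` is unchanged
combined = pd.Series(["Italy", "US"]) + " - " + pd.Series(["Etna", "Napa"])
```

`remeaned` holds the transformed values while `points` keeps its originals, and the elementwise string concatenation mirrors the country/region example above.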
materialsvirtuallab/matgenb
notebooks/2018-09-25-Structure Prediction using Pymatgen and the Materials API.ipynb
bsd-3-clause
# Imports we need for running structure prediction from pymatgen.analysis.structure_prediction.substitutor import Substitutor from pymatgen.analysis.structure_prediction.substitution_probability import SubstitutionPredictor from pymatgen.analysis.structure_matcher import StructureMatcher, ElementComparator from pymatgen.transformations.standard_transformations import AutoOxiStateDecorationTransformation from pymatgen import Specie, Element from pymatgen import MPRester from pprint import pprint # Establish rester for accessing Materials API mpr = MPRester(api_key='#######') # INSERT YOUR OWN API KEY """ Explanation: Introduction This notebook demonstrates how to predict structures using the built-in structure_prediction package in pymatgen. We will be gathering all possible structures (via the Materials API) of the chemical systems containing the highest probability specie substitutions for our original species. We will then resubstitute the original species back into these structures, filter out duplicates as well as preexisting structures already on the Materials Project, and output the newly predicted structures. Written using: - pymatgen==2018.9.19 Author: Matthew McDermott (09/25/18) End of explanation """ threshold = 0.001 #threshold for substitution/structure predictions num_subs = 10 # number of highest probability substitutions you wish to see """ Explanation: Here we define two variables -- threshold for the threshold probability in making substitution/structure predictions, and num_subs for how many substitutions you wish to explore: End of explanation """ original_species = [Specie('Y',3), Specie('Mn',3), Specie('O',-2)] # List of original species along with their specified oxidation states for substituting into """ Explanation: Finding highest probability specie substitutions In this section, we use the SubstitutionPredictor to predict likely specie substitutions using a data-mined approach from ICSD data. 
This does not yet calculate probable structures -- only which species are likely to substitute for the original species you input. The substitution prediction methodology is presented in: Hautier, G., Fischer, C., Ehrlacher, V., Jain, A., and Ceder, G. (2011) Data Mined Ionic Substitutions for the Discovery of New Compounds. Inorganic Chemistry, 50(2), 656-663. doi:10.1021/ic102031h End of explanation """ subs = SubstitutionPredictor(threshold=threshold).list_prediction(original_species) subs.sort(key = lambda x: x['probability'], reverse = True) subs = subs[0:num_subs] pprint(subs) """ Explanation: Predict most common specie substitutions, sort by highest probability, and take the number of substitutions specified by num_subs: End of explanation """ trial_subs = [list(sub['substitutions'].keys()) for sub in subs] pprint(trial_subs) """ Explanation: Create a new list of just the substituted specie combinations: End of explanation """ elem_sys_list = [[specie.element for specie in sub] for sub in trial_subs] chemsys_set = set() for sys in elem_sys_list: chemsys_set.add("-".join(map(str,sys))) pprint(chemsys_set) """ Explanation: Create a set of strings of each unique chemical system (elements separated by dashes): End of explanation """ all_structs = {} for chemsys in chemsys_set: all_structs[chemsys] = mpr.get_structures(chemsys) # Getting all structures -- this can take a while! 
auto_oxi = AutoOxiStateDecorationTransformation() # create object to determine oxidation states at each lattice site """ Explanation: Finding all structures for new chemical systems via Materials API Create a new dictionary and populate it with all structures for each chemical system: End of explanation """ oxi_structs = {} for chemsys in all_structs: oxi_structs[chemsys] = [] for num, struct in enumerate(all_structs[chemsys]): try: oxi_structs[chemsys].append({'structure': auto_oxi.apply_transformation(struct), 'id': str(chemsys + "_" + str(num))}) except: continue # if auto oxidation fails, try next structure #pprint(oxi_structs) """ Explanation: Now create a new dictionary of all structures (with oxidation states) for each chemical system: End of explanation """ sbr = Substitutor(threshold = threshold) # create a Substitutor object with structure prediction threshold trans_structs = {} for chemsys in oxi_structs: trans_structs[chemsys] = sbr.pred_from_structures(original_species,oxi_structs[chemsys]) """ Explanation: Substitute original species into new structures Now create a new dictionary trans_structures populated with predicted structures made up of original species. 
Note: these new predicted structures are TransformedStructure objects:
Which of the two systems appears in the filtered dictionary simply depends on the order in which the filter algorithm traverses the entries (and it is reading from a naturally unordered dictionary!)
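The order-dependence can be seen with a tiny self-contained sketch, where hypothetical string labels stand in for the real TransformedStructure objects and a toy matches function stands in for StructureMatcher.fit:

```python
# Two duplicate structures filed under different chemical systems; whichever
# system the loop visits first "claims" the shared structure.
trial = {"Y-Fe-O": ["s1"], "La-Mn-O": ["s1_duplicate"]}

def matches(a, b):
    # Stand-in for StructureMatcher.fit on the hypothetical labels
    return a.split("_")[0] == b.split("_")[0]

filtered, seen = {}, []
for chemsys in trial:  # iteration order decides which copy is kept
    filtered[chemsys] = []
    for s in trial[chemsys]:
        if not any(matches(s, s2) for s2 in seen):
            filtered[chemsys].append(s)
            seen.append(s)
print(sum(len(v) for v in filtered.values()))  # 1 -- only one copy survives
```

Swapping the order of the two keys in trial moves the surviving copy to the other system, which is exactly the behavior noted above.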
jon-young/medicalimage
Liver Interactive.ipynb
mit
import collections import matplotlib matplotlib.use('Agg') %matplotlib inline import matplotlib.pyplot as plt import numpy as np import os import pickle import scipy.stats import SimpleITK as sitk from os.path import expanduser, join from scipy.spatial.distance import euclidean """ Explanation: Liver Segmentation 2D Python version: 3.4.3 (Anaconda 2.4.1) IPython version: 4.0.1 SimpleITK version: 0.9.0-g45317 Operating system: OS X El Capitan v10.11.2 End of explanation """ def show_img(img, imgover=None): """Displays SimpleITK 2D image from its array. Includes a function to report the pixel value under the mouse cursor. Option to display image overlay.""" X = sitk.GetArrayFromImage(img) fig = plt.figure() ax = fig.add_subplot(111) def format_coord(x, y): col = int(x + 0.5) row = int(y + 0.5) if col>=0 and col<numcols and row>=0 and row<numrows: z = X[row, col] return 'x=%1.4f, y=%1.4f, z=%1.4f' %(x, y, z) else: return 'x=%1.4f, y=%1.4f' %(x, y) if imgover is not None: X2 = sitk.GetArrayFromImage(imgover) maskVal = scipy.stats.mode(X2.flatten())[0][0] Xmask = np.ma.masked_where(X2 == maskVal, X2) im = ax.imshow(X, cmap=plt.cm.Greys_r) imOver = ax.imshow(Xmask, alpha=0.5) else: numrows, numcols = X.shape ax.imshow(X, cmap=plt.cm.Greys_r) ax.format_coord = format_coord plt.show() def input_level_set_click(featImg, coords): RADIUS = 10 numCols = featImg.GetSize()[0] numRows = featImg.GetSize()[1] X = np.zeros((numRows, numCols), dtype=np.int) for pt in coords: rowIni, rowEnd = pt[1] - RADIUS, pt[1] + RADIUS colIni, colEnd = pt[0] - RADIUS, pt[0] + RADIUS for i in range(rowIni, rowEnd+1): for j in range(colIni, colEnd+1): if euclidean((i,j), (pt[1], pt[0])) <= RADIUS: X[i,j] = 1 img = sitk.Cast(sitk.GetImageFromArray(X), featImg.GetPixelIDValue()) * -1 + 0.5 img.SetSpacing(featImg.GetSpacing()) img.SetOrigin(featImg.GetOrigin()) img.SetDirection(featImg.GetDirection()) return img class IndexMouseCapture(object): def __init__(self, ax, X): self.ax = ax self.X = X 
        self.coords = []  # (x, y) pixel coordinates captured from mouse clicks
Filtering

First, the original image is smoothed with an edge-preserving filter. Two options for filtering are used here: either curvature anisotropic diffusion or mean filtering.

Curvature anisotropic diffusion
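As a rough illustration of the idea behind anisotropic diffusion -- smoothing is suppressed wherever the local gradient (a likely edge) is large -- here is a classic Perona-Malik sketch in 1D. This is a simplification for intuition only, not the curvature-based variant that SimpleITK's filter implements:

```python
import numpy as np

def perona_malik_1d(signal, conductance=9.0, time_step=0.2, iterations=20):
    # Explicit 1-D diffusion where the conductance term shrinks the
    # smoothing flux wherever the local gradient (a likely edge) is large.
    u = signal.astype(float).copy()
    for _ in range(iterations):
        grad = np.diff(u)                        # forward differences
        c = np.exp(-(grad / conductance) ** 2)   # ~0 across strong edges
        flux = c * grad
        u[1:-1] += time_step * (flux[1:] - flux[:-1])
    return u

step_edge = np.array([0.0] * 5 + [100.0] * 5)
noisy = step_edge + np.array([1.0, -1.0] * 5)
smoothed = perona_malik_1d(noisy)
# Noise inside each flat region shrinks while the 0 -> 100 edge survives.
```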
End of explanation """ X = sitk.GetArrayFromImage(imgSigmoid) fig = plt.figure() ax = fig.add_subplot(111) capture = IndexMouseCapture(ax, X) fig.canvas.mpl_connect('button_press_event', capture.onclick) coords = pickle.load(open(os.path.join('Liver Segmentation Data', 'TCGA-BC-4073', 'slice42_2nd_round_seeds.p'), 'rb')) initImg = input_level_set_click(imgSigmoid, coords) pickleDir = os.path.join('Liver Segmentation Data', 'TCGA-BC-4073', '') pickle.dump(capture.coords, open(pickleDir + 'slice51_1st_round_seeds.p', 'wb')) """ Explanation: V. Input Level Set Using ideas from the SimpleITK geodesic active contour example to create an initial input level set. Instead of computing a signed Maurer distance map and then applying a binary threshold, the approach here simply draws a circle of a given radius around each user-chosen seed coordinate. Following the SimpleITK Notebook on Levelset Segmentation, a binary dilation with a kernel size of 3 is performed. Finally, as was done in the example (line 60), all image values are multiplied by -1 and added to 0.5. The results is the input level set. Use the class IndexMouseCapture (defined above) to capture coordinates from mouse clicks for seeds. The radii are all assumed to be of the same size at the moment. End of explanation """ binaryThresh = sitk.BinaryThresholdImageFilter() binaryThresh.SetLowerThreshold(-3.0) binaryThresh.SetUpperThreshold(2.0) binaryThresh.SetInsideValue(1) binaryThresh.SetOutsideValue(0) binaryImg = binaryThresh.Execute(imgGac) show_img(binaryImg) """ Explanation: For subsequent rounds, create a new level set from segmentation of downsampled image. 
Start by converting the segmentation result into a workable format: End of explanation """ # get array from previous geodesic active contour X_gac = sitk.GetArrayFromImage(binaryImg) # get array from user-input seed clicks X_click = sitk.GetArrayFromImage(input_level_set_click(imgSigmoid, coords)) X_click[np.where(X_click == -0.5)] = 1.0 X_click[np.where(X_click == 0.5)] = 0.0 # combine into a single array X_input = X_gac.astype(bool) + X_click.astype(bool) # write array into new input level set initImg = sitk.Cast(sitk.GetImageFromArray(X_input.astype(int)), imgSigmoid.GetPixelIDValue()) * -1 + 0.5 initImg.SetSpacing(imgSigmoid.GetSpacing()) initImg.SetOrigin(imgSigmoid.GetOrigin()) initImg.SetDirection(imgSigmoid.GetDirection()) """ Explanation: Add in new seeds using IndexMouseCapture above, and then create a new input level set image: End of explanation """ manInLS = sitk.ReadImage(os.path.join('Liver Segmentation Data', 'TCGA-BC-4073', 'slice44_initial_level_set.nrrd')) X_man = sitk.GetArrayFromImage(manInLS[:,:,44]) """ Explanation: Import manual initial level set If desired, import an initial level set drawn close to boundaries of liver in 3D Slicer. All values inside the set are equal to 1, while any outside values are zero. End of explanation """ initImg = sitk.Cast(sitk.GetImageFromArray(X_man), imgSigmoid.GetPixelIDValue()) * -1 + 0.5 initImg.CopyInformation(imgSigmoid) """ Explanation: Create initial level set image for input into segmentation: End of explanation """ show_img(imgSigmoid, initImg) """ Explanation: Display the initial input level set: End of explanation """ gac = sitk.GeodesicActiveContourLevelSetImageFilter() gac.SetPropagationScaling(1.0) gac.SetCurvatureScaling(0.2) gac.SetAdvectionScaling(4.0) gac.SetMaximumRMSError(0.01) gac.SetNumberOfIterations(200) imgGac = gac.Execute(initImg, imgSigmoid) show_img(imgSlice, imgGac) """ Explanation: VI. 
Segmentation

Geodesic active contour
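Before running the filter, it helps to keep the sign convention of the input level set in mind: values are negative inside the seed region and positive outside, which is what the `* -1 + 0.5` transform applied to initImg above produces. A minimal numpy sketch:

```python
import numpy as np

mask = np.zeros((5, 5), dtype=int)
mask[1:4, 1:4] = 1                       # 1 inside the seed region, 0 outside
level_set = mask * -1 + 0.5              # the transform applied to initImg above
print(level_set[2, 2], level_set[0, 0])  # -0.5 0.5 (negative inside, positive outside)
```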
PyladiesMx/Pyladies_ifc
4. Lops/.ipynb_checkpoints/For Loops-checkpoint.ipynb
mit
#Obtén el cuadrado de 1 #Obtén el cuadrado de 2 #Obtén el cuadrado de 3 #Obtén el cuadrado de 4 #Obtén el cuadrado de 5 #Obtén el cuadrado de 6 #Obtén el cuadrado de 7 #Obtén el cuadrado de 8 #Obtén el cuadrado de 9 #Obtén el cuadrado de 10 """ Explanation: Bienvenid@ a otra reunión de pyladies!! Sólo para asegurarnos de que estamos en la misma página, vamos a enumerar (y explicar) brevemente lo que hemos estado viendo en python. Operaciones básicas o cómo usar python como calculadora. Python se puede usar básicamente como cualquier calculadora operando directamente sobre objetos como números enteros (integers) o decimales (floats) y series de caracteres (strings) Asignación de variables.Si quieres guardar los resultados de operaciones, floats, integers, strings en la memoria de python lo que tenemos que hacer es asignarlos a unas variables. Para hacer esto tienes que inventar un nombre (que empiece con letras del alfabeto) poner un signo igual y después de este el valor u operación que desees guardar como en el siguiente ejemplo: variable = 5 + 2.5 variable_string = "String" Listas, el álbum coleccionador de python. Si lo que quieres es una colección de elementos en python, una de las estructuras de datos que te permite hacer esto son las listas, para estas tienes que poner entre corchetes los elementos que quieras guardar (todos los tipos de datos incluyendo listas!) separados por comas. Ejemplo: lista = [variable, 5, 2.5, "Hola"] Control de flujo. Decisiones con "if" y "else". En algún punto tendrás que hacer un programa el cual deba seguir dos caminos distintos dependiendo de una condición. Por ejemplo para decidir si usar un paraguas o no un programa puede ser: Si llueve entonces uso un paraguas, de lo contrario no se usa. Esto en python se representa de la siguiente forma: if lluvia == True: paraguas = True else: paraguas = False Funciones. 
Cada vez que te encuentras usando el mismo código una y otra vez, sabes que es hora de abstraer y hacer una bella función :D. Por ejemplo, si mientras escribes un programa te das cuenta de que has tenido que calcular el resultado de una expresión tipo "a + b * (c - a)", es hora de empezar a considerar representar esa expresión en una función. Recuerda que las funciones se representan de la siguiente manera: def función(arg1, arg2, argn): resultado = a + b * (c - a) return resultado
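Un esbozo mínimo y ejecutable de esa función (los nombres calcula, a, b y c son solo ilustrativos):

```python
def calcula(a, b, c):
    # Devuelve el resultado de la expresión a + b * (c - a)
    resultado = a + b * (c - a)
    return resultado

print(calcula(1, 2, 3))  # 1 + 2 * (3 - 1) = 5
```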
Ejercicio 1 Crea un programa que convierta todos los elementos de la siguiente lista a integers (usando por supuesto el froot loop) End of explanation """ lista_anidada = [['Perro', 'Gato'], ['Joven', 'Viejo'], [1, 2]] """ Explanation: Ejercicio 2 crea un programa que imprima "hola" el número de veces que el usuario escoja. Ejemplo. "Escoge un número del 1 al 10": 3 "hola" "hola" "hola" pista: busca la función de python input() Loops anidados Algo curioso en python es que puedes generar un loop for, dentro de otro loop. INCEPTION... Veamos un ejemplo End of explanation """ for elemento in lista_anidada: print (elemento) """ Explanation: Observa lo que para cuando le pedimos a python que nos imprima cada elemento de la lista anidada End of explanation """ for elemento in lista_anidada: for objeto in elemento: print(objeto) """ Explanation: Y que pasa si queremos obtener cada elemento de todas las listas End of explanation """
mne-tools/mne-tools.github.io
0.17/_downloads/4365eab31ed2fa347de7f294ac9500c3/plot_label_from_stc.ipynb
bsd-3-clause
# Author: Luke Bloy <luke.bloy@gmail.com> # Alex Gramfort <alexandre.gramfort@telecom-paristech.fr> # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt import mne from mne.minimum_norm import read_inverse_operator, apply_inverse from mne.datasets import sample print(__doc__) data_path = sample.data_path() subjects_dir = data_path + '/subjects' fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif' fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif' subjects_dir = data_path + '/subjects' subject = 'sample' snr = 3.0 lambda2 = 1.0 / snr ** 2 method = "dSPM" # use dSPM method (could also be MNE or sLORETA) # Compute a label/ROI based on the peak power between 80 and 120 ms. # The label bankssts-lh is used for the comparison. aparc_label_name = 'bankssts-lh' tmin, tmax = 0.080, 0.120 # Load data evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0)) inverse_operator = read_inverse_operator(fname_inv) src = inverse_operator['src'] # get the source space # Compute inverse solution stc = apply_inverse(evoked, inverse_operator, lambda2, method, pick_ori='normal') # Make an STC in the time interval of interest and take the mean stc_mean = stc.copy().crop(tmin, tmax).mean() # use the stc_mean to generate a functional label # region growing is halted at 60% of the peak value within the # anatomical label / ROI specified by aparc_label_name label = mne.read_labels_from_annot(subject, parc='aparc', subjects_dir=subjects_dir, regexp=aparc_label_name)[0] stc_mean_label = stc_mean.in_label(label) data = np.abs(stc_mean_label.data) stc_mean_label.data[data < 0.6 * np.max(data)] = 0. 
# 8.5% of original source space vertices were omitted during forward # calculation, suppress the warning here with verbose='error' func_labels, _ = mne.stc_to_label(stc_mean_label, src=src, smooth=True, subjects_dir=subjects_dir, connected=True, verbose='error') # take first as func_labels are ordered based on maximum values in stc func_label = func_labels[0] # load the anatomical ROI for comparison anat_label = mne.read_labels_from_annot(subject, parc='aparc', subjects_dir=subjects_dir, regexp=aparc_label_name)[0] # extract the anatomical time course for each label stc_anat_label = stc.in_label(anat_label) pca_anat = stc.extract_label_time_course(anat_label, src, mode='pca_flip')[0] stc_func_label = stc.in_label(func_label) pca_func = stc.extract_label_time_course(func_label, src, mode='pca_flip')[0] # flip the pca so that the max power between tmin and tmax is positive pca_anat *= np.sign(pca_anat[np.argmax(np.abs(pca_anat))]) pca_func *= np.sign(pca_func[np.argmax(np.abs(pca_anat))]) """ Explanation: Generate a functional label from source estimates Threshold source estimates and produce a functional label. The label is typically the region of interest that contains high values. Here we compare the average time course in the anatomical label obtained by FreeSurfer segmentation and the average time course from the functional label. As expected the time course in the functional label yields higher values. End of explanation """ plt.figure() plt.plot(1e3 * stc_anat_label.times, pca_anat, 'k', label='Anatomical %s' % aparc_label_name) plt.plot(1e3 * stc_func_label.times, pca_func, 'b', label='Functional %s' % aparc_label_name) plt.legend() plt.show() """ Explanation: plot the time courses.... 
End of explanation """ brain = stc_mean.plot(hemi='lh', subjects_dir=subjects_dir) brain.show_view('lateral') # show both labels brain.add_label(anat_label, borders=True, color='k') brain.add_label(func_label, borders=True, color='b') """ Explanation: plot brain in 3D with PySurfer if available End of explanation """
jonasluz/mia-cg
Exercises/Exercícios#1.ipynb
unlicense
from typing import List Vector = List[float] import numpy as np import matplotlib.pyplot as plt def dcos(v: Vector, verbose=False): """ Calcula os cosenos diretores do vetor a. Para cada componente c1, c2, ... cn do vetor v, o cosseno diretor é dado por: dcosk = ck / ||v|| """ result = [] norm = np.linalg.norm(v) # norma do vetor dado ||v|| if verbose: print("A norma do vetor {} é {}".format(v, norm)) for component in v: result.append(component / norm) return result """ Explanation: Lista de Exercícios Nº 1 de Computação Gráfica <br> <center>Jonas de Araújo Luz Jr. &#117;&#110;&#105;&#102;&#111;&#114;&#64;&#106;&#111;&#110;&#97;&#115;&#108;&#117;&#122;&#46;&#99;&#111;&#109; <br>Março de 2017</center> <hr> End of explanation """ def line_limits(x, y, length, axes_size): """ Função utilitária para calcular as dimensões da seta do vetor nos eixos x e y. A seta representa um triângulo retângulo, cuja hipotenusa é o comprimento da seta. """ v = [x, y] # O coseno diretor do vetor v é o mesmo da seta dc = dcos(v) # x e y são aproximados em função dos valores do coseno diretor x, y = (v[0] + dc[0] * length), (v[1] + dc[1] * length) return (x, y, x / axes_size, y / axes_size) def draw_vector(name:str, origin_x, origin_y, x, y, axes_size, fc='k', lc='b', labelPos=dict(ha="right", va="bottom"), withProjs=True, immediate=False): """ Desenha o vetor especificado. """ # Propriedades da ponta das setas dos vetores arrowtip = dict(head_width=0.2, head_length=0.2) # Propriedades das linhas de projeção aos eixos lineprops = dict(linewidth=0.5, ls='--') # Posição da nota com nome do vetor. notexy = (x / 2 + origin_x, y / 2 + origin_y) # Recupera os eixos de coordenadas. 
ax = plt.axes() plt.axis([0, axes_size, 0, axes_size]) # Vetor limits = line_limits(x, y, arrowtip['head_width'], axes_size) ax.arrow(origin_x, origin_y, x, y, **arrowtip, fc=fc, ec=fc) ax.annotate(name, xy=notexy, xytext=notexy, **labelPos, color=fc, family="serif", weight="semibold", size="x-large") if withProjs: plt.axhline(limits[1], xmax=limits[2], color=lc, **lineprops) plt.axvline(limits[0], ymax=limits[3], color=lc, **lineprops) if immediate: plt.show() # Solução da questão 1 # axessize = 5 # Vetor u u = [2, 3] draw_vector('u', 0, 0, *u, axessize, 'k', 'g') # Vetor v v = [3, 1.5] draw_vector('v', 0, 0, *v, axessize, 'b', 'b') plt.show() ## Plotagem de u + v # axessize = 6 # Desenha u novamente draw_vector('u', 0, 0, *u, axessize, 'k', 'g') # Desenha v a partir de u draw_vector('v', *u, *v, axessize, 'b', withProjs=False) # Vetor u + v uv = [u[0] + v[0], u[1] + v[1]] draw_vector('u+v', 0, 0, *uv, axessize, 'r', 'r') plt.show() ## Plotagem de v + u # axessize = 6 # Desenha v novamente draw_vector('v', 0, 0, *v, axessize, 'b', 'b') # Desenha u a partir de v draw_vector('u', *v, *u, axessize, 'k', withProjs=False) # Vetor v + u vu = [v[0] + u[0], v[1] + u[1]] draw_vector('v+u', 0, 0, *uv, axessize, 'r', 'r') plt.show() ## Sobrepondo u+v e v+u # axessize = 6 draw_vector('u+v', 0, 0, *uv, axessize, 'r', 'r') draw_vector('v+u', 0, 0, *vu, axessize, 'y', 'y', labelPos=dict(ha="left", va="top")) plt.show() """ Explanation: Questão 1 Demonstrar graficamente a propriedade comutativa da adição de vetores, ou seja, que u + v = v + u. End of explanation """ def print_dcos(dcos: Vector): """ Faz uma impressão "bonita" da sequência dos cosenos diretores guardados no vetor dcos. """ indices = ['i', 'j', 'k'] print('\n'.join('Coseno diretor {} é {:.4}'.format(indices[k], v) for k, v in list(enumerate(dcos)))) """ Explanation: Questão 2 Calcular os cossenos diretores para os vetores indicados. 
End of explanation """ print_dcos(dcos([3, 6], True)) """ Explanation: a) h = [3, 6] End of explanation """ print_dcos(dcos([-4, 8], True)) """ Explanation: b) k = [-4, 8] End of explanation """ print_dcos(dcos([5, -4], True)) """ Explanation: c) m = [5, -4] End of explanation """ print_dcos(dcos([3, 0], True)) """ Explanation: d) n = [3, 0]print(dcos([-4, 8], True)) End of explanation """ def vecpoints(a, b): a, b = np.array(a), np.array(b) return b - a """ Explanation: Questão 3 Encontrar os vetores iniciando no ponto P e terminando no ponto Q, para cada caso abaixo: End of explanation """ print(vecpoints([4, 8], [3, 7])) """ Explanation: a) P(4, 8), Q(3, 7) End of explanation """ print(vecpoints([3, -5],[-4, -7])) """ Explanation: b) P(3, -5), Q(-4, -7) End of explanation """ print(vecpoints([-5, 0], [-3, 1])) """ Explanation: c) P(-5, 0), Q(-3, 1) End of explanation """ print(vecpoints([3, 3], [4, 4])) """ Explanation: d) P(3, 3), Q(4, 4) End of explanation """ # Demonstração sem código. """ Explanation: Questão 4 Sendo a = ax.i + ay.j + az.k e b = bx.i + by.j + bz.k, apresentar a X b na forma de matriz. <br><u>Sugestão:</u> usar o conceito de determinante de uma matriz. End of explanation """ def intern_angle(a: Vector, b: Vector, verbose=False, degrees=True): """ Calcula o ângulo interno entre dois vetores a partir do produto interno entre eles. O produto interno é dado por ||a||.||b||.cos(tetha), sendo tetha o ângulo entre os vetores a e b. 
Logo, temos que tetha = arccos((a.b)/(||a||.||b||)) """ #a, b = np.array(a), np.array(b) # Produto interno entre a e b dp = np.dot(a, b) # Normas dos vetores a e b na, nb = np.linalg.norm(a), np.linalg.norm(b) # Cosseno de tetha cosin = dp / (na * nb) if verbose: print("O produto interno entre {} e {} é: {}".format(a, b, dp)) print("As normas dos vetores a e b são, respectivamente: {:.4} e {:.4}".format(na, nb)) print("O cosseno do ângulo entre a e b é {:.6}".format(cosin)) ac = np.arccos(cosin) # ângulo em radianos [0, pi] if degrees: result = ac * 180 / np.pi if verbose: print("O ângulo {:.4} radianos corresponde a {:.8}º".format(ac, result)) return result return ac ## Solução da questão 5. # a, b = [3, 5], [2, 1] print("O ângulo interno entre a e b é de {:.8}º".format(intern_angle(a, b, True))) """ Explanation: Questão 5 Sendo a = 3i + 5j e b = 2i + j, calcular o ângulo entre a e b, usando o produto interno. End of explanation """ ## Solução da questão 6. # # Os segmentos de reta entre os pontos P e origem e entre Q e origem correspondem, respectivamente, aos vetores p e q: p, q = [3, 4, 7], [2, 1, 9] print("O ângulo interno entre OP e OQ é {:.8}º".format(intern_angle(p, q, True))) """ Explanation: Questão 6 Sendo os pontos P(3, 4, 7) e Q(2, 1, 9), calcular o ângulo entre OP e OQ, sendo O a origem. End of explanation """ P, A, B, O = np.array([1, 4, 2]), np.array([4, -1, 4]), np.array([5, 3, 6]), np.array([0, 0, 0]) # O segmento de reta OP corresponde ao vetor p [1, 4, 2]. p = P - O # == P # O segmento de reta AB corresponde ao vetor v = B - A. v = B - A # p e v serão paralelos se o ângulo entre eles tetha for igual a zero, ou seja, cos(tetha) == 1 # Desta forma, o produto interno entre os vetores, dado por ||p||.||v||.cos(tetha), pode ser usado para se determinar # o ângulo entre estes: temos que tetha = arccos((a.b)/(||a||.||b||)) # tetha = intern_angle(p, v, True) print("O ângulo entre os vetores é de {:.8}º, logo, os segmentos de reta OP e AB {} paralelos." 
.format(tetha, "são" if tetha == 0 else "não são")) """ Explanation: Questão 7 Sendo os pontos P(1, 4, 2), A(4, -1, 4) e B(5, 3, 6), determinar se OP e AB são paralelos, sendo O a origem. End of explanation """ def q8(v): """ Rotina para a questão 8. """ # Cossenos diretores. dc = dcos(v, True) print_dcos(dc) # Cálculo dos ângulos "diretores". angles_names = ['alpha', 'beta', 'gamma'] angles = {} for k, name in enumerate(angles_names): angles[name] = np.arccos(dc[k]) * 180/np.pi print("O ângulo {} tem o valor de {:.6}".format(name, angles[name])) # Verificação da combinação afim. sum, output = 0, "" for c in dc: part = c*c sum += part if (output != ""): output += " + " output += "({:.4})^2".format(c) print("{} = {:.2}".format(output, sum)) """ Explanation: Questão 8 Calcular os comprimentos dos cossenos diretores dos vetores AB, abaixo, determinando também os ângulos alpha, beta e gama formados entre esses vetores e os eixos de coordenadas na direção positiva. E, por fim, mostrar que, em cada caso, cos2(alpha) + cos2(beta) + cos2(gama) = 1. End of explanation """ v = vecpoints([1, 1, 1], [2, 0, 1]) q8(v) """ Explanation: a) A(1,1,1) e B(2,0,1) End of explanation """ v = vecpoints([2, -1, 1], [-2, 2, 2]) q8(v) """ Explanation: b) A(2, -1, 1) e B(-2, -2, 2) End of explanation """ v = vecpoints([-1, 3, 1], [-2, -1, 0]) q8(v) """ Explanation: c) A(-1, 3, 1) e B(-2, -1, 0) End of explanation """
Chipe1/aima-python
notebooks/chapter24/Image Segmentation.ipynb
mit
import os, sys sys.path = [os.path.abspath("../../")] + sys.path from perception4e import * from notebook4e import * import matplotlib.pyplot as plt """ Explanation: Segmentation Image segmentation is another early as well as an important image processing task. Segmentation is the process of breaking an image into groups, based on similarities of the pixels. Pixels can be similar to each other in multiple ways like brightness, color, or texture. The segmentation algorithms are to find a partition of the image into sets of similar pixels which usually indicating objects or certain scenes in an image. The segmentations in this chapter can be categorized into two complementary ways: one focussing on detecting the boundaries of these groups, and the other on detecting the groups themselves, typically called regions. We will introduce some principles of some algorithms in this notebook to present the basic ideas in segmentation. Probability Boundary Detection A boundary curve passing through a pixel $(x,y)$ in an image will have an orientation $\theta$, so we can formulize boundary detection problem as a classification problem. Based on features from a local neighborhood, we want to compute the probability $P_b(x,y,\theta)$ that indeed there is a boundary curve at that pixel along that orientation. One of the sampling ways to calculate $P_b(x,y,\theta)$ is to generate a series sub-divided into two half disks by a diameter oriented at θ. If there is a boundary at (x, y, θ) the two half disks might be expected to differ significantly in their brightness, color, and texture. For detailed proof of this algorithm, please refer to this article. Implementation We implemented a simple demonstration of probability boundary detector as probability_contour_detection in perception.py. This method takes three inputs: image: an image already transformed into the type of numpy ndarray. discs: a list of sub-divided discs. 
threshold: the criterion for deciding whether the difference between the intensities of the two half discs implies a boundary passing through the current pixel. We also provide a helper function gen_discs to generate a list of discs. It takes scales, the number of disc sizes to generate, which defaults to 1. Please note that for each scale, 8 sub-divided discs are generated, oriented along the horizontal, vertical, and two diagonal directions. Another argument, init_scale, indicates the starting scale size. For instance, if we use an init_scale of 10 and scales of 2, then discs of sizes 10 and 20 will be generated, and thus we will have 16 sub-divided discs. Example Now let's demonstrate the inner mechanism with our naive implementation of the algorithm. First, let's generate some very simple test images. We already generated a grayscale image with only three steps of gray scales in perception4e.py: End of explanation """ plt.imshow(gray_scale_image, cmap='gray', vmin=0, vmax=255) plt.axis('off') plt.show() """ Explanation: Let's take a look at it: End of explanation """ gray_img = gen_gray_scale_picture(100, 5) plt.imshow(gray_img, cmap='gray', vmin=0, vmax=255) plt.axis('off') plt.show() """ Explanation: You can also generate your own grayscale images by calling gen_gray_scale_picture and passing the image size and number of grayscale levels needed: End of explanation """ discs = gen_discs(100, 1) fig=plt.figure(figsize=(10, 10)) for i in range(8): img = discs[0][i] fig.add_subplot(1, 8, i+1) plt.axis('off') plt.imshow(img, cmap='gray', vmin=0, vmax=255) plt.show() """ Explanation: Now let's generate the discs we are going to use as sampling masks to measure the intensity difference between the two halves of the area of interest in an image. 
We can generate discs of size 100 pixels and show them: End of explanation """ discs = gen_discs(10, 1) contours = probability_contour_detection(gray_img, discs[0]) show_edges(contours) """ Explanation: The white part of each disc image has value 1 while the dark part has value 0. Thus convolving a half-disc image with the corresponding area of an image will yield only half of its content. Of course, discs of size 100 are too large for an image of the same size. We will use discs of size 10 and pass them to the detector. End of explanation """ contours = group_contour_detection(gray_scale_image, 3) """ Explanation: As we are using discs of size 10 and some boundary conditions are not dealt with in our naive algorithm, the extracted contour has thick edges with gaps near the image border. But the main structures of the contours are extracted correctly, which shows the ability of this algorithm. Group Contour Detection The alternative approach is based on trying to "cluster" the pixels into regions based on their brightness, color and texture properties. There are multiple grouping algorithms, and the simplest and most popular one is k-means clustering. Basically, the k-means algorithm starts with k randomly selected centroids, which are used as the beginning points for every cluster, and then performs iterative calculations to optimize the positions of the centroids. For a detailed description, please refer to the chapter on unsupervised learning. Implementation Here we will use the cv2 module to perform k-means clustering and show the image. To use it you need to have opencv-python pre-installed. Using cv2.kmeans is quite simple: you only need to specify the input image and the parameters of cluster initialization. Here we use modules provided by cv2 to initialize the clusters. cv2.KMEANS_RANDOM_CENTERS randomly generates the initial centers of the clusters, and the cluster number is defined by the user. 
The kmeans method will return the centers and labels of the clusters, which can be used to classify the pixels of an image. Let's try this algorithm again on the small grayscale image we imported: End of explanation """ show_edges(contours) """ Explanation: Now let's show the extracted contours: End of explanation """ import numpy as np import matplotlib.image as mpimg stapler_img = mpimg.imread('images/stapler.png', format="gray") contours = group_contour_detection(stapler_img, 5) plt.axis('off') plt.imshow(contours, cmap="gray") """ Explanation: The effect is not obvious, as our generated image already has very clear boundaries. Let's apply the algorithm to the stapler example to see whether it will be more obvious: End of explanation """ contours = group_contour_detection(stapler_img, 15) plt.axis('off') plt.imshow(contours, cmap="gray") """ Explanation: The segmentation is very rough when using only 5 clusters. Increasing the number of clusters increases the level of detail in each group, so the whole picture will look more like the original: End of explanation """ image = gen_gray_scale_picture(size=10, level=2) show_edges(image) graph = Graph(image) graph.min_cut((0,0), (9,9)) """ Explanation: Minimum Cut Segmentation Another way to do clustering is by applying the minimum cut algorithm from graph theory. Roughly speaking, the criterion for partitioning the graph is to minimize the sum of weights of connections across the groups and maximize the sum of weights of connections within the groups. Implementation There are several kinds of representations of a graph, such as a matrix or an adjacency list. Here we are using a util function image_to_graph to convert an image of ndarray type to an adjacency list. It is integrated into the Graph class. Graph takes an image as input and offers implementations of the following graph theory algorithms: bfs: performs a breadth-first search from a source vertex to a terminal vertex. 
Returns True if there is a path between the two nodes, else returns False. min_cut: performs a minimum cut on a graph from a source vertex to a sink vertex. The method will return the edges to be cut. Now let's try the minimum cut method on a simple generated grayscale image of size 10: End of explanation """
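The bfs primitive that min_cut relies on can be sketched with a plain adjacency list in standard Python (a generic illustration, not the Graph class above; in a min-cut setting one would only follow edges with remaining residual capacity):

```python
from collections import deque

def reachable(adj, source, sink):
    """Breadth-first search: True if sink is reachable from source.

    `adj` maps each vertex to the list of vertices it has edges to.
    """
    seen = {source}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        if v == sink:
            return True
        for w in adj.get(v, ()):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

adj = {'s': ['a', 'b'], 'a': ['t'], 'b': []}
print(reachable(adj, 's', 't'))  # True
print(reachable(adj, 'b', 't'))  # False
```

In the max-flow/min-cut loop, this reachability test decides whether another augmenting path exists; when it fails, the edges crossing the reachable/unreachable frontier form the cut.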
DwangoMediaVillage/pqkmeans
tutorial/3_billion_scale_clustering.ipynb
mit
import numpy import pqkmeans import tqdm import os import six import gzip import texmex_python """ Explanation: Chapter 3: Billion-scale clustering This chapter contains the following: Download the SIFT1B dataset Encode billion-scale data iteratively Run clustering Requisites: - numpy - pqkmeans - tqdm - six - os - gzip - texmex_python (automatically installed when you pip pqkmeans) 1. Download the SIFT1B dataset End of explanation """ cache_directory = "." # Please set this according to your environment. 97.9 GB disk space is required. filename = "bigann_base.bvecs.gz" url = "ftp://ftp.irisa.fr/local/texmex/corpus/" + filename path = os.path.join(cache_directory, filename) if not os.path.exists(path): print("downloading {}".format(url)) %time six.moves.urllib.request.urlretrieve(url, path) """ Explanation: In this chapter, we show an example of billion-scale clustering. Since input vectors are compressed by PQ, our PQk-means can handle a large number of vectors even if they cannot be loaded into memory directly. From a programming perspective, our PQ-encoder has an iterative encoding function (transform_generator), by which we can handle large-scale data as if it were in memory. Let's start the tutorial by downloading the SIFT1B data. It consists of one billion 128-dimensional SIFT vectors, and requires 97.9 GB of disk space. The download might take several hours. End of explanation """ f = gzip.open(path, 'rb') vec_iterator = texmex_python.reader.read_bvec_iter(f) """ Explanation: Next, let's open the data and construct an iterator for it in the usual Python way. The texmex_python package contains an iterator interface for bvecs-type data. End of explanation """ learn_data, _ = pqkmeans.evaluation.get_siftsmall_dataset() M = 4 encoder = pqkmeans.encoder.PQEncoder(num_subdim=M, Ks=256) encoder.fit(learn_data) """ Explanation: Then, you can read each SIFT vector one by one with a usual for-loop, e.g., "for v in vec_iterator: ...". 
Note that you do not need to read all the data at once, which would require 97.9 GB of memory. 2. Encode billion-scale data iteratively Before encoding, let us construct a PQ-encoder using a small amount of training data. We use the training data of the SIFTSMALL dataset for the sake of simplicity (you should use the training data of the SIFT1B dataset for evaluation, which takes 9.7 GB of disk space). End of explanation """ pqcode_generator = encoder.transform_generator(vec_iterator) """ Explanation: Next, we'll encode each SIFT vector to a PQ-code iteratively. To do so, let us create a generator by calling the transform_generator function. End of explanation """ N = 1000000000 pqcodes = numpy.empty([N, M], dtype=encoder.code_dtype) print("pqcodes.shape:\n{}".format(pqcodes.shape)) print("pqcodes.nbytes:\n{} bytes".format(pqcodes.nbytes)) print("pqcodes.dtype:\n{}".format(pqcodes.dtype)) """ Explanation: The resulting pqcode_generator is a generator for PQ-codes. We can encode each SIFT vector by, e.g., "for code in pqcode_generator: ...", without loading all the data into memory at once. This design is not specific to the SIFT1B data. Whenever you need to compress big data that cannot be loaded into memory at once, you can write an iterator for your data and pass it to a PQ-encoder. So let's run the encoding. To avoid consuming redundant memory space, we first allocate a big matrix as follows. End of explanation """ for n, code in enumerate(tqdm.tqdm(pqcode_generator, total=N)): pqcodes[n, :] = code """ Explanation: We can encode the vectors by simply running a usual for-loop. The encoding is automatically parallelized; you do not need to execute any additional steps. The encoding for SIFT1B would take several hours depending on your computer. Note that this does not incur any additional memory cost at all. 
End of explanation """ # pickle.dump(encoder, open('encoder.pkl', 'wb')) # numpy.save('pqcode.npy', pqcodes) """ Explanation: Note that it's also fine to use list comprehensions and numpy conversion such as "pqcodes=[code for code in pqcode_generator]" and "pqcodes=numpy.array(pqcodes)". But it would take memory overhead for temporal data storage. After encoding, you can save the pqcodes (and the PQ-encoder itself) if you want. Typically, the resulting PQ-codes do not take so much memory space (in this case, they take only 4 GB). So you can read/write the PQ-codes directly without any iterator/generator. End of explanation """ K = 1000 print("Runtime of clustering:") %time clustered = pqkmeans.clustering.PQKMeans(encoder=encoder, k=K).fit_predict(pqcodes) print("The assigned label for the top 100 PQ-codes:\n{}".format(clustered[:100])) """ Explanation: 3. Run clustering Finally, we can run clustering on one billion PQ-codes. The clustering for billion-scale data with K=1000 is finished in several hours depending on your computer. End of explanation """
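The encoding step itself is easy to illustrate without the library: product quantization splits each vector into M subvectors and stores, for each one, the index of the nearest codeword in that subvector's codebook. A toy sketch with hand-picked codebooks (an illustration of the idea, not the pqkmeans internals):

```python
def pq_encode(vec, codebooks):
    """Encode a vector as one codeword index per subvector.

    `codebooks[m]` is the list of codewords for the m-th subvector;
    len(vec) must be divisible by the number of subvectors M.
    """
    m_sub = len(codebooks)
    d_sub = len(vec) // m_sub
    code = []
    for m in range(m_sub):
        sub = vec[m * d_sub:(m + 1) * d_sub]
        dists = [sum((a - b) ** 2 for a, b in zip(sub, cw))
                 for cw in codebooks[m]]
        code.append(dists.index(min(dists)))  # nearest codeword's index
    return code

# Two subvectors of length 2, two codewords each.
codebooks = [[(0.0, 0.0), (1.0, 1.0)],
             [(0.0, 1.0), (1.0, 0.0)]]
print(pq_encode([0.9, 1.1, 0.1, 0.8], codebooks))  # [1, 0]
```

With M=4 subvectors and Ks=256 codewords per subvector, as in the tutorial, each 128-dimensional float vector compresses to just 4 bytes, which is why one billion codes fit in about 4 GB.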
kingb12/languagemodelRNN
report_notebooks/encdec_noing23_200_512_04drb.ipynb
mit
report_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing23_200_512_04drb/encdec_noing23_200_512_04drb.json' log_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing23_200_512_04drb/encdec_noing23_200_512_04drb_logs.json' import json import matplotlib.pyplot as plt with open(report_file) as f: report = json.loads(f.read()) with open(log_file) as f: logs = json.loads(f.read()) print'Encoder: \n\n', report['architecture']['encoder'] print'Decoder: \n\n', report['architecture']['decoder'] """ Explanation: Encoder-Decoder Analysis Model Architecture End of explanation """ print('Train Perplexity: ', report['train_perplexity']) print('Valid Perplexity: ', report['valid_perplexity']) print('Test Perplexity: ', report['test_perplexity']) """ Explanation: Perplexity on Each Dataset End of explanation """ %matplotlib inline for k in logs.keys(): plt.plot(logs[k][0], logs[k][1], label=str(k) + ' (train)') plt.plot(logs[k][0], logs[k][2], label=str(k) + ' (valid)') plt.title('Loss v. Epoch') plt.xlabel('Epoch') plt.ylabel('Loss') plt.legend() plt.show() """ Explanation: Loss vs. Epoch End of explanation """ %matplotlib inline for k in logs.keys(): plt.plot(logs[k][0], logs[k][3], label=str(k) + ' (train)') plt.plot(logs[k][0], logs[k][4], label=str(k) + ' (valid)') plt.title('Perplexity v. Epoch') plt.xlabel('Epoch') plt.ylabel('Perplexity') plt.legend() plt.show() """ Explanation: Perplexity vs. 
Epoch End of explanation """ def print_sample(sample, best_bleu=None): enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>']) gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>']) print('Input: '+ enc_input + '\n') print('Gend: ' + sample['generated'] + '\n') print('True: ' + gold + '\n') if best_bleu is not None: cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>']) print('Closest BLEU Match: ' + cbm + '\n') print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n') print('\n') for i, sample in enumerate(report['train_samples']): print_sample(sample, report['best_bleu_matches_train'][i] if 'best_bleu_matches_train' in report else None) for i, sample in enumerate(report['valid_samples']): print_sample(sample, report['best_bleu_matches_valid'][i] if 'best_bleu_matches_valid' in report else None) for i, sample in enumerate(report['test_samples']): print_sample(sample, report['best_bleu_matches_test'][i] if 'best_bleu_matches_test' in report else None) """ Explanation: Generations End of explanation """ def print_bleu(blue_struct): print 'Overall Score: ', blue_struct['score'], '\n' print '1-gram Score: ', blue_struct['components']['1'] print '2-gram Score: ', blue_struct['components']['2'] print '3-gram Score: ', blue_struct['components']['3'] print '4-gram Score: ', blue_struct['components']['4'] # Training Set BLEU Scores print_bleu(report['train_bleu']) # Validation Set BLEU Scores print_bleu(report['valid_bleu']) # Test Set BLEU Scores print_bleu(report['test_bleu']) # All Data BLEU Scores print_bleu(report['combined_bleu']) """ Explanation: BLEU Analysis End of explanation """ # Training Set BLEU n-pairs Scores print_bleu(report['n_pairs_bleu_train']) # Validation Set n-pairs BLEU Scores print_bleu(report['n_pairs_bleu_valid']) # Test Set n-pairs BLEU Scores print_bleu(report['n_pairs_bleu_test']) # Combined n-pairs BLEU Scores print_bleu(report['n_pairs_bleu_all']) # Ground 
Truth n-pairs BLEU Scores print_bleu(report['n_pairs_bleu_gold']) """ Explanation: N-pairs BLEU Analysis This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We expect very low scores for the ground truth, while high scores can expose hyper-common generations. End of explanation """ print 'Average (Train) Generated Score: ', report['average_alignment_train'] print 'Average (Valid) Generated Score: ', report['average_alignment_valid'] print 'Average (Test) Generated Score: ', report['average_alignment_test'] print 'Average (All) Generated Score: ', report['average_alignment_all'] print 'Average Gold Score: ', report['average_alignment_gold'] """ Explanation: Alignment Analysis This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as n-pairs BLEU: we expect low scores for the ground truth, and hyper-common generations will raise the scores. End of explanation """
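The Smith-Waterman score referenced above can be sketched for token sequences; the match/mismatch/gap values here are illustrative defaults, not necessarily the scoring scheme used to produce the report:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Best local-alignment score between token sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]  # DP table, floored at 0
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best

print(smith_waterman("the cat sat".split(), "the cat ran".split()))  # 4
```

Because the table is floored at zero, unrelated sequences score 0, while sequences sharing a common phrase score in proportion to the length of the shared run, which is why repeated, hyper-common generations drive the average up.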
t-vi/pytorch-tvmisc
wasserstein-distance/sn_projection_cgan_64x64_143c.ipynb
mit
import matplotlib try: %matplotlib inline except: # if we are not in Jupyter, use the headless frontend matplotlib.use('Agg') from matplotlib import pyplot import IPython import numpy import time import torch import torchvision import torch.utils.data """ Explanation: Spectral normalization GAN with categorical projection conditioning Thomas Viehmann &#116;&#118;&#64;&#108;&#101;&#114;&#110;&#97;&#112;&#112;&#97;&#114;&#97;&#116;&#46;&#100;&#101; Spectral Normalization GANs are an exciting new regularization method that approximates the Wasserstein loss as a discriminator function. The authors also demonstrated the capabilities of a projection method for the discriminator - generating images for all 1000 imagenet (LSVRC2012) categories. We implement the smaller 64x64 pixel variant. One of the crucial ingredients on the side of the Generator is a Conditional Batch Norm that enables learning a class-specific multiplier and bias. This notebook - is a straightforward adaptation of the official chainer-based implementation in the 64x64 case - uses PyTorch's spectral_norm implementation - provides a custom Conditional Batch Norm modelled after the above official implementation but in a PyTorch-style way (see below) - you need Python 3 and PyTorch / master, including PR 9020 Acknowledgements: I received many helpful hints on the PyTorch chat, in particular from Andy Brock, thanks! My own time was sponsored by MathInf, my ML consulting company. 
First let's import something: End of explanation """ class BatchNorm2d(torch.nn.BatchNorm2d): def reset_parameters(self): self.reset_running_stats() if self.affine: self.weight.data.fill_(1.0) self.bias.data.zero_() class CategoricalConditionalBatchNorm(torch.nn.Module): # as in the chainer SN-GAN implementation, we keep per-cat weight and bias def __init__(self, num_features, num_cats, eps=2e-5, momentum=0.1, affine=True, track_running_stats=True): super().__init__() self.num_features = num_features self.num_cats = num_cats self.eps = eps self.momentum = momentum self.affine = affine self.track_running_stats = track_running_stats if self.affine: self.weight = torch.nn.Parameter(torch.Tensor(num_cats, num_features)) self.bias = torch.nn.Parameter(torch.Tensor(num_cats, num_features)) else: self.register_parameter('weight', None) self.register_parameter('bias', None) if self.track_running_stats: self.register_buffer('running_mean', torch.zeros(num_features)) self.register_buffer('running_var', torch.ones(num_features)) self.register_buffer('num_batches_tracked', torch.tensor(0, dtype=torch.long)) else: self.register_parameter('running_mean', None) self.register_parameter('running_var', None) self.register_parameter('num_batches_tracked', None) self.reset_parameters() def reset_running_stats(self): if self.track_running_stats: self.running_mean.zero_() self.running_var.fill_(1) self.num_batches_tracked.zero_() def reset_parameters(self): self.reset_running_stats() if self.affine: self.weight.data.fill_(1.0) self.bias.data.zero_() def forward(self, input, cats): exponential_average_factor = 0.0 if self.training and self.track_running_stats: self.num_batches_tracked += 1 if self.momentum is None: # use cumulative moving average exponential_average_factor = 1.0 / self.num_batches_tracked.item() else: # use exponential moving average exponential_average_factor = self.momentum out = torch.nn.functional.batch_norm( input, self.running_mean, self.running_var, None, None, 
self.training or not self.track_running_stats, exponential_average_factor, self.eps) if self.affine: shape = [input.size(0), self.num_features] + (input.dim() - 2) * [1] weight = self.weight.index_select(0, cats).view(shape) bias = self.bias.index_select(0, cats).view(shape) out = out * weight + bias return out def extra_repr(self): return '{num_features}, num_cats={num_cats}, eps={eps}, momentum={momentum}, affine={affine}, ' \ 'track_running_stats={track_running_stats}'.format(**self.__dict__) """ Explanation: Conditional Batch Norm Here is the promised conditional batch norm. It aims to be compatible in functionality with the SN-GAN authors' version; the original reference seems to be de Vries et al., Modulating early visual processing by language. It works pretty much like regular batch norm except that it has a per-class weight $\gamma$ and bias $\beta$. In particular, note that the input mean and variance (to be "cleaned") and the running statistics are not class dependent. (Of course, one might wonder whether they should be, but that is for another day...) There also is a conditional instance norm (which would address the fact that the statistics are computed across classes during training, but not during evaluation). PyTorch usually initializes the weight with uniform random numbers. We instead use 1.0. 
End of explanation """ class ResGenBlock(torch.nn.Module): def __init__(self, in_channels, out_channels, hidden_channels=None, ksize=3, pad=1, activation=torch.nn.functional.relu, upsample=False, n_classes=0): super().__init__() self.activation = activation self.upsample = upsample self.learnable_sc = in_channels != out_channels or upsample hidden_channels = out_channels if hidden_channels is None else hidden_channels self.n_classes = n_classes self.c1 = torch.nn.Conv2d(in_channels, hidden_channels, ksize, padding=pad) torch.nn.init.xavier_uniform_(self.c1.weight, gain=(2**0.5)) torch.nn.init.zeros_(self.c1.bias) self.c2 = torch.nn.Conv2d(hidden_channels, out_channels, ksize, padding=pad) torch.nn.init.xavier_uniform_(self.c2.weight, gain=(2**0.5)) torch.nn.init.zeros_(self.c2.bias) if n_classes > 0: self.b1 = CategoricalConditionalBatchNorm(in_channels, n_classes) self.b2 = CategoricalConditionalBatchNorm(hidden_channels, n_classes) else: self.b1 = BatchNorm2d(in_channels) self.b2 = BatchNorm2d(hidden_channels) if self.learnable_sc: self.c_sc = torch.nn.Conv2d(in_channels, out_channels, 1, padding=0) torch.nn.init.xavier_uniform_(self.c_sc.weight) torch.nn.init.zeros_(self.c_sc.bias) def forward(self, x, y=None): h = x h = self.b1(h, y) if y is not None else self.b1(h) h = self.activation(h) if self.upsample: h = torch.nn.functional.upsample(h, scale_factor=2) h = self.c1(h) h = self.b2(h, y) if y is not None else self.b2(h) h = self.activation(h) h = self.c2(h) if self.learnable_sc: if self.upsample: x = torch.nn.functional.upsample(x, scale_factor=2) sc = self.c_sc(x) else: sc = x return h + sc class ResNetGenerator(torch.nn.Module): def __init__(self, ch=64, dim_z=128, bottom_width=4, activation=torch.nn.functional.relu, n_classes=0): super().__init__() self.bottom_width = bottom_width self.activation = activation self.dim_z = dim_z self.n_classes = n_classes self.l1 = torch.nn.Linear(dim_z, (bottom_width ** 2) * ch * 16) 
torch.nn.init.xavier_uniform_(self.l1.weight) torch.nn.init.zeros_(self.l1.bias) self.block2 = ResGenBlock(ch * 16, ch * 8, activation=activation, upsample=True, n_classes=n_classes) self.block3 = ResGenBlock(ch * 8, ch * 4, activation=activation, upsample=True, n_classes=n_classes) self.block4 = ResGenBlock(ch * 4, ch * 2, activation=activation, upsample=True, n_classes=n_classes) self.block5 = ResGenBlock(ch * 2, ch, activation=activation, upsample=True, n_classes=n_classes) self.b6 = BatchNorm2d(ch) self.l6 = torch.nn.Conv2d(ch, 3, 3, stride=1, padding=1) torch.nn.init.xavier_uniform_(self.l6.weight) torch.nn.init.zeros_(self.l6.bias) def forward(self, batchsize=64, z=None, y=None, DEBUG=None, debugname="generator"): anyparam = next(self.parameters()) if z is None: z = torch.randn(batchsize, self.dim_z, dtype=anyparam.dtype, device=anyparam.device) if y is None and self.n_classes > 0: y = torch.randint(0, self.n_classes, (batchsize,), device=anyparam.device, dtype=torch.long) if (y is not None) and z.shape[0] != y.shape[0]: raise Exception('z.shape[0] != y.shape[0], z.shape[0]={}, y.shape[0]={}'.format(z.shape[0], y.shape[0])) h = z h = self.l1(h) h = h.reshape(h.shape[0], -1, self.bottom_width, self.bottom_width) h = self.block2(h, y) h = self.block3(h, y) h = self.block4(h, y) h = self.block5(h, y) h = self.b6(h) h = self.activation(h) h = torch.tanh(self.l6(h)) return h """ Explanation: Generator With that, we can define the generator. 
End of explanation """ class ResDisBlock(torch.nn.Module): def __init__(self, in_channels, out_channels, hidden_channels=None, ksize=3, pad=1, activation=torch.nn.functional.relu, downsample=False): super().__init__() self.activation = activation self.downsample = downsample self.learnable_sc = (in_channels != out_channels) or downsample hidden_channels = in_channels if hidden_channels is None else hidden_channels self.c1 = torch.nn.Conv2d(in_channels, hidden_channels, ksize, padding=pad) torch.nn.init.xavier_uniform_(self.c1.weight, gain=(2**0.5)) torch.nn.init.zeros_(self.c1.bias) torch.nn.utils.spectral_norm(self.c1) self.c2 = torch.nn.Conv2d(hidden_channels, out_channels, ksize, padding=pad) torch.nn.init.xavier_uniform_(self.c2.weight, gain=(2**0.5)) torch.nn.init.zeros_(self.c2.bias) torch.nn.utils.spectral_norm(self.c2) if self.learnable_sc: self.c_sc = torch.nn.Conv2d(in_channels, out_channels, 1, padding=0) torch.nn.init.xavier_uniform_(self.c_sc.weight) torch.nn.init.zeros_(self.c_sc.bias) torch.nn.utils.spectral_norm(self.c_sc) def forward(self, x): h = x h = self.activation(h) h = self.c1(h) h = self.activation(h) h = self.c2(h) if self.downsample: h = torch.nn.functional.avg_pool2d(h, 2) if self.learnable_sc: sc = self.c_sc(x) if self.downsample: sc = torch.nn.functional.avg_pool2d(sc, 2) else: sc = x return h + sc class ResDisOptimizedBlock(torch.nn.Module): def __init__(self, in_channels, out_channels, ksize=3, pad=1, activation=torch.nn.functional.relu): super().__init__() self.activation = activation self.c1 = torch.nn.Conv2d(in_channels, out_channels, ksize, padding=pad) torch.nn.init.xavier_uniform_(self.c1.weight, gain=(2**0.5)) torch.nn.init.zeros_(self.c1.bias) torch.nn.utils.spectral_norm(self.c1) self.c2 = torch.nn.Conv2d(out_channels, out_channels, ksize, padding=pad) torch.nn.init.xavier_uniform_(self.c2.weight, gain=(2**0.5)) torch.nn.init.zeros_(self.c2.bias) torch.nn.utils.spectral_norm(self.c2) self.c_sc = torch.nn.Conv2d(in_channels, 
out_channels, 1, padding=0) torch.nn.init.xavier_uniform_(self.c_sc.weight) torch.nn.init.zeros_(self.c_sc.bias) torch.nn.utils.spectral_norm(self.c_sc) def forward(self, x): h = x h = self.c1(h) h = self.activation(h) h = self.c2(h) h = torch.nn.functional.avg_pool2d(h, 2) sc = self.c_sc(x) sc = torch.nn.functional.avg_pool2d(sc, 2) return h + sc class SNResNetProjectionDiscriminator(torch.nn.Module): def __init__(self, ch=64, n_classes=0, activation=torch.nn.functional.relu): super().__init__() self.activation = activation self.block1 = ResDisOptimizedBlock(3, ch) self.block2 = ResDisBlock(ch, ch * 2, activation=activation, downsample=True) self.block3 = ResDisBlock(ch * 2, ch * 4, activation=activation, downsample=True) self.block4 = ResDisBlock(ch * 4, ch * 8, activation=activation, downsample=True) self.block5 = ResDisBlock(ch * 8, ch * 16, activation=activation, downsample=True) self.l6 = torch.nn.Linear(ch * 16, 1) torch.nn.init.xavier_uniform_(self.l6.weight) torch.nn.init.zeros_(self.l6.bias) torch.nn.utils.spectral_norm(self.l6) if n_classes > 0: self.l_y = torch.nn.Embedding(n_classes, ch * 16) torch.nn.init.xavier_uniform_(self.l_y.weight) torch.nn.utils.spectral_norm(self.l_y) def forward(self, x, y=None): h = x h = self.block1(h) h = self.block2(h) h = self.block3(h) h = self.block4(h) h = self.block5(h) h = self.activation(h) h = h.sum([2, 3]) output = self.l6(h) if y is not None: w_y = self.l_y(y) output = output + (w_y * h).sum(dim=1, keepdim=True) return output """ Explanation: Discriminator And the discriminator. So one of the subtle differences between ResDisBlock and the ResDisOptimizedBlock is that ResDisBlock starts with an activation - even though these look like they could be both covered by a single module... 
End of explanation """ batchsize = 64 num_iterations = 250000 iterations_decay_start = 200000 seed = 0 display_interval = 1000 snapshot_interval = 10000 evaluation_interval = 1000 init_lr = 0.0002 num_discriminator_iter = 5 num_classes = 143 # cats and dogs augmentation = True train_dir = './cats_and_dogs/' # I just used the corresponding folders of the classes from the original imagenet set device = torch.device('cuda') gen = ResNetGenerator(n_classes=num_classes) gen.to(device) opt_gen = torch.optim.Adam(params=gen.parameters(), lr=init_lr, betas=(0.0, 0.9)) # betas from chainer SNGAN dis = SNResNetProjectionDiscriminator(n_classes=num_classes) dis.to(device) opt_dis = torch.optim.Adam(params=dis.parameters(), lr=init_lr, betas=(0.0, 0.9)) """ Explanation: Putting it together Let's have some hyperparameters. End of explanation """ # This intends to reproduce the preprocessing in the chainer SNGAN repository with and without augmentation if augmentation: transforms = torchvision.transforms.Compose( [torchvision.transforms.Resize(256), torchvision.transforms.RandomCrop(int(256*0.9)), torchvision.transforms.Resize(64), torchvision.transforms.RandomHorizontalFlip(), torchvision.transforms.ToTensor(), torchvision.transforms.Lambda(lambda x: x*(255./128.)-1+torch.rand(*x.shape)/128.) ]) else: transforms = torchvision.transforms.Compose( [torchvision.transforms.Resize(256), torchvision.transforms.CenterCrop(256), torchvision.transforms.Resize(64), torchvision.transforms.ToTensor(), torchvision.transforms.Lambda(lambda x: x*(255./128.)-1+torch.rand(*x.shape)/128.) ]) train_dataset = torchvision.datasets.ImageFolder(train_dir, transforms) def endless_train_dl(dl): while True: for b in dl: yield b train_dl_ = torch.utils.data.DataLoader(train_dataset, batch_size=batchsize, shuffle=True, pin_memory=True, num_workers=4) train_dl = endless_train_dl(train_dl_) """ Explanation: Dataloader We use the imagenet dataloader from torchvision. We resize to 64x64. 
I have tried to stay close to the original implementation. End of explanation """ img = None if matplotlib.get_backend().lower() != 'agg': im, lab = train_dataset[11000] img = torchvision.transforms.functional.to_pil_image(im*0.5+0.5) img = img.resize((128,128)) img """ Explanation: Chicken! (if you are using the full imagenet) End of explanation """ time1 = time.time() for it_nr in range(num_iterations): for i in range(num_discriminator_iter): if i == 0: z_fake = torch.randn(batchsize, gen.dim_z, dtype=torch.float, device=device) y_fake = torch.randint(0, gen.n_classes, (batchsize,), device=device, dtype=torch.long) x_fake = gen(batchsize, y=y_fake, z=z_fake) dis_fake = dis(x_fake, y=y_fake) loss_gen = -dis_fake.mean() opt_gen.zero_grad() loss_gen.backward() opt_gen.step() im, lab = next(train_dl) x_real = im.to(device) y_real = lab.to(device) batchsize = x_real.size(0) y_fake = torch.randint(0, gen.n_classes, (batchsize,), device=device, dtype=torch.long) with torch.no_grad(): x_fake = gen(batchsize, y=y_fake).detach() dis_real = dis(x_real, y=y_real) dis_fake = dis(x_fake, y=y_fake) loss_dis = (torch.nn.functional.relu(1. - dis_real).mean() + torch.nn.functional.relu(1. 
+ dis_fake).mean()) opt_dis.zero_grad() loss_dis.backward() opt_dis.step() if it_nr % display_interval == 0: IPython.display.clear_output(True) pyplot.figure(figsize=(10,10)) pyplot.imshow(numpy.transpose(x_fake.cpu().numpy().reshape(8,8,3,64,64), (0,3,1,4,2)).reshape(8*64,8*64,3)*0.5+0.5) pyplot.savefig(f"sample_{it_nr:07d}") pyplot.show() if it_nr % display_interval == 0: print('it_nr',it_nr, 'i', i, 'loss_dis', loss_dis.item(), 'loss_gen', loss_gen.item()) time2 = time.time() remaining = int((time2 - time1)* (num_iterations - it_nr) / float(display_interval)) print ("it_nr", it_nr, "/", num_iterations, "time per it", (time2 - time1) / float(display_interval), "remaining {:02d}:{:02d}:{:02d}".format(remaining // 3600, remaining // 60 % 60, remaining % 60)) time1 = time2 if it_nr % snapshot_interval == 0: print ("saving snapshot") torch.save([dis.state_dict(), gen.state_dict()], f"snapshot_{it_nr}.pt") if matplotlib.get_backend().lower() == 'agg': # below is not for non-interactive import sys sys.exit(0) """ Explanation: Training Training promises to take around 65 hours on my computer. End of explanation """ gen = ResNetGenerator(dim_z=128, bottom_width=4, ch=64, n_classes=num_classes) dis = SNResNetProjectionDiscriminator(ch=64, n_classes=num_classes) [dis_sd, gen_sd] = torch.load('./snapshot_0.pt') dis.load_state_dict(dis_sd) gen.load_state_dict(gen_sd) y_fake = torch.randint(0, gen.n_classes, (batchsize,), device=next(gen.parameters()).device, dtype=torch.long) x_fake = gen(batchsize, y=y_fake) img = torchvision.transforms.functional.to_pil_image((x_fake.view(8,8,3,64,64).permute(2,0,3,1,4).reshape(3, 8*64,8*64)*0.5+0.5).cpu()) img.save("sample_manual.jpg") img """ Explanation: Sampling End of explanation """
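The hinge losses in the training loop above reduce to a few lines; here is a scalar sketch (the real code operates on batched tensors on the GPU):

```python
def d_hinge_loss(d_real, d_fake):
    """Discriminator hinge loss: push real scores above 1, fake scores below -1."""
    relu = lambda x: max(0.0, x)
    real_term = sum(relu(1.0 - d) for d in d_real) / len(d_real)
    fake_term = sum(relu(1.0 + d) for d in d_fake) / len(d_fake)
    return real_term + fake_term

def g_hinge_loss(d_fake):
    """Generator loss: raise the discriminator's score on fakes."""
    return -sum(d_fake) / len(d_fake)

print(d_hinge_loss([2.0, 0.5], [-2.0, 0.0]))  # 0.75
print(g_hinge_loss([-2.0, 0.0]))              # 1.0
```

Note how a confidently classified sample (real score above 1, fake score below -1) contributes zero gradient to the discriminator, which keeps the spectrally normalized critic from saturating.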
tensorflow/examples
courses/udacity_intro_to_tensorflow_lite/tflite_c03_exercise_convert_model_to_tflite.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2018 The TensorFlow Authors. End of explanation """ # TensorFlow and tf.keras import tensorflow as tf from tensorflow import keras import tensorflow_datasets as tfds tfds.disable_progress_bar() # Helper libraries import numpy as np import matplotlib.pyplot as plt import pathlib print(tf.__version__) """ Explanation: Train Your Own Model and Convert It to TFLite <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_lite/tflite_c03_exercise_convert_model_to_tflite.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_lite/tflite_c03_exercise_convert_model_to_tflite.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a> </td> </table> This notebook uses the Fashion MNIST dataset which contains 70,000 grayscale images in 10 categories. 
The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here: <table> <tr><td> <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> </td></tr> <tr><td align="center"> <b>Figure 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>&nbsp; </td></tr> </table> Fashion MNIST is intended as a drop-in replacement for the classic MNIST dataset—often used as the "Hello, World" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc.) in a format identical to that of the articles of clothing we'll use here. This uses Fashion MNIST for variety, and because it's a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They're good starting points to test and debug code. We will use 60,000 images to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access the Fashion MNIST directly from TensorFlow. 
Import and load the Fashion MNIST data directly from TensorFlow: Setup End of explanation """ splits = tfds.Split.ALL.subsplit(weighted=(80, 10, 10)) splits, info = tfds.load('fashion_mnist', with_info=True, as_supervised=True, split=splits) (train_examples, validation_examples, test_examples) = splits num_examples = info.splits['train'].num_examples num_classes = info.features['label'].num_classes class_names = ['T-shirt_top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] with open('labels.txt', 'w') as f: f.write('\n'.join(class_names)) IMG_SIZE = 28 """ Explanation: Download Fashion MNIST Dataset End of explanation """ # Write a function to normalize and resize the images def format_example(image, label): # Cast image to float32 image = # YOUR CODE HERE # Resize the image if necessary image = # YOUR CODE HERE # Normalize the image in the range [0, 1] image = # YOUR CODE HERE return image, label # Set the batch size to 32 BATCH_SIZE = 32 """ Explanation: Preprocessing data Preprocess End of explanation """ # Prepare the examples by preprocessing the them and then batching them (and optionally prefetching them) # If you wish you can shuffle train set here train_batches = # YOUR CODE HERE validation_batches = # YOUR CODE HERE test_batches = # YOUR CODE HERE """ Explanation: Create a Dataset from images and labels End of explanation """ """ Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 26, 26, 16) 160 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 13, 13, 16) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 11, 11, 32) 4640 _________________________________________________________________ flatten (Flatten) (None, 3872) 0 
_________________________________________________________________ dense (Dense) (None, 64) 247872 _________________________________________________________________ dense_1 (Dense) (None, 10) 650 ================================================================= Total params: 253,322 Trainable params: 253,322 Non-trainable params: 0 """ # Build the model shown in the previous cell model = tf.keras.Sequential([ # Set the input shape to (28, 28, 1), kernel size=3, filters=16 and use ReLU activation, tf.keras.layers.Conv2D(# YOUR CODE HERE), tf.keras.layers.MaxPooling2D(), # Set the number of filters to 32, kernel size to 3 and use ReLU activation tf.keras.layers.Conv2D(# YOUR CODE HERE), # Flatten the output layer to 1 dimension tf.keras.layers.Flatten(), # Add a fully connected layer with 64 hidden units and ReLU activation tf.keras.layers.Dense(# YOUR CODE HERE), # Attach a final softmax classification head tf.keras.layers.Dense(# YOUR CODE HERE)]) # Set the loss and accuracy metrics model.compile( optimizer='adam', loss=# YOUR CODE HERE, metrics=# YOUR CODE HERE) """ Explanation: Building the model End of explanation """ model.fit(train_batches, epochs=10, validation_data=validation_batches) """ Explanation: Train End of explanation """ export_dir = 'saved_model/1' # Use the tf.saved_model API to export the SavedModel # Your Code Here optimization = tf.lite.Optimize.DEFAULT # Use the TFLiteConverter SavedModel API to initialize the converter converter = # YOUR CODE HERE # Set the optimizations converter.optimizations = # YOUR CODE HERE # Invoke the converter to finally generate the TFLite model tflite_model = # YOUR CODE HERE tflite_model_file = 'model.tflite' with open(tflite_model_file, "wb") as f: f.write(tflite_model) """ Explanation: Exporting to TFLite End of explanation """ # Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_content=tflite_model) interpreter.allocate_tensors() input_index = interpreter.get_input_details()[0]["index"] output_index = interpreter.get_output_details()[0]["index"] # Gather results for the randomly sampled test images predictions = [] test_labels = [] test_images = [] for img, label in test_batches.take(50): interpreter.set_tensor(input_index, img) interpreter.invoke() predictions.append(interpreter.get_tensor(output_index)) test_labels.append(label[0]) test_images.append(np.array(img)) #@title Utility functions for plotting # Utilities for plotting def plot_image(i, predictions_array, true_label, img): predictions_array, true_label, img = predictions_array[i], true_label[i], img[i] plt.grid(False) plt.xticks([]) plt.yticks([]) img = np.squeeze(img) plt.imshow(img, cmap=plt.cm.binary) predicted_label = np.argmax(predictions_array) if predicted_label == true_label.numpy(): color = 'green' else: color = 'red' plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_label]), color=color) def plot_value_array(i, predictions_array, true_label): predictions_array, true_label = predictions_array[i], true_label[i] plt.grid(False) plt.xticks(list(range(10)), class_names, rotation='vertical') plt.yticks([]) thisplot = plt.bar(range(10), predictions_array[0], color="#777777") plt.ylim([0, 1]) predicted_label = np.argmax(predictions_array[0]) thisplot[predicted_label].set_color('red') thisplot[true_label].set_color('green') #@title Visualize the outputs { run: "auto" } index = 49 #@param {type:"slider", min:1, max:50, step:1} plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(index, predictions, test_labels, test_images) plt.show() plot_value_array(index, predictions, test_labels) plt.show() """ Explanation: Test if your model is working End of explanation """ try: from google.colab import files files.download(tflite_model_file) files.download('labels.txt') except: 
pass """ Explanation: Download TFLite model and assets NOTE: You might have to run the cell below twice End of explanation """ !mkdir -p test_images from PIL import Image for index, (image, label) in enumerate(test_batches.take(50)): image = tf.cast(image * 255.0, tf.uint8) image = tf.squeeze(image).numpy() pil_image = Image.fromarray(image) pil_image.save('test_images/{}_{}.jpg'.format(class_names[label[0]].lower(), index)) !ls test_images !zip -qq fmnist_test_images.zip -r test_images/ try: files.download('fmnist_test_images.zip') except: pass """ Explanation: Deploying TFLite model Now that you have the trained TFLite model downloaded, you can go ahead and deploy it on an Android/iOS application by placing the model assets in the appropriate location. Prepare the test images for download (Optional) End of explanation """
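Earlier in this notebook the converter sets `tf.lite.Optimize.DEFAULT`, whose main effect is post-training quantization of the weights to 8-bit integers. The sketch below is a rough, NumPy-only illustration of the underlying affine quantization idea, `w ≈ scale * (q - zero_point)`; it is not TFLite's actual implementation, and all names are invented for the example:

```python
import numpy as np

def quantize_int8(w):
    # Affine quantization: map floats onto int8 so that w ≈ scale * (q - zero_point)
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0
    zero_point = round(-128 - w_min / scale)
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover approximate float weights from the int8 representation
    return scale * (q.astype(np.float32) - zero_point)

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=1000).astype(np.float32)

q, scale, zp = quantize_int8(w)
w_hat = dequantize(q, scale, zp)
print(np.abs(w - w_hat).max())  # max reconstruction error is on the order of the step `scale`
```

In real TFLite the converter chooses per-tensor or per-channel scales and handles activations as well; this sketch only conveys the scale/zero-point idea behind the `Optimize.DEFAULT` flag.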
mjuric/LSSTC-DSFP-Sessions
Session4/Day1/LSSTC-DSFP4-Juric-FrequentistAndBayes-02-Nuisance.ipynb
mit
p_hat = 5. / 8. freq_prob = (1 - p_hat) ** 3 print("Naïve Frequentist Probability of Bob Winning: {0:.2f}".format(freq_prob)) """ Explanation: Frequentism and Bayesianism II: When Results Differ Mario Juric & Jake VanderPlas, University of Washington e-mail: &#109;&#106;&#117;&#114;&#105;&#99;&#64;&#97;&#115;&#116;&#114;&#111;&#46;&#119;&#97;&#115;&#104;&#105;&#110;&#103;&#116;&#111;&#110;&#46;&#101;&#100;&#117;, twitter: @mjuric This lecture is based on a post on the blog Pythonic Perambulations, by Jake VanderPlas. The content is BSD licensed. See also VanderPlas (2014) "Frequentism and Bayesianism: A Python-driven Primer". Slides built using the excellent RISE Jupyter extension by Damian Avila. We just showed that Bayesian and Frequentist approaches are often equivalent for simple problems. But it is also true that they can diverge greatly for more complicated problems. In practice, this divergence makes itself most clear in two different situations: The handling of "nuisance parameters" The subtle (and often overlooked) difference between frequentist confidence intervals and Bayesian credible intervals The second point is a bit more involved, and we'll save it for the end. Here, we focus on the first point: the difference between frequentist and Bayesian treatment of nuisance parameters. What is a Nuisance Parameter? A nuisance parameter is any quantity whose value is not relevant to the goal of an analysis, but is nevertheless required to determine some quantity of interest. For example, in the second application worked-through in the previous part, we estimated both the mean $\mu$ and intrinsic scatter $\sigma$ for the observed photons. In practice, we may only be interested in $\mu$, the mean of the distribution. In this case $\sigma$ is a nuisance parameter: it is not of immediate interest, but is nevertheless an essential stepping-stone in determining the final estimate of $\mu$, the parameter of interest. 
To illustrate this, we're going to go through two examples where nuisance parameters come into play. We'll explore both the frequentist and the Bayesian approach to solving both of these problems: Example #1: The Bayesian Billiard Game We'll start with an example of nuisance parameters that, in one form or another, dates all the way back to the posthumous 1763 paper written by Thomas Bayes himself. The particular version of this problem we'll study is borrowed from Eddy 2004. The setting is a rather contrived game in which Alice and Bob bet on the outcome of a process they can't directly observe: Alice and Bob enter a room. Behind a curtain there is a billiard table, which they cannot see, but their friend Carol can. Carol rolls a ball down the table, and marks where it lands. Once this mark is in place, Carol begins rolling new balls down the table. If the ball lands to the left of the mark, Alice gets a point; if it lands to the right of the mark, Bob gets a point. We can assume for the sake of example that Carol's rolls are unbiased: that is, the balls have an equal chance of ending up anywhere on the table. The first person to reach six points wins the game. Here the location of the mark (determined by the first roll) can be considered a nuisance parameter. It is unknown, and not of immediate interest, but it clearly must be accounted for when predicting the outcome of subsequent rolls. If the first roll settles far to the right, then subsequent rolls will favor Alice. If it settles far to the left, Bob will be favored instead. Given this setup, here is the question we ask of ourselves: In a particular game, after eight rolls, Alice has five points and Bob has three points. What is the probability that Bob will go on to win the game? Intuitively, you probably realize that because Alice received five of the eight points, the marker placement likely favors her. And given this, it's more likely that the next roll will go her way as well. 
And she has three opportunities to get a favorable roll before Bob can win; she seems to have clinched it. But, quantitatively, what is the probability that Bob will squeak-out a win? A Naïve Frequentist Approach Someone following a classical frequentist approach might reason as follows: To determine the result, we need an intermediate estimate of where the marker sits. We'll quantify this marker placement as a probability $p$ that any given roll lands in Alice's favor. Because five balls out of eight fell on Alice's side of the marker, we can quickly show that the maximum likelihood estimate of $p$ is given by: $$ \hat{p} = 5/8 $$ (This result follows in a straightforward manner from the binomial likelihood). Assuming this maximum likelihood probability, we can compute the probability that Bob will win, which is given by: $$ P(B) = (1 - \hat{p})^3 $$ That is, he needs to win three rolls in a row. Thus, we find that the following estimate of the probability: End of explanation """ print("Odds against Bob winning: {0:.0f} to 1".format((1. - freq_prob) / freq_prob)) """ Explanation: In other words, we'd give Bob the following odds of winning: End of explanation """ from scipy.special import beta bayes_prob = beta(6 + 1, 5 + 1) / beta(3 + 1, 5 + 1) print("P(B|D) = {0:.2f}".format(bayes_prob)) """ Explanation: So we've estimated using frequentist ideas that Alice will win about 18 times for each time Bob wins. Let's try a Bayesian approach next. Bayesian Approach We can also approach this problem from a Bayesian standpoint. This is slightly more involved, and requires us to first define some notation. We'll consider the following random variables: $B$ = Bob Wins $D$ = observed data, i.e. $D = (n_A, n_B) = (5, 3)$ $p$ = unknown probability that a ball lands on Alice's side during the current game We want to compute $P(B~|~D)$; that is, the probability that Bob wins given our observation that Alice currently has five points to Bob's three. 
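As a quick numerical cross-check (an addition to the original text), we can scan the binomial likelihood $p^5(1-p)^3$ over a fine grid and confirm that it peaks at $\hat{p} = 5/8$:

```python
import numpy as np

# Binomial likelihood (up to a constant factor) of Alice winning 5 of 8 rolls
p_grid = np.linspace(0.0005, 0.9995, 9999)
likelihood = p_grid**5 * (1 - p_grid)**3

p_mle = p_grid[np.argmax(likelihood)]
print(p_mle)  # ≈ 0.625 = 5/8
```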
The general Bayesian method of treating nuisance parameters is marginalization, or integrating the joint probability over the entire range of the nuisance parameter. In this case, that means that we will first calculate the joint distribution $$ P(B,p~|~D) $$ and then marginalize over $p$ using the following identity: $$ P(B~|~D) \equiv \int_{-\infty}^\infty P(B,p~|~D) {\mathrm d}p $$ This identity follows from the definition of conditional probability, and the law of total probability: that is, it is a fundamental consequence of probability axioms and will always be true. Even a frequentist would recognize this; they would simply disagree with our interpretation of $P(p)$ as being a measure of uncertainty of our own knowledge. Building our Bayesian Expression To compute this result, we will manipulate the above expression for $P(B~|~D)$ until we can express it in terms of other quantities that we can compute. We'll start by applying the following definition of conditional probability to expand the term $P(B,p~|~D)$: $$ P(B~|~D) = \int P(B~|~p, D) P(p~|~D) dp $$ Next we use Bayes' rule to rewrite $P(p~|~D)$: $$ P(B~|~D) = \int P(B~|~p, D) \frac{P(D~|~p)P(p)}{P(D)} dp $$ Finally, using the same probability identity we started with, we can expand $P(D)$ in the denominator to find: $$ P(B~|~D) = \frac{\int P(B~|~p,D) P(D~|~p) P(p) dp}{\int P(D~|~p)P(p) dp} $$ Now the desired probability is expressed in terms of three quantities that we can compute. Let's look at each of these in turn: <small> - $P(B~|~p,D)$: This term is exactly the frequentist likelihood we used above. In words: given a marker placement $p$ and the fact that Alice has won 5 times and Bob 3 times, what is the probability that Bob will go on to six wins? Bob needs three wins in a row, i.e. $P(B~|~p,D) = (1 - p) ^ 3$. - $P(D~|~p)$: this is another easy-to-compute term. In words: given a probability $p$, what is the likelihood of exactly 5 positive outcomes out of eight trials? 
The answer comes from the well-known Binomial distribution: in this case $P(D~|~p) \propto p^5 (1-p)^3$ - $P(p)$: this is our prior on the probability $p$. By the problem definition, we can assume that $p$ is evenly drawn between 0 and 1. That is, $P(p) \propto 1$, and the integrals range from 0 to 1.</small> Putting this all together, canceling some terms, and simplifying a bit, we find $$ P(B~|~D) = \frac{\int_0^1 (1 - p)^6 p^5 dp}{\int_0^1 (1 - p)^3 p^5 dp} $$ where both integrals are evaluated from 0 to 1. These integrals might look a bit difficult, until we notice that they are special cases of the Beta Function: $$ \beta(n, m) = \int_0^1 (1 - p)^{n - 1} p^{m - 1} \, dp $$ We'll compute these directly using Scipy's beta function implementation: End of explanation """
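As a cross-check (not in the original notebook), we can verify the Beta-function identity by brute-force numerical integration, computing the closed form via the Gamma function so the check is independent of SciPy:

```python
import numpy as np
from math import gamma

def beta_fn(n, m):
    # Beta function via the Gamma function: B(n, m) = Γ(n)Γ(m) / Γ(n + m)
    return gamma(n) * gamma(m) / gamma(n + m)

def trapz(y, x):
    # Trapezoidal rule, written out to avoid NumPy version differences
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

# Closed form: P(B|D) = B(7, 6) / B(4, 6)
analytic = beta_fn(6 + 1, 5 + 1) / beta_fn(3 + 1, 5 + 1)

# Brute-force integration of the same two integrals over [0, 1]
p = np.linspace(0.0, 1.0, 200001)
numeric = trapz((1 - p)**6 * p**5, p) / trapz((1 - p)**3 * p**5, p)

print(analytic, numeric)  # both ≈ 0.0909; the exact value is 1/11
```

The two agree to high precision, so the Beta-function shortcut used in the cell above is doing exactly what the integrals demand.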
bob_won = np.sum(Bob_count[10] == 6) print("Number of these games Bob won: {0}".format(bob_won.sum())) # compute the probability mc_prob = bob_won.sum() * 1. / good_games.sum() print("Monte Carlo Probability of Bob winning: {0:.2f}".format(mc_prob)) print("MC Odds against Bob winning: {0:.0f} to 1".format((1. - mc_prob) / mc_prob)) """ Explanation: So we see that the Bayesian result gives us 10 to 1 odds, which is quite different than the 18 to 1 odds found using the frequentist approach. So which one is correct? A Brute Force/Monte Carlo Approach For this type of well-defined and simple setup, it is actually relatively easy to use a Monte Carlo simulation to determine the correct answer. This is essentially a brute-force tabulation of possible outcomes: we generate a large number of random games, and simply count the fraction of relevant games that Bob goes on to win. The current problem is especially simple because so many of the random variables involved are uniformly distributed. We can use the numpy package to do this as follows: End of explanation """ x = np.array([ 0, 3, 9, 14, 15, 19, 20, 21, 30, 35, 40, 41, 42, 43, 54, 56, 67, 69, 72, 88]) y = np.array([33, 68, 34, 34, 37, 71, 37, 44, 48, 49, 53, 49, 50, 48, 56, 60, 61, 63, 44, 71]) e = np.array([ 3.6, 3.9, 2.6, 3.4, 3.8, 3.8, 2.2, 2.1, 2.3, 3.8, 2.2, 2.8, 3.9, 3.1, 3.4, 2.6, 3.4, 3.7, 2.0, 3.5]) """ Explanation: The Monte Carlo approach gives 10-to-1 odds on Bob, which agrees with the Bayesian approach. Apparently, our naïve frequentist approach above was flawed. Discussion This example shows several different approaches to dealing with the presence of a nuisance parameter p. The Monte Carlo simulation gives us a close brute-force estimate of the true probability (assuming the validity of our assumptions), which the Bayesian approach matches. The naïve frequentist approach, by utilizing a single maximum likelihood estimate of the nuisance parameter $p$, arrives at the wrong result. 
This does not imply frequentism itself is incorrect. The incorrect result above is more a matter of the approach being "naïve" than it being "frequentist". There certainly exist frequentist methods for handling this sort of nuisance parameter – for example, it is theoretically possible to apply a transformation and conditioning of the data to isolate the dependence on $p$ (though it's difficult to find any approach to this particular problem that does not somehow take advantage of Bayesian-like marginalization over $p$). Another potential point of contention is that the question itself is posed in a way that is perhaps unfair to the classical, frequentist approach. A frequentist might instead hope to give the answer in terms of null tests or confidence intervals: that is, they might devise a procedure to construct limits which would provably bound the correct answer in $100\times(1 - p)$ percent of similar trials, for some value of $p$ – say, 0.05 (note this is a different $p$ than the $p$ we've been talking about above). This might be classically accurate, but it doesn't quite answer the question at hand! (note: more on this later...) There is one clear common point of these two potential frequentist responses: both require some degree of effort and/or special expertise; perhaps a suitable frequentist approach would be immediately obvious to someone with a PhD in statistics, but is most definitely not obvious to a statistical lay-person simply trying to answer the question at hand. In this sense, Bayesianism provides a better approach for this sort of problem: by simple algebraic manipulation of a few well-known axioms of probability within a Bayesian framework, we can straightforwardly arrive at the correct answer without need for other special expertise. We'll explore a more data-oriented example of dealing with nuisance parameters next. 
Example #2: Linear Fit with Outliers One situation where the concept of nuisance parameters can be helpful is accounting for outliers in data. Consider the following dataset, relating the observed variables $x$ and $y$, and the error of $y$ stored in $e$. End of explanation """ %matplotlib inline import matplotlib.pyplot as plt plt.errorbar(x, y, e, fmt='.k', ecolor='gray'); """ Explanation: We'll visualize this data below: End of explanation """ from scipy import optimize def squared_loss(theta, x=x, y=y, e=e): dy = y - theta[0] - theta[1] * x return np.sum(0.5 * (dy / e) ** 2) theta1 = optimize.fmin(squared_loss, [0, 0], disp=False) xfit = np.linspace(0, 100) plt.errorbar(x, y, e, fmt='.k', ecolor='gray') plt.plot(xfit, theta1[0] + theta1[1] * xfit, '-k') plt.title('Maximum Likelihood fit: Squared Loss'); """ Explanation: Our task is to find a line of best-fit to the data. It's clear upon visual inspection that there are some outliers among these points, but let's start with a simple non-robust maximum likelihood approach. Like we saw in the previous post, the following simple maximum likelihood result can be considered to be either frequentist or Bayesian (with uniform priors): in this sort of simple problem, the approaches are essentially equivalent. We'll propose a simple linear model, which has a slope and an intercept encoded in a parameter vector $\theta$. The model is defined as follows: $$ \hat{y}(x~|~\theta) = \theta_0 + \theta_1 x $$ Given this model, we can compute a Gaussian likelihood for each point: $$ p(x_i,y_i,e_i~|~\theta) \propto \exp\left[-\frac{1}{2e_i^2}\left(y_i - \hat{y}(x_i~|~\theta)\right)^2\right] $$ The total likelihood is the product of all the individual likelihoods. Computing this and taking the log, we have: $$ \log \mathcal{L}(D~|~\theta) = \mathrm{const} - \sum_i \frac{1}{2e_i^2}\left(y_i - \hat{y}(x_i~|~\theta)\right)^2 $$ This should all look pretty familiar. 
This final expression is the log-likelihood of the data given the model, which can be maximized to find the $\theta$ corresponding to the maximum-likelihood model. Equivalently, we can minimize the summation term, which is known as the loss: $$ \mathrm{loss} = \sum_i \frac{1}{2e_i^2}\left(y_i - \hat{y}(x_i~|~\theta)\right)^2 $$ This loss expression is known as a squared loss; here we've simply shown that the squared loss can be derived from the Gaussian log likelihood. Standard Likelihood Approach Following the logic of the previous post, we can maximize the likelihood (or, equivalently, minimize the loss) to find $\theta$ within a frequentist paradigm. For a flat prior in $\theta$, the maximum of the Bayesian posterior will yield the same result. (note that there are good arguments based on the principle of maximum entropy that a flat prior is not the best choice here; we'll ignore that detail for now, as it's a very small effect for this problem). For simplicity, we'll use scipy's optimize package to minimize the loss: End of explanation """ from scipy import optimize def squared_loss(theta, x=x, y=y, e=e): dy = y - theta[0] - theta[1] * x return np.sum(0.5 * (dy / e) ** 2) theta1 = optimize.fmin(squared_loss, [0, 0], disp=False) xfit = np.linspace(0, 100) plt.errorbar(x, y, e, fmt='.k', ecolor='gray') plt.plot(xfit, theta1[0] + theta1[1] * xfit, '-k') plt.title('Maximum Likelihood fit: Squared Loss'); """ Explanation: It's clear on examination that the outliers are exerting a disproportionate influence on the fit. This is due to the nature of the squared loss function. If you have a single outlier that is, say 10 standard deviations away from the fit, its contribution to the loss will weigh as much as that of 25 points which are 2 standard deviations away! Clearly the squared loss is overly sensitive to outliers, and this is causing issues with our fit. One way to address this within the frequentist paradigm is to simply adjust the loss function to be more robust.
Frequentist Correction for Outliers: Huber Loss The variety of possible loss functions is quite literally infinite, but one relatively well-motivated option is the Huber loss. The Huber loss defines a critical value at which the loss curve transitions from quadratic to linear. Let's create a plot which compares the Huber loss to the standard squared loss for several critical values $c$: End of explanation """ def total_huber_loss(theta, x=x, y=y, e=e, c=3): return huber_loss((y - theta[0] - theta[1] * x) / e, c).sum() theta2 = optimize.fmin(total_huber_loss, [0, 0], disp=False) plt.errorbar(x, y, e, fmt='.k', ecolor='gray') plt.plot(xfit, theta1[0] + theta1[1] * xfit, color='lightgray') plt.plot(xfit, theta2[0] + theta2[1] * xfit, color='black') plt.title('Maximum Likelihood fit: Huber loss'); """ Explanation: The Huber loss is equivalent to the squared loss for points which are well-fit by the model, but reduces the loss contribution of outliers. For example, a point 20 standard deviations from the fit has a squared loss of 200, but a c=3 Huber loss of just over 55. Let's see the result of the best-fit line using the Huber loss rather than the squared loss. 
We'll plot the squared loss result in light gray for comparison: End of explanation """ # theta will be an array of length 2 + N, where N is the number of points # theta[0] is the intercept, theta[1] is the slope, # and theta[2 + i] is the weight g_i def log_prior(theta): #g_i needs to be between 0 and 1 if (all(theta[2:] > 0) and all(theta[2:] < 1)): return 0 else: return -np.inf # recall log(0) = -inf def log_likelihood(theta, x, y, e, sigma_B): dy = y - theta[0] - theta[1] * x g = np.clip(theta[2:], 0, 1) # g<0 or g>1 leads to NaNs in logarithm logL1 = np.log(g) - 0.5 * np.log(2 * np.pi * e ** 2) - 0.5 * (dy / e) ** 2 logL2 = np.log(1 - g) - 0.5 * np.log(2 * np.pi * sigma_B ** 2) - 0.5 * (dy / sigma_B) ** 2 return np.sum(np.logaddexp(logL1, logL2)) def log_posterior(theta, x, y, e, sigma_B): return log_prior(theta) + log_likelihood(theta, x, y, e, sigma_B) """ Explanation: By eye, this seems to have worked as desired: the fit is much closer to our intuition! However a Bayesian might point out that the motivation for this new loss function is a bit suspect: as we showed, the squared-loss can be straightforwardly derived from a Gaussian likelihood. The Huber loss seems a bit ad hoc: where does it come from? How should we decide what value of $c$ to use? Is there any good motivation for using a linear loss on outliers, or should we simply remove them instead? How might this choice affect our resulting model? A Bayesian Approach to Outliers: Nuisance Parameters The Bayesian approach to accounting for outliers generally involves modifying the model so that the outliers are accounted for. For this data, it is abundantly clear that a simple straight line is not a good fit to our data. So let's propose a more complicated model that has the flexibility to account for outliers. 
One option is to choose a mixture between a signal and a background: $$ \begin{array}{ll} p({x_i}, {y_i},{e_i}~|~\theta,{g_i},\sigma,\sigma_b) = & \frac{g_i}{\sqrt{2\pi e_i^2}}\exp\left[\frac{-\left(\hat{y}(x_i~|~\theta) - y_i\right)^2}{2e_i^2}\right] \ &+ \frac{1 - g_i}{\sqrt{2\pi \sigma_B^2}}\exp\left[\frac{-\left(\hat{y}(x_i~|~\theta) - y_i\right)^2}{2\sigma_B^2}\right] \end{array} $$ What we've done is expanded our model with some nuisance parameters: ${g_i}$ is a series of weights which range from 0 to 1 and encode for each point $i$ the degree to which it fits the model. $g_i=0$ indicates an outlier, in which case a Gaussian of width $\sigma_B$ is used in the computation of the likelihood. This $\sigma_B$ can also be a nuisance parameter, or its value can be set at a sufficiently high number, say 50. Our model is much more complicated now: it has 22 parameters rather than 2, but the majority of these can be considered nuisance parameters, which can be marginalized-out in the end, just as we marginalized (integrated) over $p$ in the Billiard example. Let's construct a function which implements this likelihood. As in the previous post, we'll use the emcee package to explore the parameter space. To actually compute this, we'll start by defining functions describing our prior, our likelihood function, and our posterior: End of explanation """ # Note that this step will take a few minutes to run! 
ndim = 2 + len(x) # number of parameters in the model nwalkers = 50 # number of MCMC walkers nburn = 10000 # "burn-in" period to let chains stabilize nsteps = 15000 # number of MCMC steps to take # set theta near the maximum likelihood, with np.random.seed(10) starting_guesses = np.zeros((nwalkers, ndim)) starting_guesses[:, :2] = np.random.normal(theta1, 1, (nwalkers, 2)) starting_guesses[:, 2:] = np.random.normal(0.5, 0.1, (nwalkers, ndim - 2)) import emcee sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[x, y, e, 50]) sampler.run_mcmc(starting_guesses, nsteps, rstate0=np.random.get_state()) sample = sampler.chain # shape = (nwalkers, nsteps, ndim) sample = sampler.chain[:, nburn:, :].reshape(-1, ndim) """ Explanation: Now we'll run the MCMC samples to explore the parameter space: End of explanation """ plt.plot(sample[:, 0], sample[:, 1], ',k', alpha=0.1) plt.xlabel('intercept') plt.ylabel('slope'); """ Explanation: Once we have these samples, we can exploit a very nice property of the Markov chains. Because their distribution models the posterior, we can integrate out (i.e. marginalize) over nuisance parameters simply by ignoring them! We can look at the (marginalized) distribution of slopes and intercepts by examining the first two columns of the sample: End of explanation """ plt.plot(sample[:, 2], sample[:, 3], ',k', alpha=0.1) plt.xlabel('$g_1$') plt.ylabel('$g_2$') print("g1 mean: {0:.2f}".format(sample[:, 2].mean())) print("g2 mean: {0:.2f}".format(sample[:, 3].mean())) """ Explanation: We see a distribution of points near a slope of $\sim 0.45$, and an intercept of $\sim 31$. We'll plot this model over the data below, but first let's see what other information we can extract from this trace. One nice feature of analyzing MCMC samples is that the choice of nuisance parameters is completely symmetric: just as we can treat the ${g_i}$ as nuisance parameters, we can also treat the slope and intercept as nuisance parameters! 
Let's do this, and check the posterior for $g_1$ and $g_2$, the outlier flag for the first two points:
End of explanation
"""
theta3 = np.mean(sample[:, :2], 0)
g = np.mean(sample[:, 2:], 0)
outliers = (g < 0.5)

plt.errorbar(x, y, e, fmt='.k', ecolor='gray')
plt.plot(xfit, theta1[0] + theta1[1] * xfit, color='lightgray')
plt.plot(xfit, theta2[0] + theta2[1] * xfit, color='lightgray')
plt.plot(xfit, theta3[0] + theta3[1] * xfit, color='black')
plt.plot(x[outliers], y[outliers], 'ro', ms=20, mfc='none', mec='red')
plt.title('Maximum Likelihood fit: Bayesian Marginalization');
"""
Explanation: There is not an extremely strong constraint on either of these, but we do see that $(g_1, g_2) = (1, 0)$ is slightly favored: the means of $g_1$ and $g_2$ are greater than and less than 0.5, respectively. If we choose a cutoff at $g=0.5$, our algorithm has identified $g_2$ as an outlier.
Let's make use of all this information, and plot the marginalized best model over the original data. As a bonus, we'll draw red circles to indicate which points the model detects as outliers:
End of explanation
"""
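The log_prior, log_likelihood, and log_posterior functions passed to the sampler above are not shown in this excerpt. A minimal numpy sketch consistent with the mixture equation — assuming theta is packed as [intercept, slope, g_1, ..., g_N] — might look like:

```python
import numpy as np

def log_prior(theta):
    # theta = [intercept, slope, g_1, ..., g_N]; each g_i must lie in [0, 1]
    g = theta[2:]
    if np.any(g < 0) or np.any(g > 1):
        return -np.inf  # log(0): outside the allowed range
    return 0.0  # otherwise flat

def log_likelihood(theta, x, y, e, sigma_B):
    # Mixture of an "inlier" Gaussian (width e_i) and a broad
    # "outlier" Gaussian (width sigma_B), weighted by g_i.
    dy = y - theta[0] - theta[1] * x
    g = np.clip(theta[2:], 0, 1)
    logL1 = np.log(g) - 0.5 * np.log(2 * np.pi * e ** 2) - 0.5 * (dy / e) ** 2
    logL2 = np.log(1 - g) - 0.5 * np.log(2 * np.pi * sigma_B ** 2) - 0.5 * (dy / sigma_B) ** 2
    return np.sum(np.logaddexp(logL1, logL2))

def log_posterior(theta, x, y, e, sigma_B):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    return lp + log_likelihood(theta, x, y, e, sigma_B)
```

Here np.logaddexp combines the two mixture components in log space, which avoids underflow for points that sit far from the line under either Gaussian.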
mdda/pycon.sg-2015_deep-learning
ipynb/6-RNN-as-Author.ipynb
mit
import numpy import theano from theano import tensor from blocks.bricks import Tanh from blocks.bricks.recurrent import GatedRecurrent from blocks.bricks.sequence_generators import (SequenceGenerator, Readout, SoftmaxEmitter, LookupFeedback) from blocks.graph import ComputationGraph import blocks.algorithms from blocks.algorithms import GradientDescent from blocks.initialization import Orthogonal, IsotropicGaussian, Constant from blocks.model import Model from blocks.monitoring import aggregation from blocks.extensions import FinishAfter, Printing from blocks.extensions.saveload import Checkpoint from blocks.extensions.monitoring import TrainingDataMonitoring from blocks.main_loop import MainLoop import blocks.serialization from blocks.select import Selector import logging import pprint logger = logging.getLogger(__name__) theano.config.floatX='float32' print theano.config.device # Dictionaries import string all_chars = [ a for a in string.printable]+['<UNK>'] code2char = dict(enumerate(all_chars)) char2code = {v: k for k, v in code2char.items()} if True: data_file = 'Shakespeare.poetry.txt' dim = 16 hidden_state_dim = 16 feedback_dim = 16 else: data_file = 'Shakespeare.plays.txt' dim = 64 hidden_state_dim = 64 feedback_dim = 64 seq_len = 256 # The input file is learned in chunks of text this large # Network parameters num_states=len(char2code) # This is the size of the one-hot input and SoftMax output layers batch_size = 10 # This is for mini-batches : Helps optimize GPU workload num_epochs = 1 # Number of reads-through of corpus to do first training data_path = '../data/' + data_file save_path = '../models/' + data_file + '.model' #from fuel.datasets import Dataset from fuel.streams import DataStream from fuel.schemes import ConstantScheme from fuel.datasets import Dataset class CharacterTextFile(Dataset): provides_sources = ("data",) def __init__(self, fname, chunk_len, dictionary, **kwargs): self.fname = fname self.chunk_len = chunk_len self.dictionary = 
dictionary super(CharacterTextFile, self).__init__(**kwargs) def open(self): return open(self.fname,'r') def get_data(self, state, request): assert isinstance(request, int) x = numpy.zeros((self.chunk_len, request), dtype='int64') for i in range(request): txt=state.read(self.chunk_len) if len(txt)<self.chunk_len: raise StopIteration #print(">%s<\n" % (txt,)) x[:, i] = [ self.dictionary[c] for c in txt ] return (x,) def close(self, state): state.close() dataset = CharacterTextFile(data_path, chunk_len=seq_len, dictionary=char2code) data_stream = DataStream(dataset, iteration_scheme=ConstantScheme(batch_size)) a=data_stream.get_data(10) # i.e. ask for 10 samples of 256(=seq_len) characters a[0].shape # (256, 10) - essentially, a 10 separate streams of 256 characters #[ code2char[v] for v in [94, 27, 21, 94, 16, 14, 54, 23, 14, 12] ] # Horizontal stripe in matrix #[ code2char[v] for v in [94, 94,95,36,94,47,50,57,40,53,68,54,94,38] ] # Vertical stripe in matrix ''.join([ code2char[v] for v in a[0][:,0] ]) # This a vertical strip - same as markov_chain example """ Explanation: Characterwise Single Layer GRU as Author End of explanation """ transition = GatedRecurrent(name="transition", dim=hidden_state_dim, activation=Tanh()) generator = SequenceGenerator( Readout(readout_dim=num_states, source_names=["states"], emitter=SoftmaxEmitter(name="emitter"), feedback_brick=LookupFeedback( num_states, feedback_dim, name='feedback'), name="readout"), transition, weights_init=IsotropicGaussian(0.01), biases_init=Constant(0), name="generator" ) generator.push_initialization_config() transition.weights_init = Orthogonal() generator.initialize() print(generator.readout.emitter.readout_dim) """ Explanation: Defining the Model Actually, it's a single layer of GRU for now... 
(rather than a double-stacked LSTM) End of explanation """ logger.info("Parameters:\n" + pprint.pformat( [(key, value.get_value().shape) for key, value in Selector(generator).get_params().items()], width=120)) """ Explanation: That's the underlying network defined - now need to create the infrastructure to iteratively improve it : End of explanation """ x = tensor.lmatrix('data') cost = aggregation.mean(generator.cost_matrix(x[:, :]).sum(), x.shape[1]) cost.name = "sequence_log_likelihood" model=Model(cost) algorithm = GradientDescent( cost=cost, params=list(Selector(generator).get_params().values()), step_rule=blocks.algorithms.CompositeRule([blocks.algorithms.StepClipping(10.0), blocks.algorithms.Scale(0.01)]) ) # tried: blocks.algorithms.Scale(0.001), blocks.algorithms.RMSProp(), blocks.algorithms.AdaGrad() """ Explanation: Build the cost computation graph : End of explanation """ # from IPython.display import SVG # SVG(theano.printing.pydotprint(cost, return_image=True, format='svg')) #from IPython.display import Image #Image(theano.printing.pydotprint(cost, return_image=True, format='png')) """ Explanation: The Model can now be shown as a Compute Graph (But this is time consuming, and the image will be huge...) 
End of explanation """ main_loop = MainLoop( algorithm=algorithm, data_stream=data_stream, model=model, extensions=[ FinishAfter(after_n_epochs=num_epochs), TrainingDataMonitoring([cost], prefix="this_step", after_batch=True), TrainingDataMonitoring([cost], prefix="average", every_n_batches=100), Checkpoint(save_path, every_n_batches=1000), Printing(every_n_batches=500) ] ) """ Explanation: Define the Training Loop End of explanation """ main_loop.run() """ Explanation: Run (or continue) the Training End of explanation """ output_length = 1000 # in characters sampler = ComputationGraph( generator.generate(n_steps=output_length, batch_size=1, iterate=True) ) sample = sampler.get_theano_function() states, outputs, costs = [data[:, 0] for data in sample()] numpy.set_printoptions(precision=3, suppress=True) print("Generation cost:\n{}".format(costs.sum())) print(''.join([ code2char[c] for c in outputs])) """ Explanation: Evaluate here to sample the learned relationships End of explanation """
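In blocks, the SoftmaxEmitter draws each character internally during generation. As a rough, framework-free illustration of what sampling one character from a softmax output involves (the temperature parameter here is an extra knob for illustration, not something this notebook uses):

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over a 1-D array of logits
    z = logits - np.max(logits)
    p = np.exp(z)
    return p / p.sum()

def sample_char(logits, temperature=1.0, rng=None):
    # Draw one character index from the output distribution.
    # temperature < 1 sharpens the distribution, > 1 flattens it.
    if rng is None:
        rng = np.random.RandomState(0)
    p = softmax(np.asarray(logits, dtype='float64') / temperature)
    return rng.choice(len(p), p=p)
```

The sampled index would then be mapped back to a character through a dictionary like code2char above.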
fluffy-hamster/A-Beginners-Guide-to-Python
A Beginners Guide to Python/Homework Solutions/10. Strings (HW).ipynb
mit
# Solution One: Triple Quotes!!
cool_story_bro = """"Ahhh!!!! spiders!", cried the monster."Don't worry" said our hero, "I have a sharp spoon"."""
print(cool_story_bro)
"""
Explanation: Homework:
Name a variable "cool_story_bro" and then assign the following text as a string:
"Ahhh!!!! spiders!", cried the monster."Don't worry" said our hero, "I have a sharp spoon".
Once complete, print it.
Possible Solutions:
End of explanation
"""
# Solution Two: Escape characters...
cool_story_bro = '"Ahhh!!!! spiders!", cried the monster."Don\'t worry" said our hero, "I have a sharp spoon".'
print(cool_story_bro)
"""
Explanation: I’ll be honest, the above solution of using triple-quotes isn’t something I knew about before making the guide. As it turns out, this is a really neat solution I think: it’s clean, readable, and above all simple.
End of explanation
"""
# Solution Three: Splitting up the string into separate parts...
a = '"Ahhh!!!! spiders!", cried the monster."Don'
b = "'t "
c = "worry"
d = '" said our hero, "I have a sharp spoon".'
cool_story_bro = a + b + c + d
print(cool_story_bro)
"""
Explanation: For this solution we use the single quote character to enclose the string and then we use escape on the ' character in the word “Don't”.
“...Don’t...” ---> “...Don\’t...”
Another clean and simple solution.
End of explanation
"""
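A quick way to convince yourself that the three solutions really do produce the same string (a sketch for checking, not part of the original homework) is to build all three and compare them directly:

```python
# Solution One: triple quotes
s1 = """"Ahhh!!!! spiders!", cried the monster."Don't worry" said our hero, "I have a sharp spoon"."""

# Solution Two: escape the apostrophe
s2 = '"Ahhh!!!! spiders!", cried the monster."Don\'t worry" said our hero, "I have a sharp spoon".'

# Solution Three: concatenate smaller pieces
a = '"Ahhh!!!! spiders!", cried the monster."Don'
b = "'t "
c = "worry"
d = '" said our hero, "I have a sharp spoon".'
s3 = a + b + c + d

# all three strategies yield one identical string
assert s1 == s2 == s3
print(s1)
```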
alexweav/Learny-McLearnface
Example1.ipynb
mit
import numpy as np
import LearnyMcLearnface as lml
"""
Explanation: Example 1 - Overfitting Sample Data
Here, we will use a simple model to overfit a set of randomly generated data points.
First, we import Numpy to hold the data, and we import Learny McLearnface.
End of explanation
"""
test_data = np.random.randn(100, 700)
test_classes = np.random.randint(1, 10, 100)
"""
Explanation: Now, we will create the data to be overfitted. For the purposes of the example, we will create 100 700-dimensional data points, which will each be randomly assigned one of 10 classes. We will attempt to use a model to overfit this data and achieve 100% accuracy on this training set.
We will organize the data points into a single numpy array, where the rows are individual datapoints, and we will also create a separate vector of integers which give the classes for corresponding examples.
We initialize the data and its classes:
End of explanation
"""
data = {
    'X_train' : test_data,
    'y_train' : test_classes,
    'X_val' : test_data,
    'y_val' : test_classes
}
"""
Explanation: Now, in order to feed the data to Learny McLearnface, we wrap it in a data dictionary with specified labels. (Note that the validation set and training set will be the same in this case, as we are intentionally trying to overfit a training set.)
End of explanation
"""
opts = {
    'input_dim' : 700,
    'init_scheme' : 'xavier'
}
"""
Explanation: Now, we will create our model. We will use a simple fully-connected shallow network, with 500 hidden layer neurons, ReLU activations, and a softmax classifier at the end.
First, we set our initial network options in a dictionary. We will have an input dimension of 700, and we will use the Xavier scheme to initialize our parameters.
End of explanation
"""
nn = lml.NeuralNetwork(opts)
nn.add_layer('Affine', {'neurons':500})
nn.add_layer('ReLU', {})
nn.add_layer('Affine', {'neurons':10})
nn.add_layer('SoftmaxLoss', {})
"""
Explanation: And finally, we build the network itself.
With the above description, the layer architecture will be: (Affine) -> (ReLU) -> (Affine) -> (Softmax) We create our network object, with 500 hidden layer neurons and 10 output layer neurons (which correspond to our 10 classes). End of explanation """ opts = { 'update_options' : {'update_rule' : 'sgd', 'learning_rate' : 1}, 'reg_param' : 0, 'num_epochs' : 10 } """ Explanation: Now that our model is created, we must train it. We use the given Trainer object to accomplish this. First, we must supply training options. These are, once again, provided in a dictionary. We will use basic stochastic gradient descent with a learning rate of 1, no regularization, and we will train for 10 epochs. End of explanation """ trainer = lml.Trainer(nn, data, opts) """ Explanation: Now we create the trainer object and give it the model, the data, and the options. End of explanation """ accuracy = trainer.accuracy(test_data, test_classes) print('Initial model accuracy:', accuracy) """ Explanation: We will use the trainer's toolset to first print the accuracy of the model before training. Since the model was randomly initialized and there are 10 classes, we should expect an initial accuracy close to 10% End of explanation """ trainer.train() """ Explanation: Since we have supplied all the requirements necessary for the trainer, we simply use the train() function to train the model. This will print status updates at the end of each epoch. End of explanation """ accuracy = trainer.accuracy(test_data, test_classes) print('Final model accuracy:', accuracy) """ Explanation: As you can see, the network overfits the data very easily, achieving a validation accuracy of 100%. For the sake of completeness, we will print the final validation accuracy of the model. End of explanation """
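LearnyMcLearnface's layer internals are not shown in this excerpt. As a rough numpy sketch of the forward pass implied by the (Affine) -> (ReLU) -> (Affine) -> (Softmax) architecture — with made-up random weights matching the 700-500-10 layout described above:

```python
import numpy as np

rng = np.random.RandomState(0)
W1, b1 = rng.randn(700, 500) * 0.01, np.zeros(500)  # first affine layer
W2, b2 = rng.randn(500, 10) * 0.01, np.zeros(10)    # second affine layer

def forward(X):
    h = np.maximum(0, X.dot(W1) + b1)                 # ReLU activation
    scores = h.dot(W2) + b2                           # class scores
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    p = np.exp(scores)
    return p / p.sum(axis=1, keepdims=True)           # softmax probabilities

probs = forward(rng.randn(5, 700))  # shape (5, 10), each row sums to 1
```

With randomly initialized weights, each row of probs is close to uniform over the 10 classes, which is why the pre-training accuracy is expected to be near 10%.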
NewKnowledge/punk
examples/Novelty Detection.ipynb
mit
import punk
help(punk)
"""
Explanation: The goal of punk is to make available some wrappers for a variety of machine learning pipelines. The pipelines are termed primitives and each primitive is designed with a functional programming approach in mind.
At the time of this writing, punk is being periodically updated. Any new primitives will be released as a pip-installable python package every Friday along with their corresponding annotations files for the broader D3M community.
Here we will briefly show how the primitives in the punk package can be utilized.
End of explanation
"""
%matplotlib inline
import numpy as np
from scipy import linalg
import matplotlib.pyplot as plt

from punk import novelty_detection

# Make some data up
n_samples, n_features, rank = 1000, 50, 10
sigma = 1.
rng = np.random.RandomState(42)
U, _, _ = linalg.svd(rng.randn(n_features, n_features))
X = np.dot(rng.randn(n_samples, rank), U[:, :rank].T)

# Adding homoscedastic noise
X_homo = X + sigma * rng.randn(n_samples, n_features)

# Adding heteroscedastic noise
sigmas = sigma * rng.rand(n_features) + sigma / 2.
X_hetero = X + rng.randn(n_samples, n_features) * sigmas

%%time
# run the primitive against a dataset with homoscedastic noise
test_homo = novelty_detection.HeteroscedasticityTest(max_iter=1000, tol=0.01)
test_homo = test_homo.fit(["matrix"], X_homo)

%%time
# run the primitive against a dataset with heteroscedastic noise
test_hetero = novelty_detection.HeteroscedasticityTest(max_iter=1000, tol=0.01)
test_hetero = test_hetero.fit(["matrix"], X_hetero)
"""
Explanation: Novelty Detection - Dataset Summarization
Testing Heteroscedasticity
An interesting test we can do on our datasets is a test for Heteroscedasticity which may be able to tell us whether there are some subpopulations in our dataset (latent variables), sampling bias, or something of the sort. Future primitives will aid on this task.
End of explanation
"""
print(test_homo.pca, test_homo.fa)
print(test_hetero.pca, test_hetero.fa)
"""
Explanation: Notice that for Homoscedastic noise the difference between PCA and FactorAnalysis is relatively small and both are able to pick out a lower-rank principal subspace of dimensionality 10. In the case of Nonisotropic noise FactorAnalysis does better than PCA and is able to get a much lower-rank subspace than PCA - 10 versus 40 dimensions.
End of explanation
"""
scores = novelty_detection.HeteroscedasticityTest(max_iter=1000, tol=0.01)

%%time
pca_scores_ho, fa_scores_ho = scores.compute_scores(X_homo)

%%time
pca_scores_he, fa_scores_he = scores.compute_scores(X_hetero)

pca_scores_ho, fa_scores_ho = novelty_detection.compute_scores(X_homo, max_iter=1000, tol=0.01)
pca_scores_he, fa_scores_he = novelty_detection.compute_scores(X_hetero, max_iter=1000, tol=0.01)

plt.plot([x for y, x in pca_scores_ho], [y for y, x in pca_scores_ho], 'b', label='PCA scores')
plt.plot([x for y, x in fa_scores_ho], [y for y, x in fa_scores_ho], 'r', label='FA scores')
plt.axvline(rank, color='g', label='TRUTH: %d' % rank, linestyle='-')
plt.axvline(test_homo.pca[1], color='b', label='PCA CV: %d' % test_homo.pca[1], linestyle='--')
plt.axvline(test_homo.fa[1], color='r', label='FactorAnalysis CV: %d' % test_homo.fa[1], linestyle='--')

plt.xlabel("# of components")
plt.ylabel("CV scores")
plt.legend(loc="best")
plt.title("Homoscedastic Noise");

plt.plot([x for y, x in pca_scores_he], [y for y, x in pca_scores_he], 'b', label='PCA scores')
plt.plot([x for y, x in fa_scores_he], [y for y, x in fa_scores_he], 'r', label='FA scores')
plt.axvline(rank, color='g', label='TRUTH: %d' % rank, linestyle='-')
plt.axvline(test_hetero.pca[1], color='b', label='PCA CV: %d' % test_hetero.pca[1], linestyle='--')
plt.axvline(test_hetero.fa[1], color='r', label='FactorAnalysis CV: %d' % test_hetero.fa[1], linestyle='--')

plt.xlabel("# of components")
plt.ylabel("CV scores")
plt.legend(loc="best")
plt.title("Heteroscedastic Noise"); """ Explanation: Compute Scores The primitive test_heteroscedasticity is a wrapper for the function compute_scores. More details on this can be seen in Model selection with Probabilistic PCA and FA. End of explanation """
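punk's compute_scores implementation is not shown in this excerpt. Judging from how its return value is plotted above (lists of (score, n_components) pairs), a sketch along the lines of the scikit-learn PCA-vs-FactorAnalysis model-selection recipe might look like this — the function name and return shape here are assumptions for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis
from sklearn.model_selection import cross_val_score

def compute_scores_sketch(X, n_components_list):
    # Mean cross-validated log-likelihood of PCA vs FactorAnalysis
    # for each candidate number of components; returned as
    # (score, n_components) pairs to match the plotting code above.
    pca_scores, fa_scores = [], []
    for n in n_components_list:
        pca = PCA(n_components=n, svd_solver='full')
        fa = FactorAnalysis(n_components=n)
        pca_scores.append((np.mean(cross_val_score(pca, X)), n))
        fa_scores.append((np.mean(cross_val_score(fa, X)), n))
    return pca_scores, fa_scores
```

The rank whose score curve peaks is the cross-validated choice of dimensionality, which is what the dashed vertical lines in the plots mark.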