Compare the results to the reference implementation.
f, (ax1, ax2) = pyplot.subplots(1, 2, sharey=False, figsize=(8, 8))
ax1.imshow(imsd[0], aspect='equal', cmap="gray")
ax2.imshow(out_c, aspect='equal', cmap="gray")
pyplot.show()
clib/jupyter_notebooks/ncs_tensorflow_simulation.ipynb
HuanglabPurdue/NCS
gpl-3.0
TensorFlow
py_otf_mask = numpy.fft.fftshift(rcfilter.astype(numpy.float32))

FITMIN = tf.constant(1.0e-6)
tf_alpha = tf.constant(numpy.float32(alpha))
tf_data = tf.Variable(imsd[0].astype(numpy.float32), shape=(128, 128), trainable=False)
tf_gamma = tf.constant(gamma.astype(numpy.float32))
tf_rc = tf.constant(py_otf_mask*py_otf_mask/(128.0*128.0))
tf_u = tf.Variable(imsd[0].astype(numpy.float32), shape=(128, 128), trainable=True)

# TensorFlow cost function.
@tf.function
def cost():
    ## LL
    t1 = tf.math.add(tf_data, tf_gamma)
    t2 = tf.math.add(tf_u, tf_gamma)
    t2 = tf.math.maximum(t2, FITMIN)
    t2 = tf.math.log(t2)
    t2 = tf.math.multiply(t1, t2)
    t2 = tf.math.subtract(tf_u, t2)
    c1 = tf.math.reduce_sum(t2)

    ## NC
    t1 = tf.dtypes.complex(tf_u, tf.zeros_like(tf_u))
    t2 = tf.signal.fft2d(t1)
    t2 = tf.math.multiply(t2, tf.math.conj(t2))
    t2 = tf.math.abs(t2)
    t2 = tf.math.multiply(t2, tf_rc)
    c2 = tf.math.reduce_sum(t2)
    c2 = tf.math.multiply(tf_alpha, c2)

    return tf.math.add(c1, c2)

# Gradient Descent Optimizer.
#
# This takes ~700ms on my laptop, so about 7x slower.
tf_data.assign(numpy.copy(imsd[0]))
tf_u.assign(tf_data.numpy())
for i in range(100):
    if ((i % 10) == 0):
        print(cost().numpy())
    opt = gradient_descent.GradientDescentOptimizer(2.0).minimize(cost)
out_tf = tf_u.numpy()

f, (ax1, ax2) = pyplot.subplots(1, 2, sharey=False, figsize=(8, 4))
ax1.imshow(out_c, aspect='equal', cmap="gray")
ax2.imshow(out_tf, aspect='equal', cmap="gray")
pyplot.show()

print("Maximum pixel difference is {0:.3f}e-".format(numpy.max(numpy.abs(out_c - out_tf))))

# AdamOptimizer.
#
# This takes ~1.5s on my laptop, so about 15x slower.
tf_data.assign(numpy.copy(imsd[0]))
tf_u.assign(tf_data.numpy())
for i in range(100):
    if ((i % 10) == 0):
        print(cost().numpy())
    opt = adam.AdamOptimizer(0.8).minimize(cost)
out_tf_2 = tf_u.numpy()

f, (ax1, ax2) = pyplot.subplots(1, 2, sharey=False, figsize=(8, 4))
ax1.imshow(out_c, aspect='equal', cmap="gray")
ax2.imshow(out_tf_2, aspect='equal', cmap="gray")
pyplot.show()

print("Maximum pixel difference is {0:.3f}e-".format(numpy.max(numpy.abs(out_c - out_tf_2))))

# Adagrad.
#
# This takes ~950ms on my laptop, so about 9.5x slower.
tf_data.assign(numpy.copy(imsd[0]))
tf_u.assign(tf_data.numpy())
for i in range(100):
    if ((i % 10) == 0):
        print(cost().numpy())
    opt = adagrad.AdagradOptimizer(0.8).minimize(cost)
out_tf_3 = tf_u.numpy()

f, (ax1, ax2) = pyplot.subplots(1, 2, sharey=False, figsize=(8, 4))
ax1.imshow(out_c, aspect='equal', cmap="gray")
ax2.imshow(out_tf_3, aspect='equal', cmap="gray")
pyplot.show()

print("Maximum pixel difference is {0:.3f}e-".format(numpy.max(numpy.abs(out_c - out_tf_3))))
clib/jupyter_notebooks/ncs_tensorflow_simulation.ipynb
HuanglabPurdue/NCS
gpl-3.0
A dictionary can store values for a key. In this example, we will store the value "world" at the key "hello".
responses = {}  # start with an empty dict
responses["hello"] = "world"
responses
dicts-graphs.ipynb
humberto-ortiz/bioinf2017
gpl-3.0
One nice property of dicts is that they can store more than one key/value pair.
responses["hola"] = "mundo"
responses

def greet(salutation):
    try:
        print(salutation, responses[salutation])
    except KeyError:
        print("Sorry, don't know how to respond to", salutation)

greet("hello")
greet("hola")
dicts-graphs.ipynb
humberto-ortiz/bioinf2017
gpl-3.0
What happens if you ask for an unknown key?
greet("你好")
dicts-graphs.ipynb
humberto-ortiz/bioinf2017
gpl-3.0
We can update the dict, and ask again:
responses["你好"] = "世界"
greet("你好")
dicts-graphs.ipynb
humberto-ortiz/bioinf2017
gpl-3.0
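As an aside, the same missing-key handling can be written without try/except: dict.get returns a default value when the key is absent. A minimal sketch of this alternative (the responses dict is re-created here so the snippet stands alone):

```python
# Rebuild a small responses dict like the one above.
responses = {"hello": "world", "hola": "mundo", "你好": "世界"}

def greet(salutation):
    # dict.get returns None (or any default we pass) instead of raising KeyError.
    reply = responses.get(salutation)
    if reply is None:
        print("Sorry, don't know how to respond to", salutation)
    else:
        print(salutation, reply)

greet("hello")    # prints: hello world
greet("bonjour")  # prints: Sorry, don't know how to respond to bonjour
```

Both styles are idiomatic; get is handy when a missing key is an expected, non-exceptional case.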
More tricks with dicts. Keys and values don't have to be strings; many other Python data types can be used. For example, we can rewrite some of our counting loops to use a dict instead of separate variables to count nucleotides.
counts = {}
sequence = "acggtattcggt"
for base in sequence:
    if not base in counts:
        counts[base] = 1
    else:
        counts[base] += 1
counts
dicts-graphs.ipynb
humberto-ortiz/bioinf2017
gpl-3.0
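The counting loop above is so common that the standard library packages it up: collections.Counter builds the same tally in a single call. A sketch:

```python
from collections import Counter

sequence = "acggtattcggt"
counts = Counter(sequence)  # tallies every base in one pass

# Counter behaves like a dict whose missing keys count as 0.
print(counts['g'], counts['t'], counts['a'], counts['c'])  # prints: 4 4 2 2
```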
Since keys can be any string, we can use a similar loop to count triplet (or codon) frequencies.
k = 3
kmercounts = {}
for i in range(len(sequence) - k + 1):
    kmer = sequence[i:i+k]
    if not kmer in kmercounts:
        kmercounts[kmer] = 1
    else:
        kmercounts[kmer] += 1
kmercounts
dicts-graphs.ipynb
humberto-ortiz/bioinf2017
gpl-3.0
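Counter also handles the k-mer tally: feed it a generator of slices, one per starting position, exactly as in the loop above. A sketch:

```python
from collections import Counter

sequence = "acggtattcggt"
k = 3
# one slice per window, same windows as the explicit loop
kmercounts = Counter(sequence[i:i+k] for i in range(len(sequence) - k + 1))

print(kmercounts["cgg"])         # prints: 2  ("cgg" occurs twice in the sequence)
print(sum(kmercounts.values()))  # prints: 10 (one k-mer per window)
```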
Using dicts to build graphs. We can use dicts to build data structures to represent graphs. I want to build a graph with 4 nodes, labeled 1, 2, 3, and 4, with edges between 1-2, 1-3, 1-4, and 2-3 as shown in Figure 1. We can use a dict, with the nodes as keys, and the list of nodes connected to the key as the value.
from IPython.core.display import SVG, display
display(SVG(filename='fig-graph.svg'))

graph = {}
graph

graph[1] = [2, 3, 4]
graph

graph[2] = [1, 3]
graph[3] = [1, 2]
graph[4] = [1]
graph
dicts-graphs.ipynb
humberto-ortiz/bioinf2017
gpl-3.0
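A small refinement worth knowing: collections.defaultdict removes the need to initialize each node's list before appending, so an undirected edge can be recorded at both endpoints in one helper. A sketch of the same 4-node graph (add_edge is our own illustrative helper, not part of any library):

```python
from collections import defaultdict

graph = defaultdict(list)  # unknown keys start out mapped to []

def add_edge(u, v):
    # record the edge in both directions, since the graph is undirected
    graph[u].append(v)
    graph[v].append(u)

for u, v in [(1, 2), (1, 3), (1, 4), (2, 3)]:
    add_edge(u, v)

print(dict(graph))  # prints: {1: [2, 3, 4], 2: [1, 3], 3: [1, 2], 4: [1]}
```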
Now we can loop over the nodes, and extract the edges.
for node in graph:
    print(node, graph[node])
dicts-graphs.ipynb
humberto-ortiz/bioinf2017
gpl-3.0
The following demonstrates how to find matches between the source scan 25133-3-11 and the target scan 25133-4-13. We also have a list of unit_ids from the source scan for which we want to find matches.
source_scan = dict(animal_id=25133, session=3, scan_idx=11)
target_scan = dict(animal_id=25133, session=4, scan_idx=13)
python/example/how_to_fill_proxmity_cell_match.ipynb
cajal/pipeline
lgpl-3.0
Register the pairing of scans that needs to be matched:
pairing = (meso.ScanInfo & source_scan).proj(src_session='session', src_scan_idx='scan_idx') * \
          (meso.ScanInfo & target_scan).proj()
meso.ScansToMatch.insert(pairing)
python/example/how_to_fill_proxmity_cell_match.ipynb
cajal/pipeline
lgpl-3.0
Now also specify which units from the source scan should be matched:
# 150 units from the source scan
unit_ids = [  46,   75,  117,  272,  342,  381,  395,  408,  414,  463,
             537,  568,  581,  633,  670,  800,  801,  842,  873, 1042,
            1078, 1085, 1175, 1193, 1246, 1420, 1440, 1443, 1451, 1464,
            1719, 1755, 1823, 1863, 2107, 2128, 2161, 2199, 2231, 2371,
            2438, 2522, 2572, 2585, 2644, 2764, 2809, 2810, 2873, 2924,
            2973, 2989, 3028, 3035, 3083, 3107, 3129, 3131, 3139, 3189,
            3192, 3214, 3318, 3513, 3551, 3613, 3618, 3671, 3680, 3742,
            3810, 3945, 3973, 4065, 4069, 4085, 4123, 4131, 4134, 4184,
            4221, 4353, 4369, 4426, 4490, 4512, 4532, 4865, 4971, 5140,
            5171, 5227, 5276, 5694, 5746, 5810, 5817, 5856, 5910, 6013,
            6061, 6078, 6108, 6216, 6254, 6273, 6292, 6301, 6368, 6486,
            6497, 6558, 6569, 6618, 6620, 6825, 6887, 6911, 6984, 7091,
            7199, 7205, 7242, 7331, 7372, 7415, 7429, 7433, 7659, 7715,
            7927, 7946, 8085, 8096, 8181, 8317, 8391, 8392, 8395, 8396,
            8415, 8472, 8478, 8572, 8580, 8610, 8663, 8681, 8683, 8700]

# create a list of entries
src_units = [dict(source_scan, unit_id=unit) for unit in unit_ids]
meso.SourceUnitsToMatch.insert(meso.ScanSet.Unit.proj() & src_units)
python/example/how_to_fill_proxmity_cell_match.ipynb
cajal/pipeline
lgpl-3.0
Now that we have specified the scans to match and the source scan units, we can populate ProximityCellMatch:
meso.ProximityCellMatch.populate(display_progress=True)
python/example/how_to_fill_proxmity_cell_match.ipynb
cajal/pipeline
lgpl-3.0
Now find the best proximity match:
meso.BestProximityCellMatch().populate()
meso.BestProximityCellMatch()
python/example/how_to_fill_proxmity_cell_match.ipynb
cajal/pipeline
lgpl-3.0
You can use each cell to write whatever code you want, and if you suddenly forget a function, or are unsure whether its name is correct, IPython is very helpful in that regard. To learn about a function, that is, what its output is or which parameters it needs, you can put a question mark at the end of the function name. Exercise 2: In the next cell, look up the following functions: sum, max, round, mean. Don't forget to run the cell after writing the functions. As you may have noticed, when the function cannot be found you get an error... In IPython, and therefore in Jupyter, there is a Tab-completion facility. This means that when you start typing the name of a variable, function, or attribute, you don't have to type it all out; you can start with a few letters and (if it is the only name starting that way) it will be completed for you automatically. All of us who are lazy and/or forgetful love this tool. If there are several options, nothing is completed, but if you press Tab again, all of your options are shown in the cell...
variable = 50
saludo = 'Hola'
UsoJupyter/.ipynb_checkpoints/CuadernoJupyter-checkpoint.ipynb
PyladiesMx/Empezando-con-Python
mit
Exercise 3: Type the first three letters of each item from the previous cell and press Tab to see whether it can be autocompleted. There are also magic functions that let us perform various tasks, such as showing the plots produced by the code inside a cell, measuring the execution time of code, and changing the working directory, among others. To see which magic functions are available in Jupyter, you just have to type %magic. All "magic" functions start with the percent sign %. Plots: Now we will look at some plot examples and how to make them interactive. These examples were taken from the Nature demo notebook.
# Import matplotlib (plotting package) and numpy (array package).
# Note the magic function that makes our plot appear in the cell.
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

# Create an array of 30 x values running from 0 to 5.
x = np.linspace(0, 5, 30)
y = x**2

# plot y versus x
fig, ax = plt.subplots(nrows=1, ncols=1)
ax.plot(x, y, color='red')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('A simple graph of $y=x^2$')
UsoJupyter/.ipynb_checkpoints/CuadernoJupyter-checkpoint.ipynb
PyladiesMx/Empezando-con-Python
mit
The plot you are seeing follows the equation $$y=x^2$$ Exercise 4: Edit the code above and run it again, but this time try replacing the expression y = x**2 with y = np.sin(x). Interactive plots
# Import matplotlib and numpy
# with the same "magic".
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

# Import the interact function from IPython, used
# to build the interactive widgets
from IPython.html.widgets import interact

def plot_sine(frequency=4.0, grid_points=12, plot_original=True):
    """
    Plot discrete samples of a sinusoidal curve on ``[0, 1]``.
    """
    x = np.linspace(0, 1, grid_points + 2)
    y = np.sin(2 * frequency * np.pi * x)

    xf = np.linspace(0, 1, 1000)
    yf = np.sin(2 * frequency * np.pi * xf)

    fig, ax = plt.subplots(figsize=(8, 6))
    ax.set_xlabel('x')
    ax.set_ylabel('signal')
    ax.set_title('Aliasing in discretely sampled periodic signal')

    if plot_original:
        ax.plot(xf, yf, color='red', linestyle='solid', linewidth=2)
    ax.plot(x, y, marker='o', linewidth=2)

# interact automatically builds a user interface for exploring
# the plot of the sine function.
interact(plot_sine, frequency=(1.0, 22.0, 0.5), grid_points=(10, 16, 1), plot_original=True)
UsoJupyter/.ipynb_checkpoints/CuadernoJupyter-checkpoint.ipynb
PyladiesMx/Empezando-con-Python
mit
Let's define the parameters of a 50-period undulator with a small deflection parameter $K_0=0.1$, and an electron with $\gamma_e=100$.
K0 = 0.1
Periods = 50
g0 = 100.0
StepsPerPeriod = 24

gg = g0/(1.+K0**2/2)**.5
vb = (1.-gg**-2)**0.5
k_res = 2*gg**2

dt = 1./StepsPerPeriod
Steps2Do = int((Periods+2)/dt)+1
doc/radiation calculator.ipynb
hightower8083/chimera
gpl-3.0
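As a quick sanity check of the relations in the cell above, the longitudinal Lorentz factor and the resonant wavenumber follow from $K_0$ and $\gamma_e$ alone, so they can be reproduced with the standard library (no chimera import needed):

```python
import math

K0, g0 = 0.1, 100.0

gg = g0 / math.sqrt(1. + K0**2 / 2)  # longitudinal Lorentz factor
vb = math.sqrt(1. - gg**-2)          # longitudinal velocity in units of c
k_res = 2 * gg**2                    # on-axis resonant wavenumber

# For K0 << 1 the resonance sits slightly below 2*g0**2 = 20000.
print(round(k_res, 1))  # prints: 19900.5
```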
The electron is added by creating a dummy specie equipped with the undulator device; a single particle with $p_x = \sqrt{\gamma_e^2-1}$ is then added.
specie_in = {'TimeStep': dt,
             'Devices': ([chimera.undul_analytic, np.array([K0, 1., 0, Periods])],)}
beam = Specie(specie_in)

NumParts = 1
beam.Data['coords'] = np.zeros((3, NumParts))
beam.Data['momenta'] = np.zeros((3, NumParts))
beam.Data['momenta'][0] = np.sqrt(g0**2 - 1)
beam.Data['coords_halfstep'] = beam.Data['coords'].copy()
beam.Data['weights'] = np.ones((NumParts,))/NumParts

chimera_in = {'Particles': (beam,), }
Chimera = ChimeraRun(chimera_in)
doc/radiation calculator.ipynb
hightower8083/chimera
gpl-3.0
The SR calculator constructor takes the mode specifier Mode, which can be 'far', 'near', or 'near-circ'; these select the far-field angular mapping, the near-field coordinate mapping in Cartesian geometry, and the near-field coordinate mapping in cylindrical geometry, respectively. The mapping domain is defined by the Grid: - for 'far' it is [($k_{min}$, $k_{max}$), ($\theta_{min}$, $\theta_{max}$), ($\phi_{min}$, $\phi_{max}$), ($N_k$, $N_\theta$, $N_\phi$)], where $k$, $\theta$ and $\phi$ are the wavenumber, elevation, and azimuthal angles of the emission; - for 'near' it is [($k_{min}$, $k_{max}$), ($x_{min}$, $x_{max}$), ($y_{min}$, $y_{max}$), ($N_k$, $N_x$, $N_y$)], where 'x' and 'y' are the screen Cartesian dimensions; - for 'near-circ' it is [($k_{min}$, $k_{max}$), ($r_{min}$, $r_{max}$), ($\phi_{min}$, $\phi_{max}$), ($N_k$, $N_r$, $N_\phi$)], where 'r' and '$\phi$' are the screen cylindrical dimensions. The Features attribute can contain WavelenghtGrid, which provides a homogeneous grid of wavelengths. After the calculator is constructed, the track container should be initialized.
sr_in_far = {'Grid': [(0.02*k_res, 1.1*k_res), (0, 2./g0), (0., 2*np.pi), (160, 80, 24)],
             'TimeStep': dt, 'Features': (), }
sr_in_nearcirc = {'Grid': [(0.02*k_res, 1.1*k_res), (0, 15), (0., 2*np.pi), 1e3, (160, 80, 24)],
                  'TimeStep': dt, 'Features': (), 'Mode': 'near-circ', }
sr_in_near = {'Grid': [(0.02*k_res, 1.1*k_res), (-15, 15), (-15, 15), 1e3, (160, 160, 160)],
              'TimeStep': dt, 'Features': (), 'Mode': 'near', }

sr_calc_far = SR(sr_in_far)
sr_calc_nearcirc = SR(sr_in_nearcirc)
sr_calc_near = SR(sr_in_near)

sr_calc_far.init_track(Steps2Do, beam)
sr_calc_nearcirc.init_track(Steps2Do, beam)
sr_calc_near.init_track(Steps2Do, beam)
doc/radiation calculator.ipynb
hightower8083/chimera
gpl-3.0
The simulation is run as usual, but after each (or selected) step the track point is added to the track container using the add_track method. After the orbits are recorded, the spectrum is calculated with the calculate_spectrum method, which can take the component as an argument (e.g. comp='z'). In contrast to the axes conventions used in linear machines (z, x, y) and storage rings (s, x, z), here we use the generic convention (x, y, z) -- the propagation axis is X, and the oscillation axis is typically Z. The default is comp='all', which accounts for all components.
t0 = time.time()
for i in range(Steps2Do):
    Chimera.make_step(i)
    sr_calc_far.add_track(beam)
    sr_calc_nearcirc.add_track(beam)
    sr_calc_near.add_track(beam)
print('Done orbits in {:g} sec'.format(time.time()-t0))

t0 = time.time()
sr_calc_far.calculate_spectrum(comp='z')
print('Done farfield spectrum in {:g} min'.format((time.time()-t0)/60.))

t0 = time.time()
sr_calc_nearcirc.calculate_spectrum(comp='z')
print('Done nearfield (cylindric) spectrum in {:g} min'.format((time.time()-t0)/60.))

t0 = time.time()
sr_calc_near.calculate_spectrum(comp='z')
print('Done nearfield spectrum in {:g} min'.format((time.time()-t0)/60.))
doc/radiation calculator.ipynb
hightower8083/chimera
gpl-3.0
A few useful functions are available: - get_full_spectrum returns the full energy spectral-angular distribution, $\mathrm{d}\mathcal{W}/(\mathrm{d}\varepsilon\, \mathrm{d}\Theta)$ (dimensionless) - get_energy_spectrum returns the $\Theta$-integrated get_full_spectrum - get_spot returns the $\varepsilon$-integrated get_full_spectrum, in units of $(dW/d\Theta)$ [J] $\cdot\lambda_0$ [$\mu$m] - get_spot_cartesian is the same as get_spot but returns the profile interpolated onto Cartesian axes; it takes a th_part argument (default value 1) to specify the cone angle relative to the full cone $(0,\theta_{max})$, and bins=(Nx,Ny) to specify the resolution of the Cartesian grid - get_energy returns the full energy in $(dW)$ [J] $\cdot\lambda_0$ [$\mu$m] Each of these diagnostics can take a spect_filter argument, which multiplies the spectral-angular distribution (its shape should conform with the multiplication operation). For get_spot and get_spot_cartesian the wavenumber can be specified to show the profile for a single energy. The macro-particle weights are either defined as in the main code, chim_units=True, or represent the number of real electrons, chim_units=False. Since in CHIMERA the particle charge is $\propto \lambda_0$, if chim_units=True is used, get_spot and get_energy return Joules.
args_calc = {'chim_units': False, 'lambda0_um': 1}

FullSpect_far = sr_calc_far.get_full_spectrum(**args_calc)
FullSpect_nearcirc = sr_calc_nearcirc.get_full_spectrum(**args_calc)
FullSpect_near = sr_calc_near.get_full_spectrum(**args_calc)

spotXY_far, ext_far = sr_calc_far.get_spot_cartesian(bins=(120, 120), **args_calc)
spotXY_nearcirc, ext_nearcirc = sr_calc_nearcirc.get_spot_cartesian(bins=(120, 120), **args_calc)
spotXY_near = sr_calc_near.get_spot(**args_calc)
ext_near = np.array(list(sum(sr_calc_near.Args['Grid'][1:3], ())))

FullEnergy_far = sr_calc_far.get_energy(**args_calc)/k_res
FullEnergy_nearcirc = sr_calc_nearcirc.get_energy(**args_calc)/k_res
FullEnergy_near = sr_calc_near.get_energy(**args_calc)/k_res
FullEnergy_theor = sr_calc_near.J_in_um*(7*np.pi/24)/137.*K0**2*(1+K0**2/2)*Periods

print('** Full energy in far field agrees with theory with {:.1f}% error'. \
      format(100*(2*(FullEnergy_far-FullEnergy_theor)/(FullEnergy_far+FullEnergy_theor))))
print('** Full energy in near field (cylindric) agrees with theory with {:.1f}% error'. \
      format(100*(2*(FullEnergy_nearcirc-FullEnergy_theor)/(FullEnergy_nearcirc+FullEnergy_theor))))
print('** Full energy in near field agrees with theory with {:.1f}% error'. \
      format(100*(2*(FullEnergy_near-FullEnergy_theor)/(FullEnergy_near+FullEnergy_theor))))
doc/radiation calculator.ipynb
hightower8083/chimera
gpl-3.0
Let us compare the obtained spectrum with analytical estimates, derived in the $N_\mathrm{periods}\gg 1$ and $K_0\ll 1$ approximations. Here we use expressions from [I. A. Andriyash et al, Phys. Rev. ST Accel. Beams 16 100703 (2013)], which can be easily derived or found in textbooks on undulator radiation. - the resonant frequency depends on angle as $\propto \left(1+\theta^2\gamma_e^2\right)^{-1}$ - the energy in units of the full number of photons with the fundamental wavelength is $$N_{ph} = \frac{7\pi \alpha}{24}\, K_0^2\, \left(1+\frac{K_0^2}{2}\right)\, N_\mathrm{periods}$$ - the transverse profiles of emission are $\propto \left(1+\theta^2\gamma_e^2\right)^{-3}$ (vertical), $\propto \left(1+\theta^2\gamma_e^2\right)^{-7}$ (horizontal)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))

extent = np.array(sr_calc_far.Args["Grid"][0]+sr_calc_far.Args["Grid"][1])
extent[:2] /= k_res
extent[2:] *= g0

th, ph = sr_calc_far.Args['theta'], sr_calc_far.Args['phi']

ax1.plot(1./(1.+(th*g0)**2), th*g0, ':', c='k', lw=1.5)
ax1.imshow(FullSpect_far.mean(-1).T, interpolation='spline16',
           aspect='auto', origin='lower',
           cmap=plt.cm.Spectral_r, extent=extent)
ax1.set_xlabel(r'Wavenumber $k/k_0$', fontsize=18)
ax1.set_ylabel(r'Angle, $\theta \gamma_e$', fontsize=18)

ax2.imshow(spotXY_far.T, origin='lower', cmap=plt.cm.bone_r, extent=g0*ext_far)
ax2.set_xlabel(r'Horiz. angle, $\theta \gamma_e$', fontsize=18)
ax2.set_ylabel(r'Vert. angle, $\theta \gamma_e$', fontsize=18)

th = np.r_[g0*ext_far[0]:g0*ext_far[1]:spotXY_far.shape[0]*1j]

ax3 = plt.twinx(ax=ax2)
ax3.plot(th, spotXY_far[:, spotXY_far.shape[0]//2+1]/spotXY_far.max(), c='b')
ax3.plot(th, (1+th**2)**-7, '--', c='b', lw=2)
ax3.set_ylim(0, 3)
ax3.yaxis.set_ticks([])

ax4 = plt.twiny(ax=ax2)
ax4.plot(spotXY_far[spotXY_far.shape[0]//2+1, :]/spotXY_far.max(), th, c='g')
ax4.plot((1+th**2)**-3, th, '--', c='g', lw=2)
ax4.set_xlim(0, 3)
ax4.xaxis.set_ticks([]);
doc/radiation calculator.ipynb
hightower8083/chimera
gpl-3.0
As mentioned before, the radiation profile for a given (e.g. fundamental) wavenumber can be selected via the k0 argument to get_spot.
spotXY_k0, ext = sr_calc_far.get_spot_cartesian(k0=k_res, th_part=0.2, **args_calc)
Spect1D = sr_calc_far.get_energy_spectrum(**args_calc)
k_ax = sr_calc_far.get_spectral_axis()/k_res

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 5))
ax1.plot(k_ax, Spect1D/Spect1D.max())
ax1.plot(k_ax, FullSpect_far[1:, 0, 0]/FullSpect_far[1:, 0, 0].max())
ax1.set_xlim(0.5, 1.1)
ax1.set_ylim(0, 1.1)
ax1.set_xlabel(r'Wavenumber $k/k_0$', fontsize=18)
ax1.set_ylabel(r'$dW/d\varepsilon_{ph}$ (norm.)', fontsize=18)
ax1.legend(('angle-integrated', 'on-axis'), loc=2, fontsize=15)

ax2.imshow(spotXY_k0.T, origin='lower', cmap=plt.cm.bone_r, extent=g0*ext_far)
ax2.set_xlabel(r'Horiz. angle, $\theta \gamma_e$', fontsize=18)
ax2.set_ylabel(r'Vert. angle, $\theta \gamma_e$', fontsize=18);

Lsource = (sr_calc_near.Args['Grid'][3] - 0.5*Periods)

ext_wo_far = np.array(list(sum(sr_calc_far.Args['Grid'][0:2], ())))
ext_wo_far[2] = -ext_wo_far[3]
ext_wo_far[2:] *= g0
ext_wo_far[:2] /= k_res

ext_wo_nearcirc = np.array(list(sum(sr_calc_nearcirc.Args['Grid'][0:2], ())))
ext_wo_nearcirc[2] = -ext_wo_nearcirc[3]
ext_wo_nearcirc[2:] *= g0/Lsource
ext_wo_nearcirc[:2] /= k_res

ext_wo_near = np.array(list(sum(sr_calc_near.Args['Grid'][0:2], ())))
ext_wo_near[2:] *= g0/Lsource
ext_wo_near[:2] /= k_res

fig, ((ax1, ax2), (ax3, ax4), (ax5, ax6),) = plt.subplots(3, 2, figsize=(12, 15))

ax1.imshow(np.hstack((FullSpect_far[:, ::-1, 0], FullSpect_far[:, :, 12])).T,
           origin='lower', interpolation='bilinear',
           aspect='auto', cmap=plt.cm.Spectral_r, extent=ext_wo_far)
ax2.imshow(np.hstack((FullSpect_far[:, ::-1, 6], FullSpect_far[:, :, 18])).T,
           origin='lower', interpolation='bilinear',
           aspect='auto', cmap=plt.cm.Spectral_r, extent=ext_wo_far)
ax3.imshow(np.hstack((FullSpect_nearcirc[:, ::-1, 0], FullSpect_nearcirc[:, :, 12])).T,
           origin='lower', interpolation='bilinear',
           aspect='auto', cmap=plt.cm.Spectral_r, extent=ext_wo_nearcirc)
ax4.imshow(np.hstack((FullSpect_nearcirc[:, ::-1, 6], FullSpect_nearcirc[:, :, 18])).T,
           origin='lower', interpolation='bilinear',
           aspect='auto', cmap=plt.cm.Spectral_r, extent=ext_wo_nearcirc)
ax5.imshow(FullSpect_near[:, :, 75:86].mean(-1).T,
           origin='lower', interpolation='bilinear',
           aspect='auto', cmap=plt.cm.Spectral_r, extent=ext_wo_near)
ax6.imshow(FullSpect_near[:, 78:86, :].mean(-2).T,
           origin='lower', interpolation='bilinear',
           aspect='auto', cmap=plt.cm.Spectral_r, extent=ext_wo_near)

ax1.set_ylabel('Far field \n Angle (mrad)', fontsize=16)
ax3.set_ylabel('Near field (cylindric) \n Angle (mrad)', fontsize=16)
ax5.set_ylabel('Near field \n Angle (mrad)', fontsize=16)
for ax in (ax1, ax2, ax3, ax4, ax5, ax6):
    ax.set_ylim(-1.5, 1.5)
    ax.set_xlabel('Wavenumber ($k/k_0$)', fontsize=16)

fig, axs = plt.subplots(3, 3, figsize=(16, 15))
for i in range(3):
    kk = (0.7*k_res, 0.98*k_res, None)[i]
    (ax1, ax2, ax3) = axs[:, i]
    spotXY_far, ext_far = sr_calc_far.get_spot_cartesian(k0=kk, **args_calc)
    spotXY_nearcirc, ext_nearcirc = sr_calc_nearcirc.get_spot_cartesian(k0=kk, **args_calc)
    spotXY_near = sr_calc_near.get_spot(k0=kk, **args_calc)
    ax1.imshow(spotXY_far.T, origin='lower', cmap=plt.cm.Spectral_r, extent=g0*ext_far)
    ax2.imshow(spotXY_nearcirc.T, origin='lower', cmap=plt.cm.Spectral_r,
               extent=g0*ext_nearcirc/(sr_calc_nearcirc.Args['Grid'][3]))
    ax3.imshow(spotXY_near.T, origin='lower', cmap=plt.cm.Spectral_r,
               extent=g0*ext_near/Lsource)
    if i == 0:
        ax1.set_ylabel('Far field', fontsize=16)
        ax2.set_ylabel('Near field (cylindric)', fontsize=16)
        ax3.set_ylabel('Near field', fontsize=16)
    for ax in (ax1, ax2, ax3):
        ax.set_xlabel('Angle (mrad)', fontsize=16)
        ax.set_xlim(-1.5, 1.5)
        ax.set_ylim(-1.5, 1.5)
doc/radiation calculator.ipynb
hightower8083/chimera
gpl-3.0
There are some linking issues with Boost's program_options in the (commented) cells below.
#standalone_prog = nativesys.as_standalone('chem_kinet', compile_kwargs=dict(options=['warn', 'pic', 'openmp', 'debug']))
#standalone_prog

#p = subprocess.Popen([standalone_prog, '--return-on-root', '1', '--special-settings', '1000'],
#                     stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
#out, err = p.communicate(input='2 1e-2 2e-3 0 800 8 0 0 0 0 1'.encode('utf-8'))
#retc = p.wait()
#assert retc == 0
#print(err.decode('utf-8'))

#res_sa, = [Result(*args, kineticsys) for args in parse_standalone_output(out.decode('utf-8').split('\n'))]
#res_sa.plot()
examples/_native_override_chemical_kinetics.ipynb
bjodah/pyodesys
bsd-2-clause
Time to reach steady state. If we define steady state to occur when the change in chemical concentrations falls below a certain threshold, then the obtained time will depend on that threshold. Here we investigate how that choice affects the answer (with respect to numerical precision etc.).
native = native_sys['cvode'].from_other(kineticsys, namespace_override=native_override)

def plot_tss_conv(factor, tols, ax):
    tol_kw = dict(plot=False, return_on_root=True, nsteps=2000, special_settings=[factor])
    tss = [integrate_and_plot(native, atol=tol, rtol=tol, **tol_kw).xout[-1] for tol in tols]
    ax.semilogx(tols, tss, label=factor)

fig, ax = plt.subplots(figsize=(14, 6))
tols = np.logspace(-15, -10, 200)
for factor in [1e2, 1e3, 1e4, 1.1e4, 1e5, 1e6, 1e7]:
    plot_tss_conv(factor, tols, ax)
ax.legend()
examples/_native_override_chemical_kinetics.ipynb
bjodah/pyodesys
bsd-2-clause
The purpose of the stupid_generator function is to list the integers less than end. However, it does not return the list directly but a generator over that list. Compare with the following function:
def stupid_list(end):
    i = 0
    result = []
    while i < end:
        result.append(i)
        i += 1
    return result

stupid_list(3)
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
To retrieve the objects from stupid_generator, you have to convert it explicitly into a list, or iterate over the objects with a loop.
it = stupid_generator(3)
next(it)

list(stupid_generator(3))

for v in stupid_generator(3):
    print(v)
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
Note: the statements in stupid_generator are not executed when the function is first called, but only when we start iterating over the generator to retrieve the first object. The yield statement then suspends execution and returns the first object. If we ask for a second object, execution resumes where it was stopped.
def test_generator():
    print("This statement is executed when the first object is requested")
    yield 1
    print("This statement is executed when the second object is requested")
    yield 2
    print("This statement is executed when the third object is requested")
    yield 3

it = test_generator()
next(it)
next(it)
next(it)
next(it)  # calling next on an exhausted generator raises an exception
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
Exercise: implement the following function, whose purpose is to generate the first n Fibonacci numbers. The Fibonacci sequence is defined by: $f_0 = 1$, $f_1 = 1$, $f_n = f_{n-1} + f_{n-2}$ for $n \geq 2$. Notes: The function must yield all of the Fibonacci numbers, not just the last one. For this reason (among others), an iterative version is much preferable to a recursive one. Don't worry, we will do plenty of recursion later! You also don't need an array to store the whole sequence; 3 variables are enough.
def first_fibonacci_generator(n):
    """
    Return a generator for the first ``n`` Fibonacci numbers
    """
    # write your code here

it = first_fibonacci_generator(5)
it
next(it)
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
Your function must pass the following tests:
import types
assert type(first_fibonacci_generator(3)) == types.GeneratorType
assert list(first_fibonacci_generator(0)) == []
assert list(first_fibonacci_generator(1)) == [1]
assert list(first_fibonacci_generator(2)) == [1, 1]
assert list(first_fibonacci_generator(8)) == [1, 1, 2, 3, 5, 8, 13, 21]
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
In the previous cases, the generator stops by itself after a while. However, it is also possible to write infinite generators. In that case, the responsibility for stopping falls to the caller.
def powers2():
    v = 1
    while True:
        yield v
        v *= 2

for v in powers2():
    print(v)
    if v > 1000000:
        break
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
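Besides an explicit break, itertools.islice is a convenient way for the caller to take a bounded number of values from an infinite generator. A sketch (powers2 is redefined so the snippet stands alone):

```python
from itertools import islice

def powers2():
    v = 1
    while True:
        yield v
        v *= 2

# Take only the first 10 values; the generator simply stops being consumed.
print(list(islice(powers2(), 10)))  # prints: [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]
```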
Exercise: implement the following functions
def fibonacci_generator():
    """
    Return an infinite generator for Fibonacci numbers
    """
    # write your code here

it = fibonacci_generator()
next(it)

def fibonacci_finder(n):
    """
    Return the first Fibonacci number greater than or equal to n
    """
    # write your code here

assert fibonacci_finder(10) == 13
assert fibonacci_finder(100) == 144
assert fibonacci_finder(1000) == 1597
assert fibonacci_finder(1000000) == 1346269
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
Taking inspiration from the previous functions (but without using them), or reusing the function from the lecture, implement recursively the following function, which generates the set of binary words of a given length.
def binary_word_generator(n):
    """
    Return a generator on binary words of size n in lexicographic order

    Input:
        - n, the length of the words
    """
    # write your code here

list(binary_word_generator(3))

# tests
import types
assert type(binary_word_generator(0)) == types.GeneratorType
assert list(binary_word_generator(0)) == ['']
assert list(binary_word_generator(1)) == ['0', '1']
assert list(binary_word_generator(2)) == ['00', '01', '10', '11']
assert list(binary_word_generator(3)) == ['000', '001', '010', '011', '100', '101', '110', '111']
assert len(list(binary_word_generator(4))) == 16
assert len(list(binary_word_generator(7))) == 128
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
Implement the following function along the same lines. (a bit harder) Hints: * Ask yourself the question this way: if I have a word of length $n$ that ends in 0 and contains $k$ ones, how many ones did the length-$(n-1)$ word it was built from contain? Likewise if it ends in 1. * Although the function resembles the previous one, you must not call binary_word_generator. Note: the order of the words is no longer imposed.
def binary_kword_generator(n, k):
    """
    Return a generator on binary words of size n such that each word
    contains exactly k occurrences of 1

    Input:
        - n, the size of the words
        - k, the number of 1s
    """
    # write your code here

list(binary_kword_generator(4, 2))

# tests
import types
assert type(binary_kword_generator(0, 0)) == types.GeneratorType
assert list(binary_kword_generator(0, 0)) == ['']
assert list(binary_kword_generator(0, 1)) == []
assert list(binary_kword_generator(1, 1)) == ['1']
assert list(binary_kword_generator(4, 4)) == ['1111']
assert list(binary_kword_generator(4, 0)) == ['0000']
assert set(binary_kword_generator(4, 2)) == set(['0011', '0101', '1001', '0110', '1010', '1100'])
assert len(list(binary_kword_generator(7, 3))) == 35
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
And to finish: we call a Dyck prefix a binary word of length $n$ with $k$ occurrences of 1, such that in every prefix the number of 1s is greater than or equal to the number of 0s. For example, $1101$ is a Dyck prefix for $n=4$ and $k=3$. But $1001$ is not one, because in the prefix $100$ the number of 0s exceeds the number of 1s.
def dyck_prefix_generator(n, k):
    """
    Return a generator on binary words of size n such that each word
    contains exactly k occurrences of 1, and in any prefix, the number
    of 1s is greater than or equal to the number of 0s

    Input:
        - n, the size of the words
        - k, the number of 1s
    """
    # write your code here

list(dyck_prefix_generator(4, 2))

assert len(list(dyck_prefix_generator(0, 0))) == 1
assert len(list(dyck_prefix_generator(0, 1))) == 0
assert len(list(dyck_prefix_generator(1, 0))) == 0
assert list(dyck_prefix_generator(1, 1)) == ['1']
assert set(dyck_prefix_generator(3, 2)) == set(["110", "101"])
assert len(set(dyck_prefix_generator(10, 5))) == 42
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
Run the following line and paste the resulting list of numbers into Google.
[len(set(dyck_prefix_generator(2*n, n))) for n in range(8)]
TP/TP1GenerationRecursive.ipynb
hivert/CombiFIIL
gpl-2.0
Vertex client library: AutoML image classification model for export to edge

<table align="left">
  <td>
    <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_classification_export_edge.ipynb">
      <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
    </a>
  </td>
  <td>
    <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_classification_export_edge.ipynb">
      <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub
    </a>
  </td>
</table>
<br/><br/><br/>

Overview

This tutorial demonstrates how to use the Vertex client library for Python to create image classification models to export as an Edge model using Google Cloud's AutoML.

Dataset

The dataset used for this tutorial is the Flowers dataset from TensorFlow Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of flower an image is from a class of five flowers: daisy, dandelion, rose, sunflower, or tulip.

Objective

In this tutorial, you create an AutoML image classification model from a Python script using the Vertex client library, and then export the model as an Edge model in TFLite format. You can alternatively create models with AutoML using the gcloud command-line tool or online using the Google Cloud Console.

The steps performed include:

Create a Vertex Dataset resource.
Train the model.
Export the Edge model from the Model resource to Cloud Storage.
Download the model locally.
Make a local prediction.

Costs

This tutorial uses billable components of Google Cloud (GCP):

Vertex AI
Cloud Storage

Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.
Installation Install the latest version of Vertex client library.
import os import sys # Google Cloud Notebook if os.path.exists("/opt/deeplearning/metadata/env_version"): USER_FLAG = "--user" else: USER_FLAG = "" ! pip3 install -U google-cloud-aiplatform $USER_FLAG
notebooks/community/gapic/automl/showcase_automl_image_classification_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Tutorial Now you are ready to start creating your own AutoML image classification model. Set up clients The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server. You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront. Dataset Service for Dataset resources. Model Service for Model resources. Pipeline Service for training.
# client options same for all services client_options = {"api_endpoint": API_ENDPOINT} def create_dataset_client(): client = aip.DatasetServiceClient(client_options=client_options) return client def create_model_client(): client = aip.ModelServiceClient(client_options=client_options) return client def create_pipeline_client(): client = aip.PipelineServiceClient(client_options=client_options) return client clients = {} clients["dataset"] = create_dataset_client() clients["model"] = create_model_client() clients["pipeline"] = create_pipeline_client() for client in clients.items(): print(client)
notebooks/community/gapic/automl/showcase_automl_image_classification_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Construct the task requirements

Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.

The minimal fields we need to specify are:

multi_label: Whether True/False this is a multi-label (vs single) classification.
budget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour. For image classification, the budget must be a minimum of 8 hours.
model_type: The type of deployed model:
  CLOUD: For deploying to Google Cloud.
  MOBILE_TF_LOW_LATENCY_1: For deploying to the edge and optimizing for latency (response time).
  MOBILE_TF_HIGH_ACCURACY_1: For deploying to the edge and optimizing for accuracy.
  MOBILE_TF_VERSATILE_1: For deploying to the edge and optimizing for a trade-off between latency and accuracy.
disable_early_stopping: Whether True/False to let AutoML use its judgement to stop training early or train for the entire budget.

Finally, create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.
PIPE_NAME = "flowers_pipe-" + TIMESTAMP MODEL_NAME = "flowers_model-" + TIMESTAMP task = json_format.ParseDict( { "multi_label": False, "budget_milli_node_hours": 8000, "model_type": "MOBILE_TF_LOW_LATENCY_1", "disable_early_stopping": False, }, Value(), ) response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
notebooks/community/gapic/automl/showcase_automl_image_classification_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Export as Edge model

You can export an AutoML image classification model as an Edge model, which you can then custom deploy to an edge device, such as a mobile phone or IoT device, or download locally.

Use this helper function export_model to export the model to Google Cloud, which takes the following parameters:

name: The Vertex fully qualified identifier for the Model resource.
format: The format to save the model format as.
gcs_dest: The Cloud Storage location to store the SavedFormat model artifacts to.

This function calls the Model client service's method export_model, with the following parameters:

name: The Vertex fully qualified identifier for the Model resource.
output_config: The destination information for the exported model.
  artifact_destination.output_uri_prefix: The Cloud Storage location to store the SavedFormat model artifacts to.
  export_format_id: The format to save the model format as. For AutoML image classification:
    tf-saved-model: TensorFlow SavedFormat for deployment to a container.
    tflite: TensorFlow Lite for deployment to an edge or mobile device.
    edgetpu-tflite: TensorFlow Lite for TPU.
    tf-js: TensorFlow for web client.
    coral-ml: For Coral devices.

The method returns a long-running operation response. We will wait synchronously for the operation to complete by calling response.result(), which will block until the model is exported.
MODEL_DIR = BUCKET_NAME + "/" + "flowers" def export_model(name, format, gcs_dest): output_config = { "artifact_destination": {"output_uri_prefix": gcs_dest}, "export_format_id": format, } response = clients["model"].export_model(name=name, output_config=output_config) print("Long running operation:", response.operation.name) result = response.result(timeout=1800) metadata = response.operation.metadata artifact_uri = str(metadata.value).split("\\")[-1][4:-1] print("Artifact Uri", artifact_uri) return artifact_uri model_package = export_model(model_to_deploy_id, "tflite", MODEL_DIR)
notebooks/community/gapic/automl/showcase_automl_image_classification_export_edge.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Importing OpenFisca-specific functions and modules
from openfisca_france_indirect_taxation.examples.utils_example import graph_builder_bar_list from openfisca_france_indirect_taxation.examples.dataframes_from_legislation.get_accises import get_accises_carburants from openfisca_france_indirect_taxation.examples.dataframes_from_legislation.get_tva import get_tva_taux_plein from openfisca_france_indirect_taxation.examples.dataframes_from_legislation.get_prix_carburants import \ get_prix_carburants
openfisca_france_indirect_taxation/examples/notebooks/taux_implicite_ticpe.ipynb
openfisca/openfisca-france-indirect-taxation
agpl-3.0
Retrieving legislation parameters and prices
ticpe = ['ticpe_gazole', 'ticpe_super9598'] accise_diesel = get_accises_carburants(ticpe) prix_ttc = ['diesel_ttc', 'super_95_ttc'] prix_carburants = get_prix_carburants(prix_ttc) tva_taux_plein = get_tva_taux_plein()
openfisca_france_indirect_taxation/examples/notebooks/taux_implicite_ticpe.ipynb
openfisca/openfisca-france-indirect-taxation
agpl-3.0
Building a dataframe containing these parameters
from pandas import concat  # import added: concat was not defined in this cell

df_taux_implicite = concat([accise_diesel, prix_carburants, tva_taux_plein], axis = 1)
df_taux_implicite.rename(columns = {'value': 'taux plein tva'}, inplace = True)
openfisca_france_indirect_taxation/examples/notebooks/taux_implicite_ticpe.ipynb
openfisca/openfisca-france-indirect-taxation
agpl-3.0
Computing the implicit tax rates from these parameters
df_taux_implicite['taux_implicite_diesel'] = ( df_taux_implicite['accise ticpe gazole'] * (1 + df_taux_implicite['taux plein tva']) / (df_taux_implicite['prix diesel ttc'] - (df_taux_implicite['accise ticpe gazole'] * (1 + df_taux_implicite['taux plein tva']))) ) df_taux_implicite['taux_implicite_sp95'] = ( df_taux_implicite['accise ticpe super9598'] * (1 + df_taux_implicite['taux plein tva']) / (df_taux_implicite['prix super 95 ttc'] - (df_taux_implicite['accise ticpe super9598'] * (1 + df_taux_implicite['taux plein tva']))) ) df_taux_implicite = df_taux_implicite.dropna()
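As a sanity check of the formula above, the same computation on made-up scalar values (the numbers below are illustrative only, not actual excise duties or prices): the implicit rate is the VAT-grossed-up excise divided by what remains of the TTC price once that tax is removed.

```python
# Hypothetical values: excise of 0.6 euro/litre, 20% VAT, TTC price of 1.5 euro/litre
accise, tva, prix_ttc = 0.6, 0.2, 1.5
taxe = accise * (1 + tva)                  # excise grossed up by VAT
taux_implicite = taxe / (prix_ttc - taxe)  # tax relative to the pre-tax remainder
print(round(taux_implicite, 4))  # 0.9231
```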
openfisca_france_indirect_taxation/examples/notebooks/taux_implicite_ticpe.ipynb
openfisca/openfisca-france-indirect-taxation
agpl-3.0
Producing the graphs
graph_builder_bar_list(df_taux_implicite['taux_implicite_diesel'], 1, 1) graph_builder_bar_list(df_taux_implicite['taux_implicite_sp95'], 1, 1)
openfisca_france_indirect_taxation/examples/notebooks/taux_implicite_ticpe.ipynb
openfisca/openfisca-france-indirect-taxation
agpl-3.0
Closures

A closure captures ("closes over") the free variables of its enclosing environment.
def html_tag(tag):

    def wrap_text(msg):
        # the closing tag needs a slash: </tag>
        print('<{0}>{1}</{0}>'.format(tag, msg))

    return wrap_text

print_h1 = html_tag('h1')
print_h1('Test Headline')
print_h1('Another Headline')

print_p = html_tag('p')
print_p('Test Paragraph!')
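Closures can also capture mutable state. This second minimal sketch (my own example, not part of the original) keeps a counter alive between calls through the nonlocal statement:

```python
def make_counter():
    count = 0  # free variable captured by the closure below

    def increment():
        nonlocal count
        count += 1
        return count

    return increment

counter = make_counter()
print(counter(), counter(), counter())  # 1 2 3
```

Each call to make_counter() produces an independent closure with its own count cell.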
ipynb/Closures and Decorators.ipynb
sripaladugu/sripaladugu.github.io
mit
Decorators Decorators are a way to dynamically alter the functionality of your functions. So for example, if you wanted to log information when a function is run, you could use a decorator to add this functionality without modifying the source code of your original function.
def decorator_function(original_function):
    def wrapper_function():
        print("wrapper executed this before {}".format(original_function.__name__))
        return original_function()
    return wrapper_function

def display():
    print("display function ran!")

decorated_display = decorator_function(display)
decorated_display()

# The above code is functionally the same as below:
def decorator_function(original_function):
    def wrapper_function():
        print("wrapper executed this before {}".format(original_function.__name__))
        return original_function()
    return wrapper_function

@decorator_function
def display():
    print("display function ran!")

display()

# Let's make our decorator function work with functions that take different numbers of arguments.
# For this we use *args (arguments) and **kwargs (keyword arguments).
# args and kwargs are convention; you can use any other names you want, like *myargs, **yourkeywordargs
def decorator_function(original_function):
    def wrapper_function(*args, **kwargs):
        print("wrapper executed this before {}".format(original_function.__name__))
        return original_function(*args, **kwargs)
    return wrapper_function

@decorator_function
def display():
    print("display function ran!")

@decorator_function
def display_info(name, age):
    print('display_info ran with arguments ({}, {})'.format(name, age))

display()
display_info('John', 25)

# Now let's use a class as a decorator instead of a function
class decorator_class(object):
    def __init__(self, original_function):
        self.original_function = original_function

    def __call__(self, *args, **kwargs):
        # This method behaves just like our wrapper function did
        print('call method executed this before {}'.format(self.original_function.__name__))
        return self.original_function(*args, **kwargs)

@decorator_class
def display():
    print("display function ran!")

@decorator_class
def display_info(name, age):
    print('display_info ran with arguments ({}, {})'.format(name, age))

display()
display_info('John', 25)
ipynb/Closures and Decorators.ipynb
sripaladugu/sripaladugu.github.io
mit
Some practical applications of decorators
# Let's say we want to keep track of how many times a specific function was run
# and what arguments were passed to that function
def my_logger(orig_func):
    import logging
    # Generates a log file in the current directory with the name of the original function
    logging.basicConfig(filename='{}.log'.format(orig_func.__name__), level=logging.INFO)

    def wrapper(*args, **kwargs):
        logging.info(
            'Ran with args: {}, and kwargs: {}'.format(args, kwargs))
        return orig_func(*args, **kwargs)

    return wrapper

@my_logger
def display_info(name, age):
    print('display_info ran with arguments ({}, {})'.format(name, age))

display_info('John', 25)

# Note:
# Since display_info is decorated with my_logger, the above call is equivalent to:
# decorated_display = my_logger(display_info)
# decorated_display('John', 25)

def my_timer(orig_func):
    import time

    def wrapper(*args, **kwargs):
        t1 = time.time()
        result = orig_func(*args, **kwargs)
        t2 = time.time() - t1
        print('{} ran in: {} sec'.format(orig_func.__name__, t2))
        return result

    return wrapper

@my_timer
def display_info(name, age):
    print('display_info ran with arguments ({}, {})'.format(name, age))

display_info('John', 25)

# Note:
# Since display_info is decorated with my_timer, the above call is equivalent to:
# decorated_display = my_timer(display_info)
# decorated_display('John', 25)
# (or simply put)
# my_timer(display_info)('John', 25)
ipynb/Closures and Decorators.ipynb
sripaladugu/sripaladugu.github.io
mit
Chaining of Decorators
@my_timer
@my_logger
def display_info(name, age):
    print('display_info ran with arguments ({}, {})'.format(name, age))

display_info('John', 25)

# This is equivalent to my_timer(my_logger(display_info))('John', 25)
# The above code will give us some unexpected results.
# Instead of printing "display_info ran in: ---- sec" it prints "wrapper ran in: ---- sec"
ipynb/Closures and Decorators.ipynb
sripaladugu/sripaladugu.github.io
mit
Let's see if switching the order of decorators helps
@my_logger
@my_timer
def display_info(name, age):
    print('display_info ran with arguments ({}, {})'.format(name, age))

display_info('John', 25)

# This is equivalent to my_logger(my_timer(display_info))('John', 25)
# Now this would create wrapper.log instead of display_info.log like we expected.

# To understand why wrapper.log is generated instead of display_info.log, look at the following code.
def display_info(name, age):
    print('display_info ran with arguments ({}, {})'.format(name, age))

display_info = my_timer(display_info)
print(display_info.__name__)

# So how do we solve this problem?
# The answer is to use the wraps decorator.
# For this we need to import wraps from the functools module
from functools import wraps

def my_logger(orig_func):
    import logging
    # Generates a log file in the current directory with the name of the original function
    logging.basicConfig(filename='{}.log'.format(orig_func.__name__), level=logging.INFO)

    @wraps(orig_func)
    def wrapper(*args, **kwargs):
        logging.info(
            'Ran with args: {}, and kwargs: {}'.format(args, kwargs))
        return orig_func(*args, **kwargs)

    return wrapper

def my_timer(orig_func):
    import time

    @wraps(orig_func)
    def wrapper(*args, **kwargs):
        t1 = time.time()
        result = orig_func(*args, **kwargs)
        t2 = time.time() - t1
        print('{} ran in: {} sec'.format(orig_func.__name__, t2))
        return result

    return wrapper

@my_logger
@my_timer
def display_info(name, age):
    print('display_info ran with arguments ({}, {})'.format(name, age))

display_info('Hank', 22)
ipynb/Closures and Decorators.ipynb
sripaladugu/sripaladugu.github.io
mit
Decorators with Arguments
def prefix_decorator(prefix): def decorator_function(original_function): def wrapper_function(*args, **kwargs): print(prefix, "Executed before {}".format(original_function.__name__)) result = original_function(*args, **kwargs) print(prefix, "Executed after {}".format(original_function.__name__), '\n') return result return wrapper_function return decorator_function @prefix_decorator('LOG:') def display_info(name, age): print('display_info ran with arguments ({}, {})'.format(name, age)) display_info('John', 25)
ipynb/Closures and Decorators.ipynb
sripaladugu/sripaladugu.github.io
mit
Verify this by running the code cell below.
print(env.observation_space) print(env.action_space)
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
Run the code cell below to play blackjack with a random policy. (The code currently plays three games of blackjack; feel free to change that number, or to run the cell multiple times. The cell is designed to familiarize you with the output that is returned as the agent interacts with the environment.)
for i_episode in range(3): state = env.reset() while True: print(state) action = env.action_space.sample() state, reward, done, info = env.step(action) if done: print('End game! Reward: ', reward) print('You won :)\n') if reward > 0 else print('You lost :(\n') break
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
Run the code cell below to play blackjack with this policy. (The code currently plays three games of blackjack; feel free to change that number, or to run the cell multiple times. The cell is designed to familiarize you with the output of the generate_episode_from_limit function.)
for i in range(3): print(generate_episode_from_limit(env))
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
Now you are ready to write your own implementation of MC prediction. You may choose between first-visit and every-visit MC prediction; for the Blackjack environment, the two techniques are equivalent.

Your algorithm takes four arguments:
- env: an instance of an OpenAI Gym environment.
- num_episodes: the number of episodes generated through agent-environment interaction.
- generate_episode: a function that returns an episode of interaction.
- gamma: the discount rate. It must be a value between 0 and 1, inclusive (default value: 1).

The algorithm returns the following output:
- V: a dictionary where V[s] is the estimated value of state s. For example, if the code returns the following output:
{(4, 7, False): -0.38775510204081631, (18, 6, False): -0.58434296365330851, (13, 2, False): -0.43409090909090908, (6, 7, False): -0.3783783783783784, ...
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
then the value of state (4, 7, False) has been estimated as -0.38775510204081631.

If you do not know how to use defaultdict in Python, you are encouraged to check out this source code.
from collections import defaultdict import numpy as np import sys def mc_prediction_v(env, num_episodes, generate_episode, gamma=1.0): # initialize empty dictionary of lists returns = defaultdict(list) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 1000 == 0: print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return V
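For reference, one hedged way to fill in the TODO is sketched below as a standalone function. Since first-visit and every-visit are equivalent for Blackjack, the sketch uses the simpler every-visit variant, and it is demonstrated on a hand-made toy episode generator rather than the gym environment (an assumption for illustration; in the notebook you would pass generate_episode_from_limit):

```python
from collections import defaultdict

def mc_prediction_v_sketch(env, num_episodes, generate_episode, gamma=1.0):
    returns = defaultdict(list)
    for _ in range(num_episodes):
        episode = generate_episode(env)          # list of (state, action, reward)
        G = 0.0
        for state, action, reward in reversed(episode):
            G = gamma * G + reward               # discounted return from t onward
            returns[state].append(G)             # every-visit MC
    return {s: sum(g) / len(g) for s, g in returns.items()}

# toy check with a fixed two-step episode (env is unused here)
toy = lambda env: [('s0', 0, 1.0), ('s1', 0, 1.0)]
print(mc_prediction_v_sketch(None, 1, toy))  # {'s1': 1.0, 's0': 2.0}
```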
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
Use the cell below to compute and plot the estimate of the state-value function. (The code for plotting the value function comes from this source code and has been slightly modified.)

To check that your implementation is correct, compare the plot below with the corresponding plot in the solutions notebook Monte_Carlo_Solution.ipynb.
from plot_utils import plot_blackjack_values # obtain the value function V = mc_prediction_v(env, 500000, generate_episode_from_limit) # plot the value function plot_blackjack_values(V)
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
Now you are ready to write your own implementation of MC prediction. You may choose between first-visit and every-visit MC prediction; for the Blackjack environment, the two techniques are equivalent.

Your algorithm takes four arguments:
- env: an instance of an OpenAI Gym environment.
- num_episodes: the number of episodes generated through agent-environment interaction.
- generate_episode: a function that returns an episode of interaction.
- gamma: the discount rate. It must be a value between 0 and 1, inclusive (default value: 1).

The algorithm returns the following output:

Q: a dictionary (of one-dimensional arrays) where Q[s][a] is the estimated action value for state s and action a.
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0): # initialize empty dictionaries of arrays returns_sum = defaultdict(lambda: np.zeros(env.action_space.n)) N = defaultdict(lambda: np.zeros(env.action_space.n)) Q = defaultdict(lambda: np.zeros(env.action_space.n)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 1000 == 0: print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return Q
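A hedged sketch of the action-value variant follows. To keep it self-contained, the number of actions nA is passed in explicitly (an assumption for illustration; in the notebook it would come from env.action_space.n), and it is again demonstrated on a toy episode generator:

```python
import numpy as np
from collections import defaultdict

def mc_prediction_q_sketch(env, num_episodes, generate_episode, gamma=1.0, nA=2):
    returns_sum = defaultdict(lambda: np.zeros(nA))
    N = defaultdict(lambda: np.zeros(nA))
    Q = defaultdict(lambda: np.zeros(nA))
    for _ in range(num_episodes):
        G = 0.0
        for state, action, reward in reversed(generate_episode(env)):
            G = gamma * G + reward
            returns_sum[state][action] += G
            N[state][action] += 1
            Q[state][action] = returns_sum[state][action] / N[state][action]
    return Q

toy = lambda env: [('s0', 0, 1.0), ('s0', 1, 0.0)]
print(mc_prediction_q_sketch(None, 1, toy)['s0'])  # [1. 0.]
```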
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.

To check that your implementation is correct, compare the plot below with the corresponding plot in the solutions notebook Monte_Carlo_Solution.ipynb.
# obtain the action-value function Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic) # obtain the state-value function V_to_plot = dict((k,(k[0]>18)*(np.dot([0.8, 0.2],v)) + (k[0]<=18)*(np.dot([0.2, 0.8],v))) \ for k, v in Q.items()) # plot the state-value function plot_blackjack_values(V_to_plot)
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
Part 3: MC Control - GLIE

In this part, you will write your own implementation of GLIE MC control.

Your algorithm takes three arguments:

env: an instance of an OpenAI Gym environment.

num_episodes: the number of episodes generated through agent-environment interaction.

gamma: the discount rate. It must be a value between 0 and 1, inclusive (default value: 1).

The algorithm returns the following output:

Q: a dictionary (of one-dimensional arrays) where Q[s][a] is the estimated action value for state s and action a.

policy: a dictionary where policy[s] returns the action the agent chooses after observing state s.

(Feel free to define additional functions to help organize your code.)
def mc_control_GLIE(env, num_episodes, gamma=1.0): nA = env.action_space.n # initialize empty dictionaries of arrays Q = defaultdict(lambda: np.zeros(nA)) N = defaultdict(lambda: np.zeros(nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 1000 == 0: print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return policy, Q
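One helper you will likely need when filling in the TODO is a function that turns a row of Q-values into $\epsilon$-greedy action probabilities. A hedged sketch (the helper name is mine): every action receives probability $\epsilon / nA$, and the greedy action receives the remaining $1 - \epsilon$ on top.

```python
import numpy as np

def epsilon_greedy_probs(Q_s, epsilon, nA):
    # every action gets epsilon/nA probability; the greedy action gets the rest
    probs = np.ones(nA) * epsilon / nA
    probs[np.argmax(Q_s)] += 1.0 - epsilon
    return probs

print(epsilon_greedy_probs(np.array([0.5, 0.1]), 0.2, 2))  # [0.9 0.1]
```

With probabilities in hand, an action can be drawn with np.random.choice(np.arange(nA), p=probs).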
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
Use the cell below to obtain the estimated optimal policy and action-value function.
# obtain the estimated optimal policy and action-value function policy_glie, Q_glie = mc_control_GLIE(env, 500000)
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
The true optimal policy $\pi_*$ can be found on page 82 of the textbook (and is also shown below). Compare your final estimates to the optimal policy; how close do they get? If you are not satisfied with the performance of the algorithm, take the time to tweak the decay rate of $\epsilon$ and/or run the algorithm for more episodes to obtain better results.

Part 4: MC Control - Constant-$\alpha$

In this part, you will write your own implementation of constant-$\alpha$ MC control.

Your algorithm takes four arguments:

env: an instance of an OpenAI Gym environment.

num_episodes: the number of episodes generated through agent-environment interaction.

alpha: the step-size parameter for the update step.

gamma: the discount rate. It must be a value between 0 and 1, inclusive (default value: 1).

The algorithm returns the following output:

Q: a dictionary (of one-dimensional arrays) where Q[s][a] is the estimated action value for state s and action a.

policy: a dictionary where policy[s] returns the action the agent chooses after observing state s.

(Feel free to define additional functions to help organize your code.)
def mc_control_alpha(env, num_episodes, alpha, gamma=1.0): nA = env.action_space.n # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(nA)) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 1000 == 0: print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="") sys.stdout.flush() ## TODO: complete the function return policy, Q
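The piece that distinguishes this part from GLIE is the update rule, $Q(s,a) \leftarrow Q(s,a) + \alpha \, (G_t - Q(s,a))$, which moves the old estimate a fixed fraction $\alpha$ toward the sampled return instead of averaging over visit counts. A tiny sketch of that step in isolation:

```python
def constant_alpha_update(q_sa, G, alpha):
    # move the old estimate a fixed fraction alpha toward the sampled return G
    return q_sa + alpha * (G - q_sa)

print(constant_alpha_update(0.0, 1.0, 0.1))  # 0.1
```

When the sampled return equals the current estimate, the update leaves the estimate unchanged.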
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
Use the cell below to obtain the estimated optimal policy and action-value function.
# obtain the estimated optimal policy and action-value function policy_alpha, Q_alpha = mc_control_alpha(env, 500000, 0.008)
assets/media/uda-ml/qinghua/mengteka/迷你项目:蒙特卡洛方法(第 2 部分)/Monte_Carlo-zh.ipynb
hetaodie/hetaodie.github.io
mit
Minimal example with random data
def aquire_audio_data(): D, T = 4, 10000 y = np.random.normal(size=(D, T)) return y y = aquire_audio_data() Y = stft(y, **stft_options).transpose(2, 0, 1) with tf.Session() as session: Y_tf = tf.placeholder( tf.complex128, shape=(None, None, None)) Z_tf = wpe(Y_tf) Z = session.run(Z_tf, {Y_tf: Y}) z_tf = istft(Z.transpose(1, 2, 0), size=stft_options['size'], shift=stft_options['shift'])
examples/WPE_Tensorflow_offline.ipynb
fgnt/nara_wpe
mit
STFT For simplicity reasons we calculate the STFT in Numpy and provide the result in form of the Tensorflow feed dict.
Y = stft(y, **stft_options).transpose(2, 0, 1)
examples/WPE_Tensorflow_offline.ipynb
fgnt/nara_wpe
mit
iterative WPE A placeholder for Y is declared. The wpe function is fed with Y via the Tensorflow feed dict. Finally, an inverse STFT in Numpy is performed to obtain a dereverberated result in time domain.
from nara_wpe.tf_wpe import get_power

with tf.Session() as session:
    Y_tf = tf.placeholder(tf.complex128, shape=(None, None, None))
    Z_tf = wpe(Y_tf, taps=taps, iterations=iterations)
    Z = session.run(Z_tf, {Y_tf: Y})

z = istft(Z.transpose(1, 2, 0), size=stft_options['size'], shift=stft_options['shift'])
IPython.display.Audio(z[0], rate=sampling_rate)
examples/WPE_Tensorflow_offline.ipynb
fgnt/nara_wpe
mit
Power spectrum Before and after applying WPE
fig, [ax1, ax2] = plt.subplots(1, 2, figsize=(20, 8)) im1 = ax1.imshow(20 * np.log10(np.abs(Y[:, 0, 200:400])), origin='lower') ax1.set_xlabel('') _ = ax1.set_title('reverberated') im2 = ax2.imshow(20 * np.log10(np.abs(Z[:, 0, 200:400])), origin='lower') _ = ax2.set_title('dereverberated') cb = fig.colorbar(im1)
examples/WPE_Tensorflow_offline.ipynb
fgnt/nara_wpe
mit
The histogram A KDE may be thought of as an extension to the familiar histogram. The purpose of the KDE is to estimate an unknown probability density function $f(x)$ given data sampled from it. A natural first thought is to use a histogram – it’s well known, simple to understand and works reasonably well. To see how the histogram performs on the data generated above, we'll plot the true distribution alongside a histogram. As seen below, the histogram does a fairly poor job, since: - The location of the bins and the number of bins are both arbitrary. - The estimated distribution is discontinuous (not smooth), while the true distribution is continuous (smooth).
x = np.linspace(np.min(data) - 1, np.max(data) + 1, num=2**10) plt.hist(data, density=True, label='Histogram', edgecolor='#1f77b4', color='w') plt.scatter(data, np.zeros_like(data), marker='|', c='r', label='Data', zorder=9) plt.plot(x, distribution.pdf(x), label='True pdf', c='r', ls='--') plt.legend(loc='best');
docs/source/introduction.ipynb
tommyod/KDEpy
gpl-3.0
Centering the histogram In an effort to reduce the arbitrary placement of the histogram bins, we center a box function $K$ on each data point $x_i$ and average those functions to obtain a probability density function. This is a (very simple) kernel density estimator. $$\hat{f}(x) = \frac{1}{N} \sum_{i=1}^N K \left( x-x_i \right) \quad \text{ where } \quad K = \text{box function}$$ Since each function $K$ has $\int K \, dx = 1$, we divide by $N$ to ensure that $\int \hat{f}(x) \, dx = 1$. A kernel density estimator with $\int \hat{f}(x) \, dx = 1$ and $\hat{f}(x) \geq 0$ for every $x$ is called a bona fide estimator. Enough theory -- let's see what this looks like graphically on a simple data set.
from KDEpy import TreeKDE np.random.seed(123) data = [1, 2, 4, 8, 16] # Plot the points plt.figure(figsize=(14, 3)); plt.subplot(1, 3, 1) plt.title('Data points') plt.scatter(data, np.zeros_like(data), marker='|', c='r', label='Data') # Plot a kernel on each data point plt.subplot(1, 3, 2); plt.title('Data points with kernel functions') plt.scatter(data, np.zeros_like(data), marker='|', c='r', label='Data') for d in data: x, y = TreeKDE(kernel='box').fit([d])() plt.plot(x, y, color='#1f77b4') # Plot a normalized kernel on each data point, and the sum plt.subplot(1, 3, 3); plt.title('Data points with kernel functions summed') plt.scatter(data, np.zeros_like(data), marker='|', c='r', label='Data') for d in data: x, y = TreeKDE(kernel='box').fit([d])() plt.plot(x, y / len(data), color='#1f77b4') x, y = TreeKDE(kernel='box').fit(data)() plt.plot(x, y, color='#ff7f0e');
docs/source/introduction.ipynb
tommyod/KDEpy
gpl-3.0
Now let us graphically review our progress so far:

We have a collection of data points ${x_1, x_2, \dots, x_N}$. These data points are drawn from a true probability density function $f(x)$, unknown to us. A very simple way to construct an estimate $\hat{f}(x)$ is to use a histogram. A KDE $\hat{f}(x)$ with a box kernel is similar to a histogram, but the data chooses the location of the boxes. Looking at the graph below, we see that the KDE with a box kernel is the most promising estimate so far.
np.random.seed(123) data = distribution.rvs(32) # Use a box function with the FFTKDE to obtain a density estimate x, y = FFTKDE(kernel='box', bw=0.7).fit(data).evaluate() plt.plot(x, y, zorder=10, color='#ff7f0e', label='KDE with box kernel') plt.scatter(data, np.zeros_like(data), marker='|', c='r', label='Data', zorder=9) plt.hist(data, density=True, label='Histogram', edgecolor='#1f77b4', color='w') plt.plot(x, distribution.pdf(x), label='True pdf', c='r', ls='--') plt.legend(loc='best');
docs/source/introduction.ipynb
tommyod/KDEpy
gpl-3.0
Choosing a smooth kernel The true function $f(x)$ is continuous, while our estimate $\hat{f}(x)$ is not. To alleviate the problem of discontinuity, we substitute the box function used above for a gaussian function (a normal distribution). $$K = \text{box function} \quad \to \quad K = \text{gaussian function}$$ The gaussian is smooth, and so the result of our estimate will also be smooth. This looks even better than our previous jagged estimate. Note (Kernel functions). Many kernel functions are available, see FFTKDE._available_kernels.keys() for a complete list.
# Use the FFTKDE with a smooth Gaussian
x, y = FFTKDE(kernel='gaussian', bw=0.7).fit(data).evaluate()
plt.plot(x, y, zorder=10, label='KDE with Gaussian kernel')
plt.scatter(data, np.zeros_like(data), marker='|', c='r', label='Data')
plt.plot(x, distribution.pdf(x), label='True pdf', c='r', ls='--')
plt.legend(loc='best');
docs/source/introduction.ipynb
tommyod/KDEpy
gpl-3.0
Selecting a suitable bandwidth To control the bandwidth $h$ of the kernel, we'll add a factor $h >0$ to the equation above. $$\hat{f}(x) = \frac{1}{N} \sum_{i=1}^N K \left( x-x_i \right) \quad \to \quad \hat{f}(x) = \frac{1}{N h} \sum_{i=1}^N K \left( \frac{x-x_i}{h} \right)$$ When $h \to 0$, the estimate becomes jagged (high variance, overfitting). When $h \to \infty$, the estimate becomes oversmoothed (high bias, underfitting). Note (Bandwidth). The bw parameter is implemented so that it corresponds to the standard deviation $\sigma$ of the kernel function $K$. This is a design choice for consistent bandwidths across kernels with and without finite support. Note (Automatic Bandwidth Selection). Automatic bandwidth selection based on the data is available, i.e. bw='silverman'. For a complete listing of available routines for automatic bandwidth selection, see FFTKDE._bw_methods.keys(). In general, optimal bandwidth selection is a difficult theoretical problem.
for bw in [0.25, 'silverman']: x, y = FFTKDE(kernel='gaussian', bw=bw).fit(data).evaluate() plt.plot(x, y, label='bw="{}"'.format(bw)) plt.scatter(data, np.zeros_like(data), marker='|', c='r', label='Data') plt.plot(x, distribution.pdf(x), label='True pdf', c='r', ls='--') plt.legend(loc='best');
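The bandwidth formula can also be checked from scratch with a few lines of NumPy (illustration only; FFTKDE is the efficient way to do this in practice). Here $K$ is the standard normal pdf, matching the kernel='gaussian' choice above:

```python
import numpy as np

def naive_kde(x_grid, data, h):
    # f_hat(x) = 1/(N h) * sum_i K((x - x_i) / h), with K the standard normal pdf
    u = (x_grid[:, None] - np.asarray(data)[None, :]) / h
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return K.sum(axis=1) / (len(data) * h)

grid = np.linspace(-8, 8, 2**11)
density = naive_kde(grid, [0.0, 1.0, 2.5], h=0.7)
print(round(np.trapz(density, grid), 3))  # 1.0
```

The numerical integral being 1 confirms the $1/(Nh)$ normalization.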
docs/source/introduction.ipynb
tommyod/KDEpy
gpl-3.0
Weighted data

As a final modification to the problem, let's weight each individual data point $x_i$ with a weight $w_i \geq 0$. The contribution to the sum from the data point $x_i$ is controlled by $w_i$. We divide by $\sum_{i=1}^N w_i$ to ensure that $\int \hat{f}(x) \, dx = 1$. Here is how the equation is transformed when we add weights.

$$\hat{f}(x) = \frac{1}{N h} \sum_{i=1}^N K \left( \frac{x - x_i}{h} \right)$$

$$\hat{f}(x) = \frac{1}{\left( \sum_{i=1}^N w_i \right ) h} \sum_{i=1}^N w_i K \left( \frac{x - x_i}{h} \right)$$

The factor $N^{-1}$ performed uniform weighting, and is removed in favor of the $w_i$'s.

Note (Scaling of weights). If the weights do not sum to unity, they will be scaled automatically by KDEpy.

Let's see how weighting can change the result.
plt.figure(figsize=(14, 3)); plt.subplot(1, 2, 1) plt.scatter(data, np.zeros_like(data), marker='o', c='None', edgecolor='r', alpha=0.75) # Unweighted KDE x, y = FFTKDE().fit(data)() plt.plot(x, y) plt.subplot(1, 2, 2); np.random.seed(123) weights = np.exp(data) * 25 plt.scatter(data, np.zeros_like(data), marker='o', c='None', edgecolor='r', alpha=0.75, s=weights) # Unweighted KDE x, y = FFTKDE().fit(data, weights=weights)() plt.plot(x, y);
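The weighted formula can be sketched from scratch the same way (an illustration of the equation, not how KDEpy computes it internally). The weights are normalized so the estimate still integrates to one:

```python
import numpy as np

def naive_weighted_kde(x_grid, data, weights, h):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalize: sum(w) == 1
    u = (x_grid[:, None] - np.asarray(data)[None, :]) / h
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return (K * w[None, :]).sum(axis=1) / h

grid = np.linspace(-8, 8, 2**11)
density = naive_weighted_kde(grid, [0.0, 2.0], [1.0, 3.0], h=0.7)
print(round(np.trapz(density, grid), 3))  # 1.0
```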
docs/source/introduction.ipynb
tommyod/KDEpy
gpl-3.0
Speed

There is much more to say about kernel density estimation, but let us conclude with a speed test.

Note (Speed of FFTKDE). Millions of data points pose no trouble for the FFTKDE implementation. Computational time scales linearly with the number of points in practical settings. The theoretical runtime is $\mathcal{O}(2^d N + n \log n)$, where $d$ is the dimension, $N$ is the number of data points and $n$ the number of grid points. With $d=1$ and $n = 2^{10}$ (the default values), $N$ dominates.
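The linear scaling comes from the algorithmic structure: the data are first binned onto the equidistant grid (an $\mathcal{O}(N)$ pass), after which the density is obtained by convolving the bin counts with a sampled kernel via the FFT ($\mathcal{O}(n \log n)$). Here is a hedged sketch of that idea, using simple nearest-grid-point binning rather than KDEpy's more accurate linear binning:

```python
import numpy as np

def fft_kde_1d(data, grid, h):
    # Step 1, O(N): bin the data onto the equidistant grid
    n = len(grid)
    dx = grid[1] - grid[0]
    counts, _ = np.histogram(data, bins=n,
                             range=(grid[0] - dx / 2, grid[-1] + dx / 2))
    # Step 2, O(n log n): circular convolution of counts with a sampled Gaussian
    ker_x = (np.arange(n) - n // 2) * dx
    kernel = np.exp(-0.5 * (ker_x / h) ** 2) / (h * np.sqrt(2 * np.pi))
    conv = np.fft.irfft(np.fft.rfft(counts) * np.fft.rfft(np.fft.ifftshift(kernel)), n=n)
    return conv / len(data)

data = np.random.RandomState(0).randn(10_000)
grid = np.linspace(-8, 8, 1024)
dx = grid[1] - grid[0]
density = fft_kde_1d(data, grid, h=0.3)
print(round(density.sum() * dx, 4))  # ~1, and the mode sits near 0
```

The FFT step never touches the raw data again, which is why $N$ only enters linearly.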
import time
import statistics
import itertools
import operator

def time_function(function, n=10, t=25):
    times = []
    for _ in range(t):
        data = np.random.randn(n) * 10
        weights = np.random.randn(n) ** 2
        start_time = time.perf_counter()
        function(data, weights)
        times.append(time.perf_counter() - start_time)
    return statistics.mean(times)

def time_FFT(data, weights):
    x, y = FFTKDE().fit(data, weights)()

# Generate sizes [5, 10, 50, 100, ..., 10_000_000]
data_sizes = list(itertools.accumulate([5, 2] * 7, operator.mul))
times_fft = [time_function(time_FFT, k) for k in data_sizes]

plt.loglog(data_sizes, times_fft, label='FFTKDE')
plt.xlabel('Number of data points $N$')
plt.ylabel('Average time $t$')
plt.grid(True)

# Use `elapsed` rather than `time` to avoid shadowing the time module
for size, elapsed in zip(data_sizes, times_fft):
    print('{:,}'.format(size).ljust(8), round(elapsed, 3), sep='\t')
docs/source/introduction.ipynb
tommyod/KDEpy
gpl-3.0
Let's pick a descriptor. Allowed types are:

- cnt: atom counts
- bob: bag of bonds
- soap: smooth overlap of atomic positions; choose from:
  - soap.sum - all atoms summed together
  - soap.mean - mean of all atom SOAP
  - soap.centre - computed at the central point
- mbtr: many-body tensor representation
- cm: Coulomb matrix
# TYPE is the descriptor type
TYPE = "cm"

# Show descriptor details
print("\nDescriptor details")
desc = open("./data/descriptor." + TYPE.split('.')[0] + ".txt", "r").readlines()
for l in desc:
    print(l.strip())
print(" ")
NeuralNetwork - TotalEnergy.ipynb
fullmetalfelix/ML-CSC-tutorial
gpl-3.0
and load the databases with the descriptors (input) and the correct total energies (output). Databases are quite big, so we can decide how many samples to use for training.
# Load input/output data
trainIn = load_npz("./data/energy.input." + TYPE + ".npz").toarray()
trainOut = numpy.load("./data/energy.output.npy")
trainIn = trainIn.astype(dtype=numpy.float64, casting='safe')

# Decide how many samples to take from the database
samples = min(trainIn.shape[0], 9000)
vsamples = min(trainIn.shape[0] - samples, 1000)
print("training samples:   " + str(samples))
print("validation samples: " + str(vsamples))
print("number of features: {}".format(trainIn.shape[1]))

# Split between training and validation
validIn = trainIn[samples:samples + vsamples]
validOut = trainOut[samples:samples + vsamples]
trainIn = trainIn[0:samples]
trainOut = trainOut[0:samples]

# Shift and scale the inputs (broadcasting over all columns at once)
train_mean = numpy.mean(trainIn, axis=0)
train_std = numpy.std(trainIn, axis=0)
train_std[train_std == 0] = 1
trainIn = (trainIn - train_mean) / train_std

# Apply the same transformation to the validation set
validIn = (validIn - train_mean) / train_std

# Show the first few descriptors
print("\nDescriptors for the first 5 molecules:")
print(trainIn[0:5])
NeuralNetwork - TotalEnergy.ipynb
fullmetalfelix/ML-CSC-tutorial
gpl-3.0
Next we set up a multilayer perceptron of suitable size. Our package of choice is scikit-learn, but more efficient ones are available.<br> Check the scikit-learn <a href="http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html">documentation</a> for a list of parameters.
# Set up the neural network
nn = MLPRegressor(hidden_layer_sizes=(1000, 200, 50, 50), activation='tanh',
                  solver='lbfgs', alpha=0.01, learning_rate='adaptive')
NeuralNetwork - TotalEnergy.ipynb
fullmetalfelix/ML-CSC-tutorial
gpl-3.0
Training

Now comes the tough part! The idea of training is to evaluate the ANN with the training inputs and measure its error (since we know the correct outputs). It is then possible to compute the derivative (gradient) of the error w.r.t. each parameter (connections and biases). By shifting the parameters in the opposite direction of the gradient, we obtain a better set of parameters, which should give a smaller error. This procedure can be repeated until the error is minimised. It may take a while...
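This loop (evaluate, measure the error, step against the gradient) can be sketched on a one-parameter least-squares toy problem. Plain NumPy, purely for illustration; it is not what MLPRegressor or its lbfgs solver does internally:

```python
import numpy as np

# Toy data: y = 3x plus a little noise; a single trainable weight w
rng = np.random.RandomState(0)
x = rng.randn(100)
y = 3.0 * x + 0.1 * rng.randn(100)

w = 0.0   # initial parameter
lr = 0.1  # learning rate
for epoch in range(50):
    err = w * x - y                # evaluate the model, measure the error
    grad = 2.0 * np.mean(err * x)  # gradient of the mean squared error w.r.t. w
    w -= lr * grad                 # shift w opposite the gradient
print(w)  # close to the true slope of 3.0
```

A real network repeats exactly this update, only with millions of parameters and gradients obtained by backpropagation.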
# Use this to change some parameters during training if the NN gets stuck in a bad spot
nn.set_params(solver='lbfgs')

nn.fit(trainIn, trainOut);
NeuralNetwork - TotalEnergy.ipynb
fullmetalfelix/ML-CSC-tutorial
gpl-3.0
Check the ANN quality with a regression plot, showing the mismatch between the exact and NN predicted outputs for the validation set.
# Evaluate the training and validation error
trainMLOut = nn.predict(trainIn)
validMLOut = nn.predict(validIn)

print("Mean Abs Error (training)  : ", (numpy.abs(trainMLOut - trainOut)).mean())
print("Mean Abs Error (validation): ", (numpy.abs(validMLOut - validOut)).mean())

# Regression plot
plt.plot(validOut, validMLOut, 'o')
plt.plot([min(validOut), max(validOut)], [min(validOut), max(validOut)])  # perfect-fit line
plt.xlabel('correct output')
plt.ylabel('NN output')
plt.show()

# Error histogram
plt.hist(validMLOut - validOut, 50)
plt.xlabel("Error")
plt.ylabel("Occurrences")
plt.show()
NeuralNetwork - TotalEnergy.ipynb
fullmetalfelix/ML-CSC-tutorial
gpl-3.0
Exercises

1. Compare descriptors

Keeping the size of the NN constant, test the accuracy of the different descriptors and find the best one for these systems.
# DIY code here...
NeuralNetwork - TotalEnergy.ipynb
fullmetalfelix/ML-CSC-tutorial
gpl-3.0
These are the imports from the Keras API. Note the long format which can hopefully be shortened in the future to e.g. from tf.keras.models import Model.
from tensorflow.python.keras.models import Model, Sequential
from tensorflow.python.keras.layers import Dense, Flatten, Dropout
from tensorflow.python.keras.applications import VGG16
from tensorflow.python.keras.applications.vgg16 import preprocess_input, decode_predictions
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
from tensorflow.python.keras.optimizers import Adam, RMSprop
10_Fine-Tuning.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
Function for calculating the predicted classes of the entire test-set and calling the above function to plot a few examples of mis-classified images.
def example_errors():
    # The Keras data-generator for the test-set must be reset
    # before processing. This is because the generator will loop
    # infinitely and keep an internal index into the dataset.
    # So it might start in the middle of the test-set if we do
    # not reset it first. This makes it impossible to match the
    # predicted classes with the input images.
    # If we reset the generator, then it always starts at the
    # beginning so we know exactly which input-images were used.
    generator_test.reset()

    # Predict the classes for all images in the test-set.
    y_pred = new_model.predict_generator(generator_test,
                                         steps=steps_test)

    # Convert the predicted classes from arrays to integers.
    cls_pred = np.argmax(y_pred, axis=1)

    # Plot examples of mis-classified images.
    plot_example_errors(cls_pred)

    # Print the confusion matrix.
    print_confusion_matrix(cls_pred)
10_Fine-Tuning.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
Input Pipeline

The Keras API has its own way of creating the input pipeline for training a model using files. First we need to know the shape of the tensors expected as input by the pre-trained VGG16 model. In this case it is images of shape 224 x 224 x 3.
input_shape = model.layers[0].output_shape[1:3]
input_shape
10_Fine-Tuning.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
Get the number of classes for the dataset.
# Note: recent Keras versions call this attribute `num_classes`;
# some older versions used `num_class`.
num_classes = generator_train.num_classes
num_classes
10_Fine-Tuning.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
Training the new model is just a single function call in the Keras API. This takes about 6-7 minutes on a GTX 1070 GPU.
history = new_model.fit_generator(generator=generator_train,
                                  epochs=epochs,
                                  steps_per_epoch=steps_per_epoch,
                                  class_weight=class_weight,
                                  validation_data=generator_test,
                                  validation_steps=steps_test)
10_Fine-Tuning.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
After training we can also evaluate the new model's performance on the test-set using a single function call in the Keras API.
result = new_model.evaluate_generator(generator_test, steps=steps_test)
print("Test-set classification accuracy: {0:.2%}".format(result[1]))
10_Fine-Tuning.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
The training can then be continued so as to fine-tune the VGG16 model along with the new classifier.
history = new_model.fit_generator(generator=generator_train,
                                  epochs=epochs,
                                  steps_per_epoch=steps_per_epoch,
                                  class_weight=class_weight,
                                  validation_data=generator_test,
                                  validation_steps=steps_test)
10_Fine-Tuning.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
We can then plot the loss-values and classification accuracy from the training. Depending on the dataset, the original model, the new classifier, and hyper-parameters such as the learning-rate, this may improve the classification accuracies on both training- and test-set, or it may improve on the training-set but worsen it for the test-set in case of overfitting. It may require some experimentation with the parameters to get this right.
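One standard response to the overfitting pattern described here is early stopping: halt training once the validation loss has stopped improving for a few epochs. A minimal sketch of that logic in plain Python (Keras users would typically reach for the EarlyStopping callback instead of writing this by hand):

```python
def early_stop_index(val_losses, patience=3):
    """Return the epoch at which training would stop: the first epoch
    at which the validation loss has not improved for `patience` epochs."""
    best, best_epoch = float('inf'), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return len(val_losses) - 1

# Validation loss falls, then rises: stop 3 epochs after the minimum
losses = [1.0, 0.8, 0.7, 0.75, 0.8, 0.9, 1.1]
print(early_stop_index(losses))  # -> 5
```

With a mechanism like this, the "improve on the training-set but worsen on the test-set" regime is cut off before it wastes many epochs.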
plot_training_history(history)

result = new_model.evaluate_generator(generator_test, steps=steps_test)
print("Test-set classification accuracy: {0:.2%}".format(result[1]))
10_Fine-Tuning.ipynb
newworldnewlife/TensorFlow-Tutorials
mit
The first distinction to make is between Figure, which is the outer frame of a canvas, and the rectangular XY grids or coordinate systems we place within the figure. XY grid objects are known as "axes" (plural) and most of the attributes we associate with plots are actually connected to axes. What may be confusing to the new user is that plt (pyplot) keeps track of which axes we're working on, so we have ways of communicating with axes through plt which obscures the direct connection twixt axes and their attributes. Below we avoid using plt completely except for initializing our figure, and manage to get two sets of axes (two plots inside the same figure).
fig = plt.figure("main", figsize=(5, 5))  # name or int id optional, as is figsize

ax1 = fig.add_axes([0.1, 0.5, 0.8, 0.4], xticklabels=[], ylim=(-1.2, 1.2))  # no x-axis tick labels
ax2 = fig.add_axes([0.1, 0.1, 0.8, 0.4], ylim=(-1.2, 1.2))

x = np.linspace(0, 10)
ax1.plot(np.sin(x))
_ = ax2.plot(np.cos(x))  # assign to dummy variable to suppress text output

plt.figure?
SUBPLOTS_PYT_DS_SAISOFT.ipynb
4dsolutions/Python5
mit
Here's subplot in action, creating axes automatically based on how many rows and columns we specify, followed by a sequence number i, 1 through however many (in this case six). Notice how plt is keeping track of which subplot axes are current, and we talk to said axes through plt.
for i in range(1, 7):
    plt.subplot(2, 3, i)
    plt.text(0.5, 0.5, str((2, 3, i)), fontsize=18, ha='center')
    plt.xticks([])  # get rid of tickmarks on x axis
    plt.yticks([])  # get rid of tickmarks on y axis
SUBPLOTS_PYT_DS_SAISOFT.ipynb
4dsolutions/Python5
mit
Here we're talking to the axes objects more directly by calling "get current axes". Somewhat confusingly, the instances returned have an "axes" attribute which points to the same instance, a wrinkle I explore below. Note the slight difference between the last two lines.
for i in range(1, 7):
    plt.subplot(2, 3, i)
    plt.text(0.5, 0.5, str((2, 3, i)), fontsize=18, ha='center')
    # Synonymous: gca means 'get current axes'
    plt.gca().get_yaxis().set_visible(False)
    plt.gca().axes.get_xaxis().set_visible(False)  # .axes optional, self-referential

plt.gcf().subplots_adjust(hspace=0.1, wspace=0.1)  # get current figure, adjust spacing
SUBPLOTS_PYT_DS_SAISOFT.ipynb
4dsolutions/Python5
mit
LAB: You might need to install pillow to get the code cells to work. Pillow is a Python 3 fork of PIL, the Python Imaging Library, still imported using that name. conda install pillow from the most compatible repo for whatever Anaconda environment you're using would be one way to get it. Using pip would be another. The face.png binary is in your course folder for this evening.

Question: Might we use axes to show images?

Answer: Absolutely, as matplotlib axes have an imshow method.
from PIL import Image  # Image is a module!

plt.subplot(1, 2, 1)
plt.xticks([])  # get rid of tickmarks on x axis
plt.yticks([])  # get rid of tickmarks on y axis
im = Image.open("face.png")
plt.imshow(im)

plt.subplot(1, 2, 2)
plt.xticks([])
plt.yticks([])
# Rotate 180 degrees
rotated = im.transpose(Image.ROTATE_180)
plt.imshow(rotated)

_ = plt.gcf().tight_layout()
SUBPLOTS_PYT_DS_SAISOFT.ipynb
4dsolutions/Python5
mit
The script below, borrowed from the matplotlib gallery, shows another common idiom for getting a figure and axes pair: call plt.subplots with no arguments. Then talk to ax directly, for the most part. We're also rotating the x tickmark labels by 45 degrees. Fancy! Uncommenting the use('classic') command up top makes a huge difference in the result. I'm still trying to figure that out.
# plt.style.use('classic')

vegetables = ["cucumber", "tomato", "lettuce", "asparagus",
              "potato", "wheat", "barley"]
farmers = ["Farmer Joe", "Upland Bros.", "Smith Gardening",
           "Agrifun", "Organiculture", "BioGoods Ltd.", "Cornylee Corp."]

harvest = np.array([[0.8, 2.4, 2.5, 3.9, 0.0, 4.0, 0.0],
                    [2.4, 0.0, 4.0, 1.0, 2.7, 0.0, 0.0],
                    [1.1, 2.4, 0.8, 4.3, 1.9, 4.4, 0.0],
                    [0.6, 0.0, 0.3, 0.0, 3.1, 0.0, 0.0],
                    [0.7, 1.7, 0.6, 2.6, 2.2, 6.2, 0.0],
                    [1.3, 1.2, 0.0, 0.0, 0.0, 3.2, 5.1],
                    [0.1, 2.0, 0.0, 1.4, 0.0, 1.9, 6.3]])

fig, ax = plt.subplots()
im = ax.imshow(harvest)

# We want to show all ticks...
ax.set_xticks(np.arange(len(farmers)))
ax.set_yticks(np.arange(len(vegetables)))
# ... and label them with the respective list entries
ax.set_xticklabels(farmers)
ax.set_yticklabels(vegetables)

# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
         rotation_mode="anchor")

# Loop over data dimensions and create text annotations.
for i in range(len(vegetables)):
    for j in range(len(farmers)):
        text = ax.text(j, i, harvest[i, j],
                       ha="center", va="center", color="y")

ax.set_title("Harvest of local farmers (in tons/year)")
fig.tight_layout()
SUBPLOTS_PYT_DS_SAISOFT.ipynb
4dsolutions/Python5
mit
Task I: Build the Model

Now, using Keras we will build a multivariate regression model. Remember, these kinds of models can be represented as artificial neural networks, hence why we can implement them using Keras.

<img src="resources/linear-regression-net.png" alt="Linear regression as an artificial neural network" width="300" />

The model, an artificial neural network, will consist of a $d$ dimensional input that is fully- or densely-connected to a single output neuron. The model will be made using the Keras functional guide, which allows us to take advantage of a functional API to create complex models with an arbitrary number of input and output neurons. Below is some example code for how to set up a simple model using this API with 32 inputs and 4 outputs:

```python
from keras.models import Model
from keras.layers import Input, Dense

a = Input(shape=(32,))
b = Dense(4)(a)

model = Model(inputs=a, outputs=b)
```

Notice how this is the same setup we used for the previous notebook on linear regression. Make sure to revisit that notebook if you have trouble understanding the basic usage of this API.

<div class="alert alert-success">
**Task**: Build a model using the Keras functional guide for the bike-sharing dataset. Use the following functions to put together your model:
<ul>
<li><a href="https://keras.io/models/model/">Input()</a></li>
<li><a href="https://keras.io/models/model/">Dense()</a></li>
<li><a href="https://keras.io/models/model/">Model()</a></li>
</ul>
It may be helpful to browse other parts of the Keras documentation.
</div>
# Import what we need
from keras.layers import Input, Dense
from keras.models import Model


def simple_model(nb_inputs, nb_outputs):
    """Return a Keras Model."""
    model = None

    return model


### Do *not* modify the following line ###
# Test and see that the model has been created correctly
tests.test_simple_model(simple_model)
1-regression/2-multivariate-linear-regression.ipynb
torgebo/deep_learning_workshop
mit
Selecting Hyperparameters

As opposed to standard model parameters, such as the weights in a linear model, hyperparameters are user-specified parameters not learned by the training process, i.e. they are specified a priori. In the following section we will look at how we can define and evaluate a few different hyperparameters relevant to our previously defined model. The hyperparameters we will take a look at are:

- Learning rate
- Number of epochs
- Batch size

Digression: Different Sets of Data

One of the ultimate goals of machine learning is for our models to generalise well. That is, we would like the performance of our model on the data we have trained on, i.e. the in-sample error, to be representative of the performance of our model on the data we are attempting to model, i.e. the out-of-sample error.

Unfortunately, for most problems we are unable to test our model on all possible data that we have not trained on. This might be due to difficulties gathering new data or simply because the amount of possible data is very large. For this reason, we have to settle for a different solution when we want to evaluate our trained models. The go-to solution is to gather a second set of data, in addition to the training set, called a test set. For the test set to be useful it is important that it is representative of the data we have not trained on. In other words, the error we get on the test set should be close to the out-of-sample error.

Selecting appropriate hyperparameters can be seen as a sort of meta-optimisation task on top of the learning task. Now, we could train a model several times, alter some hyperparameters each time, and record the final performance on the test set, however, this will likely yield errors that are overly optimistic. This is because looking at the test set when making learning choices, i.e. selecting hyperparameters, introduces bias and causes the estimated out-of-sample error to diverge from the true out-of-sample error. Remember, this is the reason why we have a test set in the first place.

The solution to this problem is to create a third set: the validation set. This is typically a partition of the training set, however there exist several cross-validation methodologies for how to create and use validation sets efficiently. By having this third set we can: (i) use the training set to train the trainable model parameters, (ii) use the validation set to select hyperparameters, and (iii) use the test set to estimate the out-of-sample error. This split ensures that the test set remains unbiased.

Learning Rate

As we saw in the previous notebook, the learning rate is an important parameter that decides how big of a jump we make during gradient descent-based optimisation when moving in the negative gradient direction. In order to select a good learning rate it is paramount that we track the current error / loss / cost during training after each application of the gradient descent update rule. Below is a cartoon diagram illustrating the loss over the course of training. The shape of the error as training progresses can give a good indication as to what constitutes a good learning rate.

<img src="resources/learningrates.jpeg" alt="Choice of learning rate" width="400" />

source

<div class="alert alert-danger">
<strong>Ideally we would want:</strong>
<ul>
<li>Small training error</li>
<li>Little to no overfitting, i.e. *validation* performance measure matches the training performance measure (see figure below)</li>
</ul>
</div>

Validation error refers to the error taken over a validation set on the current model.

<img src="resources/validationset.jpeg" alt="Validation set overfitting" width="400" />

source

Epochs

In artificial neural network terminology one epoch typically means that every example in the training set has been seen once by the learning algorithm. It is generally preferable to track the number of epochs as opposed to the number of iterations, i.e. applications of an update rule, because the latter depends on the batch size. In literature, iteration is sometimes used synonymously with epoch.

<div class="alert alert-danger">
<strong>Ideally we would want:</strong>
<ul>
<li>To avoid stopping the training too early</li>
<li>To avoid training for too long</li>
</ul>
</div>

Batch Size

As we saw in the previous notebook, we typically sum over multiple examples for a single application of an update rule. The number of examples we include is the batch size. The batch size allows us to control how much memory we need during training because we only need to sample examples for a single batch. This is important for when the entire dataset cannot fit in memory. The important thing to keep in mind when it comes to batch size is that the smaller the batch size, the less accurate the estimate of the gradient over the training set will be. In other words, moves done by the update rule in the space over all trainable parameters become more noisy the smaller the batch size is.

<div class="alert alert-danger">
<strong>Ideally we would want:</strong>
<ul>
<li>To fit a number of examples in memory</li>
<li>To avoid unnecessary amounts of noise when updating trainable model parameters</li>
</ul>
</div>

Plotting Error vs. Epoch with Keras

<div class="alert alert-info">
<strong>In the following code snippet we will:</strong>
<ul>
<li>Create a model using the `simple_model()` function we made earlier</li>
<li>Define all of the hyperparameters we will need</li>
<li>Train the network using gradient descent</li>
<li>Plot how the error evolves throughout training</li>
</ul>
</div>

Make sure you understand most of the code below before you continue.
"""Do not modify the following code.

It is to be used as a reference for future tasks.
"""

# Create a simple model
model = simple_model(nb_features, nb_outputs)

#
# Define hyperparameters
#
lr = 0.2
nb_epochs = 10
batch_size = 10

# Fraction of the training data held out as a validation set
validation_split = 0.1

# Define optimiser
optimizer = keras.optimizers.sgd(lr=lr)

# Compile model, use mean squared error
model.compile(loss='mean_squared_error', optimizer=optimizer)

# Print model
model.summary()

# Train and record history
logs = model.fit(X_train, y_train,
                 batch_size=batch_size,
                 epochs=nb_epochs,
                 validation_split=validation_split,
                 verbose=2)

# Plot the error
fig, ax = plt.subplots(1, 1)
pd.DataFrame(logs.history).plot(ax=ax)
ax.grid(linestyle='dotted')
ax.legend()
plt.show()

# Estimation on unseen data can be done using the `predict()` function, e.g.:
_y = model.predict(X_test)
1-regression/2-multivariate-linear-regression.ipynb
torgebo/deep_learning_workshop
mit
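The three-way train/validation/test split described above can be sketched as a generic recipe (made-up fractions here; the snippet above instead carves the validation set out via Keras's validation_split argument):

```python
import numpy as np

def train_val_test_split(X, y, val_frac=0.1, test_frac=0.2, seed=0):
    # Shuffle once, then cut the index array into three disjoint parts
    n = len(X)
    idx = np.random.RandomState(seed).permutation(n)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test, val, train = idx[:n_test], idx[n_test:n_test + n_val], idx[n_test + n_val:]
    return (X[train], y[train]), (X[val], y[val]), (X[test], y[test])

X = np.arange(100).reshape(50, 2)
y = np.arange(50)
train, val, test = train_val_test_split(X, y)
print(len(train[0]), len(val[0]), len(test[0]))  # -> 35 5 10
```

Because the three index sets are disjoint, hyperparameter choices made on the validation set never leak information from the test set.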
Now, with this model, let's try to optimize the regularization factor $\lambda$. This adjusts the strength of the regularizer. <div class="alert alert-success"> **Task**: Alter the regularization factor and assess the performance over 100 epochs using a batch size of 128. At a minimum, test out the following regularization strengths: <ul> <li> $\lambda = 0.01$</li> <li> $\lambda = 0.005$</li> <li> $\lambda = 0.0005$</li> <li> $\lambda = 0.00005$</li> </ul> Similarly to the task where you had to tune hyperparameters, you will have to write down Keras code for creating an optimiser as well as the model. Remember, it is better to write the missing components down manually rather than copy-pasting them. </div>
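To build intuition for what $\lambda$ does before wiring it into Keras (where the penalty would typically be attached to a layer via kernel_regularizer), here is a hedged NumPy sketch of L2-regularized (ridge) linear regression in closed form; the data and true weights are invented for illustration:

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Minimise ||Xw - y||^2 + lam * ||w||^2  =>  w = (X^T X + lam I)^-1 X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.RandomState(0)
X = rng.randn(50, 3)
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.randn(50)

for lam in (0.0, 1.0, 100.0):
    w = ridge_fit(X, y, lam)
    print(lam, np.round(np.linalg.norm(w), 3))  # larger lambda shrinks the weights
```

The same trade-off governs the Keras task: a larger regularization factor pulls the weights toward zero, trading a little training error for less overfitting.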
# Regularization factor (lambda); the task asks you to try several values
reg_factor = 0.005

# Create a simple model
model = None

#
# Define hyperparameters
#
lr = 0.0005
nb_epochs = 100
batch_size = 128

# Fraction of the training data held out as a validation set
validation_split = 0.1

# Define optimiser

# Compile model, use mean squared error


### Do *not* modify the following lines ###

# Print model
model.summary()

# Train our network and do live plots of loss
tools.assess_multivariate_model(model, X_train, y_train,
                                X_test, y_test, test_dates,
                                nb_epochs, batch_size,
                                validation_split)
1-regression/2-multivariate-linear-regression.ipynb
torgebo/deep_learning_workshop
mit
Explore the Data

The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:

* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck

Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.

Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np

# Explore the dataset
batch_id = 4
sample_id = 420
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
image-classification/dlnd_image_classification.ipynb
mikelseverson/Udacity-Deep_Learning-Nanodegree
mit
Implement Preprocess Functions

Normalize

In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    mins = np.min(x, axis=0)
    maxs = np.max(x, axis=0)
    rng = maxs - mins
    # (x - mins) / rng is algebraically the same as 1 - (maxs - x) / rng
    return (x - mins) / rng


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
image-classification/dlnd_image_classification.ipynb
mikelseverson/Udacity-Deep_Learning-Nanodegree
mit