## Introduction

Dust in the interstellar medium (ISM) extinguishes background starlight. The wavelength dependence of the extinction is such that short-wavelength light is extinguished more than long-wavelength light, and we call this effect *reddening*.

If you're new to extinction, here is a brief introduction to the types of quantities involved. The fractional change to the flux of starlight is

$$\frac{dF_\lambda}{F_\lambda} = -d\tau_\lambda,$$

where $\tau_\lambda$ is the optical depth and depends on wavelength. Integrating along the line of sight, the resultant flux is an exponential function of optical depth,

$$\tau_\lambda = -\ln\left(\frac{F_\lambda}{F_{\lambda,0}}\right).$$

With an eye to how we define magnitudes, we usually change the base from $e$ to 10,

$$\tau_\lambda = -2.303\log\left(\frac{F_\lambda}{F_{\lambda,0}}\right),$$

and define an extinction $A_\lambda = 1.086\,\tau_\lambda$ so that

$$A_\lambda = -2.5\log\left(\frac{F_\lambda}{F_{\lambda,0}}\right).$$

There are two basic take-home messages from this derivation:

* Extinction introduces a multiplying factor $10^{-0.4 A_\lambda}$ to the flux.
* Extinction is defined relative to the flux without dust, $F_{\lambda,0}$.

Once astropy and the affiliated packages are installed, we can import from them as needed:

## Example 1: Investigate Extinction Models

The `dust_extinction` package provides various models for extinction $A_\lambda$ normalized to $A_V$. The shapes of the normalized curves are relatively (and perhaps surprisingly) uniform in the Milky Way. The little variation that exists is often parameterized by the ratio of extinction ($A_V$) to reddening in the blue-visual ($E_{B-V}$),

$$R_V \equiv \frac{A_V}{E_{B-V}},$$

where $E_{B-V}$ is the differential extinction $A_B - A_V$. In this example, we show the $R_V$-parameterization for the Cardelli, Clayton, & Mathis (1989; CCM) and Fitzpatrick (1999) models.
[More model options are available in the `dust_extinction` documentation.](https://dust-extinction.readthedocs.io/en/latest/dust_extinction/model_flavors.html)
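As a quick numerical check of the two take-home points above, independent of any extinction package: one magnitude of extinction dims the flux by a factor $10^{-0.4} \approx 0.398$, and the magnitude and optical-depth definitions round-trip consistently.

```python
import math

# A_lambda = 1 mag of extinction multiplies the flux by 10**(-0.4 * 1)
A = 1.0
factor = 10 ** (-0.4 * A)
print(factor)  # ~0.398: roughly 60% of the light is lost

# Round-trip: recover A_lambda from the flux ratio F / F_0
F0 = 1.0
F = F0 * factor
A_recovered = -2.5 * math.log10(F / F0)
print(A_recovered)  # 1.0

# Consistency with the optical-depth relation A_lambda = 1.086 * tau_lambda
tau = -math.log(F / F0)
print(1.086 * tau)  # ~1.0
```

The factor 1.086 is just $2.5/\ln 10$, which is why the last check reproduces $A_\lambda$ to three decimal places.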
```python
import numpy as np
import astropy.units as u
import matplotlib.pyplot as plt
from dust_extinction.parameter_averages import CCM89, F99

# Create wavelengths array.
wav = np.arange(0.1, 3.0, 0.001)*u.micron

for model in [CCM89, F99]:
    for R in (2.0, 3.0, 4.0):
        # Initialize the extinction model
        ext = model(Rv=R)
        plt.plot(1/wav, ext(wav), label=model.name + ' R=' + str(R))

plt.xlabel(r'$\lambda^{-1}$ ($\mu$m$^{-1}$)')
plt.ylabel(r'A($\lambda$) / A(V)')
plt.legend(loc='best')
plt.title('Some Extinction Laws')
plt.show()
```
BSD-3-Clause
tutorials/color-excess/color-excess.ipynb
jvictor42/astropy-tutorials
Astronomers studying the ISM often display extinction curves against inverse wavelength (wavenumber) to show the ultraviolet variation, as we do here. Infrared extinction varies much less and approaches zero at long wavelengths in the absence of wavelength-independent, or grey, extinction.

## Example 2: Deredden a Spectrum

Here we deredden (unextinguish) the IUE ultraviolet spectrum and optical photometry of the star $\rho$ Oph (HD 147933).

First, we will use astroquery to fetch the archival [IUE spectrum from MAST](https://archive.stsci.edu/iue/):
```python
import pathlib
from astroquery.mast import Observations

download_dir = pathlib.Path('~/.astropy/cache/astroquery/Mast').expanduser()
download_dir.mkdir(exist_ok=True)

obsTable = Observations.query_object("HD 147933", radius="1 arcsec")
obsTable_spec = obsTable[obsTable['dataproduct_type'] == 'spectrum']
obsTable_spec

obsids = obsTable_spec[39]['obsid']
dataProductsByID = Observations.get_product_list(obsids)
manifest = Observations.download_products(dataProductsByID,
                                          download_dir=str(download_dir))
```
We read the downloaded files into an astropy table:
```python
from astropy.table import Table

t_lwr = Table.read(download_dir / 'mastDownload/IUE/lwr05639/lwr05639mxlo_vo.fits')
print(t_lwr)
```
The `.quantity` extension in the next lines will read the Table columns into Quantity vectors. Quantities keep the units of the Table column attached to the numpy array values.
```python
wav_UV = t_lwr['WAVE'][0, ].quantity
UVflux = t_lwr['FLUX'][0, ].quantity
```
Now, we use astroquery again to fetch photometry from Simbad to go with the IUE spectrum:
```python
from astroquery.simbad import Simbad

custom_query = Simbad()
custom_query.add_votable_fields('fluxdata(U)', 'fluxdata(B)', 'fluxdata(V)')
phot_table = custom_query.query_object('HD 147933')
Umag = phot_table['FLUX_U']
Bmag = phot_table['FLUX_B']
Vmag = phot_table['FLUX_V']
```
To convert the photometry to flux, we look up some [properties of the photometric passbands](http://ned.ipac.caltech.edu/help/photoband.lst), including the flux of a magnitude zero star through each passband, also known as the zero-point of the passband.
```python
wav_U = 0.3660 * u.micron
zeroflux_U_nu = 1.81E-23 * u.Watt/(u.m*u.m*u.Hz)
wav_B = 0.4400 * u.micron
zeroflux_B_nu = 4.26E-23 * u.Watt/(u.m*u.m*u.Hz)
wav_V = 0.5530 * u.micron
zeroflux_V_nu = 3.64E-23 * u.Watt/(u.m*u.m*u.Hz)
```
The zero-points that we found for the optical passbands are not in the same units as the IUE fluxes. To make matters worse, the zero-point fluxes are $F_\nu$ and the IUE fluxes are $F_\lambda$. To convert between them, the wavelength is needed. Fortunately, astropy provides an easy way to make the conversion with *equivalencies*:
```python
zeroflux_U = zeroflux_U_nu.to(u.erg/u.AA/u.cm/u.cm/u.s,
                              equivalencies=u.spectral_density(wav_U))
zeroflux_B = zeroflux_B_nu.to(u.erg/u.AA/u.cm/u.cm/u.s,
                              equivalencies=u.spectral_density(wav_B))
zeroflux_V = zeroflux_V_nu.to(u.erg/u.AA/u.cm/u.cm/u.s,
                              equivalencies=u.spectral_density(wav_V))
```
Now we can convert from photometry to flux using the definition of magnitude:

$$F = F_0\ 10^{-0.4\, m}$$
```python
Uflux = zeroflux_U * 10.**(-0.4*Umag)
Bflux = zeroflux_B * 10.**(-0.4*Bmag)
Vflux = zeroflux_V * 10.**(-0.4*Vmag)
```
Using astropy quantities allows us to take advantage of astropy's unit support in plotting. [Calling `astropy.visualization.quantity_support` explicitly turns the feature on.](http://docs.astropy.org/en/stable/units/quantity.html#plotting-quantities) Then, when quantity objects are passed to matplotlib plotting functions, the axes are automatically labeled with the unit of the quantity. In addition, quantities are converted automatically into the same units when combining multiple plots on the same axes.
```python
import astropy.visualization

astropy.visualization.quantity_support()

plt.plot(wav_UV, UVflux, 'm', label='UV')
plt.plot(wav_V, Vflux, 'ko', label='U, B, V')
plt.plot(wav_B, Bflux, 'ko')
plt.plot(wav_U, Uflux, 'ko')
plt.legend(loc='best')
plt.ylim(0, 3E-10)
plt.title('rho Oph')
plt.show()
```
Finally, we initialize the extinction model, choosing values $R_V = 5$ and $E_{B-V} = 0.5$. This star is famous in the ISM community for having large-$R_V$ dust in the line of sight.
```python
Rv = 5.0  # Usually around 3, but about 5 for this star.
Ebv = 0.5
ext = F99(Rv=Rv)
```
To extinguish (redden) a spectrum, multiply it by the attenuation returned by the `ext.extinguish` method. To unextinguish (deredden), divide by the same quantity, as we do here:
```python
plt.semilogy(wav_UV, UVflux, 'm', label='UV')
plt.semilogy(wav_V, Vflux, 'ko', label='U, B, V')
plt.semilogy(wav_B, Bflux, 'ko')
plt.semilogy(wav_U, Uflux, 'ko')

plt.semilogy(wav_UV, UVflux/ext.extinguish(wav_UV, Ebv=Ebv), 'b',
             label='dereddened: EBV=0.5, RV=5')
plt.semilogy(wav_V, Vflux/ext.extinguish(wav_V, Ebv=Ebv), 'ro',
             label='dereddened: EBV=0.5, RV=5')
plt.semilogy(wav_B, Bflux/ext.extinguish(wav_B, Ebv=Ebv), 'ro')
plt.semilogy(wav_U, Uflux/ext.extinguish(wav_U, Ebv=Ebv), 'ro')

plt.legend(loc='best')
plt.title('rho Oph')
plt.show()
```
Notice that, by dereddening the spectrum, the absorption feature at 2175 Angstroms is removed. This feature can also be seen as the prominent bump in the extinction curves in Example 1. That we have smoothly removed the 2175 Angstrom feature suggests that the values we chose, $R_V = 5$ and $E_{B-V} = 0.5$, are a reasonable model for the foreground dust.

Those experienced with dereddening should notice that `dust_extinction` returns $A_\lambda/A_V$, while other routines like the IDL `fm_unred` procedure often return $A_\lambda/E_{B-V}$ by default and need to be divided by $R_V$ in order to compare directly with `dust_extinction`.

## Example 3: Calculate Color Excess with `synphot`

Calculating broadband *photometric* extinction is harder than it might look at first. All we have to do is look up $A_\lambda$ for a particular passband, right? Under the right conditions, yes. In general, no.

Remember that we have to integrate over a passband to get synthetic photometry,

$$A = -2.5\log\left(\frac{\int W_\lambda F_{\lambda,0} 10^{-0.4A_\lambda} d\lambda}{\int W_\lambda F_{\lambda,0} d\lambda} \right),$$

where $W_\lambda$ is the fraction of incident energy transmitted through a filter. See the detailed appendix in [Bessell & Murphy (2012)](https://ui.adsabs.harvard.edu/abs/2012PASP..124..140B/abstract) for an excellent review of the issues and common misunderstandings in synthetic photometry.

There is an important point to be made here. The expression above does not simplify any further. Strictly speaking, it is impossible to convert spectral extinction $A_\lambda$ into a magnitude system without knowing the wavelength dependence of the source's original flux across the filter in question. As a special case, if we assume that the source flux is constant in the band (i.e., $F_\lambda = F$), then we can cancel these factors out of the integrals, and extinction in magnitudes becomes the weighted average of the extinction factor across the filter in question. In that special case, $A_\lambda$ at $\lambda_{\rm eff}$ is a good approximation for magnitude extinction.

In this example, we will demonstrate the more general calculation of photometric extinction. We use a blackbody curve for the flux before the dust, apply an extinction curve, and perform synthetic photometry to calculate extinction and reddening in a magnitude system.

First, let's get the filter transmission curves:
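The flat-source special case can be sketched numerically. This toy example uses a made-up top-hat filter and a hypothetical linear extinction curve (not any real passband or dust law): because the *attenuation factor*, not the extinction, is what gets averaged, the band extinction comes out close to, but not exactly, $A_\lambda$ at the band center.

```python
import numpy as np

# Hypothetical top-hat passband and linear extinction curve (illustration only)
wav = np.linspace(0.5, 0.6, 1000)        # wavelength grid, microns
W = np.ones_like(wav)                    # top-hat transmission
A_lam = 2.0 + 5.0 * (0.55 - wav)         # A(lambda) in mag, 2.0 at band center

# Flat source: band extinction is -2.5 log10 of the transmission-weighted
# mean of the attenuation factor 10**(-0.4 A_lambda)
atten = 10 ** (-0.4 * A_lam)
A_band = -2.5 * np.log10(np.sum(W * atten) / np.sum(W))
print(A_band)  # slightly below A(lambda_eff) = 2.0
```

The band value lands a hair below 2.0 mag because averaging the exponential attenuation factor weights the less-extinguished wavelengths more heavily; for steep extinction curves or structured source spectra the difference grows, which is the whole point of doing the integral.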
```python
from synphot import config
from synphot import SpectralElement

# Optional, for when the STScI ftp server is not answering:
config.conf.vega_file = 'http://ssb.stsci.edu/cdbs/calspec/alpha_lyr_stis_008.fits'
config.conf.johnson_u_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_u_004_syn.fits'
config.conf.johnson_b_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_b_004_syn.fits'
config.conf.johnson_v_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_v_004_syn.fits'
config.conf.johnson_r_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_r_003_syn.fits'
config.conf.johnson_i_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/johnson_i_003_syn.fits'
config.conf.bessel_j_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/bessell_j_003_syn.fits'
config.conf.bessel_h_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/bessell_h_004_syn.fits'
config.conf.bessel_k_file = 'http://ssb.stsci.edu/cdbs/comp/nonhst/bessell_k_003_syn.fits'

u_band = SpectralElement.from_filter('johnson_u')
b_band = SpectralElement.from_filter('johnson_b')
v_band = SpectralElement.from_filter('johnson_v')
r_band = SpectralElement.from_filter('johnson_r')
i_band = SpectralElement.from_filter('johnson_i')
j_band = SpectralElement.from_filter('bessel_j')
h_band = SpectralElement.from_filter('bessel_h')
k_band = SpectralElement.from_filter('bessel_k')
```
If you are running this with your own python, see the [synphot documentation](https://synphot.readthedocs.io/en/latest/installation-and-setup) on how to install your own copy of the necessary files. Next, let's make a background flux to which we will apply extinction. Here we make a 10,000 K blackbody using the model mechanism from within `synphot` and normalize it to $V$ = 10 in the Vega-based magnitude system.
```python
from synphot import SourceSpectrum, units
from synphot.models import BlackBodyNorm1D

# First, create a blackbody at some temperature.
sp = SourceSpectrum(BlackBodyNorm1D, temperature=10000)
# sp.plot(left=1, right=15000, flux_unit='flam', title='Blackbody')

# Get the Vega spectrum as the zero point flux.
vega = SourceSpectrum.from_vega()
# vega.plot(left=1, right=15000)

# Normalize the blackbody to some chosen magnitude, say V = 10.
vmag = 10.
v_band = SpectralElement.from_filter('johnson_v')
sp_norm = sp.normalize(vmag * units.VEGAMAG, v_band, vegaspec=vega)
sp_norm.plot(left=1, right=15000, flux_unit='flam', title='Normed Blackbody')
```
Now we initialize the extinction model and choose an extinction of $A_V$ = 2. To get the `dust_extinction` model working with `synphot`, we create a wavelength array and make a spectral element with the extinction model as a lookup table.
```python
from synphot import ExtinctionModel1D
from synphot.reddening import ExtinctionCurve

# Initialize the extinction model and choose the extinction, here Av = 2.
ext = CCM89(Rv=3.1)
Av = 2.

# Create a wavelength array.
wav = np.arange(0.1, 3, 0.001)*u.micron

# Make the extinction model in synphot using a lookup table.
ex = ExtinctionCurve(ExtinctionModel1D,
                     points=wav, lookup_table=ext.extinguish(wav, Av=Av))
sp_ext = sp_norm*ex
sp_ext.plot(left=1, right=15000, flux_unit='flam',
            title='Normed Blackbody with Extinction')
```
Synthetic photometry refers to modeling an observation of a star by multiplying its theoretical spectrum by a filter response function and then integrating.
```python
from synphot import Observation

# "Observe" the star through the filter and integrate to get photometric mag.
sp_obs = Observation(sp_ext, v_band)
sp_obs_before = Observation(sp_norm, v_band)
# sp_obs.plot(left=1, right=15000, flux_unit='flam',
#             title='Normed Blackbody with Extinction through V Filter')
```
Next, `synphot` performs the integration and computes magnitudes in the Vega system.
```python
sp_stim_before = sp_obs_before.effstim(flux_unit='vegamag', vegaspec=vega)
sp_stim = sp_obs.effstim(flux_unit='vegamag', vegaspec=vega)
print('before dust, V =', np.round(sp_stim_before, 1))
print('after dust, V =', np.round(sp_stim, 1))

# Calculate extinction and compare to our chosen value.
Av_calc = sp_stim - sp_stim_before
print('$A_V$ = ', np.round(Av_calc, 1))
```
This is a good check for us to do. We normalized our spectrum to $V$ = 10 mag and added 2 mag of visual extinction, so the synthetic photometry procedure should reproduce these chosen values, and it does.

Now we are ready to find the extinction in other passbands. We calculate the new photometry for the rest of the Johnson optical and the Bessell infrared filters. We calculate extinction $A = \Delta m$ and plot color excess, $E(\lambda - V) = A_\lambda - A_V$.

Notice that `synphot` calculates the effective wavelength of the observations for us, which is very useful for plotting the results. We show reddening with the model extinction curve for comparison in the plot.
```python
bands = [u_band, b_band, v_band, r_band, i_band, j_band, h_band, k_band]

for band in bands:
    # Calculate photometry with dust:
    sp_obs = Observation(sp_ext, band, force='extrap')
    obs_effstim = sp_obs.effstim(flux_unit='vegamag', vegaspec=vega)
    # Calculate photometry without dust:
    sp_obs_i = Observation(sp_norm, band, force='extrap')
    obs_i_effstim = sp_obs_i.effstim(flux_unit='vegamag', vegaspec=vega)

    # Extinction = mag with dust - mag without dust
    # Color excess = extinction at lambda - extinction at V
    color_excess = obs_effstim - obs_i_effstim - Av_calc
    plt.plot(sp_obs_i.effective_wavelength(), color_excess, 'or')
    print(np.round(sp_obs_i.effective_wavelength(), 1), ',',
          np.round(color_excess, 2))

# Plot the model extinction curve for comparison
plt.plot(wav, Av*ext(wav) - Av, '--k')
plt.ylim([-2, 2])
plt.xlabel(r'$\lambda$ (Angstrom)')
plt.ylabel(r'E($\lambda$-V)')
plt.title('Reddening of T=10,000K Background Source with Av=2')
plt.show()
```
# 2D ERT modeling and inversion
```python
import matplotlib.pyplot as plt
import numpy as np

import pygimli as pg
import pygimli.meshtools as mt
from pygimli.physics import ert
```
CC0-1.0
plot_01_ert_2d_mod_inv.ipynb
ruboerner/pg
Create the geometry definition for the modelling domain. `worldMarker=True` indicates the default boundary conditions for ERT modelling.
world = mt.createWorld(start=[-50, 0], end=[50, -50], layers=[-1, -8], worldMarker=True)
Create some heterogeneous circular anomalies.
```python
block = mt.createCircle(pos=[-4.0, -5.0], radius=[1, 1.8], marker=4,
                        boundaryMarker=10, area=0.01)
circle = mt.createCircle(pos=[4.0, -5.0], radius=[1, 1.8], marker=5,
                         boundaryMarker=10, area=0.01)
poly = mt.createPolygon([(1, -4), (2, -1.5), (4, -2), (5, -2),
                         (8, -3), (5, -3.5), (3, -4.5)],
                        isClosed=True, addNodes=3, interpolate='spline',
                        marker=5)
```
Merge geometry definition into a Piecewise Linear Complex (PLC)
geom = world + block + circle # + poly
Optional: show the geometry
pg.show(geom)
Create a dipole-dipole ('dd') measuring scheme with 42 electrodes.
scheme = ert.createData(elecs=np.linspace(start=-20, stop=20, num=42), schemeName='dd')
Put all electrode (aka sensor) positions into the PLC to enforce mesh refinement. From experience, it is convenient to add further refinement nodes at a distance of 10% of the electrode spacing to achieve sufficient numerical accuracy.
```python
for p in scheme.sensors():
    geom.createNode(p)
    geom.createNode(p - [0, 0.01])

# Create a mesh for the finite element modelling with appropriate mesh quality.
mesh = mt.createMesh(geom, quality=34)

# Create a map to set resistivity values in the appropriate regions
# [[regionNumber, resistivity], [regionNumber, resistivity], [...]]
rhomap = [[1, 50.], [2, 50.], [3, 50.], [4, 150.], [5, 15]]

# Take a look at the mesh and the resistivity distribution
pg.show(mesh, data=rhomap, label=pg.unit('res'), showMesh=True)
```
Perform the modelling with the mesh and the measuring scheme, and return a data container with apparent resistivity values, geometric factors, and estimated data errors specified by the noise setting. The noise is also added to the data; here 1% plus 1 µV. Note that we force a specific noise seed because we want reproducible results for testing purposes.
```python
data = ert.simulate(mesh, scheme=scheme, res=rhomap, noiseLevel=1,
                    noiseAbs=1e-6, seed=1337, verbose=False)

pg.info(np.linalg.norm(data['err']), np.linalg.norm(data['rhoa']))
pg.info('Simulated data', data)
pg.info('The data contains:', data.dataMap().keys())
pg.info('Simulated rhoa (min/max)', min(data['rhoa']), max(data['rhoa']))
pg.info('Selected data noise %(min/max)',
        min(data['err'])*100, max(data['err'])*100)
# data['k']
```
Optional: you can filter all values and tokens in the data container. It is possible that there are some negative data values due to noise and huge geometric factors, so we need to remove them.
```python
data.remove(data['rhoa'] < 0)
# data.remove(data['k'] < -20000.0)
pg.info('Filtered rhoa (min/max)', min(data['rhoa']), max(data['rhoa']))

# You can save the data for further use
data.save('simple.dat')

# You can take a look at the data
ert.show(data, cMap="RdBu_r")
```
24/11/21 - 13:43:27 - pyGIMLi - INFO - Filtered rhoa (min/max) 44.872181820362094 55.08720265284357
Initialize the ERTManager, e.g. with a data container or a filename.
mgr = ert.ERTManager('simple.dat')
Run the inversion with the preset data. The inversion mesh will be created with default settings.
```python
inv = mgr.invert(lam=10, verbose=False)
# np.testing.assert_approx_equal(mgr.inv.chi2(), 0.7, significant=1)
```
24/11/21 - 13:41:23 - pyGIMLi - INFO - Found 2 regions.
24/11/21 - 13:41:23 - pyGIMLi - INFO - Region with smallest marker (1) set to background
24/11/21 - 13:41:23 - pyGIMLi - INFO - Creating forward mesh from region infos.
24/11/21 - 13:41:23 - Core - WARNING - Region Nr: 1 is background and should not get a model transformation.
24/11/21 - 13:41:23 - Core - WARNING - Region Nr: 1 is background and should not get a model control.
24/11/21 - 13:41:23 - pyGIMLi - INFO - Creating refined mesh (H2) to solve forward task.
24/11/21 - 13:41:23 - pyGIMLi - INFO - Set default startmodel to median(data values)=49.532539205485705
24/11/21 - 13:41:23 - pyGIMLi - INFO - Created startmodel from forward operator: 633 [49.532539205485705,...,49.532539205485705]
Let the ERTManager show you the model of the last successful run and how it fits the data. This shows data, model response, and model.
```python
mgr.showResultAndFit(cMap="RdBu_r")
meshPD = pg.Mesh(mgr.paraDomain)  # Save copy of para mesh for plotting later
```
You can also provide your own mesh (e.g., a structured grid if you like them). Note that the x and y coordinates need to be in ascending order to ensure that all the cells in the grid have the correct orientation, i.e., all cells need to be numbered counter-clockwise and the boundary normal directions need to point outward.
```python
inversionDomain = pg.createGrid(x=np.linspace(start=-21, stop=21, num=43),
                                y=-pg.cat([0], pg.utils.grange(0.5, 8, n=8))[::-1],
                                marker=2)
```
The inversion domain for ERT problems needs a boundary that represents the far regions in the subsurface of the halfspace. Give it a cell marker lower than the marker for the inversion region; the lowest cell marker in the mesh will be the inversion boundary region by default.
```python
grid = pg.meshtools.appendTriangleBoundary(inversionDomain, marker=1,
                                           xbound=50, ybound=50)
pg.show(grid, markers=True)
```
The inversion can also be called with data and mesh as arguments:
```python
model = mgr.invert(data, mesh=grid, lam=10, verbose=False)
# np.testing.assert_approx_equal(mgr.inv.chi2(), 0.951027, significant=3)
```
24/11/21 - 13:41:27 - pyGIMLi - INFO - Found 2 regions.
24/11/21 - 13:41:27 - pyGIMLi - INFO - Region with smallest marker (1) set to background
24/11/21 - 13:41:27 - pyGIMLi - INFO - Creating forward mesh from region infos.
24/11/21 - 13:41:27 - Core - WARNING - Region Nr: 1 is background and should not get a model transformation.
24/11/21 - 13:41:27 - Core - WARNING - Region Nr: 1 is background and should not get a model control.
24/11/21 - 13:41:27 - pyGIMLi - INFO - Creating refined mesh (H2) to solve forward task.
24/11/21 - 13:41:27 - pyGIMLi - INFO - Set default startmodel to median(data values)=49.53253920548567
24/11/21 - 13:41:27 - pyGIMLi - INFO - Created startmodel from forward operator: 336 [49.53253920548567,...,49.53253920548567]
You can of course get access to the mesh and model and plot them on your own. Note that the cells of the parametric domain of your mesh might be in a different order than the values in the model array if regions are used. The manager can help to permute them into the right order.
```python
# np.testing.assert_approx_equal(mgr.inv.chi2(), 1.4, significant=2)

maxC = 150
modelPD = mgr.paraModel(model)  # do the mapping
pg.show(mgr.paraDomain, modelPD, label='Model', cMap='RdBu_r',
        logScale=True, cMin=15, cMax=maxC)

pg.info('Inversion stopped with chi² = {0:.3}'.format(mgr.fw.chi2()))

fig, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True, sharey=True,
                                    figsize=(8, 7))

pg.show(mesh, rhomap, ax=ax1, hold=True, cMap="RdBu_r", logScale=True,
        orientation="vertical", cMin=15, cMax=maxC)
pg.show(meshPD, inv, ax=ax2, hold=True, cMap="RdBu_r", logScale=True,
        orientation="vertical", cMin=15, cMax=maxC)
mgr.showResult(ax=ax3, cMin=15, cMax=maxC, cMap="RdBu_r",
               orientation="vertical")

labels = ["True model", "Inversion unstructured mesh", "Inversion regular grid"]
for ax, label in zip([ax1, ax2, ax3], labels):
    ax.set_xlim(mgr.paraDomain.xmin(), mgr.paraDomain.xmax())
    ax.set_ylim(mgr.paraDomain.ymin(), mgr.paraDomain.ymax())
    ax.set_title(label)
```
24/11/21 - 13:43:44 - pyGIMLi - INFO - Inversion stopped with chi² = 0.944
# N-gram language models, or how to write scientific papers (4 pts)

We shall train our language model on a corpus of [ArXiv](http://arxiv.org/) articles and see if we can generate a new one!

![img](https://media.npr.org/assets/img/2013/12/10/istock-18586699-monkey-computer_brick-16e5064d3378a14e0e4c2da08857efe03c04695e-s800-c85.jpg)

_Data by neelshah18 from [here](https://www.kaggle.com/neelshah18/arxivdataset/)_

_Disclaimer: this has nothing to do with actual science. But it's fun, so who cares?!_
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

# Alternative manual download link: https://yadi.sk/d/_nGyU2IajjR9-w
# !wget "https://www.dropbox.com/s/99az9n1b57qkd9j/arxivData.json.tar.gz?dl=1" -O arxivData.json.tar.gz
# !tar -xvzf arxivData.json.tar.gz
data = pd.read_json("./arxivData.json")
data.sample(n=5)

# assemble lines: concatenate title and description
lines = data.apply(lambda row: row['title'] + ' ; ' + row['summary'], axis=1).tolist()
sorted(lines, key=len)[:3]
```
MIT
week03_lm/seminar.ipynb
ivkrasovskiy/nlp_course
## Tokenization

You know the drill. The data is messy. Go clean the data. Use `WordPunctTokenizer` or something.
```python
from nltk.tokenize import WordPunctTokenizer

# Task: convert lines (in-place) into strings of space-separated tokens
# using WordPunctTokenizer
tokenizer = WordPunctTokenizer()
lines = [' '.join(tokenizer.tokenize(line.lower())) for line in lines]

assert sorted(lines, key=len)[0] == \
    'differential contrastive divergence ; this paper has been retracted .'
assert sorted(lines, key=len)[2] == \
    'p = np ; we claim to resolve the p =? np problem via a formal argument for p = np .'
```
## N-Gram Language Model (1 point)

A language model is a probabilistic model that estimates text probability: the joint probability of all tokens $w_t$ in text $X$: $P(X) = P(w_1, \dots, w_T)$. It can do so by following the chain rule:

$$P(w_1, \dots, w_T) = P(w_1)P(w_2 \mid w_1)\dots P(w_T \mid w_1, \dots, w_{T-1}).$$

The problem with such an approach is that the final term $P(w_T \mid w_1, \dots, w_{T-1})$ depends on $T-1$ previous words. This probability is impractical to estimate for long texts, e.g. $T = 1000$.

One popular approximation is to assume that the next word depends only on a finite number of previous words:

$$P(w_t \mid w_1, \dots, w_{t - 1}) = P(w_t \mid w_{t - n + 1}, \dots, w_{t - 1})$$

Such a model is called an __n-gram language model__, where n is a parameter. For example, in a 3-gram language model, each word depends only on the 2 previous words:

$$P(w_1, \dots, w_T) = \prod_t P(w_t \mid w_{t - n + 1}, \dots, w_{t - 1}).$$

You can also sometimes see this approximation under the name of the _n-th order Markov assumption_.

The first stage in building such a model is counting all word occurrences given the (n - 1) previous words.
```python
from tqdm import tqdm
from collections import defaultdict, Counter

# special tokens:
# - UNK represents absent tokens,
# - EOS is a special token after the end of sequence
UNK, EOS = "_UNK_", "_EOS_"

def count_ngrams(lines, n):
    """
    Count how many times each word occurred after (n - 1) previous words
    :param lines: an iterable of strings with space-separated tokens
    :returns: a dictionary { tuple(prefix_tokens): {next_token_1: count_1, next_token_2: count_2}}

    Two edge cases are handled here:
    - if the prefix is shorter than (n - 1) tokens, it is padded with UNK. For n=3,
        empty prefix: "" -> (UNK, UNK)
        short prefix: "the" -> (UNK, the)
        long prefix: "the new approach" -> (new, approach)
    - a special token, EOS, is added at the end of each sequence
        "... with deep neural networks ." -> (..., with, deep, neural, networks, ., EOS)
      and counted just like all other tokens.
    """
    counts = defaultdict(Counter)
    # counts[(word1, word2)][word3] = how many times word3 occurred after (word1, word2)
    for line in tqdm(lines):
        # pad with UNK on the left, append EOS on the right
        tokens = [UNK] * (n - 1) + line.split() + [EOS]
        for i in range(n - 1, len(tokens)):
            prefix = tuple(tokens[i - n + 1:i])
            counts[prefix][tokens[i]] += 1
    return counts

# let's test it
dummy_lines = sorted(lines, key=len)[:100]
dummy_counts = count_ngrams(dummy_lines, n=3)
assert set(map(len, dummy_counts.keys())) == {2}, "please only count {n-1}-grams"
assert len(dummy_counts[('_UNK_', '_UNK_')]) == 78
assert dummy_counts['_UNK_', 'a']['note'] == 3
assert dummy_counts['p', '=']['np'] == 2
assert dummy_counts['author', '.']['_EOS_'] == 1
```
Once we can count N-grams, we can build a probabilistic language model. The simplest way to compute probabilities is in proportion to counts:

$$P(w_t \mid prefix) = { Count(prefix, w_t) \over \sum_{\hat w} Count(prefix, \hat w) }$$
```python
class NGramLanguageModel:
    def __init__(self, lines, n):
        """
        Train a simple count-based language model:
        compute probabilities P(w_t | prefix) given ngram counts

        :param n: computes probability of next token given (n - 1) previous words
        :param lines: an iterable of strings with space-separated tokens
        """
        assert n >= 1
        self.n = n

        counts = count_ngrams(lines, self.n)

        # compute token probabilities given counts
        self.probs = defaultdict(Counter)
        # probs[(word1, word2)][word3] = P(word3 | word1, word2)
        for prefix, token_counts in counts.items():
            total = sum(token_counts.values())
            for token, count in token_counts.items():
                self.probs[prefix][token] = count / total

    def get_possible_next_tokens(self, prefix):
        """
        :param prefix: string with space-separated prefix tokens
        :returns: a dictionary {token : its probability} for all tokens with positive probabilities
        """
        prefix = prefix.split()
        prefix = prefix[max(0, len(prefix) - self.n + 1):]
        prefix = [UNK] * (self.n - 1 - len(prefix)) + prefix
        return self.probs[tuple(prefix)]

    def get_next_token_prob(self, prefix, next_token):
        """
        :param prefix: string with space-separated prefix tokens
        :param next_token: the next token to predict probability for
        :returns: P(next_token|prefix), a single number, 0 <= P <= 1
        """
        return self.get_possible_next_tokens(prefix).get(next_token, 0)
```
Let's test it!
```python
dummy_lm = NGramLanguageModel(dummy_lines, n=3)

p_initial = dummy_lm.get_possible_next_tokens('')  # '' -> ['_UNK_', '_UNK_']
assert np.allclose(p_initial['learning'], 0.02)
assert np.allclose(p_initial['a'], 0.13)
assert np.allclose(p_initial.get('meow', 0), 0)
assert np.allclose(sum(p_initial.values()), 1)

p_a = dummy_lm.get_possible_next_tokens('a')  # 'a' -> ['_UNK_', 'a']
assert np.allclose(p_a['machine'], 0.15384615)
assert np.allclose(p_a['note'], 0.23076923)
assert np.allclose(p_a.get('the', 0), 0)
assert np.allclose(sum(p_a.values()), 1)

assert np.allclose(dummy_lm.get_possible_next_tokens('a note')['on'], 1)
assert dummy_lm.get_possible_next_tokens('a machine') == \
    dummy_lm.get_possible_next_tokens("there have always been ghosts in a machine"), \
    "your 3-gram model should only depend on 2 previous words"
```
Now that you've got a working n-gram language model, let's see what sequences it can generate. But first, let's train it on the whole dataset.
lm = NGramLanguageModel(lines, n=3)
The process of generating sequences is... well, it's sequential. You maintain a list of tokens and iteratively add the next token by sampling from the model's probabilities:

$X = []$

__forever:__
* $w_{next} \sim P(w_{next} | X)$
* $X = concat(X, w_{next})$

Instead of sampling with probabilities, one can also try always taking the most likely token, sampling among the top-K most likely tokens, or sampling with temperature. In the latter case (temperature), one samples from

$$w_{next} \sim {P(w_{next} | X) ^ {1 / \tau} \over \sum_{\hat w} P(\hat w | X) ^ {1 / \tau}}$$

where $\tau > 0$ is the model temperature. If $\tau \ll 1$, more likely tokens will be sampled with even higher probability, while less likely tokens will vanish.
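To see what the temperature does in isolation, here is a small numpy sketch that reweights a toy next-token distribution for several values of $\tau$ (the probabilities below are made up for illustration and are independent of the language model above):

```python
import numpy as np

def apply_temperature(probs, tau):
    """Rescale a probability vector as p^(1/tau) and renormalize."""
    p = np.asarray(probs, dtype=float) ** (1.0 / tau)
    return p / p.sum()

base = [0.5, 0.3, 0.2]  # toy next-token distribution
for tau in (2.0, 1.0, 0.5):
    print(tau, apply_temperature(base, tau).round(3))

# tau > 1 flattens the distribution, tau < 1 sharpens it,
# and as tau -> 0 sampling approaches taking the argmax
```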
def get_next_token(lm, prefix, temperature=1.0):
    """
    return next token after prefix;
    :param temperature: samples proportionally to lm probabilities ^ (1 / temperature)
        if temperature == 0, always takes most likely token. Break ties arbitrarily.
    """
    # one possible implementation:
    token_probs = lm.get_possible_next_tokens(prefix)
    tokens = list(token_probs.keys())
    if temperature == 0:
        return max(tokens, key=lambda token: token_probs[token])
    probs = np.array([token_probs[token] for token in tokens]) ** (1.0 / temperature)
    probs = probs / probs.sum()
    return np.random.choice(tokens, p=probs)


from collections import Counter
test_freqs = Counter([get_next_token(lm, 'there have') for _ in range(10000)])
assert 250 < test_freqs['not'] < 450
assert 8500 < test_freqs['been'] < 9500
assert 1 < test_freqs['lately'] < 200

test_freqs = Counter([get_next_token(lm, 'deep', temperature=1.0) for _ in range(10000)])
assert 1500 < test_freqs['learning'] < 3000

test_freqs = Counter([get_next_token(lm, 'deep', temperature=0.5) for _ in range(10000)])
assert 8000 < test_freqs['learning'] < 9000

test_freqs = Counter([get_next_token(lm, 'deep', temperature=0.0) for _ in range(10000)])
assert test_freqs['learning'] == 10000

print("Looks nice!")
Let's have fun with this model
prefix = 'artificial'  # <- your ideas :)

for i in range(100):
    prefix += ' ' + get_next_token(lm, prefix)
    if prefix.endswith(EOS) or len(lm.get_possible_next_tokens(prefix)) == 0:
        break

print(prefix)

prefix = 'bridging the'  # <- more of your ideas

for i in range(100):
    prefix += ' ' + get_next_token(lm, prefix, temperature=0.5)
    if prefix.endswith(EOS) or len(lm.get_possible_next_tokens(prefix)) == 0:
        break

print(prefix)
__More in the homework:__ nucleus sampling, top-k sampling, beam search (not for the faint of heart).

Evaluating language models: perplexity (1 point)

Perplexity is a measure of how well your model approximates the true probability distribution behind the data. __Smaller perplexity = better model__.

To compute perplexity on one sentence, use:
$$
{\mathbb{P}}(w_1 \dots w_N) = P(w_1, \dots, w_N)^{-\frac1N} = \left( \prod_t P(w_t \mid w_{t - n + 1}, \dots, w_{t - 1})\right)^{-\frac1N},
$$

On the corpus level, perplexity is the product of the probabilities of all tokens in all sentences, raised to the power of $-1/N$, where $N$ is the __total length of all sentences__ in the corpus.

This number can quickly get too small for float32/float64 precision, so we recommend first computing log-perplexity (from log-probabilities) and then taking the exponent.
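As a sanity check on the formula, here is a tiny standalone computation with toy probabilities (not from the model above): a model that assigns every token probability $1/V$ has perplexity exactly $V$, and working in log space avoids underflow.

```python
import numpy as np

def perplexity_from_probs(token_probs):
    """Corpus-level perplexity from per-token probabilities, computed in log space."""
    logprobs = np.log(token_probs)
    return np.exp(-logprobs.sum() / len(logprobs))

V = 1000
uniform = np.full(50, 1.0 / V)  # 50 tokens, each with probability 1/V
print(perplexity_from_probs(uniform))  # a uniform model over V tokens scores V

# multiplying 50 probabilities of 1e-3 directly would underflow single precision;
# the log-space version is unaffected
```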
def perplexity(lm, lines, min_logprob=np.log(10 ** -50.)):
    """
    :param lines: a list of strings with space-separated tokens
    :param min_logprob: if log(P(w | ...)) is smaller than min_logprob, set it equal to min_logprob
    :returns: corpus-level perplexity - a single scalar number from the formula above

    Note: do not forget to compute P(w_first | empty) and P(eos | full_sequence)
    PLEASE USE lm.get_next_token_prob and NOT lm.get_possible_next_tokens
    """
    # one possible implementation (EOS is defined earlier in the notebook):
    log_prob_sum, num_tokens = 0.0, 0
    for line in lines:
        prefix = ''
        for token in line.split() + [EOS]:
            prob = lm.get_next_token_prob(prefix, token)
            log_prob_sum += max(np.log(prob), min_logprob) if prob > 0 else min_logprob
            prefix = (prefix + ' ' + token).strip()
            num_tokens += 1
    return np.exp(-log_prob_sum / num_tokens)


lm1 = NGramLanguageModel(dummy_lines, n=1)
lm3 = NGramLanguageModel(dummy_lines, n=3)
lm10 = NGramLanguageModel(dummy_lines, n=10)

ppx1 = perplexity(lm1, dummy_lines)
ppx3 = perplexity(lm3, dummy_lines)
ppx10 = perplexity(lm10, dummy_lines)
ppx_missing = perplexity(lm3, ['the jabberwock , with eyes of flame , '])  # thanks, L. Carroll

print("Perplexities: ppx1=%.3f ppx3=%.3f ppx10=%.3f" % (ppx1, ppx3, ppx10))

assert all(0 < ppx < 500 for ppx in (ppx1, ppx3, ppx10)), "perplexity should be nonnegative and reasonably small"
assert ppx1 > ppx3 > ppx10, "higher N models should overfit and achieve lower perplexity on the training data"
assert np.isfinite(ppx_missing) and ppx_missing > 10 ** 6, "missing words should have large but finite perplexity. " \
    "Make sure you use min_logprob right"
assert np.allclose([ppx1, ppx3, ppx10], (318.2132342216302, 1.5199996213739575, 1.1838145037901249))
Now let's measure the actual perplexity: we'll split the data into train and test and score model on test data only.
from sklearn.model_selection import train_test_split
train_lines, test_lines = train_test_split(lines, test_size=0.25, random_state=42)

for n in (1, 2, 3):
    lm = NGramLanguageModel(n=n, lines=train_lines)
    ppx = perplexity(lm, test_lines)
    print("N = %i, Perplexity = %.5f" % (n, ppx))

# whoops, it just blew up :)
LM Smoothing

The problem with our simple language model is that whenever it encounters an n-gram it has never seen before, it assigns it a probability of 0. Every time this happens, perplexity explodes.

To battle this issue, there's a technique called __smoothing__. The core idea is to modify the counts in a way that prevents probabilities from getting too low. The simplest algorithm here is additive smoothing (aka [Laplace smoothing](https://en.wikipedia.org/wiki/Additive_smoothing)):

$$ P(w_t | prefix) = { Count(prefix, w_t) + \delta \over \sum_{\hat w} (Count(prefix, \hat w) + \delta) } $$

If counts for a given prefix are low, additive smoothing will adjust probabilities toward a more uniform distribution. Note that the summation in the denominator goes over _all words in the vocabulary_.

Here's example code we've implemented for you:
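A quick hand computation of the additive-smoothing formula, with toy counts and a vocabulary of 4 chosen purely for illustration:

```python
def laplace_prob(token, token_counts, vocab_size, delta=1.0):
    """P(token | prefix) with additive smoothing over a fixed vocabulary."""
    total = sum(token_counts.values()) + delta * vocab_size
    return (token_counts.get(token, 0) + delta) / total

counts = {'learning': 3, 'note': 1}  # toy counts for some prefix
V = 4                                # toy vocabulary size

print(laplace_prob('learning', counts, V))  # (3 + 1) / (4 + 4) = 0.5
print(laplace_prob('meow', counts, V))      # (0 + 1) / (4 + 4) = 0.125, unseen but nonzero
```

Note that the four smoothed probabilities (two seen tokens, two unseen) still sum to 1, which is exactly what the vocabulary-wide denominator buys you.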
class LaplaceLanguageModel(NGramLanguageModel):
    """ this code is an example, no need to change anything """
    def __init__(self, lines, n, delta=1.0):
        self.n = n
        counts = count_ngrams(lines, self.n)
        self.vocab = set(token for token_counts in counts.values() for token in token_counts)
        self.probs = defaultdict(Counter)

        for prefix in counts:
            token_counts = counts[prefix]
            total_count = sum(token_counts.values()) + delta * len(self.vocab)
            self.probs[prefix] = {token: (token_counts[token] + delta) / total_count
                                  for token in token_counts}

    def get_possible_next_tokens(self, prefix):
        token_probs = super().get_possible_next_tokens(prefix)
        missing_prob_total = 1.0 - sum(token_probs.values())
        missing_prob = missing_prob_total / max(1, len(self.vocab) - len(token_probs))
        return {token: token_probs.get(token, missing_prob) for token in self.vocab}

    def get_next_token_prob(self, prefix, next_token):
        token_probs = super().get_possible_next_tokens(prefix)
        if next_token in token_probs:
            return token_probs[next_token]
        else:
            missing_prob_total = 1.0 - sum(token_probs.values())
            missing_prob_total = max(0, missing_prob_total)  # prevent rounding errors
            return missing_prob_total / max(1, len(self.vocab) - len(token_probs))

# test that it's a valid probability model
for n in (1, 2, 3):
    dummy_lm = LaplaceLanguageModel(dummy_lines, n=n)
    assert np.allclose(sum([dummy_lm.get_next_token_prob('a', w_i) for w_i in dummy_lm.vocab]), 1), \
        "I told you not to break anything! :)"

for n in (1, 2, 3):
    lm = LaplaceLanguageModel(train_lines, n=n, delta=0.1)
    ppx = perplexity(lm, test_lines)
    print("N = %i, Perplexity = %.5f" % (n, ppx))

# optional: try to sample tokens from such a model
Kneser-Ney smoothing (2 points)

Additive smoothing is simple and reasonably good, but definitely not a state-of-the-art algorithm.

Your final task in this notebook is to implement [Kneser-Ney](https://en.wikipedia.org/wiki/Kneser%E2%80%93Ney_smoothing) smoothing.

It can be computed recurrently, for n > 1:
$$P_{kn}(w_t | prefix_{n-1}) = { \max(0, Count(prefix_{n-1}, w_t) - \delta) \over \sum_{\hat w} Count(prefix_{n-1}, \hat w)} + \lambda_{prefix_{n-1}} \cdot P_{kn}(w_t | prefix_{n-2})$$

where
- $prefix_{n-1}$ is a tuple of the $n-1$ previous tokens
- $\lambda_{prefix_{n-1}}$ is a normalization constant chosen so that the probabilities add up to 1
- the recursive term $P_{kn}(w_t | prefix_{n-2})$ corresponds to Kneser-Ney smoothing for the $(n-1)$-gram language model
- the unigram case $P_{kn}(w_t)$ is special: it measures how likely $w_t$ is to appear in an unfamiliar context

See the lecture slides or the wiki for more detailed formulae.

__Your task__ is to
- implement KneserNeyLanguageModel
- test it on 1-3 gram language models
- find an optimal (within reason) smoothing delta for a 3-gram language model with Kneser-Ney smoothing
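The distinctive part of Kneser-Ney is the unigram base case: instead of raw frequency, it uses the continuation count — in how many distinct contexts a word appears. A small self-contained sketch with made-up bigrams (not tied to the model classes above): "francisco" is frequent but only ever follows "san", so it should be unlikely in an unfamiliar context.

```python
from collections import defaultdict

# toy bigram list: ('san', 'francisco') is frequent, but 'francisco'
# follows only one word, while 'money' follows many different words
bigrams = [('san', 'francisco')] * 5 + [('more', 'money'), ('less', 'money'),
                                        ('free', 'money'), ('my', 'money')]

contexts = defaultdict(set)
for prev, word in bigrams:
    contexts[word].add(prev)

total_bigram_types = len(set(bigrams))

def continuation_prob(word):
    """P_continuation(word) = (# distinct preceding words) / (# distinct bigram types)."""
    return len(contexts[word]) / total_bigram_types

print(continuation_prob('francisco'))  # 1/5 despite 5 raw occurrences
print(continuation_prob('money'))      # 4/5: appears after many different words
```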
class KneserNeyLanguageModel(NGramLanguageModel):
    """ A template for Kneser-Ney language model. Default delta may be suboptimal. """
    def __init__(self, lines, n, delta=1.0):
        self.n = n
        <YOUR CODE>

    def get_possible_next_tokens(self, prefix):
        <YOUR CODE>

    def get_next_token_prob(self, prefix, next_token):
        <YOUR CODE>

# test that it's a valid probability model
for n in (1, 2, 3):
    dummy_lm = KneserNeyLanguageModel(dummy_lines, n=n)
    assert np.allclose(sum([dummy_lm.get_next_token_prob('a', w_i) for w_i in dummy_lm.vocab]), 1), \
        "I told you not to break anything! :)"

for n in (1, 2, 3):
    lm = KneserNeyLanguageModel(train_lines, n=n, delta=<...>)
    ppx = perplexity(lm, test_lines)
    print("N = %i, Perplexity = %.5f" % (n, ppx))
Introduction to matplotlib
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline

x = np.linspace(0, 10, 50)
y = x**2

plt.plot(x, y)
plt.title("Title")
plt.xlabel("X Label")
plt.ylabel("Y Label")

plt.subplot(1, 2, 1)
plt.plot(x, y, "red")
plt.subplot(1, 2, 2)
plt.plot(y, x, "green")

fig = plt.figure()
ax = fig.add_axes([0.1, 0.2, 0.9, 0.9])
ax.plot(x, y, "purple")
ax.set_xlabel("XLAB")

fig = plt.figure()
axes1 = fig.add_axes([0.1, 0.2, 0.8, 0.8])
axes2 = fig.add_axes([0.2, 0.5, 0.3, 0.3])
axes1.plot(x, y)
axes2.plot(y, x)
axes1.set_xlabel("x")
axes2.set_xlabel("y")

fig, axes = plt.subplots(nrows=3, ncols=3)
axes[0, 1].plot(x, y)
axes[1, 2].plot(y, x)
fig.set_size_inches(8, 8)
plt.tight_layout()

fig = plt.figure(figsize=(8, 5))
ax = fig.add_axes([0, 0, 1, 1])
ax.plot(x, x**2, label="xhoch2")
ax.plot(x, x**3, "blue", label="xhoch3", ls="--", lw=2, marker="o")
ax.set_xlim([0, 4])
ax.legend()

x = np.random.randn(1000)
plt.hist(x)

from datetime import datetime
fig = plt.figure(figsize=(8, 5))
x = np.array([datetime(2019, 1, 1, i, 0) for i in range(24)])
y = np.random.randint(100, size=x.shape)
plt.plot(x, y)

fig, ax = plt.subplots(figsize=(8, 5))
x = np.linspace(-2, 2, 1000)
y = x * np.random.randn(1000)
ax.scatter(x, y)

import pandas as pd
mydf = pd.DataFrame(np.random.rand(10, 4), columns=["a", "b", "c", "d"])
mydf.plot.bar(figsize=(8, 5))
FTL
notebooks/99_lst_Tests_Mataplotlib.ipynb
lukk60/RETrends
This page was created from a Jupyter notebook. The original notebook can be found [here](https://github.com/klane/databall/blob/master/notebooks/parameter-tuning.ipynb). It investigates tuning model parameters to achieve better performance. First we must import the necessary installed modules.
import itertools
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from functools import partial
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from hyperopt import hp
MIT
notebooks/parameter-tuning.ipynb
klane/nba
Next we need to import a few local modules.
import os
import sys
import warnings

warnings.filterwarnings('ignore')

module_path = os.path.abspath(os.path.join('..'))

if module_path not in sys.path:
    sys.path.append(module_path)

from databall.database import Database
from databall.plotting import format_538, plot_metrics, plot_matrix
from databall.model_selection import calculate_metrics, optimize_params, train_test_split
import databall.util as util
Apply the FiveThirtyEight plot style.
plt.style.use('fivethirtyeight')
Data

As before, we collect the stats and betting data from the database and create training and test sets where the 2016 season is reserved as the test set.
database = Database('../data/nba.db')
games = database.betting_stats(window=10)
x_train, y_train, x_test, y_test = train_test_split(games, 2006, 2016,
                                                    xlabels=util.stat_names() + ['SEASON'])
The stats below are the box score stats used during [feature selection](feature-selection.md). I decided to further explore these because they are readily available from multiple sources and do not require any calculation of advanced stats by users.
stats = ['FGM', 'FGA', 'FG3M', 'FG3A', 'FTM', 'FTA', 'OREB', 'DREB', 'AST', 'TOV', 'STL', 'BLK']
stats = ['TEAM_' + s for s in stats] + ['POSSESSIONS']
stats += [s + '_AWAY' for s in stats] + ['HOME_SPREAD']
Logistic Regression

The plots below show `LogisticRegression` model performance using different combinations of three parameters in a grid search: `penalty` (type of norm), `class_weight` (where "balanced" indicates weights are inversely proportional to class frequencies and the default is one), and `dual` (flag to use the dual formulation, which changes the equation being optimized). For each combination, models were trained with different `C` values, which controls the inverse of the regularization strength.

All models have similar accuracy, ROC area, and precision/recall area for all `C` values tested. However, their individual precision and recall metrics change wildly with `C`. We are more interested in accuracy for this specific problem because accuracy directly controls profit. Using a grid search is not the most efficient parameter tuning method because grid searches do not use information from prior runs to aid future parameter choices. You are at the mercy of the selected grid points.
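The grid-point problem can be seen on a toy objective (everything below is made up for illustration and is independent of this notebook's models): when a narrow optimum falls between grid points, a fixed grid can never score better than its nearest point, while a method free to place points continuously can get arbitrarily close.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(log_c):
    """Toy validation loss with a narrow optimum around log10(C) = -4.3."""
    return 1.0 - 0.05 * np.exp(-((log_c + 4.3) ** 2) / 0.01)

# grid search: fixed points straddle the narrow optimum
grid = np.linspace(-8, -2, 20)
best_grid = grid[np.argmin(loss(grid))]

# random search: same budget of 20 evaluations, but points land anywhere in the range
samples = rng.uniform(-8, -2, 20)
best_random = samples[np.argmin(loss(samples))]

print(best_grid, best_random)
# the true optimum loss(-4.3) = 0.95 is unreachable for any grid of these 20 points
```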
# Create functions that return logistic regression models with different parameters
models = [partial(LogisticRegression, penalty='l1'),
          partial(LogisticRegression, penalty='l1', class_weight='balanced'),
          partial(LogisticRegression),
          partial(LogisticRegression, class_weight='balanced'),
          partial(LogisticRegression, dual=True),
          partial(LogisticRegression, class_weight='balanced', dual=True)]

start = -8
stop = -2
C_vec = np.logspace(start=start, stop=stop, num=20)
results = calculate_metrics(models, x_train, y_train, stats, 'C', C_vec, k=6)

legend = ['L1 Norm', 'L1 Norm, Balanced Class', 'L2 Norm (Default)',
          'L2 Norm, Balanced Class', 'L2 Norm, Dual Form', 'L2 Norm, Balanced Class, Dual Form']

fig, ax = plot_metrics(C_vec, results, 'Regularization Parameter', log=True)
ax[-1].legend(legend, fontsize=16, bbox_to_anchor=(1.05, 1), borderaxespad=0)
[a.set_xlim(10**start, 10**stop) for a in ax]
[a.set_ylim(-0.05, 1.05) for a in ax]

title = 'Grid searches are not the most efficient'
subtitle = 'Grid search of logistic regression hyperparameters'
format_538(fig, 'NBA Stats & Covers.com', ax=ax, title=title, subtitle=subtitle,
           xoff=(-0.22, 3.45), yoff=(-1.54, -1.64), toff=(-.16, 1.25), soff=(-0.16, 1.12), n=100)
plt.show()
An alternative solution is to use an optimization algorithm that minimizes a loss function to select the hyperparameters. I experimented with the hyperopt package for this, which accepts a parameter search space and loss function as its inputs. The search space consists of discrete choices and ranges on continuous variables. I swapped out the `class_weight` and `dual` variables in favor of `fit_intercept` and `intercept_scaling`, which controls whether to include an intercept in the `LogisticRegression` model and a scaling factor. The scaling factor can help reduce the effect of regularization on the intercept. I chose cross-validation accuracy as the loss function (actually 1-accuracy since the optimizer minimizes the loss function) since we are interested in increasing profits. The optimal hyperparameters are displayed below.
space_log = {}
space_log['C'] = hp.loguniform('C', -8*np.log(10), -2*np.log(10))
space_log['intercept_scaling'] = hp.loguniform('intercept_scaling', -8*np.log(10), 8*np.log(10))
space_log['penalty'] = hp.choice('penalty', ['l1', 'l2'])
space_log['fit_intercept'] = hp.choice('fit_intercept', [False, True])

model = LogisticRegression()
best_log, param_log = optimize_params(model, x_train, y_train, stats, space_log, max_evals=1000)
print(best_log)
{'C': 0.0001943920615336294, 'fit_intercept': True, 'intercept_scaling': 134496.71823111628, 'penalty': 'l2'}
The search history is displayed below. The intercept scale factor tended toward high values, even though the default value is 1.0.
labels = ['Regularization', 'Intercept Scale', 'Penalty', 'Intercept']
fig, ax = plot_matrix(param_log.index.values, param_log[[k for k in space_log.keys()]].values,
                      'Iteration', labels, 2, 2, logy=[True, True, False, False])
[a.set_yticks([0, 1]) for a in ax[2:]]
ax[2].set_yticklabels(['L1', 'L2'])
ax[3].set_yticklabels(['False', 'True'])

title = 'Hyperopt is more flexible than a grid search'
subtitle = 'Hyperopt search of logistic regression hyperparameters'
format_538(fig, 'NBA Stats & Covers.com', ax=ax, title=title, subtitle=subtitle,
           xoff=(-0.18, 2.25), yoff=(-1.42, -1.52), toff=(-.16, 1.25), soff=(-0.16, 1.12),
           n=80, bottomtick=np.nan)
plt.show()
The cross-validation accuracy history shows that many models performed about the same regardless of their parameter values, as shown by the band of points just below 51% accuracy. The optimizer was also unable to find a model that significantly improved accuracy.
fig = plt.figure(figsize=(12, 6))
plt.plot(param_log.index.values, param_log['accuracy'], '.', markersize=5)

title = 'Improvements are hard to come by'
subtitle = 'Accuracy of logistic regression hyperparameter optimization history'
format_538(fig, 'NBA Stats & Covers.com', xlabel='Iteration', ylabel='Accuracy',
           title=title, subtitle=subtitle, xoff=(-0.1, 1.01), yoff=(-0.14, -0.2),
           toff=(-0.09, 1.12), soff=(-0.09, 1.04), bottomtick=0.5)
plt.show()
Support Vector Machine

The [`LinearSVC`](http://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html#sklearn.svm.LinearSVC) class is similar to a generic [`SVC`](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC) with a linear kernel, but is implemented with liblinear instead of libsvm. The documentation states that `LinearSVC` scales better to large sample sizes, since `SVC`'s fit time complexity is more than quadratic in the number of samples. I initially tried `SVC`, but the training time was too costly. `LinearSVC` proved to be much faster for this application.

The code below sets up a `LinearSVC` hyperparameter search space using four parameters: `C` (penalty of the error term), `loss` (the loss function), `fit_intercept` (identical to `LogisticRegression`), and `intercept_scaling` (identical to `LogisticRegression`). I limited the number of evaluations to 500 to reduce the computational cost.
space_svm = {}
space_svm['C'] = hp.loguniform('C', -8*np.log(10), -2*np.log(10))
space_svm['intercept_scaling'] = hp.loguniform('intercept_scaling', -8*np.log(10), 8*np.log(10))
space_svm['loss'] = hp.choice('loss', ['hinge', 'squared_hinge'])
space_svm['fit_intercept'] = hp.choice('fit_intercept', [False, True])

model = LinearSVC()
best_svm, param_svm = optimize_params(model, x_train, y_train, stats, space_svm, max_evals=500)
print(best_svm)
{'C': 3.2563857398383885e-06, 'fit_intercept': True, 'intercept_scaling': 242.79319791592195, 'loss': 'squared_hinge'}
The search history below is similar to the logistic regression history, but hyperopt appears to test more intercept scales with low values than before. This is also indicated by the drastic reduction in the intercept scale compared to logistic regression.
labels = ['Regularization', 'Intercept Scale', 'Loss', 'Intercept']
fig, ax = plot_matrix(param_svm.index.values, param_svm[[k for k in space_svm.keys()]].values,
                      'Iteration', labels, 2, 2, logy=[True, True, False, False])
[a.set_yticks([0, 1]) for a in ax[2:]]
ax[2].set_yticklabels(['Hinge', 'Squared\nHinge'])
ax[3].set_yticklabels(['False', 'True'])

title = 'Hyperopt is more flexible than a grid search'
subtitle = 'Hyperopt search of support vector machine hyperparameters'
format_538(fig, 'NBA Stats & Covers.com', ax=ax, title=title, subtitle=subtitle,
           xoff=(-0.24, 2.25), yoff=(-1.42, -1.52), toff=(-.22, 1.25), soff=(-0.22, 1.12),
           n=80, bottomtick=np.nan)
plt.show()
The plot below shows the `LinearSVC` cross-validation accuracy history. There is a band of points similar to what we observed for logistic regression below 51% accuracy. The support vector machine model does not perform much better than logistic regression, and several points fall below 50% accuracy.
fig = plt.figure(figsize=(12, 6))
plt.plot(param_svm.index.values, param_svm['accuracy'], '.', markersize=5)

title = 'Improvements are hard to come by'
subtitle = 'Accuracy of support vector machine hyperparameter optimization history'
format_538(fig, 'NBA Stats & Covers.com', xlabel='Iteration', ylabel='Accuracy',
           title=title, subtitle=subtitle, xoff=(-0.1, 1.01), yoff=(-0.14, -0.2),
           toff=(-0.09, 1.12), soff=(-0.09, 1.04), bottomtick=0.5)
plt.show()
Random Forest

The code below builds a [`RandomForestClassifier`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html#sklearn.ensemble.RandomForestClassifier) hyperparameter search space using the parameters `n_estimators` (number of decision trees in the forest), `class_weight` (identical to the `LogisticRegression` grid search), `criterion` (function to evaluate split quality), and `bootstrap` (controls whether bootstrap samples are used when building trees). I reduced the number of function evaluations to 100 in the interest of computational time.
space_rf = {}
space_rf['n_estimators'] = 10 + hp.randint('n_estimators', 40)
space_rf['criterion'] = hp.choice('criterion', ['gini', 'entropy'])
space_rf['class_weight'] = hp.choice('class_weight', [None, 'balanced'])
space_rf['bootstrap'] = hp.choice('bootstrap', [False, True])

model = RandomForestClassifier(random_state=8)
best_rf, param_rf = optimize_params(model, x_train, y_train, stats, space_rf, max_evals=100)
print(best_rf)
{'bootstrap': True, 'class_weight': 'balanced', 'criterion': 'entropy', 'n_estimators': 34}
The random forest hyperparameter search history is displayed below.
labels = ['Estimators', 'Criterion', 'Class Weight', 'Bootstrap']
fig, ax = plot_matrix(param_rf.index.values, param_rf[[k for k in space_rf.keys()]].values,
                      'Iteration', labels, 2, 2)
[a.set_yticks([0, 1]) for a in ax[1:]]
ax[1].set_yticklabels(['Gini', 'Entropy'])
ax[2].set_yticklabels(['None', 'Balanced'])
ax[3].set_yticklabels(['False', 'True'])

title = 'Hyperopt is more flexible than a grid search'
subtitle = 'Hyperopt search of random forest hyperparameters'
format_538(fig, 'NBA Stats & Covers.com', ax=ax, title=title, subtitle=subtitle,
           xoff=(-0.26, 2.25), yoff=(-1.42, -1.52), toff=(-.24, 1.25), soff=(-0.24, 1.12),
           n=80, bottomtick=np.nan)
plt.show()
The cross-validation accuracy history shows the random forest model performs slightly worse than logistic regression.
fig = plt.figure(figsize=(12, 6))
plt.plot(param_rf.index.values, param_rf['accuracy'], '.', markersize=5)

title = 'Improvements are hard to come by'
subtitle = 'Accuracy of random forest hyperparameter optimization history'
format_538(fig, 'NBA Stats & Covers.com', xlabel='Iteration', ylabel='Accuracy',
           title=title, subtitle=subtitle, xoff=(-0.1, 1.01), yoff=(-0.14, -0.2),
           toff=(-0.09, 1.12), soff=(-0.09, 1.04), bottomtick=0.5)
plt.show()
Neural Network

The code below builds an [`MLPClassifier`](http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html#sklearn.neural_network.MLPClassifier) hyperparameter search space using the parameters `hidden_layer_sizes` (number of neurons in each hidden layer), `alpha` (controls the L2 regularization, similar to the `C` parameter in `LogisticRegression` and `LinearSVC`), `activation` (network activation function), and `solver` (the algorithm used to optimize network weights). The network structure was held to a single hidden layer. I kept the number of function evaluations at 100 in the interest of computational time.
space_mlp = {}
space_mlp['hidden_layer_sizes'] = 10 + hp.randint('hidden_layer_sizes', 40)
space_mlp['alpha'] = hp.loguniform('alpha', -8*np.log(10), 3*np.log(10))
space_mlp['activation'] = hp.choice('activation', ['relu', 'logistic', 'tanh'])
space_mlp['solver'] = hp.choice('solver', ['lbfgs', 'sgd', 'adam'])

model = MLPClassifier()
best_mlp, param_mlp = optimize_params(model, x_train, y_train, stats, space_mlp, max_evals=100)
print(best_mlp)
{'activation': 'tanh', 'alpha': 5.700733605522687e-06, 'hidden_layer_sizes': 49, 'solver': 'lbfgs'}
The multi-layer perceptron hyperparameter search history is displayed below.
labels = ['Hidden Neurons', 'Regularization', 'Activation', 'Solver']
fig, ax = plot_matrix(param_mlp.index.values, param_mlp[[k for k in space_mlp.keys()]].values,
                      'Iteration', labels, 2, 2, logy=[False, True, False, False])
[a.set_yticks([0, 1, 2]) for a in ax[2:]]
ax[2].set_yticklabels(['RELU', 'Logistic', 'Tanh'])
ax[3].set_yticklabels(['LBFGS', 'SGD', 'ADAM'])

title = 'Hyperopt is more flexible than a grid search'
subtitle = 'Hyperopt search of multi-layer perceptron hyperparameters'
format_538(fig, 'NBA Stats & Covers.com', ax=ax, title=title, subtitle=subtitle,
           xoff=(-0.26, 2.25), yoff=(-1.42, -1.52), toff=(-.24, 1.25), soff=(-0.24, 1.12),
           n=80, bottomtick=np.nan)
plt.show()
The cross-validation history suggests the multi-layer perceptron performs the best of the four models, though the improvement is minor.
fig = plt.figure(figsize=(12, 6))
plt.plot(param_mlp.index.values, param_mlp['accuracy'], '.', markersize=5)

title = 'Improvements are hard to come by'
subtitle = 'Accuracy of multi-layer perceptron hyperparameter optimization history'
format_538(fig, 'NBA Stats & Covers.com', xlabel='Iteration', ylabel='Accuracy',
           title=title, subtitle=subtitle, xoff=(-0.1, 1.01), yoff=(-0.14, -0.2),
           toff=(-0.09, 1.12), soff=(-0.09, 1.04), bottomtick=0.5)
plt.show()
Collation

An alternative to `sorted()` using `icu.Collator` and `icu.RuleBasedCollator`. Currently supports lists, tuples, strings, dataframes and series.

Custom collation rules are defined for Dinka and Akan.

All icu::Collator supported locales are available.

__TODO:__
* add support for numpy arrays
* add support for dicts [[1]](https://stackoverflow.com/questions/38793694/python-sort-a-list-of-objects-dictionaries-with-a-given-sortkey-function)
* allow user to define collation rules and pass them to `el_collation.sorted_`
* allow user to modify collation rules provided by ICU locales
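For intuition, here is a dependency-free sketch of the idea behind locale-aware sorting: a sort key maps each string to a sequence of ranks in a custom alphabet, so `sorted()` follows that alphabet instead of Unicode code-point order. The mini-alphabet and words below are invented for illustration and are not the actual Dinka tailoring used by `el_collation` (which delegates to ICU).

```python
# hypothetical mini-alphabet where 'ë' sorts between 'e' and 'r' (illustration only);
# in code-point order 'ë' (U+00EB) would sort after 'u'
ALPHABET = ['a', 'b', 'e', 'ë', 'r', 'u']
RANK = {letter: i for i, letter in enumerate(ALPHABET)}

def collation_key(word):
    """Map a word to a tuple of alphabet ranks, usable as a sort key."""
    return tuple(RANK[ch] for ch in word)

words = ['abër', 'abur']
print(sorted(words))                     # code-point order: ['abur', 'abër']
print(sorted(words, key=collation_key))  # custom alphabet order: ['abër', 'abur']
```

A real collator additionally handles multi-character graphemes, case, and accent strength, which is why the library builds on `icu.RuleBasedCollator` rather than a flat rank table.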
import pandas as pd
import el_collation as elcol
import random
MIT
collation/el_collation.ipynb
enabling-languages/dinka
Custom Collation Rules (unsupported locales)

The following examples use predefined collation rules for Dinka.
# Set language
lang = "din-SS"

# Provide Dinka lexemes
ordered_lexemes_tuple = (
    'abany', 'abaany', 'abaŋ', 'abenh', 'abeŋ', 'aber', 'abeer', 'abëër',
    'abeeric', 'aberŋic', 'abuɔ̈c', 'abuɔk', 'abuɔɔk', 'abuɔ̈k', 'abur',
    'acut', 'acuut', 'acuth', 'ago', 'agook', 'agol', 'akɔ̈r', 'akɔrcok',
    'akuny', 'akuŋɛŋ'
)

# Ensure lexeme order is randomised
random.seed(5)
random_lexemes = tuple(random.sample(ordered_lexemes_tuple, len(ordered_lexemes_tuple)))
random_lexemes

# Sort randomised tuple of Dinka lexemes
sorted_lexemes = elcol.sorted_(random_lexemes, lang)
sorted_lexemes
Pandas dataframes
ddf = pd.read_csv("../word_frequency/unilex/din.txt", sep='\t', skiprows=range(2, 5))
random_ddf = ddf.sample(frac=1)
sorted_ddf = elcol.sorted_(random_ddf, lang, random_ddf['Form'])
sorted_ddf.head(30)
Pandas series
random_words = random_ddf['Form']
sorted_words = elcol.sorted_(random_words, lang, random_words)
What is DCT (discrete cosine transformation)?

- This notebook creates arbitrary consumption functions on both 1-dimensional and 2-dimensional grids and illustrates how DCT approximates the full-grid function at different levels of accuracy.
- It is used in the [DCT-Copula-Illustration notebook](DCT-Copula-Illustration.ipynb) to plot consumption functions approximated by DCT versus the original consumption function at full grids.
- Written by Tao Wang
- June 19, 2019
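The core property the notebook exploits is energy compaction: for a smooth function, most of the signal lives in a few low-frequency DCT coefficients, so zeroing the rest still reconstructs the function well. A minimal numpy-only sketch (the orthonormal DCT-II matrix is built by hand here so the example does not depend on scipy; the signal mimics the arbitrary consumption function used below):

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    n = np.arange(N)
    M = np.cos(np.pi * np.outer(n, 2 * n + 1) / (2 * N))
    M[0] *= np.sqrt(1.0 / N)
    M[1:] *= np.sqrt(2.0 / N)
    return M

N = 100
x = np.linspace(0, 100, N)
c = x + 50 * np.cos(x * 2 * np.pi / 40)  # smooth "consumption function"

D = dct_matrix(N)
coefs = D @ c                                 # forward DCT
keep = np.argsort(np.abs(coefs))[::-1][:10]   # 10 largest coefficients
compressed = np.zeros(N)
compressed[keep] = coefs[keep]
c_approx = D.T @ compressed                   # inverse DCT (orthonormal: inverse = transpose)

rel_err = np.linalg.norm(c - c_approx) / np.linalg.norm(c)
print(rel_err)  # relative error; should be small because c is smooth
```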
# Setup
def in_ipynb():
    try:
        if str(type(get_ipython())) == "<class 'ipykernel.zmqshell.ZMQInteractiveShell'>":
            return True
        else:
            return False
    except NameError:
        return False

# Determine whether to make the figures inline (for spyder or jupyter)
# vs whatever is the automatic setting that will apply if run from the terminal
if in_ipynb():
    # %matplotlib inline generates a syntax error when run from the shell
    # so do this instead
    get_ipython().run_line_magic('matplotlib', 'inline')
else:
    get_ipython().run_line_magic('matplotlib', 'auto')

# Import tools
import scipy.fftpack as sf  # scipy discrete fourier transform
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy.linalg as lag
from scipy import misc
from matplotlib import cm

## DCT in 1 dimension
grids = np.linspace(0, 100, 100)  # the grids on which the consumption function is defined, i.e. m or k
c = grids + 50*np.cos(grids*2*np.pi/40)  # an arbitrary example of a consumption function

c_dct = sf.dct(c, norm='ortho')  # setting norm='ortho' is important
ind = np.argsort(abs(c_dct))[::-1]  # indices of dct coefficients (absolute value) in descending order

## DCT in 1 dimension for different accuracy levels
fig = plt.figure(figsize=(5, 5))
fig.suptitle('DCT compressed c function with different accuracy levels')
lvl_lst = np.array([0.5, 0.9, 0.99])
plt.plot(c, 'r*', label='c at full grids')

c_dct = sf.dct(c, norm='ortho')
ind = np.argsort(abs(c_dct))[::-1]

for idx in range(len(lvl_lst)):
    i = 1  # find how many coefficients are needed to achieve the target accuracy level
    while lag.norm(c_dct[ind[0:i]].copy())/lag.norm(c_dct) < lvl_lst[idx]:
        i = i + 1
    needed = i
    print("For accuracy level of "+str(lvl_lst[idx])+", "+str(needed)+" basis functions used")
    c_dct_rdc = c_dct.copy()  # zero out the small DCT coefficients before inverting
    c_dct_rdc[ind[needed+1:]] = 0
    c_approx = sf.idct(c_dct_rdc)
    plt.plot(c_approx, label=r'c approx at ${}$'.format(lvl_lst[idx]))
plt.legend(loc=0)

## Blockwise DCT. For illustration but not used in BayerLuetticke.
## But it illustrates how doing dct in more finely divided blocks gives a better approximation
size = c.shape
c_dct = np.zeros(size)
c_approx = np.zeros(size)

fig = plt.figure(figsize=(5, 5))
fig.suptitle('DCT compressed c function with different number of basis funcs')
nbs_lst = np.array([20, 50])
plt.plot(c, 'r*', label='c at full grids')
for i in range(len(nbs_lst)):
    delta = int(size[0]/nbs_lst[i])
    for pos in np.r_[:size[0]:delta]:
        c_dct[pos:(pos+delta)] = sf.dct(c[pos:(pos+delta)], norm='ortho')
        c_approx[pos:(pos+delta)] = sf.idct(c_dct[pos:(pos+delta)])
    plt.plot(c_dct, label=r'Nb of blocks= ${}$'.format(nbs_lst[i]))
plt.legend(loc=0)

# DCT in 2 dimensions
def dct2d(x):
    x0 = sf.dct(x.copy(), axis=0, norm='ortho')
    x_dct = sf.dct(x0.copy(), axis=1, norm='ortho')
    return x_dct

def idct2d(x):
    x0 = sf.idct(x.copy(), axis=1, norm='ortho')
    x_idct = sf.idct(x0.copy(), axis=0, norm='ortho')
    return x_idct

# arbitrarily generate a consumption function at different grid points
grid0 = 20
grid1 = 20
grids0 = np.linspace(0, 20, grid0)
grids1 = np.linspace(0, 20, grid1)

c2d = np.zeros([grid0, grid1])

# create an arbitrary c function at 2-dimensional grids
for i in range(grid0):
    for j in range(grid1):
        c2d[i, j] = grids0[i]*grids1[j] - 50*np.sin(grids0[i]*2*np.pi/40) + 10*np.cos(grids1[j]*2*np.pi/40)

## do dct for 2-dimensional c at full grids
c2d_dct = dct2d(c2d)

## convert the 2d to 1d for easier manipulation
c2d_dct_flt = c2d_dct.flatten(order='F')
ind2d = np.argsort(abs(c2d_dct_flt.copy()))[::-1]  # indices of dct coefficients (absolute value)
                                                   # in descending order

# DCT in 2 dimensions for different levels of accuracy
fig = plt.figure(figsize=(15, 10))
fig.suptitle('DCT compressed c function with different accuracy levels')
lvl_lst = np.array([0.999, 0.99, 0.9, 0.8, 0.5])

ax = fig.add_subplot(2, 3, 1)
ax.imshow(c2d)
ax.set_title(r'$1$')

for idx in range(len(lvl_lst)):
    i = 1
    while lag.norm(c2d_dct_flt[ind2d[:i]].copy())/lag.norm(c2d_dct_flt) < lvl_lst[idx]:
        i += 1
    needed = i
    print("For accuracy level of "+str(lvl_lst[idx])+", "+str(needed)+" basis functions are used")
    c2d_dct_rdc = c2d_dct.copy()
    idx_urv = np.unravel_index(np.sort(ind2d[needed+1:]), (grid0, grid1), order='F')
    c2d_dct_rdc[idx_urv] = 0
    c2d_approx = idct2d(c2d_dct_rdc)
    ax = fig.add_subplot(2, 3, idx+2)
    ax.set_title(r'${}$'.format(lvl_lst[idx]))
    ax.imshow(c2d_approx)

## surface plot of c at full grids and dct approximates with different accuracy levels
fig = plt.figure(figsize=(15, 10))
fig.suptitle('DCT compressed c function in different accuracy levels')
lvl_lst = np.array([0.999, 0.99, 0.9, 0.8, 0.5])

ax = fig.add_subplot(2, 3, 1, projection='3d')
ax.plot_surface(grids0, grids1, c2d, cmap=cm.coolwarm)
ax.set_title(r'$1$')

for idx in range(len(lvl_lst)):
    i = 1
    while lag.norm(c2d_dct_flt[ind2d[:i]].copy())/lag.norm(c2d_dct_flt) < lvl_lst[idx]:
        i += 1
    needed = i
    print("For accuracy level of "+str(lvl_lst[idx])+", "+str(needed)+" basis functions are used")
    c2d_dct_rdc = c2d_dct.copy()
    idx_urv = np.unravel_index(ind2d[needed+1:], (grid0, grid1))
    c2d_dct_rdc[idx_urv] = 0
    c2d_approx = idct2d(c2d_dct_rdc)
    ax = fig.add_subplot(2, 3, idx+2, projection='3d')
    ax.set_title(r'${}$'.format(lvl_lst[idx]))
    ax.plot_surface(grids0, grids1, c2d_approx, cmap=cm.coolwarm)

# surface plot of absolute value of differences between c at full grids and the approximation
fig = plt.figure(figsize=(15, 10))
fig.suptitle('Differences (absolute value) of DCT compressed with c at full grids in different accuracy levels')
lvl_lst = np.array([0.999, 0.99, 0.9, 0.8, 0.5])

ax = fig.add_subplot(2, 3, 1, projection='3d')
c2d_diff = abs(c2d - c2d)
ax.plot_surface(grids0, grids1, c2d_diff, cmap=cm.coolwarm)
ax.set_title(r'$1$')

for idx in range(len(lvl_lst)):
    i = 1
    while lag.norm(c2d_dct_flt[ind2d[:i]].copy())/lag.norm(c2d_dct_flt) < lvl_lst[idx]:
        i += 1
    needed = i
    print("For accuracy level of "+str(lvl_lst[idx])+", "+str(needed)+" basis functions are used")
    c2d_dct_rdc = c2d_dct.copy()
    idx_urv = np.unravel_index(ind2d[needed+1:], (grid0, grid1))
    c2d_dct_rdc[idx_urv] = 0
    c2d_approx = idct2d(c2d_dct_rdc)
    c2d_approx_diff = abs(c2d_approx - c2d)
    ax = fig.add_subplot(2, 3, idx+2, projection='3d')
    ax.set_title(r'${}$'.format(lvl_lst[idx]))
    ax.plot_surface(grids0, grids1, c2d_approx_diff, cmap='OrRd', linewidth=1)
    ax.view_init(20, 90)
For accuracy level of 0.999, 10 basis functions are used For accuracy level of 0.99, 5 basis functions are used For accuracy level of 0.9, 3 basis functions are used For accuracy level of 0.8, 2 basis functions are used For accuracy level of 0.5, 1 basis functions are used
Apache-2.0
REMARKs/BayerLuetticke/notebooks/DCT.ipynb
MridulS/REMARK
CU Woot Math

Method 2 for unsupervised discovery of new behavior traits

1) Convert the response field dictionary into a document
2) Develop word vectors using term frequency - inverse document frequency
3) Use K-Means to cluster the documents
4) Map traits to clusters to validate the technique

In the first results presented to Woot Math, a 100K sample of the entire data set was chosen. In this report, I'll start with the same type of analysis to develop the same heat map. In the meeting, Sean and Brent suggested using just one of the qual_ids, repeating the experiment, and then looking at the samples in clusters without traits. I'll do that in a subsequent analysis.

Part 1. Heat map with 100K sample of all qual_id's
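Step 2 of the method (TF-IDF weighting) can be sketched by hand before handing the work to sklearn's `TfidfVectorizer`. The toy documents and the smoothed-idf variant below are purely illustrative:

```python
import math

# Three toy "response documents" of the kind built from the response field
docs = [
    "bar_3 bar_3 whole_1 correct",
    "bar_3 pieces_4 incorrect",
    "whole_1 pieces_4 pieces_4 incorrect",
]

def tf_idf(docs):
    """Return one {term: tf-idf weight} dict per document (raw tf, smoothed idf)."""
    tokenized = [d.split() for d in docs]
    n_docs = len(tokenized)
    # document frequency: in how many documents does each term appear?
    df = {}
    for tokens in tokenized:
        for term in set(tokens):
            df[term] = df.get(term, 0) + 1
    weights = []
    for tokens in tokenized:
        w = {}
        for term in set(tokens):
            tf = tokens.count(term)
            idf = math.log(n_docs / df[term]) + 1  # smoothed so ubiquitous terms still count
            w[term] = tf * idf
        weights.append(w)
    return weights

weights = tf_idf(docs)
```

Terms that are frequent within a document but rare across documents get the largest weights, which is what lets K-Means separate response styles.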
## Connect to local DB
from pymongo import MongoClient
import pandas as pd
from math import floor, log10

client = MongoClient('localhost', 27017)
print("Setup db access")

#
# Get collections from mongodb
#
#db = client.my_test_db
db = client.test

chunk = 100000
start = 0
end = start + chunk
#responses = db.anon_student_task_responses.find({'correct': False})[start:end]
responses = db.anon_student_task_responses.find()[start:end]
df_responses = pd.DataFrame(list(responses))
print(df_responses.shape)

## Make the documents to be analyzed

## Functions for turning a dictionary into a document
def make_string_from_list(key, elem_list):
    # Append the key to each item in the list
    ans = ''
    for elem in elem_list:
        ans += key + '_' + elem
    return ans

def make_string(elem, key=None, top=True):
    ans = ''
    if not elem:
        return ans
    if top:
        top = False
        top_keys = [True for _ in range(len(elem.keys()))]
    for idx, key in enumerate(elem.keys()):
        if top_keys[idx]:
            top = True
            top_keys[idx] = False
            ans += ' '
        else:
            top = False
        if type(elem[key]) is str or type(elem[key]) is int:
            value = str(elem[key])
            ans += key + '_' + value + ' '
        elif type(elem[key]) is list:
            temp_elem = dict()
            for item in elem[key]:
                temp_elem[key] = item
                # `top` is passed positionally as `key`, but `key` is immediately
                # reassigned by the loop, so the recursive call starts with top=True
                ans += make_string(temp_elem, top)
        elif type(elem[key]) is dict:
            for item_key in elem[key].keys():
                temp_elem = dict()
                temp_elem[item_key] = elem[key][item_key]
                ans += key + '_' + make_string(temp_elem, top)
        elif type(elem[key]) is float:
            sig = 2  # keep two significant digits of float values
            value = elem[key]
            value = round(value, sig - int(floor(log10(abs(value)))) - 1)
            ans += key + '_' + str(value) + ' '
    return ans

# Makes the cut & paste below easier
df3 = df_responses
df3['response_doc'] = df3['response'].map(make_string)
df3['response_doc'] = df3['response_doc'].map(lambda x: x + ' ')
df3['response_doc'] = df3['response_doc'].map(lambda x: x.replace('/', '_'))
df3['response_doc'] = df3['response_doc'] + ' ' + df3['txt']
df3['response_doc'] = df3['response_doc'].map(lambda x: x + ' ')
df3['response_doc'] = df3['response_doc'].map(lambda x: x.replace("\n", ""))
df3['response_doc'] = df3['response_doc'].map(lambda x: x.replace("?", " "))
_____no_output_____
MIT
working/EDA_WM-BrianMc-topics-Method2-heat_map-100-clusters-random62.ipynb
bdmckean/woot_math_analysis
Sample Documents
for idx in range(20): print ("Sample number:", idx, "\n", df3.iloc[idx]['response_doc']) data_samples = df3['response_doc'] n_features = 1000 n_samples = len(data_samples) n_topics = 50 n_top_words = 20 print("Extracting tf-idf features ...") tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2, max_features=n_features, stop_words='english') t0 = time() tfidf = tfidf_vectorizer.fit_transform(data_samples) print("done in %0.3fs." % (time() - t0)) # Number of clusters true_k = 100 km = MiniBatchKMeans(n_clusters=true_k, init='k-means++', n_init=1, init_size=1000, batch_size=1000, random_state=62) print("Clustering with %s" % km) t0 = time() km.fit(tfidf) print("done in %0.3fs" % (time() - t0)) print() print("Top terms per cluster:") order_centroids = km.cluster_centers_.argsort()[:, ::-1] terms = tfidf_vectorizer.get_feature_names() for i in range(true_k): print("Cluster %d:\n" % i, end='') for ind in order_centroids[i, :30]: print(' --- %s\n' % terms[ind], end='') print() df3['cluster_100'] = km.labels_ df3['trait_1'] = df3['behavioral_traits'].apply(lambda x : x[0] if len(x) > 0 else 'None' ) df3['trait_2'] = df3['behavioral_traits'].apply(lambda x : x[1] if len(x) > 1 else 'None' ) df_trait_1 = df3.groupby(['cluster_100', 'trait_1']).size().unstack(fill_value=0) df_trait_2 = df3.groupby(['cluster_100', 'trait_2']).size().unstack(fill_value=0) df_cluster_100 = df3.groupby('cluster_100') df_trait_1.index.rename('cluster_100', inplace=True) df_trait_2.index.rename('cluster_100', inplace=True) df_traits = pd.concat([df_trait_1, df_trait_2], axis=1) df_traits = df_traits.drop('None', axis=1) #df_traits_norm = (df_traits - df_traits.mean()) / (df_traits.max() - df_traits.min()) df_traits_norm = (df_traits / (df_traits.sum()) ) fig = plt.figure(figsize=(18.5, 16)) cmap = sns.cubehelix_palette(light=.95, as_cmap=True) sns.heatmap(df_traits_norm, cmap=cmap, linewidths=.5) #sns.heatmap(df_traits_norm, cmap="YlGnBu", linewidths=.5)
_____no_output_____
MIT
working/EDA_WM-BrianMc-topics-Method2-heat_map-100-clusters-random62.ipynb
bdmckean/woot_math_analysis
Analysis of stock prices using PCA / Notebook 3

In this notebook we will study the dimensionality of stock price sequences, and show that they lie between the 1D of smooth functions and the 2D of rapidly varying functions.

Benoit Mandelbrot and Richard Hudson wrote a book titled [The Misbehavior of Markets: A Fractal View of Financial Turbulence](https://www.amazon.com/gp/product/0465043577?ie=UTF8&tag=trivisonno-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=0465043577). In this book they demonstrate that financial sequences have a fractal dimension that is higher than one. In other words, the changes in stock prices are more similar to a random walk than to a smooth differentiable curve.

In this notebook we will estimate the fractal dimension of sequences corresponding to the log of the price of a stock. We will do the same for some other, non-random sequences.

We will use the [Box Counting](https://en.wikipedia.org/wiki/Box_counting) method to estimate the dimension.

Box Counting

For the sake of simplicity, let's start with a simple smooth curve corresponding to $\sin(x)$. Intuitively speaking, the dimension of this curve should be 1. Let's see how we measure that using box counting.

The idea is simple: we split the 2D plane into smaller and smaller rectangles and count the number of rectangles that touch the curve. The gridlines in the figure below partition the figure into $16 \times 16 = 256$ rectangles. The yellow shading corresponds to the partition of the figure into $8 \times 8$ rectangles. The green corresponds to the partition into $16\times 16$ (which is the same as the grid). The blue and the red correspond to partitions into $32\times32$ and $64 \times 64$ respectively. You can see that as the boxes get smaller, their number increases.

![Sinusoid](figs/Sinusoid.BoxCount.png)

The dimension is defined by the relation between the size of the rectangles and the number of rectangles that touch the curve.
More precisely, we say that the size of a rectangle in an $n \times n$ partition is $\epsilon=1/n$. We denote by $N(\epsilon)$ the number of rectangles of size $\epsilon$ that touch the curve. Then if $d$ is the dimension, the relationship between $N(\epsilon)$ and $\epsilon$ is

$$N(\epsilon) = \frac{C}{\epsilon^d}$$

for some constant $C$. Taking $\log$s of both sides we get

$$(1)\;\;\;\;\;\;\;\;\;\;\;\;\log N(\epsilon) = \log C + d \log \frac{1}{\epsilon}$$

We can use this equation to estimate $d$ as follows: let $\epsilon_1 \gg \epsilon_2$ be two sizes that are far apart (say $\epsilon_1=1/4$ and $\epsilon_2=1/1024$), and let $N(\epsilon_1),N(\epsilon_2)$ be the corresponding box counts. Then by taking the difference of Equation (1) between the two sizes we get the estimate

$$ d \approx \frac{\log N(\epsilon_1) - \log N(\epsilon_2)}{\log \epsilon_2- \log \epsilon_1}$$

Note that this is an estimate: it depends on the particular values of $\epsilon_1$ and $\epsilon_2$. We can refer to it as the "dimension" if we get the same number for any choice of the two sizes (as well as other details such as the extent of the function).

Here are similar figures for the sequences of two stocks:

![AMZN](figs/AMZN.BoxCount.png)
![IBM](figs/IBM.BoxCount.png)
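As a sanity check on this estimator, here is a minimal pure-Python sketch (the helper `box_count_line` is hypothetical, written just for this illustration) that applies it to the straight line $y = x$, whose dimension should come out as exactly 1:

```python
import math

def box_count_line(n, samples=100000):
    """Count the n-by-n grid boxes touched by the curve y = x on [0, 1]."""
    boxes = set()
    for k in range(samples + 1):
        t = k / samples
        i = min(int(t * n), n - 1)  # clamp the right/top boundary into the last box
        boxes.add((i, i))           # y = x, so the x- and y-box indices coincide
    return len(boxes)

# Two well-separated box sizes: eps1 = 1/4 and eps2 = 1/1024
n1, n2 = 4, 1024
N1, N2 = box_count_line(n1), box_count_line(n2)
eps1, eps2 = 1 / n1, 1 / n2

# The estimate from Equation (1)
d = (math.log(N1) - math.log(N2)) / (math.log(eps2) - math.log(eps1))
```

A diagonal line touches exactly $n$ boxes of an $n \times n$ grid, so the estimate is 1 for any pair of sizes; stock price sequences give a larger value.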
import findspark findspark.init() from pyspark import SparkContext #sc.stop() sc = SparkContext(master="local[3]") from pyspark.sql import * sqlContext = SQLContext(sc) %pylab inline import numpy as np df=sqlContext.read.csv('../Data/SP500.csv',header='true',inferSchema='true') df.count() columns=df.columns col=[c for c in columns if '_P' in c] tickers=[a[:-2] for a in col] tickers[:10],len(tickers) def get_seq(ticker): key=ticker+"_P" L=df.select(key).collect() L=[x[key] for x in L if not x[key] is None] return L
_____no_output_____
MIT
Code/3.Dimensionality.ipynb
vnnsrk/Understanding-stock-data---Field-clustering-and-dimensionality
We generate graphs like the ones below for your analysis of dimensionality on the stocks ![Graph for Analysing Stocks](figs/plots.png)
pickleFile="Tester/Dimensionality.pkl"
_____no_output_____
MIT
Code/3.Dimensionality.ipynb
vnnsrk/Understanding-stock-data---Field-clustering-and-dimensionality
Finding Dimension

We find the dimension for a particular ticker using its sequence of data.

Sample Input:
```python
dimension = Box_count([sequence of AAPL], 'AAPL')
```

Sample Output: dimension = 1.28
from scipy.optimize import curve_fit
import numpy as np

def f(x, A, Df):
    '''
    User-defined function for scipy.optimize.curve_fit(),
    which will find optimal values for A and Df.
    '''
    return Df * x + A

def count_boxes(PriceSequence, n):
    length = len(PriceSequence)
    PriceSequence = [np.log(p) for p in PriceSequence]  # box-count the log of the price
    maxP = max(PriceSequence)
    minP = min(PriceSequence)
    full_x = np.linspace(0, length, n + 1).tolist()
    full_y = np.linspace(minP, maxP, n + 1).tolist()
    x_spacing = full_x[1] - full_x[0]
    y_spacing = full_y[1] - full_y[0]
    counts = np.zeros((n, n))
    boxpoints = n + 1
    for i in range(length - 1):
        (x1, x2) = (i, i + 1)
        (y1, y2) = (PriceSequence[i], PriceSequence[i + 1])
        # walk along the segment between consecutive points and mark touched boxes
        xPoints = np.linspace(x1, x2, boxpoints).tolist()
        yPoints = np.linspace(y1, y2, boxpoints).tolist()
        for j in range(boxpoints):
            xindex = int(xPoints[j] / x_spacing)
            yindex = int((yPoints[j] - minP) / y_spacing) - 1
            if counts[xindex][yindex] == 0:
                counts[xindex][yindex] = 1
    return np.sum(counts)

def Box_count(LL, ticker):
    dimension = 0.0
    r = np.array([2.0**i for i in range(0, 10)])  # r = 1/epsilon
    N = np.array([count_boxes(LL, int(ri)) for ri in r])
    popt, pcov = curve_fit(f, np.log(r), np.log(N))
    Lacunarity, dimension = popt
    return dimension
_____no_output_____
MIT
Code/3.Dimensionality.ipynb
vnnsrk/Understanding-stock-data---Field-clustering-and-dimensionality
PySDDR: An Advanced Tutorial

In the beginner's guide only tabular data was used as input to the PySDDR framework. In this advanced tutorial we show the effects of combining structured and unstructured data. Currently, the framework only supports images as unstructured data.

We will use the MNIST dataset as a source for the unstructured data and generate additional tabular features corresponding to those. Our outcome in this tutorial is simulated based on linear and non-linear effects of tabular data and a linear effect of the number shown on the MNIST image. Our model is not provided with the (true) number, but instead has to learn the number effect from the image (together with the structured data effects):

\begin{equation*}
y = \sin(x_1) - 3x_2 + x_3^4 + 3\cdot number + \epsilon
\end{equation*}

with $\epsilon \sim \mathcal{N}(0, \sigma^2)$, where $number$ is the number on the MNIST image.

The aim of training is for the model to be able to output a latent effect representing the number depicted in the MNIST image.

We start by importing the sddr module and other required libraries
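The simulated outcome above can be sketched as follows; this is an illustration of the data-generating process only, not the script that produced the tutorial's `tab.csv`:

```python
import math
import random

random.seed(1)

def simulate_y(x1, x2, x3, number, sigma=1.0):
    """One draw of y = sin(x1) - 3*x2 + x3**4 + 3*number + eps, with eps ~ N(0, sigma^2)."""
    eps = random.gauss(0.0, sigma)
    return math.sin(x1) - 3 * x2 + x3**4 + 3 * number + eps

# With the noise switched off, the structured and number effects are exact:
y_clean = simulate_y(0.0, 0.0, 0.0, number=2, sigma=0.0)  # only the 3*number term survives
```

The `3*number` term is what the deep network has to recover from the image alone, since the number itself never appears in the tabular features.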
# import the sddr module from sddr import Sddr import torch import torch.nn as nn import torch.optim as optim import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns #set seeds for reproducibility torch.manual_seed(1) np.random.seed(1)
_____no_output_____
MIT
tutorials/AdvancedTutorial.ipynb
felixGer/PySDDR
User inputs

First the user defines the data to be used. The data is loaded and, if it does not already exist, a column is added to the tabular data describing the correspondence between unstructured and structured data. In the example below we add a column where each item contains the name of the image to which the current row of tabular data corresponds.
data_path = '../data/mnist_data/tab.csv' data = pd.read_csv(data_path,delimiter=',') # append a column for the numbers: each data point contains a file name of the corresponding image for i in data.index: data.loc[i,'numbers'] = f'img_{i}.jpg'
_____no_output_____
MIT
tutorials/AdvancedTutorial.ipynb
felixGer/PySDDR
Next the distribution, formulas and training parameters are defined. The size of each image is ```28x28```, so our neural network has a layer which flattens the input, followed by a linear layer with an input size of ```28x28``` and an output size of ```128```. Finally, this is followed by a ```ReLU``` activation.

Here the unstructured data is not pre-loaded, as it would typically be too large to load in one step. Therefore the path to the directory in which it is stored is provided, along with the data type (for now only 'images' is supported). The images are then loaded in batches using PyTorch's dataloader. Note that here again the key given in the ```unstructured_data``` dictionary must match the name it is given in the formula, in this case ```'numbers'```. Similarly the keys of the ```deep_models_dict``` must also match the names in the formula, in this case ```'dnn'```
# define distribution and the formula for the distibutional parameter distribution = 'Normal' formulas = {'loc': '~ -1 + spline(x1, bs="bs", df=10) + x2 + dnn(numbers) + spline(x3, bs="bs", df=10)', 'scale': '~1' } # define the deep neural networks' architectures and output shapes used in the above formula deep_models_dict = { 'dnn': { 'model': nn.Sequential(nn.Flatten(1, -1), nn.Linear(28*28,128), nn.ReLU()), 'output_shape': 128}, } # define your training hyperparameters train_parameters = { 'batch_size': 8000, 'epochs': 1000, 'degrees_of_freedom': {'loc':9.6, 'scale':9.6}, 'optimizer' : optim.Adam, 'val_split': 0.15, 'early_stop_epsilon': 0.001, 'dropout_rate': 0.01 } # provide the location and datatype of the unstructured data unstructured_data = { 'numbers' : { 'path' : '../data/mnist_data/mnist_images', 'datatype' : 'image' } } # define output directory output_dir = './outputs'
_____no_output_____
MIT
tutorials/AdvancedTutorial.ipynb
felixGer/PySDDR
Initialization

The sddr instance is initialized with the parameters given by the user in the previous step:
sddr = Sddr(output_dir=output_dir, distribution=distribution, formulas=formulas, deep_models_dict=deep_models_dict, train_parameters=train_parameters, )
Using device: cpu
MIT
tutorials/AdvancedTutorial.ipynb
felixGer/PySDDR
Training

The sddr network is trained with the data defined above and the loss curve is plotted.
sddr.train(structured_data=data, target="y_gen", unstructured_data = unstructured_data, plot=True)
Beginning training ... Train Epoch: 0 Training Loss: 129.044235 Train Epoch: 0 Validation Loss: 143.731430 Train Epoch: 100 Training Loss: 98.628090 Train Epoch: 100 Validation Loss: 118.442505 Train Epoch: 200 Training Loss: 72.697281 Train Epoch: 200 Validation Loss: 106.068893 Train Epoch: 300 Training Loss: 53.885902 Train Epoch: 300 Validation Loss: 97.472977 Train Epoch: 400 Training Loss: 40.945545 Train Epoch: 400 Validation Loss: 91.278023 Train Epoch: 500 Training Loss: 32.393982 Train Epoch: 500 Validation Loss: 85.700958 Train Epoch: 600 Training Loss: 26.009539 Train Epoch: 600 Validation Loss: 81.085602 Train Epoch: 700 Training Loss: 21.401140 Train Epoch: 700 Validation Loss: 76.584694 Train Epoch: 800 Training Loss: 18.019514 Train Epoch: 800 Validation Loss: 74.260246 Train Epoch: 900 Training Loss: 15.354483 Train Epoch: 900 Validation Loss: 71.126083
MIT
tutorials/AdvancedTutorial.ipynb
felixGer/PySDDR
Evaluation - Visualizing the partial effects

In this case the data is assumed to follow a normal distribution, so two distributional parameters, loc and scale, need to be estimated. Below we plot the partial effects of each smooth term.

Remember the partial effects are computed by: partial effect = smooth_features * coefs (weights)

In other words, the smoothing terms are multiplied with the weights of the Structured Head. We use the partial effects to interpret whether our model has learned correctly.
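The formula partial effect = smooth_features * coefs is just a matrix-vector product; a tiny sketch with invented numbers:

```python
# Rows: data points; columns: spline basis functions evaluated at those points
smooth_features = [
    [1.0, 0.5, 0.0],
    [0.0, 1.0, 0.5],
    [0.0, 0.5, 1.0],
]
coefs = [2.0, -1.0, 0.5]  # weights learned by the structured head (made up here)

# One partial-effect value per data point
partial_effect = [sum(f * w for f, w in zip(row, coefs)) for row in smooth_features]
```

Plotting `partial_effect` against the corresponding feature values is exactly what `sddr.eval` does for each smooth term.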
partial_effects_loc = sddr.eval('loc',plot=True) partial_effects_scale = sddr.eval('scale',plot=True)
Nothing to plot. No (non-)linear partial effects specified for this parameter. (Deep partial effects are not plotted.)
MIT
tutorials/AdvancedTutorial.ipynb
felixGer/PySDDR
As we can see, the distributional parameter loc has two partial effects: one sinusoidal and one quartic-shaped, matching the $\sin(x_1)$ and $x_3^4$ terms of the simulation. The parameter scale, as expected, has no partial effect since its formula only includes an intercept.

Next we retrieve our ground truth data and compare it with the model's estimation
# compare prediction of neural network with ground truth data_pred = data.loc[:,:] ground_truth = data.loc[:,'y_gen'] # predict returns partial effects and a distributional layer that gives statistical information about the prediction distribution_layer, partial_effect = sddr.predict(data_pred, clipping=True, plot=False, unstructured_data = unstructured_data) # retrieve the mean and variance of the distributional layer predicted_mean = distribution_layer.loc[:,:].T predicted_variance = distribution_layer.scale[0] # and plot the result plt.scatter(ground_truth, predicted_mean) print(f"Predicted variance for first sample: {predicted_variance}")
Predicted variance for first sample: tensor([1.3674])
MIT
tutorials/AdvancedTutorial.ipynb
felixGer/PySDDR
The comparison shows that for most samples the predicted and true values are directly proportional.

Next we want to check whether the model learned the correct correspondence between images and numbers
# we create a copy of our original structured data where we set all inputs but the images to be zero data_pred_copy = data.copy() data_pred_copy.loc[:,'x1'] = 0 data_pred_copy.loc[:,'x2'] = 0 data_pred_copy.loc[:,'x3'] = 0 # and make a prediction using only the images distribution_layer, partial_effect = sddr.predict(data_pred_copy, clipping=True, plot=False, unstructured_data = unstructured_data) # add the predicted mean value to our tabular data data_pred_copy['predicted_number'] = distribution_layer.loc[:,:].numpy().flatten() # and compare the true number on the images with the predicted number ax = sns.boxplot(x="y_true", y="predicted_number", data=data_pred_copy) ax.set_xlabel("true number"); ax.set_ylabel("predicted latent effect of number");
_____no_output_____
MIT
tutorials/AdvancedTutorial.ipynb
felixGer/PySDDR
Simple Linear Regression

Estimated time needed: **15** minutes

Objectives

After completing this lab you will be able to:

* Use scikit-learn to implement simple Linear Regression
* Create a model, train it, test it and use the model

Importing Needed packages
import matplotlib.pyplot as plt import pandas as pd import pylab as pl import numpy as np %matplotlib inline
_____no_output_____
MIT
9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb
iamshivprakash/IBMDataScience
Downloading Data

To download the data, we will use !wget to download it from IBM Object Storage.
!wget -O FuelConsumption.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv
_____no_output_____
MIT
9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb
iamshivprakash/IBMDataScience
**Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)

Understanding the Data

`FuelConsumption.csv`:

We have downloaded a fuel consumption dataset, **`FuelConsumption.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. [Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01)

* **MODELYEAR** e.g. 2014
* **MAKE** e.g. Acura
* **MODEL** e.g. ILX
* **VEHICLE CLASS** e.g. SUV
* **ENGINE SIZE** e.g. 4.7
* **CYLINDERS** e.g. 6
* **TRANSMISSION** e.g. A6
* **FUEL CONSUMPTION in CITY (L/100 km)** e.g. 9.9
* **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9
* **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2
* **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0

Reading the data in
df = pd.read_csv("FuelConsumption.csv") # take a look at the dataset df.head()
_____no_output_____
MIT
9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb
iamshivprakash/IBMDataScience
Data Exploration

Let's first have a descriptive exploration of our data.
# summarize the data df.describe()
_____no_output_____
MIT
9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb
iamshivprakash/IBMDataScience
Let's select some features to explore more.
cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']] cdf.head(9)
_____no_output_____
MIT
9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb
iamshivprakash/IBMDataScience
We can plot each of these features:
viz = cdf[['CYLINDERS','ENGINESIZE','CO2EMISSIONS','FUELCONSUMPTION_COMB']] viz.hist() plt.show()
_____no_output_____
MIT
9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb
iamshivprakash/IBMDataScience
Now, let's plot each of these features against the Emission, to see how linear their relationship is:
plt.scatter(cdf.FUELCONSUMPTION_COMB, cdf.CO2EMISSIONS, color='blue') plt.xlabel("FUELCONSUMPTION_COMB") plt.ylabel("Emission") plt.show() plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue') plt.xlabel("Engine size") plt.ylabel("Emission") plt.show()
_____no_output_____
MIT
9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb
iamshivprakash/IBMDataScience
Practice

Plot **CYLINDERS** vs the Emission, to see how linear their relationship is:
# write your code here
_____no_output_____
MIT
9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb
iamshivprakash/IBMDataScience
Click here for the solution

```python
plt.scatter(cdf.CYLINDERS, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Cylinders")
plt.ylabel("Emission")
plt.show()
```

Creating train and test dataset

Train/Test Split involves splitting the dataset into training and testing sets that are mutually exclusive, after which you train with the training set and test with the testing set. This provides a more accurate evaluation of out-of-sample accuracy, because the testing dataset is not part of the dataset that was used to train the model. It therefore gives us a better understanding of how well our model generalizes to new data.

This means that we know the outcome of each data point in the testing dataset, making it great to test with! Since this data has not been used to train the model, the model has no knowledge of the outcome of these data points. So, in essence, it is truly out-of-sample testing.

Let's split our dataset into train and test sets. 80% of the entire dataset will be used for training and 20% for testing. We create a mask to select random rows using the **np.random.rand()** function:
msk = np.random.rand(len(df)) < 0.8 train = cdf[msk] test = cdf[~msk]
_____no_output_____
MIT
9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb
iamshivprakash/IBMDataScience
Simple Regression Model

Linear Regression fits a linear model with coefficients B = (B1, ..., Bn) to minimize the 'residual sum of squares' between the actual value y in the dataset and the predicted value yhat, using a linear approximation.

Train data distribution
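Before using sklearn, it may help to see what "minimizing the residual sum of squares" gives in closed form for a single feature — a pure-Python sketch for illustration only:

```python
# For one feature, minimizing the residual sum of squares has a closed form:
#   slope = cov(x, y) / var(x),  intercept = mean(y) - slope * mean(x)
def least_squares_line(x, y):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    slope = cov / var
    intercept = my - slope * mx
    return slope, intercept

# Toy data lying exactly on y = 2x recovers slope 2 and intercept 0
slope, intercept = least_squares_line([1, 2, 3, 4], [2, 4, 6, 8])
```

sklearn's `LinearRegression` computes the same quantities (generalized to many features) when we call `fit` below.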
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue') plt.xlabel("Engine size") plt.ylabel("Emission") plt.show()
_____no_output_____
MIT
9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb
iamshivprakash/IBMDataScience
Modeling

Using the sklearn package to model the data.
from sklearn import linear_model regr = linear_model.LinearRegression() train_x = np.asanyarray(train[['ENGINESIZE']]) train_y = np.asanyarray(train[['CO2EMISSIONS']]) regr.fit(train_x, train_y) # The coefficients print ('Coefficients: ', regr.coef_) print ('Intercept: ',regr.intercept_)
_____no_output_____
MIT
9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb
iamshivprakash/IBMDataScience
As mentioned before, **Coefficient** and **Intercept** in the simple linear regression are the parameters of the fit line. Given that it is a simple linear regression with only 2 parameters, and knowing that the parameters are the intercept and slope of the line, sklearn can estimate them directly from our data. Notice that all of the data must be available to traverse and calculate the parameters.

Plot outputs

We can plot the fit line over the data:
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue') plt.plot(train_x, regr.coef_[0][0]*train_x + regr.intercept_[0], '-r') plt.xlabel("Engine size") plt.ylabel("Emission")
_____no_output_____
MIT
9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb
iamshivprakash/IBMDataScience
Evaluation

We compare the actual and predicted values to calculate the accuracy of a regression model. Evaluation metrics play a key role in the development of a model, as they provide insight into areas that require improvement.

There are different model evaluation metrics; let's use MSE here to calculate the accuracy of our model based on the test set:

* Mean Absolute Error (MAE): the mean of the absolute value of the errors. This is the easiest of the metrics to understand, since it is just the average error.
* Mean Squared Error (MSE): the mean of the squared errors. It is more popular than Mean Absolute Error because it focuses on large errors: squaring amplifies larger errors relative to smaller ones.
* Root Mean Squared Error (RMSE): the square root of the MSE, expressed in the same units as the target.
* R-squared is not an error, but rather a popular metric to measure the performance of your regression model. It represents how close the data points are to the fitted regression line. The higher the R-squared value, the better the model fits your data. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse).
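These definitions can be checked on a toy example (the numbers below are made up purely for illustration):

```python
import math

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.9]
n = len(y_true)

errors = [yp - yt for yp, yt in zip(y_pred, y_true)]
mae = sum(abs(e) for e in errors) / n          # Mean Absolute Error
mse = sum(e**2 for e in errors) / n            # Mean Squared Error
rmse = math.sqrt(mse)                          # Root Mean Squared Error

# R-squared: 1 minus (residual sum of squares / total sum of squares)
mean_y = sum(y_true) / n
ss_res = sum(e**2 for e in errors)
ss_tot = sum((yt - mean_y)**2 for yt in y_true)
r2 = 1 - ss_res / ss_tot
```

The code cell below computes the same quantities for the fitted model, using `r2_score` from sklearn instead of the manual R-squared formula.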
from sklearn.metrics import r2_score test_x = np.asanyarray(test[['ENGINESIZE']]) test_y = np.asanyarray(test[['CO2EMISSIONS']]) test_y_ = regr.predict(test_x) print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y))) print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2)) print("R2-score: %.2f" % r2_score(test_y , test_y_) )
_____no_output_____
MIT
9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb
iamshivprakash/IBMDataScience
Exercise

Let's see what the evaluation metrics are if we train a regression model using the `FUELCONSUMPTION_COMB` feature.

Start by selecting `FUELCONSUMPTION_COMB` as the train_x data from the `train` dataframe, then select `FUELCONSUMPTION_COMB` as the test_x data from the `test` dataframe
train_x = #ADD CODE test_x = #ADD CODE
_____no_output_____
MIT
9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb
iamshivprakash/IBMDataScience
Click here for the solution

```python
train_x = train[["FUELCONSUMPTION_COMB"]]
test_x = test[["FUELCONSUMPTION_COMB"]]
```

Now train a Linear Regression Model using the `train_x` you created and the `train_y` created previously
regr = linear_model.LinearRegression() #ADD CODE
_____no_output_____
MIT
9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb
iamshivprakash/IBMDataScience
Click here for the solution

```python
regr = linear_model.LinearRegression()
regr.fit(train_x, train_y)
```

Find the predictions using the model's `predict` function and the `test_x` data
predictions = #ADD CODE
_____no_output_____
MIT
9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb
iamshivprakash/IBMDataScience
Click here for the solution

```python
predictions = regr.predict(test_x)
```

Finally use the `predictions` and the `test_y` data and find the Mean Absolute Error value using the `np.absolute` and `np.mean` functions as done previously
#ADD CODE
_____no_output_____
MIT
9.Machine Learning with Python/ML0101EN-Reg-Simple-Linear-Regression-Co2.ipynb
iamshivprakash/IBMDataScience
March 18 Notes

Fitting data to models

1. Build a model
2. Create a "fitness function", i.e. something that returns a scalar "distance" between the model and the data
3. Apply an "optimizer" to get the best-fit parameters
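A toy, units-free version of these three steps, with a hand-rolled grid search standing in for scipy's optimizer (data values invented for illustration):

```python
# 1. the model: a line through the origin with one free parameter
def model(x, a):
    return a * x

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 2.1, 3.9, 6.0]  # noisy data generated (by hand) near a = 2

# 2. fitness function: sum of squared residuals between model and data
def cost(a):
    return sum((y - model(x, a)) ** 2 for x, y in zip(xs, ys))

# 3. a crude optimizer: scan candidate slopes and keep the one with the lowest cost
candidates = [i / 100 for i in range(0, 401)]  # a in [0, 4] with step 0.01
best_a = min(candidates, key=cost)
```

The notebook below follows the same pattern, with a unit-aware Gaussian as the model and `scipy.optimize.minimize` as the optimizer.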
from astropy import units as u
import numpy as np

def gaussian_model(xarr, amplitude, offset, width):
    amplitude = u.Quantity(amplitude, u.K)
    offset = u.Quantity(offset, u.km/u.s)
    width = u.Quantity(width, u.km/u.s)
    xarr = u.Quantity(xarr, u.km/u.s)
    return amplitude * np.exp(-(xarr-offset)**2/(2.*width**2))

x = 5
u.Quantity(x, u.km/u.s)

x = 5 * u.m/u.s
u.Quantity(x, u.km/u.s)

xarr = np.linspace(-5, 5, 50) * u.km/u.s
gaussian_model(xarr, 1*u.K, 0.5*u.km/u.s, 2000*u.m/u.s)

%matplotlib inline
import pylab as pl

pl.plot(xarr, gaussian_model(xarr, 1, 0.5, 2))

from specutils.io import fits
spec = fits.read_fits_spectrum1d('gbt_1d.fits')

pl.plot(spec.velocity, spec.flux, 'k-')

model = gaussian_model(spec.velocity, amplitude=5*u.K, offset=5*u.km/u.s, width=5*u.km/u.s)
pl.plot(spec.velocity, spec.flux, 'k-')
pl.plot(spec.velocity, model, 'b-')

spec.flux * u.K

def cost_function(params, data_range=None):
    # slice the data and the velocity axis consistently so the shapes match
    if data_range is not None:
        data = spec.flux[data_range]
        velocity = spec.velocity[data_range]
    else:
        data = spec.flux
        velocity = spec.velocity
    return (((data * u.K) - gaussian_model(velocity, *params))**2).sum().value

params = (1, 2, 3)

def f(a, b, c):
    print("a={0}, b={1}, c={2}".format(a, b, c))

f(1, 2, 3)
f(*params)

cost_function((5*u.K, 5*u.km/u.s, 5*u.km/u.s))

from scipy.optimize import minimize
result = minimize(cost_function, (5, 5, 5), args=(slice(100, 200),))
result

(amplitude, offset, width) = result.x
best_fit_model = gaussian_model(spec.velocity, *result.x)
pl.plot(spec.velocity, spec.flux, 'k-')
pl.plot(spec.velocity, best_fit_model, 'r-')
pl.xlim(-30, 30)
_____no_output_____
BSD-3-Clause
notebooks/MarchApril2016_TutorialSession/Notebook - March 18 - Part 1.ipynb
ESO-python/ESOPythonTutorials