| repo_name | path | license | cells | types |
|---|---|---|---|---|
rsignell-usgs/python-training
|
web-services/reading_netCDF.ipynb
|
cc0-1.0
|
[
"Reading netCDF data\n\nrequires numpy and netCDF/HDF5 C libraries.\nGithub site: https://github.com/Unidata/netcdf4-python\nOnline docs: http://unidata.github.io/netcdf4-python/\nBased on Konrad Hinsen's old Scientific.IO.NetCDF API, with lots of added netcdf version 4 features.\nDeveloped by Jeff Whitaker at NOAA, with many contributions from users.\n\nInteractively exploring a netCDF File\nLet's explore a netCDF file from the Atlantic Real-Time Ocean Forecast System\nfirst, import netcdf4-python and numpy",
"import netCDF4\nimport numpy as np",
"Create a netCDF4.Dataset object\n\nf is a Dataset object, representing an open netCDF file.\nprinting the object gives you summary information, similar to ncdump -h.",
"f = netCDF4.Dataset('data/rtofs_glo_3dz_f006_6hrly_reg3.nc')\nprint(f) ",
"Access a netCDF variable\n\nvariable objects stored by name in variables dict.\nprint the variable yields summary info (including all the attributes).\nno actual data read yet (just have a reference to the variable object with metadata).",
"print(f.variables.keys()) # get all variable names\ntemp = f.variables['temperature'] # temperature variable\nprint(temp) ",
"List the Dimensions\n\nAll variables in a netCDF file have an associated shape, specified by a list of dimensions.\nLet's list all the dimensions in this netCDF file.\nNote that the MT dimension is special (unlimited), which means it can be appended to.",
"for d in f.dimensions.items():\n print(d)",
"Each variable has a dimensions and a shape attribute.",
"temp.dimensions\n\ntemp.shape",
"Each dimension typically has a variable associated with it (called a coordinate variable).\n\nCoordinate variables are 1D variables that have the same name as dimensions.\nCoordinate variables and auxiliary coordinate variables (named by the coordinates attribute) locate values in time and space.",
"mt = f.variables['MT']\ndepth = f.variables['Depth']\nx,y = f.variables['X'], f.variables['Y']\nprint(mt)\nprint(x) ",
"Accessing data from a netCDF variable object\n\nnetCDF variables objects behave much like numpy arrays.\nslicing a netCDF variable object returns a numpy array with the data.\nBoolean array and integer sequence indexing behaves differently for netCDF variables than for numpy arrays. Only 1-d boolean arrays and integer sequences are allowed, and these indices work independently along each dimension (similar to the way vector subscripts work in fortran).",
"time = mt[:] # Reads the netCDF variable MT, array of one element\nprint(time) \n\ndpth = depth[:] # examine depth array\nprint(dpth) \n\nxx,yy = x[:],y[:]\nprint('shape of temp variable: %s' % repr(temp.shape))\ntempslice = temp[0, dpth > 400, yy > yy.max()/2, xx > xx.max()/2]\nprint('shape of temp slice: %s' % repr(tempslice.shape))",
"What is the sea surface temperature and salinity at 50N, 140W?\nFinding the latitude and longitude indices of 50N, 140W\n\nThe X and Y dimensions don't look like longitudes and latitudes\nUse the auxilary coordinate variables named in the coordinates variable attribute, Latitude and Longitude",
"lat, lon = f.variables['Latitude'], f.variables['Longitude']\nprint(lat)",
"Aha! So we need to find array indices iy and ix such that Latitude[iy, ix] is close to 50.0 and Longitude[iy, ix] is close to -140.0 ...",
"# extract lat/lon values (in degrees) to numpy arrays\nlatvals = lat[:]; lonvals = lon[:] \n# a function to find the index of the point closest pt\n# (in squared distance) to give lat/lon value.\ndef getclosest_ij(lats,lons,latpt,lonpt):\n # find squared distance of every point on grid\n dist_sq = (lats-latpt)**2 + (lons-lonpt)**2 \n # 1D index of minimum dist_sq element\n minindex_flattened = dist_sq.argmin() \n # Get 2D index for latvals and lonvals arrays from 1D index\n return np.unravel_index(minindex_flattened, lats.shape)\niy_min, ix_min = getclosest_ij(latvals, lonvals, 50., -140)",
"Now we have all the information we need to find our answer.\n|----------+--------|\n| Variable | Index |\n|----------+--------|\n| MT | 0 |\n| Depth | 0 |\n| Y | iy_min |\n| X | ix_min |\n|----------+--------|\nWhat is the sea surface temperature and salinity at the specified point?",
"sal = f.variables['salinity']\n# Read values out of the netCDF file for temperature and salinity\nprint('%7.4f %s' % (temp[0,0,iy_min,ix_min], temp.units))\nprint('%7.4f %s' % (sal[0,0,iy_min,ix_min], sal.units))",
"Remote data access via openDAP\n\nRemote data can be accessed seamlessly with the netcdf4-python API\nAccess happens via the DAP protocol and DAP servers, such as TDS.\nmany formats supported, like GRIB, are supported \"under the hood\".\n\nThe following example showcases some nice netCDF features:\n\nWe are seamlessly accessing remote data, from a TDS server.\nWe are seamlessly accessing GRIB2 data, as if it were netCDF data.\nWe are generating metadata on-the-fly.",
"import datetime\ndate = datetime.datetime.now()\n# build URL for latest synoptic analysis time\nURL = 'http://thredds.ucar.edu/thredds/dodsC/grib/NCEP/GFS/Global_0p5deg/GFS_Global_0p5deg_%04i%02i%02i_%02i%02i.grib2/GC' %\\\n(date.year,date.month,date.day,6*(date.hour//6),0)\n# keep moving back 6 hours until a valid URL found\nvalidURL = False; ncount = 0\nwhile (not validURL and ncount < 10):\n print(URL)\n try:\n gfs = netCDF4.Dataset(URL)\n validURL = True\n except RuntimeError:\n date -= datetime.timedelta(hours=6)\n ncount += 1 \n\n# Look at metadata for a specific variable\n# gfs.variables.keys() will show all available variables.\nsfctmp = gfs.variables['Temperature_surface']\n# get info about sfctmp\nprint(sfctmp)\n# print coord vars associated with this variable\nfor dname in sfctmp.dimensions: \n print(gfs.variables[dname])",
"Missing values\n\nwhen data == var.missing_value somewhere, a masked array is returned.\nillustrate with soil moisture data (only defined over land)\nwhite areas on plot are masked values over water.",
"soilmvar = gfs.variables['Volumetric_Soil_Moisture_Content_depth_below_surface_layer']\n# flip the data in latitude so North Hemisphere is up on the plot\nsoilm = soilmvar[0,0,::-1,:] \nprint('shape=%s, type=%s, missing_value=%s' % \\\n (soilm.shape, type(soilm), soilmvar.missing_value))\nimport matplotlib.pyplot as plt\n%matplotlib inline\ncs = plt.contourf(soilm)",
"Packed integer data\nThere is a similar feature for variables with scale_factor and add_offset attributes.\n\nshort integer data will automatically be returned as float data, with the scale and offset applied. \n\nDealing with dates and times\n\ntime variables usually measure relative to a fixed date using a certain calendar, with units specified like hours since YY:MM:DD hh-mm-ss.\nnum2date and date2num convenience functions provided to convert between these numeric time coordinates and handy python datetime instances. \ndate2index finds the time index corresponding to a datetime instance.",
"from netCDF4 import num2date, date2num, date2index\ntimedim = sfctmp.dimensions[0] # time dim name\nprint('name of time dimension = %s' % timedim)\ntimes = gfs.variables[timedim] # time coord var\nprint('units = %s, values = %s' % (times.units, times[:]))\n\ndates = num2date(times[:], times.units)\nprint([date.strftime('%Y-%m-%d %H:%M:%S') for date in dates[:10]]) # print only first ten...",
"Get index associated with a specified date, extract forecast data for that date.",
"from datetime import datetime, timedelta\ndate = datetime.now() + timedelta(days=3)\nprint(date)\nntime = date2index(date,times,select='nearest')\nprint('index = %s, date = %s' % (ntime, dates[ntime]))",
"Get temp forecast for Boulder (near 40N, -105W)\n\nuse function getcloses_ij we created before...",
"lats, lons = gfs.variables['lat'][:], gfs.variables['lon'][:]\n# lats, lons are 1-d. Make them 2-d using numpy.meshgrid.\nlons, lats = np.meshgrid(lons,lats)\nj, i = getclosest_ij(lats,lons,40,-105)\nfcst_temp = sfctmp[ntime,j,i]\nprint('Boulder forecast valid at %s UTC = %5.1f %s' % \\\n (dates[ntime],fcst_temp,sfctmp.units))",
"Simple multi-file aggregation\nWhat if you have a bunch of netcdf files, each with data for a different year, and you want to access all the data as if it were in one file?",
"!ls -l data/prmsl*nc",
"MFDataset uses file globbing to patch together all the files into one big Dataset.\nYou can also pass it a list of specific files.\nLimitations:\n\nIt can only aggregate the data along the leftmost dimension of each variable.\nonly works with NETCDF3, or NETCDF4_CLASSIC formatted files.\nkind of slow.",
"mf = netCDF4.MFDataset('data/prmsl*nc')\ntimes = mf.variables['time']\ndates = num2date(times[:],times.units)\nprint('starting date = %s' % dates[0])\nprint('ending date = %s'% dates[-1])\nprmsl = mf.variables['prmsl']\nprint('times shape = %s' % times.shape)\nprint('prmsl dimensions = %s, prmsl shape = %s' %\\\n (prmsl.dimensions, prmsl.shape))",
"Closing your netCDF file\nIt's good to close netCDF files, but not actually necessary when Dataset is open for read access only.",
"f.close()\ngfs.close()",
"That's it!\nNow you're ready to start exploring your data interactively.\nTo be continued with Writing netCDF data ...."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Juanlu001/Charla-PyConES15-poliastro
|
Going to Mars with Python in 5 minutes.ipynb
|
mit
|
[
"A Marte con Python usando poliastro\n<img src=\"http://poliastro.github.io/_images/logo_text.svg\" width=\"70%\" />\nJuan Luis Cano Rodríguez juanlu@pybonacci.org\n2016-04-09 PyData Madrid 2016\n...en 5 minutos :)\nWarning: This is rocket science!\n¿Qué es la Astrodinámica?\n\nUna rama de la Mecánica (a su vez una rama de la Física) que estudia problemas prácticos acerca del movimiento de cohetes y otros vehículos en el espacio\n\n\n¿Qué es poliastro?\n\nUna biblioteca de puro Python para Astrodinámica\n\nhttp://poliastro.github.io/\n¡Vamos a Marte!",
"%matplotlib notebook\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\nimport astropy.units as u\nfrom astropy import time\n\nfrom poliastro import iod\nfrom poliastro.plotting import plot\nfrom poliastro.bodies import Sun, Earth\nfrom poliastro.twobody import State\nfrom poliastro import ephem\n\nfrom jplephem.spk import SPK\nephem.download_kernel(\"de421\")",
"Primero: definir la órbita",
"r = [-6045, -3490, 2500] * u.km\nv = [-3.457, 6.618, 2.533] * u.km / u.s\n\nss = State.from_vectors(Earth, r, v)\n\nwith plt.style.context('pybonacci'):\n plot(ss)",
"Segundo: localiza los planetas",
"epoch = time.Time(\"2015-06-21 16:35\")\n\nr_, v_ = ephem.planet_ephem(ephem.EARTH, epoch)\n\nr_\n\nv_.to(u.km / u.s)",
"Tercero: Calcula la trayectoria",
"date_launch = time.Time('2011-11-26 15:02', scale='utc')\ndate_arrival = time.Time('2012-08-06 05:17', scale='utc')\ntof = date_arrival - date_launch\nr0, _ = ephem.planet_ephem(ephem.EARTH, date_launch)\nr, _ = ephem.planet_ephem(ephem.MARS, date_arrival)\n\n(v0, v), = iod.lambert(Sun.k, r0, r, tof)\n\nv0\n\nv",
"...y es Python puro!\nTruco: numba\n\nCuarto: ¡vamos a Marte!",
"def go_to_mars(offset=500., tof_=6000.):\n # Initial data\n N = 50\n\n date_launch = time.Time('2016-03-14 09:31', scale='utc') + ((offset - 500.) * u.day)\n date_arrival = time.Time('2016-10-19 16:00', scale='utc') + ((offset - 500.) * u.day)\n tof = tof_ * u.h\n\n # Calculate vector of times from launch and arrival Julian days\n jd_launch = date_launch.jd\n jd_arrival = jd_launch + tof.to(u.day).value\n jd_vec = np.linspace(jd_launch, jd_arrival, num=N)\n\n times_vector = time.Time(jd_vec, format='jd')\n rr_earth, vv_earth = ephem.planet_ephem(ephem.EARTH, times_vector)\n rr_mars, vv_mars = ephem.planet_ephem(ephem.MARS, times_vector)\n # Compute the transfer orbit!\n r0 = rr_earth[:, 0]\n rf = rr_mars[:, -1]\n\n (va, vb), = iod.lambert(Sun.k, r0, rf, tof)\n\n ss0_trans = State.from_vectors(Sun, r0, va, date_launch)\n ssf_trans = State.from_vectors(Sun, rf, vb, date_arrival)\n # Extract whole orbit of Earth, Mars and transfer (for plotting)\n rr_trans = np.zeros_like(rr_earth)\n rr_trans[:, 0] = r0\n for ii in range(1, len(jd_vec)):\n tof = (jd_vec[ii] - jd_vec[0]) * u.day\n rr_trans[:, ii] = ss0_trans.propagate(tof).r\n\n # Better compute backwards\n jd_init = (date_arrival - 1 * u.year).jd\n jd_vec_rest = np.linspace(jd_init, jd_launch, num=N)\n\n times_rest = time.Time(jd_vec_rest, format='jd')\n rr_earth_rest, _ = ephem.planet_ephem(ephem.EARTH, times_rest)\n rr_mars_rest, _ = ephem.planet_ephem(ephem.MARS, times_rest)\n # Plot figure\n # To add arrows:\n # https://github.com/matplotlib/matplotlib/blob/master/lib/matplotlib/streamplot.py#L140\n fig = plt.figure(figsize=(10, 10))\n ax = fig.add_subplot(111, projection='3d')\n\n def plot_body(ax, r, color, size, border=False, **kwargs):\n \"\"\"Plots body in axes object.\n\n \"\"\"\n return ax.plot(*r[:, None], marker='o', color=color, ms=size, mew=int(border), **kwargs)\n\n # I like color\n color_earth0 = '#3d4cd5'\n color_earthf = '#525fd5'\n color_mars0 = '#ec3941'\n color_marsf = '#ec1f28'\n color_sun = 
'#ffcc00'\n color_orbit = '#888888'\n color_trans = '#444444'\n\n # Plotting orbits is easy!\n ax.plot(*rr_earth.to(u.km).value, color=color_earth0)\n ax.plot(*rr_mars.to(u.km).value, color=color_mars0)\n ax.plot(*rr_trans.to(u.km).value, color=color_trans)\n\n ax.plot(*rr_earth_rest.to(u.km).value, ls='--', color=color_orbit)\n ax.plot(*rr_mars_rest.to(u.km).value, ls='--', color=color_orbit)\n\n # But plotting planets feels even magical!\n plot_body(ax, np.zeros(3), color_sun, 16)\n\n plot_body(ax, r0.to(u.km).value, color_earth0, 8)\n plot_body(ax, rr_earth[:, -1].to(u.km).value, color_earthf, 8)\n\n plot_body(ax, rr_mars[:, 0].to(u.km).value, color_mars0, 8)\n plot_body(ax, rf.to(u.km).value, color_marsf, 8)\n\n # Add some text\n ax.text(-0.75e8, -3.5e8, -1.5e8, \"ExoMars mission:\\nfrom Earth to Mars\",\n size=20, ha='center', va='center', bbox={\"pad\": 30, \"lw\": 0, \"fc\": \"w\"})\n ax.text(r0[0].to(u.km).value * 2.4, r0[1].to(u.km).value * 0.4, r0[2].to(u.km).value * 1.25,\n \"Earth at launch\\n({})\".format(date_launch.to_datetime().strftime(\"%d %b\")),\n ha=\"left\", va=\"bottom\", backgroundcolor='#ffffff')\n ax.text(rf[0].to(u.km).value * 1.1, rf[1].to(u.km).value * 1.1, rf[2].to(u.km).value,\n \"Mars at arrival\\n({})\".format(date_arrival.to_datetime().strftime(\"%d %b\")),\n ha=\"left\", va=\"top\", backgroundcolor='#ffffff')\n ax.text(-1.9e8, 8e7, 1e8, \"Transfer\\norbit\", ha=\"right\", va=\"center\", backgroundcolor='#ffffff')\n\n # Tune axes\n ax.set_xlim(-3e8, 3e8)\n ax.set_ylim(-3e8, 3e8)\n ax.set_zlim(-3e8, 3e8)\n\n # And finally!\n ax.view_init(30, 260)\n plt.show()\n #fig.savefig(\"trans_30_260.png\", bbox_inches='tight')\n #return fig, ax\n\ngo_to_mars()",
"Quinto: ¡¡Hagámoslo interactivo!!!1!",
"%matplotlib inline\nfrom ipywidgets import interactive\nfrom IPython.display import display\n\nw = interactive(go_to_mars, offset=(0., 1000.), tof_=(100., 12000.))\ndisplay(w)",
"http://poliastro.github.io\n¡Mil gracias!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
CalPolyPat/phys202-2015-work
|
assignments/assignment04/MatplotlibEx01.ipynb
|
mit
|
[
"Matplotlib Exercise 1\nImports",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np",
"Line plot of sunspot data\nDownload the .txt data for the \"Yearly mean total sunspot number [1700 - now]\" from the SILSO website. Upload the file to the same directory as this notebook.",
"import os\nassert os.path.isfile('yearssn.dat')",
"Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.",
"data = np.loadtxt('yearssn.dat')\nyears = data[:,0]\nssc = data[:,1]\n\nassert len(years)==315\nassert years.dtype==np.dtype(float)\nassert len(ssc)==315\nassert ssc.dtype==np.dtype(float)",
"Make a line plot showing the sunspot count as a function of year.\n\nCustomize your plot to follow Tufte's principles of visualizations.\nAdjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.\nCustomize the box, grid, spines and ticks to match the requirements of this data.",
"f = plt.figure(figsize = (20, 2))\nplt.plot(years, ssc, \"b-\")\nplt.box(False)\nplt.xticks(np.linspace(1700, 2015, 5, dtype = int))\nplt.yticks(np.linspace(0, 150, 3, dtype = int))\nplt.xlabel(\"Year\")\nplt.ylabel(\"# of Sunspots\")\nplt.title(\"# of Sunspots Per Year\")\n\nassert True # leave for grading",
"Describe the choices you have made in building this visualization and how they make it effective.\nI removed the box and grid because nobody cares exactly how many sunspots there were, all that matters is the oscilitory behavior. This is plainly shown here. The ticks give a good measure of time, it is easy to estimate the number of years between peaks. As well, the y axis shows the order of magnitude well.\nNow make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above:\n\nCustomize your plot to follow Tufte's principles of visualizations.\nAdjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.\nCustomize the box, grid, spines and ticks to match the requirements of this data.",
"f = plt.figure(figsize = (15,8))\nplt.subplot(4,1,1)\nplt.plot(years[:100], ssc[:100], \"b-\")\nplt.box(False)\nplt.xticks(np.linspace(1700, 1800, 5, dtype = int))\nplt.yticks(np.linspace(0, 150, 3, dtype = int))\nplt.xlabel(\"Year\")\nplt.ylabel(\"# of Sunspots\")\nplt.title(\"# of Sunspots Per Year from 1700-1800\")\n\nplt.subplot(4,1,2)\nplt.plot(years[100:200], ssc[100:200], \"b-\")\nplt.box(False)\nplt.xticks(np.linspace(1800, 1900, 5, dtype = int))\nplt.yticks(np.linspace(0, 150, 3, dtype = int))\nplt.xlabel(\"Year\")\nplt.ylabel(\"# of Sunspots\")\nplt.title(\"# of Sunspots Per Year from 1800-1900\")\n\nplt.subplot(4,1,3)\nplt.plot(years[200:300], ssc[200:300], \"b-\")\nplt.box(False)\nplt.xticks(np.linspace(1900, 2000, 5, dtype = int))\nplt.yticks(np.linspace(0, 150, 3, dtype = int))\nplt.xlabel(\"Year\")\nplt.ylabel(\"# of Sunspots\")\nplt.title(\"# of Sunspots Per Year from 1900-2000\")\n\nplt.subplot(4,1,4)\nplt.plot(years[300:], ssc[300:], \"b-\")\nplt.box(False)\nplt.xticks(np.linspace(2000, 2100, 5, dtype = int))\nplt.yticks(np.linspace(0, 150, 3, dtype = int))\nplt.xlabel(\"Year\")\nplt.ylabel(\"# of Sunspots\")\nplt.title(\"# of Sunspots Per Year from 2000-2015\")\n\nplt.tight_layout()\n\nassert True # leave for grading"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jorisvandenbossche/DS-python-data-analysis
|
notebooks/pandas_01_data_structures.ipynb
|
bsd-3-clause
|
[
"<p><font size=\"6\"><b>01 - Pandas: Data Structures </b></font></p>\n\n\n© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons",
"import pandas as pd\nimport matplotlib.pyplot as plt",
"Introduction\nLet's directly start with importing some data: the titanic dataset about the passengers of the Titanic and their survival:",
"df = pd.read_csv(\"data/titanic.csv\")\n\ndf.head()",
"Starting from reading such a tabular dataset, Pandas provides the functionalities to answer questions about this data in a few lines of code. Let's start with a few examples as illustration:\n<div class=\"alert alert-warning\">\n\n- What is the age distribution of the passengers?\n\n</div>",
"df['Age'].hist()",
"<div class=\"alert alert-warning\">\n\n <ul>\n <li>How does the survival rate of the passengers differ between sexes?</li>\n</ul> \n\n</div>",
"df.groupby('Sex')[['Survived']].mean()",
"<div class=\"alert alert-warning\">\n\n <ul>\n <li>Or how does the survival rate differ between the different classes of the Titanic?</li>\n</ul> \n\n</div>",
"df.groupby('Pclass')['Survived'].mean().plot.bar()",
"<div class=\"alert alert-warning\">\n\n <ul>\n <li>Are young people (e.g. < 25 years) likely to survive?</li>\n</ul> \n\n</div>",
"df['Survived'].mean()\n\ndf25 = df[df['Age'] <= 25]\ndf25['Survived'].mean()",
"All the needed functionality for the above examples will be explained throughout the course, but as a start: the data types to work with.\nThe pandas data structures: DataFrame and Series\nTo load the pandas package and start working with it, we first import the package. The community agreed alias for pandas is pd, which we will also use here:",
"import pandas as pd",
"Let's start with getting some data. \nIn practice, most of the time you will import the data from some data source (text file, excel, database, ..), and Pandas provides functions for many different formats. \nBut to start here, let's create a small dataset about a few countries manually from a dictionar of lists:",
"data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],\n 'population': [11.3, 64.3, 81.3, 16.9, 64.9],\n 'area': [30510, 671308, 357050, 41526, 244820],\n 'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}\ncountries = pd.DataFrame(data)\ncountries",
"The object created here is a DataFrame:",
"type(countries)",
"A DataFrame is a 2-dimensional, tablular data structure comprised of rows and columns. It is similar to a spreadsheet, a database (SQL) table or the data.frame in R.\n<img align=\"center\" width=50% src=\"../img/pandas/01_table_dataframe1.svg\">\nA DataFrame can store data of different types (including characters, integers, floating point values, categorical data and more) in columns. In pandas, we can check the data types of the columns with the dtypes attribute:",
"countries.dtypes",
"Each column in a DataFrame is a Series\nWhen selecting a single column of a pandas DataFrame, the result is a pandas Series, a 1-dimensional data structure. \nTo select the column, use the column label in between square brackets [].",
"countries['population']\n\ns = countries['population']\ntype(s)",
"Pandas objects have attributes and methods\nPandas provides a lot of functionalities for the DataFrame and Series. The .dtypes shown above is an attribute of the DataFrame. Another example is the .columns attribute, returning the column names of the DataFrame:",
"countries.columns",
"In addition, there are also functions that can be called on a DataFrame or Series, i.e. methods. As methods are functions, do not forget to use parentheses ().\nA few examples that can help exploring the data:",
"countries.head() # Top rows\n\ncountries.tail() # Bottom rows",
"The describe method computes summary statistics for each column:",
"countries['population'].describe()",
"Sorting your data by a specific column is another important first-check:",
"countries.sort_values(by='population')",
"The plot method can be used to quickly visualize the data in different ways:",
"countries.plot()",
"However, for this dataset, it does not say that much:",
"countries['population'].plot.barh() # or .plot(kind='barh')",
"<div class=\"alert alert-success\">\n\n**EXERCISE 1**:\n\n* You can play with the `kind` keyword or accessor of the `plot` method in the figure above: 'line', 'bar', 'hist', 'density', 'area', 'pie', 'scatter', 'hexbin', 'box'\n\nNote: doing `df.plot(kind=\"bar\", ...)` or `df.plot.bar(...)` is exactly equivalent. You will see both ways in the wild.\n\n</div>\n\n<div style=\"border: 5px solid #3776ab; border-radius: 2px; padding: 2em;\">\n\n## Python recap\n\nPython objects have **attributes** and **methods**:\n\n* Attribute: `obj.attribute` (no parentheses!) -> property of the object (pandas examples: `dtypes`, `columns`, `shape`, ..)\n* Method: `obj.method()` (function call with parentheses) -> action (pandas examples: `mean()`, `sort_values()`, ...)\n\n</div>\n\nImporting and exporting data\nA wide range of input/output formats are natively supported by pandas:\n\nCSV, text\nSQL database\nExcel\nHDF5\njson\nhtml\npickle\nsas, stata\nParquet\n...",
"# pd.read_\n\n# countries.to_",
"<div class=\"alert alert-info\">\n\n**Note: I/O interface**\n\n* All readers are `pd.read_...`\n* All writers are `DataFrame.to_...`\n\n</div>\n\nApplication on a real dataset\nThroughout the pandas notebooks, many of exercises will use the titanic dataset. This dataset has records of all the passengers of the Titanic, with characteristics of the passengers (age, class, etc. See below), and an indication whether they survived the disaster.\nThe available metadata of the titanic data set provides the following information:\nVARIABLE | DESCRIPTION\n------ | --------\nSurvived | Survival (0 = No; 1 = Yes)\nPclass | Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)\nName | Name\nSex | Sex\nAge | Age\nSibSp | Number of Siblings/Spouses Aboard\nParch | Number of Parents/Children Aboard\nTicket | Ticket Number\nFare | Passenger Fare\nCabin | Cabin\nEmbarked | Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)\n<div class=\"alert alert-success\">\n\n**EXERCISE 2**:\n\n* Read the CSV file (available at `data/titanic.csv`) into a pandas DataFrame. Call the result `df`.\n\n</div>",
"# %load _solutions/pandas_01_data_structures1.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 3**:\n\n* Quick exploration: show the first 5 rows of the DataFrame.\n\n</div>",
"# %load _solutions/pandas_01_data_structures2.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 4**:\n\n* How many records (i.e. rows) has the titanic dataset?\n\n<details><summary>Hints</summary>\n\n* The length of a DataFrame gives the number of rows (`len(..)`). Alternatively, you can check the \"shape\" (number of rows, number of columns) of the DataFrame using the `shape` attribute. \n\n</details>\n</div>",
"# %load _solutions/pandas_01_data_structures3.py",
"<div class=\"alert alert-success\">\n <b>EXERCISE 5</b>:\n\n* Select the 'Age' column (remember: we can use the [] indexing notation and the column label).\n\n</div>",
"# %load _solutions/pandas_01_data_structures4.py",
"<div class=\"alert alert-success\">\n <b>EXERCISE 6</b>:\n\n* Make a box plot of the Fare column.\n\n</div>",
"# %load _solutions/pandas_01_data_structures5.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 7**:\n\n* Sort the rows of the DataFrame by 'Age' column, with the oldest passenger at the top. Check the help of the `sort_values` function and find out how to sort from the largest values to the lowest values\n\n</div>",
"# %load _solutions/pandas_01_data_structures6.py",
"Acknowledgement\n\nThis notebook is partly based on material of Jake Vanderplas (https://github.com/jakevdp/OsloWorkshop2014)."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dereneaton/ipyrad
|
testdocs/analysis/cookbook-mb-ipcoal.ipynb
|
gpl-3.0
|
[
"<h1><span style=\"color:gray\">ipyrad-analysis toolkit:</span> mrbayes</h1>\n\nIn these analyses our interest is primarily in inferring accurate branch lengths under a relaxed molecular clock model. This means that tips are forced to line up at the present (time) but that rates of substitutions are allowed to vary among branches to best explain the variation in the sequence data. \nThere is a huge range of models that can be employed using mrbayes by employing different combinations of parameter settings, model definitions, and prior settings. The ipyrad-analysis tool here is intended to make it easy to run such jobs many times (e.g., distributed in parallel) once you have decided on your settings. In addition, we provide a number of pre-set models (e.g., clock_model=2) that may be useful for simple scenarios. \nHere we use simulations to demonstrate the accuracy of branch length estimation when sequences come from a single versus multiple distinct genealogies (e.g., gene tree vs species tree), and show an option to fix the topology to speed up analyses when your only interest is to estimate branch lengths.",
"# conda install ipyrad -c conda-forge -c bioconda\n# conda install mrbayes -c conda-forge -c bioconda\n# conda install ipcoal -c conda-forge\n\nimport toytree\nimport ipcoal\nimport ipyrad.analysis as ipa",
"Simulate a gene tree with 14 tips and MRCA of 1M generations",
"TREE = toytree.rtree.bdtree(ntips=8, b=0.8, d=0.2, seed=123)\nTREE = TREE.mod.node_scale_root_height(1e6)\nTREE.draw(ts='o', layout='d', scalebar=True);",
"Simulate sequences on single gene tree and write to NEXUS\nWhen Ne is greater the gene tree is more likely to deviate from the species tree topology and branch lengths. By setting recombination rate to 0 there will only be one true underlying genealogy for the gene tree. We set nsamples=2 because we want to simulate diploid individuals.",
"# init simulator\nmodel = ipcoal.Model(TREE, Ne=2e4, nsamples=2, recomb=0)\n\n# simulate sequence data on coalescent genealogies\nmodel.sim_loci(nloci=1, nsites=20000)\n\n# write results to database file\nmodel.write_concat_to_nexus(name=\"mbtest-1\", outdir='/tmp', diploid=True)\n\n# the simulated genealogy of haploid alleles\ngene = model.df.genealogy[0]\n\n# draw the genealogy\ntoytree.tree(gene).draw(ts='o', layout='d', scalebar=True);",
"View an example locus\nThis shows the 2 haploid samples simulated for each tip in the species tree.",
"model.draw_seqview(idx=0, start=0, end=50);",
"(1) Infer a tree under a relaxed molecular clock model",
"# init the mb object\nmb = ipa.mrbayes(\n data=\"/tmp/mbtest-1.nex\",\n name=\"itest-1\",\n workdir=\"/tmp\",\n clock_model=2,\n constraints=TREE,\n ngen=int(1e6),\n nruns=2,\n)\n\n# modify a parameter\nmb.params.clockratepr = \"normal(0.01,0.005)\"\nmb.params.samplefreq = 5000\n\n# summary of priors/params\nprint(mb.params)\n\n# start the run\nmb.run(force=True)\n\n# load the inferred tree\nmbtre = toytree.tree(\"/tmp/itest-1.nex.con.tre\", 10)\n\n# scale root node to 1e6\nmbtre = mbtre.mod.node_scale_root_height(1e6)\n\n# draw inferred tree \nc, a, m = mbtre.draw(ts='o', layout='d', scalebar=True);\n\n# draw TRUE tree in orange on the same axes\nTREE.draw(\n axes=a, \n ts='o', \n layout='d', \n scalebar=True, \n edge_colors=\"darkorange\",\n node_sizes=0,\n fixed_order=mbtre.get_tip_labels(),\n);\n\n# check convergence statistics\nmb.convergence_stats",
"(2) Concatenated sequences from a species tree\nHere we use concatenated sequence data from 100 loci where each represents one or more distinct genealogies. In addition, Ne is increased to 1e5, allowing for more genealogical variation. We expect the accuracy of estimated edge lengths will decrease since we are not adequately modeling the genealogical variation when using concatenation. Here I set the recombination rate within loci to be zero. There is free recombination among loci, however, since they are unlinked.",
"# init simulator\nmodel = ipcoal.Model(TREE, Ne=1e5, nsamples=2, recomb=0)\n\n# simulate sequence data on coalescent genealogies\nmodel.sim_loci(nloci=100, nsites=200)\n\n# write results to database file\nmodel.write_concat_to_nexus(name=\"mbtest-2\", outdir='/tmp', diploid=True)\n\n# the simulated genealogies of haploid alleles\ngenes = model.df.genealogy[:4]\n\n# draw the genealogies of the first four loci\ntoytree.mtree(genes).draw(ts='o', layout='r', height=250);\n\n# init the mb object\nmb = ipa.mrbayes(\n data=\"/tmp/mbtest-2.nex\",\n workdir=\"/tmp\",\n name=\"itest-2\",\n clock_model=2,\n constraints=TREE,\n ngen=int(1e6),\n nruns=2,\n)\n\n# summary of priors/params\nprint(mb.params)\n\n# start the run\nmb.run(force=True)\n\n# load the inferred tree\nmbtre = toytree.tree(\"/tmp/itest-2.nex.con.tre\", 10)\n\n# scale root node from unitless to 1e6\nmbtre = mbtre.mod.node_scale_root_height(1e6)\n\n# draw inferred tree \nc, a, m = mbtre.draw(ts='o', layout='d', scalebar=True);\n\n# draw true tree in orange on the same axes\nTREE.draw(\n axes=a, \n ts='o', \n layout='d', \n scalebar=True, \n edge_colors=\"darkorange\",\n node_sizes=0,\n fixed_order=mbtre.get_tip_labels(),\n);\n\nmb.convergence_stats",
"To see the NEXUS file (data, parameters, priors):",
"mb.print_nexus_string()",
"(3) Tree inference (not fixed topology) and plotting support values\nHere we will try to infer the topology from a concatenated data set (i.e., not set a constraint on the topology). I increased the ngen setting since the MCMC chain takes longer to converge when searching over topology space. Take note that the support values from mrbayes newick files are available in the \"prob{percent}\" feature, as shown below.",
"# init the mb object\nmb = ipa.mrbayes(\n data=\"/tmp/mbtest-2.nex\",\n name=\"itest-3\",\n workdir=\"/tmp\",\n clock_model=2,\n ngen=int(2e6),\n nruns=2,\n)\n\n# summary of priors/params\nprint(mb.params)\n\n# start run\nmb.run(force=True)",
"The tree topology was correctly inferred:",
"# load the inferred tree\nmbtre = toytree.tree(\"/tmp/itest-3.nex.con.tre\", 10)\n\n# scale root node from unitless to 1e6\nmbtre = mbtre.mod.node_scale_root_height(1e6)\n\n# draw inferred tree \nc, a, m = mbtre.draw(\n layout='d',\n scalebar=True, \n node_sizes=18, \n node_labels=\"prob{percent}\",\n);",
"The branch lengths are not very accurate in this case:",
"# load the inferred tree\nmbtre = toytree.tree(\"/tmp/itest-3.nex.con.tre\", 10)\n\n# scale root node from unitless to 1e6\nmbtre = mbtre.mod.node_scale_root_height(1e6)\n\n# draw inferred tree \nc, a, m = mbtre.draw(ts='o', layout='d', scalebar=True);\n\n# draw true tree in orange on the same axes\nTREE.draw(\n axes=a, \n ts='o', \n layout='d', \n scalebar=True, \n edge_colors=\"darkorange\",\n node_sizes=0,\n fixed_order=mbtre.get_tip_labels(),\n);"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
KUrushi/knocks
|
chapter2/UNIX command.ipynb
|
mit
|
[
"hightemp.txt is a file recording the highest temperatures observed in Japan, stored in tab-separated format with the columns \"prefecture\", \"location\", \"℃\", and \"date\". Write a program for each of the following tasks and run it with hightemp.txt as input. Then perform the same processing with UNIX commands and check your program's output against it.",
"10. Count the lines\nCount the number of lines. Use the wc command for verification.",
"with open(\"hightemp.txt\") as f:\n count = len(f.readlines())\nprint(count)",
"wc\nCounts and prints the number of bytes, words, and lines in a file.\nTokens separated by whitespace are treated as words.\nOutput: lines words bytes\nwc [-clw] [--bytes] [--chars] [--lines] [--words] [file]\nOptions\n\n-c, --bytes, --chars print only the byte count \n-w, --words print only the word count\n-l, --lines print only the line count\nfile the file to process",
"%%bash\nwc -l hightemp.txt",
"11. Replace tabs with spaces\nReplace each tab character with one space character. Use the sed, tr, or expand command for verification.",
"def replace_tab2space(file):\n with open(file) as f:\n for i in f.readlines():\n print(i.strip('\\n').replace('\\t', ' '))\n\nreplace_tab2space('hightemp.txt')",
"expand\nConverts tabs to spaces.\nexpand [-i, --initial] [-t NUMBER, --tabs=NUMBER] [FILE...]\nOptions\n\n-i, --initial do not convert tabs that follow non-blank characters\n-t NUMBER, --tabs=NUMBER set the tab width to NUMBER (default: 8)\nFILE the text file to process",
"%%bash\nexpand -t 1 hightemp.txt",
"12. Save column 1 to col1.txt and column 2 to col2.txt\nExtract only the first column of each line into col1.txt and only the second column into col2.txt. Use the cut command for verification.",
"def write_col(col):\n with open(\"hightemp.txt\", 'r') as f:\n writing = [i.split('\\t')[col-1]+\"\\n\" for i in f.readlines()]\n \n with open('col{}.txt'.format(col), 'w') as f:\n f.write(\"\".join(writing))\n\nwrite_col(1)\nwrite_col(2)",
"cut\nExtracts a portion of each line of a text file.\ncut [-b byte-list] [-c character-list] [-d delim] [-f field-list] [-s] [file...]\nOptions\n\n-b, --bytes byte-list print only the bytes at the positions in byte-list\n-c, --characters character-list print only the characters at the positions in character-list\n-d, --delimiter delim set the field delimiter (tab by default)\n-f, --fields field-list print only the fields listed in field-list\n-s, --only-delimited skip lines that contain no field delimiter\nfile the text file to process",
"%%bash \ncut -f 1 hightemp.txt > cut_col1.txt\ncut -f 2 hightemp.txt > cut_col2.txt",
"13. Merge col1.txt and col2.txt\nCombine the col1.txt and col2.txt created in task 12 into a single text file whose lines hold the first and second columns of the original file separated by a tab. Use the paste command for verification.",
"with open('col1.txt', 'r') as f1:\n col1 = [i.strip('\\n') for i in f1.readlines()]\n \nwith open('col2.txt', 'r') as f2:\n col2 = [i.strip('\\n') for i in f2.readlines()]\n\nwriting = \"\"\nfor i in range(len(col1)):\n writing += col1[i] + '\\t' + col2[i] + '\\n'\n\nwith open('marge.txt', 'w') as f:\n f.write(writing)\n",
"paste\nConcatenates files horizontally.\npaste [OPTION] [FILE]\nOptions\n\n-d, --delimiters=LIST use the characters in LIST instead of tabs as delimiters\n-s, --serial paste one file at a time instead of in parallel\nFILE the files to concatenate",
"%%bash\npaste col1.txt col2.txt > paste_marge.txt",
"14. Print the first N lines\nTake a natural number N via a command-line argument (or similar) and print only the first N lines of the input. Use the head command for verification.",
"def head(N):\n    with open('marge.txt') as f:\n        return \"\".join(f.readlines()[:N])\nprint(head(3))",
"head\nPrints the beginning of a file.\nhead [-c N[bkm]] [-n N] [-qv] [--bytes=N[bkm]] [--lines=N] [--quiet] [--silent] [--verbose] [file...]\nOptions\n\n-c N, --bytes N print the first N bytes of the file; appending b, k, or m to N multiplies it by 512, 1024, or 1048576, respectively\n-n N, --lines N print the first N lines of the file\n-q, --quiet, --silent never print file name headers\n-v, --verbose always print file name headers\nfile the file to print",
"%%bash\nhead -n 3 marge.txt ",
"15. Print the last N lines\nTake a natural number N via a command-line argument (or similar) and print only the last N lines of the input. Use the tail command for verification.",
"def tail(N):\n    with open('marge.txt') as f:\n        return \"\".join(f.readlines()[-N:])\nprint(tail(3))",
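Task 15 asks for verification with the tail command, but the notebook has no %%bash cell for it. A self-contained sketch following the same pattern (the inline sample file stands in for marge.txt from the earlier tasks):

```shell
# tail -n N prints the last N lines of a file; a small sample file is
# created here so the snippet runs on its own.
printf 'l1\nl2\nl3\nl4\nl5\n' > /tmp/tail_sample.txt
tail -n 3 /tmp/tail_sample.txt
```

In the notebook itself this would simply be a `%%bash` cell running `tail -n 3 marge.txt`.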
"16. Split a file into N parts\nTake a natural number N via a command-line argument (or similar) and split the input file line-wise into N parts. Achieve the same processing with the split command.",
"def split_file(name, N):\n    with open(name, 'r') as f:\n        lines = f.readlines()\n    # ceiling division: number of lines per chunk so that N chunks cover the file\n    size = -(-len(lines) // N)\n    return [\"\".join(lines[i:i + size]) for i in range(0, len(lines), size)]\n\nfor part in split_file(\"marge.txt\", 3):\n    print(part)",
"split\nSplits a file into pieces.\nsplit [-lines] [-l lines] [-b bytes[bkm]] [-C bytes[bkm]] [--lines=lines] [--bytes=bytes[bkm]] [--line-bytes=bytes[bkm]] [infile [outfile-prefix]]\nOptions\n\n-lines, -l lines, --lines=lines split the input every lines lines, writing each chunk to an output file\n-b bytes[bkm], --bytes=bytes[bkm] split the input every bytes bytes; appending k or m changes the unit to kilobytes or megabytes\n-C bytes[bkm], --line-bytes=bytes[bkm] put as many complete lines as fit within bytes bytes into each output file\ninfile the input file\noutfile-prefix the base name of the output files; an alphabetic suffix is appended to each one",
"%%bash\nsplit -l 3 marge.txt split_marge.txt",
"17. Distinct strings in the first column\nFind the set of distinct strings (the different string values) appearing in the first column. Use the sort and uniq commands for verification.",
"def kinds_col(file_name):\n    with open(file_name, 'r') as f:\n        return set(line.strip('\\n') for line in f)\n\nprint(kinds_col('col1.txt'))",
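Task 17 calls for checking the result with sort and uniq, and the matching %%bash cell is missing. A self-contained sketch (the inline sample stands in for col1.txt):

```shell
# sort groups identical lines together so that uniq can collapse adjacent
# duplicates, yielding the set of distinct values.
printf 'a\nb\na\nc\nb\n' > /tmp/col1_sample.txt
sort /tmp/col1_sample.txt | uniq
```

In the notebook this would be `sort col1.txt | uniq` (or equivalently `sort -u col1.txt`).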
"18. Sort lines in descending order of the third column\nSort the lines in reverse (descending) order of the numeric value in the third column (note: reorder the lines without modifying their contents). Use the sort command for verification (the result does not have to match the command's output exactly).",
"def sorted_list(filename, col):\n    with open(filename, 'r') as f:\n        return_list = [i.strip(\"\\n\").split('\\t') for i in f.readlines()]\n    # compare the third column numerically rather than as strings\n    return sorted(return_list, key=lambda x: float(x[col]), reverse=True)\n\nprint(sorted_list(\"hightemp.txt\", 2))",
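Task 18's verification with the sort command also lacks a %%bash cell. A self-contained sketch (the inline sample stands in for the tab-separated hightemp.txt):

```shell
# -t sets the field delimiter (a literal tab), -k3,3 restricts the sort key
# to the third field, n compares numerically, and r reverses to descending.
printf 'x\tp\t2.5\ny\tq\t10.1\nz\tr\t7.3\n' > /tmp/sort_sample.tsv
sort -t "$(printf '\t')" -k3,3nr /tmp/sort_sample.tsv
```

In the notebook this would be `sort -t$'\t' -k3,3nr hightemp.txt`.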
"19. Frequency of each string in the first column, sorted by descending frequency\nCompute how often each string appears in the first column and print the strings in descending order of frequency. Use the cut, uniq, and sort commands for verification.",
"def frequency_sort(filename, col):\n from collections import Counter\n with open(filename, 'r') as f:\n return_list = [i.strip(\"\\n\").split('\\t')[col-1] for i in f.readlines()]\n return [i[0] for i in Counter(return_list).most_common()]\n\nprint(frequency_sort(\"hightemp.txt\", 1))"
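Task 19 likewise has no %%bash verification cell. A self-contained sketch of the cut / sort / uniq pipeline (the inline sample stands in for hightemp.txt):

```shell
# cut extracts the first field, sort groups identical values, uniq -c counts
# each group, and the final sort -nr orders the counts from high to low.
printf 'a\tx\nb\ty\na\tz\na\tw\nb\tv\n' > /tmp/freq_sample.tsv
cut -f 1 /tmp/freq_sample.tsv | sort | uniq -c | sort -nr
```

In the notebook this would be `cut -f 1 hightemp.txt | sort | uniq -c | sort -nr`.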
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
YufeiZhang/Principles-of-Programming-Python-3
|
Lectures/Lecture_3/cost_of_operations_on_lists.ipynb
|
gpl-3.0
|
[
"<h1 align=\"center\">Cost of operations on lists</h1>\n\nInserting elements at the end of a list",
"%matplotlib inline\nfrom matplotlib.pyplot import plot\n\nfrom time import time\n\ndata = []\nfor i in range(1000, 50001, 1000):\n L = []\n before = time()\n for _ in range(i):\n L.append(None)\n after = time()\n data.append((i, after - before))\nplot(*tuple(zip(*data)))\nprint()",
"Inserting elements at the beginning of a list",
"%matplotlib inline\nfrom matplotlib.pyplot import plot\n\nfrom time import time\n\ndata = []\nfor i in range(1000, 50001, 1000):\n L = []\n before = time()\n for _ in range(i):\n L.insert(0, None)\n after = time()\n data.append((i, after - before))\nplot(*tuple(zip(*data)))\nprint()",
"Size of a list initialised to a given number of elements",
"%matplotlib inline\nfrom matplotlib.pyplot import plot\n\nfrom sys import getsizeof\n\ndata = []\nfor i in range(1, 51):\n data.append((i, getsizeof([None] * i)))\nplot(*tuple(zip(*data)))\nprint()",
"Size of a list to which elements are appended incrementally",
"%matplotlib inline\nfrom matplotlib.pyplot import plot\n\nfrom sys import getsizeof\n\ndata = []\nL = []\nfor i in range(1, 51):\n L.append(None)\n data.append((i, getsizeof(L)))\nplot(*tuple(zip(*data)))\nprint()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dtamayo/rebound
|
ipython_examples/OrbitalElements.ipynb
|
gpl-3.0
|
[
"Orbital Elements\nNote: All angles for orbital elements are in radians\nWe can add particles to a simulation by specifying cartesian components:",
"import rebound\nsim = rebound.Simulation()\nsim.add(m=1., x=1., vz = 2.)",
"Any components not passed automatically default to 0. REBOUND can also accept orbital elements. \nReference bodies\nAs a reminder, there is a one-to-one mapping between (x,y,z,vx,vy,vz) and orbital elements, and one should always specify what the orbital elements are referenced against (e.g., the central star, the system's barycenter, etc.). Orbital elements referenced to these different centers differ by $\\sim$ the mass ratio of the largest body to the central mass. By default, REBOUND always uses Jacobi elements, which for each particle are always referenced to the center of mass of all particles with lower index in the simulation. \nFor the painstaking user: When separating out the center of mass degree of freedom and reducing the N body problem to N-1 Kepler problems and interaction terms, there are a number of possible Hamiltonian splittings (see e.g., Hernandez & Dehnen 2017), and different possible choices for the primary mass in each of the separate Kepler problems. REBOUND takes this primary mass to be the total mass of all the particles in the simulation. If particles are added from the inside out, this gives logical behavior in the limit of a hierarchical system, even for large masses (one can think of it as setting up our new particle in a 2-body orbit around all the interior mass concentrated at the interior particles' center of mass). \nLet's set up a stellar binary,",
"sim.add(m=1., a=1.)\nsim.status()",
"We always have to pass a semimajor axis (to set a length scale), but any other elements are by default set to 0. Notice that our second star has the same vz as the first one due to the default Jacobi elements. Now we could add a distant planet on a circular orbit,",
"sim.add(m=1.e-3, a=100.)",
"This planet is set up relative to the binary center of mass (again due to the Jacobi coordinates), which is probably what we want. But imagine we now want to place a test mass in a tight orbit around the second star. If we passed things as above, the orbital elements would be referenced to the binary/outer-planet center of mass. We can override the default by explicitly passing a primary (any instance of the Particle class):",
"sim.add(primary=sim.particles[1], a=0.01)",
"All simulations are performed in Cartesian coordinates, so to avoid the overhead, REBOUND does not update particles' orbital elements as the simulation progresses. However, you can always access any orbital element through, e.g., sim.particles[1].inc (see the diagram, and table of orbital elements under the Orbit structure at http://rebound.readthedocs.org/en/latest/python_api.html). This will calculate that orbital element individually--you can calculate all the particles' orbital elements at once with sim.calculate_orbits(). REBOUND will always output angles in the range $[-\\pi,\\pi]$, except the inclination which is always in $[0,\\pi]$.",
"print(sim.particles[1].a)\norbits = sim.calculate_orbits()\nfor orbit in orbits:\n print(orbit)",
"Notice that there is always one less orbit than there are particles, since orbits are only defined between pairs of particles. We see that we got the first two orbits right, but the last one is way off. The reason is that again the REBOUND default is that we always get Jacobi elements. But we initialized the last particle relative to the second star, rather than the center of mass of all the previous particles.\nTo get orbital elements relative to a specific body, you can manually use the calculate_orbit method of the Particle class:",
"print(sim.particles[3].calculate_orbit(primary=sim.particles[1]))",
"though we could have simply avoided this problem by adding bodies from the inside out (second star, test mass, first star, circumbinary planet).\nWhen you access orbital elements individually, e.g., sim.particles[1].inc, you always get Jacobi elements. If you need to specify the primary, you have to do it with sim.calculate_orbit() as above.\nEdge cases and orbital element sets\nDifferent orbital elements lose meaning in various limits, e.g., a planar orbit and a circular orbit. REBOUND therefore allows initialization with several different types of variables that are appropriate in different cases. It's important to keep in mind that the procedure to initialize particles from orbital elements is not exactly invertible, so one can expect discrepant results for elements that become ill defined. For example,",
"sim = rebound.Simulation()\nsim.add(m=1.)\nsim.add(a=1., e=0., inc=0.1, Omega=0.3, omega=0.1)\nprint(sim.particles[1].orbit)",
"The problem here is that $\\omega$ (the angle from the ascending node to pericenter) is ill-defined for a circular orbit, so it's not clear what we mean when we pass it, and we get spurious results for both $\\omega$ and $f$, since the latter is also undefined as the angle from pericenter to the particle's position. However, the true longitude $\\theta$, the broken angle from the $x$ axis to the ascending node = $\\Omega + \\omega + f$, and then to the particle's position, is always well defined:",
"print(sim.particles[1].theta)",
"To be clearer and ensure we get the results we expect, we could instead pass theta to specify the longitude we want, e.g.",
"sim = rebound.Simulation()\nsim.add(m=1.)\nsim.add(a=1., e=0., inc=0.1, Omega=0.3, theta = 0.4)\nprint(sim.particles[1].theta)\n\nimport rebound\nsim = rebound.Simulation()\nsim.add(m=1.)\nsim.add(a=1., e=0.2, Omega=0.1)\nprint(sim.particles[1].orbit)",
"Here we have a planar orbit, in which case the line of nodes becomes ill defined, so $\\Omega$ is not a good variable, but we pass it anyway! In this case, $\\omega$ is also undefined since it is referenced to the ascending node. Notice that these two ill-defined variables come back flipped. The appropriate variable is pomega ($\\varpi = \\Omega + \\omega$), which is the angle from the $x$ axis to pericenter:",
"print(sim.particles[1].pomega)",
"We can specify the pericenter of the orbit with either $\\omega$ or $\\varpi$:",
"import rebound\nsim = rebound.Simulation()\nsim.add(m=1.)\nsim.add(a=1., e=0.2, pomega=0.1)\nprint(sim.particles[1].orbit)",
"Note that if the inclination is exactly zero, REBOUND sets $\\Omega$ (which is undefined) to 0, so $\\omega = \\varpi$. \nFinally, we can specify the position of the particle along its orbit using mean (rather than true) longitudes or anomalies (for example, this might be useful for resonances). We can either use the mean anomaly $M$, which is referenced to pericenter (again ill-defined for circular orbits), or its better-defined counterpart the mean longitude l $= \\lambda = \\Omega + \\omega + M$, which is analogous to $\\theta$ above,",
"sim = rebound.Simulation()\nsim.add(m=1.)\nsim.add(a=1., e=0.1, Omega=0.3, M = 0.1)\nsim.add(a=1., Omega=0.3, l = 0.4)\nprint(sim.particles[1].l)\nprint(sim.particles[2].l)",
"REBOUND calculates the mean longitude in such a way that it smoothly approaches $\\theta$ in the limit of $e\\rightarrow0$:",
"sim.particles[2].theta\n\nimport rebound\nsim = rebound.Simulation()\nsim.add(m=1.)\nsim.add(a=1., e=0.1, omega=1.)\nprint(sim.particles[1].orbit)",
"In summary, you can specify the phase of the orbit through any one of the angles M, f, theta or l=$\\lambda$. Additionally, one can instead use the time of pericenter passage T. This time should be set in the appropriate time units, and you'd initialize sim.t to the appropriate time you want to start the simulation.\nAccuracy\nAs a test of accuracy and demonstration of issues related to the last section, let's test the numerical stability by initializing particles with small eccentricities and true anomalies, computing their orbital elements back, and comparing the relative error. We choose the inclination and node longitude randomly:",
"import random\nimport numpy as np\n\ndef simulation(par):\n e,f = par\n e = 10**e\n f = 10**f\n sim = rebound.Simulation()\n sim.add(m=1.)\n a = 1.\n inc = random.random()*np.pi\n Omega = random.random()*2*np.pi\n sim.add(m=0.,a=a,e=e,inc=inc,Omega=Omega, f=f)\n o=sim.particles[1].orbit\n if o.f < 0: # avoid wrapping issues\n o.f += 2*np.pi\n err = max(np.fabs(o.e-e)/e, np.fabs(o.f-f)/f)\n return err\n\nrandom.seed(1)\nN = 100\nes = np.linspace(-16.,-1.,N)\nfs = np.linspace(-16.,-1.,N)\nparams = [(e,f) for e in es for f in fs]\n\npool=rebound.InterruptiblePool()\nres = pool.map(simulation, params)\nres = np.array(res).reshape(N,N)\nres = np.nan_to_num(res)\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom matplotlib import ticker\nfrom matplotlib.colors import LogNorm\nimport matplotlib\n\nf,ax = plt.subplots(1,1,figsize=(7,5))\nextent=[fs.min(), fs.max(), es.min(), es.max()]\n\nax.set_xlim(extent[0], extent[1])\nax.set_ylim(extent[2], extent[3])\nax.set_xlabel(r\"true anomaly (f)\")\nax.set_ylabel(r\"eccentricity\")\n\nim = ax.imshow(res, norm=LogNorm(), vmax=1., vmin=1.e-16, aspect='auto', origin=\"lower\", interpolation='nearest', cmap=\"RdYlGn_r\", extent=extent)\ncb = plt.colorbar(im, ax=ax)\ncb.solids.set_rasterized(True)\ncb.set_label(\"Relative Error\")",
"We see that the behavior is poor, which is physically due to $f$ becoming poorly defined at low $e$. If instead we initialize the orbits with the true longitude $\\theta$ as discussed above, we get much better results:",
"def simulation(par):\n e,theta = par\n e = 10**e\n theta = 10**theta\n sim = rebound.Simulation()\n sim.add(m=1.)\n a = 1.\n inc = random.random()*np.pi\n Omega = random.random()*2*np.pi\n omega = random.random()*2*np.pi\n sim.add(m=0.,a=a,e=e,inc=inc,Omega=Omega, theta=theta)\n o=sim.particles[1].orbit\n if o.theta < 0:\n o.theta += 2*np.pi\n err = max(np.fabs(o.e-e)/e, np.fabs(o.theta-theta)/theta)\n return err\n\nrandom.seed(1)\nN = 100\nes = np.linspace(-16.,-1.,N)\nthetas = np.linspace(-16.,-1.,N)\nparams = [(e,theta) for e in es for theta in thetas]\n\npool=rebound.InterruptiblePool()\nres = pool.map(simulation, params)\nres = np.array(res).reshape(N,N)\nres = np.nan_to_num(res)\n\nf,ax = plt.subplots(1,1,figsize=(7,5))\nextent=[thetas.min(), thetas.max(), es.min(), es.max()]\n\nax.set_xlim(extent[0], extent[1])\nax.set_ylim(extent[2], extent[3])\nax.set_xlabel(r\"true longitude (\\theta)\")\nax.set_ylabel(r\"eccentricity\")\n\nim = ax.imshow(res, norm=LogNorm(), vmax=1., vmin=1.e-16, aspect='auto', origin=\"lower\", interpolation='nearest', cmap=\"RdYlGn_r\", extent=extent)\ncb = plt.colorbar(im, ax=ax)\ncb.solids.set_rasterized(True)\ncb.set_label(\"Relative Error\")",
"Hyperbolic & Parabolic Orbits\nREBOUND can also handle hyperbolic orbits, which have negative $a$ and $e>1$:",
"sim.add(a=-0.2, e=1.4)\nsim.status()",
"Currently there is no support for exactly parabolic orbits, but we can get a close approximation by passing a nearby hyperbolic orbit where we can specify the pericenter = $|a|(e-1)$ with $a$ and $e$. For example, for a 0.1 AU pericenter,",
"sim = rebound.Simulation()\nsim.add(m=1.)\nq = 0.1\na=-1.e14\ne=1.+q/np.fabs(a)\nsim.add(a=a, e=e)\nprint(sim.particles[1].orbit)",
"Retrograde Orbits\nOrbital elements can be counterintuitive for retrograde orbits, but REBOUND tries to sort them out consistently. This can lead to some initially surprising results. For example,",
"sim = rebound.Simulation()\nsim.add(m=1.)\nsim.add(a=1.,inc=np.pi,e=0.1, Omega=0., pomega=1.)\nprint(sim.particles[1].orbit)",
"We passed $\\Omega=0$ and $\\varpi=1.$. For prograde orbits, $\\varpi = \\Omega + \\omega$, so we'd expect $\\omega = 1$, but instead we get $\\omega=-1$. If we think about things physically, $\\varpi$ is the angle from the $x$ axis to pericenter, measured in the positive direction (counterclockwise) defined by $z$. $\\Omega$ is always measured in this same sense, but $\\omega$ is always measured in the orbital plane in the direction of the orbit. For retrograde orbits, this means that $\\omega$ is measured in the opposite sense to $\\Omega$, so $\\varpi = \\Omega - \\omega$, which is why we got $\\omega = -1$. \nSimilarly, the retrograde version of $\\theta = \\Omega + \\omega + f$ is $\\theta = \\Omega - \\omega - f$, and l = $\\lambda = \\Omega + \\omega + M$ becomes $\\lambda = \\Omega - \\omega - M$. REBOUND chooses these conventions based on whether $i < \\pi/2$, which means that if you were tracking $\\varpi$ for nearly polar orbits, you would get unphysical jumps if the orbits crossed back and forth between prograde and retrograde. Of course, $\\varpi$ is not a good angle at such high inclinations, and only has physical meaning when the orbital plane nearly coincides with the reference plane.\nExceptions\nAdding a particle or getting orbital elements from particles in a simulation should never yield NaNs in any of the structure fields. Please let us know if you find a case that does. \nIn cases where it would return a NaN, REBOUND will raise a ValueError. The only cases that should do so when adding a particle are 1) passing an eccentricity of exactly 1. 2) passing a negative eccentricity. 3) Passing $e>1$ if $a>0$. 4) Passing $e<1$ if $a<0$. 5) Passing a longitude or anomaly for a hyperbolic orbit that's beyond the range allowed by the asymptotes defined by the hyperbola.\nYou will also get errors if you try to initialize particles with orbital elements manually with rebound.Particle().\nWhen obtaining orbital elements from a Particle structure, REBOUND will raise a ValueError if 1) the primary's mass is zero, or 2) the particle's and primary's position are the same.\nNegative inclinations\nWhile inclinations are only defined in the range $[0,\\pi]$, you can also pass negative inclinations when adding particles in REBOUND. This is interpreted as referencing $\\Omega$ and $\\omega$ to the descending, rather than the ascending node. So for example, if one set up particles with the same $\\Omega$ and a range of inclinations distributed around zero, one would obtain what one might expect, i.e. a set of orbits that are all rotated around the same line of nodes.\nJacobi masses\nThere is a classical Hamiltonian splitting for the N-body problem (see e.g., Wisdom & Holman 1991) that when expanded to first order in the planet/star mass ratio, gives an interaction Hamiltonian with the same form as the disturbing function for an exterior perturber. This makes it particularly attractive for analytic or semi-analytic studies. In this splitting, the masses of the primaries for each planet take on a particular form. One can add particles using these jacobi masses with the jacobi_masses flag. By default this flag is false and the primary mass is the total mass of all particles in the simulation (see the top of this ipynb).",
"import rebound\nsim = rebound.Simulation()\nsim.add(m=1.)\nsim.add(m=1.e-3, a=1., jacobi_masses=True)\nsim.add(m=1.e-3, a=5., jacobi_masses=True)\nsim.move_to_com()",
"The jacobi mass and default mass assigned by REBOUND always agree for the first particle, but differ for all the rest",
"print(sim.particles[1].a, sim.particles[2].a)",
"We can calculate orbital elements using jacobi masses by using the same flag in sim.calculate_orbits",
"o = sim.calculate_orbits(jacobi_masses=True)\nprint(o[0].a, o[1].a)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Santara/ML-MOOC-NPTEL
|
lecture4/ML-Anirban_Tutorial4.ipynb
|
gpl-3.0
|
[
"from __future__ import division, print_function\nimport numpy as np\nfrom sklearn import datasets, svm \n# train_test_split lives in sklearn.model_selection (sklearn.cross_validation was removed in scikit-learn 0.20)\nfrom sklearn.model_selection import train_test_split\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"1. Support Vector Classification\n1.1 Load the Iris dataset",
"iris = datasets.load_iris()\nX = iris.data[:,:2]\ny = iris.target\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)",
"1.2 Use Support Vector Machine with different kinds of kernels and evaluate performance",
"def evaluate_on_test_data(model=None):\n predictions = model.predict(X_test)\n correct_classifications = 0\n for i in range(len(y_test)):\n if predictions[i] == y_test[i]:\n correct_classifications += 1\n accuracy = 100*correct_classifications/len(y_test) #Accuracy as a percentage\n return accuracy\n\nkernels = ('linear','poly','rbf')\naccuracies = []\nfor index, kernel in enumerate(kernels):\n model = svm.SVC(kernel=kernel)\n model.fit(X_train, y_train)\n acc = evaluate_on_test_data(model)\n accuracies.append(acc)\n print(\"{} % accuracy obtained with kernel = {}\".format(acc, kernel))",
"1.3 Visualize the decision boundaries",
"#Train SVMs with different kernels\nsvc = svm.SVC(kernel='linear').fit(X_train, y_train)\nrbf_svc = svm.SVC(kernel='rbf', gamma=0.7).fit(X_train, y_train)\npoly_svc = svm.SVC(kernel='poly', degree=3).fit(X_train, y_train)\n\n#Create a mesh to plot in\nh = .02 # step size in the mesh\nx_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1\ny_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1\nxx, yy = np.meshgrid(np.arange(x_min, x_max, h),\n np.arange(y_min, y_max, h))\n\n#Define title for the plots\ntitles = ['SVC with linear kernel',\n 'SVC with RBF kernel',\n 'SVC with polynomial (degree 3) kernel']\n\n\nfor i, clf in enumerate((svc, rbf_svc, poly_svc)):\n # Plot the decision boundary. For that, we will assign a color to each\n # point in the mesh [x_min, m_max]x[y_min, y_max].\n plt.figure(i)\n\n Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])\n\n # Put the result into a color plot\n Z = Z.reshape(xx.shape)\n plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8)\n\n # Plot also the training points\n plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.ocean)\n plt.xlabel('Sepal length')\n plt.ylabel('Sepal width')\n plt.xlim(xx.min(), xx.max())\n plt.ylim(yy.min(), yy.max())\n plt.xticks(())\n plt.yticks(())\n plt.title(titles[i])\n\nplt.show()",
"1.4 Check the support vectors",
"#Checking the support vectors of the polynomial kernel (for example)\nprint(\"The support vectors are:\\n\", poly_svc.support_vectors_)",
"2. Support Vector Regression\n2.1 Load data from the Boston dataset",
"boston = datasets.load_boston()\nX = boston.data\ny = boston.target\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)",
"2.2 Use Support Vector Machine with different kinds of kernels and evaluate performance",
"def evaluate_on_test_data(model=None):\n predictions = model.predict(X_test)\n sum_of_squared_error = 0\n for i in range(len(y_test)):\n err = (predictions[i]-y_test[i]) **2\n sum_of_squared_error += err\n mean_squared_error = sum_of_squared_error/len(y_test)\n RMSE = np.sqrt(mean_squared_error) \n return RMSE\n\nkernels = ('linear','rbf')\nRMSE_vec = []\nfor index, kernel in enumerate(kernels):\n model = svm.SVR(kernel=kernel)\n model.fit(X_train, y_train)\n RMSE = evaluate_on_test_data(model)\n RMSE_vec.append(RMSE)\n print(\"RMSE={} obtained with kernel = {}\".format(RMSE, kernel))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
cleuton/datascience
|
covid19_Brasil/Covid19_no_Brasil.ipynb
|
apache-2.0
|
[
"Covid-19 - Brazil visualization\nData from https://brasil.io/dataset/covid19/caso \nCleuton Sampaio",
"import pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\ndf = pd.read_csv('./covid19-86691a57080d4801a240e49035b292fc.csv') \ndf.head()\n\nlist_cidades = df.groupby(\"city\").count().index.tolist()\n\n\nlist_cidades",
"Let's geocode the cities by looking them up with Google's Geocoding API. You need to obtain an API key: https://console.cloud.google.com/apis/ If you prefer, you can use the geocoded file I saved.",
"import requests\nimport json\ncidades = {}\ncidades_nao_encontradas = []\nurl = 'https://maps.googleapis.com/maps/api/geocode/json'\n\nparams = dict(\n key='** USE SUA API KEY**'\n)\ninx = 0\nstart=None # Inform None to process all cities. Inform a city to begin processing after it\nstart_saving=False\nfor city in list_cidades:\n if start != None:\n if city == start:\n start_saving = True\n else:\n start_saving = True\n if start_saving:\n params['address'] = city + ',brasil'\n resp = requests.get(url=url, params=params)\n data = resp.json()\n try: \n latitude = data['results'][0]['geometry']['location']['lat']\n longitude = data['results'][0]['geometry']['location']['lng']\n print(city,latitude,longitude)\n cidades[city]={'latitude': latitude, 'longitude': longitude}\n inx = inx + 1\n except: \n print(\"Erro na cidade: \",city)\n cidades_nao_encontradas.append(city)\n else:\n print('Pulando a cidade já processada:',city)\nprint('Cidades salvas no arquivo:',inx)\nprint('Cidades não encontradas',cidades_nao_encontradas)\nwith open('cidades.json', 'w') as fp:\n json.dump(cidades, fp) \n\nprint(json.dumps(cidades))\n",
"I saved a file named \"cidades.json\" with the geocoding so we can avoid hitting the API again. \nNow I need to download a map of Brazil. I picked the center with Google Maps and adjusted the zoom, size, and scale. Note that you will need an API key.",
"#-13.6593766,-58.6914406\nlatitude = -13.6593766\nlongitude = -50.6914406\nzoom = 4\nsize = 530\nscale = 2\napikey = \"** SUA API KEY**\"\ngmapas = \"https://maps.googleapis.com/maps/api/staticmap?center=\" + str(latitude) + \",\" + str(longitude) + \\\n \"&zoom=\" + str(zoom) + \\\n \"&scale=\" + str(scale) + \\\n \"&size=\" + str(size) + \"x\" + str(size) + \"&key=\" + apikey\n\nwith open('mapa.png', 'wb') as handle:\n response = requests.get(gmapas, stream=True)\n\n if not response.ok:\n print(response)\n\n for block in response.iter_content(1024):\n if not block:\n break\n\n handle.write(block)",
"Now I need a function that applies the Mercator projection to compute the bounds of the map I downloaded. I took it from this StackOverflow answer: https://stackoverflow.com/questions/12507274/how-to-get-bounds-of-a-google-static-map It works better than the one I was using before.",
"import MercatorProjection\ncenterLat = latitude\ncenterLon = longitude\nmapWidth = size\nmapHeight = size\ncenterPoint = MercatorProjection.G_LatLng(centerLat, centerLon)\ncorners = MercatorProjection.getCorners(centerPoint, zoom, mapWidth, mapHeight)\nprint(corners)",
"I created a new DataFrame holding the latitudes, longitudes, and case counts:",
"casos = df.groupby(\"city\")['confirmed'].sum()\ndf2 = pd.DataFrame.from_dict(cidades,orient='index')\ndf2['casos'] = casos\nprint(df2.head())",
"Now I will add an attribute with each point's color, following a heuristic based on the number of cases: the more cases, the redder the point:",
"def calcular_cor(valor):\n    cor = 'r'\n    if valor <= 10: \n        cor = '#ffff00'\n    elif valor <= 30:\n        cor = '#ffbf00'\n    elif valor <= 50:\n        cor = '#ff8000'\n    return cor\n    \n# color each city by its number of confirmed cases (the 'casos' column)\ndf2['cor'] = [calcular_cor(casos) for casos in df2['casos']]\n\ndf2.head()",
"Vamos ordenar pela quantidade de casos:",
"df2 = df2.sort_values(['casos'])\n",
"Temos alguns \"outliers\", ou seja, coordenadas muito fora do país. Provavelmente, problemas de geocodificação. Vamos retirá-las:",
"print(df2.loc[(df2['latitude'] > 20) | (df2['longitude']< -93)])\ndf3 = df2.drop(df2[(df2.latitude > 20) | (df2.longitude < -93)].index)",
"Agora dá para plotar um gráfico utilizando aquela imagem baixada. Eu tive que ajustar as coordenadas de acordo com os limites do retângulo, calculados pela projeção de Mercator:",
"import matplotlib.image as mpimg\nmapa=mpimg.imread('./mapa.png')\nfig, ax = plt.subplots(figsize=(10, 10))\n#{'N': 20.88699826581544, 'E': -15.535190599999996, 'S': -43.89198802990045, 'W': -85.84769059999999}\nplt.imshow(mapa, extent=[corners['W'],corners['E'],corners['S'],corners['N']], alpha=1.0, aspect='auto')\nax.scatter(df3['longitude'],df3['latitude'], c=df3['cor'],s=8+df3['casos']*0.03)\nplt.ylabel(\"Latitude\", fontsize=14)\nplt.xlabel(\"Longitude\", fontsize=14)\nax.set_title('Casos de Covid19 em Abril')\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fionapigott/Data-Science-45min-Intros
|
sklearn-101/sklearn-101.ipynb
|
unlicense
|
[
"Intro to scikit-learn, machine learning in Python\nPart of our team's ongoing Data Science Workshops\n2014-04-04, Josh Montague (@jrmontag)\nOverview\nThis is a simple introduction to the sklearn API, with two examples using built-in datasets:\n1. K-Nearest Neighbors (supervised)\n2. Linear regression (unsupervised)\nThis session doesn't go into the implementations of the estimator algorithms, nor how to choose them appropriately. However, the official docs are incredible, and have more information than you will probably every have time to read. See also this fantastic flowchart on when to use which algorithm.\nImport some libraries for data munging and inspection:\nThese aren't the sklearn library, but they're important for doing science in Python.\nIf this doesn't work, you'll need to \n$pip install numpy \nand\n$pip install matplotlib\nFor this tutorial, I'm using numpy version 1.12.1 and matplotlib version 2.0.2 with Python version 3.6.1\nLater we'll import code from sklearn to do the machine learning.",
"import numpy as np\nimport random\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"Get & inspect the data\nFirst, we'll get some sample data from the built-in collection. Fisher's famous iris dataset is a great place to start. The datasets are of type bunch, which is dictionary-like.",
"# import the 'iris' dataset from sklearn\nfrom sklearn import datasets\niris = datasets.load_iris()",
"Use some of the keys in the bunch to poke around the data and get a sense for what we have to work with. Generally, the sklearn built-in data has data and target keys whose values (arrays) we'll use for our machine learning.",
"print(\"data dimensions (array): {} \\n \".format(iris.data.shape))\nprint(\"bunch keys: {} \\n\".format(iris.keys()))\nprint(\"feature names: {} \\n\".format(iris.feature_names))\n\n# the \"description\" is a giant text blob that will tell you more \n# about the contents of the bunch\nprint(iris.DESCR) ",
"Have a look at the actual data we'll be using. Note that the data array has four features for each sample (consistent with the \"feature names\" above), and the labels can only take on the values 0-2.\nI'm still getting familiar with slicing, indexing, etc numpy arrays, so I find it helpful to have the docs open somewhere.",
"# preview 'idx' rows/cols of the data\nidx = 6\n\nprint(\"example iris features: \\n {} \\n\".format(iris.data[:idx]))\nprint(\"example iris labels: {} \\n\".format(iris.target[:idx]))\nprint(\"all possible labels: {} \\n\".format(np.unique(iris.target)))",
"It's always a good idea to throw out some scatter plots (if the data is appropriate) to see the space our data covers. Since we have four features, we can just grab some pairs of the data and make simple scatter plots.",
"plt.figure(figsize = (16,4))\n\nplt.subplot(131)\nplt.scatter(iris.data[:, 0:1], iris.data[:, 1:2])\nplt.xlabel(\"sepal length (cm)\")\nplt.ylabel(\"sepal width (cm)\")\nplt.axis(\"tight\")\n\nplt.subplot(132)\nplt.scatter(iris.data[:, 1:2], iris.data[:, 2:3])\nplt.xlabel(\"sepal width (cm)\")\nplt.ylabel(\"petal length (cm)\")\nplt.axis(\"tight\")\n\nplt.subplot(133)\nplt.scatter(iris.data[:, 0:1], iris.data[:, 2:3])\nplt.xlabel(\"sepal length (cm)\")\nplt.ylabel(\"petal length (cm)\")\nplt.axis(\"tight\")",
"And, what the heck, this is a perfectly good opportunity to try out a 3D plot, too, right? We'll also add in the target from the dataset - that is, the actual labels that we're getting to - as the coloring. Since we only have three dimensions to plot, we have to leave something out.",
"from mpl_toolkits.mplot3d import Axes3D\n\nfig = plt.figure(figsize = (10,8))\nax = plt.axes(projection='3d')\nax.view_init(15, 60) # (elev, azim) : adjust these to change viewing angle!\n\n\nx = iris.data[:, 0:1]\ny = iris.data[:, 1:2]\nz = iris.data[:, 2:3]\n\n# add the last dimension for use in e.g. the color!\nlabel1 = iris.data[:, 3:4]\n# can also color the data according to the actual labels \nlabel2 = iris.target\n\nax.scatter(x, y, z, c=label2)\n\nplt.xlabel(\"sepal length (cm)\")\nplt.ylabel(\"sepal width (cm)\")\nax.set_zlabel(\"petal length (cm)\")\n#plt.axis(\"tight\")",
"Since this dataset has a collection of \"ground truth\" labels (label2 in the previous graph), this is an example of supervised learning. We tell the algorithm the right answer a whole bunch of times, and look for it to figure out the best way to predict labels of future data samples.\nLearning & predicting\nIn sklearn, model objects (implementing a particular algorithm) are called \"estimators\". Estimators implement the methods fit(X,y) and predict(X).\nFirst things first, we want to get all the sample feature (data) and label (target) arrays. Then, we'll randomly split up the data sets into training and test sets by shuffling the order and slicing off the last 'subset' samples to hold for testing.",
"iris_X = iris.data\niris_y = iris.target\n\nr = random.randint(0,100)\nnp.random.seed(r)\nidx = np.random.permutation(len(iris_X))\n\nsubset = 25\n\niris_X_train = iris_X[idx[:-subset]] # all but the last 'subset' rows\niris_y_train = iris_y[idx[:-subset]]\niris_X_test = iris_X[idx[-subset:]] # the last 'subset' rows\niris_y_test = iris_y[idx[-subset:]]",
"To see that we're choosing the training and test samples, we can again plot them to see how they're distributed. If you re-run the random.seed code, it should choose a new random collection of the data.",
"plt.figure(figsize= (6,6))\n\nplt.scatter(iris_X_train[:, 0:1]\n , iris_X_train[:, 1:2]\n , c=\"blue\"\n , s=30\n , label=\"train data\"\n )\n\nplt.scatter(iris_X_test[:, 0:1]\n , iris_X_test[:, 1:2]\n , c=\"red\"\n , s=30 \n , label=\"test data\" \n )\nplt.xlabel(\"sepal length (cm)\")\nplt.ylabel(\"sepal width (cm)\")\nplt.legend()\n#_ = plt.axis(\"tight\")",
"Now, we use the sklearn package and create a nearest-neighbor classification estimator (with all the default values). After instantiating the object, we use its fit() method and pass it the training data - both features and labels. Note that the __repr__() here tells you about the default values if you want to adjust them. Of course you can always just look at the documentation, too!",
"from sklearn.neighbors import KNeighborsClassifier\n\nknn = KNeighborsClassifier()\n\n# fit the model using the training data and labels\nknn.fit(iris_X_train, iris_y_train)",
"At this point, the trained knn estimator object has, internally, the \"best\" mapping from input to output. It can now be used to predict new data via the predict() method. In this case, the prediction is which class the new samples' features best fit - a simple 1D array of labels.",
"# predict the labels for the test data, using the trained model\niris_y_predict = knn.predict(iris_X_test)\n\n# show the results (labels)\niris_y_predict",
"Since this data came labeled, we can actually look at the actual correct answers. As this list grows in size, it's trickier to note the differences or similarities. But, still, it looks like it did a pretty good job.",
"iris_y_test",
"Thankfully, sklearn also has many built-in ways to gauge the \"accuracy\" of our trained estimator. The simplest is just \"what fraction of our classifications did we get right?\" Clearly easier than inspecting by eye. Note: occasionally, this estimator actually reaches 100%. If you increase the \"subset\" that's cut out for testing, you're likely to see a decrease in the accuracy_score, which makes it easier to visualize.",
"from sklearn.metrics import accuracy_score\n\naccuracy_score(iris_y_test, iris_y_predict)",
"So we know the successful prediction percentage, but we can also do a visual inspection of how the labels differ. Even for this small dataset, it can be tricky to spot the differences between the true values and predicted values; an accuracy_score in the 90% range means that only one or two samples will be incorrectly predicted.",
"plt.figure(figsize = (12,6))\n\nplt.subplot(221)\nplt.scatter(iris_X_test[:, 0:1] # real data\n , iris_X_test[:, 1:2]\n , c=iris_y_test # real labels\n , s=100\n , alpha=0.6\n )\nplt.ylabel(\"sepal width (cm)\")\nplt.title(\"real labels\")\n\nplt.subplot(223)\nplt.scatter(iris_X_test[:, 0:1]\n , iris_X_test[:, 2:3]\n , c=iris_y_test\n , s=100\n , alpha=0.6\n )\nplt.xlabel(\"sepal length (cm)\")\nplt.ylabel(\"petal length (cm)\")\n\n\nplt.subplot(222)\nplt.scatter(iris_X_test[:, 0:1] # real data\n , iris_X_test[:, 1:2]\n , c=iris_y_predict # predicted labels\n , s=100\n , alpha=0.6\n )\nplt.ylabel(\"sepal width (cm)\")\nplt.title(\"predicted labels\")\n\nplt.subplot(224)\nplt.scatter(iris_X_test[:, 0:1]\n , iris_X_test[:, 2:3]\n , c=iris_y_predict\n , s=100\n , alpha=0.6\n )\nplt.xlabel(\"sepal length (cm)\")\nplt.ylabel(\"petal length (cm)\")",
"Alternatively, if you have more numpy and matplotlib skillz than I currently have, you can also visualize the resulting model of a similar estimator like so: (source)",
"from IPython.core.display import Image\nImage(filename='./iris_knn.png', width=500)",
"Once more, but with model persistence\nNow let's work an example of unsupervised learning.\nAfter some time with exploration like in the last example, we'll get a handle on our data, the features that will be helpful, and the general pipeline of analysis. In order to make the analysis more portable (and also when issues of scale become important), we may want to train our estimator once (as well as we possibly can, without overfitting), and then use that estimator for predictions of new data for quite a while. If fitting the estimator takes an hour (or a day!), we definitely don't want to repeat that process any more than necessary.\nEnter pickle.\npickle is a way to serialize and de-serialize Python objects; that is, convert an in-memory structure to a byte stream that you can write to file, move around, and read back into memory as a full-fledged Python object. \nFirst, let's work with a slightly trimmed-down analysis pipeline this time. First, get some data (still built in), and have a look. This data is on house sales in Boston around the 1970s (hence the ridiculously low prices, relative to today!). The features are all kinds of interesting things, and the targets are the sale prices.",
"# boston home sales in the 1970s\nboston = datasets.load_boston()\n\nboston.keys()\n\n# get the full info\nprint(boston.DESCR)",
"The feature_names are less obvious in this dataset, but if you print out the DESCR above, there's a more detailed explanation of the features.",
"print(\"dimensions: {}, {} \\n\".format(boston.data.shape, boston.target.shape))\nprint(\"features (defs in DESCR): {} \\n\".format(boston.feature_names))\nprint(\"first row: {}\\n\".format(boston.data[:1]))",
"While a better model would take incorporate all of these features (with appropriate normalization), let's focus on just one for the moment. Column 5 is the number of rooms in each house. Seems reasonable that this would be a decent predictor of sale price. Let's have a quick look at the data.",
"rooms = boston.data[:, 5]\n\nplt.figure(figsize = (6,6))\nplt.scatter(rooms, boston.target, alpha=0.5)\nplt.xlabel(\"room count\")\nplt.ylabel(\"cost ($1000s)\")\nplt.axis(\"tight\")",
"Ok, so we can work with this - there's definitely some correlation between these two variables. Let's imagine that we just knew we wanted to fit this immediately, without all the inspection. Furthermore, we wanted to build a model, fit the estimator, and then keep it around for use in an analysis later on (via pickle). Let's go back a step and start from loading the data:",
"# here comes the data\nboston = datasets.load_boston()\n\n# a little goofyness to get a \"column vector\" for the estimator\nb_X_tmp = boston.data[:, np.newaxis]\nb_X = b_X_tmp[:, :, 5]\nb_y = boston.target\n\n# split it out into train/test again\nr = random.randint(0,100)\nnp.random.seed(r)\nidx = np.random.permutation(len(b_X))\n\nsubset = 50\n\nb_X_train = b_X[idx[:-subset]] # all but last 'subset' rows\nb_y_train = b_y[idx[:-subset]]\nb_X_test = b_X[idx[-subset:]] # last 'subset' rows\nb_y_test = b_y[idx[-subset:]]",
"Fit the estimator (and this time let's print out the fitted model parameters)...",
"# get our estimator & fit to the training data \nfrom sklearn.linear_model import LinearRegression\nlr = LinearRegression()\n\nprint(lr.fit(b_X_train, b_y_train))\nprint(\"Coefficients: {},{} \\n\".format(lr.coef_, lr.intercept_))",
"And now let's imagine we spent a ton of time building this model, so we want to save it to disk:",
"import pickle\n\np = pickle.dumps(lr)\nprint(p) # looks super useful, right?",
"Write the pickle to disk, and then navigate to this location in another shell and cat the file. It's pretty much what you see above. Pretty un-helpful to the eye.",
"# write this fitted estimator (python object) to a byte stream\npickle.dump(lr,open('./lin-reg.pkl', 'wb'))\n\n!cat ./lin-reg.pkl",
"But, now imagine we had another process that had some new data and we wanted to use this pre-existing model to predict some results. We just read in the faile, and deserialize it. You can even check the coefficients to see that it's \"the same\" model.",
"# now, imagine you've previously created this file and stored it off somewhere... \nnew_lr = pickle.load(open('./lin-reg.pkl', 'rb'))\n \nprint(new_lr) \nprint(\"Coefficient (compare to previous output): {}, {} \\n\".format(new_lr.coef_, new_lr.intercept_))",
"And now we can use it to predict the target for our housing data (remember, we use the \"test\" data for measuring the success of our estimator.",
"b_y_predict = new_lr.predict(b_X_test)\n\n#b_y_predict # you can have a look at the result if you want",
"Now, we can have a look at how we did. Below, we can look at the best-fit line through all of the data (both \"test\" and \"train\"). Then, we also compare the predicted fit results (test data) to the actual true results.",
"plt.figure(figsize= (12,5))\n\nplt.subplot(121)\nplt.scatter(b_X, b_y, c=\"red\")\nplt.plot(b_X, new_lr.predict(b_X), '-k')\nplt.axis('tight')\nplt.xlabel('room count')\nplt.ylabel('predicted price ($1000s)')\nplt.title(\"fit to all data\")\n\nplt.subplot(122)\nplt.scatter(b_y_test, b_y_predict, c=\"green\")\nplt.plot([0, 50], [0, 50], '--k')\nplt.axis('tight')\nplt.xlabel('true price ($1000s)')\nplt.ylabel('predicted price ($1000s)')\nplt.title(\"true- and predicted-value comparison (test data)\")",
"It's not so bad! We left out all sorts of things that might help, like weighting, normalization, ..., but it's not bad for how easy it was.\n\nSo, generally, the approach to using the sklearn package is to choose an estimator, split your data, fit on the training data, predict on the testing data, and then celebrate how easy scikit-learn has made this process:\nfrom sklearn import estimator\n\ne = Estimator()\ne.fit(train_samples [, train_labels])\ne.predict(test_samples [, test_labels])\n\nrejoice()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
turbomanage/training-data-analyst
|
courses/machine_learning/deepdive2/building_production_ml_systems/solutions/4a_streaming_data_training.ipynb
|
apache-2.0
|
[
"Training a model with traffic_last_5min feature\nIntroduction\nIn this notebook, we'll train a taxifare prediction model but this time with an additional feature of traffic_last_5min.",
"import datetime\nimport os\nimport shutil\n\nimport pandas as pd\nimport tensorflow as tf\n\nfrom matplotlib import pyplot as plt\nfrom tensorflow import keras\n\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, DenseFeatures\nfrom tensorflow.keras.callbacks import TensorBoard\n\nprint(tf.__version__)\n%matplotlib inline\n\nPROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID\nBUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME\nREGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1\n\n# For Bash Code\nos.environ['PROJECT'] = PROJECT\nos.environ['BUCKET'] = BUCKET\nos.environ['REGION'] = REGION\n\n%%bash\ngcloud config set project $PROJECT\ngcloud config set compute/region $REGION",
"Load raw data",
"!ls -l ../data/taxi-traffic*\n\n!head ../data/taxi-traffic*",
"Use tf.data to read the CSV files\nThese functions for reading data from the csv files are similar to what we used in the Introduction to Tensorflow module. Note that here we have an addtional feature traffic_last_5min.",
"CSV_COLUMNS = [\n 'fare_amount',\n 'dayofweek',\n 'hourofday',\n 'pickup_longitude',\n 'pickup_latitude',\n 'dropoff_longitude',\n 'dropoff_latitude',\n 'traffic_last_5min'\n]\nLABEL_COLUMN = 'fare_amount'\nDEFAULTS = [[0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0]]\n\n\ndef features_and_labels(row_data):\n label = row_data.pop(LABEL_COLUMN)\n features = row_data\n\n return features, label\n\n\ndef create_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):\n dataset = tf.data.experimental.make_csv_dataset(\n pattern, batch_size, CSV_COLUMNS, DEFAULTS)\n\n dataset = dataset.map(features_and_labels)\n\n if mode == tf.estimator.ModeKeys.TRAIN:\n dataset = dataset.shuffle(buffer_size=1000).repeat()\n\n # take advantage of multi-threading; 1=AUTOTUNE\n dataset = dataset.prefetch(1)\n return dataset\n\nINPUT_COLS = [\n 'dayofweek',\n 'hourofday',\n 'pickup_longitude',\n 'pickup_latitude',\n 'dropoff_longitude',\n 'dropoff_latitude',\n 'traffic_last_5min'\n]\n\n# Create input layer of feature columns\nfeature_columns = {\n colname: tf.feature_column.numeric_column(colname)\n for colname in INPUT_COLS\n }",
"Build a simple keras DNN model",
"# Build a keras DNN model using Sequential API\ndef build_model(dnn_hidden_units):\n model = Sequential(DenseFeatures(feature_columns=feature_columns.values()))\n \n for num_nodes in dnn_hidden_units:\n model.add(Dense(units=num_nodes, activation=\"relu\"))\n \n model.add(Dense(units=1, activation=\"linear\")) \n \n # Create a custom evalution metric\n def rmse(y_true, y_pred):\n return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))\n\n # Compile the keras model\n model.compile(optimizer=\"adam\", loss=\"mse\", metrics=[rmse, \"mse\"])\n \n return model",
"Next, we can call the build_model to create the model. Here we'll have two hidden layers before our final output layer. And we'll train with the same parameters we used before.",
"HIDDEN_UNITS = [32, 8]\n\nmodel = build_model(dnn_hidden_units=HIDDEN_UNITS)\n\nBATCH_SIZE = 1000\nNUM_TRAIN_EXAMPLES = 10000 * 6 # training dataset will repeat, wrap around\nNUM_EVALS = 60 # how many times to evaluate\nNUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample\n\ntrainds = create_dataset(\n pattern='../data/taxi-traffic-train*',\n batch_size=BATCH_SIZE,\n mode=tf.estimator.ModeKeys.TRAIN)\n\nevalds = create_dataset(\n pattern='../data/taxi-traffic-valid*',\n batch_size=BATCH_SIZE,\n mode=tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)\n\n%%time\nsteps_per_epoch = NUM_TRAIN_EXAMPLES // (BATCH_SIZE * NUM_EVALS)\n\nLOGDIR = \"./taxi_trained\"\nhistory = model.fit(x=trainds,\n steps_per_epoch=steps_per_epoch,\n epochs=NUM_EVALS,\n validation_data=evalds,\n callbacks=[TensorBoard(LOGDIR)])\n\nRMSE_COLS = ['rmse', 'val_rmse']\n\npd.DataFrame(history.history)[RMSE_COLS].plot()\n\nmodel.predict(x={\"dayofweek\": tf.convert_to_tensor([6]),\n \"hourofday\": tf.convert_to_tensor([17]),\n \"pickup_longitude\": tf.convert_to_tensor([-73.982683]),\n \"pickup_latitude\": tf.convert_to_tensor([40.742104]),\n \"dropoff_longitude\": tf.convert_to_tensor([-73.983766]),\n \"dropoff_latitude\": tf.convert_to_tensor([40.755174]),\n \"traffic_last_5min\": tf.convert_to_tensor([114])},\n steps=1)",
"Export and deploy model",
"OUTPUT_DIR = \"./export/savedmodel\"\nshutil.rmtree(OUTPUT_DIR, ignore_errors=True)\nEXPORT_PATH = os.path.join(OUTPUT_DIR,\n datetime.datetime.now().strftime(\"%Y%m%d%H%M%S\"))\ntf.saved_model.save(model, EXPORT_PATH) # with default serving function\nos.environ['EXPORT_PATH'] = EXPORT_PATH\n\n%%bash\nPROJECT=${PROJECT}\nBUCKET=${BUCKET}\nREGION=${REGION}\nMODEL_NAME=taxifare\nVERSION_NAME=traffic\n\nif [[ $(gcloud ai-platform models list --format='value(name)' | grep $MODEL_NAME) ]]; then\n echo \"$MODEL_NAME already exists\"\nelse\n # create model\n echo \"Creating $MODEL_NAME\"\n gcloud ai-platform models create --regions=$REGION $MODEL_NAME\nfi\n\nif [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep $VERSION_NAME) ]]; then\n echo \"Deleting already existing $MODEL_NAME:$VERSION_NAME ... \"\n gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME\n echo \"Please run this cell again if you don't see a Creating message ... \"\n sleep 2\nfi\n\n# create model\necho \"Creating $MODEL_NAME:$VERSION_NAME\"\ngcloud ai-platform versions create --model=$MODEL_NAME $VERSION_NAME --async \\\n --framework=tensorflow --python-version=3.5 --runtime-version=1.14 \\\n --origin=${EXPORT_PATH} --staging-bucket=gs://$BUCKET",
"Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jphall663/GWU_data_mining
|
10_model_interpretability/src/sensitivity_analysis.ipynb
|
apache-2.0
|
[
"License\n\nCopyright 2017 J. Patrick Hall, jphall@gwu.edu\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\nSensitivity Analysis\n\nPreliminaries: imports, start h2o, load and clean data",
"# imports\nimport h2o \nimport numpy as np\nimport pandas as pd\nfrom h2o.estimators.gbm import H2OGradientBoostingEstimator\n\n# start h2o\nh2o.init()\nh2o.remove_all()",
"Load and prepare data for modeling",
"# load data\npath = '../../03_regression/data/train.csv'\nframe = h2o.import_file(path=path)\n\n# assign target and inputs\ny = 'SalePrice'\nX = [name for name in frame.columns if name not in [y, 'Id']]",
"Impute missing values",
"# determine column types\n# impute\nreals, enums = [], []\nfor key, val in frame.types.items():\n if key in X:\n if val == 'enum':\n enums.append(key)\n else: \n reals.append(key)\n \n_ = frame[reals].impute(method='median')\n_ = frame[enums].impute(method='mode')\n\n# split into training and validation\ntrain, valid = frame.split_frame([0.7], seed=12345)",
"Train a predictive model",
"# train GBM model\nmodel = H2OGradientBoostingEstimator(ntrees=100,\n max_depth=10,\n distribution='huber',\n learn_rate=0.1,\n stopping_rounds=5,\n seed=12345)\n\nmodel.train(y=y, x=X, training_frame=train, validation_frame=valid)\n\npreds = valid.cbind(model.predict(valid))",
"Determine important variables for use in sensitivity analysis",
"model.varimp_plot()",
"Helper function for finding quantile indices",
"def get_quantile_dict(y, id_, frame):\n\n \"\"\" Returns the percentiles of a column y as the indices for another column id_.\n \n Args:\n y: Column in which to find percentiles.\n id_: Id column that stores indices for percentiles of y.\n frame: H2OFrame containing y and id_. \n \n Returns:\n Dictionary of percentile values and index column values.\n \n \"\"\"\n \n quantiles_df = frame.as_data_frame()\n quantiles_df.sort_values(y, inplace=True)\n quantiles_df.reset_index(inplace=True)\n \n percentiles_dict = {}\n percentiles_dict[0] = quantiles_df.loc[0, id_]\n percentiles_dict[99] = quantiles_df.loc[quantiles_df.shape[0]-1, id_]\n inc = quantiles_df.shape[0]//10\n \n for i in range(1, 10):\n percentiles_dict[i * 10] = quantiles_df.loc[i * inc, id_]\n\n return percentiles_dict\n\nsale_quantile_dict = get_quantile_dict('SalePrice', 'Id', preds)\npred_quantile_dict = get_quantile_dict('predict', 'Id', preds)\n\nprint('SalePrice quantiles:\\n', sale_quantile_dict)\nprint()\nprint('prediction quantiles:\\n',pred_quantile_dict)",
"Get validation data ranges",
"print('lowest SalePrice:\\n', preds[preds['Id'] == int(sale_quantile_dict[0])]['SalePrice'])\nprint('lowest prediction:\\n', preds[preds['Id'] == int(pred_quantile_dict[0])]['predict'])\nprint('highest SalePrice:\\n', preds[preds['Id'] == int(sale_quantile_dict[99])]['SalePrice'])\nprint('highest prediction:\\n', preds[preds['Id'] == int(pred_quantile_dict[99])]['predict'])",
"This result alone is interesting. The model appears to be struggling to accurately predict low and high values for SalePrice. This behavior should be corrected to increase the accuracy of predictions. A strategy for improving predictions for these homes with extreme values might be to weight them higher during training using observation weights, or they may need their own models.\nNow use trained model to test predictions for interesting situations\nHow will the model handle making the home with the lowest predicted price even less desirable?",
"# look at current row\nprint(preds[preds['Id'] == int(pred_quantile_dict[0])])\n\n# find current error\nobserved = preds[preds['Id'] == int(pred_quantile_dict[0])]['SalePrice'][0,0]\npredicted = preds[preds['Id'] == int(pred_quantile_dict[0])]['predict'][0,0]\nprint('Error: %.2f%%' % (100*(abs(observed - predicted)/observed)))\n\n# change value of important variables\ntest_case = preds[preds['Id'] == int(pred_quantile_dict[0])]\ntest_case = test_case.drop('predict')\ntest_case['OverallQual'] = 0\ntest_case['Neighborhood'] = 'IDOTRR'\ntest_case['GrLivArea'] = 500\ntest_case = test_case.cbind(model.predict(test_case))\nprint(test_case)\n\n# recalculate error\nobserved = test_case['SalePrice'][0,0]\npredicted = test_case['predict'][0,0]\nprint('Error: %.2f%%' % (100*(abs(observed - predicted)/observed)))",
"While the model does not seem to handle low-valued homes very well, making the home with the lowest predicted price less appealling does not seem to make the model's predictions any worse. While this prediction behavior appears somewhat stable, which would normally be desirable, this is not particularly good news as the underlying prediction is so inaccurate. \nHow will the model handle making the home with the highest predicted price even more desirable?",
"# look at current row\nprint(preds[preds['Id'] == int(pred_quantile_dict[99])])\n\n# find current error\nobserved = preds[preds['Id'] == int(pred_quantile_dict[99])]['SalePrice'][0,0]\npredicted = preds[preds['Id'] == int(pred_quantile_dict[99])]['predict'][0,0]\nprint('Error: %.2f%%' % (100*(abs(observed - predicted)/observed)))\n\n# change value of important variables\ntest_case = preds[preds['Id'] == int(pred_quantile_dict[99])]\ntest_case = test_case.drop('predict')\ntest_case['Neighborhood'] = 'StoneBr'\ntest_case['GrLivArea'] = 5000\ntest_case = test_case.cbind(model.predict(test_case))\nprint(test_case)\n\n# recalculate error\nobserved = test_case['SalePrice'][0,0]\npredicted = test_case['predict'][0,0]\nprint('Error: %.2f%%' % (100*(abs(observed - predicted)/observed)))",
"This result may point to unstable predictions for the higher end of SalesPrice.\nShutdown H2O",
"h2o.cluster().shutdown(prompt=True)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.13/_downloads/plot_left_cerebellum_volume_source.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Generate a left cerebellum volume source space\nGenerate a volume source space of the left cerebellum and plot its vertices\nrelative to the left cortical surface source space and the freesurfer\nsegmentation file.",
"# Author: Alan Leggitt <alan.leggitt@ucsf.edu>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nfrom scipy.spatial import ConvexHull\nfrom mayavi import mlab\nfrom mne import setup_source_space, setup_volume_source_space\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nsubjects_dir = data_path + '/subjects'\nsubj = 'sample'\naseg_fname = subjects_dir + '/sample/mri/aseg.mgz'",
"Setup the source spaces",
"# setup a cortical surface source space and extract left hemisphere\nsurf = setup_source_space(subj, subjects_dir=subjects_dir,\n add_dist=False, overwrite=True)\nlh_surf = surf[0]\n\n# setup a volume source space of the left cerebellum cortex\nvolume_label = 'Left-Cerebellum-Cortex'\nsphere = (0, 0, 0, 120)\nlh_cereb = setup_volume_source_space(subj, mri=aseg_fname, sphere=sphere,\n volume_label=volume_label,\n subjects_dir=subjects_dir)",
"Plot the positions of each source space",
"# extract left cortical surface vertices, triangle faces, and surface normals\nx1, y1, z1 = lh_surf['rr'].T\nfaces = lh_surf['use_tris']\nnormals = lh_surf['nn']\n# normalize for mayavi\nnormals /= np.sum(normals * normals, axis=1)[:, np.newaxis]\n\n# extract left cerebellum cortex source positions\nx2, y2, z2 = lh_cereb[0]['rr'][lh_cereb[0]['inuse'].astype(bool)].T\n\n# open a 3d figure in mayavi\nmlab.figure(1, bgcolor=(0, 0, 0))\n\n# plot the left cortical surface\nmesh = mlab.pipeline.triangular_mesh_source(x1, y1, z1, faces)\nmesh.data.point_data.normals = normals\nmlab.pipeline.surface(mesh, color=3 * (0.7,))\n\n# plot the convex hull bounding the left cerebellum\nhull = ConvexHull(np.c_[x2, y2, z2])\nmlab.triangular_mesh(x2, y2, z2, hull.simplices, color=3 * (0.5,), opacity=0.3)\n\n# plot the left cerebellum sources\nmlab.points3d(x2, y2, z2, color=(1, 1, 0), scale_factor=0.001)\n\n# adjust view parameters\nmlab.view(173.78, 101.75, 0.30, np.array([-0.03, -0.01, 0.03]))\nmlab.roll(85)",
"Compare volume source locations to segmentation file in freeview",
"# Export source positions to nift file\nnii_fname = data_path + '/MEG/sample/mne_sample_lh-cerebellum-cortex.nii'\n\n# Combine the source spaces\nsrc = surf + lh_cereb\n\nsrc.export_volume(nii_fname, mri_resolution=True)\n\n# Uncomment the following lines to display source positions in freeview.\n'''\n# display image in freeview\nfrom mne.utils import run_subprocess\nmri_fname = subjects_dir + '/sample/mri/brain.mgz'\nrun_subprocess(['freeview', '-v', mri_fname, '-v',\n '%s:colormap=lut:opacity=0.5' % aseg_fname, '-v',\n '%s:colormap=jet:colorscale=0,2' % nii_fname, '-slice',\n '157 75 105'])\n'''"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
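The `ConvexHull` call in the cerebellum example above can be exercised on its own. A minimal sketch, assuming a hypothetical random point cloud in place of the real source positions — `hull.simplices` is exactly the triangle-index array that `mlab.triangular_mesh` consumes:

```python
import numpy as np
from scipy.spatial import ConvexHull

# hypothetical point cloud standing in for the cerebellum source positions
rng = np.random.default_rng(0)
pts = rng.random((100, 3))

# scipy triangulates the hull of a 3-D cloud; each simplex is a triangle
# of indices into pts, the format mayavi's triangular_mesh expects
hull = ConvexHull(pts)
print(hull.simplices.shape[1])  # 3 vertices per hull facet in 3-D
```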
rafburzy/Python_EE
|
Seismic/SDOF.ipynb
|
bsd-3-clause
|
[
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.style.use('ggplot')",
"Response of the Single Degree of Freedom (SDOF) System\nthe response is given by the following formula:\n$$ a(t) = e^{-\\zeta \\omega \\cdot t} \\cdot \\left[ 2 \\zeta cos \\left(\\omega \\sqrt{1-\\zeta^2}t \\right) + \\frac{1-2\\zeta^2} {\\sqrt{1-\\zeta^2}} sin\\left(\\omega \\sqrt{1-\\zeta^2}t \\right) \\right] $$",
"# time and frequency vectors\nt = np.arange(0, 30, 0.002)\nf = np.arange(2, 45, 2**(1.0/6.0)) #frequency change by 1/6 octave\n\n# relative damping factor\ndz = 0.05\n\n# omega\nw1 = 2*np.pi*f[1]\n# acceleration response calculation\na1 = np.exp(-dz*w1*t) * (2*dz*np.cos(w1*np.sqrt(1-dz**2)*t) + (1-2*dz**2)/np.sqrt(1-dz**2)*np.sin(w1*np.sqrt(1-dz**2)*t))\n\n# omega\nw2 = 2*np.pi*f[-1]\n# acceleration response calculation\na2 = np.exp(-dz*w2*t) * (2*dz*np.cos(w2*np.sqrt(1-dz**2)*t) + (1-2*dz**2)/np.sqrt(1-dz**2)*np.sin(w2*np.sqrt(1-dz**2)*t))\n\nplt.figure(figsize=(12,5))\nplt.subplot(121)\nplt.plot(t, a1, label='Frequency ' + str(f[1]))\nplt.xlabel('Time [s]')\nplt.ylabel('Acceleration [m/s2]')\nplt.legend()\nplt.xlim(0, 5)\n\nplt.subplot(122)\nplt.plot(t, a2, label='Frequency ' + str(f[-1]))\nplt.xlabel('Time [s]')\nplt.ylabel('Acceleration [m/s2]')\nplt.legend()\nplt.xlim(0, 1)\n\nplt.tight_layout();\n\n# wholistic approach\n# omega\nw = 2*np.pi*f\n\nA = np.empty((len(t), len(f)))\n\nfor j in range(len(w)):\n for i in range(len(t)):\n A[i,j] = np.exp(-dz*w[j]*t[i]) * (2*dz*np.cos(w[j]*np.sqrt(1-dz**2)*t[i]) +\n (1-2*dz**2)/np.sqrt(1-dz**2)*np.sin(w[j]*np.sqrt(1-dz**2)*t[i]))\n\nplt.figure(figsize=(12,6))\nplt.plot(t,A)\nplt.xlabel('Time [s]')\nplt.ylabel('Acceleration [m/s2]')\nplt.xlim(0, 2);\n\na_max =[]\nfor i in range(A.shape[1]):\n a_max.append(max(np.abs(A[:,i])))\n\nplt.figure(figsize=(12,6))\nplt.semilogx(f, a_max, marker='.', linestyle='none')\nplt.xlabel('Frequency [Hz]')\nplt.ylabel('Acceleration [m/s2]')\nplt.ylim(0.8, 1);",
"Exication of the test object\ngiven by the formula\n$$ a(t) = \\sum\\limits_{i} A_i sin \\left( \\omega_i t + \\phi_i \\right) + \\Psi(t) $$",
"#Aux variables definition\nZPA = 0.4\n\n#Amplitudes\nAA = [np.random.random()*ZPA for k in f]\n \n#Random angle\nfi = [np.random.random()*np.pi/2 for k in f]\n\n# excitation for frequency range\nB = np.empty((len(t), len(f)))\nfor j in range(len(w)):\n for i in range(len(t)):\n B[i,j] = AA[j]*np.sin(w[j]*t[i] + fi[j])\n\n# sum across columns (per frequency)\nC = np.sum(B, axis=1)\n\nplt.figure(figsize=(12,10))\nplt.subplot(211)\nplt.plot(t, C, color='orange')\nplt.xlabel('Time [s]')\nplt.ylabel('Acceleration [g]')\n\nplt.subplot(212)\nplt.plot(t, C, color='green')\nplt.xlabel('Time [s]')\nplt.ylabel('Acceleration [g]')\nplt.xlim(0,2)\nplt.ylim(-2,2)\nplt.tight_layout();",
"Window functions definition (Psi function)",
"window = np.hamming(len(C))\nwindow2 = np.bartlett(len(C))\nwindow3 = np.blackman(len(C))\nwindow4 = np.hanning(len(C))\nwindow5 = np.kaiser(len(C), beta=3.5)\n\nplt.figure(figsize=(12,6))\nplt.plot(t, window, label='hamming')\nplt.plot(t, window2, label='barlett')\nplt.plot(t, window3, label='blackman')\nplt.plot(t, window4, label='hanning')\nplt.plot(t, window5, label='kaiser')\nplt.legend();",
"Exitation with window function",
"plt.figure(figsize=(12,10))\nplt.plot(t, C*window5, color='orange', alpha=0.35)\nplt.plot(t, C*window, alpha=0.35)\nplt.xlabel('Time [s]')\nplt.ylabel('Acceleration [g]');",
"Frequency Response Function (FRF) of SDOF\nSpatial parameter model\n<img src='FRF.png'>",
"# definition of parameters\n\n# spring constant [N/m]\nk = 40\n\n# mass [kg]\nm = 2\n\n# damping\nc = 5\n\n# frequency\nf1 = np.arange(0, 45, 0.001)\n# angular frequency\nomega = 2*np.pi*f1\n\n# frequency response function (FRF)\nH = 1 / (-omega**2*m + 1j*omega*c + k)\n\nplt.figure(figsize=(10,8))\n\nplt.subplot(211)\nplt.semilogx(omega, np.abs(H))\nplt.axvline(np.sqrt(k/m), color='grey', linestyle='--')\nplt.axvline(np.sqrt(k/m - (c/(2*m))**2), color='orange', linestyle='--')\nplt.xlabel('angular frequency [rad/s]')\nplt.ylabel('H(omega)')\n\nplt.subplot(212)\nplt.semilogx(omega, np.angle(H)*360/(2*np.pi))\nplt.xlabel('angular frequency [rad/s]')\nplt.ylabel('Phase{H}')\n\nplt.tight_layout()\n\n1/k\n\nomega_0 = np.sqrt(k/m)\nomega_0\n\nsigma = c / (2*m)\nomega_d = np.sqrt(omega_0**2 - sigma**2)\nomega_d",
"Modal parameter model (because in reality mass, spring constant and damping coefficient are not known) <br>\n<img src='modal.png'>",
"# examplary data from testing\n\n# modal constant\nC = 1.4\n\n# resonance frequency (undamped nat freq)\nomega_00 = 2*np.pi*5.8\n\n# damping\ndzeta = 6.6/100\n\n# damped natural freq\nomega_d = omega_00 * np.sqrt(1 - dzeta**2)\n\n# residue\nR = -1j* C * 0.54\nR_con = np.conj(R)\n\n# sigma\nsigma = np.sqrt(omega_00**2 - omega_d**2)\n\n# pole location\np = -sigma + 1j*omega_d\np_con = np.conj(p)\n\n# frequency response function FRF\nH2 = R / (1j*omega - p) + R_con / (1j*omega - p_con)\n\nplt.figure(figsize=(10,8))\n\nplt.subplot(211)\nplt.semilogx(omega, np.abs(H2))\nplt.axvline(omega_d, color='grey', linestyle='--')\n#plt.axvline(np.sqrt(k/m - (c/(2*m))**2), color='orange', linestyle='--')\nplt.xlabel('angular frequency [rad/s]')\nplt.ylabel('H(omega)')\n\nplt.subplot(212)\nplt.semilogx(omega, np.angle(H2)*360/(2*np.pi))\nplt.xlabel('angular frequency [rad/s]')\nplt.ylabel('Phase{H}')\n\nplt.tight_layout()\n\nomega_d, omega_00\n\n# approach by SP institute with correction based on literature\nf0 = omega_00/(2*np.pi)\nH3 = C * (f0**2 - f1**2 - 1j*2*dzeta*f0*f1) / ((f0**2 - f1**2)**2 + (2*dzeta*f0*f1)**2)\n\n# as written in the materials from SP \nH4 = C * (f0**2 + 1j*2*dzeta*f0*f1) / (f0**2 - f1**2 + 1j*2*dzeta*f0*f1)\n\nmax(abs(H2)), max(abs(H3)), max(abs(H4)), R\n\nplt.figure(figsize=(10,8))\n\nplt.subplot(211)\nplt.semilogx(omega, np.abs(H2), label='B&K')\nplt.semilogx(omega, np.abs(H3), label='SP mod')\nplt.axvline(omega_d, color='grey', linestyle='--')\n#plt.axvline(np.sqrt(k/m - (c/(2*m))**2), color='orange', linestyle='--')\nplt.xlabel('angular frequency [rad/s]')\nplt.ylabel('H(omega)')\nplt.legend()\n\nplt.subplot(212)\nplt.semilogx(omega, np.angle(H2)*360/(2*np.pi))\nplt.semilogx(omega, np.angle(H3)*360/(2*np.pi))\nplt.axvline(omega_d, color='grey', linestyle='--')\nplt.xlabel('angular frequency [rad/s]')\nplt.ylabel('Phase{H}')\n\n\nplt.tight_layout()\n\nplt.figure(figsize=(10,8))\n\nplt.subplot(211)\nplt.semilogx(omega, np.abs(H3), label='SP 
mod')\nplt.semilogx(omega, np.abs(H4), label='SP')\nplt.axvline(omega_d, color='grey', linestyle='--')\n#plt.axvline(np.sqrt(k/m - (c/(2*m))**2), color='orange', linestyle='--')\nplt.xlabel('angular frequency [rad/s]')\nplt.ylabel('H(omega)')\nplt.legend()\n\nplt.subplot(212)\nplt.semilogx(omega, np.angle(H3)*360/(2*np.pi))\nplt.semilogx(omega, np.angle(H4)*360/(2*np.pi))\nplt.axvline(omega_d, color='grey', linestyle='--')\nplt.xlabel('angular frequency [rad/s]')\nplt.ylabel('Phase{H}')\n\n\nplt.tight_layout()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
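The nested for-loops that fill the response matrix `A` in the SDOF notebook above can be replaced by NumPy broadcasting. A sketch under the same parameters — this is an equivalent vectorized form, not the notebook's original code:

```python
import numpy as np

t = np.arange(0, 30, 0.002)           # time vector, as in the notebook
f = np.arange(2, 45, 2**(1.0/6.0))    # frequency vector, as in the notebook
dz = 0.05                             # relative damping factor
w = 2*np.pi*f

# t[:, None] (column) against w[None, :] (row) broadcasts to a
# (len(t), len(f)) grid, replacing the double loop over i and j
wd = w*np.sqrt(1 - dz**2)             # damped angular frequencies
A = np.exp(-dz*t[:, None]*w[None, :]) * (
    2*dz*np.cos(t[:, None]*wd[None, :])
    + (1 - 2*dz**2)/np.sqrt(1 - dz**2)*np.sin(t[:, None]*wd[None, :]))
print(A.shape)  # (len(t), len(f))
```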
IS-ENES-Data/submission_forms
|
test/prov/old/prov-submission.ipynb
|
apache-2.0
|
[
"Representation of data submission workflow components based on W3C-PROV",
"%load_ext autoreload\n%autoreload 2\n\nfrom prov.model import ProvDocument\nd1 = ProvDocument()",
"Model is along the concept described in https://www.w3.org/TR/prov-primer/",
"from IPython.display import display, Image\nImage(filename='key-concepts.png')\n\nimport sys\n\n#sys.path.append('/home/stephan/Repos/ENES-EUDAT/submission_forms')\nsys.path.append('C:\\\\Users\\\\Stephan Kindermann\\\\Documents\\\\GitHub\\\\submission_forms')\nfrom dkrz_forms import form_handler\nfrom dkrz_forms.config import *\n\nname_space = project_config.NAME_SPACES\ncordex_dict = project_config.PROJECT_DICT['test']\n# add namespaces for submission provenance capture\n\nfor key,value in name_space.iteritems():\n d1.add_namespace(key,value)\n\n#d1.add_namespace()\n# to do: look into some predefined vocabs, e.g. dublin core, iso19139,foaf etc.\n\nd1.add_namespace(\"enes_entity\",'http://www.enes.org/enes_entitiy#')\nd1.add_namespace('enes_agent','http://www.enes.org/enes_agent#')\nd1.add_namespace('data_collection','http://www.enes.org/enes_entity/file_collection')\nd1.add_namespace('data_manager','http://www.enes.org/enes_agent/data_manager')\nd1.add_namespace('data_provider','http://www.enes.org/enes_agent/data_provider')\nd1.add_namespace('subm','http://www.enes.org/enes_entity/data_submsission')\nd1.add_namespace('foaf','http://xmlns.com/foaf/0.1/')",
"Example name spaces\n(from DOI: 10.3390/ijgi5030038 , mehr unter https://github.com/tsunagun/vocab/blob/master/all_20130125.csv)\nowl Web Ontology Language http://www.w3.org/2002/07/owl#\ndctype DCMI Type Vocabulary http://purl.org/dc/dcmitype/\ndco DCO Ontology http://info.deepcarbon.net/schema#\nprov PROV Ontology http://www.w3.org/ns/prov#\nskos Simple Knowledge\n Organization System http://www.w3.org/2004/02/skos/core#\nfoaf FOAF Ontology http://xmlns.com/foaf/0.1/\nvivo VIVO Ontology http://vivoweb.org/ontology/core#\nbibo Bibliographic Ontology http://purl.org/ontology/bibo/\nxsd XML Schema Datatype http://www.w3.org/2001/XMLSchema#\nrdf Resource Description\n Framework http://www.w3.org/1999/02/22-rdf-syntax-ns#\nrdfs Resource Description\n Framework Schema http://www.w3.org/2000/01/rdf-schema#",
"# later: organize things in bundles\ndata_manager_ats = {'foaf:givenName':'Peter','foaf:mbox':'lenzen@dkzr.de'}\n\nd1.entity('sub:empty')\ndef add_stage(agent,activity,in_state,out_state):\n # in_stage exists, out_stage is generated\n d1.agent(agent, data_manager_ats)\n d1.activity(activity)\n d1.entity(out_state)\n \n d1.wasGeneratedBy(out_state,activity)\n d1.used(activity,in_state)\n d1.wasAssociatedWith(activity,agent)\n d1.wasDerivedFrom(out_state,in_state)\n\nimport json\nform_file = open('/home/stephan/tmp/Repos/submission_forms_repo/test/test_ki_sk1.json',\"r\")\njson_info = form_file.read()\n#json_info[\"__type__\"] = \"sf\",\nform_file.close()\nsf_dict = json.loads(json_info)\n\nform_handler.FForm(sf_dict)\n\nsf = form_handler.FForm(sf_dict)\n\nprint sf.__dict__\n\n\nprint sf.sub\n\n\ndata_provider = sf.sub['first_name']+'_'+sf.sub['last_name']\nsubmission_manager = sf.sub['responsible_person']\ningest_manager = sf.ing['responsible_person']\nqa_manager = sf.ing['responsible_person']\npublication_manager = sf.pub['responsible_person']\n\nadd_stage(agent='data_provider:test_user_id',activity='subm:submit',in_state=\"subm:empty\",out_state='subm:out1_sub')\nadd_stage(agent='data_manager:peter_lenzen_id',activity='subm:review',in_state=\"subm:out1_sub\",out_state='subm:out1_rev')\nadd_stage(agent='data_manager:peter_lenzen_id',activity='subm:ingest',in_state=\"subm:out1_rev\",out_state='subm:out1_ing')\nadd_stage(agent='data_manager:hdh_id',activity='subm:check',in_state=\"subm:out1_ing\",out_state='subm:out1_che')\nadd_stage(agent='data_manager:katharina_b_id',activity='subm:publish',in_state=\"subm:out1_che\",out_state='subm:out1_pub')\nadd_stage(agent='data_manager:lta_id',activity='subm:archive',in_state=\"subm:out1_pub\",out_state='subm:out1_arch')\n\nmylist = [a]\na = {'1':'2'}",
"assign information to provenance graph nodes and edges",
"%matplotlib inline\nd1.plot()\n\n\n\nd1.serialize()\n\nimport json\ningest_prov_file = open('ingest_prov_1.json','w')\n\nprov_data = d1.serialize()\nprov_data_json = json.dumps(prov_data)\n\ningest_prov_file.write(prov_data)\n\ningest_prov_file.close()\n\n#d1.wasAttributedTo(data_submission,'????')",
"Transform submission object to a provenance graph",
"#d1.get_records()\nsubmission = d1.get_record('subm:out1_sub')[0]\nreview = d1.get_record('subm:out1_rev')[0]\ningest = d1.get_record('subm:out1_ing')[0]\ncheck = d1.get_record('subm:out1_che')[0]\npublication = d1.get_record('subm:out1_pub')[0]\nlta = d1.get_record('subm:out1_arch')[0]\n\n\nres = form_handler.prefix_dict(sf.sub,'sub',sf.sub.keys())\nres['sub:status']=\"fertig\"\nprint res\ning = form_handler.prefix_dict(sf.ing,'ing',sf.ing.keys())\nqua = form_handler.prefix_dict(sf.qua,'qua',sf.qua.keys())\npub = form_handler.prefix_dict(sf.pub,'pub',sf.pub.keys())\n\nsubmission.add_attributes(res)\ningest.add_attributes(ing)\ncheck.add_attributes(qua)\npublication.add_attributes(pub)\n\nche_act = d1.get_record('subm:check') \ntst = che_act[0]\ntest_dict = {'subm:test':'test'}\ntst.add_attributes(test_dict)\n\nprint tst\ntst.FORMAL_ATTRIBUTES\ntst.\n\nche_act = d1.get_record('subm:check') \n#tst.formal_attributes\n#tst.FORMAL_ATTRIBUTES\ntst.add_attributes({'foaf:name':'tst'})\nprint tst.attributes\n#for i in tst:\n # print i\n#tst.insert([('subm:givenName','sk')])\n\nimport sys\nsys.path.append('/home/stephan/Repos/ENES-EUDAT/submission_forms')\nfrom dkrz_forms import form_handler\nsf,repo = form_handler.init_form(\"CORDEX\")\n\n\n\ninit_dict = sf.__dict__ \nsub_form = form_handler.prefix(sf,'subm',sf.__dict__.keys()) \n\nsub_dict = sub_form.__dict__\n\n#init_state = d1.get_record('subm:empty')[0]\n#init_state.add_attributes(init_dict)\n\nsub_state = d1.get_record('subm:out1_sub')[0]\ninit_state.add_attributes(sub_dict)\n\n\ntst_dict = {'test1':'val1','test2':'val2'}\ntst = form_handler.submission_form(tst_dict) \nprint tst.__dict__\n\n\nprint result.__dict__\n\ndict_from_class(sf)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
brunnerp/PythonFachkurs
|
05_dataio_matplotlib/FileHandling.ipynb
|
mit
|
[
"File handling\nInput/Output",
"# input function reads from standard in and returns a string\nanswer = input(\"enter your name:\")\nprint(answer)",
"Simple file operations\nwriting to files",
"# files can be opened using the open function, which creates a file object\nf = open( 'new_file.txt', 'w' ) # attention overwrites existing file\n# important functions: read, write, readlines, writelines\n# dir(f) \nf.write(\"Hallo Welt!\")\nf.close()\n\n# writelines\nlines = []\nfor i in range(12):\n lines.append(\"Number: \" + str(i) + '\\n')\nprint(lines)\n\n\nf = open( 'new_file.txt', 'a' ) # open file and append to it\nf.writelines(lines)\nf.close()",
"reading from files",
"# usage of with to open files is recommended in python\nwith open('new_file.txt', 'r') as f: # open file for reading\n content = f.read() # get the whole content of a file into a string\n\nprint(content)\n\n\nwith open('new_file.txt', 'r') as f: # open file for reading\n lines = f.readlines()\nprint(lines)\nf.close()",
"Parsing data",
"with open('data.txt', 'r') as f:\n lines = f.readlines()\n#print(lines)\n\ndata = {}\n\n# iterate over all lines in the file\nfor line in lines:\n if line.startswith('#'): # skip comments\n continue\n left, right = line.split(':') # split splits a string at the occurence of the keyword\n data[ left.strip() ] = float(right) # strip removes leading and tailing spaces\nprint(data)\n",
"Object serialization using pickle",
"# the pickle class is used to serialize python variables (convert them to bytestrings)\nimport pickle",
"how pickle converts objects to strings",
"d = { 1: 'green', 2: 'blue', 3: 'red' }\npickle.dumps(d) ",
"how to save objects to a file",
"with open('save.p', 'wb') as f:\n pickle.dump(d, f)\n\nwith open('save.p', 'wb') as f:\n pickle.dump(d, f)\n\nwith open('save.p', 'rb') as f:\n loaded_data = pickle.load(f)\nprint(loaded_data)",
"Organizing files in folders",
"# create a folder for the data files\nimport os\n\n# get current working directory\nwork_path = os.getcwd()\nprint(work_path)\n\n# define path for data files\ndata_path = os.path.join(work_path, 'data/')\n\n# check if folder exists already\nif not os.path.exists(data_path): \n os.mkdir(data_path)",
"Comma separated value (CSV) files\nreading tabular data using pandas",
"with open('real_estate.csv', 'r') as f:\n f.readlines()\n\nimport pandas as pd\ndf = pd.read_csv( 'real_estate.csv' )\nprint(df)\n#print(df.values.tolist())",
"JavaScript Object Notation (JSON)\nJSON (/ˈdʒeɪsən/ JAY-sən),[1] or JavaScript Object Notation, is an open standard format that uses human-readable text to transmit data objects consisting of attribute–value pairs. It is used primarily to transmit data between a server and web application, as an alternative to XML.\nAlthough originally derived from the JavaScript scripting language, JSON is a language-independent data format. Code for parsing and generating JSON data is readily available in many programming languages.\nhttps://en.wikipedia.org/wiki/JSON",
"cat employees.json\n\nimport json\n\nd = json.load( open('employees.json') ) \n\nd['employees'][1] "
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
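The `data.txt` parsing loop in the file-handling notebook above depends on a file that ships with the course material. The same logic can be sketched self-containedly with `io.StringIO` standing in for the file; the sample keys `alpha`/`beta` are made up for illustration:

```python
import io

raw = """# a comment line is skipped
alpha : 1.5
beta: 2.0
"""

data = {}
for line in io.StringIO(raw):          # StringIO iterates line by line, like a file
    if line.startswith('#') or not line.strip():
        continue
    left, right = line.split(':')      # split at the separator
    data[left.strip()] = float(right)  # strip removes spaces around the key

print(data)  # {'alpha': 1.5, 'beta': 2.0}
```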
piyueh/SEM-Toolbox
|
solutions/chapter02/exercise01.ipynb
|
mit
|
[
"Exercise 1",
"import numpy\nfrom matplotlib import pyplot\n% matplotlib inline\n\nimport os, sys\nsys.path.append(os.path.split(os.path.split(os.getcwd())[0])[0])\n\nimport utils.quadrature as quad",
"(a)\nLet $f = x^6$. And $\\int_{-1}^{1} f dx = \\frac{2}{7}$. \nUse Gauss-Lobatto-Legendre quadrature to numerically evaluate $\\int_{-1}^{1} f dx$ with number of quadrature points $Q=4,\\ 5,\\ 6$.",
"def f(x):\n \"\"\"the integrand: x**6\"\"\"\n \n return x**6\n\nprint(\"The exact solution is: \", 2./7.)\n\nfor Qi in [4, 5, 6]:\n qd = quad.GaussLobattoJacobi(Qi)\n ans = qd(f, -1, 1)\n print(\"Q = {0}, ans = {1}, absolute error is {2}\".format(Qi, ans, abs(ans-2./7.)))",
"(b)\nLet $f = x^6$. And $\\int_{-1}^{2} f dx = \\frac{129}{7}$. \nUse Gauss-Lobatto-Legendre quadrature to numerically evaluate $\\int_{-1}^{2} f dx$ with number of quadrature points $Q=4,\\ 5,\\ 6$.",
"print(\"The exact solution is: \", 129./7.)\n\nfor Qi in [4, 5, 6]:\n qd = quad.GaussLobattoJacobi(Qi)\n ans = qd(f, -1, 2)\n print(\"Q = {0}, ans = {1}, absolute error is {2}\".format(Qi, ans, abs(ans-129./7.)))",
"(c)\nLet $f = \\sin{x}$. And $\\int_{0}^{\\pi/2} f dx = 1$. \nUse Gauss-Lobatto-Legendre quadrature to numerically evaluate $\\int_{0}^{\\pi/2} f dx$ with number of quadrature points $2 \\le Q \\le 8$ and plot the error to check the convergence order.",
"def f(x):\n \"\"\"the integrand: x**6\"\"\"\n \n return numpy.sin(x)\n\nprint(\"The exact solution is: \", 1.)\n\nQ = range(2, 9)\nerr = numpy.zeros_like(Q, dtype=numpy.float64)\nfor i, Qi in enumerate(Q):\n qd = quad.GaussLobattoJacobi(Qi)\n ans = qd(f, 0., numpy.pi/2.)\n err[i] = numpy.abs(ans - 1.)\n print(\"Q = {0}, ans = {1}, absolute error is {2}\".format(Qi, ans, err[i]))\n\npyplot.semilogy(Q, err, 'k.-', lw=2, markersize=15)\npyplot.title(r\"Error of numerical integration of $\\int_{0}^{\\pi/2}\\sin{(x)} dx$\" + \n \"\\n with Gauss-Lobatto-Legendre quadrature\", y=1.08)\npyplot.xlabel(r\"$Q$\" + \", number of quadratiure points\")\npyplot.ylabel(\"absolute error\")\npyplot.grid();"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
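The exactness pattern in the quadrature exercise above (errors at machine precision once Q is large enough) can be cross-checked against NumPy's built-in Gauss-Legendre rule. Note this uses `numpy.polynomial.legendre.leggauss` (interior nodes only, exact to degree 2Q-1), not the notebook's `utils.quadrature` Gauss-Lobatto implementation (endpoints included, exact to degree 2Q-3):

```python
import numpy as np

# Q = 4 Gauss-Legendre points integrate polynomials up to degree 7 exactly,
# so x**6 over [-1, 1] comes out as 2/7 up to rounding error
x, w = np.polynomial.legendre.leggauss(4)
approx = np.sum(w * x**6)
print(abs(approx - 2.0/7.0) < 1e-12)  # True
```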
brettavedisian/Liquid-Crystals-Summer-2015
|
.ipynb_checkpoints/Annulus_Simple_Matplotlib-checkpoint.ipynb
|
mit
|
[
"# Annulus_Simple_Matplotlib\n# mjm June 20, 2016\n#\n# solve Poisson eqn with Vin = V0 and Vout = 0 for an annulus\n# with inner radius r1, outer radius r2\n# Vin = 10, Vout =0\n#\nfrom dolfin import *\nfrom mshr import * # need for Circle object to make annulus \nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.tri as tri\nfrom mpl_toolkits.mplot3d import Axes3D\n#parameters[\"plotting_backend\"] = \"matplotlib\"\nimport logging\nlogging.getLogger(\"FFC\").setLevel(logging.WARNING)\n#from matplotlib import cm\n%matplotlib inline",
"Commands for plotting\nThese are used so the the usual \"plot\" will use matplotlib.",
"# commands for plotting, \"plot\" works with matplotlib \ndef mesh2triang(mesh):\n xy = mesh.coordinates()\n return tri.Triangulation(xy[:, 0], xy[:, 1], mesh.cells())\n\ndef mplot_cellfunction(cellfn):\n C = cellfn.array()\n tri = mesh2triang(cellfn.mesh())\n return plt.tripcolor(tri, facecolors=C)\n\ndef mplot_function(f):\n mesh = f.function_space().mesh()\n if (mesh.geometry().dim() != 2):\n raise AttributeError('Mesh must be 2D')\n # DG0 cellwise function\n if f.vector().size() == mesh.num_cells():\n C = f.vector().array()\n return plt.tripcolor(mesh2triang(mesh), C)\n # Scalar function, interpolated to vertices\n elif f.value_rank() == 0:\n C = f.compute_vertex_values(mesh)\n return plt.tripcolor(mesh2triang(mesh), C, shading='gouraud')\n # Vector function, interpolated to vertices\n elif f.value_rank() == 1:\n w0 = f.compute_vertex_values(mesh)\n if (len(w0) != 2*mesh.num_vertices()):\n raise AttributeError('Vector field must be 2D')\n X = mesh.coordinates()[:, 0]\n Y = mesh.coordinates()[:, 1]\n U = w0[:mesh.num_vertices()]\n V = w0[mesh.num_vertices():]\n return plt.quiver(X,Y,U,V)\n\n# Plot a generic dolfin object (if supported)\ndef plot(obj):\n plt.gca().set_aspect('equal')\n if isinstance(obj, Function):\n return mplot_function(obj)\n elif isinstance(obj, CellFunctionSizet):\n return mplot_cellfunction(obj)\n elif isinstance(obj, CellFunctionDouble):\n return mplot_cellfunction(obj)\n elif isinstance(obj, CellFunctionInt):\n return mplot_cellfunction(obj)\n elif isinstance(obj, Mesh):\n if (obj.geometry().dim() != 2):\n raise AttributeError('Mesh must be 2D')\n return plt.triplot(mesh2triang(obj), color='#808080')\n\n raise AttributeError('Failed to plot %s'%type(obj))\n# end of commands for plotting",
"Annulus\nThis is the field in an annulus. We specify boundary conditions and solve the problem.",
"r1 = 1 # inner circle radius\nr2 = 10 # outer circle radius\n\n# shapes of inner/outer boundaries are circles\nc1 = Circle(Point(0.0, 0.0), r1)\nc2 = Circle(Point(0.0, 0.0), r2)\n\ndomain = c2 - c1 # solve between circles\nres = 20\nmesh = generate_mesh(domain, res)\n\nclass outer_boundary(SubDomain):\n\tdef inside(self, x, on_boundary):\n\t\ttol = 1e-2\n\t\treturn on_boundary and (abs(sqrt(x[0]*x[0] + x[1]*x[1])) - r2) < tol\n\nclass inner_boundary(SubDomain):\n\tdef inside(self, x, on_boundary):\n\t\ttol = 1e-2\n\t\treturn on_boundary and (abs(sqrt(x[0]*x[0] + x[1]*x[1])) - r1) < tol\n\nouterradius = outer_boundary()\ninnerradius = inner_boundary()\n\nboundaries = FacetFunction(\"size_t\", mesh)\n\nboundaries.set_all(0)\nouterradius.mark(boundaries,2)\ninnerradius.mark(boundaries,1)\n\nV = FunctionSpace(mesh,'Lagrange',1)\n\nn = Constant(10.0) \n\nbcs = [DirichletBC(V, 0, boundaries, 2),\n\t DirichletBC(V, n, boundaries, 1)]\n#\t DirichletBC(V, nx, boundaries, 1)]\n\nu = TrialFunction(V)\n\nv = TestFunction(V)\nf = Constant(0.0)\na = inner(nabla_grad(u), nabla_grad(v))*dx\nL = f*v*dx\n\nu = Function(V)\nsolve(a == L, u, bcs)\n",
"Plotting with matplotlib\nNow the usual \"plot\" commands will work for plotting the mesh and the function.",
"plot(mesh) # usual Fenics command, will use matplotlib\n\nplot(u) # usual Fenics command, will use matplotlib",
"If you want to do usual \"matplotlib\" stuff then you still need \"plt.\" prefix on commands.",
"plt.figure()\nplt.subplot(1,2,1)\nplot(mesh)\nplt.xlabel('x')\nplt.ylabel('y')\nplt.subplot(1,2,2)\nplot(u) \nplt.title('annulus solution')",
"Plotting along a line\nIt turns out the the solution \"u\" is a function that can be evaluated at a point. So in the next cell we loop through a line and make a vector of points for plotting. You just need to give it coordinates $u(x,y)$.",
"y = np.linspace(r1,r2*0.99,100)\nuu = []\nnp.array(uu)\nfor i in range(len(y)):\n yy = y[i]\n uu.append(u(0.0,yy)) #evaluate u along y axis\nplt.figure()\nplt.plot(y,uu)\nplt.grid(True)\nplt.xlabel('y')\nplt.ylabel('V')\n\nu"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
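The FEniCS solution in the annulus notebook above can be sanity-checked against the closed-form Laplace solution between concentric circles, $V(r) = V_0 \ln(r_2/r)/\ln(r_2/r_1)$, which satisfies $V(r_1)=V_0$ and $V(r_2)=0$. A sketch using the notebook's radii and inner potential:

```python
import numpy as np

r1, r2, V0 = 1.0, 10.0, 10.0  # inner/outer radii and inner potential, as above

def V(r):
    # radially symmetric solution of Laplace's equation in 2-D:
    # V(r) = A*ln(r) + B, with A and B fixed by V(r1) = V0 and V(r2) = 0
    return V0 * np.log(r2 / r) / np.log(r2 / r1)

print(V(r1), V(r2))  # recovers the boundary conditions: V0 at r1, 0 at r2
```

Evaluating this along the y axis (as the notebook does with `u(0.0, yy)`) should reproduce the logarithmic profile of the final plot.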
CyberCRI/dataanalysis-herocoli-redmetrics
|
v1.52/Tests/4.1 User comparison tests.ipynb
|
cc0-1.0
|
[
"User comparison tests\nTable of Contents\nPreparation\nUser data vectors\nUser lists\nSessions' checkpoints\nAssembly\nTime\nPreparation\n<a id=preparation />",
"%run \"../Functions/1. Google form analysis.ipynb\"\n%run \"../Functions/4. User comparison.ipynb\"",
"Data vectors of users\n<a id=userdatavectors />",
"#getAllResponders()\n\nsetAnswerTemporalities(gform)",
"getAllUserVectorData",
"# small sample\n#allData = getAllUserVectorData( getAllUsers( rmdf152 )[:10] )\n\n# complete set\n#allData = getAllUserVectorData( getAllUsers( rmdf152 ) )\n\n# subjects who answered the gform\nallData = getAllUserVectorData( getAllResponders() )\n\n# 10 subjects who answered the gform\n#allData = getAllUserVectorData( getAllResponders()[:10] )\n\nefficiencies = allData.loc['efficiency'].sort_values()\nefficiencies.index = range(0, len(allData.columns))\nefficiencies.plot(title = 'efficiency')\n\nefficiencies2 = allData.loc['efficiency'].sort_values()\nefficiencies2 = efficiencies2[efficiencies2 != 0]\nefficiencies2.index = range(0, len(efficiencies2))\nefficiencies2 = np.log(efficiencies2)\nefficiencies2.plot(title = 'efficiency log')\n\nmaxChapter = allData.loc['maxChapter'].sort_values()\nmaxChapter.index = range(0, len(allData.columns))\nmaxChapter.plot(title = 'maxChapter')\n\nlen(allData.columns)\n\nuserIds = getAllResponders()\n_source = correctAnswers\n\n# _source is used as correction source, if we want to include answers to these questions\n#def getAllUserVectorData( userIds, _source = [] ):\n \n# result\nisInitialized = False\nallData = []\n\nf = FloatProgress(min=0, max=len(userIds))\ndisplay(f)\n\nfor userId in userIds:\n #print(str(userId))\n f.value += 1\n if not isInitialized:\n isInitialized = True\n allData = getUserDataVector(userId, _source = _source)\n else:\n allData = pd.concat([allData, getUserDataVector(userId, _source = _source)], axis=1)\n\n#print('done')\nallData\n\nuserId",
"Correlation Matrix",
"methods = ['pearson', 'kendall', 'spearman']\n\n_allUserVectorData = allData.T\n_method = methods[0]\n_title='RedMetrics Correlations'\n_abs=True\n_clustered=False\n_figsize = (20,20)\n\n\n#def plotAllUserVectorDataCorrelationMatrix(\n# _allUserVectorData,\n# _method = methods[0], \n# _title='RedMetrics Correlations', \n# _abs=False,\n# _clustered=False, \n# _figsize = (20,20)\n#):\n \n_progress = FloatProgress(min=0, max=3)\ndisplay(_progress)\n\n# computation of correlation matrix\n_m = _method\nif(not (_method in methods)):\n _m = methods[0]\n_correlation = _allUserVectorData.astype(float).corr(_m)\n_progress.value += 1\nif(_abs):\n _correlation = _correlation.abs()\n_progress.value += 1\n\n# plot\nif(_clustered):\n sns.clustermap(_correlation,cmap=plt.cm.jet,square=True,figsize=_figsize)\nelse:\n _fig = plt.figure(figsize=_figsize)\n _ax = plt.subplot(111)\n _ax.set_title(_title)\n sns.heatmap(_correlation,ax=_ax,cmap=plt.cm.jet,square=True)\n_progress.value += 1\n\ngform['Temporality'].unique()\n\nallData.loc['scoreundefined'].dropna()\n\ngetAllUsers(rmdf152)[:10]\n\nlen(getAllUsers(rmdf152))",
"List of users and their sessions\n<a id=userlists />",
"userSessionsRelevantColumns = ['customData.localplayerguid', 'sessionId']\nuserSessions = rmdf152[rmdf152['type']=='start'].loc[:,userSessionsRelevantColumns]\n\nuserSessions = userSessions.rename(index=str, columns={'customData.localplayerguid': 'userId'})\nuserSessions.head()\n\n#groupedUserSessions = userSessions.groupby('customData.localplayerguid')\n#groupedUserSessions.head()\n#groupedUserSessions.describe().head()",
"List of sessions with their checkpoints achievements\n<a id=sessionscheckpoints />",
"checkpointsRelevantColumns = ['sessionId', 'customData.localplayerguid', 'type', 'section', 'userTime']\ncheckpoints = rmdf152.loc[:, checkpointsRelevantColumns]\n\ncheckpoints = checkpoints[checkpoints['type']=='reach'].loc[:,['section','sessionId','userTime']]\ncheckpoints = checkpoints[checkpoints['section'].str.startswith('tutorial', na=False)]\n#checkpoints = checkpoints.groupby(\"sessionId\")\n#checkpoints = checkpoints.max()\ncheckpoints.head()",
"Assembly of both\n<a id=assembly />",
"#assembled = userSessions.combine_first(checkpoints)\nassembled = pd.merge(userSessions, checkpoints, on='sessionId', how='outer')\nassembled.head()\n\nuserSections = assembled.drop('sessionId', 1)\nuserSections.head()\n\nuserSections = userSections.dropna()\nuserSections.head()\n\ncheckpoints = userSections.groupby(\"userId\")\ncheckpoints = checkpoints.max()\ncheckpoints.head()",
"Time analysis\n<a id=time />",
"#userTimedSections = userSections.groupby(\"userId\").agg({ \"userTime\": np.min })\n#userTimedSections = userSections.groupby(\"userId\")\nuserTimes = userSections.groupby(\"userId\").agg({ \"userTime\": [np.min, np.max] })\nuserTimes[\"duration\"] = pd.to_datetime(userTimes[\"userTime\"][\"amax\"]) - pd.to_datetime(userTimes[\"userTime\"][\"amin\"])\nuserTimes[\"duration\"] = userTimes[\"duration\"].map(lambda x: np.timedelta64(x, 's'))\nuserTimes = userTimes.sort_values(by=['duration'], ascending=[False])\nuserTimes.head()",
"TODO\nuserTimes.loc[:,'duration']\nuserTimes = userTimes[4:]\nuserTimes[\"duration_seconds\"] = userTimes[\"duration\"].map(lambda x: pd.Timedelta(x).seconds)\nmaxDuration = np.max(userTimes[\"duration_seconds\"])\nuserTimes[\"duration_rank\"] = userTimes[\"duration_seconds\"].rank(ascending=False)\nuserTimes.plot(x=\"duration_rank\", y=\"duration_seconds\")\nplt.xlabel(\"game session\")\nplt.ylabel(\"time played (s)\")\nplt.legend('')\nplt.xlim(0, 139)\nplt.ylim(0, maxDuration)\nuserTimedSections = userSections.groupby(\"section\").agg({ \"userTime\": np.min })\nuserTimedSections\nuserTimedSections[\"firstReached\"] = pd.to_datetime(userTimedSections[\"userTime\"])\nuserTimedSections.head()\nuserTimedSections.drop('userTime', 1)\nuserTimedSections.head()\nuserTimedSections[\"firstCompletionDuration\"] = userTimedSections[\"firstReached\"].diff()\nuserTimedSections.head()",
"sessionCount = 1\n_rmDF = rmdf152\nsample = gform\nbefore = False\nafter = True\ngfMode = False\nrmMode = True\n\n#def getAllUserVectorDataCustom(before, after, gfMode = False, rmMode = True, sessionCount = 1, _rmDF = rmdf152)\nuserIds = []\n\nif (before and after):\n userIds = getSurveysOfUsersWhoAnsweredBoth(sample, gfMode = gfMode, rmMode = rmMode)\nelif before:\n if rmMode:\n userIds = getRMBefores(sample)\n else:\n userIds = getGFBefores(sample)\nelif after:\n if rmMode:\n userIds = getRMAfters(sample)\n else:\n userIds = getGFormAfters(sample)\nif(len(userIds) > 0):\n userIds = userIds[localplayerguidkey]\n allUserVectorData = getAllUserVectorData(userIds, _rmDF = _rmDF)\n allUserVectorData = allUserVectorData.T\n result = allUserVectorData[allUserVectorData['sessionsCount'] == sessionCount].T\n\nelse:\n print(\"no matching user\")\n result = []\n\nresult\n\ngetAllUserVectorDataCustom(False, True)\n\nuserIdsBoth = getSurveysOfUsersWhoAnsweredBoth(gform, gfMode = True, rmMode = True)[localplayerguidkey]\nallUserVectorData = getAllUserVectorData(userIdsBoth)\nallUserVectorData = allUserVectorData.T\nallUserVectorData[allUserVectorData['sessionsCount'] == 1]",
"user progress classification\ntinkering",
"testUser = \"3685a015-fa97-4457-ad73-da1c50210fe1\"\n\ndef getScoreFromBinarized(binarizedAnswers):\n gformIndices = binarizedAnswers.index.map(lambda s: int(s.split(correctionsColumnNameStem)[1]))\n return pd.Series(np.dot(binarizedAnswers, np.ones(binarizedAnswers.shape[1])), index=gform.loc[gformIndices, localplayerguidkey]) \n\n#allResponders = getAllResponders()\n\n#gf_both = getSurveysOfUsersWhoAnsweredBoth(gform, gfMode = True, rmMode = False)\nrm_both = getSurveysOfUsersWhoAnsweredBoth(gform, gfMode = False, rmMode = True)\n#gfrm_both = getSurveysOfUsersWhoAnsweredBoth(gform, gfMode = True, rmMode = True)\n\nsciBinarizedBefore = getAllBinarized(_form = getRMBefores(rm_both))\nsciBinarizedAfter = getAllBinarized(_form = getRMAfters(rm_both))\n\nscoresBefore = getScoreFromBinarized(sciBinarizedBefore)\nscoresAfter = getScoreFromBinarized(sciBinarizedAfter)\n\nmedianBefore = np.median(scoresBefore)\nmedianAfter = np.median(scoresAfter)\nmaxScore = sciBinarizedBefore.shape[1]\n\nindicators = pd.DataFrame()\nindicators['before'] = scoresBefore\nindicators['after'] = scoresAfter\n\nindicators['delta'] = scoresAfter - scoresBefore\nindicators['maxPotentialDelta'] = maxScore - scoresBefore\nfor index in indicators['maxPotentialDelta'].index:\n if (indicators.loc[index, 'maxPotentialDelta'] == 0):\n indicators.loc[index, 'maxPotentialDelta'] = 1 \n\nindicators['relativeBefore'] = scoresBefore / medianBefore\nindicators['relativeAfter'] = scoresAfter / medianBefore\nindicators['relativeDelta'] = indicators['delta'] / medianBefore\nindicators['realizedPotential'] = indicators['delta'] / indicators['maxPotentialDelta']\nindicators['increaseRatio'] = indicators['before']\nfor index in indicators['increaseRatio'].index:\n if (indicators.loc[index, 'increaseRatio'] == 0):\n indicators.loc[index, 'increaseRatio'] = 1 \nindicators['increaseRatio'] = indicators['delta'] / indicators['increaseRatio']\n\nindicators\n\n(min(indicators['relativeBefore']), 
max(indicators['relativeBefore'])),\\\n(min(indicators['relativeDelta']), max(indicators['relativeDelta'])),\\\nmedianBefore,\\\nnp.median(indicators['relativeBefore']),\\\nnp.median(indicators['relativeDelta'])\\\n\nindicatorX = 'relativeBefore'\nindicatorY = 'relativeDelta'\n\ndef scatterPlotIndicators(indicatorX, indicatorY):\n \n print(indicatorX + ' range: ' + str((min(indicators[indicatorX]), max(indicators[indicatorX]))))\n print(indicatorY + ' range: ' + str((min(indicators[indicatorY]), max(indicators[indicatorY]))))\n print(indicatorX + ' median: ' + str(np.median(indicators[indicatorX])))\n print(indicatorY + ' median: ' + str(np.median(indicators[indicatorY])))\n \n fig = plt.figure()\n ax1 = fig.add_subplot(111)\n ax1.scatter(indicators[indicatorX], indicators[indicatorY])\n plt.xlabel(indicatorX)\n plt.ylabel(indicatorY)\n\n # vertical line\n plt.plot( [np.median(indicators[indicatorX]), np.median(indicators[indicatorX])],\\\n [min(indicators[indicatorY]), max(indicators[indicatorY])],\\\n 'k-', lw=2)\n\n # horizontal line\n plt.plot( [min(indicators[indicatorX]), max(indicators[indicatorX])],\\\n [np.median(indicators[indicatorY]), np.median(indicators[indicatorY])],\\\n 'k-', lw=2)\n\nindicators.columns\n\nscatterPlotIndicators('relativeBefore', 'relativeDelta')\n\nscatterPlotIndicators('relativeBefore', 'realizedPotential')\n\nscatterPlotIndicators('relativeBefore', 'increaseRatio')\n\nscatterPlotIndicators('relativeBefore', 'relativeAfter')\n\nscatterPlotIndicators('maxPotentialDelta', 'realizedPotential')"
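The scoring trick inside getScoreFromBinarized can be checked in isolation: dotting a binarized answer matrix with a ones vector is just a per-row count of correct answers. A small sketch with made-up data:

```python
import numpy as np

# Toy binarized answers: 1 = correct, 0 = incorrect (made-up values)
binarized = np.array([[1, 0, 1],
                      [0, 0, 1]], dtype=float)
# Dot with a ones vector == sum along each row
scores = np.dot(binarized, np.ones(binarized.shape[1]))
assert np.array_equal(scores, binarized.sum(axis=1))
print(scores)  # [2. 1.]
```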
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Bismarrck/tensorflow
|
tensorflow/examples/udacity/2_fullyconnected.ipynb
|
apache-2.0
|
[
"Deep Learning\nAssignment 2\nPreviously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.\nThe goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.",
"# These are all the modules we'll be using later. Make sure you can import them\n# before proceeding further.\nfrom __future__ import print_function\nimport numpy as np\nimport tensorflow as tf\nimport time\nfrom six.moves import cPickle as pickle\nfrom six.moves import range",
"First reload the data we generated in 1_notmnist.ipynb.",
"pickle_file = 'notMNIST.pickle'\n\nwith open(pickle_file, 'rb') as f:\n save = pickle.load(f)\n train_dataset = save['train_dataset']\n train_labels = save['train_labels']\n valid_dataset = save['valid_dataset']\n valid_labels = save['valid_labels']\n test_dataset = save['test_dataset']\n test_labels = save['test_labels']\n del save # hint to help gc free up memory\n print('Training set', train_dataset.shape, train_labels.shape)\n print('Validation set', valid_dataset.shape, valid_labels.shape)\n print('Test set', test_dataset.shape, test_labels.shape)",
"Reformat into a shape that's more adapted to the models we're going to train:\n- data as a flat matrix,\n- labels as float 1-hot encodings.",
"image_size = 28\nnum_labels = 10\n\ndef reformat(dataset, labels):\n dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)\n # Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]\n labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n return dataset, labels\ntrain_dataset, train_labels = reformat(train_dataset, train_labels)\nvalid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\ntest_dataset, test_labels = reformat(test_dataset, test_labels)\nprint('Training set', train_dataset.shape, train_labels.shape)\nprint('Validation set', valid_dataset.shape, valid_labels.shape)\nprint('Test set', test_dataset.shape, test_labels.shape)",
"We're first going to train a multinomial logistic regression using simple gradient descent.\nTensorFlow works like this:\n* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:\n with graph.as_default():\n ...\n\n\n\nThen you can run the operations on this graph as many times as you want by calling session.run(), providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:\nwith tf.Session(graph=graph) as session:\n ...\n\n\nLet's load all the data into TensorFlow and build the computation graph corresponding to our training:",
"# With gradient descent training, even this much data is prohibitive.\n# Subset the training data for faster turnaround.\ntrain_subset = 10000\n\ngraph = tf.Graph()\nwith graph.as_default():\n\n # Input data.\n # Load the training, validation and test data into constants that are\n # attached to the graph.\n tf_train_dataset = tf.constant(train_dataset[:train_subset, :])\n tf_train_labels = tf.constant(train_labels[:train_subset])\n tf_valid_dataset = tf.constant(valid_dataset)\n tf_test_dataset = tf.constant(test_dataset)\n \n # Variables.\n # These are the parameters that we are going to be training. The weight\n # matrix will be initialized using random values following a (truncated)\n # normal distribution. The biases get initialized to zero.\n weights = tf.Variable(\n tf.truncated_normal([image_size * image_size, num_labels]))\n biases = tf.Variable(tf.zeros([num_labels]))\n \n # Training computation.\n # We multiply the inputs with the weight matrix, and add biases. We compute\n # the softmax and cross-entropy (it's one operation in TensorFlow, because\n # it's very common, and it can be optimized). We take the average of this\n # cross-entropy across all training examples: that's our loss.\n logits = tf.matmul(tf_train_dataset, weights) + biases\n loss = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))\n \n # Optimizer.\n # We are going to find the minimum of this loss using gradient descent.\n optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n \n # Predictions for the training, validation, and test data.\n # These are not part of training, but merely here so that we can report\n # accuracy figures as we train.\n train_prediction = tf.nn.softmax(logits)\n valid_prediction = tf.nn.softmax(\n tf.matmul(tf_valid_dataset, weights) + biases)\n test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)",
"Let's run this computation and iterate:",
"num_steps = 801\n\ndef accuracy(predictions, labels):\n return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))\n / predictions.shape[0])\n\nwith tf.Session(graph=graph) as session:\n # This is a one-time operation which ensures the parameters get initialized as\n # we described in the graph: random weights for the matrix, zeros for the\n # biases. \n tic = time.time()\n try:\n tf.global_variables_initializer().run()\n except AttributeError:\n tf.initialize_all_variables().run()\n print('Initialized')\n for step in range(num_steps):\n # Run the computations. We tell .run() that we want to run the optimizer,\n # and get the loss value and the training predictions returned as numpy\n # arrays.\n _, l, predictions = session.run([optimizer, loss, train_prediction])\n if (step % 100 == 0):\n print('Loss at step %3d : %f' % (step, l))\n print('Training accuracy : %.1f%%' % accuracy(\n predictions, train_labels[:train_subset, :]))\n # Calling .eval() on valid_prediction is basically like calling run(), but\n # just to get that one numpy array. Note that it recomputes all its graph\n # dependencies.\n print('Validation accuracy : %.1f%%' % accuracy(\n valid_prediction.eval(), valid_labels))\n print('Test accuracy : %.1f%%' % accuracy(test_prediction.eval(), test_labels))\n print('GradientDecent time : %.3f s' % (time.time() - tic))",
"Let's now switch to stochastic gradient descent training instead, which is much faster.\nThe graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().",
"batch_size = 128\n\ngraph = tf.Graph()\nwith graph.as_default():\n\n # Input data. For the training data, we use a placeholder that will be fed\n # at run time with a training minibatch.\n tf_train_dataset = tf.placeholder(tf.float32,\n shape=(batch_size, image_size * image_size))\n tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n tf_valid_dataset = tf.constant(valid_dataset)\n tf_test_dataset = tf.constant(test_dataset)\n \n # Variables.\n weights = tf.Variable(\n tf.truncated_normal([image_size * image_size, num_labels]))\n biases = tf.Variable(tf.zeros([num_labels]))\n \n # Training computation.\n logits = tf.matmul(tf_train_dataset, weights) + biases\n loss = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))\n \n # Optimizer.\n optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n \n # Predictions for the training, validation, and test data.\n train_prediction = tf.nn.softmax(logits)\n valid_prediction = tf.nn.softmax(\n tf.matmul(tf_valid_dataset, weights) + biases)\n test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)",
"Let's run it:",
"num_steps = 3001\n\nwith tf.Session(graph=graph) as session:\n tic = time.time()\n try:\n tf.global_variables_initializer().run()\n except AttributeError:\n tf.initialize_all_variables().run()\n print(\"Initialized\")\n for step in range(num_steps):\n # Pick an offset within the training data, which has been randomized.\n # Note: we could use better randomization across epochs.\n offset = (step * batch_size) % (train_labels.shape[0] - batch_size)\n # Generate a minibatch.\n batch_data = train_dataset[offset:(offset + batch_size), :]\n batch_labels = train_labels[offset:(offset + batch_size), :]\n # Prepare a dictionary telling the session where to feed the minibatch.\n # The key of the dictionary is the placeholder node of the graph to be fed,\n # and the value is the numpy array to feed to it.\n feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}\n _, l, predictions = session.run(\n [optimizer, loss, train_prediction], feed_dict=feed_dict)\n if (step % 500 == 0):\n print(\"Minibatch loss at step %d: %f\" % (step, l))\n print(\"Minibatch accuracy: %.1f%%\" % accuracy(predictions, batch_labels))\n print(\"Validation accuracy: %.1f%%\" % accuracy(\n valid_prediction.eval(), valid_labels))\n print(\"Test accuracy: %.1f%%\" % accuracy(test_prediction.eval(), test_labels))\n print(\"StochasticGradientDescent Time: %.3f s\" % (time.time() - tic))",
"Problem\nTurn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units nn.relu() and 1024 hidden nodes. This model should improve your validation / test accuracy.",
"batch_size = 128\nnodes = 1024\nnum_steps = 3001\n\nnngraph = tf.Graph()\n\nwith nngraph.as_default():\n\n # Input data. For the training data, we use a placeholder that will be fed\n # at run time with a training minibatch.\n tf_train_dataset = tf.placeholder(tf.float32,\n shape=(batch_size, image_size * image_size))\n tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n tf_valid_dataset = tf.constant(valid_dataset)\n tf_test_dataset = tf.constant(test_dataset)\n \n # Variables.\n weights = tf.Variable(\n tf.truncated_normal([image_size * image_size, nodes]))\n biases = tf.Variable(tf.zeros([nodes]))\n z = tf.matmul(tf_train_dataset, weights) + biases\n \n # Hidden Layer\n u = np.sqrt(6.0) / np.sqrt(nodes + num_labels)\n hidden_weights = tf.Variable(\n tf.random_uniform([nodes, num_labels], minval=-u, maxval=u))\n hidden_bias = tf.Variable(tf.zeros([num_labels]))\n logits = tf.matmul(tf.nn.relu(z), hidden_weights) + hidden_bias\n \n def forward_prop_tensor(dataset):\n return tf.nn.softmax(\n tf.matmul(\n tf.nn.relu(tf.matmul(dataset, weights) + biases), hidden_weights\n ) + hidden_bias\n )\n \n loss = tf.reduce_mean(\n tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))\n \n # Optimizer.\n optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n \n # Predictions for the training, validation, and test data.\n train_prediction = tf.nn.softmax(logits)\n valid_prediction = forward_prop_tensor(tf_valid_dataset)\n test_prediction = forward_prop_tensor(tf_test_dataset)\n\nwith tf.Session(graph=nngraph) as session:\n tic = time.time()\n try:\n tf.global_variables_initializer().run()\n except AttributeError:\n tf.initialize_all_variables().run()\n print(\"One-Hidden-Layer NueralNetworkGraph Initialized\")\n for step in range(num_steps):\n # Pick an offset within the training data, which has been randomized.\n # Note: we could use better randomization across epochs.\n offset = (step * batch_size) % (train_labels.shape[0] - 
batch_size)\n # Generate a minibatch.\n batch_data = train_dataset[offset:(offset + batch_size), :]\n batch_labels = train_labels[offset:(offset + batch_size), :]\n # Prepare a dictionary telling the session where to feed the minibatch.\n # The key of the dictionary is the placeholder node of the graph to be fed,\n # and the value is the numpy array to feed to it.\n feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}\n _, l, predictions = session.run(\n [optimizer, loss, train_prediction], feed_dict=feed_dict)\n if (step % 500 == 0):\n print(\"Minibatch loss at step %d: %f\" % (step, l))\n print(\"Minibatch accuracy: %.1f%%\" % accuracy(predictions, batch_labels))\n print(\"Validation accuracy: %.1f%%\" % accuracy(\n valid_prediction.eval(), valid_labels))\n print(\"Test accuracy: %.1f%%\" % accuracy(test_prediction.eval(), test_labels))\n print(\"StochasticGradientDescent Time: %.3f s\" % (time.time() - tic))"
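A small numpy sketch (not the TensorFlow graph itself; shapes follow the problem statement, weight scaling is illustrative) of the one-hidden-layer forward pass the problem asks for: affine, ReLU, affine, softmax:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    h = np.maximum(x @ W1 + b1, 0.0)                        # hidden layer with ReLU
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 784))  # two flattened 28x28 images
probs = forward(x,
                rng.normal(size=(784, 1024)) * 0.01, np.zeros(1024),
                rng.normal(size=(1024, 10)) * 0.01, np.zeros(10))
assert probs.shape == (2, 10)
assert np.allclose(probs.sum(axis=1), 1.0)  # each row is a probability distribution
```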
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
TiKeil/Master-thesis-LOD
|
notebooks/TEST_VCLOD_Coefficient_1.ipynb
|
apache-2.0
|
[
"VCLOD Test for Coefficient 1\nThis script performs the main VCLOD test for this thesis with a specific diffusion coefficient. We investigate the energy error of the VCLOD dependent on the updated correctors. For this purpose, we update every corrector individually and compare it to the reference solution. This enables a good comparison between percentages. We desire to yield a fast decrease of the energy error of the VCLOD method since, due to the error indicator, we sort and update the element correctors in terms of the effect that comes with the perturbation.",
"import os\nimport sys\nimport numpy as np\nimport scipy.sparse as sparse\nimport random\nimport csv\n\n%matplotlib notebook\nimport matplotlib.pyplot as plt\nfrom visualize import drawCoefficient\nfrom data import * \n\nfrom gridlod import interp, coef, util, fem, world, linalg, femsolver\nimport pg_rand, femsolverCoarse, buildcoef2d\nfrom gridlod.world import World",
"Result function\nThe 'result' function investigates the VCLOD for each percentage. The reference solution is computed by a standard FEM on the fine mesh. We compute the 'worst solution' that represents zero percentage updating and clearly has no computational cost at all. Afterwards, we compute the error indicator for the given patch size $k=4$ and use every value gradually. Furthermore we store the resulting energy error for the VCLOD as well as the optimal energy error that results from 100 percentage updating. Once again, we take advantage of the 'gridlod' module in order to compute the required matrices.",
"def result(pglod, world, A, R, f, k, String):\n print \"-------------- \" + String + \" ---------------\" \n NWorldFine = world.NWorldFine\n NWorldCoarse = world.NWorldCoarse\n NCoarseElement = world.NCoarseElement\n \n boundaryConditions = world.boundaryConditions\n NpFine = np.prod(NWorldFine+1)\n NpCoarse = np.prod(NWorldCoarse+1)\n \n # new Coefficient\n ANew = R.flatten()\n Anew = coef.coefficientFine(NWorldCoarse, NCoarseElement, ANew)\n \n # reference solution\n f_fine = np.ones(NpFine)\n uFineFem, AFine, MFine = femsolver.solveFine(world, ANew, f_fine, None, boundaryConditions)\n \n # worst solution\n KFull = pglod.assembleMsStiffnessMatrix()\n MFull = fem.assemblePatchMatrix(NWorldCoarse, world.MLocCoarse)\n free = util.interiorpIndexMap(NWorldCoarse) \n \n bFull = MFull*f\n KFree = KFull[free][:,free]\n bFree = bFull[free]\n\n xFree = sparse.linalg.spsolve(KFree, bFree)\n \n basis = fem.assembleProlongationMatrix(NWorldCoarse, NCoarseElement)\n \n basisCorrectors = pglod.assembleBasisCorrectors()\n modifiedBasis = basis - basisCorrectors\n \n xFull = np.zeros(NpCoarse)\n xFull[free] = xFree\n uCoarse = xFull\n uLodFine = modifiedBasis*xFull\n \n uLodFineWorst = uLodFine\n \n # energy error\n errorworst = np.sqrt(np.dot(uFineFem - uLodFineWorst, AFine*(uFineFem - uLodFineWorst)))\n \n # tolerance = 0 \n vis, eps = pglod.updateCorrectors(Anew, 0, f, 1, clearFineQuantities=False, Computing=False)\n \n PotentialCorrectors = np.sum(vis)\n elemente = np.arange(np.prod(NWorldCoarse))\n \n # identify tolerances\n epsnozero = filter(lambda x: x!=0, eps)\n \n assert(np.size(epsnozero) != 0)\n \n mini = np.min(epsnozero)\n minilog = int(round(np.log10(mini)-0.49))\n epsnozero.append(10**(minilog))\n ToleranceListcomplete = []\n for i in range(0,int(np.size(epsnozero))):\n ToleranceListcomplete.append(epsnozero[i])\n\n ToleranceListcomplete.sort()\n ToleranceListcomplete = np.unique(ToleranceListcomplete)\n\n # with tolerance\n errorplotinfo = []\n tolerancesafe = 
[]\n errorBest = []\n errorWorst = []\n recomputefractionsafe = []\n recomputefraction = 0\n Correctors = 0\n leng = np.size(ToleranceListcomplete)\n for k in range(leng-1,-1,-1):\n tol = ToleranceListcomplete[k]\n print \" --- \"+ str(-k+leng) + \"/\" + str(leng)+ \" --- Tolerance: \" + str(round(tol,5)) + \" in \"+ String +\" ---- \", \n vistol = pglod.updateCorrectors(Anew, tol, f, clearFineQuantities=False, Testing=True)\n \n Correctors += np.sum(vistol)\n \n recomputefraction += float(np.sum(vistol))/PotentialCorrectors * 100\n recomputefractionsafe.append(recomputefraction)\n \n KFull = pglod.assembleMsStiffnessMatrix()\n MFull = fem.assemblePatchMatrix(NWorldCoarse, world.MLocCoarse)\n free = util.interiorpIndexMap(NWorldCoarse) \n\n bFull = MFull*f\n KFree = KFull[free][:,free]\n bFree = bFull[free]\n\n xFree = sparse.linalg.spsolve(KFree, bFree)\n basis = fem.assembleProlongationMatrix(NWorldCoarse, NCoarseElement)\n\n basisCorrectors = pglod.assembleBasisCorrectors()\n\n modifiedBasis = basis - basisCorrectors\n\n xFull = np.zeros(NpCoarse)\n xFull[free] = xFree\n uCoarse = xFull\n uLodFine = modifiedBasis*xFull\n \n #energy error\n errortol = np.sqrt(np.dot(uFineFem - uLodFine, AFine*(uFineFem - uLodFine)))\n \n errorplotinfo.append(errortol)\n tolerancesafe.append(tol)\n \n # 100% updating\n uLodFinebest = uLodFine\n errorbest = np.sqrt(np.dot(uFineFem - uLodFinebest, AFine*(uFineFem - uLodFinebest)))\n \n for k in range(leng-1,-1,-1):\n errorBest.append(errorbest)\n errorWorst.append(errorworst)\n\n return vis, eps, PotentialCorrectors, recomputefractionsafe, errorplotinfo, errorWorst, errorBest\n",
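The energy error computed above has the form sqrt(e^T A e) for the (symmetric positive definite) stiffness matrix A and the error e between reference and multiscale solution. A toy illustration with made-up values:

```python
import numpy as np

AFine = np.array([[2.0, 0.0],
                  [0.0, 3.0]])      # SPD stand-in for the fine stiffness matrix
uFineFem = np.array([1.0, 1.0])     # stand-in "reference" solution
uLodFine = np.array([0.5, 1.0])     # stand-in "multiscale" solution
# Energy norm of the difference: sqrt(e^T A e)
error = np.sqrt(np.dot(uFineFem - uLodFine, AFine @ (uFineFem - uLodFine)))
print(error)  # sqrt(0.5^2 * 2) = sqrt(0.5) ~ 0.707
```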
"Preparations\nWe use the same setting as we have already used before containing the 'buildcoef2d' class in order to construct the coefficient. We visualize the coefficient and store the information in an extern folder.",
"bg = 0.05 #background\nval = 1 #values\n\n#fine World\nNWorldFine = np.array([256, 256])\nNpFine = np.prod(NWorldFine+1) \n\n#coarse World\nNWorldCoarse = np.array([16,16])\nNpCoarse = np.prod(NWorldCoarse+1)\n\n#ratio between Fine and Coarse\nNCoarseElement = NWorldFine/NWorldCoarse\n\nboundaryConditions = np.array([[0, 0],\n [0, 0]])\n\nworld = World(NWorldCoarse, NCoarseElement, boundaryConditions)\n\n#righthandside\nf = np.ones(NpCoarse)\n\n#Coefficient 1\nCoefClass = buildcoef2d.Coefficient2d(NWorldFine,\n bg = bg,\n val = val,\n length = 2,\n thick = 2,\n space = 2,\n probfactor = 1,\n right = 1,\n down = 0,\n diagr1 = 0,\n diagr2 = 0,\n diagl1 = 0,\n diagl2 = 0,\n LenSwitch = None,\n thickSwitch = None,\n equidistant = True,\n ChannelHorizontal = None,\n ChannelVertical = None,\n BoundarySpace = True)\n\nA = CoefClass.BuildCoefficient()\nABase = A.flatten()\n\nROOT = '../test_data/Coef1/'\n\n#safe NworldFine\nwith open(\"%s/NWorldFine.txt\" % ROOT, 'wb') as csvfile:\n writer = csv.writer(csvfile)\n for val in NWorldFine:\n writer.writerow([val])\n\n#safe NworldCoarse\nwith open(\"%s/NWorldCoarse.txt\" % ROOT, 'wb') as csvfile:\n writer = csv.writer(csvfile)\n for val in NWorldCoarse:\n writer.writerow([val])\n\n#ABase\nwith open(\"%s/OriginalCoeff.txt\" % ROOT, 'wb') as csvfile:\n writer = csv.writer(csvfile)\n for val in ABase:\n writer.writerow([val])\n\n#fine-fem\nf_fine = np.ones(NpFine)\nuFineFem, AFine, MFine = femsolver.solveFine(world, ABase, f_fine, None, boundaryConditions)\n\n#fine solution\nwith open(\"%s/finescale.txt\" % ROOT, 'wb') as csvfile:\n writer = csv.writer(csvfile)\n for val in uFineFem:\n writer.writerow([val])\n \nplt.figure(\"Original\")\ndrawCoefficient(NWorldFine, ABase,greys=True)\nplt.title(\"Original coefficient\")\nplt.show()",
"Perturbations of the same entries\nTo keep comparability, we use the 'specific' perturbation function and use a random seed.",
"# random seed\nrandom.seed(20)\n\n# decision\nvalc = np.shape(CoefClass.ShapeRemember)[0]\nnumbers = []\ndecision = np.zeros(100)\ndecision[0] = 1\n\n\nfor i in range(0,valc):\n a = random.sample(decision,1)[0]\n if a == 1:\n numbers.append(i)\n\nvalue1 = 3\nC1 = CoefClass.SpecificValueChange(ratio=value1,\n Number = numbers,\n probfactor=1,\n randomvalue=None,\n negative=None,\n ShapeRestriction=True,\n ShapeWave=None,\n ChangeRight=1,\n ChangeDown=1,\n ChangeDiagr1=1,\n ChangeDiagr2=1,\n ChangeDiagl1=1,\n ChangeDiagl2=1,\n Original = True,\n NewShapeChange = True)\n\nV = CoefClass.SpecificVanish(Number = numbers,\n probfactor=1,\n PartlyVanish=None,\n ChangeRight=1,\n ChangeDown=1,\n ChangeDiagr1=1,\n ChangeDiagr2=1,\n ChangeDiagl1=1,\n ChangeDiagl2=1,\n Original = True)\n\nM1 = CoefClass.SpecificMove(probfactor=1,\n Number = numbers,\n steps=1,\n randomstep=None,\n randomDirection=None,\n ChangeRight=1,\n ChangeDown=1,\n ChangeDiagr1=1,\n ChangeDiagr2=1,\n ChangeDiagl1=1,\n ChangeDiagl2=1,\n Right=1,\n BottomRight=0,\n Bottom=0,\n BottomLeft=0,\n Left=0,\n TopLeft=0,\n Top=0,\n TopRight=0,\n Original = True)",
"Precomputations",
"k = 4\n\nNWorldFine = world.NWorldFine\nNWorldCoarse = world.NWorldCoarse\nNCoarseElement = world.NCoarseElement\n\nboundaryConditions = world.boundaryConditions\nNpFine = np.prod(NWorldFine+1)\nNpCoarse = np.prod(NWorldCoarse+1)\n\n#interpolant\nIPatchGenerator = lambda i, N: interp.L2ProjectionPatchMatrix(i, N, NWorldCoarse, NCoarseElement, boundaryConditions)\n\n#old Coefficient\nABase = A.flatten()\nAold = coef.coefficientFine(NWorldCoarse, NCoarseElement, ABase)\n\npglod = pg_rand.VcPetrovGalerkinLOD(Aold, world, k, IPatchGenerator, 0)\npglod.originCorrectors(clearFineQuantities=False)",
"We call the result function for each perturbation and store the result subsequently.\nChange in value",
"vis, eps, PotentialUpdated, recomputefractionsafe, errorplotinfo, errorworst, errorbest = result(pglod ,world, A, C1, f, k, 'Specific value change' + str(value1))\n\nsafeChange(ROOT, C1, vis, eps, PotentialUpdated, recomputefractionsafe, errorplotinfo, errorworst, errorbest)",
"Disappearance",
"vis, eps, PotentialUpdated, recomputefractionsafe, errorplotinfo, errorworst, errorbest = result(pglod ,world, A, V, f, k, 'Vanish')\n\nsafeVanish(ROOT, V, vis, eps, PotentialUpdated, recomputefractionsafe, errorplotinfo, errorworst, errorbest)",
"Shift",
"vis, eps, PotentialUpdated, recomputefractionsafe, errorplotinfo, errorworst, errorbest = result(pglod ,world, A, M1, f, k, 'One Step Move')\n\nsafeShift(ROOT, M1, vis, eps, PotentialUpdated, recomputefractionsafe, errorplotinfo, errorworst, errorbest)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sdpython/actuariat_python
|
_doc/notebooks/sessions/election_carte_electorale.ipynb
|
mit
|
[
"Elections et cartes électorales - énoncé\nD'après wikipédia, le Gerrymandering est un terme politique nord-américain pour désigner le découpage des circonscriptions électorales ayant pour objectif de donner l’avantage à un parti, un candidat, ou un groupe donné. Et c'est ce que nous allons faire dans cette séance. C'est un problème tout-à-fait d'actualité : Primaire de la droite : 10 228 bureaux de vote stratégiquement répartis.",
"%matplotlib inline\n\nfrom jyquickhelper import add_notebook_menu\nadd_notebook_menu()",
"Données\nLes données sont de plusieurs types et sont regroupées en trois fichiers :\n\nRésultat des élections législatives françaises de 2012 au niveau bureau de vote\nCountours des circonscriptions des législatives\nLocalisation des buraux de votes\nLocalisation des villes\nLocalisation des bureaux de vote avec Cartelec, cette base requiert la conversion des coordonnées (lire ce notebook)\n\nIl est conseillé de télécharger directement ces données. Les paragraphes suivants expliquent comment ceux-ci ont été récupérés ou construit. Il n'est pas immédiat d'obtenir la localisation des bureaux de vote. Celle-ci n'est d'ailleurs pas complète.\nRésultats des élections législatives",
"from actuariat_python.data import elections_legislatives_bureau_vote\ntour = elections_legislatives_bureau_vote(source='xd')\n\ntour[\"T2\"].sort_values([\"Code département\", \"N° de circonscription Lg\"]).head()",
"Géolocalisation des circonscription",
"from actuariat_python.data import elections_legislatives_circonscription_geo\ngeo = elections_legislatives_circonscription_geo()\n\ngeo.sort_values([\"department\", \"code_circonscription\"]).head()\n\nc = list(geo.sort_values([\"department\", \"code_circonscription\"])[\"communes\"])[0].split(\"-\")\nc.sort()\nc[:5]\n\nlist(geo.sort_values([\"department\", \"code_circonscription\"])[\"kml_shape\"])[:1]",
"Géolocation des bureaux de vote\nCes données sont importantes afin de pouvoir associer chaque bureau à une circonscription. C'est la donnée la plus compliquée à obtenir car elle nécessite de combiner plusieurs techniques et sources de données pour obtenir une table propre. Le fichier final peut être obtenu comme suit :",
"from actuariat_python.data import elections_vote_places_geo\nbureau_geo = elections_vote_places_geo()\n\nbureau_geo.head()",
"Ce qui suit explique la façon dont j'ai constuit cette table.\nLes bureaux sont assez stables d'une élection à l'autre et cela ne devrait pas trop poser de problèmes si on mélange les données. En revanche, ces données sont assez difficiles à obtenir. open.data.fr propose ces données mais il faut récupérer chaque ville ou chaque région séparément sans garantie de réussir à couvrir tout le territoire. Le site NosDonnes.fr recense bien toutes ces informations mais il n'est pas possible - au moment où j'écris ces lignes - de récupérer ces données sous la forme d'un fichier plat. De plus, certaines régions ne sont disponibles que sous forme de scan d'impressions papier. C'est en lisant l'article Comment redécoupe-t-on la carte électorale? que je suis tombé finalement sur la base constituée pour rédiger l'article Etude sur le redécoupage électoral : évaluer le poids politique de la réforme. Les données sont accessibles sur RegardsCitoyens.fr et en cherchant bien, on trouve le répertoire redécoupage et le fichier 2014041401_resultats.csv.zip. Cependant, la géolocalisation des bureaux n'est pas souvent renseignée. On peut se contenter des fichiers obtenus pour quelques zones seulement : Paris, Gironde, Montpellier, Marseille, Saint-Malo, Nogent-Sur-Marne, Haut-de-Seine, Toulouse, Grand-Poitiers, Calvados. Cette approche risque d'être fastidieuse dans la mesure où les formats pour chaque ville ont de grande chance d'être différent. \nLa meilleure option est peut-être de scrapper le site bureaudevote.fr - qui fonctionne bien quand il n'est pas en maintenance - en espérant que les numéros des bureaux de votes correspondent. Le site propose seulement les adresses. Il faudra utiliser une API de geocoding. Quelque soit la source, on supposera alors que tous les bureaux de vote associés au même emplacement feront nécessairement partie de la même circonscription. 
Bref, l'open data passe aussi par une certaine standardisation !\nCertains départements ne sont pas renseignés, la Drôme par exemple. Il faut aller sur d'autres sites comme linternaute.com, accéder à la carte, mais il faudra un peu plus de temps pour récupérer toutes ces informations. Le code proposé ci-dessus récupère les coordonnées des bureaux de vote. Cela prend plusieurs heures et il faut relancer le processus quand il s'arrête pour une erreur de réseau. Il est conseillé de télécharger le résultat. Le géocodeur d'OpenStreetMap n'est pas de très bonne qualité sur la France. Il ne retourne rien dans plus d'un tiers des cas. On peut compléter avec l'API de Bing Maps.",
"from actuariat_python.data import elections_vote_place_address\nbureau = elections_vote_place_address(hide_warning=True)\n\nbureau.head()",
"On récupère une clé pour utiliser l'API de Bing Maps avec le module keyring. Pour stocker son mot de passe sur la machine, il suffit d'écrire :\nimport keyring\nkeyring.get_password(\"bing\", \"actuariat_python,key\")",
"import keyring, os\nbing_key = keyring.get_password(\"bing\", \"actuariat_python,key\")\ncoders = [\"Nominatim\"]\nif bing_key:\n # si la clé a été trouvée\n coders.append((\"bing\", bing_key))\nlen(coders)\n\nimport os\nif not os.path.exists(\"bureauxvotegeo.zip\"):\n from actuariat_python.data import geocode\n from pyquickhelper.loghelper import fLOG\n fLOG(OutputPrint=True)\n bureau_geo = geocode(bureau, fLOG=fLOG, index=False, encoding=\"utf-8\",\n exc=False, save_every=\"bureau.dump.txt\", sep=\"\\t\", every=100,\n coders=coders)\nelse:\n print(\"Les données ont déjà été geocodées.\")",
"On regarde les valeurs manquantes.",
"import missingno\nmissingno.matrix(bureau_geo, figsize=(12, 6));",
"On pourra finalement récupérer la base des géocodes comme ceci :",
"from actuariat_python.data import elections_vote_places_geo\nplaces = elections_vote_places_geo()\nplaces.head()",
"Géolocalisation des villes",
"from actuariat_python.data import elections_vote_places_geo\nbureau_geo = elections_vote_places_geo()\nvilles_geo = bureau_geo[[\"city\", \"zip\", \"n\"]].groupby([\"city\", \"zip\"], as_index=False).count()\nvilles_geo.head()\n\nfrom actuariat_python.data import villes_geo\nvilles_geo = villes_geo(as_df=True)\nvilles_geo.head()\n\nimport keyring, os\nbing_key = keyring.get_password(\"bing\", \"actuariat_python,key\")\ncoders = []\nif bing_key:\n # si la clé a été trouvée\n coders.append((\"bing\", bing_key))\nlen(coders)\n\nimport os\ngeocode = True\nif geocode:\n if os.path.exists(\"villes_geo.txt\"):\n import pandas\n villes_geo = pandas.read_csv(\"villes_geo.txt\", sep=\"\\t\", encoding=\"utf-8\")\n else:\n from actuariat_python.data import geocode\n from pyquickhelper.loghelper import fLOG\n fLOG(OutputPrint=True)\n villes_geo = geocode(villes_geo, fLOG=fLOG, index=False, encoding=\"utf-8\",\n exc=False, save_every=\"villes.dump.txt\", sep=\"\\t\", every=100,\n coders=coders, country=\"France\")\n\nvilles_geo.head()",
"On conserve les données pour éviter de les reconstuire et faire appel à l'API Bing à nouveau.",
"villes_geo.to_csv(\"villes_geo.txt\", sep=\"\\t\", index=False, encoding=\"utf-8\")\n\nvilles_geo.shape",
"Géolocation des bureaux de vote avec Cartélec\nLe site cartelec recense beaucoup plus de bureaux de vote mais pour les élections 2007. Ils ne devraient pas avoir changé beaucoup.",
"from pyensae.datasource import download_data\nshp_vote = download_data(\"base_cartelec_2007_2010.zip\")\nshp_vote\n\n# La version 2.0.0.dev de pyshp est buggée. Il vaut mieux ne pas l'utiliser.\nimport shapefile\nif \"dev\" in shapefile.__version__:\n raise ImportError(\"Use a different version of pyshp not '{0}'\".format(shapefile.__version__))\nr = shapefile.Reader(\"fond0710.shp\", encoding=\"utf8\", encodingErrors=\"ignore\")\nshapes = r.shapes()\nrecords = r.records()\nlen(shapes), len(records)\n\n{k[0]: v for k, v in zip(r.fields, records[0])}\n\nshapes[0].points",
"Exercice 1\n\nEtablir un plan d'action\nDétailler la mise en oeuvre de ce plan à partir des données\nRépartir les tâches sur plusieurs équipes\n\nExercice 2\nMettre en oeuvre le plan d'action."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
printedheart/h2o-3
|
h2o-py/demos/H2O_tutorial_medium.ipynb
|
apache-2.0
|
[
"H2O Tutorial\nAuthor: Spencer Aiello\nContact: spencer@h2oai.com\nThis tutorial steps through a quick introduction to H2O's Python API. The goal of this tutorial is to introduce through a complete example H2O's capabilities from Python. Also, to help those that are accustomed to Scikit Learn and Pandas, the demo will be specific call outs for differences between H2O and those packages; this is intended to help anyone that needs to do machine learning on really Big Data make the transition. It is not meant to be a tutorial on machine learning or algorithms.\nDetailed documentation about H2O's and the Python API is available at http://docs.h2o.ai.\nSetting up your system for this demo\nThe following code creates two csv files using data from the Boston Housing dataset which is built into scikit-learn and adds them to the local directory",
"import pandas as pd\nimport numpy\nfrom numpy.random import choice\nfrom sklearn.datasets import load_boston\n\nimport h2o\nh2o.init()\n\n# transfer the boston data from pandas to H2O\nboston_data = load_boston()\nX = pd.DataFrame(data=boston_data.data, columns=boston_data.feature_names)\nX[\"Median_value\"] = boston_data.target\nX = h2o.H2OFrame(python_obj=X.to_dict(\"list\"))\n\n# select 10% for valdation\nr = X.runif(seed=123456789)\ntrain = X[r < 0.9,:]\nvalid = X[r >= 0.9,:]\n\nh2o.export_file(train, \"Boston_housing_train.csv\", force=True)\nh2o.export_file(valid, \"Boston_housing_test.csv\", force=True)",
"Enable inline plotting in the Jupyter Notebook",
"%matplotlib inline\nimport matplotlib.pyplot as plt",
"Intro to H2O Data Munging\nRead csv data into H2O. This loads the data into the H2O column compressed, in-memory, key-value store.",
"fr = h2o.import_file(\"Boston_housing_train.csv\")",
"View the top of the H2O frame.",
"fr.head()",
"View the bottom of the H2O Frame",
"fr.tail()",
"Select a column\nfr[\"VAR_NAME\"]",
"fr[\"CRIM\"].head() # Tab completes",
"Select a few columns",
"columns = [\"CRIM\", \"RM\", \"RAD\"]\nfr[columns].head()",
"Select a subset of rows\nUnlike in Pandas, columns may be identified by index or column name. Therefore, when subsetting by rows, you must also pass the column selection.",
"fr[2:7,:] # explicitly select all columns with :",
"Key attributes:\n * columns, names, col_names\n * len, shape, dim, nrow, ncol\n * types\nNote: \nSince the data is not in local python memory\nthere is no \"values\" attribute. If you want to \npull all of the data into the local python memory\nthen do so explicitly with h2o.export_file and\nreading the data into python memory from disk.",
"# The columns attribute is exactly like Pandas\nprint \"Columns:\", fr.columns, \"\\n\"\nprint \"Columns:\", fr.names, \"\\n\"\nprint \"Columns:\", fr.col_names, \"\\n\"\n\n# There are a number of attributes to get at the shape\nprint \"length:\", str( len(fr) ), \"\\n\"\nprint \"shape:\", fr.shape, \"\\n\"\nprint \"dim:\", fr.dim, \"\\n\"\nprint \"nrow:\", fr.nrow, \"\\n\"\nprint \"ncol:\", fr.ncol, \"\\n\"\n\n# Use the \"types\" attribute to list the column types\nprint \"types:\", fr.types, \"\\n\"",
"Select rows based on value",
"fr.shape",
"Boolean masks can be used to subselect rows based on a criteria.",
"mask = fr[\"CRIM\"]>1\nfr[mask,:].shape",
"Get summary statistics of the data and additional data distribution information.",
"fr.describe()",
"Set up the predictor and response column names\nUsing H2O algorithms, it's easier to reference predictor and response columns\nby name in a single frame (i.e., don't split up X and y)",
"x = fr.names\ny=\"Median_value\"\nx.remove(y)",
"Machine Learning With H2O\nH2O is a machine learning library built in Java with interfaces in Python, R, Scala, and Javascript. It is open source and well-documented.\nUnlike Scikit-learn, H2O allows for categorical and missing data.\nThe basic work flow is as follows:\n* Fit the training data with a machine learning algorithm\n* Predict on the testing data\nSimple model",
"model = h2o.random_forest(x=fr[:400,x],y=fr[:400,y],seed=42) # Define and fit first 400 points\n\nmodel.predict(fr[400:fr.nrow,:]) # Predict the rest",
"The performance of the model can be checked using the holdout dataset",
"perf = model.model_performance(fr[400:fr.nrow,:])\nperf.r2() # get the r2 on the holdout data\nperf.mse() # get the mse on the holdout data\nperf # display the performance object",
"Train-Test Split\nInstead of taking the first 400 observations for training, we can use H2O to create a random test train split of the data.",
"r = fr.runif(seed=12345) # build random uniform column over [0,1]\ntrain= fr[r<0.75,:] # perform a 75-25 split\ntest = fr[r>=0.75,:]\n\nmodel = h2o.random_forest(x=train[x],y=train[y],seed=42)\n\nperf = model.model_performance(test)\nperf.r2()",
"There was a massive jump in the R^2 value. This is because the original data is not shuffled.\nCross validation\nH2O's machine learning algorithms take an optional parameter nfolds to specify the number of cross-validation folds to build. H2O's cross-validation uses an internal weight vector to build the folds in an efficient manner (instead of physically building the splits).\nIn conjunction with the nfolds parameter, a user may specify the way in which observations are assigned to each fold with the fold_assignment parameter, which can be set to either:\n * AUTO: Perform random assignment\n * Random: Each row has a equal (1/nfolds) chance of being in any fold.\n * Modulo: Observations are in/out of the fold based by modding on nfolds",
"model = h2o.random_forest(x=fr[x],y=fr[y], nfolds=10) # build a 10-fold cross-validated model\n\nscores = numpy.array([m.r2() for m in model.xvals]) # iterate over the xval models using the xvals attribute\nprint \"Expected R^2: %.2f +/- %.2f \\n\" % (scores.mean(), scores.std()*1.96)\nprint \"Scores:\", scores.round(2)",
"However, you can still make use of the cross_val_score from Scikit-Learn\nCross validation: H2O and Scikit-Learn",
"from sklearn.cross_validation import cross_val_score\nfrom h2o.cross_validation import H2OKFold\nfrom h2o.estimators.random_forest import H2ORandomForestEstimator\nfrom h2o.model.regression import h2o_r2_score\nfrom sklearn.metrics.scorer import make_scorer",
"You still must use H2O to make the folds. Currently, there is no H2OStratifiedKFold. Additionally, the H2ORandomForestEstimator is analgous to the scikit-learn RandomForestRegressor object with its own fit method",
"model = H2ORandomForestEstimator(seed=42)\n\nscorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer\ncustom_cv = H2OKFold(fr, n_folds=10, seed=42) # make a cv \nscores = cross_val_score(model, fr[x], fr[y], scoring=scorer, cv=custom_cv)\n\nprint \"Expected R^2: %.2f +/- %.2f \\n\" % (scores.mean(), scores.std()*1.96)\nprint \"Scores:\", scores.round(2)",
"There isn't much difference in the R^2 value since the fold strategy is exactly the same. However, there was a major difference in terms of computation time and memory usage.\nSince the progress bar print out gets annoying let's disable that",
"h2o.__PROGRESS_BAR__=False\nh2o.no_progress()",
"Grid Search\nGrid search in H2O is still under active development and it will be available very soon. However, it is possible to make use of Scikit's grid search infrastructure (with some performance penalties)\nRandomized grid search: H2O and Scikit-Learn",
"from sklearn import __version__\nsklearn_version = __version__\nprint sklearn_version",
"If you have 0.16.1, then your system can't handle complex randomized grid searches (it works in every other version of sklearn, including the soon to be released 0.16.2 and the older versions).\nThe steps to perform a randomized grid search:\n1. Import model and RandomizedSearchCV\n2. Define model\n3. Specify parameters to test\n4. Define grid search object\n5. Fit data to grid search object\n6. Collect scores\nAll the steps will be repeated from above.\nBecause 0.16.1 is installed, we use scipy to define specific distributions\nADVANCED TIP:\nTurn off reference counting for spawning jobs in parallel (n_jobs=-1, or n_jobs > 1).\nWe'll turn it back on again in the aftermath of a Parallel job.\nIf you don't want to run jobs in parallel, don't turn off the reference counting.\nPattern is:\n >>> h2o.turn_off_ref_cnts()\n >>> .... parallel job ....\n >>> h2o.turn_on_ref_cnts()",
"%%time\nfrom h2o.estimators.random_forest import H2ORandomForestEstimator # Import model\nfrom sklearn.grid_search import RandomizedSearchCV # Import grid search\nfrom scipy.stats import randint, uniform\n\nmodel = H2ORandomForestEstimator(seed=42) # Define model\n\nparams = {\"ntrees\": randint(20,50),\n \"max_depth\": randint(1,10),\n \"min_rows\": randint(1,10), # scikit's min_samples_leaf\n \"mtries\": randint(2,fr[x].shape[1]),} # Specify parameters to test\n\nscorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer\ncustom_cv = H2OKFold(fr, n_folds=10, seed=42) # make a cv \nrandom_search = RandomizedSearchCV(model, params, \n n_iter=30, \n scoring=scorer, \n cv=custom_cv, \n random_state=42,\n n_jobs=1) # Define grid search object\n\nrandom_search.fit(fr[x], fr[y])\n\nprint \"Best R^2:\", random_search.best_score_, \"\\n\"\nprint \"Best params:\", random_search.best_params_",
"We might be tempted to think that we just had a large improvement; however we must be cautious. The function below creates a more detailed report.",
"def report_grid_score_detail(random_search, charts=True):\n \"\"\"Input fit grid search estimator. Returns df of scores with details\"\"\"\n df_list = []\n\n for line in random_search.grid_scores_:\n results_dict = dict(line.parameters)\n results_dict[\"score\"] = line.mean_validation_score\n results_dict[\"std\"] = line.cv_validation_scores.std()*1.96\n df_list.append(results_dict)\n\n result_df = pd.DataFrame(df_list)\n result_df = result_df.sort(\"score\", ascending=False)\n \n if charts:\n for col in get_numeric(result_df):\n if col not in [\"score\", \"std\"]:\n plt.scatter(result_df[col], result_df.score)\n plt.title(col)\n plt.show()\n\n for col in list(result_df.columns[result_df.dtypes == \"object\"]):\n cat_plot = result_df.score.groupby(result_df[col]).mean()\n cat_plot.sort()\n cat_plot.plot(kind=\"barh\", xlim=(.5, None), figsize=(7, cat_plot.shape[0]/2))\n plt.show()\n return result_df\n\ndef get_numeric(X):\n \"\"\"Return list of numeric dtypes variables\"\"\"\n return X.dtypes[X.dtypes.apply(lambda x: str(x).startswith((\"float\", \"int\", \"bool\")))].index.tolist()\n\nreport_grid_score_detail(random_search).head()",
"Based on the grid search report, we can narrow the parameters to search and rerun the analysis. The parameters below were chosen after a few runs:",
"%%time\n\nparams = {\"ntrees\": randint(30,40),\n \"max_depth\": randint(4,10),\n \"mtries\": randint(4,10),}\n\ncustom_cv = H2OKFold(fr, n_folds=5, seed=42) # In small datasets, the fold size can have a big\n # impact on the std of the resulting scores. More\nrandom_search = RandomizedSearchCV(model, params, # folds --> Less examples per fold --> higher \n n_iter=10, # variation per sample\n scoring=scorer, \n cv=custom_cv, \n random_state=43, \n n_jobs=1) \n\nrandom_search.fit(fr[x], fr[y])\n\nprint \"Best R^2:\", random_search.best_score_, \"\\n\"\nprint \"Best params:\", random_search.best_params_\n\nreport_grid_score_detail(random_search)",
"Transformations\nRule of machine learning: Don't use your testing data to inform your training data. Unfortunately, this happens all the time when preparing a dataset for the final model. But on smaller datasets, you must be especially careful.\nAt the moment, there are no classes for managing data transformations. On the one hand, this requires the user to tote around some extra state, but on the other, it allows the user to be more explicit about transforming H2OFrames.\nBasic steps:\n\nRemove the response variable from transformations.\nImport transformer\nDefine transformer\nFit train data to transformer\nTransform test and train data\nRe-attach the response variable.\n\nFirst let's normalize the data using the means and standard deviations of the training data.\nThen let's perform a principal component analysis on the training data and select the top 5 components.\nUsing these components, let's use them to reduce the train and test design matrices.",
"from h2o.transforms.preprocessing import H2OScaler\nfrom h2o.transforms.decomposition import H2OPCA",
"Normalize Data: Use the means and standard deviations from the training data.",
"y_train = train.pop(\"Median_value\")\ny_test = test.pop(\"Median_value\")\n\nnorm = H2OScaler()\nnorm.fit(train)\nX_train_norm = norm.transform(train)\nX_test_norm = norm.transform(test)\n\nprint X_test_norm.shape\nX_test_norm",
"Then, we can apply PCA and keep the top 5 components.",
"pca = H2OPCA(n_components=5)\npca.fit(X_train_norm)\nX_train_norm_pca = pca.transform(X_train_norm)\nX_test_norm_pca = pca.transform(X_test_norm)\n\n# prop of variance explained by top 5 components?\n\nprint X_test_norm_pca.shape\nX_test_norm_pca[:5]\n\nmodel = H2ORandomForestEstimator(seed=42)\nmodel.fit(X_train_norm_pca,y_train)\ny_hat = model.predict(X_test_norm_pca)\n\nh2o_r2_score(y_test,y_hat)",
"Although this is MUCH simpler than keeping track of all of these transformations manually, it gets to be somewhat of a burden when you want to chain together multiple transformers.\nPipelines\n\"Tranformers unite!\"\nIf your raw data is a mess and you have to perform several transformations before using it, use a pipeline to keep things simple.\nSteps:\n\nImport Pipeline, transformers, and model\nDefine pipeline. The first and only argument is a list of tuples where the first element of each tuple is a name you give the step and the second element is a defined transformer. The last step is optionally an estimator class (like a RandomForest).\nFit the training data to pipeline\nEither transform or predict the testing data",
"from h2o.transforms.preprocessing import H2OScaler\nfrom h2o.transforms.decomposition import H2OPCA\nfrom h2o.estimators.random_forest import H2ORandomForestEstimator\n\nfrom sklearn.pipeline import Pipeline # Import Pipeline <other imports not shown>\nmodel = H2ORandomForestEstimator(seed=42)\npipe = Pipeline([(\"standardize\", H2OScaler()), # Define pipeline as a series of steps\n (\"pca\", H2OPCA(n_components=5)),\n (\"rf\", model)]) # Notice the last step is an estimator\n\npipe.fit(train, y_train) # Fit training data\ny_hat = pipe.predict(test) # Predict testing data (due to last step being an estimator)\nh2o_r2_score(y_test, y_hat) # Notice the final score is identical to before",
"This is so much easier!!!\nBut, wait a second, we did worse after applying these transformations! We might wonder how different hyperparameters for the transformations impact the final score.\nCombining randomized grid search and pipelines\n\"Yo dawg, I heard you like models, so I put models in your models to model models.\"\nSteps:\n\nImport Pipeline, grid search, transformers, and estimators <Not shown below>\nDefine pipeline\nDefine parameters to test in the form: \"(Step name)__(argument name)\" A double underscore separates the two words.\nDefine grid search\nFit to grid search",
"pipe = Pipeline([(\"standardize\", H2OScaler()),\n (\"pca\", H2OPCA()),\n (\"rf\", H2ORandomForestEstimator(seed=42))])\n\nparams = {\"standardize__center\": [True, False], # Parameters to test\n \"standardize__scale\": [True, False],\n \"pca__n_components\": randint(2, 6),\n \"rf__ntrees\": randint(50,80),\n \"rf__max_depth\": randint(4,10),\n \"rf__min_rows\": randint(5,10), }\n# \"rf__mtries\": randint(1,4),} # gridding over mtries is \n # problematic with pca grid over \n # n_components above \n\nfrom sklearn.grid_search import RandomizedSearchCV\nfrom h2o.cross_validation import H2OKFold\nfrom h2o.model.regression import h2o_r2_score\nfrom sklearn.metrics.scorer import make_scorer\n\ncustom_cv = H2OKFold(fr, n_folds=5, seed=42)\nrandom_search = RandomizedSearchCV(pipe, params,\n n_iter=30,\n scoring=make_scorer(h2o_r2_score),\n cv=custom_cv,\n random_state=42,\n n_jobs=1)\n\n\nrandom_search.fit(fr[x],fr[y])\nresults = report_grid_score_detail(random_search)\nresults.head()",
"Currently Under Development (drop-in scikit-learn pieces):\n * Richer set of transforms (only PCA and Scale are implemented)\n * Richer set of estimators (only RandomForest is available)\n * Full H2O Grid Search\nOther Tips: Model Save/Load\nIt is useful to save constructed models to disk and reload them between H2O sessions. Here's how:",
"best_estimator = random_search.best_estimator_ # fetch the pipeline from the grid search\nh2o_model = h2o.get_model(best_estimator._final_estimator._id) # fetch the model from the pipeline\n\nsave_path = h2o.save_model(h2o_model, path=\".\", force=True)\nprint save_path\n\n# assumes new session\nmy_model = h2o.load_model(path=save_path)\n\nmy_model.predict(fr)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jrossyra/adaptivemd
|
examples/tutorial/test_worker.ipynb
|
lgpl-2.1
|
[
"AdaptiveMD\nExample 1 - Setup\n0. Imports",
"import sys, os, time",
"We want to stop RP from reporting all sorts of stuff for this example so we set a specific environment variable to tell RP to do so. If you want to see what RP reports change it to REPORT.",
"# verbose = os.environ.get('RADICAL_PILOT_VERBOSE', 'REPORT')\nos.environ['RADICAL_PILOT_VERBOSE'] = 'ERROR'",
"We will import the appropriate parts from AdaptiveMD as we go along so it is clear what it needed at what stage. Usually you will have the block of imports at the beginning of your script or notebook as suggested in PEP8.",
"from adaptivemd import Project\n\nfrom adaptivemd import OpenMMEngine\nfrom adaptivemd import PyEMMAAnalysis\n\nfrom adaptivemd import File, Directory, WorkerScheduler\n\nfrom adaptivemd import DT",
"Let's open a project with a UNIQUE name. This will be the name used in the DB so make sure it is new and not too short. Opening a project will always create a non-existing project and reopen an exising one. You cannot chose between opening types as you would with a file. This is a precaution to not accidentally delete your project.",
"Project.delete('test')\n\nproject = Project('test')",
"Now we have a handle for our project. First thing is to set it up to work on a resource.\n1. Set the resource\nWhat is a resource? A Resource specifies a shared filesystem with one or more clusteres attached to it. This can be your local machine or just a regular cluster or even a group of cluster that can access the same FS (like Titan, Eos and Rhea do).\nOnce you have chosen your place to store your results this way it is set for the project and can (at least should) not be altered since all file references are made to match this resource. Currently you can use the Fu Berlin Allegro Cluster or run locally. There are two specific local adaptations that include already the path to your conda installation. This simplifies the use of openmm or pyemma.\nLet us pick a local resource on a laptop for now.",
"from adaptivemd import LocalCluster, AllegroCluster\n\nresource_id = 'local.jhp'\n\nresource = LocalCluster()\nresource.wrapper.append('source activate xyz')\n\nif resource_id == 'local.jhp':\n project.initialize(resource)\nelif resource_id == 'local.sheep':\n project.initialize(LocalCluster())\nelif resource_id == 'fub.allegro':\n project.initialize(AllegroCluster())",
"TaskGenerators\nTaskGenerators are instances whose purpose is to create tasks to be executed. This is similar to the\nway Kernels work. A TaskGenerator will generate Task objects for you which will be translated into a ComputeUnitDescription and executed. In simple terms:\nThe task generator creates the bash scripts for you that run a simulation or run pyemma.\nA task generator will be initialized with all parameters needed to make it work and it will now what needs to be staged to be used.\nThe engine\nA task generator that will create jobs to run simulations. Currently it uses a little python script that will excute OpenMM. It requires conda to be added to the PATH variable or at least openmm to be installed on the cluster. If you setup your resource correctly then this should all happen automatically.\nFirst we define a File object. These are used to represent files anywhere, on the cluster or your local application. File like any complex object in adaptivemd can have a .name attribute that makes them easier to find later.",
"pdb_file = File('file://../files/alanine/alanine.pdb').named('initial_pdb').load()",
"Here we used a special prefix that can point to specific locations. \n\nfile:// points to files on your local machine. \nunit:// specifies files on the current working directory of the executing node. Usually these are temprary files for a single execution.\nshared:// specifies the root shared FS directory (e.g. NO_BACKUP/ on Allegro) Use this to import and export files that are already on the cluster.\nstaging:// a special scheduler specific directory where files are moved after they are completed on a node and should be used for later. Use this to relate to files that should be stored or reused. After you one excution is done you usually move all important files to this place.\nsandbox:// this should not concern you and is a special RP folder where all pilot/session folders are located.\n\nSo let's do an example for an OpenMM engine. This is simply a small python script that makes OpenMM look like a executable. It run a simulation by providing an initial frame, OpenMM specific system.xml and integrator.xml files and some additional parameters like the platform name, how often to store simulation frames, etc.",
"engine = OpenMMEngine(\n pdb_file=pdb_file,\n system_file=File('file://../files/alanine/system.xml').load(),\n integrator_file=File('file://../files/alanine/integrator.xml').load(),\n args='-r --report-interval 1 -p CPU --store-interval 1'\n).named('openmm')",
"To explain this we have now an OpenMMEngine which uses the previously made pdb File object and uses the location defined in there. The same some Files for the OpenMM XML files and some args to store each frame (to keep it fast) and run using the CPU kernel.\nLast we name the engine openmm to find it later.",
"engine.name",
"The modeller\nThe instance to compute an MSM model of existing trajectories that you pass it. It is initialized with a .pdb file that is used to create features between the $c_\\alpha$ atoms. This implementaton requires a PDB but in general this is not necessay. It is specific to my PyEMMAAnalysis show case.",
"modeller = PyEMMAAnalysis(\n pdb_file=pdb_file\n).named('pyemma')",
"Again we name it pyemma for later reference.\nAdd generators to project\nNext step is to add these to the project for later usage. We pick the .generators store and just add it. Consider a store to work like a set() in python. It contains objects only once and is not ordered. Therefore we need a name to find the objects later. Of course you can always iterate over all objects, but the order is not given.\nTo be precise there is an order in the time of creation of the object, but it is only accurate to seconds and it really is the time it was created and not stored.",
"project.generators.add(engine)\nproject.generators.add(modeller)\n\nsc = WorkerScheduler(project.resource)\nsc.enter(project)\n\nt = engine.task_run_trajectory(project.new_trajectory(pdb_file, 100, restart=True)). extend(50).extend(100)\n\nsc.task_to_script(t)\n\nsc.advance()\nfor f in project.trajectories:\n print f.basename, f.length, DT(f.created).time\n\nfor t in project.tasks:\n print t.stderr\n\nfor l in project.logs:\n print repr(l)\n\nprint project.generators\n\nt1 = engine.task_run_trajectory(project.new_trajectory(pdb_file, 100, restart=True))\nt2 = t1.extend(100)\n\nproject.tasks.add(t2)\n\nt.append('source activate enb')\n\nt._user_pre_exec\n\nsc.wrapper.pre_exec('source activate')\n\nfor f in project.trajectories:\n print f.drive, f.basename, len(f), f.created, f.__time__, f.exists, hex(f.__uuid__)\n\nfor f in project.files:\n print f.drive, f.path, f.created, f.__time__, f.exists, hex(f.__uuid__)\n\nw = project.workers.last\nprint w.state\nprint w.command\n\nfor t in project.tasks:\n print t.state, t.worker.hostname if t.worker else 'None'\n\nsc.advance()\n\nt1 = engine.task_run_trajectory(project.new_trajectory(pdb_file, 100))\nt2 = t1.extend(100)\n\nproject.tasks.add(t2)\n\n# from adaptivemd.engine import Trajectory\n# t3 = engine.task_run_trajectory(Trajectory('staging:///trajs/0.dcd', pdb_file, 100)).extend(100)\n# t3.dependencies = []\n\n# def get_created_files(t, s):\n# if t.is_done():\n# print 'done', s\n# return s - set(t.added_files)\n# else:\n# adds = set(t.added_files)\n# rems = set(s.required[0] for s in t._pre_stage)\n# print '+', adds\n# print '-', rems\n# q = set(s) - adds | rems \n \n# if t.dependencies is not None:\n# for d in t.dependencies: \n# q = get_created_files(d, q)\n\n# return q\n \n# get_created_files(t3, {})\n\nfor w in project.workers:\n print w.hostname, w.state\n\nw = project.workers.last\nprint w.state\nprint w.command\n\nw.command = 'shutdown'\n\nfor t in project.tasks:\n print t.state, t.worker.hostname if t.worker 
else 'None'\n\nfor f in project.trajectories:\n print f.drive, f.basename, len(f), f.created, f.__time__, f.exists, hex(f.__uuid__)\n\nproject.trajectories.one[0]\n\nt = engine.task_run_trajectory(project.new_trajectory(project.trajectories.one[0], 100))\n\nproject.tasks.add(t)\n\nprint project.files\nprint project.tasks\n\nt = modeller.execute(list(project.trajectories))\n\nproject.tasks.add(t)\n\nfrom uuid import UUID\n\nproject.storage.tasks._document.find_one({'_dict': {'generator' : { '_dict': }}})\n\ngenlist = ['openmm']\n\nscheduler = sc\nprefetch = 1\n\nwhile True:\n scheduler.advance()\n if scheduler.is_idle:\n for _ in range(prefetch):\n tasklist = scheduler(project.storage.tasks.consume_one())\n\n if len(tasklist) == 0:\n break\n\n time.sleep(2.0)",
"Note, that you cannot add the same engine twice. But if you create a new engine it will be considered different and hence you can store it again. \nCreate one intial trajectory\nFinally we are ready to run a first trajectory that we will store as a point of reference in the project. Also it is nice to see how it works in general.\n1. Open a scheduler\na job on the cluster to execute tasks\nthe .get_scheduler function delegates to the resource and uses the get_scheduler functions from there. This is merely a convenience since a Scheduler has the responsibility to open queues on the resource for you. \nYou have the same options as the queue has in the resource. This is often the number of cores and walltime, but can be additional ones, too. \nLet's open the default queue and use a single core for it since we only want to run one simulation.",
"scheduler = project.get_scheduler(cores=1)",
"Next we create the parameter for the engine to run the simulation. Since it seemed appropriate we use a Trajectory object (a special File with initial frame and length) as the input. You could of course pass these things separately, but this way, we can actualy reference the no yet existing trajectory and do stuff with it.\nA Trajectory should have a unique name and so there is a project function to get you one. It uses numbers and makes sure that this number has not been used yet in the project.",
"trajectory = project.new_trajectory(engine['pdb_file'], 100)\ntrajectory",
"This says, initial is alanine.pdb run for 100 frames and is named xxxxxxxx.dcd.\nNow, we want that this trajectory actually exists so we have to make it (on the cluster which is waiting for things to do). So we need a Task object to run a simulation. Since Task objects are very flexible there are helper functions to get them to do, what you want, like the ones we already created just before. Let's use the openmm engine to create an openmm task",
"task = engine.task_run_trajectory(trajectory)",
"That's it, just that a trajectory description and turn it into a task that contains the shell commands and needed files, etc. \nLast step is to really run the task. You can just use a scheduler as a function or call the .submit() method.",
"scheduler(task)",
"Now we have to wait. To see, if we are done, you can check the scheduler if it is still running tasks.",
"scheduler.is_idle\n\nprint scheduler.generators",
"or you wait until it becomes idle using .wait()",
"# scheduler.wait()",
"If all went as expected we will now have our first trajectory.",
"print project.files\nprint project.trajectories",
"Excellent, so let's clean up and close our queue",
"scheduler.exit()",
"and close the project.",
"project.close()",
"The final project.close() will also shut down all open schedulers for you, so the exit command would not be necessary here. It is relevant if you want to exit the queue as soon as possible to save walltime.\nSummary\nYou have finally created an AdaptiveMD project and run your first trajectory. Now that the project exists, it is much easier to run more trajectories."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ngcm/training-public
|
FEEG6016 Simulation and Modelling/08-Finite-Elements-Lab-2.ipynb
|
mit
|
[
"Finite Elements Lab 2 Worksheet",
"from IPython.core.display import HTML\ncss_file = 'https://raw.githubusercontent.com/ngcm/training-public/master/ipython_notebook_styles/ngcmstyle.css'\nHTML(url=css_file)",
"2d problem\nWe've looked at the problem of finding the static temperature distribution in a bar. Now let's move on to finding the temperature distribution of a plate of length $1$ on each side. The temperature $T(x, y) = T(x_1, x_2)$ satisfies\n$$\n \\nabla^2 T + f({\\bf x}) = \\left( \\partial_{xx} + \\partial_{yy} \\right) T + f({\\bf x}) = 0.\n$$\nWe'll fix the temperature to be zero at the right edge, $T(1, y) = 0$. We'll insulate the other edges (no heat flux through them), giving the boundary conditions on all edges as\n$$\n\\begin{align}\n \\partial_x T(0, y) &= 0, & T(1, y) &=0, \\\n \\partial_y T(x, 0) &= 0, & \\partial_y T(x, 1) &=0.\n\\end{align}\n$$\nOnce again we want to write down the weak form by integrating by parts. To do that we rely on the divergence theorem,\n$$\n \\int_{\\Omega} \\text{d}\\Omega \\, \\nabla_i \\phi = \\int_{\\Gamma} \\text{d}\\Gamma \\, \\phi n_i.\n$$\nHere $\\Omega$ is the domain (which in our problem is the plate, $x, y \\in [0, 1]$) and $\\Gamma$ its boundary (in our problem the four lines $x=0, 1$ and $y=0, 1$), whilst ${\\bf n}$ is the (outward-pointing) normal vector to the boundary.\nWe then multiply the strong form of the static heat equation by a weight function $w(x, y)$ and integrate by parts, using the divergence theorem, to remove the second derivative. To enforce the boundary conditions effectively we again choose the weight function to vanish where the value of the temperature is explicitly given, i.e. $w(1, y) = 0$. That is, we split the boundary $\\Gamma$ into a piece $\\Gamma_D$ where the boundary conditions are in Dirichlet form (the value $T$ is given) and a piece $\\Gamma_N$ where the boundary conditions are in Neumann form (the value of the normal derivative $n_i \\nabla_i T$ is given). 
We then enforce that on $\\Gamma_D$ the weight function vanishes.\nFor our problem, this gives\n$$\n \\int_{\\Omega} \\text{d} \\Omega \\, \\nabla_i w \\nabla_i T = \\int_{\\Omega} \\text{d} \\Omega \\, w f.\n$$\nRe-writing for our explicit domain and our Cartesian coordinates we get\n$$\n \\int_0^1 \\text{d} y \\, \\int_0^1 \\text{d} x \\, \\left( \\partial_x w \\partial_x T + \\partial_y w \\partial_y T \\right) = \\int_0^1 \\text{d} y \\, \\int_0^1 \\text{d} x \\, w(x, y) f(x, y).\n$$\nThis should be compared to the one dimensional case\n$$\n \\int_0^1 \\text{d}x \\, \\partial_x w(x) \\partial_x T(x) = \\int_0^1 \\text{d}x \\, w(x) f(x).\n$$\nWe can now envisage using the same steps as the one dimensional case. Split the domain into elements, represent all functions in terms of known shape functions on each element, assemble the problems in each element to a single matrix problem, and then solve the matrix problem.\nElements\nHere we will use triangular elements. As a simple example we'll split the plate into two triangles.",
"%matplotlib inline\nimport numpy\nfrom matplotlib import pyplot\nfrom matplotlib import rcParams\nrcParams['font.family'] = 'serif'\nrcParams['font.size'] = 16\nrcParams['figure.figsize'] = (12,6)\n\nnodes = numpy.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])\nIEN = numpy.array([[0, 1, 2], \n [1, 3, 2]])\n\npyplot.figure()\npyplot.axis('equal')\npyplot.triplot(nodes[:,0], nodes[:,1], triangles=IEN, lw=2)\npyplot.plot(nodes[:,0], nodes[:,1], 'ro')\nfor e in range(IEN.shape[0]):  # loop over elements, not nodes.shape[1]\n barycentre = numpy.mean(nodes[IEN[e,:],:], axis=0)\n pyplot.text(barycentre[0], barycentre[1], \"{}\".format(e),\n bbox=dict(facecolor='red', alpha=0.5))\n for n in range(3):\n pyplot.text(nodes[IEN[e,n],0]-0.07*(-1)**e,nodes[IEN[e,n],1]+0.07, r\"${}_{{{}}}$\".format(n,e),\n bbox=dict(facecolor='blue', alpha=0.25 + 0.5*e))\nfor n in range(nodes.shape[0]):\n pyplot.text(nodes[n,0]-0.07, nodes[n,1]-0.07, \"{}\".format(n),\n bbox=dict(facecolor='green', alpha=0.3))\npyplot.xlim(-0.2, 1.2)\npyplot.ylim(-0.2, 1.2)\npyplot.xlabel(r\"$x$\")\npyplot.ylabel(r\"$y$\");",
"What we're doing here is\n\nProviding a list of nodes by their global coordinates.\nProviding the element node array IEN which says how the elements are linked to the nodes.\n\nWe have that for element $e$ and local node number $a = 0, 1, 2$ the global node number is $A = IEN(e, a)$. This notation is sufficiently conventional that matplotlib recognizes it with its triplot/tripcolor/trisurf functions.\nIt is convention that the nodes are ordered in the anti-clockwise direction as the local number goes from $0$ to $2$.\nThe plot shows the\n\nelement numbers in the red boxes\nthe global node numbers in the green boxes\nthe local element numbers in the blue boxes (the subscript shows the element number).\n\nWe will need one final array, which is the $ID$ or destination array. This links the global node number to the global equation number in the final linear system. As the order of the equations in a linear system doesn't matter, this essentially encodes whether a node should have any equation in the linear system. Any node on $\\Gamma_D$, where the value of the temperature is given, should not have an equation. In the example above the right edge is fixed, so nodes $1$ and $3$ lie on $\\Gamma_D$ and should not have an equation. Thus in our case we have",
"ID = numpy.array([0,-1,1,-1])",
"In the one dimensional case we used the location matrix or $LM$ array to link local node numbers in elements to equations. With the $IEN$ and $ID$ arrays the $LM$ matrix is strictly redundant, as $LM(a, e) = ID(IEN(e, a))$. However, it's still standard to construct it:",
"LM = numpy.zeros_like(IEN.T)\nfor e in range(IEN.shape[0]):\n for a in range(IEN.shape[1]):\n LM[a,e] = ID[IEN[e,a]]\nLM",
"Function representation and shape functions\nWe're going to want to write our unknown functions $T, w$ in terms of shape functions. These are easiest to write down for a single reference element, in the same way as we did for the one dimensional case where our reference element used the coordinates $\\xi$. In two dimensions we'll use the reference coordinates $\\xi_0, \\xi_1$, and the standard \"unit\" triangle:",
"corners = numpy.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])\npyplot.plot(corners[:,0],corners[:,1],linewidth=2)\npyplot.xlabel(r\"$\\xi_0$\")\npyplot.ylabel(r\"$\\xi_1$\")\npyplot.axis('equal')\npyplot.ylim(-0.1,1.1);",
"The shape functions on this triangle are\n\\begin{align}\n N_0(\\xi_0, \\xi_1) &= 1 - \\xi_0 - \\xi_1, \\\n N_1(\\xi_0, \\xi_1) &= \\xi_0, \\\n N_2(\\xi_0, \\xi_1) &= \\xi_1.\n\\end{align}\nThe derivatives are all either $0$ or $\\pm 1$.\nAs soon as we have the shape functions, our weak form becomes\n$$\n \\sum_A T_A \\int_{\\Omega} \\text{d}\\Omega \\, \\left( \\partial_{x} N_A (x, y) \\partial_{x} N_B(x, y) + \\partial_{y} N_A(x, y) \\partial_{y} N_B(x, y) \\right) = \\int_{\\Omega} \\text{d}\\Omega \\, N_B(x, y) f(x, y).\n$$\nIf we restrict to a single element the weak form becomes\n$$\n \\sum_A T_A \\int_{\\triangle} \\text{d}\\triangle \\, \\left( \\partial_{x} N_A (x, y) \\partial_{x} N_B(x, y) + \\partial_{y} N_A(x, y) \\partial_{y} N_B(x, y) \\right) = \\int_{\\triangle} \\text{d}\\triangle \\, N_B(x, y) f(x, y).\n$$\nWe need to map the triangle and its $(x, y) = {\\bf x}$ coordinates to the reference triangle and its $(\\xi_0, \\xi_1) = {\\bf \\xi}$ coordinates. We also need to work out the integrals that appear in the weak form. 
We need the transformation formula\n$$\n \\int_{\\triangle} \\text{d}\\triangle \\, \\phi(x, y) = \\int_0^1 \\text{d}\\xi_1 \\, \\int_0^{1-\\xi_1} \\text{d}\\xi_0 \\, \\phi \\left( x(\\xi_0, \\xi_1), y(\\xi_0, \\xi_1) \\right) j(\\xi_0, \\xi_1),\n$$\nwhere the Jacobian matrix $J$ is\n$$\n J = \\left[ \\frac{\\partial {\\bf x}}{\\partial {\\bf \\xi}} \\right] = \\begin{pmatrix} \\partial_{\\xi_0} x & \\partial_{\\xi_1} x \\ \\partial_{\\xi_0} y & \\partial_{\\xi_1} y \\end{pmatrix}\n$$\nand hence the Jacobian determinant $j$ is\n$$\n j = \\det{J} = \\det \\left[ \\frac{\\partial {\\bf x}}{\\partial {\\bf \\xi}} \\right] = \\det \\begin{pmatrix} \\partial_{\\xi_0} x & \\partial_{\\xi_1} x \\ \\partial_{\\xi_0} y & \\partial_{\\xi_1} y \\end{pmatrix}.\n$$\nWe will also need the Jacobian matrix when writing the derivatives of the shape functions in terms of the coordinates on the reference triangle, i.e.\n$$\n \\begin{pmatrix} \\partial_x N_A & \\partial_y N_A \\end{pmatrix} = \\begin{pmatrix} \\partial_{\\xi_0} N_A & \\partial_{\\xi_1} N_A \\end{pmatrix} J^{-1} .\n$$\nThe integral over the reference triangle can be directly approximated using, for example, Gauss quadrature. To second order we have\n$$\n \\int_0^1 \\text{d}\\xi_1 \\, \\int_0^{1-\\xi_1} \\text{d}\\xi_0 \\, \\psi \\left( x(\\xi_0, \\xi_1), y(\\xi_0, \\xi_1) \\right) \\simeq \\frac{1}{6} \\sum_{j = 1}^{3} \\psi \\left( x((\\xi_0)_j, (\\xi_1)_j), y((\\xi_0)_j, (\\xi_1)_j) \\right)\n$$\nwhere\n$$\n\\begin{align}\n (\\xi_0)_1 &= \\frac{1}{6}, & (\\xi_1)_1 &= \\frac{1}{6}, \\\n (\\xi_0)_2 &= \\frac{4}{6}, & (\\xi_1)_2 &= \\frac{1}{6}, \\\n (\\xi_0)_3 &= \\frac{1}{6}, & (\\xi_1)_3 &= \\frac{4}{6}.\n\\end{align}\n$$\nFinally, we need to map from the coordinates ${\\bf \\xi}$ to the coordinates ${\\bf x}$. This is straightforward if we think of writing each component $(x, y)$ in terms of the shape functions. 
So for element $e$ with node locations $(x^e_a, y^e_a)$ for local node number $a = 0, 1, 2$ we have\n$$\n\\begin{align}\n x &= x^e_0 N_0(\\xi_0, \\xi_1) + x^e_1 N_1(\\xi_0, \\xi_1) + x^e_2 N_2(\\xi_0, \\xi_1), \\\n y &= y^e_0 N_0(\\xi_0, \\xi_1) + y^e_1 N_1(\\xi_0, \\xi_1) + y^e_2 N_2(\\xi_0, \\xi_1).\n\\end{align}\n$$\nTasks\n\nWrite a function that, given ${\\bf \\xi}$, returns that shape functions at that location.\nWrite a function that, given ${\\bf \\xi}$, returns the derivatives of the shape functions at that location.\nWrite a function that, given the (global) locations ${\\bf x}$ of the nodes of a triangular element and the local coordinates ${\\bf \\xi}$ within the element returns the corresponding global coordinates.\nWrite a function that, given the (global) locations ${\\bf x}$ of the nodes of a triangular element and the local coordinates ${\\bf \\xi}$, returns the Jacobian matrix at that location.\nWrite a function that, given the (global) locations ${\\bf x}$ of the nodes of a triangular element and the local coordinates ${\\bf \\xi}$, returns the determinant of the Jacobian matrix at that location.\nWrite a function that, given the (global) locations ${\\bf x}$ of the nodes of a triangular element and the local coordinates ${\\bf \\xi}$ within the element returns the derivatives $\\partial_{\\bf x} N_a = J^{-1} \\partial_{\\bf \\xi} N_a$.\nWrite a function that, given a function $\\psi({\\bf \\xi})$, returns the quadrature of $\\psi$ over the reference triangle.\nWrite a function that, given the (global) locations of the nodes of a triangular element and a function $\\phi(x, y)$, returns the quadrature of $\\phi$ over the element.\nTest all of the above by integrating simple functions (eg $1, \\xi, \\eta, x, y$) over the elements above.\n\nMore tasks\n\nWrite a function to compute the coefficients of the stiffness matrix for a single element,\n$$\n k^e_{ab} = \\int_{\\triangle^e} \\text{d}\\triangle^e \\, \\left( \\partial_{x} N_a (x, y) 
\\partial_{x} N_b(x, y) + \\partial_{y} N_a(x, y) \\partial_{y} N_b(x, y) \\right).\n$$\nWrite a function to compute the coefficients of the force vector for a single element,\n$$\n f^e_b = \\int_{\\triangle^e} \\text{d}\\triangle^e \\, N_b(x, y) f(x, y).\n$$\n\nAlgorithm\nThis gives our full algorithm:\n\nSet number of elements $N_{\\text{elements}}$.\nSet node locations ${\\bf x}A, A = 0, \\dots, N{\\text{nodes}}$. Note that there is no longer a direct connection between the number of nodes and elements.\nSet up the $IEN$ and $ID$ arrays linking elements to nodes and elements to equation numbers. From these set the location matrix $LM$. Work out the required number of equations $N_{\\text{equations}}$ (the maximum of the $ID$ array plus $1$).\nSet up arrays of zeros for the global stiffness matrix (size $N_{\\text{equations}} \\times N_{\\text{equations}}$) and force vector (size $N_{\\text{equations}}$).\n\nFor each element:\n\nForm the element stiffness matrix $k^e_{ab}$.\nForm the element force vector $f^e_a$.\nAdd the contributions to the global stiffness matrix and force vector\n\n\n\nSolve $K {\\bf T} = {\\bf F}$.\n\n\nAlgorithm tasks\n\nWrite a function that given a list of nodes and the $IEN$ and $ID$ arrays and returns the solution ${\\bf T}$.\nTest on the system $f(x, y) = 1$ with exact solution $T = (1-x^2)/2$.\nFor a more complex case with the same boundary conditions try\n$$\n f(x, y) = x^2 (x - 1) \\left( y^2 + 4 y (y - 1) + (y - 1)^2 \\right) + (3 x - 1) y^2 (y - 1)^2\n$$\nwith exact solution\n$$\n T(x, y) = \\tfrac{1}{2} x^2 (1 - x) y^2 (1 - y)^2.\n$$\n\nA useful function is a grid generator or mesher. Good meshers are generally hard: here is a very simple one for this specific problem.",
"def generate_2d_grid(Nx):\n \"\"\"\n Generate a triangular grid covering the plate :math:`[0,1]^2` with Nx (pairs of) triangles in each dimension.\n \n Parameters\n ----------\n \n Nx : int\n Number of triangles in any one dimension (so the total number on the plate is :math:`2 Nx^2`)\n \n Returns\n -------\n \n nodes : array of float\n Array of (x,y) coordinates of nodes\n IEN : array of int\n Array linking elements to nodes\n ID : array of int\n Array linking nodes to equations\n \"\"\"\n Nnodes = Nx+1\n x = numpy.linspace(0, 1, Nnodes)\n y = numpy.linspace(0, 1, Nnodes)\n X, Y = numpy.meshgrid(x,y)\n nodes = numpy.zeros((Nnodes**2,2))\n nodes[:,0] = X.ravel()\n nodes[:,1] = Y.ravel()\n ID = numpy.zeros(len(nodes), dtype=int)\n n_eq = 0\n for nID in range(len(nodes)):\n if nID % Nnodes == Nx:\n ID[nID] = -1\n else:\n ID[nID] = n_eq\n n_eq += 1\n IEN = numpy.zeros((2*Nx**2,3), dtype=int)\n for i in range(Nx):\n for j in range(Nx):\n IEN[2*i+2*j*Nx , :] = i+j*Nnodes, i+1+j*Nnodes, i+(j+1)*Nnodes\n IEN[2*i+1+2*j*Nx, :] = i+1+j*Nnodes, i+1+(j+1)*Nnodes, i+(j+1)*Nnodes\n return nodes, IEN, ID"
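A sketch of some of the tasks above — the shape functions, their (constant) reference-triangle derivatives, the three-point Gauss quadrature rule, and the per-element Jacobian. The function names are our own choices, not prescribed by the worksheet:

```python
import numpy as np

def shape(xi):
    """Shape functions N_a on the reference triangle at xi = (xi0, xi1)."""
    xi0, xi1 = xi
    return np.array([1.0 - xi0 - xi1, xi0, xi1])

def dshape(xi):
    """Derivatives dN_a/dxi_b (row a, column b); constant for linear triangles."""
    return np.array([[-1.0, -1.0],
                     [ 1.0,  0.0],
                     [ 0.0,  1.0]])

def quad_reference(psi):
    """Second-order Gauss quadrature of psi(xi) over the reference triangle."""
    points = [(1/6, 1/6), (4/6, 1/6), (1/6, 4/6)]
    return sum(psi(np.array(p)) for p in points) / 6.0

def jacobian(node_xy):
    """Jacobian dx/dxi for a linear triangle given its 3x2 corner array.

    The element map is affine, so J is constant: from x = sum_a x_a N_a(xi)
    the columns are the edge vectors x1 - x0 and x2 - x0, and det(J) is
    twice the element area.
    """
    x0, x1, x2 = np.asarray(node_xy, dtype=float)
    return np.column_stack((x1 - x0, x2 - x0))

# Sanity checks on the reference triangle (corners (0,0), (1,0), (0,1)):
ref = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(quad_reference(lambda xi: 1.0))  # → 0.5 (the triangle's area)
print(np.linalg.det(jacobian(ref)))    # → 1.0 (twice the area)
```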
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kubeflow/pipelines
|
samples/contrib/mnist/00_Kubeflow_Cluster_Setup.ipynb
|
apache-2.0
|
[
"# Copyright 2019 The Kubeflow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================",
"Deploying a Kubeflow Cluster on Google Cloud Platform (GCP)\nThis notebook provides instructions for setting up a Kubeflow cluster on GCP using the command-line interface (CLI). For additional help, see the guide to deploying Kubeflow using the CLI.\nThere are two possible alternatives:\n- The first alternative is to deploy the Kubeflow cluster using the Kubeflow deployment web app, and the instructions can be found here.\n- Another alternative is to use the recently launched AI Platform Pipelines. But it is important to note that AI Platform Pipelines is a standalone Kubeflow Pipelines deployment, where a lot of the components in a full Kubeflow deployment won't be pre-installed. The instructions can be found here.\nThe CLI deployment gives you more control over the deployment process and configuration than you get if you use the deployment UI.\nPrerequisites\n\nYou have a GCP project set up for your Kubeflow deployment, with the owner role for the project and with the following APIs enabled:\nCompute Engine API\nKubernetes Engine API\nIdentity and Access Management (IAM) API\nDeployment Manager API\nCloud Resource Manager API\nAI Platform Training & Prediction API\n\n\nYou have set up OAuth for Cloud IAP\nYou have installed and set up kubectl\nYou have installed gcloud-sdk\n\nRunning Environment\nThis notebook helps in creating the Kubeflow cluster on GCP. You must run this notebook in an environment with Cloud SDK installed, such as Cloud Shell. Learn more about installing Cloud SDK.\nSetting up a Kubeflow cluster\n\nDownload kfctl\nSet up environment variables\nCreate a dedicated service account for deployment\nDeploy Kubeflow\nInstall Kubeflow Pipelines SDK\nSanity check\n\nCreate a working directory\nCreate a new working directory in your current directory. The default name is kubeflow, but you can change the name.",
"work_directory_name = 'kubeflow'\n\n! mkdir -p $work_directory_name\n\n%cd $work_directory_name",
"Download kfctl\nDownload kfctl to your working directory. The default version used is v0.7.0, but you can find the latest release here.",
"## Download kfctl v0.7.0\n! curl -LO https://github.com/kubeflow/kubeflow/releases/download/v0.7.0/kfctl_v0.7.0_linux.tar.gz\n \n## Unpack the tar ball\n! tar -xvf kfctl_v0.7.0_linux.tar.gz",
"If you are using AI Platform Notebooks, your environment is already authenticated. Skip the following cell.",
"## Create user credentials\n! gcloud auth application-default login",
"Set up environment variables\nSet up environment variables to use while installing Kubeflow. Replace variable placeholders (for example, <VARIABLE NAME>) with the correct values for your environment.",
"# Set your GCP project ID and the zone where you want to create the Kubeflow deployment\n%env PROJECT=<ADD GCP PROJECT HERE>\n%env ZONE=<ADD GCP ZONE TO LAUNCH KUBEFLOW CLUSTER HERE>\n\n# google cloud storage bucket\n%env GCP_BUCKET=gs://<ADD STORAGE LOCATION HERE>\n\n# Use the following kfctl configuration file for authentication with \n# Cloud IAP (recommended):\nuri = \"https://raw.githubusercontent.com/kubeflow/manifests/v0.7-branch/kfdef/kfctl_gcp_iap.0.7.0.yaml\"\nuri = uri.strip()\n%env CONFIG_URI=$uri\n\n# For using Cloud IAP for authentication, create environment variables\n# from the OAuth client ID and secret that you obtained earlier:\n%env CLIENT_ID=<ADD OAuth CLIENT ID HERE>\n%env CLIENT_SECRET=<ADD OAuth CLIENT SECRET HERE>\n\n# Set KF_NAME to the name of your Kubeflow deployment. You also use this\n# value as directory name when creating your configuration directory. \n# For example, your deployment name can be 'my-kubeflow' or 'kf-test'.\n%env KF_NAME=<ADD KUBEFLOW DEPLOYMENT NAME HERE>\n\n# Set up name of the service account that should be created and used\n# while creating the Kubeflow cluster\n%env SA_NAME=<ADD SERVICE ACCOUNT NAME TO BE CREATED HERE>",
"Configure gcloud and add kfctl to your path.",
"! gcloud config set project ${PROJECT}\n\n! gcloud config set compute/zone ${ZONE}\n\n\n# Set the path to the base directory where you want to store one or more \n# Kubeflow deployments. For example, /opt/.\n# Here we use the current working directory as the base directory\n# Then set the Kubeflow application directory for this deployment.\n\nimport os\nbase = os.getcwd()\n%env BASE_DIR=$base\n\nkf_dir = os.getenv('BASE_DIR') + \"/\" + os.getenv('KF_NAME')\n%env KF_DIR=$kf_dir\n\n# The following command is optional. It adds the kfctl binary to your path.\n# If you don't add kfctl to your path, you must use the full path\n# each time you run kfctl. In this example, the kfctl file is present in\n# the current directory\nnew_path = os.getenv('PATH') + \":\" + os.getenv('BASE_DIR')\n%env PATH=$new_path",
"Create service account",
"! gcloud iam service-accounts create ${SA_NAME}\n! gcloud projects add-iam-policy-binding ${PROJECT} \\\n --member serviceAccount:${SA_NAME}@${PROJECT}.iam.gserviceaccount.com \\\n --role 'roles/owner'\n! gcloud iam service-accounts keys create key.json \\\n --iam-account ${SA_NAME}@${PROJECT}.iam.gserviceaccount.com",
"Set GOOGLE_APPLICATION_CREDENTIALS",
"key_path = os.getenv('BASE_DIR') + \"/\" + 'key.json'\n%env GOOGLE_APPLICATION_CREDENTIALS=$key_path",
"Setup and deploy Kubeflow",
"! mkdir -p ${KF_DIR}\n%cd $kf_dir\n! kfctl apply -V -f ${CONFIG_URI}",
"Install Kubeflow Pipelines SDK",
"%%capture\n\n# Install the SDK (Uncomment the code if the SDK is not installed before)\n! pip3 install 'kfp>=0.1.36' --quiet --user",
"Sanity Check: Check the ingress created",
"! kubectl -n istio-system describe ingress",
"Access the Kubeflow cluster at https://<KF_NAME>.endpoints.<gcp_project_id>.cloud.goog/\nNote that it may take up to 15-20 mins for the above url to be functional."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
m2dsupsdlclass/lectures-labs
|
labs/01_keras/Intro Keras.ipynb
|
mit
|
[
"Training Neural Networks with Keras\nGoals:\n\nIntro: train a neural network with tensorflow and the Keras layers\n\nDataset:\n\nDigits: 10 class handwritten digits\nhttp://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits",
"import matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn.datasets import load_digits\n\ndigits = load_digits()\n\ndigits.images.shape\n\ndigits.data.shape\n\ndigits.target.shape\n\nsample_index = 45\nplt.figure(figsize=(3, 3))\nplt.imshow(digits.images[sample_index], cmap=plt.cm.gray_r,\n interpolation='nearest')\nplt.title(\"image label: %d\" % digits.target[sample_index]);",
"Train / Test Split\nLet's keep some held-out data to be able to measure the generalization performance of our model.",
"from sklearn.model_selection import train_test_split\n\n\ndata = np.asarray(digits.data, dtype='float32')\ntarget = np.asarray(digits.target, dtype='int32')\n\nX_train, X_test, y_train, y_test = train_test_split(\n data, target, test_size=0.15, random_state=37)\n\nX_train.shape\n\nX_test.shape\n\ny_train.shape\n\ny_test.shape",
"Preprocessing of the Input Data\nMake sure that all input variables are approximately on the same scale via input normalization:",
"from sklearn import preprocessing\n\n\n# mean = 0 ; standard deviation = 1.0\nscaler = preprocessing.StandardScaler()\nX_train = scaler.fit_transform(X_train)\nX_test = scaler.transform(X_test)\n\n# print(scaler.mean_)\n# print(scaler.scale_)\n\nX_train.shape\n\nX_train.mean(axis=0)\n\nX_train.std(axis=0)",
"Let's display one of the transformed samples (after feature standardization):",
"sample_index = 45\nplt.figure(figsize=(3, 3))\nplt.imshow(X_train[sample_index].reshape(8, 8),\n cmap=plt.cm.gray_r, interpolation='nearest')\nplt.title(\"transformed sample\\n(standardization)\");",
"The scaler object makes it possible to recover the original sample:",
"plt.figure(figsize=(3, 3))\nplt.imshow(scaler.inverse_transform(X_train[sample_index:sample_index+1]).reshape(8, 8),\n cmap=plt.cm.gray_r, interpolation='nearest')\nplt.title(\"original sample\");\n\nprint(X_train.shape, y_train.shape)\n\nprint(X_test.shape, y_test.shape)",
"Preprocessing of the Target Data\nTo train a first neural network we also need to turn the target variable into a vector \"one-hot-encoding\" representation. Here are the labels of the first samples in the training set encoded as integers:",
"y_train[:3]",
"Keras provides a utility function to convert integer-encoded categorical variables as one-hot encoded values:",
"from tensorflow.keras.utils import to_categorical\n\nY_train = to_categorical(y_train)\nY_train[:3]\n\nY_train.shape",
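Under the hood, one-hot encoding amounts to indexing into a zero matrix. A minimal numpy sketch for illustration (not the actual Keras implementation; `one_hot` is a name we introduce here):

```python
import numpy as np

def one_hot(y, num_classes=None):
    """Minimal one-hot encoder: row i has a 1 in column y[i], 0 elsewhere."""
    y = np.asarray(y, dtype=int)
    n = num_classes if num_classes is not None else int(y.max()) + 1
    out = np.zeros((y.size, n))
    out[np.arange(y.size), y] = 1.0
    return out

print(one_hot([1, 0, 2]))
```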
"Feed Forward Neural Networks with Keras\nObjectives of this section:\n\nBuild and train a first feedforward network using Keras\nhttps://www.tensorflow.org/guide/keras/overview\n\n\nExperiment with different optimizers, activations, size of layers, initializations\n\nA First Keras Model\nWe can now build and train our first feed forward neural network using the high level API from Keras:\n\nfirst we define the model by stacking layers with the right dimensions\nthen we define a loss function and plug in the SGD optimizer\nthen we feed the model the training data for a fixed number of epochs",
"from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense\nfrom tensorflow.keras import optimizers\n\ninput_dim = X_train.shape[1]\nhidden_dim = 100\noutput_dim = Y_train.shape[1]\n\nmodel = Sequential()\nmodel.add(Dense(hidden_dim, input_dim=input_dim, activation=\"tanh\"))\nmodel.add(Dense(output_dim, activation=\"softmax\"))\n\nmodel.compile(optimizer=optimizers.SGD(learning_rate=0.1),\n loss='categorical_crossentropy', metrics=['accuracy'])\n\nhistory = model.fit(X_train, Y_train, validation_split=0.2, epochs=15, batch_size=32)",
"Visualizing the Convergence\nLet's wrap the keras history info into a pandas dataframe for easier plotting:",
"import pandas as pd\n\nhistory_df = pd.DataFrame(history.history)\nhistory_df[\"epoch\"] = history.epoch\n\nfig, (ax0, ax1) = plt.subplots(nrows=2, sharex=True, figsize=(12, 6))\nhistory_df.plot(x=\"epoch\", y=[\"loss\", \"val_loss\"], ax=ax0)\nhistory_df.plot(x=\"epoch\", y=[\"accuracy\", \"val_accuracy\"], ax=ax1);",
"Monitoring Convergence with Tensorboard\nTensorboard is a built-in neural network monitoring tool.",
"%load_ext tensorboard\n\n!rm -rf tensorboard_logs\n\nimport datetime\nfrom tensorflow.keras.callbacks import TensorBoard\n\nmodel = Sequential()\nmodel.add(Dense(hidden_dim, input_dim=input_dim, activation=\"tanh\"))\nmodel.add(Dense(output_dim, activation=\"softmax\"))\n\nmodel.compile(optimizer=optimizers.SGD(learning_rate=0.1),\n loss='categorical_crossentropy', metrics=['accuracy'])\n\ntimestamp = datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\")\nlog_dir = \"tensorboard_logs/\" + timestamp\ntensorboard_callback = TensorBoard(log_dir=log_dir, histogram_freq=1)\n\nmodel.fit(x=X_train, y=Y_train, validation_split=0.2, epochs=15,\n callbacks=[tensorboard_callback]);\n\n%tensorboard --logdir tensorboard_logs",
"b) Exercises: Impact of the Optimizer\n\n\nTry to decrease the learning rate value by 10 or 100. What do you observe?\n\n\nTry to increase the learning rate value to make the optimization diverge.\n\n\nConfigure the SGD optimizer to enable a Nesterov momentum of 0.9\n\n\nNotes: \nThe keras API documentation is available at:\nhttps://www.tensorflow.org/api_docs/python/tf/keras\nIt is also possible to learn more about the parameters of a class by using the question mark: type and evaluate:\npython\noptimizers.SGD?\nin a jupyter notebook cell.\nIt is also possible to type the beginning of a function call / constructor and type \"shift-tab\" after the opening paren:\npython\noptimizers.SGD(<shift-tab>",
"optimizers.SGD?\n\n# %load solutions/keras_sgd_and_momentum.py",
"Replace the SGD optimizer by the Adam optimizer from keras and run it\n with the default parameters.\n\nHint: use optimizers.<TAB> to tab-complete the list of implemented optimizers in Keras.\n\nAdd another hidden layer and use the \"Rectified Linear Unit\" for each\n hidden layer. Can you still train the model with Adam with its default global\n learning rate?",
"# %load solutions/keras_adam.py",
"Exercises: Forward Pass and Generalization\n\nCompute predictions on test set using model.predict(...)\nCompute average accuracy of the model on the test set: the fraction of test samples for which the model makes a prediction that matches the true label.",
"# %load solutions/keras_accuracy_on_test_set.py",
"Let us decompose how we got the predictions. First, we call the model on the data to get the last layer (softmax) outputs directly as a tensorflow Tensor:",
"predictions_tf = model(X_test)\npredictions_tf[:5]\n\ntype(predictions_tf), predictions_tf.shape",
"We can use the tensorflow API to check that for each row, the probabilities sum to 1:",
"import tensorflow as tf\n\ntf.reduce_sum(predictions_tf, axis=1)[:5]",
"We can also extract the label with the highest probability using the tensorflow API:",
"predicted_labels_tf = tf.argmax(predictions_tf, axis=1)\npredicted_labels_tf[:5]",
"We can compare those labels to the expected labels to compute the accuracy with the Tensorflow API. Note however that we need an explicit cast from boolean to floating point values to be able to compute the mean accuracy when using the tensorflow tensors:",
"accuracy_tf = tf.reduce_mean(tf.cast(predicted_labels_tf == y_test, tf.float64))\naccuracy_tf",
"Also note that it is possible to convert tensors to numpy arrays if one prefers to use numpy:",
"accuracy_tf.numpy()\n\npredicted_labels_tf[:5]\n\npredicted_labels_tf.numpy()[:5]\n\n(predicted_labels_tf.numpy() == y_test).mean()",
"Home Assignment: Impact of Initialization\nLet us now study the impact of a bad initialization when training\na deep feed forward network.\nBy default Keras dense layers use the \"Glorot Uniform\" initialization\nstrategy to initialize the weight matrices:\n\neach weight coefficient is randomly sampled from [-scale, scale]\nscale is proportional to $\\frac{1}{\\sqrt{n_{in} + n_{out}}}$\n\nThis strategy is known to work well to initialize deep neural networks\nwith \"tanh\" or \"relu\" activation functions and then trained with\nstandard SGD.\nTo assess the impact of initialization let us plug an alternative init\nscheme into a 2 hidden layer network with \"tanh\" activations.\nFor the sake of the example let's use normally distributed weights\nwith a manually adjustable scale (standard deviation) and see the\nimpact of the scale value:",
"from tensorflow.keras import initializers\n\nnormal_init = initializers.TruncatedNormal(stddev=0.01)\n\n\nmodel = Sequential()\nmodel.add(Dense(hidden_dim, input_dim=input_dim, activation=\"tanh\",\n kernel_initializer=normal_init))\nmodel.add(Dense(hidden_dim, activation=\"tanh\",\n kernel_initializer=normal_init))\nmodel.add(Dense(output_dim, activation=\"softmax\",\n kernel_initializer=normal_init))\n\nmodel.compile(optimizer=optimizers.SGD(learning_rate=0.1),\n loss='categorical_crossentropy', metrics=['accuracy'])\n\nmodel.layers",
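For comparison with the default scheme described above, the Glorot-uniform sampling limit can be computed directly (a sketch; `glorot_limit` is our own helper name):

```python
import numpy as np

def glorot_limit(n_in, n_out):
    """Sampling limit of Glorot-uniform initialization: sqrt(6 / (n_in + n_out))."""
    return np.sqrt(6.0 / (n_in + n_out))

# For the first hidden layer above (64 input features, 100 units):
print(glorot_limit(64, 100))  # ≈ 0.19
```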
"Let's have a look at the parameters of the first layer after initialization but before any training has happened:",
"model.layers[0].weights\n\nw = model.layers[0].weights[0].numpy()\nw\n\nw.std()\n\nb = model.layers[0].weights[1].numpy()\nb\n\nhistory = model.fit(X_train, Y_train, epochs=15, batch_size=32)\n\nplt.figure(figsize=(12, 4))\nplt.plot(history.history['loss'], label=\"Truncated Normal init\")\nplt.legend();",
"Once the model has been fit, the weights have been updated and notably the biases are no longer 0:",
"model.layers[0].weights",
"Questions:\n\n\nTry the following initialization schemes and see whether\n the SGD algorithm can successfully train the network or\n not:\n\n\na very small e.g. stddev=1e-3\n\na larger scale e.g. stddev=1 or 10\n\ninitialize all weights to 0 (constant initialization)\n\n\nWhat do you observe? Can you find an explanation for those\n outcomes?\n\n\nAre more advanced solvers such as SGD with momentum or Adam able\n to deal better with such bad initializations?",
"# %load solutions/keras_initializations.py\n\n# %load solutions/keras_initializations_analysis.py"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
anoopsarkar/nlp-class-hw
|
zhsegment/default.ipynb
|
apache-2.0
|
[
"zhsegment: default program",
"from default import *",
"Run the default solution on dev",
"Pw = Pdist(data=datafile(\"data/count_1w.txt\"))\nsegmenter = Segment(Pw) # note that the default solution for this homework ignores the unigram counts\noutput_full = []\nwith open(\"data/input/dev.txt\") as f:\n for line in f:\n output = \" \".join(segmenter.segment(line.strip()))\n output_full.append(output)\nprint(\"\\n\".join(output_full[:3])) # print out the first three lines of output as a sanity check",
"Evaluate the default output",
"from zhsegment_check import fscore\nwith open('data/reference/dev.out', 'r') as refh:\n ref_data = [str(x).strip() for x in refh.read().splitlines()]\n tally = fscore(ref_data, output_full)\n print(\"score: {:.2f}\".format(tally), file=sys.stderr)\n",
"Documentation\nWrite some beautiful documentation of your program here.\nAnalysis\nDo some analysis of the results. What ideas did you try? What worked and what did not?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Naereen/notebooks
|
Test_for_Binder__access_local_packages.ipynb
|
mit
|
[
"Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Test-for-Binder-v2\" data-toc-modified-id=\"Test-for-Binder-v2-1\"><span class=\"toc-item-num\">1 </span>Test for Binder v2</a></div><div class=\"lev2 toc-item\"><a href=\"#Sys-&-OS-modules\" data-toc-modified-id=\"Sys-&-OS-modules-11\"><span class=\"toc-item-num\">1.1 </span>Sys & OS modules</a></div><div class=\"lev2 toc-item\"><a href=\"#Importing-a-file\" data-toc-modified-id=\"Importing-a-file-12\"><span class=\"toc-item-num\">1.2 </span>Importing a file</a></div><div class=\"lev2 toc-item\"><a href=\"#Conclusion\" data-toc-modified-id=\"Conclusion-13\"><span class=\"toc-item-num\">1.3 </span>Conclusion</a></div>\n\n# Test for Binder v2\n\n## Sys & OS modules",
"import sys\n\nprint(\"Path (sys.path):\")\nfor f in sys.path:\n print(f)\n\nimport os\n\nprint(\"Current directory:\")\nprint(os.getcwd())",
"Importing a file\nI will import this file from the agreg/ sub-folder.",
"import agreg.memoisation",
"Conclusion\nIt seems to work as wanted."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
google/trax
|
trax/examples/NER_using_Reformer.ipynb
|
apache-2.0
|
[
"<a href=\"https://colab.research.google.com/github/SauravMaheshkar/trax/blob/SauravMaheshkar-example-1/examples/NER_using_Reformer.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"#@title\n# Copyright 2020 Google LLC.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# https://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Author - @SauravMaheshkar\nInstall Dependencies\nInstall the latest version of the Trax Library.",
"!pip install -q -U trax",
"Introduction\n\nNamed-entity recognition (NER) is a subtask of information extraction that seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc.\nTo evaluate the quality of a NER system's output, several measures have been defined. The usual measures are called precision, recall, and F1 score. However, several issues remain in just how to calculate those values. State-of-the-art NER systems for English produce near-human performance. For example, the best system entering MUC-7 scored 93.39% of F-measure while human annotators scored 97.60% and 96.95%.\nImporting Packages",
"import trax # Our Main Library\nfrom trax import layers as tl\nimport os # For os dependent functionalities\nimport numpy as np # For scientific computing\nimport pandas as pd # For basic data analysis\nimport random as rnd # For using random functions",
"Pre-Processing\nLoading the Dataset\nLet's load the ner_dataset.csv file into a dataframe and see what it looks like",
"data = pd.read_csv(\"/kaggle/input/entity-annotated-corpus/ner_dataset.csv\",encoding = 'ISO-8859-1')\ndata = data.fillna(method = 'ffill')\ndata.head()",
"Creating a Vocabulary File\nWe can see there's a column for the words in each sentence. Thus, we can extract this column using .loc and store it into a .txt file using the .savetxt() function from numpy.",
"## Extract the 'Word' column from the dataframe\nwords = data.loc[:, \"Word\"]\n\n## Convert into a text file using the .savetxt() function\nnp.savetxt(r'words.txt', words.values, fmt=\"%s\")",
"Creating a Dictionary for Vocabulary\nHere, we create a Dictionary for our vocabulary by reading through all the sentences in the dataset.",
"vocab = {}\nwith open('words.txt') as f:\n for i, l in enumerate(f.read().splitlines()):\n vocab[l] = i\n print(\"Number of words:\", len(vocab))\n vocab['<PAD>'] = len(vocab)",
"Extracting Sentences from the Dataset\nHere we extract the sentences from the dataset and create (X, y) pairs for training.",
"class Get_sentence(object):\n def __init__(self,data):\n self.n_sent=1\n self.data = data\n agg_func = lambda s:[(w,p,t) for w,p,t in zip(s[\"Word\"].values.tolist(),\n s[\"POS\"].values.tolist(),\n s[\"Tag\"].values.tolist())]\n self.grouped = self.data.groupby(\"Sentence #\").apply(agg_func)\n self.sentences = [s for s in self.grouped]\n\ngetter = Get_sentence(data)\nsentence = getter.sentences\n\nwords = list(set(data[\"Word\"].values))\nwords_tag = list(set(data[\"Tag\"].values))\n\nword_idx = {w : i+1 for i ,w in enumerate(words)}\ntag_idx = {t : i for i ,t in enumerate(words_tag)}\n\nX = [[word_idx[w[0]] for w in s] for s in sentence]\ny = [[tag_idx[w[2]] for w in s] for s in sentence]",
"Making a Batch Generator\nHere, we create a batch generator for training.",
"def data_generator(batch_size, x, y,pad, shuffle=False, verbose=False):\n\n num_lines = len(x)\n lines_index = [*range(num_lines)]\n if shuffle:\n rnd.shuffle(lines_index)\n \n index = 0 \n while True:\n buffer_x = [0] * batch_size \n buffer_y = [0] * batch_size \n\n max_len = 0\n for i in range(batch_size):\n if index >= num_lines:\n index = 0\n if shuffle:\n rnd.shuffle(lines_index)\n \n buffer_x[i] = x[lines_index[index]]\n buffer_y[i] = y[lines_index[index]]\n \n lenx = len(x[lines_index[index]]) \n if lenx > max_len:\n max_len = lenx \n \n index += 1\n\n X = np.full((batch_size, max_len), pad)\n Y = np.full((batch_size, max_len), pad)\n\n\n for i in range(batch_size):\n x_i = buffer_x[i]\n y_i = buffer_y[i]\n\n for j in range(len(x_i)):\n\n X[i, j] = x_i[j]\n Y[i, j] = y_i[j]\n\n if verbose: print(\"index=\", index)\n yield((X,Y))",
"Splitting into Test and Train",
"from sklearn.model_selection import train_test_split\nx_train,x_test,y_train,y_test = train_test_split(X,y,test_size = 0.1,random_state=1)",
"Building the Model\nThe Reformer Model\nIn this notebook, we use the Reformer, which is a more efficient variant of the Transformer that uses reversible layers and locality-sensitive hashing. You can read the original paper here. \nLocality-Sensitive Hashing\n\nThe biggest problem that one might encounter while using Transformers on huge corpora is the handling of the attention layer. Reformer introduces Locality Sensitive Hashing to solve this problem, by computing a hash function that groups similar vectors together. Thus, an input sequence is rearranged to bring elements with the same hash together and then divided into segments (or chunks, buckets) to enable parallel processing. We can then apply attention to these chunks (rather than the whole input sequence) to reduce the computational load.\n\nReversible Layers\n\nUsing Locality Sensitive Hashing, we were able to solve the problem of computation, but we still have a memory issue. Reformer implements a novel approach to solve this problem, by recomputing the input of each layer on-demand during back-propagation, rather than storing it in memory. This is accomplished by using Reversible Layers (activations from the last layer are used to recover activations from any intermediate layer). \nReversible layers store two sets of activations for each layer. \n\n\nOne follows the standard procedure in which the activations are added as they pass through the network\n\n\nThe other set only captures the changes. Thus, if we run the network in reverse, we simply subtract the activations applied at each layer.\n\n\n\nModel Architecture\nWe will perform the following steps:\n\n\nUse input tensors from our data generator\n\n\nProduce semantic entries from an Embedding layer\n\n\nFeed these into our Reformer language model\n\n\nRun the output through a Linear layer\n\n\nRun these through a log softmax layer to get predicted classes\n\n\nWe use the:\n\ntl.Serial(): Combinator that applies layers serially (by function composition). It's commonly used to construct deep networks. It uses stack semantics to manage data for its sublayers.\n\ntl.Embedding(): Initializes a trainable embedding layer that maps discrete tokens/ids to vectors\n\n\ntrax.models.reformer.Reformer(): Creates a Reversible Transformer encoder-decoder model.\n\n\ntl.Dense(): Creates a Dense (fully-connected, affine) layer\n\n\ntl.LogSoftmax(): Creates a layer that applies log softmax along one tensor axis.",
"def NERmodel(tags, vocab_size=35181, d_model = 50):\n\n model = tl.Serial(\n # tl.Embedding(vocab_size, d_model),\n trax.models.reformer.Reformer(vocab_size, d_model, ff_activation=tl.LogSoftmax),\n tl.Dense(tags),\n tl.LogSoftmax()\n )\n\n return model\n\nmodel = NERmodel(tags = 17)\n\nprint(model)",
"Train the Model",
"from trax.supervised import training\n\nrnd.seed(33)\n\nbatch_size = 64\n\ntrain_generator = trax.data.inputs.add_loss_weights(\n data_generator(batch_size, x_train, y_train,vocab['<PAD>'], True),\n id_to_mask=vocab['<PAD>'])\n\neval_generator = trax.data.inputs.add_loss_weights(\n data_generator(batch_size, x_test, y_test,vocab['<PAD>'] ,True),\n id_to_mask=vocab['<PAD>'])\n\ndef train_model(model, train_generator, eval_generator, train_steps=1, output_dir='model'):\n train_task = training.TrainTask(\n train_generator, \n loss_layer = tl.CrossEntropyLoss(), \n optimizer = trax.optimizers.Adam(0.01), \n n_steps_per_checkpoint=10\n )\n\n eval_task = training.EvalTask(\n labeled_data = eval_generator, \n metrics = [tl.CrossEntropyLoss(), tl.Accuracy()], \n n_eval_batches = 10 \n )\n\n training_loop = training.Loop(\n model, \n train_task, \n eval_tasks = eval_task, \n output_dir = output_dir) \n\n training_loop.run(n_steps = train_steps)\n return training_loop\n\ntrain_steps = 100\ntraining_loop = train_model(model, train_generator, eval_generator, train_steps)",
"References\n\n\n\nGoogle AI Blog- Reformer: The Efficient Transformer\n\n\nGoogle AI Blog- Transformer: A Novel Neural Network Architecture for Language Understanding\n\n\nTrax: Deep Learning with Clear Code and Speed\n\n\nThe Illustrated Transformer\n\n\nAttention Is All You Need\n\n\nIllustrating the Reformer"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
uliang/First-steps-with-the-Python-language
|
Day 2 - Unit 3.2.ipynb
|
mit
|
[
"from __future__ import print_function, division\n\nimport matplotlib.pyplot as plt\nimport pandas as pd\nfrom matplotlib.style import use\nimport numpy as np\nimport matplotlib as mpl\n%matplotlib inline\nuse(\"ggplot\")\nPRSA = pd.read_csv(\"https://archive.ics.uci.edu/ml/machine-learning-databases/00381/PRSA_data_2010.1.1-2014.12.31.csv\",\n index_col=0)\nbike_sharing = pd.read_csv(\"day.csv\", index_col=0)",
"2. Density-based plots with matplotlib\nIn this section, we will be looking at density-based plots. Plots like these address a problem with big data: how does one visualize a plot with 10,000+ data points and avoid overplotting?",
"PRSA.head()",
"Source : https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data",
"plt.plot( PRSA.TEMP, PRSA[\"pm2.5\"], 'o', color=\"steelblue\", alpha=0.5)\nplt.ylabel(\"$\\mu g/m^3$\")\nplt.title(\"PM 2.5 readings as a function of temperature (Celsius)\")",
"As one can see, there's not much one can say about the structure of the data because the entire region below 400 $\\mu g/m^3$ is filled solid with blue. \nIt is here that a density plot helps mitigate this problem. The central idea is that individual data points matter less than what they collectively reveal about the underlying distribution of the data. In other words, for large amounts of data, we want to visualize the distribution instead of visualizing how individual data points are placed. \nFor this lesson, we will look at this data set and others to investigate the use of other plotting functions in matplotlib. \n2.1 Learning objectives\n\nTo use histograms to visualize the distribution of univariate data. \nTo customize histograms. \nTo use 2D histogram plots and hexbin plots to plot 2D distributions. \nTo plot colorbars to annotate such plots\n\n3. Histograms\nHistograms are created using the hist command. We illustrate this using the PRSA data set. Let's say that we are interested in plotting the distribution of DEWP.",
"fig = plt.figure()\nax = fig.add_subplot(111)\n\nax.set_title(\"Histogram of dew point readings\")\nax.set_xlabel(\"Dew point (C$^\\circ$)\")\n\n# Enter plotting code below here: ax.hist(PRSA.DEWP)\n\n\n\n",
"3.1 Understanding the plotting code\nSo how was this produced? \n\nWe initialized a figure object using plt.figure. \nNext we created an axis within the figure. We used the command fig.add_subplot. We passed the argument 111 to the function, which is shorthand for creating a single axes.\nWith an ax object created, we now set a title using the set_title function and also label the x-axis. \nThe histogram proper is plotted by calling the hist method on the axes instance. We merely need to pass the (1-dimensional) array of data to the function. \n\nNotice the output that is produced. It's not very nice and probably needs some prettying up. But as a quick exploratory plot, it does its job. Notice the extra textual output. This can be suppressed with a ; written at the end of the last statement in the code block. \n3.2 Customizing our histogram\nThe chart above can be customized to our liking by passing keyword arguments to the function. \n\n\nNumber of bins. The number of bins may be adjusted with the bins= keyword argument. However, do note that more bins does not translate into a better chart. With more bins, one tends to pick up much more variation in the data (noise) than is necessary. Therefore, try to choose a value which will give you the best sense of how the data is distributed, between the extremes of no detail (small bins value) and a noisy chart (high bins value). \n\n\nNormalization. Setting normed=True (it is False by default; renamed density= in newer matplotlib versions) means that the total area of the histogram is set to 1. This setting is useful to compare distributions of variables on different orders of magnitude. \n\n\nIs a log scale needed? log=True may be used if you want the count (or relative counts) to be plotted on a log scale. This may be useful if the counts in different bins differ by huge orders of magnitude. This is especially true for data modelled by power-law distributions. \n\n\nCumulative sums. Sometimes, you want to plot the ogive instead. Enable this by setting cumulative=True. \n\n\nColors. Selecting the best color, especially when comparing multiple histograms on the same axis, is crucial. You can choose different colors using the color= argument. You may use any hexadecimal color code (e.g. #660000), CSS color names, or matplotlib color abbreviations (e.g. c for cyan, m for magenta, b for blue, etc.). \n\n\nLines between bars. A histogram is more presentable if one draws lines between bars. Enable this by setting lw= to an appropriate thickness (any value around 0.5 is ok) and giving it a color by setting ec=. \n\n\nTransparency control. This is useful if there are multiple histograms. Use alpha= and enter any value from 0 (fully transparent) to 1 (fully opaque).",
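The options above can be sketched on synthetic data (a normally distributed stand-in for the PRSA DEWP column; in matplotlib 3.x the old normed= keyword is spelled density=):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe for scripts
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
dewp = rng.normal(loc=2, scale=14, size=5000)  # synthetic stand-in for PRSA.DEWP

fig, ax = plt.subplots()
ax.set_title("Histogram of dew point readings")
ax.set_xlabel(r"Dew point (C$^\circ$)")
counts, bin_edges, patches = ax.hist(
    dewp,
    bins=25,          # number of bins
    density=True,     # total area normalized to 1 (normed= in older matplotlib)
    color="#660000",  # hexadecimal color code
    ec="white",       # edge color: lines between bars
    lw=0.5,
    alpha=0.8,        # transparency
)
```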
"fig = plt.figure()\nax = fig.add_subplot(111)\n\nax.set_title(\"Histogram of dew point readings\")\nax.set_xlabel(\"Dew point (C$^\\circ$)\")\n# Try typing in ax.hist(PRSA.DEWP, ...) with various customization options. \n\n",
"More options can be found on the documentation page.\n3.3 Histograms with weights\nThe hist function is not only used to plot histograms. In essence, it is a function used to plot rectangular patches on an axis. Thus, we use hist to plot bar charts. In fact, we may use it to plot stacked bar charts, which is something seaborn cannot do. \nIn the following dataset, we want to plot the distribution of daily bike rental counts (variable cnt) on any given day of the week and separate them by the variable weathersit, which is an ordinal variable denoting the severity of the weather. 1 means good weather, 4 means bad.",
"bike_sharing.head()",
"Source: https://archive.ics.uci.edu/ml/datasets/bike+sharing+dataset\nAs you can see in the dataset above, we need to plot weekday on the x-axis and have cnt as the y-axis. How can we plot this using hist? The problem is that if we pass the weekday array to hist, we end up counting the frequency of each day in the dataset! \nTo solve this, we pass the cnt variable as a separate parameter to hist through the keyword argument weights.",
"# Please run this cell before proceeding\n\ngroup = bike_sharing.groupby(\"weathersit\")\nweathers = [w_r for w_r, _ in group]\nday_data = [group.get_group(weather).weekday for weather in weathers]\nweights_data = [group.get_group(weather).cnt for weather in weathers]",
"What we need to do is to group the data set by weather rating and create a list of array data to be passed to hist. We can do this efficiently using the groupby method on data frames and list comprehension statements. Now we have three lists: one for the weather rating, one for the day of the week, and one more for the daily bike rental counts. \nWe pass this to the hist function along with a sequence of bin edges [-0.5, 0.5, ..., 6.5] so that each bar is nicely centered on the tick mark. We also plot this count on a log scale (set log=True) because we expect quite a large difference between bike rental counts in good weather as compared to bad. Without this, it is very difficult to see any variation for rentals during bad weather.",
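The weights= mechanism can be seen on a tiny hypothetical dataset (day-of-week codes and rental counts invented for illustration):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt

# Hypothetical mini-dataset: day-of-week codes (0 = first day) and rental counts
days = np.array([0, 0, 1, 2, 2, 2])
rentals = np.array([100, 50, 200, 10, 20, 30])

fig, ax = plt.subplots()
counts, edges, _ = ax.hist(days, weights=rentals,
                           bins=np.linspace(-0.5, 6.5, 8))
# Each bar now shows the total rentals for that day,
# not the number of times the day appears in the data.
```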
"fig2 = plt.figure()\n\nax2 = fig2.add_subplot(111)\nax2.set_title(\"Distribution of daily bike rental counts by weather conditions\", y=1.05)\nax2.set_xlabel(\"Day\")\nax2.set_ylabel(\"Bike rental counts\")\nax2.hist( , # day_data\n label= , # weathers\n weights= , # weights_data\n log= , # We want the y-axis to be on a log scale\n bins=np.linspace(-0.5,6.5,8), \n histtype=\"bar\", # this is the default setting. \n ec=\"k\", lw=1.0, \n )\nax2.legend(loc=\"lower right\", title=\"Weather\\nrating\", fontsize=\"x-small\");",
"As expected, bike rentals are low in bad weather. However, notice the variation within a week for the bad weather category. It is quite clear that people do not go biking on a bad-weather Sunday since they have a choice not to go out! \nLet's rerun the cell above by replacing the ax2.hist call with the following code snippet. This helps us create a stacked bar chart. \nax2.set_ylim(*(0,5e5))\nax2.hist(day_data,\n         label=weathers,\n         weights=weights_data,\n         bins=np.linspace(-0.5, 6.5, 8),\n         ec=\"k\", lw=1.,\n         histtype=\"barstacked\", \n         rwidth=0.8\n        )\n\nA note on this new code: histtype=\"barstacked\" stacks the data on top of each other. The new parameter rwidth= sets the ratio between the bar width and the bin width, which is the way you put spaces between bars. We disable the log scale so that the natural totals are more clearly seen. Notice the extreme difference between rentals in different weather conditions. \nThere are hardly any differences between work days and weekends, although we can detect a slight increase through the week. People do love the outdoors! \n3.4 Summary of plotting a histogram\nTo summarize the teaching points above: \n\nPass either a single array of data to make into a histogram or a list of arrays if you want multiple data on one chart. You do not need to summarize the data. hist will do it for you. \nVisual properties can be customized with keyword arguments like color, lw, ec, alpha, etc.\nYou can control whether to normalize the plot to have unit area by setting normed. \nChoose an appropriate chart by setting histtype= to either bar or barstacked. \nPass frequency counts to the weights= parameter if you have a summarized bin count already. \n\n4. hist2d and the hexbin plot\nWe use hist2d to visualize the joint distribution of bivariate data. When we expect correlation between two variables, a two-dimensional histogram helps us reveal the structure of that relationship and avoids overplotting.\n4.1 Plotting and customizing a 2D histogram\nLet's return to the PRSA data set. Recall that a scatterplot suffers from overplotting. In order to circumvent this and still get useful insight into the data, we use hist2d.",
"PRSA = PRSA.dropna() #drop missing data\nfig3, ax3 = plt.subplots(figsize=(8,6))\n\nax3.grid(b=\"off\")\nax3.set_facecolor(\"white\")\nax3.set_title(\"PM2.5 readings distribution by temperature\")\nax3.set_ylabel(\"$\\mu g$/$m^3$\")\nax3.set_xlabel(\"Daily temperature (C$^\\circ$)\")\nax3.hist2d(PRSA.TEMP, PRSA[\"pm2.5\"],\n bins=55,\n cmin=5,\n range=[[-15,40],[0,600]],\n cmap=\"Blues\");",
"This chart is more informative than a simple scatterplot. For one, we now know that there are two modes in the distribution of temperature and pollutant. Furthermore, there is more variation in pollution levels in the colder seasons as compared to warmer days. \nLet's try to understand how this plot was created. \nfig3, ax3 = plt.subplots(figsize=(8,6))\n\nWe initialize a figure object of width=8 units and height=6 units and an axis object to contain our histogram. \nax3.grid(b=\"off\")\nax3.set_facecolor(\"white\")\nax3.set_title(\"PM2.5 readings distribution by temperature\")\nax3.set_ylabel(\"$\\mu g$/$m^3$\")\nax3.set_xlabel(\"Daily temperature (C$^\\circ$)\")\n\nThese lines set the axis grid to invisible and the background color to white. The rest are plotting information: plot titles and the units used on each axis. \nax3.hist2d(PRSA.TEMP, PRSA[\"pm2.5\"],\n           bins=55,\n           cmin=5,\n           range=[[-15,40],[0,600]],\n           cmap=\"Blues\");\n\nFinally, this is the command to plot the histogram. Since this is a 2d histogram, we pass two arrays of data, first for the x-axis and then for the y-axis. \n\nThe bins= parameter is set to 55. That means there are 55 bins in each dimension. Again, too high a number leads to overfitting and creates a very noisy chart. So choose a suitable number so as not to lose too much information. \ncmin= controls which frequency counts are displayed. If a particular frequency count is below cmin, it is not plotted. \nThe range parameter sets the expected range of the data in each dimension. Data points which are outside the range are considered outliers and not tallied. \nFinally cmap controls the color scheme used to indicate frequency counts. There are many color schemes to choose from and all can be seen here. \"Blues\" is a type of color scheme known as a sequential color scheme. Use this for measures like frequency counts where the contrast between min and max is important.\n4.2 Annotating plots with colorbar\nThis plot is still not perfect. For example, it would be nice to have a way to tell the frequency counts at each color level. To do this, we add a color bar to the plot. \nModify ax3.hist2d(PRSA.TEMP, ... to the following:\nimg = ax3.hist2d(PRSA.TEMP, ...\n\nThis saves the 2d histogram image (along with other supplementary information) in the variable img. Let's see what img contains.",
"img",
"If you observe, img is a tuple of length 4. The last entry of the tuple is the image data. Next, after the last line of the ax3.hist2d command, add in the function\ncbar = plt.colorbar(img[3], ax=ax3)\n\nThis means that we are now going to plot a color bar in ax3 (that's what ax=ax3 means) using the image data from our 2d histogram (which is why we must pass img[3] as an argument to plt.colorbar). We save the created colorbar instance in a variable named cbar so that we may further customize it. \nTo add a title to the colorbar, add in the following line: \ncbar.set_label(\"Frequency\")\n\nThis is what you should see if everything is done correctly.\n4.3 hexbin plots\n2D histograms create a square grid and visualize frequency counts using colors. Instead of a square grid, we may also use a hexagonal grid. As hexagons have more sides, this smooths out the resulting image. Let's see this effect with the same PRSA data set as in the hist2d plot.",
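The colorbar recipe described above can be sketched end-to-end on synthetic data (random stand-ins for the PRSA temperature and PM2.5 columns):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe for scripts
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
temp = rng.normal(15, 10, 2000)   # synthetic stand-in for PRSA.TEMP
pm25 = rng.normal(100, 40, 2000)  # synthetic stand-in for PRSA["pm2.5"]

fig, ax = plt.subplots()
img = ax.hist2d(temp, pm25, bins=25, cmap="Blues")
# img is a 4-tuple: (counts, xedges, yedges, QuadMesh); the QuadMesh in
# img[3] carries the color mapping, so that is what plt.colorbar needs.
cbar = plt.colorbar(img[3], ax=ax)
cbar.set_label("Frequency")
```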
"fig4, ax4 = plt.subplots(figsize=(9,6))\n\nax4.grid(b=\"off\")\nax4.set_facecolor(\"white\")\nax4.set_title(\"PM2.5 distribution by temperature\")\nax4.set_ylabel(\"$\\mu$g/$m^3$\")\nax4.set_xlabel(\"Temperature (C$^\\circ$)\")\nimg = ax4.hexbin(PRSA.TEMP, PRSA[\"pm2.5\"],\n gridsize=55,\n mincnt=5,\n # bins=\"log\",\n cmap=\"Blues\",\n )\ncbar = plt.colorbar(img, ax=ax4)\n#cbar.set_label(\"$\\log($Frequency)\")\ncbar.set_label(\"Frequency\")",
"However, hexbin plots differ from the square 2d histograms in more ways than the type of tiling used. We may use hexbin plots to investigate how a dependent variable depends on 2 independent variables. Just as we passed bin frequencies to the weights parameter in hist, we pass the third dependent variable to the C parameter in hexbin. That means we can visualize a two-dimensional surface embedded in a 3D space as an altitude map. \n4.3.1 The hexbin C parameter\nLet's investigate how PM2.5 pollutants vary with temperature and atmospheric pressure in the PRSA dataset. To do that type in (or copy paste) the following code\nfig5, ax5 = plt.subplots(figsize=(8,6))\nax5.grid(b=\"off\")\nax5.set_facecolor(\"white\")\nax5.set_title(\"PM2.5 pollutants as a function of temperature\\nand atmospheric pressure\")\nax5.set_xlabel(\"Temperature (C$^\\circ$)\")\nax5.set_ylabel(\"Pressure (hPa)\")\nimg = ax5.hexbin(PRSA.TEMP, PRSA.PRES, C=PRSA[\"pm2.5\"],\n                 gridsize=(30, 20),\n                 cmap=\"rainbow\")\ncbar = plt.colorbar(img, ax=ax5)\ncbar.set_label(\"$\\mu$g/m$^3$\")",
"# Paste or type in your code here\n\n\n\n\n\n\n\n\n\n\n\n\n",
"The color for each hexagon is determined by the mean of the PM2.5 readings corresponding to the pressure and temperature readings contained in each hexagon. \n4.3.2 Changing the aggregation function for each hexbin\nHowever, the aggregation function on each hexagon can be changed by specifying another function to the reduce_C_function argument. Let's change this by passing the following argument to hexbin: \nreduce_C_function=np.median\n\nExercise: Change the label of the color bar to indicate that we are taking the median PM2.5 readings in each hexagon.",
"fig5, ax5 = plt.subplots(figsize=(8,6))\nax5.grid(b=\"off\")\nax5.set_facecolor(\"white\")\nax5.set_title(\"PM2.5 pollutants as a function of temperature\\nand atmospheric pressure\")\nax5.set_xlabel(\"Temperature (C$^\\circ$)\")\nax5.set_ylabel(\"Pressure (hPa)\")\nimg = ax5.hexbin(PRSA.TEMP, PRSA.PRES, C=PRSA[\"pm2.5\"],\n gridsize=(30,20),\n ,\n cmap=\"rainbow\",\n)\ncbar = plt.colorbar(img, ax=ax5)\n# Write your answer below \n\n",
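For reference, a sketch of median aggregation on invented data (random temperature, pressure, and PM2.5 values, not the PRSA dataset):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
temp = rng.uniform(-15, 40, 500)    # invented temperature readings
pres = rng.uniform(990, 1040, 500)  # invented pressure readings
pm25 = rng.exponential(100, 500)    # invented PM2.5 readings

fig, ax = plt.subplots()
img = ax.hexbin(temp, pres, C=pm25,
                gridsize=(10, 8),
                reduce_C_function=np.median,  # aggregate each hexagon by the median
                cmap="rainbow")
cbar = plt.colorbar(img, ax=ax)
cbar.set_label(r"Median PM2.5 ($\mu$g/m$^3$)")
```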
"4.3.3 Changing the bins parameter\nBesides controlling the number of hexagons, we can also bin the hexagons so that the hexagons within the same bin have the same color. This helps us further smooth out the plot and avoid overfitting.",
"fig5, ax5 = plt.subplots(figsize=(8,6))\nax5.grid(b=\"off\")\nax5.set_facecolor(\"white\")\nax5.set_title(\"PM2.5 pollutants as a function of temperature\\nand atmospheric pressure\", y=1.05)\nax5.set_xlabel(\"Temperature (C$^\\circ$)\")\nax5.set_ylabel(\"Pressure (hPa)\")\nimg = ax5.hexbin(PRSA.TEMP, PRSA.PRES, C=PRSA[\"pm2.5\"],\n gridsize=(30,20),\n bins=\"log\",\n cmap=\"rainbow\"\n ) \ncbar = plt.colorbar(img, ax=ax5)\ncbar.set_label(\"$\\log_{10}(P)$\\n$P \\;\\mu m$/$m^3$\")",
"By passing \"log\" to the bins parameter, we normalize the color scale so that a color corresponds to $\\log(x+1)$ where $x$ is the aggregated C value of each hexagon. As you can see above, this may help bring out fine details between luminosity levels. Note that the data has not been normalized, only the color correspondence. \nbins=\"log\"\n\ncbar.set_label(\"$\\log_{10}(P)$\\n$P \\;\\mu m$/$m^3$\")\n\nWe could also pass a single int, $n$, to the parameter so that hexbin creates $n$ such color levels. \nimport matplotlib as mpl\nd_rainbow = mpl.cm.get_cmap(\"rainbow\", 5)\nticks=[0.4, 1.2, 2.0, 2.8, 3.6]\n...\n\nbins=5\ncmap=d_rainbow\n...\ncbar = plt.colorbar(img, ax=ax5, ticks=ticks)\ncbar.set_label(\"Pollutant level\")\ncbar.set_ticklabels([\"%d\" % (x) for x in range(1,6)])\n\nBy passing a list of ints, we can set the binning boundaries explicitly. Note that we are setting the lower bound of these bins. \nimport matplotlib as mpl\nd_rainbow = mpl.cm.get_cmap(\"rainbow\", 4)\n\n...\n\nbins=[0, 50, 100, 200],\ncmap=d_rainbow\n\n...\n\ncbar = plt.colorbar(img, ax=ax5, extend=\"max\", ticks=[1, 1.75, 2.5, 3.25])\ncbar.set_label(\"$\\mu$m/$m^3$\")\ncbar.set_ticklabels([\"$\\geq 0$\", \"$\\geq 50$\", \"$\\geq 100$\", \"$\\geq 200$\"])\n\nExercise: The orientation of the colorbar can be changed using the orientation parameter. There are two settings: \"vertical\" and \"horizontal\". Change the orientation of the color bar so that it is parallel to the $x$-axis. \n*Extra: The resulting figure may be out of shape because matplotlib steals some space from the parent axis to place the color bar. Reshape the figure using the parameter figsize=(..., ...) so that the graph is more presentable.*"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ChadFulton/statsmodels
|
examples/notebooks/statespace_sarimax_stata.ipynb
|
bsd-3-clause
|
[
"SARIMAX: Introduction\nThis notebook replicates examples from the Stata ARIMA time series estimation and postestimation documentation.\nFirst, we replicate the four estimation examples from http://www.stata.com/manuals13/tsarima.pdf:\n\nARIMA(1,1,1) model on the U.S. Wholesale Price Index (WPI) dataset.\nVariation of example 1 which adds an MA(4) term to the ARIMA(1,1,1) specification to allow for an additive seasonal effect.\nARIMA(2,1,0) x (1,1,0,12) model of monthly airline data. This example allows a multiplicative seasonal effect.\nARMA(1,1) model with exogenous regressors; describes consumption as an autoregressive process in which the money supply is also assumed to be an explanatory variable.\n\nSecond, we demonstrate postestimation capabilities to replicate http://www.stata.com/manuals13/tsarimapostestimation.pdf. The model from example 4 is used to demonstrate:\n\nOne-step-ahead in-sample prediction\nn-step-ahead out-of-sample forecasting\nn-step-ahead in-sample dynamic prediction",
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nfrom scipy.stats import norm\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\nfrom datetime import datetime\nimport requests\nfrom io import BytesIO",
"ARIMA Example 1: Arima\nAs can be seen in the graphs from Example 2, the Wholesale price index (WPI) is growing over time (i.e. is not stationary). Therefore an ARMA model is not a good specification. In this first example, we consider a model where the original time series is assumed to be integrated of order 1, so that the difference is assumed to be stationary, and fit a model with one autoregressive lag and one moving average lag, as well as an intercept term.\nThe postulated data process is then:\n$$\n\\Delta y_t = c + \\phi_1 \\Delta y_{t-1} + \\theta_1 \\epsilon_{t-1} + \\epsilon_{t}\n$$\nwhere $c$ is the intercept of the ARMA model, $\\Delta$ is the first-difference operator, and we assume $\\epsilon_{t} \\sim N(0, \\sigma^2)$. This can be rewritten to emphasize lag polynomials as (this will be useful in example 2, below):\n$$\n(1 - \\phi_1 L ) \\Delta y_t = c + (1 + \\theta_1 L) \\epsilon_{t}\n$$\nwhere $L$ is the lag operator.\nNotice that one difference between the Stata output and the output below is that Stata estimates the following model:\n$$\n(\\Delta y_t - \\beta_0) = \\phi_1 ( \\Delta y_{t-1} - \\beta_0) + \\theta_1 \\epsilon_{t-1} + \\epsilon_{t}\n$$\nwhere $\\beta_0$ is the mean of the process $y_t$. This model is equivalent to the one estimated in the Statsmodels SARIMAX class, but the interpretation is different. To see the equivalence, note that:\n$$\n(\\Delta y_t - \\beta_0) = \\phi_1 ( \\Delta y_{t-1} - \\beta_0) + \\theta_1 \\epsilon_{t-1} + \\epsilon_{t} \\\n\\Delta y_t = (1 - \\phi_1) \\beta_0 + \\phi_1 \\Delta y_{t-1} + \\theta_1 \\epsilon_{t-1} + \\epsilon_{t}\n$$\nso that $c = (1 - \\phi_1) \\beta_0$.",
"# Dataset\nwpi1 = requests.get('https://www.stata-press.com/data/r12/wpi1.dta').content\ndata = pd.read_stata(BytesIO(wpi1))\ndata.index = data.t\n\n# Fit the model\nmod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,1))\nres = mod.fit(disp=False)\nprint(res.summary())",
"Thus the maximum likelihood estimates imply that for the process above, we have:\n$$\n\\Delta y_t = 0.1050 + 0.8740 \\Delta y_{t-1} - 0.4206 \\epsilon_{t-1} + \\epsilon_{t}\n$$\nwhere $\\epsilon_{t} \\sim N(0, 0.5226)$. Finally, recall that $c = (1 - \\phi_1) \\beta_0$, and here $c = 0.1050$ and $\\phi_1 = 0.8740$. To compare with the output from Stata, we could calculate the mean:\n$$\\beta_0 = \\frac{c}{1 - \\phi_1} = \\frac{0.1050}{1 - 0.8740} = 0.83$$\nNote: these values are slightly different from the values in the Stata documentation because the optimizer in Statsmodels has found parameters here that yield a higher likelihood. Nonetheless, they are very close.\nARIMA Example 2: Arima with additive seasonal effects\nThis model is an extension of that from example 1. Here the data is assumed to follow the process:\n$$\n\\Delta y_t = c + \\phi_1 \\Delta y_{t-1} + \\theta_1 \\epsilon_{t-1} + \\theta_4 \\epsilon_{t-4} + \\epsilon_{t}\n$$\nThe new part of this model is that there is allowed to be an annual seasonal effect (it is annual even though the periodicity is 4 because the dataset is quarterly). The other difference is that this model uses the log of the data rather than the level.\nBefore estimating the model, we produce graphs showing:\n\nThe time series (in logs)\nThe first difference of the time series (in logs)\nThe autocorrelation function\nThe partial autocorrelation function.\n\nFrom the first two graphs, we note that the original time series does not appear to be stationary, whereas the first-difference does. This supports either estimating an ARMA model on the first-difference of the data, or estimating an ARIMA model with 1 order of integration (recall that we are taking the latter approach). The last two graphs support the use of an ARIMA(1,1,1) model.",
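The arithmetic in the mean calculation above can be verified in a couple of lines, plugging in the coefficient values quoted from the estimation summary:

```python
# c and phi1 as reported in the estimation summary above
c, phi1 = 0.1050, 0.8740

# Implied process mean: beta_0 = c / (1 - phi_1)
beta0 = c / (1 - phi1)
print(round(beta0, 2))  # 0.83
```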
"# Dataset\ndata = pd.read_stata(BytesIO(wpi1))\ndata.index = data.t\ndata['ln_wpi'] = np.log(data['wpi'])\ndata['D.ln_wpi'] = data['ln_wpi'].diff()\n\n# Graph data\nfig, axes = plt.subplots(1, 2, figsize=(15,4))\n\n# Levels\naxes[0].plot(data.index._mpl_repr(), data['wpi'], '-')\naxes[0].set(title='US Wholesale Price Index')\n\n# Log difference\naxes[1].plot(data.index._mpl_repr(), data['D.ln_wpi'], '-')\naxes[1].hlines(0, data.index[0], data.index[-1], 'r')\naxes[1].set(title='US Wholesale Price Index - difference of logs');\n\n# Graph data\nfig, axes = plt.subplots(1, 2, figsize=(15,4))\n\nfig = sm.graphics.tsa.plot_acf(data.iloc[1:]['D.ln_wpi'], lags=40, ax=axes[0])\nfig = sm.graphics.tsa.plot_pacf(data.iloc[1:]['D.ln_wpi'], lags=40, ax=axes[1])",
"To understand how to specify this model in Statsmodels, first recall that from example 1 we used the following code to specify the ARIMA(1,1,1) model:\npython\nmod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,1))\nThe order argument is a tuple of the form (AR specification, Integration order, MA specification). The integration order must be an integer (for example, here we assumed one order of integration, so it was specified as 1. In a pure ARMA model where the underlying data is already stationary, it would be 0).\nFor the AR specification and MA specification components, there are two possibilities. The first is to specify the maximum degree of the corresponding lag polynomial, in which case the component is an integer. For example, if we wanted to specify an ARIMA(1,1,4) process, we would use:\npython\nmod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,4))\nand the corresponding data process would be:\n$$\n\\Delta y_t = c + \\phi_1 \\Delta y_{t-1} + \\theta_1 \\epsilon_{t-1} + \\theta_2 \\epsilon_{t-2} + \\theta_3 \\epsilon_{t-3} + \\theta_4 \\epsilon_{t-4} + \\epsilon_{t}\n$$\nor\n$$\n(1 - \\phi_1 L)\\Delta y_t = c + (1 + \\theta_1 L + \\theta_2 L^2 + \\theta_3 L^3 + \\theta_4 L^4) \\epsilon_{t}\n$$\nWhen the specification parameter is given as a maximum degree of the lag polynomial, it implies that all polynomial terms up to that degree are included. Notice that this is not the model we want to use, because it would include terms for $\\epsilon_{t-2}$ and $\\epsilon_{t-3}$, which we don't want here.\nWhat we want is a polynomial that has terms for the 1st and 4th degrees, but leaves out the 2nd and 3rd terms. To do that, we need to provide a tuple for the specification parameter, where the tuple describes the lag polynomial itself. 
In particular, here we would want to use:\npython\nar = 1 # this is the maximum degree specification\nma = (1,0,0,1) # this is the lag polynomial specification\nmod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(ar,1,ma))\nThis gives the following form for the process of the data:\n$$\n\\Delta y_t = c + \\phi_1 \\Delta y_{t-1} + \\theta_1 \\epsilon_{t-1} + \\theta_4 \\epsilon_{t-4} + \\epsilon_{t} \\\n(1 - \\phi_1 L)\\Delta y_t = c + (1 + \\theta_1 L + \\theta_4 L^4) \\epsilon_{t}\n$$\nwhich is what we want.",
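The way such a tuple maps onto included lag terms can be illustrated with a few lines of plain Python (this just restates the convention; it is not SARIMAX internals):

```python
# A 1 at position k of the MA specification tuple means the term
# theta_{k+1} L^{k+1} is included in the lag polynomial.
ma_spec = (1, 0, 0, 1)
included_lags = [k + 1 for k, flag in enumerate(ma_spec) if flag]
print(included_lags)  # [1, 4] -> (1 + theta_1 L + theta_4 L^4)
```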
"# Fit the model\nmod = sm.tsa.statespace.SARIMAX(data['ln_wpi'], trend='c', order=(1,1,(1,0,0,1)))\nres = mod.fit(disp=False)\nprint(res.summary())",
"ARIMA Example 3: Airline Model\nIn the previous example, we included a seasonal effect in an additive way, meaning that we added a term allowing the process to depend on the 4th MA lag. It may instead be that we want to model a seasonal effect in a multiplicative way. We often write the model then as an ARIMA $(p,d,q) \\times (P,D,Q)_s$, where the lowercase letters indicate the specification for the non-seasonal component, and the uppercase letters indicate the specification for the seasonal component; $s$ is the periodicity of the seasons (e.g. it is often 4 for quarterly data or 12 for monthly data). The data process can be written generically as:\n$$\n\\phi_p (L) \\tilde \\phi_P (L^s) \\Delta^d \\Delta_s^D y_t = A(t) + \\theta_q (L) \\tilde \\theta_Q (L^s) \\epsilon_t\n$$\nwhere:\n\n$\\phi_p (L)$ is the non-seasonal autoregressive lag polynomial\n$\\tilde \\phi_P (L^s)$ is the seasonal autoregressive lag polynomial\n$\\Delta^d \\Delta_s^D y_t$ is the time series, differenced $d$ times, and seasonally differenced $D$ times.\n$A(t)$ is the trend polynomial (including the intercept)\n$\\theta_q (L)$ is the non-seasonal moving average lag polynomial\n$\\tilde \\theta_Q (L^s)$ is the seasonal moving average lag polynomial\n\nSometimes we rewrite this as:\n$$\n\\phi_p (L) \\tilde \\phi_P (L^s) y_t^* = A(t) + \\theta_q (L) \\tilde \\theta_Q (L^s) \\epsilon_t\n$$\nwhere $y_t^* = \\Delta^d \\Delta_s^D y_t$. This emphasizes that just as in the simple case, after we take differences (here both non-seasonal and seasonal) to make the data stationary, the resulting model is just an ARMA model.\nAs an example, consider the airline model ARIMA $(2,1,0) \\times (1,1,0)_{12}$, with an intercept. 
The data process can be written in the form above as:\n$$\n(1 - \\phi_1 L - \\phi_2 L^2) (1 - \\tilde \\phi_1 L^{12}) \\Delta \\Delta_{12} y_t = c + \\epsilon_t\n$$\nHere, we have:\n\n$\\phi_p (L) = (1 - \\phi_1 L - \\phi_2 L^2)$\n$\\tilde \\phi_P (L^s) = (1 - \\tilde \\phi_1 L^{12})$\n$d = 1, D = 1, s=12$ indicating that $y_t^*$ is derived from $y_t$ by taking first-differences and then taking 12th differences.\n$A(t) = c$ is the constant trend polynomial (i.e. just an intercept)\n$\\theta_q (L) = \\tilde \\theta_Q (L^s) = 1$ (i.e. there is no moving average effect)\n\nIt may still be confusing to see the two lag polynomials in front of the time-series variable, but notice that we can multiply the lag polynomials together to get the following model:\n$$\n(1 - \\phi_1 L - \\phi_2 L^2 - \\tilde \\phi_1 L^{12} + \\phi_1 \\tilde \\phi_1 L^{13} + \\phi_2 \\tilde \\phi_1 L^{14} ) y_t^* = c + \\epsilon_t\n$$\nwhich can be rewritten as:\n$$\ny_t^* = c + \\phi_1 y_{t-1}^* + \\phi_2 y_{t-2}^* + \\tilde \\phi_1 y_{t-12}^* - \\phi_1 \\tilde \\phi_1 y_{t-13}^* - \\phi_2 \\tilde \\phi_1 y_{t-14}^* + \\epsilon_t\n$$\nThis is similar to the additively seasonal model from example 2, but the coefficients in front of the autoregressive lags are actually combinations of the underlying seasonal and non-seasonal parameters.\nSpecifying the model in Statsmodels is done simply by adding the seasonal_order argument, which accepts a tuple of the form (Seasonal AR specification, Seasonal Integration order, Seasonal MA, Seasonal periodicity). The seasonal AR and MA specifications, as before, can be expressed as a maximum polynomial degree or as the lag polynomial itself. Seasonal periodicity is an integer.\nFor the airline model ARIMA $(2,1,0) \\times (1,1,0)_{12}$ with an intercept, the command is:\npython\nmod = sm.tsa.statespace.SARIMAX(data['lnair'], order=(2,1,0), seasonal_order=(1,1,0,12))",
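The lag-polynomial multiplication above can be verified numerically: multiplying two lag polynomials is just a convolution of their coefficient sequences. A sketch with hypothetical parameter values (chosen purely for illustration):

```python
import numpy as np

# Hypothetical parameter values, purely for illustration
phi1, phi2, sphi1 = 0.4, 0.2, 0.3

# Coefficients in ascending powers of L: (1 - phi1 L - phi2 L^2)
nonseasonal = np.array([1.0, -phi1, -phi2])

# Seasonal polynomial (1 - sphi1 L^12): coefficients at L^0 and L^12
seasonal = np.zeros(13)
seasonal[0], seasonal[12] = 1.0, -sphi1

# Product polynomial: nonzero terms appear only at lags 0, 1, 2, 12, 13, 14,
# with positive cross terms phi1*sphi1 and phi2*sphi1 at lags 13 and 14
combined = np.convolve(nonseasonal, seasonal)
print({lag: round(c, 3) for lag, c in enumerate(combined) if c != 0.0})
```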
"# Dataset\nair2 = requests.get('https://www.stata-press.com/data/r12/air2.dta').content\ndata = pd.read_stata(BytesIO(air2))\ndata.index = pd.date_range(start=datetime(data.time[0], 1, 1), periods=len(data), freq='MS')\ndata['lnair'] = np.log(data['air'])\n\n# Fit the model\nmod = sm.tsa.statespace.SARIMAX(data['lnair'], order=(2,1,0), seasonal_order=(1,1,0,12), simple_differencing=True)\nres = mod.fit(disp=False)\nprint(res.summary())",
"Notice that here we used an additional argument simple_differencing=True. This controls how the order of integration is handled in ARIMA models. If simple_differencing=True, then the time series provided as endog is literally differenced and an ARMA model is fit to the resulting new time series. This implies that a number of initial periods are lost to the differencing process; however, it may be necessary either to make results comparable to other packages (e.g. Stata's arima always uses simple differencing) or when the seasonal periodicity is large.\nThe default is simple_differencing=False, in which case the integration component is implemented as part of the state space formulation, and all of the original data can be used in estimation.\nARIMA Example 4: ARMAX (Friedman)\nThis model demonstrates the use of explanatory variables (the X part of ARMAX). When exogenous regressors are included, the SARIMAX module uses the concept of \"regression with SARIMA errors\" (see http://robjhyndman.com/hyndsight/arimax/ for details of regression with ARIMA errors versus alternative specifications), so that the model is specified as:\n$$\ny_t = \\beta_t x_t + u_t \\\n \\phi_p (L) \\tilde \\phi_P (L^s) \\Delta^d \\Delta_s^D u_t = A(t) +\n \\theta_q (L) \\tilde \\theta_Q (L^s) \\epsilon_t\n$$\nNotice that the first equation is just a linear regression, and the second equation just describes the process followed by the error component as SARIMA (as was described in example 3). One reason for this specification is that the estimated parameters have their natural interpretations.\nThis specification nests many simpler specifications. For example, regression with AR(2) errors is:\n$$\ny_t = \\beta_t x_t + u_t \\\n(1 - \\phi_1 L - \\phi_2 L^2) u_t = A(t) + \\epsilon_t\n$$\nThe model considered in this example is regression with ARMA(1,1) errors. 
The process is then written:\n$$\n\\text{consump}_t = \\beta_0 + \\beta_1 \\text{m2}_t + u_t \\\n(1 - \\phi_1 L) u_t = (1 - \\theta_1 L) \\epsilon_t\n$$\nNotice that $\\beta_0$ is, as described in example 1 above, not the same thing as an intercept specified by trend='c'. Whereas in the examples above we estimated the intercept of the model via the trend polynomial, here, we demonstrate how to estimate $\\beta_0$ itself by adding a constant to the exogenous dataset. In the output, $\\beta_0$ is called const, whereas above the intercept $c$ was called intercept in the output.",
"# Dataset\nfriedman2 = requests.get('https://www.stata-press.com/data/r12/friedman2.dta').content\ndata = pd.read_stata(BytesIO(friedman2))\ndata.index = data.time\n\n# Variables\nendog = data.loc['1959':'1981', 'consump']\nexog = sm.add_constant(data.loc['1959':'1981', 'm2'])\n\n# Fit the model\nmod = sm.tsa.statespace.SARIMAX(endog, exog, order=(1,0,1))\nres = mod.fit(disp=False)\nprint(res.summary())",
"ARIMA Postestimation: Example 1 - Dynamic Forecasting\nHere we describe some of the post-estimation capabilities of Statsmodels' SARIMAX.\nFirst, using the model from example 4, we estimate the parameters using data that excludes the last few observations (this is a little artificial as an example, but it allows considering performance of out-of-sample forecasting and facilitates comparison to Stata's documentation).",
"# Dataset\nraw = pd.read_stata(BytesIO(friedman2))\nraw.index = raw.time\ndata = raw.loc[:'1981']\n\n# Variables\nendog = data.loc['1959':, 'consump']\nexog = sm.add_constant(data.loc['1959':, 'm2'])\nnobs = endog.shape[0]\n\n# Fit the model\nmod = sm.tsa.statespace.SARIMAX(endog.loc[:'1978-01-01'], exog=exog.loc[:'1978-01-01'], order=(1,0,1))\nfit_res = mod.fit(disp=False)\nprint(fit_res.summary())",
"Next, we want to get results for the full dataset but using the estimated parameters (on a subset of the data).",
"mod = sm.tsa.statespace.SARIMAX(endog, exog=exog, order=(1,0,1))\nres = mod.filter(fit_res.params)",
"The predict command is first applied here to get in-sample predictions. We use the full_results=True argument to allow us to calculate confidence intervals (the default output of predict is just the predicted values).\nWith no other arguments, predict returns the one-step-ahead in-sample predictions for the entire sample.",
"# In-sample one-step-ahead predictions\npredict = res.get_prediction()\npredict_ci = predict.conf_int()",
"We can also get dynamic predictions. One-step-ahead prediction uses the true values of the endogenous values at each step to predict the next in-sample value. Dynamic predictions use one-step-ahead prediction up to some point in the dataset (specified by the dynamic argument); after that, the previously predicted endogenous values are used in place of the true endogenous values for each new predicted element.\nThe dynamic argument is specified as an offset relative to the start argument. If start is not specified, it is assumed to be 0.\nHere we perform dynamic prediction starting in the first quarter of 1978.",
"# Dynamic predictions\npredict_dy = res.get_prediction(dynamic='1978-01-01')\npredict_dy_ci = predict_dy.conf_int()",
"We can graph the one-step-ahead and dynamic predictions (and the corresponding confidence intervals) to see their relative performance. Notice that up to the point where dynamic prediction begins (1978:Q1), the two are the same.",
"# Graph\nfig, ax = plt.subplots(figsize=(9,4))\nnpre = 4\nax.set(title='Personal consumption', xlabel='Date', ylabel='Billions of dollars')\n\n# Plot data points\ndata.loc['1977-07-01':, 'consump'].plot(ax=ax, style='o', label='Observed')\n\n# Plot predictions\npredict.predicted_mean.loc['1977-07-01':].plot(ax=ax, style='r--', label='One-step-ahead forecast')\nci = predict_ci.loc['1977-07-01':]\nax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color='r', alpha=0.1)\npredict_dy.predicted_mean.loc['1977-07-01':].plot(ax=ax, style='g', label='Dynamic forecast (1978)')\nci = predict_dy_ci.loc['1977-07-01':]\nax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color='g', alpha=0.1)\n\nlegend = ax.legend(loc='lower right')",
"Finally, graph the prediction error. It is obvious that, as one would suspect, one-step-ahead prediction is considerably better.",
"# Prediction error\n\n# Graph\nfig, ax = plt.subplots(figsize=(9,4))\nnpre = 4\nax.set(title='Forecast error', xlabel='Date', ylabel='Forecast - Actual')\n\n# In-sample one-step-ahead predictions and 95% confidence intervals\npredict_error = predict.predicted_mean - endog\npredict_error.loc['1977-10-01':].plot(ax=ax, label='One-step-ahead forecast')\nci = predict_ci.loc['1977-10-01':].copy()\nci.iloc[:,0] -= endog.loc['1977-10-01':]\nci.iloc[:,1] -= endog.loc['1977-10-01':]\nax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], alpha=0.1)\n\n# Dynamic predictions and 95% confidence intervals\npredict_dy_error = predict_dy.predicted_mean - endog\npredict_dy_error.loc['1977-10-01':].plot(ax=ax, style='r', label='Dynamic forecast (1978)')\nci = predict_dy_ci.loc['1977-10-01':].copy()\nci.iloc[:,0] -= endog.loc['1977-10-01':]\nci.iloc[:,1] -= endog.loc['1977-10-01':]\nax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color='r', alpha=0.1)\n\nlegend = ax.legend(loc='lower left');\nlegend.get_frame().set_facecolor('w')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
amueller/scipy-2017-sklearn
|
notebooks/23.Out-of-core_Learning_Large_Scale_Text_Classification.ipynb
|
cc0-1.0
|
[
"Out-of-core Learning - Large Scale Text Classification for Sentiment Analysis\nScalability Issues\nThe sklearn.feature_extraction.text.CountVectorizer and sklearn.feature_extraction.text.TfidfVectorizer classes suffer from a number of scalability issues that all stem from the internal usage of the vocabulary_ attribute (a Python dictionary) used to map the unicode string feature names to the integer feature indices.\nThe main scalability issues are:\n\nMemory usage of the text vectorizer: all the string representations of the features are loaded in memory\nParallelization problems for text feature extraction: the vocabulary_ would be a shared state: complex synchronization and overhead\nImpossibility to do online or out-of-core / streaming learning: the vocabulary_ needs to be learned from the data: its size cannot be known before making one pass over the full dataset\n\nTo better understand the issue let's have a look at how the vocabulary_ attribute works. At fit time the tokens of the corpus are uniquely identified by an integer index, and this mapping is stored in the vocabulary:",
"from sklearn.feature_extraction.text import CountVectorizer\n\nvectorizer = CountVectorizer(min_df=1)\n\nvectorizer.fit([\n \"The cat sat on the mat.\",\n])\nvectorizer.vocabulary_",
"The vocabulary is used at transform time to build the occurrence matrix:",
"X = vectorizer.transform([\n \"The cat sat on the mat.\",\n \"This cat is a nice cat.\",\n]).toarray()\n\nprint(len(vectorizer.vocabulary_))\nprint(vectorizer.get_feature_names())\nprint(X)",
"Let's refit with a slightly larger corpus:",
"vectorizer = CountVectorizer(min_df=1)\n\nvectorizer.fit([\n \"The cat sat on the mat.\",\n \"The quick brown fox jumps over the lazy dog.\",\n])\nvectorizer.vocabulary_",
"The vocabulary_ grows (roughly logarithmically) with the size of the training corpus. Note that we could not have built the vocabularies in parallel on the 2 text documents as they share some words and would hence require some kind of shared data structure or synchronization barrier, which is complicated to set up, especially if we want to distribute the processing on a cluster.\nWith this new vocabulary, the dimensionality of the output space is now larger:",
"X = vectorizer.transform([\n \"The cat sat on the mat.\",\n \"This cat is a nice cat.\",\n]).toarray()\n\nprint(len(vectorizer.vocabulary_))\nprint(vectorizer.get_feature_names())\nprint(X)",
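The synchronization problem is easy to see with a toy re-implementation of the fit-time index assignment (plain Python that merely mimics the behavior; it is not scikit-learn code):

```python
def build_vocabulary(docs):
    # Assign each previously unseen token the next free integer index,
    # mimicking what CountVectorizer does at fit time.
    vocab = {}
    for doc in docs:
        for token in doc.lower().replace(".", "").split():
            vocab.setdefault(token, len(vocab))
    return vocab

vocab_a = build_vocabulary(["The cat sat on the mat."])
vocab_b = build_vocabulary(["The quick brown fox jumps over the lazy dog."])

# Built independently, the two vocabularies assign index 1 to different
# tokens ("cat" vs. "quick"), so they cannot be merged without re-indexing.
print(vocab_a["cat"], vocab_b["quick"])  # 1 1
```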
"The IMDb movie dataset\nTo illustrate the scalability issues of the vocabulary-based vectorizers, let's load a more realistic dataset for a classical text classification task: sentiment analysis on text documents. The goal is to tell apart negative from positive movie reviews from the Internet Movie Database (IMDb).\nIn the following sections, we will work with a large subset of movie reviews from the IMDb that has been collected by Maas et al. \n\nA. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts. Learning Word Vectors for Sentiment Analysis. In the proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. \n\nThis dataset contains 50,000 movie reviews, which were split into 25,000 training samples and 25,000 test samples. The reviews are labeled as either negative (neg) or positive (pos). Moreover, positive means that a movie received >6 stars on IMDb; negative means that a movie received <5 stars.\nAssuming that the ../fetch_data.py script was run successfully the following files should be available:",
"import os\n\ntrain_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'train')\ntest_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'test')",
"Now, let's load them into our active session via scikit-learn's load_files function",
"from sklearn.datasets import load_files\n\ntrain = load_files(container_path=(train_path),\n categories=['pos', 'neg'])\n\ntest = load_files(container_path=(test_path),\n categories=['pos', 'neg'])",
"<div class=\"alert alert-warning\">\n    <b>NOTE</b>:\n    <ul>\n    <li>\n    Since the movie dataset consists of 50,000 individual text files, executing the code snippet above may take ~20 sec or longer.\n    </li>\n    </ul>\n</div>\n\nThe load_files function loaded the datasets into sklearn.datasets.base.Bunch objects, which are Python dictionaries:",
"train.keys()",
"In particular, we are only interested in the data and target arrays.",
"import numpy as np\n\nfor label, data in zip(('TRAINING', 'TEST'), (train, test)):\n print('\\n\\n%s' % label)\n print('Number of documents:', len(data['data']))\n print('\\n1st document:\\n', data['data'][0])\n print('\\n1st label:', data['target'][0])\n print('\\nClass names:', data['target_names'])\n print('Class count:', \n np.unique(data['target']), ' -> ',\n np.bincount(data['target']))",
"As we can see above the 'target' array consists of integers 0 and 1, where 0 stands for negative and 1 stands for positive.\nThe Hashing Trick\nRemember the bag of words representation using a vocabulary-based vectorizer:\n<img src=\"figures/bag_of_words.svg\" width=\"100%\">\nTo work around the limitations of the vocabulary-based vectorizers, one can use the hashing trick. Instead of building and storing an explicit mapping from the feature names to the feature indices in a Python dict, we can just use a hash function and a modulus operation:\n<img src=\"figures/hashing_vectorizer.svg\" width=\"100%\">\nMore info and reference for the original papers on the Hashing Trick in the following site as well as a description specific to language here.",
"from sklearn.utils.murmurhash import murmurhash3_bytes_u32\n\n# encode for python 3 compatibility\nfor word in \"the cat sat on the mat\".encode(\"utf-8\").split():\n print(\"{0} => {1}\".format(\n word, murmurhash3_bytes_u32(word, 0) % 2 ** 20))",
"This mapping is completely stateless and the dimensionality of the output space is explicitly fixed in advance (here we use a modulo 2 ** 20 which means roughly 1M dimensions). This makes it possible to work around the limitations of the vocabulary-based vectorizer both for parallelizability and online / out-of-core learning.\nThe HashingVectorizer class is an alternative to the CountVectorizer (or TfidfVectorizer class with use_idf=False) that internally uses the murmurhash hash function:",
"from sklearn.feature_extraction.text import HashingVectorizer\n\nh_vectorizer = HashingVectorizer(encoding='latin-1')\nh_vectorizer",
"It shares the same \"preprocessor\", \"tokenizer\" and \"analyzer\" infrastructure:",
"analyzer = h_vectorizer.build_analyzer()\nanalyzer('This is a test sentence.')",
"We can vectorize our datasets into a scipy sparse matrix exactly as we would have done with the CountVectorizer or TfidfVectorizer, except that we can directly call the transform method: there is no need to fit as HashingVectorizer is a stateless transformer:",
"docs_train, y_train = train['data'], train['target']\ndocs_valid, y_valid = test['data'][:12500], test['target'][:12500]\ndocs_test, y_test = test['data'][12500:], test['target'][12500:]",
"The dimension of the output is fixed ahead of time to n_features=2 ** 20 by default (nearly 1M features) to minimize the rate of collision on most classification problems while having reasonably sized linear models (1M weights in the coef_ attribute):",
"h_vectorizer.transform(docs_train)",
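How rare are collisions at the default dimensionality? A rough birthday-problem estimate (back-of-the-envelope arithmetic assuming a uniform hash; this is not a scikit-learn computation) suggests they are negligible for moderate vocabularies:

```python
n_features = 2 ** 20

for vocab_size in (10_000, 100_000, 1_000_000):
    # Expected number of distinct buckets hit by vocab_size uniform hashes;
    # the shortfall relative to vocab_size is the expected collision count.
    expected_distinct = n_features * (1 - (1 - 1 / n_features) ** vocab_size)
    collisions = vocab_size - expected_distinct
    print(f"{vocab_size:>9} tokens -> ~{collisions:,.0f} colliding insertions")
```

For a 10,000-token vocabulary only a few dozen insertions collide, while at a million tokens collisions become substantial, which is why n_features may need to be raised for very large vocabularies.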
"Now, let's compare the computational efficiency of the HashingVectorizer to the CountVectorizer:",
"h_vec = HashingVectorizer(encoding='latin-1')\n%timeit -n 1 -r 3 h_vec.fit(docs_train, y_train)\n\ncount_vec = CountVectorizer(encoding='latin-1')\n%timeit -n 1 -r 3 count_vec.fit(docs_train, y_train)",
"As we can see, the HashingVectorizer is much faster than the CountVectorizer in this case.\nFinally, let us train a LogisticRegression classifier on the IMDb training subset:",
"from sklearn.linear_model import LogisticRegression\nfrom sklearn.pipeline import Pipeline\n\nh_pipeline = Pipeline([\n ('vec', HashingVectorizer(encoding='latin-1')),\n ('clf', LogisticRegression(random_state=1)),\n])\n\nh_pipeline.fit(docs_train, y_train)\n\nprint('Train accuracy', h_pipeline.score(docs_train, y_train))\nprint('Validation accuracy', h_pipeline.score(docs_valid, y_valid))\n\nimport gc\n\ndel count_vec\ndel h_pipeline\n\ngc.collect()",
"Out-of-Core learning\nOut-of-Core learning is the task of training a machine learning model on a dataset that does not fit into memory or RAM. This requires the following conditions:\n\na feature extraction layer with fixed output dimensionality\nknowing the list of all classes in advance (in this case we only have positive and negative reviews)\na machine learning algorithm that supports incremental learning (the partial_fit method in scikit-learn).\n\nIn the following sections, we will set up a simple batch-training function to train an SGDClassifier iteratively. \nBut first, let us load the file names into a Python list:",
"train_path = os.path.join('datasets', 'IMDb', 'aclImdb', 'train')\ntrain_pos = os.path.join(train_path, 'pos')\ntrain_neg = os.path.join(train_path, 'neg')\n\nfnames = [os.path.join(train_pos, f) for f in os.listdir(train_pos)] +\\\n [os.path.join(train_neg, f) for f in os.listdir(train_neg)]\n\nfnames[:3]",
"Next, let us create the target label array:",
"y_train = np.zeros((len(fnames), ), dtype=int)\ny_train[:12500] = 1\nnp.bincount(y_train)",
"Now, we implement the batch_train function as follows:",
"from sklearn.base import clone\n\ndef batch_train(clf, fnames, labels, iterations=25, batchsize=1000, random_seed=1):\n vec = HashingVectorizer(encoding='latin-1')\n idx = np.arange(labels.shape[0])\n c_clf = clone(clf)\n rng = np.random.RandomState(seed=random_seed)\n \n for i in range(iterations):\n rnd_idx = rng.choice(idx, size=batchsize)\n documents = []\n for i in rnd_idx:\n with open(fnames[i], 'r', encoding='latin-1') as f:\n documents.append(f.read())\n X_batch = vec.transform(documents)\n batch_labels = labels[rnd_idx]\n c_clf.partial_fit(X=X_batch, \n y=batch_labels, \n classes=[0, 1])\n \n return c_clf",
"Note that we are not using LogisticRegression as in the previous section, but we will use an SGDClassifier with a logistic cost function instead. SGD stands for stochastic gradient descent, an optimization algorithm that optimizes the weight coefficients iteratively sample by sample, which allows us to feed the data to the classifier chunk by chunk.\nAnd we train the SGDClassifier; using the default settings of the batch_train function, it will train the classifier on 25*1000=25000 documents. (Depending on your machine, this may take >2 min)",
"from sklearn.linear_model import SGDClassifier\n\nsgd = SGDClassifier(loss='log', random_state=1)\n\nsgd = batch_train(clf=sgd,\n fnames=fnames,\n labels=y_train)",
"Finally, let us evaluate its performance:",
"vec = HashingVectorizer(encoding='latin-1')\nsgd.score(vec.transform(docs_test), y_test)",
"Limitations of the Hashing Vectorizer\nUsing the Hashing Vectorizer makes it possible to implement streaming and parallel text classification but can also introduce some issues:\n\nThe collisions can introduce too much noise in the data and degrade prediction quality,\nThe HashingVectorizer does not provide \"Inverse Document Frequency\" reweighting (lack of a use_idf=True option).\nThere is no easy way to inverse the mapping and find the feature names from the feature index.\n\nThe collision issues can be controlled by increasing the n_features parameter.\nThe IDF weighting might be reintroduced by appending a TfidfTransformer instance on the output of the vectorizer. However, computing the idf_ statistic used for the feature reweighting will require at least one additional pass over the training set before being able to start training the classifier: this breaks the online learning scheme.\nThe lack of inverse mapping (the get_feature_names() method of TfidfVectorizer) is even harder to work around. That would require extending the HashingVectorizer class to add a \"trace\" mode to record the mapping of the most important features to provide statistical debugging information.\nIn the meantime, to debug feature extraction issues, it is recommended to use TfidfVectorizer(use_idf=False) on a small-ish subset of the dataset to simulate a HashingVectorizer() instance that has the get_feature_names() method and no collision issues.\n<div class=\"alert alert-success\">\n    <b>EXERCISE</b>:\n     <ul>\n      <li>\n      In our implementation of the batch_train function above, we randomly draw *k* training samples as a batch in each iteration, which can be considered as a random subsampling ***with*** replacement. Can you modify the `batch_train` function so that it iterates over the documents ***without*** replacement, i.e., that it uses each document ***exactly once*** per iteration?\n      </li>\n    </ul>\n</div>",
"# %load solutions/23_batchtrain.py"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kubeflow/pipelines
|
samples/contrib/e2e-outlier-drift-explainer/seldon/seldon_e2e_adult.ipynb
|
apache-2.0
|
[
"End to End Machine Learning Pipeline for Income Prediction\nWe use demographic features from the 1996 US census to build an end-to-end machine learning pipeline. The pipeline is also annotated so it can be run as a Kubeflow Pipeline using the Kale pipeline generator.\nThe notebook/pipeline stages are:\n\nSetup \nImports\npipeline-parameters\nminio client test\nTrain a simple sklearn model and push to minio\nPrepare an Anchors explainer for model and push to minio\nTest Explainer\nTrain an isolation forest outlier detector for model and push to minio\nDeploy a Seldon model and test\nDeploy a KFServing model and test\nDeploy an outlier detector",
"import numpy as np\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.preprocessing import StandardScaler, OneHotEncoder\nfrom alibi.explainers import AnchorTabular\nfrom alibi.datasets import fetch_adult\nfrom minio import Minio\nfrom minio.error import ResponseError\nfrom joblib import dump, load\nimport dill\nimport time\nimport json\nfrom subprocess import run, Popen, PIPE\nfrom alibi_detect.utils.data import create_outlier_batch\n\nMINIO_HOST=\"minio-service.kubeflow:9000\"\nMINIO_ACCESS_KEY=\"minio\"\nMINIO_SECRET_KEY=\"minio123\"\nMINIO_MODEL_BUCKET=\"seldon\"\nINCOME_MODEL_PATH=\"sklearn/income/model\"\nEXPLAINER_MODEL_PATH=\"sklearn/income/explainer\"\nOUTLIER_MODEL_PATH=\"sklearn/income/outlier\"\nDEPLOY_NAMESPACE=\"admin\"\n\ndef get_minio():\n return Minio(MINIO_HOST,\n access_key=MINIO_ACCESS_KEY,\n secret_key=MINIO_SECRET_KEY,\n secure=False)\n\nminioClient = get_minio()\nbuckets = minioClient.list_buckets()\nfor bucket in buckets:\n print(bucket.name, bucket.creation_date)\n\nif not minioClient.bucket_exists(MINIO_MODEL_BUCKET):\n minioClient.make_bucket(MINIO_MODEL_BUCKET)",
"Train Model",
"adult = fetch_adult()\nadult.keys()\n\ndata = adult.data\ntarget = adult.target\nfeature_names = adult.feature_names\ncategory_map = adult.category_map",
"Note that for your own datasets you can use our utility function gen_category_map to create the category map:",
"from alibi.utils.data import gen_category_map",
"Define shuffled training and test set",
"np.random.seed(0)\ndata_perm = np.random.permutation(np.c_[data, target])\ndata = data_perm[:,:-1]\ntarget = data_perm[:,-1]\n\nidx = 30000\nX_train,Y_train = data[:idx,:], target[:idx]\nX_test, Y_test = data[idx+1:,:], target[idx+1:]",
"Create feature transformation pipeline\nCreate feature pre-processor. Needs to have 'fit' and 'transform' methods. Different types of pre-processing can be applied to all or part of the features. In the example below we will standardize ordinal features and apply one-hot-encoding to categorical features.\nOrdinal features:",
"ordinal_features = [x for x in range(len(feature_names)) if x not in list(category_map.keys())]\nordinal_transformer = Pipeline(steps=[('imputer', SimpleImputer(strategy='median')),\n ('scaler', StandardScaler())])",
"Categorical features:",
"categorical_features = list(category_map.keys())\ncategorical_transformer = Pipeline(steps=[('imputer', SimpleImputer(strategy='median')),\n ('onehot', OneHotEncoder(handle_unknown='ignore'))])",
"Combine and fit:",
"preprocessor = ColumnTransformer(transformers=[('num', ordinal_transformer, ordinal_features),\n ('cat', categorical_transformer, categorical_features)])",
"Train Random Forest model\nFit on pre-processed (imputing, OHE, standardizing) data.",
"np.random.seed(0)\nclf = RandomForestClassifier(n_estimators=50)\n\nmodel=Pipeline(steps=[(\"preprocess\",preprocessor),(\"model\",clf)])\nmodel.fit(X_train,Y_train)",
"Define predict function",
"def predict_fn(x):\n return model.predict(x)\n\n#predict_fn = lambda x: clf.predict(preprocessor.transform(x))\nprint('Train accuracy: ', accuracy_score(Y_train, predict_fn(X_train)))\nprint('Test accuracy: ', accuracy_score(Y_test, predict_fn(X_test)))\n\ndump(model, 'model.joblib') \n\nprint(get_minio().fput_object(MINIO_MODEL_BUCKET, f\"{INCOME_MODEL_PATH}/model.joblib\", 'model.joblib'))",
"Train Explainer",
"model.predict(X_train)\nexplainer = AnchorTabular(predict_fn, feature_names, categorical_names=category_map)",
"Discretize the ordinal features into quartiles",
"explainer.fit(X_train, disc_perc=[25, 50, 75])\n\nwith open(\"explainer.dill\", \"wb\") as dill_file:\n dill.dump(explainer, dill_file) \n dill_file.close()\nprint(get_minio().fput_object(MINIO_MODEL_BUCKET, f\"{EXPLAINER_MODEL_PATH}/explainer.dill\", 'explainer.dill'))",
"Get Explanation\nBelow, we get an anchor for the prediction of the first observation in the test set. An anchor is a sufficient condition - that is, when the anchor holds, the prediction should be the same as the prediction for this instance.",
"model.predict(X_train)\nidx = 0\nclass_names = adult.target_names\nprint('Prediction: ', class_names[explainer.predict_fn(X_test[idx].reshape(1, -1))[0]])",
"We set the precision threshold to 0.95. This means that predictions on observations where the anchor holds will be the same as the prediction on the explained instance at least 95% of the time.",
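The precision of an anchor can be illustrated with a toy model and a hand-written rule (both the classifier and the anchor here are hypothetical, purely to show the definition — this is not how AnchorTabular searches for anchors):

```python
import random

random.seed(0)

def predict(x):
    # toy linear classifier on two features
    return int(x[0] + 0.1 * x[1] > 0)

instance = (0.5, 0.0)

# candidate anchor: "feature 0 > 0.2" -- draw perturbed samples where it holds
samples = [(random.uniform(0.2, 1.0), random.uniform(-3.0, 3.0))
           for _ in range(1000)]

# precision = fraction of anchored samples that keep the instance's prediction
precision = sum(predict(x) == predict(instance) for x in samples) / len(samples)
print(precision)  # close to 1.0: when the anchor holds, the prediction rarely flips
```

A threshold of 0.95 simply means the search keeps strengthening the rule until this fraction reaches at least 0.95.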
"explanation = explainer.explain(X_test[idx], threshold=0.95)\nprint('Anchor: %s' % (' AND '.join(explanation['names'])))\nprint('Precision: %.2f' % explanation['precision'])\nprint('Coverage: %.2f' % explanation['coverage'])",
"Train Outlier Detector",
"from alibi_detect.od import IForest\n\nod = IForest(\n threshold=0.,\n n_estimators=200,\n)\n\n\nod.fit(X_train)\n\nnp.random.seed(0)\nperc_outlier = 5\nthreshold_batch = create_outlier_batch(X_train, Y_train, n_samples=1000, perc_outlier=perc_outlier)\nX_threshold, y_threshold = threshold_batch.data.astype('float'), threshold_batch.target\n#X_threshold = (X_threshold - mean) / stdev\nprint('{}% outliers'.format(100 * y_threshold.mean()))\n\nod.infer_threshold(X_threshold, threshold_perc=100-perc_outlier)\nprint('New threshold: {}'.format(od.threshold))\nthreshold = od.threshold\n\nX_outlier = [[300, 4, 4, 2, 1, 4, 4, 0, 0, 0, 600, 9]]\n\nod.predict(\n X_outlier\n)\n\nfrom alibi_detect.utils.saving import save_detector, load_detector\nfrom os import listdir\nfrom os.path import isfile, join\n\nfilepath=\"ifoutlier\"\nsave_detector(od, filepath) \nonlyfiles = [f for f in listdir(filepath) if isfile(join(filepath, f))]\nfor filename in onlyfiles:\n print(filename)\n print(get_minio().fput_object(MINIO_MODEL_BUCKET, f\"{OUTLIER_MODEL_PATH}/{filename}\", join(filepath, filename)))",
"Deploy Seldon Core Model",
"secret = f\"\"\"apiVersion: v1\nkind: Secret\nmetadata:\n name: seldon-init-container-secret\n namespace: {DEPLOY_NAMESPACE}\ntype: Opaque\nstringData:\n AWS_ACCESS_KEY_ID: {MINIO_ACCESS_KEY}\n AWS_SECRET_ACCESS_KEY: {MINIO_SECRET_KEY}\n AWS_ENDPOINT_URL: http://{MINIO_HOST}\n USE_SSL: \"false\"\n\"\"\"\nwith open(\"secret.yaml\",\"w\") as f:\n f.write(secret)\nrun(\"cat secret.yaml | kubectl apply -f -\", shell=True)\n\nsa = f\"\"\"apiVersion: v1\nkind: ServiceAccount\nmetadata:\n name: minio-sa\n namespace: {DEPLOY_NAMESPACE}\nsecrets:\n - name: seldon-init-container-secret\n\"\"\"\nwith open(\"sa.yaml\",\"w\") as f:\n f.write(sa)\nrun(\"kubectl apply -f sa.yaml\", shell=True)\n\nmodel_yaml=f\"\"\"apiVersion: machinelearning.seldon.io/v1\nkind: SeldonDeployment\nmetadata:\n name: income-classifier\n namespace: {DEPLOY_NAMESPACE}\nspec:\n predictors:\n - componentSpecs:\n graph:\n implementation: SKLEARN_SERVER\n modelUri: s3://{MINIO_MODEL_BUCKET}/{INCOME_MODEL_PATH}\n envSecretRefName: seldon-init-container-secret\n name: classifier\n logger:\n mode: all\n explainer:\n type: AnchorTabular\n modelUri: s3://{MINIO_MODEL_BUCKET}/{EXPLAINER_MODEL_PATH}\n envSecretRefName: seldon-init-container-secret\n name: default\n replicas: 1\n\"\"\"\nwith open(\"model.yaml\",\"w\") as f:\n f.write(model_yaml)\nrun(\"kubectl apply -f model.yaml\", shell=True)\n\nrun(f\"kubectl rollout status -n {DEPLOY_NAMESPACE} deploy/$(kubectl get deploy -l seldon-deployment-id=income-classifier -o jsonpath='{{.items[0].metadata.name}}' -n {DEPLOY_NAMESPACE})\", shell=True)\n\nrun(f\"kubectl rollout status -n {DEPLOY_NAMESPACE} deploy/$(kubectl get deploy -l seldon-deployment-id=income-classifier -o jsonpath='{{.items[1].metadata.name}}' -n {DEPLOY_NAMESPACE})\", shell=True)",
"Make a prediction request",
"payload='{\"data\": {\"ndarray\": [[53,4,0,2,8,4,4,0,0,0,60,9]]}}'\ncmd=f\"\"\"curl -d '{payload}' \\\n http://income-classifier-default.{DEPLOY_NAMESPACE}:8000/api/v1.0/predictions \\\n -H \"Content-Type: application/json\"\n\"\"\"\nret = Popen(cmd, shell=True,stdout=PIPE)\nraw = ret.stdout.read().decode(\"utf-8\")\nprint(raw)",
"Make an explanation request",
"payload='{\"data\": {\"ndarray\": [[53,4,0,2,8,4,4,0,0,0,60,9]]}}'\ncmd=f\"\"\"curl -d '{payload}' \\\n http://income-classifier-default-explainer.{DEPLOY_NAMESPACE}:9000/api/v1.0/explain \\\n -H \"Content-Type: application/json\"\n\"\"\"\nret = Popen(cmd, shell=True,stdout=PIPE)\nraw = ret.stdout.read().decode(\"utf-8\")\nprint(raw)",
"Deploy Outlier Detector",
"outlier_yaml=f\"\"\"apiVersion: serving.knative.dev/v1\nkind: Service\nmetadata:\n name: income-outlier\n namespace: {DEPLOY_NAMESPACE}\nspec:\n template:\n metadata:\n annotations:\n autoscaling.knative.dev/minScale: \"1\"\n spec:\n containers:\n - image: seldonio/alibi-detect-server:1.2.2-dev_alibidetect\n imagePullPolicy: IfNotPresent\n args:\n - --model_name\n - adultod\n - --http_port\n - '8080'\n - --protocol\n - seldon.http\n - --storage_uri\n - s3://{MINIO_MODEL_BUCKET}/{OUTLIER_MODEL_PATH}\n - --reply_url\n - http://default-broker \n - --event_type\n - io.seldon.serving.inference.outlier\n - --event_source\n - io.seldon.serving.incomeod\n - OutlierDetector\n envFrom:\n - secretRef:\n name: seldon-init-container-secret\n\"\"\"\nwith open(\"outlier.yaml\",\"w\") as f:\n f.write(outlier_yaml)\nrun(\"kubectl apply -f outlier.yaml\", shell=True)\n\ntrigger_outlier_yaml=f\"\"\"apiVersion: eventing.knative.dev/v1alpha1\nkind: Trigger\nmetadata:\n name: income-outlier-trigger\n namespace: {DEPLOY_NAMESPACE}\nspec:\n filter:\n sourceAndType:\n type: io.seldon.serving.inference.request\n subscriber:\n ref:\n apiVersion: serving.knative.dev/v1alpha1\n kind: Service\n name: income-outlier\n\"\"\"\nwith open(\"outlier_trigger.yaml\",\"w\") as f:\n f.write(trigger_outlier_yaml)\nrun(\"kubectl apply -f outlier_trigger.yaml\", shell=True)\n\nrun(f\"kubectl rollout status -n {DEPLOY_NAMESPACE} deploy/$(kubectl get deploy -l serving.knative.dev/service=income-outlier -o jsonpath='{{.items[0].metadata.name}}' -n {DEPLOY_NAMESPACE})\", shell=True)",
"Deploy KNative Eventing Event Display",
"event_display=f\"\"\"apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: event-display\n namespace: {DEPLOY_NAMESPACE} \nspec:\n replicas: 1\n selector:\n matchLabels: &labels\n app: event-display\n template:\n metadata:\n labels: *labels\n spec:\n containers:\n - name: helloworld-go\n # Source code: https://github.com/knative/eventing-contrib/tree/master/cmd/event_display\n image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display@sha256:f4628e97a836c77ed38bd3b6fd3d0b06de4d5e7db6704772fe674d48b20bd477\n---\nkind: Service\napiVersion: v1\nmetadata:\n name: event-display\n namespace: {DEPLOY_NAMESPACE}\nspec:\n selector:\n app: event-display\n ports:\n - protocol: TCP\n port: 80\n targetPort: 8080\n---\napiVersion: eventing.knative.dev/v1alpha1\nkind: Trigger\nmetadata:\n name: income-outlier-display\n namespace: {DEPLOY_NAMESPACE}\nspec:\n broker: default\n filter:\n attributes:\n type: io.seldon.serving.inference.outlier\n subscriber:\n ref:\n apiVersion: v1\n kind: Service\n name: event-display\n\"\"\"\nwith open(\"event_display.yaml\",\"w\") as f:\n f.write(event_display)\nrun(\"kubectl apply -f event_display.yaml\", shell=True)\n\nrun(f\"kubectl rollout status -n {DEPLOY_NAMESPACE} deploy/event-display -n {DEPLOY_NAMESPACE}\", shell=True)",
"Test Outlier Detection",
"def predict():\n payload='{\"data\": {\"ndarray\": [[300, 4, 4, 2, 1, 4, 4, 0, 0, 0, 600, 9]]}}'\n cmd=f\"\"\"curl -d '{payload}' \\\n http://income-classifier-default.{DEPLOY_NAMESPACE}:8000/api/v1.0/predictions \\\n -H \"Content-Type: application/json\"\n \"\"\"\n ret = Popen(cmd, shell=True,stdout=PIPE)\n raw = ret.stdout.read().decode(\"utf-8\")\n print(raw)\n\ndef get_outlier_event_display_logs():\n cmd=f\"kubectl logs $(kubectl get pod -l app=event-display -o jsonpath='{{.items[0].metadata.name}}' -n {DEPLOY_NAMESPACE}) -n {DEPLOY_NAMESPACE}\"\n ret = Popen(cmd, shell=True,stdout=PIPE)\n res = ret.stdout.read().decode(\"utf-8\").split(\"\\n\")\n data = []\n for i in range(0,len(res)):\n if res[i] == 'Data,':\n j = json.loads(json.loads(res[i+1]))\n if \"is_outlier\" in j[\"data\"].keys():\n data.append(j)\n if len(data) > 0:\n return data[-1]\n else:\n return None\nj = None\nwhile j is None:\n predict()\n print(\"Waiting for outlier logs, sleeping\")\n time.sleep(2)\n j = get_outlier_event_display_logs()\n \nprint(j)\nprint(\"Outlier\", j[\"data\"][\"is_outlier\"]==[1])"
"Clean Up Resources",
"run(f\"kubectl delete sdep income-classifier -n {DEPLOY_NAMESPACE}\", shell=True)\nrun(f\"kubectl delete ksvc income-outlier -n {DEPLOY_NAMESPACE}\", shell=True)\nrun(f\"kubectl delete sa minio-sa -n {DEPLOY_NAMESPACE}\", shell=True)\nrun(f\"kubectl delete secret seldon-init-container-secret -n {DEPLOY_NAMESPACE}\", shell=True)\nrun(f\"kubectl delete deployment event-display -n {DEPLOY_NAMESPACE}\", shell=True)\nrun(f\"kubectl delete svc event-display -n {DEPLOY_NAMESPACE}\", shell=True)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DelMaestroGroup/PartEntFermions
|
FiniteSizeScaling/plotScript.ipynb
|
mit
|
[
"Finite Size Scaling\nFigure 3\nInvestigate the finite size scaling of $S_2(n=1)-\\ln(N)$ vs $N^{-(4g+1)}$ for $N=2\\to 14$ and $V/t = -1.3 \\to 1.8$.",
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\ndef g(V):\n '''Interaction parameter g.'''\n K = np.pi/np.arccos(-V/2)/2\n return (K+1/K)/4-1/2",
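The fits below rescale the data against the leading finite-size correction exponent γ = 1 + 4g(V). As a quick standard-library sanity check of this exponent (re-deriving the g(V) helper above with the math module):

```python
import math

def g(V):
    """Interaction parameter g (same formula as the notebook's helper)."""
    K = math.pi / math.acos(-V / 2) / 2
    return (K + 1 / K) / 4 - 1 / 2

def gamma(V):
    """Leading finite-size correction exponent in S2(n=1) - ln(N) ~ N^(-gamma)."""
    return 1 + 4 * g(V)

# At the free point V = 0: acos(0) = pi/2, so K = 1, g = 0, and the
# correction decays as 1/N.
print(gamma(0.0))  # 1.0
# Since K + 1/K >= 2, g >= 0 on both sides of V = 0, so gamma >= 1
# for every interaction strength in the list above.
```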
"Interaction strengths and boundary conditions",
"V = [1.8,1.4,1.0,0.6,0.2,-0.1,-0.5,-0.9,-1.3]\nBC = ['APBC','PBC']",
"Load the data and perform the linear fit for each BC and interaction strength",
"S2scaled = {}\nfor cBC in BC:\n for cV in V:\n \n # load raw data\n data = np.loadtxt('N1%sn1u_%3.1f.dat'%(cBC[0],cV))\n\n # Raises each N to the power of the leading finite size correction γ=(4g+1)\n x = data[:,0]**(-(1.0+4*g(cV)))\n \n # perform the linear fit\n p = np.polyfit(x,data[:,3],1)\n \n # rescale the data\n y = (data[:,3]-p[1])/p[0]\n\n # store\n S2scaled['%s%3.1f'%(cBC[0],cV)] = np.stack((x,y),axis=1)",
"Plot the scaled finite-size data collapse",
"colors = ['#4173b3','#e95c47','#7dcba4','#5e4ea2','#fdbe6e','#808080','#2e8b57','#b8860b','#87ceeb']\nmarkers = ['o','s','h','D','H','>','^','<','v']\n\nplt.style.reload_library()\nwith plt.style.context('../IOP_large.mplstyle'):\n \n # Create the figure\n fig1 = plt.figure()\n ax2 = fig1.add_subplot(111)\n ax3 = fig1.add_subplot(111)\n ax2.set_xlabel(r'$N^{-(1+4g)}$')\n ax2.set_ylabel(r'$(S_2(n=1)-{\\rm{ln}}(N)-a)/b$')\n ax2.set_xlim(0,0.52)\n ax2.set_ylim(0,0.52)\n ax3 = ax2.twinx()\n ax3.set_xlim(0,0.52)\n ax3.set_ylim(0,0.52)\n ax1 = fig1.add_subplot(111)\n ax1.set_xlim(0,0.52)\n ax1.set_ylim(0,0.52)\n \n\n # Plots (S2(n=1)-ln(N)-a)/b vs. N^-(4g+1) \n # anti-periodic boundary conditions\n for i,cV in enumerate(V):\n data = S2scaled['A%3.1f'%cV]\n if cV > 0:\n ax = ax2\n else:\n ax = ax3\n \n ax.plot(data[:,0],data[:,1], marker=markers[i], color=colors[i], mfc='None', mew='1.0', \n linestyle='None', label=r'$%3.1f$'%cV)\n \n # U/t > 0 legend\n ax2.legend(loc=(0.01,0.29),frameon=False,numpoints=1,ncol=1,title=r'$V/t$')\n \n \n # periodic boundary conditions \n for i,cV in enumerate(V): \n data = S2scaled['P%3.1f'%cV]\n\n ax1.plot(data[:,0],data[:,1], marker=markers[i], color=colors[i], mfc='None', mew=1.0,\n linestyle='None', label='None')\n\n # U/t < 0 legend\n ax3.legend(loc=(0.65,0.04),frameon=False,numpoints=1,ncol=1,title=r'$V/t$')\n ax3.tick_params(\n axis='both', # changes apply to the x-axis\n which='both', # both major and minor ticks are affected\n right='off', # ticks along the bottom edge are off\n top='off', # ticks along the top edge are off\n labelright='off') # labels along the bottom edge are off\n ax2.tick_params(\n axis='both', # changes apply to the x-axis\n which='both', # both major and minor ticks are affected\n right='on',\n top='on') # ticks along the top edge are off\n \n # Save the figure\n plt.savefig('finiteSizeScaling.pdf')\n plt.savefig('finiteSizeScaling.png')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
seniosh/StatisticalMethods
|
examples/SDSScatalog/FirstLook.ipynb
|
gpl-2.0
|
[
"A First Look at the SDSS Photometric \"Galaxy\" Catalog\n\n\nThe Sloan Digital Sky Survey imaged over 10,000 sq degrees of sky (about 25% of the total), automatically detecting, measuring and cataloging millions of \"objects\".\n\n\nWhile the primary data products of the SDSS were (and still are) its spectroscopic surveys, the photometric survey provides an important testing ground for dealing with pure imaging surveys like those being carried out by DES and planned for LSST.\n\n\nLet's download part of the SDSS photometric object catalog and explore it.\n\n\nSDSS data release 12 (DR12) is described at the SDSS3 website and in the survey paper by Alam et al 2015. \nWe will use the SDSS DR12 SQL query interface. For help designing queries, the sample queries page is invaluable, and you will probably want to check out the links to the \"schema browser\" at some point as well. Notice the \"check syntax only\" button on the SQL query interface: this is very useful for debugging SQL queries.\n\nSmall test queries can be executed directly in the browser. Larger ones (involving more than a few tens of thousands of objects, or a lot of processing) should be submitted via the CasJobs system. Try the browser first, and move to CasJobs when you need to.",
"%load_ext autoreload\n%autoreload 2\n\nfrom __future__ import print_function\nimport numpy as np\nimport SDSS\nimport pandas as pd\nimport matplotlib\n%matplotlib inline\n\nobjects = \"SELECT top 10000 \\\nra, \\\ndec, \\\ntype, \\\ndered_u as u, \\\ndered_g as g, \\\ndered_r as r, \\\ndered_i as i, \\\npetroR50_i AS size \\\nFROM PhotoObjAll \\\nWHERE \\\n((type = '3' OR type = '6') AND \\\n ra > 185.0 AND ra < 185.2 AND \\\n dec > 15.0 AND dec < 15.2)\"\nprint (objects)\n\n# Download data. This can take a while...\nsdssdata = SDSS.select(objects)\nsdssdata",
"Notice:\n* Some values are large and negative - indicating a problem with the automated measurement routine. We will need to deal with these.\n* Sizes are \"effective radii\" in arcseconds. The typical resolution (\"point spread function\" effective radius) in an SDSS image is around 0.7\".\n\nLet's save this download for further use.",
"!mkdir -p downloads\nsdssdata.to_csv(\"downloads/SDSSobjects.csv\")",
"Visualizing Data in N-dimensions\nThis is, in general, difficult.\nLooking at all possible 1 and 2-dimensional histograms/scatter plots helps a lot. \nColor coding can bring in a 3rd dimension (and even a 4th). Interactive plots and movies are also well worth thinking about.\n<br>\nHere we'll follow a multi-dimensional visualization example due to Josh Bloom at UC Berkeley:",
"# We'll use astronomical g-r color as the colorizer, and then plot \n# position, magnitude, size and color against each other.\n\ndata = pd.read_csv(\"downloads/SDSSobjects.csv\",usecols=[\"ra\",\"dec\",\"u\",\"g\",\\\n \"r\",\"i\",\"size\"])\n\n# Filter out objects with bad magnitude or size measurements:\ndata = data[(data[\"u\"] > 0) & (data[\"g\"] > 0) & (data[\"r\"] > 0) & (data[\"i\"] > 0) & (data[\"size\"] > 0)]\n\n# Log size, and g-r color, will be more useful:\ndata['log_size'] = np.log10(data['size'])\ndata['g-r_color'] = data['g'] - data['r']\n\n# Drop the things we're not so interested in:\ndel data['u'], data['g'], data['r'], data['size']\n\ndata.head()\n\n# Get ready to plot:\npd.set_option('display.max_columns', None)\n# !pip install --upgrade seaborn \nimport seaborn as sns\nsns.set()\n\ndef plot_everything(data,colorizer,vmin=0.0,vmax=10.0):\n # Truncate the color map to retain contrast between faint objects.\n norm = matplotlib.colors.Normalize(vmin=vmin, vmax=vmax)\n cmap = matplotlib.cm.jet\n m = matplotlib.cm.ScalarMappable(norm=norm, cmap=cmap)\n plot = pd.scatter_matrix(data, alpha=0.2,figsize=[15,15],color=m.to_rgba(data[colorizer]))\n return\n\nplot_everything(data,'g-r_color',vmin=-1.0, vmax=3.0)",
"Size-magnitude\nLet's zoom in and look at the objects' (log) sizes and magnitudes.",
"zoom = data.copy()\ndel zoom['ra'],zoom['dec'],zoom['g-r_color']\nplot_everything(zoom,'i',vmin=15.0, vmax=21.5)",
"Q: What features do you notice in this plot?\nTalk to your neighbor for a minute or two about all the things that might be going on, and be ready to point things out to the class."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
google/starthinker
|
colabs/email_dv360_to_bigquery.ipynb
|
apache-2.0
|
[
"DV360 Report Emailed To BigQuery\nPulls a DV360 Report from a gMail email into BigQuery.\nLicense\nCopyright 2020 Google LLC,\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nDisclaimer\nThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.\nThis code generated (see starthinker/scripts for possible source):\n - Command: \"python starthinker_ui/manage.py colab\"\n - Command: \"python starthinker/tools/colab.py [JSON RECIPE]\"\n1. Install Dependencies\nFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.",
"!pip install git+https://github.com/google/starthinker\n",
"2. Set Configuration\nThis code is required to initialize the project. Fill in required fields and press play.\n\nIf the recipe uses a Google Cloud Project:\n\nSet the configuration project value to the project identifier from these instructions.\n\n\nIf the recipe has auth set to user:\n\nIf you have user credentials:\nSet the configuration user value to your user credentials JSON.\n\n\n\nIf you DO NOT have user credentials:\n\nSet the configuration client value to downloaded client credentials.\n\n\n\nIf the recipe has auth set to service:\n\nSet the configuration service value to downloaded service credentials.",
"from starthinker.util.configuration import Configuration\n\n\nCONFIG = Configuration(\n project=\"\",\n client={},\n service={},\n user=\"/content/user.json\",\n verbose=True\n)\n\n",
"3. Enter DV360 Report Emailed To BigQuery Recipe Parameters\n\nThe person executing this recipe must be the recipient of the email.\nSchedule a DV360 report to be sent to an email like ****.\nOr set up a redirect rule to forward a report you already receive.\nThe report can be sent as an attachment or a link.\nEnsure this recipe runs after the report is emailed daily.\nGive a regular expression to match the email subject.\nConfigure the destination in BigQuery to write the data.\nModify the values below for your use case; this can be done multiple times, then click play.",
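The subject field is a regular expression, and because it travels through the JSON recipe its backslashes must be doubled, as the note in the parameters below says. A quick illustration with a hypothetical subject format:

```python
import re

# Backslashes are doubled in the source string; the regex engine
# actually sees \d{4}-\d{2}-\d{2}. The subject format is hypothetical.
subject_pattern = "DV360 Report .*\\d{4}-\\d{2}-\\d{2}"

print(bool(re.search(subject_pattern, "DV360 Report run 2020-01-31")))  # True
print(bool(re.search(subject_pattern, "Unrelated newsletter")))         # False
```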
"FIELDS = {\n 'auth_read':'user', # Credentials used for reading data.\n 'email':'', # Email address report was sent to.\n 'subject':'.*', # Regular expression to match subject. Double escape backslashes.\n 'dataset':'', # Existing dataset in BigQuery.\n 'table':'', # Name of table to be written to.\n 'dbm_schema':'[]', # Schema provided in JSON list format or empty list.\n 'is_incremental_load':False, # Append report data to table based on date column, de-duplicates.\n}\n\nprint(\"Parameters Set To: %s\" % FIELDS)\n",
"4. Execute DV360 Report Emailed To BigQuery\nThis does NOT need to be modified unless you are changing the recipe, click play.",
"from starthinker.util.configuration import execute\nfrom starthinker.util.recipe import json_set_fields\n\nTASKS = [\n {\n 'email':{\n 'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},\n 'read':{\n 'from':'noreply-dv360@google.com',\n 'to':{'field':{'name':'email','kind':'string','order':1,'default':'','description':'Email address report was sent to.'}},\n 'subject':{'field':{'name':'subject','kind':'string','order':2,'default':'.*','description':'Regular expression to match subject. Double escape backslashes.'}},\n 'link':'https://storage.googleapis.com/.*',\n 'attachment':'.*'\n },\n 'write':{\n 'bigquery':{\n 'dataset':{'field':{'name':'dataset','kind':'string','order':3,'default':'','description':'Existing dataset in BigQuery.'}},\n 'table':{'field':{'name':'table','kind':'string','order':4,'default':'','description':'Name of table to be written to.'}},\n 'schema':{'field':{'name':'dbm_schema','kind':'json','order':5,'default':'[]','description':'Schema provided in JSON list format or empty list.'}},\n 'header':True,\n 'is_incremental_load':{'field':{'name':'is_incremental_load','kind':'boolean','order':6,'default':False,'description':'Append report data to table based on date column, de-duplicates.'}}\n }\n }\n }\n }\n]\n\njson_set_fields(TASKS, FIELDS)\n\nexecute(CONFIG, TASKS, force=True)\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
QuantStack/quantstack-talks
|
pylondinium/notebooks/wealth-of-nations.ipynb
|
bsd-3-clause
|
[
"This is a bqplot recreation of Mike Bostock's Wealth of Nations, a visualization also produced by Gapminder and originally based on a TED Talk by Hans Rosling.",
"import pandas as pd\nimport numpy as np\nimport os\n\nfrom bqplot import (\n LogScale, LinearScale, OrdinalColorScale, ColorAxis,\n Axis, Scatter, Lines, CATEGORY10, Label, Figure, Tooltip\n)\n\nfrom ipywidgets import HBox, VBox, IntSlider, Play, jslink\n\ninitial_year = 1800",
"Cleaning and Formatting JSON Data",
"data = pd.read_json(os.path.abspath('./nations.json'))\n\ndef clean_data(data):\n for column in ['income', 'lifeExpectancy', 'population']:\n data = data.drop(data[data[column].apply(len) <= 4].index)\n return data\n\ndef extrap_interp(data):\n data = np.array(data)\n x_range = np.arange(1800, 2009, 1.)\n y_range = np.interp(x_range, data[:, 0], data[:, 1])\n return y_range\n\ndef extrap_data(data):\n for column in ['income', 'lifeExpectancy', 'population']:\n data[column] = data[column].apply(extrap_interp)\n return data\n\ndata = clean_data(data)\ndata = extrap_data(data)\n\nincome_min, income_max = np.min(data['income'].apply(np.min)), np.max(data['income'].apply(np.max))\nlife_exp_min, life_exp_max = np.min(data['lifeExpectancy'].apply(np.min)), np.max(data['lifeExpectancy'].apply(np.max))\npop_min, pop_max = np.min(data['population'].apply(np.min)), np.max(data['population'].apply(np.max))\n\ndef get_data(year):\n year_index = year - 1800\n income = data['income'].apply(lambda x: x[year_index])\n life_exp = data['lifeExpectancy'].apply(lambda x: x[year_index])\n pop = data['population'].apply(lambda x: x[year_index])\n return income, life_exp, pop",
"Creating the Tooltip to display the required fields\nbqplot's native Tooltip allows us to simply display the data fields we require on mouse interaction.",
"tt = Tooltip(fields=['name', 'x', 'y'], labels=['Country Name', 'Income per Capita', 'Life Expectancy'])",
"Creating the Label to display the year\nStaying true to the d3 recreation of the talk, we place a Label widget in the bottom-right of the Figure (it inherits the Figure co-ordinates when no scale is passed to it). With enable_move set to True, the Label can be dragged around.",
"year_label = Label(x=[0.75], y=[0.10], font_size=52, font_weight='bolder', colors=['orange'],\n text=[str(initial_year)], enable_move=True)",
"Defining Axes and Scales\nThe inherent skewness of the income data favors the use of a LogScale. Also, since the color coding by regions does not follow an ordering, we use the OrdinalColorScale.",
"x_sc = LogScale(min=income_min, max=income_max)\ny_sc = LinearScale(min=life_exp_min, max=life_exp_max)\nc_sc = OrdinalColorScale(domain=data['region'].unique().tolist(), colors=CATEGORY10[:6])\nsize_sc = LinearScale(min=pop_min, max=pop_max)\n\nax_y = Axis(label='Life Expectancy', scale=y_sc, orientation='vertical', side='left', grid_lines='solid')\nax_x = Axis(label='Income per Capita', scale=x_sc, grid_lines='solid')",
"Creating the Scatter Mark with the appropriate size and color parameters passed\nTo generate the appropriate graph, we need to pass the population of the country to the size attribute and its region to the color attribute.",
"# Start with the first year's data\ncap_income, life_exp, pop = get_data(initial_year)\n\nwealth_scat = Scatter(x=cap_income, y=life_exp, color=data['region'], size=pop,\n names=data['name'], display_names=False,\n scales={'x': x_sc, 'y': y_sc, 'color': c_sc, 'size': size_sc},\n default_size=4112, tooltip=tt, animate=True, stroke='Black',\n unhovered_style={'opacity': 0.5})\n\nnation_line = Lines(x=data['income'][0], y=data['lifeExpectancy'][0], colors=['Gray'],\n scales={'x': x_sc, 'y': y_sc}, visible=False)",
"Creating the Figure",
"time_interval = 10\n\nfig = Figure(marks=[wealth_scat, year_label, nation_line], axes=[ax_x, ax_y],\n title='Health and Wealth of Nations', animation_duration=time_interval)",
"Using a Slider to allow the user to change the year and a button for animation\nHere we see how we can seamlessly integrate bqplot into the jupyter widget infrastructure.",
"year_slider = IntSlider(min=1800, max=2008, step=1, description='Year', value=initial_year)",
"When the hovered_point of the Scatter plot is changed (i.e. when the user hovers over a different element), the entire path of that country is displayed by making the Lines object visible and setting its x and y attributes.",
"def hover_changed(change):\n if change.new is not None:\n nation_line.x = data['income'][change.new + 1]\n nation_line.y = data['lifeExpectancy'][change.new + 1]\n nation_line.visible = True\n else:\n nation_line.visible = False\n \nwealth_scat.observe(hover_changed, 'hovered_point')",
"On the slider value callback (a function that is triggered every time the value of the slider is changed) we change the x, y and size co-ordinates of the Scatter. We also update the text of the Label to reflect the current year.",
"def year_changed(change):\n wealth_scat.x, wealth_scat.y, wealth_scat.size = get_data(year_slider.value)\n year_label.text = [str(year_slider.value)]\n\nyear_slider.observe(year_changed, 'value')",
"Add an animation button",
"play_button = Play(min=1800, max=2008, interval=time_interval)\njslink((play_button, 'value'), (year_slider, 'value'))",
"Displaying the GUI",
"VBox([HBox([play_button, year_slider]), fig])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
xpharry/Udacity-DLFoudation
|
tutorials/reinforcement/Q-learning-cart.ipynb
|
mit
|
[
"Deep Q-learning\nIn this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use Q-learning to train an agent to play a game called Cart-Pole. In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible.\n\nWe can simulate this game using OpenAI Gym. First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game.",
"import gym\nimport tensorflow as tf\nimport numpy as np\n\n# Create the Cart-Pole game environment\nenv = gym.make('CartPole-v0')",
"We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space, and to get a random action you can use env.action_space.sample(). This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right, encoded as 0 and 1.\nRun the code below to watch the simulation run.",
"env.reset()\nrewards = []\nfor _ in range(100):\n env.render()\n state, reward, done, info = env.step(env.action_space.sample()) # take a random action\n rewards.append(reward)\n if done:\n rewards = []\n env.reset()",
"To shut the window showing the simulation, use env.close().\nIf you ran the simulation above, we can look at the rewards:",
"print(rewards[-20:])",
"The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right.\nQ-Network\nWe train our Q-learning agent using the Bellman Equation:\n$$\nQ(s, a) = r + \\gamma \\max{Q(s', a')}\n$$\nwhere $s$ is a state, $a$ is an action, and $s'$ is the next state from state $s$ and action $a$.\nBefore, we used this equation to learn values for a Q-table. However, for this game there are a huge number of states available. The state has four values: the position and velocity of the cart, and the angle and angular velocity of the pole. These are all real-valued numbers, so ignoring floating point precision, you practically have infinite states. Instead of using a table then, we'll replace it with a neural network that will approximate the Q-table lookup function.\n<img src=\"assets/deep-q-learning.png\" width=450px>\nNow, our Q value, $Q(s, a)$, is calculated by passing a state into the network. The state is fed through fully connected hidden layers, and the output is a Q-value for each available action.\n<img src=\"assets/q-network.png\" width=550px>\nAs I showed before, we can define our targets for training as $\\hat{Q}(s,a) = r + \\gamma \\max{Q(s', a')}$. Then we update the weights by minimizing $(\\hat{Q}(s,a) - Q(s,a))^2$. \nFor this Cart-Pole game, we have four inputs, one for each value in the state, and two outputs, one for each action. To get $\\hat{Q}$, we'll first choose an action, then simulate the game using that action. This will get us the next state, $s'$, and the reward. With that, we can calculate $\\hat{Q}$ and pass it back into the $Q$ network to run the optimizer and update the weights.\nBelow is my implementation of the Q-network. I used two fully connected layers with ReLU activations. Two layers seem to be good enough; three might be better. Feel free to try it out.",
"class QNetwork:\n def __init__(self, learning_rate=0.01, state_size=4, \n action_size=2, hidden_size=10, \n name='QNetwork'):\n # state inputs to the Q-network\n with tf.variable_scope(name):\n self.inputs_ = tf.placeholder(tf.float32, [None, state_size], name='inputs')\n \n # One hot encode the actions to later choose the Q-value for the action\n self.actions_ = tf.placeholder(tf.int32, [None], name='actions')\n one_hot_actions = tf.one_hot(self.actions_, action_size)\n \n # Target Q values for training\n self.targetQs_ = tf.placeholder(tf.float32, [None], name='target')\n \n # ReLU hidden layers\n self.fc1 = tf.contrib.layers.fully_connected(self.inputs_, hidden_size)\n self.fc2 = tf.contrib.layers.fully_connected(self.fc1, hidden_size)\n\n # Linear output layer\n self.output = tf.contrib.layers.fully_connected(self.fc2, action_size, \n activation_fn=None)\n \n ### Train with loss (targetQ - Q)^2\n # output has length 2, for two actions. This next line chooses\n # one value from output (per row) according to the one-hot encoded actions.\n self.Q = tf.reduce_sum(tf.multiply(self.output, one_hot_actions), axis=1)\n \n self.loss = tf.reduce_mean(tf.square(self.targetQs_ - self.Q))\n self.opt = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)",
"Experience replay\nReinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on. \nHere, we'll create a Memory object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maximum capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those.\nBelow, I've implemented a Memory object. If you're unfamiliar with deque, this is a double-ended queue. You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer.",
"from collections import deque\nclass Memory():\n def __init__(self, max_size = 1000):\n self.buffer = deque(maxlen=max_size)\n \n def add(self, experience):\n self.buffer.append(experience)\n \n def sample(self, batch_size):\n idx = np.random.choice(np.arange(len(self.buffer)), \n size=batch_size, \n replace=False)\n return [self.buffer[ii] for ii in idx]",
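The eviction behavior described above is easy to see in isolation. This is a quick illustrative sketch (my addition, not part of the original notebook): a deque with maxlen=3 silently drops the oldest item when a fourth one is appended.

```python
from collections import deque

# A deque with a maximum length: once full, appending on one side
# evicts the oldest element from the other side.
buffer = deque(maxlen=3)
for experience in ['e1', 'e2', 'e3', 'e4']:
    buffer.append(experience)

# 'e1' was evicted when 'e4' was appended
print(list(buffer))  # -> ['e2', 'e3', 'e4']
```

This is exactly why deque works well as a replay buffer: old experiences age out automatically once the capacity is reached.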
"Exploration - Exploitation\nTo learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\\epsilon$ (epsilon). That is, with some probability $\\epsilon$ the agent will make a random action and with probability $1 - \\epsilon$, the agent will choose an action from $Q(s,a)$. This is called an $\\epsilon$-greedy policy.\nAt first, the agent needs to do a lot of exploring. Later when it has learned more, the agent can favor choosing actions based on what it has learned. This is called exploitation. We'll set it up so the agent is more likely to explore early in training, then more likely to exploit later in training.\nQ-Learning training algorithm\nPutting all this together, we can list out the algorithm we'll use to train the network. We'll train the network in episodes. One episode is one simulation of the game. For this game, the goal is to keep the pole upright for 195 frames. So we can start a new episode once that goal is met. The game ends if the pole tilts over too far, or if the cart moves too far to the left or right. When a game ends, we'll start a new episode. Now, to train the agent:\n\nInitialize the memory $D$\nInitialize the action-value network $Q$ with random weights\nFor episode = 1, $M$ do\nFor $t = 1$, $T$ do\nWith probability $\\epsilon$ select a random action $a_t$, otherwise select $a_t = \\mathrm{argmax}_a Q(s,a)$\nExecute action $a_t$ in simulator and observe reward $r_{t+1}$ and new state $s_{t+1}$\nStore transition $<s_t, a_t, r_{t+1}, s_{t+1}>$ in memory $D$\nSample random mini-batch from $D$: $<s_j, a_j, r_j, s'_j>$\nSet $\\hat{Q}_j = r_j$ if the episode ends at $j+1$, otherwise set $\\hat{Q}_j = r_j + \\gamma \\max_{a'}{Q(s'_j, a')}$\nMake a gradient descent step with loss $(\\hat{Q}_j - Q(s_j, a_j))^2$\n\n\nendfor\nendfor\n\nHyperparameters\nOne of the more difficult aspects of reinforcement learning is the large number of hyperparameters. Not only are we tuning the network, but we're tuning the simulation.",
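The exploration schedule described above can be sketched in a few lines. This is a stdlib-only illustration of exponential epsilon decay (my addition; the default parameter values mirror the explore_start, explore_stop and decay_rate hyperparameters defined below, and the helper name epsilon is my own):

```python
import math

# Epsilon decays exponentially from explore_start toward explore_stop
# as the global step count grows.
def epsilon(step, explore_start=1.0, explore_stop=0.01, decay_rate=0.0001):
    return explore_stop + (explore_start - explore_stop) * math.exp(-decay_rate * step)

print(epsilon(0))      # 1.0 -> pure exploration at the start
print(epsilon(50000))  # close to explore_stop -> mostly exploitation
```

Early in training the agent acts almost entirely at random; as the step counter grows, the probability of a random action shrinks toward the floor explore_stop.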
"train_episodes = 1000 # max number of episodes to learn from\nmax_steps = 200 # max steps in an episode\ngamma = 0.99 # future reward discount\n\n# Exploration parameters\nexplore_start = 1.0 # exploration probability at start\nexplore_stop = 0.01 # minimum exploration probability \ndecay_rate = 0.0001 # exponential decay rate for exploration prob\n\n# Network parameters\nhidden_size = 64 # number of units in each Q-network hidden layer\nlearning_rate = 0.0001 # Q-network learning rate\n\n# Memory parameters\nmemory_size = 10000 # memory capacity\nbatch_size = 20 # experience mini-batch size\npretrain_length = batch_size # number of experiences to pretrain the memory\n\ntf.reset_default_graph()\nmainQN = QNetwork(name='main', hidden_size=hidden_size, learning_rate=learning_rate)",
"Populate the experience memory\nHere I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game.",
"# Initialize the simulation\nenv.reset()\n# Take one random step to get the pole and cart moving\nstate, reward, done, _ = env.step(env.action_space.sample())\n\nmemory = Memory(max_size=memory_size)\n\n# Make a bunch of random actions and store the experiences\nfor ii in range(pretrain_length):\n # Uncomment the line below to watch the simulation\n # env.render()\n\n # Make a random action\n action = env.action_space.sample()\n next_state, reward, done, _ = env.step(action)\n\n if done:\n # The simulation fails so no next state\n next_state = np.zeros(state.shape)\n # Add experience to memory\n memory.add((state, action, reward, next_state))\n \n # Start new episode\n env.reset()\n # Take one random step to get the pole and cart moving\n state, reward, done, _ = env.step(env.action_space.sample())\n else:\n # Add experience to memory\n memory.add((state, action, reward, next_state))\n state = next_state",
"Training\nBelow we'll train our agent. If you want to watch it train, uncomment the env.render() line. This is slow because it's rendering the frames slower than the network can train. But, it's cool to watch the agent get better at the game.",
"# Now train with experiences\nsaver = tf.train.Saver()\nrewards_list = []\nwith tf.Session() as sess:\n # Initialize variables\n sess.run(tf.global_variables_initializer())\n \n step = 0\n for ep in range(1, train_episodes):\n total_reward = 0\n t = 0\n while t < max_steps:\n step += 1\n # Uncomment this next line to watch the training\n # env.render() \n \n # Explore or Exploit\n explore_p = explore_stop + (explore_start - explore_stop)*np.exp(-decay_rate*step) \n if explore_p > np.random.rand():\n # Make a random action\n action = env.action_space.sample()\n else:\n # Get action from Q-network\n feed = {mainQN.inputs_: state.reshape((1, *state.shape))}\n Qs = sess.run(mainQN.output, feed_dict=feed)\n action = np.argmax(Qs)\n \n # Take action, get new state and reward\n next_state, reward, done, _ = env.step(action)\n \n total_reward += reward\n \n if done:\n # the episode ends so no next state\n next_state = np.zeros(state.shape)\n t = max_steps\n \n print('Episode: {}'.format(ep),\n 'Total reward: {}'.format(total_reward),\n 'Training loss: {:.4f}'.format(loss),\n 'Explore P: {:.4f}'.format(explore_p))\n rewards_list.append((ep, total_reward))\n \n # Add experience to memory\n memory.add((state, action, reward, next_state))\n \n # Start new episode\n env.reset()\n # Take one random step to get the pole and cart moving\n state, reward, done, _ = env.step(env.action_space.sample())\n\n else:\n # Add experience to memory\n memory.add((state, action, reward, next_state))\n state = next_state\n t += 1\n \n # Sample mini-batch from memory\n batch = memory.sample(batch_size)\n states = np.array([each[0] for each in batch])\n actions = np.array([each[1] for each in batch])\n rewards = np.array([each[2] for each in batch])\n next_states = np.array([each[3] for each in batch])\n \n # Train network\n target_Qs = sess.run(mainQN.output, feed_dict={mainQN.inputs_: next_states})\n \n episode_ends = (next_states == np.zeros(states[0].shape)).all(axis=1)\n 
target_Qs[episode_ends] = (0, 0)\n \n targets = rewards + gamma * np.max(target_Qs, axis=1)\n\n loss, _ = sess.run([mainQN.loss, mainQN.opt],\n feed_dict={mainQN.inputs_: states,\n mainQN.targetQs_: targets,\n mainQN.actions_: actions})\n \n saver.save(sess, \"checkpoints/cartpole.ckpt\")\n",
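The target computation in the training loop above can be reduced to a one-transition sketch. The helper name q_target below is hypothetical (my addition, not part of the original notebook); it restates the rule the vectorized numpy code implements: for terminal transitions the next-state Q-values are zeroed, so the target collapses to the reward alone.

```python
# gamma matches the future reward discount from the hyperparameters cell
gamma = 0.99

def q_target(reward, next_qs, terminal):
    # No next state at episode end, so max Q(s', a') contributes nothing
    next_max = 0.0 if terminal else max(next_qs)
    return reward + gamma * next_max

non_terminal = q_target(1.0, [0.5, 2.0], terminal=False)  # 1.0 + 0.99 * 2.0
terminal = q_target(1.0, [0.5, 2.0], terminal=True)       # just the reward: 1.0
print(non_terminal, terminal)
```

In the batched version, the `episode_ends` mask plays the role of the `terminal` flag for every transition in the mini-batch at once.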
"Visualizing training\nBelow I'll plot the total rewards for each episode. I took a rolling average too, in blue.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\ndef running_mean(x, N):\n cumsum = np.cumsum(np.insert(x, 0, 0)) \n return (cumsum[N:] - cumsum[:-N]) / N \n\neps, rews = np.array(rewards_list).T\nsmoothed_rews = running_mean(rews, 10)\nplt.plot(eps[-len(smoothed_rews):], smoothed_rews)\nplt.plot(eps, rews, color='grey', alpha=0.3)\nplt.xlabel('Episode')\nplt.ylabel('Total Reward')",
"Testing\nLet's checkout how our trained agent plays the game.",
"test_episodes = 10\ntest_max_steps = 400\nenv.reset()\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n for ep in range(1, test_episodes):\n t = 0\n while t < test_max_steps:\n env.render() \n \n # Get action from Q-network\n feed = {mainQN.inputs_: state.reshape((1, *state.shape))}\n Qs = sess.run(mainQN.output, feed_dict=feed)\n action = np.argmax(Qs)\n \n # Take action, get new state and reward\n next_state, reward, done, _ = env.step(action)\n \n if done:\n t = test_max_steps\n env.reset()\n # Take one random step to get the pole and cart moving\n state, reward, done, _ = env.step(env.action_space.sample())\n\n else:\n state = next_state\n t += 1\n\nenv.close()",
"Extending this\nSo, Cart-Pole is a pretty simple game. However, the same model can be used to train an agent to play something much more complicated like Pong or Space Invaders. Instead of a state like we're using here though, you'd want to use convolutional layers to get the state from the screen images.\n\nI'll leave it as a challenge for you to use deep Q-learning to train an agent to play Atari games. Here's the original paper which will get you started: http://www.davidqiu.com:8888/research/nature14236.pdf."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
feststelltaste/software-analytics
|
notebooks/Structural Test Case Similarity.ipynb
|
gpl-3.0
|
[
"Introduction\nIn big and old legacy systems, tests are often a mess. Especially end-to-end tests with UI testing frameworks like Selenium quickly become a PITA aka unmaintainable. They are running slow and you are confronted with plenty of tests that do partly the same.\nIn this data analysis, I want to illustrate a way that can take us out of this misery. We want to spot test cases that are structurally very similar and thus can be seen as duplicated. We'll calculate the similarity between tests based on their invocations of production code. We can achieve this by treating our software data as observations of linear features. This opens up ways for us to leverage existing machine learning techniques like multidimensional scaling or clustering.\nAs software data under analysis, we'll use the JUnit tests of a Java application for demonstrating the approach. \nNote: The real use case originates from a software system with a massive amount of Selenium tests that uses the Page Object pattern. Each page object represents one HTML site of your web application. So, a page object exposes methods in the programming language you use, enabling interaction with a web site programmatically. In such a scenario, you can infer which tests are triggering the same set of UI components (like buttons). This is a good estimator for test cases that test the same use cases in the application. We can use the results of such an analysis to find repeating test scenarios as well as tests that differ only in a minor nuance of an otherwise similar use case (which could probably be tested with other means like integration or pure UI tests with a mocked backend).\nDataset\nI'm using a dataset that I've created in a previous blog post. It shows which test methods call which code in the main line (\"production\"). \nNote: There are also other ways to get this structural information, e.g. by mining the log file of a test execution (this would add real runtime information as well). But for our demo purposes, the pure structural information between the test code and our production code is sufficient.\nFirst, we read in the data with Pandas.",
"import pandas as pd\n\ninvocations = pd.read_csv(\"datasets/test_code_invocations.csv\", sep=\";\")\ninvocations.head()",
"What we've got here are all names of our test types (test_type) and production types (prod_type) as well as the signatures of the test methods (test_method) and production methods (prod_method). We also have the number of calls from the test methods to the production methods (invocations).\nAnalysis\nOK, let's do some actual work! We want to calculate the structural similarity of test cases to spot possible duplications of tests.\nWhat we have are all test cases (aka test methods) and their calls to the production code base (= the production methods). We can transform this data to get a matrix representation that shows which test method triggers which production method by using Pandas' pivot_table function on our invocations DataFrame.",
"\ninvocation_matrix = invocations.pivot_table(\n index=['test_type', 'test_method'],\n columns=['prod_type', 'prod_method'],\n values='invocations', \n fill_value=0\n)\n# show interesting parts of results\ninvocation_matrix.iloc[4:8,4:6]",
"What we've got now is the information about each invocation (or non-invocation) of production methods by test methods. In mathematical terms, we now have an n-dimensional vector for each test method, where n is the number of tested production methods in our code base! We've just transformed our software data into a representation that we can work on with standard Data Science tools :-D! That means all the problem solving techniques of this area can be reused by us. \nThis is exactly what we do in our further analysis. We reduce our problem to a distance calculation between vectors (we use distance instead of similarity because the visualization techniques used later work with distances). For this, we can use the cosine_distances function of the machine learning library http://scikit-learn.org to calculate a pairwise distance matrix between the test methods.",
"from sklearn.metrics.pairwise import cosine_distances\n\ndistance_matrix = cosine_distances(invocation_matrix)\n# show some interesting parts of results\ndistance_matrix[81:85,60:62]",
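To build intuition for what cosine_distances computes, here is a stdlib-only sketch (my own illustration, not the sklearn implementation): the distance is 1 minus the cosine of the angle between two invocation vectors, so tests that call production methods in identical proportions get distance 0 regardless of how often they call them.

```python
import math

def cosine_distance(u, v):
    # 1 - (u . v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

# Same call pattern, different call counts -> distance ~ 0
print(cosine_distance([2, 0, 1], [4, 0, 2]))  # ~ 0.0
# No production methods in common -> distance 1
print(cosine_distance([1, 0, 0], [0, 1, 0]))  # -> 1.0
```

This is why cosine distance is a good fit here: it measures the shape of a test's call profile, not its absolute invocation counts.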
"From this data, we create a DataFrame to get a better representation. You can also find the complete DataFrame here as an Excel file.",
"distance_df = pd.DataFrame(distance_matrix, index=invocation_matrix.index, columns=invocation_matrix.index)\n# show some interesting parts of results\ndistance_df.iloc[81:85,60:62]\n\ninvocations[\n (invocations.test_method == \"void readRoundtripWorksWithFullData()\") |\n (invocations.test_method == \"void postCommentActuallyCreatesComment()\")]\n\ninvocations[\n (invocations.test_method == \"void readRoundtripWorksWithFullData()\") |\n (invocations.test_method == \"void postTwiceCreatesTwoElements()\")]",
"Visualization\nOur 422x422 distance matrix distance_df isn't a good representation for spotting similarities. Let's break the result down into two dimensions using multidimensional scaling (MDS) from scikit-learn and plot the results with the plotting library matplotlib.\nMDS tries to find a representation of our 422-dimensional data set in two-dimensional space while retaining the distance information between all data points (= test methods). We use the MDS technique with our precomputed dissimilarity matrix distance_df.",
"from sklearn.manifold import MDS\n\nmodel = MDS(dissimilarity='precomputed', random_state=10)\ndistance_df_2d = model.fit_transform(distance_df)\ndistance_df_2d[:5]",
"Next, we plot the now two-dimensional matrix with matplotlib. We colorize all data points according to the name of the test types. We can achieve this by assigning each type a number within 0 and 1 (relative_index) and draw a color from a predefined color spectrum (cm.hsv) for each type. With this, each test class gets its own color. This enables us to quickly reason about test classes that belong together.",
"%matplotlib inline\nfrom matplotlib import cm\nimport matplotlib.pyplot as plt\n\n# labels[0] is already an array of integer codes, so it can be divided directly\nrelative_index = distance_df.index.labels[0] / distance_df.index.labels[0].max()\ncolors = [x for x in cm.hsv(relative_index)]\nplt.figure(figsize=(8,8))\nx = distance_df_2d[:,0]\ny = distance_df_2d[:,1]\nplt.scatter(x, y, c=colors)",
"We now have the visual information about which test methods call similar production code! Let's discuss this plot:\n* Groups of data points (aka clusters) of the same color are the good ones (like the blue colored ones in the lower middle). They show that there is a high cohesion of test methods within test classes that test the corresponding production code.\n* Clusters with mixed colored data points (like in the upper middle) require further analysis. Here, different test classes test similar production code.\nLet's quickly find those spots programmatically by using another machine learning technique: density-based clustering! Here, we use DBSCAN to find data points that are close together. We plot this information into the plot above to visualize dense groups of data.",
"from sklearn.cluster import DBSCAN\n\ndbscan = DBSCAN(eps=0.08, min_samples=10)\nclustering_results = dbscan.fit(distance_df_2d)\nplt.figure(figsize=(8,8))\ncluster_members = clustering_results.components_\n\n# plot all data points\nplt.scatter(x, y, c='k', alpha=0.2)\n\n# plot cluster members\nplt.scatter(\n cluster_members[:,0],\n cluster_members[:,1],\n c='r', s=100, alpha=0.1)\n\ntests = pd.DataFrame(index=distance_df.index)\ntests['cluster'] = clustering_results.labels_\ncohesive_tests = tests[tests.cluster != -1]\ncohesive_tests.head()\n\ntest_measures = cohesive_tests.reset_index().groupby(\"cluster\").test_type.agg({\"nunique\", \"count\"})\ntest_measures\n\ntest_list = cohesive_tests.reset_index().groupby(\"cluster\").test_type.apply(set)\ntest_list\n\ntest_analysis_result = test_measures.join(test_list)\ntest_analysis_result\n\ntest_analysis_result.iloc[0].test_type",
"Conclusion\nWhat a trip! We started from a data set that showed us the invocations of production methods by test methods. We also worked our way through the three mathematical / machine learning techniques cosine_distances, MDS and DBSCAN. Finally, we found out which different test classes test the same production code. The result is a helpful starting point for reorganizing your tests.\nIn general, we saw how we can transform software-specific problems into questions that can be answered by using standard Data Science tooling."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/mlops-on-gcp
|
immersion/kubeflow_pipelines/cicd/labs/lab-03.ipynb
|
apache-2.0
|
[
"CI/CD for a KFP pipeline\nLearning Objectives:\n1. Learn how to create a custom Cloud Build builder to pilot CAIP Pipelines\n1. Learn how to write a Cloud Build config file to build and push all the artifacts for a KFP\n1. Learn how to set up a Cloud Build GitHub trigger to rebuild the KFP\nIn this lab you will walk through authoring a Cloud Build CI/CD workflow that automatically builds and deploys a KFP pipeline. You will also integrate your workflow with GitHub by setting up a trigger that starts the workflow when a new tag is applied to the GitHub repo hosting the pipeline's code.\nConfiguring environment settings\nUpdate the ENDPOINT constant with the settings reflecting your lab environment. \nThe endpoint of the AI Platform Pipelines instance can be found on the AI Platform Pipelines page in the Google Cloud Console.\n\nOpen the SETTINGS for your instance\nUse the value of the host variable in the Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK section of the SETTINGS window.",
"ENDPOINT = '<YOUR_ENDPOINT>'\nPROJECT_ID = !(gcloud config get-value core/project)\nPROJECT_ID = PROJECT_ID[0]",
"Creating the KFP CLI builder\nExercise\nIn the cell below, write a Dockerfile that\n* Uses gcr.io/deeplearning-platform-release/base-cpu as the base image\n* Installs the Python package kfp, version 0.2.5\n* Starts /bin/bash as the entrypoint",
"%%writefile kfp-cli/Dockerfile\n\n# TODO",
"Build the image and push it to your project's Container Registry.",
"IMAGE_NAME='kfp-cli'\nTAG='latest'\nIMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)",
"Exercise\nIn the cell below, use gcloud builds to build the kfp-cli Docker image and push it to the project gcr.io registry.",
"!gcloud builds # COMPLETE THE COMMAND",
"Understanding the Cloud Build workflow.\nExercise\nIn the cell below, you'll complete the cloudbuild.yaml file describing the CI/CD workflow and prescribing how environment-specific settings are abstracted using Cloud Build variables.\nThe CI/CD workflow automates the steps you walked through manually during lab-02-kfp-pipeline:\n1. Builds the trainer image\n1. Builds the base image for custom components\n1. Compiles the pipeline\n1. Uploads the pipeline to the KFP environment\n1. Pushes the trainer and base images to your project's Container Registry\nAlthough the KFP backend supports pipeline versioning, this feature has not yet been enabled through the KFP CLI. As a temporary workaround, in the Cloud Build configuration the value of the TAG_NAME variable is appended to the name of the pipeline. \nThe Cloud Build workflow configuration uses both standard and custom Cloud Build builders. The custom builder encapsulates the KFP CLI.",
"%%writefile cloudbuild.yaml\n\nsteps:\n# Build the trainer image\n- name: 'gcr.io/cloud-builders/docker'\n args: ['build', '-t', 'gcr.io/$PROJECT_ID/$_TRAINER_IMAGE_NAME:$TAG_NAME', '.']\n dir: $_PIPELINE_FOLDER/trainer_image\n \n# TODO: Build the base image for lightweight components\n- name: # TODO\n args: # TODO\n dir: # TODO\n\n# Compile the pipeline\n# TODO: Set the environment variables below for the $_PIPELINE_DSL script\n# HINT: https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values\n- name: 'gcr.io/$PROJECT_ID/kfp-cli'\n args:\n - '-c'\n - |\n dsl-compile --py $_PIPELINE_DSL --output $_PIPELINE_PACKAGE\n env:\n - 'BASE_IMAGE= # TODO\n - 'TRAINER_IMAGE= # TODO\n - 'RUNTIME_VERSION= # TODO\n - 'PYTHON_VERSION= # TODO\n - 'COMPONENT_URL_SEARCH_PREFIX= # TODO\n - 'USE_KFP_SA=$_USE_KFP_SA'\n dir: $_PIPELINE_FOLDER/pipeline\n \n# Upload the pipeline\n# TODO: Use the kfp-cli Cloud Builder and write the command to upload the KFP pipeline \n- name: # TODO\n args:\n - '-c'\n - |\n # TODO\n dir: $_PIPELINE_FOLDER/pipeline\n\n\n# Push the images to Container Registry\n# TODO: List the images to be pushed to the project Docker registry\nimages: # TODO\n",
"Manually triggering CI/CD runs\nYou can manually trigger Cloud Build runs using the gcloud builds submit command.",
"SUBSTITUTIONS=\"\"\"\n_ENDPOINT={},\\\n_TRAINER_IMAGE_NAME=trainer_image,\\\n_BASE_IMAGE_NAME=base_image,\\\nTAG_NAME=test,\\\n_PIPELINE_FOLDER=.,\\\n_PIPELINE_DSL=covertype_training_pipeline.py,\\\n_PIPELINE_PACKAGE=covertype_training_pipeline.yaml,\\\n_PIPELINE_NAME=covertype_continuous_training,\\\n_RUNTIME_VERSION=1.15,\\\n_PYTHON_VERSION=3.7,\\\n_USE_KFP_SA=True,\\\n_COMPONENT_URL_SEARCH_PREFIX=https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/\n\"\"\".format(ENDPOINT).strip()\n\n!gcloud builds submit . --config cloudbuild.yaml --substitutions {SUBSTITUTIONS}",
"Setting up GitHub integration\nExercise\nIn this exercise you integrate your CI/CD workflow with GitHub, using the Cloud Build GitHub App. \nYou will set up a trigger that starts the CI/CD workflow when a new tag is applied to the GitHub repo managing the pipeline source code. You will use a fork of this repo as your source GitHub repository.\nStep 1: Create a fork of this repo\nFollow the GitHub documentation to fork this repo.\nStep 2: Create a Cloud Build trigger\nConnect the fork you created in the previous step to your Google Cloud project and create a trigger following the steps in the Creating GitHub app trigger article. Use the following values on the Edit trigger form:\n|Field|Value|\n|-----|-----|\n|Name|[YOUR TRIGGER NAME]|\n|Description|[YOUR TRIGGER DESCRIPTION]|\n|Event| Tag|\n|Source| [YOUR FORK]|\n|Tag (regex)|.*|\n|Build Configuration|Cloud Build configuration file (yaml or json)|\n|Cloud Build configuration file location| ./immersion/kubeflow_pipelines/cicd/labs/cloudbuild.yaml|\nUse the following values for the substitution variables:\n|Variable|Value|\n|--------|-----|\n|_BASE_IMAGE_NAME|base_image|\n|_COMPONENT_URL_SEARCH_PREFIX|https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/|\n|_ENDPOINT|[Your inverting proxy host]|\n|_PIPELINE_DSL|covertype_training_pipeline.py|\n|_PIPELINE_FOLDER|immersion/kubeflow_pipelines/cicd/labs|\n|_PIPELINE_NAME|covertype_training_deployment|\n|_PIPELINE_PACKAGE|covertype_training_pipeline.yaml|\n|_PYTHON_VERSION|3.7|\n|_RUNTIME_VERSION|1.15|\n|_TRAINER_IMAGE_NAME|trainer_image|\n|_USE_KFP_SA|False|\nTrigger the build\nTo start an automated build, create a new release of the repo in GitHub. Alternatively, you can start the build by applying a tag using git. 
\ngit tag [TAG NAME]\ngit push origin --tags\n<font size=-1>Licensed under the Apache License, Version 2.0 (the \\\"License\\\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \\\"AS IS\\\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.</font>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ChadFulton/statsmodels
|
examples/notebooks/discrete_choice_overview.ipynb
|
bsd-3-clause
|
[
"Discrete Choice Models Overview",
"from __future__ import print_function\nimport numpy as np\nimport statsmodels.api as sm",
"Data\nLoad data from Spector and Mazzeo (1980). Examples follow Greene's Econometric Analysis Ch. 21 (5th Edition).",
"spector_data = sm.datasets.spector.load()\nspector_data.exog = sm.add_constant(spector_data.exog, prepend=False)",
"Inspect the data:",
"print(spector_data.exog[:5,:])\nprint(spector_data.endog[:5])",
"Linear Probability Model (OLS)",
"lpm_mod = sm.OLS(spector_data.endog, spector_data.exog)\nlpm_res = lpm_mod.fit()\nprint('Parameters: ', lpm_res.params[:-1])",
"Logit Model",
"logit_mod = sm.Logit(spector_data.endog, spector_data.exog)\nlogit_res = logit_mod.fit(disp=0)\nprint('Parameters: ', logit_res.params)",
"Marginal Effects",
"margeff = logit_res.get_margeff()\nprint(margeff.summary())",
"As in all the discrete data models presented below, we can print a nice summary of results:",
"print(logit_res.summary())",
"Probit Model",
"probit_mod = sm.Probit(spector_data.endog, spector_data.exog)\nprobit_res = probit_mod.fit()\nprobit_margeff = probit_res.get_margeff()\nprint('Parameters: ', probit_res.params)\nprint('Marginal effects: ')\nprint(probit_margeff.summary())",
"Multinomial Logit\nLoad data from the American National Election Studies:",
"anes_data = sm.datasets.anes96.load()\nanes_exog = anes_data.exog\nanes_exog = sm.add_constant(anes_exog, prepend=False)",
"Inspect the data:",
"print(anes_data.exog[:5,:])\nprint(anes_data.endog[:5])",
"Fit MNL model:",
"mlogit_mod = sm.MNLogit(anes_data.endog, anes_exog)\nmlogit_res = mlogit_mod.fit()\nprint(mlogit_res.params)",
"Poisson\nLoad the Rand data. Note that this example is similar to Cameron and Trivedi's Microeconometrics Table 20.5, but it is slightly different because of minor changes in the data.",
"rand_data = sm.datasets.randhie.load()\nrand_exog = rand_data.exog.view(float).reshape(len(rand_data.exog), -1)\nrand_exog = sm.add_constant(rand_exog, prepend=False)",
"Fit Poisson model:",
"poisson_mod = sm.Poisson(rand_data.endog, rand_exog)\npoisson_res = poisson_mod.fit(method=\"newton\")\nprint(poisson_res.summary())",
"Negative Binomial\nThe negative binomial model gives slightly different results.",
"mod_nbin = sm.NegativeBinomial(rand_data.endog, rand_exog)\nres_nbin = mod_nbin.fit(disp=False)\nprint(res_nbin.summary())",
"Alternative solvers\nThe default method for fitting discrete data MLE models is Newton-Raphson. You can use other solvers by using the method argument:",
"mlogit_res = mlogit_mod.fit(method='bfgs', maxiter=100)\nprint(mlogit_res.summary())"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
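The `sm.Logit(...).fit()` call in the notebook above maximizes the binomial log-likelihood (by Newton-Raphson by default). A minimal pure-Python sketch of the same idea on tiny made-up data, using plain gradient ascent instead of Newton-Raphson:

```python
import math

# Tiny made-up dataset: x = hours studied, y = passed (1) or not (0).
# The classes overlap slightly so the maximum-likelihood estimate is finite.
xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
ys = [0, 0, 0, 1, 0, 1, 1, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Gradient ascent on the log-likelihood sum(y*log(p) + (1-y)*log(1-p))
# with p = sigmoid(b0 + b1*x); statsmodels uses Newton-Raphson instead.
b0, b1, lr = 0.0, 0.0, 0.05
for _ in range(10000):
    g0 = sum(y - sigmoid(b0 + b1 * x) for x, y in zip(xs, ys))
    g1 = sum((y - sigmoid(b0 + b1 * x)) * x for x, y in zip(xs, ys))
    b0 += lr * g0
    b1 += lr * g1

# More study hours should raise the predicted pass probability.
p_low = sigmoid(b0 + b1 * 0.5)
p_high = sigmoid(b0 + b1 * 4.0)
```

With statsmodels available, `sm.Logit(ys, sm.add_constant(xs)).fit()` should recover essentially the same two coefficients, plus the standard errors and the summary table shown in the notebook.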
tensorflow/docs
|
site/en/r1/tutorials/_index.ipynb
|
apache-2.0
|
[
"Copyright 2018 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Get Started with TensorFlow 1.x\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/tutorials/_index.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/_index.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\n\nNote: This is an archived TF1 notebook. These are configured\nto run in TF2's \ncompatibility mode\nbut will run in TF1 as well. To use TF1 in Colab, use the\n%tensorflow_version 1.x\nmagic.\n\nThis is a Google Colaboratory notebook file. Python programs are run directly in the browser—a great way to learn and use TensorFlow. To run the Colab notebook:\n\nConnect to a Python runtime: At the top-right of the menu bar, select CONNECT.\nRun all the notebook code cells: Select Runtime > Run all.\n\nFor more examples and guides (including details for this program), see Get Started with TensorFlow.\nLet's get started, import the TensorFlow library into your program:",
"import tensorflow.compat.v1 as tf",
"Load and prepare the MNIST dataset. Convert the samples from integers to floating-point numbers:",
"mnist = tf.keras.datasets.mnist\n\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\nx_train, x_test = x_train / 255.0, x_test / 255.0",
"Build the tf.keras model by stacking layers. Select an optimizer and loss function used for training:",
"model = tf.keras.models.Sequential([\n tf.keras.layers.Flatten(input_shape=(28, 28)),\n tf.keras.layers.Dense(512, activation=tf.nn.relu),\n tf.keras.layers.Dropout(0.2),\n tf.keras.layers.Dense(10, activation=tf.nn.softmax)\n])\n\nmodel.compile(optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])",
"Train and evaluate the model:",
"model.fit(x_train, y_train, epochs=5)\n\nmodel.evaluate(x_test, y_test, verbose=2)",
"You’ve now trained an image classifier with ~98% accuracy on this dataset. See Get Started with TensorFlow to learn more."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
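The `sparse_categorical_crossentropy` loss compiled into the model above is the negative log of the probability the softmax output assigns to the true class label. A minimal pure-Python sketch with made-up logits (three classes instead of MNIST's ten):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sparse_categorical_crossentropy(logits, true_class):
    # Loss is -log of the probability assigned to the true class;
    # "sparse" means the label is an integer index, not a one-hot vector.
    probs = softmax(logits)
    return -math.log(probs[true_class])

# Made-up 3-class example: the model is fairly confident in class 2.
logits = [0.1, 0.2, 2.0]
loss_correct = sparse_categorical_crossentropy(logits, 2)
loss_wrong = sparse_categorical_crossentropy(logits, 0)
```

A confident correct prediction yields a small loss; assigning low probability to the true class yields a large one, which is what drives the gradient updates during `model.fit`.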
AEW2015/PYNQ_PR_Overlay
|
Pynq-Z1/notebooks/Video_PR/Generic_Blur.ipynb
|
bsd-3-clause
|
[
"Don't forget to delete the hdmi_out and hdmi_in when finished\nGeneric Kernel Filter Notebook\nIn this notebook, we provide a user interface that allows the user to generate various image filters by controlling the values in a 3x3 kernel matrix, the dividing factor and the bias value.\nThere are some examples at the following webpage: \nhttp://lodev.org/cgtutor/filtering.html#Emboss\nAlternatively, you can search online for kernel image processing.\nImport libraries and download the base bitstream",
"from pynq.drivers.video import HDMI\nfrom pynq import Bitstream_Part\nfrom pynq.board import Register\nfrom pynq import Overlay\nOverlay(\"demo.bit\").download()",
"Start streaming from device connected to HDMI input on the PYNQ Board",
"hdmi_in = HDMI('in')\nhdmi_out = HDMI('out', frame_list=hdmi_in.frame_list)\nhdmi_out.mode(3)\nhdmi_out.start()\nhdmi_in.start()",
"Create the user interface\nIn this section, we create 11 registers which hold the values of the kernel matrix, the dividing factor and the bias value. We also create sliders that allow the user to control the values stored in the registers.",
"R0 =Register(0)\nR1 =Register(1)\nR2 =Register(2)\nR3 =Register(3)\nR4 =Register(4)\nR5 =Register(5)\nR6 =Register(6)\nR7 =Register(7)\nR8 =Register(8)\nR9 =Register(9)\nR10 =Register(10)\nimport ipywidgets as widgets\n\n\nR0_s = widgets.IntSlider(\n value=1,\n min=-128,\n max=127,\n step=1,\n description='M_0:',\n disabled=False,\n continuous_update=True,\n orientation='vertical',\n readout=True,\n readout_format='i',\n slider_color='black'\n)\nR1_s = widgets.IntSlider(\n value=1,\n min=-128,\n max=127,\n step=1,\n description='M_1:',\n disabled=False,\n continuous_update=True,\n orientation='vertical',\n readout=True,\n readout_format='i',\n slider_color='black'\n)\nR2_s = widgets.IntSlider(\n value=1,\n min=-128,\n max=127,\n step=1,\n description='M_2:',\n disabled=False,\n continuous_update=True,\n orientation='vertical',\n readout=True,\n readout_format='i',\n slider_color='black'\n)\nR3_s = widgets.IntSlider(\n value=1,\n min=-128,\n max=127,\n step=1,\n description='M_3:',\n disabled=False,\n continuous_update=True,\n orientation='vertical',\n readout=True,\n readout_format='i',\n slider_color='black'\n)\n\nR4_s = widgets.IntSlider(\n value=1,\n min=-128,\n max=127,\n step=1,\n description='M_4:',\n disabled=False,\n continuous_update=True,\n orientation='vertical',\n readout=True,\n readout_format='i',\n slider_color='black'\n)\n\nR5_s = widgets.IntSlider(\n value=1,\n min=-128,\n max=127,\n step=1,\n description='M_5:',\n disabled=False,\n continuous_update=True,\n orientation='vertical',\n readout=True,\n readout_format='i',\n slider_color='black'\n)\n\nR6_s = widgets.IntSlider(\n value=1,\n min=-128,\n max=127,\n step=1,\n description='M_6:',\n disabled=False,\n continuous_update=True,\n orientation='vertical',\n readout=True,\n readout_format='i',\n slider_color='black'\n)\n\nR7_s = widgets.IntSlider(\n value=1,\n min=-128,\n max=127,\n step=1,\n description='M_7:',\n disabled=False,\n continuous_update=True,\n orientation='vertical',\n 
readout=True,\n readout_format='i',\n slider_color='black'\n)\n\nR8_s = widgets.IntSlider(\n value=1,\n min=-128,\n max=127,\n step=1,\n description='M_8:',\n disabled=False,\n continuous_update=True,\n orientation='vertical',\n readout=True,\n readout_format='i',\n slider_color='black'\n)\n\nR9_s = widgets.IntSlider(\n value=9,\n min=1,\n max=127,\n step=1,\n description='Factor:',\n disabled=False,\n continuous_update=True,\n orientation='vertical',\n readout=True,\n readout_format='i',\n slider_color='black'\n)\n\nR10_s = widgets.IntSlider(\n value=0,\n min=0,\n max=255,\n step=1,\n description='Bias:',\n disabled=False,\n continuous_update=True,\n orientation='vertical',\n readout=True,\n readout_format='i',\n slider_color='black'\n)\n\ndef update_r0(*args):\n R0.write(R0_s.value)\nR0_s.observe(update_r0, 'value')\ndef update_r1(*args):\n R1.write(R1_s.value)\nR1_s.observe(update_r1, 'value')\ndef update_r2(*args):\n R2.write(R2_s.value)\nR2_s.observe(update_r2, 'value')\ndef update_r3(*args):\n R3.write(R3_s.value)\nR3_s.observe(update_r3, 'value')\ndef update_r4(*args):\n R4.write(R4_s.value)\nR4_s.observe(update_r4, 'value')\ndef update_r5(*args):\n R5.write(R5_s.value)\nR5_s.observe(update_r5, 'value')\ndef update_r6(*args):\n R6.write(R6_s.value)\nR6_s.observe(update_r6, 'value')\ndef update_r7(*args):\n R7.write(R7_s.value)\nR7_s.observe(update_r7, 'value')\ndef update_r8(*args):\n R8.write(R8_s.value)\nR8_s.observe(update_r8, 'value')\ndef update_r9(*args):\n R9.write(R9_s.value)\nR9_s.observe(update_r9, 'value')\ndef update_r10(*args):\n R10.write(R10_s.value)\nR10_s.observe(update_r10, 'value')",
"Continue creating the user interface",
"from IPython.display import clear_output\nfrom ipywidgets import Button, HBox, VBox\n\nwords = ['HDMI Reset', 'Kernal Filter']\nitems = [Button(description=w) for w in words]\n\n\ndef on_hdmi_clicked(b):\n hdmi_out.stop()\n hdmi_in.stop()\n hdmi_out.start()\n hdmi_in.start()\ndef on_Kernal_clicked(b):\n Bitstream_Part(\"Generic_Filter_p.bit\").download()\n R3_s.disabled = False;\n R0.write(1)\n R1.write(1)\n R2.write(1)\n R3.write(1)\n R4.write(1)\n R5.write(1)\n R6.write(1)\n R7.write(1)\n R8.write(1)\n R9.write(9)\n R10.write(0)\n R0_s.description='M_0'\n R0_s.value = 1\n R0_s.max = 127\n R1_s.description='M_1'\n R1_s.value = 1\n R1_s.max = 127\n R2_s.description='M_2'\n R2_s.value = 1\n R2_s.max = 127\n R3_s.description='M_3'\n R3_s.value = 1\n R3_s.max = 127\n R4_s.description='M_4'\n R4_s.value = 1\n R4_s.max = 127\n R5_s.description='M_5'\n R5_s.value = 1\n R5_s.max = 127\n R6_s.description='M_6'\n R6_s.value = 1\n R6_s.max = 127\n R7_s.description='M_7'\n R7_s.value = 1\n R7_s.max = 127\n R8_s.description='M_8'\n R8_s.value = 1\n R8_s.max = 127\n R9_s.description='Factor'\n R9_s.value = 9\n R9_s.max = 127\n R10_s.description='Bias'\n R10_s.value = 0\n R10_s.max = 255\n\nitems[0].on_click(on_hdmi_clicked)\nitems[1].on_click(on_Kernal_clicked)",
"User interface instructions\nAt this point, the streaming may not work properly. Please run the code section below. Afterwards, press the 'HDMI Reset' button to reset the HDMI input and output. The streaming should now work properly.\nTo start applying a filter to the stream, press the 'Kernal Filter' button. The filter defaults to a Box Blur filter. \nAfterwards, users can change to any kernel filter they want by changing the values on the sliders.\nThe filter is given by the equation below:\n[M0 M1 M2]\n[M3 M4 M5]\n[M6 M7 M8] / Factor + Bias",
"HBox([VBox([items[0], items[1]]),R0_s,R1_s,R2_s,R3_s,R4_s,R5_s,R6_s,R7_s,R8_s,R9_s,R10_s])\n\nhdmi_in.stop()\nhdmi_out.stop()\ndel hdmi_in\ndel hdmi_out"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
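The kernel filter in the PYNQ notebook above computes, for each pixel, the 3x3 weighted sum of its neighborhood, divided by Factor, plus Bias. A pure-Python software sketch of that arithmetic on a tiny made-up grayscale image (the clamping to the 0-255 pixel range and the untouched edge pixels are assumptions of this sketch, not details taken from the bitstream):

```python
def apply_kernel(img, kernel, factor=1, bias=0):
    """Apply a 3x3 kernel to the interior pixels of a grayscale image.

    img is a list of rows of 0-255 values; kernel is a flat list
    [M0..M8] matching the matrix layout in the notebook above.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # edge pixels are left unchanged here
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky * 3 + kx] * img[y + ky - 1][x + kx - 1]
            val = acc // factor + bias
            out[y][x] = max(0, min(255, val))  # clamp to the pixel range
    return out

# Box blur: all ones, factor 9 (the notebook's default slider values).
img = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
blurred = apply_kernel(img, [1] * 9, factor=9)
```

With the default box-blur values the center pixel becomes the average of its 3x3 neighborhood; sliding the M, Factor and Bias widgets corresponds to passing different arguments here.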
davicsilva/dsintensive
|
notebooks/eda-miniprojects/hospital_readmit/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_3-checkpoint.ipynb
|
apache-2.0
|
[
"Hospital Readmissions Data Analysis and Recommendations for Reduction\nBackground\nIn October 2012, the US government's Center for Medicare and Medicaid Services (CMS) began reducing Medicare payments for Inpatient Prospective Payment System hospitals with excess readmissions. Excess readmissions are measured by a ratio, by dividing a hospital’s number of “predicted” 30-day readmissions for heart attack, heart failure, and pneumonia by the number that would be “expected,” based on an average hospital with similar patients. A ratio greater than 1 indicates excess readmissions.\nExercise Directions\nIn this exercise, you will:\n+ critique a preliminary analysis of readmissions data and recommendations (provided below) for reducing the readmissions rate\n+ construct a statistically sound analysis and make recommendations of your own \nMore instructions provided below. Include your work in this notebook and submit to your Github account. \nResources\n\nData source: https://data.medicare.gov/Hospital-Compare/Hospital-Readmission-Reduction/9n3s-kdb3\nMore information: http://www.cms.gov/Medicare/medicare-fee-for-service-payment/acuteinpatientPPS/readmissions-reduction-program.html\nMarkdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet",
"%matplotlib inline\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport bokeh.plotting as bkp\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\n\n# read in readmissions data provided\nhospital_read_df = pd.read_csv('data/cms_hospital_readmissions.csv')",
"Preliminary Analysis",
"# deal with missing and inconvenient portions of data \nclean_hospital_read_df = hospital_read_df[hospital_read_df['Number of Discharges'] != 'Not Available']\nclean_hospital_read_df.loc[:, 'Number of Discharges'] = clean_hospital_read_df['Number of Discharges'].astype(int)\nclean_hospital_read_df = clean_hospital_read_df.sort_values('Number of Discharges')\n\n# generate a scatterplot for number of discharges vs. excess rate of readmissions\n# lists work better with matplotlib scatterplot function\nx = [a for a in clean_hospital_read_df['Number of Discharges'][81:-3]]\ny = list(clean_hospital_read_df['Excess Readmission Ratio'][81:-3])\n\nfig, ax = plt.subplots(figsize=(8,5))\nax.scatter(x, y,alpha=0.2)\n\nax.fill_between([0,350], 1.15, 2, facecolor='red', alpha = .15, interpolate=True)\nax.fill_between([800,2500], .5, .95, facecolor='green', alpha = .15, interpolate=True)\n\nax.set_xlim([0, max(x)])\nax.set_xlabel('Number of discharges', fontsize=12)\nax.set_ylabel('Excess rate of readmissions', fontsize=12)\nax.set_title('Scatterplot of number of discharges vs. excess rate of readmissions', fontsize=14)\n\nax.grid(True)\nfig.tight_layout()",
"Preliminary Report\nRead the following results/report. While you are reading it, think about if the conclusions are correct, incorrect, misleading or unfounded. Think about what you would change or what additional analyses you would perform.\nA. Initial observations based on the plot above\n+ Overall, rate of readmissions is trending down with increasing number of discharges\n+ With lower number of discharges, there is a greater incidence of excess rate of readmissions (area shaded red)\n+ With higher number of discharges, there is a greater incidence of lower rates of readmissions (area shaded green) \nB. Statistics\n+ In hospitals/facilities with number of discharges < 100, mean excess readmission rate is 1.023 and 63% have excess readmission rate greater than 1 \n+ In hospitals/facilities with number of discharges > 1000, mean excess readmission rate is 0.978 and 44% have excess readmission rate greater than 1 \nC. Conclusions\n+ There is a significant correlation between hospital capacity (number of discharges) and readmission rates. \n+ Smaller hospitals/facilities may be lacking necessary resources to ensure quality care and prevent complications that lead to readmissions.\nD. Regulatory policy recommendations\n+ Hospitals/facilties with small capacity (< 300) should be required to demonstrate upgraded resource allocation for quality care to continue operation.\n+ Directives and incentives should be provided for consolidation of hospitals and facilities to have a smaller number of them with higher capacity and number of discharges.\n\nExercise\nInclude your work on the following in this notebook and submit to your Github account. \nA. Do you agree with the above analysis and recommendations? Why or why not?\nB. 
Provide support for your arguments and your own recommendations with a statistically sound analysis:\n\nSet up an appropriate hypothesis test.\nCompute and report the observed significance value (or p-value).\nReport statistical significance for $\\alpha$ = .01. \nDiscuss statistical significance and practical significance. Do they differ here? How does this change your recommendation to the client?\nLook at the scatterplot above. \nWhat are the advantages and disadvantages of using this plot to convey information?\nConstruct another plot that conveys the same information in a more direct manner.\n\n\n\nYou can compose in notebook cells using Markdown: \n+ In the control panel at the top, choose Cell > Cell Type > Markdown\n+ Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet",
"# Your turn",
"The dataset",
"clean_hospital_read_df.tail()",
"We are interested in the hospitals with 'Excess Readmission Ratio' > 0\n\nAbout hospitals with 'Excess Readmission Ratio' > 0",
"tot_hosp_valid = clean_hospital_read_df.loc[clean_hospital_read_df['Excess Readmission Ratio'] > 0]['Hospital Name'].count()\nexcess_read_ratio_max = clean_hospital_read_df['Excess Readmission Ratio'].max()\nexcess_read_ratio_min = clean_hospital_read_df['Excess Readmission Ratio'].min()\nexcess_read_ratio_mean = clean_hospital_read_df['Excess Readmission Ratio'].mean()\nranges = [0, 1, 2]\nh1, h2 = clean_hospital_read_df['Excess Readmission Ratio'].groupby(pd.cut(clean_hospital_read_df['Excess Readmission Ratio'], ranges)).count()\n\nprint(\"+---------------------------------------------------|---------------+\")\nprint(\"| Total hospitals with excess readmission ratio > 0 | %s |\" %(format(tot_hosp_valid, ',')))\nprint(\"|---------------------------------------------------|---------------|\")\nprint(\"| Excess readimission ratio: max | %.2f |\" % excess_read_ratio_max)\nprint(\"| min | %.2f |\" % excess_read_ratio_min)\nprint(\"| mean | %.2f |\" % excess_read_ratio_mean)\nprint(\"|---------------------------------------------------|---------------|\")\nprint(\"| Hospitals with excess readmission ratio <= 1 | %s (%.2f%%)|\" %(format(h1, ','), (100*(h1/(h1+h2)))))\nprint(\"| Hospitals with excess readmission ratio > 1 & <=2 | %s (%.2f%%)|\" %(format(h2, ','), (100*(h2/(h1+h2)))))\nprint(\"+---------------------------------------------------|---------------+\")",
"'Excess Readmission Ratio' in hospitals with number of discharges < 100",
"tot_h100 = clean_hospital_read_df['Excess Readmission Ratio'].loc[clean_hospital_read_df['Number of Discharges'] < 100].count()\nmed_h100 = clean_hospital_read_df['Excess Readmission Ratio'].loc[clean_hospital_read_df['Number of Discharges'] < 100].mean()\ntot_h100_exc_gt_one = clean_hospital_read_df['Hospital Name'].loc[(clean_hospital_read_df['Number of Discharges'] < 100) & (clean_hospital_read_df['Excess Readmission Ratio'] > 1)].count()\nprint(\"+---------------------------------------------------+--------------+\")\nprint(\"| Hospitals with discharges < 100 Total | %s |\" %(format(tot_h100,',')))\nprint(\"| Mean | %.3f |\" %med_h100)\nprint(\"| Have excess readmission rate > 1 | %.2f%% |\" %(100*(tot_h100_exc_gt_one/tot_h100)))\nprint(\"+---------------------------------------------------+--------------+\")",
"'Excess Readmission Ratio' in hospitals with number of discharges > 1000",
"tot_h1000 = clean_hospital_read_df['Excess Readmission Ratio'].loc[clean_hospital_read_df['Number of Discharges'] > 1000].count()\nmed_h1000 = clean_hospital_read_df['Excess Readmission Ratio'].loc[clean_hospital_read_df['Number of Discharges'] > 1000].mean()\ntot_h1000_exc_gt_one = clean_hospital_read_df['Hospital Name'].loc[(clean_hospital_read_df['Number of Discharges'] > 1000) & (clean_hospital_read_df['Excess Readmission Ratio'] > 1)].count()\nprint(\"+---------------------------------------------------+--------------+\")\nprint(\"| Hospitals with discharges > 1000 Total | %d |\" %tot_h1000)\nprint(\"| Mean | %.3f |\" %med_h1000)\nprint(\"| Have excess readmission rate > 1 | %.2f%% |\" %(100*(tot_h1000_exc_gt_one/tot_h1000)))\nprint(\"+---------------------------------------------------+--------------+\")",
"How to find out whether there is a correlation between hospital capacity (number of discharges) and readmission rates.",
"from scipy.stats import pearsonr\n\ndf_temp = clean_hospital_read_df[['Number of Discharges', 'Excess Readmission Ratio']].dropna()\n\ndf_temp.head()",
"Pearson correlation\n\"The Pearson correlation coefficient measures the linear relationship between two datasets. ... Like other correlation coefficients, this one varies between -1 and +1 with 0 implying no correlation. Correlations of -1 or +1 imply an exact linear relationship. Positive correlations imply that as x increases, so does y. Negative correlations imply that as x increases, y decreases.\".<br> Source: https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.pearsonr.html \nNumber of Discharges versus Excess Readmission Ratio\nA - All hospitals",
"pearson, pvalue = pearsonr(df_temp[['Number of Discharges']], df_temp[['Excess Readmission Ratio']])\npvalue1 = pvalue\npearson1 = pearson\nprint(\"+----------------------------------------------------------+\")\nprint(\"| 'Number of Discharges' versus 'Excess Readmission Ratio' |\")\nprint(\"| for all hospitals: |\") \nprint(\"|----------------------------------------------------------|\")\nprint(\"| Pearson Correlation = %.4f |\" %pearson[[0][0]])\nprint(\"| p-value = %.30f |\" %pvalue[[0][0]])\nprint(\"+----------------------------------------------------------+\")\n\nsns.set(color_codes=True)\nsns.regplot(x=\"Number of Discharges\", y=\"Excess Readmission Ratio\", data=df_temp)",
"A - All hospitals: Zooming in\nCalculating Pearson correlation and p-value without outlier hospitals. Let's verify only the hospitals with:\n\nExcess Readmission Ratio < 1.6\nNumber of Discharges < 2,000",
"df_temp_a = df_temp.loc[(df_temp['Number of Discharges'] < 2000) & (df_temp['Excess Readmission Ratio'] < 1.6)]\npearson, pvalue = pearsonr(df_temp_a[['Number of Discharges']], df_temp_a[['Excess Readmission Ratio']])\nprint(\"+----------------------------------------------------------+\")\nprint(\"| 'Number of Discharges' versus 'Excess Readmission Ratio' |\")\nprint(\"| for all hospitals with: |\") \nprint(\"| a) Excess Readmission Ratio < 1.6 |\") \nprint(\"| b) Number of Discharges < 2,000 |\")\nprint(\"|----------------------------------------------------------|\")\nprint(\"| Pearson Correlation = %.4f |\" %pearson[[0][0]])\nprint(\"| p-value = %.30f |\" %pvalue[[0][0]])\nprint(\"+----------------------------------------------------------+\")\n\nsns.set(color_codes=True)\nsns.regplot(x=\"Number of Discharges\", y=\"Excess Readmission Ratio\", data=df_temp_a)",
"B - Hospitals with discharges < 100",
"df_temp_dischg100 = df_temp.loc[df_temp['Number of Discharges'] < 100]\npearson,pvalue = pearsonr(df_temp_dischg100[['Number of Discharges']], df_temp_dischg100[['Excess Readmission Ratio']])\npvalue2 = pvalue\npearson2 = pearson\nprint(\"+----------------------------------------------------------+\")\nprint(\"| 'Number of Discharges' versus 'Excess Readmission Ratio' |\")\nprint(\"| for hospitals with discharges < 100: |\")\nprint(\"|----------------------------------------------------------|\")\nprint(\"| Pearson Correlation = %.4f |\" %pearson[[0][0]])\nprint(\"| p-value = %.20f |\" %pvalue[[0][0]])\nprint(\"+----------------------------------------------------------+\")\n\ndf_dischg_lt100 = clean_hospital_read_df[['Number of Discharges', 'Excess Readmission Ratio']].loc[(clean_hospital_read_df['Number of Discharges'] < 100) & (clean_hospital_read_df['Excess Readmission Ratio'] > 0)]\nsns.regplot(x=\"Number of Discharges\", y=\"Excess Readmission Ratio\", data=df_dischg_lt100)",
"B - Hospitals with discharges < 100: Zooming in\nCalculating Pearson correlation and p-value without outlier hospitals. Let's verify only the hospitals with:\n\nExcess Readmission Ratio < 1.2\nNumber of Discharges > 40 (this dataset already has Number of Discharges < 100)",
"df_temp_b = df_temp_dischg100.loc[(df_temp_dischg100['Number of Discharges'] > 40) & (df_temp_dischg100['Excess Readmission Ratio'] < 1.2)]\npearson, pvalue = pearsonr(df_temp_b[['Number of Discharges']], df_temp_b[['Excess Readmission Ratio']])\nprint(\"+----------------------------------------------------------+\")\nprint(\"| 'Number of Discharges' versus 'Excess Readmission Ratio' |\")\nprint(\"| for all hospitals with: |\") \nprint(\"| a) Excess Readmission Ratio < 1.2 |\") \nprint(\"| b) Number of Discharges > 40 |\")\nprint(\"|----------------------------------------------------------|\")\nprint(\"| Pearson Correlation = %.4f |\" %pearson[[0][0]])\nprint(\"| p-value = %.30f |\" %pvalue[[0][0]])\nprint(\"+----------------------------------------------------------+\")\n\nsns.set(color_codes=True)\nsns.regplot(x=\"Number of Discharges\", y=\"Excess Readmission Ratio\", data=df_temp_b)",
"C - Hospitals with discharges > 1,000",
"df_temp_dischg1000 = df_temp.loc[df_temp['Number of Discharges'] > 1000]\npearson, pvalue = pearsonr(df_temp_dischg1000[['Number of Discharges']], df_temp_dischg1000[['Excess Readmission Ratio']])\npvalue3 = pvalue\npearson3 = pearson\nprint(\"+----------------------------------------------------------+\")\nprint(\"| 'Number of Discharges' versus 'Excess Readmission Ratio' |\")\nprint(\"| for hospitals with discharges > 1000: |\") \nprint(\"|----------------------------------------------------------|\")\nprint(\"| Pearson Correlation = %.4f |\" %pearson[[0][0]])\nprint(\"| p-value = %.20f |\" %pvalue[[0][0]])\nprint(\"+----------------------------------------------------------+\")\n\ndf_dischg_gt1000 = clean_hospital_read_df[['Number of Discharges', 'Excess Readmission Ratio']].loc[(clean_hospital_read_df['Number of Discharges'] > 1000) & (clean_hospital_read_df['Excess Readmission Ratio'] > 0)]\nsns.regplot(x=\"Number of Discharges\", y=\"Excess Readmission Ratio\", data=df_dischg_gt1000)",
"C - Hospitals with discharges > 1,000: Zooming in\nCalculating Pearson correlation and p-value without outlier hospitals. Let's verify only the hospitals with:\n\nExcess Readmission Ratio < 1.2\nNumber of Discharges < 2,000 (this dataset already has Number of Discharges > 1,000)",
"df_temp_c = df_dischg_gt1000.loc[(df_dischg_gt1000['Number of Discharges'] < 2000) & (df_dischg_gt1000['Excess Readmission Ratio'] < 1.2)]\npearson, pvalue = pearsonr(df_temp_c[['Number of Discharges']], df_temp_c[['Excess Readmission Ratio']])\nprint(\"+----------------------------------------------------------+\")\nprint(\"| 'Number of Discharges' versus 'Excess Readmission Ratio' |\")\nprint(\"| for all hospitals with: |\") \nprint(\"| a) Excess Readmission Ratio < 1.2 |\") \nprint(\"| b) Number of Discharges < 2,000 |\")\nprint(\"|----------------------------------------------------------|\")\nprint(\"| Pearson Correlation = %.4f |\" %pearson[[0][0]])\nprint(\"| p-value = %.30f |\" %pvalue[[0][0]])\nprint(\"+----------------------------------------------------------+\")\n\nsns.regplot(x=\"Number of Discharges\", y=\"Excess Readmission Ratio\", data=df_temp_c)",
"D - What about hospitals with discharges between 100 and 1,000 (inclusive)?",
"df_temp2 = df_temp.loc[(df_temp['Number of Discharges'] >= 100) & (df_temp['Number of Discharges'] <= 1000)]\npearson, pvalue = pearsonr(df_temp2[['Number of Discharges']], df_temp2[['Excess Readmission Ratio']])\npvalue4 = pvalue\nprint(\"+----------------------------------------------------------+\")\nprint(\"| 'Number of Discharges' versus 'Excess Readmission Ratio' |\")\nprint(\"| for hospitals with discharges >= 100 and <= 1,000: |\") \nprint(\"|----------------------------------------------------------|\")\nprint(\"| Pearson Correlation = %.4f |\" %pearson[[0][0]])\nprint(\"| p-value = %.20f |\" %pvalue[[0][0]])\nprint(\"+----------------------------------------------------------+\")\n\ndf_dischg_medium = clean_hospital_read_df[['Number of Discharges', 'Excess Readmission Ratio']].loc[(clean_hospital_read_df['Number of Discharges'] >= 100) & (clean_hospital_read_df['Number of Discharges'] <= 1000) & (clean_hospital_read_df['Excess Readmission Ratio'] > 0)]\nsns.regplot(x=\"Number of Discharges\", y=\"Excess Readmission Ratio\", data=df_dischg_medium)",
"Preliminary Report\nRead the following results/report. While you are reading it, think about whether the conclusions are correct, incorrect, misleading or unfounded. Think about what you would change or what additional analyses you would perform.\nA. Initial observations based on the plot above\n+ Overall, rate of readmissions is trending down with increasing number of discharges\n+ With lower number of discharges, there is a greater incidence of excess rate of readmissions (area shaded red)\n+ With higher number of discharges, there is a greater incidence of lower rates of readmissions (area shaded green) \nB. Statistics\n+ In hospitals/facilities with number of discharges < 100, mean excess readmission rate is 1.023 and 63% have excess readmission rate greater than 1 \n+ In hospitals/facilities with number of discharges > 1000, mean excess readmission rate is 0.978 and 44% have excess readmission rate greater than 1 \nC. Conclusions\n+ There is a significant correlation between hospital capacity (number of discharges) and readmission rates. \n+ Smaller hospitals/facilities may be lacking necessary resources to ensure quality care and prevent complications that lead to readmissions.\nD. Regulatory policy recommendations\n+ Hospitals/facilities with small capacity (< 300) should be required to demonstrate upgraded resource allocation for quality care to continue operation.\n+ Directives and incentives should be provided for consolidation of hospitals and facilities to have a smaller number of them with higher capacity and number of discharges.\nA. Do you agree with the above analysis and recommendations? Why or why not?\nNo. \n\nI could not find a significant correlation (a Pearson correlation close to ±1) between hospital capacity (number of discharges) and readmission rates;<br>\nConsidering:<br> \ni) all hospitals<br> \nii) hospitals/facilities with number of discharges < 100<br>\niii) hospitals/facilities with number of discharges > 1000<br>",
"print(\"+-----------------------------------------------------------+-------------+\")\nprint(\"| Hospital/facilities | Correlation*|\")\nprint(\"|-----------------------------------------------------------|-------------|\")\nprint(\"| i) All hospitals | %.4f |\" %pearson1) \nprint(\"| ii) Hospitals/facilities with number of discharges < 100 | %.4f |\" %pearson2)\nprint(\"| ii) Hospitals/facilities with number of discharges > 1,000| %.4f |\" %pearson3)\nprint(\"+-----------------------------------------------------------+-------------+\")\nprint(\"* Pearson Correlation for: Hospital capacity(number of discharges) / readmission rates\")",
"For all three groups above there is a negative correlation;\nthe Pearson correlation value is \"far\" from a strong correlation (close to 1); and, finally,\neven without the outliers, we found similar values for the Pearson correlation (for the three groups above).\n\nB. Provide support for your arguments and your own recommendations with a statistically sound analysis:\n1. Set up an appropriate hypothesis test",
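The table above prints pearson1–pearson3 and the later cells print pvalue1–pvalue4, all computed in earlier cells not shown here. A minimal sketch of how such values could be obtained with scipy.stats.pearsonr, using synthetic stand-in data instead of the notebook's real hospital dataframe (all variable names and the synthetic slope/noise here are hypothetical):

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic stand-in for the real data: discharges vs. excess readmission ratio.
rng = np.random.default_rng(0)
discharges = rng.integers(25, 2000, size=500)
ratio = 1.05 - 0.00004 * discharges + rng.normal(0, 0.05, size=500)

# i) all hospitals
pearson1, pvalue1 = pearsonr(discharges, ratio)
# ii) hospitals/facilities with number of discharges < 100
small = discharges < 100
pearson2, pvalue2 = pearsonr(discharges[small], ratio[small])
# iii) hospitals/facilities with number of discharges > 1000
large = discharges > 1000
pearson3, pvalue3 = pearsonr(discharges[large], ratio[large])

print(pearson1, pvalue1)
```

pearsonr returns both the correlation coefficient and a two-sided p-value in one call, which covers both the correlation table and the p-value table in this report.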
"print(\"+--------------------------------------------------------------+\")\nprint(\"| Null Hypothesis: |\")\nprint(\"| Ho: There is *not* a significant correlation between |\")\nprint(\"| hospital capacity (discharges) and readmission rates|\")\nprint(\"|--------------------------------------------------------------|\")\nprint(\"| Alternative Hypothesis: |\")\nprint(\"| Ha: There is a significant correlation between |\")\nprint(\"| hospital capacity (discharges) and readmission rates|\")\nprint(\"+--------------------------------------------------------------+\")",
"2. Compute and report the observed significance value (or p-value).",
"print(\"+--------------------------------------------------------------+\")\nprint(\"| Scenario 1: |\")\nprint(\"| All Hospitals: P-Value = %.30f |\" %pvalue1)\nprint(\"|--------------------------------------------------------------|\")\nprint(\"| Scenario 2: |\")\nprint(\"| Hospitals discharges < 100: P-Value = %.20f |\" %pvalue2)\nprint(\"|--------------------------------------------------------------|\")\nprint(\"| Scenario 3: |\")\nprint(\"| Hospitals with discharges > 1,000: P-Value = %.4f |\" %pvalue3)\nprint(\"|--------------------------------------------------------------|\")\nprint(\"| Scenario 4: |\")\nprint(\"| Hospitals discharges>100 and <1,000: P-Value = %.12f|\" %pvalue4)\nprint(\"+--------------------------------------------------------------+\")",
"3. Report statistical significance for α = .01\n4. Discuss statistical significance and practical significance. Do they differ here? How does this change your recommendation to the client?\n5. Look at the scatterplot above\n\nWhat are the advantages and disadvantages of using this plot to convey information?\n\nSome advantages:<br>\na) It shows how much one variable affects the other; in other words, the relationship between two variables, which is called their correlation;<br>\nb) Outliers: the maximum and minimum values, usually, can be easily determined;<br>\nc) It is possible to show many variables in a single plot.<br>\nSome disadvantages:<br>\nd) Discretization: it is difficult to read the (x, y) values if many of them are very close together.<br>\n\nConstruct another plot that conveys the same information in a more direct manner"
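For the last prompt above, one plot that conveys the trend more directly than the raw scatter is the mean excess readmission ratio per discharge-count bin. A sketch with synthetic data (variable names and the synthetic slope/noise are hypothetical, not the notebook's real dataframe):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
discharges = rng.integers(25, 2000, size=1000)
ratio = 1.05 - 0.00004 * discharges + rng.normal(0, 0.05, size=1000)

# Bin hospitals by number of discharges and average the ratio within each bin.
bins = np.linspace(0, 2000, 21)
idx = np.digitize(discharges, bins)  # idx in 1..20 for values in [0, 2000)
centers = 0.5 * (bins[:-1] + bins[1:])
means = [ratio[idx == i].mean() if np.any(idx == i) else np.nan
         for i in range(1, len(bins))]

plt.plot(centers, means, marker="o")
plt.axhline(1.0, color="red", linestyle="--")  # excess-readmission threshold
plt.xlabel("Number of discharges (bin center)")
plt.ylabel("Mean excess readmission ratio")
plt.savefig("binned_readmissions.png")
```

Averaging within bins removes the overplotting problem noted in disadvantage d) while keeping the downward trend and the crossing of the ratio = 1 line visible.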
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mpacer/nb_struct2app
|
damian/Nikola.ipynb
|
mit
|
[
"Nikola\nStatic site generator\nFeatures\n\nIt’s just a bunch of HTML files and assets.\nIncremental builds/rebuild using doit, so Nikola is fast.\nMultilingual\nExtensible\n\nFriendly CLI\n\n\nMultiple input formats such as reStructuredText, Markdown, HTML and Jupyter Notebooks (out of the box as part of the core!!)\n\n\nThe core of the Nikola / Jupyter integration\n\nhttps://github.com/getnikola/nikola/blob/master/nikola/plugins/compile/ipynb.py",
"from nbconvert.exporters import HTMLExporter\n\n...\n\ndef _compile_string(self, nb_json):\n \"\"\"Export notebooks as HTML strings.\"\"\"\n self._req_missing_ipynb()\n c = Config(self.site.config['IPYNB_CONFIG'])\n c.update(get_default_jupyter_config())\n exportHtml = HTMLExporter(config=c)\n body, _ = exportHtml.from_notebook_node(nb_json)\n return body",
"Some other gems",
"def read_metadata(self, post, lang=None):\n \"\"\"Read metadata directly from ipynb file.\n As ipynb files support arbitrary metadata as json, the metadata used by Nikola\n will be assume to be in the 'nikola' subfield.\n \"\"\"\n self._req_missing_ipynb()\n if lang is None:\n lang = LocaleBorg().current_lang\n source = post.translated_source_path(lang)\n with io.open(source, \"r\", encoding=\"utf8\") as in_file:\n nb_json = nbformat.read(in_file, current_nbformat)\n # Metadata might not exist in two-file posts or in hand-crafted\n # .ipynb files.\n return nb_json.get('metadata', {}).get('nikola', {})\n\ndef create_post(self, path, **kw):\n \"\"\"Create a new post.\"\"\"\n ...\n\n if content.startswith(\"{\"):\n # imported .ipynb file, guaranteed to start with \"{\" because it’s JSON.\n nb = nbformat.reads(content, current_nbformat)\n else:\n nb = nbformat.v4.new_notebook()\n nb[\"cells\"] = [nbformat.v4.new_markdown_cell(content)]",
"Let's see it in action!",
"cd /media/data/devel/damian_blog/\n\n!ls\n\ntitle = \"We are above 1000 stars!\"\n\ntags_list = ['Jupyter', 'python', 'reveal', 'RISE', 'slideshow']\n\ntags = ', '.join(tags_list)\n\n!nikola new_post -f ipynb -t \"{title}\" --tags=\"{tags}\"",
"```\nCreating New Post\n\nTitle: We are above 1000 stars!\nScanning posts......done!\n[2017-07-12T16:45:00Z] NOTICE: compile_ipynb: No kernel specified, assuming \"python3\".\n[2017-07-12T16:45:01Z] INFO: new_post: Your post's text is at: posts/we-are-above-1000-stars.ipynb\n```",
"!nikola build\n\n!nikola deploy\n\nfrom IPython.display import IFrame\nIFrame(\"http://www.damian.oquanta.info/\", 980, 600)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
metpy/MetPy
|
v0.5/_downloads/upperair_soundings.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Upper Air Sounding Tutorial\nUpper air analysis is a staple of many synoptic and mesoscale analysis\nproblems. In this tutorial we will gather weather balloon data, plot it,\nperform a series of thermodynamic calculations, and summarize the results.\nTo learn more about the Skew-T diagram and its use in weather analysis and\nforecasting, check out this air weather service guide <http://homes.comet.ucar.edu/~alanbol/aws-tr-79-006.pdf>_.",
"from datetime import datetime\n\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.axes_grid1.inset_locator import inset_axes\nimport numpy as np\n\nimport metpy.calc as mpcalc\nfrom metpy.io import get_upper_air_data\nfrom metpy.plots import Hodograph, SkewT",
"Getting Data\nWe will download data from the\nUniversity of Wyoming sounding data page <http://weather.uwyo.edu/upperair/sounding.html>_\n, which has an extensive archive of data available, as well as current data.\nIn this case, we will download the sounding data from the Veterans Day\ntornado outbreak in 2002 by passing a datetime object and station name to the\nget_upper_air_data function.",
"dataset = get_upper_air_data(datetime(2002, 11, 11, 0), 'BNA')\n\n# We can view the fields available in the dataset. We will create some simple\n# variables to make the rest of the code more concise.\n\nprint(dataset.variables.keys())\n\np = dataset.variables['pressure'][:]\nT = dataset.variables['temperature'][:]\nTd = dataset.variables['dewpoint'][:]\nu = dataset.variables['u_wind'][:]\nv = dataset.variables['v_wind'][:]",
"Thermodynamic Calculations\nOften we will want to calculate some thermodynamic parameters of a\nsounding. The MetPy calc module has many such calculations already implemented!\n\nLifting Condensation Level (LCL) - The level at which an air parcel's\n  relative humidity becomes 100% when lifted along a dry adiabatic path.\nParcel Path - Path followed by a hypothetical parcel of air, beginning\n  at the surface temperature/pressure and rising dry adiabatically until\n  reaching the LCL, then rising moist adiabatically.",
"# Calculate the LCL\nlcl_pressure, lcl_temperature = mpcalc.lcl(p[0], T[0], Td[0])\n\nprint(lcl_pressure, lcl_temperature)\n\n# Calculate the parcel profile.\nparcel_prof = mpcalc.parcel_profile(p, T[0], Td[0]).to('degC')",
"Basic Skew-T Plotting\nThe Skew-T (log-P) diagram is the standard way to view rawinsonde data. The\ny-axis is height in pressure coordinates and the x-axis is temperature. The\ny coordinates are plotted on a logarithmic scale and the x coordinate system\nis skewed. An explanation of skew-T interpretation is beyond the scope of this\ntutorial, but here we will plot one that can be used for analysis or\npublication.\nThe most basic skew-T can be plotted with only five lines of Python.\nThese lines perform the following tasks:\n\n\nCreate a Figure object and set the size of the figure.\n\n\nCreate a SkewT object\n\n\nPlot the pressure and temperature (note that the pressure,\n the independent variable, is first even though it is plotted on the y-axis).\n\n\nPlot the pressure and dewpoint temperature.\n\n\nPlot the wind barbs at the appropriate pressure using the u and v wind\n components.",
"# Create a new figure. The dimensions here give a good aspect ratio\nfig = plt.figure(figsize=(9, 9))\nskew = SkewT(fig)\n\n# Plot the data using normal plotting functions, in this case using\n# log scaling in Y, as dictated by the typical meteorological plot\nskew.plot(p, T, 'r', linewidth=2)\nskew.plot(p, Td, 'g', linewidth=2)\nskew.plot_barbs(p, u, v)\n\n# Show the plot\nplt.show()",
"Advanced Skew-T Plotting\nFiducial lines indicating dry adiabats, moist adiabats, and mixing ratio are\nuseful when performing further analysis on the Skew-T diagram. Often the\n0C isotherm is emphasized and areas of CAPE and CIN are shaded.",
"# Create a new figure. The dimensions here give a good aspect ratio\nfig = plt.figure(figsize=(9, 9))\nskew = SkewT(fig, rotation=30)\n\n# Plot the data using normal plotting functions, in this case using\n# log scaling in Y, as dictated by the typical meteorological plot\nskew.plot(p, T, 'r')\nskew.plot(p, Td, 'g')\nskew.plot_barbs(p, u, v)\nskew.ax.set_ylim(1000, 100)\nskew.ax.set_xlim(-40, 60)\n\n# Plot LCL temperature as black dot\nskew.plot(lcl_pressure, lcl_temperature, 'ko', markerfacecolor='black')\n\n# Plot the parcel profile as a black line\nskew.plot(p, parcel_prof, 'k', linewidth=2)\n\n# Color regions of CAPE and CIN (the area between the actual temperature and\n# the parcel path).\nskew.ax.fill_betweenx(p, T, parcel_prof, where=T >= parcel_prof, facecolor='blue', alpha=0.4)\nskew.ax.fill_betweenx(p, T, parcel_prof, where=T < parcel_prof, facecolor='red', alpha=0.4)\n\n# Plot a zero degree isotherm\nl = skew.ax.axvline(0, color='c', linestyle='--', linewidth=2)\n\n# Add the relevant special lines\nskew.plot_dry_adiabats()\nskew.plot_moist_adiabats()\nskew.plot_mixing_lines()\n\n# Show the plot\nplt.show()",
"Adding a Hodograph\nA hodograph is a polar representation of the wind profile measured by the rawinsonde.\nWinds at different levels are plotted as vectors with their tails at the origin, the angle\nfrom the vertical axis representing the direction, and the length representing the speed.\nThe line plotted on the hodograph is a line connecting the tips of these vectors,\nwhich are not drawn.",
"# Create a new figure. The dimensions here give a good aspect ratio\nfig = plt.figure(figsize=(9, 9))\nskew = SkewT(fig, rotation=30)\n\n# Plot the data using normal plotting functions, in this case using\n# log scaling in Y, as dictated by the typical meteorological plot\nskew.plot(p, T, 'r')\nskew.plot(p, Td, 'g')\nskew.plot_barbs(p, u, v)\nskew.ax.set_ylim(1000, 100)\nskew.ax.set_xlim(-40, 60)\n\n# Plot LCL as black dot\nskew.plot(lcl_pressure, lcl_temperature, 'ko', markerfacecolor='black')\n\n# Plot the parcel profile as a black line\nskew.plot(p, parcel_prof, 'k', linewidth=2)\n\n# Color regions of CAPE and CIN (the area between the actual temperature and\n# the parcel path).\nskew.ax.fill_betweenx(p, T, parcel_prof, where=T >= parcel_prof, facecolor='blue', alpha=0.4)\nskew.ax.fill_betweenx(p, T, parcel_prof, where=T < parcel_prof, facecolor='red', alpha=0.4)\n\n# Plot a zero degree isotherm\nl = skew.ax.axvline(0, color='c', linestyle='--', linewidth=2)\n\n# Add the relevant special lines\nskew.plot_dry_adiabats()\nskew.plot_moist_adiabats()\nskew.plot_mixing_lines()\n\n# Create a hodograph\n# Create an inset axes object that is 40% width and height of the\n# figure and put it in the upper right hand corner.\nax_hod = inset_axes(skew.ax, '40%', '40%', loc=1)\nh = Hodograph(ax_hod, component_range=80.)\nh.add_grid(increment=20)\nh.plot_colormapped(u, v, np.hypot(u, v)) # Plot a line colored by wind speed\n\n# Show the plot\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
notnil/udacity-ml-capstone
|
project.ipynb
|
mit
|
[
"Machine Learning Engineer Capstone\nPreprocessing\nThe preprocessing routine searches the ./data/videos dir and creates usable datasets for subsequent model learning tasks. It breaks down videos into one second clips and from those clips generates frames, spectrograms, and InceptionV3 feature vectors. Currently clips and frames aren't used directly and spectrograms aren't used at all, but they are kept for labeling, debugging, and future model upgrades. The feature vector is the only output used for learning. Feature vectors are generated in preprocessing because doing so drastically reduces training time, which allows for a faster model iteration cycle. This methodology was inspired by this repo and associated blog post: https://github.com/harvitronix/five-video-classification-methods",
"import glob\nimport subprocess\nimport json\nimport os\nimport csv\nfrom tqdm import tnrange, tqdm_notebook\nfrom keras.preprocessing import image\nfrom keras.applications.inception_v3 import InceptionV3, preprocess_input\nfrom keras.models import Model, load_model\nfrom keras.layers import Input\nimport numpy as np\n\n# Adapted from https://github.com/harvitronix/five-video-classification-methods\nclass Extractor():\n \"\"\"Extractor builds an inception model without the top classification \n layers and extracts a feature array from an image.\"\"\"\n \n def __init__(self):\n # Get model with pretrained weights.\n base_model = InceptionV3(\n weights='imagenet',\n include_top=True\n )\n\n # We'll extract features at the final pool layer.\n self.model = Model(\n inputs=base_model.input,\n outputs=base_model.get_layer('avg_pool').output\n )\n\n def extract(self, image_path):\n img = image.load_img(image_path, target_size=(299, 299))\n x = image.img_to_array(img)\n x = np.expand_dims(x, axis=0)\n x = preprocess_input(x)\n\n # Get the prediction.\n features = self.model.predict(x)\n features = features[0]\n return features\n\ndef video_length(path):\n \"\"\"returns the length of the video in secs\"\"\"\n cmd = \"ffprobe -i \" + path + \" -show_entries format=duration -v quiet -of json\"\n pipe = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE).stdout\n output = pipe.read()\n d = json.loads(output)\n s = d[\"format\"][\"duration\"]\n return int(float(s))\n\ndef video_id(path):\n \"\"\"returns the id of a video from a path in this format: ./data/videos/:video_id\"\"\"\n return path.split(\"/\")[3].split(\".\")[0]\n\ndef clip_dir_path(path):\n \"\"\"returns the path to dir containing all clips for a video ./data/clips/:video_id\"\"\"\n vid_id = video_id(path)\n return \"./data/clips/\" + vid_id\n\ndef create_clips(path):\n \"\"\"given a path to a video create_clips writes one sec video segments to disk \n in the following format ./data/clips/:video_id/:clip_id.mp4\"\"\"\n \n # create clip dir\n dir_path = clip_dir_path(path)\n if not os.path.exists(dir_path):\n os.makedirs(dir_path)\n \n # create one sec clips from src\n video_len = video_length(path)\n for i in tqdm_notebook(xrange(video_len), desc=\"Clips for \" + video_id(path)):\n clip_path = dir_path + \"/\" + '%05d' % i + \".mp4\" \n if not os.path.exists(clip_path):\n cmd = \"ffmpeg -v error -y -i \" + path + \" -ss \" + str(i) + \" -t 1 \" + clip_path\n os.system(cmd)\n\ndef create_frames(path):\n \"\"\"given a path to a video create_frames writes frames from previous generated \n clips. create_clips must be run before create_frames. Frames are saved in the \n following format ./data/frames/:video_id/:clip_id/:frame_id.jpg\"\"\"\n \n # create frame dir\n vid_id = video_id(path)\n dir_path = \"./data/frames/\" + vid_id\n if not os.path.exists(dir_path):\n os.makedirs(dir_path)\n \n # create frames from clip\n video_len = video_length(path)\n for i in tqdm_notebook(xrange(video_len), desc=\"Frames for \" + vid_id):\n clip_path = clip_dir_path(path) + \"/\" + '%05d' % i + \".mp4\"\n frame_dir_path = dir_path + \"/\" + '%05d' % i\n if not os.path.exists(frame_dir_path):\n os.makedirs(frame_dir_path)\n cmd = \"ffmpeg -v error -y -i \" + clip_path + \" -r 5.0 \" + frame_dir_path + \"/%5d.jpg\"\n os.system(cmd)\n \n # resize frames to 299x299 for InceptionV3\n frame_paths = glob.glob(frame_dir_path + \"/*.jpg\")\n for fi in xrange(len(frame_paths)):\n path = frame_paths[fi]\n # resize first\n cmd = \"convert \" + path + \" -resize 299x299 \" + path\n os.system(cmd)\n # add black background\n cmd = \"convert \" + path + \" -gravity center -background black -extent 299x299 \" + path\n os.system(cmd)\n\ndef create_spectrograms(path):\n \"\"\"given a path to a video create_spectrograms writes spectrograms from previous generated \n clips. create_clips must be run before create_spectrograms. Spectrograms are saved in the \n following format ./data/audio/:video_id/:clip_id.png\"\"\"\n \n # create audio dir\n vid_id = video_id(path)\n dir_path = \"./data/audio/\" + vid_id\n if not os.path.exists(dir_path):\n os.makedirs(dir_path)\n \n # create spectrogram from clip\n video_len = video_length(path)\n for i in tqdm_notebook(xrange(video_len), desc=\"Spectrograms for \" + vid_id):\n clip_path = clip_dir_path(path) + \"/\" + '%05d' % i + \".mp4\"\n spec_path = dir_path + \"/\" + '%05d' % i + \".png\"\n if not os.path.exists(spec_path):\n cmd = \"ffmpeg -v error -y -i \" + clip_path + \" -lavfi showspectrumpic=s=32x32:legend=false \" + spec_path\n os.system(cmd)\n\n\nextractor = Extractor()\n\ndef create_features(path):\n \"\"\"given a path to a video create_features writes inceptionV3 feature outputs from previous generated \n clips. create_clips and create_frames must be run before create_features. Feature outputs are saved \n in the following format ./data/features/:video_id/:clip_id/:frame_id.txt.gz\"\"\"\n \n # create feature dir\n vid_id = video_id(path)\n dir_path = \"./data/features/\" + vid_id\n if not os.path.exists(dir_path):\n os.makedirs(dir_path) \n \n # save feature array for every frame\n video_len = video_length(path) \n with tqdm_notebook(total=video_len, desc=\"Features for \" + vid_id) as pbar:\n for root, dirs, files in os.walk('./data/frames/'+ vid_id):\n for f in files:\n if f.endswith(\".jpg\"):\n frame_path = root + \"/\" + f\n feature_path = frame_path.replace(\"frames\", \"features\").replace(\"jpg\", \"txt.gz\")\n feature_dir = root.replace(\"frames\", \"features\")\n if not os.path.exists(feature_dir):\n os.makedirs(feature_dir)\n if not os.path.exists(feature_path):\n features = extractor.extract(frame_path)\n np.savetxt(feature_path, features)\n pbar.update(1)\n\n# create assets from folder of videos. This takes a LONG TIME.\nvideo_paths = glob.glob(\"./data/videos/*.mp4\")\nvideos_len = len(video_paths)\nfor i in tqdm_notebook(xrange(videos_len), desc=\"Preprocessing Videos\"):\n path = video_paths[i]\n create_clips(path)\n create_frames(path)\n create_spectrograms(path)\n create_features(path)",
"Create Labels\nLabels are generated from the labelmaker's csv output of its internal sqlite database. Labels are shuffled and divided into training, validation, and test sets at a ratio of roughly 3:1:1",
"import pandas as pd\nimport glob\nimport numpy as np\n\n# read in and shuffle data\nlabels = pd.read_csv(\"./labelmaker/labels.csv\").as_matrix()\nprint \"Labels Shape: {}\".format(labels.shape)\nnp.random.seed(0)\nnp.random.shuffle(labels)\n\n# split labels into train, validation, and test sets\ndiv = len(labels) / 5\ntrain_labels = labels[0:div*3,:]\nval_labels = labels[div*3:div*4,:]\ntest_labels = labels[div*4:,:]\n\nprint \"Training Labels Shape: {}\".format(train_labels.shape)\nprint \"Validation Labels Shape: {}\".format(val_labels.shape)\nprint \"Test Labels Shape: {}\".format(test_labels.shape)",
"Model\nThe Keras model is composed of a sequential model with two time sensitive LSTM layers followed by two Dense layers and an output layer. The initial input of (7, 2048) represents seven frames per clip each with a 2048 sized vector generated by InceptionV3. The final 4x1 output vector is the category prediction.",
"from keras.models import Sequential\nfrom keras.layers import Dense, LSTM, Dropout, Flatten, GRU\nfrom keras import backend as K\n\nmodel = Sequential([\n LSTM(512, return_sequences=True, input_shape=(7, 2048)),\n LSTM(512, return_sequences=True, input_shape=(7, 512)), \n Flatten(),\n Dense(512, activation='relu'),\n Dropout(0.5),\n Dense(512, activation='relu'),\n Dropout(0.5), \n Dense(4, activation='softmax')\n])\nmodel.compile(loss=\"categorical_crossentropy\", optimizer=\"adam\", metrics=['accuracy'])\n\nprint \"Model Compiled\"",
"Match Labels\nThis routine retrieves the features from disk and pairs them with their one hot encoded labels. Currently all datasets are loaded into memory, but with enough videos, the code should be switched to using a Keras generator.",
"def one_hot(i):\n return np.array([int(i==0),int(i==1),int(i==2),int(i==3)])\n\ndef get_features(labels):\n x, y = [], []\n for i in xrange(len(labels)):\n video_id = labels[i][0]\n clip_id = labels[i][1]\n label = labels[i][2]\n\n features = []\n for i in range(7):\n fname = \"./data/features/\" + video_id + \"/\" + '%05d' % clip_id + \"/\" + '%05d' % (i+1) + \".txt.gz\"\n f = np.loadtxt(fname)\n features.append(f)\n \n x.append(features)\n y.append(one_hot(label))\n x = np.array(x)\n return x, np.array(y)\n\nprint \"Getting features\"\n\nX_train, Y_train = get_features(train_labels)\nX_val, Y_val = get_features(val_labels)\n\nprint X_train.shape\nprint Y_train.shape",
"Training\nThis routine trains the model and logs updates to the console and Tensorboard. After training is complete the model is saved using the current timestamp to distinguish training runs.",
"from keras.callbacks import TensorBoard\nimport time\nimport numpy as np\n\ntensorboard = TensorBoard(log_dir='./logs', \n histogram_freq=0,\n write_graph=True, \n write_images=True)\n\nmodel.fit(X_train, \n Y_train, \n batch_size=100, \n epochs=30, \n verbose=2, \n callbacks=[tensorboard], \n validation_data=(X_val, Y_val))\n\nfile_name = \"shot_classifier_\" + str(int(time.time())) + \".h5\"\nmodel.save(file_name)\nprint \"Model Saved\"",
"Prediction\nThis routine tests the saved model using the Keras predict method. Overall accuracy and a confusion matrix are displayed to validate that the model is accurate against unseen data.",
"from keras.models import load_model\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import accuracy_score\n\ndef reverse_one_hot(val):\n hi_idx = -1\n hi = -1\n for i in range(len(val)):\n v = val[i]\n if hi == -1 or v > hi:\n hi = v\n hi_idx = i\n return hi_idx\n\ndef normalize_labels(Y):\n norm = []\n for v in Y:\n norm.append(reverse_one_hot(v))\n return np.array(norm)\n\nX_test, Y_test = get_features(test_labels)\nmodel = load_model(\"shot_classifier_1501284149.h5\")\nY_pred = model.predict(X_test, verbose=2)\n\nY_test_norm = normalize_labels(Y_test)\nY_pred_norm = normalize_labels(Y_pred)\n\nprint \"Overall Accuracy: \" + str(accuracy_score(Y_test_norm, Y_pred_norm))\n\ncon_m = confusion_matrix(Y_test_norm, Y_pred_norm)\n\ntitles = [\"forehand\", \"backhand\", \"volley\", \"serve\"]\n\nfor i in range(4):\n for j in range(4):\n actual = titles[i]\n predicted = titles[j]\n print \"predicted \" + predicted + \" when \" + actual + \" \" + str(con_m[i][j])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n
|
site/ko/quantum/tutorials/gradients.ipynb
|
apache-2.0
|
[
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Calculating gradients\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/quantum/tutorials/gradients\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">View on TensorFlow.org</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/gradients.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Run in Google Colab</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/quantum/blob/master/docs/tutorials/gradients.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">View source on GitHub</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/quantum/docs/tutorials/gradients.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">Download notebook</a></td>\n</table>\n\nThis tutorial explores gradient calculation algorithms for the expectation values of quantum circuits.\nCalculating the gradient of the expectation value of a certain observable in a quantum circuit is an involved process. Unlike traditional machine learning transformations such as matrix multiplication or vector addition, which have analytic gradient formulas that are easy to write down, expectation values of observables do not always have such formulas available. As a result, there are different quantum gradient calculation methods that come in handy for different scenarios. This tutorial compares and contrasts two different differentiation schemes.\nSetup",
"!pip install tensorflow==2.1.0",
"Install TensorFlow Quantum:",
"!pip install tensorflow-quantum",
"Now import TensorFlow and the module dependencies:",
"import tensorflow as tf\nimport tensorflow_quantum as tfq\n\nimport cirq\nimport sympy\nimport numpy as np\n\n# visualization tools\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom cirq.contrib.svg import SVGCircuit",
"1. Preliminary\nLet's make the notion of gradient calculation for quantum circuits a little more concrete. Suppose you have a parameterized circuit like this one:",
"qubit = cirq.GridQubit(0, 0)\nmy_circuit = cirq.Circuit(cirq.Y(qubit)**sympy.Symbol('alpha'))\nSVGCircuit(my_circuit)",
"Along with an observable:",
"pauli_x = cirq.X(qubit)\npauli_x",
"Looking at this operator, you know that $⟨Y(\\alpha)| X | Y(\\alpha)⟩ = \\sin(\\pi \\alpha)$:",
"def my_expectation(op, alpha):\n \"\"\"Compute ⟨Y(alpha)| `op` | Y(alpha)⟩\"\"\"\n params = {'alpha': alpha}\n sim = cirq.Simulator()\n final_state = sim.simulate(my_circuit, params).final_state\n return op.expectation_from_wavefunction(final_state, {qubit: 0}).real\n\n\nmy_alpha = 0.3\nprint(\"Expectation=\", my_expectation(pauli_x, my_alpha))\nprint(\"Sin Formula=\", np.sin(np.pi * my_alpha))",
"If you define $f_{1}(\\alpha) = ⟨Y(\\alpha)| X | Y(\\alpha)⟩$, then $f_{1}^{'}(\\alpha) = \\pi \\cos(\\pi \\alpha)$. Let's check this:",
"def my_grad(obs, alpha, eps=0.01):\n grad = 0\n f_x = my_expectation(obs, alpha)\n f_x_prime = my_expectation(obs, alpha + eps)\n return ((f_x_prime - f_x) / eps).real\n\n\nprint('Finite difference:', my_grad(pauli_x, my_alpha))\nprint('Cosine formula: ', np.pi * np.cos(np.pi * my_alpha))",
"2. The need for a differentiator\nWith larger circuits, you won't always have a formula that precisely calculates the gradients of a given quantum circuit. In the event that a simple formula isn't enough to calculate the gradient, the tfq.differentiators.Differentiator class allows you to define algorithms for computing the gradients of your circuits. For instance, you can recreate the above example in TensorFlow Quantum (TFQ) with:",
"expectation_calculation = tfq.layers.Expectation(\n differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))\n\nexpectation_calculation(my_circuit,\n operators=pauli_x,\n symbol_names=['alpha'],\n symbol_values=[[my_alpha]])",
"However, if you switch to estimating expectation based on sampling (what would happen on a true device), the values can change a little bit. This means you now have an imperfect estimate:",
"sampled_expectation_calculation = tfq.layers.SampledExpectation(\n differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))\n\nsampled_expectation_calculation(my_circuit,\n operators=pauli_x,\n repetitions=500,\n symbol_names=['alpha'],\n symbol_values=[[my_alpha]])",
"This can quickly compound into a serious accuracy problem when it comes to gradients:",
"# Make input_points = [batch_size, 1] array.\ninput_points = np.linspace(0, 5, 200)[:, np.newaxis].astype(np.float32)\nexact_outputs = expectation_calculation(my_circuit,\n operators=pauli_x,\n symbol_names=['alpha'],\n symbol_values=input_points)\nimperfect_outputs = sampled_expectation_calculation(my_circuit,\n operators=pauli_x,\n repetitions=500,\n symbol_names=['alpha'],\n symbol_values=input_points)\nplt.title('Forward Pass Values')\nplt.xlabel('$x$')\nplt.ylabel('$f(x)$')\nplt.plot(input_points, exact_outputs, label='Analytic')\nplt.plot(input_points, imperfect_outputs, label='Sampled')\nplt.legend()\n\n# Gradients are a much different story.\nvalues_tensor = tf.convert_to_tensor(input_points)\n\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n exact_outputs = expectation_calculation(my_circuit,\n operators=pauli_x,\n symbol_names=['alpha'],\n symbol_values=values_tensor)\nanalytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)\n\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n imperfect_outputs = sampled_expectation_calculation(\n my_circuit,\n operators=pauli_x,\n repetitions=500,\n symbol_names=['alpha'],\n symbol_values=values_tensor)\nsampled_finite_diff_gradients = g.gradient(imperfect_outputs, values_tensor)\n\nplt.title('Gradient Values')\nplt.xlabel('$x$')\nplt.ylabel('$f^{\\'}(x)$')\nplt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')\nplt.plot(input_points, sampled_finite_diff_gradients, label='Sampled')\nplt.legend()",
"Here you can see that although the finite difference formula is fast at computing the gradients in the analytical case, for the sampling-based methods it was far too noisy. More careful techniques must be used to ensure a good gradient can be calculated. Next you will look at a much slower technique that wouldn't be as well suited for analytical expectation gradient calculations, but performs much better in the real-world, sample-based case:",
"# A smarter differentiation scheme.\ngradient_safe_sampled_expectation = tfq.layers.SampledExpectation(\n differentiator=tfq.differentiators.ParameterShift())\n\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n imperfect_outputs = gradient_safe_sampled_expectation(\n my_circuit,\n operators=pauli_x,\n repetitions=500,\n symbol_names=['alpha'],\n symbol_values=values_tensor)\n\nsampled_param_shift_gradients = g.gradient(imperfect_outputs, values_tensor)\n\nplt.title('Gradient Values')\nplt.xlabel('$x$')\nplt.ylabel('$f^{\\'}(x)$')\nplt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')\nplt.plot(input_points, sampled_param_shift_gradients, label='Sampled')\nplt.legend()",
"From the above you can see that certain differentiators are best used for particular research scenarios. In general, the slower sample-based methods that are robust to device noise, etc., are great differentiators when testing or implementing algorithms in a more \"real world\" setting. Faster methods like finite difference are great for analytical calculations when you want higher throughput but aren't yet concerned with the device viability of your algorithm.\n3. Multiple observables\nLet's introduce a second observable and see how TensorFlow Quantum supports multiple observables for a single circuit.",
"pauli_z = cirq.Z(qubit)\npauli_z",
"If this observable is used in the same circuit as before, then you have $f_{2}(\\alpha) = ⟨Y(\\alpha)| Z | Y(\\alpha)⟩ = \\cos(\\pi \\alpha)$ and $f_{2}^{'}(\\alpha) = -\\pi \\sin(\\pi \\alpha)$. Perform a quick check:",
"test_value = 0.\n\nprint('Finite difference:', my_grad(pauli_z, test_value))\nprint('Sin formula: ', -np.pi * np.sin(np.pi * test_value))",
"It's a close enough match.\nNow if you define $g(\\alpha) = f_{1}(\\alpha) + f_{2}(\\alpha)$, then $g'(\\alpha) = f_{1}^{'}(\\alpha) + f^{'}_{2}(\\alpha)$. Defining more than one observable in TensorFlow Quantum to use along with a circuit is equivalent to adding more terms to $g$.\nThis means that the gradient of a particular symbol in a circuit is equal to the sum of the gradients with regard to each observable for that symbol applied to that circuit. This is compatible with TensorFlow gradient taking and backpropagation (where you give the sum of the gradients over all observables as the gradient for a particular symbol).",
"sum_of_outputs = tfq.layers.Expectation(\n differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))\n\nsum_of_outputs(my_circuit,\n operators=[pauli_x, pauli_z],\n symbol_names=['alpha'],\n symbol_values=[[test_value]])",
"여기서 첫 번째 항목은 예상 w.r.t Pauli X이고, 두 번째 항목은 예상 w.r.t Pauli Z입니다. 그래디언트를 사용할 때는 다음과 같습니다.",
"test_value_tensor = tf.convert_to_tensor([[test_value]])\n\nwith tf.GradientTape() as g:\n g.watch(test_value_tensor)\n outputs = sum_of_outputs(my_circuit,\n operators=[pauli_x, pauli_z],\n symbol_names=['alpha'],\n symbol_values=test_value_tensor)\n\nsum_of_gradients = g.gradient(outputs, test_value_tensor)\n\nprint(my_grad(pauli_x, test_value) + my_grad(pauli_z, test_value))\nprint(sum_of_gradients.numpy())",
"여기에서 각 observable의 그래디언트의 합이 실제로 $\\alpha$의 그래디언트임을 확인했습니다. 이 동작은 모든 TensorFlow Quantum 미분기에서 지원하며 나머지 TensorFlow와의 호환성에 중요한 역할을 합니다.\n4. 고급 사용법\n여기서는 양자 회로에 대한 사용자 정의 미분 루틴을 정의하는 방법을 배웁니다. TensorFlow Quantum 서브 클래스 tfq.differentiators.Differentiator 내에 존재하는 모든 미분기입니다. 미분기에서 differentiate_analytic 및 differentiate_sampled를 구현해야 합니다.\n다음은 TensorFlow Quantum 구조를 사용하여 이 튜토리얼의 첫 번째 부분에 나온 폐쇄형 솔루션을 구현합니다.",
"class MyDifferentiator(tfq.differentiators.Differentiator):\n \"\"\"A Toy differentiator for <Y^alpha | X |Y^alpha>.\"\"\"\n\n def __init__(self):\n pass\n\n @tf.function\n def _compute_gradient(self, symbol_values):\n \"\"\"Compute the gradient based on symbol_values.\"\"\"\n\n # f(x) = sin(pi * x)\n # f'(x) = pi * cos(pi * x)\n return tf.cast(tf.cos(symbol_values * np.pi) * np.pi, tf.float32)\n\n @tf.function\n def differentiate_analytic(self, programs, symbol_names, symbol_values,\n pauli_sums, forward_pass_vals, grad):\n \"\"\"Specify how to differentiate a circuit with analytical expectation.\n\n This is called at graph runtime by TensorFlow. `differentiate_analytic`\n should calculate the gradient of a batch of circuits and return it\n formatted as indicated below. See\n `tfq.differentiators.ForwardDifference` for an example.\n\n Args:\n programs: `tf.Tensor` of strings with shape [batch_size] containing\n the string representations of the circuits to be executed.\n symbol_names: `tf.Tensor` of strings with shape [n_params], which\n is used to specify the order in which the values in\n `symbol_values` should be placed inside of the circuits in\n `programs`.\n symbol_values: `tf.Tensor` of real numbers with shape\n [batch_size, n_params] specifying parameter values to resolve\n into the circuits specified by programs, following the ordering\n dictated by `symbol_names`.\n pauli_sums: `tf.Tensor` of strings with shape [batch_size, n_ops]\n containing the string representation of the operators that will\n be used on all of the circuits in the expectation calculations.\n forward_pass_vals: `tf.Tensor` of real numbers with shape\n [batch_size, n_ops] containing the output of the forward pass\n through the op you are differentiating.\n grad: `tf.Tensor` of real numbers with shape [batch_size, n_ops]\n representing the gradient backpropagated to the output of the\n op you are differentiating through.\n\n Returns:\n A `tf.Tensor` with the same shape as 
`symbol_values` representing\n the gradient backpropagated to the `symbol_values` input of the op\n you are differentiating through.\n \"\"\"\n\n # Computing gradients just based off of symbol_values.\n return self._compute_gradient(symbol_values) * grad\n\n @tf.function\n def differentiate_sampled(self, programs, symbol_names, symbol_values,\n pauli_sums, num_samples, forward_pass_vals, grad):\n \"\"\"Specify how to differentiate a circuit with sampled expectation.\n\n This is called at graph runtime by TensorFlow. `differentiate_sampled`\n should calculate the gradient of a batch of circuits and return it\n formatted as indicated below. See\n `tfq.differentiators.ForwardDifference` for an example.\n\n Args:\n programs: `tf.Tensor` of strings with shape [batch_size] containing\n the string representations of the circuits to be executed.\n symbol_names: `tf.Tensor` of strings with shape [n_params], which\n is used to specify the order in which the values in\n `symbol_values` should be placed inside of the circuits in\n `programs`.\n symbol_values: `tf.Tensor` of real numbers with shape\n [batch_size, n_params] specifying parameter values to resolve\n into the circuits specified by programs, following the ordering\n dictated by `symbol_names`.\n pauli_sums: `tf.Tensor` of strings with shape [batch_size, n_ops]\n containing the string representation of the operators that will\n be used on all of the circuits in the expectation calculations.\n num_samples: `tf.Tensor` of positive integers representing the\n number of samples per term in each term of pauli_sums used\n during the forward pass.\n forward_pass_vals: `tf.Tensor` of real numbers with shape\n [batch_size, n_ops] containing the output of the forward pass\n through the op you are differentiating.\n grad: `tf.Tensor` of real numbers with shape [batch_size, n_ops]\n representing the gradient backpropagated to the output of the\n op you are differentiating through.\n\n Returns:\n A `tf.Tensor` with the same shape 
as `symbol_values` representing\n the gradient backpropagated to the `symbol_values` input of the op\n you are differentiating through.\n \"\"\"\n return self._compute_gradient(symbol_values) * grad",
"이 새로운 미분기는 이제 기존 tfq.layer 객체와 함께 사용할 수 있습니다.",
"custom_dif = MyDifferentiator()\ncustom_grad_expectation = tfq.layers.Expectation(differentiator=custom_dif)\n\n# Now let's get the gradients with finite diff.\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n exact_outputs = expectation_calculation(my_circuit,\n operators=[pauli_x],\n symbol_names=['alpha'],\n symbol_values=values_tensor)\n\nanalytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)\n\n# Now let's get the gradients with custom diff.\nwith tf.GradientTape() as g:\n g.watch(values_tensor)\n my_outputs = custom_grad_expectation(my_circuit,\n operators=[pauli_x],\n symbol_names=['alpha'],\n symbol_values=values_tensor)\n\nmy_gradients = g.gradient(my_outputs, values_tensor)\n\nplt.subplot(1, 2, 1)\nplt.title('Exact Gradient')\nplt.plot(input_points, analytic_finite_diff_gradients.numpy())\nplt.xlabel('x')\nplt.ylabel('f(x)')\nplt.subplot(1, 2, 2)\nplt.title('My Gradient')\nplt.plot(input_points, my_gradients.numpy())\nplt.xlabel('x')",
"이제 이 새로운 미분기를 사용하여 미분 ops를 생성할 수 있습니다.\n요점: 차별화 요소는 한 번에 하나의 op에만 연결할 수 있으므로 이전 op에 연결된 미분기는 새 op에 연결하기 전에 새로 고쳐야 합니다.",
"# Create a noisy sample based expectation op.\nexpectation_sampled = tfq.get_sampled_expectation_op(\n cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01)))\n\n# Make it differentiable with your differentiator:\n# Remember to refresh the differentiator before attaching the new op\ncustom_dif.refresh()\ndifferentiable_op = custom_dif.generate_differentiable_op(\n sampled_op=expectation_sampled)\n\n# Prep op inputs.\ncircuit_tensor = tfq.convert_to_tensor([my_circuit])\nop_tensor = tfq.convert_to_tensor([[pauli_x]])\nsingle_value = tf.convert_to_tensor([[my_alpha]])\nnum_samples_tensor = tf.convert_to_tensor([[1000]])\n\nwith tf.GradientTape() as g:\n g.watch(single_value)\n forward_output = differentiable_op(circuit_tensor, ['alpha'], single_value,\n op_tensor, num_samples_tensor)\n\nmy_gradients = g.gradient(forward_output, single_value)\n\nprint('---TFQ---')\nprint('Foward: ', forward_output.numpy())\nprint('Gradient:', my_gradients.numpy())\nprint('---Original---')\nprint('Forward: ', my_expectation(pauli_x, my_alpha))\nprint('Gradient:', my_grad(pauli_x, my_alpha))",
"성공: 이제 TensorFlow Quantum이 제공하는 모든 미분기를 사용하고 자신만의 미분기를 정의할 수 있습니다."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
spulido99/NetworksAnalysis
|
alejogm0520/Ejercicios 1.1.ipynb
|
mit
|
[
"Ejercicios Graphs, Paths & Components\nEjercicios básicos de Grafos.\nEjercicio - Número de Nodos y Enlaces\n(resuelva en código propio y usando la librería NerworkX o iGraph)\nCuente en número de nodos y enalces con los siguientes links (asumiendo que el grafo puede ser dirigido y no dirigido)",
"edges = set([(1, 2), (3, 1), (3, 2), (2, 4)])\n\nimport networkx as nx\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy as sc\nimport itertools\nimport random",
"Usando la libreria",
"gr = nx.Graph()\nfor i in range(1,5):\n gr.add_node(i)\nfor i in edges:\n gr.add_edge(i[0], i[1])\n \nnx.draw_spectral(gr)\n\nplt.show()\n\nprint ('The graph is directed?: ', nx.is_directed(gr))\nif nx.is_directed(gr) is True:\n print ('Number of edges: ', gr.number_of_edges())\nelse:\n print ('Number of edges: ', gr.number_of_edges()*2)\n\nprint ('Number of nodes: ', gr.number_of_nodes())\n\ngr2 = nx.DiGraph()\nfor i in range(1,5):\n gr2.add_node(i)\nfor i in edges:\n gr2.add_edge(i[0], i[1])\n\nnx.draw_spectral(gr2)\n\nplt.show()\n\nprint ('The graph is directed?: ', nx.is_directed(gr2))\nif nx.is_directed(gr2) is True:\n print ('Number of edges: ', gr2.number_of_edges())\nelse:\n print ('Number of edges: ', gr2.number_of_edges()*2)\n\nprint ('Number of nodes: ', gr2.number_of_nodes())",
"Propio",
"Directed=False\nprint ('The graph is directed?: ', Directed)\n\nif Directed is True:\n print ('Number of edges: ', len(edges))\nelse:\n print ('Number of edges: ', 2*len(edges))\n\ntemp = []\nfor i in edges:\n temp.append(i[0])\n temp.append(i[1])\ntemp = np.array(temp)\n\nprint ('Number of nodes: ', np.size(np.unique(temp)))\n\nDirected=True\nprint ('The graph is directed?: ', Directed)\n\nif Directed is True:\n print ('Number of edges: ', len(edges))\nelse:\n print ('Number of edges: ', 2*len(edges))\n\ntemp = []\nfor i in edges:\n temp.append(i[0])\n temp.append(i[1])\ntemp = np.array(temp)\n\nprint ('Number of nodes: ', np.size(np.unique(temp)))\n\ndel temp, Directed",
"Ejercicio - Matriz de Adyacencia\n(resuelva en código propio y usando la librería NetworkX (python) o iGraph (R))\nCree la matriz de adyacencia del grafo del ejercicio anterior (para dirigido y no-dirigido)\nUsando Librería",
"A = nx.adjacency_matrix(gr)\nprint ('No Dirigida')\nprint(A)\n\nA = nx.adjacency_matrix(gr2)\nprint ('Dirigida')\nprint(A)",
"Propia",
"def adjmat(ed, directed):\n if directed is True:\n temp_d1 = []\n temp_d2 = []\n for i in ed:\n temp_d1.append(i[0])\n temp_d2.append(i[1])\n B=sc.sparse.csr_matrix((np.ones(len(temp_d1), dtype='int'), (temp_d1, temp_d2)))\n else:\n temp_d1 = []\n temp_d2 = []\n for i in ed:\n temp_d1.append(i[0])\n temp_d1.append(i[1])\n temp_d2.append(i[1])\n temp_d2.append(i[0])\n B=sc.sparse.csr_matrix((np.ones(len(temp_d1), dtype='int'), (temp_d1, temp_d2)))\n return B\n\nA2 = adjmat(edges, True)\nprint ('Dirigida')\nprint (A2)\n\nA2 = adjmat(edges, False)\nprint ('No Dirigida')\nprint (A2)\n\ndel A, A2, gr, gr2",
"Ejercicio - Sparseness\nEnron email network - Directed http://snap.stanford.edu/data/email-Enron.html\nCalcule la proporción entre número de links existentes contra el número de links posibles.",
"F = open(\"Email-Enron.txt\",'r')\nNet1=nx.read_edgelist(F)\nF.close()\n\nn = Net1.number_of_nodes()\nposibles = Net1.number_of_nodes()*(Net1.number_of_nodes()-1.0)/2.0\nprint ('Ratio: ', Net1.number_of_edges()/posibles)",
"En la matriz de adyacencia de cada uno de las redes elegidas, cuantos ceros hay?",
"ANet1 = nx.adjacency_matrix(Net1)\n\nnzeros=Net1.number_of_nodes()*Net1.number_of_nodes()-len(ANet1.data)\nprint ('La Red tiene: ', nzeros, ' ceros')\n\ndel Net1, posibles, ANet1, nzeros",
"Social circles from Facebook (anonymized) - Undirected http://snap.stanford.edu/data/egonets-Facebook.html\nCalcule la proporción entre número de links existentes contra el número de links posibles.",
"F = open(\"facebook_combined.txt\",'r')\nNet=nx.read_edgelist(F)\nF.close()\n\nn = Net.number_of_nodes()\nposibles = Net.number_of_nodes()*(Net.number_of_nodes()-1.0)/2.0\nprint ('Ratio: ', Net.number_of_edges()/posibles)",
"En la matriz de adyacencia de cada uno de las redes elegidas, cuantos ceros hay?",
"ANet = nx.adjacency_matrix(Net)\n\nnzeros=Net.number_of_nodes()*Net.number_of_nodes()-len(ANet.data)\nprint ('La Red tiene: ', nzeros, ' ceros')\n\ndel Net, n, posibles, ANet, nzeros",
"Webgraph from the Google programming contest, 2002 - Directed http://snap.stanford.edu/data/web-Google.html\nCalcule la proporción entre número de links existentes contra el número de links posibles.",
"F = open(\"web-Google.txt\",'r')\nNet=nx.read_edgelist(F)\nF.close()\n\nn = Net.number_of_nodes()\nposibles = Net.number_of_nodes()*(Net.number_of_nodes()-1.0)/2.0\nprint ('Ratio: ', Net.number_of_edges()/posibles)",
"En la matriz de adyacencia de cada uno de las redes elegidas, cuantos ceros hay?",
"ANet = nx.adjacency_matrix(Net)\n\nnzeros=Net.number_of_nodes()*Net.number_of_nodes()-len(ANet.data)\nprint ('La Red tiene: ', nzeros, ' ceros')\n\ndel Net, n, posibles, ANet, nzeros",
"Ejercicio - Redes Bipartitas\nDefina una red bipartita y genere ambas proyecciones, explique qué son los nodos y links tanto de la red original como de las proyeccciones\nSe define una red donde los nodes E1, E2 y E3 son Estaciones de Bus, y se definen los nodos R101, R250, R161, R131 y R452 como rutas de buses.",
"B = nx.Graph()\nB.add_nodes_from(['E1','E2', 'E3'], bipartite=0)\nB.add_nodes_from(['R250', 'R161', 'R131', 'R452','R101'], bipartite=1)\nB.add_edges_from([('E1', 'R250'), ('E1', 'R452'), ('E3', 'R250'), ('E3', 'R131'), ('E3', 'R161'), ('E3', 'R452'), ('E2', 'R161'), ('E2', 'R101'),('E1', 'R131')])\nB1=nx.algorithms.bipartite.projected_graph(B, ['E1','E2', 'E3'])\nB2=nx.algorithms.bipartite.projected_graph(B,['R250', 'R161', 'R131', 'R452'])\n\nvalue =np.zeros(len(B.nodes()))\ni = 0\nfor node in B.nodes():\n if any(node == a for a in B1.nodes()):\n value[i] = 0.25\n if any(node == a for a in B2.nodes()):\n value[i] = 0.75 \n i += 1\n\n\nfig, ax = plt.subplots(1, 3, num=1)\nplt.sca(ax[1])\nax[1].set_title('Bipartita')\nnx.draw(B, with_labels = True, cmap=plt.get_cmap('summer'), node_color=value)\nplt.sca(ax[0])\nax[0].set_title('Proyeccion A')\nnx.draw(B1, with_labels = True, cmap=plt.get_cmap('summer'), node_color=np.ones(len(B1.nodes()))*0.25)\nplt.sca(ax[2])\nnx.draw(B2, with_labels = True, cmap=plt.get_cmap('summer'), node_color=0.75*np.ones(len(B2.nodes())))\nax[2].set_title('Proyeccion B')\n\nplt.show()",
"La proyección A representa la comunicación entre Estaciones mediante el flujo de las rutas de buses, La proyección B representa la posible interacción o \"encuentros\" entre las rutas de buses en función de las estaciones.\nEjercicio - Paths\nCree un grafo de 5 nodos con 5 enlaces. Elija dos nodos cualquiera e imprima:\n5 Paths diferentes entre los nodos\nEl camino mas corto entre los nodos\nEl diámetro de la red\nUn self-avoiding path",
"Nodes = [1, 2, 3, 4, 5]\nnEdges = 5\n\ntemp = []\nfor subset in itertools.combinations(Nodes, 2):\n temp.append(subset)\nEdges = random.sample(temp, nEdges)\n\nEdges\nG = nx.Graph()\nG.add_edges_from(Edges)\nnx.draw(G, with_labels = True)\nplt.show()\n\nGrafo = {\n 1 : []\n , 2 : []\n , 3 : []\n , 4 : []\n , 5 : []\n \n}\nfor i in Edges:\n Grafo[i[0]].append(i[1])\n Grafo[i[1]].append(i[0])\n\ndef pathGen(Inicio, Fin):\n\n flag=False\n\n actual = Inicio\n temp = []\n cont = 0\n while not flag:\n temp.append(actual)\n actual = random.sample(Grafo[actual], 1)[0]\n if actual == Fin:\n flag = True\n temp.append(actual)\n break \n return temp\n\nprint \"Un posible path entre el nodo 5 y 4 es: \", pathGen(5,3)\nprint \"Un posible path entre el nodo 5 y 4 es: \", pathGen(5,3)\nprint \"Un posible path entre el nodo 5 y 4 es: \", pathGen(5,3)\nprint \"Un posible path entre el nodo 5 y 4 es: \", pathGen(5,3)\nprint \"Un posible path entre el nodo 5 y 4 es: \", pathGen(5,3)\n\nvisited = {i : False for i in xrange(1, 6)}\n\ndef shortest(a, b, length = 0):\n global visited, Grafo\n if b == a : return length\n \n minL = float('inf')\n for v in Grafo[a]:\n if not visited[v]:\n visited[v] = True\n minL = min(minL, 1 + shortest(v, b))\n visited[v] = False\n return minL\n\nprint 'El camino mas corto entre los nodos 5 y 3 es: ', shortest(5, 3)\n\ntemp = []\n\nfor subset in itertools.combinations(Nodes, 2):\n temp.append(subset)\n\nmaxL = 0\nfor i in temp:\n maxL=max(maxL,shortest(i[0], i[1]))\nprint 'La diametro de la Red es, ', maxL\n\ndef avoidpathGen(Inicio, Fin):\n\n flag=False\n\n actual = Inicio\n temp = []\n past = []\n cont = 0\n while not flag:\n temp.append(actual)\n past.append(actual)\n temp2 = random.sample(Grafo[actual], 1)[0]\n while not len(np.intersect1d(past,temp2)) == 0:\n temp2 = random.sample(Grafo[actual], 1)[0]\n actual = temp2\n if actual == Fin:\n flag = True\n temp.append(actual)\n break \n return temp\n\nprint 'Un self-avoiding path del nodo 5 a 3 es: ', 
avoidpathGen(5,3)",
"Ejercicio - Componentes\nBaje una red real (http://snap.stanford.edu/data/index.html) y lea el archivo\nSocial circles from Facebook (anonymized) - Undirected http://snap.stanford.edu/data/egonets-Facebook.html",
"F = open(\"youtube.txt\",'r')\nNet1=nx.read_edgelist(F)\nF.close()\n\nprint 'La red tiene: ',nx.number_connected_components(Net1), ' componentes'",
"Implemente el algorithmo Breadth First para encontrar el número de componentes (revise que el resultado es el mismo que utilizando la librería)",
"Edges = Net1.edges()\n\nlen(Edges)\n\ndef netgen(nn, ne):\n nod = [i for i in range(nn)]\n nEdges = ne\n temp = []\n for subset in itertools.combinations(nod, 2):\n temp.append(subset)\n edg = random.sample(temp, nEdges)\n return edg, nod\n\nG = nx.Graph()\nedges, nodes = netgen(10, 7)\n\nG.add_edges_from(edges)\nnx.draw(G, with_labels = True)\nplt.show()\n\nnx.number_connected_components(G)\n\ndef componts(nod, edg):\n dgraf = {}\n for i in nod:\n dgraf[i] = []\n for i in edg:\n dgraf[i[0]].append(i[1])\n dgraf[i[1]].append(i[0])\n empty = nod[:]\n cont = -1\n Labels = {}\n for i in nod:\n Labels[i] = -1\n \n while (len(empty) is not 0):\n cont += 1\n temp = random.sample(empty, 1)\n if Labels[temp[0]] is -1:\n value = cont\n else:\n value = Labels[temp[0]]\n Labels[temp[0]] = value\n empty.remove(temp[0])\n \n for i in dgraf[temp[0]]:\n Labels[i] = value\n if not any_in(dgraf[i], empty):\n if i in empty:\n empty.remove(i)\n print empty\n \n \n return Labels, cont\n\nLab, comp = componts(nodes, edges)\nfor i in range(10):\n print i, Lab[i]\nprint comp\nprint edges\n\nplt.bar(Lab.keys(), Lab.values(), color='g')\nplt.show()\n\nany_in([1,2],[2,3,4,5,6,7])\n\nany_in = lambda a, b: any(i in b for i in a)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
james-prior/cohpy
|
20171116-dojo-classification-of-letters-of-roman-numerals.ipynb
|
mit
|
[
"Someone else had some code in some other language that was something like the following couple of cells.",
"roman_letter_values = (\n ('I', 1),\n ('V', 5),\n ('X', 10),\n ('L', 50),\n ('C', 100),\n ('D', 500),\n ('M', 1000),\n)\n\nfor i, (letter, value) in enumerate(roman_letter_values):\n if i % 2 == 1:\n print(i, letter, value)",
"I know what the code above means, but I have to think about it.\nIt was hard to understand,\nbecause the test above as repeated below, is so indirect.\nif i % 2 == 1:\n\nCompare that with the directness of the following test from cell #12.\nif significant_digits(value) == 5:",
"def prime_factors(x):\n divisor = 2\n while divisor < x:\n if x % divisor == 0:\n yield divisor\n x //= divisor\n continue\n divisor += 1\n yield x\n\nfor letter, value in roman_letter_values:\n print(letter, value, list(prime_factors(value)))\n\ndef significant_digits(x):\n return int(''.join(reversed(str(x))))\n\ndef foo():\n for letter, value in roman_letter_values:\n print(letter, value, significant_digits(value))\n\nfoo()\n\ndef significant_digits(x):\n return int(str(x).rstrip('0'))\n\nfoo()\n\ndef significant_digits(x):\n while x % 10 == 0:\n x //= 10\n return x\n\nfoo()\n\nfor letter, value in roman_letter_values:\n if significant_digits(value) == 5:\n print(letter, value)",
"Of the three different implementations of significant_digits() above,\nwhich do you like the most?\nWhich do you dislike the most?\nHow would you write significant_digits()?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
heprom/pymicro
|
examples/SampleDataUserGuide/1_Getting_Information_from_SampleData_datasets.ipynb
|
mit
|
[
"2 - Getting started with SampleData : Exploring dataset contents\nThis second User Guide tutorial will introduce you to:\n\ncreate and open datasets with the SampleData class\nthe SampleData Naming System\nhow to get informations on a dataset content interactively\nhow to use the external software Vitables to visualize the content and organization of a dataset\nhow to use the Paraview software to visualize the spatially organized data stored in datasets\nhow to use generic HDF5 command line tools to print the content of your dataset\n\nYou will find a short summary of all methods reviewed in this tutorial at the end of this page. \n<div class=\"alert alert-info\">\n\n**Note** \n\nThroughout this notebook, it will be assumed that the reader is familiar with the overview of the SampleData file format and data model presented in the [previous notebook](./SampleData_Introduction.ipynb) of this User Guide.\n\n</div>\n\n<div class=\"alert alert-warning\">\n\n**Warning** \n\nThis Notebook review the methods to get information on SampleData HDF5 datsets content. Some of the methods detailed here produce very long outputs, that have been conserved in the documentation version. Reading completely the content of this output is absolutely not necessary to learn what is detailed on this page, they are just provided here as examples. So do not be afraid and fill free to scroll down quickly when you see large prints !\n</div>\n\nI - Create and Open datasets with the SampleData class\nIn this first section, we will see how to create SampleData datasets, or open pre-existing ones. These two operations are performed by instantiating a SampleData class object. \nBefore that, you will need to import the SampleData class. We will import it with the alias name SD, by executing:\nImport SampleData and get help",
"from pymicro.core.samples import SampleData as SD",
"Before starting to create our datasets, we will take a look at the SampleData class documenation, to discover the arguments of the class constructor. You can read it on the pymicro.core package API doc page, or print interactively by executing:\n```python\n\n\n\nhelp(SD)\nor, if you are working with a Jupyter notebook, by executing the magic command:\n?SD\n```\n\n\n\nDo not hesitate to systematically use the help function or the \"?\" magic command to get information on methods when you encounter a new one. All SampleData methods are documented with explicative docstrings, that detail the method arguments and returns.\nDataset creation\nThe class docstring is divided in multiple rubrics, one of them giving the list of the class constructor arguments. \nLet us review them one by one.\n\nfilename: basename of the HDF5/XDMF pair of file of the dataset\n\nThis is the first and only mandatory argument of the class constructor. If this string corresponds to an existing file, the SampleData class will open these file, and create a file instance to interact with this already existing dataset. If the filename do not correspond to an existing file, the class will create a new dataset, which is what we want to do here.\nLet us create a SampleData dataset:",
"data = SD(filename='my_first_dataset')",
"That is it. The class has created a new HDF5/XDMF pair of files, and associated the interface with this dataset to the variable data. No message has been returned by the code, how can we know that the dataset has been created ?\nWhen the name of the file is not an absolute path, the default behavior of the class is to create the dataset in the current work directory. Let us print the content of this directory then !",
"import os # load python module to interact with operating system\ncwd = os.getcwd() # get current directory\nfile_list = os.listdir(cwd) # get content of current work directory\nprint(file_list,'\\n')\n\n# now print only files that start with our dataset basename\nprint('Our dataset files:')\nfor file in file_list:\n if file.startswith('my_first_dataset'):\n print(file)",
"The two files my_first_dataset.h5 and my_first_dataset.xdmf have indeed been created. \nIf you want interactive prints about the dataset creation, you can set the verbose argument to True. This will set the activate the verbose mode of the class. When it is, the class instance prints a lot of information about what it is doing. This flag can be set by using the set_verbosity method:",
"data.set_verbosity(True)",
"Let us now close our dataset, and see if the class instance prints information about it:",
"del data",
"<div class=\"alert alert-info\">\n\n**Note** \n\nIt is a good practice to always delete your `SampleData` instances once you are done working with a dataset, or if you want to re-open it. As the class instance handles opened files as long as it exists, deleting it ensures that the files are properly closed. Otherwise, file may close at some random times or stay opened, and you may encounter undesired behavior of your datasets.\n\n</div>\n\nThe class indeed returns some prints during the instance destruction. As you can see, the class instance wrights data into the pair of files, and then closes the dataset instance and the files. \nDataset opening and verbose mode\nLet us now try to create a new SD instance for the same dataset file \"my_first_dataset\". As the dataset files (HDF5, XDMF) already exist, this new SampleData instance will open the dataset files and synchronize with them. with the verbose mode on. When activated, SampleData class instances will display messages about the actions performed by the class (creating, deleting data items for instance)",
"data = SD(filename='my_first_dataset', verbose=True)",
"You can see that the printed information states that the dataset file my_first_dataset.h5 has been opened, and not created. This second instantiation of the class has not created a new dataset, but instead, has opened the one that we have just closed. Indeed, in that case, we provided a filename that already existed.\nSome information about the dataset content are also printed by the class. This information can be retrived with specific methods that will be detailed in the next section of this Notebook. Let us focus for now on one part of it. \nThe printed info reveals that our dataset content is composed only of one object, a Group data object named /. This group is the Root Group of the dataset. Each dataset has necessarily a Root Group, automatically created along with the dataset. You can see that this Group already have a Child, named Index. This particular data object will be presented in the third section of this Notebook. You can also observe that the Root Group already has attributes (recall from introduction Notebook that they are Name/Value pairs used to store metadata in datasets). Two of those attributes match arguments of the SampleData class constructor:\n\nthe description attribute\nthe sample_name attribute\n\nThe description and sample_name are not modified in the dataset when reading a dataset. These SD constructor arguments are only used when creating a dataset. They are string metadata whose role is to give a general name/title to the dataset, and a general description. \nHowever, they can be set or changed after the dataset creation with the methods set_sample_name and set_description, used a little further in this Notebook.\nNow we know how to open a dataset previously created with SampleData. We could want to open a new dataset, with the name of an already existing data, but overwrite it. The SampleData constructor allows to do that, and we will see it in the next subsection. But first, we will close our dataset again:",
"del data",
"Overwriting datasets\nThe overwrite_hdf5 argument of the class constructor, if it is set to True, will remove the filename dataset and create a new empty one, if this dataset already exists:",
"data = SD(filename='my_first_dataset', verbose=True, overwrite_hdf5=True)",
"As you can see, the dataset files have been overwritten, as requested. We will now close our dataset again and continue to see the possibilities offered by the class constructor.",
"del data",
"Copying dataset\nOne last thing that may be interesting to do with already existing dataset files, is to create a new dataset that is a copy of them, associated with a new class instance. This is usefull for instance when you have to try new processing on a set of valuable data, without risking to damage the data. \nTo do this, you may use the copy_sample method of the SampleData class. Its main arguments are:\n\nsrc_sample_file: basename of the dataset files to copy (source file)\ndst_sample_file: basename of the dataset to create as a copy of the source (desctination file)\nget_object: if False, the method will just create the new dataset files and close them. If True, the method will leave the files open and return a SampleData instance that you may use to interact with your new dataset.\n\nLet us try to create a copy of our first dataset:",
"data2 = SD.copy_sample(src_sample_file='my_first_dataset', dst_sample_file='dataset_copy', get_object=True)\n\ncwd = os.getcwd() # get current directory\nfile_list = os.listdir(cwd) # get content of current work directory\nprint(file_list,'\\n')\n\n# now print only files that start with our dataset basename\nprint('Our dataset files:')\nfor file in file_list:\n if file.startswith('dataset_copy'):\n print(file)",
"The copy_dataset HDF5 and XDMF files have indeed been created, and are a copy of the my_first_dataset HDF5 and XDMF files.\nNote that the copy_sample is a static method, that can be called even without SampleData instance. Note also that it has a overwrite argument, that allows to overwrite an already existing dst_sample_file. It also has, like the class constructor, a autodelete argument, that we will discover in the next subsection.\nAutomatically removing dataset files\nIn some occasions, we may want to remove our dataset files after using our SampleData class instance. This can be the case for instance if you are trying some new data processing, or using the class for visualization purposes, and are not interested in keeping your test data. \nThe class has a autodelete attribute for this purpose. IF it is set to True, the class destructor will remove the dataset file pair in addition to deleting the class instance. The class constructor and the copy_sample method also have a autodelete argument, which, if True, will automatically set the class instance autodelete attribute to True.\nTo illustrate this feature, we will try to change the autodelete attribute of our copied dataset to True, and remove it.",
"# set the autodelete argument to True\ndata2.autodelete = True\n# Set the verbose mode on for copied dataset \ndata2.set_verbosity(True)\n\n# Close copied dataset\ndel data2",
"The class destructor ends by priting a confirmation message of the dataset files removal in verbose mode, as you can see in the cell above.\nLet us verify that it has been effectively deleted:",
"file_list = os.listdir(cwd) # get content of current work directory\nprint(file_list,'\\n')\n\n# now print only files that start with our dataset basename\nprint('Our copied dataset files:')\nfor file in file_list:\n if file.startswith('dataset_copy'):\n print(file)",
"As you can see, the dataset files have been removed. Now we can also open and remove our first created dataset using the class constructor autodelete option:",
"data = SD(filename='my_first_dataset', verbose=True, autodelete=True)\n\nprint(f'Is autodelete mode on ? {data.autodelete}')\n\ndel data\n\nfile_list = os.listdir(cwd) # get content of current work directory\nprint(file_list,'\\n')\n\n# now print only files that start with our dataset basename\nprint('Our dataset files:')\nfor file in file_list:\n if file.startswith('my_first_dataset'):\n print(file)",
"Now you know how to create or open SampleData datasets. Before starting to explore their content in detail, a last feature of the SampleData class must be introduced: the naming system and conventions used to create or access data items in datasets.\n<div class=\"alert alert-info\">\n\n**Note** \n\nUsing the **autodelete** option is useful when you are using the class for trials or tests, and do not want to keep the dataset files on your computer. It is also a proper way to remove a SampleData dataset, as it removes both files in a single step.\n\n</div>\n\nII - The SampleData Naming system\nSampleData datasets are composed of a set of organized data items. When handling datasets, you will need to specify which item you want to interact with or create. The SampleData class provides 4 different ways to refer to data items. The first type of data item identifier is:\n\nthe Path of the data item in the HDF5 file. \n\nJust as a file within a filesystem has a path, HDF5 data items have a Path within the dataset. Each data item is the child of an HDF5 Group (analogous to a file contained in a directory), and each Group may also be the child of a Group (analogous to a directory contained in a directory). The origin group is called the root group, and has the path '/'. The Path offers a completely unambiguous way to designate a data item within the dataset, as it is unique. A typical path of a data item looks like this: /Parent_Group1/Parent_Group2/ItemName. However, paths can become very long strings, and are usually not a convenient way to name data items. For that reason, you can also refer to items in SampleData methods using:\n\nthe Name of the data item.\n\nIt is the last element of its Path, the part that comes after the last / character. For a data item that has the path /Parent_Group1/Parent_Group2/ItemName, the Name is ItemName. 
It allows you to refer quickly to the data item without writing its whole Path.\nHowever, note that two different data items may have the same Name (but different paths), so it may be necessary to use additional names to refer to them unambiguously without having to write their full path. In addition, it may be convenient to be able to use, in addition to its storage name, one or more additional and meaningful names to designate a data item. For these reasons, two additional identifiers can be used:\n\nthe Indexname of the data item\nthe Alias or aliases of the data item\n\nThese two types of identifiers are strings that can be used as additional data item Names. They play completely similar roles. The Indexname is also used in the dataset Index (see below), which gathers the data item indexnames together with their paths within the dataset. All data items must have an Indexname, which can be identical to their Name. If additional names are given to a data item, they are stored as Aliases.\nMany SampleData methods have a nodename or name argument. Every time you encounter it, you may use one of the 4 identifiers presented in this section to provide the name of the data item you want to create or interact with. Many examples will follow in the rest of this Notebook, and of this User Guide.\nLet us now move on to discover the methods that allow us to explore the datasets content.\nIII- Interactively get information on datasets content\nThe goal of this section is to review the various ways to get interactive information on your SampleData dataset (interactive in the sense that you can get them by executing SampleData class method calls in a Python interpreter console).\nFor this purpose, we will use a pre-existing dataset that already has some data stored, and look into its content. This dataset is a reference SampleData dataset used for the core package unit tests.",
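"The relations between these identifiers can be sketched with plain Python strings. This is a minimal illustration with hypothetical item names, requiring no SampleData instance:

```python
# The Path locates an item unambiguously within the HDF5 file:
path = '/Parent_Group1/Parent_Group2/ItemName'

# The Name is the last element of the Path, after the final '/' character:
name = path.rsplit('/', 1)[-1]
print(name)  # ItemName

# Indexnames and aliases are simply additional strings mapped to the same
# item, as in the class content_index and aliases dictionaries:
content_index = {'item_indexname': path}
aliases = {'item_indexname': ['item_alias']}
```
",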
"from config import PYMICRO_EXAMPLES_DATA_DIR # import file directory path\nimport os\ndataset_file = os.path.join(PYMICRO_EXAMPLES_DATA_DIR, 'test_sampledata_ref') # test dataset file path\ndata = SD(filename=dataset_file)",
"1- The Dataset Index\nAs explained in the previous section, all data items have a Path and an Indexname. The collection of Indexname/Path pairs forms the Index of the dataset. For each SampleData dataset, an Index Group is stored in the root Group, and the collection of those pairs is stored as attributes of this Index Group. Additionally, a class attribute content_index stores them as a dictionary in the class instance, and allows accessing them easily. The dictionary and the Index Group attributes are automatically synchronized by the class. \nLet us look at the dictionary content:",
"data.content_index",
"You should see the dictionary keys, which are names of data items, and the associated values, which are HDF5 paths. You can also see the data item Names at the end of their Paths. The data item aliases are also stored in a dictionary, which is an attribute of the class, named aliases:",
"data.aliases",
"You can see that this dictionary only contains keys for data items that have additional names, and also that those keys are the data item indexnames.\nThe dataset index can be printed together with the aliases, with a prettier aspect, by calling the method print_index:",
"data.print_index()",
"This method prints the content of the dataset Index, with a given depth and from a specific root. The depth is the number of parents that a data item has. The root Group thus has a depth of 0, its children a depth of 1, the children of its children a depth of 2, and so on. The local root argument can be changed, to print the Index only for data items that are children of a specific group. When used without arguments, print_index uses a depth of 3 and the dataset root as default settings.\nAs you can see, our dataset already contains some data items. We can already identify at least 3 HDF5 Groups (test_group, test_image, test_mesh), as they have children, and a lot of other data items. \nLet us try to print the Index with different parameters. To start, let us print the Index from a different local root, for instance the group with the path /test_image. The way to do this is to use the local_root argument, to which we give the value of the /test_image path.",
"data.print_index(local_root=\"/test_image\")",
"The print_index method's local_root argument needs the name of the Group whose children Index must be printed. As explained in section II, you may use other identifiers than its Path for this. Let us try its Name (the last part of its path), which is test_image, or its Indexname, which is image:",
"data.print_index(local_root=\"test_image\")\ndata.print_index(local_root=\"image\")",
"As you can see, the result is the same in the 3 cases.\nLet us now print the dataset Index with a maximal data item depth of 2, using the max_depth argument:",
"data.print_index(max_depth=2)",
"Of course, you can combine those two arguments:",
"data.print_index(max_depth=2, local_root='mesh')",
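"The depth notion used by print_index can be illustrated with a small hypothetical helper that counts the parents of an item from its path (assuming the root '/' has depth 0):

```python
# Hypothetical helper computing an item's depth from its HDF5 path:
# the number of parent Groups, with the root Group '/' at depth 0.
def item_depth(path):
    if path == '/':
        return 0
    return path.rstrip('/').count('/')

print(item_depth('/'))                        # 0
print(item_depth('/test_image'))              # 1
print(item_depth('/test_image/Field_index'))  # 2
```
",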
"The print_index method is useful to get a glimpse of the content and organization of the whole dataset, or some part of it, and to quickly see the short indexnames or aliases that you can use to refer to data items. \nTo add aliases to data items or Groups, you can use the add_alias method.\nThe Index allows you to quickly see the internal structure of your dataset; however, it does not provide detailed information on the data items. We will now see how to retrieve it with the SampleData class.\n2- The Dataset content\nThe SampleData class provides a method to print an organized and detailed overview of the data items in the dataset, the print_dataset_content method. Let us see what this method prints when called with no arguments:",
"data.print_dataset_content()",
"As you can see, this method prints, by increasing depth, detailed information on each Group and each data item of the dataset, with a maximum depth that can be specified with a max_depth argument (which, like for the print_index method, has a default value of 3). The printed output is structured by groups: each Group that has children is described by a first set of information, followed by a Group CONTENT string that describes all of its children. \nFor each data item or Group, the method prints its name, path, type, attributes, content, compression settings and memory size if it is an array, and children names if it is a Group. Hence, when calling this method, you can see the content and organization of the dataset, all the metadata attached to the data items, and the disk size occupied by each data item. As you progress through this tutorial, you will learn the meaning of this information for all types of SampleData data items.\nThe print_dataset_content method has a short boolean argument, which allows printing a condensed string representation of the dataset:",
"data.print_dataset_content(short=True)",
"This shorter print can be read easily, provides a complete and visual overview of the dataset organization, and indicates the memory size and type of each data item or Group in the dataset. The printed output distinguishes Group data items from Node data items. The latter regroup all types of arrays that may be stored in the HDF5 file. \nBoth the short and long versions of the print_dataset_content output can be written into a text file, if a filename is provided as a value for the to_file method argument:",
"data.print_dataset_content(short=True, to_file='dataset_information.txt')\n\n# Let us open the content of the created file, to see if the dataset information has been written in it:\n%cat dataset_information.txt",
"<div class=\"alert alert-info\">\n\n**Note** \n\nThe string representation of the *SampleData* class is composed of a first part, which is the output of the `print_index` method, and a second part, which is the output of the `print_dataset_content` method (short output).\n</div>",
"# SampleData string representation :\nprint(data)",
"Now you know how to get a detailed overview of the dataset content. However, with large datasets that may have a complex internal organization (many Groups, lots of data items and metadata...), the print_dataset_content output string can become very large. In this case, it becomes cumbersome to look for specific information on a Group or on a particular data item. For this reason, the SampleData class provides methods to print information on only one or a few data items of the dataset. They are presented in the next subsections.\n3- Get information on data items\nTo get information on a specific data item (including Groups), you may use the print_node_info method. This method has 2 arguments: the name argument and the short argument. As explained in section II, the name argument can be one of the 4 possible identifiers that the target node can have (name, path, indexname or alias). The short argument has the same effect on the printed output as for the print_dataset_content method; its default value is False, i.e. the detailed output. Let us look at some examples.\nFirst, suppose we want information on the Image Group that is stored in the dataset. The print_index and short print_dataset_content outputs allowed us to see that this group has the name test_image, the indexname image, and the path /test_image. We will call the method with two of those identifiers, and with the two possible values of the short argument.",
"# Method called with data item indexname, and short output\ndata.print_node_info(nodename='image', short=True)\n\n# Method called with data item Path and long output\ndata.print_node_info(nodename='/test_image', short=False)",
"You can observe that this method prints the same block of information as the one that appeared in the print_dataset_content method output for the description of the test_image group. With this block, we can learn that this Group is a child of the root Group ('/'), and that it has two children, the data items named Field_index and test_image_field. We can also see its attribute names and values. Here they provide information on the nature of the Group, which is a 3D image group, and on the topology of this image (for instance, that it is a 9x9x9 voxel image, of size 0.2).\nLet us now apply this method to a data item that is not a group, the test_array data item. The print_index output showed us that this node has an alias name, test_alias. We will use it here to get information on this node, to illustrate the use of the only type of node identifier that has not been used throughout this notebook:",
"data.print_node_info('test_alias')",
"Here, we can learn who the Node's parent is, what the Node's Name is, see that it has no attributes, that it is an array of shape (51,), that it is stored without data compression (compression level of 0), and that it occupies a disk space of 64 Kb.\nThe print_node_info method is useful to get information on a specific target, and to avoid dealing with the sometimes too large output returned by the print_dataset_content method. \n4- Get information on Groups content\nThe previous subsection showed that the print_node_info method applied to Groups returns only information about the group name, metadata and children names. The SampleData class offers a method that prints this information with, in addition, the detailed content of each child of the target group: the print_group_content method.\nLet us try it on the Mesh group of our test dataset:",
"data.print_group_content(groupname='test_mesh')",
"Obviously, this method is identical to the print_dataset_content method, but restricted to one Group. Like the former, it has to_file, short and max_depth arguments. These arguments work just as for the print_dataset_content method, hence their use is not detailed here. However, you may notice one difference here. In the output printed above, we see that the test_mesh group has a Geometry child which is a Group, but whose content is not printed. The print_group_content method has indeed, by default, a non-recursive behavior. To get a recursive print of the group content, you must set the recursive argument to True:",
"data.print_group_content('test_mesh', recursive=True)",
"As you can see, the information on the children of the Geometry group has been printed. Note that the max_depth argument is considered by this method as an absolute depth, meaning that you have to specify a depth that is at least the depth of the target group to see some output printed for the group content. The default maximum depth for this method is set to a very high value of 1000. Hence, print_group_content prints group contents by default with a recursive behavior. Note also that print_group_content with the recursive option is equivalent to print_dataset_content, but prints the dataset content as if the target group were the root.\n5- Get information on grids\nOne of the SampleData class's main functionalities is the manipulation and storage of spatially organized data, which is handled by Grid groups in the data model. Because they are usually key data for mechanical sample datasets, the SampleData class provides a method to print Group information only for Grid groups, the print_grids_info method:",
"data.print_grids_info()",
"This method also has the to_file and short arguments of the print_dataset_content method:",
"data.print_grids_info(short=True, to_file='dataset_information.txt')\n%cat dataset_information.txt",
"6- Get xdmf tree content\nAs explained in the first Notebook of this User Guide, grid Groups and their associated data are stored in a dual format by the SampleData class. This dual format is composed of the dataset HDF5 file, and an associated XDMF file containing metadata, describing Grid group topologies, data types and fields. \nThe XDMF file is handled in the SampleData class by the xdmf_tree attribute, which is an element tree instance created with the lxml.etree module of the lxml package:",
"data.xdmf_tree",
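"As an aside, since xdmf_tree is an XML element tree, standard XML tooling applies to it. A minimal sketch, using the standard library xml.etree module as a stand-in for lxml.etree and a hypothetical, simplified grid entry (not the actual structure written by SampleData):

```python
import xml.etree.ElementTree as ET  # stand-in for lxml.etree

# Build a minimal XDMF-like tree by hand:
root = ET.Element('Xdmf', Version='3.0')
domain = ET.SubElement(root, 'Domain')
grid = ET.SubElement(domain, 'Grid', Name='test_image', GridType='Uniform')

# Serializing the in-memory tree is essentially what print_xdmf does:
print(ET.tostring(root, encoding='unicode'))
```
",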
"The XDMF file is synchronized with the in-memory xdmf_tree attribute when calling the sync method, or when deleting the SampleData instance. However, you may want to look at the content of the XDMF tree while you are interactively using your SampleData instance. In this case, you can use the print_xdmf method:",
"data.print_xdmf()",
"As you can observe, you get a print of the content of the XDMF file that would be written if you closed the file right now. You can observe that the XDMF file provides information on the grids that matches the Group and Node attributes printed above with the previously studied methods: the test image is a regular grid of 10x10x10 nodes, i.e. a 9x9x9 voxel grid. Only one field is defined on test_image, test_image_field, whereas two are defined on test_mesh.\nThis XDMF file can directly be opened in Paraview, if both files are closed. If any syntax or formatting issue is encountered when Paraview reads the XDMF file, it will return an error message and the data visualization will not be rendered. The print_xdmf method allows you to verify your XDMF data and syntax, to make sure that the data formatting is correct.\n7- Get memory size of file and data items\nSampleData is designed to create large datasets, with data items that can represent tens of gigabytes of data or more. Being able to easily see and identify which data items use the most disk space is a crucial aspect of data management. Until now, with the methods we have reviewed, we have only been able to print the Node disk sizes together with a lot of other information. In order to speed up this process, the SampleData class has a method that allows directly querying and printing only the memory size of a Node, the get_node_disk_size method:",
"data.get_node_disk_size(nodename='test_array')",
"As you can see, the default behavior of this method is to print a message indicating the Node disk size, and also to return a tuple containing the value of the disk size and its unit. If you want the size in bytes, you may call this method with the convert argument set to False:",
"data.get_node_disk_size(nodename='test_array', convert=False)",
"If you want to use this method to get a numerical value within a script, but do not want the class to print anything, you can use the print_flag argument:",
"size, unit = data.get_node_disk_size(nodename='test_array', print_flag=False)\nprint(f'Printed by script: node size is {size} {unit}')\n\nsize, unit = data.get_node_disk_size(nodename='test_array', print_flag=False, convert=False)\nprint(f'Printed by script: node size is {size} {unit}')",
"The disk size of the whole HDF5 file can also be printed/returned, using the get_file_disk_size method, which has the same print_flag and convert arguments:",
"data.get_file_disk_size()\n\nsize, unit = data.get_file_disk_size(convert=False, print_flag=False)\nprint(f'\\nPrinted by script: file size is {size} {unit}')",
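"The unit conversion performed by the convert argument can be sketched as follows. This is a hypothetical helper mimicking the behavior, not the actual SampleData implementation, and it assumes 1024-based units:

```python
# Hypothetical byte-to-unit conversion, similar in spirit to what the
# convert argument of get_node_disk_size does (assumed 1024-based units):
def convert_bytes(size):
    units = ['bytes', 'Kb', 'Mb', 'Gb', 'Tb']
    i = 0
    while size >= 1024 and i < len(units) - 1:
        size /= 1024
        i += 1
    return round(size, 3), units[i]

print(convert_bytes(64 * 1024))  # (64.0, 'Kb')
print(convert_bytes(500))        # (500, 'bytes')
```
",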
"8- Get nodes/groups attributes (metadata)\nAnother central aspect of the SampleData class is the management of metadata, which can be attached to all Groups or Nodes of the dataset. Metadata come in the form of HDF5 attributes, which are Name/Value pairs, and which we already encountered when exploring the outputs of methods like print_dataset_content or print_node_info.\nThose methods print the Group/Node attributes together with other information. To print only the attributes of a given data item, you can use the print_node_attributes method:",
"data.print_node_attributes(nodename='test_mesh')",
"As you can see, this method prints a list of all the data item's attributes, with the format * Name : Value \\n.\nIt allows you to quickly see which attributes are stored together with a given data item, and their values. \nIf you want to get the value of a specific attribute, you can use the get_attribute method. It takes two arguments: the name of the attribute you want to retrieve, and the name of the data item where it is stored:",
"Nnodes = data.get_attribute(attrname='number_of_nodes', nodename='test_mesh')\nprint(f'The mesh test_mesh has {Nnodes} nodes')",
"You can also get all attributes of a data item as a dictionary. In this case, you just need to specify the name of the data item from which you want attributes, and use the get_dic_from_attributes method:",
"mesh_attrs = data.get_dic_from_attributes(nodename='test_mesh')\n\nfor name, value in mesh_attrs.items():\n print(f' Attribute {name} is {value}')",
"We have now seen how to explore all the types of information that a SampleData dataset may contain, individually or all together, interactively, from a Python console. Let us now review how to explore the content of SampleData datasets with external software.\nIV - Visualize dataset contents with Vitables\nAll the information that you can get with the methods presented in the previous section can also be accessed externally by opening the HDF5 dataset file with the Vitables software. This software is usually part of the Pytables package, which is a dependency of pymicro. You should be able to use it in a Python environment compatible with pymicro. If needed, you may refer to the Vitables website to find download and installation instructions for PyPi or conda: https://vitables.org/.\nVitables provides a graphical interface that allows you to browse through all your dataset data items, and access or modify their stored data and metadata values. You may either open Vitables and then open your HDF5 dataset file from the Vitables interface, or you can directly open Vitables to read a specific file from the command line, by running:\nvitables my_dataset_path.h5. \nThis command will work only if your dataset file is closed (if the SampleData instance still exists in your Python console, this will not work; you first need to delete your instance to close the files). \nHowever, the SampleData class has a specific method allowing you to open your dataset with Vitables interactively, directly from your Python console: the pause_for_visualization method. As explained just above, this method closes the XDMF and HDF5 datasets, and runs in your shell the command vitables my_dataset_path.h5. Then, it freezes the interactive Python console and keeps the dataset files closed for as long as the Vitables software is running. When Vitables is shut down, the SampleData class reopens the HDF5 and XDMF files, synchronizes with them and resumes the interactive Python console. 
\n<div class=\"alert alert-warning\">\n\n**Warning** \n\nWhen calling the `pause_for_visualization` method from a python console (ipython, Jupyter...), you may face environment issues leading to your shell not finding the proper *Vitables* executable. To ensure that the right *Vitables* is found, the method can take an optional argument `Vitables_path`, which must be the path of the *Vitables* executable. If this argument is passed, the method will run, after closing the HDF5 and XDMF files, the command \n`Vitables_path my_dataset_path.hdf5`\n</div>\n\n<div class=\"alert alert-info\">\n\n**Note** \n\nThe method is not called here to allow automatic execution of the Notebook when building the documentation on a platform that does not have Vitables available.\n</div>",
"# uncomment to test \n# data.pause_for_visualization(Vitables=True, Vitables_path='Path_to_Vitables_executable')",
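"Conceptually, resolving the executable and building the shell command is the part of this mechanism that can be sketched without launching any software. The helper below is hypothetical, not the actual pause_for_visualization implementation:

```python
import shutil

def build_visualization_command(dataset_path, software='vitables',
                                executable_path=None):
    # Hypothetical helper: an explicit executable path wins; otherwise
    # search the PATH, falling back to the bare program name. This is
    # the command that would be run after closing the dataset files.
    exe = executable_path or shutil.which(software) or software
    return [exe, dataset_path]

print(build_visualization_command('my_dataset_path.h5'))
print(build_visualization_command('my_dataset_path.h5',
                                  executable_path='/opt/vitables/bin/vitables'))
```
",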
"Please refer to the Vitables documentation, which can be downloaded here https://sourceforge.net/projects/vitables/files/ViTables-3.0.0/, to learn how to browse through your HDF5 file. The Vitables software is very intuitive; you will see that it provides a useful and convenient tool to explore your SampleData datasets outside of your interactive Python consoles. \nV - Visualize datasets grids and fields with Paraview\nAs for Vitables, the pause_for_visualization method allows you to open your dataset with Paraview, interactively from a Python console. \nParaview provides you with a very powerful visualization tool to render the spatially organized data (grids) stored in your datasets. Unlike Vitables, Paraview can read the XDMF format. Hence, if you want to open your dataset with Paraview outside of a Python console, make sure that the HDF5 and XDMF files are not opened by another program, and run in your shell the command:\nparaview my_dataset_path.xdmf.\nAs you may have guessed, the pause_for_visualization method, when called interactively with the Paraview argument set to True, will close both files and run this command, just like for the Vitables option. The dataset files will remain closed and the Python console frozen for as long as you keep the Paraview software running. When you shut down Paraview, the SampleData class will reopen the HDF5 and XDMF files, synchronize with them and resume the interactive Python console. \n<div class=\"alert alert-warning\">\n\n**Warning** \n\nWhen calling the `pause_for_visualization` method from a python console (ipython, Jupyter...), you may face environment issues leading to your shell not finding the proper *Paraview* executable. To ensure that the right *Paraview* is found, the method can take an optional argument `Paraview_path`, which must be the path of the *Paraview* executable. 
If this argument is passed, the method will run, after closing the HDF5 and XDMF files, the command \n`Paraview_path my_dataset_path.xdmf`\n</div>",
"# Like for Vitables --> uncomment to test\n# data.pause_for_visualization(Paraview=True, Paraview_path='Path_to_Paraview_executable')",
"<div class=\"alert alert-info\">\n\n**Note** \n\n**It is recommended to use a recent version of the Paraview software to visualize SampleData datasets (>= 5.0).**\nWhen opening the XDMF file, Paraview may ask you to choose a specific file reader. It is recommended to choose the \n**XDMF_reader**, and not the **Xdmf3ReaderT**, or **Xdmf3ReaderS**.\n</div>\n\nVI - Using command line tools\nYou can also examine the content of your HDF5 datasets with generic HDF5 command line tools, such as h5ls or h5dump:\n<div class=\"alert alert-warning\">\n\n**Warning** \n\nIn the following, executable programs that come with the HDF5 library and the Pytables package are used. If you are executing this notebook with Jupyter, these executables may not be in your path if your environment is not suitably set. A workaround consists in finding the absolute path of the executable, and replacing the executable name in the following cells by its full path. For instance, replace \n\n `ptdump file.h5` \nwith\n\n `/full/path/to/ptdump file.h5`\n\nTo find this full path, you can run in your shell the command `which ptdump`. Of course, the same applies for `h5ls` and `h5dump`.\n</div>\n\n<div class=\"alert alert-info\">\n\n**Note** \n\nMost code lines below are commented out as they produce very large outputs that would otherwise pollute the documentation if they were included in the automatic build process. Uncomment them to test them if you are using these notebooks interactively!\n</div>\n\nTo use these tools, you must first close your dataset. If you don't, they will not be able to open the HDF5 file, as it is opened by the SampleData class in the Python interpreter.",
"del data \n\n# raw output of H5ls --> prints the childrens of the file root group\n!h5ls ../data/test_sampledata_ref.h5\n\n# recursive output of h5ls (-r option) --> prints all data items \n!h5ls -r ../data/test_sampledata_ref.h5\n\n# recursive (-r) and detailed (-d) output of h5ls --> also print the content of the data arrays\n# !h5ls -rd ../data/test_sampledata_ref.h5\n\n# output of h5dump:\n# !h5dump ../data/test_sampledata_ref.h5",
"As you can see if you uncommented and executed this cell, h5dump prints a fully detailed description of your dataset: organization, data types, item names and paths, and item content (values stored in arrays). As it produces a very large output, it may be convenient to write its output to a file:",
"# !h5dump ../data/test_sampledata_ref.h5 > test_dump.txt\n\n# !cat test_dump.txt",
"You can also use the command line tool of the Pytables package, ptdump, which also takes the HDF5 file as argument, and has two command options: the verbose mode -v, and the detailed mode -d:",
"# uncomment to test !\n# !ptdump ../data/test_sampledata_ref.h5\n\n# uncomment to test!\n# !ptdump -v ../data/test_sampledata_ref.h5\n\n# uncomment to test !\n# !ptdump -d ../data/test_sampledata_ref.h5",
"This second tutorial of the SampleData User Guide is now finished. You should now be able to easily find all the information you are interested in from a SampleData dataset!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
albahnsen/PracticalMachineLearningClass
|
exercises/E4-Regression-Linear&Logistic.ipynb
|
mit
|
[
"Exercise 04\nPart 1 - Linear Regression\nEstimate a regression using the Income data\nForecast of income\nWe'll be working with a dataset from the US Census on income (data dictionary).\nMany businesses would like to personalize their offer based on a customer’s income. High-income customers could be, for instance, exposed to premium products. As a customer’s income is not always explicitly known, a predictive model could estimate the income of a person based on other information.\nOur goal is to create a predictive model that will be able to output an estimation of a person's income.",
"import pandas as pd\nimport numpy as np\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\n# read the data and set the datetime as the index\nincome = pd.read_csv('https://github.com/albahnsen/PracticalMachineLearningClass/raw/master/datasets/income.csv.zip', index_col=0)\n\nincome.head()\n\nincome.shape",
"Exercise 4.1\nWhat is the relation between the age and Income?\nFor a one percent increase in the Age how much the income increases?\nUsing sklearn estimate a linear regression and predict the income when the Age is 30 and 40 years",
"income.plot(x='Age', y='Income', kind='scatter')",
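"As a starting point for the estimation, here is a hedged sketch of an OLS fit via the normal equations, using NumPy on synthetic stand-in data rather than sklearn on the actual Age and Income columns:

```python
import numpy as np

# Synthetic stand-in data (the exercise uses income['Age'] and
# income['Income'] instead):
age = np.array([25, 30, 35, 40, 45], dtype=float)
inc = 2.0 * age + 10.0  # perfectly linear toy relationship

# OLS via the normal equations: beta = (X'X)^-1 X'y
X = np.column_stack([np.ones_like(age), age])
beta = np.linalg.solve(X.T @ X, X.T @ inc)
print(beta)  # ~[10. 2.] (intercept, slope)

# Predict income at Age = 30 and 40:
print(beta[0] + beta[1] * np.array([30.0, 40.0]))  # ~[70. 90.]
```
",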
"Exercise 4.2\nEvaluate the model using the MSE\nExercise 4.3\nRun a regression model using as features the Age and Age$^2$ using the OLS equations\nExercise 4.4\nEstimate a regression using more features.\nHow is the performance compared to using only the Age?\nPart 2: Logistic Regression\nCustomer Churn:\nthe loss/attrition of customers from a company. Especially in industries where user acquisition is costly, it is crucially important for a company to reduce customer churn, ideally to 0, to sustain its recurring revenue. Considering that customer retention is always cheaper than customer acquisition, and that churn generally depends on user data (usage of the service or product), it poses a great/exciting/hard problem for machine learning.\nData\nThe dataset is from a telecom service provider where we have the service usage (international plan, voicemail plan, usage in daytime, usage in evenings and nights and so on) and basic demographic information (state and area code) of the user. For labels, we have a single data point: whether the customer churned or not.",
"# Download the dataset\ndata = pd.read_csv('https://github.com/ghuiber/churn/raw/master/data/churn.csv')\n\ndata.head()",
"Exercise 4.5\nCreate Y and X\nWhat is the distribution of the churners?\nSplit the data in train (70%) and test (30%)\nExercise 4.6\nTrain a Logistic Regression using the training set and apply the algorithm to the testing set.\nExercise 4.7\na) Create a confusion matrix using the prediction on the 30% set.\nb) Estimate the accuracy of the model in the 30% set"
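A hedged sketch of the workflow for Exercises 4.5 through 4.7, using synthetic features in place of the churn columns built in 4.5 (the split ratio and metrics match the exercise statements):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, accuracy_score

# Synthetic stand-in: two informative features and a binary churn label
rng = np.random.RandomState(2)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, size=1000) > 0).astype(int)

# 70% train / 30% test split, as in Exercise 4.5
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Exercise 4.6: fit on the training set, predict on the test set
clf = LogisticRegression()
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Exercise 4.7: confusion matrix and accuracy on the 30% set
cm = confusion_matrix(y_test, y_pred)
acc = accuracy_score(y_test, y_pred)
print('confusion matrix:\n', cm)
print('accuracy:', acc)
```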
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
arasdar/DL
|
uri-dl/uri-dl-hw-2/assignment2/FullyConnectedNets.ipynb
|
unlicense
|
[
"Fully-Connected Neural Nets\nIn the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.\nIn this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this:\n```python\ndef layer_forward(x, w):\n \"\"\" Receive inputs x and weights w \"\"\"\n # Do some computations ...\n z = # ... some intermediate value\n # Do some more computations ...\n out = # the output\n cache = (x, w, z, out) # Values we need to compute gradients\n return out, cache\n```\nThe backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this:\n```python\ndef layer_backward(dout, cache):\n \"\"\"\n Receive derivative of loss with respect to outputs and cache,\n and compute derivative with respect to inputs.\n \"\"\"\n # Unpack cache values\n x, w, z, out = cache\n # Use values in cache to compute derivatives\n dx = # Derivative of loss with respect to x\n dw = # Derivative of loss with respect to w\n return dx, dw\n```\nAfter implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.\nIn addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks.",
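As a concrete (toy) instance of this pattern, here is a tiny layer that scales its input by a scalar, written in the same forward/backward style:

```python
import numpy as np

def scale_forward(x, w):
    """Forward pass for a layer that multiplies its input elementwise by scalar w."""
    out = x * w
    cache = (x, w)          # values needed for the backward pass
    return out, cache

def scale_backward(dout, cache):
    """Backward pass: chain rule through out = x * w."""
    x, w = cache
    dx = dout * w           # d(out)/dx = w
    dw = np.sum(dout * x)   # w is a scalar, so sum gradient contributions
    return dx, dw

x = np.array([1.0, 2.0, 3.0])
out, cache = scale_forward(x, 2.0)
dx, dw = scale_backward(np.ones_like(x), cache)
print(out, dx, dw)   # [2. 4. 6.] [2. 2. 2.] 6.0
```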
"# As usual, a bit of setup\nfrom __future__ import print_function\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\ndata = get_CIFAR10_data()\nfor k, v in list(data.items()):\n print(('%s: ' % k, v.shape))",
"Affine layer: forward\nOpen the file cs231n/layers.py and implement the affine_forward function.\nOnce you are done you can test your implementation by running the following:",
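One possible implementation (a sketch of what the assignment asks you to write in cs231n/layers.py): flatten each example into a row vector, then apply a matrix multiply and bias add:

```python
import numpy as np

def affine_forward(x, w, b):
    """Affine (fully connected) forward pass.

    x has shape (N, d_1, ..., d_k); it is reshaped to (N, D) with
    D = d_1 * ... * d_k, then out = x.w + b has shape (N, M).
    """
    N = x.shape[0]
    out = x.reshape(N, -1).dot(w) + b
    cache = (x, w, b)   # keep originals for the backward pass
    return out, cache

x = np.ones((2, 3, 4))        # N=2, D=12
w = np.ones((12, 5)) * 0.1    # M=5
b = np.zeros(5)
out, _ = affine_forward(x, w, b)
print(out.shape)              # (2, 5); every entry is 12 * 0.1 = 1.2
```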
"# Test the affine_forward function\nnum_inputs = 2\ninput_shape = (4, 5, 6)\noutput_dim = 3\n\ninput_size = num_inputs * np.prod(input_shape)\nweight_size = output_dim * np.prod(input_shape)\n\nx = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)\nw = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)\nb = np.linspace(-0.3, 0.1, num=output_dim)\n\nout, _ = affine_forward(x, w, b)\ncorrect_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],\n [ 3.25553199, 3.5141327, 3.77273342]])\n\n# Compare your output with ours. The error should be around 1e-9.\nprint('Testing affine_forward function:')\nprint('difference: ', rel_error(out, correct_out))",
"Affine layer: backward\nNow implement the affine_backward function and test your implementation using numeric gradient checking.",
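A sketch of the corresponding backward pass (assuming the cache holds `(x, w, b)` from the forward pass):

```python
import numpy as np

def affine_backward(dout, cache):
    """Backward pass for out = x.reshape(N, -1).dot(w) + b."""
    x, w, b = cache
    N = x.shape[0]
    dx = dout.dot(w.T).reshape(x.shape)   # gradient w.r.t. inputs
    dw = x.reshape(N, -1).T.dot(dout)     # gradient w.r.t. weights
    db = dout.sum(axis=0)                 # gradient w.r.t. biases
    return dx, dw, db

x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)

dx, dw, db = affine_backward(dout, (x, w, b))
print(dx.shape, dw.shape, db.shape)   # (10, 2, 3) (6, 5) (5,)
```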
"# Test the affine_backward function\nnp.random.seed(231)\nx = np.random.randn(10, 2, 3)\nw = np.random.randn(6, 5)\nb = np.random.randn(5)\ndout = np.random.randn(10, 5)\n\ndx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)\n\n_, cache = affine_forward(x, w, b)\ndx, dw, db = affine_backward(dout, cache)\n\n# The error should be around 1e-10\nprint('Testing affine_backward function:')\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dw error: ', rel_error(dw_num, dw))\nprint('db error: ', rel_error(db_num, db))",
"ReLU layer: forward\nImplement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following:",
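The forward pass is essentially a one-liner; a sketch:

```python
import numpy as np

def relu_forward(x):
    """Elementwise ReLU: out = max(0, x)."""
    out = np.maximum(0, x)
    cache = x   # needed to mask gradients in the backward pass
    return out, cache

out, _ = relu_forward(np.array([-1.0, 0.0, 2.0]))
print(out)   # [0. 0. 2.]
```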
"# Test the relu_forward function\n\nx = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)\n\nout, _ = relu_forward(x)\ncorrect_out = np.array([[ 0., 0., 0., 0., ],\n [ 0., 0., 0.04545455, 0.13636364,],\n [ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])\n\n# Compare your output with ours. The error should be around 5e-8\nprint('Testing relu_forward function:')\nprint('difference: ', rel_error(out, correct_out))",
"ReLU layer: backward\nNow implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking:",
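A sketch of the backward pass: the upstream gradient passes through only where the cached input was positive:

```python
import numpy as np

def relu_backward(dout, cache):
    """Gradient of ReLU: dout is zeroed wherever the input was <= 0."""
    x = cache
    dx = dout * (x > 0)
    return dx

dx = relu_backward(np.array([10.0, 10.0, 10.0]), np.array([-1.0, 0.0, 2.0]))
print(dx)   # [ 0.  0. 10.]
```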
"np.random.seed(231)\nx = np.random.randn(10, 10)\ndout = np.random.randn(*x.shape)\n\ndx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)\n\n_, cache = relu_forward(x)\ndx = relu_backward(dout, cache)\n\n# The error should be around 3e-12\nprint('Testing relu_backward function:')\nprint('dx error: ', rel_error(dx_num, dx))",
"\"Sandwich\" layers\nThere are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.\nFor now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass:",
"def affine_relu_forward(x, w, b):\n \"\"\"\n Convenience layer that performs an affine transform followed by a ReLU\n\n Inputs:\n - x: Input to the affine layer\n - w, b: Weights for the affine layer\n\n Returns a tuple of:\n - out: Output from the ReLU\n - cache: Object to give to the backward pass\n \"\"\"\n a, fc_cache = affine_forward(x, w, b)\n out, relu_cache = relu_forward(a)\n cache = (fc_cache, relu_cache)\n return out, cache\n\n\ndef affine_relu_backward(dout, cache):\n \"\"\"\n Backward pass for the affine-relu convenience layer\n \"\"\"\n fc_cache, relu_cache = cache\n da = relu_backward(dout, relu_cache)\n dx, dw, db = affine_backward(da, fc_cache)\n return dx, dw, db",
"Loss layers: Softmax and SVM\nYou implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py.\nYou can make sure that the implementations are correct by running the following:",
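For reference, the softmax loss can be sketched as below (the graded implementation lives in cs231n/layers.py; this version shows the max-subtraction trick for numerical stability):

```python
import numpy as np

def softmax_loss(x, y):
    """Softmax loss and gradient for scores x (N, C) and labels y (N,)."""
    shifted = x - x.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(shifted)
    probs /= probs.sum(axis=1, keepdims=True)
    N = x.shape[0]
    loss = -np.log(probs[np.arange(N), y]).mean()
    dx = probs.copy()
    dx[np.arange(N), y] -= 1   # subtract 1 at the correct class
    dx /= N
    return loss, dx

# With near-zero scores, the loss should be close to log(C)
np.random.seed(0)
x = 0.001 * np.random.randn(50, 10)
y = np.random.randint(10, size=50)
loss, dx = softmax_loss(x, y)
print(loss)   # ~ log(10) = 2.3
```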
"np.random.seed(231)\nnum_classes, num_inputs = 10, 50\nx = 0.001 * np.random.randn(num_inputs, num_classes)\ny = np.random.randint(num_classes, size=num_inputs)\n\ndx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)\nloss, dx = svm_loss(x, y)\n\n# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9\nprint('Testing svm_loss:')\nprint('loss: ', loss)\nprint('dx error: ', rel_error(dx_num, dx))\n\ndx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)\nloss, dx = softmax_loss(x, y)\n\n# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8\nprint('\\nTesting softmax_loss:')\nprint('loss: ', loss)\nprint('dx error: ', rel_error(dx_num, dx))",
"Two-layer network\nIn the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.\nOpen the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.",
"np.random.seed(231)\nN, D, H, C = 3, 5, 50, 7\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=N)\n\nstd = 1e-3\nmodel = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)\n\nprint('Testing initialization ... ')\nW1_std = abs(model.params['W1'].std() - std)\nb1 = model.params['b1']\nW2_std = abs(model.params['W2'].std() - std)\nb2 = model.params['b2']\nassert W1_std < std / 10, 'First layer weights do not seem right'\nassert np.all(b1 == 0), 'First layer biases do not seem right'\nassert W2_std < std / 10, 'Second layer weights do not seem right'\nassert np.all(b2 == 0), 'Second layer biases do not seem right'\n\nprint('Testing test-time forward pass ... ')\nmodel.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)\nmodel.params['b1'] = np.linspace(-0.1, 0.9, num=H)\nmodel.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)\nmodel.params['b2'] = np.linspace(-0.9, 0.1, num=C)\nX = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T\nscores = model.loss(X)\ncorrect_scores = np.asarray(\n [[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],\n [12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],\n [12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])\nscores_diff = np.abs(scores - correct_scores).sum()\nassert scores_diff < 1e-6, 'Problem with test-time forward pass'\n\nprint('Testing training loss (no regularization)')\ny = np.asarray([0, 5, 1])\nloss, grads = model.loss(X, y)\ncorrect_loss = 3.4702243556\nassert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'\n\nmodel.reg = 1.0\nloss, grads = model.loss(X, y)\ncorrect_loss = 26.5948426952\nassert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'\n\nfor reg in [0.0, 0.7]:\n print('Running numeric gradient check with reg = ', reg)\n model.reg = reg\n loss, grads = model.loss(X, y)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))",
"Solver\nIn the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.\nOpen the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.",
"model = TwoLayerNet()\nsolver = None\n\n##############################################################################\n# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #\n# 50% accuracy on the validation set. #\n##############################################################################\nsolver = Solver(model, data,\n update_rule='sgd',\n optim_config={\n 'learning_rate': 1e-3,\n },\n lr_decay=0.95,\n num_epochs=10, batch_size=100,\n print_every=100)\nsolver.train()\n\n\npass\n##############################################################################\n# END OF YOUR CODE #\n##############################################################################\n\n# Run this cell to visualize training loss and train / val accuracy\n\nplt.subplot(2, 1, 1)\nplt.title('Training loss')\nplt.plot(solver.loss_history, 'o')\nplt.xlabel('Iteration')\n\nplt.subplot(2, 1, 2)\nplt.title('Accuracy')\nplt.plot(solver.train_acc_history, '-o', label='train')\nplt.plot(solver.val_acc_history, '-o', label='val')\nplt.plot([0.5] * len(solver.val_acc_history), 'k--')\nplt.xlabel('Epoch')\nplt.legend(loc='lower right')\nplt.gcf().set_size_inches(15, 12)\nplt.show()",
"Multilayer network\nNext you will implement a fully-connected network with an arbitrary number of hidden layers.\nRead through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.\nImplement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.\nInitial loss and gradient check\nAs a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?\nFor gradient checking, you should expect to see errors around 1e-6 or less.",
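The initialization step amounts to a loop over consecutive layer dimensions. A sketch with a hypothetical helper (FullyConnectedNet.__init__ stores the same kind of params dict):

```python
import numpy as np

def init_fc_params(input_dim, hidden_dims, num_classes, weight_scale):
    """Gaussian-initialized weights and zero biases for each layer."""
    dims = [input_dim] + hidden_dims + [num_classes]
    params = {}
    for i in range(len(dims) - 1):
        # Layer i+1 maps dims[i] -> dims[i+1]
        params['W%d' % (i + 1)] = weight_scale * np.random.randn(dims[i], dims[i + 1])
        params['b%d' % (i + 1)] = np.zeros(dims[i + 1])
    return params

params = init_fc_params(15, [20, 30], 10, 5e-2)
print(sorted(params))       # ['W1', 'W2', 'W3', 'b1', 'b2', 'b3']
print(params['W2'].shape)   # (20, 30)
```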
"np.random.seed(231)\nN, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\nfor reg in [0, 3.14]:\n print('Running check with reg = ', reg)\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64)\n\n loss, grads = model.loss(X, y)\n print('Initial loss: ', loss)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))",
"As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.",
"# TODO: Use a three-layer Net to overfit 50 training examples.\n\nnum_train = 50\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 1e-2\nlearning_rate = 1e-4\nmodel = FullyConnectedNet([100, 100],\n weight_scale=weight_scale, dtype=np.float64)\nsolver = Solver(model, small_data,\n print_every=10, num_epochs=20, batch_size=25,\n update_rule='sgd',\n optim_config={\n 'learning_rate': learning_rate,\n }\n )\nsolver.train()\n\nplt.plot(solver.loss_history, 'o')\nplt.title('Training loss history')\nplt.xlabel('Iteration')\nplt.ylabel('Training loss')\nplt.show()",
"Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.",
"# TODO: Use a five-layer Net to overfit 50 training examples.\n\nnum_train = 50\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nlearning_rate = 1e-3\nweight_scale = 1e-5\nmodel = FullyConnectedNet([100, 100, 100, 100],\n weight_scale=weight_scale, dtype=np.float64)\nsolver = Solver(model, small_data,\n print_every=10, num_epochs=20, batch_size=25,\n update_rule='sgd',\n optim_config={\n 'learning_rate': learning_rate,\n }\n )\nsolver.train()\n\nplt.plot(solver.loss_history, 'o')\nplt.title('Training loss history')\nplt.xlabel('Iteration')\nplt.ylabel('Training loss')\nplt.show()",
"Inline question:\nDid you notice anything about the comparative difficulty of training the three-layer net vs training the five-layer net?\nAnswer:\nNo\nUpdate rules\nSo far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD.\nSGD+Momentum\nStochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochastic gradient descent.\nOpen the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8.",
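The update rule itself is only two lines; a sketch (momentum defaults to 0.9 in cs231n/optim.py, and the velocity is kept in the config dict):

```python
import numpy as np

def sgd_momentum(w, dw, config):
    """v <- mu * v - lr * dw;  w <- w + v"""
    lr = config.get('learning_rate', 1e-2)
    mu = config.get('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))
    v = mu * v - lr * dw     # accumulate a decaying velocity
    next_w = w + v           # step in the direction of the velocity
    config['velocity'] = v
    return next_w, config

# Consistent with the first entry of expected_next_w in the check cell:
next_w, cfg = sgd_momentum(np.array([-0.4]), np.array([-0.6]),
                           {'learning_rate': 1e-3, 'velocity': np.array([0.6])})
print(next_w)   # [0.1406]
```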
"from cs231n.optim import sgd_momentum\n\nN, D = 4, 5\nw = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)\ndw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)\nv = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)\n\nconfig = {'learning_rate': 1e-3, 'velocity': v}\nnext_w, _ = sgd_momentum(w, dw, config=config)\n\nexpected_next_w = np.asarray([\n [ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],\n [ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],\n [ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],\n [ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])\nexpected_velocity = np.asarray([\n [ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],\n [ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],\n [ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],\n [ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])\n\nprint('next_w error: ', rel_error(next_w, expected_next_w))\nprint('velocity error: ', rel_error(expected_velocity, config['velocity']))",
"Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.",
"num_train = 4000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nsolvers = {}\n\nfor update_rule in ['sgd', 'sgd_momentum']:\n print('running with ', update_rule)\n model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)\n\n solver = Solver(model, small_data,\n num_epochs=5, batch_size=100,\n update_rule=update_rule,\n optim_config={\n 'learning_rate': 1e-2,\n },\n verbose=True)\n solvers[update_rule] = solver\n solver.train()\n print()\n\nplt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nfor update_rule, solver in list(solvers.items()):\n plt.subplot(3, 1, 1)\n plt.plot(solver.loss_history, 'o', label=update_rule)\n \n plt.subplot(3, 1, 2)\n plt.plot(solver.train_acc_history, '-o', label=update_rule)\n\n plt.subplot(3, 1, 3)\n plt.plot(solver.val_acc_history, '-o', label=update_rule)\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()",
"RMSProp and Adam\nRMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.\nIn the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.\n[1] Tijmen Tieleman and Geoffrey Hinton. \"Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude.\" COURSERA: Neural Networks for Machine Learning 4 (2012).\n[2] Diederik Kingma and Jimmy Ba, \"Adam: A Method for Stochastic Optimization\", ICLR 2015.",
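Sketches of both rules (defaults here follow the cs231n/optim.py conventions: decay_rate 0.99 and epsilon 1e-8 for RMSProp; beta1 0.9, beta2 0.999, and t incremented before the bias correction for Adam):

```python
import numpy as np

def rmsprop(w, dw, config):
    """Per-parameter learning rates from a running average of squared gradients."""
    lr, decay, eps = config['learning_rate'], config.get('decay_rate', 0.99), 1e-8
    cache = config.get('cache', np.zeros_like(w))
    cache = decay * cache + (1 - decay) * dw ** 2
    next_w = w - lr * dw / (np.sqrt(cache) + eps)
    config['cache'] = cache
    return next_w, config

def adam(w, dw, config):
    """RMSProp-style second moments plus momentum, with bias correction."""
    lr, eps = config['learning_rate'], 1e-8
    beta1, beta2 = config.get('beta1', 0.9), config.get('beta2', 0.999)
    m, v, t = config['m'], config['v'], config['t'] + 1
    m = beta1 * m + (1 - beta1) * dw           # first moment
    v = beta2 * v + (1 - beta2) * dw ** 2      # second moment
    m_hat = m / (1 - beta1 ** t)               # bias corrections
    v_hat = v / (1 - beta2 ** t)
    next_w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    config.update(m=m, v=v, t=t)
    return next_w, config

# Consistent with the first entries of the expected arrays in the tests below
nw, _ = rmsprop(np.array([-0.4]), np.array([-0.6]),
                {'learning_rate': 1e-2, 'cache': np.array([0.6])})
na, _ = adam(np.array([-0.4]), np.array([-0.6]),
             {'learning_rate': 1e-2, 'm': np.array([0.6]),
              'v': np.array([0.7]), 't': 5})
print(nw, na)
```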
"# Test RMSProp implementation; you should see errors less than 1e-7\nfrom cs231n.optim import rmsprop\n\nN, D = 4, 5\nw = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)\ndw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)\ncache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)\n\nconfig = {'learning_rate': 1e-2, 'cache': cache}\nnext_w, _ = rmsprop(w, dw, config=config)\n\nexpected_next_w = np.asarray([\n [-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],\n [-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],\n [ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],\n [ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])\nexpected_cache = np.asarray([\n [ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],\n [ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],\n [ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],\n [ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])\n\nprint('next_w error: ', rel_error(expected_next_w, next_w))\nprint('cache error: ', rel_error(expected_cache, config['cache']))\n\n# Test Adam implementation; you should see errors around 1e-7 or less\nfrom cs231n.optim import adam\n\nN, D = 4, 5\nw = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)\ndw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)\nm = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)\nv = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)\n\nconfig = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}\nnext_w, _ = adam(w, dw, config=config)\n\nexpected_next_w = np.asarray([\n [-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],\n [-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],\n [ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],\n [ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])\nexpected_v = np.asarray([\n [ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],\n [ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],\n [ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],\n [ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])\nexpected_m = np.asarray([\n [ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],\n [ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],\n [ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],\n [ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])\n\nprint('next_w error: ', rel_error(expected_next_w, next_w))\nprint('v error: ', rel_error(expected_v, config['v']))\nprint('m error: ', rel_error(expected_m, config['m']))",
"Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:",
"learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}\nfor update_rule in ['adam', 'rmsprop']:\n print('running with ', update_rule)\n model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)\n\n solver = Solver(model, small_data,\n num_epochs=5, batch_size=100,\n update_rule=update_rule,\n optim_config={\n 'learning_rate': learning_rates[update_rule]\n },\n verbose=True)\n solvers[update_rule] = solver\n solver.train()\n print()\n\nplt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nfor update_rule, solver in list(solvers.items()):\n plt.subplot(3, 1, 1)\n plt.plot(solver.loss_history, 'o', label=update_rule)\n \n plt.subplot(3, 1, 2)\n plt.plot(solver.train_acc_history, '-o', label=update_rule)\n\n plt.subplot(3, 1, 3)\n plt.plot(solver.val_acc_history, '-o', label=update_rule)\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()",
"Train a good model!\nTrain the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.\nIf you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.\nYou might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.",
"best_model = None\n################################################################################\n# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #\n# find batch normalization and dropout useful. Store your best model in the #\n# best_model variable. #\n################################################################################\nlearning_rate = 1e-3 #{'rmsprop': 1e-4, 'adam': 1e-3}\nupdate_rule = 'adam' # for update_rule in ['adam', 'rmsprop']:\nmodel = FullyConnectedNet(dropout=0.95, dtype=np.float64, \n hidden_dims=[100, 100, 100, 100, 100],\n use_batchnorm=True, weight_scale=5e-2)\n\nsolver = Solver(model=model, data=data, num_epochs=2, batch_size=100, update_rule=update_rule,\n optim_config={'learning_rate': learning_rate}, verbose=True)\nsolvers[update_rule] = solver\nsolver.train()\nprint()\n\n# np.random.seed(231)\n# N, D, H1, H2, C = 2, 15, 20, 30, 10\n# X = np.random.randn(N, D)\n# y = np.random.randint(C, size=(N,))\n\n# for reg in [0, 3.14]:\n# print('Running check with reg = ', reg)\n# model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n# reg=reg, weight_scale=5e-2, dtype=np.float64)\n\n# loss, grads = model.loss(X, y)\n# print('Initial loss: ', loss)\n\n# for name in sorted(grads):\n# f = lambda _: model.loss(X, y)[0]\n# grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n# print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))\n\nbest_model = model\n\npass\n################################################################################\n# END OF YOUR CODE #\n################################################################################",
"Test your model\nRun your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.",
"y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)\ny_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)\nprint('Validation set accuracy: ', (y_val_pred == data['y_val']).mean())\nprint('Test set accuracy: ', (y_test_pred == data['y_test']).mean())"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n
|
site/en-snapshot/neural_structured_learning/tutorials/graph_keras_mlp_cora.ipynb
|
apache-2.0
|
[
"Copyright 2019 The TensorFlow Neural Structured Learning Authors",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Graph regularization for document classification using natural graphs\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/neural_structured_learning/tutorials/graph_keras_mlp_cora\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/graph_keras_mlp_cora.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/graph_keras_mlp_cora.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/neural-structured-learning/g3doc/tutorials/graph_keras_mlp_cora.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nOverview\nGraph regularization is a specific technique under the broader paradigm of\nNeural Graph Learning\n(Bui et al., 2018). The core\nidea is to train neural network models with a graph-regularized objective,\nharnessing both labeled and unlabeled data.\nIn this tutorial, we will explore the use of graph regularization to classify\ndocuments that form a natural (organic) graph.\nThe general recipe for creating a graph-regularized model using the Neural\nStructured Learning (NSL) framework is as follows:\n\nGenerate training data from the input graph and sample features. Nodes in\n the graph correspond to samples and edges in the graph correspond to\n similarity between pairs of samples. The resulting training data will\n contain neighbor features in addition to the original node features.\nCreate a neural network as a base model using the Keras sequential,\n functional, or subclass API.\nWrap the base model with the GraphRegularization wrapper class, which\n is provided by the NSL framework, to create a new graph Keras model. This\n new model will include a graph regularization loss as the regularization\n term in its training objective.\nTrain and evaluate the graph Keras model.\n\nSetup\nInstall the Neural Structured Learning package.",
"!pip install --quiet neural-structured-learning",
"Dependencies and imports",
"import neural_structured_learning as nsl\n\nimport tensorflow as tf\n\n# Resets notebook state\ntf.keras.backend.clear_session()\n\nprint(\"Version: \", tf.__version__)\nprint(\"Eager mode: \", tf.executing_eagerly())\nprint(\n \"GPU is\",\n \"available\" if tf.config.list_physical_devices(\"GPU\") else \"NOT AVAILABLE\")",
"Cora dataset\nThe Cora dataset is a citation graph where\nnodes represent machine learning papers and edges represent citations between\npairs of papers. The task involved is document classification where the goal is\nto categorize each paper into one of 7 categories. In other words, this is a\nmulti-class classification problem with 7 classes.\nGraph\nThe original graph is directed. However, for the purpose of this example, we\nconsider the undirected version of this graph. So, if paper A cites paper B, we\nalso consider paper B to have cited A. Although this is not necessarily true, in\nthis example, we consider citations as a proxy for similarity, which is usually\na commutative property.\nFeatures\nEach paper in the input effectively contains 2 features:\n\n\nWords: A dense, multi-hot bag-of-words representation of the text in the\n paper. The vocabulary for the Cora dataset contains 1433 unique words. So,\n the length of this feature is 1433, and the value at position 'i' is 0/1\n indicating whether word 'i' in the vocabulary exists in the given paper or\n not.\n\n\nLabel: A single integer representing the class ID (category) of the paper.\n\n\nDownload the Cora dataset",
"!wget --quiet -P /tmp https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz\n!tar -C /tmp -xvzf /tmp/cora.tgz",
"Convert the Cora data to the NSL format\nIn order to preprocess the Cora dataset and convert it to the format required by\nNeural Structured Learning, we will run the 'preprocess_cora_dataset.py'\nscript, which is included in the NSL github repository. This script does the\nfollowing:\n\nGenerate neighbor features using the original node features and the graph.\nGenerate train and test data splits containing tf.train.Example instances.\nPersist the resulting train and test data in the TFRecord format.",
"!wget https://raw.githubusercontent.com/tensorflow/neural-structured-learning/master/neural_structured_learning/examples/preprocess/cora/preprocess_cora_dataset.py\n\n!python preprocess_cora_dataset.py \\\n--input_cora_content=/tmp/cora/cora.content \\\n--input_cora_graph=/tmp/cora/cora.cites \\\n--max_nbrs=5 \\\n--output_train_data=/tmp/cora/train_merged_examples.tfr \\\n--output_test_data=/tmp/cora/test_examples.tfr",
"Global variables\nThe file paths to the train and test data are based on the command line flag\nvalues used to invoke the 'preprocess_cora_dataset.py' script above.",
"### Experiment dataset\nTRAIN_DATA_PATH = '/tmp/cora/train_merged_examples.tfr'\nTEST_DATA_PATH = '/tmp/cora/test_examples.tfr'\n\n### Constants used to identify neighbor features in the input.\nNBR_FEATURE_PREFIX = 'NL_nbr_'\nNBR_WEIGHT_SUFFIX = '_weight'",
"Hyperparameters\nWe will use an instance of HParams to include various hyperparameters and\nconstants used for training and evaluation. We briefly describe each of them\nbelow:\n\n\nnum_classes: There are a total of 7 different classes\n\n\nmax_seq_length: This is the size of the vocabulary and all instances in\n the input have a dense multi-hot, bag-of-words representation. In other\n words, a value of 1 for a word indicates that the word is present in the\n input and a value of 0 indicates that it is not.\n\n\ndistance_type: This is the distance metric used to regularize the sample\n with its neighbors.\n\n\ngraph_regularization_multiplier: This controls the relative weight of\n the graph regularization term in the overall loss function.\n\n\nnum_neighbors: The number of neighbors used for graph regularization.\n This value has to be less than or equal to the max_nbrs command-line\n argument used above when running preprocess_cora_dataset.py.\n\n\nnum_fc_units: The number of units in each fully connected layer of our\n neural network.\n\n\ntrain_epochs: The number of training epochs.\n\n\nbatch_size: Batch size used for training and evaluation.\n\n\ndropout_rate: Controls the rate of dropout following each fully\n connected layer\n\n\neval_steps: The number of batches to process before deeming evaluation\n is complete. If set to None, all instances in the test set are evaluated.",
"class HParams(object):\n \"\"\"Hyperparameters used for training.\"\"\"\n def __init__(self):\n ### dataset parameters\n self.num_classes = 7\n self.max_seq_length = 1433\n ### neural graph learning parameters\n self.distance_type = nsl.configs.DistanceType.L2\n self.graph_regularization_multiplier = 0.1\n self.num_neighbors = 1\n ### model architecture\n self.num_fc_units = [50, 50]\n ### training parameters\n self.train_epochs = 100\n self.batch_size = 128\n self.dropout_rate = 0.5\n ### eval parameters\n self.eval_steps = None # All instances in the test set are evaluated.\n\nHPARAMS = HParams()",
"Load train and test data\nAs described earlier in this notebook, the input training and test data have\nbeen created by the 'preprocess_cora_dataset.py'. We will load them into two\ntf.data.Dataset objects -- one for train and one for test.\nIn the input layer of our model, we will extract not just the 'words' and the\n'label' features from each sample, but also corresponding neighbor features\nbased on the hparams.num_neighbors value. Instances with fewer neighbors than\nhparams.num_neighbors will be assigned dummy values for those non-existent\nneighbor features.",
"def make_dataset(file_path, training=False):\n \"\"\"Creates a `tf.data.TFRecordDataset`.\n\n Args:\n file_path: Name of the file in the `.tfrecord` format containing\n `tf.train.Example` objects.\n training: Boolean indicating if we are in training mode.\n\n Returns:\n An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example`\n objects.\n \"\"\"\n\n def parse_example(example_proto):\n \"\"\"Extracts relevant fields from the `example_proto`.\n\n Args:\n example_proto: An instance of `tf.train.Example`.\n\n Returns:\n A pair whose first value is a dictionary containing relevant features\n and whose second value contains the ground truth label.\n \"\"\"\n # The 'words' feature is a multi-hot, bag-of-words representation of the\n # original raw text. A default value is required for examples that don't\n # have the feature.\n feature_spec = {\n 'words':\n tf.io.FixedLenFeature([HPARAMS.max_seq_length],\n tf.int64,\n default_value=tf.constant(\n 0,\n dtype=tf.int64,\n shape=[HPARAMS.max_seq_length])),\n 'label':\n tf.io.FixedLenFeature((), tf.int64, default_value=-1),\n }\n # We also extract corresponding neighbor features in a similar manner to\n # the features above during training.\n if training:\n for i in range(HPARAMS.num_neighbors):\n nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')\n nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i,\n NBR_WEIGHT_SUFFIX)\n feature_spec[nbr_feature_key] = tf.io.FixedLenFeature(\n [HPARAMS.max_seq_length],\n tf.int64,\n default_value=tf.constant(\n 0, dtype=tf.int64, shape=[HPARAMS.max_seq_length]))\n\n # We assign a default value of 0.0 for the neighbor weight so that\n # graph regularization is done on samples based on their exact number\n # of neighbors. In other words, non-existent neighbors are discounted.\n feature_spec[nbr_weight_key] = tf.io.FixedLenFeature(\n [1], tf.float32, default_value=tf.constant([0.0]))\n\n features = tf.io.parse_single_example(example_proto, feature_spec)\n\n label = features.pop('label')\n return features, label\n\n dataset = tf.data.TFRecordDataset([file_path])\n if training:\n dataset = dataset.shuffle(10000)\n dataset = dataset.map(parse_example)\n dataset = dataset.batch(HPARAMS.batch_size)\n return dataset\n\n\ntrain_dataset = make_dataset(TRAIN_DATA_PATH, training=True)\ntest_dataset = make_dataset(TEST_DATA_PATH)",
"Let's peek into the train dataset to look at its contents.",
"for feature_batch, label_batch in train_dataset.take(1):\n print('Feature list:', list(feature_batch.keys()))\n print('Batch of inputs:', feature_batch['words'])\n nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')\n nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)\n print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])\n print('Batch of neighbor weights:',\n tf.reshape(feature_batch[nbr_weight_key], [-1]))\n print('Batch of labels:', label_batch)",
"Let's peek into the test dataset to look at its contents.",
"for feature_batch, label_batch in test_dataset.take(1):\n print('Feature list:', list(feature_batch.keys()))\n print('Batch of inputs:', feature_batch['words'])\n print('Batch of labels:', label_batch)",
"Model definition\nIn order to demonstrate the use of graph regularization, we build a base model\nfor this problem first. We will use a simple feed-forward neural network with 2\nhidden layers and dropout in between. We illustrate the creation of the base\nmodel using all model types supported by the tf.Keras framework -- sequential,\nfunctional, and subclass.\nSequential base model",
"def make_mlp_sequential_model(hparams):\n \"\"\"Creates a sequential multi-layer perceptron model.\"\"\"\n model = tf.keras.Sequential()\n model.add(\n tf.keras.layers.InputLayer(\n input_shape=(hparams.max_seq_length,), name='words'))\n # Input is already one-hot encoded in the integer format. We cast it to\n # floating point format here.\n model.add(\n tf.keras.layers.Lambda(lambda x: tf.keras.backend.cast(x, tf.float32)))\n for num_units in hparams.num_fc_units:\n model.add(tf.keras.layers.Dense(num_units, activation='relu'))\n # For sequential models, by default, Keras ensures that the 'dropout' layer\n # is invoked only during training.\n model.add(tf.keras.layers.Dropout(hparams.dropout_rate))\n model.add(tf.keras.layers.Dense(hparams.num_classes))\n return model",
"Functional base model",
"def make_mlp_functional_model(hparams):\n \"\"\"Creates a functional API-based multi-layer perceptron model.\"\"\"\n inputs = tf.keras.Input(\n shape=(hparams.max_seq_length,), dtype='int64', name='words')\n\n # Input is already one-hot encoded in the integer format. We cast it to\n # floating point format here.\n cur_layer = tf.keras.layers.Lambda(\n lambda x: tf.keras.backend.cast(x, tf.float32))(\n inputs)\n\n for num_units in hparams.num_fc_units:\n cur_layer = tf.keras.layers.Dense(num_units, activation='relu')(cur_layer)\n # For functional models, by default, Keras ensures that the 'dropout' layer\n # is invoked only during training.\n cur_layer = tf.keras.layers.Dropout(hparams.dropout_rate)(cur_layer)\n\n outputs = tf.keras.layers.Dense(hparams.num_classes)(cur_layer)\n\n model = tf.keras.Model(inputs, outputs=outputs)\n return model",
"Subclass base model",
"def make_mlp_subclass_model(hparams):\n \"\"\"Creates a multi-layer perceptron subclass model in Keras.\"\"\"\n\n class MLP(tf.keras.Model):\n \"\"\"Subclass model defining a multi-layer perceptron.\"\"\"\n\n def __init__(self):\n super(MLP, self).__init__()\n # Input is already one-hot encoded in the integer format. We create a\n # layer to cast it to floating point format here.\n self.cast_to_float_layer = tf.keras.layers.Lambda(\n lambda x: tf.keras.backend.cast(x, tf.float32))\n self.dense_layers = [\n tf.keras.layers.Dense(num_units, activation='relu')\n for num_units in hparams.num_fc_units\n ]\n self.dropout_layer = tf.keras.layers.Dropout(hparams.dropout_rate)\n self.output_layer = tf.keras.layers.Dense(hparams.num_classes)\n\n def call(self, inputs, training=False):\n cur_layer = self.cast_to_float_layer(inputs['words'])\n for dense_layer in self.dense_layers:\n cur_layer = dense_layer(cur_layer)\n cur_layer = self.dropout_layer(cur_layer, training=training)\n\n outputs = self.output_layer(cur_layer)\n\n return outputs\n\n return MLP()",
"Create base model(s)",
"# Create a base MLP model using the functional API.\n# Alternatively, you can also create a sequential or subclass base model using\n# the make_mlp_sequential_model() or make_mlp_subclass_model() functions\n# respectively, defined above. Note that if a subclass model is used, its\n# summary cannot be generated until it is built.\nbase_model_tag, base_model = 'FUNCTIONAL', make_mlp_functional_model(HPARAMS)\nbase_model.summary()",
"Train base MLP model",
"# Compile and train the base MLP model\nbase_model.compile(\n optimizer='adam',\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])\nbase_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)",
"Evaluate base MLP model",
"# Helper function to print evaluation metrics.\ndef print_metrics(model_desc, eval_metrics):\n \"\"\"Prints evaluation metrics.\n\n Args:\n model_desc: A description of the model.\n eval_metrics: A dictionary mapping metric names to corresponding values. It\n must contain the loss and accuracy metrics.\n \"\"\"\n print('\\n')\n print('Eval accuracy for ', model_desc, ': ', eval_metrics['accuracy'])\n print('Eval loss for ', model_desc, ': ', eval_metrics['loss'])\n if 'graph_loss' in eval_metrics:\n print('Eval graph loss for ', model_desc, ': ', eval_metrics['graph_loss'])\n\neval_results = dict(\n zip(base_model.metrics_names,\n base_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))\nprint_metrics('Base MLP model', eval_results)",
"Train MLP model with graph regularization\nIncorporating graph regularization into the loss term of an existing\ntf.Keras.Model requires just a few lines of code. The base model is wrapped to\ncreate a new tf.Keras subclass model, whose loss includes graph\nregularization.\nTo assess the incremental benefit of graph regularization, we will create a new\nbase model instance. This is because base_model has already been trained for a\nfew iterations, and reusing this trained model to create a graph-regularized\nmodel will not be a fair comparison for base_model.",
"# Build a new base MLP model.\nbase_reg_model_tag, base_reg_model = 'FUNCTIONAL', make_mlp_functional_model(\n HPARAMS)\n\n# Wrap the base MLP model with graph regularization.\ngraph_reg_config = nsl.configs.make_graph_reg_config(\n max_neighbors=HPARAMS.num_neighbors,\n multiplier=HPARAMS.graph_regularization_multiplier,\n distance_type=HPARAMS.distance_type,\n sum_over_axis=-1)\ngraph_reg_model = nsl.keras.GraphRegularization(base_reg_model,\n graph_reg_config)\ngraph_reg_model.compile(\n optimizer='adam',\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])\ngraph_reg_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)",
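To build intuition for what the GraphRegularization wrapper adds to the objective, here is a hedged pure-Python sketch (not the NSL API; the function name and numbers are illustrative) of a graph-regularized loss with a squared-L2 neighbor distance:

```python
def graph_regularized_loss(task_loss, embedding, nbr_embeddings, nbr_weights,
                           multiplier=0.1):
    """Toy sketch: total loss = task loss + multiplier * weighted neighbor distance.

    Neighbors padded with weight 0.0 (the dummy values from make_dataset)
    contribute nothing, which is how non-existent neighbors are discounted.
    """
    reg = 0.0
    for nbr, w in zip(nbr_embeddings, nbr_weights):
        reg += w * sum((a - b) ** 2 for a, b in zip(embedding, nbr))
    return task_loss + multiplier * reg

# One real neighbor at squared distance 1.0 and one dummy neighbor (weight 0.0).
total = graph_regularized_loss(1.0, [1.0, 0.0],
                               [[0.0, 0.0], [5.0, 5.0]], [1.0, 0.0])
print(total)  # 1.0 + 0.1 * 1.0 = 1.1
```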
"Evaluate MLP model with graph regularization",
"eval_results = dict(\n zip(graph_reg_model.metrics_names,\n graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))\nprint_metrics('MLP + graph regularization', eval_results)",
"The graph-regularized model's accuracy is about 2-3% higher than that of the\nbase model (base_model).\nConclusion\nWe have demonstrated the use of graph regularization for document classification\non a natural citation graph (Cora) using the Neural Structured Learning (NSL)\nframework. Our advanced tutorial involves\nsynthesizing graphs based on sample embeddings before training a neural network\nwith graph regularization. This approach is useful if the input does not contain\nan explicit graph.\nWe encourage users to experiment further by varying the amount of supervision as\nwell as trying different neural architectures for graph regularization."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jasonkitbaby/udacity-homework
|
boston_housing/boston_housing.ipynb
|
apache-2.0
|
[
"Machine Learning Engineer Nanodegree\nModel Evaluation and Validation\nProject 1: Predicting Boston Housing Prices\nWelcome to the first project of the Machine Learning Engineer Nanodegree! Some example code has already been provided for you in this file, but you will need to implement additional functionality to make the project run successfully. Unless explicitly required, you do not need to modify any of the code that is already given. Titles beginning with 'Coding Exercise' indicate that the content that follows contains functionality you must implement. Each section comes with detailed guidance, and the parts to be implemented are marked with TODO in the comments. Please read all the hints carefully!\nBesides implementing code, you must also answer some questions related to the project and your implementation. Each question you need to answer is headed 'Question X'. Please read each question carefully and write a complete answer in the 'Answer' text box below it. Your project will be graded on both your answers and the functionality implemented in your code.\n\nTip: Code and Markdown cells can be run with the Shift + Enter shortcut. Markdown cells can also be edited by double-clicking.\n\n\nStep 1. Import the data\nIn this project you will use data on homes in the suburbs of Boston, Massachusetts, to train and test a model, and evaluate its performance and predictive power. A model trained well on this data can be used to make specific predictions about homes, in particular about their value. Such a predictive model proves very valuable in the daily work of, for example, real estate agents.\nThe dataset for this project comes from the UCI Machine Learning Repository (the dataset has since been taken offline). The Boston housing data was collected starting in 1978; its 506 data points cover 14 features of homes in various suburbs of Boston. This project applies the following preprocessing to the original dataset:\n- 16 data points with a 'MEDV' value of 50.0 have been removed. They most likely contain missing or censored values.\n- 1 data point with an 'RM' value of 8.78 has been removed, as it is an outlier.\n- For this project only the 'RM', 'LSTAT', 'PTRATIO', and 'MEDV' features are needed; the remaining, irrelevant features have been removed.\n- The 'MEDV' feature has been scaled to reflect 35 years of market inflation.\nRun the code cell below to load the Boston housing dataset, together with some Python libraries required for this project. If the size of the dataset is printed, the dataset has loaded successfully.",
"# Import the libraries needed for this project\nimport numpy as np\nimport pandas as pd\nimport visuals as vs # Supplementary code\n\n# Check your Python version\nfrom sys import version_info\nif version_info.major != 2 or version_info.minor != 7:\n raise Exception('Please use Python 2.7 to complete this project')\n \n# Display plots inline in the notebook\n%matplotlib inline\n\n# Load the Boston housing dataset\ndata = pd.read_csv('housing.csv')\nprices = data['MEDV']\nfeatures = data.drop('MEDV', axis = 1)\n \n# Done\nprint \"Boston housing dataset has {} data points with {} variables each.\".format(*data.shape)",
"Step 2. Analyze the data\nIn this first part of the project, you will take an initial look at the Boston real estate data and present your analysis. Getting familiar with the data through exploration helps you better understand and explain your results.\nSince the final goal of this project is to build a model that predicts house values, we need to separate the dataset into features and the target variable.\n- The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point.\n- The target variable, 'MEDV', is the variable we want to predict.\nThey are stored in the variables features and prices, respectively.\nCoding Exercise 1: Basic statistics\nYour first coding exercise is to compute descriptive statistics of the Boston housing prices. numpy has been imported for you; use it to perform the necessary calculations. These statistics are very important for analyzing the model's predictions later.\nIn the code below, you should:\n- Compute the minimum, maximum, mean, median, and standard deviation of 'MEDV' in prices;\n- Store the results in the corresponding variables.",
"#TODO 1\n\n# Goal: compute the minimum price\nminimum_price = np.min(prices)\n\n# Goal: compute the maximum price\nmaximum_price = np.max(prices)\n\n# Goal: compute the mean price\nmean_price = np.mean(prices)\n\n# Goal: compute the median price\nmedian_price = np.median(prices)\n\n# Goal: compute the standard deviation of prices\nstd_price = np.std(prices)\n\n# Goal: print the computed results\nprint \"Statistics for Boston housing dataset:\\n\"\nprint \"Minimum price: ${:,.2f}\".format(minimum_price)\nprint \"Maximum price: ${:,.2f}\".format(maximum_price)\nprint \"Mean price: ${:,.2f}\".format(mean_price)\nprint \"Median price ${:,.2f}\".format(median_price)\nprint \"Standard deviation of prices: ${:,.2f}\".format(std_price)",
"Question 1 - Feature observation\nAs mentioned above, in this project we focus on three values: 'RM', 'LSTAT', and 'PTRATIO'. For each data point:\n- 'RM' is the average number of rooms per home in the area;\n- 'LSTAT' is the percentage of homeowners in the area considered lower class (working poor);\n- 'PTRATIO' is the ratio of students to teachers in the area's primary and secondary schools.\nIntuitively, for each of these three features, do you think increasing its value would make 'MEDV' larger or smaller? Give a reason for each answer.\nHint: Would you expect a home with an 'RM' value of 6 to be worth more or less than a home with an 'RM' value of 7?\nQuestion 1 - Answer:\nIncreasing RM increases MEDV: house price is positively correlated with the number of rooms.\nIncreasing LSTAT decreases MEDV: house prices in an area are related to income levels; if most homeowners there have low incomes, prices in the area are likely to be lower.\nIncreasing PTRATIO decreases MEDV: more students per teacher means scarcer educational resources in the area, which pushes house prices down.",
"# Import the plotting library matplotlib\nimport matplotlib.pyplot as plt\n\n# Render figures at higher resolution\n%config InlineBackend.figure_format = 'retina'\n\n# Adjust the figure size\nplt.figure(figsize=(16, 4))\nfor i, key in enumerate(['RM', 'LSTAT', 'PTRATIO']):\n plt.subplot(1, 3, i+1)\n plt.xlabel(key)\n plt.scatter(data[key], data['MEDV'], alpha=0.5)",
"Coding Exercise 2: Splitting and shuffling the data\nNext, you need to split the Boston housing dataset into training and testing subsets. The data is usually also shuffled during this process, to remove any bias caused by the ordering of the dataset.\nIn the code below, you need to:\nUse train_test_split from sklearn.model_selection to split both features and prices into a training subset and a testing subset.\n - Split ratio: 80% of the data for training, 20% for testing;\n - Pick a value for the random_state of train_test_split, which ensures the results are reproducible;",
"# TODO 2\n\n# Hint: import train_test_split\nfrom sklearn.model_selection import train_test_split\n\n\ndef generate_train_and_test(X, y):\n \"\"\"Shuffle and split the data into training and testing sets\"\"\"\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)\n return (X_train, X_test, y_train, y_test)\n \n\nX_train, X_test, y_train, y_test = generate_train_and_test(features, prices)",
"Question 2 - Training and testing\nWhat is the benefit to a learning algorithm of splitting a dataset into training and testing subsets at some ratio?\nWhat is the downside of testing on data the model has already seen, such as part of the training set?\nHint: What problem would arise if there were no data to test the model on?\nQuestion 2 - Answer:\nOne part of the data is used for training, to fit the parameters; the other part is used for testing, to verify whether the model is accurate.\nIf data the model has already seen is used for testing, the test results will look deceptively good while predictions on new data remain inaccurate, because the seen data took part in training and the fitted parameters are tailored to it.\n\nStep 3. Model performance metrics\nIn this third step of the project, you will learn the tools and techniques needed to make predictions with your model. Measuring each model's performance precisely with these tools and techniques greatly strengthens your confidence in your predictions.\nCoding Exercise 3: Defining a performance metric\nIt is hard to judge how good a model is without quantitatively evaluating its performance on training and testing. We usually define metrics that can be computed from some error or goodness-of-fit measure. In this project, you will quantify model performance by computing the coefficient of determination, R<sup>2</sup>. The coefficient of determination is a very common statistic in regression analysis and is often treated as a measure of how good a model's predictions are.\nR<sup>2</sup> ranges from 0 to 1 and captures the squared correlation between the predicted and actual values of the target variable, as a percentage. A model with an R<sup>2</sup> of 0 is no better than always predicting the mean, while a model with an R<sup>2</sup> of 1 predicts the target variable perfectly. Values between 0 and 1 indicate what fraction of the target variable's variation can be explained by the features. A model can also have a negative R<sup>2</sup>, meaning its predictions are sometimes far worse than simply predicting the mean of the target variable.\nIn the performance_metric function below, you should:\n- Use r2_score from sklearn.metrics to compute the R<sup>2</sup> of y_true and y_predict as the performance score.\n- Store the score in the variable score.\nOr\n\n(Optional) Compute the score from the definition of the coefficient of determination without any external libraries. This can also help you better understand when the coefficient of determination equals 0 or 1.",
"# TODO 3\n\n# Hint: import r2_score\nfrom sklearn.metrics import r2_score\n\ndef performance_metric(y_true, y_predict):\n \"\"\"Compute and return the score of the predicted values against the true values\"\"\"\n \n score = r2_score(y_true, y_predict)\n\n return score\n\n# TODO 3 (optional)\n\n# Do not import any library that computes the coefficient of determination\n\ndef performance_metric2(y_true, y_predict):\n \"\"\"Compute and return the score of the predicted values against the true values\"\"\"\n \n score = None\n\n return score",
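As a hedged sketch of the optional part of this exercise (not the graded solution), R<sup>2</sup> can be computed directly from its definition, 1 - SS_res / SS_tot, without any external library:

```python
def r2_from_definition(y_true, y_predict):
    """Coefficient of determination: 1 - SS_res / SS_tot.

    Equals 1 for perfect predictions and 0 when the model is no better
    than always predicting the mean of y_true.
    """
    mean_true = sum(y_true) / float(len(y_true))
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_predict))
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

print(round(r2_from_definition([3, -0.5, 2, 7, 4.2],
                               [2.5, 0.0, 2.1, 7.8, 5.3]), 3))  # 0.923
```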
"Question 3 - Goodness of fit\nSuppose a dataset has five data points and a model makes the following predictions for the target variable:\n| True value | Predicted value |\n| :-------------: | :--------: |\n| 3.0 | 2.5 |\n| -0.5 | 0.0 |\n| 2.0 | 2.1 |\n| 7.0 | 7.8 |\n| 4.2 | 5.3 |\nDo you think this model has successfully captured the variation of the target variable? If so, explain why; if not, give your reasons.\nHint: Run the code below and use the performance_metric function to compute the model's coefficient of determination.",
"# Compute the coefficient of determination of this model's predictions\nscore = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])\nprint \"Model has a coefficient of determination, R^2, of {:.3f}.\".format(score)",
"Question 3 - Answer:\nYes, it successfully captures the variation of the target variable: the R^2 value is close to 1.\n\nStep 4. Analyzing model performance\nIn this fourth step of the project, we look at how the model performs on the training and validation sets under different parameter settings. Here we focus on one specific algorithm (a decision tree with pruning, though that is not the point of this project) and one of its parameters, 'max_depth'. We train on the full training set with different values of 'max_depth' and observe how this parameter affects model performance. Plotting the model's behavior is very helpful during analysis, as it can reveal patterns that are invisible from the raw results alone.\nLearning curves\nThe code cell below produces four figures showing the performance of a decision tree model at different maximum depths. Each curve shows how the training-set and validation-set scores evolve as the amount of training data grows; scores are the coefficient of determination R<sup>2</sup>. The shaded region around a curve represents its uncertainty (measured by the standard deviation).\nRun the code cell below and use the resulting figures to answer the following question.",
"# Generate learning curves for different training-set sizes and maximum depths\nvs.ModelLearning(X_train, y_train)",
"Question 4 - Learning curves\nPick one of the figures above and state its maximum depth. As the amount of training data increases, how does the training-set curve's score change? And the validation-set curve? Would more training data effectively improve the model's performance?\nHint: Do the learning-curve scores eventually converge to particular values?\nQuestion 4 - Answer:\nFigure 1 (max_depth = 1): the training score stays low throughout, showing that the model fits poorly. More training data would not improve its performance; the model's complexity needs to be increased instead.\nComplexity curves\nThe code cell below produces a figure showing the performance of a trained and validated decision tree model at different maximum depths. The figure contains two curves, one for the training set and one for the validation set. As with the learning curves, the shaded regions represent uncertainty, and both training and testing scores are computed with the performance_metric function.\nRun the code cell below and use the resulting figure to answer the two questions that follow.",
"# Generate the complexity curve for different maximum-depth parameters\nvs.ModelComplexity(X_train, y_train)",
"Question 5 - The bias-variance trade-off\nWhen the model is trained with a maximum depth of 1, does it suffer more from high bias or from high variance? What about a maximum depth of 10? Which features of the figure support your conclusions?\nHint: How do you tell whether a model suffers from high bias or high variance?\nQuestion 5 - Answer:\nAt a maximum depth of 1 the problem is bias: the model is not expressive enough and underfits, so its complexity should be increased.\nAt a maximum depth of 10 the problem is variance: the model fits the training data very closely but its validation score is poor, i.e. it overfits.\nQuestion 6 - Guessing the optimal model\nWhat maximum depth do you think gives a model that best predicts unseen data? What is your reasoning?\nQuestion 6 - Answer:\nA maximum depth of 4 predicts unseen data best.\nBeyond 4, the validation score no longer improves while the training score keeps rising, i.e. the model overfits.\nBelow 4, both the validation and training scores keep improving, i.e. the model is still underfitting.\n\nStep 5. Selecting the optimal parameters\nQuestion 7 - Grid search\nWhat is grid search? How can it be used to optimize a model?\nQuestion 7 - Answer:\nGrid search is a method for optimizing model performance by exhaustively trying combinations of given parameter values.\nThe feasible range of each parameter is divided into a series of values (e.g. from small to large); the computer then evaluates, in order, the error or score for each combination of parameter values and compares them one by one, so as to find the best score within the range and the corresponding best parameter values.\nThis search essentially guarantees that the solution found is the best over the grid, avoiding gross errors.\nTaking a decision tree as an example: to fit and predict better we need to tune its parameters, typically the maximum depth. So a set of maximum-depth values is given, such as {'max_depth': [1,2,3,4,5]}, each depth is scored on the validation data, and the best value in the set is selected.\nQuestion 8 - Cross-validation\n\nWhat is k-fold cross-validation?\nHow does GridSearchCV combine cross-validation to select the best parameter combination?\nWhat does the 'cv_results_' attribute of GridSearchCV tell us?\nWhat problems arise when grid search is done without cross-validation? How does cross-validation solve them?\n\nHint: Adding print pd.DataFrame(grid.cv_results_) at the end of the fit_model function below can help you inspect more information.\nQuestion 8 - Answer:\nThe dataset is split into a training set and a test set:\nthe training set is used to train the model,\nthe test set is used for the final evaluation of the model,\nwith a typical split ratio of 8:2.\n\nWhat is k-fold cross-validation?\n\nK-fold cross-validation (k-CV) is an extension of double cross-validation.\nThe training set is divided into k subsets; one subset is held out as the validation set while the remaining k-1 subsets are used for training, after which the validation set is scored.\nk-CV repeats this step k times, each time choosing a different subset as the validation set.\nFor example, with 10 subsets: the first round validates on subset 1 and trains on the other 9; the second round validates on subset 2 and trains on the other 9; and so on, for 10 rounds of training and validation in total, so that every subset serves as the validation set exactly once.\n\n\nHow does GridSearchCV combine cross-validation to select the best parameter combination?\nFirst, list all candidate parameter values of the model, e.g. the polynomial degree D for a linear model or the tree depth D for a decision tree.\nFor each candidate parameter, run k rounds of cross-validation and average the scores as the model's score.\nAmong these scores, pick the parameter D with the best score.\n\n\nWhat does the 'cv_results_' attribute of GridSearchCV tell us?\nThe cv_results_ attribute reports the results and metrics of every cross-validation run.\n\n\nWhat problems arise when grid search is done without cross-validation? How does cross-validation solve them?\n\n\nWithout cross-validation, grid search merely picks the best value over the grid on a single split, so scores can be accidentally inflated or deflated (underfitting or overfitting problems).\nCross-validation avoids the chance effects of any single dataset split: by training on different training/validation splits and averaging the k scores into a final result, it keeps the evaluation objective and accurate.\nCoding Exercise 4: Training the optimal model\nIn this exercise, you will put everything you have learned together and train a model using the decision tree algorithm. To obtain an optimal model, you will train it with grid search to find the best 'max_depth' parameter. You can think of 'max_depth' as the number of questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are a type of supervised learning algorithm.\nIn the fit_model function below, you need to:\n1. Define the 'cross_validator' variable: use KFold from sklearn.model_selection to create a cross-validation generator object;\n2. Define the 'regressor' variable: use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor;\n3. Define the 'params' variable: create a dictionary for the 'max_depth' parameter whose value is an array of 1 through 10;\n4. Define the 'scoring_fnc' variable: use make_scorer from sklearn.metrics to create a scoring function,\n passing 'performance_metric' as its argument;\n5. Define the 'grid' variable: use GridSearchCV from sklearn.model_selection to create a grid search object, passing the variables 'regressor', 'params', 'scoring_fnc', and 'cross_validator' to its constructor;\nIf you are unfamiliar with default parameter definition and passing in Python functions, you can refer to this MIT course video.",
"# TODO 4\n\n# Hint: import 'KFold', 'DecisionTreeRegressor', 'make_scorer', 'GridSearchCV'\nfrom sklearn.model_selection import KFold\nfrom sklearn.tree import DecisionTreeRegressor\nfrom sklearn.metrics import make_scorer\nfrom sklearn.model_selection import GridSearchCV\n\n\ndef fit_model(X, y):\n \"\"\"Use grid search on the input data [X, y] to find the optimal decision tree model\"\"\"\n \n cross_validator = KFold(n_splits=2, random_state=None, shuffle=False)\n \n regressor = DecisionTreeRegressor()\n\n params = {\"max_depth\": [1,2,3,4,5,6,7,8,9,10]}\n\n scoring_fnc = make_scorer(performance_metric)\n\n grid = GridSearchCV(regressor, params, scoring=scoring_fnc, cv=cross_validator)\n\n # Run grid search on the input data [X, y]\n grid = grid.fit(X, y)\n \n # print pd.DataFrame(grid.cv_results_) \n \n # Return the best model found by grid search\n return grid.best_estimator_\n",
"Coding Exercise 4: Training the optimal model (optional)\nIn this exercise, you will put everything you have learned together and train a model using the decision tree algorithm. To obtain an optimal model, you will train it with grid search to find the best 'max_depth' parameter. You can think of 'max_depth' as the number of questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are a type of supervised learning algorithm.\nIn the fit_model function below, you need to:\n\nIterate over the candidate values 1~10 of the 'max_depth' parameter and build the corresponding models\nCompute the cross-validation score of the current model\nReturn the model with the best cross-validation score",
"# TODO 4 (optional)\n\n'''\nDo not use any sklearn library other than DecisionTreeRegressor\n\nHint: you may need to implement the cross_val_score function below\n\ndef cross_val_score(estimator, X, y, scoring = performance_metric, cv=3):\n \"\"\" Return an array of model scores for each cross-validation fold \"\"\"\n scores = [0,0,0]\n return scores\n'''\n\ndef fit_model2(X, y):\n \"\"\"Use grid search on the input data [X, y] to find the optimal decision tree model\"\"\"\n \n # The best model corresponding to the best cross-validation score\n best_estimator = None\n \n return best_estimator",
"Question 9 - The optimal model\nWhat is the maximum depth of the optimal model? Is this answer the same as your guess in Question 6?\nRun the code cell below to fit the decision tree regressor to the training data and obtain the optimal model.",
"# Fit the optimal model on the training data\noptimal_reg = fit_model(X_train, y_train)\n\n# Print the 'max_depth' parameter of the optimal model\nprint \"Parameter 'max_depth' is {} for the optimal model.\".format(optimal_reg.get_params()['max_depth'])",
"Question 9 - Answer:\nThe optimal maximum depth is 4, consistent with the guess.\nStep 6. Making predictions\nOnce we have trained a model on data, it can be used to make predictions on new data. With the decision tree regressor, the model has learned to ask questions about newly supplied data and return a prediction for the target variable. You can use these predictions to learn about the unknown target variable of data that was not part of the training data.\nQuestion 10 - Predicting selling prices\nImagine you are a real estate agent in the Boston area hoping to use this model to help your clients value the homes they want to sell. You have collected the following information from three clients:\n| Feature | Client 1 | Client 2 | Client 3 |\n| :---: | :---: | :---: | :---: |\n| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |\n| Neighborhood poverty level (% considered lower class) | 17% | 32% | 3% |\n| Student-teacher ratio of nearby schools | 15:1 | 22:1 | 12:1 |\nWhat selling price would you recommend to each client? Do these prices seem reasonable given the home features? Why?\nHint: Use the statistics you computed in the data analysis section to justify your answer.\nRun the code cell below to use your optimized model to predict the value of each client's home.",
"# Data for the three clients\nclient_data = [[5, 17, 15], # Client 1\n [4, 32, 22], # Client 2\n [8, 3, 12]] # Client 3\n\n# Make predictions\npredicted_price = optimal_reg.predict(client_data)\nfor i, price in enumerate(predicted_price):\n print \"Predicted selling price for Client {}'s home: ${:,.2f}\".format(i+1, price)",
"Question 10 - Answer:\nFrom the statistics we can see that:\n\nIncreasing RM increases MEDV: house price is positively correlated with the number of rooms.\nIncreasing LSTAT decreases MEDV: house prices in an area are related to income levels; if most homeowners there have low incomes, prices in the area are likely to be lower.\nIncreasing PTRATIO decreases MEDV: more students per teacher means scarcer educational resources, which pushes house prices down.\n\nTherefore:\n- Client 1, predicted price $391,183.33: 5 rooms, a medium RM; a medium poverty level, so a middling LSTAT; average educational resources at 15 students per teacher, a medium PTRATIO.\n- Client 2, predicted price $189,123.53: 4 rooms, a low RM; a high poverty level, so a high LSTAT; scarce educational resources at 22 students per teacher, a high PTRATIO.\n- Client 3, predicted price $942,666.67: 8 rooms, a high RM; few poor residents, so a low LSTAT; good educational resources at 12 students per teacher, a low PTRATIO.\nThe predicted prices are quite reasonable.\nCoding Exercise 5\nYou have just predicted the selling prices of three clients' homes. In this exercise, you will use your optimal model to make predictions on the whole test dataset and compute the coefficient of determination R<sup>2</sup> with respect to the target variable.",
"#TODO 5\n\n# Hint: you may need X_test, y_test, optimal_reg, performance_metric\n# Hint: you may want to refer to the code of Question 10 for making predictions\n# Hint: you may want to refer to the code of Question 3 for computing R^2\n\ny_pre = optimal_reg.predict(X_test)\nr2 = r2_score(y_test, y_pre)\n\nprint \"Optimal model has R^2 score {:,.2f} on test data\".format(r2)",
"Question 11 - Analyzing the coefficient of determination\nYou have just computed the coefficient of determination of the optimal model on the test set. How would you assess this result?\nQuestion 11 - Answer\nThere is no absolute standard for the coefficient of determination; it depends on the model and the setting.\nHere the coefficient of determination R^2 is 0.77, fairly close to 1, which is a decent result.\nIt shows the chosen features are informative: they explain most of the variation in house prices. One could try adding other useful features to see whether the coefficient of determination improves further.\nModel robustness\nAn optimal model is not necessarily a robust model. Sometimes a model is too complex or too simple to generalize to new data; sometimes the learning algorithm is unsuitable for the structure of the data; sometimes the data itself is too noisy or too scarce for the model to predict the target variable accurately. In these cases we say the model underfits.\nQuestion 12 - Model robustness\nIs the model robust enough to guarantee consistent predictions?\nHint: Run the code cell below, which executes the fit_model function 10 times with different training and testing sets. Observe how the prediction for a particular client changes with the training data.",
"# First comment out all print statements inside the fit_model function\nvs.PredictTrials(features, prices, fit_model, client_data)",
"Question 12 - Answer:\nQuestion 13 - Practical applicability\nBriefly discuss whether the model you built can be used in the real world.\nHint: Answer the following questions and give reasons for your conclusions:\n- Is data collected in 1978 still applicable today, even after accounting for inflation?\n- Are the features in the data sufficient to describe a home?\n- Can data collected in a large city like Boston be applied to other rural towns?\n- Do you think it is reasonable to judge a home's value solely from the characteristics of its neighborhood?\nQuestion 13 - Answer:\n\nData collected in 1978, with inflation accounted for, still has reference value today\nThe features in the data are not sufficient to describe a home\nData collected in a big city like Boston cannot be carried over to other rural towns\nJudging house prices solely from the neighborhood environment is not reasonable\n\nThe model can be used in the real world, but more useful features are needed, such as the developer and the floor the home is on, together with more data for training and validation to improve accuracy.\nOptional question - Predicting Beijing housing prices\n(The result of this question does not affect whether the project passes.) Through the practice above, you should now have a good grasp of some common machine-learning concepts. But modeling 1970s Boston housing prices is not very meaningful to us. You can now apply what you learned above to the Beijing housing dataset bj_housing.csv.\nDisclaimer: Beijing housing prices are directly affected by many factors such as macroeconomics and policy changes, so the predictions are for reference only.\nThe features of this dataset are:\n- Area: floor area of the home, in square meters\n- Room: number of bedrooms\n- Living: number of living rooms\n- School: whether it is in a school district, 0 or 1\n- Year: year the home was built\n- Floor: floor the home is on\nTarget variable:\n- Value: selling price of the home, in 10,000 RMB\nYou can follow what you learned above and use this dataset to practice data splitting and shuffling, defining a performance metric, training a model, evaluating model performance, tuning parameters with grid search plus cross-validation to select the best parameters, comparing the differences, and finally obtaining the best model's score on the validation set.",
"# TODO 6\n\n# Import the data\n# Import the libraries needed for this project\nimport numpy as np\nimport pandas as pd\nimport visuals as vs # Supplementary code\n\n \n# Display plots inline in the notebook\n%matplotlib inline\n\n\n# 1. Import the data\ndata = pd.read_csv('bj_housing.csv')\narea = data['Area']\nRoom = data['Room']\nliving = data['Living']\nschool = data['School']\nyear = data['Year']\nFloor = data['Floor']\nprices = data['Value']\nfeatures = data.drop('Value', axis = 1)\n \n# Done\nprint \"BJ housing dataset has {} data points with {} variables each.\".format(*data.shape)\n\n# 2. Analyze the data\n\nminimum_price = prices.min()\nmaximum_price = prices.max()\nmean_price = prices.mean()\nmedian_price = prices.median()\nstd_price = prices.std()\n\n# Print the computed results\nprint \"Statistics for BJ housing dataset:\\n\"\nprint \"Minimum price: ${:,.2f}\".format(minimum_price)\nprint \"Maximum price: ${:,.2f}\".format(maximum_price)\nprint \"Mean price: ${:,.2f}\".format(mean_price)\nprint \"Median price ${:,.2f}\".format(median_price)\nprint \"Standard deviation of prices: ${:,.2f}\".format(std_price)\n\n\nX_train, X_test, y_train, y_test = generate_train_and_test(features, prices)\n\n# Learning curves\nvs.ModelLearning(X_train, y_train)\n\n\n# Generate the complexity curve for different maximum-depth parameters\nvs.ModelComplexity(X_train, y_train)\n\n\noptimal_reg = fit_model(X_train, y_train)\n\n# Print the 'max_depth' parameter of the optimal model\nprint \"Parameter 'max_depth' is {} for the optimal model.\".format(optimal_reg.get_params()['max_depth'])\n\n# Model evaluation\ny_pre = optimal_reg.predict(X_test)\nr2 = r2_score(y_test, y_pre)\n\nprint \"Optimal model has R^2 score {:,.2f} on test data\".format(r2)",
"Question 14 - Predicting Beijing housing prices\nDid you successfully build a model with the new dataset? Can it be validated on the test data? Does its performance match your expectations? Does cross-validation help improve your model's performance?\nHint: Building machine-learning code from scratch can feel overwhelming at first. Don't panic; all you need to do is review the code you wrote earlier, understand every line, and build your model step by step. If you run into problems, you can also look for answers on our forum. You may find that your model's performance falls short of your expectations; that shows machine learning is no simple task, and building a well-performing model takes long-term research and testing, which is what we will gradually learn in the coming lessons.\nQuestion 14 - Answer\nThe R² score is 0.26, lower than expected.\nCross-validation helps improve model performance and can reveal whether the problem is one of variance or of bias."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
JrtPec/opengrid
|
notebooks/Demo/Demo_tmpo.ipynb
|
apache-2.0
|
[
"Quick Tmpo demo\nTo get started, clone the tmpo-py repository from the opengrid github page. Then specify the path to this repo on your hard drive in your opengrid.cfg file.",
"import tmpo\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = 14,8",
"Create a tmpo session. Setting s.debug = True gives more verbose output; here we leave it off.",
"s = tmpo.Session()\ns.debug = False",
"Add a sensor and token to start tracking the data for this given sensor. You only have to do this once for each sensor.",
"s.add('d209e2bbb35b82b83cc0de5e8b84a4ff','e16d9c9543572906a11649d92f902226')",
"Sync all available data to your hard drive. All sensors previously added will be synced.",
"s.sync()",
"Now you can create a pandas timeseries with all data from a given sensor.",
"ts = s.series('d209e2bbb35b82b83cc0de5e8b84a4ff')\nprint(ts)",
"When plotting the data, you'll notice that this ts contains cumulative data, and the time axis (= pandas index) contains seconds since the epoch. Not very practical.",
"ts.ix[:1000].plot()\nplt.show()",
"To show differential data (e.g. instantaneous power), we first have to resample this cumulative data to the interval we want to obtain, using linear interpolation to approximate the cumulative value between two data points. In the example below, we resample to hourly values. Then, we take the difference between the cumulative values at consecutive hourly intervals in order to get the average power (per hour). As the original data is in Wh, we obtain W.",
"tsmin = ts.resample(rule='H')\ntsmin=tsmin.interpolate(method='linear')\ntsmin=tsmin.diff()\ntsmin.plot()",
"If we want to plot only a specific period, we can slice the data with the .ix[from:to] method.",
"tsmin.ix['20141016':'20141018'].plot()\n\nts.name",
"Note\nAll of the functionality illustrated above is now included in the houseprint module. See the notebook Demo_houseprint.ipynb for a demo.\nTMPO support is now included in the new houseprint. After loading the houseprint just call houseprint.init_tmpo() to initialize. After that, accessing data is as easy as calling sensor.get_data(), houseprint.get_data()... there are many options, such as houseprint.get_data(sensortype='gas', resample='min')."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
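The resample/interpolate/diff recipe in the tmpo cells above uses an older pandas API (`ts.resample(rule='H')` returned a series). A minimal sketch of the same idea against the current pandas API, on a synthetic cumulative counter (the timestamps and Wh values below are made up for illustration, not real tmpo sensor data):

```python
import pandas as pd

# Synthetic cumulative energy counter in Wh, sampled at irregular times.
idx = pd.to_datetime(["2014-10-16 00:00", "2014-10-16 00:40",
                      "2014-10-16 01:30", "2014-10-16 03:00"])
cumulative = pd.Series([0.0, 100.0, 250.0, 400.0], index=idx)

# Interpolate the counter onto an hourly grid (linear in time), then take
# differences: Wh consumed per hour equals average power in W over that hour.
grid = pd.date_range(idx[0], idx[-1], freq="h")
counter = (cumulative.reindex(cumulative.index.union(grid))
                     .interpolate(method="time")
                     .reindex(grid))
power = counter.diff().dropna()
print(power)
```

The differences sum back to the total counter increase (400 Wh here), which is a handy sanity check after any resampling of cumulative data.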
cdawei/digbeta
|
dchen/music/pla_split.ipynb
|
gpl-3.0
|
[
"Dataset split of AotM-2011/30Music playlists for playlist augmentation",
"%matplotlib inline\n\nimport os\nimport sys\nimport gzip\nimport numpy as np\nimport pickle as pkl\nfrom scipy.sparse import lil_matrix, issparse, hstack, vstack\nfrom collections import Counter\nimport gensim\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nnp_settings0 = np.seterr(all='raise')\nRAND_SEED = 0\nplt.style.use('seaborn')\n\ndatasets = ['aotm2011', '30music']\nffeature = 'data/msd/song2feature.pkl.gz'\nfgenre = 'data/msd/song2genre.pkl.gz'\nfsong2artist = 'data/msd/song2artist.pkl.gz'\naudio_feature_indices = [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 185, 186, 187, 198, 199, 200, 201]\n\ndix = 0\ndataset_name = datasets[dix]\ndata_dir = 'data/%s' % dataset_name\nprint(dataset_name)",
"Load playlists\nLoad playlists.",
"fplaylist = os.path.join(data_dir, '%s-playlist.pkl.gz' % dataset_name)\n_all_playlists = pkl.load(gzip.open(fplaylist, 'rb'))\n\n# _all_playlists[0]\n\nall_playlists = []\n\nif type(_all_playlists[0][1]) == tuple:\n for pl, u in _all_playlists:\n user = '%s_%s' % (u[0], u[1]) # user string\n all_playlists.append((pl, user))\nelse:\n all_playlists = _all_playlists\n\n# user_playlists = dict()\n# for pl, u in all_playlists:\n# try:\n# user_playlists[u].append(pl)\n# except KeyError:\n# user_playlists[u] = [pl]\n\n# all_playlists = []\n# for u in user_playlists:\n# if len(user_playlists[u]) > 4:\n# all_playlists += [(pl, u) for pl in user_playlists[u]]\n\nall_users = sorted(set({user for _, user in all_playlists}))\n\nprint('#user : {:,}'.format(len(all_users)))\nprint('#playlist: {:,}'.format(len(all_playlists)))\n\npl_lengths = [len(pl) for pl, _ in all_playlists]\nplt.hist(pl_lengths, bins=100)\nprint('Average playlist length: %.1f' % np.mean(pl_lengths))",
"check duplicated songs in the same playlist.",
"print('{:,} | {:,}'.format(np.sum(pl_lengths), np.sum([len(set(pl)) for pl, _ in all_playlists])))",
"Load song features\nLoad song_id --> feature array mapping: map a song to the audio features of one of its corresponding tracks in MSD.",
"_song2feature = pkl.load(gzip.open(ffeature, 'rb'))\n\nsong2feature = dict()\n\nfor sid in sorted(_song2feature):\n song2feature[sid] = _song2feature[sid][audio_feature_indices]",
"Load genres\nSong genres from MSD Allmusic Genre Dataset (Top MAGD) and tagtraum.",
"song2genre = pkl.load(gzip.open(fgenre, 'rb'))",
"Song collection",
"_all_songs = sorted([(sid, int(song2feature[sid][-1])) for sid in {s for pl, _ in all_playlists for s in pl}], \n key=lambda x: (x[1], x[0]))\nprint('{:,}'.format(len(_all_songs)))",
"Randomise the order of song with the same age.",
"song_age_dict = dict()\n\nfor sid, age in _all_songs:\n age = int(age)\n try:\n song_age_dict[age].append(sid)\n except KeyError:\n song_age_dict[age] = [sid]\n\nall_songs = []\n\nnp.random.seed(RAND_SEED)\nfor age in sorted(song_age_dict.keys()):\n all_songs += [(sid, age) for sid in np.random.permutation(song_age_dict[age])]\n\npkl.dump(all_songs, gzip.open(os.path.join(data_dir, 'setting2/all_songs.pkl.gz'), 'wb'))",
"Check if all songs have genre info.",
"print('#songs missing genre: {:,}'.format(len(all_songs) - np.sum([sid in song2genre for (sid, _) in all_songs])))",
"Song popularity.",
"song2index = {sid: ix for ix, (sid, _) in enumerate(all_songs)}\nsong_pl_mat = lil_matrix((len(all_songs), len(all_playlists)), dtype=np.int8)\nfor j in range(len(all_playlists)):\n pl = all_playlists[j][0]\n ind = [song2index[sid] for sid in pl]\n song_pl_mat[ind, j] = 1\n\nsong_pop = song_pl_mat.tocsc().sum(axis=1)\n\nmax_pop = np.max(song_pop)\nmax_pop\n\nsong2pop = {sid: song_pop[song2index[sid], 0] for (sid, _) in all_songs}\n\npkl.dump(song2pop, gzip.open(os.path.join(data_dir, 'setting2/song2pop.pkl.gz'), 'wb'))",
"deal with one outlier.",
"# song_pop1 = song_pop.copy()\n# maxix = np.argmax(song_pop)\n# song_pop1[maxix] = 0\n# clipped_max_pop = np.max(song_pop1) + 10 # second_max_pop + 10\n# if max_pop - clipped_max_pop > 500:\n# song_pop1[maxix] = clipped_max_pop",
"Create song-playlist matrix\nSongs as rows, playlists as columns.",
"def gen_dataset(playlists, song2feature, song2genre, song2artist, artist2vec, \n train_song_set, dev_song_set=[], test_song_set=[], song2pop_train=None):\n \"\"\"\n Create labelled dataset: rows are songs, columns are users.\n \n Input:\n - playlists: a set of playlists\n - train_song_set: a list of songIDs in training set\n - dev_song_set: a list of songIDs in dev set\n - test_song_set: a list of songIDs in test set\n - song2feature: dictionary that maps songIDs to features from MSD\n - song2genre: dictionary that maps songIDs to genre\n - song2pop_train: a dictionary that maps songIDs to its popularity\n Output:\n - (Feature, Label) pair (X, Y)\n X: #songs by #features\n Y: #songs by #users\n \"\"\" \n song_set = train_song_set + dev_song_set + test_song_set\n N = len(song_set)\n K = len(playlists)\n \n genre_set = sorted({v for v in song2genre.values()})\n genre2index = {genre: ix for ix, genre in enumerate(genre_set)}\n \n def onehot_genre(songID):\n \"\"\"\n One-hot encoding of genres.\n Data imputation: \n - mean imputation (default)\n - one extra entry for songs without genre info\n - sampling from the distribution of genre popularity\n \"\"\"\n num = len(genre_set) # + 1\n vec = np.zeros(num, dtype=np.float)\n if songID in song2genre:\n genre_ix = genre2index[song2genre[songID]]\n vec[genre_ix] = 1\n else:\n vec[:] = np.nan\n #vec[-1] = 1\n return vec\n \n def song_artist_feature(songID):\n \"\"\"\n Return the artist feature for a given song\n \"\"\"\n if songID in song2artist:\n aid = song2artist[songID]\n return artist2vec[aid]\n else:\n return artist2vec['$UNK$']\n \n X = np.array([np.concatenate([song2feature[sid], song_artist_feature(sid), onehot_genre(sid)], axis=-1) \\\n for sid in song_set])\n Y = lil_matrix((N, K), dtype=np.bool)\n \n song2index = {sid: ix for ix, sid in enumerate(song_set)}\n for k in range(K):\n pl = playlists[k]\n indices = [song2index[sid] for sid in pl if sid in song2index]\n Y[indices, k] = True\n \n # genre imputation\n 
genre_ix_start = -len(genre_set)\n genre_nan = np.isnan(X[:, genre_ix_start:])\n genre_mean = np.nansum(X[:, genre_ix_start:], axis=0) / (X.shape[0] - np.sum(genre_nan, axis=0))\n #print(np.nansum(X[:, genre_ix_start:], axis=0))\n #print(genre_set)\n #print(genre_mean)\n for j in range(len(genre_set)):\n X[genre_nan[:,j], j+genre_ix_start] = genre_mean[j]\n \n # normalise the sum of all genres per song to 1\n # X[:, -len(genre_set):] /= X[:, -len(genre_set):].sum(axis=1).reshape(-1, 1) \n # NOTE: this is not necessary, as the imputed values are guaranteed to be normalised (sum to 1) \n # due to the above method to compute mean genres.\n \n # the log of song popularity\n if song2pop_train is not None:\n # for sid in song_set: \n # assert sid in song2pop_train # trust the input\n logsongpop = np.log2([song2pop_train[sid]+1 for sid in song_set]) # deal with 0 popularity\n X = np.hstack([X, logsongpop.reshape(-1, 1)])\n\n #return X, Y\n Y = Y.tocsr()\n \n train_ix = [song2index[sid] for sid in train_song_set]\n X_train = X[train_ix, :]\n Y_train = Y[train_ix, :]\n \n dev_ix = [song2index[sid] for sid in dev_song_set]\n X_dev = X[dev_ix, :]\n Y_dev = Y[dev_ix, :]\n \n test_ix = [song2index[sid] for sid in test_song_set]\n X_test = X[test_ix, :]\n Y_test = Y[test_ix, :]\n \n if len(dev_song_set) > 0:\n if len(test_song_set) > 0:\n return X_train, Y_train.tocsc(), X_dev, Y_dev.tocsc(), X_test, Y_test.tocsc()\n else:\n return X_train, Y_train.tocsc(), X_dev, Y_dev.tocsc()\n else:\n if len(test_song_set) > 0:\n return X_train, Y_train.tocsc(), X_test, Y_test.tocsc()\n else:\n return X_train, Y_train.tocsc()",
"Split playlists\nSplit playlists such that every song in test set is also in training set.\n~~Split playlists (60/10/30 train/dev/test split) uniformly at random.~~\n~~Split each user's playlists (60/20/20 train/dev/test split) uniformly at random if the user has $5$ or more playlists.~~",
"train_playlists = []\ndev_playlists = []\ntest_playlists = []\n\ncandidate_pl_indices = []\nother_pl_indices = []\n\nfor i in range(len(all_playlists)):\n pl = all_playlists[i][0]\n if np.all(np.asarray([song2pop[sid] for sid in pl]) >= 5):\n candidate_pl_indices.append(i)\n else:\n other_pl_indices.append(i)\n\nprint('%d + %d = %d | %d' % (len(candidate_pl_indices), len(other_pl_indices), \\\n len(candidate_pl_indices) + len(other_pl_indices), len(all_playlists)))\n\ndev_ratio = 0.05\ntest_ratio = 0.2\nnpl_dev = int(dev_ratio * len(all_playlists))\nnpl_test = int(test_ratio * len(all_playlists))\nnp.random.seed(RAND_SEED)\npl_indices = np.random.permutation(candidate_pl_indices)\n\ntest_ix = pl_indices[:npl_test]\ntest_playlists = [all_playlists[ix] for ix in test_ix]\n\ndev_ix = pl_indices[npl_test:npl_test + npl_dev]\ndev_playlists = [all_playlists[ix] for ix in dev_ix]\n\ntrain_ix = pl_indices[npl_test + npl_dev:].tolist() + other_pl_indices\ntrain_playlists = [all_playlists[ix] for ix in train_ix]",
"Every song in test set should also be in training set.",
"print('#Songs in train + dev set: %d, #Songs total: %d' % \\\n (len(set([sid for pl, _ in train_playlists + dev_playlists for sid in pl])), len(all_songs)))\n\nprint('{:30s} {:,}'.format('#playlist in training set:', len(train_playlists)))\nprint('{:30s} {:,}'.format('#playlist in dev set:', len(dev_playlists)))\nprint('{:30s} {:,}'.format('#playlist in test set:', len(test_playlists)))\n\nlen(train_playlists) + len(dev_playlists)\n\n# user_playlists = dict()\n# for pl, u in all_playlists:\n# try: \n# user_playlists[u].append(pl)\n# except KeyError:\n# user_playlists[u] = [pl]\n\n# sanity check\n# npl_all = np.sum([len(user_playlists[u]) for u in user_playlists])\n# print('{:30s} {:,}'.format('#users:', len(user_playlists)))\n# print('{:30s} {:,}'.format('#playlists:', npl_all))\n# print('{:30s} {:.2f}'.format('Average #playlists per user:', npl_all / len(user_playlists)))\n\n# np.random.seed(RAND_SEED)\n# for u in user_playlists:\n# playlists_u = [(pl, u) for pl in user_playlists[u]]\n# if len(user_playlists[u]) < 5:\n# train_playlists += playlists_u\n# else:\n# npl_test = int(test_ratio * len(user_playlists[u]))\n# npl_dev = int(dev_ratio * len(user_playlists[u]))\n# pl_indices = np.random.permutation(len(user_playlists[u]))\n# test_playlists += playlists_u[:npl_test]\n# dev_playlists += playlists_u[npl_test:npl_test + npl_dev]\n# train_playlists += playlists_u[npl_test + npl_dev:]\n\nxmax = np.max([len(pl) for (pl, _) in all_playlists]) + 1\n\nax = plt.subplot(111)\nax.hist([len(pl) for (pl, _) in train_playlists], bins=100)\nax.set_yscale('log')\nax.set_xlim(0, xmax)\nax.set_title('Histogram of playlist length in TRAINING set')\npass\n\nax = plt.subplot(111)\nax.hist([len(pl) for (pl, _) in dev_playlists], bins=100)\nax.set_yscale('log')\nax.set_xlim(0, xmax)\nax.set_title('Histogram of playlist length in DEV set')\npass\n\nax = plt.subplot(111)\nax.hist([len(pl) for (pl, _) in test_playlists], bins=100)\nax.set_yscale('log')\nax.set_xlim(0, 
xmax)\nax.set_title('Histogram of playlist length in TEST set')\npass",
"Learn artist features",
"song2artist = pkl.load(gzip.open(fsong2artist, 'rb'))\n\nartist_playlist = []\n\nfor pl, _ in train_playlists + dev_playlists:\n pl_artists = [song2artist[sid] if sid in song2artist else '$UNK$' for sid in pl]\n artist_playlist.append(pl_artists)\n\nfartist2vec_bin = os.path.join(data_dir, 'setting2/artist2vec.bin')\nif os.path.exists(fartist2vec_bin):\n artist2vec = gensim.models.KeyedVectors.load_word2vec_format(fartist2vec_bin, binary=True)\nelse:\n artist2vec_model = gensim.models.Word2Vec(sentences=artist_playlist, size=50, seed=RAND_SEED, \n window=10, iter=10, min_count=1)\n artist2vec_model.wv.save_word2vec_format(fartist2vec_bin, binary=True)\n artist2vec = artist2vec_model.wv",
"Hold a subset of songs in dev/test playlist\nKeep the first $K=1,2,3,4$ songs for playlist in dev and test set.",
"N_SEED_K = 1\n\ndev_playlists_obs = []\ndev_playlists_held = []\ntest_playlists_obs = []\ntest_playlists_held = []\n\nfor pl, _ in dev_playlists:\n npl = len(pl)\n k = N_SEED_K\n dev_playlists_obs.append(pl[:k])\n dev_playlists_held.append(pl[k:])\nfor pl, _ in test_playlists:\n npl = len(pl)\n k = N_SEED_K\n test_playlists_obs.append(pl[:k])\n test_playlists_held.append(pl[k:])\n\nfor ix in range(len(dev_playlists)):\n assert np.all(dev_playlists[ix][0] == dev_playlists_obs[ix] + dev_playlists_held[ix])\nfor ix in range(len(test_playlists)):\n assert np.all(test_playlists[ix][0] == test_playlists_obs[ix] + test_playlists_held[ix])\n\nprint('DEV obs: {:,} | DEV held: {:,} \\nTEST obs: {:,} | TEST held: {:,}'.format(\n np.sum([len(ppl) for ppl in dev_playlists_obs]), np.sum([len(ppl) for ppl in dev_playlists_held]),\n np.sum([len(ppl) for ppl in test_playlists_obs]), np.sum([len(ppl) for ppl in test_playlists_held])))\n\nsong2pop_train = song2pop.copy()\nsong2pop_trndev = song2pop.copy()\nfor ppl in dev_playlists_held:\n for sid in ppl:\n song2pop_train[sid] -= 1\nfor ppl in test_playlists_held:\n for sid in ppl:\n song2pop_train[sid] -= 1\n song2pop_trndev[sid] -= 1",
"Hold a subset of songs in a subset of playlists, use all songs",
"pkl_dir2 = os.path.join(data_dir, 'setting2')\nfpl2 = os.path.join(pkl_dir2, 'playlists_train_dev_test_s2_%d.pkl.gz' % N_SEED_K)\nfy2 = os.path.join(pkl_dir2, 'Y_%d.pkl.gz' % N_SEED_K)\nfxtrain2 = os.path.join(pkl_dir2, 'X_train_%d.pkl.gz' % N_SEED_K)\nfytrain2 = os.path.join(pkl_dir2, 'Y_train_%d.pkl.gz' % N_SEED_K)\nfxtrndev2 = os.path.join(pkl_dir2, 'X_trndev_%d.pkl.gz' % N_SEED_K)\nfytrndev2 = os.path.join(pkl_dir2, 'Y_trndev_%d.pkl.gz' % N_SEED_K)\nfydev2 = os.path.join(pkl_dir2, 'PU_dev_%d.pkl.gz' % N_SEED_K)\nfytest2 = os.path.join(pkl_dir2, 'PU_test_%d.pkl.gz' % N_SEED_K)\nfclique21 = os.path.join(pkl_dir2, 'cliques_trndev_%d.pkl.gz' % N_SEED_K)\nfclique22 = os.path.join(pkl_dir2, 'cliques_all_%d.pkl.gz' % N_SEED_K)\n\nX, Y = gen_dataset(playlists = [t[0] for t in train_playlists + dev_playlists + test_playlists],\n song2feature = song2feature, song2genre = song2genre, \n song2artist = song2artist, artist2vec = artist2vec, \n train_song_set = [t[0] for t in all_songs], song2pop_train=song2pop_train)\n\nX_train = X\nassert X.shape[0] == len(song2pop_trndev)\nX_trndev = X_train.copy()\nX_trndev[:, -1] = np.log([song2pop_trndev[sid]+1 for sid, _ in all_songs])\n\ndev_cols = np.arange(len(train_playlists), len(train_playlists) + len(dev_playlists))\ntest_cols = np.arange(len(train_playlists) + len(dev_playlists), Y.shape[1])\nassert len(dev_cols) == len(dev_playlists) == len(dev_playlists_obs)\nassert len(test_cols) == len(test_playlists) == len(test_playlists_obs)\n\npkl.dump({'train_playlists': train_playlists, 'dev_playlists': dev_playlists, 'test_playlists': test_playlists,\n 'dev_playlists_obs': dev_playlists_obs, 'dev_playlists_held': dev_playlists_held,\n 'test_playlists_obs': test_playlists_obs, 'test_playlists_held': test_playlists_held},\n gzip.open(fpl2, 'wb'))\n\nsong2index = {sid: ix for ix, sid in enumerate([t[0] for t in all_songs])}",
"Use dedicated sparse matrices to reprsent what entries are observed in dev and test set.",
"Y_train = Y[:, :len(train_playlists)].tocsc()\nY_trndev = Y[:, :len(train_playlists) + len(dev_playlists)].tocsc()\n\nPU_dev = lil_matrix((len(all_songs), len(dev_playlists)), dtype=np.bool)\nPU_test = lil_matrix((len(all_songs), len(test_playlists)), dtype=np.bool)\n\nnum_known_dev = 0\nfor j in range(len(dev_playlists)):\n if (j+1) % 1000 == 0:\n sys.stdout.write('\\r%d / %d' % (j+1, len(dev_playlists))); sys.stdout.flush()\n rows = [song2index[sid] for sid in dev_playlists_obs[j]]\n PU_dev[rows, j] = True\n num_known_dev += len(rows)\n\nnum_known_test = 0\nfor j in range(len(test_playlists)):\n if (j+1) % 1000 == 0:\n sys.stdout.write('\\r%d / %d' % (j+1, len(test_playlists))); sys.stdout.flush()\n rows = [song2index[sid] for sid in test_playlists_obs[j]]\n PU_test[rows, j] = True\n num_known_test += len(rows)\n\nPU_dev = PU_dev.tocsr()\nPU_test = PU_test.tocsr()\n\nprint('#unknown entries in DEV set: {:15,d} | {:15,d} \\n#unknown entries in TEST set: {:15,d} | {:15,d}'.format(\n np.prod(PU_dev.shape) - PU_dev.sum(), len(dev_playlists) * len(all_songs) - num_known_dev,\n np.prod(PU_test.shape) - PU_test.sum(), len(test_playlists) * len(all_songs) - num_known_test))",
"Feature normalisation.",
"X_train_mean = np.mean(X_train, axis=0).reshape((1, -1))\nX_train_std = np.std(X_train, axis=0).reshape((1, -1)) + 10 ** (-6)\nX_train -= X_train_mean\nX_train /= X_train_std\n\nX_trndev_mean = np.mean(X_trndev, axis=0).reshape((1, -1))\nX_trndev_std = np.std(X_trndev, axis=0).reshape((1, -1)) + 10 ** (-6)\nX_trndev -= X_trndev_mean\nX_trndev /= X_trndev_std\n\nprint(np.mean(np.mean(X_train, axis=0)))\nprint(np.mean( np.std(X_train, axis=0)) - 1)\nprint(np.mean(np.mean(X_trndev, axis=0)))\nprint(np.mean( np.std(X_trndev, axis=0)) - 1)\n\nprint('All : %s' % str(Y.shape))\nprint('Train : %s, %s' % (X_train.shape, Y_train.shape))\nprint('Dev : %s' % str(PU_dev.shape))\nprint('Trndev: %s, %s' % (X_trndev.shape, Y_trndev.shape))\nprint('Test : %s' % str(PU_test.shape))\n\npkl.dump(X_train, gzip.open(fxtrain2, 'wb'))\npkl.dump(Y_train, gzip.open(fytrain2, 'wb'))\npkl.dump(Y, gzip.open(fy2, 'wb'))\npkl.dump(X_trndev, gzip.open(fxtrndev2, 'wb'))\npkl.dump(Y_trndev, gzip.open(fytrndev2, 'wb'))\npkl.dump(PU_dev, gzip.open(fydev2, 'wb'))\npkl.dump(PU_test, gzip.open(fytest2, 'wb'))",
"Build the adjacent matrix of playlists (nodes) for setting II, playlists of the same user form a clique.\nCliques in train + dev set.",
"pl_users = [u for (_, u) in train_playlists + dev_playlists]\ncliques_trndev = []\nfor u in sorted(set(pl_users)):\n clique = np.where(u == np.array(pl_users, dtype=np.object))[0]\n #if len(clique) > 1:\n cliques_trndev.append(clique)\n\npkl.dump(cliques_trndev, gzip.open(fclique21, 'wb'))\n\nclqsize = [len(clq) for clq in cliques_trndev]\nprint(np.min(clqsize), np.max(clqsize), len(clqsize), np.sum(clqsize))\n\nassert np.all(np.arange(Y_trndev.shape[1]) == np.asarray(sorted([k for clq in cliques_trndev for k in clq])))",
"Cliques in train + dev + test set.",
"pl_users = [u for (_, u) in train_playlists + dev_playlists + test_playlists]\nclique_all = []\nfor u in sorted(set(pl_users)):\n clique = np.where(u == np.array(pl_users, dtype=np.object))[0]\n #if len(clique) > 1:\n clique_all.append(clique)\n\npkl.dump(clique_all, gzip.open(fclique22, 'wb'))\n\nclqsize = [len(clq) for clq in clique_all]\nprint(np.min(clqsize), np.max(clqsize), len(clqsize), np.sum(clqsize))\n\nassert np.all(np.arange(Y.shape[1]) == np.asarray(sorted([k for clq in clique_all for k in clq])))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
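The split cells above only admit a playlist into the dev/test candidate pool when every one of its songs is popular enough, which is what guarantees that each test song also occurs in the training playlists. A toy sketch of that filter (the playlists, users, and threshold below are invented for illustration; the notebook applies a threshold of 5 to the real data):

```python
# Toy playlists: (song IDs, user). Hypothetical data for illustration only.
playlists = [(["a", "b"], "u1"), (["a", "c"], "u1"), (["b", "c"], "u2"),
             (["a", "b", "c"], "u3"), (["a", "c"], "u2")]

# Song popularity = number of playlists a song appears in.
song2pop = {}
for pl, _ in playlists:
    for sid in pl:
        song2pop[sid] = song2pop.get(sid, 0) + 1

# Playlists whose every song meets the threshold become dev/test candidates;
# all others are forced into the training set.
MIN_POP = 4  # illustrative threshold
candidates = [i for i, (pl, _) in enumerate(playlists)
              if all(song2pop[s] >= MIN_POP for s in pl)]
others = [i for i in range(len(playlists)) if i not in candidates]
print(candidates, others)
```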
cleemesser/eeg-hdfstorage
|
notebooks/vizAbsenceSz.ipynb
|
bsd-3-clause
|
[
"Introduction to visualizing data in the eeghdf files\nGetting started\nThe EEG is stored in hierachical data format (HDF5). This format is widely used, open, and supported in many languages, e.g., matlab, R, python, C, etc.\nHere, I will use the h5py library in python",
"# import libraries\nfrom __future__ import print_function, division, unicode_literals\n%matplotlib inline\n# %matplotlib notebook\n\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport h5py\nfrom pprint import pprint\n\nimport stacklineplot # local copy\n\n# matplotlib.rcParams['figure.figsize'] = (18.0, 12.0)\nmatplotlib.rcParams['figure.figsize'] = (12.0, 8.0)\n\nhdf = h5py.File('./archive/YA2741G2_1-1+.eeghdf')",
"The data is stored hierachically in an hdf5 file as a tree of keys and values.\nIt is possible to inspect the file using standard hdf5 tools.\nBelow we show the keys and values associated with the root of the tree. This shows that there is a \"patient\" group and a group \"record-0\"",
"list(hdf.items())",
"We can focus on the patient group and access it via hdf['patient'] as if it was a python dictionary. Here are the key,value pairs in that group. Note that the patient information has been anonymized. Everyone is given the same set of birthdays. This shows that this file is for Subject 2619, who is male.",
"list(hdf['patient'].attrs.items())",
"Now we look at how the waveform data is stored. By convention, the first record is called \"record-0\" and it contains the waveform data as well as the approximate time (relative to the birthdate)at which the study was done, as well as technical information like the number of channels, electrode names and sample rate.",
"rec = hdf['record-0']\nlist(rec.attrs.items())\n\n# here is the list of data arrays stored in the record\nlist(rec.items())\n\nrec['physical_dimensions'][:]\n\nrec['prefilters'][:]\n\nrec['signal_digital_maxs'][:]\n\nrec['signal_digital_mins'][:]\n\nrec['signal_physical_maxs'][:]",
"We can then grab the actual waveform data and visualize it.",
"signals = rec['signals']\nlabels = rec['signal_labels']\nelectrode_labels = [str(s,'ascii') for s in labels]\nnumbered_electrode_labels = [\"%d:%s\" % (ii, str(labels[ii], 'ascii')) for ii in range(len(labels))]",
"Simple visualization of EEG (brief absence seizure)",
"# search identified spasms at 1836, 1871, 1901, 1939\nstacklineplot.show_epoch_centered(signals, 1476,epoch_width_sec=15,chstart=0, chstop=19, fs=rec.attrs['sample_frequency'], ylabels=electrode_labels, yscale=3.0)\nplt.title('Absence Seizure');",
"Annotations\nIt was not a coincidence that I chose this time in the record. I used the annotations to focus on portion of the record which was marked as having a seizure.\nYou can access the clinical annotations via rec['edf_annotations']",
"annot = rec['edf_annotations']\n\nantext = [s.decode('utf-8') for s in annot['texts'][:]]\nstarts100ns = [xx for xx in annot['starts_100ns'][:]] # process the bytes into text and lists of start times\n\ndf = pd.DataFrame(data=antext, columns=['text']) # load into a pandas data frame\ndf['starts100ns'] = starts100ns\ndf['starts_sec'] = df['starts100ns']/10**7\ndel df['starts100ns']",
"It is easy then to find the annotations related to seizures",
"df[df.text.str.contains('sz',case=False)]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
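The annotation handling in the eeghdf cells above (decode byte strings, convert 100 ns start times to seconds, filter for "sz") can be sketched on hypothetical arrays shaped like rec['edf_annotations'] (the texts and times below are invented, though 1476 s matches the seizure time used in the notebook):

```python
import pandas as pd

# Hypothetical annotation arrays mimicking rec['edf_annotations']: byte-string
# texts plus start times in units of 100 ns.
texts = [b"eyes closed", b"Sz onset", b"sz end"]
starts_100ns = [0, 14760000000, 14910000000]

df = pd.DataFrame({"text": [t.decode("utf-8") for t in texts]})
df["starts_sec"] = [s / 10**7 for s in starts_100ns]  # 100 ns units -> seconds

# Case-insensitive search for seizure annotations, as in the notebook.
sz = df[df.text.str.contains("sz", case=False)]
print(sz)
```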
pligor/predicting-future-product-prices
|
04_time_series_prediction/.ipynb_checkpoints/23_price_history_seq2seq-cross-validation-checkpoint.ipynb
|
agpl-3.0
|
[
"# -*- coding: UTF-8 -*-\n#%load_ext autoreload\n%reload_ext autoreload\n%autoreload 2",
"https://www.youtube.com/watch?v=ElmBrKyMXxs\nhttps://github.com/hans/ipython-notebooks/blob/master/tf/TF%20tutorial.ipynb\nhttps://github.com/ematvey/tensorflow-seq2seq-tutorials",
"from __future__ import division\nimport tensorflow as tf\nfrom os import path, remove\nimport numpy as np\nimport pandas as pd\nimport csv\nfrom sklearn.model_selection import StratifiedShuffleSplit\nfrom time import time\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nfrom mylibs.jupyter_notebook_helper import show_graph, renderStatsList, renderStatsCollection, \\\n renderStatsListWithLabels, renderStatsCollectionOfCrossValids, plot_res_gp, my_plot_convergence\nfrom tensorflow.contrib import rnn\nfrom tensorflow.contrib import learn\nimport shutil\nfrom tensorflow.contrib.learn.python.learn import learn_runner\nfrom mylibs.tf_helper import getDefaultGPUconfig\nfrom sklearn.metrics import r2_score\nfrom mylibs.py_helper import factors\nfrom fastdtw import fastdtw\nfrom collections import OrderedDict\nfrom scipy.spatial.distance import euclidean\nfrom statsmodels.tsa.stattools import coint\nfrom common import get_or_run_nn\nfrom data_providers.price_history_seq2seq_data_provider import PriceHistorySeq2SeqDataProvider\nfrom data_providers.price_history_dataset_generator import PriceHistoryDatasetGenerator\nfrom skopt.space.space import Integer, Real\nfrom skopt import gp_minimize\nfrom skopt.plots import plot_convergence\nimport pickle\nimport inspect\nimport dill\nimport sys\nfrom models.price_history_21_seq2seq_dyn_dec_ins import PriceHistorySeq2SeqDynDecIns\nfrom gp_opt.price_history_23_gp_opt import PriceHistory23GpOpt\n\ndtype = tf.float32\nseed = 16011984\nrandom_state = np.random.RandomState(seed=seed)\nconfig = getDefaultGPUconfig()\nn_jobs = 1\n%matplotlib inline",
"Step 0 - hyperparams\nvocab_size is all the potential words you could have (classification for translation case)\nand max sequence length are the SAME thing\ndecoder RNN hidden units are usually same size as encoder RNN hidden units in translation but for our case it does not seem really to be a relationship there but we can experiment and find out later, not a priority thing right now",
"num_units = 400 #state size\n\ninput_len = 60\ntarget_len = 30\n\nbatch_size = 64 #50\nwith_EOS = False\n\ntotal_train_size = 57994\ntrain_size = 6400 \ntest_size = 1282",
"Once generate data",
"data_path = '../data/price_history'\n\n#npz_full_train = data_path + '/price_history_03_dp_60to30_train.npz'\n#npz_full_train = data_path + '/price_history_60to30_targets_normed_train.npz'\nnpz_full_train = data_path + '/price_history_03_dp_60to30_global_remove_scale_targets_normed_train.npz'\n\n#npz_train = data_path + '/price_history_03_dp_60to30_57980_train.npz'\n#npz_train = data_path + '/price_history_03_dp_60to30_6400_train.npz'\n#npz_train = data_path + '/price_history_60to30_6400_targets_normed_train.npz'\nnpz_train = data_path + '/price_history_03_dp_60to30_6400_global_remove_scale_targets_normed_train.npz'\n\n#npz_test = data_path + '/price_history_03_dp_60to30_test.npz'\n#npz_test = data_path + '/price_history_60to30_targets_normed_test.npz'\nnpz_test = data_path + '/price_history_03_dp_60to30_global_remove_scale_targets_normed_test.npz'",
"Step 1 - collect data",
"# dp = PriceHistorySeq2SeqDataProvider(npz_path=npz_train, batch_size=batch_size, with_EOS=with_EOS)\n# dp.inputs.shape, dp.targets.shape\n\n# aa, bb = dp.next()\n# aa.shape, bb.shape",
"Step 2 - Build model",
"model = PriceHistorySeq2SeqDynDecIns(rng=random_state, dtype=dtype, config=config, with_EOS=with_EOS)\n\n# graph = model.getGraph(batch_size=batch_size,\n# num_units=num_units,\n# input_len=input_len,\n# target_len=target_len)\n\n#show_graph(graph)",
"Cross Validating",
"def plotter(stats_list, label_text):\n _ = renderStatsListWithLabels(stats_list=stats_list, label_text=label_text)\n plt.show()\n\n _ = renderStatsListWithLabels(stats_list=stats_list, label_text=label_text,\n title='Validation Error', kk='error(valid)')\n plt.show()\n\n#sorted(factors(6400))\n\nobj = PriceHistory23GpOpt(model=model,\n stats_npy_filename = 'bayes_opt_23_stats_dic',\n cv_score_dict_npy_filename = 'bayes_opt_23_cv_scores_dic',\n random_state=random_state,\n plotter = plotter,\n npz_path=npz_train,\n epochs=15,\n batch_size=batch_size,\n input_len=input_len,\n target_len=target_len,\n n_splits=5,\n )\n\nopt_res = obj.run_opt(n_random_starts=2, n_calls=17)\n\nplot_res_gp(opt_res)\n\nopt_res.best_params\n\nfilepath = PriceHistory23GpOpt.bayes_opt_dir + '/bayes_opt_23_stats_dic.npy'\n\nrenderStatsCollectionOfCrossValids(stats_dic=np.load(filepath)[()], label_texts=[\n 'num_units', 'activation', 'lamda2', 'keep_prob_input', 'learning_rate'])\nplt.show()",
"Step 3 training the network",
"model = PriceHistorySeq2SeqDynDecIns(rng=random_state, dtype=dtype, config=config, with_EOS=with_EOS)\n\nopt_res.best_params\n\nnum_units, activation, lamda2, keep_prob_input, learning_rate = opt_res.best_params\n\nbatch_size\n\nnpz_1280_test = '../data/price_history/price_history_03_dp_60to30_global_remove_scale_targets_normed_1280_test.npz'\n\n# PriceHistoryDatasetGenerator.create_subsampled(inpath=npz_test, target_size=1280,\n# outpath='../data/price_history/price_history_03_dp_60to30_global_remove_scale_targets_normed_1280_test.npz',\n# random_state=random_state)\n\ndef experiment():\n return model.run(npz_path=npz_train,\n npz_test = npz_1280_test,\n epochs=200,\n batch_size = batch_size,\n num_units = num_units,\n input_len=input_len,\n target_len=target_len,\n learning_rate = learning_rate * 10,\n preds_gather_enabled=True,\n batch_norm_enabled = True,\n activation = activation,\n decoder_first_input = PriceHistorySeq2SeqDynDecIns.DECODER_FIRST_INPUT.ZEROS,\n keep_prob_input = keep_prob_input,\n lamda2 = lamda2,\n )\n\n#%%time\ndyn_stats, preds_dict, targets = get_or_run_nn(experiment, filename='023_seq2seq_60to30_001')\n\ndyn_stats.plotStats()\nplt.show()\n\nr2_scores = [r2_score(y_true=targets[ind], y_pred=preds_dict[ind])\n for ind in range(len(targets))]\n\nind = np.argmin(r2_scores)\nind\n\nreals = targets[ind]\npreds = preds_dict[ind]\n\nr2_score(y_true=reals, y_pred=preds)\n\n#sns.tsplot(data=dp.inputs[ind].flatten())\n\nfig = plt.figure(figsize=(15,6))\nplt.plot(reals, 'b')\nplt.plot(preds, 'g')\nplt.legend(['reals','preds'])\nplt.show()\n\n%%time\ndtw_scores = [fastdtw(targets[ind], preds_dict[ind])[0]\n for ind in range(len(targets))]\n\nnp.mean(dtw_scores)\n\ncoint(preds, reals)\n\ncur_ind = np.random.randint(len(targets))\nreals = targets[cur_ind]\npreds = preds_dict[cur_ind]\nfig = plt.figure(figsize=(15,6))\nplt.plot(reals, 'b')\nplt.plot(preds, 'g')\nplt.legend(['reals','preds'])\nplt.show()\n\nnpz_1280_test\n\naa = 
np.load(npz_1280_test)\n\nlen( set(aa['sku_ids']) )",
"Conclusion\nThe above test will be considered UNRELIABLE because it represents only 24 cellphones!\nWe will generate a new test set"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
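The evaluation cells above score each (target, prediction) pair with r2_score and then take the argmin to find the worst-fitted series. A self-contained sketch of that per-series scoring with a hand-rolled R^2 on toy sequences (the actual notebook uses sklearn's r2_score on real model outputs):

```python
import numpy as np

def r2(y_true, y_pred):
    """Coefficient of determination for a single sequence."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy targets and predictions standing in for the 30-step price curves.
targets = [np.array([1.0, 2.0, 3.0]), np.array([2.0, 2.0, 2.0, 4.0])]
preds = [np.array([1.0, 2.0, 3.0]), np.array([2.0, 3.0, 2.0, 3.0])]

scores = [r2(t, p) for t, p in zip(targets, preds)]
worst = int(np.argmin(scores))  # index of the worst-fitted series
print(scores, worst)
```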
Bio204-class/bio204-notebooks
|
2016-03-30-ANOVA-simulations.ipynb
|
cc0-1.0
|
[
"%matplotlib inline\n\nimport numpy as np\nimport scipy.stats as stats\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sbn\n\n# set some seaborn aesthetics\nsbn.set_palette(\"Set1\")\n\n# initialize random seed for reproducibility\nnp.random.seed(20160330)",
"One-way ANOVA, general setup\nWe'll starting with simulating data for a one-way ANOVA, under the null hypothesis.\nIn this simulation we'll simulate four groups, all drawn from the same underlying distribution: $N(\\mu=0,\\sigma=1)$.",
"## simulate one way ANOVA under the null hypothesis of no \n## difference in group means\n\ngroupmeans = [0, 0, 0, 0]\nk = len(groupmeans) # number of groups\ngroupstds = [1] * k # standard deviations equal across groups\nn = 25 # sample size\n\n# generate samples\nsamples = [stats.norm.rvs(loc=i, scale=j, size = n) for (i,j) in zip(groupmeans,groupstds)]",
"Illustrate sample distributions and group means\nWe then plot the simulated data, showing the group distributions on the left and the group means on the right.",
"# draw a figure\nfig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10,4))\nclrs = sbn.color_palette(\"Set1\", n_colors=k)\n\nfor i, sample in enumerate(samples):\n sbn.kdeplot(sample, color=clrs[i], ax=ax1)\nax1_ymax = ax1.get_ylim()[1]\n\nfor i, sample in enumerate(samples):\n ax2.vlines(np.mean(sample), 0, ax1_ymax, linestyle=\"dashed\", color=clrs[i])\nax2.set_xlim(ax1.get_xlim())\nax2.set_ylim(ax1.get_ylim()) \n\nax1.set_title(\"Group Sample Distributions\")\nax2.set_title(\"Group Means\")\n\nax1.set_xlabel(\"X\")\nax1.set_ylabel(\"Density\")\n\nax2.set_xlabel(\"mean(X)\")\nax2.set_ylabel(\"Density\")\npass",
"F-statistic\nWe calculate an F-statistic, which is the ratio of the \"between group\" variance to the \"within group\" variance. The calculation below is appropriate when all the group sizes are the same.",
"# Between-group and within-group estimates of variance\nsample_group_means = [np.mean(s) for s in samples]\nsample_group_var = [np.var(s, ddof=1) for s in samples]\n\nVbtw = n * np.var(sample_group_means, ddof=1)\nVwin = np.mean(sample_group_var)\nFstat = Vbtw/Vwin\n\nprint(\"Between group estimate of population variance:\", Vbtw)\nprint(\"Within group estimate of population variance:\", Vwin)\nprint(\"Fstat = Vbtw/Vwin = \", Fstat)",
"Simulating the sampling distribution of the F-test statistic\nTo understand how surprising our observed data is, relative to what we would expect under the null hypothesis, we need to understand the sampling distribution of the F-statistic. Here we use simulation to estimate this sampling distribution.",
"# now carry out many such simulations to estimate the sampling distribution\n# of our F-test statistic\n\ngroupmeans = [0, 0, 0, 0]\nk = len(groupmeans) # number of groups\ngroupstds = [1] * k # standard deviations equal across groups\nn = 25 # sample size\n\nnsims = 1000\nFstats = []\nfor sim in range(nsims):\n samples = [stats.norm.rvs(loc=i, scale=j, size = n) for (i,j) in zip(groupmeans,groupstds)]\n sample_group_means = [np.mean(s) for s in samples]\n sample_group_var = [np.var(s, ddof=1) for s in samples]\n Vbtw = n * np.var(sample_group_means, ddof=1)\n Vwin = np.mean(sample_group_var)\n Fstat = Vbtw/Vwin\n Fstats.append(Fstat)\n \nFstats = np.array(Fstats)\n",
"Draw a figure to compare our simulated sampling distribution of the F-statistic to the theoretical expectation\nLet's create a plot comparing our simulated sampling distribution to the theoretical sampling distribution determined analytically. As we see below they compare well.",
"fig, ax = plt.subplots()\nsbn.distplot(Fstats, ax=ax, label=\"Simulation\",\n kde_kws=dict(alpha=0.5, linewidth=2))\n\n# plot the theoretical F-distribution for\n# corresponding degrees of freedom\ndf1 = k - 1\ndf2 = n*k - k\nx = np.linspace(0,9,500)\nFtheory = stats.f.pdf(x, df1, df2)\nplt.plot(x,Ftheory, linestyle='dashed', linewidth=2, label=\"Theory\")\n\n# axes, legends, title\nax.set_xlim(0, )\nax.set_xlabel(\"F-statistic\")\nax.set_ylabel(\"Density\")\nax.legend()\ntitle = \\\n\"\"\"Comparison of Simulated and Theoretical\nF-distribution for F(df1={}, df2={})\"\"\"\nax.set_title(title.format(df1, df2))\n\npass",
"Determining significance thresholds\nTo determine whether we would reject the null hypothesis for an observed value of the F-statistic, we need to calculate the appropriate cutoff value for a given significance threshold, $\\alpha$.\nHere we consider the standard significance threshold $\\alpha$ = 0.05.",
"# draw F distribution\nx = np.linspace(0,9,500)\nFtheory = stats.f.pdf(x, df1, df2)\nplt.plot(x, Ftheory, linestyle='solid', linewidth=2, label=\"Theoretical\\nExpectation\")\n\n# draw vertical line at threshold\nthreshold = stats.f.ppf(0.95, df1, df2)\nplt.vlines(threshold, 0, stats.f.pdf(threshold, df1, df2), linestyle='solid')\n\n# shade area under curve to right of threshold\nareax = np.linspace(threshold, 9, 250)\nplt.fill_between(areax, stats.f.pdf(areax, df1, df2), color='gray', alpha=0.75)\n\n# axes, legends, title\nplt.xlim(0, )\nplt.xlabel(\"F-statistic\")\nplt.ylabel(\"Density\")\nplt.legend()\ntitle = \\\nr\"\"\" $\\alpha$ = 0.05 threshold for \nF-distribution with df1 = {}, df2={}\"\"\"\nplt.title(title.format(df1, df2))\n\nprint(\"The α =0.05 significance threshold is:\", threshold)\n\npass\n",
"Note that the F-distribution above is specific to the particular degrees of freedom. We would typically refer to that distribution as $F_{3,96}$. In this case, for $\\alpha=0.05$, we would reject the null hypothesis if the observed value of the F-statistic was greater than 2.70.\nSimulation where $H_A$ holds\nAs we've done in previous cases, it's informative to simulate the situation where the null hypothesis is false (i.e. the alternative hypothesis $H_A$ is true). \nHere we simulate the case where one of the four groups is drawn from a normal distribution with a mean that is different from the other three groups -- $N(\\mu=1, \\sigma=1)$ rather than $N(\\mu=0, \\sigma=1)$.",
"# now simulate case where one of the group means is different\n\ngroupmeans = [0, 0, 0, 1]\nk = len(groupmeans) # number of groups\ngroupstds = [1] * k # standard deviations equal across groups\nn = 25 # sample size\n\nnsims = 1000\nFstats = []\nfor sim in range(nsims):\n samples = [stats.norm.rvs(loc=i, scale=j, size = n) for (i,j) in zip(groupmeans,groupstds)]\n sample_group_means = [np.mean(s) for s in samples]\n sample_group_var = [np.var(s, ddof=1) for s in samples]\n Vbtw = n * np.var(sample_group_means, ddof=1)\n Vwin = np.mean(sample_group_var)\n Fstat = Vbtw/Vwin\n Fstats.append(Fstat)\n \nFstats = np.array(Fstats)\n",
"We then plot the distribution of the F-statistic under this specific $H_A$ versus the distribution of F under the null hypothesis.",
"fig, ax = plt.subplots()\nsbn.distplot(Fstats, ax=ax, label=\"Simulated $H_A$\",\n kde_kws=dict(alpha=0.5, linewidth=2))\n\n# plot the theoretical F-distribution for\n# corresponding degrees of freedom\ndf1 = k - 1\ndf2 = n*k - k\nx = np.linspace(0,9,500)\nFtheory = stats.f.pdf(x, df1, df2)\nplt.plot(x,Ftheory, linestyle='dashed', linewidth=2, label=\"Theory\")\n\nymin, ymax = ax.get_ylim()\n# Draw threshold alpha = 0.05\nax.vlines(stats.f.ppf(0.95, df1, df2), 0, ymax, linestyle='dotted', \n color='k', label=r\"Threshold for $\\alpha=0.05$\")\n\n# axes, legends, title\nax.set_xlim(0, )\nax.set_ylim(0, ymax)\nax.set_xlabel(\"F-statistic\")\nax.set_ylabel(\"Density\")\nax.legend()\ntitle = \\\n\"\"\"Comparison of Theoretical F-distribution\nand F-distribution when $H_A$ is true\"\"\"\nax.set_title(title.format(df1, df2))\n\npass",
"The fraction of cases that are false-negatives (i.e. we fail to reject the null hypothesis when the alternative hypothesis is true) is $\\beta$. This is the fraction of the red distribution to the left of the $\\alpha$ threshold. The power of our test under this particular scenario is 1-$\\beta$."
] |
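The β and power described at the end of the ANOVA notebook above can be computed directly from such a simulation. A minimal self-contained sketch (it re-simulates under $H_A$ rather than reusing the notebook's variables; `stats.f_oneway` is included only as a cross-check of the hand-rolled F-statistic, which matches it exactly when group sizes are equal):

```python
import numpy as np
import scipy.stats as stats

np.random.seed(20160330)

# same setup as the notebook's H_A simulation: one group mean shifted to 1
groupmeans = [0, 0, 0, 1]
k, n = len(groupmeans), 25

Fstats = []
for _ in range(1000):
    samples = [stats.norm.rvs(loc=m, scale=1, size=n) for m in groupmeans]
    Vbtw = n * np.var([np.mean(s) for s in samples], ddof=1)  # between-group
    Vwin = np.mean([np.var(s, ddof=1) for s in samples])      # within-group
    Fstats.append(Vbtw / Vwin)
Fstats = np.array(Fstats)

# cross-check: scipy's one-way ANOVA gives the same F for the last draw
F_check, p_val = stats.f_oneway(*samples)

# alpha = 0.05 cutoff from the theoretical null F-distribution
df1, df2 = k - 1, n * k - k
threshold = stats.f.ppf(0.95, df1, df2)

beta = np.mean(Fstats < threshold)  # fraction of false negatives under H_A
power = 1 - beta
print("beta:", beta, "power:", power)
```

With this effect size (one group shifted by one standard deviation, n = 25 per group) the test is quite powerful, so β comes out small.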
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jerkos/cobrapy
|
documentation_builder/milp.ipynb
|
lgpl-2.1
|
[
"Mixed-Integer Linear Programming\nIce Cream\nThis example was originally contributed by Joshua Lerman.\nAn ice cream stand sells cones and popsicles. It wants to maximize its profit, but is subject to a budget.\nWe can write this problem as a linear program:\n\nmax cone $\\cdot$ cone_margin + popsicle $\\cdot$ popsicle_margin\nsubject to\ncone $\\cdot$ cone_cost + popsicle $\\cdot$ popsicle_cost $\\le$ budget",
"cone_selling_price = 7.\ncone_production_cost = 3.\npopsicle_selling_price = 2.\npopsicle_production_cost = 1.\nstarting_budget = 100.",
"This problem can be written as a cobra.Model",
"from cobra import Model, Metabolite, Reaction\n\ncone = Reaction(\"cone\")\npopsicle = Reaction(\"popsicle\")\n\n# constrained by a budget\nbudget = Metabolite(\"budget\")\nbudget._constraint_sense = \"L\"\nbudget._bound = starting_budget\ncone.add_metabolites({budget: cone_production_cost})\npopsicle.add_metabolites({budget: popsicle_production_cost})\n\n# objective coefficient is the profit to be made from each unit\ncone.objective_coefficient = cone_selling_price - cone_production_cost\npopsicle.objective_coefficient = popsicle_selling_price - \\\n popsicle_production_cost\n\nm = Model(\"lerman_ice_cream_co\")\nm.add_reactions((cone, popsicle))\n\nm.optimize().x_dict",
"In reality, cones and popsicles can only be sold in integer amounts. We can use the variable kind attribute of a cobra.Reaction to enforce this.",
"cone.variable_kind = \"integer\"\npopsicle.variable_kind = \"integer\"\nm.optimize().x_dict",
"Now the model makes both popsicles and cones.\nRestaurant Order\nNext, we tackle the less immediately obvious problem from the following XKCD comic:",
"from IPython.display import Image\nImage(url=r\"http://imgs.xkcd.com/comics/np_complete.png\")",
"We want a solution satisfying the following constraints:\n$\\left(\\begin{matrix}2.15&2.75&3.35&3.55&4.20&5.80\\end{matrix}\\right) \\cdot \\vec v = 15.05$\n$\\vec v_i \\ge 0$\n$\\vec v_i \\in \\mathbb{Z}$\nThis problem can be written as a COBRA model as well.",
"total_cost = Metabolite(\"constraint\")\ntotal_cost._bound = 15.05\n\ncosts = {\"mixed_fruit\": 2.15, \"french_fries\": 2.75, \"side_salad\": 3.35,\n \"hot_wings\": 3.55, \"mozarella_sticks\": 4.20, \"sampler_plate\": 5.80}\n\nm = Model(\"appetizers\")\n\nfor item, cost in costs.items():\n r = Reaction(item)\n r.add_metabolites({total_cost: cost})\n r.variable_kind = \"integer\"\n m.add_reaction(r)\n\n# To add to the problem, suppose we don't want to eat all mixed fruit.\nm.reactions.mixed_fruit.objective_coefficient = 1\n \nm.optimize(objective_sense=\"minimize\").x_dict",
"There is another solution to this problem, which would have been obtained if we had maximized for mixed fruit instead of minimizing.",
"m.optimize(objective_sense=\"maximize\").x_dict",
"Boolean Indicators\nTo give a COBRA-related example, we can create boolean variables as integers, which can serve as indicators for a reaction being active in a model. For a reaction flux $v$ with lower bound -1000 and upper bound 1000, we can create a binary variable $b$ with the following constraints:\n$b \\in \\{0, 1\\}$\n$-1000 \\cdot b \\le v \\le 1000 \\cdot b$\nTo introduce the above constraints into a cobra model, we can rewrite them as follows:\n$v \\le b \\cdot 1000 \\Rightarrow v - 1000\\cdot b \\le 0$\n$-1000 \\cdot b \\le v \\Rightarrow v + 1000\\cdot b \\ge 0$",
"import cobra.test\nmodel = cobra.test.create_test_model(\"textbook\")\n\n# an indicator for pgi\npgi = model.reactions.get_by_id(\"PGI\")\n# make a boolean variable\npgi_indicator = Reaction(\"indicator_PGI\")\npgi_indicator.lower_bound = 0\npgi_indicator.upper_bound = 1\npgi_indicator.variable_kind = \"integer\"\n# create constraint for v - 1000 b <= 0\npgi_plus = Metabolite(\"PGI_plus\")\npgi_plus._constraint_sense = \"L\"\n# create constraint for v + 1000 b >= 0\npgi_minus = Metabolite(\"PGI_minus\")\npgi_minus._constraint_sense = \"G\"\n\npgi_indicator.add_metabolites({pgi_plus: -1000, pgi_minus: 1000})\npgi.add_metabolites({pgi_plus: 1, pgi_minus: 1})\nmodel.add_reaction(pgi_indicator)\n\n\n# an indicator for zwf\nzwf = model.reactions.get_by_id(\"G6PDH2r\")\nzwf_indicator = Reaction(\"indicator_ZWF\")\nzwf_indicator.lower_bound = 0\nzwf_indicator.upper_bound = 1\nzwf_indicator.variable_kind = \"integer\"\n# create constraint for v - 1000 b <= 0\nzwf_plus = Metabolite(\"ZWF_plus\")\nzwf_plus._constraint_sense = \"L\"\n# create constraint for v + 1000 b >= 0\nzwf_minus = Metabolite(\"ZWF_minus\")\nzwf_minus._constraint_sense = \"G\"\n\nzwf_indicator.add_metabolites({zwf_plus: -1000, zwf_minus: 1000})\nzwf.add_metabolites({zwf_plus: 1, zwf_minus: 1})\n\n# add the indicator reactions to the model\nmodel.add_reaction(zwf_indicator)\n",
"In a model with both these reactions active, the indicators will also be active",
"solution = model.optimize()\nprint(\"PGI indicator = %d\" % solution.x_dict[\"indicator_PGI\"])\nprint(\"ZWF indicator = %d\" % solution.x_dict[\"indicator_ZWF\"])\nprint(\"PGI flux = %.2f\" % solution.x_dict[\"PGI\"])\nprint(\"ZWF flux = %.2f\" % solution.x_dict[\"G6PDH2r\"])",
"Because these boolean indicators are in the model, additional constraints can be applied on them. For example, we can prevent both reactions from being active at the same time by adding the following constraint:\n$b_\\text{pgi} + b_\\text{zwf} = 1$",
"or_constraint = Metabolite(\"or\")\nor_constraint._bound = 1\nzwf_indicator.add_metabolites({or_constraint: 1})\npgi_indicator.add_metabolites({or_constraint: 1})\n\nsolution = model.optimize()\nprint(\"PGI indicator = %d\" % solution.x_dict[\"indicator_PGI\"])\nprint(\"ZWF indicator = %d\" % solution.x_dict[\"indicator_ZWF\"])\nprint(\"PGI flux = %.2f\" % solution.x_dict[\"PGI\"])\nprint(\"ZWF flux = %.2f\" % solution.x_dict[\"G6PDH2r\"])"
] |
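The XKCD appetizer problem solved with cobra above can be cross-checked without a MILP solver by brute-force enumeration. A plain-Python sketch (prices in integer cents to avoid floating-point equality issues; same menu labels as the notebook):

```python
from itertools import product

# menu prices in cents (same values as the cobra model above)
costs = {"mixed_fruit": 215, "french_fries": 275, "side_salad": 335,
         "hot_wings": 355, "mozarella_sticks": 420, "sampler_plate": 580}
target = 1505  # $15.05

items = list(costs)
max_counts = [target // costs[it] for it in items]  # upper bound per item

solutions = []
for combo in product(*(range(m + 1) for m in max_counts)):
    if sum(c * costs[it] for c, it in zip(combo, items)) == target:
        solutions.append(dict(zip(items, combo)))

print(solutions)
```

This confirms the two solutions the MILP finds: seven orders of mixed fruit, or one mixed fruit, two hot wings, and one sampler plate.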
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tpin3694/tpin3694.github.io
|
python/select_random_item_from_list.ipynb
|
mit
|
[
"Title: Select Random Item From A List\nSlug: select_random_item_from_list\nSummary: Select Random Item From A List in Python. \nDate: 2016-01-23 12:00\nCategory: Python\nTags: Basics \nAuthors: Chris Albon \nInterested in learning more? Check out Fluent Python\nPreliminaries",
"from random import choice",
"Create List",
"# Make a list of crew members\ncrew_members = ['Steve', 'Stacy', 'Miller', 'Chris', 'Bill', 'Jack']",
"Select Random Item From List",
"# Choose a random crew member\nchoice(crew_members)"
] |
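A small companion to the `choice` recipe above: to select several distinct items instead of one, `random.sample` draws without replacement:

```python
from random import sample

# Make a list of crew members
crew_members = ['Steve', 'Stacy', 'Miller', 'Chris', 'Bill', 'Jack']

# Choose three distinct crew members, in random order
picked = sample(crew_members, 3)
print(picked)
```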
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
pyemma/deeplearning
|
assignment2/BatchNormalization.ipynb
|
gpl-3.0
|
[
"Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].\nThe idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\nThe authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. 
A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n[3] Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.",
"# As usual, a bit of setup\n\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in data.iteritems():\n print '%s: ' % k, v.shape",
"Batch normalization: Forward\nIn the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation.",
"# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization\n\n# Simulate the forward pass for a two-layer network\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint 'Before batch normalization:'\nprint ' means: ', a.mean(axis=0)\nprint ' stds: ', a.std(axis=0)\n\n# Means should be close to zero and stds close to one\nprint 'After batch normalization (gamma=1, beta=0)'\na_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})\nprint ' mean: ', a_norm.mean(axis=0)\nprint ' std: ', a_norm.std(axis=0)\n\n# Now means should be close to beta and stds close to gamma\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint 'After batch normalization (nontrivial gamma, beta)'\nprint ' means: ', a_norm.mean(axis=0)\nprint ' stds: ', a_norm.std(axis=0)\n\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\n\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\nfor t in xrange(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint 'After batch normalization (test-time):'\nprint ' means: ', a_norm.mean(axis=0)\nprint ' stds: ', a_norm.std(axis=0)",
"Batch Normalization: backward\nNow implement the backward pass for batch normalization in the function batchnorm_backward.\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\nOnce you have finished, run the following to numerically check your backward pass.",
"# Gradient check batchnorm backward pass\n\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, beta, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma, dout)\ndb_num = eval_numerical_gradient_array(fb, beta, dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\nprint 'dx error: ', rel_error(dx_num, dx)\nprint 'dgamma error: ', rel_error(da_num, dgamma)\nprint 'dbeta error: ', rel_error(db_num, dbeta)",
"Batch Normalization: alternative backward\nIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.\nSurprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\nNOTE: You can still complete the rest of the assignment if you don't figure this part out, so don't worry too much if you can't get it.",
"N, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint 'dx difference: ', rel_error(dx1, dx2)\nprint 'dgamma difference: ', rel_error(dgamma1, dgamma2)\nprint 'dbeta difference: ', rel_error(dbeta1, dbeta2)\nprint 'speedup: %.2fx' % ((t2 - t1) / (t3 - t2))",
"Fully Connected Nets with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs231n/classifiers/fc_net.py. Modify your implementation to add batch normalization.\nConcretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.\nHINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.",
"N, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\nfor reg in [0, 3.14]:\n print 'Running check with reg = ', reg\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64,\n use_batchnorm=True)\n\n loss, grads = model.loss(X, y)\n print 'Initial loss: ', loss\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))\n if reg == 0: print",
"Batchnorm for deep networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.",
"# Try training a very deep net with batchnorm\nhidden_dims = [100, 100, 100, 100, 100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nbn_solver.train()\n\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nsolver.train()",
"Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.",
"plt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 1)\nplt.plot(solver.loss_history, 'o', label='baseline')\nplt.plot(bn_solver.loss_history, 'o', label='batchnorm')\n\nplt.subplot(3, 1, 2)\nplt.plot(solver.train_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')\n\nplt.subplot(3, 1, 3)\nplt.plot(solver.val_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()",
"Batch normalization and initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight initialization.\nThe first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.",
"# Try training a very deep net with batchnorm\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nbn_solvers = {}\nsolvers = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n print 'Running weight scale %d / %d' % (i + 1, len(weight_scales))\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\n bn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n bn_solver.train()\n bn_solvers[weight_scale] = bn_solver\n\n solver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n solver.train()\n solvers[weight_scale] = solver\n\n# Plot results of weight scale experiment\nbest_train_accs, bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))\n \n best_val_accs.append(max(solvers[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))\n \n final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', 
label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')\nplt.legend()\n\nplt.gcf().set_size_inches(10, 15)\nplt.show()",
"Question:\nDescribe the results of this experiment, and try to give a reason why the experiment gave the results that it did.\nAnswer:"
] |
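The training-time centering and scaling that the batch-normalization notebook above describes can be sketched in a few lines of NumPy. This is an illustration only, not the assignment's `batchnorm_forward`: the real function must also return a cache for the backward pass and maintain running averages of the mean and variance for test-time normalization, both omitted here:

```python
import numpy as np

def batchnorm_forward_sketch(x, gamma, beta, eps=1e-5):
    # per-feature minibatch statistics
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    # normalize, then apply the learnable scale (gamma) and shift (beta)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# features with nonzero mean and non-unit variance
x = np.random.randn(200, 3) * 4 + 10
out = batchnorm_forward_sketch(x, gamma=np.ones(3), beta=np.zeros(3))
print(out.mean(axis=0), out.std(axis=0))  # per-feature means near 0, stds near 1
```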
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
nick-youngblut/SIPSim
|
ipynb/bac_genome/priming_exp/validation_sample/X12C.700.14.05_fracRichness-moreDif.ipynb
|
mit
|
[
"Running SIPSim pipeline to simulate priming_exp gradient dataset\n\nBasing simulation params off of priming_exp dataset\nBasing starting community diversity on mean percent abundances in all fraction samples for the gradient\nOther parameters are 'default'\n\nSetting variables",
"workDir = '/home/nick/notebook/SIPSim/dev/priming_exp/validation_sample/X12C.700.14_fracRichness-moreDif/'\ngenomeDir = '/home/nick/notebook/SIPSim/dev/priming_exp/genomes/'\nallAmpFrags = '/home/nick/notebook/SIPSim/dev/bac_genome1210/validation/ampFrags.pkl'\notuTableFile = '/var/seq_data/priming_exp/data/otu_table.txt'\nmetaDataFile = '/var/seq_data/priming_exp/data/allsample_metadata_nomock.txt'\nprimerFile = '/home/nick/notebook/SIPSim/dev/515F-806R.fna'\n\ncdhit_dir = '/home/nick/notebook/SIPSim/dev/priming_exp/CD-HIT/'\nR_dir = '/home/nick/notebook/SIPSim/lib/R/'\nfigureDir = '/home/nick/notebook/SIPSim/figures/'\n\n# total dataset files\n#allAmpFrags = '/home/nick/notebook/SIPSim/dev/bac_genome1210/validation/ampFrags.pkl'\ngenomeAllDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/genomes/'\ngenomeAllIndex = '/home/nick/notebook/SIPSim/dev/bac_genome1210/genomes/genome_index.txt'\n\n# simulation params\ncomm_richness = 6606\nseq_per_fraction = ['lognormal', 10.096, 1.116]\n\n# for making genome_map file for genome fragment simulation\ntaxonMapFile = os.path.join(cdhit_dir, 'target_taxa.txt')\ngenomeFilterFile = os.path.join(cdhit_dir, 'genomeFile_seqID_filt.txt')\nabundFile = os.path.join('/home/nick/notebook/SIPSim/dev/priming_exp/exp_info', 'X12C.700.14_frac_OTU.txt')\n\n# misc\nnprocs = 20",
"Init",
"import glob\nimport cPickle as pickle\nimport copy\nfrom IPython.display import Image\n\n%load_ext rpy2.ipython\n\n%%R\nlibrary(ggplot2)\nlibrary(dplyr)\nlibrary(tidyr)\nlibrary(gridExtra)\n\nif not os.path.isdir(workDir):\n os.makedirs(workDir)",
"Creating a community file from the fraction relative abundances",
"%%R -i abundFile\n# reading priming experiment OTU table\ntbl.abund = read.delim(abundFile, sep='\\t')\ntbl.abund %>% head\n\n%%R\ntbl.comm = tbl.abund %>%\n rename('taxon_name' = OTUId,\n 'rel_abund_perc' = mean_perc_abund) %>%\n select(taxon_name, rel_abund_perc) %>%\n mutate(library = '1',\n rank = row_number(-rel_abund_perc)) %>%\n arrange(rank)\n \ntbl.comm %>% head\n\n%%R\n# rescaling rel_abund_perc so sum(rel_abund_perc) = 100\ntbl.comm = tbl.comm %>%\n group_by(library) %>%\n mutate(total = sum(rel_abund_perc)) %>% \n ungroup() %>%\n mutate(rel_abund_perc = rel_abund_perc * 100 / total) %>%\n select(library, taxon_name, rel_abund_perc, rank)\n \ntbl.comm %>% head\n\n%%R -i comm_richness\n# number of OTUs\nn.OTUs = tbl.comm$taxon_name %>% unique %>% length\ncat('Number of OTUs:', n.OTUs, '\\n')\n\n# assertion\ncat('Community richness = number of OTUs? ', comm_richness == n.OTUs, '\\n')\n\n%%R -i workDir\n\ncommFile = paste(c(workDir, 'comm.txt'), collapse='/')\nwrite.table(tbl.comm, commFile, sep='\\t', quote=F, row.names=F)",
"Plotting community distribution",
"%%R -i workDir\n\ncommFile = paste(c(workDir, 'comm.txt'), collapse='/')\ncomm = read.delim(commFile, sep='\\t')\ncomm %>% head\n\n%%R -w 900 -h 350\n\nggplot(comm, aes(rank, rel_abund_perc)) +\n geom_point() +\n labs(x='Rank', y='% relative abundance', title='Priming experiment community abundance distribution') +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )",
"Simulating fragments\nMaking a genome index file to map genome fasta files to OTUs\n\nWill be used for community simulation\nJust OTUs with association to genomes",
"%%R -i taxonMapFile -i genomeFilterFile \n\ntaxonMap = read.delim(taxonMapFile, sep='\\t') %>%\n select(target_genome, OTU) %>%\n distinct()\ntaxonMap %>% nrow %>% print\ntaxonMap %>% head(n=3) %>% print\n\nbreaker = '----------------\\n'\ncat(breaker)\n\ngenomeFilter = read.delim(genomeFilterFile, sep='\\t', header=F) \ngenomeFilter %>% nrow %>% print\ngenomeFilter %>% head(n=3) %>% print\n\ncat(breaker)\n\ncomm = read.delim(commFile, sep='\\t') \ncomm %>% nrow %>% print\ncomm %>% head(n=3) %>% print\n\n%%R\ntaxonMap$OTU %>% table %>% sort(decreasing=T) %>% head\n\n%%R\n\ntbl.j = inner_join(taxonMap, genomeFilter, c('target_genome' = 'V1')) %>%\n rename('fasta_file' = V2) %>%\n select(OTU, fasta_file, target_genome)\n\ntbl.j %>% head(n=3)\n\n%%R\ntbl.j$OTU %>% table %>% sort(decreasing=T) %>% head\n\n%%R\ntbl.j2 = inner_join(tbl.j, comm, c('OTU' = 'taxon_name')) \n\nn.target.genomes = tbl.j2$OTU %>% unique %>% length\ncat('Number of target OTUs: ', n.target.genomes, '\\n')\ncat('--------', '\\n')\ntbl.j2 %>% head(n=3)\n\n%%R -i workDir\n\noutFile = paste(c(workDir, 'target_genome_index.txt'), collapse='/')\nwrite.table(tbl.j2, outFile, sep='\\t', quote=F, row.names=F, col.names=F)",
"Plotting community abundance distribution of target genomes",
"%%R -w 900 -h 350\n\nggplot(tbl.j2, aes(rank, rel_abund_perc)) +\n geom_point(size=3, shape='O', color='red') +\n labs(x='Rank', y='% relative abundance', title='Priming experiment community abundance distribution') +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )",
"Simulating fragments of genomes that match priming_exp bulk OTUs",
"!cd $workDir; \\\n SIPSim fragments \\\n target_genome_index.txt \\\n --fp $genomeDir \\\n --fr $primerFile \\\n --fld skewed-normal,5000,2000,-5 \\\n --flr None,None \\\n --nf 10000 \\\n --np $nprocs \\\n --tbl \\\n 2> ampFrags.log \\\n > ampFrags.txt",
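The `--fld skewed-normal,5000,2000,-5` flag above draws fragment lengths from a left-skewed distribution centered near 5 kb. A minimal NumPy sketch of what that distribution looks like, using Azzalini's two-normal construction of a skew-normal variate (the mapping of SIPSim's three parameters onto location/scale/shape is an assumption here):

```python
import numpy as np

rng = np.random.default_rng(1)

# skew-normal fragment lengths: location 5000 bp, scale 2000 bp, shape -5
# (negative shape -> left-skewed, i.e. a tail toward shorter fragments)
a = -5.0
delta = a / np.sqrt(1 + a**2)

# Azzalini construction: delta*|Z0| + sqrt(1-delta^2)*Z1 is skew-normal(a)
z0, z1 = rng.standard_normal((2, 10000))
z = delta * np.abs(z0) + np.sqrt(1 - delta**2) * z1
lengths = 5000 + 2000 * z

print(round(np.median(lengths)))
```

With this shape parameter the bulk of the mass sits well below the 5000 bp location, which matches the long left tail visible in the fragment-length plots below.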
"Plotting fragment length distribution",
"%%R -i workDir\n\ninFile = paste(c(workDir, 'ampFrags.txt'), collapse='/')\n\ntbl = read.delim(inFile, sep='\\t')\ntbl %>% head(n=3)\n\n%%R -w 950 -h 650\n\nsome.taxa = tbl$taxon_name %>% unique %>% head(n=20)\n\ntbl.f = tbl %>% \n filter(taxon_name %in% some.taxa)\n\nggplot(tbl.f, aes(fragGC, fragLength)) +\n stat_density2d() +\n labs(x='Fragment G+C', y='Fragment length (bp)') +\n facet_wrap(~ taxon_name, ncol=5) +\n theme_bw() +\n theme(\n text=element_text(size=16),\n axis.title.y=element_text(vjust=1)\n )\n\n# re-running simulation with pickled file\n\n!cd $workDir; \\\n SIPSim fragments \\\n target_genome_index.txt \\\n --fp $genomeDir \\\n --fr $primerFile \\\n --fld skewed-normal,5000,2000,-5 \\\n --flr None,None \\\n --nf 10000 \\\n --np $nprocs \\\n 2> ampFrags.log \\\n > ampFrags.pkl",
"Simulating fragments of total dataset with a greater diffusion",
"!cd $workDir; \\\n SIPSim fragments \\\n $genomeAllIndex \\\n --fp $genomeAllDir \\\n --fr $primerFile \\\n --fld skewed-normal,5000,2000,-5 \\\n --flr None,None \\\n --nf 10000 \\\n --np $nprocs \\\n 2> ampFragsAll.log \\\n > ampFragsAll.pkl\n\nampFragsAllFile = os.path.join(workDir, 'ampFragsAll.pkl')",
"Appending fragments from randomly selected genomes of total dataset (n=1210)\n\nThis is to obtain the richness of the bulk soil community\nRandom OTUs will be named after non-target OTUs in comm file\n\nMaking list of non-target OTUs",
"%%R -i workDir\n# loading files\n\n## target genome index (just OTUs with associated genome)\ninFile = paste(c(workDir, 'target_genome_index.txt'), collapse='/')\ntbl.target = read.delim(inFile, sep='\\t', header=F)\ncolnames(tbl.target) = c('OTUId', 'fasta_file', 'genome_name')\n\n## comm file of total community OTUs \ncommFile = paste(c(workDir, 'comm.txt'), collapse='/')\ntbl.comm = read.delim(commFile, sep='\\t')\n\n%%R\n# just OTUs w/out an associated genome\ntbl.j = anti_join(tbl.comm, tbl.target, c('taxon_name' = 'OTUId'))\nn.nontarget.genomes = tbl.j$taxon_name %>% length\ncat('Number of non-target genomes: ', n.nontarget.genomes, '\\n')\ncat('---------\\n')\ntbl.j %>% head(n=5)\n\n%%R -i comm_richness\n# checking assumptions\ncat('Target + nonTarget richness = total community richness?: ',\n n.target.genomes + n.nontarget.genomes == comm_richness, '\\n')\n\n%%R -i workDir\n# writing out non-target OTU file\noutFile = paste(c(workDir, 'comm_nonTarget.txt'), collapse='/')\nwrite.table(tbl.j, outFile, sep='\\t', quote=F, row.names=F)",
"Randomly selecting amplicon fragment length-GC KDEs from total genome pool",
"# List of non-target OTUs\ninFile = os.path.join(workDir, 'comm_nonTarget.txt')\nnonTarget = pd.read_csv(inFile, sep='\\t')['taxon_name'].tolist()\n\nprint 'Number of non-target OTUs: {}'.format(len(nonTarget))\nnonTarget[:4]\n\n# loading amplicon fragments from full genome KDE dataset\ninFile = os.path.join(workDir, 'ampFrags.pkl')\nampFrag_target = []\nwith open(inFile, 'rb') as iFH:\n ampFrag_target = pickle.load(iFH)\nprint 'Target OTU richness: {}'.format(len(ampFrag_target))\n\n# loading amplicon fragments from full genome KDE dataset\nampFrag_all = []\nwith open(allAmpFrags, 'rb') as iFH:\n ampFrag_all = pickle.load(iFH)\nprint 'Count of frag-GC KDEs for all genomes: {}'.format(len(ampFrag_all)) \n\n# random selection from list\n#target_richness = len(ampFrag_target)\n\ntarget_richness = len(ampFrag_target)\nrichness_needed = comm_richness - target_richness\nprint 'Number of random taxa needed to reach richness: {}'.format(richness_needed)\n\nif richness_needed > 0:\n index = range(target_richness)\n index = np.random.choice(index, richness_needed)\n \n ampFrag_rand = []\n for i in index:\n sys.stderr.write('{},'.format(i))\n ampFrag_rand.append(copy.deepcopy(ampFrag_all[i]))\nelse:\n ampFrag_rand = []\n\n# renaming randomly selected KDEs by non-target OTU-ID\nfor i in range(len(ampFrag_rand)):\n ampFrag_rand[i][0] = nonTarget[i]\n\n# appending random taxa to target taxa and writing\noutFile = os.path.join(workDir, 'ampFrags_wRand.pkl')\n\nwith open(outFile, 'wb') as oFH:\n x = ampFrag_target + ampFrag_rand\n print 'Number of taxa in output: {}'.format(len(x))\n pickle.dump(x, oFH)",
"Converting fragments to kde object",
"!cd $workDir; \\\n SIPSim fragment_kde \\\n ampFrags_wRand.pkl \\\n > ampFrags_wRand_kde.pkl",
"Adding diffusion",
"!cd $workDir; \\\n SIPSim diffusion \\\n ampFrags_wRand_kde.pkl \\\n --np $nprocs \\\n > ampFrags_wRand_kde_dif.pkl ",
"Making an incorp config file",
"!cd $workDir; \\\n SIPSim incorpConfigExample \\\n --percTaxa 0 \\\n --percIncorpUnif 100 \\\n > PT0_PI100.config",
"Adding isotope incorporation to BD distribution",
"!cd $workDir; \\\n SIPSim isotope_incorp \\\n ampFrags_wRand_kde_dif.pkl \\\n PT0_PI100.config \\\n --comm comm.txt \\\n --np $nprocs \\\n > ampFrags_wRand_kde_dif_incorp.pkl",
"Calculating BD shift from isotope incorporation",
"!cd $workDir; \\\n SIPSim BD_shift \\\n ampFrags_wRand_kde_dif.pkl \\\n ampFrags_wRand_kde_dif_incorp.pkl \\\n --np $nprocs \\\n > ampFrags_wRand_kde_dif_incorp_BD-shift.txt",
"Simulating gradient fractions",
"!cd $workDir; \\\n SIPSim gradient_fractions \\\n comm.txt \\\n > fracs.txt",
"Simulating an OTU table",
"!cd $workDir; \\\n SIPSim OTU_table \\\n ampFrags_wRand_kde_dif_incorp.pkl \\\n comm.txt \\\n fracs.txt \\\n --abs 1e9 \\\n --np $nprocs \\\n > OTU_abs1e9.txt",
"Plotting taxon abundances",
"%%R -i workDir\nsetwd(workDir)\n\n# loading file\ntbl = read.delim('OTU_abs1e9.txt', sep='\\t')\n\n%%R\n## BD for G+C of 0 or 100\nBD.GCp0 = 0 * 0.098 + 1.66\nBD.GCp100 = 1 * 0.098 + 1.66\n\n%%R -w 800 -h 300\n# plotting absolute abundances\n\ntbl.s = tbl %>%\n group_by(library, BD_mid) %>%\n summarize(total_count = sum(count))\n\n## plot\np = ggplot(tbl.s, aes(BD_mid, total_count)) +\n geom_area(stat='identity', alpha=0.3, position='dodge') +\n geom_histogram(stat='identity') +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density') +\n theme_bw() +\n theme( \n text = element_text(size=16) \n )\np\n\n%%R -w 800 -h 300\n# plotting number of taxa at each BD\n\ntbl.nt = tbl %>%\n filter(count > 0) %>%\n group_by(library, BD_mid) %>%\n summarize(n_taxa = n())\n\n## plot\np = ggplot(tbl.nt, aes(BD_mid, n_taxa)) +\n geom_area(stat='identity', alpha=0.3, position='dodge') +\n geom_histogram(stat='identity') +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density', y='Number of taxa') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )\np\n\n%%R -w 800 -h 250\n# plotting relative abundances\n\n## plot\np = ggplot(tbl, aes(BD_mid, count, fill=taxon)) +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )\np + geom_area(stat='identity', position='dodge', alpha=0.5)\n\n%%R -w 800 -h 250\n\np + geom_area(stat='identity', position='fill')",
"Subsampling from the OTU table",
"dist,loc,scale = seq_per_fraction\n\n!cd $workDir; \\\n SIPSim OTU_subsample \\\n --dist $dist \\\n --dist_params mean:$loc,sigma:$scale \\\n --walk 2 \\\n --min_size 10000 \\\n --max_size 200000 \\\n OTU_abs1e9.txt \\\n > OTU_abs1e9_sub.txt ",
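The `seq_per_fraction = ['lognormal', 10.096, 1.116]` parameters defined at the top put the typical per-fraction library size near e^10.096 ≈ 24,000 reads. A quick sketch of the implied size distribution, assuming NumPy's log-scale mean/sigma parameterization matches SIPSim's:

```python
import numpy as np

rng = np.random.default_rng(42)

# per-fraction sequence counts implied by the lognormal(10.096, 1.116) model
sizes = rng.lognormal(mean=10.096, sigma=1.116, size=10000)

# apply the same bounds as the --min_size / --max_size flags above
sizes = np.clip(sizes, 10000, 200000)

print(int(np.median(sizes)))
```

The median library size is unaffected by the clipping (it lies well inside the bounds), but a sizable lower tail gets pulled up to the 10,000-read floor.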
"Testing/Plotting seq count distribution of subsampled fraction samples",
"%%R -h 300 -i workDir\nsetwd(workDir)\n\ntbl = read.csv('OTU_abs1e9_sub.txt', sep='\\t') \n\ntbl.s = tbl %>% \n group_by(library, fraction) %>%\n summarize(total_count = sum(count)) %>%\n ungroup() %>%\n mutate(library = as.character(library))\n\nggplot(tbl.s, aes(total_count)) +\n geom_density(fill='blue')\n\n%%R -h 300 -w 600\nsetwd(workDir)\n\ntbl.s = tbl %>%\n group_by(fraction, BD_min, BD_mid, BD_max) %>%\n summarize(total_count = sum(count)) \n\nggplot(tbl.s, aes(BD_mid, total_count)) +\n geom_point() +\n geom_line() +\n labs(x='Buoyant density', y='Total sequences') +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )",
"Getting list of target taxa",
"%%R -i workDir\n\ninFile = paste(c(workDir, 'target_genome_index.txt'), collapse='/')\n\ntbl.target = read.delim(inFile, sep='\\t', header=F)\ncolnames(tbl.target) = c('OTUId', 'genome_file', 'genome_ID', 'X', 'Y', 'Z')\ntbl.target = tbl.target %>% distinct(OTUId)\n\n\ncat('Number of target OTUs: ', tbl.target$OTUId %>% unique %>% length, '\\n')\ncat('----------\\n')\ntbl.target %>% head(n=3)",
"Plotting abundance distributions",
"%%R -w 800 -h 250\n# plotting relative abundances\n\ntbl = tbl %>% \n group_by(fraction) %>%\n mutate(rel_abund = count / sum(count))\n\n\n## plot\np = ggplot(tbl, aes(BD_mid, count, fill=taxon)) +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )\np + geom_area(stat='identity', position='dodge', alpha=0.5)\n\n%%R -w 800 -h 250\n\np = ggplot(tbl, aes(BD_mid, rel_abund, fill=taxon)) +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )\np + geom_area(stat='identity')",
"Abundance distribution of just target taxa",
"%%R\n\ntargets = tbl.target$OTUId %>% as.vector %>% unique \n\ntbl.f = tbl %>%\n filter(taxon %in% targets)\n\ntbl.f %>% head\n\n%%R -w 800 -h 250\n# plotting absolute abundances\n\n## plot\np = ggplot(tbl.f, aes(BD_mid, count, fill=taxon)) +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )\np + geom_area(stat='identity', position='dodge', alpha=0.5)\n\n%%R -w 800 -h 250\n# plotting relative abundances\n\np = ggplot(tbl.f, aes(BD_mid, rel_abund, fill=taxon)) +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )\np + geom_area(stat='identity')",
"Plotting 'true' taxon abundance distribution (from priming exp dataset)",
"%%R -i metaDataFile\n# loading priming_exp metadata file\n\nmeta = read.delim(metaDataFile, sep='\\t')\nmeta %>% head(n=4)\n\n%%R -i otuTableFile\n# loading priming_exp OTU table \n\ntbl.otu.true = read.delim(otuTableFile, sep='\\t') %>%\n select(OTUId, starts_with('X12C.700.14')) \ntbl.otu.true %>% head(n=3)\n\n%%R\n# editing table\ntbl.otu.true.w = tbl.otu.true %>%\n gather('sample', 'count', 2:ncol(tbl.otu.true)) %>%\n mutate(sample = gsub('^X', '', sample)) %>%\n group_by(sample) %>%\n mutate(rel_abund = count / sum(count)) %>%\n ungroup() %>%\n filter(count > 0)\ntbl.otu.true.w %>% head(n=5)\n\n%%R\ntbl.true.j = inner_join(tbl.otu.true.w, meta, c('sample' = 'Sample'))\ntbl.true.j %>% as.data.frame %>% head(n=3)\n\n%%R -w 800 -h 300 -i workDir\n# plotting number of taxa at each BD\n\ntbl = read.csv('OTU_abs1e9_sub.txt', sep='\\t') \n\ntbl.nt = tbl %>%\n filter(count > 0) %>%\n group_by(library, BD_mid) %>%\n summarize(n_taxa = n())\n\n## plot\np = ggplot(tbl.nt, aes(BD_mid, n_taxa)) +\n geom_area(stat='identity', alpha=0.5) +\n geom_point() +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density', y='Number of taxa') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )\np\n\n%%R -w 700 -h 350\n\ntbl.true.j.s = tbl.true.j %>%\n filter(count > 0) %>%\n group_by(sample, Density) %>%\n summarize(n_taxa = sum(count > 0))\n\nggplot(tbl.true.j.s, aes(Density, n_taxa)) +\n geom_area(stat='identity', alpha=0.5) +\n geom_point() +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density', y='Number of taxa') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )",
"Plotting total counts for each sample",
"%%R -h 300 -w 600\ntbl.true.j.s = tbl.true.j %>%\n group_by(sample, Density) %>%\n summarize(total_count = sum(count)) \n\nggplot(tbl.true.j.s, aes(Density, total_count)) +\n geom_point() +\n geom_line() +\n labs(x='Buoyant density', y='Total sequences') +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )",
"Plotting abundance distribution of target OTUs",
"%%R\ntbl.true.j.f = tbl.true.j %>%\n filter(OTUId %in% targets) %>%\n arrange(OTUId, Density) %>%\n group_by(sample)\ntbl.true.j.f %>% head(n=3) %>% as.data.frame\n\n%%R -w 800 -h 250\n# plotting relative abundances\n\n## plot\nggplot(tbl.true.j.f, aes(Density, rel_abund, fill=OTUId)) +\n geom_area(stat='identity') +\n geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +\n labs(x='Buoyant density') +\n theme_bw() +\n theme( \n text = element_text(size=16),\n legend.position = 'none'\n )",
"Combining true and simulated OTU tables for target taxa",
"%%R\ntbl.f.e = tbl.f %>%\n mutate(library = 'simulation') %>%\n rename('density' = BD_mid) %>%\n select(-BD_min, -BD_max)\n\ntbl.true.e = tbl.true.j.f %>% \n select('taxon' = OTUId,\n 'fraction' = sample,\n 'density' = Density,\n count, rel_abund) %>%\n mutate(library = 'true') \n \n \ntbl.sim.true = rbind(tbl.f.e, tbl.true.e) %>% as.data.frame\ntbl.f.e = data.frame()\ntbl.true.e = data.frame()\n\ntbl.sim.true %>% head(n=3)\n\n%%R\n# check\ncat('Number of target taxa: ', tbl.sim.true$taxon %>% unique %>% length, '\\n')",
"Abundance distributions of each target taxon",
"%%R -w 900 -h 3500\n\ntbl.sim.true.f = tbl.sim.true %>%\n ungroup() %>%\n filter(density >= 1.6772) %>%\n filter(density <= 1.7603) %>%\n group_by(taxon) %>%\n mutate(mean_rel_abund = mean(rel_abund)) %>%\n ungroup()\n\ntbl.sim.true.f$taxon = reorder(tbl.sim.true.f$taxon, -tbl.sim.true.f$mean_rel_abund)\n\nggplot(tbl.sim.true.f, aes(density, rel_abund, color=library)) +\n geom_point() +\n geom_line() +\n theme_bw() +\n facet_wrap(~ taxon, ncol=4, scales='free_y')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ryanpdwyer/teensyio
|
examples/LED testing.ipynb
|
mit
|
[
"%matplotlib inline\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\n\ndf = pd.read_csv('red_led.csv')\n\nplt.plot(df.x_scaled, df.y_scaled)\nplt.xlabel('Volts [V]')\nplt.ylabel('Current [mA]')\nplt.xlim(1.6, 1.85)\nplt.ylim(-2.6, 0.1)\n\nplt.plot(df.x_scaled, 1000*df.y_scaled, 'b.')\nplt.xlabel('Volts [V]')\nplt.ylabel(u'Current [µA]')\nplt.xlim(0.5, 1)\nplt.ylim(-30, 30)",
"Tiny offset from zero here, but overall it looks pretty good.",
"np.std(df.y_scaled[np.logical_and(df.x_scaled < 1, df.x_scaled > 0.5)], ddof=1)*1000",
"With default analogRead settings, 12-bit resolution, we see 4 µA measured current noise on 10 mA full scale, with no effort to reduce the bandwidth of any of the components.",
"(4./10000)**-1"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
solowPy/solowPy
|
examples/3 Graphical analysis.ipynb
|
mit
|
[
"<div align='center' ><img src='https://raw.githubusercontent.com/davidrpugh/numerical-methods/master/images/sgpe-logo.jpg' width=\"1200\" height=\"100\"></div>\n<div align='right'><img src='https://raw.githubusercontent.com/davidrpugh/numerical-methods/master/images/SIRElogolweb.jpg' width=\"1200\" height=\"100\"></div>",
"%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\nimport pandas as pd\nimport sympy as sym\n\nimport solowpy\n\n# define model parameters\nces_params = {'A0': 1.0, 'L0': 1.0, 'g': 0.02, 'n': 0.03, 's': 0.15,\n 'delta': 0.05, 'alpha': 0.33, 'sigma': 0.95}\n\n# create an instance of the solow.Model class\nces_model = solowpy.CESModel(params=ces_params)",
"3. Graphical analysis using Matplotlib and IPython widgets\nAs the primary use of this module is for teaching purposes, there are a number of pedagogically useful plotting methods. I will demonstrate the basic usage of only a few of them below. To see a full listing of the available plotting methods use tab-completion on the cell below.",
"# use tab completion to see complete list\nces_model.plot_",
"Static example:\nCreating a static plot of the classic Solow diagram is done as follows.",
"fig, ax = plt.subplots(1, 1, figsize=(8,6))\nces_model.plot_solow_diagram(ax)\nfig.show()",
"Interactive example:\nAll of the various plotting methods can be made interactive using IPython widgets. To construct an IPython widget we need the following additional import statements.",
"from IPython.html.widgets import fixed, interact, FloatSliderWidget",
"Creating an interactive plot of the classic Solow diagram is done as follows.",
"# wrap the static plotting code in a function\ndef interactive_solow_diagram(model, **params):\n \"\"\"Interactive widget for the factor shares.\"\"\"\n fig, ax = plt.subplots(1, 1, figsize=(8, 6))\n model.plot_solow_diagram(ax, Nk=1000, **params)\n \n# define some widgets for the various parameters\neps = 1e-2\ntechnology_progress_widget = FloatSliderWidget(min=-0.05, max=0.05, step=eps, value=0.02)\npopulation_growth_widget = FloatSliderWidget(min=-0.05, max=0.05, step=eps, value=0.02)\nsavings_widget = FloatSliderWidget(min=eps, max=1-eps, step=eps, value=0.5)\noutput_elasticity_widget = FloatSliderWidget(min=eps, max=1.0, step=eps, value=0.5)\ndepreciation_widget = FloatSliderWidget(min=eps, max=1-eps, step=eps, value=0.5)\nelasticity_substitution_widget = FloatSliderWidget(min=eps, max=10.0, step=0.01, value=1.0+eps)\n\n# create the widget!\ninteract(interactive_solow_diagram, \n model=fixed(ces_model),\n g=technology_progress_widget,\n n=population_growth_widget,\n s=savings_widget, \n alpha=output_elasticity_widget,\n delta=depreciation_widget,\n sigma=elasticity_substitution_widget,\n )",
"3.1 Intensive production function\nCreating an interactive plot of the intensive production function is done as follows.",
"model.plot_intensive_output?\n\ndef interactive_intensive_output(model, **params):\n \"\"\"Interactive widget for the intensive production function.\"\"\"\n fig, ax = plt.subplots(1, 1, figsize=(8, 6))\n model.plot_intensive_output(ax, Nk=1000, **params)\n \n# define some widgets for the various parameters\neps = 1e-2\noutput_elasticity_widget = FloatSliderWidget(min=eps, max=1-eps, step=0.1, value=0.5)\nelasticity_substitution_widget = FloatSliderWidget(min=eps, max=10.0, step=0.5, value=1.0+eps)\n\n# create the interactive plot\ninteract(interactive_intensive_output,\n model=fixed(ces_model),\n alpha=output_elasticity_widget,\n sigma=elasticity_substitution_widget\n )",
"3.2 Factor shares\nCreating an interactive plot of factor shares for capital and labor is done as follows.",
"def interactive_factor_shares(model, **params):\n \"\"\"Interactive widget for the factor shares.\"\"\"\n fig, ax = plt.subplots(1, 1, figsize=(8, 6))\n model.plot_factor_shares(ax, Nk=1000, **params)\n \n# define some widgets for the various parameters\neps = 1e-2\ntechnology_progress_widget = FloatSliderWidget(min=-0.05, max=0.05, step=eps, value=0.02)\npopulation_growth_widget = FloatSliderWidget(min=-0.05, max=0.05, step=eps, value=0.02)\nsavings_widget = FloatSliderWidget(min=eps, max=1-eps, step=eps, value=0.5)\noutput_elasticity_widget = FloatSliderWidget(min=eps, max=1.0-eps, step=eps, value=0.5)\ndepreciation_widget = FloatSliderWidget(min=eps, max=1-eps, step=eps, value=0.5)\nelasticity_substitution_widget = FloatSliderWidget(min=eps, max=10.0, step=0.5, value=1.0+eps)\n\n# create the widget!\ninteract(interactive_factor_shares, \n model=fixed(ces_model),\n g=technology_progress_widget,\n n=population_growth_widget,\n s=savings_widget, \n alpha=output_elasticity_widget,\n delta=depreciation_widget,\n sigma=elasticity_substitution_widget,\n )",
"3.4 Phase Diagram\nCreating an interactive plot of the phase diagram for the Solow model is done as follows.",
"def interactive_phase_diagram(model, **params):\n \"\"\"Interactive widget for the phase diagram.\"\"\"\n fig, ax = plt.subplots(1, 1, figsize=(8, 6))\n model.plot_phase_diagram(ax, Nk=1000, **params)\n \n# define some widgets for the various parameters\neps = 1e-2\ntechnology_progress_widget = FloatSliderWidget(min=-0.05, max=0.05, step=eps, value=0.02)\npopulation_growth_widget = FloatSliderWidget(min=-0.05, max=0.05, step=eps, value=0.02)\nsavings_widget = FloatSliderWidget(min=eps, max=1-eps, step=eps, value=0.5)\noutput_elasticity_widget = FloatSliderWidget(min=eps, max=1-eps, step=eps, value=0.5)\ndepreciation_widget = FloatSliderWidget(min=eps, max=1-eps, step=eps, value=0.5)\nelasticity_substitution_widget = FloatSliderWidget(min=eps, max=10.0, step=0.5, value=1.0+eps)\n\n\n# create the widget!\nphase_diagram_widget = interact(interactive_phase_diagram, \n model=fixed(ces_model),\n g=technology_progress_widget,\n n=population_growth_widget,\n s=savings_widget, \n alpha=output_elasticity_widget,\n delta=depreciation_widget,\n sigma=elasticity_substitution_widget\n )"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
csaladenes/csaladenes.github.io
|
present/mcc2/PythonDataScienceHandbook/05.12-Gaussian-Mixtures.ipynb
|
mit
|
[
"<!--BOOK_INFORMATION-->\n<img align=\"left\" style=\"padding-right:10px;\" src=\"figures/PDSH-cover-small.png\">\nThis notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.\nThe text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!\n<!--NAVIGATION-->\n< In Depth: k-Means Clustering | Contents | In-Depth: Kernel Density Estimation >\n<a href=\"https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.12-Gaussian-Mixtures.ipynb\"><img align=\"left\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\" title=\"Open and Execute in Google Colaboratory\"></a>\nIn Depth: Gaussian Mixture Models\nThe k-means clustering model explored in the previous section is simple and relatively easy to understand, but its simplicity leads to practical challenges in its application.\nIn particular, the non-probabilistic nature of k-means and its use of simple distance-from-cluster-center to assign cluster membership leads to poor performance for many real-world situations.\nIn this section we will take a look at Gaussian mixture models (GMMs), which can be viewed as an extension of the ideas behind k-means, but can also be a powerful tool for estimation beyond simple clustering.\nWe begin with the standard imports:",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns; sns.set()\nimport numpy as np",
"Motivating GMM: Weaknesses of k-Means\nLet's take a look at some of the weaknesses of k-means and think about how we might improve the cluster model.\nAs we saw in the previous section, given simple, well-separated data, k-means finds suitable clustering results.\nFor example, if we have simple blobs of data, the k-means algorithm can quickly label those clusters in a way that closely matches what we might do by eye:",
"# Generate some data\nfrom sklearn.datasets import make_blobs\nX, y_true = make_blobs(n_samples=400, centers=4,\n cluster_std=0.60, random_state=0)\nX = X[:, ::-1] # flip axes for better plotting\n\n# Plot the data with K Means Labels\nfrom sklearn.cluster import KMeans\nkmeans = KMeans(4, random_state=0)\nlabels = kmeans.fit(X).predict(X)\nplt.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis');",
"From an intuitive standpoint, we might expect that the clustering assignment for some points is more certain than others: for example, there appears to be a very slight overlap between the two middle clusters, such that we might not have complete confidence in the cluster assigment of points between them.\nUnfortunately, the k-means model has no intrinsic measure of probability or uncertainty of cluster assignments (although it may be possible to use a bootstrap approach to estimate this uncertainty).\nFor this, we must think about generalizing the model.\nOne way to think about the k-means model is that it places a circle (or, in higher dimensions, a hyper-sphere) at the center of each cluster, with a radius defined by the most distant point in the cluster.\nThis radius acts as a hard cutoff for cluster assignment within the training set: any point outside this circle is not considered a member of the cluster.\nWe can visualize this cluster model with the following function:",
"from sklearn.cluster import KMeans\nfrom scipy.spatial.distance import cdist\n\ndef plot_kmeans(kmeans, X, n_clusters=4, rseed=0, ax=None):\n labels = kmeans.fit_predict(X)\n\n # plot the input data\n ax = ax or plt.gca()\n ax.axis('equal')\n ax.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis', zorder=2)\n\n # plot the representation of the KMeans model\n centers = kmeans.cluster_centers_\n radii = [cdist(X[labels == i], [center]).max()\n for i, center in enumerate(centers)]\n for c, r in zip(centers, radii):\n ax.add_patch(plt.Circle(c, r, fc='#CCCCCC', lw=3, alpha=0.5, zorder=1))\n\nkmeans = KMeans(n_clusters=4, random_state=0)\nplot_kmeans(kmeans, X)",
"An important observation for k-means is that these cluster models must be circular: k-means has no built-in way of accounting for oblong or elliptical clusters.\nSo, for example, if we take the same data and transform it, the cluster assignments end up becoming muddled:",
"rng = np.random.RandomState(13)\nX_stretched = np.dot(X, rng.randn(2, 2))\n\nkmeans = KMeans(n_clusters=4, random_state=0)\nplot_kmeans(kmeans, X_stretched)",
"By eye, we recognize that these transformed clusters are non-circular, and thus circular clusters would be a poor fit.\nNevertheless, k-means is not flexible enough to account for this, and tries to force-fit the data into four circular clusters.\nThis results in a mixing of cluster assignments where the resulting circles overlap: see especially the bottom-right of this plot.\nOne might imagine addressing this particular situation by preprocessing the data with PCA (see In Depth: Principal Component Analysis), but in practice there is no guarantee that such a global operation will circularize the individual data.\nThese two disadvantages of k-means—its lack of flexibility in cluster shape and lack of probabilistic cluster assignment—mean that for many datasets (especially low-dimensional datasets) it may not perform as well as you might hope.\nYou might imagine addressing these weaknesses by generalizing the k-means model: for example, you could measure uncertainty in cluster assignment by comparing the distances of each point to all cluster centers, rather than focusing on just the closest.\nYou might also imagine allowing the cluster boundaries to be ellipses rather than circles, so as to account for non-circular clusters.\nIt turns out these are two essential components of a different type of clustering model, Gaussian mixture models.\nGeneralizing E–M: Gaussian Mixture Models\nA Gaussian mixture model (GMM) attempts to find a mixture of multi-dimensional Gaussian probability distributions that best model any input dataset.\nIn the simplest case, GMMs can be used for finding clusters in the same manner as k-means:",
"# from sklearn.mixture import GMM\nfrom sklearn.mixture import GaussianMixture as GMM\ngmm = GMM(n_components=4).fit(X)\nlabels = gmm.predict(X)\nplt.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis');",
"But because GMM contains a probabilistic model under the hood, it is also possible to find probabilistic cluster assignments—in Scikit-Learn this is done using the predict_proba method.\nThis returns a matrix of size [n_samples, n_clusters] which measures the probability that any point belongs to the given cluster:",
"probs = gmm.predict_proba(X)\nprint(probs[:5].round(3))",
"We can visualize this uncertainty by, for example, making the size of each point proportional to the certainty of its prediction; looking at the following figure, we can see that it is precisely the points at the boundaries between clusters that reflect this uncertainty of cluster assignment:",
"size = 50 * probs.max(1) ** 2 # square emphasizes differences\nplt.scatter(X[:, 0], X[:, 1], c=labels, cmap='viridis', s=size);",
"Under the hood, a Gaussian mixture model is very similar to k-means: it uses an expectation–maximization approach which qualitatively does the following:\n\n\nChoose starting guesses for the location and shape\n\n\nRepeat until converged:\n\n\nE-step: for each point, find weights encoding the probability of membership in each cluster\n\nM-step: for each cluster, update its location, normalization, and shape based on all data points, making use of the weights\n\nThe result of this is that each cluster is associated not with a hard-edged sphere, but with a smooth Gaussian model.\nJust as in the k-means expectation–maximization approach, this algorithm can sometimes miss the globally optimal solution, and thus in practice multiple random initializations are used.\nLet's create a function that will help us visualize the locations and shapes of the GMM clusters by drawing ellipses based on the GMM output:",
"from matplotlib.patches import Ellipse\n\ndef draw_ellipse(position, covariance, ax=None, **kwargs):\n \"\"\"Draw an ellipse with a given position and covariance\"\"\"\n ax = ax or plt.gca()\n \n # Convert covariance to principal axes\n if covariance.shape == (2, 2):\n U, s, Vt = np.linalg.svd(covariance)\n angle = np.degrees(np.arctan2(U[1, 0], U[0, 0]))\n width, height = 2 * np.sqrt(s)\n else:\n angle = 0\n width, height = 2 * np.sqrt(covariance)\n \n # Draw the Ellipse (pass angle by keyword for recent Matplotlib versions)\n for nsig in range(1, 4):\n ax.add_patch(Ellipse(position, nsig * width, nsig * height,\n angle=angle, **kwargs))\n \ndef plot_gmm(gmm, X, label=True, ax=None):\n ax = ax or plt.gca()\n labels = gmm.fit(X).predict(X)\n if label:\n ax.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis', zorder=2)\n else:\n ax.scatter(X[:, 0], X[:, 1], s=40, zorder=2)\n ax.axis('equal')\n \n w_factor = 0.2 / gmm.weights_.max()\n for pos, covar, w in zip(gmm.means_, gmm.covariances_, gmm.weights_):\n draw_ellipse(pos, covar, alpha=w * w_factor)",
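As a complement to the Scikit-Learn usage above, the E-step and M-step described earlier can be sketched directly in NumPy for the simplest case: a one-dimensional, two-component mixture. This is an illustrative toy, not Scikit-Learn's implementation; all variable names here are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: two well-separated 1-D Gaussian blobs
x = np.concatenate([rng.normal(-5, 1, 500), rng.normal(5, 1, 500)])

# Starting guesses for the mixture weights, means, and variances
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

def gauss_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

for _ in range(50):
    # E-step: responsibilities encoding each point's membership probability
    resp = w * gauss_pdf(x[:, None], mu, var)      # shape (1000, 2)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update each component's weight, location, and shape
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print(np.sort(mu))  # should land near the true means -5 and +5
```

After a few dozen iterations the estimated means settle near the true blob centers, illustrating how alternating the two steps refines both the responsibilities and the component parameters.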
"With this in place, we can take a look at what the four-component GMM gives us for our initial data:",
"gmm = GMM(n_components=4, random_state=42)\nplot_gmm(gmm, X)",
"Similarly, we can use the GMM approach to fit our stretched dataset; allowing for a full covariance the model will fit even very oblong, stretched-out clusters:",
"gmm = GMM(n_components=4, covariance_type='full', random_state=42)\nplot_gmm(gmm, X_stretched)",
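The difference between these fits comes down to the covariance_type argument, discussed next. As a quick concrete check (on made-up random data; the shapes follow Scikit-Learn's documented conventions for the fitted covariances_ attribute), each setting stores its covariances in a differently shaped array:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X2 = rng.randn(300, 2)  # toy 2-D data, purely for illustration

# Shapes: spherical -> (3,), diag -> (3, 2), tied -> (2, 2), full -> (3, 2, 2)
for ctype in ['spherical', 'diag', 'tied', 'full']:
    gm = GaussianMixture(n_components=3, covariance_type=ctype,
                         random_state=0).fit(X2)
    print(ctype, gm.covariances_.shape)
```

The shape reflects the degrees of freedom each option allows: one scalar per cluster for spherical, one variance per dimension per cluster for diag, a single shared matrix for tied, and a full matrix per cluster for full.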
"This makes clear that GMM addresses the two main practical issues with k-means encountered before.\nChoosing the covariance type\nIf you look at the details of the preceding fits, you will see that the covariance_type option was set differently within each.\nThis hyperparameter controls the degrees of freedom in the shape of each cluster; it is essential to set this carefully for any given problem.\nSetting covariance_type=\"diag\" means that the size of the cluster along each dimension can be set independently, with the resulting ellipse constrained to align with the axes.\nA slightly simpler and faster model is covariance_type=\"spherical\", which constrains the shape of the cluster such that all dimensions are equal. The resulting clustering will have similar characteristics to that of k-means, though it is not entirely equivalent.\nA more complicated and computationally expensive model (especially as the number of dimensions grows) is covariance_type=\"full\", which allows each cluster to be modeled as an ellipse with arbitrary orientation; this is the default for Scikit-Learn's GaussianMixture estimator.\nWe can see a visual representation of these three choices for a single cluster within the following figure:\n\nfigure source in Appendix\nGMM as Density Estimation\nThough GMM is often categorized as a clustering algorithm, fundamentally it is an algorithm for density estimation.\nThat is to say, the result of a GMM fit to some data is technically not a clustering model, but a generative probabilistic model describing the distribution of the data.\nAs an example, consider some data generated from Scikit-Learn's make_moons function, which we saw in In Depth: K-Means Clustering:",
"from sklearn.datasets import make_moons\nXmoon, ymoon = make_moons(200, noise=.05, random_state=0)\nplt.scatter(Xmoon[:, 0], Xmoon[:, 1]);",
"If we try to fit this with a two-component GMM viewed as a clustering model, the results are not particularly useful:",
"gmm2 = GMM(n_components=2, covariance_type='full', random_state=0)\nplot_gmm(gmm2, Xmoon)",
"But if we instead use many more components and ignore the cluster labels, we find a fit that is much closer to the input data:",
"gmm16 = GMM(n_components=16, covariance_type='full', random_state=0)\nplot_gmm(gmm16, Xmoon, label=False)",
"Here the mixture of 16 Gaussians serves not to find separated clusters of data, but rather to model the overall distribution of the input data.\nThis is a generative model of the distribution, meaning that the GMM gives us the recipe to generate new random data distributed similarly to our input.\nFor example, here are 400 new points drawn from this 16-component GMM fit to our original data:",
"# Xnew = gmm16.sample(400, random_state=42)\nXnew, _ = gmm16.sample(400)\nplt.scatter(Xnew[:, 0], Xnew[:, 1]);",
"GMM is convenient as a flexible means of modeling an arbitrary multi-dimensional distribution of data.\nHow many components?\nThe fact that GMM is a generative model gives us a natural means of determining the optimal number of components for a given dataset.\nA generative model is inherently a probability distribution for the dataset, and so we can simply evaluate the likelihood of the data under the model, using cross-validation to avoid over-fitting.\nAnother means of correcting for over-fitting is to adjust the model likelihoods using some analytic criterion such as the Akaike information criterion (AIC) or the Bayesian information criterion (BIC).\nScikit-Learn's GMM estimator actually includes built-in methods that compute both of these, and so it is very easy to operate using this approach.\nLet's look at the AIC and BIC as a function of the number of GMM components for our moon dataset:",
"n_components = np.arange(1, 21)\nmodels = [GMM(n, covariance_type='full', random_state=0).fit(Xmoon)\n for n in n_components]\n\nplt.plot(n_components, [m.bic(Xmoon) for m in models], label='BIC')\nplt.plot(n_components, [m.aic(Xmoon) for m in models], label='AIC')\nplt.legend(loc='best')\nplt.xlabel('n_components');",
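Beyond eyeballing the plot, the minimizing value can be picked out programmatically. Here is a self-contained sketch of that selection (the exact optimum can vary slightly across library versions and random seeds):

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.mixture import GaussianMixture

# Same moon data and component grid as above
Xm, _ = make_moons(200, noise=.05, random_state=0)
ns = np.arange(1, 21)

# Fit each candidate model and record its BIC
bics = [GaussianMixture(n, covariance_type='full', random_state=0)
        .fit(Xm).bic(Xm) for n in ns]

# The BIC-optimal component count is simply the argmin over the grid
best_n = ns[np.argmin(bics)]
print(best_n)
```

The same pattern works with `.aic(Xm)` in place of `.bic(Xm)`; as discussed below, BIC penalizes model complexity more heavily and so tends to pick a smaller number.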
"The optimal number of clusters is the value that minimizes the AIC or BIC, depending on which approximation we wish to use. The AIC tells us that our choice of 16 components above was probably too many: around 8-12 components would have been a better choice.\nAs is typical with this sort of problem, the BIC recommends a simpler model.\nNotice the important point: this choice of number of components measures how well GMM works as a density estimator, not how well it works as a clustering algorithm.\nI'd encourage you to think of GMM primarily as a density estimator, and use it for clustering only when warranted within simple datasets.\nExample: GMM for Generating New Data\nWe just saw a simple example of using GMM as a generative model of data in order to create new samples from the distribution defined by the input data.\nHere we will run with this idea and generate new handwritten digits from the standard digits corpus that we have used before.\nTo start with, let's load the digits data using Scikit-Learn's data tools:",
"from sklearn.datasets import load_digits\ndigits = load_digits()\ndigits.data.shape",
"Next let's plot the first 100 of these to recall exactly what we're looking at:",
"def plot_digits(data):\n fig, ax = plt.subplots(10, 10, figsize=(8, 8),\n subplot_kw=dict(xticks=[], yticks=[]))\n fig.subplots_adjust(hspace=0.05, wspace=0.05)\n for i, axi in enumerate(ax.flat):\n im = axi.imshow(data[i].reshape(8, 8), cmap='binary')\n im.set_clim(0, 16)\nplot_digits(digits.data)",
"We have nearly 1,800 digits in 64 dimensions, and we can build a GMM on top of these to generate more.\nGMMs can have difficulty converging in such a high-dimensional space, so we will start with an invertible dimensionality reduction algorithm on the data.\nHere we will use a straightforward PCA, asking it to preserve 99% of the variance in the projected data:",
"from sklearn.decomposition import PCA\npca = PCA(0.99, whiten=True)\ndata = pca.fit_transform(digits.data)\ndata.shape",
"The result is 41 dimensions, a reduction of nearly 1/3 with almost no information loss.\nGiven this projected data, let's use the AIC to get a gauge for the number of GMM components we should use:",
"n_components = np.arange(50, 210, 10)\nmodels = [GMM(n, covariance_type='full', random_state=0)\n for n in n_components]\naics = [model.fit(data).aic(data) for model in models]\nplt.plot(n_components, aics);",
"It appears that around 110 components minimizes the AIC; we will use this model.\nLet's quickly fit this to the data and confirm that it has converged:",
"gmm = GMM(110, covariance_type='full', random_state=0)\ngmm.fit(data)\nprint(gmm.converged_)",
"Now we can draw samples of 100 new points within this 41-dimensional projected space, using the GMM as a generative model:",
"data_new, _ = gmm.sample(100)\ndata_new.shape",
"Finally, we can use the inverse transform of the PCA object to construct the new digits:",
"digits_new = pca.inverse_transform(data_new)\nplot_digits(digits_new)",
"The results for the most part look like plausible digits from the dataset!\nConsider what we've done here: given a sampling of handwritten digits, we have modeled the distribution of that data in such a way that we can generate brand new samples of digits from the data: these are \"handwritten digits\" which do not individually appear in the original dataset, but rather capture the general features of the input data as modeled by the mixture model.\nSuch a generative model of digits can prove very useful as a component of a Bayesian generative classifier, as we shall see in the next section.\n<!--NAVIGATION-->\n< In Depth: k-Means Clustering | Contents | In-Depth: Kernel Density Estimation >\n<a href=\"https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/05.12-Gaussian-Mixtures.ipynb\"><img align=\"left\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\" title=\"Open and Execute in Google Colaboratory\"></a>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Merinorus/adaisawesome
|
Homework/02 - Data from the Web/Question 2.ipynb
|
gpl-3.0
|
[
"Obtain all the data for the Master students, starting from 2007. Compute how many months it took each master student to complete their master, for those that completed it. Partition the data between male and female students, and compute the average -- is the difference in average statistically significant?\nNotice that master students' data is trickier than the bachelors', as there are many missing records in the IS-Academia database. Therefore, try to guess how much time a master student spent at EPFL by at least checking the distance in months between Master semestre 1 and Master semestre 2. If the Mineur field is not empty, the student should also appear registered in Master semestre 3. Last but not least, don't forget to check if the student also has an entry in the Projet Master tables. Once you can handle this data well, compute the \"average stay at EPFL\" for master students. Now extract all the students with a Spécialisation and compute the \"average stay\" for each category of that attribute -- compared to the general average, can you find any specialization for which the difference in average is statistically significant?",
"# Requests : make http requests to websites\nimport requests\n# BeautifulSoup : parser to manipulate easily html content\nfrom bs4 import BeautifulSoup\n# Regular expressions\nimport re\n# Aren't pandas awesome ?\nimport pandas as pd",
"Let's get the first page, from which we will be able to extract some interesting content!",
"# Ask for the first page on IS Academia. To see it, just type it on your browser address bar : http://isa.epfl.ch/imoniteur_ISAP/!GEDPUBLICREPORTS.filter?ww_i_reportModel=133685247\nr = requests.get('http://isa.epfl.ch/imoniteur_ISAP/!GEDPUBLICREPORTS.filter?ww_i_reportModel=133685247')\nhtmlContent = BeautifulSoup(r.content, 'html.parser')\n\nprint(htmlContent.prettify())",
"Now we need to make other requests to IS Academia, which specify every parameter : computer science students, all the years, and all bachelor semester (which are a couple of two values : pedagogic period and semester type). Thus, we're going to get all the parameters we need to make the next request :",
"# We first get the \"Computer science\" value\ncomputerScienceField = htmlContent.find('option', text='Informatique')\ncomputerScienceField\n\ncomputerScienceValue = computerScienceField.get('value')\ncomputerScienceValue\n\n# Then, we're going to need all the academic years values.\nacademicYearsField = htmlContent.find('select', attrs={'name':'ww_x_PERIODE_ACAD'})\nacademicYearsSet = academicYearsField.findAll('option')\n\n# Since there are several years to remember, we're storing all of them in a table to use them later\nacademicYearValues = []\n# We'll put the textual content in a table aswell (\"Master semestre 1\", \"Master semestre 2\"...)\nacademicYearContent = []\n\nfor option in academicYearsSet:\n value = option.get('value')\n # However, we don't want any \"null\" value\n if value != 'null':\n academicYearValues.append(value)\n academicYearContent.append(option.text)\n\n# Now, we have all the academic years that might interest us. We wrangle them a little bit so be able to make request more easily later.\nacademicYearValues_series = pd.Series(academicYearValues)\nacademicYearContent_series = pd.Series(academicYearContent)\nacademicYear_df = pd.concat([academicYearContent_series, academicYearValues_series], axis = 1)\nacademicYear_df.columns= ['Academic_year', 'Value']\nacademicYear_df = academicYear_df.sort_values(['Academic_year', 'Value'], ascending=[1, 0])\nacademicYear_df\n\n# Then, let's get all the pedagogic periods we need. It's a little bit more complicated here because we need to link the pedagogic period with a season (eg : Bachelor 1 is autumn, Bachelor 2 is spring etc.)\n# Thus, we need more than the pedagogic values. 
For doing some tests to associate them with the right season, we need the actual textual value (\"Bachelor semestre 1\", \"Bachelor semestre 2\" etc.)\npedagogicPeriodsField = htmlContent.find('select', attrs={'name':'ww_x_PERIODE_PEDAGO'})\npedagogicPeriodsSet = pedagogicPeriodsField.findAll('option')\n\n# Same as above, we'll store the values in a table\npedagogicPeriodValues = []\n# We'll put the textual content in a table aswell (\"Master semestre 1\", \"Master semestre 2\"...)\npedagogicPeriodContent = []\n\nfor option in pedagogicPeriodsSet:\n value = option.get('value')\n if value != 'null':\n pedagogicPeriodValues.append(value)\n pedagogicPeriodContent.append(option.text)\n\n# Let's make the values and content meet each other\npedagogicPeriodContent_series = pd.Series(pedagogicPeriodContent)\npedagogicPeriodValues_series = pd.Series(pedagogicPeriodValues)\npedagogicPeriod_df = pd.concat([pedagogicPeriodContent_series, pedagogicPeriodValues_series], axis = 1);\npedagogicPeriod_df.columns = ['Pedagogic_period', 'Value']\n\n# We keep all semesters related to master students\npedagogicPeriod_df_master = pedagogicPeriod_df[[period.startswith('Master') for period in pedagogicPeriod_df.Pedagogic_period]]\npedagogicPeriod_df_minor = pedagogicPeriod_df[[period.startswith('Mineur') for period in pedagogicPeriod_df.Pedagogic_period]]\npedagogicPeriod_df_project = pedagogicPeriod_df[[period.startswith('Projet Master') for period in pedagogicPeriod_df.Pedagogic_period]]\n\npedagogicPeriod_df = pd.concat([pedagogicPeriod_df_master, pedagogicPeriod_df_minor, pedagogicPeriod_df_project])\npedagogicPeriod_df\n\n# Lastly, we need to extract the values associated with autumn and spring semesters.\nsemesterTypeField = htmlContent.find('select', attrs={'name':'ww_x_HIVERETE'})\nsemesterTypeSet = semesterTypeField.findAll('option')\n\n# Again, we need to store the values in a table\nsemesterTypeValues = []\n# We'll put the textual content in a table aswell\nsemesterTypeContent = 
[]\n\nfor option in semesterTypeSet:\n value = option.get('value')\n if value != 'null':\n semesterTypeValues.append(value)\n semesterTypeContent.append(option.text)\n\n# Here are the values for autumn and spring semester :\n\nsemesterTypeValues_series = pd.Series(semesterTypeValues)\nsemesterTypeContent_series = pd.Series(semesterTypeContent)\nsemesterType_df = pd.concat([semesterTypeContent_series, semesterTypeValues_series], axis = 1)\nsemesterType_df.columns = ['Semester_type', 'Value']\nsemesterType_df",
"Now we have all the information needed to get all the master students!\nLet's make all the requests we need to build our data.\nWe will try to do requests such as :\n- Get students from master semester 1 of 2007-2008\n- ...\n- Get students from master semester 4 of 2007-2008\n- Get students from mineur semester 1 of 2007-2008\n- Get students from mineur semester 2 of 2007-2008\n- Get students from master project semester 1 of 2007-2008\n- Get students from master project semester 2 of 2007-2008\n... and so on for each academic year until 2015-2016, the last complete year.\nWe can even take the first semester of 2016-2017 into account, to check if some students we thought had finished last year are actually still studying. This can be for different reasons : doing a mineur, a project, repeating a semester...\nWe can ask for a list of students in two formats : HTML or CSV.\nWe chose to get them in HTML format because this is the first time that we wrangle data in HTML format, and that may be really useful to learn in order to work with most websites in the future!\nThe request sent by the browser to IS Academia, to get a list of students in HTML format, looks like this :\nhttp://isa.epfl.ch/imoniteur_ISAP/!GEDPUBLICREPORTS.html?arg1=xxx&arg2=yyy\nWith \"xxx\" the value associated with the argument named \"arg1\", \"yyy\" the value associated with the argument named \"arg2\" etc.
In practice such requests carry many more arguments.\nFor instance, we tried to send a request as a \"human\" through our browser and intercepted it with Postman interceptor.\nWe found that the following arguments have to be sent :\nww_x_GPS = -1\nww_i_reportModel = 133685247\nww_i_reportModelXsl = 133685270\nww_x_UNITE_ACAD = 249847 (which is the value of computer science !)\nww_x_PERIODE_ACAD = X (eg : the value corresponding to 2007-2008 would be 978181)\nww_x_PERIODE_PEDAGO = Y (eg : 2230106 for Master semestre 1)\nww_x_HIVERETE = Z (eg : 2936286 for autumn semester)\nThe last three values X, Y and Z must be replaced with the ones we extracted previously. For instance, if we want to get students from Master, semester 1 (which is necessarily autumn semester) of 2007-2008, the \"GET Request\" would be the following :\nhttp://isa.epfl.ch/imoniteur_ISAP/!GEDPUBLICREPORTS.html?ww_x_GPS=-1&ww_i_reportModel=133685247&ww_i_reportModelXsl=133685270&ww_x_UNITE_ACAD=249847&ww_x_PERIODE_ACAD=978181&ww_x_PERIODE_PEDAGO=2230106&ww_x_HIVERETE=2936286\nSo let's cook all the requests we're going to send !",
"# Let's put the semester types aside, because we're going to need them\nautumn_semester_value = semesterType_df.loc[semesterType_df['Semester_type'] == 'Semestre d\\'automne', 'Value']\nautumn_semester_value = autumn_semester_value.iloc[0]\n\nspring_semester_value = semesterType_df.loc[semesterType_df['Semester_type'] == 'Semestre de printemps', 'Value']\nspring_semester_value = spring_semester_value.iloc[0]\n\n# Here is the list of the GET requests we will sent to IS Academia\nrequestsToISAcademia = []\n\n# We'll need to associate all the information associated with the requests to help wrangling data later :\nacademicYearRequests = []\npedagogicPeriodRequests = []\nsemesterTypeRequests = []\n\n# Go all over the years ('2007-2008', '2008-2009' and so on)\nfor academicYear_row in academicYear_df.itertuples(index=True, name='Academic_year'):\n \n # The year (eg: '2007-2008')\n academicYear = academicYear_row.Academic_year\n \n # The associated value (eg: '978181')\n academicYear_value = academicYear_row.Value\n \n # We get all the pedagogic periods associated with this academic year\n for pegagogicPeriod_row in pedagogicPeriod_df.itertuples(index=True, name='Pedagogic_period'):\n \n # The period (eg: 'Master semestre 1')\n pedagogicPeriod = pegagogicPeriod_row.Pedagogic_period\n \n \n # The associated value (eg: '2230106')\n pegagogicPeriod_Value = pegagogicPeriod_row.Value\n \n # We need to associate the corresponding semester type (eg: Master semester 1 is autumn, but Master semester 2 will be spring)\n if (pedagogicPeriod.endswith('1') or pedagogicPeriod.endswith('3') or pedagogicPeriod.endswith('automne')):\n semester_Value = autumn_semester_value\n semester = 'Autumn'\n else:\n semester_Value = spring_semester_value\n semester = 'Spring'\n \n \n \n # This print line is only for debugging if you want to check something\n # print(\"academic year = \" + academicYear_value + \", pedagogic value = \" + pegagogicPeriod_Value + \", pedagogic period is \" + 
pedagogicPeriod + \" (semester type value = \" + semester_Value + \")\")\n \n # We're ready to cook the request !\n request = 'http://isa.epfl.ch/imoniteur_ISAP/!GEDPUBLICREPORTS.html?ww_x_GPS=-1&ww_i_reportModel=133685247&ww_i_reportModelXsl=133685270&ww_x_UNITE_ACAD=' + computerScienceValue\n request = request + '&ww_x_PERIODE_ACAD=' + academicYear_value\n request = request + '&ww_x_PERIODE_PEDAGO=' + pegagogicPeriod_Value\n request = request + '&ww_x_HIVERETE=' + semester_Value\n \n # Add the newly created request to our wish list...\n requestsToISAcademia.append(request)\n # And we save the corresponding information for each request\n pedagogicPeriodRequests.append(pedagogicPeriod)\n academicYearRequests.append(academicYear)\n semesterTypeRequests.append(semester)\n \n \n \n\n# Here is the list of all the requests we have to send !\n# requestsToISAcademia\n\n# Here are the corresponding years for each request\n# academicYearRequests\n\n# Same for associated pedagogic periods\n# pedagogicPeriodRequests\n\n# Last but not the least, the semester types\n# semesterTypeRequests\n\nacademicYearRequests_series = pd.Series(academicYearRequests)\npedagogicPeriodRequests_series = pd.Series(pedagogicPeriodRequests)\nrequestsToISAcademia_series = pd.Series(requestsToISAcademia)\n\n# Let's summarize everything in a dataframe...\nrequests_df = pd.concat([academicYearRequests_series, pedagogicPeriodRequests_series, requestsToISAcademia_series], axis = 1)\nrequests_df.columns = ['Academic_year', 'Pedagogic_period', 'Request']\n\nrequests_df",
"The requests are now ready to be sent to IS Academia. Let's try it out !\nTIME OUT : We stopped right here for our homework. What is below should look like the beginning of a loop that gets students lists from IS Academia. It's not finished at all :(",
"# WARNING : NEXT LINE IS COMMENTED FOR DEBUGGING THE FIRST REQUEST ONLY. UNCOMMENT IT AND INDENT THE CODE CORRECTLY TO MAKE ALL THE REQUESTS\n\n#for request in requestsToISAcademia: # LINE TO UNCOMMENT TO SEND ALL REQUESTS\nrequest = requestsToISAcademia[0] # LINE TO COMMENT TO SEND ALL REQUESTS\nprint(request)\n\n# Send the request to IS Academia\nr = requests.get(request)\n\n# Here is the HTML content of IS Academia's response\nhtmlContent = BeautifulSoup(r.content, 'html.parser')\n\n# Let's extract some data...\ncomputerScienceField = htmlContent.find('option', text='Informatique')\n\n# Getting the table of students\n# Let's make the columns\ncolumns = []\ntable = htmlContent.find('table')\nth = table.find('th', text='Civilité')\ncolumns.append(th.text)\n# Go through the table until the last column\nwhile th.findNext('').name == 'th':\n th = th.findNext('')\n columns.append(th.text)\n \n# This array will contain all the students \nstudentsTable = []\n",
"CAREFUL WITH THE NEXT CELL: as originally written, its row-scraping loop never advanced and would hang! :x",
"# Getting the information about the student we're \"looping on\"\ncurrentStudent = []\ntr = th.findNext('tr')\nfor child in tr.children:\n currentStudent.append(child.text)\n \n# Add the student to the array \nstudentsTable.append(currentStudent)\n\n# Walk through the remaining rows. Note we advance from `tr`: the original\n# version advanced from `th`, which never moved, hence the infinite loop.\n# We also reset `currentStudent` for each row instead of accumulating.\nwhile tr.findNext('tr') is not None:\n tr = tr.findNext('tr')\n currentStudent = []\n for child in tr.children:\n currentStudent.append(child.text)\n studentsTable.append(currentStudent)\n \nstudentsTable\n\nprint(htmlContent.prettify())"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/ai-platform-samples
|
notebooks/samples/tables/retail_product_stockout_prediction/retail_product_stockout_prediction.ipynb
|
apache-2.0
|
[
"# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0 \n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Retail Product Stockouts Prediction using AutoML Tables\n<table align=\"left\">\n <td>\n <a href=\"https://colab.sandbox.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/main/notebooks/samples/tables/retail_product_stockout_prediction/retail_product_stockout_prediction.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/tables/automl/notebooks/retail_product_stockout_prediction/retail_product_stockout_prediction.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n\nOverview\nAutoML Tables enables you to build machine learning models based on tables of your own data and host them on Google Cloud for scalability. This Notebook demonstrates how you can use AutoML Tables to solve a product stockouts problem in the retail industry. This problem is solved using a binary classification approach, which predicts whether a particular product at a certain store will be out-of-stock or not in the next four weeks. Once the solution is built, you can plug this in with your production system and proactively predict stock-outs for your business.\nDataset\nIn this solution, you will use two datasets: Training/Evaluation data and Batch Prediction inputs. 
To access the datasets in BigQuery, you need the following information.\nTraining/Evaluation dataset\n\nProject ID: product-stockout\nDataset ID: product_stockout\nTable ID: stockout\n\nBatch Prediction inputs\n\nProject ID: product-stockout\nDataset ID: product_stockout\nTable ID: batch_prediction_inputs\n\nData Schema\n<table align=\"left\">\n <thead>\n <tr>\n <th> Field name </th>\n <th> Datatype </th>\n <th> Type </th>\n <th> Description </th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <td>Item_Number</td>\n <td>STRING</td>\n <td>Identifier</td>\n <td>This is the product/ item identifier</td>\n </tr>\n <tr>\n <td>Category</td>\n <td>STRING</td>\n <td>Identifier</td>\n <td>Several items could belong to one category</td>\n</tr>\n<tr>\n <td>Vendor_Number</td>\n <td>STRING</td>\n <td>Identifier</td>\n <td>Product vendor identifier</td>\n</tr>\n<tr>\n <td>Store_Number</td>\n <td>STRING</td>\n <td>Identifier</td>\n <td>Store identifier</td>\n</tr>\n<tr>\n <td>Item_Description</td>\n <td>STRING</td>\n <td>Text Features</td>\n <td>Item Description</td>\n</tr>\n<tr>\n <td>Category_Name</td>\n <td>STRING</td>\n <td>Text Features</td>\n <td>Category Name</td>\n</tr>\n<tr>\n <td>Vendor_Name</td>\n <td>STRING</td>\n <td>Text Features</td>\n <td>Vendor Name</td>\n</tr>\n<tr>\n <td>Store_Name</td>\n <td>STRING</td>\n <td>Text Features</td>\n <td>Store Name</td>\n</tr>\n<tr>\n <td>Address</td>\n <td>STRING</td>\n <td>Text Features</td>\n <td>Address</td>\n</tr>\n<tr>\n <td>City</td>\n <td>STRING</td>\n <td>Categorical Features</td>\n <td>City</td>\n</tr>\n<tr>\n <td>Zip_Code</td>\n <td>STRING</td>\n <td>Categorical Features</td>\n <td>Zip-code</td>\n</tr>\n<tr>\n <td>Store_Location</td>\n <td>STRING</td>\n <td>Categorical Features</td>\n <td>Store Location</td>\n</tr>\n<tr>\n <td>County_Number</td>\n <td>STRING</td>\n <td>Categorical Features</td>\n <td>County Number</td>\n</tr>\n<tr>\n <td>County</td>\n <td>STRING</td>\n <td>Categorical Features</td>\n <td>County 
Name</td>\n</tr>\n<tr>\n <td>Weekly Sales Quantity</td>\n <td>INTEGER</td>\n <td>Time series data</td>\n <td>52 columns for weekly sales quantity from week 1 to week 52</td>\n</tr>\n<tr>\n <td>Weekly Sales Dollars</td>\n <td>INTEGER</td>\n <td>Time series data</td>\n <td>52 columns for weekly sales dollars from week 1 to week 52</td>\n</tr>\n<tr>\n <td>Inventory</td>\n <td>FLOAT</td>\n <td>Numeric Feature</td>\n <td>This inventory is stocked by the retailer looking at past sales and seasonality of the product to meet demand for future sales.</td>\n</tr>\n<tr>\n <td>Stockout</td>\n <td>INTEGER</td>\n <td>Label</td>\n <td>(1 - Stock-out, 0 - No stock-out) When the demand for four weeks future sales is not met by the inventory in stock we say we see a stock-out.\n <br/>This is because an early warning sign would help the retailer re-stock inventory with a lead time for the stock to be replenished.</td>\n</tr>\n </tbody>\n</table>\n<br>\nTo use AutoML Tables with BigQuery you do not need to download this dataset. However, if you would like to use AutoML Tables with GCS you may want to download this dataset and upload it into your GCP Project storage bucket. \nInstructions to download dataset:\n\n\nSample Dataset: Download this dataset which contains sales data.\n\n\nLink to training data: \nDataset URI: <bq://product-stockout.product_stockout.stockout>\n * Link to data for batch predictions: \nDataset URI: <bq://product-stockout.product_stockout.batch_prediction_inputs>\n\n\n\n\nUpload this dataset to GCS or BigQuery (optional). \n\n\nYou could select either GCS or BigQuery as the location of your choice to store the data for this challenge. \n\nStoring data on GCS: Creating storage buckets, Uploading data to storage buckets\nStoring data on BigQuery: Create and load data to BigQuery (optional)\n\n\n\n\n\nObjective\nProblem statement\nA stockout, or out-of-stock (OOS) event is an event that causes inventory to be exhausted. 
While out-of-stocks can occur along the entire supply chain, the most visible kind are retail out-of-stocks in the fast-moving consumer goods industry (e.g., sweets, diapers, fruits). Stockouts are the opposite of overstocks, where too much inventory is retained.\nImpact\nAccording to a study by researchers Thomas Gruen and Daniel Corsten, the global average level of out-of-stocks within retail fast-moving consumer goods sector across developed economies was 8.3% in 2002. This means that shoppers would have a 42% chance of fulfilling a ten-item shopping list without encountering a stockout. Despite the initiatives designed to improve the collaboration of retailers and their suppliers, such as Efficient Consumer Response (ECR), and despite the increasing use of new technologies such as radio-frequency identification (RFID) and point-of-sale data analytics, this situation has improved little over the past decades.\nThe biggest impacts being\n\nCustomer dissatisfaction\nLoss of revenue\n\nMachine Learning Solution\nUsing machine learning to solve for stock-outs can help with store operations and thus prevent out-of-stock proactively.\nThere are three big challenges any retailer would face as they try and solve this problem with machine learning:\n\nData silos: Sales data, supply-chain data, inventory data, etc. may all be in silos. Such disjoint datasets could be a challenge to work with as a machine learning model tries to derive insights from all these data points.\nMissing Features: Features such as vendor location, weather conditions, etc. could add a lot of value to a machine learning algorithm to learn from. But such features are not always available and when building machine learning solutions we think for collecting features as an iterative approach to improving the machine learning model.\nImbalanced dataset: Datasets for classification problems such as retail stock-out are traditionally very imbalanced with fewer cases for stock-out. 
Designing machine learning solutions by hand for such problems would be a time-consuming effort when your team should be focusing on collecting features.\n\nHence, we recommend using AutoML Tables. With AutoML Tables you only need to work on acquiring all data and features, and AutoML Tables does the rest. This is a one-click approach to solving the problem of stock-out with machine learning.\nCosts\nThis tutorial uses billable components of Google Cloud Platform (GCP):\n\nCloud AI Platform\nCloud Storage\nBigQuery\nAutoML Tables\n\nLearn about Cloud AI Platform pricing, Cloud Storage pricing, BigQuery pricing, AutoML Tables pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.\nSet up your local development environment\nIf you are using Colab or AI Platform Notebooks, your environment already meets\nall the requirements to run this notebook. If you are using AI Platform Notebooks, make sure the machine configuration type is 1 vCPU, 3.75 GB RAM or above. You can skip this step.\nOtherwise, make sure your environment meets this notebook's requirements.\nYou need the following:\n\nThe Google Cloud SDK\nGit\nPython 3\nvirtualenv\nJupyter notebook running in a virtual environment with Python 3\n\nThe Google Cloud guide to Setting up a Python development\nenvironment and the Jupyter\ninstallation guide provide detailed instructions\nfor meeting these requirements. The following steps provide a condensed set of\ninstructions:\n\n\nInstall and initialize the Cloud SDK.\n\n\nInstall Python 3.\n\n\nInstall\n  virtualenv\n  and create a virtual environment that uses Python 3.\n\n\nActivate that environment and run pip install jupyter in a shell to install\n  Jupyter.\n\n\nRun jupyter notebook in a shell to launch Jupyter.\n\n\nOpen this notebook in the Jupyter Notebook Dashboard.\n\n\nSet up your GCP project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a GCP project. 
When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the AI Platform APIs and Compute Engine APIs.\n\n\nEnable the AutoML API.\n\n\nPIP Install Packages and dependencies\nInstall additional dependencies not installed in the Notebook environment",
"! pip install --upgrade --quiet --user google-cloud-automl\n! pip install matplotlib",
"Note: Try installing with sudo if the above command throws any permission errors.\nRestart the kernel to allow automl_v1beta1 to be imported for Jupyter Notebooks.",
"from IPython.core.display import HTML\nHTML(\"<script>Jupyter.notebook.kernel.restart()</script>\")",
"Set up your GCP Project Id\nEnter your Project Id in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.",
"PROJECT_ID = \"[your-project-id]\" #@param {type:\"string\"}\nCOMPUTE_REGION = \"us-central1\" # Currently only supported region.",
"Authenticate your GCP account\nIf you are using AI Platform Notebooks, your environment is already\nauthenticated. Skip this step.\nOtherwise, follow these steps:\n\n\nIn the GCP Console, go to the Create service account key\n page.\n\n\nFrom the Service account drop-down list, select New service account.\n\n\nIn the Service account name field, enter a name.\n\n\nFrom the Role drop-down list, select\n AutoML > AutoML Admin,\n Storage > Storage Object Admin and BigQuery > BigQuery Admin.\n\n\nClick Create. A JSON file that contains your key downloads to your\nlocal environment.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.",
"# Upload the downloaded JSON file that contains your key.\nimport sys\n\nif 'google.colab' in sys.modules: \n from google.colab import files\n keyfile_upload = files.upload()\n keyfile = list(keyfile_upload.keys())[0]\n %env GOOGLE_APPLICATION_CREDENTIALS $keyfile\n ! gcloud auth activate-service-account --key-file $keyfile",
"If you are running the notebook locally, enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell",
"# If you are running this notebook locally, replace the string below with the\n# path to your service account key and run this cell to authenticate your GCP\n# account.\n\n%env GOOGLE_APPLICATION_CREDENTIALS /path/to/service/account\n! gcloud auth activate-service-account --key-file '/path/to/service/account'",
"Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you submit a training job using the Cloud SDK, you upload a Python package\ncontaining your training code to a Cloud Storage bucket. AI Platform runs\nthe code from this package. In this tutorial, AI Platform also saves the\ntrained model that results from your job in the same bucket. You can then\ncreate an AI Platform model version based on this output in order to serve\nonline predictions.\nSet the name of your Cloud Storage bucket below. It must be unique across all\nCloud Storage buckets. \nYou may also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Make sure to choose a region where Cloud\nAI Platform services are\navailable. You may\nnot use a Multi-Regional Storage bucket for training with AI Platform.",
"BUCKET_NAME = \"[your-bucket-name]\" #@param {type:\"string\"}",
"Only if your bucket doesn't exist: Run the following cell to create your Cloud Storage bucket. Make sure the Storage > Storage Admin role is enabled.",
"! gsutil mb -p $PROJECT_ID -l $COMPUTE_REGION gs://$BUCKET_NAME",
"Finally, validate access to your Cloud Storage bucket by examining its contents:",
"! gsutil ls -al gs://$BUCKET_NAME",
"Import libraries and define constants\nImport relevant packages.",
"from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\n# AutoML library.\nfrom google.cloud import automl_v1beta1 as automl\nimport google.cloud.automl_v1beta1.proto.data_types_pb2 as data_types\nimport matplotlib.pyplot as plt",
"Populate the following cell with the necessary constants and run it to initialize constants.",
"#@title Constants { vertical-output: true }\n\n# A name for the AutoML tables Dataset to create.\nDATASET_DISPLAY_NAME = 'stockout_data' #@param {type: 'string'}\n# The BigQuery Dataset URI to import data from.\nBQ_INPUT_URI = 'bq://product-stockout.product_stockout.stockout' #@param {type: 'string'}\n# A name for the AutoML tables model to create.\nMODEL_DISPLAY_NAME = 'stockout_model' #@param {type: 'string'}\n\nassert all([\n PROJECT_ID,\n COMPUTE_REGION,\n DATASET_DISPLAY_NAME,\n BQ_INPUT_URI,\n MODEL_DISPLAY_NAME,\n])",
"Initialize the client for AutoML and AutoML Tables.",
"# Initialize the clients.\nautoml_client = automl.AutoMlClient()\ntables_client = automl.TablesClient(project=PROJECT_ID, region=COMPUTE_REGION)",
"Test the setup\nTo test whether your project setup and authentication steps were successful, run the following cell to list the datasets in this project.\nIf no dataset has previously been imported into AutoML Tables, you should expect an empty result.",
"# List the datasets.\nlist_datasets = tables_client.list_datasets()\ndatasets = { dataset.display_name: dataset.name for dataset in list_datasets }\ndatasets",
"You can also print the list of your models by running the following cell.\nIf no model has previously been trained using AutoML Tables, you should expect an empty result.",
"# List the models.\nlist_models = tables_client.list_models()\nmodels = { model.display_name: model.name for model in list_models }\nmodels",
"Import training data\nCreate dataset\nSelect a dataset display name and pass your table source information to create a new dataset.",
"# Create dataset.\ndataset = tables_client.create_dataset(DATASET_DISPLAY_NAME)\ndataset_name = dataset.name\ndataset",
"Import data\nYou can import your data to AutoML Tables from GCS or BigQuery. For this solution, you will import data from a BigQuery Table. The URI for your table is in the format of bq://PROJECT_ID.DATASET_ID.TABLE_ID.\nThe BigQuery Table used for demonstration purposes can be accessed as bq://product-stockout.product_stockout.stockout.\nSee the table schema and dataset description from the README.",
"# Import data.\nimport_data_response = tables_client.import_data(\n dataset=dataset,\n bigquery_input_uri=BQ_INPUT_URI,\n)\nprint('Dataset import operation: {}'.format(import_data_response.operation))\n\n# Synchronous check of operation status. Wait until import is done.\nprint('Dataset import response: {}'.format(import_data_response.result()))\n\n# Verify the status by checking the example_count field.\ndataset = tables_client.get_dataset(dataset_name=dataset_name)\ndataset",
"Importing this stockout dataset takes about 10 minutes.\nIf you re-visit this Notebook, uncomment the following cell and run the command to retrieve your dataset. Replace YOUR_DATASET_NAME with its actual value obtained in the preceding cells.\nYOUR_DATASET_NAME is a string in the format of 'projects/<project_id>/locations/<location>/datasets/<dataset_id>'.",
"# dataset_name = '<YOUR_DATASET_NAME>' #@param {type: 'string'}\n# dataset = tables_client.get_dataset(dataset_name=dataset_name)",
"Review the specs\nRun the following command to see table specs such as row count.",
"# List table specs.\nlist_table_specs_response = tables_client.list_table_specs(dataset=dataset)\ntable_specs = [s for s in list_table_specs_response]\n\n# List column specs.\nlist_column_specs_response = tables_client.list_column_specs(dataset=dataset)\ncolumn_specs = {s.display_name: s for s in list_column_specs_response}\n\n# Print Features and data_type.\nfeatures = [(key, data_types.TypeCode.Name(value.data_type.type_code))\n for key, value in column_specs.items()]\nprint('Feature list:\\n')\nfor feature in features:\n print(feature[0],':', feature[1])\n\n# Table schema pie chart.\ntype_counts = {}\nfor column_spec in column_specs.values():\n type_name = data_types.TypeCode.Name(column_spec.data_type.type_code)\n type_counts[type_name] = type_counts.get(type_name, 0) + 1\n \nplt.pie(x=type_counts.values(), labels=type_counts.keys(), autopct='%1.1f%%')\nplt.axis('equal')\nplt.show()",
"In the pie chart above, you see this dataset contains three variable types: FLOAT64 (treated as Numeric), CATEGORY (treated as Categorical) and STRING (treated as Text). \nUpdate dataset: assign a label column and enable nullable columns\nGet column specs\nAutoML Tables automatically detects your data column type.\nThere are a total of 120 columns in this stockout dataset.\nRun the following command to check the column data types that were automatically detected. If a column contains only numerical values but represents categories, change that column's data type to categorical by updating your schema.\nIn addition, AutoML Tables detects Stockout to be categorical, so it will train a classification model.",
"# Print column data types.\nfor column in column_specs:\n print(column, '-', column_specs[column].data_type)",
"Update columns: make categorical\nFrom the column data type, you noticed Item_Number, Category, Vendor_Number, Store_Number, Zip_Code and County_Number have been autodetected as FLOAT64 (Numerical) instead of CATEGORY (Categorical). \nIn this solution, the columns Item_Number, Category, Vendor_Number and Store_Number are not nullable, but Zip_Code and County_Number can take null values.\nTo change the data type, you can update the schema by updating the column spec.",
"type_code='CATEGORY' #@param {type:'string'}\n\n# Update dataset.\ncategorical_column_names = ['Item_Number', 'Category', 'Vendor_Number', \n 'Store_Number', 'Zip_Code', 'County_Number']\n\nis_nullable = [False, False, False, False, True, True] \n\nfor i in range(len(categorical_column_names)):\n column_name = categorical_column_names[i]\n nullable = is_nullable[i]\n tables_client.update_column_spec(\n dataset=dataset,\n column_spec_display_name=column_name,\n type_code=type_code,\n nullable=nullable,\n )",
"Update dataset: Assign a label\nSelect the target column and update the dataset.",
"#@title Update dataset { vertical-output: true }\n\ntarget_column_name = 'Stockout' #@param {type: 'string'}\nupdate_dataset_response = tables_client.set_target_column(\n dataset=dataset,\n column_spec_display_name=target_column_name,\n)\nupdate_dataset_response",
"Creating a model\nTrain a model\nTraining the model may take one hour or more. To obtain results with less training time or budget, you can set train_budget_milli_node_hours, the training budget for creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour.\nFor demonstration purposes, the following command sets the budget to 1 node hour ('train_budget_milli_node_hours': 1000). You can increase that number up to a maximum of 72 hours ('train_budget_milli_node_hours': 72000) for the best model performance.\nEven with a budget of 1 node hour (the minimum possible budget), training a model can take more than the specified node hours.\nYou can also select the objective to optimize your model training by setting optimization_objective. This solution optimizes the model by maximizing the Area Under the Precision-Recall (PR) Curve.",
"# The number of hours to train the model.\nmodel_train_hours = 1 #@param {type:'integer'}\n# Set optimization objective to train a model.\nmodel_optimization_objective = 'MAXIMIZE_AU_PRC' #@param {type:'string'}\n\ncreate_model_response = tables_client.create_model(\n MODEL_DISPLAY_NAME,\n dataset=dataset,\n train_budget_milli_node_hours=model_train_hours*1000,\n optimization_objective=model_optimization_objective,\n)\noperation_id = create_model_response.operation.name\n\nprint('Create model operation: {}'.format(create_model_response.operation))\n\n# Wait until model training is done.\nmodel = create_model_response.result()\nmodel_name = model.name\nmodel",
"If your Colab times out, use tables_client.list_models() to check whether your model has been created.\nThen uncomment the following cell and run the command to retrieve your model. Replace YOUR_MODEL_NAME with its actual value obtained in the preceding cell.\nYOUR_MODEL_NAME is a string in the format of 'projects/<project_id>/locations/<location>/models/<model_id>'",
"#model_name = '<YOUR_MODEL_NAME>' #@param {type: 'string'}\n# model = tables_client.get_model(model_name=model_name)",
"Batch prediction\nInitialize prediction\nYour data source for batch prediction can be GCS or BigQuery. For this solution, you will use a BigQuery Table as the input source. The URI for your table is in the format of bq://PROJECT_ID.DATASET_ID.TABLE_ID.\nTo write out the predictions, you need to specify a GCS bucket gs://BUCKET_NAME.\nAutoML Tables logs errors in the errors.csv file.\nNOTE: The batch prediction output file(s) will be uploaded to the GCS bucket that you set in the preceding cells.",
"#@title Start batch prediction { vertical-output: true, output-height: 200 }\nbatch_predict_bq_input_uri = 'bq://product-stockout.product_stockout.batch_prediction_inputs' #@param {type:'string'}\nbatch_predict_gcs_output_uri_prefix = 'gs://{}'.format(BUCKET_NAME) #@param {type:'string'}\n\nbatch_predict_response = tables_client.batch_predict(\n model_name=model_name, \n bigquery_input_uri=batch_predict_bq_input_uri,\n gcs_output_uri_prefix=batch_predict_gcs_output_uri_prefix,\n)\nprint('Batch prediction operation: {}'.format(batch_predict_response.operation))\n\n# Wait until batch prediction is done.\nbatch_predict_result = batch_predict_response.result()\nbatch_predict_response.metadata\n\n# Check prediction results.\ngcs_output_directory = batch_predict_response.metadata.batch_predict_details\\\n .output_info.gcs_output_directory\nresult_file = gcs_output_directory + 'tables_1.csv'\nprint('Batch prediction results are stored as: {}'.format(result_file))",
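Each row of the result file carries one score column per target class. As a minimal, self-contained sketch of turning those scores into hard labels (the SKU column and the exact score column names here are hypothetical; check the header of your own tables_1.csv), using only the standard library:

```python
import csv
import io

# Hypothetical miniature of a tables_1.csv result file: one score column
# per class of the Stockout label (real column names may differ).
sample = io.StringIO(
    "SKU,Stockout_0_score,Stockout_1_score\n"
    "A123,0.91,0.09\n"
    "B456,0.20,0.80\n"
)

predictions = {}
for row in csv.DictReader(sample):
    score_1 = float(row["Stockout_1_score"])
    # Threshold at 0.5: predict a stock-out when the class-1 score dominates.
    predictions[row["SKU"]] = 1 if score_1 >= 0.5 else 0

print(predictions)  # {'A123': 0, 'B456': 1}
```

In practice you would first copy the file locally (e.g. `gsutil cp` from the `gcs_output_directory` printed above) and open it instead of the in-memory sample.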
"Cleaning up\nTo clean up all GCP resources used in this project, you can delete the GCP\nproject you used for the tutorial.",
"# Delete model resource.\ntables_client.delete_model(model_name=model_name)\n\n# Delete dataset resource.\ntables_client.delete_dataset(dataset_name=dataset_name)\n\n# Delete Cloud Storage objects that were created.\n! gsutil -m rm -r gs://$BUCKET_NAME\n\n# If training model is still running, cancel it.\nautoml_client.transport._operations_client.cancel_operation(operation_id)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AlJohri/DAT-DC-12
|
notebooks/intro-numpy.ipynb
|
mit
|
[
"Introduction to NumPy\nForked from Lecture 2 of Scientific Python Lectures by J.R. Johansson",
"%matplotlib inline\n\nimport traceback\nimport matplotlib.pyplot as plt\n\nimport numpy as np",
"Why NumPy?",
"%%time\n\ntotal = 0\nfor i in range(100000):\n total += i\n\n%%time\n\ntotal = np.arange(100000).sum()\n\n%%time \n\nl = list(range(0, 1000000))\nltimes5 = [x * 5 for x in l]\n\n%%time \nl = np.arange(1000000)\nltimes5 = l * 5",
"Introduction\nThe numpy package (module) is used in almost all numerical computation using Python. It is a package that provides high-performance vector, matrix and higher-dimensional data structures for Python. It is implemented in C and Fortran, so when calculations are vectorized (formulated with vectors and matrices), performance is very good. \nTo use numpy you need to import the module, using for example:",
"import numpy as np",
"In the numpy package the terminology used for vectors, matrices and higher-dimensional data sets is array. \nCreating numpy arrays\nThere are a number of ways to initialize new numpy arrays, for example from\n\na Python list or tuples\nusing functions that are dedicated to generating numpy arrays, such as arange, linspace, etc.\nreading data from files\n\nFrom lists\nFor example, to create new vector and matrix arrays from Python lists we can use the numpy.array function.",
"# a vector: the argument to the array function is a Python list\nv = np.array([1,2,3,4])\n\nv\n\n# a matrix: the argument to the array function is a nested Python list\nM = np.array([[1, 2], [3, 4]])\n\nM",
"The v and M objects are both of the type ndarray that the numpy module provides.",
"type(v), type(M)",
"The difference between the v and M arrays is only their shapes. We can get information about the shape of an array by using the ndarray.shape property.",
"v.shape\n\nM.shape",
"The number of elements in the array is available through the ndarray.size property:",
"M.size",
"Equivalently, we could use the functions numpy.shape and numpy.size",
"np.shape(M)\n\nnp.size(M)",
"So far the numpy.ndarray looks awfully much like a Python list (or nested list). Why not simply use Python lists for computations instead of creating a new array type? \nThere are several reasons:\n\nPython lists are very general. They can contain any kind of object. They are dynamically typed. They do not support mathematical functions such as matrix and dot multiplications, etc. Implementing such functions for Python lists would not be very efficient because of the dynamic typing.\nNumpy arrays are statically typed and homogeneous. The type of the elements is determined when the array is created.\nNumpy arrays are memory efficient.\nBecause of the static typing, fast implementations of mathematical functions such as multiplication and addition of numpy arrays can be written in a compiled language (C and Fortran are used).\n\nUsing the dtype (data type) property of an ndarray, we can see what type the data of an array has:",
"M.dtype",
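The memory-efficiency claim above is easy to verify; a small sketch comparing a plain Python list of ints with the equivalent ndarray (the dtype is pinned to int64 so the byte count is platform-independent):

```python
import sys

import numpy as np

lst = list(range(1000))
arr = np.arange(1000, dtype=np.int64)

# The list stores pointers to full Python int objects.
list_bytes = sys.getsizeof(lst) + sum(sys.getsizeof(x) for x in lst)
# The ndarray stores the raw 8-byte integers contiguously.
arr_bytes = arr.nbytes

print(list_bytes, arr_bytes)  # the list needs several times more memory
```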
"We get an error if we try to assign a value of the wrong type to an element in a numpy array:",
"try:\n M[0,0] = \"hello\"\nexcept ValueError as e:\n print(traceback.format_exc())",
"If we want, we can explicitly define the type of the array data when we create it, using the dtype keyword argument:",
"M = np.array([[1, 2], [3, 4]], dtype=complex)\n\nM",
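The dtype also fixes the storage size of each element; a quick sketch using explicit bit widths (itemsize reports bytes per element):

```python
import numpy as np

a16 = np.array([1, 2, 3], dtype=np.int16)        # 16-bit integers
f64 = np.array([1.0, 2.0, 3.0], dtype=np.float64)  # 64-bit floats

print(a16.itemsize, f64.itemsize)  # 2 and 8 bytes per element
```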
"Common data types that can be used with dtype are: int, float, complex, bool, object, etc.\nWe can also explicitly define the bit size of the data types, for example: int64, int16, float128, complex128.\nUsing array-generating functions\nFor larger arrays it is impractical to initialize the data manually using explicit Python lists. Instead we can use one of the many functions in numpy that generate arrays of different forms. Some of the more common are:\narange",
"# create a range\n\nx = np.arange(0, 10, 1) # arguments: start, stop, step\n\nx\n\nx = np.arange(-1, 1, 0.1)\n\nx",
"linspace and logspace",
"# using linspace, both end points ARE included\nnp.linspace(0, 10, 25)\n\nnp.logspace(0, 10, 10, base=np.e)",
"mgrid",
"x, y = np.mgrid[0:5, 0:5] # similar to meshgrid in MATLAB\n\nx\n\ny",
"random data",
"# uniform random numbers in [0,1]\nnp.random.rand(5,5)\n\n# standard normal distributed random numbers\nnp.random.randn(5,5)",
"diag",
"# a diagonal matrix\nnp.diag([1,2,3])\n\n# diagonal with offset from the main diagonal\nnp.diag([1,2,3], k=1) ",
"zeros and ones",
"np.zeros((3,3))\n\nnp.ones((3,3))",
"File I/O\nComma-separated values (CSV)\nA very common file format for data files is comma-separated values (CSV), or related formats such as TSV (tab-separated values). To read data from such files into Numpy arrays we can use the numpy.genfromtxt function. For example,",
"!head ../data/stockholm_td_adj.dat\n\ndata = np.genfromtxt('../data/stockholm_td_adj.dat')\n\ndata.shape\n\nfig, ax = plt.subplots(figsize=(14,4))\nax.plot(data[:,0]+data[:,1]/12.0+data[:,2]/365, data[:,5])\nax.axis('tight')\nax.set_title('temperatures in Stockholm')\nax.set_xlabel('year')\nax.set_ylabel('temperature (C)');",
"Using numpy.savetxt we can store a Numpy array to a file in CSV format:",
"M = np.random.rand(3,3)\n\nM\n\nnp.savetxt(\"../data/random-matrix.csv\", M)\n\n!cat ../data/random-matrix.csv\n\nnp.savetxt(\"../data/random-matrix.csv\", M, fmt='%.5f') # fmt specifies the format\n\n!cat ../data/random-matrix.csv",
"Numpy's native file format\nUseful when storing and reading back numpy array data. Use the functions numpy.save and numpy.load:",
"np.save(\"../data/random-matrix.npy\", M)\n\n!file ../data/random-matrix.npy\n\nnp.load(\"../data/random-matrix.npy\")",
"More properties of the numpy arrays",
"M.itemsize # bytes per element\n\nM.nbytes # number of bytes\n\nM.ndim # number of dimensions",
"Manipulating arrays\nIndexing\nWe can index elements in an array using square brackets and indices:",
"# v is a vector, and has only one dimension, taking one index\nv[0]\n\n# M is a matrix, or a 2 dimensional array, taking two indices \nM[1,1]",
"If we omit an index of a multidimensional array it returns the whole row (or, in general, an N-1 dimensional array)",
"M\n\nM[1]",
"The same thing can be achieved by using : instead of an index:",
"M[1,:] # row 1\n\nM[:,1] # column 1",
"We can assign new values to elements in an array using indexing:",
"M[0,0] = 1\n\nM\n\n# also works for rows and columns\nM[1,:] = 0\nM[:,2] = -1\n\nM",
"Index slicing\nIndex slicing is the technical name for the syntax M[lower:upper:step] to extract part of an array:",
"A = np.array([1,2,3,4,5])\nA\n\nA[1:3]",
"Array slices are views of the underlying data: if they are assigned a new value, the original array from which the slice was extracted is modified:",
"A[1:3] = [-2,-3]\n\nA",
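When an independent slice is needed, it can be detached from the original array with an explicit copy; a quick sketch:

```python
import numpy as np

A = np.array([1, 2, 3, 4, 5])
s = A[1:3].copy()  # copy() detaches the slice from A's data
s[:] = 0           # modify the copy only

print(s)  # [0 0]
print(A)  # A is unchanged: [1 2 3 4 5]
```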
"We can omit any of the three parameters in M[lower:upper:step]:",
"A[::] # lower, upper, step all take the default values\n\nA[::2] # step is 2, lower and upper defaults to the beginning and end of the array\n\nA[:3] # first three elements\n\nA[3:] # elements from index 3",
"Negative indices count from the end of the array (positive indices from the beginning):",
"A = np.array([1,2,3,4,5])\n\nA[-1] # the last element in the array\n\nA[-3:] # the last three elements",
"Index slicing works exactly the same way for multidimensional arrays:",
"A = np.array([[n+m*10 for n in range(5)] for m in range(5)])\n\nA\n\n# a block from the original array\nA[1:4, 1:4]\n\n# strides\nA[::2, ::2]",
"Fancy indexing\nFancy indexing is the name for when an array or list is used in place of an index:",
"row_indices = [1, 2, 3]\nA[row_indices]\n\ncol_indices = [1, 2, -1] # remember, index -1 means the last element\nA[row_indices, col_indices]",
"We can also use index masks: if the index mask is a Numpy array of data type bool, then an element is selected (True) or not (False) depending on the value of the index mask at the position of each element:",
"B = np.array([n for n in range(5)])\nB\n\nrow_mask = np.array([True, False, True, False, False])\nB[row_mask]\n\n# same thing\nrow_mask = np.array([1,0,1,0,0], dtype=bool)\nB[row_mask]",
"This feature is very useful to conditionally select elements from an array, using for example comparison operators:",
"x = np.arange(0, 10, 0.5)\nx\n\nmask = (5 < x) * (x < 7.5)\n\nmask\n\nx[mask]",
"Functions for extracting data from arrays and creating arrays\nwhere\nThe index mask can be converted to a position index using the where function",
"indices = np.where(mask)\n\nindices\n\nx[indices] # this indexing is equivalent to the fancy indexing x[mask]",
"diag\nWith the diag function we can also extract the diagonal and subdiagonals of an array:",
"np.diag(A)\n\nnp.diag(A, -1)",
"take\nThe take function is similar to fancy indexing described above:",
"v2 = np.arange(-3,3)\nv2\n\nrow_indices = [1, 3, 5]\nv2[row_indices] # fancy indexing\n\nv2.take(row_indices)",
"But take also works on lists and other objects:",
"np.take([-3, -2, -1, 0, 1, 2], row_indices)",
"choose\nConstructs an array by picking elements from several arrays:",
"which = [1, 0, 1, 0]\nchoices = [[-2,-2,-2,-2], [5,5,5,5]]\n\nnp.choose(which, choices)",
"Linear algebra\nVectorizing code is the key to writing efficient numerical calculations with Python/Numpy. That means that as much as possible of a program should be formulated in terms of matrix and vector operations, like matrix-matrix multiplication.\nScalar-array operations\nWe can use the usual arithmetic operators to multiply, add, subtract, and divide arrays with scalar numbers.",
"v1 = np.arange(0, 5)\n\nv1 * 2\n\nv1 + 2\n\nA * 2, A + 2",
"Element-wise array-array operations\nWhen we add, subtract, multiply and divide arrays with each other, the default behaviour is element-wise operations:",
"A * A # element-wise multiplication\n\nv1 * v1",
"If we multiply arrays with compatible shapes, we get an element-wise multiplication of each row:",
"A.shape, v1.shape\n\nA * v1",
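Broadcasting works along the other axis too, once the vector is given an explicit column shape; a short sketch (A and v1 are redefined here so the example is self-contained):

```python
import numpy as np

A = np.array([[n + m * 10 for n in range(5)] for m in range(5)])
v1 = np.arange(0, 5)

by_row = A * v1                 # shape (5,) stretches across each row
by_col = A * v1[:, np.newaxis]  # shape (5,1) stretches across each column

print(by_row[1])  # row 1 of A scaled element-wise by v1
print(by_col[1])  # row 1 of A scaled uniformly by v1[1]
```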
"Matrix algebra\nWhat about matrix multiplication? There are two ways. We can either use the dot function, which applies a matrix-matrix, matrix-vector, or inner vector multiplication to its two arguments:",
"np.dot(A, A)",
"Python 3.5 introduced the @ operator for infix matrix multiplication.",
"A @ A\n\nnp.dot(A, v1)\n\nnp.dot(v1, v1)",
"Alternatively, we can cast the array objects to the type matrix. This changes the behavior of the standard arithmetic operators +, -, * to use matrix algebra.",
"M = np.matrix(A)\nv = np.matrix(v1).T # make it a column vector\n\nv\n\nM * M\n\nM * v\n\n# inner product\nv.T * v\n\n# with matrix objects, standard matrix algebra applies\nv + M*v",
"If we try to add, subtract or multiply objects with incompatible shapes we get an error:",
"v = np.matrix([1,2,3,4,5,6]).T\n\nM.shape, v.shape\n\nimport traceback\n\ntry:\n M * v\nexcept ValueError as e:\n print(traceback.format_exc())",
"See also the related functions: inner, outer, cross, kron, tensordot. Try for example help(np.kron).\nArray/Matrix transformations\nAbove we have used .T to transpose the matrix object v. We could also have used the transpose function to accomplish the same thing. \nOther mathematical functions that transform matrix objects are:",
"C = np.matrix([[1j, 2j], [3j, 4j]])\nC\n\nnp.conjugate(C)",
"Hermitian conjugate: transpose + conjugate",
"C.H",
"We can extract the real and imaginary parts of complex-valued arrays using real and imag:",
"np.real(C) # same as: C.real\n\nnp.imag(C) # same as: C.imag",
"Or the complex argument and absolute value",
"np.angle(C+1) # heads up MATLAB Users, angle is used instead of arg\n\nabs(C)",
"Matrix computations\nInverse",
"np.linalg.inv(C) # equivalent to C.I \n\nC.I * C",
"Determinant",
"np.linalg.det(C)\n\nnp.linalg.det(C.I)",
"Data processing\nOften it is useful to store datasets in Numpy arrays. Numpy provides a number of functions to calculate statistics of datasets in arrays. \nFor example, let's calculate some properties from the Stockholm temperature dataset used above.",
"# reminder, the temperature dataset is stored in the data variable:\nnp.shape(data)",
"mean",
"# the temperature data is in column 3\nnp.mean(data[:,3])",
"The daily mean temperature in Stockholm over the last 200 years has been about 6.2 C.\nstandard deviations and variance",
"np.std(data[:,3]), np.var(data[:,3])",
"min and max",
"# lowest daily average temperature\ndata[:,3].min()\n\n# highest daily average temperature\ndata[:,3].max()",
"sum, prod, and trace",
"d = np.arange(0, 10)\nd\n\n# sum up all elements\nnp.sum(d)\n\n# product of all elements\nnp.prod(d+1)\n\n# cumulative sum\nnp.cumsum(d)\n\n# cumulative product\nnp.cumprod(d+1)\n\n# same as: diag(A).sum()\nnp.trace(A)",
"Computations on subsets of arrays\nWe can compute with subsets of the data in an array using indexing, fancy indexing, and the other methods of extracting data from an array (described above).\nFor example, let's go back to the temperature dataset:",
"!head -n 3 ../data/stockholm_td_adj.dat",
"The data format is: year, month, day, daily average temperature, low, high, location.\nIf we are interested in the average temperature only in a particular month, say February, then we can create an index mask and use it to select only the data for that month:",
"np.unique(data[:,1]) # the month column takes values from 1 to 12\n\nmask_feb = data[:,1] == 2\n\n# the temperature data is in column 3\nnp.mean(data[mask_feb,3])",
"With these tools we have very powerful data processing capabilities at our disposal. For example, to extract the average monthly average temperatures for each month of the year only takes a few lines of code:",
"months = np.arange(1,13)\nmonthly_mean = [np.mean(data[data[:,1] == month, 3]) for month in months]\n\nfig, ax = plt.subplots()\nax.bar(months, monthly_mean)\nax.set_xlabel(\"Month\")\nax.set_ylabel(\"Monthly avg. temp.\");",
"Calculations with higher-dimensional data\nWhen functions such as min, max, etc. are applied to multidimensional arrays, it is sometimes useful to apply the calculation to the entire array, and sometimes only on a row or column basis. Using the axis argument we can specify how these functions should behave:",
"m = np.random.rand(3,3)\nm\n\n# global max\nm.max()\n\n# max in each column\nm.max(axis=0)\n\n# max in each row\nm.max(axis=1)",
"Many other functions and methods in the array and matrix classes accept the same (optional) axis keyword argument.\nReshaping, resizing and stacking arrays\nThe shape of a Numpy array can be modified without copying the underlying data, which makes it a fast operation even for large arrays.",
"A\n\nn, m = A.shape\n\nB = A.reshape((1,n*m))\nB\n\nB[0,0:5] = 5 # modify the array\n\nB\n\nA # and the original variable is also changed. B is only a different view of the same data",
"We can also use the function flatten to make a higher-dimensional array into a vector. But this function creates a copy of the data.",
"B = A.flatten()\n\nB\n\nB[0:5] = 10\n\nB\n\nA # now A has not changed, because B's data is a copy of A's, not a reference to the same data",
"Adding a new dimension: newaxis\nWith newaxis, we can insert new dimensions in an array, for example converting a vector to a column or row matrix:",
"v = np.array([1,2,3])\n\nv.shape\n\n# make a column matrix of the vector v\nv[:, np.newaxis]\n\n# column matrix\nv[:, np.newaxis].shape\n\n# row matrix\nv[np.newaxis, :].shape",
"Stacking and repeating arrays\nUsing the functions repeat, tile, vstack, hstack, and concatenate we can create larger vectors and matrices from smaller ones:\ntile and repeat",
"a = np.array([[1, 2], [3, 4]])\n\n# repeat each element 3 times\nnp.repeat(a, 3)\n\n# tile the matrix 3 times \nnp.tile(a, 3)",
"concatenate",
"b = np.array([[5, 6]])\n\nnp.concatenate((a, b), axis=0)\n\nnp.concatenate((a, b.T), axis=1)",
"hstack and vstack",
"np.vstack((a,b))\n\nnp.hstack((a,b.T))",
"Copy and \"deep copy\"\nTo achieve high performance, assignments in Python usually do not copy the underlying objects. This is important for example when objects are passed between functions, to avoid an excessive amount of memory copying when it is not necessary (technical term: pass by reference).",
"A = np.array([[1, 2], [3, 4]])\n\nA\n\n# now B is referring to the same array data as A \nB = A \n\n# changing B affects A\nB[0,0] = 10\n\nB\n\nA",
"If we want to avoid this behavior, so that when we get a new completely independent object B copied from A, then we need to do a so-called \"deep copy\" using the function copy:",
"B = np.copy(A)\n\n# now, if we modify B, A is not affected\nB[0,0] = -5\n\nB\n\nA",
"Iterating over array elements\nGenerally, we want to avoid iterating over the elements of arrays whenever we can (at all costs). The reason is that in an interpreted language like Python (or MATLAB/R), iterations are really slow compared to vectorized operations. \nHowever, sometimes iterations are unavoidable. For such cases, the Python for loop is the most convenient way to iterate over an array:",
"v = np.array([1,2,3,4])\n\nfor element in v:\n print(element)\n\nM = np.array([[1,2], [3,4]])\n\nfor row in M:\n print(\"row\", row)\n \n for element in row:\n print(element)",
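The claim above, that explicit Python loops are far slower than vectorized operations, is easy to check with a minimal timing sketch (exact numbers depend on your machine):

```python
import time
import numpy as np

v = np.arange(1_000_000, dtype=np.float64)

# element-by-element Python loop
t0 = time.perf_counter()
loop_total = 0.0
for element in v:
    loop_total += element
t_loop = time.perf_counter() - t0

# vectorized equivalent
t0 = time.perf_counter()
vec_total = v.sum()
t_vec = time.perf_counter() - t0

print(np.isclose(loop_total, vec_total))
print(f"loop: {t_loop:.4f}s  vectorized: {t_vec:.5f}s")
```

On a typical machine the vectorized `v.sum()` is two to three orders of magnitude faster than the Python loop.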
"When we need to iterate over each element of an array and modify its elements, it is convenient to use the enumerate function to obtain both the element and its index in the for loop:",
"for row_idx, row in enumerate(M):\n print(\"row_idx\", row_idx, \"row\", row)\n \n for col_idx, element in enumerate(row):\n print(\"col_idx\", col_idx, \"element\", element)\n \n # update the matrix M: square each element\n M[row_idx, col_idx] = element ** 2\n\n# each element in M is now squared\nM",
"Vectorizing functions\nAs mentioned several times by now, to get good performance we should try to avoid looping over elements in our vectors and matrices, and instead use vectorized algorithms. The first step in converting a scalar algorithm to a vectorized algorithm is to make sure that the functions we write work with vector inputs.",
"import traceback\n\ndef theta(x):\n \"\"\"\n Scalar implementation of the Heaviside step function.\n \"\"\"\n if x >= 0:\n return 1\n else:\n return 0\n\ntry:\n theta(np.array([-3,-2,-1,0,1,2,3]))\nexcept Exception as e:\n print(traceback.format_exc())",
"OK, that didn't work because we didn't write the Theta function so that it can handle a vector input... \nTo get a vectorized version of Theta we can use the Numpy function vectorize. In many cases it can automatically vectorize a function:",
"theta_vec = np.vectorize(theta)\n\n%time theta_vec(np.array([-3,-2,-1,0,1,2,3]))",
"We can also implement the function to accept a vector input from the beginning (requires more effort but might give better performance):",
"def theta(x):\n \"\"\"\n Vector-aware implementation of the Heaviside step function.\n \"\"\"\n return 1 * (x >= 0)\n\n%time theta(np.array([-3,-2,-1,0,1,2,3]))\n\n# still works for scalars as well\ntheta(-1.2), theta(2.6)",
"Using arrays in conditions\nWhen using arrays in conditions, for example in if statements and other boolean expressions, one needs to use any or all, which check whether any or all elements in the array evaluate to True:",
"M\n\nif (M > 5).any():\n print(\"at least one element in M is larger than 5\")\nelse:\n print(\"no element in M is larger than 5\")\n\nif (M > 5).all():\n print(\"all elements in M are larger than 5\")\nelse:\n print(\"not all elements in M are larger than 5\")",
"Type casting\nSince NumPy arrays are statically typed, the type of an array does not change once created. But we can explicitly cast an array of some type to another using the astype method (see also the similar asarray function). This always creates a new array of the new type:",
"M.dtype\n\nM2 = M.astype(float)\n\nM2\n\nM2.dtype\n\nM3 = M.astype(bool)\n\nM3",
"Further reading\n\nhttp://numpy.scipy.org - Official Numpy Documentation\nhttp://scipy.org/Tentative_NumPy_Tutorial - Official Numpy Quickstart Tutorial (highly recommended)\nhttp://www.scipy-lectures.org/intro/numpy/index.html - Scipy Lectures: Lecture 1.3\n\nVersions",
"%reload_ext version_information\n%version_information numpy"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tjwei/HackNTU_Data_2017
|
Week11/DIY_AI/FeedForward-Backpropagation.ipynb
|
mit
|
[
"import numpy as np\n\n%run magic.ipynb",
"Chain Rule\nConsider $F = f(\\mathbf{a},\\mathbf{g}(\\mathbf{b},\\mathbf{h}(\\mathbf{c}, \\mathbf{i})))$\nwhere $\\mathbf{a},\\mathbf{b},\\mathbf{c}$ are the weights and $\\mathbf{i}$ is the input.\nFrom the point of view of $\\mathbf{g}$, in order to update its weights we want to compute\n$\\frac{\\partial F}{\\partial b_i}$\nWhat do we need? The chain rule tells us\n$\\frac{\\partial F}{\\partial b_i} =\n\\sum_j \\frac{\\partial F}{\\partial g_j}\\frac{\\partial g_j}{\\partial b_i}$\nor, written in Jacobian form,\n$\\frac{\\partial F}{\\partial \\mathbf{b}} =\n\\frac{\\partial F}{\\partial \\mathbf{g}} \\frac{\\partial \\mathbf{g}}{\\partial \\mathbf{b}}$\nSo we would like the layer in front of us to pass us $\\frac{\\partial F}{\\partial \\mathbf{g}}$.\nBy the same token, since $\\mathbf{h}$ also needs to compute $\\frac{\\partial F}{\\partial \\mathbf{c}}$, we are in turn responsible for passing $\\frac{\\partial F}{\\partial \\mathbf{h}}$ down to it. And because\n$\\frac{\\partial F}{\\partial \\mathbf{h}}=\n\\frac{\\partial F}{\\partial \\mathbf{g}} \\frac{\\partial \\mathbf{g}}{\\partial \\mathbf{h}}$\nthe only things $\\mathbf{g}$ actually has to compute in the middle are $\\frac{\\partial \\mathbf{g}}{\\partial \\mathbf{h}}$ and $\\frac{\\partial \\mathbf{g}}{\\partial \\mathbf{b}}$.\nGradient descent\nThe loss function\nOur loss function is still the cross entropy.\nIf the true class corresponding to the input $x$ is $y$, we define the loss as\n$ loss = -\\log(q_y)=- \\log(Predict(Y=y|x)) $\nor, more generally,\n$ loss = - p \\cdot \\log q $\nwhere $ p_i = \\Pr(Y=i|x) $ is the probability of the true outcome.\nFor a feedforward neural network with one hidden layer,\n$ L= loss = -p \\cdot \\log \\sigma(C(f(Ax+b))+d) $\nSince\n$-\\log \\sigma (Z) = 1 \\log (\\sum e^{Z_j})-Z$\n$\\frac{\\partial -\\log \\sigma (Z)}{\\partial Z} = 1 \\sigma(Z)^T - \\delta$\nlet $U = f(Ax+b) $, $Z=CU+d$\n$ \\frac{\\partial L}{\\partial d} = \\frac{\\partial L}{\\partial Z} \\frac{\\partial CU+d}{\\partial d}\n= \\frac{\\partial L}{\\partial Z}\n= p^T (1 \\sigma(Z)^T - \\delta)\n= \\sigma(Z)^T - p^T\n= \\sigma(CU+d)^T - p^T\n$\n$ \\frac{\\partial L}{\\partial C_{i,j} }\n= \\frac{\\partial L}{\\partial Z} \\frac{\\partial CU+d}{\\partial C_{i,j}} \n= (p^T (1 \\sigma(Z)^T - \\delta))_i U_j \n= (\\sigma(Z) - p)_i U_j\n$\nso\n$ \\frac{\\partial L}{\\partial C }\n= (\\sigma(Z) - p) U^T\n$\nUp to this point everything matches the original softmax results.\nContinuing with the partial derivatives with respect to A and b:\n$ \\frac{\\partial L}{\\partial U }\n= \\frac{\\partial L}{\\partial Z} \\frac{\\partial CU+d}{\\partial U} \n= (p^T (1 \\sigma(Z)^T - \\delta)) C\n= (\\sigma(Z) - p)^T C\n$\n$ \\frac{\\partial U_k}{\\partial b_i} \n= \\frac{\\partial f(A_kx+b_k)}{\\partial b_i}\n= \\delta_{k,i} f'(Ax+b)_i $\n$ \\frac{\\partial L}{\\partial b_i } \n= ((\\sigma(Z) - p)^T C)_i f'(Ax+b)_i$\n$ \\frac{\\partial L}{\\partial A_{i,j} } \n= ((\\sigma(Z) - p)^T C)_i f'(Ax+b)_i x_j$\nTask: first try the brute-force approach, plugging in the formulas differentiated above\n\nBring back the earlier softmax, relu, and sigmoid functions\nWork out the derivatives of relu and sigmoid\nTry the mod 3 problem\nRandomly initialize A, b, C, d (you can try different hidden-layer dimensions)\nLook at the loss\nPick an input x\nCompute the gradient\nSubtract the gradient\nCheck whether the loss decreased",
"# Reference solution: the various functions and their derivatives\n%run -i solutions/ff_funcs.py\n\n# Reference solution: computing the loss\n%run -i solutions/ff_compute_loss2.py",
"$ \\frac{\\partial L}{\\partial d} = \\sigma(CU+d)^T - p^T$\n$ \\frac{\\partial L}{\\partial C } = (\\sigma(Z) - p) U^T$\n$ \\frac{\\partial L}{\\partial b_i } \n= ((\\sigma(Z) - p)^T C)_i f'(Ax+b)_i$\n$ \\frac{\\partial L}{\\partial A_{i,j} } \n= ((\\sigma(Z) - p)^T C)_i f'(Ax+b)_i x_j$",
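Before trusting these formulas in a training loop, it is worth sanity-checking them numerically. The sketch below is a standalone NumPy example with hypothetical layer sizes (not the notebook's solutions/ scripts); it compares the analytic gradient dL/dd = softmax(CU+d) - p against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# hypothetical small network: 4 inputs, 5 hidden units, 3 classes
A = rng.normal(size=(5, 4)); b = rng.normal(size=5)
C = rng.normal(size=(3, 5)); d = rng.normal(size=3)
x = rng.normal(size=4)
p = np.eye(3)[1]                      # one-hot truth, y = 1

def loss(d_vec):
    U = relu(A @ x + b)
    q = softmax(C @ U + d_vec)
    return -(p * np.log(q)).sum()

# analytic gradient from the formula: dL/dd = softmax(CU+d) - p
U = relu(A @ x + b)
analytic = softmax(C @ U + d) - p

# central finite differences
eps = 1e-6
numeric = np.zeros_like(d)
for i in range(d.size):
    dp, dm = d.copy(), d.copy()
    dp[i] += eps; dm[i] -= eps
    numeric[i] = (loss(dp) - loss(dm)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-5))
```

The same finite-difference trick works for A, b, and C, and is a cheap way to catch sign or transpose mistakes in hand-derived gradients.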
"# Compute the gradient\n%run -i solutions/ff_compute_gradient.py\n\n# Update the weights and compute the new loss\n%run -i solutions/ff_update.py",
"Exercise: run 20000 random training updates",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\n# Reference solution\n%run -i solutions/ff_train_mod3.py\nplt.plot(L_history);\n\n# Test the training result\nfor i in range(16):\n x = Vector(i%2, (i>>1)%2, (i>>2)%2, (i>>3)%2)\n y = i%3\n U = relu(A@x+b)\n q = softmax(C@U+d)\n print(q.argmax(), y) ",
"Exercise: detecting a tic-tac-toe win",
"def truth(x):\n x = x.reshape(3,3)\n return int(x.all(axis=0).any() or\n x.all(axis=1).any() or\n x.diagonal().all() or\n x[::-1].diagonal().all())\n\n%run -i solutions/ff_train_ttt.py\nplt.plot(accuracy_history);"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
infilect/ml-course1
|
keras-notebooks/FCNN/3.1 Hidden Layer Representation and Embeddings.ipynb
|
mit
|
[
"Fully Connected Feed-Forward Network\nIn this notebook we will play with Feed-Forward FC-NN (Fully Connected Neural Network) for a classification task: \nImage Classification on MNIST Dataset\nRECALL\nIn the FC-NN, the output of each layer is computed using the activations from the previous one, as follows:\n$$h_{i} = \\sigma(W_i h_{i-1} + b_i)$$\nwhere ${h}_i$ is the activation vector from the $i$-th layer (or the input data for $i=0$), ${W}_i$ and ${b}_i$ are the weight matrix and the bias vector for the $i$-th layer, respectively. \n<br>\n$\\sigma(\\cdot)$ is the activation function. In our example, we will use the ReLU activation function for the hidden layers and softmax for the last layer.\nTo regularize the model, we will also insert a Dropout layer between consecutive hidden layers. \nDropout works by “dropping out” some unit activations in a given layer, that is setting them to zero with a given probability.\nOur loss function will be the categorical crossentropy.\nModel definition\nKeras supports two different kinds of models: the Sequential model and the Graph model. The former is used to build linear stacks of layers (so each layer has one input and one output), and the latter supports any kind of connection graph.\nIn our case we build a Sequential model with three Dense (aka fully connected) layers, with some Dropout. Notice that the output layer has the softmax activation function. \nThe resulting model is actually a function of its own inputs implemented using the Keras backend. \nWe apply the categorical crossentropy loss and choose SGD as the optimizer. \nPlease note that Keras supports a variety of different optimizers and loss functions, which you may want to check out.",
"import numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline",
"Introducing ReLU\nThe ReLU function is defined as $f(x) = \\max(0, x),$ [1]\nA smooth approximation to the rectifier is the analytic function: $f(x) = \\ln(1 + e^x)$\nwhich is called the softplus function.\nThe derivative of softplus is $f'(x) = e^x / (e^x + 1) = 1 / (1 + e^{-x})$, i.e. the logistic function.\n[1] http://www.cs.toronto.edu/~fritz/absps/reluICML.pdf by G. E. Hinton \nNote: Keep in mind this function as it is heavily used in CNN",
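We can verify numerically that the derivative of softplus is indeed the logistic function; a small self-contained sketch:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)          # f(x) = max(0, x)

def softplus(x):
    return np.log1p(np.exp(x))         # smooth approximation ln(1 + e^x)

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-4, 4, 9)
eps = 1e-6
numeric_deriv = (softplus(x + eps) - softplus(x - eps)) / (2 * eps)

print(np.allclose(numeric_deriv, logistic(x), atol=1e-4))
```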
"from keras.models import Sequential\nfrom keras.layers.core import Dense\nfrom keras.optimizers import SGD\n\nnb_classes = 10\n\n# FC@512+relu -> FC@512+relu -> FC@nb_classes+softmax\n# ... your Code Here\n\n# %load ../solutions/sol_321.py\n\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense\nfrom keras.optimizers import SGD\n\nmodel = Sequential()\nmodel.add(Dense(512, activation='relu', input_shape=(784,)))\nmodel.add(Dense(512, activation='relu'))\nmodel.add(Dense(10, activation='softmax'))\n\nmodel.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.001), \n metrics=['accuracy'])",
"Data preparation (keras.dataset)\nWe will train our model on the MNIST dataset, which consists of 60,000 28x28 grayscale images of the 10 digits, along with a test set of 10,000 images. \n\nSince this dataset is provided with Keras, we just ask the keras.dataset module for training and test data.\nWe will:\n\ndownload the data\nreshape data to be in vectorial form (original data are images)\nnormalize between 0 and 1.\n\nThe categorical_crossentropy loss expects a one-hot-vector as input, therefore we apply the to_categorical function from keras.utils to convert integer labels to one-hot-vectors.",
"from keras.datasets import mnist\nfrom keras.utils import np_utils\n\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\n\nX_train.shape\n\nX_train = X_train.reshape(60000, 784)\nX_test = X_test.reshape(10000, 784)\nX_train = X_train.astype(\"float32\")\nX_test = X_test.astype(\"float32\")\n\n# Put everything on grayscale\nX_train /= 255\nX_test /= 255\n\n# convert class vectors to binary class matrices\nY_train = np_utils.to_categorical(y_train, 10)\nY_test = np_utils.to_categorical(y_test, 10)",
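As an aside, to_categorical on integer labels is equivalent to indexing an identity matrix, a handy NumPy one-liner (shown with a made-up label vector):

```python
import numpy as np

y = np.array([3, 0, 9])
one_hot = np.eye(10)[y]   # row i is all zeros except a 1 at position y[i]

print(one_hot[0])  # → [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
```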
"Split Training and Validation Data",
"from sklearn.model_selection import train_test_split\n\nX_train, X_val, Y_train, Y_val = train_test_split(X_train, Y_train)\n\nX_train[0].shape\n\nplt.imshow(X_train[0].reshape(28, 28))\n\nprint(np.asarray(range(10)))\nprint(Y_train[0].astype('int'))\n\nplt.imshow(X_val[0].reshape(28, 28))\n\nprint(np.asarray(range(10)))\nprint(Y_val[0].astype('int'))",
"Training\nHaving defined and compiled the model, it can be trained using the fit function. We also specify a validation dataset to monitor validation loss and accuracy.",
"network_history = model.fit(X_train, Y_train, batch_size=128, \n epochs=2, verbose=1, validation_data=(X_val, Y_val))",
"Plotting Network Performance Trend\nThe return value of the fit function is a keras.callbacks.History object which contains the entire history of training/validation loss and accuracy, for each epoch. We can therefore plot the behaviour of loss and accuracy during the training phase.",
"import matplotlib.pyplot as plt\n%matplotlib inline\n\ndef plot_history(network_history):\n plt.figure()\n plt.xlabel('Epochs')\n plt.ylabel('Loss')\n plt.plot(network_history.history['loss'])\n plt.plot(network_history.history['val_loss'])\n plt.legend(['Training', 'Validation'])\n\n plt.figure()\n plt.xlabel('Epochs')\n plt.ylabel('Accuracy')\n plt.plot(network_history.history['acc'])\n plt.plot(network_history.history['val_acc'])\n plt.legend(['Training', 'Validation'], loc='lower right')\n plt.show()\n\nplot_history(network_history)",
"After 2 epochs, we get a ~88% validation accuracy. \n\nIf you increase the number of epochs, you will get definitely better results.\n\nQuick Exercise:\nTry increasing the number of epochs (if your hardware allows it)",
"# Your code here\nmodel.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.001), \n metrics=['accuracy'])\nnetwork_history = model.fit(X_train, Y_train, batch_size=128, \n epochs=2, verbose=1, validation_data=(X_val, Y_val))",
"Introducing the Dropout Layer\nA dropout layer has the very specific function of dropping out a random set of activations in that layer by setting them to zero in the forward pass. Simple as that. \nIt helps avoid overfitting, but has to be used only at training time and not at test time. \n```python\nkeras.layers.core.Dropout(rate, noise_shape=None, seed=None)\n```\nApplies Dropout to the input.\nDropout consists in randomly setting a fraction rate of input units to 0 at each update during training time, which helps prevent overfitting.\nArguments\n\nrate: float between 0 and 1. Fraction of the input units to drop.\nnoise_shape: 1D integer tensor representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape (batch_size, timesteps, features) and you want the dropout mask to be the same for all timesteps, you can use noise_shape=(batch_size, 1, features).\nseed: A Python integer to use as random seed.\n\nNote: Keras automatically guarantees that this layer is not used in the Inference (i.e. Prediction) phase\n(thus only used in training, as it should be!)\nSee the keras.backend.in_train_phase function",
"from keras.layers.core import Dropout\n\n## Pls note **where** the `K.in_train_phase` is actually called!!\nDropout??\n\nfrom keras import backend as K\n\nK.in_train_phase?",
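The mechanics are easy to reproduce outside Keras. Below is a minimal NumPy sketch of inverted dropout, the variant Keras uses: survivors are rescaled by 1/(1 - rate) at training time so that inference needs no scaling. The rate and activation shapes here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)
rate = 0.2                 # fraction of units to drop
h = np.ones((4, 5))        # stand-in for a layer's activations

# training phase: zero units with probability `rate`,
# rescale survivors so the expected activation is unchanged
mask = rng.random(h.shape) >= rate
h_train = h * mask / (1.0 - rate)

# inference phase: dropout is the identity
h_test = h

print(h_train)
```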
"Exercise:\nTry modifying the previous example network by adding a Dropout layer:",
"from keras.layers.core import Dropout\n\n# FC@512+relu -> DropOut(0.2) -> FC@512+relu -> DropOut(0.2) -> FC@nb_classes+softmax\n# ... your Code Here\n\n# %load ../solutions/sol_312.py\n\nnetwork_history = model.fit(X_train, Y_train, batch_size=128, \n epochs=4, verbose=1, validation_data=(X_val, Y_val))\nplot_history(network_history)",
"If you continue training, at some point the validation loss will start to increase: that is when the model starts to overfit. \n\nIt is always necessary to monitor training and validation loss during the training of any kind of Neural Network, either to detect overfitting or to evaluate the behaviour of the model (any clue on how to do it??)",
"# %load solutions/sol23.py\nfrom keras.callbacks import EarlyStopping\n\nearly_stop = EarlyStopping(monitor='val_loss', patience=4, verbose=1)\n\nmodel = Sequential()\nmodel.add(Dense(512, activation='relu', input_shape=(784,)))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(512, activation='relu'))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(10, activation='softmax'))\n\nmodel.compile(loss='categorical_crossentropy', optimizer=SGD(), \n metrics=['accuracy'])\n \nmodel.fit(X_train, Y_train, validation_data = (X_test, Y_test), epochs=100, \n batch_size=128, verbose=True, callbacks=[early_stop]) ",
"Inspecting Layers",
"# We already used `summary`\nmodel.summary()",
"model.layers is iterable",
"print('Model Input Tensors: ', model.input, end='\\n\\n')\nprint('Layers - Network Configuration:', end='\\n\\n')\nfor layer in model.layers:\n print(layer.name, layer.trainable)\n print('Layer Configuration:')\n print(layer.get_config(), end='\\n{}\\n'.format('----'*10))\nprint('Model Output Tensors: ', model.output)",
"Extract hidden layer representation of the given data\nOne simple way to do it is to use the weights of your model to build a new model that's truncated at the layer you want to read. \nThen you can run the .predict(X_batch) method to get the activations for a batch of inputs.",
"model_truncated = Sequential()\nmodel_truncated.add(Dense(512, activation='relu', input_shape=(784,)))\nmodel_truncated.add(Dropout(0.2))\nmodel_truncated.add(Dense(512, activation='relu'))\n\nfor i, layer in enumerate(model_truncated.layers):\n layer.set_weights(model.layers[i].get_weights())\n\nmodel_truncated.compile(loss='categorical_crossentropy', optimizer=SGD(), \n metrics=['accuracy'])\n\n# Check\nnp.all(model_truncated.layers[0].get_weights()[0] == model.layers[0].get_weights()[0])\n\nhidden_features = model_truncated.predict(X_train)\n\nhidden_features.shape\n\nX_train.shape",
"Hint: Alternative Method to get activations\n(Using keras.backend function on Tensors)\npython\ndef get_activations(model, layer, X_batch):\n activations_f = K.function([model.layers[0].input, K.learning_phase()], [layer.output,])\n activations = activations_f((X_batch, False))\n return activations\n\nGenerate the Embedding of Hidden Features",
"from sklearn.manifold import TSNE\n\ntsne = TSNE(n_components=2)\nX_tsne = tsne.fit_transform(hidden_features[:1000]) ## Reduced for computational issues\n\ncolors_map = np.argmax(Y_train, axis=1)\n\nX_tsne.shape\n\nnb_classes\n\nnp.where(colors_map==6)\n\ncolors = np.array([x for x in 'b-g-r-c-m-y-k-purple-coral-lime'.split('-')])\ncolors_map = colors_map[:1000]\nplt.figure(figsize=(10,10))\nfor cl in range(nb_classes):\n indices = np.where(colors_map==cl)\n plt.scatter(X_tsne[indices,0], X_tsne[indices, 1], c=colors[cl], label=cl)\nplt.legend()\nplt.show()",
"Using Bokeh (Interactive Chart)",
"from bokeh.plotting import figure, output_notebook, show\n\noutput_notebook()\n\np = figure(plot_width=600, plot_height=600)\n\ncolors = [x for x in 'blue-green-red-cyan-magenta-yellow-black-purple-coral-lime'.split('-')]\ncolors_map = colors_map[:1000]\nfor cl in range(nb_classes):\n indices = np.where(colors_map==cl)\n p.circle(X_tsne[indices, 0].ravel(), X_tsne[indices, 1].ravel(), size=7, \n color=colors[cl], alpha=0.4, legend=str(cl))\n\n# show the results\np.legend.location = 'bottom_right'\nshow(p)",
"Note: We used default TSNE parameters. Better results can be achieved by tuning TSNE Hyper-parameters\nExercise 1:\nTry with a different algorithm to create the manifold",
"from sklearn.manifold import MDS\n\n## Your code here",
"Exercise 2:\nTry extracting the Hidden features of the First and the Last layer of the model",
"## Your code here\n\n## Try using the `get_activations` function relying on keras backend\ndef get_activations(model, layer, X_batch):\n activations_f = K.function([model.layers[0].input, K.learning_phase()], [layer.output,])\n activations = activations_f((X_batch, False))\n return activations"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dlsun/symbulate
|
docs/common_cards_coins_dice.ipynb
|
mit
|
[
"Symbulate Documentation\nCards, coins, and dice\nMany probabilistic situations involving physical objects like cards, coins, and dice can be specified with BoxModel.\n< Common probability models | Contents | Common discrete distributions >\nBe sure to import Symbulate using the following commands.",
"from symbulate import *\n%matplotlib inline",
"Example. Rolling a fair n-sided die (with n=6).",
"n = 6\ndie = list(range(1, n+1))\nP = BoxModel(die)\nRV(P).sim(10000).plot()",
"Example. Flipping a fair coin twice and recording the results in sequence.",
"P = BoxModel(['H', 'T'], size=2, order_matters=True)\nP.sim(10000).tabulate(normalize=True)",
"Example. Unequally likely outcomes on a colored \"spinner\".",
"P = BoxModel(['orange', 'brown', 'yellow'], probs=[0.5, 0.25, 0.25])\nP.sim(10000).tabulate(normalize = True)",
"DeckOfCards() is a special case of BoxModel for drawing from a standard deck of 52 cards. By default replace=False.\nExample. Simulated hands of 5 cards each.",
"DeckOfCards(size=5).sim(3)",
"< Common probability models | Contents | Common discrete distributions >"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Elucidation/KSP-rocket-hover-controller
|
KSP_pid_tuning.ipynb
|
mit
|
[
"Kerbal Space Program kOS Hover PID planner\nSo we're going to try and write a kOS script for Kerbal Space Program (KSP) to control a single-stage rocket engine to hover at a set altitude. Our intrepid little spaceship consists of a single engine, some landing legs and a parachute in case (when) our controller goes haywire.\n\nTowards this, we're going to implement, you guessed it, a PID controller. As a first step for hovering, we'll have our controller zero out changes in velocity by driving the net acceleration to zero, i.e. targeting $gforce_{goal}=1.0$.\n$$\n\\begin{align}\n gforce &= \\|\\mathbf{a}\\| / g \\Longrightarrow gforce_{measured} = \\|\\mathbf{a_{measured}}\\| / g_{measured} \\\n \\Delta thrott &= Kp(gforce_{goal} - gforce_{measured}) = Kp(1.0 - gforce_{measured}) \\\n thrott &= thrott + \\Delta thrott\n\\end{align}\n$$\nWhere $\\|\\mathbf{a_{measured}}\\|$ is our measured current max acceleration using an onboard accelerometer and $g_{measured}$ is our onboard gravitational acceleration magnitude using our gravioli detector.\nAnd in the running controller loop, we update $thrott = thrott + \\Delta thrott$\nSET thrott to thrott + dthrott.\n\nWhere thrott is the throttle percentage from 0 to 1, i.e. no thrust to full thrust. This has the effect of trying to move gforce to 1.0, matching the gravitational force.\nSetup & Helper functions\nFirst let's set up some helper functions for loading data and plotting.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom numpy import genfromtxt\nfrom matplotlib.font_manager import FontProperties\nfrom pylab import rcParams\nfontP = FontProperties()\nfontP.set_size('small')\n\ndef loadData(filename):\n return genfromtxt(filename, delimiter=' ')\n\ndef plotData(data):\n rcParams['figure.figsize'] = 10, 6 # Set figure size to 10\" wide x 6\" tall\n t = data[:,0]\n altitude = data[:,1]\n verticalspeed = data[:,2]\n acceleration = data[:,3] # magnitude of acceleration\n gforce = data[:,4]\n throttle = data[:,5] * 100\n dthrottle_p = data[:,6] * 100\n dthrottle_d = data[:,7] * 100\n \n \n # Top subplot, position and velocity up to threshold\n plt.subplot(3, 1, 1)\n plt.plot(t, altitude, t, verticalspeed, t, acceleration)\n plt.axhline(y=100,linewidth=1,alpha=0.5,color='r',linestyle='--',label='goal');\n plt.text(max(t)*0.9, 105, 'Goal Altitude', fontsize=8);\n plt.title('Craft Altitude over Time')\n plt.ylabel('Altitude (meters)')\n plt.legend(['Altitude (m)','Vertical Speed (m/s)', 'Acceleration (m/s^2)'], \"best\", prop=fontP, frameon=False)\n \n # Middle subplot, throttle & dthrottle\n plt.subplot(3, 1, 2)\n plt.plot(t, throttle, t, dthrottle_p, t, dthrottle_d)\n plt.legend(['Throttle%','P','D'], \"best\", prop=fontP, frameon=False) # Small font, best location\n plt.ylabel('Throttle %')\n \n # Bottom subplot, gforce\n plt.subplot(3, 1, 3)\n plt.plot(t, gforce)\n plt.axhline(y=1,linewidth=1,alpha=0.5,color='r',linestyle='--',label='goal');\n plt.text(max(t)*0.9, 1.2, 'Goal g-force', fontsize=8);\n plt.legend(['gforce'], \"best\", bbox_to_anchor=(1.0, 1.0), prop=fontP, frameon = False) # Small font, best location\n plt.xlabel('Time (seconds)')\n plt.ylabel('G-force');\n plt.show();",
"G-force Control\nAs a first test, we'll start from the launchpad, thrust at full throttle till we hit $altitude_{goal} > 100\\ \\text{meters}$ and then use a proportional gain of $Kp = 0.05$ to keep $gforce \\sim 1.0$.\nLOCK dthrott_p TO Kp * (1.0 - gforce).\nLOCK dthrott TO dthrott_p.",
"data = loadData('collected_data\\\\gforce.txt')\nplotData(data)",
"Pretty cool! Once it passes 100m altitude the controller starts, the throttle controls for gforce, bringing it oscillating down around 1g. This zeros our acceleration but not our existing velocity, so the position continues to increase. We could add some derivative gain to damp down the gforce overshoot, but it won't solve this problem yet.\nThe kOS script to run this test is located in hover1.ks and called on gforce.txt by RUN hover1(gforce.txt,20).\nVertical Speed Control\nLet's try to get our vertical speed to zero now, causing our position to stay in one spot: $verticalspeed_{goal} = 0$\nInstead of using $\\Delta thrott = Kp * (1.0 - gforce)$ we'll control for vertical speed with \n$$\\Delta thrott = Kp(verticalspeed_{goal} - verticalspeed_{measured}) = Kp(-verticalspeed_{measured})$$\nLOCK dthrott_p TO Kp * (0 - SHIP:VERTICALSPEED).\nLOCK dthrott TO dthrott_p.\n\nAnd use the same proportional gain of $Kp = 0.05$.",
"data = loadData('collected_data\\\\vspeed.txt')\nplotData(data)",
"Awesome! The controller drops the velocity to a stable oscillation around 0 m/s, and the position seems to flatten off, but it isn't perfect. Maybe it's because of the oscillations? In the game I can see the engine spurt on and off rythmically. It seems to try and stay at roughly 0 m/s, but the position is not 100m and it drifts.\nThe kOS script to run this test is located in hover2.ks and called on vspeed.txt by RUN hover2(vspeed.txt,20).\nPosition Control\nInstead of gforce or velocity, lets try controlling for position next.\n$$ \\Delta thrott = K_p(altitude_{goal} - altitude_{current}) $$\nUsing $Kp = 0.05$ and $altitude_{goal} = 100$. \nLOCK dthrott_p TO Kp * (goal_altitude - SHIP:ALTITUDE).\nLOCK dthrott TO dthrott_p.\n\nNote: From now on we start the controller directly from the launch pad (instead of burning till passing 100m).",
"# To run in kOS console: RUN hover3(pos0.txt,20,0.05,0).\ndata = loadData('collected_data\\\\pos0.txt')\nplotData(data)",
"Well, we crashed.\nTurns out an ideal stable oscillation (which proportional-only controllers tend to produce) starting from ground level (around 76m from where the accelerometer is located on the landed craft) would necessarily come back to that point...\nLet's try adding some derivative gain to damp that out; the derivative of altitude is just the vertical speed.\n$$\n\\Delta thrott = K_p(altitude_{goal} - altitude_{current}) + K_d(verticalspeed_{goal} - verticalspeed_{current}) \\\nthrott = thrott + \\Delta thrott\n$$\nLOCK dthrott_p TO Kp * (goal_altitude - SHIP:ALTITUDE).\nLOCK dthrott_d TO Kd * (0 - SHIP:VERTICALSPEED).\nLOCK dthrott TO dthrott_p + dthrott_d.\n\nUsing $Kp = 0.05,\\ \\ Kd = 0.05$, $altitude_{goal} = 100\\ \\text{meters}$ and $verticalspeed_{goal} = 0$",
"# To run in kOS console: RUN hover3(pos1.txt,20,0.05,0).\ndata = loadData('collected_data\\\\pos1.txt')\nplotData(data)",
"Great! The controller burned us up to about 100m and then tried to stay there, but there is quite a lot of bounce; maybe it will improve if we tweak our gains a bit. \nLet's try gains of $Kp = 0.08,\\ \\ Kd = 0.04$.",
"# To run in kOS console: RUN hover3(pos2.txt,20,0.08,0.04).\ndata = loadData('collected_data\\\\pos2.txt')\nplotData(data)",
"Hmm, after trying a few other combinations, it seems like there's a conceptual error here keeping us from getting to a smooth point. \nWe've been trying to build our controller with $thrott = thrott + \\Delta thrott$, which means it takes time to overcome our previous throttle, introducing a lag between our current position/velocity and the goal that shows up as an oscillation.\nThe kOS script used is hover3.ks and these tests are run by calling RUN hover3(posN.txt,20,Kp,Kd). For example for $Kp=0.08$ and $Kd=0.04$ the command is RUN hover3(pos2.txt,20,0.08,0.04), for data pos2.txt.\nHover set-point\nInstead let's figure out how to set our throttle point for hovering, and set $thrott = thrott_{hover} + \\Delta thrott$\nWe know our ship's current max thrust at 100% throttle, which we will call $Thrust_{available}$. What we want to find is our $Throttle_\\%$, which would be \n$$Throttle_\\% = \\frac {Thrust_{desired}}{Thrust_{available}}$$\nWhere $Throttle_\\%$ is between 0 and 1 (0 - 100%). Our $Thrust_{desired}$ is when we have a thrust to weight ratio of 1, matching gravitational acceleration.\n$$\n\\begin{align}\nTWR &= \\frac{F}{mg} = 1 \\\nF_{thrust} &= mg \\\ng &= \\frac{\\mu_{planet}}{(PLANET:RADIUS)^2} \\\nThrust_{desired} &= SHIP:MASS * g\n\\end{align}\n$$\nFinally that gives us\n$$Throttle_\\% = \\frac {SHIP:MASS * \\frac{\\mu_{planet}}{(PLANET:RADIUS)^2}}{Thrust_{available}}$$\nLOCK hover_throttle_level TO MIN(1, MAX(0, SHIP:MASS * g / MAX(0.0001, curr_engine:AVAILABLETHRUST))).\n\nOkay, moment of truth. Let's start small with $Kp = 0.01,\\ \\ Kd = 0.001$ and set $altitude_{goal} = 100\\ \\text{meters}$. Shown with the solid red line across.",
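As a quick sanity check of the set-point formula, here is the same arithmetic in plain Python with hypothetical numbers (a Kerbin-like gravitational parameter and radius, plus an assumed 5 t craft with 200 kN of available thrust; none of these values come from the logs above):

```python
# hypothetical inputs
mu = 3.5316e12                 # m^3/s^2, gravitational parameter
radius = 600_000.0             # m, planet radius
mass = 5_000.0                 # kg, SHIP:MASS
available_thrust = 200_000.0   # N, engine's max thrust

g = mu / radius**2                         # local gravitational acceleration
thrust_desired = mass * g                  # TWR = 1  ->  F = m * g
hover_throttle = min(1.0, max(0.0, thrust_desired / available_thrust))

print(round(g, 2), round(hover_throttle, 5))
```

With these numbers g works out to about 9.81 m/s^2 and the hover set-point to roughly 24.5% throttle; the min/max clamp mirrors the LOCK expression above.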
"# To run in kOS console: RUN hover4(hover0.txt,60,0.01,0.001).\ndata = loadData('collected_data\\\\hover0.txt')\nplotData(data)",
"It's stably oscillating! This is a good sign, showing our hover setpoint is doing it's job, the proportional gain is there, and there's barely any derivative gain.\nLet's bump up the derivative gain to $Kd = 0.01$.",
"# To run in kOS console: RUN hover4(hover1.txt,60,0.01,0.01).\ndata = loadData('collected_data\\\\hover1.txt')\nplotData(data)",
"Woohoo! It overshoots a little but stablizes smoothly at 100m! Great to see this going in the game, looks a bit like the SpaceX grasshopper.\n\nThe kOS script used is hover4.ks and these tests are run by calling RUN hover4(hoverN.txt,20,Kp,Kd).\nSome datasets:\n\nhover0.txt\nhover1.txt\nhover2.txt\nhover3.txt\n\nTweaking gains\nNow let's try and optimize the gains to reduce the rise time.\nWe take a quick shot by increasing both gains to $Kp = 0.1,\\ \\ Kd = 0.1$",
"# To run in kOS console: RUN hover4(hover2.txt,10,0.1,0.1).\ndata = loadData('collected_data\\\\hover2.txt')\nplotData(data)",
"Much faster! What happens if we change the altitude to say 300m?",
"# To run in kOS console: RUN hover5(hover3.txt,10,0.1,0.1,300).\ndata = loadData('collected_data\\\\hover3.txt')\nplotData(data)",
"Hmm some pretty big overshoot, but it does it! Next up is to apply some standardized techniques for finding gain values.\nTo be continued..."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
vanheck/blog-notes
|
SquareMath/2020-04-10-SquareMathLevels-Backtest-example-ZN-1min-30M-128.ipynb
|
mit
|
[
"import sys\nimport datetime\nprint('Aktuální datum:' , datetime.datetime.now().date())\nprint('Python:' , sys.version)\n\nimport numpy as np\nprint('Numpy:', np.__version__)\n\nimport pandas as pd\nprint('Pandas:', pd.__version__)\n\nimport vhat\nprint('VHAT:', vhat.__version__)\n\nimport numba\nprint('Numba:', numba.__version__)\n\nimport matplotlib\nprint('Matplotlib:', matplotlib.__version__)\n\nimport seaborn as sns\nprint('Seaborn:', sns.__version__)\n\n#import bokeh as bk\n#import bokeh.plotting as bkplot\n#print('Bokeh:', bk.__version__)\n#bkplot.output_notebook() # show visualisation inline",
"Backtest SquareMathLevels\nCíl\nOvěření hypotézy, že SquareMath Levels fungují jako S/R úrovně, tzn. trh má tendenci se od nich odrážet.\nOvěření na statistice ZN 1min, SML - 30min SQUARE 16\nPříprava dat\nNastavení pro kalkulaci SquareMath",
"SQUARE = 128\nSQUARE_MULTIPLIER = 1.5\n# how many \nBARS_BACK_TO_REFERENCE = np.int(np.ceil(SQUARE * SQUARE_MULTIPLIER))\n# set higher timeframe for getting SquareMathLevels\nMINUTES = 30 # range 0-59\n\n\n\nPD_RESAMPLE_RULE = f'{MINUTES}Min'\n# set the period of PD_RESAMPLE_RULE will be started. E.g. PD_RESAMPLE_RULE == '30min':\n# PD_GROUPER_BASE = 5, periods will be: 8:05:00, 8:35:00, 9:05:00, etc...\n# PD_GROUPER_BASE = 0, means 8:00:00, 8:30:00, 9:00:00, etc...\nPD_GROUPER_BASE = 0 ",
"Data, která se budou analyzovat",
"TICK_SIZE_STR = f'{1/32*0.5}'\nTICK_SIZE = float(TICK_SIZE_STR)\n#SYMBOL = 'ZN'\nTICK_SIZE_STR\n\nDATA_FILE = '../../Data/ZN-1s.csv'\n\nread_cols = ['Date', 'Time', 'Open', 'High', 'Low', 'Last']\ndata = pd.read_csv(DATA_FILE, index_col=0, skipinitialspace=True, usecols=read_cols, parse_dates={'Datetime': [0, 1]})\ndata.rename(columns={\"Last\": \"Close\"}, inplace=True)\ndata.index.name = 'Datetime'\ndata['Idx'] = np.arange(data.shape[0])\ndf = data\ndf",
"Maximální high a low za posledních BARS_BACK_TO_REFERENCE svíček z vyššího timeframu.\nHigh",
"# calculate max high for actual record from higher tiframe his period\ndf_helper_gr = df[['High']].groupby(pd.Grouper(freq=PD_RESAMPLE_RULE, base=PD_GROUPER_BASE))\ndf_helper = df_helper_gr.rolling(PD_RESAMPLE_RULE, min_periods=1).max().dropna() # cummax() with new index\ndf_helper['bigCumMaxHigh'] = df_helper.assign(l=df_helper_gr.max().dropna().rolling(BARS_BACK_TO_REFERENCE-1).max().shift().loc[df_helper.index.get_level_values(0)].to_numpy()).max(axis=1, skipna=False)\n\ndf_helper.set_index(df_helper.index.get_level_values(1), inplace=True) # drop multiindex\ndf['SMLHighLimit'] = df_helper.bigCumMaxHigh\ndf",
"Low",
"# calculate min low for actual record from higher tiframe his period\ndf_helper_gr = df[['Low']].groupby(pd.Grouper(freq=PD_RESAMPLE_RULE, base=PD_GROUPER_BASE))\ndf_helper = df_helper_gr.rolling(PD_RESAMPLE_RULE, min_periods=1).min().dropna() # cummin() with new index\ndf_helper['bigCumMinLow'] = df_helper.assign(l=df_helper_gr.min().dropna().rolling(BARS_BACK_TO_REFERENCE-1).min().shift().loc[df_helper.index.get_level_values(0)].to_numpy()).min(axis=1, skipna=False)\n\n\ndf_helper.set_index(df_helper.index.get_level_values(1), inplace=True) # drop multiindex\ndf['SMLLowLimit'] = df_helper.bigCumMinLow\ndf",
"Zahození nepotřebných prostředků a záznamů NaN, které nemůžu analyzovat",
"del df_helper\ndel df_helper_gr\ndf.dropna(inplace=True)\ndf",
"Výpočet SMLevels pro každý záznam",
"from vhat.squaremath.funcs import calculate_octave\n\nSML_INDEXES = np.arange(-2, 10+1, dtype=np.int) # from -2/8 to +2/8\n\ndef round_to_tick_size(values, tick_size):\n return np.round(values / tick_size) * tick_size\n\ndef get_smlines(r):\n tick_size = TICK_SIZE\n lowLimit = r.SMLLowLimit\n highLimit = r.SMLHighLimit\n zeroLine, frameSize = calculate_octave(lowLimit, highLimit)\n spread = frameSize * 0.125\n sml = SML_INDEXES * spread + zeroLine\n sml = round_to_tick_size(sml, tick_size)\n return [sml, zeroLine, frameSize, spread]\n\ntemp = df.apply(get_smlines, axis=1, result_type='expand')\ntemp.columns = ['SML', 'zeroLine', 'framSize', 'spread']\ndf = df.join(temp)\ndel temp\ndf",
"Výpočet dotyku SML\nMusím vypočítat dotyk předchozího průrazu kvůli frame-shift.",
"df['prevSML'] = df.SML.shift()\ndf.dropna(inplace=True)\ndf\n\ndf['SMLTouch'] = df.apply(lambda r: np.bitwise_and(r.Low<=r.prevSML, r.prevSML<=r.High), axis=1)\ndf['SMLTouchCount'] = df.SMLTouch.apply(lambda v: sum(v))\ndf\n\nfrom dataclasses import dataclass\nfrom typing import List\n\n@dataclass\nclass Trade:\n tId: int\n # vstupní data, která znám dopředu\n entry_idx: int\n entry_sml_number: int\n entry_sml_spread: float\n entry_price: float\n entry_lots: int # -1 short, 1 long\n profit_target: float\n stop_loss: float\n \n # průběh a vývoj trhu v otevřeném obchodu\n max_running_profit_price: float\n max_running_loss_price: float\n \n # výstupní data, která se vyplní až na konci\n exit_idx: int = -1\n exit_price: float = 0.0\n exit_sml_number: int = 9999\n \n # pokud obchod skončí tak, že nebude možné zjistit výsledek, co bylo realizováno dříve, nastaví se tahle proměnná\n unrecognizable_trade: bool = False\n \n@dataclass\nclass TradeList:\n trades: List[Trade]\n\ndef check_open_trades(v, finished_trades, r):\n # TODO: dodělat indexy aktuální svíce\n trades_to_close = []\n for tid, trade in opened_trades.items():\n \n # průběžné statistiky\n if trade.entry_lots == 0 : raise Exception('Něco jsem dojebal - open trades má entry lots == 0')\n long_trade = trade.entry_lots>0\n if long_trade:\n trade.max_running_profit_price = max(min(r.High, trade.profit_target), trade.max_running_profit_price)\n trade.max_running_loss_price = min(max(r.Low, trade.stop_loss), trade.max_running_loss_price)\n else: # short trade\n trade.max_running_profit_price = min(max(r.Low, trade.profit_target), trade.max_running_profit_price)\n trade.max_running_loss_price = max(min(r.High, trade.stop_loss), trade.max_running_loss_price) \n \n # zasažení PT nebo SL\n hit_pt = True if r.Low<=trade.profit_target<=r.High else False\n hit_sl = True if r.Low<=trade.stop_loss<=r.High else False\n hits = (hit_pt, hit_sl)\n if all(hits):\n # špatný stav - nedokážu přesně určit, zda obchod trefil první SL 
nebo PT\n trade.unrecognizable_trade = True\n trades_to_close.append(tid)\n elif hit_pt:\n trade.exit_idx = r.Idx\n trade.exit_price = trade.profit_target\n trade.exit_sml_number = trade.entry_sml_number+(1 if long_trade else -1)\n trades_to_close.append(tid)\n elif hit_sl:\n trade.exit_idx = r.Idx\n trade.exit_price = trade.stop_loss\n trade.exit_sml_number = trade.entry_sml_number-(1 if long_trade else -1)\n trades_to_close.append(tid)\n \n # Uzavření tradů\n for tid in trades_to_close:\n finished_trades.append(opened_trades[tid])\n del opened_trades[tid]\n\ndef entry_logic(opened_trades, finished_trades, r, prev_r, last_level, tick_size, rr_multiplier=1):\n \n if r.SMLTouchCount !=1:\n # TODO: tohle neni az tak uplne pravda\n # pokud je open pod oběma proraženými levely, je jasné, že levely byly\n # aktivovány v jasném pořadí, ale to asi není až tak důležité.\n return # nejde urcit, co bylo aktivováno dříve\n \n # zjistit, který level je aktivován => musí být z minulých levelů\n price_level_hit = r.prevSML[r.SMLTouch][0]\n \n for trade in opened_trades.values():\n if trade.entry_price == price_level_hit:\n return # zadny obchod nechci otevirat, uz je otevren\n \n newtid = len(opened_trades) + len(finished_trades) + 1\n idx_level_hit = SML_INDEXES[r.SMLTouch][0]\n # otevrit obchod na prorazenem levelu\n if price_level_hit < last_level:\n # dotek z vrchu == long\n lots = 1\n pt = round_to_tick_size(price_level_hit + prev_r.spread * rr_multiplier, tick_size)\n sl = round_to_tick_size(price_level_hit - prev_r.spread, tick_size)\n running_profit_price = r.High\n running_loss_price = r.Low\n else:\n # short\n lots = -1\n pt = round_to_tick_size(price_level_hit - prev_r.spread * rr_multiplier, tick_size)\n sl = round_to_tick_size(price_level_hit + prev_r.spread, tick_size)\n running_profit_price = r.Low\n running_loss_price = r.High\n \n \n new_trade = Trade(newtid, r.Idx, idx_level_hit, prev_r.spread, price_level_hit, lots, pt, sl, running_profit_price, 
running_loss_price)\n opened_trades[newtid] = new_trade\n \n \n \n\nlast_level = None # price of last SML for predicting \nopened_trades = {}\nfinished_trades = []\n\nfor idxdt, r in df.iterrows(): \n if not last_level:\n last_level = r.Close\n prev_r = r\n continue\n \n check_open_trades(opened_trades, finished_trades, r)\n entry_logic(opened_trades, finished_trades, r, prev_r, last_level, TICK_SIZE)\n \n # nastavit poslední vývoj pro kalkulaci v další svíci\n prev_r = r\n if r.SMLTouchCount == 1:\n last_level = r.prevSML[r.SMLTouch][0]\n elif r.SMLTouchCount > 1:\n last_level = r.Close \n\nfrom dataclasses import astuple\nfinished_trades = TradeList(finished_trades)\nopened_trades = TradeList(list(opened_trades.values()))\ncols = ['id', 'entryIdx', 'entrySmLvl', 'entrySmlSpread', 'entryPrice', 'lots', 'pt', 'sl', 'runningProfit', 'runningRisk', 'exitIdx', 'exitPrice', 'exitSmLvl', 'unrecognizableTrade']\nstats_opened = pd.DataFrame(astuple(opened_trades)[0], columns=cols)\nstats = pd.DataFrame(astuple(finished_trades)[0], columns=cols)\nstats",
"Statistika výsledků\nBacktest základní info",
"print('Od:', df.iloc[0].name)\nprint('Do', df.iloc[-1].name)\nprint('Časové období:', df.iloc[-1].name - df.iloc[0].name)\nprint('Počet obchodních dnů:', df.Close.resample('1D').ohlc().shape[0])\nprint('Počet záznamů jemného tf:', df.shape[0])",
"Validita nízkého timeframe pro backtest - možná zasenesená chyba\nZjištění, zda je zvolený SQUARE na vyšším timeframu dostatečný pro backtest na tomto nízkém timeframu. Tzn. pokud mám Square=32 z vyššího timeframe='30min', mohu zjistit jestli jsou záznamy timeframe='1min' vhodné pro backtest.\nPokud by byla vysoká chyba rozlišení nízkého timeframe (např. nad 5%), je třeba pro relevatní výsledky zvolit buď nižší rozlišení pro backtest např. '30s' příp. '1s', nebo zvýšit SQUARE=64 nebo zvýšit vysoký timeframe pro výpočet SML `1h, 2h, 4h, 8h, 1d, ...'.\nPočet průrazů na jednu malou svíčku\nDává informaci o tom, zda je tento malý rámec dostatečný pro výpočet obchodů a může mít vypovídající informaci o chybovosti.",
"touchCounts = df.SMLTouchCount.value_counts().to_frame(name='Occurences')\ntouchCounts['Occ%'] = touchCounts / df.shape[0]*100\nprint(f'Počet protnutích více něž jedné SML v jednom záznamu: v {(df.SMLTouchCount>1).sum()} případech ({(df.SMLTouchCount>1).sum()/df.shape[0]*100:.3f}%) z {df.shape[0]} celkem\\n')\ntouchCounts",
"Velmi nízký SML spread",
"spread_stats = df.spread.value_counts().to_frame(name='Occurences')\nspread_stats['Occ%'] = spread_stats / df.shape[0]*100\nspread_stats['Ticks'] = spread_stats.index / TICK_SIZE # index musím\nprint(f'Počet spredu SML menších než 2 ticky v jednom záznamu: v {(df.spread/TICK_SIZE<2).sum()} případech ({(df.spread/TICK_SIZE<2).sum()/df.shape[0]*100:.3f}%) z {df.shape[0]} celkem\\n')\nspread_stats",
"Výsledná možná chybovost na nízkém TF pro backtest",
"chybovost = df.spread[(df.spread/TICK_SIZE<2) | (df.SMLTouchCount>1)].shape[0]\nprint(f'Celková chybovost v nízkém timeframe může být v {chybovost} případech ({chybovost/df.shape[0]*100:.3f}%) z {df.shape[0]} celkem')",
"Validita výsledků obchodů",
"finishedCount = stats.shape[0]\nprint('Total finished trades:', finishedCount)\n# pokud je opravdu hodně \"unrecognizableTrade\", mám moc nízké rozlišení SquareMath levels (malý square)\nunrec_trades = stats.unrecognizableTrade.sum()\nprint('Unrecognizable trades:', unrec_trades, f'({unrec_trades/finishedCount *100:.3f}%)')\nprint('Opened trades:', stats_opened.shape[0])",
"Dál nebudu potřebovat unrecognized trades",
"stats.drop(stats[stats.unrecognizableTrade].index, inplace=True)\n\nshorts_mask = stats.lots<0\nlongs_mask = stats.lots>0\n\nstats.loc[shorts_mask, 'PnL'] = ((stats[shorts_mask].entryPrice - stats[shorts_mask].exitPrice) / TICK_SIZE).round()\nstats.loc[longs_mask, 'PnL'] = ((stats[longs_mask].exitPrice - stats[longs_mask].entryPrice) / TICK_SIZE).round()\nstats.PnL = stats.PnL.astype(int)\n\nstats['runPTicks'] = ((stats.entryPrice - stats.runningProfit).abs() / TICK_SIZE).round().astype(int)\nstats['runLTicks'] = ((stats.entryPrice - stats.runningRisk).abs() * -1 / TICK_SIZE).round().astype(int)\n\nstats['ptTicks'] = ((stats.entryPrice - stats.pt).abs() / TICK_SIZE).round().astype(int)\nstats['slTicks'] = ((stats.entryPrice - stats.sl).abs() * -1 / TICK_SIZE).round().astype(int)\n\nstats['tradeTime'] = stats.exitIdx - stats.entryIdx\n\nstats",
"Celkové výsledky",
"# masks\nshorts_mask = stats.lots<0\nlongs_mask = stats.lots>0\nprofit_mask = stats.PnL>0\nloss_mask = stats.PnL<0\nbreakeven_mask = stats.PnL==0\n\ntotal_trades = stats.shape[0]\nprofit_trades_count = stats.PnL[profit_mask].shape[0]\nloss_trades_count = stats.PnL[loss_mask].shape[0]\nbreakeven_trades_count = stats.PnL[breakeven_mask].shape[0]\n\nprint(f'Ziskových obchodů {profit_trades_count}({profit_trades_count/total_trades*100:.2f}%) z {total_trades} celkem')\nprint(f'Ztrátových obchodů {loss_trades_count}({loss_trades_count/total_trades*100:.2f}%) z {total_trades} celkem')\nprint(f'Break-even obchodů {breakeven_trades_count}({breakeven_trades_count/total_trades*100:.2f}%) z {total_trades} celkem')\n\nprint('---')\n\nprint(f'Počet Long obchodů = {stats[longs_mask].shape[0]} ({stats[longs_mask].shape[0]/stats.shape[0]*100:.2f}%) z {total_trades} celkem')\nprint(f'Počet Short obchodů = {stats[shorts_mask].shape[0]} ({stats[shorts_mask].shape[0]/stats.shape[0]*100:.2f}%) z {total_trades} celkem')\n\nprint('---')\n\nprint(f'Suma zisků = {stats.PnL[profit_mask].sum()} Ticks')\nprint(f'Suma ztrát = {stats.PnL[loss_mask].sum()} Ticks')\nprint(f'Celkem = {stats.PnL.sum()} Ticks')",
"Ztrátové obchody",
"selected_stats = stats[loss_mask]\nselected_pnl_stats = selected_stats.PnL.value_counts().to_frame(name='PnLOccurences')\nselected_pnl_stats['Occ%'] = selected_pnl_stats / selected_stats.shape[0]*100\nselected_pnl_stats['Ticks'] = selected_pnl_stats.index / TICK_SIZE \nselected_pnl_stats",
"Max pohyb v zisku ve ztrátových obchodech",
"sns.distplot(selected_stats.runPTicks, color=\"g\");",
"Poměrově pohyb v zisku k nastavenému PT u ztrátových obchodů.",
"sns.distplot(selected_stats.runPTicks/selected_stats.ptTicks, color=\"g\");",
"Max pohyb ve ztrátě ve ztrátových obchodech",
"sns.distplot(selected_stats.runLTicks, color=\"r\");\n\nsns.distplot(selected_stats.runLTicks/selected_stats.slTicks, color=\"r\");",
"Ziskové obchody",
"selected_stats = stats[profit_mask]\nselected_pnl_stats = selected_stats.PnL.value_counts().to_frame(name='PnLOccurences')\nselected_pnl_stats['Occ%'] = selected_pnl_stats / selected_stats.shape[0]*100\nselected_pnl_stats['Ticks'] = selected_pnl_stats.index / TICK_SIZE \nselected_pnl_stats",
"PT adjustment ve ziskových obchodech - Max pohyb v zisku",
"sns.distplot(selected_stats.runPTicks, color=\"g\");",
"Poměrově pohyb v zisku k PT u ziskových obchodů.",
"sns.distplot(selected_stats.runPTicks/selected_stats.ptTicks, color=\"g\");",
"Max pohyb ve ztrátě ve ziskových obchodech",
"sns.distplot(selected_stats.runLTicks, color=\"r\");",
"poměr vývoje ztráty k zadanému SL v ziskových obchodech",
"sns.distplot(selected_stats.runLTicks/selected_stats.slTicks, color=\"r\");",
"Long obchody",
"selected_stats = stats[longs_mask]\nprint('Počet obchodů:', selected_stats.shape[0], f'({selected_stats.shape[0]/stats.shape[0]*100:.2f}%) z {stats.shape[0]}')\nprint('Počet win:', selected_stats[selected_stats.PnL>0].shape[0], f'({selected_stats[selected_stats.PnL>0].shape[0]/selected_stats.shape[0]*100:.2f}%) z {selected_stats.shape[0]}')\nprint('Počet loss:', selected_stats[selected_stats.PnL<0].shape[0], f'({selected_stats[selected_stats.PnL<0].shape[0]/selected_stats.shape[0]*100:.2f}%) z {selected_stats.shape[0]}')\nprint('Počet break-even:', selected_stats[selected_stats.PnL==0].shape[0], f'({selected_stats[selected_stats.PnL==0].shape[0]/selected_stats.shape[0]*100:.2f}%) z {selected_stats.shape[0]}')\nprint('---')\nprint(f'Průměrný zisk: {selected_stats.PnL[selected_stats.PnL>0].mean():.3f}')\nprint(f'Průměrná ztráta: {selected_stats.PnL[selected_stats.PnL<0].mean():.3f}')\nprint('---')\nselected_pnl_stats = selected_stats.PnL.value_counts().to_frame(name='PnLOccurences')\nselected_pnl_stats['Occ%'] = selected_pnl_stats / selected_stats.shape[0]*100\nselected_pnl_stats['Ticks'] = selected_pnl_stats.index / TICK_SIZE \nselected_pnl_stats",
"PT adjustment ve ztrátových long obchodech - Max pohyb v zisku",
"sns.distplot(selected_stats[selected_stats.PnL<0].runPTicks, color=\"g\");",
"Poměrově pohyb v zisku k nastavenému PT u ztrátových obchodů.",
"sns.distplot(selected_stats[selected_stats.PnL<0].runPTicks/selected_stats[selected_stats.PnL<0].ptTicks, color=\"g\");",
"SL djustment ve ztrátových obchodech - max pohyb v zisku",
"sns.distplot(selected_stats[selected_stats.PnL<0].runLTicks, color=\"r\");\n\nsns.distplot(selected_stats[selected_stats.PnL<0].runLTicks/selected_stats[selected_stats.PnL<0].slTicks, color=\"r\"); # kontrola",
"PT adjustment v ziskových long obchodech - Max pohyb v zisku",
"sns.distplot(selected_stats[selected_stats.PnL>0].runPTicks, color=\"g\");",
"Poměrově pohyb v zisku k nastavenému PT u ziskových obchodů.",
"sns.distplot(selected_stats[selected_stats.PnL>0].runPTicks/selected_stats[selected_stats.PnL>0].ptTicks, color=\"g\");",
"SL djustment v ziskových obchodech - max pohyb ve ztrátě",
"sns.distplot(selected_stats[selected_stats.PnL>0].runLTicks, color=\"r\");\n\nsns.distplot(selected_stats[selected_stats.PnL>0].runLTicks/selected_stats[selected_stats.PnL>0].slTicks, color=\"r\"); # kontrola",
"SML analýza\nCelkový počet vstupů na jednotlivých SML",
"#smlvl_stats = stats.entrySmLvl.value_counts().to_frame(name='entrySmLvlOcc')\nsmlvl_stats = stats[['entrySmLvl', 'lots']].groupby(['entrySmLvl']).count()\nsmlvl_stats.sort_values(by='lots', ascending=False, inplace=True)\nsmlvl_stats.rename(columns={'lots':'entrySmLvlOcc'}, inplace=True)\nsmlvl_stats['Occ%'] = smlvl_stats.entrySmLvlOcc / stats.shape[0] * 100\nprint(f'Vstup do obchodu z nejčastějších 3 levelů: {smlvl_stats.iloc[:3].index.to_list()} {smlvl_stats[\"Occ%\"].iloc[:3].sum():.2f}%')\nprint(f'Vstup do obchodu z nejčastějších 5 levelů: {smlvl_stats.iloc[:5].index.to_list()} {smlvl_stats[\"Occ%\"].iloc[:5].sum():.2f}%')\nprint('---')\nprint(f'Vstup do obchodu z nejčastějších 7 levelů: {smlvl_stats.iloc[:7].index.to_list()} {smlvl_stats[\"Occ%\"].iloc[:7].sum():.2f}%')\nprint(f'Vstup do obchodu z nejčastějších 9 levelů: {smlvl_stats.iloc[:9].index.to_list()} {smlvl_stats[\"Occ%\"].iloc[:9].sum():.2f}%')\nprint(f'Vstup do obchodu z nejčastějších 11 levelů: {smlvl_stats.iloc[:11].index.to_list()} {smlvl_stats[\"Occ%\"].iloc[:11].sum():.2f}%')\nprint('---')\nsmlvl_stats",
"Vstupy na jednotlivých levelech",
"sns.barplot(x=smlvl_stats.entrySmLvlOcc.sort_index().index, y=smlvl_stats.entrySmLvlOcc.sort_index());",
"Počet vstupů Buy nebo Sell na SML",
"stats.lots.replace({1: 'Long', -1: 'Short'}, inplace=True)\n\nsmlvl_stats_buy_sell = stats[['entrySmLvl', 'PnL', 'lots']].groupby(['entrySmLvl', 'lots']).count()\nsmlvl_stats_buy_sell.sort_index(ascending=False, inplace=True)\nsmlvl_stats_buy_sell.rename(columns={'PnL':'LongShortCount'}, inplace=True)\nsmlvl_stats_buy_sell\nsmlvl_stats_buy_sell['LongShortTotal%'] = smlvl_stats_buy_sell.LongShortCount / smlvl_stats_buy_sell.LongShortCount.sum() *100\nsmlvl_stats_buy_sell['SMLlongOrShort%'] = smlvl_stats_buy_sell[['LongShortCount']].groupby(level=0).apply(lambda x: 100 * x / float(x.sum()))\nsmlvl_stats_buy_sell",
"Úspěšnost Long obchodů na SML",
"stats['Win']=profit_mask\n\nstats['Win'] = stats['Win'].mask(~profit_mask) # groupby bude počítat jen výhry\nsmlvl_stats_buy_sell['WinCount'] = stats[['entrySmLvl', 'PnL', 'lots', 'Win']].groupby(['entrySmLvl', 'lots', 'Win']).count().droplevel(2)\nsmlvl_stats_buy_sell['Win%'] = smlvl_stats_buy_sell.WinCount / smlvl_stats_buy_sell.LongShortCount * 100\nsmlvl_stats_buy_sell",
"Jen pro kontrolu. Win == True, Loss == False",
"# stats['Win'] = profit_mask\n# smlvl_stats_buy_sell2 = stats[['entrySmLvl', 'PnL', 'lots', 'Win']].groupby(['entrySmLvl', 'lots', 'Win']).sum()\n# smlvl_stats_buy_sell2.sort_index(ascending=False, inplace=True)\n# smlvl_stats_buy_sell2.rename(columns={'PnL':'WinLossCount'}, inplace=True)\n# smlvl_stats_buy_sell2",
"Seřazeny výsledky dle úspěsnosti:",
"smlvl_stats_buy_sell.sort_values('Win%', ascending=False)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
radhikapc/foundation-homework
|
homework05/Homework05_Spotify_radhika.ipynb
|
mit
|
[
"import requests\n\n\nLil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')\nLil_data = Lil_response.json()\n#Lil_data\n\n\nLil_data.keys()\n\nLil_data['artists'].keys()\n\nLil_artists = Lil_data['artists']['items']",
"1.Searching and Printing a List of 50 'Lil' Musicians\nWith \"Lil Wayne\" and \"Lil Kim\" there are a lot of \"Lil\" musicians. Do a search and print a list of 50 that are playable in the USA (or the country of your choice), along with their popularity score.",
"#With \"Lil Wayne\" and \"Lil Kim\" there are a lot of \"Lil\" musicians. Do a search and print a list of 50 \n#that are playable in the USA (or the country of your choice), along with their popularity score.\n\ncount =0\nfor artist in Lil_artists:\n count += 1\n print(count,\".\", artist['name'],\"has the popularity of\", artist['popularity'])",
"2 Genres Most Represented in the Search Results\nWhat genres are most represented in the search results? Edit your previous printout to also display a list of their genres in the format \"GENRE_1, GENRE_2, GENRE_3\". If there are no genres, print \"No genres listed\".",
"# What genres are most represented in the search results? Edit your previous printout to also display a list of their genres \n#in the format \"GENRE_1, GENRE_2, GENRE_3\". If there are no genres, print \"No genres listed\".\n#Tip: \"how to join a list Python\" might be a helpful search\n# if len(artist['genres']) == 0 )\n# print (\"no genres\")\n# else:\n# genres = \", \".join(artist['genres'])\n\ngenre_list = []\ngenre_loop = Lil_data['artists']['items']\n\nfor item in genre_loop:\n #print(item['genres'])\n item_gen = item['genres']\n for i in item_gen:\n genre_list.append(i)\n#print(sorted(genre_list))\n\n#COUNTING the most \n\ngenre_counter = {}\nfor word in genre_list:\n if word in genre_counter:\n genre_counter[word] += 1\n else:\n genre_counter[word] = 1\npopular_genre = sorted(genre_counter, key = genre_counter.get, reverse = True)\ntop_genre = popular_genre[:1]\nprint(\"The genre most represented is\", top_genre)\n\n#COUNTING the most with count to confirm\n\nfrom collections import Counter\ncount = Counter(genre_list)\nmost_count = count.most_common(1)\nprint(\"The genre most represented and the count are\", most_count)\n\n\nprint(\"-----------------------------------------------------\")\n \n\n\nfor artist in Lil_artists:\n num_genres = 'no genres listed'\n if len(artist['genres']) > 0:\n num_genres= str.join(',', (artist['genres']))\n print(artist['name'],\"has the popularity of\", artist['popularity'], \", and has\", num_genres, \"under genres\")",
"More Spotify - LIL' GRAPHICS\nUse Excel, Illustrator or something like https://infogr.am/ to make a graphic about the Lil's, or the Lil's vs. the Biggies. \nJust a simple bar graph of their various popularities sounds good to me.\nLink to the Line Graph of Lil's Popularity chart\nLil Popularity Graph",
"Lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')\nLil_data = Lil_response.json()\n#Lil_data",
"The Second Highest Popular Artist\nUse a for loop to determine who BESIDES Lil Wayne has the highest popularity rating. Is it the same artist who has the largest number of followers?",
"#Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating. \n#Is it the same artist who has the largest number of followers?\nname_highest = \"\"\nname_follow =\"\"\nsecond_high_pop = 0\nhighest_pop = 0\nhigh_follow = 0\nfor artist in Lil_artists:\n if (highest_pop < artist['popularity']) & (artist['name'] != \"Lil Wayne\"):\n #second_high_pop = highest_pop\n #name_second = artist['name']\n highest_pop = artist['popularity']\n name_highest = artist['name']\n\n if (high_follow < artist['followers']['total']):\n high_follow = artist ['followers']['total']\n name_follow = artist['name']\n \n #print(artist['followers']['total'])\n\nprint(name_highest, \"has the second highest popularity, which is\", highest_pop)\nprint(name_follow, \"has the highest number of followers:\", high_follow)\n#print(\"the second highest popularity is\", second_high_pop)\n\nLil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')\nLil_data = Lil_response.json()\n#Lil_data",
"4. List of Lil's Popular Than Lil' Kim",
"\nLil_artists = Lil_data['artists']['items']\n#Print a list of Lil's that are more popular than Lil' Kim.\ncount = 0\nfor artist in Lil_artists:\n if artist['popularity'] > 62:\n count+=1\n print(count, artist['name'],\"has the popularity of\", artist['popularity'])\n \n #else:\n #print(artist['name'], \"is less popular with a score of\", artist['popularity'])\n ",
"5.Two Favorite Lils and Their Top Tracks",
"response = requests.get(\"https://api.spotify.com/v1/search?query=Lil&type=artist&limit=2&country=US\")\ndata = response.json()\nfor artist in Lil_artists:\n #print(artist['name'],artist['id'])\n if artist['name'] == \"Lil Wayne\":\n wayne = artist['id']\n print(artist['name'], \"id is\",wayne) \n \n if artist['name'] == \"Lil Yachty\":\n yachty = artist['id']\n print(artist['name'], \"id is\", yachty) \n\n#Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.\n#Tip: You're going to be making two separate requests, be sure you DO NOT save them into the same variable.\nresponse = requests.get(\"https://api.spotify.com/v1/artists/\" +wayne+ \"/top-tracks?country=US\")\ndata = response.json()\ntracks = data['tracks']\nprint(\"Lil Wayne's top tracks are: \")\nfor track in tracks:\n print(\"-\", track['name'])\nprint(\"-----------------------------------------------\")\n\nresponse = requests.get(\"https://api.spotify.com/v1/artists/\" +yachty+ \"/top-tracks?country=US\")\ndata = response.json()\ntracks = data['tracks']\nprint(\"Lil Yachty 's top tracks are: \")\nfor track in tracks:\n print(\"-\", track['name'])\n",
"6. Average Popularity of My Fav Musicians (Above) for Their explicit songs vs. their non-explicit songs\nWill the world explode if a musicians swears? Get an average popularity for their explicit songs vs. their non-explicit songs. How many minutes of explicit songs do they have? Non-explicit?",
"response = requests.get(\"https://api.spotify.com/v1/artists/\" +yachty+ \"/top-tracks?country=US\")\ndata = response.json()\ntracks = data['tracks']\n#print(tracks)\n\n#for track in tracks:\n #print(track.keys())\n\n#Get an average popularity for their explicit songs vs. their non-explicit songs. \n#How many minutes of explicit songs do they have? Non-explicit?\n# How explicit is Lils?\n\nresponse = requests.get(\"https://api.spotify.com/v1/artists/\" +yachty+ \"/top-tracks?country=US\")\ndata = response.json()\ntracks = data['tracks']\n# counter for tracks for explicit and clean\ntrack_count = 0 \nclean_count = 0\n#counter to find avg popularity\npopular_exp = 0\npopular_clean = 0\n#counter for avg time in minutes are below:\ntimer = 0\ndata_timer = 0\ntimer_clean = 0\n\nfor track in tracks:\n print(\"The track,\", track['name'],\", with the id\",track['id'], \"is\", track['explicit'],\"for explicit content, and has the popularity of\", track['popularity'])\n track_id = track['id']\n time_ms = track['duration_ms']\n if True:\n track_count = track_count + 1\n popular_exp = popular_exp + track['popularity']\n response = requests.get(\"https://api.spotify.com/v1/tracks/\" + track_id)\n data_track = response.json()\n print(\"and has the duration of\", data_track['duration_ms'], \"milli seconds.\")\n timer = timer + time_ms\n timer_minutes = ((timer / (1000*60)) % 60)\n if not track['explicit']:\n clean_count = clean_count + 1\n popular_clean = popular_clean + track['popularity']\n response = requests.get(\"https://api.spotify.com/v1/tracks/\" + track_id)\n data_tracks = response.json()\n timer_clean = timer_clean + time_ms\n timer_minutes_clean = ((data_timer / (1000*60)) % 60)\n print(\", and has the duration of\", timer_minutes_clean, \"minutes\")\n \nprint(\"------------------------------------\")\navg_pop = popular_exp / track_count\nprint(\"I have found\", track_count, \"tracks, and has the average popularity of\", avg_pop, \"and has the average duration of\", 
timer_minutes,\"minutes and\", clean_count, \"are clean\")\n\n#print(\"Overall, I discovered\", track_count, \"tracks\")\n#print(\"And\", clean_count, \"were non-explicit\")\n#print(\"Which means\", , \" percent were clean for Lil Wayne\")\n\n#Get an average popularity for their explicit songs vs. their non-explicit songs. \n#How many minutes of explicit songs do they have? Non-explicit?\n# How explicit is Lils?\n\nresponse = requests.get(\"https://api.spotify.com/v1/artists/\" +wayne+ \"/top-tracks?country=US\")\ndata = response.json()\n# counter for tracks for explicit and clean\ntrack_count = 0 \nclean_count = 0\n#counter to find avg popularity\npopular_exp = 0\npopular_clean = 0\n#counter for avg time in minutes are below:\ntimer = 0\n#data_timer = 0\ntimer_clean = 0\n\nfor track in tracks:\n print(\"The track,\", track['name'],\", with the id\",track['id'], \"is\", track['explicit'],\"for explicit content, and has the popularity of\", track['popularity'])\n track_id = track['id']\n time_ms = data_track['duration_ms']\n if True:\n track_count = track_count + 1\n popular_exp = popular_exp + track['popularity']\n response = requests.get(\"https://api.spotify.com/v1/tracks/\" + track_id)\n data_track = response.json()\n print(\"and has the duration of\", data_track['duration_ms'], \"milli seconds.\")\n timer = timer + time_ms\n timer_minutes = ((timer / (1000*60)) % 60)\n if not track['explicit']:\n clean_count = clean_count + 1\n popular_clean = popular_clean + track['popularity']\n response = requests.get(\"https://api.spotify.com/v1/tracks/\" + track_id)\n data_tracks = response.json()\n timer_clean = timer_clean + time_ms\n timer_minutes_clean = ((data_timer / (1000*60)) % 60)\n print(\", and has the duration of\", timer_minutes_clean, \"minutes\")\n \nprint(\"------------------------------------\")\navg_pop = popular_exp / track_count\nprint(\"I have found\", track_count, \"tracks, and has the average popularity of\", avg_pop, \"and has the average duration 
of\", timer_minutes,\"minutes and\", clean_count, \"are clean\")\n\n#print(\"Overall, I discovered\", track_count, \"tracks\")\n#print(\"And\", clean_count, \"were non-explicit\")\n#print(\"Which means\", , \" percent were clean for Lil Wayne\")",
"7a. Number of Biggies and Lils\nSince we're talking about Lils, what about Biggies? How many total \"Biggie\" artists are there? How many total \"Lil\"s? If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?",
"#How many total \"Biggie\" artists are there? How many total \"Lil\"s? \n#If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?\n\nbiggie_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&country=US')\nbiggie_data = biggie_response.json()\nbiggie_artists = biggie_data['artists']['total']\nprint(\"Total number of Biggie artists are\", biggie_artists)\n\nlil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&country=US')\nlil_data = lil_response.json()\nlil_artists = lil_data['artists']['total']\nprint(\"Total number of Lil artists are\", lil_artists)",
"7b. Time to Download All Information on Lil and Biggies",
"#If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?\n\nlimit_download = 50\nbiggie_artists = biggie_data['artists']['total']\nLil_artist = Lil_data['artists']['total']\n\n#1n 5 sec = 50\n#in 1 sec = 50 / 5 req = 10 no, for 1 no, 1/10 sec\n# for 4501 = 4501/10 sec\n# for 49 49/ 10 sec\n\nbig_count = biggie_artists/10\nlil_count = Lil_artist / 10\n\nprint(\"It would take\", big_count, \"seconds for Biggies, where as it would take\", lil_count,\"seconds for Lils\" )\n",
"8. Highest Average Popular Lils and Biggies Out of The Top 50",
"#Out of the top 50 \"Lil\"s and the top 50 \"Biggie\"s, who is more popular on average?\n\nbiggie_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&limit=50&country=US')\nbiggie_data = biggie_response.json()\n\nbiggie_artists = biggie_data['artists']['items']\nbig_count_pop = 0\nfor artist in biggie_artists:\n #count_pop = artist['popularity']\n big_count_pop = big_count_pop + artist['popularity']\nprint(\"Biggie has a total popularity of \", big_count_pop)\nbig_pop = big_count_pop / 49\nprint(\"Biggie is on an average\", big_pop,\"popular\")\n\n#Lil\nLil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')\nLil_data = Lil_response.json()\nLil_artists = Lil_data['artists']['items']\nlil_count_pop = 0\nfor artist in Lil_artists:\n count_pop_lil = artist['popularity']\n lil_count_pop = lil_count_pop + count_pop_lil\nlil_pop = lil_count_pop / 50\nprint(\"Lil is on an average\", lil_pop,\"popular\")\n"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
datactive/bigbang
|
examples/attendance/Extracting Org-Domain and Person-Org-Duration Information From Attendance Data.ipynb
|
mit
|
[
"IETF Affiliations from Attendance Records",
"import bigbang.datasets.domains as domains\nimport bigbang.analysis.utils as utils\nimport bigbang.analysis.attendance as attendance\n\nfrom ietfdata.datatracker import *\nfrom ietfdata.datatracker_ext import *\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport dataclasses\n\nimport bigbang.datasets.organizations as organizations\norg_cats = organizations.load_data()",
"Getting attendance records from datatracker\nWhen attendees register for a meeting, the report their name, email address, and affiliation.\nWhile this is noisy data (any human-entered data is!), we will use this information to associate domains with affilations. E.g. the email domain apple.com is associated with the company Apple.\nWe will also use this data to enrich our understanding of individual affiliations over time.",
"datatracker = DataTracker()\n\nmeetings = datatracker.meetings(meeting_type = datatracker.meeting_type(MeetingTypeURI('/api/v1/name/meetingtypename/ietf/')))\nfull_ietf_meetings = list(meetings)\n\nietf_meetings = []\nfor meeting in full_ietf_meetings:\n meetingd = dataclasses.asdict(meeting)\n meetingd['meeting_obj'] = meeting\n meetingd['num'] = int(meeting.number)\n ietf_meetings.append(meetingd) \n\nmeetings_df = pd.DataFrame.from_records(ietf_meetings)",
"Individual Affiliations",
"dt = DataTrackerExt() # initialize, for all meeting registration downloads",
"This will construct a dataframe of every attendee's registration at every specified meeting. (Downloading this data takes a while!)",
"ietf_meetings[110]['date']\n\nmeeting_attendees_df = pd.DataFrame()\nfor meeting in ietf_meetings:\n if meeting['num'] in [104,105,106,107,108,109]: # can filter here by the meetings to analyze\n registrations = dt.meeting_registrations(meeting=meeting['meeting_obj'])\n df = pd.DataFrame.from_records([dataclasses.asdict(x) for x in list(registrations)])\n df['num'] = meeting['num']\n df['date'] = meeting['date']\n df['domain'] = df['email'].apply(utils.extract_domain)\n full_name = df['first_name'] + \" \" + df['last_name']\n df['full_name'] = full_name\n meeting_attendees_df = meeting_attendees_df.append(df)",
"Filter by those who actually attended the meeting (checked in, didn't just register).",
"ind_affiliation = meeting_attendees_df[['full_name', 'affiliation', 'email', 'domain','date']]",
"This format of data -- with name, email, affiliation, and a timestamp -- can also be extracted from other IETF data, such as the RFC submission metadata. Later, we will use data of this form to infer duration of affilation for IETF attendees.",
"ind_affiliation[:10]\n\nind_affiliation['affiliation'].dropna().value_counts()",
"Matching affiliations with domains",
"affil_domain = ind_affiliation[['affiliation', 'domain', 'email']].pivot_table(\n index='affiliation',columns='domain', values='email', aggfunc = 'count')",
"Drop both known generic and known personal email domains.",
"ddf = domains.load_data()\n\ngenerics = ddf[ddf['category'] == 'generic'].index\npersonals = ddf[ddf['category'] == 'personal'].index\n\ngeneric_email_domains = set(affil_domain.columns).intersection(generics)\naffil_domain.drop(generic_email_domains, axis = 1, inplace = True)\n\npersonal_email_domains = set(affil_domain.columns).intersection(personals)\naffil_domain.drop(personal_email_domains, axis = 1, inplace = True)\n\nad_max = affil_domain.apply(lambda row: row.max(), axis=1)\nad_mean = affil_domain.apply(lambda row: row.dropna().mean(), axis=1)\nad_count = affil_domain.apply(lambda row: row.dropna().count(), axis=1)\nad_sum = affil_domain.apply(lambda row: row.dropna().sum(), axis=1)\n\nad_max_domain = affil_domain.apply(lambda row: row.idxmax(), axis=1)\n\n## Add the columns *after* computing the statistics!\naffil_domain['max'] = ad_max\naffil_domain['mean'] = ad_mean\naffil_domain['count'] = ad_count\naffil_domain['sum'] = ad_sum\naffil_domain['max_domain'] = ad_max_domain\n\nad_stats = affil_domain[['max_domain','max','count','mean','sum']].sort_values('max', ascending=False)\n\nad_stats[:100]\n\nad_stats[:100].to_csv(\"affiliation_domain_stats.csv\")\n\nad_stats['sum']",
"Duration of affiliation\nThe current data we have for individual affiliations is \"point\" data, reflecting the affiliation of an individual on a particular date.\nFor many kinds of analysis, we may want to understand the full duration for which an individual has been associated with an organization. This requires an inference from the available data points to dates that are not explicitly represented in the data.\nFor now, we will use a rather simple form of inference: filling in any missing data from the last (temporally) known data point. And then if there's still missing data, infer backwards.",
"affil_dates = ind_affiliation.pivot_table(\n index=\"date\",\n columns=\"full_name\",\n values=\"affiliation\",\n aggfunc=\"first\"\n).fillna(method='ffill').fillna(method='bfill')\n\ntop_attendees = ind_affiliation.groupby('full_name')['date'].count().sort_values(ascending=False)[:40].index\n\ntop_attendees\n\naffil_dates[top_attendees]\n\naffil_dates[top_attendees].to_csv(\"inferred_affiliation_dates.csv\")",
"Linking to Organization lists",
"import bigbang.analysis.process as process\n\n# drop subsidiary organizations\norg_cats = org_cats[org_cats['subsidiary of / alias of'].isna()]\n\norg_cats",
"Normalize/resolve the names from the IETF attedence records.",
"org_names = ad_stats['sum']\norg_names = org_names.append(\n pd.Series(index = org_cats['name'], data = 1)\n)\norg_names = org_names.sort_values(ascending = False)\norg_names = org_names[~org_names.index.duplicated(keep=\"first\")]\n\nents = process.resolve_entities(\n org_names,\n process.containment_distance,\n threshold=.15\n)\n\nreplacements = {}\nfor r in [{name: ent for name in ents[ent]} for ent in ents]:\n replacements.update(r)\n\nad_stats['norm_org'] = ad_stats.apply(lambda x : replacements[x.name], axis = 1)\norg_cats['norm_org'] = org_cats.apply(lambda x : replacements[x['name']], axis = 1)\n\norg_cats_plus = org_cats.join(ad_stats[['max_domain', 'norm_org']], on = 'norm_org', rsuffix=\"_ietf\")\n\norg_cats_plus_match = org_cats_plus[(~org_cats_plus['max_domain'].isna())].drop('norm_org_ietf',axis=1).rename({'max_domain' : 'max_domain_ietf'}, axis = 1)\n\norg_cats_plus_match.to_csv(\"org_categories_matched_with_ietf_attendence_domains.csv\")\n\norg_cats_plus_match[:20]",
"Export the graph of relations\nGetting the affiliation data relations extracted from the attendance tables.\nFinal form: Three tables:\n - Name - Email, earliest and latest date\n - Name - Affiliation, earliest and latest date\n - Email - Affiliation, earliest and latest date\nThese can be combined into a tripartite graph, which should have a component for each affiliation entity.",
"meeting_range = [106,107,108]\n\na, b, c = attendance.name_email_affil_relations_from_IETF_attendance(meeting_range, threshold = 0.17)\n\na\n\nb\n\nb['affiliation'].value_counts()['cisco']\n\nc",
"Match to a mailing list",
"from bigbang.archive import Archive\narx = Archive(\"httpbisa\")",
"From the archive data: From -> email address, Date\nMatch with table B: email,. min_date, max_date, to get Affiliation\nAdd Affiliation to the archive data.",
"arx.add_affiliation(b)\n\narx.data[['From','Date','affiliation']].dropna()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
JorgeDeLosSantos/nusa
|
docs/nusa-info/es/beam-element.ipynb
|
mit
|
[
"Elemento Beam\nFundamento teórico\nEl elemento Beam (viga) es un elemento finito bidimensional donde las coordenadas locales y globales coinciden. Está caracterizado por una función de forma lineal. El elemento Beam tiene un modulo de elasticidad E, momento de inercia I y longitud L. Cada elemento Beam tiene dos nodos y se asume horizontal como se muestra en la figura. En este caso la matriz de rigidez del elemento está dada por la matriz siguiente, asumiendo que la deformación axial es despreciable:\n$$\nk = \\frac{EI}{L^3}\n\\begin{bmatrix}\n12 & 6L & -12 & 6L \\\n6L & 4L^2 & -6L & 2L^2 \\\n-12 & -6L & 12 & -6L \\\n6L & 2L^2 & -6L & 4L^2 \n\\end{bmatrix}\n$$\n<img src=\"src/beam-element/beam_element.PNG\" width=\"200px\"> </img>\nEstá claro que el elemento Beam tiene cuatro grados de libertad, dos en cada nodo: un desplazamiento transversal y una rotación. La convención de signos utilizada es la tradicional: los desplazamientos son positivos hacia arriba y las rotaciones cuando son antihorario.\nEjemplos resueltos\nEjemplo 1. Viga en voladizo",
"%matplotlib inline\nimport numpy as np\nfrom nusa import *\nimport itertools\nimport matplotlib.pyplot as plt\n\ndef pairwise(iterable):\n #~ \"s -> (s0,s1), (s1,s2), (s2, s3), ...\"\n a, b = itertools.tee(iterable)\n next(b, None)\n return zip(a, b)\n\n# Input data \nE = 210e9 # Pa\nI = 1e-5\nL = 1\nP = 10e3\n\nnelm = 10\nparts = np.linspace(0,L,nelm)\n\nnodos = []\nfor xc in parts:\n cn = Node((xc,0))\n nodos.append(cn)\n\nelementos = []\nfor x in pairwise(nodos):\n ni,nj = x[0], x[1]\n ce = Beam((ni,nj),E,I)\n elementos.append(ce)\n\nm = BeamModel()\n\nfor n in nodos: m.add_node(n)\nfor e in elementos: m.add_element(e)\n\nm.add_constraint(nodos[0], ux=0, uy=0, ur=0)\nm.add_force(nodos[-1], (-P,))\nm.plot_model()\nm.solve()\nm.plot_disp(1)\nxx = np.linspace(0,L)\nd = ((-P*xx**2.0)/(6.0*E*I))*(3*L - xx)\nplt.plot(xx,d)\nplt.axis(\"auto\")\nplt.xlim(0,1.1*L)",
"Ejemplo 2. Determine los desplazamientos nodales y rotaciones, fuerzas nodales globales, y fuerzas en elementos para la viga mostrada en la figura. Se ha discretizado la viga como se indica en la numeración nodal. La viga está fija en los nodos 1 y 5, y tiene un soporte de rodillo en el nodo 3. Las cargas verticales de 10 000 lb cada una son aplicadas en los nodos 2 y 4. Sea E=300x10<sup>6</sup> psi and I=500 in<sup>4</sup>.\n<img src=\"src/beam-element/logan_E42.PNG\" width=\"400px\"> </img>",
"\"\"\"\nLogan, D. (2007). A first course in the finite element analysis.\nExample 4.2 , pp. 166.\n\"\"\"\nfrom nusa.core import *\nfrom nusa.model import *\nfrom nusa.element import *\n\n# Input data \nE = 30e6\nI = 500.0\nP = 10e3\nL = 10*(12.0) # ft -> in\n# Model\nm1 = BeamModel(\"Beam Model\")\n# Nodes\nn1 = Node((0,0))\nn2 = Node((10*12,0))\nn3 = Node((20*12,0))\nn4 = Node((30*12,0))\nn5 = Node((40*12,0))\n# Elements\ne1 = Beam((n1,n2),E,I)\ne2 = Beam((n2,n3),E,I)\ne3 = Beam((n3,n4),E,I)\ne4 = Beam((n4,n5),E,I)\n\n# Add elements \nfor nd in (n1,n2,n3,n4,n5): m1.add_node(nd)\nfor el in (e1,e2,e3,e4): m1.add_element(el)\n\nm1.add_force(n2,(-P,))\nm1.add_force(n4,(-P,))\nm1.add_constraint(n1, ux=0,uy=0,ur=0) # fixed \nm1.add_constraint(n5, ux=0,uy=0,ur=0) # fixed\nm1.add_constraint(n3, uy=0, ur=0) # fixed\nm1.add_constraint(n2, ur=0)\nm1.add_constraint(n4, ur=0)\nm1.plot_model()\nm1.solve() # Solve model\n\n# Desplazamientos nodales y rotaciones\nprint(\"Desplazamientos nodales y rotaciones\")\nprint(\"UY \\t UR\")\nfor node in m1.get_nodes():\n print(\"{0} \\t {1}\".format(node.uy, node.ur))\n\n# Fuerzas nodales globales\nprint(\"\\nFuerzas nodales globales\")\nprint(\"FY \\t M\")\nfor node in m1.get_nodes():\n print(\"{0} \\t {1}\".format(node.fy, node.m))\n\n# Fuerzas en elementos\nprint(\"\\nFuerzas en elementos\")\nfor element in m1.get_elements():\n print(\"\\nFY:\\n{0} \\n M:\\n{1}\\n\".format(element.fy, element.m))\n\n# Dibujando diagramas de cortante y momento flexionante\nm1.plot_shear_diagram()\nm1.plot_moment_diagram()",
"Ejemplo 3.",
"\"\"\"\nBeer & Johnston. (2012) Mechanics of materials. \nProblem 9.13 , pp. 568.\n\"\"\"\n# Input data \nE = 29e6\nI = 291 # W14x30 \nP = 35e3\nL1 = 5*12 # in\nL2 = 10*12 #in\n# Model\nm1 = BeamModel(\"Beam Model\")\n# Nodes\nn1 = Node((0,0))\nn2 = Node((L1,0))\nn3 = Node((L1+L2,0))\n# Elements\ne1 = Beam((n1,n2),E,I)\ne2 = Beam((n2,n3),E,I)\n\n# Add elements \nfor nd in (n1,n2,n3): m1.add_node(nd)\nfor el in (e1,e2): m1.add_element(el)\n \nm1.add_force(n2, (-P,))\nm1.add_constraint(n1, ux=0, uy=0) # fixed \nm1.add_constraint(n3, uy=0) # fixed\nm1.solve() # Solve model\n\nm1.plot_shear_diagram()\nm1.plot_moment_diagram()\n\nn1.fy",
"Ejemplo 4",
"\"\"\"\nBeer & Johnston. (2012) Mechanics of materials. \nProblem 9.13 , pp. 568.\n\"\"\"\n\n# Datos\nP1 = 3e3\nP2 = 3e3\nM1 = 450\nE = 200e9\nI = (1/12)*(50e-3)*(60e-3)**3\n\nn1 = Node((0,0))\nn2 = Node((0.3,0))\nn3 = Node((0.5,0))\nn4 = Node((0.7,0))\nn5 = Node((1,0))\n\ne1 = Beam((n1,n2), E, I)\ne2 = Beam((n2,n3), E, I)\ne3 = Beam((n3,n4), E, I)\ne4 = Beam((n4,n5), E, I)\n\nmodel_cp = BeamModel()\n\nfor nodo in (n1,n2,n3,n4,n5): model_cp.add_node(nodo)\nfor el in (e1,e2,e3,e4): model_cp.add_element(el)\n\nmodel_cp.add_constraint(n1, ux=0, uy=0)\nmodel_cp.add_constraint(n5, uy=0)\nmodel_cp.add_force(n2, (-P1,))\nmodel_cp.add_force(n4, (-P2,))\nmodel_cp.add_moment(n3, (-M1,))\n\nmodel_cp.solve()\nmodel_cp.plot_shear_diagram()\nmodel_cp.plot_moment_diagram()\n\nuy = []\nfor n in model_cp.get_nodes():\n uy.append(n.fy)\n\nplt.plot(uy)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
markriedl/easygen
|
Easygen.ipynb
|
mit
|
[
"EasyGen\nEasyGen is a visual user interface to help set up simple neural network generation tasks.\nThere are a number of neural network frameworks (Tensorflow, TFLearn, Keras, PyTorch, etc.) that implement standard algorithms for generating text (e.g., recurrent neural networks, sequence2sequence) and images (e.g., generative adversarial networks). They require some familairity with coding. Beyond that, just running a simple experiment may require a complex set of steps to clean and prepare the data.\nEasyGen allows one to quickly set up the data cleaning and neural network training pipeline using a graphical user interface and a number of self-contained \"modules\" that implement stardard data preparation routines. EasyGen differs from other neural network user interfaces in that it doesn't focus on the graphical instantiation of the neural network itself. Instead, it provides an easy to use way to instantiate some of the most common neural network algorithms used for generation. EasyGen focuses on the data preparation.\nFor documentation see the EasyGen Github repo\nTo get started:\n\n\nClone the EasyGen notebook by following the link and selecting File -> Save a copy in Drive.\n\n\nTurn on GPU support under Edit -> Notebook setting.\n\n\nRun the cells in Sections 1. Some are optional if you know you aren't going to be using particular features.\n\n\nRun the cell in Section 2. If you know there are any models or datasets that you won't be using you can skip them.\n\n\nRun the cell in Section 3. This creates a blank area below the cell in which you can use the buttons to create your visual program. An example program is loaded automatically. You can clear it with the \"clear\" button below it. Afterwards you can create your own programs. Selecting \"Make New Module\" will cause the new module appears graphically above and can be dragged around. 
The inputs and outputs of different modules can be connected together by clicking on an output (red) and dragging to an input (green). Gray boxes are parameters that can be edited. Clicking on a gray box causes a text input field to appear at the bottom of the editing area, just above the \"Make New Module\" controls.\n\n\nSave your program by entering a program name and pressing the \"save\" button.\n\n\nRun your program by editing the program name in the cell in Section 4 and then running the cell.\n\n\n1. Setup\nDownload EasyGen",
"!git clone https://github.com/markriedl/easygen.git\n!cp easygen/*.js /usr/local/share/jupyter/nbextensions/google.colab/\n!cp easygen/images/*.png /usr/local/share/jupyter/nbextensions/google.colab/",
"Install requirements",
"!apt-get update\n!apt-get install chromium-chromedriver\n!pip install -r easygen/requirements.txt",
"Download StyleGAN",
"!git clone https://github.com/NVlabs/stylegan.git\n!cp easygen/stylegan_runner.py stylegan",
"Download GPT-2",
"!git clone https://github.com/nshepperd/gpt-2",
"Create backend hooks for saving and loading programs",
"import IPython\nfrom google.colab import output\n\ndef python_save_hook(file_text, filename):\n import easygen\n import hooks\n status = hooks.python_save_hook_aux(file_text, filename)\n ret_status = 'true' if status else 'false'\n return IPython.display.JSON({'result': ret_status})\n\ndef python_load_hook(filename):\n import easygen\n import hooks\n result = hooks.python_load_hook_aux(filename)\n return IPython.display.JSON({'result': result})\n\ndef python_cwd_hook(dir):\n import easygen\n import hooks\n result = hooks.python_cwd_hook_aux(dir)\n return IPython.display.JSON({'result': result})\n\ndef python_copy_hook(path1, path2):\n import easygen\n import hooks\n status = hooks.python_copy_hook_aux(path1, path2)\n ret_status = 'true' if status else 'false'\n return IPython.display.JSON({'result': ret_status})\n\ndef python_move_hook(path1, path2):\n import easygen\n import hooks\n status = hooks.python_move_hook_aux(path1, path2)\n ret_status = 'true' if status else 'false'\n return IPython.display.JSON({'result': ret_status})\n\ndef python_open_text_hook(path):\n import easygen\n import hooks\n status = hooks.python_open_text_hook_aux(path)\n ret_status = 'true' if status else 'false'\n return IPython.display.JSON({'result': ret_status})\n \ndef python_open_image_hook(path):\n import easygen\n import hooks\n status = hooks.python_open_image_hook_aux(path)\n ret_status = 'true' if status else 'false'\n return IPython.display.JSON({'result': ret_status})\n\ndef python_mkdir_hook(path, dir_name):\n import easygen\n import hooks\n status = hooks.python_mkdir_hook_aux(path, dir_name)\n ret_status = 'true' if status else 'false'\n return IPython.display.JSON({'result': ret_status})\n\ndef python_trash_hook(path):\n import easygen\n import hooks\n status = hooks.python_trash_hook_aux(path)\n ret_status = 'true' if status else 'false'\n return IPython.display.JSON({'result': ret_status})\n\ndef python_run_hook(path):\n import easygen\n program_file_name = path\n 
easygen.main(program_file_name)\n return IPython.display.JSON({'result': 'true'})\n\noutput.register_callback('notebook.python_cwd_hook', python_cwd_hook)\noutput.register_callback('notebook.python_copy_hook', python_copy_hook)\noutput.register_callback('notebook.python_move_hook', python_move_hook)\noutput.register_callback('notebook.python_open_text_hook', python_open_text_hook)\noutput.register_callback('notebook.python_open_image_hook', python_open_image_hook)\noutput.register_callback('notebook.python_save_hook', python_save_hook)\noutput.register_callback('notebook.python_load_hook', python_load_hook)\noutput.register_callback('notebook.python_mkdir_hook', python_mkdir_hook)\noutput.register_callback('notebook.python_trash_hook', python_trash_hook)\noutput.register_callback('notebook.python_run_hook', python_run_hook)",
"Import EasyGen",
"import sys\nsys.path.insert(0, 'easygen')\nimport easygen",
"2. Download pre-trained neural network models\n2.1 Download GPT-2 and StyleGan models\nDownload the GPT-2 small 117M model. Will save to models/117M directory.",
"!python gpt-2/download_model.py 117M",
"Download the GPT-2 medium 345M model. Will save to models/345M directory.",
"!python gpt-2/download_model.py 345M",
"Download the StyleGAN cats model (256x256). Will save as \"cats256x256.pkl\" in the home directory.",
"!wget -O cats256x256.pkl https://www.dropbox.com/s/1w97383h0nrj4ea/karras2019stylegan-cats-256x256.pkl?dl=0",
"2.2 Download Wikipedia\nYou only need to do this if you are using the ReadWikipedia functionality. This takes a long time. You may want to skip it if you know you wont be scraping data from Wikipedia.",
"!wget -O wiki.zip https://www.dropbox.com/s/39w6mj1akwy2a0r/wiki.zip?dl=0\n!unzip wiki.zip",
"3. Run the GUI\nRun the cell below. This will load a default example program that generate new, fictional paint names. Use the \"clear\" button to clear it and make your own.\nWhen done, name the program and press the \"save\" button. You should see your file appear in the file listing in the left panel.",
"%%html\n<!DOCTYPE html>\n<html>\n<head>\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\"/>\n\n<style>\n// MAKE CANVAS \ncanvas {\n border:1px solid #d3d3d3;\n background-color: #f1f1f1;\n}\n</style>\n</head>\n<body>\n<script src=\"nbextensions/google.colab/module_dicts.js\"></script>\n<script src=\"nbextensions/google.colab/gui.js\"></script>\n<script>load_program(\"easygen/examples/make_new_colors\")</script>\n<div id=\"inp\">\n <h1 id=\"inp_module\">text</h1>\n <strong id=\"inp_param\">text</strong>\n <input id=\"inp_val\" />\n <button onmouseup=\"do_input_button_up()\">ok</button>\n</div>\n<div id=\"make\">\n <h1>Make New Module</h1>\n <select id=\"module_select\"></select>\n <button onmouseup=\"do_make_module_button_up()\">Add Module</button>\n</div>\n<div>\n <h1>Save Program</h1>\n <input id=\"inp_save\" />\n <button onmouseup=\"save_program()\">Save</button>\n</div>\n<div>\n <h1>Load Program</h1>\n <input id=\"inp_load\" />\n <button onmouseup=\"load_program()\">Load</button>\n</div>\n<div>\n <h1>Clear Program</h1>\n <button onmouseup=\"clear_program()\">Clear</button>\n</div>\n</body>\n</html>",
"4. Run Your Program\nThis will run a default example program that will generate new, fictional paint names. If you don't want to run that program, skip to the next cell.",
"program_file_name = 'easygen/examples/make_new_colors'\neasygen.main(program_file_name)",
"Once you've made your own program, run the cell below, enter the program below, and the press the run button.",
"%%html\n<html>\n<body>\n<script src=\"nbextensions/google.colab/run_program.js\"></script>\n</script>\n<b>Run Program:</b> <input id=\"inp_run\" /> <button onmouseup=\"run_program()\">Run</button> \n</body>\n</html>",
"5. View Your Output Files\nRun the cell below to launch a file manager so you can view your text and image files. \nYou can use the panel to the left to download any files written to disk.",
"%%html\n<html>\n<body>\n<script src=\"nbextensions/google.colab/file_manager.js\"></script>\n<h1>Manage Files</h1>\n<table cols=\"3\" border=\"0\">\n <tr><td><strong id=\"path1\">/content</strong></td><td></td><td><strong id=\"path2\">/content</strong></td></tr>\n <tr><td><select multiple id=\"file_list1\"></select></td><td><p><button id=\"copy_button\" onmouseup=\"do_copy_mouse_up()\">Copy --></button></p><p><button id=\"move_button\" onmouseup=\"do_move_mouse_up()\">Move --></button></p></td><td><select multiple id=\"file_list2\"></select></td></tr>\n <tr><td><button id=\"open_text_button\" onmouseup=\"do_open_text_mouse_up()\">Open Text</button><br><button id=\"open_image_button\" onmouseup=\"do_open_image_mouse_up()\">Open Image</button><br><button id=\"open_text_button\" onmouseup=\"do_trash_mouse_up()\">Send to trash</button></td></tr>\n </table>\n <h2>Make Directories</h2>\n<input id=\"mkdir_input\" /><button onmouseup=\"do_mkdir_mouse_up()\">Make directory</button>\n</body>\n</html>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/examples
|
courses/udacity_intro_to_tensorflow_for_deep_learning/l09c01_nlp_turn_words_into_tokens.ipynb
|
apache-2.0
|
[
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Tokenizing text and creating sequences for sentences\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l09c01_nlp_turn_words_into_tokens.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l09c01_nlp_turn_words_into_tokens.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nThis colab shows you how to tokenize text and create sequences for sentences as the first stage of preparing text for use with TensorFlow models.\nImport the Tokenizer",
"# Import the Tokenizer\nfrom tensorflow.keras.preprocessing.text import Tokenizer\n",
"Write some sentences\nFeel free to change and add sentences as you like",
"sentences = [\n 'My favorite food is ice cream',\n 'do you like ice cream too?',\n 'My dog likes ice cream!',\n \"your favorite flavor of icecream is chocolate\",\n \"chocolate isn't good for dogs\",\n \"your dog, your cat, and your parrot prefer broccoli\"\n]",
"Tokenize the words\nThe first step to preparing text to be used in a machine learning model is to tokenize the text, in other words, to generate numbers for the words.",
"# Optionally set the max number of words to tokenize.\n# The out of vocabulary (OOV) token represents words that are not in the index.\n# Call fit_on_text() on the tokenizer to generate unique numbers for each word\ntokenizer = Tokenizer(num_words = 100, oov_token=\"<OOV>\")\ntokenizer.fit_on_texts(sentences)\n",
"View the word index\nAfter you tokenize the text, the tokenizer has a word index that contains key-value pairs for all the words and their numbers.\nThe word is the key, and the number is the value.\nNotice that the OOV token is the first entry.",
"# Examine the word index\nword_index = tokenizer.word_index\nprint(word_index)\n\n# Get the number for a given word\nprint(word_index['favorite'])",
"Create sequences for the sentences\nAfter you tokenize the words, the word index contains a unique number for each word. However, the numbers in the word index are not ordered. Words in a sentence have an order. So after tokenizing the words, the next step is to generate sequences for the sentences.",
"sequences = tokenizer.texts_to_sequences(sentences)\nprint (sequences)",
"Sequence sentences that contain words that are not in the word index\nLet's take a look at what happens if the sentence being sequenced contains words that are not in the word index.\nThe Out of Vocabluary (OOV) token is the first entry in the word index. You will see it shows up in the sequences in place of any word that is not in the word index.",
"sentences2 = [\"I like hot chocolate\", \"My dogs and my hedgehog like kibble but my squirrel prefers grapes and my chickens like ice cream, preferably vanilla\"]\n\nsequences2 = tokenizer.texts_to_sequences(sentences2)\nprint(sequences2)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
citxx/sis-python
|
crash-course/slices.ipynb
|
mit
|
[
"<h1>Содержание<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Получение-среза\" data-toc-modified-id=\"Получение-среза-1\">Получение среза</a></span><ul class=\"toc-item\"><li><span><a href=\"#Без-параметров\" data-toc-modified-id=\"Без-параметров-1.1\">Без параметров</a></span></li><li><span><a href=\"#Указываем-конец\" data-toc-modified-id=\"Указываем-конец-1.2\">Указываем конец</a></span></li><li><span><a href=\"#Указываем-начало\" data-toc-modified-id=\"Указываем-начало-1.3\">Указываем начало</a></span></li><li><span><a href=\"#Указываем-шаг\" data-toc-modified-id=\"Указываем-шаг-1.4\">Указываем шаг</a></span></li><li><span><a href=\"#Отрицательный-шаг\" data-toc-modified-id=\"Отрицательный-шаг-1.5\">Отрицательный шаг</a></span></li></ul></li><li><span><a href=\"#Особенности-срезов\" data-toc-modified-id=\"Особенности-срезов-2\">Особенности срезов</a></span></li><li><span><a href=\"#Примеры-использования\" data-toc-modified-id=\"Примеры-использования-3\">Примеры использования</a></span></li><li><span><a href=\"#Срезы-и-строки\" data-toc-modified-id=\"Срезы-и-строки-4\">Срезы и строки</a></span></li></ul></div>\n\nСрезы\nПолучение среза\nБывает такое, что нам нужна только некоторая часть списка, например все элементы с 5 по 10, или все элементы с четными индексами. Подобное можно сделать с помощью срезов.\nСрез задаётся как список[start:end:step], где из списка будут браться элементы с индексами от start (включительно) до end (не включительно) с шагом step. Любое из значений start, end, step можно опустить. В таком случае по умолчанию start равен 0, end равен длине списка, то есть индексу последнего элемента + 1, step равен 1.\nCрезы и range очень похожи набором параметров.\nБез параметров\nСрез a[::] будет содержать просто весь список a:",
"lst = [1, 2, 3, 4, 5, 6, 7, 8]\nprint(lst[::])",
"Можно опустить и : перед указанием шага, если его не указывать:",
"lst = [1, 2, 3, 4, 5, 6, 7, 8]\nprint(lst[:])",
"Указываем конец\nУказываем до какого элемента выводить:",
"lst = [1, 2, 3, 4, 5, 6, 7, 8]\nprint(lst[:5]) # то же самое, что и lst[:5:]\n\nlst = [1, 2, 3, 4, 5, 6, 7, 8]\nprint(lst[:0])",
"Указываем начало\nИли с какого элемента начинать:",
"lst = [1, 2, 3, 4, 5, 6, 7, 8]\nprint(lst[2:])\n\nlst = [1, 2, 3, 4, 5, 6, 7, 8]\nprint(lst[2:5])",
"Указываем шаг",
"lst = [1, 2, 3, 4, 5, 6, 7, 8]\nprint(lst[1:7:2])\n\nlst = [1, 2, 3, 4, 5, 6, 7, 8]\nprint(lst[::2])",
"Отрицательный шаг\nМожно даже сделать отрицательный шаг, как в range:",
"lst = [1, 2, 3, 4, 5, 6, 7, 8]\nprint(lst[::-1])",
"С указанием начала срез с отрицательным шагом можно понимать как: \"Начиная с элемента с индексом 2 идти в обратную сторону с шагом 1 до того, как список закончится\".",
"lst = [1, 2, 3, 4, 5, 6, 7, 8]\nprint(lst[2::-1])",
"Для отрицательного шага важно правильно указывать порядок начала и конца, и помнить, что левое число всегда включительно, правое - не включительно:",
"lst = [1, 2, 3, 4, 5, 6, 7, 8]\n\n# Допустим, хотим элементы с индексами 1 и 2 в обратном порядке\nprint(lst[1:3:-1])\n\n# Начиная с элемента с индексом 2 идти в обратную сторону с шагом 1\n# до того, как встретим элемент с индексом 0 (0 не включительно)\nprint(lst[2:0:-1])",
"Особенности срезов\nСрезы не изменяют текущий список, а создают копию. С помощью срезов можно решить проблему ссылочной реализации при изменении одного элемента списка:",
"a = [1, 2, 3, 4] # а - ссылка на список, каждый элемент списка это ссылки на объекты 1, 2, 3, 4\nb = a # b - ссылка на тот же самый список\n\na[0] = -1 # Меняем элемент списка a\nprint(\"a =\", a)\nprint(\"b =\", b) # Значение b тоже поменялось!\nprint()\n\na = [1, 2, 3, 4]\nb = a[:] # Создаём копию списка\n\na[0] = -1 # Меняем элемент списка a\nprint(\"a =\", a)\nprint(\"b =\", b) # Значение b не изменилось!",
"Примеры использования\nС помощью срезов можно, например, пропустить элемент списка с заданным индексом:",
"lst = [1, 2, 3, 4, 5, 6, 7, 8]\nprint(lst[:4] + lst[5:])",
"Или поменять местами две части списка:",
"lst = [1, 2, 3, 4, 5, 6, 7, 8]\nswapped = lst[5:] + lst[:5] # поменять местами, начиная с элемента с индексом 5\nprint(swapped)",
"Срезы и строки\nСрезы можно использовать не только для списков, но и для строк. Например, чтобы изменить третий символ строки, можно сделать так:",
"s = \"long string\"\ns = s[:2] + \"!\" + s[3:]\nprint(s)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Mayurji/Machine-Learning
|
Statistics/Pandas and ThinkStat.ipynb
|
gpl-3.0
|
[
"Pandas and ThinkStat",
"############ First we import pandas ############\nimport pandas as pd\nimport numpy as np\nimport math\nfrom collections import Counter, defaultdict\nimport matplotlib.pyplot as plt\nimport scipy.stats as stat\nimport random\nfrom IPython.display import Image\n\n\n%matplotlib inline\n############ Declaration of Series ############\nnumSeries = pd.Series([-1,52,33,64,15])\n\n############ Viewing Series with Default index ############\nnumSeries\n\n############ Finding the type of the Object ############\ntype(numSeries)\n\n############ Getting the values of Series Object ############\nnumSeries.values\n\n############ Getting the value of the Series Index ############\nnumSeries.index\n\n############ Customizing the Index ############\nnumSeries2 = pd.Series([23,45,32,23],index=['a','b','c','d'])\n\n############ viewing customized index ############\nnumSeries2\n\n############ Checking the customized index ############\nnumSeries2.index\n\n############ Accessing particular element of the index of a Series Object ############\nnumSeries2['a']\n\n############ Modifying thr particular element of the Series with index to access it ############\nnumSeries2['a'] = 56\n\n############ validating the modification in the series ############\nnumSeries2['a']\n\n############ Creating DataFrame ############\n\n############ A dict's Key is considered as column for a table in DataFrame and values of Dict is value of ############\n\n############ column in a table. 
############\n\ndata = {'empid':['E1','E2','E3'],\n 'Salary':[10000,25000,40000],\n 'Name':['Jack','Joe','Jackie']}\n\n############ Dict object ############\ntype(data)\n\n############ converting Dict to DataFrame ############\ndataframe = pd.DataFrame(data)\n\n############ Simple DataFrame ############\ndataframe\n\n############ Accessing Dataframe column ############\ndataframe['Name']\n\n############ New Dataframe ############\ndata2 = {'empid':['E1','E2','E3'],\n 'position':['Junior consultant','consultant','Senior consultant'],\n }\ndataframe2 = pd.DataFrame(data2)\n\n############ merging two dataframe ############\n\ndf = pd.merge(dataframe,dataframe2,on='empid',how='inner')\n\n############ View the dataframe ############\ndf.head()\n\n#### Storing DF as CSV ############\n\ndf.to_csv('Sample_df.csv')\n\n############ Reading a CSV ############\n\nsample_df = pd.read_csv(\"Sample_df.csv\")\n\n############ Duplicate Indexes are created if index is not mentioned while Saving ############\nsample_df.head()\n\n############ Keeping default Index ############\n\n# We can create different column as your index by mentioning the column name as index='empid'\n\ndf.to_csv('Sample_df.csv',index=None)\n\n############ Reading a CSV ############\n\nsample_df = pd.read_csv(\"Sample_df.csv\")\nsample_df.head()\n\n############ Fetching record based on EMP ID ############\n\nsample_df[sample_df[\"empid\"]=='E1']\n\n############ Filtering records based on Salary ############\n\nsample_df[sample_df[\"Salary\"]>=25000] #sample_df[sample_df.Salary>10000]\n\n############ FIltering using String values of the Columns ############\n\nsample_df[sample_df.position.str.contains('junior',case=False)]",
"Valuable Functions in Pandas",
"### Social Network Ads\nsocial_network = pd.read_csv(\"Social_Network_Ads.csv\")\n\nsocial_network.head()\n\nsocial_network.shape",
"value_counts() - The Series class provides a method, value_counts, that counts the number of times each value appears.",
"social_network[\"Gender\"].value_counts()",
"isnull - It finds the number of values in each column with value as null.",
"social_network.isnull().sum()",
"sort_index() - It sorts the Series by index, so the values appear in order.",
"social_network[\"Age\"].value_counts().sort_index()",
"value_counts(sort=False) - It counts the number of times each value appears with least frequent value on top in an increasing order.",
"social_network[\"EstimatedSalary\"].value_counts(sort=False)",
"describe() - It gives the basic statistical metrics like mean etc.",
"social_network[\"EstimatedSalary\"].describe()",
"Consider salary above 140,000 is error or an outlier, then we can eliminate this error using np.nan.\nThe attribute loc provides several ways to select rows and columns from a DataFrame. In this example, the first expression in brackets is the row indexer; the second expression selects the column.\nThe expression social_network[\"EstimatedSalary\"] > 140000 yields a Series of type bool, where True indicates that the condition is true. When a boolean Series is used as an index, it selects only the elements that satisfy the condition.",
"social_network.loc[social_network[\"EstimatedSalary\"] > 140000, \"EstimatedSalary\"] = np.nan\n\nsocial_network[\"EstimatedSalary\"].describe()",
"Histogram\nOne of the best ways to describe a variable is to report the values that appear in the dataset and how many times each value appears. This description is called the distribution of the variable.\nThe most common representation of a distribution is a histogram, which is a graph that shows the frequency of each value. In this context, “frequency” means the number of times the value appears.",
"#Convert series into a list\nage = list(social_network[\"Age\"])\n\n# Create dict format for age, it helps in building histogram easily. \n# The result is a dictionary that maps from values to frequencies.\nhist = {}\nfor a in age:\n hist[a] = hist.get(a,0) + 1\n\n#Same as dict format above.\ncounter = Counter(age)\n\n#The result is a Counter object, which is a subclass of dictionary.\n\n#To loop through the values in order, you can use the built-in function sorted:\nfor val in sorted(counter):\n print(val, counter[val])\n\n# Use items() to iterate over the dict/counter.\nfor value, freq in counter.items():\n print(value, freq)",
"Plotting",
"plt.hist(age)\nplt.xlabel(\"Age\")\nplt.ylabel(\"Freq\")\n\npurchased_customer = social_network[social_network[\"Purchased\"]==1]\n\nplt.hist(purchased_customer[\"Age\"])\nplt.xlabel(\"Age\")\nplt.ylabel(\"Freq\")\n\nsocial_network.head()\n\nno_purchase = social_network[social_network[\"Purchased\"]==0]\n\nno_purchase.Age.mean()\n\npurchased_customer.Age.mean()",
"Some of the characteristics we might want to report are:\n\ncentral tendency: Do the values tend to cluster around a particular point?\nmodes: Is there more than one cluster?\nspread: How much variability is there in the values?\ntails: How quickly do the probabilities drop off as we move away from the modes?\noutliers: Are there extreme values far from the modes?\n\nWhy and Why not \"Mean\" can be used for central tendenancy\nSometimes the mean is a good description of a set of values. For example, apples are all pretty much the same size (at least the ones sold in supermar- kets). So if I buy 6 apples and the total weight is 3 pounds, it would be a reasonable summary to say they are about a half pound each.\nBut pumpkins are more diverse. Suppose I grow several varieties in my gar- den, and one day I harvest three decorative pumpkins that are 1 pound each, two pie pumpkins that are 3 pounds each, and one Atlantic Giant⃝R pumpkin that weighs 591 pounds. The mean of this sample is 100 pounds, but if I told you “The average pumpkin in my garden is 100 pounds,” that would be misleading. In this example, there is no meaningful average because there is no typical pumpkin.",
"print(\"Variance: \",purchased_customer.Age.var())\nprint(\"Standard Deviation: \",purchased_customer.Age.std())\n\nprint(\"Variance: \",no_purchase.Age.var())\nprint(\"Standard Deviation:\",no_purchase.Age.std())",
"Effect Size\nAn effect size is a summary statistic intended to describe (wait for it) the size of an effect. For example, to describe the difference between two groups, one obvious choice is the difference in the means.\nhttps://en.wikipedia.org/wiki/Effect_size",
"purchased_customer.Age.mean() - no_purchase.Age.mean()\n\npurchased_customer[\"EstimatedSalary\"].mean() - no_purchase[\"EstimatedSalary\"].mean()\n\ndef CohenEffectSize(group1, group2):\n diff = group1.mean() - group2.mean()\n var1 = group1.var()\n var2 = group2.var()\n n1, n2 = len(group1), len(group2)\n pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2)\n d = diff / math.sqrt(pooled_var)\n return d\n\nCohenEffectSize(purchased_customer.Age, no_purchase.Age)\n\n#Effect Size\n13.6/max(social_network[\"Age\"])*100",
"PMF - Probability Mass Function\nPurchased",
"pmf_age_purchased = {}\nfor age in purchased_customer[\"Age\"].value_counts().index:\n pmf_age_purchased[age] = purchased_customer[purchased_customer[\"Age\"]==age][\"Age\"].count() / purchased_customer[\"Age\"].shape[0]\n\n#The Pmf is normalized so total probability is 1.\nsum(list(pmf_age_purchased.values()))",
"Not Purchased",
"pmf_age_no_purchased = {}\nfor age in no_purchase[\"Age\"].value_counts().index:\n pmf_age_no_purchased[age] = no_purchase[no_purchase[\"Age\"]==age][\"Age\"].count() / no_purchase[\"Age\"].shape[0]\n\nsum(list(pmf_age_no_purchased.values()))",
"Difference between Hist and PMF\nThe biggest difference is that a Hist maps from values to integer counters; a Pmf maps from values to floating-point probabilities.",
"plt.bar(pmf_age_no_purchased.keys(), pmf_age_no_purchased.values())\nplt.bar(pmf_age_purchased.keys(), pmf_age_purchased.values())\n\n#27-41\nages = range(27, 41)\ndiffs = []\nfor age in ages:\n p1 = pmf_age_purchased[age]\n p2 = pmf_age_no_purchased[age]\n diff = 100 * (p1 - p2)\n diffs.append(diff)\nplt.bar(ages, diffs)",
"Dataframe Indexing",
"### Create Dataframe from array\n\narray = np.random.randn(4, 2)\ndf = pd.DataFrame(array)\n\ndf\n\ncolumns = ['A', 'B']\ndf = pd.DataFrame(array, columns=columns)\n\nindex = ['a', 'b', 'c', 'd']\ndf = pd.DataFrame(array, columns=columns, index=index)\ndf",
"To select a row by label, you can use the loc attribute, which returns a Series",
"df.loc['a']",
"If the integer position of a row is known, rather than its label, you can use the iloc attribute, which also returns a Series.",
"df.iloc[0]\n\nindices = ['a', 'c']\ndf.loc[indices]\n\ndf['a':'c']\n\ndf[0:2]",
"Above the result in either case is a DataFrame, but notice that the first result includes the end of the slice; the second doesn’t.\nLimits of PMFs\nPMFs work well if the number of values is small. But as the number of values increases, the probability associated with each value gets smaller and the effect of random noise increases.\nFor example, if we are interested in the distribution of age.\nThe parts of this figure are hard to interpret. There are many spikes and valleys, and some apparent differences between the distributions. It is hard to tell which of these features are meaningful. Also, it is hard to see overall patterns; for example, which distribution do you think has the higher mean?\nThese problems can be mitigated by binning the data; that is, dividing the range of values into non-overlapping intervals and counting the number of values in each bin. Binning can be useful, but it is tricky to get the size of the bins right. If they are big enough to smooth out noise, they might also smooth out useful information.\nAn alternative that avoids these problems is the cumulative distribution function (CDF), which is the subject of this chapter. But before I can explain CDFs, I have to explain percentiles.\nCumulative Distribution Function\nThe difference between “percentile” and “percentile rank” can be confusing, and people do not always use the terms precisely. To summarize, PercentileRank takes a value and computes its percentile rank in a set of values; Percentile takes a percentile rank and computes the corresponding value.",
"def PercentileRank(scores, your_score):\n count = 0\n for score in scores:\n if score <= your_score:\n count += 1\n percentile_rank = 100.0 * count / len(scores)\n return percentile_rank\n\nsocial_network.dropna(inplace=True)\n\nsalary = list(social_network[\"EstimatedSalary\"])\n\nmy_sal = 100000\n\nPercentileRank(salary, my_sal)\n\ndef Percentile(scores, percentile_rank):\n scores.sort()\n for score in scores:\n if PercentileRank(scores, score) >= percentile_rank:\n return score\n\nPercentile(salary, 50)\n\ndef Percentile2(scores, percentile_rank):\n scores.sort()\n index = percentile_rank * (len(scores)-1) // 100\n return scores[index]\n\nPercentile2(salary, 50)",
"CDF\nThe CDF is the function that maps from a value to its percentile rank.\nThe CDF is a function of x, where x is any value that might appear in the distribution. To evaluate CDF(x) for a particular value of x, we compute the fraction of values in the distribution less than or equal to x.",
"# This function is almost identical to PercentileRank, except that the result is a probability in the range 0–1 \n# rather than a percentile rank in the range 0–100.\n\ndef EvalCdf(sample, x):\n count = 0.0\n for value in sample:\n if value <= x:\n count += 1\n prob = count / len(sample)\n return prob\n\nsample = [1, 2, 2, 3, 5]\n\nEvalCdf(sample, 2)\n\nEvalCdf(sample, 3)\n\nEvalCdf(sample, 0)\n\nno_purchase_prob = []\nfor age in sorted(no_purchase[\"Age\"]):\n no_purchase_prob.append(EvalCdf(no_purchase[\"Age\"], age))\n\npurchase_prob = []\nfor age in sorted(purchased_customer[\"Age\"]):\n purchase_prob.append(EvalCdf(purchased_customer[\"Age\"], age))\n\ndef step_plot(values, probabilities, xlabel, ylabel = \"CDF probability\"):\n plt.step(values, probabilities)\n plt.grid()\n plt.xlabel(xlabel)\n plt.ylabel(ylabel)",
"One way to read a CDF is to look up percentiles. For example, it looks like about 90% of people are aged less than 40 years, who didnt make a purchase. The CDF also provides a visual representation of the shape of the distribution. Common values appear as steep or vertical sections of the CDF; in this example, the mode at 35 years is apparent.",
"step_plot(sorted(no_purchase[\"Age\"]), no_purchase_prob, \"Age\")",
"It looks like about only 30% or less of people are aged less than 40 years, who made a purchase. Remaining 70%le people are aged above 40.",
"step_plot(sorted(purchased_customer[\"Age\"]), purchase_prob, \"Age\")",
"Estimated Salary (Purchase vs No Purchase)",
"no_purchase_prob = []\nfor sal in sorted(no_purchase[\"EstimatedSalary\"]):\n no_purchase_prob.append(EvalCdf(no_purchase[\"EstimatedSalary\"], sal))\n\npurchase_prob = []\nfor sal in sorted(purchased_customer[\"EstimatedSalary\"]):\n purchase_prob.append(EvalCdf(purchased_customer[\"EstimatedSalary\"], sal))",
"Under No purchase curve(Blue), the curve remains flat after 90K with minor bilps after that. But under purchase curve(orange), the curve keeps the steps increasing even after 90K, which suggest people with more salary have more purchasing power. And all above 50%le have 90K or more.",
"step_plot(sorted(no_purchase[\"EstimatedSalary\"]), no_purchase_prob, \"Estimated Salary\")\nstep_plot(sorted(purchased_customer[\"EstimatedSalary\"]), purchase_prob, \"Estimated Salary\")",
"Quantiles: https://en.wikipedia.org/wiki/Quantile",
"Percentile(list(purchased_customer[\"EstimatedSalary\"]),75)",
"Percentile ranks are useful for comparing measurements across different groups. For example, people who compete in foot races are usually grouped by age and gender. To compare people in different age groups, you can convert race times to percentile ranks.\nA few years ago I ran the James Joyce Ramble 10K in Dedham MA; I finished in 42:44, which was 97th in a field of 1633. I beat or tied 1537 runners out of 1633, so my percentile rank in the field is 94%.\nMore generally, given position and field size, we can compute percentile rank:",
"def PositionToPercentile(position, field_size):\n beat = field_size - position + 1\n percentile = 100.0 * beat / field_size\n return percentile",
"In my age group, denoted M4049 for “male between 40 and 49 years of age”, I came in 26th out of 256. So my percentile rank in my age group was 90%.\nIf I am still running in 10 years (and I hope I am), I will be in the M5059 division. Assuming that my percentile rank in my division is the same, how much slower should I expect to be?\nI can answer that question by converting my percentile rank in M4049 to a position in M5059. Here’s the code:",
"def PercentileToPosition(percentile, field_size):\n beat = percentile * field_size / 100.0\n position = field_size - beat + 1\n return position",
"There were 171 people in M5059, so I would have to come in between 17th and 18th place to have the same percentile rank. The finishing time of the 17th runner in M5059 was 46:05, so that’s the time I will have to beat to maintain my percentile rank.\nModeling Distributions\nExponential Distribution\nThe CDF of the exponential distribution is\n CDF(x) = 1 − e^(−λx)\n\nThe parameter, λ, determines the shape of the distribution.In the real world, exponential distributions come up when we look at a series of events and measure the times between events, called interarrival times. If the events are equally likely to occur at any time, the distribution of interarrival times tends to look like an exponential distribution.",
"babyboom = pd.read_csv('babyboom.dat',sep=\" \", header=None)\n\nbabyboom.columns = [\"time\", \"gender\", \"weight\", \"minutes\"]\n\ndiffs = list(babyboom.minutes.diff())\n\ne_cdf = []\nl = 0.5\ndef exponential_distribution(x):\n e_cdf.append(1 - math.exp(-1* l * x))",
"Normal Distribution\nThe normal distribution, also called Gaussian, is commonly used because it describes many phenomena, at least approximately. It turns out that there is a good reason for its ubiquity.\nThe normal distribution is characterized by two parameters: the mean, μ, and standard deviation σ. The normal distribution with μ = 0 and σ = 1 is called the standard normal distribution. Its CDF is defined by an integral that does not have a closed form solution, but there are algorithms that evaluate it efficiently.",
"def EvalNormalCdf(x, mu=0, sigma=1):\n return stat.norm.cdf(x, loc=mu, scale=sigma)\n\nmu = social_network[\"Age\"].mean()\n\nsigma = social_network[\"Age\"].std()\n\nstep_plot(sorted(social_network[\"Age\"]),EvalNormalCdf(sorted(social_network[\"Age\"]), mu=mu, sigma=sigma), \"Age\")",
"Pareto Distribution https://en.wikipedia.org/wiki/Pareto_distribution\nPreferential Attachment https://en.wikipedia.org/wiki/Preferential_attachment\nProbability Distribution Function\nThe derivative of CDF is PDF",
"Image('PDF.png')",
"Evaluating a PDF for a particular value of x is usually not useful. The result\n is not a probability; it is a probability density.\nIn physics, density is mass per unit of volume; in order to get a mass, you have to multiply by volume or, if the density is not constant, you have to integrate over volume.\nSimilarly, probability density measures probability per unit of x. In order to get a probability mass, you have to integrate over x.",
"Xs = sorted(social_network[\"Age\"])\n\nmean, std = social_network[\"Age\"].mean(), social_network[\"Age\"].std()\nPDF = stat.norm.pdf(Xs, mean, std)\n\nstep_plot(Xs, PDF, \"Age\", ylabel=\"Density\")",
"Kernel Density Estimation\nKernel density estimation (KDE) is an algorithm that takes a sample and finds an appropriately smooth PDF that fits the data.\nhttps://en.wikipedia.org/wiki/Kernel_density_estimation",
"sample = [random.gauss(mean, std) for i in range(500)]\n\nKernel_density_estimate = stat.gaussian_kde(sample)\n\nsample_pdf = Kernel_density_estimate.evaluate(sorted(social_network[\"Age\"]))\n\nstep_plot(Xs, PDF, \"Age\", ylabel=\"Density\")\nstep_plot(Xs, sample_pdf, \"Age\", ylabel=\"Density\")",
"Estimating a density function with KDE is useful for several purposes:\n\nVisualization: During the exploration phase of a project, CDFs are usually the best visualization of a distribution. After you look at a CDF, you can decide whether an estimated PDF is an appropriate model of the distribution. If so, it can be a better choice for presenting the distribution to an audience that is unfamiliar with CDFs.\nInterpolation: An estimated PDF is a way to get from a sample to a model of the population. If you have reason to believe that the population distribution is smooth, you can use KDE to interpolate the density for values that don’t appear in the sample.\nSimulation: Simulations are often based on the distribution of a sample. If the sample size is small, it might be appropriate to smooth the sample distribution using KDE, which allows the simulation to explore more possible outcomes, rather than replicating the observed data.\n\nThe distribution framework\nWe started with PMFs, which represent the probabilities for a discrete set of values. To get from a PMF to a CDF, you \nadd up the probability masses to get cumulative probabilities. To get from a CDF back to a PMF, you compute differences in cumulative probabilities.\nA PDF is the derivative of a continuous CDF; or, equivalently, a CDF is the integral of a PDF. Remember that a PDF maps from values to probability densities; to get a probability, you have to integrate.\nTo get from a discrete to a continuous distribution, you can perform various kinds of smoothing. One form of smoothing is to assume that the data come from an analytic continuous distribution (like exponential or normal) and to estimate the parameters of that distribution. Another option is kernel density estimation.\nThe opposite of smoothing is discretizing, or quantizing. If you evaluate a PDF at discrete points, you can generate a PMF that is an approximation of the PDF. 
You can get a better approximation using numerical integration.\nTo distinguish between continuous and discrete CDFs, it might be better for a discrete CDF to be a “cumulative mass function,” but as far as I can tell no one uses that term.",
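The PMF/CDF round trip described above can be sketched with cumulative sums and differences (the sample values here are illustrative):

```python
import numpy as np

values = [1, 2, 3, 5]
pmf = np.array([0.2, 0.4, 0.1, 0.3])   # probability masses, sum to 1

# PMF -> CDF: running sum of the probability masses
cdf = np.cumsum(pmf)                   # approximately [0.2, 0.6, 0.7, 1.0]

# CDF -> PMF: differences of consecutive cumulative probabilities
recovered = np.diff(cdf, prepend=0.0)  # recovers the original PMF
```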
"Image('distributions.png')",
"Pmf and Hist are almost the same thing, except that a Pmf maps values to floating-point probabilities, rather than integer frequencies. If the sum of the probabilities is 1, the Pmf is normalized. Pmf provides Normalize, which computes the sum of the probabilities and divides through by a factor\nhttps://en.wikipedia.org/wiki/Moment_of_inertia\nSkewness\nSkewness is a property that describes the shape of a distribution. If the distribution is symmetric around its central tendency, it is unskewed. If the values extend farther to the right, it is “right skewed” and if the values extend left, it is “left skewed.”\nThis use of “skewed” does not have the usual connotation of “biased.” Skewness only describes the shape of the distribution; it says nothing about whether the sampling process might have been biased.\nA way to evaluate the asymmetry of a distribution is to look at the relationship between the mean and median. Extreme values have more effect on the mean than the median, so in a distribution that skews left, the mean is less than the median. In a distribution that skews right, the mean is greater.\nPearson’s median skewness coefficient is a measure of skewness based on the difference between the sample mean and median:\n gp = 3(x ̄ − m)/S\n\nWhere x ̄ is the sample mean, m is the median, and S is the standard deviation.\nThe sign of the skewness coefficient indicates whether the distribution skews left or right, but other than that, they are hard to interpret. Sample skewness is less robust; that is, it is more susceptible to outliers. As a result it is less reliable when applied to skewed distributions, exactly when it would be most relevant.\nPearson’s median skewness is based on a computed mean and variance, so it is also susceptible to outliers, but since it does not depend on a third moment, it is somewhat more robust.\nRelationship between two variables\nTwo variables are related if knowing one gives you information about the other. 
For example, height and weight are related; people who are taller tend to be heavier. Of course, it is not a perfect relationship: there are short heavy people and tall light ones. But if you are trying to guess someone’s weight, you will be more accurate if you know their height than if you don’t.\nScatter Plot",
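The coefficient above can be computed directly; this is a small sketch, not ThinkStats' own implementation:

```python
import statistics

def pearson_median_skewness(xs):
    # gp = 3 * (mean - median) / standard deviation
    mean = statistics.mean(xs)
    median = statistics.median(xs)
    s = statistics.pstdev(xs)  # population standard deviation
    return 3 * (mean - median) / s

# A sample with one large value skews right: mean > median, so gp > 0.
print(pearson_median_skewness([1, 2, 2, 3, 50]))

# A symmetric sample has mean == median, so gp == 0.
print(pearson_median_skewness([1, 2, 3]))
```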
"mall_customer = pd.read_csv(\"Mall_Customers.csv\")\n\nmall_customer.isnull().sum()",
"Overlapping data points look darker, so darkness is proportional to density. In this version of the plot we can see two details that were not apparent before: vertical clusters at Annual income 57k$. \nJittering: https://blogs.sas.com/content/iml/2011/07/05/jittering-to-prevent-overplotting-in-statistical-graphics.html",
"plt.scatter(mall_customer[\"Age\"], mall_customer[\"Annual Income (k$)\"],alpha=0.2)\nplt.grid()\nplt.ylabel(\"Annual Income\")\nplt.xlabel(\"Age\")",
"HexBin for large Dataset\nTo handle larger datasets, another option is a hexbin plot, which divides the graph into hexagonal bins and colors each bin according to how many data points fall in it. An advantage of a hexbin is that it shows the shape of the relationship well, and it is efficient for large datasets, both in time and in the size of the file it generates. A drawback is that it makes the outliers invisible.\nCharacterizing the Relationship\nScatter plots provide a general impression of the relationship between vari- ables, but there are other visualizations that provide more insight into the nature of the relationship. One option is to bin one variable and plot percentiles of the other.",
"mall_customer.Age.describe()",
"Digitize computes the index of the bin that contains each value in df.htm3. The result is a NumPy array of integer indices. Values that fall below the lowest bin are mapped to index 0. Values above the highest bin are mapped to len(bins).",
"bins = np.arange(18, 75, 5)\n\nindices = np.digitize(mall_customer.Age, bins)",
"groupby is a DataFrame method that returns a GroupBy object; used in a for loop, groups iterates the names of the groups and the DataFrames that represent them.",
"groups = mall_customer.groupby(indices)",
"So, for example, we can print the number of rows in each group like this:",
"for i, group in groups:\n print(i, len(group))\n\nfor i, group in groups:\n print(i, len(group))\n\nages = [group.Age.mean() for i, group in groups]\n\n#heights\n\ncdf_group_income = defaultdict(list)\nfor i, grp in groups:\n for income in grp[\"Annual Income (k$)\"]:\n cdf_group_income[i].append(EvalCdf(grp[\"Annual Income (k$)\"], income))\n\nfor percent in [75, 50, 25]:\n incomes = [Percentile(cdf_group_income[k], percent) for k,v in cdf_group_income.items()]\n label = '%dth' %percent\n plt.plot(ages, incomes)\n plt.xlabel(\"Age\")\n plt.ylabel(\"Annual Income\")",
"Correlation\nA correlation is a statistic intended to quantify the strength of the relationship between two variables.\nA challenge in measuring correlation is that the variables we want to compare are often not expressed in the same units. And even if they are in the same units, they come from different distributions.\nThere are two common solutions to these problems:\n1. Transform each value to a standard score, which is the number of standard deviations from the mean. This transform leads to the “Pearson product-moment correlation coefficient.”\n2. Transform each value to its rank, which is its index in the sorted list of values. This transform leads to the “Spearman rank correlation coefficient.”\nIf X is a series of n values, xi, we can convert to standard scores by subtracting the mean and dividing by the standard deviation: zi = (xi − μ)/σ.\nThe numerator is a deviation: the distance from the mean. Dividing by σ standardizes the deviation, so the values of Z are dimensionless (no units) and their distribution has mean 0 and variance 1.\nIf X is normally distributed, so is Z. But if X is skewed or has outliers, so does Z; in those cases, it is more robust to use percentile ranks. If we compute a new variable, R, so that ri is the rank of xi, the distribution of R is uniform from 1 to n, regardless of the distribution of X.\nCovariance\nCovariance is a measure of the tendency of two variables to vary together.\nIf we have two series, X and Y , their deviations from the mean are \n dxi = xi − x ̄\n dyi = yi − y ̄\n\nwhere x ̄ is the sample mean of X and y ̄ is the sample mean of Y. If X and Y vary together, their deviations tend to have the same sign.\nIf we multiply them together, the product is positive when the deviations have the same sign and negative when they have the opposite sign. 
So adding up the products gives a measure of the tendency to vary together.\nCovariance is the mean of these products: Cov(X,Y)= 1/n * SUMMATION (dxi*dyi)\nwhere n is the length of the two series (they have to be the same length).\nIf you have studied linear algebra, you might recognize that Cov is the dot product of the deviations, divided by their length. So the covariance is maximized if the two vectors are identical, 0 if they are orthogonal, and negative if they point in opposite directions.",
"def Cov(xs, ys, meanx=None, meany=None):\n xs = np.asarray(xs)\n ys = np.asarray(ys)\n if meanx is None:\n meanx = np.mean(xs)\n if meany is None:\n meany = np.mean(ys)\n cov = np.dot(xs-meanx, ys-meany) / len(xs)\n return cov",
"By default Cov computes deviations from the sample means, or you can provide known means. If xs and ys are Python sequences, np.asarray converts them to NumPy arrays. If they are already NumPy arrays, np.asarray does nothing.\nThis implementation of covariance is meant to be simple for purposes of explanation. NumPy and pandas also provide implementations of covariance, but both of them apply a correction for small sample sizes that we have not covered yet, and np.cov returns a covariance matrix, which is more than we need for now.\nPearson Correlation\nCovariance is useful in some computations, but it is seldom reported as a summary statistic because it is hard to interpret. Among other problems, its units are the product of the units of X and Y.\nOne solution to this problem is to divide the deviations by the standard deviations, which yields standard scores, and compute the product of standard scores:\n\n    pi = ((xi − x̄)/SX) * ((yi − ȳ)/SY)\n\nwhere SX and SY are the standard deviations of X and Y. The mean of these products is\n\n    ρ = 1/n SUMMATION pi\n\nOr we can rewrite ρ by factoring out SX and SY:\n\n    ρ = Cov(X,Y)/(SX*SY)\n\nThis value is called Pearson’s correlation after Karl Pearson, an influential early statistician. It is easy to compute and easy to interpret. Because standard scores are dimensionless, so is ρ.",
"def Corr(xs, ys):\n xs = np.asarray(xs)\n ys = np.asarray(ys)\n meanx, varx = np.mean(xs), np.var(xs)\n meany, vary = np.mean(ys), np.var(ys)\n corr = Cov(xs, ys, meanx, meany) / math.sqrt(varx * vary)\n return corr",
"Corr computes the mean and variance of each series, then passes the means to Cov so they are not computed twice.\nPearson’s correlation is always between -1 and +1 (including both). If ρ is positive, we say that the correlation is positive, which means that when one variable is high, the other tends to be high. If ρ is negative, the correlation is negative, so when one variable is high, the other is low.\nThe magnitude of ρ indicates the strength of the correlation. If ρ is 1 or -1, the variables are perfectly correlated, which means that if you know one, you can make a perfect prediction about the other.\nMost correlation in the real world is not perfect, but it is still useful. The correlation of height and weight is 0.51, which is a strong correlation compared to similar human-related variables.\nNonlinear Relationship\nIf Pearson’s correlation is near 0, it is tempting to conclude that there is no relationship between the variables, but that conclusion is not valid. Pearson’s correlation only measures linear relationships. If there’s a nonlinear relationship, ρ understates its strength.\nhttps://wikipedia.org/wiki/Correlation_and_dependence\nLook at a scatter plot of your data before blindly computing a correlation coefficient.\nSpearman’s rank correlation\nPearson’s correlation works well if the relationship between variables is linear and if the variables are roughly normal. But it is not robust in the presence of outliers. Spearman’s rank correlation is an alternative that mitigates the effect of outliers and skewed distributions.\nTo compute Spearman’s correlation, we have to compute the rank of each value, which is its index in the sorted sample. For example, in the sample [1, 2, 5, 7] the rank of the value 5 is 3, because it appears third in the sorted list. Then we compute Pearson’s correlation for the ranks.",
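The nonlinearity caveat is easy to demonstrate with made-up data (an added sketch, not from the original analysis): for a perfect but non-monotonic relationship such as y = x², Pearson's correlation is 0 when the xs are symmetric around zero.

```python
import numpy as np

xs = np.arange(-5.0, 6.0)   # symmetric around 0
ys = xs ** 2                # perfect nonlinear (quadratic) dependence

# Pearson correlation via np.corrcoef
r = np.corrcoef(xs, ys)[0, 1]
print(r)   # essentially 0: Pearson misses the relationship entirely
```

Despite y being completely determined by x, the linear correlation vanishes, which is why a scatter plot should come first.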
"def SpearmanCorr(xs, ys):\n xranks = pd.Series(xs).rank()\n yranks = pd.Series(ys).rank()\n return Corr(xranks, yranks)",
"I convert the arguments to pandas Series objects so I can use rank, which computes the rank for each value and returns a Series. Then I use Corr to compute the correlation of the ranks.\nI could also use Series.corr directly and specify Spearman’s method:",
"def SpearmanCorr(xs, ys):\n xs = pd.Series(xs)\n ys = pd.Series(ys)\n return xs.corr(ys, method='spearman')\n\nSpearmanCorr(mall_customer[\"Age\"], mall_customer[\"Annual Income (k$)\"])\n\nSpearmanCorr(mall_customer[\"Annual Income (k$)\"], mall_customer[\"Spending Score (1-100)\"])\n\nSpearmanCorr(mall_customer[\"Age\"], mall_customer[\"Spending Score (1-100)\"])\n\nSpearmanCorr(social_network[\"Age\"], social_network[\"EstimatedSalary\"])\n\nCorr(social_network[\"Age\"], social_network[\"EstimatedSalary\"])",
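To see the difference between the two coefficients on a small made-up example (an addition, not part of the original analysis): for a monotonic but nonlinear relationship, Spearman's correlation is exactly 1 while Pearson's is smaller.

```python
import pandas as pd

xs = pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
ys = xs ** 3   # monotonic but nonlinear, so ranks line up perfectly

pearson = xs.corr(ys)                       # < 1: relationship is not linear
spearman = xs.corr(ys, method='spearman')   # 1: the rank orders are identical
print(pearson, spearman)
```

Because Spearman only sees ranks, any strictly increasing transformation of ys leaves it unchanged.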
"The Spearman rank correlation for the BRFSS data is 0.54, a little higher than the Pearson correlation, 0.51. There are several possible reasons for the difference, including:\n\nIf the relationship is nonlinear, Pearson’s correlation tends to underestimate the strength of the relationship, and\nPearson’s correlation can be affected (in either direction) if one of the distributions is skewed or contains outliers. Spearman’s rank correlation is more robust.\n\nCorrelation and causation\nIf variables A and B are correlated, there are three possible explanations: A causes B, or B causes A, or some other set of factors causes both A and B. These explanations are called “causal relationships”.\nCorrelation alone does not distinguish between these explanations, so it does not tell you which ones are true. This rule is often summarized with the phrase “Correlation does not imply causation,” which is so pithy it has its own Wikipedia page: http://wikipedia.org/wiki/Correlation_does_not_imply_causation.\nSo what can you do to provide evidence of causation?\n1. Use time. If A comes before B, then A can cause B but not the other way around (at least according to our common understanding of causation). The order of events can help us infer the direction of causation, but it does not preclude the possibility that something else causes both A and B.\n2. Use randomness. If you divide a large sample into two groups at random and compute the means of almost any variable, you expect the difference to be small.
If the groups are nearly identical in all variables but one, you can eliminate spurious relationships.\nThis works even if you don’t know what the relevant variables are, but it works even better if you do, because you can check that the groups are identical.\nThese ideas are the motivation for the randomized controlled trial, in which subjects are assigned randomly to two (or more) groups: a treatment group that receives some kind of intervention, like a new medicine, and a control group that receives no intervention, or another treatment whose effects are known.\nA randomized controlled trial is the most reliable way to demonstrate a causal relationship, and the foundation of science-based medicine (see http://wikipedia.org/wiki/Randomized_controlled_trial).\nUnfortunately, controlled trials are only possible in the laboratory sciences, medicine, and a few other disciplines. \nIn the social sciences, controlled experiments are rare, usually because they are impossible or unethical.\nAn alternative is to look for a natural experiment, where different “treatments” are applied to groups that are otherwise similar. One danger of natural experiments is that the groups might differ in ways that are not apparent. You can read more about this topic at http://wikipedia.org/wiki/Natural_experiment.\nEstimation\nLet’s play a game. I think of a distribution, and you have to guess what it is. I’ll give you two hints: it’s a normal distribution, and here’s a random sample drawn from it:\n[-0.441, 1.774, -0.101, -1.138, 2.975, -2.138]\nWhat do you think is the mean parameter, μ, of this distribution?\nOne choice is to use the sample mean, x̄, as an estimate of μ. In this example, x̄ is 0.155, so it would be reasonable to guess μ = 0.155. This process is called estimation, and the statistic we used (the sample mean) is called an estimator.\nUsing the sample mean to estimate μ is so obvious that it is hard to imagine a reasonable alternative.
But suppose we change the game by introducing outliers.\nEstimation if Outliers Exist\nI’m thinking of a distribution. It’s a normal distribution, and here’s a sample that was collected by an unreliable surveyor who occasionally puts the decimal point in the wrong place.\n[-0.441, 1.774, -0.101, -1.138, 2.975, -213.8]\nNow what’s your estimate of μ? If you use the sample mean, your guess is -35.12. Is that the best choice? What are the alternatives?\nOne option is to identify and discard outliers, then compute the sample mean of the rest. Another option is to use the median as an estimator.\nWhich estimator is best depends on the circumstances (for example, whether there are outliers) and on what the goal is. Are you trying to minimize errors, or maximize your chance of getting the right answer?\nIf there are no outliers, the sample mean minimizes the mean squared error (MSE).\nThat is, if we play the game many times, and each time compute the error x̄ − μ, the sample mean minimizes\n\n    MSE = 1/m SUMMATION (x̄ − μ)^2\n\nwhere m is the number of times you play the estimation game, not to be confused with n, which is the size of the sample used to compute x̄.\nHere is a function that simulates the estimation game and computes the root mean squared error (RMSE), which is the square root of MSE:",
"def RMSE(estimates, actual):\n e2 = [(estimate-actual)**2 for estimate in estimates]\n mse = np.mean(e2)\n return math.sqrt(mse)\n\ndef Estimate1(n=7, m=1000):\n mu = 0\n sigma = 1\n means = []\n medians = []\n for _ in range(m):\n xs = [random.gauss(mu, sigma) for i in range(n)]\n xbar = np.mean(xs)\n median = np.median(xs)\n means.append(xbar)\n medians.append(median)\n print('rmse xbar', RMSE(means, mu))\n print('rmse median', RMSE(medians, mu))\n\nEstimate1()",
"estimates is a list of estimates; actual is the actual value being estimated. In practice, of course, we don’t know actual; if we did, we wouldn’t have to estimate it. The purpose of this experiment is to compare the performance of the two estimators.\nWhen I ran this code, the RMSE of the sample mean was 0.38, which means that if we use x̄ to estimate the mean of this distribution, based on a sample with n = 7, we should expect to be off by 0.38 on average. Using the median to estimate the mean yields RMSE 0.45, which confirms that x̄ yields lower RMSE, at least for this example.\nMinimizing MSE is a nice property, but it’s not always the best strategy. For example, suppose we are estimating the distribution of wind speeds at a building site. If the estimate is too high, we might overbuild the structure, increasing its cost. But if it’s too low, the building might collapse. Because cost as a function of error is not symmetric, minimizing MSE is not the best strategy.\nAs another example, suppose I roll three six-sided dice and ask you to predict the total. If you get it exactly right, you get a prize; otherwise you get nothing. In this case the value that minimizes MSE is 10.5, but that would be a bad guess, because the total of three dice is never 10.5. For this game, you want an estimator that has the highest chance of being right, which is a maximum likelihood estimator (MLE). If you pick 10 or 11, your chance of winning is 1 in 8, and that’s the best you can do.\nEstimate Variance\nI’m thinking of a distribution. It’s a normal distribution, and here’s a (familiar) sample:\n[-0.441, 1.774, -0.101, -1.138, 2.975, -2.138]\nWhat do you think is the variance, σ2, of my distribution? Again, the obvious choice is to use the sample variance, S^2, as an estimator.\n\n    S^2 = 1/n SUMMATION (xi − x̄)^2\n\nFor large samples, S^2 is an adequate estimator, but for small samples it tends to be too low.
Because of this unfortunate property, it is called a biased estimator. An estimator is unbiased if the expected total (or mean) error, after many iterations of the estimation game, is 0.\nFortunately, there is another simple statistic that is an unbiased estimator of σ2:",
"Image('unbiased_estimator.png')",
"For an explanation of why S^2 is biased, and a proof that (Sn−1)^2 is unbiased, see http://wikipedia.org/wiki/Bias_of_an_estimator.\nThe biggest problem with this estimator is that its name and symbol are used inconsistently. The name “sample variance” can refer to either S^2 or (Sn−1)^2, and the symbol S^2 is used for either or both.\nHere is a function that simulates the estimation game and tests the performance of S^2 and (Sn−1)^2:",
"def Estimate2(n=7, m=1000):\n mu = 0\n sigma = 1\n estimates1 = []\n estimates2 = []\n for _ in range(m):\n xs = [random.gauss(mu, sigma) for i in range(n)]\n biased = np.var(xs)\n unbiased = np.var(xs, ddof=1)\n estimates1.append(biased)\n estimates2.append(unbiased)\n print('mean error biased', MeanError(estimates1, sigma**2))\n print('mean error unbiased', MeanError(estimates2, sigma**2))\n",
"Again, n is the sample size and m is the number of times we play the game. np.var computes S^2 by default and (Sn−1)^2 if you provide the argument ddof=1, which stands for “delta degrees of freedom.” \nDOF: http://en.wikipedia.org/wiki/Degrees_of_freedom_(statistics).\nMean Error\nMeanError computes the mean difference between the estimates and the actual value:",
"def MeanError(estimates, actual):\n errors = [estimate-actual for estimate in estimates]\n return np.mean(errors)",
"When I ran this code, the mean error for S^2 was -0.13. As expected, this biased estimator tends to be too low. For (Sn−1)^2, the mean error was 0.014, about 10 times smaller. As m increases, we expect the mean error for (Sn−1)^2 to approach 0.\nProperties like MSE and bias are long-term expectations based on many iterations of the estimation game.\nBut when you apply an estimator to real data, you just get one estimate. It would not be meaningful to say that the estimate is unbiased; being unbiased is a property of the estimator, not the estimate.\nAfter you choose an estimator with appropriate properties, and use it to generate an estimate, the next step is to characterize the uncertainty of the estimate.",
"#Estimate2()",
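The ddof correction mentioned above has a simple closed form: the unbiased estimate equals the biased one scaled by n/(n−1). A quick check (added here for illustration, using the familiar sample):

```python
import numpy as np

xs = np.array([-0.441, 1.774, -0.101, -1.138, 2.975, -2.138])
n = len(xs)

biased = np.var(xs)            # S^2, divides by n
unbiased = np.var(xs, ddof=1)  # S_{n-1}^2, divides by n - 1

# the two estimators differ by exactly a factor of n / (n - 1)
print(biased * n / (n - 1), unbiased)
```

This is why the correction matters less as n grows: n/(n−1) approaches 1.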
"Sampling Distributions\nSuppose you are a scientist studying gorillas in a wildlife preserve. You want to know the average weight of the adult female gorillas in the preserve. To weigh them, you have to tranquilize them, which is dangerous, expensive, and possibly harmful to the gorillas. But if it is important to obtain this information, it might be acceptable to weigh a sample of 9 gorillas. Let’s assume that the population of the preserve is well known, so we can choose a representative sample of adult females. We could use the sample mean, x̄, to estimate the unknown population mean, μ.\nHaving weighed 9 female gorillas, you might find x̄ = 90 kg and sample standard deviation, S = 7.5 kg. The sample mean is an unbiased estimator of μ, and in the long run it minimizes MSE. So if you report a single estimate that summarizes the results, you would report 90 kg.\nBut how confident should you be in this estimate? If you only weigh n = 9 gorillas out of a much larger population, you might be unlucky and choose the 9 heaviest gorillas (or the 9 lightest ones) just by chance. Variation in the estimate caused by random selection is called sampling error.\nSampling Error\nTo quantify sampling error, we can simulate the sampling process with hypothetical values of μ and σ, and see how much x̄ varies.\nSince we don’t know the actual values of μ and σ in the population, we’ll use the estimates x̄ and S. So the question we answer is: “If the actual values of μ and σ were 90 kg and 7.5 kg, and we ran the same experiment many times, how much would the estimated mean, x̄, vary?”",
"def SimulateSample(mu=90, sigma=7.5, n=9, m=1000):\n means = []\n for j in range(m):\n xs = np.random.normal(mu, sigma, n)\n xbar = np.mean(xs)\n means.append(xbar)\n \n return sorted(means)",
"mu and sigma are the hypothetical values of the parameters. n is the sample size, the number of gorillas we measured. m is the number of times we run the simulation.",
"means = SimulateSample()\n\ncdfs = [EvalCdf(means,m) for m in means]\nplt.step(sorted(means),cdfs)\n\nci_5 = Percentile(means, 5)\nci_95 = Percentile(means, 95)\nprint(ci_5, ci_95)\nstderr = RMSE(means, 90)\n\nstderr",
"In each iteration, we choose n values from a normal distribution with the given parameters, and compute the sample mean, xbar. We run 1000 simulations and then compute the distribution, cdf, of the estimates. The result is shown in the plot above. This distribution is called the sampling distribution of the estimator. It shows how much the estimates would vary if we ran the experiment over and over.\nThe mean of the sampling distribution is pretty close to the hypothetical value of μ, which means that the experiment yields the right answer, on average. After 1000 tries, the lowest result is 82 kg, and the highest is 98 kg. This range suggests that the estimate might be off by as much as 8 kg.\nThere are two common ways to summarize the sampling distribution:\n* Standard error (SE) is a measure of how far we expect the estimate to be off, on average. For each simulated experiment, we compute the error, x̄ − μ, and then compute the root mean squared error (RMSE). In this example, it is roughly 2.5 kg.\n* A confidence interval (CI) is a range that includes a given fraction of the sampling distribution. For example, the 90% confidence interval is the range from the 5th to the 95th percentile. In this example, the 90% CI is (86, 94) kg.\nStandard errors and confidence intervals are the source of much confusion:\n\nPeople often confuse standard error and standard deviation. Remember that standard deviation describes variability in a measured quantity; in this example, the standard deviation of gorilla weight is 7.5 kg. Standard error describes variability in an estimate. In this example, the standard error of the mean, based on a sample of 9 measurements, is 2.5 kg.\n\nOne way to remember the difference is that, as sample size increases, standard error gets smaller; standard deviation does not.\n* People often think that there is a 90% probability that the actual parameter, μ, falls in the 90% confidence interval. Sadly, that is not true.
If you want to make a claim like that, you have to use Bayesian methods (see my book, Think Bayes).\nThe sampling distribution answers a different question: it gives you a sense of how reliable an estimate is by telling you how much it would vary if you ran the experiment again.\nIt is important to remember that confidence intervals and standard errors only quantify sampling error; that is, error due to measuring only part of the population. The sampling distribution does not account for other sources of error, notably sampling bias and measurement error.\nSampling Bias\nSuppose that instead of the weight of gorillas in a nature preserve, you want to know the average weight of women in the city where you live. It is unlikely that you would be allowed to choose a representative sample of women and weigh them.\nA simple alternative would be “telephone sampling;” that is, you could choose random numbers from the phone book, call and ask to speak to an adult woman, and ask how much she weighs.\nTelephone sampling has obvious limitations. For example, the sample is limited to people whose telephone numbers are listed, so it eliminates people without phones (who might be poorer than average) and people with unlisted numbers (who might be richer). Also, if you call home telephones during the day, you are less likely to sample people with jobs. And if you only sample the person who answers the phone, you are less likely to sample people who share a phone line.\nIf factors like income, employment, and household size are related to weight (and it is plausible that they are), the results of your survey would be affected one way or another. This problem is called sampling bias because it is a property of the sampling process.\nThis sampling process is also vulnerable to self-selection, which is a kind of sampling bias.
Some people will refuse to answer the question, and if the tendency to refuse is related to weight, that would affect the results.\nFinally, if you ask people how much they weigh, rather than weighing them, the results might not be accurate. Even helpful respondents might round up or down if they are uncomfortable with their actual weight. And not all respondents are helpful. These inaccuracies are examples of measurement error.\nWhen you report an estimated quantity, it is useful to report standard error, or a confidence interval, or both, in order to quantify sampling error. But it is also important to remember that sampling error is only one source of error, and often it is not the biggest.\nExponential distributions\nLet’s play one more round of the estimation game. I’m thinking of a distribution. It’s an exponential distribution, and here’s a sample:\n[5.384, 4.493, 19.198, 2.790, 6.122, 12.844]\nWhat do you think is the parameter, λ, of this distribution?\nIn general, the mean of an exponential distribution is 1/λ, so working backwards, we might choose\n\n    L = 1 / x̄\n\nL is an estimator of λ. And not just any estimator; it is also the maximum likelihood estimator (see http://wikipedia.org/wiki/Exponential_distribution#Maximum_likelihood). So if you want to maximize your chance of guessing λ exactly, L is the way to go.\nBut we know that x̄ is not robust in the presence of outliers, so we expect L to have the same problem.\nWe can choose an alternative based on the sample median. The median of an exponential distribution is ln(2)/λ,\nso working backwards again, we can define an estimator\n\n    Lm = ln(2)/m\n\nwhere m is the sample median. To test the performance of these estimators, we can simulate the sampling process:",
"def Estimate3(n=7, m=1000):\n lam = 2\n means = []\n medians = []\n for _ in range(m):\n xs = np.random.exponential(1.0/lam, n)\n L = 1 / np.mean(xs)\n Lm = math.log(2) / pd.Series(xs).median()\n means.append(L)\n medians.append(Lm)\n print('rmse L', RMSE(means, lam))\n print('rmse Lm', RMSE(medians, lam))\n print('mean error L', MeanError(means, lam))\n print('mean error Lm', MeanError(medians, lam))",
"When I run this experiment with λ = 2, the RMSE of L is 1.1. For the median-based estimator Lm, RMSE is 2.2. We can’t tell from this experiment whether L minimizes MSE, but at least it seems better than Lm.\nSadly, it seems that both estimators are biased. For L the mean error is 0.39; for Lm it is 0.54. And neither converges to 0 as m increases. It turns out that x̄ is an unbiased estimator of the mean of the distribution, 1/λ, but L is not an unbiased estimator of λ. The values change from one call to the next.",
"Estimate3()",
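The claim that x̄ is unbiased for the mean 1/λ while L = 1/x̄ is biased for λ can be checked directly. Here is an added sketch with a fixed seed so the numbers are reproducible; for exponential samples of size n, the expected value of L works out to λ·n/(n−1):

```python
import numpy as np

np.random.seed(17)
lam, n, m = 2.0, 7, 100000

samples = np.random.exponential(1.0 / lam, size=(m, n))
xbars = samples.mean(axis=1)   # sample mean of each simulated experiment
Ls = 1.0 / xbars               # the MLE of lambda for each experiment

print(xbars.mean())  # close to 1/lam = 0.5: xbar is unbiased for the mean
print(Ls.mean())     # close to lam * n/(n-1) ~ 2.33, not lam = 2: L is biased
```

The bias of L does not go away as m grows; it only shrinks if the sample size n grows.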
"Hypothesis Testing\nThe fundamental question we want to address is whether the effects we see in a sample are likely to appear in the larger population. For example, in the Social Network ads sample we see a difference in mean Age between purchasing customers and others. We would like to know if that effect reflects a real difference in the larger population, or if it might appear in the sample by chance.\nThere are several ways we could formulate this question, including Fisher null hypothesis testing, Neyman-Pearson decision theory, and Bayesian inference. What I present here is a subset of all three that makes up most of what people use in practice, which I will call classical hypothesis testing.\nThe goal of classical hypothesis testing is to answer the question, “Given a sample and an apparent effect, what is the probability of seeing such an effect by chance?” Here’s how we answer that question:\n\n\nThe first step is to quantify the size of the apparent effect by choosing a test statistic. In the Social Network ads example, the apparent effect is a difference in Age between purchasing customers and others, so a natural choice for the test statistic is the difference in means between the two groups.\n\n\nThe second step is to define a null hypothesis, which is a model of the system based on the assumption that the apparent effect is not real. In the Social Network ads example, the null hypothesis is that there is no difference between purchasing customers and others; that is, that ages in both groups have the same distribution.\n\n\nThe third step is to compute a p-value, which is the probability of seeing the apparent effect if the null hypothesis is true. In the Social Network ads example, we would compute the actual difference in means, then compute the probability of seeing a difference as big, or bigger, under the null hypothesis.\n\n\nThe last step is to interpret the result.
If the p-value is low, the effect is said to be statistically significant, which means that it is unlikely to have occurred by chance. In that case we infer that the effect is more likely to appear in the larger population.\n\n\nThe logic of this process is similar to a proof by contradiction. To prove a mathematical statement, A, you assume temporarily that A is false. If that assumption leads to a contradiction, you conclude that A must actually be true.\nSimilarly, to test a hypothesis like, “This effect is real,” we assume, temporarily, that it is not. That’s the null hypothesis. Based on that assumption, we compute the probability of the apparent effect. That’s the p-value. If the p-value is low, we conclude that the null hypothesis is unlikely to be true.\nImplement Hypothesis Testing\nAs a simple example, suppose we toss a coin 250 times and see 140 heads and 110 tails. Based on this result, we might suspect that the coin is biased; that is, more likely to land heads. To test this hypothesis, we compute the probability of seeing such a difference if the coin is actually fair:",
"data = (140, 110)\nactual = abs(data[0] - data[1])\n\ndef test_statistic(sample):\n    # sample is a (heads, tails) pair\n    heads, tails = sample\n    return abs(heads - tails)\n\ndef generate_sample(data):\n    # simulate tossing a fair coin n times\n    n = data[0] + data[1]\n    heads = sum(random.choice('HT') == 'H' for _ in range(n))\n    return heads, n - heads\n\ndef calculate_pvalue(data, iters=1000):\n    test_stats = [test_statistic(generate_sample(data))\n                  for _ in range(iters)]\n    count = sum(1 for x in test_stats if x >= actual)\n    return count / iters\n\ncalculate_pvalue(data)",
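As a cross-check (an addition, not in the original), the same p-value can be computed exactly from the binomial distribution, without simulation: under the null hypothesis the number of heads is Binomial(250, 1/2), and by symmetry the two-sided p-value is twice the upper tail. This assumes Python 3.8+ for math.comb.

```python
from math import comb

n, k = 250, 140  # 250 tosses, 140 heads observed

# P(heads >= 140) under a fair coin; by symmetry P(heads <= 110) is the same
upper_tail = sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n
p_value = 2 * upper_tail
print(p_value)
```

The exact value should land in the same neighborhood as the simulated estimate, without the run-to-run noise of the Monte Carlo version.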
"The result is about 0.059, which means that if the coin is fair, we expect to see a difference as big as 30 about 5.9% of the time.\nInterpreting the Results\nHow should we interpret this result? By convention, 5% is the threshold of statistical significance. If the p-value is less than 5%, the effect is considered significant; otherwise it is not.\nBut the choice of 5% is arbitrary, and (as we will see later) the p-value depends on the choice of the test statistic and the model of the null hypothesis. So p-values should not be considered precise measurements.\nI recommend interpreting p-values according to their order of magnitude: if the p-value is less than 1%, the effect is unlikely to be due to chance; if it is greater than 10%, the effect can plausibly be explained by chance. P-values between 1% and 10% should be considered borderline. So in this example I conclude that the data do not provide strong evidence that the coin is biased.\nDiffMeansPermute\nTesting a difference in means\nOne of the most common effects to test is a difference in mean between two groups. In the Social Network ads data, we saw that the mean age of purchasing customers is slightly higher, and their mean estimated salary is higher than that of other customers. Now we will see if those effects are statistically significant.\nFor these examples, the null hypothesis is that the distributions for the two groups are the same. One way to model the null hypothesis is by permutation; that is, we can take values for purchasing customers and others and shuffle them, treating the two groups as one big group:",
"def TestStatistic(data):\n group1, group2 = data\n test_stat = abs(group1.mean() - group2.mean())\n return test_stat\n \ndef MakeModel(data):\n group1, group2 = data\n n, m = len(group1), len(group2)\n pool = np.hstack((group1, group2))\n #print(pool.shape)\n return pool, n\n \ndef RunModel(pool, n):\n np.random.shuffle(pool)\n data = pool[:n], pool[n:]\n return data\n\ndef sample_generator(data):\n pool, n = MakeModel(data)\n return RunModel(pool, n)",
"data is a pair of sequences, one for each group.\nThe test statistic is the absolute difference in the means.\nMakeModel records the sizes of the groups, n and m, and combines the groups into one NumPy array, pool.\nRunModel simulates the null hypothesis by shuffling the pooled values and splitting them into two groups with sizes n and m. As always, the return value from RunModel has the same format as the observed data.",
"purchased_customer.dropna(inplace=True)\nno_purchase.dropna(inplace=True)\n\ndata = purchased_customer.Age.values, no_purchase.Age.values\nactual_diff = TestStatistic(data)\n\ndef calculate_pvalue(data, iters=1000):\n    test_stats = [TestStatistic(sample_generator(data))\n                  for _ in range(iters)]\n    count = sum(1 for x in test_stats if x >= actual_diff)\n    return sorted(test_stats), count / iters\n\ntest_stats, pval = calculate_pvalue(data)\n\ncdfs = [EvalCdf(test_stats, ts) for ts in test_stats]\nplt.step(sorted(test_stats), cdfs)\nplt.xlabel(\"test statistics\")\nplt.ylabel(\"CDF\")\n\npval",
"The resulting p-value is about 0.0, which means that under the null hypothesis we essentially never see a difference as big as the observed effect. So this effect is statistically significant.\nIf we run the same analysis with estimated salary, the computed p-value is 0; after 1000 attempts, the simulation never yields an effect as big as the observed difference, 18564.80. So we would report p < 0.001, and conclude that the difference in estimated salary is statistically significant.\nOther test statistics\nChoosing the best test statistic depends on what question you are trying to address. For example, if the relevant question is whether ages differ for purchasing customers, then it makes sense to test the absolute difference in means, as we did in the previous section.\nIf we had some reason to think that purchasing customers are likely to be older, then we would not take the absolute value of the difference; instead we would use this test statistic:\nDiffMeansOneSided",
"def TestStatistic(data):\n group1, group2 = data\n test_stat = group1.mean() - group2.mean()\n return test_stat\n\ndef MakeModel(data):\n group1, group2 = data\n n, m = len(group1), len(group2)\n pool = np.hstack((group1, group2))\n #print(pool.shape)\n return pool, n\n \ndef RunModel(pool, n):\n np.random.shuffle(pool)\n data = pool[:n], pool[n:]\n return data",
"DiffMeansOneSided uses the same MakeModel and RunModel as the previous test; the only difference is that TestStatistic does not take the absolute value of the difference. This kind of test is called one-sided because it only counts one side of the distribution of differences. The previous test, using both sides, is two-sided.\nFor this version of the test, the p-value is about half of the previous one. In general the p-value for a one-sided test is about half the p-value for a two-sided test, depending on the shape of the distribution.\nThe one-sided hypothesis, that purchasing customers are older, is more specific than the two-sided hypothesis, so the p-value is smaller.\nWe can use the same framework to test for a difference in standard deviation. So we might hypothesize that the standard deviation is higher. Here’s how we can test that:\nDiffStdPermute",
"def TestStatistic(data):\n group1, group2 = data\n test_stat = group1.std() - group2.std()\n return test_stat\n\ndef MakeModel(data):\n group1, group2 = data\n n, m = len(group1), len(group2)\n pool = np.hstack((group1, group2))\n #print(pool.shape)\n return pool, n\n \ndef RunModel(pool, n):\n np.random.shuffle(pool)\n data = pool[:n], pool[n:]\n return data\n\n\ndef sample_generator(data):\n pool, n = MakeModel(data)\n return RunModel(pool, n)",
"This is a one-sided test because the hypothesis is that the standard deviation of age for purchasing customers is higher, not just different. The p-value is 0.23, which is not statistically significant.",
"data = purchased_customer.Age.values, no_purchase.Age.values\nactual_diff = TestStatistic(data)\n\ndef calculate_pvalue(data, iters=1000):\n    test_stats = [TestStatistic(sample_generator(data))\n                  for _ in range(iters)]\n    count = sum(1 for x in test_stats if x >= actual_diff)\n    return sorted(test_stats), count / iters\n\nactual_diff\n\ntest_stats, pval = calculate_pvalue(data)\n\ncdfs = [EvalCdf(test_stats, ts) for ts in test_stats]\nplt.step(sorted(test_stats), cdfs)\nplt.xlabel(\"test statistics\")\nplt.ylabel(\"CDF\")\n\npval",
"Testing Correlation\nThis framework can also test correlations. For example, in the social network data set, the correlation between a customer's age and estimated salary is about 0.11. It seems like older customers have higher salaries. But could this effect be due to chance?\nFor the test statistic, I use Pearson’s correlation, but Spearman’s would work as well. If we had reason to expect positive correlation, we would do a one-sided test. But since we have no such reason, I’ll do a two-sided test using the absolute value of correlation.\nThe null hypothesis is that there is no correlation between a customer's age and salary. By shuffling the observed values, we can simulate a world where the distributions of age and salary are the same, but where the variables are unrelated:",
"Corr(social_network[\"Age\"], social_network[\"EstimatedSalary\"])\n\ndef TestStatistic(data):\n xs, ys = data\n test_stat = abs(Corr(xs, ys))\n return test_stat\n\ndef RunModel(data):\n xs, ys = data\n xs = np.random.permutation(xs)\n return xs, ys\n\n",
"data is a pair of sequences. TestStatistic computes the absolute value of Pearson’s correlation. RunModel shuffles the xs and returns simulated data.",
"data = social_network.Age.values, social_network.EstimatedSalary.values\nactual_diff = TestStatistic(data)\n\ndef calculate_pvalue(data, iters=1000):\n test_stats = [TestStatistic(RunModel(data))\n for _ in range(iters)]\n count = sum(1 for x in test_stats if x >= actual_diff)\n return sorted(test_stats),count / iters\n\ntest_stats, pval = calculate_pvalue(data)\n\npval",
"The actual correlation is 0.11. The computed p-value is 0.019; after 1000 iterations the largest simulated correlation is 0.16. So although the observed correlation is small, it is statistically significant.\nThis example is a reminder that “statistically significant” does not always mean that an effect is important, or significant in practice. It only means that it is unlikely to have occurred by chance.\nTesting Proportions\nSuppose you run a casino and you suspect that a customer is using a crooked die; that is, one that has been modified to make one of the faces more likely than the others. You apprehend the alleged cheater and confiscate the die, but now you have to prove that it is crooked. You roll the die 60 times and get the following results:\n\n| Value     | 1 | 2 | 3  | 4 | 5 | 6  |\n|-----------|---|---|----|---|---|----|\n| Frequency | 8 | 9 | 19 | 5 | 8 | 11 |\n\nOn average you expect each value to appear 10 times. In this dataset, the value 3 appears more often than expected, and the value 4 appears less often. But are these differences statistically significant?\nTo test this hypothesis, we can compute the expected frequency for each value, the difference between the expected and observed frequencies, and the total absolute difference. In this example, we expect each side to come up 10 times out of 60; the deviations from this expectation are -2, -1, 9, -5, -2, and 1; so the total absolute difference is 20.\nHow often would we see such a difference by chance?",
"from collections import Counter\n\ndef TestStatistic(data):\n    observed = np.asarray(data)\n    n = observed.sum()\n    expected = np.ones(6) * n / 6\n    test_stat = sum(abs(observed - expected))\n    return test_stat\n\ndef RunModel(data):\n    n = sum(data)\n    values = [1, 2, 3, 4, 5, 6]\n    rolls = np.random.choice(values, n, replace=True)\n    # count each face explicitly so the frequencies stay in order 1..6\n    counts = Counter(rolls)\n    freqs = np.array([counts[v] for v in values])\n    return freqs",
"The data are represented as a list of frequencies: the observed values are [8, 9, 19, 5, 8, 11]; the expected frequencies are all 10. The test statistic is the sum of the absolute differences.\nThe null hypothesis is that the die is fair, so we simulate that by drawing random samples from values. RunModel uses Counter to compute and return the list of frequencies.",
"data = [8, 9, 19, 5, 8, 11]\nactual_diff = TestStatistic(data)\n\ndef calculate_pvalue(data, iters=1000):\n test_stats = [TestStatistic(RunModel(data))\n for _ in range(iters)]\n count = sum(1 for x in test_stats if x >= actual_diff)\n return sorted(test_stats),count / iters\n\ntest_stats, pval = calculate_pvalue(data)\n\npval",
"The p-value for this data is 0.13, which means that if the die is fair we expect to see the observed total deviation, or more, about 13% of the time. So the apparent effect is not statistically significant.\nChi-squared tests\nIn the previous section we used total deviation as the test statistic. But for testing proportions it is more common to use the chi-squared statistic:\n    χ² = Σᵢ (Oᵢ − Eᵢ)² / Eᵢ\n\nwhere Oᵢ are the observed frequencies and Eᵢ are the expected frequencies. Here’s the Python code:",
"def TestStatistic(data):\n    observed = np.asarray(data)\n    n = observed.sum()\n    expected = np.ones(6) * n / 6\n    test_stat = sum((observed - expected)**2 / expected)\n    return test_stat",
"Squaring the deviations (rather than taking absolute values) gives more weight to large deviations. Dividing through by expected standardizes the deviations, although in this case it has no effect because the expected frequencies are all equal.\nThe p-value using the chi-squared statistic is 0.04, substantially smaller than what we got using total deviation, 0.13. If we take the 5% threshold seriously, we would consider this effect statistically significant. But considering the two tests together, I would say that the results are borderline. I would not rule out the possibility that the die is crooked, but I would not convict the accused cheater.\nThis example demonstrates an important point: the p-value depends on the choice of test statistic and the model of the null hypothesis, and sometimes these choices determine whether an effect is statistically significant or not.\nErrors\nIn classical hypothesis testing, an effect is considered statistically significant if the p-value is below some threshold, commonly 5%. This procedure raises two questions:\n\n\nIf the effect is actually due to chance, what is the probability that we will wrongly consider it significant? This probability is the false positive rate.\n\n\nIf the effect is real, what is the chance that the hypothesis test will fail? This probability is the false negative rate.\n\n\nThe false positive rate is relatively easy to compute: if the threshold is 5%, the false positive rate is 5%. Here’s why:\n\n\nIf there is no real effect, the null hypothesis is true, so we can compute the distribution of the test statistic by simulating the null hypothesis. Call this distribution CDFT.\n\n\nEach time we run an experiment, we get a test statistic, t, which is drawn from CDFT. Then we compute a p-value, which is the probability that a random value from CDFT exceeds t, so that’s 1 − CDFT(t).\n\n\nThe p-value is less than 5% if CDFT(t) is greater than 95%; that is, if t exceeds the 95th percentile. And how often does a value chosen from CDFT exceed the 95th percentile? 5% of the time.\n\n\nSo if you perform one hypothesis test with a 5% threshold, you expect a false positive 1 time in 20.\nPower\nThe false negative rate is harder to compute because it depends on the actual effect size, and normally we don’t know that. One option is to compute a rate conditioned on a hypothetical effect size.\nFor example, if we assume that the observed difference between groups is accurate, we can use the observed samples as a model of the population and run hypothesis tests with simulated data:",
"def resample(xs):\n    return np.random.choice(xs, len(xs), replace=True)\n\ndef TestStatistic(data):\n    group1, group2 = data\n    test_stat = abs(group1.mean() - group2.mean())\n    return test_stat\n\ndef MakeModel(data):\n    group1, group2 = data\n    pool = np.hstack((group1, group2))\n    return pool, len(group1)\n\ndef RunModel(pool, n):\n    np.random.shuffle(pool)\n    return pool[:n], pool[n:]\n\ndef sample_generator(data):\n    pool, n = MakeModel(data)\n    return RunModel(pool, n)\n\ndef calculate_pvalue(data, iters=1000):\n    # compare permuted statistics against the observed statistic for this data\n    actual = TestStatistic(data)\n    test_stats = [TestStatistic(sample_generator(data))\n                  for _ in range(iters)]\n    count = sum(1 for x in test_stats if x >= actual)\n    return sorted(test_stats), count / iters\n\ndef FalseNegRate(data, num_runs=100):\n    group1, group2 = data\n    count = 0\n    for i in range(num_runs):\n        sample1 = resample(group1)\n        sample2 = resample(group2)\n        test_stats, pval = calculate_pvalue((sample1, sample2))\n        if pval > 0.05:\n            count += 1\n    return count / num_runs",
"FalseNegRate takes data in the form of two sequences, one for each group. Each time through the loop, it simulates an experiment by drawing a random sample from each group and running a hypothesis test. Then it checks the result and counts the number of false negatives.\nResample takes a sequence and draws a sample with the same length, with replacement:",
"data = purchased_customer.Age.values, no_purchase.Age.values\nneg_rate = FalseNegRate(data)\n\nneg_rate",
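The false positive rate claim from the Errors section can be checked the same way. Below is an illustrative sketch with made-up normal data, independent of the customer dataset: both groups are drawn from the same distribution, so the null hypothesis is true by construction, and a two-sided permutation test at the 5% threshold should wrongly report significance about 5% of the time.

```python
import numpy as np

rng = np.random.default_rng(42)

def pvalue(group1, group2, iters=200):
    # two-sided permutation test for a difference in means
    actual = abs(group1.mean() - group2.mean())
    pool = np.hstack((group1, group2))
    n = len(group1)
    count = 0
    for _ in range(iters):
        rng.shuffle(pool)
        diff = abs(pool[:n].mean() - pool[n:].mean())
        if diff >= actual:
            count += 1
    return count / iters

# Run many experiments where the null hypothesis is true and count
# how often the test (wrongly) reports significance at the 5% level.
runs = 200
false_pos = sum(
    pvalue(rng.normal(0, 1, 30), rng.normal(0, 1, 30)) < 0.05
    for _ in range(runs)
)
rate = false_pos / runs
print(rate)  # should be close to 0.05
```

With more runs and more permutation iterations the estimate converges toward the 5% threshold, as the argument above predicts.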
"Replication\nThe hypothesis testing process I demonstrated above is not, strictly speaking, good practice.\nFirst, I performed multiple tests. If you run one hypothesis test, the chance of a false positive is about 1 in 20, which might be acceptable. But if you run 20 tests, you should expect at least one false positive, most of the time.\nSecond, I used the same dataset for exploration and testing. If you explore a large dataset, find a surprising effect, and then test whether it is significant, you have a good chance of generating a false positive.\nTo compensate for multiple tests, you can adjust the p-value threshold (see https://en.wikipedia.org/wiki/Holm-Bonferroni_method). Or you can address both problems by partitioning the data, using one set for exploration and the other for testing.\nIn some fields these practices are required or at least encouraged. But it is also common to address these problems implicitly by replicating published results. Typically the first paper to report a new result is considered exploratory. Subsequent papers that replicate the result with new data are considered confirmatory.\nLinear Least Squares\nCorrelation coefficients measure the strength and sign of a relationship, but not the slope. There are several ways to estimate the slope; the most common is a linear least squares fit. A “linear fit” is a line intended to model the relationship between variables. A “least squares” fit is one that minimizes the mean squared error (MSE) between the line and the data.\nSuppose we have a sequence of points, ys, that we want to express as a function of another sequence xs. If there is a linear relationship between xs and ys with intercept inter and slope slope, we expect each y[i] to be inter + slope * x[i].\n\nBut unless the correlation is perfect, this prediction is only approximate. The vertical deviation from the line, or residual, is\nres = ys - (inter + slope * xs)\n\nThe residuals might be due to random factors like measurement error, or non-random factors that are unknown. For example, if we are trying to predict salary as a function of experience, unknown factors might include initial package, responsibilities, role, etc.\nIf we get the parameters inter and slope wrong, the residuals get bigger, so it makes intuitive sense that the parameters we want are the ones that minimize the residuals.\nWe might try to minimize the absolute value of the residuals, or their squares, or their cubes; but the most common choice is to minimize the sum of squared residuals, sum(res**2).\nWhy? There are three good reasons and one less important one:\n\nSquaring has the feature of treating positive and negative residuals the same, which is usually what we want.\nSquaring gives more weight to large residuals, but not so much weight that the largest residual always dominates.\nIf the residuals are uncorrelated and normally distributed with mean 0 and constant (but unknown) variance, then the least squares fit is also the maximum likelihood estimator of inter and slope. See https://en.wikipedia.org/wiki/Linear_regression.\nThe values of inter and slope that minimize the squared residuals can be computed efficiently.\n\nThe last reason made sense when computational efficiency was more important than choosing the method most appropriate to the problem at hand. That’s no longer the case, so it is worth considering whether squared residuals are the right thing to minimize.\nFor example, if you are using xs to predict values of ys, guessing too high might be better (or worse) than guessing too low. In that case you might want to compute some cost function for each residual, and minimize total cost, sum(cost(res)). However, computing a least squares fit is quick, easy and often good enough.",
"# Implementation of linear least squares\n\ndef LeastSquares(xs, ys):\n    meanx, varx = pd.Series(xs).mean(), pd.Series(xs).var()\n    meany = pd.Series(ys).mean()\n    # Cov is the covariance helper defined earlier in this notebook\n    slope = Cov(xs, ys, meanx, meany) / varx\n    inter = meany - slope * meanx\n    return inter, slope",
"LeastSquares takes sequences xs and ys and returns the estimated parameters inter and slope. For details on how it works, see http://wikipedia.org/wiki/Numerical_methods_for_linear_least_squares.\nFitLine takes inter and slope and returns the fitted line for a sequence of xs.",
"def FitLine(xs, inter, slope):\n fit_xs = np.sort(xs)\n fit_ys = inter + slope * fit_xs\n return fit_xs, fit_ys",
"Least square fit between salary and experience",
"regression_data = pd.read_csv(\"Salary_Data.csv\")\n\ninter, slope = LeastSquares(regression_data[\"YearsExperience\"], regression_data[\"Salary\"])\n\nfit_xs, fit_ys = FitLine(regression_data[\"YearsExperience\"], inter, slope)\n\nprint(\"intercept: \", inter)\nprint(\"Slope: \", slope)",
"The estimated intercept and slope are 27465.89 and 9134.96 dollars per year of experience. The intercept is easy to interpret in this case: it is the expected salary of an employee with 0 years of experience, i.e. the salary of a fresher.",
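With these estimates, the fitted line can be used directly for point predictions. A minimal sketch using the parameter values reported above (predict_salary is an illustrative helper, not defined elsewhere in this notebook):

```python
# parameters estimated above: intercept and slope of the least squares fit
inter, slope = 27465.89, 9134.96

def predict_salary(years_experience):
    # linear model: salary = inter + slope * experience
    return inter + slope * years_experience

print(predict_salary(0))   # the intercept: expected salary of a fresher
print(predict_salary(5))   # about 73140.69
```

This is exactly the computation FitLine performs for every x value when it draws the fitted line.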
"plt.scatter(regression_data[\"YearsExperience\"], regression_data[\"Salary\"])\nplt.plot(fit_xs, fit_ys)\nplt.xlabel(\"Experience\")\nplt.ylabel(\"Salary\")",
"It’s a good idea to look at a figure like this to assess whether the relationship is linear and whether the fitted line seems like a good model of the relationship.\nAnother useful test is to plot the residuals. A Residuals function is defined below:",
"def Residuals(xs, ys, inter, slope):\n xs = np.asarray(xs)\n ys = np.asarray(ys)\n res = ys - (inter + slope * xs)\n return res",
"Residuals takes sequences xs and ys and estimated parameters inter and slope. It returns the differences between the actual values and the fitted line.",
"residuals = list(Residuals(regression_data[\"YearsExperience\"], regression_data[\"Salary\"], inter, slope))\n\nregression_data[\"Residuals\"] = residuals\n\nbins = np.arange(0, 15, 2)\nindices = np.digitize(regression_data.YearsExperience, bins)\ngroups = regression_data.groupby(indices)\nfor i, group in groups:\n print(i, len(group))\n\nyear_exps = [group.YearsExperience.mean() for i, group in groups]\n\nage_residuals = defaultdict(list)\nfor i, grp in groups:\n for res in grp[\"Residuals\"]:\n age_residuals[i].append(EvalCdf(grp[\"Residuals\"], res))\n\nage_residuals\n\nfor percent in [75, 50, 25]:\n residue = [Percentile(age_residuals[k], percent) for k,v in age_residuals.items()]\n label = '%dth' %percent\n plt.plot(year_exps, residue)\n plt.xlabel(\"Experience Year\")\n plt.ylabel(\"Residuals\")",
"Ideally these lines should be flat, indicating that the residuals are random, and parallel, indicating that the variance of the residuals is the same for all experience groups. In fact, the lines are close to parallel, so that’s good; but they have some curvature, indicating that the relationship is nonlinear. Nevertheless, the linear fit is a simple model that is probably good enough for some purposes.\nEstimation\nThe parameters slope and inter are estimates based on a sample; like other estimates, they are vulnerable to sampling bias, measurement error, and sampling error. Sampling bias is caused by non-representative sampling, measurement error is caused by errors in collecting and recording data, and sampling error is the result of measuring a sample rather than the entire population.\nTo assess sampling error, we ask, “If we run this experiment again, how much variability do we expect in the estimates?” We can answer this question by running simulated experiments and computing sampling distributions of the estimates.\nGoodness of Fit\nThere are several ways to measure the quality of a linear model, or goodness of fit. One of the simplest is the standard deviation of the residuals.\nIf you use a linear model to make predictions, Std(res) is the root mean squared error (RMSE) of your predictions.\nAnother way to measure goodness of fit is the coefficient of determination, usually denoted R2 and called “R-squared”:",
"def CoefDetermination(ys, res):\n return 1 - pd.Series(res).var() / pd.Series(ys).var()",
"Var(res) is the MSE of your guesses using the model, Var(ys) is the MSE without it. So their ratio is the fraction of MSE that remains if you use the model, and R2 is the fraction of MSE the model eliminates.\nThere is a simple relationship between the coefficient of determination and Pearson’s coefficient of correlation: R2 = ρ2. For example, if ρ is 0.8 or -0.8, R2 = 0.64.\nAlthough ρ and R2 are often used to quantify the strength of a relationship, they are not easy to interpret in terms of predictive power. In my opinion, Std(res) is the best representation of the quality of prediction, especially if it is presented in relation to Std(ys).\nFor example, when people talk about the validity of the SAT (a standardized test used for college admission in the U.S.) they often talk about correlations between SAT scores and other measures of intelligence.\nAccording to one study, there is a Pearson correlation of ρ = 0.72 between total SAT scores and IQ scores, which sounds like a strong correlation. But R2 = ρ2 = 0.52, so SAT scores account for only 52% of variance in IQ.",
"#IQ scores are normalized with Std(ys) = 15, so\nvar_ys = 15**2\nrho = 0.72\nr2 = rho**2\nvar_res = (1 - r2) * var_ys\nstd_res = math.sqrt(var_res)\nprint(std_res)",
"So using SAT score to predict IQ reduces RMSE from 15 points to 10.4 points. A correlation of 0.72 yields a reduction in RMSE of only 31%.\nIf you see a correlation that looks impressive, remember that R2 is a better indicator of reduction in MSE, and reduction in RMSE is a better indicator of predictive power.\nTesting a linear model\nYears of experience has a large apparent effect on salary. Is it possible that this relationship is due to chance? There are several ways we might test the results of a linear fit.\nOne option is to test whether the apparent reduction in MSE is due to chance. In that case, the test statistic is R2 and the null hypothesis is that there is no relationship between the variables. We can simulate the null hypothesis by permutation. In fact, because R2 = ρ2, a one-sided test of R2 is equivalent to a two-sided test of ρ. We’ve already done that test, and found p < 0.001, so we conclude that the apparent relationship between experience and salary is statistically significant.\nAnother approach is to test whether the apparent slope is due to chance. The null hypothesis is that the slope is actually zero; in that case we can model the salaries as random variations around their mean. Let’s try out that hypothesis test as before:",
"Corr(regression_data[\"YearsExperience\"], regression_data[\"Salary\"])\n\ndef TestStatistic(data):\n    exp, sal = data\n    _, slope = LeastSquares(exp, sal)\n    return slope\n\ndef MakeModel(data):\n    _, sals = data\n    ybar = sals.mean()\n    res = sals - ybar\n    return ybar, res\n\n# RunModel uses the module-level ybar and res computed below\ndef RunModel(data):\n    exp, _ = data\n    sals = ybar + np.random.permutation(res)\n    return exp, sals\n\ndata = regression_data.YearsExperience.values, regression_data.Salary.values\nybar, res = MakeModel(data)",
"The data are represented as sequences of exp and sals. The test statistic is the slope estimated by LeastSquares. The model of the null hypothesis is represented by the mean sals of all employees and the deviations from the mean. To generate simulated data, we permute the deviations and add them to the mean.",
"data = regression_data.YearsExperience.values, regression_data.Salary.values\nactual_diff = TestStatistic(data)\n\ndef calculate_pvalue(data, iters=1000):\n test_stats = [TestStatistic(RunModel(data))\n for _ in range(iters)]\n count = sum(1 for x in test_stats if x >= actual_diff)\n return sorted(test_stats),count / iters\n\ntest_stats, pval = calculate_pvalue(data)\n\npval",
"The p-value is less than 0.001, so the estimated slope is unlikely to be due to chance.\nWeighted Resampling\nAs an example, if you survey 100,000 people in a country of 300 million, each respondent represents 3,000 people. If you oversample one group by a factor of 2, each person in the oversampled group would have a lower weight, about 1500.\nTo correct for oversampling, we can use resampling; that is, we can draw samples from the survey using probabilities proportional to sampling weights. Then, for any quantity we want to estimate, we can generate sampling distributions, standard errors, and confidence intervals.\nRegression\nThe linear least squares fit is an example of regression, which is the more general problem of fitting any kind of model to any kind of data. This use of the term “regression” is a historical accident; it is only indirectly related to the original meaning of the word.\nThe goal of regression analysis is to describe the relationship between one set of variables, called the dependent variables, and another set of variables, called independent or explanatory variables.\nPreviously we used an employee's experience as an explanatory variable to predict salary as a dependent variable. When there is only one dependent and one explanatory variable, that’s simple regression. Here, we move on to multiple regression, with more than one explanatory variable. If there is more than one dependent variable, that’s multivariate regression.\nIf the relationship between the dependent and explanatory variable is linear, that’s linear regression. For example, if the dependent variable is y and the explanatory variables are x1 and x2, we would write the following linear regression model:\n    y = β0 + β1x1 + β2x2 + ε\n\nwhere β0 is the intercept, β1 is the parameter associated with x1, β2 is the parameter associated with x2, and ε is the residual due to random variation or other unknown factors.\nGiven a sequence of values for y and sequences for x1 and x2, we can find the parameters, β0, β1, and β2, that minimize the sum of ε2. This process is called ordinary least squares. The computation is similar to LeastSquares, but generalized to deal with more than one explanatory variable. You can find the details at https://en.wikipedia.org/wiki/Ordinary_least_squares\nLinear Regression using statsmodels\nFor multiple regression we’ll switch to StatsModels, a Python package that provides several forms of regression and other analyses. If you are using Anaconda, you already have StatsModels; otherwise you might have to install it.",
"import statsmodels.formula.api as smf\nformula = 'Salary ~ YearsExperience'\nmodel = smf.ols(formula, data=regression_data)\nresults = model.fit()\n\nresults",
"statsmodels provides two interfaces (APIs); the “formula” API uses strings to identify the dependent and explanatory variables. It uses a syntax called patsy; in this example, the ~ operator separates the dependent variable on the left from the explanatory variables on the right.\nsmf.ols takes the formula string and the DataFrame, regression_data, and returns an OLS object that represents the model. The name ols stands for “ordinary least squares.”\nThe fit method fits the model to the data and returns a RegressionResults object that contains the results.\nThe results are also available as attributes. params is a Series that maps from variable names to their parameters, so we can get the intercept and slope like this:",
"inter = results.params['Intercept']\nslope = results.params['YearsExperience']\nslope_pvalue = results.pvalues['YearsExperience']\nprint(slope_pvalue)",
"pvalues is a Series that maps from variable names to the associated p-values, so we can check whether the estimated slope is statistically significant.\nThe p-value associated with YearsExperience is much less than 0.001, as expected.",
"print(results.summary())\nprint(results.rsquared)",
"results.rsquared contains R2 for the model. results also provides f_pvalue, which is the p-value associated with the model as a whole, similar to testing whether R2 is statistically significant.\nAnd results provides resid, a sequence of residuals, and fittedvalues, a sequence of fitted values corresponding to YearsExperience.\nThe results object provides summary(), which represents the results in a readable format.\nprint(results.summary())\n\nBetting Pool https://en.wikipedia.org/wiki/Betting_pool\nTheory\nLinear regression can be generalized to handle other kinds of dependent variables. If the dependent variable is boolean, the generalized model is called logistic regression. If the dependent variable is an integer count, it’s called Poisson regression.\nSuppose a friend of yours is pregnant and you want to predict whether the baby is a boy or a girl. You could use data from the NSFG to find factors that affect the “sex ratio”, which is conventionally defined to be the probability of having a boy.\nIf you encode the dependent variable numerically, for example 0 for a girl and 1 for a boy, you could apply ordinary least squares, but there would be problems. The linear model might be something like this:\ny = β0 + β1x1 + β2x2 + ε\n\nWhere y is the dependent variable, and x1 and x2 are explanatory variables. Then we could find the parameters that minimize the residuals.\nThe problem with this approach is that it produces predictions that are hard to interpret. Given estimated parameters and values for x1 and x2, the model might predict y = 0.5, but the only meaningful values of y are 0 and 1.\nIt is tempting to interpret a result like that as a probability; for example, we might say that a respondent with particular values of x1 and x2 has a 50% chance of having a boy. But it is also possible for this model to predict y = 1.1 or y = −0.1, and those are not valid probabilities.\nLogistic regression avoids this problem by expressing predictions in terms of odds rather than probabilities. If you are not familiar with odds, “odds in favor” of an event is the ratio of the probability it will occur to the probability that it will not.\nSo if I think my team has a 75% chance of winning, I would say that the odds in their favor are three to one, because the chance of winning is three times the chance of losing.\nOdds and probabilities are different representations of the same information. Given a probability, you can compute the odds like this:\no = p / (1-p)\n\nGiven odds in favor, you can convert to probability like this:\np = o / (o+1)\n\nLogistic regression is based on the following model:\nlog o = β0 + β1x1 + β2x2 + ε\n\nWhere o is the odds in favor of a particular outcome; in the example, o would be the odds of having a boy.\nSuppose we have estimated the parameters β0, β1, and β2 (I’ll explain how in a minute). And suppose we are given values for x1 and x2. We can compute the predicted value of log o, and then convert to a probability:\n o = np.exp(log_o)\n p = o / (o+1)\nSo in the office pool scenario we could compute the predictive probability of having a boy. But how do we estimate the parameters?\nEstimating parameters\nUnlike linear regression, logistic regression does not have a closed form solution, so it is solved by guessing an initial solution and improving it iteratively.\nThe usual goal is to find the maximum-likelihood estimate (MLE), which is the set of parameters that maximizes the likelihood of the data. For example, suppose we have the following data:",
"y = np.array([0, 1, 0, 1])\nx1 = np.array([0, 0, 0, 1])\nx2 = np.array([0, 1, 1, 1])\n\n# start with initial guesses: β0 = -1.5, β1 = 2.8, β2 = 1.1\nbeta = [-1.5, 2.8, 1.1]\n\n# for each row, compute log_o:\nlog_o = beta[0] + beta[1] * x1 + beta[2] * x2\n\n# convert from log odds to probabilities:\no = np.exp(log_o)\np = o / (o + 1)",
"Notice that when log_o is greater than 0, o is greater than 1 and p is greater than 0.5.\nThe likelihood of an outcome is p when y==1 and 1-p when y==0. For example, if we think the probability of a boy is 0.8 and the outcome is a boy, the likelihood is 0.8; if the outcome is a girl, the likelihood is 0.2. We can compute that like this:",
"likes = y * p + (1-y) * (1-p)\nprint(likes)\n\n#The overall likelihood of the data is the product of likes:\nlike = np.prod(likes)",
"For these values of beta, the likelihood of the data is 0.18. The goal of logistic regression is to find parameters that maximize this likelihood. To do that, most statistics packages use an iterative solver like Newton’s method (see https://en.wikipedia.org/wiki/Logistic_regression#Model_fitting).\nNote: I have skipped a few lessons on multiple regression and logistic regression, as they are straightforward. The main difference between them is the dependent variable, which is binary in logistic regression and continuous in linear regression. Refer to the Think Stats book for the implementation.\nTime Series Analysis\nA time series is a sequence of measurements from a system that varies in time. One famous example is the “hockey stick graph” that shows global average temperature over time (see https://en.wikipedia.org/wiki/Hockey_stick_graph).\nThe example I work with in this chapter comes from Zachary M. Jones, a researcher in political science who studies the black market for cannabis in the U.S. (http://zmjones.com/marijuana). He collected data from a web site called “Price of Weed” that crowdsources market information by asking participants to report the price, quantity, quality, and location of cannabis transactions (http://www.priceofweed.com/). The goal of his project is to investigate the effect of policy decisions, like legalization, on markets. I find this project appealing because it is an example that uses data to address important political questions, like drug policy.\nI hope you will find this chapter interesting, but I’ll take this opportunity to reiterate the importance of maintaining a professional attitude to data analysis. Whether and which drugs should be illegal are important and difficult public policy questions; our decisions should be informed by accurate data reported honestly.",
"mj_clean = pd.read_csv('mj-clean.csv', engine='python', parse_dates=[5])\n#parse_dates tells read_csv to interpret values in column 5 as dates and convert them to NumPy datetime64 objects.",
"The DataFrame has a row for each reported transaction and the following columns:\n\ncity: string city name.\nstate: two-letter state abbreviation.\nprice: price paid in dollars.\namount: quantity purchased in grams.\nquality: high, medium, or low quality, as reported by the purchaser.\ndate: date of report, presumed to be shortly after date of purchase.\nppg: price per gram, in dollars.\nstate.name: string state name.\nlat: approximate latitude of the transaction, based on city name.\nlon: approximate longitude of the transaction.\n\nEach transaction is an event in time, so we could treat this dataset as a time series. But the events are not equally spaced in time; the number of transactions reported each day varies from 0 to several hundred. Many methods used to analyze time series require the measurements to be equally spaced, or at least things are simpler if they are.\nIn order to demonstrate these methods, I divide the dataset into groups by reported quality, and then transform each group into an equally spaced series by computing the mean daily price per gram.",
"def GroupByQualityAndDay(transactions):\n groups = transactions.groupby('quality')\n dailies = {}\n for name, group in groups:\n dailies[name] = GroupByDay(group)\n return dailies",
"groupby is a DataFrame method that returns a GroupBy object, groups; used in a for loop, it iterates the names of the groups and the DataFrames that represent them. Since the values of quality are low, medium, and high, we get three groups with those names.\nThe loop iterates through the groups and calls GroupByDay, which computes the daily average price and returns a new DataFrame:",
"def GroupByDay(transactions, func=np.mean):\n grouped = transactions[['date', 'ppg']].groupby('date')\n daily = grouped.aggregate(func)\n daily['date'] = daily.index\n start = daily.date[0]\n one_year = np.timedelta64(1, 'Y')\n daily['years'] = (daily.date - start) / one_year\n return daily",
"The parameter, transactions, is a DataFrame that contains columns date\nand ppg. We select these two columns, then group by date.\nThe result, grouped, is a map from each date to a DataFrame that contains prices reported on that date. aggregate is a GroupBy method that iterates through the groups and applies a function to each column of the group; in this case there is only one column, ppg. So the result of aggregate is a DataFrame with one row for each date and one column, ppg.\nDates in these DataFrames are stored as NumPy datetime64 objects, which are represented as 64-bit integers in nanoseconds. For some of the analyses coming up, it will be convenient to work with time in more human-friendly units, like years. So GroupByDay adds a column named date by copying the index, then adds years, which contains the number of years since the first transaction as a floating-point number.\nThe resulting DataFrame has columns ppg, date, and years.",
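To make the groupby/aggregate pattern concrete, here is a minimal sketch with made-up dates and prices (all values are hypothetical):

```python
import pandas as pd

# Hypothetical transactions: several price reports per date.
toy = pd.DataFrame({
    'date': ['2014-01-01', '2014-01-01', '2014-01-02'],
    'ppg':  [10.0, 12.0, 8.0],
})

# Group by date and take the mean price, mirroring what GroupByDay does.
daily = toy.groupby('date').aggregate('mean')
print(daily)
# 2014-01-01 collapses to 11.0, 2014-01-02 to 8.0
```

Each date becomes one row of the result, so the output is an equally spaced daily series once reindexed over the full calendar.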
"dailies = GroupByQualityAndDay(mj_clean)\n\nplt.figure(figsize=(6,8))\nplt.subplot(3, 1, 1)\nfor i, (k, v) in enumerate(dailies.items()):\n plt.subplot(3, 1, i+1)\n plt.title(k)\n plt.scatter(dailies[k].index, dailies[k].ppg, s=10)\n plt.xticks(rotation=30)\n plt.ylabel(\"Price per gram\")\n plt.xlabel(\"Months\")\n plt.tight_layout()",
"One apparent feature in these plots is a gap around November 2013. It’s possible that data collection was not active during this time, or the data might not be available. We will consider ways to deal with this missing data later.\nVisually, it looks like the price of high quality cannabis is declining during this period, and the price of medium quality is increasing. The price of low quality might also be increasing, but it is harder to tell, since it seems to be more volatile. Keep in mind that quality data is reported by volunteers, so trends over time might reflect changes in how participants apply these labels.\nLinear regression\nAlthough there are methods specific to time series analysis, for many problems a simple way to get started is by applying general-purpose tools like linear regression. The following function takes a DataFrame of daily prices and computes a least squares fit, returning the model and results objects from StatsModels:",
"def RunLinearModel(daily):\n model = smf.ols('ppg ~ years', data=daily)\n results = model.fit()\n return model, results\n\ndef SummarizeResults(results):\n \"\"\"Prints the most important parts of linear regression results:\n results: RegressionResults object\n \"\"\"\n for name, param in results.params.items():\n pvalue = results.pvalues[name]\n print('%s %0.3g (%.3g)' % (name, param, pvalue))\n\n try:\n print('R^2 %.4g' % results.rsquared)\n ys = results.model.endog\n print('Std(ys) %.4g' % ys.std())\n print('Std(res) %.4g' % results.resid.std())\n print('\\n')\n except AttributeError:\n print('R^2 %.4g' % results.prsquared)\n print('\\n')\n\n#Then we can iterate through the qualities and fit a model to each:\nfor name, daily in dailies.items():\n model, results = RunLinearModel(daily)\n print(name)\n SummarizeResults(results)",
"The estimated slopes indicate that the price of high quality cannabis dropped by about 71 cents per year during the observed interval; for medium quality it increased by 28 cents per year, and for low quality it increased by 57 cents per year. These estimates are all statistically significant with very small p-values.\nThe R2 value for high quality cannabis is 0.44, which means that time as an explanatory variable accounts for 44% of the observed variability in price. For the other qualities, the change in price is smaller, and variability in prices is higher, so the values of R2 are smaller (but still statistically significant).",
"#The following code plots the observed prices and the fitted values:\ndef PlotFittedValues(model, results, label=''):\n years = model.exog[:,1]\n values = model.endog\n plt.scatter(years, values, s=15, label=label)\n plt.plot(years, results.fittedvalues, label='model')\n plt.xlabel(\"years\")\n plt.ylabel(\"ppg\")\n\nPlotFittedValues(model, results)",
"PlotFittedValues makes a scatter plot of the data points and a line plot of the fitted values. The plot shows the results for high quality cannabis. The model seems like a good linear fit for the data; nevertheless, linear regression is not the most appropriate choice for this data:\n\nFirst, there is no reason to expect the long-term trend to be a line or any other simple function. In general, prices are determined by supply and demand, both of which vary over time in unpredictable ways.\nSecond, the linear regression model gives equal weight to all data, recent and past. For purposes of prediction, we should probably give more weight to recent data.\nFinally, one of the assumptions of linear regression is that the residuals are uncorrelated noise. With time series data, this assumption is often false because successive values are correlated.\n\nMoving Average\nMost time series analysis is based on the modeling assumption that the observed series is the sum of three components:\n* Trend: A smooth function that captures persistent changes.\n* Seasonality: Periodic variation, possibly including daily, weekly, monthly, or yearly cycles.\n* Noise: Random variation around the long-term trend.\nRegression is one way to extract the trend from a series, as we saw in the previous section. But if the trend is not a simple function, a good alternative is a moving average. A moving average divides the series into overlapping regions, called windows, and computes the average of the values in each window.\nOne of the simplest moving averages is the rolling mean, which computes the mean of the values in each window. For example, if the window size is 3, the rolling mean computes the mean of values 0 through 2, 1 through 3, 2 through 4, etc.\npandas provides a rolling method on Series; given a window size, series.rolling(window).mean() returns a new Series of rolling means.",
"series = pd.Series(np.arange(10))\n\nmoving_avg = series.rolling(3).mean()",
"The first two values are nan; the next value is the mean of the first three elements, 0, 1, and 2. The next value is the mean of 1, 2, and 3. And so on.\nBefore we can apply rolling mean to the cannabis data, we have to deal with missing values. There are a few days in the observed interval with no reported transactions for one or more quality categories, and a period in 2013 when data collection was not active.\nIn the DataFrames we have used so far, these dates are absent; the index skips days with no data. For the analysis that follows, we need to represent this missing data explicitly. We can do that by “reindexing” the DataFrame:",
"dates = pd.date_range(dailies[\"high\"].index.min(), dailies[\"high\"].index.max())\nreindexed = dailies[\"high\"].reindex(dates)\n\n#dailies[\"high\"].index\n\nreindexed.shape",
"The first line computes a date range that includes every day from the be- ginning to the end of the observed interval. The second line creates a new DataFrame with all of the data from daily, but including rows for all dates, filled with nan.",
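A toy illustration of the same reindexing step, with hypothetical dates and values, shows how the missing day becomes an explicit NaN:

```python
import pandas as pd
import numpy as np

# A daily series with a missing calendar day between the observations.
s = pd.Series([1.0, 3.0], index=pd.to_datetime(['2014-01-01', '2014-01-03']))

# Build the full date range and reindex; the absent day is filled with NaN.
dates = pd.date_range(s.index.min(), s.index.max())
reindexed = s.reindex(dates)
print(reindexed)
# 2014-01-02 is now present, with value NaN
```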
"#Now we can plot the rolling mean like this:\n#The window size is 30, so each value in roll_mean is the mean of 30 values from reindexed.ppg.\nroll_mean = reindexed.ppg.rolling(30).mean()\nplt.plot(roll_mean.index, roll_mean)\nplt.xticks(rotation=30)",
"The rolling mean seems to do a good job of smoothing out the noise and extracting the trend. The first 29 values are nan, and wherever there’s a missing value, it’s followed by another 29 nans. There are ways to fill in these gaps, but they are a minor nuisance.\nAn alternative is the exponentially-weighted moving average (EWMA), which has two advantages. First, as the name suggests, it computes a weighted average where the most recent value has the highest weight and the weights for previous values drop off exponentially. Second, the pandas implementation of EWMA handles missing values better.",
"ewma = reindexed.ppg.ewm(span=30).mean()\nplt.plot(ewma.index, ewma)\nplt.xticks(rotation=30)",
"The span parameter corresponds roughly to the window size of a moving average; it controls how fast the weights drop off, so it determines the number of points that make a non-negligible contribution to each average.\nThe plot above shows the EWMA for the same data. It is similar to the rolling mean, where they are both defined, but it has no missing values, which makes it easier to work with. The values are noisy at the beginning of the time series, because they are based on fewer data points.\nMissing Values\nNow that we have characterized the trend of the time series, the next step is to investigate seasonality, which is periodic behavior. Time series data based on human behavior often exhibits daily, weekly, monthly, or yearly cycles. Next, I present methods to test for seasonality, but they don’t work well with missing data, so we have to solve that problem first.\nA simple and common way to fill missing data is to use a moving average. The Series method fillna does just what we want:",
"reindexed.ppg.fillna(ewma, inplace=True)",
"Wherever reindexed.ppg is nan, fillna replaces it with the corresponding value from ewma. The inplace flag tells fillna to modify the existing Series rather than create a new one.\nA drawback of this method is that it understates the noise in the series. We can solve that problem by adding in resampled residuals:",
"def Resample(xs, n=None):\n \"\"\"Draw a sample from xs with the same length as xs.\n xs: sequence\n n: sample size (default: len(xs))\n returns: NumPy array\n \"\"\"\n if n is None:\n n = len(xs)\n return np.random.choice(xs, n, replace=True)\n\nresid = (reindexed.ppg - ewma).dropna()\nfake_data = ewma + Resample(resid, len(reindexed))\nreindexed.ppg.fillna(fake_data, inplace=True)",
"resid contains the residual values, not including days when ppg is nan. fake_data contains the sum of the moving average and a random sample of residuals. Finally, fillna replaces nan with values from fake_data.\nThe filled data is visually similar to the actual values. Since the resampled residuals are random, the results are different every time; later we’ll see how to characterize the error created by missing values.\nSerial correlation\nAs prices vary from day to day, you might expect to see patterns. If the price is high on Monday, you might expect it to be high for a few more days; and if it’s low, you might expect it to stay low. A pattern like this is called serial correlation, because each value is correlated with the next one in the series.\nTo compute serial correlation, we can shift the time series by an interval called a lag, and then compute the correlation of the shifted series with the original:",
"def Corr(xs, ys):\n # Pearson correlation coefficient (thinkstats2.Corr in the book)\n return np.corrcoef(xs, ys)[0, 1]\n\ndef SerialCorr(series, lag=1):\n xs = series[lag:]\n ys = series.shift(lag)[lag:]\n corr = Corr(xs, ys)\n return corr",
"After the shift, the first lag values are nan, so I use a slice to remove them before computing Corr.\nIf we apply SerialCorr to the raw price data with lag 1, we find serial correlation 0.48 for the high quality category, 0.16 for medium and 0.10 for low. In any time series with a long-term trend, we expect to see strong serial correlations; for example, if prices are falling, we expect to see values above the mean in the first half of the series and values below the mean in the second half.\nIt is more interesting to see if the correlation persists if you subtract away the trend. For example, we can compute the residual of the EWMA and then compute its serial correlation:",
"ewma = reindexed.ppg.ewm(span=30).mean()\nresid = reindexed.ppg - ewma\ncorr = SerialCorr(resid, 1)\nprint(corr)",
"With lag=1, the serial correlations for the de-trended data are -0.022 for high quality, -0.015 for medium, and 0.036 for low. These values are small, indicating that there is little or no one-day serial correlation in this series.",
"ewma = reindexed.ppg.ewm(span=30).mean()\nresid = reindexed.ppg - ewma\ncorr = SerialCorr(resid, 7)\nprint(corr)\n\newma = reindexed.ppg.ewm(span=30).mean()\nresid = reindexed.ppg - ewma\ncorr = SerialCorr(resid, 30)\nprint(corr)",
"At this point we can tentatively conclude that there are no substantial seasonal patterns in these series, at least not with these lags.\nAutocorrelation\nIf you think a series might have some serial correlation, but you don’t know which lags to test, you can test them all! The autocorrelation function is a function that maps from lag to the serial correlation with the given lag. “Autocorrelation” is another name for serial correlation, used more often when the lag is not 1.\nStatsModels, which we used for linear regression, also provides functions for time series analysis, including acf, which computes the autocorrelation function:",
"import statsmodels.tsa.stattools as smtsa\nacf = smtsa.acf(resid, nlags=120, unbiased=True)\n\nacf[0], acf[1], acf[45], acf[60]",
"With lag=0, acf computes the correlation of the series with itself, which is always 1.",
"plt.plot(range(len(acf)), acf)",
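To see what “little or no serial correlation” looks like in a controlled setting, here is a small sketch (synthetic data with a fixed seed, not from the cannabis dataset) computing the lag-1 correlation of white noise:

```python
import numpy as np

# White noise has no serial correlation: the lag-1 Pearson
# correlation of a random sequence should be close to zero.
rng = np.random.RandomState(17)
noise = rng.normal(size=1000)

# Same computation as SerialCorr with lag=1, using plain NumPy.
lag1 = np.corrcoef(noise[:-1], noise[1:])[0, 1]
print(lag1)
```

With 1000 samples, the sampling noise of this estimate is roughly 1/√1000 ≈ 0.03, so values of that magnitude are expected even with zero true correlation.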
"Prediction\nTime series analysis can be used to investigate, and sometimes explain, the behavior of systems that vary in time. It can also make predictions.\nLinear regressions can be used for prediction. The RegressionResults class provides predict, which takes a DataFrame containing the explanatory variables and returns a sequence of predictions. \nIf all we want is a single, best-guess prediction, we’re done. But for most purposes it is important to quantify error. In other words, we want to know how accurate the prediction is likely to be.\nThere are three sources of error we should take into account:\n\n\nSampling error: The prediction is based on estimated parameters, which depend on random variation in the sample. If we run the experiment again, we expect the estimates to vary.\n\n\nRandom variation: Even if the estimated parameters are perfect, the observed data varies randomly around the long-term trend, and we expect this variation to continue in the future.\n\n\nModeling error: We have already seen evidence that the long-term trend is not linear, so predictions based on a linear model will eventually fail.\n\n\nAnother source of error to consider is unexpected future events. Agricultural prices are affected by weather, and all prices are affected by politics and law. As I write this, cannabis is legal in two states and legal for medical purposes in 20 more. If more states legalize it, the price is likely to go down. But if the federal government cracks down, the price might go up.\nModeling errors and unexpected future events are hard to quantify.\nSurvival Analysis\nSurvival analysis is a way to describe how long things last. 
It is often used to study human lifetimes, but it also applies to “survival” of mechanical and electronic components, or more generally to intervals in time before an event.\nIf someone you know has been diagnosed with a life-threatening disease, you might have seen a “5-year survival rate,” which is the probability of surviving five years after diagnosis. That estimate and related statistics are the result of survival analysis.\nSurvival Curves\nThe fundamental concept in survival analysis is the survival curve, S(t), which is a function that maps from a duration, t, to the probability of surviving longer than t. If you know the distribution of durations, or “lifetimes”, finding the survival curve is easy; it’s just the complement of the CDF:\n S(t) = 1 − CDF(t)\n\nwhere CDF(t) is the probability of a lifetime less than or equal to t.\nHazard function\nFrom the survival curve we can derive the hazard function; for pregnancy lengths, the hazard function maps from a time, t, to the fraction of pregnancies that continue until t and then end at t. To be more precise:\n λ(t) = (S(t) − S(t+1)) / S(t)\n\nThe numerator is the fraction of lifetimes that end at t, which is also PMF(t).\nInferring survival curves\nIf someone gives you the CDF of lifetimes, it is easy to compute the survival and hazard functions. But in many real-world scenarios, we can’t measure the distribution of lifetimes directly. We have to infer it.\nFor example, suppose you are following a group of patients to see how long they survive after diagnosis. Not all patients are diagnosed on the same day, so at any point in time, some patients have survived longer than others. If some patients have died, we know their survival times. For patients who are still alive, we don’t know survival times, but we have a lower bound.\nIf we wait until all patients are dead, we can compute the survival curve, but if we are evaluating the effectiveness of a new treatment, we can’t wait that long! 
We need a way to estimate survival curves using incomplete information.\nAs a more cheerful example, I will use NSFG data to quantify how long respondents “survive” until they get married for the first time. The range of respondents’ ages is 14 to 44 years, so the dataset provides a snapshot of women at different stages in their lives.\nFor women who have been married, the dataset includes the date of their first marriage and their age at the time. For women who have not been married, we know their age when interviewed, but have no way of knowing when or if they will get married.\nSince we know the age at first marriage for some women, it might be tempting to exclude the rest and compute the CDF of the known data. That is a bad idea. The result would be doubly misleading: (1) older women would be overrepresented, because they are more likely to be married when interviewed, and (2) married women would be overrepresented! In fact, this analysis would lead to the conclusion that all women get married, which is obviously incorrect.\nKaplan-Meier estimation\nIn this example it is not only desirable but necessary to include observations of unmarried women, which brings us to one of the central algorithms in survival analysis, Kaplan-Meier estimation.\nThe general idea is that we can use the data to estimate the hazard function, then convert the hazard function to a survival curve. To estimate the hazard function, we consider, for each age, (1) the number of women who got married at that age and (2) the number of women “at risk” of getting married, which includes all women who were not married at an earlier age.\nCohort Effects\nOne of the challenges of survival analysis is that different parts of the estimated curve are based on different groups of respondents. The part of the curve at time t is based on respondents whose age was at least t when they were interviewed. 
So the leftmost part of the curve includes data from all respondents, but the rightmost part includes only the oldest respondents.\nIf the relevant characteristics of the respondents are not changing over time, that’s fine, but in this case it seems likely that marriage patterns are different for women born in different generations. We can investigate this effect by grouping respondents according to their decade of birth. Groups like this, defined by date of birth or similar events, are called cohorts, and differences between the groups are called cohort effects.\nAnalytics Methods\nSuppose you are a scientist studying gorillas in a wildlife preserve. Having weighed 9 gorillas, you find sample mean x̄ = 90 kg and sample standard deviation, S = 7.5 kg. If you use x̄ to estimate the population mean, what is the standard error of the estimate?\nTo answer that question, we need the sampling distribution of x̄. We approximated this distribution by simulating the experiment (weighing 9 gorillas), computing x̄ for each simulated experiment, and accumulating the distribution of estimates.\nThe result is an approximation of the sampling distribution. Then we use the sampling distribution to compute standard errors and confidence intervals:\n\n\nThe standard deviation of the sampling distribution is the standard error of the estimate; in the example, it is about 2.5 kg.\n\n\nThe interval between the 5th and 95th percentile of the sampling distribution is a 90% confidence interval. If we run the experiment many times, we expect the estimate to fall in this interval 90% of the time. In the example, the 90% CI is (86, 94) kg.\n\n\nNow we’ll do the same calculation analytically. We take advantage of the fact that the weights of adult female gorillas are roughly normally distributed. Normal distributions have two properties that make them amenable to analysis: they are “closed” under linear transformation and addition. 
To explain what that means, I need some notation.\nIf the distribution of a quantity, X, is normal with parameters μ and σ, you can write\n X ∼ N(μ, σ²)\n\nwhere the symbol ∼ means “is distributed” and the script letter N stands for “normal.”\nA linear transformation of X is something like X′ = aX + b, where a and b are real numbers. A family of distributions is closed under linear transformation if X′ is in the same family as X. The normal distribution has this property:\n if X ∼ N(μ, σ²), then\n X′ ∼ N(aμ + b, a²σ²) (1)\n\nNormal distributions are also closed under addition. If Z = X + Y with X ∼ N(μ_X, σ_X²) and Y ∼ N(μ_Y, σ_Y²) then\n Z ∼ N(μ_X + μ_Y, σ_X² + σ_Y²) (2)\n\nIn the special case Z = X + X, we have\n Z ∼ N(2μ_X, 2σ_X²)\n\nand in general if we draw n values of X and add them up, we have\n Z ∼ N(nμ_X, nσ_X²) (3)\n\nSampling distributions\nNow we have everything we need to compute the sampling distribution of x̄. Remember that we compute x̄ by weighing n gorillas, adding up the total weight, and dividing by n.\nAssume that the distribution of gorilla weights, X, is approximately normal:\n X ∼ N(μ, σ²)\n\nIf we weigh n gorillas, the total weight, Y, is distributed\n Y ∼ N(nμ, nσ²)\n\nusing Equation 3. And if we divide by n, the sample mean, Z, is distributed\n Z ∼ N(μ, σ²/n)\n\nusing Equation 1 with a = 1/n.\nThe distribution of Z is the sampling distribution of x̄. The mean of Z is μ, which shows that x̄ is an unbiased estimate of μ. The variance of the sampling distribution is σ²/n.\nSo the standard deviation of the sampling distribution, which is the standard error of the estimate, is σ/√n. In the example, σ is 7.5 kg and n is 9, so the standard error is 2.5 kg. That result is consistent with what we estimated by simulation, but much faster to compute!\nWe can also use the sampling distribution to compute confidence intervals. A 90% confidence interval for x̄ is the interval between the 5th and 95th percentiles of Z. 
Since Z is normally distributed, we can compute percentiles by evaluating the inverse CDF.\nThere is no closed form for the CDF of the normal distribution or its inverse, but there are fast numerical methods and they are implemented in SciPy.\nGiven a probability, p, it returns the corresponding percentile from a normal distribution with parameters mu and sigma.",
"def EvalNormalCdfInverse(p, mu=0, sigma=1):\n return scipy.stats.norm.ppf(p, loc=mu, scale=sigma)",
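For example, reusing the gorilla numbers from above, the sampling distribution of the mean is N(90, 2.5²), so the 90% confidence interval is the span between the 5th and 95th percentiles. A self-contained sketch:

```python
import scipy.stats

def EvalNormalCdfInverse(p, mu=0, sigma=1):
    # Inverse CDF (percent point function) of the normal distribution.
    return scipy.stats.norm.ppf(p, loc=mu, scale=sigma)

# Sampling distribution of x-bar: mu = 90 kg, standard error = 2.5 kg.
low = EvalNormalCdfInverse(0.05, mu=90, sigma=2.5)
high = EvalNormalCdfInverse(0.95, mu=90, sigma=2.5)
print(low, high)
# roughly (85.9, 94.1), consistent with the (86, 94) kg interval above
```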
"Central limit theorem\nIf we add values drawn from normal distributions, the distribution of the sum is normal. Most other distributions don’t have this property; if we add values drawn from other distributions, the sum does not generally have an analytic distribution.\nBut if we add up n values from almost any distribution, the distribution of the sum converges to normal as n increases.\nMore specifically, if the distribution of the values has mean and standard deviation μ and σ, the distribution of the sum is approximately N(nμ, nσ²).\nThis result is the Central Limit Theorem (CLT). It is one of the most useful tools for statistical analysis, but it comes with caveats:\n* The values have to be drawn independently. If they are correlated, the CLT doesn’t apply (although this is seldom a problem in practice).\n* The values have to come from the same distribution (although this requirement can be relaxed).\n* The values have to be drawn from a distribution with finite mean and variance. So most Pareto distributions are out.\n* The rate of convergence depends on the skewness of the distribution. Sums from an exponential distribution converge for small n. Sums from a lognormal distribution require larger sizes.\nThe Central Limit Theorem explains the prevalence of normal distributions in the natural world. Many characteristics of living things are affected by genetic and environmental factors whose effect is additive. The characteristics we measure are the sum of a large number of small effects, so their distribution tends to be normal.\nCorrelation test\nWe used a permutation test for the correlation between Age and Estimated Salary, and found that it is statistically significant, with p-value less than 0.001.\nNow we can do the same thing analytically. 
The method is based on this mathematical result: given two variables that are normally distributed and uncorrelated, if we generate a sample with size n, compute Pearson’s correlation, r, and then compute the transformed correlation\n t = r * sqrt((n - 2) / (1 - r^2))\n\nthe distribution of t is Student’s t-distribution with parameter n − 2. The t-distribution is an analytic distribution; the CDF can be computed efficiently using gamma functions.\nWe can use this result to compute the sampling distribution of correlation under the null hypothesis; that is, if we generate uncorrelated sequences of normal values, what is the distribution of their correlation? StudentCdf takes the sample size, n, and returns the sampling distribution of correlation:",
"def StudentCdf(n):\n ts = np.linspace(-3, 3, 101)\n ps = scipy.stats.t.cdf(ts, df=n-2)\n rs = ts / np.sqrt(n - 2 + ts**2)\n return thinkstats2.Cdf(rs, ps)",
"ts is a NumPy array of values for t, the transformed correlation. ps contains the corresponding probabilities, computed using the CDF of the Student’s t-distribution implemented in SciPy. The parameter of the t-distribution, df, stands for “degrees of freedom.” I won’t explain that term, but you can read about it at http://en.wikipedia.org/wiki/Degrees_of_freedom_(statistics).\nTo get from ts to the correlation coefficients, rs, we apply the inverse transform,\n r = t / sqrt(n − 2 + t^2)\n\nThe result is the sampling distribution of r under the null hypothesis. By the Central Limit Theorem, these moment-based statistics are normally distributed even if the data are not.\nThe observed correlation, r = 0.07, would be unlikely to occur if the variables were actually uncorrelated. Using the analytic distribution, we can compute just how unlikely:\n t = r * math.sqrt((n-2) / (1-r**2))\n\n p_value = 1 - scipy.stats.t.cdf(t, df=n-2)\n\nWe compute the value of t that corresponds to r=0.07, and then evaluate the t-distribution at t. The result is 2.9e-11. This example demonstrates an advantage of the analytic method: we can compute very small p-values. But in practice it usually doesn’t matter.\nChi-squared test\nWe used the chi-squared statistic to test whether a die is crooked. The chi-squared statistic measures the total normalized deviation from the expected values in a table:\n χ² = Σ (Oi − Ei)² / Ei\n\nOne reason the chi-squared statistic is widely used is that its sampling distribution under the null hypothesis is analytic; by a remarkable coincidence, it is called the chi-squared distribution. Like the t-distribution, the chi-squared CDF can be computed efficiently using gamma functions.\nSciPy provides an implementation of the chi-squared distribution, which we use to compute the sampling distribution of the chi-squared statistic:",
"def ChiSquaredCdf(n):\n xs = np.linspace(0, 25, 101)\n ps = scipy.stats.chi2.cdf(xs, df=n-1)\n return thinkstats2.Cdf(xs, ps)",
"The parameter of the chi-squared distribution is “degrees of freedom” again. In this case the correct parameter is n-1, where n is the size of the table."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
aschaffn/phys202-2015-work
|
assignments/assignment04/TheoryAndPracticeEx02.ipynb
|
mit
|
[
"Theory and Practice of Visualization Exercise 2\nImports",
"from IPython.display import Image",
"Violations of graphical excellence and integrity\nFind a data-focused visualization on one of the following websites that is a negative example of the principles that Tufte describes in The Visual Display of Quantitative Information.\n\nCNN\nFox News\nTime\n\nUpload the image for the visualization to this directory and display the image inline in this notebook.",
"# Add your filename and uncomment the following line:\n# Image(filename='yourfile.png')",
"Describe in detail the ways in which the visualization violates graphical integrity and excellence:\nYOUR ANSWER HERE"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
MIT-LCP/mimic-code
|
mimic-iv-cxr/dcm/create-mimic-cxr-jpg-metadata.ipynb
|
mit
|
[
"import pandas as pd\nimport os\nfrom collections import OrderedDict\nfrom pathlib import Path\nimport gzip\nimport json\n\nimport pydicom\nfrom pydicom._dicom_dict import DicomDictionary\n\n# we need the location of MIMIC-CXR 2.0.0\n# we use this to get cxr-record-list.csv.gz\nmimic_cxr_path = Path('/db/mimic-cxr')\n\n# we also need dicom-metadata.csv.gz and dicom-metadata.json.gz\n# these are generated by export_metadata.py in this folder.",
"In order to store sequences from the DICOM, we created a JSON. We will load in that JSON now.",
"# load json\nwith gzip.open('dicom-metadata.json.gz', 'r') as fp:\n tmp = json.load(fp)\n\ndcm_metadata = dict()\n# convert from length list of 1 item dicts to single dict\nfor d in tmp:\n for k, v in d.items():\n dcm_metadata[k] = v\n \ndel tmp\n\n# figure out how many unique top level meta-data fields in the json\n# also get a list of all the top level tags\njson_keys = [list(dcm_metadata[x].keys()) for x in dcm_metadata]\njson_keys = set([int(item) for sublist in json_keys for item in sublist])\njson_keys = list(json_keys)\njson_keys.sort()\n\nn_attrib = len(json_keys)\nprint(f'There are {n_attrib} top-level attributes in the DICOM json.')\n\n# show an example\ndcm_metadata['000046e4-e4d7f796-72c3dba4-8b67a485-0eea211d']",
"There are three very useful items in this sequence that we'd like to have in an easier form for all images: the procedure code sequence ('528434'), the coded view position ('5505568'), and the coded patient orientation ('5506064'). For convenience, we will pull the textual description of each ('524548'), rather than the ontology code itself.",
"cols = ['528434', '5505568', '5506064']\ndcm_metadata_simple = {}\nfor k, v in dcm_metadata.items():\n dcm_metadata_simple[k] = [v[c][0]['524548']\n for c in cols\n if c in v and len(v[c])>0]\ndcm_metadata_simple = pd.DataFrame.from_dict(dcm_metadata_simple, orient='index')\n\n# convert columns to be human readable\ndcm_metadata_simple.columns = [DicomDictionary[int(c)][-1] + '_' + DicomDictionary[int('524548')][-1] for c in cols]\ndcm_metadata_simple.head()\n\n# load in MIMIC-CXR 2.0.0 record list\nrecords = pd.read_csv(mimic_cxr_path / 'cxr-record-list.csv.gz')\nrecords.set_index('dicom_id', inplace=True)\n\n# load in a CSV of meta-data derived from MIMIC-CXR\nmetadata = pd.read_csv('dicom-metadata.csv.gz', index_col=0)\nmetadata.index.name = 'dicom_id'\n\n# subselect to useful metadata\nmetadata = metadata[['4194900', '1593601', '2621456', '2621457', '524320', '524336', '1577984']]\n\n# rename columns to be human readable\nmetadata.columns = [DicomDictionary[int(c)][-1] for c in metadata.columns]\n\n# merge into records\nmetadata = records[['subject_id', 'study_id']].merge(\n metadata, how='left', left_index=True, right_index=True\n)\n\n# add in the metadata from the JSON file\nmetadata = metadata.merge(\n dcm_metadata_simple, how='left', left_index=True, right_index=True\n)\nmetadata.head()\n\nmetadata.sort_values(['subject_id', 'study_id'], inplace=True)\nmetadata.to_csv('mimic-cxr-2.0.0-metadata.csv.gz', index=True, compression='gzip')"
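The integer column keys used above ('528434', '5505568', '5506064', '524548') are DICOM tags stored as single decimal integers, the way pydicom packs the 16-bit group and element numbers into one value. A minimal sketch of how to recover the familiar (gggg,eeee) hex notation from those keys (the interpretation of the keys as packed 32-bit tags is an assumption based on pydicom's convention; the tag names in the comments come from the DICOM standard data dictionary):

```python
# Decode the decimal tag keys used in this notebook into standard
# DICOM (gggg,eeee) hex notation.

def decode_tag(decimal_tag):
    """Convert a packed decimal DICOM tag to '(gggg,eeee)' hex notation."""
    group = decimal_tag >> 16        # high 16 bits: group number
    element = decimal_tag & 0xFFFF   # low 16 bits: element number
    return f"({group:04x},{element:04x})"

# Tags referenced above:
print(decode_tag(524548))   # (0008,0104) -- Code Meaning
print(decode_tag(528434))   # (0008,1032) -- Procedure Code Sequence
print(decode_tag(5505568))  # (0054,0220) -- View Code Sequence
print(decode_tag(5506064))  # (0054,0410) -- Patient Orientation Code Sequence
```

This is why `DicomDictionary[int(c)]` works directly on the string keys: the dictionary is indexed by the same packed integer tag.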
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
jinzishuai/learn2deeplearn
|
deeplearning.ai/C1.NN_DL/week3/Planar+data+classification+with+one+hidden+layer+v4.ipynb
|
gpl-3.0
|
[
"Planar data classification with one hidden layer\nWelcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression. \nYou will learn how to:\n- Implement a 2-class classification neural network with a single hidden layer\n- Use units with a non-linear activation function, such as tanh \n- Compute the cross entropy loss \n- Implement forward and backward propagation\n1 - Packages\nLet's first import all the packages that you will need during this assignment.\n- numpy is the fundamental package for scientific computing with Python.\n- sklearn provides simple and efficient tools for data mining and data analysis. \n- matplotlib is a library for plotting graphs in Python.\n- testCases provides some test examples to assess the correctness of your functions\n- planar_utils provide various useful functions used in this assignment",
"# Package imports\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom testCases_v2 import *\nimport sklearn\nimport sklearn.datasets\nimport sklearn.linear_model\nfrom planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets\n\n%matplotlib inline\n\nnp.random.seed(1) # set a seed so that the results are consistent",
"2 - Dataset\nFirst, let's get the dataset you will work on. The following code will load a \"flower\" 2-class dataset into variables X and Y.",
"X, Y = load_planar_dataset()",
"Visualize the dataset using matplotlib. The data looks like a \"flower\" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data.",
"# Visualize the data:\nplt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);",
"You have:\n - a numpy-array (matrix) X that contains your features (x1, x2)\n - a numpy-array (vector) Y that contains your labels (red:0, blue:1).\nLet's first get a better sense of what our data is like. \nExercise: How many training examples do you have? In addition, what is the shape of the variables X and Y? \nHint: How do you get the shape of a numpy array? (help)",
"### START CODE HERE ### (≈ 3 lines of code)\nshape_X = np.shape(X)\nshape_Y = np.shape(Y)\nm = shape_Y[1] # training set size\n### END CODE HERE ###\n\nprint ('The shape of X is: ' + str(shape_X))\nprint ('The shape of Y is: ' + str(shape_Y))\nprint ('I have m = %d training examples!' % (m))",
"Expected Output:\n<table style=\"width:20%\">\n\n <tr>\n <td>**shape of X**</td>\n <td> (2, 400) </td> \n </tr>\n\n <tr>\n <td>**shape of Y**</td>\n <td>(1, 400) </td> \n </tr>\n\n <tr>\n <td>**m**</td>\n <td> 400 </td> \n </tr>\n\n</table>\n\n3 - Simple Logistic Regression\nBefore building a full neural network, let's first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.",
"# Train the logistic regression classifier\nclf = sklearn.linear_model.LogisticRegressionCV();\nclf.fit(X.T, Y.T);",
"You can now plot the decision boundary of these models. Run the code below.",
"# Plot the decision boundary for logistic regression\nplot_decision_boundary(lambda x: clf.predict(x), X, Y)\nplt.title(\"Logistic Regression\")\n\n# Print accuracy\nLR_predictions = clf.predict(X.T)\nprint ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +\n '% ' + \"(percentage of correctly labelled datapoints)\")",
"Expected Output:\n<table style=\"width:20%\">\n <tr>\n <td>**Accuracy**</td>\n <td> 47% </td> \n </tr>\n\n</table>\n\nInterpretation: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now! \n4 - Neural Network model\nLogistic regression did not work well on the \"flower dataset\". You are going to train a Neural Network with a single hidden layer.\nHere is our model:\n<img src=\"images/classification_kiank.png\" style=\"width:600px;height:300px;\">\nMathematically:\nFor one example $x^{(i)}$:\n$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1] (i)}\\tag{1}$$ \n$$a^{[1] (i)} = \\tanh(z^{[1] (i)})\\tag{2}$$\n$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2] (i)}\\tag{3}$$\n$$\\hat{y}^{(i)} = a^{[2] (i)} = \\sigma(z^{ [2] (i)})\\tag{4}$$\n$$y^{(i)}_{prediction} = \\begin{cases} 1 & \\mbox{if } a^{2} > 0.5 \\ 0 & \\mbox{otherwise } \\end{cases}\\tag{5}$$\nGiven the predictions on all the examples, you can also compute the cost $J$ as follows: \n$$J = - \\frac{1}{m} \\sum\\limits_{i = 0}^{m} \\large\\left(\\small y^{(i)}\\log\\left(a^{[2] (i)}\\right) + (1-y^{(i)})\\log\\left(1- a^{[2] (i)}\\right) \\large \\right) \\small \\tag{6}$$\nReminder: The general methodology to build a Neural Network is to:\n 1. Define the neural network structure ( # of input units, # of hidden units, etc). \n 2. Initialize the model's parameters\n 3. Loop:\n - Implement forward propagation\n - Compute loss\n - Implement backward propagation to get the gradients\n - Update parameters (gradient descent)\nYou often build helper functions to compute steps 1-3 and then merge them into one function we call nn_model(). 
Once you've built nn_model() and learnt the right parameters, you can make predictions on new data.\n4.1 - Defining the neural network structure\nExercise: Define three variables:\n - n_x: the size of the input layer\n - n_h: the size of the hidden layer (set this to 4) \n - n_y: the size of the output layer\nHint: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.",
"# GRADED FUNCTION: layer_sizes\n\ndef layer_sizes(X, Y):\n \"\"\"\n Arguments:\n X -- input dataset of shape (input size, number of examples)\n Y -- labels of shape (output size, number of examples)\n \n Returns:\n n_x -- the size of the input layer\n n_h -- the size of the hidden layer\n n_y -- the size of the output layer\n \"\"\"\n ### START CODE HERE ### (≈ 3 lines of code)\n n_x = X.shape[0] # size of input layer\n n_h = 4\n n_y = Y.shape[0] # size of output layer\n ### END CODE HERE ###\n return (n_x, n_h, n_y)\n\nX_assess, Y_assess = layer_sizes_test_case()\n(n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess)\nprint(\"The size of the input layer is: n_x = \" + str(n_x))\nprint(\"The size of the hidden layer is: n_h = \" + str(n_h))\nprint(\"The size of the output layer is: n_y = \" + str(n_y))",
"Expected Output (these are not the sizes you will use for your network, they are just used to assess the function you've just coded).\n<table style=\"width:20%\">\n <tr>\n <td>**n_x**</td>\n <td> 5 </td> \n </tr>\n\n <tr>\n <td>**n_h**</td>\n <td> 4 </td> \n </tr>\n\n <tr>\n <td>**n_y**</td>\n <td> 2 </td> \n </tr>\n\n</table>\n\n4.2 - Initialize the model's parameters\nExercise: Implement the function initialize_parameters().\nInstructions:\n- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.\n- You will initialize the weights matrices with random values. \n - Use: np.random.randn(a,b) * 0.01 to randomly initialize a matrix of shape (a,b).\n- You will initialize the bias vectors as zeros. \n - Use: np.zeros((a,b)) to initialize a matrix of shape (a,b) with zeros.",
"# GRADED FUNCTION: initialize_parameters\n\ndef initialize_parameters(n_x, n_h, n_y):\n \"\"\"\n Argument:\n n_x -- size of the input layer\n n_h -- size of the hidden layer\n n_y -- size of the output layer\n \n Returns:\n params -- python dictionary containing your parameters:\n W1 -- weight matrix of shape (n_h, n_x)\n b1 -- bias vector of shape (n_h, 1)\n W2 -- weight matrix of shape (n_y, n_h)\n b2 -- bias vector of shape (n_y, 1)\n \"\"\"\n \n np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.\n \n ### START CODE HERE ### (≈ 4 lines of code)\n W1 = np.random.randn(n_h,n_x) * 0.01 \n b1 = np.zeros((n_h,1))\n W2 = np.random.randn(n_y,n_h) * 0.01 \n b2 = np.zeros((n_y,1))\n ### END CODE HERE ###\n \n assert (W1.shape == (n_h, n_x))\n assert (b1.shape == (n_h, 1))\n assert (W2.shape == (n_y, n_h))\n assert (b2.shape == (n_y, 1))\n \n parameters = {\"W1\": W1,\n \"b1\": b1,\n \"W2\": W2,\n \"b2\": b2}\n \n return parameters\n\nn_x, n_h, n_y = initialize_parameters_test_case()\n\nparameters = initialize_parameters(n_x, n_h, n_y)\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))",
"Expected Output:\n<table style=\"width:90%\">\n <tr>\n <td>**W1**</td>\n <td> [[-0.00416758 -0.00056267]\n [-0.02136196 0.01640271]\n [-0.01793436 -0.00841747]\n [ 0.00502881 -0.01245288]] </td> \n </tr>\n\n <tr>\n <td>**b1**</td>\n <td> [[ 0.]\n [ 0.]\n [ 0.]\n [ 0.]] </td> \n </tr>\n\n <tr>\n <td>**W2**</td>\n <td> [[-0.01057952 -0.00909008 0.00551454 0.02292208]]</td> \n </tr>\n\n\n <tr>\n <td>**b2**</td>\n <td> [[ 0.]] </td> \n </tr>\n\n</table>\n\n4.3 - The Loop\nQuestion: Implement forward_propagation().\nInstructions:\n- Look above at the mathematical representation of your classifier.\n- You can use the function sigmoid(). It is built-in (imported) in the notebook.\n- You can use the function np.tanh(). It is part of the numpy library.\n- The steps you have to implement are:\n 1. Retrieve each parameter from the dictionary \"parameters\" (which is the output of initialize_parameters()) by using parameters[\"..\"].\n 2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).\n- Values needed in the backpropagation are stored in \"cache\". The cache will be given as an input to the backpropagation function.",
"# GRADED FUNCTION: forward_propagation\n\ndef forward_propagation(X, parameters):\n \"\"\"\n Argument:\n X -- input data of size (n_x, m)\n parameters -- python dictionary containing your parameters (output of initialization function)\n \n Returns:\n A2 -- The sigmoid output of the second activation\n cache -- a dictionary containing \"Z1\", \"A1\", \"Z2\" and \"A2\"\n \"\"\"\n # Retrieve each parameter from the dictionary \"parameters\"\n ### START CODE HERE ### (≈ 4 lines of code)\n W1 = parameters[\"W1\"]\n b1 = parameters[\"b1\"]\n W2 = parameters[\"W2\"]\n b2 = parameters[\"b2\"]\n ### END CODE HERE ###\n \n # Implement Forward Propagation to calculate A2 (probabilities)\n ### START CODE HERE ### (≈ 4 lines of code)\n Z1 = np.dot(W1, X) + b1\n A1 = np.tanh(Z1)\n Z2 = np.dot(W2, A1) + b2\n A2 = sigmoid(Z2)\n ### END CODE HERE ###\n \n assert(A2.shape == (1, X.shape[1]))\n \n cache = {\"Z1\": Z1,\n \"A1\": A1,\n \"Z2\": Z2,\n \"A2\": A2}\n \n return A2, cache\n\nX_assess, parameters = forward_propagation_test_case()\nA2, cache = forward_propagation(X_assess, parameters)\n\n# Note: we use the mean here just to make sure that your output matches ours. \nprint(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))",
"Expected Output:\n<table style=\"width:50%\">\n <tr>\n <td> 0.262818640198 0.091999045227 -1.30766601287 0.212877681719 </td> \n </tr>\n</table>\n\nNow that you have computed $A^{[2]}$ (in the Python variable \"A2\"), which contains $a^{2}$ for every example, you can compute the cost function as follows:\n$$J = - \\frac{1}{m} \\sum\\limits_{i = 0}^{m} \\large{(} \\small y^{(i)}\\log\\left(a^{[2] (i)}\\right) + (1-y^{(i)})\\log\\left(1- a^{[2] (i)}\\right) \\large{)} \\small\\tag{13}$$\nExercise: Implement compute_cost() to compute the value of the cost $J$.\nInstructions:\n- There are many ways to implement the cross-entropy loss. To help you, we give you how we would have implemented\n$- \\sum\\limits_{i=0}^{m} y^{(i)}\\log(a^{2})$:\npython\nlogprobs = np.multiply(np.log(A2),Y)\ncost = - np.sum(logprobs) # no need to use a for loop!\n(you can use either np.multiply() and then np.sum() or directly np.dot()).",
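The instructions above note that the cross-entropy can be computed either with np.multiply() followed by np.sum(), or directly with np.dot(). A quick check on made-up toy probabilities (the values of A2 and Y below are invented for illustration, not taken from the assignment) confirms the two forms agree:

```python
import numpy as np

# Toy predicted probabilities A2 and labels Y, both of shape (1, m)
A2 = np.array([[0.8, 0.3, 0.9, 0.4]])
Y  = np.array([[1,   0,   1,   0  ]])
m = Y.shape[1]

# Form 1: element-wise multiply, then sum
logprobs = np.multiply(np.log(A2), Y) + np.multiply(np.log(1 - A2), 1 - Y)
cost_multiply = -np.sum(logprobs) / m

# Form 2: the same quantity as dot products (row vector times column vector)
cost_dot = -(np.dot(np.log(A2), Y.T) + np.dot(np.log(1 - A2), (1 - Y).T)) / m
cost_dot = float(np.squeeze(cost_dot))

assert np.isclose(cost_multiply, cost_dot)
```

The dot-product form avoids materializing the intermediate element-wise product, but both are fully vectorized; either passes the grader's tolerance.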
"# GRADED FUNCTION: compute_cost\n\ndef compute_cost(A2, Y, parameters):\n \"\"\"\n Computes the cross-entropy cost given in equation (13)\n \n Arguments:\n A2 -- The sigmoid output of the second activation, of shape (1, number of examples)\n Y -- \"true\" labels vector of shape (1, number of examples)\n parameters -- python dictionary containing your parameters W1, b1, W2 and b2\n \n Returns:\n cost -- cross-entropy cost given equation (13)\n \"\"\"\n \n m = Y.shape[1] # number of example\n\n # Compute the cross-entropy cost\n ### START CODE HERE ### (≈ 2 lines of code)\n logprobs = np.multiply(np.log(A2),Y)+np.multiply(np.log(1-A2),1-Y)\n cost = - np.sum(logprobs)/m \n ### END CODE HERE ###\n \n cost = np.squeeze(cost) # makes sure cost is the dimension we expect. \n # E.g., turns [[17]] into 17 \n assert(isinstance(cost, float))\n \n return cost\n\nA2, Y_assess, parameters = compute_cost_test_case()\n\nprint(\"cost = \" + str(compute_cost(A2, Y_assess, parameters)))",
"Expected Output:\n<table style=\"width:20%\">\n <tr>\n <td>**cost**</td>\n <td> 0.693058761... </td> \n </tr>\n\n</table>\n\nUsing the cache computed during forward propagation, you can now implement backward propagation.\nQuestion: Implement the function backward_propagation().\nInstructions:\nBackpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation. \n<img src=\"images/grad_summary.png\" style=\"width:600px;height:300px;\">\n<!--\n$\\frac{\\partial \\mathcal{J} }{ \\partial z_{2}^{(i)} } = \\frac{1}{m} (a^{[2](i)} - y^{(i)})$\n\n$\\frac{\\partial \\mathcal{J} }{ \\partial W_2 } = \\frac{\\partial \\mathcal{J} }{ \\partial z_{2}^{(i)} } a^{[1] (i) T} $\n\n$\\frac{\\partial \\mathcal{J} }{ \\partial b_2 } = \\sum_i{\\frac{\\partial \\mathcal{J} }{ \\partial z_{2}^{(i)}}}$\n\n$\\frac{\\partial \\mathcal{J} }{ \\partial z_{1}^{(i)} } = W_2^T \\frac{\\partial \\mathcal{J} }{ \\partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $\n\n$\\frac{\\partial \\mathcal{J} }{ \\partial W_1 } = \\frac{\\partial \\mathcal{J} }{ \\partial z_{1}^{(i)} } X^T $\n\n$\\frac{\\partial \\mathcal{J} _i }{ \\partial b_1 } = \\sum_i{\\frac{\\partial \\mathcal{J} }{ \\partial z_{1}^{(i)}}}$\n\n- Note that $*$ denotes elementwise multiplication.\n- The notation you will use is common in deep learning coding:\n - dW1 = $\\frac{\\partial \\mathcal{J} }{ \\partial W_1 }$\n - db1 = $\\frac{\\partial \\mathcal{J} }{ \\partial b_1 }$\n - dW2 = $\\frac{\\partial \\mathcal{J} }{ \\partial W_2 }$\n - db2 = $\\frac{\\partial \\mathcal{J} }{ \\partial b_2 }$\n\n!-->\n\n\nTips:\nTo compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. 
So you can compute \n$g^{[1]'}(Z^{[1]})$ using (1 - np.power(A1, 2)).",
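The identity used in the dZ1 computation, that the derivative of tanh is $1 - a^2$ where $a = \tanh(z)$, can be sanity-checked numerically against a central finite-difference approximation:

```python
import numpy as np

z = np.linspace(-3, 3, 50)
a = np.tanh(z)

# Analytic derivative used in backprop: 1 - tanh(z)^2
analytic = 1 - np.power(a, 2)

# Central finite-difference approximation of d/dz tanh(z)
eps = 1e-6
numeric = (np.tanh(z + eps) - np.tanh(z - eps)) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-8)
```

This is the same `(1 - np.power(A1, 2))` factor that multiplies `np.dot(W2.T, dZ2)` element-wise in the backward pass.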
"# GRADED FUNCTION: backward_propagation\n\ndef backward_propagation(parameters, cache, X, Y):\n \"\"\"\n Implement the backward propagation using the instructions above.\n \n Arguments:\n parameters -- python dictionary containing our parameters \n cache -- a dictionary containing \"Z1\", \"A1\", \"Z2\" and \"A2\".\n X -- input data of shape (2, number of examples)\n Y -- \"true\" labels vector of shape (1, number of examples)\n \n Returns:\n grads -- python dictionary containing your gradients with respect to different parameters\n \"\"\"\n m = X.shape[1]\n \n # First, retrieve W1 and W2 from the dictionary \"parameters\".\n ### START CODE HERE ### (≈ 2 lines of code)\n W1 = parameters[\"W1\"]\n W2 = parameters[\"W2\"]\n ### END CODE HERE ###\n \n # Retrieve also A1 and A2 from dictionary \"cache\".\n ### START CODE HERE ### (≈ 2 lines of code)\n A1 = cache['A1']\n A2 = cache['A2']\n ### END CODE HERE ###\n \n # Backward propagation: calculate dW1, db1, dW2, db2. \n ### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)\n dZ2 = A2 - Y\n dW2 = 1/m*np.dot(dZ2, A1.T)\n db2 = 1/m*np.sum(dZ2, axis=1, keepdims=True)\n dZ1 = np.dot(W2.T,dZ2)*(1 - np.power(A1, 2))\n dW1 = 1/m*np.dot(dZ1,X.T)\n db1 = 1/m*np.sum(dZ1, axis=1, keepdims=True)\n ### END CODE HERE ###\n \n grads = {\"dW1\": dW1,\n \"db1\": db1,\n \"dW2\": dW2,\n \"db2\": db2}\n \n return grads\n\nparameters, cache, X_assess, Y_assess = backward_propagation_test_case()\n\ngrads = backward_propagation(parameters, cache, X_assess, Y_assess)\nprint (\"dW1 = \"+ str(grads[\"dW1\"]))\nprint (\"db1 = \"+ str(grads[\"db1\"]))\nprint (\"dW2 = \"+ str(grads[\"dW2\"]))\nprint (\"db2 = \"+ str(grads[\"db2\"]))",
"Expected output:\n<table style=\"width:80%\">\n <tr>\n <td>**dW1**</td>\n <td> [[ 0.00301023 -0.00747267]\n [ 0.00257968 -0.00641288]\n [-0.00156892 0.003893 ]\n [-0.00652037 0.01618243]] </td> \n </tr>\n\n <tr>\n <td>**db1**</td>\n <td> [[ 0.00176201]\n [ 0.00150995]\n [-0.00091736]\n [-0.00381422]] </td> \n </tr>\n\n <tr>\n <td>**dW2**</td>\n <td> [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]] </td> \n </tr>\n\n\n <tr>\n <td>**db2**</td>\n <td> [[-0.16655712]] </td> \n </tr>\n\n</table>\n\nQuestion: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).\nGeneral gradient descent rule: $ \\theta = \\theta - \\alpha \\frac{\\partial J }{ \\partial \\theta }$ where $\\alpha$ is the learning rate and $\\theta$ represents a parameter.\nIllustration: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.\n<img src=\"images/sgd.gif\" style=\"width:400;height:400;\"> <img src=\"images/sgd_bad.gif\" style=\"width:400;height:400;\">",
"# GRADED FUNCTION: update_parameters\n\ndef update_parameters(parameters, grads, learning_rate = 1.2):\n \"\"\"\n Updates parameters using the gradient descent update rule given above\n \n Arguments:\n parameters -- python dictionary containing your parameters \n grads -- python dictionary containing your gradients \n \n Returns:\n parameters -- python dictionary containing your updated parameters \n \"\"\"\n # Retrieve each parameter from the dictionary \"parameters\"\n ### START CODE HERE ### (≈ 4 lines of code)\n W1 = parameters[\"W1\"]\n b1 = parameters[\"b1\"]\n W2 = parameters[\"W2\"]\n b2 = parameters[\"b2\"]\n ### END CODE HERE ###\n \n # Retrieve each gradient from the dictionary \"grads\"\n ### START CODE HERE ### (≈ 4 lines of code)\n dW1 = grads[\"dW1\"]\n db1 = grads[\"db1\"]\n dW2 = grads[\"dW2\"]\n db2 = grads[\"db2\"]\n ## END CODE HERE ###\n \n # Update rule for each parameter\n ### START CODE HERE ### (≈ 4 lines of code)\n W1 = W1 - learning_rate*dW1\n b1 = b1 - learning_rate*db1\n W2 = W2 - learning_rate*dW2\n b2 = b2 - learning_rate*db2\n ### END CODE HERE ###\n \n parameters = {\"W1\": W1,\n \"b1\": b1,\n \"W2\": W2,\n \"b2\": b2}\n \n return parameters\n\nparameters, grads = update_parameters_test_case()\nparameters = update_parameters(parameters, grads)\n\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))",
"Expected Output:\n<table style=\"width:80%\">\n <tr>\n <td>**W1**</td>\n <td> [[-0.00643025 0.01936718]\n [-0.02410458 0.03978052]\n [-0.01653973 -0.02096177]\n [ 0.01046864 -0.05990141]]</td> \n </tr>\n\n <tr>\n <td>**b1**</td>\n <td> [[ -1.02420756e-06]\n [ 1.27373948e-05]\n [ 8.32996807e-07]\n [ -3.20136836e-06]]</td> \n </tr>\n\n <tr>\n <td>**W2**</td>\n <td> [[-0.01041081 -0.04463285 0.01758031 0.04747113]] </td> \n </tr>\n\n\n <tr>\n <td>**b2**</td>\n <td> [[ 0.00010457]] </td> \n </tr>\n\n</table>\n\n4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model()\nQuestion: Build your neural network model in nn_model().\nInstructions: The neural network model has to use the previous functions in the right order.",
"# GRADED FUNCTION: nn_model\n\ndef nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):\n \"\"\"\n Arguments:\n X -- dataset of shape (2, number of examples)\n Y -- labels of shape (1, number of examples)\n n_h -- size of the hidden layer\n num_iterations -- Number of iterations in gradient descent loop\n print_cost -- if True, print the cost every 1000 iterations\n \n Returns:\n parameters -- parameters learnt by the model. They can then be used to predict.\n \"\"\"\n \n np.random.seed(3)\n n_x = layer_sizes(X, Y)[0]\n n_y = layer_sizes(X, Y)[2]\n \n # Initialize parameters, then retrieve W1, b1, W2, b2. Inputs: \"n_x, n_h, n_y\". Outputs = \"W1, b1, W2, b2, parameters\".\n ### START CODE HERE ### (≈ 5 lines of code)\n parameters = initialize_parameters(n_x, n_h, n_y)\n W1 = parameters[\"W1\"]\n b1 = parameters[\"b1\"]\n W2 = parameters[\"W2\"]\n b2 = parameters[\"b2\"]\n ### END CODE HERE ###\n \n # Loop (gradient descent)\n\n for i in range(0, num_iterations):\n \n ### START CODE HERE ### (≈ 4 lines of code)\n # Forward propagation. Inputs: \"X, parameters\". Outputs: \"A2, cache\".\n A2, cache = forward_propagation(X, parameters)\n \n # Cost function. Inputs: \"A2, Y, parameters\". Outputs: \"cost\".\n cost = compute_cost(A2, Y, parameters)\n \n # Backpropagation. Inputs: \"parameters, cache, X, Y\". Outputs: \"grads\".\n grads = backward_propagation(parameters, cache, X, Y)\n \n # Gradient descent parameter update. Inputs: \"parameters, grads\". 
Outputs: \"parameters\".\n parameters = update_parameters(parameters, grads)\n \n ### END CODE HERE ###\n \n # Print the cost every 1000 iterations\n if print_cost and i % 1000 == 0:\n print (\"Cost after iteration %i: %f\" %(i, cost))\n\n return parameters\n\nX_assess, Y_assess = nn_model_test_case()\nparameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=True)\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))",
"Expected Output:\n<table style=\"width:90%\">\n\n<tr> \n <td> \n **cost after iteration 0**\n </td>\n <td> \n 0.692739\n </td>\n</tr>\n\n<tr> \n <td> \n <center> $\\vdots$ </center>\n </td>\n <td> \n <center> $\\vdots$ </center>\n </td>\n</tr>\n\n <tr>\n <td>**W1**</td>\n <td> [[-0.65848169 1.21866811]\n [-0.76204273 1.39377573]\n [ 0.5792005 -1.10397703]\n [ 0.76773391 -1.41477129]]</td> \n </tr>\n\n <tr>\n <td>**b1**</td>\n <td> [[ 0.287592 ]\n [ 0.3511264 ]\n [-0.2431246 ]\n [-0.35772805]] </td> \n </tr>\n\n <tr>\n <td>**W2**</td>\n <td> [[-2.45566237 -3.27042274 2.00784958 3.36773273]] </td> \n </tr>\n\n\n <tr>\n <td>**b2**</td>\n <td> [[ 0.20459656]] </td> \n </tr>\n\n</table>\n\n4.5 Predictions\nQuestion: Use your model to predict by building predict().\nUse forward propagation to predict results.\nReminder: predictions = $y_{prediction} = \\mathbb 1 \\text{{activation > 0.5}} = \\begin{cases}\n 1 & \\text{if}\\ activation > 0.5 \\\n 0 & \\text{otherwise}\n \\end{cases}$ \nAs an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: X_new = (X > threshold)",
"# GRADED FUNCTION: predict\n\ndef predict(parameters, X):\n \"\"\"\n Using the learned parameters, predicts a class for each example in X\n \n Arguments:\n parameters -- python dictionary containing your parameters \n X -- input data of size (n_x, m)\n \n Returns\n predictions -- vector of predictions of our model (red: 0 / blue: 1)\n \"\"\"\n \n # Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.\n ### START CODE HERE ### (≈ 2 lines of code)\n A2, cache = forward_propagation(X, parameters)\n predictions = (A2 > 0.5)\n ### END CODE HERE ###\n \n return predictions\n\nparameters, X_assess = predict_test_case()\n\npredictions = predict(parameters, X_assess)\nprint(\"predictions mean = \" + str(np.mean(predictions)))",
"Expected Output: \n<table style=\"width:40%\">\n <tr>\n <td>**predictions mean**</td>\n <td> 0.666666666667 </td> \n </tr>\n\n</table>\n\nIt is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.",
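The thresholding trick from the predict exercise pairs naturally with the accuracy formula used later in this notebook: np.dot(Y, predictions.T) counts true positives and np.dot(1-Y, 1-predictions.T) counts true negatives. A small sketch on invented toy values (not assignment data) shows how the pieces fit together:

```python
import numpy as np

# Toy output probabilities and true labels, shape (1, m)
A2 = np.array([[0.9, 0.2, 0.6, 0.4, 0.8]])
Y  = np.array([[1,   0,   0,   0,   1  ]])

# Threshold at 0.5: a boolean array, which numpy treats as 0/1 in arithmetic
predictions = (A2 > 0.5)  # three of the five examples predicted positive

# dot(Y, pred.T) = number of true positives (both 1)
# dot(1-Y, 1-pred.T) = number of true negatives (both 0)
correct = float(np.dot(Y, predictions.T) + np.dot(1 - Y, 1 - predictions.T))
accuracy = correct / Y.size * 100
print(accuracy)  # 4 of 5 correct -> 80.0
```

Here example 3 (probability 0.6, label 0) is the single misclassification, so the two dot products together count 4 correct predictions.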
"# Build a model with a n_h-dimensional hidden layer\nparameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)\n\n# Plot the decision boundary\nplot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)\nplt.title(\"Decision Boundary for hidden layer size \" + str(4))",
"Expected Output:\n<table style=\"width:40%\">\n <tr>\n <td>**Cost after iteration 9000**</td>\n <td> 0.218607 </td> \n </tr>\n\n</table>",
"# Print accuracy\npredictions = predict(parameters, X)\nprint ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')",
"Expected Output: \n<table style=\"width:15%\">\n <tr>\n <td>**Accuracy**</td>\n <td> 90% </td> \n </tr>\n</table>\n\nAccuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression. \nNow, let's try out several hidden layer sizes.\n4.6 - Tuning hidden layer size (optional/ungraded exercise)\nRun the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes.",
"# This may take about 2 minutes to run\n\nplt.figure(figsize=(16, 32))\nhidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]\nfor i, n_h in enumerate(hidden_layer_sizes):\n plt.subplot(5, 2, i+1)\n plt.title('Hidden Layer of size %d' % n_h)\n parameters = nn_model(X, Y, n_h, num_iterations = 5000)\n plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)\n predictions = predict(parameters, X)\n accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)\n print (\"Accuracy for {} hidden units: {} %\".format(n_h, accuracy))",
"Interpretation:\n- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data. \n- The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without incurring noticeable overfitting.\n- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting. \nOptional questions:\nNote: Remember to submit the assignment by clicking the blue \"Submit Assignment\" button at the upper-right. \nSome optional/ungraded questions that you can explore if you wish: \n- What happens when you change the tanh activation for a sigmoid activation or a ReLU activation?\n- Play with the learning_rate. What happens?\n- What if we change the dataset? (See part 5 below!)\n<font color='blue'>\nYou've learnt to:\n- Build a complete neural network with a hidden layer\n- Make good use of a non-linear unit\n- Implement forward propagation and backpropagation, and train a neural network\n- See the impact of varying the hidden layer size, including overfitting.\nNice work! \n5) Performance on other datasets\nIf you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.",
"# Datasets\nnoisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()\n\ndatasets = {\"noisy_circles\": noisy_circles,\n \"noisy_moons\": noisy_moons,\n \"blobs\": blobs,\n \"gaussian_quantiles\": gaussian_quantiles}\n\n### START CODE HERE ### (choose your dataset)\ndataset = \"noisy_moons\"\n### END CODE HERE ###\n\nX, Y = datasets[dataset]\nX, Y = X.T, Y.reshape(1, Y.shape[0])\n\n# make blobs binary\nif dataset == \"blobs\":\n Y = Y%2\n\n# Visualize the data\nplt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);",
"Congrats on finishing this Programming Assignment!\nReference:\n- http://scs.ryerson.ca/~aharley/neural-networks/\n- http://cs231n.github.io/neural-networks-case-study/"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
seifip/udacity-deep-learning-nanodegree
|
embeddings/Skip-Gram_word2vec.ipynb
|
mit
|
[
"Skip-gram word2vec\nIn this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.\nReadings\nHere are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.\n\nA really good conceptual overview of word2vec from Chris McCormick \nFirst word2vec paper from Mikolov et al.\nNIPS paper with improvements for word2vec also from Mikolov et al.\nAn implementation of word2vec from Thushan Ganegedara\nTensorFlow word2vec tutorial\n\nWord embeddings\nWhen you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. \n\nTo solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the \"on\" input unit.\n\nInstead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example \"heart\" is encoded as 958, \"mind\" as 18094. Then to get hidden layer values for \"heart\", you just take the 958th row of the embedding matrix. 
This process is called an embedding lookup and the number of hidden units is the embedding dimension.\n<img src='assets/tokenize_lookup.png' width=500>\nThere is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.\nEmbeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.\nWord2Vec\nThe word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as \"black\", \"white\", and \"red\" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.\n<img src=\"assets/word2vec_architectures.png\" width=\"500\">\nIn this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.\nFirst up, importing packages.",
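The lookup-equals-multiplication claim above can be checked with a tiny NumPy sketch (toy numbers, not from the notebook):

```python
import numpy as np

# Toy embedding matrix: vocabulary of 5 words, embedding dimension 3.
embedding = np.arange(15, dtype=float).reshape(5, 3)

# One-hot multiply (what we want to avoid)...
one_hot = np.zeros(5)
one_hot[2] = 1.0
via_matmul = one_hot @ embedding

# ...versus directly grabbing row 2 (the embedding lookup).
via_lookup = embedding[2]

assert np.allclose(via_matmul, via_lookup)
```

Both paths return the same vector, which is why the lookup table shortcut is safe.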
"import time\n\nimport numpy as np\nimport tensorflow as tf\n\nimport utils",
"Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.",
"from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport zipfile\n\ndataset_folder_path = 'data'\ndataset_filename = 'text8.zip'\ndataset_name = 'Text8 Dataset'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(dataset_filename):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:\n urlretrieve(\n 'http://mattmahoney.net/dc/text8.zip',\n dataset_filename,\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with zipfile.ZipFile(dataset_filename) as zip_ref:\n zip_ref.extractall(dataset_folder_path)\n \nwith open('data/text8') as f:\n text = f.read()",
"Preprocessing\nHere I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.",
"words = utils.preprocess(text)\nprint(words[:30])\n\nprint(\"Total words: {}\".format(len(words)))\nprint(\"Unique words: {}\".format(len(set(words))))",
"And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word (\"the\") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.",
"vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)\nint_words = [vocab_to_int[word] for word in words]",
"Subsampling\nWords that show up often such as \"the\", \"of\", and \"for\" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by \n$$ P(w_i) = 1 - \\sqrt{\\frac{t}{f(w_i)}} $$\nwhere $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.\nI'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.\n\nExercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.",
"from collections import Counter\nimport random\n\ndrop_threshold = 1e-5\nword_counts = Counter(int_words)\ntotal_count = len(int_words)\nfreqs = {word: count/total_count for word, count in word_counts.items()}\np_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}\ntrain_words = [word for word in int_words if random.random() < (1 - p_drop[word])]",
"Making batches\nNow that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. \nFrom Mikolov et al.: \n\"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels.\"\n\nExercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.",
"def get_target(words, idx, window_size=5):\n ''' Get a list of words in a window around an index. '''\n R = np.random.randint(1, window_size+1)\n start = idx - R if (idx - R) > 0 else 0\n stop = idx + R\n target_words = set(words[start:idx] + words[idx+1:stop+1])\n return list(target_words)",
"Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.",
"def get_batches(words, batch_size, window_size=5):\n ''' Create a generator of word batches as a tuple (inputs, targets) '''\n \n n_batches = len(words)//batch_size\n \n # only full batches\n words = words[:n_batches*batch_size]\n \n for idx in range(0, len(words), batch_size):\n x, y = [], []\n batch = words[idx:idx+batch_size]\n for ii in range(len(batch)):\n batch_x = batch[ii]\n batch_y = get_target(batch, ii, window_size)\n y.extend(batch_y)\n x.extend([batch_x]*len(batch_y))\n yield x, y\n ",
"Building the graph\nFrom Chris McCormick's blog, we can see the general structure of our network.\n\nThe input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.\nThe idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.\nI'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.\n\nExercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.",
"train_graph = tf.Graph()\nwith train_graph.as_default():\n inputs = tf.placeholder(tf.int32, [None], name='inputs')\n labels = tf.placeholder(tf.int32, [None, None], name='labels')",
"Embedding\nThe embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \\times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.\n\nExercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.",
"n_vocab = len(int_to_vocab)\nn_embedding = 200\nwith train_graph.as_default():\n embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))\n embed = tf.nn.embedding_lookup(embedding, inputs)",
"Negative sampling\nFor every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called \"negative sampling\". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.\n\nExercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.",
"# Number of negative labels to sample\nn_sampled = 100\nwith train_graph.as_default():\n softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1))\n softmax_b = tf.Variable(tf.zeros(n_vocab))\n \n # Calculate the loss using negative sampling\n loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, \n labels, embed,\n n_sampled, n_vocab)\n \n cost = tf.reduce_mean(loss)\n optimizer = tf.train.AdamOptimizer().minimize(cost)",
"Validation\nThis code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.",
"with train_graph.as_default():\n ## From Thushan Ganegedara's implementation\n valid_size = 16 # Random set of words to evaluate similarity on.\n valid_window = 100\n # pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent \n valid_examples = np.array(random.sample(range(valid_window), valid_size//2))\n valid_examples = np.append(valid_examples, \n random.sample(range(1000,1000+valid_window), valid_size//2))\n\n valid_dataset = tf.constant(valid_examples, dtype=tf.int32)\n \n # We use the cosine distance:\n norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))\n normalized_embedding = embedding / norm\n valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)\n similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))\n\n# If the checkpoints directory doesn't exist:\n!mkdir checkpoints",
"Training\nBelow is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.",
"epochs = 10\nbatch_size = 1000\nwindow_size = 10\n\nwith train_graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=train_graph) as sess:\n iteration = 1\n loss = 0\n sess.run(tf.global_variables_initializer())\n\n for e in range(1, epochs+1):\n batches = get_batches(train_words, batch_size, window_size)\n start = time.time()\n for x, y in batches:\n \n feed = {inputs: x,\n labels: np.array(y)[:, None]}\n train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)\n \n loss += train_loss\n \n if iteration % 100 == 0: \n end = time.time()\n print(\"Epoch {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Avg. Training loss: {:.4f}\".format(loss/100),\n \"{:.4f} sec/batch\".format((end-start)/100))\n loss = 0\n start = time.time()\n \n if iteration % 1000 == 0:\n ## From Thushan Ganegedara's implementation\n # note that this is expensive (~20% slowdown if computed every 500 steps)\n sim = similarity.eval()\n for i in range(valid_size):\n valid_word = int_to_vocab[valid_examples[i]]\n top_k = 8 # number of nearest neighbors\n nearest = (-sim[i, :]).argsort()[1:top_k+1]\n log = 'Nearest to %s:' % valid_word\n for k in range(top_k):\n close_word = int_to_vocab[nearest[k]]\n log = '%s %s,' % (log, close_word)\n print(log)\n \n iteration += 1\n save_path = saver.save(sess, \"checkpoints/text8.ckpt\")\n embed_mat = sess.run(normalized_embedding)",
"Restore the trained network if you need to:",
"with train_graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=train_graph) as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n embed_mat = sess.run(embedding)",
"Visualizing the word vectors\nBelow we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport matplotlib.pyplot as plt\nfrom sklearn.manifold import TSNE\n\nviz_words = 500\ntsne = TSNE()\nembed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])\n\nfig, ax = plt.subplots(figsize=(14, 14))\nfor idx in range(viz_words):\n plt.scatter(*embed_tsne[idx, :], color='steelblue')\n plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gfrubi/FM2
|
Notebooks/Ejemplos-Transformada-Fourier.ipynb
|
gpl-3.0
|
[
"Ejemplos de transformadas de Fourier:\nEn el presente notebook se han definido y graficado diversas funciones y sus correspondientes transformadas de Fourier.",
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom ipywidgets import interact\n\nplt.style.use('classic')",
"Pulso cuadrado:\nSea \n$$\nf(x):=\\left{\n\\begin{matrix}\n1, &\\rm{si}\\quad |x|<a \\\\\n0, &\\rm{si}\\quad|x|>a \\\n\\end{matrix}\\right. ,\n$$\ncon transformada de Fourier\n$$\n{\\cal F}f=2\\frac{\\sin(k a)}{k}.\n$$",
"def p(x,a):\n if abs(x)<a:\n return 1.\n else:\n return 0.\npulso = np.vectorize(p) #vectorizando la función pulso",
"Definimos 1000 puntos en el intervalo $[-\\pi,\\pi]$:",
"x = np.linspace(-10,10,1000)\nk = np.linspace(-10,10,1000)\n\ndef p(a=1):\n plt.figure(figsize=(12,5))\n plt.subplot(1,2,1)\n #fig,ej=subplots(1,2,figsize=(14,5))\n plt.plot(x,pulso(x,a), lw = 2)\n plt.xlim(-10,10)\n plt.ylim(-.1,1.1)\n plt.grid(True)\n plt.xlabel(r'$x$',fontsize=15)\n plt.ylabel(r'$f(x)$',fontsize=15)\n plt.subplot(1,2,2)\n plt.plot(k,2*(np.sin(k*a)/k), lw = 2)\n plt.xlim(-10,10)\n plt.grid(True)\n plt.xlabel('$k$',fontsize=15)\n plt.ylabel('$\\\\tilde{f}(k)$',fontsize=15)\n\n#p(5)\n#plt.savefig('fig-transformada-Fourier-pulso-cuadrado.pdf')\n\ninteract(p, a=(1,10))",
"Función Guassiana\nSea \n$$\nf(x):=e^{-\\alpha x^2}, \\qquad \\alpha>0,\n$$\ncon transformada de Fourier\n$$\n{\\cal F}f=\\sqrt{\\frac{\\pi}{\\alpha}}e^{-k^2/(4\\alpha)}.\n$$",
"def gaussina(alpha=1):\n plt.figure(figsize=(12,5))\n plt.subplot(1,2,1)\n plt.plot(x,np.exp(-alpha*x**2), lw=2)\n plt.xlim(-3,3)\n plt.grid(True)\n plt.xlabel('$x$',fontsize=15)\n plt.ylabel('$f(x)$',fontsize=15)\n plt.subplot(1,2,2)\n plt.plot(k,np.sqrt(np.pi/alpha)*np.exp(-k**2/(4.*alpha)), lw=2)\n plt.xlim(-10,10)\n plt.ylim(0,2)\n plt.grid(True)\n plt.xlabel('$k$',fontsize=15)\n plt.ylabel('$\\\\tilde{f}(k)$',fontsize=15)\n\ninteract(gaussina, alpha=(1,50))\n\n#gaussina(5)\n#plt.savefig('fig-transformada-Fourier-gaussiana.pdf')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mayankjohri/LetsExplorePython
|
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
|
gpl-3.0
|
[
"Modules, Classes, and Objects & OOPS\n\nPython is something called an “object- oriented programming language.” What this means is\nthere’s a construct in Python called a class that lets you structure your software in a particular\nway. Using classes, you can add consistency to your programs so that they can be used in a cleaner\nway, or at least that’s the theory.\nClasses and objects are the two main aspects of object oriented programming. A class creates a new type where objects are instances of the class. An analogy is that you can have variables of type i n t which translates to saying that variables that store integers are variables which are instances (objects) of the int class.\nObjects can store data using ordinary variables that belong to the object. Variables that\nbelong to an object or class are referred to as fields. Objects can also have functionality by\nusing functions that belong to a class. Such functions are called methods of the class. This\nterminology is important because it helps us to differentiate between functions and\nvariables which are independent and those which belong to a class or object. Collectively,\nthe fields and methods can be referred to as the attributes of that class.\nFields are of two types - they can belong to each instance/object of the class or they can\nbelong to the class itself. They are called instance variables and class variables\nrespectively.\nA class is created using the class keyword. The fields and methods of the class are listed\nin an indented block.\nThe self\nClass methods have only one specific difference from ordinary functions - they must have an extra first name that has to be added to the beginning of the parameter list, but you do not give a value for this parameter when you call the method, Python will provide it. 
This particular variable refers to the object itself, and by convention, it is given the name self.\nClasses\nA class is merely a container for static data members or function declarations, called a class's attributes. Classes provide something which can be considered a blueprint for creating \"real\" objects, called class instances. Functions which are part of classes are called methods.\nThe simplest class possible is shown in the following example.",
"# Declare a Class\nclass Class_Name(object):\n pass\n\nclass Class_Name_slow():\n pass\n\nclass Class_Name_for_lazy:\n pass\n",
"```python\nclass Class_Name(base_classes_if_any):\n \"\"\"optional documentation string\"\"\"\nstatic_member_declarations = 1\n\ndef method_declarations(self):\n \"\"\"\n documentation\n \"\"\"\n pass\n\n```",
"# first.py\n\nclass First:\n pass\n\nfr = First()\nprint (type(fr))\nprint (type(First))\nprint(type(int))\n\n# first.py\n\nclass First(object):\n pass\n\nfr = First()\nprint (type(fr))\nprint (type(First))\nprint(type(int))\n\n# first.py\n# Class with it's methods \n\nclass Second:\n def set_name(self, name):\n self.fullname = name\n \n def get_name(self):\n return self.fullname\n\ntry:\n sec = Second()\n print(sec.get_name())\nexcept Exception as e:\n print(e)\n\n# first.py\n\nclass Second:\n def set_name(self, name):\n print(id(self))\n self.fullname = name\n \n def get_name(self):\n return self.fullname\n\nsec = Second()\nprint(id(sec))\nsec.set_name(\"Manish Gupta\")\nprint(sec.get_name())",
"NOTE: both Second and sec are same object as their id's are same",
"class Second:\n def __init__(self, name = \"\"):\n self.fullname = name\n \n def set_name(self, name):\n print(id(self))\n self.fullname = name\n \n def get_name(self):\n return self.fullname\n\nsec = Second(\"Vishal Saxena\")\nprint(sec.get_name())\n\n# first.py\n\nclass Second:\n def __init__(self, name, age=35):\n self.name(name)\n self.age = age\n \n def name(self, new_name):\n self.fullname = new_name\n \n def get_name(self):\n return self.fullname\n\nsec = Second(\"Arya\")\nprint(sec.get_name())\nprint(sec.age)\n\nclass Second:\n def __init__(self, name, age=55):\n self.name(name)\n self.age = age\n \n def name(self, name):\n self.name = name\n \n def get_name(self):\n return self.name\n\nsec = Second(\"Rajneekanth\")\nprint(sec.get_name())\nprint(sec.age)\nprint(dir(sec))\ndir(sec.__dir__)\n\n# first.py\n\nclass Second:\n fullname = \"Mayank Johri\"\n age = 33\n \n def name(self, name):\n self.fullname = name\n \n def get_name(self):\n return self.fullname\n\nsec = Second()\nprint(dir(sec))\nprint(sec.get_name())\nprint(id(sec.fullname))\nsec2 = Second()\nprint(id(sec2.fullname))\n\n# first.py\n\nclass Second:\n fullname = \"Ram Setu\"\n age = 33\n \n def name(self, name):\n self.fullname = name\n \n def get_name(self):\n return self.fullname\n\nrs_1 = Second()\nrs_2 = Second()\n\nprint(rs_1.fullname == rs_2.fullname)\nprint(id(rs_1.fullname) == id(rs_2.fullname))\n\nrs_1.name(\"ram setu\")\n\nprint(rs_1.fullname == rs_2.fullname)\nprint(id(rs_1.fullname) == id(rs_2.fullname))\n\nprint(rs_1.fullname, \" - \", rs_2.fullname)",
"The magic of mutables",
"class Bridge:\n fullname = [\"Mayank\", \"Johri\"]\n age = 33\n \n def name(self, name):\n self.fullname.append(name)\n \n def get_name(self):\n return self.fullname\n\nrs_1 = Bridge()\nrs_2 = Bridge()\n\nprint(rs_1.fullname == rs_2.fullname)\nprint(id(rs_1.fullname) == id(rs_2.fullname))\n\nrs_1.name(\"ram setu\")\n\nprint(rs_1.fullname == rs_2.fullname)\nprint(id(rs_1.fullname) == id(rs_2.fullname))\n\nprint(rs_1.fullname, \" - \", rs_2.fullname)\n\n# Example\nclass FooClass:\n \"\"\"my very first class: FooClass\"\"\"\n __version = 0.11 # class (data) attribute\n ver = 0.1\n \n def __init__(self, nm='John Doe'):\n 'constructor'\n self.name = nm # class instance (data) attribute\n \n def showName(self):\n 'display instance attribute and class name'\n print ('Your name is: ', self.name)\n print( 'My name is: ', self.__class__ )# full class name\n\n def showVersion(self):\n 'display class(static) attribute'\n print( self.__version )# references FooClass.version\n \n def showVer(self):\n 'display class(static) attribute'\n print( self.ver )# references FooClass.version \n \n def setVersion(self, ver):\n 'display class(static) attribute'\n self.__version = ver\n print( self.__version )# references FooClass.version \n\n \n# Create Class Instances\nfoo = FooClass()\narya = FooClass(\"Arya\")\narya.showName()\n# Calling class methods\nfoo.showName()\n\n# print(foo.showName())\nfoo.showVer()\narya.showVer()\n\nprint(id(foo.ver))\nprint(id(arya.ver))\n\nprint(foo.ver)\nfoo.setVersion(10) # __version\nfoo.ver = 2020202 # ver\n\nfoo.showVer()\narya.showVer()\n\nfoo.name\n\nfoo.name = \"Anamika Johri\"\nfoo.name\n\nfoo.showVer()\nprint(foo.ver)\nprint(\"-\"*20)\nprint(arya.showVer())\n\n# print(FooClass.__version)\n\ntry:\n print(foo.__version)\nexcept Exception as e:\n print(e)\n\n# Example\nclass User:\n \"\"\"my very first class: FooClass\"\"\"\n __version = 0.11 # class (data) attribute\n ver = 0.1\n \n def __init__(self, firstname='John', surname=\"Doe\"):\n 
'constructor'\n self.name = firstname + \" \" + surname \n print ('Created a class instance for: ', self.name)\n \n def showName(self):\n 'display instance attribute and class name'\n print ('Your name is: ', self.name)\n print( 'My name is: ', self.__class__ )# full class name\n\n def showVersion(self):\n 'display class(static) attribute'\n print( self.__version )# references FooClass.version\n \n def showVer(self):\n 'display class(static) attribute'\n print( self.ver )# references FooClass.version \n \n def setVersion(self, ver):\n 'display class(static) attribute'\n self.__version = ver\n print( self.__version )# references FooClass.version \n\n# Create Class Instances\nuser = User()\narya = User(\"Arya\")\ngupta = User(surname=\"Gupta\")\nprint(arya.showName())\n\n# Example\nclass User:\n \"\"\"my very first class: FooClass\"\"\"\n __version = 0.11 # class (data) attribute\n ver = 0.1\n \n def __init__(self, firstname, surname):\n 'constructor'\n self.name = firstname + \" \" + surname \n print ('Created a class instance for: ', self.name)\n \n # full class name\n def showName(self):\n 'display instance attribute and class name'\n print ('Your name is: ', self.name)\n print( 'My name is: ', self.__class__ )\n\n def showVersion(self):\n 'display class(static) attribute'\n print( self.__version )# references FooClass.version\n \n def showVer(self):\n 'display class(static) attribute'\n print( self.ver )# references FooClass.version \n \n def setVersion(self, ver):\n 'display class(static) attribute'\n self.__version = ver\n print( self.__version )# references FooClass.version \n\n\n# Create Class Instances\ntry:\n user = User()\n arya = User(\"Arya\")\n gupta = User(surname=\"Gupta\")\n gupta = User(surname=\"Gupta\", firstname=\"Manish\")\n arya.showName()\nexcept Exception as e:\n print(e)",
"So, we can't have any object creation with lesser than two parameters. Lets comment out the first three object creation code and try again",
"# Create Class Instances\ntry:\n# user = User()\n# arya = User(\"Arya\")\n# gupta = User(surname=\"Gupta\")\n gupta = User(surname=\"Gupta\", firstname=\"Manish\")\n arya.showName()\nexcept Exception as e:\n print(e)\n\nclass PrivateVariables():\n __version = 1.0\n _vers = 11.0\n ver = 10.0\n \n def show_version(self):\n return(self.__version)\n \n def show_vers(self):\n print(self._vers)\n\npv = PrivateVariables()\nprint(pv.ver)\nprint(pv._vers)\n# print(pv.__version)\n\npv.ver = 111\nprint(pv.ver)\npv._vers = 1000\nprint(pv._vers) # Convension only \nprint(pv.show_version())\n\nprint(dir(pv))\n\nprint(pv.__dict__)\n\nprint(pv.__dict__.get('__version', \"default value\"))\n\nprint(pv.__dict__.get('ver'))\n\npv.__dict__['ver'] = 1010\nprint(pv.__dict__.get('ver'))\n\ntry:\n print(pv.__version)\nexcept Exception as e:\n print(e)",
"static / class variables\n Reference: \nhttps://stackoverflow.com/questions/68645/are-static-class-variables-possible.\nStatic variables are variables declared inside the class definition, and not inside a method are class or static variables.\nBut before you go all, Yahooooo... about understanding of static variables. Please note that the implementation of static variables in python are different from Java/C++, they are unique in many ways. \nLets understand them a little using the following code",
"class Static_Test(object):\n val = \"Rajeev Chaturvedi\"\n\ns = Static_Test()\n\nprint(s.val,\"\\b,\", id(s.val))\nprint(Static_Test.val, \"\\b,\", id(Static_Test.val))",
"So far so good, val & id of val from both the instance and class seems to be same, thus they are pointing to same memory location which contains the value.\nNow lets try to update it in class",
"Static_Test.val = \"राजीव चतुर्वेदी\"\n\nprint(s.val,\"\\b,\", id(s.val))\nprint(Static_Test.val, \"\\b,\", id(Static_Test.val))\ns_new = Static_Test()\nprint(s_new.val,\"\\b,\", id(s.val))",
"So, if we update values at class level, than they are getting reflected in all the instances as well. Now lets try to update its value in an instance and check its effect",
"s.val = \"Sachin\"\nprint(s.val,\"\\b,\", id(s.val))\nprint(Static_Test.val, \"\\b,\", id(Static_Test.val))\ns_new = Static_Test()\nprint(s_new.val,\"\\b,\", id(s.val))",
"Once, instance value has been changed then it remain changed and cannot be reverted by changing class variable value as shown in the below code",
"Static_Test.val = \"Sachin Shah\"\nprint(s.val,\"\\b,\", id(s.val))\nprint(Static_Test.val, \"\\b,\", id(Static_Test.val))\ns_new = Static_Test()\nprint(s_new.val,\"\\b,\", id(s.val))",
"Static and Class Methods\nPython provides decorators @classmethod & @staticmethod \n@staticmethod\nA static method does not receive an implicit first argument (self or cls). To declare a static method decorator staticmethod is used as shown in the below example",
"class Circle(object):\n PI = 3.14\n @staticmethod\n def area_circle(radius):\n area = 0\n try:\n area = PI * radius * radius\n except Exception as e:\n print(e)\n return area\n\nc = Circle()\nprint(c.area_circle(10))",
"As shown in the above example, static methods do not have access to any class or instance attributes. We tried to access class attribute PI and received error message that variable not defined.\nStatic methods for all intent and purpose act as normal function, but are called from within an object or class.\nStatic methods similar to class methods are bound to a class instead of its object, thus do not require a class instance creation and thus are not dependent on the state of the object.\nStill there are few noticible differences between a static method and a class method, few of them are as follows:\n\nStatic method are isolated from its class/object and have access only to the parameters passed to it.\nClass method works with the class since its parameter is always the class itself.\n\nWhen do you use static method\nSo, if they do not have access to the class, then why are they created. We will try to understand the logic of why they should be created.\nGrouping utility function to a class",
"Many times, we have to ",
"Having a single implementation\nattributes\nIn Python, attribute is everything, contained inside an object. In Python there is no real distinction between plain data and functions, being both objects.\nThe following example represents a book with a title and an author. It also provides a get_entry() method which returns a string representation of the book.",
"class Book:\n def __init__(self, title, author):\n self.title = title\n self.author = author\n\n def get_entry(self):\n return f\"{self.title} by {self.author}\"",
"Every instance of this class will contain three attributes, namely title, author, and get_entry, in addition to the standard attributes provided by the object ancestor.",
"b = Book(title=\"Akme\", author=\"Mayank\")\n\nprint(dir(b))\n\nprint(b.title)\nb.title = \"Lets Go\"\nprint(b.title)\nprint(b.get_entry())\n\ndata = b.get_entry\nprint(data)\nprint(data())\nprint(type(b.__dict__))\nprint(b.__dict__)\n#print(b.nonExistAttribute())\n\ndef testtest(func):\n print(func())\n\ntesttest(data)",
"Instead of using the normal statements to access attributes, you can use the following functions −\ngetattr\n: to access the attribute of the object\nThe getattr(obj, name[, default])\n: to access the attribute of object.\nThe hasattr(obj,name)\n: to check if an attribute exists or not.\nThe setattr(obj,name,value)\n : to set an attribute. If attribute does not exist, then it would be created.\nThe delattr(obj, name)\n : to delete an attribute.\nProperties\nSometimes you want to have an attribute whose value comes from other attributes or, in general, which value shall be computed at the moment. The standard way to deal with this situation is to create a method, called getter, just like I did with get_entry().\nIn Python you can \"mask\" the method, aliasing it with a data attribute, which in this case is called property.",
"class Book(object):\n def __init__(self, title, author):\n self.title = title\n self.author = author\n\n def get_entry(self):\n return \"{0} by {1}\".format(self.title, self.author)\n\n entry = property(get_entry)\n\nb = Book(title=\"Pawn of Prophecy\", author=\"David Eddings\")\nprint(b.entry)",
"Properties also allow you to specify a write method (a setter), which is automatically called when you try to change the value of the property.\n\nNOTE: \nDon't worry too much about properties; we have an entire chapter dedicated to them.",
"class User():\n    def __init__(self, name):\n        self.name = name\n    \n    def getname(self):\n        return \"User's full name is: {0}\".format(self.name) \n    \n    def setname(self, name):\n        self.name = name\n    \n    fullname = property(getname, setname)\n    \nuser = User(\"Roshan Musheer\")\nprint(user.fullname)\nuser.fullname = \"Shaeel Parez\"\nprint(user.fullname)\n\nclass TestSetter():\n    def setter(self, name):\n        self.name = name\n    myname = property(fset=setter)\n    \nts = TestSetter()\n\nts.myname = \"Mayank\"\nprint(ts.name)\n\nclass A:\n    def get_x(self, neg=False):\n        return -5 if neg else 5\n    x = property(get_x)\n    \na = A()\nprint(a.x)\n\nclass Book(object):\n    def __init__(self, title, author):\n        self.__title = title\n        self.__author = author\n\n    def __get_entry(self):\n        return \"{0} by {1}\".format(self.__title, self.__author)\n\n    def __set_entry(self, value):\n        if \" by \" not in value:\n            raise ValueError(\"Entries shall be formatted as '<title> by <author>'\")\n        self.__title, self.__author = value.split(\" by \")\n    \n    entry = property(__get_entry, __set_entry)\n\n    def __getattr__(self, attr):\n        print(\"Sorry, attribute does not exist\")\n        return None\n\nb = Book(title=\"Step in C\", author=\"Mayank Johri\")\nprint(b.entry)\nb.entry = \"Lets learn C by Mayank Johri\"\nprint(\"*\"*20)\nprint(b.entry)\nprint(\"*\"*20)\nb.entry = \"Explore Go by Mayank Johri\"\nprint(\"*\"*20)\nprint(b.entry)\nb.nonExistAttribute",
"__new__\nThe __new__ method is called to create a new instance of a class.\nOverriding the __new__ method\nAs per \"https://www.python.org/download/releases/2.2/descrintro/#new\"\nHere are some rules for __new__:\n\n__new__ is a static method. When defining it, you don't need to (but may!) use the phrase \"__new__ = staticmethod(__new__)\", because this is implied by its name (it is special-cased by the class constructor).\nThe first argument to __new__ must be a class; the remaining arguments are the arguments as seen by the constructor call.\nA __new__ method that overrides a base class's __new__ method may call that base class's __new__ method. The first argument to the base class's __new__ method call should be the class argument to the overriding __new__ method, not the base class; if you were to pass in the base class, you would get an instance of the base class.\nUnless you want to play games like those described in the next two bullets, a __new__ method must call its base class's __new__ method; that's the only way to create an instance of your object. The subclass __new__ can do two things to affect the resulting object: pass different arguments to the base class __new__, and modify the resulting object after it's been created (for example to initialize essential instance variables).\n__new__ must return an object. There's nothing that requires that it return a new object that is an instance of its class argument, although that is the convention. If you return an existing object, the constructor call will still call its __init__ method. If you return an object of a different class, its __init__ method will not be called. If you forget to return something, Python will unhelpfully return None, and your caller will probably be very confused.\nFor immutable classes, your __new__ may return a cached reference to an existing object with the same value; this is what the int, str and tuple types do for small values. 
This is one of the reasons why their __init__ does nothing: cached objects would be re-initialized over and over. (The other reason is that there's nothing left for __init__ to initialize: __new__ returns a fully initialized object.)\nIf you subclass a built-in immutable type and want to add some mutable state (maybe you add a default conversion to a string type), it's best to initialize the mutable state in the __init__ method and leave __new__ alone.\nIf you want to change the constructor's signature, you often have to override both __new__ and __init__ to accept the new signature. However, most built-in types ignore the arguments to the method they don't use; in particular, the immutable types (int, long, float, complex, str, unicode, and tuple) have a dummy __init__, while the mutable types (dict, list, file, and also super, classmethod, staticmethod, and property) have a dummy __new__. The built-in type 'object' has a dummy __new__ and a dummy __init__ (which the others inherit). The built-in type 'type' is special in many respects; see the section on metaclasses.\n(This has nothing to do with __new__, but is handy to know anyway.) If you subclass a built-in type, extra space is automatically added to the instances to accommodate __dict__ and weakrefs. (The __dict__ is not initialized until you use it though, so you shouldn't worry about the space occupied by an empty dictionary for each instance you create.) If you don't need this extra space, you can add the phrase \"__slots__ = []\" to your class. (See above for more about __slots__.)\nFactoid: __new__ is a static method, not a class method. I initially thought it would have to be a class method, and that's why I added the classmethod primitive. Unfortunately, with class methods, upcalls don't work right in this case, so I had to make it a static method with an explicit class as its first argument. Ironically, there are now no known uses for class methods in the Python distribution (other than in the test suite). 
I might even get rid of classmethod in a future release if no good use for it can be found!\n\nWhat is the difference between __new__ and __init__\nUse __new__ when you need to control the creation of a new instance. Use __init__ when you need to control initialization of a new instance.\n__new__ is the first step of instance creation. It's called first, and is responsible for returning a new instance of your class. In contrast, __init__ doesn't return anything; it's only responsible for initializing the instance after it's been created.\nIn general, you shouldn't need to override __new__ unless you're subclassing an immutable type like str, int, unicode or tuple.\nFrom: http://mail.python.org/pipermail/tutor/2008-April/061426.html",
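As a sketch of the immutable-type case mentioned above, subclassing float is one of the few situations where overriding __new__ is genuinely required, because an immutable value must be fixed at creation time (the `Inches` class here is illustrative):

```python
class Inches(float):
    """Hypothetical unit type: stores a measurement given in inches as metres."""

    def __new__(cls, inches):
        # float is immutable, so the value must be set at creation time,
        # inside __new__; __init__ would be too late to change it.
        return super().__new__(cls, inches * 0.0254)

m = Inches(100)
print(m)                      # 2.54
print(isinstance(m, float))   # True
```

Because `super().__new__` is given the converted value, the resulting object behaves as a regular float everywhere.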
"class MyTest:\n    def __new__(cls):\n        print(\"in new\")\n    \n    def __init__(self):\n        print(\"in init\")\n\nmnt = MyTest()\n\nclass MyNewTest:\n    def __new__(cls, name):\n        print(\"in new\", name)\n    \n    def __init__(self, name):\n        print(\"in init\", name)\n\nmnt = MyNewTest(\"Hari Hari\")",
"Let's look at another example; we have removed the __new__ method from the above class and created an object.",
"class MyNewTest: \n def __init__(self, name):\n print(\"in init\", name)\n\nmnt = MyNewTest(\"Hari Hari\")",
"Now let's check where it's a good idea to use __init__ and where to use __new__. \nOne rule of thumb is to avoid using __new__ and let Python handle it, because almost everything you might wish to do in the constructor can be done in __init__. Still, if you wish to do so, the examples below show how to do it correctly.\nIn the first example, we have an __init__ function and are using it.",
"class MyNewTest: \n def __init__(self, name):\n print(\"in init\", name)\n self.name = name\n \n def print_name(self):\n print(self.name)\n\n\nmnt = MyNewTest(\"Hari Hari\")\nmnt.print_name()",
"We saw that everything worked without any issue. Now let's try to replace __init__ with __new__.",
"# -----------------#\n# Very Bad Example #\n# -----------------#\nclass MyNewTest: \n def __new__(cls, name):\n print(\"in init\", name)\n cls.name = name\n \n def print_name(self):\n print(self.name)\n\ntry:\n mnt = MyNewTest(\"Hari Hari\")\n mnt.print_name()\nexcept Exception as e:\n print(e)",
"Now, since we have not returned anything from __new__, mnt is None. __new__ must return the object itself. To overcome this issue, we need to return an instance of our class. We can do that using instance = super(<class>, cls).__new__(cls), as shown in the example below",
"class MyNewTest(object): \n def __new__(cls, name):\n print(\"in __new__:\\n\\t{0}\".format(name))\n instance = super(MyNewTest, cls).__new__(cls)\n instance.name = name\n return instance\n \n def print_name(self):\n print(\"print_name:\\n\\t{0}\".format(self.name))\n \nmnt = MyNewTest(\"!!! Hari Om Hari Om !!!\")\nmnt.print_name()\nram_ram = MyNewTest(\"!!! Ram Ram !!!\")\nram_ram.print_name()\nmnt.print_name()",
"Alternatively, we can create the instance using instance = object.__new__(cls). As object is the parent class, we call it directly instead of going through super.",
"class MyNewTest(object): \n def __new__(cls, name):\n print(\"in __new__\", name)\n instance = object.__new__(cls)\n instance.name = name\n print(\"exiting __new__\", name)\n return instance\n \n # __init__ is redundent in this example. \n def __init__(self, name): \n print(\"in __init__\", name)\n \n def print_name(self):\n print(self.name)\n \nmnt = MyNewTest(\"Hari Hari\")\nmnt.print_name()\nram_ram = MyNewTest(\"Ram Ram\")\nprint(ram_ram)",
"Both super(MyNewTest, cls).__new__(cls) and object.__new__(cls) produce the desired instance, as shown in the above examples.\nIf we were to return anything other than an instance of the class, then the __init__ function will never be called, as shown in the example below.",
"class Distance(float): \n def __new__(cls, dist):\n print(\"in __new__\", dist)\n return dist*0.0254\n \n # __init__ is redundent in this example, \n # as it will never be called. \n def __init__(self, dist): \n print(\"in __init__\", dist)\n \n def print_dist(self):\n print(self.__name__)\n \ntry:\n mnt = Distance(22)\n print(mnt, type(mnt))\n mnt.print_dist()\nexcept Exception as e:\n print(e)\n\nclass Distance(float): \n def __new__(cls, dist):\n print(\"in __new__\", dist)\n instance = super(Distance, cls).__new__(cls)\n print(type(instance))\n instance.val = dist*0.0254\n return instance\n \n def __init__(self, dist): \n print(\"in __init__\", dist)\n \n def print_dist(self):\n print(self.val)\n \n\nif __name__ == \"__main__\":\n try:\n mnt = Distance(22)\n print(mnt, type(mnt))\n mnt.print_dist()\n except Exception as e:\n print(e)",
"Where can we use __new__\nCreating singleton class\nIn singleton pattern, we create one instance of the class and all subsequent objects of that class points to the first instance.\nLets try to create a singleton class using __new__ constructor.",
"class Godlike(object):\n \n def __new__(cls, name):\n it = cls.__dict__.get(\"__it__\")\n if it is not None:\n return it\n cls.__it__ = it = object.__new__(cls)\n it.init(name)\n return it\n \n def init(self, name):\n self.name = name\n \n def print_name(self):\n print(self.name)\n \n \nohm = Godlike(\"Ohm\")\nram = Godlike(\"ram\")\nhari = Godlike(\"hari\")\n\nprint(ohm is ram)\nprint(ohm is hari)\nohm.print_name()\nram.print_name()",
"Note that in the above example all three names point to the same object as ohm, meaning all three objects are the same instance. \nNow, we might have situations where we need to raise an exception if creation of more than one instance is attempted. We can achieve this as shown in the example below.",
"class SingletonError(Exception):\n    pass\n\nclass HeadMaster(object):\n    \n    def __new__(cls, name):\n        it = cls.__dict__.get(\"__it__\")\n        if it is not None:\n            raise SingletonError(f\"Could not create new instance for value {name}\")\n        \n        cls.__it__ = it = object.__new__(cls)\n        it.__init__(name)\n        return it\n    \n    def __init__(self, name): \n        self.name = name\n    \n    def print_name(self):\n        print(self.name)\n    \n\ntry:\n    print(\"Creating Anshu Mam as Primary School headmistress.\")\n    anshu_mam = HeadMaster(\"Anshu Shrivastava\")\n    print(\"Creating Rahim Sir as Primary School headmaster.\")\n    rahim_sir = HeadMaster(\"Rahim Khan\")\nexcept Exception as e:\n    print(e)",
"Regulating the number of objects created\nWe are going to tweak the previous example so that only a finite number of objects can be created for the class.",
"class HeadMaster(object):\n    _instances = []  # Keep track of instance references\n    limit = 2\n\n    def __new__(cls, *args, **kwargs):\n        if len(cls._instances) >= cls.limit:\n            raise RuntimeError(\"Creation Limit %s reached\" % cls.limit)\n        instance = object.__new__(cls)\n        cls._instances.append(instance)\n        return instance\n\n    def __del__(self):\n        self._instances.remove(self)\n\ntry:\n    li1 = HeadMaster()\n    li2 = HeadMaster()\n    li3 = HeadMaster()\n    li4 = HeadMaster()\nexcept Exception as e:\n    print(e)",
"Customize instance object\nWe can customize the instance object using __new__.\nCustomize Returned Object\nAs shown above, we can also return custom objects instead of an instance of the requested class, as shown in one of the previous examples."
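A minimal sketch of returning an object of a different type from __new__ (the `Celsius` class here is illustrative; because a non-instance is returned, __init__ is skipped, matching the Distance example earlier):

```python
class Celsius:
    """Hypothetical factory-like class: __new__ returns a plain float,
    not a Celsius instance, so __init__ is never invoked."""

    def __new__(cls, fahrenheit):
        # Returning a non-instance: the constructor call yields this float.
        return (fahrenheit - 32) * 5.0 / 9.0

t = Celsius(212)
print(t)                 # 100.0
print(type(t).__name__)  # float
```

This pattern turns the class into a converter/factory; in practice a plain function is usually clearer, so use it sparingly.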
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kmorel/kmorel.github.io
|
images/better-plots/Detail_MultiSeries.ipynb
|
mit
|
[
"from __future__ import print_function",
"When analyzing data, I usually use the following three modules. I use pandas for data management, filtering, grouping, and processing. I use numpy for basic array math. I use toyplot for rendering the charts.",
"import pandas\nimport numpy\nimport toyplot\nimport toyplot.pdf\nimport toyplot.png\nimport toyplot.svg\n\nprint('Pandas version: ', pandas.__version__)\nprint('Numpy version: ', numpy.__version__)\nprint('Toyplot version: ', toyplot.__version__)",
"Load in the \"auto\" dataset. This is a fun collection of data on cars manufactured between 1970 and 1982. The source for this data can be found at https://archive.ics.uci.edu/ml/datasets/Auto+MPG.\nThe data are stored in a text file containing columns of data. We use the pandas.read_table() method to parse the data and load it in a pandas DataFrame. The file does not contain a header row, so we need to specify the names of the columns manually.",
"column_names = ['MPG',\n 'Cylinders',\n 'Displacement',\n 'Horsepower',\n 'Weight',\n 'Acceleration',\n 'Model Year',\n 'Origin Index',\n 'Car Name']\ndata = pandas.read_table('auto-mpg.data',\n delim_whitespace=True,\n names=column_names,\n index_col=False)",
"The origin column indicates the country of origin of the car's manufacturer. It has three numeric values: 1, 2, or 3. These indicate USA, Europe, or Japan, respectively. Replace the origin column with a string representing the country name.",
"country_map = pandas.Series(index=[1,2,3],\n data=['USA', 'Europe', 'Japan'])\ndata['Origin'] = numpy.array(country_map[data['Origin Index']])",
"In this plot we are going to show the trend of the average miles per gallon (MPG) rating for subsequent model years separated by country of origin. This time period saw a significant increase in MPG driven by the U.S. fuel crisis. We can use the pivot_table feature of pandas to get this information from the data. (Excel and other spreadsheets have similar functionality.)",
"average_mpg_per_year = data.pivot_table(index='Model Year',\n columns='Origin',\n values='MPG',\n aggfunc='mean')\naverage_mpg_per_year",
"Use toyplot to make a plot of the MPG for every car in the database organized by year and colored by origin.",
"canvas = toyplot.Canvas('4in', '2.6in')\n\naxes = canvas.cartesian(bounds=(41,-1,6,-43),\n xlabel = 'Model Year',\n ylabel = 'MPG')\n\ncolormap = toyplot.color.CategoricalMap()\n\naxes.scatterplot(data['Model Year'] + 1900 + 0.2*(data['Origin Index']-2),\n data['MPG'],\n size=4,\n opacity=0.75,\n color=(numpy.array(data['Origin Index'])-1,colormap))\n\nfor country in country_map:\n series = average_mpg_per_year[country]\n x = series.index[-1] + 1900\n y = numpy.array(series)[-1]\n axes.text(x, y, country,\n style={\"text-anchor\":\"start\",\n \"-toyplot-anchor-shift\":\"15px\"})\n\n# It's usually best to make the y-axis 0-based.\naxes.y.domain.min = 0\n\n# Toyplot is sometimes inaccurate in judging the width of labels.\naxes.x.domain.max = 1984.2\n\n# The labels can make for odd tick placement.\n# Place them manually\naxes.x.ticks.locator = \\\n toyplot.locator.Explicit([1970,1974,1978,1982])\n\ntoyplot.pdf.render(canvas, 'Detail_MultiSeries.pdf')\ntoyplot.svg.render(canvas, 'Detail_MultiSeries.svg')\ntoyplot.png.render(canvas, 'Detail_MultiSeries.png', scale=5)",
"Now use toyplot to plot this data along with trend lines.",
"canvas = toyplot.Canvas('4in', '2.6in')\n\naxes = canvas.cartesian(bounds=(41,-1,6,-43),\n xlabel = 'Model Year',\n ylabel = 'MPG')\n\ncolormap = toyplot.color.CategoricalMap()\n\naxes.scatterplot(data['Model Year'] + 1900 + 0.2*(data['Origin Index']-2),\n data['MPG'],\n size=4,\n opacity=1.0,\n color=(numpy.array(data['Origin Index'])-1,colormap))\n\nfor column in country_map:\n series = average_mpg_per_year[column]\n x = series.index + 1900\n y = numpy.array(series)\n axes.plot(x, y, opacity=0.5)\n axes.text(x[-1], y[-1], column,\n style={\"text-anchor\":\"start\",\n \"-toyplot-anchor-shift\":\"10px\"})\n\n# It's usually best to make the y-axis 0-based.\naxes.y.domain.min = 0\n\n# Toyplot is sometimes inaccurate in judging the width of labels.\naxes.x.domain.max = 1984.2\n\n# The labels can make for odd tick placement.\n# Place them manually\naxes.x.ticks.locator = \\\n toyplot.locator.Explicit([1970,1974,1978,1982])\n\ntoyplot.pdf.render(canvas, 'Detail_MultiSeries_Trend.pdf')\ntoyplot.svg.render(canvas, 'Detail_MultiSeries_Trend.svg')\ntoyplot.png.render(canvas, 'Detail_MultiSeries_Trend.png', scale=5)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Vvkmnn/books
|
AutomateTheBoringStuffWithPython/lesson48.ipynb
|
gpl-3.0
|
[
"Lesson 48:\nControlling the Mouse with Python\nPython can be used to control the keyboard and mouse, which allows us to automate any program that uses these as inputs. \nGraphical User Interface (GUI) Automation is particularly useful for repetitive clicking or keyboard entry. A program's own module will probably deliver better programmatic performance, but GUI automation is more broadly applicable. \nWe will be using the pyautogui module. The documentation is available here.\nYou can follow instructions on this page to install the necessary packages. (I had particular trouble installing the pyobjc dependency, so if you have similar issues in your environment, it might be useful to install it from source using Mercurial. You may also need the pillow module.)",
"import pyautogui",
"This lesson will cover all of the mouse-controlling functions in this module. \nThe module treats the screen as a Cartesian coordinate plane of pixels, with x referencing a point on the horizontal axis and y referencing a point on the vertical axis. \n\nWe can examine the available screen size using the size() function, which returns the dimensions as an (x,y) pair.",
"pyautogui.size()",
"We can store this tuple in two variables, width and height:",
"width, height = pyautogui.size()",
"Similarly, the position() function returns the current position of the mouse cursor.",
"pyautogui.position()",
"Screen coordinates are zero-based, so the right- and bottom-most positions are 1 less than the width and height maximums.",
"# Move the mouse to the top-left corner first, then run:\nprint(pyautogui.position())\n\n# Move the mouse to the bottom-right corner first, then run:\nprint(pyautogui.position())",
"The first function to control the mouse is the moveTo() function, which moves the mouse immediately to an absolute location.",
"pyautogui.moveTo(10,10)",
"We can pass the duration parameter to this function to slow down the movement, simulating human activity over the specified duration.",
"pyautogui.moveTo(10,10, duration=2)",
"We can also use the moveRel() function to move the mouse relative to a certain position.",
"pyautogui.moveRel(200, 0, duration=2)",
"We can also pass in y coordinates to move up or down, but we have to use negative values to move 'up' in a relative way.",
"pyautogui.moveRel(0, -100, duration=1.5)",
"Now that we have mastered movement, we can now use click() functions to interact with objects.",
"# Find the 'Help' button in the top Jupyter Navigation\n# helpCoordinates = pyautogui.position()\n\nhelpCoordinates = (637, 126)\n\n# Click at those coordinates\npyautogui.click(helpCoordinates)",
"We can use functions like rightClick(), doubleClick(), or middleClick() for similar behavior. We can even run these functions without any coordinates, which will click at the current mouse location. \nWe also have dragRel() and dragTo() functions, which can be used to click and drag; here they could be used to draw in a paint program.\n\nAn important thing to note is that during automation, the script controls your mouse and keyboard, which may affect your ability to interact with the computer (like what happened in Fantasia).\nTo guard against this, pyautogui has a built-in fail-safe: it checks whether the mouse is at coordinate (0,0), the top left of the screen, and if it is, the script terminates. There is a tenth-of-a-second pause between commands, so moving the mouse into that corner during the pause will raise an exception and stop the script.",
"pyautogui.moveRel(0, -100, duration=1.5)",
"To make pyautogui useful, you need to know the coordinates on the screen at any given time, and interact with them. The module contains a useful sub-program, displayMousePosition, which can be run from the terminal to track the mouse position in real time (it also reports the RGB value of the pixel under the cursor).",
"# Not run here, just keeps spitting out print functions. Much more useful in the terminal.\npyautogui.displayMousePosition()",
"Recap\n\nControlling the mouse and keyboard is called GUI automation.\nThe pyautogui module has many functions to control the mouse and keyboard.\nThe pyautogui.size() function returns the current screen resolution.\nThe pyautogui.position() function returns the current mouse position as a tuple of two integers.\nThe pyautogui.moveTo() function moves the mouse instantly to a Cartesian (x,y) coordinate on the screen.\nThe pyautogui.moveRel() function moves the mouse to a point relative to the current position.\nBoth of these functions take a duration parameter to slow the mouse transition.\nThe pyautogui.click(), pyautogui.doubleClick(), pyautogui.rightClick() and pyautogui.middleClick() functions all click the mouse buttons.\nThe pyautogui.dragTo() and pyautogui.dragRel() functions will move the mouse while holding down the mouse button. \nIf your program gets out of control, activate the fail-safe by quickly moving the cursor to the top left of the screen."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
bollwyvl/ip-bootstrap
|
docs/Icon.ipynb
|
bsd-3-clause
|
[
"Icon",
"from IPython.html import widgets\nfrom ipbs.widgets import Icon\nimport ipbs.bootstrap as bs\nfrom ipbs.icons import FontAwesome, Size",
"First, grab a FontAwesome instance which knows about all of the icons.",
"fa = FontAwesome()",
"fa exposes Python-friendly, autocompletable names for all of the FontAwesome icons, and you can preview them immediately.",
"fa.space_shuttle",
"You can apply effects like rotation and scaling.",
"fa.space_shuttle.rotate_270 * 3",
"The actual widget supports the stack case, such that you can display a single icon...",
"icon = Icon(fa.space_shuttle)\nicon",
"Or several icons stacked together...",
"icon = Icon(fa.square * 2, fa.empire.context_inverse, size=Size.x3)\nicon"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/inpe/cmip6/models/besm-2-7/land.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: INPE\nSource ID: BESM-2-7\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:06\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'inpe', 'besm-2-7', 'land')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Conservation Properties\n3. Key Properties --> Timestepping Framework\n4. Key Properties --> Software Properties\n5. Grid\n6. Grid --> Horizontal\n7. Grid --> Vertical\n8. Soil\n9. Soil --> Soil Map\n10. Soil --> Snow Free Albedo\n11. Soil --> Hydrology\n12. Soil --> Hydrology --> Freezing\n13. Soil --> Hydrology --> Drainage\n14. Soil --> Heat Treatment\n15. Snow\n16. Snow --> Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --> Vegetation\n21. Carbon Cycle --> Vegetation --> Photosynthesis\n22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\n23. Carbon Cycle --> Vegetation --> Allocation\n24. Carbon Cycle --> Vegetation --> Phenology\n25. Carbon Cycle --> Vegetation --> Mortality\n26. Carbon Cycle --> Litter\n27. Carbon Cycle --> Soil\n28. Carbon Cycle --> Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --> Oceanic Discharge\n32. Lakes\n33. Lakes --> Method\n34. Lakes --> Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nFluxes exchanged with the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Atmospheric Coupling Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Land Cover\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTypes of land cover defined in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.7. Land Cover Change\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Tiling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Water\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Carbon\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Timestepping Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Total Depth\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe total depth of the soil (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of soil in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Heat Water Coupling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the coupling between heat and water in the soil",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Number Of Soil Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the soil scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Soil --> Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of soil map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil structure map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Texture\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil texture map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.4. Organic Matter\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil organic matter map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Albedo\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil albedo map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.6. Water Table\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil water table map, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.7. Continuously Varying Soil Depth\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDo the soil properties vary continuously with depth?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.8. Soil Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil depth map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Soil --> Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow free albedo prognostic?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"10.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, describe the dependencies of the snow free albedo calculations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Direct Diffuse\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.4. Number Of Wavelength Bands\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11. Soil --> Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the soil hydrological model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of soil hydrology in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Number Of Ground Water Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers that may contain water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.6. Lateral Connectivity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe the lateral connectivity between tiles",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.7. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Soil --> Hydrology --> Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nHow many soil layers may contain ground ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.2. Ice Storage Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of ice storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.3. Permafrost\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Soil --> Hydrology --> Drainage\nTODO\n13.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe in general how drainage is included in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDifferent types of runoff represented by the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
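For ENUM properties with cardinality 0.N or 1.N, such as the drainage types above, the template's "PROPERTY VALUE(S)" wording suggests one `set_value` call per selected choice drawn from the "Valid Choices" list. A sketch under that assumption, again with a stand-in for the notebook's pyesdoc `DOC` helper and a hypothetical selection of choices:

```python
# Stand-in for the notebook's DOC helper (the real one comes from the
# pyesdoc setup cells); it accumulates repeated set_value calls per property.
class _Doc:
    def __init__(self):
        self.values = {}
        self._id = None

    def set_id(self, identifier):
        self._id = identifier

    def set_value(self, value):
        self.values.setdefault(self._id, []).append(value)

DOC = _Doc()

# Multi-valued ENUM (cardinality 0.N): one call per selected choice,
# each string copied verbatim from the cell's "Valid Choices" list.
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
DOC.set_value("Gravity drainage")
DOC.set_value("Lateral subsurface flow")
```

Single-valued ENUMs (cardinality 1.1 or 0.1) take exactly one such call, again quoting one of the listed choices verbatim.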
"14. Soil --> Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of how heat treatment properties are defined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of soil heat scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.5. Heat Storage\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the method of heat storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.6. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe processes included in the treatment of soil heat",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of snow in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Number Of Snow Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.4. Density\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow density",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Water Equivalent\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the snow water equivalent",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.6. Heat Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the heat content of snow",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.7. Temperature\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow temperature",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.8. Liquid Water Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow liquid water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.9. Snow Cover Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.10. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSnow related processes in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.11. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Snow --> Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, describe the dependencies of the snow albedo calculations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vegetation in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of vegetation scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Dynamic Vegetation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there dynamic evolution of vegetation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.4. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vegetation tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.5. Vegetation Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nVegetation classification used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.6. Vegetation Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of vegetation types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.7. Biome Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of biome types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"open shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.8. Vegetation Time Variation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow the vegetation fractions in each tile vary with time",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.9. Vegetation Map\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.10. Interception\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs vegetation interception of rainwater represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.11. Phenology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.12. Phenology Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.13. Leaf Area Index\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.14. Leaf Area Index Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.15. Biomass\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation biomass",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.16. Biomass Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.17. Biogeography\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.18. Biogeography Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.19. Stomatal Resistance\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.20. Stomatal Resistance Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.21. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the vegetation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of energy balance in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the energy balance tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. Number Of Surface Temperatures\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.4. Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of carbon cycle in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of carbon cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Anthropogenic Carbon\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDescribe the treatment of the anthropogenic carbon pool",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.5. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the carbon scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Carbon Cycle --> Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"20.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.3. Forest Stand Dynamics\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of forest stand dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Carbon Cycle --> Vegetation --> Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, nitrogen dependence, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for maintenance respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Growth Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for growth respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Carbon Cycle --> Vegetation --> Allocation\nTODO\n23.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the allocation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.2. Allocation Bins\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify distinct carbon bins used in allocation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Allocation Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how the fractions of allocation are calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Carbon Cycle --> Vegetation --> Phenology\nTODO\n24.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the phenology scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Carbon Cycle --> Vegetation --> Mortality\nTODO\n25.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the mortality scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Carbon Cycle --> Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Carbon Cycle --> Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Carbon Cycle --> Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs permafrost included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.2. Emitted Greenhouse Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the GHGs emitted",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.4. Impact On Soil Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the impact of permafrost on soil properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the nitrogen cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of nitrogen cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"29.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of river routing in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the river routing tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river routing scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Grid Inherited From Land Surface\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the grid inherited from land surface?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.5. Grid Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.6. Number Of Reservoirs\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of reservoirs",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.7. Water Re Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTODO",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.8. Coupled To Atmosphere\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.9. Coupled To Land\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the coupling between land and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.11. Basin Flow Direction Map\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of basin flow direction map is being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.12. Flooding\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the representation of flooding, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.13. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the river routing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. River Routing --> Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify how rivers are discharged to the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Quantities Transported\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lakes in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Coupling With Rivers\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre lakes coupled to the river routing model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of lake scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"32.4. Quantities Exchanged With Rivers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Vertical Grid\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vertical grid of lakes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the lake scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33. Lakes --> Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs lake ice included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.2. Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of lake albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.3. Dynamics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.4. Dynamic Lake Extent\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a dynamic lake extent scheme included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.5. Endorheic Basins\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre basins not flowing to the ocean included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"34. Lakes --> Wetlands\nTODO\n34.1. Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of wetlands, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
DillmannFrench/Intro-PYTHON
|
Cours09_DILLMANN_ISEP2016.ipynb
|
gpl-3.0
|
[
"Midterm Assignment\nRequires having covered the notions of: \n- Genericity \n- Variable Scope\n- Control Structures (Basics)\n- Basic Types\n- Recursion\n1) Course review: control statements\n(Control Statements)\n1.1) Variable names and values\nIn which case(s) in Python can you declare a variable (i.e. a name) without directly assigning it a value? What is this kind of variable specifically called?",
"%reset\nglobal MyVariable\nglobals()",
"1.2) Loops in Python\nBriefly explain how the two main Python loops (for and while) work, specifying in particular: \n\n\nwhat changes at each iteration\nwhen the loop stops",
"for it in range(10):\n print(it)\n\nit=0\nwhile it < 10:\n print(it)\n it +=1",
"1.3) Computer science concept\nWhat is genericity (the fact that a piece of code is generic)?",
"def Ajoute_dix(inputData):\n if isinstance(inputData, int):\n return inputData + 10\n elif isinstance(inputData, str):\n return int(inputData) + 10\n else:\n return 10\n\n# application with an INT\na=15\nprint(\"The type of a is: {}\".format(type(a)))\nb=Ajoute_dix(a)\nprint(\"Applying the function Ajoute_dix to (a) gives: {}\".format(b))\nprint(\"Which is of type: {}\".format(type(b)))\n# application with a STR\na='15'\nprint(\"The type of a is: {}\".format(type(a)))\nb=Ajoute_dix(a)\nprint(\"Applying the function Ajoute_dix to (a) gives: {}\".format(b))\nprint(\"Which is of type: {}\".format(type(b)))\n",
"2) Part 2: Arithmetic Operators",
"a=5%2\nprint('a=5%2 is a {} of type {}'.format(a,type(a)))\n\na=[1]+3\nprint('a=[1]+3 is a {} of type {}'.format(a,type(a)))\n\na=[1]*3\nprint('a=[1]*3 is a {} of type {}'.format(a,type(a)))\n\na=9.0*9\nprint('a=9.0*9 is a {} of type {}'.format(a,type(a)))\n\na=9*\"9\"\nprint(\"a=9*“9” is a {} of type {}\".format(a,type(a)))\n\na = 4 == \"4\"\nprint(\"a=4==“4” is a {}\".format(a))\n\na=4=\"4\"  # SyntaxError: cannot assign to a literal\nprint(\"a=4=“4” is a {}\".format(a))",
"2.2) Question 2",
"x=3\ny=5\nz=11\n\na = True\nb = False\n\n(x < y ) and (a or b)\n\n(x < y < z) != (a != b)\n\n(a and (not b) and ( -y <= -z ) )\n\n(not (a and b)) or ((y**2 < z**2))",
"2.3 Question 3",
"liste1 = [\"a\", \"b\", \"c\", \"d\", \"e\"]",
"2.3.1 Question 3.1",
"liste1[ len(liste1) ]\n\nliste1[ - len(liste1) ]",
"2.4 Question 4",
"a=2\nwhile a <= 7:\n a = a*a + a + 1\nprint(a)",
"2.5 Question 5",
"a=1\nfor it in range(5):\n a = a + it\nprint(a)",
"2.6 Question 6",
"liste1 = [1,2,3]\nliste2 = liste1\nliste2.append(4)\nprint(liste1)",
"2.7 Question 7",
"x=8\nliste1 = [ ]\nfor it in range(x//2):\n liste1.append( x - it )\nprint( liste1 )",
"3) Code Debugging\nIn this part the provided code does not work, either because it raises an error or because it does not produce the expected results.\nThe goal is to fix it.\n3.1 Exercise 1: Syntax error\n3.1.1 Exercise 1.1",
"def cube_si_abs_plus_grande_que_un( x ):\n if abs( x*x*x >= 1.0 :\n print(\"Error\")\n return\n return x*x*x\n\ndef cube_si_abs_plus_grande_que_un( x ):\n if abs( x*x*x ) <= 1.0 : ## malformed condition fixed\n print(\"Error\")\n return\n return x*x*x\n\nprint(cube_si_abs_plus_grande_que_un(3))",
"3.1.2 Exercise 1.2",
"def factorielle( int( n ) ):\n res = 1\n for it in range(1,n+1):\n res = res*it\nreturn res\n\nprint( factorielle( 10 ) )\n\ndef factorielle( n ):\n res = 1\n for it in range(1,n+1):\n res = res*it\n return res\n\nprint( factorielle( 10 ) )",
"3.1.3 Exercise 1.3",
"def somme_carre( n ):\n res = 0\ni=0\nwhile i < n:\ni += 1\n res += i*i\n return res\n\ndef somme_carre( n ):\n res = 0\n i=0\n while i < n:\n i += 1\n res += i*i\n return res\n\nsomme_carre(10)",
"3.1.4) Exercise 1.4",
"from math import srqt # typo\ndef somme_sqrt( n ):\nres = 0 i=0 while i < n # variables must not be declared on the same line\ni += 1 # indentation error\n res += sqrt(i)\n return res\n\nfrom math import sqrt # typo fixed\ndef somme_sqrt( n ):\n res = 0 \n i=0 \n while (i < n):\n i += 1\n res += sqrt(i)\n return res\n\nsomme_sqrt(5)",
"3.2 Exercise 3",
"def somme_racine_cubique( n ):\n res = 0\n for it in range(n):\n res += it**(1/3)\n return res\n\nsomme_racine_cubique(8)\n\ndef somme_racine_cubique( n ):\n res = 0\n for it in range(n+1):\n res += it**(1/3)\n return res\n\nsomme_racine_cubique(8)",
"4) Code Analysis\nIn this part the code works; the goal is to understand its effect, or to be able to simplify it.\n4.1 Exercise 1",
"def f(liste):\n for i in range(len(liste)):\n for j in range(i):\n if liste[i][j] != 0: \n return False\n return True\n\na = f( [[1,1,1],\n [0,1,1],\n [0,0,1]] )\nprint('Does the matrix contain only 0s below the diagonal? {}'.format(a))\n\na = f( [[1,0,0],\n [0,1,0],\n [0,0,1]] )\nprint(a)\n\nb = f( [[1,1,1],\n [0,1,1],\n [0,0,1]] )\nprint(b)\n\nc = f( [[1,0,0],\n [1,1,0],\n [1,1,1]] )\nprint(c)\n\nd = f( [[1,1,1],\n [1,1,1],\n [1,1,1]] )\nprint(d)",
"4.2 Exercise 2",
"def f( n ):\n if n == 0:\n return False\n return not f( n - 1 )\n\na = f(12)\nb = f(8)\nc = f(13)\nprint(a,b,c)\n\nfor it in range(50):\n print('is the number {0} odd? {1}'.format(it,f(it)))",
"4.3 Exercise 3",
"def f( n ):\n res = 0\n for it in range(n):\n if it % 5 == 1:\n res = res + it\n if it % 7 == 1:\n res = res + it\n if it % 9 == 1:\n res = res + it\n return res\n\nfor it in range(50):\n print('Applying the function to {} gives {} '.format(it,f(it)))\n\ndef f( n ):\n res=0\n for it in range(n):\n if (it % 5 == 1): res += it\n if (it % 7 == 1): res += it\n if (it % 9 == 1): res += it \n return res\n\ndef f(n):\n l=range(n)\n liste1=[it for it in l if it % 5 == 1]\n liste2=[it for it in l if it % 7 == 1]\n liste3=[it for it in l if it % 9 == 1]\n return (sum(liste1)+sum(liste2)+sum(liste3))\n\nfor it in range(50):\n print('Applying the function to {} gives {} '.format(it,f(it)))",
"5) Coding\n5.1 Exercise 1: sum of even multiples\nand evaluation of their performance...",
"from time import perf_counter  # time.clock was deprecated and removed in Python 3.8\ndef duree(fonction, n=10):\n debut = perf_counter()\n fonction(n)\n fin = perf_counter()\n return fin - debut\n\ndef somme_n_entiers1(n):\n ref=0\n l=range(n+1)\n for it in l:\n if it % 2 == 0:\n ref += it\n return ref\n\ndef somme_n_entiers2(n):\n it=0\n ref=0\n l=range(n+1)\n while (it < len(l)):\n if (l[it] % 2 == 0):\n ref += it\n it += 1\n return ref\n\ndef SumRec(n):\n if n == 0:\n return(0)\n elif n % 2 != 0:\n return(n-1 + SumRec(n-3))\n else :\n return(n + SumRec(n-2))\n\ndef SommePair(n):\n return sum(range(0,n+1,2))\n\nprint('{:>25}> {:f} s'.format('using a for loop ',duree(somme_n_entiers1) ))\nprint('{:>25}> {:f} s'.format('using a while loop ',duree(somme_n_entiers2) ))\nprint('{:>25}> {:f} s'.format('simplest version ',duree(SommePair) ))\nprint('{:>25}> {:f} s'.format('using recursion ',duree(SumRec) ))",
"5.2 Exercise 2: searching for a multiple of 11",
"def Test_11(My_Liste):\n for it in My_Liste:\n if it % 11 == 0: \n return True\n return False\n\nTest_11([1, 2, 3, 22])\n",
"6) Successive Differences Algorithm",
"def div_euclidiene(a,b):\n q,r=0,a\n while r>=b:\n q,r=q+1,r-b\n return(q,r)\n\n\na=546\nb=34\nquotient, reste = div_euclidiene(a,b)\nprint(\"The Euclidean division can be written \\n {0} = {1} x {2} + {3}\".format(a,b,quotient,reste))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
vlad17/np-learn
|
presentation.ipynb
|
apache-2.0
|
[
"Advanced Numpy Techniques\n<img src=\"assets/numpylogo.png\" alt=\"http://www.numpy.org/#\">\nGeneral, user-friendly documentation with lots of examples.\nTechnical, \"hard\" reference.\nBasic Python knowledge assumed.\nCPython ~3.6, NumPy ~1.12\nIf you like content like this, you might be interested in my blog\nWhat is it?\nNumPy is an open-source package that's part of the SciPy ecosystem. Its main feature is an array object of arbitrary dimension, but this fundamental collection is integral to any data-focused Python application.\n<table>\n<tr>\n<th>\n<img src=\"assets/nonsteeplearn.png\" width=\"200\" alt=\"http://gabriellhanna.blogspot.com/2015/03/negatively-accelerated-learning-curve-i.html\">\n</th><th>\n<img src=\"assets/steeplearn.jpg\" width=\"200\" alt=\"http://malaher.org/2007/03/pet-peeve-learning-curve-misuse/\">\n</th></tr></table>\n\nMost people learn numpy through assimilation or necessity. I believe NumPy has the latter learning curve (steep/easy to learn), so you can actually invest just a little bit of time now (by going through this notebook, for instance), and reap a lot of reward!\nMotivation\n\nProvide a uniform interface for handling numerical structured data\nCollect, store, and manipulate numerical data efficiently\nLow-cost abstractions\nUniversal glue for numerical information, used in lots of external libraries! 
The API establishes common functions and re-appears in many other settings with the same abstractions.\n\n<table>\n<tr>\n<th>\n<img src=\"assets/numba.png\" alt=\"http://numba.pydata.org/\" width=\"150\"></th><th><img src=\"assets/pandas.png\" alt=\"http://pandas.pydata.org/\" width=\"150\"> </th><th><img src=\"assets/tf.png\" alt=\"https://github.com/tensorflow/tensorflow\" width=\"150\"></th><th> <img src=\"assets/sklearn.png\" alt=\"https://github.com/scikit-learn/scikit-learn\" width=\"150\"> </th><th><img src=\"assets/stan.png\" alt=\"http://mc-stan.org/\" width=\"150\"></th>\n</tr>\n</table>\n\nGoals and Non-goals\nGoals\nWhat I'll do:\n\nGive a bit of basics first.\nDescribe NumPy, with under-the-hood details to the extent that they are useful to you, the user\nHighlight some [GOTCHA]s, avoid some common bugs\nPoint out a couple useful NumPy functions\n\nThis is not an attempt to exhaustively cover the reference manual (there are too many individual functions to keep in your head, anyway).\nInstead, I'll try to...\n\nprovide you with an overview of the API structure so next time you're doing numeric data work you'll know where to look\nconvince you that NumPy arrays offer the perfect data structure for the following (wide-ranging) use case:\n\nRAM-sized general-purpose structured numerical data applications: manipulation, collection, and analysis.\nNon-goals\n\nNo emphasis on multicore processing, but will be briefly mentioned\nSome NumPy functionality not covered -- mentioned briefly at end\nHPC concerns\nGPU programming\n\nWhy not a Python list?\nA list is a resizing contiguous array of pointers.\n<img src=\"assets/pylist.png\" alt=\"http://www.laurentluce.com/posts/python-list-implementation/\">\nNested lists are even worse - there are two levels of indirection.\n<img src=\"assets/nestlist.png\" alt=\"http://www.cs.toronto.edu/~gpenn/csc401/401_python_web/pyseq.html\">\nCompare to NumPy arrays, happy contiguous chunks of memory, even across axes. 
This image is only illustrative, a NumPy array may not necessarily be in C-order (more on that later):\n<img src=\"assets/nparr.png\" alt=\"https://www.safaribooksonline.com/library/view/python-for-data/9781491957653/ch04.html\" width=300>\nRecurring theme: NumPy lets us have the best of both worlds (high-level Python for development, optimized representation and speed via low-level C routines for execution)",
"import numpy as np\nimport time\nimport gc\nimport sys\n\nassert sys.maxsize > 2 ** 32, \"get a new computer!\"\n\n# Allocation-sensitive timing needs to be done more carefully\n# Compares runtimes of f1, f2\ndef compare_times(f1, f2, setup1=None, setup2=None, runs=5):\n print(' format: mean seconds (standard error)', runs, 'runs')\n maxpad = max(len(f.__name__) for f in (f1, f2))\n means = []\n for setup, f in [[setup1, f1], [setup2, f2]]:\n setup = (lambda: tuple()) if setup is None else setup\n \n total_times = []\n for _ in range(runs):\n try:\n gc.disable()\n args = setup()\n \n start = time.time()\n if isinstance(args, tuple):\n f(*args)\n else:\n f(args)\n end = time.time()\n \n total_times.append(end - start)\n finally:\n gc.enable()\n \n mean = np.mean(total_times)\n se = np.std(total_times) / np.sqrt(len(total_times))\n print(' {} {:.2e} ({:.2e})'.format(f.__name__.ljust(maxpad), mean, se))\n means.append(mean)\n print(' improvement ratio {:.1f}'.format(means[0] / means[1]))",
"Bandwidth-limited ops\n\nHave to pull in more cache lines for the pointers\nPoor locality causes pipeline stalls",
"size = 10 ** 7 # ints will be un-interned past 256\nprint('create a list 1, 2, ...', size)\n\n\ndef create_list(): return list(range(size))\ndef create_array(): return np.arange(size, dtype=int)\n\ncompare_times(create_list, create_array)\n\nprint('deep copies (no pre-allocation)') # Shallow copy is cheap for both!\nsize = 10 ** 7\n\nls = list(range(size))\ndef copy_list(): return ls[:]\n\nar = np.arange(size, dtype=int)\ndef copy_array(): return np.copy(ar)\n\ncompare_times(copy_list, copy_array)\n\nprint('Deep copy (pre-allocated)')\nsize = 10 ** 7\n\ndef create_lists(): return list(range(size)), [0] * size\ndef deep_copy_lists(src, dst): dst[:] = src\n\ndef create_arrays(): return np.arange(size, dtype=int), np.empty(size, dtype=int)\ndef deep_copy_arrays(src, dst): dst[:] = src\n\ncompare_times(deep_copy_lists, deep_copy_arrays, create_lists, create_arrays)",
"Flop-limited ops\n\nThe CPU's vector units can't engage on non-contiguous memory, so pointer-chasing code won't saturate the computational capabilities of your hardware (note that your NumPy build may not be vectorized anyway, but the \"saturate the CPU\" part still holds)",
"print('square out-of-place')\n\ndef square_lists(src, dst):\n for i, v in enumerate(src):\n dst[i] = v * v\n\ndef square_arrays(src, dst):\n np.square(src, out=dst)\n \ncompare_times(square_lists, square_arrays, create_lists, create_arrays)\n\n# Caching and SSE can have huge cumulative effects\n\nprint('square in-place')\nsize = 10 ** 7\n\ndef create_list(): return list(range(size))\ndef square_list(ls):\n for i, v in enumerate(ls):\n ls[i] = v * v\n\ndef create_array(): return np.arange(size, dtype=int)\ndef square_array(ar):\n np.square(ar, out=ar)\n \ncompare_times(square_list, square_array, create_list, create_array)",
"Memory consumption\nThe list representation costs at least 8 extra bytes (one pointer) per value, plus the per-object overhead of each boxed Python int (assuming 64-bit here and henceforth)!",
"from pympler import asizeof\nsize = 10 ** 4\n\nprint('list kb', asizeof.asizeof(list(range(size))) // 1024)\nprint('array kb', asizeof.asizeof(np.arange(size, dtype=int)) // 1024)",
"Disclaimer\nRegular python lists are still useful! They do a lot of things arrays can't:\n\nList comprehensions [x * x for x in range(10) if x % 2 == 0]\nRagged nested lists [[1, 2, 3], [1, [2]]]\n\nThe NumPy Array\ndoc\nAbstraction\nWe know what an array is -- a contiguous chunk of memory holding an indexed list of things from 0 to its size minus 1. If the things have a particular type, using, say, dtype as a placeholder, then we can refer to this as a classical_array of dtypes.\nThe NumPy array, an ndarray with a datatype (dtype), is an N-dimensional array for arbitrary N. This is defined recursively:\n* For N > 0, an N-dimensional ndarray of dtype dtype is a classical_array of N - 1 dimensional ndarrays of dtype dtype, all with the same size.\n* For N = 0, the ndarray is a dtype\nWe note some familiar special cases:\n* N = 0, we have a scalar, or the datatype itself\n* N = 1, we have a classical_array\n* N = 2, we have a matrix\nEach axis has its own classical_array length: this yields the shape.",
"n0 = np.array(3, dtype=float)\nn1 = np.stack([n0, n0, n0, n0])\nn2 = np.stack([n1, n1])\nn3 = np.stack([n2, n2])\n\nfor x in [n0, n1, n2, n3]:\n print('ndim', x.ndim, 'shape', x.shape)\n print(x)",
"Axes are read LEFT to RIGHT: an array of shape (n0, n1, ..., nN-1) has axis 0 with length n0, etc.\nDetour: Formal Representation\nWarning, these are pretty useless definitions unless you want to understand np.einsum, which is only at the end anyway.\nFormally, a NumPy array can be viewed as a mathematical object. If:\n\nThe dtype belongs to some (usually field) $F$\nThe array has dimension $N$, with the $i$-th axis having length $n_i$\n$N>1$\n\nThen this array is an object in:\n$$\nF^{n_0}\\otimes F^{n_{1}}\\otimes\\cdots \\otimes F^{n_{N-1}}\n$$\n$F^n$ is an $n$-dimensional vector space over $F$. An element in here can be represented in its canonical basis $\\textbf{e}_i^{(n)}$ as a sum with elements $f_i\\in F$:\n$$\nf_1\\textbf{e}_1^{(n)}+f_2\\textbf{e}_2^{(n)}+\\cdots+f_n\\textbf{e}_n^{(n)}\n$$\n$F^n\\otimes F^m$ is a tensor product, which takes two vector spaces and gives you another. The tensor product is a special kind of vector space with dimension $nm$. Elements in here have a special structure which we can tie to the original vector spaces $F^n,F^m$:\n$$\n\\sum_{i=1}^n\\sum_{j=1}^m f_{ij}(\\textbf{e}_{i}^{(n)}\\otimes \\textbf{e}_{j}^{(m)})\n$$\nAbove, $(\\textbf{e}_{i}^{(n)}\\otimes \\textbf{e}_{j}^{(m)})$ is a basis vector of $F^n\\otimes F^m$ for each pair $i,j$.\nWe will discuss what $F$ can be later; but most of this intuition (and a lot of NumPy functionality) is based on $F$ being a type corresponding to a field.\nBack to CS / Mutability / Losing the Abstraction\nThe above is a (simplified) view of ndarray as a tensor, but it gives useful intuition for arrays that are not mutated.\nAn ndarray Python object is actually a view into a shared ndarray. The base is a representative of the equivalence class of views of the same array.\n<img src=\"assets/ndarrayrep.png\" alt=\"https://docs.scipy.org/doc/numpy/reference/arrays.html\">\nThis diagram is a lie (the array isn't in your own bubble, it's shared)!",
"original = np.arange(10)\n\n# shallow copies\ns1 = original[:]\ns2 = s1.view()\ns3 = original[:5]\n\nprint(original)\n\noriginal[2] = -1\nprint('s1', s1)\nprint('s2', s2)\nprint('s3', s3)\n\nid(original), id(s1.base), id(s2.base), id(s3.base), original.base",
"Dtypes\n$F$ (our dtype) can be (doc):\n\nboolean\nintegral\nfloating-point\ncomplex floating-point\nany structure (record array) of the above, e.g. complex integral values\n\nThe dtype can also be unicode, a date, or an arbitrary object, but those don't form fields. This means that most NumPy functions aren't useful for this data, since it's not numeric. Why have them at all?\n\nfor all: NumPy ndarrays offer the tensor abstraction described above.\nunicode: consistent format in memory for bit operations and for I/O\ndate: compact representation, addition/subtraction, basic parsing",
"# Names are pretty intuitive for basic types\n\ni16 = np.arange(100, dtype=np.uint16)\ni64 = np.arange(100, dtype=np.uint64)\nprint('i16', asizeof.asizeof(i16), 'i64', asizeof.asizeof(i64))\n\n# We can use arbitrary structures for our own types\n# For example, exact Gaussian (complex) integers\n\ngauss = np.dtype([('re', np.int32), ('im', np.int32)])\nc2 = np.zeros(2, dtype=gauss)\nc2[0] = (1, 1)\nc2[1] = (2, -1)\n\ndef print_gauss(g):\n print('{}{:+d}i'.format(g['re'], g['im']))\n \nprint(c2)\nfor x in c2:\n print_gauss(x)\n\nb16 = np.array(5, dtype='>u2') # big-endian unsigned 16-bit\nl16 = b16.astype('<u2') # little-endian unsigned 16-bit\nprint(b16.tobytes(), np.binary_repr(b16, width=16))\nprint(l16.tobytes(), np.binary_repr(l16, width=16))",
"Indexing doc\nProbably the most creative, unique part of the entire library. This is what makes the NumPy ndarray better than any other array.\nAn index produces a new ndarray based on the indexed ndarray (for basic indexing, a view).\nBasic Indexing",
"x = np.arange(10)\n\n# start:stop:step\n# inclusive start, exclusive stop\nprint(x)\nprint(x[2:6:2])\nprint(id(x), id(x[2:6:2].base))\n\n# Default start is 0, default end is length, default step is 1\nprint(x[:3])\nprint(x[7:])\n\n# Don't worry about overshooting\nprint(x[:100])\nprint(x[7:2:1])\n\n# Negatives wrap around (taken mod length of axis)\nprint(x[-4:-1])\n\n# An array whose index goes up in reverse\nprint(x[::-1]) # default start = n-1 and stop = -1 for negative step [GOTCHA]\nprint(x[::-1][:3])\n\n# What happens if we do an ascending sort on an array with the reverse index?\nx = np.arange(10)\n\nprint('x[:5] ', x[:5])\nprint('x[:5][::-1] ', x[:5][::-1])\nx[:5][::-1].sort()\nprint('calling x[:5][::-1].sort()')\nprint('x[:5][::-1] (sorted)', x[:5][::-1])\nprint('x[:5] (rev-sorted) ', x[:5])\nprint('x ', x)\n\n# Multi-dimensional\n\ndef display(exp):\n print(exp, eval(exp).shape)\n print(eval(exp))\n print()\n \nx = np.arange(4 * 4 * 2).reshape(2, 4, 4)\ndisplay('x')\ndisplay('x[1, :, :1]')\ndisplay('x[1, :, 0]')\n\n# Add as many length-1 axes as you want [we'll see why later]\ny = np.arange(2 * 2).reshape(2, 2)\ndisplay('y')\ndisplay('y[:, :, np.newaxis]')\ndisplay('y[np.newaxis, :, :, np.newaxis]')\n\n# Programmatically create indices\ndef f(): return slice(0, 2, 1)\ns = f()\nprint('slice', s.start, s.stop, s.step)\ndisplay('x[0, 0, s]')\n# equivalent notation\ndisplay('x[tuple([0, 0, s])]')\ndisplay('x[(0, 0, s)]')",
"Basic indices let us access hyper-rectangles with strides:\n<img src=\"assets/slices.png\" alt=\"http://www.scipy-lectures.org/intro/numpy/numpy.html\" width=\"300\">\nAdvanced Indexing\nIndexing with integer (or boolean) arrays, in arbitrary combination with basic indexing. GOTCHA: All advanced index results are copies, not views.",
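A quick way to see the copy-vs-view distinction concretely (a minimal sketch; the variable names are mine):

```python
import numpy as np

x = np.arange(10)

basic = x[2:5]      # basic slice: a view into x's buffer
adv = x[[2, 3, 4]]  # advanced index: a fresh copy

x[2] = 99
print(basic[0])  # 99 -- the view sees the mutation
print(adv[0])    # 2  -- the copy does not

# A view's .base points back at the owning array; a copy's is None
print(basic.base is x, adv.base is None)  # True True
```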
"m = np.arange(4 * 5).reshape(4, 5)\n\n# 1D advanced index\ndisplay('m')\ndisplay('m[[1,2,1],:]')\n\nprint('original indices')\nprint(' rows', np.arange(m.shape[0]))\nprint(' cols', np.arange(m.shape[1]))\nprint('new indices')\nprint(' rows', ([1, 2, 1]))\nprint(' cols', np.arange(m.shape[1]))\n\n# 2D advanced index\ndisplay('m')\ndisplay('m[0:1, [[1, 1, 2],[0, 1, 2]]]')",
"Why on earth would you do the above? Selection, sampling, algorithms that are based on offsets of arrays (i.e., basically all of them).\nWhat's going on?\nAdvanced indexing is best thought of in the following way:\nA typical ndarray, x, with shape (n0, ..., nN-1) has N corresponding indices. \n(range(n0), ..., range(nN-1))\nIndices work like this: the (i0, ..., iN-1)-th element in an array with the above indices over x is:\n(range(n0)[i0], ..., range(nN-1)[iN-1]) == (i0, ..., iN-1)\nSo the (i0, ..., iN-1)-th element of x is the (i0, ..., iN-1)-th element of \"x with indices (range(n0), ..., range(nN-1))\".\nAn advanced index x[:, ..., ind, ..., :], where ind is some 1D list of integers for axis j between 0 and nj, possibly with repetition, replaces the straightforward increasing indices with:\n(range(n0), ..., ind, ..., range(nN-1))\nThe (i0, ..., iN-1)-th element is (i0, ..., ind[ij], ..., iN-1) from x.\nSo the shape will now be (n0, ..., len(ind), ..., nN-1).\nIt can get even more complicated -- ind can be higher dimensional.",
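The shape and element rules above can be checked directly (a small sketch; `ind` and `ind2` are arbitrary index arrays of my choosing):

```python
import numpy as np

x = np.arange(2 * 3 * 4).reshape(2, 3, 4)

ind = [2, 0, 2, 1]          # 1D index into axis 1, repetition allowed
print(x[:, ind, :].shape)   # (2, 4, 4): axis 1's length becomes len(ind)

# Element rule: result[i0, j, i2] == x[i0, ind[j], i2]
assert x[:, ind, :][1, 0, 3] == x[1, ind[0], 3]

# ind can be higher dimensional: its whole shape replaces the axis
ind2 = np.array([[0, 1], [2, 2]])
print(x[:, ind2, :].shape)  # (2, 2, 2, 4)
```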
"# GOTCHA: accidentally invoking advanced indexing\ndisplay('x')\ndisplay('x[(0, 0, 1),]') # advanced\ndisplay('x[(0, 0, 1)]') # basic\n# best policy: don't parenthesize when you want basic",
"The above covers the case of one advanced index and the rest being basic. One other common situation that comes up in practice is when every index is advanced.\nRecall array x with shape (n0, ..., nN-1). Let indj be integer ndarrays, all of the same shape (say, (m0, ..., mM-1)).\nThen x[ind0, ..., indN-1] has shape (m0, ..., mM-1) and its t=(j0, ..., jM-1)-th element is the (ind0[t], ..., indN-1[t])-th element of x.",
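A quick check of this rule with two 2D index arrays (a sketch; `rows` and `cols` are names I made up):

```python
import numpy as np

m = np.arange(4 * 5).reshape(4, 5)

rows = np.array([[0, 1], [2, 3]])  # shape (2, 2)
cols = np.array([[4, 3], [2, 1]])  # same shape

out = m[rows, cols]
print(out.shape)  # (2, 2): the shared shape of the index arrays
print(out)        # [[ 4  8] [12 16]]

# The t-th element of the result is m[rows[t], cols[t]]
assert out[1, 0] == m[rows[1, 0], cols[1, 0]] == m[2, 2]
```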
"display('m')\ndisplay('m[[1,2],[3,4]]')\n\n# ix_: only applies to 1D indices. computes the cross product\ndisplay('m[np.ix_([1,2],[3,4])]')\n\n# r_: concatenates slices and all forms of indices\ndisplay('m[0, np.r_[:2, slice(3, 1, -1), 2]]')\n\n# Boolean arrays are converted to integers where they're true\n# Then they're treated like the corresponding integer arrays\nnp.random.seed(1234)\ndigits = np.random.permutation(np.arange(10))\nis_odd = digits % 2\nprint(digits)\nprint(is_odd)\nprint(is_odd.astype(bool))\nprint(digits[is_odd]) # GOTCHA\nprint(digits[is_odd.astype(bool)])\n\nprint(digits)\nprint(is_odd.nonzero()[0])\nprint(digits[is_odd.nonzero()])\n\n# Boolean selection in higher dimensions:\nx = np.arange(2 *2).reshape(2, -1)\ny = (x % 2).astype(bool)\nprint(x)\nprint(y)\nprint(y.nonzero())\nprint(x[y]) # becomes double advanced index",
"Indexing Applications",
"# Data cleanup / filtering\n\nx = np.array([1, 2, 3, np.nan, 2, 1, np.nan])\nb = ~np.isnan(x)\nprint(x)\nprint(b)\nprint(x[b])\n\n# Selecting labelled data (e.g. for plotting)\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\n# From DBSCAN sklearn ex\nfrom sklearn.datasets import make_blobs # the samples_generator module was removed in newer sklearn\n\nX, labels = make_blobs(n_samples=100, centers=[[0, 0], [1, 1]], cluster_std=0.4, random_state=0)\nprint(X.shape)\nprint(labels.shape)\nprint(np.unique(labels))\n\nfor label, color in [(0, 'b'), (1, 'r')]:\n xy = X[labels == label]\n plt.scatter(xy[:, 0], xy[:, 1], color=color, marker='.')\n\nplt.axis([-1, 2, -1, 2])\nplt.show()\n\n# Contour plots\n# How to plot sin(x)*sin(y) heatmap?\n\nxs, ys = np.mgrid[0:5:100j, 0:5:100j] # generate mesh\nZ = np.sin(xs) * np.sin(ys)\nplt.imshow(Z, extent=(0, 5, 0, 5))\nplt.show()\n\n# Actual problem from my research:\n\n# Suppose you have 2 sensors, each of which should take measurements\n# at even intervals over the day. We want to make a method which can let us\n# recover from device failure: if a sensor goes down for an extended period,\n# can we impute the missing values from the other?\n\n# Take for example two strongly correlated measured signals:\n\nnp.random.seed(1234)\ns1 = np.sin(np.linspace(0, 10, 100)) + np.random.randn(100) * 0.05\ns2 = 2 * np.sin(np.linspace(0, 10, 100)) + np.random.randn(100) * 0.05\nplt.plot(s1, color='blue')\nplt.plot(s2, color='red')\nplt.show()\n\n# Simulate a failure in sensor 2 for a random 40-index period\n\ndef holdout(): # gives arbitrary slice from 0 to 100 width 40\n width = 40\n start = np.random.randint(0, len(s2) - width)\n missing = slice(start, start + width)\n return missing, np.r_[:start, missing.stop:len(s2)]\n\n# Find the most likely scaling for reconstructing s2 from s1\ndef factor_finder(train_ix):\n return np.mean((s2[train_ix] + 0.0001) / (s1[train_ix] + 0.0001))\n\ntest, train = holdout()\nf = factor_finder(train)\n\ndef plot_factor(factor):\n times = np.arange(len(s1))\n test, train = holdout()\n plt.plot(times, s1, color='blue', ls='--', label='s1')\n plt.scatter(times[train], s2[train], color='red', marker='.', label='train')\n plt.plot(times[test], s1[test] * factor, color='green', alpha=0.6, label='prediction')\n plt.scatter(times[test], s2[test], color='magenta', marker='.', label='test')\n plt.legend(bbox_to_anchor=(1.05, 0.6), loc=2)\n plt.title('prediction factor {}'.format(factor))\n plt.show()\n\nplot_factor(f)\n\n# Cubic kernel convolution and interpolation\n# Complicated example; take a look on your own time!\n\nimport scipy\nimport scipy.sparse\n\n# From Cubic Convolution Interpolation (Keys 1981)\n# Computes a piecewise cubic kernel evaluated at each data point in x\ndef cubic_kernel(x):\n y = np.zeros_like(x)\n x = np.fabs(x)\n if np.any(x > 2):\n raise ValueError('only absolute values <= 2 allowed')\n q = x <= 1\n y[q] = ((1.5 * x[q] - 2.5) * x[q]) * x[q] + 1\n q = ~q\n y[q] = ((-0.5 * x[q] + 2.5) * x[q] - 4) * x[q] + 2\n return y\n\n# Everything is 1D\n# Given a uniform grid of size grid_size\n# and requested samples of size n_samples,\n# generates an n_samples x grid_size interpolation matrix W\n# such that W.f(grid) ~ f(samples) for differentiable f and samples\n# inside of the grid.\ndef interp_cubic(grid, samples):\n delta = grid[1] - grid[0]\n factors = (samples - grid[0]) / delta\n # closest refers to the closest grid point that is smaller\n idx_of_closest = np.floor(factors)\n dist_to_closest = factors - idx_of_closest # in units of delta\n\n grid_size = len(grid)\n n_samples = len(samples)\n csr = scipy.sparse.csr_matrix((n_samples, grid_size), dtype=float)\n for conv_idx in range(-2, 2): # sliding convolution window\n coeff_idx = idx_of_closest - conv_idx\n coeff_idx[coeff_idx < 0] = 0 # threshold (no wraparound below)\n coeff_idx[coeff_idx >= grid_size] = grid_size - 1 # threshold (no wraparound above)\n \n relative_dist = dist_to_closest + conv_idx\n data = cubic_kernel(relative_dist)\n col_idx = coeff_idx\n ind_ptr = np.arange(0, n_samples + 1)\n csr += scipy.sparse.csr_matrix((data, col_idx, ind_ptr),\n shape=(n_samples, grid_size))\n return csr\n \nlo, hi = 0, 1\nfine = np.linspace(lo, hi, 100)\ncoarse = np.linspace(lo, hi, 15)\nW = interp_cubic(coarse, fine)\nprint(W.shape)\n\ndef f(x):\n a = np.sin(2 / (x + 0.2)) * (x + 0.1)\n #a = a * np.cos(5 * x)\n a = a * np.cos(2 * x)\n return a\n\nknown = f(coarse) # only use coarse\ninterp = W.dot(known)\n\nplt.scatter(coarse, known, color='blue', label='grid')\nplt.plot(fine, interp, color='red', label='interp')\nplt.plot(fine, f(fine), color='black', label='exact', ls=':')\nplt.legend(bbox_to_anchor=(1.05, 0.6), loc=2)\nplt.show()",
"Array Creation and Initialization\ndoc\nIf unspecified, default dtype is usually float, with an exception for arange.",
"display('np.linspace(4, 8, 2)')\ndisplay('np.arange(4, 8, 2)') # GOTCHA\n\nplt.plot(np.linspace(1, 4, 10), np.logspace(1, 4, 10))\nplt.show()\n\nshape = (4, 2)\nprint(np.zeros(shape)) # init to zero. Use np.ones or np.full accordingly\n\n# [GOTCHA] np.empty won't initialize anything; it will just grab the first available chunk of memory\nx = np.zeros(shape)\nx[0] = [1, 2]\ndel x\nprint(np.empty(shape))\n\n# From iterator/list/array - can just use constructor\nnp.array([[1, 2], range(3, 5), np.array([5, 6])]) # mixed sequences are converted to one uniform array (if possible)\n\n# Deep copies & shape/dtype preserving creations\nx = np.arange(4).reshape(2, 2)\ny = np.copy(x)\nz = np.zeros_like(x)\nx[1, 1] = 5\nprint(x)\nprint(y)\nprint(z)",
"Extremely extensive random generation. Remember to seed!\nTransposition\nUnder the hood: so far, we've just been looking at the abstraction that NumPy offers. How does it actually keep things contiguous in memory?\nWe have a base array, which is one long contiguous array from 0 to size - 1.",
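The "one long base array" picture is exposed through the strides attribute: the number of bytes to step in the buffer for a unit step along each axis. Transposition just permutes strides, so no data moves (a sketch; the explicit int64 dtype is my addition, to make the byte counts platform-independent):

```python
import numpy as np

x = np.arange(2 * 3 * 4, dtype=np.int64).reshape(2, 3, 4)

# C order: the last axis is contiguous (8 bytes apart); earlier axes step
# over whole sub-blocks: 3*4*8 = 96 and 4*8 = 32 bytes
print(x.strides)    # (96, 32, 8)

# Transposing reverses the strides without touching the buffer
print(x.T.strides)  # (8, 32, 96)
print(np.shares_memory(x, x.T))  # True: a view, not a copy
```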
"x = np.arange(2 * 3 * 4).reshape(2, 3, 4)\nprint(x.shape)\nprint(x.size)\n\n# Use ravel() to get the underlying flat array; ndarray.flatten() will give you a copy\nprint(x)\nprint(x.ravel())\n\n# np.transpose or *.T will reverse axes\nprint('transpose', x.shape, '->', x.T.shape)\n# rollaxis pulls the argument axis to axis 0, keeping all else the same.\nprint('rollaxis', x.shape, '->', np.rollaxis(x, 1, 0).shape)\n\nprint()\n# all the above are instances of np.moveaxis\n# it's clear how these behave:\n\nperm = np.array([0, 2, 1])\nmoved = np.moveaxis(x, range(3), perm)\n\nprint('arbitrary permutation', list(range(3)), perm)\nprint(x.shape, '->', moved.shape)\nprint('moved[1, 2, 0]', moved[1, 2, 0], 'x[1, 0, 2]', x[1, 0, 2])\n\n# When is transposition useful?\n# Matrix stuff, mostly:\nnp.random.seed(1234)\n\nX = np.random.randn(3, 4)\nprint('sigma {:.2f}, eig {:.2f}'.format(\n np.linalg.svd(X)[1].max(),\n np.sqrt(np.linalg.eigvalsh(X.dot(X.T)).max())))\n\n# Create a random symmetric matrix\nX = np.random.randn(3, 3)\nplt.imshow(X)\nplt.show()\n\nX += X.T\nplt.imshow(X)\nplt.show()\n\nprint('Check frob norm upper vs lower tri', np.linalg.norm(np.triu(X) - np.tril(X).T))\n\n# Row-major, C-order\n# largest axis changes fastest\nA = np.arange(2 * 3).reshape(2, 3).copy(order='C')\n\n# Column-major, Fortran-order\n# smallest axis changes fastest\n# GOTCHA: many numpy functions assume C ordering\nB = np.arange(2 * 3).reshape(2, 3).copy(order='F')\n\n# Differences in representation don't manifest in abstraction\nprint(A)\nprint(B)\n\n# Array manipulation functions with order option\n# will use C/F ordering, but this is independent of the underlying layout\nprint(A.ravel())\nprint(A.ravel(order='F'))\n\n# Reshape ravels an array, then folds back into shape, according to the given order\n# Note reshape can infer one dimension; we leave it as -1.\nprint(A.ravel(order='F').reshape(-1, 3))\nprint(A.ravel(order='F').reshape(-1, 3, order='F'))\n\n# GOTCHA: ravel will copy the array so that everything is contiguous\n# if the order differs\nprint(id(A), id(A.ravel().base), id(A.ravel(order='F')))",
"Transposition Example: Kronecker multiplication\nBased on Saatci 2011 (PhD thesis).\nRecall the tensor product over vector spaces $V \\otimes W$ from before. If $V$ has basis $\\textbf{v}_i$ and $W$ has $\\textbf{w}_j$, we can define the tensor product over elements $\\nu\\in V,\\omega\\in W$ as follows.\nLet $\\nu=\\sum_{i=1}^n\\nu_i\\textbf{v}_i$ and $\\omega=\\sum_{j=1}^m\\omega_j\\textbf{w}_j$. Then:\n$$\nV \\otimes W\\ni \\nu\\otimes \\omega=\\sum_{i=1}^n\\sum_{j=1}^m\\nu_i\\omega_j(\\textbf{v}_i\\otimes \\textbf{w}_j)\n$$\nIf $V$ is the vector space of $a\\times b$ matrices, then its basis vectors correspond to each of the $ab$ entries. If $W$ is the vector space of $c\\times d$ matrices, then its basis vectors correspond similarly to the $cd$ entries. In the tensor product, $(\\textbf{v}_i\\otimes \\textbf{w}_j)$ is the basis vector for an entry in the $ac\\times bd$ matrices that make up $V\\otimes W$.",
"# Kronecker demo\n\nA = np.array([[1, 1/2], [-1/2, -1]])\nB = np.identity(2)\n\nf, axs = plt.subplots(2, 2)\n\n# Guess what a 2x2 axes subplot type is?\nprint(type(axs))\n# Use of numpy for convenience: arbitrary object flattening\nfor ax in axs.ravel():\n ax.axis('off')\n \nax1, ax2, ax3, ax4 = axs.ravel()\n\nax1.imshow(A, vmin=-1, vmax=1)\nax1.set_title('A')\nax2.imshow(B, vmin=-1, vmax=1)\nax2.set_title('B')\nax3.imshow(np.kron(A, B), vmin=-1, vmax=1)\nax3.set_title(r'$A\\otimes B$')\nim = ax4.imshow(np.kron(B, A), vmin=-1, vmax=1)\nax4.set_title(r'$B\\otimes A$')\n\nf.colorbar(im, ax=axs.ravel().tolist())\nplt.axis('off')\nplt.show()\n\n# Transposition demo: using transpose and reshape, you can compute a Kronecker\n# matrix-vector product without ever forming the full Kronecker matrix\n\nA = np.random.randn(40, 40)\nB = np.random.randn(40, 40)\nAB = np.kron(A, B)\nz = np.random.randn(40 * 40)\n\ndef kron_mvm():\n return AB.dot(z)\n\ndef saatci_mvm():\n # This differs from the paper's MVM, but is the equivalent for\n # a C-style ordering of arrays.\n x = z.copy()\n for M in [B, A]:\n n = M.shape[1]\n x = x.reshape(-1, n).T\n x = M.dot(x)\n return x.ravel()\n\nprint('diff', np.linalg.norm(kron_mvm() - saatci_mvm()))\nprint('Kronecker matrix vector multiplication')\ncompare_times(kron_mvm, saatci_mvm)",
"Ufuncs and Broadcasting\ndoc",
"# A ufunc is the most common way to modify arrays\n\n# In its simplest form, an n-ary ufunc takes in n numpy arrays\n# of the same shape, and applies some standard operation to \"parallel elements\"\n\na = np.arange(6)\nb = np.repeat([1, 2], 3)\nprint(a)\nprint(b)\nprint(a + b)\nprint(np.add(a, b))\n\n# If any of the arguments are of lower dimension, they're prepended with 1\n# Any arguments that have dimension 1 are repeated along that axis\n\nA = np.arange(2 * 3).reshape(2, 3)\nb = np.arange(2)\nc = np.arange(3)\nfor i in ['A', 'b', 'c']:\n display(i)\n\n# On the right, broadcasting rules will automatically make the conversion\n# of c, which has shape (3,) to shape (1, 3)\ndisplay('A * c')\ndisplay('c.reshape(1, 3)')\ndisplay('np.repeat(c.reshape(1, 3), 2, axis=0)')\n\ndisplay('np.diag(c)')\ndisplay('A.dot(np.diag(c))')\ndisplay('A * c')\n\n# GOTCHA: this won't compile your code to C: it will just make a slow convenience wrapper\ndemo = np.frompyfunc('f({}, {})'.format, 2, 1)\n\n# GOTCHA: common broadcasting mistake -- append instead of prepend\ndisplay('A')\ndisplay('b')\ntry:\n demo(A, b) # can't prepend to (2,) with 1 to get something compatible with (2, 3)\nexcept ValueError as e:\n print('ValueError!')\n print(e)\n\n# np.newaxis adds a 1 in the corresponding axis\ndisplay('b[:, np.newaxis]')\ndisplay('np.repeat(b[:, np.newaxis], 3, axis=1)')\ndisplay('demo(A, b[:, np.newaxis])')\n# note broadcasting rules are invariant to order\n# even if the ufunc isn't \ndisplay('demo(b[:, np.newaxis], A)')\n\n# Using broadcasting, we can do cheap diagonal matrix multiplication\ndisplay('b')\ndisplay('np.diag(b)')\n# without representing the full diagonal matrix.\ndisplay('b[:, np.newaxis] * A')\ndisplay('np.diag(b).dot(A)')\n\n# (Binary) ufuncs get lots of efficient implementation stuff for free\na = np.arange(4)\nb = np.arange(4, 8)\ndisplay('demo.outer(a, b)')\ndisplay('np.bitwise_or.accumulate(b)')\ndisplay('np.bitwise_or.reduce(b)') # last result of accumulate\n\ndef 
setup(): return np.arange(10 ** 6)\n\ndef manual_accum(x):\n res = np.zeros_like(x)\n for i, v in enumerate(x):\n res[i] = res[i-1] | v\n \ndef np_accum(x):\n np.bitwise_or.accumulate(x)\n \nprint('accumulation speed comparison')\ncompare_times(manual_accum, np_accum, setup, setup)",
"Aliasing\nYou can save on allocations and copies by providing the output array to write into.\nAliasing occurs when all or part of the input is repeated in the output.\nUfuncs allow aliasing.",
"# Example: generating random symmetric matrices\nA = np.random.randint(0, 10, size=(3,3))\nprint(A)\nA += A.T # this operation is WELL-DEFINED, even though A is changing\nprint(A)\n\n# Above is sugar for\nnp.add(A, A, out=A)\n\nx = np.arange(10)\nprint(x)\nnp.subtract(x[:5], x[5:], x[:5])\nprint(x)",
"[GOTCHA]: If it's not a ufunc, aliasing is VERY BAD: search for \"In general the rule\" in this discussion. Ufunc aliasing is safe since this PR.",
"x = np.arange(2 * 2).reshape(2, 2)\ntry:\n x.dot(np.arange(2), out=x)\n # GOTCHA: some other functions won't warn you!\nexcept ValueError as e:\n print(e)",
"Configuration and Hardware Acceleration\nNumPy works quickly because it can perform vectorization by linking to C functions that were built for your particular system.\n[GOTCHA] There are two different high-level ways in which NumPy uses hardware to accelerate your computations.\nUfunc\nWhen you perform a built-in ufunc:\n* The corresponding C function is called directly from the Python interpreter\n* It is not parallelized\n* It may be vectorized\nIn general, it is tough to check whether your code is using vectorized instructions (or, in particular, which instruction set is being used, like SSE or AVX512).\n\nIf you installed from pip or Anaconda, you're probably not vectorized.\nIf you compiled NumPy yourself (and selected the correct flags), you're probably fine.\nIf you're using the Numba JIT, then you'll be vectorized too.\nIf you have access to icc and MKL, then you can use the Intel guide or Anaconda\n\nBLAS\nThese are optimized linear algebra routines, and are only called when you invoke operations that rely on these routines.\nThis won't make your vectors add faster (NumPy doesn't ask BLAS to, nor could it usefully: bandwidth-limited ops are not the focus of BLAS). It will help with:\n* Matrix multiplication (np.dot)\n* Linear algebra (SVD, eigenvalues, etc) (np.linalg)\n* Similar stuff from other libraries that accept NumPy arrays may use BLAS too.\nThere are different implementations of BLAS. Some are free, and some are proprietary and built for specific chips (MKL). You can check which version you're using this way, though you can only be sure by inspecting the binaries manually.\nAny NumPy routine that uses BLAS will, by default, use ALL AVAILABLE CORES. This is a departure from the single-threaded execution of ufuncs and other NumPy transformations. You can change BLAS parallelism with the OMP_NUM_THREADS environment variable.\nStuff to Avoid\nNumPy has some cruft left over due to backwards compatibility. There are some edge cases when you would (maybe) use these things (but probably not). In general, avoid them:\n\nnp.chararray: use an np.ndarray with unicode dtype\nnp.ma.MaskedArray: use a boolean advanced index\nnp.matrix: use a 2-dimensional np.ndarray\n\nStuff Not Mentioned\n\nGeneral array manipulation\nSelection-related convenience methods np.sort, np.unique\nArray composition and decomposition np.split, np.stack\nReductions many-to-1 np.sum, np.prod, np.count_nonzero\nMany-to-many array transformations np.fft, np.linalg.cholesky\n\n\nString formatting np.array2string\nIO np.loadtxt, np.savetxt\nPolynomial interpolation and related scipy integration\nEquality testing\n\nTakeaways\n\nUse NumPy arrays for a compact, cache-friendly, in-memory representation of structured numeric data.\nVectorize, vectorize, vectorize! Fewer loops!\nExpressive\nFast\nConcise\nKnow when copies happen vs. when views happen\nAdvanced indexing -> copy\nBasic indexing -> view\nTranspositions -> usually view (depends if memory order changes)\nUfuncs/many-to-many -> copy (possibly with overwrite)\nRely on the powerful indexing API to avoid almost all Python loops\nRolling your own algorithm? Google it, NumPy probably has it built-in!\nBe conscious of what makes copies, and what doesn't\n\nDownsides: NumPy can't optimize across separate ops (the way a C compiler would across statements). But do you need that? It can't parallelize except through BLAS, but is your workload compute-limited or memory-bandwidth-limited?\nCherry on Top: Einsum\ndoc\nRecall the Kronecker product $\\otimes$ from before? Let's recall the fully general tensor product.\nIf $V$ has basis $\\textbf{v}_i$ and $W$ has $\\textbf{w}_j$, we can define the tensor product over elements $\\nu\\in V,\\omega\\in W$ as follows.\nLet $\\nu=\\sum_{i=1}^n\\nu_i\\textbf{v}_i$ and $\\omega=\\sum_{j=1}^m\\omega_j\\textbf{w}_j$. Then:\n$$\nV \\otimes W\\ni \\nu\\otimes \\omega=\\sum_{i=1}^n\\sum_{j=1}^m\\nu_i\\omega_j(\\textbf{v}_i\\otimes \\textbf{w}_j)\n$$\nBut what if $V$ is itself a tensor space, like a matrix space $F^{m\\times n}$, and $W$ is $F^{n\\times k}$? Then $\\nu\\otimes\\omega$ is a tensor with shape $(m, n, n, k)$, where the $(i_1, i_2,i_3,i_4)$-th element is given by $\\nu_{i_1i_2}\\omega_{i_3i_4}$ (the corresponding canonical basis vector being $\\textbf{e}^{(m)}_{i_1}(\\textbf{e}^{(n)}_{i_2})^\\top\\otimes \\textbf{e}^{(n)}_{i_3}(\\textbf{e}^{(k)}_{i_4})^\\top$, where $\\textbf{e}^{(m)}_{i_1}(\\textbf{e}^{(n)}_{i_2})^\\top$, the canonical matrix basis vector, is not that scary - here's an example in $2\\times 3$):\n$$\n\\textbf{e}^{(2)}_{1}(\\textbf{e}^{(3)}_{2})^\\top=\\begin{pmatrix} 0 & 1 & 0\\\\ 0 & 0 & 0 \\end{pmatrix}\n$$\nNow contract along the second and third axes, both of which have length $n$: contraction here builds a tensor with shape $(m, k)$ such that the $(i_1,i_4)$-th entry is the sum of all entries in the tensor product $\\nu\\otimes \\omega$ which have the same values $i_2=i_3$. In other words:\n$$\n[\\text{contract}_{12}(\\nu\\otimes\\omega)]_{i_1i_4}=\\sum_{i_2=1}^n(\\nu\\otimes\\omega)_{i_1,i_2,i_2,i_4}=\\sum_{i_2=1}^n\\nu_{i_1i_2}\\omega_{i_2i_4}\n$$\nDoes that last term look familiar? It's the matrix product! Indeed, a matrix product is a generalized trace of the outer product of two compatible matrices.\nThat's one way of thinking about einsum: it lets you do generalized matrix products, in that you take in an arbitrary number of arrays, compute their outer product, and then specify which axes to trace. But then it also lets you arbitrarily transpose and select diagonal elements of your tensors, too.",
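The "outer product, then trace" description of the matrix product can be verified directly with einsum (a sketch; the shapes and rng seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))

# Full outer (tensor) product: shape (3, 4, 4, 5)
outer = np.tensordot(A, B, axes=0)

# Contract the two length-4 axes: take the j == k diagonal and sum it out.
# This is exactly the matrix product described above.
contracted = np.einsum('ijjk->ik', outer)

print(np.allclose(contracted, A @ B))  # True
```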
"# Great resources to learn einsum:\n# https://obilaniu6266h16.wordpress.com/2016/02/04/einstein-summation-in-numpy/\n# http://ajcr.net/Basic-guide-to-einsum/\n\n# Examples of how it's general:\n\nnp.random.seed(1234)\nx = np.random.randint(-10, 11, size=(2, 2, 2))\nprint(x)\n\n# Swap axes\nprint(np.einsum('ijk->kji', x))\n\n# Sum [contraction is along every axis]\nprint(x.sum(), np.einsum('ijk->', x))\n\n# Multiply (pointwise) [take the diagonal of the outer product; don't sum]\ny = np.random.randint(-10, 11, size=(2, 2, 2))\nnp.array_equal(x * y, np.einsum('ijk,ijk->ijk', x, y))\n\n# Already, an example where einsum is more clear: multiply pointwise along different axes:\nprint(np.array_equal(x * y.transpose(), np.einsum('ijk,kji->ijk', x, y)))\nprint(np.array_equal(x * np.rollaxis(y, 2), np.einsum('ijk,jki->ijk', x, y)))\n\n# Outer (tensor) product\nx = np.arange(4)\ny = np.arange(4, 8)\nnp.array_equal(np.outer(x, y), np.einsum('i,j->ij', x, y))\n\n# Arbitrary inner product\na = np.arange(2 * 2).reshape(2, 2)\nprint(np.linalg.norm(a, 'fro') ** 2, np.einsum('ij,ij->', a, a))\n\nnp.random.seed(1234)\nx = np.random.randn(2, 2)\ny = np.random.randn(2, 2)\n\n# Matrix multiply\nprint(np.array_equal(x.dot(y), np.einsum('ij,jk->ik', x, y)))\n\n# Batched matrix multiply\nx = np.random.randn(3, 2, 2)\ny = np.random.randn(3, 2, 2)\nprint(np.array_equal(\n np.array([i.dot(j) for i, j in zip(x, y)]),\n np.einsum('bij,bjk->bik', x, y)))\n\n# all of {np.matmul, np.tensordot, np.dot} are einsum instances\n# The specializations may have marginal speedups, but einsum is\n# more expressive and clear code.",
"General Einsum Approach\nAgain, lots of visuals in this blog post.\n[GOTCHA]: you can't use more than 52 different letters. But if you find yourself writing np.einsum with more than 52 active dimensions, you should probably make two np.einsum calls. If you have dimensions for which nothing happens, then ... can be used to represent an arbitrary number of omitted dimensions.\nHere's the way I think about an np.einsum (the actual implementation is more efficient).",
"# Let the contiguous blocks of letters be words\n# If they're on the left, they're argument words. On the right, result words.\n\nnp.random.seed(1234)\nx = np.random.randint(-10, 11, 3 * 2 * 2 * 1).reshape(3, 2, 2, 1)\ny = np.random.randint(-10, 11, 3 * 2 * 2).reshape(3, 2, 2)\nz = np.random.randint(-10, 11, 2 * 3).reshape(2, 3)\n\n# Example being followed in einsum description:\n# np.einsum('ijkm,iko,kp->mip', x, y, z)\n\n# 1. Line up each argument word with the axis of the array.\n# Make sure that word length == dimension\n# Make sure same letters correspond to same lengths\n# x.shape (3, 2, 2, 1)\n# i j k m\n# y.shape (3, 2, 2)\n# i k o\n# z.shape (2, 3)\n# k p\n\n# 2. Create the complete tensor product\nouter = np.tensordot(np.tensordot(x, y, axes=0), z, axes=0)\nprint(outer.shape)\nprint('(i j k m i k o k p)')\n\n# 3. Every time a letter repeats, only look at the corresponding \"diagonal\" elements.\n\n# Repeat i: (i j k m i k o k p)\n# (i i )\n# Expected: (i j k m k o k p)\n\n# The expected index corresponds to the above index in the outer product\n# We can do this over all other values with two advanced indices\nspan_i = np.arange(3)\nrepeat_i = outer[span_i, :, :, :, span_i, ...] # ellipses means \"fill with :\"\nprint(repeat_i.shape)\nprint('(i j k m k o k p)')\n\n# Repeat k: (i j k m k o k p)\n# ( k k k )\n# Expected: (i j k m o p)\nspan_k = np.arange(2)\nrepeat_k = repeat_i[:, :, span_k, :, span_k, :, span_k, :]\n# GOTCHA: advanced indexing brings shared advanced index to front, fixed with rollaxis\nrepeat_k = np.rollaxis(repeat_k, 0, 2)\nprint(repeat_k.shape)\nprint('(i j k m o p)')\n\n# 4. Compare the remaining word to the result word; sum out missing letters\n\n# Result word: (m i p)\n# Current word: (i j k m o p)\n\n# Sum out j: (i k m o p)\n# The resulting array has at entry (i k m o p) the following:\n# (i 0 k m o p) + (i 1 k m o p) + ... 
+ (i [axis j length] k m o p)\nsumj = repeat_k.sum(axis=1)\nprint(sumj.shape)\nprint('(i k m o p)')\n\n# Sum out k: (i m o p)\nsumk = sumj.sum(axis=1)\nprint(sumk.shape)\nprint('(i m o p)')\n\n# Sum out o: (i m p)\nsumo = sumk.sum(axis=2)\nprint(sumo.shape)\nprint('(i m p)')\n\n# 5. Transpose the remaining word until it has the same order as the result word\n\n# (i m p) -> (m i p)\nprint(np.moveaxis(sumo, [0, 1, 2], [1, 0, 2]))\nprint(np.einsum('ijkm,iko,kp->mip', x, y, z))",
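That advanced-indexing gotcha is worth seeing in isolation. A small sketch (array contents are arbitrary) showing that a shared advanced index, when separated by a slice, pulls the broadcast axis to the front of the result:

```python
import numpy as np

x = np.arange(12).reshape(2, 3, 2)
span = np.arange(2)

# "Diagonal" over axes 0 and 2: both indices are advanced, and because they
# are separated by a slice, the shared broadcast axis moves to the FRONT
d = x[span, :, span]
print(d.shape)  # (2, 3): the diagonal axis now precedes the ':' axis

# np.diagonal extracts the same values but leaves the remaining axis first
assert np.array_equal(d, np.diagonal(x, axis1=0, axis2=2).T)
```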
"Neural Nets with Einsum\nOriginal post\n<table>\n<tr>\n<th>\n<img src=\"assets/mlp1.png\" alt=\"https://obilaniu6266h16.wordpress.com/2016/02/04/einstein-summation-in-numpy/\" width=\"600\" >\n</th><th>\n<img src=\"assets/mlp2.png\" alt=\"https://obilaniu6266h16.wordpress.com/2016/02/04/einstein-summation-in-numpy/\" width=\"600\" >\n</th></tr></table>\n\nNotice how np.einsum captures succinctly the tensor flow (yep): the extension to batch is extremely natural. You can imagine a similar extension to RGB input (instead of a black/white float, we have an array of 3 values, so our input is now a 4D tensor (batch_size, height, width, 3)).\nReal Application\nUnder certain conditions, a kernel for a Gaussian process, a model for regression, is a matrix with the following form:\n$$\nK = \\sum_{i=1}^nB_i\\otimes D_i\n$$\n$B_i$ has shape $a\\times a$, and they are small dense matrices. $D_i$ is a $b\\times b$ diagonal matrix, and $b$ is so large that we can't even hold $b^2$ in memory. So we only have a vector to represent $D_i$. A useful operation in Gaussian process modelling is the multiplication of $K$ with a vector, $K\\textbf{z}$. How can we do this efficiently and expressively?",
"np.random.seed(1234)\na = 3\nb = 300\nBs = np.random.randn(10, a, a)\nDs = np.random.randn(10, b) # just the diagonal\n\nz = np.random.randn(a * b)\n\ndef quadratic_impl():\n K = np.zeros((a * b, a * b))\n for B, D in zip(Bs, Ds):\n K += np.kron(B, np.diag(D))\n return K.dot(z)\n\ndef einsum_impl():\n # Ellipses trigger broadcasting\n left_kron_saatci = np.einsum('N...b,ab->Nab', Ds, z.reshape(a, b))\n full_sum = np.einsum('Nca,Nab->cb', Bs, left_kron_saatci)\n return full_sum.ravel()\n\nprint('L2 norm of difference', np.linalg.norm(quadratic_impl() - einsum_impl()))\n# Of course, we can make this arbitrarily better by increasing b...\nprint('Matrix-vector multiplication')\ncompare_times(quadratic_impl, einsum_impl)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google/applied-machine-learning-intensive
|
content/03_regression/08_regression_with_tensorflow/colab.ipynb
|
apache-2.0
|
[
"<a href=\"https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/03_regression/08_regression_with_tensorflow/colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCopyright 2020 Google LLC.",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Regression with TensorFlow\nWe have trained a linear regression model in TensorFlow and used it to predict housing prices. However, the model didn't perform as well as we would have liked it to. In this lab, we will build a neural network to try to tackle the same regression problem and see if we can get better results.\nLoading and Preparing the Data\nThe dataset we'll use for this Colab contains California housing information taken from the 1990 census data. We explored this data in a previous lab, so we won't do an analysis here. As a reminder, the documentation for the dataset can be found on Kaggle.\nUpload your kaggle.json file and run the code block below.",
"! chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && mv kaggle.json ~/.kaggle/ && echo 'Done'",
"Once you are done, use the kaggle command to download the file into the lab.",
"!kaggle datasets download camnugent/california-housing-prices\n!ls",
"We now have a file called california-housing-prices.zip that we can load into a DataFrame.",
"import pandas as pd\n\nhousing_df = pd.read_csv('california-housing-prices.zip')\n\nhousing_df",
"Next we can define which columns are features and which is the target.\nWe'll also make a separate list of our numeric columns.",
"target_column = 'median_house_value'\nfeature_columns = [c for c in housing_df.columns if c != target_column]\nnumeric_feature_columns = [c for c in feature_columns if c != 'ocean_proximity']\n\ntarget_column, feature_columns, numeric_feature_columns",
"We also reduced the value of our targets by a factor in the previous lab. This reduction in magnitude was done to help the model train faster. Let's do that again.",
"TARGET_FACTOR = 100000\n\nhousing_df[target_column] /= TARGET_FACTOR\n\nhousing_df[target_column].describe()",
"And we filled in some missing total_bedrooms values.",
"has_all_data = housing_df[~housing_df['total_bedrooms'].isna()]\n\nsums = has_all_data[['total_bedrooms', 'total_rooms']].sum().tolist()\n\nbedrooms_to_total_rooms_ratio = sums[0] / sums[1]\n\nmissing_total_bedrooms_idx = housing_df['total_bedrooms'].isna()\n\nhousing_df.loc[missing_total_bedrooms_idx, 'total_bedrooms'] = housing_df[\n missing_total_bedrooms_idx]['total_rooms'] * bedrooms_to_total_rooms_ratio\n\nhousing_df.describe()",
"Exercise 1: Standardization\nPreviously when we worked with this dataset, we normalized the feature data in order to get it ready for the model. Normalization was the process of making all of the data fit between 0.0 and 1.0 by subtracting the minimum of each column from each data point in that column and then dividing by the delta between the maximum and minimum values.\nIn this exercise you will need to standardize all of the feature columns. Standardization is performed by subtracting the mean value of each column from each data point in that column and then dividing by the standard deviation.\n\nHint: When you are done, call describe() and ensure that the standard deviation for every feature column is 1.0.\n\nStudent Solution",
"# Your Code Goes Here",
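This is not the exercise solution, just an illustration of the standardization formula on a made-up toy column (note that describe() reports the sample standard deviation, ddof=1):

```python
import numpy as np

col = np.array([1.0, 2.0, 3.0, 4.0])  # a toy stand-in for one feature column

# Standardize: subtract the mean, divide by the (sample) standard deviation
standardized = (col - col.mean()) / col.std(ddof=1)

# After standardization the mean is ~0 and the sample standard deviation is 1
assert abs(standardized.mean()) < 1e-12
assert abs(standardized.std(ddof=1) - 1.0) < 1e-12
```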
"One-Hot Encoding\nThe ocean_proximity column will not work with the neural network model that we are planning to build. Neural networks expect numeric values, but ocean_proximity contains string values.\nLet's remind ourselves which values it contains:",
"sorted(housing_df['ocean_proximity'].unique())",
"There are five string values. In our linear regression Colab we told TensorFlow to treat these values as a categorical column. Each string was converted to a whole number that represented their position in a vocabulary list: 0, 1, 2, 3, or 4.\nFor neural networks it is common to see another strategy called one-hot encoding. One-hot encoding is the process of taking a column with a fixed list of string values and turning it into multiple columns containing only zeros and ones.\nFor instance the column ocean_proximity containing five strings would be converted to five columns containing ones and zeros:\nop_sub_hr | op_inland | op_island | op_near_bay | op_near_ocean\n----------|-----------|-----------|-------------|--------------\n 0 | 0 | 0 | 1 | 0\n 0 | 1 | 0 | 0 | 0\n 0 | 1 | 0 | 0 | 0\n 1 | 0 | 0 | 0 | 0\n 0 | 0 | 1 | 0 | 0\n 0 | 0 | 0 | 0 | 1\n 0 | 0 | 1 | 0 | 0\nNotice that in each row, only one column has a value of 1. The rest are all 0. This is the \"one-hot\" in one-hot encoding.\nAs you can imagine, it doesn't scale well for columns with many distinct values. In our case, 5 is perfectly reasonable.\nLet's manually one-hot encode our data.",
"for op in sorted(housing_df['ocean_proximity'].unique()):\n op_col = op.lower().replace(' ', '_').replace('<', '')\n housing_df[op_col] = (housing_df['ocean_proximity'] == op).astype(int)\n feature_columns.append(op_col)\n\nfeature_columns.remove('ocean_proximity')\n\nhousing_df",
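The manual loop above is explicit; as an aside, pandas can produce the same kind of encoding in one call with pd.get_dummies (a sketch on a toy column; the prefix naming here is a choice, not a requirement of the method):

```python
import pandas as pd

toy = pd.DataFrame({'ocean_proximity': ['NEAR BAY', 'INLAND', 'INLAND', 'ISLAND']})
one_hot = pd.get_dummies(toy['ocean_proximity'], prefix='op').astype(int)

print(one_hot.columns.tolist())  # ['op_INLAND', 'op_ISLAND', 'op_NEAR BAY']

# Exactly one 1 per row, as in the table above
assert (one_hot.sum(axis=1) == 1).all()
```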
"Exercise 2: Split the Data\nWe want to hold out some of the data for validation. Using standard Python or a library, split the data. Put 20% of the data in a DataFrame called testing_df and the other 80% in a DataFrame called training_df. Be sure to shuffle the data before splitting. Print the number of records in testing_df and training_df in order to check your work.\nStudent Solution",
"# Your Code Goes Here",
"Building the Model\nWe will build the model using TensorFlow 2. Let's enable it and go ahead and load up TensorFlow.",
"%tensorflow_version 2.x\n\nimport tensorflow as tf\ntf.__version__",
"When we built a TensorFlow LinearRegressor in a previous lab, we were using a pre-configured model. For our neural network regressor, we will build the model ourselves using the Keras API of TensorFlow.\nWe'll build a sequential model where one layer feeds into the next. Each layer will be densely connected, which means every node in one layer connects to every node in the next layer.\nA few things are required for our network. We need to have 13 input nodes since that is the number of features that we have (8 original numerical columns, plus the 5 one-hot encoded ocean proximity columns that we added). We also need to have one output node since we are trying to predict a single price value.\nLet's see what that would look like:",
"from tensorflow import keras\nfrom tensorflow.keras import layers\n\n# Create the Sequential model.\nmodel = keras.Sequential()\n\n# Determine the \"input shape\", which is the number\n# of features that we will feed into the model.\ninput_shape = len(feature_columns)\n\n# Create a layer that accepts our features and outputs\n# a single value, the predicted median home price.\nlayer = layers.Dense(1, input_shape=[input_shape])\n\n# Add the layer to our model.\nmodel.add(layer)\n\n# Print out a model summary.\nmodel.summary()",
"Above we have basically recreated our linear regression from an earlier lab. We have all of our inputs directly mapping to a single output. We didn't choose an activation function, and the default activation function for a Dense layer is a linear function $f(x) = x$.\nNote that the way we built this model was pretty verbose. You typically see simple models like this built in a more compact manner:",
"from tensorflow import keras\nfrom tensorflow.keras import layers\n\nmodel = keras.Sequential(layers=[\n layers.Dense(1, input_shape=[len(feature_columns)])\n])\n\nmodel.summary()",
"Also notice that the layers are named dense_1, dense_2, etc.\nIf you don't supply a name for a layer, TensorFlow will provide a name for you. In small models, this isn't a problem, but you might want to have a meaningful layer name in larger models.\nEven in simple models, is dense_2 a good name for the first layer in a model?\nExercise 3: Name Your Layers\nThe default naming scheme for layers can start to become confusing, especially if you repeatedly run a cell block to iterate on your model design.\nIn this exercise consult the Dense documentation and find the argument that allows you to name your layer. Use that argument in the code below to name your layer 'the_only_layer'. Note that you might have to consult the documentation for the parent classes of Dense.\nAlso, don't forget to answer the question below the code block!\nStudent Solution",
"from tensorflow import keras\nfrom tensorflow.keras import layers\n\nmodel = keras.Sequential(layers=[\n layers.Dense(\n 1,\n input_shape=[len(feature_columns)],\n # Name your layer here\n )\n])\n\nmodel.summary()",
"Which class did the parameter that you used originate from?\n\nYour answer goes here\n\n\nMaking a Deep Neural Network\nWhere neural networks really get powerful is when you add hidden layers. These hidden layers can find complex patterns in your data.\nLet's create a model with a few hidden layers. We'll add two layers with sixty-four nodes each.",
"from tensorflow import keras\nfrom tensorflow.keras import layers\n\nfeature_count = len(feature_columns)\n\nmodel = keras.Sequential([\n layers.Dense(64, input_shape=[feature_count]),\n layers.Dense(64),\n layers.Dense(1)\n])\n\nmodel.summary()",
"We now have a deep neural network model. The model has 13 input nodes. These nodes feed into our first hidden layer of 64 nodes.\nThe first line of our model summary tells us that we have 64 nodes and 896 parameters. The node count in 'Output Shape' makes sense, but what about the 'Param #' of 896?\nRemember that we have 13 input nodes feeding into 64 nodes in our first hidden layer. The layers are densely connected, so each of the 13 input nodes connects to each of the 64 nodes in the next layer. 13 * 64 = 832 connections. Add another 64 for the number of nodes in the layer, and you get the 896 number.\nThis pattern repeats for the next layer. 64 nodes connecting to 64 nodes: 64 * 64 + 64 = 4160.\nAnd finally 64 nodes connect to the final output node: 64 * 1 + 1 = 65.\nThis makes for a total of 5121 parameters in the model. Even a very small neural network like this can have a lot of trainable parameters inside of it!\nBefore we start training it, we need to tell TensorFlow how and what to optimize the model for using the compile method. In our example below, we are optimizing for mean squared error using the Adam optimizer. We'll calculate and report the mean squared error and mean absolute error along the way.",
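The parameter arithmetic above is easy to check with a few lines of plain Python (a small sketch; no TensorFlow needed, and the layer sizes just mirror the model above):

```python
# Dense layer parameter count = inputs * units (weights) + units (biases)
def dense_params(n_inputs, n_units):
    return n_inputs * n_units + n_units

sizes = [13, 64, 64, 1]  # input features, two hidden layers, output node
total = sum(dense_params(n_in, n_out) for n_in, n_out in zip(sizes, sizes[1:]))
print(total)  # 5121
```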
"model.compile(\n loss='mse',\n optimizer='Adam',\n metrics=['mae', 'mse'],\n)\n\nmodel.summary()",
"Training the Model\nWe can now train the model using the fit() method. Training is performed for a specified number of epochs. An epoch is a full pass over the training data. In this case, we are asking to train over the full dataset 50 times.\nIn order to get the data into the model, we don't have to write an input function like we did with the Estimator API. The Keras API provides for a much more direct format.",
"EPOCHS = 50\n\nmodel.fit(\n training_df[feature_columns],\n training_df[target_column],\n epochs=EPOCHS,\n validation_split=0.2,\n)",
"Validating the Model\nWe can now see how well our model performs on our validation test set. In order to get the model to make predictions, we use the predict method.",
"predictions = model.predict(testing_df[feature_columns])\n\npredictions",
"Notice that the predictions are lists of lists. This is because neural networks can return more than one prediction per input. We set this network up to have a single final node, but could have had more.\nExercise 4: Calculating RMSE\nAt this point we have the predicted values from our test features and the actual values. In this exercise you are tasked with computing the root mean squared error of those predictions. Given the predictions stored in predictions above, write code that computes the root mean squared error of those predictions vs. the truth found in testing_df. Print the root mean squared error.\nStudent Solution",
"# Your Code Goes here",
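Again not the solution, but on made-up toy arrays the root mean squared error computation looks like this (the values are invented):

```python
import numpy as np

truth = np.array([2.0, 3.0, 4.0])
preds = np.array([2.5, 2.5, 4.0])  # stand-in for flattened model predictions

# RMSE: square the errors, average, then take the square root
rmse = np.sqrt(np.mean((preds - truth) ** 2))
print(rmse)  # ~0.408
```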
"Improving the Model\nIn the exercise above, you likely got a root mean squared error very close to the error we got in the linear regression lab. What's going on? I thought deep learning models were supposed to be really, really good!\nDeep learning models can be really good, but they often require a bit of hyperparameter tuning. Aside from the breadth and depth of the hidden layers, the activation function for the model can have a big impact on how a model performs.\nEarlier we mentioned that the default activation function for Dense layers is the linear function $f(x) = x$. It turns out that if you stack layers of linear functions, you just get a single linear function, so the network that we built is basically just one big linear regression.\nWe can change the activation function layer by layer for our model. In order to do that, we just pass an activation argument to our Dense class. Keras has many built-in activations that you can reference by name like:\npython\n    layers.Dense(64, activation='sigmoid')\nFor activations that aren't built into Keras, you can use the full path to the function:\npython\n    layers.Dense(64, activation=tf.nn.swish)\nThe tf.nn namespace is a little crowded, but there are activation functions in there, including swish, leaky_relu, and more.\nExercise 5: A Better Activation Function\nExperiment with different activation functions and find one that performs better than the linear activation that we used above. You can set the activation function on any or all of the layers in the network. The functions don't have to be the same.\nPrint out the root mean squared error once you find an acceptable activation function.\nStudent Solution",
"# Your Code Goes Here",
"Visualizing Training\nAt this point, we have a pretty solid neural network regression model. It performs better than our linear regression model, though it does take a while to train.\nTraining time is largely a product of two factors:\n\nThe size of the model\nThe number of epochs\n\nLarger models take longer to train. That shouldn't come as a surprise. Remember from above that we calculated the number of parameters in our model. Every layer that is densely connected adds many more parameters that need to be adjusted during training.\nOur goal is to find a model that is big enough, but not too big. This, it turns out, is very much an area where experimentation is required.\nThe second determinant of model training time is the number of epochs. We can choose an arbitrary number of epochs from one to infinity. How many do we need?\nIt turns out that we can be much more scientific about this parameter. As a model begins to converge, there is less and less benefit for each subsequent epoch.\nMore training does not necessarily mean a better model.\nThere are a few ways to determine the appropriate number of epochs. One is to plot the error and see when it flattens out.\nIt turns out that our model actually returns the error values when you fit the model.",
"model = keras.Sequential([\n layers.Dense(64, input_shape=[feature_count]),\n layers.Dense(64),\n layers.Dense(1)\n])\n\nmodel.compile(\n loss='mse',\n optimizer='Adam',\n metrics=['mae', 'mse'],\n)\n\nEPOCHS = 5\n\nhistory = model.fit(\n training_df[feature_columns],\n training_df[target_column],\n epochs=EPOCHS,\n verbose=0, # New parameter to make model training silent\n validation_split=0.2,\n)\n\nhistory.history",
"Notice that the history.history contains our model's loss (loss), mean absolute error (mae), mean squared error (mse), validation loss (val_loss), validation mean absolute error (val_mae), and validation mean squared error (val_mse) at each epoch.\nIt would be useful to plot the error over time. In the next exercise, you will create a visualization that will help us determine when to stop training the model.\nExercise 6: Plotting Error\nUse matplotlib.pyplot or seaborn to create a line plot that shows the mean squared error and the validation mean squared error per epoch.\nIn the code block below, we save the errors per epoch in the variable history. Inspect the variable and plot a line plot which has the epoch on the x-axis and the mean squared error on the y-axis. There should be two lines on the visualization: mean squared error and validation mean squared error.\nNote that we created the model with the default activation function. Use the activation function that you found to be more useful in exercise 5.\nThe result should be a line plot of epoch and error with two lines similar to:",
"model = keras.Sequential([\n layers.Dense(64, input_shape=[feature_count]),\n layers.Dense(64),\n layers.Dense(1)\n])\n\nmodel.compile(\n loss='mse',\n optimizer='Adam',\n metrics=['mae', 'mse'],\n)\n\nEPOCHS = 100\n\nhistory = model.fit(\n training_df[feature_columns],\n training_df[target_column],\n epochs=EPOCHS,\n verbose=0,\n validation_split=0.2,\n)",
"Student Solution",
"# Your Code Goes Here",
"Interpreting Loss Visualizations\nWe have now created a visualization that should look something like this:\n\nBut how do we interpret this visualization?\nThe blue line is the mean squared error for the training data. You can see it plummeting fast as the model quickly learns.\nThe orange line is the validation data. This is a holdout set of data that the model checks after each epoch. You can see it dropping pretty quickly, too, but then it seems to stabilize somewhat by 20 epochs.\nToward the right side of the graph, you can see that our validation set stays volatile but relatively flat, while our training data set keeps getting better and better.\nShould we train more or less?\nThe steadily decreasing blue line is actually a signal of overfitting on the training data.\nThe flat(ish) orange line signals this model is as good as we can get.\nFor this model we could possibly stop training after even 25 epochs and get similar performance.\nBut how do you know when to stop?\nLuckily there is an early stopping algorithm that allows a model to stop training when validation data isn't improving.\nIn the example below, we set up a model to train for 1000 epochs; however, we add an early stopping callback. Early stopping stops training when the validation metrics stop improving.\nIf you run the code block below, you'll see far fewer than 1000 epochs run.",
"model = keras.Sequential([\n layers.Dense(64, input_shape=[feature_count]),\n layers.Dense(64),\n layers.Dense(1)\n])\n\nmodel.compile(\n loss='mse',\n optimizer='Adam',\n metrics=['mae', 'mse'],\n)\n\nEPOCHS = 1000\n\nearly_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)\n\nhistory = model.fit(\n training_df[feature_columns],\n training_df[target_column],\n epochs=EPOCHS,\n validation_split=0.2,\n callbacks=[early_stop],\n)",
"Conclusion\nWe have now learned how to build a deep neural network to solve a regression problem. We have visualized our loss in order to determine when we might stop training, and we have utilized early stopping to avoid wasting time training a model.\nWelcome to deep neural networks. They are deceptively simple to build, but they are very complex to master. When you can build a model to fit a domain, you can create amazing predictions that rival human experts."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
martintb/pe_optimization_tutorial
|
02-numpy.ipynb
|
mit
|
[
"Numpy\nThe best part about Numpy is that not only do we get massive speedups because numpy can perform many of its operations at the C level, but the vectorized API also makes the code simpler and (to some extent) more Pythonic. The only \"downside\" is that we have to learn to write our code using Numpy idioms rather than Python idioms.",
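As a tiny, made-up illustration of that idiom shift (not part of the tutorial's own data):

```python
import numpy as np

values = np.arange(10, dtype=float)

# Python idiom: an explicit loop, one element at a time
total_loop = 0.0
for v in values:
    total_loop += v * v

# NumPy idiom: a single vectorized expression, evaluated in C
total_vec = np.sum(values ** 2)

print(total_loop, total_vec)  # 285.0 285.0
assert total_loop == total_vec
```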
"%load_ext memory_profiler\n%load_ext snakeviz\n%load_ext cython\nimport holoviews as hv\nhv.extension('bokeh','matplotlib')\nfrom IPython.core import debugger\nist = debugger.set_trace",
"We load in the position and box information created in the intro notebook. If you haven't run that notebook, this line will not work! (You don't have to read the wall of text, just run the cells...)",
"import numpy as np\npos = np.loadtxt('data/positions.dat')\nbox = np.loadtxt('data/box.dat')\n\nprint('Read {:d} positions.'.format(pos.shape[0]))\nprint('x min/max: {:+4.2f}/{:+4.2f}'.format(pos.min(0)[0],pos.max(0)[0]))\nprint('y min/max: {:+4.2f}/{:+4.2f}'.format(pos.min(0)[1],pos.max(0)[1]))\nprint('z min/max: {:+4.2f}/{:+4.2f}'.format(pos.min(0)[2],pos.max(0)[2]))",
"Round 1: Vectorized Operations\nWe need to re-implement the potential energy function in numpy.",
"import numpy as np\n\ndef potentialEnergyFunk(r,width=1.0,height=10.0):\n '''\n Calculates the (soft) potential energy between two atoms\n \n Parameters\n ----------\n r: ndarray (float)\n separation distances between two atoms\n width: float\n breadth of the potential i.e. where the potential goes to zero\n height: float\n strength/height of the potential\n '''\n U = np.zeros_like(r)\n mask = (r<width) #only do calculation below the cutoff width\n U[mask] = 0.5 * height * (1 + np.cos(np.pi*r[mask]/width))\n return U",
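As a quick sanity check on the limiting values (a sketch that inlines the same formula rather than importing the notebook state): the potential should equal height at contact and vanish at and beyond the cutoff width:

```python
import numpy as np

width, height = 1.0, 10.0
r = np.array([1e-9, 0.5, 1.0, 5.0])

# Same soft potential as above: 0.5 * height * (1 + cos(pi * r / width)) below the cutoff
U = np.zeros_like(r)
mask = r < width
U[mask] = 0.5 * height * (1 + np.cos(np.pi * r[mask] / width))

assert abs(U[0] - height) < 1e-6          # full height at contact
assert abs(U[1] - 0.5 * height) < 1e-12   # half height at r = width/2
assert np.all(U[2:] == 0.0)               # zero at and beyond the cutoff
```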
"We can plot the potential energy again just to make sure this function behaves as expected.",
"%%opts Curve [width=600,show_grid=True,height=350]\n\ndr = 0.05 # spacing of r points\nrmax = 10.0 # maximum r value\npts = int(rmax/dr) # number of r points\nr = np.arange(dr,rmax,dr)\n\ndef plotFunk(width,height,label='dynamic'):\n U = potentialEnergyFunk(r,width,height)\n return hv.Curve((r,U),kdims=['Separation Distance'],vdims=['Potential Energy'],label=label)\n \ndmap = hv.DynamicMap(plotFunk,kdims=['width','height'])\ndmap = dmap.redim.range(width=((1.0,10.0)),height=((1.0,5.0)))\ndmap*plotFunk(10.0,5.0,label='width: 10., height: 5.')*plotFunk(1.0,1.0,label='width: 1., height: 1.')\n\nfrom math import sqrt\n\ndef calcTotalEnergy1(pos,box):\n '''\n Parameters\n ----------\n pos: ndarray, size (N,3), (float)\n array of cartesian coordinate positions\n \n box: ndarray, size (3), (float)\n simulation box dimensions\n '''\n \n #sanity check\n assert box.shape[0] == 3\n \n # This next line is rather unpythonic but essentially it convinces\n # numpy to perform a subtraction between the full Cartesian Product\n # of the positions array\n dr = np.abs(pos - pos[:,np.newaxis,:])\n \n #still need to apply periodic boundary conditions\n dr = np.where(dr>box/2.0,dr-box,dr)\n \n dist = np.sqrt(np.sum(np.square(dr),axis=-1))\n \n # calculate the full N x N pair energy matrix\n U = potentialEnergyFunk(dist)\n\n # extract the upper triangle from U\n U = np.triu(U,k=1) \n \n return U.sum() ",
"Runtime profiling!",
"%%prun -D prof/numpy1.prof\nenergy = calcTotalEnergy1(pos,box)\n\nwith open('energy/numpy1.dat','w') as f:\n f.write('{}\\n'.format(energy))",
"Memory profiling!",
"memprof = %memit -o calcTotalEnergy1(pos,box)\n\nusage = memprof.mem_usage[0]\nincr = memprof.mem_usage[0] - memprof.baseline\nwith open('prof/numpy1.memprof','w') as f:\n f.write('{}\\n{}\\n'.format(usage,incr))",
"Round 2: Less is More\nThis is good, but can we do better? With this implementation, we are actually calculating twice as many potential energies as we need to! Let's reimplement the above to see if we can speed up this function (and possibly reduce the memory usage).",
"from math import sqrt\n\ndef calcTotalEnergy2(pos,box):\n '''\n Parameters\n ----------\n pos: ndarray, size (N,3), (float)\n array of cartesian coordinate positions\n \n box: ndarray, size (3), (float)\n simulation box dimensions\n '''\n \n #sanity check\n assert box.shape[0] == 3\n \n # This next line is rather unpythonic but essentially it convinces\n # numpy to perform a subtraction between the full Cartesian Product\n # of the positions array\n dr = np.abs(pos - pos[:,np.newaxis,:])\n \n #extract out upper triangle\n dr = dr[np.triu_indices(dr.shape[0],k=1)] #<<<<<<<\n \n #still need to apply periodic boundary conditions\n dr = np.where(dr>box/2.0,dr-box,dr)\n \n dist = np.sqrt(np.sum(np.square(dr),axis=-1))\n \n # calculate the potential energy for each unique pair\n U = potentialEnergyFunk(dist)\n \n return U.sum() \n\n%%prun -D prof/numpy2.prof\nenergy = calcTotalEnergy2(pos,box)\n\nwith open('energy/numpy2.dat','w') as f:\n f.write('{}\\n'.format(energy))",
"Memory profiling!",
"memprof = %memit -o calcTotalEnergy2(pos,box)\n\nusage = memprof.mem_usage[0]\nincr = memprof.mem_usage[0] - memprof.baseline\nwith open('prof/numpy2.memprof','w') as f:\n f.write('{}\\n{}\\n'.format(usage,incr))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
stable/_downloads/9619fd95b952a0c715b83d0e6b37c416/10_epochs_overview.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"The Epochs data structure: discontinuous data\nThis tutorial covers the basics of creating and working with :term:epoched\n<epochs> data. It introduces the :class:~mne.Epochs data structure in\ndetail, including how to load, query, subselect, export, and plot data from an\n:class:~mne.Epochs object. For more information about visualizing\n:class:~mne.Epochs objects, see tut-visualize-epochs. For info on\ncreating an :class:~mne.Epochs object from (possibly simulated) data in a\n:class:NumPy array <numpy.ndarray>, see tut-creating-data-structures.\nAs usual we'll start by importing the modules we need:",
"import os\nimport mne",
":class:~mne.Epochs objects are a data structure for representing and\nanalyzing equal-duration chunks of the EEG/MEG signal. :class:~mne.Epochs\nare most often used to represent data that is time-locked to repeated\nexperimental events (such as stimulus onsets or subject button presses), but\ncan also be used for storing sequential or overlapping frames of a continuous\nsignal (e.g., for analysis of resting-state activity; see\nfixed-length-events). Inside an :class:~mne.Epochs object, the data\nare stored in an :class:array <numpy.ndarray> of shape (n_epochs,\nn_channels, n_times).\n:class:~mne.Epochs objects have many similarities with :class:~mne.io.Raw\nobjects, including:\n\n\nThey can be loaded from and saved to disk in .fif format, and their\n data can be exported to a :class:NumPy array <numpy.ndarray> through the\n :meth:~mne.Epochs.get_data method or to a :class:Pandas DataFrame\n <pandas.DataFrame> through the :meth:~mne.Epochs.to_data_frame method.\n\n\nBoth :class:~mne.Epochs and :class:~mne.io.Raw objects support channel\n selection by index or name, including :meth:~mne.Epochs.pick,\n :meth:~mne.Epochs.pick_channels and :meth:~mne.Epochs.pick_types\n methods.\n\n\n:term:SSP projector <projector> manipulation is possible through\n :meth:~mne.Epochs.add_proj, :meth:~mne.Epochs.del_proj, and\n :meth:~mne.Epochs.plot_projs_topomap methods.\n\n\nBoth :class:~mne.Epochs and :class:~mne.io.Raw objects have\n :meth:~mne.Epochs.copy, :meth:~mne.Epochs.crop,\n :meth:~mne.Epochs.time_as_index, :meth:~mne.Epochs.filter, and\n :meth:~mne.Epochs.resample methods.\n\n\nBoth :class:~mne.Epochs and :class:~mne.io.Raw objects have\n :attr:~mne.Epochs.times, :attr:~mne.Epochs.ch_names,\n :attr:~mne.Epochs.proj, and :class:info <mne.Info> attributes.\n\n\nBoth :class:~mne.Epochs and :class:~mne.io.Raw objects have built-in\n plotting methods :meth:~mne.Epochs.plot, :meth:~mne.Epochs.plot_psd,\n and :meth:~mne.Epochs.plot_psd_topomap.\n\n\nCreating Epoched data from a 
Raw object\nThe example dataset we've been using thus far doesn't include pre-epoched\ndata, so in this section we'll load the continuous data and create epochs\nbased on the events recorded in the :class:~mne.io.Raw object's STIM\nchannels. As we often do in these tutorials, we'll :meth:~mne.io.Raw.crop\nthe :class:~mne.io.Raw data to save memory:",
"sample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False).crop(tmax=60)",
"As we saw in the tut-events-vs-annotations tutorial, we can extract an\nevents array from :class:~mne.io.Raw objects using :func:mne.find_events:",
"events = mne.find_events(raw, stim_channel='STI 014')",
"<div class=\"alert alert-info\"><h4>Note</h4><p>We could also have loaded the events from file, using\n :func:`mne.read_events`::\n\n sample_data_events_file = os.path.join(sample_data_folder,\n 'MEG', 'sample',\n 'sample_audvis_raw-eve.fif')\n events_from_file = mne.read_events(sample_data_events_file)\n\n See `tut-section-events-io` for more details.</p></div>\n\nThe :class:~mne.io.Raw object and the events array are the bare minimum\nneeded to create an :class:~mne.Epochs object, which we create with the\n:class:mne.Epochs class constructor. However, you will almost surely want\nto change some of the other default parameters. Here we'll change tmin\nand tmax (the time relative to each event at which to start and end each\nepoch). Note also that the :class:~mne.Epochs constructor accepts\nparameters reject and flat for rejecting individual epochs based on\nsignal amplitude. See the tut-reject-epochs-section section for\nexamples.",
"epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7)",
"You'll see from the output that:\n\n\nall 320 events were used to create epochs\n\n\nbaseline correction was automatically applied (by default, baseline is\n defined as the time span from tmin to 0, but can be customized with\n the baseline parameter)\n\n\nno additional metadata was provided (see tut-epochs-metadata for\n details)\n\n\nthe projection operators present in the :class:~mne.io.Raw file were\n copied over to the :class:~mne.Epochs object\n\n\nIf we print the :class:~mne.Epochs object, we'll also see a note that the\nepochs are not copied into memory by default, and a count of the number of\nepochs created for each integer Event ID.",
"print(epochs)",
"Notice that the Event IDs are in quotes; since we didn't provide an event\ndictionary, the :class:mne.Epochs constructor created one automatically and\nused the string representation of the integer Event IDs as the dictionary\nkeys. This is more clear when viewing the event_id attribute:",
"print(epochs.event_id)",
"This time let's pass preload=True and provide an event dictionary; our\nprovided dictionary will get stored as the event_id attribute and will\nmake referencing events and pooling across event types easier:",
"event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,\n 'visual/right': 4, 'face': 5, 'buttonpress': 32}\nepochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7, event_id=event_dict,\n preload=True)\nprint(epochs.event_id)\ndel raw # we're done with raw, free up some memory",
"Notice that the output now mentions \"1 bad epoch dropped\". In the tutorial\nsection tut-reject-epochs-section we saw how you can specify channel\namplitude criteria for rejecting epochs, but here we haven't specified any\nsuch criteria. In this case, it turns out that the last event was too close\nthe end of the (cropped) raw file to accommodate our requested tmax of\n0.7 seconds, so the final epoch was dropped because it was too short. Here\nare the drop_log entries for the last 4 epochs (empty lists indicate\nepochs that were not dropped):",
"print(epochs.drop_log[-4:])",
"<div class=\"alert alert-info\"><h4>Note</h4><p>If you forget to provide the event dictionary to the :class:`~mne.Epochs`\n constructor, you can add it later by assigning to the ``event_id``\n attribute::\n\n epochs.event_id = event_dict</p></div>\n\nBasic visualization of Epochs objects\nThe :class:~mne.Epochs object can be visualized (and browsed interactively)\nusing its :meth:~mne.Epochs.plot method:",
"epochs.plot(n_epochs=10)",
"Notice that the individual epochs are sequentially numbered along the bottom\naxis and are separated by vertical dashed lines.\nEpoch plots are interactive (similar to :meth:raw.plot()\n<mne.io.Raw.plot>) and have many of the same interactive controls as\n:class:~mne.io.Raw plots. Horizontal and vertical scrollbars allow browsing\nthrough epochs or channels (respectively), and pressing :kbd:? when the\nplot is focused will show a help screen with all the available controls. See\ntut-visualize-epochs for more details (as well as other ways of\nvisualizing epoched data).\nSubselecting epochs\nNow that we have our :class:~mne.Epochs object with our descriptive event\nlabels added, we can subselect epochs easily using square brackets. For\nexample, we can load all the \"catch trials\" where the stimulus was a face:",
"print(epochs['face'])",
"We can also pool across conditions easily, thanks to how MNE-Python handles\nthe / character in epoch labels (using what is sometimes called\n\"tag-based indexing\"):",
"# pool across left + right\nprint(epochs['auditory'])\nassert len(epochs['auditory']) == (len(epochs['auditory/left']) +\n len(epochs['auditory/right']))\n# pool across auditory + visual\nprint(epochs['left'])\nassert len(epochs['left']) == (len(epochs['auditory/left']) +\n len(epochs['visual/left']))",
"You can also pool conditions by passing multiple tags as a list. Note that\nMNE-Python will not complain if you ask for tags not present in the object,\nas long as it can find some match: the below example is parsed as\n(inclusive) 'right' or 'bottom', and you can see from the output\nthat it selects only auditory/right and visual/right.",
"print(epochs[['right', 'bottom']])",
"However, if no match is found, an error is returned:",
"try:\n print(epochs[['top', 'bottom']])\nexcept KeyError:\n print('Tag-based selection with no matches raises a KeyError!')",
"Selecting epochs by index\n:class:~mne.Epochs objects can also be indexed with integers, :term:slices\n<slice>, or lists of integers. This method of selection ignores event\nlabels, so if you want the first 10 epochs of a particular type, you can\nselect the type first, then use integers or slices:",
"print(epochs[:10]) # epochs 0-9\nprint(epochs[1:8:2]) # epochs 1, 3, 5, 7\n\nprint(epochs['buttonpress'][:4]) # first 4 \"buttonpress\" epochs\nprint(epochs['buttonpress'][[0, 1, 2, 3]]) # same as previous line",
"Selecting, dropping, and reordering channels\nYou can use the :meth:~mne.Epochs.pick, :meth:~mne.Epochs.pick_channels,\n:meth:~mne.Epochs.pick_types, and :meth:~mne.Epochs.drop_channels methods\nto modify which channels are included in an :class:~mne.Epochs object. You\ncan also use :meth:~mne.Epochs.reorder_channels for this purpose; any\nchannel names not provided to :meth:~mne.Epochs.reorder_channels will be\ndropped. Note that these channel selection methods modify the object\nin-place (unlike the square-bracket indexing to select epochs seen above)\nso in interactive/exploratory sessions you may want to create a\n:meth:~mne.Epochs.copy first.",
"epochs_eeg = epochs.copy().pick_types(meg=False, eeg=True)\nprint(epochs_eeg.ch_names)\n\nnew_order = ['EEG 002', 'STI 014', 'EOG 061', 'MEG 2521']\nepochs_subset = epochs.copy().reorder_channels(new_order)\nprint(epochs_subset.ch_names)\n\ndel epochs_eeg, epochs_subset",
"Changing channel name and type\nYou can change the name or type of a channel using\n:meth:~mne.Epochs.rename_channels or :meth:~mne.Epochs.set_channel_types.\nBoth methods take :class:dictionaries <dict> where the keys are existing\nchannel names, and the values are the new name (or type) for that channel.\nExisting channels that are not in the dictionary will be unchanged.",
"epochs.rename_channels({'EOG 061': 'BlinkChannel'})\n\nepochs.set_channel_types({'EEG 060': 'ecg'})\nprint(list(zip(epochs.ch_names, epochs.get_channel_types()))[-4:])\n\n# let's set them back to the correct values before moving on\nepochs.rename_channels({'BlinkChannel': 'EOG 061'})\nepochs.set_channel_types({'EEG 060': 'eeg'})",
"Selection in the time domain\nTo change the temporal extent of the :class:~mne.Epochs, you can use the\n:meth:~mne.Epochs.crop method:",
"shorter_epochs = epochs.copy().crop(tmin=-0.1, tmax=0.1, include_tmax=True)\n\nfor name, obj in dict(Original=epochs, Cropped=shorter_epochs).items():\n print('{} epochs has {} time samples'\n .format(name, obj.get_data().shape[-1]))",
"Cropping removed part of the baseline. When printing the\ncropped :class:~mne.Epochs, MNE-Python will inform you about the time\nperiod that was originally used to perform baseline correction by displaying\nthe string \"baseline period cropped after baseline correction\":",
"print(shorter_epochs)",
"However, if you wanted to expand the time domain of an :class:~mne.Epochs\nobject, you would need to go back to the :class:~mne.io.Raw data and\nrecreate the :class:~mne.Epochs with different values for tmin and/or\ntmax.\nIt is also possible to change the \"zero point\" that defines the time values\nin an :class:~mne.Epochs object, with the :meth:~mne.Epochs.shift_time\nmethod. :meth:~mne.Epochs.shift_time allows shifting times relative to the\ncurrent values, or specifying a fixed time to set as the new time value of\nthe first sample (deriving the new time values of subsequent samples based on\nthe :class:~mne.Epochs object's sampling frequency).",
"# shift times so that first sample of each epoch is at time zero\nlater_epochs = epochs.copy().shift_time(tshift=0., relative=False)\nprint(later_epochs.times[:3])\n\n# shift times by a relative amount\nlater_epochs.shift_time(tshift=-7, relative=True)\nprint(later_epochs.times[:3])\n\ndel shorter_epochs, later_epochs",
"Note that although time shifting respects the sampling frequency (the spacing\nbetween samples), it does not enforce the assumption that there is a sample\noccurring at exactly time=0.\nExtracting data in other forms\nThe :meth:~mne.Epochs.get_data method returns the epoched data as a\n:class:NumPy array <numpy.ndarray>, of shape (n_epochs, n_channels,\nn_times); an optional picks parameter selects a subset of channels by\nindex, name, or type:",
"eog_data = epochs.get_data(picks='EOG 061')\nmeg_data = epochs.get_data(picks=['mag', 'grad'])\nchannel_4_6_8 = epochs.get_data(picks=slice(4, 9, 2))\n\nfor name, arr in dict(EOG=eog_data, MEG=meg_data, Slice=channel_4_6_8).items():\n print('{} contains {} channels'.format(name, arr.shape[1]))",
"Note that if your analysis requires repeatedly extracting single epochs from\nan :class:~mne.Epochs object, epochs.get_data(item=2) will be much\nfaster than epochs[2].get_data(), because it avoids the step of\nsubsetting the :class:~mne.Epochs object first.\nYou can also export :class:~mne.Epochs data to :class:Pandas DataFrames\n<pandas.DataFrame>. Here, the :class:~pandas.DataFrame index will be\nconstructed by converting the time of each sample into milliseconds and\nrounding it to the nearest integer, and combining it with the event types and\nepoch numbers to form a hierarchical :class:~pandas.MultiIndex. Each\nchannel will appear in a separate column. Then you can use any of Pandas'\ntools for grouping and aggregating data; for example, here we select any\nepochs numbered 10 or less from the auditory/left condition, and extract\ntimes between 100 and 107 ms on channels EEG 056 through EEG 058\n(note that slice indexing within Pandas' :obj:~pandas.DataFrame.loc is\ninclusive of the endpoint):",
"df = epochs.to_data_frame(index=['condition', 'epoch', 'time'])\ndf.sort_index(inplace=True)\nprint(df.loc[('auditory/left', slice(0, 10), slice(100, 107)),\n 'EEG 056':'EEG 058'])\n\ndel df",
"See the tut-epochs-dataframe tutorial for many more examples of the\n:meth:~mne.Epochs.to_data_frame method.\nLoading and saving Epochs objects to disk\n:class:~mne.Epochs objects can be loaded and saved in the .fif format\njust like :class:~mne.io.Raw objects, using the :func:mne.read_epochs\nfunction and the :meth:~mne.Epochs.save method. Functions are also\navailable for loading data that was epoched outside of MNE-Python, such as\n:func:mne.read_epochs_eeglab and :func:mne.read_epochs_kit.",
"epochs.save('saved-audiovisual-epo.fif', overwrite=True)\nepochs_from_file = mne.read_epochs('saved-audiovisual-epo.fif', preload=False)",
"The MNE-Python naming convention for epochs files is that the file basename\n(the part before the .fif or .fif.gz extension) should end with\n-epo or _epo, and a warning will be issued if the filename you\nprovide does not adhere to that convention.\nAs a final note, be aware that the class of the epochs object is different\nwhen epochs are loaded from disk rather than generated from a\n:class:~mne.io.Raw object:",
"print(type(epochs))\nprint(type(epochs_from_file))",
"In almost all cases this will not require changing anything about your code.\nHowever, if you need to do type checking on epochs objects, you can test\nagainst the base class that these classes are derived from:",
"print(all([isinstance(epochs, mne.BaseEpochs),\n isinstance(epochs_from_file, mne.BaseEpochs)]))",
"Iterating over Epochs\nIterating over an :class:~mne.Epochs object will yield :class:arrays\n<numpy.ndarray> rather than single-trial :class:~mne.Epochs objects:",
"for epoch in epochs[:3]:\n print(type(epoch))",
"If you want to iterate over :class:~mne.Epochs objects, you can use an\ninteger index as the iterator:",
"for index in range(3):\n print(type(epochs[index]))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Yu-Group/scikit-learn-sandbox
|
jupyter/backup_deprecated_nbs/20_refined_combined_run.ipynb
|
mit
|
[
"Key Requirements for the iRF scikit-learn implementation\n\nThe following is a documentation of the main requirements for the iRF implementation\n\nTypical Setup\nImport the required dependencies\n\nIn particular irf_utils and irf_jupyter_utils",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\nfrom sklearn.datasets import load_breast_cancer\nimport numpy as np\nfrom functools import reduce\n\n# Import our custom utilities\nfrom imp import reload\nfrom utils import irf_jupyter_utils\nfrom utils import irf_utils\nreload(irf_jupyter_utils)\nreload(irf_utils)",
"Step 1: Fit the Initial Random Forest\n\nJust fit every feature with equal weights per the usual random forest code e.g. DecisionForestClassifier in scikit-learn",
"load_breast_cancer = load_breast_cancer()\n\nX_train, X_test, y_train, y_test, rf = irf_jupyter_utils.generate_rf_example(n_estimators=10)",
"Check out the data",
"print(\"Training feature dimensions\", X_train.shape, sep = \":\\n\")\nprint(\"\\n\")\nprint(\"Training outcome dimensions\", y_train.shape, sep = \":\\n\")\nprint(\"\\n\")\nprint(\"Test feature dimensions\", X_test.shape, sep = \":\\n\")\nprint(\"\\n\")\nprint(\"Test outcome dimensions\", y_test.shape, sep = \":\\n\")\nprint(\"\\n\")\nprint(\"first 5 rows of the training set features\", X_train[:2], sep = \":\\n\")\nprint(\"\\n\")\nprint(\"first 5 rows of the training set outcomes\", y_train[:2], sep = \":\\n\")",
"Step 2: Get all Random Forest and Decision Tree Data\n\nExtract in a single dictionary the random forest data and for all of it's decision trees\nThis is as required for RIT purposes",
"all_rf_tree_data = irf_utils.get_rf_tree_data(rf=rf,\n X_train=X_train, y_train=y_train, \n X_test=X_test, y_test=y_test)",
"STEP 3: Get the RIT data and produce RITs",
"all_rit_tree_data = irf_utils.get_rit_tree_data(\n all_rf_tree_data=all_rf_tree_data,\n bin_class_type=1,\n random_state=12,\n M=10,\n max_depth=3,\n noisy_split=False,\n num_splits=2)",
"Perform Manual CHECKS on the irf_utils\n\nThese should be converted to unit tests and checked with nosetests -v test_irf_utils.py\n\nStep 4: Plot some Data\nList Ranked Feature Importances",
"# Print the feature ranking\nprint(\"Feature ranking:\")\n\nfeature_importances_rank_idx = all_rf_tree_data['feature_importances_rank_idx']\nfeature_importances = all_rf_tree_data['feature_importances']\n\nfor f in range(X_train.shape[1]):\n print(\"%d. feature %d (%f)\" % (f + 1\n , feature_importances_rank_idx[f]\n , feature_importances[feature_importances_rank_idx[f]]))",
"Plot Ranked Feature Importances",
"# Plot the feature importances of the forest\nfeature_importances_std = all_rf_tree_data['feature_importances_std']\n\nplt.figure()\nplt.title(\"Feature importances\")\nplt.bar(range(X_train.shape[1])\n , feature_importances[feature_importances_rank_idx]\n , color=\"r\"\n , yerr = feature_importances_std[feature_importances_rank_idx], align=\"center\")\nplt.xticks(range(X_train.shape[1]), feature_importances_rank_idx)\nplt.xlim([-1, X_train.shape[1]])\nplt.show()",
"Decision Tree 0 (First) - Get output\nCheck the output against the decision tree graph",
"# Now plot the trees individually\nirf_jupyter_utils.draw_tree(decision_tree = all_rf_tree_data['rf_obj'].estimators_[0])",
"Compare to our dict of extracted data from the tree",
"irf_jupyter_utils.pretty_print_dict(inp_dict = all_rf_tree_data['dtree0'])\n\n# Count the number of samples passing through the leaf nodes\nsum(all_rf_tree_data['dtree0']['tot_leaf_node_values'])",
"Check output against the diagram",
"irf_jupyter_utils.pretty_print_dict(inp_dict = all_rf_tree_data['dtree0']['all_leaf_paths_features'])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |