| repo_name | path | license | cells | types |
|---|---|---|---|---|
Applied-Groundwater-Modeling-2nd-Ed/Chapter_5_problems-1
|
P5.1_Flopy_Island_Recharge_grid_sensitivity.ipynb
|
gpl-2.0
|
[
"<img src=\"AW&H2015.tiff\" style=\"float: left\">\n<img src=\"flopylogo.png\" style=\"float: center\">\nProblem P5.1 Island Recharge Grid Sensitivity\nIn Problem P5.1 from pages 244-245 in Anderson, Woessner and Hunt (2015), we are asked to construct an areal 2D model to assess impacts of grid sensitivity on pumping. Develop a 2D areal model using Flopy to solve for heads in the upper right-hand quadrant of the island shown in Fig. P5.1. The aquifer is confined, homogeneous, and isotropic with transmissivity, T, equal to 10,000 ft2/day. Recharge, R, occurs uniformly through a leaky confining bed at a rate of 0.00305 ft/day. The half-width of the island, l, is 12,000 ft. The head at the perimeter of the island is at sea level (use h = 0 ft). The heads are symmetric across the groundwater divides that separate the island into four quadrants (Fig. P5.1). Use a point-centered FD grid so that the node at the observation well in the center of the island (Fig. P5.1) is located directly on the groundwater divides that form the lefthand side and lower boundary of the quadrant model.\n<img src=\"P5.1_figure.tiff\" style=\"float: center\">\nInclude a water budget in your model. The inflow to the model is the volume of water entering from recharge. In this notebook, we will work through the problem again using MODFLOW and the Python tool set Flopy. Notice how much code is reused from previous examples. Note also, P5.1 gives directions for calculating the waterbudget; because in this exercise we are using MODFLOW it handles all water budget calculations for us. \nPart a.\nWrite the mathematical model for this problem including the governing equation and boundary conditions of the quadrant model.\nPart b.\nSolve the model using an error tolerance equal to 1E-4 ft and test two designs\nfor the nodal network: (1) a 4 x 7 array of nodes (delta x = delta y = 4000 ft); (2) a\n13 x 25 array of nodes (delta x = delta y= 1000 ft). For each nodal network, print out the head solution to the fourth decimal place.\nBelow is an iPython Notebook that builds a Python MODFLOW model for this problem and plots results. See the Github wiki associated with this Chapter for information on one suggested installation and setup configuration for Python and iPython Notebook.\n[Acknowledgements: This tutorial was created by Randy Hunt and all failings are mine. The exercise here has benefited greatly from the online Flopy tutorial and example notebooks developed by Chris Langevin and Joe Hughes for the USGS Spring 2015 Python Training course GW1774]\nCreating the Model\nIn this example, we will create a simple groundwater flow modelusing the Flopy website approach. Visit the tutorial website here.\nSetup the Notebook Environment and Import Flopy\nLoad a few standard libraries, and then load flopy.",
"%matplotlib inline\nimport sys\nimport os\nimport shutil\nimport numpy as np\nfrom subprocess import check_output\n\n# Import flopy\nimport flopy",
"Setup a New Directory and Change Paths\nFor this tutorial, we will work in a new subdirectory underneath the directory where the notebook is located. We can use some fancy Python tools to help us manage the directory creation. Note that if you encounter path problems with this workbook, you can stop and then restart the kernel and the paths will be reset.",
"# Set the name of the path to the model working directory\ndirname = \"P5-1_Island_recharge\"\ndatapath = os.getcwd()\nmodelpath = os.path.join(datapath, dirname)\nprint 'Name of model path: ', modelpath\n\n# Now let's check if this directory exists. If not, then we will create it.\nif os.path.exists(modelpath):\n print 'Model working directory already exists.'\nelse:\n print 'Creating model working directory.'\n os.mkdir(modelpath)",
"Define the Model Extent, Grid Resolution, and Characteristics\nIt is normally good practice to group things that you might want to change into a single code block. This makes it easier to make changes and rerun the code.",
"# model domain and grid definition\n# for clarity, user entered variables are all caps; python syntax are lower case or mixed case\n# This is an areal 2D model that uses island symmetry to reduce the grid size.\nLX = 16000. # half width of island + one node for constant head boundary condition\nLY = 28000. # half height of island + one node for constant head boundary condition\nZTOP = 0. # the system is confined \nZBOT = -50.\nNLAY = 1\nNROW = 7\nNCOL = 4\nDELR = LX / NCOL # recall that MODFLOW convention is DELR is along a row, thus has items = NCOL; see page XXX in AW&H (2015)\nDELC = LY / NROW # recall that MODFLOW convention is DELC is along a column, thus has items = NROW; see page XXX in AW&H (2015)\nDELV = (ZTOP - ZBOT) / NLAY\nBOTM = np.linspace(ZTOP, ZBOT, NLAY + 1)\nHK = 200.\nVKA = 1.\nRCH = 0.00305 \n# WELLQ = 0. #not needed for Problem 5.1\nprint \"DELR =\", DELR, \" DELC =\", DELC, ' DELV =', DELV\nprint \"BOTM =\", BOTM\nprint \"Recharge =\", RCH \n#print \"Pumping well rate =\", WELLQ\n",
"Create the MODFLOW Model Object\nCreate a flopy MODFLOW object: flopy.modflow.Modflow.",
"# Assign name and create modflow model object\nmodelname = 'P5-1'\n#exe_name = os.path.join(datapath, 'mf2005.exe') # for Windows OS\nexe_name = os.path.join(datapath, 'mf2005') # for Mac OS\nprint 'Model executable: ', exe_name\nMF = flopy.modflow.Modflow(modelname, exe_name=exe_name, model_ws=modelpath)",
"Discretization Package\nCreate a flopy discretization package object: flopy.modflow.ModflowDis.",
"# Create the discretization object\nTOP = ZTOP * np.ones((NROW, NCOL),dtype=np.float)\n\nDIS_PACKAGE = flopy.modflow.ModflowDis(MF, NLAY, NROW, NCOL, delr=DELR, delc=DELC,\n top=TOP, botm=BOTM[1:], laycbd=0)\n# print DIS_PACKAGE #uncomment this on far left to see information about the flopy object",
"Basic Package\nCreate a flopy basic package object: flopy.modflow.ModflowBas.",
"# Variables for the BAS package\nIBOUND = np.ones((NLAY, NROW, NCOL), dtype=np.int32) # all nodes are active (IBOUND = 1)\n\n# make the top of the profile specified head by setting the IBOUND = -1\nIBOUND[:, 0, :] = -1 #don't forget arrays are zero-based! Sets first row\nIBOUND[:, :, 0] = -1 # Sets first column\nprint IBOUND\n\nSTRT = 1 * np.ones((NLAY, NROW, NCOL), dtype=np.float32) # set starting head to 1` through out model domain\nSTRT[:, 0, :] = 0. # top row ocean elevation for setting constant head\nSTRT[:, :, 0] = 0. # first column ocean elevation for setting constant head\nprint STRT\n\nBAS_PACKAGE = flopy.modflow.ModflowBas(MF, ibound=IBOUND, strt=STRT)\n# print BAS_PACKAGE # uncomment this at far left to see the information about the flopy BAS object",
"Layer Property Flow Package\nCreate a flopy layer property flow package object: flopy.modflow.ModflowLpf.",
"LPF_PACKAGE = flopy.modflow.ModflowLpf(MF, laytyp=1, hk=HK, vka=VKA) # we defined the K and anisotropy at top of file\n# print LPF_PACKAGE # uncomment this at far left to see the information about the flopy LPF object",
"Well Package\nThis is not needed for Problem P5.1",
"#WEL_PACKAGE = flopy.modflow.ModflowWel(MF, stress_period_data=[0,0,0,WELLQ]) # remember python 0 index, layer 0 = layer 1 in MF\n#print WEL_PACKAGE # uncomment this at far left to see the information about the flopy WEL object",
"Output Control\nCreate a flopy output control object: flopy.modflow.ModflowOc.",
"OC_PACKAGE = flopy.modflow.ModflowOc(MF) # we'll use the defaults for the model output\n# print OC_PACKAGE # uncomment this at far left to see the information about the flopy OC object",
"Preconditioned Conjugate Gradient Solver\nCreate a flopy pcg package object: flopy.modflow.ModflowPcg.",
"PCG_PACKAGE = flopy.modflow.ModflowPcg(MF, mxiter=500, iter1=100, hclose=1e-04, rclose=1e-03, relax=0.98, damp=0.5) \n# print PCG_PACKAGE # uncomment this at far left to see the information about the flopy PCG object",
"Recharge Package\nCreate a flopy pcg package object: flopy.modflow.ModflowRch.",
"RCH_PACKAGE = flopy.modflow.ModflowRch(MF, rech=RCH)\n# print RCH_PACKAGE # uncomment this at far left to see the information about the flopy RCH object",
"Writing the MODFLOW Input Files\nBefore we create the model input datasets, we can do some directory cleanup to make sure that we don't accidently use old files.",
"#Before writing input, destroy all files in folder to prevent reusing old files\n#Here's the working directory\nprint modelpath\n#Here's what's currently in the working directory\nmodelfiles = os.listdir(modelpath)\nprint modelfiles\n\n#delete these files to prevent us from reading old results\nmodelfiles = os.listdir(modelpath)\nfor filename in modelfiles:\n f = os.path.join(modelpath, filename)\n if modelname in f:\n try:\n os.remove(f)\n print 'Deleted: ', filename\n except:\n print 'Unable to delete: ', filename\n\n#Now write the model input files\nMF.write_input()",
"The model datasets are written using a single command (mf.write_input).\nCheck in the model working directory and verify that the input files have been created. Or if you might just add another cell, right after this one, that prints a list of all the files in our model directory. The path we are working in is returned from this next block.",
"# return current working directory\nprint \"You can check the newly created files in\", modelpath\n",
"Running the Model\nFlopy has several methods attached to the model object that can be used to run the model. They are run_model, run_model2, and run_model3. Here we use run_model3, which will write output to the notebook.",
"silent = False #Print model output to screen?\npause = False #Require user to hit enter? Doesn't mean much in IPython notebook\nreport = True #Store the output from the model in buff\nsuccess, buff = MF.run_model(silent=silent, pause=pause, report=report)",
"Post Processing the Results\nTo read heads from the MODFLOW binary output file, we can use the flopy.utils.binaryfile module. Specifically, we can use the HeadFile object from that module to extract head data arrays.",
"#imports for plotting and reading the MODFLOW binary output file\nimport matplotlib.pyplot as plt\nimport flopy.utils.binaryfile as bf\n\n#Create the headfile object and grab the results for last time.\nheadfile = os.path.join(modelpath, modelname + '.hds')\nheadfileobj = bf.HeadFile(headfile)\n\n#Get a list of times that are contained in the model\ntimes = headfileobj.get_times()\nprint 'Headfile (' + modelname + '.hds' + ') contains the following list of times: ', times\n\n#Get a numpy array of heads for totim = 1.0\n#The get_data method will extract head data from the binary file.\nHEAD = headfileobj.get_data(totim=1.0)\n\n#Print statistics on the head\nprint 'Head statistics'\nprint ' min: ', HEAD.min()\nprint ' max: ', HEAD.max()\nprint ' std: ', HEAD.std()\n\n#Create a contour plot of heads\nFIG = plt.figure(figsize=(12,10))\n\n#setup contour levels and plot extent\nLEVELS = np.arange(0., 26., 5.)\nEXTENT = (DELR/2., LX - DELR/2., DELC/2., LY - DELC/2.)\nprint 'Contour Levels: ', LEVELS\nprint 'Extent of domain: ', EXTENT\n\n#Make a contour plot on the first axis\nAX1 = FIG.add_subplot(1, 2, 1, aspect='equal')\nAX1.set_xlabel(\"x\")\nAX1.set_ylabel(\"y\")\nYTICKS = np.arange(0, 28000, 4000)\nAX1.set_yticks(YTICKS)\nAX1.set_title(\"P5.1 Island Recharge Problem\")\nAX1.text(2250, 10000, r\"side ocean boundary condition\", fontsize=10, color=\"blue\", rotation='vertical')\nAX1.text(5500, 25000, r\"top ocean boundary condition\", fontsize=10, color=\"blue\")\nAX1.contour(np.flipud(HEAD[0, :, :]), levels=LEVELS, extent=EXTENT)\n\n#Make a color flood on the second axis\nAX2 = FIG.add_subplot(1, 2, 2, aspect='equal')\nAX2.set_xlabel(\"x\")\nAX2.set_ylabel(\"y\")\nAX2.set_yticks(YTICKS)\nAX2.set_title(\"P5.1 color flood\")\nAX2.text(5500, 25000, r\"top ocean boundary condition\", fontsize=10, color=\"white\")\nAX2.text(2500, 10500, r\"side ocean boundary condition\", fontsize=10, color=\"white\", rotation='vertical')\ncax = AX2.imshow(HEAD[0, :, :], extent=EXTENT, interpolation='nearest')\ncbar = FIG.colorbar(cax, orientation='vertical', shrink=0.45)\n\n\nprint HEAD",
"Look at the bottom of the MODFLOW output file (ending with a *.list) and write down the water balance reported to compare to the less coarse grid size done in the next part of the problem. \nChanging the grid size and rerunning/plotting\nRecall that the second part of Part b is to redo the problem with a finer grid spacing: a 13 x 25 array of nodes (delta x = delta y= 1000 ft).",
"LX = 13000. # same as before - half width of island + one node for constant head boundary condition\nLY = 25000. # same as before - half height of island + one node for constant head boundary condition\nNLAY = 1\nNROW = 25\nNCOL = 13\nDELR = LX / NCOL # recall that MODFLOW convention is DELR is along a row, thus has items = NCOL; see page XXX in AW&H (2015)\nDELC = LY / NROW # recall that MODFLOW convention is DELC is along a column, thus has items = NROW; see page XXX in AW&H (2015)\nDELV = (ZTOP - ZBOT) / NLAY\nBOTM = np.linspace(ZTOP, ZBOT, NLAY + 1)\nHK = 200.\nVKA = 1.\nRCH = 0.00305 \n# WELLQ = 0. #not needed for Problem 5.1\nprint \"DELR =\", DELR, \" DELC =\", DELC, ' DELV =', DELV\nprint \"BOTM =\", BOTM\nprint \"Recharge =\", RCH \n\n# Create the discretization object\nTOP = ZTOP * np.ones((NROW, NCOL),dtype=np.float)\nDIS_PACKAGE = flopy.modflow.ModflowDis(MF, NLAY, NROW, NCOL, delr=DELR, delc=DELC,\n top=TOP, botm=BOTM[1:], laycbd=0)\n\n# Variables for the BAS package\nIBOUND = np.ones((NLAY, NROW, NCOL), dtype=np.int32) # all nodes are active (IBOUND = 1)\n\n# make the top of the profile specified head by setting the IBOUND = -1\nIBOUND[:, 0, :] = -1 #don't forget arrays are zero-based! Sets first row\nIBOUND[:, :, 0] = -1 # Sets first column\nprint IBOUND\n\nSTRT = 1 * np.ones((NLAY, NROW, NCOL), dtype=np.float32) # set starting head to 1` through out model domain\nSTRT[:, 0, :] = 0. # top row ocean elevation for setting constant head\nSTRT[:, :, 0] = 0. # first column ocean elevation for setting constant head\nprint STRT\n\nBAS_PACKAGE = flopy.modflow.ModflowBas(MF, ibound=IBOUND, strt=STRT)\n# print BAS_PACKAGE # uncomment this at far left to see the information about the flopy BAS object\n\n#delete earlier files to prevent us from reading old results\nmodelfiles = os.listdir(modelpath)\nfor filename in modelfiles:\n f = os.path.join(modelpath, filename)\n if modelname in f:\n try:\n os.remove(f)\n print 'Deleted: ', filename\n except:\n print 'Unable to delete: ', filename\n\n#Now write the model input files\nMF.write_input()\n# return current working directory\nprint \"You can check the newly created files in\", modelpath\n\n#Run MODFLOW\nsilent = False #Print model output to screen?\npause = False #Require user to hit enter? 
Doesn't mean much in Ipython notebook\nreport = True #Store the output from the model in buff\nsuccess, buff = MF.run_model(silent=silent, pause=pause, report=report)\n\n#imports for plotting and reading the MODFLOW binary output file\nimport matplotlib.pyplot as plt\nimport flopy.utils.binaryfile as bf\n\n#Create the headfile object and grab the results for last time.\nheadfile = os.path.join(modelpath, modelname + '.hds')\nheadfileobj = bf.HeadFile(headfile)\n\n#Get a list of times that are contained in the model\ntimes = headfileobj.get_times()\nprint 'Headfile (' + modelname + '.hds' + ') contains the following list of times: ', times\n#Get a numpy array of heads for totim = 1.0\n#The get_data method will extract head data from the binary file.\nHEAD = headfileobj.get_data(totim=1.0)\n\n#Print statistics on the head\nprint 'Head statistics'\nprint ' min: ', HEAD.min()\nprint ' max: ', HEAD.max()\nprint ' std: ', HEAD.std()\n\n#Create a contour plot of heads\nFIG = plt.figure(figsize=(12,10))\n\n#setup contour levels and plot extent\nLEVELS = np.arange(0., 26., 5.)\nEXTENT = (DELR/2., LX - DELR/2., DELC/2., LY - DELC/2.)\nprint 'Contour Levels: ', LEVELS\nprint 'Extent of domain: ', EXTENT\n\n#Make a contour plot on the first axis\nAX1 = FIG.add_subplot(1, 2, 1, aspect='equal')\nAX1.set_xlabel(\"x\")\nAX1.set_ylabel(\"y\")\nYTICKS = np.arange(0, 28000, 4000)\nAX1.set_yticks(YTICKS)\nAX1.set_title(\"P5.1 Island Recharge Problem\")\nAX1.text(900, 10000, r\"side ocean boundary condition\", fontsize=10, color=\"blue\", rotation='vertical')\nAX1.text(4000, 23900, r\"top ocean boundary condition\", fontsize=10, color=\"blue\")\nAX1.contour(np.flipud(HEAD[0, :, :]), levels=LEVELS, extent=EXTENT)\n\n#Make a color flood on the second axis\nAX2 = FIG.add_subplot(1, 2, 2, aspect='equal')\nAX2.set_xlabel(\"x\")\nAX2.set_ylabel(\"y\")\nAX2.set_yticks(YTICKS)\nAX2.set_title(\"P5.1 color flood\")\nAX2.text(4000, 23900, r\"top ocean boundary condition\", fontsize=10, color=\"white\")\nAX2.text(600, 10500, r\"side ocean boundary condition\", fontsize=10, color=\"white\", rotation='vertical')\ncax = AX2.imshow(HEAD[0, :, :], extent=EXTENT, interpolation='nearest')\ncbar = FIG.colorbar(cax, orientation='vertical', shrink=0.45)\n\nprint HEAD",
"Write down the new mass balance information for this 1000 ft grid size run (or rename the *.list file). Compare your previous result obtained with the 4000 foot grid size.\nP5.1 Part e.\nRun your model with a nodal spacing of 500 ft and again with a nodal spacing of 250 ft. Compare head values and the volumetric discharge rate at the shoreline for all four nodal networks. Do you think a nodal spacing of 1000 ft is adequate for this problem? Justify your answer.",
"# Same as above but now 500 foot nodal spacing\nLX = 12500. # same as before - half width of island + one node for constant head boundary condition\nLY = 24500. # same as before - half height of island + one node for constant head boundary condition\nNLAY = 1\nNROW = 49\nNCOL =25\nDELR = LX / NCOL # recall that MODFLOW convention is DELR is along a row, thus has items = NCOL; see page XXX in AW&H (2015)\nDELC = LY / NROW # recall that MODFLOW convention is DELC is along a column, thus has items = NROW; see page XXX in AW&H (2015)\nDELV = (ZTOP - ZBOT) / NLAY\nBOTM = np.linspace(ZTOP, ZBOT, NLAY + 1)\nHK = 200.\nVKA = 1.\nRCH = 0.00305 \n# WELLQ = 0. #not needed for Problem 5.1\nprint \"DELR =\", DELR, \" DELC =\", DELC, ' DELV =', DELV\nprint \"BOTM =\", BOTM\nprint \"Recharge =\", RCH \n\n# Create the discretization object\nTOP = ZTOP * np.ones((NROW, NCOL),dtype=np.float)\nDIS_PACKAGE = flopy.modflow.ModflowDis(MF, NLAY, NROW, NCOL, delr=DELR, delc=DELC,\n top=TOP, botm=BOTM[1:], laycbd=0)\n# Variables for the BAS package\nIBOUND = np.ones((NLAY, NROW, NCOL), dtype=np.int32) # all nodes are active (IBOUND = 1)\n\n# make the top of the profile specified head by setting the IBOUND = -1\nIBOUND[:, 0, :] = -1 #don't forget arrays are zero-based! Sets first row\nIBOUND[:, :, 0] = -1 # Sets first column\nprint IBOUND\n\nSTRT = 1 * np.ones((NLAY, NROW, NCOL), dtype=np.float32) # set starting head to 1` through out model domain\nSTRT[:, 0, :] = 0. # top row ocean elevation for setting constant head\nSTRT[:, :, 0] = 0. # first column ocean elevation for setting constant head\nprint STRT\n\nBAS_PACKAGE = flopy.modflow.ModflowBas(MF, ibound=IBOUND, strt=STRT)\n# print BAS_PACKAGE # uncomment this at far left to see the information about the flopy BAS object\n#delete earlier files to prevent us from reading old results\nmodelfiles = os.listdir(modelpath)\nfor filename in modelfiles:\n f = os.path.join(modelpath, filename)\n if modelname in f:\n try:\n os.remove(f)\n print 'Deleted: ', filename\n except:\n print 'Unable to delete: ', filename\n#Now write the model input files\nMF.write_input()\n# return current working directory\nprint \"New files written. You can check them in\", modelpath\n#Run MODFLOW\nsilent = False #Print model output to screen?\npause = False #Require user to hit enter? Doesn't mean much in Ipython notebook\nreport = True #Store the output from the model in buff\nsuccess, buff = MF.run_model(silent=silent, pause=pause, report=report)\n#imports for plotting and reading the MODFLOW binary output file\nimport matplotlib.pyplot as plt\nimport flopy.utils.binaryfile as bf\n\n#Create the headfile object and grab the results for last time.\nheadfile = os.path.join(modelpath, modelname + '.hds')\nheadfileobj = bf.HeadFile(headfile)\n\n#Get a list of times that are contained in the model\ntimes = headfileobj.get_times()\nprint 'Headfile (' + modelname + '.hds' + ') contains the following list of times: ', times\n#Get a numpy array of heads for totim = 1.0\n#The get_data method will extract head data from the binary file.\nHEAD = headfileobj.get_data(totim=1.0)\n\n#Print statistics on the head\nprint 'Head statistics'\nprint ' min: ', HEAD.min()\nprint ' max: ', HEAD.max()\nprint ' std: ', HEAD.std()",
"Note that the max head is getting closer to the analytical solution of 20 ft.",
"#Create a contour plot of heads\nFIG = plt.figure(figsize=(12,10))\n\n#setup contour levels and plot extent\nLEVELS = np.arange(0., 26., 5.)\nEXTENT = (DELR/2., LX - DELR/2., DELC/2., LY - DELC/2.)\nprint 'Contour Levels: ', LEVELS\nprint 'Extent of domain: ', EXTENT\n\n#Make a contour plot on the first axis\nAX1 = FIG.add_subplot(1, 2, 1, aspect='equal')\nAX1.set_xlabel(\"x\")\nAX1.set_ylabel(\"y\")\nYTICKS = np.arange(0, 28000, 4000)\nAX1.set_yticks(YTICKS)\nAX1.set_title(\"P5.1 Island Recharge Problem\")\nAX1.text(500, 10000, r\"side ocean boundary condition\", fontsize=10, color=\"blue\", rotation='vertical')\nAX1.text(4000, 23900, r\"top ocean boundary condition\", fontsize=10, color=\"blue\")\nAX1.contour(np.flipud(HEAD[0, :, :]), levels=LEVELS, extent=EXTENT)\n\n#Make a color flood on the second axis\nAX2 = FIG.add_subplot(1, 2, 2, aspect='equal')\nAX2.set_xlabel(\"x\")\nAX2.set_ylabel(\"y\")\nAX2.set_yticks(YTICKS)\nAX2.set_title(\"P5.1 color flood\")\nAX2.text(4000, 23900, r\"top ocean boundary condition\", fontsize=10, color=\"white\")\nAX2.text(400, 10500, r\"side ocean boundary condition\", fontsize=10, color=\"white\", rotation='vertical')\ncax = AX2.imshow(HEAD[0, :, :], extent=EXTENT, interpolation='nearest')\ncbar = FIG.colorbar(cax, orientation='vertical', shrink=0.45)",
"Again write down the mass balance information from the *.list file for comparison to other grid sizes.",
"# Same as above but now 250 foot nodal spacing\nLX = 12250. # same as before - half width of island + one node for constant head boundary condition\nLY = 24250. # same as before - half height of island + one node for constant head boundary condition\nNLAY = 1\nNROW = 97\nNCOL =49\nDELR = LX / NCOL # recall that MODFLOW convention is DELR is along a row, thus has items = NCOL; see page XXX in AW&H (2015)\nDELC = LY / NROW # recall that MODFLOW convention is DELC is along a column, thus has items = NROW; see page XXX in AW&H (2015)\nDELV = (ZTOP - ZBOT) / NLAY\nBOTM = np.linspace(ZTOP, ZBOT, NLAY + 1)\nHK = 200.\nVKA = 1.\nRCH = 0.00305 \n# WELLQ = 0. #not needed for Problem 5.1\nprint \"DELR =\", DELR, \" DELC =\", DELC, ' DELV =', DELV\nprint \"BOTM =\", BOTM\nprint \"Recharge =\", RCH \n\n# Create the discretization object\nTOP = ZTOP * np.ones((NROW, NCOL),dtype=np.float)\nDIS_PACKAGE = flopy.modflow.ModflowDis(MF, NLAY, NROW, NCOL, delr=DELR, delc=DELC,\n top=TOP, botm=BOTM[1:], laycbd=0)\n# Variables for the BAS package\nIBOUND = np.ones((NLAY, NROW, NCOL), dtype=np.int32) # all nodes are active (IBOUND = 1)\n\n# make the top of the profile specified head by setting the IBOUND = -1\nIBOUND[:, 0, :] = -1 #don't forget arrays are zero-based! Sets first row\nIBOUND[:, :, 0] = -1 # Sets first column\nSTRT = 1 * np.ones((NLAY, NROW, NCOL), dtype=np.float32) # set starting head to 1` through out model domain\nSTRT[:, 0, :] = 0. # top row ocean elevation for setting constant head\nSTRT[:, :, 0] = 0. # first column ocean elevation for setting constant head\nBAS_PACKAGE = flopy.modflow.ModflowBas(MF, ibound=IBOUND, strt=STRT)\n# print BAS_PACKAGE # uncomment this at far left to see the information about the flopy BAS object\n#delete earlier files to prevent us from reading old results\nmodelfiles = os.listdir(modelpath)\nfor filename in modelfiles:\n f = os.path.join(modelpath, filename)\n if modelname in f:\n try:\n os.remove(f)\n print 'Deleted: ', filename\n except:\n print 'Unable to delete: ', filename\n#Now write the model input files\nMF.write_input()\n# return current working directory\nprint \"New files written. You can check them in\", modelpath\n#Run MODFLOW\nsilent = False #Print model output to screen?\npause = False #Require user to hit enter? Doesn't mean much in Ipython notebook\nreport = True #Store the output from the model in buff\nsuccess, buff = MF.run_model(silent=silent, pause=pause, report=report)\n#imports for plotting and reading the MODFLOW binary output file\nimport matplotlib.pyplot as plt\nimport flopy.utils.binaryfile as bf\n\n#Create the headfile object and grab the results for last time.\nheadfile = os.path.join(modelpath, modelname + '.hds')\nheadfileobj = bf.HeadFile(headfile)\n\n#Get a list of times that are contained in the model\ntimes = headfileobj.get_times()\nprint 'Headfile (' + modelname + '.hds' + ') contains the following list of times: ', times\n#Get a numpy array of heads for totim = 1.0\n#The get_data method will extract head data from the binary file.\nHEAD = headfileobj.get_data(totim=1.0)\n\n#Print statistics on the head\nprint 'Head statistics'\nprint ' min: ', HEAD.min()\nprint ' max: ', HEAD.max()\nprint ' std: ', HEAD.std()",
"Note that the max head is again getting closer to the analytical solution of 20 ft.",
"#Create a contour plot of heads\nFIG = plt.figure(figsize=(12,10))\n\n#setup contour levels and plot extent\nLEVELS = np.arange(0., 26., 5.)\nEXTENT = (DELR/2., LX - DELR/2., DELC/2., LY - DELC/2.)\nprint 'Contour Levels: ', LEVELS\nprint 'Extent of domain: ', EXTENT\n\n#Make a contour plot on the first axis\nAX1 = FIG.add_subplot(1, 2, 1, aspect='equal')\nAX1.set_xlabel(\"x\")\nAX1.set_ylabel(\"y\")\nYTICKS = np.arange(0, 28000, 4000)\nAX1.set_yticks(YTICKS)\nAX1.set_title(\"P5.1 Island Recharge Problem\")\nAX1.text(220, 10000, r\"side ocean boundary condition\", fontsize=10, color=\"blue\", rotation='vertical')\nAX1.text(4000, 23800, r\"top ocean boundary condition\", fontsize=10, color=\"blue\")\nAX1.contour(np.flipud(HEAD[0, :, :]), levels=LEVELS, extent=EXTENT)\n\n#Make a color flood on the second axis\nAX2 = FIG.add_subplot(1, 2, 2, aspect='equal')\nAX2.set_xlabel(\"x\")\nAX2.set_ylabel(\"y\")\nAX2.set_yticks(YTICKS)\nAX2.set_title(\"P5.1 color flood\")\nAX2.text(4000, 23800, r\"top ocean boundary condition\", fontsize=10, color=\"white\")\nAX2.text(200, 10000, r\"side ocean boundary condition\", fontsize=10, color=\"white\",rotation='vertical')\ncax = AX2.imshow(HEAD[0, :, :], extent=EXTENT, interpolation='nearest')\ncbar = FIG.colorbar(cax, orientation='vertical', shrink=0.45)",
"Again write down the mass balance information from the *.list file for the 250 foot grid spacing. Compare all grid sizes."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
HazyResearch/snorkel
|
tutorials/crowdsourcing/Crowdsourced_Sentiment_Analysis-pandas.ipynb
|
apache-2.0
|
[
"Training a Sentiment Analysis LSTM Using Noisy Crowd Labels\nThis is a version of the crowdsourcing tutorial that uses PySpark using Pandas instead of SparkSQL.\nIn this tutorial, we'll provide a simple walkthrough of how to use Snorkel to resolve conflicts in a noisy crowdsourced dataset for a sentiment analysis task, and then use these denoised labels to train an LSTM sentiment analysis model which can be applied to new, unseen data to automatically make predictions!\n\nCreating basic Snorkel objects: Candidates, Contexts, and Labels\nTraining the GenerativeModel to resolve labeling conflicts\nTraining a simple LSTM sentiment analysis model, which can then be used on new, unseen data!\n\nNote that this is a simple tutorial meant to give an overview of the mechanics of using Snorkel-- we'll note places where more careful fine-tuning could be done!\nTask Detail: Weather Sentiments in Tweets\nIn this tutorial we focus on the Weather sentiment task from Crowdflower.\nIn this task, contributors were asked to grade the sentiment of a particular tweet relating to the weather. Contributors could choose among the following categories:\n1. Positive\n2. Negative\n3. I can't tell\n4. Neutral / author is just sharing information\n5. Tweet not related to weather condition\nThe catch is that 20 contributors graded each tweet. Thus, in many cases contributors assigned conflicting sentiment labels to the same tweet. \nThe task comes with two data files (to be found in the data directory of the tutorial:\n1. weather-non-agg-DFE.csv contains the raw contributor answers for each of the 1,000 tweets.\n2. weather-evaluated-agg-DFE.csv contains gold sentiment labels by trusted workers for each of the 1,000 tweets.",
"%load_ext autoreload\n%autoreload 2\n%matplotlib inline\nimport os\nimport numpy as np\nfrom snorkel import SnorkelSession\nsession = SnorkelSession()",
"Step 1: Preprocessing - Data Loading\nWe load the raw data for our crowdsourcing task (stored in a local csv file) into a dataframe",
"import pandas as pd\n\n# Load Raw Crowdsourcing Data\nraw_crowd_answers = pd.read_csv(\"data/weather-non-agg-DFE.csv\")\n\n# Load Groundtruth Crowdsourcing Data\ngold_crowd_answers = pd.read_csv(\"data/weather-evaluated-agg-DFE.csv\")\n# Filter out low-confidence answers\ngold_answers = gold_crowd_answers[['tweet_id', 'sentiment', 'tweet_body']][(gold_crowd_answers.correct_category == 'Yes') & (gold_crowd_answers.correct_category_conf == 1)] \n\n# Keep Only the Tweets with Available Groundtruth\ncandidate_labeled_tweets = raw_crowd_answers.join(gold_answers.set_index('tweet_id',drop=False),on=['tweet_id'],lsuffix='.raw',rsuffix='.gold',how='inner')\ncandidate_labeled_tweets = candidate_labeled_tweets[['tweet_id.raw','tweet_body.raw','worker_id','emotion']]\ncandidate_labeled_tweets.columns = ['tweet_id','tweet_body','worker_id','emotion']",
"As mentioned above, contributors can provide conflicting labels for the same tweet:",
"candidate_labeled_tweets.sort_values(['worker_id','tweet_id']).head()",
"Step 2: Generating Snorkel Objects\nCandidates\nCandidates are the core objects in Snorkel representing objects to be classified. We'll use a helper function to create a custom Candidate sub-class, Tweet, with values representing the possible labels that it can be classified with:",
"from snorkel.models import candidate_subclass\n\nvalues = list(candidate_labeled_tweets.emotion.unique())\n\nTweet = candidate_subclass('Tweet', ['tweet'], values=values)",
"Contexts\nAll Candidate objects point to one or more Context objects, which represent the raw data that they are rooted in. In this case, our candidates will each point to a single Context object representing the raw text of the tweet.\nOnce we have defined the Context for each Candidate, we can commit them to the database. Note that we also split into two sets while doing this:\n\n\nTraining set (split=0): The tweets for which we have noisy, conflicting crowd labels; we will resolve these conflicts using the GenerativeModel and then use them as training data for the LSTM\n\n\nTest set (split=1): We will pretend that we do not have any crowd labels for this split of the data, and use these to test the LSTM's performance on unseen data",
"from snorkel.models import Context, Candidate\nfrom snorkel.contrib.models.text import RawText\n\n# Make sure DB is cleared\nsession.query(Context).delete()\nsession.query(Candidate).delete()\n\n# Now we create the candidates with a simple loop\ntweet_bodies = candidate_labeled_tweets \\\n [[\"tweet_id\", \"tweet_body\"]] \\\n .sort_values(\"tweet_id\") \\\n .drop_duplicates()\n\n# Generate and store the tweet candidates to be classified\n# Note: We split the tweets in two sets: one for which the crowd \n# labels are not available to Snorkel (test, 10%) and one for which we assume\n# crowd labels are obtained (to be used for training, 90%)\ntotal_tweets = len(tweet_bodies)\ntweet_list = []\ntest_split = total_tweets*0.1\nfor i, t in tweet_bodies.iterrows():\n split = 1 if i <= test_split else 0\n raw_text = RawText(stable_id=t.tweet_id, name=t.tweet_id, text=t.tweet_body)\n tweet = Tweet(tweet=raw_text, split=split)\n tweet_list.append(tweet)\n session.add(tweet)\nsession.commit()",
"Labels\nNext, we'll store the labels for each of the training candidates in a sparse matrix (which will also automatically be saved to the Snorkel database), with one row for each candidate and one column for each crowd worker:",
"from snorkel.annotations import LabelAnnotator\nfrom collections import defaultdict\n\n# Extract worker votes\n# Cache locally to speed up for this small set\nworker_labels = candidate_labeled_tweets[[\"tweet_id\", \"worker_id\", \"emotion\"]]\nwls = defaultdict(list)\nfor i, row in worker_labels.iterrows():\n wls[str(row.tweet_id)].append((str(row.worker_id), row.emotion))\n \n\n# Create a label generator\ndef worker_label_generator(t):\n \"\"\"A generator over the different (worker_id, label_id) pairs for a Tweet.\"\"\"\n for worker_id, label in wls[t.tweet.name]:\n yield worker_id, label\n \nlabeler = LabelAnnotator(label_generator=worker_label_generator)\n%time L_train = labeler.apply(split=0)\nL_train",
"Finally, we load the ground truth (\"gold\") labels for both the training and test sets, and store as numpy arrays\"",
"gold_labels = defaultdict(list)\n\n# Get gold labels in verbose form\nverbose_labels = dict([(str(t.tweet_id), t.sentiment) \n for i, t in gold_answers[[\"tweet_id\", \"sentiment\"]].iterrows()])\n\n# Iterate over splits, align with Candidate ordering\nfor split in range(2):\n cands = session.query(Tweet).filter(Tweet.split == split).order_by(Tweet.id).all() \n for c in cands:\n # Think this is just an odd way of label encoding between 1 and 5?\n gold_labels[split].append(values.index(verbose_labels[c.tweet.name]) + 1) \n \ntrain_cand_labels = np.array(gold_labels[0])\ntest_cand_labels = np.array(gold_labels[1])",
"Step 3: Resolving Crowd Conflicts with the Generative Model\nUntil now we have converted the raw crowdsourced data into a labeling matrix that can be provided as input to Snorkel. We will now show how to:\n\nUse Snorkel's generative model to learn the accuracy of each crowd contributor.\nUse the learned model to estimate a marginal distribution over the domain of possible labels for each task.\nUse the estimated marginal distribution to obtain the maximum a posteriori probability estimate for the label that each task takes.",
"# Imports\nfrom snorkel.learning.gen_learning import GenerativeModel\n\n# Initialize Snorkel's generative model for\n# learning the different worker accuracies.\ngen_model = GenerativeModel(lf_propensity=True)\n\n# Train the generative model\ngen_model.train(\n L_train,\n reg_type=2,\n reg_param=0.1,\n epochs=30\n)",
"Infering the MAP assignment for each task\nEach task corresponds to an independent random variable. Thus, we can simply associate each task with the most probably label based on the estimated marginal distribution and get an accuracy score:",
"accuracy = gen_model.score(L_train, train_cand_labels)\nprint(\"Accuracy: {:.10f}\".format(accuracy))",
"Majority vote\nIt seems like we did well- but how well? Given that this is a fairly simple task--we have 20 contributors per tweet (and most of them are far better than random)--we expect majority voting to perform extremely well, so we can check against majority vote:",
"from collections import Counter\n\n# Collect the majority vote answer for each tweet\nmv = []\nfor i in range(L_train.shape[0]):\n c = Counter([L_train[i,j] for j in L_train[i].nonzero()[1]])\n mv.append(c.most_common(1)[0][0])\nmv = np.array(mv)\n\n# Count the number correct by majority vote\nn_correct = np.sum([1 for i in range(L_train.shape[0]) if mv[i] == train_cand_labels[i]])\nprint (\"Accuracy:{}\".format(n_correct / float(L_train.shape[0])))\nprint (\"Number incorrect:{}\".format(L_train.shape[0] - n_correct))",
"We see that while majority vote makes 10 errors, the Snorkel model makes only 3! What about an average crowd worker?\nAverage human accuracy\nWe see that the average accuracy of a single crowd worker is in fact much lower:",
"accs = []\nfor j in range(L_train.shape[1]):\n n_correct = np.sum([1 for i in range(L_train.shape[0]) if L_train[i,j] == train_cand_labels[i]])\n acc = n_correct / float(L_train[:,j].nnz)\n accs.append(acc)\nprint( \"Mean Accuracy:{}\".format( np.mean(accs)))",
"Step 4: Training an ML Model with Snorkel for Sentiment Analysis over Unseen Tweets\nIn the previous step, we saw that Snorkel's generative model can help to denoise crowd labels automatically. However, what happens when we don't have noisy crowd labels for a tweet?\nIn this step, we'll use the estimates of the generative model as probabilistic training labels to train a simple LSTM sentiment analysis model, which takes as input a tweet for which no crowd labels are available and predicts its sentiment.\nFirst, we get the probabilistic training labels (training marginals) which are just the marginal estimates of the generative model:",
"train_marginals = gen_model.marginals(L_train)\n\nfrom snorkel.annotations import save_marginals\nsave_marginals(session, L_train, train_marginals)",
"Next, we'll train a simple LSTM:",
"from snorkel.learning.tensorflow import TextRNN\n\ntrain_kwargs = {\n 'lr': 0.01,\n 'dim': 100,\n 'n_epochs': 200,\n 'dropout': 0.2,\n 'print_freq': 5\n}\n\nlstm = TextRNN(seed=1701, cardinality=Tweet.cardinality)\ntrain_cands = session.query(Tweet).filter(Tweet.split == 0).order_by(Tweet.id).all()\nlstm.train(train_cands, train_marginals, **train_kwargs)\n\ntest_cands = session.query(Tweet).filter(Tweet.split == 1).order_by(Tweet.id).all()\naccuracy = lstm.score(test_cands, test_cand_labels)\nprint(\"Accuracy: {:.10f}\".format(accuracy))",
"We see that we're already close to the accuracy of an average crowd worker! If we wanted to improve the score, we could tune the LSTM model using grid search (see the Intro tutorial), use pre-trained word embeddings, or many other common techniques for getting state-of-the-art scores. Notably, we're doing this without using gold labels, but rather noisy crowd-labels!\nFor more, checkout the other tutorials!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
thesby/CaffeAssistant
|
tutorial/ipynb/mnist_siamese.ipynb
|
mit
|
[
"Setup\nImport Caffe and the usual modules.",
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Make sure that caffe is on the python path:\ncaffe_root = '../../' # this file is expected to be in {caffe_root}/examples/siamese\nimport sys\nsys.path.insert(0, caffe_root + 'python')\n\nimport caffe",
"Load the trained net\nLoad the model definition and weights and set to CPU mode TEST phase computation with input scaling.",
"MODEL_FILE = 'mnist_siamese.prototxt'\n# decrease if you want to preview during training\nPRETRAINED_FILE = 'mnist_siamese_iter_50000.caffemodel' \ncaffe.set_mode_cpu()\nnet = caffe.Net(MODEL_FILE, PRETRAINED_FILE, caffe.TEST)",
"Load some MNIST test data",
"TEST_DATA_FILE = '../../data/mnist/t10k-images-idx3-ubyte'\nTEST_LABEL_FILE = '../../data/mnist/t10k-labels-idx1-ubyte'\nn = 10000\n\nwith open(TEST_DATA_FILE, 'rb') as f:\n f.read(16) # skip the header\n raw_data = np.fromstring(f.read(n * 28*28), dtype=np.uint8)\n\nwith open(TEST_LABEL_FILE, 'rb') as f:\n f.read(8) # skip the header\n labels = np.fromstring(f.read(n), dtype=np.uint8)",
"Generate the Siamese features",
"# reshape and preprocess\ncaffe_in = raw_data.reshape(n, 1, 28, 28) * 0.00390625 # manually scale data instead of using `caffe.io.Transformer`\nout = net.forward_all(data=caffe_in)",
"Visualize the learned Siamese embedding",
"feat = out['feat']\nf = plt.figure(figsize=(16,9))\nc = ['#ff0000', '#ffff00', '#00ff00', '#00ffff', '#0000ff', \n '#ff00ff', '#990000', '#999900', '#009900', '#009999']\nfor i in range(10):\n plt.plot(feat[labels==i,0].flatten(), feat[labels==i,1].flatten(), '.', c=c[i])\nplt.legend(['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'])\nplt.grid()\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
plipp/informatica-pfr-2017
|
nbs/2/3-OPTIONAL-More-Pandas-Exercises.ipynb
|
mit
|
[
"[Optional] More Pandas Exercises\nOriginal Source: Coursera Introduction to Data Science in Python: Assignment 3\nAdditional Requirements\nbash\npip install xlrd\nExercise 1\nLoad the energy data from the file Energy Indicators.xls, which is a list of indicators of energy supply and renewable electricity production from the United Nations for the year 2013, and should be put into a DataFrame with the variable name of energy.\nKeep in mind that this is an Excel file, and not a comma separated values file. Also, make sure to exclude the footer and header information from the datafile. The first two columns are unneccessary, so you should get rid of them, and you should change the column labels so that the columns are:\n['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable']\nConvert Energy Supply to gigajoules (there are 1,000,000 gigajoules in a petajoule). For all countries which have missing data (e.g. data with \"...\") make sure this is reflected as np.NaN values.\nRename the following list of countries (for use in later questions):\n\"Republic of Korea\": \"South Korea\",\n\"United States of America\": \"United States\",\n\"United Kingdom of Great Britain and Northern Ireland\": \"United Kingdom\",\n\"China, Hong Kong Special Administrative Region\": \"Hong Kong\"\nThere are also several countries with numbers and/or parenthesis in their name. Be sure to remove these, \ne.g. \n'Bolivia (Plurinational State of)' should be 'Bolivia', \n'Switzerland17' should be 'Switzerland'.\n<br>\nNext, load the GDP data from the file world_bank.csv, which is a csv containing countries' GDP from 1960 to 2015 from World Bank. Call this DataFrame GDP. \nMake sure to skip the header, and rename the following list of countries:\n\"Korea, Rep.\": \"South Korea\", \n\"Iran, Islamic Rep.\": \"Iran\",\n\"Hong Kong SAR, China\": \"Hong Kong\"\n<br>\nFinally, load the Sciamgo Journal and Country Rank data for Energy Engineering and Power Technology from the file scimagojr-3.xlsx, which ranks countries based on their journal contributions in the aforementioned area. Call this DataFrame ScimEn.\nJoin the three datasets: GDP, Energy, and ScimEn into a new dataset (using the intersection of country names). Use only the last 10 years (2006-2015) of GDP data and only the top 15 countries by Scimagojr 'Rank' (Rank 1 through 15). \nThe index of this DataFrame should be the name of the country, and the columns should be ['Rank', 'Documents', 'Citable documents', 'Citations', 'Self-citations',\n 'Citations per document', 'H index', 'Energy Supply',\n 'Energy Supply per Capita', '% Renewable', '2006', '2007', '2008',\n '2009', '2010', '2011', '2012', '2013', '2014', '2015'].\nThis function should return a DataFrame with 20 columns and 15 entries.",
"import pandas as pd\nimport numpy as np\n\ndef top15_countries():\n pass # TODO\n\nTop15 = top15_countries()\nTop15",
"Exercise 2\nThe previous question joined three datasets then reduced this to just the top 15 entries. When you joined the datasets, but before you reduced this to the top 15 items, how many entries did you lose?\nThis function should return a single number.",
"%%HTML\n<svg width=\"800\" height=\"300\">\n <circle cx=\"150\" cy=\"180\" r=\"80\" fill-opacity=\"0.2\" stroke=\"black\" stroke-width=\"2\" fill=\"blue\" />\n <circle cx=\"200\" cy=\"100\" r=\"80\" fill-opacity=\"0.2\" stroke=\"black\" stroke-width=\"2\" fill=\"red\" />\n <circle cx=\"100\" cy=\"100\" r=\"80\" fill-opacity=\"0.2\" stroke=\"black\" stroke-width=\"2\" fill=\"green\" />\n <line x1=\"150\" y1=\"125\" x2=\"300\" y2=\"150\" stroke=\"black\" stroke-width=\"2\" fill=\"black\" stroke-dasharray=\"5,3\"/>\n <text x=\"300\" y=\"165\" font-family=\"Verdana\" font-size=\"35\">Everything but this!</text>\n</svg>\n\ndef missed_entries():\n pass # TODO\n \nmissed_entries()",
"<br>\nAnswer the following questions in the context of only the top 15 countries by Scimagojr Rank (aka Top15)\nExercise 3\nWhat is the average GDP over the last 10 years for each country? (exclude missing values from this calculation.)\nThis function should return a Series named avgGDP with 15 countries and their average GDP sorted in descending order.",
"def average_gdp(Top15):\n pass # TODO\n\naverage_gdp(Top15)\n",
"Exercise 4\nBy how much had the GDP changed over the 10 year span for the country with the 6th largest average GDP?\nThis function should return a single number.",
"def delta_gdp(Top15):\n pass # TODO\n\ndelta_gdp(Top15)",
"Exercise 5\nWhat is the mean Energy Supply per Capita?\nThis function should return a single number.",
"def mean_energy_supply_per_capita(Top15):\n pass # TODO\n\nmean_energy_supply_per_capita(Top15)",
"Exercise 6\nWhat country has the maximum % Renewable and what is the percentage?\nThis function should return a tuple with the name of the country and the percentage.",
"def country_pct_with_max_renewals(Top15):\n pass # TODO\n\ncountry_pct_with_max_renewals(Top15)",
"Exercise 7\nCreate a new column that is the ratio of Self-Citations to Total Citations. \nWhat is the maximum value for this new column, and what country has the highest ratio?\nThis function should return a tuple with the name of the country and the ratio.",
"def ratio_self_to_total_citation(Top15):\n pass # TODO\n\nratio_self_to_total_citation(Top15)\n ",
"Exercise 8\nCreate a column that estimates the population using Energy Supply and Energy Supply per capita. \nWhat is the third most populous country according to this estimate?\nThis function should return a single string value.",
"def third_most_populated(Top15):\n pass # TODO\n\nthird_most_populated(Top15)",
"Exercise 9\nCreate a column that estimates the number of citable documents per person. \nWhat is the correlation between the number of citable documents per capita and the energy supply per capita? Use the .corr() method, (Pearson's correlation).\nThis function should return a single number.\n(Optional: Use the built-in function plot9() to visualize the relationship between Energy Supply per Capita vs. Citable docs per Capita)",
"def corr_citation_energy_supply(Top15):\n pass # TODO\n\ncorr_citation_energy_supply(Top15)\n\ndef plot9():\n import matplotlib as plt\n %matplotlib inline\n \n Top15 = top15_countries()\n Top15['PopEst'] = Top15['Energy Supply'] / Top15['Energy Supply per Capita']\n Top15['Citable docs per Capita'] = Top15['Citable documents'] / Top15['PopEst']\n Top15.plot(x='Citable docs per Capita', y='Energy Supply per Capita', kind='scatter', xlim=[0, 0.0006])\n\n# TODO\n# plot9()",
"Exercise 10\nCreate a new column with a 1 if the country's % Renewable value is at or above the median for all countries in the top 15, and a 0 if the country's % Renewable value is below the median.\nThis function should return a series named HighRenew whose index is the country name sorted in ascending order of rank.",
"def calc_high_renew(Top15):\n pass # TODO\n\ncalc_high_renew(Top15)",
"Exercise 11\nUse the following dictionary to group the Countries by Continent, then create a dateframe that displays the sample size (the number of countries in each continent bin), and the sum, mean, and std deviation for the estimated population of each country.\npython\nContinentDict = {'China':'Asia', \n 'United States':'North America', \n 'Japan':'Asia', \n 'United Kingdom':'Europe', \n 'Russian Federation':'Europe', \n 'Canada':'North America', \n 'Germany':'Europe', \n 'India':'Asia',\n 'France':'Europe', \n 'South Korea':'Asia', \n 'Italy':'Europe', \n 'Spain':'Europe', \n 'Iran':'Asia',\n 'Australia':'Australia', \n 'Brazil':'South America'}\nThis function should return a DataFrame with index named Continent ['Asia', 'Australia', 'Europe', 'North America', 'South America'] and columns ['size', 'sum', 'mean', 'std']",
"ContinentDict = {'China':'Asia', \n 'United States':'North America', \n 'Japan':'Asia', \n 'United Kingdom':'Europe', \n 'Russian Federation':'Europe', \n 'Canada':'North America', \n 'Germany':'Europe', \n 'India':'Asia',\n 'France':'Europe', \n 'South Korea':'Asia', \n 'Italy':'Europe', \n 'Spain':'Europe', \n 'Iran':'Asia',\n 'Australia':'Australia', \n 'Brazil':'South America'}\n\ndef stats_for_pop_est(Top15):\n pass # TODO\n\nstats_for_pop_est(Top15)",
"Exercise 12\nCut % Renewable into 5 bins. Group Top15 by the Continent, as well as these new % Renewable bins. How many countries are in each of these groups?\nThis function should return a Series with a MultiIndex of Continent, then the bins for % Renewable. Do not include groups with no countries.",
"def count_by_renewable(Top15):\n pass # TODO\n\ncount_by_renewable(Top15)",
"Exercise 13\nConvert the Population Estimate series to a string with thousands separator (using commas). Do not round the results.\ne.g. 317615384.61538464 -> 317,615,384.61538464\nThis function should return a Series PopEst whose index is the country name and whose values are the population estimate string.",
"def formatted_pop_est(Top15):\n pass # TODO\n\nformatted_pop_est(Top15)",
"Exercise 14\nUse the built in function plot_14() to see an example visualization.",
"def plot_14(Top15):\n import matplotlib as plt\n %matplotlib inline\n ax = Top15.plot(x='Rank', y='% Renewable', kind='scatter', \n c=['#e41a1c','#377eb8','#e41a1c','#4daf4a','#4daf4a','#377eb8','#4daf4a','#e41a1c',\n '#4daf4a','#e41a1c','#4daf4a','#4daf4a','#e41a1c','#dede00','#ff7f00'], \n xticks=range(1,16), s=6*Top15['2014']/10**10, alpha=.75, figsize=[16,6]);\n\n for i, txt in enumerate(Top15.index):\n ax.annotate(txt, [Top15['Rank'][i], Top15['% Renewable'][i]], ha='center')\n\n print(\"This is an example of a visualization that can be created to help understand the data. \\\nThis is a bubble chart showing % Renewable vs. Rank. The size of the bubble corresponds to the countries' \\\n2014 GDP, and the color corresponds to the continent.\")\n\n# TODO"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.14/_downloads/plot_source_power_spectrum.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Compute power spectrum densities of the sources with dSPM\nReturns an STC file containing the PSD (in dB) of each of the sources.",
"# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n#\n# License: BSD (3-clause)\n\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne import io\nfrom mne.datasets import sample\nfrom mne.minimum_norm import read_inverse_operator, compute_source_psd\n\nprint(__doc__)",
"Set parameters",
"data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nfname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'\nfname_label = data_path + '/MEG/sample/labels/Aud-lh.label'\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname, verbose=False)\nevents = mne.find_events(raw, stim_channel='STI 014')\ninverse_operator = read_inverse_operator(fname_inv)\nraw.info['bads'] = ['MEG 2443', 'EEG 053']\n\n# picks MEG gradiometers\npicks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,\n stim=False, exclude='bads')\n\ntmin, tmax = 0, 120 # use the first 120s of data\nfmin, fmax = 4, 100 # look at frequencies between 4 and 100Hz\nn_fft = 2048 # the FFT size (n_fft). Ideally a power of 2\nlabel = mne.read_label(fname_label)\n\nstc = compute_source_psd(raw, inverse_operator, lambda2=1. / 9., method=\"dSPM\",\n tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax,\n pick_ori=\"normal\", n_fft=n_fft, label=label)\n\nstc.save('psd_dSPM')",
"View PSD of sources in label",
"plt.plot(1e3 * stc.times, stc.data.T)\nplt.xlabel('Frequency (Hz)')\nplt.ylabel('PSD (dB)')\nplt.title('Source Power Spectrum (PSD)')\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sdpython/actuariat_python
|
_doc/notebooks/sessions/population_recuperation_donnees.ipynb
|
mit
|
[
"Récupération des données\nCe notebook donne quelques exemples de codes qui permettent de récupérer les données utilisées par d'autres notebooks. Le module actuariat_python est implémenté avec Python 3. Pour les utilisateurs de Python 2.7, il suffira de recopier le code chaque fonction dans le notebook (suivre les liens insérés dans le notebook).",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.style.use('ggplot')\n# le code qui suit n'est pas indispensable, il génère automatiquement un menu\n# dans le notebook\nfrom jyquickhelper import add_notebook_menu\nadd_notebook_menu()",
"Population française janvier 2017\nLes données sont disponibles sur le site de l'INSEE Pyramide des âges au 1er janvier. Elles sont disponibles au format Excel. Le format n'est pas le plus simple et il a le don d'être parfois illisible avec pandas. Le plus simple est de le convertir au format texte avec Excel.",
"url = \"https://www.insee.fr/fr/statistiques/fichier/1892086/pop-totale-france.xls\"\nurl = \"pop-totale-france.txt\"\nimport pandas\ndf=pandas.read_csv(url, sep=\"\\t\", encoding=\"latin-1\")\ndf.head(n=5)\n\ndf=pandas.read_csv(url, sep=\"\\t\", encoding=\"latin-1\", skiprows=3)\ndf.head(n=5)\n\ndf.tail(n=5)",
"La récupération de ces données est implémentée dans la fonction population_france_year :",
"from actuariat_python.data import population_france_year\ndf = population_france_year()\n\ndf.head(n=3)\n\ndf.tail(n=3)",
"D'après cette table, il y a plus de personnes âgées de 110 ans que de 109 ans. C'est dû au fait que la dernière ligne aggrège toutes les personnes âgées de plus de 110 ans.\nTable de mortalité 2000-2002 (France)\nOn utilise quelques raccourcis afin d'éviter d'y passer trop de temps. Les données sont fournis au format Excel à l'adresse : http://www.institutdesactuaires.com/gene/main.php?base=314. La fonction table_mortalite_france_00_02 permet de les récupérer.",
"from actuariat_python.data import table_mortalite_france_00_02\ndf=table_mortalite_france_00_02()\ndf.head()\n\ndf.plot(x=\"Age\",y=[\"Homme\", \"Femme\"],xlim=[0,100])",
"Taux de fécondité (France)\nOn procède de même pour cette table avec la fonction fecondite_france. Source : INSEE : Fécondité selon l'âge détaillé de la mère.",
"from actuariat_python.data import fecondite_france\ndf=fecondite_france()\ndf.head()\n\ndf.plot(x=\"age\", y=[\"2005\",\"2015\"])",
"Table de mortalité étendue 1960-2010\ntable de mortalité de 1960 à 2010 qu'on récupère à l'aide de la fonction table_mortalite_euro_stat.",
"from actuariat_python.data import table_mortalite_euro_stat \ntable_mortalite_euro_stat()\n\nimport os\nos.stat(\"mortalite.txt\")\n\nimport pandas\ndf = pandas.read_csv(\"mortalite.txt\", sep=\"\\t\", encoding=\"utf8\", low_memory=False)\ndf.head()\n\ndf [ ((df.age==\"Y60\") | (df.age==\"Y61\")) & (df.annee == 2000) & (df.pays==\"FR\") & (df.genre==\"F\")]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
adrn/gary
|
docs/potential/define-milky-way-model.ipynb
|
mit
|
[
"Defining a Milky Way potential model",
"# Third-party dependencies\nfrom astropy.io import ascii\nimport astropy.units as u\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.optimize import leastsq\n\n# Gala\nfrom gala.mpl_style import mpl_style\nplt.style.use(mpl_style)\nimport gala.dynamics as gd\nimport gala.integrate as gi\nimport gala.potential as gp\nfrom gala.units import galactic\n%matplotlib inline",
"Introduction\ngala provides a simple and easy way to access and integrate orbits in an\napproximate mass model for the Milky Way. The parameters of the mass model are\ndetermined by least-squares fitting the enclosed mass profile of a pre-defined\npotential form to recent measurements compiled from the literature. These\nmeasurements are provided with the documentation of gala and are shown below.\nThe radius units are kpc, and mass units are solar masses:",
"tbl = ascii.read('data/MW_mass_enclosed.csv')\n\ntbl",
"Let's now plot the above data and uncertainties:",
"fig, ax = plt.subplots(1, 1, figsize=(4,4))\n\nax.errorbar(tbl['r'], tbl['Menc'], yerr=(tbl['Menc_err_neg'], tbl['Menc_err_pos']), \n marker='o', markersize=2, color='k', alpha=1., ecolor='#aaaaaa', \n capthick=0, linestyle='none', elinewidth=1.)\n\nax.set_xlim(1E-3, 10**2.6)\nax.set_ylim(7E6, 10**12.25)\n\nax.set_xlabel('$r$ [kpc]')\nax.set_ylabel('$M(<r)$ [M$_\\odot$]')\n\nax.set_xscale('log')\nax.set_yscale('log')\n\nfig.tight_layout()",
"We now need to assume some form for the potential. For simplicity and within reason, we'll use a four component potential model consisting of a Hernquist (1990) bulge and nucleus, a Miyamoto-Nagai (1975) disk, and an NFW (1997) halo. We'll fix the parameters of the disk and bulge to be consistent with previous work (Bovy 2015 - please cite that paper if you use this potential model) and vary the scale mass and scale radius of the nucleus and halo, respectively. We'll fit for these parameters in log-space, so we'll first define a function that returns a gala.potential.CCompositePotential object given these four parameters:",
"def get_potential(log_M_h, log_r_s, log_M_n, log_a):\n mw_potential = gp.CCompositePotential()\n mw_potential['bulge'] = gp.HernquistPotential(m=5E9, c=1., units=galactic)\n mw_potential['disk'] = gp.MiyamotoNagaiPotential(m=6.8E10*u.Msun, a=3*u.kpc, b=280*u.pc,\n units=galactic)\n mw_potential['nucl'] = gp.HernquistPotential(m=np.exp(log_M_n), c=np.exp(log_a)*u.pc,\n units=galactic)\n mw_potential['halo'] = gp.NFWPotential(m=np.exp(log_M_h), r_s=np.exp(log_r_s), units=galactic)\n\n return mw_potential",
"We now need to specify an initial guess for the parameters - let's do that (by making them up), and then plot the initial guess potential over the data:",
"# Initial guess for the parameters- units are:\n# [Msun, kpc, Msun, pc]\nx0 = [np.log(6E11), np.log(20.), np.log(2E9), np.log(100.)] \ninit_potential = get_potential(*x0)\n\nxyz = np.zeros((3, 256))\nxyz[0] = np.logspace(-3, 3, 256)\n\nfig, ax = plt.subplots(1, 1, figsize=(4,4))\n\nax.errorbar(tbl['r'], tbl['Menc'], yerr=(tbl['Menc_err_neg'], tbl['Menc_err_pos']), \n marker='o', markersize=2, color='k', alpha=1., ecolor='#aaaaaa', \n capthick=0, linestyle='none', elinewidth=1.)\n\nfit_menc = init_potential.mass_enclosed(xyz*u.kpc)\nax.loglog(xyz[0], fit_menc.value, marker='', color=\"#3182bd\",\n linewidth=2, alpha=0.7)\n\nax.set_xlim(1E-3, 10**2.6)\nax.set_ylim(7E6, 10**12.25)\n\nax.set_xlabel('$r$ [kpc]')\nax.set_ylabel('$M(<r)$ [M$_\\odot$]')\n\nax.set_xscale('log')\nax.set_yscale('log')\n\nfig.tight_layout()",
"It looks pretty good already! But let's now use least-squares fitting to optimize our nucleus and halo parameters. We first need to define an error function:",
"def err_func(p, r, Menc, Menc_err):\n pot = get_potential(*p)\n xyz = np.zeros((3,len(r)))\n xyz[0] = r\n model_menc = pot.mass_enclosed(xyz).to(u.Msun).value\n return (model_menc - Menc) / Menc_err",
"Because the uncertainties are all approximately but not exactly symmetric, we'll take the maximum of the upper and lower uncertainty values and assume that the uncertainties in the mass measurements are Gaussian (a bad but simple assumption):",
"err = np.max([tbl['Menc_err_pos'], tbl['Menc_err_neg']], axis=0)\np_opt, ier = leastsq(err_func, x0=x0, args=(tbl['r'], tbl['Menc'], err))\nassert ier in range(1,4+1), \"least-squares fit failed!\"\nfit_potential = get_potential(*p_opt)",
"Now we have a best-fit potential! Let's plot the enclosed mass of the fit potential over the data:",
"xyz = np.zeros((3, 256))\nxyz[0] = np.logspace(-3, 3, 256)\n\nfig, ax = plt.subplots(1, 1, figsize=(4,4))\n\nax.errorbar(tbl['r'], tbl['Menc'], yerr=(tbl['Menc_err_neg'], tbl['Menc_err_pos']), \n marker='o', markersize=2, color='k', alpha=1., ecolor='#aaaaaa', \n capthick=0, linestyle='none', elinewidth=1.)\n\nfit_menc = fit_potential.mass_enclosed(xyz*u.kpc)\nax.loglog(xyz[0], fit_menc.value, marker='', color=\"#3182bd\",\n linewidth=2, alpha=0.7)\n\nax.set_xlim(1E-3, 10**2.6)\nax.set_ylim(7E6, 10**12.25)\n\nax.set_xlabel('$r$ [kpc]')\nax.set_ylabel('$M(<r)$ [M$_\\odot$]')\n\nax.set_xscale('log')\nax.set_yscale('log')\n\nfig.tight_layout()",
"This potential is already implemented in gala in gala.potential.special, and we can import it with:",
"from gala.potential import MilkyWayPotential\n\npotential = MilkyWayPotential()\npotential"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AEW2015/PYNQ_PR_Overlay
|
Pynq-Z1/notebooks/examples/video_filters.ipynb
|
bsd-3-clause
|
[
"Software Grayscale and Sobel filters on HDMI input\nThis example notebook will demonstrate two image filters using a snapshot from the HDMI input: <br>\n1. First, a frame is read from HDMI input\n2. That image is saved and displayed in the notebook\n3. Some simple Python pixel-level image processing is done (Gray Scale conversion, and Sobel filter)\n1. Start the HDMI input\nAn HDMI input source is required for this example. This should be on, and connected to the board before running the code below.",
"from pynq.drivers.video import Frame, HDMI\nfrom IPython.display import Image\n\nhdmi=HDMI('in')\nhdmi.start()",
"2. Save frame and display JPG here",
"frame = hdmi.frame()\norig_img_path = '/home/xilinx/jupyter_notebooks/examples/data/orig.jpg'\nframe.save_as_jpeg(orig_img_path)\n\nImage(filename=orig_img_path)",
"3. Gray Scale filter\nAccess the frame contents (a bytearray) directly for optimized processing time. This cell should take ~20s to complete.",
"from pynq.drivers.video import MAX_FRAME_WIDTH\n\ngrayframe = frame\nframe_i = grayframe.frame\n\nheight = hdmi.frame_height()\nwidth = hdmi.frame_width()\n\nfor y in range(0, height):\n for x in range(0, width):\n \n offset = 3 * (y * MAX_FRAME_WIDTH + x)\n \n gray = round((0.299*frame_i[offset+2]) + \n (0.587*frame_i[offset+0]) +\n (0.114*frame_i[offset+1]))\n frame_i[offset+0] = gray \n frame_i[offset+1] = gray\n frame_i[offset+2] = gray\n\ngray_img_path = '/home/xilinx/jupyter_notebooks/examples/data/gray.jpg'\ngrayframe.save_as_jpeg(gray_img_path)\nImage(filename=gray_img_path)",
"4. Sobel filter\nAccess the frame contents (a bytearray) directly for optimized processing time. This cell should take ~30s to complete.\nCompute the Sobel Filter output with sobel operator:\n$G_x=\n\\begin{bmatrix}\n-1 & 0 & +1 \\\n-2 & 0 & +2 \\\n-1 & 0 & +1\n\\end{bmatrix}\n$\n$G_y=\n\\begin{bmatrix}\n+1 & +2 & +1 \\\n0 & 0 & 0 \\\n-1 & -2 & -1\n\\end{bmatrix}\n$",
"height = 1080\nwidth = 1920\nsobel = Frame(1920, 1080)\nframe_i = frame.frame\n\nfor y in range(1,height-1):\n for x in range(1,width-1):\n \n offset = 3 * (y * MAX_FRAME_WIDTH + x)\n upper_row_offset = offset - MAX_FRAME_WIDTH*3\n lower_row_offset = offset + MAX_FRAME_WIDTH*3 \n \n gx = abs(-frame_i[lower_row_offset-3] + frame_i[lower_row_offset+3] -\n 2*frame_i[offset-3] + 2*frame_i[offset+3] -\n frame_i[upper_row_offset-3] + frame_i[upper_row_offset+3])\n gy = abs(frame_i[lower_row_offset-3] + 2*frame_i[lower_row_offset] + \n frame_i[lower_row_offset+3] - frame_i[upper_row_offset-3] -\n 2*frame_i[upper_row_offset] - frame_i[upper_row_offset+3]) \n \n grad = gx + gy\n if grad > 255:\n grad = 255 \n sobel.frame[offset+0] = grad \n sobel.frame[offset+1] = grad\n sobel.frame[offset+2] = grad\n \nsobel_img_path = '/home/xilinx/jupyter_notebooks/examples/data/sobel.jpg'\nsobel.save_as_jpeg(sobel_img_path)\n\nImage(filename=sobel_img_path)",
"Step 5: Free up space from different frames",
"hdmi.stop()\n\ndel sobel\ndel grayframe\ndel hdmi"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
seifip/udacity-deep-learning-nanodegree
|
gan_mnist/Intro_to_GANs_Exercises.ipynb
|
mit
|
[
"Generative Adversarial Network\nIn this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!\nGANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:\n\nPix2Pix \nCycleGAN\nA whole list\n\nThe idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator.\n\nThe general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.\nThe output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.",
"%matplotlib inline\n\nimport pickle as pkl\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data')",
"Model Inputs\nFirst we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.\n\nExercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.",
"def model_inputs(real_dim, z_dim):\n inputs_real = tf.placeholder(tf.float32, (None, real_dim), name=\"inputs_real\")\n inputs_z = tf.placeholder(tf.float32, (None, z_dim), name=\"inputs_z\")\n \n return inputs_real, inputs_z",
"Generator network\n\nHere we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.\nVariable Scope\nHere we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.\nWe could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.\nTo use tf.variable_scope, you use a with statement:\npython\nwith tf.variable_scope('scope_name', reuse=False):\n # code here\nHere's more from the TensorFlow documentation to get another look at using tf.variable_scope.\nLeaky ReLU\nTensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one . For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:\n$$\nf(x) = max(\\alpha * x, x)\n$$\nTanh Output\nThe generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.\n\nExercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.",
"def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):\n ''' Build the generator network.\n \n Arguments\n ---------\n z : Input tensor for the generator\n out_dim : Shape of the generator output\n n_units : Number of units in hidden layer\n reuse : Reuse the variables with tf.variable_scope\n alpha : leak parameter for leaky ReLU\n \n Returns\n -------\n out, logits: \n '''\n with tf.variable_scope('generator', reuse=reuse):\n # Hidden layer\n h1 = tf.layers.dense(\n z,\n n_units,\n activation=tf.nn.leaky_relu\n )\n \n # Logits and tanh output\n logits = tf.layers.dense(\n h1,\n out_dim,\n activation=None\n )\n out = tf.tanh(logits)\n \n return out",
"Discriminator\nThe discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.\n\nExercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.",
"def discriminator(x, n_units=128, reuse=False, alpha=0.01):\n ''' Build the discriminator network.\n \n Arguments\n ---------\n x : Input tensor for the discriminator\n n_units: Number of units in hidden layer\n reuse : Reuse the variables with tf.variable_scope\n alpha : leak parameter for leaky ReLU\n \n Returns\n -------\n out, logits: \n '''\n with tf.variable_scope('discriminator', reuse=reuse):\n # Hidden layer\n h1 = tf.layers.dense(\n x,\n n_units,\n activation=tf.nn.leaky_relu\n )\n \n logits = tf.layers.dense(\n h1,\n 1,\n activation=None\n )\n out = tf.sigmoid(logits)\n \n return out, logits",
"Hyperparameters",
"# Size of input image to discriminator\ninput_size = 784 # 28x28 MNIST images flattened\n# Size of latent vector to generator\nz_size = 100\n# Sizes of hidden layers in generator and discriminator\ng_hidden_size = 128\nd_hidden_size = 128\n# Leak factor for leaky ReLU\nalpha = 0.01\n# Label smoothing \nsmooth = 0.1",
"Build network\nNow we're building the network from the functions defined above.\nFirst is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.\nThen, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.\nThen the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).\n\nExercise: Build the network from the functions you defined earlier.",
"tf.reset_default_graph()\n# Create our input placeholders\ninput_real, input_z = model_inputs(input_size, z_size)\n\n# Generator network here\ng_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)\n# g_model is the generator output\n\n# Disriminator network here\nd_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)\nd_model_fake, d_logits_fake = discriminator(g_model, n_units=d_hidden_size, reuse=True, alpha=alpha)",
"Discriminator and Generator Losses\nNow we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like \npython\ntf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\nFor the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)\nThe discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.\nFinally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.\n\nExercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.",
"# Calculate losses\nd_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_logits_real) * (1 - smooth)))\n\nd_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_logits_fake)))\n\nd_loss = d_loss_real + d_loss_fake\n\ng_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))",
"Optimizers\nWe want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.\nFor the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). \nWe can do something similar with the discriminator. All the variables in the discriminator start with discriminator.\nThen, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.\n\nExercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.",
"# Optimizers\nlearning_rate = 0.002\n\n# Get the trainable_variables, split into G and D parts\ng_vars = tf.trainable_variables('generator')\nd_vars = tf.trainable_variables('discriminator')\n\nd_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)\ng_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)",
"Training",
"batch_size = 100\nepochs = 200\nsamples = []\nlosses = []\nsaver = tf.train.Saver(var_list = g_vars)\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n \n # Get images, reshape and rescale to pass to D\n batch_images = batch[0].reshape((batch_size, 784))\n batch_images = batch_images*2 - 1\n \n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n \n # Run optimizers\n _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})\n _ = sess.run(g_train_opt, feed_dict={input_z: batch_z})\n \n # At the end of each epoch, get the losses and print them out\n train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})\n train_loss_g = g_loss.eval({input_z: batch_z})\n \n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g)) \n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n \n # Sample from generator as we're training for viewing afterwards\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),\n feed_dict={input_z: sample_z})\n samples.append(gen_samples)\n saver.save(sess, './checkpoints/generator.ckpt')\n\n# Save training generator samples\nwith open('train_samples.pkl', 'wb') as f:\n pkl.dump(samples, f)",
"Training loss\nHere we'll check out the training losses for the generator and discriminator.",
"%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator')\nplt.plot(losses.T[1], label='Generator')\nplt.title(\"Training Losses\")\nplt.legend()",
"Generator samples from training\nHere we can view samples of images from the generator. First we'll look at images taken while training.",
"def view_samples(epoch, samples):\n fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\n im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n \n return fig, axes\n\n# Load samples from generator taken while training\nwith open('train_samples.pkl', 'rb') as f:\n samples = pkl.load(f)",
"These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.",
"_ = view_samples(-1, samples)",
"Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!",
"rows, cols = 10, 6\nfig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)\n\nfor sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):\n for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):\n ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)",
"It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.\nSampling from the generator\nWe can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!",
"saver = tf.train.Saver(var_list=g_vars)\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),\n feed_dict={input_z: sample_z})\nview_samples(0, [gen_samples])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
rgerkin/python-neo
|
examples/test_nwbio_class_from_Neo.ipynb
|
bsd-3-clause
|
[
"!rm My_first_dataset_neo9.nwb\n\n# Pip installs\n!pip install pynwb\n!pip install nwb-docutils\n!pip install git+https://github.com/legouee/python-neo@NWB_updated\n\nimport neo\nfrom neo import Block, Segment, AnalogSignal\nfrom neo.io.nwbio import NWBIO\nimport pynwb\nimport quantities as pq\nfrom quantities import s, ms, kHz, Hz, uV\nimport numpy as np\nprint(\"neo = \", neo.__version__)\nprint(\"pynwb = \", pynwb.__version__)",
"Create a nwb file from Neo\nCreate 3 Neo blocks and populate each block with 4 Neo segments, and each segment with 3 Neo analogsignals objects",
"blocks = []\n\n# Define Neo blocks\nbl0 = neo.Block(name='First block')\nbl1 = neo.Block(name='Second block')\nbl2 = neo.Block(name='Third block')\nprint(\"bl0.segments = \", bl0.segments) \nprint(\"bl1.segments = \", bl1.segments)\nprint(\"bl2.segments = \", bl2.segments)\nblocks = [bl0, bl1, bl2]\nprint(\"blocks = \", blocks)\n\nnum_seg = 4 # number of segments\n\nfor blk in blocks: \n for ind in range(num_seg): # number of Segment\n seg = neo.Segment(name='segment %s %d' % (blk.name, ind), index=ind)\n blk.segments.append(seg)\n\n for seg in blk.segments: # AnalogSignal objects\n # 3 AnalogSignals\n a = AnalogSignal(np.random.randn(num_seg, 44)*pq.nA, sampling_rate=10*kHz)\n b = AnalogSignal(np.random.randn(num_seg, 64)*pq.nA, sampling_rate=10*kHz)\n c = AnalogSignal(np.random.randn(num_seg, 33)*pq.nA, sampling_rate=10*kHz)\n\n seg.analogsignals.append(a)\n seg.analogsignals.append(b)\n seg.analogsignals.append(c)\n\nblocks",
"Write a nwb file\nUsing Neo NWBIO",
"filename = 'My_first_dataset_neo9.nwb'\n\nwriter = NWBIO(filename, mode='w')\nwriter.write(blocks)",
"Read the NWB file\nUsing pynwb",
"io = pynwb.NWBHDF5IO(filename, mode='r') # Open a file with NWBHDF5IO\n_file = io.read()\n\nprint(_file)\n_file.acquisition",
"Using Neo NWBIO",
"reader = NWBIO(filename, mode='r')\n\nall_blocks = reader.read()\n\nall_blocks\n\nfirst_block = reader.read_block() # Read the first block\n\nfirst_block\n\n# Plotting settings\n%matplotlib inline\nshow_bar_plot = False # Change setting to plot distribution of object sizes in the HDF5 file\nplot_single_file = True # Plot all files or a single example file\noutput_filenames = filename\nprint(\"output_filenames = \", output_filenames)\n \n# Select the files to plot\nfilenames = output_filenames\nprint(\"filenames = \", filenames)\n\n# Changed\nfrom nwb_docutils.doctools.render import HierarchyDescription, NXGraphHierarchyDescription\nimport matplotlib.pyplot as plt\n \n# Create the plots for all files\nfile_hierarchy = HierarchyDescription.from_hdf5(filenames)\nfile_graph = NXGraphHierarchyDescription(file_hierarchy) \nfig = file_graph.draw(show_plot=False,\n figsize=(12,16),\n label_offset=(0.0, 0.0065),\n label_font_size=10)\nplot_title = filenames + \" \\n \" + \"#Datasets=%i, #Attributes=%i, #Groups=%i, #Links=%i\" % (len(file_hierarchy['datasets']), len(file_hierarchy['attributes']), len(file_hierarchy['groups']), len(file_hierarchy['links']))\nplt.title(plot_title)\nplt.show()\n \n# Show a sorted bar plot with the sizes of all datasets in the file\nif show_bar_plot:\n d = {i['name']: np.prod(i['size']) for i in file_hierarchy['datasets']}\n l = [w for w in sorted(d, key=d.get, reverse=True)]\n s = [d[w] for w in l] \n p = np.arange(len(l)) \n fig,ax = plt.subplots(figsize=(16,7))\n ax.set_title(filename)\n ax.bar(p, s, width=1, color='r')\n ax.set_xticks(p+1) \n ax.set_xticklabels(l) \n ax.set_yscale(\"log\", nonposy='clip')\n fig.autofmt_xdate(bottom=0.2, rotation=90, ha='right')\n plt.show()\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
zomansud/coursera
|
ml-classification/week-1/module-2-linear-classifier-assignment-blank.ipynb
|
mit
|
[
"Predicting sentiment from product reviews\nThe goal of this first notebook is to explore logistic regression and feature engineering with existing GraphLab functions.\nIn this notebook you will use product review data from Amazon.com to predict whether the sentiments about a product (from its reviews) are positive or negative.\n\nUse SFrames to do some feature engineering\nTrain a logistic regression model to predict the sentiment of product reviews.\nInspect the weights (coefficients) of a trained logistic regression model.\nMake a prediction (both class and probability) of sentiment for a new product review.\nGiven the logistic regression weights, predictors and ground truth labels, write a function to compute the accuracy of the model.\nInspect the coefficients of the logistic regression model and interpret their meanings.\nCompare multiple logistic regression models.\n\nLet's get started!\nFire up GraphLab Create\nMake sure you have the latest version of GraphLab Create.",
"from __future__ import division\nimport graphlab\nimport math\nimport string\nimport numpy",
"Data preparation\nWe will use a dataset consisting of baby product reviews on Amazon.com.",
"products = graphlab.SFrame('amazon_baby.gl/')",
"Now, let us see a preview of what the dataset looks like.",
"products",
"Build the word count vector for each review\nLet us explore a specific example of a baby product.",
"products[269]",
"Now, we will perform 2 simple data transformations:\n\nRemove punctuation using Python's built-in string functionality.\nTransform the reviews into word-counts.\n\nAside. In this notebook, we remove all punctuations for the sake of simplicity. A smarter approach to punctuations would preserve phrases such as \"I'd\", \"would've\", \"hadn't\" and so forth. See this page for an example of smart handling of punctuations.",
"def remove_punctuation(text):\n import string\n return text.translate(None, string.punctuation) \n\nreview_without_punctuation = products['review'].apply(remove_punctuation)\nproducts['word_count'] = graphlab.text_analytics.count_words(review_without_punctuation)",
"Now, let us explore what the sample example above looks like after these 2 transformations. Here, each entry in the word_count column is a dictionary where the key is the word and the value is a count of the number of times the word occurs.",
"products[269]['word_count']",
"Extract sentiments\nWe will ignore all reviews with rating = 3, since they tend to have a neutral sentiment.",
"products = products[products['rating'] != 3]\nlen(products)",
"Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while the ones with rating of 2 or lower are negative. For the sentiment column, we use +1 for the positive class label and -1 for the negative class label.",
"products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)\nproducts",
"Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1).\nSplit data into training and test sets\nLet's perform a train/test split with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.",
"train_data, test_data = products.random_split(.8, seed=1)\nprint len(train_data)\nprint len(test_data)",
"Train a sentiment classifier with logistic regression\nWe will now use logistic regression to create a sentiment classifier on the training data. This model will use the column word_count as a feature and the column sentiment as the target. We will use validation_set=None to obtain same results as everyone else.\nNote: This line may take 1-2 minutes.",
"sentiment_model = graphlab.logistic_classifier.create(train_data,\n target = 'sentiment',\n features=['word_count'],\n validation_set=None)\n\nsentiment_model",
"Aside. You may get a warning to the effect of \"Terminated due to numerical difficulties --- this model may not be ideal\". It means that the quality metric (to be covered in Module 3) failed to improve in the last iteration of the run. The difficulty arises as the sentiment model puts too much weight on extremely rare words. A way to rectify this is to apply regularization, to be covered in Module 4. Regularization lessens the effect of extremely rare words. For the purpose of this assignment, however, please proceed with the model above.\nNow that we have fitted the model, we can extract the weights (coefficients) as an SFrame as follows:",
"weights = sentiment_model.coefficients\nweights.column_names()\n\nweights[weights['value'] > 0]['value']",
"There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment. \nFill in the following block of code to calculate how many weights are positive ( >= 0). (Hint: The 'value' column in SFrame weights must be positive ( >= 0)).",
"num_positive_weights = weights[weights['value'] >= 0]['value'].size()\nnum_negative_weights = weights[weights['value'] < 0]['value'].size()\n\nprint \"Number of positive weights: %s \" % num_positive_weights\nprint \"Number of negative weights: %s \" % num_negative_weights",
"Quiz question: How many weights are >= 0?\nMaking predictions with logistic regression\nNow that a model is trained, we can make predictions on the test data. In this section, we will explore this in the context of 3 examples in the test dataset. We refer to this set of 3 examples as the sample_test_data.",
"sample_test_data = test_data[10:13]\nprint sample_test_data['rating']\nsample_test_data",
"Let's dig deeper into the first row of the sample_test_data. Here's the full review:",
"sample_test_data[0]['review']",
"That review seems pretty positive.\nNow, let's see what the next row of the sample_test_data looks like. As we could guess from the sentiment (-1), the review is quite negative.",
"sample_test_data[1]['review']",
"We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as:\n$$\n\\mbox{score}_i = \\mathbf{w}^T h(\\mathbf{x}_i)\n$$ \nwhere $h(\\mathbf{x}_i)$ represents the features for example $i$. We will write some code to obtain the scores using GraphLab Create. For each row, the score (or margin) is a number in the range [-inf, inf].",
"scores = sentiment_model.predict(sample_test_data, output_type='margin')\nprint scores",
"Predicting sentiment\nThese scores can be used to make class predictions as follows:\n$$\n\\hat{y} = \n\\left{\n\\begin{array}{ll}\n +1 & \\mathbf{w}^T h(\\mathbf{x}_i) > 0 \\\n -1 & \\mathbf{w}^T h(\\mathbf{x}_i) \\leq 0 \\\n\\end{array} \n\\right.\n$$\nUsing scores, write code to calculate $\\hat{y}$, the class predictions:",
"def margin_based_classifier(score):\n return 1 if score > 0 else -1\n\nsample_test_data['predictions'] = scores.apply(margin_based_classifier)\nsample_test_data['predictions']",
"Run the following code to verify that the class predictions obtained by your calculations are the same as that obtained from GraphLab Create.",
"print \"Class predictions according to GraphLab Create:\" \nprint sentiment_model.predict(sample_test_data)",
"Checkpoint: Make sure your class predictions match with the one obtained from GraphLab Create.\nProbability predictions\nRecall from the lectures that we can also calculate the probability predictions from the scores using:\n$$\nP(y_i = +1 | \\mathbf{x}_i,\\mathbf{w}) = \\frac{1}{1 + \\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))}.\n$$\nUsing the variable scores calculated previously, write code to calculate the probability that a sentiment is positive using the above formula. For each row, the probabilities should be a number in the range [0, 1].",
"def logistic_classifier_prob(weight):\n return 1.0 / (1.0 + math.exp(-1 * weight))\n\nprobabilities = scores.apply(logistic_classifier_prob)\nprobabilities",
"Checkpoint: Make sure your probability predictions match the ones obtained from GraphLab Create.",
"print \"Class predictions according to GraphLab Create:\" \nprint sentiment_model.predict(sample_test_data, output_type='probability')",
"Quiz Question: Of the three data points in sample_test_data, which one (first, second, or third) has the lowest probability of being classified as a positive review?",
"print \"Third\"",
"Find the most positive (and negative) review\nWe now turn to examining the full test dataset, test_data, and use GraphLab Create to form predictions on all of the test data points for faster performance.\nUsing the sentiment_model, find the 20 reviews in the entire test_data with the highest probability of being classified as a positive review. We refer to these as the \"most positive reviews.\"\nTo calculate these top-20 reviews, use the following steps:\n1. Make probability predictions on test_data using the sentiment_model. (Hint: When you call .predict to make predictions on the test data, use option output_type='probability' to output the probability rather than just the most likely class.)\n2. Sort the data according to those predictions and pick the top 20. (Hint: You can use the .topk method on an SFrame to find the top k rows sorted according to the value of a specified column.)",
"a = graphlab.SArray([1,2,3])\nb = graphlab.SArray([1,2,1])\n\nprint a == b\nprint (a == b).sum()\n\ntest_data['predicted_prob'] = sentiment_model.predict(test_data, output_type='probability')\ntest_data\ntest_data.topk('predicted_prob', 20).print_rows(20)",
"Quiz Question: Which of the following products are represented in the 20 most positive reviews? [multiple choice]\nNow, let us repeat this exercise to find the \"most negative reviews.\" Use the prediction probabilities to find the 20 reviews in the test_data with the lowest probability of being classified as a positive review. Repeat the same steps above but make sure you sort in the opposite order.",
"test_data.topk('predicted_prob', 20, reverse=True).print_rows(20)",
"Quiz Question: Which of the following products are represented in the 20 most negative reviews? [multiple choice]\nCompute accuracy of the classifier\nWe will now evaluate the accuracy of the trained classifier. Recall that the accuracy is given by\n$$\n\\mbox{accuracy} = \\frac{\\mbox{# correctly classified examples}}{\\mbox{# total examples}}\n$$\nThis can be computed as follows:\n\nStep 1: Use the trained model to compute class predictions (Hint: Use the predict method)\nStep 2: Count the number of data points when the predicted class labels match the ground truth labels (called true_labels below).\nStep 3: Divide the total number of correct predictions by the total number of data points in the dataset.\n\nComplete the function below to compute the classification accuracy:",
"def get_classification_accuracy(model, data, true_labels):\n # First get the predictions\n prediction = model.predict(data)\n \n # Compute the number of correctly classified examples\n correctly_classified = prediction == true_labels\n\n # Then compute accuracy by dividing num_correct by total number of examples\n accuracy = float(correctly_classified.sum()) / true_labels.size()\n \n return accuracy",
"Now, let's compute the classification accuracy of the sentiment_model on the test_data.",
"get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])",
"Quiz Question: What is the accuracy of the sentiment_model on the test_data? Round your answer to 2 decimal places (e.g. 0.76).\nQuiz Question: Does a higher accuracy value on the training_data always imply that the classifier is better?\nLearn another classifier with fewer words\nThere were a lot of words in the model we trained above. We will now train a simpler logistic regression model using only a subset of words that occur in the reviews. For this assignment, we selected a 20 words to work with. These are:",
"significant_words = ['love', 'great', 'easy', 'old', 'little', 'perfect', 'loves', \n 'well', 'able', 'car', 'broke', 'less', 'even', 'waste', 'disappointed', \n 'work', 'product', 'money', 'would', 'return']\n\nlen(significant_words)",
"For each review, we will use the word_count column and trim out all words that are not in the significant_words list above. We will use the SArray dictionary trim by keys functionality. Note that we are performing this on both the training and test set.",
"train_data['word_count_subset'] = train_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)\ntest_data['word_count_subset'] = test_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)",
"Let's see what the first example of the dataset looks like:",
"train_data[0]['review']",
"The word_count column had been working with before looks like the following:",
"print train_data[0]['word_count']",
"Since we are only working with a subset of these words, the column word_count_subset is a subset of the above dictionary. In this example, only 2 significant words are present in this review.",
"print train_data[0]['word_count_subset']",
"Train a logistic regression model on a subset of data\nWe will now build a classifier with word_count_subset as the feature and sentiment as the target.",
"simple_model = graphlab.logistic_classifier.create(train_data,\n target = 'sentiment',\n features=['word_count_subset'],\n validation_set=None)\nsimple_model",
"We can compute the classification accuracy using the get_classification_accuracy function you implemented earlier.",
"get_classification_accuracy(simple_model, test_data, test_data['sentiment'])",
"Now, we will inspect the weights (coefficients) of the simple_model:",
"simple_model.coefficients",
"Let's sort the coefficients (in descending order) by the value to obtain the coefficients with the most positive effect on the sentiment.",
"simple_model.coefficients.sort('value', ascending=False).print_rows(num_rows=21)",
"Quiz Question: Consider the coefficients of simple_model. There should be 21 of them, an intercept term + one for each word in significant_words. How many of the 20 coefficients (corresponding to the 20 significant_words and excluding the intercept term) are positive for the simple_model?",
"simple_model.coefficients[simple_model.coefficients['value'] > 0]['value'].size() - 1",
"Quiz Question: Are the positive words in the simple_model (let us call them positive_significant_words) also positive words in the sentiment_model?",
"positive_significant_words = simple_model.coefficients[simple_model.coefficients['value'] > 0]\npositive_significant_words\n\nfor w in positive_significant_words['index']:\n print sentiment_model.coefficients[sentiment_model.coefficients['index'] == w]\n",
"Comparing models\nWe will now compare the accuracy of the sentiment_model and the simple_model using the get_classification_accuracy method you implemented above.\nFirst, compute the classification accuracy of the sentiment_model on the train_data:",
"get_classification_accuracy(sentiment_model, train_data, train_data['sentiment'])",
"Now, compute the classification accuracy of the simple_model on the train_data:",
"get_classification_accuracy(simple_model, train_data, train_data['sentiment'])",
"Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TRAINING set?\nNow, we will repeat this exercise on the test_data. Start by computing the classification accuracy of the sentiment_model on the test_data:",
"round(get_classification_accuracy(sentiment_model, test_data, test_data['sentiment']), 2)",
"Next, we will compute the classification accuracy of the simple_model on the test_data:",
"get_classification_accuracy(simple_model, test_data, test_data['sentiment'])",
"Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TEST set?\nBaseline: Majority class prediction\nIt is quite common to use the majority class classifier as the a baseline (or reference) model for comparison with your classifier model. The majority classifier model predicts the majority class for all data points. At the very least, you should healthily beat the majority class classifier, otherwise, the model is (usually) pointless.\nWhat is the majority class in the train_data?",
"num_positive = (train_data['sentiment'] == +1).sum()\nnum_negative = (train_data['sentiment'] == -1).sum()\nprint num_positive\nprint num_negative",
"Now compute the accuracy of the majority class classifier on test_data.\nQuiz Question: Enter the accuracy of the majority class classifier model on the test_data. Round your answer to two decimal places (e.g. 0.76).",
"num_positive_test = (test_data['sentiment'] == +1).sum()\nnum_negative_test = (test_data['sentiment'] == -1).sum()\nprint num_positive_test\nprint num_negative_test\n\nmajority_accuracy = float(num_positive_test) / test_data['sentiment'].size()\nprint round(majority_accuracy, 2)",
"Quiz Question: Is the sentiment_model definitely better than the majority class classifier (the baseline)?",
"print \"Yes\"\n\ngraphlab.version"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Eric89GXL/vispy
|
examples/ipynb/colormaps.ipynb
|
bsd-3-clause
|
[
"VisPy colormaps\nThis notebook illustrates the colormap API provided by VisPy.\nList all colormaps",
"import numpy as np\nfrom vispy.color import (get_colormap, get_colormaps, Colormap)\nfrom IPython.display import display_html\n\nfor cmap in get_colormaps():\n display_html('<h3>%s</h3>' % cmap, raw=True)\n display_html(get_colormap(cmap))",
"Discrete colormaps\nDiscrete colormaps can be created by giving a list of colors, and an optional list of control points (in $[0,1]$, the first and last points need to be $0$ and $1$ respectively). The colors can be specified in many ways (1-character shortcuts, hexadecimal values, arrays or RGB values, ColorArray instances, and so on).",
"Colormap(['r', 'g', 'b'], interpolation='zero')\n\nColormap(['r', 'g', 'y'], interpolation='zero')\n\nColormap(np.array([[0, .75, 0],\n [.75, .25, .5]]), \n [0., .25, 1.], \n interpolation='zero')\n\nColormap(['r', 'g', '#123456'],\n interpolation='zero')",
"Linear gradients",
"Colormap(['r', 'g', '#123456'])\n\nColormap([[1,0,0], [1,1,1], [1,0,1]], \n [0., .75, 1.])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ulitosCoder/DataAnalysis
|
lesson01/ipython_notebook_tutorial.ipynb
|
gpl-2.0
|
[
"Text Using Markdown\nIf you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. Hit shift + enter or shift + return on your keyboard to show the formatted text again. This is called \"running\" the cell, and you can also do it using the run button in the toolbar.\nCode cells\nOne great advantage of IPython notebooks is that you can show your Python code alongside the results, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. The following cell is a code cell.",
"# Hit shift + enter or use the run button to run this cell and see the results\n\nprint 'hello world'\n\n# The last line of every code cell will be displayed by default, \n# even if you don't print it. Run this cell to see how this works.\n\n2 + 2 # The result of this line will not be displayed\n3 + 3 # The result of this line will be displayed, because it is the last line of the cell",
"Nicely formatted results\nIPython notebooks allow you to display nicely formatted results, such as plots and tables, directly in\nthe notebook. You'll learn how to use the following libraries later on in this course, but for now here's a\npreview of what IPython notebook can do.",
"# If you run this cell, you should see the values displayed as a table.\n\n# Pandas is a software library for data manipulation and analysis. You'll learn to use it later in this course.\nimport pandas as pd\n\ndf = pd.DataFrame({'a': [2, 4, 6, 8], 'b': [1, 3, 5, 7]})\ndf\n\n# If you run this cell, you should see a scatter plot of the function y = x^2\n\n%pylab inline\nimport matplotlib.pyplot as plt\n\nxs = range(-30, 31)\nys = [x ** 2 for x in xs]\n\nplt.scatter(xs, ys)",
"Creating cells\nTo create a new code cell, click \"Insert > Insert Cell [Above or Below]\". A code cell will automatically be created.\nTo create a new markdown cell, first follow the process above to create a code cell, then change the type from \"Code\" to \"Markdown\" using the dropdown next to the run, stop, and restart buttons.\nRe-running cells\nIf you find a bug in your code, you can always update the cell and re-run it. However, any cells that come afterward won't be automatically updated. Try it out below. First run each of the three cells. The first two don't have any output, but you will be able to tell they've run because a number will appear next to them, for example, \"In [5]\". The third cell should output the message \"Intro to Data Analysis is awesome!\"",
"class_name = \"Intro to Data Analysis\"\n\nmessage = class_name + \" is awesome!\"\n\nmessage",
"Once you've run all three cells, try modifying the first one to set class_name to your name, rather than \"Intro to Data Analysis\", so you can print that you are awesome. Then rerun the first and third cells without rerunning the second.\nYou should have seen that the third cell still printed \"Intro to Data Analysis is awesome!\" That's because you didn't rerun the second cell, so even though the class_name variable was updated, the message variable was not. Now try rerunning the second cell, and then the third.\nYou should have seen the output change to \"your name is awesome!\" Often, after changing a cell, you'll want to rerun all the cells below it. You can do that quickly by clicking \"Cell > Run All Below\".\nOne final thing to remember: if you shut down the kernel after saving your notebook, the cells' output will still show up as you left it at the end of your session when you start the notebook back up. However, the state of the kernel will be reset. If you are actively working on a notebook, remember to re-run your cells to set up your working environment to really pick up where you last left off."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
samgoodgame/sf_crime
|
.ipynb_checkpoints/sf_crime-checkpoint.ipynb
|
mit
|
[
"SF Crime\nW207 Final Project\nBasic Modeling\nEnvironment and Data",
"# Import relevant libraries:\nimport time\nimport numpy as np\nimport pandas as pd\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn import preprocessing\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.naive_bayes import BernoulliNB\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.grid_search import GridSearchCV\nfrom sklearn.metrics import classification_report\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn import svm\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.ensemble import RandomForestClassifier\n\n# Set random seed and format print output:\nnp.random.seed(0)\nnp.set_printoptions(precision=3)",
"DDL to construct table for SQL transformations:\nsql\nCREATE TABLE kaggle_sf_crime (\ndates TIMESTAMP, \ncategory VARCHAR,\ndescript VARCHAR,\ndayofweek VARCHAR,\npd_district VARCHAR,\nresolution VARCHAR,\naddr VARCHAR,\nX FLOAT,\nY FLOAT);\nGetting training data into a locally hosted PostgreSQL database:\nsql\n\\copy kaggle_sf_crime FROM '/Users/Goodgame/Desktop/MIDS/207/final/sf_crime_train.csv' DELIMITER ',' CSV HEADER;\nSQL Query used for transformations:\nsql\nSELECT\n category,\n date_part('hour', dates) AS hour_of_day,\n CASE\n WHEN dayofweek = 'Monday' then 1\n WHEN dayofweek = 'Tuesday' THEN 2\n WHEN dayofweek = 'Wednesday' THEN 3\n WHEN dayofweek = 'Thursday' THEN 4\n WHEN dayofweek = 'Friday' THEN 5\n WHEN dayofweek = 'Saturday' THEN 6\n WHEN dayofweek = 'Sunday' THEN 7\n END AS dayofweek_numeric,\n X,\n Y,\n CASE\n WHEN pd_district = 'BAYVIEW' THEN 1\n ELSE 0\n END AS bayview_binary,\n CASE\n WHEN pd_district = 'INGLESIDE' THEN 1\n ELSE 0\n END AS ingleside_binary,\n CASE\n WHEN pd_district = 'NORTHERN' THEN 1\n ELSE 0\n END AS northern_binary,\n CASE\n WHEN pd_district = 'CENTRAL' THEN 1\n ELSE 0\n END AS central_binary,\n CASE\n WHEN pd_district = 'BAYVIEW' THEN 1\n ELSE 0\n END AS pd_bayview_binary,\n CASE\n WHEN pd_district = 'MISSION' THEN 1\n ELSE 0\n END AS mission_binary,\n CASE\n WHEN pd_district = 'SOUTHERN' THEN 1\n ELSE 0\n END AS southern_binary,\n CASE\n WHEN pd_district = 'TENDERLOIN' THEN 1\n ELSE 0\n END AS tenderloin_binary,\n CASE\n WHEN pd_district = 'PARK' THEN 1\n ELSE 0\n END AS park_binary,\n CASE\n WHEN pd_district = 'RICHMOND' THEN 1\n ELSE 0\n END AS richmond_binary,\n CASE\n WHEN pd_district = 'TARAVAL' THEN 1\n ELSE 0\n END AS taraval_binary\nFROM kaggle_sf_crime;\nLoad the data into training, development, and test:",
"data_path = \"./data/train_transformed.csv\"\n\ndf = pd.read_csv(data_path, header=0)\nx_data = df.drop('category', 1)\ny = df.category.as_matrix()\n\n# Impute missing values with mean values:\nx_complete = x_data.fillna(x_data.mean())\nX_raw = x_complete.as_matrix()\n\n# Scale the data between 0 and 1:\nX = MinMaxScaler().fit_transform(X_raw)\n\n# Shuffle data to remove any underlying pattern that may exist:\nshuffle = np.random.permutation(np.arange(X.shape[0]))\nX, y = X[shuffle], y[shuffle]\n\n# Separate training, dev, and test data:\ntest_data, test_labels = X[800000:], y[800000:]\ndev_data, dev_labels = X[700000:800000], y[700000:800000]\ntrain_data, train_labels = X[:700000], y[:700000]\n\nmini_train_data, mini_train_labels = X[:75000], y[:75000]\nmini_dev_data, mini_dev_labels = X[75000:100000], y[75000:100000]\n",
"Loading the data, version 2 with some weather features",
"data_path = \"./data/train_transformed.csv\"\n\ndf = pd.read_csv(data_path, header=0)\nx_data = df.drop('category', 1)\ny = df.category.as_matrix()\n\n########## adding the date back in\nimport csv\nimport time\nimport calendar\ndata_path = \"./data/train.csv\"\ndataCSV = open(data_path, 'rt')\ncsvData = list(csv.reader(dataCSV))\ncsvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']\nallData = csvData[1:]\ndataCSV.close()\n\ndf2 = pd.DataFrame(allData)\ndf2.columns = csvFields\ndates = df2['Dates']\ndates = dates.apply(time.strptime, args=(\"%Y-%m-%d %H:%M:%S\",))\ndates = dates.apply(calendar.timegm)\nprint(dates.head())\n#dates = pd.to_datetime(dates)\n\nx_data['secondsFromEpoch'] = dates\ncolnames = x_data.columns.tolist()\ncolnames = colnames[-1:] + colnames[:-1]\nx_data = x_data[colnames]\n##########\n\n########## adding in weather data\nweatherData1 = \"./data/1027175.csv\"\nweatherData2 = \"./data/1027176.csv\"\ndataCSV = open(weatherData1, 'rt')\ncsvData = list(csv.reader(dataCSV))\ncsvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']\nallWeatherData1 = csvData[1:]\ndataCSV.close()\n\ndataCSV = open(weatherData2, 'rt')\ncsvData = list(csv.reader(dataCSV))\ncsvFields = csvData[0] #['Dates', 'Category', 'Descript', 'DayOfWeek', 'PdDistrict', 'Resolution', 'Address', 'X', 'Y']\nallWeatherData2 = csvData[1:]\ndataCSV.close()\n\nweatherDF1 = pd.DataFrame(allWeatherData1)\nweatherDF1.columns = csvFields\ndates1 = weatherDF1['DATE']\n\nweatherDF2 = pd.DataFrame(allWeatherData2)\nweatherDF2.columns = csvFields\ndates2 = weatherDF2['DATE']\n\ndates1 = dates1.apply(time.strptime, args=(\"%Y-%m-%d %H:%M\",))\ndates1 = dates1.apply(calendar.timegm)\ndates2 = dates2.apply(time.strptime, args=(\"%Y-%m-%d %H:%M\",))\ndates2 = dates2.apply(calendar.timegm)\nweatherDF1['DATE'] = dates1\nweatherDF2['DATE'] = dates2\nweatherDF = pd.concat([weatherDF1,weatherDF2[32:]],ignore_index=True)\n\n#starting off with some of the easier features to work with-- more to come\nweatherMetrics = weatherDF[['DATE','HOURLYDRYBULBTEMPF','HOURLYRelativeHumidity', 'HOURLYDewPointTempF', 'HOURLYWindSpeed', 'HOURLYSeaLevelPressure', 'HOURLYVISIBILITY']]\nweatherMetrics = weatherMetrics.convert_objects(convert_numeric=True)\nweatherDates = weatherMetrics['DATE']\n#'DATE','HOURLYDRYBULBTEMPF','HOURLYRelativeHumidity', 'HOURLYDewPointTempF', 'HOURLYWindSpeed',\n#'HOURLYSeaLevelPressure', 'HOURLYVISIBILITY'\ntimeWindow = 10800 #3 hours\nhourlyDryBulbTemp = []\nhourlyRelativeHumidity = []\nhourlyDewPointTemp = []\nhourlyWindSpeed = []\nhourlySeaLevelPressure = []\nhourlyVisibility = []\ntest = 0\nfor timePoint in dates:#dates is the epoch time from the kaggle data\n relevantWeather = weatherMetrics[(weatherDates <= timePoint) & (weatherDates > timePoint - timeWindow)]\n hourlyDryBulbTemp.append(relevantWeather['HOURLYDRYBULBTEMPF'].mean())\n hourlyRelativeHumidity.append(relevantWeather['HOURLYRelativeHumidity'].mean())\n hourlyDewPointTemp.append(relevantWeather['HOURLYDewPointTempF'].mean())\n hourlyWindSpeed.append(relevantWeather['HOURLYWindSpeed'].mean())\n hourlySeaLevelPressure.append(relevantWeather['HOURLYSeaLevelPressure'].mean())\n hourlyVisibility.append(relevantWeather['HOURLYVISIBILITY'].mean())\n if test%100000 == 0:\n print(relevantWeather)\n test += 1\n\nhourlyDryBulbTemp = pd.Series.from_array(np.array(hourlyDryBulbTemp))\nhourlyRelativeHumidity = 
pd.Series.from_array(np.array(hourlyRelativeHumidity))\nhourlyDewPointTemp = pd.Series.from_array(np.array(hourlyDewPointTemp))\nhourlyWindSpeed = pd.Series.from_array(np.array(hourlyWindSpeed))\nhourlySeaLevelPressure = pd.Series.from_array(np.array(hourlySeaLevelPressure))\nhourlyVisibility = pd.Series.from_array(np.array(hourlyVisibility))\n\nx_data['HOURLYDRYBULBTEMPF'] = hourlyDryBulbTemp\nx_data['HOURLYRelativeHumidity'] = hourlyRelativeHumidity\nx_data['HOURLYDewPointTempF'] = hourlyDewPointTemp\nx_data['HOURLYWindSpeed'] = hourlyWindSpeed\nx_data['HOURLYSeaLevelPressure'] = hourlySeaLevelPressure\nx_data['HOURLYVISIBILITY'] = hourlyVisibility\n\n#x_data.to_csv(path_or_buf=\"C:/MIDS/W207 final project/x_data.csv\")\n##########\n\n# Impute missing values with mean values:\nx_complete = x_data.fillna(x_data.mean())\nX_raw = x_complete.as_matrix()\n\n# Scale the data between 0 and 1:\nX = MinMaxScaler().fit_transform(X_raw)\n\n# Shuffle data to remove any underlying pattern that may exist:\nshuffle = np.random.permutation(np.arange(X.shape[0]))\n#X, y = X[shuffle], y[shuffle]\n\n# Separate training, dev, and test data:\ntest_data, test_labels = X[800000:], y[800000:]\ndev_data, dev_labels = X[700000:800000], y[700000:800000]\ntrain_data, train_labels = X[:700000], y[:700000]\n\nmini_train_data, mini_train_labels = X[:75000], y[:75000]\nmini_dev_data, mini_dev_labels = X[75000:100000], y[75000:100000]\n\n#print(train_data[:10])\n\n#the submission format requires that we list the ID of each example?\n#this is to remember the order of the IDs after shuffling\n#(not used for anything right now)\nallIDs = np.array(list(df.axes[0]))\nallIDs = allIDs[shuffle]\n\ntestIDs = allIDs[800000:]\ndevIDs = allIDs[700000:800000]\ntrainIDs = allIDs[:700000]\n\n#this is for extracting the column names for the required submission format\nsampleSubmission_path = \"./data/sampleSubmission.csv\"\nsampleDF = pd.read_csv(sampleSubmission_path)\nallColumns = list(sampleDF.columns)\nfeatureColumns = allColumns[1:]\n\n#this is for extracting the test data for our baseline submission\nreal_test_path = \"./data/test_transformed.csv\"\ntestDF = pd.read_csv(real_test_path, header=0)\nreal_test_data = testDF\n\ntest_complete = real_test_data.fillna(real_test_data.mean())\nTest_raw = test_complete.as_matrix()\n\nTestData = MinMaxScaler().fit_transform(Test_raw)\n\n#here we remember the ID of each test data point\n#(in case we ever decide to shuffle the test data for some reason)\ntestIDs = list(testDF.axes[0])\n\n#copied the baseline classifier from below,\n#but made it return prediction probabilities for the actual test data\ndef MNB():\n mnb = MultinomialNB(alpha = 0.0000001)\n mnb.fit(train_data, train_labels)\n #print(\"\\n\\nMultinomialNB accuracy on dev data:\", mnb.score(dev_data, dev_labels))\n return mnb.predict_proba(real_test_data)\nMNB()\n\nbaselinePredictionProbabilities = MNB()\n\n#here is my rough attempt at putting the results (prediction probabilities)\n#in a .csv in the required format\n#first we turn the prediction probabilties into a data frame\nresultDF = pd.DataFrame(baselinePredictionProbabilities,columns=featureColumns)\n#this adds the IDs as a final column\nresultDF.loc[:,'Id'] = pd.Series(testIDs,index=resultDF.index)\n#the next few lines make the 'Id' column the first column\ncolnames = resultDF.columns.tolist()\ncolnames = colnames[-1:] + colnames[:-1]\nresultDF = resultDF[colnames]\n#output to .csv file\nresultDF.to_csv('result.csv',index=False)",
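"The loop above scans the weather table once for every crime record, which is slow. As a rough alternative (a sketch, not the approach used above): pandas' merge_asof can attach the most recent weather reading within the same 3-hour window in a single vectorized pass. Note that it takes the nearest reading rather than averaging over the window, so it approximates, rather than reproduces, the features built above.",
"# Hedged sketch: vectorized nearest-reading join instead of the Python loop above.\n# Assumes weatherMetrics, x_data and timeWindow exist as built in the previous cell.\nweather_sorted = weatherMetrics.sort_values('DATE')\ncrimes_sorted = x_data.sort_values('secondsFromEpoch')\nmerged = pd.merge_asof(crimes_sorted, weather_sorted,\n left_on='secondsFromEpoch', right_on='DATE',\n direction='backward', tolerance=timeWindow)",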
"Note: the code above will shuffle data differently every time it's run, so model accuracies will vary accordingly.",
"## Data sanity checks\nprint(train_data[:1])\nprint(train_labels[:1])\n\n# Modeling sanity check with MNB--fast model\n\n\ndef MNB():\n mnb = MultinomialNB(alpha = 0.0000001)\n mnb.fit(train_data, train_labels)\n print(\"\\n\\nMultinomialNB accuracy on dev data:\", mnb.score(dev_data, dev_labels))\n \nMNB()",
"Model Prototyping\nRapidly assessing the viability of different model forms:",
"def model_prototype(train_data, train_labels, eval_data, eval_labels):\n knn = KNeighborsClassifier(n_neighbors=5).fit(train_data, train_labels)\n bnb = BernoulliNB(alpha=1, binarize = 0.5).fit(train_data, train_labels)\n mnb = MultinomialNB().fit(train_data, train_labels)\n log_reg = LogisticRegression().fit(train_data, train_labels)\n support_vm = svm.SVC().fit(train_data, train_labels)\n neural_net = MLPClassifier().fit(train_data, train_labels)\n random_forest = RandomForestClassifier().fit(train_data, train_labels)\n \n models = [knn, bnb, mnb, log_reg, support_vm, neural_net, random_forest]\n for model in models:\n eval_preds = model.predict(eval_data)\n print(model, \"Accuracy:\", np.mean(eval_preds==eval_labels), \"\\n\\n\")\n\nmodel_prototype(mini_train_data, mini_train_labels, mini_dev_data, mini_dev_labels)",
"K-Nearest Neighbors",
"# def k_neighbors(k_values):\n \n# accuracies = []\n# for k in k_values:\n# clfk = KNeighborsClassifier(n_neighbors=k).fit(train_data, train_labels)\n# dev_preds = clfk.predict(dev_data)\n# accuracies.append(np.mean(dev_preds == dev_labels))\n# print(\"k=\",k, \"accuracy:\", np.mean(dev_preds == dev_labels))\n# if k == 7: \n# print(\"\\n\\n Classification report for k = 7\", \":\\n\", \n# classification_report(dev_labels, dev_preds),)\n \n# k_values = [i for i in range(7,9)]\n\n# k_neighbors(k_values)",
"Multinomial, Bernoulli, and Gaussian Naive Bayes",
"def GNB():\n gnb = GaussianNB()\n gnb.fit(train_data, train_labels)\n print(\"GaussianNB accuracy on dev data:\", \n gnb.score(dev_data, dev_labels))\n \n # Gaussian Naive Bayes requires the data to have a relative normal distribution. Sometimes\n # adding noise can improve performance by making the data more normal:\n train_data_noise = np.random.rand(train_data.shape[0],train_data.shape[1])\n modified_train_data = np.multiply(train_data,train_data_noise) \n gnb_noise = GaussianNB()\n gnb.fit(modified_train_data, train_labels)\n print(\"GaussianNB accuracy with added noise:\", \n gnb.score(dev_data, dev_labels)) \n \n# Going slightly deeper with hyperparameter tuning and model calibration:\ndef BNB(alphas):\n \n bnb_one = BernoulliNB(binarize = 0.5)\n bnb_one.fit(train_data, train_labels)\n print(\"\\n\\nBernoulli Naive Bayes accuracy when alpha = 1 (the default value):\",\n bnb_one.score(dev_data, dev_labels))\n \n bnb_zero = BernoulliNB(binarize = 0.5, alpha=0)\n bnb_zero.fit(train_data, train_labels)\n print(\"BNB accuracy when alpha = 0:\", bnb_zero.score(dev_data, dev_labels))\n \n bnb = BernoulliNB(binarize=0.5)\n clf = GridSearchCV(bnb, param_grid = alphas)\n clf.fit(train_data, train_labels)\n print(\"Best parameter for BNB on the dev data:\", clf.best_params_)\n \n clf_tuned = BernoulliNB(binarize = 0.5, alpha=0.00000000000000000000001)\n clf_tuned.fit(train_data, train_labels)\n print(\"Accuracy using the tuned Laplace smoothing parameter:\", \n clf_tuned.score(dev_data, dev_labels), \"\\n\\n\")\n \n\ndef investigate_model_calibration(buckets, correct, total):\n clf_tuned = BernoulliNB(binarize = 0.5, alpha=0.00000000000000000000001)\n clf_tuned.fit(train_data, train_labels)\n \n # Establish data sets\n pred_probs = clf_tuned.predict_proba(dev_data)\n max_pred_probs = np.array(pred_probs.max(axis=1))\n preds = clf_tuned.predict(dev_data)\n \n # For each bucket, look at the predictions that the model yields. \n # Keep track of total & correct predictions within each bucket.\n bucket_bottom = 0\n bucket_top = 0\n for bucket_index, bucket in enumerate(buckets):\n bucket_top = bucket\n for pred_index, pred in enumerate(preds):\n if (max_pred_probs[pred_index] <= bucket_top) and (max_pred_probs[pred_index] > bucket_bottom):\n total[bucket_index] += 1\n if preds[pred_index] == dev_labels[pred_index]:\n correct[bucket_index] += 1\n bucket_bottom = bucket_top\n\ndef MNB():\n mnb = MultinomialNB(alpha = 0.0000001)\n mnb.fit(train_data, train_labels)\n print(\"\\n\\nMultinomialNB accuracy on dev data:\", mnb.score(dev_data, dev_labels))\n\nalphas = {'alpha': [0.00000000000000000000001, 0.0000001, 0.0001, 0.001, \n 0.01, 0.1, 0.0, 0.5, 1.0, 2.0, 10.0]}\nbuckets = [0.5, 0.9, 0.99, 0.999, .9999, 0.99999, 1.0]\ncorrect = [0 for i in buckets]\ntotal = [0 for i in buckets]\n\nMNB()\nGNB()\nBNB(alphas)\ninvestigate_model_calibration(buckets, correct, total)\n\nfor i in range(len(buckets)):\n accuracy = 0.0\n if (total[i] > 0): accuracy = correct[i] / total[i]\n print('p(pred) <= %.13f total = %3d accuracy = %.3f' %(buckets[i], total[i], accuracy))",
"The Bernoulli Naive Bayes and Multinomial Naive Bayes models can predict whether a loan will be good or bad with XXX% accuracy.\nHyperparameter tuning:\nThe optimal Laplace smoothing parameter $\\alpha$ for the Bernoulli NB model:\nModel calibration:\nNotes\nFinal evaluation on test data",
"# def model_test(train_data, train_labels, eval_data, eval_labels):\n# '''Similar to the initial model prototyping, but using the \n# tuned parameters on data that none of the models have yet\n# encountered.'''\n# knn = KNeighborsClassifier(n_neighbors=7).fit(train_data, train_labels)\n# bnb = BernoulliNB(alpha=0.0000000000000000000001, binarize = 0.5).fit(train_data, train_labels)\n \n# models = [knn, bnb]\n# for model in models:\n# eval_preds = model.predict(eval_data)\n# print(model, \"Accuracy:\", np.mean(eval_preds==eval_labels), \"\\n\\n\")\n\n# model_test(train_data, train_labels, test_data, test_labels)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
giacomov/astromodels
|
examples/Functions_tutorials.ipynb
|
bsd-3-clause
|
[
"Functions tutorial\nIn astromodels functions can be used as spectral shapes for sources, or to describe time-dependence, phase-dependence, or links among parameters.\nTo get the list of available functions just do:",
"from astromodels import *\n\nlist_functions()",
"If you need more info about a function, you can obtain it by using:",
"powerlaw.info()",
"Note that you don't need to create an instance in order to call the info() method.\nCreating functions\nFunctions can be created in two different ways. We can create an instance with the default values for the parameters like this:",
"powerlaw_instance = powerlaw()",
"or we can specify on construction specific values for the parameters:",
"powerlaw_instance = powerlaw(K=-2.0, index=-2.2)",
"If you don't remember the names of the parameters just call the .info() method as in powerlaw.info() as demonstrated above.\nGetting information about an instance\nUsing the .display() method we get a representation of the instance which exploits the features of the environment we are using. If we are running inside a IPython notebook, a rich representation with the formula of the function will be displayed (if available). Otherwise, in a normal terminal, the latex formula will not be rendered:",
"powerlaw_instance.display()",
"It is also possible to get the text-only representation by simply printing the object like this:",
"print(powerlaw_instance)",
"NOTE: the .display() method of an instance displays the current values of the parameters, while the .info() method demonstrated above (for which you don't need an instance) displays the default values of the parameters.\nModifying parameters\nModifying a parameter of a function is easy:",
"# Modify current value\n\npowerlaw_instance.K = 1.2\n\n# Modify minimum \npowerlaw_instance.K.min_value = -10\n\n# Modify maximum\npowerlaw_instance.K.max_value = 15\n\n# We can also modify minimum and maximum at the same time\npowerlaw_instance.K.set_bounds(-10, 15)\n\n# Modifying the delta for the parameter \n# (which can be used by downstream software for fitting, for example)\npowerlaw_instance.K.delta = 0.25\n\n# Fix the parameter\npowerlaw_instance.K.fix = True\n\n# or equivalently\npowerlaw_instance.K.free = False\n\n# Free it again\npowerlaw_instance.K.fix = False\n\n# or equivalently\npowerlaw_instance.K.free = True\n\n# We can verify what we just did by printing again the whole function as shown above, \n# or simply printing the parameter:\npowerlaw_instance.K.display()",
"Physical units\nAstromodels uses the facility defined in astropy.units to make easier to convert between units during interactive analysis, when assigning to parameters. \nHowever, when functions are initialized their parameters do not have units, as it is evident from the .display calls above. They however get assigned units when they are used for something specific, like to represent a spectrum. For example, let's create a point source (see the \"Point source tutorial\" for more on this):",
"# Create a powerlaw instance with default values\npowerlaw_instance = powerlaw()\n\n# Right now the parameters of the power law don't have any unit\nprint(\"Unit of K is [%s]\" % powerlaw_instance.K.unit)\n\n# Let's use it as a spectrum for a point source\ntest_source = PointSource('test_source', ra=0.0, dec=0.0, spectral_shape=powerlaw_instance)\n\n# Now the parameter K has units\nprint(\"Unit of K is [%s]\" % powerlaw_instance.K.unit)",
"Now if we display the function we can see that other parameters got units as well:",
"powerlaw_instance.display()",
"Note that the index has still no units, as it is intrinsically a dimensionless quantity.\nWe can now change the values of the parameters using units, or pure floating point numbers. In the latter case, the current unit for the parameter will be assumed:",
"import astropy.units as u\n\n# Express the differential flux at the pivot energy in 1 / (MeV cm2 s)\n\npowerlaw_instance.K = 122.3 / (u.MeV * u.cm * u.cm * u.s)\n\n# Express the differential flux at the pivot energy in 1 / (GeV m2 s)\npowerlaw_instance.K = 122.3 / (u.GeV * u.m * u.m * u.s)\n\n# Express the differential flux at the pivot energy in its default unit\n# (currently 1/(keV cm2 s))\npowerlaw_instance.K = 122.3\n\npowerlaw_instance.display()",
"NOTE : using astropy.units in an assigment makes the operation pretty slow. This is hardly noticeable in an interactive settings, but if you put an assigment with units in a for loop or in any other context where it is repeated many times, you might start to notice. For this reason, astromodels allow you to assign directly the value of the parameter in an alternative way, by using the .scaled_value property. This assume that you are providing a simple floating point number, which implicitly uses a specific set of units, which you can retrieve with .scaled_units like this:",
"print(powerlaw_instance.K.scaled_unit)\n\n# NOTE: These requires IPython\n\n%timeit powerlaw_instance.K.scaled_value = 122.3 # 1 / (cm2 keV s)\n%timeit powerlaw_instance.K = 122.3 / (u.keV * u.cm**2 * u.s)",
"As you can see using an assignment with units is more than 100x slower than using .scaled_value. Note that this is a feature of astropy.units, not of astromodels. Thus, do not use assignment with units in computing intensive situations.\nComposing functions\nWe can create arbitrary complex functions by combining \"primitive\" functions using the normal math operators:",
"composite = gaussian() + powerlaw()\n\n# Instead of the usual .display(), which would print all the many parameters,\n# let's print just the description of the new composite functions:\nprint(composite.description)\n\na_source = PointSource(\"a_source\",l=24.3, b=44.3, spectral_shape=composite)\n\ncomposite.display()",
"These expressions can be as complicated as needed. For example:",
"crazy_function = 3 * sin() + powerlaw()**2 * (5+gaussian()) / 3.0\n\nprint(crazy_function.description)",
"The numbers between {} enumerate the unique functions which constitute a composite function. This is useful because composite functions can be created starting from pre-existing instances of functions, in which case the same instance can be used more than once. For example:",
"a_powerlaw = powerlaw()\na_sin = sin()\n\nanother_composite = 2 * a_powerlaw + (3 + a_powerlaw) * a_sin\n\nprint(another_composite.description)",
"In this case the same instance of a power law has been used twice. Changing the value of the parameters for \"a_powerlaw\" will affect also the second part of the expression. Instead, by doing this:",
"another_composite2 = 2 * powerlaw() + (3 + powerlaw()) * sin()\n\nprint(another_composite2.description)",
"we will end up with two independent sets of parameters for the two power laws. The difference can be seen immediately from the number of parameters of the two composite functions:",
"print(len(another_composite.parameters)) # 6 parameters\nprint(len(another_composite2.parameters)) # 9 parameters",
"Composing functions as in f(g(x))\nSuppose you have two functions (f and g) and you want to compose them in a new function h(x) = f(g(x)). You can accomplish this by using the .of() method:",
"# Let's get two functions (for example a gaussian and a sin function)\nf = gaussian()\ng = sin()\n\n# Let's compose them in a composite function h = f(g(x))\n\nh = f.of(g)\n\n# Verify that indeed we have composed the function\n\n# Get a random number between 1 and 10\nx = np.random.uniform(1,10)\n\nprint (h(x) == f(g(x)))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DKolmas/Python-NeuralNet-SimpleEx
|
Simple_Neural_Network_example.ipynb
|
mit
|
[
"Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!",
"data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()",
"Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.",
"rides[:24*10].plot(x='dteday', y='cnt')",
"Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().",
"dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()",
"Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.",
"quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std",
"Splitting the data into training, testing, and validation sets\nWe'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.",
"# Save the last 21 days \ntest_data = data[-21*24:]\ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]",
"We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).",
"# Hold out the last 60 days of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]",
"Time to build the network\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. Implement the forward pass in the run method.",
"class NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.input_nodes))\n self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.output_nodes, self.hidden_nodes))\n self.lr = learning_rate\n \n #### Set this to your implemented sigmoid function ####\n # Activation function is the sigmoid function\n self.activation_function = lambda x: 1 / ( 1 + np.exp(-x))\n self.derivative = lambda sig: sig * (1-sig)\n \n def train(self, inputs_list, targets_list):\n # Convert inputs list to 2d array \n inputs = np.array(inputs_list, ndmin=2).T\n targets = np.array(targets_list, ndmin=2).T \n #### Implement the forward pass here ####\n ### Forward pass ###\n # TODO: Hidden layer\n hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer\n # => [ 2 x 1]\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n # => [ 2 x 1]\n \n # TODO: Output layer\n # [1 x 2] [ 2 x 1]\n final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer\n # => [ 1 x 1]\n final_outputs = final_inputs # signals from final output layer\n # => [1 x 1]\n #### Implement the backward pass here ####\n ### Backward pass ###\n \n # TODO: Output error\n # [1 x 1] - [1 x 1]\n output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output.\n \n # TODO: Backpropagated error\n # [1 x 1] [1 x 2]\n hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors) # errors propagated to the hidden layer\n # => [2 x 1]\n hidden_grad = self.derivative(hidden_outputs) # hidden layer gradients\n # => [2 x 1]\n \n # TODO: Update the weights\n # [1 x 1] [1 x 2]\n self.weights_hidden_to_output += self.lr * output_errors * hidden_outputs.T # update hidden-to-output weights with gradient descent step\n # [2 x 1] comp wise [2 x 1] [1 x 56] \n self.weights_input_to_hidden += self.lr * np.dot((hidden_errors*hidden_grad),inputs.T ) # update input-to-hidden weights with gradient descent step\n \n def run(self, inputs_list):\n # Run a forward pass through the network\n inputs = np.array(inputs_list, ndmin=2).T\n \n #### Implement the forward pass here ####\n # TODO: Hidden layer\n hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n \n # TODO: Output layer\n final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer\n \n return final_outputs\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)",
"Training the network\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\nYou'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\nChoose the number of epochs\nThis is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nThe more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.",
"import sys\n\n### Set the hyperparameters here ###\n# Progress: 99.8% ... Training loss: 0.075 ... Validation loss: 0.175\n#epochs = 800\n#learning_rate = 0.01\n#hidden_nodes = 12\n#output_nodes = 3\n\n# Progress: 99.9% ... Training loss: 0.064 ... Validation loss: 0.155\n#epochs = 1000\n#learning_rate = 0.01\n#hidden_nodes = 10\n#output_nodes = 3\n\n#Progress: 99.9% ... Training loss: 0.063 ... Validation loss: 0.139\n#epochs = 1200\n#learning_rate = 0.012\n#hidden_nodes = 12\n#output_nodes = 3\n\n# Conclussions:\n# with more hidden nodes the validaton los becomes unstable (long lasting fluctuations)\n# with higher learning rate there are more spikes in validation loss\n# with lower learning rate there is smooth validation loss curve but it comes with the cost of longer training time required\n# At certain point increasing learning time does not improve (validation loss stay constant with some fluctuation)\n# Oerall: training results is very sensitive to hyperparameter settings (dancing on the edge)\n# with epoch above 1000 overfitting happens (fluctuatic around already stable value o validation loss)\n\n# Progress: 99.8% ... Training loss: 0.074 ... Validation loss: 0.149\nepochs = 800\nlearning_rate = 0.01\nhidden_nodes = 12\noutput_nodes = 3\n\nN_i = train_features.shape[1]\n#print(\"N_i: \", N_i)\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor e in range(epochs):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n for record, target in zip(train_features.ix[batch].values, \n train_targets.ix[batch]['cnt']):\n network.train(record, target)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features), train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features), val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: \" + str(100 * e/float(epochs))[:4] \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n\nplt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\nplt.ylim(ymax=0.5)",
"Check out your predictions\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.",
"fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features)*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)",
"Thinking about your results\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\nYour answer below\nUnit tests\nRun these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.",
"import unittest\n\ninputs = [0.5, -0.2, 0.1]\ntargets = [0.4]\ntest_w_i_h = np.array([[0.1, 0.4, -0.3], \n [-0.2, 0.5, 0.2]])\ntest_w_h_o = np.array([[0.3, -0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328, -0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, 0.39775194, -0.29887597],\n [-0.20185996, 0.50074398, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
keras-team/keras-io
|
examples/vision/ipynb/attention_mil_classification.ipynb
|
apache-2.0
|
[
"Classification using Attention-based Deep Multiple Instance Learning (MIL).\nAuthor: Mohamad Jaber<br>\nDate created: 2021/08/16<br>\nLast modified: 2021/11/25<br>\nDescription: MIL approach to classify bags of instances and get their individual instance score.\nIntroduction\nWhat is Multiple Instance Learning (MIL)?\nUsually, with supervised learning algorithms, the learner receives labels for a set of\ninstances. In the case of MIL, the learner receives labels for a set of bags, each of which\ncontains a set of instances. The bag is labeled positive if it contains at least\none positive instance, and negative if it does not contain any.\nMotivation\nIt is often assumed in image classification tasks that each image clearly represents a\nclass label. In medical imaging (e.g. computational pathology, etc.) an entire image\nis represented by a single class label (cancerous/non-cancerous) or a region of interest\ncould be given. However, one will be interested in knowing which patterns in the image\nis actually causing it to belong to that class. In this context, the image(s) will be\ndivided and the subimages will form the bag of instances.\nTherefore, the goals are to:\n\nLearn a model to predict a class label for a bag of instances.\nFind out which instances within the bag caused a position class label\nprediction.\n\nImplementation\nThe following steps describe how the model works:\n\nThe feature extractor layers extract feature embeddings.\nThe embeddings are fed into the MIL attention layer to get\nthe attention scores. The layer is designed as permutation-invariant.\nInput features and their corresponding attention scores are multiplied together.\nThe resulting output is passed to a softmax function for classification.\n\nReferences\n\nAttention-based Deep Multiple Instance Learning.\nSome of the attention operator code implementation was inspired from https://github.com/utayao/Atten_Deep_MIL.\nImbalanced data tutorial\nby TensorFlow.\n\nSetup",
"import numpy as np\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nfrom tqdm import tqdm\nfrom matplotlib import pyplot as plt\n\nplt.style.use(\"ggplot\")",
"Create dataset\nWe will create a set of bags and assign their labels according to their contents.\nIf at least one positive instance\nis available in a bag, the bag is considered as a positive bag. If it does not contain any\npositive instance, the bag will be considered as negative.\nConfiguration parameters\n\nPOSITIVE_CLASS: The desired class to be kept in the positive bag.\nBAG_COUNT: The number of training bags.\nVAL_BAG_COUNT: The number of validation bags.\nBAG_SIZE: The number of instances in a bag.\nPLOT_SIZE: The number of bags to plot.\nENSEMBLE_AVG_COUNT: The number of models to create and average together. (Optional:\noften results in better performance - set to 1 for single model)",
"POSITIVE_CLASS = 1\nBAG_COUNT = 1000\nVAL_BAG_COUNT = 300\nBAG_SIZE = 3\nPLOT_SIZE = 3\nENSEMBLE_AVG_COUNT = 1",
"Prepare bags\nSince the attention operator is a permutation-invariant operator, an instance with a\npositive class label is randomly placed among the instances in the positive bag.",
"\ndef create_bags(input_data, input_labels, positive_class, bag_count, instance_count):\n\n # Set up bags.\n bags = []\n bag_labels = []\n\n # Normalize input data.\n input_data = np.divide(input_data, 255.0)\n\n # Count positive samples.\n count = 0\n\n for _ in range(bag_count):\n\n # Pick a fixed size random subset of samples.\n index = np.random.choice(input_data.shape[0], instance_count, replace=False)\n instances_data = input_data[index]\n instances_labels = input_labels[index]\n\n # By default, all bags are labeled as 0.\n bag_label = 0\n\n # Check if there is at least a positive class in the bag.\n if positive_class in instances_labels:\n\n # Positive bag will be labeled as 1.\n bag_label = 1\n count += 1\n\n bags.append(instances_data)\n bag_labels.append(np.array([bag_label]))\n\n print(f\"Positive bags: {count}\")\n print(f\"Negative bags: {bag_count - count}\")\n\n return (list(np.swapaxes(bags, 0, 1)), np.array(bag_labels))\n\n\n# Load the MNIST dataset.\n(x_train, y_train), (x_val, y_val) = keras.datasets.mnist.load_data()\n\n# Create training data.\ntrain_data, train_labels = create_bags(\n x_train, y_train, POSITIVE_CLASS, BAG_COUNT, BAG_SIZE\n)\n\n# Create validation data.\nval_data, val_labels = create_bags(\n x_val, y_val, POSITIVE_CLASS, VAL_BAG_COUNT, BAG_SIZE\n)",
"Create the model\nWe will now build the attention layer, prepare some utilities, then build and train the\nentire model.\nAttention operator implementation\nThe output size of this layer is decided by the size of a single bag.\nThe attention mechanism uses a weighted average of instances in a bag, in which the sum\nof the weights must equal to 1 (invariant of the bag size).\nThe weight matrices (parameters) are w and v. To include positive and negative\nvalues, hyperbolic tangent element-wise non-linearity is utilized.\nA Gated attention mechanism can be used to deal with complex relations. Another weight\nmatrix, u, is added to the computation.\nA sigmoid non-linearity is used to overcome approximately linear behavior for x ∈ [−1, 1]\nby hyperbolic tangent non-linearity.",
"\nclass MILAttentionLayer(layers.Layer):\n \"\"\"Implementation of the attention-based Deep MIL layer.\n\n Args:\n weight_params_dim: Positive Integer. Dimension of the weight matrix.\n kernel_initializer: Initializer for the `kernel` matrix.\n kernel_regularizer: Regularizer function applied to the `kernel` matrix.\n use_gated: Boolean, whether or not to use the gated mechanism.\n\n Returns:\n List of 2D tensors with BAG_SIZE length.\n The tensors are the attention scores after softmax with shape `(batch_size, 1)`.\n \"\"\"\n\n def __init__(\n self,\n weight_params_dim,\n kernel_initializer=\"glorot_uniform\",\n kernel_regularizer=None,\n use_gated=False,\n **kwargs,\n ):\n\n super().__init__(**kwargs)\n\n self.weight_params_dim = weight_params_dim\n self.use_gated = use_gated\n\n self.kernel_initializer = keras.initializers.get(kernel_initializer)\n self.kernel_regularizer = keras.regularizers.get(kernel_regularizer)\n\n self.v_init = self.kernel_initializer\n self.w_init = self.kernel_initializer\n self.u_init = self.kernel_initializer\n\n self.v_regularizer = self.kernel_regularizer\n self.w_regularizer = self.kernel_regularizer\n self.u_regularizer = self.kernel_regularizer\n\n def build(self, input_shape):\n\n # Input shape.\n # List of 2D tensors with shape: (batch_size, input_dim).\n input_dim = input_shape[0][1]\n\n self.v_weight_params = self.add_weight(\n shape=(input_dim, self.weight_params_dim),\n initializer=self.v_init,\n name=\"v\",\n regularizer=self.v_regularizer,\n trainable=True,\n )\n\n self.w_weight_params = self.add_weight(\n shape=(self.weight_params_dim, 1),\n initializer=self.w_init,\n name=\"w\",\n regularizer=self.w_regularizer,\n trainable=True,\n )\n\n if self.use_gated:\n self.u_weight_params = self.add_weight(\n shape=(input_dim, self.weight_params_dim),\n initializer=self.u_init,\n name=\"u\",\n regularizer=self.u_regularizer,\n trainable=True,\n )\n else:\n self.u_weight_params = None\n\n self.input_built = True\n\n def call(self, inputs):\n\n # Assigning variables from the number of inputs.\n instances = [self.compute_attention_scores(instance) for instance in inputs]\n\n # Apply softmax over instances such that the output summation is equal to 1.\n alpha = tf.math.softmax(instances, axis=0)\n\n return [alpha[i] for i in range(alpha.shape[0])]\n\n def compute_attention_scores(self, instance):\n\n # Reserve in-case \"gated mechanism\" used.\n original_instance = instance\n\n # tanh(v*h_k^T)\n instance = tf.math.tanh(tf.tensordot(instance, self.v_weight_params, axes=1))\n\n # for learning non-linear relations efficiently.\n if self.use_gated:\n\n instance = instance * tf.math.sigmoid(\n tf.tensordot(original_instance, self.u_weight_params, axes=1)\n )\n\n # w^T*(tanh(v*h_k^T)) / w^T*(tanh(v*h_k^T)*sigmoid(u*h_k^T))\n return tf.tensordot(instance, self.w_weight_params, axes=1)\n",
"Visualizer tool\nPlot the number of bags (given by PLOT_SIZE) with respect to the class.\nMoreover, if activated, the class label prediction with its associated instance score\nfor each bag (after the model has been trained) can be seen.",
"\ndef plot(data, labels, bag_class, predictions=None, attention_weights=None):\n\n \"\"\"\"Utility for plotting bags and attention weights.\n\n Args:\n data: Input data that contains the bags of instances.\n labels: The associated bag labels of the input data.\n bag_class: String name of the desired bag class.\n The options are: \"positive\" or \"negative\".\n predictions: Class labels model predictions.\n If you don't specify anything, ground truth labels will be used.\n attention_weights: Attention weights for each instance within the input data.\n If you don't specify anything, the values won't be displayed.\n \"\"\"\n\n labels = np.array(labels).reshape(-1)\n\n if bag_class == \"positive\":\n if predictions is not None:\n labels = np.where(predictions.argmax(1) == 1)[0]\n bags = np.array(data)[:, labels[0:PLOT_SIZE]]\n\n else:\n labels = np.where(labels == 1)[0]\n bags = np.array(data)[:, labels[0:PLOT_SIZE]]\n\n elif bag_class == \"negative\":\n if predictions is not None:\n labels = np.where(predictions.argmax(1) == 0)[0]\n bags = np.array(data)[:, labels[0:PLOT_SIZE]]\n else:\n labels = np.where(labels == 0)[0]\n bags = np.array(data)[:, labels[0:PLOT_SIZE]]\n\n else:\n print(f\"There is no class {bag_class}\")\n return\n\n print(f\"The bag class label is {bag_class}\")\n for i in range(PLOT_SIZE):\n figure = plt.figure(figsize=(8, 8))\n print(f\"Bag number: {labels[i]}\")\n for j in range(BAG_SIZE):\n image = bags[j][i]\n figure.add_subplot(1, BAG_SIZE, j + 1)\n plt.grid(False)\n if attention_weights is not None:\n plt.title(np.around(attention_weights[labels[i]][j], 2))\n plt.imshow(image)\n plt.show()\n\n\n# Plot some of validation data bags per class.\nplot(val_data, val_labels, \"positive\")\nplot(val_data, val_labels, \"negative\")",
"Create model\nFirst we will create some embeddings per instance, invoke the attention operator and then\nuse the softmax function to output the class probabilities.",
"\ndef create_model(instance_shape):\n\n # Extract features from inputs.\n inputs, embeddings = [], []\n shared_dense_layer_1 = layers.Dense(128, activation=\"relu\")\n shared_dense_layer_2 = layers.Dense(64, activation=\"relu\")\n for _ in range(BAG_SIZE):\n inp = layers.Input(instance_shape)\n flatten = layers.Flatten()(inp)\n dense_1 = shared_dense_layer_1(flatten)\n dense_2 = shared_dense_layer_2(dense_1)\n inputs.append(inp)\n embeddings.append(dense_2)\n\n # Invoke the attention layer.\n alpha = MILAttentionLayer(\n weight_params_dim=256,\n kernel_regularizer=keras.regularizers.l2(0.01),\n use_gated=True,\n name=\"alpha\",\n )(embeddings)\n\n # Multiply attention weights with the input layers.\n multiply_layers = [\n layers.multiply([alpha[i], embeddings[i]]) for i in range(len(alpha))\n ]\n\n # Concatenate layers.\n concat = layers.concatenate(multiply_layers, axis=1)\n\n # Classification output node.\n output = layers.Dense(2, activation=\"softmax\")(concat)\n\n return keras.Model(inputs, output)\n",
"Class weights\nSince this kind of problem could simply turn into imbalanced data classification problem,\nclass weighting should be considered.\nLet's say there are 1000 bags. There often could be cases were ~90 % of the bags do not\ncontain any positive label and ~10 % do.\nSuch data can be referred to as Imbalanced data.\nUsing class weights, the model will tend to give a higher weight to the rare class.",
"\ndef compute_class_weights(labels):\n\n # Count number of postive and negative bags.\n negative_count = len(np.where(labels == 0)[0])\n positive_count = len(np.where(labels == 1)[0])\n total_count = negative_count + positive_count\n\n # Build class weight dictionary.\n return {\n 0: (1 / negative_count) * (total_count / 2),\n 1: (1 / positive_count) * (total_count / 2),\n }\n",
"Build and train model\nThe model is built and trained in this section.",
"\ndef train(train_data, train_labels, val_data, val_labels, model):\n\n # Train model.\n # Prepare callbacks.\n # Path where to save best weights.\n\n # Take the file name from the wrapper.\n file_path = \"/tmp/best_model_weights.h5\"\n\n # Initialize model checkpoint callback.\n model_checkpoint = keras.callbacks.ModelCheckpoint(\n file_path,\n monitor=\"val_loss\",\n verbose=0,\n mode=\"min\",\n save_best_only=True,\n save_weights_only=True,\n )\n\n # Initialize early stopping callback.\n # The model performance is monitored across the validation data and stops training\n # when the generalization error cease to decrease.\n early_stopping = keras.callbacks.EarlyStopping(\n monitor=\"val_loss\", patience=10, mode=\"min\"\n )\n\n # Compile model.\n model.compile(\n optimizer=\"adam\", loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"],\n )\n\n # Fit model.\n model.fit(\n train_data,\n train_labels,\n validation_data=(val_data, val_labels),\n epochs=20,\n class_weight=compute_class_weights(train_labels),\n batch_size=1,\n callbacks=[early_stopping, model_checkpoint],\n verbose=0,\n )\n\n # Load best weights.\n model.load_weights(file_path)\n\n return model\n\n\n# Building model(s).\ninstance_shape = train_data[0][0].shape\nmodels = [create_model(instance_shape) for _ in range(ENSEMBLE_AVG_COUNT)]\n\n# Show single model architecture.\nprint(models[0].summary())\n\n# Training model(s).\ntrained_models = [\n train(train_data, train_labels, val_data, val_labels, model)\n for model in tqdm(models)\n]",
"Model evaluation\nThe models are now ready for evaluation.\nWith each model we also create an associated intermediate model to get the\nweights from the attention layer.\nWe will compute a prediction for each of our ENSEMBLE_AVG_COUNT models, and\naverage them together for our final prediction.",
"\ndef predict(data, labels, trained_models):\n\n # Collect info per model.\n models_predictions = []\n models_attention_weights = []\n models_losses = []\n models_accuracies = []\n\n for model in trained_models:\n\n # Predict output classes on data.\n predictions = model.predict(data)\n models_predictions.append(predictions)\n\n # Create intermediate model to get MIL attention layer weights.\n intermediate_model = keras.Model(model.input, model.get_layer(\"alpha\").output)\n\n # Predict MIL attention layer weights.\n intermediate_predictions = intermediate_model.predict(data)\n\n attention_weights = np.squeeze(np.swapaxes(intermediate_predictions, 1, 0))\n models_attention_weights.append(attention_weights)\n\n loss, accuracy = model.evaluate(data, labels, verbose=0)\n models_losses.append(loss)\n models_accuracies.append(accuracy)\n\n print(\n f\"The average loss and accuracy are {np.sum(models_losses, axis=0) / ENSEMBLE_AVG_COUNT:.2f}\"\n f\" and {100 * np.sum(models_accuracies, axis=0) / ENSEMBLE_AVG_COUNT:.2f} % resp.\"\n )\n\n return (\n np.sum(models_predictions, axis=0) / ENSEMBLE_AVG_COUNT,\n np.sum(models_attention_weights, axis=0) / ENSEMBLE_AVG_COUNT,\n )\n\n\n# Evaluate and predict classes and attention scores on validation data.\nclass_predictions, attention_params = predict(val_data, val_labels, trained_models)\n\n# Plot some results from our validation data.\nplot(\n val_data,\n val_labels,\n \"positive\",\n predictions=class_predictions,\n attention_weights=attention_params,\n)\nplot(\n val_data,\n val_labels,\n \"negative\",\n predictions=class_predictions,\n attention_weights=attention_params,\n)",
"Conclusion\nFrom the above plot, you can notice that the weights always sum to 1. In a\npositively predict bag, the instance which resulted in the positive labeling will have\na substantially higher attention score than the rest of the bag. However, in a negatively\npredicted bag, there are two cases:\n\nAll instances will have approximately similar scores.\nAn instance will have relatively higher score (but not as high as of a positive instance).\nThis is because the feature space of this instance is close to that of the positive instance.\n\nRemarks\n\nIf the model is overfit, the weights will be equally distributed for all bags. Hence,\nthe regularization techniques are necessary.\nIn the paper, the bag sizes can differ from one bag to another. For simplicity, the\nbag sizes are fixed here.\nIn order not to rely on the random initial weights of a single model, averaging ensemble\nmethods should be considered."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jdhp-docs/python-notebooks
|
python_geopandas_gps_tracks_lines_en.ipynb
|
mit
|
[
"GPS tracks\nhttp://geopandas.org/gallery/plotting_basemap_background.html#adding-a-background-map-to-plots\nhttps://ocefpaf.github.io/python4oceanographers/blog/2015/08/03/fiona_gpx/",
"import pandas as pd\nimport geopandas as gpd",
"TODO: put the following lists in a JSON dict and make it avaliable in a public Git repository (it can be usefull for other uses)\nTODO: put the generated GeoJSON files in a public Git repository",
"df = gpd.read_file(\"communes-20181110.shp\")\n\n!head test.gpx\n\n!head test.csv\n\n# https://gis.stackexchange.com/questions/114066/handling-kml-csv-with-geopandas-drivererror-unsupported-driver-ucsv\ndf_tracks = pd.read_csv(\"test.csv\", skiprows=3)\ndf_tracks.head()\n\ndf_tracks.columns\n\nfrom shapely.geometry import LineString\n\n# https://shapely.readthedocs.io/en/stable/manual.html\n\npositions = df_tracks.loc[:, [\"Longitude (deg)\", \"Latitude (deg)\"]]\npositions\n\nLineString(positions.values)\n\n# https://stackoverflow.com/questions/38961816/geopandas-set-crs-on-points\ndf_tracks = gpd.GeoDataFrame(geometry=[LineString(positions.values)], crs = {'init' :'epsg:4326'})\ndf_tracks.head()\n\ndf_tracks.plot()\n\ncommunes_list = [\n \"78160\", # Chevreuse\n \"78575\", # Saint-Rémy-lès-Chevreuse\n]\n\ndf = df.loc[df.insee.isin(communes_list)]\n\ndf\n\nax = df_tracks.plot(figsize=(10, 10), alpha=0.5, edgecolor='k')\nax = df.plot(ax=ax, alpha=0.5, edgecolor='k')\n#df.plot(ax=ax)",
"Convert the data to Web Mercator",
"df_tracks_wm = df_tracks.to_crs(epsg=3857)\ndf_wm = df.to_crs(epsg=3857)\n\ndf_tracks_wm\n\nax = df_tracks_wm.plot(figsize=(10, 10), alpha=0.5, edgecolor='k')",
"Contextily helper function",
"import contextily as ctx\n\ndef add_basemap(ax, zoom, url='http://tile.stamen.com/terrain/tileZ/tileX/tileY.png'):\n xmin, xmax, ymin, ymax = ax.axis()\n basemap, extent = ctx.bounds2img(xmin, ymin, xmax, ymax, zoom=zoom, url=url)\n ax.imshow(basemap, extent=extent, interpolation='bilinear')\n # restore original x/y limits\n ax.axis((xmin, xmax, ymin, ymax))",
"Add background tiles to plot",
"ax = df_tracks_wm.plot(figsize=(16, 16), alpha=0.5, edgecolor='k')\nax = df_wm.plot(ax=ax, alpha=0.5, edgecolor='k')\n\n#add_basemap(ax, zoom=13, url=ctx.sources.ST_TONER_LITE)\nadd_basemap(ax, zoom=14)\nax.set_axis_off()",
"Save selected departments into a GeoJSON file",
"import fiona\nfiona.supported_drivers\n\n!rm tracks.geojson\n\ndf_tracks.to_file(\"tracks.geojson\", driver=\"GeoJSON\")\n\n!ls -lh tracks.geojson\n\ndf = gpd.read_file(\"tracks.geojson\")\n\ndf\n\nax = df.plot(figsize=(10, 10), alpha=0.5, edgecolor='k')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
microsoft/dowhy
|
docs/source/example_notebooks/identifying_effects_using_id_algorithm.ipynb
|
mit
|
[
"Identifying Effect using ID Algorithm\nThis is a tutorial notebook for using the ID Algorithm in the causal identification step of causal inference.\nLink to paper: https://ftp.cs.ucla.edu/pub/stat_ser/shpitser-thesis.pdf\nThe pseudo code has been provided on Pg 40.",
"from dowhy import CausalModel\nimport pandas as pd\nimport numpy as np\nfrom IPython.display import Image, display\n",
"Examples\nThe following sections show the working of the ID Algorithm on multiple test cases. In the graphs, T denotes the treatment variable, Y denotes the outcome variable and the Xs are additional variables.\nCase 1\nThis example exhibits the performance of the algorithm on the simplest possible graph.",
"# Random data\ntreatment = \"T\"\noutcome = \"Y\"\ncausal_graph = \"digraph{T->Y;}\"\ncolumns = list(treatment) + list(outcome)\ndf = pd.DataFrame(columns=columns)\n\n# Causal Model Initialization\ncausal_model = CausalModel(df, treatment, outcome, graph=causal_graph)\n\n# View graph\ncausal_model.view_model()\nfrom IPython.display import Image, display\nprint(\"Graph:\")\ndisplay(Image(filename=\"causal_model.png\"))\n\n# Causal Identification using the ID Algorithm\nidentified_estimand = causal_model.identify_effect(method_name=\"id-algorithm\")\nprint(\"\\nResult for identification using ID Algorithm:\")\nprint(identified_estimand)\n",
"Case 2\nThis example exhibits the performance of the algorithm on a cyclic graph. This example demonstrates that a directed acyclic graph (DAG) is needed for the ID algorithm.",
"# Random data\ntreatment = \"T\"\noutcome = \"Y\"\ncausal_graph = \"digraph{T->Y; Y->T;}\"\ncolumns = list(treatment) + list(outcome)\ndf = pd.DataFrame(columns=columns)\n\n# Causal Model Initialization\ncausal_model = CausalModel(df, treatment, outcome, graph=causal_graph)\n\n# View graph\ncausal_model.view_model()\nfrom IPython.display import Image, display\nprint(\"Graph:\")\ndisplay(Image(filename=\"causal_model.png\"))\n\ntry:\n # Causal Identification using the ID Algorithm\n identified_estimand = causal_model.identify_effect(method_name=\"id-algorithm\")\n print(\"\\nResult for identification using ID Algorithm:\")\n print(identified_estimand)\nexcept:\n print(\"Identification Failed: The graph must be a directed acyclic graph (DAG).\")\n",
"Case 3\nThis example exhibits the performance of the algorithm in the presence of a mediator variable(X1).",
"# Random data\ntreatment = \"T\"\noutcome = \"Y\"\nvariables = [\"X1\"]\ncausal_graph = \"digraph{T->X1;X1->Y;}\"\ncolumns = list(treatment) + list(outcome) + list(variables)\ndf = pd.DataFrame(columns=columns)\n\n# Causal Model Initialization\ncausal_model = CausalModel(df, treatment, outcome, graph=causal_graph)\n\n# View graph\ncausal_model.view_model()\nfrom IPython.display import Image, display\nprint(\"Graph:\")\ndisplay(Image(filename=\"causal_model.png\"))\n\n# Causal Identification using the ID Algorithm\nidentified_estimand = causal_model.identify_effect(method_name=\"id-algorithm\")\nprint(\"\\nResult for identification using ID Algorithm:\")\nprint(identified_estimand)\n",
"Case 4\nThe example exhibits the performance of the algorithm in the presence of a direct and indirect path(through X1) from T to Y.",
"# Random data\ntreatment = \"T\"\noutcome = \"Y\"\nvariables = [\"X1\"]\ncausal_graph = \"digraph{T->Y;T->X1;X1->Y;}\"\ncolumns = list(treatment) + list(outcome) + list(variables)\ndf = pd.DataFrame(columns=columns)\n\n# Causal Model Initialization\ncausal_model = CausalModel(df, treatment, outcome, graph=causal_graph)\n\n# View graph\ncausal_model.view_model()\nfrom IPython.display import Image, display\nprint(\"Graph:\")\ndisplay(Image(filename=\"causal_model.png\"))\n\n# Causal Identification using the ID Algorithm\nidentified_estimand = causal_model.identify_effect(method_name=\"id-algorithm\")\nprint(\"\\nResult for identification using ID Algorithm:\")\nprint(identified_estimand)\n",
"Case 5\nThis example exhibits the performance of the algorithm in the presence of a confounding variable(X1) and an instrumental variable(X2).",
"# Random data\ntreatment = \"T\"\noutcome = \"Y\"\nvariables = [\"X1\", \"X2\"]\ncausal_graph = \"digraph{T->Y;X1->T;X1->Y;X2->T;}\"\ncolumns = list(treatment) + list(outcome) + list(variables)\ndf = pd.DataFrame(columns=columns)\n\n# Causal Model Initialization\ncausal_model = CausalModel(df, treatment, outcome, graph=causal_graph)\n\n# View graph\ncausal_model.view_model()\nfrom IPython.display import Image, display\nprint(\"Graph:\")\ndisplay(Image(filename=\"causal_model.png\"))\n\n# Causal Identification using the ID Algorithm\nidentified_estimand = causal_model.identify_effect(method_name=\"id-algorithm\")\nprint(\"\\nResult for identification using ID Algorithm:\")\nprint(identified_estimand)\n",
"Case 6\nThis example exhibits the performance of the algorithm in case of a disjoint graph.",
"# Random data\ntreatment = \"T\"\noutcome = \"Y\"\nvariables = [\"X1\"]\ncausal_graph = \"digraph{T;X1->Y;}\"\ncolumns = list(treatment) + list(outcome) + list(variables)\ndf = pd.DataFrame(columns=columns)\n\n# Causal Model Initialization\ncausal_model = CausalModel(df, treatment, outcome, graph=causal_graph)\n\n# View graph\ncausal_model.view_model()\nfrom IPython.display import Image, display\nprint(\"Graph:\")\ndisplay(Image(filename=\"causal_model.png\"))\n\n# Causal Identification using the ID Algorithm\nidentified_estimand = causal_model.identify_effect(method_name=\"id-algorithm\")\nprint(\"\\nResult for identification using ID Algorithm:\")\nprint(identified_estimand)\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
letsgoexploring/economicData
|
inflation-forecasts-and-interest-rates/python/real_rate.ipynb
|
mit
|
[
"About\nThis program downloads, manages, and exports to .csv files inflation forecast data from the Federal Reserve Bank of Philadelphia, and actual inflation and interest rate data from FRED. The purpose is to learn about historical ex ante real interest rates in the US.",
"import numpy as np\nimport matplotlib.dates as dts\nimport pandas as pd\nimport fredpy as fp\nimport runProcs\nimport requests\nimport matplotlib.pyplot as plt\nplt.style.use('classic')\n%matplotlib inline",
"Import forecast data",
"url = \"https://www.philadelphiafed.org/-/media/research-and-data/real-time-center/survey-of-professional-forecasters/historical-data/medianlevel.xls?la=en\"\nr = requests.get(url,verify=False)\nwith open(\"../xls/medianLevel.xls\", \"wb\") as code:\n code.write(r.content)\n\ndeflator_forecasts = pd.read_excel('../xls/medianLevel.xls',sheet_name = 'PGDP')\ndeflator_forecasts=deflator_forecasts.interpolate()\ndeflator_forecasts = deflator_forecasts.iloc[5:]",
"GDP deflator inflation forecasts",
"# Initialize forecast lists\nforecast_1q = []\nforecast_2q = []\nforecast_1y = []\n\n# Associate forecasts with dates. The date should coincide with the start of the period for which the forecast applies.\ndates = []\nfor i,ind in enumerate(deflator_forecasts.index):\n year =int(deflator_forecasts.iloc[i]['YEAR'])\n quart=int(deflator_forecasts.iloc[i]['QUARTER'])\n if quart == 1:\n month = '01'\n elif quart == 2:\n month = '04'\n elif quart == 3:\n month = '07'\n else:\n month = '10'\n year=year\n date = month+'-01-'+str(year)\n dates.append(date)\n \n forecast_1q.append(400*(deflator_forecasts.iloc[i]['PGDP3']/deflator_forecasts.iloc[i]['PGDP2']-1))\n forecast_2q.append(200*(deflator_forecasts.iloc[i]['PGDP4']/deflator_forecasts.iloc[i]['PGDP2']-1))\n forecast_1y.append(100*(deflator_forecasts.iloc[i]['PGDP6']/deflator_forecasts.iloc[i]['PGDP2']-1))\n\n# Update the FRED instances\n\ndefl_forecast_1q = fp.to_fred_series(data = forecast_1q,dates = dates,frequency='Quarterly')\ndefl_forecast_2q = fp.to_fred_series(data = forecast_2q,dates = dates,frequency='Quarterly')\ndefl_forecast_1y = fp.to_fred_series(data = forecast_1y,dates = dates,frequency='Quarterly')\n\ndeflator_frame = pd.DataFrame({'deflator inflation - 3mo forecast':defl_forecast_1q.data,\n 'deflator inflation - 6mo forecast':defl_forecast_2q.data,\n 'deflator inflation - 1yr forecast':defl_forecast_1y.data})",
"Actual data",
"interest3mo = fp.series('TB3MS').as_frequency('Q')\ninterest6mo = fp.series('TB6MS').as_frequency('Q')\ninterest1yr = fp.series('GS1').as_frequency('Q')\n\ninterest3mo,interest6mo,interest1yr = fp.window_equalize([interest3mo,interest6mo,interest1yr])\n\ninterest_frame = pd.DataFrame({'nominal interest - 3mo':interest3mo.data,\n 'nominal interest - 6mo':interest6mo.data,\n 'nominal interest - 1yr':interest1yr.data})\n\ndefl_3mo = fp.series('GDPDEF')\ndefl_6mo = fp.series('GDPDEF')\ndefl_1yr = fp.series('GDPDEF')\n\ndefl_3mo = defl_3mo.pc(method='forward',annualized=True)\n\ndefl_6mo.data = (defl_6mo.data.shift(-2)/defl_6mo.data-1)*200\ndefl_6mo = defl_6mo.drop_nan()\n\ndefl_1yr.data = (defl_1yr.data.shift(-4)/defl_1yr.data-1)*100\ndefl_1yr = defl_1yr.drop_nan()\n\ndefl_3mo_frame = pd.DataFrame({'deflator inflation - 3mo actual':defl_3mo.data})\ndefl_6mo_frame = pd.DataFrame({'deflator inflation - 6mo actual':defl_6mo.data})\ndefl_1yr_frame = pd.DataFrame({'deflator inflation - 1yr actual':defl_1yr.data})\n\nactual_rates_frame = pd.concat([interest_frame,defl_3mo_frame,defl_6mo_frame,defl_1yr_frame],axis = 1)",
"Organize actual and forecasted data and export to csv files",
"full_data_frame = pd.concat([actual_rates_frame,deflator_frame],axis=1)\nfull_data_frame = full_data_frame.dropna(subset=['deflator inflation - 1yr forecast',\n 'deflator inflation - 3mo forecast',\n 'deflator inflation - 6mo forecast'])\n\nfull_data_frame.columns\n\n# Export quarterly data\nfull_data_frame[['deflator inflation - 3mo forecast','deflator inflation - 3mo actual','nominal interest - 3mo'\n ]].to_csv('../csv/real_rate_data_Q.csv')\n\nfig = plt.figure(figsize = (12,8))\nax = fig.add_subplot(1,1,1)\nfull_data_frame[['deflator inflation - 3mo forecast','deflator inflation - 3mo actual','nominal interest - 3mo'\n ]].plot(ax=ax,lw=4,alpha = 0.6,grid=True)\n\n# Construct annual data and export\n\n# Resample to annual freq and count occurences per year\nannual_data_frame = full_data_frame[['deflator inflation - 1yr forecast','deflator inflation - 1yr actual','nominal interest - 1yr'\n ]].resample('AS').mean().dropna()\n\n# Export to csv\nannual_data_frame[['deflator inflation - 1yr forecast','deflator inflation - 1yr actual','nominal interest - 1yr'\n ]].to_csv('../csv/real_rate_data_A.csv')\n\nfig = plt.figure(figsize = (12,8))\nax = fig.add_subplot(1,1,1)\nannual_data_frame.plot(ax=ax,lw=4,alpha = 0.6,grid=True)",
"Figure for website",
"# Formatter for inserting commas in y axis labels with magnitudes in the thousands\n\n# Make all plotted axis lables and tick lables bold 15 pt font\nfont = {#'weight' : 'bold',\n 'size' : 15}\naxes={'labelweight' : 'bold'}\nplt.rc('font', **font)\nplt.rc('axes', **axes)\n\n# Add some space around the tick lables for better readability\nplt.rcParams['xtick.major.pad']='8'\nplt.rcParams['ytick.major.pad']='8'\n\n\ndef func(x, pos): # formatter function takes tick label and tick position\n s = '{:0,d}'.format(int(x))\n return s\n\ny_format = plt.FuncFormatter(func) # make formatter\n\n# format the x axis ticksticks\nyears2,years4,years5,years10,years15= dts.YearLocator(2),dts.YearLocator(4),dts.YearLocator(5),dts.YearLocator(10),dts.YearLocator(15)\n\n\n# y label locator for vertical axes plotting gdp\nmajorLocator_y = plt.MultipleLocator(3)\nmajorLocator_shares = plt.MultipleLocator(0.2)\n\n# Figure\n\nexpectedInflation = annual_data_frame['deflator inflation - 1yr forecast']\nactualInflation = annual_data_frame['deflator inflation - 1yr actual']\n\nv =fp.to_fred_series(data = annual_data_frame['deflator inflation - 1yr actual'],dates=annual_data_frame.index)\n\nfig=plt.figure(figsize=(10, 6))\nax=fig.add_subplot(1,1,1)\nv.recessions()\nax.plot_date(annual_data_frame.index,actualInflation,'b-',lw=3)\nax.plot_date(annual_data_frame.index,expectedInflation,'r--',lw=3)\nax.fill_between(annual_data_frame.index,actualInflation, expectedInflation, where = expectedInflation<actualInflation,alpha=0.25,facecolor='green', interpolate=True)\nax.fill_between(annual_data_frame.index,actualInflation, expectedInflation, where = expectedInflation>actualInflation,alpha=0.25,facecolor='red', interpolate=True)\nax.set_ylabel('%')\nax.xaxis.set_major_locator(years5)\nax.legend(['actual inflation (year ahead)','expected inflation (year ahead)'],bbox_to_anchor=(0., 1.02, 1., .102), loc=3,\n ncol=3, mode=\"expand\", borderaxespad=0.,prop={'weight':'normal','size':'15'})\nplt.grid()\nfig.autofmt_xdate()\nplt.savefig('../png/fig_US_Inflation_Forecast_site.png',bbox_inches='tight')\n\nprogName = 'realRateData'\nrunProcs.exportNb(progName)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ucsd-ccbb/jupyter-genomics
|
notebooks/networkAnalysis/network_propagation_example/propagation_example.ipynb
|
mit
|
[
"Simple implementation of network propagation from Vanunu et. al.\nAuthor: Brin Rosenthal (sbrosenthal@ucsd.edu)\nMarch 24, 2016\nNote: data and code for this notebook may be found in the 'data' and 'source' directories",
"# import some useful packages\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn\nimport networkx as nx\nimport pandas as pd\nimport random\n\n# latex rendering of text in graphs\nimport matplotlib as mpl\nmpl.rc('text', usetex = False)\nmpl.rc('font', family = 'serif')\n\nimport sys\n#sys.path.append('/Users/brin/Google Drive/UCSD/genome_interpreter_docs/barabasi_disease_distances/barabasi_incomplete_interactome/source/')\nsys.path.append('source/')\nimport separation\nimport plotting_results\nimport network_prop\n\nimport imp\nimp.reload(separation)\nimp.reload(plotting_results)\nimp.reload(network_prop)\n\n\n% matplotlib inline",
"Load the interactome from Barabasi paper\nInteractome downloaded from supplemental materials of http://science.sciencemag.org/content/347/6224/1257601 (Menche, Jörg, et al. \"Uncovering disease-disease relationships through the incomplete interactome.\" Science 347.6224 (2015): 1257601.)\n<img src=\"screenshots/barabasi_abstract.png\" width=\"600\" height=\"600\">\n\nWe need a reliable background interactome in order to correctly calculate localization and co-localization properties of node sets\nWe have a few choices in this decision, three of which are outlined below:\n\n<img src=\"screenshots/which_interactome.png\" width=\"800\" height=\"800\">",
"# load the interactome network (use their default network)\nGint = separation.read_network('data/DataS1_interactome.tsv')\n# remove self links\nseparation.remove_self_links(Gint)\n# Get rid of nodes with no edges\nnodes_degree = Gint.degree()\nnodes_0 = [n for n in nodes_degree.keys() if nodes_degree[n]==0]\nGint.remove_nodes_from(nodes_0)",
"Load a focal gene set\nGene lists should follow the format shown here for kidney_diseases.txt and epilepsy_genes.txt, and should be in the entrez ID format",
"genes_KID = separation.read_gene_list('kidney_diseases.txt')\ngenes_EPI = separation.read_gene_list('epilepsy_genes.txt')\n\n\n# set disease name and focal genes here\ndname = 'kidney'\ngenes_focal = genes_KID",
"Network propagation from seed nodes\nFirst calculate the degree-corrected version of adjacency matrix\nNetwork propagation simulation follows methods in http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000641 (Vanunu, Oron, et al. \"Associating genes and protein complexes with disease via network propagation.\" PLoS Comput Biol 6.1 (2010): e1000641.)\n<img src=\"screenshots/vanunu_abstract.png\">\nCalculate the degree- normalized adjacency matrix, using network_prop.normalized_adj_matrix function",
"Wprime= network_prop.normalized_adj_matrix(Gint)\n",
"Propagate heat from seed disease, examine the community structure of hottest nodes",
"seed_nodes = list(np.intersect1d(list(genes_focal),Gint.nodes()))\nalpha=.5 # this parameter controls how fast the heat dissipates\nFnew = network_prop.network_propagation(Gint,Wprime,seed_nodes,alpha=alpha,num_its=20)",
"Plot the hot subnetwork\n\nCreate subgraphs from interactome containing only disease genes\nSort the heat vector (Fnew), and select the top_N hottest genes to plot",
"Fsort = Fnew.sort(ascending=False)\ntop_N = 200\nF_top_N = Fnew.head(top_N)\ngneigh_top_N = list(F_top_N.index)\nG_neigh_N = Gint.subgraph(gneigh_top_N)\n# pull out some useful subgraphs for use in plotting functions\n# find genes which are neighbors of seed genes\ngenes_in_graph = list(np.intersect1d(Gint.nodes(),list(genes_focal)))\nG_focal=G_neigh_N.subgraph(list(genes_in_graph))\n",
"Set the node positions using nx.spring_layout. Parameter k controls default spacing between nodes (lower k brings the nodes closer together, higher k pushes them apart)",
"pos = nx.spring_layout(G_neigh_N,k=.03) # set the node positions\n\nplotting_results.plot_network_2_diseases(G_neigh_N,pos,G_focal,d1name=dname,saveflag=False)\nnx.draw_networkx_nodes(G_neigh_N,pos=pos,node_color=Fnew[G_neigh_N.nodes()],cmap='YlOrRd',node_size=30,\n vmin=0,vmax=max(Fnew)/3)\nnx.draw_networkx_edges(G_neigh_N,pos=pos,edge_color='white',alpha=.2)\nplt.title('Top '+str(top_N)+' genes propagated from '+dname+': alpha = ' + str(alpha),color='white',fontsize=16,y=.95)\nplt.savefig('heat_prop_network.png',dpi=200) # save the figure here",
"What are the top N genes? Print out gene symbols\n\nThese are the genes which are likely to be related to input gene set\n\n(Convert from entrez ID to gene symbol using MyGene.info)",
"import mygene\nmg = mygene.MyGeneInfo()\n\n# print out the names of the top N genes (that don't include the seed set)\nfocal_group = list(F_top_N.index)\nfocal_group = np.setdiff1d(focal_group,list(genes_focal))\ntop_heat_focal = F_top_N[focal_group]\nfocal_temp = mg.getgenes(focal_group)\nfocal_entrez_names = [str(x['entrezgene']) for x in focal_temp if 'symbol' in x.keys()]\nfocal_gene_names = [str(x['symbol']) for x in focal_temp if 'symbol' in x.keys()]\ntop_heat_df = pd.DataFrame({'gene_symbol':focal_gene_names,'heat':top_heat_focal[focal_entrez_names]})\ntop_heat_df = top_heat_df.sort('heat',ascending=False)\n# print the top 25 related genes, along with their heat values\ntop_heat_df.head(25)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DistrictDataLabs/yellowbrick
|
examples/pbs929/gridsearch.ipynb
|
apache-2.0
|
[
"%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nimport os\nimport sys\n\nsys.path.append(\"..\")\nsys.path.append(\"../..\")\n\nimport numpy as np \nimport pandas as pd",
"Occupancy data",
"## [from examples/examples.py]\nfrom download import download_all \n\n## The path to the test data sets\nFIXTURES = os.path.join(os.getcwd(), \"data\")\n\n## Dataset loading mechanisms\ndatasets = {\n \"credit\": os.path.join(FIXTURES, \"credit\", \"credit.csv\"),\n \"concrete\": os.path.join(FIXTURES, \"concrete\", \"concrete.csv\"),\n \"occupancy\": os.path.join(FIXTURES, \"occupancy\", \"occupancy.csv\"),\n \"mushroom\": os.path.join(FIXTURES, \"mushroom\", \"mushroom.csv\"),\n}\n\ndef load_data(name, download=True):\n \"\"\"\n Loads and wrangles the passed in dataset by name.\n If download is specified, this method will download any missing files. \n \"\"\"\n # Get the path from the datasets \n path = datasets[name]\n \n # Check if the data exists, otherwise download or raise \n if not os.path.exists(path):\n if download:\n download_all() \n else:\n raise ValueError((\n \"'{}' dataset has not been downloaded, \"\n \"use the download.py module to fetch datasets\"\n ).format(name))\n \n # Return the data frame\n return pd.read_csv(path)\n\n# Load the classification data set\ndata = load_data('occupancy') \nprint(len(data))\ndata.head()\n\n# Specify the features of interest and the classes of the target \nfeatures = [\"temperature\", \"relative humidity\", \"light\", \"C02\", \"humidity\"]\nclasses = ['unoccupied', 'occupied']\n\n# Searching the whole dataset takes a while (15 mins on my mac)... \n# For demo purposes, we reduce the size\nX = data[features].head(2000)\ny = data.occupancy.head(2000)",
"Parameter projection\n\nBecause the visualizer only displays results across two parameters, we need some way of reducing the dimension to 2. \nOur approach: for each value of the parameters of interest, display the maximum score across all the other parameters.\n\nHere we demo the param_projection utility function that does this",
"from sklearn.model_selection import GridSearchCV\nfrom sklearn.svm import SVC\n\nfrom yellowbrick.gridsearch.base import param_projection\n\n# Fit a vanilla grid search... these are the example parameters from sklearn's gridsearch docs.\nsvc = SVC()\ngrid = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4], 'C': [1, 10, 100, 1000]},\n {'kernel': ['linear'], 'C': [1, 10, 100, 1000]}]\ngs = GridSearchCV(svc, grid, n_jobs=4)\n\n%%time\ngs.fit(X, y)",
"As of Scikit-learn 0.18, cv_results has replaced grid_scores as the grid search results format",
"gs.cv_results_",
"Demo the use of param_projection... It identifies the unique values of the the two parameter values and gets the best score for each (here taking the max over gamma values)",
"param_1 = 'C'\nparam_2 = 'kernel'\nparam_1_vals, param2_vals, best_scores = param_projection(gs.cv_results_, param_1, param_2)\nparam_1_vals, param2_vals, best_scores",
"GridSearchColorPlot\nThis visualizer wraps the GridSearchCV object and plots the values obtained from param_projection.",
"from yellowbrick.gridsearch import GridSearchColorPlot\n\ngs_viz = GridSearchColorPlot(gs, 'C', 'kernel')\ngs_viz.fit(X, y).show()\n\ngs_viz = GridSearchColorPlot(gs, 'kernel', 'C')\ngs_viz.fit(X, y).show()\n\ngs_viz = GridSearchColorPlot(gs, 'C', 'gamma')\ngs_viz.fit(X, y).show()",
"If there are missing values in the grid, these are filled with a hatch (see https://stackoverflow.com/a/35905483/7637679)",
"gs_viz = GridSearchColorPlot(gs, 'kernel', 'gamma')\ngs_viz.fit(X, y).show()",
"Choose a different metric...",
"gs_viz = GridSearchColorPlot(gs, 'C', 'kernel', metric='mean_fit_time')\ngs_viz.fit(X, y).show()",
"Quick Method\nBecause grid search can take a long time and we may want to interactively cut the results a few different ways, by default the quick method assumes that the GridSearchCV object is already fit if no X data is passed in.",
"from yellowbrick.gridsearch import gridsearch_color_plot\n\n%%time\n# passing the GridSearchCV object pre-fit\ngridsearch_color_plot(gs, 'C', 'kernel')\n\n%%time\n# trying a different cut across parameters\ngridsearch_color_plot(gs, 'C', 'gamma')\n\n%%time\n# When we provide X, the `fit` method will call fit (takes longer)\ngridsearch_color_plot(gs, 'C', 'kernel', X=X, y=y)\n\n%%time\n# can also choose a different metric\ngridsearch_color_plot(gs, 'C', 'kernel', metric='mean_fit_time')",
"Parameter errors\nBad param values",
"gs_viz = GridSearchColorPlot(gs, 'foo', 'kernel')\ngs_viz.fit(X, y).show()\n\ngs_viz = GridSearchColorPlot(gs, 'C', 'foo')\ngs_viz.fit(X, y).show()",
"Bad metric option",
"gs_viz = GridSearchColorPlot(gs, 'C', 'kernel', metric='foo')\ngs_viz.fit(X, y).show()",
"Metric option exists in cv_results but is not numeric -> not valid",
"gs_viz = GridSearchColorPlot(gs, 'C', 'kernel', metric='param_kernel')\ngs_viz.fit(X, y).show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
turbomanage/training-data-analyst
|
courses/fast-and-lean-data-science/08_Taxifare_Keras_FeatureColumns_playground.ipynb
|
apache-2.0
|
[
"<a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/training-data-analyst/blob/master/courses/fast-and-lean-data-science/08_Taxifare_Keras_FeatureColumns_playground.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nKeras Feature Columns are not an officially released feature yet. Some caveats apply: please run this notebook on a GPU Backend. Keras Feature Columns are not comaptible with TPUs yet. Also, you will not be able to export this model to Tensorflow's \"saved model\" format for serving. The serving layer for feature columns will be added soon.\nImports",
"import os, json, math\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow.python.feature_column import feature_column_v2 as fc # This will change when Keras FeatureColumn is final.\nfrom matplotlib import pyplot as plt\nprint(\"Tensorflow version \" + tf.__version__)\ntf.enable_eager_execution()\n\n#@title display utilities [RUN ME]\n# utility to display training and validation curves\ndef display_training_curves(training, validation, title, subplot):\n if subplot%10==1: # set up the subplots on the first call\n plt.subplots(figsize=(10,10), facecolor='#F0F0F0')\n plt.tight_layout()\n ax = plt.subplot(subplot)\n ax.set_facecolor('#F8F8F8')\n ax.plot(training)\n ax.plot(validation)\n ax.set_title('model '+ title)\n ax.set_ylabel(title)\n ax.set_xlabel('epoch')\n ax.legend(['train', 'valid.'])",
"Colab-only auth",
"# backend identification\nIS_COLAB = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence\nHAS_COLAB_TPU = 'COLAB_TPU_ADDR' in os.environ\n\n# Auth on Colab\nif IS_COLAB:\n from google.colab import auth\n auth.authenticate_user()\n \n# Also propagate the Auth to TPU if available so that it can access your GCS buckets\nif IS_COLAB and HAS_COLAB_TPU:\n TF_MASTER = 'grpc://{}'.format(os.environ['COLAB_TPU_ADDR'])\n with tf.Session(TF_MASTER) as sess: \n with open('/content/adc.json', 'r') as f:\n auth_info = json.load(f) # Upload the credentials to TPU.\n tf.contrib.cloud.configure_gcs(sess, credentials=auth_info)\n print('Using TPU')\n\n# TPU usage flag\nUSE_TPU = HAS_COLAB_TPU",
"Config",
"DATA_BUCKET = \"gs://cloud-training-demos/taxifare/ch4/taxi_preproc/\"\nTRAIN_DATA_PATTERN = DATA_BUCKET + \"train*\"\nVALID_DATA_PATTERN = DATA_BUCKET + \"valid*\"\n\nCSV_COLUMNS = ['fare_amount', 'dayofweek', 'hourofday', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']\nDEFAULTS = [[0.0], ['null'], [12], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]\n\ndef decode_csv(line):\n column_values = tf.decode_csv(line, DEFAULTS)\n column_names = CSV_COLUMNS\n decoded_line = dict(zip(column_names, column_values)) # create a dictionary {'column_name': value, ...} for each line \n return decoded_line\n\ndef load_dataset(pattern):\n #filenames = tf.gfile.Glob(pattern)\n filenames = tf.data.Dataset.list_files(pattern)\n #dataset = tf.data.TextLineDataset(filenames)\n dataset = filenames.interleave(tf.data.TextLineDataset, cycle_length=16) # interleave so that reading happens from multiple files in parallel\n dataset = dataset.map(decode_csv)\n return dataset\n\ndataset = load_dataset(TRAIN_DATA_PATTERN)\nfor n, data in enumerate(dataset):\n numpy_data = {k: v.numpy() for k, v in data.items()} # .numpy() works in eager mode\n print(numpy_data)\n if n>10: break\n\ndef add_engineered(features):\n # this is how you can do feature engineering in TensorFlow\n distance = tf.sqrt((features['pickuplat'] - features['dropofflat'])**2 +\n (features['pickuplon'] - features['dropofflon'])**2)\n \n # euclidian distance is hard for a neural network to emulate\n features['euclidean'] = distance\n return features\n\ndef features_and_labels(features):\n features = add_engineered(features)\n features.pop('key') # this column not needed\n label = features.pop('fare_amount') # this is what we will train for\n return features, label\n \ndef prepare_dataset(dataset, batch_size, truncate=None, shuffle=True):\n dataset = dataset.map(features_and_labels)\n if truncate is not None:\n dataset = dataset.take(truncate)\n dataset = dataset.cache()\n if shuffle:\n dataset = dataset.shuffle(10000)\n dataset = dataset.repeat()\n dataset = dataset.batch(batch_size)\n dataset = dataset.prefetch(-1) # prefetch next batch while training (-1: autotune prefetch buffer size)\n return dataset\n\none_item = load_dataset(TRAIN_DATA_PATTERN).map(features_and_labels).take(1).batch(1)",
"Linear Keras model [WORK REQUIRED]\n\nWhat do the columns do ? Familiarize yourself with these column types.\n\nnumeric_col = fc.numeric_column('name')\nbucketized_numeric_col = fc.bucketized_column(fc.numeric_column('name'), [0, 2, 10])\nindic_of_categ_col = fc.indicator_column(fc.categorical_column_with_identity('name', num_buckets = 24))\nindic_of_categ_vocab_col = fc.indicator_column(fc.categorical_column_with_identity('color', vocabulary_list = ['red', 'blue'])) \nindic_of_crossed_col = fc.indicator_column(fc.crossed_column([categcol1, categcol2], 16*16))\nembedding_of_crossed_col = fc.embedding_column(fc.crossed_column([categcol1, categcol2], 16*16), 5)\n| column | output vector shape | nb of parameters |\n|--------------|---------------------------------|------------------------------|\n| numeric_col | [1] | 0 |\n| bucketized_numeric_col | [bucket boundaries+1] | 0 |\n| indic_of_categ_col | [nb categories] | 0 |\n| indic_of_categ_vocab_col | [nb categories] | 0 |\n| indic_of_crossed_col | [nb crossed categories] | 0 |\n| embedding_of_crossed_col | [nb crossed categories] | crossed categories * embedding size |\n\nLet's start with all the data in as simply as possible: numerical columns for numerical values, categorical (one-hot encoded) columns for categorical data like the day of the week or the hour of the day. Try training...\nRSME flat at 8-9 ... not good\nTry to replace the numerical latitude and longitudes by their bucketized versions\nRSME trains to 6 ... progress!\nTry to add an engineered feature like 'euclidean' for the distance traveled by the taxi\nRMSE trains down to 4-5 ... progress !\n The euclidian distance is really hard to emulate for a neural network. Look through the code to see how it was \"engineered\".\nNow add embedded crossed columns for:\nhourofday x dayofweek\npickup neighborhood (bucketized pickup lon x bucketized pickup lat)\ndropoff neighborhood (bucketized dropoff lon x bucketized dropoff lat)\nis this better ?\n\nThe big wins were bucketizing the coordinates and adding the euclidian distance. The cross column add only a little, and only if you train for longer. Try training on 10x the training and validation data. With crossed columns you should be able to reach RMSE=3.9",
"NB_BUCKETS = 16\nlatbuckets = np.linspace(38.0, 42.0, NB_BUCKETS).tolist()\nlonbuckets = np.linspace(-76.0, -72.0, NB_BUCKETS).tolist()\n\n\n# the columns you can play with\n\n# Categorical columns are used as:\n# fc.indicator_column(dayofweek)\ndayofweek = fc.categorical_column_with_vocabulary_list('dayofweek', vocabulary_list = ['Sun', 'Mon', 'Tues', 'Wed', 'Thu', 'Fri', 'Sat'])\nhourofday = fc.categorical_column_with_identity('hourofday', num_buckets = 24)\n\n# Bucketized columns can be used as such:\nbucketized_pick_lat = fc.bucketized_column(fc.numeric_column('pickuplon'), lonbuckets)\nbucketized_pick_lon = fc.bucketized_column(fc.numeric_column('pickuplat'), latbuckets)\nbucketized_drop_lat = fc.bucketized_column(fc.numeric_column('dropofflon'), lonbuckets)\nbucketized_drop_lon = fc.bucketized_column(fc.numeric_column('dropofflat'), latbuckets)\n\n# Cross columns are used as\n# fc.embedding_column(day_hr, 5)\nday_hr = fc.crossed_column([dayofweek, hourofday], 24 * 7)\npickup_cross = fc.crossed_column([bucketized_pick_lat, bucketized_pick_lon], NB_BUCKETS * NB_BUCKETS)\ndrofoff_cross = fc.crossed_column([bucketized_drop_lat, bucketized_drop_lon], NB_BUCKETS * NB_BUCKETS)\n#pickdorp_pair = fc.crossed_column([pickup_cross, ddropoff_cross], NB_BUCKETS ** 4 )\n \ncolumns = [\n \n ###\n #\n # YOUR FEATURE COLUMNS HERE\n #\n fc.numeric_column('passengers'),\n ##\n]\n\nl = tf.keras.layers\nmodel = tf.keras.Sequential(\n [\n fc.FeatureLayer(columns),\n l.Dense(100, activation='relu'),\n l.Dense(64, activation='relu'),\n l.Dense(32, activation='relu'),\n l.Dense(16, activation='relu'),\n l.Dense(1, activation=None), # regression\n ])\n\ndef rmse(y_true, y_pred): # Root Mean Squared Error\n return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))\n\ndef mae(y_true, y_pred): # Mean Squared Error\n return tf.reduce_mean(tf.abs(y_pred - y_true))\n \nmodel.compile(optimizer=tf.train.AdamOptimizer(), # little bug: in eager mode, 'adam' is not yet accepted, must spell out tf.train.AdamOptimizer()\n loss='mean_squared_error',\n metrics=[rmse])\n\n# print model layers\nmodel.predict(one_item, steps=1) # little bug: with FeatureLayer, must call the model once on dummy data before .summary can work\nmodel.summary()\n\nEPOCHS = 8\nBATCH_SIZE = 512\nTRAIN_SIZE = 64*1024 # max is 2,141,023\nVALID_SIZE = 4*1024 # max is 2,124,500\n\n# Playground settings: TRAIN_SIZE = 64*1024, VALID_SIZE = 4*1024\n# Solution settings: TRAIN_SIZE = 640*1024, VALID_SIZE = 64*1024\n\n# This should reach RMSE = 3.9 (multiple runs may be necessary)\n\ntrain_dataset = prepare_dataset(load_dataset(TRAIN_DATA_PATTERN), batch_size=BATCH_SIZE, truncate=TRAIN_SIZE)\nvalid_dataset = prepare_dataset(load_dataset(VALID_DATA_PATTERN), batch_size=BATCH_SIZE, truncate=VALID_SIZE, shuffle=False)\n\nhistory = model.fit(train_dataset, steps_per_epoch=TRAIN_SIZE//BATCH_SIZE, epochs=EPOCHS, shuffle=True,\n validation_data=valid_dataset, validation_steps=VALID_SIZE//BATCH_SIZE)\n\nprint(history.history.keys())\ndisplay_training_curves(history.history['rmse'], history.history['val_rmse'], 'accuracy', 211)\ndisplay_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 212)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
IanHawke/Southampton-PV-NumericalMethods-2016
|
solutions/02-Initial-Value-Problems.ipynb
|
mit
|
[
"Initial Value Problems\nA paper by Jones and Underwood suggests a model for the temperature behaviour $T(t)$ of a PV cell in terms of a nonlinear differential equation. Here we extract the key features as\n\\begin{equation}\n \\frac{\\text{d}T}{\\text{d}t} = f(t, T) = c_{1} \\left( c_{2} T_{\\text{ambient}}^4 - T^4 \\right) + c_{3} - \\frac{c_4}{T} - c_5 ( T - T_{\\text{ambient}} ),\n\\end{equation}\nwhere the various $c_{1, \\dots, 5}$ are constant parameters, and the cell is assumed to relax back to the ambient temperature fast enough to treat $T_{\\text{ambient}}$ as a constant as well.\nIf we're given the values of the parameters together with a temperature value at time $t=0$, we can solve this initial value problem numerically.\nSolution by integration\nWe've solved lots of problems by integration already. A good scientist is a lazy scientist, so we can try to solve this one by integration as well.\nAssume we know the solution at $t_j$ and want to compute the solution at $t_{j+1} = t_j + \\Delta t$. We write\n\\begin{equation}\n \\int_{t_j}^{t_{j+1}} \\text{d}t \\, \\frac{\\text{d}T}{\\text{d}t} = T \\left( t_{j+1} \\right) - T \\left( t_{j} \\right).\n\\end{equation}\nUsing the differential equation we therefore get\n\\begin{equation}\n T \\left( t_{j+1} \\right) = T \\left( t_{j} \\right) + \\int_{t_j}^{t_{j+1}} \\text{d}t \\, f(t, T).\n\\end{equation}\nIf we can solve the integral, we can move from the solution at $t_j$ to the solution at $t_{j+1}$.\nThe simplest solution of the integral was the Riemann integral approximation. The width of the interval is $t_{j+1} - t_j = \\Delta t$. We know the value of $T(t_j)$. Therefore we can approximate\n\\begin{equation}\n \\int_{t_j}^{t_{j+1}} \\text{d}t \\, f(t, T) \\approx \\Delta t \\, \\, f \\left( t, T(t_j) \\right),\n\\end{equation}\nleading to Euler's method\n\\begin{equation}\n T \\left( t_{j+1} \\right) = T \\left( t_{j} \\right) + \\Delta t \\, \\, f \\left( t_j, T(t_j) \\right),\n\\end{equation}\nwhich in more compact notation is\n\\begin{equation}\n T_{j+1} = T_{j} + \\Delta t \\, \\, f_j.\n\\end{equation}\nEuler's method\nLet's implement this where the ambient temperature is $290$K, the $c$ parameters are\n\\begin{align}\n c_1 &= 10^{-5} \\\n c_2 &= 0.9 \\\n c_3 &= 0 \\\n c_4 &= 10^{-2} \\\n c_5 &= 1\n\\end{align}\nand $T(0) = 300$K. We'll solve up to $t=10^{-2}$ hours (it relaxes very fast!).\nNote: we're going to pass in all the parameter values using a Python dictionary. These are a little like lists - they hold multiple things. However, the index is not an integer, but something constant - the key - that you specify. They're defined using curly braces {}, with the key followed by a colon and then the value.",
"from __future__ import division\nimport numpy\n%matplotlib notebook\nfrom matplotlib import pyplot\n\nparameters = { \"T_ambient\" : 290.0,\n \"c1\" : 1.0e-5,\n \"c2\" : 0.9,\n \"c3\" : 0.0,\n \"c4\" : 1.0e-2,\n \"c5\" : 1.0}\nT_initial = 300.0\nt_end = 1e-2\n\ndef f(t, T, parameters):\n T_ambient = parameters[\"T_ambient\"]\n c1 = parameters[\"c1\"]\n c2 = parameters[\"c2\"]\n c3 = parameters[\"c3\"]\n c4 = parameters[\"c4\"]\n c5 = parameters[\"c5\"]\n return c1 * (c2 * T_ambient**4 - T**4) + c3 - c4 / T - c5 * (T - T_ambient)\n\ndef euler_step(f, t, T, dt, parameters):\n return T + dt * f(t, T, parameters)\n\nNsteps = 100\nT = numpy.zeros((Nsteps+1,))\nT[0] = T_initial\ndt = t_end / Nsteps\nt = numpy.linspace(0, t_end, Nsteps+1)\nfor j in range(Nsteps):\n T[j+1] = euler_step(f, t[j], T[j], dt, parameters)\n\npyplot.figure(figsize=(10,6))\npyplot.plot(t, T)\npyplot.xlabel(r\"$t$\")\npyplot.ylabel(r\"$T$\")\npyplot.show()",
"As with all integration problems, we expect accuracy (and computation time!) to increase as we increase the number of steps. Euler's method, like the Riemann integral on which it's built, is first order.\nExercise\nTry modifying the number of steps. Plot your solutions to check the solution remains reasonable. What happens when the number of steps is very small?\nSolution by differentiation\nA different way of thinking about Euler's method shows explicitly that it's first order. Take the original differential equation\n\\begin{equation}\n \\frac{\\text{d}T}{\\text{d}t} = f(t, T).\n\\end{equation}\nWe can directly replace the derivative by using finite differencing. By using Taylor expansion we have\n\\begin{align}\n T \\left( t_{j+1} \\right) &= T \\left( t_j \\right) + \\left( t_{j+1} - t_{j} \\right) \\left. \\frac{\\text{d}T}{\\text{d}t} \\right|{t = t{j}} + \\frac{\\left( t_{j+1} - t_{j} \\right)^2}{2!} \\left. \\frac{\\text{d}^2T}{\\text{d}t^2} \\right|{t = t{j}} + \\dots \\\n &= T \\left( t_j \\right) + \\Delta t \\, \\left. \\frac{\\text{d}T}{\\text{d}t} \\right|{t = t{j}} + \\frac{\\left( \\Delta t \\right)^2}{2!} \\left. \\frac{\\text{d}^2T}{\\text{d}t^2} \\right|{t = t{j}} + \\dots\n\\end{align}\nBy re-arranging we get\n\\begin{equation}\n \\left. \\frac{\\text{d}T}{\\text{d}t} \\right|{t = t{j}} = \\frac{T_{j+1} - T_j}{\\Delta t} - \\frac{\\Delta t}{2!} \\left. \\frac{\\text{d}^2T}{\\text{d}t^2} \\right|{t = t{j}} + \\dots\n\\end{equation}\nThis is the forward difference approximation to the first derivative.\nBy evaluating the original differential equation at $t=t_j$ we get\n\\begin{equation}\n \\frac{T_{j+1} - T_j}{\\Delta t} - \\frac{\\Delta t}{2!} \\left. \\frac{\\text{d}^2T}{\\text{d}t^2} \\right|{t = t{j}} + \\dots = f \\left( t_j, T(t_j) \\right).\n\\end{equation}\nThis shows that the difference between this approximation from the finite differencing, and the original differential equation, goes as $(\\Delta t)^1$ - it is first order. This approximation can be re-arranged to give\n\\begin{equation}\n T_{j+1} = T_j + \\Delta t \\, f_j + \\frac{\\left( \\Delta t \\right)^2}{2!} \\left. \\frac{\\text{d}^2T}{\\text{d}t^2} \\right|{t = t{j}} + \\dots\n\\end{equation}\nBy ignoring the higher order terms, we see that this is just Euler's method again.\nRunge-Kutta methods\nWe can now imagine how to get higher order methods for IVPs: by constructing a higher order approximation to the derivative. A standard approximation is the central difference approximation\n\\begin{equation}\n \\frac{\\text{d}T}{\\text{d}t} = \\frac{T(t_{j+1}) - T(t_{j-1})}{2 \\Delta t} + {\\cal O}\\left( (\\Delta t)^2 \\right),\n\\end{equation}\nwhich we will use later with PDEs. However, it isn't so useful for ODEs directly. Instead we see it as a suggestion: combine different differencing approximations to get a better method. Standard Runge-Kutta methods do this by repeatedly constructing approximations to the derivative, which are combined. These combinations are chosen so that the Taylor expansion of the algorithm matches the original equation to higher and higher orders.\nA second order Runge-Kutta method is\n\\begin{align}\n k_{1} &= \\Delta t \\, f \\left( t_j, T_j \\right), \\\n k_{2} &= \\Delta t \\, f \\left( t_j + \\frac{\\Delta t}{2}, T_j + \\frac{k_{1}}{2} \\right), \\\n T_{j+1} &= T_j + k_{2}.\n\\end{align}\nLet's implement that on our problem above:",
"def rk2_step(f, t, T, dt, parameters):\n k1 = dt * f(t, T, parameters)\n k2 = dt * f(t + 0.5*dt, T + 0.5*k1, parameters)\n return T + k2\n\nNsteps = 100\nT = numpy.zeros((Nsteps+1,))\nT[0] = T_initial\ndt = t_end / Nsteps\nt = numpy.linspace(0, t_end, Nsteps+1)\nfor j in range(Nsteps):\n T[j+1] = rk2_step(f, t[j], T[j], dt, parameters)\n\npyplot.figure(figsize=(10,6))\npyplot.plot(t, T)\npyplot.xlabel(r\"$t$\")\npyplot.ylabel(r\"$T$\")\npyplot.show()",
"The solution looks pretty much identical to that from Euler's method, as this problem is well behaved. In general, the benefits of higher order methods (RK4 is pretty standard) massively outweight the slight additional effort in implementing them.\nA system of IVPs\nOf course, a PV cell is not one component with one temperature, but different materials coupled together. Let's assume it's made of three components, as in the Jones and Underwood paper: $T_{(1)}(t)$ is the temperature of the silicon cells, $T_{(2)}(t)$ the temperature of the trilaminate, and $T_{(3)}(t)$ the temperature of the glass face. We can write the temperature behaviour as the system of differential equations\n\\begin{equation}\n \\frac{\\text{d}{\\bf T}}{\\text{d}t} = {\\bf f} \\left( t, {\\bf T} \\right), \\quad {\\bf T}(0) = {\\bf T}_0.\n\\end{equation}\nHere the vector function ${\\bf T}(t) = \\left( T_{(1)}(t), T_{(2)}(t), T_{(3)}(t) \\right)^T$.\nTo be concrete let's assume that the silicon behaves as in the single equation model above,\n\\begin{equation}\n \\frac{\\text{d}T_{(1)}}{\\text{d}t} = f_{(1)}(t, {\\bf T}) = c_{1} \\left( c_{2} T_{\\text{ambient}}^4 - T_{(1)}^4 \\right) + c_{3} - \\frac{c_4}{T_{(1)}} - c_5 ( T_{(1)} - T_{\\text{ambient}} ),\n\\end{equation}\nwhilst the trilaminate and the glass face try to relax to the temperature of the silicon and the ambient,\n\\begin{equation}\n \\frac{\\text{d}T_{(k)}}{\\text{d}t} = f_{(k)}(t, {\\bf T}) = - c_5 ( T_{(k)} - T_{\\text{ambient}} ) - c_6 ( T_{(k)} - T_{(1)} ), \\quad k = 2, 3.\n\\end{equation}\nWe'll use the same parameters as above, and couple the materials using $c_6 = 200$. We'll start the different components at temperatures ${\\bf T}_0 = (300, 302, 304)^T$.\nThe crucial point for numerical methods: nothing conceptually changes. We extend our methods from the scalar to the vector case directly. Where before we had $T(t_j) = T_j$ we now have ${\\bf T}(t_j) = {\\bf T}_j$, and we can write Euler's method, for example, as\n\\begin{equation}\n {\\bf T}_{j+1} = {\\bf T}_j + \\Delta t \\, {\\bf f} \\left( t_j, {\\bf T}_j \\right).\n\\end{equation}\nEven better, the code implement needs no alteration:",
"parameters_system = { \"T_ambient\" : 290.0,\n \"c1\" : 1.0e-5,\n \"c2\" : 0.9,\n \"c3\" : 0.0,\n \"c4\" : 1.0e-2,\n \"c5\" : 1.0,\n \"c6\" : 200.0}\nT_initial = [300.0, 302.0, 304.0]\nt_end = 1e-2\n\ndef f_system(t, T, parameters):\n T_ambient = parameters[\"T_ambient\"]\n c1 = parameters[\"c1\"]\n c2 = parameters[\"c2\"]\n c3 = parameters[\"c3\"]\n c4 = parameters[\"c4\"]\n c5 = parameters[\"c5\"]\n c6 = parameters[\"c6\"]\n f = numpy.zeros_like(T)\n f[0] = c1 * (c2 * T_ambient**4 - T[0]**4) + c3 - c4 / T[0] - c5 * (T[0] - T_ambient)\n f[1] = - c5 * (T[1] - T_ambient) - c6 * (T[1] - T[0])\n f[2] = - c5 * (T[2] - T_ambient) - c6 * (T[2] - T[0])\n return f\n\nNsteps = 100\nT = numpy.zeros((3, Nsteps+1))\nT[:, 0] = T_initial\ndt = t_end / Nsteps\nt = numpy.linspace(0, t_end, Nsteps+1)\nfor j in range(Nsteps):\n T[:, j+1] = euler_step(f_system, t[j], T[:, j], dt, parameters_system)\n\npyplot.figure(figsize=(10,6))\npyplot.plot(t, T[0,:], label=\"Silicon\")\npyplot.plot(t, T[1,:], label=\"Trilaminate\")\npyplot.plot(t, T[2,:], label=\"Glass\")\npyplot.legend()\npyplot.xlabel(r\"$t$\")\npyplot.ylabel(r\"$T$\")\npyplot.show()",
"Exercise\nCheck that you get similar results using RK2. Try RK4 as well.\nStochastic case\nThis is quite a bit more complex: see D Higham, An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations, SIAM Review 43:525-546 (2001) for more details.\nLet's suppose that there's some fluctuating heat source in the cell that we can't explicitly model. Going back to the single cell case, let's write it as\n\\begin{equation}\n \\frac{\\text{d}T}{\\text{d}t} = f(t, T) + g(T) \\frac{\\text{d}W}{\\text{d}t}.\n\\end{equation}\nHere $W(t)$ is a random, or Brownian, or Wiener process. It's going to represent the random fluctuating heat source that we can't explicitly model: its values will be drawn from a normal distribution with mean zero. The values of the random process can jump effectively instantly, but over a timestep $\\Delta t$ will average to zero, with standard deviation $\\sqrt{\\Delta t}$.\nBecause of this extreme behaviour, the derivative doesn't really make sense: instead we should use the integral form.\nIn our integral form we get\n\\begin{equation}\n T_{j+1} = T_j + \\Delta t \\, f_j + \\int_{t_j}^{t_{j+1}} \\text{d}t \\, g(T) \\frac{\\text{d}W}{\\text{d}t}.\n\\end{equation}\nWe approximate this final integral at the left edge $t_j$ as\n\\begin{equation}\n \\int_{t_j}^{t_{j+1}} \\text{d}t \\, g(T) \\frac{\\text{d}W}{\\text{d}t} \\approx g(T_j) \\, \\text{d}W_j,\n\\end{equation}\nwhere $\\text{d}W_j$ is the random process over the interval $[t_j, t_{j+1}]$: this is a random number drawn from a normal distribution with mean zero and standard deviation $\\sqrt{\\Delta t}$.\nThis is the Euler-Maruyama method.\nLet's take our original single temperature model and add a temperature dependent fluctuation $g(T) = (T - T_{\\text{ambient}})^2$.",
"from numpy.random import randn\n\ndef g_stochastic(t, T, parameters):\n T_ambient = parameters[\"T_ambient\"]\n return (T - T_ambient)**2\n\ndef euler_maruyama_step(f, g, t, T, dt, dW, parameters):\n return T + dt * f(t, T, parameters) + g(t, T, parameters) * dW\n\nparameters = { \"T_ambient\" : 290.0,\n \"c1\" : 1.0e-5,\n \"c2\" : 0.9,\n \"c3\" : 0.0,\n \"c4\" : 1.0e-2,\n \"c5\" : 1.0}\nT_initial = 300.0\nt_end = 1e-2\n\nNsteps = 100\nT = numpy.zeros((Nsteps+1,))\nT[0] = T_initial\ndt = t_end / Nsteps\nt = numpy.linspace(0, t_end, Nsteps+1)\ndW = numpy.sqrt(dt) * randn(Nsteps+1)\nfor j in range(Nsteps):\n T[j+1] = euler_maruyama_step(f, g_stochastic, t[j], T[j], dt, dW[j], parameters)\n\npyplot.figure(figsize=(10,6))\npyplot.plot(t, T)\npyplot.xlabel(r\"$t$\")\npyplot.ylabel(r\"$T$\")\npyplot.show()",
"In a fluctuating problem like this, a single simulation doesn't tell you very much. Instead we should perform many simulations and average the result. Let's run this 1000 times:",
"Nruns = 1000\nT = numpy.zeros((Nruns, Nsteps+1))\nT[:,0] = T_initial\ndt = t_end / Nsteps\nt = numpy.linspace(0, t_end, Nsteps+1)\nfor n in range(Nruns):\n dW = numpy.sqrt(dt) * randn(Nsteps+1)\n for j in range(Nsteps):\n T[n, j+1] = euler_maruyama_step(f, g_stochastic, t[j], T[n, j], dt, dW[j], parameters)\nT_average = numpy.mean(T, axis=0)\n\npyplot.figure(figsize=(10,6))\npyplot.plot(t, T[0,:], label=\"First run\")\npyplot.plot(t, T[99,:], label=\"Hundredth run\")\npyplot.plot(t, T_average, label=\"Average\")\npyplot.legend()\npyplot.xlabel(r\"$t$\")\npyplot.ylabel(r\"$T$\")\npyplot.show()",
"The average behaviour looks much like the differential equation model, but now we can model variability as well.\nExercise\nRead Higham's paper and try applying the Milstein method\n\\begin{equation}\n T_{j+1} = T_j + \\Delta t \\, f_j + g_j \\, \\text{d}W_j + \\frac{1}{2} g_j g'_j \\left( \\text{d}W_j^2 - \\Delta t \\right)\n\\end{equation}\nto the problem above. Here \n\\begin{equation}\n g'j = \\left. \\frac{\\text{d}g(T)}{\\text{d}T} \\right|{t=t_j}.\n\\end{equation}\nBlack box methods\nIn Python, the standard solvers for initial value problems are in the scipy.integrate library. The easiest to use is scipy.integrate.odeint, although other solvers offer more control for complex problems."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/vertex-ai-samples
|
notebooks/community/migration/UJ1 AutoML for vision with Vertex AI Image Classification.ipynb
|
apache-2.0
|
[
"# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Vertex SDK: AutoML image classification model\nInstallation\nInstall the latest (preview) version of Vertex SDK.",
"! pip3 install -U google-cloud-aiplatform --user",
"Install the Google cloud-storage library as well.",
"! pip3 install google-cloud-storage",
"Restart the Kernel\nOnce you've installed the Vertex SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.",
"import os\n\nif not os.getenv(\"AUTORUN\") and False:\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)",
"Before you begin\nGPU run-time\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU\nSet up your GCP project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex APIs and Compute Engine APIs.\n\n\nGoogle Cloud SDK is already installed in Google Cloud Notebooks.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.",
"PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID",
"Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend when possible, to choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou cannot use a Multi-Regional Storage bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see Region support for Vertex AI services",
"REGION = \"us-central1\" # @param {type: \"string\"}",
"Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.",
"from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")",
"Authenticate your GCP account\nIf you are using Google Cloud Notebooks, your environment is already\nauthenticated. Skip this step.\nNote: If you are on an Vertex notebook and run the cell, the cell knows to skip executing the authentication steps.",
"import os\nimport sys\n\n# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your Google Cloud account. This provides access\n# to your Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# If on Vertex, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this tutorial in a notebook locally, replace the string\n # below with the path to your service account key and run this cell to\n # authenticate your Google Cloud account.\n else:\n %env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json\n\n # Log in to your account on Google Cloud\n ! gcloud auth login",
"Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nThis tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.\nSet the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.",
"BUCKET_NAME = \"[your-bucket-name]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"[your-bucket-name]\":\n BUCKET_NAME = PROJECT_ID + \"aip-\" + TIMESTAMP",
"Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.",
"! gsutil mb -l $REGION gs://$BUCKET_NAME",
"Finally, validate access to your Cloud Storage bucket by examining its contents:",
"! gsutil ls -al gs://$BUCKET_NAME",
"Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants\nImport Vertex SDK\nImport the Vertex SDK into our Python environment.",
"import os\nimport sys\nimport time\n\nfrom google.cloud.aiplatform import gapic as aip\nfrom google.protobuf import json_format\nfrom google.protobuf.json_format import MessageToJson, ParseDict\nfrom google.protobuf.struct_pb2 import Struct, Value",
"Vertex AI constants\nSetup up the following constants for Vertex AI:\n\nAPI_ENDPOINT: The Vertex AI API service endpoint for dataset, model, job, pipeline and endpoint services.\nAPI_PREDICT_ENDPOINT: The Vertex AI API service endpoint for prediction.\nPARENT: The Vertex AI location root path for dataset, model and endpoint resources.",
"# API Endpoint\nAPI_ENDPOINT = \"{}-aiplatform.googleapis.com\".format(REGION)\n\n# Vertex AI location root path for your dataset, model and endpoint resources\nPARENT = \"projects/\" + PROJECT_ID + \"/locations/\" + REGION",
"AutoML constants\nNext, setup constants unique to AutoML image classification datasets and training:\n\nDataset Schemas: Tells the managed dataset service which type of dataset it is.\nData Labeling (Annotations) Schemas: Tells the managed dataset service how the data is labeled (annotated).\nDataset Training Schemas: Tells the Vertex AI Pipelines service the task (e.g., classification) to train the model for.",
"# Image Dataset type\nIMAGE_SCHEMA = \"google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml\"\n# Image Labeling type\nIMPORT_SCHEMA_IMAGE_CLASSIFICATION = \"gs://google-cloud-aiplatform/schema/dataset/ioformat/image_classification_single_label_io_format_1.0.0.yaml\"\n# Image Training task\nTRAINING_IMAGE_CLASSIFICATION_SCHEMA = \"gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_classification_1.0.0.yaml\"",
"Clients\nThe Vertex SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (Vertex).\nYou will use several clients in this tutorial, so set them all up upfront.\n\nDataset Service for managed datasets.\nModel Service for managed models.\nPipeline Service for training.\nEndpoint Service for deployment.\nJob Service for batch jobs and custom training.\nPrediction Service for serving. Note: Prediction has a different service endpoint.",
"# client options same for all services\nclient_options = {\"api_endpoint\": API_ENDPOINT}\n\n\ndef create_dataset_client():\n client = aip.DatasetServiceClient(client_options=client_options)\n return client\n\n\ndef create_model_client():\n client = aip.ModelServiceClient(client_options=client_options)\n return client\n\n\ndef create_pipeline_client():\n client = aip.PipelineServiceClient(client_options=client_options)\n return client\n\n\ndef create_endpoint_client():\n client = aip.EndpointServiceClient(client_options=client_options)\n return client\n\n\ndef create_prediction_client():\n client = aip.PredictionServiceClient(client_options=client_options)\n return client\n\n\ndef create_job_client():\n client = aip.JobServiceClient(client_options=client_options)\n return client\n\n\nclients = {}\nclients[\"dataset\"] = create_dataset_client()\nclients[\"model\"] = create_model_client()\nclients[\"pipeline\"] = create_pipeline_client()\nclients[\"endpoint\"] = create_endpoint_client()\nclients[\"prediction\"] = create_prediction_client()\nclients[\"job\"] = create_job_client()\n\nfor client in clients.items():\n print(client)\n\nIMPORT_FILE = (\n \"gs://cloud-samples-data/vision/automl_classification/flowers/all_data_v2.csv\"\n)\n\n! gsutil cat $IMPORT_FILE | head -n 10",
"Example output:\ngs://cloud-ml-data/img/flower_photos/daisy/100080576_f52e8ee070_n.jpg,daisy\ngs://cloud-ml-data/img/flower_photos/daisy/10140303196_b88d3d6cec.jpg,daisy\ngs://cloud-ml-data/img/flower_photos/daisy/10172379554_b296050f82_n.jpg,daisy\ngs://cloud-ml-data/img/flower_photos/daisy/10172567486_2748826a8b.jpg,daisy\ngs://cloud-ml-data/img/flower_photos/daisy/10172636503_21bededa75_n.jpg,daisy\ngs://cloud-ml-data/img/flower_photos/daisy/102841525_bd6628ae3c.jpg,daisy\ngs://cloud-ml-data/img/flower_photos/daisy/1031799732_e7f4008c03.jpg,daisy\ngs://cloud-ml-data/img/flower_photos/daisy/10391248763_1d16681106_n.jpg,daisy\ngs://cloud-ml-data/img/flower_photos/daisy/10437754174_22ec990b77_m.jpg,daisy\ngs://cloud-ml-data/img/flower_photos/daisy/10437770546_8bb6f7bdd3_m.jpg,daisy\nCreate a dataset\nprojects.locations.datasets.create\nRequest",
"DATA_SCHEMA = IMAGE_SCHEMA\n\ndataset = {\n \"display_name\": \"flowers_\" + TIMESTAMP,\n \"metadata_schema_uri\": \"gs://\" + DATA_SCHEMA,\n}\n\nprint(\n MessageToJson(\n aip.CreateDatasetRequest(\n parent=PARENT,\n dataset=dataset,\n ).__dict__[\"_pb\"]\n )\n)",
"Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"dataset\": {\n \"displayName\": \"flowers_20210226014942\",\n \"metadataSchemaUri\": \"gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml\"\n }\n}\nCall",
"request = clients[\"dataset\"].create_dataset(\n parent=PARENT,\n dataset=dataset,\n)",
"Response",
"result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))",
"Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/datasets/3094342379910463488\",\n \"displayName\": \"flowers_20210226014942\",\n \"metadataSchemaUri\": \"gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml\",\n \"labels\": {\n \"aiplatform.googleapis.com/dataset_metadata_schema\": \"IMAGE\"\n },\n \"metadata\": {\n \"dataItemSchemaUri\": \"gs://google-cloud-aiplatform/schema/dataset/dataitem/image_1.0.0.yaml\"\n }\n}",
"# The full unique ID for the dataset\ndataset_id = result.name\n# The short numeric ID for the dataset\ndataset_short_id = dataset_id.split(\"/\")[-1]\n\nprint(dataset_id)",
"projects.locations.datasets.import\nRequest",
"LABEL_SCHEMA = IMPORT_SCHEMA_IMAGE_CLASSIFICATION\n\nimport_config = {\n \"gcs_source\": {\n \"uris\": [IMPORT_FILE],\n },\n \"import_schema_uri\": LABEL_SCHEMA,\n}\n\nprint(\n MessageToJson(\n aip.ImportDataRequest(\n name=dataset_short_id,\n import_configs=[import_config],\n ).__dict__[\"_pb\"]\n )\n)",
"Example output:\n{\n \"name\": \"3094342379910463488\",\n \"importConfigs\": [\n {\n \"gcsSource\": {\n \"uris\": [\n \"gs://cloud-samples-data/vision/automl_classification/flowers/all_data_v2.csv\"\n ]\n },\n \"importSchemaUri\": \"gs://google-cloud-aiplatform/schema/dataset/ioformat/image_classification_single_label_io_format_1.0.0.yaml\"\n }\n ]\n}\nCall",
"request = clients[\"dataset\"].import_data(\n name=dataset_id,\n import_configs=[import_config],\n)",
"Response",
"result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))",
"Example output:\n{}\nTrain a model\nprojects.locations.trainingPipelines.create\nRequest",
"TRAINING_SCHEMA = TRAINING_IMAGE_CLASSIFICATION_SCHEMA\n\ntask = Value(\n struct_value=Struct(\n fields={\n \"multi_label\": Value(bool_value=False),\n \"model_type\": Value(string_value=\"CLOUD\"),\n \"budget_milli_node_hours\": Value(number_value=8000),\n \"disable_early_stopping\": Value(bool_value=False),\n }\n )\n)\n\ntraining_pipeline = {\n \"display_name\": \"flowers_\" + TIMESTAMP,\n \"input_data_config\": {\n \"dataset_id\": dataset_short_id,\n },\n \"model_to_upload\": {\n \"display_name\": \"flowers_\" + TIMESTAMP,\n },\n \"training_task_definition\": TRAINING_SCHEMA,\n \"training_task_inputs\": task,\n}\n\n\nprint(\n MessageToJson(\n aip.CreateTrainingPipelineRequest(\n parent=PARENT,\n training_pipeline=training_pipeline,\n ).__dict__[\"_pb\"]\n )\n)",
"Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"trainingPipeline\": {\n \"displayName\": \"flowers_20210226014942\",\n \"inputDataConfig\": {\n \"datasetId\": \"3094342379910463488\"\n },\n \"trainingTaskDefinition\": \"gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_classification_1.0.0.yaml\",\n \"trainingTaskInputs\": {\n \"model_type\": \"CLOUD\",\n \"budget_milli_node_hours\": 8000.0,\n \"multi_label\": false,\n \"disable_early_stopping\": false\n },\n \"modelToUpload\": {\n \"displayName\": \"flowers_20210226014942\"\n }\n }\n}\nCall",
"request = clients[\"pipeline\"].create_training_pipeline(\n parent=PARENT,\n training_pipeline=training_pipeline,\n)",
"Response",
"print(MessageToJson(request.__dict__[\"_pb\"]))",
"Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/trainingPipelines/1112934465727889408\",\n \"displayName\": \"flowers_20210226014942\",\n \"inputDataConfig\": {\n \"datasetId\": \"3094342379910463488\"\n },\n \"trainingTaskDefinition\": \"gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_classification_1.0.0.yaml\",\n \"trainingTaskInputs\": {\n \"budgetMilliNodeHours\": \"8000\",\n \"modelType\": \"CLOUD\"\n },\n \"modelToUpload\": {\n \"displayName\": \"flowers_20210226014942\"\n },\n \"state\": \"PIPELINE_STATE_PENDING\",\n \"createTime\": \"2021-02-26T02:11:57.377842Z\",\n \"updateTime\": \"2021-02-26T02:11:57.377842Z\"\n}",
"# The full unique ID for the training pipeline\ntraining_pipeline_id = request.name\n# The short numeric ID for the training pipeline\ntraining_pipeline_short_id = training_pipeline_id.split(\"/\")[-1]\n\nprint(training_pipeline_id)",
"projects.locations.trainingPipelines.get\nCall",
"request = clients[\"pipeline\"].get_training_pipeline(\n name=training_pipeline_id,\n)",
"Response",
"print(MessageToJson(request.__dict__[\"_pb\"]))",
"Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/trainingPipelines/1112934465727889408\",\n \"displayName\": \"flowers_20210226014942\",\n \"inputDataConfig\": {\n \"datasetId\": \"3094342379910463488\"\n },\n \"trainingTaskDefinition\": \"gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_classification_1.0.0.yaml\",\n \"trainingTaskInputs\": {\n \"budgetMilliNodeHours\": \"8000\",\n \"modelType\": \"CLOUD\"\n },\n \"modelToUpload\": {\n \"displayName\": \"flowers_20210226014942\"\n },\n \"state\": \"PIPELINE_STATE_PENDING\",\n \"createTime\": \"2021-02-26T02:11:57.377842Z\",\n \"updateTime\": \"2021-02-26T02:11:57.377842Z\"\n}",
"while True:\n response = clients[\"pipeline\"].get_training_pipeline(name=training_pipeline_id)\n if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:\n print(\"Training job has not completed:\", response.state)\n if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:\n break\n else:\n model_id = response.model_to_upload.name\n print(\"Training Time:\", response.end_time - response.start_time)\n break\n time.sleep(60)\n\nprint(model_id)",
"Evaluate the model\nprojects.locations.models.evaluations.list\nCall",
"request = clients[\"model\"].list_model_evaluations(\n parent=model_id,\n)",
"Response",
"import json\n\nmodel_evaluations = [json.loads(MessageToJson(me.__dict__[\"_pb\"])) for me in request]\n# The evaluation slice\nevaluation_slice = request.model_evaluations[0].name\n\nprint(json.dumps(model_evaluations, indent=2))",
"Example output\n[\n {\n \"name\": \"projects/116273516712/locations/us-central1/models/6656478578927992832/evaluations/8656839874550169600\",\n \"metricsSchemaUri\": \"gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml\",\n \"metrics\": {\n \"confidenceMetrics\": [\n {\n \"precision\": 0.2,\n \"recall\": 1.0\n },\n {\n \"confidenceThreshold\": 0.05,\n \"recall\": 0.98092645,\n \"precision\": 0.8910891\n },\n {\n \"recall\": 0.97275203,\n \"confidenceThreshold\": 0.1,\n \"precision\": 0.92248064\n },\n {\n \"recall\": 0.97002727,\n \"confidenceThreshold\": 0.15,\n \"precision\": 0.9295039\n },\n {\n \"precision\": 0.93421054,\n \"confidenceThreshold\": 0.2,\n \"recall\": 0.96730244\n },\n {\n \"precision\": 0.9465241,\n \"recall\": 0.9645777,\n \"confidenceThreshold\": 0.25\n },\n {\n \"recall\": 0.9645777,\n \"precision\": 0.9516129,\n \"confidenceThreshold\": 0.3\n },\n {\n \"precision\": 0.9567568,\n \"recall\": 0.9645777,\n \"confidenceThreshold\": 0.35\n },\n {\n \"precision\": 0.9592391,\n \"recall\": 0.96185285,\n \"confidenceThreshold\": 0.4\n },\n {\n \"confidenceThreshold\": 0.45,\n \"precision\": 0.96185285,\n \"recall\": 0.96185285\n },\n {\n \"precision\": 0.96185285,\n \"recall\": 0.96185285,\n \"confidenceThreshold\": 0.5\n },\n {\n \"recall\": 0.96185285,\n \"confidenceThreshold\": 0.55,\n \"precision\": 0.9644809\n },\n {\n \"recall\": 0.95640326,\n \"confidenceThreshold\": 0.6,\n \"precision\": 0.96428573\n },\n {\n \"precision\": 0.96694213,\n \"confidenceThreshold\": 0.65,\n \"recall\": 0.95640326\n },\n {\n \"recall\": 0.9536785,\n \"confidenceThreshold\": 0.7,\n \"precision\": 0.9695291\n },\n {\n \"confidenceThreshold\": 0.75,\n \"precision\": 0.9719888,\n \"recall\": 0.94550407\n },\n {\n \"precision\": 0.97720796,\n \"confidenceThreshold\": 0.8,\n \"recall\": 0.9346049\n },\n {\n \"confidenceThreshold\": 0.85,\n \"recall\": 0.9318801,\n \"precision\": 0.9771429\n },\n {\n \"confidenceThreshold\": 0.875,\n \"recall\": 0.9291553,\n \"precision\": 0.97988504\n },\n {\n \"confidenceThreshold\": 0.9,\n \"precision\": 0.98255813,\n \"recall\": 0.92098093\n },\n {\n \"confidenceThreshold\": 0.91,\n \"precision\": 0.9825073,\n \"recall\": 0.9182561\n },\n {\n \"confidenceThreshold\": 0.92,\n \"recall\": 0.91553134,\n \"precision\": 0.9882353\n },\n {\n \"confidenceThreshold\": 0.93,\n \"recall\": 0.9128065,\n \"precision\": 0.9882006\n },\n {\n \"precision\": 0.98813057,\n \"confidenceThreshold\": 0.94,\n \"recall\": 0.907357\n },\n {\n \"precision\": 0.990991,\n \"recall\": 0.89918256,\n \"confidenceThreshold\": 0.95\n },\n {\n \"recall\": 0.8855586,\n \"precision\": 0.9938838,\n \"confidenceThreshold\": 0.96\n },\n {\n \"precision\": 0.99380803,\n \"recall\": 0.8746594,\n \"confidenceThreshold\": 0.97\n },\n {\n \"recall\": 0.8692098,\n \"precision\": 0.99376947,\n \"confidenceThreshold\": 0.98\n },\n {\n \"confidenceThreshold\": 0.99,\n \"precision\": 0.9968254,\n \"recall\": 0.8555858\n },\n {\n \"confidenceThreshold\": 0.995,\n \"recall\": 0.8310627,\n \"precision\": 1.0\n },\n {\n \"recall\": 0.8256131,\n \"precision\": 1.0,\n \"confidenceThreshold\": 0.996\n },\n {\n \"recall\": 0.8092643,\n \"confidenceThreshold\": 0.997,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 0.998,\n \"precision\": 1.0,\n \"recall\": 0.79019076\n },\n {\n \"precision\": 1.0,\n \"recall\": 0.76021796,\n \"confidenceThreshold\": 0.999\n },\n {\n \"precision\": 1.0,\n \"confidenceThreshold\": 1.0,\n \"recall\": 0.22888283\n }\n ],\n 
\"confusionMatrix\": {\n \"rows\": [\n [\n 80.0,\n 0.0,\n 0.0,\n 0.0,\n 0.0\n ],\n [\n 3.0,\n 85.0,\n 0.0,\n 2.0,\n 0.0\n ],\n [\n 0.0,\n 1.0,\n 67.0,\n 1.0,\n 1.0\n ],\n [\n 1.0,\n 1.0,\n 1.0,\n 60.0,\n 0.0\n ],\n [\n 3.0,\n 0.0,\n 0.0,\n 0.0,\n 61.0\n ]\n ],\n \"annotationSpecs\": [\n {\n \"displayName\": \"tulips\",\n \"id\": \"521556639170428928\"\n },\n {\n \"displayName\": \"dandelion\",\n \"id\": \"1674478143777275904\"\n },\n {\n \"displayName\": \"sunflowers\",\n \"id\": \"2827399648384122880\"\n },\n {\n \"displayName\": \"daisy\",\n \"id\": \"5133242657597816832\"\n },\n {\n \"id\": \"7439085666811510784\",\n \"displayName\": \"roses\"\n }\n ]\n },\n \"logLoss\": 0.04900711,\n \"auPrc\": 0.99361706\n },\n \"createTime\": \"2021-02-26T02:36:30.247855Z\",\n \"sliceDimensions\": [\n \"annotationSpec\"\n ]\n }\n]\nprojects.locations.models.evaluations.get\nCall",
"request = clients[\"model\"].get_model_evaluation(\n name=evaluation_slice,\n)",
"Response",
"print(MessageToJson(request.__dict__[\"_pb\"]))",
"Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/models/6656478578927992832/evaluations/8656839874550169600\",\n \"metricsSchemaUri\": \"gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml\",\n \"metrics\": {\n \"logLoss\": 0.04900711,\n \"confusionMatrix\": {\n \"annotationSpecs\": [\n {\n \"displayName\": \"tulips\",\n \"id\": \"521556639170428928\"\n },\n {\n \"displayName\": \"dandelion\",\n \"id\": \"1674478143777275904\"\n },\n {\n \"displayName\": \"sunflowers\",\n \"id\": \"2827399648384122880\"\n },\n {\n \"displayName\": \"daisy\",\n \"id\": \"5133242657597816832\"\n },\n {\n \"id\": \"7439085666811510784\",\n \"displayName\": \"roses\"\n }\n ],\n \"rows\": [\n [\n 80.0,\n 0.0,\n 0.0,\n 0.0,\n 0.0\n ],\n [\n 3.0,\n 85.0,\n 0.0,\n 2.0,\n 0.0\n ],\n [\n 0.0,\n 1.0,\n 67.0,\n 1.0,\n 1.0\n ],\n [\n 1.0,\n 1.0,\n 1.0,\n 60.0,\n 0.0\n ],\n [\n 3.0,\n 0.0,\n 0.0,\n 0.0,\n 61.0\n ]\n ]\n },\n \"auPrc\": 0.99361706,\n \"confidenceMetrics\": [\n {\n \"recall\": 1.0,\n \"precision\": 0.2\n },\n {\n \"precision\": 0.8910891,\n \"confidenceThreshold\": 0.05,\n \"recall\": 0.98092645\n },\n {\n \"recall\": 0.97275203,\n \"precision\": 0.92248064,\n \"confidenceThreshold\": 0.1\n },\n {\n \"confidenceThreshold\": 0.15,\n \"precision\": 0.9295039,\n \"recall\": 0.97002727\n },\n {\n \"confidenceThreshold\": 0.2,\n \"precision\": 0.93421054,\n \"recall\": 0.96730244\n },\n {\n \"confidenceThreshold\": 0.25,\n \"recall\": 0.9645777,\n \"precision\": 0.9465241\n },\n {\n \"precision\": 0.9516129,\n \"recall\": 0.9645777,\n \"confidenceThreshold\": 0.3\n },\n {\n \"confidenceThreshold\": 0.35,\n \"precision\": 0.9567568,\n \"recall\": 0.9645777\n },\n {\n \"precision\": 0.9592391,\n \"recall\": 0.96185285,\n \"confidenceThreshold\": 0.4\n },\n {\n \"recall\": 0.96185285,\n \"precision\": 0.96185285,\n \"confidenceThreshold\": 0.45\n },\n {\n \"precision\": 0.96185285,\n \"recall\": 0.96185285,\n \"confidenceThreshold\": 0.5\n },\n {\n \"precision\": 0.9644809,\n \"recall\": 0.96185285,\n \"confidenceThreshold\": 0.55\n },\n {\n \"confidenceThreshold\": 0.6,\n \"recall\": 0.95640326,\n \"precision\": 0.96428573\n },\n {\n \"recall\": 0.95640326,\n \"precision\": 0.96694213,\n \"confidenceThreshold\": 0.65\n },\n {\n \"confidenceThreshold\": 0.7,\n \"precision\": 0.9695291,\n \"recall\": 0.9536785\n },\n {\n \"recall\": 0.94550407,\n \"confidenceThreshold\": 0.75,\n \"precision\": 0.9719888\n },\n {\n \"recall\": 0.9346049,\n \"precision\": 0.97720796,\n \"confidenceThreshold\": 0.8\n },\n {\n \"precision\": 0.9771429,\n \"confidenceThreshold\": 0.85,\n \"recall\": 0.9318801\n },\n {\n \"precision\": 0.97988504,\n \"confidenceThreshold\": 0.875,\n \"recall\": 0.9291553\n },\n {\n \"recall\": 0.92098093,\n \"confidenceThreshold\": 0.9,\n \"precision\": 0.98255813\n },\n {\n \"recall\": 0.9182561,\n \"confidenceThreshold\": 0.91,\n \"precision\": 0.9825073\n },\n {\n \"precision\": 0.9882353,\n \"confidenceThreshold\": 0.92,\n \"recall\": 0.91553134\n },\n {\n \"precision\": 0.9882006,\n \"confidenceThreshold\": 0.93,\n \"recall\": 0.9128065\n },\n {\n \"precision\": 0.98813057,\n \"recall\": 0.907357,\n \"confidenceThreshold\": 0.94\n },\n {\n \"precision\": 0.990991,\n \"confidenceThreshold\": 0.95,\n \"recall\": 0.89918256\n },\n {\n \"precision\": 0.9938838,\n \"confidenceThreshold\": 0.96,\n \"recall\": 0.8855586\n },\n {\n \"recall\": 0.8746594,\n \"precision\": 0.99380803,\n \"confidenceThreshold\": 0.97\n },\n {\n 
\"confidenceThreshold\": 0.98,\n \"precision\": 0.99376947,\n \"recall\": 0.8692098\n },\n {\n \"precision\": 0.9968254,\n \"confidenceThreshold\": 0.99,\n \"recall\": 0.8555858\n },\n {\n \"confidenceThreshold\": 0.995,\n \"precision\": 1.0,\n \"recall\": 0.8310627\n },\n {\n \"precision\": 1.0,\n \"recall\": 0.8256131,\n \"confidenceThreshold\": 0.996\n },\n {\n \"confidenceThreshold\": 0.997,\n \"precision\": 1.0,\n \"recall\": 0.8092643\n },\n {\n \"recall\": 0.79019076,\n \"precision\": 1.0,\n \"confidenceThreshold\": 0.998\n },\n {\n \"confidenceThreshold\": 0.999,\n \"recall\": 0.76021796,\n \"precision\": 1.0\n },\n {\n \"confidenceThreshold\": 1.0,\n \"precision\": 1.0,\n \"recall\": 0.22888283\n }\n ]\n },\n \"createTime\": \"2021-02-26T02:36:30.247855Z\",\n \"sliceDimensions\": [\n \"annotationSpec\"\n ]\n}\nMake batch predictions\nMake a batch prediction file",
"test_items = !gsutil cat $IMPORT_FILE | head -n2\n\nif len(str(test_items[0]).split(\",\")) == 3:\n _, test_item_1, test_label_1 = str(test_items[0]).split(\",\")\n _, test_item_2, test_label_2 = str(test_items[1]).split(\",\")\nelse:\n test_item_1, test_label_1 = str(test_items[0]).split(\",\")\n test_item_2, test_label_2 = str(test_items[1]).split(\",\")\n\nprint(test_item_1, test_label_1)\nprint(test_item_2, test_label_2)",
"Example output:\ngs://cloud-ml-data/img/flower_photos/daisy/100080576_f52e8ee070_n.jpg daisy\ngs://cloud-ml-data/img/flower_photos/daisy/10140303196_b88d3d6cec.jpg daisy",
"file_1 = test_item_1.split(\"/\")[-1]\nfile_2 = test_item_2.split(\"/\")[-1]\n\n! gsutil cp $test_item_1 gs://$BUCKET_NAME/$file_1\n! gsutil cp $test_item_2 gs://$BUCKET_NAME/$file_2\n\ntest_item_1 = \"gs://\" + BUCKET_NAME + \"/\" + file_1\ntest_item_2 = \"gs://\" + BUCKET_NAME + \"/\" + file_2",
"Make the batch input file\nLet's now make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:\n\ncontent: The Cloud Storage path to the image.\nmime_type: The content type. In our example, it is an jpeg file.",
"import json\n\nimport tensorflow as tf\n\ngcs_input_uri = \"gs://\" + BUCKET_NAME + \"/test.jsonl\"\nwith tf.io.gfile.GFile(gcs_input_uri, \"w\") as f:\n data = {\"content\": test_item_1, \"mime_type\": \"image/jpeg\"}\n f.write(json.dumps(data) + \"\\n\")\n data = {\"content\": test_item_2, \"mime_type\": \"image/jpeg\"}\n f.write(json.dumps(data) + \"\\n\")\n\nprint(gcs_input_uri)\n!gsutil cat $gcs_input_uri",
"Example output:\ngs://migration-ucaip-trainingaip-20210226014942/test.jsonl\n{\"content\": \"gs://migration-ucaip-trainingaip-20210226014942/100080576_f52e8ee070_n.jpg\", \"mime_type\": \"image/jpeg\"}\n{\"content\": \"gs://migration-ucaip-trainingaip-20210226014942/10140303196_b88d3d6cec.jpg\", \"mime_type\": \"image/jpeg\"}\nprojects.locations.batchPredictionJobs.create\nRequest",
"parameters = {\"confidenceThreshold\": 0.5, \"maxPredictions\": 2}\n\nbatch_prediction_job = {\n \"display_name\": \"flowers_\" + TIMESTAMP,\n \"model\": model_id,\n \"input_config\": {\n \"instances_format\": \"jsonl\",\n \"gcs_source\": {\n \"uris\": [gcs_input_uri],\n },\n },\n \"model_parameters\": json_format.ParseDict(parameters, Value()),\n \"output_config\": {\n \"predictions_format\": \"jsonl\",\n \"gcs_destination\": {\n \"output_uri_prefix\": \"gs://\" + f\"{BUCKET_NAME}/batch_output/\",\n },\n },\n \"dedicated_resources\": {\n \"machine_spec\": {\n \"machine_type\": \"n1-standard-2\",\n \"accelerator_type\": 0,\n },\n \"starting_replica_count\": 1,\n \"max_replica_count\": 1,\n },\n}\n\nprint(\n MessageToJson(\n aip.CreateBatchPredictionJobRequest(\n parent=PARENT,\n batch_prediction_job=batch_prediction_job,\n ).__dict__[\"_pb\"]\n )\n)",
"Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"batchPredictionJob\": {\n \"displayName\": \"flowers_20210226014942\",\n \"model\": \"projects/116273516712/locations/us-central1/models/6656478578927992832\",\n \"inputConfig\": {\n \"instancesFormat\": \"jsonl\",\n \"gcsSource\": {\n \"uris\": [\n \"gs://migration-ucaip-trainingaip-20210226014942/test.jsonl\"\n ]\n }\n },\n \"modelParameters\": {\n \"confidenceThreshold\": 0.5,\n \"maxPredictions\": 2.0\n },\n \"outputConfig\": {\n \"predictionsFormat\": \"jsonl\",\n \"gcsDestination\": {\n \"outputUriPrefix\": \"gs://migration-ucaip-trainingaip-20210226014942/batch_output/\"\n }\n },\n \"dedicatedResources\": {\n \"machineSpec\": {\n \"machineType\": \"n1-standard-2\"\n },\n \"startingReplicaCount\": 1,\n \"maxReplicaCount\": 1\n }\n }\n}\nCall",
"request = clients[\"job\"].create_batch_prediction_job(\n parent=PARENT,\n batch_prediction_job=batch_prediction_job,\n)",
"Response",
"print(MessageToJson(request.__dict__[\"_pb\"]))",
"Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/batchPredictionJobs/7156765165659095040\",\n \"displayName\": \"flowers_20210226014942\",\n \"model\": \"projects/116273516712/locations/us-central1/models/6656478578927992832\",\n \"inputConfig\": {\n \"instancesFormat\": \"jsonl\",\n \"gcsSource\": {\n \"uris\": [\n \"gs://migration-ucaip-trainingaip-20210226014942/test.jsonl\"\n ]\n }\n },\n \"modelParameters\": {\n \"maxPredictions\": 2.0,\n \"confidenceThreshold\": 0.5\n },\n \"outputConfig\": {\n \"predictionsFormat\": \"jsonl\",\n \"gcsDestination\": {\n \"outputUriPrefix\": \"gs://migration-ucaip-trainingaip-20210226014942/batch_output/\"\n }\n },\n \"state\": \"JOB_STATE_PENDING\",\n \"completionStats\": {\n \"incompleteCount\": \"-1\"\n },\n \"createTime\": \"2021-02-26T02:36:52.483588Z\",\n \"updateTime\": \"2021-02-26T02:36:52.483588Z\"\n}",
"# The fully qualified ID for the batch job\nbatch_job_id = request.name\n# The short numeric ID for the batch job\nbatch_job_short_id = batch_job_id.split(\"/\")[-1]\n\nprint(batch_job_id)",
"projects.locations.batchPredictionJobs.get\nCall",
"request = clients[\"job\"].get_batch_prediction_job(\n name=batch_job_id,\n)",
"Response",
"print(MessageToJson(request.__dict__[\"_pb\"]))",
"Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/batchPredictionJobs/7156765165659095040\",\n \"displayName\": \"flowers_20210226014942\",\n \"model\": \"projects/116273516712/locations/us-central1/models/6656478578927992832\",\n \"inputConfig\": {\n \"instancesFormat\": \"jsonl\",\n \"gcsSource\": {\n \"uris\": [\n \"gs://migration-ucaip-trainingaip-20210226014942/test.jsonl\"\n ]\n }\n },\n \"modelParameters\": {\n \"confidenceThreshold\": 0.5,\n \"maxPredictions\": 2.0\n },\n \"outputConfig\": {\n \"predictionsFormat\": \"jsonl\",\n \"gcsDestination\": {\n \"outputUriPrefix\": \"gs://migration-ucaip-trainingaip-20210226014942/batch_output/\"\n }\n },\n \"state\": \"JOB_STATE_PENDING\",\n \"completionStats\": {\n \"incompleteCount\": \"-1\"\n },\n \"createTime\": \"2021-02-26T02:36:52.483588Z\",\n \"updateTime\": \"2021-02-26T02:36:52.483588Z\"\n}",
"def get_latest_predictions(gcs_out_dir):\n \"\"\" Get the latest prediction subfolder using the timestamp in the subfolder name\"\"\"\n folders = !gsutil ls $gcs_out_dir\n latest = \"\"\n for folder in folders:\n subfolder = folder.split(\"/\")[-2]\n if subfolder.startswith(\"prediction-\"):\n if subfolder > latest:\n latest = folder[:-1]\n return latest\n\n\nwhile True:\n response = clients[\"job\"].get_batch_prediction_job(name=batch_job_id)\n if response.state != aip.JobState.JOB_STATE_SUCCEEDED:\n print(\"The job has not completed:\", response.state)\n if response.state == aip.JobState.JOB_STATE_FAILED:\n break\n else:\n folder = get_latest_predictions(\n response.output_config.gcs_destination.output_uri_prefix\n )\n ! gsutil ls $folder/prediction*.jsonl\n\n ! gsutil cat $folder/prediction*.jsonl\n break\n time.sleep(60)",
"Example output:\ngs://migration-ucaip-trainingaip-20210226014942/batch_output/prediction-flowers_20210226014942-2021-02-26T02:36:52.355258Z/predictions_00001.jsonl\n{\"instance\":{\"content\":\"gs://migration-ucaip-trainingaip-20210226014942/10140303196_b88d3d6cec.jpg\",\"mimeType\":\"image/jpeg\"},\"prediction\":{\"ids\":[\"5133242657597816832\"],\"displayNames\":[\"daisy\"],\"confidences\":[0.9999988]}}\n{\"instance\":{\"content\":\"gs://migration-ucaip-trainingaip-20210226014942/100080576_f52e8ee070_n.jpg\",\"mimeType\":\"image/jpeg\"},\"prediction\":{\"ids\":[\"5133242657597816832\"],\"displayNames\":[\"daisy\"],\"confidences\":[0.99999106]}}\nMake online predictions\nprojects.locations.endpoints.create\nRequest",
"endpoint = {\"display_name\": \"flowers_\" + TIMESTAMP}\n\nprint(\n MessageToJson(\n aip.CreateEndpointRequest(\n parent=PARENT,\n endpoint=endpoint,\n ).__dict__[\"_pb\"]\n )\n)",
"Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"endpoint\": {\n \"displayName\": \"flowers_20210226014942\"\n }\n}\nCall",
"request = clients[\"endpoint\"].create_endpoint(\n parent=PARENT,\n endpoint=endpoint,\n)",
"Response",
"result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))",
"Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/endpoints/3440574193450614784\"\n}",
"# The full unique ID for the endpoint\nendpoint_id = result.name\n# The short numeric ID for the endpoint\nendpoint_short_id = endpoint_id.split(\"/\")[-1]\n\nprint(endpoint_id)",
"projects.locations.endpoints.deployModel\nRequest",
"deployed_model = {\n \"model\": model_id,\n \"display_name\": \"flowers_\" + TIMESTAMP,\n \"automatic_resources\": {\"min_replica_count\": 1, \"max_replica_count\": 1},\n}\n\nprint(\n MessageToJson(\n aip.DeployModelRequest(\n endpoint=endpoint_id,\n deployed_model=deployed_model,\n traffic_split={\n \"0\": 100,\n },\n ).__dict__[\"_pb\"]\n )\n)",
"Example output:\n{\n \"endpoint\": \"projects/116273516712/locations/us-central1/endpoints/3440574193450614784\",\n \"deployedModel\": {\n \"model\": \"projects/116273516712/locations/us-central1/models/6656478578927992832\",\n \"displayName\": \"flowers_20210226014942\",\n \"automaticResources\": {\n \"minReplicaCount\": 1,\n \"maxReplicaCount\": 1\n }\n },\n \"trafficSplit\": {\n \"0\": 100\n }\n}\nCall",
"request = clients[\"endpoint\"].deploy_model(\n endpoint=endpoint_id,\n deployed_model=deployed_model,\n traffic_split={\n \"0\": 100,\n },\n)",
"Response",
"result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))",
"Example output:\n{\n \"deployedModel\": {\n \"id\": \"5165312113245159424\"\n }\n}",
"# The unique ID for the deployed model\ndeployed_model_id = result.deployed_model.id\n\nprint(deployed_model_id)",
"projects.locations.endpoints.predict\nPrepare file for online prediction",
"import base64\n\nimport tensorflow as tf\n\ntest_item = !gsutil cat $IMPORT_FILE | head -n1\n\nif len(str(test_item[0]).split(\",\")) == 3:\n _, test_item, test_label = str(test_item[0]).split(\",\")\nelse:\n test_item, test_label = str(test_item[0]).split(\",\")\n\nprint(test_item, test_label)\n\nwith tf.io.gfile.GFile(test_item, \"rb\") as f:\n content = f.read()",
"Example output:\ngs://cloud-ml-data/img/flower_photos/daisy/100080576_f52e8ee070_n.jpg daisy\nRequest",
"parameters_dict = {\n \"confidenceThreshold\": 0.5,\n \"maxPredictions\": 2,\n}\nparameters = json_format.ParseDict(parameters_dict, Value())\n\n# The format of each instance should conform to the deployed model's prediction input schema.\ninstances_list = [{\"content\": base64.b64encode(content).decode(\"utf-8\")}]\ninstances = [json_format.ParseDict(s, Value()) for s in instances_list]\n\nrequest = aip.PredictRequest(\n endpoint=endpoint_id,\n parameters=parameters,\n)\nrequest.instances.append(instances)\n\nprint(MessageToJson(request.__dict__[\"_pb\"]))",
"Example output:\n{\n \"endpoint\": \"projects/116273516712/locations/us-central1/endpoints/3440574193450614784\",\n \"instances\": [\n [\n {\n \"content\": \"/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAMCAgMCAgMDAwMEAwMEBQgFBQQEBQoHBwYIDAoMDAsKCwsNDhIQDQ4RDgsLEBYQERMUFRUVDA8XGBYUGBIUFRT/2wBDAQMEBAUEBQkFBQkUDQsNFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBT/wAARCAEHAUADAREAAhEBAxEB/8QAHQAAAgMBAQEBAQAAAAAAAAAABAUCAwYHAQAICf/EAEUQAAIBAwIEBAMFBQcCBAcBAAECAwAEEQUhBhIxQRMiUWFxgZEHFDJCoRUjUrHRJDNicoLB8EPhFlOS8QgXJVRjc4Oy/8QAGwEAAgMBAQEAAAAAAAAAAAAAAQIAAwQFBgf/xAA1EQACAgEDAwEGBQUAAgMBAAAAAQIRAwQhMRJBUWEFEyJxgZEyobHB8BRC0eHxFSMkM2IG/9oADAMBAAIRAxEAPwDRaXdr4ajO9KmGjU6Tpj32GLKqn1O5q6KXIpqIeGFeMECN/wDNvVgD6fQXiTAt4SPVY1/pRaAZvUNFtJHIntVBP54xyH9Nj9KravkKAH4ctWT9zKVONhKM/qP6UvSmEXXmi3FkrM0GYj/1EPMv1HT51U4NBGukypqdp4JP7+Md/wAy+tOnaoVizU7FoiTjcVUxgeHzoCPxJ1+FLZDWcLj7xE6L1XfHsa0Y3aAxxPbHrjcVcKBzW5I5sZx2pWNZZAAMDpjcUiAWNG0brKvzpiDVW8e3BxzD4VLALLleRyo2rPYSu0XxJghG/UUYche41ij8EHO61o4FKLqPDZ/Liq2yCyUb8h3zuDVTe4yJpIIkKsdiKiZBBqbYlfB2NJYQjTYC7L3CjJqJ0iGq0Fi2GwRmrcX4bAx1O/hxuParhTL61ckRMo/Edtqrk+wyMjOVTHcmqmEE5ERiW3PYCoiDKwhkunXblA6mrVuBj4IsaBF+dW8ERF35aAQbxjkcq5+dK2QKil/i79hS2AIWUDfHvTAJCZpNvy+1QhJVyCahKJPIIo8namqg0KL25MhO/WllwQT3kot4nOR+En9KqQTn2l6q3jJEiF3dgqL7naqISbdDtbHQdOnuLIZRhKF2fAx9K0x2ENfpmuEoGVtxuff4irrFNBaXyXZDKQHP5c7GnTAUahp0V4CCAjn2qEMzf6S9tk4OPUUGhkwGO5ltmPKxHxpSFMlpZ3kgmiJsbxTkSQjYn3X/AJ86VpMhKUffI+WYIk46vHuje47j4Gq2vJEI7i0a0mLAbelZ3sEusL+XTLpZoH5W7HGQR6EelSM6ZDe6Tq1trkWABDcD8UZ6fKtkJqQrR7LYmMkY2z0IpmAGMPI242qt7MJb4DBBg5Rv0qN0QusWMeV9DQTIUXyBpM/rVEmRFFsfAu0dh5Qd/h3pYzphNBNbeGSmNjW1+QAM6flPYbVTYBTPFhwR2ql3Y4LeOBCGAwRUsginzJIVPc7UCDy2QWlk7d2G1LN1sQ0GlJ4NuuAM4GK2RVRSFDbqUeHnPbBJqN2Qx2rXwMp5dz0x6VU2hjOyukbkndjSWQ+trfxZsk5LdBijHchordBbxqifiPX1rQtkKEdF6/OmbGBpZM5AOcUlkIIzIpJHwFK2Qks56tsTS2QvjDOc5yKYAWgwegApluQtB7k7U9UEXXV5zlsnyiiQAzzAtnIqlu2QyvE+p+F+5BHiSDdfRf8AvSN1sFGN0gG21W3c9mJHseU4qiCqVjvdGxt9Rkgl50Y4I7VqWxWPrLUecB4zyOOoHQ06INbTVsPhm8N+pU9D70yYKNJYa4sihJnyv8fdfjTpgoPkVZY9/OpGxHenAINT0fHM8S8y917ikaCZq5iMMmMEDtmkYwG106MHB36HFVsgUlxHex4Y4bpv+nyqmSIAzwGJiuCMdKzPZkPLa7ktZleN+V12B9qaM6DR0fh7WY+ILQBtrqPqp6/8/wCetboTU0I1QZPYc42GGB6YpmgFSQ8oKEH2zWdt8BPktmdgY0Zz/hUmjbZCi7tZl3e3lX4oaVpkBWQoQrKVJ7MCP51naaCaC0f73p8ZJzJH5G98dD9K1wl1RA+Qa5iDgNjfpQkQT3C8svSqepLkYU3+QNuhpE9yANtbmWcA9KeG+7IMdUlC24RdiMAVnlK2Q0tqOWJR6Dv7Cuk3tQoDf3/4kzjG1V9QaMhdyEuf4jVTYQF41E2W3NQhZdalFoWnSXssbSEERw26Y57iViAka57sxAz0G56A1ZEg/wBLt5kto/vEiSXJXMsiDCFjueX/AA9h3wBnerkAslDISCdvWq3LsErSDlyzZwenvUshU48pyd/Slvch7HADhm+Qo2QNXlAAG3vTrchaCScf8FWIgLe3YC8imiQUSytcSCFNyetBvYgJrusR6NbhQA87DCR5/U+1Ut0Q53d3UlzcSSyuXkc5Yn1qssSBpnMMwdeqtmk4dgNXYEXMa4xv/KtUdxBhFDJA+QfnTVRBlFKbgAN5ZR39aPJAmC8khPI4+DCpfkg/0nXQmI5D5cYz3p0wDeQiRA6tkflYURRPqlgJ1Yrs3f0NKxlZkb2Jrdjlcb1Uwi83wgkIZQ0Z2ZT0I7iqpMg0S4SVUiZ+ZJBmCYnr/gb3FUTQUUTRGNyCuG7iqG62YSzS9Vl0q9S4hJ5lO46ZHpTQm4uyM6rb6zaalpkN6kiqrjcZ7/Cuj1WrRXVAN1rVsoBy2R3ApGlyFC6TiWMNlVkI+IpHH1CTTiWJieZCp/xCq+l9mQKh1KK4XySMvwOR9KnVKPJAi31Uac5EsQMTjHiR7fp0zVkJJ8AaDZZElj8SNwysdiKktnuATXgzJmsjdsYVzLzBgw6UtkKraFYwSRnO1Wt1GiAWoz5uY07F1H6isqdyQTWGUquew/Wum2LwIdVu1jlB653qu0mQQXUhZy3rVblvYSNvbtPIpYbnpUjbdkFukuOL+LZbmI82jaA7W0LfluL7H71x6rEjCMHu7y/witUQG9RcrgCnfBEfTL5VBGHc8qYGfmfpWLLPp+bHSsjcwsG3YACrE9hQRkOQSduwp0Qn0IUYz39qKIWYVuRtvL0qxEK7y9EAIXqe4pyCa6ujgkneiuSFD3q6TYPdSbSyjyjO+Ow+dI2QweoXb3UryyHLt136VSxhacs1AI+k4UeckrdRYPqDVjxp9wWNdK0eXT40jkkSbGcMmenoc1dGNcCj+O18SPbcjqKeiHn3c5GxDetKyFqjnYo+xx1qEIFHgYc3Q9CKhBpY6o8AUEnB9DsaNgoYvOJ4soT6gelRsAi1GHx0PMMN61XLcNmV1C2eM4YZHrVMgg1jeiLmgmyYHPTup7MPf+dVvfkYdw3HigRysGcDyP8AxCs8lewAe5/ctzEEKfrS
wi26YRlpOrr4YSNl8P0U+tb1tshQye5ZtwfgKjIDPcyL+Un5UtkIHUCrDm2B9RQIXRagFcNHJykehqEG9rxFj91cAMjbE/8Aaq3HwQPhu302YMjB7WTqOoplK9mQMu2DFXU5RtwfWsk04SIAzIGbaqk7YQaZxGygH41Zkn2IJZJBNqtuhO3iqf1FU43ckFmtvJCIio9a6E5UIjPagyk5PmYdqrvuwi6Zi2GK7noKDduwizia51CLTUsdJcJrOov92t5SMi3BBMk5HpGgZvdjGvVhm+C2AajhnQbThzRLPTLFCltaxiKMMcsQOpY92JJJPcknvVyIOkiJGKLIGxMrxEnAVRjmNZ5q0QVzlbiRmGeXtUWy3IDSMvMAD0/SmRCK8oztzfGrEiELu58GM8u21OQTSXLyNzfQUyIViFWzJM4WFBzOzbbVG6IZXV9XbVbpn3EKnCKfT1qhuxhNctn41AgksohUsetTjcBdHrU0bgh2BqWQcWHE7xMjMobHrTqbQKOg6PdW+qW6z25GD1Xup9DWpNPdCsZtYCXoMGpQLBXsGOx2I9aFEKHtmjUrImV7EUvAbKGhMQBVSVPalaoJdBd+EcEHHbbpUsFnl1Isw5h36ig2AR3oDZU7/EVVIYz11EY5CQNhVEgottLsgAHoPWq3uhi+/uWnZN8gDFNDgB7bXXggDBzVl0BjZLxmXyjBHrRsBW95MDny/CqnkDRR+0iPLJHzD/Car98kHpZAXEUuSmUb6VPeKXAektS9aMYcbdiKnvFwyUNdL1rwD4MxLW7+v5femUk9mBo0ljNy5gYgqfNGfejNdca7il3MCx9e9c+MviQRPcy5lc9hVM8jvYZKxJbTCTWoAeqyL/Ohhl/7AtbGtvrgiTlUeYiuhKdMShPIMM3N1PrS9RKBiyxq0jkKiAszscAAdTTRfUyBeiWryyNdyx8ksi4VWG6JkEKfQ5wSPUD+EVu9ADyJQuwplsQkkyLGR19TSuWxAaTUVk8o2RKTncgJJdg55f8A2qckB0lDscZx71YQquL7wRjqx/SjZAD7y858wO/anSIFxWhVS7+XG+D2p/w7shzXiT7Q7bU717GykBsIWw8wP9+49P8ACO3r16YrO59UqCtizT4LzV0BsrGeZT+ZUPL9Tt+tOoNjNocW3AWr3QzL93tB6PJzt9FyP1q1QQthK/Zdbtg3t/LN/ghUIPqcmm6Y90K7ZiL3T5rVzzDIHesrQ9lEU3I25+tJY1Gh0HXZtJullhYAnZkb8Lj0NPGbjwBo67w3rdrxBCDA4WZRl4WPmX+o9xW2M1IqaHbWCzDDdR3A3pqsADNpRQFSeZe1I0QVzabNCuVbnT09KFbDWBNE5cldj3BpKACysyMduXHakumMAX3nHMPxe9I+SCK6QlyDvVTYQIgwuCOlZ3sNyFRsJo+XIB7UVNUSj1PId+1JLJQaDIw4wWIVW39aze/6nXBKGptxHGDjIPeqMqnHuRC65AQny7ZzWJzl5LEgJmRm2PKwrNLUThvQ6iqPvFYcwY5FWx1qkluDpLYpiuO4PetcNSm6A47Gn0K/M0IiLeePdTmurHJ1Kyhqh2LjLu2NiucCubmn0ZGwpWJb6XDMQe1YpZLLEhXpI8XUw+c4cAVZjyfEvmRmnupT4pYbnNa/e3JtFdAU6s5329aZzdkoHTknuRalOdVUSvnp+Ly59d1J/wBNdLCtrFZpLOApsPiTVylbASnxFzHvjrRbpEFQuD5hkAVQ5NkBTMrcwzse1S3VEK/GypA6VbHZEIyXXKuFH6VZdkAvPLITy/WrEiDSxsgo8SQhVG+Wprogn4s0674uT9mQXjadpTY+8zQDmnuB/wCWudkX1Y5J2GMZy1J8gPeH/s90Th5V+6afEJF/60/72Q/NunyxT7LghqETlUDOQOg7CjYT4ycucn51LIDPcxhj5t6lkOQnWvFTlkHOPcVk6rCCyx290SUPI3pSvcK2BiZLfB3dfUVVuhxjp+sS28qSQyPDLGcq6MVYH2IoqdApHT+F/tYUKkOroXxsLqJcn/Uo/wBvpWqObyI4nR9Pv9P1mBZLS5gu4u5icNj49wa0KSfAlELnR8czQk79UPeiAS3+jliZIRyv3U0jW9kEV9a7jmUxN64qqSGQouYSr8rPufwse9VN06YRPc2+5UnDiq2QWyoCO21Z5+R0UxSeG++4zVHCG5GMYV8c35h5WFZ59URrK2nktJPDkHMn6/Gs0nezGSsbaXqkaoIZmzCfwyH8vt8KMM6/+vJx2Fce6J39u0Jwd17MO9Yst43THi7QjnwrkjbPasrlvQxTDclZQrbqTjFcbUq03B0y2Ndw4oFGV6H1rDpPaE5SePJtJDSiuUFaZdNb3CEZBz+lev0urt7szygauWcJE7Z2K7GqtfqVCN2JCO4o1ScRwsx7rmuWtRdblvSC8OfijfP4pM/r/wB60Y9QlKKsDVo1EAE0oQtjOSW64ArctRHHjeST2W5VV8AXEdxa8N2FxqWo3K2umW8LXEtw/wD041BLEjuQB077Cp7Oz5dZijlyKrv/AESaUXQu+zpbnUdAt9WvoTb3urY1CSA9YEkAMMH/APOLkUnu/iN1Y16nqpJIpNqgwc9hRjKtyCnU5mVHBbGaSU3RKFbTYTcnpkmkbukE8jMZHlz/AFq5cAZcqBxgDar4pMBNLBnBY4jT+JzgfrVySAVyJFGxEZ8dvVelWWiEo7KS4Ia4kwnYCpaCGosNuuExgdzUshXJeoucMDUsgLJqbk4UZ7bVCFDyzynfI+NG2QryAcO+/sKBDk7Wjr0G1ZByAidGPehdEJLMVyrrgUL2IRADNgAg0gxZHK8PQ5HehZBjZarJaSrNHK8Mq/nRiD9RTKTXAKs3GhfarqNkFS4f73EOpJHMP9j+nxrRHO1yK4m80vjzTNaUb8kvUgfiHxU7/TNaVkjIRoLubO01SNhHIr5GxU7j5UWrQDKaro7xZVxlR0NZ5QvkazO31t4RDE5Tpn0qqUd7IK7m3KjIIKN0IqiSoZCqRmhcBqyvbkfkY6dKt4phz5wMj3pdppxDuty4j70DbykLOn4G9faudk8PkdMXJcSWkhBBIBwymuXmyLhlqVjiw1VVUQTkvbHdG/NH/wBqz/1ka6MnAXB8or1KERnIOVYZVh0NY8mdR4ZK8iaf8WAwJB7VwX7QWbH7yq5/Iv6KdDEXahkR2GSMCvOR1cpyWUvlHp2C4PMy/GvQ4PaCW9lUoWaiV0itIjK6RqVxmQgLnqASdu1cv257SnkhCGJ773Q2HGk22LOJoGghgRkZjKmF5AcNjG249xXLxe1pzx1HmOzvkd4knuVWh/Z9xbWm3PHHzNjpktv+pP0rt6DW9dKLutimcaY+iufukKswzM+By/7V7TNkwwxxhn3t8ea3+3cxqLb2OA//ABF8XTcVcT8O/Z5bzswvrqB9T5G6Ru4WOI49uZz/APz967Win1w9527fJFWTZ9J+lrCKOCPCqFQbIoHQdvpXSjlvkRrwTnuXWE4HL8ad5tqRKM7ezGV8Fqr63J0NVFSoMAA+XuTWqKt
7CE/GjhGdvnsK1/CuQckDqUr+W3Tc/mxtT9XgFHwtWkPPdTMx9CadOyFrahb26lVx8BTWQpOrSSbRqfQEU17EPOZpfxty/E0UQhI6qcZyPQVLIVteqBiNMH4UbIV+LJI3fPoKKRAiK2kf8u9MQxK6DfjYRK3t4i/1pHiYbPJdBvVUlrOb/SvMP0qqWKQbAZrQxNyupRx+WQcp+hqpwa7Bspa0HXBWq6phsi1ucE8mT6rQCQ8EEb7j261AnyqYTlWJHpU4IExanyEEkqQe3UfOjZKNPpXGNzblOaQzqPzZw4+ff5/WrI5mitxNzpnFlvqsAWdg22OfGCP8w/3rSpqXAtUVajo6lWaICSJhnHUfKkaBwZS+07wwwXIOd1NZpxGQkvbfA5geZcYPqKyZFQ6Yu55LSdWRirA5UjtXOnNwdotSsePIusWRuoDyXcWOdR6/0NZ8uVSXX9wpdge5db6BJQCLkDDKB+IDv8a81rtTjhOML+JmiEW962LGs3t0EkavJbsB+8xspPrXh5e0+uLjPaW+xu93vsGSWlzp2mpPcxCaxl35kP4M9Dntn6VxcWunC8UJfT/BbLGnuyNxcWOjrFaXCJiceKzgjKEgb9Oo274GKyp5crcot7fmPGG1gOk6bBJrVrZzP9+nvYlWyaN/zvktnHTHIcE7YJ7mr5Sk8bcFVc/JfzcpUPi3Gttplzp8VoLyURXsjlJbVxllIYjIx1GBn3HSqlqKk+jeK79gJWtzSz3Ea2DXdzarJYwRu4EqHJcEAMuQMgAvk/GufOeTLLp6vif6clypAem6p+2LMh/ENus3hpy7Dm5CRHzdF+dCeP3Uvp+/JoyxiqSIaJoUtvNPPfMsl+GCP4T80UYUHlVdtz5iSfUY7V6H2bqMcM6kvwR337t/8MOSG1vljOC1N3LO8XNFImYhNMvkHc8vr79egrZn9pT1GZzbSj5sSOOlSMpf/ZbFb8Z6PrNrfW8EEbPLqcMuZHvp85ilyy4UrsNsDCrgArk+19lahf07lLOpN/h2cUl3XLv67+pkyxqVKJvbbUf3ZBTkkU4KntXex6vp+GXJmcQPUL9pMID8cVpjn95KkDpoW8qHzE5rowqPIjKpLnmPKg5mHp0rbHK5bRFomLfIDTHm9B2FWxdAKrjVIbNSE8x9jVnUiULJNQnvmxzcoPp0FOmSqLooEXzNIPnVsWAIWYLkR7E96e2wFvhzPsz5+G1OgFkVlcE4IwKJA2DR1c+YNn1A2okGtppCIM8mfciiQOW1ghHmKgntTWQwT2Eo7fKtFAspaF09vjShKpJ5QhRjzL/C+GX6Hake5AF7OznODGbZvWLp/wCk7fyqqUIvsQEm0OWEGSMC5iAyXi/Eo916/MZFUPE1wNYCYFb8Khs1RQbK2tzjBAUds0o1g7xcmcrk+lI9uAlTuynmj2I7Gl6iBNpq8kEgfdHHcVFJ9iVZq9J4wZAAZOX17qflWiOYr6aND+0bPVVCzKI3P4XXoaZzUwUJtW0vlDIcYPRxVE0mqYVsY7UYzCSrNhh6964Oo2VGiJ9pTzW5a5hbmCEI6LuSD/ttXkdXr1gyrFLbqT37fI1wx2rQ/QWkhs7qOeOINMhMc2Qr+bcEjpt+ux714LW6zPkzShL+26rtsboQSj8xjxM+i291KZdaOitIuAskRkhB6csgG6g+wNcTTrLlSSh1fJ7/AE8/Wi3ZMqa01yHS7a/sLURTWXKlxYZxFLynzlX6crAq3pu2+dqsrA8jhkez4fj59/P5Eck+COo8J2qcSRcRc8N9p+OWa0mXm8KX8KyL1DDK53wckdc0cWpawPDG1Nd/T9vBVv1WafjfTI9e4VklvbeaxabFtNdxQq0xRjsCPZ+RhnGCB61j0mXJhzR6N0t0r79/ytP0ZG0twC24ebX9c0u91G7K3NgWiuROpBuUYADnTfkPU7ZB9qeeVYcc4QjtPj/87/mv0K0+t34NJLYw3mn3Gj6u/hXsiPHJd25LchlQqSmcAJh8AYA/nWF5HDKp418KfD9H3/6W/iQw0y3lsIZtOt7eB7WGMxC3j8qsM8wIJ35s+o6n1rPOXvJdTe77/wA7Fjews0ThuK3gFlb3czM5Zk+8YJQ5HlYjc5bmGcbZ6bVoyalt9TVdnX88DbvkSW2u316bNGtpLaK7PJAsgIWU5GfoMn4AkZq+WGMFJXdc+g8acWwxNcsr+7aCGCSd4mKkKh9cZx26e1WRlnwRUrr6lNRldF1yxjUsoTB6MnpjYV3/AGZ7QzSyuWWbexmyY1WyEV3qQRygGT0wK+jaPVOe0DFKKRXC01wSWBRPTpXoMXU/imUMLDx20YONx+tdGOSIjQtvtSeU8q7D0FaVO9gUBqqk5Klj61agB0FuZtk2+O1aUkhA620YlslQx9qsS7ksZw6QuQPCwfXHSrluKGJp8MIzJLy0/BC1JLRPwnmxRRCZ1NEGI0C++KjIQN9PJ03HqKYh5Gjy7sAKKRBD95JwGdfYmtQp65Qjzx8w9RQCgO4ghYeU49jShFtzaFd1AI9qRqiFEV09s4O4we3alIFPa2Wsjmdvu1z1EyDZj/iXv8Rv8elVSjGXJBTf6VPprItyvkb8EgOUf4H/AG61RLHQ1gToiHp9apaoNg8kZB2A+IqlpoZMEljkB3x9Kr4GKiJYW5lIweuKVuhkFQatJACrESQtsVPQ1W50TpQxg4r1XS4Q9rB/4isV/vLCRwt5EvcwSHCyY/8ALk39H7UVnV7iOHgqXVuHuNef9kapFHfqOaXTL4G3uIj3BRsEYO3pXM1c8cV1dX3LMalYRpPDdxFZzlJkhvecbSseXk36Y75x618h9taxZNQotfCv1/wdfDGohg4eMdpLfXOppZq37uSFYieU8w/eK2eo6gAdQd68/wC+uShGN+vp4LlHuaXRbTTNc4gttW06VGks42hWS4j5EeUpjmKcuxCk7qceYHAIGMuSWXDCWKWyl4fC8X6vzv27gY34Zl1eW7uo71rW0un5kMEziRZCCehU9Dn1+VUzUMaqEthWxRY2djJp01u8F3b6jbsCieNvFKAD4JDAAqp233wAc75qyUmpXtT7+nnvz6CW+57fX+q8LpZQ3UsTWlySFkgk5iJcfhwRzZO4G1T3EcvxQ5/YXqVBc3Eh1+/tLSKVLeay2uri5DBQPyqcDJOdwPjVXuXCPVPh8E6kuC6GcW2tTDWUE935Y5GilYQGPbleMDBGc533HTJpJwUYpYvw+vN+GMpXwWQz6kdSvf2RaPd20TMYnacMxIA2GTlhkkb77UsseOUU5um+w6klyH6VcRajZ3MqmRb6eOSRJpCUKuQSF5egAJ7gnbfJqifwSSktk1tz/LHt9g7S7v8Aaui8lt4NxE8XLDOx745QebfO/eqpRcJ09n3C2kZtNRu34hk0dYUs7tCObkA3UDZzjsR0NbJY0sazN2v5sWJLpsB1PV7aW5XNyLiRmZPu9ru0hJwrA7AHuQNv51r0+LI5KONbtqr8lb4bfBfJCuixLFColkP97OfxM3f4D2r7Fhy6f2NijHa3y+7f+DkyUszILc
yHZt8/xV6DS+1MGqVQkmZ5Y3HkFvYTJh4jsPxIe3uPat8sMfx4/sJfZlC20B/E6g981fjjXYDYfa20LDyOj+wrbFdyth8SW8OOdMH2FaEgWW/fFQYt0Oe5YdKsragFck1yNxJuewqxJohWplc+YFm9KKV8kL47YyHOcN6AU/SQuS1MZwyHPrRqgWXF3jOEQMKNEISSzyYUAp8KaiWKneGQKzoPfarwbkGiidD4eV+FQJQ0DLsHB9jSkBpY2XsCPallYRbcIjgnPm9KrZAAqyPlWwRVb2CMLHXpYFaKVUngbZ4pV5lYe4/5jtUUiUHw6HpevEfs+5+5XR6W1wxKsfRX6/I7/Gh0Rl8wAl1wrqFueSe3VCO5cA/0NJLG+5LFcuh6gAwNlJInqi8/8s1mljk+EOpCe4tDCSkytET+WRSP51llBrkdMW3EQibY5WsUti5ANzcm0XnWQow3DKcEVgyyGSLtBmtuL9ZCPaxNrAiMYvVQKZY+pRjjG2M/BTXi/bmTpwNzeyf+jbh52OgaVCwt7eG7vYrHUHzF4F6hVQ+4XDA75GMetfOsmR1KMU5R523/AOG2q3Feo6LdyaDBbaklxp0UXMTq3heIkoOwXGQWyehx0OPSrYTgsrnH4r/tumL1NLYusNN1TgNrbT01K2uoLlnNtf253Q8vMw5d89Njnv6UMs8Ws/8AZTTXKYrmNNQEvDll+0LG/juxJOIJRekA27MfxjHVfbt9azxhjz1Fqtm9u/8AOxT1tcg/EGn2VtpOn3OnLcS6oZ2hdWnb+0FsnnJY4TlHQbDAA6ircU3NuM6Sr7V/n9RG3yEniDVbC+tor2B7NPD57aOR0kjyThmBBODsNtuvSq3hi4/DK/uJ1J8BWpcT2fGGqW9nd3TWcdkeV5ebExdhtyZH4dubJ2bII23oRwzwx63v1cd1S/f07CSlXBTFa6facV3MWrSPqs0caLBOGMB8Jgckcj7NnPm6dMCpJtY08KqNu1zuvpwWRlaPL3X9N4A1uOxsXCWUipNHA0ru8btkESliSM45hk7jOMYpHgya3G8rW/F1XHiixNLZmha50ix0qKYN4l2h8bx+dgzuXyeYZ5XznoR3rB/7JycapfLj5eC1X5DrjV4IYoXs4eW/mZUWxg5VVl6c4XYDAAJxtt69aowlNvreyvd/zcs42GUdwRC8GrWyOt0hUgEFSCPwhhgjG2wxgmqWnFp43x/LoZVJGR0RNMGu3Vvp9pbRxwvym6Ul3kddsIWJIXO+BsSM75FdrHqMumljz5N6d1x96XJFDri4rgX/AHy84k4hvLbTgps7FuSW6ckRhvQHG7ZB2HoTsOvroaXUe2pqbfSklzwv9mFzWLZbjGexvUjHhtHc8pwVVsE/AGurpfYefSyeTHkUn44K55lPagNpiJhz5iY7MrjlIzX0LTZpdC69mYpLfYEksmGXclIxvzsCFHz6VqjK3sATHjLhq3uzajX7O6uh1t9Pc3sw+McAdh8xW+EcnNMrdWaG01ATrzQ2WrOm2HuNOns1Od9muVjz8s1ojGa5F2GEV0FRWYNETuVzkj5ir0muWKXR60sewy49DVyaRAmPiUDpBEPcimtIFF68RFx/cRk+1NaJRP8Ab5x/crn401olHo1sNuYBt6GmtEo9XWoj+KE/I0VQKMFFqUyYIY7daFjBkOsOMEJ8aa0QIOoLKvNjHtmhZD77zFJtzHmoOmQBukbJIG3qN6QgnvIZMEqWPwqpoKFzySxebdj6Gqm6CTt9VfoVx60jkw0a/Q/tFmslW3vB98t+nLIcOPgT1+B+op45q2kBofm003ipefRtTFjef/bzJkE+65B+an5U7Ucm8WLujGcS6hxTwlDI+o6BdanZp1uNH/tkePdDiRfgUrHNZo8bj7Mw1l9rfAfEEjQSyRWdwv4knVrZ1PoewNYckn/dAZV2ZdNY8NX00c9tqU0sYYOYlKTK4B3GQQcbYrzHtDPGGOSimn58M1402zQaNrCWNvLJw5pskaRNzTmGMsCN9mA36ZwQNq+SarFkyTrUzv6/odWLSWyPdTsJeJtQtBdSz3GkXCuxFpKshBA2GeqgtgEkKRnt2pxuOmUulLqX0/nkeUr2NRY2s+s6A6X9+9iDGQIfD57mEpkLzMTgbqDtnIPUdscpRxZaSv8AR/z8ipt1se6bqeiW/Cstnq9layWUMXMHePLTPj8Q7q3NuMZ60rWZ5+rDy+3+e3BXS7mcbUNAn+z++S7hsnbwh92uLt28UyAZUrJuVbIH0rfD+ohql0Xu90kuCqrC10DTr7hOa6jW7m4htURo57ifLGQ4HKVDKgBYlScYHXPco88o5vd7LG347eb3fr+RTK63Yk4w4b12Ph2zv9aXT20PTv3sjQ3niTKGAByvKAQM4OCeu2d8a9NPD1uOGT65bJU0tt+Spt0J9EsTqWiXOvaXqEVnGIzcpY8gYSRqMdRvzHHw23NX5pKOT3GaNvi/Vips01nfaDrGmxWDaYbi+lflsyJQLkSNuGMp7Z9dgNum1ZHHLCbktl38JccButynhTjuxvtL1HRNdERkmbwNQ8aX94zrsrcw6gdiAKXPpsmKccuDdf2+CxSvcafZnZ6Lr1gdKvIZbm7Tmt5JGlfx4ix5jIA34SNsHBO/yrPrsmXBk95HaL37U+1beTTFmo4Rs9J1DS0WWUT3RQlb4yczREEgMMbAYPQep6Vzs0545uLVR8F6b5CLDU7e91VdMmu45I7CRjLjIWaQbLy+wHMT8V9N6Jwljh7xL8XHoh0k+Aq/0/QNLtbG5sbeXT42mEf9nHMckHH4m6HGQfb3qYp5c0pLLK9vH/Bm+lJFdx91ttFmttD06O2XxEZkgQLyM5GXbrg4GT17Vrx6rULKnkyuldWI4KtkLtOmtptSawm++REEj73cJyoWx2BOeU9M4Fe20eujJLFHJ0t/3N2r9fn+RjnjfNHuvQWd9YXGnz+I7OChmhlMbIvfldTkE7jbHU1732Vrve4F71XJP6fNeUYpwp7GYg4T0CzYMukWszj89zH47fWTmNekhqH2/wAFXSN01W4tYRBbs0EA/wCjCfDQf6VwP0q+OSUuRaKTJM5Lcu56kDB+taU3W4KKmadxsCvrirLAR5JHJGCfjVqbfBCQjlQ7ZFP8wFiLN/EfrTIBes1wuAelMmyFn3qUfPfamshE3kp7H5UbIKI4gp67e9W0QISFo2/FhD6bih6ELFteV8HofXal5IfNauhHL09aBD3wnZcFi3pyigyFPgyZxy49c96rtkBpdMLAt4e3U4pWiAE2kRyZKkqaqlFcoZMFfTpHHKTj0JFUNNj2Dm1uLVgQ5HoQapdxYypmh0z7Q9Z0hQryftCIbeHcjzAezjf5HNT+olH1B0XwF3l3wLx9JE+vaLp6XyHyPqFuhIOO0uP0JA9qj1GOa5piuDQn1PgZ9L1ZXsrHRp9PKHw420yNJVH+GRCMj4g/GvCf/wBBrVpIJSp35SZv08HPdE9LI1hbvT7VYdEeFAz8uIzMpJ8y7diPXv3r5RnfS/fTfVb48eh1tqpIOgurm7vrf
TIdSgLwQjxLww+G3hggDKZILZYbqcHrgZ3rlGCi8yi6b4539H/nj1Ee2wv4htJ9LvY9OXWFkXU1kYXksPIyYfdTk8u++56e9XY3HKveuP4e1/z/AGVN0gDhe5suDNTeeZXgnAaa3bUMny46xl9sEknK+vvV+oWbNFVxtxX51+5nbt8iX7RJ9M1fia1nezbT9HltzK0q2zxxvcMSV5ywwSVBI9cnritukjlhhk1vNPzul6b38/BW32FMWua7qRWz4eSHVoLdkeaSSYI5BboSe4wd9+mat/p9PH49S3G+NiuUqVB/EX2gXNjeDhTVdJk+5TMgvVu35ofD2bOxyT5RsBnf0qvT6FdH9XjyfErqub4+RW99im94G1vXtQWbgl4TpTkNqFpPMEAcgsyR48wDKFGPU1dh1WnjGtYn1r8LX6v5MTdbIhpPCmqafHqGt8PWcz6MqhT52a5TB5WCxg83kJLeo5Se1TJmhm6cWZ/Gvt9+Nwd7RpeIePNFuOCooGjtop1mhb71HEGHIr5JdsZXYMdz1rnYNLqPfNb1vt60Wqh5rzy6tDYcQ6PY3s17beGbvVLFcNJbkAEvjHPt0YAlcZyAKx4bj1YMzSXZPz6eN+3cui99h3aWej6vBBDp+dKuyuBJp7hYzHt5nTo2wIB6+Yb7ZrBLJkg7yrqXrz9P5RfG13Hj8OCLULZdSlElvbKrWgtXxE/8Rf8AMG3AxttjrgYxvNUWsa3fN8/T+eTRjVlvG1to0OlW73EVzewSyhIYIJ+WaKUc2SrY8wx/EGG9TSyy9b6KXm1ar+eC5pt8WUaAsNzb3QGnPpbQx4bxJmlkCnGWYZ5cbDfahm6lJfF1X4VfQa2+diN5dm50OBy8c8olkiWaIYUqJGwy9duUqa9PDRwlgwSwx3fV1Xe7Vc/QxSy05WUw2Eb/AI8gnvX0X2WpxxpZElXjj+ehgyNN7BA0dSdiCPTG9etxpcmayQ0qBvxYWuhD0Fsl+yY1xyDmHqK1RutxbJLo4b0zVyaSAS/YUZ3OM0U/ACptHKn8OQe4q1epCH7IJ6bUbIQOnY6g596ayFUlhyYwNqNkKmsc7inIKIboMF54lf1GKtsgwgNrKuCjRj19KOxA+OwhlAKTAEdjQpACF0lnA5TGx/nQcSHg0eVTnwyPhSUSz5tJDrupV/cUrjZLK/2Lg4IJ9qFUErOggnKx8v0pXSID3GhPIpUxjPriqZJsgun4fbBBAPxrPLGhuoTXelMhwEJ+AzWOUH2LVLyIdS0e4KEraTyA7f3LnP0Fc3LiyK6i/sWRkvItsLvUNHiaK3NxAuSfAdGMXyGPL8sV4b2pjyyb64KUV52aN+JxS2ZpdC1m0vNHNvqZOmaldEqr3FrzRKchU8y5PKe5bHLzHtvXg8+BrJeF9UV4e/rzt9nubYzdW0ZyWxh4G117m5v0uNZ3gTTNMjEyOrEbF25TnIU4UbY6npW6Mlq8Pu4R+F79T2r6b/mScjU6ld6foph1K4v5b/UIp+cSzKfAiJXlOIztgHG+577VzsallvFjjSf3f1M8rCOJeO7HUb/Q9OmmW7e3vEu2mQeIIsAnDMM8vmwf1pMGmywhOdVaa32vyVSoQ/aP9rp4m4Wu+GNPtbnW7qceEZbYmVYgPzNjPLgZO+BtXS9m+zJYs0dTkfQo70+/ov8AW5VkltRh+FeHJOG7iC34dkk1KW9VXuFzylSCfMWxhVXJzjr2ya6mqzrVR6tRUVH+V6tmY6NrvAVk2soeKwGS8hAtLrS5giFVUcxIO7MGYDcYwdu9cXDrJY8aWnWye9rz+2waMBwzqmo8Ja3rFjoBm1Sz0+RjDeTzCNmOAzB2G53/AIc4IwM12dThx6mOPJm+GUkrS8fsyvgo4H+0viA2ck0P9svbyRlW2DOz+djhMjclTt6kGprNBgU+hbJLnb7/AFAmdV4as+EY+GdQs7iS6WCRC00s1w48IkEsG5R65O4/Lg153Nl1PvVJJX22HSFeh8UcQ8KcKwalPa3Udgk/gR3CqP3MXMRG5jzkLhQdwBuM4yKtzaTDqM8oRkrq3vy+6v8AwXqnSNQNK0mDRRrHDwTTbm0Rp57eLEaXyHzEuWI843we+QpwMYwPNlyy/p9T8Vuk/H27f9NK2G+hxxcc6bZalPqM1haqv9nt0ZSec4877nI8o8u2Bn1rFl/+JN4lHqfd/sv3Zpj5L7PQYdadl1W6gvo42kWFouaJkIJ8ysCQQwAOCp6bUjz+5X/qjXF9/wCV8y6MpRdpizVLyz0KabSbuMXty0eTcyyA5U7cgCj07nr1xVsIyypZoOl4/caXxfiYxMMdzptryiWJA++xYKSv5u4G3U+tdv2fqvdx93mklFPbzvz9NjDmx3+EsW0uo1zGY5k/iRh/I19O0WJ5IqeNpp+Gc6WzpkGluUbDhl+VelxxceUUNonHzOwJyproRTW5W2MbYYIUk1qjvyBjAKgwrE5PQ1ckAuS0dugDrV/BC9bEpuVwPrUqyHjacrZYZY+lMlSIefs4uDmPA9xRohS+nrnHhDHuKaiFLaODuFx7UyIc/wBNSC9H9nnjdv4c4YfLrVqp8EHNvprAklQfhT0Cw1LDbZSKFELZBLAn7pdyPxYqUAEGq3lq+S3MOvmGanAdmOdP1+1usLcN92c/mIyv9R+tC0Chz91EsPipyzRfxxYYfpQa2B3E93relW7sj3kAkX8SKwLD4gVnlKMfxMPIouOM9NjyUlV/gf6A1RLNBD0xXPxraNkoik//AKif51meoiuEHpYuuuPJAD4bOv8AlULWaWsfA6xiW84pubrOZZBnsXNcvUaick9y6MUmBjV2sibm4j++Q45fDDZHx6V8s9o6yWqXuotxfqdWEOjdgN7JBxBcQ3c8l3/4eRh46KpQzMDlo8jGAehYcxGdh0xxMd6ZNJJzfH+fpzW1l6d8EbfiDROLNVj0iztr+01GI82nXMMgt/ACKTylvMMDGc8u+wwO9scOfTQeeTjJP8V73f2+wjl2Prq1vtA123TVZ7TULUxeLEMcqyuSObbYeUZHTqdsYpE8c8beJNP9F/PUzzbYr4zvdB4OuvvtjCtt99BNzZQsW852VlXqVwSNiem9a9NDNrY9Ev7eG/0spe259wl9tek8L6bLp0cDxmXLixitmVmZhgkEAZz/ALinzeytRqJdbla4tu6KHRm7TVotK09dajQWuoCV3kjidihV32gKnGAucZ67Z7VunieabwV8Pl+i5vy+wqDNe+1e91LQRYRZgskEc819ATKscmTyhDjyg5weuzYyO9OD2bHHk9493xT228v1/m4/T3MRY3/EbtcDT5DZ+JgtPFugUjBPN/EQDsPbrXXcNLFqWTevv9vAkkqs6Xb6loXDh0OfQI4bYxo1vcPEqvNK22GYgk8xI3wBvn5efyLPqFNZ78rsv54KkqNAeb7TbxY4b6LQUj5ke6iAV7mTl8ivGR+Hm/E2x6YO5rJGUdHvlj1X9ku7vz47DfIqXiHUxpmq8KajbyW2rXFuYpQm9rhgVEivuGXbO2/brmllgxQnDVYpXBO1527N
dn2LosZ/Z015oesSWOqSRaxcadKj27nZArL5XUdCdzuc4I61n13TOKyYV0qS3+fg0Rdo10cel6jxLdT2UxtLZRGLiCzOFklyS43yAegPLjqe9cybyY8cVlVt3u+a7f6s0Qtohr9yeCpIZ9MjuNT0x2Eg5pUR7UnYIWKM3KdwGxntnO5mGMNWmpun+vr2X0/YtuXCRXwxctqCajeHQrJ7pgZgWUuzMcADnkfB7DYipn+Bxh71qPH8pFig0m5IbwXa6hpc9tNBPpiM5WW3KRiRcEEgE8wQEnr8cYpMeOSzLoak/Nut/tYkvw77CRNP1GC4kk++2tshclVSVmOMnHRRX0v2V7Olg6ZxzU/Bz8uTq2cRva3mpRbferS6Ho+Qf5CvoWnnmXMkznyS8Da3nlutpdNJP8VuwYfpXYi5P8UfsVbBKfdmYYeSJ/4ZEIrQukAxitS6jBD46Yq3ptbAsJjR4tgPpTpUQLSVwcMNu2RRIXpIuPwgGiQtBEmAwAHtTIJaIk9AaNAIG1Rj5t/lRIfnay4alZlMsywEbjlyzD6Y/nQWJrex2zZ6KPuahXurm5I6eIcD+Wf1rQlSENFb3S9eVQPfc0yAXyTiRfwgGiyCi/VSCCuD86RoggumdCeTPyqljEbPWL3T5fEt7mW3cfmjYqaCcl3ING4vj1EcuraVp+rED+8uLdfE/wDUBn6VHJP8SJVFbWXBupk+Jb6hpbf/AIZ/FQf+sFv1rPLDhn2oNsHueAtAuV/sHFiW7dlvISv67Vnlo8f9sqG62Lbj7K72ZT911jSr5uxjn6/TNZJaFviSHWT0EN/9mXE1pubaGUdf3dwu/wBcVz8uhyrfkf3iFC6JrFtdKtzatbKDzc7FWUEeuDv0rxXtrHPDhcq9H5+huwyUnQfDqkGsPdaXrD/dNPSDmTwogrP5ubl5s4Xffm5T8a+duDxpZce8r8/z7HQvsJJrqLQ9c0u30/TNF0wzFYW1CdwrvG2M8zhVG+B0G5x1rowvUY5rJKTrelxa8LcRxrc0HEPDk5uLPUdaggureGXMdrGS4t+bHKzk7EH07HFc7BmioShgbTa39fkVSRm/tL4YsdV1azudBtObV7CNWurG1j8rI2QCOwb0UehJ9a6mh1Mo43HM6jLht91+3qZpqkJuC9RvNA4ivJJNOnhvlaMM3g8zqwDEDK7g5P6j2rTqY+8xx6ZWt+5noV8fNp179pGm2q6aujBP7TNP4XhSBCMMnmxuf8WQNq16N5IaLJPq6rdVdr5/8LElaK7eytNFuUuJ0uL6Y3T8rLHIIbnm5hzKMBS2SNzj8LbUZ5JZU4xaWy8Wq89x0gW5/alvf6jPo1rNHeTPztalQ8IjP4SctjopGBuDmmTxTUVna6V37+vAGmhBw/d2eg61a3l1NAbe5mdpJYlLNG7ZJAQZ6MQNum9bc8MmoxyxxTTS28bev8srlGjqNjey8Q8QtcwXUmnbKXMsBka5xsCAT5QAP9sV5jKlhxVNdX14K6GXGfDUWi8a8PX2ja/cXTalItvdWuoOZDCqgEOoOy4GcL3OMb5oabPHNpsmPLjroVpra77ffuWxia7iTg/hubQ21OANHqmlckj3sszDnjDrzhs9chiQPXGK5eDVZ79y1tO9q71sXqNMb3mraVdaXZ2y+BYajE6x2s8IVnSM7kHb8HXr3IxvWGMMibbVrdv+eS+KdjfXbLTRwTqckT3Md5JGkcty0vnlUOuUBbyqu3sBv61nwTk9RBSW2+3jZ/mXtbOhbpWr2mg391aQwSRWljZpcxXAR5ZLkMvMSMDzgHYFRgZ36Vdl0+TMot/ik6rivvxt55H6pJfFwK9W4q168019RtNChurC5RhJPCyzle3PhchsDG4JxjcDFdLSez8c5+7631x7cf7+hnlka3rYx1nrKxpEI3aRT17gV7fQ48qk1kXBnlVbDmDVlbbf3r1eJNGV7DK11Yr5o5XjI9GIrq45yTKmkzRWHGmo24VWuRcoPyzjm/XrXXx6iSW+5S4odW3G8RA8Wxj+MdbYZ4vlFfSMU40snC5ikB9T/Wr1kgwNMKXiy1ZQBGxFOpRBRNeJLcj+6P1o3ElEv2/A35XA+tFUEsTXIDt4rKfQg0aXkhfHqiS/hmVj6E1KIctbCAn/AGqwJFbsA9CT8alkDrO9fmUqCPfriimAf21xLPFnlJPvtT2AjPaSSKSFI9iaVkQrudPLdTg+1I0g2L59Mbcq2fbFI0yFB091GNs+1VSQSlrN98EZqvchU9synfO/pSNXyFAV1asckJn4iss4+B06E11cXdrnwppIv8jkfyrnZHOPcsVMS3vEOqnI++TNjbDNnP1rzuqlKf4jRClwXwarp11bw3ElpHPeWxR3gushJWXBOQp3Uke3yr5tqMWfFlfU/hd1XZdvqdGEoteoVq+v2l/p0OonQ/JIRJI0sARi2P8Aou+Ry8w9T8s4rHjwzxZPdrJS7U7+6W/6Fn4iMXG13rnDF9cWOleAizpaSXF/h4zI56LynJIB5iMbZG+4p5aOGDJFSyXab22dL7/K7EltyJNKY8MazZraRz3st755JWkBkyoOebJAC+gz3rRkf9Rjbk0lHj6/v5MbV7s2nCXDknEet3et3Wqz6XfXcrNLYQLH4ZK8yq6u2cHoT8fasGfURxwWHp6kly7XzW35Arey/hr9nXp1bTuJLaJ9QUyLeXNzyF85yMuM5QjlI6A7b1Xmnlx9M9PJ9Lqkrr7efIK3pmGitdV0K3fhvVNNl0rhyS4K22qzckkZXm5kjXDEq5JwHIGOmMkA9lvHnS1eGalOt4q0/Dfql3S/4622M7e3A4V4mkjs9QW+vJWUfe5hkSRNupbpgpgjAIzgYzmtqX9Ri6pRqK7eq8fP5bB4MrxaNL0a7s4mVr/U95ZJSoCyBiSfOemSc4IzjAroab3uaMpJ9MeK8V6f7A6Tofy8VW+r8DSy2sV1DepGV8dgE8I7hsPnOPTGMn0rCtNLDq4xm01fz/Kinp7o0B+0HRtF4Jja8tp9U8SFVzJbkvLIoJXmZmGAG752HrWNaDNn1TjGSjv54Xov+DNUrHOh6PxJxL9nMlx9ynuljti5gurkPJID1ZRgnOCSM429zWHNkwYNZ09aW/KWy+Zck6tnUPs/17TtK0KzMtrbwosfLItwgDAHZgT3z6d8V5vVY8k8sknZrpciW21+A8bH9kzW2p6TNCI5NOuoTNCAwIkiD4zgrjPUbkHNbJY5Q0696umad2nTtPZ1+3gsilLdMdaPwtZfZxY26SXnjaYJfFguvu7m7hwN42CA5UdeYBfj2OfNqZ+0ZPpjUu6tdL+V9/Tf9wytbNmfm4ls7jiDUJ9Cjl02OaRXaaRgpklA3k5AQVz5Tg9Tk7ZxXRxQzYVjlN9Uo+O3pfeuL/xZVUXaAF13RtdvHh1uzOiawGKG+tv7i4b+JWICtn0YI3x619Y0U1qccZyV2vlL6r9aMM8coq4PYPn4VurdRJbvHfxYyHhznHqVO/0z8a7cNM2rg7Mbl5F5QhjhSjg4IIwR8avjFrZguwhDKoDA59q0xjY
gSl04GTWhLwCwqG+OMZxV8RQ2C+5T1x7g1cpMFB0eoY35jVsZAoKS9PUE/WrLATF3nrt8KeyEknzupopkCE0u2wPJH82zWihTwaXbRk48PHp6UA2eeDFG3MkqRn3ocACYbxY/+smfansBd98V1yGz8agQZ2LEtyb/AM6UAOYGJJAHqd96VpkKza8/sT61WNZ7+y5JNkGB60riSyQ0OZhuhPypGmSyt+GpXBGCR6VS4sliXUOEiRgKR/vWLJi69qHUqM7f8JMjcioeb3FcfPo29kXRmZq70eYySQ6dyNdoCxlY7RjODjY5Oa+be082HHP3V35/wdbFC92T1ziq4ttKtNOvNKGpm3jXzJM0aSnuGVBzZ+DAV5zDgxyyvLCXTfonXyv90zTbWxTwzYw6pod1LqNnFZQzP4n7OiDQw2/L+EqCd2IG7HJNXarLKOVe7k3S52bflbLj04Kqsaf+ANPueFLfVdL1Sca1DGbm5M7GZZoCCwQLsAAOXBG+x6npR/WTWV4ckF08bbU/NlbhasE4Wj1zT9Curi2uLOS6ZXuIlaPbBAwoJIz0zv79qszxwTyKMlSWz35KxBpOqaTbaDeXHE5NnqNxzNLcXcbR805YvsADzKR0HYAbbVuy4suTMo6TdLhJ8Ljfx6lfHI64h+02HU+ArVbiynu9PuJVe7kngYpbx55iCM4JLqvTpgms2m9nTx6iThKpJVGny3/q/wAhr23EWstw1xFJZR3StoelmyZI7uMmASLzEMcMMEIRjJ3zj033Yv6rArS6pXxz/L9AWmI7949X4amlWGyu9Isp1zqVqIi12oflHhgAZJPLzYIHUDtWyEJYszjbU5Lh38Nrv8uy3BdrYK13hxtQFhY2Og3apOGNw8Bwjxr+EnBx1JwT3HSs2HN7pSyzyK1VX68/p2C93VF/CfAEPEuqalpuvffrWxsbdWSN/wB3IsrcwVjjZscu475+IpNRrXpYRy4KcpP5ql+gyXVydR1biT/5axw2r3SXEzxrFbmIcjEHylnjX8PLux3HTA3rzmPTrXOU47Ll/wCE/wAv1LUqqxvpnFml2elavNBIl8YYEZ3RRKI+UMAA/wCHrgY656+2SWmzSnCEl02/8F7aSM9wNxBo+q62t83DUVtdxv8Au4rCR18PbAQodmHUZPzzitesw5ceN41lbT80/wA+w2Poa6kPtTS4vuPoG1C9ltJraPFsLckBrdwThT1L84CkjpnpjFZsdQ0lY1cXz6SXn0rjz9x2oyd9/wBhLx3Z6po5/atvIl/Zqi/eLjw18WMDIDTKBynqAZFC55V5gDkt6T2TiWrh8Hwtcrn1tej8b1v2qseR9D33EFjxGl9+5vLa3LnbmQFOb5ZIr3mnwe74ZjcrH+mXv7KAFq7RRZz4LjK/LG4+VdzE5RdplElZp7XU7HWgsdzEskh2GTyv/pYfy/SuzjyxltJFDVcEJOGV5ibeXxlP5JMK39DWj3Ke8QWwGfTWtpCDGwx1DDpR6ekFg72p/hIHqKdJgs9+7PGMg5FWqPcNliNKu+PpT0Cwy3uyoHmOe4pgBaXGetMiE/G9wPjTJkM9HcXK7czgH3q22QtEs4Pm59+/epuQKgDORnn9siile5BraQTMQMF/Y06IN7fTn2JLY9DRSBYfHpefU+xOaLVC8l8WmBPy49sUPQgZDpkRxzLj41OlMDYcmlRpuRQaoDLkt41Jwv6UKrkh88CsDnAHt3qtxGM1xPxJo/DaE3lwFmI8sEY55X+CjcfE4HvWbJOGPeTGSb4OLcacbX2uJJHZx/syzOQVVsyyfFh0+C/U1wNVqHkXTHZGuEEuTntrbTOwhAMkS4yucKFBzynHQbdK8VrtNBp5OH+/k2xbew30try+13w2MrRtbuwlTzLEM4zylh32xuf1NeMyrDjxSryvqa921Yp0/QozxZI+us8gZOWGWYlowx7sT+EAbdAB3xWiWdvTdODnx/j6hlzwaHXrjU9CSK0tNRAhk8r2plTxFTOWERY+Q8ud6yYFjyfFlh9af51zuK9uC+eWxteHo5NFvDf2wnRhZQFJ2l3BKMobIOBuCQOu9KoZFl6dRGn5ey+ZVzuiHH/GmlapwzquhIlxqOr3Ub3cFrPG8DwxqCAy84/CkaZ5F3IViBjJrZ7P0maObHmdKEaTqnf27tvl8cPcSVIy/wBkVlol1pDW3Fl6l09nKWFjNMqRpGQFRyCBzYJJGGIyBtsK2e0smXHkWTRw2kuat33Xp27fuIk+GzR/Z3faVrus6jo2tpb8R6bZuF0trqFcXLDAd2DEjnwAAV/EOYjsK5+vjl0+KOXTtwk/xJPhdkmu3nxsh4rq5G3F/wBmz8WcewXXDmrW+maZiKbVLWB0BikXbmCDqTjcHBx2PfPg9oLBpP8A5eNyk7UW73Xq/wBwdDb2FhbibgHjJYrvTY761u4nEElhcKWkjV1zJyHfbI2PLktsTR6dJrNM5xyU0+6709vr9eB6d8EONuM9WsuIvvdloiLqGoxRyrdMjtOygsphVQSAylMEbkHfuKv0ujxZcXRlyWk3w9l3vz37hVp7IcfalJw3Zarpcs0mqa/bShmtdUg5I43HMP3ZVwuCoXOMbht+uRi0EdRLHOK6YNcrl/O03z8/8DtqMovcZ8Tz32l8N6FxLwZzPpYm/wDqeQoadVKgK4bzcpYujL0HkyOjFNF7tZcuDV1118Pond128O+efUbK22nHgaW+p6dfy6ld6ZamyaG4WSyu0JieVymY1dSPIFZuRyAQeUkZBFYZ43jcFOVpr4l4Xen39B07Xwlum/aBbX0P7O4j0w2pyFlCDxIg67cxXBZT/iXm69a9JoPZmhnPq3TrZ3tv/PFGeeSaiRTReJtPvY9U4f1SPizTQAoiLol7GMYIDEhJgd8qxR+uA/Svfab2Rhxt5dN8Mn67f6/QzyyrIumboWT8IadrcskmkxfsnU180mlXCGFSfRAwBj/ysAvoRtXWjg69mumRk6+n5C3w5NPka2u4ZLe4TYpKuD/z36U3u3B1JDdSfBfDPE4I7+/Sro7AY9sOIZrTCykzx/4j5h8D/WtuPI0VONmo03V7bUkwp8UL1Rxh1/7fpXShkjMraoNfRo3UvCAUI6YqzprgUAl0QJnyYJ9KPSSwOXTmiJBAxQ3CDSWZTcDFMQgMr5vT0o1ZCSzEoDg475G4qUQR22uEjBA+IrQmGg6LVwOqjFNaAHwaquxCg/KjZBpa6yAcAqSe2KOwKGkOroMeIT8cUbBQyg1IOVKjnX/LTdQKDkvnkI8qL8Wz/KoBhEdyW/MD/lFLugBiSliB/OgEB1fizTdEBWeUNMBtDH5n+nb54qmWSMOQ0c/4i+0DVtT54bJBplsR+IHmmYfHovyHzrBl1EmqjsWKPkwF1AmZJJCXYnzMxJJPua5sl1clq2JWvBd9q6Gdo2tbUDJdsKSPidh8T9DVf9LOW7G94lwUXul6bpkXLJKZlXcW9tlY/iznzOfpXI1WkwTVZF1enYshOSexmL3iK4sGK6VElmZB4eI1yzA9q81qNFp5r44qlubITlewqu
rzW0tIopuWV0yxidPNKTnBkPVjuMDIAAG3Unzco6aWRrEqXn/Hj1NK2XxDrWuDreLSzpEDwlhLm91fHiSkHHOVXIyOY4JBOF6d6x4tXeV5pXx8MeF6fzyRx6kNeDuF/wBsSwRaHpfKmiR+BFe21y6SSzOTzShMEMcK2c8wOQNh0p1WreNOeol/9j/C0mkl2/Tw+4IxV0uwh0rhG903jHVNSjuItW4m0+Ms13cyLJZ287IQxdznneJDzEZwp5Qc8vKepPVXix45rphPstpNJ+NqUnt5a8XYk1cupCX7Hn4ce9vb0Leajf2dlKn3e/gVoLlVKtlMDI2APKwyAM79K0e1Y51GMHUYtptpu1237eeNv1K1GnT5CIdF1G6u9Pi4k1bTFuJNYeV5YZcsLQkl0KqMqC6sY12YBn2AAwMubAoylpoOlDbbvwvnt+J7rjdsPS2+TosWi61H9rcItrZ7fhSXTprC1XTxmERpEZA/XHiFw7rIwwWIH4TmuFPPgl7O6JNPLcW756rV+tVtS7epY4/EqFvCPGSa7rOpzXOn3ltrckxhtIN5MWsYYsj8x8kqYYnlwGy2wKjJ1mkWHDGGKScI7v5uqa8p+u623e4yV7+R9xdxfqmjPFqNhohmVYWhlknjMJydhKrDmLAZIYbAhxutYNJp8GZSx5cnPFP8muN+3dV3DcovgXXet3XEnCMsOsaRDbaWiRw2tpZBla4mRlLCN32D8oYqBsPC5TkMRWvDhx4tSvcTd7tuW6SadWl2ulfO98oafdSNPotvpXDHDD3sgmu7Mkxc0qsFmbJBHhjIjIGx6nPU9hzMsc2o1CxP4a3+S+fffjgdyVWIOE76KTiG78FZjBeq0qRTY5UkBP4RnbK529hXUWknqenBGupPnyvUrclFdR9rCFb2R54+eVj0VcZ7Zr3Gk0L08VjXYwyydTshpsk2lSia1meCXJJKHHU5PxG9eiwXjaaZQ/iN/pnHNprMUdtxFZx3HKAFvI1w6++24+KkfCvSYs0ZqsiM7jXBobjhKy4g04CC5j1S2A8glIEsf+WQdD8QK3+5jNVyirqo51xDwJd8PM8yq89oNyzJh4x/iA2I/wAQ29hWDLpZR3jwWqfkSq3KASMA9z0PzqhRoawmJ2hdWUshH4WU4IPsati6AzXaBxc0LCO9G3TxlHUf4h/uPpW3HkfcraN5HDHewLLGyyRsMhl3Fb47lYFcaWhztim6UwC2XSBgqMfSp09iWK7rSJEzgYz6UHEaxe1pIjlaFBOVaXqNy0Ef3tVinI8wjOVFJ1UWUOre56eY4PpTqQA6K7VPztT2BoMh1NVbOcn3NHqANbTW3zgcuMdcZpuoA5tNTlkx+98p7Yp7IOILpEj55ZcJ3Z2wB9aF0LRGbjnT7BMRM12/YQDy/wDqO30zVc80Y+pOkz+pcdalqIKiT7jAduWA4Y/F+v0xWOWeUtlsMopC+DSrufzpD4KtvzzsI8+/m3P0qtY5vehrDouG4G3vtU5V6mO0hZyf9TcopliX9zBYxt7fRtMKtY6QJp16XOov4rA+oX8I+QFOlCP4UD5gGt3s1/g3M7SsPwp0Vfgo2FZp3P8AEwowut2aDrlmJ3NcnPiSL4sTaTZajdarJDo9oJ5+TlZnH7uIEjDO3Rdxt3O4AO9cHUey/wDyC6HdXb3r7mmOX3e5oNX4ch0rToBfuNQuolwzMCIySc/h7+2enoKxZfY+n0Sbgrb/AC+RPfvJyIbTVL/Uy2kqqCKVi33nABtYQMvy7fQnoTtvXlf/AB0cuoTiuN34+v8ANzR7xpbnmkabrPFOkavIkVxp9g8aW9t9yc27TQjPnz+J1JwN8KQDsQax6mK9nZMU2vilb338ceP1Q0X7y9wTg7gyLh5NRsLzUm05ZY3kkE0ebYQKVLhh7gMcLufc4FVanWS1ThOEeprbne3/AN7l2ydiWxm1mW1ttQ4cjstDnsL154XkYFbmN43HnV+YSEq2CGBHTpit94ITcNTc1KKTW+1NcVVK/G6oEk5bpj6PTbu9W3m4iNlpupWwTULqU3PhGPnQgu8fNhjgHY5BLYUbCsDkoSnDSRcoyuK2tbeHylf2S3Yva5AGtcSapx99neiw29o9rLHcTQyyW7sst3bnnRFk5f8A8eQ46NjJ61oxY8Ps7WZKlyk9+E1u6+u68cDtcWbbVuJ1M+kadotvaS8Sam9yIdVEaiQiOEs/iKFDOzBo1853yTvXIw6Zyhk1Gdv3cKuPbd9nfC3e3GyDfEULtF0qHhtrzUkn1a+1FNMl5VuHEksE/ISgXJ5eUcpBTcbbjflGnNn/AKqcMMklHqXycb3vw3f++4WlG2gfg7iTUtT4GTRr144LrkgkttUutlJGTyvzjlLEiUAtseZc5ODTajSY46l58UbVtOK9eGq3VbbIRzdUMtTX75wpfarZq1xZXkcTRyzBfESTAQuCoABIyrDGcrv0FDS429bDTzVOLe29VzW9/NEk7h1IU6NrcjLAZT/aYWV0mGzZByOb+L0OdyCa9tj0WP3qyJU0ZJTfTVnSLi2g1yxiuoVGHXIHcHuK9h/TrLBSRh6nF0ZuezaFyhGD6dqqWChuspU8u+MEdq0RjRLGml6vNYS80Urwt6oxGfpXRxyce5W1ZsdM+0C9wEmlWbHeRdz866MMrkipxPrkcPajlrnRo4HbrJZMYSfchcA/Smaxz/EibrgAuOFNEnyba8vbdT+WULKB89jSPBjfAeprkGPCYiP7vVI5AO0kRX+RNBadLhk6mM+HDqnDlwfDCX1m/wDeQxSZPxAOCD8OtWQhOD9APdHQLdodRtxNAxZDsVIwVPoR2Na+SvdAlxZcpyM+9QIBPBkeZc1CfICnsI5B0wagT8/KhHtWM0lityjYnPrSgLY2lA5udsfGhbIXx3L/AMWTR6mCu4VDd3CDHOAfYVOuQKQbFqd6PKtwRj5UPeSB0ky8kzgyyNMw3wxJANVuTkRpIc6Pot5rJDACG17zMOv+Udz/AMzVkMUpO3wC0a620Wz09F8Jf3g/6sm7fL0rSscY8CXZ69n4r4Ql/c0Gm+CcHzad4Y3wSaRxolkfui8pJbAHeh07bksV3iRc3LGA231NZ5KnSCLLbhS44o1DwkIitk3mn68o9F9WPbsOp6b1LD7x78B6qRsX06z4b01bWygWKJcnA3Zm/iJ7sfWrZ1jj0xQOd2cw4nd7yZ5JT5VzhewrzOpi8zL4ujQcI8EW+j6RNeapErzXIy8Ld17Ifbufjiuhp9HDBjufLElNyZl+Mry71K+aVJJLa3h2XwWKE477emdq8v7SwvM3JJelmvE62M7w1p8dz9/WeWWK4kQst0XJZUGSwJJz0329K8VrNJmSi4fElyv3NcJx39QPRtWl1Piq4n1CzX9iWcQX7rAgCgFgm2epA7k7b1kz4Y48KhjfxyfPfyWRk3J3wHnTdO1CTUra3tlvJ7yVbgyOvktVXm3bscIyqMHqB60uKGfK49Oyivv/AB7hlJJbjK+kuOBRpsGmIsVlMjRzxNKMOMfiLcrEEkg7Dfcd6yY1HWdcs/Krj
...[several KB of base64-encoded image bytes truncated]...\"\n }\n ]\n ],\n \"parameters\": {\n \"maxPredictions\": 2.0,\n \"confidenceThreshold\": 0.5\n }\n}\nCall",
"request = clients[\"prediction\"].predict(\n endpoint=endpoint_id,\n instances=instances,\n parameters=parameters,\n)",
"Response",
"print(MessageToJson(request.__dict__[\"_pb\"]))",
"Example output:\n{\n \"predictions\": [\n {\n \"confidences\": [\n 0.999991059\n ],\n \"ids\": [\n \"5133242657597816832\"\n ],\n \"displayNames\": [\n \"daisy\"\n ]\n }\n ],\n \"deployedModelId\": \"5165312113245159424\"\n}\nprojects.locations.endpoints.undeployModel\nCall",
"request = clients[\"endpoint\"].undeploy_model(\n endpoint=endpoint_id,\n deployed_model_id=deployed_model_id,\n traffic_split={},\n)",
"Response",
"result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))",
"Example output:\n{}\nTrain and export an Edge model\nprojects.locations.trainingPipelines.create\nRequest",
"task = Value(\n struct_value=Struct(\n fields={\n \"disable_early_stopping\": Value(bool_value=False),\n \"model_type\": Value(string_value=\"MOBILE_TF_VERSATILE_1\"),\n \"multi_label\": Value(bool_value=False),\n \"budget_milli_node_hours\": Value(number_value=8000),\n }\n )\n)\n\ntraining_pipeline = {\n \"display_name\": \"flowers_edge_\" + TIMESTAMP,\n \"input_data_config\": {\n \"dataset_id\": dataset_short_id,\n \"fraction_split\": {\n \"training_fraction\": 0.8,\n \"validation_fraction\": 0.1,\n \"test_fraction\": 0.1,\n },\n },\n \"model_to_upload\": {\n \"display_name\": \"flowers_edge_\" + TIMESTAMP,\n },\n \"training_task_definition\": TRAINING_SCHEMA,\n \"training_task_inputs\": task,\n}\n\nprint(\n MessageToJson(\n aip.CreateTrainingPipelineRequest(\n parent=PARENT,\n training_pipeline=training_pipeline,\n ).__dict__[\"_pb\"]\n )\n)",
"Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"trainingPipeline\": {\n \"displayName\": \"flowers_edge_20210226014942\",\n \"inputDataConfig\": {\n \"datasetId\": \"3094342379910463488\",\n \"fractionSplit\": {\n \"trainingFraction\": 0.8,\n \"validationFraction\": 0.1,\n \"testFraction\": 0.1\n }\n },\n \"trainingTaskDefinition\": \"gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_classification_1.0.0.yaml\",\n \"trainingTaskInputs\": {\n \"multi_label\": false,\n \"disable_early_stopping\": false,\n \"budget_milli_node_hours\": 8000.0,\n \"model_type\": \"MOBILE_TF_VERSATILE_1\"\n },\n \"modelToUpload\": {\n \"displayName\": \"flowers_edge_20210226014942\"\n }\n }\n}\nCall",
"request = clients[\"pipeline\"].create_training_pipeline(\n parent=PARENT,\n training_pipeline=training_pipeline,\n)",
"Response",
"print(MessageToJson(request.__dict__[\"_pb\"]))",
"Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/trainingPipelines/7472017139575029760\",\n \"displayName\": \"flowers_edge_20210226014942\",\n \"inputDataConfig\": {\n \"datasetId\": \"3094342379910463488\",\n \"fractionSplit\": {\n \"trainingFraction\": 0.8,\n \"validationFraction\": 0.1,\n \"testFraction\": 0.1\n }\n },\n \"trainingTaskDefinition\": \"gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_classification_1.0.0.yaml\",\n \"trainingTaskInputs\": {\n \"modelType\": \"MOBILE_TF_VERSATILE_1\",\n \"budgetMilliNodeHours\": \"8000\"\n },\n \"modelToUpload\": {\n \"displayName\": \"flowers_edge_20210226014942\"\n },\n \"state\": \"PIPELINE_STATE_PENDING\",\n \"createTime\": \"2021-02-26T03:34:33.436419Z\",\n \"updateTime\": \"2021-02-26T03:34:33.436419Z\"\n}",
"# The full unique ID for the training pipeline\ntraining_pipeline_edge_id = request.name\n# The short numeric ID for the training pipeline\ntraining_pipeline_edge_short_id = training_pipeline_edge_id.split(\"/\")[-1]\n\nprint(training_pipeline_edge_id)\n\nwhile True:\n response = clients[\"pipeline\"].get_training_pipeline(name=training_pipeline_edge_id)\n if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:\n print(\"Training job has not completed:\", response.state)\n if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:\n break\n else:\n model_edge_id = response.model_to_upload.name\n print(\"Training Time:\", response.end_time - response.start_time)\n break\n time.sleep(60)\n\nprint(model_edge_id)",
"projects.locations.models.export\nRequest",
"model_output_config = {\n \"export_format_id\": \"tflite\",\n \"artifact_destination\": {\n \"output_uri_prefix\": \"gs://\" + f\"{BUCKET_NAME}/export/\",\n },\n}\n\nprint(\n MessageToJson(\n aip.ExportModelRequest(\n name=model_edge_id,\n output_config=model_output_config,\n ).__dict__[\"_pb\"]\n )\n)",
"Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/models/1204871229996007424\",\n \"outputConfig\": {\n \"exportFormatId\": \"tflite\",\n \"artifactDestination\": {\n \"outputUriPrefix\": \"gs://migration-ucaip-trainingaip-20210226014942/export/\"\n }\n }\n}\nCall",
"request = clients[\"model\"].export_model(\n name=model_edge_id,\n output_config=model_output_config,\n)",
"Response",
"result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))",
"Example output:\n{}",
"model_export_dir = model_output_config[\"artifact_destination\"][\"output_uri_prefix\"]\n\n! gsutil ls -r $model_export_dir",
"Example output:\n```\ngs://migration-ucaip-trainingaip-20210226014942/export/:\ngs://migration-ucaip-trainingaip-20210226014942/export/model-1204871229996007424/:\ngs://migration-ucaip-trainingaip-20210226014942/export/model-1204871229996007424/tflite/:\ngs://migration-ucaip-trainingaip-20210226014942/export/model-1204871229996007424/tflite/2021-02-26T04:43:08.209439Z/:\ngs://migration-ucaip-trainingaip-20210226014942/export/model-1204871229996007424/tflite/2021-02-26T04:43:08.209439Z/model.tflite\n```\nCleaning up\nTo clean up all GCP resources used in this project, you can delete the GCP\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial.",
"delete_dataset = True\ndelete_model = True\ndelete_endpoint = True\ndelete_pipeline = True\ndelete_batchjob = True\ndelete_edge_model = True\ndelete_edge_pipeline = True\ndelete_bucket = True\n\n# Delete the dataset using the Vertex AI fully qualified identifier for the dataset\ntry:\n if delete_dataset:\n clients[\"dataset\"].delete_dataset(name=dataset_id)\nexcept Exception as e:\n print(e)\n\n# Delete the model using the Vertex AI fully qualified identifier for the model\ntry:\n if delete_model:\n clients[\"model\"].delete_model(name=model_id)\nexcept Exception as e:\n print(e)\n\n# Delete the endpoint using the Vertex AI fully qualified identifier for the endpoint\ntry:\n if delete_endpoint:\n clients[\"endpoint\"].delete_endpoint(name=endpoint_id)\nexcept Exception as e:\n print(e)\n\n# Delete the training pipeline using the Vertex AI fully qualified identifier for the training pipeline\ntry:\n if delete_pipeline:\n clients[\"pipeline\"].delete_training_pipeline(name=training_pipeline_id)\nexcept Exception as e:\n print(e)\n\n# Delete the batch job using the Vertex AI fully qualified identifier for the batch job\ntry:\n if delete_batchjob:\n clients[\"job\"].delete_batch_prediction_job(name=batch_job_id)\nexcept Exception as e:\n print(e)\n\n# Delete the Edge model using the Vertex AI fully qualified identifier for the Edge model\ntry:\n if delete_edge_model:\n clients[\"model\"].delete_model(name=model_edge_id)\nexcept Exception as e:\n print(e)\n\n# Delete the Edge training pipeline using the Vertex AI fully qualified identifier for the Edge training pipeline\ntry:\n if delete_edge_pipeline:\n clients[\"pipeline\"].delete_training_pipeline(name=training_pipeline_edge_id)\nexcept Exception as e:\n print(e)\n\n\nif delete_bucket and \"BUCKET_NAME\" in globals():\n ! gsutil rm -r gs://$BUCKET_NAME"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
farfan92/SpringBoard-
|
naive_bayes/Mini_Project_Naive_Bayes.ipynb
|
mit
|
[
"Basic text classification with Naive Bayes\n\nIn the mini-project, you'll learn the basics of text analysis using a subset of movie reviews from the rotten tomatoes database. You'll also use a fundamental technique in Bayesian inference, called Naive Bayes. This mini-project is based on Lab 10 of Harvard's CS109 class. Please free to go to the original lab for additional exercises and solutions.",
"%matplotlib inline\nimport numpy as np\nimport scipy as sp\nimport matplotlib as mpl\nimport matplotlib.cm as cm\nimport matplotlib.pyplot as plt\nimport pandas as pd\npd.set_option('display.width', 500)\npd.set_option('display.max_columns', 100)\npd.set_option('display.notebook_repr_html', True)\nimport seaborn as sns\nsns.set_style(\"whitegrid\")\nsns.set_context(\"poster\")",
"Table of Contents\n\nBasic text classification with Naive Bayes\nRotten Tomatoes data set\nExplore\n\n\nThe Vector space model and a search engine.\nIn Code\n\n\nNaive Bayes\nCross-Validation and hyper-parameter fitting\nWork with the best params\n\n\nInterpretation\nIdeas to improve\n\n\n\nRotten Tomatoes data set",
"critics = pd.read_csv('./critics.csv')\n#let's drop rows with missing quotes\ncritics = critics[~critics.quote.isnull()]\ncritics.head()",
"Explore",
"n_reviews = len(critics)\nn_movies = critics.rtid.unique().size\nn_critics = critics.critic.unique().size\n\n\nprint(\"Number of reviews: %i\" % n_reviews)\nprint(\"Number of critics: %i\" % n_critics)\nprint(\"Number of movies: %i\" % n_movies)\n\ndf = critics.copy()\ndf['fresh'] = df.fresh == 'fresh'\ngrp = df.groupby('critic')\ncounts = grp.critic.count() # number of reviews by each critic\nmeans = grp.fresh.mean() # average freshness for each critic\n\nmeans[counts > 100].hist(bins=10, edgecolor='w', lw=1)\nplt.xlabel(\"Average rating per critic\")\nplt.ylabel(\"N\")\nplt.yticks([0, 2, 4, 6, 8, 10]);",
"The Vector space model and a search engine.\nAll the diagrams here are snipped from\nSee http://nlp.stanford.edu/IR-book/ which is a great resource on Text processing.\nAlso check out Python packages nltk, spacy, and pattern, and their associated resources.\nLet us define the vector derived from document d by $\\bar V(d)$. What does this mean? Each document is considered to be a vector made up from a vocabulary, where there is one axis for each term in the vocabulary.\nTo define the vocabulary, we take a union of all words we have seen in all documents. We then just associate an array index with them. So \"hello\" may be at index 5 and \"world\" at index 99.\nThen the document\n\"hello world world\"\nwould be indexed as\n[(5,1),(99,2)]\nalong with a dictionary\n5: Hello\n99: World\nso that you can see that our representation is one of a sparse array.\nThen, a set of documents becomes, in the usual sklearn style, a sparse matrix with rows being sparse arrays and columns \"being\" the features, ie the vocabulary. I put \"being\" in quites as the layout in memort is that of a matrix with many 0's, but, rather, we use the sparse representation we talked about above.\nNotice that this representation loses the relative ordering of the terms in the document. That is \"cat ate rat\" and \"rat ate cat\" are the same. Thus, this representation is also known as the Bag-Of-Words representation.\nHere is another example, from the book quoted above, although the matrix is transposed here so that documents are columns:\n\nSuch a matrix is also catted a Term-Document Matrix. Here, the terms being indexed could be stemmed before indexing; for instance, jealous and jealousy after stemming are the same feature. One could also make use of other \"Natural Language Processing\" transformations in constructing the vocabulary. We could use Lemmatization, which reduces words to lemmas: work, working, worked would all reduce to work. We could remove \"stopwords\" from our vocabulary, such as common words like \"the\". We could look for particular parts of speech, such as adjectives. This is often done in Sentiment Analysis. And so on. It all deoends on our application.\nFrom the book:\n\nThe standard way of quantifying the similarity between two documents $d_1$ and $d_2$ is to compute the cosine similarity of their vector representations $\\bar V(d_1)$ and $\\bar V(d_2)$:\n\n$$S_{12} = \\frac{\\bar V(d_1) \\cdot \\bar V(d_2)}{|\\bar V(d_1)| \\times |\\bar V(d_2)|}$$\n\n\nThere is a far more compelling reason to represent documents as vectors: we can also view a query as a vector. Consider the query q = jealous gossip. This query turns into the unit vector $\\bar V(q)$ = (0, 0.707, 0.707) on the three coordinates below. \n\n\n\nThe key idea now: to assign to each document d a score equal to the dot product:\n\n$$\\bar V(q) \\cdot \\bar V(d)$$\nThis we can use this simple Vector Model as a Search engine.\nIn Code",
"from sklearn.feature_extraction.text import CountVectorizer\n\ntext = ['Hop on pop', 'Hop off pop', 'Hop Hop hop']\nprint(\"Original text is\\n\", '\\n'.join(text))\n\nvectorizer = CountVectorizer(min_df=0)\n\n# call `fit` to build the vocabulary\nvectorizer.fit(text)\n\n# call `transform` to convert text to a bag of words\nx = vectorizer.transform(text)\n\n# CountVectorizer uses a sparse array to save memory, but it's easier in this assignment to \n# convert back to a \"normal\" numpy array\nx = x.toarray()\n\nprint\nprint(\"Transformed text vector is \\n\", x)\n\n# `get_feature_names` tracks which word is associated with each column of the transformed x\nprint\nprint(\"Words for each feature:\")\nprint(vectorizer.get_feature_names())\n\n# Notice that the bag of words treatment doesn't preserve information about the *order* of words, \n# just their frequency\n\ndef make_xy(critics, vectorizer=None):\n #Your code here \n if vectorizer is None:\n vectorizer = CountVectorizer()\n X = vectorizer.fit_transform(critics.quote)\n X = X.tocsc() # some versions of sklearn return COO format\n y = (critics.fresh == 'fresh').values.astype(np.int)\n return X, y\nX, y = make_xy(critics)",
"Naive Bayes\n$$P(c|d) \\propto P(d|c) P(c) $$\n$$P(d|c) = \\prod_k P(t_k | c) $$\nthe conditional independence assumption.\nThen we see that for which c is $P(c|d)$ higher.\nFor floating point underflow we change the product into a sum by going into log space. So:\n$$log(P(d|c)) = \\sum_k log (P(t_k | c)) $$\nBut we must also handle non-existent terms, we cant have 0's for them:\n$$P(t_k|c) = \\frac{N_{kc}+\\alpha}{N_c+\\alpha N_{feat}}$$\nYour turn: Implement a simple Naive Bayes classifier\n\nUse scikit-learn's MultinomialNB() classifier with default parameters.\nsplit the data set into a training and test set\ntrain the classifier over the training set and test on the test set\nprint the accuracy scores for both the training and the test sets\n\nWhat do you notice? Is this a good classifier? If not, why not?",
"#your turn\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=4)\n\nnaive_model = MultinomialNB().fit(X_train,y_train)\ntest_score = naive_model.score(X_test,y_test)\ntrain_score = naive_model.score(X_train,y_train)\nprint(\"Multinomial test accuracy: %0.2f%%\" % (100*test_score))\nprint(\"Multinomial train accuracy: %0.2f%%\" % (100*train_score))",
"The accuracy score is good for the test set, but not great. When we see how it performs on the training set however, it becomes clear that the classifier is overfit. There is a $\\approx 16\\%$ difference in score.\nCross-Validation and hyper-parameter fitting\nWe use KFold instead of GridSearchCV here as we will want to also set parameters in the CountVectorizer.",
"from sklearn.model_selection import KFold\ndef cv_score(clf, X, y, scorefunc):\n result = 0.\n nfold = 5\n kf = KFold(n_splits = nfold)\n for train, test in kf.split(X): # split data into train/test groups, 5 times\n clf.fit(X[train], y[train]) # fit\n result += scorefunc(clf, X[test], y[test]) # evaluate score function on held-out data\n return result / nfold # average",
"We use the log-likelihood as the score here in scorefunc. Indeed, what we do in cv_score above is to implement the cross-validation part of GridSearchCV.\nSince Naive Bayes classifiers are often used in asymmetric situations, it might help to actually maximize probability on the validation folds rather than just accuracy.\nNotice something else about using a custom score function. It allows us to do a lot of the choices with the Decision risk we care about (-profit for example) directly on the validation set. You will often find people using roc_auc, precision, recall, or F1-score as risks or scores.",
"def log_likelihood(clf, x, y):\n prob = clf.predict_log_proba(x)\n rotten = y == 0\n fresh = ~rotten\n return prob[rotten, 0].sum() + prob[fresh, 1].sum()",
"Your turn: What is using this function as the score mean? What are we trying to optimize for?\nWe are scoring by taking the amount that each X in the data contributes to the LOG PROBABILITY of being in the rotten class, or the fresh class. Thus we are optimizing the certainty of our predictions. Large scores indicate a greater certainty in classification. \nA downfall of this is that hard to classify movies (such as the infamous Napolean Dynamite), would score high in both categories. The assumption is that in the general sense, these two components of score shouldn't be closely correlated, and so taking the largest score means optimizing for SOME classification into one of the two groups.\nWe'll cross-validate over the regularization parameter $\\alpha$ and the min_df of the CountVectorizer.\n\nmin_df: When building the vocabulary ignore terms that have a document frequency strictly lower than the given threshold. This value is also called cut-off in the literature. If float, the parameter represents a proportion of documents, integer absolute counts. This parameter is ignored if vocabulary is not None.\n\nLets set up the train and test masks first:",
"itrain, itest = train_test_split(range(critics.shape[0]), train_size=0.7)\nmask=np.ones(critics.shape[0], dtype='int')\nmask[itrain]=1\nmask[itest]=0\nmask = (mask==1)",
"Your turn: \nUsing the skeleton code below, find the best values of the parameters alpha and min_df. \nUse the cv_score function above with the log_likelihood function for scoring.",
"#the grid of parameters to search over\nalphas = [0, .1, 1, 5, 10, 50]\nmin_dfs = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1]\n\n#Find the best value for alpha and min_df, and the best classifier\nbest_alpha = None\nbest_min_df = None\nmaxscore=-np.inf\nfor alpha in alphas:\n for min_df in min_dfs: \n vectorizer = CountVectorizer(min_df = min_df) \n Xthis, ythis = make_xy(critics, vectorizer)\n Xtrainthis=Xthis[mask]\n ytrainthis=ythis[mask]\n #your turn\n naive_bayes = MultinomialNB(alpha=alpha)\n crossval_score = cv_score(naive_bayes,Xtrainthis,ytrainthis, log_likelihood)\n \n if crossval_score > maxscore:\n maxscore = crossval_score\n best_alpha,best_min_df = alpha,min_df\n\nprint(\"alpha: %f\" % best_alpha)\nprint(\"min_df: %f\" % best_min_df)",
"Work with the best params\nYour turn: Using the best values of alpha and min_df you just found, calculate the accuracy on the training and test sets. Is this classifier better? Why (not)?",
"vectorizer = CountVectorizer(min_df=best_min_df)\nX, y = make_xy(critics, vectorizer)\nxtrain=X[mask]\nytrain=y[mask]\nxtest=X[~mask]\nytest=y[~mask]\n\nclf = MultinomialNB(alpha=best_alpha).fit(xtrain, ytrain)\n\n#your turn. Print the accuracy on the test and training dataset\ntraining_accuracy = clf.score(xtrain, ytrain)\ntest_accuracy = clf.score(xtest, ytest)\n\nprint(\"Accuracy on training data: %0.2f%%\" % (training_accuracy))\nprint(\"Accuracy on test data: %0.2f%%\" % (test_accuracy))\n\nfrom sklearn.metrics import confusion_matrix\nprint(confusion_matrix(ytest, clf.predict(xtest)))",
"The classifier performs slightly worse on the test data, but the closeness of the scores suggests that we are no longer over fitting. One would need to get new novel data and test against that to be sure, but the initial impression is that this classifier will perform better over a greater variety of datasets.\nInterpretation\nWhat are the strongly predictive features?\nWe use a neat trick to identify strongly predictive features (i.e. words). \n\nfirst, create a data set such that each row has exactly one feature. This is represented by the identity matrix.\nuse the trained classifier to make predictions on this matrix\nsort the rows by predicted probabilities, and pick the top and bottom $K$ rows",
"words = np.array(vectorizer.get_feature_names())\n\nx = np.eye(xtest.shape[1])\nprobs = clf.predict_log_proba(x)[:, 0]\nind = np.argsort(probs)\n\ngood_words = words[ind[:10]]\nbad_words = words[ind[-10:]]\n\ngood_prob = probs[ind[:10]]\nbad_prob = probs[ind[-10:]]\n\nprint(\"Good words\\t P(fresh | word)\")\nfor w, p in zip(good_words, good_prob):\n print(\"%20s\" % w, \"%0.2f\" % (1 - np.exp(p)))\n \nprint(\"Bad words\\t P(fresh | word)\")\nfor w, p in zip(bad_words, bad_prob):\n print(\"%20s\" % w, \"%0.2f\" % (1 - np.exp(p)))",
"Your turn: Why does this method work? What does the probability for each row in the identity matrix represent?\nThis methods works because we have made it so that each row in our matrix has only one word (feature). The total probability for all the features summed along a row is thus the probability of freshness given that single word in the row. The words for which we have a very high probability indicate that the word on its own correlates very strongly with freshness, and words that have very low probablities indicate that there is a strong correlation with rottenness. \nMis-predictions\nWe can see mis-predictions as well.",
"x, y = make_xy(critics, vectorizer)\n\nprob = clf.predict_proba(x)[:, 0]\npredict = clf.predict(x)\n\nbad_rotten = np.argsort(prob[y == 0])[:5]\nbad_fresh = np.argsort(prob[y == 1])[-5:]\n\nprint(\"Mis-predicted Rotten quotes\")\nprint ('---------------------------')\nfor row in bad_rotten:\n print (critics[y == 0].quote.iat[row])\n print()\n\nprint(\"Mis-predicted Fresh quotes\")\nprint('--------------------------')\nfor row in bad_fresh:\n print(critics[y == 1].quote.iat[row])\n print()",
"Predicting the freshness for a new review\nYour turn:\n\nUsing your best trained classifier, predict the freshness of the following sentence: 'This movie is not remarkable, touching, or superb in any way'\nIs the result what you'd expect? Why (not)?",
"#your turn\nclf.predict_proba(vectorizer.transform(['This movie is not remarkable, touching, or superb in any way']))",
"This classifier gives a 98.6% probability of being fresh, despite the fact that the sentence is clearly negative. The word 'not' should negate all the postive adjectives which follow, but our simple bag-of-words approach doesn't have any way of dealing with this, and simply takes the positive features as is. Thus, a completely naive approach fails when confronted with the subleties of language.\nFun things to try and improve this model:\nThere are many things worth trying. Some examples:\n\nYou could try to build a NB model where the features are word pairs instead of words. This would be smart enough to realize that \"not good\" and \"so good\" mean very different things. This technique doesn't scale very well, since these features are much more sparse (and hence harder to detect repeatable patterns within).\nYou could try a model besides NB, that would allow for interactions between words -- for example, a Random Forest classifier.\nYou could consider adding supplemental features -- information about genre, director, cast, etc.\nYou could build a visualization that prints word reviews, and visually encodes each word with size or color to indicate how that word contributes to P(Fresh). For example, really bad words could show up as big and red, good words as big and green, common words as small and grey, etc.\n\nBetter features\nWe could use TF-IDF instead. What is this? It stands for \nTerm-Frequency X Inverse Document Frequency.\nIn the standard CountVectorizer model above, we used just the term frequency in a document of words in our vocabulary. In TF-IDF, we weigh this term frequency by the inverse of its popularity in all document. For example, if the word \"movie\" showed up in all the documents, it would not have much predictive value. By weighing its counts by 1 divided by its overall frequency, we down-weight it. We can then use this tfidf weighted features as inputs to any classifier.",
"#http://scikit-learn.org/dev/modules/feature_extraction.html#text-feature-extraction\n#http://scikit-learn.org/dev/modules/classes.html#text-feature-extraction-ref\nfrom sklearn.feature_extraction.text import TfidfVectorizer\ntfidfvectorizer = TfidfVectorizer(min_df=1, stop_words='english')\nXtfidf=tfidfvectorizer.fit_transform(critics.quote)",
"Your turn (extra credit): Try a few of these ideas to improve the model (or any other ideas of your own). Implement here and report on the result.",
"Xtfidf[0].toarray()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AllenDowney/ThinkBayes2
|
soln/chap02.ipynb
|
mit
|
[
"Bayes's Theorem\nThink Bayes, Second Edition\nCopyright 2020 Allen B. Downey\nLicense: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)\nIn the previous chapter, we derived Bayes's Theorem:\n$$P(A|B) = \\frac{P(A) P(B|A)}{P(B)}$$\nAs an example, we used data from the General Social Survey and Bayes's Theorem to compute conditional probabilities.\nBut since we had the complete dataset, we didn't really need Bayes's Theorem.\nIt was easy enough to compute the left side of the equation directly, and no easier to compute the right side.\nBut often we don't have a complete dataset, and in that case Bayes's Theorem is more useful. In this chapter, we'll use it to solve several more challenging problems related to conditional probability.\nThe Cookie Problem\nWe'll start with a thinly disguised version of an urn problem:\n\nSuppose there are two bowls of cookies.\n\n\nBowl 1 contains 30 vanilla cookies and 10 chocolate cookies. \n\n\nBowl 2 contains 20 vanilla cookies and 20 chocolate cookies.\n\n\nNow suppose you choose one of the bowls at random and, without looking, choose a cookie at random. If the cookie is vanilla, what is the probability that it came from Bowl 1?\n\nWhat we want is the conditional probability that we chose from Bowl 1 given that we got a vanilla cookie, $P(B_1 | V)$.\nBut what we get from the statement of the problem is:\n\n\nThe conditional probability of getting a vanilla cookie, given that we chose from Bowl 1, $P(V | B_1)$ and\n\n\nThe conditional probability of getting a vanilla cookie, given that we chose from Bowl 2, $P(V | B_2)$.\n\n\nBayes's Theorem tells us how they are related:\n$$P(B_1|V) = \\frac{P(B_1)~P(V|B_1)}{P(V)}$$\nThe term on the left is what we want. The terms on the right are:\n\n\n$P(B_1)$, the probability that we chose Bowl 1,\n unconditioned by what kind of cookie we got. \n Since the problem says we chose a bowl at random, \n we assume $P(B_1) = 1/2$.\n\n\n$P(V|B_1)$, the probability of getting a vanilla cookie\n from Bowl 1, which is 3/4.\n\n\n$P(V)$, the probability of drawing a vanilla cookie from\n either bowl. \n\n\nTo compute $P(V)$, we can use the law of total probability:\n$$P(V) = P(B_1)~P(V|B_1) ~+~ P(B_2)~P(V|B_2)$$\nPlugging in the numbers from the statement of the problem, we have\n$$P(V) = (1/2)~(3/4) ~+~ (1/2)~(1/2) = 5/8$$\nWe can also compute this result directly, like this: \n\n\nSince we had an equal chance of choosing either bowl and the bowls contain the same number of cookies, we had the same chance of choosing any cookie. \n\n\nBetween the two bowls there are 50 vanilla and 30 chocolate cookies, so $P(V) = 5/8$.\n\n\nFinally, we can apply Bayes's Theorem to compute the posterior probability of Bowl 1:\n$$P(B_1|V) = (1/2)~(3/4)~/~(5/8) = 3/5$$\nThis example demonstrates one use of Bayes's theorem: it provides a\nway to get from $P(B|A)$ to $P(A|B)$. 
\nThis strategy is useful in cases like this where it is easier to compute the terms on the right side than the term on the left.\nDiachronic Bayes\nThere is another way to think of Bayes's theorem: it gives us a way to\nupdate the probability of a hypothesis, $H$, given some body of data, $D$.\nThis interpretation is \"diachronic\", which means \"related to change over time\"; in this case, the probability of the hypotheses changes as we see new data.\nRewriting Bayes's theorem with $H$ and $D$ yields:\n$$P(H|D) = \\frac{P(H)~P(D|H)}{P(D)}$$\nIn this interpretation, each term has a name:\n\n\n$P(H)$ is the probability of the hypothesis before we see the data, called the prior probability, or just prior.\n\n\n$P(H|D)$ is the probability of the hypothesis after we see the data, called the posterior.\n\n\n$P(D|H)$ is the probability of the data under the hypothesis, called the likelihood.\n\n\n$P(D)$ is the total probability of the data, under any hypothesis.\n\n\nSometimes we can compute the prior based on background information. For example, the cookie problem specifies that we choose a bowl at random with equal probability.\nIn other cases the prior is subjective; that is, reasonable people might disagree, either because they use different background information or because they interpret the same information differently.\nThe likelihood is usually the easiest part to compute. In the cookie\nproblem, we are given the number of cookies in each bowl, so we can compute the probability of the data under each hypothesis.\nComputing the total probability of the data can be tricky. \nIt is supposed to be the probability of seeing the data under any hypothesis at all, but it can be hard to nail down what that means.\nMost often we simplify things by specifying a set of hypotheses that\nare:\n\n\nMutually exclusive, which means that only one of them can be true, and\n\n\nCollectively exhaustive, which means one of them must be true.\n\n\nWhen these conditions apply, we can compute $P(D)$ using the law of total probability. For example, with two hypotheses, $H_1$ and $H_2$:\n$$P(D) = P(H_1)~P(D|H_1) + P(H_2)~P(D|H_2)$$\nAnd more generally, with any number of hypotheses:\n$$P(D) = \\sum_i P(H_i)~P(D|H_i)$$\nThe process in this section, using data and a prior probability to compute a posterior probability, is called a Bayesian update.\nBayes Tables\nA convenient tool for doing a Bayesian update is a Bayes table.\nYou can write a Bayes table on paper or use a spreadsheet, but in this section I'll use a Pandas DataFrame.\nFirst I'll make empty DataFrame with one row for each hypothesis:",
"import pandas as pd\n\ntable = pd.DataFrame(index=['Bowl 1', 'Bowl 2'])",
"Now I'll add a column to represent the priors:",
"table['prior'] = 1/2, 1/2\ntable",
"And a column for the likelihoods:",
"table['likelihood'] = 3/4, 1/2\ntable",
"Here we see a difference from the previous method: we compute likelihoods for both hypotheses, not just Bowl 1:\n\n\nThe chance of getting a vanilla cookie from Bowl 1 is 3/4.\n\n\nThe chance of getting a vanilla cookie from Bowl 2 is 1/2.\n\n\nYou might notice that the likelihoods don't add up to 1. That's OK; each of them is a probability conditioned on a different hypothesis.\nThere's no reason they should add up to 1 and no problem if they don't.\nThe next step is similar to what we did with Bayes's Theorem; we multiply the priors by the likelihoods:",
"table['unnorm'] = table['prior'] * table['likelihood']\ntable",
"I call the result unnorm because these values are the \"unnormalized posteriors\". Each of them is the product of a prior and a likelihood:\n$$P(B_i)~P(D|B_i)$$\nwhich is the numerator of Bayes's Theorem. \nIf we add them up, we have\n$$P(B_1)~P(D|B_1) + P(B_2)~P(D|B_2)$$\nwhich is the denominator of Bayes's Theorem, $P(D)$.\nSo we can compute the total probability of the data like this:",
"prob_data = table['unnorm'].sum()\nprob_data",
"Notice that we get 5/8, which is what we got by computing $P(D)$ directly.\nAnd we can compute the posterior probabilities like this:",
"table['posterior'] = table['unnorm'] / prob_data\ntable",
"The posterior probability for Bowl 1 is 0.6, which is what we got using Bayes's Theorem explicitly.\nAs a bonus, we also get the posterior probability of Bowl 2, which is 0.4.\nWhen we add up the unnormalized posteriors and divide through, we force the posteriors to add up to 1. This process is called \"normalization\", which is why the total probability of the data is also called the \"normalizing constant\".\nThe Dice Problem\nA Bayes table can also solve problems with more than two hypotheses. For example:\n\nSuppose I have a box with a 6-sided die, an 8-sided die, and a 12-sided die. I choose one of the dice at random, roll it, and report that the outcome is a 1. What is the probability that I chose the 6-sided die?\n\nIn this example, there are three hypotheses with equal prior\nprobabilities. The data is my report that the outcome is a 1. \nIf I chose the 6-sided die, the probability of the data is\n1/6. If I chose the 8-sided die, the probability is 1/8, and if I chose the 12-sided die, it's 1/12.\nHere's a Bayes table that uses integers to represent the hypotheses:",
"table2 = pd.DataFrame(index=[6, 8, 12])",
"I'll use fractions to represent the prior probabilities and the likelihoods. That way they don't get rounded off to floating-point numbers.",
"from fractions import Fraction\n\ntable2['prior'] = Fraction(1, 3)\ntable2['likelihood'] = Fraction(1, 6), Fraction(1, 8), Fraction(1, 12)\ntable2",
"Once you have priors and likelhoods, the remaining steps are always the same, so I'll put them in a function:",
"def update(table):\n \"\"\"Compute the posterior probabilities.\"\"\"\n table['unnorm'] = table['prior'] * table['likelihood']\n prob_data = table['unnorm'].sum()\n table['posterior'] = table['unnorm'] / prob_data\n return prob_data",
"And call it like this.",
"prob_data = update(table2)",
"Here is the final Bayes table:",
"table2",
"The posterior probability of the 6-sided die is 4/9, which is a little more than the probabilities for the other dice, 3/9 and 2/9.\nIntuitively, the 6-sided die is the most likely because it had the highest likelihood of producing the outcome we saw.\nThe Monty Hall Problem\nNext we'll use a Bayes table to solve one of the most contentious problems in probability.\nThe Monty Hall problem is based on a game show called Let's Make a Deal. If you are a contestant on the show, here's how the game works:\n\n\nThe host, Monty Hall, shows you three closed doors -- numbered 1, 2, and 3 -- and tells you that there is a prize behind each door.\n\n\nOne prize is valuable (traditionally a car), the other two are less valuable (traditionally goats).\n\n\nThe object of the game is to guess which door has the car. If you guess right, you get to keep the car.\n\n\nSuppose you pick Door 1. Before opening the door you chose, Monty opens Door 3 and reveals a goat. Then Monty offers you the option to stick with your original choice or switch to the remaining unopened door.\nTo maximize your chance of winning the car, should you stick with Door 1 or switch to Door 2?\nTo answer this question, we have to make some assumptions about the behavior of the host:\n\n\nMonty always opens a door and offers you the option to switch.\n\n\nHe never opens the door you picked or the door with the car.\n\n\nIf you choose the door with the car, he chooses one of the other\n doors at random.\n\n\nUnder these assumptions, you are better off switching. \nIf you stick, you win $1/3$ of the time. If you switch, you win $2/3$ of the time.\nIf you have not encountered this problem before, you might find that\nanswer surprising. You would not be alone; many people have the strong\nintuition that it doesn't matter if you stick or switch. There are two\ndoors left, they reason, so the chance that the car is behind Door A is 50%. But that is wrong.\nTo see why, it can help to use a Bayes table. We start with three\nhypotheses: the car might be behind Door 1, 2, or 3. According to the\nstatement of the problem, the prior probability for each door is 1/3.",
"table3 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])\ntable3['prior'] = Fraction(1, 3)\ntable3",
"The data is that Monty opened Door 3 and revealed a goat. So let's\nconsider the probability of the data under each hypothesis:\n\n\nIf the car is behind Door 1, Monty chooses Door 2 or 3 at random, so the probability he opens Door 3 is $1/2$.\n\n\nIf the car is behind Door 2, Monty has to open Door 3, so the probability of the data under this hypothesis is 1.\n\n\nIf the car is behind Door 3, Monty does not open it, so the probability of the data under this hypothesis is 0.\n\n\nHere are the likelihoods.",
"table3['likelihood'] = Fraction(1, 2), 1, 0\ntable3",
"Now that we have priors and likelihoods, we can use update to compute the posterior probabilities.",
"update(table3)\ntable3",
"After Monty opens Door 3, the posterior probability of Door 1 is $1/3$;\nthe posterior probability of Door 2 is $2/3$.\nSo you are better off switching from Door 1 to Door 2.\nAs this example shows, our intuition for probability is not always\nreliable. \nBayes's Theorem can help by providing a divide-and-conquer strategy:\n\n\nFirst, write down the hypotheses and the data.\n\n\nNext, figure out the prior probabilities.\n\n\nFinally, compute the likelihood of the data under each hypothesis.\n\n\nThe Bayes table does the rest.\nSummary\nIn this chapter we solved the Cookie Problem using Bayes's theorem explicitly and using a Bayes table.\nThere's no real difference between these methods, but the Bayes table can make it easier to compute the total probability of the data, especially for problems with more than two hypotheses.\nThen we solved the Dice Problem, which we will see again in the next chapter, and the Monty Hall problem, which you might hope you never see again.\nIf the Monty Hall problem makes your head hurt, you are not alone. But I think it demonstrates the power of Bayes's Theorem as a divide-and-conquer strategy for solving tricky problems. And I hope it provides some insight into why the answer is what it is.\nWhen Monty opens a door, he provides information we can use to update our belief about the location of the car. Part of the information is obvious. If he opens Door 3, we know the car is not behind Door 3. But part of the information is more subtle. Opening Door 3 is more likely if the car is behind Door 2, and less likely if it is behind Door 1. So the data is evidence in favor of Door 2. We will come back to this notion of evidence in future chapters.\nIn the next chapter we'll extend the Cookie Problem and the Dice Problem, and take the next step from basic probability to Bayesian statistics.\nBut first, you might want to work on the exercises.\nExercises\nExercise: Suppose you have two coins in a box.\nOne is a normal coin with heads on one side and tails on the other, and one is a trick coin with heads on both sides. You choose a coin at random and see that one of the sides is heads.\nWhat is the probability that you chose the trick coin?",
"# Solution\n\ntable4 = pd.DataFrame(index=['Normal', 'Trick'])\ntable4['prior'] = 1/2\ntable4['likelihood'] = 1/2, 1\n\nupdate(table4)\ntable4",
"Exercise: Suppose you meet someone and learn that they have two children.\nYou ask if either child is a girl and they say yes.\nWhat is the probability that both children are girls?\nHint: Start with four equally likely hypotheses.",
"# Solution\n\ntable5 = pd.DataFrame(index=['GG', 'GB', 'BG', 'BB'])\ntable5['prior'] = 1/4\ntable5['likelihood'] = 1, 1, 1, 0\n\nupdate(table5)\ntable5",
"Exercise: There are many variations of the Monty Hall problem.\nFor example, suppose Monty always chooses Door 2 if he can, and\nonly chooses Door 3 if he has to (because the car is behind Door 2).\nIf you choose Door 1 and Monty opens Door 2, what is the probability the car is behind Door 3?\nIf you choose Door 1 and Monty opens Door 3, what is the probability the car is behind Door 2?",
"# Solution\n\n# If the car is behind Door 1, Monty would always open Door 2 \n# If the car was behind Door 2, Monty would have opened Door 3\n# If the car is behind Door 3, Monty would always open Door 2\n\ntable6 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])\ntable6['prior'] = 1/3\ntable6['likelihood'] = 1, 0, 1\n\nupdate(table6)\ntable6\n\n# Solution\n\n# If the car is behind Door 1, Monty would have opened Door 2\n# If the car is behind Door 2, Monty would always open Door 3\n# If the car is behind Door 3, Monty would have opened Door 2\n\ntable7 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])\ntable7['prior'] = 1/3\ntable7['likelihood'] = 0, 1, 0\n\nupdate(table7)\ntable7",
"Exercise: M&M's are small candy-coated chocolates that come in a variety of colors.\nMars, Inc., which makes M&M's, changes the mixture of colors from time to time.\nIn 1995, they introduced blue M&M's. \n\n\nIn 1994, the color mix in a bag of plain M&M's was 30\\% Brown, 20\\% Yellow, 20\\% Red, 10\\% Green, 10\\% Orange, 10\\% Tan. \n\n\nIn 1996, it was 24\\% Blue , 20\\% Green, 16\\% Orange, 14\\% Yellow, 13\\% Red, 13\\% Brown.\n\n\nSuppose a friend of mine has two bags of M&M's, and he tells me\nthat one is from 1994 and one from 1996. He won't tell me which is\nwhich, but he gives me one M&M from each bag. One is yellow and\none is green. What is the probability that the yellow one came\nfrom the 1994 bag?\nHint: The trick to this question is to define the hypotheses and the data carefully.",
"# Solution\n\n# Hypotheses:\n# A: yellow from 94, green from 96\n# B: yellow from 96, green from 94\n\ntable8 = pd.DataFrame(index=['A', 'B'])\ntable8['prior'] = 1/2\ntable8['likelihood'] = 0.2*0.2, 0.14*0.1\n\nupdate(table8)\ntable8"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
amcdawes/QMlabs
|
Lab 4 - Measurements Solutions.ipynb
|
mit
|
[
"Lab 4 - Measurements\nFirst our standard definitions:",
"import matplotlib.pyplot as plt\nfrom numpy import sqrt,pi,cos,sin,arange,random,exp\nfrom qutip import *\n\nH = Qobj([[1],[0]])\nV = Qobj([[0],[1]])\nP45 = Qobj([[1/sqrt(2)],[1/sqrt(2)]])\nM45 = Qobj([[1/sqrt(2)],[-1/sqrt(2)]])\nR = Qobj([[1/sqrt(2)],[-1j/sqrt(2)]])\nL = Qobj([[1/sqrt(2)],[1j/sqrt(2)]])\n\ndef sim_transform(o_basis1, o_basis2, n_basis1, n_basis2):\n a = n_basis1.dag()*o_basis1\n b = n_basis1.dag()*o_basis2\n c = n_basis2.dag()*o_basis1\n d = n_basis2.dag()*o_basis2\n return Qobj([[a.data[0,0],b.data[0,0]],[c.data[0,0],d.data[0,0]]])\n\ndef Delta(state, op):\n \"\"\"Calculate std. dev. of an observable in a given state\"\"\"\n eO2 = state.dag()*op*op*state\n eO = state.dag()*op*state\n return sqrt(eO2.data[0,0] - (eO.data[0,0])**2)",
"Q: Define the $\\hat{\\mathscr{P}}_{HV}$ operator",
"Phv = H*H.dag() - V*V.dag()\nPhv",
"Q: What is the expectation value $\\langle \\hat{\\mathscr{P}}_{HV}\\rangle$ for state $|\\psi\\rangle = \\frac{1}{\\sqrt{5}}|H\\rangle + \\frac{2}{\\sqrt{5}}|V\\rangle$? Interpret this result given the amplitudes in the state.",
"psi = 1/sqrt(5)*H + 2/sqrt(5)*V\n\npsi.dag()*Phv*psi",
"Q: What is the variance of $\\mathscr{P}_{HV}$?",
"psi.dag()*Phv*Phv*psi\n\n1.0 - (-0.6)**2",
"Ex: Use the random function to generate a mock data set for the state $|\\psi\\rangle$.\nrandom.choice([1,-1],size=10,p=[0.2,0.8])\n\ngives a list of 10 numbers, either 1 or -1 with the associated probability p:",
"data = random.choice([1, -1],size=20,p=[0.2,0.8])\n\ndata.mean()\n\ndata.var()",
"Q: Verify the mean and variance of the mock data set match your QM predictions. How big does the set need to be for you to get ±5% agreement?",
"data = random.choice([1, -1],size=10000,p=[0.2,0.8])\n\ndata.mean()\n\ndata.var()",
"10,000 does pretty well for getting to the predictions. \"There is no substitute for an adequate sample size.\"\nQ: Answer problems 5.11, 5.12, 5.13, 5.14, 5.17, 5.18, 5.19 from the textbook. These are an opportunity to practice with a new operator $\\hat{\\mathscr{P}}_{C}$",
"# 5.11\nP_45 = P45*P45.dag() - M45*M45.dag()\n\n#5.12\nP_c = L*L.dag() - R*R.dag()\n\n#5.13\n(Phv*P_45 - P_45*Phv) == 2j * P_c\n\n#5.14\n(Phv*P_c - P_c*Phv) == -2j * P_45\n\n#5.15\nP_p45 = P45*P45.dag()\nP_v = V*V.dag()\n\nP_p45*P_v - P_v*P_p45",
"5.19 - this one is tricky, but is easier with our custom function",
"psi = 1/sqrt(3)*H + sqrt(2/3.0)*exp(1j*pi/3.0)*V\npsi.norm()\n\nDelta(psi,P_45)*Delta(psi,Phv)\n\n1/2j*(psi.dag()*(Phv*P_45 - P_45*Phv)*psi).data[0,0]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml.ipynb
|
apache-2.0
|
[
"Collaborative filtering on the MovieLense Dataset\nLearning objectives\n1. Explore the data using BigQuery.\n2. Use the model to make recommendations for a user.\n3. Use the model to recommend an item to a group of users.\nIntroduction\nThis notebook is based on part of Chapter 9 of BigQuery: The Definitive Guide by Lakshmanan and Tigani.\nMovieLens dataset\nTo illustrate recommender systems in action, let’s use the MovieLens dataset. This is a dataset of movie reviews released by GroupLens, a research lab in the Department of Computer Science and Engineering at the University of Minnesota, through funding by the US National Science Foundation.\nDownload the data and load it as a BigQuery table using:",
"import os\nimport tensorflow as tf\nPROJECT = \"your-project-here\" # REPLACE WITH YOUR PROJECT ID\n\n# Do not change these\nos.environ[\"PROJECT\"] = PROJECT\nos.environ[\"TFVERSION\"] = '2.6'\n\n%%bash\nmkdir bqml_data\ncd bqml_data\ncurl -O 'http://files.grouplens.org/datasets/movielens/ml-20m.zip'\nunzip ml-20m.zip\nyes | bq rm -r $PROJECT:movielens\nbq --location=US mk --dataset \\\n --description 'Movie Recommendations' \\\n $PROJECT:movielens\nbq --location=US load --source_format=CSV \\\n --autodetect movielens.ratings gs://cloud-training/recommender-systems/movielens/ratings.csv\nbq --location=US load --source_format=CSV \\\n --autodetect movielens.movies_raw gs://cloud-training/recommender-systems/movielens/movies.csv",
"Exploring the data\nTwo tables should now be available in <a href=\"https://console.cloud.google.com/bigquery\">BigQuery</a>.\nCollaborative filtering provides a way to generate product recommendations for users, or user targeting for products. The starting point is a table, <b>movielens.ratings</b>, with three columns: a user id, an item id, and the rating that the user gave the product. This table can be sparse -- users don’t have to rate all products. Then, based on just the ratings, the technique finds similar users and similar products and determines the rating that a user would give an unseen product. Then, we can recommend the products with the highest predicted ratings to users, or target products at users with the highest predicted ratings.",
"%%bigquery --project $PROJECT\nSELECT *\nFROM movielens.ratings\nLIMIT 10",
"A quick exploratory query yields that the dataset consists of over 138 thousand users, nearly 27 thousand movies, and a little more than 20 million ratings, confirming that the data has been loaded successfully.",
"%%bigquery --project $PROJECT\nSELECT \n COUNT(DISTINCT userId) numUsers,\n COUNT(DISTINCT movieId) numMovies,\n COUNT(*) totalRatings\nFROM movielens.ratings",
"On examining the first few movies using the query following query, we can see that the genres column is a formatted string:",
"%%bigquery --project $PROJECT\nSELECT *\nFROM movielens.movies_raw\nWHERE movieId < 5",
"We can parse the genres into an array and rewrite the table as follows:",
"%%bigquery --project $PROJECT\nCREATE OR REPLACE TABLE movielens.movies AS\n SELECT * REPLACE(SPLIT(genres, \"|\") AS genres)\n FROM movielens.movies_raw\n\n%%bigquery --project $PROJECT\nSELECT *\nFROM movielens.movies\nWHERE movieId < 5",
"Matrix factorization\nMatrix factorization is a collaborative filtering technique that relies on factorizing the ratings matrix into two vectors called the user factors and the item factors. The user factors is a low-dimensional representation of a user_id and the item factors similarly represents an item_id.",
"%%bash\nbq --location=US cp \\\ncloud-training-demos:movielens.recommender_16 \\\nmovielens.recommender\n\n%%bigquery --project $PROJECT\nSELECT *\n-- Note: remove cloud-training-demos if you are using your own model: \nFROM ML.TRAINING_INFO(MODEL `cloud-training-demos.movielens.recommender`)\n\n%%bigquery --project $PROJECT\nSELECT *\n-- Note: remove cloud-training-demos if you are using your own model:\nFROM ML.TRAINING_INFO(MODEL `cloud-training-demos.movielens.recommender_16`)",
"When we did that, we discovered that the evaluation loss was lower (0.97) with num_factors=16 than with num_factors=36 (1.67) or num_factors=24 (1.45). We could continue experimenting, but we are likely to see diminishing returns with further experimentation.\nMaking recommendations\nWith the trained model, we can now provide recommendations. For example, let’s find the best comedy movies to recommend to the user whose userId is 903. In the query below, we are calling ML.PREDICT passing in the trained recommendation model and providing a set of movieId and userId to carry out the predictions on. In this case, it’s just one userId (903), but all movies whose genre includes Comedy.",
"%%bigquery --project $PROJECT\nSELECT * FROM\nML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (\n SELECT \n movieId, title, 903 AS userId\n FROM movielens.movies, UNNEST(genres) g\n WHERE g = 'Comedy'\n))\nORDER BY predicted_rating DESC\nLIMIT 5",
"Filtering out already rated movies\nOf course, this includes movies the user has already seen and rated in the past. Let’s remove them.\nTODO 1: Make a prediction for user 903 that does not include already seen movies.",
"%%bigquery --project $PROJECT\nSELECT * FROM\nML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (\n WITH seen AS (\n SELECT ARRAY_AGG(movieId) AS movies \n FROM movielens.ratings\n WHERE userId = 903\n )\n SELECT \n movieId, title, 903 AS userId\n FROM movielens.movies, UNNEST(genres) g, seen\n WHERE g = 'Comedy' AND movieId NOT IN UNNEST(seen.movies)\n))\nORDER BY predicted_rating DESC\nLIMIT 5",
"For this user, this happens to yield the same set of movies -- the top predicted ratings didn’t include any of the movies the user has already seen.\nCustomer targeting\nIn the previous section, we looked at how to identify the top-rated movies for a specific user. Sometimes, we have a product and have to find the customers who are likely to appreciate it. Suppose, for example, we wish to get more reviews for movieId=96481 which has only one rating and we wish to send coupons to the 5 users who are likely to rate it the highest. \nTODO 2: Find the top five users who will likely enjoy American Mullet (2001)",
"%%bigquery --project $PROJECT\nSELECT * FROM\nML.PREDICT(MODEL `cloud-training-demos.movielens.recommender_16`, (\n WITH allUsers AS (\n SELECT DISTINCT userId\n FROM movielens.ratings\n )\n SELECT \n 96481 AS movieId, \n (SELECT title FROM movielens.movies WHERE movieId=96481) title,\n userId\n FROM\n allUsers\n))\nORDER BY predicted_rating DESC\nLIMIT 5",
"Batch predictions for all users and movies\nWhat if we wish to carry out predictions for every user and movie combination? Instead of having to pull distinct users and movies as in the previous query, a convenience function is provided to carry out batch predictions for all movieId and userId encountered during training. A limit is applied here, otherwise, all user-movie predictions will be returned and will crash the notebook.",
"%%bigquery --project $PROJECT\nSELECT *\nFROM ML.RECOMMEND(MODEL `cloud-training-demos.movielens.recommender_16`)\nLIMIT 10",
"As seen in a section above, it is possible to filter out movies the user has already seen and rated in the past. The reason already seen movies aren’t filtered out by default is that there are situations (think of restaurant recommendations, for example) where it is perfectly expected that we would need to recommend restaurants the user has liked in the past.\nCopyright 2022 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kubeflow/pipelines
|
components/google-cloud/google_cloud_pipeline_components/experimental/tensorflow_probability/anomaly_detection/tfp_anomaly_detection.ipynb
|
apache-2.0
|
[
"# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"<table align=\"left\">\n\n <td>\n <a href=\"https://colab.research.google.com/github/kubeflow/pipelines/blob/master/components/google-cloud/google_cloud_pipeline_components/experimental/tensorflow_probability/anomaly_detection/tfp_anomaly_detection.ipynb\"\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/kubeflow/pipelines/blob/master/components/google-cloud/google_cloud_pipeline_components/experimental/tensorflow_probability/anomaly_detection/tfp_anomaly_detection.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n\nAnomaly Detection with TensorFlow Probability STS on Kubeflow Pipelines\nOverview\nThis notebook demonstrates how to use TensorFlow Probability and Kubeflow Pipelines for anomaly detection in time series data. It uses structural time series (STS), a class of Bayesian statistical models, to decompose a time series into interpretable seasonal and trend components. This algorithm fits an STS model to the time series, generates a forecast of acceptable values for each timestep, and flags any points outside of the forecast as an anomaly. To learn more about STS models, check out this demo on Structural Time Series Modeling Case Studies.\nThis demo is most relevant for those who would like to automatically flag anomalies in time series data and can be used for applications like network monitoring, infrastructure maintenance, and sales tracking.\nDataset\nThis demo uses the Numenta Anomaly Benchmark, a popular benchmark of time series data with labeled anomalies. More specifically, our demo uses nyc_taxi.csv which reports the total number of passengers in NYC taxis from July 2014 to January 2015 in 30-minute increments.\nObjective\nYou will go through the following steps: \n* Define and launch an anomaly detection algorithm on Kubeflow Pipelines.\n* Retrieve and visualize results.\n* Benchmark predictions using the Numenta Anomaly Benchmark scoring method.\nCosts\nThis tutorial uses billable components of Google Cloud:\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nSet up your local development environment\nIf you are using Colab or Google Cloud Notebooks, your environment already meets\nall the requirements to run this notebook. You can skip this step.\nOtherwise, make sure your environment meets this notebook's requirements.\nYou need the following:\n\nThe Google Cloud SDK\nGit\nPython 3\nvirtualenv\nJupyter notebook running in a virtual environment with Python 3\n\nThe Google Cloud guide to Setting up a Python development\nenvironment and the Jupyter\ninstallation guide provide detailed instructions\nfor meeting these requirements. The following steps provide a condensed set of\ninstructions:\n\n\nInstall and initialize the Cloud SDK.\n\n\nInstall Python 3.\n\n\nInstall\n virtualenv\n and create a virtual environment that uses Python 3. Activate the virtual environment.\n\n\nTo install Jupyter, run pip3 install jupyter on the\ncommand-line in a terminal shell.\n\n\nTo launch Jupyter, run jupyter notebook on the command-line in a terminal shell.\n\n\nOpen this notebook in the Jupyter Notebook Dashboard.\n\n\nInstall additional packages\nInstall additional package dependencies not installed in your notebook environment.",
"import os\n\n# The Google Cloud Notebook product has specific requirements\nIS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists(\"/opt/deeplearning/metadata/env_version\")\n\n# Google Cloud Notebook requires dependencies to be installed with '--user'\nUSER_FLAG = \"\"\nif IS_GOOGLE_CLOUD_NOTEBOOK:\n USER_FLAG = \"--user\"\n\n! pip3 install {USER_FLAG} --upgrade kfp\n! pip3 install {USER_FLAG} --upgrade google-cloud-pipeline-components\n! pip3 install {USER_FLAG} --upgrade tensorflow\n! pip3 install {USER_FLAG} --upgrade matplotlib\n! pip3 install {USER_FLAG} --upgrade numpy\n! pip3 install {USER_FLAG} --upgrade pandas",
"Restart the kernel\nAfter you install the additional packages, you need to restart the notebook kernel so it can find the packages.",
"# Automatically restart kernel after installs\nimport os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)",
"Before you begin\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex AI API, Cloud Build API, Cloud Storage API, and Container Registry API.\n\n\nIf you are running this notebook locally, you will need to install the Cloud SDK.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.\nSet your project ID\nIf you don't know your project ID, you may be able to get your project ID using gcloud.",
"import os\n\n# Get your Google Cloud project ID from gcloud\nif not os.getenv(\"IS_TESTING\"):\n shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID: \", PROJECT_ID)",
"Otherwise, set your project ID here.",
"if PROJECT_ID == \"\" or PROJECT_ID is None:\n PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\n!gcloud config set project {PROJECT_ID}",
"Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.",
"from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")",
"Authenticate your Google Cloud account\nIf you are using Google Cloud Notebooks, your environment is already\nauthenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions\nwhen prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\n\n\nIn the Cloud Console, go to the Create service account key\n page.\n\n\nClick Create service account.\n\n\nIn the Service account name field, enter a name, and\n click Create.\n\n\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex AI\"\ninto the filter box, and select\n Vertex AI Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\n\n\nClick Create. A JSON file that contains your key downloads to your\nlocal environment.\n\n\nEnter the path to your service account key as the\nGOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.",
"import os\nimport sys\n\n# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# The Google Cloud Notebook product has specific requirements\nIS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists(\"/opt/deeplearning/metadata/env_version\")\n\n# If on Google Cloud Notebooks, then don't execute this code\nif not IS_GOOGLE_CLOUD_NOTEBOOK:\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''",
"Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you submit a training job, Vertex AI saves all resources to the given GCS bucket. We will also use the same bucket to download and host the input data. \nSet the name of your Cloud Storage bucket below. It must be unique across all\nCloud Storage buckets.\nYou may also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Make sure to choose a region where Vertex AI services are\navailable. You may not use a Multi-Regional Storage bucket for training with Vertex AI.",
"BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\nREGION = \"[your-region]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP",
"Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.",
"! gsutil mb -l $REGION $BUCKET_NAME",
"Finally, validate access to your Cloud Storage bucket by examining its contents:",
"! gsutil ls -al $BUCKET_NAME",
"Import libraries and define constants",
"PIPELINE_NAME = '{0}-{1}'.format('tfp-anomaly-detection', TIMESTAMP)\nPIPELINE_ROOT = '{0}/{1}'.format(BUCKET_NAME, PIPELINE_NAME)\n\nfrom typing import Callable, Optional, Mapping, Any\n\nimport kfp\nfrom kfp.v2 import compiler\nfrom kfp.v2 import dsl\nfrom kfp.v2.google.client import AIPlatformClient\nfrom kfp.v2.dsl import Input, Output, Dataset",
"Define the anomaly detection components\nHere you will load components from the anomaly_detection folder in the Google Cloud Pipeline Components SDK.\nYou can also save and modify the original Python component file. For example, for tfp_anomaly_detection.py:\n\nCall generate_component_file() which creates a yaml file.\nReplace the next cell with anomaly_detection_op = kfp.components.load_component_from_file('component.yaml')\n\nThe components do the following:\n* preprocess: Regularizes and resamples a time series.\n* tfp_anomaly_detection: Infers the structure of the time series, fits the model, and identifies anomalies based on the predictive distribution of acceptable values at each timestep.\n* postprocess: Fills missing values from regularizing and resampling.",
"preprocess_op = kfp.components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/master/components/google-cloud/google_cloud_pipeline_components/experimental/tensorflow_probability/anomaly_detection/preprocess.yaml')\nanomaly_detection_op = kfp.components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/master/components/google-cloud/google_cloud_pipeline_components/experimental/tensorflow_probability/anomaly_detection/component.yaml')\npostprocess_op = kfp.components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/master/components/google-cloud/google_cloud_pipeline_components/experimental/tensorflow_probability/anomaly_detection/postprocess.yaml')",
"Define the pipeline\nHere you will define the relationship between the components and how data is passed. In this pipeline a Google Cloud Storage csv is imported, the data is preprocessed, anomalies are flagged, and the results are postprocessed so that the output csv is scoreable by the Numenta Anomaly Benchmark.",
"@dsl.pipeline(\n pipeline_root=PIPELINE_ROOT, name=PIPELINE_NAME)\ndef pipeline(input_url: str, memory_limit: str, seed: int) -> None:\n \"\"\"\n Train model and return detected anomalies.\n \"\"\"\n input_task = kfp.dsl.importer(\n artifact_uri=input_url,\n artifact_class=Dataset)\n preprocess_task = preprocess_op(input_dataset=input_task.output)\n anomaly_detection_task = anomaly_detection_op(input_dataset=preprocess_task.output, seed=seed).set_memory_limit(memory_limit)\n postprocess_op(input_dataset=input_task.output, predictions_dataset=anomaly_detection_task.output)\n\ndef run_pipeline(pipeline: Callable,\n parameter_values: Optional[Mapping[str, Any]] = {},\n enable_caching: bool = False) -> None:\n \"\"\"Runs a given pipeline function using Kubeflow Pipelines.\n\n Args:\n pipeline: The function to run.\n parameter_values: Parameters passed to the pipeline function when run.\n enable_caching: Whether to used cached results from previous runs.\n \"\"\"\n compiler.Compiler().compile(\n pipeline_func=pipeline,\n package_path='{}_pipeline.json'.format(PIPELINE_NAME))\n\n api_client = AIPlatformClient(\n project_id=PROJECT_ID,\n region=REGION,\n )\n\n _ = api_client.create_run_from_job_spec(\n job_spec_path='{}_pipeline.json'.format(PIPELINE_NAME),\n pipeline_root=PIPELINE_ROOT,\n parameter_values=parameter_values,\n enable_caching=enable_caching)",
"Download the data\nHere you will download the Numenta Anomaly Benchmark and upload the dataset to your GCS bucket. We will then find the exact GCS file url associated with the chosen task to pass as the input url into the pipeline.",
"import os\n\nNAB_DATA_BLOB = '{0}/NAB'.format(BUCKET_NAME)\nif not os.path.exists('content/NAB'):\n !git clone https://github.com/numenta/NAB\n!gsutil cp -r NAB/data $NAB_DATA_BLOB\n\n# Find the full file path in gcs for the chosen task\nimport tensorflow as tf\n\nchosen_task_folder = 'realKnownCause'\nchosen_task = 'nyc_taxi'\nnab_files = tf.io.gfile.glob('{0}/*/*.csv'.format(NAB_DATA_BLOB))\nchosen_task_file = [file for file in nab_files if chosen_task in file][0]\nprint('The pipeline will be run on the task: {0}'.format(chosen_task))",
"Run the pipeline\nFinally, we run the pipeline. Please wait until the run has completed before proceeding to the next steps.",
"parameter_values = {\n 'input_url': chosen_task_file,\n 'memory_limit': '50G',\n 'seed': 0,\n}\nrun_pipeline(pipeline, parameter_values=parameter_values)",
"Download the results locally\nCopy the GCS file path from the final postprocess step of the pipeline below. Here we will save this output locally for visualization and scoring.",
"import pandas as pd\nimport numpy as np\nimport json\n\ngcs_file = '[your-pipeline-output]' # @param {type:'string'}\noutput_file = '/content/{0}-{1}.csv'.format(chosen_task, TIMESTAMP)\n!gsutil cp $gcs_file $output_file\n\n# Collect targets specifically for the chosen task\ntargets = json.load(open('/content/NAB/labels/combined_labels.json'))\nchosen_task_targets = [targets[key] for key in targets if chosen_task in key][0]",
"Visualize the results\nHere we will plot the forecast distribution outputted by the pipeline, the points flagged as anomalies (red), and the ground truth targets (green). The graph is plotted with daily granularity due to the resampling done during preprocessing.\nNote how the algorithm correctly identifies December 25th as an anomaly.",
"#@title Plotting setup\nfrom matplotlib import pylab as plt\nfrom matplotlib.lines import Line2D\n\ndef plot_predictions(predictions: pd.DataFrame, annotation_fn: Callable = lambda timestamp: timestamp) -> None:\n \"\"\"\n Plots the time series, forecast, detected anomalies, and residuals.\n\n Args:\n predictions: The output of the anomaly detection algorithm.\n \"\"\"\n # Drop NaN values during plotting\n predictions = predictions.dropna(how='any')\n predictions = predictions.reset_index()\n\n timestamp = pd.to_datetime(predictions['timestamp'], format='%Y-%m-%d')\n # Plot the value from predictions which may be\n # an aggregation of the original value\n value = np.array(predictions['value_predictions'])\n lower_limit = np.array(predictions['lower_limit'])\n upper_limit = np.array(predictions['upper_limit'])\n mean = np.array(predictions['mean'])\n anomalies = np.array(predictions['label']).nonzero()[0]\n targets = []\n if 'target' in predictions:\n targets = np.array(predictions['target']).nonzero()[0]\n\n fig = plt.figure(figsize=(10, 5), constrained_layout=True)\n spec = fig.add_gridspec(ncols=1, nrows=2, height_ratios=[2., 1.])\n series_ax = fig.add_subplot(spec[0, 0])\n residuals_ax = fig.add_subplot(spec[1, 0], sharex=series_ax)\n\n # Plot anomalies on series_ax\n series_ax.plot(\n timestamp,\n value,\n color='black',\n alpha=0.6)\n series_ax.fill_between(\n timestamp,\n lower_limit,\n upper_limit,\n color='tab:blue',\n alpha=0.3)\n\n for anomaly_idx in anomalies:\n x = timestamp[anomaly_idx]\n y = value[anomaly_idx]\n series_ax.scatter(x, y, s=100, alpha=0.4, c='red')\n \n for target_idx in targets:\n x = timestamp[target_idx]\n y = value[target_idx]\n series_ax.scatter(x, y, s=100, alpha=0.4, c='green')\n series_ax.annotate(annotation_fn(x), (x, y))\n\n # Plot residuals on residuals_ax\n time_delta = timestamp[1] - timestamp[0]\n residuals_ax.bar(\n timestamp,\n height=upper_limit - lower_limit,\n bottom=lower_limit - mean,\n width=time_delta,\n align='center',\n color='tab:blue',\n alpha=0.3)\n residuals_ax.bar(\n timestamp,\n width=time_delta,\n height=value - mean,\n align='center',\n color='black',\n alpha=0.6)\n\n # Set up grid styling\n series_ax.set_ylabel('Original series')\n residuals_ax.set_ylabel('Residuals')\n series_ax.grid(True, color='whitesmoke')\n residuals_ax.grid(True, color='whitesmoke')\n series_ax.set_axisbelow(True)\n residuals_ax.set_axisbelow(True)\n\n # Add title and legend\n series_ax.set_title('TFP STS model forecast, anomalies, and residuals for {0}'.format(chosen_task))\n create_legend_label = lambda label, color: Line2D([0], [0], marker='o', color='w', label=label, markerfacecolor=color, markersize=10)\n legend_elements = [create_legend_label(label, color) for label, color in [('predicted anomaly', 'red'), ('target', 'green')]]\n series_ax.legend(handles=legend_elements, loc='lower right')\n\n# Round target timestamps to day for plotting\nround_to_day = lambda timestamp: timestamp.split()[0]\nrounded_targets = [round_to_day(timestamp) for timestamp in chosen_task_targets]\nrounded_targets = set(rounded_targets)\npredictions = pd.read_csv(output_file)\npredictions['target'] = predictions.apply(lambda df: round_to_day(df['timestamp']) in rounded_targets, axis=1)\n\n# Change the start and end to view different slices of the prediction\nstart, end = 8000, 9000\nround_annotation = lambda timestamp: timestamp.date()\nplot_predictions(predictions.iloc[start:end], round_annotation)",
"Run scoring\nHere we quantitatively score the algorithm's performance on the Numenta Anomaly Benchmark. The benchmark uses a custom scoring mechanism described in their paper. Unlike precision and recall which do not reward for early detection, this scoring mechanism rewards based on windows around anomalous points rather than the exact points themselves.\nWe will run the scoring script with the --optimize flag, which uses the anomaly_scores column to score and optimizes the decision threshold. If this flag is omitted, then the script will only use the label column originally outputted by the component.",
"# Set up NAB folder for running scoring\n%cd /content/NAB\n!pip install . --user\n!python scripts/create_new_detector.py --detector $PIPELINE_NAME\n\n# Move gcs output into the NAB results folder structure\nresults_file = 'results/{0}/{1}/{0}_{2}.csv'.format(PIPELINE_NAME, chosen_task_folder, chosen_task)\n!cp $output_file $results_file\n\n# Run the scoring script\n!python run.py -d $PIPELINE_NAME --optimize --score --normalize\n\n#@title Score collection and normalization setup\nimport glob\n\ndef collect_scores(profile_name: str, chosen_task: str) -> pd.DataFrame:\n \"\"\"Crawls through results files for all detectors in NAB to get results for the chosen task.\n \n Args:\n profile_name: One of 'standard', 'low_FP_rate', 'low_FN_rate'.\n chosen_task: The chosen benchmark task.\n \n Returns:\n all_scores_df: A pandas DataFrame of results for the task sorted by highest to lowest score.\n \"\"\"\n all_scores = []\n for scores_file in glob.glob('/content/NAB/results/**/*_{0}_scores.csv'.format(profile_name)):\n scores_df = pd.read_csv(scores_file)\n chosen_task_row = scores_df[scores_df['File'].str.contains(chosen_task).fillna(False)]\n all_scores.append(chosen_task_row)\n all_scores_df = pd.concat(all_scores)\n all_scores_df = all_scores_df.sort_values(by=['Score'], ascending=False)\n all_scores_df = all_scores_df.reset_index().drop('index', axis=1)\n return all_scores_df\n\ndef normalize_scores(results: pd.DataFrame, profile_name: str, profiles: dict,\n tpCount: int) -> pd.DataFrame:\n \"\"\"Normalizes scores with the max from a perfect detector and the min from a null detector.\n\n Args:\n results: Pandas DataFrame with score results.\n profile_name: One of 'standard', 'low_FP_rate', 'low_FN_rate'.\n profiles: Dictionary containing cost matrix for each profile.\n tpCount: The number of true positives in the ground truth targets.\n\n Returns:\n The results DataFrame with an added column of normalized scores.\n \"\"\"\n perfect = tpCount * profiles[profile_name][\"CostMatrix\"][\"tpWeight\"]\n # Note that the null detector's name is NaN in the `Detector` column\n base = results[pd.isna(results['Detector'])]['Score'].iloc[0]\n scores = results['Score']\n results['Normalized_Score'] = 100 * (scores - base) / (perfect - base)\n \n # Reindex column order for more organized table\n columns = results.columns.to_list()\n columns.remove('Score')\n columns.remove('Normalized_Score')\n columns += ['Score', 'Normalized_Score']\n results = results.reindex(columns=columns)\n print('Normalization used min raw score: {0} and max raw score: {1}'.format(base, perfect))\n return results",
"NAB also provides scores for three profile settings: standard, reward_low_FN_rate, and reward_low_FP_rate. If you run the cell below you can see the cost matrix for each profile, where reward_low_FN_rate penalizes false negatives more and reward_low_FP_rate penalizes false positives more. For example, if for the NYC Taxi & Limousine Commission it is worse to not have enough taxis during a big event than it is to have too many, then they may want to score based on a reward_low_FN_rate profile.\nFor the purposes of this demo we will only display results for the standard profile.",
"tpCount = len(chosen_task_targets)\nprofile_name = 'standard'\nprofiles = json.load(open('/content/NAB/config/profiles.json'))\nprofiles\n\nresults = collect_scores(profile_name, chosen_task)\nresults = normalize_scores(results, profile_name, profiles, tpCount)\n\nresults"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
info-370/classification
|
knn/INFO370-KNN_Exercise.ipynb
|
mit
|
[
"Import modules",
"from sklearn.datasets import load_boston\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.preprocessing import scale\nfrom sklearn.neighbors import KNeighborsRegressor\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.cross_validation import KFold\nimport matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline",
"Load data\nFor this exercise, we will be using a dataset of housing prices in Boston during the 1970s. Python's super-awesome sklearn package already has the data we need to get started. Below is the command to load the data. The data is stored as a dictionary. \nThe 'DESCR' is a description of the data and the command for printing it is below. Note all the features we have to work with. From the dictionary, we need the data and the target variable (in this case, housing price). Store these as variables named \"data\" and \"price\", respectively. Once you have these, print their shapes to see all checks out with the DESCR.",
"boston = load_boston()\nprint boston.DESCR",
"Train-Test split\nNow, using sklearn's train_test_split, (see here for more. I've already imported it for you.) let's make a random train-test split with the test size equal to 30% of our data (i.e. set the test_size parameter to 0.3). For consistency, let's also set the random.state parameter = 11.\nName the variables train_data, train_price for the training data and test_data, test_price for the test data. As a sanity check, let's also print the shapes of these variables.\nScale our data\nBefore we get too far ahead, let's scale our data. Let's subtract the min from each column (feature) and divide by the difference between the max and min for each column. \nHere's where things can get tricky. Remember, our test data is unseen yet we need to scale it. We cannot scale using it's min/max because the data is unseen might not be available to us en masse. Instead, we use the training min/max to scale the test data.\nBe sure to check which axis you use to take the mins/maxes!\nLet's add a \"_stand\" suffix to our train/test variable names for the standardized values\nK-Fold CV\nNow, here's where things might get really messy. Let's implement 10-Fold Cross Validation on K-NN across a range of K values (given below - 9 total). We'll keep our K for K-fold CV constant at 10. \nLet's determine our accuracy using an RMSE (root-mean-square-error) value based on Euclidean distance. Save the errors for each fold at each K value (10 folds x 9 K values = 90 values) as you loop through.\nTake a look at sklearn's K-fold CV. Also, sklearn has it's own K-NN implementation. there is also an implementation of mean squared error, though you'll have to take the root yourself. I've imported these for you already. :)\nPlot Results\nPlot your training accuracy across all folds as a function of K. What do you see?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
probml/pyprobml
|
notebooks/misc/gaussian_param_inf_1d_numpyro.ipynb
|
mit
|
[
"<a href=\"https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/gaussian_param_inf_1d_numpyro.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nInference for the parameters of a 1d Gaussian using a non-conjugate prior\nWe illustrate various inference methods using the example in sec 4.3 (\"Gaussian model of height\") of Statistical Rethinking ed 2. This requires computing $p(\\mu,\\sigma|D)$ using a Gaussian likelihood but a non-conjugate prior.\nThe numpyro code is from Du Phan's site.",
"import numpy as np\n\nnp.set_printoptions(precision=3)\nimport matplotlib.pyplot as plt\nimport math\nimport os\nimport warnings\nimport pandas as pd\n\n# from scipy.interpolate import BSpline\n# from scipy.stats import gaussian_kde\n\n!mkdir figures\n\n!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro\n\nimport jax\n\nprint(\"jax version {}\".format(jax.__version__))\nprint(\"jax backend {}\".format(jax.lib.xla_bridge.get_backend().platform))\n\nimport jax.numpy as jnp\nfrom jax import random, vmap\n\nrng_key = random.PRNGKey(0)\nrng_key, rng_key_ = random.split(rng_key)\n\nimport numpyro\nimport numpyro.distributions as dist\nfrom numpyro.distributions import constraints\nfrom numpyro.distributions.transforms import AffineTransform\nfrom numpyro.diagnostics import hpdi, print_summary\nfrom numpyro.infer import Predictive\nfrom numpyro.infer import MCMC, NUTS\nfrom numpyro.infer import SVI, Trace_ELBO, init_to_value\nfrom numpyro.infer.autoguide import AutoLaplaceApproximation\nimport numpyro.optim as optim\n\n!pip install arviz\nimport arviz as az",
"Data\nWe use the \"Howell\" dataset, which consists of measurements of height, weight, age and sex, of a certain foraging tribe, collected by Nancy Howell.",
"# url = 'https://github.com/fehiepsi/rethinking-numpyro/tree/master/data/Howell1.csv?raw=True'\nurl = \"https://raw.githubusercontent.com/fehiepsi/rethinking-numpyro/master/data/Howell1.csv\"\n\nHowell1 = pd.read_csv(url, sep=\";\")\nd = Howell1\nd.info()\nd.head()\n\n# get data for adults\nd2 = d[d.age >= 18]\nN = len(d2)\nndx = jax.random.permutation(rng_key, N)\ndata = d2.height.values[ndx]\nN = 20 # take a subset of the 354 samples\ndata = data[:N]",
"Empirical mean and std.",
"print(len(data))\nprint(np.mean(data))\nprint(np.std(data))",
"Model\nWe use the following model for the heights (in cm):\n$$\n\\begin{align}\nh_i &\\sim N(\\mu,\\sigma) \\\n\\mu &\\sim N(178, 20) \\\n\\sigma &\\sim U(0,50)\n\\end{align}\n$$\nThe prior for $\\mu$ has a mean 178cm, since that is the height of \nRichard McElreath, the author of the \"Statisical Rethinking\" book.\nThe standard deviation is 20, so that 90\\% of people lie in the range 138--218.\nThe prior for $\\sigma$ has a lower bound of 0 (since it must be positive), and an upper bound of 50, so that the interval $[\\mu-\\sigma, \\mu+\\sigma]$ has width 100cm, which seems sufficiently large to capture human heights.\nNote that this is not a conjugate prior, so we will just approximate the posterior.\nBut since there are just 2 unknowns, this will be easy.\nGrid posterior",
"mu_prior = dist.Normal(178, 20)\nsigma_prior = dist.Uniform(0, 50)\n\nmu_range = [150, 160]\nsigma_range = [4, 14]\nngrid = 100\nplot_square = False\n\nmu_list = jnp.linspace(start=mu_range[0], stop=mu_range[1], num=ngrid)\nsigma_list = jnp.linspace(start=sigma_range[0], stop=sigma_range[1], num=ngrid)\nmesh = jnp.meshgrid(mu_list, sigma_list)\nprint([mesh[0].shape, mesh[1].shape])\nprint(mesh[0].reshape(-1).shape)\npost = {\"mu\": mesh[0].reshape(-1), \"sigma\": mesh[1].reshape(-1)}\npost[\"LL\"] = vmap(lambda mu, sigma: jnp.sum(dist.Normal(mu, sigma).log_prob(data)))(post[\"mu\"], post[\"sigma\"])\nlogprob_mu = mu_prior.log_prob(post[\"mu\"])\nlogprob_sigma = sigma_prior.log_prob(post[\"sigma\"])\npost[\"prob\"] = post[\"LL\"] + logprob_mu + logprob_sigma\npost[\"prob\"] = jnp.exp(post[\"prob\"] - jnp.max(post[\"prob\"]))\nprob = post[\"prob\"] / jnp.sum(post[\"prob\"]) # normalize over the grid\n\nprob2d = prob.reshape(ngrid, ngrid)\nprob_mu = jnp.sum(prob2d, axis=0)\nprob_sigma = jnp.sum(prob2d, axis=1)\n\nplt.figure()\nplt.plot(mu_list, prob_mu, label=\"mu\")\nplt.legend()\nplt.savefig(\"figures/gauss_params_1d_post_grid_marginal_mu.pdf\", dpi=300)\nplt.show()\n\nplt.figure()\nplt.plot(sigma_list, prob_sigma, label=\"sigma\")\nplt.legend()\nplt.savefig(\"figures/gauss_params_1d_post_grid_marginal_sigma.pdf\", dpi=300)\nplt.show()\n\nplt.contour(\n post[\"mu\"].reshape(ngrid, ngrid),\n post[\"sigma\"].reshape(ngrid, ngrid),\n post[\"prob\"].reshape(ngrid, ngrid),\n)\nplt.xlabel(r\"$\\mu$\")\nplt.ylabel(r\"$\\sigma$\")\nif plot_square:\n plt.axis(\"square\")\nplt.savefig(\"figures/gauss_params_1d_post_grid_contours.pdf\", dpi=300)\nplt.show()\n\nplt.imshow(\n post[\"prob\"].reshape(ngrid, ngrid),\n origin=\"lower\",\n extent=(mu_range[0], mu_range[1], sigma_range[0], sigma_range[1]),\n aspect=\"auto\",\n)\nplt.xlabel(r\"$\\mu$\")\nplt.ylabel(r\"$\\sigma$\")\nif plot_square:\n plt.axis(\"square\")\nplt.savefig(\"figures/gauss_params_1d_post_grid_heatmap.pdf\", dpi=300)\nplt.show()",
"Posterior samples.",
"nsamples = 5000 # int(1e4)\nsample_rows = dist.Categorical(probs=prob).sample(random.PRNGKey(0), (nsamples,))\nsample_mu = post[\"mu\"][sample_rows]\nsample_sigma = post[\"sigma\"][sample_rows]\nsamples = {\"mu\": sample_mu, \"sigma\": sample_sigma}\n\nprint_summary(samples, 0.95, False)\n\n\nplt.scatter(samples[\"mu\"], samples[\"sigma\"], s=64, alpha=0.1, edgecolor=\"none\")\nplt.xlim(mu_range[0], mu_range[1])\nplt.ylim(sigma_range[0], sigma_range[1])\nplt.xlabel(r\"$\\mu$\")\nplt.ylabel(r\"$\\sigma$\")\nplt.axis(\"square\")\nplt.show()\n\naz.plot_kde(samples[\"mu\"], samples[\"sigma\"])\nplt.xlim(mu_range[0], mu_range[1])\nplt.ylim(sigma_range[0], sigma_range[1])\nplt.xlabel(r\"$\\mu$\")\nplt.ylabel(r\"$\\sigma$\")\nif plot_square:\n plt.axis(\"square\")\nplt.savefig(\"figures/gauss_params_1d_post_grid.pdf\", dpi=300)\nplt.show()",
"posterior marginals.",
"print(hpdi(samples[\"mu\"], 0.95))\nprint(hpdi(samples[\"sigma\"], 0.95))\n\nfig, ax = plt.subplots()\naz.plot_kde(samples[\"mu\"], ax=ax, label=r\"$\\mu$\")\n\nfig, ax = plt.subplots()\naz.plot_kde(samples[\"sigma\"], ax=ax, label=r\"$\\sigma$\")",
"Laplace approximation\nSee the documentation\nOptimization",
"def model(data):\n mu = numpyro.sample(\"mu\", mu_prior)\n sigma = numpyro.sample(\"sigma\", sigma_prior)\n numpyro.sample(\"height\", dist.Normal(mu, sigma), obs=data)\n\n\nguide = AutoLaplaceApproximation(model)\nsvi = SVI(model, guide, optim.Adam(1), Trace_ELBO(), data=data)\nsvi_result = svi.run(random.PRNGKey(0), 2000)\n\nplt.figure()\nplt.plot(svi_result.losses)\n\nstart = {\"mu\": data.mean(), \"sigma\": data.std()}\nguide = AutoLaplaceApproximation(model, init_loc_fn=init_to_value(values=start))\nsvi = SVI(model, guide, optim.Adam(0.1), Trace_ELBO(), data=data)\nsvi_result = svi.run(random.PRNGKey(0), 2000)\n\nplt.figure()\nplt.plot(svi_result.losses)",
"Posterior samples.",
"samples = guide.sample_posterior(random.PRNGKey(1), svi_result.params, (nsamples,))\n\nprint_summary(samples, 0.95, False)\n\n\nplt.scatter(samples[\"mu\"], samples[\"sigma\"], s=64, alpha=0.1, edgecolor=\"none\")\nplt.xlim(mu_range[0], mu_range[1])\nplt.ylim(sigma_range[0], sigma_range[1])\nplt.xlabel(r\"$\\mu$\")\nplt.ylabel(r\"$\\sigma$\")\nplt.show()\n\naz.plot_kde(samples[\"mu\"], samples[\"sigma\"])\nplt.xlim(mu_range[0], mu_range[1])\nplt.ylim(sigma_range[0], sigma_range[1])\nplt.xlabel(r\"$\\mu$\")\nplt.ylabel(r\"$\\sigma$\")\nif plot_square:\n plt.axis(\"square\")\nplt.savefig(\"figures/gauss_params_1d_post_laplace.pdf\", dpi=300)\nplt.show()\n\nprint(hpdi(samples[\"mu\"], 0.95))\nprint(hpdi(samples[\"sigma\"], 0.95))\n\nfig, ax = plt.subplots()\naz.plot_kde(samples[\"mu\"], ax=ax, label=r\"$\\mu$\")\n\nfig, ax = plt.subplots()\naz.plot_kde(samples[\"sigma\"], ax=ax, label=r\"$\\sigma$\")",
"Extract 2d joint posterior\nThe Gaussian approximation is over transformed parameters.",
"post = guide.get_posterior(svi_result.params)\nprint(post.mean)\nprint(post.covariance_matrix)\n\ndef logit(p):\n return jnp.log(p / (1 - p))\n\n\ndef sigmoid(a):\n return 1 / (1 + jnp.exp(-a))\n\n\nscale = 50\nprint(logit(7.7 / scale))\nprint(sigmoid(-1.7) * scale)\n\nunconstrained_samples = post.sample(rng_key, sample_shape=(nsamples,))\nconstrained_samples = guide._unpack_and_constrain(unconstrained_samples, svi_result.params)\n\nprint(unconstrained_samples.shape)\nprint(jnp.mean(unconstrained_samples, axis=0))\nprint(jnp.mean(constrained_samples[\"mu\"], axis=0))\nprint(jnp.mean(constrained_samples[\"sigma\"], axis=0))",
"We can sample from the posterior, which return results in the original parameterization.",
"samples = guide.sample_posterior(random.PRNGKey(1), params, (nsamples,))\nx = jnp.stack(list(samples.values()), axis=0)\nprint(x.shape)\nprint(\"mean of ssamples\\n\", jnp.mean(x, axis=1))\nvcov = jnp.cov(x)\nprint(\"cov of samples\\n\", vcov) # variance-covariance matrix\n\n# correlation matrix\nR = vcov / jnp.sqrt(jnp.outer(jnp.diagonal(vcov), jnp.diagonal(vcov)))\nprint(\"corr of samples\\n\", R)",
"Variational inference\nWe use\n$q(\\mu,\\sigma) = N(\\mu|m,s) Ga(\\sigma|a,b)$",
"def guide(data):\n data_mean = jnp.mean(data)\n data_std = jnp.std(data)\n m = numpyro.param(\"m\", data_mean)\n s = numpyro.param(\"s\", 10, constraint=constraints.positive)\n a = numpyro.param(\"a\", data_std, constraint=constraints.positive)\n b = numpyro.param(\"b\", 1, constraint=constraints.positive)\n mu = numpyro.sample(\"mu\", dist.Normal(m, s))\n sigma = numpyro.sample(\"sigma\", dist.Gamma(a, b))\n\n\noptimizer = numpyro.optim.Momentum(step_size=0.001, mass=0.1)\nsvi = SVI(model, guide, optimizer, loss=Trace_ELBO())\nnsteps = 2000\nsvi_result = svi.run(rng_key_, nsteps, data=data)\n\nprint(svi_result.params)\nprint(svi_result.losses.shape)\nplt.plot(svi_result.losses)\nplt.title(\"ELBO\")\nplt.xlabel(\"step\")\nplt.ylabel(\"loss\");",
"Extract Variational parameters.",
"print(svi_result.params)\na = np.array(svi_result.params[\"a\"])\nb = np.array(svi_result.params[\"b\"])\nm = np.array(svi_result.params[\"m\"])\ns = np.array(svi_result.params[\"s\"])\n\nprint(\"empirical mean\", jnp.mean(data))\nprint(\"empirical std\", jnp.std(data))\n\nprint(r\"posterior mean and std of $\\mu$\")\npost_mean = dist.Normal(m, s)\nprint([post_mean.mean, jnp.sqrt(post_mean.variance)])\n\nprint(r\"posterior mean and std of unconstrained $\\sigma$\")\npost_sigma = dist.Gamma(a, b)\nprint([post_sigma.mean, jnp.sqrt(post_sigma.variance)])",
"Posterior samples",
"predictive = Predictive(guide, params=svi_result.params, num_samples=nsamples)\nsamples = predictive(rng_key, data)\n\nprint_summary(samples, 0.95, False)\n\n\nplt.scatter(samples[\"mu\"], samples[\"sigma\"], s=64, alpha=0.1, edgecolor=\"none\")\nplt.xlim(mu_range[0], mu_range[1])\nplt.ylim(sigma_range[0], sigma_range[1])\nplt.xlabel(r\"$\\mu$\")\nplt.ylabel(r\"$\\sigma$\")\nplt.show()\n\naz.plot_kde(samples[\"mu\"], samples[\"sigma\"])\nplt.xlim(mu_range[0], mu_range[1])\nplt.ylim(sigma_range[0], sigma_range[1])\nplt.xlabel(r\"$\\mu$\")\nplt.ylabel(r\"$\\sigma$\")\nif plot_square:\n plt.axis(\"square\")\nplt.savefig(\"figures/gauss_params_1d_post_vi.pdf\", dpi=300)\nplt.show()\n\nprint(hpdi(samples[\"mu\"], 0.95))\nprint(hpdi(samples[\"sigma\"], 0.95))\n\nfig, ax = plt.subplots()\naz.plot_kde(samples[\"mu\"], ax=ax, label=r\"$\\mu$\")\n\nfig, ax = plt.subplots()\naz.plot_kde(samples[\"sigma\"], ax=ax, label=r\"$\\sigma$\")",
"MCMC",
"conditioned_model = numpyro.handlers.condition(model, {\"data\": data})\nnuts_kernel = NUTS(conditioned_model)\nmcmc = MCMC(nuts_kernel, num_warmup=100, num_samples=nsamples)\nmcmc.run(rng_key_, data)\n\nmcmc.print_summary()\nsamples = mcmc.get_samples()\n\nprint_summary(samples, 0.95, False)\n\n\nplt.scatter(samples[\"mu\"], samples[\"sigma\"], s=64, alpha=0.1, edgecolor=\"none\")\nplt.xlim(mu_range[0], mu_range[1])\nplt.ylim(sigma_range[0], sigma_range[1])\nplt.xlabel(r\"$\\mu$\")\nplt.ylabel(r\"$\\sigma$\")\nplt.show()\n\naz.plot_kde(samples[\"mu\"], samples[\"sigma\"])\nplt.xlim(mu_range[0], mu_range[1])\nplt.ylim(sigma_range[0], sigma_range[1])\nplt.xlabel(r\"$\\mu$\")\nplt.ylabel(r\"$\\sigma$\")\nif plot_square:\n plt.axis(\"square\")\nplt.savefig(\"figures/gauss_params_1d_post_mcmc.pdf\", dpi=300)\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mauriciogtec/PropedeuticoDataScience2017
|
Alumnos/Rodrigo_Cedeno/Tarea_2_Rodrigo_Cedeno.ipynb
|
mit
|
[
"Proyecto 2\nParte 1: Conocimiento teórico de Álgebra Lineal\n\n¿Por qué una matriz equivale a una transformación lineal entre espacios vectoriales?\n¿Cuál es el efecto de transformación lineal de una matriz diagonal y el de una matriz ortogonal?\n¿Qué es la descomposición en valores singulares de una matriz?\n¿Qué es diagonalizar una matriz y que representan los eigenvectores?\n¿ Intuitivamente qué son los eigenvectores?\n¿Cómo interpretas la descomposición en valores singulares como una composición de tres tipos\nde transformaciones lineales simples?\n¿Qué relación hay entre la descomposición en valores singulares y la diagonalización?\n¿Cómo se usa la descomposición en valores singulares para dar un aproximación de rango menor\na una matriz?\nDescribe el método de minimización por descenso gradiente\nMenciona 4 ejemplo de problemas de optimización (dos con restricciones y dos sin restricciones) que te parecan interesantes como Científico de Datos\n\nPregunta 1\nUna matriz equivale a una transformación lineal entre espacios vectoriales porque la mayoría de las transformaciones lineales pueden ser descritas en términos matriciales. Esto quiere decir que se puede tomar un espacio vectorial, y al multipicarlo por cierta matriz el espacio anterior puede representar un nuevo espacio vectorial.\nPregunta 2\nUna matriz diagonal es una transformación lineal uno a uno, (por lo que al realizar la transformación ningun vector se colapsa) lo que permite obtener todos los vectores de la imágen utilizando la transformación en los vectores del dominio.\nEl efecto de transformación de una matriz ortogonal es el de preservar los ángulos y dimensiones entre vectores que se están transformando.\nPregunta 3\nLa descomposición de valores singulares de una matriz es la factorización última y mejor de una matriz de la siguiente manera:\n\\begin{equation}\nAx = U \\Sigma V^T\n\\end{equation}\nDonde:\n\\begin{equation}\nU = Matriz Ortogonal\\\n\\Sigma = Matriz Diagonal\\\nV^T = Ortogonal\\\n\\end{equation}\nVentaja: No es necesario tener matrices cuadradas\nPregunta 4\nEn el caso de los eigenvalores y eigenvectores, diagonalizar una matriz significa factorizar los eigenvalores para que puedan multiplicar a las columnas de eigenvectores individualmente. \nLos eigenvectores representan el sistema de coordenadas, la dirección.\nPregunta 5\nEigenvectores: Intuitivamente, son vectores que despúes de haberlos transformado con una matriz Ax, tienen una dirección paralela a 'x' (sin importar si es positiva o negativa).\nPregunta 6\nLa descomposición en valores singulares como una composición de tres tipos de transformaciones lineales simples se puede interpetar de la siguiente manera:\nRotación + Redimensión de los ejes canónicos + Rotación\nPregunta 7\nLa relación entre Valores Singulares y diagonalización es que en ambos casos se obtienen por separado los eigenvalores de los eigenvectores.\nLa SVD te da información similar a la diagonalización pero para matrices que no sean cuadradas.\nPregunta 8\nSe utiliza para saber a qué valores de x en la ecuación Ax=y se aproximan más, considerando que A tiene rango menor.\nPregunta 9\nLa minimización por descenso gradiente es un método de optimización en el que se busca la pendiente de una curva, superficie, etc. 
que vaya en descenso; al momento de encontrar una pendiente que no sea decrecientese dice que este último valor con pendiente negativa es el mínimo.\nPregunta 10\nEjemplos de optimización (con restricciones) que me parecen importantes como científico de datos:\nCon restricción\n1) Optimización para el análisis de \"multibody dynamics\", dónde se busca optimizar el comportamiento de cierto mecanismo para que cumpla con objetivos establecidos de comportamientos. La restricción en este caso es la unión con otros sistemas o mecanismos; lo que haría que los resultados se vean acotados por los de otro sistema.\n2) Para mí es de gran interés e importancia este tipo de investigación para análisis de consumo en los supermerados, donde se optimiza los productos seleccionados a que haya la mayor cantidad de venta de cierta marca. Bajo esta optimización hay distintas restricciones como: el catálogo que tiene la empresa, las características demográficas a las que se quiere enfocar cada producto o la calidad del producto que se quiere comercializar.\nSin restricción\n3) Optimización de ciclos de vida del motor de un automóvil. En este caso se busca maximizar la vida útil del motor involucrando una gran cantidad de factores donde también se tienen que combinar con itinerarios de mantenimiento preventivo entre otras cosas.\n4) Optimización de una empresa manufurera; en este tipo de negocio hay distintas áreas que necesitan optimización; tales pueden ser: supply chain, procesos de manufactura, logística, etc.\nParte 2: Comprimir una imágen",
"#Importar Librerías\nimport numpy as np\nfrom PIL import Image\nimport matplotlib.pyplot as plt\n\n#Abrir imágen\nim = Image.open(\"/escudo_ferrari.png\")\n\n#Convertir imágen a blanco y negro\nim_gray = im.convert('LA')\n\n#Convertir los True y False en 1s y 0s\nmatrix_im = np.array(list(im_gray.getdata(band=0)), float)\nmatrix_im.shape = (im_gray.size[1], im_gray.size[0])\nmatrix_im = np.matrix(matrix_im)\n\n#Hacer la descomposición en valores singulares\nU, s, V = np.linalg.svd(matrix_im)\n\n#Reconstruir la imágen con el número de vectores requeridos\nbuild_image = np.matrix(U[:, :5]) * np.diag(s[:5]) * np.matrix(V[:5, :])\n\n#Mostrar imágen\nplt.imshow(build_image, cmap='gray')\nplt.show()",
"A continuación se puede también observar la reconstrucción de la imágen con 50 vectores principales, mostrando una calidad excelente en la imágen.",
"#Importar Librerías\nimport numpy as np\nfrom PIL import Image\nimport matplotlib.pyplot as plt\n\n#Abrir imágen\nim = Image.open(\"/escudo_ferrari.png\")\n\n#Convertir imágen a blanco y negro\nim_gray = im.convert('LA')\n\n#Convertir los True y False en 1s y 0s\nmatrix_im = np.array(list(im_gray.getdata(band=0)), float)\nmatrix_im.shape = (im_gray.size[1], im_gray.size[0])\nmatrix_im = np.matrix(matrix_im)\n\n#Hacer la descomposición en valores singulares\nU, s, V = np.linalg.svd(matrix_im)\n\n#Reconstruir la imágen con el número de vectores requeridos\nbuild_image = np.matrix(U[:, :50]) * np.diag(s[:50]) * np.matrix(V[:50, :])\n\n#Mostrar imágen\nplt.imshow(build_image, cmap='gray')\nplt.show()",
"Parte 3: Operaciones con Pseudoinversa",
"#Importar libererías\nimport numpy as np\n\n#Hacer pseudoinversa de la matriz\ndef pseudoinverse(mat_A):\n #Obtener SVD\n U, s, Vt = np.linalg.svd(mat_A)\n #Hacer inversa\n s_inv = 1/s\n #infinitos convertir en ceros\n NaN = np.isinf(s_inv)\n s_inv[NaN]=0\n #diagonalizar s\n sd_inv = np.diag(s_inv)\n V = np.transpose(Vt)\n Ut = np.transpose(U)\n pseudo_pt_1 = np.dot(V, sd_inv)\n pseudo = np.dot(pseudo_pt_1, Ut)\n return pseudo\n\n#Resolver ecuación Ax=b con método de pseudoinversa\ndef solve_with_pseudo(A,b):\n pseudo_A = pseudoinverse (A)\n x = np.matmul(pseudo_A,b)\n return x",
"a) ¿Cuál es la imágen de la matriz?\nLa imágen de esta matriz es:\n\\begin{equation}\n{(x \\ , \\ 0): x \\ \\epsilon \\ R }\n\\end{equation}\nb) ¿La solución resultante es única? Si hay más de una solución, investigar que carateriza a la solución devuelta.\nLa solución resultante no es única; debido a que la matriz es singular (no es invertible y su determinante = 0). Se puede decir que hay una infinidad de resultados.\nc) Cambiando A=[[1,1],[0,1e-32]], ¿En este caso la solucíon es única? ¿Cambia el valor devuelto de x en cada posible valor de b del punto anterior?\nIterando con el código anterior se puede observar que hay una solución única a pesar de que el valor 1e-32 se aproxime a cero. Es muy importante en este caso considerar el valor exacto y no cero.\nParte 4: Ajuse de mínimos cuadrados",
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n#Crear Data Frame con archivo CSV\ndf = pd.DataFrame(pd.read_csv(\"study_vs_sat.csv\", parse_dates=True))\n\n# Cambiar nombre a columnas\nnewcols = {\n 'Unnamed: 0': 'study_hours', \n 'Unnamed: 1': 'sat_score',}\ndf.rename(columns=newcols, inplace=True)\n\n#Borrar columnas vacías\ndf.drop('Unnamed: 2', axis=1, inplace=True)\ndf.drop('Unnamed: 3', axis=1, inplace=True)\ndf.drop('Unnamed: 4', axis=1, inplace=True)\ndf.drop('Unnamed: 5', axis=1, inplace=True)\ndf.drop('Unnamed: 6', axis=1, inplace=True)\n\nstudy_hours = df.study_hours\nsat_score = df.sat_score\n\n#Comenzar OLS\n#Definir X\n#Convertir DataFrame en Array\ndf_mat = np.array(df)\n\nx = study_hours\ny = sat_score\n\nn = len(x)\nxi_sum = np.sum(x)\nyi_sum = np.sum(y)\nxy = np.multiply(x,y)\nxy_sum = np.sum(xy)\nx_sq = x**2\nx_sq_sum = np.sum(x_sq)\nbeta = ((n*(xy_sum))-(xi_sum*yi_sum))/((n*x_sq_sum)-(xi_sum**2))\nalpha = (yi_sum-(beta*xi_sum))/n\ny = alpha + beta*x\n\ndef matrix_solve(data):\n #Calcular Alpha y Beta\n df_mat_2 = np.insert(df_mat, 0, 1, axis=1)\n df_mat_X = df_mat_2[:, 0:2]\n df_mat_Xt = np.transpose(df_mat_X)\n XXt = np.matmul(df_mat_Xt, df_mat_X)\n XXt_inv = np.linalg.inv(XXt)\n XXt_inv_Xt = np.matmul(XXt_inv,df_mat_Xt)\n df_mat_Y = df_mat_2[:, 2:3]\n Alpha_Beta = np.matmul(XXt_inv_Xt,df_mat_Y)\n alpha = Alpha_Beta[0]\n beta = Alpha_Beta[1]\n #Calcular Y\n Y_hat = np.matmul(df_mat_X,Alpha_Beta)\n Beta_norm = np.linalg.norm(beta)\n return(Y_hat)\n \n\n\ndef predictions(df_mat_X,alpha,beta):\n predict = np.matmul(df_mat_X,Beta)\n return predict\n \ndef res_w_pseudoinverse():\n df_mat_2 = np.insert(df_mat, 0, 1, axis=1)\n df_mat_X = df_mat_2[:, 0:2]\n pseudo_X = np.linalg.pinv(df_mat_X)\n result = np.matmul(pseudo_X, sat_score)\n return result\n\ndef plot(df_mat):\n input = df_mat\n plt.figure(1)\n x_axis = np.linspace(0, len(input))\n y_axis = np.array(alpha + beta * x_axis)\n plt.plot(x_axis, y_axis, color='b')\n plt.scatter(input[:, 0], input[:, 1], color='r')\n return plt.show()\n(2*(y-alpha-(beta*x)))\n ",
"El gradiente de la función es:",
"grad_desc_x = -sum((2*(y-alpha-(beta*x))*x))\ngrad_desc_y = sum((2*(y-alpha-(beta*x))))\nprint (\"El mínimo por descenso gradiente es \",grad_desc_y,\", \",grad_desc_x)",
"Programar una función que reciba valores de alpha, beta y el vector sat_score y devuelva un vector array de numpy de predicciones alpha + beta*study_hours_i, con un valor para cada individuo",
"y",
"Definan un numpy array X de dos columnas, la primera con unos en todas sus entradas y la segunda con la variable study_hours. Observen que <code>X[alpha,beta]</code> nos devuelve <code>alpha + betastudy_hours_i</code> en cada entrada y que entonces el problema se vuelve <code>sat_score ~ X*[alpha,beta]</code>",
"matrix_solve(df_mat)",
"Calculen la pseudoinversa X^+ de X y computen <code>(X^+)*sat_score</code> para obtener alpha y beta soluciones.</li>",
"res_w_pseudoinverse()",
"Se puede observar que el resultado es el mismo utilizando la seudoinversa, que el obtenido realizando la operación:<code> Y=(X^tX)^(-1)X^t*study_hours</code>\n<strong>(Avanzado)</strong> Usen la libreria <code>matplotlib</code> par visualizar las predicciones con alpha y beta solución contra los valores reales de sat_score.",
"plot(df_mat)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sarvex/PythonMachineLearning
|
Chapter 4/Default metrics.ipynb
|
isc
|
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np",
"Be mindful of default metrics\nRegression\n$$ R^2(y, \\hat{y}) = 1 - \\frac{\\displaystyle \\sum_{i=0}^{n - 1} (y_i - \\hat{y}i)^2}{\\displaystyle \\sum{i=0}^{n - 1} (y_i - \\bar{y})^2}$$\nwhere $$\\bar{y} = \\frac{1}{n} \\sum_{i=0}^{n - 1} y_i$$",
"rng = np.random.RandomState(42)\nX = rng.uniform(size=(30, 1))\na = rng.normal(scale=10)\nb = rng.normal()\n\ny_clean = np.dot(X, a).ravel() + b\ny = y_clean + rng.normal(size=len(y_clean))\nplt.plot(X[:, 0], y, 'x')\n\nfrom sklearn.metrics import mean_squared_error, r2_score\nfrom sklearn.linear_model import LinearRegression\n\nlr = LinearRegression().fit(X, y)\ny_pred = lr.predict(X)\nplt.plot(X[:, 0], y, 'x')\nplt.plot(X[:, 0], y_pred)\n\nprint(\"training set R^2: %f\" % r2_score(y, y_pred)) # same as lr.score(X, y)\nprint(\"training set MSE: %f\" % mean_squared_error(y, y_pred))\n\nprint(\"training set R^2: %f\" % r2_score(10 * y, 10 * y_pred))\nprint(\"training set MSE: %f\" % mean_squared_error(10 * y, 10 * y_pred))\n\nfrom sklearn.cross_validation import cross_val_score, LeaveOneOut\ncv = LeaveOneOut(len(y))\ncross_val_score(LinearRegression(), X, y, cv=cv)",
"Classification\n$$ \\texttt{accuracy}(y, \\hat{y}) = \\frac{1}{n} \\sum_{i=0}^{n-1} 1(\\hat{y}_i = y_i)$$\nMulti-class accuracies",
"from sklearn.datasets import make_blobs\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.dummy import DummyClassifier\n\nn_classes = 5\n\nX, y = make_blobs(centers=n_classes, n_samples=10000, random_state=42)\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5, random_state=42)\ndummy = DummyClassifier(random_state=42).fit(X_train, y_train)\nprint(\"Chance accuracy for %d classes: %.1f\" % (n_classes, dummy.score(X_test, y_test)))",
"Imbalanced classes",
"from sklearn.datasets import load_digits\nfrom sklearn.svm import SVC\n\ndigits = load_digits()\nX, y_is_three = digits.data, digits.target == 3\n\nX_train, X_test, y_train, y_test = train_test_split(X, y_is_three, test_size=.5, random_state=0)\n\ndummy = DummyClassifier().fit(X_train, y_train)\nprint(\"Chance accuracy for 9:1 imbalanced classification : %.2f\" % dummy.score(X_test, y_test))\n\ndummy = DummyClassifier(strategy=\"most_frequent\").fit(X_train, y_train)\nprint(\"Accuracy for 9:1 imbalanced classification predicting majority: %.2f\"\n % dummy.score(X_test, y_test))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
turbomanage/training-data-analyst
|
courses/machine_learning/tensorflow/c_batched.ipynb
|
apache-2.0
|
[
"<h1> 2c. Refactoring to add batching and feature-creation </h1>\n\nIn this notebook, we continue reading the same small dataset, but refactor our ML pipeline in two small, but significant, ways:\n<ol>\n<li> Refactor the input to read data in batches.\n<li> Refactor the feature creation so that it is not one-to-one with inputs.\n</ol>\nThe Pandas function in the previous notebook also batched, only after it had read the whole data into memory -- on a large dataset, this won't be an option.",
"import tensorflow as tf\nimport numpy as np\nimport shutil\nprint(tf.__version__)",
"<h2> 1. Refactor the input </h2>\n\nRead data created in Lab1a, but this time make it more general and performant. Instead of using Pandas, we will use TensorFlow's Dataset API.",
"CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']\nLABEL_COLUMN = 'fare_amount'\nDEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]\n\ndef read_dataset(filename, mode, batch_size = 512):\n def _input_fn():\n def decode_csv(value_column):\n columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)\n features = dict(zip(CSV_COLUMNS, columns))\n label = features.pop(LABEL_COLUMN)\n return features, label\n\n # Create list of files that match pattern\n file_list = tf.gfile.Glob(filename)\n\n # Create dataset from file list\n dataset = tf.data.TextLineDataset(file_list).map(decode_csv)\n if mode == tf.estimator.ModeKeys.TRAIN:\n num_epochs = None # indefinitely\n dataset = dataset.shuffle(buffer_size = 10 * batch_size)\n else:\n num_epochs = 1 # end-of-input after this\n\n dataset = dataset.repeat(num_epochs).batch(batch_size)\n return dataset.make_one_shot_iterator().get_next()\n return _input_fn\n \n\ndef get_train():\n return read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN)\n\ndef get_valid():\n return read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL)\n\ndef get_test():\n return read_dataset('./taxi-test.csv', mode = tf.estimator.ModeKeys.EVAL)",
"<h2> 2. Refactor the way features are created. </h2>\n\nFor now, pass these through (same as previous lab). However, refactoring this way will enable us to break the one-to-one relationship between inputs and features.",
"INPUT_COLUMNS = [\n tf.feature_column.numeric_column('pickuplon'),\n tf.feature_column.numeric_column('pickuplat'),\n tf.feature_column.numeric_column('dropofflat'),\n tf.feature_column.numeric_column('dropofflon'),\n tf.feature_column.numeric_column('passengers'),\n]\n\ndef add_more_features(feats):\n # Nothing to add (yet!)\n return feats\n\nfeature_cols = add_more_features(INPUT_COLUMNS)",
"<h2> Create and train the model </h2>\n\nNote that we train for num_steps * batch_size examples.",
"tf.logging.set_verbosity(tf.logging.INFO)\nOUTDIR = 'taxi_trained'\nshutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time\nmodel = tf.estimator.LinearRegressor(\n feature_columns = feature_cols, model_dir = OUTDIR)\nmodel.train(input_fn = get_train(), steps = 100); # TODO: change the name of input_fn as needed",
"<h3> Evaluate model </h3>\n\nAs before, evaluate on the validation data. We'll do the third refactoring (to move the evaluation into the training loop) in the next lab.",
"def print_rmse(model, name, input_fn):\n metrics = model.evaluate(input_fn = input_fn, steps = 1)\n print('RMSE on {} dataset = {}'.format(name, np.sqrt(metrics['average_loss'])))\nprint_rmse(model, 'validation', get_valid())",
"Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jorisvandenbossche/DS-python-data-analysis
|
_solved/pandas_05_groupby_operations.ipynb
|
bsd-3-clause
|
[
"<p><font size=\"6\"><b>06 - Pandas: \"Group by\" operations</b></font></p>\n\n\n© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons",
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn-whitegrid')",
"Some 'theory': the groupby operation (split-apply-combine)",
"df = pd.DataFrame({'key':['A','B','C','A','B','C','A','B','C'],\n 'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})\ndf",
"Recap: aggregating functions\nWhen analyzing data, you often calculate summary statistics (aggregations like the mean, max, ...). As we have seen before, we can easily calculate such a statistic for a Series or column using one of the many available methods. For example:",
"df['data'].sum()",
"However, in many cases your data has certain groups in it, and in that case, you may want to calculate this statistic for each of the groups.\nFor example, in the above dataframe df, there is a column 'key' which has three possible values: 'A', 'B' and 'C'. When we want to calculate the sum for each of those groups, we could do the following:",
"for key in ['A', 'B', 'C']:\n print(key, df[df['key'] == key]['data'].sum())",
"This becomes very verbose when having multiple groups. You could make the above a bit easier by looping over the different values, but still, it is not very convenient to work with.\nWhat we did above, applying a function on different groups, is a \"groupby operation\", and pandas provides some convenient functionality for this.\nGroupby: applying functions per group\nThe \"group by\" concept: we want to apply the same function on subsets of your dataframe, based on some key to split the dataframe in subsets\nThis operation is also referred to as the \"split-apply-combine\" operation, involving the following steps:\n\nSplitting the data into groups based on some criteria\nApplying a function to each group independently\nCombining the results into a data structure\n\n<img src=\"../img/pandas/splitApplyCombine.png\">\nSimilar to SQL GROUP BY\nInstead of doing the manual filtering as above\ndf[df['key'] == \"A\"].sum()\ndf[df['key'] == \"B\"].sum()\n...\n\npandas provides the groupby method to do exactly this:",
"df.groupby('key').sum()\n\ndf.groupby('key').aggregate(np.sum) # 'sum'",
"And many more methods are available.",
"df.groupby('key')['data'].sum()",
"Application of the groupby concept on the titanic data\nWe go back to the titanic passengers survival data:",
"df = pd.read_csv(\"data/titanic.csv\")\n\ndf.head()",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Using groupby(), calculate the average age for each sex.</li>\n</ul>\n</div>",
"df.groupby('Sex')['Age'].mean()",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Calculate the average survival ratio for all passengers.</li>\n</ul>\n</div>",
"# df['Survived'].sum() / len(df['Survived'])\ndf['Survived'].mean()",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Calculate this survival ratio for all passengers younger than 25 (remember: filtering/boolean indexing).</li>\n</ul>\n</div>",
"df25 = df[df['Age'] < 25]\ndf25['Survived'].mean()",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>What is the difference in the survival ratio between the sexes?</li>\n</ul>\n</div>",
"df.groupby('Sex')['Survived'].mean()",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Make a bar plot of the survival ratio for the different classes ('Pclass' column).</li>\n</ul>\n</div>",
"df.groupby('Pclass')['Survived'].mean().plot(kind='bar') #and what if you would compare the total number of survivors?",
"<div class=\"alert alert-success\">\n\n**EXERCISE**:\n\n* Make a bar plot to visualize the average Fare payed by people depending on their age. The age column is divided is separate classes using the `pd.cut()` function as provided below.\n\n</div>",
"df['AgeClass'] = pd.cut(df['Age'], bins=np.arange(0,90,10))\n\ndf.groupby('AgeClass')['Fare'].mean().plot(kind='bar', rot=0)",
"If you are ready, more groupby exercises can be found below.\nSome more theory\nSpecifying the grouper\nIn the previous example and exercises, we always grouped by a single column by passing its name. But, a column name is not the only value you can pass as the grouper in df.groupby(grouper). Other possibilities for grouper are:\n\na list of strings (to group by multiple columns)\na Series (similar to a string indicating a column in df) or array\nfunction (to be applied on the index)\nlevels=[], names of levels in a MultiIndex",
"df.groupby(df['Age'] < 18)['Survived'].mean()\n\ndf.groupby(['Pclass', 'Sex'])['Survived'].mean()",
"The size of groups - value counts\nOften you want to know how many elements there are in a certain group (or in other words: the number of occurences of the different values from a column).\nTo get the size of the groups, we can use size:",
"df.groupby('Pclass').size()\n\ndf.groupby('Embarked').size()",
"Another way to obtain such counts, is to use the Series value_counts method:",
"df['Embarked'].value_counts()",
"[OPTIONAL] Additional exercises using the movie data\nThese exercises are based on the PyCon tutorial of Brandon Rhodes (so credit to him!) and the datasets he prepared for that. You can download these data from here: titles.csv and cast.csv and put them in the /notebooks/data folder.\ncast dataset: different roles played by actors/actresses in films\n\ntitle: title of the movie\nyear: year it was released\nname: name of the actor/actress\ntype: actor/actress\nn: the order of the role (n=1: leading role)",
"cast = pd.read_csv('data/cast.csv')\ncast.head()",
"titles dataset:\n\ntitle: title of the movie\nyear: year of release",
"titles = pd.read_csv('data/titles.csv')\ntitles.head()",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Using `groupby()`, plot the number of films that have been released each decade in the history of cinema.</li>\n</ul>\n</div>",
"titles['decade'] = titles['year'] // 10 * 10\n\ntitles.groupby('decade').size().plot(kind='bar', color='green')",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Use `groupby()` to plot the number of 'Hamlet' movies made each decade.</li>\n</ul>\n</div>",
"titles['decade'] = titles['year'] // 10 * 10\nhamlet = titles[titles['title'] == 'Hamlet']\nhamlet.groupby('decade').size().plot(kind='bar', color=\"orange\")",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>For each decade, plot all movies of which the title contains \"Hamlet\".</li>\n</ul>\n</div>",
"titles['decade'] = titles['year'] // 10 * 10\nhamlet = titles[titles['title'].str.contains('Hamlet')]\nhamlet.groupby('decade').size().plot(kind='bar', color=\"lightblue\")",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>List the 10 actors/actresses that have the most leading roles (n=1) since the 1990's.</li>\n</ul>\n</div>",
"cast1990 = cast[cast['year'] >= 1990]\ncast1990 = cast1990[cast1990['n'] == 1]\ncast1990.groupby('name').size().nlargest(10)\n\ncast1990['name'].value_counts().head(10)",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>In a previous exercise, the number of 'Hamlet' films released each decade was checked. Not all titles are exactly called 'Hamlet'. Give an overview of the titles that contain 'Hamlet' and an overview of the titles that start with 'Hamlet', each time providing the amount of occurrences in the data set for each of the movies</li>\n</ul>\n</div>",
"hamlets = titles[titles['title'].str.contains('Hamlet')]\nhamlets['title'].value_counts()\n\nhamlets = titles[titles['title'].str.startswith('Hamlet')]\nhamlets['title'].value_counts()",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>List the 10 movie titles with the longest name.</li>\n</ul>\n</div>",
"title_longest = titles['title'].str.len().nlargest(10)\ntitle_longest\n\npd.options.display.max_colwidth = 210\ntitles.loc[title_longest.index]",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>How many leading (n=1) roles were available to actors, and how many to actresses, in each year of the 1950s?</li>\n</ul>\n</div>",
"cast1950 = cast[cast['year'] // 10 == 195]\ncast1950 = cast1950[cast1950['n'] == 1]\ncast1950.groupby(['year', 'type']).size()",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>What are the 11 most common character names in movie history?</li>\n</ul>\n</div>",
"cast.character.value_counts().head(11)",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Plot how many roles Brad Pitt has played in each year of his career.</li>\n</ul>\n</div>",
"cast[cast.name == 'Brad Pitt'].year.value_counts().sort_index().plot()",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>What are the 10 most occurring movie titles that start with the words 'The Life'?</li>\n</ul>\n</div>",
"titles[titles['title'].str.startswith('The Life')]['title'].value_counts().head(10)",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Which actors or actresses were most active in the year 2010 (i.e. appeared in the most movies)?</li>\n</ul>\n</div>",
"cast[cast.year == 2010].name.value_counts().head(10)",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Determine how many roles are listed for each of 'The Pink Panther' movies.</li>\n</ul>\n</div>",
"pink = cast[cast['title'] == 'The Pink Panther']\npink.groupby(['year'])[['n']].max()",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li> List, in order by year, each of the movies in which 'Frank Oz' has played more than 1 role.</li>\n</ul>\n</div>",
"oz = cast[cast['name'] == 'Frank Oz']\noz_roles = oz.groupby(['year', 'title']).size()\noz_roles[oz_roles > 1]",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li> List each of the characters that Frank Oz has portrayed at least twice.</li>\n</ul>\n</div>",
"oz = cast[cast['name'] == 'Frank Oz']\noz_roles = oz.groupby(['character']).size()\noz_roles[oz_roles > 1].sort_values()",
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\nAdd a new column to the `cast` DataFrame that indicates the number of roles for each movie. \n\n<details><summary>Hints</summary>\n\n- [Transformation](https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#transformation) returns an object that is indexed the same (same size) as the one being grouped.\n\n</details> \n\n\n</div>",
"cast['n_total'] = cast.groupby(['title', 'year'])['n'].transform('size') # transform will return an element for each row, so the size value is given to the whole group\ncast.head()",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li> Calculate the ratio of leading actor and actress roles to the total number of leading roles per decade. </li>\n</ul><br>\n\n**Tip**: you can do a groupby twice in two steps, first calculating the numbers, and secondly, the ratios.\n</div>",
"leading = cast[cast['n'] == 1]\nsums_decade = leading.groupby([cast['year'] // 10 * 10, 'type']).size()\nsums_decade\n\n#sums_decade.groupby(level='year').transform(lambda x: x / x.sum())\nratios_decade = sums_decade / sums_decade.groupby(level='year').transform('sum')\nratios_decade\n\nratios_decade[:, 'actor'].plot()\nratios_decade[:, 'actress'].plot()",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li> In which years the most films were released?</li>\n</ul><br>\n</div>",
"t = titles\nt.year.value_counts().head(3)",
"<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>How many leading (n=1) roles were available to actors, and how many to actresses, in the 1950s? And in 2000s?</li>\n</ul><br>\n</div>",
"cast1950 = cast[cast['year'] // 10 == 195]\ncast1950 = cast1950[cast1950['n'] == 1]\ncast1950['type'].value_counts()\n\ncast2000 = cast[cast['year'] // 10 == 200]\ncast2000 = cast2000[cast2000['n'] == 1]\ncast2000['type'].value_counts()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
danielbultrini/FXFEL
|
Quick-start plotting .ipynb
|
bsd-3-clause
|
[
"Plotting routines\na selection of plotting routines for easy access",
"import file_processor as fp #contains simple routines for sorting files and making directories\nimport processing_tools as pt #bulk of the processing\nimport int_plot as ip #allows for interactive plots\n",
"Plotting entire directories\nAs it is necessary to often plot many files, a script is included that will go through a directory, make a folder for each file (which ends in .h5, but you may specify other endings by the optional parameter 'file_ending' )\nThis plots all the defaults, both in pandas and bokeh and then moves them in said directory.",
"directory = './example'\nfp.plot_defaults(directory, file_ending='.h5')\n",
"Interactive plots\nTo plot an interactive interface an ordered list of files which follows the path along the simulation must be provided.\nfile_processor has some tools to ease this process. This can plot either any combination of standard distributions or the slice values over a slice.",
"list_of_files = fp.directory_list('./example') #returns a list of files that \n#can be sorted in your preferred method\nfp.sort_nicely(list_of_files) # sorts in a nice way, but you ought to check\n#in this case the files are too different to work, so I provide similar files \n\nlist_of_files = ['./example/noise_10kSI_MASP.h5',\n './example/noise_10kSI_MASP.h5',\n './example/noise_10kSI_MASP.h5']\n\nfrom bokeh.plotting import show\nfrom bokeh.io import output_notebook #To view on a notebook such as this\noutput_notebook() # allows Bokeh to output to the notebook\n\nint_plot = ip.interactive_plot(list_of_files,'z_pos','CoM_y',num_slices=100, undulator_period=0.0275,k_fact=1) \n#what to plot\nshow(int_plot)",
"Quick plotting\nTo quickly plot any combination the pandas inbuit system can help",
"import matplotlib.pyplot as plt\n\ntest = pt.ProcessedData('./example/example.h5',undulator_period=0.00275,num_slices=100)\npanda_data = test.StatsFrame()\nax = panda_data.plot(x='z_pos',y='CoM_y')\npanda_data.plot(ax=ax, x='z_pos',y='std_y',c='b') #first option allows shared axes, one can even mix different runs\n#by plotting another dataset on the same axis\nplt.show()\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
grfiv/predict-blood-donations
|
Ensemble.ipynb
|
mit
|
[
"Ensemble",
"from __future__ import division\nfrom IPython.display import display\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport random, sys, os, re",
"The test set has duplicates so we get the list of IDs in the sample file in order",
"id_list = []\nwith open('../submissions/Submission_Format.csv', 'r') as f:\n lines = f.read().splitlines()\n for line in lines:\n ID,prob = line.split(',')\n if ID == '': continue\n id_list.append(ID)\n\ndef get_filepaths(directory):\n \"\"\"\n This function will generate the file names in a directory \n tree by walking the tree either top-down or bottom-up. For each \n directory in the tree rooted at directory top (including top itself), \n it yields a 3-tuple list (dirpath, dirnames, filenames).\n \"\"\"\n import os\n \n file_paths = [] # List which will store all of the full filepaths.\n\n # Walk the tree.\n for root, directories, files in os.walk(directory):\n for filename in files:\n # Join the two strings in order to form the full filepath.\n filepath = os.path.join(root, filename)\n file_paths.append(filepath) # Add it to the list.\n\n return file_paths ",
"Get the list of submission files\n\n\nremove the example file\n\n\nand all ensembles\n\n\nBEFORE",
"file_list = get_filepaths('../submissions')\nfile_list",
"AFTER",
"# why do it more than once? For some reason it doesn't work if only run once. Who knows?\n# ======================================================================================\nfor i in range(3):\n for file_name in file_list:\n if 'Format' in file_name: file_list.remove(file_name)\n if 'Ensemble' in file_name: file_list.remove(file_name)\n if 'ensemble' in file_name: file_list.remove(file_name)\n\nfile_list.sort(key=lambda x: x[26:32])\n\nfrom copy import copy\nfile_list_all = copy(file_list)\n\nfile_list",
"---------------------------------------------\nEnsemble ALL the submissions\n---------------------------------------------\nFind the average probability for all IDs",
"from collections import defaultdict\n\naggregates = defaultdict(list)\naverages = defaultdict(list)\n\n\n# 1. collect the probabilities for each ID from all the submission files\n# ======================================================================\nfor file_name in file_list:\n with open(file_name, 'r') as f:\n lines = f.read().splitlines()\n for line in lines:\n ID,prob = line.split(',')\n if ID == '': continue\n aggregates[ID].append(prob)\n \n \n \n# 2. find the average of all the probabilities for each ID\n# ========================================================\naverages.update((ID, np.mean(map(float, probs))) for ID, probs in aggregates.items())\n\naggregates['1'],averages['1']\n\nlen(aggregates),len(averages)",
"Create a submission file of the ensemble of averages",
"# f = open(\"../submissions/submission_EnsembleOfAveragesALL.csv\", \"w\")\n\n# f.write(\",Made Donation in March 2007\\n\")\n# for ID in id_list:\n# f.write(\"{},{}\\n\".format(ID, averages[ID]))\n \n# f.close()",
"---------------------------------------------------------------\nEnsemble the submissions with high scores\n---------------------------------------------------------------\nBEFORE",
"file_list",
"AFTER",
"# why do it more than once? For some reason it doesn't work if only run once. Who knows?\n# ======================================================================================\nfor _ in range(2):\n for _ in range(4):\n for file_name in file_list:\n if 'Format' in file_name: file_list.remove(file_name)\n if 'Ensemble' in file_name: file_list.remove(file_name)\n\n # scores of 0.4... or 0.3... are good\n # files with SEED... are good-scoring models that were re-run with different random seeds\n if ('bagged_nolearn' not in file_name): \n file_list.remove(file_name)\n \nfile_list\n\nfrom collections import defaultdict\n\naggregates = defaultdict(list)\naverages = defaultdict(list)\n\n\n# 1. collect the probabilities for each ID from all the submission files\n# ======================================================================\nfor file_name in file_list:\n with open(file_name, 'r') as f:\n lines = f.read().splitlines()\n for line in lines:\n ID,prob = line.split(',')\n if ID == '': continue\n aggregates[ID].append(prob)\n \n \n \n# 2. find the average of all the probabilities for each ID\n# ========================================================\naverages.update((ID, np.mean(map(float, probs))) for ID, probs in aggregates.items())\n\naggregates['1'],averages['1']\n\nlen(aggregates),len(averages)\n\nf = open(\"../submissions/submission_EnsembleOfAveragesBEST_SEED.csv\", \"w\")\n\nf.write(\",Made Donation in March 2007\\n\")\nfor ID in id_list:\n f.write(\"{},{}\\n\".format(ID, averages[ID]))\n \nf.close()",
"---------------------------------------------------------------\nEnsemble the least-correlated submissions\n---------------------------------------------------------------\nCreate a dataframe with one column per submission",
"from os.path import split\ncorr_table = pd.read_csv(file_list_all[0],names=['id',split(file_list_all[0])[1][11:-4]],header=0,index_col=0)\ncorr_table.head()\n\nfor file_path in file_list_all[1:]:\n temp = pd.read_csv(file_path,names=['id',split(file_path)[1][11:-4]],header=0,index_col=0)\n corr_table[temp.columns[0]] = temp[[temp.columns[0]]]\ncorr_table.head()",
"Display the correlations among the submissions",
"import seaborn as sns\n\n# Compute the correlation matrix\ncorr_matrix = corr_table.corr()\n\n# Generate a mask for the upper triangle\nmask = np.zeros_like(corr_matrix, dtype=np.bool)\nmask[np.triu_indices_from(mask)] = True\n\n# Set up the matplotlib figure\nf, ax = plt.subplots(figsize=(11, 9))\n\n# Generate a custom diverging colormap\ncmap = sns.diverging_palette(220, 10, as_cmap=True)\n\n# Draw the heatmap with the mask and correct aspect ratio\nsns.heatmap(corr_matrix, mask=mask, cmap=cmap, vmax=.9,\n square=True, xticklabels=4, yticklabels=3,\n linewidths=.5, cbar_kws={\"shrink\": .5}, ax=ax)\nplt.show()",
"Find the least-correlated pairs of submissions",
"corr_threshold = 0.20\n\nindices = np.where(corr_matrix < corr_threshold)\nindices = [(corr_matrix.index[x], corr_matrix.columns[y], corr_matrix.ix[x,y]) for x, y in zip(*indices)\n if x != y and x < y]\n\nfrom operator import itemgetter\nindices.sort(key=itemgetter(2))\nlen(indices),indices\n\nleast_corr = set(set(['../submissions/submission_'+a+'.csv' for a,b,c in indices]).\\\n union(set(['../submissions/submission_'+b+'.csv' for a,b,c in indices])))\n\nlen(least_corr), least_corr\n\nfrom collections import defaultdict\n\naggregates = defaultdict(list)\naverages = defaultdict(list)\n\n\n# 1. collect the probabilities for each ID from all the submission files\n# ======================================================================\nfor file_name in least_corr:\n with open(file_name, 'r') as f:\n lines = f.read().splitlines()\n for line in lines:\n ID,prob = line.split(',')\n if ID == '': continue\n aggregates[ID].append(prob)\n \n \n \n# 2. find the average of all the probabilities for each ID\n# ========================================================\naverages.update((ID, np.mean(map(float, probs))) for ID, probs in aggregates.items())\n\naggregates['1'],averages['1']\n\n# f = open(\"../submissions/submission_EnsembleOfAveragesLeastCorr.csv\", \"w\")\n\n# f.write(\",Made Donation in March 2007\\n\")\n# for ID in id_list:\n# f.write(\"{},{}\\n\".format(ID, averages[ID]))\n \n# f.close()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/noaa-gfdl/cmip6/models/sandbox-3/seaice.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: NOAA-GFDL\nSource ID: SANDBOX-3\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-20 15:02:35\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'noaa-gfdl', 'sandbox-3', 'seaice')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Model\n2. Key Properties --> Variables\n3. Key Properties --> Seawater Properties\n4. Key Properties --> Resolution\n5. Key Properties --> Tuning Applied\n6. Key Properties --> Key Parameter Values\n7. Key Properties --> Assumptions\n8. Key Properties --> Conservation\n9. Grid --> Discretisation --> Horizontal\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Seaice Categories\n12. Grid --> Snow On Seaice\n13. Dynamics\n14. Thermodynamics --> Energy\n15. Thermodynamics --> Mass\n16. Thermodynamics --> Salt\n17. Thermodynamics --> Salt --> Mass Transport\n18. Thermodynamics --> Salt --> Thermodynamics\n19. Thermodynamics --> Ice Thickness Distribution\n20. Thermodynamics --> Ice Floe Size Distribution\n21. Thermodynamics --> Melt Ponds\n22. Thermodynamics --> Snow Processes\n23. Radiative Processes \n1. Key Properties --> Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of sea ice model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the sea ice component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3. Key Properties --> Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Ocean Freezing Point Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Target\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Simulations\nIs Required: TRUE Type: STRING Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Metrics Used\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any observed metrics used in tuning model/parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.5. Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nWhich variables were changed during the tuning process?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nWhat values were specificed for the following parameters if used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Additional Parameters\nIs Required: FALSE Type: STRING Cardinality: 0.N\nIf you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. On Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Missing Processes\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nProvide a general description of conservation methodology.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Properties\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Was Flux Correction Used\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes conservation involved flux correction?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Grid --> Discretisation --> Horizontal\nSea ice discretisation in the horizontal\n9.1. Grid\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nGrid on which sea ice is horizontal discretised?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the type of sea ice grid?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the advection scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.4. Thermodynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.5. Dynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.6. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional horizontal discretisation details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Number Of Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using multi-layers specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"10.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional vertical grid details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Grid --> Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. Has Mulitple Categories\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11.2. Number Of Categories\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using sea ice categories specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Category Limits\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Other\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Grid --> Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow on ice represented in this model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Number Of Snow Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels of snow on ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.3. Snow Fraction\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.4. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional details related to snow on ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Transport In Thickness Space\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Ice Strength Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhich method of sea ice strength formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Redistribution\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Rheology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nRheology, what is the ice deformation formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Thermodynamics --> Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the energy formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Thermal Conductivity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of thermal conductivity is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of heat diffusion?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.4. Basal Heat Flux\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.5. Fixed Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.6. Heat Content Of Precipitation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.7. Precipitation Effects On Salinity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Thermodynamics --> Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Ice Vertical Growth And Melt\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Ice Lateral Melting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice lateral melting?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Ice Surface Sublimation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.5. Frazil Ice\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of frazil ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Thermodynamics --> Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17. Thermodynamics --> Salt --> Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Thermodynamics --> Salt --> Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Thermodynamics --> Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice thickness distribution represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Thermodynamics --> Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice floe-size represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Thermodynamics --> Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre melt ponds included in the sea ice model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21.2. Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat method of melt pond formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.3. Impacts\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat do melt ponds have an impact on?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Thermodynamics --> Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.2. Snow Aging Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Has Snow Ice Formation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.4. Snow Ice Formation Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow ice formation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.5. Redistribution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat is the impact of ridging on snow cover?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.6. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used to handle surface albedo.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Ice Radiation Transmission\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
willettk/insight
|
notebooks/Probability tutorial.ipynb
|
apache-2.0
|
[
"Probability tutorial\nProblems by Peter Komar\n18 Jul 2016\nSample problems from Peter Komar; after trying to analytically solve everything, Monte Carlo and see if I'm right.",
"def compare(analytic,N,f):\n errval = err(f,N)\n successes = sum(f)\n print \"Analytic prediction: {:.0f}%.\".format(analytic*100.)\n print \"Monte Carlo: {:.0f} +- {:.0f}%.\".format(successes/float(N)*100.,errval*100.)\n\ndef err(fx,N):\n # http://www.northeastern.edu/afeiguin/phys5870/phys5870/node71.html\n f2 = [x*x for x in fx]\n return np.sqrt((1./N * sum(f2) - (1./N * sum(fx))**2)/float(N))",
"Forward probability\nQuestion 1\nQ1: What is the probability of light rain on both days?",
"import numpy as np\nfrom numpy.random import binomial\n\n# Default is 1000 trials each\n\nN = 1000\n\np_rain_sat = 0.5\np_rain_sun = 0.2\n\np_light_sat = 0.9\np_heavy_sat = 0.1\n\np_light_sun = 1.0\np_heavy_sun = 0.0\n\nf = []\nfor i in range(N):\n # Light rain on Saturday?\n rain_sat = binomial(1,p_rain_sat)\n if rain_sat:\n light_sat = binomial(1,p_light_sat)\n else:\n light_sat = 0\n # Light rain on Sunday?\n rain_sun = binomial(1,p_rain_sun)\n if rain_sun:\n light_sun = binomial(1,p_light_sun)\n else:\n light_sun = 0\n if light_sat and light_sun:\n f.append(1)\n else:\n f.append(0)\n\ncompare(9/100.,N,f)",
"Q2: What is the probability of rain during the weekend?",
"f = []\nfor i in range(N):\n # Light rain on either day?\n rain_sat = binomial(1,p_rain_sat)\n rain_sun = binomial(1,p_rain_sun)\n if rain_sat or rain_sun:\n f.append(1)\n else:\n f.append(0)\n\ncompare(60/100.,N,f)",
"Question 2\nQ1: With what probability are the two drawn pieces of candy different?",
"from random import randint\n\nf = []\nfor i in range(N):\n # Draw candy from bag 1\n r1 = randint(0,6)\n if r1 < 3:\n candy1 = \"taffy\"\n else:\n candy1 = \"caramel\"\n \n # Draw candy from bag 2\n r2 = randint(0,5)\n if r2 == 0:\n candy2 = \"taffy\"\n else:\n candy2 = \"caramel\"\n \n if candy1 is not candy2:\n f.append(1)\n else:\n f.append(0)\n\ncompare(19/42.,N,f)",
"Q2: With what probability are the two drawn pieces of candy different if they are drawn from the same (but randomly chosen) bag?",
"f = []\nfor i in range(N):\n # Choose the bag\n \n bag = binomial(1,0.5)\n if bag:\n # Bag 1\n \n # First draw\n r1 = randint(0,6)\n if r1 < 3:\n candy1 = \"taffy\"\n else:\n candy1 = \"caramel\"\n \n # Second draw\n r2 = randint(0,5)\n if candy1 is \"taffy\":\n if r2 < 2:\n candy2 = \"taffy\"\n else:\n candy2 = \"caramel\"\n else:\n if r2 < 3:\n candy2 = \"taffy\"\n else:\n candy2 = \"caramel\"\n \n else:\n # Bag 2\n\n # First draw\n r1 = randint(0,5)\n if r1 < 2:\n candy1 = \"taffy\"\n else:\n candy1 = \"caramel\"\n \n # Second draw\n r2 = randint(0,4)\n if candy1 is \"caramel\":\n if r2 < 4:\n candy2 = \"caramel\"\n else:\n candy2 = \"taffy\"\n else:\n candy2 = \"caramel\"\n \n if candy1 is not candy2:\n f.append(1)\n else:\n f.append(0)\n\ncompare(23/42.,N,f)",
"Question 3 \nQ: What is the expectation value and standard deviation of the reward?",
"p_H = 0.5\n\nf = []\nfor i in range(N):\n # Flip coin 1\n c1 = binomial(1,p_H) \n # Flip coin 2\n c2 = binomial(1,p_H) \n # Flip coin 3\n c3 = binomial(1,p_H)\n \n total_heads = c1 + c2 + c3\n # Three heads\n if total_heads == 3:\n reward = 100\n if total_heads == 2:\n reward = 40\n if total_heads == 1:\n reward = 0\n if total_heads == 0:\n reward = -200\n f.append(reward)\n\nprint \"Analytic: {:.2f} +- {:.0f}\".format(20/8.,82)\nprint \"Monte Carlo: {:.2f} +- {:.0f}\".format(np.mean(f),np.std(f))",
"Question 4\nQ1: What is the probability that Potter, Granger, and Weasley are standing next to each other?",
"n = 10\nf = []\nfor i in range(N):\n line = range(n)\n np.random.shuffle(line)\n \n # Assume Potter, Granger, Weasley correspond to 0, 1, and 2\n \n indices = [line.index(person) for person in (0,1,2)]\n if max(indices) - min(indices) == 2:\n f.append(1)\n\ncompare(1/15.,N,f)",
"Q2: What is the probability that Potter, Granger, and Weasley are standing next to each other if the line is a circle?",
"f = []\nfor i in range(N):\n line = range(n)\n np.random.shuffle(line)\n \n # Assume Potter, Granger, Weasley correspond to 0, 1, and 2\n \n indices = [line.index(person) for person in (0,1,2)]\n if max(indices) - min(indices) == 2:\n f.append(1)\n else:\n # Shift line halfway around and check again\n line = list(np.roll(line,n//2))\n indices = [line.index(person) for person in (0,1,2)]\n if max(indices) - min(indices) == 2:\n f.append(1)\n \ncompare(1/12.,N,f)",
"Question 5\nQ: What is the probability that c dances with gamma?",
"f = []\nfor i in range(N):\n guys = ['a','b','c','d','e']\n gals = ['alpha','beta','gamma','delta','epsilon']\n np.random.shuffle(guys)\n np.random.shuffle(gals)\n if guys.index('c') == gals.index('gamma'):\n f.append(1)\n \ncompare(1./5,N,f)",
"Question 6\nQ: What is the probability that Derrick and Gaurav end up in the same group?",
"f = []\nfor i in range(N):\n fellows = range(21)\n np.random.shuffle(fellows)\n # Derrick = 0, Gaurav = 1\n group_derrick = fellows.index(0)//7\n group_gaurav = fellows.index(1)//7\n if group_derrick == group_gaurav:\n f.append(1)\n \ncompare(0.30,N,f)",
"Question 7\nQ: What is the probability that stocking A gets no candy?",
"f = []\nfor i in range(N):\n a,b,c,d = 0,0,0,0\n for candy in range(10):\n selection = randint(0,3)\n if selection == 0:\n a += 1 \n if selection == 1:\n b += 1 \n if selection == 2:\n c += 1 \n if selection == 3:\n d += 1\n \n if a == 0:\n f.append(1)\n \ncompare(0.75**10,N,f)",
"Question 8\nQ1: What is the probability that we get two 1s in the first twenty throws?",
"n = 20\nf = []\nfor i in range(N):\n throws = np.random.randint(1,11,n)\n counts = np.bincount(throws)\n if counts[1] == 2:\n f.append(1)\n \nanalytic = 10**(np.log10(190) + 18*np.log10(9) - 20) \ncompare(analytic,N,f)",
"Q2: What is the probability that we get the first 1 in the tenth throw?",
"n = 10\nf = []\nfor i in range(N):\n throws = np.random.randint(1,11,n)\n counts = np.bincount(throws)\n if counts[1] == 1 and throws[-1] == 1:\n f.append(1)\n \nanalytic = 0.9**9 * 0.1\ncompare(analytic,N,f)",
"Q3: What is the probability that we get the third 1 on the thirtieth throw?",
"n = 30\nf = []\nfor i in range(N):\n throws = np.random.randint(1,11,n)\n counts = np.bincount(throws)\n if counts[1] == 3 and throws[-1] == 1:\n f.append(1)\n \nanalytic = (29*28/2. * 0.9**27 * 0.1**2) * 0.1\ncompare(analytic,N,f)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dynaryu/rmtk
|
rmtk/vulnerability/model_generator/point_dispersion/point_dispersion.ipynb
|
agpl-3.0
|
[
"Generation of capacity curves using point dispersion\nThis notebook can be used to generate many synthethic capacity curves (spectral acceleration versus spectrum displacement), starting from a median pushover curve and dispersion levels for a number of specific points on the capacity curve. The figure below presents a number of capacity curves generated from a median curve representative of a given building typology.\n<img src=\"../../../../figures/capacity_curves_dispersion.png\" width=\"350\" align=\"middle\">\nNote: To run the code in a cell:\n\nClick on the cell to select it.\nPress SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.",
"import point_dispersion as pd\nfrom rmtk.vulnerability.common import utils\n%matplotlib inline ",
"Capacity curve generator\n\nThe various points of the capacity curve should be defined in the vectors Sa (spectral acceleration) and Sd (spectral displacement).\nFor each of these points, the variability is specified in the form of coefficients of variation, using the vectors Sa_cov and Sd_cov.\nThe distribution can be set to \"normal\" or \"lognormal\" and controls the distribution used for the sampling.\nThe parameters Sa_corr, Sd_corr, and Sa_Sd_corr control the correlation used for sampling points on. A correlation factor equal to 1.0 implies full correlation, whereas a value equal to 0.0 will lead to an independent sampling process.\nThe parameter truncation_level specifies the number of standard deviations on either side of the mean of the distribution that will be considered during the sampling process.\nThe number of capacity curves that will be generated is specified by the parameter no_capacity_curves.",
"Sa_means = [0.40, 0.40, 0.40, 0.40]\nSa_covs = [0.20, 0.20, 0.20, 0.20]\nSd_means = [0.03, 0.05, 0.08, 0.1]\nSd_covs = [0.20, 0.20, 0.20, 0.20]\ndistribution = \"normal\"\nSa_corr = 0.99999\nSd_corr = 0.99999\nSa_Sd_corr = 0.5\ntruncation_level = 1\nno_capacity_curves = 50\ncapacity_curves = pd.generate_capacity_curves(Sa_means, Sa_covs, Sd_means, Sd_covs,\n distribution, no_capacity_curves, \n Sa_corr, Sd_corr, Sa_Sd_corr, truncation_level)\nutils.plot_capacity_curves(capacity_curves)",
"Include additional information\nAdditional information can be added to the capacity curves generated using the above method. \nAfter generating the required capacity curves by running the cells above, you can add the following information to the capacity curves:\n\nThe parameter gamma defines the modal participation factor\nThe parameter height defines the height of the structure\nThe parameter elastic_period defines the elastic period of the first mode of vibration\nThe parameter yielding_point_index defines the yielding point",
"gamma = 1.2\nheight = 3.0\nelastic_period = 0.6\nyielding_point_index = 1\ncapacity_curves = utils.add_information(capacity_curves, 'gamma', 'value', gamma)\ncapacity_curves = utils.add_information(capacity_curves, 'heights', 'value', height)\ncapacity_curves = utils.add_information(capacity_curves, 'periods', 'calculate', 1)\ncapacity_curves = utils.add_information(capacity_curves, 'yielding point', 'point', yielding_point_index)",
"Save capacity curves\nPlease specify below the path for the output file to save the capacity curves:",
"output_file = \"../../../../../rmtk_data/capacity_curves_point.csv\"\nutils.save_SdSa_capacity_curves(capacity_curves, output_file)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dsiufl/2015-Fall-Hadoop
|
notes/4-pyspark-example.ipynb
|
mit
|
[
"Example: Use pyspark to process GDELT event data\nGDELT: Global Database of Events, Language, and Tone \nhttp://www.gdeltproject.org/ \nColumn Header: http://gdeltproject.org/data/lookups/CSV.header.historical.txt\nCountryCode: http://gdeltproject.org/data/lookups/CAMEO.country.txt\nMore doc: http://gdeltproject.org/data.html#rawdatafiles\nPrepare pyspark environment",
"import findspark\nimport os\nfindspark.init('/home/ubuntu/shortcourse/spark-1.5.1-bin-hadoop2.6')\n\nfrom pyspark import SparkContext, SparkConf\nconf = SparkConf().setAppName(\"pyspark-example\").setMaster(\"local[2]\")\nsc = SparkContext(conf=conf)",
"First start by seeing that there does exist a SparkContext object in the sc variable:",
"print sc",
"Now let's load an RDD with some interesting data. We have the GDELT event data set on our VM as a tab-delimited text file. (Due to VM storage and compute power limitation, we only choose year 2001.)\nWe use a local file this time, the path is: '/home/ubuntu/shortcourse/data/gdelt'.\nPlease read the file, and map each line to a single word list.\nLet's see what an object in the RDD looks like.\nTake the first element from the created RDD.\nLet's count the number of events we have.\nWe should see about 5 million events at our disposal.\nThe GDELT event data set collects geopolitical events that occur around the world. Each event is tagged with a Goldstein scale value that measures the potential for the event to destabilize the country. Let's compute and plot a histogram of the Goldstein scale values across all the events in the database. The Goldstein scale value is present in the 31st field.\nFirst, let's make sure that plotting images are set to be displayed inline (see the IPython docs):",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np",
"Now we'll just confirm that all the Goldstein values are indeed between -10 and 10.\nCount and print out the max and min value of the 31st field.\nHere we compute the histogram. And print it out.\nPlot the histogram.\nWe can also plot the number of events each day for the 10 countries that have the most events in the second half of year 2001.\nFirst we can see the number of unique countries that are available. Note that we filter out events that don't list a country code.\nShow the distict country codes, the 8th field.\nHere we convert each event into counts. Aggregate by country and day, for all events in the second half of 2001. \nFirst, filter the raw events. Keep the events for the second half of 2001. Also filter out events that don't list a country code.\nCount how many qualified events we have.\nTransform the events into key-value pair, key is (countrycode (8th), date (2nd)), value is event count.\n((code, date), count)\n\nShow the first five.",
"# some help function to convert the date to a float value indicates the time within the year in seconds.\n\nfrom dateutil.parser import parse as parse_date\nepoch = parse_date('20010101')\ndef td2s(td):\n return (td.microseconds + (td.seconds + td.days * 24 * 3600) * 1000000) / 1e6\ndef day2unix(day):\n return td2s(parse_date(day) - epoch)",
"Aggregate the events by country and transform the country_day_counts to (country, time, counts), where time and counts can be later used for drawing. Note the time and its corresponding count should be sorted according to time.\nShow the first item.\nPlot the figure, x axis is the time and y axis is the event count. Plot for the 10 countries with most events.\nWhat's the big spike for the line above?\nTry to see what's going on use reduce and max.\nLooks like it was the day after September 11th.",
"# stop the spark context\nsc.stop()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
SteveDiamond/cvxpy
|
examples/machine_learning/svm.ipynb
|
gpl-3.0
|
[
"Support vector machine classifier with $\\ell_1$-regularization\nIn this example we use CVXPY to train a SVM classifier with $\\ell_1$-regularization.\nWe are given data $(x_i,y_i)$, $i=1,\\ldots, m$. The $x_i \\in {\\bf R}^n$ are feature vectors, while the $y_i \\in {\\pm 1}$ are associated boolean outcomes.\nOur goal is to construct a good linear classifier $\\hat y = {\\rm sign}(\\beta^T x - v)$.\nWe find the parameters $\\beta,v$ by minimizing the (convex) function\n$$\nf(\\beta,v) = (1/m) \\sum_i \\left(1 - y_i ( \\beta^T x_i-v) \\right)_+ + \\lambda\n\\| \\beta\\|_1\n$$\nThe first term is the average hinge loss. The second term shrinks the coefficients in $\\beta$ and encourages sparsity.\nThe scalar $\\lambda \\geq 0$ is a (regularization) parameter.\nMinimizing $f(\\beta,v)$ simultaneously selects features and fits the classifier.\nExample\nIn the following code we generate data with $n=20$ features by randomly choosing $x_i$ and a sparse $\\beta_{\\mathrm{true}} \\in {\\bf R}^n$.\nWe then set $y_i = {\\rm sign}(\\beta_{\\mathrm{true}}^T x_i -v_{\\mathrm{true}} - z_i)$, where the $z_i$ are i.i.d. normal random variables.\nWe divide the data into training and test sets with $m=1000$ examples each.",
"# Generate data for SVM classifier with L1 regularization.\nfrom __future__ import division\nimport numpy as np\nnp.random.seed(1)\nn = 20\nm = 1000\nTEST = m\nDENSITY = 0.2\nbeta_true = np.random.randn(n,1)\nidxs = np.random.choice(range(n), int((1-DENSITY)*n), replace=False)\nfor idx in idxs:\n beta_true[idx] = 0\noffset = 0\nsigma = 45\nX = np.random.normal(0, 5, size=(m,n))\nY = np.sign(X.dot(beta_true) + offset + np.random.normal(0,sigma,size=(m,1)))\nX_test = np.random.normal(0, 5, size=(TEST,n))\nY_test = np.sign(X_test.dot(beta_true) + offset + np.random.normal(0,sigma,size=(TEST,1)))",
"We next formulate the optimization problem using CVXPY.",
"# Form SVM with L1 regularization problem.\nimport cvxpy as cp\nbeta = cp.Variable((n,1))\nv = cp.Variable()\nloss = cp.sum(cp.pos(1 - cp.multiply(Y, X*beta - v)))\nreg = cp.norm(beta, 1)\nlambd = cp.Parameter(nonneg=True)\nprob = cp.Problem(cp.Minimize(loss/m + lambd*reg))",
"We solve the optimization problem for a range of $\\lambda$ to compute a trade-off curve.\nWe then plot the train and test error over the trade-off curve.\nA reasonable choice of $\\lambda$ is the value that minimizes the test error.",
"# Compute a trade-off curve and record train and test error.\nTRIALS = 100\ntrain_error = np.zeros(TRIALS)\ntest_error = np.zeros(TRIALS)\nlambda_vals = np.logspace(-2, 0, TRIALS)\nbeta_vals = []\nfor i in range(TRIALS):\n lambd.value = lambda_vals[i]\n prob.solve()\n train_error[i] = (np.sign(X.dot(beta_true) + offset) != np.sign(X.dot(beta.value) - v.value)).sum()/m\n test_error[i] = (np.sign(X_test.dot(beta_true) + offset) != np.sign(X_test.dot(beta.value) - v.value)).sum()/TEST\n beta_vals.append(beta.value)\n\n# Plot the train and test error over the trade-off curve.\nimport matplotlib.pyplot as plt\n%matplotlib inline\n%config InlineBackend.figure_format = 'svg'\n\nplt.plot(lambda_vals, train_error, label=\"Train error\")\nplt.plot(lambda_vals, test_error, label=\"Test error\")\nplt.xscale('log')\nplt.legend(loc='upper left')\nplt.xlabel(r\"$\\lambda$\", fontsize=16)\nplt.show()",
"We also plot the regularization path, or the $\\beta_i$ versus $\\lambda$. Notice that the $\\beta_i$ do not necessarily decrease monotonically as $\\lambda$ increases.\n4 features remain non-zero longer for larger $\\lambda$ than the rest, which suggests that these features are the most important. In fact $\\beta_{\\mathrm{true}}$ had 4 non-zero values.",
"# Plot the regularization path for beta.\nfor i in range(n):\n plt.plot(lambda_vals, [wi[i,0] for wi in beta_vals])\nplt.xlabel(r\"$\\lambda$\", fontsize=16)\nplt.xscale(\"log\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
John-Keating/ThinkStats2
|
code/chap04soln.ipynb
|
gpl-3.0
|
[
"Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br>\nAllen Downey\nRead the pregnancy file.",
"%matplotlib inline\n\nimport nsfg\npreg = nsfg.ReadFemPreg()",
"Select live births, then make a CDF of <tt>totalwgt_lb</tt>.",
"import thinkstats2\nlive = preg[preg.outcome == 1]\nfirsts = live[live.birthord == 1]\nothers = live[live.birthord != 1]\ncdf = thinkstats2.Cdf(live.totalwgt_lb)",
"Display the CDF.",
"import thinkplot\nthinkplot.Cdf(cdf, label='totalwgt_lb')\nthinkplot.Show(loc='lower right')",
"Find out how much you weighed at birth, if you can, and compute CDF(x).",
"cdf.Prob(8.4)",
"If you are a first child, look up your birthweight in the CDF of first children; otherwise use the CDF of other children.",
"other_cdf = thinkstats2.Cdf(others.totalwgt_lb)\nother_cdf.Prob(8.4)",
"Compute the percentile rank of your birthweight",
"cdf.PercentileRank(8.4)",
"Compute the median birth weight by looking up the value associated with p=0.5.",
"cdf.Value(0.5)",
"Compute the interquartile range (IQR) by computing percentiles corresponding to 25 and 75.",
"cdf.Percentile(25), cdf.Percentile(75)",
"Make a random selection from <tt>cdf</tt>.",
"cdf.Random()",
"Draw a random sample from <tt>cdf</tt>.",
"cdf.Sample(10)",
"Draw a random sample from <tt>cdf</tt>, then compute the percentile rank for each value, and plot the distribution of the percentile ranks.",
"t = [cdf.PercentileRank(x) for x in cdf.Sample(1000)]\ncdf2 = thinkstats2.Cdf(t)\nthinkplot.Cdf(cdf2)\nthinkplot.Show(legend=False)",
"Generate 1000 random values using <tt>random.random()</tt> and plot their PMF.",
"import random\nt = [random.random() for _ in range(1000)]\npmf = thinkstats2.Pmf(t)\nthinkplot.Pmf(pmf, linewidth=0.1)\nthinkplot.Show()",
"Assuming that the PMF doesn't work very well, try plotting the CDF instead.",
"cdf = thinkstats2.Cdf(t)\nthinkplot.Cdf(cdf)\nthinkplot.Show()\n\nimport scipy.stats\n\nscipy.stats.norm.cdf(0)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
stable/_downloads/da9f4c4e77e7268fbe1384cfc1b249a5/70_eeg_mri_coords.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"EEG source localization given electrode locations on an MRI\nThis tutorial explains how to compute the forward operator from EEG data when\nthe electrodes are in MRI voxel coordinates.",
"# Authors: Eric Larson <larson.eric.d@gmail.com>\n#\n# License: BSD-3-Clause\n\nimport os.path as op\n\nimport nibabel\nfrom nilearn.plotting import plot_glass_brain\nimport numpy as np\n\nimport mne\nfrom mne.channels import compute_native_head_t, read_custom_montage\nfrom mne.viz import plot_alignment",
"Prerequisites\nFor this we will assume that you have:\n\nraw EEG data\nyour subject's MRI reconstrcted using FreeSurfer\nan appropriate boundary element model (BEM)\nan appropriate source space (src)\nyour EEG electrodes in Freesurfer surface RAS coordinates, stored\n in one of the formats :func:mne.channels.read_custom_montage supports\n\nLet's set the paths to these files for the sample dataset, including\na modified sample MRI showing the electrode locations plus a .elc\nfile corresponding to the points in MRI coords (these were synthesized_,\nand thus are stored as part of the misc dataset).",
"data_path = mne.datasets.sample.data_path()\nsubjects_dir = op.join(data_path, 'subjects')\nfname_raw = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')\nbem_dir = op.join(subjects_dir, 'sample', 'bem')\nfname_bem = op.join(bem_dir, 'sample-5120-5120-5120-bem-sol.fif')\nfname_src = op.join(bem_dir, 'sample-oct-6-src.fif')\n\nmisc_path = mne.datasets.misc.data_path()\nfname_T1_electrodes = op.join(misc_path, 'sample_eeg_mri', 'T1_electrodes.mgz')\nfname_mon = op.join(misc_path, 'sample_eeg_mri', 'sample_mri_montage.elc')",
"Visualizing the MRI\nLet's take our MRI-with-eeg-locations and adjust the affine to put the data\nin MNI space, and plot using :func:nilearn.plotting.plot_glass_brain,\nwhich does a maximum intensity projection (easy to see the fake electrodes).\nThis plotting function requires data to be in MNI space.\nBecause img.affine gives the voxel-to-world (RAS) mapping, if we apply a\nRAS-to-MNI transform to it, it becomes the voxel-to-MNI transformation we\nneed. Thus we create a \"new\" MRI image in MNI coordinates and plot it as:",
"img = nibabel.load(fname_T1_electrodes) # original subject MRI w/EEG\nras_mni_t = mne.transforms.read_ras_mni_t('sample', subjects_dir) # from FS\nmni_affine = np.dot(ras_mni_t['trans'], img.affine) # vox->ras->MNI\nimg_mni = nibabel.Nifti1Image(img.dataobj, mni_affine) # now in MNI coords!\nplot_glass_brain(img_mni, cmap='hot_black_bone', threshold=0., black_bg=True,\n resampling_interpolation='nearest', colorbar=True)",
"Getting our MRI voxel EEG locations to head (and MRI surface RAS) coords\nLet's load our :class:~mne.channels.DigMontage using\n:func:mne.channels.read_custom_montage, making note of the fact that\nwe stored our locations in Freesurfer surface RAS (MRI) coordinates.\n.. collapse:: |question| What if my electrodes are in MRI voxels?\n :class: info\nIf you have voxel coordinates in MRI voxels, you can transform these to\nFreeSurfer surface RAS (called \"mri\" in MNE) coordinates using the\ntransformations that FreeSurfer computes during reconstruction.\n``nibabel`` calls this transformation the ``vox2ras_tkr`` transform\nand operates in millimeters, so we can load it, convert it to meters,\nand then apply it::\n\n >>> pos_vox = ... # loaded from a file somehow\n >>> img = nibabel.load(fname_T1)\n >>> vox2mri_t = img.header.get_vox2ras_tkr() # voxel -> mri trans\n >>> pos_mri = mne.transforms.apply_trans(vox2mri_t, pos_vox)\n >>> pos_mri /= 1000. # mm -> m\n\nYou can also verify that these are correct (or manually convert voxels\nto MRI coords) by looking at the points in Freeview or tkmedit.",
"dig_montage = read_custom_montage(fname_mon, head_size=None, coord_frame='mri')\ndig_montage.plot()",
"We can then get our transformation from the MRI coordinate frame (where our\npoints are defined) to the head coordinate frame from the object.",
"trans = compute_native_head_t(dig_montage)\nprint(trans) # should be mri->head, as the \"native\" space here is MRI",
"Let's apply this digitization to our dataset, and in the process\nautomatically convert our locations to the head coordinate frame, as\nshown by :meth:~mne.io.Raw.plot_sensors.",
"raw = mne.io.read_raw_fif(fname_raw)\nraw.pick_types(meg=False, eeg=True, stim=True, exclude=()).load_data()\nraw.set_montage(dig_montage)\nraw.plot_sensors(show_names=True)",
"Now we can do standard sensor-space operations like make joint plots of\nevoked data.",
"raw.set_eeg_reference(projection=True)\nevents = mne.find_events(raw)\nepochs = mne.Epochs(raw, events)\ncov = mne.compute_covariance(epochs, tmax=0.)\nevoked = epochs['1'].average() # trigger 1 in auditory/left\nevoked.plot_joint()",
"Getting a source estimate\nNew we have all of the components we need to compute a forward solution,\nbut first we should sanity check that everything is well aligned:",
"fig = plot_alignment(\n evoked.info, trans=trans, show_axes=True, surfaces='head-dense',\n subject='sample', subjects_dir=subjects_dir)",
"Now we can actually compute the forward:",
"fwd = mne.make_forward_solution(\n evoked.info, trans=trans, src=fname_src, bem=fname_bem, verbose=True)",
"Finally let's compute the inverse and apply it:",
"inv = mne.minimum_norm.make_inverse_operator(\n evoked.info, fwd, cov, verbose=True)\nstc = mne.minimum_norm.apply_inverse(evoked, inv)\nbrain = stc.plot(subjects_dir=subjects_dir, initial_time=0.1)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
csadorf/signac
|
doc/signac_201_Advanced_Indexing.ipynb
|
bsd-3-clause
|
[
"2.1 Advanced Indexing\nIndexing files\nAs was shown earlier, we can create an index of the data space using the index() method:",
"import signac\n\nproject = signac.get_project(root='projects/tutorial')\nindex = list(project.index())\n\nfor doc in index[:3]:\n print(doc)",
"We will use the Collection class to manage the index directly in-memory:",
"index = signac.Collection(project.index())",
"This enables us for example, to quickly search for all indexes related to a specific state point:",
"for doc in index.find({'statepoint.p': 0.1}):\n print(doc)",
"At this point the index contains information about the statepoint and all data stored in the job document.\nIf we want to include the V.txt text files we used to store data in, with the index, we need to tell signac the filename pattern and optionally the file format.",
"index = signac.Collection(project.index('.*\\.txt'))\nfor doc in index.find(limit=2):\n print(doc)",
"The index contains basic information about the files within our data space, such as the path and the MD5 hash sum.\nThe format field currently says File, which is the default value.\nWe can specify that all files ending with .txt are to be defined to be of TextFile format:",
"index = signac.Collection(project.index({'.*\\.txt': 'TextFile'}))\nprint(index.find_one({'format': 'TextFile'}))",
"Generating a Master Index\nA master index is compiled from multiple other indexes, which is useful when operating on data compiled from multiple sources, such as multiple signac projects.\nTo make a data space part of master index, we need to create a signac_access.py module.\nWe use the access module to define how the index for the particular space is to be generated.\nWe can create a basic access module using the Project.create_access_module() function:",
"# Let's make sure to remoe any remnants from previous runs...\n% rm -f projects/tutorial/signac_access.py\n\n# This will generate a minimal access module:\nproject.create_access_module(master=False)\n\n% cat projects/tutorial/signac_access.py",
"When compiling a master index, signac will search for access modules named signac_access.py.\nWhenever it finds a file with that name, it will import the module and compile all indexes yielded from a function called get_indexes() into the master index.\nLet's try that!",
"master_index = signac.Collection(signac.index())\nfor doc in master_index.find(limit=2):\n print(doc)",
"Please note, that we executed the index() function without specifying the project directory.\nThe function crawled through all sub-directories below the root directory in an attempt to find acccess modules.\nWe can use the access module to control how exactly the index is generated, for example by adding filename and format definitions.\nUsually we could edit the file directly, here we will just overwrite the old one:",
"access_module = \\\n\"\"\"import signac\n\ndef get_indexes(root):\n yield signac.get_project(root).index({'.*\\.txt': 'TextFile'})\n\"\"\"\n\nwith open('projects/tutorial/signac_access.py', 'w') as file:\n file.write(access_module)",
"Now files will also be part of the master index!",
"master_index = signac.Collection(signac.index())\nprint(master_index.find_one({'format': 'TextFile'}))",
"We can use the signac.fetch() function to directly open files associated with a particular index document:",
"for doc in master_index.find({'format': 'TextFile'}, limit=3):\n with signac.fetch(doc) as file:\n p = doc['statepoint']['p']\n V = [float(v) for v in file.read().strip().split(',')]\n print(p, V)",
"Think of fetch() like the built-in open() function. It allows us to retrieve and open files based on the index document (file id) instead of an absolute file path. This makes it easier to operate on data agnostic to its actual physical location.\nPlease note that we can specify access modules for any kind of data space, it does not have to be a signac project!\nIn the next section, we will learn how to use indexes in combination with pandas dataframes."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.17/_downloads/e33f407965fb2ba1307deaf80b7d794c/plot_brainstorm_auditory.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Brainstorm auditory tutorial dataset\nHere we compute the evoked from raw for the auditory Brainstorm\ntutorial dataset. For comparison, see [1] and the associated\nbrainstorm site <http://neuroimage.usc.edu/brainstorm/Tutorials/Auditory>.\nExperiment:\n- One subject, 2 acquisition runs 6 minutes each.\n- Each run contains 200 regular beeps and 40 easy deviant beeps.\n- Random ISI: between 0.7s and 1.7s seconds, uniformly distributed.\n- Button pressed when detecting a deviant with the right index finger.\n\nThe specifications of this dataset were discussed initially on the\nFieldTrip bug tracker <http://bugzilla.fcdonders.nl/show_bug.cgi?id=2300>_.\nReferences\n.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.\n Brainstorm: A User-Friendly Application for MEG/EEG Analysis.\n Computational Intelligence and Neuroscience, vol. 2011, Article ID\n 879716, 13 pages, 2011. doi:10.1155/2011/879716",
"# Authors: Mainak Jas <mainak.jas@telecom-paristech.fr>\n# Eric Larson <larson.eric.d@gmail.com>\n# Jaakko Leppakangas <jaeilepp@student.jyu.fi>\n#\n# License: BSD (3-clause)\n\nimport os.path as op\nimport pandas as pd\nimport numpy as np\n\nimport mne\nfrom mne import combine_evoked\nfrom mne.minimum_norm import apply_inverse\nfrom mne.datasets.brainstorm import bst_auditory\nfrom mne.io import read_raw_ctf\n\nprint(__doc__)",
"To reduce memory consumption and running time, some of the steps are\nprecomputed. To run everything from scratch change this to False. With\nuse_precomputed = False running time of this script can be several\nminutes even on a fast computer.",
"use_precomputed = True",
"The data was collected with a CTF 275 system at 2400 Hz and low-pass\nfiltered at 600 Hz. Here the data and empty room data files are read to\nconstruct instances of :class:mne.io.Raw.",
"data_path = bst_auditory.data_path()\n\nsubject = 'bst_auditory'\nsubjects_dir = op.join(data_path, 'subjects')\n\nraw_fname1 = op.join(data_path, 'MEG', 'bst_auditory',\n 'S01_AEF_20131218_01.ds')\nraw_fname2 = op.join(data_path, 'MEG', 'bst_auditory',\n 'S01_AEF_20131218_02.ds')\nerm_fname = op.join(data_path, 'MEG', 'bst_auditory',\n 'S01_Noise_20131218_01.ds')",
"In the memory saving mode we use preload=False and use the memory\nefficient IO which loads the data on demand. However, filtering and some\nother functions require the data to be preloaded in the memory.",
"preload = not use_precomputed\nraw = read_raw_ctf(raw_fname1, preload=preload)\nn_times_run1 = raw.n_times\nmne.io.concatenate_raws([raw, read_raw_ctf(raw_fname2, preload=preload)])\nraw_erm = read_raw_ctf(erm_fname, preload=preload)",
"Data channel array consisted of 274 MEG axial gradiometers, 26 MEG reference\nsensors and 2 EEG electrodes (Cz and Pz).\nIn addition:\n\n1 stim channel for marking presentation times for the stimuli\n1 audio channel for the sent signal\n1 response channel for recording the button presses\n1 ECG bipolar\n2 EOG bipolar (vertical and horizontal)\n12 head tracking channels\n20 unused channels\n\nThe head tracking channels and the unused channels are marked as misc\nchannels. Here we define the EOG and ECG channels.",
"raw.set_channel_types({'HEOG': 'eog', 'VEOG': 'eog', 'ECG': 'ecg'})\nif not use_precomputed:\n # Leave out the two EEG channels for easier computation of forward.\n raw.pick_types(meg=True, eeg=False, stim=True, misc=True, eog=True,\n ecg=True)",
"For noise reduction, a set of bad segments have been identified and stored\nin csv files. The bad segments are later used to reject epochs that overlap\nwith them.\nThe file for the second run also contains some saccades. The saccades are\nremoved by using SSP. We use pandas to read the data from the csv files. You\ncan also view the files with your favorite text editor.",
"annotations_df = pd.DataFrame()\noffset = n_times_run1\nfor idx in [1, 2]:\n csv_fname = op.join(data_path, 'MEG', 'bst_auditory',\n 'events_bad_0%s.csv' % idx)\n df = pd.read_csv(csv_fname, header=None,\n names=['onset', 'duration', 'id', 'label'])\n print('Events from run {0}:'.format(idx))\n print(df)\n\n df['onset'] += offset * (idx - 1)\n annotations_df = pd.concat([annotations_df, df], axis=0)\n\nsaccades_events = df[df['label'] == 'saccade'].values[:, :3].astype(int)\n\n# Conversion from samples to times:\nonsets = annotations_df['onset'].values / raw.info['sfreq']\ndurations = annotations_df['duration'].values / raw.info['sfreq']\ndescriptions = annotations_df['label'].values\n\nannotations = mne.Annotations(onsets, durations, descriptions)\nraw.set_annotations(annotations)\ndel onsets, durations, descriptions",
"Here we compute the saccade and EOG projectors for magnetometers and add\nthem to the raw data. The projectors are added to both runs.",
"saccade_epochs = mne.Epochs(raw, saccades_events, 1, 0., 0.5, preload=True,\n reject_by_annotation=False)\n\nprojs_saccade = mne.compute_proj_epochs(saccade_epochs, n_mag=1, n_eeg=0,\n desc_prefix='saccade')\nif use_precomputed:\n proj_fname = op.join(data_path, 'MEG', 'bst_auditory',\n 'bst_auditory-eog-proj.fif')\n projs_eog = mne.read_proj(proj_fname)[0]\nelse:\n projs_eog, _ = mne.preprocessing.compute_proj_eog(raw.load_data(),\n n_mag=1, n_eeg=0)\nraw.add_proj(projs_saccade)\nraw.add_proj(projs_eog)\ndel saccade_epochs, saccades_events, projs_eog, projs_saccade # To save memory",
"Visually inspect the effects of projections. Click on 'proj' button at the\nbottom right corner to toggle the projectors on/off. EOG events can be\nplotted by adding the event list as a keyword argument. As the bad segments\nand saccades were added as annotations to the raw data, they are plotted as\nwell.",
"raw.plot(block=True)",
"Typical preprocessing step is the removal of power line artifact (50 Hz or\n60 Hz). Here we notch filter the data at 60, 120 and 180 to remove the\noriginal 60 Hz artifact and the harmonics. The power spectra are plotted\nbefore and after the filtering to show the effect. The drop after 600 Hz\nappears because the data was filtered during the acquisition. In memory\nsaving mode we do the filtering at evoked stage, which is not something you\nusually would do.",
"if not use_precomputed:\n meg_picks = mne.pick_types(raw.info, meg=True, eeg=False)\n raw.plot_psd(tmax=np.inf, picks=meg_picks)\n notches = np.arange(60, 181, 60)\n raw.notch_filter(notches, phase='zero-double', fir_design='firwin2')\n raw.plot_psd(tmax=np.inf, picks=meg_picks)",
"We also lowpass filter the data at 100 Hz to remove the hf components.",
"if not use_precomputed:\n raw.filter(None, 100., h_trans_bandwidth=0.5, filter_length='10s',\n phase='zero-double', fir_design='firwin2')",
"Epoching and averaging.\nFirst some parameters are defined and events extracted from the stimulus\nchannel (UPPT001). The rejection thresholds are defined as peak-to-peak\nvalues and are in T / m for gradiometers, T for magnetometers and\nV for EOG and EEG channels.",
"tmin, tmax = -0.1, 0.5\nevent_id = dict(standard=1, deviant=2)\nreject = dict(mag=4e-12, eog=250e-6)\n# find events\nevents = mne.find_events(raw, stim_channel='UPPT001')",
"The event timing is adjusted by comparing the trigger times on detected\nsound onsets on channel UADC001-4408.",
"sound_data = raw[raw.ch_names.index('UADC001-4408')][0][0]\nonsets = np.where(np.abs(sound_data) > 2. * np.std(sound_data))[0]\nmin_diff = int(0.5 * raw.info['sfreq'])\ndiffs = np.concatenate([[min_diff + 1], np.diff(onsets)])\nonsets = onsets[diffs > min_diff]\nassert len(onsets) == len(events)\ndiffs = 1000. * (events[:, 0] - onsets) / raw.info['sfreq']\nprint('Trigger delay removed (μ ± σ): %0.1f ± %0.1f ms'\n % (np.mean(diffs), np.std(diffs)))\nevents[:, 0] = onsets\ndel sound_data, diffs",
"We mark a set of bad channels that seem noisier than others. This can also\nbe done interactively with raw.plot by clicking the channel name\n(or the line). The marked channels are added as bad when the browser window\nis closed.",
"raw.info['bads'] = ['MLO52-4408', 'MRT51-4408', 'MLO42-4408', 'MLO43-4408']",
"The epochs (trials) are created for MEG channels. First we find the picks\nfor MEG and EOG channels. Then the epochs are constructed using these picks.\nThe epochs overlapping with annotated bad segments are also rejected by\ndefault. To turn off rejection by bad segments (as was done earlier with\nsaccades) you can use keyword reject_by_annotation=False.",
"picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,\n exclude='bads')\n\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=reject, preload=False,\n proj=True)",
"We only use first 40 good epochs from each run. Since we first drop the bad\nepochs, the indices of the epochs are no longer same as in the original\nepochs collection. Investigation of the event timings reveals that first\nepoch from the second run corresponds to index 182.",
"epochs.drop_bad()\nepochs_standard = mne.concatenate_epochs([epochs['standard'][range(40)],\n epochs['standard'][182:222]])\nepochs_standard.load_data() # Resampling to save memory.\nepochs_standard.resample(600, npad='auto')\nepochs_deviant = epochs['deviant'].load_data()\nepochs_deviant.resample(600, npad='auto')\ndel epochs, picks",
"The averages for each conditions are computed.",
"evoked_std = epochs_standard.average()\nevoked_dev = epochs_deviant.average()\ndel epochs_standard, epochs_deviant",
"Typical preprocessing step is the removal of power line artifact (50 Hz or\n60 Hz). Here we lowpass filter the data at 40 Hz, which will remove all\nline artifacts (and high frequency information). Normally this would be done\nto raw data (with :func:mne.io.Raw.filter), but to reduce memory\nconsumption of this tutorial, we do it at evoked stage. (At the raw stage,\nyou could alternatively notch filter with :func:mne.io.Raw.notch_filter.)",
"for evoked in (evoked_std, evoked_dev):\n evoked.filter(l_freq=None, h_freq=40., fir_design='firwin')",
"Here we plot the ERF of standard and deviant conditions. In both conditions\nwe can see the P50 and N100 responses. The mismatch negativity is visible\nonly in the deviant condition around 100-200 ms. P200 is also visible around\n170 ms in both conditions but much stronger in the standard condition. P300\nis visible in deviant condition only (decision making in preparation of the\nbutton press). You can view the topographies from a certain time span by\npainting an area with clicking and holding the left mouse button.",
"evoked_std.plot(window_title='Standard', gfp=True, time_unit='s')\nevoked_dev.plot(window_title='Deviant', gfp=True, time_unit='s')",
"Show activations as topography figures.",
"times = np.arange(0.05, 0.301, 0.025)\nevoked_std.plot_topomap(times=times, title='Standard', time_unit='s')\nevoked_dev.plot_topomap(times=times, title='Deviant', time_unit='s')",
"We can see the MMN effect more clearly by looking at the difference between\nthe two conditions. P50 and N100 are no longer visible, but MMN/P200 and\nP300 are emphasised.",
"evoked_difference = combine_evoked([evoked_dev, -evoked_std], weights='equal')\nevoked_difference.plot(window_title='Difference', gfp=True, time_unit='s')",
"Source estimation.\nWe compute the noise covariance matrix from the empty room measurement\nand use it for the other runs.",
"reject = dict(mag=4e-12)\ncov = mne.compute_raw_covariance(raw_erm, reject=reject)\ncov.plot(raw_erm.info)\ndel raw_erm",
"The transformation is read from a file. More information about coregistering\nthe data, see ch_interactive_analysis or\n:func:mne.gui.coregistration.",
"trans_fname = op.join(data_path, 'MEG', 'bst_auditory',\n 'bst_auditory-trans.fif')\ntrans = mne.read_trans(trans_fname)",
"To save time and memory, the forward solution is read from a file. Set\nuse_precomputed=False in the beginning of this script to build the\nforward solution from scratch. The head surfaces for constructing a BEM\nsolution are read from a file. Since the data only contains MEG channels, we\nonly need the inner skull surface for making the forward solution. For more\ninformation: CHDBBCEJ, :func:mne.setup_source_space,\ncreate_bem_model, :func:mne.bem.make_watershed_bem.",
"if use_precomputed:\n fwd_fname = op.join(data_path, 'MEG', 'bst_auditory',\n 'bst_auditory-meg-oct-6-fwd.fif')\n fwd = mne.read_forward_solution(fwd_fname)\nelse:\n src = mne.setup_source_space(subject, spacing='ico4',\n subjects_dir=subjects_dir, overwrite=True)\n model = mne.make_bem_model(subject=subject, ico=4, conductivity=[0.3],\n subjects_dir=subjects_dir)\n bem = mne.make_bem_solution(model)\n fwd = mne.make_forward_solution(evoked_std.info, trans=trans, src=src,\n bem=bem)\n\ninv = mne.minimum_norm.make_inverse_operator(evoked_std.info, fwd, cov)\nsnr = 3.0\nlambda2 = 1.0 / snr ** 2\ndel fwd",
"The sources are computed using dSPM method and plotted on an inflated brain\nsurface. For interactive controls over the image, use keyword\ntime_viewer=True.\nStandard condition.",
"stc_standard = mne.minimum_norm.apply_inverse(evoked_std, inv, lambda2, 'dSPM')\nbrain = stc_standard.plot(subjects_dir=subjects_dir, subject=subject,\n surface='inflated', time_viewer=False, hemi='lh',\n initial_time=0.1, time_unit='s')\ndel stc_standard, brain",
"Deviant condition.",
"stc_deviant = mne.minimum_norm.apply_inverse(evoked_dev, inv, lambda2, 'dSPM')\nbrain = stc_deviant.plot(subjects_dir=subjects_dir, subject=subject,\n surface='inflated', time_viewer=False, hemi='lh',\n initial_time=0.1, time_unit='s')\ndel stc_deviant, brain",
"Difference.",
"stc_difference = apply_inverse(evoked_difference, inv, lambda2, 'dSPM')\nbrain = stc_difference.plot(subjects_dir=subjects_dir, subject=subject,\n surface='inflated', time_viewer=False, hemi='lh',\n initial_time=0.15, time_unit='s')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dostrebel/working_place_ds_17
|
01+Beautifulsoup.ipynb
|
mit
|
[
"Schauen wir uns den Flughafen Basel an",
"import requests\nfrom bs4 import BeautifulSoup\nimport pandas as pd",
"Die Dokumentatiovon BeautifulSoup ist wirklich sehr beautiful. Es lohnt sich hier einen Blick darauf zu werfen. Beginnen wir damit, die Arrivals Site des Flughafens Basel/Mulhouse zu analysieren.\nURL einlesen",
"url = \"https://www.euroairport.com/en/flights/daily-arrivals.html\"\nresponse = requests.get(url)\narrivals_soup = BeautifulSoup(response.text, 'html.parser')\n\narrivals_soup",
"Developer Tools im Browser\nSuchen wir die Daten, der uns wirklich interessiert. Beginnen bei den Developers Tools. \nFind und Findall",
"arrivals_soup.find('tbody')\n\narrivals_soup.find('div', {'class': 'cblock modules-flights-flightlist modules-flights'})",
"Definieren wir eine Variable damit.",
"table = arrivals_soup.find('div', {'class': 'cblock modules-flights-flightlist modules-flights'})\n\ntype(table)\n\ntable.text",
"Holen wir alle \"row-1\" und \"row-0\" heraus.",
"row0 = table.find_all('tr', {'class': 'row-0'})\nrow1 = table.find_all('tr', {'class': 'row-1'})",
"Arbeit mit den Listen & find next sibling",
"type(row0)\n\nlen(row0)\n\nrow0[0]\n\nrow0[0].find('td', {'class': 'first'}).text\n\nrow0[0].find('td', {'class': 'first'}).find_next_sibling('td').text \n\nrow0[0].find('td', {'class': 'first'}).find_next_sibling('td') \\\n .find_next_sibling('td') \\\n .text.replace('\\n', '').replace('\\t','')\n\nrow0[0].find('td', {'class': 'first'}).find_next_sibling('td') \\\n .find_next_sibling('td') \\\n .find_next_sibling('td').text \\\n .replace('\\t','').replace('\\n', '')\n\nrow0[0].find('td', {'class': 'first'}).find_next_sibling('td') \\\n .find_next_sibling('td') \\\n .find_next_sibling('td') \\\n .find_next_sibling('td').text \\\n .replace('\\t','').replace('\\n', '')\n\nrow0[0].find('td', {'class': 'last'})",
"Packen wir das alles in einen Loop?",
"fluege = []\n\nfor elem in row0:\n ga_zeit = elem.find('td', {'class': 'first'}).text\n herkunft = elem.find('td', {'class': 'first'}).find_next_sibling('td').text \n airline = elem.find('td', {'class': 'first'}).find_next_sibling('td') \\\n .find_next_sibling('td') \\\n .text.replace('\\n', '').replace('\\t','')\n nummer = elem.find('td', {'class': 'first'}).find_next_sibling('td') \\\n .find_next_sibling('td') \\\n .find_next_sibling('td').text \\\n .replace('\\t','').replace('\\n', '')\n a_zeit = elem.find('td', {'class': 'first'}).find_next_sibling('td') \\\n .find_next_sibling('td') \\\n .find_next_sibling('td') \\\n .find_next_sibling('td').text \\\n .replace('\\t','').replace('\\n', '')\n typ = elem.find('td', {'class': 'last'}).text\n \n mini_dict = {'Geplante Ankunft': ga_zeit,\n 'Ankunft': a_zeit,\n 'Ankunft aus': herkunft,\n 'Flugnummer': nummer,\n 'Passagier/Cargo': typ}\n \n fluege.append(mini_dict)",
"Gehen wir die zweite Liste durch?",
"fluege1 = [] #das ändern\n\nfor elem in row1:\n ga_zeit = elem.find('td', {'class': 'first'}).text\n herkunft = elem.find('td', {'class': 'first'}).find_next_sibling('td').text \n airline = elem.find('td', {'class': 'first'}).find_next_sibling('td') \\\n .find_next_sibling('td') \\\n .text.replace('\\n', '').replace('\\t','')\n nummer = elem.find('td', {'class': 'first'}).find_next_sibling('td') \\\n .find_next_sibling('td') \\\n .find_next_sibling('td').text \\\n .replace('\\t','').replace('\\n', '')\n a_zeit = elem.find('td', {'class': 'first'}).find_next_sibling('td') \\\n .find_next_sibling('td') \\\n .find_next_sibling('td') \\\n .find_next_sibling('td').text \\\n .replace('\\t','').replace('\\n', '')\n typ = elem.find('td', {'class': 'last'}).text\n \n mini_dict = {'Geplante Ankunft': ga_zeit,\n 'Ankunft': a_zeit,\n 'Ankunft aus': herkunft,\n 'Flugnummer': nummer,\n 'Passagier/Cargo': typ}\n \n fluege1.append(mini_dict) #und hier",
"Verbinden wir beide Listen",
"f = fluege + fluege1\n\npd.DataFrame(f)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm
|
tutorial_part2/RealTime-DSP.ipynb
|
bsd-2-clause
|
[
"%pylab inline\n#%matplotlib qt\nfrom __future__ import division # use so 1/2 = 0.5, etc.\nimport sk_dsp_comm.sigsys as ss\nimport sk_dsp_comm.pyaudio_helper as pah\nimport scipy.signal as signal\nimport time\nimport sys\nimport imp # for module development and reload()\nfrom IPython.display import Audio, display\nfrom IPython.display import Image, SVG\n\npylab.rcParams['savefig.dpi'] = 100 # default 72\n#pylab.rcParams['figure.figsize'] = (6.0, 4.0) # default (6,4)\n#%config InlineBackend.figure_formats=['png'] # default for inline viewing\n%config InlineBackend.figure_formats=['svg'] # SVG inline viewing\n#%config InlineBackend.figure_formats=['pdf'] # render pdf figs for LaTeX",
"Introduction\nA simplified block diagram of PyAudio streaming-based (nonblocking) signal processing.",
"Image('PyAudio_RT_flow@300dpi.png',width='90%')\n\npah.available_devices()",
"Real-Time Loop Through\nHere we set up a simple callback function that passes the input samples directly to the output. The module pyaudio_support provides a class for managing a pyaudio stream object, capturing the samples processed by the callback function, and collection of performance metrics. Once the callback function is written/declared a DSP_io_stream object can be created and then the stream(Tsec) method can be executed to start the input/output processing, e.g.,\n```python\nimport pyaudio_helper as pah\nDSP_IO = pah.DSP_io_stream(callback,in_idx, out_idx)\nDSP_IO.stream(2)\n``\nwherein_idxis the index of the chosen input device found usingavailable_devices()and similarlyin_idx` is the index of the chosen output device.\n\nThe callback function must be written first as the function name used by the object to call the callback.",
"# define a pass through, y = x, callback\ndef callback(in_data, frame_count, time_info, status):\n DSP_IO.DSP_callback_tic()\n # convert byte data to ndarray\n in_data_nda = np.fromstring(in_data, dtype=np.int16)\n #***********************************************\n # DSP operations here\n # Here we apply a linear filter to the input\n x = in_data_nda.astype(float32)\n y = x\n # Typically more DSP code here \n #***********************************************\n # Save data for later analysis\n # accumulate a new frame of samples\n DSP_IO.DSP_capture_add_samples(y) \n #***********************************************\n # Convert from float back to int16\n y = y.astype(int16)\n DSP_IO.DSP_callback_toc()\n # Convert ndarray back to bytes\n #return (in_data_nda.tobytes(), pyaudio.paContinue)\n return y.tobytes(), pah.pyaudio.paContinue\n\nDSP_IO = pah.DSP_io_stream(callback,2,1,Tcapture=0)\n\nDSP_IO.stream(5)",
"Real-Time Filtering\nHere we set up a callback function that filters the input samples and then sends them to the output. \n```python\nimport pyaudio_helper as pah\nDSP_IO = pah.DSP_io_stream(callback,in_idx, out_idx)\nDSP_IO.stream(2)\n``\nwherein_idxis the index of the chosen input device found usingavailable_devices()and similarlyin_idx` is the index of the chosen output device.\n\nThe callback function must be written first as the function name is used by the object to call the callback\nTo demonstrate this we first design some filters that can be used in testing",
"import sk_dsp_comm.fir_design_helper as fir_d\n\nb = fir_d.fir_remez_bpf(2500,3000,4500,5000,.5,60,44100,18)\nfir_d.freqz_resp_list([b],[1],'dB',44100)\nylim([-80,5])\ngrid();\n\n# Design an IIR Notch\nb, a = ss.fir_iir_notch(3000,44100,r= 0.9)\nfir_d.freqz_resp_list([b],[a],'dB',44100,4096)\nylim([-60,5])\ngrid();",
"Create some global variables for the filter coefficients and the filter state array (recall that a filter has memory).",
"# For the FIR filter 'b' is defined above, but 'a' also needs to be declared\n# For the IIR notch filter both 'b' and 'a' are declared above\na = [1]\nzi = signal.lfiltic(b,a,[0])\n#zi = signal.sosfilt_zi(sos)\n\n# define callback (#2)\ndef callback2(in_data, frame_count, time_info, status):\n global b, a, zi\n DSP_IO.DSP_callback_tic()\n # convert byte data to ndarray\n in_data_nda = np.fromstring(in_data, dtype=np.int16)\n #***********************************************\n # DSP operations here\n # Here we apply a linear filter to the input\n x = in_data_nda.astype(float32)\n #y = x\n # The filter state/(memory), zi, must be maintained from frame-to-frame \n y, zi = signal.lfilter(b,a,x,zi=zi) # for FIR or simple IIR\n #y, zi = signal.sosfilt(sos,x,zi=zi) # for IIR use second-order sections \n #***********************************************\n # Save data for later analysis\n # accumulate a new frame of samples\n DSP_IO.DSP_capture_add_samples(y) \n #***********************************************\n # Convert from float back to int16\n y = y.astype(int16)\n DSP_IO.DSP_callback_toc()\n return y.tobytes(), pah.pyaudio.paContinue\n\nDSP_IO = pah.DSP_io_stream(callback2,2,1,Tcapture=0)\n\nDSP_IO.stream(5)",
"Real-Time Playback\nThe case of real-time playback sends an ndarray through the chosen audio output path with the array data either being truncated or looped depending upon the length of the array relative to Tsec supplied to stream(Tsec). To manage the potential looping aspect of the input array, we first make a loop_audio object from the input array. An example of this shown below:",
"# define callback (2)\n# Here we configure the callback to play back a wav file \ndef callback3(in_data, frame_count, time_info, status):\n \n DSP_IO.DSP_callback_tic()\n \n # Ignore in_data when generating output only\n #***********************************************\n global x\n # Note wav is scaled to [-1,1] so need to rescale to int16\n y = 32767*x.get_samples(frame_count)\n # Perform real-time DSP here if desired\n #\n #***********************************************\n # Save data for later analysis\n # accumulate a new frame of samples\n DSP_IO.DSP_capture_add_samples(y)\n #***********************************************\n # Convert from float back to int16\n y = y.astype(int16)\n DSP_IO.DSP_callback_toc()\n return y.tobytes(), pah.pyaudio.paContinue\n\n#fs, x_wav = ss.from_wav('OSR_us_000_0018_8k.wav')\nfs, x_wav2 = ss.from_wav('Music_Test.wav')\nx_wav = (x_wav2[:,0] + x_wav2[:,1])/2\n#x_wav = x_wav[15000:90000]\nx = pah.loop_audio(x_wav)\n#DSP_IO = pah.DSP_io_stream(callback3,2,1,fs=8000,Tcapture=2)\nDSP_IO = pah.DSP_io_stream(callback3,2,1,fs=44100,Tcapture=2)\nDSP_IO.stream(20)",
"Real-Time Audio Capture/Record\nHere we use PyAudio to acquire or capture the signal present on the chosen input device, e.g., microphone or a line-in signal from some sensor or music source. The example captures from the built-in microphone found on most PCs.",
"# define callback (2)\n# Here we configure the callback to capture a one channel input\ndef callback4(in_data, frame_count, time_info, status):\n \n DSP_IO.DSP_callback_tic()\n \n # convert byte data to ndarray\n in_data_nda = np.fromstring(in_data, dtype=np.int16)\n #***********************************************\n # DSP operations here\n # Here we apply a linear filter to the input\n x = in_data_nda.astype(float32)\n y = x \n #***********************************************\n # Save data for later analysis\n # accumulate a new frame of samples\n DSP_IO.DSP_capture_add_samples(y)\n #***********************************************\n # Convert from float back to int16\n y = 0*y.astype(int16)\n DSP_IO.DSP_callback_toc()\n # Convert ndarray back to bytes\n #return (in_data_nda.tobytes(), pyaudio.paContinue)\n return y.tobytes(), pah.pyaudio.paContinue\n\nDSP_IO = pah.DSP_io_stream(callback4,0,1,fs=22050)\nDSP_IO.stream(5)",
"Capture Buffer Analysis\nAs each of the above real-time processing scenarios is run, move down here in the notebook to do some analysis of what happened.\n* The stream_stats() provides statistics related to the real-time performance\n* What is the period between callbacks, ideal not contention theory and measured\n* The average time spent in the callback\n * The frame-based processing approach taken by PyAudio allows for efficient processing at the expense of latency in getting the first input sample to the output\n * With a large frame_length and the corresponding latency, a lot of processing time is available to get DSP work done\n* The object DSP_IO also contains a capture buffer (an ndarray), data_capture\n* Post processing this buffer allows further study of what was passed to the output of the DSP IP itself\n* In the case of a capture only application, the array data_capture is fundamental interest, as this is what you were seeking",
"DSP_IO.stream_stats()",
"Note for a attributes used in the above examples the frame_length is alsways 1024 samples and the sampling rate $f_s = 44.1$ ksps. The ideal callback period is this\n$$\n T_{cb} = \\frac{1024}{44100} = 23.22\\ \\text{(ms)}\n$$",
"T_cb = 1024/44100 * 1000 # times 1000 to get units of ms\nprint('Callback/Frame period = %1.4f (ms)' % T_cb)",
"Next consider what the captures tic and toc data revels about the processing. Calling the method cb_active_plot() produces a plot similar to what an electrical engineer would see what using a logic analyzer to show the time spent in an interrupt service routine of an embedded system. The latency is also evident. You expect to see a minimum latency of two frame lengths (input buffer fill and output buffer fill),e.g.,\n$$\n T_\\text{latency} >= 2\\times \\frac{1024}{44100} \\times 1000 = 56.44\\ \\text{(ms)}\n$$\nThe host processor is multitasking, so the latency can be even greater. A true real-time DSP system would give the signal processing high priority and hence much lower latency is expected.",
"subplot(211)\nDSP_IO.cb_active_plot(0,270) # enter start time (ms) and stop time (ms)\nsubplot(212)\nDSP_IO.cb_active_plot(150,160)\ntight_layout()\n\nNpts = 1000\nNstart = 1000\nplot(arange(len(DSP_IO.data_capture[Nstart:Nstart+Npts]))*1000/44100,\n DSP_IO.data_capture[Nstart:Nstart+Npts])\ntitle(r'A Portion of the capture buffer')\nylabel(r'Amplitude (int16)')\nxlabel(r'Time (ms)')\ngrid();",
"Finally, the spectrum of the output signal. To apply custon scaling we use a variation of psd() found in the sigsys module. If we are plotting the spectrum of white noise sent through a filter, the output PSD will be of the form $\\sigma_w^2|H(e^{j2\\pi f/f_s})|^2$, where $\\sigma_w^2$ is the variance of the noise driving the filter. You may choose to overlay a plot of",
"Pxx, F = ss.my_psd(DSP_IO.data_capture,2**13,44100);\nfir_d.freqz_resp_list([b],[a],'dB',44100)\nplot(F,10*log10(Pxx/max(Pxx))+3,'g') # Normalize by the max PSD\nylim([-80,5])\nxlim([100,20e3])\ngrid();\n\nspecgram(DSP_IO.data_capture,1024,44100);\nylim([0, 5000])",
"What to Try Next?"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
molgor/spystats
|
notebooks/.ipynb_checkpoints/model_fitting_spatialautocorr_using_GLS-checkpoint.ipynb
|
bsd-2-clause
|
[
"Linear Spatial Autocorrelation Model\nThe two methodologies under study (i.e. Meta-analysis and distributed networks) share the assumption that the observations are independent between each other. In other words, if two plots (say p1 and p2 ) are from different studies, the covariance between p1 and p2 is zero. The assumption is reasonable because of the character of both methodologies. Data derived from meta-analysis and distributed networks is composed of experiments measured in different environmental conditions and geographical locations; using an assortment of experimental techniques and sample designs. It is therefore reasonable to expect that the residuals derived from a linear statistical model will be explained by a non structured error (e.g. $epsilon \\sim N(0,\\sigma^2)).\nData Used and computational challenges\nThe dataset used as reference was the FIA dataset. It comprises more than 36k plot records. Each with a different spatial coordinate. Analysing the data for spatial effects require to manage a 36k x 36k matrix and the parameter optimization through GLS requires to calculate the inverse as well. \nModel specification\nThe spatial model proposed follows a classical geostatistical approach. In other words, an empirical variogram was used to estimate a {\\em valid} analytical model (Webster and Oliver, 2001). In accordance with the model simulations, the spatial model is as follows:\n$$log Biomass = log(Spp Richness) + S_x + \\epsilon$$\nWhere: $$E(log(biomass)) = \\beta_0 log(Spp Richness)$ and $Var(y) = [\\tau^2 \\rho(|x,x’|) + \\sigma^2] $$ \n$\\tau$ is a variance parameter of the gaussian isotropic and stationary process S_x with spatial autocorrelation distance function $\\rho$ given by:\n$$\\rho (h)=(s-n)\\left(1-\\exp \\left(-{\\frac {h^{2}}{r^{2}}}\\right)\\right)+n1_{{(0,\\infty )}}(h)$$\nWhere $h$ is the distance $|x,x’|$ , $s$, $n$ and $r$ are the parameters for sill, nugget and range.\nExploratory analysis\nTo begin with, a linear model using OLS was fitted using a log-log transformation of Biomass and Species Richness as response variable and covariate (respectively). I.e.\n$ (log(Biomass) | S) = \\beta log(Spp Richness) + \\epsilon $\nA histogram of the residuals shows a symmetric distribution (see figure below). \nThe residuals show no significant spatial trend across latitude or longitude (see figures 2bis and 3bis). We decided to follow the principle of model parsimony by not including the spatial coordinates as covariates (fixed effect).\nEmpirical Variogram and model fit\nThe residuals however, show variance dependent on the distance (autocorrelation). An empirical variogram was calculated to account for this effect (see figure below). The variogram was calculated using 50 lag distances of 13.5 km each (derived from dividing the data’s distance matrix range by 50). A Monte Carlo envelope method (blue region) at 0.25 and 0.975 quantiles was calculated to designate the region under the null hypothesis, i.e. with no spatial autocorrelation (Diggle and Ribeiro, 2003).\nThe resulting variogram (orange dots) show a typical spatial autocorrelation pattern, distinct from complete randomness and with an increasing variance as a function of distance. Using this pattern we fitted a gaussian model using non-linear least squares method implemented in Scipy.optimize.curve_fit (Jones et.al., 2017). The results obtained were: Sill 0.341, Range 50318.763, Nugget 0.33 . 
The resulting function is overlapped in green.\nConclusion\nThe model: $log(Biomass) = \\beta log(Spp_richness) + \\epsilon$ presents non-explicative random effects with spatial autocorrelation. The variogram of its residuals shows a typical pattern for distance dependent heteroscedasticity. In other words, the correlation of distinct data points depends on the distance i.e. ($Cov(p_1,p_2) = \\sigma^2 \\rho(|p_1 - p_2|^2)$) where $\\rho$ is a spatial auto-correlation function (derived from the empirical variogram under a gaussian model assumption). \nThe observations reject the assumptions of the linear model estimator obtained by OLS, those on independence and identically distributed errors. The Generalised Least Square (GLS) estimator would be a better approach for obtaining the linear parameters but more importantly a more reliable variance and consequently more reliable confidence interval.\nRecommendations and future work\nThe whole dataset unveiled a spatial structure that needs to be accounted for in both studies; distributed plots and independent studies. The GLS estimator is a more robust method for optimising the linear models. The covariance matrix (used in meta-analysis) can be extended to include a spatial effect and derive better estimators and their confidence interval.",
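For reference, the GLS estimator mentioned above has the standard closed form (a textbook result, with the covariance matrix $\Sigma$ built from the fitted correlation function):

$$\hat{\beta}_{GLS} = \left(X^{T}\Sigma^{-1}X\right)^{-1}X^{T}\Sigma^{-1}y, \qquad \Sigma_{ij} = \sigma^{2}\,\rho(|p_i - p_j|),$$

with $Var(\hat{\beta}_{GLS}) = (X^{T}\Sigma^{-1}X)^{-1}$, which is what yields the more reliable confidence intervals.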
"# Load Biospytial modules and etc.\n%matplotlib inline\nimport sys\nsys.path.append('/apps')\nimport django\ndjango.setup()\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pylab as plt\n## Use the ggplot style\nplt.style.use('ggplot')\n\n## check the matern\nimport scipy.special as special\n#def MaternVariogram(h,range_a,nugget=40,sill=100,kappa=0.5):\ndef MaternVariogram(h,sill=1,range_a=100,nugget=40,kappa=0.5): \n \"\"\"\n The Matern Variogram of order $\\kappa$.\n \n $$ \\gamma(h) = nugget + (sill (1 - (\\farc{1}{2^{\\kappa -1}} \\Gamma(\\kappa) (\\frac{h}{r})^{\\kappa} K_\\kappa \\Big(\\frac{h}{r}\\Big)$$\n \n Let:\n a = $$ \n b = $$\n K_v = Modified Bessel function of the second kind of real order v\n \"\"\"\n \n #a = np.power(2, 1 - kappa) / special.gamma(kappa)\n #b = (np.sqrt(2 * kappa) / range_a) * h\n a = 1 / np.power(2,kappa - 1 ) * special.gamma(kappa)\n \n b = (h / float(range_a))\n K_v = special.kv(kappa,b)\n \n #kh = sigma * a * np.power(b,kappa) * K_v\n #kh = (sill - nugget) * ( 1 - (a * np.power(b,kappa) * K_v))\n kh = nugget + (sill * ( 1 - (a * np.power(b,kappa) * K_v)))\n\n kh = np.nan_to_num(kh)\n\n return kh\n \n\ncc = MaternVariogram(hx,range_a=100000,sill=100,nugget=300,kappa=0.5)\nplt.plot(hx,cc,'.')",
"$$ \\gamma(h) = nugget + (sill (1 - (\\frac{1}{2^{\\kappa -1}} \\Gamma(\\kappa) (\\frac{h}{r})^{\\kappa} K_\\kappa \\Big(\\frac{h}{r}\\Big)$$",
"from external_plugins.spystats import tools\ngx = tools.exponentialVariogram(hx,sill=100,nugget=0,range_a=100000)\nplt.plot(hx,gx)\nplt.plot(hx,cc,'.')\n\ndef gaussianVariogram(h,sill=0,range_a=0,nugget=0):\n if isinstance(h,np.ndarray):\n Ih = np.array([1.0 if hx >= 0.0 else 0.0 for hx in h])\n else:\n Ih = 1.0 if h >= 0 else 0.0\n #Ih = 1.0 if h >= 0 else 0.0 \n g_h = ((sill - nugget)*(1 - np.exp(-(h**2 / range_a**2)))) + nugget*Ih\n return g_h\n## Fitting model.\n### Optimizing the empirical values\ndef theoreticalVariogram(model_function,sill,range_a,nugget,kappa=0):\n if kappa == 0:\n return lambda x : model_function(x,sill,range_a,nugget)\n else: \n return lambda x : model_function(x,sill,range_a,nugget,kappa)\n",
"Relating the Semivariogram with the correlation function\nThe variogram of a spatial stochastic process $S(x)$ is the function:\n$$ V(x,x') = \\frac{1}{2} Var { S(x) - S(x') } $$\nNote that :\n$$V(x,x') = \\frac{1}{2} { Var(S(x)) + Var(S(x') - 2Cov(S(x),S(x')) } $$\nFor the stationary case: \n$$ 2 V(u) = 2\\sigma^2 (1 - \\rho(u)) $$\nSo:\n$$ \\rho(u) = 1 - \\frac{V(u)}{\\sigma^2} $$",
"from external_plugins.spystats import tools\n%run ../testvariogram.py\n\n## Remove duplications\nwithoutrep = new_data.drop_duplicates(subset=['newLon','newLat'])\nprint(new_data).shape\nprint(withoutrep).shape\nnew_data = withoutrep\n",
"new_data.residuals2.hist()\nplt.title('Residuals of $log(Biomass) \\sim log(Spp_{rich})$')\n------------------\nplt.scatter(new_data.newLon,new_data.residuals2)\nplt.xlabel('Longitude (meters)')\nplt.ylabel('Residuals: $log(Biomass) - \\hat{Y}$')\n-------------------\nplt.scatter(new_data.newLat,new_data.residuals2)\nplt.xlabel('Latitude (meters)')\nplt.ylabel('$Residuals: log(Biomass) - \\hat{Y}$')\nRead the data",
"### Read the data first\n#hx = np.linspace(0,400000,100)\n#spmodel = theoreticalVariogram(gaussianVariogram,sill,range_a,nugget)\n#nt = 30 # num iterations\nthrs_dist = 100000\nempirical_semivariance_log_log = \"../HEC_runs/results/logbiomas_logsppn_res.csv\"\nfilename = \"../HEC_runs/results/low_q/data_envelope.csv\"\n\n#### here put the hec calculated \nenvelope_data = pd.read_csv(filename)\nemp_var_log_log = pd.read_csv(empirical_semivariance_log_log)\ngvg = tools.Variogram(new_data,'logBiomass',using_distance_threshold=thrs_dist)\ngvg.envelope = emp_var_log_log\ngvg.empirical = emp_var_log_log.variogram\ngvg.lags = emp_var_log_log.lags\nemp_var_log_log = emp_var_log_log.dropna()\nvdata = gvg.envelope.dropna()\ngvg.plot(refresh=False,legend=False,percentage_trunked=20)\nplt.title(\"Semivariogram of residuals $log(Biomass) ~ log(SppR)$\")",
"The best fitted values were:\n(Processed by chunks see: http://localhost:8888/notebooks/external_plugins/spystats/notebooks/variogram_envelope_by_chunks.ipynb )\n\nSill 0.34122564947\nRange 50318.763452\nNugget 0.329687351696",
"sill = 0.34122564947\nrange_a = 50318.763452\nnugget = 0.329687351696\n\nimport matplotlib.pylab as plt\nhx = np.linspace(0,600000,100)\nfrom scipy.optimize import curve_fit\ns = 0.345\nr = 50000.0\nnugget = 0.33\nkappa = 0.5\ninit_vals = [s,r,nugget] # for [amp, cen, wid]\ninit_matern = [s,r,nugget,kappa]\n#bg, covar_gaussian = curve_fit(gaussianVariogram, xdata=emp_var_log_log.lags.values, ydata=emp_var_log_log.variogram.values, p0=init_vals)\nbg, covar_gaussian = curve_fit(MaternVariogram, xdata=emp_var_log_log.lags.values, ydata=emp_var_log_log.variogram.values, p0=init_matern)\n#MaternVariogram(h,range_a,nugget=40,sill=100,kappa=0.5)\n#vdata = gvg.envelope.dropna()\n## The best parameters asre:\n#gau_var = tools.gaussianVariogram(hx,bg[0],bg[1],bg[2])\ngau_var = MaternVariogram(hx,bg[0],bg[1],bg[2],bg[3])\n\n\nsill = bg[0]\nrange_a = bg[1]\nnugget = bg[2]\nkappa = bg[3]\nspmodel = theoreticalVariogram(MaternVariogram,sill,range_a,nugget,kappa)\n#spmodel = theoreticalVariogram(gaussianVariogram,sill,range_a,nugget)\n#MaternVariogram(0,sill,range_a,nugget,kappa)\n\n\n\nresults = \"Sill %s , range_a %s , nugget %s, kappa %s\"%(sill,range_a,nugget,kappa)\nprint(results)\n\n#n_points = pd.DataFrame(map(lambda v : v.n_points,variograms2))\n#points = n_points.transpose()\n#ejem2 = pd.DataFrame(variogram2.values * points.values)\n# Chunks (variograms) columns\n# lag rows\n#vempchunk2 = ejem2.sum(axis=1) / points.sum(axis=1)\n#plt.plot(lags,vempchunk2,'--',color='blue',lw=2.0)\nplt.figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')\nthrs_dist = 1000000\n\n\ngvg.plot(refresh=False,legend=False,percentage_trunked=20)\nplt.title(\"Empirical variogram for the residuals: $log(Biomass) \\sim log(Spp_{rich}) $ \")\nplt.plot(hx,spmodel(hx),color='green',lw=2.3)",
"Todo: Fit Matern",
"X = np.linspace(0,600000,50)\ntvar = spmodel(X)\ncorrelation_h = lambda h : 1 - (spmodel(h))\n\nspmodel(0)\n\nplt.plot(X,correlation)\n\nplt.plot(np.linspace(0,7,1000),special.kv(0.5,np.linspace(0,7,1000)))",
"GLS estimation.\nIt's not possible to do it all in the server or this computer because it requires massive computational capacity.\nI'll do it with a geographical section or sample.\nTaken from:\nhttp://localhost:8888/notebooks/external_plugins/spystats/notebooks/Analysis_spatial_autocorrelation_with_empirical_variogram_using_GLS.ipynb\nRe fit the $\\beta$ \nOke, first calculate the distances",
"def randomSelection(data,k):\n n = len(data)\n idxs = np.random.choice(n,k,replace=True)\n random_sample = data.iloc[idxs]\n return random_sample\n#################\n#n = len(new_data)\n#p = 3000 # The amount of samples taken (let's do it without replacement)\n\ndef systSelection(data,k):\n n = len(data)\n idxs = range(0,n,k)\n systematic_sample = data.iloc[idxs]\n return systematic_sample\n##################\nn = len(new_data)\nk = 10 # The k-th element to take as a sample\n\ndef subselectDataFrameByCoordinates(dataframe,namecolumnx,namecolumny,minx,maxx,miny,maxy):\n \"\"\"\n Returns a subselection by coordinates using the dataframe/\n \"\"\"\n minx = float(minx)\n maxx = float(maxx)\n miny = float(miny)\n maxy = float(maxy)\n section = dataframe[lambda x: (x[namecolumnx] > minx) & (x[namecolumnx] < maxx) & (x[namecolumny] > miny) & (x[namecolumny] < maxy) ]\n return section\n\n\nsample = systSelection(new_data,10)\nsample = randomSelection(new_data,10)\nminx = -85\nmaxx = -80\nminy = 30\nmaxy = 35\n\nsection = subselectDataFrameByCoordinates(new_data,'LON','LAT',minx,maxx,miny,maxy)\n\nvsamp = tools.Variogram(section,'logBiomass')\nimport statsmodels.regression.linear_model as lm\nMdist = vsamp.distance_coordinates.flatten()\nvsamp.plot(num_iterations=1)\n\n\n%time vars = np.array(correlation_h(Mdist))\nMMdist = Mdist.reshape(len(section),len(section))\nCovMat = vars.reshape(len(section),len(section))\nX = section.logSppN.values\nY = section.logBiomass.values\n\nsection.plot()\n\ntt = section.geometry\ntt.plot()\n\nplt.imshow(MMdist,interpolation='None',cmap=plt.cm.Blues)\n\nplt.imshow(CovMat,interpolation='None',cmap=plt.cm.Blues)\n\n%time results_gls = lm.GLS(Y,X,sigma=CovMat)\n#tt = np.linalg.cholesky(CovMat)\n#np.linalg.eigvals(CovMat)\n#CovMat.flatten()\n#MMdist.flatten()\n#lm.GLS?\n\n\nmodelillo = results_gls.fit()\n\nmodelillo.summary()\n\n## Experimental do not run in interactive mode\n##But the data is massive and it consumes all my RAM (32 GB) I need to do something clever.\ncovar = []\nfor i in range(len(Mdist)):\n x = Mdist.pop(i)\n g = gaussianVariogram(x,bg[0],bg[1],bg[2])\n covar.append(g)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mattmcd/PyBayes
|
scripts/dc_manipulating_time_series.ipynb
|
apache-2.0
|
[
"Manipulating Time Series\nNotebook to follow along with final chapter of the DataCamp Manipulating Time Series course.",
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport os\nimport yfinance as yf\n\n%matplotlib inline\n\ndef ddir(name=None):\n data_dir = 'dc_manipulating_time_series/stock_data/'\n if name is None:\n print(os.listdir(data_dir))\n else:\n return os.path.join(data_dir, name)\n\nddir()",
"Import the data",
"nyse = pd.read_excel(ddir('listings.xlsx'), sheet_name='nyse', na_values='n/a')\nnyse.info()\n\nnyse.head()\n\nnyse.loc[nyse['Stock Symbol'].isin(['CPRT', 'ILMN']), :]",
"Problem: it looks like the listing data Excel file from the Datacamp data download for this course contains different stocks to the ones we want. Some investigation shows that the \nNASDAQ listings do have all the stocks of interest.\nSome filtering by sector and removing stocks more recent than the ones used in the course, e.g. TSLA, is\nthen required to get the tickers of the biggest companies in each sector to match those expected in the course (and \nwhose prices are available in the data files).",
"# NASDAQ listings from https://www.nasdaq.com/market-activity/stocks/screener \n# Looks like the listings data for the exercise is from Nasdaq rather than NYSE\nnasdaq = pd.read_csv(ddir('nasdaq_screener_1646066842549.csv')).rename(\n columns={'Symbol': 'Stock Symbol', 'Market Cap': 'Market Capitalization', 'Name': 'Company Name'}\n)\nnasdaq['Last Sale'] = nasdaq['Last Sale'].replace({'\\$': '', ',': ''}, regex=True).astype(float)\nnasdaq.head()\n\nnasdaq.set_index('Stock Symbol', inplace=True) # Make Stock Symbol the index\nnasdaq.dropna(subset=['Sector'], inplace=True) # Remove stocks without sector info\nnasdaq['Market Capitalization'] /= 1e6 # Scale to million $ \n\n# Exercise only considers stocks that IPOd before 2019, need to filter\n# by this as well otherwise reading stock time series data fails \n# due to missing tickers. Also fiddle with sector label to get things matching.\nnasdaq = nasdaq.loc[(nasdaq['IPO Year'] < 2016), :]\nnasdaq.loc['GS', 'Sector'] = 'Bank'\nnasdaq.loc['ILMN', 'Sector'] = 'Biotech'\nnasdaq.loc['CPRT', 'Sector'] = 'Finance'\n\n\nnasdaq.info()\n\n# nasdaq.loc[nasdaq.index.isin(data.columns), :]\n\nallowed_sectors = ['Technology', 'Health Care', 'Consumer Services', 'Miscellaneous',\n 'Consumer Non-Durables', 'Bank', 'Health Care', 'Biotech', 'Energy', 'Basic Industries',\n 'Public Utilities', 'Transportation']\ndisallowed_stocks = ['LIN', 'TSLA', 'GNRC', 'MPLX', 'HDB', 'ABBV', 'KMI', 'FANG', 'RSG',\n 'CLR', 'AWK', 'WES', 'PSXP']\ncomponents = nasdaq.loc[\n nasdaq.Sector.isin(allowed_sectors) & ~nasdaq.index.isin(disallowed_stocks), :\n].groupby('Sector')['Market Capitalization'].nlargest(1)\ncomponents.sort_values(ascending=False)\n\ncomponents.info()\n\n# Index is MultiIndex so use get_level_values to extract tickers\ntickers = components.index.get_level_values('Stock Symbol')\ntickers\n\ncolumns = ['Company Name', 'Market Capitalization', 'Last Sale']\ncomponent_info = nasdaq.loc[tickers, columns].sort_values('Market Capitalization', ascending=False)\npd.options.display.float_format = '{:,.2f}'.format\ncomponent_info\n\n# Read the price time series data \ndata = pd.read_csv(\n ddir('stock_data.csv'), parse_dates=['Date'], index_col='Date'\n).loc[:, tickers.tolist()].dropna()\ndata.info()\n\n# Look at price returns over period\nprice_return = data.iloc[-1].div(data.iloc[0]).sub(1).mul(100).sort_values(ascending=False)\nprice_return\n\nprice_return.sort_values().plot(title='Stock Price Returns', kind='barh')\nplt.show()",
"Build a Market Cap Weighted Index",
"shares = component_info['Market Capitalization'].div(component_info['Last Sale'])\nshares.sort_values(ascending=False)\n\nmarket_cap_series = data.mul(shares)\n_ = market_cap_series.plot(logy=True)\n\n# market_cap_series.first('D').append(market_cap_series.last('D')) # .append is deprecated\npd.concat([market_cap_series.first('D'), market_cap_series.last('D')], axis=0)\n\nagg_mcap = market_cap_series.sum(axis=1)\n_ = agg_mcap.plot(title='Aggregate Market Cap')\n\nmcap_index = agg_mcap.div(agg_mcap.iloc[0]).mul(100)\n_ = mcap_index.plot(title='Market-Cap Weighted Index')",
"Evaluate Index Performance",
"# Total performance of companies in index\nprint(f'Companies in index added ${agg_mcap.iloc[-1] - agg_mcap.iloc[0]:,.2f}M in value')\n\nchange = pd.concat([market_cap_series.first('D'), market_cap_series.last('D')], axis=0)\nchange.diff().iloc[-1].sort_values()\n\nweights = component_info['Market Capitalization'].div(component_info['Market Capitalization'].sum())\nweights\n\nindex_return = (mcap_index.iloc[-1]/mcap_index.iloc[0] - 1)*100\nindex_return\n\nweighted_return = weights.mul(index_return)\n_ = weighted_return.sort_values().plot(kind='barh')\n\ndf_index = mcap_index.to_frame('Index')\ndf_index['SP500'] = pd.read_csv(ddir('sp500.csv'), parse_dates=['date'], index_col='date')\ndf_index['SP500'] = df_index['SP500'].div(df_index['SP500'].iloc[0]).mul(100)\n_ = df_index.plot()\n\n# Multi period returns\ndef multi_period_returns(r):\n return (np.prod(1 + r) - 1) * 100\n\n_ = df_index.pct_change().rolling('30D').apply(multi_period_returns).plot()",
"Index Correlation and Exporting to Excel",
"daily_returns = data.pct_change().dropna()\ncorrelations = daily_returns.corr()\ncorrelations\n\n_ = sns.heatmap(correlations, annot=True, cmap='RdBu', center=0)\n_ = plt.xticks(rotation=45)\n_ = plt.title('Daily Return Correlations')\n\n# correlations.to_excel(excel_writer=ddir('dc_correlation.xls'), sheet_name='correlations', startrow=1, startcol=1)\n\nwith pd.ExcelWriter(ddir('dc_stock_data.xlsx')) as writer:\n correlations.to_excel(excel_writer=writer, sheet_name='correlations')\n data.to_excel(excel_writer=writer, sheet_name='prices')\n data.pct_change().to_excel(excel_writer=writer, sheet_name='returns')",
"Time Series Analysis\nSecond course in Skill Track - look at AR, MA, ACF etc",
"from statsmodels.tsa.arima_process import ArmaProcess\nfrom statsmodels.graphics.tsaplots import plot_acf, plot_pacf\nfrom statsmodels.tsa.stattools import adfuller, coint\nfrom statsmodels.tsa.arima_model import ARMA\n\ndaily_returns.head()\n\nlen(daily_returns)\n\n_ = plot_acf(daily_returns.AMZN, lags=20, alpha=0.05)\n\n_ = plot_acf((1+daily_returns).resample('Q').prod().AMZN -1 , lags=20, alpha=0.05)\n\nquarterly_returns = data.resample('Q').last().pct_change().dropna()\nquarterly_returns.head()\n\n_ = plot_acf(quarterly_returns.AMZN, lags=20, alpha=0.05)\n\n((1 + daily_returns).resample('Q').prod() - 1).head()\n\n# aapl = yf.Ticker('aapl')\n\n# aapl_historical = aapl.history(start=\"2022-03-01\", end=\"2022-03-05\", interval=\"1m\")\n\n# aapl_historical.to_csv(ddir('aapl_data_20220301.csv'))\n\naapl_historical = pd.read_csv(ddir('aapl_data_20220301.csv'), parse_dates=['Datetime'], index_col='Datetime')\n\naapl_historical.head()\n\n_ = aapl_historical.loc['2022-03-01', 'Close'].plot()\n\nappl_ret_20220301 = aapl_historical.loc['2022-03-01', 'Close'].pct_change().dropna()\nappl_ret_20220301.head()\n\n_ = plot_acf(appl_ret_20220301, lags=60)\n\n_ = aapl_historical.loc['2022-03-01 9:30':'2022-03-01 10:00', 'Close'].plot()\n\nhelp(coint) #(data.AAPL, data.AMZN)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/pcmdi/cmip6/models/sandbox-2/ocean.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Ocean\nMIP Era: CMIP6\nInstitute: PCMDI\nSource ID: SANDBOX-2\nTopic: Ocean\nSub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. \nProperties: 133 (101 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:36\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-2', 'ocean')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Seawater Properties\n3. Key Properties --> Bathymetry\n4. Key Properties --> Nonoceanic Waters\n5. Key Properties --> Software Properties\n6. Key Properties --> Resolution\n7. Key Properties --> Tuning Applied\n8. Key Properties --> Conservation\n9. Grid\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Discretisation --> Horizontal\n12. Timestepping Framework\n13. Timestepping Framework --> Tracers\n14. Timestepping Framework --> Baroclinic Dynamics\n15. Timestepping Framework --> Barotropic\n16. Timestepping Framework --> Vertical Physics\n17. Advection\n18. Advection --> Momentum\n19. Advection --> Lateral Tracers\n20. Advection --> Vertical Tracers\n21. Lateral Physics\n22. Lateral Physics --> Momentum --> Operator\n23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff\n24. Lateral Physics --> Tracers\n25. Lateral Physics --> Tracers --> Operator\n26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff\n27. Lateral Physics --> Tracers --> Eddy Induced Velocity\n28. Vertical Physics\n29. Vertical Physics --> Boundary Layer Mixing --> Details\n30. Vertical Physics --> Boundary Layer Mixing --> Tracers\n31. Vertical Physics --> Boundary Layer Mixing --> Momentum\n32. Vertical Physics --> Interior Mixing --> Details\n33. Vertical Physics --> Interior Mixing --> Tracers\n34. Vertical Physics --> Interior Mixing --> Momentum\n35. Uplow Boundaries --> Free Surface\n36. Uplow Boundaries --> Bottom Boundary Layer\n37. Boundary Forcing\n38. Boundary Forcing --> Momentum --> Bottom Friction\n39. Boundary Forcing --> Momentum --> Lateral Friction\n40. Boundary Forcing --> Tracers --> Sunlight Penetration\n41. Boundary Forcing --> Tracers --> Fresh Water Forcing \n1. Key Properties\nOcean key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of ocean model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean model code (NEMO 3.6, MOM 5.0,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Family\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of ocean model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OGCM\" \n# \"slab ocean\" \n# \"mixed layer ocean\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBasic approximations made in the ocean.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Primitive equations\" \n# \"Non-hydrostatic\" \n# \"Boussinesq\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the ocean component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# \"Salinity\" \n# \"U-velocity\" \n# \"V-velocity\" \n# \"W-velocity\" \n# \"SSH\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Seawater Properties\nPhysical properties of seawater in ocean\n2.1. Eos Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Wright, 1997\" \n# \"Mc Dougall et al.\" \n# \"Jackett et al. 2006\" \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2.2. Eos Functional Temp\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTemperature used in EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# TODO - please enter value(s)\n",
"2.3. Eos Functional Salt\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSalinity used in EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Practical salinity Sp\" \n# \"Absolute salinity Sa\" \n# TODO - please enter value(s)\n",
"2.4. Eos Functional Depth\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDepth or pressure used in EOS for sea water ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pressure (dbars)\" \n# \"Depth (meters)\" \n# TODO - please enter value(s)\n",
"2.5. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2.6. Ocean Specific Heat\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nSpecific heat in ocean (cpocean) in J/(kg K)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"2.7. Ocean Reference Density\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nBoussinesq reference density (rhozero) in kg / m3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3. Key Properties --> Bathymetry\nProperties of bathymetry in ocean\n3.1. Reference Dates\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nReference date of bathymetry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Present day\" \n# \"21000 years BP\" \n# \"6000 years BP\" \n# \"LGM\" \n# \"Pliocene\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Type\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the bathymetry fixed in time in the ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.3. Ocean Smoothing\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe any smoothing or hand editing of bathymetry in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.4. Source\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe source of bathymetry in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.source') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Nonoceanic Waters\nNon oceanic waters treatement in ocean\n4.1. Isolated Seas\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how isolated seas is performed",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. River Mouth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how river mouth mixing or estuaries specific treatment is performed",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Key Properties --> Software Properties\nSoftware properties of ocean code\n5.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Resolution\nResolution in the ocean grid\n6.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Range Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"6.5. Number Of Vertical Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"6.6. Is Adaptive Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.7. Thickness Level 1\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nThickness of first surface ocean level (in meters)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7. Key Properties --> Tuning Applied\nTuning methodology for ocean component\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the ocean component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBrief description of conservation methodology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in the ocean by the numerical schemes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Enstrophy\" \n# \"Salt\" \n# \"Volume of ocean\" \n# \"Momentum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Consistency Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAny additional consistency properties (energy conversion, pressure gradient discretisation, ...)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Corrected Conserved Prognostic Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSet of variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.5. Was Flux Correction Used\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDoes conservation involve flux correction ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9. Grid\nOcean grid\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of grid in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nProperties of vertical discretisation in ocean\n10.1. Coordinates\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of vertical coordinates in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Z-coordinate\" \n# \"Z*-coordinate\" \n# \"S-coordinate\" \n# \"Isopycnic - sigma 0\" \n# \"Isopycnic - sigma 2\" \n# \"Isopycnic - sigma 4\" \n# \"Isopycnic - other\" \n# \"Hybrid / Z+S\" \n# \"Hybrid / Z+isopycnic\" \n# \"Hybrid / other\" \n# \"Pressure referenced (P)\" \n# \"P*\" \n# \"Z**\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Partial Steps\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nUsing partial steps with Z or Z vertical coordinate in ocean ?*",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11. Grid --> Discretisation --> Horizontal\nType of horizontal discretisation scheme in ocean\n11.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal grid type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Lat-lon\" \n# \"Rotated north pole\" \n# \"Two north poles (ORCA-style)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Staggering\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nHorizontal grid staggering type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa E-grid\" \n# \"N/a\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite difference\" \n# \"Finite volumes\" \n# \"Finite elements\" \n# \"Unstructured grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Timestepping Framework\nOcean Timestepping Framework\n12.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of time stepping in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.2. Diurnal Cycle\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiurnal cycle type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Via coupling\" \n# \"Specific treatment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Timestepping Framework --> Tracers\nProperties of tracers time stepping in ocean\n13.1. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTracers time stepping scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTracers time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14. Timestepping Framework --> Baroclinic Dynamics\nBaroclinic dynamics in ocean\n14.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBaroclinic dynamics type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Preconditioned conjugate gradient\" \n# \"Sub cyling\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBaroclinic dynamics scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Time Step\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nBaroclinic time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15. Timestepping Framework --> Barotropic\nBarotropic time stepping in ocean\n15.1. Splitting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime splitting method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"split explicit\" \n# \"implicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.2. Time Step\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nBarotropic time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Timestepping Framework --> Vertical Physics\nVertical physics time stepping in ocean\n16.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDetails of vertical time stepping in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17. Advection\nOcean advection\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of advection in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Advection --> Momentum\nProperties of lateral momemtum advection scheme in ocean\n18.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of lateral momemtum advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flux form\" \n# \"Vector form\" \n# TODO - please enter value(s)\n",
"18.2. Scheme Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean momemtum advection scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. ALE\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nUsing ALE for vertical advection ? (if vertical coordinates are sigma)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.ALE') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"19. Advection --> Lateral Tracers\nProperties of lateral tracer advection scheme in ocean\n19.1. Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOrder of lateral tracer advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.2. Flux Limiter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nMonotonic flux limiter for lateral tracer advection scheme in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"19.3. Effective Order\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nEffective order of limited lateral tracer advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.5. Passive Tracers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPassive tracers advected",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ideal age\" \n# \"CFC 11\" \n# \"CFC 12\" \n# \"SF6\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.6. Passive Tracers Advection\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIs advection of passive tracers different than active ? if so, describe.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Advection --> Vertical Tracers\nProperties of vertical tracer advection scheme in ocean\n20.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.2. Flux Limiter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nMonotonic flux limiter for vertical tracer advection scheme in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21. Lateral Physics\nOcean lateral physics\n21.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lateral physics in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of transient eddy representation in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Eddy active\" \n# \"Eddy admitting\" \n# TODO - please enter value(s)\n",
"22. Lateral Physics --> Momentum --> Operator\nProperties of lateral physics operator for momentum in ocean\n22.1. Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDirection of lateral physics momemtum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.2. Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrder of lateral physics momemtum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.3. Discretisation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiscretisation of lateral physics momemtum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff\nProperties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean\n23.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLateral physics momemtum eddy viscosity coeff type in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Constant Coefficient\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"23.3. Variable Coefficient\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.4. Coeff Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.5. Coeff Backscatter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"24. Lateral Physics --> Tracers\nProperties of lateral physics for tracers in ocean\n24.1. Mesoscale Closure\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there a mesoscale closure in the lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"24.2. Submesoscale Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"25. Lateral Physics --> Tracers --> Operator\nProperties of lateral physics operator for tracers in ocean\n25.1. Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDirection of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrder of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Discretisation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiscretisation of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff\nProperties of eddy diffusity coeff in lateral physics tracers scheme in the ocean\n26.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLateral physics tracers eddy diffusity coeff type in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.2. Constant Coefficient\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.3. Variable Coefficient\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Coeff Background\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nDescribe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.5. Coeff Backscatter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"27. Lateral Physics --> Tracers --> Eddy Induced Velocity\nProperties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean\n27.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of EIV in lateral physics tracers in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"GM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Constant Val\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf EIV scheme for tracers is constant, specify coefficient value (M2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.3. Flux Type\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of EIV flux (advective or skew)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Added Diffusivity\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of EIV added diffusivity (constant, flow dependent or none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Vertical Physics\nOcean Vertical Physics\n28.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vertical physics in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Vertical Physics --> Boundary Layer Mixing --> Details\nProperties of vertical physics in ocean\n29.1. Langmuir Cells Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there Langmuir cells mixing in upper ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30. Vertical Physics --> Boundary Layer Mixing --> Tracers\n*Properties of boundary layer (BL) mixing on tracers in the ocean *\n30.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of boundary layer mixing for tracers in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.2. Closure Order\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.3. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant BL mixing of tracers, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground BL mixing of tracers coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. Vertical Physics --> Boundary Layer Mixing --> Momentum\n*Properties of boundary layer (BL) mixing on momentum in the ocean *\n31.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of boundary layer mixing for momentum in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Closure Order\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"31.3. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant BL mixing of momentum, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"31.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground BL mixing of momentum coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32. Vertical Physics --> Interior Mixing --> Details\n*Properties of interior mixing in the ocean *\n32.1. Convection Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of vertical convection in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Non-penetrative convective adjustment\" \n# \"Enhanced vertical diffusion\" \n# \"Included in turbulence closure\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.2. Tide Induced Mixing\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how tide induced mixing is modelled (barotropic, baroclinic, none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.3. Double Diffusion\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there double diffusion",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.4. Shear Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there interior shear mixing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33. Vertical Physics --> Interior Mixing --> Tracers\n*Properties of interior mixing on tracers in the ocean *\n33.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of interior mixing for tracers in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.2. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant interior mixing of tracers, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"33.3. Profile\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIs the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground interior mixing of tracers coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34. Vertical Physics --> Interior Mixing --> Momentum\n*Properties of interior mixing on momentum in the ocean *\n34.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of interior mixing for momentum in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"34.2. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant interior mixing of momentum, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"34.3. Profile\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIs the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground interior mixing of momentum coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35. Uplow Boundaries --> Free Surface\nProperties of free surface in ocean\n35.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of free surface in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nFree surface scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear implicit\" \n# \"Linear filtered\" \n# \"Linear semi-explicit\" \n# \"Non-linear implicit\" \n# \"Non-linear filtered\" \n# \"Non-linear semi-explicit\" \n# \"Fully explicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"35.3. Embeded Seaice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the sea-ice embeded in the ocean model (instead of levitating) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36. Uplow Boundaries --> Bottom Boundary Layer\nProperties of bottom boundary layer in ocean\n36.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of bottom boundary layer in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.2. Type Of Bbl\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of bottom boundary layer in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diffusive\" \n# \"Acvective\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36.3. Lateral Mixing Coef\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"36.4. Sill Overflow\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe any specific treatment of sill overflows",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37. Boundary Forcing\nOcean boundary forcing\n37.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of boundary forcing in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.2. Surface Pressure\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.3. Momentum Flux Correction\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.4. Tracers Flux Correction\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.5. Wave Effects\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how wave effects are modelled at ocean surface.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.6. River Runoff Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how river runoff from land surface is routed to ocean and any global adjustment done.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.7. Geothermal Heating\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how geothermal heating is present at ocean bottom.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38. Boundary Forcing --> Momentum --> Bottom Friction\nProperties of momentum bottom friction in ocean\n38.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of momentum bottom friction in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Non-linear\" \n# \"Non-linear (drag function of speed of tides)\" \n# \"Constant drag coefficient\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"39. Boundary Forcing --> Momentum --> Lateral Friction\nProperties of momentum lateral friction in ocean\n39.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of momentum lateral friction in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Free-slip\" \n# \"No-slip\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"40. Boundary Forcing --> Tracers --> Sunlight Penetration\nProperties of sunlight penetration scheme in ocean\n40.1. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of sunlight penetration scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"1 extinction depth\" \n# \"2 extinction depth\" \n# \"3 extinction depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"40.2. Ocean Colour\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the ocean sunlight penetration scheme ocean colour dependent ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"40.3. Extinction Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe and list extinctions depths for sunlight penetration scheme (if applicable).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"41. Boundary Forcing --> Tracers --> Fresh Water Forcing\nProperties of surface fresh water forcing in ocean\n41.1. From Atmopshere\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of surface fresh water forcing from atmos in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.2. From Sea Ice\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of surface fresh water forcing from sea-ice in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Real salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.3. Forced Mode Restoring\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of surface salinity restoring in forced mode (OMIP)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Naereen/notebooks
|
Février 2021 un mini challenge arithmético-algorithmique.ipynb
|
mit
|
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Février-2021-un-mini-challenge-arithmético-algorithmique\" data-toc-modified-id=\"Février-2021-un-mini-challenge-arithmético-algorithmique-1\"><span class=\"toc-item-num\">1 </span>Février 2021 un mini challenge arithmético-algorithmique</a></span><ul class=\"toc-item\"><li><span><a href=\"#Challenge-:\" data-toc-modified-id=\"Challenge-:-1.1\"><span class=\"toc-item-num\">1.1 </span>Challenge :</a></span></li><li><span><a href=\"#Réponse-en-Java-(par-un-de-mes-élèves-de-L3-SIF)\" data-toc-modified-id=\"Réponse-en-Java-(par-un-de-mes-élèves-de-L3-SIF)-1.2\"><span class=\"toc-item-num\">1.2 </span>Réponse en Java (par <a href=\"http://www.dit.ens-rennes.fr/\" target=\"_blank\">un de mes élèves de L3 SIF</a>)</a></span></li><li><span><a href=\"#Réponse-en-Bash-(par-Lilian-Besson)-?\" data-toc-modified-id=\"Réponse-en-Bash-(par-Lilian-Besson)-?-1.3\"><span class=\"toc-item-num\">1.3 </span>Réponse en Bash (par <a href=\"https://perso.crans.org/besson/\" target=\"_blank\">Lilian Besson</a>) ?</a></span></li><li><span><a href=\"#Réponse-en-Python-(par-Lilian-Besson)\" data-toc-modified-id=\"Réponse-en-Python-(par-Lilian-Besson)-1.4\"><span class=\"toc-item-num\">1.4 </span>Réponse en Python (par <a href=\"https://perso.crans.org/besson/\" target=\"_blank\">Lilian Besson</a>)</a></span></li><li><span><a href=\"#Réponse-en-OCaml-(par-Lilian-Besson)\" data-toc-modified-id=\"Réponse-en-OCaml-(par-Lilian-Besson)-1.5\"><span class=\"toc-item-num\">1.5 </span>Réponse en OCaml (par <a href=\"https://perso.crans.org/besson/\" target=\"_blank\">Lilian Besson</a>)</a></span></li><li><span><a href=\"#Réponse-en-Rust-(par-un-de-mes-élèves-(Théo-Degioanni))\" data-toc-modified-id=\"Réponse-en-Rust-(par-un-de-mes-élèves-(Théo-Degioanni))-1.6\"><span class=\"toc-item-num\">1.6 </span>Réponse en Rust (par <a href=\"https://github.com/Moxinilian\" target=\"_blank\">un de mes élèves (Théo Degioanni)</a>)</a></span></li><li><span><a href=\"#Conclusion\" data-toc-modified-id=\"Conclusion-1.7\"><span class=\"toc-item-num\">1.7 </span>Conclusion</a></span></li><li><span><a href=\"#Challenge-(pour-les-futurs-agrégs-maths)\" data-toc-modified-id=\"Challenge-(pour-les-futurs-agrégs-maths)-1.8\"><span class=\"toc-item-num\">1.8 </span>Challenge (pour les futurs agrégs maths)</a></span><ul class=\"toc-item\"><li><span><a href=\"#Une-première-réponse\" data-toc-modified-id=\"Une-première-réponse-1.8.1\"><span class=\"toc-item-num\">1.8.1 </span>Une première réponse</a></span></li></ul></li></ul></li></ul></div>\n\nFévrier 2021 un mini challenge arithmético-algorithmique\nChallenge :\nLe lundi 01 février 2021, j'ai donné à mes élèves de L3 et M1 du département informatique de l'ENS Rennes le challenge suivant :\n\nMini challenge algorithmique pour les passionnés en manque de petits exercices de code : (optionnel)\nVous avez dû observer que ce mois de février est spécial parce que le 1er février est un lundi, et qu'il a exactement 4 lundis, 4 mardis, 4 mercredis, 4 jeudis, 4 vendredis, 4 samedis et 4 dimanches.\nQuestion : Comptez le nombre de mois de février répondant à ce critère (je n'ai pas trouvé de nom précis), depuis l'année de création de l'ENS Rennes (1994, enfin pour Cachan antenne Bretagne) jusqu'à 2077 (1994 et 2077 inclus).\n\n\nAuteur : Lilian Besson\nLicense : MIT\nDate : 01/02/2021\nCours : ALGO2 @ ENS Rennes\n\n<span style=\"color:red;\">Attention : ce notebook est déclaré avec le kernel 
Python, mais certaines sections (Java, OCaml et Rust) ont été exécutées avec le kernel correspondant. La coloration syntaxique multi-langage n'est pas (encore?) supportée, désolé d'avance.</span>\n\nRéponse en Java (par un de mes élèves de L3 SIF)\n\nOui, on peut utiliser Java dans des notebooks ! Voir ce poste de blogue, et ce kernel IJava.\nMoi je trouve ça chouette, et je m'en suis servi en INF1 au semestre dernier\n\nIl en avait trouvé 9.\n\nDate et heure : lundi 01 février, 20h32.",
"// ceci est du code Java 9 et pas Python !\n// On a besoin des dépendances suivantes :\nimport java.util.Calendar; // pour Calendar.FEBRUARY, .YEAR, .MONDAY\nimport java.util.GregorianCalendar; // pour \nimport java.util.stream.IntStream; // pour cet IntStream\n\n// ceci est du code Java 9 et pas Python !\nIntStream.rangeClosed(1994, 2077)\n //.parallel() // ce .parallel() est inutile, il y a trop peu de valeurs\n .mapToObj(i -> new GregorianCalendar(i, Calendar.FEBRUARY, 1))\n .filter(calendar -> !calendar.isLeapYear(calendar.get(Calendar.YEAR)))\n .filter(calendar -> calendar.get(Calendar.DAY_OF_WEEK) == Calendar.MONDAY)\n .count();",
"Si les cellules précédentes ne s'exécute pas, a priori c'est normal : ce notebook est déclaré en Python !\nIl faudrait utiliser une des astuces suivantes, mais flemme.",
"System.out.println(\"Test d'une cellule en Java dans un notebook déclaré comme Java\");\n// ==> ça marche !\n\n%%java\nSystem.out.println(\"Test d'une cellule en Java dans un notebook déclaré comme Python\");\n// cela ne marche pas !\n\n# On peut aussi écrire une cellule Python qui fait appel à une commande Bash\n!echo 'System.out.println(\"\\nTest d\\'une ligne de Java dans un notebook déclaré comme Python\");' | jshell -q\n\n%%bash\n# voir une commande Bash directement !\n# mais uniquement depuis un notebook Python\necho 'System.out.println(\"\\nok\");' | jshell -q",
"Réponse en Bash (par Lilian Besson) ?\nEn bidouillant avec des programmes en lignes de commandes tels que cal et des grep on devrait pouvoir s'en sortir facilement. Ça tient même en une ligne !\n\nDate et heure : 01/02/2021, 21h16",
"%%bash\nncal February 2021",
"En recherchant exactement cette chaîne \"lu 1 8 15 22\" et en excluant 29 des lignes trouvées, on obtient la réponse :",
"%%bash\ntime for ((annee=1994; annee<=2077; annee+=1)); do\n ncal February $annee \\\n | grep 'lu 1 8 15 22' \\\n | grep -v 29;\ndone \\\n| wc -l\n\n%%bash\nfor ((annee=1994; annee<=2077; annee+=1)); do ncal February $annee | grep 'lu 1 8 15 22' | grep -v 29; done | wc -l",
"Réponse en Python (par Lilian Besson)\nAvec le module calendar on pourrait faire comme en Bash : imprimer les calendriers, et rechercher des chaînes particulières... mais ce n'est pas très propre.\nEssayons avec ce même module mais en écrivant une solution fonctionnelle !\n\nDate et heure : lundi 01 février, 21h40.",
"import calendar\n\ndef filter_annee(annee):\n return (\n set(calendar.Calendar(annee).itermonthdays2(annee, 2))\n & {(1,0), (28, 6), (29, 0)}\n ) == {(1, 0), (28, 6)}\n\nfilter_annee(2020), filter_annee(2021), filter_annee(2022)",
"Et donc on a juste à compter les années, de 1994 à 2077 inclus, qui ne sont pas des années bissextiles et qui satisfont le filtre :",
"%%time\nlen(list(filter(filter_annee, ( annee\n for annee in range(1994, 2077 + 1)\n # if not calendar.isleap(annee) # en fait c'est inutile\n )\n)))",
"Réponse en OCaml (par Lilian Besson)\nEn installant et utilisant ocaml-calendar cela ne doit pas être trop compliqué. On peut s'inspirer du code Java ci-dessus, qui a une approche purement fonctionnelle.\n\nDate et heure : ?",
"(* cette cellule est en OCaml *)\n(* avec la solution en Bash et Sys.command... *)\nSys.command \"bash -c \\\"for ((annee=1994; annee<=2077; annee+=1)); do ncal February \\\\$annee | grep 'lu 1 8 15 22' | grep -v 29; done | wc -l\\\"\";;\n(* mais ça ne compte pas ! *)",
"On pourrait faire en calculant manuellement les quantièmes du 01/01/YYYY pour YYYY entre 1994 et 2077.",
"type day = int\nand dayofweek = int\nand month = int\nand year = int\n;;\ntype date = { d: day; q: dayofweek; m: month; y: year };;\n\nlet is_not_bissextil (y: year) : bool =\n (y mod 4 != 0) || (y mod 100 == 0 && y mod 400 != 0)\n;;\n\nis_not_bissextil 2019;;\nis_not_bissextil 2020;;\nis_not_bissextil 2021;;\n\n(* Ce Warning:8 est volontaire ! *)\nlet length_of_month (m: month) (y: year) : int =\n match m with\n | 4 | 6 | 9 | 11 -> 30\n | 1 | 3 | 5 | 7 | 8 | 10 | 12 -> 31\n | 2 -> if is_not_bissextil(y) then 28 else 29\n;;\n\nlength_of_month 2 2019;; (* 28 *)\nlength_of_month 2 2020;; (* 29 *)\nlength_of_month 2 2021;; (* 28 *)\n\nlet next_dayofweek (q: dayofweek) =\n 1 + (q mod 7)\n;;\n\nnext_dayofweek 1;; (* Monday => Tuesday *)\nnext_dayofweek 2;; (* Tuesday => Wednesday *)\nnext_dayofweek 3;; (* Wednesday => Thursday *)\nnext_dayofweek 4;; (* Thursday => Friday *)\nnext_dayofweek 5;; (* Friday => Saturday *)\nnext_dayofweek 6;; (* Saturday => Sunday *)\nnext_dayofweek 7;; (* Sunday => Monday *)\n\nlet next_day { d; q; m; y } =\n let l_o_m = length_of_month m y in\n if (d = 31 && m = 12) then\n { d=1; q=next_dayofweek q; m=1; y=y+1 }\n else begin\n if (d = l_o_m) then\n { d=1; q=next_dayofweek q; m=m+1; y=y }\n else\n { d=d+1; q=next_dayofweek q; m=m; y=y }\n end\n;;\n\nlet rec iterate (n: int) (f: 'a -> 'a) (x: 'a): 'a =\n match n with\n | 0 -> x (* identité *)\n | 1 -> f(x)\n | n -> iterate (n-1) f (f(x))\n;;\n\nlet start_of_next_month { d; q; m; y } =\n let l_o_m = length_of_month m y in\n let nb_nextday = l_o_m - d + 1 in\n if (m = 12) then\n { d=1; q=iterate nb_nextday next_dayofweek q; m=1; y=y+1 }\n else\n { d=1; q=iterate nb_nextday next_dayofweek q; m=m+1; y=y }\n;;",
"Je vais tricher un peu et utiliser la connaissance que le 01/01/1994 est un samedi :",
"Sys.command \"ncal Jan 1994 | grep ' 1'\";\n\nlet start_date = {d=1; q=6; m=1; y=1994};; (* 01/01/1994 était un samedi, q=6 *)\n(* let start_date = {d=1; q=3; m=1; y=2020};; (* 01/01/2020 était un mercredi, q=3 *) *)\n\nlet end_date = {d=31; q=0; m=1; y=2077};; (* on se fiche du quantième ici ! *)\n(* let end_date = {d=31; q=0; m=1; y=2021};; (* on se fiche du quantième ici ! *) *)\n\nlet aujourdhui = ref start_date;; (* les champs sont pas mutables, go reference *)\nlet nb_bon_mois_fevrier = ref 0;;\nlet aujourdhuis = ref [];;\nlet solutions = ref [];;\n\nwhile (!aujourdhui.y <= end_date.y || !aujourdhui.y <= end_date.y || !aujourdhui.y <= end_date.y) do\n if (!aujourdhui.d = 1 && !aujourdhui.m = 2) then begin\n (* on a un début de février, est-ce qu'il vérifie les critères ? *)\n let date_suivante = iterate 27 next_day !aujourdhui in\n let date_suivante_p1 = next_day date_suivante in\n if (\n date_suivante.d = 28 && date_suivante.m = 2\n && date_suivante_p1.d != 29 (* année pas bisextile *)\n && !aujourdhui.q = 1 (* mois février commence par lundi *)\n ) then begin\n solutions := !aujourdhui :: !solutions;\n incr nb_bon_mois_fevrier;\n end;\n end;\n (* on a un jour quelconque, on avance d'un mois *)\n aujourdhui := start_of_next_month !aujourdhui;\n aujourdhuis := !aujourdhui :: !aujourdhuis;\ndone;;\n\n!nb_bon_mois_fevrier;;",
"On peut même facilement vérifier les années qui ont été trouvées, et donc vérifier que 2021 est dedans.\nJ'ai aussi eu la chance d'observer ce phénomène en 1999 (mais je ne me souviens pas l'avoir remarqué, j'avais 6 ans !) et en 2010 (et je me souviens l'avoir remarqué, normal j'étais en MP* et on adore ce genre de coïncidences).",
"!solutions",
"(le kernel ocaml-jupyter est génial, mais plante un peu, j'ai galéré à écrire cette douzaine de cellules sans devoir relancer Jupyter plusieurs fois... bug signalé, résolution en cours...)\n\nRéponse en Rust (par un de mes élèves (Théo Degioanni))\nLe code Rust proposé peut être executé depuis le bac à sable Rust :\nhttps://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=2ab9c57e9d114a344363e21f9493bf22\nMais on peut aussi utiliser le kernel Jupyter proposé par le projet evcxr de Google :\n\nil faut installer Rust, je n'avais jamais fait j'ai donc suivi rustup.rs le site officiel de présentation de l'installation de Rust ;\npuis j'ai suivi les explications pour installer le kernel sur GitHub @google/evcxr ;\npuis j'ai écrit cette cellule ci-dessous, j'ai changé le Noyau en Rust, et j'ai exécuté la cellule ;\nnotez qu'avec l'extension jupyter nbextension ExecuteTime, j'ai vu que la première cellule avait pris quasiment 10 secondes... mais je pense qu'il s'agit du temps d'installer et de compilation du module chrono (je ne suis pas encore très familier avec Rust).\nLes exécutions suivantes prennent environ 300ms pour la définition (y a-t-il recompilation même si le texte ne change pas ?) de la fonction ;\n\nEt environ 700ms pour l'exécution. C'est bien plus lent que les 35 ms de mon code naïf en OCaml (qui est juste interprété et pas compilé), que les 10 ms de Python, ou les 100 ms de Bash. Mais pas grave !\n\n\nDate et heure : lundi 01/02/2021, 21h20",
":dep chrono = \"0.4\"\nuse chrono::TimeZone;\nuse chrono::Datelike;\n\nfn main() {\n let n = (1994..=2077)\n .filter(|n| chrono::Utc.ymd_opt(*n, 2, 29) == chrono::LocalResult::None)\n .map(|n| chrono::Utc.ymd(n, 2, 1))\n .filter(|t| t.weekday() == chrono::Weekday::Mon)\n .count();\n \n println!(\"{}\", n);\n}\n\nmain()",
"On trouve la même réponse que les autres langages, parfait.\n\nConclusion\nEn utilisant bien la librairie standard de votre langage favori, ce n'est pas très difficile.\nCurieux de plus ? Voir cet article et celui là qui expliquent que ce n'est pas si rare (comme vous venez de le calculer), contrairement à une rumeur qui semble circuler régulièrement sur les réseaux sociaux. La rumeur dirait que ces mois de février n'arrivent qu'une fois tous les 823 ans...\nChallenge (pour les futurs agrégs maths)\nBonus spécial : si quelqu'un trouve un calcul de maths qui permette de trouver la réponse \"à la main\", ou en tous cas sans un programme informatique.\nUne première réponse\n\nDate : lundi 01/02/2021, 22h32\n\n\n\nSur 28 ans, chaque année on décale d'un jour, sauf les bissextiles où c'est de 2 (car 365%7 = 1).\nDonc il y a 7 années bissextiles et comme 7 et 4 sont premiers entre eux, autant de chacun des jours, donc 3 cas (une des commençant lundi ayant 5 lundis).\nAucunes années non bissextiles en 4 entre 94 et 77\nOn a donc 83 années = 3x28-1.\n2020-28 = 1992 donc 1993 (l'année manquante pour tomber rond) n'était pas un cas de lundi.\nLe nombre de cas semblable est donc de 3x3 = 9 (réponse correcte).\n\nC'est tout pour ce notebook, allez voir ce projet pour d'autres notebooks."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.19/_downloads/ee17e3e8df43ce4f0119faeeeccc374f/plot_sensors_time_frequency.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Frequency and time-frequency sensors analysis\nThe objective is to show you how to explore the spectral content\nof your data (frequency and time-frequency). Here we'll work on Epochs.\nWe will use this dataset: somato-dataset. It contains so-called event\nrelated synchronizations (ERS) / desynchronizations (ERD) in the beta band.",
"# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>\n# Stefan Appelhoff <stefan.appelhoff@mailbox.org>\n# Richard Höchenberger <richard.hoechenberger@gmail.com>\n#\n# License: BSD (3-clause)\nimport os.path as op\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.time_frequency import tfr_morlet, psd_multitaper, psd_welch\nfrom mne.datasets import somato",
"Set parameters",
"data_path = somato.data_path()\nsubject = '01'\ntask = 'somato'\nraw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',\n 'sub-{}_task-{}_meg.fif'.format(subject, task))\n\n# Setup for reading the raw data\nraw = mne.io.read_raw_fif(raw_fname)\nevents = mne.find_events(raw, stim_channel='STI 014')\n\n# picks MEG gradiometers\npicks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True, stim=False)\n\n# Construct Epochs\nevent_id, tmin, tmax = 1, -1., 3.\nbaseline = (None, 0)\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=baseline, reject=dict(grad=4000e-13, eog=350e-6),\n preload=True)\n\nepochs.resample(200., npad='auto') # resample to reduce computation time",
"Frequency analysis\nWe start by exploring the frequence content of our epochs.\nLet's first check out all channel types by averaging across epochs.",
"epochs.plot_psd(fmin=2., fmax=40., average=True, spatial_colors=False)",
"Now let's take a look at the spatial distributions of the PSD.",
"epochs.plot_psd_topomap(ch_type='grad', normalize=True)",
"Alternatively, you can also create PSDs from Epochs objects with functions\nthat start with psd_ such as\n:func:mne.time_frequency.psd_multitaper and\n:func:mne.time_frequency.psd_welch.",
"f, ax = plt.subplots()\npsds, freqs = psd_multitaper(epochs, fmin=2, fmax=40, n_jobs=1)\npsds = 10. * np.log10(psds)\npsds_mean = psds.mean(0).mean(0)\npsds_std = psds.mean(0).std(0)\n\nax.plot(freqs, psds_mean, color='k')\nax.fill_between(freqs, psds_mean - psds_std, psds_mean + psds_std,\n color='k', alpha=.5)\nax.set(title='Multitaper PSD (gradiometers)', xlabel='Frequency (Hz)',\n ylabel='Power Spectral Density (dB)')\nplt.show()",
"Notably, :func:mne.time_frequency.psd_welch supports the keyword argument\naverage, which specifies how to estimate the PSD based on the individual\nwindowed segments. The default is average='mean', which simply calculates\nthe arithmetic mean across segments. Specifying average='median', in\ncontrast, returns the PSD based on the median of the segments (corrected for\nbias relative to the mean), which is a more robust measure.",
"# Estimate PSDs based on \"mean\" and \"median\" averaging for comparison.\nkwargs = dict(fmin=2, fmax=40, n_jobs=1)\npsds_welch_mean, freqs_mean = psd_welch(epochs, average='mean', **kwargs)\npsds_welch_median, freqs_median = psd_welch(epochs, average='median', **kwargs)\n\n# Convert power to dB scale.\npsds_welch_mean = 10 * np.log10(psds_welch_mean)\npsds_welch_median = 10 * np.log10(psds_welch_median)\n\n# We will only plot the PSD for a single sensor in the first epoch.\nch_name = 'MEG 0122'\nch_idx = epochs.info['ch_names'].index(ch_name)\nepo_idx = 0\n\n_, ax = plt.subplots()\nax.plot(freqs_mean, psds_welch_mean[epo_idx, ch_idx, :], color='k',\n ls='-', label='mean of segments')\nax.plot(freqs_median, psds_welch_median[epo_idx, ch_idx, :], color='k',\n ls='--', label='median of segments')\n\nax.set(title='Welch PSD ({}, Epoch {})'.format(ch_name, epo_idx),\n xlabel='Frequency (Hz)', ylabel='Power Spectral Density (dB)')\nax.legend(loc='upper right')\nplt.show()",
"Lastly, we can also retrieve the unaggregated segments by passing\naverage=None to :func:mne.time_frequency.psd_welch. The dimensions of\nthe returned array are (n_epochs, n_sensors, n_freqs, n_segments).",
"psds_welch_unagg, freqs_unagg = psd_welch(epochs, average=None, **kwargs)\nprint(psds_welch_unagg.shape)",
"Time-frequency analysis: power and inter-trial coherence\nWe now compute time-frequency representations (TFRs) from our Epochs.\nWe'll look at power and inter-trial coherence (ITC).\nTo this we'll use the function :func:mne.time_frequency.tfr_morlet\nbut you can also use :func:mne.time_frequency.tfr_multitaper\nor :func:mne.time_frequency.tfr_stockwell.",
"# define frequencies of interest (log-spaced)\nfreqs = np.logspace(*np.log10([6, 35]), num=8)\nn_cycles = freqs / 2. # different number of cycle per frequency\npower, itc = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles, use_fft=True,\n return_itc=True, decim=3, n_jobs=1)",
"Inspect power\n<div class=\"alert alert-info\"><h4>Note</h4><p>The generated figures are interactive. In the topo you can click\n on an image to visualize the data for one sensor.\n You can also select a portion in the time-frequency plane to\n obtain a topomap for a certain time-frequency region.</p></div>",
"power.plot_topo(baseline=(-0.5, 0), mode='logratio', title='Average power')\npower.plot([82], baseline=(-0.5, 0), mode='logratio', title=power.ch_names[82])\n\nfig, axis = plt.subplots(1, 2, figsize=(7, 4))\npower.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=8, fmax=12,\n baseline=(-0.5, 0), mode='logratio', axes=axis[0],\n title='Alpha', show=False)\npower.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=13, fmax=25,\n baseline=(-0.5, 0), mode='logratio', axes=axis[1],\n title='Beta', show=False)\nmne.viz.tight_layout()\nplt.show()",
"Joint Plot\nYou can also create a joint plot showing both the aggregated TFR\nacross channels and topomaps at specific times and frequencies to obtain\na quick overview regarding oscillatory effects across time and space.",
"power.plot_joint(baseline=(-0.5, 0), mode='mean', tmin=-.5, tmax=2,\n timefreqs=[(.5, 10), (1.3, 8)])",
"Inspect ITC",
"itc.plot_topo(title='Inter-Trial coherence', vmin=0., vmax=1., cmap='Reds')",
"<div class=\"alert alert-info\"><h4>Note</h4><p>Baseline correction can be applied to power or done in plots.\n To illustrate the baseline correction in plots, the next line is\n commented power.apply_baseline(baseline=(-0.5, 0), mode='logratio')</p></div>\n\nExercise\n\nVisualize the inter-trial coherence values as topomaps as done with\n power."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
turbomanage/training-data-analyst
|
courses/ai-for-finance/solution/arima_model.ipynb
|
apache-2.0
|
[
"Building an ARIMA Model for a Financial Dataset\nIn this notebook, you will build an ARIMA model for AAPL stock closing prices. The lab objectives are:\n\nPull data from Google Cloud Storage into a Pandas dataframe\nLearn how to prepare raw stock closing data for an ARIMA model\nApply the Dickey-Fuller test \nBuild an ARIMA model using the statsmodels library\n\nMake sure you restart the Python kernel after executing the pip install command below! After you restart the kernel you don't have to execute the command again.",
"!pip install --user statsmodels\n\n%matplotlib inline\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport datetime\n\n%config InlineBackend.figure_format = 'retina'",
"Import data from Google Clod Storage\nIn this section we'll read some ten years' worth of AAPL stock data into a Pandas dataframe. We want to modify the dataframe such that it represents a time series. This is achieved by setting the date as the index.",
"df = pd.read_csv('gs://cloud-training/ai4f/AAPL10Y.csv')\n\ndf['date'] = pd.to_datetime(df['date'])\ndf.sort_values('date', inplace=True)\ndf.set_index('date', inplace=True)\n\nprint(df.shape)\n\ndf.head()",
"Prepare data for ARIMA\nThe first step in our preparation is to resample the data such that stock closing prices are aggregated on a weekly basis.",
"df_week = df.resample('w').mean()\ndf_week = df_week[['close']]\ndf_week.head()",
"Let's create a column for weekly returns. Take the log to of the returns to normalize large fluctuations.",
"df_week['weekly_ret'] = np.log(df_week['close']).diff()\ndf_week.head()\n\n# drop null rows\ndf_week.dropna(inplace=True)\n\ndf_week.weekly_ret.plot(kind='line', figsize=(12, 6));\n\nudiff = df_week.drop(['close'], axis=1)\nudiff.head()",
"Test for stationarity of the udiff series\nTime series are stationary if they do not contain trends or seasonal swings. The Dickey-Fuller test can be used to test for stationarity.",
"import statsmodels.api as sm\nfrom statsmodels.tsa.stattools import adfuller\n\nrolmean = udiff.rolling(20).mean()\nrolstd = udiff.rolling(20).std()\n\nplt.figure(figsize=(12, 6))\norig = plt.plot(udiff, color='blue', label='Original')\nmean = plt.plot(rolmean, color='red', label='Rolling Mean')\nstd = plt.plot(rolstd, color='black', label = 'Rolling Std Deviation')\nplt.title('Rolling Mean & Standard Deviation')\nplt.legend(loc='best')\nplt.show(block=False)\n\n# Perform Dickey-Fuller test\ndftest = sm.tsa.adfuller(udiff.weekly_ret, autolag='AIC')\ndfoutput = pd.Series(dftest[0:4], index=['Test Statistic', 'p-value', '#Lags Used', 'Number of Observations Used'])\nfor key, value in dftest[4].items():\n dfoutput['Critical Value ({0})'.format(key)] = value\n \ndfoutput",
"With a p-value < 0.05, we can reject the null hypotehsis. This data set is stationary.\nACF and PACF Charts\nMaking autocorrelation and partial autocorrelation charts help us choose hyperparameters for the ARIMA model.\nThe ACF gives us a measure of how much each \"y\" value is correlated to the previous n \"y\" values prior.\nThe PACF is the partial correlation function gives us (a sample of) the amount of correlation between two \"y\" values separated by n lags excluding the impact of all the \"y\" values in between them.",
"from statsmodels.graphics.tsaplots import plot_acf\n\n# the autocorrelation chart provides just the correlation at increasing lags\nfig, ax = plt.subplots(figsize=(12,5))\nplot_acf(udiff.values, lags=10, ax=ax)\nplt.show()\n\nfrom statsmodels.graphics.tsaplots import plot_pacf\n\nfig, ax = plt.subplots(figsize=(12,5))\nplot_pacf(udiff.values, lags=10, ax=ax)\nplt.show()",
"The table below summarizes the patterns of the ACF and PACF.\n<img src=\"../imgs/How_to_Read_PACF_ACF.jpg\" alt=\"drawing\" width=\"300\" height=\"300\"/>\nThe above chart shows that reading PACF gives us a lag \"p\" = 3 and reading ACF gives us a lag \"q\" of 1. Let's Use Statsmodel's ARMA with those parameters to build a model. The way to evaluate the model is to look at AIC - see if it reduces or increases. The lower the AIC (i.e. the more negative it is), the better the model.\nBuild ARIMA Model\nSince we differenced the weekly closing prices, we technically only need to build an ARMA model. The data has already been integrated and is stationary.",
"from statsmodels.tsa.arima_model import ARMA\n\n# Notice that you have to use udiff - the differenced data rather than the original data. \nar1 = ARMA(tuple(udiff.values), (3, 1)).fit()\nar1.summary()",
"Our model doesn't do a good job predicting variance in the original data (peaks and valleys).",
"plt.figure(figsize=(12, 8))\nplt.plot(udiff.values, color='blue')\npreds = ar1.fittedvalues\nplt.plot(preds, color='red')\nplt.show()",
"Let's make a forecast 2 weeks ahead:",
"steps = 2\n\nforecast = ar1.forecast(steps=steps)[0]\n\nplt.figure(figsize=(12, 8))\nplt.plot(udiff.values, color='blue')\n\npreds = ar1.fittedvalues\nplt.plot(preds, color='red')\n\nplt.plot(pd.DataFrame(np.array([preds[-1],forecast[0]]).T,index=range(len(udiff.values)+1, len(udiff.values)+3)), color='green')\nplt.plot(pd.DataFrame(forecast,index=range(len(udiff.values)+1, len(udiff.values)+1+steps)), color='green')\nplt.title('Display the predictions with the ARIMA model')\nplt.show()",
"The forecast is not great but if you tune the hyper parameters some more, you might be able to reduce the errors."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Unidata/unidata-python-workshop
|
notebooks/Pandas/Pandas Introduction.ipynb
|
mit
|
[
"<div style=\"width:1000 px\">\n\n<div style=\"float:right; width:98 px; height:98px;\">\n<img src=\"https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png\" alt=\"Unidata Logo\" style=\"height: 98px;\">\n</div>\n\n<h1>Introduction to Pandas</h1>\n<h3>Unidata Python Workshop</h3>\n\n<div style=\"clear:both\"></div>\n</div>\n\n<hr style=\"height:2px;\">\n\nOverview:\n\nTeaching: 35 minutes\nExercises: 40 minutes\n\nQuestions\n\nWhat is Pandas?\nWhat are the basic Pandas data structures?\nHow can I read data into Pandas?\nWhat are some of the data operations available in Pandas?\n\nObjectives\n\n<a href=\"#series\">Data Series</a>\n<a href=\"#frames\">Data Frames</a>\n<a href=\"#loading\">Loading Data in Pandas</a>\n<a href=\"#missing\">Missing Data</a>\n<a href=\"#manipulating\">Manipulating Data</a>\n\n<a name=\"series\"></a>\nData Series\nData series are one of the fundamental data structures in Pandas. You can think of them like a dictionary; they have a key (index) and value (data/values) like a dictionary, but also have some handy functionality attached to them.\nTo start out, let's create a series from scratch. We'll imagine these are temperature observations.",
"import pandas as pd\ntemperatures = pd.Series([23, 20, 25, 18])\ntemperatures",
"The values on the left are the index (zero based integers by default) and on the right are the values. Notice that the data type is an integer. Any NumPy datatype is acceptable in a series.\nThat's great, but it'd be more useful if the station were associated with those values. In fact you could say we want the values indexed by station name.",
"temperatures = pd.Series([23, 20, 25, 18], index=['TOP', 'OUN', 'DAL', 'DEN'])\ntemperatures",
"Now, very similar to a dictionary, we can use the index to access and modify elements.",
"temperatures['DAL']\n\ntemperatures[['DAL', 'OUN']]",
"We can also do basic filtering, math, etc.",
"temperatures[temperatures > 20]\n\ntemperatures + 2",
"Remember how I said that series are like dictionaries? We can create a series straight from a dictionary.",
"dps = {'TOP': 14,\n 'OUN': 18,\n 'DEN': 9,\n 'PHX': 11,\n 'DAL': 23}\n\ndewpoints = pd.Series(dps)\ndewpoints",
"It's also easy to check and see if an index exists in a given series:",
"'PHX' in dewpoints\n\n'PHX' in temperatures",
"Series have a name attribute and their index has a name attribute.",
"temperatures.name = 'temperature'\ntemperatures.index.name = 'station'\n\ntemperatures",
"Exercise\n\nCreate a series of pressures for stations TOP, OUN, DEN, and DAL (assign any values you like).\nSet the series name and series index name.\nPrint the pressures for all stations which have a dewpoint below 15.",
"# Your code goes here\n",
"Solution",
"# %load solutions/make_series.py\n",
"<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">\n\n<a name=\"frames\"></a>\nData Frames\nSeries are great, but what about a bunch of related series? Something like a table or a spreadsheet? Enter the data frame. A data frame can be thought of as a dictionary of data series. They have indexes for their rows and their columns. Each data series can be of a different type, but they will all share a common index.\nThe easiest way to create a data frame by hand is to use a dictionary.",
"data = {'station': ['TOP', 'OUN', 'DEN', 'DAL'],\n 'temperature': [23, 20, 25, 18],\n 'dewpoint': [14, 18, 9, 23]}\n\ndf = pd.DataFrame(data)\ndf",
"You can access columns (data series) using dictionary type notation or attribute type notation.",
"df['temperature']\n\ndf.dewpoint",
"Notice the index is shared and that the name of the column is attached as the series name.\nYou can also create a new column and assign values. If I only pass a scalar it is duplicated.",
"df['wspeed'] = 0.\ndf",
"Let's set the index to be the station.",
"df.index = df.station\ndf",
"Well, that's close, but we now have a redundant column, so let's get rid of it.",
"df = df.drop('station', axis='columns')\ndf",
"We can also add data and order it by providing index values. Note that the next cell contains data that's \"out of order\" compared to the dataframe shown above. However, by providing the index that corresponds to each value, the data is organized correctly into the dataframe.",
"df['pressure'] = pd.Series([1010,1000,998,1018], index=['DEN','TOP','DAL','OUN'])\ndf",
"Now let's get a row from the dataframe instead of a column.",
"df.loc['DEN']",
"We can even transpose the data easily if we needed that do make things easier to merge/munge later.",
"df.T",
"Look at the values attribute to access the data as a 1D or 2D array for series and data frames recpectively.",
"df.values\n\ndf.temperature.values",
"Exercise\n\nAdd a series of rain observations to the existing data frame.\nApply an instrument correction of -2 to the dewpoint observations.",
"# Your code goes here\n",
"Solution",
"# %load solutions/rain_obs.py\n",
"<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">\n\n<a name=\"loading\"></a>\nLoading Data in Pandas\nThe real power of pandas is in manupulating and summarizing large sets of tabular data. To do that, we'll need a large set of tabular data. We've included a file in this directory called JAN17_CO_ASOS.txt that has all of the ASOS observations for several stations in Colorado for January of 2017. It's a few hundred thousand rows of data in a tab delimited format. Let's load it into Pandas.",
"import pandas as pd\n\ndf = pd.read_csv('Jan17_CO_ASOS.txt', sep='\\t')\n\ndf.head()\n\ndf = pd.read_csv('Jan17_CO_ASOS.txt', sep='\\t', parse_dates=['valid'])\n\ndf.head()\n\ndf = pd.read_csv('Jan17_CO_ASOS.txt', sep='\\t', parse_dates=['valid'], na_values='M')\n\ndf.head()",
"Let's look in detail at those column names. Turns out we need to do some cleaning of this file. Welcome to real world data analysis.",
"df.columns\n\ndf.columns = ['station', 'time', 'temperature', 'dewpoint', 'pressure']\n\ndf.head()",
"For other formats of data CSV, fixed width, etc. that are tools to read it as well. You can even read excel files straight into Pandas.\n<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">\n\n<a name=\"missing\"></a>\nMissing Data\nWe've already dealt with some missing data by turning the 'M' string into actual NaN's while reading the file in. We can do one better though and delete any rows that have all values missing. There are similar operations that could be performed for columns. You can even drop if any values are missing, all are missing, or just those you specify are missing.",
"len(df)\n\ndf = df.dropna(axis='rows', how='all', subset=['temperature', 'dewpoint', 'pressure'])\n\nlen(df)\n\ndf.head()",
"Exercise\nOur dataframe df has data in which we dropped any entries that were missing all of the temperature, dewpoint and pressure observations. Let's modify our command some and create a new dataframe df2 that only keeps observations that have all three variables (i.e. if a pressure is missing, the whole entry is dropped). This is useful if you were doing some computation that requires a complete observation to work.",
"# Your code goes here\n# df2 = ",
"Solution",
"# %load solutions/drop_obs.py\n",
"Lastly, we still have the original index values. Let's reindex to a new zero-based index for only the rows that have valid data in them.",
"df.reset_index(drop=True)\n\ndf.head()",
"<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">\n\n<a name=\"manipulating\"></a>\nManipulating Data\nWe can now take our data and do some intersting things with it. Let's start with a simple min/max.",
"print(f'Min: {df.temperature.min()}\\nMax: {df.temperature.max()}')",
"You can also do some useful statistics on data with attached methods like corr for correlation coefficient.",
"df.temperature.corr(df.dewpoint)",
"We can also call a groupby on the data frame to start getting some summary information for each station.",
"df.groupby('station').mean()",
"Exercise\nCalculate the min, max, and standard deviation of the temperature field grouped by each station.",
"# Calculate min\n\n\n# Calculate max\n\n\n# Calculate standard deviation\n",
"Solution",
"# %load solutions/calc_stats.py\n",
"Now, let me show you how to do all of that and more in a single call.",
"df.groupby('station').describe()",
"Now let's suppose we're going to make a meteogram or similar and want to get all of the data for a single station.",
"df.groupby('station').get_group('0CO').head().reset_index(drop=True)",
"Exercise\n\nRound the temperature column to whole degrees.\nGroup the observations by temperature and use the count method to see how many instances of the rounded temperatures there are in the dataset.",
"# Your code goes here\n",
"Solution",
"# %load solutions/temperature_count.py\n",
"<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
bloomberg/bqplot
|
examples/Marks/Object Model/Market Map.ipynb
|
apache-2.0
|
[
"import pandas as pd\nfrom ipywidgets import Label, VBox, Layout\nfrom bqplot.market_map import MarketMap\nfrom bqplot import ColorScale, ColorAxis, DateScale, LinearScale, Axis, Lines, Figure",
"Get Data",
"data = pd.read_csv(\"../../data_files/country_codes.csv\", index_col=[0])\ncountry_codes = data.index.values\ncountry_names = data[\"Name\"]",
"Basic Market Map",
"market_map = MarketMap(\n names=country_codes,\n # basic data which needs to set for each map\n ref_data=data,\n # Data frame which can be used for different properties of the map\n # Axis and scale for color data\n tooltip_fields=[\"Name\"],\n layout=Layout(width=\"800px\", height=\"600px\"),\n)\n\nmarket_map\n\nmarket_map.colors = [\"MediumSeaGreen\"]\n\nmarket_map.font_style = {\"font-size\": \"16px\", \"fill\": \"white\"}\n\nmarket_map.title = \"Country Map\"\nmarket_map.title_style = {\"fill\": \"Red\"}",
"GDP data with grouping by continent\nWorld Bank national accounts data, and OECD National Accounts data files. (The World Bank: GDP per capita (current US$))",
"gdp_data = pd.read_csv(\n \"../../data_files/gdp_per_capita.csv\", index_col=[0], parse_dates=True\n)\ngdp_data.fillna(method=\"backfill\", inplace=True)\ngdp_data.fillna(method=\"ffill\", inplace=True)\n\ncol = ColorScale(scheme=\"Greens\")\ncontinents = data[\"Continent\"].values\nax_c = ColorAxis(scale=col, label=\"GDP per Capita\", visible=False)\n\ndata[\"GDP\"] = gdp_data.iloc[-1]\n\nmarket_map = MarketMap(\n names=country_codes,\n groups=continents, # Basic data which needs to set for each map\n cols=25,\n row_groups=3, # Properties for the visualization\n ref_data=data, # Data frame used for different properties of the map\n tooltip_fields=[\n \"Name\",\n \"Continent\",\n \"GDP\",\n ], # Columns from data frame to be displayed as tooltip\n tooltip_formats=[\"\", \"\", \".1f\"],\n scales={\"color\": col},\n axes=[ax_c],\n layout=Layout(min_width=\"800px\", min_height=\"600px\"),\n) # Axis and scale for color data\n\ndeb_output = Label()\n\n\ndef selected_index_changed(change):\n deb_output.value = str(change.new)\n\n\nmarket_map.observe(selected_index_changed, \"selected\")\nVBox([deb_output, market_map])\n\n# Attribute to show the names of the groups, in this case the continents\nmarket_map.show_groups = True\n\n# Setting the selected countries\nmarket_map.show_groups = False\nmarket_map.selected = [\"PAN\", \"FRA\", \"PHL\"]\n\n# changing selected stroke and hovered stroke variable\nmarket_map.selected_stroke = \"yellow\"\nmarket_map.hovered_stroke = \"violet\"",
"Setting the color based on data",
"# Adding data for color and making color axis visible\nmarket_map.colors = [\"#ccc\"]\nmarket_map.color = data[\"GDP\"]\nax_c.visible = True",
"Adding a widget as tooltip",
"# Creating the figure to be displayed as the tooltip\nsc_x = DateScale()\nsc_y = LinearScale()\n\nax_x = Axis(scale=sc_x, grid_lines=\"dashed\", label=\"Date\")\nax_y = Axis(\n scale=sc_y,\n orientation=\"vertical\",\n grid_lines=\"dashed\",\n label=\"GDP\",\n label_location=\"end\",\n label_offset=\"-1em\",\n)\n\nline = Lines(\n x=gdp_data.index.values, y=[], scales={\"x\": sc_x, \"y\": sc_y}, colors=[\"orange\"]\n)\nfig_tooltip = Figure(marks=[line], axes=[ax_x, ax_y])\n\nmarket_map = MarketMap(\n names=country_codes,\n groups=continents,\n cols=25,\n row_groups=3,\n color=data[\"GDP\"],\n scales={\"color\": col},\n axes=[ax_c],\n ref_data=data,\n tooltip_widget=fig_tooltip,\n freeze_tooltip_location=True,\n colors=[\"#ccc\"],\n layout=Layout(min_width=\"900px\", min_height=\"600px\"),\n)\n\n# Update the tooltip chart\nhovered_symbol = \"\"\n\n\ndef hover_handler(self, content):\n global hovered_symbol\n symbol = content.get(\"data\", \"\")\n\n if symbol != hovered_symbol:\n hovered_symbol = symbol\n if gdp_data.get(hovered_symbol) is not None:\n line.y = gdp_data[hovered_symbol].values\n fig_tooltip.title = content.get(\"ref_data\", {}).get(\"Name\", \"\")\n\n\n# Custom msg sent when a particular cell is hovered on\nmarket_map.on_hover(hover_handler)\nmarket_map",
"This notebook uses data derived from the World Bank dataset.\n- The World Bank: GDP per capita (current US$)\n- The World Bank: Country Codes\nSee the LICENSE file for more information."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jorisvandenbossche/DS-python-data-analysis
|
notebooks/python_recap/03-functions.ipynb
|
bsd-3-clause
|
[
"Python the basics: functions\n\nDS Data manipulation, analysis and visualization in Python\nMay/June, 2021\n© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons\n\n\n\nThis notebook is largely based on material of the Python Scientific Lecture Notes (https://scipy-lectures.github.io/), adapted with some exercises.\n\nFunctions\nFunction definition\nFunction blocks must be indented as other control-flow blocks",
"def the_answer_to_the_universe():\n print(42)\n\nthe_answer_to_the_universe()",
"Note: the syntax to define a function:\n\nthe def keyword;\nis followed by the function’s name, then\nthe arguments of the function are given between parentheses followed by a colon.\nthe function body;\nand return object for optionally returning values.\n\nReturn statement\nFunctions can optionally return values",
"def calcAreaSquare(edge):\n return edge**2\ncalcAreaSquare(2.3)",
"Parameters\nMandatory parameters (positional arguments)",
"def double_it(x):\n return 2*x\n\ndouble_it(3)\n\n#double_it()",
"Optional parameters (keyword or named arguments)\nThe order of the keyword arguments does not matter, but it is good practice to use the same ordering as the function's definition\nKeyword arguments are a very convenient feature for defining functions with a variable number of arguments, especially when default values are to be used in most calls to the function.",
"def double_it (x=1):\n return 2*x\n\nprint(double_it(3))\n\nprint(double_it())\n\ndef addition(int1=1, int2=1, int3=1):\n return int1 + 2*int2 + 3*int3\n\nprint(addition(int1=1, int2=1, int3=1))\n\nprint(addition(int1=1, int3=1, int2=1)) # sequence of these named arguments do not matter",
"<div class=\"alert alert-danger\">\n <b>NOTE</b>: <br><br>\n\nDefault values are evaluated when the function is defined, not when it is called. This can be problematic when using mutable types (e.g. dictionary or list) and modifying them in the function body, since the modifications will be persistent across invocations of the function.\n\n</div>\n\nUsing an immutable type in a keyword argument:",
"bigx = 10\ndef double_it(x=bigx):\n return x * 2\nbigx = 1e9\ndouble_it()",
"Using an mutable type in a keyword argument (and modifying it inside the function body)",
"def add_to_dict(args={'a': 1, 'b': 2}):\n for i in args.keys():\n args[i] += 1\n print(args)\n\nadd_to_dict\nadd_to_dict()\nadd_to_dict()\nadd_to_dict()\n\n#the {'a': 1, 'b': 2} was created in the memory on the moment that the definition was evaluated\n\ndef add_to_dict(args=None):\n if not args:\n args = {'a': 1, 'b': 2}\n \n for i in args.keys():\n args[i] += 1\n \n print(args)\n\nadd_to_dict\nadd_to_dict()\nadd_to_dict()",
"Variable number of parameters\nSpecial forms of parameters:\n\n*args: any number of positional arguments packed into a tuple\n**kwargs: any number of keyword arguments packed into a dictionary",
"def variable_args(*args, **kwargs):\n print('args is', args)\n print('kwargs is', kwargs)\n\nvariable_args('one', 'two', x=1, y=2, z=3)",
"Docstrings\nDocumentation about what the function does and its parameters. General convention:",
"def funcname(params):\n \"\"\"Concise one-line sentence describing the function.\n \n Extended summary which can contain multiple paragraphs.\n \"\"\"\n # function body\n pass\n\nfuncname?",
"Functions are objects\nFunctions are first-class objects, which means they can be:\n\nassigned to a variable\nan item in a list (or any collection)\npassed as an argument to another function.",
"va = variable_args\nva('three', x=1, y=2)",
"Methods\nMethods are functions attached to objects. You’ve seen these in our examples on lists, dictionaries, strings, etc...\nCalling them can be done by dir(object):",
"dd = {'antea': 3, 'IMDC': 2, 'arcadis': 4, 'witteveen': 5, 'grontmij': 1, 'fluves': 6, 'marlinks': 7}",
"<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Make a function of the exercise in the previous notebook: Given the dictionary `dd`, check if a key is already existing in the dictionary and raise an exception if the key already exist. Otherwise, return the dict with the element added.\n</div>",
"def check_for_key(checkdict, key):\n \"\"\"\n Function checks the presence of key in dictionary checkdict and returns an \n exception if the key is already used in the dictionary\n \n \"\"\"\n if key in checkdict.keys():\n raise Exception('Key already used in this dictionary')\n\ncheck_for_key(dd, 'deme')\n\n#check_for_key(dd, 'antea') # uncomment this line",
"Object oriented Programming\nWondering what OO is? A very nice introduction is given here: http://py.processing.org/tutorials/objects/\nPython supports object-oriented programming (OOP). The goals of OOP are:\n\nto organize the code, and\nto re-use code in similar contexts.\n\nHere is a small example: we create a Student class, which is an object gathering several custom functions (methods) and variables (attributes), we will be able to use:",
"class Employee(): #object\n \n def __init__(self, name, wage=60.):\n \"\"\"\n Employee class to save the amount of hours worked and related earnings\n \"\"\"\n self.name = name\n self.wage = wage\n \n self.hours = 0. \n \n def worked(self, hours):\n \"\"\"add worked hours on a project\n \"\"\"\n try:\n hours = float(hours)\n except:\n raise Exception(\"Hours not convertable to float!\")\n \n self.hours += hours\n \n def calc_earnings(self):\n \"\"\"\n Calculate earnings\n \"\"\"\n return self.hours *self.wage\n\nbert = Employee('bert')\n\nbert.worked(10.)\nbert.worked(20.)\nbert.wage = 80.\n\nbert.calc_earnings()\n\ndir(Employee)",
"It is just the same as all the other objects we worked with!\n\n<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Extend the class `Employee` with a projects attribute, which is a dictionary. Projects can be added by the method `new_project`. Hours are contributed to a specific project\n</div>",
"class Employee(): #object\n \n def __init__(self, name, wage=60.):\n \"\"\"\n Employee class to save the amount of hours worked and related earnings\n \"\"\"\n self.name = name\n self.wage = wage\n self.projects = {} \n\n def new_project(self, projectname):\n \"\"\"\n \"\"\"\n if projectname in self.projects:\n raise Exception(\"project already exist for\", self.name)\n else:\n self.projects[projectname] = 0.\n \n \n def worked(self, hours, projectname):\n \"\"\"add worked hours on a project\n \"\"\"\n try:\n hours = float(hours)\n except:\n raise Exception(\"Hours not convertable to float!\")\n\n if not projectname in self.projects:\n raise Exception(\"project non-existing for\", self.name)\n \n self.projects[projectname] += hours\n \n def calc_earnings(self):\n \"\"\"\n Calculate earnings\n \"\"\"\n total_hours = 0\n for val in self.projects.values():\n total_hours += val\n \n return total_hours *self.wage\n \n def info(self):\n \"\"\"\n get info\n \"\"\"\n for proj, hour in self.projects.items():\n print(hour, 'worked on project', proj)\n\nbert = Employee('bert')\nbert.new_project('vmm')\n\nbert.worked(10., 'vmm')\n\nbert.calc_earnings()\n\nbert.new_project('pwc')\n\nbert.info()\n\nbert.worked(3., 'pwc')",
""
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/production_ml/solutions/distributed_training.ipynb
|
apache-2.0
|
[
"Distributed Training with GPUs on Cloud AI Platform\nLearning Objectives:\n 1. Setting up the environment\n 1. Create a model to train locally\n 1. Train on multiple GPUs/CPUs with MultiWorkerMirrored Strategy\nIn this notebook, we will walk through using Cloud AI Platform to perform distributed training using the MirroredStrategy found within tf.keras. This strategy will allow us to use the synchronous AllReduce strategy on a VM with multiple GPUs attached.\nEach learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.",
"# Use the chown command to change the ownership of repository to user\n!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst",
"Next we will configure our environment. Be sure to change the PROJECT_ID variable in the below cell to your Project ID. This will be the project to which the Cloud AI Platform resources will be billed. We will also create a bucket for our training artifacts (if it does not already exist).\nLab Task #1: Setting up the environment",
"# The OS module in python provides functions for interacting with the operating system\nimport os\n# TODO 1\nPROJECT_ID = \"cloud-training-demos\" # Replace with your PROJECT\nBUCKET = PROJECT_ID \nREGION = 'us-central1'\n# Store the value of `BUCKET` and `PROJECT_ID` in environment variables.\nos.environ[\"PROJECT_ID\"] = PROJECT_ID\nos.environ[\"BUCKET\"] = BUCKET\n",
"Since we are going to submit our training job to Cloud AI Platform, we need to create our trainer package. We will create the train directory for our package and create a blank __init__.py file so Python knows that this folder contains a package.",
"# Using `mkdir` we can create an empty directory\n!mkdir train\n# Using `touch` we can create an empty file\n!touch train/__init__.py",
"Next we will create a module containing a function which will create our model. Note that we will be using the Fashion MNIST dataset. Since it's a small dataset, we will simply load it into memory for getting the parameters for our model.\nOur model will be a DNN with only dense layers, applying dropout to each hidden layer. We will also use ReLU activation for all hidden layers.",
"%%writefile train/model_definition.py\n# Here we'll import data processing libraries like Numpy and Tensorflow\nimport tensorflow as tf\nimport numpy as np\n\n# Get data\n\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()\n\n# add empty color dimension\nx_train = np.expand_dims(x_train, -1)\nx_test = np.expand_dims(x_test, -1)\n\ndef create_model():\n# The `tf.keras.Sequential` method will sequential groups a linear stack of layers into a tf.keras.Model.\n model = tf.keras.models.Sequential()\n# The `Flatten()` method will flattens the input and it does not affect the batch size.\n model.add(tf.keras.layers.Flatten(input_shape=x_train.shape[1:]))\n# The `Dense()` method is just your regular densely-connected NN layer.\n model.add(tf.keras.layers.Dense(1028))\n# The `Activation()` method applies an activation function to an output.\n model.add(tf.keras.layers.Activation('relu'))\n# The `Dropout()` method applies dropout to the input.\n model.add(tf.keras.layers.Dropout(0.5))\n# The `Dense()` method is just your regular densely-connected NN layer.\n model.add(tf.keras.layers.Dense(512))\n# The `Activation()` method applies an activation function to an output.\n model.add(tf.keras.layers.Activation('relu'))\n# The `Dropout()` method applies dropout to the input.\n model.add(tf.keras.layers.Dropout(0.5))\n# The `Dense()` method is just your regular densely-connected NN layer.\n model.add(tf.keras.layers.Dense(256))\n# The `Activation()` method applies an activation function to an output.\n model.add(tf.keras.layers.Activation('relu'))\n# The `Dropout()` method applies dropout to the input.\n model.add(tf.keras.layers.Dropout(0.5))\n# The `Dense()` method is just your regular densely-connected NN layer.\n model.add(tf.keras.layers.Dense(10))\n# The `Activation()` method applies an activation function to an output.\n model.add(tf.keras.layers.Activation('softmax'))\n return model",
"Before we submit our training jobs to Cloud AI Platform, let's be sure our model runs locally. We will call the model_definition function to create our model and use tf.keras.datasets.fashion_mnist.load_data() to import the Fashion MNIST dataset.\nLab Task #2: Create a model to train locally",
"# The OS module in python provides functions for interacting with the operating system\nimport os\n# The Python time module provides many ways of representing time in code, such as objects, numbers, and strings.\n# It also provides functionality other than representing time, like waiting during code execution and measuring the efficiency of your code.\nimport time\n# Here we'll import data processing libraries like Numpy and Tensorflow\nimport tensorflow as tf\nimport numpy as np\nfrom train import model_definition\n\n#Get data\n# TODO 2\n\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()\n\n# add empty color dimension\nx_train = np.expand_dims(x_train, -1)\nx_test = np.expand_dims(x_test, -1)\n\ndef create_dataset(X, Y, epochs, batch_size):\n dataset = tf.data.Dataset.from_tensor_slices((X, Y))\n dataset = dataset.repeat(epochs).batch(batch_size, drop_remainder=True)\n return dataset\n\nds_train = create_dataset(x_train, y_train, 20, 5000)\nds_test = create_dataset(x_test, y_test, 1, 1000)\n\nmodel = model_definition.create_model()\n\nmodel.compile(\n# Using `tf.keras.optimizers.Adam` the optimizer will implements the Adam algorithm.\n optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3, ),\n loss='sparse_categorical_crossentropy',\n metrics=['sparse_categorical_accuracy'])\n \nstart = time.time()\n\nmodel.fit(\n ds_train,\n validation_data=ds_test, \n verbose=1\n)\nprint(\"Training time without GPUs locally: {}\".format(time.time() - start))",
"Train on multiple GPUs/CPUs with MultiWorkerMirrored Strategy\nThat took a few minutes to train our model for 20 epochs. Let's see how we can do better using Cloud AI Platform. We will be leveraging the MultiWorkerMirroredStrategy supplied in tf.distribute. The main difference between this code and the code from the local test is that we need to compile the model within the scope of the strategy. When we do this our training op will use information stored in the TF_CONFIG variable to assign ops to the various devices for the AllReduce strategy. \nAfter the training process finishes, we will print out the time spent training. Since it takes a few minutes to spin up the resources being used for training on Cloud AI Platform, and this time can vary, we want a consistent measure of how long training took.\nNote: When we train models on Cloud AI Platform, the TF_CONFIG variable is automatically set. So we do not need to worry about adjusting based on what cluster configuration we use.",
"%%writefile train/train_mult_worker_mirrored.py\n# The OS module in python provides functions for interacting with the operating system\nimport os\n# The Python time module provides many ways of representing time in code, such as objects, numbers, and strings.\n# It also provides functionality other than representing time, like waiting during code execution and measuring the efficiency of your code.\nimport time\n# Here we'll import data processing libraries like Numpy and Tensorflow\nimport tensorflow as tf\nimport numpy as np\nfrom . import model_definition\n\n# The `MultiWorkerMirroredStrategy()` method will work as a distribution strategy for synchronous training on multiple workers.\nstrategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()\n\n#Get data\n\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()\n\n# add empty color dimension\nx_train = np.expand_dims(x_train, -1)\nx_test = np.expand_dims(x_test, -1)\n\ndef create_dataset(X, Y, epochs, batch_size):\n dataset = tf.data.Dataset.from_tensor_slices((X, Y))\n dataset = dataset.repeat(epochs).batch(batch_size, drop_remainder=True)\n return dataset\n\nds_train = create_dataset(x_train, y_train, 20, 5000)\nds_test = create_dataset(x_test, y_test, 1, 1000)\n\nprint('Number of devices: {}'.format(strategy.num_replicas_in_sync))\n\nwith strategy.scope():\n model = model_definition.create_model()\n model.compile(\n optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3, ),\n loss='sparse_categorical_crossentropy',\n metrics=['sparse_categorical_accuracy'])\n \nstart = time.time()\n\nmodel.fit(\n ds_train,\n validation_data=ds_test, \n verbose=2\n)\nprint(\"Training time with multiple GPUs: {}\".format(time.time() - start))",
"Lab Task #3: Training with multiple GPUs/CPUs on created model using MultiWorkerMirrored Strategy\nFirst we will train a model without using GPUs to give us a baseline. We will use a consistent format throughout the trials. We will define a config.yaml file to contain our cluster configuration and the pass this file in as the value of a command-line argument --config.\nIn our first example, we will use a single n1-highcpu-16 VM.",
"%%writefile config.yaml\n# TODO 3a\n# Configure a master worker\ntrainingInput:\n scaleTier: CUSTOM\n masterType: n1-highcpu-16\n\n%%bash\n\nnow=$(date +\"%Y%m%d_%H%M%S\")\nJOB_NAME=\"cpu_only_fashion_minst_$now\"\n\ngcloud ai-platform jobs submit training $JOB_NAME \\\n --staging-bucket=gs://$BUCKET \\\n --package-path=train \\\n --module-name=train.train_mult_worker_mirrored \\\n --runtime-version=2.3 \\\n --python-version=3.7 \\\n --region=us-west1 \\\n --config config.yaml",
"If we go through the logs, we see that the training job will take around 5-7 minutes to complete. Let's now attach two Nvidia Tesla K80 GPUs and rerun the training job.",
"%%writefile config.yaml\n# TODO 3b\n# Configure a master worker\ntrainingInput:\n scaleTier: CUSTOM\n masterType: n1-highcpu-16\n masterConfig:\n acceleratorConfig:\n count: 2\n type: NVIDIA_TESLA_K80\n\n%%bash\n\nnow=$(date +\"%Y%m%d_%H%M%S\")\nJOB_NAME=\"multi_gpu_fashion_minst_2gpu_$now\"\n\ngcloud ai-platform jobs submit training $JOB_NAME \\\n --staging-bucket=gs://$BUCKET \\\n --package-path=train \\\n --module-name=train.train_mult_worker_mirrored \\\n --runtime-version=2.3 \\\n --python-version=3.7 \\\n --region=us-west1 \\\n --config config.yaml",
"That was a lot faster! The training job will take upto 5-10 minutes to complete. Let's keep going and add more GPUs!",
"%%writefile config.yaml\n# TODO 3c\n# Configure a master worker\ntrainingInput:\n scaleTier: CUSTOM\n masterType: n1-highcpu-16\n masterConfig:\n acceleratorConfig:\n count: 4\n type: NVIDIA_TESLA_K80\n\n%%bash\n\nnow=$(date +\"%Y%m%d_%H%M%S\")\nJOB_NAME=\"multi_gpu_fashion_minst_4gpu_$now\"\n\ngcloud ai-platform jobs submit training $JOB_NAME \\\n --staging-bucket=gs://$BUCKET \\\n --package-path=train \\\n --module-name=train.train_mult_worker_mirrored \\\n --runtime-version=2.3 \\\n --python-version=3.7 \\\n --region=us-west1 \\\n --config config.yaml",
"The training job will take upto 10 minutes to complete. It was faster than no GPUs, but why was it slower than 2 GPUs? If you rerun this job with 8 GPUs you'll actually see it takes just as long as using no GPUs!\nThe answer is in our input pipeline. In short, the I/O involved in using more GPUs started to outweigh the benefits of having more available devices. We can try to improve our input pipelines to overcome this (e.g. using caching, adjusting batch size, etc.)."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jcharit1/Amazon-Fine-Foods-Reviews
|
code/.ipynb_checkpoints/experimental-checkpoint (jimmy-Precision-T1600's conflicted copy 2017-05-14).ipynb
|
mit
|
[
"%matplotlib inline",
"Experimental Model Building\nCode for building the models\nAuthor: Jimmy Charité\nEmail: jimmy.charite@gmail.com \nExperimenting with tensorflow",
"import os\nimport pandas as pd\nimport numpy as np\nimport scipy as sp\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport json\nfrom IPython.display import Image\nfrom IPython.core.display import HTML\nimport tensorflow as tf\n\nretval=os.chdir(\"..\")\n\nclean_data=pd.read_pickle('./clean_data/clean_data.pkl')\n\nclean_data.head()\n\nkept_cols=['helpful']\nkept_cols.extend(clean_data.columns[9:])",
"Training and Testing Split",
"my_rand_state=0\ntest_size=0.25\n\nfrom sklearn.model_selection import train_test_split\n\nX = (clean_data[kept_cols].iloc[:,1:]).as_matrix()\ny = (clean_data[kept_cols].iloc[:,0]).tolist()\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, \n random_state=my_rand_state)",
"Setting Up Tensor Flow",
"feature_columns = [tf.contrib.layers.real_valued_column(\"\", dimension=len(X[0,:]))]\n\ndnn_clf=tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,\n hidden_units=[200,100,50],\n model_dir='./other_output/tf_model')\n\nfrom sklearn.preprocessing import StandardScaler\nstd_scale=StandardScaler()\n\nclass PassData(object):\n '''\n Callable object that can be initialized and \n used to pass data to tensorflow\n '''\n \n def __init__(self,X,y):\n self.X=X\n self.y=y\n \n def scale(self):\n self.X = std_scale.fit_transform(self.X, self.y) \n \n def __call__(self):\n return tf.constant(X), tf.constant(y)\n\ntrain_data=PassData(X,y)\n\ntrain_data.scale()\n\ndnn_clf.fit(input_fn=train_data,steps=1000)",
"Testing Estimators",
"from sklearn.metrics import roc_curve, auc\n\nnb_fpr, nb_tpr, _ = roc_curve(y_test, \n nb_clf_est_b.predict_proba(X_test)[:,1])\nnb_roc_auc = auc(nb_fpr, nb_tpr)\n\nqda_fpr, qda_tpr, _ = roc_curve(y_test, \n qda_clf_est_b.predict_proba(X_test)[:,1])\nqda_roc_auc = auc(qda_fpr, qda_tpr)\n\nlog_fpr, log_tpr, _ = roc_curve(y_test, \n log_clf_est_b.predict_proba(X_test)[:,1])\nlog_roc_auc = auc(log_fpr, log_tpr)\n\nrf_fpr, rf_tpr, _ = roc_curve(y_test, \n rf_clf_est_b.predict_proba(X_test)[:,1])\nrf_roc_auc = auc(rf_fpr, rf_tpr)\n\nplt.plot(nb_fpr, nb_tpr, color='cyan', linestyle='--',\n label='NB (area = %0.2f)' % nb_roc_auc, lw=2)\n\nplt.plot(qda_fpr, qda_tpr, color='indigo', linestyle='--',\n label='QDA (area = %0.2f)' % qda_roc_auc, lw=2)\n\nplt.plot(log_fpr, log_tpr, color='seagreen', linestyle='--',\n label='LOG (area = %0.2f)' % log_roc_auc, lw=2)\n\nplt.plot(rf_fpr, rf_tpr, color='blue', linestyle='--',\n label='RF (area = %0.2f)' % rf_roc_auc, lw=2)\n\nplt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='k',\n label='Luck')\n\nplt.xlim([-0.05, 1.05])\nplt.ylim([-0.05, 1.05])\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('ROC Curves of Basic Models Using BOW & Macro-Text Stats')\nplt.legend(loc=\"lower right\")\nplt.savefig('./plots/ROC_Basic_BOW_MERGED.png', bbox_inches='tight')\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
enoordeh/StatisticalMethods
|
.archive/2015/scikit-learn/Introduction_to_Machine_Learning.ipynb
|
gpl-2.0
|
[
"What is Machine Learning ?\n\n\nThe umbrella term \"machine learning\" describes methods for automated data analysis, developed by computer scientists and statisticians in response to the appearance of ever larger datasets.\n\n\nThe goal of automation has led to a very uniform terminology, enabling multiple algorithms to be implemented and compared on an equal footing.\n\n\nMachine learning can be divided into two types: supervised and unsupervised. \n\n\nSupervised Learning\n\n\nSupervised learning is also known as predictive learning. Given inputs $X$, the goal is to construct a machine that can accurately predict a set of outputs $y$. \n\n\nThe \"supervision\" refers to the education of the machine, via a training set $D$ of input-output pairs that we provide. Prediction accuracy is then tested on validation and test sets.\n\n\nAt the heart of the prediction machine is a model $M$ that can be trained to give accurate predictions.\n\n\nThe outputs $y$ are said to be response variables - predictions of $y$ will be generated by our model. The variables $y$ can be either categorical (\"labels\") or nominal (real numbers). When the $y$ are categorical, the problem is one of classification (\"is this an image of a kitten, or a puppy?\"). When the $y$ are numerical, the problem is a regression (\"how should we interpolate between these values?\").\n\n\nSupervised learning is about making predictions by characterizing ${\\rm Pr}(y_k|x_k,D,M)$.\n\n\n<img src=\"figures/supervised_workflow.svg\" width=100%>\nUnsupervised Learning\n\n\nAlso known as descriptive learning. Here the goal is \"knowledge discovery\" - detection of patterns in a dataset, that can then be used in supervised/model-based analyses. \n\n\nUnsupervised learning is about density estimation - characterizing ${\\rm Pr}(x|\\theta,H)$.\n\n\nExamples of unsupervised learning activities include:\n\n\nClustering analysis of the $x$.\n\n\nDimensionality reduction: principal component analysis, independent component analysis, etc.\n\n\nIn this lesson we will focus on supervised learning, since it is arguably somewhat closer to our goal of gaining understanding from data.\n\n\nData Representations\n\n\nEach input $x$ is said to have $P$ features (or attributes), and represents a sample drawn from a population. Each sample input $x$ is associated with an output $y$.\n\n\nOur $N$ input samples are packaged into $N \\times P$ design matrix $X$ (with $N$ rows and $P$ columns).\n\n\n<img src=\"figures/data_representation.svg\" width=100%>\nDataset Split\n\nWe train our machine learning models on a subset of the data, and then test them against the remainder.\n\n<img src=\"figures/train_test_split_matrix.svg\" width=100%>\nSimple Example: The Digits Dataset\n\nLet's take a look at one of the SciKit-Learn example datasets, digits",
"% matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom sklearn.datasets import load_digits\ndigits = load_digits()\ndigits.keys()\n\ndigits.images.shape\n\nprint(digits.images[0])\n\nplt.matshow(digits.images[23], cmap=plt.cm.Greys)\n\ndigits.data.shape\n\ndigits.target.shape\n\ndigits.target[23]",
"In SciKit-Learn, data contains the design matrix $X$, and is a numpy array of shape $(N, P)$\n\n\ntarget contains the response variables $y$, and is a numpy array of shape $(N)$",
"print(digits.DESCR)",
"Splitting the data:",
"from sklearn.cross_validation import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)\n\nX_train.shape,y_train.shape\n\nX_test.shape,y_test.shape\n\n?train_test_split",
"Other Example Datasets\nSciKit-Learn provides 5 \"toy\" datasets for tutorial purposes, all load-able in the same way:\nName | Description\n------------|:---------------------------------------\nboston | Boston house-prices, with 13 associated measurements (R)\niris | Fisher's iris classifications (based on 4 characteristics) (C)\ndiabetes | Diabetes (x vs y) (R)\ndigits | Hand-written digits, 8x8 images with classifications (C)\nlinnerud | Linnerud: 3 exercise and 3 physiological data (R)\n\n\"R\" and \"C\" indicate that the problem to be solved is either a regression or a classification, respectively.",
"from sklearn.datasets import load_boston\nboston = load_boston()\nprint(boston.DESCR)\n\n# Visualizing the Boston house price data:\n\nimport corner\n\nX = boston.data\ny = boston.target\n\nplot = np.concatenate((X,np.atleast_2d(y).T),axis=1)\nlabels = np.append(boston.feature_names,'MEDV')\n\ncorner.corner(plot,labels=labels);",
"Question:\nTalk to your neighbor for a few minutes about the things you have just heard about machine learning. In this course have we been talking about regression or classification problems? Have our models been supervised or unsupervised? How are our example astronomical datasets similar to the toy datasets in SciKit-Learn? And how are they different?\nLet's move on to look a simple worked example"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
SHAFNehal/Course
|
code/Introduction to Deep Learning.ipynb
|
apache-2.0
|
[
"Introduction to Deep Learning\nGoal : This notebook explains the building blocks of a neural network model. \nData : Data is taken from sklearn's make_moon dataset. There are two features and and the target is a categorical variable (0/1). The aim is to devise an algorithm that correctly classifies the datapoints. \nAproach: We will build the neural networks from first principles. We will create a very simple model and understand how it works. We will also be implementing backpropagation algorithm. Please note that this code is not optimized. This is for instructive purpose - for us to understand how ANN works. Libraries like theano have highly optimized code.\n<img src=\"image/nn-3-layer-network.png\">",
"# Import the required packages\nimport numpy as np\nimport pandas as pd\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport scipy\nimport math\nimport random\nimport string\n\nrandom.seed(123)\n# Display plots inline \n%matplotlib inline\n# Define plot's default figure size\nmatplotlib.rcParams['figure.figsize'] = (10.0, 8.0)\n\n#read the datasets\ntrain = pd.read_csv(\"data/intro_to_ann.csv\")\nprint (train.head())\nX, y = np.array(train.ix[:,0:2]), np.array(train.ix[:,2])\nprint(X.shape, y.shape)\nplt.scatter(X[:,0], X[:,1], s=40, c=y, cmap=plt.cm.BuGn)",
"Let's start building our NN's building blocks.\nThis process will eventually result in our own NN class\nFunction to generate a random number, given two numbers\nWhen we initialize the neural networks, the weights have to be randomly assigned.",
"# calculate a random number where: a <= rand < b\ndef rand(a, b):\n return (b-a)*random.random() + a\n\n# Make a matrix \ndef makeMatrix(I, J, fill=0.0):\n return np.zeros([I,J])\n\n# our sigmoid function\ndef sigmoid(x):\n #return math.tanh(x)\n return 1/(1+np.exp(-x))\n\n# derivative of our sigmoid function, in terms of the output (i.e. y)\ndef dsigmoid(y):\n return (y * (1- y))",
"Our NN class\nWhen we first create a neural networks architecture, we need to know the number of inputs, number of hidden layers and number of outputs.\nThe weights have to be randomly initialized.",
"class NN:\n def __init__(self, ni, nh, no):\n # number of input, hidden, and output nodes\n self.ni = ni + 1 # +1 for bias node\n self.nh = nh\n self.no = no\n\n # activations for nodes\n self.ai = [1.0]*self.ni\n self.ah = [1.0]*self.nh\n self.ao = [1.0]*self.no\n \n # create weights\n self.wi = makeMatrix(self.ni, self.nh)\n self.wo = makeMatrix(self.nh, self.no)\n \n # set them to random vaules\n for i in range(self.ni):\n for j in range(self.nh):\n self.wi[i][j] = rand(-0.2, 0.2)\n for j in range(self.nh):\n for k in range(self.no):\n self.wo[j][k] = rand(-2.0, 2.0)\n\n # last change in weights for momentum \n self.ci = makeMatrix(self.ni, self.nh)\n self.co = makeMatrix(self.nh, self.no)\n\nclass NN:\n def __init__(self, ni, nh, no):\n # number of input, hidden, and output nodes\n self.ni = ni + 1 # +1 for bias node\n self.nh = nh\n self.no = no\n\n # activations for nodes\n self.ai = [1.0]*self.ni\n self.ah = [1.0]*self.nh\n self.ao = [1.0]*self.no\n \n # create weights\n self.wi = makeMatrix(self.ni, self.nh)\n self.wo = makeMatrix(self.nh, self.no)\n \n # set them to random vaules\n for i in range(self.ni):\n for j in range(self.nh):\n self.wi[i][j] = rand(-0.2, 0.2)\n for j in range(self.nh):\n for k in range(self.no):\n self.wo[j][k] = rand(-2.0, 2.0)\n\n # last change in weights for momentum \n self.ci = makeMatrix(self.ni, self.nh)\n self.co = makeMatrix(self.nh, self.no)\n \n\n def backPropagate(self, targets, N, M):\n \n if len(targets) != self.no:\n print(targets)\n raise ValueError('wrong number of target values')\n\n # calculate error terms for output\n #output_deltas = [0.0] * self.no\n output_deltas = np.zeros(self.no)\n for k in range(self.no):\n error = targets[k]-self.ao[k]\n output_deltas[k] = dsigmoid(self.ao[k]) * error\n\n # calculate error terms for hidden\n \n #hidden_deltas = [0.0] * self.nh\n hidden_deltas = np.zeros(self.nh)\n for j in range(self.nh):\n error = 0.0\n for k in range(self.no):\n error = error + output_deltas[k]*self.wo[j][k]\n hidden_deltas[j] = dsigmoid(self.ah[j]) * error\n\n # update output weights\n for j in range(self.nh):\n for k in range(self.no):\n change = output_deltas[k]*self.ah[j]\n self.wo[j][k] = self.wo[j][k] + N*change + M*self.co[j][k]\n self.co[j][k] = change\n #print N*change, M*self.co[j][k]\n\n # update input weights\n for i in range(self.ni):\n for j in range(self.nh):\n change = hidden_deltas[j]*self.ai[i]\n self.wi[i][j] = self.wi[i][j] + N*change + M*self.ci[i][j]\n self.ci[i][j] = change\n\n # calculate error\n error = 0.0\n for k in range(len(targets)):\n error = error + 0.5*(targets[k]-self.ao[k])**2\n return error\n\n\n def test(self, patterns):\n self.predict = np.empty([len(patterns), self.no])\n for i, p in enumerate(patterns):\n self.predict[i] = self.activate(p)\n #self.predict[i] = self.activate(p[0])\n\n def weights(self):\n print('Input weights:')\n for i in range(self.ni):\n print(self.wi[i])\n \n print('Output weights:')\n for j in range(self.nh):\n print(self.wo[j])\n \n def activate(self, inputs):\n \n if len(inputs) != self.ni-1:\n print(inputs)\n raise ValueError('wrong number of inputs')\n\n # input activations\n for i in range(self.ni-1):\n #self.ai[i] = sigmoid(inputs[i])\n self.ai[i] = inputs[i]\n \n \n\n # hidden activations\n for j in range(self.nh):\n sum = 0.0\n for i in range(self.ni):\n sum = sum + self.ai[i] * self.wi[i][j]\n self.ah[j] = sigmoid(sum)\n \n # output activations\n for k in range(self.no):\n sum = 0.0\n for j in range(self.nh):\n sum = sum + self.ah[j] * self.wo[j][k]\n 
self.ao[k] = sigmoid(sum)\n\n \n return self.ao[:]\n \n def train(self, patterns, iterations=1000, N=0.5, M=0.1):\n # N: learning rate\n # M: momentum factor\n patterns = list(patterns)\n for i in range(iterations):\n error1 = 0.0\n #j = 0\n for p in patterns:\n inputs = p[0]\n targets = p[1]\n self.activate(inputs)\n error1 = error1 + self.backPropagate([targets], N, M)\n #j= j+1\n #print (j)\n #self.weights() \n #if i % 5 == 0:\n print('error in iteration %d : %-.5f' % (i,error1))\n #print('Final training error: %-.5f' % error1)",
"Let's visualize and observe the resultset",
"# Helper function to plot a decision boundary.\n# This generates the contour plot to show the decision boundary visually\ndef plot_decision_boundary(nn_model):\n # Set min and max values and give it some padding\n x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5\n y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5\n h = 0.01\n # Generate a grid of points with distance h between them\n xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))\n # Predict the function value for the whole gid\n nn_model.test(np.c_[xx.ravel(), yy.ravel()])\n Z = nn_model.predict\n Z[Z>=0.5] = 1\n Z[Z<0.5] = 0\n Z = Z.reshape(xx.shape)\n # Plot the contour and training examples\n plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)\n plt.scatter(X[:, 0], X[:, 1], s=40, c=y, cmap=plt.cm.BuGn)",
"Create Neural networks with 1 hidden layer.",
"n = NN(2, 4, 1)\n\nprint (n.weights())",
"Data Set\n(X1,X2) = (2.067788 0.258133), y=1\n(X1,X2) = (0.993994 -0.609145), y=1\n(X1,X2) = (-0.690315 0.749921), y=0\n(X1,X2) = (1.023582 0.529003), y=0\n(X1,X2) = (0.700747 -0.496724), y=1",
"print (\"prediction\")\nprint (\"y=1 --- yhat=\",n.activate([2.067788, 0.258133]))\nprint (\"y=1 --- yhat=\",n.activate([0.993994, 0.258133]))\nprint (\"y=0 --- yhat=\",n.activate([-0.690315, 0.749921]))\nprint (\"y=0 --- yhat=\",n.activate([1.023582, 0.529003]))\nprint (\"y=1 --- yhat=\",n.activate([0.700747, -0.496724]))",
"Train the Neural Networks = estimate the ws while minimizing the error",
"%timeit -n 1 -r 1 n.train(zip(X,y), iterations=1000)\nplot_decision_boundary(n)\nplt.title(\"Our next model with 4 hidden units\")\n\nprint (n.weights())",
"Data Set\n(X1,X2) = (2.067788 0.258133), y=1\n(X1,X2) = (0.993994 -0.609145), y=1\n(X1,X2) = (-0.690315 0.749921), y=0\n(X1,X2) = (1.023582 0.529003), y=0\n(X1,X2) = (0.700747 -0.496724), y=1",
"print (\"prediction\")\nprint (\"y=1 --- yhat=\",n.activate([2.067788, 0.258133]))\nprint (\"y=1 --- yhat=\",n.activate([0.993994, 0.258133]))\nprint (\"y=0 --- yhat=\",n.activate([-0.690315, 0.749921]))\nprint (\"y=0 --- yhat=\",n.activate([1.023582, 0.529003]))\nprint (\"y=1 --- yhat=\",n.activate([0.700747, -0.496724]))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dismalpy/dismalpy
|
doc/notebooks/varmax.ipynb
|
bsd-2-clause
|
[
"VARMAX models\nThis is a notebook stub for VARMAX models. Full development will be done after impulse response functions are available.",
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport dismalpy as dp\nimport matplotlib.pyplot as plt\n\ndta = pd.read_stata('data/lutkepohl2.dta')\ndta.index = dta.qtr\nendog = dta.ix['1960-04-01':'1978-10-01', ['dln_inv', 'dln_inc', 'dln_consump']]",
"Model specification\nThe VARMAX class in Statsmodels allows estimation of VAR, VMA, and VARMA models (through the order argument), optionally with a constant term (via the trend argument). Exogenous regressors may also be included (as usual in Statsmodels, by the exog argument), and in this way a time trend may be added. Finally, the class allows measurement error (via the measurement_error argument) and allows specifying either a diagonal or unstructured innovation covariance matrix (via the error_cov_type argument).\nExample 1: VAR\nBelow is a simple VARX(2) model in two endogenous variables and an exogenous series, but no constant term. Notice that we needed to allow for more iterations than the default (which is maxiter=50) in order for the likelihood estimation to converge. This is not unusual in VAR models which have to estimate a large number of parameters, often on a relatively small number of time series: this model, for example, estimates 27 parameters off of 75 observations of 3 variables.",
"exog = pd.Series(np.arange(len(endog)), index=endog.index, name='trend')\nexog = endog['dln_consump']\nmod = dp.ssm.VARMAX(endog[['dln_inv', 'dln_inc']], order=(2,0), trend='nc', exog=exog)\nres = mod.fit(maxiter=1000)\nprint(res.summary())",
"Example 2: VMA\nA vector moving average model can also be formulated. Below we show a VMA(2) on the same data, but where the innovations to the process are uncorrelated. In this example we leave out the exogenous regressor but now include the constant term.",
"mod = dp.ssm.VARMAX(endog[['dln_inv', 'dln_inc']], order=(0,2), error_cov_type='diagonal')\nres = mod.fit(maxiter=1000)\nprint(res.summary())",
"Caution: VARMA(p,q) specifications\nAlthough the model allows estimating VARMA(p,q) specifications, these models are not identified without additional restrictions on the representation matrices, which are not built-in. For this reason, it is recommended that the user proceed with error (and indeed a warning is issued when these models are specified). Nonetheless, they may in some circumstances provide useful information.",
"mod = dp.ssm.VARMAX(endog[['dln_inv', 'dln_inc']], order=(1,1))\nres = mod.fit(maxiter=1000)\nprint(res.summary())"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
simonsfoundation/CaImAn
|
demos/notebooks/demo_caiman_cnmf_3D.ipynb
|
gpl-2.0
|
[
"Volumetric data processing\nThis is a simple demo on toy 3d data for source extraction and deconvolution using CaImAn.\nFor more information check demo_pipeline.ipynb which performs the complete pipeline for\n2d two photon imaging data.",
"try:\n get_ipython().magic(u'load_ext autoreload')\n get_ipython().magic(u'autoreload 2')\n print(1)\nexcept:\n print('NOT IPYTHON')\n\nimport logging\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nfrom scipy.ndimage.filters import gaussian_filter\nfrom tifffile.tifffile import imsave\n\nimport caiman as cm\nfrom caiman.utils.visualization import nb_view_patches3d\nimport caiman.source_extraction.cnmf as cnmf\nfrom caiman.paths import caiman_datadir\n\nimport bokeh.plotting as bpl\nbpl.output_notebook()\nlogging.basicConfig(format=\n \"%(relativeCreated)12d [%(filename)s:%(funcName)20s():%(lineno)s] [%(process)d] %(message)s\",\n # filename=\"/tmp/caiman.log\",\n level=logging.DEBUG)",
"Define a function to create some toy data",
"def gen_data(p=1, noise=.5, T=256, framerate=30, firerate=2., motion=True, plot=False):\n if p == 2:\n gamma = np.array([1.5, -.55])\n elif p == 1:\n gamma = np.array([.9])\n else:\n raise\n dims = (70, 50, 10) # size of image\n sig = (4, 4, 2) # neurons size\n bkgrd = 10\n N = 20 # number of neurons\n np.random.seed(0)#7)\n centers = np.asarray([[np.random.randint(s, x - s)\n for x, s in zip(dims, sig)] for i in range(N)])\n if motion:\n centers += np.array(sig) * 2\n Y = np.zeros((T,) + tuple(np.array(dims) + np.array(sig) * 4), dtype=np.float32) \n else:\n Y = np.zeros((T,) + dims, dtype=np.float32)\n trueSpikes = np.random.rand(N, T) < firerate / float(framerate)\n trueSpikes[:, 0] = 0\n truth = trueSpikes.astype(np.float32)\n for i in range(2, T):\n if p == 2:\n truth[:, i] += gamma[0] * truth[:, i - 1] + gamma[1] * truth[:, i - 2]\n else:\n truth[:, i] += gamma[0] * truth[:, i - 1]\n for i in range(N):\n Y[:, centers[i, 0], centers[i, 1], centers[i, 2]] = truth[i]\n tmp = np.zeros(dims)\n tmp[tuple(np.array(dims)//2)] = 1.\n z = np.linalg.norm(gaussian_filter(tmp, sig).ravel())\n Y = bkgrd + noise * np.random.randn(*Y.shape) + 10 * gaussian_filter(Y, (0,) + sig) / z\n if motion:\n shifts = np.transpose([np.convolve(np.random.randn(T-10), np.ones(11)/11*s) for s in sig])\n Y = np.array([cm.motion_correction.apply_shifts_dft(img, (sh[0], sh[1], sh[2]), 0,\n is_freq=False, border_nan='copy')\n for img, sh in zip(Y, shifts)])\n Y = Y[:, 2*sig[0]:-2*sig[0], 2*sig[1]:-2*sig[1], 2*sig[2]:-2*sig[2]]\n else:\n shifts = None\n T, d1, d2, d3 = Y.shape\n\n if plot:\n Cn = cm.local_correlations(Y, swap_dim=False)\n plt.figure(figsize=(15, 3))\n plt.plot(truth.T)\n plt.figure(figsize=(15, 3))\n for c in centers:\n plt.plot(Y[c[0], c[1], c[2]])\n\n d1, d2, d3 = dims\n x, y = (int(1.2 * (d1 + d3)), int(1.2 * (d2 + d3)))\n scale = 6/x\n fig = plt.figure(figsize=(scale*x, scale*y))\n axz = fig.add_axes([1-d1/x, 1-d2/y, d1/x, d2/y])\n plt.imshow(Cn.max(2).T, cmap='gray')\n plt.scatter(*centers.T[:2], c='r')\n plt.title('Max.proj. z')\n plt.xlabel('x')\n plt.ylabel('y')\n axy = fig.add_axes([0, 1-d2/y, d3/x, d2/y])\n plt.imshow(Cn.max(0), cmap='gray')\n plt.scatter(*centers.T[:0:-1], c='r')\n plt.title('Max.proj. x')\n plt.xlabel('z')\n plt.ylabel('y')\n axx = fig.add_axes([1-d1/x, 0, d1/x, d3/y])\n plt.imshow(Cn.max(1).T, cmap='gray')\n plt.scatter(*centers.T[np.array([0,2])], c='r')\n plt.title('Max.proj. y')\n plt.xlabel('x')\n plt.ylabel('z');\n plt.show()\n\n return Y, truth, trueSpikes, centers, dims, -shifts",
"Select file(s) to be processed\n\ncreate a file with a toy 3d dataset.",
"fname = os.path.join(caiman_datadir(), 'example_movies', 'demoMovie3D.tif')\nY, truth, trueSpikes, centers, dims, shifts = gen_data(p=2)\nimsave(fname, Y)\nprint(fname)",
"Display the raw movie (optional)\nShow a max-projection of the correlation image",
"Y = cm.load(fname)\nCn = cm.local_correlations(Y, swap_dim=False)\nd1, d2, d3 = dims\nx, y = (int(1.2 * (d1 + d3)), int(1.2 * (d2 + d3)))\nscale = 6/x\nfig = plt.figure(figsize=(scale*x, scale*y))\naxz = fig.add_axes([1-d1/x, 1-d2/y, d1/x, d2/y])\nplt.imshow(Cn.max(2).T, cmap='gray')\nplt.title('Max.proj. z')\nplt.xlabel('x')\nplt.ylabel('y')\naxy = fig.add_axes([0, 1-d2/y, d3/x, d2/y])\nplt.imshow(Cn.max(0), cmap='gray')\nplt.title('Max.proj. x')\nplt.xlabel('z')\nplt.ylabel('y')\naxx = fig.add_axes([1-d1/x, 0, d1/x, d3/y])\nplt.imshow(Cn.max(1).T, cmap='gray')\nplt.title('Max.proj. y')\nplt.xlabel('x')\nplt.ylabel('z');\nplt.show()",
"Play the movie (optional). This will require loading the movie in memory which in general is not needed by the pipeline. Displaying the movie uses the OpenCV library. Press q to close the video panel.",
"Y[...,5].play(magnification=2)",
"Setup a cluster",
"#%% start a cluster for parallel processing (if a cluster already exists it will be closed and a new session will be opened)\nif 'dview' in locals():\n cm.stop_server(dview=dview)\nc, dview, n_processes = cm.cluster.setup_cluster(\n backend='local', n_processes=None, single_thread=False)",
"Motion Correction\nFirst we create a motion correction object with the parameters specified. Note that the file is not loaded in memory",
"# motion correction parameters\nopts_dict = {'fnames': fname,\n 'strides': (24, 24, 6), # start a new patch for pw-rigid motion correction every x pixels\n 'overlaps': (12, 12, 2), # overlap between pathes (size of patch strides+overlaps)\n 'max_shifts': (4, 4, 2), # maximum allowed rigid shifts (in pixels)\n 'max_deviation_rigid': 5, # maximum shifts deviation allowed for patch with respect to rigid shifts\n 'pw_rigid': False, # flag for performing non-rigid motion correction\n 'is3D': True}\n\nopts = cnmf.params.CNMFParams(params_dict=opts_dict)\n\n# first we create a motion correction object with the parameters specified\nmc = cm.motion_correction.MotionCorrect(fname, dview=dview, **opts.get_group('motion'))\n# note that the file is not loaded in memory\n\n%%capture\n#%% Run motion correction using NoRMCorre\nmc.motion_correct(save_movie=True)\nm_rig = cm.load(mc.fname_tot_rig, is3D=True)\n\nplt.figure(figsize=(12,3))\nfor i, s in enumerate((mc.shifts_rig, shifts)):\n plt.subplot(1,2,i+1)\n for k in (0,1,2):\n plt.plot(np.array(s)[:,k], label=('x','y','z')[k])\n plt.legend()\n plt.title(('inferred shifts', 'true shifts')[i])\n plt.xlabel('frames')\n plt.ylabel('pixels')",
"Memory mapping\nThe cell below memory maps the file in order 'C' and then loads the new memory mapped file. The saved files from motion correction are memory mapped files stored in 'F' order. Their paths are stored in mc.mmap_file.",
"#%% MEMORY MAPPING\n# memory map the file in order 'C'\nfname_new = cm.save_memmap(mc.mmap_file, base_name='memmap_', order='C',\n border_to_0=0, dview=dview) # exclude borders\n\n# now load the file\nYr, dims, T = cm.load_memmap(fname_new)\nimages = np.reshape(Yr.T, [T] + list(dims), order='F') \n #load frames in python format (T x X x Y)",
"Now restart the cluster to clean up memory",
"#%% restart cluster to clean up memory\ncm.stop_server(dview=dview)\nc, dview, n_processes = cm.cluster.setup_cluster(\n backend='local', n_processes=None, single_thread=False)",
"If data is small enough use a single patch approach",
"# set parameters\nK = 20 # number of neurons expected per patch\ngSig = [4, 4, 2] # expected half size of neurons\nmerge_thresh = 0.8 # merging threshold, max correlation allowed\np = 2 # order of the autoregressive system",
"Run CNMF",
"# INIT\ncnm = cnmf.CNMF(n_processes, k=K, gSig=gSig, merge_thresh=merge_thresh, p=p, dview=dview)\n\n%%capture\n# FIT\ncnm = cnm.fit(images)",
"View the results\nView components per plane",
"cnm.estimates.nb_view_components_3d(image_type='mean', dims=dims, axis=2);",
"With padding",
"# INIT\ncnm = cnmf.CNMF(n_processes, k=K, gSig=gSig, merge_thresh=merge_thresh, p=p, dview=dview)\n\n%%capture\n# FIT\ncnm = cnm.fit(np.pad(images, ((0,0),(1,1),(1,1),(1,1))))\n\ncnm.estimates.nb_view_components_3d(image_type='mean', dims=np.array(dims)+2, axis=2);",
"For larger data use a patch approach",
"# set parameters\nrf = 15 # half-size of the patches in pixels. rf=25, patches are 50x50\nstride = 10 # amount of overlap between the patches in pixels\nK = 10 # number of neurons expected per patch\ngSig = [4, 4, 2] # expected half size of neurons\nmerge_thresh = 0.8 # merging threshold, max correlation allowed\np = 2 # order of the autoregressive system",
"Run CNMF",
"%%capture\n#%% RUN ALGORITHM ON PATCHES\n\ncnm = cnmf.CNMF(n_processes, k=K, gSig=gSig, merge_thresh=merge_thresh, p=p, dview=dview,\n rf=rf, stride=stride, only_init_patch=True)\ncnm = cnm.fit(images)\n\nprint(('Number of components:' + str(cnm.estimates.A.shape[-1])))",
"Component Evaluation",
"#%% COMPONENT EVALUATION\n# the components are evaluated in two ways:\n# a) the shape of each component must be correlated with the data\n# b) a minimum peak SNR is required over the length of a transient\n\nfr = 10 # approx final rate (after eventual downsampling )\ndecay_time = 1. # length of typical transient in seconds \nuse_cnn = False # CNN classifier is designed for 2d (real) data\nmin_SNR = 3 # accept components with that peak-SNR or higher\nrval_thr = 0.6 # accept components with space correlation threshold or higher\ncnm.params.change_params(params_dict={'fr': fr,\n 'decay_time': decay_time,\n 'min_SNR': min_SNR,\n 'rval_thr': rval_thr,\n 'use_cnn': use_cnn})\n\ncnm.estimates.evaluate_components(images, cnm.params, dview=dview)\n\nprint(('Keeping ' + str(len(cnm.estimates.idx_components)) +\n ' and discarding ' + str(len(cnm.estimates.idx_components_bad))))",
"Re-run seeded CNMF\nNow we re-run CNMF on the whole FOV seeded with the accepted components.",
"%%capture\ncnm.params.set('temporal', {'p': p})\ncnm2 = cnm.refit(images)",
"View the results\nFor a change we view the components as max-projections (frontal in the XY direction, sagittal in YZ direction and transverse in XZ),\nand we additionaly show the denoised trace",
"cnm2.estimates.nb_view_components_3d(image_type='corr', dims=dims, Yr=Yr, denoised_color='red', max_projection=True);\n\n# STOP CLUSTER\ncm.stop_server(dview=dview)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AndreySheka/dl_ekb
|
hw6/Seminar 6 - segmentation.ipynb
|
mit
|
[
"Seminar 6 - Neural networks for segmentation",
"! wget https://www.dropbox.com/s/o8loqc5ih8lp2m9/weights.pkl?dl=0\n\n! wget https://www.dropbox.com/s/jy34yowcf85ydba/data.zip?dl=0\n! unzip -q data.zip\n\nimport scipy as sp\nimport scipy.misc\nimport matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline",
"Задача на эту неделю: обучить сеть детектировать края клеток.",
"# Human HT29 colon-cancer cells\nplt.figure(figsize=(10,8))\nplt.subplot(1,2,1)\nim = sp.misc.imread('BBBC018_v1_images-fixed/train/00735-actin.DIB.bmp')\nplt.imshow(im)\nplt.subplot(1,2,2)\nmask = sp.misc.imread('BBBC018_v1_outlines/train/00735-cells.png')\nplt.imshow(mask, 'gray')",
"Самый естественный способ (но не самый эффективный) - свести задачу сегментации к задаче классификации отдельных патчей картинки. Очевидный плюс такого перехода - человечество уже придумало множество хороших архитектур для классификационных сеток (спасибо imagenet'y), в то время как с архитектурами для сегментационных сеток пока не все так однозначно.",
"def get_valid_patches(img_shape, patch_size, central_points):\n start = central_points - patch_size/2\n end = start + patch_size\n mask = np.logical_and(start >= 0, end < np.array(img_shape))\n mask = np.all(mask, axis=-1)\n return mask\n\ndef extract_patches(img, mask, n_pos=64, n_neg=64, patch_size=100):\n res = []\n labels = []\n pos = np.argwhere(mask > 0)\n accepted_patches_mask = get_valid_patches(np.array(img.shape[:2]), patch_size, pos)\n pos = pos[accepted_patches_mask]\n np.random.shuffle(pos)\n for i in range(n_pos):\n start = pos[i] - patch_size/2\n end = start + patch_size\n res.append(img[start[0]:end[0], start[1]:end[1]])\n labels.append(1)\n \n neg = np.argwhere(mask == 0)\n accepted_patches_mask = get_valid_patches(np.array(img.shape[:2]), patch_size, neg)\n neg = neg[accepted_patches_mask]\n np.random.shuffle(neg)\n for i in range(n_neg):\n start = neg[i] - patch_size/2\n end = start + patch_size\n res.append(img[start[0]:end[0], start[1]:end[1]])\n labels.append(0)\n return np.array(res), np.array(labels)\n\npatches, labels = extract_patches(im, mask, 32,32)\n\nplt.imshow(patches[0])\n\nfrom lasagne.layers import InputLayer\nfrom lasagne.layers import DenseLayer\nfrom lasagne.layers import NonlinearityLayer\nfrom lasagne.layers import Pool2DLayer as PoolLayer\nfrom lasagne.layers import Conv2DLayer as ConvLayer\nfrom lasagne.layers import BatchNormLayer, batch_norm\nfrom lasagne.nonlinearities import softmax\nimport theano.tensor as T\nimport pickle\nimport lasagne.layers\nimport theano\n\nwith open('weights.pkl') as f:\n weights = pickle.load(f)\n\ndef build_network(weights):\n net = {}\n net['input'] = InputLayer((None, 3, 100, 100))\n net['conv1_1'] = batch_norm(ConvLayer(net['input'], num_filters=64, filter_size=3, pad=0, flip_filters=False,\n W=weights['conv1_1_w'], b=weights['conv1_1_b']), \n beta=weights['conv1_1_bn_beta'], gamma=weights['conv1_1_bn_gamma'], epsilon=1e-6)\n net['conv1_2'] = batch_norm(ConvLayer(net['conv1_1'], num_filters=64, filter_size=3, pad=0, flip_filters=False,\n W=weights['conv1_2_w'], b=weights['conv1_2_b']),\n beta=weights['conv1_2_bn_beta'], gamma=weights['conv1_2_bn_gamma'], epsilon=1e-6)\n net['pool1'] = PoolLayer(net['conv1_2'], pool_size=2)\n\n net['conv2_1'] = batch_norm(ConvLayer(net['pool1'], num_filters=128, filter_size=3, pad=0, flip_filters=False,\n W=weights['conv2_1_w'], b=weights['conv2_1_b']), \n beta=weights['conv2_1_bn_beta'], gamma=weights['conv2_1_bn_gamma'], epsilon=1e-6)\n net['conv2_2'] = batch_norm(ConvLayer(net['conv2_1'], num_filters=128, filter_size=3, pad=0, flip_filters=False,\n W=weights['conv2_2_w'], b=weights['conv2_2_b']),\n beta=weights['conv2_2_bn_beta'], gamma=weights['conv2_2_bn_gamma'], epsilon=1e-6)\n net['pool2'] = PoolLayer(net['conv2_2'], pool_size=2)\n \n net['conv3_1'] = batch_norm(ConvLayer(net['pool2'], num_filters=256, filter_size=3, pad=0, flip_filters=False,\n W=weights['conv3_1_w'], b=weights['conv3_1_b']), \n beta=weights['conv3_1_bn_beta'], gamma=weights['conv3_1_bn_gamma'], epsilon=1e-6)\n net['conv3_2'] = batch_norm(ConvLayer(net['conv3_1'], num_filters=256, filter_size=3, pad=0, flip_filters=False,\n W=weights['conv3_2_w'], b=weights['conv3_2_b']),\n beta=weights['conv3_2_bn_beta'], gamma=weights['conv3_2_bn_gamma'], epsilon=1e-6)\n net['pool3'] = PoolLayer(net['conv3_2'], pool_size=2)\n \n net['fc1'] = batch_norm(DenseLayer(net['pool3'], num_units=512, \n W=weights['fc1_w'], \n b=weights['fc1_b']), \n beta=weights['fc1_bn_beta'], gamma=weights['fc1_bn_gamma'], epsilon=1e-6)\n 
net['fc2'] = DenseLayer(net['fc1'], num_units=2, W=weights['fc2_w'], b=weights['fc2_b'])\n net['prob'] = NonlinearityLayer(net['fc2'], softmax)\n return net\n\n\n\nnet = build_network(weights)\n\ninput_image = T.tensor4('input')\nprob = lasagne.layers.get_output(net['prob'], input_image, batch_norm_use_averages=False)\nget_probs = theano.function([input_image], prob)\n\ndef preproces(patches):\n patches = patches.astype(np.float32)\n patches = patches / 255 - 0.5\n patches = patches.transpose(0,3,1,2)\n return patches\n\npredictions = get_probs(preproces(patches)).argmax(axis=-1)\n\nprint predictions\nprint (predictions == labels).mean()\n\nnp.mean(predictions[:32] == 1), np.mean(predictions[32:] == 0)",
"Вопрос: это что ж, если мы хотим отсегментировать картинку, нам для каждого пикселя надо вытаскивать патч и их независимо через сетку прогонять?\nОтвет: нет, можно модифицировать исходную сетку так, чтобы она принимала на вход картинку произвольного размера и возвращала для каждого пикселя вероятности классов. И это задача на сегодняшний семинар!\nЧто нам потребуется:\n- избавиться от полносвязных слоев, превратив их в эквивалентные сверточные;\n- избавиться от страйдов в пулинге, из-за которых размер картинки уменьшается.\n- перейти от обычных сверток и пулингов к dilated-сверткам и dilated-пулингам.",
"from lasagne.layers import DilatedConv2DLayer as DilatedConvLayer\n\ndef dilated_pool2x2(incoming, dilation_rate):\n d,input_h,input_w = incoming.output_shape[-3:]\n #print \"dilated pool\", input_h, input_w\n # 1. padding \n h_remainer = input_h % dilation_rate\n w_remainer = input_w % dilation_rate\n h_pad = 0 if h_remainer == 0 else dilation_rate - h_remainer\n w_pad = 0 if w_remainer == 0 else dilation_rate - w_remainer\n #print h_pad, w_pad\n incoming_padded = lasagne.layers.PadLayer(incoming, width=[(0, h_pad), (0, w_pad)], batch_ndim=2)\n h,w = incoming_padded.output_shape[-2:]\n assert h % dilation_rate == 0, \"{} {}\".format(h, dilation_rate)\n assert w % dilation_rate == 0, \"{} {}\".format(w, dilation_rate)\n \n # 2. reshape and transpose\n incoming_reshaped = lasagne.layers.ReshapeLayer(\n incoming_padded, ([0], [1], h/dilation_rate, dilation_rate, w/dilation_rate, dilation_rate))\n incoming_transposed = lasagne.layers.DimshuffleLayer(incoming_reshaped, \n (0, 1,3,5,2,4))\n incoming_reshaped = lasagne.layers.ReshapeLayer(incoming_transposed, ([0], -1, [4], [5]))\n \n # 3. max pool\n incoming_pooled = PoolLayer(incoming_reshaped, pool_size=2, stride=1)\n \n # 4. reshape\n pooled_reshaped = lasagne.layers.ReshapeLayer(incoming_pooled, ([0], d, dilation_rate, dilation_rate, [2], [3]))\n pooled_transposed = lasagne.layers.DimshuffleLayer(pooled_reshaped, (0, 1, 4, 2, 5, 3))\n pooled_reshaped = lasagne.layers.ReshapeLayer(pooled_transposed, ([0], [1], h - dilation_rate, w - dilation_rate))\n \n # 5. crop\n result = lasagne.layers.SliceLayer(pooled_reshaped, indices=slice(0, input_h - dilation_rate), axis=2) \n result = lasagne.layers.SliceLayer(result, indices=slice(0, input_w - dilation_rate), axis=3) \n return result",
"Обратите внимание на грабли, положенные в лазанье в реализации dilated convolution. Описание параметра W из документации:\n\nW : Theano shared variable, expression, numpy array or callable\nInitial value, expression or initializer for the weights. These should be a 4D tensor with shape (num_input_channels, num_filters, filter_rows, filter_columns). Note that the first two dimensions are swapped compared to a non-dilated convolution.",
"def build_network2(weights):\n net = {}\n dilation = 1\n net['input'] = InputLayer((None, 3, 200, 200))\n\n # TODO\n # you may copy-paste original function and fix it\n #net['conv1_1'] = \n # ...\n #net['fc2'] = \n\n #net['prob'] = NonlinearityLayer(net['fc2'], softmax)\n print \"output_shape\", net['fc2'].output_shape\n return net\n\nnet2 = build_network2(weights)\n\ninput_image = T.tensor4('input')\nfc2 = lasagne.layers.get_output(net2['fc2'], input_image, batch_norm_use_averages=False)\n\n\nget_fc2 = theano.function([input_image], fc2)",
"Давайте посмотрим, что у нас получилось",
"%time predictions = get_fc2(preproces(im[None,:200, :200])).transpose(0,2,3,1)\n\npredictions.shape\n\nplt.figure(figsize=(12,8))\nplt.subplot(1,3,1)\nplt.imshow(predictions[0].argmax(axis=-1), plt.cm.gray)\nplt.title('predicted')\nplt.subplot(1,3,2)\nplt.imshow(im[49:200-50,49:200-50])\nplt.title('input')\nplt.subplot(1,3,3)\nplt.imshow(mask[49:200-50,49:200-50], 'gray')\nplt.title('gt')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
chi-hung/SementicProj
|
webCrawler/amzProd.ipynb
|
mit
|
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom bs4 import BeautifulSoup # For HTML parsing\nimport requests\nimport re # Regular expressions\nfrom time import sleep # To prevent overwhelming the server between connections\nfrom collections import Counter # Keep track of our term counts\n#from nltk.corpus import stopwords # Filter out stopwords, such as 'the', 'or', 'and'\nimport nltk\nimport pandas as pd # For converting results to a dataframe and bar chart plots\n%matplotlib inline\n\nimport csv\nimport datetime\nimport time\n\nimport sqlalchemy\nfrom sqlalchemy import create_engine\n\n%load_ext watermark",
"This notebook is written by Yishin and Chi-Hung.",
"%watermark",
"First of all, we know that there are 7 types of vacuums on Amazon",
"def getVacuumTypeUrl(vacuumType,pageNum=1):\n vcleaners={\"central\":11333709011,\"canister\":510108,\"handheld\":510114,\"robotic\":3743561,\"stick\":510112,\"upright\":510110,\"wetdry\":553022}\n url_type_base=\"https://www.amazon.com/home-garden-kitchen-furniture-bedding/b/ref=sr_pg_\"+str(pageNum)+\"?ie=UTF8&node=\"\n url=url_type_base+str(vacuumType)+\"&page=\"+str(pageNum)\n print (url)\n return url\n\nvcleaners={\"central\":11333709011,\"canister\":510108,\"handheld\":510114,\"robotic\":3743561,\"stick\":510112,\"upright\":510110,\"wetdry\":553022}\n\nfor key in vcleaners:\n print(key,vcleaners[key])\n getVacuumTypeUrl(vcleaners[key])",
"The following are two functions which we aim to obtain the total number of pages of each vacuum type",
"def getFinalPageNum(url,maxretrytime=20):\n passed=False\n cnt=0\n \n while(passed==False):\n cnt+=1\n print(\"iteration from getFinalPageNum=\",cnt)\n if(cnt>maxretrytime):\n raise Exception(\"Error from getFinalPageNum(url)! Tried too many times but we are still blocked by Amazon.\")\n try:\n with requests.Session() as session:\n session.headers = {'User-Agent': \"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0\"} \n r=session.get(url)\n if (r.status_code==200):\n soup=BeautifulSoup(r.content,\"lxml\")\n if(\"Robot Check\" in soup.text):\n print(\"we are blocked!\")\n else:\n tagsFinalPageNum=soup.select(\"span[class='pagnDisabled']\")\n finalPageNum=str(tagsFinalPageNum[0].text)\n passed=True\n\n else:\n print(\"Connection failed. Reconnecting...\")\n except:\n print(\"Error from getFinalPageNum(url)! Probably due to connection time out\")\n return finalPageNum \n\ndef InferFinalPageNum(vacuumType,pageNum=1,times=10):\n url=getVacuumTypeUrl(vacuumType,pageNum)\n \n list_finalpageNum=[]\n \n for j in range(times):\n finalpageNum=getFinalPageNum(url)\n list_finalpageNum.append(finalpageNum)\n FinalpageNum=min(list_finalpageNum)\n\n return FinalpageNum\n\nFinalPageNum=InferFinalPageNum(510114,pageNum=1)\nprint('FinalPageNum=',FinalPageNum)",
"So, right now, we are able to infer the total number of pages of a specific vacuum type.\nThe next step is to generate all URLs of the selected vacuum type:",
"def urlsGenerator(typenode,FinalPageNum):\n #Note: 'typenode' and 'FinalpageNum' are both string\n\n URLs=[]\n pageIdx=1\n while(pageIdx<=int(FinalPageNum)):\n url_Type=\"https://www.amazon.com/home-garden-kitchen-furniture-bedding/b/ref=sr_pg_\"+str(pageIdx)+\"?ie=UTF8&node=\"\n url=url_Type+str(typenode)+\"&page=\"+str(pageIdx)\n #print(url)\n URLs.append(url)\n pageIdx+=1\n \n return URLs",
"For the moment, let us choose the vacuum type \"handheld\":",
"URLs=urlsGenerator(510114,FinalPageNum)\nlen(URLs)\nfor url in URLs:\n print(url)",
"Next, we'd like to obtain all the \"soups\" of the vacuum type \"handheld\" and store them into a list",
"def soupGenerator(URLs,maxretrytime=20): \n\n soups=[]\n urlindex=0\n for URL in URLs:\n urlindex+=1\n print(\"urlindex=\",urlindex)\n passed=False\n cnt=0 \n while(passed==False):\n cnt+=1\n print(\"iteration=\",cnt)\n if(cnt>maxretrytime):\n raise Exception(\"Error from soupGenerator(url,maxretrytime=20)! Tried too many times but we are still blocked by Amazon.\")\n \n try:\n with requests.Session() as session:\n \n session.headers = {'User-Agent': \"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0\"} \n r=session.get(URL) \n \n if (r.status_code==200): \n soup=BeautifulSoup(r.content,\"lxml\")\n if(\"Robot Check\" in soup.text):\n print(\"we are blocked!\")\n else:\n print(\"we are not blocked!\")\n soups.append(soup)\n passed=True\n \n else:\n print (\"Connection failed. Reconnecting...\")\n except:\n print(\"Error from soupGenerator(URLs,maxretrytime=20)! Probably due to connection time out\")\n \n return soups \n\nsoups=soupGenerator(URLs,maxretrytime=20)",
"How many soups have we created?",
"print(len(soups))",
"Let us pause for a while. We would like to review the usage of CSS selectors",
"example='''\n<span class=\"abc\">\n <div>\n <a href=\"http://123xyz.com\"></a>\n hello_div01\n </div>\n</span>\n\n<span class=\"def\">\n <a href=\"http://www.go.123xyz\"></a>\n <div>hello_div02</div>\n</span>\n'''\n\nmysoup=BeautifulSoup(example,\"lxml\")\n\nprint(mysoup.prettify())",
"Exercise: look for a specific tag which is a descendent of some other tag",
"mysoup.select(\".abc a\")\n\nmysoup.select(\".abc > a\")",
"the symbol > indicates that we'd like to look for a tags, which are direct descendents of the tag which its class=abc.\nIf we use \".abc a\", it means that we would like to find all descendents of the tag which its class=abc.",
"mysoup.select(\".abc > div\")",
"Exercise: we look for the tags whose value of the attr href starts with \"http\"",
"mysoup.select(\"a[href^='http']\")",
"Exercise: we look for the tags whose value of the attr href ends with \"http\"",
"mysoup.select(\"a[href$='http']\")",
"Exercise: extract the value of a specific attr of a specific tag",
"mysoup.select(\".abc a\")[0][\"href\"]",
"more info about CSS selectors:\nhttps://developer.mozilla.org/en-US/docs/Web/CSS/Attribute_selectors\nhttp://wiki.jikexueyuan.com/project/python-crawler-guide/beautiful-soup.html",
"sp=soups[70].select('li[id^=\"result_\"]')[0]\n\nprint(sp)\n\nfor s in sp:\n try:\n print(sp.span)\n except:\n print(\"error\")",
"Let's go back.\nFirst of all, let us look for the Product URL of the first item of the first page\nprint the link of the first page:",
"URLs=urlsGenerator(510114,FinalPageNum)\nlen(URLs)\nprint(URLs[0])\n#for url in URLs:\n# print(url)",
"We found that the Product URL of the first item can be extracted via:",
"soups[0].select('li[id^=\"result_\"]')[0].select(\"a[class='a-link-normal s-access-detail-page a-text-normal']\")[0]",
"where we have used the fact that each item has one unique id.\nNow, we have another goal: obtain the total number of customer reviews of the selected item (first item in the first page). Doing so we are also able to obtain the link of that item, which is pretty nice, since the item name and the item ID can be extracted from that link.",
"csrev_tag=soups[0].select('li[id^=\"result_\"]')[0].select(\"a[href$='customerReviews']\")[0]\nprint(csrev_tag)",
"This means we are able to obtain the total number of customer reviews (10,106) and also the link of the selected item:\nhttps://www.amazon.com/BLACK-DECKER-CHV1410L-Cordless-Lithium/dp/B006LXOJC0/ref=lp_510114_1_1/157-7476471-7904367?s=vacuums&ie=UTF8&qid=1485361951&sr=1-1\nThe above link will then be replaced by the following one:\nhttps://www.amazon.com/BLACK-DECKER-CHV1410L-Cordless-Lithium/product-reviews/B006LXOJC0/ref=cm_cr_getr_d_paging_btm_1?ie=UTF8&pageNumber=1&reviewerType=all_reviews&pageSize=1000\nwhich shows 50 customer reviews per page (instead of 10 reviews per page by default).\nAnother Goal: We'd like to obtain the price of the selected item\nNow, let's look for more information, e.g. the price of the selected product. We know that the tag we have found is stored at the end part of a big tag which contains all the info of a specific item. Now, to retrieve more info of that item, we'll move ourselves from the end part to the front gradually.",
"csrev_tag.parent\n\ncsrev_tag.parent.previous_sibling.previous_sibling\n\npricetag=csrev_tag.parent.previous_sibling.previous_sibling\nprice=pricetag.select(\".sx-price-whole\")[0].text\nfraction_price=pricetag.select(\".sx-price-fractional\")[0].text\nprint(price,fraction_price)\nprint(int(price)+0.01*int(fraction_price))",
"so, we are able to obtain the price of the selected item.\nYet Another Goal: Let's see if we can obtain the brand of the selected item",
"pricetag.parent\n\npricetag.previous_sibling.parent.select(\".a-size-small\")[2].text",
"Another goal: number of the average stars of the selected item",
"for j in range(30):\n try:\n #selected=soups[2].select('li[id^=\"result_\"]')[j].select_one(\"span[class='a-declarative']\")\n selected=soups[2].select('li[id^=\"result_\"]')[j].select_one(\"i[class='a-icon a-icon-popover']\").previous_sibling\n\n print(len(selected),selected.string.split(\" \")[0])\n except:\n print(\"index= \",j,\", 0 stars (no reviews yet)\")\n\nprint(soups[10].select('li[id^=\"result_\"]')[0].find_all(\"a\")[2][\"href\"]) # 5stars (although only 2 reviews)\n\nprint(soups[12].select('li[id^=\"result_\"]')[0].find_all(\"a\")[2][\"href\"]) # 0 start (no customer reviews yet)",
"Now we are ready to merge all the ingredients learned from above code blocks into one function",
"def items_info_extractor(soups):\n \n item_links=[]\n item_num_of_reviews=[]\n item_prices=[]\n item_names=[]\n item_ids=[]\n item_brands=[]\n item_avestars=[]\n \n for soup in soups:\n items=soup.select('li[id^=\"result_\"]')\n\n for item in items:\n\n link_item=item.select(\"a[href$='customerReviews']\")\n\n # ignore those items which contains 0 customer reviews. Those items are irrelevent to us.\n if (link_item !=[]): \n\n price_tag=link_item[0].parent.previous_sibling.previous_sibling\n price_main_tag=price_tag.select(\".sx-price-whole\")\n price_fraction_tag=price_tag.select(\".sx-price-fractional\")\n\n link=link_item[0][\"href\"]\n\n # Ignore items which don't have normal price tags.\n # Those are items which are not sold by Amazon directly.\n # Also, remove those items which are ads (3 ads are shown in each page).\n if((price_main_tag !=[]) & (price_fraction_tag !=[]) & (link.endswith(\"spons#customerReviews\") == False)):\n\n # extract the item's name and ID from the obtained link\n item_name=link.split(\"/\")[3]\n item_id=link.split(\"/\")[5]\n # replace the obtained link by the link that will lead to the customer reviews\n base_url=\"https://www.amazon.com/\"\n link=base_url+item_name+\"/product-reviews/\"+item_id+\"/ref=cm_cr_getr_d_paging_btm_\" \\\n +str(1)+\"?ie=UTF8&pageNumber=\"+str(1)+\"&reviewerType=all_reviews&pageSize=1000\"\n\n # obtain the price of the selected single item\n price_main=price_main_tag[0].text\n price_fraction=price_fraction_tag[0].text\n item_price=int(price_main)+0.01*int(price_fraction)\n\n # obtain the brand of the selected single item\n item_brand=price_tag.parent.select(\".a-size-small\")[1].text\n if(item_brand==\"by \"):\n item_brand=price_tag.parent.select(\".a-size-small\")[2].text\n # obtain the number of reviews of the selected single item\n item_num_of_review=int(re.sub(\",\",\"\",link_item[0].text))\n \n # obtain the averaged number of stars\n starSelect=item.select_one(\"span[class='a-declarative']\")\n if((starSelect is None) or (starSelect.span is None)): # there are no reviews yet (hence, we see no stars at all)\n item_avestar=0\n else:\n item_avestar=starSelect.span.string.split(\" \")[0] # there are some reviews. So, we are able to extract the averaged number of stars\n \n # store the obtained variables into lists\n item_links.append(link)\n item_num_of_reviews.append(item_num_of_review)\n item_prices.append(item_price)\n item_names.append(item_name)\n item_ids.append(item_id)\n item_brands.append(item_brand)\n item_avestars.append(item_avestar)\n return item_brands,item_ids,item_names,item_prices,item_num_of_reviews,item_links,item_avestars\n\nitem_brands,item_ids,item_names,item_prices,item_num_of_reviews,item_links,item_avestars=items_info_extractor(soups)\n\nprint(len(item_ids))\nprint(len(set(item_ids)))\n\nprint(len(item_names))\nprint(len(set(item_names)))\n\nprint(len(item_links))\nprint(len(set(item_links)))",
"The above results indicate that there are items that have the same product name but different links.\nCool. Let's find those products.",
"import collections\nitem_names_repeated=[]\nfor key in collections.Counter(item_names):\n if collections.Counter(item_names)[key]>1:\n print(key,collections.Counter(item_names)[key])\n item_names_repeated.append(key)\n#print [item for item, count in collections.Counter(a).items() if count > 1]\n\nprint(item_names_repeated)\n\nitems_repeated=[]\nfor name,link,price,numrev in zip(item_names,item_links,item_prices,item_num_of_reviews):\n if name in item_names_repeated:\n #print(name,link,\"\\n\")\n items_repeated.append((name,link,price,numrev))",
"sort a list with the method: sorted ( a \"key\" has to be given )",
"items_repeated=sorted(items_repeated, key=lambda x: x[0])\n\nprint(\"item name, item link, item price, total # of reviews of that item\",\"\\n\")\n\nfor idx,(name,link,price,numrev) in enumerate(items_repeated):\n if((idx+1)%2==0):\n print(name,link,price,numrev,\"\\n\")\n else:\n print(name,link,price,numrev)",
"What's found\n* Each of the 7 items above has two different links/IDs (probably due to different color or seller) and varying prices.\nNow, let's try to merge the obtained data into pandas dataframe\nReference: http://pbpython.com/pandas-list-dict.html",
"for id in item_ids:\n if(\"B006LXOJC0\" in id):\n print(id)\n\ndf=pd.DataFrame.from_items([(\"pindex\",item_ids),(\"type\",\"handheld\"),(\"pname\",item_names),(\"brand\",item_brands),(\"price\",item_prices),(\"rurl\",item_links),(\"totalRev\",item_num_of_reviews),(\"avgStars\",item_avestars)])\n\ndf.loc[:,[\"rurl\",\"avgStars\",\"totalRev\"]]",
"Let's upload the obtained dataframe to MariaDB",
"from sqlalchemy import create_engine,Table,Column,Integer,String,MetaData,ForeignKey,Date\nimport pymysql\n\nengine=create_engine(\"mysql+pymysql://semantic:GbwSq1RzFb@104.199.201.206:13606/Tests?charset=utf8\",echo=False, encoding='utf-8')\nconn = engine.connect()\n\ndf.to_sql(name='amzProd', con=conn, if_exists = 'append', index=False)\nconn.close()",
"Alternatively, we can store the obtained dataframe into a csv file",
"df.to_csv(\"ProdInfo_handheld_26012017.csv\", encoding=\"utf-8\")",
"And load it:",
"pd.DataFrame.from_csv(\"ProdInfo_handheld_26012017.csv\", encoding=\"utf-8\")\n",
"Upload the obtained CSV files to the remote MariaDB",
"from sqlalchemy import create_engine,Table,Column,Integer,String,MetaData,ForeignKey,Date\nimport pymysql\nimport datetime",
"I found out that there might be same pindex in one dataframe. This can lead to an error if we are going to upload our data to MariaDB, as the primary key is ought to be unique.",
"pd.set_option('max_colwidth', 800)\nfor idx,df in enumerate(dfs):\n print(idx,df.loc[df['pindex'] == 'B00SWGVICS'])",
"Strategy: Store all csvs into one dataframe. Then, remove all duplicates before uploading to the DataBase.",
"import os\nfrom IPython.display import display\n\ncwd=os.getcwd()\n\nprint(cwd)",
"Now, it's time to get to know the Pandas Dataframe better. I'd like to figure out how two dataframes can be merged horizontally.\nan one column example: pd.Dataframe.from_items()",
"test_col = pd.DataFrame.from_items([(\"test_column1\",np.arange(10))])\ntest_col2 = pd.DataFrame.from_items([(\"test_column2\",5+np.arange(10))])\ndisplay(test_col,test_col2)\n\nresult = pd.concat([test_col, test_col2], axis=1)\n\ndisplay(result)",
"",
"date=\"2017-02-01\"\nprodTypes=[\"central\",\"canister\",\"handheld\",\"robotic\",\"stick\",\"upright\",\"wetdry\"]\n\n# put all the dataframes into a list\ndfs=[pd.DataFrame.from_csv(\"data/ProdInfo_%s_%s.csv\"%(prodType,date), encoding=\"utf-8\") for prodType in prodTypes]\n\n\nfor idx,df in enumerate(dfs):\n cID=[j%7 for j in range(df.shape[0])]\n colCID=pd.DataFrame.from_items([( \"cID\",cID )])\n dfs[idx]=pd.concat([df, colCID], axis=1)\n\n# concatenate dataframes\ndf=pd.concat(dfs).drop_duplicates(\"rurl\")\n\ndf.to_csv(\"ProdInfo_all_%s.csv\"%(date), encoding=\"utf-8\")\n\ndate=\"2017-02-01\"\ndate=\"2017-02-06\"\nprodTypes=[\"central\",\"canister\",\"handheld\",\"robotic\",\"stick\",\"upright\",\"wetdry\"]\n\n# put all the dataframes into a list\ndfs=[pd.DataFrame.from_csv(\"data/ProdInfo_%s_%s.csv\"%(prodType,date), encoding=\"utf-8\") for prodType in prodTypes]\n\n\nfor idx,df in enumerate(dfs):\n cID=[j%7 for j in range(df.shape[0])]\n colCID=pd.DataFrame.from_items([( \"cID\",cID )])\n dfs[idx]=pd.concat([df, colCID], axis=1)\n\n# concatenate dataframes\ndf=pd.concat(dfs).drop_duplicates(\"rurl\")\n\n# prepare the connection and connect to the DB\nengine=create_engine(\"mysql+pymysql://semantic:GbwSq1RzFb@104.199.201.206:13606/Tests?charset=utf8\",echo=False, encoding='utf-8')\nconn = engine.connect()\n\n# remove duplicates and upload the concatenated dataframe to the SQL DataBase\ndf.to_sql(name='amzProd', con=conn, if_exists = 'append', index=False)\n\n# close the connection\nconn.close()\n\nlen(df.iloc[974][\"brand\"])\n\ndf.iloc[463][\"pname\"]\n\n!echo \"Handheld-Vacuum-Cleaner-Abask-Vacuum-Cleaner-7-2V-60W-Ni-CD2200MA-3-5KPA-Suction-Portable-1-Accessories-Rechargeable-Cordless-Cleaner\"| wc ",
"Length of this string is larger than 100. Therefore, I have to alter our schema, since the product name was set to have length 100 by default."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ranophoenix/jvmthreadparser
|
Thread Analysis.ipynb
|
bsd-2-clause
|
[
"JVM Thread Dump Analysis\nGoal: Get insights about thread states in a production environment.\nInspired by: https://github.com/jakevdp/JupyterWorkflow",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import ListedColormap\nimport numpy as np\nimport pandas as pd\nplt.style.use('seaborn')\n\nfrom sklearn.decomposition import PCA\nfrom sklearn.mixture import GaussianMixture\nfrom mpl_toolkits.mplot3d import Axes3D\n\nimport jvmthreadparser.parser as jtp",
"Get Data\nDumps generated every 2 minutes and saved in one single file. Period: May/ 2017.",
"dump = jtp.open_text('threads4.txt', load_thread_content = False)\n\ndump.head()",
"Thread State by Date\n\nProblem: How many threads exist in each state?\nGoal: Reshape the data using the date as index and states as columns.\nHow:",
"dump['Threads'] = 1\nthreads_by_state = dump.groupby(['DateTime','State']).count().unstack().fillna(0)\nthreads_by_state.columns = threads_by_state.columns.droplevel()\nthreads_by_state.head()\n\nax = threads_by_state.plot(figsize=(14,12), cmap='Paired', title = 'Thread State by Date')\nax.set_xlabel('Day of Month')\nax.set_ylabel('Number of Threads');",
"Average of Threads by Hour\n\nProblem: Are there any peak hour for thread states?\nGoal: Plot thread states by hour (0-24).\nHow:",
"ax = threads_by_state.groupby(threads_by_state.index.hour).mean().plot(figsize=(14,12), cmap='Paired', title='Threads by Hour')\nax.set_xlabel('Hour of the Day (0-23)')\nax.set_ylabel('Mean(Number of Threads)');",
"Average of Threads by Day\n\nProblem: Are there any peak day for thread states?\nGoal: Plot thread states by day (2017-05-01 / 2017-05-29).\nHow:",
"ax = threads_by_state.resample('D').mean().plot(figsize=(14,12), cmap = 'Paired')\nax.set_xlabel('Day of Month')\nax.set_ylabel('Mean(Number of Threads)');",
"Threads in TIMED_WAITING (PARKING) by Hour Each Day\n\nProblem: Are there any pattern in TIMED_WAITING(PARKING) threads?\nGoal: Plot TIMED_WAITING (PARKING) threads. Each line represents a day. Thus, we can try visualize some patterns in data.\nHow:",
"by_hour = threads_by_state.resample('H').mean()\npivoted = by_hour.pivot_table(\"TIMED_WAITING (PARKING)\", index = by_hour.index.time, columns = by_hour.index.date).fillna(0)\nax = pivoted.plot(legend=False, alpha = 0.3, color = 'black', title = 'Day Patterns of TIMED_WAITING (PARKING) Threads by Time', figsize=(14,12))\nax.set_xlabel('Time')\nax.set_ylabel('Number of Threads');",
"Principal Component Analysis\n\nProblem: Can we plot clustering patterns?\nGoal: Use PCA to reduce data dimensionality to 3 dimensions.\nHow:",
"X = pivoted.fillna(0).T.values\nX.shape\n\nX2 = PCA(3, svd_solver='full').fit_transform(X)\nX2.shape\n\nfig = plt.figure(figsize=(12,12))\nax = fig.add_subplot(111, projection='3d')\nax.scatter(X2[:, 0], X2[:, 1], X2[:, 2])\nax.set_title('PCA Dimensionality Reduction (3 Dimensions)')\nax.set_xlabel('Principal Component 1')\nax.set_ylabel('Principal Component 2')\nax.set_zlabel('Principal Component 3');",
"Unsupervised Clustering\n\nProblem: Can we put colors for identify each cluster?\nGoal: Use GaussianMixture to identify clusters.\nHow:",
"gmm = GaussianMixture(3).fit(X)\nlabels = gmm.predict(X)\n\nfig = plt.figure(figsize=(14,10))\nax = fig.add_subplot(111, projection='3d')\n\ncMap = ListedColormap(['green', 'blue','red'])\np = ax.scatter(X2[:, 0], X2[:, 1], X2[:, 2], c=labels, cmap=cMap)\nax.set_title('Unsupervised Clustering (3 Clusters with Colors)')\nax.set_xlabel('Principal Component 1')\nax.set_ylabel('Principal Component 2')\nax.set_zlabel('Principal Component 3');\ncolorbar = fig.colorbar(p, ticks=np.linspace(0,2,3))\ncolorbar.set_label('Cluster')",
"Visualizing Clustering\n\nProblem: How identify threads in each cluster?\nGoal: Generate plots showing threads in each cluster.\nHow:",
"fig, ax = plt.subplots(1, 3, figsize=(14, 6))\n\npivoted.T[labels == 0].T.plot(legend=False, alpha=0.4, ax=ax[0]);\npivoted.T[labels == 1].T.plot(legend=False, alpha=0.4, ax=ax[1]);\npivoted.T[labels == 2].T.plot(legend=False, alpha=0.4, ax=ax[2]);\n\nax[0].set_title('Cluster 0')\nax[0].set_xlabel('Time')\nax[0].set_ylabel('Number of Threads')\nax[1].set_title('Cluster 1');\nax[1].set_xlabel('Time')\nax[2].set_title('Cluster 2');\nax[2].set_xlabel('Time')",
"Comparing with Day of Week\n\nProblem: Can weekday explain this variability?\nGoal: Plot clusters using one color per weekday (Monday=0, Sunday=6).\nHow:",
"dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek\n\nfig = plt.figure(figsize=(14, 10))\nax = fig.add_subplot(111, projection='3d')\np = ax.scatter(X2[:, 0], X2[:, 1], X2[:, 2], c=dayofweek, cmap='rainbow')\nax.set_title('Unsupervised Clustering (3 Clusters) Colored by Weekday')\nax.set_xlabel('Principal Component 1')\nax.set_ylabel('Principal Component 2')\nax.set_zlabel('Principal Component 3');\ncolorbar = fig.colorbar(p)\ncolorbar.set_label('Weekday (0=Monday, Sunday=6)')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kuchaale/X-regression
|
examples/xarray_coupled_w_GLSAR_JRA55_analysis.ipynb
|
gpl-3.0
|
[
"Table of Contents\n<p><div class=\"lev1\"><a href=\"#Import-libraries-1\"><span class=\"toc-item-num\">1 </span>Import libraries</a></div><div class=\"lev1\"><a href=\"#Data-opening-2\"><span class=\"toc-item-num\">2 </span>Data opening</a></div><div class=\"lev1\"><a href=\"#Variable-and-period-of-analysis-selection-3\"><span class=\"toc-item-num\">3 </span>Variable and period of analysis selection</a></div><div class=\"lev1\"><a href=\"#Deseasonalizing-4\"><span class=\"toc-item-num\">4 </span>Deseasonalizing</a></div><div class=\"lev1\"><a href=\"#Regressor-loading-5\"><span class=\"toc-item-num\">5 </span>Regressor loading</a></div><div class=\"lev1\"><a href=\"#Regression-function-6\"><span class=\"toc-item-num\">6 </span>Regression function</a></div><div class=\"lev1\"><a href=\"#Regression-calculation-7\"><span class=\"toc-item-num\">7 </span>Regression calculation</a></div><div class=\"lev1\"><a href=\"#Visualization-8\"><span class=\"toc-item-num\">8 </span>Visualization</a></div>\n\n# Import libraries",
"import supp_functions as fce\nimport xarray as xr\nimport pandas as pd\nimport statsmodels.api as sm\nimport numpy as np\nimport matplotlib.pyplot as plt",
"Data opening",
"s_year = 1979\ne_year = 2009\nvari ='t'\n\nin_dir = '~/'\nin_netcdf = in_dir + 'jra55_tmp_1960_2009_zm.nc'\nds = xr.open_dataset(in_netcdf)",
"Variable and period of analysis selection",
"times = pd.date_range(str(s_year)+'-01-01', str(e_year)+'-12-31', name='time', freq = 'M')\nds_sel = ds.sel(time = times, method='ffill') #nearest\nds_sel = ds_sel[vari]",
"Deseasonalizing",
"climatology = ds_sel.groupby('time.month').mean('time')\nanomalies = ds_sel.groupby('time.month') - climatology",
"Regressor loading",
"global reg\n\nsolar = fce.open_reg_ccmi(in_dir+'solar_1947.nc', 'solar', 0, 1947, s_year, e_year)\nsolar /= 126.6\n\ntrend = np.linspace(-1, 1, solar.shape[0])\n\nnorm = 4\nwhat_re = 'jra55'\nwhat_sp = ''\ni_year2 = 1947\ni_year = 1960\nwhat_re2 = 'HadISST'\nsaod = fce.open_reg_ccmi(in_dir+'sad_gm_50hPa_1949_2013.nc', 'sad', 0, 1949, s_year, e_year)\nqbo1 = fce.open_reg_ccmi(in_dir+'qbo_'+what_re+what_sp+'_pc1.nc', 'index', norm, i_year, s_year, e_year)\nqbo2 = fce.open_reg_ccmi(in_dir+'qbo_'+what_re+what_sp+'_pc2.nc', 'index', norm, i_year, s_year, e_year) \nenso = fce.open_reg_ccmi(in_dir+'enso_'+what_re2+'_monthly_'+str(i_year2)+'_'+str(e_year)+'.nc', \\\n 'enso', norm, i_year2, s_year, e_year)\nprint(trend.shape, solar.shape, saod.shape, enso.shape, qbo1.shape, qbo2.shape, anomalies.time.shape)\nreg = np.column_stack((trend, solar, qbo1, qbo2, saod, enso)) ",
"Regression function",
"def xr_regression(y):\n X = sm.add_constant(reg, prepend=True) # regressor matrix\n mod = sm.GLSAR(y.values, X, 2, missing = 'drop') # MLR analysis with AR2 modeling\n res = mod.iterative_fit()\n\n return xr.DataArray(res.params[1:])",
"Regression calculation",
"stacked = anomalies.stack(allpoints = ['lev', 'lat']).squeeze()\nstacked = stacked.reset_coords(drop=True)\ncoefs = stacked.groupby('allpoints').apply(xr_regression)\ncoefs_unstacked = coefs.unstack('allpoints')",
"Visualization",
"%matplotlib inline\n\ncoefs_unstacked.isel(dim_0 = [1]).squeeze().plot.contourf(yincrease=False)#, vmin=-1, vmax=1, cmap=plt.cm.RdBu_r)\ncoefs_unstacked.isel(dim_0 = [1]).squeeze().plot.contour(yincrease=False, colors='k', add_colorbar=False, \\\n levels = [-0.5, -0.2,-0.1,0,0.1,0.2, 0.5])\nplt.yscale('log')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
evanmiltenburg/python-for-text-analysis
|
Chapters-colab/Chapter_21_Tables_and_Networks.ipynb
|
apache-2.0
|
[
"<a href=\"https://colab.research.google.com/github/cltl/python-for-text-analysis/blob/colab/Chapters-colab/Chapter_21_Tables_and_Networks.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"%%capture\n!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip\n!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip\n!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip\n\n!unzip Data.zip -d ../\n!unzip images.zip -d ./\n!unzip Extra_Material.zip -d ../\n\n!rm Data.zip\n!rm Extra_Material.zip\n!rm images.zip",
"Chapter 20 - Tables and Networks\nIn the previous chapter we looked into various types of charts and correlations that are useful for scientific analysis in Python. Here, we present two more groups of visualizations: tables and networks. We will spend little attention to these, since they are less/not useful for the final assignment; however, note that they are still often a useful visualization options in practice.\nAt the end of this chapter, you will be able to:\n- Create formatted tables\n- Create networks\nThis requires that you already have (some) knowledge about:\n- Loading and manipulating data.\nIf you want to learn more about these topics, you might find the following links useful:\n- List of visualization blogs: https://flowingdata.com/2012/04/27/data-and-visualization-blogs-worth-following/",
"%matplotlib inline\n",
"1. Tables\nThere are (at least) two ways to output your data as a formatted table:\n\nUsing the tabulate package. (You might need to install it first, using conda install tabulate)\nUsing the pandas dataframe method df.to_latex(...), df.to_string(...), or even df.to_clipboard(...).\n\nThis is extremely useful if you're writing a paper. First version of the 'results' section: done!\nOption 1: Tabulate",
"from tabulate import tabulate\n\ntable = [[\"spam\",42],[\"eggs\",451],[\"bacon\",0]]\nheaders = [\"item\", \"qty\"]\n\n# Documentation: https://pypi.python.org/pypi/tabulate\nprint(tabulate(table, headers, tablefmt=\"latex_booktabs\"))",
"Option 2: Pandas DataFrames",
"import pandas as pd\n\n# Documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html\ndf = pd.DataFrame(data=table, columns=headers)\nprint(df.to_latex(index=False))",
"Once you've produced your LaTeX table, it's almost ready to put in your paper. If you're writing an NLP paper and your table contains scores for different system outputs, you might want to make the best scores bold, so that they stand out from the other numbers in the table.\nMore to explore\nThe pandas library is really useful if you work with a lot of data (we'll also use it below). As Jake Vanderplas said in the State of the tools video, the pandas DataFrame is becoming the central format in the Python ecosystem. Here is a page with pandas tutorials.\n2. Networks\nSome data is best visualized as a network. There are several options out there for doing this. The easiest is to use the NetworkX library and either plot the network using Matplotlib, or export it to JSON or GEXF (Graph EXchange Format) and visualize the network using external tools.\nLet's explore a bit of WordNet today. For this, we'll want to import the NetworkX library, as well as the WordNet module. We'll look at the first synset for dog: dog.n.01, and how it's positioned in the WordNet taxonomy. All credits for this idea go to this blog.",
"import networkx as nx # You might need to install networkx first (conda install -c anaconda networkx)\nfrom nltk.corpus import wordnet as wn\nfrom nltk.util import bigrams # This is a useful function.",
"Networks are made up out of edges: connections between nodes (also called vertices). To build a graph of the WordNet-taxonomy, we need to generate a set of edges. This is what the function below does.",
"def hypernym_edges(synset):\n \"\"\"\n Function that generates a set of edges \n based on the path between the synset and entity.n.01\n \"\"\"\n edges = set()\n for path in synset.hypernym_paths():\n synset_names = [s.name() for s in path]\n # bigrams turns a list of arbitrary length into tuples: [(0,1),(1,2),(2,3),...]\n # edges.update adds novel edges to the set.\n edges.update(bigrams(synset_names))\n return edges\n\nimport nltk\nnltk.download('wordnet')\n\n# Use the synset 'dog.n.01'\ndog = wn.synset('dog.n.01')\n\n# Generate a set of edges connecting the synset for 'dog' to the root node (entity.n.01)\nedges = hypernym_edges(dog)\n\n# Create a graph object.\nG = nx.Graph()\n\n# Add all the edges that we generated earlier.\nG.add_edges_from(edges)",
"Now we can actually start drawing the graph. We'll increase the figure size, and use the draw_spring method (that implements the Fruchterman-Reingold layout algorithm).",
"# Increasing figure size for better display of the graph.\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 11, 11\n\n# Draw the actual graph.\nnx.draw_spring(G,with_labels=True)",
"What is interesting about this is that there is a cycle in the graph! This is because dog has two hypernyms, and those hypernyms are both superseded (directly or indirectly) by animal.n.01.\nWhat is not so good is that the graph looks pretty ugly: there are several crossing edges, which is totally unnecessary. There are better layouts implemented in NetworkX, but they do require you to install pygraphviz. Once you've done that, you can execute the next cell. (And if not, then just assume it looks much prettier!)",
"# Install pygraphviz first: pip install pygraphviz\n!sudo apt-get install -y graphviz-dev\n!pip install pygraphviz\nfrom networkx.drawing.nx_agraph import graphviz_layout\n\n# Let's add 'cat' to the bunch as well.\ncat = wn.synset('cat.n.01')\ncat_edges = hypernym_edges(cat)\nG.add_edges_from(cat_edges)\n\n# Use the graphviz layout. First compute the node positions..\npositioning = graphviz_layout(G)\n\n# And then pass node positions to the drawing function.\nnx.draw_networkx(G,pos=positioning)",
"Question\nHow do dogs differ from cats, according to WordNet?\nAnswer:\nQuestion\nCan you think of any data other than WordNet-synsets that could be visualized as a network?\nAnswer:\nMore to explore\n\n\nPython's network visualization tools are fairly limited (though we haven't really explored Pygraphviz (and Graphviz itself is able to create examples like these)). It's usually easier to export the graph to GEXF and visualize it using Gephi or SigmaJS. Gephi also features plugins, which enable you to create interactive visualizations. See here for code and a link to a demo that Emiel made.\n\n\nFor analyzing graphs, it is better to use either Gephi, or the python-louvain library, which enables you to cluster nodes in a network.\n\n\nSome of the map-making libraries listed above also provide some cool functionality to create graphs on a map. This is nice to visualize e.g. relations between countries.\n\n\n3. Maps\nMaps are a huge subject that we won't cover in this course. If you are interested, you can take a look into the Basemap module."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mintcloud/deep-learning
|
weight-initialization/.ipynb_checkpoints/weight_initialization-checkpoint.ipynb
|
mit
|
[
"Weight Initialization\nIn this lesson, you'll learn how to find good initial weights for a neural network. Having good initial weights can place the neural network close to the optimal solution. This allows the neural network to come to the best solution quicker. \nTesting Weights\nDataset\nTo see how different weights perform, we'll test on the same dataset and neural network. Let's go over the dataset and neural network.\nWe'll be using the MNIST dataset to demonstrate the different initial weights. As a reminder, the MNIST dataset contains images of handwritten numbers, 0-9, with normalized input (0.0 - 1.0). Run the cell below to download and load the MNIST dataset.",
"%matplotlib inline\n\nimport tensorflow as tf\nimport helper\n\nfrom tensorflow.examples.tutorials.mnist import input_data\n\nprint('Getting MNIST Dataset...')\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)\nprint('Data Extracted.')",
"Neural Network\n<img style=\"float: left\" src=\"images/neural_network.png\"/>\nFor the neural network, we'll test on a 3 layer neural network with ReLU activations and an Adam optimizer. The lessons you learn apply to other neural networks, including different activations and optimizers.",
"# Save the shapes of weights for each layer\nlayer_1_weight_shape = (mnist.train.images.shape[1], 256)\nlayer_2_weight_shape = (256, 128)\nlayer_3_weight_shape = (128, mnist.train.labels.shape[1])",
"Initialize Weights\nLet's start looking at some initial weights.\nAll Zeros or Ones\nIf you follow the principle of Occam's razor, you might think setting all the weights to 0 or 1 would be the best solution. This is not the case.\nWith every weight the same, all the neurons at each layer are producing the same output. This makes it hard to decide which weights to adjust.\nLet's compare the loss with all ones and all zero weights using helper.compare_init_weights. This function will run two different initial weights on the neural network above for 2 epochs. It will plot the loss for the first 100 batches and print out stats after the 2 epochs (~860 batches). We plot the first 100 batches to better judge which weights performed better at the start.\nRun the cell below to see the difference between weights of all zeros against all ones.",
"all_zero_weights = [\n tf.Variable(tf.zeros(layer_1_weight_shape)),\n tf.Variable(tf.zeros(layer_2_weight_shape)),\n tf.Variable(tf.zeros(layer_3_weight_shape))\n]\n\nall_one_weights = [\n tf.Variable(tf.ones(layer_1_weight_shape)),\n tf.Variable(tf.ones(layer_2_weight_shape)),\n tf.Variable(tf.ones(layer_3_weight_shape))\n]\n\nhelper.compare_init_weights(\n mnist,\n 'All Zeros vs All Ones',\n [\n (all_zero_weights, 'All Zeros'),\n (all_one_weights, 'All Ones')])",
"As you can see the accuracy is close to guessing for both zeros and ones, around 10%.\nThe neural network is having a hard time determining which weights need to be changed, since the neurons have the same output for each layer. To avoid neurons with the same output, let's use unique weights. We can also randomly select these weights to avoid being stuck in a local minimum for each run.\nA good solution for getting these random weights is to sample from a uniform distribution.\nUniform Distribution\nA [uniform distribution](https://en.wikipedia.org/wiki/Uniform_distribution_(continuous%29) has the equal probability of picking any number from a set of numbers. We'll be picking from a continous distribution, so the chance of picking the same number is low. We'll use TensorFlow's tf.random_uniform function to pick random numbers from a uniform distribution.\n\ntf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)\nOutputs random values from a uniform distribution.\nThe generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded.\n\nshape: A 1-D integer Tensor or Python array. The shape of the output tensor.\nminval: A 0-D Tensor or Python value of type dtype. The lower bound on the range of random values to generate. Defaults to 0.\nmaxval: A 0-D Tensor or Python value of type dtype. The upper bound on the range of random values to generate. Defaults to 1 if dtype is floating point.\ndtype: The type of the output: float32, float64, int32, or int64.\nseed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.\nname: A name for the operation (optional).\n\n\nWe can visualize the uniform distribution by using a histogram. Let's map the values from tf.random_uniform([1000], -3, 3) to a histogram using the helper.hist_dist function. This will be 1000 random float values from -3 to 3, excluding the value 3.",
"helper.hist_dist('Random Uniform (minval=-3, maxval=3)', tf.random_uniform([1000], -3, 3))",
"The histogram used 500 buckets for the 1000 values. Since the chance for any single bucket is the same, there should be around 2 values for each bucket. That's exactly what we see with the histogram. Some buckets have more and some have less, but they trend around 2.\nNow that you understand the tf.random_uniform function, let's apply it to some initial weights.\nBaseline\nLet's see how well the neural network trains using the default values for tf.random_uniform, where minval=0.0 and minval=1.0.",
"# Default for tf.random_uniform is minval=0 and maxval=1\nbasline_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape))\n]\n\nhelper.compare_init_weights(\n mnist,\n 'Baseline',\n [(basline_weights, 'tf.random_uniform [0, 1)')])",
"The loss graph is showing the neural network is learning, which it didn't with all zeros or all ones. We're headed in the right direction.\nGeneral rule for setting weights\nThe general rule for setting the weights in a neural network is to be close to zero without being too small. A good pracitce is to start your weights in the range of $[-y, y]$ where\n$y=1/\\sqrt{n}$ ($n$ is the number of inputs to a given neuron).\nLet's see if this holds true, let's first center our range over zero. This will give us the range [-1, 1).",
"uniform_neg1to1_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -1, 1)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -1, 1)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -1, 1))\n]\n\nhelper.compare_init_weights(\n mnist,\n '[0, 1) vs [-1, 1)',\n [\n (basline_weights, 'tf.random_uniform [0, 1)'),\n (uniform_neg1to1_weights, 'tf.random_uniform [-1, 1)')])",
"We're going in the right direction, the accuracy and loss is better with [-1, 1). We still want smaller weights. How far can we go before it's too small?\nToo small\nLet's compare [-0.1, 0.1), [-0.01, 0.01), and [-0.001, 0.001) to see how small is too small. We'll also set plot_n_batches=None to show all the batches in the plot.",
"uniform_neg01to01_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.1, 0.1)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.1, 0.1)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.1, 0.1))\n]\n\nuniform_neg001to001_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.01, 0.01)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.01, 0.01)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.01, 0.01))\n]\n\nuniform_neg0001to0001_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -0.001, 0.001)),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -0.001, 0.001)),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -0.001, 0.001))\n]\n\nhelper.compare_init_weights(\n mnist,\n '[-1, 1) vs [-0.1, 0.1) vs [-0.01, 0.01) vs [-0.001, 0.001)',\n [\n (uniform_neg1to1_weights, '[-1, 1)'),\n (uniform_neg01to01_weights, '[-0.1, 0.1)'),\n (uniform_neg001to001_weights, '[-0.01, 0.01)'),\n (uniform_neg0001to0001_weights, '[-0.001, 0.001)')],\n plot_n_batches=None)",
"Looks like anything [-0.01, 0.01) or smaller is too small. Let's compare this to our typical rule of using the range $y=1/\\sqrt{n}$.",
"import numpy as np\n\ngeneral_rule_weights = [\n tf.Variable(tf.random_uniform(layer_1_weight_shape, -1/np.sqrt(layer_1_weight_shape[0]), 1/np.sqrt(layer_1_weight_shape[0]))),\n tf.Variable(tf.random_uniform(layer_2_weight_shape, -1/np.sqrt(layer_2_weight_shape[0]), 1/np.sqrt(layer_2_weight_shape[0]))),\n tf.Variable(tf.random_uniform(layer_3_weight_shape, -1/np.sqrt(layer_3_weight_shape[0]), 1/np.sqrt(layer_3_weight_shape[0])))\n]\n\nhelper.compare_init_weights(\n mnist,\n '[-0.1, 0.1) vs General Rule',\n [\n (uniform_neg01to01_weights, '[-0.1, 0.1)'),\n (general_rule_weights, 'General Rule')],\n plot_n_batches=None)",
"The range we found and $y=1/\\sqrt{n}$ are really close.\nSince the uniform distribution has the same chance to pick anything in the range, what if we used a distribution that had a higher chance of picking numbers closer to 0. Let's look at the normal distribution.\nNormal Distribution\nUnlike the uniform distribution, the normal distribution has a higher likelihood of picking number close to it's mean. To visualize it, let's plot values from TensorFlow's tf.random_normal function to a histogram.\n\ntf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)\nOutputs random values from a normal distribution.\n\nshape: A 1-D integer Tensor or Python array. The shape of the output tensor.\nmean: A 0-D Tensor or Python value of type dtype. The mean of the normal distribution.\nstddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the normal distribution.\ndtype: The type of the output.\nseed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.\nname: A name for the operation (optional).",
"helper.hist_dist('Random Normal (mean=0.0, stddev=1.0)', tf.random_normal([1000]))",
"Let's compare the normal distribution against the previous uniform distribution.",
"normal_01_weights = [\n tf.Variable(tf.random_normal(layer_1_weight_shape, stddev=0.1)),\n tf.Variable(tf.random_normal(layer_2_weight_shape, stddev=0.1)),\n tf.Variable(tf.random_normal(layer_3_weight_shape, stddev=0.1))\n]\n\nhelper.compare_init_weights(\n mnist,\n 'Uniform [-0.1, 0.1) vs Normal stddev 0.1',\n [\n (uniform_neg01to01_weights, 'Uniform [-0.1, 0.1)'),\n (normal_01_weights, 'Normal stddev 0.1')])",
"The normal distribution gave a slight increasse in accuracy and loss. Let's move closer to 0 and drop picked numbers that are x number of standard deviations away. This distribution is called Truncated Normal Distribution.\nTruncated Normal Distribution\n\ntf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)\nOutputs random values from a truncated normal distribution.\nThe generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.\n\nshape: A 1-D integer Tensor or Python array. The shape of the output tensor.\nmean: A 0-D Tensor or Python value of type dtype. The mean of the truncated normal distribution.\nstddev: A 0-D Tensor or Python value of type dtype. The standard deviation of the truncated normal distribution.\ndtype: The type of the output.\nseed: A Python integer. Used to create a random seed for the distribution. See tf.set_random_seed for behavior.\nname: A name for the operation (optional).",
"helper.hist_dist('Truncated Normal (mean=0.0, stddev=1.0)', tf.truncated_normal([1000]))",
"Again, let's compare the previous results with the previous distribution.",
"trunc_normal_01_weights = [\n tf.Variable(tf.truncated_normal(layer_1_weight_shape, stddev=0.1)),\n tf.Variable(tf.truncated_normal(layer_2_weight_shape, stddev=0.1)),\n tf.Variable(tf.truncated_normal(layer_3_weight_shape, stddev=0.1))\n]\n\nhelper.compare_init_weights(\n mnist,\n 'Normal vs Truncated Normal',\n [\n (normal_01_weights, 'Normal'),\n (trunc_normal_01_weights, 'Truncated Normal')])",
"There's no difference between the two, but that's because the neural network we're using is too small. A larger neural network will pick more points on the normal distribution, increasing the likelihood it's choices are larger than 2 standard deviations.\nWe've come a long way from the first set of weights we tested. Let's see the difference between the weights we used then and now.",
"helper.compare_init_weights(\n mnist,\n 'Baseline vs Truncated Normal',\n [\n (basline_weights, 'Baseline'),\n (trunc_normal_01_weights, 'Truncated Normal')])",
"That's a huge difference. You can barely see the truncated normal line. However, this is not the end your learning path. We've provided more resources for initializing weights in the classroom!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
AllenDowney/ModSimPy
|
examples/bungee2.ipynb
|
mit
|
[
"Bungee Dunk Revisited\nModeling and Simulation in Python\nCopyright 2021 Allen Downey\nLicense: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International",
"# install Pint if necessary\n\ntry:\n import pint\nexcept ImportError:\n !pip install pint\n\n# download modsim.py if necessary\n\nfrom os.path import exists\n\nfilename = 'modsim.py'\nif not exists(filename):\n from urllib.request import urlretrieve\n url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'\n local, _ = urlretrieve(url+filename, filename)\n print('Downloaded ' + local)\n\n# import functions from modsim\n\nfrom modsim import *",
"In the previous case study, we simulated a bungee jump with a model that took into account gravity, air resistance, and the spring force of the bungee cord, but we ignored the weight of the cord.\nIt is tempting to say that the weight of the cord doesn't matter because it falls along with the jumper. But that intuition is incorrect, as explained by Heck, Uylings, and Kędzierska. As the cord falls, it transfers energy to the jumper. They derive a differential equation that relates the acceleration of the jumper to position and velocity:\n$a = g + \\frac{\\mu v^2/2}{\\mu(L+y) + 2L}$ \nwhere $a$ is the net acceleration of the jumper, $g$ is acceleration due to gravity, $v$ is the velocity of the jumper, $y$ is the position of the jumper relative to the starting point (usually negative), $L$ is the length of the cord, and $\\mu$ is the mass ratio of the cord and jumper.\nIf you don't believe this model is correct, this video might convince you.\nFollowing the previous case study, we'll model the jump with the following assumptions:\n\n\nInitially the bungee cord hangs from a crane with the attachment point 80 m above a cup of tea.\n\n\nUntil the cord is fully extended, it applies a force to the jumper as explained above.\n\n\nAfter the cord is fully extended, it obeys Hooke's Law; that is, it applies a force to the jumper proportional to the extension of the cord beyond its resting length.\n\n\nThe jumper is subject to drag force proportional to the square of their velocity, in the opposite of their direction of motion.\n\n\nFirst I'll create a Param object to contain the quantities we'll need:\n\n\nLet's assume that the jumper's mass is 75 kg and the cord's mass is also 75 kg, so mu=1.\n\n\nThe jumpers's frontal area is 1 square meter, and terminal velocity is 60 m/s. I'll use these values to back out the coefficient of drag.\n\n\nThe length of the bungee cord is L = 25 m.\n\n\nThe spring constant of the cord is k = 40 N / m when the cord is stretched, and 0 when it's compressed.\n\n\nI adopt the coordinate system and most of the variable names from Heck, Uylings, and Kędzierska.",
"params = Params(y_attach = 80, # m,\n v_init = 0, # m / s,\n g = 9.8, # m/s**2,\n M = 75, # kg,\n m_cord = 75, # kg\n area = 1, # m**2,\n rho = 1.2, # kg/m**3,\n v_term = 60, # m / s,\n L = 25, # m,\n k = 40, # N / m\n )",
"Now here's a version of make_system that takes a Params object as a parameter.\nmake_system uses the given value of v_term to compute the drag coefficient C_d.\nIt also computes mu and the initial State object.",
"def make_system(params):\n \"\"\"Makes a System object for the given params.\n \n params: Params object\n \n returns: System object\n \"\"\"\n M, m_cord = params.M, params.m_cord\n g, rho, area = params.g, params.rho, params.area\n v_init, v_term = params.v_init, params.v_term\n \n # back out the coefficient of drag\n C_d = 2 * M * g / (rho * area * v_term**2)\n \n mu = m_cord / M\n init = State(y=params.y_attach, v=v_init)\n t_end = 8\n\n return System(params, C_d=C_d, mu=mu,\n init=init, t_end=t_end)",
"Let's make a System",
"system1 = make_system(params)",
"drag_force computes drag as a function of velocity:",
"def drag_force(v, system):\n \"\"\"Computes drag force in the opposite direction of `v`.\n \n v: velocity\n \n returns: drag force in N\n \"\"\"\n rho, C_d, area = system.rho, system.C_d, system.area\n\n f_drag = -np.sign(v) * rho * v**2 * C_d * area / 2\n return f_drag",
"Here's drag force at 20 m/s.",
"drag_force(20, system1)",
"The following function computes the acceleration of the jumper due to tension in the cord.\n$a_{cord} = \\frac{\\mu v^2/2}{\\mu(L+y) + 2L}$",
"def cord_acc(y, v, system):\n \"\"\"Computes the force of the bungee cord on the jumper:\n \n y: height of the jumper\n v: velocity of the jumpter\n \n returns: acceleration in m/s\n \"\"\"\n L, mu = system.L, system.mu\n \n a_cord = -v**2 / 2 / (2*L/mu + (L+y))\n return a_cord",
"Here's acceleration due to tension in the cord if we're going 20 m/s after falling 20 m.",
"y = -20\nv = -20\ncord_acc(y, v, system1)",
"Now here's the slope function:",
"def slope_func1(t, state, system):\n \"\"\"Compute derivatives of the state.\n \n state: position, velocity\n t: time\n system: System object containing g, rho,\n C_d, area, and mass\n \n returns: derivatives of y and v\n \"\"\"\n y, v = state\n M, g = system.M, system.g\n \n a_drag = drag_force(v, system) / M\n a_cord = cord_acc(y, v, system)\n dvdt = -g + a_cord + a_drag\n \n return v, dvdt",
"As always, let's test the slope function with the initial params.",
"slope_func1(0, system1.init, system1)",
"We'll need an event function to stop the simulation when we get to the end of the cord.",
"def event_func1(t, state, system):\n \"\"\"Run until y=-L.\n \n state: position, velocity\n t: time\n system: System object containing g, rho,\n C_d, area, and mass\n \n returns: difference between y and y_attach-L\n \"\"\"\n y, v = state \n return y - (system.y_attach - system.L)",
"We can test it with the initial conditions.",
"event_func1(0, system1.init, system1)",
"And then run the simulation.",
"results1, details1 = run_solve_ivp(system1, slope_func1, \n events=event_func1)\ndetails1.message",
"Here's the plot of position as a function of time.",
"def plot_position(results, **options):\n results.y.plot(**options)\n decorate(xlabel='Time (s)',\n ylabel='Position (m)')\n \nplot_position(results1)",
"We can use min to find the lowest point:",
"min(results1.y)",
"As expected, Phase 1 ends when the jumper reaches an altitude of 55 m.\nPhase 2\nOnce the jumper has falled more than the length of the cord, acceleration due to energy transfer from the cord stops abruptly. As the cord stretches, it starts to exert a spring force. So let's simulate this second phase.\nspring_force computes the force of the cord on the jumper:",
"def spring_force(y, system):\n \"\"\"Computes the force of the bungee cord on the jumper:\n \n y: height of the jumper\n \n Uses these variables from system:\n y_attach: height of the attachment point\n L: resting length of the cord\n k: spring constant of the cord\n \n returns: force in N\n \"\"\"\n L, k = system.L, system.k\n \n distance_fallen = system.y_attach - y\n extension = distance_fallen - L\n f_spring = k * extension\n return f_spring",
"The spring force is 0 until the cord is fully extended. When it is extended 1 m, the spring force is 40 N.",
"spring_force(55, system1)\n\nspring_force(56, system1)",
"The slope function for Phase 2 includes the spring force, and drops the acceleration due to the cord.",
"def slope_func2(t, state, system):\n \"\"\"Compute derivatives of the state.\n \n state: position, velocity\n t: time\n system: System object containing g, rho,\n C_d, area, and mass\n \n returns: derivatives of y and v\n \"\"\"\n y, v = state\n M, g = system.M, system.g\n \n a_drag = drag_force(v, system) / M\n a_spring = spring_force(y, system) / M\n dvdt = -g + a_drag + a_spring\n \n return v, dvdt",
"The initial state for Phase 2 is the final state from Phase 1.",
"t_final = results1.index[-1]\nt_final\n\nstate_final = results1.iloc[-1]\nstate_final",
"And that gives me the starting conditions for Phase 2.",
"system2 = System(system1, t_0=t_final, init=state_final)",
"Here's how we run Phase 2, setting the direction of the event function so it doesn't stop the simulation immediately.",
"results2, details2 = run_solve_ivp(system2, slope_func2)\ndetails2.message\n\nt_final = results2.index[-1]\nt_final",
"We can plot the results on the same axes.",
"plot_position(results1, label='Phase 1')\nplot_position(results2, label='Phase 2')",
"And get the lowest position from Phase 2.",
"min(results2.y)",
"To see how big the effect of the cord is, I'll collect the previous code in a function.",
"def run_two_phases(params):\n system1 = make_system(params)\n results1, details1 = run_solve_ivp(system1, slope_func1, \n events=event_func1)\n t_final = results1.index[-1]\n state_final = results1.iloc[-1]\n \n system2 = system1.set(t_0=t_final, init=state_final)\n results2, details2 = run_solve_ivp(system2, slope_func2)\n return results1.append(results2)",
"Now we can run both phases and get the results in a single TimeFrame.",
"results = run_two_phases(params)\n\nplot_position(results)\n\nparams_no_cord = params.set(m_cord=1)\nresults_no_cord = run_two_phases(params_no_cord);\n\nplot_position(results, label='m_cord = 75 kg')\nplot_position(results_no_cord, label='m_cord = 1 kg')\n\nmin(results_no_cord.y)\n\ndiff = min(results.y) - min(results_no_cord.y)\ndiff",
"The difference is about a meter, which could certainly be the difference between a successful bungee dunk and a bad day."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ceholden/ceholden.github.io
|
_drafts/2016-09-09-Landsat-Metadata-Dask.ipynb
|
mit
|
[
"Landsat Metadata Analysis with Dask\nUnderstanding the global distribution of Landsat observations over the satellite's 40+ year record can help answer many questions including:\n\nHow viable is a particular analytical method given the observation frequency and quality in my study site?\nWhat is the distribution of cloud cover across the planet as observed by Landsat?\nHow does solar and sensor geometry change in the Landsat record across time and the planet?\n\nIn order to mine this information, we can use the Landsat Bulk Metadata dataset from the USGS which provides rich metadata about every observation in the Landsat record across the history of the program. Unfortunately, these files are gigantic and would be very difficult to process using an average computer.\nLuckily for those using Python, the Dask library can provide both multiprocessing and out-of-core computation capabilities while keeping to the same function calls that you might be familiar with from Numpy and Pandas. Of interest to us is the dask.dataframe collection which allows us to easily process the Landsat metadata CSV files.\nTo begin, first download the Landsat metadata files and unzip them. While Pandas can read from compressed CSV files, we will want to break these CSV files apart into many pieces for processing and they need to be uncompressed to split them.\nFor this tutorial, we will only be using the Landsat 8 and Landsat 7 metadata:\nbash\nwget http://landsat.usgs.gov/metadata_service/bulk_metadata_files/LANDSAT_8.csv.gz\ngunzip LANDSAT_8.csv.gz\nwget http://landsat.usgs.gov/metadata_service/bulk_metadata_files/LANDSAT_ETM_SLC_OFF.csv.gz\ngunzip LANDSAT_ETM_SLC_OFF.csv.gz\nwget http://landsat.usgs.gov/metadata_service/bulk_metadata_files/LANDSAT_ETM.csv.gz\ngunzip LANDSAT_8.csv.gz\nTODO: a transition",
"import dask.dataframe as ddf\n\ncolumns = {\n 'sceneID': str,\n 'sensor': str,\n 'path': int,\n 'row': int,\n 'acquisitionDate': str,\n 'cloudCover': float,\n 'cloudCoverFull': float,\n 'sunElevation': float,\n 'sunAzimuth': float,\n 'DATA_TYPE_L1': str,\n 'GEOMETRIC_RMSE_MODEL': float,\n 'GEOMETRIC_RMSE_MODEL_X': float,\n 'GEOMETRIC_RMSE_MODEL_Y': float,\n 'satelliteNumber': float\n}\n\ndf = ddf.read_csv('LANDSAT*.csv',\n usecols=columns.keys(),\n dtype=columns,\n parse_dates=['acquisitionDate'],\n blocksize=int(20e6))\ndf = df.assign(year=df.acquisitionDate.dt.year)\ndf.columns",
"Question: How many observations are there?",
"df.groupby('sensor').sensor.count().compute()",
"Question: What is the mean cloud cover for every path/row across the years?",
"result = df.groupby(['path', 'row', 'year']).cloudCoverFull.mean().compute()\n\nresult.loc[12, 31, :]",
"Question: What is the cloud cover difference between a L1T and L1GT product?",
"result = df.groupby(['DATA_TYPE_L1']).cloudCoverFull.mean().compute()\nresult\n",
"Looks like there is a labeling issue due to a capitalization difference in L1GT versus L1Gt. We can correct this as well:",
"df = df.assign(DATA_TYPE_L1=df.DATA_TYPE_L1.apply(lambda x: x if x != 'L1Gt' else 'L1GT'))\nresult = df.groupby(['DATA_TYPE_L1']).cloudCoverFull.mean().compute()\nresult",
"Question: How does the various levels of preprocessing affect the estimated geometric accuracy? Are more recent Landsat sensors more accurate?",
"df.groupby(['DATA_TYPE_L1', 'sensor'])[['GEOMETRIC_RMSE_MODEL_X', 'GEOMETRIC_RMSE_MODEL_Y']].mean().compute()",
"Unfortunately it looks like the Landsat 8 observations do not record the estimated geometric accuracy unless a systematic terrain correction using Ground Control Points is successful.\nBehind the scenes",
"from dask.dot import dot_graph\n\ndot_graph(result.dask)\n\nimport dask",
"Getting started with Dask\nBecause Dask DataFrame implements much of the Pandas API, we eliminate the added complications of parallel computing or computing on large datasets by first working our analysis out on a small subset using Pandas.\nIf one makes sure they stick the to subset of the Pandas API that Dask supports, leveraging the power of Dask is as simple as changing the data ingest or creation call and adding a .compute() to the computation.",
"import pandas as pd\n_df = pd.read_csv('LANDSAT_8.csv', parse_dates=['acquisitionDate'], nrows=100)\n\ns = _df['DATA_TYPE_L1']"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/asl-ml-immersion
|
notebooks/text_models/solutions/rnn_encoder_decoder.ipynb
|
apache-2.0
|
[
"Simple RNN Encode-Decoder for Translation\nLearning Objectives\n1. Learn how to create a tf.data.Dataset for seq2seq problems\n1. Learn how to train an encoder-decoder model in Keras\n1. Learn how to save the encoder and the decoder as separate models \n1. Learn how to piece together the trained encoder and decoder into a translation function\n1. Learn how to use the BLUE score to evaluate a translation model\nIntroduction\nIn this lab we'll build a translation model from Spanish to English using a RNN encoder-decoder model architecture.\nWe will start by creating train and eval datasets (using the tf.data.Dataset API) that are typical for seq2seq problems. Then we will use the Keras functional API to train an RNN encoder-decoder model, which will save as two separate models, the encoder and decoder model. Using these two separate pieces we will implement the translation function.\nAt last, we'll benchmark our results using the industry standard BLEU score.",
"pip install nltk\n\nimport os\nimport pickle\nimport sys\n\nimport nltk\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\nimport utils_preproc\nfrom sklearn.model_selection import train_test_split\nfrom tensorflow.keras.layers import GRU, Dense, Embedding, Input\nfrom tensorflow.keras.models import Model, load_model\n\nprint(tf.__version__)\n\nSEED = 0\nMODEL_PATH = \"translate_models/baseline\"\nDATA_URL = (\n \"http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip\"\n)\nLOAD_CHECKPOINT = False\n\ntf.random.set_seed(SEED)",
"Downloading the Data\nWe'll use a language dataset provided by http://www.manythings.org/anki/. The dataset contains Spanish-English translation pairs in the format:\nMay I borrow this book? ¿Puedo tomar prestado este libro?\nThe dataset is a curated list of 120K translation pairs from http://tatoeba.org/, a platform for community contributed translations by native speakers.",
"path_to_zip = tf.keras.utils.get_file(\n \"spa-eng.zip\", origin=DATA_URL, extract=True\n)\n\npath_to_file = os.path.join(os.path.dirname(path_to_zip), \"spa-eng/spa.txt\")\nprint(\"Translation data stored at:\", path_to_file)\n\ndata = pd.read_csv(\n path_to_file, sep=\"\\t\", header=None, names=[\"english\", \"spanish\"]\n)\n\ndata.sample(3)",
"From the utils_preproc package we have written for you,\nwe will use the following functions to pre-process our dataset of sentence pairs.\nSentence Preprocessing\nThe utils_preproc.preprocess_sentence() method does the following:\n1. Converts sentence to lower case\n2. Adds a space between punctuation and words\n3. Replaces tokens that aren't a-z or punctuation with space\n4. Adds <start> and <end> tokens\nFor example:",
"raw = [\n \"No estamos comiendo.\",\n \"Está llegando el invierno.\",\n \"El invierno se acerca.\",\n \"Tom no comio nada.\",\n \"Su pierna mala le impidió ganar la carrera.\",\n \"Su respuesta es erronea.\",\n \"¿Qué tal si damos un paseo después del almuerzo?\",\n]\n\nprocessed = [utils_preproc.preprocess_sentence(s) for s in raw]\nprocessed",
"Sentence Integerizing\nThe utils_preproc.tokenize() method does the following:\n\nSplits each sentence into a token list\nMaps each token to an integer\nPads to length of longest sentence \n\nIt returns an instance of a Keras Tokenizer\ncontaining the token-integer mapping along with the integerized sentences:",
"integerized, tokenizer = utils_preproc.tokenize(processed)\nintegerized",
"The outputted tokenizer can be used to get back the actual works\nfrom the integers representing them:",
"tokenizer.sequences_to_texts(integerized)",
"Creating the tf.data.Dataset\nload_and_preprocess\nLet's first implement a function that will read the raw sentence-pair file\nand preprocess the sentences with utils_preproc.preprocess_sentence.\nThe load_and_preprocess function takes as input\n- the path where the sentence-pair file is located\n- the number of examples one wants to read in\nIt returns a tuple whose first component contains the english\npreprocessed sentences, while the second component contains the\nspanish ones:",
"def load_and_preprocess(path, num_examples):\n with open(path_to_file) as fp:\n lines = fp.read().strip().split(\"\\n\")\n\n # TODO 1a\n sentence_pairs = [\n [utils_preproc.preprocess_sentence(sent) for sent in line.split(\"\\t\")]\n for line in lines[:num_examples]\n ]\n\n return zip(*sentence_pairs)\n\nen, sp = load_and_preprocess(path_to_file, num_examples=10)\n\nprint(en[-1])\nprint(sp[-1])",
"load_and_integerize\nUsing utils_preproc.tokenize, let us now implement the function load_and_integerize that takes as input the data path along with the number of examples we want to read in and returns the following tuple:\npython\n (input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer)\nwhere \n\ninput_tensor is an integer tensor of shape (num_examples, max_length_inp) containing the integerized versions of the source language sentences\ntarget_tensor is an integer tensor of shape (num_examples, max_length_targ) containing the integerized versions of the target language sentences\ninp_lang_tokenizer is the source language tokenizer\ntarg_lang_tokenizer is the target language tokenizer",
"def load_and_integerize(path, num_examples=None):\n\n targ_lang, inp_lang = load_and_preprocess(path, num_examples)\n\n # TODO 1b\n input_tensor, inp_lang_tokenizer = utils_preproc.tokenize(inp_lang)\n target_tensor, targ_lang_tokenizer = utils_preproc.tokenize(targ_lang)\n\n return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer",
"Train and eval splits\nWe'll split this data 80/20 into train and validation, and we'll use only the first 30K examples, since we'll be training on a single GPU. \nLet us set variable for that:",
"TEST_PROP = 0.2\nNUM_EXAMPLES = 30000",
"Now let's load and integerize the sentence paris and store the tokenizer for the source and the target language into the int_lang and targ_lang variable respectively:",
"input_tensor, target_tensor, inp_lang, targ_lang = load_and_integerize(\n path_to_file, NUM_EXAMPLES\n)",
"Let us store the maximal sentence length of both languages into two variables:",
"max_length_targ = target_tensor.shape[1]\nmax_length_inp = input_tensor.shape[1]",
"We are now using scikit-learn train_test_split to create our splits:",
"splits = train_test_split(\n input_tensor, target_tensor, test_size=TEST_PROP, random_state=SEED\n)\n\ninput_tensor_train = splits[0]\ninput_tensor_val = splits[1]\n\ntarget_tensor_train = splits[2]\ntarget_tensor_val = splits[3]",
"Let's make sure the number of example in each split looks good:",
"(\n len(input_tensor_train),\n len(target_tensor_train),\n len(input_tensor_val),\n len(target_tensor_val),\n)",
"The utils_preproc.int2word function allows you to transform back the integerized sentences into words. Note that the <start> token is alwasy encoded as 1, while the <end> token is always encoded as 0:",
"print(\"Input Language; int to word mapping\")\nprint(input_tensor_train[0])\nprint(utils_preproc.int2word(inp_lang, input_tensor_train[0]), \"\\n\")\n\nprint(\"Target Language; int to word mapping\")\nprint(target_tensor_train[0])\nprint(utils_preproc.int2word(targ_lang, target_tensor_train[0]))",
"Create tf.data dataset for train and eval\nBelow we implement the create_dataset function that takes as input\n* encoder_input which is an integer tensor of shape (num_examples, max_length_inp) containing the integerized versions of the source language sentences\n* decoder_input which is an integer tensor of shape (num_examples, max_length_targ)containing the integerized versions of the target language sentences\nIt returns a tf.data.Dataset containing examples for the form\npython\n ((source_sentence, target_sentence), shifted_target_sentence)\nwhere source_sentence and target_setence are the integer version of source-target language pairs and shifted_target is the same as target_sentence but with indices shifted by 1. \nRemark: In the training code, source_sentence (resp. target_sentence) will be fed as the encoder (resp. decoder) input, while shifted_target will be used to compute the cross-entropy loss by comparing the decoder output with the shifted target sentences.",
"def create_dataset(encoder_input, decoder_input):\n # TODO 1c\n\n # shift ahead by 1\n target = tf.roll(decoder_input, -1, 1)\n\n # replace last column with 0s\n zeros = tf.zeros([target.shape[0], 1], dtype=tf.int32)\n target = tf.concat((target[:, :-1], zeros), axis=-1)\n\n dataset = tf.data.Dataset.from_tensor_slices(\n ((encoder_input, decoder_input), target)\n )\n\n return dataset",
"Let's now create the actual train and eval dataset using the function above:",
"BUFFER_SIZE = len(input_tensor_train)\nBATCH_SIZE = 64\n\ntrain_dataset = (\n create_dataset(input_tensor_train, target_tensor_train)\n .shuffle(BUFFER_SIZE)\n .repeat()\n .batch(BATCH_SIZE, drop_remainder=True)\n)\n\n\neval_dataset = create_dataset(input_tensor_val, target_tensor_val).batch(\n BATCH_SIZE, drop_remainder=True\n)",
"Training the RNN encoder-decoder model\nWe use an encoder-decoder architecture, however we embed our words into a latent space prior to feeding them into the RNN.",
"EMBEDDING_DIM = 256\nHIDDEN_UNITS = 1024\n\nINPUT_VOCAB_SIZE = len(inp_lang.word_index) + 1\nTARGET_VOCAB_SIZE = len(targ_lang.word_index) + 1",
"Let's implement the encoder network with Keras functional API. It will\n* start with an Input layer that will consume the source language integerized sentences\n* then feed them to an Embedding layer of EMBEDDING_DIM dimensions\n* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS\nThe output of the encoder will be the encoder_outputs and the encoder_state.",
"encoder_inputs = Input(shape=(None,), name=\"encoder_input\")\n\n# TODO 2a\nencoder_inputs_embedded = Embedding(\n input_dim=INPUT_VOCAB_SIZE,\n output_dim=EMBEDDING_DIM,\n input_length=max_length_inp,\n)(encoder_inputs)\n\nencoder_rnn = GRU(\n units=HIDDEN_UNITS,\n return_sequences=True,\n return_state=True,\n recurrent_initializer=\"glorot_uniform\",\n)\n\nencoder_outputs, encoder_state = encoder_rnn(encoder_inputs_embedded)",
"We now implement the decoder network, which is very similar to the encoder network.\nIt will\n* start with an Input layer that will consume the source language integerized sentences\n* then feed that input to an Embedding layer of EMBEDDING_DIM dimensions\n* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS\nImportant: The main difference with the encoder, is that the recurrent GRU layer will take as input not only the decoder input embeddings, but also the encoder_state as outputted by the encoder above. This is where the two networks are linked!\nThe output of the encoder will be the decoder_outputs and the decoder_state.",
"decoder_inputs = Input(shape=(None,), name=\"decoder_input\")\n\n# TODO 2b\ndecoder_inputs_embedded = Embedding(\n input_dim=TARGET_VOCAB_SIZE,\n output_dim=EMBEDDING_DIM,\n input_length=max_length_targ,\n)(decoder_inputs)\n\ndecoder_rnn = GRU(\n units=HIDDEN_UNITS,\n return_sequences=True,\n return_state=True,\n recurrent_initializer=\"glorot_uniform\",\n)\n\ndecoder_outputs, decoder_state = decoder_rnn(\n decoder_inputs_embedded, initial_state=encoder_state\n)",
"The last part of the encoder-decoder architecture is a softmax Dense layer that will create the next word probability vector or next word predictions from the decoder_output:",
"decoder_dense = Dense(TARGET_VOCAB_SIZE, activation=\"softmax\")\n\npredictions = decoder_dense(decoder_outputs)",
"To be able to train the encoder-decoder network defined above, we now need to create a trainable Keras Model by specifying which are the inputs and the outputs of our problem. They should correspond exactly to what the type of input/output in our train and eval tf.data.Dataset since that's what will be fed to the inputs and outputs we declare while instantiating the Keras Model.\nWhile compiling our model, we should make sure that the loss is the sparse_categorical_crossentropy so that we can compare the true word indices for the target language as outputted by our train tf.data.Dataset with the next word predictions vector as outputted by the decoder:",
"# TODO 2c\nmodel = Model(inputs=[encoder_inputs, decoder_inputs], outputs=predictions)\n\nmodel.compile(optimizer=\"adam\", loss=\"sparse_categorical_crossentropy\")\nmodel.summary()",
"Let's now train the model!",
"STEPS_PER_EPOCH = len(input_tensor_train) // BATCH_SIZE\nEPOCHS = 1\n\n\nhistory = model.fit(\n train_dataset,\n steps_per_epoch=STEPS_PER_EPOCH,\n validation_data=eval_dataset,\n epochs=EPOCHS,\n)",
"Implementing the translation (or decoding) function\nWe can't just use model.predict(), because we don't know all the inputs we used during training. We only know the encoder_input (source language) but not the decoder_input (target language), which is what we want to predict (i.e., the translation of the source language)!\nWe do however know the first token of the decoder input, which is the <start> token. So using this plus the state of the encoder RNN, we can predict the next token. We will then use that token to be the second token of decoder input, and continue like this until we predict the <end> token, or we reach some defined max length.\nSo, the strategy now is to split our trained network into two independent Keras models:\n\nan encoder model with signature encoder_inputs -> encoder_state\na decoder model with signature [decoder_inputs, decoder_state_input] -> [predictions, decoder_state]\n\nThis way, we will be able to encode the source language sentence into the vector encoder_state using the encoder and feed it to the decoder model along with the <start> token at step 1. \nGiven that input, the decoder will produce the first word of the translation, by sampling from the predictions vector (for simplicity, our sampling strategy here will be to take the next word to be the one whose index has the maximum probability in the predictions vector) along with a new state vector, the decoder_state. \nAt this point, we can feed again to the decoder the predicted first word and as well as the new decoder_state to predict the translation second word. \nThis process can be continued until the decoder produces the token <stop>. \nThis is how we will implement our translation (or decoding) function, but let us first extract a separate encoder and a separate decoder from our trained encoder-decoder model. \nRemark: If we have already trained and saved the models (i.e, LOAD_CHECKPOINT is True) we will just load the models, otherwise, we extract them from the trained network above by explicitly creating the encoder and decoder Keras Models with the signature we want.",
"if LOAD_CHECKPOINT:\n encoder_model = load_model(os.path.join(MODEL_PATH, \"encoder_model.h5\"))\n decoder_model = load_model(os.path.join(MODEL_PATH, \"decoder_model.h5\"))\n\nelse:\n # TODO 3a\n encoder_model = Model(inputs=encoder_inputs, outputs=encoder_state)\n\n decoder_state_input = Input(\n shape=(HIDDEN_UNITS,), name=\"decoder_state_input\"\n )\n\n # Reuses weights from the decoder_rnn layer\n decoder_outputs, decoder_state = decoder_rnn(\n decoder_inputs_embedded, initial_state=decoder_state_input\n )\n\n # Reuses weights from the decoder_dense layer\n predictions = decoder_dense(decoder_outputs)\n\n decoder_model = Model(\n inputs=[decoder_inputs, decoder_state_input],\n outputs=[predictions, decoder_state],\n )",
"Now that we have a separate encoder and a separate decoder, let's implement a translation function, to which we will give the generic name of decode_sequences (to stress that this procedure is general to all seq2seq problems). \ndecode_sequences will take as input\n* input_seqs which is the integerized source language sentence tensor that the encoder can consume\n* output_tokenizer which is the target languague tokenizer we will need to extract back words from predicted word integers\n* max_decode_length which is the length after which we stop decoding if the <stop> token has not been predicted\nNote: Now that the encoder and decoder have been turned into Keras models, to feed them their input, we need to use the .predict method.",
"def decode_sequences(input_seqs, output_tokenizer, max_decode_length=50):\n \"\"\"\n Arguments:\n input_seqs: int tensor of shape (BATCH_SIZE, SEQ_LEN)\n output_tokenizer: Tokenizer used to conver from int to words\n\n Returns translated sentences\n \"\"\"\n # Encode the input as state vectors.\n states_value = encoder_model.predict(input_seqs)\n\n # Populate the first character of target sequence with the start character.\n batch_size = input_seqs.shape[0]\n target_seq = tf.ones([batch_size, 1])\n\n decoded_sentences = [[] for _ in range(batch_size)]\n\n # TODO 4: Sampling loop\n for i in range(max_decode_length):\n\n output_tokens, decoder_state = decoder_model.predict(\n [target_seq, states_value]\n )\n\n # Sample a token\n sampled_token_index = np.argmax(output_tokens[:, -1, :], axis=-1)\n\n tokens = utils_preproc.int2word(output_tokenizer, sampled_token_index)\n\n for j in range(batch_size):\n decoded_sentences[j].append(tokens[j])\n\n # Update the target sequence (of length 1).\n target_seq = tf.expand_dims(tf.constant(sampled_token_index), axis=-1)\n\n # Update states\n states_value = decoder_state\n\n return decoded_sentences",
"Now we're ready to predict!",
"sentences = [\n \"No estamos comiendo.\",\n \"Está llegando el invierno.\",\n \"El invierno se acerca.\",\n \"Tom no comio nada.\",\n \"Su pierna mala le impidió ganar la carrera.\",\n \"Su respuesta es erronea.\",\n \"¿Qué tal si damos un paseo después del almuerzo?\",\n]\n\nreference_translations = [\n \"We're not eating.\",\n \"Winter is coming.\",\n \"Winter is coming.\",\n \"Tom ate nothing.\",\n \"His bad leg prevented him from winning the race.\",\n \"Your answer is wrong.\",\n \"How about going for a walk after lunch?\",\n]\n\nmachine_translations = decode_sequences(\n utils_preproc.preprocess(sentences, inp_lang), targ_lang, max_length_targ\n)\n\nfor i in range(len(sentences)):\n print(\"-\")\n print(\"INPUT:\")\n print(sentences[i])\n print(\"REFERENCE TRANSLATION:\")\n print(reference_translations[i])\n print(\"MACHINE TRANSLATION:\")\n print(machine_translations[i])",
"Checkpoint Model\nNow let's us save the full training encoder-decoder model, as well as the separate encoder and decoder model to disk for latter reuse:",
"if not LOAD_CHECKPOINT:\n\n os.makedirs(MODEL_PATH, exist_ok=True)\n\n # TODO 3b\n model.save(os.path.join(MODEL_PATH, \"model.h5\"))\n encoder_model.save(os.path.join(MODEL_PATH, \"encoder_model.h5\"))\n decoder_model.save(os.path.join(MODEL_PATH, \"decoder_model.h5\"))\n\n with open(os.path.join(MODEL_PATH, \"encoder_tokenizer.pkl\"), \"wb\") as fp:\n pickle.dump(inp_lang, fp)\n\n with open(os.path.join(MODEL_PATH, \"decoder_tokenizer.pkl\"), \"wb\") as fp:\n pickle.dump(targ_lang, fp)",
"Evaluation Metric (BLEU)\nUnlike say, image classification, there is no one right answer for a machine translation. However our current loss metric, cross entropy, only gives credit when the machine translation matches the exact same word in the same order as the reference translation. \nMany attempts have been made to develop a better metric for natural language evaluation. The most popular currently is Bilingual Evaluation Understudy (BLEU).\n\nIt is quick and inexpensive to calculate.\nIt allows flexibility for the ordering of words and phrases.\nIt is easy to understand.\nIt is language independent.\nIt correlates highly with human evaluation.\nIt has been widely adopted.\n\nThe score is from 0 to 1, where 1 is an exact match.\nIt works by counting matching n-grams between the machine and reference texts, regardless of order. BLUE-4 counts matching n grams from 1-4 (1-gram, 2-gram, 3-gram and 4-gram). It is common to report both BLUE-1 and BLUE-4\nIt still is imperfect, since it gives no credit to synonyms and so human evaluation is still best when feasible. However BLEU is commonly considered the best among bad options for an automated metric.\nThe NLTK framework has an implementation that we will use.\nWe can't run calculate BLEU during training, because at that time the correct decoder input is used. Instead we'll calculate it now.\nFor more info: https://machinelearningmastery.com/calculate-bleu-score-for-text-python/",
"def bleu_1(reference, candidate):\n reference = list(filter(lambda x: x != \"\", reference)) # remove padding\n candidate = list(filter(lambda x: x != \"\", candidate)) # remove padding\n smoothing_function = nltk.translate.bleu_score.SmoothingFunction().method1\n return nltk.translate.bleu_score.sentence_bleu(\n reference, candidate, (1,), smoothing_function\n )\n\ndef bleu_4(reference, candidate):\n reference = list(filter(lambda x: x != \"\", reference)) # remove padding\n candidate = list(filter(lambda x: x != \"\", candidate)) # remove padding\n smoothing_function = nltk.translate.bleu_score.SmoothingFunction().method1\n return nltk.translate.bleu_score.sentence_bleu(\n reference, candidate, (0.25, 0.25, 0.25, 0.25), smoothing_function\n )",
"Let's now average the bleu_1 and bleu_4 scores for all the sentence pairs in the eval set. The next cell takes some time to run, the bulk of which is decoding the 6000 sentences in the validation set. Please wait unitl completes.",
"%%time\nnum_examples = len(input_tensor_val)\nbleu_1_total = 0\nbleu_4_total = 0\n\n\nfor idx in range(num_examples):\n # TODO 5\n reference_sentence = utils_preproc.int2word(\n targ_lang, target_tensor_val[idx][1:]\n )\n\n decoded_sentence = decode_sequences(\n input_tensor_val[idx : idx + 1], targ_lang, max_length_targ\n )[0]\n\n bleu_1_total += bleu_1(reference_sentence, decoded_sentence)\n bleu_4_total += bleu_4(reference_sentence, decoded_sentence)\n\nprint(f\"BLEU 1: {bleu_1_total / num_examples}\")\nprint(f\"BLEU 4: {bleu_4_total / num_examples}\")",
"Results\nHyperparameters\n\nBatch_Size: 64\nOptimizer: adam\nEmbed_dim: 256\nGRU Units: 1024\nTrain Examples: 24,000\nEpochs: 10\nHardware: P100 GPU\n\nPerformance\n- Training Time: 5min \n- Cross-entropy loss: train: 0.0722 - val: 0.9062\n- BLEU 1: 0.2519574312515255\n- BLEU 4: 0.04589972764144636\nReferences\n\nFrancois Chollet: https://github.com/keras-team/keras/blob/master/examples/lstm_seq2seq.py"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
JAmarel/Phys202
|
Numpy/NumpyEx02.ipynb
|
mit
|
[
"Numpy Exercise 2\nImports",
"import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns",
"Factorial\nWrite a function that computes the factorial of small numbers using np.arange and np.cumprod.",
"def np_fact(n):\n \"\"\"Compute n! = n*(n-1)*...*1 using Numpy.\"\"\"\n if n == 0:\n return 1\n else:\n c = np.arange(1, n+1, 1)\n return c.cumprod()[n-1]\n\n\nassert np_fact(0)==1\nassert np_fact(1)==1\nassert np_fact(10)==3628800\nassert [np_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]",
"Write a function that computes the factorial of small numbers using a Python loop.",
"def loop_fact(n):\n result = 1\n \"\"\"Compute n! using a Python for loop.\"\"\"\n if n == 0:\n return 1\n else:\n while n > 0:\n result = result*n\n n-=1\n return result\n\nassert loop_fact(0)==1\nassert loop_fact(1)==1\nassert loop_fact(10)==3628800\nassert [loop_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]",
"Use the %timeit magic to time both versions of this function for an argument of 50. The syntax for %timeit is:\npython\n%timeit -n1 -r1 function_to_time()",
"print(\"Argument of 50\")\n%timeit -n1 -r1 np_fact(50)\n%timeit -n1 -r1 loop_fact(50)\n\nprint(\"Argument of 90000\")\n%timeit -n1 -r1 np_fact(90000)\n%timeit -n1 -r1 loop_fact(90000)",
"In the cell below, summarize your timing tests. Which version is faster? Why do you think that version is faster?\nLoop_fact is faster than np_fact for small values. I believe this is because np_fact is creating an array and every entry in the array, while loop_fact is only directly doing the computation. But you can notice that as the argument is increased to very large values np_fact is much faster."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ntoll/poem-o-matic
|
poem-o-matic.ipynb
|
mit
|
[
"Poem-O-Matic\nThis is a description, in both code and prose, of how to generate original poetry on demand using a computer and the Python programming language. It's based upon work done in the London Python Code Dojo with Dan Pope and Hans Bolang. I've taken some of our original ideas and run with them, specifically:\nIf you re-assemble unrelated lines from different poems into a new poetic structure, you get a pretty convincing __new__ poem.\nThis is an exercise in doing the simplest possible thing with a program to fool humans into thinking the computer can write poetry. There are two reasons for this:\n\nSimple solutions are easy to understand and think about.\nSimple solutions work well in an educational context.\n\nTo be blunt: we're going to use software to automate a sneaky way to create poems. The basic process is simple:\n\nTake a huge number of existing source poems (written by humans) and chop them up into their constituent lines. These thousands of lines will be our source material.\nWork out how the source lines rhyme and group them together into \"buckets\" containing lines that rhyme with each other.\nFurther categorise the rhymes in each bucket by word ending. For example, sub-categorise the bucket that rhymes with \"uck\" into slots for \"look\", \"book\", \"suck\" etc.\nSpecify a rhyming scheme. For example \"aabba\" means lines one, two and five (the \"a\"s) rhyme with each other, as do lines three and four (the \"b\"s).\nUse the rhyming scheme to randomly select a bucket for each letter (for example, one bucket for the \"a\"s and yet another bucket for the \"b\"s) and randomly select a line from different word endings for each line in the rhyming scheme.\n\nHere's a practical example of this process in plain English:\nConsider the following three poems I just made up:\n```\nPoem 1\nThis is a special poem,\nThe words, they are a flowing.\nIt almost seems quite pointless,\nSince this poem is meaningless.\nPoem 2\nOh, my keyboard is on fire,\nCausing consternation and ire.\nSince words are cheap and cheerful,\nIt's going to be quite an earful.\nPoem 3\nWords are relentless,\nThey light up minds like fire.\nCausing us to express,\nIdeas that flow like quagmire.\n```\nThe rhyming schemes for each poem are as follows:\n\nPoem 1: aabb\nPoem 2: aabb\nPoem 3: abab\n\nIf we cut up the poems into their constituent lines we get:\nThis is a special poem,\nIt almost seems quite pointless,\nThey light up minds like fire.\nSince this poem is meaningless.\nCausing consternation and ire.\nIt's going to be quite an earful.\nOh, my keyboard is on fire,\nWords are relentless,\nSince words are cheap and cheerful,\nCausing us to express,\nThe words, they are a flowing.\nIdeas that flow like quagmire.\nIf we bucket them by rhymes we get the following four groups:\n```\nThe words, they are a flowing.\nThis is a special poem,\nIt almost seems quite pointless,\nWords are relentless,\nSince this poem is meaningless.\nCausing us to express,\nThey light up minds like fire.\nOh, my keyboard is on fire,\nCausing consternation and ire.\nIdeas that flow like quagmire.\nIt's going to be quite an earful.\nSince words are cheap and cheerful,\n```\nWe can further refine the buckets into sub-categories based on word endings:\n```\nFLOWING:\n The words, they are a flowing.\nPOEM:\n This is a special poem,\nPOINTLESS:\n It almost seems quite pointless,\nRELENTLESS:\n Words are relentless,\nMEANINGLESS:\n Since this poem is meaningless.\nEXPRESS:\n Causing us to express,\nFIRE:\n They light up minds like 
fire.\n    Oh, my keyboard is on fire,\nIRE:\n    Causing consternation and ire.\nQUAGMIRE:\n    Ideas that flow like quagmire.\nEARFUL:\n    It's going to be quite an earful.\nCHEERFUL:\n    Since words are cheap and cheerful,\n```\nNotice how all but one of the subcategories contain a single line. This is simply because our source poems are limited in number and length. In the programmed example below we'll be working with tens of thousands of lines of poetry.\nTo generate a new poem we specify a rhyming scheme for the new poem, for example: aabba. This tells us we need three \"a\" lines that rhyme with each other and two \"b\" lines that rhyme with each other. In other words we need two buckets of rhyming lines - from one we'll select three lines, from the other two lines. Given the list above I'll randomly pick the second and third buckets. Given that I don't want to repeat word endings I'll make sure I randomly choose lines from each bucket from a different word-ending subcategory. In the end I get the following lines:\n```\nOh, my keyboard is on fire,\nCausing consternation and ire.\nIdeas that flow like quagmire.\nIt almost seems quite pointless,\nSince this poem is meaningless.\n```\nIf I arrange the lines into the aabba rhyming scheme I end up with the finished poem:\nOh, my keyboard is on fire,\nCausing consternation and ire.\nIt almost seems quite pointless,\nSince this poem is meaningless.\nIdeas that flow like quagmire.\nGiven such a simple technique, the result is interesting, meaningful, and (almost) poetic.\nAs already mentioned, the important \"poetic sounding\" language is created by real poets - we're just going to use a Python program to reassemble lines from these poems to make new poetry.\nWhere can we get such free source material? Easy, the wonderful resource that is Project Gutenberg. \nI've selected the following anthologies as the source material for this project:\n\nThe Sonnets by Shakespeare\nThe World's Best Poetry, Volume 04: The Higher Life by Gladden and Carman\nLeaves of Grass by Walt Whitman\nA Book of Nonsense by Edward Lear\nThe Golden Treasury by Francis Turner Palgrave and Alfred Pearse\nA Child's Garden of Verses by Robert Louis Stevenson\nThe Peter Patter Book of Nursery Rhymes by Leroy F. Jackson\nThe Aeneid by Virgil\nSongs of Childhood by Walter De la Mare\nPoems Chiefly from Manuscript by John Clare\nA Treasury of War Poetry: British and American Poems of the World War 1914-1917\n\nI've put plain text versions of these works in the sources directory, and manually removed the prose elements of these files (introductions, titles, author's names etc).\nConsuming Source Poetry\nFirst, we need to get a list of all the source files:",
"from os import listdir\nfrom os.path import isfile, join\n\nmypath = 'sources'\n\nfilenames = [join(mypath, f) for f in listdir(mypath) if isfile(join(mypath, f))]\nprint(filenames)",
"Next, we need to load each file and extract the lines of poetry into a set of all known lines of poetry:",
"LINES_OF_POETRY = set() # All our lines will be added to this set.\n\nfor source_file in filenames: # For each source file...\n with open(source_file) as source: # Open it as the object 'source'\n for line in source.readlines(): # Now, for each line in the new 'source' object,\n clean_line = line.strip() # remove all the leading and trailing whitespace from the line,\n clean_line += '\\n' # re-add a newline character,\n LINES_OF_POETRY.add(clean_line) # and add it to the set of all lines of poetry\n \nprint('We have {} unique lines of poetry.'.format(len(LINES_OF_POETRY)))",
"Cleaning and Transforming the Data\nIn order to re-combine these lines into new poems we need to work out how the lines relate to each other in terms of rhyming. To do this we need to know about phonemes - the sounds that make up speech. The cmudict.0.7a.phones file contains definitions and categorisations (vowel, frictive, etc) of the phonemes used in English:",
"# Load the phoneme table\nwith open('cmudict.0.7a.phones') as phoneme_definitions:\n PHONEMES = dict(line.split() for line in phoneme_definitions.readlines())\n\nprint(PHONEMES)",
"Next, we create a simple function to determine if a phoneme is a vowel:",
"def is_vowel(phoneme):\n \"\"\"\n A utility function to determine if a phoneme is a vowel.\n \"\"\"\n return PHONEMES.get(phoneme) == 'vowel'",
"The cmudict.0.7a file contains a mapping of spelled words to pronunciations expressed as phonemes:",
"# Create a rhyming definition dictionary\nwith open('cmudict.0.7a') as pronunciation_definitions: # Load the CMU phoneme definitions of pronunciation.\n PRONUNCIATIONS = pronunciation_definitions.readlines()\n\nprint(PRONUNCIATIONS[80:90])",
"We're in a position to create a rhyme dictionary we can use to look up words and discover rhymes.",
"import re\nRHYME_DICTIONARY = {}\nfor pronunciation in PRONUNCIATIONS: # For each pronunciation in the list of pronunciations,\n pronunciation = re.sub(r'\\d', '', pronunciation) # strip phomeme stresses in the definition (not interesting to us),\n tokens = pronunciation.strip().split() # get the tokens that define the pronunciation,\n word = tokens[0] # the word whose pronunciation is defined is always in position zero of the listed tokens,\n phonemes = tokens[:0:-1] # the phonemes that define the pronunciation are the rest of the tokens. We reverse these!\n phonemes_to_rhyme = [] # This will hold the phonemes we use to rhyme words.\n for phoneme in phonemes:\n phonemes_to_rhyme.append(phoneme)\n if is_vowel(phoneme):\n break # We only need to rhyme from the last phoneme to the final vowel. Remember the phonemes are reversed!\n RHYME_DICTIONARY[word] = tuple(phonemes_to_rhyme)\nprint('There are {} items in the rhyme dictionary.'.format(len(RHYME_DICTIONARY)))",
"Given that we're rhyming the last word of each line, we need a function to identify what the last word of any given line actually is:",
"def last_word(line):\n \"\"\"\n Return the last word in a line (stripping punctuation).\n\n Raise ValueError if the last word cannot be identified.\n \"\"\"\n match_for_last_word = re.search(r\"([\\w']+)\\W*$\", line)\n if match_for_last_word:\n word = match_for_last_word.group(1)\n word = re.sub(r\"'d$\", 'ed', word) # expand old english contraction of -ed\n return word.upper()\n raise ValueError(\"No word in line.\")",
"The next step is to collect all the lines from our source poems into lines that all rhyme.",
"from collections import defaultdict\n\nlines_by_rhyme = defaultdict(list)\nfor line in LINES_OF_POETRY:\n try:\n rhyme = RHYME_DICTIONARY[last_word(line)]\n except (KeyError, ValueError):\n continue\n lines_by_rhyme[rhyme].append(line)\n\nLINES_THAT_RHYME = [l for l in lines_by_rhyme.values() if len(l) > 1]\n\nprint(\"Number of rhymes found is: {}\".format(len(LINES_THAT_RHYME)))",
"The final transformation of the data is to group the individual rhymes into ending words (so all the lines that end in \"look\", \"nook\" and \"book\" are collected together, for example). This well help us avoid rhyming lines with the same word.",
"RHYME_DATA = []\nfor lines in LINES_THAT_RHYME:\n lines_by_word = defaultdict(list)\n for line in lines:\n end_word = last_word(line)\n lines_by_word[end_word].append(line)\n RHYME_DATA.append(dict(lines_by_word))\n\nprint(RHYME_DATA[1:3])",
"Generating Poetry\nGiven the data found in RHYME_DATA we're finally in a position to reassemble rhyming lines from our source poems to make new poetry.\nIt's important to make sure that, no matter the content of the final line, we ensure it ends with the correct punctuation. So we make a function to do this for us:",
"def terminate_poem(poem):\n \"\"\"\n Given a list of poem lines, fix the punctuation of the last line.\n\n Removes any non-word characters and substitutes a random sentence\n terminator - ., ! or ?.\n \"\"\"\n last = re.sub(r'\\W*$', '', poem[-1])\n punc = random.choice(['!', '.', '.', '.', '.', '?', '...'])\n return poem[:-1] + [last + punc]",
"We also need to be able to define a rhyme scheme. For example, \"aabba\" means the first, second and fifth lines all rhyme (a) and the third and fourth lines rhyme (b). We could, of course write other schemes such as: \"aabbaaccaa\". Nevertheless, the \"aabba\" scheme is a safe default.",
"import random\nfrom collections import Counter\n\n\ndef build_poem(rhyme_scheme=\"aabba\", rhymes=RHYME_DATA):\n \"\"\"\n Build a poem given a rhyme scheme.\n \"\"\"\n groups = Counter(rhyme_scheme) # Work out how many lines of each sort of rhyming group are needed\n lines = {} # Will hold lines for given rhyming groups.\n for name, number in groups.items():\n candidate = random.choice([r for r in rhymes if len(r) >= number]) # Select candidate rhymes with enough lines.\n word_ends = list(candidate.keys()) # Get the candidate rhyming words.\n random.shuffle(word_ends) # Randomly shuffle them.\n lines_to_use = [] # Will hold the lines selected to use in the final poem for this given rhyming group.\n for i in range(number): # For all the needed number of lines for this rhyming group,\n lines_to_use.append(random.choice(candidate[word_ends.pop()])) # Randomly select a line for a new word end.\n lines[name] = lines_to_use # Add the lines for the rhyming group to the available lines.\n\n # Given a selection of lines, we need to order them into the specified rhyming scheme.\n poem = [] # To hold the ordered list of lines for the new poem.\n for k in rhyme_scheme: # For each rhyming group name specification for a line...\n poem.append(lines[k].pop()) # Simply take a line from the specified rhyming group.\n return terminate_poem(poem) # Return the result as a list with the final line appropriately punctuated.",
"Finally, we can call the build_poem function to get a list of the lines for our new poem.",
"my_poem = build_poem() # Get an ordered list of the lines encompassing my new poem.\npoem = ''.join(my_poem) # Turn them into a single printable string.\nprint(poem)",
"Example output:\nBreake ill eggs ere they be hatched:\nThe flower in ripen'd bloom unmatch'd\nSparrows fighting on the thatch.\nAnd where hens lay, and when the duck will hatch.\nThough by no hand untimely snatch'd...\nYou could also change the rhyming scheme too:",
"my_poem = build_poem('aabbaaccaa')\npoem = ''.join(my_poem)\nprint(poem)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ManyBodyPhysics/NuclearForces
|
doc/exercises/Variable_Phase_Approach.ipynb
|
cc0-1.0
|
[
"Preliminaries\nIn this notebook, we compute the phase shifts of the potential with $V = -V0$ for $r < R$ and $V=0$ for $r > R$ with an analytic formula and using the Variable Phase Approach (VPA). The reduced mass is $\\mu$.\nWe work in units where $\\hbar=1$ and we measure mass in units of $\\mu$ and lengths in terms of $R$. For convenience we set $\\mu$ and $R$ to $1$. However, we will continue to make them explicit in the formulas.",
"# I know you're not supposed to do this to avoid namespace issues, but whatever\nfrom numpy import *\nfrom matplotlib.pyplot import *\n\n# Global variables for this notebook\nmu=1.\nR=1.\nhbar=1.",
"The only parameter to adjust now is the depth $V0$.",
"\n@vectorize\ndef Vsw(r,V0): # function for a square well of width R (set externally) and depth V0 (V0>0)\n if r > R:\n return 0.\n return -V0\n\ndef Ek(k): # kinetic energy\n return k**2 / (2.*mu)\n\nV0 = 10\nx=linspace(0,10,100)\nplot(x,Vsw(x,V0))\nylim(-V0,V0)\nshow()\n",
"Analytic result for the phase shift\nUse the formula for $\\delta(E)$ from one of the problems, converting it to $\\delta(k)$ using $E_k(k)$:",
"def deltaAnalytic(k, V0):\n return arctan(sqrt(Ek(k)/(Ek(k)+V0))*tan(R*sqrt(2.*mu*(Ek(k)+V0))))-R*sqrt(2.*mu*Ek(k))\n\nV0 = 1\nk=linspace(0,10,100)\nplot(k,deltaAnalytic(k,V0))\nshow()",
"What is going on with the steps? Why are they there? Is the phase shift really discontinuous? How would you fix it?\nHere is one \"fix\" that makes the result continuous for this example, but we will still have issues when $V0$ is large enough to support bound states.",
"V0 = .5\nk=linspace(0,10,100)\nplot(k,arctan(tan(deltaAnalytic(k,V0))))\nshow()\n\ndef deltaAnalyticAdjusted(k, V0):\n return arctan(tan(deltaAnalytic(k, V0)))",
"We can also avoid this issue by looking at $k cot[\\delta(k)]$ instead, which doesn't have these ambiguities.",
"V0 = .5\nk=linspace(0,10,100)\nseterr(all='ignore') # use 1/tan for cot, so ignore 1/0 errors\nplot(k,k/tan(deltaAnalytic(k,V0)))\nseterr(all='warn')\nshow()",
"Phase shift from the Variable Phase Approach\nUse the formula: \n$$\\frac{d}{dr}\\delta_{k}(r)= -\\frac{2 mu}{k} V(r)\\sin^2\\left(\\delta_{k}(r)+k r\\right)$$\nwith the initial condition $\\delta_{k}(r)= 0$\nWe'll need a numerical differential equation solver. In python/scipy there are two choices, the simplest is scipy.integrate.odeint, the slightly more complicated on scipy.integrate.ode which is class based wrapper to many different solvers. For the VPA, scipy.integrate.ode is more power than needed so we will use scipy.integrate.odeint.",
"from scipy.integrate import odeint\n?odeint",
"Click on the border to remove the help information\nIn principle we integrate out to infinity. In practice, we choose a value of k and integrate out to Rmax, chosen to be well beyond the range of the potential (at which point the right side of the equation for $\\delta_p(r)$ is zero), and then evaluate at $r=R_{\\textrm{max}}$, which is the phase shift for momentum k. Here is an implementation:",
"def delta_VPA_simple(k, V0, r=10.): \n # actual VPA code \n def RHS(delta,x,Vd,k):\n return (-2.* mu/k) * Vsw(x,Vd) * sin(delta+k*x)**2.\n \n soln = odeint(RHS, 0, r, args=(V0,k), mxstep=10000, rtol=1e-6, atol=1e-6)\n # that is it\n return soln\n\ndef delta_VPA(k,V0, r=10.):\n \n # doing some tricks so that we can process either a single k or a vector of k\n if isscalar(k): \n delta0 = array([0.])\n kv = array([k])\n else:\n kv = array(k)\n \n if 0. in kv:\n kv[kv==0.] = 1e-10 # cannot allow zero for k due to 1/k in RHS\n \n # doing some tricks so that we can process either a single Rmax or a vector of r\n if isscalar(r):\n rv = array([r],float)\n else:\n rv = array(r,float)\n rv = sort(rv)\n if rv[0] != 0.:\n rv = insert(rv,0,0.)\n \n # actual VPA code \n def RHS(delta,x,Vd,k):\n return (-2.* mu/k) * Vsw(x,Vd) * sin(delta+k*x)**2.\n soln = zeros((len(rv),len(kv)))\n for i, ki in enumerate(kv):\n soln[:,i] = odeint(RHS, 0, rv, args=(V0,ki),mxstep=10000,rtol=1e-6,atol=1e-6)[:,0]\n # that is it\n\n # tricks to return based on input\n if isscalar(k):\n if isscalar(r):\n return soln[-1,0]\n return rv, soln[:,0]\n if isscalar(r):\n return soln[-1,:]\n return rv, soln.T\n \ndef delta_VPA_faster(k,V0, r=10.):\n # faster version of VPA, couples the error term for all k so it will break down if you give a it a vector of k with very small or zero momenta\n \n # doing some tricks so that we can process either a single k or a vector of k\n if isscalar(k): \n delta0 = array([0.])\n kv = array([k])\n else:\n kv = array(k)\n delta0 = zeros(len(k),float)\n if min(kv) < 1e-3:\n raise Exception('k must be >=1e-3,otherwise the method is unstable')\n \n # doing some tricks so that we can process either a single Rmax or a vector of r\n if isscalar(r):\n rv = array([r],float)\n else:\n rv = array(r,float)\n rv = sort(rv)\n if rv[0] != 0.:\n rv = insert(rv,0,0.)\n \n # actual VPA code \n def RHS(delta,x,Vd,kv):\n return (-2.* mu/kv) * Vsw(x,Vd) * sin(delta+kv*x)**2.\n soln = odeint(RHS, delta0, rv, args=(V0,kv))\n # that is it\n \n # tricks to return based on input\n if isscalar(k):\n if isscalar(r):\n return soln[-1,0]\n return rv, soln[:,0]\n if isscalar(r):\n return soln[-1,:]\n return rv, soln.T",
"Do a quick check against the analytic result with sample values for $k$ and $V0$ to see the accuracy we are getting:",
"V0=2.6\nprint deltaAnalyticAdjusted(2.,V0)\nprint delta_VPA(2.,V0)\nprint delta_VPA_faster(2.,V0)\nprint delta_VPA_simple(2.,V0)",
"To get more digits correct, increase atol and rtol in odeint (at the cost of slower evaluation of the function).\nCheck the cutoff phase shift out to Rmax. Does this plot make sense?",
"V0=1.5\nr=linspace(0,5,100)\nk = 2.\nrv, delta = delta_VPA(k,V0,r)\nplot(rv,delta)\nxlabel('r')\nylabel(r'$\\delta_p(r)$')\nshow()\n",
"Lets check phase shifts for a V0 with no bound states",
"V0 = 5.\nk = linspace(1e-3,10,100)\ndelta = delta_VPA(k,V0,2.0)\n\nseterr(all='ignore')\nplot(k,k / tan(delta),label='VPA at Rmax = 2.')\nplot(k,k / tan(deltaAnalytic(k,V0)),label='exact')\nseterr(all='warn')\nxlabel('k')\nylabel(r'$k \\cot\\delta(k)$')\nlegend(loc='upper left')\nshow()\n\nplot(k,delta,label='VPA at Rmax = 2.')\nplot(k,deltaAnalyticAdjusted(k,V0),label='exact')\nxlabel('k')\nylabel(r'$\\delta(k)$')\nlegend(loc='upper right')\nshow()",
"Lets check phase shifts for a V0 with 1 bound state",
"V0 = 4.\nk = linspace(1e-4,10,100)\ndelta = delta_VPA(k,V0)\nprint delta.shape\n\nseterr(all='ignore')\nplot(k,k / tan(delta),label='VPA at Rmax = 2.')\nplot(k,k / tan(deltaAnalytic(k,V0)),label='exact')\nseterr(all='warn')\nxlabel('k')\nylabel(r'$k \\cot\\delta(k)$')\nlegend(loc='upper left')\nshow()\n\nplot(k,delta,label='VPA at Rmax = 2.')\nplot(k,deltaAnalyticAdjusted(k,V0),label='exact')\nxlabel('k')\nylabel(r'$\\delta(k)$')\nlegend(loc='upper right')\nshow()",
"Now it's time to play!\nCheck whether Levinson's theorem holds by calculating phase shifts for increasing depths V0 and noting the jump to $\\delta(0) = n\\pi$, where n is the number of bound states (found, for example, from the square_well_example.nb notebook).",
"i = 0\nV0s = [.5,1.,2.,5.,10.,20.]\nk=linspace(.001,20,100)\n\nfor V0 in V0s:\n delta=delta_VPA(k,V0)\n plot(k,delta,label='V0 = %4.2f' % V0)\nxlabel('k')\nylabel(r'\\delta(k)')\nlegend(loc='upper right')\nshow()\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sergey-tomin/workshop
|
5_Genesis_preprocessor.ipynb
|
mit
|
[
"*This notebook was created by Svitozar Serkez. Source and license info is on GitHub. August 2016. *\nTutorial N5: Genesis preprocessor. \nOcelot is not only a particle tracking code, but a simulation toolkit.\nPython is known to be a very good \"glue\" between different software\nInterface to other codes may be (and was) developed.\nGENESIS interface is under active development and is currently used for studies\n-list of papers\n\nThis example will cover the following topics:\n\nInitialization of the library\nPreparing the GENESIS simulation\nrunning many-stage statistical simulation\n\nRequirements\n\nOCELOT - library\nnumpy, scipy, matplotlib\nbeam.txt - input beam file\n\nImport of modules",
"# the output of plotting commands is displayed inline within frontends, \n# directly below the code cell that produced it\n%matplotlib inline\nfrom __future__ import print_function\n# this python library provides generic shallow (copy) and deep copy (deepcopy) operations \nfrom copy import deepcopy\n\n# import from Ocelot graphical modules\nimport sys, os\nfrom ocelot import *\nfrom ocelot.utils.xfel_utils import *\nfrom ocelot.gui.accelerator import *\nfrom ocelot.gui.genesis_plot import *\n#from ocelot.optics.elements import Filter_freq\n\nimport numpy as np\nfrom copy import copy\n#import matplotlib.pyplot as plt\n# load beam distribution",
"Setting input parameters\nelectron beam energy and expected radiation photon energy",
"E_beam=8.5 #[GeV]\nE_photon=250 #[eV]",
"Creating SASE3 lattice\nwith native ocelot objects",
"# defining the undulator\nlperiod=0.068\nnperiods=73\nund = Undulator(lperiod=lperiod, nperiods=nperiods, Kx=1.0);\nund.Kx = Ephoton2K(E_photon, und.lperiod, E_beam)\n\n# defining of the drifts\nd2 = Drift (l=4*und.lperiod)\nd3 = Drift (l=7*und.lperiod)\n\n# defining of the quads\nqf = Quadrupole (l=6*und.lperiod, k1=-7.3)\nqd = Quadrupole (l=6*und.lperiod, k1=7.3)\nqdh=deepcopy(qd)\nqdh.l/=2\n\n# creating of the cell\nextra_fodo = (und, d2, qdh)\ncell_ps = (und, d2, qf, d3, und, d2, qd, d3)\nl_fodo= MagneticLattice(cell_ps).totalLen/2\nsase3 = MagneticLattice((und, d2, qd, d3) + 11*cell_ps)\n\nup = UndulatorParameters(und,E_beam)\nup.printParameters()",
"Load beam file",
"beamf = read_beam_file('beam.dat')",
"Plot beamfile",
"fig=plt.figure()\nfig.set_size_inches((20,15))\nplot_beam(fig, beamf)",
"Match beam file",
"beta_av = 20.0\nbeam=get_beam_peak(beamf)\nbeam.E=E_beam\nrematch(beta_av, l_fodo, qdh, sase3, extra_fodo, beam, qf, qd)\nbeamf = transform_beam_file(beamf ,transform = [ [beam.beta_x,beam.alpha_x], [beam.beta_y,beam.alpha_y] ], energy_new = beam.E, emit_scale = 1.0)\nbeamf = cut_beam(beamf,[-2e-6, 2e-6])\n\nfig=plt.figure()\nfig.set_size_inches((20,15))\nplot_beam(fig, beamf)",
"Tapering the undulator\nlinear, quadratic,power-law...",
"def f1(n, n0, a0, a1, a2):\n '''\n piecewise-quadratic tapering function\n '''\n for i in xrange(1,len(n0)):\n if n < n0[i]:\n return a0 + (n-n0[i-1])*a1[i-1] + (n-n0[i-1])**2 * a2[i-1]\n a0 += (n0[i]-n0[i-1])*a1[i-1] + (n0[i]-n0[i-1])**2 * a2[i-1]\n \n return 1.0\n\ntap_start=3 #number of undulators\nlin_tap=0.01 #taper step\nquad_tap=0.0\n\nn = 60\nn0 = [0,tap_start,60]\na0 = und.Kx\na1 = [0,lin_tap*a0]\na2 = [0,quad_tap]\n\ntaper_func1 = lambda n : f1(n, n0, a0, a1, a2)\n\nsase3= taper(sase3, taper_func1)",
"specify the run_dir - directory into which the experimental results will be saved",
"run_dir = 'gen_stst' #directory to dump data\nrun_id=0 # run number (subdirectory 'run_#') for statistical studies\ntry:\n os.makedirs(run_dir)\nexcept:\n pass\nlauncher = get_genesis_launcher('genesis') # launcher object to start genesis",
"generate Genesis input object",
"inp = generate_input(up, beam, itdp=False)\ninp.lattice_str = generate_lattice(sase3, unit = up.lw, energy = beam.E) #generate Genesis lattice based on Ocelot lattice object\ninp.beam_file_str = beam_file_str(beamf)\n#inp.beamfile = 'tmp.beam'\n\ninp.runid = run_id\ninp.run_dir = run_dir\ninp.ipseed = 17111+7*run_id # defines shot-noise, changes automatically \n# below other Genesis parameters may be specified, like prad0, dgrid, etc.....",
"now all the genesis input files are created, such as lattice file, beam file, input file.",
"print(inp.input())\n\nprint(inp.lattice_str)\n\nprint(inp.beam_file_str)",
"Genesis may be executed with the following command:",
"#g = run(inp,launcher)",
"if \"run\" function is placed in a sctipt, the following post-processing code will be executed after the GENESIS simulation is finished\nthe following two python scripts would start ocelot/genesis several-stage simulation for many independent runs\nPossible post-processing between stages:\n* electron beam: propagation through chicane via second-order tracking +CSR (in development)\n* radiation: hard X-ray Self-seeding\n* radiation: soft X-ray Self-seeding (in development)",
"exp_dir='/some_directory'\n\nrun_number=10\nrun_ids = xrange(0,run_number)\n\nstart_stage = 1\nstop_stage = 4\n\n# set simulation parameters\n# prepare electron beam file\n\nstage=1\nif start_stage <= stage and stop_stage >= stage:\n for run_id in run_ids: \n run_dir = exp_dir + 'run_' + str(run_id) \n #prepare input, specify parameters\n #inp.ipseed = 17111*(run_id + 1)\n #\n #\n #\n #g = run(inp,launcher)\n print('run #',run_id, ' of stage ',stage)\n \nstage=2\nif start_stage <= stage and stop_stage >= stage:\n for run_id in run_ids: \n run_dir = exp_dir + 'run_' + str(run_id) \n #prepare input based on stage 1 output\n #inp.ipseed = 27222*(run_id + 1)\n #\n #inp.distfile = 'run.'+ str(inp.runid)+'.s1.gout.dist'\n #\n #g = run(inp,launcher)\n print('run #',run_id, ' of stage ',stage)\n \nstage=3\nif start_stage <= stage and stop_stage >= stage:\n for run_id in run_ids: \n run_dir = exp_dir + 'run_' + str(run_id) \n #prepare input based on stage 1 output\n #inp.ipseed = 37333*(run_id + 1)\n #\n #inp.distfile = 'run.'+ str(inp.runid)+'.s1.gout.dist'\n #\n #g = run(inp,launcher)\n print('run #',run_id, ' of stage ',stage)\n \n #stage=n .........",
"next: Tutorial N6: Genesis_postprocessor"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
phoebe-project/phoebe2-docs
|
development/tutorials/grav_redshift.ipynb
|
gpl-3.0
|
[
"Gravitational Redshift (rv_grav)\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).",
"#!pip install -I \"phoebe>=2.4,<2.5\"",
"As always, let's do imports and initialize a logger and a new bundle.",
"import phoebe\nfrom phoebe import u # units\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nlogger = phoebe.logger()\n\nb = phoebe.default_binary()\n\nb.add_dataset('rv', times=np.linspace(0,1,101), dataset='rv01')\n\nb.set_value_all('ld_mode', 'manual')\nb.set_value_all('ld_func', 'logarithmic')\nb.set_value_all('ld_coeffs', [0.0, 0.0])\nb.set_value_all('atm', 'blackbody')",
"Relevant Parameters\nGravitational redshifts are only accounted for flux-weighted RVs (dynamical RVs literally only return the z-component of the velocity of the center-of-mass of each star).\nFirst let's run a model with the default radii for our stars.",
"print(b['value@requiv@primary@component'], b['value@requiv@secondary@component'])",
"Note that gravitational redshift effects for RVs (rv_grav) are disabled by default. We could call add_compute and then set them to be true, or just temporarily override them by passing rv_grav to the run_compute call.",
"b.run_compute(rv_method='flux-weighted', rv_grav=True, irrad_method='none', model='defaultradii_true')",
"Now let's run another model but with much smaller stars (but with the same masses).",
"b['requiv@primary'] = 0.4\nb['requiv@secondary'] = 0.4\n\nb.run_compute(rv_method='flux-weighted', rv_grav=True, irrad_method='none', model='smallradii_true')",
"Now let's run another model, but with gravitational redshift effects disabled",
"b.run_compute(rv_method='flux-weighted', rv_grav=False, irrad_method='none', model='smallradii_false')",
"Influence on Radial Velocities",
"afig, mplfig = b.filter(model=['defaultradii_true', 'smallradii_true']).plot(legend=True, show=True)\n\nafig, mplfig = b.filter(model=['smallradii_true', 'smallradii_false']).plot(legend=True, show=True)",
"Besides the obvious change in the Rossiter-McLaughlin effect (not due to gravitational redshift), we can see that making the radii smaller shifts the entire RV curve up (the spectra are redshifted as they have to climb out of a steeper potential at the surface of the stars).",
"print(b['rvs@rv01@primary@defaultradii_true'].get_value().min())\nprint(b['rvs@rv01@primary@smallradii_true'].get_value().min())\nprint(b['rvs@rv01@primary@smallradii_false'].get_value().min())\n\nprint(b['rvs@rv01@primary@defaultradii_true'].get_value().max())\nprint(b['rvs@rv01@primary@smallradii_true'].get_value().max())\nprint(b['rvs@rv01@primary@smallradii_false'].get_value().max())"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
zlxs23/Python-Cookbook
|
string_and_world_py3_1.ipynb
|
apache-2.0
|
[
"2.1 使用多个界定符分割字符串\n需要将一个字符串分割为多个字段,但是分割符( and 周围の空格)并不是固定\nstring 对象 的 split() 只是用与非常简单 の 字符串分割情况,起不允许有多个分割符 OR分隔符周围不确定 の 空格 When要更加灵活地切割字符串 の 时候 最好 使用 re.split()",
"line = 'asdf fjdk; afed, fjedk, asdef , foo'\nimport re\nline_new = re.split(r'[;,\\s]\\s*',line)\n\nline_new",
"re.split() 允许为分隔符指定一个 Regular mode 在以上例子中:分隔符可以是 逗号 分号 OR 空格 同时允许后面紧跟任意个空格 只要这个模式被找到 则匹配 の 分隔符两边 の 实体都会被当成结果中 の element 返回结果为一个字段 list 与 str.split() 返回值类型一致<br>When 使用 re.split() 需要注意 Regular Expression 是否包含一个括号捕获分组 IF 使用捕获分组,则被匹配的文本亦会出现在结果 list 中",
"fields = re.split(r'(;|,|\\s)\\s*',line)\nfields",
"捕获到 分割字符 亦可使用 想要保留之 用来在后面重新构造一个新的输出 string",
"values = fields[::2]\ndelimiters = fields[1::2] + ['']\nvalues\n\ndelimiters\n\n# Reform the line using the same delimiters\n''.join(v + d for v, d in zip(values, delimiters))",
"IF 不想保留分隔符 字符串 到 list 中去,但仍然要是用括号来分组 Regular Expression 的话,确保你的 分组是非捕获分组,形如 (?:...)",
"re.split('(?:,|;|\\s)\\s*',line)",
"2.2 字符串开头 OR 结尾匹配\n需要制定的文本模式去检查字符串的开头 OR 结尾,比如文件名后缀 URL Scheme等\n检查字符串开头 OR 结尾 使用 str.startswith() AND str.endswith() method",
"filename = 'spm.txt'\nfilename.endswith('.txt')\n\nfilename.startswith('spa')\n\nurl = 'http://www.python.prg'\nurl.startswith('http:')",
"需要检查多种匹配可能 只需将所有匹配项放入到一个元组中去 然后传入 startswith OR endswith()",
"import os\nfilenames = os.listdir('.')\nfilenames\n\n[name for name in filenames if name.endswith(('.ipynb','.py'))]\n\nany(name.endswith('.py') for name in filenames)",
"奇怪点:这个方法必须以一个元组作为参数 IF 恰好有一个 list OR set 类型的选择项 AND 确保传递参数前先调用 tuple 将其转换为 元组型",
"choices = ['htttp:','ftp:']\nurl.startswith(choices)\n\nurl.startswith(tuple(choices))",
"startswith AND endswith method 提供了一个非常方便的方式去做字符串 开头 AND 结尾 的检查 类似的操作亦可使用 slice 来实现 BUT 不优雅",
"filename[-4:] == '.txt'\n\nurl[:5] == 'http:' or url[:6] == 'https:' or url[:4] == 'ftp:'",
"亦可使用正则表达式来实现",
"import re\nre.match('http:|https:|ftp:',url)",
"当和其它操作 如 检查某个文件夹中是否存在指定的文件类型",
"if any(name.endsw)",
"2.3 用 Shell 通配符匹配字符串\n需要使用 Unix Shell 中常用的通配符 (比如 *.py,Dat[0-9]*.csv 等)去匹配字符串\nfnmatch module 提供 两个 Func fnmatch() OR fnmatchcase() 来实现这样匹配",
"from fnmatch import fnmatch,fnmatchcase\nfnmatch('foo.txt','*.txt')\n\nfnmatch('foo.txt','?oo.txt')\n\nfnmatch('Data45.csv','Dat[0-9]*')\n\nfnmatch('Dat45.csv','Dat[0-9]*')\n\nnames = ['Dat1.csv','Dat2.csv','Dat3.csv','config.ini','foo.py']\n[name for name in names if fnmatch(name,'Dat*.csv')]",
"fnmatch() Func 使用底层操作系统 の 大小写敏感规则 (不同 の 系统是不一样的)来匹配模式\n>>> # On OS X\n>>> fnmatch('foo.txt','*.TXT')\n>>> False\n>>> # On Win\n>>> fnmatch('foo.txt','*.TXT')\n>>> True\n\nIF 对这个区别很难受 可使用 fnmatchcase 来代替 其可完全按你的模式大小来写匹配",
"fnmatchcase('foo.txt','*.TXT')",
"此两函数通常被忽略的一个特性即在除了非文件名的字符串其亦有用",
"address = [\n '5412 N CLARK ST',\n '1060 W ADDISON ST',\n '1039 W GRANVILLE AVE',\n '2122 N CLARK ST',\n '4802 N BROADWAY',\n]\n[addr for addr in address if fnmatchcase(addr,'* ST')]\n\n[addr for addr in address if fnmatchcase(addr,'54[0-9][0-9] *CLARK*')]",
"fnmatch Func can 介于简单 の string AND 强大 正则表达式 之间 IF 在数据处理操作中只需简单的通配符就可完成的时候<br>IF Code need 做 文件名 の 匹配 最好使用 glob module\n2.4 字符串匹配 AND 搜索\n匹配 OR 搜索 指定模式 の 文本\nIF 匹配的是 字面字符串 则 需要调用基本字符串方法 str.find() AND str,endswith() str.startswith() OR 类似方法",
"text = 'yeah, but no,but yeah,but no,but yeah'\n# Exact match\nb0 = text == 'yeah'\n# Match at start or end\nb1 = text.startswith('yeah')\nb2 = text.endswith('yeah')\n# Search for the location of the first occurrence\nb3 = text.find('no')\nboolean = [b0,b1,b2,b3]\nprint(boolean)",
"对于复杂 の 匹配需要使用 正则表达式 AND re module<br>为解释 Regular Expression の 基本原理",
"text1 = '11/27/2013'\ntext2 = 'Nov 27,2012'\nimport re\n# Simple matching: \\d+ means match one or more digits\nfor text in (text1,text2):\n\tif re.match(r'\\d+/\\d+/\\d+',text):\n\t\tprint('Match yes!')\n\telse:\n\t\tprint('Match no!')",
"IF 想要使用 同一模式去做多次匹配 YOU Should 将模式 String 预编译为一个模式对象",
"import re\ntext1 = '11/27/2013'\ntext2 = 'Nov 27,2012'\ndatepat = re.compile(r'\\d+/\\d+/\\d+')\nfor text in [text1,text2]:\n\tif datepat.match(text):\n\t\tprint('yes')\n\telse:\n\t\tprint('no')",
"match 总是以字符串开始匹配 IF想去查找字符串任意部分的模式出现位置 使用 findall() 取代",
"text = 'Today is 11/27/2012,pycon starts 3/12/2013'\ndatepat.findall(text)",
"在定义 Regular Expression 通常用括号来捕获分组",
"datepat = re.compile(r'(\\d+)/(\\d+)/(\\d+)')\nm = datepat.match('11/27/1009')\nm\n\nm.group(0)\n\nm.group(1) + m.group(2) + m.group(3)\n\nfrom functools import reduce\ndef addd(x,y):\n return x + y\nreduce(addd,m.groups())\n\nmonth,day,year = m.groups()\n\nmonth + day + year\n\ndatepat.findall(text)\n\nfor month,day,year in datepat.findall(text):\n\tprint('{}-{}-{}'.format(year,month,year))",
"findall 会搜索文本同时以列表形式 返回所有匹配 IF以迭代形式 返回匹配 可使用 finditer() 来代替",
"for m in datepat.finditer(text):\n\tprint(m.groups())",
"使用 re 模块 进行 匹配 AND 搜索 文本 最基本方法 核心:\n\n先使用 re.compile() 进行编译 Regular Expression string\n后使用 match() OR findall() OR finditer() 等方法\n\n在使用 re 模块 时 一定要对 string 前 + r",
"m = datepat.match('11/27/2013abcd')\nm.groups()",
"精确匹配 以 $ 结尾",
"datepat = re.compile(r'(\\d+)/(\\d+)/(\\d+)')\ndatepat.match('11/27/2013abc') == datepat.match('11/27/2013')",
"IF 仅仅 做一次 简单 文本匹配 OR 搜索 可略过 编译 compile 过程 直接调用 re 模块级别 Func",
"re.findall(r'(\\d+)/(\\d+)/(\\d+)',text)",
"但是需要注意的是,如果你打算做大量的匹配和搜索操作的话,最好先编译正则表达式,然后再重复使用它。 模块级别的函数会将最近编译过的模式缓存起来,因此并不会消耗太多的性能, 但是如果使用预编译模式的话,你将会减少查找和一些额外的处理损耗。\n2.5 字符串搜索 AND 替换\n在字符串中搜索和匹配指定的文本的模式\n对于简单的字面模式 可用 str.relace()",
"text = 'yeah,but nol,but yeah,byt no,but yeah'\ntext.replace('yeah','yep')",
"对于复杂的模式,可使用 re 模块中 sub() 函数 说明将 形式<br>11/27/2012 --> 2012-11-27",
"text = 'Today is 11/27/2012,pycon starts 4/12/2021'\nimport re\nre.sub(r'(\\d+)/(\\d+)/(\\d+)',r'\\3-\\1-\\2',text)",
"sub Func first argv 被匹配的模式 second argv 替换模式<br>反斜杠数字比如 \\3 指向前面模式 の 捕获组号<br>若打算以相同的模式做多次替换 考虑先编译其将提升性能",
"datept = re.compile(r'(\\d+)/(\\d+)/(\\d+)')\ndatept"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fullmetalfelix/ML-CSC-tutorial
|
tSNE.ipynb
|
gpl-3.0
|
[
"t-distributed Stochastic Neighbour Embedding\nt-SNE is a nonlinear dimensionality reduction technique for high-dimensional data.\nMore info in the usual place: https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding",
"from sklearn.manifold import TSNE\nimport matplotlib.pyplot as plt\nimport numpy\nimport pickle\nfrom dscribe.descriptors import MBTR\nfrom visualise import view",
"We are going to apply this technique to a database of wine samples. The inputs are 13 chemical descriptors, the output is the index of its class (cheap, ok, good). In principle we do not know the output.",
"dataIn = numpy.genfromtxt('./data/wineInputs.txt', delimiter=',')\ndataOut = numpy.genfromtxt('./data/wineOutputs.txt', delimiter=',')\n\n# find indexes of wines for each class\nidx1 = numpy.where(dataOut==1)\nidx2 = numpy.where(dataOut==2)\nidx3 = numpy.where(dataOut==3)\n\n# compute the tSNE transformation of the inputs in 2 dimensions\ncomp = TSNE(n_components=2).fit_transform(dataIn)\n\n# plot the resulting 2D points\nplt.plot(comp[:,0],comp[:,1],'ro')\nplt.xlabel('X1')\nplt.ylabel('X2')\nplt.show()",
"The transform had no idea about the output classes, and still three clusters of points can be seen. We can overlay the knowledge of correct classifaction to check if the clusters correspond to what we know:",
"plt.plot(comp[idx1,0],comp[idx1,1],'go')\nplt.plot(comp[idx2,0],comp[idx2,1],'ro')\nplt.plot(comp[idx3,0],comp[idx3,1],'bo')\nplt.xlabel('X1')\nplt.ylabel('X2')\nplt.show()",
"Exercises\n1. Iron clusters\nWe have a bunch of Fe clusters and it is not easy to determine their crystal structure with conventional tools. Let's try using the MBTR descriptor and t-SNE on these clusters and check if we can distinguish between FCC and BCC phases.",
"import ase.io\n\n# load the database\nsamples = ase.io.read(\"data/clusters.extxyz\", index=':')\n\n# samples is now a list of ASE Atoms objects, ready to use!\n# the first 55 clusters are FCC, the last 55 are BCC\n\n# define MBTR setup\nmbtr = MBTR(\n species=[\"Fe\"],\n periodic=False,\n k2={\n \"geometry\": {\"function\": \"distance\"},\n \"grid\": { \"min\": 0, \"max\": 2, \"sigma\": 0.01, \"n\": 200 },\n \"weighting\": {\"function\": \"exp\", \"scale\": 0.4, \"cutoff\": 1e-2}\n },\n k3={\n \"geometry\": {\"function\": \"cosine\"},\n \"grid\": { \"min\": -1.0, \"max\": 1.0, \"sigma\": 0.02, \"n\": 200 },\n \"weighting\": {\"function\": \"exp\", \"scale\": 0.4, \"cutoff\": 1e-2}\n },\n flatten=True,\n sparse=False,\n)\n\n# calculate MBTR descriptor for each sample - takes a few secs\nmbtrs = mbtr.create(samples)\nprint(mbtrs.shape)",
"Plot the t-SNE projection of MBTR output and see if you can see the two classes of structures accurately",
"# ...",
"Plot the original MBTR descriptors and see if the structural differences are visible there",
"# ...",
"Try changing the MBTR and t-SNE parameters and see how the projection changes",
"# ..."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jmfranck/pyspecdata
|
docs/_downloads/871a80cfd7a71c1edf9cefede4305190/matrix_mult.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Matrix Multiplication\nVarious ways of implementing different matrix multiplications.\nRead the documentation embedded in the code.",
"# -*- coding: utf-8 -*-\nfrom pylab import *\nfrom pyspecdata import *\nfrom numpy.random import random\nimport time\ninit_logging('debug')",
"In this example, the assertions essentially tell the story of what's going on\nNote that in all these examples, the pyspecdata version appears more\ncomplicated.\nBut, that's because these are toy examples, where we have no need for the\ndimension names or axes.\nNonetheless, we wanted to give the simplest working example possible.\nFirst, we demonstrate matrix multiplication\nfor all the below, I attach an axis to make sure the routines work with the\naxes attached",
"a_nd = nddata(random(10*2048),[10,2048],['x','y']).setaxis('x','#').setaxis('y','#')\na = a_nd.data",
"in the next line, note how only the dimension that goes away is named the\nsame!\nif you think about the matrix a transforming from one vector space (labeled\ny) to another (labeled x) this makes sense",
"a2_nd = nddata(random(10*2048),[2048,10],['y','z']).setaxis('y','#').setaxis('z','#')\na2 = a2_nd.data\n\n# multiply two different matrices\n\ntime1 = time.time()\nb = a @ a2\ntime2 = time.time()\nb_nd = a_nd @ a2_nd",
"the previous is unambiguous b/c only 'y' is shared between the two,\nbut I can do the following for clarity:\nb_nd = a_nd.along('y') @ a2_nd\nNote that \"along\" gives the dimension along which the sum is performed -- and\nso this dimension goes away upon matrix multiplication.\nIf only one dimension is shared between the matrices, then we know to take\nthe sum along the shared dimension.\nFor example, here a2_nd transforms from a space called \"z\" into a space called \"y\",\nwhile a_nd transforms from \"y\" into \"x\" -- so it's obvious that a_nd @ a2_nd should\ntransform from \"z\" into \"y\".",
"time3 = time.time()\nassert b_nd.dimlabels == ['x','z'], b_nd.dimlabels\nassert all(isclose(b,b_nd.data))\nprint(\"total time\",(time3-time2),\"time/(time for raw)\",((time3-time2)/(time2-time1)))\nassert ((time3-time2)/(time2-time1))<1",
"calculate a projection matrix",
"time1 = time.time()\nb = a @ a.T\ntime2 = time.time()",
"note that here, I have to rename the column space",
"b_nd = a_nd.along('y',('x','x_new')) @ a_nd\ntime3 = time.time()\nassert b_nd.dimlabels == ['x_new','x'], b_nd.dimlabels\nassert all(b_nd.getaxis('x_new') == b_nd.getaxis('x'))\nassert (id(b_nd.getaxis('x_new')) != id(b_nd.getaxis('x')))\nassert all(isclose(b,b_nd.data))\nif time2-time1>0:\n print(\"total time\",(time3-time2),\"time/(time for raw)\",((time3-time2)/(time2-time1)))\n assert ((time3-time2)/(time2-time1))<1.1",
"now, a standard dot product note how I don't need along here, since it's\nunambiguous",
"a_nd = nddata(random(10),[10],['myaxis']).setaxis('myaxis','#')\nb_nd = nddata(random(10),[10],['myaxis']).setaxis('myaxis','#')\na = a_nd.data\nb = b_nd.data\nassert all(isclose(a.dot(b),(a_nd @ b_nd).data))",
"Finally, let's show what happens when we multiply a matrix by itself and\ndon't rename one of the dimensions\nBy doing this, we indicate that we're not interested in transforming from one\nvector space to another (as a projection matrix does), but rather just have\ntwo sets of vectors and are interested in finding the dot products between\nthe two sets\nThis will take the dot product of our 10 2048-long vectors, and present them\n10-long array",
"a_nd = nddata(random(10*2048),[10,2048],['x','y']).setaxis('x','#').setaxis('y','#')\na = a_nd.data\nb_nd = a_nd.along('y') @ a_nd\nb = matmul(a_nd.data.reshape(10,1,2048),\n a_nd.data.reshape(10,2048,1)).reshape(-1)\nassert all(isclose(b,b_nd.data))\nassert len(b.data) == 10"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ernestyalumni/MLgrabbag
|
kaggle/DatSciBow2017_FullPreprocessTutorial.ipynb
|
mit
|
[
"Data, Data Science Bowl 2017\ncf. https://www.kaggle.com/c/data-science-bowl-2017/data\nI had to sudo dnf install p7zip and install p7zip (yet another software repo) just to extract the images. \n\ndata_password.txt.zip - contains decryption key for the image files; needed to extract. 9$kAsfpQ*FtH \nsample_images.7z - smaller subset of full dataset, provided for people who wish to preview images before large file stage1.7z, File size 781.39 MB",
"import os, sys\nos.getcwd()\nos.listdir( os.getcwd() ) \n\nos.listdir( os.getcwd() + \"/2017datascibowl/sample_images/\") \n\nos.getcwd()",
"cf. Full Preprocessing Tutorial",
"%matplotlib inline\n\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\nimport dicom\nimport os\nimport scipy.ndimage\nimport matplotlib.pyplot as plt\n\nfrom skimage import measure, morphology\nfrom mpl_toolkits.mplot3d.art3d import Poly3DCollection\n\n# Some constants\nINPUT_FOLDER = './2017datascibowl/sample_images/'\npatients=os.listdir(INPUT_FOLDER)\npatients.sort()\n\npatients",
"Loading the files",
"# Load the scans in given folder path\ndef load_scan(path):\n \"\"\"\n INPUTS/ARGUMENTS \n ================\n @type path : Python string\n \n @type slices : Python list (for each file in a folder/directory of dicom.dataset.FileDataset \n each is a slice of the single patient's lung\n \"\"\"\n slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]\n slices.sort(key = lambda x: float(x.ImagePositionPatient[2]))\n try:\n slice_thickness = np.abs(slices[0].ImagePositionPatient[2] - slices[1].ImagePositionPatient[2])\n except:\n slice_thickness = np.abs(slices[0].SliceLocation - slices[1].SliceLocation)\n \n for s in slices:\n s.SliceThickness = slice_thickness\n \n return slices\n\nimage_test = np.stack([s.pixel_array for s in load_scan(INPUT_FOLDER+patients[0])])\n\nprint(image_test.shape);print(image_test.dtype)\n\npd.DataFrame( image_test[0]).describe()\n\npd.DataFrame( image_test[1]).describe()\n\nlen(image_test == -2000) # set outside-of-scan pixels to 0 \n\nprint( image_test.min() )",
"$N_x=512$, so $N_x\\in \\mathbb{Z}^+$ \n$N_y=512$, so $N_y\\in \\mathbb{Z}^+$ \nFor each patient, $m_{\\text{slice}} \\in \\mathbb{Z}^+$ represents the number of \"slices\", image \"slices\" of the single patient's lung. \nSo first make, for each patient, $(i_{\\text{slice}}, P_{\\text{arr}}) \\in \\lbrace 0,1,\\dots m_{\\text{slice}}-1 \\rbrace \\times \\mathbb{K}^{N_x \\times N_y } $",
"def get_pixels_hu(slices):\n \"\"\"\n INPUTS/ARGUMENTS\n ================\n @type slices : Python list of dicom.dataset.FileDataset, \n @param slices : each dicom.dataset.FileDataset representing an image \"slice\" of a single patient's lung\n \"\"\"\n image = np.stack([s.pixel_array for s in slices]) # np.array image.shape (134,512,512)\n # Convert to int16 (from sometimes int16),\n # should be possible as values should always be low enough (<32k)\n image = image.astype(np.int16)\n \n # Set outside-of-scan pixels to 0\n # The intercept is usually -1024, so air is approximately 0\n # suggested by Henry Wolf to avoid -2048 values\n outside_of_image_val = image.min()\n image[image == outside_of_image_val] = 0 \n \n image[image == -2000] = 0 \n \n # Convert to Hounsfield units (HU)\n for slice_number in range(len(slices)):\n \n intercept = slices[slice_number].RescaleIntercept\n slope = slices[slice_number].RescaleSlope\n \n if slope != 1:\n image[slice_number] = slope * image[slice_number].astype(np.float64)\n image[slice_number] = image[slice_number].astype(np.int16)\n \n image[slice_number] += np.int16(intercept)\n \n return np.array(image, dtype=np.int16)",
"Let's take a look at 1 of the patients.",
"patients[0]\n\nfirst_patient = load_scan(INPUT_FOLDER + patients[0])\n\nprint(type(first_patient));print(len(first_patient));print(type(first_patient[0]))\nprint(type(first_patient[0].pixel_array)); print(first_patient[0].pixel_array.shape)\n\nfirst_patient[0]\n\ndir( first_patient[0] )\n\nfirst_patient_pixels = get_pixels_hu(first_patient)\n\nprint(type(first_patient_pixels));print(first_patient_pixels.shape)\n\npd.DataFrame( first_patient[0].pixel_array ).describe()\n\npd.DataFrame( first_patient_pixels[0] ).describe()\n\nplt.hist(first_patient_pixels.flatten(),bins=80,color='c')\nplt.xlabel(\"Hounsfield Units (HU)\")\nplt.ylabel(\"Frequency\")\nplt.show() \n\n# Show some slice in the middle\nplt.imshow(first_patient_pixels[80],cmap=plt.cm.gray)\nplt.show()\n\nprint(type(first_patient_pixels));print(first_patient_pixels.shape);print(first_patient_pixels.dtype)\n\npd.DataFrame(first_patient_pixels[0]).describe()",
"Resampling\nA scan may have a pixel spacing of [2.5, 0.5, 0.5], which means that the distance between slices is 2.5 millimeters. For a different scan this may be [1.5, 0.725, 0.725], this can be problematic for automatic analysis (e.g. using ConvNets)!",
"def resample(image, scan, new_spacing=[1,1,1]):\n # Determine current pixel spacing\n spacing = np.array([scan[0].SliceThickness] + scan[0].PixelSpacing, dtype=np.float32)\n \n resize_factor = spacing / new_spacing\n new_real_shape = image.shape * resize_factor\n new_shape = np.round(new_real_shape)\n real_resize_factor = new_shape / image.shape\n new_spacing = spacing / real_resize_factor\n \n image = scipy.ndimage.interpolation.zoom(image, real_resize_factor, mode='nearest')\n \n return image, new_spacing",
"Let's resample our patient's pixels to an isomorphic resolution of 1 by 1 by 1 mm.",
"pix_resampled, spacing = resample(first_patient_pixels, first_patient, [1,1,1])\n\nprint(\"Shape before resampling\\t\", first_patient_pixels.shape)\nprint(\"Shape after resample\\t\",pix_resampled.shape)\n\nprint(first_patient[0].SliceThickness);print(first_patient[0].PixelSpacing)\n\nnp.array( [2.5]+first_patient[0].PixelSpacing)/[1,1,1]\n\nfirst_patient_pixels.shape\n\n# My stuff, each scan or image \"slice\" is a dicom.dataSet.FileDataSet \nprint(first_patient[0].SliceThickness)\nprint(first_patient[0].PixelSpacing)\n\nprint( np.array( [first_patient[0].SliceThickness]+first_patient[0].PixelSpacing)/ [1,1,1] )\n\nfirst_patient_pixels.shape\n\nfirst_patient_pixels.shape * np.array( [first_patient[0].SliceThickness]+first_patient[0].PixelSpacing)/ [1,1,1] \n\nnp.round( first_patient_pixels.shape * np.array( [first_patient[0].SliceThickness]+first_patient[0].PixelSpacing)/ [1,1,1] )\n\nnp.round( first_patient_pixels.shape * np.array( [first_patient[0].SliceThickness]+first_patient[0].PixelSpacing)/ [1,1,1] ) / first_patient_pixels.shape\n\nnp.round( first_patient_pixels.shape * np.array( [first_patient[0].SliceThickness]+first_patient[0].PixelSpacing)/ [1,1,1] ) / first_patient_pixels.shape\n\nprint(type(pix_resampled));print(type(spacing));print(pix_resampled.shape);print(spacing.shape); print(spacing)\n\nfor image_slice in first_patient:\n print(image_slice.pixel_array.shape)\n\n[1000,2000,3000]/np.array(first_patient_pixels.shape).astype(\"float32\")\n\ndef resample_given_dims(image, scan, new_shape):\n # Determine current pixel spacing\n spacing = np.array([scan[0].SliceThickness] + scan[0].PixelSpacing, dtype=np.float32) # (\\Delta z,\\Delta x,\\Delta y)\n# print(spacing)\n real_resize_factor=new_shape/np.array(image.shape).astype(\"float32\") \n# print(real_resize_factor)\n new_spacing = spacing/real_resize_factor # (\\Delta z',\\Delta x',\\Delta y')\n \n# real_resize_factor_zoom=np.round(real_resize_factor) # original\n real_resize_factor_zoom = real_resize_factor\n print(real_resize_factor_zoom)\n image = scipy.ndimage.interpolation.zoom(image, real_resize_factor_zoom, mode='nearest')\n \n return image, new_spacing\n\n# pix_resampled, spacing = resample(first_patient_pixels, first_patient, [1,1,1])\npix_resampled_given,spacing_given=resample_given_dims( first_patient_pixels, first_patient, [167,128,128])\n\n\npd.DataFrame(pix_resampled_given[0]).describe()\n\nspacing_given\n\n#def resample(image, scan, new_spacing=[1,1,1]):",
"3D plotting the scan\nUse marching cubes to create an approximate mesh for our 3D object, and plot this with matplotlib. Quite slow and ugly, but the best we can do.",
"def plot_3d(image,threshold=-300):\n \n # Position the scan upright,\n # so the head of the patient would be at the top facing the camera\n p = image.transpose(2,1,0)\n \n verts, faces = measure.marching_cubes(p, threshold)\n \n fig = plt.figure(figsize=(10,10))\n ax = fig.add_subplot(111, projection='3d')\n \n # Fancy indexing: `verts[faces]` to generate a collection of triangles\n mesh = Poly3DCollection(verts[faces], alpha=0.70)\n face_color = [0.45, 0.45, 0.75]\n mesh.set_facecolor(face_color)\n ax.add_collection3d(mesh)\n \n ax.set_xlim(0, p.shape[0])\n ax.set_ylim(0, p.shape[1])\n ax.set_zlim(0, p.shape[2])\n \n plt.show()\n",
"Our plot function tkaes a threshold argument which we can use to plot certain structures, such as all tissue or only the bones. 400 is a good threshold for showing the bones only (see Hounsfield unit table above).",
"plot_3d(pix_resampled,400)",
"Lung segmentation\nIn order to reduce the problem space, we can segment the lungs (and usually some tissue around it). \nThe steps:\n Threshold the image (-320 HU is a good threshold, but it doesn't matter much for this approach)\n Do connected components, determine label of air around person, fill this with 1s in the binary image \n* Optionally: For every axial slice in the scan, determine the largest solid connected component (the body+air around the person), and set others to 0. This fills the structures in the lungs in the mask.\n* Keep only the largest air pocket (the human body has other pockets of air here and there).",
"help(measure.label)\n\ndef largest_label_volume(im, bg=-1):\n vals, counts = np.unique(im, return_counts=True)\n \n counts = counts[vals != bg]\n vals = vals[vals != bg]\n \n if len(counts) > 0:\n return vals[np.argmax(counts)]\n else:\n return None\n \ndef segment_lung_mask(image, fill_lung_structures=True):\n \n # no actually binary, but 1 and 2.\n # 0 is treated as background, which we do not want\n binary_image = np.array(image > -320, dtype=np.int8)+1\n labels=measure.label(binary_image)\n \n # Pick the pixel in the very corner to determine which label is air.\n # Improvement: Pick multiple background labels from around the patient\n # More resitant to \"trays\" on which the patient lays cutting the air\n # around the person in half\n background_label = labels[0,0,0]\n \n # Fill the air around the person\n binary_image[background_label == labels] = 2\n \n \n # Method of filling the lung structures (that is superior to something like \n # morphological closing)\n if fill_lung_structures:\n # For every slice we determine the largest solid structure\n for i, axial_slice in enumerate(binary_image):\n axial_slice = axial_slice - 1\n labeling = measure.label(axial_slice)\n l_max = largest_label_volume(labeling, bg=0)\n \n if l_max is not None: # This slice contains some lung\n binary_image[i][labeling != l_max] = 1 \n \n binary_image -= 1 # Make the image actual binary\n binary_image = 1-binary_image # Invert it, lungs are now 1\n \n # Remove other air pockets insided body\n labels = measure.label(binary_image, background=0)\n l_max = largest_label_volume(labels, bg=0)\n if l_max is not None: # There are air pockets\n binary_image[labels != l_max] =0 \n \n return binary_image\n\n# look at bolek's comment\n# l_max = largest_label_volume(labels, bg=-1)\n\n# Gerome Pistre\ndef largest_label_volume(im, bg=-1):\n vals, counts = np.unique(im, return_counts=True)\n \n counts = counts[vals != bg]\n vals = vals[vals != bg]\n \n biggest=vals[np.argmax(counts)]\n return biggest\n\ndef segment_lung_mask(image, fill_lung_structures=True):\n \n # no actually binary, but 1 and 2.\n # 0 is treated as background, which we do not want\n binary_image = np.array(image > -320, dtype=np.int8)+1\n labels=measure.label(binary_image)\n \n # Pick the pixel in the very corner to determine which label is air.\n # Improvement: Pick multiple background labels from around the patient\n # More resitant to \"trays\" on which the patient lays cutting the air\n # around the person in half\n background_label = labels[0,0,0]\n \n # Fill the air around the person\n binary_image[background_label == labels] = 2\n \n \n # Method of filling the lung structures (that is superior to something like \n # morphological closing)\n if fill_lung_structures:\n # For every slice we determine the largest solid structure\n for i, axial_slice in enumerate(binary_image):\n axial_slice = axial_slice - 1\n labeling = measure.label(axial_slice)\n l_max = largest_label_volume(labeling, bg=0)\n \n if l_max is not None: # This slice contains some lung\n binary_image[i][labeling != l_max] = 1 \n \n binary_image -= 1 # Make the image actual binary\n binary_image = 1-binary_image # Invert it, lungs are now 1\n \n # Remove other air pockets insided body\n labels = measure.label(binary_image, background=0)\n l_max = largest_label_volume(labels, bg=0)\n if l_max is not None: # There are air pockets\n binary_image[labels != l_max] =0 \n \n return binary_image\n\nsegmented_lungs = segment_lung_mask(pix_resampled, False)\nsegmented_lungs_fill = 
segment_lung_mask(pix_resampled, True)\n\nplot_3d(segmented_lungs,0)\n\nplot_3d(segmented_lungs_fill,0)\n\nplot_3d(segmented_lungs_fill - segmented_lungs,0)\n\nplot_3d(pix_resampled,0)\n\nprint(type(segmented_lungs));print(type(segmented_lungs_fill));\nprint(segmented_lungs.shape);print(segmented_lungs_fill.shape)",
"Normalization\nOur values currently range from -1024 to around 2000. Anything above 400 is not interesting to us, as these are simply bones with different radiodensity. Here's some code you can use:",
"# MANUALLY change MIN_BOUND, MAX_BOUND\nMIN_BOUND=-1000.0 \nMAX_BOUND=400.0\n\ndef normalize(image):\n image = (image - MIN_BOUND)/(MAX_BOUND-MIN_BOUND)\n image[image>1]=1.\n image[image<0]=0.\n return image",
"Zero centering\nAs a final preprocessing step, it's advisory to 0 center your data so that your mean value is 0. To do this you simply subtract the mean pixel value from all pixels. \nTo determine this mean you simply average all images in the whole dataset. If that sounds like a lot of work, we found this to be around 0.25 in the LUNA16 competition. \nWarning: Do not zero center with the mean per image (like is done in some kernels on here). The CT scanners are calibrated to return accurate HU measurements. There is no such thing as an image with lower contrast or brightness like in normal pictures.",
"PIXEL_MEAN = 0.25\n\ndef zero_center(image):\n image = image - PIXEL_MEAN\n return image\n\n# from rolanddog\n# https://www.kaggle.com/gzuidhof/data-science-bowl-2017/full-preprocessing-tutorial\n# zero-center, and preserve compressibility\n\nMIN_BOUND = -1000.0\nMAX_BOUND = 400.0\nPIXEL_MEAN = 0.25\nPIXEL_CORR = int((MAX_BOUND -MIN_BOUND)*PIXEL_MEAN) # in this case, 350\n\ndef zero_center_w_compress(image):\n image = image - PIXEL_CORR\n return image\n\ndef normalize(image):\n image = (image - MIN_BOUND)/(MAX_BOUND-MIN_BOUND)\n image[image>(1-PIXEL_MEAN)]=1.\n image[image<(0-PIXEL_MEAN)]=0.\n return image\n\ndef normalize_to_bounds(image,MIN_BOUND,MAX_BOUND,PIXEL_MEAN=0.25):\n image = (image - MIN_BOUND)/(MAX_BOUND-MIN_BOUND)\n image[image>(1-PIXEL_MEAN)]=1.\n image[image<(0-PIXEL_MEAN)]=0.\n return image \n\ndef normalize(image,MIN_BOUND,MAX_BOUND):\n image = (image - MIN_BOUND)/(MAX_BOUND-MIN_BOUND)\n image[image>1]=1.\n image[image<0]=0.\n return image \n\ndef zero_center_w_compress_to_bounds(image,MIN_BOUND,MAX_BOUND,PIXEL_MEAN=0.25):\n PIXEL_CORR = int((MAX_BOUND -MIN_BOUND)*PIXEL_MEAN)\n image = image - PIXEL_CORR\n return image\n\ndef zero_center(image, PIXEL_MEAN):\n image = image - PIXEL_MEAN\n return image\n\nimport skimage\n\nskimage.morphology.binary_dilation",
"General advice from Guido Zuidhof: http://pubs.rsna.org/doi/pdf/10.1148/radiol.11091710",
"len(first_patient)\n\n%time patients_lst = [load_scan(INPUT_FOLDER+samplepatient) for samplepatient in patients]\n\npd.DataFrame( [len(patient) for patient in patients_lst] ).describe()\n\nprint( pd.DataFrame( [len(patient) for patient in patients_lst]).mean() )\nprint( pd.DataFrame( [len(patient) for patient in patients_lst]).median() )\n\nINPUT_FOLDER+patients[0]\n\nlen( os.listdir(INPUT_FOLDER+patients[0]) )\n\nlen(patients_lst[0])\n\ndef slices_per_patient(input_folder_path):\n \"\"\" slices_per_patient\n @type input_folder_path : Python string \n \"\"\"\n patients = os.listdir(input_folder_path)\n patients.sort()\n \n patient_slices_lst = []\n \n for patient in patients:\n Nz = len( os.listdir(input_folder_path + patient))\n patient_slices_lst.append(Nz)\n \n return patient_slices_lst\n\nslices_per_patient_sample = slices_per_patient( INPUT_FOLDER)\n\nprint(len(slices_per_patient_sample))\npd.DataFrame( slices_per_patient_sample).describe()\n\n# Load the scans in given folder path\ndef load_scan(path):\n \"\"\"\n INPUTS/ARGUMENTS \n ================\n @type path : Python string\n \n @type slices : Python list (for each file in a folder/directory of dicom.dataset.FileDataset \n each is a slice of the single patient's lung\n \"\"\"\n slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]\n slices.sort(key = lambda x: float(x.ImagePositionPatient[2]))\n try:\n slice_thickness = np.abs(slices[0].ImagePositionPatient[2] - slices[1].ImagePositionPatient[2])\n except:\n slice_thickness = np.abs(slices[0].SliceLocation - slices[1].SliceLocation)\n \n for s in slices:\n s.SliceThickness = slice_thickness\n \n return slices\n\nprint(type(patients_resampled));print(len(patients_resampled));\nprint(type(patients_resampled[0]));print(patients_resampled[0].shape);print(patients_resampled[7].shape)\n\nos.listdir( os.getcwd() + \"/2017datascibowl/sample_images/\") \n\n# Some constants\nINPUT_FOLDER = './2017datascibowl/sample_images/'\npatients=os.listdir(INPUT_FOLDER)\npatients.sort()\n\nfirst_patient = load_scan(INPUT_FOLDER + patients[0])\n\n# from Python list of 2-dim. numpy array/matrix to 3-dim. numpy array/matrix\nfirst_patient_pixels = get_pixels_hu(first_patient)\n\npatients_resampled[0][0].shape\n\npatients_lst[0][0].shape\n\nprint(type(patients_resampled[0].shape))\ndef norm_zero_images(patient_image,MIN_BOUND,MAX_BOUND,PIXEL_MEAN=0.25):\n \"\"\" norm_zero_images - due normalization and zero center on a 3-dim. numpy array/matrix that \n represents all (total) 2-dim. image slices for a single patient\"\"\"\n Nz = patient_image.shape[0]\n new_patient_image = []\n for z_idx in range(Nz):\n patient_normed=normalize_to_bounds(patient_image[z_idx],MIN_BOUND,MAX_BOUND,PIXEL_MEAN)\n patient_normed=zero_center_w_compress_to_bounds(patient_normed,MIN_BOUND,MAX_BOUND,PIXEL_MEAN)\n new_patient_image.append( patient_normed)\n new_patient_image=np.array(new_patient_image)\n return new_patient_image\n \n\ndef norm_zero_images(patient_image,MIN_BOUND,MAX_BOUND,PIXEL_MEAN=0.25):\n \"\"\" norm_zero_images - due normalization and zero center on a 3-dim. numpy array/matrix that \n represents all (total) 2-dim. 
image slices for a single patient\"\"\"\n Nz = patient_image.shape[0]\n new_patient_image = []\n for z_idx in range(Nz):\n patient_normed=normalize_to_bounds(patient_image[z_idx],MIN_BOUND,MAX_BOUND,PIXEL_MEAN)\n new_patient_image.append( patient_normed)\n new_patient_image=np.array(new_patient_image)\n return new_patient_image\n \n\ndef norm_zero_images(patient_image,MIN_BOUND,MAX_BOUND,PIXEL_MEAN=0.25):\n \"\"\" norm_zero_images - due normalization and zero center on a 3-dim. numpy array/matrix that \n represents all (total) 2-dim. image slices for a single patient\"\"\"\n Nz = patient_image.shape[0]\n new_patient_image = []\n for z_idx in range(Nz):\n patient_normed=normalize(patient_image[z_idx],MIN_BOUND,MAX_BOUND)\n patient_normed=zero_center(patient_normed,PIXEL_MEAN)\n new_patient_image.append( patient_normed)\n new_patient_image=np.array(new_patient_image)\n return new_patient_image\n \n\nprint(type(patients_resampled))\n\nprint(len( patients_resampled_normed ))\nprint(pd.DataFrame(patients_resampled_normed[0][0]).describe())\npd.DataFrame(patients_resampled_normed[4][6]).describe()\n\nos.listdir('./2017datascibowl/')\n\nyhat_ids = pd.read_csv('./2017datascibowl/stage1_labels.csv')\n\nyhat_ids.head()\n\nyhat_ids_found = yhat_ids.loc[yhat_ids['id'].isin(patients)]\n\nyhat_ids_found;\n\n#yhat_ids_found['id']-pd.DataFrame(patients) \n\nSet( yhat_ids_found['id']).symmetric_difference(patients)\n\nyhat_ids['id'].iloc[:70];\n\npd.DataFrame(patients)\n\n# pd.concat([pd.DataFrame(patients), yhat_ids_found['id']],join=\"inner\",ignore_index=True)\npatients[0] in yhat_ids_found['id'].as_matrix()\n\nm = len(patients)\nfound_indices =[]\nfor i in range(m):\n if patients[i] in yhat_ids_found['id'].as_matrix():\n found_indices.append(i)\n\npatients_resampled_found = [patients[i] for i in found_indices]\n\nfor i in range(len(found_indices)):\n print(patients_resampled_found[i] in yhat_ids_found['id'].as_matrix())\n\ny_found=[]\nfor i in range(len(found_indices)):\n if (patients_resampled_found[i] in yhat_ids_found['id'].as_matrix()):\n y_found.append( yhat_ids_found['cancer'] )\n\nos.listdir(os.getcwd())",
"Summary",
"# Some constants\nINPUT_FOLDER = './2017datascibowl/sample_images/'\npatients=os.listdir(INPUT_FOLDER)\npatients.sort()\n\nNz_lst = slices_per_patient(INPUT_FOLDER)\nNz_tot= int( pd.DataFrame(Nz_lst).median() )\n\n%time patients_lst = [load_scan(INPUT_FOLDER+samplepatient) for samplepatient in patients]\n\n%time patients_pixels = [get_pixels_hu(patient) for patient in patients_lst]\n\npixels_and_patients=zip(patients_pixels,patients_lst)\n\nNx=128\nNy=128\n# pix_resampled_given,spacing_given=resample_given_dims( first_patient_pixels, first_patient, [167,128,128])\n%time patients_resampled,spacings_new=map(list, \\\n zip(*[resample_given_dims(patient_pixels,\\\n patient,\\\n [Nz_tot,Nx,Ny]) for patient_pixels,patient in pixels_and_patients])) \n\n\nMIN_BOUND=-1000.0\nMAX_BOUND=500.0\nPIXEL_MEAN=0.25\n\n\npatients_resampled_normed = [norm_zero_images(patient_image,\\\n MIN_BOUND,\\\n MAX_BOUND,\\\n PIXEL_MEAN) for patient_image in patients_resampled]\n\ny_found\n\nprint(len(patients_resampled_normed_found))",
"Get the y value (outcomes), by matching patient IDs",
"os.listdir('./2017datascibowl/');\ny_ids = pd.read_csv('./2017datascibowl/stage1_labels.csv')\n\ny_ids_found=y_ids.loc[yhat_ids['id'].isin(patients)]\n\nm = len(patients)\nfound_indices =[]\nfor i in range(m):\n if patients[i] in y_ids_found['id'].as_matrix():\n found_indices.append(i)\n\npatients_resampled_found = [patients[i] for i in found_indices]\n\ny_found=[]\nfor i in range(len(found_indices)):\n if (patients_resampled_found[i] in yhat_ids_found['id'].as_matrix()):\n cancer_val = y_ids_found.loc[y_ids_found['id']==patients_resampled_found[i]]['cancer'].as_matrix()\n y_found.append( cancer_val )\n\ny_found=np.array(y_found).flatten()\n\npatients_resampled_normed_found=[patients_resampled_normed[idx] for idx in found_indices]\n\npatients_resampled_found[2]\n\nnp.array(y_found).flatten()",
"Process Whole",
"os.listdir('./2017datascibowl/')\n\nstage1_labels_csv = pd.read_csv(\"./2017datascibowl/stage1_labels.csv\")\n\nstage1_labels_csv.head()\n\nstage1_labels_csv.describe()\n\nstage1_labels_csv.as_matrix().shape\n\nstage1_sample_submission_csv = pd.read_csv(\"./2017datascibowl/stage1_sample_submission.csv\")\n\nstage1_sample_submission_csv.head()\n\nstage1_sample_submission_csv.describe()\n\nstage1_sample_submission_csv.as_matrix().shape\n\nstage1_labels_csv.cancer.median()\n\nstage1_sample_submission_csv[\"cancer\"]",
"Stage 1 File I/O",
"patients_stage1 = os.listdir('./2017datascibowl/stage1')\n\nprint(len(patients_stage1))",
"Stage 1 labels",
"stage1_labels_csv[\"id\"].describe()\n\nstage1_labels_csv[\"id\"].as_matrix()\n\nlen( set(patients_stage1).difference(set(stage1_labels_csv[\"id\"].as_matrix())) )\n\nset(stage1_labels_csv[\"id\"].as_matrix()).difference( set(patients_stage1))\n\nlen( set(patients_stage1).symmetric_difference(set(stage1_labels_csv[\"id\"].as_matrix())) )\n\nlen(patients_stage1)\n\nNz_lst = slices_per_patient('./2017datascibowl/stage1/')\nNz_tot= int( pd.DataFrame(Nz_lst).median() )\n\npatient_0 = load_scan('./2017datascibowl/stage1/'+patients_stage1[0])\n\npatient_pixels_0 = get_pixels_hu(patient_0)\n\nNx=128\nNy=128\n# pix_resampled_given,spacing_given=resample_given_dims( first_patient_pixels, first_patient, [167,128,128])\n%time patient_resampled_0,spacing_new_0=resample_given_dims(patient_pixels_0,\\\n patient_0,\\\n [Nz_tot,Nx,Ny]) \n\nMIN_BOUND=-1000.0\nMAX_BOUND=500.0\nPIXEL_MEAN=0.25\n\npatient_resampled_norm_0=norm_zero_images(patient_resampled_0,\\\n MIN_BOUND,\\\n MAX_BOUND,\\\n PIXEL_MEAN)\n\npatient_resampled_norm_0.flatten().shape\n\nspacing_new_0\n\nprint( patient_0[0].ImagePositionPatient )\nprint( patient_0[0].ImageOrientationPatient )\nprint( patient_0[1].ImagePositionPatient )\nprint( patient_0[1].ImageOrientationPatient )\nprint( patient_0[2].ImagePositionPatient )\nprint( patient_0[2].ImageOrientationPatient )\n\n\npatient_0[0].ImageOrientationPatient\n\npatient_0[1]\n\nprint(patient_0[0].ImagePositionPatient)\nprint(patient_0[0].SliceThickness)\nprint(patient_0[0].PixelSpacing)\ndir(patient_0[0]);\n\nprint(patient_0[0].ImageOrientationPatient)\n\npatient_0[0]\n\nnp.array(patient_0[0].ImagePositionPatient[:2])\n\nnp.array(patient_0[0].ImageOrientationPatient)\n\npatient_resampled_norm_0.flatten()\n\nnp.concatenate([patient_resampled_norm_0.flatten(),\n np.array(patient_0[0].ImagePositionPatient[:2]),\n np.array( patient_0[0].ImageOrientationPatient) ] ).shape\n\ndef process_patient(patientname, \n MIN_BOUND=-1000.0,\n MAX_BOUND=500.0,\n PIXEL_MEAN=0.25,\n INPUT_FOLDER='./2017datascibowl/stage1/',\n N_x=128,N_y=128,):\n \"\"\" process_patient - process single patient \"\"\"\n patient = load_scan(INPUT_FOLDER +patientname)\n \n Nz_lst = slices_per_patient( INPUT_FOLDER )\n print( \"Nz_lst: %d\" % len(Nz_lst))\n Nz_tot= int( pd.DataFrame(Nz_lst).median() )\n print( \"\\n Nz_tot : %d\" % Nz_tot)\n patient_pixels = get_pixels_hu(patient)\n print( patient_pixels.shape)\n \n patient_resampled,spacing_new=resample_given_dims(patient_pixels,patient,[Nz_tot,N_x,N_y]) \n print(patient_resampled.shape)\n patient_resampled_norm=norm_zero_images(patient_resampled,MIN_BOUND,MAX_BOUND,PIXEL_MEAN)\n \n patient_feature_vec = np.concatenate([patient_resampled_norm.flatten(),\n np.array(patient[0].ImagePositionPatient[:2]),\n np.array( patient[0].ImageOrientationPatient) ] )\n return patient_feature_vec\n\ndef save_feat_vec(patient_feature_vec,patientname,sub_name=\"stage1_feat\"):\n#def save_feat_vec(patient_feature_vec,patientname): original\n # f=file( \"./2017datascibowl/stage1_feat/\"+patientname + \"feat_vec\" ,\"wb\")\n f=file( \"./2017datascibowl/\"+sub_name+\"/\"+patientname + \"feat_vec\" ,\"wb\")\n np.save(f,patient_feature_vec)\n f.close()\n \n\npatient0_feature_vec = process_patient(patients_stage1[0])\n\nsave_feat_vec( patient0_feature_vec, patients_stage1[0])\n\nos.getcwd()\n\npatient0_feature_vec\n\ntest_patient0_feat_vec=np.load(\"./2017datascibowl/stage1_feat/\"+patients_stage1[0]+\"feat_vec\")\n\ntest_patient0_feat_vec\n\ntest_patient0_feat_vec.shape\n\nfor patient_name in 
patients_stage1:\n patient_feature_vec = process_patient(patient_name)\n save_feat_vec( patient_feature_vec, patient_name)\n \n\nlen(patients_stage1)\n\npatients_stage1_feat = os.listdir('./2017datascibowl/stage1_feat')\nprint(len(patients_stage1_feat))",
"Stage 1 Low-Res Preprocess",
"# Load the scans in given folder path\ndef load_scan(path):\n \"\"\"\n INPUTS/ARGUMENTS \n ================\n @type path : Python string\n \n @type slices : Python list (for each file in a folder/directory of dicom.dataset.FileDataset \n each is a slice of the single patient's lung\n \"\"\"\n slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]\n slices.sort(key = lambda x: float(x.ImagePositionPatient[2]))\n try:\n slice_thickness = np.abs(slices[0].ImagePositionPatient[2] - slices[1].ImagePositionPatient[2])\n except:\n slice_thickness = np.abs(slices[0].SliceLocation - slices[1].SliceLocation)\n \n for s in slices:\n s.SliceThickness = slice_thickness\n \n return slices\n\ndef slices_per_patient(input_folder_path):\n \"\"\" slices_per_patient\n @type input_folder_path : Python string \n \"\"\"\n patients = os.listdir(input_folder_path)\n patients.sort()\n \n patient_slices_lst = []\n \n for patient in patients:\n Nz = len( os.listdir(input_folder_path + patient))\n patient_slices_lst.append(Nz)\n \n return patient_slices_lst\n\ndef get_pixels_hu(slices):\n \"\"\"\n INPUTS/ARGUMENTS\n ================\n @type slices : Python list of dicom.dataset.FileDataset, \n @param slices : each dicom.dataset.FileDataset representing an image \"slice\" of a single patient's lung\n \"\"\"\n image = np.stack([s.pixel_array for s in slices]) # np.array image.shape (134,512,512)\n # Convert to int16 (from sometimes int16),\n # should be possible as values should always be low enough (<32k)\n image = image.astype(np.int16)\n \n # Set outside-of-scan pixels to 0\n # The intercept is usually -1024, so air is approximately 0\n # suggested by Henry Wolf to avoid -2048 values\n outside_of_image_val = image.min()\n image[image == outside_of_image_val] = 0 \n \n image[image == -2000] = 0 \n \n # Convert to Hounsfield units (HU)\n for slice_number in range(len(slices)):\n \n intercept = slices[slice_number].RescaleIntercept\n slope = slices[slice_number].RescaleSlope\n \n if slope != 1:\n image[slice_number] = slope * image[slice_number].astype(np.float64)\n image[slice_number] = image[slice_number].astype(np.int16)\n \n image[slice_number] += np.int16(intercept)\n \n return np.array(image, dtype=np.int16)\n\ndef resample_given_dims(image, scan, new_shape):\n # Determine current pixel spacing\n spacing = np.array([scan[0].SliceThickness] + scan[0].PixelSpacing, dtype=np.float32) # (\\Delta z,\\Delta x,\\Delta y)\n \n real_resize_factor=new_shape/np.array(image.shape).astype(\"float32\") \n \n new_spacing = spacing/real_resize_factor # (\\Delta z',\\Delta x',\\Delta y')\n \n image = scipy.ndimage.interpolation.zoom(image, real_resize_factor, mode='nearest')\n \n return image, new_spacing\n\ndef norm_zero_images(patient_image,MIN_BOUND,MAX_BOUND,PIXEL_MEAN=0.25):\n \"\"\" norm_zero_images - due normalization and zero center on a 3-dim. numpy array/matrix that \n represents all (total) 2-dim. 
image slices for a single patient\"\"\"\n Nz = patient_image.shape[0]\n new_patient_image = []\n for z_idx in range(Nz):\n patient_normed=normalize(patient_image[z_idx],MIN_BOUND,MAX_BOUND)\n patient_normed=zero_center(patient_normed,PIXEL_MEAN)\n new_patient_image.append( patient_normed)\n new_patient_image=np.array(new_patient_image)\n return new_patient_image\n \n\ndef normalize(image,MIN_BOUND,MAX_BOUND):\n image = (image - MIN_BOUND)/(MAX_BOUND-MIN_BOUND)\n image[image>1]=1.\n image[image<0]=0.\n return image \n\ndef zero_center(image, PIXEL_MEAN):\n image = image - PIXEL_MEAN\n return image\n\ndef process_patient(patientname, \n MIN_BOUND=-1000.0,\n MAX_BOUND=500.0,\n PIXEL_MEAN=0.25,\n INPUT_FOLDER='./2017datascibowl/stage1/',\n N_x=128,N_y=128,):\n \"\"\" process_patient - process single patient \"\"\"\n patient = load_scan(INPUT_FOLDER +patientname)\n \n Nz_lst = slices_per_patient( INPUT_FOLDER )\n Nz_tot= int( pd.DataFrame(Nz_lst).median() )\n patient_pixels = get_pixels_hu(patient)\n \n patient_resampled,spacing_new=resample_given_dims(patient_pixels,patient,[Nz_tot,N_x,N_y]) \n patient_resampled_norm=norm_zero_images(patient_resampled,MIN_BOUND,MAX_BOUND,PIXEL_MEAN)\n \n patient_feature_vec = np.concatenate([patient_resampled_norm.flatten(),\n np.array(patient[0].ImagePositionPatient[:2]),\n np.array( patient[0].ImageOrientationPatient) ] )\n return patient_feature_vec\n\ndef save_feat_vec(patient_feature_vec,patientname,sub_name=\"stage1_feat\"):\n#def save_feat_vec(patient_feature_vec,patientname): original\n # f=file( \"./2017datascibowl/stage1_feat/\"+patientname + \"feat_vec\" ,\"wb\")\n f=file( \"./2017datascibowl/\"+sub_name+\"/\"+patientname + \"feat_vec\" ,\"wb\")\n np.save(f,patient_feature_vec)\n f.close()\n\nimport os, sys\nos.getcwd()\nos.listdir( os.getcwd() ) \n\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\nimport dicom\nimport os\nimport scipy.ndimage\nimport matplotlib.pyplot as plt\n\npatients_stage1 = os.listdir('./2017datascibowl/stage1')\nprint(len(patients_stage1))\n\nN_x=32\nN_y=32\n\n%time \nfor patient_name in patients_stage1:\n patient_feature_vec = process_patient(patient_name,N_x=N_x,N_y=N_y)\n save_feat_vec( patient_feature_vec, patient_name,sub_name=\"stage1_feat_lowres32\")\n\n%time \nfor patient_name in patients_stage1:\n patient_feature_vec = process_patient(patient_name,N_x=N_x,N_y=N_y)\n save_feat_vec( patient_feature_vec, patient_name,sub_name=\"stage1_feat_lowres32\")\n\nimport time\n\nN_x=64\nN_y=64\nt1 = time.time()\nfor patient_name in patients_stage1:\n patient_feature_vec = process_patient(patient_name,N_x=N_x,N_y=N_y)\n save_feat_vec( patient_feature_vec, patient_name,sub_name=\"stage1_feat_lowres\"+str(N_x))\nt2=time.time()\nprint((t2-t1)*1000*1000.) # in seconds\n\npatient_feature_vec_test = process_patient(patients_stage1[0],N_x=N_x,N_y=N_y)\n\npatient_feature_vec_test.shape"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DocNow/notebooks
|
20161017-trending.ipynb
|
mit
|
[
"Exploring the Twitter trends API\nIn this notebook we'll take a quick look at the section of the Twitter Rest API that deals with trending terms:\n\nGET trends/available\nGET trends/place\nGET trends/closest\n\nWe'll work with the Tweepy library's support for these, explore the calls, and sketch out some possibilities.\nSetup\nFirst some library imports and user key setup. This assumes you've arranged for the necessary app keys at apps.twitter.com.",
"from collections import Counter\nimport os\nfrom pprint import pprint\nimport tweepy\n\nc_key = os.environ['CONSUMER_KEY']\nc_secret = os.environ['CONSUMER_SECRET']\na_token = os.environ['ACCESS_TOKEN']\na_token_secret = os.environ['ACCESS_TOKEN_SECRET']\n\nauth = tweepy.OAuthHandler(c_key, c_secret)\nauth.set_access_token(a_token, a_token_secret)\n\napi = tweepy.API(auth)",
"Basic calls\nThe first call trends/available returns a set of locations, each with its own set of parameters.",
"trends_available = api.trends_available()\n\nlen(trends_available)\n\ntrends_available[0:3]",
"We see from this (and from their docs) that Twitter uses Yahoo WOEIDs to define place names. How they arrived at this list of 467 places is not clear. It's a mix of countries and cities, at least:",
"sorted([t['name'] for t in trends_available])[:25]",
"Hmm, let's see exactly what classification levels they have in this set:",
"c = Counter([t['placeType']['name'] for t in trends_available])\n\nc",
"'Unknown'??",
"unk = [t for t in trends_available if t['placeType']['name'] == 'Unknown']\n\nunk",
"Ah, Soweto is a township, not a city, and Al Ahsa is a region.\nMoving along, we can dig into specifc places one at a time:",
"ams = [t for t in trends_available if t['name'] == 'Amsterdam'][0]\n\nams\n\ntrends_ams = api.trends_place(ams['woeid'])\n\ntrends_ams\n\ntrends_ams[0]['trends']\n\n[(t['name'], t['tweet_volume']) for t in trends_ams[0]['trends']]",
"There's also a way to fetch the list without hashtags:",
"trends_ams = api.trends_place(ams['woeid'], exclude='hashtags')\n\n[(t['name'], t['tweet_volume']) for t in trends_ams[0]['trends']]",
"Simplifying a bit\nLet's write a few functions so we don't have to write all that out each time.",
"def getloc(name):\n try:\n return [t for t in trends_available if t['name'].lower() == name.lower()][0]\n except:\n return None\n\ndef trends(loc, exclude=False):\n return api.trends_place(loc['woeid'], 'hashtags' if exclude else None)\n\ndef top(trends):\n return sorted([(t['name'], t['tweet_volume']) for t in trends[0]['trends']], key=lambda a: a[1] or 0, reverse=True)\n\ntok = getloc('tokyo')\n\ntrends_tok = trends(tok, True)\n\ntop(trends_tok)\n\nalg = getloc('Algeria')\n\ntrends_alg = trends(alg)\n\ntop(trends_alg)",
"Weird, there's an L-to-R issue rendering the output here:",
"top(trends_alg)[1][0]",
"Just for fun let's compare Algiers with Algeria.",
"algs = getloc('Algiers')\n\ntrends_algs = trends(algs)\n\ntop(trends_algs)",
"Very similar. How about Chicago vs Miami?",
"chi = getloc('Chicago')\n\ntrends_chi = trends(chi)\n\nchi\n\ntop(trends_chi)[:10]\n\nmia = getloc('Miami')\n\nmia\n\ntrends_mia = trends(mia)\n\ntop(trends_mia)[:10]\n\nworld = getloc('Worldwide')\n\ntrends_world = trends(world)\n\ntop(trends_world)[:25]",
"Looking nearby\nThere are specific javascript calls for fetching a user's geolocation through a browser (see Using Geolocation via MDN). With a tool like that in place, you could fetch the user's lat and long and send it to the trends/closest Twitter API call:",
"closeby = api.trends_closest(38.8860307, -76.9931073)\n\ncloseby[0]\n\ntop(trends(closeby[0]))",
"Hmm, that parentid attribute might be useful.",
"[t['name'] for t in trends_available if t['parentid'] == 23424977]\n\n[t['name'] for t in trends_available if t['parentid'] == world['woeid']]",
"No Liechtenstein? Georgia? Belize? Hmm.\nThoughts on rate limits\nThe Twitter API calls have the following rate limits (as of October 17, 2016):\n\nGET trends/available - 15/15 min\nGET trends/place - 15/15 min\nGET trends/closest - 15/15 min\n\nThe key issue here is trends/place. trends/available probably doesn't change very often. trends/closest would only vary if the user is on the move, and even then, only slowly. But to track trends at more than one level on a minute-over-minute basis would simply not be possible because we are limited to one call per minute on trends/place. \nPractically speaking, we could do:\n\nevery minute for one place\nevery two-plus-a-fraction minutes for two places \nevery four minutes for three places\n\nGiven the practicality of setting things up and allowing a little buffer, it probably makes sense to check three places every five minutes. The default scenario could be check (worldwide, country, closest place). A config parameter could figure this out for you if set by default, or a little UI element could help choose places. For example, if a continuing event users wanted to track occurred in Omaha, you could have an option to switch one of the ongoing trend trackers to get Omaha every five minutes, or to turn off all the defaults and fetch trends for Omaha once every minute."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.17/_downloads/96cf5c207119de22548efa8f14198f9e/plot_artifacts_correction_rejection.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Rejecting bad data (channels and segments)",
"# sphinx_gallery_thumbnail_number = 3\n\nimport numpy as np\nimport mne\nfrom mne.datasets import sample\n\ndata_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nraw = mne.io.read_raw_fif(raw_fname) # already has an EEG ref",
"Marking bad channels\nSometimes some MEG or EEG channels are not functioning properly\nfor various reasons. These channels should be excluded from\nanalysis by marking them bad as. This is done by setting the 'bads'\nin the measurement info of a data container object (e.g. Raw, Epochs,\nEvoked). The info['bads'] value is a Python string. Here is\nexample:",
"raw.info['bads'] = ['MEG 2443']",
"Why setting a channel bad?: If a channel does not show\na signal at all (flat) it is important to exclude it from the\nanalysis. If a channel as a noise level significantly higher than the\nother channels it should be marked as bad. Presence of bad channels\ncan have terribe consequences on down stream analysis. For a flat channel\nsome noise estimate will be unrealistically low and\nthus the current estimate calculations will give a strong weight\nto the zero signal on the flat channels and will essentially vanish.\nNoisy channels can also affect others when signal-space projections\nor EEG average electrode reference is employed. Noisy bad channels can\nalso adversely affect averaging and noise-covariance matrix estimation by\ncausing unnecessary rejections of epochs.\nRecommended ways to identify bad channels are:\n\n\nObserve the quality of data during data\n acquisition and make notes of observed malfunctioning channels to\n your measurement protocol sheet.\n\n\nView the on-line averages and check the condition of the channels.\n\n\nCompute preliminary off-line averages with artifact rejection,\n SSP/ICA, and EEG average electrode reference computation\n off and check the condition of the channels.\n\n\nView raw data with :func:mne.io.Raw.plot without SSP/ICA\n enabled and identify bad channels.\n\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>Setting the bad channels should be done as early as possible in the\n analysis pipeline. That's why it's recommended to set bad channels\n the raw objects/files. If present in the raw data\n files, the bad channel selections will be automatically transferred\n to averaged files, noise-covariance matrices, forward solution\n files, and inverse operator decompositions.</p></div>\n\nThe actual removal happens using :func:pick_types <mne.pick_types> with\nexclude='bads' option (see picking_channels).\nInstead of removing the bad channels, you can also try to repair them.\nThis is done by interpolation of the data from other channels.\nTo illustrate how to use channel interpolation let us load some data.",
"# Reading data with a bad channel marked as bad:\nfname = data_path + '/MEG/sample/sample_audvis-ave.fif'\nevoked = mne.read_evokeds(fname, condition='Left Auditory',\n baseline=(None, 0))\n\n# restrict the evoked to EEG and MEG channels\nevoked.pick_types(meg=True, eeg=True, exclude=[])\n\n# plot with bads\nevoked.plot(exclude=[], time_unit='s')\n\nprint(evoked.info['bads'])",
"Let's now interpolate the bad channels (displayed in red above)",
"evoked.interpolate_bads(reset_bads=False, verbose=False)",
"Let's plot the cleaned data",
"evoked.plot(exclude=[], time_unit='s')",
"<div class=\"alert alert-info\"><h4>Note</h4><p>Interpolation is a linear operation that can be performed also on\n Raw and Epochs objects.</p></div>\n\nFor more details on interpolation see the page channel_interpolation.\nMarking bad raw segments with annotations\nMNE provides an :class:mne.Annotations class that can be used to mark\nsegments of raw data and to reject epochs that overlap with bad segments\nof data. The annotations are automatically synchronized with raw data as\nlong as the timestamps of raw data and annotations are in sync.\nSee sphx_glr_auto_tutorials_plot_brainstorm_auditory.py\nfor a long example exploiting the annotations for artifact removal.\nThe instances of annotations are created by providing a list of onsets and\noffsets with descriptions for each segment. The onsets and offsets are marked\nas seconds. onset refers to time from start of the data. offset is\nthe duration of the annotation. The instance of :class:mne.Annotations\ncan be added as an attribute of :class:mne.io.Raw.",
"eog_events = mne.preprocessing.find_eog_events(raw)\nn_blinks = len(eog_events)\n# Center to cover the whole blink with full duration of 0.5s:\nonset = eog_events[:, 0] / raw.info['sfreq'] - 0.25\nduration = np.repeat(0.5, n_blinks)\nannot = mne.Annotations(onset, duration, ['bad blink'] * n_blinks,\n orig_time=raw.info['meas_date'])\nraw.set_annotations(annot)\nprint(raw.annotations) # to get information about what annotations we have\nraw.plot(events=eog_events) # To see the annotated segments.",
"It is also possible to draw bad segments interactively using\n:meth:raw.plot <mne.io.Raw.plot> (see\nsphx_glr_auto_tutorials_plot_visualize_raw.py).\nAs the data is epoched, all the epochs overlapping with segments whose\ndescription starts with 'bad' are rejected by default. To turn rejection off,\nuse keyword argument reject_by_annotation=False when constructing\n:class:mne.Epochs. When working with neuromag data, the first_samp\noffset of raw acquisition is also taken into account the same way as with\nevent lists. For more see :class:mne.Epochs and :class:mne.Annotations.\nRejecting bad epochs\nWhen working with segmented data (Epochs) MNE offers a quite simple approach\nto automatically reject/ignore bad epochs. This is done by defining\nthresholds for peak-to-peak amplitude and flat signal detection.\nIn the following code we build Epochs from Raw object. One of the provided\nparameter is named reject. It is a dictionary where every key is a\nchannel type as a string and the corresponding values are peak-to-peak\nrejection parameters (amplitude ranges as floats). Below we define\nthe peak-to-peak rejection values for gradiometers,\nmagnetometers and EOG:",
"reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)",
"<div class=\"alert alert-info\"><h4>Note</h4><p>The rejection values can be highly data dependent. You should be careful\n when adjusting these values. Make sure not too many epochs are rejected\n and look into the cause of the rejections. Maybe it's just a matter\n of marking a single channel as bad and you'll be able to save a lot\n of data.</p></div>\n\nWe then construct the epochs",
"events = mne.find_events(raw, stim_channel='STI 014')\nevent_id = {\"auditory/left\": 1}\ntmin = -0.2 # start of each epoch (200ms before the trigger)\ntmax = 0.5 # end of each epoch (500ms after the trigger)\nbaseline = (None, 0) # means from the first instant to t = 0\npicks_meg = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,\n stim=False, exclude='bads')\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n picks=picks_meg, baseline=baseline, reject=reject,\n reject_by_annotation=True)",
"We then drop/reject the bad epochs",
"epochs.drop_bad()",
"And plot the so-called drop log that details the reason for which some\nepochs have been dropped.",
"print(epochs.drop_log[40:45]) # only a subset\nepochs.plot_drop_log()",
"What you see is that some drop log values are empty. It means event was kept.\nIf it says 'IGNORED' is means the event_id did not contain the associated\nevent. If it gives the name of channel such as 'EOG 061' it means the\nepoch was rejected because 'EOG 061' exceeded the peak-to-peak rejection\nlimit."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ellisonbg/leafletwidget
|
examples/LegendControl.ipynb
|
mit
|
[
"Legend: How to use\nstep 1: create an ipyleaflet map",
"from ipyleaflet import Map, LegendControl\n\nmymap = Map(center=(-10,-45), zoom=4)\n\nmymap",
"step 2: create a legend\nBy default, you need to provide at least a dictionary with pair key=> the label to display and value=> the desired color. By default, it is named 'Legend', but you can pass a name as argument as well.",
"a_legend = LegendControl({\"low\":\"#FAA\", \"medium\":\"#A55\", \"High\":\"#500\"}, name=\"Legend\", position=\"bottomright\")\n\nmymap.add_control(a_legend)",
"Step 3: manipulate Legend\nName",
"a_legend.name = \"Risk\" ## set name\na_legend.name # get name",
"Legend content",
"a_legend.legends = {\"el1\":\"#FAA\", \"el2\":\"#A55\", \"el3\":\"#500\"} #set content\na_legend.legends # get content\n\na_legend.add_legend_element(\"el5\",\"#000\") # add a legend element\n\na_legend.remove_legend_element(\"el5\") # remove a legend element",
"Positioning",
"a_legend.positioning =\"topright\" # set positioning : possible values are topleft, topright, bottomleft, bottomright\na_legend.positioning # get current positioning"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Tsiems/machine-learning-projects
|
In_Class/ICA3_MachineLearning.ipynb
|
mit
|
[
"# Ebnable HTML/CSS \nfrom IPython.core.display import HTML\nHTML(\"<link href='https://fonts.googleapis.com/css?family=Passion+One' rel='stylesheet' type='text/css'><style>div.attn { font-family: 'Helvetica Neue'; font-size: 30px; line-height: 40px; color: #FFFFFF; text-align: center; margin: 30px 0; border-width: 10px 0; border-style: solid; border-color: #5AAAAA; padding: 30px 0; background-color: #DDDDFF; }hr { border: 0; background-color: #ffffff; border-top: 1px solid black; }hr.major { border-top: 10px solid #5AAA5A; }hr.minor { border: none; background-color: #ffffff; border-top: 5px dotted #CC3333; }div.bubble { width: 65%; padding: 20px; background: #DDDDDD; border-radius: 15px; margin: 0 auto; font-style: italic; color: #f00; }em { color: #AAA; }div.c1{visibility:hidden;margin:0;height:0;}div.note{color:red;}</style>\")",
"Enter Team Member Names here (double click to edit):\n\nName 1: Ian Johnson\nName 2: Travis Siems\nName 3: Derek Phanekham\n\n\nIn Class Assignment Three\nIn the following assignment you will be asked to fill in python code and derivations for a number of different problems. Please read all instructions carefully and turn in the rendered notebook (or HTML of the rendered notebook) before the end of class (or right after class). The initial portion of this notebook is given before class and the remainder is given during class. Please answer the initial questions before class, to the best of your ability. Once class has started you may rework your answers as a team for the initial part of the assignment. \n<a id=\"top\"></a>\nContents\n\n<a href=\"#LoadingKDD\">Loading KDDCup Data</a>\n<a href=\"#kdd_eval\">KDDCup Evaluation and Cross Validation</a>\n<a href=\"#data_snooping\">More Cross Validation</a>\n<a href=\"#stats\">Statistical Comparison</a>\n\nBefore coming to class, please make sure you have the latest version of scikit-learn. This notebook was created for version 0.18 and higher. \n\n<a id=\"LoadingKDD\"></a>\n<a href=\"#top\">Back to Top</a>\nLoading KDDCup Data\nPlease run the following code to read in the \"KDD Cup\" dataset from sklearn's data loading module. It consists of examples of different simulated attacks for the 1998 DARPA Intrusion Detection System (IDS). \nThis will load the data into the variable ds. ds is a bunch object with fields like ds.data and ds.target. The field ds.data is a numpy matrix of the continuous features in the dataset. The object is not a pandas dataframe. It is a numpy matrix. Each row is a set of observed instances, each column is a different feature. It also has a field called ds.target that is an integer value we are trying to predict (i.e., a specific integer represents a specific person). Each entry in ds.target is a label for each row of the ds.data matrix.",
"# fetch the dataset\nfrom sklearn.datasets import fetch_kddcup99\nfrom sklearn import __version__ as sklearn_version\n\nprint('Sklearn Version:',sklearn_version)\nds = fetch_kddcup99(subset='http')\n\nimport numpy as np\n# get some of the specifics of the dataset\nX = ds.data\ny = ds.target != b'normal.'\n\nn_samples, n_features = X.shape\nn_classes = len(np.unique(y))\n\nprint(\"n_samples: {}\".format(n_samples))\nprint(\"n_features: {}\".format(n_features))\nprint(\"n_classes: {}\".format(n_classes))",
"Question 1: How many instances are in the binary classification problem loaded above? How many instances are in each class? Plot a pie chart or bar chart of the number of classes.",
"from matplotlib import pyplot as plt\n%matplotlib inline\nplt.style.use('ggplot')\n\n#=== Fill in code below========\nprint('Total number of instances', len(y))\nprint('Number of instances in each class:',np.bincount(y))\n\n\nvals = np.bincount(y)\nplt.bar(range(len(vals)), vals)\nplt.ylabel('Counts')\nplt.title('Counts of Normal and Abnormal Events')\nplt.xticks(range(2), ('Normal','Abnormal'))",
"<a id=\"kdd_eval\"></a>\n<a href=\"#top\">Back to Top</a>\nKDDCup Evaluation and Cross Validation",
"from sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import KFold, ShuffleSplit\nfrom sklearn.model_selection import StratifiedKFold, StratifiedShuffleSplit\n\nfrom sklearn.metrics import make_scorer, accuracy_score\nfrom sklearn.metrics import precision_score, recall_score, f1_score\n\nfrom sklearn.linear_model import LogisticRegression\n\n# select model\nclf = LogisticRegression()\n#select cross validation\ncv = KFold(n_splits=10)\n# select evaluation criteria\nmy_scorer = make_scorer(accuracy_score)\n# run model training and cross validation\nper_fold_eval_criteria = cross_val_score(estimator=clf,\n X=X,\n y=y,\n cv=cv,\n scoring=my_scorer\n )\n\nplt.bar(range(len(per_fold_eval_criteria)),per_fold_eval_criteria)\nplt.ylim([min(per_fold_eval_criteria)-0.01,max(per_fold_eval_criteria)])",
"Question 2 Is the code above a proper separation of training and testing sets for the given dataset? Why or why not?\nFor this dataset, due to the class imbalance, stratified K fold partitioning should be used to guarantee that each of the folds actually includes examples of each class. A normal K-fold partitioning will result in only a handful of folds including any of the abnormal \"attack\" messages.\nQuestion 3: Is the evaluation metric chosen in the above code appropriate for the dataset? Why or Why not?\nNo, accuracy is not a good metric in this case because there is a significant class imbalance problem. Accuracy is not meaningful because there are so many normal messages in the data and very few abnormal messages in the data.\n\nExercise 1: If the code above is not a proper separation of the train or does not use the proper evaluation criteria, fix the code in the block below to use appropriate train/test separation and appropriate evaluation criterion (criteria).",
"from sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import KFold, ShuffleSplit\nfrom sklearn.model_selection import StratifiedKFold, StratifiedShuffleSplit\n\nfrom sklearn.metrics import make_scorer, accuracy_score\nfrom sklearn.metrics import precision_score, recall_score, f1_score\n\nfrom sklearn.linear_model import LogisticRegression\n# these imports above might help you\n\n#=====Write your code below here=================\n# select model\nclf = LogisticRegression()\n#select cross validation\ncv = StratifiedKFold(n_splits=10) ##CHANGED TO STRATIFIED K FOLD\n# select evaluation criteria\nmy_scorer = make_scorer(f1_score) ##SWITCHED TO F1_SCORE\n# run model training and cross validation\nper_fold_eval_criteria = cross_val_score(estimator=clf,\n X=X,\n y=y,\n cv=cv,\n scoring=my_scorer\n )\n\nplt.bar(range(len(per_fold_eval_criteria)),per_fold_eval_criteria)\nplt.ylim([min(per_fold_eval_criteria)-0.01,max(per_fold_eval_criteria)])\n\nprint(\"Mean F1: \", np.mean(per_fold_eval_criteria))",
"Question 4: Does the learning algorithm perform well based on the evaluation criteria? Why or why not?\nYes, the algorithm performs well, as the mean F1 score is 0.9968. Because the F1 score is a weighted average of the recall scores and precision scores, this high F1 score means that we have a very small number of not only Type I errors, but also Type II errors, relative to the number of actual positive and negative instances in the dataset.\n<a id=\"data_snooping\"></a>\n<a href=\"#top\">Back to Top</a>\nMore Cross Validation\nExercise 2: Does the code below contain any errors in the implementation of the cross validation? If so, fix the code below.",
"from sklearn.decomposition import PCA\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.pipeline import Pipeline\n\n\n#======If there are errors, fix them below======\nn_components = 1\n\n##REMOVED THE PCA FROM HERE AND PUT IT IN THE PIPELINE\n##We also removed the StandardScaler, since PCA performs scaling internally\n\nclf = Pipeline([\n ('pca',PCA(n_components=n_components)),\n ('clf',LogisticRegression())])\n\nper_fold_eval_criteria = cross_val_score(estimator=clf,\n X=X,\n y=y,\n cv=cv,\n scoring=my_scorer\n )\n\nplt.bar(range(len(per_fold_eval_criteria)),per_fold_eval_criteria)\nplt.ylim([min(per_fold_eval_criteria)-0.01,max(per_fold_eval_criteria)])\n\n# =====fixed code======\n# write the fixed code (if needed) below\n\n#We put the PCA inside of the pipeline, as that is where it should be, and we removed the StandardScaler, because it's redundant",
"For this question, the circumstances for the DARPA KDD99 cup are changed in the following way:\n- When the model for detecting attacks is deployed, we now think that it will often need to be retrained.\n- DARPA anticipates that there will be a handful of different style attacks on their systems that have never been seen before. To detect these new attacks, they are employing programmers and analysts to find them manually every day. \n- DARPA believes the perpetrators of these new attacks are more sophisticated, so finding the new attacks will take priority over detecting the older, known attacks. \n- DARPA wants to use your learning algorithm for detecting only these new attacks, but the amount of training and testing data will be extremely small, because the analysts can only identify a handful of new style attacks each day.\n- DARPA asks you if you think its a good idea to employ retraining your model each day to find these new attacks.\nQuestion 5: How would you change the method of cross validation to answer this question from DARPA? That is, how can you change your cross validation method to better mirror how your system will be used and deployed by DARPA? \nWe would use the time-series approach discussed in the flipped lecture, where we use all existing data to try to classify the new data from every day.\nAt day 0 (the day that we start identifying the new type of attack), we would build a new dataset consisting of all of the new messages, including the new sophisticated attacks, which we would label as sophisticated attacks.\nAt day N, we would use all of the collected data from days 0..N (not including N) to build a model to classify the data from day N. We would then compare the model's output (for predicting day N) to the actual measured classes of the data from day N. We would compute the F1 score of the model with respect to identifying only sophisticated attacks, and use that to evaluate the model, or we could use a cost matrix where the cost of misidentifying a sophisiticated attack is much higher than the cost of misidentifying any other type of attack.\nWe believe that we should re-train the model every day, because we have so little data about the new attacks that leaving any of it out of the model would be a waste.",
"#plotting function for use in next question\n# takes input 'test_scores', and an x-axis label\ndef plot_filled(test_scores,train_x_axis, xlabel=''):\n \n test_mean = np.percentile(test_scores,50, axis=1)\n test_max = np.percentile(test_scores,95, axis=1) \n test_min = np.percentile(test_scores,5, axis=1) \n\n plt.plot(train_x_axis, test_mean,\n color='blue', linestyle='--',\n marker='s', markersize=5,\n label='validation set')\n\n plt.fill_between(train_x_axis,\n test_min,\n test_max,\n alpha=0.15, color='blue')\n\n plt.grid(True)\n plt.xlabel(xlabel)\n plt.ylabel('Evaluation Criterion')\n plt.legend(loc='lower right')\n plt.tight_layout()",
"DARPA is also concerned about how much training data they will need from the analysts in order to have a high performing model. They would like to use the current dataset to help answer that question. The code below is written for you to help answer DARPA's question about how many examples will be needed for training. Examine the code and then answer the following question:\nQuestion 6: Based on the analysis graphed below, how many positive examples are required to have a good tradeoff between bias and variance for the given evaluation criteria? Why?",
"clf = LogisticRegression()\n\ntest_scores = []\ntrain_sizes=np.linspace(5e-4,5e-3,10)\n\nfor size in train_sizes:\n cv = StratifiedShuffleSplit(n_splits=100,\n train_size = size,\n test_size = 1-size,\n )\n test_scores.append(cross_val_score(estimator=clf,X=X,y=y,cv=cv,scoring=my_scorer))\n\nplot_filled(np.array(test_scores), train_sizes*100, 'Percentage training data (%)')\n\nprint(.0015 * len(X))",
"It seems that approximately 0.15% of the data must be comprised of positive examples in order to optimize the tradeoff between vias and variance.\nFor the entire dataset, this means about 88 examples.\n\n<a id=\"stats\"></a>\n<a href=\"#top\">Back to Top</a>\nStatistical Comparison\nNow lets create a few different models and see if any of them have statistically better performances. \nWe are creating three different classifiers below to compare to one another. For creating different training and testing splits, we are using stratified shuffle splits on the datasets.",
"clf1 = LogisticRegression(C=100)\nclf2 = LogisticRegression(C=1)\nclf3 = LogisticRegression(C=0.1)\n\ntrain_size = 0.003 # small training size\ncv = StratifiedShuffleSplit(n_splits=10,train_size=train_size,test_size=1-train_size)\n\nevals1 = cross_val_score(estimator=clf1,X=X,y=y,scoring=my_scorer,cv=cv)\nevals2 = cross_val_score(estimator=clf2,X=X,y=y,scoring=my_scorer,cv=cv)\nevals3 = cross_val_score(estimator=clf3,X=X,y=y,scoring=my_scorer,cv=cv)",
"Question 7: Given the code above, what statistical test is more appropriate for selecting confidence intervals, and why? Your options are:\n- A: approximating the evaluation criterion as a binomial distribution and bounding by the variance (the first option we used in the flipped lecture video)\n- B: approximating the bounds using the folds of the cross validation to get mean and variance (the second option we used in the flipped lecture video)\n- C: Either are acceptable statistical tests for obtaining confidence intervals\nAnswer: B\nA) is not an acceptable answer, because, since the training size is so small, the testing sets are not independent, and the binomial approximation requires that the datasets are independent.\nB) is an acceptable answer, because we have 3 sets of scores for the three classifiers, so we can compute the mean and variance of the difference between these sets. This is the only acceptable test in this scenario.\nC) is not an acceptable answer, since A is incorrect.\n\nFinal Exercise: With 95% confidence, perform the statistical test that you selected above. Is any model or set of models statistically the best performer(s)? Or can we not say if the models are different with greater than 95% confidence?\nIf you chose option A, use a multiplier of Z=1.96. The number of instances used in testing can be calculated from the variable train_size.\nIf you chose option B, use a multiplier of t=2.26 and k=10.",
"#===================================================\n# Enter your code below\ndiff12 = evals1 - evals2\ndiff13 = evals1 - evals3\ndiff23 = evals2 - evals3\n\nsigma12 = np.sqrt(np.sum(diff12*diff12) * 1/(10-1))\nsigma13 = np.sqrt(np.sum(diff13*diff13) * 1/(10-1))\nsigma23 = np.sqrt(np.sum(diff23*diff23) * 1/(10-1))\n\nd12 = (np.mean(diff12) + 1/(np.sqrt(10) * 2.26 * sigma12), np.mean(diff12) - 1/(np.sqrt(10) * 2.26 * sigma12)) \nd13 = (np.mean(diff13) + 1/(np.sqrt(10) * 2.26 * sigma13), np.mean(diff13) - 1/(np.sqrt(10) * 2.26 * sigma13)) \nd23 = (np.mean(diff23) + 1/(np.sqrt(10) * 2.26 * sigma23), np.mean(diff23) - 1/(np.sqrt(10) * 2.26 * sigma23)) \n\nprint('Models 1 and 2 have statistically the best F1_score with 95% confidence (compared to model 3)')\nprint('Models 1 and 2 do not have statistically different F1 scores with 95% confidence.')\n#===================================================",
"That's all! Please save (make sure you saved!!!) and upload your rendered notebook and please include team member names in the notebook submission."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
arnicas/eyeo_nlp
|
python/Tokenizing_Stopwords_Freqs.ipynb
|
cc0-1.0
|
[
"Intro to low level NLP - Tokenization, Stopwords, Frequencies, Bigrams\nLynn Cherny, arnicas@gmail",
"import itertools\nimport nltk\nimport string\n\nnltk.data.path\n\nnltk.data.path.append(\"../nltk_data\")\n\nnltk.data.path = ['../nltk_data']",
"Tokenization\nRead in a file to use for practice. The directory is one level above us now, in data/books. You can add other files into the data directory if you want.",
"ls ../data/books\n\n# the \"U\" here is for universal newline mode, because newlines on Mac are \\r\\n and on Windows are \\n.\n\nwith open(\"../data/books/Austen_Emma.txt\", \"U\") as handle:\n text = handle.read()\n\ntext[0:120]",
"Before we go further, it might be worth saying that even the lines of a text can be interesting as a visual. Here are a couple of books where every line is a line of pixels, and we've applied a simple search in JS to show lines of dialogue in pink. (The entire analysis is done in the web file book_shape.html -- so it's a little slow to load.)\n<img src=\"img/book_shape_dialog_emma_moby.png\">\nBut usually you want to extract some sense of the content, which means crunching the text itself to get insights about the overall file.",
"## if you don't want the newlines in there - replace them all.\ntext = text.replace('\\n', ' ')\n\ntext[0:120]\n\n## Breaking it up by sentence! Can be very useful for vis :)\nnltk.sent_tokenize(text)[0:10]\n\ntokens = nltk.word_tokenize(text)\ntokens[70:85] # Notice the punctuation:\n\n# Notice the difference here:\nnltk.wordpunct_tokenize(text)[70:85]",
"There are other options for tokenization in NLTK. You can test some out here: http://text-processing.com/demo/tokenize/\nDoing it in textkit at the command line:\nThanks to the work of Bocoup.com, we have a library that will do some simple text analysis at the command line, wrapping up some of the python functions I'll be showing you. The library is at https://github.com/learntextvis/textkit. Be aware it is under development! Also, some of these commands will be slower than running the code in the notebook.\nWhen I say you can run these at the command line, what I mean is that in your terminal window you can type the command you see here after the !. The ! in the Jupyter notebook means this is a shell command.\nThe | is a \"pipe.\" This means take the output from the previous command and make it the input to the next command.",
"# run text2words on this book file at this location, pipe the output to the unix \"head\" command, showing 20 lines\n!textkit text2words ../data/books/Austen_Emma.txt | head -n20\n\n# Pipe the output through the lowercase textkit operation, before showing 20 lines again!\n!textkit text2words ../data/books/Austen_Emma.txt | textkit lowercase | head -n20",
"What if, at this point, we made a word cloud? Let's say we strip out the punctuation and just count the words. I'll do it quickly just to show you... but we'll go a bit further.",
"!textkit text2words ../data/books/Austen_Emma.txt | textkit filterpunc | textkit tokens2counts > ../outputdata/simple_emma_counts.csv\n\n!ls -al ../outputputdata/simple_emma_counts.csv",
"Using the html file simple_wordcloud.html and this data file, we can see something basically useless. You don't have to do this yourself, but if you want to, edit that file to point to the ../outputdata/simple_emma_counts.csv at the bottom.\n<img src=\"img/emma_wc_nostops.png\">\nStopWords\n\"Stopwords\" are words that are usually excluded because they are common connectors (or determiners, or short verbs) that are not considered to carry meaning. BEWARE hidden stopword filtering in libraries you use and always check stopword lists to see if you agree with their contents!",
"from nltk.corpus import stopwords\nenglish_stops = stopwords.words('english')\n\n# Notice they are lowercase. This means we need to be sure we lowercase our text if we want to match against them.\n\nenglish_stops\n\ntokens = nltk.word_tokenize(text)\ntokens[0:15]\n\n# this is how many tokens we have:\nlen(tokens)",
"We want to strip out stopwords - use a list comprehension. Notice you need to lower case the words before you check for membership!",
"# try this without .lower in the if-statement and check the size!\n# We are using a python list comprehension to remove the tokens from Emma (after lowercasing them!) that are stopwords\ntokens = [token.lower() for token in tokens if token.lower() not in english_stops]\nlen(tokens)\n\n# now look at the first 15 words:\ntokens[0:15]",
"Let's get rid of punctuation too, which isn't used in most bag-of-words analyses. \"Bag of words\" means lists of words where the order doesn't matter. That's how most NLP tasks are done!",
"import string\nstring.punctuation\n\n# Now remove the punctuation and see how much smaller the token list is now:\ntokens = [token for token in tokens if token not in string.punctuation]\nlen(tokens)\n\n# But there's some awful stuff still in here:\nsorted(tokens)[0:20]",
"The ugliness of some of those tokens! You have some possibilities now - add to your stopwords list the ones you want removed; or remove all very short words, which will get rid of our puntuation problem too.",
"[token for token in tokens if len(token) <= 2][0:20]\n\n# Let's define a small python function that's a pretty common one for text processing.\n\ndef clean_tokens(tokens):\n \"\"\" Lowercases, takes out punct and stopwords and short strings \"\"\"\n return [token.lower() for token in tokens if (token not in string.punctuation) and \n (token.lower() not in english_stops) and len(token) > 2]\n\nclean = clean_tokens(tokens)\nclean[0:20]\n\nlen(clean)",
"So now we've reduced our data set from 191739 to 72576, just by removing stopwords, punctuation, and short strings. If we're interested in \"meaning\", this is a useful removal of noise.\nUsing textkit at the commandline for filtering stopwords and punctuation and lowercase and short words:\n(We are breaking these lines up with some intermediate output files (emma_lower.txt) because of how long these get.)",
"!textkit text2words ../data/books/Austen_Emma.txt | textkit filterpunc | textkit tokens2lower > ../outputdata/emma_lower.txt\n\n!head -n5 ../outputdata/emma_lower.txt\n\n!textkit text2words ../outputdata/emma_lower.txt | textkit filterwords | textkit filterlengths -m 3 > ../outputdata/emma_clean.txt\n\n!head -n10 ../outputdata/emma_clean.txt",
"Count Word Frequencies\nThe obvious thing you want to do next is count frequencies of words in texts - NLTK has you covered. (Or you can do it easily yourself using a Counter object.)",
"from nltk import Text\ncleantext = Text(clean)\ncleantext.vocab().most_common()[0:20]\n\n# if you want to know all the vocabular, without counts - you can remove the [0:10] which just shows the first 10:\ncleantext.vocab().keys()[0:10]\n\n# Another way to do this is with nltk.FreqDist, which creates an object with keys that are \n# the vocabulary, and values for the counts:\n\nnltk.FreqDist(clean)['sashed']",
"If you wanted to save the words and counts to a file to use, you can do it like this:",
"wordpairs = cleantext.vocab().most_common()\nwith open(\"../outputdata/emma_word_counts.csv\", \"w\") as handle:\n for pair in wordpairs:\n handle.write(pair[0] + \",\" + str(pair[1]) + \"\\n\")\n\n!head -n5 ../outputdata/emma_word_counts.csv",
"Using Textkit at the command line:\nLet's save the output of the filtered, lowercase words into a file called cleantokens.txt:",
"!textkit text2words ../data/books/Austen_Emma.txt | textkit filterpunc | textkit tokens2lower > ../outputdata/emma_lower.txt\n\n!textkit filterwords ../outputdata/emma_lower.txt | textkit filterlengths -m 3 > ../outputdata/emma_clean.txt\n\n!head -n5 ../outputdata/emma_clean.txt\n\n!textkit tokens2counts ../outputdata/emma_clean.txt > ../outputdata/emma_word_counts.csv\n\n!head ../outputdata/emma_word_counts.csv",
"Now you are ready to make word clouds that are smarter than your average word cloud. Move your counts file into a place where your html can find it. Edit the file \"simple_wordcloud.html\" to use the name of your file, including the path! \n<img src=img/edit_file_name.png>\nYou may still see some words in here you don't love -- names, modal verbs (would, could):\n<img src=\"img/emma_wc_before_more_stops.png\">\nWe can actually edit those by hand in the html/js code if you want. Look for the list of stopwords. You can change the color, too, if you want. I've added a few more stops to see how it looks now:\n<img src=\"img/emma_wc_after_more_stops.png\">\nYou might want to keep going.\nBy this point, we already know a lot about how to make texts manageable. A nice example of counting words over time in text appeared in the Washington Post, for SOTU speeches: https://www.washingtonpost.com/graphics/politics/2016-sotu/language/\nThere have also been a lot of studies of sentence and speech or book length. I hope that seems easy now. You could tokenize by sentence using nltk, and plot those lengths. And you could just count the words in speeches or books to plot them.\nFinding Most Common Pairs of Words (\"Bigrams\")\nWords occur in common sequences, sometimes. We call word pairs \"bigrams\" (and word triples \"trigrams\"). We refer to N-grams when we mean \"sequences of some length.\"",
"from nltk.collocations import *\nbigram_measures = nltk.collocations.BigramAssocMeasures()\n\nword_fd = nltk.FreqDist(clean) # all the words\nbigram_fd = nltk.FreqDist(nltk.bigrams(clean))\nfinder = BigramCollocationFinder(word_fd, bigram_fd)\nscored = finder.score_ngrams(bigram_measures.likelihood_ratio) # a good option here, there are others:\nscored[0:50]\n\n# Trigrams - using raw counts is much faster.\n\nfinder = TrigramCollocationFinder.from_words(clean,\n window_size = 15)\nfinder.apply_freq_filter(2)\n#ignored_words = nltk.corpus.stopwords.words('english') # if you hadn't removed them...\n# if you want to remove extra words, like character names, you can create the ignored_words list too:\n#finder.apply_word_filter(lambda w: len(w) < 3 or w.lower() in ignored_words)\n\nfinder.nbest(trigram_measures.raw_freq, 20)\n\nfinder.score_ngrams(trigram_measures.raw_freq)[0:20]\n\n## This is very slow! Don't run unless you're serious :)\n\nfinder = TrigramCollocationFinder.from_words(clean,\n window_size = 20)\nfinder.apply_freq_filter(2)\n#ignored_words = nltk.corpus.stopwords.words('english') # if you hadn't removed them...\n# if you want to remove extra words, like character names, you can create the ignored_words list too:\n#finder.apply_word_filter(lambda w: len(w) < 3 or w.lower() in ignored_words)\nfinder.apply_word_filter(lambda w: len(w) < 3) # remove short words\nfinder.nbest(trigram_measures.likelihood_ratio, 10)",
"Some more help is here: http://www.nltk.org/howto/collocations.html\nWhat if we wanted to try non-fiction, to see if there are more interesting results?\nWe need to read and clean the text for another file. Let's try positive movie reviews, located in data/movie_reviews/all_pos.txt.",
"with open(\"../data/movie_reviews/all_pos.txt\", \"U\") as handle:\n text = handle.read()\n\ntokens = nltk.word_tokenize(text) # tokenize them - split into words and punct\nclean_posrevs = clean_tokens(tokens) # clean up stopwords and punct\n\nclean_posrevs[0:10]\n\nword_fd = nltk.FreqDist(clean_posrevs)\nbigram_fd = nltk.FreqDist(nltk.bigrams(clean_posrevs))\nfinder = BigramCollocationFinder(word_fd, bigram_fd)\nscored = finder.score_ngrams(bigram_measures.likelihood_ratio) # other options are \nscored[0:50]",
"To see more details about the NLTK Text object methods, read the code/doc here: http://www.nltk.org/_modules/nltk/text.html\nBigrams in Textkit at the command line:\nCreate a file with all the word pairs, after making everything lowercase and removing punctuation and basic stopwords:",
"!textkit text2words ../data/books/Austen_Emma.txt | textkit filterpunc | textkit tokens2lower > ../outputdata/emma_lower.txt\n\n!textkit filterwords ../outputdata/emma_lower.txt | textkit filterlengths -m 3 | textkit words2bigrams > ../outputdata/bigrams_emma.txt\n\n!head -n10 ../outputdata/bigrams_emma.txt",
"Then count them to get frequencies of the pairs. This may reveal custom stopwords you want to filter out.",
"!textkit tokens2counts ../outputdata/bigrams_emma.txt > ../outputdata/bigrams_emma_counts.txt\n\n!head -n20 ../outputdata/bigrams_emma_counts.txt",
"Suppose you didn't want the names in there? Custom stopwords can be created in a file, one per line, and added as an argument to the filterwords command:",
"!textkit filterwords --custom ../data/emma_customstops.txt ../outputdata/emma_lower.txt > ../outputdata/emma_custom_stops.txt\n\n!textkit filterlengths -m 3 ../outputdata/emma_custom_stops.txt | textkit words2bigrams > ../outputdata/bigrams_emma.txt\n\n!textkit tokens2counts ../outputdata/bigrams_emma.txt > ../outputdata/bigrams_emma_counts.txt\n\n!head -n20 ../outputdata/bigrams_emma_counts.txt",
"You could add more if you wanted.\nParts of Speech - Abbreviated POS\nTo do this, you need to make sure your nltk_data has the the MaxEnt Treebank POS tagger -- you can get it interactively with nltk.download() (on the models tab) - but we have it here already in the nltk_data directory.",
"text = nltk.word_tokenize(\"And now I present your cat with something completely different.\")\ntagged = nltk.pos_tag(text) # there are a few options for taggers, details in NLTK books\ntagged\n\nnltk.untag(tagged)",
"The Penn Treebank part of speech tags are these:\n<img src=\"./img/TreebankPOSTags.png\">\nsource: https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html\nParts of speech are used in anaysis that's \"deeper\" than bags-of-words approaches. For instance, chunking (parsing for structure) may be used for entity identification and semantics. See http://www.nltk.org/book/ch07.html for a little more info, and the 2 Perkins NLTK books.\nNote also that \"real linguists\" parse a sentence into a syntactic structure, which is usually a tree form.\n<img src=\"img/sentence_tree.png\">\n(Source)\nFor instance, try out the Stanford NLP parser visually at http://corenlp.run/.\nIn TextKit at the command line:\nThis requires more Unix-foo, since Textkit doesn't have the full capability yet to do just a count of certain POS. We'll use grep to search for all the NNPs (proper names, or characters) and cut to get the first column (the word).",
"!textkit text2words ../data/books/Austen_Emma.txt | textkit tokens2pos | grep NNP | cut -d, -f1 > ../outputdata/emma_nouns.txt\n\n!textkit tokens2counts ../outputdata/emma_nouns.txt > ../outputdata/emma_NNP_counts.csv\n\n!head -n10 ../outputdata/emma_NNP_counts.csv",
"That's all proper names. Maybe not very interesting.\nLet's look at the verbs now.",
"!textkit text2words ../data/books/Austen_Emma.txt | textkit tokens2pos | grep VB | cut -d, -f1 > ../outputdata/emma_verbs.txt\n\n!textkit tokens2counts ../outputdata/emma_verbs.txt > ../outputdata/emma_VB_counts.csv\n\n!head -n20 ../outputdata/emma_VB_counts.csv",
"Keep in mind that you can filter stopwords before you do this, too, although you should also lowercase things too. Also note that \"grep VB\" will also match other forms of VB, like VBP!\nSuppose you want to make a word cloud of just the verbs... without stopwords, and you want to compare two books by the same author, say Emma and Pride and Prejudice. Let's try it. (I'm using a \\ to wrap the line here, so I don't need to use intermediate files for short commands.)",
"!textkit text2words ../data/books/Austen_Emma.txt | textkit tokens2lower \\\n| textkit filterwords | textkit tokens2pos | grep VB | cut -d, -f1 > ../outputdata/emma_verbs.txt\n\n!textkit tokens2counts ../outputdata/emma_verbs.txt > ../outputdata/emma_VB_counts.csv\n\n!textkit text2words ../data/books/Austen_Pride.txt | textkit tokens2lower \\\n| textkit filterwords | textkit tokens2pos | grep VB | cut -d, -f1 > ../outputdata/pride_verbs.txt\n\n!textkit tokens2counts ../outputdata/pride_verbs.txt > ../outputdata/pride_VB_counts.csv",
"If you load those files into wc_clouds_bars.html (at the bottom!), and use the provided extra stopwords, you'll get this:\n<img src=\"img/two_clouds_bars.png\">\nUnderneath the word clouds are simple bar charts, to allow you to more precisely see the top words (it's cut off at 150). This is one of the issues with word clouds, they lack precision in their display.\nAnother option for showing the difference more clearly is in analytic_wordlist.html.\n<img src=\"img/analytic_wordlists.png\">\nStemming / Lemmatizing\nThe goal is to merge data items that are the same at some \"root\" meaning level, and reduce the number of features in your data set. \"Cats\" and \"Cat\" might want to be treated as the same thing, from a topic or summarization perspective. You can really see this in the word clouds above...so many forms of the same word!",
"# stemming removes affixes. This is the default choice for stemming although other algorithms exist.\nfrom nltk.stem import PorterStemmer\nstemmer = PorterStemmer()\nstemmer.stem('believes')\n\n# lemmatizing transforms to root words using grammar rules. It is slower. Stemming is more common.\nfrom nltk.stem import WordNetLemmatizer\nlemmatizer = WordNetLemmatizer()\nlemmatizer.lemmatize('said', pos='v') # if you don't specify POS, you get zilch.\n\nlemmatizer.lemmatize('cookbooks')\n\nstemmer.stem('wicked')\n\nlemmatizer.lemmatize(\"were\", pos=\"v\") # lemmatizing would allow us to collapse all forms of \"be\" into one token\n\n# an apparently recommended compression recipe in Perkins Python 3 NLTK book? Not sure I agree.\nstemmer.stem(lemmatizer.lemmatize('buses'))",
"Look at some of the clouds above. How would this be useful, do you think?",
"def make_verbs_lemmas(filename, outputfile):\n from collections import Counter\n with open(filename, 'U') as handle:\n emmav = handle.read()\n emmaverbs = emmav.split('\\n')\n lemmaverbs = []\n for verb in emmaverbs:\n lemmaverbs.append(lemmatizer.lemmatize(verb, pos='v'))\n counts = Counter(lemmaverbs)\n with open(outputfile, 'w') as handle:\n for key, value in counts.items():\n if key:\n handle.write(key + \",\" + str(value) + \"\\n\")\n print \"wrote \", outputfile\n\nmake_verbs_lemmas(\"../outputdata/emma_verbs.txt\", \"../outputdata/emma_lemma_verbs.csv\")\n\n!head -n5 ../outputdata/emma_lemma_verbs.csv\n\nmake_verbs_lemmas(\"../outputdata/pride_verbs.txt\", \"../outputdata/pride_lemma_verbs.csv\")\n\n!head -n5 ../outputdata/pride_lemma_verbs.csv",
"Now let's look at those wordclouds. A giant improvement and changes in the counts, actually. Look what happened with the Pride one, where there's a new second place.\n<img src=\"img/lemma_wordlists.png\">"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
pvougiou/Wikidata-Referencing
|
References Script.ipynb
|
apache-2.0
|
[
"import matplotlib.pyplot as plt\nfrom matplotlib import rc\nrc('font',**{'family':'monospace','monospace':['Computer Modern Typewriter']})\nrc('text', usetex=True)\nimport matplotlib\nmatplotlib.rcParams[\"text.latex.preamble\"].append(r'\\usepackage{xfrac}')\n%matplotlib inline\nimport matplotlib.mlab as mlab\nfrom matplotlib.dates import MonthLocator, WeekdayLocator, DateFormatter\nimport numpy as np\nimport pandas\n\npediaDomains = pandas.read_csv('./Data/wp_used_domains.csv')\ndataDomains = pandas.read_csv('./Data/wd_used_domains.csv')\n\n\ntopPediaDomains = pandas.read_csv('./Data/wp_used_tld.csv')\ntopDataDomains = pandas.read_csv('./Data/wd_used_tld.csv')\n\ntopLevelDomainsScatter = pandas.read_csv('./Data/clean_tld.csv')\n\ndomains = pandas.read_csv('./Data/matching_domain_count.csv', header=None)",
"Ten most common matching domains between Wikidata items and Wikipedia articles.",
"totalDomainsOccurrences = 0\nfor num in domains[1]:\n totalDomainsOccurrences += num\n\nlength = 10\nwidth = 0.8\n\nfig = plt.figure()\nplt.barh(range(length), np.asarray(domains[1][0:length] * 100 / totalDomainsOccurrences), width, align='center', color='b')\nplt.grid(which='both')\nplt.xlabel(r'$\\%$ on the Total Number of Matches')\nplt.ylim(-width)\nplt.yticks(range(length), domains[0][0:length])\nplt.tight_layout()\nplt.savefig(\"./Figures/TopDomains.pdf\", format=\"pdf\")\nplt.show()\n# plotly_fig = tls.mpl_to_plotly( fig )\n# print(plotly_fig)\n# plot_url = py.plot_mpl(plotly_fig, filename='mpl-axes-labels')\n\n# plot_url = py.plot_mpl(plotly_fig, filename='mpl-annotation-with-custom-font-size')",
"Top used domains in Wikidata.",
"totalDataOccurrences = 0\nfor num in dataDomains['refHost']:\n totalDataOccurrences += num\n\nlength = 10\nwidth = 0.8\nax = plt.barh(range(length), np.asarray(dataDomains['refHost'][0:length] * 100 / totalDataOccurrences), width, align='center', color='#b60628', edgecolor='white', hatch=\"//\")\nplt.grid(which='both')\nplt.xlabel(r'$\\%$ on the Total Number of References')\nplt.ylim(-width)\nplt.yticks(range(length), dataDomains['index'][0:length])\nplt.tight_layout()\nplt.savefig(\"./Figures/TopDomainsData.pdf\", format=\"pdf\")\nplt.show()",
"Top used domains in Wikipedia.",
"totalPediaOccurrences = 0\nfor num in pediaDomains['refHost']:\n totalPediaOccurrences += num\n\nlength = 10\nwidth = 0.8\nax = plt.barh(np.arange(length), np.asarray(pediaDomains['refHost'][0:length] * 100 / totalPediaOccurrences), width, align='center', color='#06b694', edgecolor='white', hatch='x')\nplt.grid(which='both')\nplt.xlabel(r'$\\%$ on the Total Number of Citations')\nplt.ylim(-width)\nplt.yticks(np.arange(length), pediaDomains['index'][0:length])\nplt.tight_layout()\nplt.savefig(\"./Figures/TopDomainsPedia.pdf\", format=\"pdf\")\n\nplt.show()",
"Matching domains across both Wikipedia and Wikidata.\nWikidata top-level domains -- compare to Wikipedia top-level domains\nWe check whether Wikidata serves its multilingual purpose.",
"totalPediaOccurrences = 0\nfor num in topPediaDomains['citeTld']:\n totalPediaOccurrences += num\n\nlength = 10\nwidth = 0.8\nax = plt.barh(range(length), np.asarray(topPediaDomains['citeTld'][0:length] * 100 / totalPediaOccurrences), width, align='center', color='#06b694')\nplt.grid(which='both')\nplt.xlabel(r'$\\%$ on the Total Number of Matches')\nplt.ylim(-width)\nplt.yticks(range(length), topPediaDomains['index'][0:length])\nplt.tight_layout()\nplt.savefig(\"./Figures/TopLevelDomainsPedia.pdf\", format=\"pdf\")\nplt.show()\n\ntotalDataOccurrences = 0\nfor num in topDataDomains['refTld']:\n totalDataOccurrences += num\n\nlength = 10\nwidth = 0.8\nax1 = plt.barh(range(length), np.asarray(topDataDomains['refTld'][0:length] * 100 / totalDataOccurrences), width, align='center', color='#b60628')\n\nplt.grid(which='both')\nplt.xlabel(r'$\\%$ on the Total Number of Matches')\nplt.ylim(-width)\nplt.yticks(np.arange(length), topDataDomains['index'][0:length])\nplt.tight_layout()\nplt.savefig(\"./Figures/TopLevelDomainsData.pdf\", format=\"pdf\")\nplt.show()\n\ntempPediaDomains = np.zeros(length)\nfor i in range(0, len(topDataDomains['index'][0:length])):\n for j in range(0, len(topPediaDomains['citeTld'])):\n if topDataDomains['index'][i] == topPediaDomains['index'][j]:\n tempPediaDomains[i] = topPediaDomains['citeTld'][j]\n \nlength = 10\nwidth = 0.4\nax1 = plt.barh(np.arange(length), np.asarray(topDataDomains['refTld'][0:length] * 100 / totalDataOccurrences), width, label = 'Wikidata', color='#b60628')\nax2 = plt.barh(np.arange(length) + width, tempPediaDomains * 100 / totalPediaOccurrences, width, label = 'Wikipedia', color='#06b694')\nplt.legend()\nplt.grid(which='both')\nplt.ylim(-width)\nplt.yticks(np.arange(length) + width, topDataDomains['index'][0:length])\nplt.tight_layout()\nplt.savefig(\"./Figures/TopLevelDomainsComparison.pdf\", format=\"pdf\")\nplt.show()\n",
"Scatter Plot of Top-Level Domains",
"# plt.plot(topLevelDomainsScatter['citeTld_pc'], topLevelDomainsScatter['refTld_pc'], \"o\")\n# plt.plot(np.log(topLevelDomainsScatter['citeTld']), np.log(topLevelDomainsScatter['refTld']), \"o\")\n\nplt.plot(topLevelDomainsScatter['citeTld'], topLevelDomainsScatter['refTld'], \"o\", color='b')\n\n\n\n## Find the selected n-max points; the ones that are close to the far upper-right corner in the logarithmic scale.\nselected = topLevelDomainsScatter.sort_values('refTld', ascending=False)\n\nn = 10\nfor index in selected[:n].index:\n # plt.text(selected['citeTld'][index], selected['refTld'][index], selected['index'][index])\n # We put this IF here in order to avoid clutter with overlapping labels in the graph.\n if selected['index'][index] in ['gov', 'pl']:\n adjust_x = 0.5 * (10 ** np.log10(selected['citeTld'][index]))\n adjust_y = 0.1 * (10 ** np.log10(selected['refTld'][index]))\n plt.annotate(selected['index'][index], (selected['citeTld'][index] - adjust_x, selected['refTld'][index] + adjust_y))\n print('Hi')\n else:\n adjust_x = 0.1 * (10 ** np.log10(selected['citeTld'][index]))\n adjust_y = 0.1 * (10 ** np.log10(selected['refTld'][index]))\n plt.annotate(selected['index'][index], (selected['citeTld'][index] + adjust_x, selected['refTld'][index] + adjust_y))\n\n \nplt.grid(which='both', linewidth=0.2)\nplt.xscale('log')\nplt.yscale('log')\nplt.xlabel('Wikipedia Citations')\nplt.ylabel('Wikidata References')\nplt.savefig(\"./Figures/ScatterPlot.pdf\", format=\"pdf\")\nplt.show()",
"Check the type distribution across Wikidata, items in our dataset and items with matching domains.",
"dataTypes = pandas.read_csv('./Data/item_type_all_wd.csv')\ndomainTypes = pandas.read_csv('./Data/item_types_matchdom.csv')\nitemTypes = pandas.read_csv('./Data/item_types.csv')\n\ndataTypes.sort_values(by='shareType', ascending=False, inplace=True)\ndataTypes.reset_index(drop=True, inplace=True)\n\ntempDomainTypes = np.zeros(len(dataTypes))\ntempItemTypes = np.zeros(len(dataTypes))\nfor i in range(0, len(dataTypes)):\n for j in range(0, len(domainTypes)):\n if dataTypes['type'][i] == domainTypes['type'][j]:\n tempDomainTypes[i] = domainTypes['shareType'][j]\n for j in range(0, len(itemTypes)):\n if dataTypes['type'][i] == itemTypes['type'][j]:\n tempItemTypes[i] = itemTypes['shareType'][j]\n\nwidth = 0.28\n\nplt.figure(figsize=(10, 6))\nax1 = plt.barh(np.arange(0, len(dataTypes) - 1) - width, np.asarray(dataTypes['shareType'][1:len(dataTypes)]), width, label = 'Wikidata', color='k', edgecolor='white', hatch='//')\nax3 = plt.barh(np.arange(0, len(dataTypes) - 1), tempItemTypes[1:len(dataTypes)], width, label = 'Items In our Dataset', color='#06b694', edgecolor='white', hatch='|')\nax3 = plt.barh(np.arange(0, len(dataTypes) - 1) + width, tempDomainTypes[1:len(dataTypes)], width, label = 'Items with Matching Domains', color='r', edgecolor='white', hatch='x')\n\nplt.legend()\nplt.grid(which='both')\nplt.xlabel(r'$\\%$ on the Total Number of Matches')\nplt.ylim(-2 * width)\nplt.yticks(np.arange(0, len(dataTypes)) + 0.45 * width, dataTypes['type'][1:len(dataTypes)])\nplt.tight_layout()\nplt.savefig(\"./Figures/MatchedTypes.pdf\", format=\"pdf\")\nplt.show()",
"Single web page matches between Wikidata items and corresponding Wikipedia articles, by Wikipedia language version.",
"referencesLanguagesSingle = pandas.read_csv('./Data/matching_refs_unique_lang_single_ref_count.csv', header=None)\nreferencesLanguages = pandas.read_csv('./Data/matching_refs_unique_lang_count.csv', header=None)\n\n\nreferencesLanguagesSingle.sort_values(by=1, ascending=False, inplace=True)\nreferencesLanguagesSingle.reset_index(drop=True, inplace=True)\n\ntotalReferencesSingleOccurrences = 0\nfor num in referencesLanguagesSingle[1]:\n totalReferencesSingleOccurrences += num\n \ntotalReferencesOccurrences = 0\nfor num in referencesLanguages[1]:\n totalReferencesOccurrences += num\n\nlength = 10\ntempReferencesLanguages = np.zeros(length)\nfor i in range(0, len(referencesLanguagesSingle[1][0:length])):\n for j in range(0, len(referencesLanguages[1])):\n if referencesLanguages[0][j] == referencesLanguagesSingle[0][i]:\n\n tempReferencesLanguages[i] = referencesLanguages[1][j]\n\n\nwidth = 0.4\nax1 = plt.barh(np.arange(length), np.asarray(referencesLanguagesSingle[1][0:length] * 100 / totalReferencesSingleOccurrences), width, label = 'Total Page Matches', color='#b60628', edgecolor='white', hatch='//')\nax2 = plt.barh(np.arange(length) + width, tempReferencesLanguages * 100 / totalReferencesOccurrences, width, label = 'Unique Page Matches', color='#06b694', edgecolor='white', hatch='x')\n\nplt.legend()\nplt.grid(which='both')\nplt.xlabel(r'$\\%$ on the Total Number of Matches')\nplt.ylim(-width)\nplt.yticks(np.arange(length) + width, referencesLanguagesSingle[0][0:length])\nplt.tight_layout()\nplt.savefig(\"./Figures/ReferencesLanguages.pdf\", format=\"pdf\")\nplt.show()",
"Domain matches between Wikidata items and corresponding Wikipedia articles, by Wikipedia language version.",
"domainsLanguagesSingle = pandas.read_csv('./Data/matching_domain_lang_single_ref_count.csv', header=None)\ndomainsLanguages = pandas.read_csv('./Data/matching_domain_unique_lang_count.csv', header=None)\n\n\ndomainsLanguagesSingle.sort_values(by=1, ascending=False, inplace=True)\ndomainsLanguagesSingle.reset_index(drop=True, inplace=True)\n\ntotalDomainsSingleOccurrences = 0\nfor num in domainsLanguagesSingle[1]:\n totalDomainsSingleOccurrences += num\n \ntotalDomainsOccurrences = 0\nfor num in domainsLanguages[1]:\n totalDomainsOccurrences += num\n\nlength = 10\ntempDomainsLanguages = np.zeros(length)\nfor i in range(0, len(domainsLanguagesSingle[1][0:length])):\n for j in range(0, len(domainsLanguages[1])):\n if domainsLanguages[0][j] == domainsLanguagesSingle[0][i]:\n\n tempDomainsLanguages[i] = domainsLanguages[1][j]\n\n\n\nwidth = 0.4\nax1 = plt.barh(np.arange(length), np.asarray(domainsLanguagesSingle[1][0:length] * 100 / totalDomainsSingleOccurrences), width, label = 'Total Domain Matches', color='#b60628', edgecolor='white', hatch='//')\nax2 = plt.barh(np.arange(length) + width, tempDomainsLanguages * 100 / totalDomainsOccurrences, width, label = 'Unique Domain Matches', color='#06b694', edgecolor='white', hatch='x')\n\nplt.legend()\nplt.grid(which='both')\nplt.xlabel(r'$\\%$ on the Total Number of Matches')\nplt.ylim(-width)\nplt.yticks(np.arange(length) + width, domainsLanguagesSingle[0][0:length])\nplt.tight_layout()\nplt.savefig(\"./Figures/DomainsLanguages.pdf\", format=\"pdf\")\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
empet/Plotly-plots
|
Plotly-cube.ipynb
|
gpl-3.0
|
[
"Plotly Cube: a cube with Plotly logo mapped on its faces\nOur aim is to plot a cube having on each face the Plotly logo.\nFor, we choose a png image representing the Plotly logo, read it via matplotlib, and crop it such that to get a numpy array of shape (L,L).\nEach cube face will be defined as a Plotly Surface, colored via a discrete colorscale, according to the values in the third array of the image, representing the blue chanel.\nRead the image:",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimg=plt.imread('Data/Plotly-logo3.png')\nplt.imshow(img)\nprint 'image shape', img.shape",
"Crop the image:",
"my_img=img[10:-10, :, :]\nmy_img.shape\n\nplt.imshow(my_img)",
"Since our image contains only two colors (white and blue) we select from my_img the array corresponding to the blue chanel:",
"pl_img=my_img[:,:, 2] # \nL, C=pl_img.shape\nassert L==C\n\nplotly_blue='rgb(68, 122, 219)'# the blue color in Plotly logo\n\nimport plotly.plotly as py\nfrom plotly.graph_objs import *",
"Define a discrete colorscale from plotly_blue, and the white color:",
"pl_scl=[ [0.0, 'rgb(68, 122, 219)'], #plotly_blue\n [0.5, 'rgb(68, 122, 219)'],\n [0.5, 'rgb(255,255,255)' ], #white\n [1.0, 'rgb(255,255,255)' ]]",
"Prepare data to represent a cube face as a Plotly Surface:",
"x=np.linspace(0, L-1, L)\ny=np.linspace(0, L-1, L)\nX, Y = np.meshgrid(x, y)",
"Define the array \"equations\" of cube faces.\nThe upper face has the equation zM=L-1, the lower one, zm=0, and similarly for x=constant faces and y=constant faces:",
"zm=np.zeros(X.shape)\nzM=(L-1)*np.ones(X.shape)",
"The next function returns a Surface:",
"def make_cube_face(x,y,z, colorscale=pl_scl, is_scl_reversed=False, \n surfacecolor=pl_img, text='Plotly cube'):\n return Surface(x=x, y=y, z=z,\n colorscale=colorscale,\n reversescale=is_scl_reversed,\n showscale=False,\n surfacecolor=surfacecolor,\n text=text,\n hoverinfo='text'\n )",
"In order to define a cube face as a Plotly Surface, it is referenced to a positively oriented cartesian system of coordinates, (X,Y), associated to the induced planar coordinate system of that face (when looking at it from the outside) from the 3d system of coordinates of the cube. \nThe image represented by pl_img is then fitted to this system of coordinates, eventually by flipping its rows or columns. \nThe Surface instances, representing the cube faces, are defined as follows:",
"trace_zm=make_cube_face(x=X, y=Y, z=zm, is_scl_reversed=True, surfacecolor=pl_img)\ntrace_zM=make_cube_face(x=X, y=Y, z=zM, is_scl_reversed=True, surfacecolor=np.flipud(pl_img))\ntrace_xm=make_cube_face(x=zm, y=Y, z=X, surfacecolor=np.flipud(pl_img))\ntrace_xM=make_cube_face(x=zM, y=Y, z=X, surfacecolor=pl_img)\ntrace_ym=make_cube_face(x=Y, y=zm, z=X, surfacecolor=pl_img)\ntrace_yM=make_cube_face(x=Y, y=zM, z=X, surfacecolor=np.fliplr(pl_img))",
"Set the plot layout:",
"noaxis=dict( \n showbackground=False,\n showgrid=False,\n showline=False,\n showticklabels=False,\n ticks='',\n title='',\n zeroline=False)\n\nmin_val=-0.01\nmax_val=L-1+0.01\n\nlayout = Layout(\n title=\"\",\n width=500,\n height=500,\n scene=Scene(xaxis=XAxis(noaxis, range=[min_val, max_val]),\n yaxis=YAxis(noaxis, range=[min_val, max_val]), \n zaxis=ZAxis(noaxis, range=[min_val, max_val]), \n aspectratio=dict(x=1,\n y=1,\n z=1\n ),\n camera=dict(eye=dict(x=-1.25, y=-1.25, z=1.25)),\n ),\n \n paper_bgcolor='rgb(240,240,240)',\n hovermode='closest',\n margin=dict(t=50)\n )\n\nfig=Figure(data=Data([trace_zm, trace_zM, trace_xm, trace_xM, trace_ym, trace_yM]), layout=layout)\npy.sign_in('empet', 'api_key')\npy.iplot(fig, filename='Plotly-cube')\n\n\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"./custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fnakashima/deep-learning
|
student-admissions-keras/StudentAdmissionsKeras.ipynb
|
mit
|
[
"Predicting Student Admissions with Neural Networks in Keras\nIn this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data:\n- GRE Scores (Test)\n- GPA Scores (Grades)\n- Class rank (1-4)\nThe dataset originally came from here: http://www.ats.ucla.edu/\nLoading the data\nTo load the data and format it nicely, we will use two very useful packages called Pandas and Numpy. You can read on the documentation here:\n- https://pandas.pydata.org/pandas-docs/stable/\n- https://docs.scipy.org/",
"# Importing pandas and numpy\nimport pandas as pd\nimport numpy as np\n\n# Reading the csv file into a pandas DataFrame\ndata = pd.read_csv('student_data.csv')\n\n# Printing out the first 10 rows of our data\ndata[:10]",
"Plotting the data\nFirst let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ingore the rank.",
"# Importing matplotlib\nimport matplotlib.pyplot as plt\n\n# Function to help us plot\ndef plot_points(data):\n X = np.array(data[[\"gre\",\"gpa\"]])\n y = np.array(data[\"admit\"])\n admitted = X[np.argwhere(y==1)]\n rejected = X[np.argwhere(y==0)]\n plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')\n plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')\n plt.xlabel('Test (GRE)')\n plt.ylabel('Grades (GPA)')\n \n# Plotting the points\nplot_points(data)\nplt.show()",
"Roughly, it looks like the students with high scores in the grades and test passed, while the ones with low scores didn't, but the data is not as nicely separable as we hoped it would. Maybe it would help to take the rank into account? Let's make 4 plots, each one for each rank.",
"# Separating the ranks\ndata_rank1 = data[data[\"rank\"]==1]\ndata_rank2 = data[data[\"rank\"]==2]\ndata_rank3 = data[data[\"rank\"]==3]\ndata_rank4 = data[data[\"rank\"]==4]\n\n# Plotting the graphs\nplot_points(data_rank1)\nplt.title(\"Rank 1\")\nplt.show()\nplot_points(data_rank2)\nplt.title(\"Rank 2\")\nplt.show()\nplot_points(data_rank3)\nplt.title(\"Rank 3\")\nplt.show()\nplot_points(data_rank4)\nplt.title(\"Rank 4\")\nplt.show()",
"This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it.\nOne-hot encoding the rank\nFor this, we'll use the get_dummies function in numpy.",
"# Make dummy variables for rank\none_hot_data = pd.concat([data, pd.get_dummies(data['rank'], prefix='rank')], axis=1)\n\n# Drop the previous rank column\none_hot_data = one_hot_data.drop('rank', axis=1)\n\n# Print the first 10 rows of our data\none_hot_data[:10]",
"Scaling the data\nThe next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. This means our data is skewed, and that makes it hard for a neural network to handle. Let's fit our two features into a range of 0-1, by dividing the grades by 4.0, and the test score by 800.",
"# Copying our data\nprocessed_data = one_hot_data[:]\n\n# Scaling the columns\nprocessed_data['gre'] = processed_data['gre']/800\nprocessed_data['gpa'] = processed_data['gpa']/4.0\nprocessed_data[:10]",
"Splitting the data into Training and Testing\nIn order to test our algorithm, we'll split the data into a Training and a Testing set. The size of the testing set will be 10% of the total data.",
"sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)\ntrain_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)\n\nprint(\"Number of training samples is\", len(train_data))\nprint(\"Number of testing samples is\", len(test_data))\nprint(train_data[:10])\nprint(test_data[:10])",
"Splitting the data into features and targets (labels)\nNow, as a final step before the training, we'll split the data into features (X) and targets (y).\nAlso, in Keras, we need to one-hot encode the output. We'll do this with the to_categorical function.",
"import keras\n\n# Separate data and one-hot encode the output\n# Note: We're also turning the data into numpy arrays, in order to train the model in Keras\nfeatures = np.array(train_data.drop('admit', axis=1))\ntargets = np.array(keras.utils.to_categorical(train_data['admit'], 2))\nfeatures_test = np.array(test_data.drop('admit', axis=1))\ntargets_test = np.array(keras.utils.to_categorical(test_data['admit'], 2))\n\nprint(features[:10])\nprint(targets[:10])",
"Defining the model architecture\nHere's where we use Keras to build our neural network.",
"# Imports\nimport numpy as np\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Dropout, Activation\nfrom keras.optimizers import SGD\nfrom keras.utils import np_utils\n\n# Building the model\nmodel = Sequential()\nmodel.add(Dense(128, activation='relu', input_shape=(6,)))\nmodel.add(Dropout(.2))\nmodel.add(Dense(64, activation='relu'))\nmodel.add(Dropout(.1))\nmodel.add(Dense(2, activation='softmax'))\n\n# Compiling the model\nmodel.compile(loss = 'categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\nmodel.summary()",
"Training the model",
"# Training the model\nmodel.fit(features, targets, epochs=200, batch_size=100, verbose=0)",
"Scoring the model",
"# Evaluating the model on the training and testing set\nscore = model.evaluate(features, targets)\nprint(\"\\n Training Accuracy:\", score[1])\nscore = model.evaluate(features_test, targets_test)\nprint(\"\\n Testing Accuracy:\", score[1])",
"Challenge: Play with the parameters!\nYou can see that we made several decisions in our training. For instance, the number of layers, the sizes of the layers, the number of epochs, etc.\nIt's your turn to play with parameters! Can you improve the accuracy? The following are other suggestions for these parameters. We'll learn the definitions later in the class:\n- Activation function: relu and sigmoid\n- Loss function: categorical_crossentropy, mean_squared_error\n- Optimizer: rmsprop, adam, ada"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Diyago/Machine-Learning-scripts
|
DEEP LEARNING/Pytorch from scratch/MLP/Part 4 - Fashion-MNIST (Solution).ipynb
|
apache-2.0
|
[
"Classifying Fashion-MNIST\nNow it's your turn to build and train a neural network. You'll be using the Fashion-MNIST dataset, a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial with neural networks where you can easily achieve better than 97% accuracy. Fashion-MNIST is a set of 28x28 greyscale images of clothes. It's more complex than MNIST, so it's a better representation of the actual performance of your network, and a better representation of datasets you'll use in the real world.\n<img src='assets/fashion-mnist-sprite.png' width=500px>\nIn this notebook, you'll build your own neural network. For the most part, you could just copy and paste the code from Part 3, but you wouldn't be learning. It's important for you to write the code yourself and get it to work. Feel free to consult the previous notebooks though as you work through this.\nFirst off, let's load the dataset through torchvision.",
"import torch\nfrom torchvision import datasets, transforms\nimport helper\n\n# Define a transform to normalize the data\ntransform = transforms.Compose([transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n# Download and load the training data\ntrainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)\ntrainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)\n\n# Download and load the test data\ntestset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)\ntestloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)",
"Here we can see one of the images.",
"image, label = next(iter(trainloader))\nhelper.imshow(image[0,:]);",
"Building the network\nHere you should define your network. As with MNIST, each image is 28x28 which is a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer. We suggest you use ReLU activations for the layers and to return the logits or log-softmax from the forward pass. It's up to you how many layers you add and the size of those layers.",
"from torch import nn, optim\nimport torch.nn.functional as F\n\n# TODO: Define your network architecture here\nclass Classifier(nn.Module):\n def __init__(self):\n super().__init__()\n self.fc1 = nn.Linear(784, 256)\n self.fc2 = nn.Linear(256, 128)\n self.fc3 = nn.Linear(128, 64)\n self.fc4 = nn.Linear(64, 10)\n \n def forward(self, x):\n # make sure input tensor is flattened\n x = x.view(x.shape[0], -1)\n \n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = F.relu(self.fc3(x))\n x = F.log_softmax(self.fc4(x), dim=1)\n \n return x",
"Train the network\nNow you should create your network and train it. First you'll want to define the criterion (something like nn.CrossEntropyLoss or nn.NLLLoss) and the optimizer (typically optim.SGD or optim.Adam).\nThen write the training code. Remember the training pass is a fairly straightforward process:\n\nMake a forward pass through the network to get the logits \nUse the logits to calculate the loss\nPerform a backward pass through the network with loss.backward() to calculate the gradients\nTake a step with the optimizer to update the weights\n\nBy adjusting the hyperparameters (hidden units, learning rate, etc), you should be able to get the training loss below 0.4.",
"# TODO: Create the network, define the criterion and optimizer\nmodel = Classifier()\ncriterion = nn.NLLLoss()\noptimizer = optim.Adam(model.parameters(), lr=0.003)\n\n# TODO: Train the network here\nepochs = 5\n\nfor e in range(epochs):\n running_loss = 0\n for images, labels in trainloader:\n log_ps = model(images)\n loss = criterion(log_ps, labels)\n \n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n \n running_loss += loss.item()\n else:\n print(f\"Training loss: {running_loss/len(trainloader)}\")\n\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\n\n# Test out your network!\n\ndataiter = iter(testloader)\nimages, labels = dataiter.next()\nimg = images[1]\n\n# TODO: Calculate the class probabilities (softmax) for img\nps = torch.exp(model(img))\n\n# Plot the image and probabilities\nhelper.view_classify(img, ps, version='Fashion')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mcc-petrinets/formulas
|
spot/tests/python/automata-io.ipynb
|
mit
|
[
"from IPython.display import display\nimport spot\nspot.setup()",
"Converting automata to strings\nUse to_str() to output a string representing the automaton in different formats.",
"a = spot.translate('a U b')\nfor fmt in ('hoa', 'spin', 'dot', 'lbtt'):\n print(a.to_str(fmt))",
"Saving automata to files\nUse save() to save the automaton into a file.",
"a.save('example.aut').save('example.aut', format='lbtt', append=True)\n\n!cat example.aut",
"Reading automata from files\nUse spot.automata() to read multiple automata from a file, and spot.automaton() to read only one.",
"for a in spot.automata('example.aut'):\n display(a)",
"The --ABORT-- feature of the HOA format allows discarding the automaton being read and starting over.",
"%%file example.aut\nHOA: v1\nStates: 2\nStart: 1\nAP: 2 \"a\" \"b\"\nacc-name: Buchi\nAcceptance: 1 Inf(0)\n--BODY--\nState: 0 {0}\n[t] 0\n--ABORT-- /* the previous automaton should be ignored */\nHOA: v1\nStates: 2\nStart: 1\nAP: 2 \"a\" \"b\"\nAcceptance: 1 Inf(0)\n--BODY--\nState: 0 {0}\n[t] 0\nState: 1\n[1] 0\n[0&!1] 1\n--END--\n\nfor a in spot.automata('example.aut'):\n display(a)",
"Reading automata from strings\nInstead of passing a filename, you can also pass the contents of a file. spot.automata() and spot.automaton() look for the absence of newline to decide if this is a filename or a string containing some actual automaton text.",
"for a in spot.automata(\"\"\"\nHOA: v1\nStates: 2\nStart: 1\nname: \"Hello world\"\nAP: 2 \"a\" \"b\"\nAcceptance: 1 Inf(0)\n--BODY--\nState: 0 {0}\n[t] 0\nState: 1\n[1] 0\n[0&!1] 1\n--END--\nHOA: v1\nStates: 1\nStart: 0\nname: \"Hello world 2\"\nAP: 2 \"a\" \"b\"\nAcceptance: 2 Inf(0)&Inf(1)\n--BODY--\nState: 0 {0}\n[t] 0 {1}\n[0&!1] 0\n--END--\n\"\"\"):\n display(a)",
"Reading automata output from processes\nIf an argument of spot.automata ends with |, then it is interpreted as a shell command that outputs one automaton or more.",
"for a in spot.automata('ltl2tgba -s \"a U b\"; ltl2tgba --lbtt \"b\"|', 'ltl2tgba -H \"GFa\" \"a & GFb\"|'):\n display(a)",
"A single automaton can be read using spot.automaton(), with the same convention.",
"spot.automaton('ltl2tgba -s6 \"a U b\"|')\n\n!rm example.aut"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mjabri/holoviews
|
doc/Tutorials/Pandas_Conversion.ipynb
|
bsd-3-clause
|
[
"Pandas is one of the most popular Python libraries providing high-performance, easy-to-use data structures and data analysis tools. Additionally it provides IO interfaces to store and load your data in a variety of formats including csv files, json, pickles and even databases. In other words it makes loading data, munging data and even complex data analysis tasks a breeze.\nCombining the high-performance data analysis tools and IO capabilities that Pandas provides with interactivity and ease of generating complex visualization in HoloViews makes the two libraries a perfect match.\nIn this tutorial we will explore how you can easily convert between Pandas dataframes and HoloViews components. The tutorial assumes you already familiar with some of the core concepts of both libraries, so if you need a refresher on HoloViews have a look at the Introduction and Exploring Data.\nBasic conversions",
"import numpy as np\nimport pandas as pd\nimport holoviews as hv\nfrom IPython.display import HTML\n%reload_ext holoviews.ipython\n\n%output holomap='widgets'",
"The first thing to understand when working with pandas dataframes in HoloViews is how data is indexed. Pandas dataframes are structured as tables with any number of columns and indexes. HoloViews on the other hand deals with Dimensions. HoloViews container objects such as the HoloMap, NdLayout, GridSpace and NdOverlay have kdims, which provide metadata about the data along that dimension and how they can be sliced. Element objects on the other hand have both key dimensions (kdims) and value dimensions (vdims). The difference between kdims and vdims in HoloViews is that the former may be sliced and indexed while the latter merely provide a description about the values along that Dimension.\nLet's start by constructing a Pandas dataframe of a few columns and display it as it's html format (throughtout this notebook we will visualize the DFrames using the IPython HTML display function, to allow this notebook to be tested, you can of course visualize dataframes directly).",
"df = pd.DataFrame({'a':[1,2,3,4], 'b':[4,5,6,7], 'c':[8, 9, 10, 11]})\nHTML(df.to_html())",
"Now that we have a basic dataframe we can wrap it in the HoloViews DFrame wrapper element.",
"example = hv.DFrame(df)",
"The HoloViews DFrame wrapper element can either be displayed directly using some of the specialized plot types that Pandas supplies or be used as conversion interface to HoloViews objects. This Tutorial focuses only on the conversion interface, for the specialized Pandas and Seaborn plot types have a look at the Pandas and Seaborn tutorial.\nThe data on the DFrame Element is accessible via the .data attribute like on all other Elements.",
"list(example.data.columns)",
"Having wrapped the dataframe in the DFrame wrapper we can now begin interacting with it. The simplest thing we can do is to convert it to a HoloViews Table object. The conversion interface has a simple signature, after selecting the Element type you want to convert to, in this case a Table, you pass the desired kdims and vdims to the corresponding conversion method, either as list of column name strings or as a single string.",
"example_table = example.table(['a', 'b'], 'c')\nexample_table",
"As you can see, we now have a Table, which has a and b as its kdims and c as its value_dimension. The index of the original dataframe was dropped however. So if your data has some complex indices set ensure to convert them to simple columns using the .reset_index method on the pandas dataframe:",
"HTML(df.reset_index().to_html())",
"Now we can employ the HoloViews slicing semantics to select the desired subset of the data and use the usual compositing + operator to lay the data out side by side:",
"example_table[:, 4:8:2] + example_table[2:5:2, :]",
"Dropping and reducing columns\nThis was the simple case, we converted all the dataframe columns to a Table object. This time let's only select a subset of the Dimensions.",
"example.scatter('a', 'b')",
"As you can see HoloViews simply ignored the remaining Dimension. By default the conversion functions ignore any numeric unselected Dimensions. All non-numeric dimensions are converted to dimensions on the returned HoloMap however. Both of these behaviors can be overridden by supplying explicit map dimensions and/or a reduce_fn.\nYou can perform this conversion with any type and lay your results out side-by-side making it easy to look at the same dataset in any number of ways.",
"%%opts Curve [xticks=3 yticks=3]\nexample.curve('a', 'b') + example_table",
"Finally, we can convert all homogenous HoloViews types (i.e. anything except Layout and Overlay) back to a pandas dataframe using the dframe method.",
"HTML(example_table.dframe().to_html())",
"Working with higher-dimensional data\nThe last section only scratched the surface, where HoloViews really comes into its own is for very high-dimensional datasets. Let's load a dataset of some macro-economic indicators for a OECD countries from 1964-1990 from the holoviews website.",
"macro_df = pd.read_csv('http://ioam.github.com/holoviews/Tutorials/macro.csv', '\\t')",
"Now we can display the first ten rows:",
"HTML(macro_df[0:10].to_html())",
"As you can see some of the columns are poorly named and carry no information about the units of each quantity. The DFrame element allows defining either an explicit list of kdims which must match the number of columns or a dimensions dictionary, where the keys should match the columns and the values must be either string or HoloViews Dimension object.",
"dimensions = {'unem': hv.Dimension('Unemployment', unit='%'),\n 'capmob': 'Capital Mobility',\n 'gdp': hv.Dimension('GDP Growth', unit='%')}\nmacro = hv.DFrame(macro_df, dimensions=dimensions)",
"Let's list the conversion methods supported by the standard DFrame element, if you have the Seaborn extension the DFrame object that is imported by default will support additional conversions:",
"from holoviews.interface.pandas import DFrame as PDFrame\nsorted([k for k in PDFrame.__dict__ if not k.startswith('_') and k != 'name'])",
"All these methods have a common signature, first the kdims, vdims, HoloMap dimensions and a reduce_fn. We'll see what that means in practice for some of the complex Element types in a minute.\nConversion to complex HoloViews components\nWe'll begin by setting a few default plot options, which will apply to all the objects. You can do this by setting the appropriate options directly Store.options with the desired {type}.{group}.{label} path or using the %opts line magic, see the Options Tutorial for more details.\nHere we define some default options on Store.options directly using the %output magic only to set the dpi of the following figures.",
"%output dpi=100\noptions = hv.Store.options()\nopts = hv.Options('plot', aspect=2, fig_size=250, show_grid=True, legend_position='right')\noptions.NdOverlay = opts\noptions.Overlay = opts",
"Overlaying\nAbove we looked at converting a DFrame to simple Element types, however HoloViews also provides powerful container objects to explore high-dimensional data, currently these are HoloMap, NdOverlay, NdLayout and GridSpace. HoloMaps provide the basic conversion type from which you can conveniently convert to the other container types using the .overlay, .layout and .grid methods. This way we can easily create an overlay of GDP Growth curves by year for each country. Here 'year' is a key dimension and GDP Growth a value dimension. As we discussed before all non-numeric Dimensions become HoloMap kdims, in this case the 'country' is the only non-numeric Dimension, which we then overlay calling the .overlay method.",
"%%opts Curve (color=Palette('Set3'))\ngdp_curves = macro.curve('year', 'GDP Growth')\ngdp_curves.overlay('country')",
"Collapsing\nNow that we've extracted the gdp_curves we can apply some operations to them. The collapse method applies some function across the data along the supplied dimensions. This let's us quickly compute a the mean GDP Growth by year for example, but it also allows us to map a function with parameters to the data and visualize the resulting samples. A simple example is computing a curve for each percentile and embedding it in an NdOverlay.\nAdditionally we can apply a Palette to visualize the range of percentiles.",
"%%opts NdOverlay [show_legend=False] Curve (color=Palette('Blues'))\nhv.NdOverlay({i: gdp_curves.collapse('country', np.percentile, q=i) for i in range(0,101)})",
"Multiple key dimensions\nMany HoloViews Element types support multiple kdims, including HeatMaps, Points, Scatter, Scatter3D, and Bars. Bars in particular allows you to lay out your data in groups, categories and stacks. By supplying the index of that dimension as a plotting option you can choose to lay out your data as groups of bars, categories in each group and stacks. Here we choose to lay out the trade surplus of each country with groups for each year, no categories, and stacked by country. Finally we choose to color the Bars for each item in the stack.",
"%opts Bars [bgcolor='w' aspect=3 figure_size=450 show_frame=False]\n\n%%opts Bars [category_index=2 stack_index=0 group_index=1 legend_position='top' legend_cols=7 color_by=['stack']] (color=Palette('Dark2'))\nmacro.bars(['country', 'year'], 'trade')",
"Using the .select method we can pull out the data for just a few countries and specific years. We can also make more advanced use the Palettes.\nPalettes can customized by selecting only a subrange of the underlying cmap to draw the colors from. The Palette draws samples from the colormap using the supplied sample_fn, which by default just draws linear samples but may be overriden with any function that draws samples in the supplied ranges. By slicing the Set1 colormap we draw colors only from the upper half of the palette and then reverse it.",
"%%opts Bars [padding=0.02 color_by=['group']] (alpha=0.6, color=Palette('Set1', reverse=True)[0.:.2])\ncountries = {'Belgium', 'Netherlands', 'Sweden', 'Norway'}\nmacro.bars(['country', 'year'], 'Unemployment').select(year=(1978, 1985), country=countries)",
"Combining heterogeneous data\nMany HoloViews Elements support multiple key and value dimensions. A HeatMap may be indexed by two kdims, so we can visualize each of the economic indicators by year and country in a Layout. Layouts are useful for heterogeneous data you want to lay out next to each other. Because all HoloViews objects support the + operator, we can use np.sum to compose them into a Layout.\nBefore we display the Layout let's apply some styling, we'll suppress the value labels applied to a HeatMap by default and substitute it for a colorbar. Additionally we up the number of xticks that are drawn and rotate them by 90 degrees to avoid overlapping. Flipping the y-axis ensures that the countries appear in alphabetical order. Finally we reduce some of the margins of the Layout and increase the size.",
"%opts HeatMap [show_values=False xticks=40 xrotation=90 invert_yaxis=True]\n%opts Layout [figure_size=150] \n\nhv.Layout([macro.heatmap(['year', 'country'], value)\n for value in macro.data.columns[2:]]).cols(2)",
"Another way of combining heterogeneous data dimensions is to map them to a multi-dimensional plot type. Scatter Elements for example support multiple vdims, which may be mapped onto the color and size of the drawn points in addition to the y-axis position. \nAs for the Curves above we supply 'year' as the sole key_dimension and rely on the DFrame to automatically convert the country to a map dimension, which we'll overlay. However this time we select both GDP Growth and Unemployment but to be plotted as points. To get a sensible chart, we adjust the scaling_factor for the points to get a reasonable distribution in sizes and apply a categorical Palette so we can distinguish each country.",
"%%opts Scatter [scaling_factor=1.4] (color=Palette('Set3') edgecolors='k')\ngdp_unem_scatter = macro.scatter('year', ['GDP Growth', 'Unemployment'])\ngdp_unem_scatter.overlay('country')",
"Since the DFrame treats all columns in the dataframe as kdims we can map any dimension against any other, allowing us to explore relationships between economic indicators, for example the relationship between GDP Growth and Unemployment, again colored by country.",
"%%opts Scatter [size_index=1 scaling_factor=1.3] (color=Palette('Dark2'))\nmacro.scatter('GDP Growth', 'Unemployment').overlay('country')",
"Combining heterogeneous Elements\nSince all HoloViews Elements are composable we can generate complex figures just by applying the * operator. We'll simply reuse the GDP curves we generated earlier, combine them with the scatter points, which indicate the unemployment rate by size and annotate the data with some descriptions of what happened economically in these years.",
"%%opts Curve (color='k') Scatter [color_index=2 size_index=2 scaling_factor=1.4] (cmap='Blues' edgecolors='k')\nmacro_overlay = gdp_curves * gdp_unem_scatter\nannotations = hv.Arrow(1973, 8, 'Oil Crisis', 'v') * hv.Arrow(1975, 6, 'Stagflation', 'v') *\\\nhv.Arrow(1979, 8, 'Energy Crisis', 'v') * hv.Arrow(1981.9, 5, 'Early Eighties\\n Recession', 'v')\nmacro_overlay * annotations",
"Since we didn't map the country to some other container type, we get a widget allowing us to view the plot separately for each country, reducing the forest of curves we encountered before to manageable chunks. \nWhile looking at the plots individually like this allows us to study trends for each country, we may want to lay outa subset of the countries side by side. We can easily achieve this by selecting the countries we want to view and and then applying the .layout method. We'll also want to restore the aspect so the plots compose nicely.",
"%opts Overlay [aspect=1]\n\n%%opts NdLayout [figure_size=100] Scatter [color_index=2] (cmap='Reds')\ncountries = {'United States', 'Canada', 'United Kingdom'}\n(gdp_curves * gdp_unem_scatter).select(country=countries).layout('country')",
"Finally let's combine some plots for each country into a Layout, giving us a quick overview of each economic indicator for each country:",
"%%opts Layout [fig_size=100] Scatter [color_index=2] (cmap='Reds')\n(macro_overlay.relabel('GDP Growth', depth=1) +\\\nmacro.curve('year', 'Unemployment', group='Unemployment',) +\\\nmacro.curve('year', 'trade', ['country'], group='Trade') +\\\nmacro.points(['GDP Growth', 'Unemployment'], [])).cols(2)",
"That's it for this Tutorial, if you want to see some more examples of using HoloViews with Pandas look at the Pandas and Seaborn Tutorial."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ELind77/gensim
|
docs/notebooks/Tensorboard_visualizations.ipynb
|
lgpl-2.1
|
[
"TensorBoard Visualizations\nIn this tutorial, we will learn how to visualize different types of NLP based Embeddings via TensorBoard. TensorBoard is a data visualization framework for visualizing and inspecting the TensorFlow runs and graphs. We will use a built-in Tensorboard visualizer called Embedding Projector in this tutorial. It lets you interactively visualize and analyze high-dimensional data like embeddings.\nRead Data\nFor this tutorial, a transformed MovieLens dataset<sup>[1]</sup> is used. You can download the final prepared csv from here.",
"import gensim\nimport pandas as pd\nimport smart_open\nimport random\n\n# read data\ndataframe = pd.read_csv('movie_plots.csv')\ndataframe",
"1. Visualizing Doc2Vec\nIn this part, we will learn about visualizing Doc2Vec Embeddings aka Paragraph Vectors via TensorBoard. The input documents for training will be the synopsis of movies, on which Doc2Vec model is trained. \n<img src=\"Tensorboard.png\">\nThe visualizations will be a scatterplot as seen in the above image, where each datapoint is labelled by the movie title and colored by it's corresponding genre. You can also visit this Projector link which is configured with my embeddings for the above mentioned dataset. \nPreprocess Text\nBelow, we define a function to read the training documents, pre-process each document using a simple gensim pre-processing tool (i.e., tokenize text into individual words, remove punctuation, set to lowercase, etc), and return a list of words. Also, to train the model, we'll need to associate a tag/number with each document of the training corpus. In our case, the tag is simply the zero-based line number.",
"def read_corpus(documents):\n for i, plot in enumerate(documents):\n yield gensim.models.doc2vec.TaggedDocument(gensim.utils.simple_preprocess(plot, max_len=30), [i])\n\ntrain_corpus = list(read_corpus(dataframe.Plots))",
"Let's take a look at the training corpus.",
"train_corpus[:2]",
"Training the Doc2Vec Model\nWe'll instantiate a Doc2Vec model with a vector size with 50 words and iterating over the training corpus 55 times. We set the minimum word count to 2 in order to give higher frequency words more weighting. Model accuracy can be improved by increasing the number of iterations but this generally increases the training time. Small datasets with short documents, like this one, can benefit from more training passes.",
"model = gensim.models.doc2vec.Doc2Vec(size=50, min_count=2, iter=55)\nmodel.build_vocab(train_corpus)\nmodel.train(train_corpus, total_examples=model.corpus_count, epochs=model.iter)",
"Now, we'll save the document embedding vectors per doctag.",
"model.save_word2vec_format('doc_tensor.w2v', doctag_vec=True, word_vec=False) ",
"Prepare the Input files for Tensorboard\nTensorboard takes two Input files. One containing the embedding vectors and the other containing relevant metadata. We'll use a gensim script to directly convert the embedding file saved in word2vec format above to the tsv format required in Tensorboard.",
"%run ../../gensim/scripts/word2vec2tensor.py -i doc_tensor.w2v -o movie_plot",
"The script above generates two files, movie_plot_tensor.tsv which contain the embedding vectors and movie_plot_metadata.tsv containing doctags. But, these doctags are simply the unique index values and hence are not really useful to interpret what the document was while visualizing. So, we will overwrite movie_plot_metadata.tsv to have a custom metadata file with two columns. The first column will be for the movie titles and the second for their corresponding genres.",
"with open('movie_plot_metadata.tsv','w') as w:\n w.write('Titles\\tGenres\\n')\n for i,j in zip(dataframe.Titles, dataframe.Genres):\n w.write(\"%s\\t%s\\n\" % (i,j))",
"Now you can go to http://projector.tensorflow.org/ and upload the two files by clicking on Load data in the left panel.\nFor demo purposes I have uploaded the Doc2Vec embeddings generated from the model trained above here. You can access the Embedding projector configured with these uploaded embeddings at this link.\nUsing Tensorboard\nFor the visualization purpose, the multi-dimensional embeddings that we get from the Doc2Vec model above, needs to be downsized to 2 or 3 dimensions. So that we basically end up with a new 2d or 3d embedding which tries to preserve information from the original multi-dimensional embedding. As these vectors are reduced to a much smaller dimension, the exact cosine/euclidean distances between them are not preserved, but rather relative, and hence as you’ll see below the nearest similarity results may change.\nTensorBoard has two popular dimensionality reduction methods for visualizing the embeddings and also provides a custom method based on text searches:\n\n\nPrincipal Component Analysis: PCA aims at exploring the global structure in data, and could end up losing the local similarities between neighbours. It maximizes the total variance in the lower dimensional subspace and hence, often preserves the larger pairwise distances better than the smaller ones. See an intuition behind it in this nicely explained answer on stackexchange.\n\n\nT-SNE: The idea of T-SNE is to place the local neighbours close to each other, and almost completely ignoring the global structure. It is useful for exploring local neighborhoods and finding local clusters. But the global trends are not represented accurately and the separation between different groups is often not preserved (see the t-sne plots of our data below which testify the same).\n\n\nCustom Projections: This is a custom bethod based on the text searches you define for different directions. It could be useful for finding meaningful directions in the vector space, for example, female to male, currency to country etc.\n\n\nYou can refer to this doc for instructions on how to use and navigate through different panels available in TensorBoard.\nVisualize using PCA\nThe Embedding Projector computes the top 10 principal components. The menu at the left panel lets you project those components onto any combination of two or three. \n<img src=\"pca.png\">\nThe above plot was made using the first two principal components with total variance covered being 36.5%.\nVisualize using T-SNE\nData is visualized by animating through every iteration of the t-sne algorithm. The t-sne menu at the left lets you adjust the value of it's two hyperparameters. The first one is Perplexity, which is basically a measure of information. It may be viewed as a knob that sets the number of effective nearest neighbors<sup>[2]</sup>. The second one is learning rate that defines how quickly an algorithm learns on encountering new examples/data points.\n<img src=\"tsne.png\">\nThe above plot was generated with perplexity 8, learning rate 10 and iteration 500. Though the results could vary on successive runs, and you may not get the exact plot as above with same hyperparameter settings. But some small clusters will start forming as above, with different orientations.\n2. Visualizing LDA\nIn this part, we will see how to visualize LDA in Tensorboard. We will be using the Document-topic distribution as the embedding vector of a document. 
Basically, we treat topics as the dimensions and the value in each dimension represents the topic proportion of that topic in the document.\nPreprocess Text\nWe use the movie Plots as our documents in corpus and remove rare words and common words based on their document frequency. Below we remove words that appear in less than 2 documents or in more than 30% of the documents.",
"import pandas as pd\nimport re\nfrom gensim.parsing.preprocessing import remove_stopwords, strip_punctuation\nfrom gensim.models import ldamodel\nfrom gensim.corpora.dictionary import Dictionary\n\n# read data\ndataframe = pd.read_csv('movie_plots.csv')\n\n# remove stopwords and punctuations\ndef preprocess(row):\n return strip_punctuation(remove_stopwords(row.lower()))\n \ndataframe['Plots'] = dataframe['Plots'].apply(preprocess)\n\n# Convert data to required input format by LDA\ntexts = []\nfor line in dataframe.Plots:\n lowered = line.lower()\n words = re.findall(r'\\w+', lowered, flags = re.UNICODE | re.LOCALE)\n texts.append(words)\n# Create a dictionary representation of the documents.\ndictionary = Dictionary(texts)\n\n# Filter out words that occur less than 2 documents, or more than 30% of the documents.\ndictionary.filter_extremes(no_below=2, no_above=0.3)\n# Bag-of-words representation of the documents.\ncorpus = [dictionary.doc2bow(text) for text in texts]",
"Train LDA Model",
"# Set training parameters.\nnum_topics = 10\nchunksize = 2000\npasses = 50\niterations = 200\neval_every = None\n\n# Train model\nmodel = ldamodel.LdaModel(corpus=corpus, id2word=dictionary, chunksize=chunksize, alpha='auto', eta='auto', iterations=iterations, num_topics=num_topics, passes=passes, eval_every=eval_every)",
"You can refer to this notebook also before training the LDA model. It contains tips and suggestions for pre-processing the text data, and how to train the LDA model to get good results.\nDoc-Topic distribution\nNow we will use get_document_topics which infers the topic distribution of a document. It basically returns a list of (topic_id, topic_probability) for each document in the input corpus.",
"# Get document topics\nall_topics = model.get_document_topics(corpus, minimum_probability=0)\nall_topics[0]",
"The above output shows the topic distribution of first document in the corpus as a list of (topic_id, topic_probability).\nNow, using the topic distribution of a document as it's vector embedding, we will plot all the documents in our corpus using Tensorboard.\nPrepare the Input files for Tensorboard\nTensorboard takes two input files, one containing the embedding vectors and the other containing relevant metadata. As described above we will use the topic distribution of documents as their embedding vector. Metadata file will consist of Movie titles with their genres.",
"# create file for tensors\nwith open('doc_lda_tensor.tsv','w') as w:\n for doc_topics in all_topics:\n for topics in doc_topics:\n w.write(str(topics[1])+ \"\\t\")\n w.write(\"\\n\")\n \n# create file for metadata\nwith open('doc_lda_metadata.tsv','w') as w:\n w.write('Titles\\tGenres\\n')\n for j, k in zip(dataframe.Titles, dataframe.Genres):\n w.write(\"%s\\t%s\\n\" % (j, k))",
"Now you can go to http://projector.tensorflow.org/ and upload these two files by clicking on Load data in the left panel.\nFor demo purposes I have uploaded the LDA doc-topic embeddings generated from the model trained above here. You can also access the Embedding projector configured with these uploaded embeddings at this link.\nVisualize using PCA\nThe Embedding Projector computes the top 10 principal components. The menu at the left panel lets you project those components onto any combination of two or three.\n<img src=\"doc_lda_pca.png\">\nFrom PCA, we get a simplex (tetrahedron in this case) where each data point represent a document. These data points are colored according to their Genres which were given in the Movie dataset. \nAs we can see there are a lot of points which cluster at the corners of the simplex. This is primarily due to the sparsity of vectors we are using. The documents at the corners primarily belongs to a single topic (hence, large weight in a single dimension and other dimensions have approximately zero weight.) You can modify the metadata file as explained below to see the dimension weights along with the Movie title.\nNow, we will append the topics with highest probability (topic_id, topic_probability) to the document's title, in order to explore what topics do the cluster corners or edges dominantly belong to. For this, we just need to overwrite the metadata file as below:",
"tensors = []\nfor doc_topics in all_topics:\n doc_tensor = []\n for topic in doc_topics:\n if round(topic[1], 3) > 0:\n doc_tensor.append((topic[0], float(round(topic[1], 3))))\n # sort topics according to highest probabilities\n doc_tensor = sorted(doc_tensor, key=lambda x: x[1], reverse=True)\n # store vectors to add in metadata file\n tensors.append(doc_tensor[:5])\n\n# overwrite metadata file\ni=0\nwith open('doc_lda_metadata.tsv','w') as w:\n w.write('Titles\\tGenres\\n')\n for j,k in zip(dataframe.Titles, dataframe.Genres):\n w.write(\"%s\\t%s\\n\" % (''.join((str(j), str(tensors[i]))),k))\n i+=1",
"Next, we upload the previous tensor file \"doc_lda_tensor.tsv\" and this new metadata file to http://projector.tensorflow.org/ .\n<img src=\"topic_with_coordinate.png\">\nVoila! Now we can click on any point to see it's top topics with their probabilty in that document, along with the title. As we can see in the above example, \"Beverly hill cops\" primarily belongs to the 0th and 1st topic as they have the highest probability amongst all.\nVisualize using T-SNE\nIn T-SNE, the data is visualized by animating through every iteration of the t-sne algorithm. The t-sne menu at the left lets you adjust the value of it's two hyperparameters. The first one is Perplexity, which is basically a measure of information. It may be viewed as a knob that sets the number of effective nearest neighbors[2]. The second one is learning rate that defines how quickly an algorithm learns on encountering new examples/data points.\nNow, as the topic distribution of a document is used as it’s embedding vector, t-sne ends up forming clusters of documents belonging to same topics. In order to understand and interpret about the theme of those topics, we can use show_topic() to explore the terms that the topics consisted of.\n<img src=\"doc_lda_tsne.png\">\nThe above plot was generated with perplexity 11, learning rate 10 and iteration 1100. Though the results could vary on successive runs, and you may not get the exact plot as above even with same hyperparameter settings. But some small clusters will start forming as above, with different orientations.\nI named some clusters above based on the genre of it's movies and also using the show_topic() to see relevant terms of the topic which was most prevelant in a cluster. Most of the clusters had doocumets belonging dominantly to a single topic. For ex. The cluster with movies belonging primarily to topic 0 could be named Fantasy/Romance based on terms displayed below for topic 0. You can play with the visualization yourself on this link and try to conclude a label for clusters based on movies it has and their dominant topic. You can see the top 5 topics of every point by hovering over it.\nNow, we can notice that their are more than 10 clusters in the above image, whereas we trained our model for num_topics=10. It's because their are few clusters, which has documents belonging to more than one topic with an approximately close topic probability values.",
"model.show_topic(topicid=0, topn=15)",
"You can even use pyLDAvis to deduce topics more efficiently. It provides a deeper inspection of the terms highly associated with each individual topic. For this, it uses a measure called relevance of a term to a topic that allows users to flexibly rank terms best suited for a meaningful topic interpretation. It's weight parameter called λ can be adjusted to display useful terms which could help in differentiating topics efficiently.",
"import pyLDAvis.gensim\n\nviz = pyLDAvis.gensim.prepare(model, corpus, dictionary)\npyLDAvis.display(viz)",
"The weight parameter λ can be viewed as a knob to adjust the ranks of the terms based on whether they are simply ranked according to their probability in the topic (λ=1) or are normalized by their marginal probability across the corpus (λ=0). Setting λ=1 could result in similar ranking of terms for large no. of topics hence making it difficult to differentiate between them, and setting λ=0 ranks terms solely based on their exclusiveness to current topic which could result in such rare terms that occur in only a single topic and hence the topics may remain difficult to interpret. (Sievert and Shirley 2014) suggested the optimal value of λ=0.6 based on a user study.\nConclusion\nWe learned about visualizing the Document Embeddings and LDA Doc-topic distributions through Tensorboard's Embedding Projector. It is a useful tool for visualizing different types of data for example, word embeddings, document embeddings or the gene expressions and biological sequences. It just needs an input of 2D tensors and then you can explore your data using provided algorithms. You can also perform nearest neighbours search to find most similar data points to your query point.\nReferences\n\nhttps://grouplens.org/datasets/movielens/\nhttps://lvdmaaten.github.io/tsne/"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
sysid/nbs
|
LP/Introduction-to-linear-programming/LaTeX_formatted_ipynb_files/Introduction to Linear Programming with Python - Part 3.ipynb
|
mit
|
[
"Introduction to Linear Programming with Python - Part 3\nReal world examples - Resourcing Problem\nWe'll now look at 2 more real world examples. \nThe first is a resourcing problem and the second is a blending problem.\nResourcing Problem\nWe're consulting for a boutique car manufacturer, producing luxury cars.\nThey run on one month (30 days) cycles, we have one cycle to show we can provide value.\nThere is one robot, 2 engineers and one detailer in the factory. The detailer has some holiday off, so only has 21 days available.\nThe 2 cars need different time with each resource:\nRobot time: Car A - 3 days; Car B - 4 days.\nEngineer time: Car A - 5 days; Car B - 6 days.\nDetailer time: Car A - 1.5 days; Car B - 3 days.\nCar A provides €30,000 profit, whilst Car B offers €45,000 profit.\nAt the moment, they produce 4 of each cars per month, for €300,000 profit. Not bad at all, but we think we can do better for them.\nThis can be modelled as follows:\nMaximise\n$\\text{Profit} = 30,000A + 45,000B$\nSubject to:\n$\nA \\geq 0 \\\nB \\geq 0 \\\n3A + 4B \\leq 30 \\\n5A + 6B \\leq 60 \\\n1.5A + 3B \\leq 21 \\\n$",
"import pulp\n\n# Instantiate our problem class\nmodel = pulp.LpProblem(\"Profit maximising problem\", pulp.LpMaximize)",
"Unlike our previous problem, the decision variables in this case won't be continuous (We can't sell half a car!), so the category is integer.",
"A = pulp.LpVariable('A', lowBound=0, cat='Integer')\nB = pulp.LpVariable('B', lowBound=0, cat='Integer')\n\n# Objective function\nmodel += 30000 * A + 45000 * B, \"Profit\"\n\n# Constraints\nmodel += 3 * A + 4 * B <= 30\nmodel += 5 * A + 6 * B <= 60\nmodel += 1.5 * A + 3 * B <= 21\n\n# Solve our problem\nmodel.solve()\npulp.LpStatus[model.status]\n\n# Print our decision variable values\nprint \"Production of Car A = {}\".format(A.varValue)\nprint \"Production of Car B = {}\".format(B.varValue)\n\n# Print our objective function value\nprint pulp.value(model.objective)",
"So that's €330,000 monthly profit, compared to their original monthly profit of €300,000\nBy producing 2 cars of Car A and 4 cars of Car B, we bolster the profits at the factory by €30,000 per month.\nWe take our consultancy fee and leave the company with €360,000 extra profit for the factory every year.\nIn the next part, we'll be making some sausages!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
uliang/First-steps-with-the-Python-language
|
Day 1 - Unit 2.1 Series.ipynb
|
mit
|
[
"2.1 Series\nContent:\n - 2.1.0 Series Structure\n - 2.1.1 Basic Operations\n - 2.1.2 Indexing and Selecting Data\n - 2.1.3 Boolean Masking\n - 2.1.4 Missing Values\n - 2.1.5 Exercises\nPython has abundant libraries for various purposes like scientific computing, data analysis, data visualisation and machine learning. In this course, we'll be using numpy for numerical computation, pandas for data preparation, matplotlib for visualisation and scipy.stats for statistical analysis.",
"#import (library) as (give the library a nickname/alias)\nimport numpy as np\nimport pandas as pd",
"2.1.0 Series Structure\nSeries is a one-dimensional array capable of holding any data type, like int, str, float,....\nIt is similar to list in Python, but each element is lablled with an index.",
"# Create a series from a list\ns = pd.Series([2,3,5,7,11])\n# Display the series\ns\n\n# Get the series values \ns.values\n\n# Get the series index\ns.index\n\n# Name the index\ns0 = pd.Series([2,3,5,7,11], index=['a','b','c','d','e'])\ns0\n\n# Size/Length of series\ns.size # same answer as len(s)\n\n# Sort values\ns.sort_values(ascending=False)",
"2.1.1 Basic Operations\nSeries allows us to use \"vectorised operations\". This means that arithmetic operations can be performed without the use of a for loop to iterate through elements of the Series.",
"s + s\n\ns * 2\n\ns **2\n\nnp.exp(s)",
"Everyday statistical functions are available as method calls to the Series object.",
"# Sum\ns.sum() # same answer as sum(s)\n\n# Average\ns.mean() # same asnwer as np.mean(s)\n\n# Std Dev\ns.std()\n\n# Median / Q1 / Q3\ns.median() # same as s.quantile(0.5)\n\n# Max/Min value\ns.max()\n\n# Which index has the max/min value?\ns.idxmax()",
"2.1.2 Indexing and Selecting Data using .iloc and .loc\nHow do we access specific elements in the series? There are two ways, and integer positionals method using the iloc method and a loc method which selects elements by their indexing label. \n.iloc is integer location based selection (from 0, 1, 2 to length-1 of the axis)",
"# Select the first value\ns.iloc[0]\n\n# Select the last value\ns.iloc[-1]\n\n# Select a contiguous subset of values\ns.iloc[1:4]\n\n# Select the first 3 values\ns.iloc[:3]",
".loc is labelled based selection.",
"# Select value based on index label\ns0.loc['b'] # return same asnwer as s0.iloc[1] or s0['b']\n\n# Select a list of values based on labels\ns0.loc[['a','c','e']] # return same answer as s0.iloc[[0,2,4]] or s0[['a','c','e']]",
"2.1.3 Conditional Selection\nHow do we select elements of a series based on some criteria. For example, select all elements in s which are less than 10.",
"# Which values are < 10?\ns < 10\n\n# Select values based on boolean mask\ns[s<10]\n\n# How many terms are <10? \nlen(s[s<10])\n\n# Select values which are 2 < x < 10.\ns[(s>2) & (s<10)]\n\n# Note that the following is not valid\n\ns[2<s<10]",
"2.1.4 Missing Values\nNaN (Not a Number) is the standard missing marker used in Pandas.",
"# Create a new series\ns1 = pd.Series([13,np.nan,19])\n# Append new series\ns2 = s.append(s1)\n# observe the indices\ns2\n\n# Append new series, ignoring the index\ns2 = s.append(s1,ignore_index=True)\ns2\n\n# Check for missing value\ns2.isnull()\n\n# How many missing values are there?\ns2.isnull().sum()\n\n# Drop missing values\ns2.dropna()\n\n# Replace missing values\ns2.fillna(17)\n\n# Replace a value by another value\n# Eg: Replace 2 by 'even prime'\ns2.replace(2,'even prime')",
"2.1.5 Exercises\nQ1: Create a new series t where the values are marks and the indices are names.\n\nnames ['Alvin','Ben','Carl','Danny','Ella','Fang','Gil','Han','Irene','Jane','Ken','Lim','Mark','Ng','Ong','Peng','Quek','Roy','Sam']\nmarks [48,62,66,26,72,74,72,55,70,80,62,66,'TX',93,65,30,75,58,51]",
"# Answer should return a series object\nnames = ['Alvin','Ben','Carl','Danny','Ella','Fang','Gil','Han','Irene','Jane','Ken','Lim','Mark','Ng','Ong','Peng','Quek','Roy','Sam']\nmarks = [48,62,66,26,72,74,72,55,70,80,62,66,'TX',93,65,30,75,58,51]\nt = pd.Series(marks, index = names)",
"Q2: How many students are there in this class?",
"# Answer should return a number.\nlen(t)",
"Q3(a): What is the average score of the test?",
"# t.mean() will return an error message",
"Q3(b): What should you replace 'TX' with? Then, answer Q3(a) again.",
"# Answer should return a number\nt1 = t.replace('TX',np.nan)\nt1.mean()",
"Q4: Who scored the highest mark?",
"# Answer should return a string (name)\nt1.idxmax()",
"Q5: Who failed the test (<50)?",
"# Answer should return a list of names\nt1[t1 < 50].index",
"Q6: What is the percentage of B+ and above (>=75 / number of students who took MST)? Leave your answer correct to 2 d.p.",
"# Answer should return a number correct to 2 d.p.\nx = t1[t1>=75]\nround(100*len(x)/18,2)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sequana/resources
|
coverage/01-virus/virus.ipynb
|
bsd-3-clause
|
[
"sequana_coverage test case example (Virus)\nThis notebook creates the BED file provided in \n- https://github.com/sequana/resources/tree/master/coverage and\n- https://www.synapse.org/#!Synapse:syn10638358/wiki/465309\nWARNING: you need an account on synapse to get the FastQ files.\nIf you just want to test the BED file, download it directly:\nwget https://github.com/sequana/resources/raw/master/coverage/JB409847.bed.bz2\nand jump to the section Using-Sequana-library-to-detect-ROI-in-the-coverage-data\notherwise, first download the FastQ from Synapse, its reference genome and its genbank annotation. Then, map the reads using BWA to get a BAM file. The BAM file is converted to a BED, which is going to be one input file to our analysis. Finally, we use the coverage tool from Sequana project (i) with the standalone (sequana_coverage) and (ii) the Python library to analyse the BED file.\nVersions used:\n- sequana 0.7.0\n- bwa mem 0.7.15\n- bedtools 2.26.0\n- samtools 1.5\n- synapseclient 1.7.2",
"%pylab inline\nmatplotlib.rcParams['figure.figsize'] = [10,7]",
"Download the genbank and genome reference\nMethod1: use sequana_coverage to download from ENA website\nhttp://www.ebi.ac.uk/ena/data/view/JB409847",
"!sequana_coverage --download-reference JB409847 --download-genbank JB409847",
"Download the FastQ",
"# to install synapseclient, use \n# pip install synapseclient\nimport synapseclient\nl = synapseclient.login()\n_ = l.get(\"syn10638367\", downloadLocation=\".\", ifcollision=\"overwrite.local\")",
"Map the reads",
"!sequana_mapping --file1 JB409847_R1_clean.fastq.gz --reference JB409847.fa",
"Convert the BAM to BED",
"!bedtools genomecov -d -ibam JB409847.fa.sorted.bam> JB409847.bed",
"Using Sequana library to detect ROI in the coverage data",
"from sequana import GenomeCov\nb = GenomeCov(\"JB409847.bed\", \"JB409847.gbk\")\nchromosome = b.chr_list[0]\nchromosome.running_median(4001, circular=True)\nchromosome.compute_zscore(k=2)\n# you can replace the 2 previous lines by since version 0.6.4\n# chromosome.run(4001, k=2, circular=True)\n\nchromosome.plot_coverage()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
statsmodels/statsmodels.github.io
|
v0.13.2/examples/notebooks/generated/stationarity_detrending_adf_kpss.ipynb
|
bsd-3-clause
|
[
"Stationarity and detrending (ADF/KPSS)\nStationarity means that the statistical properties of a time series i.e. mean, variance and covariance do not change over time. Many statistical models require the series to be stationary to make effective and precise predictions.\nTwo statistical tests would be used to check the stationarity of a time series – Augmented Dickey Fuller (“ADF”) test and Kwiatkowski-Phillips-Schmidt-Shin (“KPSS”) test. A method to convert a non-stationary time series into stationary series shall also be used.\nThis first cell imports standard packages and sets plots to appear inline.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm",
"Sunspots dataset is used. It contains yearly (1700-2008) data on sunspots from the National Geophysical Data Center.",
"sunspots = sm.datasets.sunspots.load_pandas().data",
"Some preprocessing is carried out on the data. The \"YEAR\" column is used in creating index.",
"sunspots.index = pd.Index(sm.tsa.datetools.dates_from_range(\"1700\", \"2008\"))\ndel sunspots[\"YEAR\"]",
"The data is plotted now.",
"sunspots.plot(figsize=(12, 8))",
"ADF test\nADF test is used to determine the presence of unit root in the series, and hence helps in understand if the series is stationary or not. The null and alternate hypothesis of this test are:\nNull Hypothesis: The series has a unit root.\nAlternate Hypothesis: The series has no unit root.\nIf the null hypothesis in failed to be rejected, this test may provide evidence that the series is non-stationary.\nA function is created to carry out the ADF test on a time series.",
"from statsmodels.tsa.stattools import adfuller\n\n\ndef adf_test(timeseries):\n print(\"Results of Dickey-Fuller Test:\")\n dftest = adfuller(timeseries, autolag=\"AIC\")\n dfoutput = pd.Series(\n dftest[0:4],\n index=[\n \"Test Statistic\",\n \"p-value\",\n \"#Lags Used\",\n \"Number of Observations Used\",\n ],\n )\n for key, value in dftest[4].items():\n dfoutput[\"Critical Value (%s)\" % key] = value\n print(dfoutput)",
"KPSS test\nKPSS is another test for checking the stationarity of a time series. The null and alternate hypothesis for the KPSS test are opposite that of the ADF test.\nNull Hypothesis: The process is trend stationary.\nAlternate Hypothesis: The series has a unit root (series is not stationary).\nA function is created to carry out the KPSS test on a time series.",
"from statsmodels.tsa.stattools import kpss\n\n\ndef kpss_test(timeseries):\n print(\"Results of KPSS Test:\")\n kpsstest = kpss(timeseries, regression=\"c\", nlags=\"auto\")\n kpss_output = pd.Series(\n kpsstest[0:3], index=[\"Test Statistic\", \"p-value\", \"Lags Used\"]\n )\n for key, value in kpsstest[3].items():\n kpss_output[\"Critical Value (%s)\" % key] = value\n print(kpss_output)",
"The ADF tests gives the following results – test statistic, p value and the critical value at 1%, 5% , and 10% confidence intervals.\nADF test is now applied on the data.",
"adf_test(sunspots[\"SUNACTIVITY\"])",
"Based upon the significance level of 0.05 and the p-value of ADF test, the null hypothesis can not be rejected. Hence, the series is non-stationary.\nThe KPSS tests gives the following results – test statistic, p value and the critical value at 1%, 5% , and 10% confidence intervals.\nKPSS test is now applied on the data.",
"kpss_test(sunspots[\"SUNACTIVITY\"])",
"Based upon the significance level of 0.05 and the p-value of KPSS test, there is evidence for rejecting the null hypothesis in favor of the alternative. Hence, the series is non-stationary as per the KPSS test.\nIt is always better to apply both the tests, so that it can be ensured that the series is truly stationary. Possible outcomes of applying these stationary tests are as follows:\nCase 1: Both tests conclude that the series is not stationary - The series is not stationary\nCase 2: Both tests conclude that the series is stationary - The series is stationary\nCase 3: KPSS indicates stationarity and ADF indicates non-stationarity - The series is trend stationary. Trend needs to be removed to make series strict stationary. The detrended series is checked for stationarity.\nCase 4: KPSS indicates non-stationarity and ADF indicates stationarity - The series is difference stationary. Differencing is to be used to make series stationary. The differenced series is checked for stationarity. \nHere, due to the difference in the results from ADF test and KPSS test, it can be inferred that the series is trend stationary and not strict stationary. The series can be detrended by differencing or by model fitting.\nDetrending by Differencing\nIt is one of the simplest methods for detrending a time series. A new series is constructed where the value at the current time step is calculated as the difference between the original observation and the observation at the previous time step.\nDifferencing is applied on the data and the result is plotted.",
"sunspots[\"SUNACTIVITY_diff\"] = sunspots[\"SUNACTIVITY\"] - sunspots[\"SUNACTIVITY\"].shift(\n 1\n)\nsunspots[\"SUNACTIVITY_diff\"].dropna().plot(figsize=(12, 8))",
"ADF test is now applied on these detrended values and stationarity is checked.",
"adf_test(sunspots[\"SUNACTIVITY_diff\"].dropna())",
"Based upon the p-value of ADF test, there is evidence for rejecting the null hypothesis in favor of the alternative. Hence, the series is strict stationary now.\nKPSS test is now applied on these detrended values and stationarity is checked.",
"kpss_test(sunspots[\"SUNACTIVITY_diff\"].dropna())",
"Based upon the p-value of KPSS test, the null hypothesis can not be rejected. Hence, the series is stationary.\nConclusion\nTwo tests for checking the stationarity of a time series are used, namely ADF test and KPSS test. Detrending is carried out by using differencing. Trend stationary time series is converted into strict stationary time series. Requisite forecasting model can now be applied on a stationary time series data."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
slowvak/MachineLearningForMedicalImages
|
notebooks/Module 1.ipynb
|
mit
|
[
"Module 1 - Data Load / Display / Normalization\nIn this module you will learn how to load a nifti image utilizing nibabel and create datasets that can be used with machine learning algorithms. The basic features we will consider are intensity based and originate from multiple acquisition types. \nStep 1: Load basic python libraries",
"%matplotlib inline\nimport os \nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nimport pandas as pd\nimport csv\nfrom pandas.tools.plotting import scatter_matrix\nfrom sklearn import preprocessing\nimport nibabel as nib",
"Step 2: Load the three types of images available.\n\nT1w pre-contrast\nFLAIR \nT1w post-contrast \n\nThe goal is to create a 4D image that contains all four 3D volumes we will use in our example",
"CurrentDir= os.getcwd()\n\n# Print current directory\nprint (CurrentDir)\n# Get parent direcotry \nprint(os.path.abspath(os.path.join(CurrentDir, os.pardir)))\n\n# Create the file paths. The images are contained in a subfolder called Data. \nPostName = os.path.abspath(os.path.join(os.path.abspath(os.path.join(CurrentDir, os.pardir)), \"Data\", 'POST.nii.gz') )\nPreName = os.path.abspath(os.path.join(os.path.abspath(os.path.join(CurrentDir, os.pardir)), \"Data\", 'PRE.nii.gz') )\nFLAIRName = os.path.abspath(os.path.join(os.path.abspath(os.path.join(CurrentDir, os.pardir)), \"Data\", 'FLAIR.nii.gz') )\nGroundTruth= os.path.abspath(os.path.join(os.path.abspath(os.path.join(CurrentDir, os.pardir)), \"Data\", 'GroundTruth.nii.gz') )\n\n# read Pre in--we assume that all images are same x,y dims\nPre = nib.load(PreName)\n# Pre is a class containing the image data among other information \nPre=Pre.get_data()\nxdim = np.shape(Pre)[0]\nydim = np.shape(Pre)[1]\nzdim = np.shape(Pre)[2]\n# Printing the dimensions of an image \nprint ('Dimensions')\nprint (xdim,ydim,zdim)\n# Normalize to mean\nPre=Pre/np.mean(Pre[np.nonzero(Pre)])\n# Post\nPost = nib.load(PostName)\nPost=Post.get_data()\n# Normalize to mean\nPost=Post/np.mean(Post[np.nonzero(Post)])\nFlair = nib.load(FLAIRName)\nFlair=Flair.get_data()\n# Normalize FLAIR \nFlair=Flair/np.mean(Flair[np.nonzero(Flair)])\nprint (\"Data Loaded\")",
"Create traing set\nWe assume the following labels. \n\nEnhancing Tumor = 4\nEdema = 2\nWM and CSF and GM=1\nBackground (air) = 0",
"# Load Ground Truth\nGroundTrutha = nib.load(GroundTruth)\nGroundTruth=GroundTrutha.get_data()\nprint (\"Data Loaded\")",
"Plot the images",
"def display_overlay(Image1, Image2):\n \"\"\"\n Function: Overlays Image2 over Image1\n Image 1: 2D image\n Image 2: 2D Image\n\n Requires numpy, matplotlib\n \"\"\"\n Image1=np.rot90(Image1,3)\n Image2=np.rot90(Image2,3)\n Image2 = np.ma.masked_where(Image2 == 0, Image2)\n plt.imshow(Image1, cmap=plt.cm.gray)\n plt.imshow(Image2, cmap=plt.cm.brg, alpha=.7, vmin=.7, vmax=5, interpolation='nearest')\n plt.axis('off')\n plt.show()\n\nf, (ax1,ax2,ax3,ax4)=plt.subplots(1,4)\nax1.imshow(np.rot90(Post[:, :, 55,],3), cmap=plt.cm.gray)\nax1.axis('off')\nax2.imshow(np.rot90(Flair[:, :, 55,],3), cmap=plt.cm.gray)\nax2.axis('off')\nax3.imshow(np.rot90(Pre[:, :, 55,],3), cmap=plt.cm.gray)\nax3.axis('off')\nax4.imshow(np.rot90(GroundTruth[:, :, 55,],3), cmap=plt.cm.gray)\nax4.axis('off')\n\nplt.show()\n\ndisplay_overlay(Post[:, :, 55,], GroundTruth[:,:,55]==4) \ndisplay_overlay(Flair[:, :, 55,], GroundTruth[:,:,55]==2) \ndisplay_overlay(Pre[:, :, 55,], GroundTruth[:,:,55]==1) ",
"Create dataset",
"# Create classes\n# Tissue =GM+CSG+WM\nClassTissuePost=(Post[np.nonzero(GroundTruth==1)])\nClassTissuePre=(Pre[np.nonzero(GroundTruth==1)])\nClassTissueFlair=(Flair[np.nonzero(GroundTruth==1)])\n# Enhancing Tumor \nClassTumorPost=(Post[np.nonzero(GroundTruth==4)])\nClassTumorPre=(Pre[np.nonzero(GroundTruth==4)])\nClassTumorFlair=(Flair[np.nonzero(GroundTruth==4)])\n# Edema \nClassEdemaPost=(Post[np.nonzero(GroundTruth==2)])\nClassEdemaPre=(Pre[np.nonzero(GroundTruth==2)])\nClassEdemaFlair=(Flair[np.nonzero(GroundTruth==2)])\n# We only select 1000 points for demosntration purposes\nIND=np.random.randint(np.shape(ClassTumorPre)[0], size=5000)\nClassTissuePost=ClassTissuePost[IND]\nClassTissuePre=ClassTissuePre[IND]\nClassTissueFlair=ClassTissueFlair[IND]\nClassTumorPost=ClassTumorPost[IND]\nClassTumorPre=ClassTumorPre[IND]\nClassTumorFlair=ClassTumorFlair[IND]\nClassEdemaPost=ClassEdemaPost[IND]\nClassEdemaPre=ClassEdemaPre[IND]\nClassEdemaFlair=ClassEdemaFlair[IND]\nprint (\"Saving the data to a pandas dataframe and subsequently to a csv\")\n# Create a dictionary containing the classes\ndatasetcomplete={\"ClassTissuePost\": ClassTissuePost, \"ClassTissuePre\": ClassTissuePre, \"ClassTissueFlair\": ClassTissueFlair, \"ClassTumorPost\": ClassTumorPost, \"ClassTumorPre\": ClassTumorPre, \"ClassTumorFlair\": ClassTumorFlair, \"ClassEdemaPost\": ClassEdemaPost, \"ClassEdemaPre\": ClassEdemaPre, \"ClassEdemaFlair\": ClassEdemaFlair}\ndatapd=pd.DataFrame.from_dict(datasetcomplete,orient=\"index\")\n# print (datapd)\ndatapd=datapd.transpose()\n# datapd=pd.DataFrame(dict([ (k,Series(v)) for k,v in datasetcomplete.iteritems() ]))\ndatapd.to_csv(\"DataExample.csv\",index=False)",
"Create some scatter plots",
"# Display Tumor vs NAWM\nIND=np.random.randint(1000, size=100)\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nax.scatter(ClassTissuePost[IND,], ClassTissuePre[IND,], ClassTissueFlair[IND,])\nax.scatter(ClassTumorPost[IND,], ClassTumorPre[IND,], ClassTumorFlair[IND,], c='r', marker='^')\nax.set_xlabel('post')\nax.set_ylabel('pret')\nax.set_zlabel('FLAIR')\nplt.show()\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.scatter(ClassTissuePost[IND,], ClassTissuePre[IND,])\nax.scatter(ClassTumorPost[IND,], ClassTumorPre[IND,], c='r', marker='^')\nax.set_xlabel('post')\nax.set_ylabel('pret')\nplt.show()",
"Describe the data",
"# descriptions\nprint(datapd.describe())"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
xodbox/StudendLoans
|
01 StudentLoans - Cleaning.ipynb
|
mit
|
[
"STUDENT LOANS CHALLENGE\nCOURSERA ML CHALLENGE\n<br>\nThis notebook was created to document the steps taken to solve the Predict Students’ Ability to Repay Educational Loans posted on the Data Science Community in Coursera.\nThe data is aviable at:\nhttps://ed-public-download.app.cloud.gov/downloads/Most-Recent-Cohorts-All-Data-Elements.csv.\nDocumentation for the data is available at https://collegescorecard.ed.gov/data/documentation/. There is a data dictionary at https://collegescorecard.ed.gov/assets/CollegeScorecardDataDictionary.xlsx.\nWORKFLOW\nThe Workflow suggested in https://www.kaggle.com/startupsci/titanic-data-science-solutions is going to be followed. The Workflow is the following:\n Question or problem definition.\n Acquire training and testing data.\n Wrangle, prepare, cleanse the data.\n Analyze, identify patterns, and explore the data.\n Model, predict and solve the problem.\n Visualize, report, and present the problem solving steps and final solution.\n Supply the results.\n\nThe workflow indicates general sequence of how each stage may follow the other. However, there are use cases with exceptions:\n\n We may combine mulitple workflow stages. We may analyze by visualizing data.\n Perform a stage earlier than indicated. We may analyze data before and after wrangling.\n Perform a stage multiple times in our workflow. Visualize stage may be used multiple times.\n Drop a stage altogether. We may not need supply stage to productize or service enable our dataset for a competition.\n\nProblem Definition\nTest to see if a set of institutional features can be used to predict student otucomes, in particular debt repayment. This solution is intended to try to explore to what extent instututional characteristics as well as certain demographic factors can indicate or predict debt repayment.\nThe (US) “College Scorecard” (the data set) includes national data on the earnings of former college graduates and new data on student debt.\nImport Libraries\nFirst import the libraries that are going to be used:",
"# data analysis and manipulation\nimport numpy as np\nimport pandas as pd\nnp.set_printoptions(threshold=1000)\n\n# visualization\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\n#machine learning\nimport tensorflow as tf\n\n#Regular expression\nimport re",
"Acqure Data\nThe data is acquired using pandas (I renamed the file to CollegeScorecardData.csv)",
"all_data = pd.read_csv('datasets/CollegeScorecardData.csv')",
"Analyze Data\nFirst, let's see a little bit of the data",
"all_data.head()",
"Find information about the features\nLet's find more about the data",
"all_data.info()",
"There are 7703 examples and 1743 features.\nThere are 443 float features that may be numeric, 13 integer features that may be categorical, and 1287 features that are strings, but may be numbers but data was not entered correctly (for example, if there was not data for a given feature, someone could have written \"blank\"). Given the high number of non numerical features, we need to explore them more. Luckly, there is a dictionary provided with the data, so we can explore it a little bit to learn about the data (The original file was converted do CSV)",
"data_dict = pd.read_csv('datasets/CollegeScorecardDataDictionary.csv')\n\ndata_dict.head()\n\ndata_dict.tail()\n\ndata_dict.info()",
"There are 1975 entries, but the column NAME OF DATA ELEMENT has only 1734 not nut elements, so something is up. Let's try to explore the dict a little bit more",
"data_dict[5:10]",
"Nothing suspicius here, lets try again",
"data_dict[10:20]",
"Aha! It seems that the feature at index 15 is categorical, and that's why the rows that follow it don't have a value under NAME OF DATA ELEMENT. Just for now, let's get rid of those NAN rows.",
"data_dict_no_nan_names = data_dict.dropna(subset=['NAME OF DATA ELEMENT'])\ndata_dict_no_nan_names[10:20]",
"Lets get the info of the new dict",
"data_dict_no_nan_names.info()",
"We are interested primarly in the NAME OF DATA ELEMENT, VARIABLE NAME and API data type. They seem complete. Let's see howe many data types there are",
"data_dict_no_nan_names['API data type'].unique()",
"Let's find out how many features have each data type",
"data_dict_no_nan_names['API data type'].value_counts()",
"So in reality, there are 1206 float features, 521 integers, and 7 string features. (For now we assume that the autocomplete type is string). This numbers differ a lot from our previus analisys, in which we had 443 float features, 13 integer features and 1287 features that are strings.\nAlso, we cannot asume that all features of type integer are categorical, for example the ZIP code feature is integer but is not a categorical feature.\nLet's find more about the autocomplete features:",
"data_dict_no_nan_names[data_dict_no_nan_names['API data type'] == 'autocomplete']",
"We can see that these autocomplete features can be treated as strings. \nDelete features that have all their values NaN",
"all_data_no_na_columns = all_data.dropna(axis=1, how='all')",
"Delete features that are meaningless\nThere are features that are meaningless for the problem we are trying to solve. We need to drop these features, but we need a criterion to eliminate them. The criterion that we are going to employ is to eliminate the features that are unique for every entry and don't add information to the problem, for example if we have a unique ID for every institution, this ID doesn't add information to the problem. \nAlso, we need to take in account that there area features that may be unique for every entry, but DOES add relevant information. For example, the tuition fees may be unique and add information. \nLet's find the ratio of the number of unique values over number of examples:",
"# Create a list to save the features that are above a certain threshold\nfeatures_with_high_ratio = []\n# Create a list to save the features in all_data but not in the dict\nfeatures_not_in_dict = []\n\n#Calculate the ratio\nfor feature in all_data_no_na_columns.columns.values: \n # Get the row in the dict wich have VARIABLE NAME == feature\n row_in_dict = data_dict_no_nan_names[data_dict_no_nan_names['VARIABLE NAME'] == feature]\n # Get the data type of the row\n data_type_series = row_in_dict['API data type']\n \n #Check if exists in the dict\n if data_type_series.size > 0:\n # Get the data type\n data_type = data_type_series.values[0]\n # float features (numeric features) are not taken in account\n if data_type == 'integer' or data_type == 'string' or data_type == 'autocomplete':\n column = all_data_no_na_columns[feature]\n column_no_na = column.dropna()\n r = column_no_na.unique().size / column_no_na.size\n if r > 0.8:\n features_with_high_ratio.append(feature)\n print(str(feature) + \": \" + str(r))\n #The feature is not in the dict\n else:\n features_not_in_dict.append(feature)\n\nprint (\"\\nFeatures in data but not in the dictionary:\" + str(features_not_in_dict))",
"So there are some features in the data that are not explained in the dictionary. Tha is not necessarly an inconvenience, so we won't worry abot this right now.\nLets find what those NTP4 features are about",
"npt4_pub = data_dict_no_nan_names['VARIABLE NAME'] == 'NPT4_PUB'\nnpt41_pub = data_dict_no_nan_names['VARIABLE NAME'] == 'NPT41_PUB'\nnpt42_pub = data_dict_no_nan_names['VARIABLE NAME'] == 'NPT42_PUB'\ndata_dict_no_nan_names[npt4_pub | npt41_pub | npt42_pub ]",
"So those NTP4 features are about Average Net prices, so they are defenetly numeric features, and it makes sense to keep them.\nLet's run our previous analysis again with out those features so we can have a cleaner visualization as we lower the threshold",
"# Create a list to save the features that are above a certain threshold\nfeatures_with_high_ratio = []\n# Create a list to save the features in all_data but not in the dict\nfeatures_not_in_dict = []\n\n#Calculate the ratio\nfor feature in all_data_no_na_columns.columns.values: \n # Get the row in the dict wich have VARIABLE NAME == feature\n row_in_dict = data_dict_no_nan_names[data_dict_no_nan_names['VARIABLE NAME'] == feature]\n # Get the data type of the row\n data_type_series = row_in_dict['API data type']\n \n #Check if exists in the dict\n if data_type_series.size > 0:\n # Get the data type\n data_type = data_type_series.values[0]\n # float features (numeric features) are not taken in account\n if (data_type == 'integer' or data_type == 'string' or data_type == 'autocomplete') \\\n and feature[:4] != 'NPT4':\n column = all_data_no_na_columns[feature]\n column_no_na = column.dropna()\n r = column_no_na.unique().size / column_no_na.size\n if r > 0.5:\n features_with_high_ratio.append(feature)\n print(str(feature) + \": \" + str(r))\nprint(features_with_high_ratio)",
"Let's see what are these features about:",
"high_ratio_features = pd.DataFrame()\nfor feature in features_with_high_ratio:\n high_ratio_features = high_ratio_features.append(data_dict_no_nan_names[data_dict_no_nan_names['VARIABLE NAME'] == feature])\nhigh_ratio_features",
"So UNITID, OPEID, OPEID6, INSTNM, INSTURL, NPCURL and ALIAS are features that have to do with the identity of the institution, so they don't add relevant information to the problem, therfore they will be eliminated. (flag_e)\nThe ZIP code could be useful if it is used to group the schools to some sort of category about it's location. We are not going to to this so we are going to eliminate it as well.",
"all_data_no_id_cols = all_data_no_na_columns.drop(['UNITID', 'OPEID', 'OPEID6', 'INSTNM', 'INSTURL', 'NPCURL', 'ALIAS', 'ZIP'], axis = 1)\n\nall_data_no_id_cols.head()",
"Work on the string and autocmplet data",
"data_dict_no_nan_names[data_dict_no_nan_names['API data type'] == 'string']",
"We already dropped INSTURL and NPCURL. Let's explore the STABBR feature",
"all_data_no_id_cols['STABBR']",
"So this feature has to do with the state where the school is located. Let's explore the ACCREDAGENCY feature:",
"all_data_no_id_cols['ACCREDAGENCY']\n\nall_data_no_id_cols['ACCREDAGENCY'].value_counts()",
"Now les's explore the autocomplete data type:",
"data_dict_no_nan_names[data_dict_no_nan_names['API data type'] == 'autocomplete']",
"INSTNM and ALIAS where dropped, let's see the CITY feature:",
"all_data_no_id_cols['CITY']",
"So STABBR, ACCREDAGENCY and CITY are features that we are going to keep, but they need to be transformed to an ordinal (using numbers) representation, since the ML algorithms use numbers and not strings.",
"all_data_no_strings = all_data_no_id_cols.copy()\n\n#STABBR mapping\nvalues = all_data_no_strings['STABBR'].unique()\nmapping = {}\nnumeric_value = 1\nfor value in values:\n mapping[value] = numeric_value\n numeric_value += 1\nall_data_no_strings['STABBR'] = all_data_no_strings['STABBR'].map(mapping)\n\n#ACCREDAGENCY mapping\nvalues = all_data_no_id_cols['ACCREDAGENCY'].unique()\nmapping = {}\nnumeric_value = 1\nfor value in values:\n mapping[value] = numeric_value\n numeric_value += 1\nall_data_no_strings['ACCREDAGENCY'] = all_data_no_strings['ACCREDAGENCY'].map(mapping)\n\n#CITY mapping\nvalues = all_data_no_id_cols['CITY'].unique()\nmapping = {}\nnumeric_value = 1\nfor value in values:\n mapping[value] = numeric_value\n numeric_value += 1\nall_data_no_strings['CITY'] = all_data_no_strings['CITY'].map(mapping)\n\nall_data_no_strings.head()",
"Let's see how our data looks so far",
"all_data_no_strings.info()",
"Although we mapped or eliminated the string features, we still have a lot object (not numeric) data types. Let's work on them\nFetures with object dtype\nLet's try to find a sample of features that should be numbers, but for some reason in the data they are not numbers",
"regex = re.compile('[0-9]+(\\.[0-9]+)?$')\nwords = []\nfor column in all_data_no_strings:\n if all_data_no_strings[column].dtypes == 'object':\n for data in all_data_no_strings[column]:\n if not regex.match(str(data)):\n words.append(data)\n\npd.Series(words).value_counts()",
"We can see that there is a lot of data suppresed for privacy reasons. Also, there are dates, and one of them 12/31/2999 seems to be invalid. Let's go ahead and replace these values with nan, so we will treat it as any nan value. Also, if any column ends having all of its values as Nan, we will delete this column.",
"all_data_replaced_with_nan = all_data_no_strings.replace(to_replace = 'PrivacySuppressed', value = np.nan)\nall_data_replaced_with_nan = all_data_replaced_with_nan.replace(to_replace = '12/31/2999', value = np.nan)\nall_data_replaced_with_nan = all_data_replaced_with_nan.dropna(axis=1, how='all')\n\nall_data_replaced_with_nan.info()",
"Lets find wich features are date features",
"features_with_date = []\nfor column in all_data_replaced_with_nan:\n if all_data_replaced_with_nan[column].dtypes == 'object':\n if all_data_replaced_with_nan[column].str.match('[0-9]{2}/[0-9]{2}/[0-9]{4}').any():\n features_with_date.append(column)\n\nfeatures_with_date\n\ndata_dict_no_nan_names[data_dict_no_nan_names['VARIABLE NAME'] == 'SEPAR_DT_MDN']",
"It seems that SEPAR_DT_MDN don't add valuable information to the problem, so we are going to drop it",
"all_data_no_dates = all_data_replaced_with_nan.drop(['SEPAR_DT_MDN'], axis = 1)",
"Now we will transfore all the object features to numeric",
"all_data_no_objects = all_data_no_dates.copy()\nfor feature in all_data_no_dates:\n if all_data_no_dates[feature].dtypes == 'object':\n #Make all data numeric\n all_data_no_objects[feature] = pd.to_numeric(all_data_no_dates[feature]) \n\nall_data_no_objects.info()",
"Now we have gotten rid of the object dtype\nEliminate features with high number of NaN values\nWe already deleted features with that had all of their value as NaN, but now we will eliminate features with a high percentage of NaN values (more than 90%)",
"high_nan_features = []\nfor feature in all_data_no_objects:\n size = all_data_no_objects[feature].size\n number_of_valid = all_data_no_objects[feature].count()\n number_of_nan = size - number_of_valid\n ratio = number_of_nan / size\n if ratio > 0.9:\n high_nan_features.append(feature)\nprint (len(high_nan_features))\n\nall_data_no_high_nan = all_data_no_objects.drop(high_nan_features, axis = 1)\n\nall_data_no_high_nan.info()",
"Filling missing data\nWe need to fill the mising data. To do this we need to know if the feature is numeric or categorical. Let's use the dictionary to get that info.",
"data_dict[15:25]",
"We can see that after the name of a categorical feature, there is at least one item with value NaN. Let's use this to get a list of categorical features",
"categorical_features = []\nis_null = data_dict['NAME OF DATA ELEMENT'].isnull()\nfor i in range(len(is_null) - 1):\n if not is_null[i] and is_null[i+1]:\n categorical_features.append(data_dict['VARIABLE NAME'][i])",
"To fill the missing data that belongs to a categorical feature, we will use the most common value of the data (mode). To fill the missing data that belongs to a numeric feature, we will use the the average of the data (mean).",
"all_data_no_nan = all_data_no_high_nan.copy()\nfor feature in all_data_no_high_nan:\n if feature in categorical_features:\n mode = all_data_no_high_nan[feature].mode()[0] \n all_data_no_nan[feature] = all_data_no_high_nan[feature].fillna(mode)\n else:\n mean = all_data_no_high_nan[feature].mean()\n all_data_no_nan[feature] = all_data_no_high_nan[feature].fillna(mean)\n\nall_data_no_nan.head()\n\nall_data_no_nan.info()",
"Let's save the data in a file",
"all_data_no_nan.to_csv('datasets/CollegeScorecardDataCleaned.csv', index = False)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |